The most successful marketing failure ever has quietly evolved into a foundational layer of today’s Web, finally made usable by Large Language Models (LLMs). This is the story of how a distributed, machine-readable Web of data became accessible through natural language.
The vision of a Semantic Web—where data is interlinked and machine-interpretable—is now a reality. Projects like the Linked Open Data (LOD) Cloud and Schema.org have turned the Web into a vast, decentralized database.
The machinery is core web technology: HTTP, the Resource Description Framework (RDF), and Uniform Resource Identifiers (URIs). Over 90% of web pages now embed RDF-based metadata, forming a global intelligence collective.
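To make that concrete, here is a minimal Python sketch, assuming rdflib 6+ (which bundles JSON-LD support), of how a Schema.org JSON-LD island embedded in an ordinary page becomes machine-readable triples. The HTML snippet and property values are purely illustrative.

```python
import re

from rdflib import Graph

# Illustrative HTML: the kind of Schema.org JSON-LD island that
# publishers embed in ordinary pages (values are made up).
HTML = """
<html><head>
<script type="application/ld+json">
{
  "@context": {"@vocab": "https://schema.org/"},
  "@type": "Article",
  "headline": "Linked Data, Finally Usable",
  "author": {"@type": "Person", "name": "Jane Doe"}
}
</script>
</head><body>...</body></html>
"""

# Pull each JSON-LD script island out of the markup.
islands = re.findall(
    r'<script type="application/ld\+json">(.*?)</script>', HTML, re.DOTALL
)

# Parse the embedded metadata into an RDF graph of triples.
g = Graph()
for island in islands:
    g.parse(data=island, format="json-ld")

# Every statement is now a machine-readable (subject, predicate, object) fact.
for subject, predicate, obj in g:
    print(subject, predicate, obj)
```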
The Web has become a machine-interpretable knowledge system, realizing the original vision of a **Giant Global [Entity Relationship] Graph (GGG)**.
Despite most websites already publishing metadata in formats like JSON-LD, RDFa, or Plain Old Semantic HTML (POSH), adoption was slowed by three persistent obstacles: identification, format proliferation, and visualization. Each is taken up in turn below.
The first, uniquely identifying concepts, was solved from the start: HTTP URLs provide a free, global-scale identification system, and every identifier doubles as a lookup key, since dereferencing it returns data about the thing it names.
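A short sketch of that "identifier doubles as lookup" idea, assuming network access to DBpedia's public Linked Data service; the choice of URI is just an example:

```python
import requests
from rdflib import Graph

# An HTTP URI that names a concept (here, the city of Berlin in DBpedia).
uri = "http://dbpedia.org/resource/Berlin"

# Content negotiation: ask the server for RDF (Turtle) rather than HTML.
resp = requests.get(uri, headers={"Accept": "text/turtle"}, timeout=30)
resp.raise_for_status()

# Dereferencing the identifier returns machine-readable facts
# about the thing it names.
g = Graph()
g.parse(data=resp.text, format="turtle")
print(f"{len(g)} triples describe <{uri}>")
```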
RDF is a language for meaning, not a single format. Its flexibility (JSON-LD, RDFa, Turtle) caused confusion but offered powerful adaptability.
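A quick illustration of that adaptability with rdflib (the sample graph is made up): the same data, authored once in Turtle, re-expressed in other RDF syntaxes with no loss of meaning.

```python
from rdflib import Graph

# One small graph, authored in Turtle.
turtle_doc = """
@prefix schema: <https://schema.org/> .
<https://example.org/article/1>
    a schema:Article ;
    schema:headline "Linked Data, Finally Usable" .
"""

g = Graph()
g.parse(data=turtle_doc, format="turtle")

# The same triples, serialized into two other RDF syntaxes.
# Meaning is preserved; only the surface format changes.
print(g.serialize(format="json-ld"))
print(g.serialize(format="nt"))
```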
Effective visualization requires intelligent, metadata-driven navigation—not just pretty pictures. Ontologies act like a GPS for data exploration.
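A toy sketch of the GPS idea in Python with rdflib; the ontology fragment, namespace, and class names are all hypothetical. A navigation layer consults rdfs:subClassOf and rdfs:domain links to decide where a user can go next from a given class:

```python
from rdflib import Graph, URIRef
from rdflib.namespace import RDFS

# A hypothetical ontology fragment a UI could consult for navigation.
ontology = """
@prefix ex: <https://example.org/onto#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Employee rdfs:subClassOf ex:Person .
ex:Person   rdfs:subClassOf ex:Agent .
ex:worksFor rdfs:domain ex:Employee ; rdfs:range ex:Organization .
"""

g = Graph()
g.parse(data=ontology, format="turtle")

start = URIRef("https://example.org/onto#Employee")

# "GPS" step 1: where can we generalize to? Walk subClassOf upward
# (transitive_objects also yields the start node itself).
for ancestor in g.transitive_objects(start, RDFS.subClassOf):
    print("broader:", ancestor)

# "GPS" step 2: which properties depart from here? Check rdfs:domain.
for prop in g.subjects(RDFS.domain, start):
    print("outgoing link type:", prop)
```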
The original promise has finally arrived because UI/UX caught up: AI and tools like OPAL unlock data that was always there.
SPARQL endpoints powered by Virtuoso Data Spaces from OpenLink Software, DBpedia's public endpoint among them, handle large-scale queries and return results as hyperlinks that invite further exploration.
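Here is one way to query such an endpoint from Python with SPARQLWrapper, using DBpedia's public Virtuoso-hosted endpoint; the query shape and limit are illustrative.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# DBpedia's public endpoint runs on Virtuoso; any standards-compliant
# SPARQL endpoint works the same way.
endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?city ?population WHERE {
      ?city a dbo:City ;
            dbo:country dbr:Germany ;
            dbo:populationTotal ?population .
    }
    ORDER BY DESC(?population)
    LIMIT 5
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    # Each ?city value is itself a dereferenceable hyperlink,
    # so every answer is a doorway to further exploration.
    print(row["city"]["value"], row["population"]["value"])
```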
An OPAL + Model Context Protocol (MCP) server loosely couples LLMs with structured data across file systems, relational databases, and knowledge graphs.
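A minimal sketch of that loose-coupling pattern, not OPAL's actual implementation: an MCP server built with the Python MCP SDK's FastMCP, exposing a single SPARQL tool (the server name and tool signature are hypothetical) that any MCP-aware LLM client can discover and call.

```python
from mcp.server.fastmcp import FastMCP
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical server name; the pattern, not the branding, is the point.
mcp = FastMCP("sparql-bridge")

@mcp.tool()
def run_sparql(endpoint_url: str, query: str) -> list[dict]:
    """Run a read-only SPARQL SELECT query and return its bindings."""
    client = SPARQLWrapper(endpoint_url)
    client.setQuery(query)
    client.setReturnFormat(JSON)
    results = client.query().convert()
    return results["results"]["bindings"]

if __name__ == "__main__":
    # The LLM stays loosely coupled: it discovers and invokes this
    # tool over the protocol instead of linking to any database.
    mcp.run()
```

The design point is the indirection: the model never holds a database connection, it just calls a described tool, so file systems, relational stores, and knowledge graphs can all sit behind the same protocol.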
The Semantic Web enables powerful new business model flywheels for data creators and consumers.
Made for humans, powered by machines, experienced through hyperlinks, not hype.