Visualising the symphony of agentic AI architecture
A framework-agnostic data model and interactive visualization mapping every layer of a production agentic AI system.
If you've spent any time building with agentic AI, you know the feeling: the architecture diagrams in your head are getting unwieldy. There are prompts feeding into models, models running on inference infrastructure, agents orchestrating tool calls through MCP, safety guardrails wrapping everything, storage layers capturing outputs, context stores feeding back into the loop -- and nobody seems to have a single, canonical way to visualize the whole thing. I got tired of drawing partial diagrams on whiteboards and decided to build a proper data model that captures every layer at once.
What the project actually is
Agentic AI Architecture Visualisation is a framework-agnostic data model that maps every moving piece of a production agentic AI system. The core philosophy is deceptively simple: define all the architectural relationships once in a canonical JSON data model, validate it against a JSON Schema, and then render it in whatever visualization library suits your workflow -- D3.js, Mermaid, React Flow, Cytoscape, you name it.
The repo (danielrosehill/Agentic-AI-Architecture-Visualisation on GitHub) is structured around three core artifacts. The data/architecture.json file is the single source of truth, containing every architectural layer with its position, color, and child nodes, plus directed edges between layers with labels and styles. The data/schema.json provides JSON Schema validation so the data model stays consistent as it evolves. And docs/architecture.md offers narrative documentation explaining what each piece does and how they all connect.
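To make the "layers plus directed edges" idea concrete, here is a rough sketch of the kind of structure a file like architecture.json might hold. The field names and values below are illustrative assumptions on my part, not the repo's exact schema; data/schema.json is the authoritative shape.

```javascript
// Illustrative sketch only -- field names (layers, edges, children, etc.)
// are assumptions, not the repo's exact schema.
const architecture = {
  layers: [
    {
      id: "prompts",
      label: "Prompt Layer",
      position: { x: 0, y: 0 },
      color: "#4f46e5",
      children: [
        { id: "user-prompts", label: "User Prompts" },
        { id: "system-prompts", label: "System Prompts" },
      ],
    },
    {
      id: "models",
      label: "Model Layer",
      position: { x: 0, y: 1 },
      color: "#0ea5e9",
      children: [{ id: "commercial-apis", label: "Commercial APIs" }],
    },
  ],
  // Directed edges between layers, with labels and styles.
  edges: [
    { source: "prompts", target: "models", label: "feeds into", style: "solid" },
  ],
};

console.log(architecture.layers.length, architecture.edges.length);
```

The point of a shape like this is that any renderer can consume it: the layers carry everything needed for layout (position, color, children), and the edges carry everything needed for drawing connections (endpoints, label, style).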
Walking through the architecture
The model traces the full lifecycle of an agentic system from top to bottom. It starts with the prompt layer -- user prompts, system prompts, and vendor-level instructions that shape behavior. Those feed into the model layer, which encompasses commercial APIs, open-source models, and fine-tuned variants. Models sit atop inference infrastructure that might be cloud-hosted, self-hosted, on-prem, or running on edge devices.
Below inference sits the agent layer itself, where safety guardrails and observability plug in. Agents connect downward through MCP (Model Context Protocol) for tool use, pass through a human-in-the-loop approval step for actions that need oversight, and then reach the integrations layer -- your APIs, databases, and external services. Storage captures conversations and outputs, feeding into a context store that uses RAG and memory to loop information back to agents. That feedback loop is, in my opinion, the most important part of the whole diagram and the thing that makes agentic systems fundamentally different from stateless prompt-response interactions.
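The top-to-bottom flow plus that feedback loop can be expressed as a small directed graph, and the claim that it is genuinely a loop can be checked mechanically. The node ids below are my own shorthand, not the repo's exact identifiers, and the cycle check is just a plain reachability walk:

```javascript
// The lifecycle described above as directed edges.
// Node ids are illustrative shorthand, not the repo's exact identifiers.
const edges = [
  ["prompts", "models"],
  ["models", "inference"],
  ["inference", "agents"],
  ["agents", "mcp"],
  ["mcp", "hitl"],
  ["hitl", "integrations"],
  ["integrations", "storage"],
  ["storage", "context-store"],
  ["context-store", "agents"], // the feedback loop back into the agent layer
];

// Follow outgoing edges from `start`; return true if we can reach `start` again.
function inCycle(start, edgeList) {
  const out = new Map();
  for (const [s, t] of edgeList) {
    if (!out.has(s)) out.set(s, []);
    out.get(s).push(t);
  }
  const seen = new Set();
  const stack = [...(out.get(start) ?? [])];
  while (stack.length) {
    const node = stack.pop();
    if (node === start) return true;
    if (seen.has(node)) continue;
    seen.add(node);
    stack.push(...(out.get(node) ?? []));
  }
  return false;
}

console.log(inCycle("agents", edges)); // true -- agents sit on the feedback loop
console.log(inCycle("prompts", edges)); // false -- the prompt layer is never revisited
```

Remove the context-store edge and the graph degenerates into a straight pipeline, which is exactly the stateless prompt-response shape the feedback loop distinguishes agentic systems from.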
The D3.js reference implementation
The repo ships with a reference implementation in D3.js that loads architecture.json at runtime and renders an interactive SVG with zoom, pan, and tooltips. You can spin it up with npx serve . and open it in the browser. It's deliberately minimal -- the whole point is that the data model is the star, and visualizations are interchangeable skins. The D3 implementation is just proof that the data model works and can drive a real, interactive output.
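The "interchangeable skins" idea mostly comes down to a small adapter per library. As a sketch, here is one way a renderer might flatten the layered model into the flat node/link arrays that D3 force layouts (and libraries like Cytoscape) expect; the input field names are my assumptions about architecture.json, not its confirmed schema:

```javascript
// Hypothetical adapter: flatten a layered model into flat node/link arrays.
// Input field names (layers, children, edges) are assumptions about the schema.
function toGraph(model) {
  const nodes = [];
  const links = [];
  for (const layer of model.layers) {
    nodes.push({ id: layer.id, label: layer.label, group: "layer" });
    for (const child of layer.children ?? []) {
      nodes.push({ id: child.id, label: child.label, group: layer.id });
      links.push({ source: layer.id, target: child.id, kind: "contains" });
    }
  }
  for (const edge of model.edges ?? []) {
    links.push({ source: edge.source, target: edge.target, kind: "flow", label: edge.label });
  }
  return { nodes, links };
}

const { nodes, links } = toGraph({
  layers: [
    { id: "agents", label: "Agent Layer", children: [{ id: "guardrails", label: "Guardrails" }] },
    { id: "storage", label: "Storage", children: [] },
  ],
  edges: [{ source: "agents", target: "storage", label: "writes outputs" }],
});
console.log(nodes.length, links.length); // 3 2
```

A Mermaid or React Flow skin would be a different adapter over the same input, which is what keeps the data model, rather than any one renderer, the source of truth.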
Why I think this matters
Most agentic AI tutorials focus on one slice of the stack -- the orchestration layer, or the RAG pipeline, or the tool-calling mechanism. But production systems involve all of these simultaneously, and understanding how they connect is critical for debugging, scaling, and explaining your architecture to colleagues, clients, or contributors. I've found that having a shared visual vocabulary makes conversations about system design dramatically more productive.
There's also a pedagogical dimension. If you're trying to learn agentic AI architecture, staring at a single framework's documentation gives you a keyhole view. This data model tries to be framework-agnostic precisely because the architectural patterns are bigger than any one tool. Whether you're using LangGraph, CrewAI, AutoGen, or just raw Claude Code subagents like I often do, the layers are the same -- you just implement them differently.
Contributing and what's next
The project is open source under MIT and I'm actively looking for contributions, especially new visualization implementations. If someone wants to build a Mermaid renderer, a React Flow version, or a Cytoscape layout, the data model is right there waiting. I'd also welcome corrections or additions to the architecture model itself -- this is a living document and the agentic AI landscape moves fast. You can validate changes against the schema with npx ajv validate -s data/schema.json -d data/architecture.json to make sure everything stays consistent. Check out the full repo on GitHub.
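One thing JSON Schema alone can't enforce is referential integrity: a schema validates field shapes, but not that every edge endpoint names a node that actually exists. A contributor could pair the ajv command above with a small check like this (field names, again, are my assumptions about the model's shape):

```javascript
// Hypothetical companion check to schema validation: find edges whose
// source or target id is not declared anywhere in the layer tree.
// Field names (layers, children, edges) are assumed, not confirmed.
function danglingEdges(model) {
  const ids = new Set();
  for (const layer of model.layers) {
    ids.add(layer.id);
    for (const child of layer.children ?? []) ids.add(child.id);
  }
  return model.edges.filter((e) => !ids.has(e.source) || !ids.has(e.target));
}

const bad = danglingEdges({
  layers: [{ id: "agents", children: [{ id: "mcp" }] }],
  edges: [
    { source: "agents", target: "mcp" },
    { source: "agents", target: "does-not-exist" },
  ],
});
console.log(bad.length); // 1 dangling edge
```

Catching dangling edges at contribution time keeps every downstream renderer from having to defend against them individually.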