
Decentralised Agentic Architectures & Data: Defining NANDINI

Mukul Kemla
Dec 9, 2025 · 5 min read

Most people don’t come into direct contact with big data. But almost everyone experiences its side effects: content recommendations that seem eerily accurate, targeted ads that follow you across platforms, and increasingly, interactions with large language models (LLMs). In the last two years, the use of LLMs has exploded. The platforms behind these systems gather information from everywhere, process it in the cloud, and return insights through models so large they demand their own infrastructure and governance bureaucracies. Centralisation, in this sense, has become the architecture of modern intelligence.

But as we centralise data, we also centralise risk. Governance becomes a tax, innovation slows, and control concentrates in a few systems that are opaque, brittle, and detached from local knowledge.

Last week, we formally announced our collaborative research effort with MIT: NANDINI (Networked Agents Natural Distillation of Interconnected Nodal Intelligence).

In NANDINI, we propose a network in which each node runs its own local intelligence, closely attuned to local objectives. These nodes do not share raw information with external systems. Instead, when cross-domain coordination is needed, they communicate via federated protocols, exchanging distilled insights, alerts, or negotiated proposals. Privacy is preserved; governance overhead is dramatically lower; and collective intelligence arises not from a central monolith but from a relational mesh more akin to how human societies function.
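
To make the idea concrete, here is a minimal sketch (in Python) of what a distilled insight might look like as it crosses a node boundary. The message shape and every field name below are our own illustration, not a defined NANDINI schema.

```python
# A minimal sketch of a "distilled insight" -- the unit of exchange between
# NANDINI nodes. All field names here are illustrative, not a spec.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Insight:
    source_node: str   # node that produced the insight
    topic: str         # local domain the insight concerns
    summary: str       # distilled claim -- never raw records
    confidence: float  # producer's own confidence in [0, 1]
    issued_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# A node shares a conclusion, not the data that produced it.
alert = Insight(
    source_node="supply-chain-node",
    topic="logistics",
    summary="Port congestion likely to delay Q3 shipments",
    confidence=0.72,
)
```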

NANDINI reframes what it means to build and own intelligence in the digital era. It turns every data repository into an active participant in a broader cognitive network. This structure reduces risk, increases trust, and allows intelligence to scale horizontally.

Under the NANDINI vision, data itself is reshaped. It becomes a living, connected resource, anchored where it originates and enriched by its ongoing use. Collection shifts from indiscriminate scraping toward purposeful sensing: each node gathers only what is relevant to its local domain. Storage becomes adaptive and interconnected: data resides in the formats and systems best suited to its function, whether temporal streams, relational knowledge graphs, or compressed embeddings. Exchange, too, transforms: rather than shipping entire datasets across networks, nodes share distilled representations such as statistical summaries, semantic relations, or learned parameters. Data thus evolves from a commodity into a conversation, continuously refined through its relationships with other intelligences in the network.
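
As a toy illustration of purposeful sensing and distilled exchange, the hypothetical node below answers queries with aggregate statistics while its raw records never leave the object. The class and method names are invented for this sketch.

```python
# Sketch: a node answers cross-domain queries with a statistical summary
# instead of exporting its raw records. Purely illustrative.
import statistics

class LocalNode:
    def __init__(self, records: list[float]):
        self._records = records  # raw data never leaves this object

    def distilled_view(self) -> dict:
        """Share only aggregate structure, not individual observations."""
        return {
            "count": len(self._records),
            "mean": statistics.mean(self._records),
            "stdev": statistics.stdev(self._records),
        }

node = LocalNode([12.1, 14.7, 13.3, 15.9, 12.8])
print(node.distilled_view())  # e.g. {'count': 5, 'mean': 13.76, ...}
```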

This alternative might be called localised “small dataism.” Intelligence moves toward the edge rather than dragging all information toward a central hub. Each domain retains ownership of the data it generates. Rather than exporting raw information to external systems, these domains provide data products or intelligent APIs, offering interaction under clear, transparent rules. In this way, the system becomes a federation of responsible entities, not a monolith. Intelligence is no longer the product of accumulated scale alone, but of connecting smaller, localized intelligences in ways that are meaningful and mutually accountable.
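
One way to picture such a data product is a node that honours only the queries named in its published contract. The policy vocabulary below is, again, purely illustrative.

```python
# Sketch: a "data product" that offers interaction under explicit,
# transparent rules. The contract vocabulary is invented for illustration.

ALLOWED_QUERIES = {"sector_summary", "risk_score"}  # the published contract

class DataProduct:
    def __init__(self, owner: str):
        self.owner = owner

    def query(self, kind: str, requester: str) -> dict:
        # Requests outside the published contract are refused outright,
        # so every interaction stays within transparent, pre-agreed rules.
        if kind not in ALLOWED_QUERIES:
            raise PermissionError(f"{kind!r} is outside this product's contract")
        # ... compute the answer from local data ...
        return {"kind": kind, "owner": self.owner, "requester": requester}
```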


From Vision to Prototype: The First NANDINI Experiment

We cannot yet know what the data landscape will look like once this technology is adopted, but our current experiment aims to test the vision.

We’re building a NANDINI prototype that draws from multiple real-world data streams like news, company data, and public digital traces to model relationships between entities.

Each node in this experiment acts as a local agent with its own dataset and connection logic.

A news node analyzes breaking stories, identifying emerging entities and relationships (e.g., partnerships, conflicts, acquisitions).

A company data node monitors financials, board changes, and disclosures.

When relationships need to be inferred across domains, the nodes communicate using federated insight exchange (for example, “Entity C’s risk exposure has increased due to proximity to Entity D’s decline”).
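
The sketch below mimics that exchange with two toy nodes: a news node publishes a distilled relationship, and a company-data node adjusts a local risk estimate in response. The node classes, the message fields, and the 0.2 risk increment are all hypothetical.

```python
# Sketch of the prototype's federated insight exchange. Only the distilled
# claim crosses the node boundary -- never the source documents.

class CompanyDataNode:
    def __init__(self):
        self.risk = {"Entity C": 0.30}  # local risk estimates

    def on_insight(self, insight: dict):
        # Raise local risk when a connected entity declines.
        if insight["relation"] == "proximity_decline":
            entity = insight["affected"]
            self.risk[entity] = min(1.0, self.risk.get(entity, 0.0) + 0.2)

class NewsNode:
    def publish(self, peer: CompanyDataNode):
        # Push a distilled relationship inferred from breaking stories.
        peer.on_insight({
            "relation": "proximity_decline",
            "affected": "Entity C",
            "cause": "Entity D decline",
            "confidence": 0.8,
        })

company = CompanyDataNode()
NewsNode().publish(company)
print(company.risk["Entity C"])  # 0.5
```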

To ground this vision, we can draw on two emerging architectures in the AI agent space: Google’s A2A (Agent2Agent) and MIT’s NANDA (Networked AI Agents in Decentralized Architecture). Together, they sketch the scaffolding for a real-world version of that network of intelligent nodes.

These conversations happen under a shared protocol: a lingua franca for agent collaboration. That’s where A2A comes in.

Google’s Agent2Agent (A2A) protocol is an open standard for secure, structured communication between agents, allowing them to discover capabilities, negotiate task assignments, exchange artifacts or messages, and coordinate workflows across domains. An agent can publish an “Agent Card” advertising what it can do and how it can be contacted. A2A supports long-running tasks, modality negotiation (text, media, forms), and secure authentication by default. Importantly, A2A is designed to keep agents’ internal state private: they need not expose their memory, tools, or private logic.
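
For a feel of how our news node might advertise itself, here is an illustrative Agent Card expressed as a Python dict. The fields follow the general shape of the published A2A schema, but treat this as a sketch and consult the A2A specification for the authoritative field names; the endpoint URL is a placeholder.

```python
# Sketch of an A2A Agent Card for the news node. Field names follow the
# general shape of the A2A schema; the values are hypothetical.
news_agent_card = {
    "name": "news-analysis-node",
    "description": "Extracts emerging entities and relationships from news",
    "url": "https://example.org/agents/news",  # hypothetical endpoint
    "version": "0.1.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "defaultInputModes": ["text"],
    "defaultOutputModes": ["text"],
    "skills": [
        {
            "id": "relationship-extraction",
            "name": "Relationship extraction",
            "description": "Returns distilled entity relationships from headlines",
            "tags": ["news", "entities"],
        }
    ],
}
```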

As agent ecosystems grow, you need more than just messaging. You need discovery, identity, capsule attestation, and trust infrastructure. That is the ambition of MIT’s NANDA. The NANDA project envisions a foundational infrastructure for a true “Internet of AI Agents.” Just as DNS, TLS, and HTTP underpin the modern Internet, NANDA proposes registry, verification, and reputation layers for agents.

Because the registry is decentralized and cross-protocol, new agents plug in seamlessly. NANDA thus offers the glue of trust and discovery to unify the relational web of agents.
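
The sketch below shows what resolving a peer through such a registry could look like. The endpoint layout and the resolve_agent helper are hypothetical; NANDA’s actual registry API may differ.

```python
# Sketch: resolving a peer agent through a NANDA-style registry.
# The registry URL scheme and helper below are invented for illustration.
import json
from urllib.request import urlopen

def resolve_agent(registry_url: str, agent_name: str) -> dict:
    """Look up an agent's card by name in a (hypothetical) registry."""
    with urlopen(f"{registry_url}/agents/{agent_name}") as resp:
        card = json.load(resp)
    # In a real deployment you would verify the card's signature or
    # attestation before trusting the advertised capabilities.
    return card

# card = resolve_agent("https://registry.example.org", "news-analysis-node")
```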

NANDINI gestures toward a new chapter: one where intelligence returns to the edge, where every node becomes both learner and teacher, and where the network itself begins to think. Decentralised intelligence won’t emerge overnight. But if we can design systems that trust relational knowledge as much as they trust computation, we might finally build digital societies that reflect the best of our human ones. As we move forward, we’ll continue to share what we learn through each prototype, experiment, and unexpected discovery.

