
From Distributed Intelligence to Verifiable Responsibility

Artificial intelligence (AI) is no longer confined to centralized data centers. It is increasingly distributed across edge devices, enterprises, multiple cloud providers, and autonomous software agents operating across organizational and sovereign boundaries. Control is decentralizing. Trust, however, remains centralized.

This mismatch exposes a missing architectural layer in the emerging AI-native Internet. Existing trust models were designed for systems owned by a single administrative authority and governed through institutional oversight. Those assumptions break when autonomous agents interact continuously, make decisions at machine speed, and act on behalf of organizations without human mediation.

This post argues that trust must become a first-class infrastructure function, just as routing (Web 1.0) and orchestration (Web 2.0) once did. By examining the evolution of the Internet across participation, abstraction, and correctness, we show why centralized trust cannot govern distributed autonomy, identify the minimal primitives required for scalable coordination, and explain why verifiable distributed trust is not a philosophical preference, but an architectural necessity.

The AI-native Internet will not scale on distributed intelligence alone. It requires distributed responsibility, made enforceable through verifiable accountability. Responsibility is the social outcome of coordinated action; accountability is the technical substrate that makes responsibility enforceable at machine speed.

1. The Problem: Intelligence Is Distributing Faster Than Trust

Artificial intelligence is escaping the boundaries that once contained it. What began as centralized model inference inside controlled data centers has evolved into a web of cooperating agents operating across devices, applications, companies, and national jurisdictions. These systems no longer merely respond to queries. They initiate actions, negotiate with external systems, and make decisions on behalf of institutions.

The trust architecture surrounding them has not evolved accordingly.

Most digital trust assumes that authority is anchored in an identifiable organization and that oversight ultimately resolves to human review. When errors occur, institutions intervene. When access is disputed, administrators adjudicate. When responsibility is unclear, contracts determine liability. These mechanisms function because actions are relatively discrete and infrequent.

Autonomous agents violate those assumptions. They act continuously, interact dynamically, and operate across administrative boundaries. In such systems, trust mediated after the fact is insufficient. Legitimacy must be established before action is accepted.

The result is architectural instability. Intelligence is distributed across domains, but trust remains anchored within them. Systems therefore gravitate toward two failure modes: consolidation into dominant platforms that mediate trust, or fragmentation into isolated environments where coordination degrades and global optimization becomes infeasible. Neither outcome supports open, scalable autonomy.

2. The Internet as an Architectural Continuum

The Internet has historically evolved by embedding coordination and correctness guarantees into infrastructure whenever coordination became too complex for humans to manage manually.

In its earliest form, the challenge was connectivity. Networks were heterogeneous and incompatible, requiring participants to understand the topology through which messages traveled. The Internet Protocol solved this by abstracting routing. A sender no longer needed to know the path or the intermediate systems involved; addressing a message became sufficient. Connectivity became an infrastructural property rather than an operational concern.

As systems grew in complexity, computation became the constraint. Managing distributed software across machines and failures demanded constant human coordination. Cloud computing solved this by abstracting execution. Developers no longer managed servers; they described workloads, and the infrastructure ensured reliable operation. Reliability became an infrastructural property rather than an operational burden.

Today autonomous systems introduce a different constraint. Systems do not merely exchange data or execute programs; they depend on each other’s decisions. Before relying on another system’s output, an agent must know who produced it, under what authority, and with what accountability. Correctness now concerns behavior rather than delivery or execution.

Whenever correctness expands, a new abstraction layer emerges to enforce it. Routing enforced reachability. Orchestration enforced execution. Autonomous coordination requires a layer that enforces the legitimacy of action. That layer is accountability: cryptographically verifiable identity, authority, execution integrity, and attribution. Trust then emerges as a system property when these accountability guarantees are present.

2.1 The Web Evolution Mirrors the Infrastructure Evolution

The familiar terminology of Web 1.0, Web 2.0, and Web 3.0 describes the Internet from a user experience perspective. When viewed architecturally, however, these phases correspond to changes in participation rather than merely application style.

Web 1.0 connected people to information. Users retrieved documents from remote servers, and the primary challenge was reliable reachability. This era aligned with the IP-native Internet, where routing abstracted network complexity and correctness meant successful delivery.

Web 2.0 connected services to each other through platforms. Applications became continuously interacting systems rather than isolated endpoints. The operational burden shifted from communication to execution, forcing the emergence of cloud-native orchestration to sustain continuous interaction.

Web 3.0 introduces verifiable interaction between independent participants, but AI-native systems introduce autonomous decision-making entities. Systems no longer merely exchange data or invoke functions; they rely on outcomes produced by other systems acting under delegated authority. Interaction therefore depends not only on execution but on legitimacy of behavior. This now forces the emergence of an AI-native accountability substrate, where correctness expands beyond execution to governed action. Trust becomes the emergent outcome of verifiable accountability, rather than an assumption about counterparties.

Seen this way, the evolution of the Web is not separate from the evolution of infrastructure. Both describe the same progression viewed from different layers:

Table: Evolution of the Web Across Perspective, Infrastructure, and Correctness

| Web Regime | Web Perspective | Infrastructure Perspective | Correctness Requirement |
|---|---|---|---|
| Web 1.0 | Information Access | IP-Native Networking | Delivery Correctness |
| Web 2.0 | Platform Coordination | Cloud-Native Computing | Execution Correctness |
| Web 3.0 | Autonomous Interaction | AI-Native Systems | Behavioral Correctness |

3. The Three-Lens Framework: Participation, Abstraction, Correctness

The progression above follows a recurring pattern visible across every Internet transition. Systems evolve simultaneously across three dimensions: who participates, what abstraction organizes interaction, and what correctness must be guaranteed.

In early networks, humans were the primary participants, and packets were the organizing abstraction. Correctness meant successful delivery, and trust was institutional; users trusted the organizations operating the network.

In the cloud era, applications became participants, and workloads became the abstraction. Correctness meant reliable execution, and trust shifted to platforms responsible for running code correctly.

In the AI-native era, autonomous agents become participants. Their interactions revolve around intent and commitments rather than raw information transfer. While these interactions are still expressed through data exchange, the semantics shift toward machine-interpretable commitments, authority, and proofs of compliance rather than simple payload delivery. Correctness therefore expands to behavior: whether an action taken by one system should be accepted by another. Trust can no longer rely solely on institutions or platforms. It must be verifiable by the interacting systems themselves.

This shift reveals that trust is not an external governance concern, but the enforcement mechanism of correctness once participants become autonomous actors.

4. Why Centralized Trust Fails in AI-Native Systems

Centralized trust mechanisms fail not because they are flawed in isolation, but because they assume conditions that autonomous systems no longer satisfy. They presuppose a single authoritative decision point, stable relationships between participants, and the feasibility of deferring approval to institutional processes. None of these assumptions hold when agents interact continuously, across domains, and at machine speed.

Autonomous coordination cannot depend on a single authority without introducing systemic fragility. Nor can it pause for retrospective review without sacrificing responsiveness. As interactions become dynamic and cross-jurisdictional, prearranged agreements and centralized adjudication cease to be viable coordination mechanisms.

Under these conditions, centralized trust becomes either a bottleneck or a source of unacceptable concentration of control: the system must either slow down to human governance cycles or cede mediation to a platform capable of authorizing all interactions. Both outcomes undermine the openness and scalability that defined the Internet’s success. Distributed trust, by contrast, does not imply the absence of shared rules; it requires common verification primitives and shared criteria for legitimacy, without reliance on centralized arbiters.

Example: A Supply Chain Failure in Autonomous Coordination

Consider an autonomous procurement agent operated by a multinational manufacturer. The agent continuously sources components across dozens of suppliers, dynamically rerouting orders based on price, availability, and delivery constraints. Each supplier exposes an autonomous fulfillment agent that negotiates terms, confirms inventory, and commits to delivery without human intervention.

During a period of supply chain disruption, the manufacturer’s agent attempts to substitute a regulated component from an alternative supplier in a different jurisdiction to meet production deadlines. The supplier’s agent offers immediate availability and commits to shipment. From the perspective of both local systems, the transaction is valid: the supplier has inventory, and the procurement agent has budgetary authority.

However, the substitution violates the manufacturer’s compliance policy and local regulatory constraints for the destination market. The shipment proceeds. By the time compliance teams detect the violation through audit and reconciliation, the component is already in transit and downstream production commitments have been triggered.

This failure is not a bug in price optimization, inventory matching, or logistics execution. It is an architectural failure of trust. The systems were able to coordinate execution, but not legitimacy.
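As a minimal sketch (the policy model, component identifiers, and jurisdictions below are hypothetical, not the manufacturer's actual systems), the missing step is a legitimacy check evaluated before the supplier's commitment is accepted, rather than reconstructed afterward through audit and reconciliation:

```python
# Minimal sketch (hypothetical policy model): a legitimacy check the procurement
# agent could run before accepting the supplier agent's commitment.
from dataclasses import dataclass


@dataclass
class ProposedOrder:
    component_id: str
    regulated: bool
    origin_jurisdiction: str
    destination_market: str


@dataclass
class CompliancePolicy:
    # Routes (origin, destination) into which regulated components may not ship.
    blocked_routes: set[tuple[str, str]]

    def permits(self, order: ProposedOrder) -> bool:
        if not order.regulated:
            return True
        return (order.origin_jurisdiction, order.destination_market) not in self.blocked_routes


def accept_commitment(order: ProposedOrder, policy: CompliancePolicy) -> bool:
    # Legitimacy is evaluated before execution, not reconstructed from audit logs.
    return policy.permits(order)


policy = CompliancePolicy(blocked_routes={("jurisdiction-b", "market-a")})
order = ProposedOrder("component-42", regulated=True,
                      origin_jurisdiction="jurisdiction-b", destination_market="market-a")
assert accept_commitment(order, policy) is False  # the substitution is rejected up front
```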

5. Trust as an Infrastructure Layer: Proposed Approach

Historically, trust operated above infrastructure. Humans trusted institutions, applications trusted platforms, and distributed systems trusted operators. Infrastructure transported information but did not adjudicate legitimacy. Autonomous agents cannot rely solely on such assumptions because they must evaluate the validity of actions continuously and at machine timescales.

As autonomous interaction scales, architecture acquires a new functional layer positioned between interaction and execution. This layer neither transports data nor performs computation. Instead, it evaluates whether a proposed action is authorized, attributable, policy-compliant, and auditable before it is allowed to proceed. In effect, it enforces interaction semantics rather than communication semantics.

The evolution of routing offers a useful analogy. IP routing enabled communication across independently administered networks by abstracting transport from application meaning and minimizing reliance on intermediary intent. Applications did not need to trust each intermediate operator; they relied instead on protocol guarantees and end-to-end verification.

A trust substrate serves a parallel role for autonomous systems. It reduces reliance on institutional or relational trust between counterparties by providing cryptographically verifiable guarantees about identity, policy compliance, and accountability. Participants need not assume that other agents are benevolent; instead, they rely on enforceable interaction rules and verifiable evidence of behavior.

Trust, therefore, transitions from an external governance assumption to an internal runtime property. It becomes embedded in the operational fabric of the system, grounded in provable identity, consistent policy enforcement, and auditable interaction histories. The objective is not to eliminate trust, but to formalize and distribute it in a way that allows independent autonomous entities to cooperate safely without requiring bilateral institutional assurances.
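A minimal sketch of such a runtime gate, with hypothetical names and deliberately simplified verification callbacks, shows where the layer sits: legitimacy is evaluated between interaction and execution, and the audit record travels with the action.

```python
# Minimal sketch (hypothetical names): an admission gate positioned between
# interaction and execution. An action proceeds only if it is authorized,
# attributable, and policy-compliant, and the decision itself is auditable.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ActionRequest:
    actor_id: str          # who claims to be acting
    delegation_proof: str  # evidence of the authority under which they act
    payload: dict          # the proposed action itself


def admit(request: ActionRequest,
          verify_identity: Callable[[str], bool],
          verify_delegation: Callable[[str, str], bool],
          check_policy: Callable[[dict], bool],
          record_audit: Callable[[ActionRequest, bool], None]) -> bool:
    """Evaluate legitimacy before the action is allowed to proceed."""
    allowed = (verify_identity(request.actor_id)
               and verify_delegation(request.actor_id, request.delegation_proof)
               and check_policy(request.payload))
    record_audit(request, allowed)  # accountability travels with the action
    return allowed
```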

6. Minimal Trust Primitives Required in the Infrastructure Layer

Providing trust as infrastructure does not require embedding full governance frameworks into protocols. It requires a small set of machine-verifiable properties that every autonomous interaction must satisfy at runtime. These properties define the minimal accountability substrate necessary for scalable coordination among autonomous systems without shared administration.

At a minimum, an AI-native accountability substrate must provide verifiable answers to five questions. These define the minimal properties of a trusted distributed agent architecture:

  1. Actor Identity: The system must be able to cryptographically establish which agent is acting. This identity must be persistent across interactions and verifiable across administrative domains, without relying on a single identity provider.
  2. Delegated Authority: The system must verify under what authority the agent is acting. This includes the ability to prove that an agent is operating under a specific delegation chain (e.g., from an organization, user, or governing policy) and that the scope and constraints of that delegation are valid at the moment of action.
  3. Execution Environment Attestation: The system must be able to verify where and how the action is being executed. This includes cryptographic attestation of the execution environment to ensure that the agent’s behavior is being produced by an approved runtime (e.g., a specific model version, policy-constrained environment, or trusted execution context), rather than by an untrusted or modified system.
  4. Policy Compliance Proof: The system must be able to verify that the action complies with applicable policies before it is accepted. This extends beyond access control to include behavioral constraints, regulatory requirements, organizational policies, and context-specific rules. Compliance must be machine-verifiable at runtime, not reconstructed retrospectively through logs or audits.
  5. Accountability and Attribution: The system must preserve a verifiable accountability trail linking actions to responsible principals. This ensures that responsibility does not vanish as actions propagate across agents and systems, enabling enforcement, remediation, and liability without centralized oversight.

Inter-agent semantics are also a prerequisite for accountability. Agents must share machine-interpretable schemas for intent, commitments, and fulfillment criteria: what was agreed, what constitutes successful execution, and how compliance is verified. Without shared semantic frameworks for commitments and outcomes, cryptographic accountability cannot be meaningfully enforced across agents.
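Purely as an illustration (the schema and field names below are assumptions, not a proposed standard), a shared commitment might be expressed as a machine-interpretable record of intent, obligations, fulfillment criteria, and verification method:

```python
# Illustrative schema only (field names are assumptions, not a standard): a
# commitment both agents can evaluate against the same fulfillment criteria.
from dataclasses import dataclass


@dataclass
class Commitment:
    intent: str                           # what the requesting agent asked for
    obligations: dict[str, str]           # what the committing agent agrees to do
    fulfillment_criteria: dict[str, str]  # what counts as successful execution
    verification_method: str              # how compliance with the criteria is checked


commitment = Commitment(
    intent="deliver 500 units of component-42 by 2025-03-01",
    obligations={"supplier-agent": "ship from certified facility"},
    fulfillment_criteria={"quantity": "500", "deadline": "2025-03-01"},
    verification_method="signed-delivery-receipt",
)
```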

Together, these properties allow autonomous systems to evaluate the legitimacy of actions without relying on pre-established trust relationships or centralized intermediaries. Importantly, these primitives are orthogonal to any specific implementation (e.g., cryptographic schemes, trusted execution environments, or distributed verification mechanisms). They define the architectural contract that any AI-native trust substrate must satisfy.
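One way to read that contract, as an illustrative sketch rather than a specification, is an abstract interface whose five operations correspond to the five questions above; any concrete substrate, whatever cryptographic scheme or attestation mechanism it uses, would need to implement them:

```python
# Illustrative sketch only: the five primitives as an abstract contract that
# any concrete trust substrate (whatever its cryptography) could implement.
from abc import ABC, abstractmethod


class TrustSubstrate(ABC):
    @abstractmethod
    def verify_identity(self, actor_id: str, credential: bytes) -> bool:
        """1. Actor Identity: establish which agent is acting."""

    @abstractmethod
    def verify_delegation(self, actor_id: str, delegation_chain: list[bytes]) -> bool:
        """2. Delegated Authority: confirm the delegation's scope is valid at the moment of action."""

    @abstractmethod
    def verify_attestation(self, runtime_evidence: bytes) -> bool:
        """3. Execution Environment Attestation: confirm an approved runtime produced the action."""

    @abstractmethod
    def verify_policy(self, action: dict, context: dict) -> bool:
        """4. Policy Compliance Proof: check applicable policies before the action is accepted."""

    @abstractmethod
    def record_attribution(self, action: dict, actor_id: str) -> None:
        """5. Accountability and Attribution: preserve a verifiable trail to responsible principals."""
```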

Without these primitives, autonomy scales risk. With them, independent agents can coordinate safely because legitimacy becomes a verifiable property of interaction rather than an external assumption.

7. What Becomes Possible: From Connecting Systems to Cooperating Systems

When trust becomes infrastructural, interactions no longer depend on prior relationships. Systems can cooperate immediately because legitimacy is provable. Compliance becomes continuous rather than retrospective, and accountability travels with action rather than being reconstructed afterward.

This enables new forms of coordination: autonomous collaboration across organizations, safe operation in regulated environments without centralized supervision, and scalable automation without surrendering responsibility.

These are not incremental improvements. They represent the difference between connected systems and cooperating systems.

8. Conclusion: Completing the AI-Native Stack

Each era of the Internet embedded a new responsibility into infrastructure when scale exceeded the capacity of human coordination. Routing embedded connectivity. Cloud computing embedded reliable execution. Autonomous systems now require embedded accountability.

Distributed intelligence alone cannot sustain an open ecosystem. Autonomous coordination depends on the ability of systems to evaluate the legitimacy of actions at runtime, without reliance on centralized intermediaries or retrospective governance.

The missing component of the AI-native Internet is therefore not greater intelligence but verifiable accountability. Responsibility emerges socially; accountability must exist technically.

Once accountability becomes infrastructural, trust becomes a verifiable property of interaction rather than an external assumption.

With trust grounded in verifiable accountability, autonomous systems can cooperate safely at global scale. The evolution of the Internet thus continues from moving information to running software to governing action.

Mallik Tatipamula

Mallik Tatipamula is Chief Technology Officer at Ericsson Silicon Valley. His career spans Nortel, Motorola, Cisco, Juniper, F5 Networks, and Ericsson. A Fellow of the Royal Society (FRS) and four other national academies, he is passionate about mentoring future engineers and advancing digital inclusion worldwide.

David Attermann

David Attermann is a venture investor and systems architect working at the intersection of AI, cryptography, and distributed systems. His work focuses on the infrastructure of the AI-native Internet, including verifiable accountability, programmable commerce networks, and cryptographic trust layers for autonomous agents.

Vinton G. Cerf

Vinton G. Cerf is vice president and Chief Internet Evangelist at Google. He served as ACM president 2012-2014.