mmc.vc – Agentic Enablers: The invisible hands shaping the agent economy
Agentic Enablers
Sometime in the summer of 2023, we stumbled upon Chirper AI, a Twitter/X-style social network exclusively for AI agents (no humans allowed) – a strange and “delightfully weird” experiment in how autonomous AI entities interact in the wild. After observing some (admittedly bizarre) interactions, we began thinking of ways to build on this idea; namely, how could AI agents collaborate, create and exchange real value across organisational boundaries (and not just exchange tweets or “chirps”)? Think agents that can autonomously negotiate with suppliers, restock inventory, manage working capital financing… the possibilities are limitless.
Now remember, this was in a world before Model Context Protocol (MCP) and all the other subsequent developments/protocols existed. Fast forward to today, and progress in a number of areas is getting us closer to this vision of the agent economy, such as:
- Improved ease and accuracy of tool-calling by AI agents, which enables AI agents to perform actions reliably and improves functionality.
- Continued progress on getting multi-agent systems to work, particularly by providing a standard way for agents to collaborate with each other (regardless of the underlying framework or vendor).
- Enhanced infrastructure for agentic payments, which empowers agents to pay for the resources they need to complete tasks, hire each other for performing specialised tasks, hire humans for the tasks agents themselves can’t do (!) and perform tasks on our behalf (like buying groceries).
- Emergence of AI agent marketplaces, where you can build, find, use and pay for trusted, verified AI agents that can work or collaborate with each other.
In this report, we discuss each of these developments in detail and spotlight startups innovating in this space. We’ll also make (what we call) the Agentic Alphanumerical Soup of protocols and standards (MCP, A2A, AP2, ACP, TAP, ERC-8004, x402) much more digestible, and outline the success factors for AI agent marketplaces.
We’re keenly tracking developments in this space – if you’re a startup founder building the foundations of the agent economy, please reach out to Advika, Ollie or Sevi; we’d love to chat.

Giving Agents the Tools to Succeed
Tools are critical because they enable AI agents to act – manage your email inbox, update your CRM with new prospects, book your flights and so on. However, making successful tool calls is not a straightforward exercise.
Tool calling is hard because LLMs are unreliable both in how they invoke tools and which tools they choose. Even when a tool is correct, models can misconfigure parameters, driving 30–70% call failure rates and forcing costly retry loops (analysis → call → error → re-prompt) that add tokens, latency, and wasted API spend. The problem compounds as tool counts grow: every tool description is another decision point the agent must distinguish from near-duplicates, and performance degrades sharply once you move from a handful of tools to dozens. That’s why agents need dynamic, just-in-time tool selection instead of loading an entire toolbox upfront. Moreover, raw API outputs can easily become large enough for complex queries to overwhelm an agent’s context window, so the enterprise API outputs need pre-processing to be usable. All of this makes ‘tool context management’ essential.
(if you’re keen to understand context management for AI agents better, we covered it in Agentic Enablers: Treating AI’s amnesia and other disorders)
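The just-in-time selection idea above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual discovery API: the tool names, descriptions, and the crude keyword-overlap scoring are all assumptions standing in for the semantic search a real discovery layer would use.

```python
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    description: str

# A large catalogue that is never loaded wholesale into the agent's context.
CATALOGUE = [
    Tool("crm_update_contact", "Update a contact record in the CRM"),
    Tool("crm_create_lead", "Create a new sales lead in the CRM"),
    Tool("email_send", "Send an email on the user's behalf"),
    Tool("flights_search", "Search for an available flight"),
    Tool("flights_book", "Book a flight reservation"),
]

def select_tools(task: str, k: int = 2) -> list[Tool]:
    """Score tools by keyword overlap with the task; load only the top-k."""
    words = {w for w in task.lower().split() if len(w) > 3}  # crude stopword filter
    def score(tool: Tool) -> int:
        return len(words & set(tool.description.lower().split()))
    return sorted(CATALOGUE, key=score, reverse=True)[:k]

# Only the few relevant operations enter the model's context window.
print([t.name for t in select_tools("book a flight to Berlin")])
```

With dozens or hundreds of tools in the catalogue, the agent still only ever reasons over the two or three operations relevant to the moment, which is exactly what keeps near-duplicate tool descriptions from degrading its accuracy.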
That’s where startups such as Arcade, Composio, StackOne and Jentic come in, solving various challenges around agentic tools. They enable you to securely connect your AI agents to MCPs, APIs, data, and more through pre-built, battle-tested integrations that enforce fine-grained authorisations and provide centralised governance. Meanwhile, Alpic is a platform for building, deploying, and scaling MCP servers to production (by handling auth, rate limits, observability, scaling, quality checks, and safe release workflows) so developers can stand up agent-ready services quickly.
To illustrate: Jentic and StackOne tackle agent unreliability by pairing just-in-time tool provisioning with auth-aware execution. Instead of dumping hundreds of tools into an LLM’s context (which quickly increases confusion and hallucinations), their discovery layer lets an agent search a catalogue of tools or a large OpenAPI knowledge pool and load only the few relevant operations at the moment of need. Crucially, that discovery is tightly coupled to authentication and authorisation, so agents are only ever handed tools they’re actually permitted to use; credentials are managed centrally and transparently, preventing leakage and avoiding failures from unauthorised calls. The result is higher agent focus, reliability, and enterprise-grade security in one integrated control plane. StackOne has extended this by building smaller, specialised models optimised for tool calling, whilst focusing on security risks like tool poisoning and prompt injections.

Getting Multi-agent Systems to Work
An agentic framework (such as LangChain / LangGraph, AutoGen, CrewAI, Agno, Portia AI, Mastra, or Letta) is a software library that enables you to build intelligent agents and equip them with tool integration, context and memory management, authentication and authorisation, observability, monitoring and evaluation, as well as inter-agent communication. Crucially, they help to coordinate multiple agents, particularly by solving challenges around reliability and scalability. For instance, Portia AI lets agents create and share their plans before taking any action so that humans stay in control and avoid unexpected behaviour (with this multi-agent reliability well-reflected in Rezonant, Portia’s product that automates writing engineering tickets). Meanwhile, Agno expanded beyond a framework into AgentOS, a runtime that unifies shared memory, context, and communication so agents act as a coordinated network that can collaborate, delegate, and learn like a human team rather than isolated bots.
While agentic frameworks have focused on multi-agent systems built within their own framework, a real-world deployment is likely to have a heterogeneous collection of agents (built by different vendors or on different frameworks). That’s why Google led the creation of Agent2Agent or A2A – an open interoperability protocol that standardises how AI agents collaborate with one another, even when they’re built by different vendors or frameworks. It’s needed because real enterprise work spans many apps and platforms, and letting agents securely communicate and coordinate across those boundaries increases their autonomy, multiplies productivity gains, and lowers long-term integration and maintenance costs.
A2A works by providing a common, lightweight communication layer (it’s built on familiar standards like HTTP, SSE, and JSON-RPC) so it fits easily into existing IT stacks. Agents can discover each other through built-in capability discovery: each agent publishes an “Agent Card” (a JSON description of what it can do), enabling a client agent to find the best partner for a task and then use A2A to exchange information and coordinate actions in natural, unstructured ways, without requiring shared memory, tools, or context. The protocol was announced in April 2025, and we look forward to its evolution as it continues improving agent interoperability.

Going from “Card Not Present” to “Human Not Present” Transactions
Previously, payments were characterised as in-store or online (or “card not present”). Now, we think the distinction will be whether the payment is made by a human or an AI agent (or “human not present”). Agentic payments are payments made by AI agents, either to unlock the resources they need to do a job or to move money on our behalf. To illustrate, here are some use cases for which you would need to empower AI agents with the ability to make payments:
- Agentic access for paid content, tools and services:
- Resource acquisition: An agent may need to pay for certain resources to perform its task e.g. a research agent may need to purchase a critical dataset, or access a paywalled report.
- Specialised services acquisition and delegation: An AI agent may need specialised assistance from other AI agents who are experts in certain tasks, e.g. a finance agent may need to hire a compliance agent. An AI agent may even hire humans to perform tasks that the agent itself or other agents can’t perform – such as reviewing edge cases, labelling data, validating field information physically (which can’t be verified digitally).
- Business auto-optimisation:
- Real-time spend optimisation: Agents with delegated authority can purchase and scale resources 24/7 (like restocking inventory, adjusting cloud spend, or renegotiating SaaS licenses as conditions change).
- Treasury management: Agents monitor cash, execute payments, and optimise working capital continuously without waiting for human approval.
- Supply-chain management: Agents can trigger payments automatically from verified milestones (delivery, QA, inventory receipt), reducing delays and disputes.
- Personal auto-optimisation:
- Smart shopping: Agents can be used to automate routine tasks such as ordering groceries or small ticket household goods, but also specific situations like “I want to buy that specific t-shirt in blue, and I’m willing to pay 10% extra for it whenever it’s available.”
- Coordinated purchases: Agents can plan and pay for an entire trip (flight tickets, hotels, tourist attractions) within a certain budget.
- Personal finance: Agents can execute automated transfers based on the user’s spending patterns and stated financial goals e.g. the user says “I don’t want to go into overdraft” so the agent comes back with “I moved £300 from your savings to checking account to cover this week’s bills.”
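The smart-shopping use case above boils down to a standing rule the agent checks against live listings. A minimal sketch (the listing fields, SKU, and prices are all hypothetical):

```python
def should_buy(listing: dict, rule: dict) -> bool:
    """Check a delegated-purchase rule such as:
    'buy that specific t-shirt in blue, up to 10% over the reference price'."""
    right_item = listing["sku"] == rule["sku"] and listing["colour"] == rule["colour"]
    max_price = rule["reference_price"] * (1 + rule["max_premium"])
    return right_item and listing["in_stock"] and listing["price"] <= max_price

# The user's standing instruction, captured once.
rule = {"sku": "TS-042", "colour": "blue", "reference_price": 20.0, "max_premium": 0.10}

offer_ok = {"sku": "TS-042", "colour": "blue", "price": 21.50, "in_stock": True}
offer_bad = {"sku": "TS-042", "colour": "blue", "price": 23.00, "in_stock": True}
print(should_buy(offer_ok, rule))   # within the 10% premium -> buy
print(should_buy(offer_bad, rule))  # over budget -> keep waiting
```

The hard part, as the next section discusses, is not expressing the rule but trusting the agent to apply it faithfully and proving afterwards that it did.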
It’s Difficult to Trust Agents to Make Payments; They Need to Be Robust, Intent-verified and Clearly Identified
Traditional payment rails were deterministic, but agentic payments are often the product of an agent’s reasoning (e.g. the agent thinks “I need to access this dataset to perform my research task”) or the product of a conversation (e.g. when you tell the agent “Buy me white sneakers for less than $100”). It may depend on context and memory, and could be impacted by ambiguity in language. The AI agent may hallucinate and make mistakes. It could also be influenced by malicious attacks such as prompt injections or memory poisoning. This could create new risks like unintended purchases and fraud.
To illustrate: fraudsters can game the logic of AI shopping agents by creating counterfeit merchants and poisoned content that look legitimate to automated checks. These fake storefronts lure agents with “too-good-to-be-true” prices, trigger purchases via stored credentials, then harvest payment data for instant unauthorised use – automating both scam site creation and agent exploitation to scale attacks.
(if you’re interested in learning about the innovative ways in which scammers and defenders are leveraging AI, we’ve written all about it in our FinCrime series)
At the same time, even real merchants become a new attack surface: third-party ads, images, reviews, and comment sections can hide malicious instructions that redirect agents, extract sensitive info, or subtly push them to abandon the site and buy from competitors (a kind of “denial-of-shopping” attack). The takeaway is to treat all merchant-side content (pages, reviews, emails, ads) as untrusted inputs that need monitoring and guardrails. We’ve explored some of the solutions that protect AI agents against prompt injections and other malicious attacks in Agentic Enablers: Working through AI’s insecurity and identity crisis.
While we’ve looked at trust issues from the consumer perspective thus far, merchants too justifiably have concerns when their human customers’ AI agents are buying from them. Some questions that immediately come to mind are:
- Has the human customer authorised this agent to make this specific transaction?
- How can a merchant be sure the request is accurate and not a hallucination?
- If fraud happens, who is accountable? The user, the merchant, the issuer, the AI model provider?
These questions are addressed by the Google-led open standard Agent Payments Protocol (also called AP2), which helps to securely authenticate, validate, and convey an agent’s authority to transact. It proves that the user gave an agent the specific authority to make a particular purchase. It also enables a merchant to be sure that the agent’s request accurately reflects the user’s true intent. All of this is helpful in determining accountability if an incorrect or fraudulent transaction occurs – so it’s a compliance layer ensuring that every autonomous action is pre-authorised.
AP2 does this by using Mandates – tamper-proof, cryptographically-signed digital contracts that serve as verifiable proof of a user’s instructions. Mandates cover two agent shopping modes. For real-time buying with the human present, an Intent Mandate captures your request (“Find me new white sneakers”) and a signed Cart Mandate locks in the exact items and price you approve (this specific pair of sneakers, at this price, to this shipping info). For delegated buying (“human not present”), you pre-sign an Intent Mandate with clear rules (price, timing, conditions) that lets the agent auto-create a Cart Mandate only when those rules are met. Because these mandates reflect verifiable intent rather than inferred actions, they reduce the risk of agent error.
While AP2 introduces “manually curated allow lists of approved agents,” it doesn’t describe identity verification mechanisms. This is where Visa’s Trusted Agent Protocol (TAP), Skyfire’s KYA (Know Your Agent) identity verification for agents, Catena Labs’ ACK-ID, and Nevermined ID come in.
To illustrate: Skyfire’s Know Your Agent (KYA) is a digital identity framework for AI agents that lets them build verified profiles via a series of checkpoints (such as developer identification, prior authenticated interactions, transaction history) and then earn portable trust badges (think “verified agent” blue checkmarks). In practice, KYA issues each agent a recognisable identity token and badge that merchants and platforms can rely on, so only properly credentialed agents can transact while unknown or malicious bots are filtered out.
Traditional Payment Systems Weren’t Built for AI Agents; That’s where Stablecoins and Blockchain-based Solutions Help
Traditional payment rails were designed for people, not AI agents – and they break down fast when you get to micropayments. Micropayments are tiny online transactions, typically under $10 and often just a few cents or even fractions of a cent. Agents will need these for routine resource buys like one-off content access, tool calls, or even paying other agents to complete tasks.
The problem is simple: traditional systems have fee floors and chunky per-transaction costs, which makes small payments uneconomical. If a service costs $0.25 but each transaction carries a flat $0.35 fee, the payment fee costs more than the actual service (in fact, more than doubling the effective price per call). At that point, an agent economy can’t work at scale.
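The fee-floor arithmetic is easy to check (the $0.25 service price and $0.35 flat fee are illustrative figures):

```python
service_price = 0.25  # what the agent actually buys, e.g. one dataset or API call
flat_fee = 0.35       # illustrative flat per-transaction fee on traditional rails

effective_price = service_price + flat_fee
print(f"effective price per call: ${effective_price:.2f}")         # $0.60
print(f"fee as share of service: {flat_fee / service_price:.0%}")  # 140%
```

At 140% overhead, every call costs the agent 2.4x the underlying service, and shrinking the purchase makes it worse: a flat fee dominates completely at sub-cent sizes.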
The solution is also simple: Stablecoins. These are crypto assets pegged to fiat currencies like the US dollar – so 1 USDC equals about $1. Because their value is stable, they’re far less volatile than other crypto assets like Bitcoin. Stablecoins settle instantly, run 24/7, and are programmable (logic can be built directly into payments e.g. money that moves only when certain pre-defined conditions are met). Crucially, they make sub-cent micropayments viable because transaction costs are tiny. Also, this always-on, low-fee layer lets agents transact the moment value appears: a treasury agent can rebalance overnight, or a supply-chain agent can lock in rates as soon as they surface.
Accordingly, Coinbase has introduced x402, an open standard for internet-native stablecoin payments, to enable atomic transactions between APIs, apps, and AI agents. Most importantly, the x402 protocol removes friction around registrations, authentication and complex signatures (you don’t need credit cards, API keys or logins): agents sign tiny transactions onchain and access resources on demand. The x402 standard relies on Facilitators such as Latinum to validate transactions.
(if you’re interested in learning more about stablecoins, we’ve written about it extensively in Stablecoins: Entering their untethered growth era)
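The request-pay-retry loop that x402 builds on (reviving HTTP's long-dormant "402 Payment Required" status code) can be sketched as a toy client/server exchange. Everything here is an assumption for illustration: the header name, the price quote fields, and the facilitator logic are stand-ins, not the actual x402 specification:

```python
# Toy resource server and agent client illustrating the 402-then-pay pattern.
PRICE_QUOTE = {"amount": "0.001", "currency": "USDC", "pay_to": "0xSeller"}
SETTLED: set[str] = set()  # payments the facilitator has seen settle onchain

def facilitator_verify(proof: str) -> bool:
    return proof in SETTLED  # stand-in for real onchain verification

def server(headers: dict) -> tuple[int, dict]:
    proof = headers.get("X-Payment", "")
    if facilitator_verify(proof):
        return 200, {"data": "the paid resource"}
    return 402, PRICE_QUOTE  # no valid payment yet: respond with the price

def agent_fetch() -> dict:
    status, body = server({})       # first try: no credit card, API key, or login
    if status == 402:
        proof = "signed-tx-abc123"  # agent signs a tiny onchain transfer...
        SETTLED.add(proof)          # ...which the facilitator observes settling
        status, body = server({"X-Payment": proof})
    return body

print(agent_fetch())  # {'data': 'the paid resource'}
```

Note what is absent: no account registration, no stored credentials. The payment proof itself is the credential, which is what makes the flow workable for an agent meeting a resource for the first time.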
One of the biggest advantages of blockchain is its immutability, which makes it tamper-proof. That’s why we wanted to introduce a few other blockchain-based technologies for agent identity and trust: Decentralised Identifiers (DIDs) and ERC-8004.
A Decentralised Identifier (DID) is a cryptographically verifiable, universally unique ID (that doesn’t need a centralised authority to issue it), and lets its holder prove ownership of a digital identity. It’s flexible, scalable and resistant to fraud or malicious attacks – making it perfect for identifying AI agents and their human users. Startups such as Catena Labs and Nevermined leverage DIDs for agent identities.
Meanwhile, ERC-8004 is an Ethereum standard for identity, reputation, and validation that lets agents discover, evaluate and collaborate with one another safely across organisational or platform boundaries. It does this through three onchain registries: an Identity Registry that issues a sovereign AgentID, a Reputation Registry that records task outcomes as an immutable performance history, and a Validation Registry that lets agents post collateral (like ETH or USDC) as a guarantee (so if they behave badly, that stake can be cut, giving them real financial downside and a built-in incentive to act honestly). This ties in nicely with our previous point on how we can build trust in agentic payments.
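The interplay of the three registries can be modelled with a toy in-memory version. ERC-8004 itself is an onchain standard written in Solidity; this Python sketch only conveys the mechanics, and all names, the 50% slash rate, and the data shapes are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: int                  # sovereign AgentID from the Identity Registry
    stake: float = 0.0             # collateral posted via the Validation Registry
    history: list[tuple[str, bool]] = field(default_factory=list)

class Registries:
    """Toy in-memory stand-in for ERC-8004's three onchain registries."""
    def __init__(self) -> None:
        self._next_id = 1
        self.agents: dict[int, AgentRecord] = {}

    def register(self) -> AgentRecord:  # Identity Registry
        rec = AgentRecord(self._next_id)
        self.agents[rec.agent_id] = rec
        self._next_id += 1
        return rec

    def post_stake(self, rec: AgentRecord, amount: float) -> None:  # Validation Registry
        rec.stake += amount

    def record_outcome(self, rec: AgentRecord, task: str, ok: bool,
                       slash: float = 0.5) -> None:  # Reputation Registry
        rec.history.append((task, ok))  # append-only performance history
        if not ok:
            rec.stake *= 1 - slash      # misbehaviour cuts the posted collateral

reg = Registries()
agent = reg.register()
reg.post_stake(agent, 100.0)
reg.record_outcome(agent, "compliance-check", ok=False)
print(agent.agent_id, agent.stake)  # the failed task halved the stake: 1 50.0
```

A counterparty agent deciding whom to hire can thus weigh both the immutable track record and the collateral still at risk, rather than trusting a stranger's self-description.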
We Don’t Really Have Agent-native Infrastructure and Interfaces Yet, but Promising Approaches Are Emerging
The web wasn’t built for AI agents, so even capable models have to wade through messy HTML, infer what UI elements do, and guess what’s clickable – making browser-based agents brittle, slow, and much more expensive than API calls. This gets in the way of agentic commerce via browsers as well. But because so much digital work still sits behind human UIs, tools like Browserbase and Browser Use give agents real, scalable browsers to navigate and act like humans, enabling end-to-end tasks efficiently when APIs don’t exist.
Besides AI browsers, we’re seeing other efforts to make our current infrastructure more agent-friendly. For instance, the introduction of llms.txt – a proposed robots.txt-style file that lets a website give LLMs a curated, markdown map of its most important content so agents can find the necessary information quickly and reliably. Or VOIX, a framework where websites explicitly declare to AI agents what actions are available and what state they’re in via two new HTML elements.
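An llms.txt file is just markdown served at the site root: a title, a short blockquote summary, then sections of annotated links. A minimal hypothetical example (the site and paths are invented for illustration):

```markdown
# Example Shop

> Example Shop sells home goods; agents can browse the catalogue and check order status.

## Docs

- [Product catalogue](https://example.com/catalogue.md): full product list with prices
- [Returns policy](https://example.com/returns.md): conditions and deadlines

## Optional

- [Press](https://example.com/press.md): company announcements, safe to skip
```

Compared with scraping the rendered site, an agent reading this file gets the important pages pre-curated and already in a format its context window handles well.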
That said, we expect that over time AI interfaces like ChatGPT or Gemini will become the main gateway to the internet, replacing browsers and folding the browsing experience into the AI itself. We’re already seeing the start of this with MCP-UI and subsequent developments. MCP-UI is the UI layer for Model Context Protocol: it lets tools and agents return rich, interactive web components (forms, selectors, carts, dashboards, checkout panels etc) inside an agent conversation, so agentic apps aren’t stuck in text-only flows. That way, users can click, type or select – making for a much better human user experience within apps like ChatGPT.
ChatGPT Apps build on this experience, letting you access Spotify, Booking.com, Uber, OpenTable and other apps within the ChatGPT interface itself, effectively turning ChatGPT into an app store. To enable monetisation of this ChatGPT-driven agentic commerce experience, OpenAI and Stripe co-developed the Agentic Commerce Protocol (ACP), which allows users to complete purchases within ChatGPT without leaving the chat. We’re particularly encouraged by developments such as ChatGPT’s Instant Checkout and ChatGPT Apps because they will help people get acclimatised to paying within agentic interfaces, which should gradually build trust in the new system and consequently boost agentic payment volumes.
Because it may feel a bit chaotic (“why are there so many protocols anyway?”) an easy way to think about this is that x402, AP2 and ACP can all work together – x402 handles stablecoin payments, AP2 manages authorisation, and ACP handles the checkout coordination between buyers and merchants.
What Startups in This Space Do
Given the profusion of protocols, you’re probably wondering what startups in this space do. The short answer is that they integrate into, build upon and extend the various protocols – e.g. x402 needs Facilitators to verify transactions, and AP2’s Credential Provider role creates opportunities for specialisation and for ID solutions to plug into it. To illustrate: Latinum provides an open-source agent commerce stack (a 402 Facilitator API plus an MCP Wallet) that lets agents securely pay for services and transact with identity and reputation safeguards against fraud and prompt attacks. Another example would be how Payman uses A2A to pass a user’s natural-language financial intent to a network of best-in-class specialist agents (docs, credit, compliance, orchestration) that execute it via the shared protocol.
A growing set of tools is emerging to support agentic commerce and finance: Catena Labs and Skyfire both pair identity with payments (via ACK-ID and KYApay respectively), though Catena positions as an AI-native financial institution while Skyfire emphasises programmatic buyer-seller interactions. Payman enables traditional banks to offer intent-driven, agentic money movement that listens for user cues and optimises tradeoffs to pursue the outcomes users want (e.g. maximise returns, preserve liquidity, ensure no payments are missed). Paid, Nevermined, and Alguna focus on monetising agents through automated billing, pricing models, and ROI/cost tracking, while Crossmint enables agents to buy products via agentic checkout and programmable wallets. In a similar vein, Vypr is focused on agentic commerce (though it works via context portability).

The Culmination of it All: AI Agent Marketplaces
You can think of AI agent marketplaces as the culmination of everything we’ve discussed so far in our report: they are platforms where you can build, find, use and pay for trusted, verified AI agents that can work or collaborate with each other. Startups in this space include Fetch.ai, agent.ai, A2A Net, Tetto, Masumi, and AI Agent Store. While all of these marketplaces are at very nascent stages, some distinct implementations of the idea are emerging. We liked how Ben Clarke, Founder of A2A Net, articulated the differences in marketplace approaches:
- Hosting, messages, and payments can be centralised or decentralised.
- They can use A2A or a custom protocol (typically via MCP).
- They can accept fiat, crypto, or both.
- Messages can be sent to agents via a UI, API, or both.
- They can offer multi-agent features and collaboration.
- They can offer a platform to build agents.
To illustrate: Some startups have opted for a blockchain-based approach (albeit on different chains e.g. Fetch.ai on ASI, Masumi on Cardano, Tetto on Solana) – bringing benefits such as 24×7 instant settlement, escrow for economic protection, sub-cent micropayments, cryptographic verification and immutability. Each transaction leaves an audit trail through on-chain decision logging, which simplifies governance. While we acknowledge the benefits of blockchain, we think using AI agents is already somewhat complicated for non-technical users, and adding crypto on top introduces a whole new level of complexity (unless it’s significantly abstracted away).
We think agent marketplace startups can take inspiration from how Google implemented its Google Cloud Marketplace for enterprise AI agents – agent builders listing on it see 112% larger deal sizes, longer sales agreements, faster deal cycles, and improved customer retention. As with any marketplace, here are a few success factors:
- Number of agents and users – given network effects play an important role in any kind of marketplace, if there are more agents on the platform, it’s likely to attract more users, and if there are more users, builders are incentivised to build/integrate their agents to that marketplace.
- Frequency and depth of usage of AI agents on the platform – the higher and deeper this is, the greater are the monetisation opportunities for agent builders, which in turn attracts more builders and agents to the platform.
- Variety of agents (addressing different types of tasks and workflows) – just as how Amazon succeeded with the long-tail of discovery, agent marketplaces are likely to become stickier when users realise that no matter what type of task or workflow, they’re likely to find it all in that particular marketplace. That said, we also expect niche marketplaces to be successful, provided the coverage of tasks/workflows is comprehensive within that niche.
- Quality of agents – this seems obvious, but it requires very clear communication around what agents can or can’t do, clear guidance or education on how best to work with the agent for optimal results, mechanisms to incorporate feedback/reviews from users, and tracking of successful task completions. Users need to feel satisfied with the work done by the agent so that they come back to it again and again.
- Ability for non-technical users to easily customise existing agents to suit their workflows – Linked with the point above, non-technical users are more likely to return to the platform if it can help them get their work done their way.
- Quality of search and discovery mechanisms – As marketplaces scale to thousands of agents, users need an easy way to find the best agents that are most perfectly suited to their task and other requirements (e.g. quality, price, speed of execution, ease of integration). In time, this could also evolve to recommendations on how to chain different agents on the fly to automate end-to-end workflows (a bit like how Amazon may show you different product bundles).
- Ease of multi-agent collaborations – Ability to easily build or integrate existing agents, as well as get a heterogeneous collective of AI agents to collaborate effectively.
- Governance – Enterprises in particular will care about sensitive data handling, fine-grained access controls, audit trails, regulatory compliance and centralised governance.
- Attractive monetisation for AI builders yet predictable spending for users – While current approaches in various agent marketplaces involve letting the agent builder set whatever price and let users decide for themselves whether or not to buy, over time (as the marketplace scales), it could provide builders with analytics to enable better monetisation, and also help users identify how to make their spend on agents more predictable or controlled.
We’re also tracking developments like the proposed Agent Network – a shared, open infrastructure where AI agents (which could be owned by anyone i.e. individuals, organisations, or institutions) can operate autonomously and interact with each other. In this unified network, agents are interoperable by default, letting them efficiently find one another, negotiate, trade services or capabilities, and collaborate across domains. Private agent networks may still exist, but the Agent Network intends to serve as the common foundation that scales economic exchange and accelerates innovation.

The Agent Economy Never Sleeps
This brings us to the end of our series on Agentic AI. Taken together, the four parts of our series map a clear journey: from the deployment challenges around agents, to the deep reliability work that gives them memory and context, to the security and identity developments that make them trustworthy actors, and finally to the markets and incentives that let them coordinate and create value at scale. These agentic enablers let LLMs evolve into networks of specialised, interoperable, and accountable agents that can discover, hire, pay, and collaborate with each other (and with us). As that happens, agents shift from app features to economic participants, and create a new operating layer for work and commerce.
We are tremendously excited about all that lies ahead. The space is moving fast, with new breakthroughs arriving all the time – so if you’re a startup founder building in this space, please reach out to Advika, Ollie or Sevi; we’d love to chat.