Integration

Building with the EVM Blockchain MCP Server: Query Smart Contracts from Claude, Cursor, and ChatGPT

How to give Claude, Cursor, and ChatGPT live access to EVM smart contracts using the Model Context Protocol — install, example prompts, and what MCP does (and doesn't) solve.

evmquery team · 8 min read
EVM Blockchain MCP Server — query contracts from Claude, Cursor, and ChatGPT

Large language models are fluent in Solidity and in block-explorer conventions, but they’re blind to actual chain state unless you wire up a live connection. If you want Claude to tell you your current Aave health factor, which DAOs you’ve voted in, or what the floor price of a collection is right now, the model needs a live feed from the network — not a screenshot, not a paste, not a stale training snapshot.

That’s what the Model Context Protocol (MCP) is for. And it’s why an EVM blockchain MCP server is worth installing even if you’ve never written a line of Solidity.

TL;DR

MCP lets AI assistants like Claude, Cursor, and ChatGPT call external tools. An EVM MCP server exposes smart contract reads as tool calls, so the model can fetch live onchain data in the same conversation. The evmquery MCP server is a hosted HTTP endpoint at https://api.evmquery.com/mcp, ships two tools (execute_query and describe_schema), auto-resolves ABIs and proxies, and currently supports Ethereum, Base, and BNB Smart Chain.

What is the Model Context Protocol?

MCP is an open standard introduced by Anthropic in late 2024 for connecting AI assistants to external data sources and tools. Where function calling lets you hand a single model a one-off list of tools, MCP treats tools as servers the client can discover, introspect, and call — the same way a language server feeds your IDE.

In practical terms: an MCP server is a small program (local or remote) that advertises a set of tools. An MCP client (Claude Desktop, Cursor, Zed, or ChatGPT with MCP enabled) speaks the protocol, reads the tool list, and lets the model decide when to invoke them. The server returns structured data; the model folds that data into its next reply.
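The discover-then-invoke loop is plain JSON-RPC underneath. A rough sketch of the two message shapes — the method names (`tools/list`, `tools/call`) come from the MCP spec, while the tool name and arguments here are illustrative, not evmquery's exact wire format:

```python
import json

# Message 1: the client asks the server what tools it offers.
discover = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Message 2: the model has decided to invoke one of the advertised tools.
# Tool name and argument keys are illustrative.
invoke = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "execute_query",
        "arguments": {"chain": "evm_base", "expression": "..."},
    },
}

# Both travel as plain JSON; the transport (stdio or HTTP) just frames them.
wire = json.dumps(invoke)
```

The server's reply to `tools/list` carries each tool's schema, which is what lets the model pick a tool and fill its arguments without any prompt-side glue.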

This matters for blockchain because the chain is fundamentally external state. No amount of training data will tell you what block 21 million looks like. The model needs a live tool.

What does an EVM MCP server actually do?

An EVM MCP server exposes the chain as a small, well-typed toolbox. The shape that scales is expression-based: instead of one tool per RPC method, you give the model a way to write a tiny query against a named contract and read the typed result back. The evmquery server is the canonical example, and it ships exactly two tools:

  • execute_query — run a Smart Expression Language (SEL) expression against one or more named contracts on a chain. Returns the typed value plus block metadata and credits consumed.
  • describe_schema — introspect what’s callable on a given set of contracts (every view / pure method, plus the SEL helpers, list macros, and types). The model calls this before writing an expression so it knows what exists.

That’s it. Two tools, one expression language, every read pattern that fits in a SEL expression. A good MCP design here is narrow on purpose — every extra tool is one more thing for the model to misuse.
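Concretely, an `execute_query` call carries just a handful of fields. This shape is inferred from the SEL examples later in this article (chain, schema, context, expression) — treat the field names and the USDC address as illustrative, not the authoritative API contract:

```python
# Illustrative execute_query arguments: name a contract, bind a variable,
# write one SEL expression against it. Address shown is mainnet USDC.
query = {
    "chain": "evm_ethereum",
    "schema": {"usdc": "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"},
    "context": {"wallet": "0xYourWallet"},
    "expression": "formatUnits(usdc.balanceOf(wallet), 6)",
}
```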

The catch is in the fine print of what execute_query has to do under the hood. Anyone can wrap eth_call in an MCP server. The hard parts are:

  • ABI resolution. Claude doesn’t know the ABI of the contract you just named. A good server fetches it (from Etherscan, Sourcify, or an embedded catalogue) so the model can call functions by name, not by selector.
  • Proxy handling. Roughly a third of production contracts are behind EIP-1967 proxies. A naive eth_call hits the proxy’s empty fallback. The server has to resolve the implementation, merge ABIs, and decode against the right one.
  • Chain coverage. “Ethereum” is fine until someone asks about a balance on Base. A single-chain MCP server will send the model in circles.
  • Rate limiting and caching. LLMs retry. A lot. Without caching, a single conversation can burn through an RPC quota in minutes.

Servers that skip these end up being cute demos. Servers that handle them end up being the tool you reach for daily.
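The proxy-handling step above is worth seeing concretely. A minimal sketch, assuming a plain JSON-RPC node; the function names are illustrative, not evmquery's internals, but the storage slot constant is fixed by EIP-1967:

```python
# EIP-1967 defines the implementation slot as
# keccak256("eip1967.proxy.implementation") - 1:
EIP1967_IMPL_SLOT = (
    "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc"
)

def storage_at_request(proxy: str, req_id: int = 1) -> dict:
    """Build the eth_getStorageAt payload a server POSTs to its RPC node."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "eth_getStorageAt",
        "params": [proxy, EIP1967_IMPL_SLOT, "latest"],
    }

def implementation_from_slot(word: str) -> str:
    """The slot holds a 32-byte word; the address is its low 20 bytes."""
    return "0x" + word.removeprefix("0x").rjust(64, "0")[-40:]
```

Once the implementation address is known, the server fetches *its* verified ABI, merges it with the proxy's, and decodes the call result against the merged surface — all invisible to the model.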

Installing the evmquery MCP server

The evmquery MCP server is a hosted HTTP endpoint at https://api.evmquery.com/mcp. There’s no local process to run, no npx install, no Docker container — your client connects to the URL and signs in with OIDC. No API key to generate, paste, or rotate for MCP.

The fastest path is Claude Code, which has a one-liner:

claude mcp add --scope user --transport http evmquery https://api.evmquery.com/mcp

For Claude Desktop, Cursor, VS Code, Windsurf, Zed, and other clients that take a JSON config, the equivalent block is:

{
  "mcpServers": {
    "evmquery": {
      "url": "https://api.evmquery.com/mcp"
    }
  }
}

Restart your client. On first use it will open a browser window for OIDC sign-in; after that, the execute_query and describe_schema tools appear in the tool picker. For ChatGPT’s MCP connector, point it at the same URL and sign in with OIDC when prompted.

New to evmquery? The free tier gives you 2,000 credits per month, which is more than enough to validate the setup — grab it here.

Prefer API keys?

The HTTP endpoint also accepts X-API-Key if your client doesn’t speak OIDC or you’d rather hard-pin credentials. Generate a key in the dashboard and set it in the headers block of your client’s MCP config. OIDC is the default because it avoids leaking long-lived secrets into client configs.
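For clients that take the JSON config shown earlier, that looks like adding a headers block next to the URL (the exact key name can vary by client, so check your client's MCP docs):

```json
{
  "mcpServers": {
    "evmquery": {
      "url": "https://api.evmquery.com/mcp",
      "headers": {
        "X-API-Key": "YOUR_API_KEY"
      }
    }
  }
}
```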

Three prompts that prove it works

The fastest way to tell whether an MCP server is actually useful is to ask questions that must touch the chain. Here are three we use as smoke tests for a read-oriented MCP server like evmquery.

1. “What’s my Aave v3 health factor on Base?”

Good MCP + good model gives you a decoded number and a sentence of context. Underneath, the model picks execute_query and runs a SEL expression like:

chain:      evm_base
schema:     { aave: 0xA238Dd80C259a72e81d7e4664a9801593F98d1c5 }
context:    { wallet: sol_address = 0xYourWallet… }
expression: formatUnits(aave.getUserAccountData(wallet).healthFactor, 18)

What the server has to do that a naive wrapper won’t: auto-resolve the Aave Pool ABI from verified source (so the model can call getUserAccountData by name, not selector), unwrap the EIP-1967 proxy to the implementation, decode the six-tuple return as a typed struct, and let formatUnits scale healthFactor from its 1e18 fixed-point form into a readable ratio. The model never sees an ABI; it sees aave.getUserAccountData(wallet).healthFactor and gets a number back.
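That last scaling step is easy to sketch. What `formatUnits(value, 18)` has to do, in Python — Aave returns `healthFactor` as an integer scaled by 1e18, and the helper divides it back into a readable ratio (the sample value below is illustrative):

```python
from decimal import Decimal

def format_units(value: int, decimals: int) -> str:
    """Scale a fixed-point integer down by 10**decimals, trimming zeros."""
    scaled = Decimal(value) / (Decimal(10) ** decimals)
    return format(scaled.normalize(), "f")

# A raw healthFactor of 1_843_256_000_000_000_000 reads as roughly 1.84:
format_units(1_843_256_000_000_000_000, 18)  # "1.843256"
```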

2. “Is BAYC #7890 still owned by vitalik.eth?”

A pure on-chain question — exactly what execute_query is for. The model uses describe_schema once to confirm ownerOf exists on the BAYC contract, then runs:

expression: bayc.ownerOf(solInt(7890)) == solAddress("0xd8dA…6045")

Returns true or false in one round, at one credit per call. The same shape works for “who owns token #N”, “is this token minted”, or “how many NFTs does this wallet hold” — all single SEL expressions.

3. “What’s the OpenSea floor right now?”

This one is a deliberate failure case to listen for. Floor prices live on a marketplace API, not on-chain — so an honest MCP server says so instead of inventing a tool. evmquery’s MCP refuses to make up data: ask it for an off-chain price and it’ll tell you it can read contract state, not marketplace orderbooks. That’s the right behavior. Pair it with a separate marketplace MCP if you want both.

If your MCP server (a) handles question #1 without you pasting ABIs and (b) is honest about question #3, you’ve found a keeper.

Why not just write a function-calling tool?

A fair question. Why bother with MCP if you can define tools directly in your agent framework?

Three reasons, in order of importance:

  1. Portability. An MCP server works in Claude Desktop and Cursor and Zed and ChatGPT and any future client that speaks the protocol. A custom tool binding only works in the SDK you wrote it against.
  2. Separation. The server runs in its own process (or remotely). It can maintain caches, hold credentials, resolve ABIs, retry failed RPCs — all without polluting your agent code. Your agent prompt stays short; the tool implementation stays out of your git history.
  3. Discovery. MCP clients list tools with their schemas. The model sees “read any ERC-20 balance” with typed inputs, not “a function called callContract that takes a random JSON blob.” That structured surface is what makes MCP tools feel native.
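What “tools with schemas” looks like on the wire: a trimmed, illustrative tool descriptor. The field names (`name`, `description`, `inputSchema`) follow the MCP spec; the property list is a sketch of evmquery's surface, not its published schema:

```json
{
  "name": "execute_query",
  "description": "Run a SEL expression against named contracts on a chain",
  "inputSchema": {
    "type": "object",
    "properties": {
      "chain": { "type": "string" },
      "expression": { "type": "string" }
    },
    "required": ["chain", "expression"]
  }
}
```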

Put bluntly: if you’re writing a single-purpose script, use function calling. If you want a persistent capability that shows up across every AI tool you use, ship an MCP server.

Where MCP stops helping

MCP is powerful but narrow. A few things it does not solve:

  • Signing transactions. Reading is safe. Writing is a loaded gun. evmquery’s MCP server is read-only by design — it will never broadcast on your behalf. For writes, you want an explicit wallet flow, not an ambient AI session.
  • Real-time subscriptions. MCP is request/response. If you need “tell me when this balance changes,” you want webhooks or an indexer, not an AI loop polling.
  • Multi-hop workflows. A single MCP call returns a single result. If you’re chaining “fetch these 50 contracts, then filter, then alert,” you’re rebuilding orchestration on top of the model. That works for small tasks; for anything scheduled, reach for an n8n workflow instead.

The right mental model: MCP is for conversational onchain work. The model asks, the chain answers, the conversation continues. For everything else, there are better primitives.

Next steps

  • Read more about the developer integrations — the MCP server shares its query engine with the REST API and n8n node.
  • Building for an agent-heavy workflow? The /for/ai-users page has recipes for Claude Desktop and Cursor specifically.
  • Want to batch reads efficiently instead of one-by-one? The Multicall3 guide covers the batching primitive that evmquery uses under the hood.

Five minutes to onchain Claude

Sign up for the free tier, paste the MCP config block into Claude Desktop, and the model can report your real USDC balance from inside the conversation.