The Vercel AI SDK makes it straightforward to build AI-powered apps in TypeScript, but the model arrives blind to the chain. Ask it for your current USDC balance and it will either fabricate a number from training data or refuse. That data lives on-chain, changes every block, and was never in any training corpus. Without a live tool, the model can’t help.
Tool calling fixes this. Define a function, describe its inputs to the model, and the SDK handles routing, calls, and result injection. This post walks through adding a single evmquery_read tool to the Vercel AI SDK so any prompt about onchain state returns a live, decoded result from the chain.
TL;DR
Install the ai package, define a tool() that POSTs to https://api.evmquery.com/api/v1/query, pass it to streamText(), and your chat handler can answer questions about USDC balances, Aave positions, and any EVM contract view function. The free tier gives 2,000 credits/month, no credit card needed.
How tool calling works in the Vercel AI SDK
The AI SDK ships streamText() and generateText() with a tools option. You pass it a record of named tools; each tool has a description, a Zod parameters schema, and an execute function.
When the model decides it needs external data to answer a prompt, it emits a tool call with arguments that match your schema. The SDK validates those arguments, calls execute, and feeds the result back to the model with no manual parsing or JSON wrangling. The model receives the structured result and incorporates it into its reply.
For onchain queries, this is the right primitive. The model already knows when it needs external data (balance checks, price reads, position health) and when it doesn’t (explaining how Uniswap V3 works). You do not have to hard-code that decision in your application logic.
What you’ll build
A TypeScript handler with one tool: evmquery_read. The tool takes a chain identifier, a named contract address map, a CEL expression, and optional context variables. It calls evmquery’s REST API and returns the decoded result. The model decides when to invoke it and how to present the answer.
The handler works in a Next.js App Router route, an Express endpoint, or a plain Node.js script. There is no framework dependency beyond the ai package and a provider adapter.
If you are building a product that needs programmatic access to chain data, continue here. If you want to query the chain directly from Claude Desktop or Cursor without writing any code, the evmquery MCP server is the faster path.
Project setup
Start from any TypeScript project. Node 18 or later is required for the native fetch API.
```bash
npm install ai @ai-sdk/anthropic zod
```
Set two environment variables. The Anthropic adapter is used here; any AI SDK-compatible provider works.
```bash
ANTHROPIC_API_KEY=sk-ant-...
EVMQUERY_API_KEY=eq_...
```
Get your free evmquery key at https://app.evmquery.com/onboarding?plan=free. The free tier covers 2,000 credits per month, which is roughly 1,000 typical contract reads.
Defining the evmquery tool
The evmquery REST API accepts a single POST body: a chain, a named contract map, a CEL expression, and optional typed context variables. The response includes a result field with the decoded, human-readable value.
Here is the full tool definition:
```typescript
import { tool } from "ai";
import { z } from "zod";

const EVMQUERY_API = "https://api.evmquery.com/api/v1/query";

export const evmqueryReadTool = tool({
  description:
    "Read live data from an EVM smart contract. Use for current token balances, " +
    "DeFi positions, pool state, or any contract view function on Ethereum, Base, " +
    "or BNB Smart Chain. Do not use for historical data or event logs. " +
    "Supported chains: evm_ethereum, evm_base, evm_bnb_mainnet.",
  parameters: z.object({
    chain: z
      .enum(["evm_ethereum", "evm_base", "evm_bnb_mainnet"])
      .describe("Chain to query"),
    contracts: z
      .record(z.string())
      .describe(
        "Named contract addresses. Key is the short name used in the expression, " +
          "value is the 0x address. Example: { usdc: '0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48' }"
      ),
    expression: z
      .string()
      .describe(
        "CEL expression to evaluate. Named contracts become variables. " +
          "formatUnits(usdc.balanceOf(wallet), usdc.decimals()) returns a human-readable balance."
      ),
    context: z
      .record(z.string())
      .optional()
      .describe(
        "Runtime values for wallet addresses or other parameters used in the expression. " +
          "Example: { wallet: '0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045' }"
      ),
  }),
  execute: async ({ chain, contracts, expression, context }) => {
    const body: Record<string, unknown> = {
      chain,
      schema: { contracts },
      expression,
    };

    if (context && Object.keys(context).length > 0) {
      // Every context variable is declared as an address in the schema.
      const contextTypes = Object.fromEntries(
        Object.keys(context).map((k) => [k, "sol_address"])
      );
      body.schema = { contracts, context: contextTypes };
      body.context = context;
    }

    const res = await fetch(EVMQUERY_API, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "x-api-key": process.env.EVMQUERY_API_KEY!,
      },
      body: JSON.stringify(body),
    });

    if (!res.ok) {
      const text = await res.text();
      throw new Error(`evmquery ${res.status}: ${text}`);
    }

    const data = await res.json();
    return { result: data.result, block: data.blockNumber };
  },
});
```
A few design choices worth noting:
- Context types are fixed to `sol_address` for simplicity. The model passes wallet addresses; the tool types them correctly without exposing evmquery’s type system to the model.
- The description is specific about scope. Telling the model what the tool does not do (historical data, event logs) prevents misrouting and hallucinated tool calls.
- The expression is passed through verbatim. The model constructs the CEL expression from the contract name and the method it wants to call. The `describe_schema` endpoint is available if you want to let the model introspect available methods first, useful for unknown or user-supplied contracts.
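As a refactoring sketch, the request-body construction inside `execute` can be pulled into a pure helper, which makes the context-typing behavior easy to unit test in isolation. `buildQueryBody` is a name introduced here for illustration, not part of the AI SDK or evmquery:

```typescript
// Sketch: pure helper mirroring the body construction in execute above.
// All context variables are typed as sol_address, matching the tool definition.
type QueryBody = {
  chain: string;
  schema: { contracts: Record<string, string>; context?: Record<string, string> };
  expression: string;
  context?: Record<string, string>;
};

function buildQueryBody(
  chain: string,
  contracts: Record<string, string>,
  expression: string,
  context?: Record<string, string>
): QueryBody {
  const body: QueryBody = { chain, schema: { contracts }, expression };
  if (context && Object.keys(context).length > 0) {
    // Declare every context variable as a sol_address in the schema.
    body.schema.context = Object.fromEntries(
      Object.keys(context).map((k) => [k, "sol_address"])
    );
    body.context = context;
  }
  return body;
}
```

With this factored out, `execute` reduces to building the body, POSTing it, and unwrapping the response.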
Wiring into a streaming chat handler
With the tool defined, the chat handler is only a few lines of logic:
```typescript
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { evmqueryReadTool } from "./tools/evmquery";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: anthropic("claude-sonnet-4-6"),
    system:
      "You are a blockchain data assistant. When the user asks about token balances, " +
      "DeFi positions, pool prices, or any current contract state, call the " +
      "evmquery_read tool to fetch live data before answering. " +
      "Always include the block number in your reply so the user knows the result is current.",
    messages,
    tools: { evmquery_read: evmqueryReadTool },
    maxSteps: 3,
  });

  return result.toDataStreamResponse();
}
```
maxSteps: 3 allows up to three tool round-trips per response. Most single-contract reads take one step. If the user asks a multi-contract question, the model may chain calls or batch them depending on what the expression supports.
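Conceptually, `maxSteps` bounds a loop like the following. This is a simplified sketch of the SDK's behavior, not its actual implementation; the `ModelTurn` type and `runWithTools` function are stand-ins introduced here to illustrate the step budget:

```typescript
// Simplified sketch of the tool-calling loop that maxSteps bounds.
// Each "model turn" either requests a tool call or produces final text.
type ModelTurn =
  | { type: "tool_call"; name: string; args: unknown }
  | { type: "text"; text: string };

type ToolFn = (args: unknown) => Promise<unknown>;

async function runWithTools(
  turns: ModelTurn[], // stand-in for successive model outputs
  tools: Record<string, ToolFn>,
  maxSteps: number
): Promise<{ text: string; steps: number }> {
  let steps = 0;
  for (const turn of turns) {
    if (turn.type === "text") return { text: turn.text, steps };
    if (steps >= maxSteps) break; // budget exhausted: stop calling tools
    steps += 1;
    await tools[turn.name](turn.args); // result is fed back to the model
  }
  return { text: "(stopped: step limit reached)", steps };
}
```

A single balance check consumes one step; a prompt that needs two independent reads consumes two, which is why a small budget like 3 covers most conversational queries.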
Next.js App Router
Drop the POST handler into app/api/chat/route.ts. Pair it with the AI SDK’s useChat hook on the client side and you have a full streaming chat UI with live onchain data in under 50 lines total.
Two live examples
All expressions below were validated against the live chain before publication.
USDC balance on Ethereum
Prompt: “What is the USDC balance of 0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045 on Ethereum?”
The model emits this tool call:
```json
{
  "chain": "evm_ethereum",
  "contracts": { "usdc": "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48" },
  "expression": "formatUnits(usdc.balanceOf(wallet), usdc.decimals())",
  "context": { "wallet": "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045" }
}
```
Validated result: 5,567.40 USDC at block 24,957,162.
The formatUnits helper reads the contract’s own decimals() return value, so the scaling is always correct regardless of whether the token uses 6, 8, or 18 decimals.
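Under the hood, that scaling is plain fixed-point division. Here is a minimal sketch of what a `formatUnits`-style helper does, as an illustration of the behavior rather than evmquery's actual implementation:

```typescript
// Illustration: scale a raw integer token amount by its decimals.
// USDC stores 6-decimal integers, WETH stores 18-decimal integers.
function formatUnits(raw: bigint, decimals: number): string {
  const base = 10n ** BigInt(decimals);
  const whole = raw / base;
  const frac = (raw % base)
    .toString()
    .padStart(decimals, "0")
    .replace(/0+$/, ""); // drop trailing zeros
  return frac.length > 0 ? `${whole}.${frac}` : whole.toString();
}

// 5,567,400,000 raw units of a 6-decimal token → "5567.4"
```

Because the tool expression reads `decimals()` from the contract at query time, this division always uses the right base, with no hard-coded constants in your application.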
WETH balance on Base
Prompt: “Check the WETH balance of 0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045 on Base.”
```json
{
  "chain": "evm_base",
  "contracts": { "weth": "0x4200000000000000000000000000000000000006" },
  "expression": "formatUnits(weth.balanceOf(wallet), weth.decimals())",
  "context": { "wallet": "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045" }
}
```
Validated result: 0.0628 WETH at block 45,166,293.
No ABI file. No RPC endpoint to configure. No decimal scaling to hard-code. The expression handles all of that.
Struct results and DeFi positions
The expression language returns structured values, not just scalars. Aave’s getUserAccountData method returns a six-field struct:
```json
{
  "chain": "evm_ethereum",
  "contracts": { "aave": "0x87870Bca3F3fD6335C3F4ce8392D69350B4fA4E2" },
  "expression": "aave.getUserAccountData(wallet)",
  "context": { "wallet": "0x..." }
}
```
The response includes totalCollateralBase, totalDebtBase, availableBorrowsBase, currentLiquidationThreshold, ltv, and healthFactor, all decoded and returned as a JSON object. The model receives the full struct and can surface the health factor, flag liquidation risk, or compute a collateral ratio without any additional parsing on your side.
For wallets with no active Aave position, healthFactor returns the maximum uint256 value, which represents no debt (effectively infinite health). The model will interpret this correctly from the context you provide in the system prompt.
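If you prefer to normalize that sentinel in code rather than relying on the system prompt, a small helper works. This is a sketch under the assumption that healthFactor arrives as an 18-decimal integer (Aave v3 reports it wad-scaled) and that the no-debt sentinel is the standard maximum uint256 value:

```typescript
// type(uint256).max is Aave's sentinel for "no debt". Assuming an
// 18-decimal (wad) healthFactor, normalize the raw value for display.
const MAX_UINT256 = 2n ** 256n - 1n;

function describeHealthFactor(raw: bigint): string {
  if (raw === MAX_UINT256) return "no debt (health factor not applicable)";
  const scaled = Number(raw) / 1e18; // Number precision is fine for display
  return scaled < 1 ? `${scaled.toFixed(2)} (liquidatable)` : scaled.toFixed(2);
}
```

Doing this before the result reaches the model removes any chance of it misreading a 78-digit integer as a real health factor.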
Extending the tool for integer context
The current implementation maps all context variables to sol_address. If you need integer parameters, for example checking whether a wallet’s balance exceeds a threshold, extend the schema with a contextTypes field:
```typescript
parameters: z.object({
  // ... existing fields ...
  contextTypes: z
    .record(z.enum(["sol_address", "sol_int", "bool"]))
    .optional()
    .describe("Override types for context variables. Default is sol_address."),
}),
```

Then, in `execute`, resolve each variable’s type from the override map instead of hard-coding it:

```typescript
execute: async ({ chain, contracts, expression, context, contextTypes: overrides }) => {
  // ...
  const contextTypes = Object.fromEntries(
    Object.keys(context ?? {}).map((k) => [k, overrides?.[k] ?? "sol_address"])
  );
  // ...
},
```
This gives the model control over how variables are typed when the prompt involves numeric thresholds or boolean flags.
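With that extension in place, a threshold question could produce a tool call along these lines. This is a hypothetical example; the exact expression the model writes, and whether the comparison syntax shown here matches evmquery's CEL dialect, will vary:

```json
{
  "chain": "evm_ethereum",
  "contracts": { "usdc": "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48" },
  "expression": "usdc.balanceOf(wallet) > threshold",
  "context": {
    "wallet": "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045",
    "threshold": "1000000000"
  },
  "contextTypes": { "wallet": "sol_address", "threshold": "sol_int" }
}
```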
Developers and AI builders
If you are building AI tooling for DeFi or onchain apps, the developer resources page covers the evmquery REST API in full, including multi-wallet batch macros, list filtering, and expression examples for common patterns. If you are focused on AI agent workflows specifically, the AI users page covers both the REST tool approach above and the MCP surface.
REST tool vs MCP: picking the right surface
| | REST tool (this post) | MCP server |
|---|---|---|
| Use case | Custom apps, backend agents, programmatic access | Claude Desktop, Cursor, VS Code, any MCP client |
| Setup | Add tool to your AI SDK handler | Paste one config block into your client |
| Control | Full: schema, error handling, logging | Client manages the conversation |
| Code required | ~50 lines | Zero |
If you are building a product, the REST tool gives you full control over the schema, error messages, and how results are formatted before the model sees them. If you want to query the chain interactively from your IDE today, the evmquery MCP server guide gets you there in under five minutes.
Next steps
- Set up the evmquery MCP server in Claude Desktop and Cursor (no code required)
- Monitor Aave health factors and ERC-20 balances with a Python polling script
- Read EVM contract data from Python without ABI files
- Browse the evmquery REST API docs for multi-wallet batch macros, list filtering, and expression reference