MCP · Model Context Protocol
Bring your Autonodal intelligence into Claude, ChatGPT, and Cursor.
Don't come to Autonodal to ask questions. Ask them wherever you already work. Your network, your signals, your huddles — delivered by the AI assistant you already use.
Generate a connection →
What this gives you
A read-only tool surface on top of your Autonodal sandbox, exposed via MCP to any compatible AI client. Claude can now answer questions about your network, your pipeline, your intelligence — using your actual data, in your actual sandbox, with tenant isolation architecturally enforced.
Supported clients
- Claude Desktop (SSE transport)
- Any MCP client (generic config)
What you can do with it
Five tools available in v1, all read-only. The LLM decides which to call based on your question.
Search your network
"Who in my network runs finance at a venture-backed fintech in APAC?"
Get a person's full dossier
"Tell me everything you know about Sarah Chen — her signals, our history, and who introduced us."
Find a warm path
"I need to reach the CTO at Acme — who's my best path? Scope it to my Atomic BD huddle."
Surface signals that matter
"What's happened in my feed in the last 48 hours that's in the peak mandate window?"
Orient around your huddles
"What huddles am I in and what's my contribution to each?"
Setup — under 2 minutes
1. Generate your connection
Open Settings → MCP Connections. Pick your client. Click Generate. You'll get a config snippet shown once.
Important: The token is shown exactly once at creation. Copy it into your client config immediately, or save it somewhere safe. You can always revoke and regenerate if needed.
2. Paste into your client
Claude Desktop
- Open Claude Desktop
- Go to Settings → Developer → Edit Config
- Paste the JSON into the mcpServers object
- Restart Claude Desktop
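The snippet you paste is generated for your specific connection, so the exact values will differ. As an illustration only (the URL, server key, and token below are placeholders, not real Autonodal endpoints), the shape of a remote-server entry typically looks like:

```json
{
  "mcpServers": {
    "autonodal": {
      "url": "https://mcp.autonodal.example/sse",
      "headers": {
        "Authorization": "Bearer <YOUR_TOKEN_HERE>"
      }
    }
  }
}
```

Always use the exact snippet from Settings → MCP Connections; this example only shows where it lands inside the mcpServers object.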
ChatGPT
- Open ChatGPT → Settings → Connectors
- Add a new MCP server with the provided URL and Authorization header
- Approve the tools you want enabled
Cursor
- Open Cursor → Settings → MCP
- Paste the config JSON
- Restart Cursor
3. Try it
Ask your AI assistant:
"What can you do with my Autonodal connection?"
It will list the five tools and explain what each one does. Then ask a real question — "find me a warm path to a CFO at a Series B fintech" — and watch the assistant work.
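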
Privacy & isolation
Your data stays yours
Your network, signals, and huddles live in your tenant sandbox. The AI queries it on your behalf — nothing is uploaded or shared.
Tenant isolation enforced
Every query runs with your tenant context. Postgres RLS policies, token-bound tenant resolution, and Qdrant filters all enforce it independently.
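The point of layering is that each filter is sufficient on its own. A minimal sketch of that idea (hypothetical names and mock data, not Autonodal's actual schema — the real enforcement lives in Postgres RLS policies and Qdrant payload filters):

```python
# Layered tenant isolation, sketched: the token resolves to a tenant,
# and every storage layer filters by that tenant independently, so a
# bug in one layer still cannot leak another tenant's rows.

ROWS = [
    {"tenant_id": "t1", "name": "Sarah Chen"},
    {"tenant_id": "t2", "name": "Someone Else"},
]

TOKENS = {"tok-abc": "t1"}  # token-bound tenant resolution


def resolve_tenant(token: str) -> str:
    # Unknown tokens raise KeyError; the request is rejected outright.
    return TOKENS[token]


def rls_filter(rows, tenant_id):
    """Stand-in for a Postgres RLS policy: WHERE tenant_id = current tenant."""
    return [r for r in rows if r["tenant_id"] == tenant_id]


def vector_filter(hits, tenant_id):
    """Stand-in for a Qdrant payload filter applied to every vector search."""
    return [h for h in hits if h["tenant_id"] == tenant_id]


tenant = resolve_tenant("tok-abc")
visible = vector_filter(rls_filter(ROWS, tenant), tenant)
print([r["name"] for r in visible])
```

Even if one filter were removed by mistake, the other still restricts results to the caller's tenant — that redundancy is what "enforced independently" means above.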
No model training
Your data doesn't train the assistant's model. Anthropic and OpenAI treat MCP tool responses as ephemeral context.
Revoke anytime
Delete a connection in settings and it dies on the next request. No caching, no grace period.
Read-only for now
v1 is deliberately read-only. The AI can search, read, and reason over your data — but it cannot send messages, claim signals, or modify anything. When you want to act, you do it in Autonodal.
Write tools (stage an intro, claim a signal, log an interaction) are coming in a later phase with explicit per-action confirmation flows.
Tools available in v1
autonodal_search_network — semantic search across your people graph
autonodal_get_person — full dossier including scores, history, introduction provenance
autonodal_find_warm_path — best intro path to a target (optionally huddle-scoped)
autonodal_get_recent_signals — signals with calibrated timing phases
autonodal_list_huddles — your active huddles with contribution summaries
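Because the surface is a closed allowlist of five read-only tools, anything outside it is rejected before any handler runs. A hypothetical sketch of that dispatch (mock data and made-up handler bodies, not the real implementation):

```python
# Read-only tool registry, sketched: only the five v1 tool names are
# callable; any other name (including would-be write tools) is refused.

PEOPLE = {"sarah-chen": {"name": "Sarah Chen", "role": "CFO"}}

TOOLS = {
    "autonodal_search_network": lambda args: [
        p for p in PEOPLE.values() if args["query"].lower() in p["role"].lower()
    ],
    "autonodal_get_person": lambda args: PEOPLE[args["person_id"]],
    "autonodal_find_warm_path": lambda args: {"target": args["target"], "hops": []},
    "autonodal_get_recent_signals": lambda args: [],
    "autonodal_list_huddles": lambda args: [],
}


def call_tool(name: str, args: dict):
    if name not in TOOLS:
        raise PermissionError(f"unknown or non-read-only tool: {name}")
    return TOOLS[name](args)
```

The LLM picks a tool name and arguments from your question; the server only has to check membership in the allowlist to guarantee nothing can be modified.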
Rate limits
- 100 calls per minute per token (burst)
- 5,000 calls per day per token
- 100,000 calls per month per tenant
If you hit a limit, the assistant will see a polite error and you can retry.
Next steps
Generate your connection →