You open Claude Desktop. You ask: "How did my AI visibility move this week, and which sources should I pitch?" Claude pulls the data from Mentionable through MCP, looks at your tracked prompts, sees which domains cite your top two competitors but not you, ranks them by impact score and marketplace price, and hands you a four-line outreach shortlist.
No tab switching. No copy-pasting dashboard screenshots into a chat. No "let me check that in Mentionable and come back". Your AI has the data, because the data goes where AI lives.
What is MCP and why is it in a GEO product
The Model Context Protocol is an open standard introduced by Anthropic in late 2024 that lets AI agents call external tools and data through a common interface. Clients such as Claude Desktop, Cursor, Claude Code, Zed, and ChatGPT with custom connectors implement one side of the protocol. A vendor (Mentionable, in this case) implements a server that exposes tools and resources. The agent discovers the tools, invokes them when relevant to the user's request, reads the response, and incorporates the data into its answer.
For an AI visibility tracker, this is not a nice-to-have. It is the natural distribution channel.
Most marketing tools built in the last decade assumed the dashboard was the product. You pay, you log in, you read charts, you export CSVs. That model made sense when marketers read reports in the morning and made decisions in a UI.
The model does not make sense when the marketer's daily tool is an AI agent. A consultant drafting a blog post inside Cursor, an agency operator using Claude Desktop to run five client projects, a founder asking ChatGPT to summarize their weekly metrics: none of them want to leave their AI to check a dashboard. They want the dashboard's answer inside the AI.
MCP is the protocol that makes this work. Mentionable's server makes it real.
The seven tools, and what they unlock
The MCP server exposes seven tools at launch, aligned with the highest-signal data Mentionable already tracks.
list_projects
Your workspace's projects, with tenant scoping applied automatically. The entry point for any agent workflow. Every other tool takes a projectId, so this is the first call the agent makes when you ask "what is happening across my brands".
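Under the hood, every tool invocation travels as an MCP `tools/call` request over JSON-RPC 2.0, as defined by the MCP specification. A minimal sketch of the envelope a client builds for that first call (the argument shape here is illustrative, not Mentionable's documented schema):

```python
import json

# Build the JSON-RPC 2.0 envelope an MCP client sends to invoke a tool.
# The "tools/call" method and params shape come from the MCP spec;
# the arguments dict is whatever schema the tool declares.
def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(payload)

# The agent's first call when you ask "what is happening across my brands":
first_call = build_tool_call(1, "list_projects", {})
print(first_call)
```

Subsequent calls reuse the same envelope with a `projectId` inside `arguments`.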
list_prompts
Every tracked prompt for a project, with its full stats: total runs, mention rate, visibility per LLM (ChatGPT, Perplexity, Gemini, Grok, Copilot, Google AI Mode, Google AI Overview), latest result per LLM with brand position and sentiment. An agent can answer "which prompts am I losing visibility on this month" in one call.
list_llm_sources
The most valuable tool for content strategy. Returns every domain that appeared in LLM responses for your tracked prompts, broken down into three types of appearance:
- Visible citations: the LLM cited this domain in its answer, the user saw the link
- Hidden citations: the LLM read this domain in its context but did not cite it in the final answer (available on LLMs that expose this signal, typically Perplexity and Gemini)
- Fan-out searches: the LLM ran a background search and found this domain in the results, whether it cited it or not
The fan-out data is a GEO goldmine. It tells you which domains the LLM considers when answering prompts in your niche, even if no citation was ever shown to the user. Those domains are the real influence map. An agent can ask the tool for domains with high fan-out count but low visible-citation count, and you immediately see the sites LLMs consult but do not credit: prime candidates for content collaboration or backlink outreach.
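The filter the agent applies is simple set logic. A sketch of it in Python, where the field names (`domain`, `fanOutCount`, `visibleCitations`) are assumptions for illustration, not the documented response schema:

```python
# Hypothetical sketch: given list_llm_sources output, surface domains the
# LLMs consult often but rarely credit. Field names are illustrative.
def uncredited_influencers(sources, min_fan_out=5, max_visible=1):
    hits = [
        s for s in sources
        if s["fanOutCount"] >= min_fan_out and s["visibleCitations"] <= max_visible
    ]
    return sorted(hits, key=lambda s: s["fanOutCount"], reverse=True)

sources = [
    {"domain": "reviewsite.com", "fanOutCount": 14, "visibleCitations": 0},
    {"domain": "bigmedia.com", "fanOutCount": 3, "visibleCitations": 9},
    {"domain": "nicheblog.io", "fanOutCount": 8, "visibleCitations": 1},
]
print([s["domain"] for s in uncredited_influencers(sources)])
# → ['reviewsite.com', 'nicheblog.io']
```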
list_backlink_opportunities
Domains where placing a backlink could lift your GEO visibility, ranked by a composite impact score that weights citation count, fan-out frequency, number of distinct LLMs that cite the domain, and number of your tracked prompts where the domain appears. Each domain comes with its current marketplace offers (price, provider) from Mentionable's aggregated backlink data. An agent can answer "what is my best backlink opportunity under €500" with a single tool call and return a sorted shortlist, not a wall of noise.
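To make the ranking concrete, here is a sketch of a composite score over the four signals the tool weights. The article names the signals; the weights and field names below are invented for illustration and are not Mentionable's actual formula:

```python
# Illustrative only: the four inputs match the signals described above,
# but the weights are made up for this sketch.
WEIGHTS = {"citations": 0.4, "fan_out": 0.3, "llm_count": 0.2, "prompt_count": 0.1}

def impact_score(d: dict) -> float:
    return (WEIGHTS["citations"] * d["citations"]
            + WEIGHTS["fan_out"] * d["fan_out"]
            + WEIGHTS["llm_count"] * d["llm_count"]
            + WEIGHTS["prompt_count"] * d["prompt_count"])

def best_under_budget(domains, budget_eur):
    # "Best backlink opportunity under €500" as filter + sort.
    affordable = [d for d in domains if d["best_offer_eur"] <= budget_eur]
    return sorted(affordable, key=impact_score, reverse=True)

domains = [
    {"domain": "a.com", "citations": 10, "fan_out": 6, "llm_count": 4,
     "prompt_count": 5, "best_offer_eur": 450},
    {"domain": "b.com", "citations": 20, "fan_out": 9, "llm_count": 5,
     "prompt_count": 7, "best_offer_eur": 900},
]
print([d["domain"] for d in best_under_budget(domains, 500)])
# → ['a.com']
```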
list_competitors
The competitors tracked on a project, with aggregated mention totals, Share of Voice, per-LLM presence, and status (confirmed, suggested, rejected). The foundation for competitive intelligence conversations with your AI.
list_competitor_sources
The single most useful outreach endpoint. Given a specific competitor, the tool returns every domain where that competitor gets cited, with mention count, LLMs, top URLs, and sample context from the LLM response. If your competitor is cited 12 times on a specific review site and you are not, that site is a direct outreach target. An agent can prepare a full competitive outreach list in one prompt.
bulk_update_competitor_status
The write tool. Accepts up to 50 competitor updates in a single call, each set to CONFIRMED, SUGGESTED (pending), or REJECTED. An agent can run the common operational workflow entirely from chat: "list the 12 competitor suggestions detected last week, keep these 8 as real competitors, reject these 4 as noise". One call, atomic results, per-item error handling when an ID is invalid.
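Client-side, the 50-item cap means larger update sets get chunked before calling the tool. A sketch of that batching, with the payload field names (`competitorId`, `status`) assumed for illustration:

```python
# Split a large set of status updates into batches the tool accepts.
# MAX_BATCH mirrors the 50-updates-per-call limit described above.
MAX_BATCH = 50

def chunk_updates(updates, size=MAX_BATCH):
    return [updates[i:i + size] for i in range(0, len(updates), size)]

updates = [{"competitorId": f"c{i}", "status": "CONFIRMED"} for i in range(120)]
batches = chunk_updates(updates)
print([len(b) for b in batches])
# → [50, 50, 20]
```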
Real workflows, not demos
The value of MCP shows up in specific workflows that collapse multi-step manual work into a single conversation.
Content briefs from AI citation data
You ask Claude: "Write a content brief for the prompt 'best gym management software'. Use the top 5 cited sources as reference material and explain why each ranks."
Claude calls list_llm_sources filtered on that prompt, gets the top 5 domains with their fan-out queries and sample URLs, reads the titles and seenAs classification, and drafts a brief that structurally mirrors what the LLMs already value. Your brief is not speculation. It is derived from real citation signals.
Outreach lists without spreadsheets
You ask ChatGPT: "Find me 15 outreach targets for this month. I want domains that cite at least two of my three top competitors but have never cited us."
ChatGPT calls list_competitor_sources on each of your three top competitors, intersects the domain lists, compares against list_llm_sources for your own citation footprint, removes domains where you already appear, and returns a ranked list with the competitor citation counts and your current absence score. A manual version of this workflow takes 90 minutes of spreadsheet work.
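The intersect-and-subtract step the agent performs is plain set arithmetic. A minimal sketch, with domain lists standing in for the tool responses:

```python
# Domains citing at least min_overlap of your top competitors,
# minus domains that already cite you. Data shapes are illustrative.
def outreach_targets(competitor_sources: dict, own_sources: set, min_overlap=2):
    counts = {}
    for domains in competitor_sources.values():
        for d in set(domains):
            counts[d] = counts.get(d, 0) + 1
    return sorted(
        d for d, n in counts.items() if n >= min_overlap and d not in own_sources
    )

competitor_sources = {
    "rivalA": ["g2.com", "capterra.com", "nicheblog.io"],
    "rivalB": ["g2.com", "capterra.com"],
    "rivalC": ["g2.com", "forum.example"],
}
own = {"capterra.com"}
print(outreach_targets(competitor_sources, own))
# → ['g2.com']
```

Ranking the result by competitor citation counts, as the agent does, is one more sort over the same `counts` map.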
Operational triage in one conversation
You ask Claude Desktop: "Show me the competitor suggestions detected in the last 14 days. I will review each and confirm or reject."
Claude calls list_competitors filtered to recently created SUGGESTED entries, presents them with context and mention counts, you approve or reject each from chat, and Claude calls bulk_update_competitor_status once with the full batch. Triage that used to eat 20 minutes of clicking in the app, done in a two-minute conversation.
Security model you can reason about
A single principle: an API key can never grant access beyond what the user already has.
Keys are scoped to your TenantMember (your user-tenant pair). If you belong to two workspaces, each workspace has its own set of your keys. You create a key in the workspace settings under Integrations, give it a name, optionally restrict it to specific projects within your scope, copy the plaintext once (shown a single time), and configure your AI client.
On every call, the server resolves the effective permissions by intersecting your current membership permissions with the key's optional restrictions. If you later get removed from the workspace, your keys stop working immediately on the next call. If your role gets downgraded to read-only (customer), your keys immediately start rejecting write operations. Nothing is cached, no stale permission grants.
Plaintext is never stored. The server stores a SHA-256 hash of the key and a visible prefix for identification in the UI. Revocation is one click and takes effect on the next call.
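The storage and permission model described above can be sketched in a few lines. This is an illustration of the described behavior, not Mentionable's server code, and every name in it is assumed:

```python
import hashlib

# Only a SHA-256 hash plus a short display prefix are persisted;
# the plaintext key is shown once and never stored.
def store_key(plaintext: str) -> dict:
    return {
        "hash": hashlib.sha256(plaintext.encode()).hexdigest(),
        "prefix": plaintext[:10],  # shown in the UI for identification
    }

# Effective permissions are recomputed on every call: the intersection
# of current membership permissions and the key's optional restrictions.
def effective_permissions(membership, key_restrictions=None):
    if key_restrictions is None:
        return membership  # unrestricted key: whatever the membership allows
    return membership & key_restrictions

record = store_key("mnt_sk_example_key")
print(effective_permissions({"read", "write"}, {"read"}))
# → {'read'}
```

Because the membership side of the intersection is looked up live, revoking membership or downgrading a role takes effect on the very next call, with no cached grants to expire.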
Rate limits are applied per key using a sliding window: 100 requests per minute by default, generous enough for normal AI agent traffic.
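A sliding window, unlike a fixed window, counts requests over the trailing 60 seconds rather than per clock minute. A minimal sketch of the idea, sized down to 3 requests for readability (not the server's actual implementation):

```python
import time
from collections import deque

# Sliding-window limiter: allow at most `limit` requests in any
# trailing `window_seconds` interval for one key.
class SlidingWindowLimiter:
    def __init__(self, limit=100, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.calls = deque()  # timestamps of accepted requests

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Evict timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=3, window_seconds=60)
print([limiter.allow(now=t) for t in (0, 1, 2, 3, 61)])
# → [True, True, True, False, True]
```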
Setup in under 60 seconds
For Claude Desktop, add the server to your configuration:
```json
{
  "mcpServers": {
    "mentionable": {
      "url": "https://mentionable.ai/api/mcp",
      "headers": {
        "Authorization": "Bearer mnt_sk_YOUR_KEY"
      }
    }
  }
}
```
For Cursor, the format is similar. For Claude Code, use the CLI to register the server. For ChatGPT, add Mentionable as a custom connector using the same URL and Bearer token.
Create a key in Mentionable → Settings → Integrations → API Keys, paste it into your client config, restart the client, and your next Claude conversation can call your Mentionable data.
Why no other AI visibility tracker offers this
The short answer: because building an MCP server correctly means committing to a stable tool contract, proper per-user authentication, membership-aware permissions, and a data model worth querying. Most AI visibility tools are dashboard shells on top of scraping. They do not have normalized, API-friendly data underneath.
Mentionable has had a proper multi-tenant data model and auth layer from day one, because the app itself is multi-user. Adding MCP was a natural extension, not a retrofit.
Otterly, Peec AI, and Profound ship dashboards. Mentionable ships an AI-native data platform that also has a dashboard.
The bigger bet
The tools that survive the LLM transition are the ones that stop assuming the human will open their UI. The product is the data. The UI is one interface. MCP is another. Tomorrow there will be more.
If you run marketing in 2026 and your AI assistant cannot access your visibility data without you copy-pasting, you are paying for the wrong tool.
Try it
Start a free trial, create an API key from Settings → Integrations, paste the Bearer token into Claude Desktop, and ask your AI your first real question about your visibility.
Related articles
- Competitor Tracking - the underlying tracking that makes the MCP competitor tools meaningful.
- Source & Citation Tracking - the data behind list_llm_sources.
- Outreach Opportunities - the in-app version of the backlink workflow that the MCP also exposes.