MCP Server
Connect AI agents to Spider through the Model Context Protocol. Crawl, scrape, search, and extract web data from Claude, Cursor, Windsurf, and any MCP-compatible client.
Connect
Point your MCP client at https://mcp.spider.cloud/mcp with your API key as a Bearer token. No install needed. Or run locally via npx spider-cloud-mcp if you prefer.
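Client configuration formats vary, but most MCP clients accept one JSON entry per server. A minimal sketch, assuming the common mcpServers/url/headers layout (the exact key names may differ by client and are not specified by this page):

```python
import json

# Sketch of a typical MCP client config entry for the hosted server.
# "mcpServers", "url", and "headers" follow common client conventions
# and are assumptions here; check your client's docs for its schema.
config = {
    "mcpServers": {
        "spider": {
            "url": "https://mcp.spider.cloud/mcp",
            "headers": {"Authorization": "Bearer YOUR_SPIDER_API_KEY"},
        }
    }
}

print(json.dumps(config, indent=2))
```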
Available Tools
The MCP server exposes 22 tools across three categories.
Core Tools
- spider_crawl: Crawl a website and extract content from multiple pages
- spider_scrape: Scrape a single page and extract its content
- spider_search: Search the web and optionally crawl results
- spider_links: Extract all links from a page
- spider_screenshot: Capture page screenshots
- spider_unblocker: Access blocked content with anti-bot bypass
- spider_transform: Transform HTML to markdown or text
- spider_get_credits: Check your API credit balance
AI Tools (Subscription Required)
- spider_ai_crawl: AI-guided crawling using natural language prompts
- spider_ai_scrape: AI-powered structured data extraction
- spider_ai_search: AI-enhanced semantic web search
- spider_ai_browser: AI-powered browser automation
- spider_ai_links: AI-powered intelligent link extraction
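Under the hood, an MCP client invokes any of these tools with a JSON-RPC tools/call request. A minimal sketch of a spider_ai_crawl invocation; the argument names (url, prompt) are illustrative assumptions, not the documented schema:

```python
import json

# Hypothetical MCP "tools/call" request for spider_ai_crawl.
# The "url" and "prompt" argument names are assumptions for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "spider_ai_crawl",
        "arguments": {
            "url": "https://example.com",
            "prompt": "Find every pricing page and summarize the plans",
        },
    },
}

print(json.dumps(request, indent=2))
```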
Browser Tools
- spider_browser_open: Open a remote browser session with anti-bot protection
- spider_browser_navigate: Navigate to a URL and wait for load
- spider_browser_click: Click an element by CSS selector
- spider_browser_fill: Fill a form field with text
- spider_browser_screenshot: Capture a screenshot of the current page
- spider_browser_content: Get page HTML or visible text
- spider_browser_evaluate: Execute JavaScript in the page context
- spider_browser_wait_for: Wait for a selector, navigation, or network idle
- spider_browser_close: Close session and release resources
Usage Examples
Once configured, your AI agent calls Spider tools directly. Here are common workflows.
Scraping and crawling
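As a sketch, an agent might issue tool calls like the following. The parameter names mirror the Spider API (url, return_format, limit), but the exact argument schemas are assumptions:

```python
# Illustrative tool-call payloads an agent might send via MCP.
# Parameter names follow the Spider API; exact schemas are assumptions.
scrape_call = {
    "name": "spider_scrape",
    "arguments": {"url": "https://example.com", "return_format": "markdown"},
}

crawl_call = {
    "name": "spider_crawl",
    "arguments": {
        "url": "https://example.com",
        "limit": 10,                  # cap the number of pages crawled
        "return_format": "markdown",
    },
}
```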
Browser automation workflow
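A hypothetical login-and-screenshot sequence following the session lifecycle described below. The tool names come from the list above; the argument details (such as the field name used by fill) are assumptions:

```python
# Sketch of a browser automation sequence. The session_id placeholder
# would be the value returned by spider_browser_open; the "text" key
# on the fill step is an assumed argument name.
steps = [
    ("spider_browser_open", {"browser": "auto", "stealth": 1}),
    ("spider_browser_navigate", {"session_id": "SESSION_ID", "url": "https://example.com/login"}),
    ("spider_browser_fill", {"session_id": "SESSION_ID", "selector": "#username", "text": "alice"}),
    ("spider_browser_click", {"session_id": "SESSION_ID", "selector": "button.submit"}),
    ("spider_browser_wait_for", {"session_id": "SESSION_ID", "selector": ".dashboard"}),
    ("spider_browser_screenshot", {"session_id": "SESSION_ID"}),
    ("spider_browser_close", {"session_id": "SESSION_ID"}),
]
```

Note that every call after open carries the session_id, and the sequence ends with spider_browser_close to stop billing.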
Browser Session Lifecycle
Browser tools give your AI full control over a remote cloud browser. Sessions are isolated per user and include anti-bot protection, proxy rotation, and cross-browser support out of the box.
How it works
- Call spider_browser_open to start a session. Returns a session_id.
- Pass that session_id to any browser tool: navigate, click, fill, screenshot, evaluate, wait, or get content.
- Call spider_browser_close when done to stop billing.
Key parameters
- session_id (required on all browser tools except open): Identifies which browser session to control
- selector (click, fill, wait_for): CSS selector targeting an element, e.g. "button.submit", "#login-btn"
- timeout (click, fill, wait_for): Max wait time in ms before failing. Default: 10,000 ms for click/fill, 30,000 ms for wait_for
- expression (evaluate): JavaScript to run in the page context, e.g. "document.title"
- format (content): "html" for the full DOM or "text" for visible text only
- browser (open): "auto", "chrome", "chrome-new", or "firefox"
- stealth (open): 0 (auto), 1 (standard), 2 (residential proxy), 3 (premium proxy)
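For example, a click call using these parameters might look like this sketch, with the default timeout written out explicitly:

```python
# Arguments for a spider_browser_click call; the session_id placeholder
# stands in for a real session identifier returned by spider_browser_open.
click_args = {
    "session_id": "SESSION_ID",
    "selector": "#login-btn",
    "timeout": 10_000,  # ms; this is the click/fill default, wait_for defaults to 30,000 ms
}
```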
Limits
- Up to 5 concurrent sessions per MCP connection
- Sessions auto-close after 5 minutes of inactivity
- Always close sessions when finished to avoid unnecessary charges
Parameters
Core and AI tools accept the same parameters as the corresponding Spider API endpoint. Common parameters include url, return_format, limit, depth, proxy_enabled, and request. See the API reference for the full list.
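A sketch of a typical spider_crawl argument set built from these common parameters; the values are illustrative and the full set is defined by the API reference:

```python
# Common Spider API parameters, per the list above; values are illustrative.
crawl_params = {
    "url": "https://example.com",
    "return_format": "markdown",
    "limit": 25,            # maximum number of pages to crawl
    "depth": 2,             # link depth from the start URL
    "proxy_enabled": True,  # route requests through Spider's proxies
}
```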
Hosted Server
Connect remotely without running anything locally. Point any MCP client at https://mcp.spider.cloud/mcp using Streamable HTTP transport with your API key as a Bearer token. No Node.js required. Same 22 tools, same billing.
Add hosted Spider MCP server to Claude Code
npm Package
The server is published as spider-cloud-mcp on npm. Source code is available on GitHub.