
MCP Server

Connect AI agents to Spider through the Model Context Protocol. Crawl, scrape, search, and extract web data from Claude, Cursor, Windsurf, and any MCP-compatible client.

Connect

Point your MCP client at https://mcp.spider.cloud/mcp with your API key as a Bearer token. No install needed. Or run locally via npx spider-cloud-mcp if you prefer.

Hosted server (recommended)

claude mcp add spider \
  --transport http https://mcp.spider.cloud/mcp \
  -H "Authorization: Bearer your-api-key"
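Under the hood, the hosted endpoint speaks standard MCP JSON-RPC over Streamable HTTP; your client handles the handshake for you. As a rough sketch of what the first request carries (the protocol version and client name below are illustrative, and `your-api-key` is a placeholder):

```python
import json

API_KEY = "your-api-key"  # placeholder: substitute your Spider API key

# Headers every request to the hosted endpoint carries.
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# Standard MCP initialize request (JSON-RPC 2.0), sent first by the client.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # illustrative MCP spec revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

body = json.dumps(initialize)
```

This is the generic MCP wire format, not anything Spider-specific; the only Spider-specific piece is the Bearer token in the `Authorization` header.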

Available Tools

The MCP server exposes 22 tools across three categories.

Core Tools

  • spider_crawl: Crawl a website and extract content from multiple pages
  • spider_scrape: Scrape a single page and extract its content
  • spider_search: Search the web and optionally crawl results
  • spider_links: Extract all links from a page
  • spider_screenshot: Capture page screenshots
  • spider_unblocker: Access blocked content with anti-bot bypass
  • spider_transform: Transform HTML to markdown or text
  • spider_get_credits: Check your API credit balance
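An MCP client invokes any of these through the standard `tools/call` method. A minimal sketch of the JSON-RPC payload for `spider_scrape` (the argument names `url` and `return_format` come from the parameter list later in this page; the wrapper is standard MCP, not Spider-specific):

```python
import json

# Standard MCP tools/call request invoking the spider_scrape tool.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "spider_scrape",
        "arguments": {
            "url": "https://spider.cloud",
            "return_format": "markdown",
        },
    },
}

wire_body = json.dumps(call)
```

Swapping `"name"` for any other tool listed here, with its own arguments, follows the same shape.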

AI Tools (Subscription Required)

  • spider_ai_crawl: AI-guided crawling using natural language prompts
  • spider_ai_scrape: AI-powered structured data extraction
  • spider_ai_search: AI-enhanced semantic web search
  • spider_ai_browser: AI-powered browser automation
  • spider_ai_links: AI-powered intelligent link extraction

Browser Tools

  • spider_browser_open: Open a remote browser session with anti-bot protection
  • spider_browser_navigate: Navigate to a URL and wait for load
  • spider_browser_click: Click an element by CSS selector
  • spider_browser_fill: Fill a form field with text
  • spider_browser_screenshot: Capture a screenshot of the current page
  • spider_browser_content: Get page HTML or visible text
  • spider_browser_evaluate: Execute JavaScript in the page context
  • spider_browser_wait_for: Wait for a selector, navigation, or network idle
  • spider_browser_close: Close session and release resources

Usage Examples

Once configured, your AI agent calls Spider tools directly. Here are common workflows.

Scraping and crawling

User: "Scrape spider.cloud and give me the pricing details"

Agent uses spider_scrape with:
  url: "https://spider.cloud"
  return_format: "markdown"

User: "Crawl the docs and find all API endpoints"

Agent uses spider_crawl with:
  url: "https://spider.cloud/docs"
  limit: 50
  return_format: "markdown"

Browser automation workflow

User: "Log into my dashboard and screenshot the analytics page"

Agent uses spider_browser_open
  → returns session_id: "abc-123"
Agent uses spider_browser_navigate with:
  session_id: "abc-123"
  url: "https://app.example.com/login"
Agent uses spider_browser_fill with:
  session_id: "abc-123"
  selector: "input[name='email']"
  value: "user@example.com"
Agent uses spider_browser_click with:
  session_id: "abc-123"
  selector: "button[type='submit']"
Agent uses spider_browser_wait_for with:
  session_id: "abc-123"
  selector: ".dashboard-loaded"
Agent uses spider_browser_navigate with:
  session_id: "abc-123"
  url: "https://app.example.com/analytics"
Agent uses spider_browser_screenshot with:
  session_id: "abc-123"
  → returns base64 PNG image
Agent uses spider_browser_close with:
  session_id: "abc-123"

Browser Session Lifecycle

Browser tools give your AI full control over a remote cloud browser. Sessions are isolated per user and include anti-bot protection, proxy rotation, and cross-browser support out of the box.

How it works

  1. Call spider_browser_open to start a session. Returns a session_id.
  2. Pass that session_id to any browser tool: navigate, click, fill, screenshot, evaluate, wait, or get content.
  3. Call spider_browser_close when done to stop billing.
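The three steps above can be sketched as a sequence of standard `tools/call` payloads that thread the returned `session_id` through each call (a sketch only; the session id shown is hypothetical, and the real one comes back from the server's `spider_browser_open` result):

```python
def browser_call(tool, req_id, **arguments):
    """Build a standard MCP tools/call payload for one browser tool."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# 1. Open a session; this is the only browser tool with no session_id.
open_req = browser_call("spider_browser_open", 1)
session_id = "abc-123"  # hypothetical value returned by the server

# 2. Every subsequent browser tool takes that session_id.
nav_req = browser_call(
    "spider_browser_navigate", 2,
    session_id=session_id, url="https://app.example.com",
)
shot_req = browser_call("spider_browser_screenshot", 3, session_id=session_id)

# 3. Close the session to stop billing.
close_req = browser_call("spider_browser_close", 4, session_id=session_id)
```
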

Key parameters

  • session_id (required on all browser tools except open): Identifies which browser session to control
  • selector (click, fill, wait_for): CSS selector targeting an element, e.g. "button.submit", "#login-btn"
  • timeout (click, fill, wait_for): Max wait time in ms before failing. Default: 10,000ms for click/fill, 30,000ms for wait_for
  • expression (evaluate): JavaScript to run in the page context, e.g. "document.title"
  • format (content): "html" for full DOM or "text" for visible text only
  • browser (open): "auto", "chrome", "chrome-new", or "firefox"
  • stealth (open): 0 (auto), 1 (standard), 2 (residential proxy), 3 (premium proxy)
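Putting the parameters together, an `open` call that pins the engine and stealth level, and a `wait_for` that overrides the default timeout, might carry arguments like these (values are illustrative; the session id is hypothetical):

```python
# spider_browser_open: choose engine and stealth level up front.
open_args = {
    "browser": "firefox",  # "auto", "chrome", "chrome-new", or "firefox"
    "stealth": 2,          # 2 = residential proxy
}

# spider_browser_wait_for: override the 30,000 ms default timeout.
wait_args = {
    "session_id": "abc-123",         # hypothetical session id
    "selector": ".dashboard-loaded",
    "timeout": 60_000,               # max wait in ms before failing
}
```
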

Limits

  • Up to 5 concurrent sessions per MCP connection
  • Sessions auto-close after 5 minutes of inactivity
  • Always close sessions when finished to avoid unnecessary charges

Parameters

Core and AI tools accept the same parameters as the corresponding Spider API endpoint. Common parameters include url, return_format, limit, depth, proxy_enabled, and request. See the API reference for the full list.
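For example, a `spider_crawl` call can carry the same options you would send to the crawl endpoint of the Spider API. A sketch using the common parameters named above (the `request` value is illustrative; check the API reference for accepted values):

```python
# Arguments for a spider_crawl tool call, mirroring the Spider API options.
crawl_args = {
    "url": "https://spider.cloud/docs",
    "return_format": "markdown",
    "limit": 50,           # max number of pages to crawl
    "depth": 3,            # max link depth from the start URL
    "proxy_enabled": True,
    "request": "smart",    # illustrative value; see the API reference
}
```
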

Hosted Server

Connect remotely without running anything locally. Point any MCP client at https://mcp.spider.cloud/mcp using Streamable HTTP transport with your API key as a Bearer token. No Node.js required. Same 22 tools, same billing.

Add hosted Spider MCP server to Claude Code

claude mcp add spider \
  --transport http https://mcp.spider.cloud/mcp \
  -H "Authorization: Bearer your-api-key"

npm Package

The server is published as spider-cloud-mcp on npm. Source code is available on GitHub.