AI Agents

Crawl, search, extract as agent tools

Spider lets your agents crawl, search, and extract web data through a single API or MCP server. It returns clean markdown that fits in context windows without burning tokens on HTML.

One command to web-enable your AI

Add Spider as an MCP server and your AI tool gets crawling, search, and extraction out of the box. No wrapper code needed.

# Claude Code
claude mcp add spider -- npx -y spider-cloud-mcp
# Or add to Claude Desktop / Cursor config:
{
  "mcpServers": {
    "spider": {
      "command": "npx",
      "args": ["-y", "spider-cloud-mcp"],
      "env": {
        "SPIDER_API_KEY": "your-api-key"
      }
    }
  }
}

Capabilities

CRAWL Core

Read entire websites

Submit a URL, get clean markdown for every page. No HTML parsing. Output is token-efficient and fits directly in context windows.
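A minimal sketch of a crawl request against the REST API. The endpoint path and the limit and return_format parameter names are assumptions based on the description above; check the API reference for the exact fields.

```python
import json

# Hypothetical crawl request body; "limit" and "return_format"
# are assumed parameter names.
payload = {
    "url": "https://docs.example.com",
    "limit": 25,                  # cap the number of pages crawled
    "return_format": "markdown",  # token-efficient output, no HTML
}

# Sending it would look roughly like this (requires the requests
# package and a SPIDER_API_KEY environment variable):
# import os, requests
# resp = requests.post(
#     "https://api.spider.cloud/crawl",
#     headers={"Authorization": f"Bearer {os.environ['SPIDER_API_KEY']}"},
#     json=payload,
# )
# pages = resp.json()

print(json.dumps(payload, indent=2))
```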

SEARCH Real-time

Query the live web

Search the web and optionally crawl results in the same request. Ground agent responses in live data instead of training cutoff knowledge.
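A sketch of a search-plus-crawl request, per the description above. The search and fetch_page_content parameter names are assumptions; consult the API reference for the real field names.

```python
import json

# Hypothetical search request body; "search" and
# "fetch_page_content" are assumed parameter names.
payload = {
    "search": "latest stable Python release",
    "fetch_page_content": True,  # also crawl each result in the same request
    "limit": 5,                  # number of results to return
}

print(json.dumps(payload, indent=2))
```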

EXTRACT AI

Pull structured data

Set extra_ai_data: true with a custom_prompt, and Spider extracts data matching your description. Or use css_extraction_map for CSS/XPath selectors.
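A sketch of the two extraction styles. The extra_ai_data, custom_prompt, and css_extraction_map names come from the description above; the surrounding fields and the selector-map shape are assumptions.

```python
import json

# AI extraction: describe what you want in a custom_prompt.
ai_payload = {
    "url": "https://news.example.com",
    "extra_ai_data": True,
    "custom_prompt": "Extract each article's headline, author, and date.",
}

# Selector-based extraction; the map shape here (path -> named
# selectors) is an assumption -- see the API reference.
css_payload = {
    "url": "https://news.example.com",
    "css_extraction_map": {
        "/": [{"name": "headline", "selectors": ["h1"]}],
    },
}

print(json.dumps(ai_payload, indent=2))
```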

ACTIONS Browser

Interact with pages

The Browser API provides a CDP WebSocket connection for programmatic control: navigate, click, type, scroll, and capture screenshots. Use the MCP spider_ai_browser tool for natural-language browser automation.
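A sketch of driving a session over the CDP WebSocket with Playwright. The endpoint URL and its query parameters are assumptions; consult the Browser API docs for the real connection string.

```python
from urllib.parse import urlencode

def browser_ws_url(api_key: str) -> str:
    """Build a hypothetical CDP WebSocket URL; the real endpoint
    and query parameters may differ -- see the Browser API docs."""
    return "wss://browser.spider.cloud?" + urlencode({"api_key": api_key})

# With Playwright installed, connecting would look roughly like:
# from playwright.sync_api import sync_playwright
# with sync_playwright() as p:
#     browser = p.chromium.connect_over_cdp(browser_ws_url("your-api-key"))
#     page = browser.new_page()
#     page.goto("https://example.com")
#     page.screenshot(path="shot.png")

print(browser_ws_url("your-api-key"))
```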

MCP Protocol

Native tool for AI IDEs

Install the MCP server and Claude, Cursor, or Windsurf can call Spider directly. One-command setup, zero config files to maintain.

FRAMEWORKS SDK

Drop-in agent tools

Published integrations for LangChain (document loader) and CrewAI (SpiderTool). The REST API and Python/Node SDKs work with any framework.

Framework integrations

CrewAI Python
from crewai import Agent
from crewai_tools import SpiderTool

spider = SpiderTool()

agent = Agent(
    role="Research Analyst",
    goal="Find and summarize data",
    backstory="An analyst who grounds answers in live web data.",
    tools=[spider],
)
LangChain Python
from langchain_community.document_loaders import SpiderLoader

loader = SpiderLoader(
    url="https://docs.example.com",
    mode="crawl",
)
docs = loader.load()

Your agent is one command away from the entire web

Add Spider as an MCP tool and start crawling. No credit card required.