POST /search

Search API

Go from a search query to fully extracted web content in a single request. Spider searches the web, discovers relevant pages, crawls each result, and returns clean content.

One Query, Many Results

A single search request fans out across the web and converges back to structured content.

1 query → 1 API call → N results discovered → N pages extracted
(e.g. anthropic.com, openai.com, langchain.com)

How Search + Crawl Works

1

You Send a Query

A plain-text search query — the same kind you'd type into a search engine. Set search_limit to control how many results to crawl.

2

Spider Searches the Web

Queries major search engines and collects the top result URLs. Use country_code for geo-targeted results.

3

Each Result Is Crawled

Every discovered page is loaded, rendered with a real browser if needed, and its content extracted in your chosen format.

4

Clean Content Returned

Structured content from every result page delivered as markdown, text, HTML, or raw bytes — ready for your pipeline.
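The four steps above collapse into a single HTTP request. A minimal sketch using Python's standard library (the endpoint URL `https://api.spider.cloud/search` follows the curl example later on this page; `search_limit`, `country_code`, and `return_format` appear elsewhere on this page, while the query field name `search` is an assumption):

```python
import json
import urllib.request

# Hypothetical request sketch: one POST covers search, crawl, and extraction.
payload = {
    "search": "best RAG frameworks",  # plain-text query (step 1); field name assumed
    "search_limit": 5,                # how many results to crawl (step 2)
    "country_code": "us",             # geo-targeted search results
    "return_format": "markdown",      # format for extracted content (step 4)
}

def build_request(api_key: str) -> urllib.request.Request:
    """Build the POST request; send it with urllib.request.urlopen."""
    return urllib.request.Request(
        "https://api.spider.cloud/search",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request returns structured content for every crawled result, so there is no separate SERP-then-scrape loop to maintain.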

Without Spider Search

# Step 1: Search API (pseudocode; serp_api, scraper, and
# parse_and_clean stand in for separate services)
serp = serp_api.search("query")
urls = [r.url for r in serp.results]

# Step 2: Scrape each result
pages = []
for url in urls:
    html = scraper.fetch(url)
    text = parse_and_clean(html)
    pages.append(text)

# 3 services, N+1 API calls, error handling...

With Spider Search

# One call does everything (spider is an initialized Spider client)
pages = spider.search(
    "query",
    params={
        "search_limit": 5,
        "return_format": "markdown",
    }
)

# Done. 1 service, 1 call, full content.

Key Capabilities

Configurable Result Count

Set search_limit to control how many search results Spider crawls — from 1 to dozens. Balance thoroughness against cost and latency.

Deep Crawl Results

Combine search_limit with limit to not just scrape result pages but crawl deeper into each discovered site.

Geo-Targeted Search

Use country_code to get localized results. See what users in different regions find for the same query.

All Output Formats

Get search-sourced content as markdown, text, HTML, or raw bytes — the same formatting and cleaning as crawl and scrape.

Metadata Enrichment

Enable metadata to get page titles, descriptions, and keywords alongside extracted content for building search indexes.

Streaming Support

Use JSONL content type to stream results as each page is crawled and processed. Start consuming data immediately.
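A streamed JSONL response can be consumed one page at a time. A sketch of the consuming side, assuming each line of the stream is one JSON object per crawled page (the `url` and `content` field names are illustrative):

```python
import json
from typing import Iterator

def iter_jsonl(lines: Iterator[str]):
    """Yield one page record per JSONL line as soon as it arrives."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

# Simulated stream; real use would iterate the HTTP response body
# line by line instead of a list.
sample = [
    '{"url": "https://example.com/a", "content": "# Page A"}',
    '{"url": "https://example.com/b", "content": "# Page B"}',
]
pages = list(iter_jsonl(sample))
```

Because each line is a complete record, downstream processing can start on the first page while later results are still being crawled.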

Code Examples
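One hedged example: a parameter set combining result count, deep crawl, geo-targeting, output format, and metadata enrichment. All parameter names appear elsewhere on this page; the commented-out call assumes a client object like the one in the earlier snippet:

```python
# Sketch: search the web, crawl the top 3 results, and crawl deeper
# into each discovered site.
params = {
    "search_limit": 3,        # crawl the top 3 search results
    "limit": 10,              # then crawl up to 10 pages per result site
    "country_code": "de",     # localized results for Germany
    "return_format": "markdown",
    "metadata": True,         # include titles, descriptions, keywords
}
# pages = spider.search("vector database benchmarks", params=params)
```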

Try It Now

Type any search query into the live demo: Spider will search the web, crawl the top results, and return clean content. No sign-in required; 25 free searches per day.

Why Spider Search

Features that set Spider's Search API apart from alternatives.

50K req/min

No Rate Limit Walls

Most search APIs cap at 100-500 concurrent requests. Spider handles 50,000 per minute. Your agent keeps working during traffic spikes.

N:1 batch

Batch Multiple Queries

Send an array of search queries in a single API call. Spider runs them all in parallel and returns grouped results. One round trip instead of N.

Auto paginate

Auto-Pagination

Set search_limit to 50 and auto_pagination to true. Spider pages through search results automatically. No manual page incrementing.
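The two features above can be combined in one request. A sketch of a plausible payload, assuming the query field accepts a list of search strings for batching and that `auto_pagination` is a boolean flag (`search_limit` and `auto_pagination` are named above; the rest of the shape is an assumption):

```python
import json

# Hypothetical batch + auto-pagination payload.
payload = {
    "search": [                      # N queries, one round trip (assumed shape)
        "rust async runtimes",
        "go http routers",
        "python asgi servers",
    ],
    "search_limit": 50,              # total results wanted per query
    "auto_pagination": True,         # Spider pages through SERPs automatically
    "return_format": "markdown",
}
body = json.dumps(payload)
```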

Search API Comparison

How Spider's Search API compares to alternatives for AI and LLM use cases.

Feature           Spider       Firecrawl   Jina           SerpAPI
Search + scrape   One call     One call    Snippets only  Search only
Batch queries     Yes          No          No             No
Auto-pagination   Yes          No          No             Manual
Rate limit        50K/min      2-150       500 RPM        5K/mo
Geo-targeting     City-level   Limited     No             Country
Time filters      5 levels     Yes         No             Yes
Output formats    4+           Markdown    Markdown       JSON
Avg cost/search   ~$0.003      ~$0.01      Free*          $0.01

*Jina free tier is rate-limited to 500 RPM with 10M token cap.

AI-Powered Search

POST /ai/search

Describe what you're looking for in plain English. Spider's AI models optimize your query, search the web, extract relevant content, and return structured results. No manual parameter tuning needed.

# Natural language search
curl -X POST https://api.spider.cloud/ai/search \
  -H "Authorization: Bearer $KEY" \
  -d '{
    "prompt": "Find recent benchmarks comparing RAG frameworks",
    "search_limit": 5,
    "return_format": "markdown"
  }'
Query Optimization

AI rewrites your prompt into effective search queries.

Smart Extraction

Vision models pull the most relevant content from results.

Model Selection

Choose Qwen, Schematron, or bring your own via OpenRouter.

Popular Use Cases

RAG with Live Web Data

Feed a user's question into the search API, retrieve relevant pages, and pass the content to an LLM for grounded, up-to-date answers. No need to maintain a static corpus.
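The RAG loop above can be sketched in a few lines. The `spider.search` call and `ask_llm` helper in the comments are hypothetical stand-ins for your Spider client and LLM call; only the prompt assembly is shown concretely:

```python
def build_grounded_prompt(question: str, pages: list, max_chars: int = 4000) -> str:
    """Concatenate retrieved page content into a grounded LLM prompt."""
    context = "\n\n---\n\n".join(p["content"][:max_chars] for p in pages)
    return (
        "Answer using only the sources below.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

# pages = spider.search(question, params={"search_limit": 5,
#                                         "return_format": "markdown"})
# answer = ask_llm(build_grounded_prompt(question, pages))
```

Truncating each page to `max_chars` keeps the prompt inside the model's context window; a production pipeline would chunk and rank instead of truncating.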

Market Research

Search for competitor names, product categories, or industry terms and automatically collect the latest information from multiple sources in a single request.

Content Curation

Build automated pipelines that discover and aggregate the best articles on specific topics for newsletters, research reports, or knowledge bases.

SERP Monitoring

Track how search results change over time for keywords that matter to your business. Compare results across regions using country_code targeting.
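Cross-region comparison reduces to running the same query once per `country_code` and diffing the URL lists. A sketch, where `fetch` is a hypothetical callable wrapping `spider.search` with a given country code:

```python
def compare_regions(query: str, country_codes: list, fetch) -> dict:
    """Return {country_code: [result URLs]} for one query.

    `fetch(query, cc)` is assumed to return the crawled pages for
    `query` with params={"country_code": cc}.
    """
    return {
        cc: [page["url"] for page in fetch(query, cc)]
        for cc in country_codes
    }
```

Persisting these per-region URL lists over time gives a simple rank-change log for the keywords you track.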

Related Resources

Search the web programmatically

Combine search discovery with content extraction in a single API call.