Searching with Spider
Instant Search
Spider offers lightning-fast search, typically delivering results in under 2 seconds depending on the query. This gives you instant access to live web data, making it ideal for:
- Feeding real-time content into large language models (LLMs)
- Building intelligent agents and data pipelines
- Crawling and collecting fresh, targeted data
Search Endpoint Usage
POST /search
Use this endpoint to compile a list of relevant websites for crawling and resource collection.
Request Parameters
- search (required) – The search query to execute
- search_limit – Number of top results to return (e.g. 2)
- page – The results page to fetch (e.g. 2)
- country – Country code for the search (e.g. "us", "fr")
- fetch_page_content (optional) – If true, the search also performs a crawl to gather page content (crawl costs apply)
Example Request (Python)
import requests, os

headers = {
    'Authorization': f'Bearer {os.getenv("SPIDER_API_KEY")}',
    'Content-Type': 'application/json',
}

json_data = {
    "search": "sports news today",
    "search_limit": 2,
}

response = requests.post('https://api.spider.cloud/search', headers=headers, json=json_data)
print(response.json())
Search Results Format
The API returns structured results as an array of objects:
[
{
"title": "ESPN – Serving Sports Fans. Anytime. Anywhere.",
"description": "Visit ESPN for live scores, highlights and sports news. Stream exclusive games on ESPN+ and play fantasy sports.",
"url": "https://www.espn.com/"
},
{
"title": "Sports Illustrated",
"description": "SI.com provides sports news, expert analysis, highlights, stats and scores for the NFL, NBA, MLB, NHL, college football, soccer...",
"url": "https://www.si.com/"
}
]
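The results array can be consumed directly in code; a minimal sketch using the sample response above to collect URLs for a follow-up crawl:

```python
# Sample results in the shape documented above
results = [
    {
        "title": "ESPN – Serving Sports Fans. Anytime. Anywhere.",
        "url": "https://www.espn.com/",
    },
    {
        "title": "Sports Illustrated",
        "url": "https://www.si.com/",
    },
]

# Pull out the URLs to feed into a crawl step
urls = [item["url"] for item in results]
print(urls)  # ['https://www.espn.com/', 'https://www.si.com/']
```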
Combo Search and Crawl
Spider supports combo operations: search the web and crawl the selected results in a single workflow by setting fetch_page_content to true.
All of the default parameters for crawling can be passed in to optimize the run.
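A sketch of a combo payload, assuming the same endpoint and headers as the earlier example; search, search_limit, and fetch_page_content come from the parameter list above, and any additional crawl parameters would follow the crawl defaults (not shown here, as they are defined in the crawl docs):

```python
import json

# Combo search-and-crawl payload: fetch_page_content=True tells Spider to
# crawl each search result and return its content (crawl costs apply).
json_data = {
    "search": "sports news today",
    "search_limit": 2,
    "fetch_page_content": True,
}

# POST this to https://api.spider.cloud/search with the same headers as
# the earlier example; each result then carries its crawled content.
print(json.dumps(json_data, indent=2))
```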
Benefits:
- Gather deeper structured content from top search results
- Automate extraction for real-time summaries, scraping, or analysis
- Reduce steps and latency in data workflows
Rate Limits
Spider Search is designed for high-throughput use. We support:
- Up to 50,000 search requests per minute
- Multiple search providers
- Scalable, distributed crawling and parsing
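Even with generous server-side limits, pacing requests on the client keeps bursts predictable. A minimal sketch of such a pacer; the 50,000 requests per minute figure comes from the list above, while the Throttle helper itself is an illustration, not part of any Spider SDK:

```python
import time

class Throttle:
    """Simple client-side pacer: at most `rate` calls per `per` seconds."""

    def __init__(self, rate: int, per: float = 60.0):
        self.min_interval = per / rate  # seconds between calls
        self.last = 0.0

    def wait(self) -> float:
        """Sleep just enough to respect the pace; return the delay used."""
        now = time.monotonic()
        delay = max(0.0, self.last + self.min_interval - now)
        if delay:
            time.sleep(delay)
        self.last = time.monotonic()
        return delay

# Pace requests under the documented 50,000-per-minute ceiling;
# call throttle.wait() before each POST to /search.
throttle = Throttle(rate=50_000, per=60.0)
```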