# Developer Quickstart
Get your environment set up and make your first API request.
## Install the Spider client

```bash
pip install spider_client
```

## Create and Export an API Key
Create an API key from the dashboard, then export it as an environment variable:
Export an environment variable on *nix systems:

```bash
export SPIDER_API_KEY="your_api_key_here"
```

## Make Your First API Request
Pick an endpoint and run the example below; this quickstart uses the `crawl` endpoint.

Request:
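One way to issue the request from the shell is with curl; this sketch reuses the endpoint, headers, and body from the Python example in the next section:

```bash
curl https://api.spider.cloud/crawl \
  -H "Authorization: Bearer $SPIDER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "limit": 5, "return_format": "markdown"}'
```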
## Handle the Response
Every response is a JSON array with one element per page. Each element contains the page URL, the content in your requested format, an HTTP status code, and a cost breakdown. Check the `status` field to confirm the page loaded successfully before processing its content.
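As an illustrative sketch, a successful response looks roughly like the following (field names are taken from the parsing example below; values are made up):

```json
[
  {
    "url": "https://example.com",
    "status": 200,
    "content": "# Example Domain ...",
    "costs": { "total_cost_formatted": "$0.00012" }
  }
]
```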
Parse the response:
```python
import os

import requests

headers = {
    'Authorization': f'Bearer {os.getenv("SPIDER_API_KEY")}',
    'Content-Type': 'application/json',
}

# Crawl up to five pages starting from the seed URL and return each as markdown.
response = requests.post(
    'https://api.spider.cloud/crawl',
    headers=headers,
    json={"url": "https://example.com", "limit": 5, "return_format": "markdown"},
)

data = response.json()
for page in data:
    if page.get('status') == 200:
        print(f"URL: {page['url']}")
        print(f"Content length: {len(page.get('content', ''))} chars")
        print(f"Cost: {page['costs']['total_cost_formatted']}")
    else:
        print(f"Failed: {page['url']} ({page.get('error', 'Unknown error')})")
```

**Pro Tip:**
For large crawls, use streaming to process pages as they arrive instead of waiting for the full response. Set the
`Content-Type` header to `application/jsonl`.
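A minimal sketch of that pattern with `requests`, assuming each streamed line is a complete JSON object for one page (the endpoint and request body match the example above):

```python
import json
import os

import requests

headers = {
    'Authorization': f'Bearer {os.getenv("SPIDER_API_KEY")}',
    # Per the tip above: application/jsonl asks the API to stream
    # newline-delimited JSON instead of one big array.
    'Content-Type': 'application/jsonl',
}

# stream=True keeps requests from buffering the whole body in memory.
with requests.post(
    'https://api.spider.cloud/crawl',
    headers=headers,
    json={"url": "https://example.com", "limit": 5, "return_format": "markdown"},
    stream=True,
) as response:
    for line in response.iter_lines():
        if not line:
            continue  # skip keep-alive blank lines
        page = json.loads(line)
        print(f"Got {page['url']} (status {page.get('status')})")
```

Each parsed line should have the same fields as an element of the array response, so the same `status` check from the previous example applies.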
## Next Steps

Now that you can make requests and handle responses, explore these areas:
- Concepts — Request modes, output formats, streaming, and the credits system.
- Scraping and Crawling — Depth control, response fields, and cost breakdowns.
- Efficient Scraping — Batch requests, retries, timeouts, and multi-URL patterns.
- JSON Scraping — Extract structured data from JSON-LD, Next.js SSR, and other embedded formats.
- Recipes — Copy-paste code for every API endpoint: scraping, crawling, search, screenshots, streaming, and webhooks.