Pricing
Spider operates on a pay-as-you-go pricing model: you only pay for the data you consume, with no further charges once you stop using the service.
How does our pricing work?
View our breakdown tables below to understand how charges are applied.
Charge Model
Cost Type | Amount |
---|---|
Per request (data transfer) | $1.00 / GB |
Per request (compute) | $0.001 / CPU minute |
Add-Ons
Add-ons | Cost |
---|---|
| | $0.0003 / avg per req |
| | $0.0001 / avg per req |
| | $0.01 / 1k tokens |
| | $0.30 / GB month |
Features
Feature | Cost |
---|---|
/crawl | $0.0003 / avg per req |
/links | $0.0002 / avg per req |
/screenshot | $0.0006 / avg per req |
/pipeline | $0.0003 / avg per req |
/search | $0.005 / per req |
/transform | $0.0001 / per req |
/data/query | $0.0001 / per req |
proxy.spider.cloud | $1-2 / per GB |
Average cost per 100 pages
The average cost for 100 pages depends on the endpoint and request options you select on demand.
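As a rough illustration, the per-request, data-transfer, and compute rates above can be combined into a back-of-the-envelope estimate. The per-page data size and CPU time in the sketch below are placeholder assumptions, not measured averages.

```python
# Back-of-the-envelope cost estimate for a 100-page crawl using the
# published rates. The average page size and CPU time per page are
# illustrative assumptions, not measured values.

PER_REQUEST_FEE = 0.0003   # /crawl, $ per request (avg)
DATA_RATE = 1.00           # $ per GB of data transferred
CPU_RATE = 0.001           # $ per CPU minute

pages = 100
avg_page_mb = 0.5          # assumed average page size in MB
avg_cpu_min = 0.01         # assumed CPU minutes per page

request_cost = pages * PER_REQUEST_FEE
data_cost = (pages * avg_page_mb / 1024) * DATA_RATE
compute_cost = pages * avg_cpu_min * CPU_RATE

total = request_cost + data_cost + compute_cost
print(f"Estimated cost for {pages} pages: ${total:.4f}")
```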
Need unique scraped data?
Contact us at sales@spider.cloud to talk about what we can do.
Are you an enterprise customer?
Reach out to us at sales@spider.cloud to learn more.
FAQ
Frequently asked questions about Spider
What is Spider?
Spider is a leading web crawling tool designed for speed and cost-effectiveness, supporting various data formats including LLM-ready markdown.
How can I try Spider?
Purchase credits for our cloud system or test the Open Source Spider engine to explore its capabilities.
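If you start with the cloud API, a first crawl request can look roughly like the sketch below. The endpoint path, bearer-token header, and payload fields (url, limit, return_format) follow the feature table above and common REST conventions, but treat them as assumptions and confirm the exact schema in the API reference.

```python
import os
import requests

# Minimal sketch of a first request to the hosted /crawl endpoint.
# The payload fields below are assumptions; check the API reference
# for the authoritative schema.
headers = {
    "Authorization": f"Bearer {os.environ['SPIDER_API_KEY']}",
    "Content-Type": "application/json",
}
payload = {
    "url": "https://example.com",
    "limit": 5,                   # crawl at most 5 pages
    "return_format": "markdown",  # request LLM-ready markdown
}

resp = requests.post("https://api.spider.cloud/crawl", headers=headers, json=payload)
resp.raise_for_status()
print(resp.json())  # response structure depends on the format you request
```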
What are the rate limits?
Everyone has access to 50,000 requests per second for the core API.
Is Spider suitable for large scraping projects?
Absolutely, Spider is ideal for large-scale data collection and offers a cost-effective dashboard for data management.
Can you crawl all pages?
Yes, Spider accurately crawls all necessary content without needing a sitemap.
What formats can Spider convert web data into?
Spider outputs HTML, raw, text, and various markdown formats. It supports JSON, JSONL, CSV, and XML for API responses.
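As a minimal sketch, the output format can typically be selected per request. The return_format field and its values below are assumed names, so verify them against the API reference before relying on them.

```python
import os
import requests

# Sketch: fetching the same page in several output formats to compare
# response sizes. The return_format values are assumed, not confirmed.
headers = {"Authorization": f"Bearer {os.environ['SPIDER_API_KEY']}"}

for fmt in ("markdown", "text", "html"):
    resp = requests.post(
        "https://api.spider.cloud/crawl",
        headers=headers,
        json={"url": "https://example.com", "limit": 1, "return_format": fmt},
    )
    resp.raise_for_status()
    print(f"{fmt}: {len(resp.content)} bytes returned")
```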
Does it respect robots.txt?
Yes, robots.txt compliance is enabled by default, but you can disable it if necessary.
Am I billed for failed requests?
We do not charge for any failed request on our endpoints.