# Spider vs. ScrapingBee: The Credit Multiplier Problem
ScrapingBee advertises generous credit counts: 250,000 on the Freelance plan, 1,000,000 on Startup. What the pricing page doesn’t make obvious is that a single request can consume 75 credits. That 250,000-credit plan might only give you 3,333 actual requests.
Spider has no credit multipliers. A page is a page.
## How ScrapingBee’s credit system actually works
ScrapingBee multiplies your credit cost based on the features you enable:
| Configuration | Credits per request |
|---|---|
| Regular proxy, no JS | 1 |
| Regular proxy + JS rendering | 5 |
| Premium proxy, no JS | 10 |
| Premium proxy + JS rendering | 25 |
| Stealth proxy + JS rendering | 75 |
JavaScript rendering is on by default. If you don’t explicitly disable it, every request costs at least 5 credits. If you need stealth proxies for protected sites, which describes most production scraping today, each request costs 75 credits.
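The compounding is easy to see as a lookup table. This sketch uses the numbers from the table above; the function and its names are illustrative, not part of ScrapingBee’s API:

```python
# Credit cost per request, keyed by (proxy_tier, js_rendering).
# Values taken from ScrapingBee's published multiplier table above.
CREDIT_COST = {
    ("regular", False): 1,
    ("regular", True): 5,
    ("premium", False): 10,
    ("premium", True): 25,
    ("stealth", True): 75,
}

def requests_per_plan(plan_credits: int, proxy: str = "stealth", js: bool = True) -> int:
    """How many actual requests a plan's credits buy at a given configuration."""
    return plan_credits // CREDIT_COST[(proxy, js)]

print(requests_per_plan(250_000))                    # Freelance at stealth+JS -> 3333
print(requests_per_plan(250_000, "regular", False))  # same plan, static pages -> 250000
```

Same plan, same price: a 75x spread in usable requests depending on which switches you flip.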
Here’s what that means for each plan:
| Plan | Advertised credits | Actual requests (stealth+JS) | Monthly cost |
|---|---|---|---|
| Freelance | 250,000 | 3,333 | $49 |
| Startup | 1,000,000 | 13,333 | $99 |
| Business | 3,000,000 | 40,000 | $249 |
That $49/month Freelance plan gives you 3,333 requests when scraping protected sites, roughly $14.70 per 1,000 pages.
## Spider pricing: no multipliers
Spider bills based on bandwidth ($1/GB) and compute time ($0.001/min) — not credit multipliers. There are no 75x surcharges for enabling stealth or JavaScript rendering. A typical production workload averages around $0.65 per 1,000 pages. See spider.cloud/credits for the full breakdown.
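As a back-of-envelope check on that $0.65 figure, here is a sketch of the bandwidth-plus-compute model. The rates are the ones quoted above; the average page size and compute time per page are assumptions for illustration, not published numbers:

```python
# Rough Spider cost model: $1/GB bandwidth + $0.001/min compute.
def spider_cost_per_1k_pages(avg_page_mb: float = 0.5, avg_compute_min: float = 0.15) -> float:
    bandwidth = 1000 * avg_page_mb / 1024 * 1.00   # GB transferred x $1/GB
    compute = 1000 * avg_compute_min * 0.001       # minutes used x $0.001/min
    return round(bandwidth + compute, 2)

print(spider_cost_per_1k_pages())  # -> 0.64, in line with the ~$0.65 average
```

The key point is structural: enabling stealth or JS rendering changes the compute you consume, not a multiplier on the bill.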
| | Spider | ScrapingBee |
|---|---|---|
| Typical cost per 1,000 pages | ~$0.65 (avg) | $14.70 (stealth+JS) |
| Credit multiplier | None | 1x to 75x |
| JS rendering | Included in compute | 5x multiplier |
| Stealth proxy | Included in compute | 75x multiplier |
| CAPTCHA solving | Included | Additional credits |
| Pricing model | Bandwidth + compute | Credit-based with multipliers |
At 40,000 pages per month with stealth:
- ScrapingBee: $249 (Business plan, fully consumed)
- Spider: ~$26 (typical workload)
## The bigger gap: browser control
ScrapingBee is a request-response API. You send a URL, you get HTML back. There’s no way to interact with a page.
That works until you need data that only appears after a user action, like clicking “see more,” scrolling through a feed, navigating tabs, or filling a form. ScrapingBee can’t do any of that.
Spider Browser gives you a live browser session over WebSocket. Here’s what scraping a Zillow listing looks like:
```typescript
import { SpiderBrowser } from "spider-browser";

const spider = new SpiderBrowser({
  apiKey: process.env.SPIDER_API_KEY,
  stealth: 0, // auto-escalates when blocked, no extra cost
});

await spider.init();
await spider.page.goto("https://www.zillow.com/homedetails/123-main-st");
await spider.page.click(".see-more-facts");
await spider.page.waitForSelector(".all-facts-table");

const data = await spider.page.extractFields({
  price: "[data-testid='price']",
  beds: ".beds",
  baths: ".baths",
  sqft: ".sqft",
  yearBuilt: ".year-built",
  lotSize: ".lot-size",
});

console.log(data);
await spider.close();
```
With ScrapingBee, you’d get the initial HTML and miss every field hidden behind that “see more” click.
## AI extraction without the surcharge
Spider Browser includes AI methods at no extra cost:
- `extractFields()`: structured data from the DOM via CSS selectors
- `extract()`: describe what you want in plain English, get JSON back
- `act()`: click, fill, and scroll via natural language
- `observe()`: discover interactive elements on the page
- `agent()`: autonomous multi-step agent powered by any LLM
ScrapingBee offers AI extraction as a paid add-on that consumes additional credits on top of the base request cost.
## Output formats
ScrapingBee returns raw HTML. You parse it yourself with BeautifulSoup or Cheerio.
Spider returns data in the format you need: markdown, JSON, plain text, XML, CSV, or JSONL. If you’re feeding content into an LLM pipeline, the markdown output is ready to use without any post-processing.
## Side-by-side: scraping a protected page
### ScrapingBee
```python
import requests

response = requests.get(
    "https://app.scrapingbee.com/api/v1/",
    params={
        "api_key": "YOUR_API_KEY",
        "url": "https://example.com",
        "render_js": "true",       # 5 credits on a regular proxy
        "premium_proxy": "true",   # 25 credits with JS rendering
        "stealth_proxy": "true",   # 75 credits with JS rendering
    },
)
# 75 credits gone for one page
html = response.text
```
### Spider
```python
import requests

response = requests.post(
    "https://api.spider.cloud/crawl",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "url": "https://example.com",
        "return_format": "markdown",
    },
)
# One page at the flat rate. Same price regardless of protection level.
markdown = response.json()[0]["content"]
```
## When ScrapingBee still makes sense
ScrapingBee works fine if you only scrape static, unprotected pages at 1 credit each and your volume fits within the Freelance plan. It’s a simple REST API that does what it says.
But if you’re scraping protected sites, building interactive workflows, or feeding data into AI pipelines, the 75x multiplier adds up fast, and the lack of browser control becomes a hard wall.