
Spider vs. Apify: Compute Units, Expired Credits, and What You Actually Pay

Apify's billing stacks compute units (memory × time), separately billed proxy bandwidth, and per-Actor fees into a bill most teams can't predict. Spider charges bandwidth plus compute, with no expiring credits and no hidden proxy fees.

By Jeff Mendez


Apify and Spider solve the same problem from opposite directions.

Apify is a marketplace. You browse community-built scrapers called “Actors,” pick one for your target site, run it on Apify’s cloud, and pay for the compute it consumed. It’s an app store for scrapers.

Spider is an API. Send a URL, get data back. No marketplace, no Actor selection, no compute unit math.
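
For a concrete picture, here's a minimal sketch of that call in TypeScript. The endpoint and request shape follow Spider's public docs as I understand them; treat the parameters as assumptions and check spider.cloud for the current API.

// Minimal sketch of a single-page fetch against Spider's HTTP API.
// Endpoint and body shape per Spider's public docs; verify against
// current documentation before relying on these parameters.
const response = await fetch("https://api.spider.cloud/crawl", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.SPIDER_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    url: "https://example.com/product",
    limit: 1, // just this page, no crawl
    return_format: "markdown",
  }),
});

const data = await response.json();
console.log(data);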

Apify’s billing model

Apify charges in compute units (CUs). One CU equals 1 GB of memory running for 1 hour:

CU = Memory (GB) × Duration (hours)

An Actor with 4 GB of memory running for 15 minutes burns 1 CU. Same Actor at 1 GB for an hour, also 1 CU. The rate per CU depends on your plan:

| Plan | Monthly cost | CU rate | Included credits |
| --- | --- | --- | --- |
| Free | $0 | ~$0.30/CU | $5/month |
| Starter | $39 | ~$0.30/CU | $39/month |
| Scale | $199 | ~$0.25/CU | $199/month |
| Business | $999 | ~$0.20/CU | $999/month |

Apify reduced CU rates by up to 25% in September 2025. Check apify.com/pricing for current numbers.

You pay the subscription first. That subscription buys a fixed amount of credits. Once you exceed them, pay-as-you-go overage kicks in at the CU rate. There’s no pure PAYG option; even the Free tier caps you at $5/month.

So the Starter plan is a $39/month floor. Use less than $39 of compute? You overpaid. Use more? You pay overage on top.
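
To see the floor-and-overage dynamic in code, here's a toy cost function. It's my own illustration, not Apify's published formula; the plan figures come from the table above.

// Toy model of subscription-plus-overage billing. The Math.max()
// is the key point: you never pay less than the floor.
interface Plan {
  monthlyCost: number;     // subscription floor in USD
  cuRate: number;          // USD per compute unit
  includedCredits: number; // USD of usage covered by the subscription
}

const starter: Plan = { monthlyCost: 39, cuRate: 0.3, includedCredits: 39 };

function monthlyBill(plan: Plan, cusUsed: number): number {
  const usage = cusUsed * plan.cuRate;
  const overage = Math.max(0, usage - plan.includedCredits);
  return plan.monthlyCost + overage; // floor plus anything beyond credits
}

console.log(monthlyBill(starter, 50));  // $39 — only $15 used, $24 wasted
console.log(monthlyBill(starter, 200)); // $60 — $39 floor + $21 overage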

And CU costs are only part of the bill.

Proxy bandwidth is billed separately, roughly $8/GB for residential proxies on the Starter plan. A JS-rendered page typically transfers 2-5 MB. At 100K pages per month, that’s $1,600 to $4,000 in proxy fees alone, on top of everything else.

Actor fees are the third layer. Some marketplace Actors charge per-result, per-event, or monthly rental fees on top of your CU and proxy costs. Not all of them do this, but you need to check each one.

A real CU cost scenario

Say you want to scrape 10,000 product pages from a mid-size e-commerce site. You pick a community Actor that looks right.

That Actor allocates 4 GB of memory (you don’t control this; the Actor’s developer chose it). Each page takes about 8 seconds to process including rendering. Your CU bill:

4 GB × (8 sec × 10,000 pages / 3,600 sec per hour) ≈ 88.9 CUs

At $0.30/CU on the Starter plan, that’s $26.67 in compute. Then add residential proxy bandwidth, maybe 3 MB per page, so 30 GB total at $8/GB = $240. Plus the $39 subscription. Total: ~$306 for 10,000 pages.

Now imagine a different Actor for the same task allocates 1 GB instead of 4 GB. Same pages, same results, but your CU cost drops to $6.67. The problem is you have no visibility into why one Actor chose 4 GB and another chose 1 GB, and the 4x cost difference is invisible until you get the bill.
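
Here's the same arithmetic as a sketch, with every input (memory allocation, per-page time, page weight, rates) taken from the scenario above rather than from either vendor's calculator:

// Worked scenario: 10,000 pages, 8 s per page, Starter-plan rates.
const pages = 10_000;
const secondsPerPage = 8;
const cuRate = 0.3;       // USD per CU, Starter plan
const proxyRatePerGB = 8; // USD per GB, residential proxies
const mbPerPage = 3;      // assumed JS-rendered page weight
const subscription = 39;  // Starter floor

function apifyCost(memoryGB: number): number {
  const hours = (pages * secondsPerPage) / 3600;
  const computeUnits = memoryGB * hours; // CU = GB × hours
  const compute = computeUnits * cuRate;
  const proxy = ((pages * mbPerPage) / 1000) * proxyRatePerGB;
  return subscription + compute + proxy;
}

console.log(apifyCost(4).toFixed(2)); // 305.67 — the 4 GB Actor
console.log(apifyCost(1).toFixed(2)); // 285.67 — same job, 1 GB Actor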

On Spider, those same 10,000 pages cost about $6.50 at our production average. No memory allocation decisions, no proxy surcharges, no subscription.

Credits expire monthly

Apify credits reset at the end of each billing cycle. On the $39 Starter plan, if you use $15 this month, the remaining $24 disappears.

Spider credits never expire. Buy them when you need them, use them on your timeline.

The marketplace problem

Apify’s marketplace pitch is compelling: someone already built a scraper for your target. Just run it.

Reality is messier. Some Actors are maintained by serious developers and work well. Others were published once and abandoned. When a target site changes its layout and your Actor breaks, you have three options: wait for the developer to fix it, fork it and maintain it yourself, or find a different Actor. None of those are great when you’re on a deadline.

There’s a subtler issue too. Each Actor is a black box that decides its own memory allocation, proxy usage, and parsing logic. You don’t control those decisions, but you pay for them through CU charges. The 4 GB vs 1 GB scenario above isn’t hypothetical. We’ve seen exactly this kind of variance across Actors targeting the same site.

Spider doesn’t have Actors. The API handles any URL through the same infrastructure. When a site changes, the system adapts (proxy rotation, stealth escalation, rendering adjustments) without you touching anything. And the cost is deterministic: you pay for the bytes transferred and the seconds of compute, not for someone else’s memory allocation choices.

Browser automation and AI

Apify’s browser story is Crawlee, their open-source crawling framework. It’s genuinely well-built (Puppeteer, Playwright, and Cheerio support) and it’s free to use locally. The catch is that running it on Apify’s cloud means every second of browser time counts toward your CU bill. And those CUs add up fast with browser workloads, because browsers are memory-hungry.

Spider Browser takes a different approach. Instead of giving you a framework to deploy, it gives you a live browser session over WebSocket with AI methods built in:

import { SpiderBrowser } from "spider-browser";

const spider = new SpiderBrowser({
  apiKey: process.env.SPIDER_API_KEY,
  stealth: 0, // auto-escalates when blocked
});

await spider.init();
await spider.page.goto("https://example.com/product");
await spider.page.click(".load-reviews");
await spider.page.waitForSelector(".review-list");

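// AI extraction: plain-English prompt in, structured JSON out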
const reviews = await spider.page.extract(
  "Extract all reviews with author, rating, and text"
);
console.log(reviews);
await spider.close();

That extract() call takes plain English and returns structured JSON. There’s also act() for natural language browser actions and agent() for chaining multi-step workflows autonomously.
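
As a rough sketch of how those fit together: the method names act() and agent() come from Spider's docs as described above, but the argument shapes here are illustrative assumptions.

// Hypothetical usage of act() and agent(); exact signatures
// are assumptions, not confirmed API.
await spider.page.act("open the next page of reviews");

const summary = await spider.page.agent(
  "Visit the first three products in this category and " +
    "collect their names, prices, and average ratings"
);
console.log(summary);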

Apify doesn’t have a platform-wide AI extraction layer. Whether an Actor supports structured extraction depends on whoever built that specific Actor.

Where each tool fits

Apify is the right pick if you need a pre-built scraper for a specific platform (Amazon, Google Maps, Instagram) and there’s a well-maintained Actor for it. The marketplace saves development time when it works. And Crawlee is a genuinely good open-source framework if you want to self-host your own crawling infrastructure.

Spider makes more sense if you need predictable costs across a range of sites, if you don’t want to debug someone else’s memory allocation choices, or if expiring credits and three-layer billing aren’t things you want to manage.

| | Spider | Apify |
| --- | --- | --- |
| Pricing model | Bandwidth + compute, pay-as-you-go | CU subscription + proxy fees + Actor fees |
| Monthly floor | $0 | $39 (Starter) |
| Credit expiration | Never | Monthly reset |
| Proxy costs | Included | ~$8/GB residential, billed separately |
| Cost predictability | Deterministic (bytes + seconds) | Depends on Actor's memory allocation |
| AI extraction | Built into SDK | Actor-dependent |

See the Spider pricing breakdown for full details. Free credits to start, no card required.
