Spider Browser vs. Kernel vs. Browserbase: 999 URLs, 100% Pass Rate

Kernel benchmarked cold start speed. We benchmarked what matters: reliability across 999 URLs, 254 domains, and 18 categories, with a 100% success rate and 2.5s median end-to-end latency.

Jeff Mendez · 6 min read

Kernel recently published a benchmark showing they’re 5.8x faster than Browserbase at creating browsers. Fast cold starts are a useful metric, but cold start speed doesn’t tell you whether your agent can actually get data from the page it navigated to.

We ran two benchmarks. First, the same kind of speed test Kernel used (session creation, page navigation, end-to-end latency) so you can compare directly. Then a harder test: 999 URLs across 254 domains and 18 categories, including sites behind Cloudflare, Akamai, PerimeterX, and DataDome. The result: 100% pass rate. Zero failures.

Everything is open source. You can reproduce every number in this post.

What Kernel measured vs. what we measured

Kernel’s blog compares cold start latency and end-to-end browser creation speed against Browserbase using browserbench, a third-party tool that times browser creation and basic page loads.

That answers one question: how fast can you get a browser? It doesn’t answer the questions that matter when your agent is doing real work:

  • Can the browser bypass bot detection on protected sites?
  • Does automation succeed across hundreds of different domains?
  • What happens when Cloudflare serves a challenge, when Akamai fingerprints your TLS stack, or when DataDome blocks the first request?
  • How fast is the full pipeline: connect, navigate, extract content, take a screenshot?

We designed our benchmarks to measure both: speed and reliability.

Speed benchmark

50 measured iterations (after 10 warmup rounds) using the spider-browser SDK, navigating to google.com and waiting for domcontentloaded. This mirrors the browserbench methodology.

| Metric | Median | Avg | p95 | Min |
| --- | --- | --- | --- | --- |
| Session creation (connect + auth + init) | 705ms | 710ms | 793ms | 612ms |
| Page navigate (goto -> domcontentloaded) | 1,821ms | 1,920ms | 3,001ms | 1,014ms |
| Session release | 0ms | 0ms | 0ms | 0ms |
| End-to-end total | 2,509ms | 2,630ms | 3,665ms | 1,804ms |

100% success rate across all 50 runs. Zero failures, zero timeouts.

Session creation is 705ms at the median. That’s WebSocket connection, authentication, and full protocol init. The session is immediately ready to navigate with no cold start penalty, because Spider Browser connects to pre-warmed browser instances. Session release is instant; closing the WebSocket ends the session with no VM to tear down.

Total end-to-end: 2.5 seconds to connect, navigate, and close. Kernel claims a 3.72x end-to-end advantage over Browserbase, but neither company publishes raw millisecond numbers for the full pipeline.
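For reference, here is a minimal sketch of the kind of measurement loop behind these numbers, written against the spider-browser SDK calls shown later in this post (SpiderBrowser, init(), page.goto()). The waitUntil option and the close() call are assumptions about the SDK's Puppeteer-style surface; substitute whatever your version exposes for navigation waiting and session release.

```ts
// Sketch of the speed-benchmark loop: 10 warmup rounds, then 50 measured runs.
// SpiderBrowser, init(), and page.goto() appear later in this post; the
// `waitUntil` option and close() are assumed here and may differ in the real SDK.
import { SpiderBrowser } from "spider-browser";

type Run = { connect: number; navigate: number; total: number };
const runs: Run[] = [];

for (let i = 0; i < 60; i++) {
  const t0 = performance.now();

  const spider = new SpiderBrowser({ apiKey: process.env.SPIDER_API_KEY });
  await spider.init(); // session creation: WebSocket connect + auth + protocol init
  const t1 = performance.now();

  await spider.page.goto("https://google.com", { waitUntil: "domcontentloaded" });
  const t2 = performance.now();

  await spider.close(); // assumed release method; closing the WebSocket ends the session
  const t3 = performance.now();

  // Discard the 10 warmup rounds, keep the 50 measured iterations.
  if (i >= 10) runs.push({ connect: t1 - t0, navigate: t2 - t1, total: t3 - t0 });
}

// Report medians, matching the table above.
const median = (xs: number[]) => [...xs].sort((a, b) => a - b)[Math.floor(xs.length / 2)];
console.log("session creation median:", median(runs.map(r => r.connect)).toFixed(0), "ms");
console.log("end-to-end median:", median(runs.map(r => r.total)).toFixed(0), "ms");
```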

Reliability benchmark

Speed on google.com is one thing. Reliability across the real web is another.

We built a dataset of 999 URLs across 254 unique domains spanning 18 categories, from simple static sites to some of the most aggressively protected properties on the web.

The corpus

| Category | URLs | Examples |
| --- | --- | --- |
| Technology | 239 | GitHub, Stack Overflow, Medium, HackerNews |
| E-Commerce | 128 | Amazon, eBay, Walmart, Target, Shopify stores |
| News | 116 | CNN, BBC, NYT, Reuters, AP News |
| Government | 73 | IRS, SSA, CDC, state government portals |
| Finance | 64 | Bloomberg, CoinDesk, Yahoo Finance, NerdWallet |
| Education | 59 | MIT, Coursera, Khan Academy, university sites |
| Entertainment | 45 | YouTube, Twitch, Spotify, IMDb |
| Social | 41 | Reddit, Twitter/X, LinkedIn, Discord |
| Travel | 40 | Booking, TripAdvisor, Airbnb, Expedia |
| Reference | 54 | Wikipedia, StackExchange, MDN, W3Schools |
| Food | 29 | AllRecipes, Bon Appetit, Epicurious |
| Automotive | 28 | Cars.com, AutoTrader, Edmunds, KBB |
| Health | 26 | WebMD, Mayo Clinic, Healthline |
| Sports | 22 | ESPN, NFL, NBA, Bleacher Report |
| Streaming | 13 | Netflix, Hulu, Disney+, HBO Max |
| Jobs | 12 | Indeed, Glassdoor, LinkedIn Jobs |
| Real Estate | 9 | Zillow, Realtor, Redfin |
| Classifieds | 1 | Craigslist |

This isn’t a curated list of friendly sites. It includes the same domains that cause real failures in production agent workflows, covering three difficulty tiers from static HTML to aggressive WAFs (Akamai Bot Manager, PerimeterX/HUMAN, DataDome).

What we measured

For each URL: connect to a remote browser, navigate, wait for content to load, extract content, take a screenshot. A URL passes if it returns a non-empty page title and content body. No manual retries, no cherry-picked results.
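In code terms, every URL ran through the same check. The sketch below shows its shape, assuming Puppeteer-style page.title(), page.content(), and page.screenshot() methods plus a close() release call on the SDK; only page.goto() appears elsewhere in this post, so treat the rest as placeholders for whatever the spider-browser SDK actually exposes. The 25-way concurrency is a plain worker pool.

```ts
// Sketch of the reliability check: one pass/fail verdict per URL, 25 workers.
// page.title(), page.content(), page.screenshot(), and close() are assumed,
// Puppeteer-style names; swap in the SDK's real methods.
import { SpiderBrowser } from "spider-browser";

async function checkUrl(url: string): Promise<boolean> {
  const spider = new SpiderBrowser({ apiKey: process.env.SPIDER_API_KEY });
  try {
    await spider.init();                      // connect to a remote browser
    await spider.page.goto(url);              // navigate and wait for content
    const title = await spider.page.title();  // assumed: page title
    const body = await spider.page.content(); // assumed: rendered page body
    await spider.page.screenshot();           // assumed: capture a screenshot
    return title.trim().length > 0 && body.trim().length > 0; // pass criterion
  } catch {
    return false;                             // any error counts as a failure
  } finally {
    await spider.close();                     // assumed release method
  }
}

// Run all URLs with a fixed concurrency of 25 workers pulling from one queue.
async function runAll(urls: string[], concurrency = 25): Promise<number> {
  let passed = 0;
  let next = 0;
  const worker = async () => {
    while (next < urls.length) {
      const url = urls[next++];
      if (await checkUrl(url)) passed++;
    }
  };
  await Promise.all(Array.from({ length: concurrency }, worker));
  return passed;
}
```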

Results

| Metric | Value |
| --- | --- |
| Total URLs | 999 |
| Passed | 999 |
| Failed | 0 |
| Pass rate | 100.0% |
| Unique domains | 254 |
| Concurrency | 25 |
| Total elapsed | ~19 minutes |

Every URL returned usable content. No timeouts, no blocks, no empty responses.

Timing

| Metric | Value |
| --- | --- |
| Median page time | 11.5s |
| Average page time | 16.0s |
| p95 page time | 39.3s |
| Fastest page | 0.9s |
| Slowest page | 79.7s |

| Phase (avg) | Time |
| --- | --- |
| Connect | 5.7s |
| Navigate | 4.8s |
| Content extraction | 1.5s |
| Screenshot | 0.9s |

The median across all 999 URLs — including heavily protected sites like Amazon, Nike, and Booking.com — is 11.5 seconds from WebSocket handshake to extracted content and screenshot. That number is pulled up by the difficulty of the corpus. Easy and medium sites routinely finish in 1-3 seconds (the fastest completed in under a second). The slow tail (p95 at 39.3s) comes from the hardest anti-bot targets where the system works through multiple layers of protection to deliver real content.

These times are deliberate. Spider Browser waits for content to actually be ready: JavaScript rendering, lazy-loaded elements, dynamic content settling. We could return faster by skipping those checks, but you’d get incomplete data.
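To make the tradeoff concrete, a common way to implement "wait until content settles" is to watch for a quiet window with no DOM mutations. The snippet below illustrates that general technique only; it is not Spider Browser's implementation, and it runs in the page context (for example via a Puppeteer/Playwright-style evaluate call, if your driver exposes one).

```ts
// Illustration of a DOM-quiescence wait, NOT Spider Browser's internal logic:
// resolve once the DOM has gone `quietMs` without mutations, or after a hard timeout.
function waitForDomToSettle(quietMs = 500, timeoutMs = 15_000): Promise<void> {
  return new Promise((resolve) => {
    let timer = setTimeout(finish, quietMs);
    const hardStop = setTimeout(finish, timeoutMs);
    const observer = new MutationObserver(() => {
      clearTimeout(timer);                    // any mutation resets the quiet window
      timer = setTimeout(finish, quietMs);
    });
    observer.observe(document.documentElement, {
      childList: true,
      subtree: true,
      attributes: true,
    });
    function finish() {
      observer.disconnect();
      clearTimeout(timer);
      clearTimeout(hardStop);
      resolve();
    }
  });
}
```

Skipping a wait like this returns control sooner, but on JavaScript-heavy and lazy-loading pages the extracted content would be incomplete, which is the tradeoff described above.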

How the 100% pass rate works

Spider Browser handles anti-bot detection automatically. When a page serves a challenge or block, the SDK detects it and escalates — no manual configuration, no retry logic in your code. You call page.goto(url) and get content back.

The entire stack is built in Rust. Sessions connect to pre-warmed browser instances with no cold start penalty. Each connection presents a unique browser profile, so consecutive requests to the same site look like different users to anti-bot systems.

The benchmark corpus deliberately includes some of the hardest targets on the web: Amazon (proprietary), Nike and Best Buy (Akamai Bot Manager), Walmart and Zillow (PerimeterX/HUMAN), Sephora and Booking.com (DataDome), and dozens of sites behind Cloudflare Turnstile. All 999 URLs returned usable content.

What this means in practice

Kernel and Browserbase give you a browser. You write the automation, the anti-detection, and the retry logic. Spider Browser gives you the full pipeline:

import { SpiderBrowser } from "spider-browser";

const spider = new SpiderBrowser({
  apiKey: process.env.SPIDER_API_KEY,
  stealth: 0,
});

await spider.init();
await spider.page.goto("https://www.amazon.com/dp/B0DGJHM7QN");

const data = await spider.page.extractFields({
  title: "#productTitle",
  price: ".a-price .a-offscreen",
  rating: "#acrPopover .a-icon-alt",
  reviews: "#acrCustomerReviewText",
  availability: "#availability span",
  image: { selector: "#landingImage", attribute: "src" },
});

console.log(data);
// { title: "Apple AirPods Pro 2", price: "$189.99", ... }

extractFields() pulls structured data from the DOM in one call. For more complex workflows, there’s act() for click/type interactions, observe() for element discovery, extract() for natural-language queries, and agent() for fully autonomous multi-step workflows powered by any LLM.
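The method names above come from this post, but their exact argument shapes are not shown here, so the sketch below is illustrative only: the natural-language string arguments are assumptions loosely modeled on Stagehand-style browser APIs, not the SDK's documented signatures.

```ts
// Illustrative only: act/observe/extract/agent are named in this post, but the
// string-prompt signatures below are assumptions, not the documented API.
import { SpiderBrowser } from "spider-browser";

const spider = new SpiderBrowser({ apiKey: process.env.SPIDER_API_KEY });
await spider.init();
await spider.page.goto("https://news.ycombinator.com");

// Discover candidate elements matching a description (assumed signature).
const stories = await spider.page.observe("links to the top stories");

// Perform a click/type interaction described in natural language (assumed signature).
await spider.page.act("click the first story link");

// Ask for data with a natural-language query (assumed signature).
const summary = await spider.page.extract("the article title and first paragraph");

// Hand a multi-step goal to the autonomous agent (assumed signature).
const result = await spider.agent("find the top comment on this story and summarize it");
```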

| | Spider Browser | Kernel | Browserbase |
| --- | --- | --- | --- |
| What you get | Browser + extraction + stealth + retry | Browser | Browser |
| Anti-bot bypass | Automatic | Manual | Manual |
| Success rate | 100% on 999 URLs | Not published | Not published |
| End-to-end latency | 2.5s median | Not published | Not published |
| Session creation | 705ms median | Sub-150ms (claimed) | Slower (per Kernel) |
| AI extraction | Built-in | None | None |
| Cold start | None (pre-warmed) | Sub-150ms | Slower |
| Open source | SDK + dataset | Browser images | Stagehand SDK |

Kernel optimizes for cold start speed. That matters when cold start latency is your bottleneck. For most agent workflows, the bottleneck is getting past bot detection, extracting content reliably, and handling failures across hundreds of different sites.

Reproduce it yourself

The dataset and full results are at github.com/spider-rs/spider-browser-dataset, including domains, URLs, timing breakdowns, and a machine-readable summary.

git clone https://github.com/spider-rs/spider-browser-dataset.git
cd spider-browser-dataset/typescript
npm install

# Full benchmark (999 URLs, ~19 min)
SPIDER_API_KEY=your-key npx tsx __tests__/stealth-test.ts --target=1000 --concurrency=25

# Quick test (200 URLs, ~4 min)
SPIDER_API_KEY=your-key npx tsx __tests__/stealth-test.ts --target=200

We also maintain 1,004 ready-to-use scraper scripts across 32 categories, each using the spider-browser SDK with field-level extraction. Clone the repo, set your API key, run.
