Web Unblocker API
Some websites go beyond standard protections with advanced anti-bot systems, CAPTCHAs, and browser fingerprint checks. The Unblocker endpoint applies specialized techniques to access this content when standard crawling and scraping can't get through.
When to Use Unblocker vs. Standard Crawl
Standard Crawl / Scrape
Works for most websites. Spider's built-in Smart mode already handles JavaScript rendering, common bot detection, and standard protections. Start here; it's cheaper and faster.
- Standard JavaScript-rendered sites
- Basic rate limiting / IP blocks
- Common Cloudflare / CDN protections
Unblocker
For sites where standard requests return blocks, challenges, or empty content. Costs 10-40 additional credits per successful request in exchange for advanced bypass techniques; a fallback pattern is sketched after this list.
- Advanced CAPTCHA / challenge pages
- Sophisticated browser fingerprint checks
- Sites that actively detect and block bots
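For example, here is a hedged sketch of that escalation pattern. It assumes the standard scrape call is scrape_url, as in the Spider Python client, and that a blocked page simply comes back with empty or missing content; adapt the block check to whatever signal your target actually returns.

from spider import Spider

client = Spider()
url = "https://protected-site.com/data"
params = {"return_format": "markdown"}

# Try the cheaper, faster standard scrape first.
result = client.scrape_url(url, params=params)
content = result[0].get("content") if result else None

# Escalate to the Unblocker only when the standard request comes back blocked or empty.
if not content:
    result = client.unblocker(url, params=params)
    content = result[0]["content"]

print(content)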
How It Works
Browser Fingerprinting
Spider launches a real browser session with authentic fingerprint characteristics: screen dimensions, WebGL rendering, fonts, and hardware profiles that match genuine user browsers.
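A minimal sketch that turns the realistic fingerprint on explicitly, reusing the fingerprint parameter shown in the cURL example further down; the URL is a placeholder.

from spider import Spider

client = Spider()

# Launch a real browser session with an authentic fingerprint profile.
result = client.unblocker(
    "https://protected-site.com/account",
    params={
        "fingerprint": True,        # realistic screen, WebGL, font, and hardware characteristics
        "return_format": "markdown",
    },
)
print(result[0]["content"])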
Challenge Resolution
When the target site presents challenges, puzzles, or verification steps, Spider handles them automatically. The request waits until the actual content behind the protection layer has loaded.
Content Extraction
Once past protections, content is extracted using the same powerful engine as the standard crawl and scrape endpoints. Same output formats, same quality.
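Because the output shape matches the standard endpoints, downstream handling stays the same. A short sketch, assuming each result item exposes its extracted content under the content key as in the examples below:

from spider import Spider

client = Spider()

# Same extraction engine and output format as crawl/scrape,
# so existing handling code works unchanged.
for page in client.unblocker(
    "https://protected-site.com/article",
    params={"return_format": "markdown"},
):
    print(page["content"])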
Key Capabilities
Automatic Retry Logic
If the first attempt is blocked, Spider automatically retries with different fingerprints, proxy paths, and timing patterns to maximize success rate.
Session Persistence
Maintain browser sessions across requests with the session parameter. Once a site is unblocked, subsequent requests can reuse the authenticated session state.
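A sketch of reusing an unblocked session across requests. Beyond the session: true flag shown in the cURL example, the exact reuse mechanics are an assumption here: the sketch simply passes the same flag on follow-up calls and relies on Spider to carry the established state forward.

from spider import Spider

client = Spider()
params = {
    "return_format": "markdown",
    "session": True,   # keep the browser session alive after unblocking
}

# The first request does the heavy lifting: fingerprinting, challenges, verification.
first = client.unblocker("https://protected-site.com/reports", params=params)

# Follow-up requests can reuse the unblocked session state instead of
# solving the protection layer again.
second = client.unblocker("https://protected-site.com/reports/archive", params=params)

print(second[0]["content"])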
Premium Proxy Integration
Combine with residential, mobile, or ISP proxies for maximum stealth. Geo-target your requests so they originate from the right country.
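A minimal sketch of combining the Unblocker with proxying and geo-targeting, using the proxy_enabled and country_code parameters already shown in the Python example below; choosing a specific proxy tier (residential, mobile, ISP) depends on your plan configuration and is not shown here.

from spider import Spider

client = Spider()

# Route the unblocked request through a proxy that exits in the target country.
result = client.unblocker(
    "https://protected-site.com/pricing",
    params={
        "return_format": "markdown",
        "proxy_enabled": True,     # use Spider's premium proxy network
        "country_code": "de",      # make the request appear to originate in Germany
    },
)
print(result[0]["content"])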
Same API Contract
The Unblocker accepts the same parameters as the crawl and scrape endpoints. Switch between them by changing the endpoint path without any code restructuring.
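A sketch of switching endpoints without restructuring code; scrape_url is assumed to be the standard scrape method of the Spider Python client, and the shared params dict is passed unchanged to either call.

from spider import Spider

client = Spider()
params = {
    "return_format": "markdown",
    "proxy_enabled": True,
}

# Same parameters, different endpoint: only the method (or URL path) changes.
standard = client.scrape_url("https://example.com", params=params)
unblocked = client.unblocker("https://protected-site.com", params=params)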
Pay Per Success
The additional 10-40 credits are only charged on successful unblocking. If Spider can't get through, you don't pay the premium.
Custom JS Execution
Run JavaScript on the page after unblocking with the evaluate_on_new_document parameter. Interact with elements, expand content, or trigger actions.
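A hedged sketch of custom JS execution. The text names evaluate_on_new_document; the snippet assumes it accepts a string of JavaScript to inject into the page, and wraps the logic in a DOMContentLoaded listener so it runs once the document is ready. Check the parameter reference for the exact shape.

from spider import Spider

client = Spider()

# Inject a script that expands collapsed <details> sections before extraction.
result = client.unblocker(
    "https://protected-site.com/docs",
    params={
        "return_format": "markdown",
        "evaluate_on_new_document": (
            "document.addEventListener('DOMContentLoaded', () => {"
            "  document.querySelectorAll('details').forEach(d => d.open = true);"
            "});"
        ),
    },
)
print(result[0]["content"])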
Code Examples
Python

from spider import Spider

client = Spider()

# Access a site with heavy bot protection
result = client.unblocker(
    "https://protected-site.com/data",
    params={
        "return_format": "markdown",
        "proxy_enabled": True,
        "country_code": "us",
    },
)

print(result[0]["content"])

cURL

curl -X POST https://api.spider.cloud/unblocker \
-H "Authorization: Bearer $SPIDER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "https://protected-site.com",
"return_format": "markdown",
"session": true,
"fingerprint": true,
"proxy_enabled": true
}'

Credit Cost
- Standard crawl credits apply to every request
- An additional 10-40 credits per successful unblock
- No premium charge if unblocking fails