Key Concepts
Learn the foundational ideas behind Spider, including how it enables fast, concurrent, and streaming-based crawling.
Web Crawling Efficiency
Spider is designed for high-speed and efficient web crawling with support for concurrency and streaming at scale.
Concurrent Crawling
Concurrent crawling lets Spider issue many requests at once, making it well suited to large-scale scraping and significantly increasing data-collection throughput.
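The pattern can be sketched as a bounded worker pool issuing requests in parallel. This is a minimal illustration, not Spider's actual implementation; the `fetch` function here is a hypothetical stand-in for a real crawl request.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a real Spider API call; a real crawler would
# issue an HTTP request here and return the response.
def fetch(url: str) -> dict:
    return {"url": url, "status": 200}

def crawl_concurrently(urls, max_workers=8):
    # All URLs are submitted at once; in-flight requests are bounded by
    # the worker-pool size, so throughput scales without unbounded load.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, urls))

results = crawl_concurrently([f"https://example.com/page/{i}" for i in range(20)])
```

With a pool of 8 workers, the 20 pages are fetched roughly 8 at a time instead of one after another, which is where the speedup over sequential crawling comes from.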
Screenshot Capabilities
Spider provides built-in screenshot functionality, an efficient way to capture the visual rendering of web pages. This is invaluable for applications that need visual confirmation or rendering analysis.
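A screenshot request is typically just an extra flag on the crawl payload. The field names below (`screenshot`, `full_page`) are assumptions for illustration, not the confirmed Spider API schema; consult the API reference for the real parameter names.

```python
import json

# Hypothetical request payload: the "screenshot" and "full_page" field
# names are assumptions, not the confirmed Spider API schema.
payload = {
    "url": "https://example.com",
    "screenshot": True,   # ask the crawler to render and capture the page
    "full_page": True,    # capture beyond the visible viewport
}

# Serialize as the JSON body of an API request.
body = json.dumps(payload)
```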
AI Integrations
Spider seamlessly integrates with AI models, allowing for advanced data processing and analysis directly within the crawling process. This includes text summarization, sentiment analysis, and more.
Credits System
Spider usage is billed through a credit system, which makes usage tracking and cost management straightforward. For a detailed breakdown of credits and pricing, refer to the Spider pricing page. The units shown in an API request map to cost at a rate of $1 per 10,000 credits, and credits are deducted as decimals so charges track the actual cost.
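The stated rate ($1 per 10,000 credits) makes the conversion a single division. A minimal sketch of the arithmetic:

```python
# From the stated formula: $1 buys 10,000 credits.
CREDITS_PER_DOLLAR = 10_000

def cost_in_dollars(credits: float) -> float:
    # Credits are deducted as decimals, so cost maps directly.
    return credits / CREDITS_PER_DOLLAR

# e.g. a request that consumes 25 credits costs $0.0025
print(cost_in_dollars(25))  # → 0.0025
```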