API Reference

The Spider API is based on REST. It is predictable, returns JSON-encoded responses, and uses standard HTTP response codes, authentication, and verbs. Set your API secret key in the Authorization header using the format Bearer $TOKEN. You can set the Content-Type header to application/json, application/xml, text/csv, or application/jsonl to shape the response.
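
As a minimal sketch (assuming your secret key is stored in a SPIDER_API_KEY environment variable), the shared headers used by the examples in this reference look like this:

import os

# Shared headers for the examples in this reference.
# The Bearer prefix follows the format described above; the examples below
# pass the key value directly, so use whichever form your account expects.
headers = {
    'Authorization': f"Bearer {os.environ['SPIDER_API_KEY']}",
    # Any of application/json, application/xml, text/csv, or application/jsonl
    # can be set here to shape the response format.
    'Content-Type': 'application/json',
}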

The Spider API supports multi-domain actions. You can work with multiple domains per request by passing the URLs as a comma-separated list.

The Spider API differs between accounts as we release new versions and tailor functionality. You can add v1 before any path to pin to that version. Executing a request on this page by pressing the Run button consumes live credits and treats the response as a genuine result.
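
Putting both of those points together, here is a minimal sketch that crawls two domains in a single request and pins the call to the v1 path (example.com stands in for any second site):

import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/json',
}

# Two domains, comma separated, pinned to the v1 API path.
json_data = {"limit": 5, "return_format": "markdown", "url": "https://spider.cloud,https://example.com"}

response = requests.post('https://api.spider.cloud/v1/crawl',
  headers=headers,
  json=json_data)

print(response.json())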

Just getting started?

Check out our development quickstart guide.

Not a developer?

Use Spider's no-code options or apps to get started with Spider and do more with your account, no code required.

Base URL
https://api.spider.cloud

Crawl websites

Start crawling one or more websites to collect resources.

POST https://api.spider.cloud/crawl

Request body

  • url required string

    The URI resource to crawl. This can be a comma-separated list for multiple URLs.

  • limit number

    The maximum number of pages allowed to crawl per website. Remove the value or set it to 0 to crawl all pages.

  • request string

    The request type to perform. Possible values are http, chrome, and smart. Use smart to perform an HTTP request by default and switch to JavaScript rendering when it is needed for the HTML.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/json',
}

json_data = {"limit":25,"return_format":"markdown","url":"https://spider.cloud"}

response = requests.post('https://api.spider.cloud/crawl', 
  headers=headers, 
  json=json_data)

print(response.json())
Response
[
  {
    "content": "<html>...",
    "error": null,
    "status": 200,
    "credits_used": 2,
    "url": "https://spider.cloud"
  },
  // more content...
]
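
Because the response is a JSON array with one object per crawled page, a short loop (continuing from the example request above) can separate successful pages from failures:

# Sketch: walk the crawl results, keeping successful pages and reporting failures.
for page in response.json():
    if page["error"] is None and page["status"] == 200:
        print(page["url"], len(page["content"]), "bytes,", page["credits_used"], "credits")
    else:
        print(page["url"], "failed with status", page["status"], page["error"])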

Search

Perform a search and gather a list of websites to start crawling and collect resources.

POST https://api.spider.cloud/search

Request body

  • search required string

    The search query you want to perform.

  • search_limit number

    The maximum number of URLs to fetch or crawl from the search results. Remove the value or set it to 0 to crawl all URLs from the search results.

  • fetch_page_content boolean

    Fetch all the content of the websites by performing crawls. The default is true; if this is disabled, only the search results are returned instead.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/json',
}

json_data = {"search":"a sports website","search_limit":3,"limit":25,"return_format":"markdown"}

response = requests.post('https://api.spider.cloud/search', 
  headers=headers, 
  json=json_data)

print(response.json())
Response
[
  {
    "content": "<html>...",
    "error": null,
    "status": 200,
    "credits_used": 2,
    "url": "https://spider.cloud"
  },
  // more content...
]
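
If only the search result listings are needed, fetch_page_content can be disabled so no crawls are performed; a minimal sketch, continuing with the headers from the example above:

# Sketch: return only the search results without crawling each page.
json_data = {"search": "a sports website", "search_limit": 3, "fetch_page_content": False}

response = requests.post('https://api.spider.cloud/search',
  headers=headers,
  json=json_data)

print(response.json())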

Crawl websites get links

Start crawling one or more websites to collect the links found.

POST https://api.spider.cloud/links

Request body

  • url required string

    The URI resource to crawl. This can be a comma-separated list for multiple URLs.

  • limit number

    The maximum number of pages allowed to crawl per website. Remove the value or set it to 0 to crawl all pages.

  • request string

    The request type to perform. Possible values are http, chrome, and smart. Use smart to perform an HTTP request by default and switch to JavaScript rendering when it is needed for the HTML.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/json',
}

json_data = {"limit":25,"return_format":"markdown","url":"https://spider.cloud"}

response = requests.post('https://api.spider.cloud/links', 
  headers=headers, 
  json=json_data)

print(response.json())
Response
[
  {
    "url": "https://spider.cloud",
    "status": 200,
    "error": null
  },
  // more content...
]

Screenshot websites

Start taking screenshots of one or more websites to collect images as base64 or binary.

POST https://api.spider.cloud/screenshot

Request body

  • url required string

    The URI resource to crawl. This can be a comma-separated list for multiple URLs.

  • limit number

    The maximum number of pages allowed to crawl per website. Remove the value or set it to 0 to crawl all pages.

  • request string

    The request type to perform. Possible values are http, chrome, and smart. Use smart to perform an HTTP request by default and switch to JavaScript rendering when it is needed for the HTML.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/json',
}

json_data = {"limit":25,"return_format":"markdown","url":"https://spider.cloud"}

response = requests.post('https://api.spider.cloud/screenshot', 
  headers=headers, 
  json=json_data)

print(response.json())
Response
[
  {
    "content": "base64...",
    "error": null,
    "status": 200,
    "credits_used": 2,
    "url": "https://spider.cloud"
  },
  // more content...
]
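
Since the content field holds base64 data, each screenshot can be decoded and written to disk; a sketch over the response shape above, assuming the images decode to PNG files (the exact format may depend on your request options):

import base64

# Sketch: decode each base64 screenshot from the response above and save it locally.
for i, shot in enumerate(response.json()):
    if shot["error"] is None:
        with open(f"screenshot-{i}.png", "wb") as f:
            f.write(base64.b64decode(shot["content"]))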

Transform HTML

Quickly transform HTML to Markdown or text. Each HTML transformation costs 1 credit. You can send up to 10MB of data at once. The transform API is also built into the /crawl endpoint via the return_format parameter.

POST https://api.spider.cloud/transform

Request body

  • data object

    A list of HTML data to transform. Each object in the list takes the keys html and url. The url key is optional and only used when readability is enabled.

  • readability boolean

    Use readability to pre-process the content for reading. This may drastically improve the content for LLM usage.

  • return_format string

    The format to return the data in. Possible values are markdown, commonmark, raw, text, and bytes. Use raw to return the page in its default format (e.g. HTML).

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/json',
}

json_data = {"return_format":"markdown","data":[{"html":"<html>\n<head>\n  <title>Example Transform</title>  \n</head>\n<body>\n<div>\n    <h1>Example Website</h1>\n    <p>This is some example markup to use to test the transform function.</p>\n    <p><a href=\"https://spider.cloud/guides\">Guides</a></p>\n</div>\n</body></html>","url":"https://example.com"}]}

response = requests.post('https://api.spider.cloud/transform', 
  headers=headers, 
  json=json_data)

print(response.json())
Response
{
    "content": [
      "Example Domain
Example Domain
==========
This domain is for use in illustrative examples in documents. You may use this domain in literature without prior coordination or asking for permission.
[More information...](https://www.iana.org/domains/example)"
    ],
    "credits_used": 2,
    "error": "",
    "status": 200
  }
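
Several documents can be transformed in a single call by adding more objects to data, and readability can be enabled to pre-process each one; a minimal sketch, reusing the headers from the example above:

# Sketch: batch-transform two HTML snippets to markdown with readability enabled.
json_data = {
    "return_format": "markdown",
    "readability": True,
    "data": [
        {"html": "<html><body><h1>First page</h1></body></html>", "url": "https://example.com/a"},
        {"html": "<html><body><h1>Second page</h1></body></html>", "url": "https://example.com/b"},
    ],
}

response = requests.post('https://api.spider.cloud/transform',
  headers=headers,
  json=json_data)

print(response.json())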

Query

Query a website resource from our database. This costs 1 credit per successful retrieval.

POST https://api.spider.cloud/data/query

Request body

  • url string

    The exact URL of the resource that you want to get.

  • domain string

    The website domain you want to query.

  • pathname string

    The website pathname you want to query.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/json',
}

json_data = {"url":"https://spider.cloud"}

response = requests.post('https://api.spider.cloud/data/query', 
  headers=headers, 
  json=json_data)

print(response.json())
Response
{
  "content": "<html>
    <body>
      <div>
          <h1>Example Website</h1>
      </div>
    </body>
  </html>",
  "error": "",
  "status": 200
}

Proxy-Mode
Alpha

Spider also offers a proxy front-end to the service. The Spider proxy handles requests just like any standard proxy, with the option to use high-performance residential proxies at 1TB per second.

  • HTTP address: proxy.spider.cloud:8888
  • HTTPS address: proxy.spider.cloud:8889
  • Username: YOUR-API-KEY
  • Password: PARAMETERS
Example proxy request
import requests, os


# Proxy configuration
proxies = {
    'http': f"http://{os.getenv('SPIDER_API_KEY')}:request=Raw&premium_proxy=False@proxy.spider.cloud:8888",
    'https': f"https://{os.getenv('SPIDER_API_KEY')}:request=Raw&premium_proxy=False@proxy.spider.cloud:8889"
}

# Function to make a request through the proxy
def get_via_proxy(url):
    try:
        response = requests.get(url, proxies=proxies)
        response.raise_for_status()
        print('Response HTTP Status Code: ', response.status_code)
        print('Response HTTP Response Body: ', response.content)
        return response.text
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return None

# Example usage
if __name__ == "__main__":
     get_via_proxy("https://www.choosealicense.com")
     get_via_proxy("https://www.choosealicense.com/community")

Pipelines

Create powerful workflows with our pipeline API endpoints. Use AI to extract contacts from any website or filter links with prompts.

Crawl websites and extract contacts

Start crawling one or more websites to collect all contacts using AI. A minimum of $25 in credits is required for extraction.

POST https://api.spider.cloud/pipeline/extract-contacts

Request body

  • url required string

    The URI resource to crawl. This can be a comma-separated list for multiple URLs.

  • limit number

    The maximum number of pages allowed to crawl per website. Remove the value or set it to 0 to crawl all pages.

  • request string

    The request type to perform. Possible values are http, chrome, and smart. Use smart to perform an HTTP request by default and switch to JavaScript rendering when it is needed for the HTML.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

json_data = {"limit":25,"return_format":"markdown","url":"https://spider.cloud"}

response = requests.post('https://api.spider.cloud/pipeline/extract-contacts', 
  headers=headers, 
  json=json_data)

print(response.json())
Response
[
  {
    "content": [{
      "full_name": "John Doe",
      "email": "johndoe@gmail.com",
      "phone": "555-555-555",
      "title": "Baker"
    }],
    "error": null,
    "status": 200,
    "credits_used": 2,
    "url": "https://spider.cloud"
  },
  // more content...
]
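
The extracted contacts can be flattened into a CSV for downstream use; a small sketch over the response shape shown above:

import csv

# Sketch: write every contact from the extract-contacts response to a CSV file.
rows = []
for page in response.json():
    if page["error"] is None:
        rows.extend(page["content"])

with open("contacts.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["full_name", "email", "phone", "title"])
    writer.writeheader()
    writer.writerows(rows)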

Label website

Crawl a website and accurately categorize it using AI.

POST https://api.spider.cloud/pipeline/label

Request body

  • url required string

    The URI resource to crawl. This can be a comma-separated list for multiple URLs.

  • limit number

    The maximum number of pages allowed to crawl per website. Remove the value or set it to 0 to crawl all pages.

  • request string

    The request type to perform. Possible values are http, chrome, and smart. Use smart to perform an HTTP request by default and switch to JavaScript rendering when it is needed for the HTML.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

json_data = {"limit":25,"return_format":"markdown","url":"https://spider.cloud"}

response = requests.post('https://api.spider.cloud/pipeline/label', 
  headers=headers, 
  json=json_data)

print(response.json())
Response
[
  {
    "content": ["Government"],
    "error": null,
    "status": 200,
    "credits_used": 2,
    "url": "https://spider.cloud"
  },
  // more content...
]

Crawl websites from text

Crawl websites found in raw text or markdown.

POST https://api.spider.cloud/pipeline/crawl-text

Request body

  • text required string

    The text string to extract URLs from. The maximum size of the text is 10MB.

  • limit number

    The maximum number of pages allowed to crawl per website. Remove the value or set it to 0 to crawl all pages.

  • request string

    The request type to perform. Possible values are http, chrome, and smart. Use smart to perform an HTTP request by default and switch to JavaScript rendering when it is needed for the HTML.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

json_data = {"text":"Check this link: https://example.com and email to example@email.com","limit":25,"return_format":"markdown"}

response = requests.post('https://api.spider.cloud/pipeline/crawl-text', 
  headers=headers, 
  json=json_data)

print(response.json())
Response
[
  {
    "content": "<html>...",
    "error": null,
    "status": 200,
    "credits_used": 2,
    "url": "https://spider.cloud"
  },
  // more content...
]

Queries

Query the data that you collect. Add dynamic filters for extracting exactly what is needed.

Websites Collection

Get the websites stored.

GET https://api.spider.cloud/data/websites

Request params

  • limit string

    The limit of records to get.

  • page number

    The current page to get.

  • domain string

    Filter a single domain record.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

response = requests.get('https://api.spider.cloud/data/websites?limit=25&return_format=markdown', 
  headers=headers)

print(response.json())
Response
{
  "data": [
    {
      "id": "2a503c02-f161-444b-b1fa-03a3914667b6",
      "user_id": "6bd06efa-bb0b-4f1f-a23f-05db0c4b1bfd",
      "url": "6bd06efa-bb0b-4f1f-a29f-05db0c4b1bfd/example.com/index.html",
      "domain": "spider.cloud",
      "created_at": "2024-04-18T15:40:25.667063+00:00",
      "updated_at": "2024-04-18T15:40:25.667063+00:00",
      "pathname": "/",
      "fts": "",
      "scheme": "https:",
      "last_checked_at": "2024-05-10T13:39:32.293017+00:00",
      "screenshot": null
    }
  ]
}
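
The limit and page parameters can be combined to walk the whole collection; a minimal sketch that reuses the headers from the example above and assumes an empty data list marks the end (page is taken to be zero-indexed):

# Sketch: page through the stored websites 25 records at a time.
page = 0
while True:
    response = requests.get('https://api.spider.cloud/data/websites',
      params={"limit": 25, "page": page},
      headers=headers)
    records = response.json().get("data", [])
    if not records:
        break
    for record in records:
        print(record["domain"], record["pathname"])
    page += 1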

Pages Collection

Get the pages/resources stored.

GET https://api.spider.cloud/data/pages

Request params

  • limit string

    The limit of records to get.

  • page number

    The current page to get.

  • domain string

    Filter a single domain record.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

response = requests.get('https://api.spider.cloud/data/pages?limit=25&return_format=markdown', 
  headers=headers)

print(response.json())
Response
{
  "data": [
    {
      "id": "733b0d0f-e406-4229-949d-8068ade54752",
      "user_id": "6bd06efa-bb0b-4f1f-a29f-05db0c4b1bfd",
      "url": "https://spider.cloud",
      "domain": "spider.cloud",
      "created_at": "2024-04-17T01:28:15.016975+00:00",
      "updated_at": "2024-04-17T01:28:15.016975+00:00",
      "proxy": true,
      "headless": true,
      "crawl_budget": null,
      "scheme": "https:",
      "last_checked_at": "2024-04-17T01:28:15.016975+00:00",
      "full_resources": false,
      "metadata": true,
      "gpt_config": null,
      "smart_mode": false,
      "fts": "'spider.cloud':1"
    }
  ]
}

Pages Metadata Collection

Get the pages metadata/resources stored.

GET https://api.spider.cloud/data/pages_metadata

Request params

  • limit string

    The limit of records to get.

  • page number

    The current page to get.

  • domain string

    Filter a single domain record.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

response = requests.get('https://api.spider.cloud/data/pages_metadata?limit=25&return_format=markdown', 
  headers=headers)

print(response.json())
Response
{
  "data": [
    {
      "id": "e27a1995-2abe-4319-acd1-3dd8258f0f49",
      "user_id": "253524cd-3f94-4ed1-83b3-f7fab134c3ff",
      "url": "253524cd-3f94-4ed1-83b3-f7fab134c3ff/www.google.com/search?query=spider.cloud.html",
      "domain": "www.google.com",
      "resource_type": "html",
      "title": "spider.cloud - Google Search",
      "description": "",
      "file_size": 1253960,
      "embedding": null,
      "pathname": "/search",
      "created_at": "2024-05-18T17:40:16.4808+00:00",
      "updated_at": "2024-05-18T17:40:16.4808+00:00",
      "keywords": [
        "Fastest Web Crawler spider",
        "Web scraping"
      ],
      "labels": "Search Engine",
      "extracted_data": null,
      "fts": "'/search':1"
    }
  ]
}

Leads Collection

Get the pages contacts stored.

GET https://api.spider.cloud/data/contacts

Request params

  • limit string

    The limit of records to get.

  • page number

    The current page to get.

  • domain string

    Filter a single domain record.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

response = requests.get('https://api.spider.cloud/data/contacts?limit=25&return_format=markdown', 
  headers=headers)

print(response.json())
Response
{
  "data": [
    {
      "full_name": "John Doe",
      "email": "johndoe@gmail.com",
      "phone": "555-555-555",
      "title": "Baker"
    }
  ]
}

Crawl State

Get the state of the crawl for the domain.

GET https://api.spider.cloud/data/crawl_state

Request params

  • limit string

    The limit of records to get.

  • page number

    The current page to get.

  • domain string

    Filter a single domain record.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

response = requests.get('https://api.spider.cloud/data/crawl_state?limit=25&return_format=markdown', 
  headers=headers)

print(response.json())
Response
{
  "data": {
    "id": "195bf2f2-2821-421d-b89c-f27e57ca71fh",
    "user_id": "6bd06efa-bb0a-4f1f-a29f-05db0c4b1bfg",
    "domain": "spider.cloud",
    "url": "https://spider.cloud/",
    "links": 1,
    "credits_used": 3,
    "mode": 2,
    "crawl_duration": 340,
    "message": null,
    "request_user_agent": "Spider",
    "level": "info",
    "status_code": 0,
    "created_at": "2024-04-21T01:21:32.886863+00:00",
    "updated_at": "2024-04-21T01:21:32.886863+00:00"
  },
  "error": ""
}

Crawl Logs

Get the last 24 hours of logs.

GET https://api.spider.cloud/data/crawl_logs

Request params

  • limit string

    The limit of records to get.

  • page number

    The current page to get.

  • domain string

    Filter a single domain record.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

response = requests.get('https://api.spider.cloud/data/crawl_logs?limit=25&return_format=markdown', 
  headers=headers)

print(response.json())
Response
{
  "data": {
    "id": "195bf2f2-2821-421d-b89c-f27e57ca71fh",
    "user_id": "6bd06efa-bb0a-4f1f-a29f-05db0c4b1bfg",
    "domain": "spider.cloud",
    "url": "https://spider.cloud/",
    "links": 1,
    "credits_used": 3,
    "mode": 2,
    "crawl_duration": 340,
    "message": null,
    "request_user_agent": "Spider",
    "level": "info",
    "status_code": 0,
    "created_at": "2024-04-21T01:21:32.886863+00:00",
    "updated_at": "2024-04-21T01:21:32.886863+00:00"
  },
  "error": ""
}

Credits

Get the remaining credits available.

GET https://api.spider.cloud/data/credits

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

response = requests.get('https://api.spider.cloud/data/credits?limit=25&return_format=markdown', 
  headers=headers)

print(response.json())
Response
{
  "data": {
    "id": "8d662167-5a5f-41aa-9cb8-0cbb7d536891",
    "user_id": "6bd06efa-bb0a-4f1f-a29f-05db0c4b1bfg",
    "credits": 53334,
    "created_at": "2024-04-21T01:21:32.886863+00:00",
    "updated_at": "2024-04-21T01:21:32.886863+00:00"
  }
}
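
A simple pre-flight check can read the remaining credits before launching a large crawl; a sketch over the response shape above, reusing the headers from the earlier examples (the 1000-credit threshold is arbitrary):

# Sketch: check the remaining credits before starting an expensive crawl.
response = requests.get('https://api.spider.cloud/data/credits', headers=headers)
remaining = response.json()["data"]["credits"]

if remaining < 1000:
    print(f"Only {remaining} credits left; consider topping up before crawling.")
else:
    print(f"{remaining} credits available.")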

Crons

Get the cron jobs that are set to keep data fresh.

GET https://api.spider.cloud/data/crons

Request params

  • limit string

    The limit of records to get.

  • page number

    The current page to get.

  • domain string

    Filter a single domain record.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

response = requests.get('https://api.spider.cloud/data/crons?limit=25&return_format=markdown', 
  headers=headers)

print(response.json())
Response

User Profile

Get the profile of the user. This returns data such as approved limits and usage for the month.

GET https://api.spider.cloud/data/profile

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

response = requests.get('https://api.spider.cloud/data/profile', 
  headers=headers)

print(response.json())
Response
{
  "data": {
    "id": "6bd06efa-bb0b-4f1f-a29f-05db0c4b1bfd",
    "email": "user@gmail.com",
    "stripe_id": "cus_OYO2rAhSQaYqHT",
    "is_deleted": null,
    "proxy": null,
    "headless": false,
    "billing_limit": 50,
    "billing_limit_soft": 120,
    "approved_usage": 0,
    "crawl_budget": {
      "*": 200
    },
    "usage": null,
    "has_subscription": false,
    "depth": null,
    "full_resources": false,
    "meta_data": true,
    "billing_allowed": false,
    "initial_promo": false
  }
}

User-Agent

Get a real user agent to use for crawling.

GET https://api.spider.cloud/data/user_agents

Request params

  • limit string

    The limit of records to get.

  • os string

    Filter by device OS, e.g. Android, Mac OS, Windows, Linux, and more.

  • page number

    The current page to get.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

response = requests.get('https://api.spider.cloud/data/user_agents?limit=25&return_format=markdown', 
  headers=headers)

print(response.json())
Response
{
  "data": {
    "agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36",
    "platform": "Chrome",
    "platform_version": "123.0.0.0",
    "device": "Macintosh",
    "os": "Mac OS",
    "os_version": "10.15.7",
    "cpu_architecture": "",
    "mobile": false,
    "device_type": "desktop"
  }
}

Download file

Download a resource from storage.

GET https://api.spider.cloud/data/download

Request params

  • url string

    The exact URL of the resource that you want to download.

  • domain string

    The website domain you want to query.

  • pathname string

    The website pathname you want to query.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

response = requests.get('https://api.spider.cloud/data/download?', 
  headers=headers)

print(response.json())
Response
{
  "data": "<file>"
}

Manage

Configure data to enhance crawl efficiency: create, update, and delete records.

Website

Create or update a website configuration.

POST https://api.spider.cloud/data/websites

Request body

  • url required string

    The URI resource to crawl. This can be a comma-separated list for multiple URLs.

  • limit number

    The maximum number of pages allowed to crawl per website. Remove the value or set it to 0 to crawl all pages.

  • request string

    The request type to perform. Possible values are http, chrome, and smart. Use smart to perform an HTTP request by default and switch to JavaScript rendering when it is needed for the HTML.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

json_data = {"limit":25,"return_format":"markdown","url":"https://spider.cloud"}

response = requests.post('https://api.spider.cloud/data/websites', 
  headers=headers, 
  json=json_data)

print(response.json())
Response
{
  "data": null
}

Website

Delete a website from your collection. Omit the url field from the body to delete all websites.

DELETE https://api.spider.cloud/data/websites

Request body

  • url required string

    The URL of the website to delete. This can be a comma-separated list for multiple URLs.

Example request
import requests, os

headers = {
    'Authorization': os.environ["SPIDER_API_KEY"],
    'Content-Type': 'application/jsonl',
}

json_data = {"limit":25,"return_format":"markdown","url":"https://spider.cloud"}

response = requests.post('https://api.spider.cloud/crawl', 
  headers=headers, 
  json=json_data)

print(response.json())
Response
{
  "data": null
}