CrewAI Integration
Give CrewAI agents the ability to scrape and crawl the web using Spider's SpiderTool. Agents get LLM-ready content from any URL.
Install
Install the Spider client and the CrewAI tools extra:

```shell
pip install spider_client 'crewai[tools]'
```

Set your API key as SPIDER_API_KEY in your environment.
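If you want to fail fast when the key is missing, a small helper along these lines mirrors the environment lookup SpiderTool performs when no `api_key` argument is passed (a sketch only; `get_spider_api_key` is not part of crewai_tools):

```python
import os

def get_spider_api_key() -> str:
    """Fetch the Spider API key from the environment.

    Sketch of the lookup SpiderTool performs when api_key is not
    passed explicitly; this helper is not part of the library.
    """
    key = os.environ.get("SPIDER_API_KEY")
    if not key:
        raise RuntimeError("Set SPIDER_API_KEY before creating SpiderTool")
    return key
```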
Usage
Initialize SpiderTool and pass it to your agent. Data returned is LLM-ready markdown by default.
Simple Instantiation of SpiderTool

```python
from crewai_tools import SpiderTool

# Enable the agent to scrape any website it encounters during execution
spider_tool = SpiderTool(api_key='YOUR_API_KEY')
```

SpiderTool Arguments
Configuration options for the SpiderTool.
- `api_key` (string, optional): The Spider API key. If not provided, the tool looks for `SPIDER_API_KEY` in the environment variables.
- `website_url` (string): The website URL. Used as a fallback if set when the tool is initialized.
- `log_failures` (bool): Whether to log scrape failures or fail silently. Defaults to `True`.
- `custom_params` (object, optional): Optional parameters to include with the request.
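For illustration, `custom_params` is a plain dict forwarded with the Spider request. The keys below (`return_format`, `limit`) are examples of Spider API request options — consult the Spider API reference for the full, current list:

```python
# Sketch: assemble custom_params for SpiderTool. The keys shown are
# example Spider API request options; verify names against the Spider docs.
custom_params = {
    "return_format": "markdown",  # keep output LLM-ready
    "limit": 5,                   # cap the number of pages fetched
}

# Then pass it at initialization, e.g.:
# spider_tool = SpiderTool(api_key="YOUR_API_KEY", custom_params=custom_params)
```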
Agent Setup
Create an agent with SpiderTool to scrape and research web content.
Agent Setup with SpiderTool

```python
from crewai import Agent, Task
from crewai_tools import SpiderTool

spider_tool = SpiderTool(api_key='YOUR_API_KEY')

# Create a researcher agent
research_agent = Agent(
    role="Web Researcher",
    goal="Find and summarize information about the contents of a website URL",
    backstory="You are an expert web researcher tasked with analyzing "
              "website content and extracting valuable insights.",
    tools=[spider_tool],  # pass the tool instance, not a call
    verbose=True  # enable logging for debugging
)

# Example task for the agent
task = Task(
    description="Analyze the website content and provide key insights",
    expected_output="A concise summary of the site's key insights",
    agent=research_agent
)
```

Next steps
See the CrewAI tasks docs for running multi-agent workflows.