Agentic Workflows on AWS with Amazon Bedrock, Claude 3 Haiku, and CrewAI
EDIT (Oct 2, 2024): updated to use CrewAI v0.65.2, removed LangChain dependency (unnecessary in latest CrewAI)
EDIT (May 18, 2024): updated code to use the langchain-aws package
TL;DR: This post demonstrates how to leverage Anthropic’s Claude 3 language models through Amazon Bedrock and the CrewAI framework, enabling seamless integration of large language models and agentic workflows into custom AI applications.
Introduction
The world of artificial intelligence is rapidly evolving, and staying ahead of the curve requires leveraging cutting-edge technologies and frameworks. In this post, we’ll explore the powerful combination of Anthropic’s Claude-3 language model, Amazon Bedrock, and the CrewAI framework, showcasing how to harness their capabilities to build robust AI applications.
Understanding CrewAI and Amazon Bedrock
Amazon Bedrock is a fully managed service that provides access to state-of-the-art large language models (LLMs) from various providers, including Anthropic’s Claude 3. It abstracts away the complexities of deploying and scaling LLMs, allowing developers to focus on building their applications.
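To see what Bedrock exposes underneath frameworks like CrewAI, here is a minimal sketch of calling Claude 3 Haiku directly with boto3’s Converse API. The helper names are my own, and actually running `ask_claude` requires AWS credentials with Bedrock model access in your region.

```python
# Model ID for Claude 3 Haiku on Amazon Bedrock (regional availability may vary)
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_converse_request(prompt: str) -> dict:
    """Build the keyword arguments for the bedrock-runtime Converse API."""
    return {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.5},
    }

def ask_claude(prompt: str) -> str:
    """Send a single-turn prompt to Claude 3 Haiku and return the reply text."""
    import boto3  # imported here so the sketch loads even without boto3 installed
    client = boto3.client("bedrock-runtime")  # uses your default AWS credentials
    response = client.converse(**build_converse_request(prompt))
    return response["output"]["message"]["content"][0]["text"]
```

This is the kind of low-level call that Bedrock-aware frameworks make on your behalf; the rest of this post lets CrewAI handle it.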
CrewAI is a framework that simplifies the development of AI applications by providing a structured approach to managing agents, tasks, and workflows. It enables developers to create intelligent agents, assign them specific tasks, and orchestrate their interactions, all while maintaining control and transparency.
The main concepts of CrewAI revolve around three core entities: Agents, Tasks, and Crews.
- Agents: These are standalone units programmed to perform tasks, make decisions, and communicate with other agents. They can use Tools, which can be simple search functions or complex integrations involving other chains, APIs, and more.
- Tasks: Tasks are assignments or jobs that an AI agent needs to complete. They can include additional information like which agent should do it and what tools they might need.
- Crews: A Crew is a team of agents, each with a specific role, that work together to achieve a common goal. Forming a crew involves assembling agents, defining their tasks, and establishing a sequence of task execution.
Harnessing Claude-3 with Amazon Bedrock and CrewAI
The notebook we’ll be exploring demonstrates how to set up and use Claude 3, a powerful language model developed by Anthropic, through the Amazon Bedrock service and the CrewAI framework.
The code sets up an agent called researcher with the goal of uncovering cutting-edge developments in AI and data science. It then assigns a task to conduct a comprehensive analysis of the latest AI advancements in 2024, with the expected output being a bullet-pointed report. Finally, it creates a `Crew` instance with the agent and task, and executes the workflow.
First, install the required packages, including boto3 and crewai. To do so, run the following CLI command:
pip install boto3 botocore crewai crewai_tools --upgrade --quiet
You can start by configuring CrewAI Agents, Tasks, Crew and LLM. For this example, we’re also going to be providing the agent with a tool to search for data online.
from crewai import Agent, Task, Crew, LLM
from crewai_tools import SerperDevTool
# CrewAI LLM config uses LiteLLM
llm = LLM(model="bedrock/anthropic.claude-3-haiku-20240307-v1:0")

# Define the Agent
researcher = Agent(
    role='Senior Research Analyst',
    goal='Uncover cutting-edge developments in AI and data science',
    backstory="You work at a leading tech think tank…",
    verbose=True,
    allow_delegation=False,
    llm=llm,
    tools=[SerperDevTool()]  # Tools for online searching
)

# Define the Task
task1 = Task(
    description="Conduct a comprehensive analysis of the following topic: {topic}",
    expected_output="Full analysis report in bullet points",
    agent=researcher
)

# Define the Crew
crew = Crew(
    agents=[researcher],
    tasks=[task1],
    verbose=True,
)
Finally, you can execute the crew:
result = crew.kickoff(inputs={"topic":"Amazon Bedrock Foundation Models"})
As the crew executes, CrewAI will:
- decompose the task into actions using ReAct (Reasoning and Acting), optionally using the tools assigned to the agents
- make multiple calls to Amazon Bedrock to complete each step from the previous phase
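The steps above can be sketched conceptually. Everything in the snippet below is illustrative, not CrewAI internals: each iteration asks the model to reason and pick an action (a tool call or a final answer), executes the tool, and feeds the observation back into the transcript.

```python
def fake_llm(transcript: str) -> str:
    """Stand-in for an Amazon Bedrock call; returns a canned decision."""
    if "Observation:" in transcript:
        return "Final Answer: summary of findings"
    return "Action: search[latest AI advancements 2024]"

def run_tool(action: str) -> str:
    """Stand-in for a tool such as SerperDevTool."""
    return "three relevant articles found"

def react_loop(task: str, max_steps: int = 5) -> str:
    """Minimal ReAct-style loop: reason, act, observe, repeat."""
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        decision = fake_llm(transcript)           # one LLM call per step
        if decision.startswith("Final Answer:"):
            return decision.removeprefix("Final Answer:").strip()
        observation = run_tool(decision)          # execute the chosen tool
        transcript += f"\n{decision}\nObservation: {observation}"
    return "stopped: step limit reached"

print(react_loop("Analyze the latest AI advancements"))
# → summary of findings
```

In the real workflow, each `fake_llm` call is a request to Claude 3 Haiku on Bedrock, which is why a single `kickoff` produces several model invocations.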
Once done, you can display the result in Markdown format:
from IPython.display import Markdown
Markdown(result)
You’re done! 🎉️
Adding Memory
Learn more on CrewAI Memory docs.
The CrewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents. This system comprises short-term memory, long-term memory, entity memory, and contextual memory, each serving a unique purpose in helping agents remember, reason, and learn from past interactions.
When configuring a crew, you can enable and customize each memory component to suit the crew’s objectives and the nature of the tasks it will perform. The memory system is disabled by default; you can activate it by setting memory=True in the crew configuration. The memory will use OpenAI embeddings by default, but you can change that by setting embedder to a different model, as in the following example:
crew = Crew(
    agents=[researcher],
    tasks=[task1],
    verbose=True,
    memory=True,
    embedder={
        "provider": "aws_bedrock",
        "config": {
            "model": "amazon.titan-embed-text-v2:0",
            "vector_dimension": 1024
        }
    }
)
The embedder only applies to short-term memory, which uses Chroma for RAG via the EmbedChain package.
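If you want to see what the embedder produces, you can call the same Titan model directly. The helper names below are my own: `build_titan_request` shows the request body shape for Titan Text Embeddings v2 (the `dimensions` field matches the vector_dimension in the embedder config above), and actually running `embed` requires AWS credentials with Bedrock access.

```python
import json

def build_titan_request(text: str, dimensions: int = 1024) -> str:
    """Build the JSON body for the amazon.titan-embed-text-v2:0 model."""
    return json.dumps({"inputText": text, "dimensions": dimensions})

def embed(text: str) -> list:
    """Call Titan Text Embeddings v2 directly and return the vector."""
    import boto3  # imported here so the sketch loads even without boto3 installed
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=build_titan_request(text),
    )
    return json.loads(response["body"].read())["embedding"]
```

Each short-term memory entry is embedded this way and stored in Chroma, so the crew can retrieve semantically similar context during later steps.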
Conclusion
By combining the power of Anthropic’s Claude 3 language models, Amazon Bedrock’s managed LLM service, and the CrewAI framework, developers can streamline the development of AI applications and harness the potential of state-of-the-art language models. This approach simplifies the integration of LLMs, enabling developers to focus on building intelligent agents, assigning tasks, and orchestrating workflows tailored to their specific needs.