Amazon Bedrock explained with memes — Converse API and Tool Usage w/ Anthropic Claude 3

Using Converse API and Tools to achieve new heights with Claude 3 in Amazon Bedrock

Davide Gallitelli
8 min read · Jun 13, 2024
Image by the author

In this blog, we’ll explore the new features of Amazon Bedrock, focusing on the Converse API and tool usage with Anthropic Claude 3. These updates make it easier for developers to create dynamic and user-friendly AI interactions. With the Converse API, managing multi-turn conversations becomes simpler and more consistent. Plus, the ability to use tools, or function calling, means the model can access external data to give more accurate and helpful responses.

Converse API vs Invoke API

I know it’s complicated, I’m here to help — Image by the author

On May 30th, Amazon Bedrock announced the new Converse API, which provides a consistent way for developers to invoke Amazon Bedrock models and manage multi-turn conversations. This API simplifies the process by removing the need to adjust for model-specific differences and enabling structured conversational history. Additionally, it supports Tool use (function calling) for select models, allowing developers to access external tools and APIs, expanding the capabilities of their applications.

To use the Converse API, you must use one of the following Amazon Bedrock models:

Models that support the Converse API — ref: Use the Converse API, Amazon Bedrock Documentation

Let’s start by deep diving into the API itself, since that’s where the value lies. For the sake of the examples, I’m not going to use the streaming APIs; however, note that both model invocation and the Converse API support response streaming. Also, all examples below will use Anthropic Claude 3 Haiku.
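For the curious, here is a minimal sketch of what streaming looks like with the Converse API’s streaming variant, converse_stream (the prompt is made up for illustration):

# Stream a response incrementally with the Converse API.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

response = client.converse_stream(
    modelId=model_id,
    messages=[{"role": "user", "content": [{"text": "Tell me a short joke."}]}],
    inferenceConfig={"maxTokens": 256},
)

# Text arrives incrementally in contentBlockDelta events.
for event in response["stream"]:
    if "contentBlockDelta" in event:
        print(event["contentBlockDelta"]["delta"]["text"], end="")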

First, let’s look into the “classic” Invoke API for Anthropic Claude 3 models. Here is what you would need to write with standard Python code (using boto3) without the Converse API:

# Use the native inference API to send a text message to Anthropic Claude.
import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Claude 3 Haiku.
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "temperature": 0.5,
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": prompt}],
        }
    ],
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["content"][0]["text"]
print(response_text)

Now, let’s instead look at the same model invocation, with the Converse API this time:

# Use the Converse API to send a text message to Anthropic Claude.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Claude 3 Haiku.
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = client.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

You might ask yourself, “What are the differences here?” It’s a fair question, as on the surface, it doesn’t feel like much has changed. However, upon closer inspection, you can see several key differences:

  1. Consistency and Simplification — The Converse API provides a consistent interface that works with all models supporting messages, moving parameters out of the conversation body itself. This means you don’t need to worry about structuring the request payload according to each model’s native format, simplifying your code and reducing potential errors.
  2. Message Handling — In the Converse API, messages are handled in a way that closely mimics a natural conversation, making the interaction flow more intuitive and aligned with how conversational AI is typically used.
  3. Inference Configuration — The inference configuration, such as maxTokens and temperature, is provided as a separate parameter in the Converse API, keeping the conversation payload cleaner and more readable.

By leveraging the Converse API, you gain a more streamlined and consistent approach to interacting with the model, which can significantly enhance your development experience and make your codebase easier to maintain.
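Because the Converse API operates on a plain list of messages, multi-turn conversations come almost for free: append the assistant’s reply to the list, append the next user turn, and call converse again. A minimal sketch, reusing client, model_id, conversation, and response from the snippet above (the follow-up question is made up for illustration):

# Append the assistant's reply, then the next user turn.
conversation.append(response["output"]["message"])
conversation.append({
    "role": "user",
    "content": [{"text": "Now say it in French."}],
})

follow_up = client.converse(
    modelId=model_id,
    messages=conversation,
    inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
)
print(follow_up["output"]["message"]["content"][0]["text"])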

Tool Usage / Function Calling

An example flow for Converse API Function Calling — Image by the author

One of the advantages of switching to the Converse API is the capability to use tools when answering requests, also known as function calling. When working with Amazon Bedrock models, incorporating tools can significantly enhance the model’s ability to generate accurate and contextually relevant responses. Tool use involves providing definitions for tools that the model can request to use during inference. Let’s explore how this works using the Converse API.

Step 1: Send the Message and Tool Definition

Stranded — Image by the author

Begin by defining the tool using a JSON schema and passing it along with the user message in the Converse API request. For example, a tool to get the most popular song on a radio station can be defined as follows:

{
    "tools": [
        {
            "toolSpec": {
                "name": "top_song",
                "description": "Get the most popular song played on a radio station.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "sign": {
                                "type": "string",
                                "description": "The call sign for the radio station, e.g., WZPZ."
                            }
                        },
                        "required": ["sign"]
                    }
                }
            }
        }
    ]
}

The user message is also included:

[
    {
        "role": "user",
        "content": [{"text": "What is the most popular song on Radio XYZ?"}]
    }
]

Step 2: Get the Tool Request from the Model

Smart LLM models — Image by the author

Upon receiving the request, the model evaluates if the tool is necessary to generate a response. If so, it returns a response indicating the need for tool use, along with the required input parameters.
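In practice, the model signals this with a stopReason of "tool_use", and the requested call appears as a toolUse block in the message content. Here is a minimal sketch of how you might detect it (reusing messages and tool_config from the full example further down):

response = client.converse(modelId=model_id, messages=messages, toolConfig=tool_config)

if response["stopReason"] == "tool_use":
    # Find the toolUse block among the content blocks (a text block may precede it).
    tool_use = next(
        block["toolUse"]
        for block in response["output"]["message"]["content"]
        if "toolUse" in block
    )
    print(tool_use["name"])       # e.g., "top_song"
    print(tool_use["input"])      # e.g., {"sign": "WZPZ"}
    print(tool_use["toolUseId"])  # ID you must echo back in the toolResult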

Step 3: Make the Tool Request for the Model

Tool usage is nothing but a trade offer between input and output — Image by the author

Using the provided tool information, execute the tool request. This is the key step: it is where the actual result gets produced, before being fed back into the LLM to generate an answer. You should implement your own business logic at this stage (a minimal dispatch sketch follows the list below). The tool can be:

  • a local code execution (e.g. calculator)
  • an API call (retrieve some information)
  • an external service
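Whatever shape the tool takes, a simple pattern is to dispatch on the tool name the model requests. Here is a minimal, hypothetical sketch; it assumes a get_top_song function like the one in the example code further down:

# Hypothetical dispatch table mapping tool names to local Python callables.
TOOL_HANDLERS = {
    "top_song": lambda tool_input: get_top_song(tool_input["sign"]),
}

def run_tool(tool_use):
    # tool_use is the toolUse block returned by the model.
    handler = TOOL_HANDLERS[tool_use["name"]]
    return handler(tool_use["input"])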

Then, send the result back to the model in a follow-up message. Note that the message has to be configured as if it were sent by the user ("role": "user"):

{
    "role": "user",
    "content": [
        {
            "toolResult": {
                "toolUseId": "tooluse_id",
                "content": [
                    {
                        "json": {
                            "song": "Never Gonna Give You Up",
                            "artist": "Rick Astley"
                        }
                    }
                ]
            }
        }
    ]
}

Step 4: Get the Model Response

Finally, the model uses the tool’s result to generate a comprehensive response to the original query. And yes, you’ve been rickrolled in 2024.

Example Code

Below is an example of how to implement tool usage with the Converse API:

import boto3
from botocore.exceptions import ClientError

client = boto3.client("bedrock-runtime", region_name="us-east-1")
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

def get_top_song(sign: str):
    # Implement get_top_song(). This stub always returns the same song.
    return {
        "title": "Never Gonna Give You Up",
        "author": "Rick Astley",
        "played_times": 420
    }

tool_config = {
    "tools": [
        {
            "toolSpec": {
                "name": "top_song",
                "description": "Get the most popular song played on a radio station.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "sign": {
                                "type": "string",
                                "description": "The call sign for the radio station."
                            }
                        },
                        "required": ["sign"]
                    }
                }
            }
        }
    ]
}

messages = [
    {
        "role": "user",
        "content": [{"text": "What is the most popular song on Radio XYZ?"}]
    }
]

try:
    response = client.converse(modelId=model_id, messages=messages, toolConfig=tool_config)

    # Find the toolUse block in the model's reply (a text block may precede it).
    output_message = response['output']['message']
    tool_request = next(
        block['toolUse'] for block in output_message['content'] if 'toolUse' in block
    )

    ##### Implement your tool logic here #####
    tool_result = get_top_song(tool_request['input']['sign'])
    ##########################################

    tool_result_message = {
        "role": "user",
        "content": [
            {
                "toolResult": {
                    "toolUseId": tool_request['toolUseId'],
                    "content": [{"json": tool_result}]
                }
            }
        ]
    }

    # The follow-up call must include the full history (including the
    # assistant's toolUse message), plus the tool configuration.
    messages.append(output_message)
    messages.append(tool_result_message)
    final_response = client.converse(modelId=model_id, messages=messages, toolConfig=tool_config)
    print(final_response['output']['message']['content'][0]['text'])

except ClientError as e:
    print(f"ERROR: {e}")

This code can be run very easily (and cheaply) in an AWS Lambda function, or even locally on your PC, provided you have configured the AWS credentials correctly.
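For instance, a hypothetical Lambda handler wrapping a single Converse call could look like the sketch below. The event shape is made up for illustration, and the function’s execution role needs permission to invoke the model:

import boto3

client = boto3.client("bedrock-runtime")
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

def lambda_handler(event, context):
    # Expects an event like {"question": "..."} (a made-up shape for this sketch).
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": event["question"]}]}],
        inferenceConfig={"maxTokens": 512},
    )
    return {
        "statusCode": 200,
        "body": response["output"]["message"]["content"][0]["text"],
    }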

By following these steps, you can effectively integrate tools into your interactions with Amazon Bedrock models, enabling more dynamic and accurate responses based on real-time data and functions.

Conclusions

Congratulations are in order — Image by the author

Using the Converse API and tool integration, developers can take their AI applications to the next level with Amazon Bedrock and Anthropic Claude 3. These features not only streamline development but also enhance the model’s capabilities, resulting in more accurate and context-aware interactions. By embracing these innovations, you’ll be able to build more sophisticated and user-friendly AI solutions. For more information, check out the Amazon Bedrock User Guide.

Thanks for reading this Medium blog post. If you like Generative AI Application Development, Low-Code No-Code Machine Learning, and Technology News in general, please consider following me and signing up for new blog posts via email. Feel free to reach out to suggest new topics and collaborations! You can do so via LinkedIn.
