Share Memory Across Users Using Graphs

In this recipe, we demonstrate how to share memory across different users using graphs. We set up a user and thread, add shared data to a graph, and integrate the OpenAI client to show how both user memory and graph memory can enrich a chatbot's context.
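The snippets below assume the zep-cloud and openai Python SDKs are installed (for example, via pip install zep-cloud openai) and that the following imports are in scope; exact import paths may vary slightly by SDK version:

import asyncio
import json
import uuid

from openai import AsyncOpenAI
from zep_cloud.client import AsyncZep
from zep_cloud.types import Message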

First, we initialize the Zep client, create a user, and create a thread:

# Initialize the Zep client
zep_client = AsyncZep(api_key="YOUR_API_KEY")  # Ensure your API key is set appropriately

# Add one example user
user_id = uuid.uuid4().hex
await zep_client.user.add(
    user_id=user_id,
    first_name="Alice",
    last_name="Smith",
    email="[email protected]",
)

# Create a new thread for the user
thread_id = uuid.uuid4().hex
await zep_client.thread.create(
    thread_id=thread_id,
    user_id=user_id,
)
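As a quick sanity check, you can read the user back before continuing; a minimal sketch, assuming the zep-cloud client created above:

# Optional: confirm the user was created
user = await zep_client.user.get(user_id)
print(user.first_name, user.email)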

Next, we create a new graph and add structured business data to it as a JSON string. This step uses the Graph API.

graph_id = uuid.uuid4().hex
await zep_client.graph.create(graph_id=graph_id)

product_json_data = [
    {
        "type": "Sedan",
        "gas_mileage": "25 mpg",
        "maker": "Toyota",
    },
    # ... more cars
]

json_string = json.dumps(product_json_data)
await zep_client.graph.add(
    graph_id=graph_id,
    type="json",
    data=json_string,
)
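The Graph API is not limited to JSON; the same graph can also ingest plain text. A hedged variant of the call above (the dealership blurb is illustrative):

# type="text" ingests unstructured text into the same shared graph
await zep_client.graph.add(
    graph_id=graph_id,
    type="text",
    data="Our dealership offers a 10-year powertrain warranty on all new sedans.",  # illustrative
)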

Finally, we initialize the OpenAI client and define a chatbot_response function that retrieves user memory and graph memory, constructs a developer message from both, and generates a contextual response. This step uses the Thread API, the Graph API, and the OpenAI chat completions endpoint.

# Initialize the OpenAI client (async variant, since we call it from async code)
oai_client = AsyncOpenAI()

async def chatbot_response(user_message, thread_id):
    # Retrieve the user's memory context for this thread
    user_memory = await zep_client.thread.get_user_context(thread_id)

    # Search the shared graph using the user message as the query
    results = await zep_client.graph.search(graph_id=graph_id, query=user_message, scope="edges")
    relevant_graph_edges = results.edges or []
    product_context_block = "Below are some facts related to our car inventory that may help you respond to the user:\n"
    for edge in relevant_graph_edges:
        product_context_block += f"{edge.fact}\n"

    # Combine the user context and the graph context for the developer message
    developer_message = (
        "You are a helpful chat bot assistant for a car sales company. "
        "Answer the user's message while taking into account the following background information:\n"
        f"{user_memory.context}\n{product_context_block}"
    )

    # Generate a response using the OpenAI API
    completion = await oai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "developer", "content": developer_message},
            {"role": "user", "content": user_message},
        ],
    )
    response = completion.choices[0].message.content

    # Persist the conversation turn to the user's thread memory
    messages = [
        # The speaker name is hardcoded for this example; a multi-user app would pass it in
        Message(name="Alice", role="user", content=user_message),
        Message(name="AI assistant", role="assistant", content=response),
    ]
    await zep_client.thread.add_messages(thread_id, messages=messages)

    return response
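To see the shared graph in action, create a second user with their own thread and call chatbot_response from both threads: each user keeps separate thread memory, but both draw on the same graph. A minimal sketch for running as a script (the second user's details and both questions are illustrative; as noted above, chatbot_response hardcodes the speaker name "Alice", which a multi-user app would parameterize):

async def main():
    # A second user with their own thread (details are illustrative)
    user_id_2 = uuid.uuid4().hex
    await zep_client.user.add(user_id=user_id_2, first_name="Bob", last_name="Jones")
    thread_id_2 = uuid.uuid4().hex
    await zep_client.thread.create(thread_id=thread_id_2, user_id=user_id_2)

    # Both threads search the same shared graph inside chatbot_response
    print(await chatbot_response("Which sedans get good gas mileage?", thread_id))
    print(await chatbot_response("Do you carry any Toyotas?", thread_id_2))

asyncio.run(main())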

This recipe demonstrated how to share memory across users using graphs in Zep: we set up a user and thread, added structured data to a shared graph, and integrated the OpenAI client to generate responses that combine per-user memory with shared graph memory.