Share Memory Across Users Using Group Graphs

In this recipe, we demonstrate how to share memory across different users using group graphs. We will set up a user and session, add group-specific business data to a group graph, and integrate the OpenAI client to show how both user and group memory can enrich a chatbot's context.

First, we import the required dependencies, initialize the Zep client, create a user, and create a session:

import uuid
import json

from zep_cloud.client import AsyncZep
from zep_cloud.types import Message
from openai import OpenAI

# Initialize the Zep client
zep_client = AsyncZep(api_key="YOUR_API_KEY")  # Ensure your API key is set appropriately

# Add one example user
user_id = uuid.uuid4().hex
await zep_client.user.add(
    user_id=user_id,
    email="[email protected]",
)

# Create a new session for the user
session_id = uuid.uuid4().hex
await zep_client.memory.add_session(
    session_id=session_id,
    user_id=user_id,
)

Next, we create a new group and add structured business data to the graph, in the form of a JSON string. This step uses the groups API and the graph API:

# Create a new group
group_id = uuid.uuid4().hex
await zep_client.group.add(group_id=group_id)

product_json_data = [
    {
        "type": "Sedan",
        "gas_mileage": "25 mpg",
        "maker": "Toyota",
    },
    # ... more cars
]

# Serialize the business data and add it to the group graph
json_string = json.dumps(product_json_data)
await zep_client.graph.add(
    group_id=group_id,
    type="json",
    data=json_string,
)
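If your product catalog is large, you may prefer to serialize and add each record as its own episode so each extracted fact comes from a small, focused payload; this is a design choice, not a Zep requirement. The `to_episode_payloads` helper below is a hypothetical sketch of that pattern, not part of the Zep SDK:

```python
import json

def to_episode_payloads(records):
    """Serialize each record to its own JSON string, one episode per record.

    Hypothetical helper for illustration: smaller payloads keep each
    extracted fact tied to a single product record.
    """
    return [json.dumps(record) for record in records]

payloads = to_episode_payloads([
    {"type": "Sedan", "gas_mileage": "25 mpg", "maker": "Toyota"},
    {"type": "SUV", "gas_mileage": "30 mpg", "maker": "Honda"},
])
# Each payload could then be added individually:
#   await zep_client.graph.add(group_id=group_id, type="json", data=payload)
```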

Finally, we initialize the OpenAI client and define a chatbot_response function that retrieves user and group memory, constructs a system/developer message, and generates a contextual response. This leverages the memory API, graph API, and the OpenAI chat completions endpoint.

# Initialize the OpenAI client
oai_client = OpenAI()

async def chatbot_response(user_message, session_id):
    # Retrieve user memory
    user_memory = await zep_client.memory.get(session_id)

    # Search the group graph using the user message as the query
    results = await zep_client.graph.search(group_id=group_id, query=user_message, scope="edges")
    relevant_group_edges = results.edges
    product_context_string = "Below are some facts related to our car inventory that may help you respond to the user: \n"
    for edge in relevant_group_edges:
        product_context_string += f"{edge.fact}\n"

    # Combine the user context and group facts into the developer message
    developer_message = f"You are a helpful chat bot assistant for a car sales company. Answer the user's message while taking into account the following background information:\n{user_memory.context}\n{product_context_string}"

    # Generate a response using the OpenAI API
    completion = oai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "developer", "content": developer_message},
            {"role": "user", "content": user_message}
        ]
    )
    response = completion.choices[0].message.content

    # Add the conversation to memory
    messages = [
        Message(role="user", role_type="user", content=user_message),
        Message(role="assistant", role_type="assistant", content=response)
    ]
    await zep_client.memory.add(session_id, messages=messages)

    return response
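The context assembly inside chatbot_response can also be isolated as a pure function, which makes the prompt construction easy to unit-test without any network calls. The `build_developer_message` helper below is a hypothetical refactoring that mirrors the inline string formatting above:

```python
def build_developer_message(user_context: str, facts: list[str]) -> str:
    """Combine user memory context and group-graph facts into a single
    developer message (mirrors the inline formatting in chatbot_response)."""
    product_context = (
        "Below are some facts related to our car inventory "
        "that may help you respond to the user: \n"
    )
    for fact in facts:
        product_context += f"{fact}\n"
    return (
        "You are a helpful chat bot assistant for a car sales company. "
        "Answer the user's message while taking into account the following "
        f"background information:\n{user_context}\n{product_context}"
    )
```

Inside chatbot_response, the developer message would then become `build_developer_message(user_memory.context, [edge.fact for edge in relevant_group_edges])`.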

This recipe demonstrated how to share memory across users using group graphs in Zep. We set up a user and session, added structured business data to a group graph, and integrated the OpenAI client to generate responses grounded in both user and group memory.
