Chat History Memory

Vector Search Over Chat History

Zep helps developers search through long-term memory stores to find relevant historical conversations efficiently. With automated embedding and advanced indexing, Zep offers robust search capabilities that are straightforward and effective.

Searching for Messages or Summaries

Zep enables vector similarity searches for Messages or Summaries stored within its system. This feature lets you populate prompts with past conversations that are contextually similar to a specific query, organizing the results by a similarity Score.

Choosing Between Summaries and Messages

Zep supports searches for both Messages and Summaries. Since individual messages might miss some conversational context, Summaries are often the preferred choice for executing searches. For more on this, check out the section on message limitations.

MMR Reranking for Summaries

Summaries can sometimes overlap in information, especially when the Message Window is set low. In such cases, employing Maximum Marginal Relevance (MMR) to rerank search results can be beneficial. Zep includes built-in, hardware-accelerated support for MMR, making it simple to use.

Constructing Search Queries

Zep’s Collection and Memory search support semantic search queries, JSONPath-based metadata filters, and a combination of both.

Memory search also supports querying by message creation date.

Read more about constructing search queries.

from zep_cloud.client import AsyncZep

client = AsyncZep(
    api_key=API_KEY,
)

# This uniquely identifies the user's session
session_id = "my_session_id"

search_results = await client.memory.search(
    session_id,
    text="Is Lauren Olamina a character in a book?",
    search_scope="summary",  # This could be "messages" or "summary"
    search_type="mmr",  # Remove this if you'd prefer not to rerank results
    mmr_lambda=0.5,  # Tune diversity vs. relevance
)
for search_result in search_results:
    # Uncomment for message search
    # print(search_result.message.dict())
    print(search_result.summary.dict())
{
  "summary": {
    "uuid": "b47b83da-16ae-49c8-bacb-f7d049f9df99",
    "created_at": "2023-11-02T18:22:10.103867Z",
    "content": "The human asks the AI to explain the book Parable of the Sower by Octavia Butler. The AI responds by explaining that Parable of the Sower is a science fiction novel by Octavia Butler. The book follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.",
    "token_count": 66
  },
  "metadata": null,
  "dist": 0.8440576791763306
}

Hybrid Search for Chat History with Metadata Filters

Besides the vector similarity search for Messages and Summaries stored in Zep, you can also use metadata filters for your searches. You also have the option to conduct searches based purely on metadata.

search_results = await client.memory.search(
    session_id=session_id,
    text="I enjoy reading science fiction.",
    metadata={
        "where": {"jsonpath": '$[*] ? (@.foo == "bar")'},
    },
)
{
  "dist": 0.7170433826192629,
  "message": {
    "content": "I've read many books written by Octavia Butler.",
    "created_at": "2023-06-03T22:00:43.034056Z",
    "metadata": {
      "foo": "bar",
      "system": {
        "entities": [
          {
            "Label": "PERSON",
            "Matches": [
              {
                "End": 46,
                "Start": 32,
                "Text": "Octavia Butler"
              }
            ],
            "Name": "Octavia Butler"
          }
        ]
      }
    },
    "role": "human",
    "token_count": 13,
    "uuid": "8f3a06dd-0625-41da-a2af-b549f2056b3f"
  },
  "metadata": null,
  "summary": null
}
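A purely metadata-based search might look like the following. This is a minimal sketch that assumes the text argument can simply be omitted when filtering on metadata alone; the "foo": "bar" filter is illustrative, and the client and session_id are the ones from the examples above.

# Metadata-only search: no semantic query text, just a JSONPath filter.
search_results = await client.memory.search(
    session_id=session_id,
    metadata={
        "where": {"jsonpath": '$[*] ? (@.foo == "bar")'},
    },
)

for result in search_results:
    print(result.message.dict())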

Search Ranking and Limits

Vector Indexes

Zep automatically creates HNSW (Hierarchical Navigable Small World) indexes for all messages, summaries, and documents. This means you get speedy and relevant search results right out of the box, without the hassle of manually setting up or integrating a vector store and building indexes. Zep uses an optimized distance function similar to cosine distance for search ranking.

Embedding Models

Zep uses the BAAI/bge-large-en model for text embeddings, known for its high performance and optimization for semantic search. Keep in mind that this model has a 512-token maximum sequence length, which is important when deciding how to chunk your documents.
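As a rough sketch of how you might respect that limit, the model's own tokenizer can be used to count tokens per chunk. This assumes the Hugging Face transformers package is available; the chunking strategy itself is up to you.

# Sketch: count tokens with the BAAI/bge-large-en tokenizer so that each
# chunk stays within the model's 512-token maximum sequence length.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-large-en")

def within_token_limit(chunk: str, max_tokens: int = 512) -> bool:
    # encode() returns the token IDs for the chunk, including special tokens
    return len(tokenizer.encode(chunk)) <= max_tokens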

Limitations When Searching Over Messages or Short Document Chunks

Zep can return all messages from a search up to a certain row limit. This limit can be adjusted by passing a limit query string argument to the search API. Due to the sparsity issue we’ll touch on below, we recommend sticking to the top 2-3 messages in your prompts. Or, you could analyze your search results and use a distance threshold to filter out messages that aren’t relevant.
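A sketch of both approaches is below, assuming the SDK exposes the API's limit parameter and reusing the client and session_id from the earlier examples; the limit value and the 0.75 distance threshold are illustrative, not recommendations.

# Sketch: request a capped number of results, then keep only those above an
# illustrative distance threshold.
search_results = await client.memory.search(
    session_id,
    text="Is Lauren Olamina a character in a book?",
    search_scope="messages",
    limit=5,  # Adjust the row limit applied by the search API
)

MIN_DIST = 0.75  # Illustrative threshold; tune for your data
relevant = [r for r in search_results if r.dist and r.dist >= MIN_DIST]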

Handling Short Texts in Embeddings

Searching through chat histories can be tricky. Chat messages are often brief and might not carry much “information”. When these short texts are turned into high-dimensional embedding vectors, the result can be very sparse vectors.

This sparsity means a lot of these vectors end up being close to each other in the vector space, which can lead to a higher chance of getting false positives in your search results for relevant messages. As a result, we recommend searching over Summaries, which include more information than Messages.