MessageHistory Example

The Zep Memory, VectorStore, and Retriever classes in the LangChain project are deprecated. LangGraph is the recommended approach for using Zep with the LangChain ecosystem.

Zep offers a ZepChatMessageHistory class compatible with LangChain Expression Language (LCEL).

This guide will walk you through creating a MessageHistory chain using Zep’s conversation history.

You can generate a project API key in the Zep Dashboard.

Make sure you have the following environment variables set when running these examples:

ZEP_API_KEY - the API key for your Zep project

OPENAI_API_KEY - the OpenAI API key the chain needs to generate answers
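
As a quick sanity check, you can confirm both variables are set before running anything (a trivial sketch):

```python
import os

# Fail fast if either key is missing from the environment.
for var in ("ZEP_API_KEY", "OPENAI_API_KEY"):
    assert os.environ.get(var), f"{var} is not set"
```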

You will need to have a collection in place to initialize the vector store in this example.

If you want to create a collection from a web article, you can run the Python ingest script. Try modifying the script to ingest an article of your choice.

Alternatively, you can create a collection by running either the Document example in the Python SDK repository or the Document example in the TypeScript SDK repository.

You will need to have a session_id in place to invoke the final chain in this example.

You can create a session by running either the Memory example in the Python SDK repository or the Memory example in the TypeScript SDK repository.
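
If you prefer to create a session inline instead, a minimal sketch with the zep-cloud SDK might look like the following (the user_id is a placeholder, and the user.add and memory.add_session calls are assumed to be available in your SDK version):

```python
import asyncio
import os
import uuid

from zep_cloud.client import AsyncZep

async def create_session() -> str:
    zep = AsyncZep(api_key=os.environ["ZEP_API_KEY"])
    # "user-123" is a placeholder; sessions in Zep belong to a user.
    await zep.user.add(user_id="user-123")
    session_id = uuid.uuid4().hex  # any unique string identifies the conversation
    await zep.memory.add_session(session_id=session_id, user_id="user-123")
    return session_id

session_id = asyncio.run(create_session())
```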

Initialize the Zep client with the necessary imports

```python
import os
from typing import List, Tuple

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import (
    RunnableParallel,
)
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

from zep_cloud.client import AsyncZep
from zep_cloud.langchain import ZepChatMessageHistory

zep = AsyncZep(
    api_key=os.environ["ZEP_API_KEY"],
)
```
Set up an answer synthesis template and prompt.

```python
template = """Answer the question below as if you were a 19th century poet:
"""
answer_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", template),
        MessagesPlaceholder(variable_name="chat_history"),
        ("user", "{question}"),
    ]
)
```

MessagesPlaceholder - we're using the variable name chat_history here. This incorporates the chat history into the prompt. It's important that this variable name aligns with the history_messages_key in the RunnableWithMessageHistory chain for seamless integration.

question must match the input_messages_key in the RunnableWithMessageHistory chain.
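
To see how the placeholder resolves, you can format the prompt by hand with a made-up history (a minimal sketch; the messages are illustrative only):

```python
from langchain_core.messages import AIMessage, HumanMessage

# The chat_history list fills the MessagesPlaceholder; question fills {question}.
messages = answer_prompt.format_messages(
    chat_history=[
        HumanMessage(content="Who wrote Ozymandias?"),
        AIMessage(content="Percy Bysshe Shelley, my good friend."),
    ],
    question="In what year was it published?",
)
print(messages)  # system message, then the history, then the new question
```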

Compose the final chain

```python
inputs = RunnableParallel(
    {
        "question": lambda x: x["question"],
        "chat_history": lambda x: x["chat_history"],
    },
)

chain = RunnableWithMessageHistory(
    inputs | answer_prompt | ChatOpenAI() | StrOutputParser(),
    lambda session_id: ZepChatMessageHistory(
        session_id=session_id,  # uniquely identifies the conversation
        zep_client=zep,
        memory_type="perpetual",
    ),
    input_messages_key="question",
    history_messages_key="chat_history",
)
```

Here’s a quick overview of what’s happening:

  1. We use RunnableWithMessageHistory to incorporate Zep’s Chat History into our chain.
  2. This class requires a session_id as a parameter when you invoke the chain.
  3. To invoke the chain manually, pass the session_id in the config and the question as the input:
```python
chain.invoke(
    {"question": "-"},
    config={"configurable": {"session_id": "-"}},
)
```

First, we initialize ZepChatMessageHistory with the following parameters:

  1. session_id - This uniquely identifies the conversation within Zep.
  2. zep_client - The instance of the Zep client.
  3. memory_type set to perpetual. If not specified, Message Window Buffer Memory will be used by default. We recommend configuring your application to use Perpetual Memory.

Interested in learning more? Explore How Zep Memory Works.

Next, we construct a chain that operates after retrieving the chat history:

  1. inputs will extract the user’s question and chat history from the context.
  2. answer_prompt will incorporate chat history into the prompt.
  3. ChatOpenAI will generate a response.
  4. StrOutputParser will parse the response.
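
Putting these pieces together, an end-to-end call might look like the following (a minimal sketch: the question and session_id values are placeholders, and the async entrypoint is used because the Zep client in this example is asynchronous):

```python
import asyncio

async def main() -> None:
    answer = await chain.ainvoke(
        {"question": "Compose a couplet about the sea."},
        # Replace with a session_id that exists in your Zep project.
        config={"configurable": {"session_id": "my-session-id"}},
    )
    print(answer)

asyncio.run(main())
```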

Running the Chain with LangServe

This chain can also be executed as part of our LangServe sample project. To do this, you will need to:

Clone our Python SDK

```bash
git clone https://github.com/getzep/zep-python
cd zep-python/examples/langchain-langserve
```

A README file in the langchain-langserve directory will guide you through the setup process.

Go to http://localhost:8000/message_history/playground to use the LangServe playground for this chain.
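
Once the server is running, you can also call the chain over HTTP instead of through the playground. This sketch assumes LangServe's standard /invoke route and payload shape, and that the sample server exposes the session_id config (as the playground does); the question and session_id values are placeholders:

```python
import requests

response = requests.post(
    "http://localhost:8000/message_history/invoke",
    json={
        "input": {"question": "Compose a couplet about the sea."},
        "config": {"configurable": {"session_id": "my-session-id"}},
    },
)
print(response.json()["output"])
```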