VectorStore Example
The Zep Python SDK ships with a ZepVectorStore class that can be used with LangChain Expression Language (LCEL).
Let’s explore how to create a RAG chain using the ZepVectorStore
for semantic search.
You can generate a project API key in the Zep Dashboard.
Before diving into these examples, please ensure you’ve set the following environment variables:
ZEP_API_KEY
- the API key for your Zep project
OPENAI_API_KEY
- the OpenAI API key the chain requires to generate answers
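For example, in a POSIX shell (the values below are placeholders; substitute your own keys):

```shell
export ZEP_API_KEY="<your-zep-project-key>"      # from the Zep Dashboard
export OPENAI_API_KEY="<your-openai-api-key>"
```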
You will need to have a collection in place to initialize the vector store in this example.
If you want to create a collection from a web article, you can run the Python ingest script. Try modifying the script to ingest an article of your choice.
Initialize ZepClient with necessary imports
Initialize ZepVectorStore
Let’s set up the retriever. We’ll use the vectorstore
for this purpose and configure it to use MMR (maximal marginal relevance) reranking of search results.
Create a prompt template for synthesizing answers.
Create the default document prompt and define the helper function for merging documents.
Let’s set up user input and the context retrieval chain.
Compose final chain
Here’s a quick rundown of how the process works:
- inputs grabs the user’s question and fetches relevant document context to add to the prompt.
- answer_prompt then takes this context and question, combining them in the prompt with instructions to answer the question using only the provided context.
- ChatOpenAI calls an OpenAI model to generate an answer based on the prompt.
- Finally, StrOutputParser extracts the LLM’s result into a string.
To invoke this chain manually, simply pass the question
into the chain’s input.
Running the Chain with LangServe
You can run this chain, along with others, using our LangServe sample project.
Here’s what you’ll need to do:
- Clone our Python SDK.
- Review the README in the langchain-langserve directory for setup instructions.
After firing up the server, head over to http://localhost:8000/rag_vector_store/playground
to explore the LangServe playground using this chain.