VectorStore/Q&A, MMR support¶
NOTE: this uses Cassandra's experimental "Vector Similarity Search" capability. At the moment, this is obtained by building and running an early alpha from a specific branch of the codebase.
Cassandra's VectorStore allows for Vector Similarity Search with the Maximal Marginal Relevance (MMR) algorithm.
This is a search criterion that, instead of just selecting the k stored documents most relevant to the provided query, first identifies a larger pool of relevant results and then singles out k of them, chosen so that the information they carry is as diverse as possible.
In this way, when the stored text fragments are likely to be redundant, you can optimize token usage and help the models give more comprehensive answers.
This is very useful, for instance, if you are building a Q&A chatbot on recorded past support-chat interactions.
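For intuition, here is a minimal sketch of the MMR selection step, assuming cosine similarity between embedding vectors and a lambda_mult relevance/diversity trade-off parameter (illustrative only, not the library's actual implementation):
import numpy as np

def mmr_select(query_vec, doc_vecs, k=2, lambda_mult=0.5):
    # cosine similarity between two vectors
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    candidates = list(range(len(doc_vecs)))
    selected = []
    while candidates and len(selected) < k:
        # MMR score: reward relevance to the query, penalize similarity
        # to documents already selected
        def mmr_score(i):
            relevance = cos(query_vec, doc_vecs[i])
            redundancy = max((cos(doc_vecs[i], doc_vecs[j]) for j in selected), default=0.0)
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected  # indices of the chosen documents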
First prepare a connection to a vector-search-capable Cassandra and initialize the required LLM and embeddings:
from langchain.indexes.vectorstore import VectorStoreIndexWrapper
from langchain.vectorstores import Cassandra
from cqlsession import getLocalSession, getLocalKeyspace
localSession = getLocalSession()
localKeyspace = getLocalKeyspace()
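(For reference, a minimal sketch of what these connection helpers might look like, assuming a locally running, vector-search-capable Cassandra reachable through the standard cassandra-driver; the actual cqlsession module shipped with the repository may differ:)
from cassandra.cluster import Cluster

def getLocalSession():
    # connect to a single-node Cassandra on localhost (assumption)
    cluster = Cluster(['127.0.0.1'])
    return cluster.connect()

def getLocalKeyspace():
    # the keyspace name is a placeholder: use the one you created for the demo
    return 'demo_keyspace'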
Below is the logic to instantiate the LLM and embeddings of choice. We choose to leave it in the notebooks for clarity.
from llm_choice import suggestLLMProvider

llmProvider = suggestLLMProvider()
# (Alternatively set llmProvider to 'VertexAI', 'OpenAI' ... manually if you have credentials)

if llmProvider == 'VertexAI':
    from langchain.llms import VertexAI
    from langchain.embeddings import VertexAIEmbeddings
    llm = VertexAI(temperature=0)
    myEmbedding = VertexAIEmbeddings()
    print('LLM+embeddings from VertexAI')
elif llmProvider == 'OpenAI':
    from langchain.llms import OpenAI
    from langchain.embeddings import OpenAIEmbeddings
    llm = OpenAI(temperature=0)
    myEmbedding = OpenAIEmbeddings()
    print('LLM+embeddings from OpenAI')
else:
    raise ValueError('Unknown LLM provider.')
LLM+embeddings from OpenAI
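If you set llmProvider manually, make sure the corresponding credentials are available before the classes above are instantiated, for example through environment variables (a sketch; adapt to your provider and secret-management setup):
import os
# OpenAI reads its API key from this environment variable
os.environ['OPENAI_API_KEY'] = '...'  # replace with your actual key
# Vertex AI instead relies on Google application-default credentials, e.g.:
# os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/path/to/service-account.json'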
Note: for the time being you have to explicitly turn on this experimental flag on the cassio side:
import cassio
cassio.globals.enableExperimentalVectorSearch()
Create the store¶
Create a (Cassandra-backed) VectorStore and the corresponding LangChain VectorStoreIndexWrapper:
myCassandraVStore = Cassandra(
embedding=myEmbedding,
session=localSession,
keyspace=localKeyspace,
table_name='vs_test2_' + llmProvider,
)
index = VectorStoreIndexWrapper(vectorstore=myCassandraVStore)
This command simply resets the store in case you want to run this demo repeatedly:
myCassandraVStore.clear()
Populate the index¶
Notice that the first four sentences express the same concept, while the fifth adds a new detail:
BASE_SENTENCE_0 = ('The frogs and the toads were meeting in the night '
'for a party under the moon.')
BASE_SENTENCE_1 = ('There was a party under the moon, that all toads, '
'with the frogs, decided to throw that night.')
BASE_SENTENCE_2 = ('And the frogs and the toads said: "Let us have a party '
'tonight, as the moon is shining".')
BASE_SENTENCE_3 = ('I remember that night... toads, along with frogs, '
'were all busy planning a moonlit celebration.')
DIFFERENT_SENTENCE = ('For the party, frogs and toads set a rule: '
'everyone was to wear a purple hat.')
Insert all five into the index, specifying "sources" while you're at it (they will be useful later):
myCassandraVStore.add_texts(
[
BASE_SENTENCE_0,
BASE_SENTENCE_1,
BASE_SENTENCE_2,
BASE_SENTENCE_3,
DIFFERENT_SENTENCE,
],
metadatas=[
{'source': 'Barney\'s story at the pub'},
{'source': 'Barney\'s story at the pub'},
{'source': 'Barney\'s story at the pub'},
{'source': 'Barney\'s story at the pub'},
{'source': 'The chronicles at the village library'},
],
)
['d86ebe8bf2f6fa27ff01db7e3c4a21ab', 'be4fffdf596f08c1d3f4d9effc2f327e', 'd25517400eac2ff0eb4c9c2b38d5e7db', '7bb2aec568c5577c107a403f2fb1a64e', '4654a61925e397ea5f097019b2fa56d2']
Query the store¶
Here is the question you'll use to query the index:
QUESTION = 'Tell me about the party that night.'
Query with "similarity" search type¶
If you ask for two matches, you will get the two documents most related to the question. But in this case, since these sentences carry essentially the same information, this is something of a waste of tokens:
matchesSim = myCassandraVStore.search(QUESTION, search_type='similarity', k=2)
for i, doc in enumerate(matchesSim):
    print(f'[{i:2}]: "{doc.page_content}"')
[ 0]: "There was a party under the moon, that all toads, with the frogs, decided to throw that night." [ 1]: "I remember that night... toads, along with frogs, were all busy planning a moonlit celebration."
Query with MMR¶
Now, here's what happens with the MMR search type.
(Not shown here: you can tune the size of the results pool for the first step of the algorithm.)
matchesMMR = myCassandraVStore.search(QUESTION, search_type='mmr', k=2)
for i, doc in enumerate(matchesMMR):
    print(f'[{i:2}]: "{doc.page_content}"')
[ 0]: "There was a party under the moon, that all toads, with the frogs, decided to throw that night." [ 1]: "For the party, frogs and toads set a rule: everyone was to wear a purple hat."
Query the index¶
Currently, LangChain's higher "index" abstraction does not let you specify the search type, nor the number of matches subsequently used in creating the answer. So, by running this command you get an answer, all right.
# (implicitly) by similarity
print(index.query(QUESTION, llm=llm))
The frogs and toads were throwing a party under the moon that night. They were busy planning and preparing for the celebration.
You can request the question-answering process to provide references (as long as you annotated all input documents with a source metadata field):
responseSrc = index.query_with_sources(QUESTION, llm=llm)
print('Automatic chain (implicitly by similarity):')
print(f' ANSWER : {responseSrc["answer"].strip()}')
print(f' SOURCES: {responseSrc["sources"].strip()}')
Automatic chain (implicitly by similarity):
  ANSWER : The frogs and toads were meeting in the night for a party under the moon.
  SOURCES: Barney's story at the pub
Here the default is to fetch four documents ... so that the only other text actually carrying additional information is left out!
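To see what the chain is working with, you can reproduce the default retrieval by hand (four being LangChain's default k for similarity retrieval at the time of writing):
matchesDefault = myCassandraVStore.search(QUESTION, search_type='similarity', k=4)
for i, doc in enumerate(matchesDefault):
    print(f'[{i:2}]: "{doc.page_content}"')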
The Q&A Process behind the scenes¶
In order to exploit the MMR search in end-to-end question-answering pipelines, you need to recreate, and manually tweak, the steps behind the query or query_with_sources methods. This takes just a few lines.
First you need a few additional modules:
from langchain.chains.retrieval_qa.base import RetrievalQA
from langchain.chains.qa_with_sources.retrieval import RetrievalQAWithSourcesChain
You are ready to run two Q&A chains, identical in all respects (especially in the number of results to fetch, two), except the search_type
:
Similarity-based Q&A¶
# manual creation of the "retriever" with the 'similarity' search type
retrieverSim = myCassandraVStore.as_retriever(
search_type='similarity',
search_kwargs={
'k': 2,
# ...
},
)
# Create a "RetrievalQA" chain
chainSim = RetrievalQA.from_chain_type(
llm=llm,
retriever=retrieverSim,
)
# Run it and print results
responseSim = chainSim.run(QUESTION)
print(responseSim)
The party was held under the moon and was planned by both toads and frogs.
MMR-based Q&A¶
# manual creation of the "retriever" with the 'MMR' search type
retrieverMMR = myCassandraVStore.as_retriever(
search_type='mmr',
search_kwargs={
'k': 2,
# ...
},
)
# Create a "RetrievalQA" chain
chainMMR = RetrievalQA.from_chain_type(
llm=llm,
retriever=retrieverMMR
)
# Run it and print results
responseMMR = chainMMR.run(QUESTION)
print(responseMMR)
The party was held under the moon and was attended by both frogs and toads. Everyone was required to wear a purple hat.
Answers with sources¶
You can run the variant of these chains that also returns the sources of the documents used in preparing the answer, which makes the difference between the two retrieval strategies even more obvious:
chainSimSrc = RetrievalQAWithSourcesChain.from_chain_type(
llm,
retriever=retrieverSim,
)
#
responseSimSrc = chainSimSrc({chainSimSrc.question_key: QUESTION})
print('Similarity-based chain:')
print(f' ANSWER : {responseSimSrc["answer"].strip()}')
print(f' SOURCES: {responseSimSrc["sources"].strip()}')
Similarity-based chain:
  ANSWER : The toads and frogs were planning a moonlit celebration.
  SOURCES: Barney's story at the pub
chainMMRSrc = RetrievalQAWithSourcesChain.from_chain_type(
llm,
retriever=retrieverMMR,
)
#
responseMMRSrc = chainMMRSrc({chainMMRSrc.question_key: QUESTION})
print('MMR-based chain:')
print(f' ANSWER : {responseMMRSrc["answer"].strip()}')
print(f' SOURCES: {responseMMRSrc["sources"].strip()}')
MMR-based chain:
  ANSWER : The party that night was thrown by frogs and toads, and everyone was required to wear a purple hat.
  SOURCES: Barney's story at the pub, The chronicles at the village library