Connect Neum AI pipelines to your chatbot
In this guide, we will connect a Neum AI pipeline to a chatbot using the `pipeline.search()` method. We will use OpenAI's GPT-4 to build a very simple bot. In the bot, we will generate answers based on user input using ChatCompletion capabilities. The user query might be a question or an instruction for the model to respond to.
We will start by re-creating the `Pipeline` object that was used to ingest the data. Using the pipeline object, we will search the contents it extracted and stored. Before you query the pipeline, make sure you have run it successfully using `pipeline.run()`.
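A minimal sketch of that flow is below. It assumes `pipeline` is the Neum AI `Pipeline` object you configured when creating the pipeline (same sources, embed, and sink connectors), and that `query` and `number_of_results` are the search parameters; check both against your installed `neumai` version.

```python
# Assumes `pipeline` is the neumai Pipeline object configured when the
# pipeline was created (same sources, embed and sink connectors).

# Run the pipeline at least once so the vector store is populated
# before querying it.
pipeline.run()

# Search the contents extracted and stored by the pipeline.
# `query` and `number_of_results` are assumed parameter names.
results = pipeline.search(query="What is Neum AI?", number_of_results=3)
```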
The search returns results in the form of `NeumSearchResult` objects. Each `NeumSearchResult` object contains an `id` for the vector retrieved, the `score` of the similarity search, and the `metadata`, which includes the contents that were embedded to generate the vector as well as any other metadata that was included. A full list of available metadata for the pipeline can be accessed by querying `pipeline.available_metadata`.
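To illustrate that shape, here is a small sketch that iterates over the results; the attribute names follow the description above, but confirm them against your SDK version.

```python
# Each result is a NeumSearchResult with an id, a similarity score and metadata.
for result in results:
    print(result.id)        # identifier of the retrieved vector
    print(result.score)     # similarity score from the search
    print(result.metadata)  # embedded contents plus any other metadata

# List the metadata fields available for this pipeline.
print(pipeline.available_metadata)
```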
From the results, we can extract the text content attached to each of them.
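For example, assuming the embedded text is exposed under a `text` key in each result's metadata (use `pipeline.available_metadata` to confirm the exact field name for your pipeline):

```python
# Concatenate the text attached to each result into a single context string.
# "text" is an assumed metadata key; adjust it to match your pipeline.
context = "\n".join(result.metadata["text"] for result in results)
```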
In the previous step, we queried the pipeline using `pipeline.search()`. We have the context stored in a variable already.
Next, we take the context retrieved from the `Pipeline` and use it to improve the system prompt so the chatbot can properly answer the user query. Now we can generate a response from the LLM.
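A minimal sketch of that call with the OpenAI Python client is below; the system-prompt wording, the `gpt-4` model name, and the `user_query` value are just examples, and `context` is the variable built above.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

user_query = "What is Neum AI?"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a helpful assistant. Use the following context "
                f"to answer the user's question:\n{context}"
            ),
        },
        {"role": "user", "content": user_query},
    ],
)

print(response.choices[0].message.content)
```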
This process can be repeated by querying the `Pipeline` object and providing the context in the chatbot's system prompt at every turn of a conversation, to make sure the chatbot always has the correct context.
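As a rough sketch of that loop, reusing the `pipeline` and OpenAI `client` from above and re-querying the pipeline on every user turn:

```python
# Simple chat loop: fetch fresh context for each user turn and pass it
# to the model through the system prompt.
while True:
    user_query = input("You: ")
    if not user_query:
        break

    results = pipeline.search(query=user_query, number_of_results=3)
    context = "\n".join(result.metadata["text"] for result in results)

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": user_query},
        ],
    )
    print("Bot:", response.choices[0].message.content)
```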