Quickstart
Create your first pipeline
This guide will show you how to create your first pipeline with Neum AI:
- Configure source to pull data from
- Configure the embedding model
- Configure the vector storage to store to
- Run the pipeline and verify with a search
Choose your environment
Neum AI supports creating and running your RAG pipelines both locally and in our cloud environment. To learn more about the differences between the two environments, see Neum AI Cloud vs Local Development.
Set up environment
We will start by installing the required dependencies:
Configure data source
For this guide, we will start with the Website source. This source scrapes the web contents of a site and returns the HTML in the body.
To configure the Data Connector, we will specify the url property. The connector also supports a Selector to define which information from the connector should be used as content to embed and which should be attached to the vector as metadata.
We will then choose a loader and chunker to pre-process the data extracted from the source. For the Website source, we will use an HTML Loader, since we are extracting HTML code, and the Recursive Chunker to split up the text. We will combine the Data Connector, Loader and Chunker into a SourceConnector.
Next we will configure the embedding service we will use to turn the chunks of text into vector embeddings.
Configure embed connector
We will use the OpenAIEmbed connector. This connector uses text-embedding-ada-002, one of the most popular embedding models on the market, to generate vector embeddings.
Configure the connector with an OpenAI Key.
Next we will configure the vector storage service we will use to store the vector embeddings we generated.
Configure sink connector
We will use the WeaviateSink
connector. Weaviate is a popular open-source vector database.
Configure the Weaviate connector with the connection parameters, including url and api_key. Other parameters are available to further configure the connector; for example, we will use class_name to define a name for the index we are creating.
We now have all the parts of the pipeline configured; let's put it all together and run it.
Run the pipeline
To run the pipeline, we will first configure a Pipeline object and then use its built-in methods to run it.
Then we will run the pipeline locally with the built-in run method.
Note: run is not intended for production scenarios. Take a look at our cloud offering, where we handle large-scale parallelization, logging and monitoring for you! The run method returns the number of vectors that were processed.
Search the pipeline
Finally, once the pipeline is done running, we can query the information we just moved into vector storage.
Deploy the pipeline
Once you have tested the pipeline locally, you can now take the configuration you created and deploy it to the Neum AI Cloud.
Deploy pipeline
Deploy your pipeline configuration to Neum AI Cloud to take advantage of the full set of capabilities like scheduling, synchronization and logs.