Homepage | Documentation | Blog | Discord | Twitter
Neum AI is a data platform that helps developers leverage their data to contextualize Large Language Models through Retrieval Augmented Generation (RAG). This includes extracting data from existing data sources like document storage and NoSQL, processing the contents into vector embeddings, and ingesting the vector embeddings into vector databases for similarity search.
It provides you with a comprehensive solution for RAG that can scale with your application and reduce the time spent integrating services like data connectors, embedding models, and vector databases.
You can reach our team either through email ([email protected]), on Discord, or by scheduling a call with us.
Sign up today at dashboard.neum.ai. See our quickstart to get started.
The Neum AI Cloud supports a large-scale, distributed architecture to run millions of documents through vector embedding. For the full set of features see: Cloud vs Local
Install the `neumai` package:

```bash
pip install neumai
```
To create your first data pipelines visit our quickstart.
At a high level, a pipeline consists of one or multiple sources to pull data from, one embed connector to vectorize the content, and one sink connector to store those vectors. The snippet below creates each of these components and runs a pipeline that pulls a web page, embeds its contents with OpenAI, and stores the vectors in Weaviate:
```python
from neumai.DataConnectors.WebsiteConnector import WebsiteConnector
from neumai.Shared.Selector import Selector
from neumai.Loaders.HTMLLoader import HTMLLoader
from neumai.Chunkers.RecursiveChunker import RecursiveChunker
from neumai.Sources.SourceConnector import SourceConnector
from neumai.EmbedConnectors import OpenAIEmbed
from neumai.SinkConnectors import WeaviateSink
from neumai.Pipelines import Pipeline

# Source: pull the contents of a web page and keep its URL as metadata
website_connector = WebsiteConnector(
    url="https://www.neum.ai/post/retrieval-augmented-generation-at-scale",
    selector=Selector(
        to_metadata=['url']
    )
)
source = SourceConnector(
    data_connector=website_connector,
    loader=HTMLLoader(),
    chunker=RecursiveChunker()
)

# Embed: vectorize the chunks with OpenAI
openai_embed = OpenAIEmbed(
    api_key="<OPEN AI KEY>",
)

# Sink: store the vectors in Weaviate
weaviate_sink = WeaviateSink(
    url="your-weaviate-url",
    api_key="your-api-key",
    class_name="your-class-name",
)

# Assemble and run the pipeline
pipeline = Pipeline(
    sources=[source],
    embed=openai_embed,
    sink=weaviate_sink
)
pipeline.run()

# Query the vectors that were just stored
results = pipeline.search(
    query="What are the challenges with scaling RAG?",
    number_of_results=3
)
for result in results:
    print(result.metadata)
```
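The search results are typically handed to an LLM as context. The sketch below shows one way to do that with the separate `openai` package; it is not part of the Neum AI API and assumes each result's `metadata` carries the chunk text under a `'text'` key, which you should confirm by inspecting `result.metadata` for your own pipeline:

```python
from openai import OpenAI  # separate dependency: pip install openai

llm = OpenAI(api_key="<OPEN AI KEY>")

# Assumption: each result's metadata is a dict that includes the chunk text
# under 'text'. Inspect result.metadata in your pipeline to confirm the key.
context = "\n\n".join(result.metadata.get("text", "") for result in results)

completion = llm.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: What are the challenges with scaling RAG?"},
    ],
)
print(completion.choices[0].message.content)
```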
The same pipeline structure works with other data sources. For example, the snippet below pulls rows from Postgres and uses a JSON loader to choose which properties get embedded and which are kept as metadata:

```python
from neumai.DataConnectors.PostgresConnector import PostgresConnector
from neumai.Shared.Selector import Selector
from neumai.Loaders.JSONLoader import JSONLoader
from neumai.Chunkers.RecursiveChunker import RecursiveChunker
from neumai.Sources.SourceConnector import SourceConnector
from neumai.EmbedConnectors import OpenAIEmbed
from neumai.SinkConnectors import WeaviateSink
from neumai.Pipelines import Pipeline

# Source: query rows from Postgres
postgres_connector = PostgresConnector(
    connection_string='postgres',
    query='Select * from ...'
)
source = SourceConnector(
    data_connector=postgres_connector,
    loader=JSONLoader(
        id_key='<your id key of your jsons>',
        selector=Selector(
            to_embed=['property1_to_embed', 'property2_to_embed'],
            to_metadata=['property3_to_include_in_metadata_in_vector']
        )
    ),
    chunker=RecursiveChunker()
)

# Embed: vectorize the selected properties with OpenAI
openai_embed = OpenAIEmbed(
    api_key="<OPEN AI KEY>",
)

# Sink: store the vectors in Weaviate
weaviate_sink = WeaviateSink(
    url="your-weaviate-url",
    api_key="your-api-key",
    class_name="your-class-name",
)

# Assemble, run, and query the pipeline
pipeline = Pipeline(
    sources=[source],
    embed=openai_embed,
    sink=weaviate_sink
)
pipeline.run()

results = pipeline.search(
    query="...",
    number_of_results=3
)
for result in results:
    print(result.metadata)
```
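To make the selector above concrete, here is the kind of row the JSONLoader would process. The field names are the placeholders from the snippet (with `id` standing in for whatever field `id_key` names), not a schema Neum AI requires:

```python
# Hypothetical row returned by the Postgres query above (placeholder field names)
row = {
    "id": "123",                                        # the field named by id_key
    "property1_to_embed": "text that gets vectorized",
    "property2_to_embed": "more text that gets vectorized",
    "property3_to_include_in_metadata_in_vector": "kept on the vector as metadata",
}
```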
To deploy the pipeline to the Neum AI Cloud instead of running it locally, create it through the NeumClient:

```python
from neumai.Client.NeumClient import NeumClient

client = NeumClient(
    api_key='<your neum api key, get it from https://dashboard.neum.ai>',
)
client.create_pipeline(pipeline=pipeline)
```
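As a small, optional variation on the call above, the API key can be read from an environment variable rather than hard-coded; the `NEUM_API_KEY` variable name is our own choice, not something Neum AI requires:

```python
import os
from neumai.Client.NeumClient import NeumClient

# Assumes you exported NEUM_API_KEY beforehand; the variable name is arbitrary.
client = NeumClient(api_key=os.environ["NEUM_API_KEY"])
client.create_pipeline(pipeline=pipeline)
```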
If you are interested in deploying Neum AI to your own cloud, contact us at [email protected].
We have a sample backend architecture published on GitHub which you can use as a starting point.
For an up-to-date list of available connectors, please visit our docs.
Our roadmap evolves based on requests, so if anything is missing, feel free to open an issue or send us a message. Current roadmap areas include:
- Connectors
- Search
- Extensibility
- Experimental
Additional tooling for Neum AI can be found here: