Building a Powerful Text Search System with Pinecone, Langchain, and OpenAI Embeddings


In today’s data-driven world, businesses and developers often need to implement powerful text search capabilities. Traditional search algorithms may not always provide optimal results, especially when dealing with large amounts of unstructured text data. This is where Pinecone, Langchain, and the OpenAI service come into play. In this blog post, we will explore the steps required to set up and leverage these tools to build a highly accurate and efficient text search system.

Step 1: Setting up the Index

To begin, we need to set up an index in Pinecone. Install the required Python packages, including pinecone-client, langchain, openai, and tiktoken. Then proceed with the following code snippet:

import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")

# Create the index only if it does not already exist
if "langchain-demo" not in pinecone.list_indexes():
    pinecone.create_index("langchain-demo", dimension=1536, metric="cosine")

The dimension parameter is set to 1536 because we will be using the “text-embedding-ada-002” OpenAI model, which has an output dimension of 1536. If you need to delete the index, use the pinecone.delete_index("langchain-demo") command.
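The metric="cosine" choice means Pinecone ranks vectors by the angle between them rather than their raw distance, which works well for text embeddings. As a minimal sketch of what that score computes (plain Python for illustration, not Pinecone's implementation):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score 1.0 regardless of length;
# orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 3.0]))  # 0.0
```

Because the score ignores vector length, two chunks of very different sizes can still match closely if they are about the same topic.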

Step 2: Importing Libraries and Setting up Keys

Next, we need to import the required libraries and set up the necessary keys. Import the following libraries:

import os
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Pinecone
from langchain.document_loaders import TextLoader

Set the PINECONE_API_KEY and PINECONE_ENV variables to your Pinecone API key and environment. Additionally, set the OPENAI_API_KEY environment variable to your OpenAI API key.

os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
PINECONE_API_KEY = "your-pinecone-api-key"
PINECONE_ENV = "your-pinecone-environment"

Step 3: Preparing the Data and Embedding Layer

Now, load the text data (here we use the State of the Union address as an example) and prepare the embedding layer using the OpenAI service. Use the TextLoader class from Langchain to load the text data:

loader = TextLoader("state_of_the_union.txt")
documents = loader.load()

We can then split the documents into smaller chunks using the CharacterTextSplitter class:

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
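To build intuition for what chunk_size and chunk_overlap control, here is a deliberately simplified splitter (LangChain's CharacterTextSplitter is more sophisticated: it splits on a separator and merges pieces, rather than cutting at fixed offsets):

```python
def split_text(text, chunk_size=1000, chunk_overlap=0):
    """Greedy fixed-width splitter, for illustration only.
    Each chunk is at most chunk_size characters, and consecutive
    chunks share chunk_overlap characters."""
    chunks = []
    start = 0
    step = chunk_size - chunk_overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

chunks = split_text("a" * 2500, chunk_size=1000, chunk_overlap=0)
print([len(c) for c in chunks])  # [1000, 1000, 500]
```

A nonzero overlap repeats the tail of one chunk at the head of the next, which helps a query match a sentence that would otherwise be cut in half at a chunk boundary.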

Finally, initialize the OpenAI embeddings:

embeddings = OpenAIEmbeddings()

Step 4: Indexing the Embedding Vectors

The documents were already chunked in Step 3, so here we embed each chunk with the OpenAI model and index the resulting vectors in Pinecone. Use the following code snippet:

import pinecone

pinecone.init(
    api_key=PINECONE_API_KEY,
    environment=PINECONE_ENV,
)

index_name = "langchain-demo"

docsearch = Pinecone.from_documents(docs, embeddings, index_name=index_name)

query = "What did the president say about Ketanji Brown Jackson?"
# Use a new name so we don't overwrite the chunked documents in `docs`
results = docsearch.similarity_search(query)

print(results[0].page_content)
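Under the hood, similarity_search embeds the query with the same OpenAI model and asks Pinecone for the nearest stored vectors. A brute-force sketch of that retrieval step, using toy 2-d vectors in place of 1536-d embeddings (Pinecone does this at scale with an approximate index):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def similarity_search(query_vec, indexed, k=1):
    """Return the k stored texts whose vectors score highest against the query."""
    scored = sorted(indexed, key=lambda item: cosine(query_vec, item[0]), reverse=True)
    return [text for _, text in scored[:k]]

# Toy (vector, text) pairs standing in for embedded document chunks.
indexed = [
    ([1.0, 0.1], "remarks about the nominee"),
    ([0.1, 1.0], "remarks about the economy"),
]
print(similarity_search([0.9, 0.2], indexed))  # ['remarks about the nominee']
```

The key point is that the query and the documents live in the same embedding space, so "nearest vector" approximates "most semantically similar text".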

Step 5: Adding More Texts to the Index

To add more texts to an existing index or start with an empty index, use the following code snippet:

index = pinecone.Index("langchain-demo")
# "text" names the metadata field where the raw text of each chunk is stored
vectorstore = Pinecone(index, embeddings.embed_query, "text")

vectorstore.add_texts(["More text to add as an example!"])

If you need to attach metadata to the texts, pass a list of dictionaries (one per text) via the metadatas argument:

vectorstore.add_texts(["More text to add as an example!"], metadatas=[{"name": "example"}])
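Conceptually, each record Pinecone stores pairs an id and a vector with its metadata, and queries can then filter on that metadata. An illustrative record shape and a minimal exact-match filter (plain Python, not a Pinecone API call; the field names here are just for illustration):

```python
# Hypothetical record shape: id + embedding vector + metadata.
record = {
    "id": "doc-42",
    "values": [0.0] * 1536,  # the embedding vector
    "metadata": {"name": "example", "text": "More text to add as an example!"},
}

def matches_filter(metadata, flt):
    """Minimal sketch of an exact-match metadata filter."""
    return all(metadata.get(k) == v for k, v in flt.items())

print(matches_filter(record["metadata"], {"name": "example"}))  # True
print(matches_filter(record["metadata"], {"name": "other"}))    # False
```

Storing the raw text in metadata (the "text" field above) is what lets the vector store hand back readable chunks instead of bare vectors.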

Conclusion:

By following these steps, you can build a powerful text search system using Pinecone, Langchain, and the OpenAI service. These tools allow you to leverage advanced text embeddings and indexing capabilities to achieve highly accurate and efficient search results. Whether you need to search through large volumes of documents or implement a recommendation system, this combination of tools can significantly enhance your application’s performance and user experience.


Author: robot learner
Reprint policy: Unless otherwise stated, all articles in this blog are licensed under CC BY 4.0. If reproduced, please indicate the source: robot learner!