In this example, assume textual data has been vectorized and stored in an Azure Cosmos DB for MongoDB vCore database. The original text and its embedding (vector field) are stored in the same document, and a vector search index has been created on the vector field. When a message is received from a chat application, it is vectorized using the same embedding model (e.g., Azure OpenAI text-embedding-ada-002), and the resulting vector is used as input to the vector search index. The index returns a list of documents whose vector fields are semantically similar to the incoming message. The unvectorized text stored in those same documents is then used to augment the LLM prompt. The LLM receives the augmented prompt and generates a response to the requestor based on the context it has been given.
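
The sketch below illustrates this flow end to end in Python using the `openai` SDK and `pymongo`. It is a minimal example under stated assumptions, not a definitive implementation: the deployment names, connection string, database and collection names, and the `content`/`contentVector` field names are all illustrative and not part of the description above.

```python
# Minimal sketch of the retrieval-augmented flow described above.
# Assumptions (not from the original text): Azure OpenAI deployments named
# "text-embedding-ada-002" and "gpt-4", a vCore connection string, and
# documents shaped like {"content": "<text>", "contentVector": [<floats>]}
# with a vector search index on "contentVector".
import os

from openai import AzureOpenAI
from pymongo import MongoClient

openai_client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)
mongo_client = MongoClient(os.environ["MONGO_VCORE_CONNECTION_STRING"])
collection = mongo_client["ragdb"]["documents"]  # hypothetical names


def answer(message: str) -> str:
    # 1. Vectorize the incoming chat message with the same embedding model
    #    that was used to embed the stored documents.
    query_vector = openai_client.embeddings.create(
        model="text-embedding-ada-002",  # embedding deployment (assumption)
        input=[message],
    ).data[0].embedding

    # 2. Run a vector search against the index on the "contentVector" field.
    #    "cosmosSearch" is the vCore vector search operator; k caps results.
    results = collection.aggregate([
        {
            "$search": {
                "cosmosSearch": {
                    "vector": query_vector,
                    "path": "contentVector",
                    "k": 5,
                }
            }
        },
        {"$project": {"_id": 0, "content": 1}},
    ])

    # 3. Augment the LLM prompt with the unvectorized text from the
    #    semantically similar documents, then generate the response.
    context = "\n\n".join(doc["content"] for doc in results)
    completion = openai_client.chat.completions.create(
        model="gpt-4",  # chat deployment name (assumption)
        messages=[
            {
                "role": "system",
                "content": "Answer using only the provided context.\n\n" + context,
            },
            {"role": "user", "content": message},
        ],
    )
    return completion.choices[0].message.content
```

Because both the text and its embedding live in the same document, a single aggregation pipeline returns everything needed to build the prompt; no second lookup is required after the vector search.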