Commit c115cd4

workin
1 parent 7af8e82 commit c115cd4

3 files changed: 1 addition, 33 deletions

02_Overview_Azure_OpenAI/README.md

Lines changed: 0 additions & 2 deletions

@@ -42,5 +42,3 @@ Here are a few important things to know in regards to the security and privacy o
 - Your fine-tuned Azure OpenAI models are available exclusively for your use.
 
 https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy#how-does-the-azure-openai-service-process-data
-
-

07_Explore_OpenAI_models/README.md

Lines changed: 1 addition & 1 deletion

@@ -30,7 +30,7 @@ The DALL-E model, enables the use of a text prompt provided by a user as the inp
 
 ## Selecting an LLM
 
-Before a Large Language Model (LLM) can be implemented into a solution, the specific LLM to use must be chosen.
+Before a Large Language Model (LLM) can be implemented into a solution, an LLM model must be chosen.
 
 
 
11_Prompt_Engineering/README.md

Lines changed: 0 additions & 30 deletions

@@ -6,37 +6,7 @@
 - Create a flow that communicates with the deployed Azure OpenAI and Azure Cosmos DB services
 - Implement a web search plugin
 
-## What is a prompt
 
-A prompt is an input or instruction provided to an Artificial Intelligence (AI) model to direct its behavior and produce the desired results. The quality and specificity of the prompt are crucial in obtaining precise and relevant outputs. A well-designed prompt can ensure that the AI model generates the desired information or completes the intended task effectively. Some typical prompts include summarization, question answering, text classification, and code generation.
-
-## What is prompt engineering
-
-Prompt engineering is the iterative process of designing, evaluating, and optimizing prompts to produce consistently accurate responses from language models for a particular problem domain. It involves designing and refining the prompts given to an AI model to achieve the desired outputs. Prompt engineers experiment with various prompts, test their effectiveness, and refine them to improve performance. Performance is measured using predefined metrics such as accuracy, relevance, and user satisfaction to assess the impact of prompt engineering.
-
-## General anatomy of a prompt
-
-Instruction, context, input, output indicator
-
-## Zero-shot prompting
-
-Zero-shot prompting is what we would consider the “default”. This is when we provide no examples of inputs/expected outputs to the model to work with. We’re leaving it up to the model to decipher what is needed and how to output it from the instructions.
-
-## Few-shot prompting
-
-Few-shot prompting provides examples to guide the model to the desired output.
-
-## RAG
-
-GPT language models can be fine-tuned to achieve several common tasks such as sentiment analysis and named entity recognition. These tasks generally don't require additional background knowledge.
-
-The RAG pattern facilitates bringing private proprietary knowledge to the model so that it can perform Question Answering over this content. Remember that Large Language Models are indexed only on public information.
-Because the RAG technique accesses external knowledge sources to complete tasks, it enables more factual consistency, improves the reliability of the generated responses, and helps to mitigate the problem of "hallucination".
-
-In some cases, the RAG process involves a technique called vectorization on the proprietary data. The user prompt is compared to the vector store and only the most relevant/matching pieces of information are returned and stuffed into prompt for the LLM to reason over and provide an answer.
-
-## Chain of thought
-## ReAct
 
 ## Lab
 ### Diagram RAG using Azure Cosmos DB for MongoDB vCore as a retriever
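The zero-shot and few-shot passages removed above, together with the prompt anatomy line (instruction, context, input, output indicator), can be illustrated with a short sketch. This is a minimal, hypothetical example, assuming the `openai` Python package (v1.x) and an existing Azure OpenAI chat deployment; the environment variable names, the `gpt-35-turbo` deployment name, and the sentiment-classification task are placeholders, not anything defined in this repository.

```python
# Hypothetical sketch of zero-shot vs. few-shot prompting against Azure OpenAI,
# assuming the openai Python package (v1.x) and an existing chat deployment.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # assumed env var
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # assumed env var
    api_version="2024-02-01",                            # assumed API version
)

DEPLOYMENT = "gpt-35-turbo"  # placeholder deployment name

# Instruction (system message): the "instruction" part of the prompt anatomy.
instruction = {
    "role": "system",
    "content": "Classify the sentiment of the review as Positive, Negative, or Neutral.",
}

# Few-shot examples: the "context" part, input/output pairs that guide the model.
examples = [
    {"role": "user", "content": "Review: The bike was delivered on time and works great."},
    {"role": "assistant", "content": "Positive"},
    {"role": "user", "content": "Review: The helmet cracked after one ride."},
    {"role": "assistant", "content": "Negative"},
]

# Input: the new text we actually want classified.
new_input = {"role": "user", "content": "Review: The jersey fits fine, nothing special."}

# Zero-shot: instruction + input only; the model must infer the output format itself.
zero_shot = client.chat.completions.create(
    model=DEPLOYMENT, messages=[instruction, new_input]
)

# Few-shot: the worked examples steer the model toward the desired one-word labels.
few_shot = client.chat.completions.create(
    model=DEPLOYMENT, messages=[instruction, *examples, new_input]
)

print("zero-shot:", zero_shot.choices[0].message.content)
print("few-shot: ", few_shot.choices[0].message.content)
```

The zero-shot call leaves the output format entirely to the model, while the few-shot call's example pairs nudge it toward the short labels shown in the examples.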

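The removed RAG passage describes vectorizing proprietary data, comparing the user prompt against a vector store, and stuffing only the most relevant matches into the prompt for the LLM to reason over. Below is a minimal in-memory sketch of that flow under the same assumptions as above; the sample documents and deployment names are made up, and a plain Python list stands in for the Azure Cosmos DB for MongoDB vCore retriever that the lab uses.

```python
# Minimal sketch of the RAG "vectorize, retrieve, stuff into the prompt" flow.
# Assumes the openai Python package (v1.x) plus an embeddings deployment and a
# chat deployment in Azure OpenAI; the "vector store" here is an in-memory list.
import math
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # assumed env var
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # assumed env var
    api_version="2024-02-01",                            # assumed API version
)

EMBED_DEPLOYMENT = "text-embedding-ada-002"  # placeholder deployment names
CHAT_DEPLOYMENT = "gpt-35-turbo"

# Private, proprietary documents the base model was never trained on (made up).
documents = [
    "Cosmic Works sells the Nebula 9 mountain bike for $1,299.",
    "All Cosmic Works helmets carry a two-year warranty.",
    "Cosmic Works ships to the US, Canada, and Mexico only.",
]

def embed(text: str) -> list[float]:
    """Turn text into a vector using the embeddings deployment."""
    return client.embeddings.create(model=EMBED_DEPLOYMENT, input=text).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# 1. Vectorize the documents once and keep them in the "vector store".
store = [(doc, embed(doc)) for doc in documents]

# 2. Vectorize the user question and retrieve the most relevant documents.
question = "How much does the Nebula 9 cost?"
question_vector = embed(question)
top_matches = sorted(store, key=lambda item: cosine(question_vector, item[1]), reverse=True)[:2]

# 3. Stuff the retrieved context into the prompt so the model answers from it.
context = "\n".join(doc for doc, _ in top_matches)
answer = client.chat.completions.create(
    model=CHAT_DEPLOYMENT,
    messages=[
        {"role": "system", "content": "Answer only from the provided context. "
                                      "If the answer is not in the context, say you don't know."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```

In a real deployment the precomputed vectors would live in the retriever itself, such as the lab's Azure Cosmos DB for MongoDB vCore collection, and the similarity search would run there rather than in application code.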