Commit 086e8e5

2 parents e586e74 + 71ffe16 commit 086e8e5

2 files changed: +109 additions, −3 deletions

02_Overview_Azure_OpenAI/README.md

Lines changed: 38 additions & 0 deletions

https://microsoft.github.io/PartnerResources/azure/data-analytics-ai/openai

Here are ways that Azure OpenAI can help developers:
- **Simplified integration** - Simple and easy-to-use APIs for tasks such as text generation, summarization, sentiment analysis, language translation, and more.
- **Pre-trained models** - AI models that are already trained on vast amounts of data, making it easier for developers to leverage the power of AI without having to train their own models from scratch.
- **Customization** - Developers can also fine-tune the included pre-trained models with their own data with minimal coding, providing an opportunity to create more personalized and specialized AI applications.
- **Documentation and resources** - Azure OpenAI provides comprehensive documentation and resources to help developers get started quickly.
- **Scalability and reliability** - Hosted on Microsoft Azure, the OpenAI service provides robust scalability and reliability that developers can leverage to deploy their applications.
- **Responsible AI** - Azure OpenAI promotes responsible AI by adhering to ethical principles, providing explainability tools, governance features, diversity and inclusion support, and collaboration opportunities. These measures help ensure that AI models are unbiased, explainable, trustworthy, and used in a responsible and compliant manner.
- **Community support** - With an active developer community, developers can seek help via forums and other community support channels.

https://azure.microsoft.com/en-us/blog/explore-the-benefits-of-azure-openai-service-with-microsoft-learn/
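To make the "simplified integration" point concrete, here is a minimal sketch (not an official sample) of building the URL path and JSON body for an Azure OpenAI chat-completions REST call. The deployment name and `api-version` value are illustrative assumptions; an actual call would also need your resource endpoint and an API key.

```python
# Hypothetical sketch: assemble an Azure OpenAI chat-completions request.
# The deployment name ("gpt-4") and api-version are assumed example values.

def build_chat_request(deployment: str, user_text: str,
                       system_text: str = "You are a helpful assistant."):
    """Return (url_path, json_body) for a chat-completions call."""
    path = (f"/openai/deployments/{deployment}/chat/completions"
            "?api-version=2024-02-01")
    body = {
        "messages": [
            {"role": "system", "content": system_text},
            {"role": "user", "content": user_text},
        ],
        "temperature": 0.7,  # lower values make output more deterministic
    }
    return path, body

path, body = build_chat_request("gpt-4", "Summarize this article in one sentence.")
```

In practice this body would be POSTed to `https://<your-resource>.openai.azure.com` plus the path above with an `api-key` header; the official `openai` Python SDK wraps these details for you.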
## Comparison of Azure OpenAI and OpenAI

Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-4, GPT-3, Codex, DALL-E, and Whisper models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other.

With Azure OpenAI, customers get the security capabilities of Microsoft Azure while running the same models as OpenAI. Azure OpenAI offers private networking, regional availability, and responsible AI content filtering.

https://learn.microsoft.com/en-us/azure/ai-services/openai/overview#comparing-azure-openai-and-openai
## Data Privacy and Security

Azure OpenAI stores and processes data to provide the service and to monitor for uses that violate the applicable product terms. Azure OpenAI is fully controlled by Microsoft: Microsoft hosts the OpenAI models in Microsoft's Azure environment, and the service does not interact with any services operated by OpenAI.

Here are a few important things to know regarding the security and privacy of your prompts (inputs) and completions (outputs), your embeddings, and your training data when using Azure OpenAI. They:

- are NOT available to other customers.
- are NOT available to OpenAI.
- are NOT used to improve OpenAI models.
- are NOT used to improve any Microsoft or 3rd party products or services.
- are NOT used for automatically improving Azure OpenAI models for your use in your resource (the models are stateless unless you explicitly fine-tune them with your training data).
- Your fine-tuned Azure OpenAI models are available exclusively for your use.

https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy#how-does-the-azure-openai-service-process-data

03_Overview_AI_Concepts/README.md

Lines changed: 71 additions & 3 deletions
# Overview of AI Concepts

## Large Language Models (LLM)

A Large Language Model (LLM) is a type of AI that can process and produce natural language text. LLMs are "general purpose" AI models trained on massive amounts of data gathered from various sources, such as books, articles, webpages, and images, to discover the patterns and rules of language.
Understanding what an LLM can and cannot do is important when deciding whether to use it for a solution:

- **Understand language** - An LLM is a predictive engine that pulls patterns together based on pre-existing text to produce more text. It doesn't understand language or math.
- **Understand facts** - An LLM doesn't have separate modes for information retrieval and creative writing; it simply predicts the next most probable token.
- **Understand manners, emotion, or ethics** - An LLM can't exhibit anthropomorphism or understand ethics. The output of a foundational model is a combination of training data and prompts.

### Foundational Models

Foundational Models are specific instances or versions of an LLM. Examples include GPT-3, GPT-4, and Codex. Foundational models are trained and fine-tuned on a large corpus of text, or code in the case of a Codex model instance.

A foundational model takes in training data in many different formats and uses a transformer architecture to build a general model. Adaptations and specializations can be created to achieve certain tasks via prompts or fine-tuning.

### Difference between LLM and traditional Natural Language Processing (NLP)

LLMs and traditional Natural Language Processing (NLP) differ in their approach to understanding and processing language.

Here are a few things that separate traditional NLP from LLMs:

| Traditional NLP | Large Language Models |
| --- | --- |
| One model per capability is needed. | A single model is used for many natural language use cases. |
| Requires a set of labeled data to train the ML model. | Uses many terabytes of unlabeled data in the foundation model. |
| Highly optimized for specific use cases. | You describe in natural language what you want the model to do. |

https://learn.microsoft.com/en-us/training/modules/introduction-large-language-models/2-understand-large-language-models
## Standard Patterns
### Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation (RAG) is an architecture that augments the capabilities of a Large Language Model (LLM) like ChatGPT by adding an information retrieval system that provides grounding data. Adding an information retrieval system gives you control over the grounding data used by an LLM when it formulates a response. For an enterprise solution, RAG architecture means that you can constrain generative AI to your enterprise content sourced from vectorized documents, images, audio, and video.

GPT language models can be fine-tuned to achieve several common tasks such as sentiment analysis and named entity recognition. These tasks generally don't require additional background knowledge.

The RAG pattern facilitates bringing private, proprietary knowledge to the model so that it can perform question answering over this content. Remember that Large Language Models are trained only on public information. Because the RAG technique accesses external knowledge sources to complete tasks, it enables more factual consistency, improves the reliability of the generated responses, and helps to mitigate the problem of "hallucination".

In some cases, the RAG process involves a technique called vectorization on the proprietary data. The user prompt is compared to the vector store, and only the most relevant/matching pieces of information are returned and stuffed into the prompt for the LLM to reason over and provide an answer. The next set of demos will go into this further.
https://learn.microsoft.com/en-us/azure/search/retrieval-augmented-generation-overview
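The retrieve-then-stuff flow described above can be sketched with a toy example. Real systems use an embedding model and a vector store; here a simple word-overlap cosine score stands in for embedding similarity, and the documents and question are invented for illustration.

```python
# Toy sketch of the retrieval step in RAG: score each document against the
# user prompt and stuff the best match into the prompt sent to the LLM.
from collections import Counter
import math

def score(a: str, b: str) -> float:
    """Cosine similarity over word counts (stand-in for embedding similarity)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Contoso's refund policy allows returns within 30 days.",
    "Contoso was founded in 1984 in Redmond.",
]
question = "What is the refund policy at Contoso?"

# Retrieve the most relevant document, then ground the LLM prompt in it.
best = max(docs, key=lambda d: score(question, d))
prompt = f"Answer using only this context:\n{best}\n\nQuestion: {question}"
```

The `prompt` string is what would actually be sent to the LLM, which constrains the answer to the retrieved enterprise content rather than the model's training data.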

### Chain of Thought (CoT)

Instead of splitting a task into smaller steps, with Chain of Thought (CoT) the model is instructed to proceed step-by-step and present all the steps involved. Doing so reduces the possibility of inaccurate outcomes and makes assessing the model response easier.
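A minimal illustration of the difference — the question and wording below are invented for the example:

```python
# Hypothetical example: the same question phrased as a direct prompt and as a
# Chain-of-Thought prompt that instructs the model to show its steps.
question = ("A store sells pens at $2 each and gives $1 off orders over $20. "
            "What do 12 pens cost?")

direct_prompt = f"{question}\nAnswer:"

cot_prompt = (
    f"{question}\n"
    "Let's solve this step by step. Show each step of your reasoning, "
    "then state the final answer on its own line."
)
```

Given `cot_prompt`, the model is expected to surface the intermediate steps (12 × $2 = $24; $24 is over $20, so subtract $1, giving $23) instead of only a final number, which makes the answer easier to verify.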

### Others?

#### Zero-shot prompting
Zero-shot prompting is what we would consider the “default”: we provide no examples of inputs and expected outputs for the model to work with, leaving it up to the model to work out from the instructions alone what is needed and how to format the output.
#### Few-shot prompting
Few-shot prompting provides a few example input/output pairs in the prompt to guide the model toward the desired output.
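To make the contrast concrete, here is a sketch of zero-shot versus few-shot message lists in the chat-completions format; the classification task and example reviews are invented:

```python
# Hypothetical sentiment-classification task in chat-completions message format.
system = {"role": "system",
          "content": "Classify the sentiment of the review as Positive or Negative."}
query = {"role": "user", "content": "The battery life is terrible."}

# Zero-shot: just the instruction and the input, no examples.
zero_shot = [system, query]

# Few-shot: a handful of worked examples precede the real input.
few_shot = [
    system,
    {"role": "user", "content": "I love this phone."},
    {"role": "assistant", "content": "Positive"},
    {"role": "user", "content": "The screen cracked after one day."},
    {"role": "assistant", "content": "Negative"},
    query,
]
```

The few-shot examples anchor the expected label format (a single word, "Positive" or "Negative"), which the zero-shot version leaves to the model to guess.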
https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions#provide-grounding-context


What are you trying to solve by finding relevant data through vectors?

## Prompt Engineering

### What is a prompt
A prompt is an input or instruction provided to an Artificial Intelligence (AI) model to direct its behavior and produce the desired results. The quality and specificity of the prompt are crucial in obtaining precise and relevant outputs. A well-designed prompt can ensure that the AI model generates the desired information or completes the intended task effectively. Typical prompt tasks include summarization, question answering, text classification, and code generation.
Simple examples of prompts:
- _"Summarize the following article in one sentence."_
- _"Classify the sentiment of this product review as positive or negative."_
### What is prompt engineering
Prompt engineering is the iterative process of designing, evaluating, and optimizing prompts to produce consistently accurate responses from language models for a particular problem domain. Prompt engineers experiment with various prompts, test their effectiveness, and refine them to improve performance. Performance is measured using predefined metrics such as accuracy, relevance, and user satisfaction to assess the impact of prompt engineering.
### General anatomy of a prompt
A prompt can generally be broken down into four parts:

- **Instruction** - the task you want the model to perform.
- **Context** - background information that steers the response.
- **Input data** - the content for the model to act on.
- **Output indicator** - a cue for the format or start of the expected output.
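Assembled into a single prompt, the four parts might look like this; the review content is invented for illustration:

```python
# Hypothetical prompt built from the four anatomical parts:
# instruction, context, input data, and output indicator.
instruction = "Summarize the customer review in one sentence."
context = "The review was posted on an online electronics store."
input_data = ("Review: The headphones arrived quickly and sound great, "
              "but the carrying case feels cheap.")
output_indicator = "Summary:"

prompt = "\n".join([instruction, context, input_data, output_indicator])
```

Ending the prompt with the output indicator (`Summary:`) cues the model to begin its completion in the expected format.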
https://learn.microsoft.com/en-us/semantic-kernel/prompts/

https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/prompt-engineering

https://learn.microsoft.com/en-us/semantic-kernel/prompt-engineering/

https://learn.microsoft.com/en-us/training/modules/introduction-large-language-models/3-large-language-model-core-concepts
