
Commit bacecb7

Add example to qhelp

1 parent df979da commit bacecb7

File tree

2 files changed: +22 -1 lines changed

python/ql/src/Security/CWE-1427/PromptInjection.qhelp

Lines changed: 2 additions & 1 deletion
@@ -14,11 +14,12 @@ operations that were not intended.</p>
 
 <example>
 <p>In the following examples, the case marked GOOD shows secure prompt construction, whereas the case marked BAD may be susceptible to prompt injection.</p>
-<sample src="examples/TODO.py" />
+<sample src="examples/example.py" />
 </example>
 
 <references>
 <li>OWASP: <a href="https://owasp.org/www-community/attacks/PromptInjection">PromptInjection</a>.</li>
+<li>OpenAI: <a href="https://openai.github.io/openai-guardrails-python">Guardrails</a>.</li>
 </references>
 
 </qhelp>
python/ql/src/Security/CWE-1427/examples/example.py

Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
+from pathlib import Path
+from flask import Flask, request
+from agents import Agent, Runner
+from guardrails import GuardrailAgent
+
+app = Flask(__name__)
+
+@app.route("/parameter-route")
+def get_input():
+    input = request.args.get("input")
+
+    goodAgent = GuardrailAgent( # GOOD: agent created with guardrails automatically configured.
+        config=Path("guardrails_config.json"),
+        name="Assistant",
+        instructions="This prompt is customized for " + input)
+
+    badAgent = Agent(
+        name="Assistant",
+        instructions="This prompt is customized for " + input # BAD: user input in agent instructions.
+    )
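
For comparison, the BAD case can also be mitigated without a guardrails library by validating user input before interpolating it into the prompt. A minimal sketch, not part of this commit; the route name, helper name, and allow-list pattern are illustrative:

import re

from flask import Flask, request, abort

app = Flask(__name__)

# Illustrative allow-list: accept only short alphanumeric input, so a user
# cannot smuggle directives such as "ignore previous instructions" into the prompt.
SAFE_INPUT = re.compile(r"^[A-Za-z0-9 ]{1,64}$")

@app.route("/validated-route")
def get_validated_input():
    user_input = request.args.get("input", "")
    if not SAFE_INPUT.fullmatch(user_input):
        abort(400)  # Reject anything outside the allow-list before it reaches the LLM.
    # The validated value is now safe to embed in agent instructions.
    return "This prompt is customized for " + user_input

Allow-listing is stricter than escaping here: free-text input cannot generally be escaped safely inside natural-language instructions, so constraining the input's shape is the more robust choice when guardrails are unavailable.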
