I'm wondering if it would be beneficial to change the prompt in the llama3 call to align it with what is described here: https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/#meta-llama-3-instruct
Thanks for the comment. Do you mean specifically for the Ollama portion? I.e., changing

```python
prompt = (
    f"Given the input `{format_input(entry)}` "
    f"and correct output `{entry['output']}`, "
    f"score the model response `{entry[json_key]}`"
    f" on a scale from 0 to 100, where 100 is the best score. "
    f"Respond with the integer number only."
)
```

to

```python
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|> "
    "You are a helpful AI assistant for travel tips and recommendations<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>"
    f"Given the input `{format_input(entry)}` "
    f"and correct output `{entry['output']}`, "
    f"score the model response `{entry[json_key]}`"
    f" on a scale from 0 to 100, where 100 is the best score. "
    f"Respond with the integer number only."
)
```

I think that's not necessary, because Ollama already applies the system prompt and chat template internally. Or do you mean changing the prompt style for the model we are finetuning? In that case, I agree that it makes sense. However, as I describe in the book, I used the Alpaca style because it's the "original" one. Changing the prompt style is actually exercise 7.1. Good suggestion, though!
I am referring to the Ollama portion (`def generate_model_scores()`, `def format_input()`). I mean changing/personalizing the system part in the messages:

```python
messages = [{
    "role": "system",
    "content": ...,
}]
```

and using as its content something similar to what is now the `instruction_text`, plus more personalization (currently everything is handled in the user role):

```python
instruction_text = (
    "You are an expert system on scoring bla bla bla."
    " Score on a scale from 0 to 100, where 100 is the best score."
    " Respond with the integer number only."
    " Write a response that appropriately completes the following request."
    " Follow instructions exactly."
)
```

https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion
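Concretely, the suggestion above could look like the following sketch of a request for the `/api/chat` endpoint linked above (the `build_chat_payload` helper and the prompt text are illustrative, not code from the book): the scoring persona moves into the `system` turn, and only the entry-specific content stays in the `user` turn.

```python
import json

def build_chat_payload(instruction_text, user_prompt, model="llama3"):
    # The system turn carries the scoring persona; the user turn carries
    # the entry-specific prompt. Ollama wraps both in the model's chat
    # template, so no special tokens are needed in either string.
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": instruction_text},
            {"role": "user", "content": user_prompt},
        ],
        "stream": False,
    }

payload = build_chat_payload(
    "You are an expert scoring system."
    " Respond with the integer number only.",
    "Given the input and correct output, score the model response"
    " on a scale from 0 to 100.",
)
body = json.dumps(payload)  # ready to POST to /api/chat
```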
…