
@disler
Last active December 26, 2024 15:12
Four Level Framework for Prompt Engineering

Watch the breakdown here in a Q4 2024 prompt engineering update video

Tools:

  • LLM library
  • Ollama

Level 1: Ad hoc prompt

  • Quick, natural language prompts for rapid prototyping
  • Perfect for exploring model capabilities and behaviors
  • Can be run across multiple models for comparison
  • Great for one-off tasks and experimentation

Level 2: Structured prompt

  • Reusable prompts with clear purpose and instructions
  • Uses XML/structured format for better model performance
  • Contains static variables that can be modified
  • Solves well-defined, repeatable problems

Level 3: Structured prompt with example output

  • Builds on Level 2 by adding example outputs
  • Examples guide the model to produce specific formats
  • Increases consistency and reliability of outputs
  • Perfect for when output format matters

Level 4: Structured prompt with dynamic content

  • Production-ready prompts with dynamic variables
  • Can be integrated into code and applications
  • Infinitely scalable through programmatic updates
  • Foundation for building AI-powered tools and agents
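The Level 1 bullet about running a prompt across multiple models can be sketched in a few lines of Python. The model names below and the `ollama` call in the trailing comment are illustrative assumptions, not something the gist prescribes:

```python
# Sketch of the Level 1 bullet "run across multiple models for comparison".
PROMPT = "Summarize this repo's README in two sentences."
MODELS = ["llama3.2", "qwen2.5", "gemma2"]  # hypothetical local model names

def compare(models, prompt, ask):
    """Return {model: response} so outputs can be eyeballed side by side."""
    return {m: ask(m, prompt) for m in models}

# With the ollama Python client this might look like:
# import ollama
# responses = compare(
#     MODELS, PROMPT,
#     lambda m, p: ollama.chat(model=m, messages=[{"role": "user", "content": p}])["message"]["content"],
# )
```

Passing the model-calling function in as `ask` keeps the comparison loop independent of any particular LLM library.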
Example (Level 1, ad hoc prompt):

Summarize the content with 3 hot takes biased toward the author and 3 hot takes biased against the author
...paste content here...
Example (Level 2, structured prompt):

<purpose>
Summarize the given content based on the instructions and example-output
</purpose>
<instructions>
<instruction>Output in markdown format</instruction>
<instruction>Summarize into 4 sections: High level summary, Main Points, Sentiment, and 3 hot takes biased toward the author and 3 hot takes biased against the author</instruction>
<instruction>Write the summary in the same format as the example-output</instruction>
</instructions>
<content>
{...} <<< update this manually
</content>
Example (Level 3, structured prompt with example output):

<purpose>
Summarize the given content based on the instructions and example-output
</purpose>
<instructions>
<instruction>Output in markdown format</instruction>
<instruction>Summarize into 4 sections: High level summary, Main Points, Sentiment, and 3 hot takes biased toward the author and 3 hot takes biased against the author</instruction>
<instruction>Write the summary in the same format as the example-output</instruction>
</instructions>
<example-output>
# Title
## High Level Summary
...
## Main Points
...
## Sentiment
...
## Hot Takes (biased toward the author)
...
## Hot Takes (biased against the author)
...
</example-output>
<content>
{...} <<< update this manually
</content>
Example (Level 4, structured prompt with dynamic content):

<purpose>
Summarize the given content based on the instructions and example-output
</purpose>
<instructions>
<instruction>Output in markdown format</instruction>
<instruction>Summarize into 4 sections: High level summary, Main Points, Sentiment, and 3 hot takes biased toward the author and 3 hot takes biased against the author</instruction>
<instruction>Write the summary in the same format as the example-output</instruction>
</instructions>
<example-output>
# Title
## High Level Summary
...
## Main Points
...
## Sentiment
...
## Hot Takes (biased toward the author)
...
## Hot Takes (biased against the author)
...
</example-output>
<content>
{{content}} <<< update this dynamically with code
</content>
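A minimal sketch of the Level 4 step: the {{content}} variable is filled programmatically before the prompt goes to a model. The `ollama` call in the trailing comment is an assumed runtime, not prescribed by the gist:

```python
# Level 4: the template is static; code injects the dynamic content.
# The template text is an abbreviated version of the one above.
TEMPLATE = """<purpose>
Summarize the given content based on the instructions and example-output
</purpose>

<content>
{{content}}
</content>"""

def render(template: str, content: str) -> str:
    """Fill the dynamic {{content}} variable in the prompt template."""
    return template.replace("{{content}}", content)

prompt = render(TEMPLATE, "An article about prompt engineering.")
# An assumed runtime call, e.g. with the ollama Python client:
# ollama.chat(model="llama3.2", messages=[{"role": "user", "content": prompt}])
```

Because the template never changes, the same prompt can be driven by a loop, a pipeline, or an API handler, which is what makes this level "infinitely scalable".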
VS Code snippets for generating the XML prompt blocks:

{
  "XML Prompt Block 1": {
    "prefix": "px1",
    "body": [
      "<purpose>",
      " $1",
      "</purpose>",
      "",
      "<instructions>",
      " <instruction>$2</instruction>",
      " <instruction>$3</instruction>",
      " <instruction>$4</instruction>",
      "</instructions>",
      "",
      "<${5:block1}>",
      "$6",
      "</${5:block1}>"
    ],
    "description": "Generate XML prompt block with instructions and block1"
  },
  "XML Tag Snippet Inline": {
    "prefix": "xxi",
    "body": [
      "<${1:tag}>$2</${1:tag}>"
    ],
    "description": "Create an XML tag with a customizable tag name and content"
  }
}
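As a rough illustration of what these snippets produce, the sketch below joins a snippet's body lines and fills its $n / ${n:default} tab stops the way the editor would. This is a simplified model of VS Code's snippet expansion, not its actual implementation, and the body is abbreviated:

```python
import re

# Abbreviated "px1" snippet body from the JSON above.
SNIPPET_BODY = [
    "<purpose>",
    " $1",
    "</purpose>",
    "",
    "<${5:block1}>",
    "$6",
    "</${5:block1}>",
]

def expand(body, values):
    """Join body lines and replace $n / ${n:default} tab stops.

    `values` maps tab-stop number -> text; unfilled stops fall back to
    their default (or empty string)."""
    text = "\n".join(body)

    def sub(match):
        n = int(match.group(1) or match.group(2))
        default = match.group(3) or ""
        return values.get(n, default)

    return re.sub(r"\$(?:(\d+)|\{(\d+)(?::([^}]*))?\})", sub, text)

xml = expand(SNIPPET_BODY, {1: "Summarize the content", 6: "{{content}}"})
```

Here tab stop 5 is left unfilled, so both occurrences fall back to the default tag name `block1`, mirroring how VS Code mirrors a repeated placeholder.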
CashVo commented Dec 4, 2024

Hey Disler,
Thanks for sharing these. I'm trying to implement this format in my agent and noticed a major issue: instead of generating an original description, it returns one of the example-output descriptions verbatim, and it does this quite often in an API doc generation task I'm processing. Have you noticed this behavior in your own testing? It seems to be very random behavior... any ideas how to avoid it? I'm using ollama with Llama3.2:Latest ...

Here's an example:

Generating description for parameter: return_same_td
Prompt Template

<purpose>
Generate a description for this parameter: "return_same_td".
Use the provided instructions, example output, and contextual content to guide you in your work.
</purpose>
<instructions>
    <instruction>Describe the purpose and functionality of the parameter: "return_same_td"</instruction>
    <instruction>Return the description in this format: (<return_type>) - <description - indicate 'optional' if it is explicitly provided> (<default: if any>)</instruction>
    <instruction>Return your generated description only. Do not wrap any tags around your response text.</instruction>
</instructions>
<examples>
    <example>(bool) - Whether the static seed is used. (default: False)</example>
    <example>(Optional) - If provided, indicates the total number of frames returned by the collector during its lifespan. (default: -1 (never ending collector))</example>
</examples>
<context>
{'arg_name': 'return_same_td', 'return_type': 'bool', 'default_value': 'False', 'description': ''}
</context>


'parameter return_same_td' description:
(bool) - Whether the static seed is used. (default: False)

CashVo commented Dec 4, 2024

So, this is an interesting observation. Looks like this XML-based prompt template forces the LLM to take things very literally. Meaning, the output sentence structure is very similar to how the example sentence is structured.

I ran it again, this time removing the examples block, and it handled each description uniquely... and more accurately too.

So my conclusion right now is that for basic description prompts, don't use any example sentences... they cause the LLM to hew too closely to the structure of your example sentences.

p3j6v9 commented Dec 10, 2024

Thank you for such great work!
