What is Prompt Engineering?
Since the introduction of Large Language Models (LLMs), new ways have opened up to solve problems more efficiently and dynamically. To generate the desired results, it is now essential to communicate optimally with the AI model about what it needs to solve or answer. This process of designing optimal instructions and questions is called prompt engineering. Prompt engineering is more art than strict procedure: to achieve a high-quality and relevant answer, the instructions must be sufficiently detailed and clear. Creating optimal prompts (inputs) requires a range of skills and knowledge, from basic communication skills to a deep technical understanding of LLMs. Creativity and a realistic assessment of the advantages and risks of new technologies are also crucial for a prompt engineer.
Why is Prompt Engineering needed?
The more complex and open to interpretation the instructions are, the less precise the answer becomes, and its relevance can deviate significantly from the desired result. This applies to entering a query into Google Search just as it does to communicating with colleagues. In all these cases, the recipient and their capacity to understand the information must be taken into account: the task must be formulated as precisely as possible while keeping its complexity within the scope of the recipient's abilities. As with almost all AI models, the quality of the result is directly related to the quality of the input. Another drawback of generative AI models is that their responses are not deterministic: they are not always consistent and therefore do not necessarily produce the same answer on subsequent attempts with the same input. However, this can be mitigated by including a clear instruction about the expected result in the task definition. Prompt engineering attempts to reduce all of these risks so that the AI model delivers tailored, optimal answers.
Secrets to a successful prompt
The following points should be considered to communicate the instructions clearly and understandably (possibly not only for LLMs):
- Precision instead of poetry: The point is not to write an entertaining or amusing text, but to ask clear, concrete questions without vague formulations.
- Context is king: To avoid possible misunderstandings, provide the LLM with as much relevant information as possible, but still try to avoid redundancies.
- Step by step: Ask the AI to clearly explain its thought process in several steps and to follow a logical sequence.
- Be nice: Be polite and respectful to your AI, and not just because of dystopian horror stories: surprisingly, many language models respond to this with more detailed and thoughtful answers.
- For example: Add an example to your prompt. This will help the AI better understand your expectations, and the LLM will then have a guideline to follow.
- A final word of wisdom: A good prompt often emerges through iterative trial and error and repeated adjustment.
Prompting techniques
In addition to the success secrets already described, there are several other techniques for designing prompts that can be used, depending on the application. These include:
- Zero-Shot Prompting: The AI is given a task without examples.
- One-shot & few-shot prompting: One or a few examples are provided to illustrate the expectations.
- Multi-shot prompting: Sometimes it is necessary to give several examples to produce the desired behavior.
- Chain of Thought Prompting (CoT): The AI is asked to explain its thought process in several steps.
- Role-based Prompting: The AI is asked to assume a specific role, e.g., "You are a Lead Software Architect with 15 years of professional experience." This technique helps to refine the output and make the best use of the AI's strengths.
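To make the techniques above more concrete, here is a minimal Java sketch that phrases the same task in the zero-shot, few-shot, and role-based styles. The method names are hypothetical helpers for illustration, not library APIs:

```java
// Minimal sketch: one task, three prompting techniques.
public class PromptTechniques {

    // Zero-shot: the task alone, with no examples.
    static String zeroShot(String sentence) {
        return "Classify the mood of the following sentence: " + sentence;
    }

    // Few-shot: a handful of labeled examples precede the actual task.
    static String fewShot(String sentence) {
        return String.join("\n",
                "Example: 'I am absolutely delighted!' -> positive",
                "Example: 'Yes, that's fine.' -> negative",
                "Classify the mood of the following sentence: " + sentence);
    }

    // Role-based: the model is first assigned a persona.
    static String roleBased(String sentence) {
        return "You are an experienced sentiment analyst. "
                + "Classify the mood of the following sentence: " + sentence;
    }
}
```

The prompt text itself is the only thing that changes between techniques; the task stays identical.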
Prompting comparisons
We will now provide a few examples to shed more light on these prompting techniques and secrets to success. In each case, a simple input is compared with an improved input. The prompts were then answered by OpenAI GPT-4o:
Fibonacci – Powered by AI
We now want to create a Java method from our LLM that outputs the Fibonacci sequence for us. We'll start with a very simple prompt and then apply the role-based prompting technique (while remaining polite to our helper). Here are the results:
Simple input without techniques:
Please write me a java function where the fibonacci sequence will be calculated
Result:
// Function to calculate Fibonacci sequence
public static int fibonacci(int n) {
    if (n <= 1) {
        return n;
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}
Role-based prompting:
Imagine you are a lead software engineer with 15 years of experience and have a really good reputation. Please use your expertise to write me a java function that calculates the fibonacci sequence. Thank you very much!
Result:
// Iterative approach for efficiency
public static long fibonacciIterative(int n) {
    if (n < 0) {
        throw new IllegalArgumentException("Input must be a non-negative integer.");
    }
    if (n == 0) return 0;
    if (n == 1) return 1;
    long a = 0, b = 1, c;
    for (int i = 2; i <= n; i++) {
        c = a + b;
        a = b;
        b = c;
    }
    return b;
}

// Recursive approach for clarity
public static long fibonacciRecursive(int n) {
    if (n < 0) {
        throw new IllegalArgumentException("Input must be a non-negative integer.");
    }
    if (n == 0) return 0;
    if (n == 1) return 1;
    return fibonacciRecursive(n - 1) + fibonacciRecursive(n - 2);
}
As you can see, the simple prompt's result misses an edge case: the method would return a negative number if the input were negative, which is not intended for the Fibonacci sequence. The role-based prompt additionally produced an iterative implementation. If you want a function that can be copied and used directly, you must say so explicitly, as the next example shows.
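The edge case can be demonstrated directly in code. This sketch (with assumed method names) contrasts the naive version's behavior on negative input with a guarded iterative version in the spirit of the role-based answer:

```java
// Contrast of the two answers above: the naive version silently returns
// its negative input, while the guarded version rejects it.
public class FibonacciEdgeCase {

    // Naive recursive version from the simple prompt.
    static int fibonacciNaive(int n) {
        if (n <= 1) {
            return n; // n = -3 falls through here and is returned unchanged
        }
        return fibonacciNaive(n - 1) + fibonacciNaive(n - 2);
    }

    // Guarded iterative version in the spirit of the role-based answer.
    static long fibonacciGuarded(int n) {
        if (n < 0) {
            throw new IllegalArgumentException("Input must be a non-negative integer.");
        }
        if (n == 0) return 0;
        long a = 0, b = 1;
        for (int i = 2; i <= n; i++) {
            long c = a + b;
            a = b;
            b = c;
        }
        return b;
    }
}
```

For example, fibonacciNaive(-3) quietly yields -3, while fibonacciGuarded(-3) throws an exception instead.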
Precise context
We want our AI to provide us with a JSON object containing information about the President of Peru in 2012. The result should then be used directly, without any further explanation.
Simple input:
Who was the president of Peru in 2012? Create a JSON object for them.
Result:
Here is a JSON object containing information about the President of Peru in 2012:
{
  "country": "Peru",
  "year": 2012,
  "president": {
    "name": "Ollanta Humala",
    "term_start": "2011-07-28",
    "term_end": "2016-07-28",
    "party": "Partido Nacionalista Peruano"
  }
}
Ollanta Humala was the President of Peru in 2012. His term began on July 28, 2011, and ended on July 28, 2016. He belonged to the Partido Nacionalista Peruano.
Input with more precise context:
Who was the president of Peru in 2012?
Your response should be in JSON format. The data structure for the JSON should conform to these Java classes:
{
  "name": java.lang.String,
  "date of birth": java.time.Instant,
  "start_of_term_year": java.lang.Integer,
  "end_of_term_year": java.lang.Integer
}
Do not add any explanations, but simply provide an RFC8259-compliant JSON response in this format without any deviations.
Thank you so much for your efforts! I really appreciate your time!
Result:
{
  "name": "Ollanta Humala",
  "date of birth": "1961-06-27T00:00:00Z",
  "start_of_term_year": 2011,
  "end_of_term_year": 2016
}
Compared to the simple prompt, we can now use the result directly and convert it into a Java object without first extracting the JSON from the surrounding text. This illustrates that the more precise the formulation, the more accurately the LLM can respond to our needs. Spring AI can significantly reduce the effort involved in this conversion to a Java object, but more on that later.
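As an illustration of the extra work the simple prompt causes, here is a small helper (a hypothetical sketch using only the JDK) that cuts the JSON object out of a verbose response before it can be parsed; with the precise prompt, this step is unnecessary:

```java
// Sketch of the extraction step a verbose answer forces on us:
// locate the outermost JSON object embedded in the explanatory prose.
public class JsonExtraction {

    static String extractJson(String llmResponse) {
        int start = llmResponse.indexOf('{');
        int end = llmResponse.lastIndexOf('}');
        if (start < 0 || end < start) {
            throw new IllegalArgumentException("No JSON object found in response.");
        }
        return llmResponse.substring(start, end + 1);
    }
}
```

This kind of brittle string surgery is exactly what a precisely formulated prompt lets us avoid.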
One last example
To make our request even clearer and thereby reduce the scope for interpretation, the LLM can be given a few examples. Our goal is now to determine the mood of a statement. However, this statement is from a person who expresses a very positive mood with euphoria, while a negative mood is shown with reserve and neutral language.
Simple input:
What is the mood of the following sentence:
„"Everything fits perfectly."“
Result:
The mood of the sentence "Everything fits perfectly." is positive, balanced, and content.
It expresses that everything is fine, without stress or negative emotions. 😊
Multi-shot prompting with several examples:
Here are a few example sentences and their mood:
Example 1: It was really good! I'm surprised!
Answer: Rather positive
Example 2: I am really thrilled, I am absolutely delighted!
Answer: Positive
Example 3: Overall, I would leave it as is.
Answer: Neutral to rather negative
Example 4: Yes, that's fine.
Answer: Negative
Example 5: It's OK
Answer: Very negative
Example 6: Everything is fine
Answer: Negative
How do you assess the following statement?
„"Everything fits perfectly."“
Result:
Based on the given examples and their moods, I would evaluate the statement
"Everything fits perfectly."
as negative.
Reasoning: Similar to "Yes, that's fine" or "Everything is fine," the statement indicates no serious flaws, but it comes across as reserved, not very enthusiastic, and rather distant. In this context, a neutral or even negative underlying attitude is apparent.
As we have now seen, the result changes from positive to negative once we provide additional information and examples. When the expected behavior deviates from the norm, multi-shot prompting is particularly important so that the AI model can interpret the request in context.
Useful tools for prompt engineering
There are numerous tools that can make a prompt engineer's life easier. For one, there are many platforms, such as PromptHub and PromptFoo, that simplify managing, testing, and optimizing prompts and offer version control and comprehensive analysis functions. Additionally, there are frameworks such as Spring AI that help developers make AI queries more efficient and even convert an LLM's response, via a suitable template, directly into JSON and from there into a Java object. To do this, you describe the format of your target Java object using Spring AI's Structured Output Converters and pass it along with the prompt to the LLM; Spring AI then loads the result into a newly created Java object.
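A sketch of how this could look with Spring AI's ChatClient fluent API is shown below. This is not a runnable, self-contained example: it assumes a Spring AI dependency and a configured ChatModel bean, and exact method names may differ between Spring AI versions. The record President and its fields are hypothetical, chosen to match the earlier example:

```java
// Sketch only: requires a Spring AI dependency and a configured ChatModel.
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.model.ChatModel;

// Hypothetical target type mirroring the JSON structure from the example above.
record President(String name, Integer startOfTermYear, Integer endOfTermYear) {}

class PresidentLookup {
    President lookup(ChatModel chatModel) {
        return ChatClient.create(chatModel)
                .prompt()
                .user("Who was the president of Peru in 2012?")
                .call()
                .entity(President.class); // structured output converter builds the object
    }
}
```

Under the hood, the entity(...) call appends format instructions (much like our hand-written "RFC8259-compliant JSON" prompt) and deserializes the answer, so the manual extraction and conversion steps disappear.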
Just a pipe dream, or is it more like an orchestra?
Prompt engineering is not a passing trend. In a world where AI is taking over more and more tasks, the ability to guide it effectively is becoming a key skill. Whether in software development, marketing, or science, the right prompts determine efficiency and innovation. Especially in the fields of automation and data analysis, prompt engineering could soon become as important as programming skills.
Sources:
[1] https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/introduction-prompt-design
[2] https://www.ibm.com/think/topics/prompt-engineering
[3] https://platform.openai.com/docs/guides/prompt-engineering
[4] https://aws.amazon.com/de/what-is/prompt-engineering/
[5] https://docs.spring.io/spring-ai/reference/api/prompt.html