Creating the Optimal ChatGPT Prompt: Best Practices and Examples

Introduction

The quality of ChatGPT's responses depends largely on how well you formulate your prompt - that is, your input or question [1]. A precisely formulated prompt can help you get more accurate, relevant, and useful results. In this article, we'll show you how to create an optimal ChatGPT prompt. We'll present proven best practices recommended by official sources like OpenAI and Microsoft, and provide practical examples for various use cases. Our goal is to give you a guide for effectively "communicating" with ChatGPT - following the motto: you program the AI with words [1].

Best Practices for Effective ChatGPT Prompts

Formulate clearly, specifically and precisely

Make your request as unambiguous as possible to leave little room for interpretation. Both OpenAI and Microsoft emphasize that concrete instructions lead to more relevant and focused responses [2][3]. Avoid vague phrasing or overly general questions. Instead of, for example,

Tell me something about climate change
you should specify:
Explain the economic impacts of climate change in developing countries over the next ten years [1]
The more specific your prompt (topic, timeframe, perspective, etc.), the more targeted the response will be. Details about the desired response - such as length, depth of coverage, or style - should be mentioned in the prompt whenever possible [2].
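The gap between a vague and a specific prompt can be closed mechanically: if you collect topic, timeframe, perspective, and length as explicit fields, the prompt practically writes itself. A minimal sketch (the helper name and its fields are our own illustration, not part of any official API):

```python
def build_specific_prompt(topic, timeframe=None, perspective=None, length=None):
    """Assemble a specific prompt from explicit details (hypothetical helper)."""
    prompt = f"Explain {topic}"
    if perspective:
        prompt += f" from the perspective of {perspective}"
    if timeframe:
        prompt += f" over {timeframe}"
    prompt += "."
    if length:
        prompt += f" Answer in {length}."
    return prompt

print(build_specific_prompt(
    "the economic impacts of climate change in developing countries",
    timeframe="the next ten years",
    length="3-5 sentences",
))
```

Filling in only the fields you care about still yields a well-formed, unambiguous request.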

Include context and references

Provide the model with the necessary context it needs for a well-founded response. If your question relates to a specific text, data, or previous chat history, make this information available. OpenAI recommends clearly separating such additional content from the actual command, for example by using quotation marks or a clear keyword [2]. A prompt for summarization might look like this [1]:

Summarize the following text in 3 bullet points. Text: "[text to summarize]"
By providing reference text, you reduce the risk of the model producing hallucinations (fabricated content) [4]. If you don't have reference text, you can still provide context by narrowing the question or supplying background information in the prompt. Microsoft also advises giving the model an alternative for knowledge gaps - for example by noting in the prompt:
Respond with 'not found' if the information isn't in the text [3]
This way you control how ChatGPT responds when the needed information isn't available, rather than having it make something up. Also note the input length: ChatGPT prompts can be detailed, but there is a token limit (the maximum input length, usually several thousand words). Use the available space efficiently and avoid unnecessary repetitions and filler words [3].
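The pattern of instruction, fallback rule, and delimited reference text can be captured in a small template function. A sketch (the helper and its default fallback string are our own, following the delimiter advice above):

```python
def build_grounded_prompt(instruction, reference_text, fallback="not found"):
    """Combine an instruction, a knowledge-gap fallback rule, and
    clearly delimited reference text into one prompt (hypothetical helper)."""
    return (
        f"{instruction}\n"
        f"Respond with '{fallback}' if the information is not in the text.\n"
        f'Text: """{reference_text}"""'
    )

prompt = build_grounded_prompt(
    "Summarize the following text in 3 bullet points.",
    "[text to summarize]",
)
print(prompt)
```

The triple quotation marks keep the reference text visibly separate from the command, so the model is less likely to treat instructions inside the text as its own orders.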

Specify desired format and output

Describe in what format or style the response should come. Should the output be e.g. a list, a paragraph, a JSON object or a poem? State this explicitly. OpenAI recommends making the desired output format clear - if necessary by including an example in the prompt [2]. For example, you could write:

Respond in JSON format:
followed by an example structure. Similarly, you can specify the desired length of the response (e.g. "in 3-5 sentences" instead of "keep it short" - precise length specifications are better than vague terms [2]). If you prefer a particular style or tone (formal, humorous, technical, simply explained, etc.), it's worth mentioning this in the ChatGPT prompt. Such specifications help the model tailor the response to your expectations.
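Showing the desired JSON structure in the prompt has a practical payoff: if the model sticks to it, you can parse the response directly. A sketch (the schema fields and the simulated response are made up for illustration):

```python
import json

# An example structure makes "respond in JSON" unambiguous.
schema_example = {"title": "...", "keywords": ["..."], "summary": "..."}
prompt = (
    "Summarize the article below. "
    "Respond in JSON format with exactly this structure:\n"
    + json.dumps(schema_example, indent=2)
    + '\nArticle: "[article text]"'
)

# If the model follows the requested format, parsing is trivial.
simulated_response = '{"title": "Demo", "keywords": ["a", "b"], "summary": "Short."}'
data = json.loads(simulated_response)
print(data["title"])
```

A format example in the prompt thus turns free-form text generation into structured output you can feed into downstream code.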

Assign a role or persona in the ChatGPT prompt

It can be useful to have the model assume a specific role to steer the tone and perspective of the response [4]. You could start with a phrase like:

You are an experienced doctor...
or
Act as an IT security expert...
This changes the tone and level of detail of the response according to the given perspective. OpenAI recommends this technique to adapt responses to the desired context [4]. In one example, ChatGPT was first asked plainly when the best time for fall foliage in New England is, and then asked to answer the same question as an experienced tree biologist in language children can understand - the second response was noticeably different and much better tailored to the request [1]. Through such persona specifications, you can control whether the response should be simplified, technically in-depth, creative, or tailored to a specific target group [5].
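When working through the API rather than the chat window, a persona is conventionally placed in a system message, with the actual question in a user message. A sketch of that message structure, using the fall-foliage example above (the exact wording is illustrative):

```python
# Persona in the system message, the actual question in the user message.
messages = [
    {
        "role": "system",
        "content": (
            "You are an experienced tree biologist. "
            "Explain things so that children can understand them."
        ),
    },
    {
        "role": "user",
        "content": "When is the best time to see fall foliage in New England?",
    },
]
print(messages[0]["role"])
```

In the chat interface the same effect is achieved by simply prepending the persona sentence to your prompt.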

Use examples (Few-Shot Prompts)

If a simple prompt (without examples) doesn't deliver the desired result, examples can help. The idea: show the model what you expect by demonstrating one or more example inputs with the corresponding desired outputs in the prompt. This approach is called few-shot prompting. For a complicated extraction task, for instance, you could first provide one or two example sentences plus the correctly extracted information, and then follow with a third sentence for which the model should generate the output. OpenAI shows an example where keywords are extracted from texts: first two example texts with matching keyword lists are given, then "Text 3" followed by "Keywords 3:", which the model should complete [2]. Through such examples in the prompt, ChatGPT better understands what is desired. However, examples should be limited to the most important cases so the prompt doesn't become unnecessarily long.
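The "Text N / Keywords N" layout from OpenAI's example can be generated programmatically, which keeps the few-shot pattern consistent however many examples you include. A sketch (the helper and the sample texts are our own):

```python
def build_few_shot_prompt(examples, new_text):
    """Lay out example (text, keywords) pairs in the 'Text N / Keywords N'
    pattern, ending at the point where the model should continue."""
    parts = []
    for i, (text, keywords) in enumerate(examples, start=1):
        parts.append(f"Text {i}: {text}\nKeywords {i}: {keywords}")
    n = len(examples) + 1
    parts.append(f"Text {n}: {new_text}\nKeywords {n}:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    [("Stripe provides APIs for online payments.", "Stripe, APIs, payments"),
     ("OpenAI trains large language models.", "OpenAI, language models")],
    "Mars rovers collect geological samples.",
)
print(prompt)
```

Ending the prompt exactly at "Keywords 3:" invites the model to complete the pattern rather than comment on it.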

Break down complex tasks

If your request is too extensive or multi-part, consider breaking it down into smaller sub-steps [4]. You can first send one aspect of the task to ChatGPT, then check the response and build on it with follow-up questions. For example, you could first ask for a list of ideas and then ask for details about each idea, rather than demanding everything in a single prompt. OpenAI recommends dividing complex tasks into simpler, sequential steps [4]. This way, the individual request remains manageable and the AI can focus more specifically on each step. You also retain more control over the entire process and can make adjustments if needed.
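The send-check-build-on-it loop described above can be sketched as a small driver that feeds each answer into the next sub-step. The `ask` callable stands in for any prompt-to-answer function, e.g. a thin wrapper around a chat API; here it is stubbed out so the sketch runs on its own:

```python
def run_in_steps(ask, steps):
    """Send one sub-step at a time, carrying the previous answer
    into the next prompt as context (illustrative sketch)."""
    context = ""
    answers = []
    for step in steps:
        prompt = (context + "\n" + step).strip()
        answer = ask(prompt)
        answers.append(answer)
        context = f"Previous answer: {answer}"
    return answers

# Stub that just echoes the current step (replace with a real API call).
echo = lambda prompt: prompt.splitlines()[-1].upper()
result = run_in_steps(echo, ["List three ideas.", "Detail idea one."])
print(result)
```

In a real session you would inspect each intermediate answer before issuing the next step, which is precisely the control the decomposition buys you.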

Enable step-by-step thinking

For tricky questions, it can help to explicitly instruct the model to work through the problem step by step. OpenAI calls this "giving the model time to think" [4]. In practice, this could look like adding to the prompt:
First think carefully and derive your conclusion before responding
or
Show your thoughts step by step, and then give the final answer
This approach is based on the Chain-of-Thought methodology and can lead to more logically consistent and well-founded responses, as the model must explicitly structure its thoughts. Particularly for mathematical or logical problems, such a prompt often yields better results because errors in the derivations are more likely to be avoided.
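A practical companion to a chain-of-thought instruction is a fixed convention for where the final answer appears, so it can be extracted from the reasoning afterwards. A sketch (the suffix wording, the `Answer:` convention, and the simulated response are all our own, not an OpenAI specification):

```python
COT_SUFFIX = (
    "\nFirst show your reasoning step by step, then give the final "
    "answer on its own line, starting with 'Answer:'."
)

def extract_final_answer(response):
    """Pull the final answer out of a step-by-step response, relying on
    the 'Answer:' convention requested in COT_SUFFIX above."""
    for line in reversed(response.splitlines()):
        if line.startswith("Answer:"):
            return line[len("Answer:"):].strip()
    return response.strip()

simulated = "Step 1: 17 * 3 = 51.\nStep 2: 51 + 9 = 60.\nAnswer: 60"
print(extract_final_answer(simulated))  # → 60
```

Appending `COT_SUFFIX` to a math or logic prompt gives the model room to reason while keeping the result machine-readable.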

Formulate positively rather than negatively

Phrase instructions as positive action requests rather than prohibitions. OpenAI notes that a prompt consisting only of prohibitions, such as

Don't do X, don't output Y
is less effective [2]. It's better to state what the model should do rather than only listing what it shouldn't [2]. For example, instead of
Don't disclose confidential information
you could write:
Respond only with information from the provided text and mark missing data as 'unknown'
Through this positive, constructive phrasing, the model has clear action steps to follow, and the risk of unwanted outputs decreases.

Iteratively refine and test the prompt

View prompting as an iterative process. Rarely will you achieve the perfect output on the first try. It's normal and recommended to gradually improve the prompt: rephrase your request, add details or remove unnecessary parts, and observe how the responses change. OpenAI points out that a change to the prompt may immediately show better results for some examples, but might perform worse on a broader test basis [4]. Therefore, it makes sense to try a prompt with several different inputs to ensure the results are consistently good. Also use the possibility in an ongoing ChatGPT conversation to ask follow-up questions or adjust the instruction without having to start from scratch - ChatGPT "remembers" the history. According to experts, you can thus gradually steer the response in the desired direction, for example by saying after an initial response

Give more details about this
or
Phrase it more simply [1]
This ongoing refinement is a powerful tool to eventually obtain the optimal output.
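The "ChatGPT remembers the history" behavior is, on the API side, simply the client resending the full message list each turn. A sketch of that loop, with the model call stubbed out (the class and stub are our own illustration):

```python
class Conversation:
    """Keep the full message history so follow-ups like
    'Phrase it more simply' land in context. `ask` is any callable
    mapping a message list to a reply (stubbed below)."""

    def __init__(self, ask):
        self.ask = ask
        self.history = []

    def send(self, user_message):
        self.history.append({"role": "user", "content": user_message})
        reply = self.ask(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

# Stub reply: report how many user turns have been seen so far.
stub = lambda history: f"reply {sum(m['role'] == 'user' for m in history)}"
chat = Conversation(stub)
chat.send("Explain climate change.")
print(chat.send("Phrase it more simply."))  # → reply 2
```

Because every turn carries the whole history, a short follow-up like "Give more details about this" is enough; the earlier prompt and answer are still in scope.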

Anatomy of a good prompt: Structure following the "o1 Prompt"

A particularly structured approach is described by developer Swyx with the so-called "o1 prompt" [6]. Here, a good prompt is divided into four logical sections - a concept that can help you formulate complex tasks for ChatGPT clearly and precisely:

  1. Goal: state up front, in one or two sentences, exactly what you want.
  2. Return Format: describe the shape of the desired output (list, table, JSON, length).
  3. Warnings: name constraints and pitfalls the model should watch out for.
  4. Context Dump: paste all relevant background information at the end.

This structure is particularly suitable for more complex tasks or when you expect very specific results. The sections can be organized as separate paragraphs or separated by delimiters (e.g. "--") in the prompt. Tip: even if you only use 2 or 3 of the elements, the quality of the response often increases.
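Assuming the four sections from the cited latent.space post (Goal, Return Format, Warnings, Context Dump), the layout can be assembled mechanically, with "--" delimiters as suggested above. The helper itself is our own sketch:

```python
def build_o1_prompt(goal, return_format, warnings, context_dump):
    """Assemble the four-section 'o1' layout, separated by '--' lines.
    Section names follow the cited post; the helper is illustrative."""
    sections = [
        ("Goal", goal),
        ("Return Format", return_format),
        ("Warnings", warnings),
        ("Context Dump", context_dump),
    ]
    return "\n--\n".join(f"{name}:\n{body}" for name, body in sections)

print(build_o1_prompt(
    "Find three hiking trails near Munich.",
    "A numbered list with name, length, and difficulty.",
    "Only include trails that are open year-round.",
    "We are a family with two children aged 6 and 9.",
))
```

Even when you fill in only two or three sections, keeping the labeled layout makes it obvious to the model which part is the instruction and which part is background.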

Practical Application Examples

Summarizing a text

Suppose you want ChatGPT to summarize a longer article or paragraph. An effective prompt might look like this:

Summarize the following text in 3-5 bullet points: "[article text]"
In this prompt, a list of bullet points is clearly requested as the output format, and the text to be processed is separated from the command by quotation marks [2]. Additionally, the number of desired points (3-5) is specified exactly, which is more precise than e.g. "summarize briefly" [2]. ChatGPT can thus better extract the content and list the most important information concisely.

Creative content (story)

If you want ChatGPT to write a story or creative text, it's worth specifying the style and theme exactly. Example:

Write a short, humorous story about an astronaut dog on Mars, in the style of a fairy tale.
This prompt gives genre (fairy tale), tone (humorous), length (short) and even an unusual character. Through the detailed specifications, the likelihood increases of getting an original and satisfying story that exactly meets these criteria [2]. Had you only entered "Write a story about a dog in space", the result would probably have been less specific and possibly less original. This example demonstrates how important detailed instructions are, especially for creative tasks.

Code generation and help

If you use ChatGPT for programming (e.g. to generate code or explain errors), you should tailor the prompt to the developer perspective. A good approach is to suggest that the model present code directly in the response. Example:

Write a Python function that converts miles to kilometers. Begin your response with the complete code and then briefly explain how it works.
By instructing the model to start the response with code, it is guided in the right direction. OpenAI has shown that so-called "leading words" like import in a prompt can encourage the model to output the response directly as code [2]. In our example, ChatGPT might respond with a Python code block (including the def definition and, if needed, an import) and then provide an explanation in prose. This way you get both the requested code and its explanation.
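A response to the example prompt above would plausibly open with a code block like the following. This is our own version of the requested function, not actual ChatGPT output:

```python
def miles_to_kilometers(miles: float) -> float:
    """Convert miles to kilometers (1 mile = 1.609344 km)."""
    return miles * 1.609344

print(round(miles_to_kilometers(10), 2))  # → 16.09
```

Having a concrete expectation of the output like this also makes it easier to spot when the model's generated code deviates from what you asked for.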

Q&A with expert knowledge

Imagine you need an expert answer, for example in the medical field. Instead of simply asking

What can you do against migraines?
you could optimize the prompt:
You are an experienced neurologist. Explain to me, in understandable terms, what can be done against migraines, and also mention possible risks.
Here the model is placed in the role of a neurologist (persona) and instructed to formulate understandably for laypeople while covering certain aspects (treatment options and risks). With these specifications, the response will likely be more substantiated and structured than if you just asked the open question. Additionally, the phrase "in understandable terms" makes it clear that overly technical language shouldn't be used - an example of how you can control the tone of the response [5]. This prompt design helps obtain a targeted, high-quality response that contains both expert knowledge and is comprehensible for laypeople.

Summary

In summary: An optimal ChatGPT prompt is unambiguous, detailed, and contains all relevant information so the model knows exactly what is required. By providing context, format specifications and possibly examples, you can steer the AI responses in the desired direction. The best practices presented here - validated by OpenAI, Microsoft and other experts - serve as a guide. Ultimately, practice in prompt writing improves results: Don't hesitate to try different formulations and learn from the model's reactions. Over time, you'll develop a feel for which prompt techniques are most effective to get precise and helpful responses from ChatGPT.

Sources

  1. Effective Prompts for AI: The Essentials - MIT Sloan Teaching & Learning Technologies
  2. Best practices for prompt engineering with the OpenAI API | OpenAI Help Center
  3. Azure OpenAI Service - Azure OpenAI | Microsoft Learn
  4. The Official ChatGPT-Prompt Engineering Guide from OpenAI Is Here
  5. The art of the prompt: How to get the best out of generative AI - Source
  6. latent.space: o1 Skill Issue – How to Write a Prompt