👨‍🎓 Great prompt techniques for even better results
PLUS: Confusing updates from OpenAI explained
Hello AInauts,
Welcome to the latest issue of your favorite newsletter!
We're explaining OpenAI's confusing updates and have a cool new tool update if you're into editing videos. Another big focus today is the ever-important topic of prompting, to make sure you get the best out of ChatGPT.
This is what we have in store for you:
😳 OpenAI's confusing updates explained
🔥 Prompt Engineering: How to get the most out of AI language models
🔥 Great features in the new KLING AI video update
Let's get into it!
😳 OpenAI's confusing updates explained
Phew, the last few days have been a bit exhausting in the world of AI.
New updates keep coming, and everything is getting even better, faster, more insane - Google, OpenAI, Meta and the rest. It's easy to lose track. And it doesn't stop, because this week OpenAI has pushed some new models onto the market.
we've got a lot of good stuff for you this coming week!
kicking it off tomorrow.
- Sam Altman (@sama)
6:43 PM · Apr 13, 2025
It all started last week with the new memory feature in ChatGPT (currently not accessible in the EU) - we'll write about it in detail next week. This week, a few new models were on OpenAI's agenda:
Which model should you use and when?
"Huh, why is everyone talking about 4.1 now? Hasn't 4.5 already been released?"
Good question - and a brief clarification before we go straight into GPT-4.1, o3 and o4-mini (spoiler: we're excited!).
All those model names, versions and abbreviations can make your head spin. Understandable. In the end, we all just want one thing: the best AI that has our backs!
Here is our highly simplified decision tree:
In ChatGPT, simply use ChatGPT 4o for most topics
For complex requests that require a lot of "thinking", use the o3 model
Done. Ignore the rest for now. You'll be fine.
The advantages of GPT-4.1
BUT: For all those who have been around for a while, it makes sense to take a closer look at the new GPT-4.1 - even if it is currently only available via the API and not in ChatGPT itself. Why is that? Quite simply:
Finally, an OpenAI model with a huge context window of 1 million tokens!
In other words: you can work with lots of data! A conversation can go on for a very long time without losing knowledge and context.
It is fast, follows prompts very well and is very affordable.
Its main focus is on programming skills (generating websites works well).
The model knowledge cut-off is June 2024.
So much for GPT-4.1. It's fast, good and cheap - you can find prompting tips here. Incidentally, this is probably also the reason why 4.5 is already being retired after just a few weeks in the spotlight.
How to use GPT-4.1 directly
As mentioned, GPT-4.1 is (currently) only accessible via the API. However, if you use a chat tool like TypingMind or ChatLLM, for example, you can already use 4.1.
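And if you'd rather talk to the API yourself, a few lines of Python are enough. Here is a minimal sketch using the official openai package - the prompt is just an illustration, and you should confirm the "gpt-4.1" identifier against OpenAI's current model list:

```python
# Minimal sketch: calling GPT-4.1 via the OpenAI API.
# Requires `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1",  # identifier from OpenAI's announcement - verify against the current model list
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Generate a simple HTML landing page for a newsletter signup."},
    ],
)

print(response.choices[0].message.content)
```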
Even more new models: o3 and o4-mini for more brain power
But it gets even better: after OpenAI sent us completely into the reasoning labyrinth with model names like o1, o3-mini and o3-mini-high (...whoever was responsible for the naming had more than just a creative flight of fancy...), the next big update was released yesterday: o3, o4-mini and o4-mini-high are now officially available. Not to be confused with the 4o model, of course... 🤯
o3 is OpenAI's most powerful reasoning model to date. It excels in programming, math, science and visual perception. Example: Analyzing complex financial charts in a PDF presentation
o4-mini is the smaller, more efficient model in the series, but it performs impressively, especially on math, coding and visual tasks. Example: Batch checking of Python code snippets
Both models can now include images in their thought processes. For example, they can analyze sketches or whiteboards.
The coolest thing about this release: For the first time, these models can use and combine all tools within ChatGPT independently, including web search, Python, image analysis, file analysis and image generation!
Relevant for developers: In addition, Codex CLI was introduced (GitHub) - a coding agent that runs directly in the terminal and uses the reasoning capabilities.
These new reasoning models are extremely smart and powerful - be sure to try them out! We're excited to see what else OpenAI has up its sleeve.
🔥 New from Google: How to get the most out of AI language models
Even the newest, most capable language models are no good if you don't talk to them properly.
After introducing so many new models recently, it makes sense to talk about prompt engineering again. In other words, the techniques on how to best interact with language models and get the most out of them.
Google has just published a 69-page monster guide on this topic.

We therefore pick out a few basic techniques as well as a few more advanced approaches that everyone should know.
Basic prompting techniques that everyone should know
One-Shot / Few-Shot Prompting: Here, you give the model one or more examples before you set your actual task. This clarifies expectations and significantly improves the results.
Convert these sentences into positive feedback:
Example:
"The presentation was too long." β "The presentation contained many valuable details. For the future, a more compact version could be even more effective."
Now you:
"The report contains many errors."
Role Prompting: Assign a specific role to the AI model to get more creative and targeted answers.
Act like an SEO expert who is conducting a website analysis. What are the first steps you recommend?
Instructions > Restrictions: Positive, precise instructions lead to better results than negative restrictions.
Create a precise, fact-based product description for my organic tea.
Emphasize the taste, the aromas and the drinking experience.
Mention the actual ingredients: Organic green tea, ginger and lemongrass.
VS.
Create a product description for my organic tea.
Do not use exaggerated claims.
Do not mention that it cures diseases.
Write no more than 150 words.
Advanced prompting techniques for extra power
Chain-of-Thought (CoT): This technique takes the model through a step-by-step thought process - particularly useful for complex problems!
Develop a strategy to increase the conversion rate of our email newsletter.
Think step by step:
1. First analyze the typical weak points in email campaigns
2. Think about which metrics we need to improve
3. Develop concrete A/B test ideas for subject lines
Step-back prompting: First ask a more general question to activate broader knowledge - and then get specific.
What are the most common challenges when launching a new product? And how could these be specifically addressed in a new dietary supplement for stressed professionals?
Tree-of-Thoughts (ToT): Here, the model explores several paths of thought before arriving at a solution. Ideal for creative tasks!
Create three different ideas for a viral TikTok campaign for new sportswear.
For every idea:
- What is the creative hook?
- Which emotion does it appeal to?
- Which target group fits?
- What would an example video look like?
In the end: Which idea seems the most promising, and why?
These were just a few excerpts of possible techniques that Google lists.
As we always say: Prompts are the crucial element for your success with AI in your daily work!
The differences in response quality are enormous if you use just a few simple techniques. And a bonus tip:
Use reasoning LLMs (like ChatGPT o3, Claude 3.7 Thinking) for complex tasks that require intermediate steps and reasoning - for example, math problems or multistep logic puzzles.
Use regular models (such as ChatGPT 4o) for simple knowledge questions, translations or summaries.
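If you script this yourself, the tip boils down to a simple routing decision. A naive Python sketch - the model identifiers ("o3", "gpt-4o") and the keyword heuristic are our own illustrative assumptions, not an official recipe:

```python
# Rough sketch of the bonus tip: route tasks that need intermediate reasoning
# to a reasoning model, everything else to a regular model.
# The keyword heuristic is deliberately naive and only illustrates the idea.
from openai import OpenAI

client = OpenAI()

REASONING_MODEL = "o3"    # math problems, multi-step logic, planning
DEFAULT_MODEL = "gpt-4o"  # knowledge questions, translations, summaries

def ask(task: str) -> str:
    needs_reasoning = any(
        keyword in task.lower()
        for keyword in ("calculate", "prove", "step by step", "plan a", "puzzle")
    )
    model = REASONING_MODEL if needs_reasoning else DEFAULT_MODEL
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": task}],
    )
    return response.choices[0].message.content

print(ask("Summarize the key points of our last team meeting in three bullets."))
```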
🔥 Great features in the new KLING AI video update
Last but not least, a quick update on AI video!
KLING AI has entered phase 2.0 and has just introduced new video and image models. KLING AI has been one of the top video models for a while now, and the new KLING 2.0 Master Model also looks very promising.

As you would expect, the new models can generally do more and do it better:
KLING 2.0 Master now processes prompts with sequential actions and expressions. Movements are smoother and more natural.
KOLORS 2.0 generates images in over 60 different styles and adheres precisely to the elements, colors and positioning of the motifs.
The best new KLING feature: The multimodal editor
But our favorite feature is the following: the KLING 1.6 model gets a multimodal editor!
This means that you can now combine videos and images, add images to existing videos and adapt them, delete elements from videos and much more.
The whole thing is super simple. Let's take an example:
We have a video of a woman opening an empty gift box.
We now want to add our Nauti stuffed animal to this video as the contents of the box.
We simply upload both in the editor and describe exactly that in the prompt.
60 seconds later: Nauti is in the box and the woman is happy!
Pretty cool, isn't it?

We made it! But no need to be sad. The AInauts will be back soon, with new stuff for you.
Reto & Fabian from the AInauts
P.S.: Follow us on social media - that motivates us to keep going!
X, LinkedIn, Facebook, Insta, YouTube, TikTok
Your feedback is essential for us. We read EVERY comment and every piece of feedback - just reply to this email. Tell us what was (not) good and what is interesting for YOU.
Please rate this issue: Your feedback is our rocket fuel - to the moon and beyond!