👨‍🚀 JSON-Prompting: Do you really get better results with AI?
PLUS: Create AI avatars with just one image
Hello AInauts,
Welcome to the latest issue of your favorite newsletter!
Today we have another colorful mix of practical news and new tools for you. From hacks to language models to AI images, we've got you covered!
This is what we have in store for you:
🎯 JSON prompting: Does it really lead to better results?
🇨🇳 Another new open-source monster model from China
🎯 How to create AI avatars & twins with just one picture
Here we go!
🎯 JSON prompting: Does it really lead to better results?
Ok, we have to talk about a topic that is currently making quite a stir: JSON prompting!


It is a prompt technique that supposedly produces massively better results with ChatGPT & Co. than conventional methods.
We've been looking into it more closely over the last few days and wanted to give you our take on it today.
What is JSON prompting anyway?
First the basics: JSON (JavaScript Object Notation) is a structured data format with fields and values.
Like a digital profile. It looks like this: {"name": "John", "age": 25}.
If you already automate with AI and work with APIs or webhooks etc., you know the format.
The idea of JSON prompting is to format your prompts as JSON, as it supposedly leads to much better responses and results.
Here's an example:
Instead of writing …
Please analyze the following customer review: Our new CRM system is great, but the setup was complicated. Support helped well.
… you could create a JSON prompt like this:
{
  "task": "Analyze customer text",
  "text": "Our new CRM system is great, but setting it up was complicated. Support helped well.",
  "analyze": {
    "sentiment": "rate positive/neutral/negative",
    "main_problems": "identify all problems",
    "recommended_actions": "concrete next steps"
  }
}
The main difference is that the JSON format forces a structure for your prompt, which can be an advantage for the AI - especially when it comes to responses. But more on that in a moment.
Is it really better?
The first use case we are interested in is everyday prompting and chatting with the LLMs.
Before you get started and test it yourself, let's take a look at a study by Microsoft, which came out at the end of 2024.
It can be summarized as follows:
GPT-3.5: A whopping +42% accuracy with JSON for multiple-choice questions
GPT-4: Only minimal differences between the formats
Conclusion: The stronger and newer the model, the less format-sensitive
Our take: Yes, the models were trained on tons of JSON. But also on plain text and Markdown.
Yes, the models like structured input. For everyday use, however, Markdown formatting or XML-style tags (<tag> … </tag>) are perfectly adequate.
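To make that concrete, here is the customer-review prompt from above rewritten with plain Markdown and XML-style tags instead of JSON. This is just our own sketch; the tag and field names are freely chosen:
Analyze the following customer review.

<review>
Our new CRM system is great, but setting it up was complicated. Support helped well.
</review>

Return:
- sentiment: positive/neutral/negative
- main_problems: identify all problems
- recommended_actions: concrete next steps
You get the same structure for the model to latch onto, without the JSON overhead.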
❌ Not useful for: creative texts, consulting, storytelling, open conversations.
When should you definitely use JSON prompting?
JSON prompting also has a few advantages, and there are some use cases where it can be very useful.
It is a very structured and direct way of prompting. The LLMs are put into a "code" mindset and respond accordingly.
Here are a few advantages:
✅ Structured data extraction: Extract/categorize information from texts cleanly
✅ Context provision: Nesting allows for well-structured context
✅ Complex multi-part tasks: Prevents the model from forgetting partial aspects
✅ Precise control: Less babble, only the desired information
✅ Tool integration: Use results directly in apps/APIs
The last point in particular is probably the crucial one. Anyone who uses AI in automation should be familiar with JSON prompting!
It generates predictable, structured output. And especially if you connect several tools and systems with each other, you often need predictable, consistent values so that workflows in Zapier, n8n, Make etc. run more stably.

The structure of your prompts can have a major impact on the results. You also have the option of forcing structured outputs for some models.
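If you call the models via API anyway, forcing JSON output is only a few lines. Here is a minimal sketch using OpenAI's JSON mode with the official Python SDK; the model name and the field names are placeholders, so check the current docs of whichever provider you use:
# Minimal sketch: force a JSON answer via OpenAI's JSON mode.
# Assumes the official openai Python SDK and an OPENAI_API_KEY environment
# variable; model and field names are placeholders, not a recommendation.
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder, use whatever model you prefer
    response_format={"type": "json_object"},  # the reply must be valid JSON
    messages=[
        {"role": "system", "content": "Answer only with a JSON object containing "
                                      "the keys 'sentiment', 'main_problems' and "
                                      "'recommended_actions'."},
        {"role": "user", "content": "Our new CRM system is great, but setting it "
                                    "up was complicated. Support helped well."},
    ],
)

data = json.loads(response.choices[0].message.content)
print(data["sentiment"], data["recommended_actions"])
Newer OpenAI models also accept a strict JSON schema via the same response_format parameter if you need guaranteed field names; most other providers offer something comparable.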
Bottom line: JSON prompting is a tool for structure, not for quality. If you want to process AI outputs further or need complex analyses, it is very practical. For normal chat situations? Complete overkill.
🇨🇳 Another new open-source monster model from China
It feels like we write about a new language model from China every week… And once again, there is a new open-source model that does fantastically well!
What the Meta Llama models could have been is now all coming from China.
After Qwen, Kimi, DeepSeek and MiniMax, now comes Z.ai (awesome domain, btw) with a new flagship model, the GLM-4.5. In a benchmark comparison, it is only just behind OpenAI o3 and Grok 4!
The open-source models are now so good that you can actually replace the expensive flagships with them.

Why is this important?
We find this particularly exciting for 3 reasons.
1) Open source is always better than closed source
This is how we can all reduce dependencies on closed models.
2) Increases the pressure on OpenAI & Co.
The progress in China keeps the pressure on the big players high and ensures that we get newer and even better models faster and faster. We are very much looking forward to GPT-5, almost certainly in the next few weeks. 😉
3) You can actually get by with open-source models alone
Which above all means it's almost free of charge. If you want to replace expensive models with cheaper open-source alternatives, here are our suggestions:
Coding: Sonnet 4 → Qwen3 Coder
Texts: GPT-4.5 → Kimi K2
Reasoning: o3 → GLM-4.5
Multimodal: GPT-4o → Mistral Small 3.2
As always, you can find all models to try out on OpenRouter.
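And if you'd rather test one of these models via API than in a chat window, OpenRouter exposes an OpenAI-compatible endpoint. A minimal sketch in Python could look like this; note that the model ID "z-ai/glm-4.5" is our assumption based on OpenRouter's naming scheme, so double-check it in their model list:
# Minimal sketch: call an open-source model through OpenRouter's
# OpenAI-compatible API. Needs the openai SDK and an OpenRouter API key;
# the model ID is an assumption, verify it on the OpenRouter model list.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="z-ai/glm-4.5",  # assumed slug for GLM-4.5
    messages=[{"role": "user", "content": "Explain JSON prompting in two sentences."}],
)
print(response.choices[0].message.content)
Swapping models is then just a matter of changing the model string.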

🎯 How to create AI avatars & twins with just one image (try it for free)
Finally, a short cool update for all friends of AI images. Ideogram has introduced a new model: Ideogram Character
Introducing Ideogram Character -- the first character consistency model that works with just one reference image. Now available to all users for free!
Create consistent character images in any style, expression, scene, and lighting. Who wants video next? 🥺
- Ideogram (@ideogram_ai)
6:00 PM • Jul 29, 2025
With this new model, you can create consistent AI images with the same person (character). These can be realistic images (hello, AI influencers!), but also comics, paintings or whatever your heart desires.
The exciting thing is that you only need a single picture of yourself or the person you want to portray.
If you've been with us for a while, you may remember the days when we had to train LoRAs (with at least 15 reference images) or use face swapping techniques.
With stronger models, this is now becoming increasingly easier and better.
This is how it works:
One of Ideogram's strengths is its usability. It's really super easy to use!
You create a free account here.
Then take a picture with your camera or upload one.
If you wish, you can choose from countless predefined templates or simply write your own prompt.

It's really very simple. You can find more cool use cases for the feature on the Ideogram blog. For example, how to get your character into existing images…
Our conclusion after the first tests
As you can see above, we have played around a bit and definitely think the new character model is very cool and easy to use.
In our tests, the model struggled a bit to render the character correctly, especially with comic styles and the like (as you can clearly see above). With realistic pictures, however, we came very close to the original.
If you want to create similar pictures of yourself or other people quickly and easily, Ideogram is definitely a good choice.
However, if you want even better quality, your own LoRA is probably still the better choice. You can now train one in 10-15 minutes for 2 bucks.
We made it! But no need to be sad. The AInauts will be back soon, with new stuff for you.
Reto & Fabian from the AInauts
P.S.: Follow us on social media - it motivates us to keep going 🙏!
X, LinkedIn, Facebook, Insta, YouTube, TikTok
Your feedback is essential for us. We read EVERY comment and piece of feedback - just reply to this email. Tell us what was (not) good and what is interesting for YOU.
🚀 Please rate this issue: Your feedback is our rocket fuel - to the moon and beyond!