🔥 Weekly AI news: Did you miss it?!
👨‍🚀 The most important AI updates at a glance
AI-HOI, AInauts!
Maybe you didn't catch all the news, tools, and hacks about AI last week, or maybe you've only recently joined us. Either way, here's our recap with all the headlines from the newsletter - just one click away!
Click the links to jump right to the article - or read our picks below.
⭐ Selection of the top posts of the last week ⭐
🔥 Is OpenAI going bankrupt soon? The industry's problem
Imagine you wake up one morning and your favorite AI assistant is gone. Why? Because OpenAI went bankrupt!
This is a possibility, at least if you believe the latest report from The Information. And this source is usually very well-informed...

Almost a year ago, we took this idea apart and disproved it - but in the meantime, the cards have been reshuffled.
There are some challenges:
High initial costs: Developing models such as GPT-4 (and of course their successors) devours huge sums in research, development, and training.
Ongoing operating costs: The more users, the higher the costs ...and OpenAI has just released GPT-4o mini for the whole world to use - for free!
Talent war: OpenAI's workforce has grown to around 1,500 employees, and personnel costs amount to a whopping 1.5 billion dollars per year (yes, on average one million per employee).
Open source models: There are a huge number of open models that can be used commercially - and in many areas these are comparable to models from OpenAI and peers...
High costs plague the industry
It is reported that OpenAI could face losses of up to 5 billion dollars in 2024. Training new models alone reportedly costs more than 3 billion dollars, and using Microsoft's servers to process queries adds almost 4 billion dollars on top.
Even the millions of users and billions in revenue are not enough to cover these massive operating costs ... OpenAI competitor Anthropic is also struggling with a similar situation and could burn over 2.7 billion dollars this year.
The massive investments in infrastructure have led to NVIDIA becoming one of the most valuable (and important!) companies in the world, practically overnight. Whether this meteoric rise will continue remains to be seen.

Revenue is sluggish ...
According to estimates, OpenAI generates annual revenue of around 2 billion dollars, while investors have valued the start-up at 80 billion dollars. Thanks to strong interest from business customers, revenue is expected to double by 2025 (while the cost of intelligence continues to fall dramatically).
That's still nowhere near enough to cover the costs - but with supporters like Microsoft and Sequoia, it's unlikely that we'll have to do without ChatGPT any time soon.
And despite predicted losses, OpenAI continues to expand. A few days ago, it announced Google challenger SearchGPT (see below).
But Sam is apparently also thinking about reorganizing the convoluted corporate structure and switching to "for-profit" - which would not rule out an eventual IPO.
Our take: The hype is real, but so are the achievements
To be clear: OpenAI will NOT go bankrupt anytime soon. And the current hype around AI is real, but the financial challenges are also undeniable.
Critics are therefore loudly joining in the chorus that AI is not a profitable business idea. That the bubble will soon burst and bring us a Dotcom Crash 2.0. And it's not just OpenAI, the entire industry is feeling the strain - competition is fierce.
No wonder, when you can switch to a different, cheaper, faster, more open model by changing just "one line of code", so to speak! Every provider therefore has to ask itself these questions:
What is my competitive advantage?
Do I have a unique technology?
How can we become profitable?
Do we have a killer app?
What can the customer use reliably (and what is just a demo)?
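The "one line of code" switch is barely an exaggeration: most providers expose OpenAI-compatible chat APIs, so moving to a cheaper or more open model often just means changing the model string. A minimal sketch in Python - the model names here are illustrative assumptions, not recommendations:

```python
# Sketch: switching providers often means changing one configuration line.
# The model names below are illustrative assumptions.

def chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Today: OpenAI's small model ...
payload = chat_request("gpt-4o-mini", "Summarize this article.")

# Tomorrow: an open model behind the same API shape -
# only the model string (the "one line") changes.
payload = chat_request("llama-3.1-70b", "Summarize this article.")
```

Because the request shape stays identical, the switching cost for customers is close to zero - which is exactly why providers feel the competitive pressure.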
Despite all these prophecies of doom, the models are getting better from week to week and are trumping human intelligence in one discipline after another. Surely it will be possible to find a viable business model?
💬 SearchGPT - OpenAI wants to give Google a run for its money
OpenAI has introduced SearchGPT. This new function is designed to provide timely answers from web sources.
The biggest advantage of such a solution: nobody wants links and ads - we want answers to our questions, and that's exactly what SearchGPT provides!
You can join the waitlist here.

SearchGPT combines the power of ChatGPT with real-time information and sources from the web, and presents them in a visually appealing way. Instead of a simple list of links, the results are organized and made understandable.
In one example from OpenAI, the search engine summarizes information about music festivals and then presents short descriptions of the events with references.
In another example, it explains when tomatoes should be planted and describes different tomato varieties. After the results, you can ask follow-up questions or open other relevant links in the sidebar.
OpenAI works with third-party providers and uses content feeds to build its search results. SearchGPT will later be integrated directly into ChatGPT rather than remaining a standalone app, as it is now.
The prototype, which has been declared temporary, will initially only be made available to a small group of users (10,000, according to OpenAI) and publishers.

This is where the deals with publishers (e.g. TIME, NewsCorp, The Atlantic, Axel Springer, ...) come into play in order to prominently highlight their content. In the future, the content will also be further enriched with local information and apparently also product offers.
OpenAI has been crawling the web for some time now and has been working on search for a while. Answers are generated from the company's own OpenAI index in combination with Bing's index. No wonder Anthropic is also crawling the web at full speed ...
Of course, Google's dominance is still overwhelming, but the signal on the stock market was clear: Google lost over $40 billion in value immediately after the announcement!

The most important SearchGPT features at a glance:
Fast search with direct, up-to-date answers
Real-time data from the web with clear sources
Intuitive interaction enables conversation-like follow-up questions
Visual results with images and videos for better understanding
P.S.: In the meantime, we recommend that you also try out https://www.perplexity.ai. The tool is pretty cool, and we're using it more and more.
📚 Translating long texts with AI (... the token problem)
Let's start with what we think is a super exciting topic, but often leads to frustration for many AI users. At the same time, we can shed light on a fundamental topic of language models.
Let's take a practical example: let's imagine you want to have a long text or even an entire book translated by AI.

It would be great if you could simply upload the book as a PDF to ChatGPT, and say "Please translate!", and after a few minutes you'll get a translated PDF.
(Yes, Deepl.com can do this to some extent - but for many use cases, the quality is not consistently good enough.)
Unfortunately, of course, it doesn't work that easily. ChatGPT will start and then you will get an error message quite quickly, or it will simply stop:

You will have a similar experience if you want ChatGPT to write a book. ChatGPT stops writing at some point, and you have to manually hit "Continue", "Continue", "Continue" to make sure that something happens.
The reason for the problem: Token window limitations

If you've been working with language models and AI for a while, you've probably come across the terms "token" or "context window" many times.
Here is a simplified explanation of what these two terms mean:
🪙 Token
Language models don't operate on words; they split text into so-called tokens. A very rough rule of thumb: 4 tokens correspond to about 3 words (i.e. 100 tokens ≈ 75 words).
🪟 Context Window
The context window is the model's working memory for a conversation: the maximum number of tokens it can take into account at once - your prompts, the conversation history, and its own answers combined.
For GPT-4o, for example, this is 128,000 tokens. If your prompts or conversations exceed the context window, the model forgets parts (usually the earliest ones).
The main problem in our translation example, however, is not the context window but the maximum number of output tokens.
As you can see in the image above, GPT-4o can output just under 4,000 tokens per answer. This means that ChatGPT cannot give answers longer than approx. 3,000 words.
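The rule of thumb is easy to turn into a quick estimator. A rough sketch - real tokenizers (such as OpenAI's tiktoken library) give exact counts, this is only the 4-tokens-per-3-words approximation:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 tokens per 3 words."""
    words = len(text.split())
    return round(words * 4 / 3)

def fits_output_limit(text: str, max_output_tokens: int = 4000) -> bool:
    """Check whether a text of this size could fit in one model answer."""
    return estimate_tokens(text) <= max_output_tokens

# A 3,000-word chapter is right at the edge of a ~4,000-token output cap.
print(estimate_tokens("word " * 3000))  # 4000
```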
This is how you can easily solve the problem (manually)
If you want to translate texts that run to many thousands of words, you first have to estimate their length in tokens, split them into chunks that fit the output limit, and translate each chunk individually.
Or you can work with the rule of thumb - GPT-4o can handle 3,000 words in any case. Then simply use a text-splitting tool to break up the long text. There are many options; here is one free tool.
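If you'd rather not rely on an online splitting tool, the same chunking is only a few lines of code. A sketch that splits on word count, using the ~3,000-word safety margin mentioned above (for translation you would ideally split at paragraph or sentence boundaries instead):

```python
def split_into_chunks(text: str, max_words: int = 3000) -> list[str]:
    """Split text into chunks of at most max_words words each.

    Note: this splits on whitespace only - a simplification for the sketch.
    """
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# A 7,500-word text becomes three chunks: 3000 + 3000 + 1500 words.
chunks = split_into_chunks("word " * 7500)
print(len(chunks))  # 3
```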

This is how you can easily solve the problem (automated)
However, if you simply want to translate a PDF or an entire book, the manual process is of course very tedious and annoying.
The cool thing is: you can also set up the whole process completely automated with Zapier!
This allows you to build an automation that starts when you upload the PDF to a Google Drive, splits the text into predefined parts and then sends the individual parts one after the other to the chatbot of your choice for translation - for example Claude.
The results are then saved in a Google Sheet. This is what the flow looks like:

The most important element here is the text splitter in Zapier's own Formatter app. There, you select the function Split Text Into Chunks for AI Prompts, set the number of tokens, and you're done.

Of course, this is a rather advanced form of automation. But you really can have entire books translated automatically in just a few minutes.
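If you prefer code to Zapier, the same flow - split, translate chunk by chunk, stitch the results back together - is a short loop. A hedged sketch: translate_chunk below is a placeholder, and in a real pipeline it would call the API of the chatbot of your choice (e.g. Claude):

```python
def translate_chunk(chunk: str, target_lang: str) -> str:
    """Placeholder: a real pipeline would call an LLM API here."""
    return f"[{target_lang}] {chunk}"

def translate_document(text: str, target_lang: str,
                       max_words: int = 3000) -> str:
    """Split a long text into chunks, translate each, and rejoin them."""
    words = text.split()
    chunks = [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]
    # Translate each chunk separately, then stitch the results together.
    return "\n\n".join(translate_chunk(c, target_lang) for c in chunks)
```

The Zapier flow does exactly this, with Google Drive as the trigger and a Google Sheet as the output store.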
P.S. If you are interested in learning more about automation topics, please reply to this e-mail.
🤖 The (AI) tool everyone needs
We often receive messages with questions about tools. Which tool for presentations, which tool for texts, which tool for images...?
New applications hit the market every day. And we have to clean up our subscriptions once a month because we test so many tools (just for you) and pay for at least 1-2 months. 🙈
We try to live by two rules:
Remain as open as possible about which tool and model we use
Rely on the tools that have proven that they are and will remain leaders
Today we want to talk about a tool that we can no longer live without. And is also affordable for almost everyone.
We're talking about ... Canva!
Before you say "Ugh, old news!", keep reading.
First reason: Canva is constantly releasing new AI features, and integrating many functionalities of other AI tools.
For example, you can have Canva automatically find potentially viral highlights from your long-form videos. This will soon save us another tool, Opus Clip.

Simply take your long form video, open it in Magic Studio and click on Highlights.
Then you can select the best of the suggested highlights, and you have 3-4 potentially viral social media video clips.

And that's just one of Canva's countless AI features.
But now to reason number two:

We like to use Leonardo to quickly and easily train LoRAs (lightweight fine-tunes of image models) on our own data.
But Leonardo can do so much more. If you're interested, here's a short hype reel:
Since Leonardo is now part of Canva, we assume that many of the Leonardo features will soon be available in Canva.
And soon another tool will be replaced.
We could go on and on with this article. No matter what you need:
Images
Designs (of all kinds)
Powerpoints
Videos
Canva has it all. And it iterates incredibly fast, the Magic Studio gets new feature updates almost weekly.
What's more, Canva is free to use, and even Premium is quite affordable at around 10 dollars a month.
Here is a brief overview of Canva's AI Magic Studio
🎯 Friend - your AI companion for lonely days
Finally, a new concept has just been launched and will probably cause a lot of controversy again.

friend.com
Avi Schiffmann has introduced friend. (He has already won the domain game once, snapping up the catchy friend.com.)
The name says it all. Friend is your AI friend that you have hanging around your neck all the time.
Another AI device? Yes, exactly - there have been quite a few of these recently, such as Rewind.ai aka Limitless.ai (we are still waiting for our order), or the Humane Pin, which was completely torn apart by YouTuber MKBHD - rightly so, a really immature concept and product.
But friend comes along with a new approach. It's about friendship, about communication, not about productivity.
The promise is simple: never feel alone again. Watch the video here:
As with all AI hardware, data protection, latency and the like will play a major role here too. The question is whether friend can really keep its promises and how it will position itself against ChatGPT's GPT-4o voice mode.
We will see - and keep you posted!
P.S. You can pre-order friend for just under 90 dollars (shipping to the US and Canada only).
Your AInauts, Fabian & Reto
Your feedback is essential for us. We read EVERY comment and every piece of feedback - just reply to this email. Tell us what was (not) good and what is interesting for YOU.
🚀 Please rate this issue: Your feedback is our rocket fuel - to the moon and beyond!