AI’s big, dirty secret

It’s both magic and a magician’s act.

We are both excited and afraid of what’s to come.
Is AI going to replace you, dear reader? Is it going to replace me?

What do the Humane AI Pin, Rabbit R1, Google Gemini, Devin, Amazon, ChatGPT, Adobe Firefly, and others have in common?

It’s not what you think!

Let’s face it! The world is obsessed with AI.
AI has changed the world to some degree, but it changed our perceptions and expectations even more.


When ChatGPT launched in November 2022, many people couldn’t believe their eyes. You ask a question, and within seconds you get a detailed response that feels too good to be true.

Any sufficiently advanced technology is indistinguishable from magic.

--Arthur C. Clarke

Large Language Models

It was only months later that we fully grasped LLMs (Large Language Models) and the way they work. Their main goal is not to give you the right answer, which they obviously don’t know. Their main purpose is to make you think they are smart.

What this means is that they often hallucinate, making things up just to prove they can answer any question, even one with too little data behind it or one completely diluted by low-quality internet content.
In simple words: they are fed garbage content from the internet, can’t make much sense of it, and so they pretend, just to impress you.

You can read more about this in my other article.

Of course, it is getting better, including at finding its own flaws, but the prediction was that by 2023 we would all be out of our jobs.


2024 came and … AI has gotten worse

There are of course some pretty impressive achievements still.

Midjourney continues to push boundaries, both in image quality and in copyright controversies.
Sora from OpenAI has shown (sometimes) extremely impressive video footage generated from a single prompt.

The AIs that save lives by analyzing X-rays are constantly improving too.
But the general perception of many people has shifted from awe to “meh”.


Have we hit a wall?

With most modern technology, it can be easier to get from 0 to 95% than from 95% to 100%.
Growth seems astronomical for a while, and then physics, imagination, and human factors slow us down to a crawl.

That’s why instead of a 100x better ChatGPT we got Sora: a completely different category of AI, one that was nowhere near 90–95% before.

With major CPU architectures, we used to get “double the power” almost every year; now we’re at a level of small, incremental improvements. Getting the latest generation is not as essential anymore.


Where AI slows down

With AI, there are multiple issues that need to be resolved before the next big boost past that 95% threshold. As OpenAI itself has stated, they need both purpose-built, AI-ready chips and a lot of power.

NVIDIA is shifting its efforts from gaming into AI, but OpenAI would likely prefer to make its own chips in-house.
ChatGPT’s creators have also stated they may need a couple of thermonuclear power plants just to power their future AI models. These things are not easy to build.


We’re in a bubble

We’re currently in a bubble not that different from the dotcom one of the early 2000s. I lived near San Francisco in 2000/2001, and I remember the excitement around everything digital and “web”-related.

People were building crazy, stupid things “on the web,” counting on that web component to bring them money.

Of course, some businesses from that era actually were useful and successful; Google and Amazon came out of the dotcom bubble unscathed.
But most of them failed miserably, running on an empty promise and a useless idea.


AI in every app

Lately, apps have started adding AI to everything. It doesn’t matter whether it’s useful or not; AI has to be in it. Fueled by AI bros' silly tweets, the hype grew to extreme proportions.

If your workout timer app doesn’t have AI, it’s apparently worthless. Sure, it tracks your workouts just the same, but without AI? Forget about it!

By now we have all grown tired of these kinds of tweets, but of course, the hype train keeps finding new ways to engage people with the promise of a great AI future.


Google Gemini

Google wanted to compete with OpenAI, so in late 2023 they released a demo of Gemini, their own AI model. It was so impressive it made people consider quitting tech and starting an organic farm in the middle of nowhere.

Later, it turned out the demo was sped up and specifically scripted. Most tiers of the model wouldn’t be capable of the feats shown in the demo anyway.


Adobe Firefly

I have to say, some of the generative AI in Photoshop truly is impressive. Being able to select a portion of an image and realistically replace it with something else finally looked like a great use case of these tools for creatives.

Adobe also talked a lot about ethics and how they only train their models on open Adobe Stock materials. Well… that turned out to be quite untrue, as Bloomberg revealed they actually trained it in part on Midjourney images.

That same Midjourney constantly gets in trouble for violating copyrights and has the artist community outraged.


Then there’s Devin

When the Devin demo was published, developers felt doomed. Is it finally here? A self-learning AI that can code with multiple inputs and outputs at the same time? An AI that works more like a real developer and can solve complex problems?

We’re doomed, aren’t we?

Well, maybe, but many parts of that demo turned out to have been staged as well. It likely did spark a lot of financial interest in the company, though, right?


AI in real life

Amazon’s AI-powered stores shut down, revealing that it wasn’t really AI making sure your granola bars were safely checked out and paid for. No, it was over 1,000 people in India manually checking camera feeds and adjusting shopping carts in real time.

Some even said that it’s still AI, just with a different meaning of the term: “Actually India” instead of “Artificial Intelligence.”


AI devices

The AI hype has also spawned a plethora of AI-enabled devices.
Let’s talk about two of the most prominent ones.

Humane AI Pin

The AI Pin got a lot of hype. After all, ex-Apple engineers and designers who previously worked on the iPhone surely had a revolution in store for us, right?

Well, sort of. The premise of the device is actually quite great and futuristic. The ability to go screenless, take notes and photos, and get information about everything via voice sounds like a dream.

That dream started slowly falling apart during the official keynote, when some of the answers to questions seemed quite odd.
For example, a question about the amount of protein in a handful of almonds yielded a response that was… greatly exaggerated, if not completely false.

There’s almost no chance that a handful of almonds contained that much protein.
They showcase it on their website with a dragon fruit, too.


Now think about it: if this really worked, it would be a true, amazing revolution in technology, comparable to the first iPhone.

But it doesn’t.

Most reviews are extremely negative, with MKBHD calling it the worst new product he has ever reviewed. Now that’s saying something!
The device is plagued by extreme latency (waiting up to 10–15 seconds for a response), a low-readability projector display with wonky navigation, poor battery life, and many other issues.

It gives us a glimpse into a potential AI future that just isn’t there yet.
And it’s unlikely to arrive very soon.


Rabbit R1

The Rabbit R1 is a small, AI-based device that you carry in your pocket. It does have a touchscreen, but most interactions happen through voice, as on the AI Pin.

It looks smart and capable in the demos, and it had an impressive keynote showing that even without a huge budget you can still become a viral sensation.
Most of that virality, however, comes from the company’s claims that they sold out their initial batches extremely quickly.

It seems thousands of devices sold out almost instantly, and each new batch lasted only hours.


This may of course be true, but it seems a little unlikely, especially given that most of the hype around this device existed BECAUSE of those sold-out statements from the company itself.

Those claims are impossible to verify, and while FOMO can be an extremely potent thing, we should be cautious here as well.
Chances are that when it is released, it will suffer many of the same issues as the AI Pin: lots of hype, but not much actual utility.

What does it all have in common?

The bottom line here is simple. There are some genuinely impressive AI technologies, but even those are plagued by copyright concerns and basic ethics problems.

OpenAI got sued by the New York Times for allegedly taking thousands of articles without permission, with ChatGPT writing responses too close to the Times’ literary style for comfort.


And then there’s all the hype of adding AI into software, devices, or even real-life shopping experiences.
The one thing many of them have in common is that they exaggerate their worth and potential.

This is obviously done to attract more money and investors in the short term, but many of these stories closely remind me of Theranos and how Elizabeth Holmes fooled everyone with a miracle new technology.

Don’t you feel that too?

PS: Elizabeth Holmes of course has nothing to do with AI; she is used here only as a comparison for inflating investor expectations.
