Let's Hype Up the AI Hype Machine

…until it breaks

If I have to read yet another article about how AI will revolutionize the world and is the greatest thing ever, written by your average tech bro, I will lose it.

The pressure to conform to the narrative of “AI is great” has become stronger and stronger. Everything has or is AI now. AI tools to make your life easier, AI content, art, and code… it is everywhere.

But to me, it just feels like we are outsourcing our brains.

A year ago, generative AI was this concept everyone in tech kept talking about. But it was far away.

Today, I say the words GPT or AI at least once a day. And I hate it every single time.

I work in an industry that is hyping up genAI as this great tool that will make our lives incredibly easy. “Let’s automate that” is a sentence I hear from my colleagues a lot these days. I don’t want to automate anything. I feel old because I prefer to take notes manually instead of having an AI bot transcribe and listen in on my meetings.

Right now, it feels like we have two different voices: the “let’s hype it up” voice and the “we are doomed” voice. Let me try to be a voice in between. We are doomed, yes. But we will also need to hype AI up, because the best way out is always through.

Photo by Umberto on Unsplash


OpenAI, the company behind ChatGPT, has faced many PR disasters ever since it launched its product: from articles claiming the movie I, Robot could become a reality, to countless lawsuits for scraping private data off the internet to teach its genAI tool (authors, actors, and artists have all filed suits in recent months).

Many great writers on Substack have chimed in as well. Jasmine’s “We should all be freaking the fuck out” is a great read and my source for the I, Robot article.

Another essay I read was from Clara. While reading, I couldn't help but nod in agreement with her analysis of how training on stolen art affects the quality of AI-generated art. Ultimately, she argues:

GenAI may be “trained” by existing artists, but it is not educated by them.

But my distrust of genAI goes beyond “they are stealing to teach AI”.

It even goes further than the fear that AI might steal my job one day.

Because these concerns and fears are as old as time. Stealing and theft have always been a part of life, unfortunately. Machines replacing humans has happened before, many times, and the economy is still moving up. I guess that’s why the AI hype train is still rolling.

No, when I look at the incredibly fast rise of genAI it is simply bewilderment and confusion that I feel.

Is no one worried or concerned about how our world will function when we cannot distinguish between human and AI content (of any kind)?

  • What will the news look like? Will we trust the written headline? Or the news anchor? Or could they be AI-generated too?

  • What happens to museums and art?

  • What happens to history? How will it be told? Who will tell it? A human, or AI?



Let me paint you a picture (a dystopian one, so settle in)

You are waking up to start your day. You make yourself a nice cup of coffee and sit down to enjoy a quiet morning. As always, you take out your phone to read the daily news. You open the webpage of your country’s biggest and most influential newspaper, and the main headline is outrageous: “Retirement age to increase yet again, purple1 politics are failing younger generations”. You, as part of the younger generation, are upset. Retirement is far away, but you’d still like to retire at some point. You read the article: it has a byline, it is well-written, and it makes great arguments. It even links to further articles and includes numbers and data to support the arguments made. You also trust the publication. But all good, elections are coming up, and you decide, in that moment, that you won’t vote purple any longer.

However, what you don’t know is that the newspaper had to lay off most of its staff. Shareholders are interested in making more profit, but employing good journalists and editors costs money. GenAI is capable of writing more articles per day than any human. It will save the shareholders a lot of money and increase profit. Perfect. While some staff remain, they cannot keep up with the number of articles being written by AI; there is no time to check them all. So they are simply published. And even when they do check an article, they only verify that the sources AI has cited are reputable.

But here is the thing: the same layoffs and the same move towards AI-generated content have happened everywhere, from data companies and academic journals to other newspapers. And nowhere are there enough humans to check AI’s work.

And the article that informed your decision to no longer vote purple is not true. The academic study it relies on was AI-generated and went unchecked; the data cited in the news article was aggregated by AI, a calculation went wrong, and that went unchecked too.

You don’t know any of this; you don’t even suspect it. Because you trust the newspaper. Articles have said, over and over, that AI is safe, lawsuits have been dismissed, and experts say that it only puts out what is being put in.

So the basis of genAI is human-generated content, right?

No. Because AI ran out of content a while ago, and the content it is now learning from was generated by itself.

So the entire story that formed your decision is not true. It is a lie, fashioned as the truth.

A story like this can happen in any scenario: doctors are treating patients based on a recently published medical study. However, the data the study is based on was calculated by AI. Can we trust it?

This is all very doom and gloom. But it can happen in “simpler” scenarios too. Let’s imagine that you work in the purchasing department of a big industrial company. You are watching a high-end product review video. You purchase the product, and it arrives, but it doesn’t do what the review claimed. Instead of decreasing maintenance and downtime, your machine breaks down constantly and is causing nothing but trouble. Your production numbers are down, and your boss blames it all on you. Your job might be on the line. If your boss gives you a second chance, what would you do differently next time?

I can tell you: you will meet a company’s sales rep in person, on-site, to see the product in action.



And that is exactly where I think this entire AI debacle will lead us: an increase in human-to-human interactions.

If we cannot trust that a video showing humans reviewing a product contains actual humans, or that a study includes actual data, then talking to an expert in person is the only solution.

Because at that point, we cannot really trust an email that is sent to us either. How would we know that the email was written by the actual person it claims to be from? After all, the news article you read about those politicians making your retirement obsolete had a byline, the name of the editor who supposedly checked it and hit publish.

How about phone or video calls, you might think? Well, we now have GPT-4o, a generative AI that understands your emotions and reacts with laughter and feelings.



Let’s make a fuss!

But I still think we should hype up AI.

Let’s make it the fastest-growing technology ever.

Why? Because people dislike sudden change. Several studies show that slow changes aren’t registered as strongly by humans, because we gradually adapt to the small differences. Sudden changes, however, are registered much more strongly.2

And once people register the sudden change in their day-to-day lives caused by AI, they will make a fuss.

So yes, I do believe we are doomed. For a while, at least. It will be bad. We might crash. As Clara put it in her essay:

We don’t have to accept exploitation for the sake of what we are told is innovation. In the long term, the planned obsolescence of human creativity will not work, but in the meantime it will make us meaner, stupider, and more isolated. Let’s not.

I do believe that there will be an after. A post-AI world. Or at least a world, with heavily regulated AI technology.

While I, personally, would like to stop AI right now, I don’t have that power. And with all the hype around it, I sadly have to surrender to the fact that AI isn’t going anywhere.

What I do worry about, however, is the news. AI is already interfering with our headlines. We are already there. But journalism, though an innately human-to-human profession, cannot be delivered to the world through personal meetings. And that is the biggest problem: even the shortest interruption of good journalism is too long.

So what can we do? Well, if you want the news to survive, journalists need to be paid to do their work. Subscribe to your newspaper of choice. If you want your big national newspaper to keep running with as little AI as possible, subscribe!



While researching for this essay, I found this article in The Economist: “AI could make it less necessary to learn foreign languages”. My thoughts on this don’t justify an entire essay, so I wanted to add them here:

One of the main problems I had while writing this essay was an actual translation problem. As a bilingual, I form ideas in either language. Sometimes a sentence consists of both German and English words, creating a beautiful mess that only a few people can understand.

I had trouble putting my feelings about the incredibly fast rise of genAI into words, because I lacked the English word for that exact feeling (turns out, there isn’t one).

When trying to translate this into English, I ran into a problem. The German word “Unverständnis” is translated as “lack of knowledge”, according to several dictionaries, and AI.

But this word means so much more. It describes a feeling of: “I fully comprehend the situation, but I can only shake my head because I cannot understand why people would do this.” The only reason I know that it describes this exact feeling is that German is my native language and I have lived it for over 30 years. The only reason I was able to translate it as “bewilderment and confusion” is that I’ve lived the English language for the past 15 years and know that this mix of two words is the closest translation that fits.

But AI doesn’t have that history, or this life full of experiences.

So how exactly does AI make it obsolete to learn a language? While I appreciate the democratization of language, which will make it easier for people to communicate, I believe that language is something a computer can never replace. A computer can never replace the culture surrounding a language, which includes the feelings and flaws that are part of being human.

XOXO
Annika
