Don’t roll your eyes — AI isn’t just another doomed tech fad

Stop me if you’ve heard this one before: “This new technology will change everything!”

It’s a phrase regurgitated endlessly by analysts and tech executives, with the buzzword of the moment plugged in. And in 2023, that buzzword is AI. ChatGPT has taken the world by storm, Microsoft redesigned its Edge browser around an AI chatbot, and Google is rushing to integrate its AI model deeply into search.

I don’t blame you if you think AI is just another fad. I understand the skepticism (and frankly, the cynicism) around claiming any technology is some revolution when so many aren’t. But where augmented reality, the metaverse, and NFTs have faded into relative obscurity, AI isn’t going anywhere — for better and worse.

This isn’t new

Let’s be clear here: AI impacting everyday life isn’t new; tech companies are just finally bragging about it. It has been powering things you use behind the scenes for years.

For instance, anyone who’s interacted with Google search (read: everyone) has experienced a dozen or more AI models at play with only a single query. In 2020, Google introduced an update that leveraged AI to correct spelling, identify critical passages in articles, and generate highlights from YouTube videos.

It’s not just Google, either. Netflix and Amazon use AI to generate watching and shopping recommendations. Dozens of AI support chat programs power customer service from Target to your regional internet provider. Navigation programs like Google Maps use AI to identify roadblocks, speed traps, and traffic congestion.

Those are just a few high-level examples. Most things that could previously be done with a static algorithm — if ‘this,’ then ‘that’ — can be done now with AI, and almost always with better results. AI is even designing the chips that power most electronics today (and doing a better job than human designers).

Companies like Google and Microsoft are simply pulling back the curtain on the AI that’s been powering their services for several years. That’s the critical difference between AI and the endless barrage of tech fads we see every year.

Better over time

Microsoft's redesigned Bing search engine.
Image used with permission by copyright holder

AI’s staying power hinges on the fact that we’re all already using it, but there’s another important element here. AI doesn’t require an investment from you. It absolutely requires a ton of money and power, but that burden rests on the dozens of companies caught up in the AI arms race, not on the end user.

It’s a fundamental difference. Metaverse hype tells you that you need to buy an expensive headset like the Meta Quest Pro to participate, and NFTs want you to cough up cold cash for code. AI just asks whether you want the tasks you’re already performing to be easier and more effective. That’s a hell of a lot different.

AI doesn’t have the growing pains of those emerging (soon-to-be-dead) technologies, either. It has problems of its own, which I’ll dig into next, but the basics of generative AI have already been refined to the point that it’s ready for primetime. You don’t have to hassle with expensive, half-baked tech that doesn’t have many practical applications.

It also holds a promise. AI models like the ones now powering search engines and web browsers use reinforcement learning. They’ll get things wrong, but every one of those missteps is fed back into a feedback loop that improves the AI as time goes on. Again, I understand the skepticism around believing that AI will magically get better, but I trust that logic much more than I trust a tech CEO telling me a buzzword is going to change the world.

A warning sign

Don’t get it twisted; this is not a resounding endorsement of AI. For as many positives as it can bring, AI also brings some sobering realities.

First and most obviously: AI is wrong a lot of the time. Google’s first demo of its Bard AI showed an answer that was disproven by the first search result. Microsoft’s ChatGPT-powered Bing has also shown that complex, technical questions often throw the AI off, resulting in a copy-paste job from whatever website ranks first in the search results.

That seems tame enough, but a constantly learning machine can perpetuate problems we already have online, and even learn to treat those problems as though they aren’t problems at all. For instance, graphics card and processor brand AMD recently announced in an earnings call that it was “undershipping” chips, which led many outlets to initially report that the company was price fixing. That isn’t the case. The term simply refers to AMD shipping fewer chips to retailers, a sign that demand is lower. Will an AI understand that context? Or will it run with the same misunderstanding that normally trusted sources are already erroneously repeating?

It’s not hard to see a self-reinforcing loop of misinformation forming around these complex topics, nor how these AIs can learn to reinforce negative stereotypes. Research from Johns Hopkins shows the racist and sexist bias often present in AI models; as one study reads: “Stereotypes, bias, and discrimination have been extensively documented in machine learning methods.”

Safeguards are in place to protect against this type of bias, but you can still skirt these guardrails and reveal what the AI believes underneath. I won’t link to the examples to avoid perpetuating these stereotypes, but Steven Piantadosi, a professor and researcher of cognitive computer science at UC Berkeley, revealed half a dozen inputs that would produce racist, sexist responses within ChatGPT just a couple of months ago, and none of them were particularly hard to come up with.

It’s true that AI can be prodded into submission on these fronts, but it hasn’t been yet. Meanwhile, Google and Microsoft are caught up in an arms race to debut their rival AIs first, all carrying the same underpinnings that have been present in AI models for years. Even with protections in place, it’s a matter of when, not if, these models will deteriorate into the same rotten core that we’ve seen in AIs since their inception.

I’m not saying this bias is intentional, and I’m confident Microsoft and Google are working to remove as much of it as possible. But the momentum behind AI right now pushes these concerns into the background and ignores the implications they could have. After all, the AI revolution is upon us, and it won’t quickly fade into obscurity like another tech fad. My only hope is that the never-ending need for competition isn’t enough to uproot the necessity for responsibility.

Jacob Roach
Lead Reporter, PC Hardware