AI: A solution in search of a problem

DC Newman

In quite a few industries, there has been a huge amount of hand-wringing over AI of late. Personally, I’ve been mulling over some thoughts about AI for a while now, and I kept talking myself out of posting them, because who really cares what I think about AI? Then I realized that part of my manifesto for this year is “Just do the thing.” So here goes nothing.

Some folks are convinced that AI is coming for all of our jobs. And while I can understand the concerns, I don’t think they are as real as they are being presented. I think we are all being distracted into focusing on the wrong concerns about AI. Instead of worrying about it taking our jobs, let’s worry about the AI data centers sucking up all of the electricity and water we have. Or we can worry about the large-scale copyright issues. Maybe we should worry about the privacy issues. I know, let’s figure out if there is actually a real benefit to all of this AI stuff that’s being shoved at us.

Overall though, I think that AI is a solution in search of a problem. I know, that’s a bold statement. So let’s address some specifics about the state of AI in today’s world.

There is no killer use case for AI

There is no “killer app” that everyone feels a compelling NEED to have in their life that only AI provides. If you look closely at all of the discussions about AI from the folks “in the business,” they are usually full of words like “will” and “eventually” and “someday.” But right now, today, there is no killer app that we all can’t live without. If you want proof, just look at all of the places where companies are trying to shoehorn “AI” in. If it were so amazing and revolutionary, why is everyone trying to stuff it into everything? I can tell you why.

AI companies are hoping that they will accidentally stumble onto that killer use case, so right now it’s getting stuffed into everything. (Side note: Most AI features in random everyday items require some sort of account, and an app, to really use, so your data and usage are being harvested for future training, or to be sold to data brokers at a profit.) Yay!

The corollary of promotion.

Also, take a look at the sheer volume of gushing articles and press from AI companies about how transformative AI “will” be. If there were a killer use case, they would not need to spend so much time talking about how amazing it “will” be or “could” be. We would all SEE the amazing thing that it did better than anything else, and we would all want it for that specific use case. All of the current talk about AI is speculative and vague. If AI were truly groundbreaking and world-changing, shouldn’t there be more specifics being talked about right now? Shouldn’t it be more obvious to us common folks, the consumers, why we should want all of this “AI” goodness in our lives?

Artificial Intelligence (AI) is not “Intelligent”.

Large Language Models, on which all of the popular AIs have been “trained,” are essentially massive databases of information. (Which is why the word “training” is a misnomer, in my opinion.) These models are “trained” by scraping the internet and ingesting as much data as they can find into their database. Once it’s in the database, they run their magic AI algorithm on it. (It’s “proprietary,” so we can’t tell you how it works or what it does.) Then that newly “processed” data sits there until you type in a prompt. Then the system scans the massive table of data that it has and looks for similar strings of text. It’s not thinking about what you asked. It can’t reason, and it can’t make an informed decision.

It’s basically pattern matching on the text you entered. Then it scans the data related to the matching prompts it found, and it cobbles together an “answer-shaped” reply to serve to you. (Which is why hallucinations happen. The system is pattern matching, not actually thinking.) In other words: not actually “intelligent.”
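To make that concrete, here’s a deliberately crude toy in Python. This is my own illustration of the pattern-matching idea, with a made-up corpus and scoring function; real LLMs are statistical next-word predictors over learned weights, not lookup tables. But the behavior I’m describing is the same shape: match the surface text, then serve up whatever is attached to the closest match.

    # A toy "answer-shaped reply" machine, for illustration only.
    # The corpus and scoring are invented for this sketch; real LLMs
    # do not work this way internally, but the failure mode looks the same.

    SCRAPED_CORPUS = {
        "what is the capital of france": "The capital of France is Paris.",
        "how do i boil an egg": "Boil the egg for 7 to 9 minutes.",
        "what is the capital of australia": "The capital of Australia is Canberra.",
    }

    def word_overlap(a: str, b: str) -> int:
        """Score two strings by how many words they share."""
        return len(set(a.lower().split()) & set(b.lower().split()))

    def answer_shaped_reply(prompt: str) -> str:
        """Return the stored reply whose prompt best matches the input.
        Note there is no 'I don't know' branch: even a terrible match
        still produces a confident, fluent-sounding reply."""
        best = max(SCRAPED_CORPUS, key=lambda k: word_overlap(prompt, k))
        return SCRAPED_CORPUS[best]

    print(answer_shaped_reply("what is the capital of france"))
    # -> "The capital of France is Paris."  (looks like an answer, and is one)

    print(answer_shaped_reply("what is the capital of mars"))
    # -> "The capital of France is Paris."  (looks like an answer, and is not)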

Hallucinations are a feature, not a bug.

There are large and justified concerns about the tendency of AI to make things up, or “hallucinate.” This is not going away, no matter how the AI folks try to spin it. It’s a core feature of how these models work, so hallucinations are never going away. The AI does not actually understand what you ask. It pattern matches your prompt and then glues together a bunch of information that was related to prompts that looked like yours. And then it provides you an answer-shaped reply. It may LOOK like an answer. Some of them may even SEEM like actual answers. But they really are not. In fact, a recent study showed that AI returned incorrect data 60% of the time.
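The toy matcher from the last section shows the same thing in miniature: ask it about the capital of Mars and it cheerfully tells you about Paris, because always producing an answer-shaped reply is baked into the design.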

More concerning is the tendency of newer models to deliberately provide a wrong answer to prompt more engagement. (YOU spending more time interacting with the AI “agents” means more data that they can use for “training.”)

AI cannot think, or reason, or infer things like humans can.

Large Language Models are effectively big honking databases full of data scraped from the web. (And we all know by now that anyone can publish just about anything on the internet, so those LLMs are ingesting some percentage of factually incorrect data.) And while proponents of AI will breathlessly tout all that these models “will” do in the future, the reality is that they are big fancy search engines. (And how many of the folks promoting AI have financial ties to the industry and directly benefit from the hype and the wider adoption of AI?) LLMs can’t understand WHY we are asking what we are asking, which is exactly the context that determines what kind of answer is appropriate. AI has no nuance. No humanity. And no filter for appropriateness.

Which means that for every prompt you use to “save time,” you then have to invest your own time and energy into fact-checking the output of that prompt. Doesn’t seem like much of a time saver to me, to be honest.

We really still don’t understand much about how our brains work.

Yet we are supposed to believe that we are a year or so away from an “AGI” (Artificial General Intelligence) that can not only think and reason on its own, but be as capable as humans across almost all areas of intelligence? Please. You can ask a current AI the same question phrased two different ways and get two vastly different answers.
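You can see that brittleness even in my toy matcher from earlier (run in the same session as that sketch, and again: my illustration, not a real model). The same question, phrased two ways, pulls two different stored replies, because the matching is over surface text, not meaning:

    # Same question, two phrasings, two different "answers" from the toy
    # matcher above -- it matches words, not meaning.
    print(answer_shaped_reply("what is the capital of australia"))
    # -> "The capital of Australia is Canberra."

    print(answer_shaped_reply("what's australia's capital"))
    # -> "The capital of France is Paris."
    # ("what's" and "australia's" match none of the stored words, so the
    #  France entry wins on a tie.)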

But aren’t companies already going all in on AI?

Yes. But a recent MIT study showed that 95% of AI pilot programs at companies have failed to generate any measurable impact on profits. So there is that.

The Ethics of AI.

Without getting too deep into the weeds, the entire AI industry is built on essentially stealing information from other people to create the training data. Copyright and ethics don’t seem to matter to these companies. They just need massive amounts of data. And in true tech-bro fashion, they have decided that it’s probably cheaper to steal the data and then settle the lawsuits than it is to do it right from the beginning. Which also ties into the fact that…

AI is too expensive to sustain.

Currently, based on public data, there are NO AI companies that are actually profitable. They are all burning more cash than they bring in, and that has been going on year over year since their introduction. (And they are burning a staggering amount of cash every year.) Costs are rising, and even with increases in subscription fees, their core products are still not actually making money. Core products built on training data that, according to the courts, was stolen. And we haven’t even talked about the other expenses involved in AI: physical data centers, specialized AI GPUs, water, and electricity.

Per the MIT Technology Review, “The latest reports show that 4.4% of all the energy in the US now goes toward AI data centers”. And some estimates put water usage for just ChatGPT at around 39 million gallons of water a DAY.
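For a sense of scale, here’s the back-of-the-envelope math on that water estimate. The 39-million-gallon figure is the one cited above; the ~660,000-gallon Olympic pool volume is my assumption for the comparison:

    # Rough scale check on the cited 39M gallons/day estimate.
    GALLONS_PER_DAY = 39_000_000
    OLYMPIC_POOL_GALLONS = 660_000  # standard approximation for an Olympic pool

    pools_per_day = GALLONS_PER_DAY / OLYMPIC_POOL_GALLONS
    print(f"~{pools_per_day:.0f} Olympic pools of water per day")  # ~59
    print(f"~{pools_per_day * 365:.0f} Olympic pools per year")    # ~21568

That’s roughly 59 Olympic swimming pools of water every single day, for one product.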

So am I worried about AI taking all the jobs?

Not really. It’s a pattern-matching regurgitation machine, trained on largely stolen and uncompensated data, that lives in massive data centers straining the electrical grid and the water supply, run by tech companies that currently can’t make a profit on a supposedly “revolutionary” technology.

It’s a bubble waiting to burst.

More specifically it’s a “Plagiarism driven, plausible sounding BS generator.”

And do we all really need more BS in our lives?