We live in the age of big data. Humans have recorded – written and taped and filmed – so much content that we can no longer hope to ever know it all. Between all the books ever written, the measurements recorded, and the analyses uploaded to the internet, even dedicated experts have no hope of interacting with every piece of information in their field. Isn’t that wild? The repository of human knowledge has long surpassed the capabilities of any one person.
But not the capabilities of our machines, which now have enough storage space and processing power to binge every textbook, forum, and comment section ever written, all quicker than you can eat a sandwich. This is the miraculous power of AI: a power that we have harnessed for the betterment of medicine, research, entertainment, and marketing.
Here’s the problem: the current direction popular AI is going in – it’s okay to say it – is bad. There’s a proud billboard on the way into the city that touts an AI product meant to replace entire HR departments. ChatGPT lures in both students and professionals for, uh, extensive online help with their assignments. Meanwhile, in our courthouses and Senate, AI art and photography generators are challenging the legal and philosophical definitions of creativity.
Here’s why that’s no good
Let’s put aside the fact that replacing humans in the humanities is a silly move. Why do we even have the humanities in the first place, if not for humans to do them?
Let’s ignore the potential that replacing human arbitrators, analyzers, decision-makers, and creatives will put millions of people out of work with no relevant fallbacks.
Let’s not consider a future wherein AI generated content dominates online spaces, leaving us with the question of “wait, what do we train the AI on now?” and an information-scraping loop that makes the internet even more hostile to real, accurate information.
Instead, let’s look at the fact that AI is objectively worse at the very flexible decision-making we’re trying to hand over to it. No matter how much storage, processing power, and data we feed these machine learning models or neural networks, they will always be shackled to the limitations of their training data.
Narrow AI – our current, task-specific form of AI – is rigidly trained on past information without the flexibility to apply that training to anything new. It has no understanding of why it’s doing what it’s doing; it is unthinkingly subject to whatever patterns emerge from its ocean of training data. Programmers can carefully aim this workhorse at the target they’re looking for, but its blinders make its path completely inflexible. That’s no good, especially when every database is biased differently.
Is that the type of intelligence we want churning out HR decisions?
Even programs such as ChatGPT, which seem fantastically close to the general intelligence promised to us by science fiction, are dumber than they appear. Did you know that ChatGPT does not know what it is going to say until it has said it? Trained on the tumultuous straits of the internet, ChatGPT is an extremely advanced version of the “what word should come next?” game that our phones love to play based on the texts we’ve sent to our mom. It does not conceive ideas, it cannot hold opinions, and it doesn’t know what the point of the next word is – only that it should probably come next.
Is that the type of intelligence we want writing any sort of meaningful manuscripts?
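To see the “what word should come next?” game in miniature, here’s a toy sketch in Python – emphatically not how ChatGPT actually works under the hood (real systems use huge neural networks trained on vast amounts of text), just the simplest possible version of the idea: a program that never plans a thought, only picks a statistically likely next word.

```python
import random
from collections import Counter, defaultdict

# A tiny corpus standing in for the internet (hypothetical, obviously).
training_text = (
    "the cat sat on the mat . the dog sat on the rug . the cat chased the dog ."
)

# Count which word tends to follow which word (a simple bigram table).
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(start_word, length=8):
    """Produce text one word at a time, each picked only from past patterns."""
    output = [start_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break  # nothing in the training data ever followed this word
        choices, counts = zip(*candidates.items())
        output.append(random.choices(choices, weights=counts)[0])
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

Scale that idea up enormously and you get something that sounds remarkably fluent – while still never knowing where its sentence is headed.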
Even if we imagine a fancy future AI that reaches general intelligence – a human-like cognitive state that can apply its processing power to many different tasks – it would still be bound to old patterns from its training data. And, sure, humans love summaries of their own behavior (listicles, anyone?), but creativity is also the ability to generate new ideas. Until an AI is also feeling, breathing, and sensing as a human does, it will not be able to generate new content with the intent to convey a human perspective.
Is that the type of intelligence we want replacing our artists?
Issues also arise from the “black box problem,” which describes our inability to determine how some AI systems make their decisions. We know there is an algorithm, but not how it works. It’s like a student reaching the correct answer on a math problem without showing their work. A programmer, like the teacher in that situation, cannot know that the correct answer wasn’t a coincidence. The likelihood of coincidence goes down with each correct response, but there is no way to ever be 100% certain. What if, a mere minute after we give our full approval, the AI encounters the first situation in which its algorithm fails? On what scale can we trust a decision-maker that cannot fully explain its decisions?
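To make the teacher-and-student analogy concrete, here’s a deliberately tiny, hypothetical sketch – not any real system – of a model that gets every checked case right because of a coincidence in its training data, and stays a black box right up until that coincidence breaks:

```python
# A hypothetical "black box" that learned a shortcut: every sick patient in
# its tiny training set happened to be over 50, so "age > 50" fits perfectly.

training_data = [
    # (age, has_symptom) -> actually_sick
    ((62, True), True),
    ((55, True), True),
    ((30, False), False),
    ((25, False), False),
]

def black_box_predict(age, has_symptom):
    # The hidden "algorithm": a coincidence that happened to fit the data.
    return age > 50

# Every case we have checked so far comes back correct...
for (age, symptom), truth in training_data:
    assert black_box_predict(age, symptom) == truth

# ...until the coincidence breaks: a 40-year-old with the symptom.
print(black_box_predict(40, True))  # False – confidently, silently wrong
```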
Are our AI dreams shattered?
Maybe, if you dream of handsome, self-aware robots from Star Trek: The Next Generation, or of an online assistant churning out an original dissertation fit to publish.
It comes down to this: AIs are not generative entities. But they are wonderful tools that can augment human potential like never before. Computers compute, processors process, and humans create and judge. For new dreams of AI, think less robots – more cyborgs.
This partnership is already extremely successful where it has been implemented. There are AIs helping doctors find cancer cells before human eyes can detect them. AIs have been trained to help farmers identify crop diseases across their acres. We’re also seeing AI assist in identifying threatening weather patterns. Think of what AI can do helping with other endeavors in healthcare, research, and infrastructure – when it can use its incredible processing power to provide humans with the information they need to make informed decisions.
At the end of the day, if there’s one thing to be emphasized, it’s this:
You are not replaceable
Your humanity is intrinsic. It’s found in your companionship, your passions, and your simple existence. As a human, your potential for growth is astounding and your ability to create networks of support exceeds that of any machine. A person’s individuality, an incredibly valuable thing, comes from a journey that starts before they are even born and continues from every interaction with the world around them.
That individuality should be augmented, not used as fuel for someone else’s gain.
In a time when loneliness and isolation are overwhelming the world, replacing humans in our day-to-day lives is the last thing we should be thinking about. Talking to a coworker, seeing a doctor, reaching out to someone for help… those are all moments of connection with others. Even reading a (can’t believe this has to be specified – human-made) poem or looking at art can make us feel less alone. What does it matter if AI can perform just as well, if it takes away the bare-bones minimum of human connection that some people encounter today? You, a wonderful human, deserve to feel anchored and connected to those around you.
That is why we should be making more of a racket. We shouldn’t have to worry about automation taking over the joys of life, nor the important decisions that take human experience into account. Instead of trying to replace what humans do (and enjoy doing), AI should stand by us and help. Big data or not, preserving the humanity in our lives should be a top priority.
Now go meet a fellow human! You’ve more than earned it.