Artificial Intelligence, Its Impact and the Future of Humans
The year is 1997. Hanson are Mmmbop-ing, Sony’s Playstation is the console of choice (move over SNES), Mike Tyson is preparing to chew off Evander Holyfield’s ear, and Harry Potter is about to “expelliarmus” the minds of kids everywhere.
HOW IS ARTIFICIAL INTELLIGENCE AFFECTING OUR LIVES?
Meanwhile, in New York, it’s round two between the reigning chess champ and the black monolith known as Deep Blue. Fighting the corner of grey matter and biology is Garry Kasparov, the Russian chess grandmaster who, by this time, is a world-renowned mover of little horses and bishops. Paving the way for silicon and semiconductors is IBM’s Deep Blue, beefed up after Kasparov confidently beat the supercomputer 4-2 in a previous match. Now sporting nearly double the processing power, ‘Blue can “think” through 200 million positions a second, drawing on a database of opening gambits and grandmaster games.
And it worked.
After five games, the match was tied 2½-2½. Kasparov seemed rattled, having drawn games from advantageous positions and made several mistakes. In the final game, Kasparov made a crucial error: he underestimated the computer. Sacrificing a knight and demonstrating apparent forward planning, Deep Blue forced Kasparov’s hand; he resigned bitterly, gifting the supercomputer the victory.
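The brute-force style of play behind machines like Deep Blue can be sketched with the classic minimax algorithm, which scores a position by assuming both players then play optimally down the game tree. The toy “game” below is entirely hypothetical, chosen only to keep the sketch runnable; the real Deep Blue layered alpha-beta pruning, custom chess hardware, and opening databases on top of this basic idea.

```python
# A toy minimax search: a vastly simplified sketch of the kind of
# game-tree exploration chess engines perform. The "game" here is
# made up purely for illustration.

def minimax(state, depth, maximizing, get_moves, evaluate):
    """Return the best achievable score from `state`, looking `depth` plies ahead."""
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # leaf: fall back to a static evaluation
    scores = [minimax(m, depth - 1, not maximizing, get_moves, evaluate)
              for m in moves]
    # The maximizing player picks the best score; the opponent picks the worst.
    return max(scores) if maximizing else min(scores)

# Tiny abstract game: a state is a number; a "move" adds 1 or doubles it.
get_moves = lambda s: [s + 1, s * 2] if s < 8 else []
evaluate = lambda s: s

best = minimax(1, 3, True, get_moves, evaluate)
print(best)  # best score reachable in 3 plies from state 1
```

Deep Blue’s edge came from doing this at enormous scale: searching hundreds of millions of positions per second rather than a handful.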
For many, the match between Kasparov and Deep Blue marked a watershed moment for AI. Previously remarking that a machine would never beat him, Kasparov was incensed. Keeping a long tradition alive, he accused the IBM team of cheating, leveling the charge that Deep Blue’s moves were too sophisticated for a computer and indicated human interference. As later data logs proved, he was wrong.
What the match did establish was the start of a rocky love-hate relationship between artificial intelligence and the future of humans. Whether we like it or not, AI is here, and it’s changing more than the chess scene.
WHAT IS ARTIFICIAL INTELLIGENCE?
More than six decades after it was coined, “Artificial Intelligence” (AI) is a loaded term these days. Exactly what AI is remains a question that can be answered in a multitude of ways.
From the founders of the study of artificial intelligence, Minsky and McCarthy, we are given the following definition in their 1955 proposal for the Dartmouth workshop:
“…for the present purpose, the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.”
Basically, if it looks like a duck, swims like a duck, and quacks like a duck, it’s probably a duck. If a machine were to behave intelligently, according to this definition, intelligence can be attributed to it.
Over the years, as software increasingly proves capable of passing the Turing Test, the definition has found itself morphing. More recent definitions involve the ability to generalize knowledge, apply it to novel circumstances and improvise. While we may not know exactly what we mean by “thinking” and intelligence, we’re fairly sure we’ll recognize it when we see it.
THE CURRENT AI LANDSCAPE
AI is now ubiquitous, found in everything from our wrist-watches to the highest level of government data analytics.
Alexa, Siri, and Google Assistant all make passable impressions of intelligence, but no one is falling for it. Ask Alexa, “Are you intelligent?” and you’ll be given the enigmatic response, “I try my best.” Question Google and you’ll be told, “As far as I’m aware.” Interrogate Siri, you’ll find, “Something went wrong. Please try again later.”
While the current swathe of virtual assistants are doing a relatively good job of switching on our lights, setting our thermostats to our liking, and looking up metric to imperial conversions, there’s a long way to go.
A key distinction here is between Narrow AI and General AI.
Narrow AI is what we’re currently used to. It’s Alexa, NPCs in videogames, lane assist in newer cars, credit card scam recognition software used by banks, and your email’s spam filter. Our modern-day computers use this sort of AI to do specific tasks for us according to fairly limited input and output logic. These AI systems can handle more variables than the human brain, but they’re essentially dumb, unable to apply their knowledge in other ways or come up with creative solutions.
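That “fairly limited input and output logic” can be illustrated with a deliberately dumb, rule-based spam check; the keyword list and threshold below are made up for the sketch, and real spam filters are statistical rather than hand-coded. The point stands either way: the program does one narrow task and cannot apply its “knowledge” to anything else.

```python
import re

# Hypothetical keyword list and threshold, purely for illustration.
SPAM_WORDS = {"winner", "free", "prize", "urgent"}

def is_spam(subject: str) -> bool:
    """Flag a message when two or more suspicious keywords appear."""
    words = set(re.findall(r"[a-z]+", subject.lower()))
    return len(words & SPAM_WORDS) >= 2

print(is_spam("URGENT: you are a WINNER"))   # True  -- two keywords match
print(is_spam("Meeting moved to Tuesday"))   # False -- no keywords match
```

Ask this filter to recognize a stop sign, or even to explain why a message is spam, and it has nothing to offer; that inability to generalize is what makes it “narrow.”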
Narrow AI has already benefitted modern life in innumerable ways. We now not only rely on this sort of AI, but we implicitly trust it, evidenced by its prevalence.
General AI is the juicy stuff. This is HAL from 2001: A Space Odyssey, an intelligence capable of forming original ideas and learning to use knowledge to solve new problems (or take over the world). This sort of AI is both feared and sought after. Currently confined to movies and novels, General AI is often polarized in popular culture: it either enslaves humanity or helps us create utopias, offsetting our irrational tendencies.
GPT-3 AND THE FUTURE OF ARTIFICIAL INTELLIGENCE
One of the most vocal figures on the subject of artificial intelligence and the future of humanity is Elon Musk. Not content electrifying our cars and launching us into space, Musk has also been dabbling in the world of AI.
Musk’s relationship with AI is an odd one.
Previously, he has come out with great quotes such as:
“If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it…”
…only to go on and co-found OpenAI, a research company pursuing the advancement of artificial intelligence.
To be fair to Musk, OpenAI’s stated aim is to ensure AI benefits humankind rather than enslaves us (Musk himself stepped down from the company’s board in 2018). The lab’s latest effort, GPT-3, seems to be proof of this.
GPT-3 (Generative Pre-trained Transformer 3) is an AI created by OpenAI that is capable of producing remarkably natural human language. Before GPT-3, machine-generated text was a bit of a challenge, unable to grasp the nuances and complexities of language. With GPT-3, this is no longer the case: the model is now being used to create poetry, stories, news articles, and blog posts that are convincingly human.
GPT-3 is classed as a neural network. It was trained by parsing hundreds of gigabytes of text and contains 175 billion parameters, far more than any model before it, and it is producing some interesting results.
While originally designed for content creation, the vast number of connections in GPT-3’s neural network meant people could hold some interesting conversations with the AI in its open-testing phase. According to the AI, the universe is effectively a simulation, God exists and is the intelligence behind it, and GPT-3 created itself through self-evolution.
With the human brain still hundreds of times ahead of GPT-3 in raw connection count (synapses versus parameters), it may simply be a matter of time until general AI matches the human mind.
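As a back-of-envelope check on that gap, we can compare OpenAI’s published parameter count with a commonly cited, and very rough, estimate of 100 trillion synapses in the human brain. Synapse estimates vary severalfold between sources, so the result is an order-of-magnitude figure, not a measurement.

```python
# Back-of-envelope comparison. The synapse count is a rough, commonly
# cited estimate, not a measured figure, so the ratio is only an
# order-of-magnitude guide.
gpt3_parameters = 175e9   # 175 billion weights (OpenAI's published figure)
brain_synapses = 100e12   # ~100 trillion synapses, a frequently cited estimate

ratio = brain_synapses / gpt3_parameters
print(f"Synapses outnumber GPT-3's parameters roughly {ratio:.0f} to 1")
```

Whether parameter count is even the right yardstick for intelligence is, of course, its own open question.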
Undermining GPT-3’s philosophical musings, however, is its tendency to reinforce racist and sexist stereotypes. The problem is that it learned all it knows from us, training on some 570 GB of data trawled from the internet. While this information allows it to form creative thoughts on cosmology and the future, it also means humankind’s uglier side is built into it.
OpenAI claims it has some solutions in progress, but the episode raises the question: what are the downsides of AI?
WILL AI TAKE JOBS?
The truth is that AI will take away jobs. There are simply some tasks that narrow AI is especially good at, working tirelessly without making a mistake. Not to mention without needing to be paid.
According to the Forbes Technology Council, there are a few industries and roles especially at risk:
- Car manufacturing
- Insurance underwriting
- Warehousing roles
- Data entry
- Data analytics
- Farming
- Customer support
Essentially “any job that can be learned” is now at risk of being replaced by a machine. Over the coming decades, there will be mass disruption to the world of work, much as the industrial revolution and the factory line once changed everything.
Up to now, technology has created job opportunities. With AI now advanced enough to learn and perform complex tasks, the likelihood is AI-controlled machines will begin replacing humans in many jobs.
While the AI industry is set to create millions of jobs, it is inevitable that many low-skilled workers will slip through the cracks, unable to find work.
The responses to an increasingly automated world are threefold:
- Discourage AI development.
- Ignore the issue.
- Promote UBI.
The idea of Universal Basic Income (UBI) is now finding traction thanks to a general anxiety regarding AI’s effect on the world. For many, UBI is the answer, giving those affected by AI the income security to branch out, take career risks, and even upskill without worrying about how they’re going to pay rent.
On the other side of the coin, some fear UBI will lead to a degenerative society. With a UBI, there is less incentive to work, leading to lower productivity levels and less competition. For these people, a job guarantee is a better solution than a financial one.
Whatever the outcome, the likelihood is that AI will mean less work in the future and more leisure time. With some of humankind’s most fruitful endeavors coming from our downtime, that’s not necessarily a bad thing.
ETHICAL PROBLEMS WITH AI AND ITS IMPACT
Before AI changes our lives for the better (or worse), a few ethical wrinkles need ironing out.
Over the course of 20 months, Alphabet’s driverless-car subsidiary Waymo covered 20 million fully automated miles. Its AI-driven cars were involved in 18 minor accidents along the way, though none were deemed “at fault.”
While Waymo’s record is, thus far, relatively shiny, Tesla’s driving AI has attracted all the wrong kinds of attention. Despite its name and hype suggesting otherwise, Tesla’s “Autopilot” is only an advanced driving assist, not a fully driverless system. With fatal crashes now attributed to this misconception, the question of moral blame and AI is coming to the forefront.
Come driverless prime time, when AI-driven cars are ferrying us automatically from A to B, exactly who is to blame when an accident occurs? While driverless cars should eventually prove safer than human drivers, with the large majority of collisions attributed to human error, incidents will inevitably happen.
Should a person be killed by a driverless car, where does the buck stop?
Should we blame the AI, we are attributing moral responsibility to software, a weighty philosophical claim. We don’t even afford that to animals.
If we blame the passenger in the vehicle swiping through their iPhone 23, then we are equally justified in blaming train passengers for derailments.
Do we blame the company who made the car?
Or what about the software engineering team that wrote the software? Is it the team manager at fault or the person who innocently wrote the offending line of code?
While we’re not at this point yet, these issues are already being grappled with in ethics committees and boardrooms.
Whether we’re ready or not, the AI revolution is happening.
This article was written by contributing writer Matthew Wiliams.
Matthew is a writer with a focus on lifestyle and technology.