Flaunt Weekly
Is It Possible For Artificial Intelligence To Extinguish Humanity?

Many people believe that artificial intelligence will be the end of humanity, but scientists say this is not the case.

Most people in the world today use some form of AI-enabled technology in their everyday lives.

They ask Alexa to switch off their smart lights or use Siri to check the weather — these are all examples of AI that many people are unaware of.

Despite the extensive (and mostly safe) application of this technology in practically every aspect of our existence, some people still think that machines will one day wipe mankind out.

Over the years, different writings and films have reinforced this apocalyptic vision.

Even well-known figures such as Stephen Hawking and Elon Musk have spoken out about the technology’s threat to mankind.

In a 2020 interview with the New York Times, Musk predicted that AI would become substantially smarter than humans by 2025 and that things would become “unstable or bizarre.”

Despite Musk’s prognosis, most experts in the area believe mankind has nothing to fear from AI — at least for the time being.

The vast majority of AI is “narrow.”

The worry of AI taking over stems from the possibility that robots could develop awareness and turn against their creators.

For AI to do this, it would need not only human-like intellect, but also the ability to forecast the future and plan ahead.

AI is currently incapable of accomplishing either.

When asked on Metafact whether artificial intelligence poses an existential danger to humanity, experts were skeptical. “The long-sought objective of a ‘general AI’ is not on the horizon,” writes Matthew O’Brien, a robotics engineer at the Georgia Institute of Technology. “We just do not understand how to create a universal adaptable intelligence, and it is uncertain how much further work is required to get there.”

The truth is that robots typically do what they’re designed to do, and we’re still a long way from generating the ASI (artificial superintelligence) required for this “takeover” to be possible.

Most AI technology used by machines today is deemed “narrow” or “weak,” which means it can only apply its understanding to one or a few tasks.

Under the same Metafact thread, George Montanez, a data scientist at Microsoft, stated, “Machine learning and AI systems are a long way from cracking the hard problem of consciousness and being able to form their own objectives contrary to their programming.”

AI technology today is incapable of human-like intellect or foresight.
iStockphoto/Getty Images

AI may help us better understand ourselves.

Some academics even go so far as to claim that AI is not only not a threat to humanity, but that it may potentially help us understand ourselves better.

“Today, thanks to AI and robotics, we can simulate hypotheses linked to awareness, emotions, intellect, and ethics in robots and colonies of robots and compare them on a scientific basis,” said Antonio Chella, a professor of robotics at the University of Palermo.

“As a result, we can employ AI and robots to better comprehend ourselves. In conclusion, I believe AI offers a chance to become better humans by better understanding ourselves,” he concluded.

People are concerned that artificial intelligence (AI) will be utilised for overoptimization, weaponization, and ecological collapse. iStockphoto/Getty Images

There are dangers associated with AI.

However, it is apparent that AI (as well as any other technology) may represent a threat to humans.

According to Ben Nye, director of learning sciences at the University of Southern California’s Institute for Creative Technologies (USC-ICT), these hazards include overoptimization, weaponization, and ecological collapse.

“If the AI is purposefully meant to murder or destabilise nations,” he said on Metafact, “accidental or test releases of a weaponized, viral AI might easily be one of the next major Manhattan Project scenarios.”

“We’re already seeing better virus-based assaults by state-sponsored actors,” Nye continued.
