With technology growing at a near-exponential rate, the concepts behind Hollywood blockbusters such as I, Robot and the Terminator series are becoming more tangible by the day. AI companies are racing to be the first to break through in the tech world, and with that breakthrough comes a troubling, uncertain future: the singularity.
The singularity is the hypothetical invention of an artificial superintelligence that would trigger a runaway reaction of technological growth. The resulting intelligence explosion would surpass human intelligence rather quickly and bring immeasurable changes to human civilization.
It’s a domino effect. As computers grow in power, it becomes possible, if not probable, that one could build a machine even more powerful and intelligent than mankind. Such a machine could potentially rewrite its own software, adapting itself to its environment. These recurrent cycles of self-improvement would pave the way for extremely rapid, exponential growth until outside limits, such as the laws of physics, set in.
The singularity raises the question of “when” more than “if.” While some place this seemingly inescapable reality within the next 20 to 50 years, others believe it is much closer.
Elon Musk, CEO of SpaceX, believes it is right around the corner: “The risk of something seriously dangerous happening is in the five-year timeframe. Ten years at most.”
Musk, a well-known voice warning of the potential for damage posed by artificial intelligence, has even gone so far as to invest in the up-and-coming AI company DeepMind specifically to keep tabs on the fast-growing technology. Musk explains that while he is normally a technology advocate, he urges people to heed his warnings: “This is not a case of crying wolf about something I don’t understand.”
The creation of a machine with far superior intelligence and the ability to learn and grow from its environment gives much reason for concern. Though it is hard to grasp or predict what a post-singularity world would entail, it is safe to say that in a world where computers are vastly superior to mankind, the need for human civilization is minuscule at best.
Many people see artificial intelligence as a means to advance our knowledge in areas such as space travel and healthcare. But the idea of a self-improving machine raises the question: at what cost? The singularity could well mean cures for a wide array of diseases we assumed incurable, such as cancer or many autoimmune conditions. It could also answer the questions about the universe that physicists have been actively pursuing. Yet these cures and answers could come at a potentially deadly cost: human civilization.
The singularity single-handedly makes the existence of people unnecessary: anything a human can do, a machine will do better and faster. This thought alone has raised many concerns. “I am not alone in thinking we should be worried,” Musk explains. “The leading AI companies have taken great steps to ensure safety. They recognize the danger but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the internet. That remains to be seen.”
With the promise of notoriety and more money than you know what to do with, artificial intelligence companies have every incentive to be the first to break through in the tech market. That breakthrough is not only attainable but nearly in reach. With so many potential negative outcomes stemming from the singularity, one question stands out above the rest: