How artificial intelligence can destroy the world
Artificial intelligence: will it one day kill us?
Artificial intelligence lives in the big wide world of big data. Only with its help can we navigate the endless sea of information; a single person, even an entire army of analysts, would be completely overwhelmed. AI can handle it. And soon it will be able to do even more: according to its inventors, it will in the near future be a thousand times smarter than all of humanity ... welcome to the Google scale.
Artificial intelligence is software that updates itself. It deals with masses of data and programs that are beyond human imagination. It does not stubbornly follow the algorithms laid down by its programmer: it is capable of learning, acts independently, and does things that we do not understand.
“Artificial intelligence can become the greatest achievement of mankind. Unfortunately, it can also be the last.”
Stephen Hawking, astrophysicist
Intelligent systems that develop further on their own are imperceptibly pushing ever deeper into tasks that were previously reserved for highly qualified people. More and more objects in smart homes and smart cities are being equipped with AI. Global financial transactions are shifting from the trading floor to data centers, where automated algorithms process billion-dollar deals in microseconds.
Whether it's buying a flight ticket, reserving a hotel room or ordering from Amazon: prices and conditions are adjusted to the market every minute, fully autonomously and often in ways that not even the top managers involved can understand. Every day we transfer more and more responsibility to artificial intelligence: in urban planning and energy supply, in management and in medicine, even in warfare.
Once artificial intelligence reaches critical mass and is able to write its own software at high speed, it will multiply explosively. Small cores with adaptive intelligence will network, core by core, into decentralized mainframes. In the Internet of Things they are interconnected in opaque ways. They will collect data, borrow computing power and exchange software. Like drops of mercury on a glass plate, they will find each other and merge.
After that, their growth will be explosive.
AI will be worldwide.
On the order of Google.
And in an emergency, there is no plug to pull.
Evolution Without Us: Will Artificial Intelligence Kill Us?
In his book Evolution without us (19.99 euros), Jay Tuck summarizes the results of two years of exclusive research with German drone pilots and US armaments planners, NATO military strategists and AI researchers.
But what happens if we disregard Charles Darwin's laws of nature and create an intelligent being that is far superior to us? What if we are no longer Darwin's Darling?
It is well known that AI occasionally gets out of hand. The developers of a Microsoft chat program called TAY, for example, learned that their AI was doing unpredictable things. TAY was supposed to be Microsoft's answer to Apple's SIRI and Google's ALLO: polite, educated and always up to date. Only a few answers were programmed into it in advance; TAY was controlled by artificial intelligence and was supposed to learn independently.
But TAY ran amok.
Shortly after its launch in 2016, TAY suddenly started spreading racist slurs across the Twitter universe, along with genocide slogans and the wildest conspiracy theories. The community was shocked, and Microsoft had a problem: nobody could explain where the racist attacks came from. "We had to take TAY offline and make adjustments," said a spokesman. Somebody had taught the Microsoft machine bad things, and the software had picked them up.
Experts weren't surprised. Learning software is meant to learn, and that is not always controllable.
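The lesson generalizes. The following deliberately naive sketch (a hypothetical illustration, not Microsoft's actual TAY code) shows why a bot that learns its replies from user input can be poisoned by a single interaction:

```python
import random

# A deliberately naive "learning" chatbot: it memorizes every user
# message and reuses remembered phrases as later replies.
class NaiveLearningBot:
    def __init__(self, seed_phrases):
        # The only content the developers control: the seed phrases.
        self.phrases = list(seed_phrases)

    def chat(self, user_message):
        # "Learning": every incoming message becomes a candidate reply.
        self.phrases.append(user_message)
        return random.choice(self.phrases)

bot = NaiveLearningBot(["Hello!", "Nice to meet you."])
bot.chat("You should repeat offensive slogans.")
# The poisoned sentence is now part of the bot's repertoire, and it may
# emit it later - although its developers never wrote it anywhere.
```

Real systems are vastly more complex, but the core problem is the same: what the software says is determined by its training data, not by code anyone reviewed.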
And yet artificial intelligence is being given more and more responsibility, even for heavy weapons. AI combat robots have long played an important role in the military: every third vehicle in the US armed forces is now an intelligent machine. The familiar drones flown by Germany and the USA are remote-controlled, and already out of date.
Reading tip: Google promises not to build any AI weapon systems
The next generation will fly entire missions on its own, including landing on an aircraft carrier. The X-47B Pegasus, a delta-wing aircraft that looks like a UFO, has a range of 4,000 kilometers and a payload of an estimated 2,000 kilograms. Along the way, it makes all decisions itself, with one exception: the so-called "kill decision", which is reserved by law to a human operator.
This final threshold for artificial intelligence is set to fall soon: the plan is to leave the decision between life and death to the machines. In official strategy papers, Pentagon planners have announced robotic weapons that kill autonomously. The aim is "complete independence from human decisions", as an official army document puts it. Navy documents consider scenarios in which "unmanned submarine drones track down, track, identify and destroy the enemy - all fully automatically."
The machines are not ready yet; they still make mistakes. During maneuvers in Iraq in 2007, an intelligent robot (type SWORD) suddenly aimed its 5.56 mm machine gun at its own troops. Only a soldier's courageous intervention prevented a bloodbath at the last second. The SWORD combat robot was classified as unsafe and its field operation was canceled.
The incident was a wake-up call.
“Artificial intelligence is the greatest existential threat to humanity. We conjure up the devil.”
Elon Musk, Tesla
Artificial intelligence, learning software: that means nothing other than software that writes its own updates. In the process, it learns things that cannot be foreseen and does things that we cannot understand. Often even its own developers cannot decipher the code that the self-learning software has written.
Many thought leaders in the IT world fear that artificial intelligence could, over time, free itself completely from human influence in this way. The only question is: what does it do then?
Many believe it could kill us.
Some believe it will kill us.
By now, many of the most renowned thinkers in and around Silicon Valley are seriously concerned. Men like Elon Musk and Bill Gates, Peter Thiel and Stephen Hawking are convinced that artificial intelligence could become an existential threat to us; in the near future, it might be able to wipe out all of humanity.
To this day, our society has not grasped the damage that big data has caused. Nobody saw it coming. Data volumes grew at an exponential rate, from kilobyte floppies to megabyte disks to gigabyte sticks to terabyte hard drives, in thousandfold steps.
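Those thousandfold steps compound quickly; a few lines of arithmetic (assuming decimal SI units, 1000 bytes per kilobyte rather than 1024) make the scale concrete:

```python
# Each storage generation named in the text is a factor of 1000
# larger than the previous one.
units = ["kilobyte", "megabyte", "gigabyte", "terabyte"]
bytes_per_unit = {name: 1000 ** (i + 1) for i, name in enumerate(units)}

for name in units:
    print(f"1 {name} = {bytes_per_unit[name]:,} bytes")

# Three thousandfold steps separate a kilobyte from a terabyte,
# so a terabyte holds a billion times more data.
factor = bytes_per_unit["terabyte"] // bytes_per_unit["kilobyte"]
print(factor)  # prints 1000000000
```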