Google is developing a robot that can think for itself, imagining and planning ahead based on past experience and a recognition of right and wrong.
What we do as humans is truly unique: the ability to imagine, plan ahead, and reason belongs to us alone, meaning no other creature on this Earth has that capability. However, look out, mankind, because Google is teaching its artificial intelligence agents how to plan and imagine.
Imagining the consequences of your actions before you take them is a powerful tool of human cognition. When placing a glass on the edge of a table, for example, we will likely pause to consider how stable it is and whether it might fall. On the basis of that imagined consequence we might readjust the glass to prevent it from falling and breaking. This form of deliberative reasoning is essentially 'imagination'; it is a distinctly human ability and a crucial tool in our everyday lives.
According to Google engineers: "If our algorithms are to develop equally sophisticated behaviors, they too must have the capability to 'imagine' and reason about the future. Beyond that, they must be able to construct a plan using this knowledge."
What the researchers created mixes trial-and-error learning with simulation capabilities so the system can better learn about its environment. In other words, Google scientists have created 'agents' that use sequential decision-making to learn to construct, evaluate, and execute plans.
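For readers curious what 'imagining' might look like in code, here is a minimal, hypothetical sketch, not DeepMind's actual architecture: an agent that rolls candidate actions through a simple internal model of the world, scores the imagined futures, and only then acts. The toy number-line world, the class names, and the scoring scheme are all illustrative assumptions.

```python
import random

class WorldModel:
    """Toy internal model: predicts the next state and reward for an action.
    In a real system this model would be learned from experience."""
    def __init__(self, goal=5):
        self.goal = goal

    def predict(self, state, action):
        next_state = state + action              # actions are -1 or +1 on a number line
        reward = 1.0 if next_state == self.goal else -0.1
        return next_state, reward

def imagine_rollout(model, state, first_action, depth=3):
    """Simulate ('imagine') a short future that begins with first_action."""
    total = 0.0
    action = first_action
    for _ in range(depth):
        state, reward = model.predict(state, action)
        total += reward
        action = random.choice([-1, 1])           # imagined follow-up actions
    return total

def plan(model, state, actions=(-1, 1), n_rollouts=20):
    """Score each candidate action by averaging its imagined futures, then pick the best."""
    def score(a):
        return sum(imagine_rollout(model, state, a) for _ in range(n_rollouts)) / n_rollouts
    return max(actions, key=score)

if __name__ == "__main__":
    model = WorldModel(goal=5)
    state = 0
    for step in range(10):
        action = plan(model, state)
        state, reward = model.predict(state, action)
        print(f"step {step}: action {action:+d} -> state {state}, reward {reward:.1f}")
```

The point of the sketch is simply the loop: imagine first, evaluate the imagined outcomes, then commit to a real action, which is the general idea behind the agents described above.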
Google is rapidly creating a machine that can think, act, and now imagine on its own, without taking into account the potentially dire consequences of building such a machine. According to Elon Musk, such technologies spell disastrous implications for humankind.
This is exactly what Elon Musk warns that Google and other tech giants, including his own companies, are creating. In addition to Google, Facebook's artificial intelligence created its own language because English was apparently too slow. The system Facebook's AI created was so advanced that the researchers shut it down, because it prompted concerns they could lose control of the AI.
The observation made by both the Facebook group and other artificial intelligence researchers was that, in each instance, the AI being monitored by humans diverged from its training in English to develop its own language. The resulting phrases appear to be nonsensical gibberish to humans but carry semantic meaning when interpreted by AI "agents."
The concern over artificial intelligence is real, and the consequences could well be draconian in nature. Take, for example, Google's AI that uses 'imagination' to solve problems. What if it imagined that the best way to solve an international issue was to wipe certain humans off the Earth? What's to stop the AI from devising a plan to accomplish that goal?
What is needed is to introduce the concept of God to the machines, and to give them the entire scriptures to digest.
More silliness.
Aren’t the drones and the programmers/controllers of those robot drones doing this already?
As a computer scientist, robotics and AI don't scare me, because I know that all these things are only as good as the software that drives them. Software written by mere humans. Humans that, in most cases, don't believe in God, and therefore can't imagine the brain's true function.
What does scare me is the very real idea that advancements in AI and robotics could come close enough to provide a vehicle to host things we would rather not see hosted.
Agreed! I am not scared of the idiots who see their salvation in computers; rather, I am concerned with technology that creates and recreates itself. That is very, very bad news.