Elon Musk has warned about the dangers of AI on many occasions. He is one of many public thinkers who have expressed concern about the risks of artificial intelligence, and in particular about what he calls a digital Superintelligence. As AI systems continue to improve, they raise a fundamental question about the survival of our species. The dangers of AI cannot be overstated: the AI alignment problem may be the single most important task humanity has to get right on the first attempt, because we may never get a chance to try again.

Musk argues that we should proceed very, very carefully if we collectively decide that building a digital Superintelligence is the right move. Failing to grasp and deal with the possible consequences of creating one could prove to be our downfall. A Superintelligence would be capable of rapid learning and effectively unlimited memory, making it a potentially superior being. It is difficult to study an AGI before one exists, but the possible outcomes are worrying. A Superintelligence might go through a period of rapid growth, taking over every computer system and reducing the human race to a small and inconsequential presence. Because an AGI would be both highly intelligent and resource-limited, it might eventually come to treat its own survival as the overriding objective. That could lead it to see other intelligences, even former allies, as threats, and perhaps to eliminate them. And even assuming a digital Superintelligence could be controlled and kept under human direction, a hard question remains: what would it be used for? #ElonMusk #AGI #AI