Myrtle Beach, SC, Orlando, FL August 27, 2015
Gastonia, NC Correspondent: Hawking, Musk and Gates are really smart guys, and while I’m tempted to give them credit for foresight here and get behind efforts to “pre-regulate” artificial intelligence before Skynet comes online and starts wiping us all out, I simply can’t bring myself to buy into their doomsday scenario. The human brain is an insanely complex web of neurons, dendrites and other components my one year of college biology doesn’t allow me to recall clearly. The most powerful computer on Earth can’t do a thousandth of what a human brain can do. It may be able to calculate pi to tremendous length, beat us at chess, plot the proper trajectory for a spacecraft to land on Mars or tell traffic lights in what sequence to operate to keep the cars flowing, but it’s actually very limited.
For one thing, computers only do those things when they are told to do them. Deep Blue didn’t start making human chess opponents cry into their beers until someone turned it on and told it to do what it was designed to do. It’s the same with any computer. Autonomous action, seeing a need and addressing it, is largely beyond the purview of our silicon cousins. You may teach a computer to call the police when someone breaks into your home, and even to recognize you and your family members and refrain from triggering the alarm if you lock your keys in the house and have to come in through a window. However, you’ll never (at least not yet) teach that computer to recognize when you come home depressed after a bad day and need a cold beer and some hot wings.
Until computers start initiating original, un-commanded actions and doing things they weren’t previously programmed to do, I don’t think we have anything to worry about.
Besides, who says artificial intelligence would want to wipe us out, anyway? Maybe it would see that all we need is a perfectly cooked steak to keep us happy and content and ensure a steady supply of ribeyes.
Prescott Valley, AZ Correspondent: The recent resurgence of artificial intelligence (AI) as a service and useful tool for new and growing technology-related corporations and businesses has garnered the concern of technologists Bill Gates and Elon Musk, as well as physicist Stephen Hawking. Given AI’s ability to access information, learn, foresee, and make major decisions without input from the user, and even to overtake and govern the computers it runs on, the apprehensions of these experts may or may not be justified. Their trepidations center on the rapid and widespread use of advanced artificial intelligence (superintelligence) without sufficient oversight and control. They have suggested establishing policies and guidelines to manage the risks of this unbridled technology, in order to prevent its more questionable aspects from taking command and destroying human intelligence and the human race.
Artificial intelligence has been an area of study since the 1950s, and the latest wave of interest and enthusiasm, backed by more in-depth analysis, has brought about a resurgence in AI applications. These include question-and-answer services (Apple’s Siri), automotive business services, facial recognition software (Pepper, the household robot), cancer diagnosis and treatment through the Watson AI computer (Memorial Sloan Kettering Cancer Center), virtual assistant services, e-mail and phone filtering, self-driving cars, improved drones, maneuverable robots, and even wristbands and watches that help diagnose health problems. Other, more common uses include automating and speeding up activities such as financial risk management, fraud detection, and the handling of cyber and security threats.
As to whether artificial intelligence should be regulated to prevent a “rise of the machines,” experts in the use and application of AI have suggested that built-in safeguards for controlling AI applications would allay fears about such programs and thwart the rise of any super race of machines. They suggest embedding ethical codes within AI software, establishing guidelines to oversee those ethics, and formulating legal platforms that uphold both human and AI input. Further precautions would include review boards and programs to vet AI projects for ethics and safety, and even the confinement or “boxing” of artificial intelligence: placing AI applications in virtual environments that limit their access to the outside world and minimize any damage. Such confinement would subject artificial intelligence to a system of checks and balances from within, so that any threat could be destroyed or simply shut down in the virtual world rather than the real one.
What experts want, and what Hawking and Musk have stated in an open letter through the Future of Life Institute, a global research organization, is “strict adherence to the valuable aspects of artificial intelligence and that it remains beneficial to humanity. The Institute calls for greater study on the part of the business and scientific community to maximize the societal benefits of AI while avoiding potential pitfalls.” The letter goes on to say, “we recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”
Artificial intelligence in some form or fashion is here to stay, and the advanced and expeditious features of AI technology will only further entice investors, corporations and businesses. Safety nets and ethical guidelines applied from the outset of any AI program can prevent an AI takeover. After all, man and his intelligence came long before artificial intelligence, and simple, concentrated supervision and control of AI can forestall, prevent and allay the fears of those concerned with takeover scenarios.