Artificial intelligence (AI) is quickly developing, and humans don’t have a choice in the matter. However, Kevin Kelly said humans do “have a tremendous amount of choice in developing the character, the policies, the nature of it [and] who owns it.”
This past Tuesday, Kelly discussed the current and future roles of AI as the technology continues to develop around the world, as well as the questions that will need to be asked in the coming years.
In 1993, Kelly co-founded Wired and served as its executive editor. The magazine "illuminates how technology is changing every aspect of our lives—from culture to business, science to design," according to its website.
The "challenges" of artificial intelligence are "exactly the kind of issues that we like to study," policy studies chair Mark Crain said as he introduced Kelly at Tuesday's talk.
“In most cases, we’re not trying to replicate human thinking. The power of these minds is that they do not think like humans,” Kelly said.
Current AI technologies such as LettuceBot, a precision agriculture machine that inspects individual plants on a farm, and solar-powered weed bots are likely to become cheap in seven to 10 years, according to Kelly.
As robots become commonplace, new jobs will emerge for humans in which productivity is not the point, since humans are "terrible for efficiency," Kelly said.
Kelly also discussed the possibility of programming creativity, emotion trackers and pain into robots.
It's "pretty easy to teach ethics to AI" because ethics can be coded in, he said, adding that since humans don't have "great ethics ourselves," the harder problem will be teaching what is not always agreed upon.
Kelly further suggested that “if we create 1% more than we destroy,” we have “all we need to have civilization.”
After the talk, one audience member asked about the possibility of AI developing free will and defying humans to the point of no longer listening to commands.
Kelly answered by referring to AI created by humans as "mind children" that must be trained so that "when we let them go, they don't kill us." He added that there is proof that this strategy works.
Another audience member asked whether AI will eventually be able to transition into something organic.
Kelly said that in the long term, he predicts people may attempt to modify human genes in the same way they have altered machines to make them more intelligent. He then predicted a division between those who want to modify genes and those who don't.
“It’s a good take because you always think of AI as the way it works in video games,” Nicholas Sabella ‘23 said. “But [Kelly] has a different view so it’s actually pretty nice to hear a different side of it.”
“AI is the most powerful technology of our time. Its growing presence inevitably transfers decision-making and the capacity for moral authority to algorithms and those who create them,” Crain wrote in an email. “We have a responsibility to think critically about what happens when autonomous and intelligent systems make decisions that require ethical judgments and moral principles.”