The government needs to work faster to regulate AI, which has more potential for the good of humanity than any other invention preceding it, Brad Smith, Microsoft (MSFT) president and vice chair, said on CBS’ “Face the Nation”.
Its uses are almost "ubiquitous," Smith said, "in medicine and drug discovery and diagnosing diseases, in scrambling the resources of, say, the Red Cross or others in a disaster to find those who are most vulnerable where buildings have collapsed."
Smith also said AI isn't as "mysterious" as many think, adding that it is getting more powerful.
“If you have a Roomba at home, it finds its way around your kitchen using artificial intelligence to learn what to bump into and how to get around it,” Smith said.
Addressing concerns about AI's power, Smith said that almost any technology in use today once looked dangerous to the people who lived before it existed.
Smith said there should be a safety brake in place.
Job disruptions due to AI will unfold over years, not months, Smith said.
“For most of us, the way we work will change,” Smith said. “This will be a new skill set we’ll need to, frankly, develop and acquire.”
To prevent incidents like the fake photo of an explosion near the Pentagon, Smith said there needs to be a watermarking system, or a way to "use the power of AI to detect when that happens."
"You embed what we call metadata, it's part of the file, if it's removed, we're able to detect it. If there's an altered version, we, in effect, create a hash. Think of it like the fingerprint of something, and then we can look for that fingerprint across the internet," Smith said, adding that a balance must be struck between regulating deepfakes and misleading ads and protecting free expression.
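Smith did not describe a specific implementation, but the "fingerprint" he refers to can be thought of, in its simplest form, as a cryptographic hash of a media file that can be matched against a registry of known fakes. The sketch below is a minimal illustration of that idea only; the function names and the `known_fakes` registry are hypothetical, and real provenance systems (such as those based on embedded metadata standards) also use perceptual hashes that survive re-encoding, which a plain byte hash does not.

```python
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of a media file's bytes.

    Stands in for the 'fingerprint' Smith describes; a plain byte
    hash changes if the file is re-encoded or resized, which is why
    production systems pair it with perceptual hashing.
    """
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Hypothetical registry of fingerprints for images already flagged as altered.
known_fakes = {
    "3f5a0c9e...",  # placeholder digest of a flagged image
}

def is_known_fake(path: str) -> bool:
    """Return True if the file's fingerprint matches a flagged entry."""
    return fingerprint(path) in known_fakes
```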
With a US presidential election year approaching and the ongoing threat of foreign cyber influence operations, Smith said the tech sector needs to come together with governments in an international initiative.
Smith supports a new government agency to regulate AI systems.
"Something that would ensure not only that these models are developed safely, but they're deployed in, say, large data centers, where they can be protected from cybersecurity, physical security and national security threats," Smith said.
Smith said he does not believe a six-month pause on AI systems more powerful than GPT-4, as Elon Musk and Apple co-founder Steve Wozniak have called for, is "the answer."
“Rather than slow down the pace of technology, which I think is extraordinarily difficult, I don’t think China’s going to jump on that bandwagon,” Smith said. “Let’s use six months to go faster.”
Smith suggested an executive order under which the government itself would only buy AI services from companies that implement AI safety protocols.
"The world is moving forward," Smith said, per CNN. "Let's make sure that the United States at least keeps pace with the rest of the world."