Regulating artificial intelligence (AI) programmes like ChatGPT is a smart idea, the CEO of the popular chatbot's developer said on Tuesday.
Companies developing powerful AI should be required to meet safety criteria, including internal and independent testing, before release, Sam Altman, CEO of OpenAI, told the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. The hearing was held to examine what rules and guidance would be needed to regulate chatbots like the one produced by OpenAI.
ChatGPT answers queries on thousands of topics and can help users write prose, computer code, school papers, and even legal documents. Its potential for misuse has raised concerns around the world, and the chatbot became more accessible after its release on Apple iOS on Thursday.
Authorities in the United States, the European Union, and Canada have all been scrutinising chatbots like ChatGPT. The chatbot is prohibited in China, Iran, North Korea, and Russia, and it returned to Italy last month after the country's data protection authority resolved its privacy concerns.
Altman stated that he would prefer stakeholders to be involved in both the initial and ongoing regulatory processes.
“It will be important for policymakers to consider how to implement licensing regulations on a global scale and ensure international cooperation on AI safety, including examining potential intergovernmental oversight mechanisms and standard-setting,” he said in his testimony.
Because the AI field is evolving rapidly, Altman said, any governance framework should be flexible enough to adapt to technical breakthroughs.
Sen. Richard Blumenthal (D-Conn.) demonstrated ChatGPT's capabilities by opening the hearing with a statement written by the chatbot and read aloud by an AI clone of his voice. Blumenthal then raised the concern that such tools could be exploited to produce a convincing imitation of his voice supporting Russian President Vladimir Putin.
Altman said his “worst fears” about AI are that it could “cause significant harm to the world.”
“I think if this technology goes wrong, it can go quite wrong,” he said. “And we want to be vocal about that. We want to work with the government to prevent that from happening. But we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that.”
Elsewhere, Securities and Exchange Commission Chair Gary Gensler expressed fear on Tuesday that generative AI systems like ChatGPT could trigger the next financial crisis if not used appropriately.
“You don’t have to understand the math, but [you have] to understand, really, how the risk management is managed,” said Gensler at a conference hosted by the Financial Industry Regulatory Authority, according to the Wall Street Journal.
By fLEXI tEAM