A U.S. congressman has begun advocating for a federal agency to regulate the use of artificial intelligence, warning of a dystopian future in which AIs make key decisions and autonomous weapons roam America.
Rep. Ted Lieu (D-CA) authored an opinion piece in The New York Times on Monday, arguing that AI has emerged as a powerful tool that can be used to benefit humanity, or to deceive it, and worse. In fact, the example he cited, reproduced above, wasn't written by Lieu but by ChatGPT, the AI chatbot developed by OpenAI. (OpenAI received a multibillion-dollar investment from Microsoft on Monday, amid user reports that it is unveiling a paid version.)
Lieu, who earned a B.S. degree in computer science from Stanford, noted that AI is now present in everything from smart speakers to Google Maps. But where AI fails, people can be hurt: the editorial points out that a driver blamed Tesla's self-driving mode for an eight-car pileup on the San Francisco Bay Bridge.
According to Lieu, Congress is simply incapable of passing legislation that can regulate AI: the technology moves too fast, and legislators lack the knowledge required to set laws and guidelines. Instead, “[w]hat we need is a dedicated agency to regulate AI,” Lieu wrote. “An agency is nimbler than the legislative process, is staffed with experts and can reverse its decisions if it makes an error. Creating such an agency will be a difficult and huge undertaking because AI is complicated and still not well understood.”
Lieu cited other agencies, such as the Food and Drug Administration (FDA), as proof that the federal government can still regulate new and emerging technology. But, he said, the process couldn't happen overnight. Instead, Lieu said he would propose the formation of a nonpartisan AI commission to offer recommendations on how such a federal agency could be formed, what it should regulate, and what standards might apply.
The National Institute of Standards and Technology has already published an AI Risk Management Framework, a non-binding document that Lieu proposes the government build upon and add compliance mechanisms to. “We may not need to regulate the AI in a smart toaster, but we should regulate it in an autonomous car that can go over 100 miles per hour,” Lieu wrote.
Already, ChatGPT has nearly passed the bar exam, scoring a 50.3 percent correct response rate. (A score of 68 percent is required to pass.) Artists, meanwhile, are concerned that AI art could threaten their own commissions. Is it possible Lieu is hoping to regulate AI before it comes for his job, too?