
Joe Biden Wants US Government Algorithms Tested for Potential Harm Against Citizens



“The framework enables a set of binding requirements for federal agencies to put in place safeguards for the use of AI so that we can harness the benefits and enable the public to trust the services the federal government provides,” says Jason Miller, OMB’s deputy director for administration.

The draft memo highlights certain uses of AI where the technology can harm rights or safety, including health care, housing, and law enforcement, all situations where algorithms have in the past resulted in discrimination or denial of services.

Examples of potential safety risks mentioned in the OMB draft include automation for critical infrastructure like dams and self-driving vehicles like the Cruise robotaxis that were shut down last week in California and are under investigation by federal and state regulators after a pedestrian struck by another car was dragged 20 feet. Examples of how AI could violate citizens' rights in the draft memo include predictive policing, AI that can block protected speech, plagiarism- or emotion-detection software, tenant-screening algorithms, and systems that can impact immigration or child custody.

According to OMB, federal agencies currently use more than 700 algorithms, though the inventories provided by federal agencies are incomplete. Miller says the draft memo requires federal agencies to share more about the algorithms they use. “Our expectation is that in the weeks and months ahead, we’re going to improve agencies’ abilities to identify and report on their use cases,” he says.

Vice President Kamala Harris mentioned the OMB memo alongside other responsible AI initiatives in remarks today at the US Embassy in London, a trip made for the UK’s AI Safety Summit this week. She said that while some voices in AI policymaking focus on catastrophic risks, like the role AI could someday play in cyberattacks or the creation of biological weapons, bias and misinformation are already being amplified by AI and affecting people and communities daily.

Merve Hickok, author of a forthcoming book about AI procurement policy and a researcher at the University of Michigan, welcomes how the OMB memo would require agencies to justify their use of AI and assign specific people responsibility for the technology. That’s a potentially effective way to ensure AI doesn’t get hammered into every government program, she says.

But the availability of waivers could undermine these mechanisms, she fears. “I would be worried if we start seeing agencies use that waiver extensively, especially law enforcement, homeland security, and surveillance,” she says. “Once they get the waiver it can be indefinite.”
