Generative AI Learned Nothing From Web 2.0

If 2022 was the year the generative AI boom started, 2023 was the year of the generative AI panic. Just over 12 months after OpenAI released ChatGPT and set a record for the fastest-growing consumer product, it appears to have also helped set a record for the fastest government intervention in a new technology. The US Federal Elections Commission is looking into deceptive campaign ads, Congress is calling for oversight into how AI companies develop and label training data for their algorithms, and the European Union passed its new AI Act with last-minute tweaks to respond to generative AI.

But for all the novelty and speed, generative AI’s problems are also painfully familiar. OpenAI and its rivals racing to launch new AI models are facing problems that have dogged social platforms, that earlier era-shaping new technology, for nearly two decades. Companies like Meta never did get the upper hand over mis- and disinformation, sketchy labor practices, and nonconsensual pornography, to name just a few of their unintended consequences. Now those issues are gaining a challenging new life, with an AI twist.

“These are completely predictable problems,” says Hany Farid, a professor at the UC Berkeley School of Information, of the headaches faced by OpenAI and others. “I think they were preventable.”

Well-Trodden Path

In some cases, generative AI companies are built directly on problematic infrastructure put in place by social media companies. Facebook and other platforms came to rely on low-paid, outsourced content moderation workers, often in the Global South, to keep material like hate speech or imagery with nudity or violence at bay.

That same workforce is now being tapped to help train generative AI models, often with similarly low pay and difficult working conditions. Because outsourcing places crucial functions of a social platform or AI company at administrative arm’s length from its headquarters, and often on another continent, researchers and regulators can struggle to get a full picture of how an AI system or social network is built and governed.

Outsourcing can also obscure where the true intelligence inside a product actually lies. When a piece of content disappears, was it taken down by an algorithm or by one of the many thousands of human moderators? When a customer service chatbot helps a customer, how much credit goes to the AI and how much to the worker in an overheated outsourcing hub?

There are also similarities in how AI companies and social platforms respond to criticism of their ill effects or unintended consequences. AI companies talk about putting “safeguards” and “acceptable use” policies in place on certain generative AI models, just as platforms have terms of service setting out what content is and isn’t allowed. And as with the rules of social networks, AI policies and protections have proven relatively easy to circumvent.
