According to the European Commission, companies that use AI tools such as ChatGPT and Bard should label their content

Companies that use generative AI technologies such as ChatGPT and Bard, which have the potential to produce disinformation, should label such content as part of their efforts to combat fake news, European Commission deputy head Vera Jourova said.


Microsoft-backed OpenAI's ChatGPT, which was unveiled late last year, has become the fastest-growing consumer application in history, sparking a race among tech companies to bring generative AI products to market.


Concerns are growing, though, about possible abuse of the technology and the risk that bad actors, including governments, could use it to spread disinformation on a far greater scale than before.


"Signatories who integrate generative AI into their services, such as Bingchat for Microsoft and Bard for Google, should include necessary safeguards to ensure that these services are not used by malicious actors to generate disinformation," Jourova said at a news conference.


"Signatories with services that have the potential to disseminate AI-generated disinformation should put technology in place to recognise such content and clearly label it to users," she added.


Companies such as Google, Microsoft, and Meta Platforms that have signed up to the EU Code of Practice to combat disinformation should report on their safeguards in July, according to Jourova.


She advised Twitter, which dropped out of the Code last week, to brace itself for more regulatory scrutiny.


"By leaving the Code, Twitter has attracted a lot of attention, and its actions and compliance with EU law will be vigorously and urgently scrutinised," Jourova said.

