(Reuters) – OpenAI has seen a number of attempts where its AI models have been used to generate fake content, including long-form articles and social media comments, aimed at influencing elections, the ChatGPT maker said in a report on Wednesday.
Cybercriminals are increasingly using AI tools, including ChatGPT, to aid their malicious activities such as creating and debugging malware, and generating fake content for websites and social media platforms, the startup said.
So far this year, it has neutralized more than 20 such attempts, including a set of ChatGPT accounts in August that were used to produce articles on topics that included the U.S. elections, the company said.
It also banned a number of accounts from Rwanda in July that were used to generate comments about the elections in that country for posting on social media site X.
None of the activities that attempted to influence global elections drew viral engagement or built sustainable audiences, OpenAI added.
There is growing concern about the use of AI tools and social media sites to generate and spread fake content related to elections, especially as the U.S. gears up for its presidential polls.
According to the U.S. Department of Homeland Security, the U.S. faces a growing threat of Russia, Iran and China attempting to influence the Nov. 5 elections, including by using AI to disseminate fake or divisive information.
OpenAI cemented its position as one of the world's most valuable private companies last week after a $6.6 billion funding round.
ChatGPT has amassed 250 million weekly active users since its launch in November 2022.