OpenAI has seen a number of attempts where its artificial intelligence models were used to generate fake content, including long-form articles and social media comments, aimed at influencing elections, the ChatGPT maker said in a report on Wednesday (Oct 9).
Cybercriminals are increasingly using AI tools, including ChatGPT, to aid their malicious activities, such as creating and debugging malware and generating fake content for websites and social media platforms, the start-up said.
So far this year it has neutralised more than 20 such attempts, including a set of ChatGPT accounts in August that were used to produce articles on topics that included the US elections, the company said.
It also banned a number of accounts from Rwanda in July that were used to generate comments about the elections in that country for posting on social media site X.
None of the activities that attempted to influence global elections drew viral engagement or sustainable audiences, OpenAI added.
There is growing concern about the use of AI tools and social media sites to generate and spread fake content related to elections, especially as the US gears up for its presidential polls.
According to the US Department of Homeland Security, the US faces a rising threat of Russia, Iran and China attempting to influence the Nov 5 elections, including by using AI to disseminate fake or divisive information.
OpenAI cemented its position as one of the world's most valuable private companies last week after a US$6.6 billion funding round.
ChatGPT has 250 million weekly active users.