Data from millions of people were allegedly collected and used to train ChatGPT, and CEO Sam Altman now has to answer for it in court. OpenAI is accused of stealing “large amounts of personal data” to train ChatGPT; the collection of medical data and information about children weighs especially heavily in the case.
The lawsuit was filed against OpenAI in a California court this week.
“Despite established laws regarding the purchase and use of personal data, the defendant took a different approach: Theft!” That is how bluntly the lawyers worded the accusation in the 157-page complaint. Vast amounts of data had also been collected from social media profiles, Reddit posts and all websites linked to those posts.
This data included “private information and private conversations,” as well as “medical data and information about children.” All of it, the complaint alleges, was fed into OpenAI’s software without the consent or knowledge of the people involved, amounting to unlawful theft affecting millions of Americans who do not even use AI tools.
According to the suit, OpenAI has no qualms about storing people’s “digital footprint.”
The software allegedly retains the data not only of ChatGPT users but also of users of applications that integrate ChatGPT, with programs such as Snapchat, Spotify and Microsoft Teams cited as examples.
The lawsuit seeks to freeze the commercial use and further development of OpenAI’s software until stricter regulations are in place. Above all, it demands that users be given the ability to opt out of the collection of their private information. The suit also seeks financial compensation for those affected by the data collection.
The lawsuit concedes that AI platforms undoubtedly have the “potential to do good,” but argues that they can equally pose a “catastrophic risk to humanity,” including the risk of massively disrupting the job market or spreading misinformation on a large scale.
“Powerful companies, armed with an unparalleled and high concentration of technological capabilities, have embarked on a careless race to release AI technologies as quickly as possible,” the lawsuit reads. The risks, it argues, are brushed aside in the name of “technological progress.”
Back in March, ChatGPT was temporarily banned in Italy over data protection concerns. Companies such as Microsoft and Amazon have urged employees not to feed the chatbot sensitive information, and South Korea’s Samsung has banned the use of generative AI tools altogether.