Last week, OpenAI users were alarmed by a court order that required the company to preserve logs of ChatGPT chats, including deleted ones. The company has already appealed the order, arguing that it affects the privacy of hundreds of millions of ChatGPT users worldwide.
In a statement, OpenAI COO Brad Lightcap explains that the court order was issued in response to a lawsuit filed by The New York Times and other news organizations. The plaintiffs claim that deleted conversations with the chatbot may contain evidence that users prompted ChatGPT to reproduce fragments of copyrighted articles from those news outlets.
According to the plaintiffs, OpenAI improperly used their materials to train its models. As a result, the models can generate text that closely matches the originals and even reproduce entire excerpts from copyrighted publications.
The court’s order to preserve the logs came after the news organizations raised concerns that people using ChatGPT to bypass paywalls might be intimidated by the lawsuit and “delete all their searches and prompts to cover their tracks.”
To comply with the injunction, OpenAI must “retain all user content for an indefinite period of time,” based on the assumption that the plaintiffs will find “something to support their claims” in the logs, the company explains.
The order affects all chats of ChatGPT Free, Plus, and Pro users, as well as users of OpenAI’s API. The company emphasizes that the order does not affect ChatGPT Enterprise or ChatGPT Edu customers, or anyone who has signed a Zero Data Retention agreement (essentially a special mode of working with the company’s API, in which user data is not stored at all and is not used for either analysis or model training).
OpenAI says it has already filed a motion to stay the order. In the meantime, however, the company is being forced to back away from “long-standing privacy norms” and relax the rules that users rely on under ChatGPT’s terms of service.
The company’s statement also notes that OpenAI is now unsure whether it can comply with the EU’s strict General Data Protection Regulation (GDPR), which grants users the “right to be forgotten.”
“We firmly believe that The New York Times has overreached,” Lightcap said. “We will work to appeal this ruling so that we can continue to put your trust and privacy first.”
OpenAI included an FAQ with its statement, explaining what user data the company stores and how it may be disclosed. For example, as noted above, the ruling does not affect business customers of the OpenAI API or users who have signed Zero Data Retention agreements, since their data is not stored at all.
As for everyone else, OpenAI says that users’ deleted chats can indeed be accessed, but they “will not be automatically shared with The New York Times.” Instead, the data will be “stored in a separate, secure system” and cannot be accessed or used for any purpose other than to comply with legal obligations.
The company reassures users, emphasizing that “only a small, vetted team of OpenAI legal and security personnel will be able to access this data if necessary to comply with our legal obligations.”