ChatGPT creator under investigation by FTC
The Federal Trade Commission is investigating whether the way that artificial intelligence programme ChatGPT gathers its information is harmful to consumers.
AI programme ChatGPT has been causing a stir in the drinks industry in recent months.
In March, the drinks business reported that ChatGPT had passed three of the Master Sommelier theory exams, scoring 92% on the introductory Court of Master Sommelier test, 86% on the Certified Sommelier exam and 77% on the Advanced Sommelier exam.
Earlier this year, ChatGPT was also tasked by a London bar to create the “best cocktail in the world”, with the AI bot creating its own signature serve containing gin, St-Germain elderflower liqueur, Absinthe, Cointreau, orange bitters and sparkling wine.
According to ChatGPT creator OpenAI, the programme’s latest model “exhibits human-level performance on various professional and academic benchmarks”.
Now, concerns have been raised over how OpenAI obtains the vast amounts of information and data that ChatGPT uses to form its answers to user requests.
In a letter sent to OpenAI, first reported by The Washington Post, the Federal Trade Commission (FTC) informed the company it is probing whether OpenAI has “engaged in unfair or deceptive” practices related to data security or “relating to risks of harm to consumers.”
The 20-page document from the FTC demands that OpenAI describe in detail its processes for developing and training large language models such as ChatGPT.
“As a general matter, some of what we’re seeing in this space is that ChatGPT and some of these other services are being fed a huge trove of data,” said FTC chair Lina Khan.
“We’ve heard reports where people’s sensitive information is showing up in response [to] an inquiry from somebody else. We’ve heard about libel, defamatory, flatly untrue things that are emerging. That’s the type of fraud, deception that we’re concerned about.”
The investigation centres on whether the artificial intelligence company has violated consumer protection laws, and whether the chatbot has provided false information that could cause “reputational harm.”
If found to have breached protection laws, OpenAI could face fines.
“It’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law,” OpenAI founder Sam Altman wrote on Twitter. “We protect user privacy and design our systems to learn about the world, not private individuals.”
Italian regulators temporarily blocked ChatGPT over privacy concerns, and privacy watchdogs in France, Spain, Ireland and Canada are also said to be paying close attention after receiving complaints.
US comedian Sarah Silverman is suing OpenAI, the creator of ChatGPT, for copyright infringement, alleging that the AI system was trained on illegal copies of her work.
Meanwhile, last week OpenAI struck a deal with The Associated Press (AP) under which the AI company will license AP’s archive of news stories.