The emergence of generative artificial intelligence tools that allow people to efficiently produce detailed online reviews with almost no work has put merchants, service providers and consumers in uncharted territory, watchdog groups and researchers say.
Fake reviews have long plagued many popular consumer websites, such as Amazon and Yelp. They are typically traded on private social media groups between fake review brokers and businesses willing to pay. Sometimes such reviews are initiated by businesses that offer customers incentives such as gift cards for positive feedback.
But AI-powered text generation tools, popularized by OpenAI’s ChatGPT, allow fraudsters to produce reviews faster and in higher volume, tech industry experts say.
This deceptive practice, which is illegal in the United States, occurs year-round but becomes a bigger problem for consumers during the holiday season, when many people rely on reviews to help them buy gifts.
Where fake reviews appear
Fake reviews are found across a wide range of industries, from e-commerce, accommodation and dining to services such as home repairs, medical care and piano lessons.
The Transparency Company, a tech company and watchdog group that uses software to detect fake reviews, said it started seeing AI-generated reviews show up in large numbers in mid-2023, and they have multiplied ever since.
For a report released this month, the Transparency Company analyzed 73 million reviews across three industries: home services, legal and medical. Nearly 14% of reviews were likely fake, and the company expressed a “high degree of confidence” that 2.3 million reviews were partly or entirely AI-generated.
“It’s just a really, really good tool for these fraudsters,” said Maury Blackman, an investor and advisor to tech startups who has reviewed Transparency Company’s work and is expected to lead the organization starting Jan. 1.
In August, software company DoubleVerify said it was seeing a “significant increase” in apps for mobile phones and smart TVs with reviews powered by generative AI. The reviews were often used to trick customers into installing apps that might hijack devices or serve ads constantly, the company said.
The following month, the Federal Trade Commission sued the company behind an AI writing tool and content generator called Rytr, accusing it of offering a service that could pollute the market with fraudulent reviews.
The FTC, which this year banned the sale or purchase of fake reviews, said some Rytr subscribers used the tool to produce hundreds or even thousands of reviews for garage door repair companies, sellers of “replica” designer handbags and other businesses.
Likely on prominent online sites, too
Max Spero, CEO of AI detection company Pangram Labs, said the software his company uses has detected with near certainty that some AI-generated reviews posted on Amazon bubbled up to the top of review search results because they were so detailed and appeared to be well thought out.
But determining what is fake and what is not can be difficult. External parties can fail because they don’t have “access to data signals that indicate patterns of abuse,” Amazon said.
Pangram Labs has done detection work for some prominent online sites, which Spero declined to name, citing nondisclosure agreements. He said his evaluations of Amazon and Yelp were conducted independently.
Many of the AI-generated reviews on Yelp appear to have been posted by people trying to accumulate enough reviews to earn an “Elite” badge, which is intended to let users know they should trust the content, Spero said.
The badge provides access to exclusive events with local business owners. Scammers also want it to make their Yelp profiles appear more realistic, said Kay Dean, a former federal criminal investigator who runs a watchdog group called Fake Review Watch.
Of course, just because a review is AI-generated doesn’t mean it’s fake. Some consumers may experiment with AI tools to generate content that reflects their true feelings. Some non-native English speakers say they are turning to AI to ensure they use accurate language in the reviews they write.
“This can help with reviews [and] make it more informative if it comes from good intentions,” said Sherry He, a marketing professor at Michigan State University who has studied fake reviews. She says tech platforms should focus on the behavior patterns of bad actors, something leading platforms are already doing, instead of discouraging legitimate users from turning to AI tools.
What companies are doing
Leading companies are developing policies on how AI-generated content fits into their systems to remove fake or abusive reviews. Some already use algorithms and investigative teams to detect and remove fake reviews, but give users some flexibility to use AI.
Spokespeople for Amazon and Trustpilot, for example, said they would allow customers to post AI-assisted reviews as long as they reflect their true experience. Yelp has taken a more cautious approach, saying its guidelines require reviewers to write their own copy.
“With the recent increase in consumer adoption of AI tools, Yelp has invested significantly in methods to better detect and mitigate this type of content on our platform,” the company said in a statement.
The Coalition for Trusted Reviews, which Amazon, Trustpilot, job review site Glassdoor and travel sites Tripadvisor, Expedia and Booking.com launched last year, said that while deceivers may put AI to illicit use, the technology also presents “an opportunity to push back against those who seek to use reviews to mislead others.”
“By sharing best practices and raising standards, including developing advanced AI detection systems, we can protect consumers and maintain the integrity of online reviews,” the group said.
The FTC’s rule banning fake reviews, which took effect in October, allows the agency to fine companies and individuals who engage in the practice. Tech companies that host such reviews are immune from this penalty because they are not legally responsible under U.S. law for content that third parties post on their platforms.
Tech companies including Amazon, Yelp and Google have sued fake review brokers who they accuse of peddling counterfeit reviews on their sites. The companies say their technology blocked or removed a large number of suspicious reviews and accounts. However, some experts say they could do more.
“Their efforts so far are far from enough,” said Dean of Fake Review Watch. “If these tech companies are so determined to eliminate review fraud on their platforms, why can I, one person working without automation, find hundreds, if not thousands, of fake reviews every day?”
Spotting fake reviews
Consumers can try to spot fake reviews by watching for a few possible warning signs, researchers say. Overly enthusiastic or negative reviews are red flags. Jargon that repeats a product’s full name or model number is another potential giveaway.
When it comes to AI, research by Balazs Kovacs, a professor of organizational behavior at Yale University, has shown that people cannot tell the difference between reviews generated by AI and those written by humans. Some AI detectors may also be fooled by shorter text, common in online reviews, the study found.
However, there are “AI tells” that online shoppers and service seekers should keep in mind. Pangram Labs says reviews written with AI are typically longer, highly structured and include “empty descriptors,” such as generic phrases and attributes. The writing also tends to include clichés like “the first thing that hit me” and “game changer.”