When you try a new restaurant or book a hotel, do you consult the online reviews? Do you submit online reviews yourself? Do you pay attention to whether they are filtered and moderated? Does that affect your own review submissions?
A research team comprising Rensselaer Polytechnic Institute’s T. Ravichandran, Ph.D., professor in the Lally School of Management; Jason Kuruzovich, Ph.D., associate professor in the Lally School of Management; and Lianlian Jiang, Ph.D., assistant professor in the Bauer College of Business at the University of Houston, examined these questions in recently published research. In a world where businesses thrive or die by online reviews, it is important to consider the implications of a platform’s review moderation policies, the transparency of those policies, and how that affects the reviews that are submitted.
“In 2010, Yelp debuted a video to help users understand how its review filter works and why it was necessary,” said Jiang. “Then, Yelp added a section to display filtered reviews. Previously, Yelp had not disclosed information about its review filter. This change presented the perfect opportunity to examine the effect of policy transparency on submitted reviews.”
Ravichandran and team used a difference-in-differences (DID) approach to compare reviews of over 1,000 restaurants on Yelp with reviews of those same restaurants on TripAdvisor, whose practices remained unchanged and which was not transparent about its review filter. They found that the number of reviews submitted to Yelp decreased. Those that were submitted were more negative and shorter in length compared with TripAdvisor. Also, the more positive a review, the shorter it was.
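The paper’s exact specification and data are not given here, but the logic of a difference-in-differences estimate can be illustrated with a short sketch: regress a review outcome on a treatment indicator (Yelp, the platform that changed its policy), a period indicator (before versus after the disclosure), and their interaction, whose coefficient captures the effect. The column names and synthetic data below are purely hypothetical.

```python
# Minimal difference-in-differences (DID) sketch; NOT the study's actual
# specification or data. "treated" marks Yelp (the platform that disclosed
# its filter) vs. the TripAdvisor control; "post" marks the period after
# the 2010 disclosure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # 1 = Yelp, 0 = TripAdvisor
    "post": rng.integers(0, 2, n),     # 1 = after the filter disclosure
})
# Synthetic outcome: review length drops on Yelp after the policy change.
df["review_length"] = (
    200
    - 30 * df["treated"] * df["post"]  # the DID effect of interest
    + rng.normal(0, 20, n)
)

# The coefficient on treated:post estimates the effect of transparency,
# net of platform-level and period-level differences.
model = smf.ols("review_length ~ treated * post", data=df).fit()
print(model.summary().tables[1])
```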
“Platforms are pressured to have content guidelines and take measures to prevent fraud and ensure that reviews are legitimate and helpful,” said Ravichandran. “However, most platforms are not transparent about their policies, leading consumers to suspect that reviews are manipulated to increase profit under the guise of filtering fraudulent content.”
Platforms use sophisticated software to flag and filter reviews. Once a review is flagged, it is filtered out and not displayed, and it is not factored into the overall rating for a business.
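In essence, the displayed rating is an average over unflagged reviews only. The sketch below illustrates that mechanic with hypothetical field names; real platforms apply far more elaborate flagging signals than a simple boolean.

```python
# Toy illustration of the filtering mechanic described above: flagged
# reviews are hidden and excluded from the overall rating. Field names
# ("stars", "flagged") are hypothetical.
from statistics import mean

reviews = [
    {"stars": 5, "flagged": True},   # flagged: hidden, not counted
    {"stars": 4, "flagged": False},
    {"stars": 2, "flagged": False},
]

visible = [r for r in reviews if not r["flagged"]]
overall_rating = mean(r["stars"] for r in visible)
print(f"Displayed reviews: {len(visible)}, overall rating: {overall_rating:.1f}")
```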
“Whether or not to be transparent about review filters is a critical decision for platforms, with many considerations,” said Kuruzovich.
Users may put less time and effort into their reviews if they believe the reviews have a high probability of being filtered, or they may do the opposite to make their reviews less likely to be filtered. Since most fake reviews are overly positive, users may assume that positive reviews are the most likely to be filtered and act accordingly. However, with a transparent policy, those who submit fake reviews may be incentivized to change their ways.
“Review moderation transparency comes at a cost for platforms,” said Ravichandran. “Users reduce their contribution investment, or the amount of time and effort that they put into their reviews. This, in turn, affects the quality and characteristics of reviews. Although transparency helps to position a platform as unbiased toward advertisers, the resulting decrease in the number of reviews submitted reduces the platform’s usefulness to consumers.”
“This research informs businesses on best practices and consumer behavior in the digital world,” said Chanaka Edirisinghe, Ph.D., acting dean of the Lally School of Management. “Online reviews present great opportunity for businesses, but also raise complex questions. Platforms must earn the trust of users without sacrificing engagement.”