Abstract:
Today's mobile crowdsourcing platforms invite users to provide anonymous reviews about service experiences, yet many reviews are biased toward being extremely positive or negative. Existing methods find it difficult to learn from biased reviews to infer the actual service state, as the state itself can also be extreme and the platform cannot immediately verify the truthfulness of reviews. Further, reviewers can hide their (positive or negative) bias types and proactively adjust their anonymous reviews against the platform's inference. To the best of our knowledge, we are the first to study how to save mobile crowdsourcing from cheap talk and strategically learn from biased users' reviews. We formulate the problem as a dynamic Bayesian game, comprising users' service-type messaging and the platform's follow-up rating/inference. Our closed-form perfect Bayesian equilibrium (PBE) shows that even an extremely biased user may message honestly to convince the platform to listen to his review. Such Bayesian game-theoretic learning significantly outperforms the latest common schemes, especially when multiple diversely biased users compete. For the challenging single-user case, we further propose a time-evolving mechanism with the platform's commitment inferences to ensure the biased user's truthful messaging at all times; its performance improves as more time periods allow learning from more historical data.
Published in: IEEE Transactions on Mobile Computing ( Volume: 23, Issue: 8, August 2024)