Representation and querying of unfair evaluations in social rating systems

Abstract

Social rating systems are subject to unfair evaluations: users may try, individually or collaboratively, to promote or demote a product. Detecting unfair evaluations, particularly massive collusive attacks as well as honest-looking intelligent attacks, remains a real challenge for collusion detection systems. In this paper, we study the impact of unfair evaluations on online rating systems. First, we study individual unfair evaluations and their impact on the reputation scores that social rating systems compute. We then propose a method for detecting collaborative unfair evaluations, also known as collusion. The proposed model uses a frequent itemset mining technique to detect candidate collusion groups and sub-groups. We use several indicators to identify collusion groups and to estimate how destructive such groups can be. The approaches presented in this paper have been implemented in prototype tools and experimentally validated on synthetic and real-world datasets.
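The idea of using frequent itemset mining to surface candidate collusion groups can be illustrated with a minimal sketch. The data, thresholds, and naive Apriori-style enumeration below are illustrative assumptions, not the paper's actual algorithm: raters who co-rate at least `min_support` products are flagged as candidate groups for further inspection by the collusion indicators.

```python
from itertools import combinations

# Hypothetical data: each product maps to the set of raters who scored it.
# Raters who repeatedly appear together form candidate collusion groups.
ratings_by_product = {
    "p1": {"u1", "u2", "u3"},
    "p2": {"u1", "u2", "u3"},
    "p3": {"u1", "u2", "u4"},
    "p4": {"u1", "u2", "u3"},
    "p5": {"u5"},
}

def candidate_groups(transactions, min_support, max_size=3):
    """Naive frequent-itemset miner: return rater groups (size >= 2)
    that co-rate at least `min_support` products, with their support."""
    raters = sorted(set().union(*transactions.values()))
    frequent = {}
    for size in range(2, max_size + 1):
        for group in combinations(raters, size):
            support = sum(1 for rs in transactions.values()
                          if set(group) <= rs)
            if support >= min_support:
                frequent[group] = support
    return frequent

groups = candidate_groups(ratings_by_product, min_support=3)
# {("u1","u2"): 4, ("u1","u3"): 3, ("u2","u3"): 3, ("u1","u2","u3"): 3}
```

A real miner would use Apriori pruning (any superset of an infrequent group is infrequent) rather than exhaustive enumeration, and would then apply the fairness indicators to each candidate group rather than treating high support alone as evidence of collusion.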

Keywords

Unfair evaluation; Reputation; Degree of fairness; Collusion; Biclique; Rating system

Date of this Version

March 2014

DOI

10.1016/j.cose.2013.09.008

Comments

Elsevier, 8th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing, Computers & Security, Volume 41, March 2014, Pages 68–88
