Abstract
Personalized recommendation is ubiquitous: it powers many online services, including e-commerce, advertising, and social media applications. Learning unknown user preferences from user-provided data lies at the core of modern collaborative filtering recommender systems. However, malicious attackers have an incentive to inject poisoned data that manipulates the learned preferences, which could in turn affect business decision making. While previous works have proposed a number of defense methods against such poisoning attacks that succeed in other machine learning (ML) tasks, few are effective for collaborative filtering (CF). We therefore present a new defense scheme called poison-tolerant collaborative filtering (PTCF), which is highly robust against poisoning attacks on collaborative filtering. Unlike defenses that remove outliers or search for a minimum-loss subset, the PTCF scheme enables collaborative filtering on an attacked training dataset while guaranteeing the system's availability and integrity. We evaluate the PTCF scheme extensively on a public dataset (Jester) and two real-world datasets (Movie and E-Shopping), and demonstrate that it is significantly effective in providing robustness.
Original language | English
---|---
Pages (from-to) | 4589-4599
Number of pages | 11
Journal | IEEE Transactions on Dependable and Secure Computing
Volume | 21
Issue number | 5
DOIs |
Publication status | Published - 16 Jan 2024
Bibliographical note
Publisher Copyright: © 2004-2012 IEEE.
Keywords
- Collaborative filtering
- Data models
- Optimization
- Recommender systems
- Sparse matrices
- Task analysis
- Training
- poisoning attacks
- recommender system
- supervised learning