Counterfactual Explanations for Sequential Recommender Systems
TLDR: This project explores counterfactual explanations in Sequential Recommender Systems (SRSs). It investigates how minimal changes in user interaction histories influence recommendations. The study introduces a genetic algorithm tailored for discrete sequences and an automata learning method for interpretability. Evaluations on MovieLens datasets confirm the effectiveness of these methods in providing actionable counterfactuals while preserving model fidelity.
Master Thesis (PDF) • GitHub Repository
Introduction
Sequential Recommender Systems (SRSs) excel at capturing evolving user preferences but often lack interpretability. This project addresses that gap with counterfactual explanations: insights into the minimal changes to a user's interaction history that would alter the recommendation outcome.
Methodology
Two approaches were proposed:
- Genetic Algorithm (GA): Tailored to discrete item sequences, the GA searches for minimally edited interaction histories that change the model's output (see the sketch after this list).
- Automata Learning (AL): Constructs an interpretable surrogate model of the SRS, providing a transparent view of its decision-making (a simplified surrogate is also sketched below).
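As a concrete illustration, here is a minimal sketch of a genetic search for counterfactual sequences. It treats the SRS as a black box behind a hypothetical `recommend(history)` callable returning the top item, and scores candidates by edit distance plus a penalty for not changing the recommendation; the operators and fitness terms used in the thesis may differ.

```python
import random

def edit_distance(a, b):
    """Levenshtein distance between two item sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def mutate(seq, catalog):
    """Apply one random edit: replace, delete, or insert an item."""
    seq = list(seq)
    op = random.choice(["replace", "delete", "insert"])
    if op == "replace" and seq:
        seq[random.randrange(len(seq))] = random.choice(catalog)
    elif op == "delete" and len(seq) > 1:
        del seq[random.randrange(len(seq))]
    else:
        seq.insert(random.randrange(len(seq) + 1), random.choice(catalog))
    return seq

def counterfactual_ga(history, recommend, catalog, pop_size=50, generations=100):
    """Search for a minimally edited history that changes the recommendation."""
    original = recommend(history)

    def fitness(seq):
        # Valid counterfactuals are ranked by how few edits they need;
        # candidates that do not change the output are heavily penalized.
        changed = recommend(seq) != original
        return edit_distance(history, seq) + (0 if changed else 1000)

    population = [mutate(history, catalog) for _ in range(pop_size)]
    for _ in range(generations):
        # For a sketch we simply re-rank the whole population each generation.
        population.sort(key=fitness)
        parents = population[: pop_size // 2]
        population = parents + [mutate(random.choice(parents), catalog)
                                for _ in range(pop_size - len(parents))]
    best = min(population, key=fitness)
    return best if recommend(best) != original else None
```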
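For the automata learning side, the sketch below captures the spirit of an interpretable surrogate: an automaton whose states are the last k items of a history, labeled with the recommendations the black-box SRS was observed to produce there. The class name, `k`, and trace format are illustrative; this is a deliberately simplified stand-in, not the learning algorithm used in the thesis.

```python
from collections import Counter, defaultdict

class PrefixTreeSurrogate:
    """Toy interpretable surrogate: states are the last k items of a
    history, labeled with the recommendations observed from that state."""

    def __init__(self, k=3):
        self.k = k
        self.labels = defaultdict(Counter)    # state -> Counter of recommendations
        self.transitions = defaultdict(dict)  # state -> {item: next_state}

    def _state(self, history):
        return tuple(history[-self.k:])

    def fit(self, traces):
        """traces: iterable of (history, recommendation) pairs obtained
        by querying the SRS on observed user histories."""
        for history, rec in traces:
            state = ()
            for item in history:
                nxt = self._state(list(state) + [item])
                self.transitions[state][item] = nxt  # explicit, inspectable edges
                state = nxt
            self.labels[state][rec] += 1

    def predict(self, history):
        """Return the majority recommendation recorded at the state the
        history leads to, or None for unseen states."""
        counts = self.labels.get(self._state(history))
        return counts.most_common(1)[0][0] if counts else None
```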
Experiments were conducted in four settings, obtained by crossing two dimensions:
- Targeted vs. Untargeted explanations (a validity check for both variants is sketched after this list)
- Categorized vs. Uncategorized recommendations
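To make the first distinction concrete, the validity condition of a counterfactual differs between the two modes. The sketch below (hypothetical `recommend` callable and function name) captures both: untargeted explanations only need to change the top recommendation, while targeted ones must steer it to a chosen item.

```python
def is_valid_counterfactual(recommend, original_history, edited_history, target=None):
    """Validity check for both experiment settings.

    Untargeted (target is None): the top recommendation merely has to change.
    Targeted: the edited history must make the SRS recommend `target`.
    """
    new_rec = recommend(edited_history)
    if target is None:
        return new_rec != recommend(original_history)
    return new_rec == target
```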
Experimental Results
The methods were tested on the MovieLens 100K and 1M datasets. Results demonstrated:
- The ability to generate minimal, actionable counterfactuals, i.e., requiring only a few edits to the original history.
- High fidelity of the explanations to the behavior of the underlying model (simple versions of both metrics are sketched below).
- Improved interpretability of sequential recommendation decisions.
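Minimality and fidelity can be quantified with simple proxies: edit distance for sparsity, and surrogate-vs-SRS agreement for fidelity. The sketch below reuses `edit_distance` and the surrogate from the earlier snippets; the thesis may define these metrics differently.

```python
def sparsity(original, counterfactual):
    """Edits separating the counterfactual from the original history
    (lower = more minimal); reuses edit_distance from the GA sketch."""
    return edit_distance(original, counterfactual)

def fidelity(surrogate, recommend, test_histories):
    """Fraction of held-out histories on which the surrogate's prediction
    agrees with the black-box SRS recommendation."""
    agree = sum(surrogate.predict(h) == recommend(h) for h in test_histories)
    return agree / len(test_histories)
```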
Counterfactual Examples
| Original History | Counterfactual Change | New Recommendation |
|---|---|---|
| [Movie A, Movie B, Movie C] | Remove Movie B | Movie D instead of Movie E |
| [Movie X, Movie Y, Movie Z] | Add Movie W | Movie V replaces Movie U |
Conclusion
This research contributes to Explainable AI by providing a framework for interpreting SRS decisions through counterfactual reasoning. By understanding “what-if” scenarios, users can gain insights into recommendation logic, fostering trust and transparency.
Future work includes expanding to other domains and refining explanation granularity.
This project was developed as part of my Master's Thesis at the University of Rome "La Sapienza", completed in March 2025 and earning a final grade of 110/110 with honors.