With the increasing demand for predictable and accountable Artificial Intelligence, the ability to explain or justify recommender system results by specifying how items are suggested, or why they are relevant, has become a primary goal. However, current models do not explicitly represent the services and actors that the user might encounter during the overall interaction with an item, from its selection to its usage. Thus, they cannot assess the impact of these services and actors on the user's experience. To address this issue, we propose a novel justification approach that uses service models to (i) extract experience data from reviews covering all the stages of interaction with items, at different levels of granularity, and (ii) organize the justification of recommendations around those stages. In a user study, we compared our approach with baselines reflecting the state of the art in the justification of recommender system results. The participants rated the Perceived User Awareness Support provided by our service-based justification models higher than that offered by the baselines. Moreover, our models received higher Interface Adequacy and Satisfaction evaluations from users with different levels of Curiosity or a low Need for Cognition (NfC). In contrast, participants with a high NfC preferred a direct inspection of item reviews. These findings encourage the adoption of service models to justify recommender system results, but also suggest investigating personalization strategies to suit diverse interaction needs.