User Modeling and User-Adapted Interaction (UMUAI) provides an interdisciplinary forum for the dissemination of new research results on interactive computer systems that can be adapted or adapt themselves to their current users, and on the role of user models in the adaptation process.
UMUAI has been published since 1991 by Kluwer Academic Publishers (now merged with Springer-Verlag).
UMUAI homepage with description of the scope of the journal and instructions for authors.
Springer UMUAI page with online access to the papers.
Latest Results for User Modeling and User-Adapted Interaction
The latest content available from Springer.
Deep shared learning and attentive domain mapping for cross-domain recommendation
Abstract
Cross-domain recommendation (CDR) is increasingly used as a viable way to address the cold-start problem. Recent CDR methods use deep models to generate latent preferences from context vectors or rating matrices and transfer these preferences between domains. However, many of these models learn latent preferences from domain-specific information alone and disregard preference patterns from the other domain, even though incorporating such patterns can yield more effective latent representations. Moreover, existing CDR models struggle to transfer mapped preferences between domains because of the large feature disparity between them. In this study, we tackle these problems and present a novel Deep Shared Learning and Attentive Domain Mapping (DSAM) approach for CDR. Specifically, we propose a variant of Long Short-Term Memory (LSTM), called shared learning LSTM, which learns cross-domain preference patterns alongside domain-specific user/item embeddings derived from textual reviews to dynamically generate shared contextual representations in each domain. We further exploit a multi-head self-attentive network to match item-specific knowledge from the source and target domains in different subspaces. We aggregate this learned knowledge to predict rating scores for cold-start users in the target domain and optimize the framework efficiently in an end-to-end fashion. Experimental results on five real-world datasets demonstrate the effectiveness of our approach against several groups of recommendation models. Additionally, we provide insights into the model architecture and its robustness in handling cold-start users.
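The architecture described in the abstract can be pictured with a minimal sketch in PyTorch. This is an illustration of the general idea only, not the authors' implementation: all layer sizes and names (DSAMSketch, vocab_size, hidden, heads) are hypothetical. A single LSTM shared by both domains encodes reviews from either side, a multi-head attention layer maps source-domain item knowledge toward the target-domain context, and a small MLP predicts the rating.

```python
# Minimal sketch of a DSAM-style model (hypothetical shapes/names,
# not the authors' implementation).
import torch
import torch.nn as nn

class DSAMSketch(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=64, hidden=64, heads=4):
        super().__init__()
        # Domain-specific embeddings for tokens of user/item reviews.
        self.src_embed = nn.Embedding(vocab_size, emb_dim)
        self.tgt_embed = nn.Embedding(vocab_size, emb_dim)
        # One LSTM shared by both domains ("shared learning"): it sees
        # sequences from either domain, so its weights capture
        # cross-domain preference patterns.
        self.shared_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        # Multi-head attention used to map item-specific knowledge
        # between domains in several subspaces.
        self.mapper = nn.MultiheadAttention(hidden, heads, batch_first=True)
        # Rating predictor over the aggregated representations.
        self.predict = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def encode(self, tokens, embed):
        out, _ = self.shared_lstm(embed(tokens))   # (B, T, H)
        return out.mean(dim=1)                     # simple mean pooling

    def forward(self, src_reviews, tgt_reviews):
        src = self.encode(src_reviews, self.src_embed)   # (B, H)
        tgt = self.encode(tgt_reviews, self.tgt_embed)   # (B, H)
        # Attend from the target-domain context over source-domain knowledge.
        mapped, _ = self.mapper(tgt.unsqueeze(1), src.unsqueeze(1),
                                src.unsqueeze(1))
        mapped = mapped.squeeze(1)
        return self.predict(torch.cat([tgt, mapped], dim=-1)).squeeze(-1)

model = DSAMSketch()
src = torch.randint(0, 10000, (8, 20))   # toy source-domain review tokens
tgt = torch.randint(0, 10000, (8, 20))   # toy target-domain review tokens
print(model(src, tgt).shape)             # torch.Size([8]) predicted ratings
```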
Recommender systems based on neuro-symbolic knowledge graph embeddings encoding first-order logic rules
Abstract
In this paper, we present a knowledge-aware recommendation model based on neuro-symbolic graph embeddings that encode first-order logic rules. Our approach follows the intuition behind neuro-symbolic AI systems: combine deep learning and symbolic reasoning in a single model to take the best of both paradigms. To this end, we start from a knowledge graph (KG) encoding information about users, ratings, and descriptive properties of the items, and we design a model that combines background knowledge encoded in logical rules mined from the KG with the explicit knowledge encoded in its triples to obtain a more precise representation of users and items. Specifically, our model combines: (i) a rule learner that extracts first-order logic rules from the information encoded in the knowledge graph; (ii) a graph embedding module that jointly learns a vector-space representation of users and items from the triples in the knowledge graph and the previously extracted rules; (iii) a recommendation module that feeds the embeddings to a deep learning architecture providing users with top-k recommendations. In the experimental section, we evaluate the effectiveness of our strategy on three datasets; the results show that combining knowledge graph embeddings with first-order logic rules improves both the predictive accuracy and the novelty of the recommendations. Moreover, our approach outperforms several competitive baselines, confirming the validity of our intuitions.
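As a rough illustration of how mined rules and KG triples can work together (a toy example under assumed relation names, not the authors' pipeline), a rule such as likes(U, X) ∧ directed_by(X, D) ∧ directed_by(Y, D) → likes(U, Y) can be grounded over the KG so that its inferred triples are added to the explicit ones before the embeddings are learned:

```python
# Toy sketch: grounding one mined first-order rule over a tiny KG to add
# inferred triples before embedding training (hypothetical KG and rule,
# not the authors' pipeline).

kg = {
    ("alice", "likes", "inception"),
    ("inception", "directed_by", "nolan"),
    ("tenet", "directed_by", "nolan"),
}

def ground_rule(kg):
    """likes(U, X) & directed_by(X, D) & directed_by(Y, D) -> likes(U, Y)."""
    inferred = set()
    directed = {}                              # director -> items they directed
    for h, r, t in kg:
        if r == "directed_by":
            directed.setdefault(t, set()).add(h)
    for h, r, t in kg:
        if r == "likes":
            for items in directed.values():
                if t in items:
                    for y in items - {t}:      # other items by the same director
                        inferred.add((h, "likes", y))
    return inferred

augmented_kg = kg | ground_rule(kg)
# The inferred triples, e.g. ("alice", "likes", "tenet"), are then fed together
# with the original ones to a KG-embedding model, and the learned user/item
# vectors feed the top-k recommendation network.
print(ground_rule(kg))
```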
Investigating meta-intents: user interaction preferences in conversational recommender systems
Abstract
We propose the concept of meta-intents (MI), which represent high-level user preferences related to interaction styles and decision-making support in conversational recommender systems (CRS). To determine meta-intent factors, we conduct an exploratory study with 212 participants and a confirmatory study with 394 participants, from which we obtain a reliable and stable MI questionnaire with 22 items corresponding to seven factors covering important interaction preferences. We find that MI can be linked to users’ general decision-making style and can thus be instrumental in translating general psychological user characteristics into more concrete design guidance for CRS. We further explore the correlations between MI and user interactions in real CRS scenarios. For this purpose, we propose a CRS framework and implement a chatbot in the smartphone domain to collect real interaction data, conducting an online study with 99 participants and a laboratory interview study with 19 participants. Regarding the impact of MI on interaction behavior, we observe that dialog initiation, efficiency orientation, and interest in details have a significant and direct impact. Based on these findings, we provide heuristic suggestions for leveraging MI in the design and adaptation of CRS. Our studies show the usefulness of the meta-intents concept for bridging the gap between general user characteristics and the concrete design of CRS and indicate its potential for personalizing the interaction in real-time conversations.
Preface to the special issue on news personalization and analytics
An explainable content-based approach for recommender systems: a case study in journal recommendation for paper submission
Abstract
Explainable artificial intelligence is becoming increasingly important because it enables users to understand, and consequently trust, system output. In the field of recommender systems, explanation is necessary not only for such understanding and trust but also because users who understand why the system makes certain suggestions are more likely to consume the recommended product. This paper proposes a novel approach for explaining content-based recommender systems, focusing specifically on publication venue recommendation: the authors of a new research paper receive recommendations of possible journals (or other publication venues) to which they could submit their article, based on content similarity, while the recommender system simultaneously explains its decisions. The proposed explanation ecosystem is built on various elements that support the explanation (topics, related articles, relevant terms, etc.) and is fully integrated with the underlying recommendation model. The method is evaluated through a user study in the biomedical field assessing transparency, satisfaction, trust, and scrutability. The results suggest that the proposed approach is effective and useful for explaining the output of the recommender system to users.
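A minimal sketch of the general recipe described here, content similarity for ranking venues plus shared terms as an explanation, is shown below using TF-IDF and scikit-learn. The corpus, journal names, and term-based explanation are toy assumptions for illustration, not the system evaluated in the paper.

```python
# Minimal sketch of a content-based venue recommender with term-level
# explanations (toy corpus, TF-IDF cosine similarity; hypothetical, not
# the system evaluated in the paper).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

journals = {
    "J Biomed Informatics": "electronic health records clinical text mining",
    "Bioinformatics": "genome sequence alignment protein structure",
    "UMUAI": "personalization user model adaptive recommendation",
}
paper = "mining clinical notes from electronic health records"

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(list(journals.values()) + [paper])
journal_vecs, paper_vec = tfidf[:-1], tfidf[-1]

# Rank venues by cosine similarity to the submitted paper.
scores = cosine_similarity(paper_vec, journal_vecs).ravel()
best = int(np.argmax(scores))
name = list(journals)[best]

# Explanation: the terms that contribute most to the matching score.
terms = np.array(vectorizer.get_feature_names_out())
contrib = journal_vecs[best].toarray().ravel() * paper_vec.toarray().ravel()
top_terms = terms[np.argsort(contrib)[::-1][:3]]

print(f"Recommended venue: {name} (similarity {scores[best]:.2f})")
print("Because your paper shares the terms:", ", ".join(top_terms))
```

In the same spirit as the paper's explanation ecosystem, the explanation is derived directly from the recommendation model itself (here, the terms driving the similarity score) rather than generated post hoc by a separate component.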