User Modeling and User-Adapted Interaction (UMUAI) provides an interdisciplinary forum for the dissemination of new research results on interactive computer systems that can be adapted or adapt themselves to their current users, and on the role of user models in the adaptation process.
UMUAI has been published since 1991 by Kluwer Academic Publishers (now merged with Springer-Verlag).
UMUAI homepage with description of the scope of the journal and instructions for authors.
Springer UMUAI page with online access to the papers.
Latest Results for User Modeling and User-Adapted Interaction
The latest content available from Springer:
- A contrastive news recommendation framework based on curriculum learning
Abstract
News recommendation is an intelligent technology that aims to provide users with news content matching their preferences and interests. Nevertheless, current methodologies exhibit significant limitations. Traditional models often rely on simple random negative sampling for training, an approach that insufficiently captures the patterns and preferences underlying users’ clicking behavior, thereby undermining the model’s effectiveness. Furthermore, these systems often suffer from insufficient user-interest modeling because of the limited nature of user interactions. Considering these challenges, this paper presents a contrastive news recommendation framework based on curriculum learning (CNRCL). Specifically, we relate the negative sampling process to users’ interests and employ curriculum learning to guide the negative sampling procedure. To address the issue of insufficient user-interest modeling, we propose to use contrastive learning to bring the user representation closer to news that is similar to the candidate news, thus enhancing the model’s accuracy in predicting user interests and compensating for limited click behavior. Extensive experimental results on the MIND dataset verify the effectiveness of the model and show that it improves the performance of news recommendation. Our code can be obtained from https://github.com/IIP-Lab-2024/CNRCL.
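As a rough illustration of the general idea (this is not the authors’ CNRCL code; all function names, shapes, and hyper-parameters below are hypothetical), the following Python sketch combines curriculum-guided negative sampling, which shifts from easy to hard negatives as training progresses, with an InfoNCE-style contrastive loss that pulls the user representation toward the positive news item:

```python
# Hypothetical sketch of curriculum-guided negative sampling plus an
# InfoNCE-style contrastive loss; illustrative only, not the CNRCL implementation.
import torch
import torch.nn.functional as F

def sample_negatives(user_vec, neg_pool, step, total_steps, k=4):
    """Pick k negatives, moving from easy (dissimilar) to hard (similar)
    to the user's interest vector as training progresses."""
    sims = neg_pool @ user_vec                        # (N,) similarity scores
    order = torch.argsort(sims)                       # easy -> hard
    # curriculum: the sampling window slides toward the hard end of the ranking
    start = int((len(order) - k) * step / max(total_steps, 1))
    idx = order[start:start + k]
    return neg_pool[idx]                              # (k, d)

def contrastive_loss(user_vec, pos_news, neg_news, tau=0.1):
    """InfoNCE: pull the user toward the positive news, push away the negatives."""
    pos = (user_vec * pos_news).sum(-1, keepdim=True) / tau    # (1,)
    neg = neg_news @ user_vec / tau                            # (k,)
    logits = torch.cat([pos, neg]).unsqueeze(0)                # (1, 1+k)
    target = torch.zeros(1, dtype=torch.long)                  # positive sits at index 0
    return F.cross_entropy(logits, target)

# toy usage with random embeddings
d = 64
user = F.normalize(torch.randn(d), dim=0)
pos = F.normalize(torch.randn(d), dim=0)
pool = F.normalize(torch.randn(500, d), dim=1)
negs = sample_negatives(user, pool, step=10, total_steps=100)
print(contrastive_loss(user, pos, negs).item())
```

In this toy version the "difficulty" of a negative is simply its similarity to the user vector; the paper ties the schedule to users’ interests, but the sliding-window schedule above is only a placeholder.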
- A Bayesian framework for learning proactive robot behaviour in assistive tasks
Abstract
Socially assistive robots represent a promising tool in assistive contexts for improving people’s quality of life and well-being through social, emotional, cognitive, and physical support. However, the effectiveness of interactions heavily relies on the robots’ ability to adapt to the needs of the assisted individuals and to offer support proactively, before it is explicitly requested. Previous work has primarily focused on defining the actions the robot should perform, rather than considering when to act and how confident it should be in a given situation. To address this gap, this paper introduces a new data-driven framework built around a two-phase learning pipeline whose ultimate goal is to train an algorithm based on Influence Diagrams. The proposed assistance scenario involves a sequential memory game, in which the robot autonomously learns what assistance to provide, when to intervene, and with what confidence to take control. The results of a user study showed that the proactive behaviour of the robot had a positive impact on the users’ game performance: users obtained higher scores, made fewer mistakes, and requested less assistance from the robot. The study also highlighted the robot’s ability to provide assistance tailored to users’ specific needs and to anticipate their requests.
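To make the decision-theoretic idea concrete, here is a minimal Python sketch of an expected-utility choice in the spirit of an influence diagram; the action set, belief, and utility values are invented placeholders, not the paper’s framework, pipeline, or data:

```python
# Illustrative expected-utility decision: the robot chooses among
# {do_nothing, suggest, take_control} given its belief that the user needs help.
# All numbers are made-up placeholders.
import numpy as np

ACTIONS = ["do_nothing", "suggest", "take_control"]

# UTILITY[action][user_state], with user_state in {no_help_needed, help_needed}
UTILITY = np.array([
    [ 1.0, -2.0],   # do_nothing: fine if no help is needed, costly otherwise
    [ 0.5,  1.5],   # suggest: mild cost or benefit either way
    [-1.0,  2.0],   # take_control: intrusive unless help is really needed
])

def choose_action(p_needs_help):
    """Pick the action maximizing expected utility under the current belief."""
    belief = np.array([1.0 - p_needs_help, p_needs_help])
    expected = UTILITY @ belief                      # (3,) expected utilities
    return ACTIONS[int(np.argmax(expected))], expected

# example: as the user makes more mistakes, the belief that help is needed rises
for p in (0.1, 0.5, 0.9):
    action, eu = choose_action(p)
    print(f"P(needs help)={p:.1f} -> {action} (EU={eu.round(2)})")
```

The point of the sketch is only the shape of the decision: the belief over the user’s state is updated from observations, and the action (including "do nothing") is the one with the highest expected utility, which is how proactive intervention and its confidence can be traded off.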
- Correction: Twenty-Five Years of Bayesian knowledge tracing: a systematic review
- Preface to the special issue on conversational recommender systems: theory, models, evaluations, and trends
- Recommender systems based on neuro-symbolic knowledge graph embeddings encoding first-order logic rules
Abstract
In this paper, we present a knowledge-aware recommendation model based on neuro-symbolic graph embeddings that encode first-order logic rules. Our approach builds on the intuition underlying neuro-symbolic AI systems: combining deep learning and symbolic reasoning in a single model, in order to take the best of both paradigms. To this end, we start from a knowledge graph (KG) encoding information about users, ratings, and descriptive properties of the items, and we design a model that combines background knowledge, encoded in logical rules mined from the KG, with explicit knowledge, encoded in the triples of the KG itself, to obtain a more precise representation of users and items. Specifically, our model is based on the combination of: (i) a rule learner that extracts first-order logic rules from the information encoded in the knowledge graph; (ii) a graph embedding module that jointly learns a vector-space representation of users and items from the triples encoded in the knowledge graph and the rules previously extracted; (iii) a recommendation module that feeds the embeddings into a deep learning architecture providing users with top-k recommendations. In the experimental section, we evaluate the effectiveness of our strategy on three datasets, and the results show that the combination of knowledge graph embeddings and first-order logic rules leads to an improvement in both the predictive accuracy and the novelty of the recommendations. Moreover, our approach outperforms several competitive baselines, thus confirming the validity of our intuitions.
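As a rough illustration of how mined rules can be injected into graph embeddings (a generic TransE-style sketch with a made-up rule, not the model proposed in the paper), the snippet below scores KG triples and adds a soft penalty whenever a grounding of the rule likes(u, i) ← likes(u, j) ∧ similar_to(j, i) has a high-scoring body but a low-scoring head:

```python
# Minimal sketch (not the paper's model): TransE-style triple scoring plus a soft
# penalty for groundings that violate a mined first-order rule.
# Entity/relation names and the rule itself are illustrative assumptions.
import torch
import torch.nn as nn

class NeuroSymbolicKGE(nn.Module):
    def __init__(self, n_entities, n_relations, dim=32):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def score(self, h, r, t):
        """Higher is better: negative TransE distance ||h + r - t||."""
        return -(self.ent(h) + self.rel(r) - self.ent(t)).norm(dim=-1)

    def rule_penalty(self, groundings):
        """Soft logic: if both body atoms score high, the head triple should too."""
        loss = 0.0
        for (h1, r1, t1), (h2, r2, t2), (hh, rh, th) in groundings:
            body = torch.sigmoid(self.score(h1, r1, t1)) * \
                   torch.sigmoid(self.score(h2, r2, t2))
            head = torch.sigmoid(self.score(hh, rh, th))
            loss = loss + torch.relu(body - head)    # penalize body > head
        return loss

# toy usage: entities {user u, items i, j}, relations {likes=0, similar_to=1}
model = NeuroSymbolicKGE(n_entities=4, n_relations=2)
u, i, j = torch.tensor(0), torch.tensor(1), torch.tensor(2)
likes, similar_to = torch.tensor(0), torch.tensor(1)
grounding = [((u, likes, j), (j, similar_to, i), (u, likes, i))]
total = -model.score(u, likes, j) + model.rule_penalty(grounding)
print(total.item())
```

The sketch is only meant to show the division of labour the abstract describes: a triple-scoring embedding learned from the KG, plus an extra loss term that nudges the embedding space toward consistency with the extracted rules before the embeddings are handed to a downstream recommender.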