User Modeling and User-Adapted Interaction (UMUAI) provides an interdisciplinary forum for the dissemination of new research results on interactive computer systems that can be adapted or adapt themselves to their current users, and on the role of user models in the adaptation process.

UMUAI has been published since 1991, originally by Kluwer Academic Publishers (now part of Springer-Verlag).

UMUAI homepage with description of the scope of the journal and instructions for authors.

Springer UMUAI page with online access to the papers.

Latest Results for User Modeling and User-Adapted Interaction

15 April 2024

The latest content available from Springer
  • Personalized recommendations for learning activities in online environments: a modular rule-based approach


    Personalization in online learning environments has been extensively studied at various levels, ranging from adaptive hints during task-solving to recommending whole courses. In this study, we focus on recommending learning activities (sequences of homogeneous tasks). We argue that this is an important yet insufficiently explored area, particularly when considering the requirements of large-scale online learning environments used in practice. To address this gap, we propose a modular rule-based framework for recommendations and thoroughly explain the rationale behind the proposal. We also discuss a specific application of the framework.
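The modular rule-based idea described in this abstract can be illustrated with a toy sketch: independent rule modules each score candidate learning activities for a learner, and a combiner picks the top-scoring activity. All names here (`Rule`, `recommend`, the example rules) are illustrative assumptions, not the paper's actual framework.

```python
# Toy sketch of a modular rule-based recommender for learning activities.
# Each rule module scores (learner, activity) pairs; scores are summed.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Learner:
    mastered: set = field(default_factory=set)   # skills already mastered
    recent: list = field(default_factory=list)   # recently attempted activities

Rule = Callable[[Learner, str], float]  # (learner, activity) -> score contribution

def prerequisite_rule(prereqs: dict) -> Rule:
    """Penalize activities whose prerequisites are not yet mastered."""
    def rule(learner: Learner, activity: str) -> float:
        missing = prereqs.get(activity, set()) - learner.mastered
        return -10.0 * len(missing)
    return rule

def novelty_rule(learner: Learner, activity: str) -> float:
    """Prefer activities the learner has not attempted recently."""
    return -1.0 if activity in learner.recent else 1.0

def recommend(learner: Learner, candidates: list, rules: list) -> str:
    """Return the candidate activity with the highest combined rule score."""
    return max(candidates, key=lambda a: sum(r(learner, a) for r in rules))

learner = Learner(mastered={"fractions"}, recent=["fractions-drill"])
rules = [prerequisite_rule({"equations-intro": {"fractions"},
                            "calculus-basics": {"equations"}}),
         novelty_rule]
print(recommend(learner, ["fractions-drill", "equations-intro", "calculus-basics"], rules))
```

The modularity is the point: each pedagogical concern (prerequisites, novelty, difficulty, …) lives in its own rule, so rules can be added, removed, or reweighted without touching the rest of the recommender.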

  • Modeling of anticipation using instance-based learning: application to automation surprise in aviation using passive BCI and eye-tracking data


    Human-centered artificial intelligence (HCAI) needs to be able to adapt to anticipated user behavior. We argue that the anticipation capabilities required for HCAI adaptation can be modeled best with the help of a cognitive architecture. This paper introduces an ACT-R cognitive model that uses instance-based learning to observe and learn situations and actions in the form of mental models. These mental models enable the anticipation of the behavior of individual users. The model is applied to a use case of automation surprise in commercial aviation to test how anticipation can best be modeled for cockpit applications. Empirical data from a flight simulator study including behavioral, neurophysiological and eye-tracking measures from 13 pilots were used to evaluate the model. Results show that the accuracy of the model is significantly higher than chance, demonstrating that combining context information, user state data and a cognitive model can enable HCAI adaptation based on anticipated user behavior.
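The core of instance-based learning, as used in the paper's ACT-R model, can be sketched in a few lines: the model stores observed (situation, action) instances and anticipates the next action by similarity-weighted voting over memory. This is a deliberate simplification of ACT-R's activation and blending mechanisms; all names and the example situation features (surprise, workload) are illustrative assumptions.

```python
# Toy sketch of instance-based learning for anticipating a user's next action.
import math
from collections import defaultdict

class InstanceMemory:
    def __init__(self):
        self.instances = []  # list of (situation_vector, action)

    def observe(self, situation, action):
        """Store one observed situation-action instance."""
        self.instances.append((situation, action))

    def anticipate(self, situation, tau=1.0):
        """Predict the most likely next action by similarity-weighted voting."""
        votes = defaultdict(float)
        for stored, action in self.instances:
            dist = math.dist(stored, situation)
            votes[action] += math.exp(-dist / tau)  # closer instances vote harder
        return max(votes, key=votes.get) if votes else None

mem = InstanceMemory()
# Hypothetical situation features: (surprise level, workload)
mem.observe((0.9, 0.1), "disconnect_autopilot")
mem.observe((0.8, 0.2), "disconnect_autopilot")
mem.observe((0.1, 0.7), "monitor")
print(mem.anticipate((0.85, 0.15)))
```

A full ACT-R model additionally decays instance activations over time and blends numeric outcomes, but the retrieve-by-similarity loop above is the mechanism that lets anticipation improve as more of an individual user's behaviour is observed.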

  • Federated privacy-preserving collaborative filtering for on-device next app prediction


    In this study, we propose a novel SeqMF model to solve the problem of predicting the next app launch during mobile device usage. Although this problem can be represented as a classical collaborative filtering problem, it requires modification because the data are sequential, the user feedback is distributed among devices, and the transmission of users’ data for aggregating common patterns must be protected against leakage. To meet these requirements, we modify the structure of the classical matrix factorization model and adopt a sequential training procedure. Since the data about user experience are distributed among devices, a federated learning setup is used to train the proposed sequential matrix factorization model. A further ingredient of our approach is a new privacy mechanism that guarantees protection of the data sent from users to the remote server. To demonstrate the efficiency of the proposed model, we use publicly available mobile user behavior data and compare our model with sequential rules and with models based on the frequency of app launches. The comparison is conducted in both static and dynamic environments: the static environment evaluates how our model processes sequential data compared to competitors, while the dynamic environment emulates the real-world scenario in which users generate new data by running apps on their devices. Our experiments show that the proposed model matches the quality of other methods in the static environment and, more importantly, achieves a better privacy–utility trade-off than competitors in the dynamic environment, which provides a more accurate simulation of real-world usage.
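The federated setup described here can be sketched as follows: each device performs matrix-factorization updates locally, keeps its user vector on-device, and uploads only a noised item-factor gradient for the server to aggregate. The names and the simple Gaussian-noise mechanism are illustrative assumptions, not the paper's SeqMF specifics.

```python
# Minimal sketch of privacy-preserving federated matrix factorization.
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 5, 3
item_factors = rng.normal(scale=0.1, size=(n_items, dim))  # shared, server-side

def local_round(user_vec, interactions, items, lr=0.1, noise=0.01):
    """One on-device SGD round; returns the updated user vector (kept local)
    and a noised item-factor gradient (the only thing uploaded)."""
    grads = np.zeros_like(items)
    for item_id, rating in interactions:
        err = rating - user_vec @ items[item_id]
        grads[item_id] -= err * user_vec                  # gradient w.r.t. item vector
        user_vec = user_vec + lr * err * items[item_id]   # user update never leaves device
    grads += rng.normal(scale=noise, size=grads.shape)    # privacy noise before upload
    return user_vec, grads

# Two devices train locally; the server sees only the noised gradients.
users = [rng.normal(scale=0.1, size=dim) for _ in range(2)]
data = [[(0, 1.0), (1, 0.0)], [(2, 1.0), (0, 1.0)]]
uploads = []
for i in range(2):
    users[i], g = local_round(users[i], data[i], item_factors)
    uploads.append(g)
item_factors -= 0.1 * np.mean(uploads, axis=0)  # server-side aggregate update
```

The privacy boundary is the return value of `local_round`: raw interactions and user vectors stay on the device, and the noise added to the uploaded gradients is what would be calibrated to give a formal privacy guarantee.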

  • Personalization of industrial human–robot communication through domain adaptation based on user feedback


    Achieving safe collaboration between humans and robots in an industrial work-cell requires effective communication, which can be provided by a robot perception system developed using data-driven machine learning. The challenge for human–robot communication is the availability of extensive, labelled datasets for training. Because of variations in human behaviour and the impact of environmental conditions on the performance of perception models, models trained on standard, publicly available datasets fail to generalize well to domain- and application-specific scenarios. Model personalization, i.e. adapting such models to the individual humans involved in the task in the given environment, therefore leads to better model performance. A novel framework is presented that leverages robust modes of communication and gathers feedback from the human partner to auto-label the mode with the sparse dataset. The strength of the contribution lies in using incommensurable multimodal inputs to personalize models with user-specific data. The personalization through feedback-enabled human–robot communication (PF-HRCom) framework is demonstrated using facial expression recognition as a safety feature that ensures the human partner is engaged in the collaborative task with the robot. Additionally, PF-HRCom has been applied to a real-time human–robot handover task with a robotic manipulator, whose perception module adapts to the user’s facial expressions and personalizes the model using feedback. That said, the framework is applicable to other combinations of multimodal inputs in human–robot collaboration applications.
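The feedback-driven auto-labeling loop at the heart of this approach can be sketched abstractly: wherever the pretrained perception model is unconfident about a sample, a robust feedback channel (here, an explicit user confirmation) supplies the label, yielding a user-specific dataset for fine-tuning. All names are hypothetical, not the PF-HRCom implementation.

```python
# Toy sketch of feedback-enabled auto-labeling for model personalization.
def personalize(model_predict, samples, ask_user, threshold=0.7):
    """Build a user-specific labeled set: trust the base model when it is
    confident, and fall back to the robust feedback channel otherwise."""
    personal_set = []
    for x in samples:
        label, confidence = model_predict(x)
        if confidence < threshold:
            label = ask_user(x)  # robust modality auto-labels the uncertain sample
        personal_set.append((x, label))
    return personal_set

def base_model(x):
    # Stand-in for a pretrained classifier: confident on positive inputs,
    # unsure on negative ones.
    return ("engaged", 0.9) if x > 0 else ("engaged", 0.4)

labeled = personalize(base_model, [1.0, -1.0], ask_user=lambda x: "distracted")
print(labeled)
```

The resulting `personal_set` is what a fine-tuning step would consume; the key design choice is that labels are only solicited from the human where the model's own confidence is low, keeping the feedback burden small.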

  • Persuasive strategies and emotional states: towards designing personalized and emotion-adaptive persuasive systems


    Persuasive strategies have been widely operationalized in systems and applications to motivate behaviour change across diverse domains. However, no empirical evidence exists on whether persuasive strategies elicit certain emotions, evidence that could inform which strategies are most appropriate for delivering interventions that not only motivate users to perform a target behaviour but also help regulate their current emotional states. We conducted a large-scale study of 660 participants to investigate if, how, and why individuals, including those at different stages of change, respond emotionally to persuasive strategies. Specifically, we examined the relationship between the perceived effectiveness of individual strategies operationalized in a system and the perceived emotional states of participants at different stages of behaviour change. Our findings establish relations between the perceived effectiveness of strategies and the emotions elicited in individuals at distinct stages of change, and show that the perceived emotions vary across stages of change for different reasons. For example, the reward strategy is associated with positive emotion only (i.e. happiness) for individuals across distinct stages of change because it induces feelings of personal accomplishment, provides incentives that increase the urge to achieve more goals, and offers a gamified experience. Other strategies are associated with mixed emotions. Our work links emotion theory with behaviour change theories and stages-of-change theory to develop practical guidelines for designing personalized and emotion-adaptive persuasive systems.