User Modeling and User-Adapted Interaction (UMUAI) provides an interdisciplinary forum for the dissemination of new research results on interactive computer systems that can be adapted or adapt themselves to their current users, and on the role of user models in the adaptation process.

UMUAI has been published since 1991 by Kluwer Academic Publishers (now merged with Springer Verlag).

UMUAI homepage with description of the scope of the journal and instructions for authors.

Springer UMUAI page with online access to the papers.

Latest Results for User Modeling and User-Adapted Interaction

21 September 2023

The latest content available from Springer
  • Cognitive personalization for online microtask labor platforms: A systematic literature review


    Online microtask labor has grown in importance over the last few years, giving people who were usually excluded from the labor market the possibility to work anytime and without geographical barriers. While this brings new opportunities for people to work remotely, it also poses the challenge of assigning tasks to workers according to their abilities. To this end, cognitive personalization can be used to assess the cognitive profile of each worker and subsequently match workers to the most appropriate type of work available on the digital labor market. In this regard, we believe that the time is ripe for a review of the current state of research on cognitive personalization for digital labor. The present study followed the recommended guidelines for the software engineering domain through a systematic literature review, leading to the analysis of 20 primary studies published from 2010 to 2020. The results report the application of several cognition theories derived from the field of psychology and indicate accurate levels of cognitive personalization in digital labor, along with a potential increase in worker performance, most frequently investigated in crowdsourcing settings. In view of this, the present review identifies several gaps and opportunities for future research to enhance the personalization of online labor, which has the potential to increase both worker motivation and the quality of digital work.
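The matching idea the abstract describes can be illustrated with a minimal sketch: represent each worker's cognitive profile and each task type's cognitive demands as vectors, then assign workers to the most similar task. All names and numbers here are hypothetical, not taken from the reviewed studies.

```python
import numpy as np

# Hypothetical cognitive profiles (attention, memory, verbal ability;
# each scaled 0-1) for workers, and demand profiles for microtask types.
workers = {"w1": np.array([0.9, 0.4, 0.3]),
           "w2": np.array([0.2, 0.3, 0.9])}
tasks   = {"image_tagging":   np.array([0.8, 0.5, 0.1]),
           "text_annotation": np.array([0.3, 0.4, 0.9])}

def cosine(a, b):
    # Cosine similarity between two profile vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_task(worker):
    # Match a worker to the task type whose cognitive demands best
    # align with the worker's profile.
    return max(tasks, key=lambda t: cosine(workers[worker], tasks[t]))
```

Here a worker strong on attention is routed to image tagging, while one strong on verbal ability is routed to text annotation; real systems would derive the profiles from cognitive tests rather than fixed vectors.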

  • What we see is what we do: a practical Peripheral Vision-Based HMM framework for gaze-enhanced recognition of actions in a medical procedural task


    Deep learning models have shown remarkable performance in egocentric video-based action recognition (EAR) but rely heavily on large quantities of training data. In specific applications with only limited data available, eye-movement data may provide additional valuable sensory information for accurate classification. However, little is known about the effectiveness of gaze data as a modality for egocentric action recognition. We therefore propose the new Peripheral Vision-Based HMM (PVHMM) classification framework, which utilizes context-rich and object-related gaze features for the detection of human action sequences. Gaze information is quantified using two features, the object-of-interest hit and the object–gaze distance, and human action recognition is achieved by employing a hidden Markov model. The classification performance of the framework is tested and validated on a safety-critical medical device handling task involving seven distinct action classes, using 43 mobile eye-tracking recordings. The robustness of the approach is evaluated by adding Gaussian noise, and the results are compared to the performance of a VGG-16 model. The gaze-enhanced PVHMM achieves high classification performance on the investigated medical procedure task, surpassing the purely image-based classification model. Consequently, this gaze-enhanced EAR approach shows potential for implementation in action sequence-dependent real-world applications such as surgical training, performance assessment, or medical procedural tasks.
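As an illustration of HMM-based classification of gaze sequences, here is a minimal sketch: one HMM per action class, scored with the forward algorithm, with the class of highest likelihood winning. The class names, probabilities, and the 4-symbol quantization (crossing an object-of-interest hit yes/no with a near/far object–gaze distance) are invented for illustration, not the paper's actual PVHMM parameters.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    # Log-likelihood of a discrete observation sequence under an HMM,
    # computed with the forward algorithm in log space for stability.
    log_alpha = np.log(start) + np.log(emit[:, obs[0]])
    for o in obs[1:]:
        log_alpha = np.log(emit[:, o]) + np.logaddexp.reduce(
            log_alpha[:, None] + np.log(trans), axis=0)
    return np.logaddexp.reduce(log_alpha)

# Hypothetical 2-state HMMs for two action classes; observations are the
# 4 quantized gaze symbols described above.
class_hmms = {
    "pick_device": (np.array([0.9, 0.1]),                # start probs
                    np.array([[0.8, 0.2], [0.3, 0.7]]),  # transition matrix
                    np.array([[0.7, 0.1, 0.1, 0.1],      # emission matrix
                              [0.1, 0.2, 0.3, 0.4]])),
    "inject":      (np.array([0.5, 0.5]),
                    np.array([[0.6, 0.4], [0.4, 0.6]]),
                    np.array([[0.1, 0.6, 0.2, 0.1],
                              [0.3, 0.1, 0.1, 0.5]])),
}

def classify(obs):
    # Maximum-likelihood classification over the per-class HMMs.
    scores = {name: forward_log_likelihood(obs, *params)
              for name, params in class_hmms.items()}
    return max(scores, key=scores.get)
```

A sequence dominated by symbol 0 (hits on the device, near gaze) is assigned to "pick_device", one dominated by symbol 1 to "inject"; the paper's framework would instead learn seven such models from the 43 recordings.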

  • Emotional intelligence and individuals’ viewing behaviour of human faces: a predictive approach


    Although several studies have looked at the relationship between emotional characteristics and viewing behaviour, how emotional intelligence (EI) contributes to individuals’ viewing behaviour is not well understood. This study examined the viewing behaviour of people (74 male and 80 female) with specific EI profiles while they viewed five facial expressions. An eye-tracking methodology was employed to examine individuals’ viewing behaviour in relation to their EI. We compared the performance of different machine learning algorithms on participants’ eye-movement parameters to predict their EI profiles. The results revealed that individuals high in self-control, emotionality, and sociability responded differently to the visual stimuli, and prediction of these EI profiles achieved 94.97% accuracy. The findings are unique in that they provide a new understanding of how eye movements can be used to predict EI. They also contribute to the current understanding of the relationship between EI and emotional expressions, adding to an emerging stream of research of interest to researchers and psychologists in human–computer interaction, individual emotion, and information processing.
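The prediction setup the abstract describes, classifying a trait profile from eye-movement parameters, can be sketched with a deliberately simple classifier on synthetic data. The features (mean fixation duration, fixation count on the eye region) and the two-profile setup are hypothetical stand-ins, not the study's actual parameters or algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic eye-movement features per participant: mean fixation
# duration (ms) and fixation count on the eye region of a face, for two
# invented EI profiles (1 = high self-control, 0 = low).
high = rng.normal([250.0, 40.0], [20.0, 5.0], size=(50, 2))
low  = rng.normal([180.0, 25.0], [20.0, 5.0], size=(50, 2))
X = np.vstack([high, low])
y = np.array([1] * 50 + [0] * 50)

def fit_centroids(X, y):
    # Nearest-centroid classifier: one mean feature vector per class.
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    # Assign to the class with the nearest centroid (Euclidean distance).
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

centroids = fit_centroids(X, y)
acc = np.mean([predict(centroids, x) == t for x, t in zip(X, y)])
```

With well-separated synthetic profiles even this baseline scores highly; the study's comparison of multiple machine learning algorithms follows the same train-then-evaluate pattern on real eye-tracking parameters.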

  • Enhancing user awareness on inferences obtained from fitness trackers data


    In the IoT era, sensitive and non-sensitive data are recorded and transmitted to multiple service providers and IoT platforms, aiming to improve the quality of our lives through the provision of high-quality services. In some cases, however, these data may become available to interested third parties, who can analyse them to derive further knowledge and generate new insights about the users, which they can ultimately use for their own benefit. This predicament raises a crucial issue regarding users’ privacy and their awareness of how their personal data are shared and potentially used. The immense increase in fitness tracker use has further increased the amount of user data generated, processed, and possibly shared or sold to third parties, enabling the extraction of further insights about the users. In this work, we investigate whether the analysis and exploitation of the data collected by fitness trackers can lead to inferences about the owners’ routines, health status, or other sensitive information. Based on the results, we utilise the PrivacyEnhAction privacy tool, a web application we implemented in a previous work through which users can analyse data collected from their IoT devices, to educate users about the possible risks and to enable them to set their privacy preferences on their fitness trackers accordingly, contributing to the personalisation of the provided services with respect to their personal data.
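To make the kind of inference at stake concrete, here is a toy sketch of one routine inference a third party could draw from raw tracker data: guessing a sleep window from hourly step counts. The data and threshold are invented; the paper's analyses and the PrivacyEnhAction tool are not reproduced here.

```python
# Hypothetical hourly step counts for one day (index = hour of day).
steps = [0, 0, 0, 0, 0, 0, 30, 500, 800, 300, 200, 400,
         600, 300, 250, 400, 700, 900, 500, 300, 150, 60, 10, 0]

def longest_inactive_window(steps, threshold=20):
    # Infer a likely sleep window as the longest run of hours whose
    # step count stays below a threshold.  A simple non-wrapping scan;
    # real analyses would combine heart rate, movement, and more days.
    best = (0, 0)  # (run length, end hour)
    length = 0
    for hour, count in enumerate(steps):
        length = length + 1 if count < threshold else 0
        if length > best[0]:
            best = (length, hour)
    start = best[1] - best[0] + 1
    return start, best[1]
```

For the sample day above, the inferred window is midnight to 5 a.m., illustrating how even a single low-sensitivity data stream leaks a daily routine.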

  • Recommending on graphs: a comprehensive review from a data perspective


    Recent advances in graph-based learning approaches have demonstrated their effectiveness in modelling users’ preferences and items’ characteristics for Recommender Systems (RSs). Most of the data in RSs can be organized into graphs where various objects (e.g. users, items, and attributes) are explicitly or implicitly connected and influence each other via various relations. Such a graph-based organization makes it possible to exploit graph learning techniques (e.g. random walks and network embedding) to enrich the representations of the user and item nodes, which is an essential factor for successful recommendations. In this paper, we provide a comprehensive survey of Graph Learning-based Recommender Systems (GLRSs). Specifically, we start from a data-driven perspective to systematically categorize the various graphs in GLRSs and analyse their characteristics. We then discuss the state-of-the-art frameworks, focusing on the graph learning module and on how they address practical recommendation challenges such as scalability, fairness, diversity, and explainability. Finally, we share some potential research directions in this rapidly growing area.
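As a concrete instance of the random-walk techniques the abstract mentions, here is a minimal random walk with restart on a user–item bipartite graph: items that the walk from a user visits often, excluding those the user already interacted with, are recommended. The interaction matrix and parameters are illustrative, not drawn from any surveyed system.

```python
import numpy as np

# Hypothetical user-item interactions: rows = 3 users, cols = 4 items.
interactions = np.array([[1, 1, 0, 0],
                         [0, 1, 1, 0],
                         [0, 0, 1, 1]], dtype=float)

def recommend(user, interactions, restart=0.15, steps=50):
    # Random walk with restart from a user node on the bipartite graph.
    n_users, n_items = interactions.shape
    n = n_users + n_items
    # Symmetric adjacency: user i <-> item j iff interactions[i, j] == 1.
    adj = np.zeros((n, n))
    adj[:n_users, n_users:] = interactions
    adj[n_users:, :n_users] = interactions.T
    # Column-normalize into a transition matrix (column-stochastic).
    trans = adj / adj.sum(axis=0, keepdims=True)
    p = np.zeros(n)
    p[user] = 1.0
    start = p.copy()
    for _ in range(steps):
        p = (1 - restart) * trans @ p + restart * start
    item_scores = p[n_users:].copy()
    item_scores[interactions[user] > 0] = -np.inf  # mask already-seen items
    return int(np.argmax(item_scores))
```

For user 0 (who interacted with items 0 and 1), the walk reaches item 2 through the shared neighbour user 1, so item 2 outranks item 3; embedding-based methods in the survey pursue the same goal by learning node representations instead of propagating probability mass.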