User Modeling and User-Adapted Interaction (UMUAI) provides an interdisciplinary forum for the dissemination of new research results on interactive computer systems that can be adapted or adapt themselves to their current users, and on the role of user models in the adaptation process.

UMUAI has been published since 1991 by Kluwer Academic Publishers (now part of Springer-Verlag).

UMUAI homepage with description of the scope of the journal and instructions for authors.

Springer UMUAI page with online access to the papers.

Latest Results for User Modeling and User-Adapted Interaction

09 March 2021

The latest content available from Springer
  • Personality expression and recognition in Chinese language usage


    Personality plays a pivotal role at work. Many scholars have investigated the association between personality and language usage habits in English corpora. Given that the Chinese language has the largest number of native speakers in the world, it is essential to analyze the pattern of personality expression in Chinese, which has garnered less attention. In this study, we used the TextMind system to examine the correlation between word categories and personality traits based on Chinese Weibo content. We also compared the results with previous studies to demonstrate the similarities and differences in personality expression between English and Chinese. Additionally, this paper established a prediction model based on machine learning methods to recognize personality. Results showed that language features were powerful indicators of personality. Finally, we made recommendations for using personality expression in recruitment and selection.
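The core statistical step this abstract describes — relating the rate of a word category to a trait score — can be sketched as follows. This is a minimal illustration in the spirit of LIWC/TextMind-style analysis, not the paper's actual pipeline: the "social words" lexicon, the posts, and the trait scores are invented toy data.

```python
# Correlate a word-category usage rate with a personality trait score.
from math import sqrt

def category_rate(tokens, lexicon):
    """Fraction of a post's tokens that fall in a word category."""
    if not tokens:
        return 0.0
    return sum(t in lexicon for t in tokens) / len(tokens)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

# Hypothetical "social words" category and per-user toy data.
social = {"friend", "we", "party", "talk"}
posts = [
    ["we", "met", "a", "friend", "to", "talk"],
    ["stayed", "home", "and", "read"],
    ["party", "with", "friend", "tonight"],
    ["quiet", "day", "alone"],
]
extraversion = [4.5, 2.0, 4.8, 1.5]  # self-reported trait scores

rates = [category_rate(p, social) for p in posts]
r = pearson(rates, extraversion)  # high r: category usage tracks the trait
```

In the paper's setting such per-category correlations would then feed a learned model; here the point is only that category rates are simple, interpretable language features.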

  • Effects of adapting to user pitch on rapport perception, behavior, and state with a social robotic learning companion


    Social robots such as learning companions, therapeutic assistants, and tour guides face the challenging task of establishing rapport with their users. People rarely communicate with words alone; facial expressions, gaze, gesture, and prosodic cues like tone of voice and speaking rate combine to help individuals express their words and convey emotion. One way that individuals communicate a sense of connection with one another is entrainment, where interaction partners adapt their way of speaking, facial expressions, or gestures to each other; entrainment has been linked to trust, liking, and task success and is thought to be a vital phenomenon in how people build rapport. In this work, we introduce a social robot that combines multiple channels of rapport-building behavior, including forms of social dialog and prosodic entrainment. We explore how social dialog and entrainment contribute to both self-reported and behavioral rapport responses. We find that prosodic adaptation enhances perceptions of social dialog, and that social dialog and entrainment combined build rapport. Individual differences indicated by gender mediate these social responses; an individual’s underlying rapport state, as indicated by their verbal rapport behavior, is exhibited and triggered differently depending on gender. These results have important repercussions for assessing and modeling a user’s social responses and designing adaptive social agents.
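One simple form of the prosodic entrainment described above can be sketched as a per-turn pitch update: after each user utterance, the robot nudges its base pitch a fixed fraction of the way toward the user's measured pitch. The update rule, the `alpha` value, and the f0 numbers are illustrative assumptions, not the paper's model.

```python
# Toy prosodic entrainment: shift the robot's base pitch (f0, in Hz)
# toward the user's measured pitch after each utterance.

def entrain_pitch(robot_f0, user_f0, alpha=0.3):
    """Move robot pitch a fraction alpha toward the user's pitch."""
    return robot_f0 + alpha * (user_f0 - robot_f0)

robot_f0 = 220.0                                 # robot's initial pitch
user_turns = [180.0, 175.0, 182.0, 178.0]        # measured user f0 per turn

for user_f0 in user_turns:
    robot_f0 = entrain_pitch(robot_f0, user_f0)
# robot_f0 has converged toward the user's ~178 Hz range
```

The same one-parameter scheme could be applied to speaking rate or loudness; a real system would also bound the adjustment so the robot's voice stays natural.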

  • Automatic generation and recommendation of personalized challenges for gamification


    Gamification, that is, the usage of game content in non-game contexts, has been successfully employed in several application domains to foster end users’ engagement and to induce a change in their behavior. Despite its impact potential, well-known limitations concern retaining players and sustaining the newly adopted behavior over time. This problem can be traced to two common design errors: reliance on basic game elements fixed at design time, and a one-size-fits-all strategy for generating game content. The former issue refers to the fact that most gamified applications focus only on the superficial layer of game design elements, such as points, badges and leaderboards, and do not exploit the full potential of games in terms of engagement and motivation; the latter relates to a lack of personalization, since the game content proposed to players does not take into consideration their specific abilities, skills and preferences. Taken together, these issues often lead to players’ boredom or frustration. The game element of challenges, which proposes a demanding but achievable goal and rewards its completion, has empirically proved effective at keeping players’ interest alive and sustaining their engagement over time. However, challenges require a significant effort from game designers, who must periodically conceive new challenges, align goals with the objectives of the gamification campaign, balance those goals with rewards and define assignment criteria for the player population. Our hypothesis is that we can overcome these limitations by automatically generating challenges, personalized to each individual player throughout the game. To this end, we have designed and implemented a fully automated system for the dynamic generation and recommendation of challenges, which are personalized and contextualized based on the preferences, history, game status and performance of each player.
The proposed approach is generic and can be applied in different gamification application contexts. In this paper, we present its implementation within a large-scale and long-running open-field experiment promoting sustainable urban mobility that lasted 12 weeks and involved more than 400 active players. A comparative evaluation is performed, considering challenges that are generated and assigned fully automatically through our system versus analogous challenges developed and assigned by human game designers. The evaluation covers the acceptance of challenges by players, the impact induced on players’ behavior, as well as the efficiency in terms of rewarding cost. The evaluation results are very encouraging and suggest that procedural content generation applied to the customization of challenges has great potential to enhance the performance of gamification applications and augment their engagement and persuasive power.
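The personalization idea in this abstract — a goal slightly above the player's recent baseline, with a reward balanced against the required effort — can be sketched with one simple rule. The 1.2 improvement factor, the reward formula, and the sustainable-mobility framing (kilometers) are illustrative assumptions, not the system's actual generator.

```python
# Toy personalized-challenge generator: set the target a modest step
# above the player's recent performance and scale the reward with the
# improvement being asked for.

def generate_challenge(recent_km, improvement=1.2, points_per_km=10):
    """Build a challenge dict from a player's recent weekly distance."""
    target = round(recent_km * improvement, 1)          # demanding but achievable
    reward = int((target - recent_km) * points_per_km)  # pay for the extra effort
    return {"goal_km": target, "reward_points": reward}

challenge = generate_challenge(recent_km=15.0)
```

Two players with different baselines thus receive different goals and rewards from the same rule, which is the essence of replacing one-size-fits-all content with per-player generation.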

  • Empirical analysis of session-based recommendation algorithms


    Recommender systems are tools that support online users by pointing them to potential items of interest in situations of information overload. In recent years, the class of session-based recommendation algorithms has received increasing attention in the research literature. These algorithms base their recommendations solely on the observed interactions with the user in an ongoing session and do not require the existence of long-term preference profiles. Most recently, a number of deep learning-based (“neural”) approaches to session-based recommendation have been proposed. However, previous research indicates that today’s complex neural recommendation methods are not always better than comparably simple algorithms in terms of prediction accuracy. With this work, our goal is to shed light on the state of the art in the area of session-based recommendation and on the progress that is made with neural approaches. For this purpose, we compare twelve algorithmic approaches, among them six recent neural methods, under identical conditions on various datasets. We find that the progress in terms of prediction accuracy that is achieved with neural methods is still limited. In most cases, our experiments show that simple heuristic methods based on nearest-neighbors schemes are preferable over conceptually and computationally more complex methods. Observations from a user study furthermore indicate that recommendations based on heuristic methods were also well accepted by the study participants. To support future progress and reproducibility in this area, we publicly share the session-rec evaluation framework that was used in our research.
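The nearest-neighbors heuristic the study found so competitive can be sketched in a few lines: find past sessions that overlap with the ongoing one, then score their remaining items by similarity-weighted votes. Jaccard overlap and the toy sessions below are illustrative choices; real session-kNN variants differ in similarity measure, sampling, and decay.

```python
# Minimal session-based nearest-neighbors recommender sketch.

def recommend(current, past_sessions, k=2, top_n=3):
    """Recommend items for the ongoing session from k most similar past sessions."""
    cur = set(current)
    # Rank past sessions by Jaccard overlap with the ongoing session.
    sims = []
    for s in past_sessions:
        ss = set(s)
        sim = len(cur & ss) / len(cur | ss)
        if sim > 0:
            sims.append((sim, ss))
    sims.sort(key=lambda t: -t[0])
    # Score items the user has not seen yet by similarity-weighted votes.
    scores = {}
    for sim, ss in sims[:k]:
        for item in ss - cur:
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=lambda i: -scores[i])[:top_n]

past = [
    ["a", "b", "c"],
    ["a", "c", "d"],
    ["e", "f"],
]
recs = recommend(["a", "c"], past)  # neighbors vote for "b" and "d"
```

Its appeal as a baseline is exactly what the abstract reports: no training phase, no long-term profile, and the whole model is the session log itself.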

  • From perception to action: using observed actions to learn gestures


    Pervasive computing environments deliver a multitude of possibilities for human–computer interaction. Modern technologies, such as gesture control or speech recognition, allow different devices to be controlled without additional hardware. A drawback of these concepts is that gestures and commands need to be learned. We propose a system that is able to learn actions by observing the user. To accomplish this, we use a camera and deep learning algorithms in a self-supervised fashion. The user can either train the system directly by showing gesture examples and performing an action, or let the system learn by itself. To evaluate the system, five experiments are carried out. In the first experiment, initial detectors are trained and used to evaluate our training procedure. The following three experiments evaluate the adaptation of our system and its applicability to new environments. In the last experiment, we evaluate online adaptation and report adaptation times and intervals.