User Modeling and User-Adapted Interaction (UMUAI) provides an interdisciplinary forum for the dissemination of new research results on interactive computer systems that can be adapted or adapt themselves to their current users, and on the role of user models in the adaptation process.

UMUAI has been published since 1991 by Kluwer Academic Publishers (now merged with Springer Verlag).

UMUAI homepage with description of the scope of the journal and instructions for authors.

Springer UMUAI page with online access to the papers.

Latest Results for User Modeling and User-Adapted Interaction

20 January 2021

The latest content available from Springer
  • BARGAIN: behavioral affective rule-based games adaptation interface–towards emotionally intelligent games: application on a virtual reality environment for socio-moral development


    This paper presents a framework for adapting game elements to the player's affective state and its integration into a virtual reality environment for moral development. The adapted elements include the gestural and facial expressions of avatars during dialogues with the player, background music, the score, game mechanics, aesthetics, and learning. The framework, BARGAIN (Behavioral Affective Rule-based Games Adaptation Interface), is an authoring tool for affective game design. It provides a visual interface based on the finite state machine (FSM) technique, representing the affective rules as a state-transition graph driven by the player's emotional state, which is assessed by a facial expression recognition system based on electroencephalography (EEG) data. We conducted a user study (n = 29) examining the effects of the resulting affective virtual reality game on players' experience using the Game Experience Questionnaire (GEQ) (IJsselsteijn et al. in The game experience questionnaire, Technische Universiteit Eindhoven, Eindhoven, 2013). The results show a significant correlation between the GEQ dimensions and the player's facial expressions during interaction with the Non-Player Characters (NPCs) within the VR game. These findings highlight that adapting games to users' emotions enhances the players' experience.
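The abstract only says that BARGAIN encodes affective rules as an emotion-driven state-transition graph; the concrete states, emotions, and transitions below are illustrative assumptions, not the paper's actual rule set. A minimal sketch of such an FSM might look like:

```python
# Minimal sketch of an affective rule engine as a finite state machine.
# States and emotion labels are hypothetical; the paper's abstract only
# specifies that adaptation rules are transitions triggered by the
# player's detected emotional state.

class AffectiveFSM:
    def __init__(self, initial_state, transitions):
        # transitions: {(current_state, detected_emotion): next_state}
        self.state = initial_state
        self.transitions = transitions

    def on_emotion(self, emotion):
        """Advance the adaptation state when an emotion is detected;
        stay in the current state if no rule matches."""
        self.state = self.transitions.get((self.state, emotion), self.state)
        return self.state

# Hypothetical rules: ease the mechanics when frustration is detected,
# return to the default presentation once the player is happy again.
rules = {
    ("default_presentation", "frustrated"): "easier_mechanics",
    ("easier_mechanics", "happy"): "default_presentation",
    ("default_presentation", "bored"): "faster_pacing",
}

fsm = AffectiveFSM("default_presentation", rules)
print(fsm.on_emotion("frustrated"))  # easier_mechanics
print(fsm.on_emotion("happy"))       # default_presentation
```

Representing the rules as a plain transition table is what makes a visual authoring interface straightforward: each rule is one labeled edge in the graph.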

  • Acknowledgment to reviewers
  • Better targeting of consumers: Modeling multifactorial gender and biological sex from Instagram posts


    Along with the rapidly increasing influence and importance of advertisements and publicity in social networking services (SNS), considerable efforts are being made to provide user-customized services through an understanding of SNS content. Studies on online purchasing patterns based on user attributes have also been conducted; however, these studies used either purely experimental methods (e.g., surveys or ethnographic accounts) or simple user attributes (e.g., age, biological sex, and location) for computational user modeling. Through interviews with professional marketers, this paper identifies their need to understand multifactorial attributes of SNS users (potential customers), namely gender (i.e., masculine, feminine, androgynous) and biological sex (i.e., male and female), for marketing purposes. Based on 33,752 Instagram posts, we develop a deep learning-based classification model that merges three modalities: image (i.e., VGG16 features and gestures), text (i.e., linguistic, tag, sentence, and category features), and activity (i.e., replies and day). Our model achieves better performance in classifying the three gender types in the male, female, and male + female cases than traditional machine learning models. Our study results reveal the applicability of identifying gender characteristics from posts in the marketing field.
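The abstract describes merging image, text, and activity modalities into one classifier but does not specify the fusion mechanism. A common, simple approach is late fusion by feature concatenation; the feature names and dimensions below are assumptions for illustration only:

```python
# Illustrative late-fusion sketch: concatenate per-modality feature
# vectors into a single input for a downstream classifier. All feature
# values here are made up; the paper's actual architecture is not
# specified in the abstract.

def fuse_modalities(image_feats, text_feats, activity_feats):
    """Concatenate modality feature vectors into one classifier input."""
    return image_feats + text_feats + activity_feats

post = {
    "image": [0.12, 0.87],        # e.g., truncated VGG16 embedding
    "text": [0.33, 0.05, 0.61],   # e.g., tag/sentence/category features
    "activity": [3.0, 0.5],       # e.g., reply count, posting-day code
}
fused = fuse_modalities(post["image"], post["text"], post["activity"])
print(len(fused))  # 7
```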

  • Beyond binary correctness: Classification of students’ answers in learning systems


    Adaptive learning systems collect data on student performance and use them to personalize system behavior. Most current personalization techniques focus on the correctness of answers. Although the correctness of answers is the most straightforward source of information about a student's state, research suggests that additional data are also useful, e.g., response times, hint usage, or the specific values of incorrect answers. However, these sources of data are not easy to utilize and are often used in an ad hoc fashion. We propose to use answer classification as an interface between raw data about student performance and algorithms for adaptive behavior. Specifically, we propose a classification of student answers into six categories: three classes of correct answers and three classes of incorrect answers. The proposed classification is broadly applicable and makes the use of additional interaction data much more feasible. We support the proposal with an analysis of extensive data from adaptive learning systems.
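The abstract proposes six answer categories (three correct, three incorrect) as an interface over raw performance data but does not name them. The category names and thresholds below are assumptions; the sketch only illustrates the idea of mapping correctness, response time, and hint usage onto a small fixed vocabulary:

```python
# Hypothetical six-way answer classifier. The paper does not publish
# its category names in the abstract; these labels and thresholds are
# illustrative assumptions about how raw interaction data could be
# reduced to six classes (three correct, three incorrect).

def classify_answer(correct, response_time, hints_used,
                    fast=3.0):
    """Map raw performance data to one of six answer categories."""
    if correct:
        if hints_used > 0:
            return "correct_with_hint"
        return "correct_fast" if response_time < fast else "correct_slow"
    if hints_used > 0:
        return "incorrect_with_hint"
    return "incorrect_fast" if response_time < fast else "incorrect_slow"

print(classify_answer(True, 1.5, 0))    # correct_fast
print(classify_answer(False, 25.0, 1))  # incorrect_with_hint
```

An adaptive algorithm can then condition on these six labels instead of handling response times and hints ad hoc in every component.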

  • Development of measurement instrument for visual qualities of graphical user interface elements (VISQUAL): a test in the context of mobile game icons


    Graphical user interfaces are ubiquitous in everyday human–computer interaction, most prominently on computers and smartphones. Today, various actions are performed via graphical user interface elements, e.g., windows, menus, and icons. An attractive user interface that adapts to user needs and preferences is increasingly important, as it often allows personalized information processing that facilitates interaction. However, practitioners and scholars have lacked an instrument for measuring user perception of aesthetics in graphical user interface elements to aid in creating successful graphical assets. Therefore, we studied the dimensionality of ratings of different perceived aesthetic qualities of GUI elements as the foundation for such a measurement instrument. First, we devised a semantic differential scale of 22 adjective pairs by combining prior scattered measures. We then conducted a vignette experiment in which participants (n = 569) were randomly assigned to evaluate 4 icons from a pre-selected pool of 68 game app icons across 4 categories (concrete, abstract, character, and text) using the semantic scales, resulting in a total of 2276 individual icon evaluations. Through exploratory factor analyses, the observations converged into 5 dimensions of perceived visual quality: Excellence/Inferiority, Graciousness/Harshness, Idleness/Liveliness, Normalness/Bizarreness, and Complexity/Simplicity. We then conducted confirmatory factor analyses to test the model fit of the 5-factor model with all 22 adjective pairs as well as with an adjusted version of 15 adjective pairs. Overall, this study developed, validated, and consequently presents a measurement instrument for perceptions of the visual qualities of graphical user interfaces and/or singular interface elements (VISQUAL) that can be used in multiple ways in several contexts related to visual human-computer interaction, interfaces, and their adaptation.
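Once a semantic differential instrument like VISQUAL is factor-analyzed, scoring a stimulus typically means averaging the item ratings that load on each factor. The five dimension names below come from the abstract, but the item-to-dimension assignments and adjective pairs are purely illustrative assumptions, since the abstract does not list them:

```python
# Illustrative scoring sketch for a semantic differential instrument.
# The five dimensions are from the VISQUAL abstract; the adjective
# pairs and their assignment to dimensions are hypothetical.

DIMENSIONS = {
    "Excellence/Inferiority": ["good-bad", "pleasant-unpleasant"],
    "Graciousness/Harshness": ["soft-hard"],
    "Idleness/Liveliness": ["static-dynamic"],
    "Normalness/Bizarreness": ["ordinary-strange"],
    "Complexity/Simplicity": ["complex-simple"],
}

def score_icon(ratings):
    """Average each dimension's item ratings (e.g., on a 1-7 scale)."""
    return {
        dim: sum(ratings[item] for item in items) / len(items)
        for dim, items in DIMENSIONS.items()
    }

ratings = {"good-bad": 6, "pleasant-unpleasant": 5, "soft-hard": 4,
           "static-dynamic": 2, "ordinary-strange": 3, "complex-simple": 5}
print(score_icon(ratings)["Excellence/Inferiority"])  # 5.5
```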