User Modeling and User-Adapted Interaction (UMUAI) provides an interdisciplinary forum for the dissemination of new research results on interactive computer systems that can be adapted or adapt themselves to their current users, and on the role of user models in the adaptation process.

UMUAI has been published since 1991 by Kluwer Academic Publishers (now merged with Springer-Verlag).

UMUAI homepage, with a description of the journal’s scope and instructions for authors.

Springer UMUAI page with online access to the papers.

Latest Results for User Modeling and User-Adapted Interaction

06 June 2020

The latest content available from Springer
  • Automatic generation and recommendation of personalized challenges for gamification

    Abstract

    Gamification, that is, the use of game content in non-game contexts, has been successfully employed in several application domains to foster end users’ engagement and to induce a change in their behavior. Despite its potential, well-known limitations concern retaining players and sustaining the newly adopted behavior over time. This problem can be traced to two common errors: considering only basic game elements at design time and adopting a one-size-fits-all strategy for generating game content. The former issue refers to the fact that most gamified applications focus only on the superficial layer of game design elements, such as points, badges and leaderboards, and do not exploit the full potential of games in terms of engagement and motivation; the latter relates to a lack of personalization, since the game content proposed to players does not take into consideration their specific abilities, skills and preferences. Taken together, these issues often lead to players’ boredom or frustration. The game element of challenges, which sets a demanding but achievable goal and rewards its completion, has empirically proved effective in keeping players’ interest alive and sustaining their engagement over time. However, challenges require significant effort from game designers, who must periodically conceive new challenges, align goals with the objectives of the gamification campaign, balance goals with rewards, and define criteria for assigning challenges to the player population. Our hypothesis is that we can overcome these limitations by automatically generating challenges that are personalized to each individual player throughout the game. To this end, we have designed and implemented a fully automated system for the dynamic generation and recommendation of challenges, which are personalized and contextualized based on the preferences, history, game status and performance of each player. The proposed approach is generic and can be applied in different gamification contexts. In this paper, we present its implementation within a large-scale, long-running open-field experiment promoting sustainable urban mobility that lasted 12 weeks and involved more than 400 active players. A comparative evaluation is performed, considering challenges generated and assigned fully automatically by our system versus analogous challenges developed and assigned by human game designers. The evaluation covers the acceptance of challenges by players, the impact induced on players’ behavior, and the efficiency in terms of reward cost. The results are very encouraging and suggest that procedural content generation applied to the customization of challenges has great potential to enhance the performance of gamification applications and augment their engagement and persuasive power.
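
The abstract's core idea — deriving a demanding-but-achievable goal and a matching reward from each player's recent performance — can be sketched roughly as follows. This is an illustrative sketch only: the metric name, the 20% improvement factor, and the linear reward formula are assumptions, not the authors' actual model.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Challenge:
    metric: str     # e.g. "km_by_bike" (hypothetical metric name)
    target: float   # goal the player must reach
    reward: int     # points granted on completion

def generate_challenge(history, metric="km_by_bike",
                       improvement_factor=1.2, points_per_unit=10):
    """Propose a goal slightly above the player's recent average.

    `history` holds the player's past weekly values for `metric`.
    Both the improvement step and the linear reward are illustrative
    assumptions, not the paper's actual generation model.
    """
    baseline = mean(history)
    target = round(baseline * improvement_factor, 1)
    # Reward grows with the extra effort requested of this player.
    reward = int((target - baseline) * points_per_unit)
    return Challenge(metric, target, reward)

# A player who cycled 8, 10 and 12 km in recent weeks (baseline 10 km)
c = generate_challenge([8, 10, 12])
print(c)  # Challenge(metric='km_by_bike', target=12.0, reward=20)
```

Because the goal is computed per player from that player's own history, two players with different baselines receive different targets for the same campaign objective — the personalization the abstract argues for.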

  • Development of measurement instrument for visual qualities of graphical user interface elements (VISQUAL): a test in the context of mobile game icons

    Abstract

    Graphical user interfaces are ubiquitous in everyday human–computer interaction, most prominently on computers and smartphones. Today, various actions are performed via graphical user interface elements, e.g., windows, menus and icons. An attractive user interface that adapts to user needs and preferences is increasingly important, as it often allows personalized information processing that facilitates interaction. However, practitioners and scholars have lacked an instrument for measuring user perception of aesthetics within graphical user interface elements to aid in creating successful graphical assets. Therefore, we studied the dimensionality of ratings of different perceived aesthetic qualities in GUI elements as the foundation for the measurement instrument. First, we devised a semantic differential scale of 22 adjective pairs by combining prior scattered measures. We then conducted a vignette experiment with random assignment in which each participant (n = 569) evaluated 4 icons from a total of 68 pre-selected game app icons across 4 categories (concrete, abstract, character and text) using the semantic scales. This resulted in a total of 2276 individual icon evaluations. Through exploratory factor analyses, the observations converged into 5 dimensions of perceived visual quality: Excellence/Inferiority, Graciousness/Harshness, Idleness/Liveliness, Normalness/Bizarreness and Complexity/Simplicity. We then conducted confirmatory factor analyses to test the model fit of the 5-factor model with all 22 adjective pairs as well as with an adjusted version of 15 adjective pairs. Overall, this study developed, validated, and consequently presents a measurement instrument for perceptions of visual qualities of graphical user interfaces and/or singular interface elements (VISQUAL) that can be used in multiple ways in several contexts related to visual human–computer interaction, interfaces and their adaptation.
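
The exploratory-factor-analysis step described above — many adjective-pair rating scales collapsing into a few latent quality dimensions — can be sketched on synthetic data. The scale names, loadings, and sample size below are invented for illustration and do not reproduce the VISQUAL data; the sketch only shows how the Kaiser criterion (eigenvalue > 1 on the correlation matrix) recovers the number of latent factors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated semantic-differential ratings: 500 respondents score 6
# adjective-pair scales driven by 2 latent visual qualities.
# (Scale names and loadings are invented for illustration only.)
n = 500
latent = rng.normal(size=(n, 2))          # two latent factors
loading = np.array([[0.9, 0.0],           # beautiful–ugly
                    [0.8, 0.1],           # pleasant–unpleasant
                    [0.7, 0.2],           # likable–unlikable
                    [0.1, 0.8],           # simple–complex
                    [0.0, 0.9],           # clear–cluttered
                    [0.2, 0.7]])          # calm–busy
ratings = latent @ loading.T + 0.4 * rng.normal(size=(n, 6))

# Exploratory step: eigendecompose the correlation matrix and keep
# factors with eigenvalue > 1 (Kaiser criterion).
corr = np.corrcoef(ratings, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # descending order
n_factors = int((eigvals > 1.0).sum())
print(n_factors)  # recovers the two latent qualities
```

In the actual study the same logic runs over 22 scales and 2276 evaluations and yields 5 dimensions; a confirmatory factor analysis (not sketched here) then tests how well that 5-factor model fits held-out structure.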

  • Beyond binary correctness: Classification of students’ answers in learning systems

    Abstract

    Adaptive learning systems collect data on student performance and use them to personalize system behavior. Most current personalization techniques focus on the correctness of answers. Although correctness is the most straightforward source of information about student state, research suggests that additional data are also useful, e.g., response times, hint usage, or the specific values of incorrect answers. However, these sources of data are not easy to utilize and are often used in an ad hoc fashion. We propose to use answer classification as an interface between raw data about student performance and algorithms for adaptive behavior. Specifically, we propose a classification of student answers into six categories: three classes of correct answers and three classes of incorrect answers. The proposed classification is broadly applicable and makes the use of additional interaction data much more feasible. We support the proposal with an analysis of extensive data from adaptive learning systems.
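
The interface the abstract proposes — mapping raw interaction data onto a small fixed set of answer classes that adaptive algorithms can consume — might look like the sketch below. The class names, thresholds, and decision rules are hypothetical placeholders; the paper defines its own six categories (three correct, three incorrect) from data such as response times and hint usage.

```python
from enum import Enum

class AnswerClass(Enum):
    # Three classes of correct answers (labels are hypothetical,
    # not the paper's own category names).
    CORRECT_FAST = "correct, quick and unaided"
    CORRECT_SLOW = "correct, but slow"
    CORRECT_WITH_HINT = "correct, but hints were used"
    # Three classes of incorrect answers (also hypothetical labels).
    WRONG_SLIP = "incorrect, close to the expected answer"
    WRONG_MISCONCEPTION = "incorrect, matches a known misconception"
    WRONG_OTHER = "incorrect, other"

def classify(correct, response_time, time_limit, hints_used,
             near_miss=False, known_misconception=False):
    """Map raw performance data to one of six answer classes.

    Adaptive algorithms then consume the class instead of the raw
    (and messier) response times, hint counts, and answer values.
    """
    if correct:
        if hints_used > 0:
            return AnswerClass.CORRECT_WITH_HINT
        if response_time > time_limit:
            return AnswerClass.CORRECT_SLOW
        return AnswerClass.CORRECT_FAST
    if known_misconception:
        return AnswerClass.WRONG_MISCONCEPTION
    if near_miss:
        return AnswerClass.WRONG_SLIP
    return AnswerClass.WRONG_OTHER

print(classify(True, 4.2, 10.0, hints_used=0))       # CORRECT_FAST
print(classify(False, 8.0, 10.0, 1, near_miss=True)) # WRONG_SLIP
```

The design point is the narrow interface: downstream models (mastery estimation, item selection) branch on six stable categories rather than on heterogeneous raw signals, which is what makes the extra interaction data practical to use.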

  • Correction to: A case study of intended versus actual experience of adaptivity in a tangible storytelling system

    Since the publication of this article [Tanenbaum, K., Hatala, M., Tanenbaum, J. et al. A case study of intended versus actual experience of adaptivity in a tangible storytelling system.

  • Preface to the Special Issue on user modeling for personalized interaction with music