User Modeling and User-Adapted Interaction (UMUAI) provides an interdisciplinary forum for the dissemination of new research results on interactive computer systems that can be adapted or adapt themselves to their current users, and on the role of user models in the adaptation process.

UMUAI has been published since 1991 by Kluwer Academic Publishers (now merged with Springer Verlag).

UMUAI homepage, with a description of the journal's scope and instructions for authors.

Springer UMUAI page with online access to the papers.

Latest Results for User Modeling and User-Adapted Interaction

07 July 2020

The latest content available from Springer:
  • Using autoencoders for session-based job recommendations

    Abstract

    In this work, we address the problem of providing job recommendations in an online session setting, in which we do not have full user histories. We propose a recommendation approach that uses different autoencoder architectures to encode sessions from the job domain. The inferred latent session representations are then used in a k-nearest neighbor manner to recommend jobs within a session. We evaluate our approach on three datasets: (1) a proprietary dataset we gathered from the Austrian student job portal Studo Jobs, (2) a dataset released by XING after the RecSys 2017 Challenge, and (3) anonymized job applications released by CareerBuilder in 2012. Our results show that autoencoders provide relevant job recommendations, maintain high coverage and, at the same time, can outperform state-of-the-art session-based recommendation techniques in terms of system-based and session-based novelty.

    (An illustrative code sketch of this encode-then-k-NN approach appears after the article list.)

  • Preface to the special issue on harnessing personal tracking data for personalization and sense-making

    Abstract

    Increasingly, people are making use of diverse digital services that create many types of personal data. The most recent addition to such services is self-tracking devices that are capable of creating very detailed personal activity records. The focus of this special issue is to explore how such activity records can be exploited to provide user-centric personalization services.

  • Generating post hoc review-based natural language justifications for recommender systems

    Abstract

    In this article, we present a framework to build post hoc natural language justifications that support the suggestions generated by a recommendation algorithm. Our methodology is based on the intuition that review excerpts contain much relevant information that can be used to justify a recommendation; thus, we propose a black-box explanation strategy that takes as input a recommended item and a set of reviews and builds as output a post hoc natural language justification which is completely independent of the underlying recommendation model. To validate our claims, we also introduce three different implementations of our conceptual framework. The first uses natural language processing and sentiment analysis techniques to identify relevant and distinguishing aspects discussed in the reviews, and combines review excerpts mentioning these aspects into a natural language justification that is presented to the target user. The second extends the first by introducing automatic aspect extraction and text summarization, which are exploited to generate a unique synthesis of the main characteristics of the item that is used as the justification. Finally, the third tackles the problem of generating a context-aware justification, that is, a justification that varies with the contextual situation, by automatically learning a lexicon for each contextual setting and using that lexicon to diversify the justifications. In the experimental evaluation, we carried out three user studies in different domains, and the results showed that our methodology makes the recommendation process more transparent, engaging and trustworthy for users, thus confirming the validity of the intuitions behind this work.

    (An illustrative code sketch of a review-based justification builder appears after the article list.)

  • Effects of adapting to user pitch on rapport perception, behavior, and state with a social robotic learning companion

    Abstract

    Social robots such as learning companions, therapeutic assistants, and tour guides depend on the challenging task of establishing rapport with their users. People rarely communicate with words alone; facial expressions, gaze, gestures, and prosodic cues like tone of voice and speaking rate combine to help individuals express their words and convey emotion. One way that individuals communicate a sense of connection with one another is entrainment, where interaction partners adapt their way of speaking, facial expressions, or gestures to each other; entrainment has been linked to trust, liking, and task success and is thought to be a vital phenomenon in how people build rapport. In this work, we introduce a social robot that combines multiple channels of rapport-building behavior, including forms of social dialog and prosodic entrainment. We explore how social dialog and entrainment contribute to both self-reported and behavioral rapport responses. We find that prosodic adaptation enhances perceptions of social dialog, and that social dialog and entrainment combined build rapport. Individual differences indicated by gender mediate these social responses; an individual’s underlying rapport state, as indicated by their verbal rapport behavior, is exhibited and triggered differently depending on gender. These results have important repercussions for assessing and modeling a user’s social responses and for designing adaptive social agents.

    (An illustrative code sketch of pitch entrainment appears after the article list.)

  • Activity recognition using wearable sensors for tracking the elderly

    Abstract

    A population group that is often overlooked in the recent revolution of self-tracking is that of older people. This growing proportion of the general population often faces increasing health issues and discomfort. In order to provide lifestyle advice to the elderly, we need to be able to quantify their lifestyle before and after an intervention. This research focuses on the task of activity recognition (AR) from accelerometer data. To that end, we collected a substantial labelled dataset of older individuals wearing multiple devices simultaneously while performing a strict protocol of 16 activities (the GOTOV dataset, N = 28). Using this dataset, we trained Random Forest AR models under varying sensor set-ups and levels of activity description granularity. The model that combines ankle and wrist accelerometers (GENEActiv) produced the best results (accuracy > 80%) for 16-class classification, and the accuracy increased further (> 85%) when additional physiological information was used. To further investigate the role of granularity in our predictions, we developed the LARA algorithm, which uses a hierarchical ontology that captures prior biological knowledge to increase or decrease the level of activity granularity (i.e., to merge classes). As a result, a 12-class model in which the different paces of walking were merged showed a performance above 93%. When this 12-class model was tested on labelled free-living pilot data, the mean balanced accuracy appeared to be reasonably high, and using the LARA algorithm we show that a 7-class model (lying down, sitting, standing, household, walking, cycling, jumping) was optimal in terms of accuracy and granularity. Finally, we demonstrate the use of the latter model on unlabelled free-living data from a larger lifestyle intervention study. In this paper, we make the validation data as well as the derived prediction models available to the community.

    (An illustrative code sketch of such a windowed-feature AR pipeline with class merging appears after the article list.)
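
A small illustrative sketch of the encode-then-k-NN idea described in "Using autoencoders for session-based job recommendations": sessions are represented as multi-hot bags of job ids, compressed by a small autoencoder, and the most similar past sessions in the latent space vote for jobs to recommend. PyTorch and scikit-learn, the toy random sessions, and all hyperparameters are assumptions made here for illustration; the paper evaluates several autoencoder architectures on real job-portal data, and this is not the authors' implementation.

```python
# Sketch only: session autoencoder + k-NN recommendation (not the authors' code).
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import NearestNeighbors

N_JOBS = 500          # assumed size of the job vocabulary
LATENT_DIM = 32

class SessionAutoencoder(nn.Module):
    def __init__(self, n_items, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_items, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, n_items))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def sessions_to_bags(sessions, n_items):
    """Represent each session as a multi-hot bag of interacted job ids."""
    bags = np.zeros((len(sessions), n_items), dtype=np.float32)
    for i, session in enumerate(sessions):
        bags[i, session] = 1.0
    return torch.from_numpy(bags)

# Toy training sessions (lists of job ids), stand-ins for real job-portal logs.
rng = np.random.default_rng(0)
train_sessions = [rng.choice(N_JOBS, size=rng.integers(3, 10), replace=False)
                  for _ in range(2000)]
X = sessions_to_bags(train_sessions, N_JOBS)

model = SessionAutoencoder(N_JOBS, LATENT_DIM)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(5):                       # a few epochs, for illustration only
    recon, _ = model(X)
    loss = loss_fn(recon, X)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Index the training sessions in latent space.
with torch.no_grad():
    _, Z = model(X)
index = NearestNeighbors(n_neighbors=10).fit(Z.numpy())

def recommend(current_session, k=10):
    """Score jobs by how often they occur in the most similar past sessions."""
    with torch.no_grad():
        _, z = model(sessions_to_bags([current_session], N_JOBS))
    _, nbr_idx = index.kneighbors(z.numpy())
    scores = np.zeros(N_JOBS)
    for j in nbr_idx[0]:
        scores[train_sessions[j]] += 1
    scores[current_session] = -np.inf        # do not re-recommend already seen jobs
    return np.argsort(scores)[::-1][:k]

print(recommend([1, 2, 3]))
```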
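
A minimal sketch in the spirit of the review-based justification framework from "Generating post hoc review-based natural language justifications for recommender systems": for each aspect, keep the most positive review sentence mentioning it, then stitch the excerpts into a short justification. The aspect list, the tiny sentiment word list, the sample reviews, and the build_justification helper are toy assumptions; the paper's implementations rely on proper NLP, sentiment analysis, aspect extraction, and summarization components.

```python
# Sketch only: aspect + sentiment based justification from reviews (not the authors' code).
import re
from collections import defaultdict

ASPECTS = {"plot", "acting", "soundtrack", "pacing"}                      # assumed aspects
POSITIVE = {"great", "excellent", "beautiful", "gripping", "superb", "memorable"}

def split_sentences(text):
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

def score_sentence(sentence):
    """Return the aspects mentioned and a crude positivity score."""
    tokens = set(re.findall(r"[a-z]+", sentence.lower()))
    return tokens & ASPECTS, len(tokens & POSITIVE)

def build_justification(item_name, reviews, max_excerpts=2):
    """Keep, per aspect, the most positive sentence mentioning it, then combine."""
    best = defaultdict(lambda: (0, None))                 # aspect -> (score, sentence)
    for review in reviews:
        for sentence in split_sentences(review):
            aspects, sentiment = score_sentence(sentence)
            for aspect in aspects:
                if sentiment > best[aspect][0]:
                    best[aspect] = (sentiment, sentence)
    ranked = sorted((entry for entry in best.values() if entry[1] is not None),
                    reverse=True)
    excerpts, seen = [], set()
    for _, sentence in ranked:
        if sentence not in seen:
            seen.add(sentence)
            excerpts.append(sentence)
        if len(excerpts) == max_excerpts:
            break
    if not excerpts:
        return f"{item_name} is recommended for you."
    return (f"You might like {item_name}: reviewers say "
            + "; ".join(f'"{e}"' for e in excerpts) + ".")

reviews = [
    "The plot is gripping and the acting is superb. The ending felt rushed.",
    "A beautiful soundtrack and excellent pacing make this memorable.",
]
print(build_justification("this film", reviews))
```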
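
A small sketch of one entrainment channel from "Effects of adapting to user pitch on rapport perception, behavior, and state with a social robotic learning companion": the agent's speaking pitch is nudged part of the way toward the user's recent pitch. The class name, the 0.3 adaptation rate, the 180 Hz default pitch, and the averaging window are illustrative assumptions, not values or methods from the study.

```python
# Sketch only: nudging agent pitch toward the user's pitch (not the study's method).
from collections import deque

class PitchEntrainer:
    def __init__(self, base_pitch_hz=180.0, adaptation=0.3, window=5):
        self.base_pitch_hz = base_pitch_hz        # agent's default speaking pitch
        self.adaptation = adaptation              # 0 = no entrainment, 1 = full matching
        self.recent_user_pitch = deque(maxlen=window)

    def observe_user_utterance(self, mean_f0_hz):
        """Record the mean fundamental frequency of the user's last utterance."""
        self.recent_user_pitch.append(mean_f0_hz)

    def next_agent_pitch(self):
        """Pitch to use for the agent's next utterance."""
        if not self.recent_user_pitch:
            return self.base_pitch_hz
        user_pitch = sum(self.recent_user_pitch) / len(self.recent_user_pitch)
        return (1 - self.adaptation) * self.base_pitch_hz + self.adaptation * user_pitch

entrainer = PitchEntrainer()
for f0 in [220.0, 210.0, 230.0]:                  # user's recent utterances (Hz)
    entrainer.observe_user_utterance(f0)
print(round(entrainer.next_agent_pitch(), 1))     # agent pitch shifted toward the user
```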
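
A small sketch of the kind of accelerometer-based activity-recognition pipeline described in "Activity recognition using wearable sensors for tracking the elderly": windowed statistical features feed a Random Forest, and a coarser model is obtained by merging fine-grained labels (a stand-in for the hierarchy-driven class merging behind the LARA idea). The synthetic data, window size, features, and merge map are assumptions for illustration; they do not reproduce the GOTOV protocol or its sensor set-up.

```python
# Sketch only: windowed features + Random Forest + class merging (not the paper's pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

FS = 50                       # assumed sampling rate (Hz)
WINDOW = 2 * FS               # 2-second windows

def window_features(acc):
    """Simple per-axis statistics for one window of tri-axial accelerometer data."""
    return np.concatenate([acc.mean(axis=0), acc.std(axis=0),
                           np.abs(np.diff(acc, axis=0)).mean(axis=0)])

# Toy labelled stream: 3 fine-grained classes as stand-ins for the 16 GOTOV activities.
rng = np.random.default_rng(0)
amplitudes = {"walking_slow": 0.5, "walking_fast": 1.5, "lying_down": 0.05}
X, y = [], []
for label, amplitude in amplitudes.items():
    for _ in range(200):
        acc = amplitude * rng.standard_normal((WINDOW, 3))
        X.append(window_features(acc))
        y.append(label)
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
fine_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("fine-grained accuracy:", fine_model.score(Xte, yte))

# Merge classes to a coarser granularity, as when the different walking paces are collapsed.
MERGE = {"walking_slow": "walking", "walking_fast": "walking", "lying_down": "lying_down"}

def merge(labels):
    return np.array([MERGE[label] for label in labels])

coarse_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, merge(ytr))
print("coarse accuracy:", coarse_model.score(Xte, merge(yte)))
```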