Conference Year

July 2018

Keywords

visual preferences, preference learning, machine learning, design of experiments, Gaussian process, active learning

Abstract

Occupants working in offices with controllable shading and lighting systems often perform adaptive actions (raising/lowering shades, increasing/decreasing light levels) in an attempt to restore their preferred state of the room. This preference for different room conditions (states) has been shown to vary from person to person and may be affected by exterior conditions. This paper presents an online data-driven methodology which actively queries a new occupant to learn their personalized visual preferences. Preference is governed by a latent preference relation equivalent to a scalar utility function (the higher the utility, the higher the preference for that state). Information about user preferences is available via pairwise-comparison queries (duels between two different states). We model our uncertainty about the utility via a Gaussian Process (GP) prior and the probability of the winner of each duel by means of a Bernoulli likelihood. This generalized preference model is then used in conjunction with different acquisition functions (pure exploration, expected improvement, dueling Thompson sampling) to drive the elicitation process by actively selecting new queries to pose to the occupant. Two different sets of experiments were conducted, focused on actively selecting new duels: (i) to learn the structure of the utility everywhere with the fewest possible queries, and (ii) to learn the maximum of the utility with the fewest possible queries. We illustrate the benefits of our framework by showing that our approach needs drastically fewer duels to infer the structure or the maximum of the underlying latent utility function than randomized data collection. The results of this study can be used to develop efficient, real-time adaptive shading and lighting controls.
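
As a rough illustration of the elicitation loop described in the abstract, the Python sketch below (not the authors' implementation) places a GP prior over a latent utility on a discretized 1-D grid of room states, uses a probit (Bernoulli) likelihood for the winner of each duel, approximates the posterior with a Laplace approximation, and selects the next duel with a pure-exploration rule that queries the pair whose utility difference is most uncertain. The state grid, kernel hyperparameters, synthetic occupant utility, and all function names are illustrative assumptions, not details from the paper.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Candidate room states discretized on a 1-D grid (e.g., a shading level in [0, 1]).
X = np.linspace(0.0, 1.0, 25)[:, None]

def rbf_kernel(A, B, lengthscale=0.2, variance=1.0):
    return variance * np.exp(-0.5 * (A - B.T) ** 2 / lengthscale ** 2)

K = rbf_kernel(X, X) + 1e-6 * np.eye(len(X))   # GP prior covariance on the grid
K_inv = np.linalg.inv(K)

# Hidden occupant utility, used only to simulate duel outcomes.
u_true = np.exp(-0.5 * ((X.ravel() - 0.3) / 0.15) ** 2)

def simulate_duel(i, j):
    """Occupant reports the higher-utility state, with a little response noise."""
    return 1 if u_true[i] + 0.05 * rng.standard_normal() > u_true[j] else 0

def laplace_posterior(duels, outcomes, n_newton=30):
    """MAP estimate of the latent utility and a Laplace approximation of its covariance."""
    f = np.zeros(len(X))
    W = np.zeros((len(X), len(X)))
    for _ in range(n_newton):
        grad = -K_inv @ f                      # gradient contribution of the GP prior
        W[:] = 0.0
        for (i, j), y in zip(duels, outcomes):
            s = 1.0 if y == 1 else -1.0
            z = s * (f[i] - f[j])
            ratio = norm.pdf(z) / max(norm.cdf(z), 1e-12)
            g = s * ratio                      # d log Phi(z) / d f_i
            h = ratio * (z + ratio)            # curvature of -log Phi(z)
            grad[i] += g; grad[j] -= g
            W[i, i] += h; W[j, j] += h
            W[i, j] -= h; W[j, i] -= h
        f = f + np.linalg.solve(K_inv + W, grad)   # Newton step on the log posterior
    return f, np.linalg.inv(K_inv + W)

def pure_exploration_duel(cov):
    """Pick the pair of states whose utility difference is most uncertain."""
    best, best_var = (0, 1), -np.inf
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            v = cov[i, i] + cov[j, j] - 2.0 * cov[i, j]
            if v > best_var:
                best, best_var = (i, j), v
    return best

# Active elicitation loop: repeatedly ask the duel with the most uncertain outcome.
duels = [(0, len(X) - 1)]
outcomes = [simulate_duel(*duels[0])]
for _ in range(15):
    f_map, cov = laplace_posterior(duels, outcomes)
    i, j = pure_exploration_duel(cov)
    duels.append((i, j))
    outcomes.append(simulate_duel(i, j))

f_map, _ = laplace_posterior(duels, outcomes)
print("Inferred most-preferred state:", float(X[np.argmax(f_map), 0]))
print("True most-preferred state:    ", float(X[np.argmax(u_true), 0]))

Swapping the acquisition rule (e.g., for an expected-improvement or dueling Thompson sampling criterion, as named in the abstract) only requires replacing pure_exploration_duel; the preference model itself stays unchanged.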
