Preference learning is an essential component in numerous applications, such as recommendation systems, decision-making processes, and personalized services. We propose a novel approach to preference learning that interleaves Gaussian Processes (GPs) and Robust Ordinal Regression (ROR). A Gaussian process places a probability distribution over the latent function values that generate users' preferences. Our method extends the traditional non-parametric Gaussian process framework by approximating the latent function with a highly flexible parameterized function, which we call a θ-additive function, where θ is the parameter set. The set θ reflects the degree of sophistication of the generalized additive model that can potentially represent the user's preferences. To learn which components belong to θ, we update a probability distribution over the space of all possible sets θ, according to how well the parameterized function approximates the latent function. We predict pairwise preferences by selecting the parameter set θ that maximizes the posterior distribution and performing robust ordinal regression based on this set. Experimental results on synthetic data demonstrate the effectiveness and robustness of the proposed methodology.
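The core idea of learning a distribution over candidate parameter sets θ and predicting preferences under the maximum-a-posteriori set can be illustrated with a minimal sketch. This is not the paper's method: a least-squares additive fit with a BIC-style complexity penalty stands in for the GP-based posterior over θ, the attribute subsets, the noise level, and helper names such as `fit_additive` are all illustrative assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Hypothetical setup: items described by 3 attributes; the (hidden) latent
# utility is additive in attributes 0 and 1 only.
X = rng.uniform(size=(40, 3))
latent = 2.0 * X[:, 0] + X[:, 1] ** 2   # true latent utility (unknown to the learner)

# Candidate parameter sets theta: which attributes the additive model may use.
thetas = [s for r in range(1, 4) for s in combinations(range(3), r)]
prior = np.full(len(thetas), 1.0 / len(thetas))   # uniform prior over theta

def fit_additive(X, y, theta):
    """Least-squares fit of a simple additive model restricted to theta
    (one linear and one quadratic basis function per selected attribute)."""
    feats = np.column_stack(
        [f(X[:, j]) for j in theta for f in (lambda v: v, np.square)]
    )
    coef, *_ = np.linalg.lstsq(feats, y, rcond=None)
    return feats @ coef

# Posterior over theta: weight each candidate set by how well its additive
# model approximates the latent values (Gaussian likelihood, fixed noise),
# minus a BIC-style penalty so richer sets are not favored automatically.
noise, n = 0.1, len(X)
log_post = np.log(prior) + np.array([
    -0.5 * np.sum((latent - fit_additive(X, latent, t)) ** 2) / noise**2
    - 0.5 * len(t) * np.log(n)
    for t in thetas
])
posterior = np.exp(log_post - log_post.max())
posterior /= posterior.sum()

theta_map = thetas[int(np.argmax(posterior))]   # MAP parameter set

# Pairwise preference prediction: item a is preferred to item b iff the
# fitted utility under the MAP set is larger for a.
u_hat = fit_additive(X, latent, theta_map)
prefers = lambda a, b: u_hat[a] > u_hat[b]
```

Because the synthetic utility is exactly additive in attributes 0 and 1, the complexity penalty lets the two-attribute set win over the full set, and the fitted utilities recover the latent ordering; the paper's approach replaces this crude scoring with a principled posterior derived from the GP and predicts preferences robustly via ROR rather than from a single point estimate.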