Query Word2vec Results
Top-ranked authors per query (rank, author, similarity score, first sentence of the matched abstract or the matched paper title; "…" marks text truncated in the source):

Query "support vector machine":
 1  Zoubin Ghahramani      0.756979  This note describes a method for approximate inference in infinite models that uses deterministic Expectation Propagation instead of Monte Carlo.
 2  Vladimir Koulikov      0.753291  We prove asymptotic normality for Lk-functionals int |hat Fn - Fn|^k g(t) dt, where Fn is the empirical distribution function of a sample from a decreasing density and hat Fn is the least concave majorant of Fn.
 3  Ankur Agarwal          0.75229   We describe a sparse Bayesian regression method for recovering 3D human body motion directly from silhouettes extracted from monocular video sequences.
 4  William Triggs         0.75229   (same abstract as rank 3)
 5  Peter Sollich          0.751849  The equivalent kernel (Silverman, 1984) is a way of understanding how Gaussian process regression works for large sample sizes based on a continuum limit.
 6  Luc Pronzato           0.746651  We consider a nonlinear regression model with parameterized variance and compare several methods of estimation: the Weighted Least-Squares (WLS) estimator; the two-stage LS (TSLS) estimator; and the recursively re-weighted LS (RWLS) estimator.
 7  Neil Lawrence          0.746151  Summarising a high dimensional data-set with a low dimensional embedding is a standard approach for exploring its structure.
 8  Neil Lawrence          0.744409  In this paper we introduce a new underlying probabilistic model for principal component analysis (PCA).
 9  Bernhard Schölkopf     0.742674  We present a new approximation scheme for support vector decision functions in object detection.
10  Wolf Kienzle           0.742674  (same abstract as rank 9)

Query "machine learning":
 1  Fabio Ciravegna        0.77203   Armadillo [1][2] is an automatic system for producing domain-specific Semantic Web oriented annotation on large repositories.
 2  Roderick Murray-Smith  0.745874  We describe a rhythmic interaction mechanism for mobile devices.
 3  Thierry Artieres       0.739673  We present a Hidden Markov Model-based approach to cluster sequences.
 4  Michael Pfeiffer       0.735735  In this paper we study the application of machine learning methods in complex computer games.
 5  Benjamin Blankertz     0.735263  The investigation of innovative Human-Computer Interfaces (HCI) provides a challenge for future multimedia research and development.
 6  Esther Koller-Meier    0.732291  Recently, an optimization approach for fast visual tracking of articulated structures based on Stochastic Meta-Descent (SMD) has been presented (Bray et al. 2004).
 7  Luc Van Gool           0.732291  (same abstract as rank 6)
 8  Matthieu Bray          0.732291  (same abstract as rank 6)
 9  Thomas Navin Lal       0.729102  Designing a Brain Computer Interface (BCI) system one can choose from a variety of features that may be useful for classifying brain activity during a mental task.
10  Benjamin Blankertz     0.725839  The Berlin Brain-Computer Interface (BBCI) project is guided by the idea to train a computer using advanced machine learning and signal processing techniques.

Query "hyperplane":
 1  Hubert Ming Chen       0.124457  Looking Algebraically at Tractable Quantified Boolean Formulas.
 2  Ljupco Todorovski      0.106794  Subgroup discovery with CN2-SD.
 3  John Williamson        0.068016  Granular Synthesis for Display of Time-Varying Probability Densities.
 4  Walter Daelemans       0.065535  This tutorial introduces the concept of shallow parsing for text mining applications.
 5  Nada Lavrac            0.061616  Induction of comprehensible models for gene expression datasets by subgroup discovery methodology.
 6  Benjamin Blankertz     0.051342  Data set IIa: Spatial patterns of self-controlled brain rhythm modulati…
 7  Bernhard Schölkopf     0.029644  We interpret several well-known algorithms for dimensionality reduction …
 8  Ian Nabney             0.024653  The role of visualisation in data analysis.
 9  Chris Dance            0.019964  We present a novel method for generic visual categorization … this bag of keypoints method is based on vector quantization of affine invariant descriptors of image patches.
10  Gabriela Csurka        0.019964  (same abstract as rank 9)

Query "margin":
 1  Joel Ratsaby           0.531173  Sauer's Lemma is extended to classes mH of binary-valued functions …
 2  Lucien Birge           0.527553  The purpose of this paper is to give a new, easily tractable and sharp …
 3  Joel Ratsaby           0.520119  Let mF be a finite VC-dimension (d) class of binary-valued functions …
 4  Vladimir Koulikov      0.514683  We prove asymptotic normality for Lk-functionals int |hat Fn - Fn|^k g(t) dt … (same abstract as "support vector machine" rank 2)
 5  Juho Rousu             0.514498  We present a sparse dynamic programming algorithm that, given two …
 6  John Shawe-Taylor      0.514498  (same abstract as rank 5)
 7  Vladimir Koulikov      0.513507  We consider the process hat Fn - Fn, being the difference between the …
 8  Fernando Perez-Cruz    0.512543  In this paper, we revisit how maximum margin classifiers can be obtained …
 9  Marc Schoenauer        0.511597  Based on the theory of non-negative supermartingales, convergence …
10  Ran Gilad-Bachrach     0.511593  The Bayes classifier achieves the minimal error rate by constructing a …

Query "SVR":
 1  Fabio Ciravegna        0.000481  Armadillo [1][2] is an automatic system for producing domain-specific Semantic Web oriented annotation on large repositories.

Query "kernel":
 1  Bernhard Schölkopf     0.444463  Kernel Methods in Computational Biology.
 2  Walter Daelemans       0.420598  This tutorial introduces the concept of shallow parsing for text mining applications.
 3  Jean-Philippe Vert     0.397057  A Primer on Kernel Methods.
 4  Jaz Kandola            0.378628  We present an algorithm based on convex optimization for constructing kernels for semi-supervised learning.
 5  Peter Sollich          0.374041  The equivalent kernel (Silverman, 1984) is a way of understanding how Gaussian process regression works for large sample sizes based on a continuum limit.
 6  Nada Lavrac            0.370014  Induction of comprehensible models for gene expression datasets by subgroup discovery methodology.
 7  Blaz Fortuna           0.366361  This paper provides an overview of string kernels, which compare text documents by the substrings they contain.
 8  Peter Sollich          0.365307  The equivalent kernel is a way of understanding how Gaussian process regression works for large sample sizes based on a continuum limit.
 9  Zoubin Ghahramani      0.363709  This note describes a method for approximate inference in infinite models that uses deterministic Expectation Propagation instead of Monte Carlo.
10  Fabio Ciravegna        0.362056  Armadillo [1][2] is an automatic system for producing domain-specific Semantic Web oriented annotation on large repositories.
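As a minimal sketch of how rankings like the above can be produced — assuming, since the document does not say, that each author is represented by a single embedding vector (e.g. the mean word2vec vector of their abstracts) and that the score is cosine similarity — the names `rank_authors`, `authors`, and the toy 3-dimensional vectors below are all hypothetical:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_authors(query_vec, author_vecs, top_n=10):
    """Return the top_n (author, similarity) pairs, highest score first."""
    scores = [(name, cosine(query_vec, vec)) for name, vec in author_vecs.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_n]

# Toy example (hypothetical authors and vectors, not from the experiment):
authors = {
    "A": np.array([1.0, 0.0, 0.0]),
    "B": np.array([0.6, 0.8, 0.0]),
    "C": np.array([0.0, 1.0, 0.0]),
}
query = np.array([1.0, 0.0, 0.0])
print(rank_authors(query, authors, top_n=2))  # "A" ranks first, then "B"
```

With real data, `author_vecs` would hold one vector per author and `query_vec` the (averaged) word2vec vector of the query terms.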
mate inference in infinite models that uses deterministic Expectation Propagation instead of Monte Carlo. For infinite Gaussian mixtures, t
nctionals int |hat Fn-Fn|^k g(t),dt , where Fn is the empirical distribution function of a sample from a decreasing density and hat Fn is the
n method for recovering 3D human body motion directly from silhouettes extracted from monocular video sequences. No detailed body sh
n method for recovering 3D human body motion directly from silhouettes extracted from monocular video sequences. No detailed body sh
s a way of understanding how Gaussian process regression works for large sample sizes based on a continuum limit. In this paper we show
el with parameterized variance and compare several methods of estimation: the Weighted Least-Squares (WLS) estimator; the two-stage L
with a low dimensional embedding is a standard approach for exploring its structure. In this paper we provide an overview of some existin
ing probabilistic model for principal component analysis (PCA). Our formulation interprets PCA as a particular Gaussian process prior on a
e for support vector decision functions in object detection. In the present approach we are building on an existing algorithm where the set
e for support vector decision functions in object detection. In the present approach we are building on an existing algorithm where the set
or producing domainspecific Semantic Web oriented annotation on large repositories. Integrating Information Extraction, Ontology machi
anism for mobile devices. A PocketPC with a three degree of freedom linear acceleration meter is used as the experimental platform for d
ed approach to cluster sequences. This problem is adressed in term of machinelearning Hidden Markov Models (HMM) structure from dat
machinelearning methods in complex computer games. A combination of hierarchical reinforcement machinelearning and simple heuristic
omputer Interfaces (HCI) provides a challenge for future multimedia research and development. Brain-Computer Interfaces (BCI) exploit th
st visual tracking of articulated structures based on Stochastic Meta-Descent (SMD) has been presented (Bray et al. 2004). SMD is a gradie
st visual tracking of articulated structures based on Stochastic Meta-Descent (SMD) has been presented (Bray et al. 2004). SMD is a gradie
st visual tracking of articulated structures based on Stochastic Meta-Descent (SMD) has been presented (Bray et al. 2004). SMD is a gradie
I) system one can choose from a variety of features that may be useful for classifying brain activity during a mental task. For the special ca
CI) project is guided by the idea to train a computer using advanced machinelearning and signal processing techniques in order to improve
tified Boolean Formulas.
allow parsing for text mining applications. GAMBL, Genetic Algorithm Optimization of Memory-Based WSD
x optimization for constructing kernels for semi-supervised learning. The kernel matrices are derived from the spectral decomposition of g
s a way of understanding how Gaussian process regression works for large sample sizes based on a continuum limit. In this paper we show
gene expression datasets by subgroup discovery methodology Subgroup discovery with CN2-SD
kernels. String kernels compare text documents by the substrings they contain. Because of high computational complexity, methods for a
anding how Gaussian process regression works for large sample sizes based on a continuum limit. In this paper we show how to approxim
mate inference in infinite models that uses deterministic Expectation Propagation instead of Monte Carlo. For infinite Gaussian mixtures, t
or producing domainspecific Semantic Web oriented annotation on large repositories. Integrating Information Extraction, Ontology machi
or infinite Gaussian mixtures, the algorithm provides cluster parameter estimates, cluster memberships, and model evidence. Model param
asing density and hat Fn is the least concave majorant of Fn . From this we derive two test statistics for the null hypothesis that a probabili
equences. No detailed body shape model is needed, and realism is ensured by training on real human motion capture data. The tracker es
equences. No detailed body shape model is needed, and realism is ensured by training on real human motion capture data. The tracker es
m limit. In this paper we show (1) how to approximate the equivalent kernel of the widely-used squared exponential (or Gaussian) kernel
LS) estimator; the two-stage LS (TSLS) estimator, where the LS estimator obtained at the first stage is plugged into the variance function u
de an overview of some existing techniques for discovering such embeddings. We then introduce a novel probabilistic interpretation of pr
r Gaussian process prior on a mapping from a latent space to the observed data-space. We show that if the prior s covariance function co
isting algorithm where the set of support vectors is replaced by a smaller so-called reduced set of synthetic points. Instead of finding the r
isting algorithm where the set of support vectors is replaced by a smaller so-called reduced set of synthetic points. Instead of finding the r
on Extraction, Ontology machinelearning and Semantic Browsing into Organizational Knowledge Processes
he experimental platform for data acquisition. Dynamic Movement Primitives are used to learn the limit cycle behavior associated with the
els (HMM) structure from data. Using a top-down approach, we iteratively simplify an initial HMM that consists in a mixture of as many le
nelearning and simple heuristics is used to learn strategies for the game Settlers of Catan (
uter Interfaces (BCI) exploit the ability of human communication and control bypassing the classical neuromuscular communication chann
ay et al. 2004). SMD is a gradient descent with local step size adaptation that combines rapid convergence with excellent scalability. Stoch
ay et al. 2004). SMD is a gradient descent with local step size adaptation that combines rapid convergence with excellent scalability. Stoch
ay et al. 2004). SMD is a gradient descent with local step size adaptation that combines rapid convergence with excellent scalability. Stoch
mental task. For the special case of classifying EEG signals we propose the usage of the state of the art feature selection algorithms Recurs
echniques in order to improve classification performance and to reduce the need of subject training. Instead of having the human adapt to
ally linear embedding (LLE) all utilize local neighborhood information to construct a global embedding of the manifold. We show how all t
teractive visualisation: Stretch and curvature plots; Hierarchical visualisation: user-defined and automated; Data queries. Visualising Dynam
ss variations inherent to the object class. This bag of keypoints method is based on vector quantization of affine invariant descriptors of im
ss variations inherent to the object class. This bag of keypoints method is based on vector quantization of affine invariant descriptors of im
the margin muh(x) of a binary valued function h at a point xin [n] is defined as the largest non-negative integer a such that h is constant on
ation to nonasymptotic lower bounds for the minimax risk of estimators.
gin no larger than N on every element in [n] , where the margin muh(x) of hin mH on a point xin [n] is defined as the largest non-negative i
asing density and hat Fn is the least concave majorant of Fn . From this we derive two test statistics for the null hypothesis that a probabili
sponding to a sample from a decreasing density. We extent Wang s result on pointwise convergence of hat Fn-Fn and prove that this differ
assifiers for non-separable data sets. This can be achieved by finding the separation in which incorrectly classified points have the smallest
d geometrical convergence rates are derived. In the d -dimensional case ( d > 1 ), the algorithm studied here uses a different step-size upda
he single concept in the class which has the minimal error. This way, the Bayes Point avoids some of the deficiencies of the Bayes classifie
on Extraction, Ontology machinelearning and Semantic Browsing into Organizational Knowledge Processes
he spectral decomposition of graph Laplacians, and combine labeled and unlabeled data in a systematic fashion. Unlike previous work usin
m limit. In this paper we show (1) how to approximate the equivalent kernel of the widely-used squared exponential (or Gaussian) kernel
nal complexity, methods for approximating string kernels are shown. Several extensions for string kernels are also presented. Finally strin
per we show how to approximate the equivalent kernel of the widely-used squared exponential (or Gaussian) kernel and related kernels. T
or infinite Gaussian mixtures, the algorithm provides cluster parameter estimates, cluster memberships, and model evidence. Model param
on Extraction, Ontology machinelearning and Semantic Browsing into Organizational Knowledge Processes
model evidence. Model parameters, such as the expected size of the mixture, can be efficiently tuned via EM with EP as the E-step. The s
ull hypothesis that a probability density is monotone. These tests are compared with existing proposals such as the supremum distance be
n capture data. The tracker estimates 3D body pose by using Relevance Vector machinelearning regression to combine a learned autoregr
n capture data. The tracker estimates 3D body pose by using Relevance Vector machinelearning regression to combine a learned autoregr
ponential (or Gaussian) kernel and related kernels, and (2) how analysis using the equivalent kernel helps to understand the machinelearni
d into the variance function used for WLS estimation at the second stage; and finally the recursively re-weighted LS (RWLS) estimator, wh
obabilistic interpretation of principal component analysis (PCA) that we term dual probabilistic PCA (DPPCA). The DPPCA model has the ad
prior s covariance function constrains the mappings to be linear the model is equivalent to PCA, we then extend the model by considering
points. Instead of finding the reduced set via unconstrained optimization, we impose a structural constraint on the synthetic vectors such t
points. Instead of finding the reduced set via unconstrained optimization, we impose a structural constraint on the synthetic vectors such t
e behavior associated with the rhythmic gestures. We outline the open technical and user experience challenges in the development of us
sists in a mixture of as many left-right HMMs as training sequences. Our approach allows to learn, in an unsupervised manner, the cluster m
muscular communication channels. In general, BCIs offer a possibility of communication for people with severe neuromuscular disorders, su
ith excellent scalability. Stochastic sampling helps to avoid local minima in the optimization process. We have extended the SMD algorithm
ith excellent scalability. Stochastic sampling helps to avoid local minima in the optimization process. We have extended the SMD algorithm
ith excellent scalability. Stochastic sampling helps to avoid local minima in the optimization process. We have extended the SMD algorithm
re selection algorithms Recursive Feature Elimination and Zero-Norm Optimization which are based on the training of svm (SVM). These a
of having the human adapt to a predefined feedback that is computed from a fixed set of features, the BBCI adapts to the user s brain wa
e manifold. We show how all three algorithms can be described as kernel PCA on specially constructed Gram matrices, and illustrate the si
Data queries. Visualising Dynamics.
ffine invariant descriptors of image patches. We propose and compare two alternative implementations using different classifiers: Na
ffine invariant descriptors of image patches. We propose and compare two alternative implementations using different classifiers: Na
ger a such that h is constant on the interval Ia(x) =[x-a, x+a] subseteq [n] . Estimates are obtained for the cardinality of classes of binary val
d as the largest non-negative integer a such that h is constant on the interval Ia(x) =[x-a, x+a] subseteq [n] . An estimate on the cardinality
ull hypothesis that a probability density is monotone. These tests are compared with existing proposals such as the supremum distance be
Fn-Fn and prove that this difference converges as a process in distribution to the corresponding process for two-sided Brownian motion wi
sified points have the smallest {sl negative margin. This re-interpretation of the maximum margin classifier, when viewed as a soft margin
uses a different step-size update in each direction. However, the critical value for the step-size, and the resulting convergence rate do not
ficiencies of the Bayes classifier. We prove a bound on the generalization error for Bayes Point Machines when machinelearning linear clas
ion. Unlike previous work using diffusion kernels and Gaussian random field kernels, a nonparametric kernel approach is presented that in
ponential (or Gaussian) kernel and related kernels, and (2) how analysis using the equivalent kernel helps to understand the machinelearni
re also presented. Finally string kernels are compared to BOW. KERNEL CANONICAL CORRELATION ANALYSIS WITH APPLICATIONS
) kernel and related kernels. This is easiest for uniform input densities, but we also discuss the generalization to the non-uniform case. We
model evidence. Model parameters, such as the expected size of the mixture, can be efficiently tuned via EM with EP as the E-step. The s
M with EP as the E-step. The same approach can apply other infinite models such as infinite HMMs.
h as the supremum distance between hat Fn and Fn . Decoding aggregated profiles using dynamic calibration of machine vibration data
o combine a learned autoregressive dynamical model with robust shape descriptors extracted automatically from image silhouettes. We s
o combine a learned autoregressive dynamical model with robust shape descriptors extracted automatically from image silhouettes. We s
understand the machinelearning curves for Gaussian processes. Can Gaussian Process Regression Be Made Robust Against Model Mismatc
hted LS (RWLS) estimator, where the LS estimator obtained after k observations is plugged into the variance function to compute the k -th
. The DPPCA model has the additional advantage that the linear mappings from the embedded space can easily be non-linearised through
end the model by considering less restrictive covariance functions which allow non-linear mappings. This more general Gaussian process l
on the synthetic vectors such that the resulting approximation can be evaluated via separable filters. Applications that require scanning an
on the synthetic vectors such that the resulting approximation can be evaluated via separable filters. Applications that require scanning an
nges in the development of usable rhythmic interfaces. Granular Synthesis for Display of Time-Varying Probability Densities
pervised manner, the cluster models that best represent training data. We provide experimental results on two different application fields
e neuromuscular disorders, such as Amyotrophic Lateral Sclerosis (ALS) or spinal cord injury. Beyond medical applications, a BCI conjuncti
e extended the SMD algorithm with new features for fast and accurate tracking by adapting the different step sizes between as well as wi
e extended the SMD algorithm with new features for fast and accurate tracking by adapting the different step sizes between as well as wi
e extended the SMD algorithm with new features for fast and accurate tracking by adapting the different step sizes between as well as wi
training of svm (SVM). These algorithms can provide more accurate solutions than standard filter methods for feature selection. We adapt
I adapts to the user s brain waves by machinelearning ( let the machines learn ). One aspect of the BBCI is the capability of giving fast-resp
matrices, and illustrate the similarities and differences between the algorithms with representative examples. A Tutorial on Support Vect
g different classifiers: Na
g different classifiers: Na
dinality of classes of binary valued functions with a margin of at least N on a sample Ssubseteq[n] . n the Complexity of Good Samples for m
An estimate on the cardinality of mH with a dependence on N , n and d , is obtained. There exists a critical threshold N^* = O((nln n)/d) suc
h as the supremum distance between hat Fn and Fn . Decoding aggregated profiles using dynamic calibration of machine vibration data
wo-sided Brownian motion with parabolic drift. Asymptotic normality of the Lk -error of the Grenander estimator
when viewed as a soft margin formulation, will allow us to extend the range of SVM to any number of support vectors. We formulate the m
ulting convergence rate do not depend on the dimension. Those results are discussed with respect to previous work. Finally, rigourous num
en machinelearning linear classifiers, and show that it is at most 1.71 times the generalization error of the Bayes classifier, independent of
approach is presented that incorporates order constraints during optimization. This results in flexible kernels and avoids the need to choo
understand the machinelearning curves for Gaussian processes. Can Gaussian Process Regression Be Made Robust Against Model Mismatc
WITH APPLICATIONS
n to the non-uniform case. We show further that the equivalent kernel can be used to understand the machinelearning curves for Gaussia
M with EP as the E-step. The same approach can apply other infinite models such as infinite HMMs.
of machine vibration data
from image silhouettes. We studied several different combination methods, the most effective being to learn a nonlinear observation-up
from image silhouettes. We studied several different combination methods, the most effective being to learn a nonlinear observation-up
Robust Against Model Mismatch?
function to compute the k -th weight for WLS estimation. We draw special attention to RWLS estimation which can be implemented recu
sily be non-linearised through Gaussian processes. We refer to this model as a Gaussian process latent variable model (GPLVM). We devel
ore general Gaussian process latent variable model (GPLVM) is then evaluated as an approach to the visualisation of high dimensional data
ations that require scanning an entire image can benefit from this representation: when using separable filters, the average computationa
ations that require scanning an entire image can benefit from this representation: when using separable filters, the average computationa
ability Densities
two different application fields, on-line handwriting signals and hypermedia navigation patterns. Handling spatial information in on-line ha
al applications, a BCI conjunction with exciting multimedia applications, e.g. a dexterity game, could define a new level of control possibiliti
ep sizes between as well as within video frames and by introducing a robust likelihood function which incorporates both depths and surfac
ep sizes between as well as within video frames and by introducing a robust likelihood function which incorporates both depths and surfac
ep sizes between as well as within video frames and by introducing a robust likelihood function which incorporates both depths and surfac
or feature selection. We adapt the methods for the purpose of selecting EEG channels. For a motor imagery paradigm we show that the nu
e capability of giving fast-response feedback. This was investigated in keyboard typing paradigms with self-paced as well as reactive finger
reshold N^* = O((nln n)/d) such that for N > N^* or Nleq N^* the cardinality of mH increases or decreases sharply toward the cardinality o
of machine vibration data
rt vectors. We formulate the machinelearning machinelearning similarly to the nu-SVM, therefore we will be able to readily control the nu
us work. Finally, rigourous numerical investigations on some 1-dimensional functions validate the theoretical results. {Dimension-independ
ayes classifier, independent of the input dimension and length of training. We show that when machinelearning linear classifiers, the Bayes
ls and avoids the need to choose among different parametric forms. Our approach relies on a quadratically constrained quadratic program
Robust Against Model Mismatch?
inelearning curves for Gaussian processes, and investigate how kernel smoothing using the equivalent kernel compares to full Gaussian pr
rn a nonlinear observation-update correction based on joint regression with respect to the predicted state and the observations. We dem
rn a nonlinear observation-update correction based on joint regression with respect to the predicted state and the observations. We dem
hich can be implemented recursively when the regression model in linear (even if the variance function is nonlinear), and is thus particular
ble model (GPLVM). We develop a practical algorithm for GPLVMs which allow for non-linear mappings from the embedded space giving a
ation of high dimensional data for three different data-sets. Additionally our non-linear algorithm can be further kernelised leading to twi
rs, the average computational complexity for evaluating a reduced set vector on a test patch of size (h x w) drops from O(hw) to O(h+w). W
rs, the average computational complexity for evaluating a reduced set vector on a test patch of size (h x w) drops from O(hw) to O(h+w). W
new level of control possibilities also for healthy customers decoding information directly from the users brain, as reflected in electroence
porates both depths and surface orientations. A realistic deformable hand model reinforces the accuracy of our tracker. The advantages of
porates both depths and surface orientations. A realistic deformable hand model reinforces the accuracy of our tracker. The advantages of
porates both depths and surface orientations. A realistic deformable hand model reinforces the accuracy of our tracker. The advantages of
paradigm we show that the number of used channels can be reduced significantly without increasing the classification error. The resulting
paced as well as reactive finger movements in a time critical task. In both settings a prediction of the laterality of upcoming movements wa
harply toward the cardinality of mF or zero, respectively. This result is used to obtain an upper bound on the cardinality of a class mH(S) wh
e able to readily control the number of support vectors using the nu parameter. Kernel Methods and Their Potential Use in Signal Processi
l results. {Dimension-independent convergence rate for non-isotropic (1,lambda)-ES
ing linear classifiers, the Bayes Point is almost identical to the Tukey Median and Center Point. We extend these definitions beyond linear
constrained quadratic program (QCQP), and is computationally feasible for large datasets. We evaluate the kernels on real datasets using s
nlinear), and is thus particularly attractive for signal processing applications Moindres carr
m the embedded space giving a non-linear probabilistic version of PCA. We develop the new algorithm to provide a principled approach to
ther kernelised leading to twin kernel PCA in which a mapping between feature spaces occurs. machinelearning to Learn with the Informa
drops from O(hw) to O(h+w). We show experimental results on handwritten digits and face detection. Face Detection - Efficient and Rank
drops from O(hw) to O(h+w). We show experimental results on handwritten digits and face detection. Face Detection - Efficient and Rank
ain, as reflected in electroencephalographic (EEG) signals which are recorded non-invasively from user s scalp. This contribution introduce
our tracker. The advantages of the resulting tracker over state-of-the-art methods are corroborated through experiments. Smart Particle F
our tracker. The advantages of the resulting tracker over state-of-the-art methods are corroborated through experiments. Smart Particle F
our tracker. The advantages of the resulting tracker over state-of-the-art methods are corroborated through experiments. Smart Particle F
ssification error. The resulting best channels agree well with the expected underlying cortical activity patterns during the mental tasks. Fu
y of upcoming movements was possible before EMG onset. Enhancing Brain-Computer Interfaces by machinelearning Techniques
cardinality of a class mH(S) which consists of all functions in mF that have a margin greater than N on all elements of a sample Ssubset[n]
hese definitions beyond linear classifiers and define the Bayes Depth of a classifier. We prove generalization bound in terms of this new de
kernels on real datasets using svm, with encouraging results.
[Continuation of the Word2vec results table above: wrapped abstract text spilled across lines. Recoverable paper titles: "3D Human Pose from Silhouettes by Relevance Vector Regression"; "Learning to Learn with the Informative Vector Machine"; "Face Detection - Efficient and Rank Deficient"; "Smart Particle Filtering for 3D Hand Tracking"; "Robust EEG Channel Selection Across Subjects for Brain Computer Interfaces"; "Margin based feature selection - theory and algorithms"; "Brain-computer communication with slow cortical potentials: Methodology and critical aspects".]
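The Word2vec rankings above are consistent with a nearest-neighbour lookup in an embedding space: the query term's vector is compared against each author's vector, and the authors with the highest cosine similarity are returned together with their scores. A minimal sketch of such a lookup in pure Python, with toy 3-dimensional vectors (the author names and vectors here are illustrative only, not the experiment's actual embeddings):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def top_k_by_cosine(query_vec, author_vecs, k=3):
    """Rank authors by cosine similarity of their embedding to the query embedding."""
    sims = [(name, cosine(query_vec, vec)) for name, vec in author_vecs.items()]
    sims.sort(key=lambda t: -t[1])  # highest similarity first
    return sims[:k]

# Toy data, illustrative only: in the experiment these would be trained
# word2vec vectors for the query term and for each author's documents.
author_vecs = {
    "AuthorA": [1.0, 0.2, 0.0],
    "AuthorB": [0.0, 1.0, 0.1],
    "AuthorC": [0.9, 0.1, 0.1],
    "AuthorD": [0.0, 0.0, 1.0],
}
ranking = top_k_by_cosine([1.0, 0.0, 0.0], author_vecs, k=3)
```

Each entry of `ranking` is an `(author, similarity)` pair, mirroring the name/score columns of the table above.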
BoW results with expert scores and ground-truth (TRUE) experts per query:

Query                   BoW match               BoW score  Expert  TRUE expert
support vector machine  NadaLavrac              1          0       Bernhard Schölkopf
support vector machine  FernandoPerez-Cruz      1          0       John Shawe-Taylor
support vector machine  NicolasBaskiotis        0.317221   0       Vladimir Vapnik
support vector machine  JanezZerovnik           0.317221           Corinna Cortes
support vector machine  ArthurGretton           0.317221           Chris J.C. Burges
support vector machine  CraigSaunders           0.258199           Isabelle Guyon
support vector machine  BenjaminBlankertz       0.235702           Jason Weston
support vector machine  BernhardSchölkopf       0.180579           André Elisseeff
support vector machine  APhilipDawid            0.180334           Sayak Mukherjee
support vector machine  HenryTirri              0.170251           Alexander J. Smola
machine learning        NadaLavrac              1                  Ronan Collobert
machine learning        FernandoPerez-Cruz      1
machine learning        Jean-FrançoisPessiot    0.316228
machine learning        LaviShpigelman          0.182574
machine learning        JaumeBaixeries          0.182574
machine learning        AmauryHabrard           0.182574
machine learning        MarcSchoenauer          0.138675
machine learning        BlazZmazek              0.136083
machine learning        JanLarsen               0.119523
machine learning        ChristopherWilliams     0.110432
hyperplane              NadaLavrac              1
hyperplane              FernandoPerez-Cruz      1
hyperplane              BorisHorvat             0
hyperplane              BernhardSchölkopf       0
hyperplane              BernhardSchölkopf       0
hyperplane              Jean-FrançoisPessiot    0
hyperplane              BernhardSchölkopf       0
hyperplane              FlorenceAlcheBuc        0
hyperplane              GaborLugosi             0
hyperplane              MarcSebban              0
margin                  NadaLavrac              1
margin                  FernandoPerez-Cruz      1
margin                  CraigSaunders           0.597614
margin                  GerardGovaert           0.486664
margin                  GemmaCasas-Garriga      0.402015
margin                  NicolasGodzik           0.390567
margin                  GuidoDornhege           0.377964
margin                  DoriPeleg               0.373002
margin                  NikolasList             0.358569
margin                  JuhoRousu               0.358569
SVR                     BorisHorvat             1
SVR                     BernhardSchölkopf       1
SVR                     BernhardSchölkopf       1
SVR                     Jean-FrançoisPessiot    1
SVR                     BernhardSchölkopf       1
SVR                     FlorenceAlcheBuc        1
SVR                     GaborLugosi             1
SVR                     MarcSebban              1
SVR                     BernadettaTarigan       1
SVR                     BorisHorvat             1
kernel                  ArashRafiey-Hafshejani  1
kernel                  NadaLavrac              1
kernel                  FernandoPerez-Cruz      1
kernel                  GaborLugosi             0.707107
kernel                  LucaZaniboni            0.688247
kernel                  HansSimon               0.686803
kernel                  JoseBalcazar            0.624695
kernel                  GunnarRätsch            0.593732
kernel                  ElisabethGassiat        0.589768
kernel                  LucienBirge             0.58346
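Score values in the BoW column such as 0.707107 (= 1/√2) and 0.316228 (= 1/√10) are consistent with cosine similarity between binary term-incidence vectors, |D ∩ Q| / √(|D| · |Q|). A minimal sketch under that assumption (the report does not state the exact weighting scheme, and the term lists below are illustrative only):

```python
import math

def binary_bow_cosine(doc_terms, query_terms):
    """Cosine similarity between binary bag-of-words incidence vectors:
    |intersection| / sqrt(|doc vocabulary| * |query vocabulary|)."""
    doc, query = set(doc_terms), set(query_terms)
    if not doc or not query:
        return 0.0
    return len(doc & query) / math.sqrt(len(doc) * len(query))

# One shared term between a 1-term document and a 2-term query gives
# 1/sqrt(2) ~ 0.707107; one shared term against a 10-term document gives
# 1/sqrt(10) ~ 0.316228, matching values seen in the score column above.
s1 = binary_bow_cosine(["kernel"], ["kernel", "method"])
s2 = binary_bow_cosine(["kernel"] + [f"t{i}" for i in range(9)], ["kernel"])
```

The round values 1 and 0 in the table then correspond to full term overlap and no overlap, respectively.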
TRUE expert lists per query (paired with the BoW matches repeated from the table above):

TRUE expert         Query                   BoW match
Bernhard Schölkopf  support vector machine  NadaLavrac
John Shawe-Taylor   support vector machine  FernandoPerez-Cruz
Vladimir Vapnik     support vector machine  NicolasBaskiotis
Corinna Cortes      support vector machine  JanezZerovnik
Chris J.C. Burges   support vector machine  ArthurGretton
Isabelle Guyon      support vector machine  CraigSaunders
Jason Weston        support vector machine  BenjaminBlankertz
André Elisseeff     support vector machine  BernhardSchölkopf
Sayak Mukherjee     support vector machine  APhilipDawid
Alexander J. Smola  support vector machine  HenryTirri
Ronan Collobert     machine learning        NadaLavrac
Robert E. Schapire  machine learning        FernandoPerez-Cruz
Nello Cristianini   machine learning        Jean-FrançoisPessiot
Dale Schuurmans     machine learning        LaviShpigelman
Grace Wahba         machine learning        JaumeBaixeries
Thomas Hofmann      machine learning        AmauryHabrard
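The TRUE column gives the expert-provided ground truth for each query. One common way to summarise how well a retrieved list matches such a list is precision@k; a minimal sketch, assuming exact name matching after stripping spaces (both the metric choice and the name normalisation are assumptions, not something the tables specify):

```python
def precision_at_k(retrieved, relevant, k=10):
    """Fraction of the top-k retrieved names that appear in the ground-truth list."""
    relevant_set = set(relevant)
    hits = sum(1 for name in retrieved[:k] if name in relevant_set)
    return hits / k

# Names taken from the "support vector machine" rows above.
retrieved = ["NadaLavrac", "FernandoPerez-Cruz", "NicolasBaskiotis",
             "JanezZerovnik", "ArthurGretton", "CraigSaunders",
             "BenjaminBlankertz", "BernhardSchölkopf", "APhilipDawid",
             "HenryTirri"]
true_experts = ["Bernhard Schölkopf", "John Shawe-Taylor", "Vladimir Vapnik",
                "Corinna Cortes", "Chris J.C. Burges", "Isabelle Guyon",
                "Jason Weston", "André Elisseeff", "Sayak Mukherjee",
                "Alexander J. Smola"]
# Name formats differ between the two columns ("BernhardSchölkopf" vs.
# "Bernhard Schölkopf"), so strip spaces before comparing.
normalized_true = [n.replace(" ", "") for n in true_experts]
p = precision_at_k(retrieved, normalized_true, k=10)
```

On this example only one retrieved name (BernhardSchölkopf) appears in the ground-truth list, so `p` comes out as 0.1, reflecting how little the BoW ranking for this query overlaps with the expert list.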