mixOmics
math.univ-toulouse.fr/biostat
Agenda
● Introduction
● Reminders (?)
● Explore one data set (PCA)
● Discriminant analysis (LDA, PLS-DA)
● Data integration (PLS, CCA, GCCA)
● Graphical outputs
● Extensions: sparse and multilevel
● Conclusion
Introduction
Research hypothesis
Multidisciplinarity!
● Nearly unlimited quantity of data from multiple and
heterogeneous sources
● Computational issues to foresee
● Biological interpretation for validation
● Keep pace with new technologies
A close interaction between statisticians, bioinformaticians and
molecular biologists is essential to provide meaningful results
« Bio-info »
Data integration
● Univariate
Mean, median, standard deviation...
● Multivariate unsupervised
PCA
● Multivariate supervised
PLS-DA
● Multi-block unsupervised
PLS (2 blocks), GCCA
● Multi-block supervised
GCC-DA
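As a rough orientation, the sketch below maps these analysis types onto R functions (base R and the mixOmics package); the object names X, Y, X1, X2 are placeholders, not objects defined in these slides:

  library(mixOmics)
  # X: matrix of quantitative variables (samples in rows); Y: factor of conditions
  mean(X[, 1]); median(X[, 1]); sd(X[, 1])   # univariate summaries, variable by variable
  res.pca   <- pca(X, ncomp = 3)             # multivariate unsupervised (PCA)
  res.plsda <- plsda(X, Y, ncomp = 2)        # multivariate supervised (PLS-DA)
  res.pls   <- pls(X1, X2, ncomp = 2)        # two-block unsupervised (PLS); X1, X2: two data sets
  # multi-block analyses (GCCA, GCC-DA) are handled by the block.* / wrapper.* functions,
  # e.g. block.plsda(); see the mixOmics documentation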
Guidelines
● I want to explore one single data set (e.g. microarray data):
– I would like to identify the trends or patterns in my data, experimental bias, or to see whether my samples 'naturally' cluster according to the biological conditions: Principal Component Analysis (PCA)
● I have one single data set (e.g. microarray data) and I am interested in classifying my samples into known classes:
– Here X = expression data and Y = vector indicating the classes of the samples. I would like to know how informative my data are to correctly classify my samples, as well as to predict the class of new samples: PLS-Discriminant Analysis (PLS-DA)
● I want to unravel the information contained in two data sets, where two types of variables are measured on the same samples (e.g. metabolomics and transcriptomics data):
– I would like to know if I can extract common information from the two data sets (or highlight the correlation between the two data sets). If the total number of variables is less than the number of samples: Canonical Correlation Analysis (CCA) or Projection to Latent Structures (PLS) in canonical mode. If the total number of variables is greater than the number of samples: Regularized Canonical Correlation Analysis (rCCA) or Projection to Latent Structures (PLS) in canonical mode
Practical work
Reminders (?)
Variance
[Figure: five observations X1, ..., X5 on an axis around their mean X̄, with the squared deviations (Xi − X̄)²]
var(X) = (1/n) Σ_{i=1}^{n} (Xi − X̄)²
Standard deviation
sd(X) = √var(X)  (same units as X)
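For instance, in R (note that var() and sd() use a 1/(n−1) denominator rather than 1/n):

  x <- c(174.0, 175.3, 193.5, 186.5, 187.2, 181.5, 184.0, 184.5, 175.0, 184.0)  # heights used later
  mean(x)   # mean of the observations
  var(x)    # variance: average squared deviation from the mean
  sd(x)     # standard deviation = sqrt(var(x)), in the same units as x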
Covariance
cov(X,Y) = (1/n) Σ_{i=1}^{n} (Xi − X̄)(Yi − Ȳ),   cov(X,X) = var(X)
Intuitively, centering splits the scatterplot into four quadrants around (X̄, Ȳ):
● if the positive products (Xi − X̄)(Yi − Ȳ) dominate → positive linear relationship
● if the negative products dominate → negative linear relationship
[Scatterplot of Y against X with the + and − quadrants around the means]
The covariance depends on the physical units, hence the correlation coefficient.
Correlation
Some properties of correlation coefficients:
– Between −1 and 1
[Scatterplots comparing Pearson (ρ) and Spearman (ρs) correlations: ρ = 0.944, ρs = 0.855; ρ = 0.722, ρs = 0.915; ρ = 0.89, ρs = 1]
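A small base-R illustration, using the Height and Weight values from the linear-combination slide that follows:

  height <- c(174.0, 175.3, 193.5, 186.5, 187.2, 181.5, 184.0, 184.5, 175.0, 184.0)
  weight <- c(65.6, 71.8, 80.7, 72.6, 78.8, 74.8, 86.4, 78.4, 62.0, 81.6)
  cov(height, weight)                       # covariance, depends on the units (cm, kg)
  cor(height, weight)                       # Pearson correlation, between -1 and 1
  cor(height, weight, method = "spearman")  # rank-based (Spearman) correlation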
Linear combination
2 variables, 2 coefficients: c1 = 0.5, c2 = 2, i.e. W = (0.5, 2)

Linear combination of the 2 variables Height and Weight with coefficients c1 and c2:
LC = 0.5 × Height + 2 × Weight

Height   Weight   LC
174.0    65.6     218.20
175.3    71.8     231.25
193.5    80.7     258.15
186.5    72.6     238.45
187.2    78.8     251.20
181.5    74.8     240.35
184.0    86.4     264.80
184.5    78.4     249.05
175.0    62.0     211.50
184.0    81.6     255.20

Matrix notation: LC = XW
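The same computation in R, using the matrix notation LC = XW:

  X <- cbind(Height = c(174.0, 175.3, 193.5, 186.5, 187.2, 181.5, 184.0, 184.5, 175.0, 184.0),
             Weight = c(65.6, 71.8, 80.7, 72.6, 78.8, 74.8, 86.4, 78.4, 62.0, 81.6))
  W  <- c(0.5, 2)   # coefficients c1 and c2
  LC <- X %*% W     # one value per individual: 218.20, 231.25, ..., 255.20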
Center / scale
● After centering and scaling, the mean is 0 and the standard deviation is 1 (and so is the variance).
Z_i = (X_i − X̄) / sd(X)
Center / scale
X          X − mean(X)          X / sd(X)          [X − mean(X)] / sd(X)
v1 v2 v3 v4 v1 v2 v3 v4 v1 v2 v3 v4 v1 v2 v3 v4
1 3.9 0.2 -3.2 0.6 1 1.4 0.4 -3.8 0.1 1 1.1 0.1 -0.6 0.8 1 0.7 0.3 -0.8 0.1
2 3.3 -1.7 -4.0 0.6 2 0.8 -1.5 -4.5 0.0 2 1.0 -1.1 -0.8 0.8 2 0.4 -1.0 -0.9 0.0
3 1.2 1.8 3.3 0.6 3 -1.3 2.0 2.8 0.0 3 0.3 1.2 0.7 0.7 3 -0.6 1.3 0.6 0.0
4 0.4 -0.9 -1.4 0.3 4 -2.1 -0.7 -2.0 -0.3 4 0.1 -0.6 -0.3 0.4 4 -1.0 -0.5 -0.4 -0.6
5 3.6 -0.6 10.3 -0.2 5 1.1 -0.4 9.8 -0.8 5 1.0 -0.4 2.0 -0.3 5 0.5 -0.3 2.0 -1.7
6 2.5 2.4 -7.5 0.4 6 0.0 2.5 -8.0 -0.2 6 0.7 1.5 -1.5 0.5 6 0.0 1.7 -1.6 -0.3
7 -1.4 0.6 3.2 1.2 7 -3.9 0.8 2.7 0.7 7 -0.4 0.4 0.6 1.6 7 -1.8 0.5 0.5 1.4
8 2.4 -1.2 -1.0 1.4 8 -0.1 -1.0 -1.5 0.9 8 0.7 -0.7 -0.2 1.8 8 -0.1 -0.6 -0.3 1.7
9 2.3 0.2 2.1 0.1 9 -0.2 0.4 1.6 -0.4 9 0.7 0.1 0.4 0.2 9 -0.1 0.2 0.3 -0.9
10 6.7 -2.6 3.4 0.7 10 4.2 -2.5 2.8 0.1 10 2.0 -1.7 0.7 0.9 10 1.9 -1.6 0.6 0.2
Mean 2.5 -0.2 0.5 0.6 Mean 0 0 0 0 Mean 1.1 -0.1 0.1 1.2 Mean 0 0 0 0
S.D. 2.2 1.5 5.0 0.5 S.D. 2.2 1.5 5.0 0.5 S.D. 1 1 1 1 S.D. 1 1 1 1
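In R, these four versions can be obtained for any numeric matrix X (a minimal sketch):

  Xc  <- scale(X, center = TRUE, scale = FALSE)   # X - mean(X): each column has mean 0
  Xsd <- sweep(X, 2, apply(X, 2, sd), "/")        # X / sd(X): each column has sd 1
  Xcs <- scale(X)                                 # (X - mean(X)) / sd(X): mean 0 and sd 1
  round(colMeans(Xcs), 10); apply(Xcs, 2, sd)     # check: means 0, standard deviations 1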
Log transformation

X              log2(X)
0.125 = 2^−3     −3
0.25  = 2^−2     −2
0.5   = 2^−1     −1
1     = 2^0       0
2     = 2^1       1
4     = 2^2       2
8     = 2^3       3

Order is preserved:
4 < 5 < 8      →   2 < ~2.3 < 3
2 < 3 < 4      →   1 < ~1.6 < 2
0.1 < 0.125    →   ~−3.3 < −3

Y = log2(X)  ↔  X = 2^Y
Y = log10(X) ↔  X = 10^Y
Y = ln(X)    ↔  X = e^Y = exp(Y)

City                     Population   log10
Toulouse                 441 802      5.65
Colomiers                35 186       4.55
Tournefeuille            25 340       4.40
Muret                    23 864       4.38
...
Castanet-Tolosan         11 033       4.04
Saint-Orens...           10 918       4.04
Saint-Jean               10 259       4.01
Revel                    9 361        3.97
Portet-sur-Garonne       9 435        3.97
Auterive                 9 107        3.96
...
La Magdelaine-sur-T.     1 006        3.00
Grépiac                  990          2.99
Landorthe                946          2.98
Vigoulet-Auzil           944          2.97
...
Belbèze-de-Lauragais     104          2.02
Saint-Germier            103          2.01
Seyre                    102          2.01
Gouzens                  95           1.98
Lourde                   98           1.99
Pouze                    97           1.99
...
Saccourvielle            13           1.11
Cirès                    13           1.11
Bourg-d'Oueil            8            0.90
Trébons-de-Luchon        8            0.90
Caubous                  6            0.78
Baren                    5            0.70
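In R:

  x <- c(0.125, 0.25, 0.5, 1, 2, 4, 8)
  log2(x)          # -3 -2 -1 0 1 2 3
  log10(441802)    # ~5.65 (Toulouse)
  2^log2(5)        # back-transformation: returns 5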
[Data: n individuals described by quantitative variables]
[Plot of the 10 individuals: Weight (kg) vs Height (cm)]
Weight 65.6 71.8 80.7 72.6 78.8 74.8 86.4 78.4 62.0 81.6
Height 174.0 175.3 193.5 186.5 187.2 181.5 184.0 184.5 175.0 184.0
Weight 65.6 71.8 80.7 72.6 78.8 74.8 86.4 78.4 62.0 81.6
[Scatterplot: Weight (kg) vs Height (cm)]
Height 174.0 175.3 193.5 186.5 187.2 181.5 184.0 184.5 175.0 184.0
Weight 65.6 71.8 80.7 72.6 78.8 74.8 86.4 78.4 62.0 81.6
Waist g. 71.5 79.0 83.2 77.8 80.0 82.5 82.0 76.8 68.5 77.5
[3D scatterplot: Height (cm) × Weight (kg) × Waist girth (cm)]
4D ?
Shoulder g.
Chest g.
Waist g.
Weight
Height
[Scatterplot: Shoulder girth × Chest girth × Waist girth]
[Same scatterplot: Shoulder girth × Chest girth × Waist girth, with the first principal component overlaid]
1st Principal Component: «beefiness»
In other words
[Pairwise scatterplots of the variables V1, V2 and V3 for three simulated cases: 1), 2) and 3)]
Case 3)
1)
2)
3)
Variables plot
The coordinate of a variable Xj on a principal component PCi is given by the correlation between this variable and the component PCi.
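This can be checked directly with base R (a sketch; X is a numeric matrix of samples × variables):

  res <- prcomp(X, center = TRUE, scale. = TRUE)   # PCA on the scaled data
  cor(X, res$x[, 1:2])   # variable coordinates on PC1 and PC2 = correlations with the PC scores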
Variables plot
Remember trigonometry and right triangles: the correlation between two variables ≈ the cosine of the angle between them.
Variables plot
[Correlation circles for cases 1), 2) and 3)]
Correlation matrices
1)    V1     V2     V3
V1   1.00  -0.10   0.00
V2  -0.10   1.00  -0.12
V3   0.00  -0.12   1.00

2)    V1     V2     V3
V1   1.00   0.88  -0.05
V2   0.88   1.00  -0.11
V3  -0.05  -0.11   1.00

3)    V1     V2     V3
V1   1.00   0.88   0.92
V2   0.88   1.00   0.81
V3   0.92   0.81   1.00
[Plots for cases 1), 2) and 3)]
Screeplot
[Barplot of the proportion of variance explained by each principal component (annotated: 17 %, 7 %, 2 %, 1 %)]
Individual plots
Summary of the 5 original variables:
Mean 108.1 94.2 75.3 70.6 174.4
Var. 68.6 37.5 50.8 85.7 109.3
Total variance: 68.6 + 37.5 + 50.8 + 85.7 + 109.3 ≈ 351.9
Covariance matrix between the PCs:
       PC1     PC2     PC3    PC4   PC5
PC1  255.66    0.00    0.00   0.00  0.00
PC2    0.00   60.18    0.00   0.00  0.00
PC3    0.00    0.00   23.48   0.00  0.00
PC4    0.00    0.00    0.00   8.61  0.00
PC5    0.00    0.00    0.00   0.00  4.01

255.66 + 60.18 + 23.48 + 8.61 + 4.01 = 351.94

255.66 is the greatest variance that can be obtained on the individuals with a linear combination of the initial variables. The same quantity of information (351.94) is kept, but it is "optimally" allocated.
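A sketch of the same computation with prcomp(), assuming the five measurements are stored in a numeric matrix X (centered but not scaled, so variances stay in the original units):

  res <- prcomp(X, center = TRUE, scale. = FALSE)
  round(cov(res$x), 2)      # diagonal covariance matrix between the PCs
  sum(diag(cov(res$x)))     # total variance carried by the PCs ...
  sum(apply(X, 2, var))     # ... equals the total variance of the initial variables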
[PCA sample plot: 4 replicates of condition A; 1 replicate of condition B (to be removed?); 3 replicates of condition B and 3 replicates of the control]
[PCA sample plot: 3 replicates C2_mut; 1 replicate C1_mut (to be removed?); 3 replicates C1_wt]
PCA = projection
PCA = projection
[Comic strip: "I'm TWO-D boy, the X-Y boy who doesn't care about the Z!" (scenario & illustration Pascal Jousselin, colour Laurence Croix, web pjousselin.free.fr)]
Discriminant analysis
p quantitative variables
1 qualitative variable
n individuals
Separated?

      V1     V2     V3   group
40   0.32   0.79  -1.78    B
41   1.14   5.79  -1.64    B
42  -1.21  -2.88  -1.50    B
43   1.38   1.71  -2.11    B
44  -0.80  -0.38  -1.99    B
45  -2.04  -4.60  -2.00    B
46   7.67   5.84  -2.09    B
47  -4.50  -0.15  -1.85    B
48  -0.19   3.95  -1.89    B
49   5.92   1.54  -1.72    B
50   4.82  -1.70  -2.41    B

           V1  V2  V3
Mean        0   0   0
Variance   20  10   2
[PCA of the simulated data: screeplot (PC1 ≈ 58 %, PC2 ≈ 30 %, PC3 ≈ 12 % of the variance) and biplot of the 50 samples with the variables Var 1, Var 2, Var 3]
● The 3 PCs are clearly associated with the 3 original variables: the samples are spread first by V1, then by V2, then by V3 (following the decreasing variances 20, 10, 2).
[Histograms of the samples on the first linear discriminant (LD1), one per group]

Coefficients of linear discriminants:
              LD1
shoulder g.   0.12
chest g.     -0.02
waist g.      0.11
weight       -0.11
height        0.14

The coefficients indicate that chest girth is the least discriminant variable (loading -0.02). The other variables contribute in nearly the same way (loadings around 0.1 in absolute value).
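Such coefficients are the kind of output returned by lda() in the MASS package; a minimal sketch, where `body` (the data frame of measurements) and `group` (the factor of classes) are placeholder names:

  library(MASS)
  fit <- lda(group ~ shoulder + chest + waist + weight + height, data = body)
  fit$scaling                             # coefficients of the linear discriminants (LD1)
  scores <- predict(fit)$x                # LD1 score of each sample
  table(body$group, predict(fit)$class)   # confusion matrix on the training data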
LDA: principle
[Sample plots compared: PCA vs LDA]
Comparison PCA / PLS-DA
The Small Round Blue Cell Tumors data set from Khan et al. (2001) contains information on 63 samples and 2308 genes. The samples are distributed in four classes: 8 Burkitt lymphoma (BL), 23 Ewing sarcoma (EWS), 12 neuroblastoma (NB) and 20 rhabdomyosarcoma (RMS).
[Sample plots: PCA, PLS-DA, and PLS-DA with variable selection (see the sparse extensions); also viewable in 3D]
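A minimal mixOmics sketch of this comparison; it assumes the srbct data shipped with the package exposes the gene matrix and the class factor as below (check the package help page):

  library(mixOmics)
  data(srbct)
  X <- srbct$gene     # 63 samples x 2308 genes
  Y <- srbct$class    # tumour classes: BL, EWS, NB, RMS
  res.pca   <- pca(X, ncomp = 3)
  res.plsda <- plsda(X, Y, ncomp = 3)
  plotIndiv(res.pca,   group = Y, legend = TRUE)                  # unsupervised view
  plotIndiv(res.plsda, group = Y, legend = TRUE, ellipse = TRUE)  # supervised view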
Data integration
Data integration
Two types of variables are measured on the same matching samples: X (n × p) and Y (n × q), with n << p + q.
p quantitative variables (X), q quantitative variables (Y), n individuals
Aims:
● Understand the correlation/covariance structure between the two data sets
CCA: principle
● CCA can be viewed as an iterative algorithm (like PCA)
● Maximise the correlation (ρ1) between two linear combinations: one of the X variables (t1), the other of the Y variables (u1):
t1 = a11 X1 + a12 X2 + … + a1p Xp
u1 = b11 Y1 + b12 Y2 + … + b1q Yq
ρ1 = cor(t1, u1) = max_{t,u} cor(t, u)
t1 and u1 are the first canonical variates and ρ1 is the first canonical correlation.
● For the next levels, iterate the process under orthogonality constraints
● CCA is analogous to PCA for the production and interpretation of graphical outputs
● Mathematical aspects are in the same vein as PCA (eigendecomposition of matrices)
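Classical CCA is available in base R through cancor(); a sketch, where X and Y are two numeric matrices measured on the same samples (and each must have fewer variables than samples):

  res <- cancor(X, Y)
  res$cor          # canonical correlations rho1, rho2, ...
  res$xcoef[, 1]   # coefficients a11, ..., a1p defining t1
  res$ycoef[, 1]   # coefficients b11, ..., b1q defining u1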
… with limits
Alternatives
● PLS-related methods. The PLS algorithm is equivalent to finding linear combinations of the X variables and of the Y variables that have the greatest covariance.
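Both options can be sketched with mixOmics on the nutrimouse data (assuming, as in the package examples, a lipid block and a gene-expression block measured on the same mice):

  library(mixOmics)
  data(nutrimouse)
  X <- nutrimouse$lipid    # lipid concentrations
  Y <- nutrimouse$gene     # gene expression
  res.pls <- pls(X, Y, ncomp = 2, mode = "canonical")    # maximises the covariance
  res.rcc <- rcc(X, Y, ncomp = 2, method = "shrinkage")  # regularised CCA (n << p + q)
  plotIndiv(res.rcc)               # samples in the space of the canonical variates
  plotVar(res.rcc, cutoff = 0.5)   # correlation circle for both variable sets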
Generalisation
Tenenhaus, A., Philippe, C., Guillemot, V., Lê Cao K-A., Grill, J., Frouin, V. 2014,
Variable selection for generalized canonical correlation analysis, Biostatistics
Graphical outputs
The same principles as for PCA hold for the other multivariate methods mentioned here.
Suggestions:
● Identify the pairs of highly related variables
[Heatmap crossing Variables set 1 and Variables set 2]
These heatmaps differ from the usual ones, which cross individuals and variables.
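Such variable × variable heatmaps can be drawn from a fitted integration model with cim() in mixOmics; a sketch reusing the rCCA fit above (relevance networks follow the same logic with network()):

  cim(res.rcc, comp = 1:2, xlab = "genes", ylab = "lipids")   # clustered image map between the two variable sets
  network(res.rcc, comp = 1:2, cutoff = 0.5)                  # keep only pairs with |association| > 0.5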
Extensions: sparse
Curse of dimensionality
https://en.wikipedia.org/wiki/Curse_of_dimensionality
https://en.wikipedia.org/wiki/Occam's_razor
Sparse PCA
High-throughput experiments: too many variables, noisy or irrelevant; the PCA output is difficult to visualise and understand.
→ the signal is clearer if some of the variable weights {a1, …, ap} are set to 0 for the 'irrelevant' variables (those with small weights)
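In mixOmics this is done through the keepX argument of spca(); a minimal sketch, where the number of variables kept on each component (here 50) is an arbitrary choice to be tuned:

  library(mixOmics)
  res.spca <- spca(X, ncomp = 2, keepX = c(50, 50))  # at most 50 variables per component
  selectVar(res.spca, comp = 1)$name                 # variables selected on the 1st component
  plotVar(res.spca)                                  # only the selected variables appear
  plotIndiv(res.spca)                                # individuals plot with the sparse components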
Graphical outputs
PCA vs sparse PCA:
– variables plot
– individuals plot
Extensions: multilevel
Principle
● In repeated-measures experiments, the between-subject variation can be larger than the time/treatment variation
● Multivariate projection-based methods assume that samples are independent of each other
● In univariate analysis we use a paired t-test rather than a t-test
● In multivariate analysis we use a multilevel approach
● Different sources of variation can be separated (treatment effect within subjects and differences between subjects)
Paired data
Fig. from Westerhuis et al. (2009). Multivariate paired data analysis: multilevel
PLSDA versus OPLSDA. Metabolomics 6(1).
Data decomposition
Decomposition of the data into within- and between-sample variation:
X = Xm + Xb + Xw
where Xm is the offset term, Xb the between-sample variation and Xw the within-sample variation.
Liquet, B. Lê Cao, K-A., et al. (2012). A novel approach for biomarker selection and
the integration of repeated measures experiments from two platforms, BMC
Bioinformatics, 13:325.
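In mixOmics, this decomposition is what the multilevel option implements; a minimal sketch, assuming a repeated-measures design where `subject` gives, for each sample, the individual it comes from (argument names to be checked in the package documentation):

  library(mixOmics)
  design <- data.frame(sample = subject)        # one row per sample, identifying the subject
  Xw <- withinVariation(X, design = design)     # within-subject part of X (Xw)
  res <- splsda(X, Y, ncomp = 2, keepX = c(50, 50),
                multilevel = design)            # multilevel sparse PLS-DA on the within variation
  plotIndiv(res, group = Y, legend = TRUE)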
● No treatment effect
can be observed
To put it in a nutshell
● Multivariate linear methods make it possible to answer a wide range of biological questions:
– data exploration
– classification
– integration of multiple data sets
● Variable selection (sparse)
● Cross-over designs (multilevel)

● Principles
PCA:    max var(aX)           → a ?
PLS1:   max cov(aX, by)       → a, b ?
PLS2:   max cov(aX, bY)       → a, b ?
CCA:    max cor(aX, bY)       → a, b ?
PLS-DA: → PLS2 (Y codes the classes)
GCCA:   max Σ cov(ai Xi, aj Xj)  → ai, aj ?
Questions, feedback
Contact: [email protected]
mixOmics development
Kim-Anh Lê Cao, Univ. Melbourne
Ignacio González, INRA Toulouse
Benoît Gautier, UQDI
Florian Rohart, TRI, UQ
Sébastien Déjean, Univ. Toulouse
François Bartolo, Methodomics
Xin Yi Chua, QFAB

Methods development
Amrit Singh, UBC, Vancouver
Benoît Liquet, Univ. Pau
Jasmin Straube, QFAB
Philippe Besse, INSA Toulouse
Christèle Robert, INRA Toulouse