Brain dynamics uncovered using a machine-learning algorithm


Research briefing

CEBRA is a machine-learning method that can be used to compress time series in a way that reveals otherwise hidden structures in the variability of the data. It excels at processing behavioural and neural data recorded simultaneously, and it can decode activity from the visual cortex of the mouse brain to reconstruct a viewed video.

The problem

Modern neuroscience research is enabling data collection at a rapid pace, yet our ability to understand the complex nonlinear systems that make up the nervous system is still limited [1]. It is only relatively recently that neuroscientists have been able to record large-scale behaviour and neural activity. However, the tools to jointly consider such data have been lacking. Moreover, there has been tremendous progress in machine-learning technologies [2–4], yet questions remain about how real-world continuous and discrete time-series data can be analysed. Thus, there is a crucial need for innovative computational tools to understand the complex relationships between the information that can be observed (albeit still limited) and the hidden dynamics scientists want to understand.

The solution

To tackle this problem, we undertook a computational study to develop a theoretically guaranteed algorithm that enables us to estimate the latent (hidden) factors that underlie neural data. We generalized a machine-learning technique, based on what is known as contrastive learning [2,3], that learns how high-dimensional data can be arranged (embedded) in a lower-dimensional space (called a latent space) such that similar data points are close together and more-different ones are farther apart. The embedding in this space can be used to infer latent relationships and structures in the data. We extended this contrastive-learning approach to sample data with either discrete or continuous 'labels'. These labels can be other data collected at the same time, or just time itself (Fig. 1). We thus introduce three variants of contrastive learning: supervised (using user-defined annotations), self-supervised (using time-only labels) and a hybrid variant. The advantage of contrastive learning is its ability to jointly consider neural data and behavioural labels: these could be measured movements, abstract labels (such as 'reward') or deconstructed sensory features (such as the colours or textures of images). One key feature of CEBRA is to encourage users to first use self-supervised learning (discovery-driven science), and then do supervised learning (hypothesis testing) with labels to understand what contributes to the embedding. If the label is not represented in the latent space, no structure is found.

Our method performs substantially better at recovering the ground-truth data than do other dimensionality-reduction algorithms such as t-SNE, UMAP and variational auto-encoders. We conducted several neuroscientific analyses to highlight CEBRA's use for the life sciences. First, we found a highly similar latent structure in neural data recorded using two techniques, addressing an important open question about the comparability of data collected by different means. Second, we used CEBRA to decode activity recorded in a brain region called the visual cortex in a mouse watching a video, leading to high-performance decoding of that video. This demonstrates that even the primary visual cortex (which is often considered to underlie only relatively basic visual processing) can be used to decode videos in a brain–machine-interface style.

Future directions

The methods we introduce are not limited to neuroscience. Many data sets involve time or joint information, and CEBRA can be used in cases in which other dimensionality-reduction tools (such as UMAP or t-SNE) are used and can benefit from explicitly using time. We imagine that this technology could be useful for studying development, animal behaviour and gene-expression data. One of CEBRA's strengths is that it can combine data across modalities and helps to limit nuances (such as changes to the data that depend on how they were collected).

All tools have their limitations. CEBRA will excel when there are enough observations, but when there are too few data points (for example, from too few neurons), the embedding structure and decoding suffer. If available, data sets can be merged to increase the number of observations used to train the CEBRA neural network.

An obvious next step is integrating CEBRA into brain–machine interfaces to establish robust latent embeddings for high-performance decoding with the required hardware. Our work is just one step towards developing the theoretically grounded algorithms needed in neurotechnology.

Mackenzie Weygandt Mathis is at the Swiss Federal Institute of Technology Lausanne, Geneva, Switzerland.

This is a summary of: Schneider, S. et al. Learnable latent embeddings for joint behavioural and neural analysis. Nature https://doi.org/10.1038/s41586-023-06031-6 (2023).

Cite this as: Nature https://doi.org/10.1038/d41586-023-01339-9 (2023).
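To make the contrastive-learning idea described above concrete, here is a minimal sketch of an InfoNCE-style objective with time-adjacent positive pairs, written in NumPy. This is an illustration of the general technique, not the authors' implementation: the toy linear encoder, the data shapes and all names here are hypothetical stand-ins for CEBRA's trained neural network.

```python
import numpy as np

def embed(x, W):
    """Toy linear encoder plus L2 normalization (a stand-in for a trained network)."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def infonce_loss(z_ref, z_pos, temperature=0.1):
    """InfoNCE: pull each reference embedding toward its positive sample,
    push it away from the other samples in the batch (the negatives)."""
    logits = z_ref @ z_pos.T / temperature           # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))            # diagonal entries are the true pairs

rng = np.random.default_rng(0)
neural = rng.normal(size=(100, 20))   # 100 time bins of activity from 20 "neurons"
W = rng.normal(size=(20, 3))          # project into a 3-dimensional latent space

# Self-supervised variant: positives are samples adjacent in time.
# A supervised variant would instead pick positives with similar behavioural labels.
z_ref = embed(neural[:-1], W)
z_pos = embed(neural[1:], W)
loss = infonce_loss(z_ref, z_pos)
```

Minimizing this loss over the encoder parameters is what arranges similar samples close together and dissimilar ones far apart in the latent space; swapping the time-based positive-pair rule for a label-based one gives the supervised variant.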

Nature  |  Published online 3 May 2023


EXPERT OPINION

"The authors developed a self-supervised tool that embeds high-dimensional neural activity in a lower-dimensional space, allowing researchers to relate behavioural actions to large-scale recordings of neural activity more easily. Techniques to do this are currently lacking, especially because many were not developed for neural data, which often evolve over time. This new approach enables behaviours to be effectively decoded from neural activity, with better accuracy than using other models."

Anne Churchland is at the University of California, Los Angeles, California, USA.

REFERENCES

1. Urai, A. E., Doiron, B., Leifer, A. M. & Churchland, A. K. Nature Neurosci. 25, 11–19 (2022).
2. Hyvärinen, A., Sasaki, H. & Turner, R. In Proc. 22nd Int. Conf. Artif. Intell. Stat. 89, 859–868 (2019).
3. Khosla, P. et al. In Adv. Neural Inf. Process. Syst. 33, 18661–18673 (2020).
4. van der Maaten, L. J. P., Postma, E. O. & van den Herik, H. J. J. Mach. Learn. Res. 10, 1–36 (2009).
5. Mathis, A. et al. Nature Neurosci. 21, 1281–1289 (2018).

FIGURE

[Figure 1 panels: a, Behavioural data: position (0 m to 1.6 m track, left and right runs); b, Discovery: time only; c, Hypothesis: position. Axes show latent dimensions 1–3.]

Figure 1 | Dissecting hidden structure in neural data. A machine-learning algorithm called CEBRA was developed to learn how high-dimensional data can be embedded in a lower-dimensional space (called a latent space, in latent dimensions), using either supervised 'labels' (hypothesis-driven) or unsupervised learning (discovery-driven). These labels can be other, simultaneously collected data or just time itself, for example. Here, neural activity was recorded in the brain of a rat running up and down a 1.6-metre track. a, The latent embedding of behavioural data, coloured according to the rat's position on the track. b, c, The latent embedding of neural data trained unsupervised with time-only information (b) or supervised with the position on the track (c). Schneider, S. et al./Nature (CC BY 4.0).
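Once an embedding like the one in Figure 1 exists, reading behaviour back out of it can be as simple as nearest-neighbour regression. The sketch below decodes track position from a synthetic 3D latent with a k-nearest-neighbour decoder; this is a common, hypothetical choice for illustration, not necessarily the decoder used in the paper, and the data here are simulated.

```python
import numpy as np

def knn_decode(train_z, train_pos, test_z, k=5):
    """Predict the position for each test embedding as the mean position
    of its k nearest training embeddings (Euclidean distance)."""
    preds = np.empty(len(test_z))
    for i, z in enumerate(test_z):
        d = np.linalg.norm(train_z - z, axis=1)
        nearest = np.argsort(d)[:k]
        preds[i] = train_pos[nearest].mean()
    return preds

# Synthetic example: a 1.6-metre track position encoded (noisily) in a 3D latent.
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.6, size=200)                      # metres along the track
latent = np.stack([np.sin(pos), np.cos(pos), pos], axis=1)
latent += 0.01 * rng.normal(size=latent.shape)             # measurement noise

pred = knn_decode(latent[:150], pos[:150], latent[150:])
err = np.abs(pred - pos[150:]).mean()                      # mean decoding error (m)
```

Because position varies smoothly along the latent manifold here, nearby embeddings correspond to nearby track positions, which is exactly the property a good behavioural embedding should have.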

BEHIND THE PAPER

We develop methods to address various questions. For example, DeepLabCut [5], which uses deep learning, was designed to obtain high-accuracy marker-less behavioural data. Next, we reasoned that to understand how neural dynamics give rise to movement, we needed to be able to assess any nonlinear relationships in our data. To this end, Steffen Schneider and I put our heads together, and Jin Lee worked with us to develop CEBRA (pronounced 'zebra') to uncover hidden relationships between the brain and behaviour. When I watched the video frames decoded from the mouse brain, I knew we had cracked it. The name CEBRA is an apt one because the algorithm 'stripes' the information to handle the joint data. Furthermore, zebras have become particularly fashionable in machine learning, thanks to a technique called CycleGAN, in which an image of a horse was converted into an image of a zebra. M.W.M.

FROM THE EDITOR

Neuroscience has undergone a technological revolution in the past 20 years or so. A product of this revolution has been impressively large data sets of neural activity. The challenge now is to make sense of these data. The authors provide an elegant toolkit for analysing such large data sets. The technology works not only when there is a clear hypothesis, but also when researchers want to let the data be the guide in an unbiased, hypothesis-free manner. David Rowland, Associate Editor, Nature
