
SIAM J. APPL. ALGEBRA GEOMETRY © 2017 Society for Industrial and Applied Mathematics
Vol. 1, pp. 222–238

What Makes a Neural Code Convex?∗


Carina Curto†, Elizabeth Gross‡, Jack Jeffries§, Katherine Morrison¶, Mohamed Omar‖,
Zvi Rosen#, Anne Shiu††, and Nora Youngs‡‡

Abstract. Neural codes allow the brain to represent, process, and store information about the world. Combi-
natorial codes, comprised of binary patterns of neural activity, encode information via the collective
behavior of populations of neurons. A code is called convex if its codewords correspond to regions
defined by an arrangement of convex open sets in Euclidean space. Convex codes have been observed
experimentally in many brain areas, including sensory cortices and the hippocampus, where neurons
exhibit convex receptive fields. What makes a neural code convex? That is, how can we tell from
the intrinsic structure of a code if there exists a corresponding arrangement of convex open sets? In
this work, we provide a complete characterization of local obstructions to convexity. This motivates
us to define max intersection-complete codes, a family guaranteed to have no local obstructions. We
then show how our characterization enables one to use free resolutions of Stanley–Reisner ideals in
order to detect violations of convexity. Taken together, these results provide a significant advance
in our understanding of the intrinsic combinatorial properties of convex codes.

Key words. neural coding, convex codes, simplicial complex, link, Nerve lemma, Hochster’s formula

AMS subject classifications. 92, 52, 05, 13

DOI. 10.1137/16M1073170

1. Introduction. Cracking the neural code is one of the central challenges of neuroscience.
Typically, this has been understood as finding the relationship between the activity of neurons
and the stimuli they represent. To uncover the principles of neural coding, however, it is not

∗Received by the editors May 3, 2016; accepted for publication (in revised form) December 21, 2016; published
electronically March 28, 2017.
http://www.siam.org/journals/siaga/1/M107317.html
Funding: This work began at a 2014 AMS Mathematics Research Community, “Algebraic and Geometric
Methods in Applied Discrete Mathematics,” which was supported by NSF DMS-1321794. CC was supported by
NSF DMS-1225666/1537228, NSF DMS-1516881, and an Alfred P. Sloan Research Fellowship; EG was supported
by NSF DMS-1304167 and NSF DMS-1620109; JJ was supported by NSF DMS-1606353; and AS was supported
by NSF DMS-1004380 and NSF DMS-1312473/1513364.
†Department of Mathematics, The Pennsylvania State University, University Park, PA 16802 ([email protected]).
‡Department of Mathematics, San José State University, San José, CA 95192 ([email protected]).
§Department of Mathematics, University of Utah, Salt Lake City, UT 84112. Current address: Department of
Mathematics, University of Michigan, Ann Arbor, MI 48109 ([email protected]).
¶Department of Mathematics, The Pennsylvania State University, University Park, PA 16802. Current address:
School of Mathematical Sciences, University of Northern Colorado, Greeley, CO 80639 ([email protected]).
‖Department of Mathematics, Harvey Mudd College, Claremont, CA 91711 ([email protected]).
#Department of Mathematics, The Pennsylvania State University, University Park, PA 16802, and Department of
Mathematics, University of California, Berkeley, Berkeley, CA 94720. Current address: Department of Mathematics,
University of Pennsylvania, Philadelphia, PA 19104 ([email protected]).
††Department of Mathematics, Texas A&M University, College Station, TX 77843 ([email protected]).
‡‡Department of Mathematics, Harvey Mudd College, Claremont, CA 91711. Current address: Department of
Mathematics, Colby College, Waterville, ME 04901 ([email protected]).

enough to describe the various mappings between stimulus and response. One must also
understand the intrinsic structure of neural codes, independently of what is being encoded [1].
Here we focus our attention on convex codes, which are comprised of activity patterns for
neurons with classical receptive fields. A receptive field Ui is the subset of stimuli that induces
neuron i to respond. Often, Ui is a convex subset of some Euclidean space (see Figure 1). A
collection of convex sets U1, . . . , Un ⊂ R^d naturally associates to each point x ∈ R^d a binary
response pattern, c1 · · · cn ∈ {0,1}^n, where ci = 1 if x ∈ Ui, and ci = 0 otherwise. The set of
all such response patterns is a convex code.
Convex codes have been observed experimentally in many brain areas, including sensory
cortices and the hippocampus. Hubel and Wiesel’s discovery in 1959 of orientation-tuned
neurons in the primary visual cortex was perhaps the first example of convex coding in the
brain [2]. This was followed by O’Keefe’s discovery of hippocampal place cells in 1971 [3],
showing that convex codes are also used in the brain’s representation of space. Both discoveries
were groundbreaking for neuroscience and were later recognized with Nobel Prizes in 1981 [4]
and 2014 [5], respectively.
Our motivating examples of convex codes are, in fact, hippocampal place cell codes. A
place cell is a neuron that acts as a position sensor, exhibiting a high firing rate when the ani-
mal’s location lies inside the cell’s preferred region of the environment—its place field. Figure 1
displays the place fields of four place cells recorded while a rat explored a two-dimensional
environment. Each place field is an approximately convex subset of R2 . Taken together, the
set of all neural response patterns that can arise in a population of place cells comprises a
convex code for the animal’s position in space. Note that in the neuroscience literature, con-
vex receptive fields are typically referred to as unimodal, emphasizing the presence of just one
“hot spot” in a stimulus-response heat map such as those depicted in Figure 1.

Figure 1. Place fields of four CA1 pyramidal neurons (place cells) in rat hippocampus, recorded while the
animal explored a 1.5m × 1.5m square box environment. Red areas correspond to regions of space where the
corresponding place cells exhibited high firing rates, while dark blue denotes near-zero activity. Place fields were
computed from data provided by the Pastalkova lab, as described in [6].

Despite their relevance to neuroscience, the mathematical theory of convex codes was
initiated only recently [1, 7]. In particular, the intrinsic combinatorial signatures of convex
and nonconvex codes are not clear. Identifying such features will enable us to infer coding
properties from population recordings of neural activity, without needing a priori knowledge
of the stimuli being encoded. This may be particularly important for studying systems such as
olfaction, where the underlying “olfactory space” is potentially high-dimensional and poorly
understood. Having intrinsic signatures of convex codes is a critical step toward understanding
whether something like convex coding may be going on in such areas. Understanding the
structure of convex codes is also essential to uncovering the basic principles of how neural
networks are organized in order to learn, store, and process information.


1.1. Convex codes. By a neural code, or simply a code, on n neurons we mean a collection
of binary strings C ⊆ {0,1}^n. The elements of a code are called codewords. We interpret each
binary digit as the “on” or “off” state of a neuron and consider 0/1 strings of length n
and subsets of [n] = {1, . . . , n} interchangeably. For example, 1101 and 0100 are also denoted
{1, 2, 4} and {2}, respectively. We will always assume 00 · · · 0 ∈ C (i.e., ∅ ∈ C); this assumption
simplifies notation in various places but does not alter the core results.
Let X be a topological space, and consider a collection of open sets U = {U1, . . . , Un},
where ∪_{i=1}^n Ui ⊊ X. Any such U defines a code,

(1)    C(U) := { σ ⊆ [n] | Uσ \ ∪_{j∈[n]\σ} Uj ≠ ∅ },

where Uσ := ∩_{i∈σ} Ui for σ ⊆ [n]. In other words, each codeword σ in C(U) corresponds to the
portion of Uσ that is not covered by other sets. In particular, C(U) is not the same as the
nerve of the cover, N(U) (see section 3.2). By convention, U∅ = X, and so ∅ ∈ C(U).
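For concreteness, C(U) can be computed directly once X is replaced by a finite set of sample points: each point contributes the codeword recording which sets contain it, which is equivalent to the uncovered-portion description in (1). A minimal Python sketch (the function name, the 0-based indexing, and the discretized cover are our own illustration, not from the paper):

```python
def code_of_cover(U, X):
    """C(U): for each point x of X, record the indices i with x in U_i.
    U is a list of sets of sample points; X is the ambient set of points."""
    return {frozenset(i for i, Ui in enumerate(U) if x in Ui) for x in X}

# A cover of X = {0,...,5} by three discretized "intervals"; their union
# omits the point 5, so the all-zeros codeword (the empty set) appears.
X = set(range(6))
U = [{0, 1, 2}, {2, 3}, {3, 4}]
C = code_of_cover(U, X)
```

Here C comes out as {∅, {0}, {0, 1}, {1, 2}, {2}}; for instance, the point 2 lies in U0 and U1 only, so it witnesses the codeword {0, 1}.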
For any code C, there always exists an open cover U such that C = C(U) [1, Lemma 2.1].
However, it may be impossible to choose the Ui ’s to all be convex. We thus have the following
definitions, which were first introduced in [1].
Definition 1.1. Let C be a binary code on n neurons. If there exists a collection of open
sets U = {U1 , . . . , Un } such that C = C(U) and the Ui ’s are all convex subsets of Rd , then we
say that C is a convex code. The smallest d such that this is possible is called the minimal
embedding dimension, d(C).
Note that the definition of a convex code is extrinsic: a code is convex if it can be realized
by an arrangement of convex open sets in some Euclidean space. How can we characterize
convex codes intrinsically? If a code is not convex, how can we prove this? If a code is convex,
what is the minimal dimension needed for the corresponding open sets?
In this work, we tackle these questions by building on mathematical ideas from [1] and [8].
In particular, we study local obstructions to convexity, a notion first introduced in [8]. Our
main result is Theorem 1.3, which provides a complete characterization of codes with no local
obstructions. In section 2 we present a series of examples that illustrate the ideas summarized
in section 1.3. Sections 3 and 4 are devoted to additional background and technical results
needed for the proof of Theorem 1.3. Finally, in section 5 we show how tools from combina-
torial commutative algebra, such as Hochster’s formula, can be used to determine that a code
is not convex.
1.2. Preliminaries.
Simplicial complexes. A simplicial complex ∆ on n vertices is a nonempty collection of
subsets of [n] that is closed under inclusion. In other words, if σ ∈ ∆ and τ ⊂ σ, then τ ∈ ∆.
The elements of a simplicial complex are called simplices or faces. The dimension of a face,
σ ∈ ∆, is defined to be |σ| − 1. The dimension of a simplicial complex ∆ is equal to the
dimension of its largest face: max_{σ∈∆} |σ| − 1. If ∆ consists of all 2^n subsets of [n], then it is
the full simplex of dimension n − 1. The hollow simplex contains all proper subsets of [n], but
not [n], and thus has dimension n − 2.
Faces of a simplicial complex that are maximal under inclusion are referred to as facets.
If we consider the facets together with all their intersections, we obtain the set

F∩(∆) := { ∩_{i=1}^k ρi | ρi is a facet of ∆ for each i = 1, . . . , k } ∪ {∅}.

We refer to the elements of F∩(∆) as max intersections of ∆. The empty set is added so that
F∩(∆) can be regarded as a code, consistent with our convention that the all-zeros word is
always included.
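These max intersections are straightforward to enumerate for small complexes. A Python sketch (helper names are ours), run on the complex with facets 123, 134, 145 that reappears as Figure 3(a) in Example 2.2:

```python
from itertools import combinations

def closure(facet_list):
    """Simplicial complex generated by the given faces (all subsets)."""
    Delta = set()
    for f in facet_list:
        for k in range(len(f) + 1):
            Delta.update(frozenset(c) for c in combinations(sorted(f), k))
    return Delta

def facets(Delta):
    """Faces of Delta that are maximal under inclusion."""
    return [s for s in Delta if not any(s < t for t in Delta)]

def max_intersections(Delta):
    """F∩(Delta): intersections of one or more facets, plus the empty set."""
    F = facets(Delta)
    out = {frozenset()}
    for k in range(1, len(F) + 1):
        for combo in combinations(F, k):
            out.add(frozenset.intersection(*combo))
    return out

Delta = closure([{1, 2, 3}, {1, 3, 4}, {1, 4, 5}])
F = max_intersections(Delta)
# F = {∅, {1}, {1,3}, {1,4}, {1,2,3}, {1,3,4}, {1,4,5}}, matching Example 2.2.
```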
Restrictions and links are standard constructions from simplicial complexes. The restriction
of ∆ to σ is the simplicial complex

∆|σ := {ω ∈ ∆ | ω ⊆ σ}.

For any σ ∈ ∆, the link of σ inside ∆ is

Lkσ(∆) := {ω ∈ ∆ | σ ∩ ω = ∅ and σ ∪ ω ∈ ∆}.

Note that it is more common to write Lk∆(σ) or link∆(σ), instead of Lkσ(∆) (see, for example,
[9]). However, because we will often fix σ and consider its link inside different simplicial
complexes, such as ∆|σ∪τ, it is more convenient to put σ in the subscript.
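Both constructions reduce to one-line set comprehensions when faces are encoded as frozensets. A sketch in that style (function names are ours):

```python
from itertools import combinations

def closure(facet_list):
    """Simplicial complex generated by the given faces."""
    Delta = set()
    for f in facet_list:
        for k in range(len(f) + 1):
            Delta.update(frozenset(c) for c in combinations(sorted(f), k))
    return Delta

def restriction(Delta, sigma):
    """Delta|_sigma: faces of Delta contained in sigma."""
    sigma = frozenset(sigma)
    return {w for w in Delta if w <= sigma}

def link(sigma, Delta):
    """Lk_sigma(Delta): faces disjoint from sigma whose union with sigma is a face."""
    sigma = frozenset(sigma)
    return {w for w in Delta if not (w & sigma) and (w | sigma) in Delta}

# In the complex with facets 123, 134, 145 (Figure 3(a)), the link of the
# vertex 1 is the path on vertices 2-3-4-5 shown in Figure 3(b).
Delta = closure([{1, 2, 3}, {1, 3, 4}, {1, 4, 5}])
Lk1 = link({1}, Delta)
```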
The simplicial complex of a code. To any code C, we can associate a simplicial complex
∆(C) by simply including all subsets of codewords:

∆(C) := {σ ⊆ [n] | σ ⊆ c for some c ∈ C}.

∆(C) is called the simplicial complex of the code, and is the smallest simplicial complex that
contains all elements of C. The facets of ∆(C) correspond to the codewords in C that are
maximal under inclusion: these are the maximal codewords.
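A sketch of ∆(C) in the same finite encoding (the downward closure of the codewords; names are ours), using the 10-codeword code of Example 2.1, whose maximal codewords 1110 and 0111 become the facets 123 and 234:

```python
from itertools import combinations

def simplicial_complex_of_code(C):
    """Delta(C): the smallest simplicial complex containing every codeword."""
    Delta = set()
    for c in C:
        for k in range(len(c) + 1):
            Delta.update(frozenset(s) for s in combinations(sorted(c), k))
    return Delta

# The 10 codewords of Example 2.1 (Figure 2), written as subsets of {1,2,3,4}.
C = {frozenset(s) for s in
     [(), (1,), (1, 2), (1, 3), (1, 2, 3), (2, 3), (3,), (2, 3, 4), (3, 4), (4,)]}
Delta = simplicial_complex_of_code(C)
maximal_codewords = {c for c in C if not any(c < d for d in C)}
# The maximal codewords {1,2,3} and {2,3,4} are the facets of Delta(C).
```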
Local obstructions to convexity. At first glance, it may seem that all codes should be
convex, since the convex sets Ui can be chosen to reside in arbitrarily high dimensions. This is
not the case, however, as nonconvex codes arise for as few as n = 3 neurons [1]. To understand
what can go wrong, consider a code with the following property: any codeword with a 1 in
the first position also has a 1 in the second or third position, but no codeword has a 1 in all
three positions. This implies that any corresponding cover U must have U1 ⊆ U2 ∪ U3 , but
U1 ∩ U2 ∩ U3 = ∅. The result is that U1 is a disjoint union of two nonempty open sets, U1 ∩ U2
and U1 ∩ U3 , and is hence disconnected. Since all convex sets are connected, we conclude that
our code cannot be convex. The contradiction stems from a topological inconsistency that
emerges if the code is assumed to be convex.
This type of topological obstruction to convexity generalizes to a family of local obstruc-
tions, first introduced in [8]. We define local obstructions precisely in section 3. There we also
show that a code with one or more local obstructions cannot be convex.

Lemma 1.2. If C is a convex code, then C has no local obstructions.


This fact was first observed in [8], using slightly different language. The converse, unfor-
tunately, is not true. See Example 2.3 for a counterexample that first appeared in [10].
1.3. Summary of main results. To prove that a neural code is convex, it suffices to exhibit
a convex realization. That is, it suffices to find a set of convex open sets U = {U1 , . . . , Un }
such that C = C(U). Our strategy for proving that a code is not convex is to show that it has
a local obstruction to convexity. Which codes have local obstructions?
Perhaps surprisingly, the question of whether or not a given code C has a local obstruction
can be reduced to the question of whether or not it contains a certain minimal code, Cmin (∆),
which depends only on the simplicial complex ∆ = ∆(C). This is our main result.
Theorem 1.3 (characterization of local obstructions). For each simplicial complex ∆, there
is a unique minimal code Cmin (∆) with the following properties:
(i) the simplicial complex of Cmin (∆) is ∆, and
(ii) for any code C with simplicial complex ∆, C has no local obstructions if and only if
C ⊇ Cmin (∆).
Moreover, Cmin (∆) depends only on the topology of the links of ∆:

Cmin (∆) = {σ ∈ ∆ | Lkσ (∆) is noncontractible} ∪ {∅}.

We will regard the elements of Cmin (∆) as “mandatory” codewords with respect to con-
vexity, because they must all be included in order for a code C with ∆(C) = ∆ to be convex.
From the above description of Cmin (∆), we can prove the following lemma.
Lemma 1.4. Cmin (∆) ⊆ F∩ (∆). That is, each nonempty element of Cmin (∆) is an inter-
section of facets of ∆.
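Computing Cmin(∆) exactly requires deciding contractibility of links, which is undecidable in general (see section 5). A crude but computable over-approximation uses the fact, noted in section 4.1, that every cone is contractible: any face whose link is a cone can be discarded, and the remaining faces are merely candidates for membership in Cmin(∆). A hedged Python sketch of this idea (function names and the approach are our own illustration):

```python
from itertools import combinations

def closure(facet_list):
    Delta = set()
    for f in facet_list:
        for k in range(len(f) + 1):
            Delta.update(frozenset(c) for c in combinations(sorted(f), k))
    return Delta

def link(sigma, Delta):
    sigma = frozenset(sigma)
    return {w for w in Delta if not (w & sigma) and (w | sigma) in Delta}

def is_cone(Delta):
    """True if Delta is a cone over one of its vertices (hence contractible)."""
    vertices = {v for w in Delta for v in w}
    return any(all((w | {v}) in Delta for w in Delta) for v in vertices)

def candidate_mandatory(Delta):
    """A superset of Cmin(Delta): nonempty faces whose link is NOT a cone, plus ∅.
    Non-cone links may still be contractible, so these are only candidates."""
    return {frozenset()} | {s for s in Delta if s and not is_cone(link(s, Delta))}

Delta = closure([{1, 2, 3}, {1, 3, 4}, {1, 4, 5}])   # Figure 3(a)
cand = candidate_mandatory(Delta)
```

On this complex the candidates come out to exactly F∩(∆): the vertex 1 is flagged because its link, the path of Figure 3(b), is contractible but not a cone, illustrating that the over-approximation can be strict (1 ∉ Cmin(∆) by Example 2.2).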
Our proofs of Theorem 1.3 and Lemma 1.4 are given in section 4.2. Unfortunately, finding
all elements of Cmin (∆) for arbitrary ∆ is, in general, undecidable (see section 5). Nevertheless,
we can algorithmically compute a subset, MH (∆), of “homologically detectable” mandatory
codewords. In section 5 we show how to compute MH (∆) using machinery from combinatorial
commutative algebra. Lemma 1.4 also tells us that every element of Cmin (∆) must be an
intersection of facets of ∆—that is, an element of F∩ (∆). We thus have the inclusions

(2) MH (∆) ⊆ Cmin (∆) ⊆ F∩ (∆),

where both MH (∆) and F∩ (∆) are straightforwardly computable. Note that if MH (∆) =
F∩ (∆), then we can conclude that Cmin (∆) = F∩ (∆). Moreover, if C ⊇ F∩ (∆), then C ⊇
Cmin (∆), and thus C has no local obstructions (and is potentially convex). This motivates the
following definition.
Definition 1.5. A neural code C is max ∩-complete (or max intersection-complete) if C ⊇
F∩ (∆(C)).
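Checking max ∩-completeness is a finite computation on the codewords alone. A sketch (the function name is ours), run on the code of Example 2.3, which fails precisely because the max intersection 123 ∩ 134 ∩ 145 = 1 is missing:

```python
from itertools import combinations

def is_max_intersection_complete(C):
    """Check C ⊇ F∩(∆(C)): every intersection of maximal codewords is itself a
    codeword. (The maximal codewords are exactly the facets of ∆(C).)"""
    maximal = [c for c in C if c and not any(c < d for d in C)]
    for k in range(1, len(maximal) + 1):
        for combo in combinations(maximal, k):
            if frozenset.intersection(*combo) not in C:
                return False
    return True

# The code of Example 2.3: not max ∩-complete, since 123 ∩ 134 ∩ 145 = {1} ∉ C.
C = {frozenset(s) for s in
     [(2, 3, 4, 5), (1, 2, 3), (1, 3, 4), (1, 4, 5), (1, 3), (1, 4),
      (2, 3), (3, 4), (4, 5), (3,), (4,), ()]}
```

Adding the missing word {1} to this code restores max ∩-completeness.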
We therefore have a simple combinatorial condition for a code that guarantees it has no
local obstructions.
Corollary 1.6. If a neural code C is max ∩-complete, then C has no local obstructions.

For n ≤ 4, the convex codes are precisely those codes that are max ∩-complete.
Proposition 1.7. Let C be a code on n ≤ 4 neurons. Then C is convex if and only if C is
max ∩-complete.
This is shown in Supplementary Text S1 (M107317 01.pdf [local/web 669KB]), where we
provide a complete classification of convex codes on n = 4 neurons. Proposition 1.7 does not
extend to n > 4, however, since beginning in n = 5 there are convex codes that are not max
∩-complete (see Example 2.2). This raises the following question: Are there max ∩-complete
codes that are not convex? In a previous version of this paper, we conjectured that all max
∩-complete codes are convex [11]. This conjecture has recently been proven [12] using ideas
similar to what we illustrate in Example 2.4.
Proposition 1.8 (see [12, Theorem 4.4]). If C is max ∩-complete, then C is convex.
Finally, in Supplementary Text S3 (M107317 01.pdf [local/web 669KB]) we present some
straightforward bounds on the minimal embedding dimension d(C), obtained using results
about d-representability of the associated simplicial complex ∆(C). In particular, we find
bounds from Helly’s theorem and the fractional version of Helly’s theorem. Unfortunately,
these results all stem from ∆(C). In our classification of convex codes for n ≤ 4, however, it is
clear that the presence or absence of specific codewords can affect d(C), even if ∆(C) remains
unchanged (see Supplementary Text S1, M107317 01.pdf [local/web 669KB]). The problem
of how to use this additional information about a code in order to improve the bounds on
d(C) remains wide open.
2. Examples. Our first example depicts a convex code with minimal embedding dimension
d(C) = 2.
Example 2.1. Consider the open cover U illustrated in Figure 2(a). The corresponding
code, C = C(U), has 10 codewords. C is a convex code by construction, and it is easy to check
that d(C) = 2. The simplicial complex ∆(C) (see Figure 2(b)) loses some of the information
about the cover U that is present in C. In particular, U2 ⊆ U1 ∪ U3 and U2 ∩ U4 ⊆ U3 is
reflected in C, but not in ∆(C). Note that we can infer U2 ⊆ U1 ∪ U3 directly from the code,
because any codeword with neuron 2 “on” also has neuron 1 or 3 “on.”

Figure 2. (a) An arrangement U = {U1 , U2 , U3 , U4 } of convex open sets. Black dots mark regions corre-
sponding to distinct codewords in C = C(U). From left to right, the codewords are 0000, 1000, 1100, 1010, 1110,
0110, 0010, 0111, 0011, and 0001. (b) The simplicial complex ∆(C) for the code C defined in (a). The two
facets, 123 and 234, correspond to the two maximal codewords, 1110 and 0111, respectively. This simplicial
complex is also equal to the nerve of the cover N (U) (see section 3.2).

Note that the convex code in Example 2.1 is also max ∩-complete, as guaranteed by
Proposition 1.7. The next example shows that this proposition does not hold for n ≥ 5.

Figure 3. (a) A simplicial complex ∆ on n = 5 vertices. The vertex 1 is an intersection of facets but is
not contained in the code C of Example 2.2. (b) The link Lk1 (∆) (see section 3.3). (c) A convex realization of
the code C. The set U1 corresponding to neuron 1 (shaded) is completely covered by the other sets U2 , . . . , U5 ,
consistent with the fact that 1 ∉ C.

Example 2.2. The simplicial complex ∆ shown in Figure 3(a) has facets 123, 134, and 145.
Their intersections yield the faces 1, 13, and 14, so that F∩ (∆) = {123, 134, 145, 13, 14, 1, ∅}.
For this ∆, we can compute the minimal code with no local obstructions, Cmin (∆) = {123, 134,
145, 13, 14, ∅}, as in Theorem 1.3. Note that the element 1 ∈ F∩ (∆) is not present in Cmin (∆).
Now consider the code C = ∆ \ {1}. Clearly, this code has simplicial complex ∆(C) = ∆: it
has a codeword for each face of ∆, except the vertex 1 (see Figure 3(a)). By Theorem 1.3, C
has no local obstructions because C ⊇ Cmin(∆). However, C is not max ∩-complete because
F∩(∆) ⊈ C. Nevertheless, C is convex. A convex realization is shown in Figure 3(c).
The absence of local obstructions is a necessary condition for convexity. Unfortunately,
it is not sufficient: the following example shows a code with no local obstructions that is not
convex.
Example 2.3 (see [10]). Consider the code C = {2345, 123, 134, 145, 13, 14, 23, 34, 45, 3, 4, ∅}.
The simplicial complex of this code, ∆ = ∆(C), has facets {2345, 123, 134, 145}. It is straight-
forward to show that C = Cmin (∆), and thus C has no local obstructions. Despite this, it was
shown in [10] using geometric arguments that C is not convex. Note that this code is not max
∩-complete (the max intersection 123 ∩ 134 ∩ 145 = 1 is not in C).
The next example illustrates how a code with a single maximal codeword can be embedded
in R2 . This basic construction is used repeatedly in our proof of Proposition 1.7 (see Sup-
plementary Text S1, M107317 01.pdf [local/web 669KB]) and inspired aspects of the proof of
Proposition 1.8, given in [12].
Example 2.4. Consider the code C = {1111, 1011, 1101, 1100, 0011, 0010, 0001, 0000}, with
unique maximal codeword 1111. Figure 4 depicts the construction of a convex realization in
R2 . All regions corresponding to codewords are subsets of a disk in R2 . For each i = 1, . . . , 4,
the convex set Ui is the union of all regions whose corresponding codewords have a 1 in the
ith position. For example, U1 is the union of the four regions corresponding to codewords
1111, 1011, 1101, and 1100.
The above construction can be generalized to any code with a unique maximal codeword.
Lemma 2.5. Let C be a code with a unique maximal codeword. Then C is convex, and
d(C) ≤ 2.

Figure 4. A convex realization in R2 of the code in Example 2.4. (Left) Each nonmaximal codeword is
assigned a region outside the polygon but inside the disk. (Right) For each neuron i, the convex set Ui is the
union of all regions corresponding to codewords with a 1 in the ith position.

Proof. Let ρ ∈ C be the unique maximal codeword, and let m = |C| − 2 be the number
of nonmaximal codewords, excluding the all-zeros word. Inscribe a regular open m-gon P in
an open disk, so that there are m sectors surrounding P , as in Figure 4. (If m < 3, let P be
an open triangle.) Assign each nonmaximal codeword (excluding 00 · · · 0) to a distinct sector
inside the disk but outside of P , and assign the maximal codeword ρ to P . Next, for each
i ∈ ρ let Ui be the union of P and all sectors whose corresponding codewords have a 1 in the
ith position, together with their common boundaries with P . For j ∈ [n] \ ρ, set Uj = ∅. Note
that each Ui is open and convex, and C = C({Ui }).
Lemma 2.5 can easily be generalized to any code whose maximal codewords are non-
overlapping (that is, having disjoint supports). In this case, each nonzero codeword is con-
tained in a unique facet of ∆(C), and the facets thus yield a partition of the code. We can
repeat the above construction in parallel for each part, obtaining the same dimension bound.
Proposition 2.6. Let C be a code with nonoverlapping maximal codewords (i.e., the facets
of ∆(C) are disjoint). Then C is convex and d(C) ≤ 2.
3. Local obstructions to convexity. For any simplicial complex ∆, there exists a convex
cover U in a high-enough dimensional space Rd such that ∆ can be realized as ∆(C(U)) [13].
For this reason, the simplicial complex ∆(C) alone is not sufficient to determine whether or
not C is convex. Obstructions to convexity must emerge from information in the code that
goes beyond what is reflected in ∆(C). As was shown in [1], this additional information is
precisely the receptive field relationships, which we turn to now.
3.1. Receptive field relationships. For a code C on n neurons, let U = {U1, . . . , Un} be
any collection of open sets such that C = C(U), and recall that Uσ = ∩_{i∈σ} Ui.

Definition 3.1. A receptive field relationship (RF relationship) of C is a pair (σ, τ) corresponding to the set containment

Uσ ⊆ ∪_{i∈τ} Ui,

where σ ≠ ∅, σ ∩ τ = ∅, and Uσ ∩ Ui ≠ ∅ for all i ∈ τ.



If τ = ∅, then the relationship (σ, ∅) simply states that Uσ = ∅. Note that relationships
of the form (σ, ∅) reproduce the information in ∆(C), while those of the form (σ, τ) for τ ≠ ∅
reflect additional structure in C that goes beyond the simplicial complex. A minimal RF
relationship is one such that no single neuron can be removed from σ or τ without destroying
the containment.
It is important to note that RF relationships are independent of the choice of open sets
U (see Lemma 4.2 of [1]). Hence we denote the set of all RF relationships {(σ, τ )} for a given
code C as simply RF(C). In [1], it was shown that one can compute RF(C) algebraically, using
an associated ideal IC .
Example 3.2 (Example 2.1 continued). The code C = C(U) from Example 2.1 has the fol-
lowing RF relationships: RF(C) = {({1, 4}, ∅), ({1, 2, 4}, ∅), ({1, 3, 4}, ∅), ({1, 2, 3, 4}, ∅),
({2}, {1, 3}), ({2}, {1, 3, 4}), ({2, 4}, {3})}. Of these, the pairs ({1, 4}, ∅), ({2}, {1, 3}), and
({2, 4}, {3}), corresponding to U1 ∩ U4 = ∅, U2 ⊆ U1 ∪ U3 , and U2 ∩ U4 ⊆ U3 , respectively, are
the minimal RF relationships.
The following lemma illustrates a simple case where RF relationships can be used to show
that a code cannot have a convex realization. (This is a special case of Lemma 3.6 below.)
Lemma 3.3. Let C = C(U). If C has RF relationships Uσ ⊆ Ui ∪ Uj and Uσ ∩ Ui ∩ Uj = ∅
for some σ ⊆ [n] and distinct i, j ∉ σ, then C is not a convex code.
Proof. By assumption, {(σ, {i, j}), (σ ∪ {i, j}, ∅)} ⊆ RF(C). It follows that the sets Vi =
Uσ ∩ Ui ≠ ∅ and Vj = Uσ ∩ Uj ≠ ∅ are disjoint open sets that each intersect Uσ, and Uσ ⊆ Vi ∪ Vj.
We can thus conclude that Uσ is disconnected in any open cover U such that C = C(U). This
implies that C cannot have a convex realization, because if the Ui ’s were all convex, then Uσ
would be convex, contradicting the fact that it is disconnected.
The above proof relies on the observation that Uσ must be convex in any convex realization
U, but the properties of the code imply that Uσ is covered by a collection of open sets whose
topology does not match that of a convex set. This topological inconsistency between a set
and its cover is, at its core, a contradiction arising from the Nerve lemma, which we discuss
next.
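The hypothesis of Lemma 3.3 can be tested intrinsically, since RF relationships do not depend on the choice of U: Uσ ≠ ∅ means σ ∈ ∆(C); Uσ ∩ Ui ∩ Uj = ∅ means σ ∪ {i, j} ∉ ∆(C); and Uσ ⊆ Ui ∪ Uj means every codeword containing σ contains i or j. A brute-force Python sketch of this search (the function names are our own):

```python
from itertools import combinations

def complex_of_code(C):
    """Delta(C): all subsets of codewords."""
    Delta = set()
    for c in C:
        for k in range(len(c) + 1):
            Delta.update(frozenset(s) for s in combinations(sorted(c), k))
    return Delta

def disconnection_obstruction(C):
    """Search for (sigma, {i, j}) as in Lemma 3.3, read off the code itself:
    U_sigma nonempty and covered by U_i ∪ U_j, with U_sigma ∩ U_i ∩ U_j empty."""
    Delta = complex_of_code(C)
    neurons = sorted({v for c in C for v in c})
    for sigma in Delta:
        if not sigma:
            continue
        for i, j in combinations([v for v in neurons if v not in sigma], 2):
            if (sigma | {i, j}) in Delta:
                continue                     # U_sigma ∩ U_i ∩ U_j would be nonempty
            if (sigma | {i}) not in Delta or (sigma | {j}) not in Delta:
                continue                     # one of the two pieces would be empty
            if all(i in c or j in c for c in C if c >= sigma):
                return sigma, i, j           # U_sigma ⊆ U_i ∪ U_j
    return None

# The 3-neuron pattern of section 1.2: every codeword containing neuron 1
# also contains 2 or 3, yet no codeword contains all three.
C = {frozenset(s) for s in [(), (1, 2), (1, 3), (2,), (3,)]}
```

On this code the search returns (σ, i, j) = ({1}, 2, 3), certifying nonconvexity.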
3.2. The Nerve lemma. The nerve of an open cover U = {U1, . . . , Un} is the simplicial
complex

N(U) := {σ ⊆ [n] | Uσ ≠ ∅}.

In fact, N(U) = ∆(C(U)), so the nerve can be recovered directly from the code C(U). The
Nerve lemma tells us that N(U) carries a surprising amount of topological information about
the underlying space covered by U, provided U is a good cover. Recall that a good cover is a
collection of open sets {Ui} where every nonempty intersection, Uσ = ∩_{i∈σ} Ui, is contractible.¹
Lemma 3.4 (Nerve lemma). If U is a good cover, then ∪_{i=1}^n Ui is homotopy-equivalent to
N(U). In particular, ∪_{i=1}^n Ui and N(U) have exactly the same homology groups.

This result is well known and can be obtained as a direct consequence of [14, Corollary 4G.3].
¹A set is contractible if it is homotopy-equivalent to a point [14].
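The identity N(U) = ∆(C(U)) can be confirmed computationally under the same finite-sample encoding used earlier (helper names are ours):

```python
from itertools import combinations

def code_of_cover(U, X):
    """C(U) over a finite sample X: one codeword per point."""
    return {frozenset(i for i, Ui in enumerate(U) if x in Ui) for x in X}

def complex_of_code(C):
    """Delta(C): all subsets of codewords."""
    D = set()
    for c in C:
        for k in range(len(c) + 1):
            D.update(frozenset(s) for s in combinations(sorted(c), k))
    return D

def nerve(U):
    """N(U): index sets sigma with U_sigma nonempty."""
    N = {frozenset()}
    for k in range(1, len(U) + 1):
        for combo in combinations(range(len(U)), k):
            if set.intersection(*(U[i] for i in combo)):
                N.add(frozenset(combo))
    return N

X = set(range(6))
U = [{0, 1, 2}, {2, 3}, {3, 4}]
# The nerve is recovered from the code, as claimed: N(U) = Delta(C(U)).
```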

Now observe that an open cover comprised of convex sets is always a good cover, because
the intersection of convex sets is convex and hence contractible. For example, if C = C(U) for a
convex cover U, then ∆(C) must match the homotopy type of ∪_{i=1}^n Ui. This fact was previously
exploited to extract topological information about the represented space from hippocampal
place cell activity [15].
The Nerve lemma is also key to our notion of local obstructions, which we turn to next.
3.3. Local obstructions. Local obstructions arise when a code contains an RF relationship
(σ, τ), so that Uσ ⊆ ∪_{i∈τ} Ui, but the nerve of the corresponding cover of Uσ by the restricted
sets {Ui ∩ Uσ}i∈τ is not contractible. By the Nerve lemma, if the Ui's are all convex, then
N({Uσ ∩ Ui}i∈τ) must have the same homotopy type as Uσ, which is contractible. If N({Uσ ∩
Ui}i∈τ) fails to be contractible, we can conclude that the Ui's cannot all be convex.

Now, observe that the nerve of the restricted cover N({Uσ ∩ Ui}i∈τ) is related to the nerve
of the original cover N(U) as follows:

N({Uσ ∩ Ui}i∈τ) = {ω ∈ N(U) | σ ∩ ω = ∅, σ ∪ ω ∈ N(U), and ω ⊆ τ}.

In fact, letting ∆ = N(U) and considering the restricted complex ∆|σ∪τ, we recognize that
the right-hand side above is precisely the link,

N({Uσ ∩ Ui}i∈τ) = Lkσ(∆|σ∪τ).
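This identity between the restricted nerve and the link can be checked on a toy cover realizing the n = 3 pattern of section 1.2 (U1 ⊆ U2 ∪ U3 with U1 ∩ U2 ∩ U3 = ∅); the sketch below, with names of our own choosing, computes both sides:

```python
from itertools import combinations

def nerve(sets_by_index):
    """N(U): index sets whose corresponding sets have nonempty intersection."""
    idx = list(sets_by_index)
    N = {frozenset()}
    for k in range(1, len(idx) + 1):
        for combo in combinations(idx, k):
            if set.intersection(*(sets_by_index[i] for i in combo)):
                N.add(frozenset(combo))
    return N

# Toy cover realizing U1 ⊆ U2 ∪ U3 with U1 ∩ U2 ∩ U3 = ∅.
U = {1: {"a", "b"}, 2: {"a"}, 3: {"b"}}
sigma, tau = frozenset({1}), [2, 3]

# Left side: nerve of the cover of U_sigma by the restricted sets {U_sigma ∩ U_i}.
U_sigma = set.intersection(*(U[i] for i in sigma))
lhs = nerve({i: U_sigma & U[i] for i in tau})

# Right side: Lk_sigma(Delta|_{sigma ∪ tau}) with Delta = N(U).
Delta = nerve(U)
restricted = {w for w in Delta if w <= sigma | set(tau)}
rhs = {w for w in restricted if not (w & sigma) and (w | sigma) in restricted}
# Both sides equal {∅, {2}, {3}}: two points, noncontractible, a local obstruction.
```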
We can now define a local obstruction to convexity.
Definition 3.5. Let (σ, τ) ∈ RF(C), and let ∆ = ∆(C). We say that (σ, τ) is a local
obstruction of C if τ ≠ ∅ and Lkσ(∆|σ∪τ) is not contractible.

Local obstructions are thus detected via noncontractible links of the form Lkσ(∆|σ∪τ),
where (σ, τ) ∈ RF(C). Figure 5 displays all possible links that can arise for |τ| ≤ 4. Non-
contractible links are highlighted in red. Note also that τ ≠ ∅ implies σ ∉ C and Uσ ≠ ∅,
as the definition of an RF relationship requires that Uσ ∩ Ui ≠ ∅ for all i ∈ τ. Any local
obstruction (σ, τ) must therefore have σ ∈ ∆(C) \ C and Lkσ(∆|σ∪τ) nonempty.
The arguments leading up to the definition of local obstruction imply the following simple
consequence of the Nerve lemma, which was previously observed in [8].
Lemma 3.6 (Lemma 1.2). If C has a local obstruction, then C is not a convex code.
In general, the question of whether or not a given simplicial complex is contractible is
undecidable [16]; however, in some cases it is easy to see that all relevant links will be con-
tractible. This yields a simple condition on RF relationships that guarantees that a code has
no local obstructions.
Lemma 3.7. Let C = C(U). If for each (σ, τ) ∈ RF(C) we have Uσ ∩ Uτ ≠ ∅, then C has no
local obstructions.

Proof. Let ∆ = ∆(C). Uσ ∩ Uτ ≠ ∅ implies Lkσ(∆|σ∪τ) is the full simplex on the vertex
set τ, which is contractible. If this is true for every RF relationship, then none can give rise
to a local obstruction.

For example, if 11 · · · 1 ∈ C, then Uσ ∩ Uτ ≠ ∅ for any pair σ, τ ⊂ [n], so C has no local
obstructions.
Figure 5. All simplicial complexes on up to n = 4 vertices, up to permutation equivalence. These can each
arise as links of the form Lkσ (∆|σ∪τ ) for |τ | ≤ 4. Red labels correspond to noncontractible complexes. Note
that L13 is the only simplicial complex on n ≤ 4 vertices that is contractible but not a cone.

4. Characterizing local obstructions via mandatory codewords. From the definition of
local obstruction, it seems that in order to show that a code has no local obstructions one
would need to check the contractibility of all links of the form Lkσ (∆|σ∪τ ) corresponding to
all pairs (σ, τ ) ∈ RF(C). We shall see in this section that in fact we only need to check for
contractibility of links inside the full complex ∆—that is, links of the form Lkσ (∆). This is
key to obtaining a list of mandatory codewords, Cmin (∆), that depends only on ∆ and not on
any further details of the code.
In section 4.1 we prove some important lemmas about links and then use them in sec-
tion 4.2 to prove Theorem 1.3.
4.1. Link lemmas. In what follows, the notation

conev(∆) := {{v} ∪ ω | ω ∈ ∆} ∪ ∆

denotes the cone of v over ∆, where v is a new vertex not contained in ∆. Any simplicial complex that is a cone over a subcomplex, so that ∆ = conev(∆′), is automatically contractible. In Figure 5, the only contractible link that is not a cone is L13. This is the same link that appeared in Figure 3b of Example 2.2.
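For small complexes, the cone construction and the facet-intersection criterion for cones are easy to experiment with computationally. The following Python sketch is our own illustration (the frozenset-based representation and the helper names `closure`, `cone`, and `is_cone` are choices made here, not notation from the paper):

```python
from itertools import combinations

def closure(facets):
    """Downward-closed family of faces generated by the given facets."""
    faces = {frozenset()}
    for f in facets:
        for k in range(1, len(f) + 1):
            faces.update(frozenset(c) for c in combinations(f, k))
    return faces

def cone(v, faces):
    """cone_v(Delta) = {{v} union w : w in Delta} union Delta."""
    return faces | {f | {v} for f in faces}

def is_cone(faces):
    """A complex is a cone iff all its facets share a common vertex."""
    facets = [f for f in faces if not any(f < g for g in faces)]
    return bool(facets) and bool(set.intersection(*(set(f) for f in facets)))

# Two isolated vertices do not form a cone, but coning off a new vertex does:
two_points = closure([{1}, {2}])
assert not is_cone(two_points)
assert is_cone(cone(0, two_points))
```

In this representation, `cone(v, faces)` always yields a cone, matching the remark above that any cone is automatically contractible.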
Lemma 4.1. Let ∆ be a simplicial complex on [n], σ ∈ ∆, and v ∈ [n] such that v ∉ σ and σ ∪ {v} ∈ ∆. Then Lkσ∪{v}(∆) ⊆ Lkσ(∆|[n]\{v}), and

Lkσ(∆) = Lkσ(∆|[n]\{v}) ∪ conev(Lkσ∪{v}(∆)).
Proof. The proof follows from the definition of the link. First, observe that

Lkσ∪{v}(∆) = {ω ⊂ [n] | v ∉ ω, ω ∩ σ = ∅, and ω ∪ σ ∪ {v} ∈ ∆}
           = {ω ⊂ [n] \ {v} | ω ∩ σ = ∅, and ω ∪ σ ∪ {v} ∈ ∆}
           ⊆ {ω ⊂ [n] \ {v} | ω ∩ σ = ∅, and ω ∪ σ ∈ ∆|[n]\{v}}
           = Lkσ(∆|[n]\{v}),
which establishes that Lkσ∪{v}(∆) ⊆ Lkσ(∆|[n]\{v}). Next, observe that

conev(Lkσ∪{v}(∆)) \ Lkσ∪{v}(∆) = {{v} ∪ ω | ω ∈ Lkσ∪{v}(∆)}
    = {{v} ∪ ω | ω ⊂ [n], v ∉ ω, ω ∩ σ = ∅, and ω ∪ {v} ∪ σ ∈ ∆}
    = {τ ⊂ [n] | v ∈ τ, τ ∩ σ = ∅, and τ ∪ σ ∈ ∆}
    = {ω ∈ Lkσ(∆) | v ∈ ω}.
Finally,

Lkσ(∆|[n]\{v}) = {ω ∈ Lkσ(∆) | v ∉ ω}.

From here the second statement is clear.
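Lemma 4.1 can be spot-checked mechanically on small complexes. Below is a minimal Python sketch (the helper names and the test complex, with facets 123, 134, 24, are our own choices) that verifies the stated decomposition of Lkσ(∆) for one choice of σ and v:

```python
from itertools import combinations

def closure(facets):
    """Downward-closed family of faces generated by the given facets."""
    faces = {frozenset()}
    for f in facets:
        for k in range(1, len(f) + 1):
            faces.update(frozenset(c) for c in combinations(f, k))
    return faces

def link(sigma, faces):
    """Lk_sigma: faces disjoint from sigma whose union with sigma is a face."""
    s = frozenset(sigma)
    return {f - s for f in faces if not (f & s) and (f | s) in faces}

def restrict(faces, vertices):
    """Delta|_V: the faces contained in the vertex set V."""
    V = frozenset(vertices)
    return {f for f in faces if f <= V}

def cone(v, faces):
    return faces | {f | {v} for f in faces}

delta = closure([{1, 2, 3}, {1, 3, 4}, {2, 4}])
sigma, v, verts = {1}, 4, {1, 2, 3, 4}

lhs = link(sigma, delta)
rhs = link(sigma, restrict(delta, verts - {v})) | cone(v, link(sigma | {v}, delta))
assert lhs == rhs  # the decomposition of Lemma 4.1 holds here
```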
Corollary 4.2. Let ∆ be a simplicial complex on [n], σ ∈ ∆, and v ∈ [n] such that v ∉ σ and σ ∪ {v} ∈ ∆. If Lkσ∪{v}(∆) is contractible, then Lkσ(∆) and Lkσ(∆|[n]\{v}) are homotopy-equivalent.
Proof. Lemma 4.1 states that Lkσ∪{v} (∆) ⊆ Lkσ (∆|[n]\{v} ), and that Lkσ (∆) can be ob-
tained from Lkσ (∆|[n]\{v} ) by coning off the subcomplex Lkσ∪{v} (∆)—that is, by including
conev (Lkσ∪{v} (∆)). If this subcomplex is itself contractible, then the homotopy type is pre-
served.
Another useful corollary follows from the one above by simply setting ∆ = ∆|σ∪τ ∪{v} and
[n] = σ ∪ τ ∪ {v}. We immediately see that if both Lkσ (∆|σ∪τ ∪{v} ) and Lkσ∪{v} (∆|σ∪τ ∪{v} )
are contractible, then Lkσ (∆|σ∪τ ) is contractible.
Corollary 4.3. Assume v ∉ σ and σ ∩ τ = ∅. If Lkσ(∆|σ∪τ) is not contractible, then (i) Lkσ(∆|σ∪τ∪{v}) is not contractible, and/or (ii) σ ∪ {v} ∈ ∆ and Lkσ∪{v}(∆|σ∪τ∪{v}) is not contractible.
This corollary can be extended to show that for every noncontractible link Lkσ(∆|σ∪τ), there exists a noncontractible “big” link Lkσ′(∆) for some σ′ ⊇ σ. This is because vertices outside of σ ∪ τ can be added one by one to either σ or its complement, preserving the noncontractibility of the new link at each step. (Note that if σ ∪ {v} ∉ ∆, we can always add v to the complement. In this case, Lkσ(∆|σ∪τ) = Lkσ(∆|σ∪τ∪{v}), so we are in case (i) of Corollary 4.3.) In other words, we have the following lemma.
Lemma 4.4. Let σ, τ ∈ ∆. Suppose σ ∩ τ = ∅, and Lkσ(∆|σ∪τ) is not contractible. Then there exists σ′ ∈ ∆ such that σ′ ⊇ σ, σ′ ∩ τ = ∅, and Lkσ′(∆) is not contractible.
The next results show that only intersections of facets (maximal faces under inclusion)
can possibly yield noncontractible links. For any σ ∈ ∆, we denote by fσ the intersection of
all facets of ∆ containing σ. In particular, σ = fσ if and only if σ is an intersection of facets
of ∆. It is also useful to observe that a simplicial complex is a cone if and only if the common
intersection of all its facets is nonempty. (Any element of that intersection can serve as a cone
point, and a cone point is necessarily contained in all facets.)
Lemma 4.5. Let σ ∈ ∆. Then σ = fσ ⇔ Lkσ(∆) is not a cone.
Proof. Recall that Lkσ(∆) is a cone if and only if all facets of Lkσ(∆) have a nonempty common intersection ν. This can happen if and only if σ ∪ ν ⊆ fσ. Note that since ν ∈ Lkσ(∆), we must have ν ∩ σ = ∅, and hence Lkσ(∆) is a cone if and only if σ ≠ fσ.
Furthermore, it is easy to see that every simplicial complex that is not a cone can in fact arise as the link of an intersection of facets. For any ∆ that is not a cone, simply consider ∆̃ = conev(∆); v is an intersection of facets of ∆̃, and Lkv(∆̃) = ∆.
The above lemma immediately implies the following corollary.
Corollary 4.6. Let σ ∈ ∆ be nonempty. If σ ≠ fσ, then Lkσ(∆) is a cone and hence contractible. In particular, if Lkσ(∆) is not contractible, then σ must be an intersection of facets of ∆ (i.e., σ ∈ F∩(∆)).
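Corollary 4.6 narrows the search for noncontractible links to the set F∩(∆) of facet intersections, which is easy to enumerate for small examples. Here is a Python sketch (helper names ours) applied to the complex with facets 123, 134, 24 that reappears as L25 in Example 5.1:

```python
from itertools import combinations

def closure(facets):
    """Downward-closed family of faces generated by the given facets."""
    faces = {frozenset()}
    for f in facets:
        for k in range(1, len(f) + 1):
            faces.update(frozenset(c) for c in combinations(f, k))
    return faces

def f_sigma(sigma, faces):
    """f_sigma: the intersection of all facets containing sigma."""
    s = frozenset(sigma)
    facets = [f for f in faces if not any(f < g for g in faces)]
    return frozenset.intersection(*[f for f in facets if s <= f])

delta = closure([{1, 2, 3}, {1, 3, 4}, {2, 4}])
# The faces fixed by f_sigma are exactly the intersections of facets:
f_cap = {tuple(sorted(s)) for s in delta if s == f_sigma(s, delta)}
assert f_cap == {(), (2,), (4,), (1, 3), (2, 4), (1, 2, 3), (1, 3, 4)}
```

For this particular ∆, the set F∩(∆) (together with ∅) happens to coincide with the mandatory codewords MH(∆) computed in Example 5.1, though in general F∩(∆) only lists the candidates.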
Finally, we note that all pairwise intersections of facets that are not also higher-order
intersections give rise to noncontractible links.
Lemma 4.7. Let ∆ be a simplicial complex. If σ = τ1 ∩ τ2 , where τ1 , τ2 are distinct facets
of ∆, and σ is not contained in any other facet of ∆, then Lkσ (∆) is not contractible.
Proof. Observe that Lkσ (∆) consists of all subsets of ω1 = τ1 \ σ and ω2 = τ2 \ σ, but ω1
and ω2 are disjoint because τ1 and τ2 do not overlap outside of σ. This means Lkσ (∆) has
two connected components and is thus not contractible.
Note that if σ is a pairwise intersection of facets that is also contained in another facet,
then Lkσ (∆) could be contractible. For example, the vertex 1 in Figure 3(a) can be expressed
as a pairwise intersection of facets 123 and 145 but is also contained in 134. As shown in
Figure 3(b), the corresponding link Lk1 (∆) is contractible.
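Disconnectedness of a link, as in Lemma 4.7, is the simplest homotopy obstruction to detect. A short Python sketch (names ours) counts components of the 1-skeleton via union-find; for the complex with facets 123, 134, 24, the link of the vertex 2 = 123 ∩ 24 splits into two pieces:

```python
from itertools import combinations

def closure(facets):
    faces = {frozenset()}
    for f in facets:
        for k in range(1, len(f) + 1):
            faces.update(frozenset(c) for c in combinations(f, k))
    return faces

def link(sigma, faces):
    s = frozenset(sigma)
    return {f - s for f in faces if not (f & s) and (f | s) in faces}

def n_components(faces):
    """Connected components of the complex's vertices under its edges."""
    verts = {v for f in faces for v in f}
    parent = {v: v for v in verts}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for f in faces:
        vs = sorted(f)
        for a, b in zip(vs, vs[1:]):
            parent[find(a)] = find(b)
    return len({find(v) for v in verts})

delta = closure([{1, 2, 3}, {1, 3, 4}, {2, 4}])
assert n_components(link({2}, delta)) == 2  # 2 = 123 ∩ 24: disconnected link
assert n_components(link({1}, delta)) == 1  # f_1 = 13 ≠ {1}: connected link
```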
4.2. Proof of Theorem 1.3 and Lemma 1.4. Using the above facts about links, we can
now prove Theorem 1.3 and Lemma 1.4. First, we need the following key proposition.
Proposition 4.8. A code C has no local obstructions if and only if σ ∈ C for every σ ∈ ∆(C)
such that Lkσ (∆(C)) is noncontractible.
Proof. Let ∆ = ∆(C), and let U = {Ui} be any collection of open sets such that C = C(U).
(⇒) We prove the contrapositive. Suppose there exists σ ∈ ∆(C) \ C such that Lkσ(∆) is noncontractible. Then Uσ must be covered by the other sets {Ui}i∉σ, and since Lkσ(∆) is not contractible, the RF relationship (σ, σ̄) is a local obstruction.
(⇐) We again prove the contrapositive. Suppose C has a local obstruction (σ, τ). This means that σ ∩ τ = ∅, Uσ ⊆ ⋃i∈τ Ui, and Lkσ(∆|σ∪τ) is not contractible. By Lemma 4.4, there exists σ′ ⊇ σ such that σ′ ∩ τ = ∅ and Lkσ′(∆) is not contractible. Moreover, Uσ′ ⊆ Uσ ⊆ ⋃i∈τ Ui with σ′ ∩ τ = ∅, which implies σ′ ∉ C.
Theorem 1.3 now follows as a corollary of Proposition 4.8. To see this, let

Cmin(∆) = {σ ∈ ∆ | Lkσ(∆) is noncontractible} ∪ {∅},
and note that Cmin (∆) has simplicial complex ∆. This is because for any facet ρ ∈ ∆,
Lkρ (∆) = ∅, which is noncontractible, and thus Cmin (∆) contains all the facets of ∆. By
Proposition 4.8, any code C with simplicial complex ∆ has no local obstructions precisely
when C ⊇ Cmin (∆). Thus, Cmin (∆) is the unique code satisfying the required properties in
Theorem 1.3.
Finally, it is easy to see that Lemma 1.4 follows directly from Corollary 4.6.
5. Computing mandatory codewords algebraically. Computing Cmin(∆) is certainly simpler than finding all local obstructions. However, it is still difficult in general because determining whether or not a simplicial complex is contractible is undecidable [16]. For this reason, we now consider the subset of Cmin(∆) corresponding to noncontractible links that can be detected via homology:

(3)  MH(∆) := {σ ∈ ∆ | dim H̃i(Lkσ(∆), k) > 0 for some i},
where the H̃i(·) are reduced simplicial homology groups, and k is a field. Homology groups are topological invariants that can be easily computed for any simplicial complex, and reduced homology groups simply add a shift in the dimension of H̃0(·). This shift is designed so that for any contractible space X, dim H̃i(X, k) = 0 for all integers i. Clearly, MH(∆) ⊆ Cmin(∆), and MH(∆) is thus a subset of the mandatory codewords that must be included in any convex
code C with ∆(C) = ∆.² On the other hand, MH(∆) ⊆ C does not guarantee that C has no local obstructions, as a homologically trivial simplicial complex may be noncontractible.³
It turns out that the entire set MH(∆) can be computed algebraically, via a minimal free resolution of an ideal built from ∆. Specifically,

(4)  MH(∆) = {σ ∈ ∆ | βi,σ̄(S/I∆∗) > 0 for some i > 0},

where S = k[x1, . . . , xn], the ideal I∆∗ is the Stanley–Reisner ideal of the Alexander dual ∆∗, and βi,σ̄(S/I∆∗) are the Betti numbers of a minimal free resolution of the ring S/I∆∗. This is a direct consequence of Hochster's formula [9]:

(5)  dim H̃i(Lkσ(∆), k) = βi+2,σ̄(S/I∆∗).
See Supplementary Text S2 (M107317 01.pdf [local/web 669KB]) for more details on Alexan-
der duality, Hochster’s formula, and the Stanley–Reisner ideal.
Moreover, the subset of mandatory codewords MH (∆) can be easily computed using
existing computational algebra software, such as Macaulay2 [18]. We now describe this via
an explicit example.
Example 5.1. Let ∆ be the simplicial complex L25 in Figure 5. The Stanley–Reisner ideal is given by I∆ = ⟨x1x2x4, x2x3x4⟩, and its Alexander dual is I∆∗ = ⟨x1, x2, x4⟩ ∩ ⟨x2, x3, x4⟩ = ⟨x1x3, x2, x4⟩. A minimal free resolution of S/I∆∗ is

0 ←− S/I∆∗ ←− S ←−A−− S(−2) ⊕ S(−1)² ←−B−− S(−3)² ⊕ S(−2) ←−C−− S(−4) ←− 0,

where the maps are given by the matrices

A = [ x1x3  x2  x4 ],

B = [  x2     x4      0  ]
    [ −x1x3   0      x4  ]
    [  0     −x1x3  −x2  ],

C = [  x4  ]
    [ −x2  ]
    [ x1x3 ].
The Betti number βi,σ(S/I∆∗) is the dimension of the module in multidegree σ at step i of the resolution, where S/I∆∗ is step 0 and the steps increase as we move from left to right. At step 0, the total degree is always 0. For the above resolution, the multidegrees at S(−2) ⊕ S(−1)² (step 1) are 1010, 0100, and 0001; at S(−3)² ⊕ S(−2) (step 2), we have 1110, 1011, and 0101; and at S(−4) (step 3) the multidegree is 1111. This immediately gives us the nonzero Betti numbers:

β0,0000(S/I∆∗) = 1, β1,1010(S/I∆∗) = 1, β1,0100(S/I∆∗) = 1, β1,0001(S/I∆∗) = 1,
β2,1110(S/I∆∗) = 1, β2,1011(S/I∆∗) = 1, β2,0101(S/I∆∗) = 1, β3,1111(S/I∆∗) = 1.
Recalling from (4) that the multidegrees correspond to complements σ̄ of faces in ∆, we can
now immediately read off the elements of MH (∆) from the above βi,σ̄ for i > 0 as
MH (∆) = {0101, 1011, 1110, 0001, 0100, 1010, 0000} = {24, 134, 123, 4, 2, 13, ∅}.
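The set MH(∆) in this example can also be double-checked directly from definition (3), without free resolutions, by computing the reduced homology of every link. The Python sketch below is our own implementation, working over k = F2 and including the empty face in the chain complex so that a facet's link {∅} is flagged via H̃−1; it reproduces the seven elements found above:

```python
from itertools import combinations

def closure(facets):
    faces = {frozenset()}
    for f in facets:
        for k in range(1, len(f) + 1):
            faces.update(frozenset(c) for c in combinations(f, k))
    return faces

def link(sigma, faces):
    s = frozenset(sigma)
    return {f - s for f in faces if not (f & s) and (f | s) in faces}

def rank_f2(rows):
    """Rank over F2 of a 0/1 matrix whose rows are given as int bitmasks."""
    rows, rank = list(rows), 0
    for i in range(len(rows)):
        if rows[i] == 0:
            continue
        rank += 1
        low = rows[i] & -rows[i]        # pivot on the lowest set bit
        for j in range(len(rows)):
            if j != i and rows[j] & low:
                rows[j] ^= rows[i]
    return rank

def reduced_betti(faces):
    """Reduced F2 Betti numbers, via the augmented chain complex."""
    by_dim = {}
    for f in faces:
        by_dim.setdefault(len(f) - 1, []).append(f)   # empty face has dim -1
    idx = {d: {f: i for i, f in enumerate(fs)} for d, fs in by_dim.items()}
    ranks = {}
    for d, fs in by_dim.items():
        if d - 1 not in by_dim:
            ranks[d] = 0
            continue
        rows = []
        for f in fs:                    # boundary: sum of codim-1 subfaces
            mask = 0
            for v in f:
                mask |= 1 << idx[d - 1][f - {v}]
            rows.append(mask)
        ranks[d] = rank_f2(rows)
    return {d: len(fs) - ranks[d] - ranks.get(d + 1, 0)
            for d, fs in by_dim.items()}

delta = closure([{1, 2, 3}, {1, 3, 4}, {2, 4}])   # the complex L25
mh = {tuple(sorted(s)) for s in delta
      if any(b > 0 for b in reduced_betti(link(s, delta)).values())}
assert mh == {(), (2,), (4,), (1, 3), (2, 4), (1, 2, 3), (1, 3, 4)}
```

These are exactly {24, 134, 123, 4, 2, 13, ∅}, matching the resolution computation above.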
²Note that while MH(∆) depends on the choice of field k, MH(∆) ⊆ Cmin(∆) for any k.
³For example, consider a triangulation of the punctured Poincaré homology sphere: this simplicial complex has all-vanishing reduced homology groups but is noncontractible [17].
Note that the first three elements of MH(∆) above, obtained from the Betti numbers β1,∗ in step 1 of the resolution, are precisely the facets of ∆. The next three elements, 0001, 0100, and 1010, are mandatory codewords: they must be included for a code with simplicial complex ∆ to be convex. These all correspond to pairwise intersections of facets and are obtained from the Betti numbers β2,∗ at step 2 of the resolution; this is consistent with the fact that the corresponding links are all disconnected, resulting in nontrivial H̃0(Lkσ(∆), k). The last element, 0000, reflects the fact that Lk∅(∆) = ∆, and dim H̃1(∆, k) = 1 for ∆ = L25. By convention, however, we always include the all-zeros codeword in our codes (see section 1.2).
Using Macaulay2 [18], the Betti numbers for the simplicial complex ∆ above can be
computed through the following sequence of commands (choosing k = Z2 , and suppressing
outputs except for the Betti tally at the end):
i1 : kk = ZZ/2;
i2 : S = kk[x1,x2,x3,x4, Degrees => {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}];
i3 : I = monomialIdeal(x1*x2*x4,x2*x3*x4);
i4 : Istar = dual(I);
i5 : M = S^1/Istar;
i6 : Mres = res M; -- this step computes the minimal free resolution
i7 : peek betti Mres
o7 = BettiTally{(0, {0, 0, 0, 0}, 0) => 1}
(1, {0, 0, 0, 1}, 1) => 1
(1, {0, 1, 0, 0}, 1) => 1
(1, {1, 0, 1, 0}, 2) => 1
(2, {0, 1, 0, 1}, 2) => 1
(2, {1, 0, 1, 1}, 3) => 1
(2, {1, 1, 1, 0}, 3) => 1
(3, {1, 1, 1, 1}, 4) => 1
Each line of the BettiTally displays (i, {σ}, |σ|) ⇒ βi,σ . This yields (in order)
β0,0000 = 1, β1,0001 = 1, β1,0100 = 1, β1,1010 = 1, β2,0101 = 1, β2,1011 = 1, β2,1110 = 1, β3,1111 = 1,

which is the same set of nonzero Betti numbers we previously obtained. Recalling again that
the multidegrees correspond to complements σ̄ in (4), and we care only about i > 0, this
output immediately gives us MH (∆)—exactly as before.
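The translation from the BettiTally to MH(∆) is mechanical: for each entry with homological degree i > 0, take the complement of the multidegree. A tiny Python sketch of this bookkeeping (the pairs below are transcribed from the output o7 above):

```python
# (homological degree i, multidegree) pairs read off the BettiTally above
betti = [(0, (0, 0, 0, 0)),
         (1, (0, 0, 0, 1)), (1, (0, 1, 0, 0)), (1, (1, 0, 1, 0)),
         (2, (0, 1, 0, 1)), (2, (1, 0, 1, 1)), (2, (1, 1, 1, 0)),
         (3, (1, 1, 1, 1))]

# sigma is the complement of the multidegree: the positions of its zeros
mh = {tuple(j + 1 for j, bit in enumerate(deg) if bit == 0)
      for i, deg in betti if i > 0}
assert mh == {(), (2,), (4,), (1, 3), (2, 4), (1, 2, 3), (1, 3, 4)}
```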
The above example illustrates how computational algebra can help us to determine whether
a code has local obstructions. However, as noted in section 2, even codes without local ob-
structions may fail to be convex. Though we have made significant progress via Theorem 1.3,
finding a complete combinatorial characterization of convex codes is still an open problem.
Acknowledgments. We thank Joshua Cruz, Chad Giusti, Vladimir Itskov, Carly Klivans,
William Kronholm, Keivan Monfared, and Yan X. Zhang for numerous discussions, and Eva
Pastalkova for providing the data used to create Figure 1.

REFERENCES

[1] C. Curto, V. Itskov, A. Veliz-Cuba, and N. Youngs, The neural ring: An algebraic tool for analyzing
the intrinsic structure of neural codes, Bull. Math. Biol., 75 (2013), pp. 1571–1611.
[2] D. H. Hubel and T. N. Wiesel, Receptive fields of single neurons in the cat’s striate cortex, J. Physiol.,
148 (1959), pp. 574–591.
[3] J. O’Keefe and J. Dostrovsky, The hippocampus as a spatial map. Preliminary evidence from unit
activity in the freely-moving rat, Brain Res., 34 (1971), pp. 171–175.
[4] Nobel Media, Physiology or Medicine 1981 - Press Release, Nobelprize.org, Nobel Media AB 2014, http://www.nobelprize.org/nobel prizes/medicine/laureates/1981/press.html, 1981.
[5] N. Burgess, The 2014 Nobel Prize in Physiology or Medicine: A Spatial Model for Cognitive Neuro-
science, Neuron, 84 (2014), pp. 1120–1125.
[6] C. Giusti, E. Pastalkova, C. Curto, and V. Itskov, Clique topology reveals intrinsic geometric
structure in neural correlations, Proc. Natl. Acad. Sci. USA, 112 (2015), pp. 13455–13460.
[7] C. Curto, V. Itskov, K. Morrison, Z. Roth, and J. L. Walker, Combinatorial neural codes from
a mathematical coding theory perspective, Neural Comput., 25 (2013), pp. 1891–1925.
[8] C. Giusti and V. Itskov, A no-go theorem for one-layer feedforward networks, Neural Comput., 26
(2014), pp. 2527–2540.
[9] E. Miller and B. Sturmfels, Combinatorial Commutative Algebra, Grad. Texts in Math. 227, Springer-
Verlag, New York, 2005.
[10] C. Lienkaemper, A. Shiu, and Z. Woodstock, Obstructions to convexity in neural codes, Adv. Appl.
Math., 85 (2017), pp. 31–59.
[11] C. Curto, E. Gross, J. Jeffries, K. Morrison, M. Omar, Z. Rosen, A. Shiu, and N. Youngs,
What Makes a Neural Code Convex?, preprint, https://arxiv.org/abs/1508.00150, 2016.
[12] J. Cruz, C. Giusti, V. Itskov, and W. Kronholm, On Open and Closed Convex Codes, preprint,
https://arxiv.org/abs/1609.03502, 2016.
[13] M. Tancer, Intersection patterns of convex sets via simplicial complexes: A survey, in Thirty Essays on
Geometric Graph Theory, Springer, New York, 2013, pp. 521–540.
[14] A. Hatcher, Algebraic Topology, Cambridge University Press, Cambridge, UK, 2002.
[15] C. Curto and V. Itskov, Cell groups reveal structure of stimulus space, PLoS Comput. Biol., 4 (2008),
e1000205.
[16] M. Tancer, Recognition of collapsible complexes is NP-complete, Discrete Comput. Geom., 55 (2016),
pp. 21–38.
[17] A. Björner and F. H. Lutz, A 16-vertex triangulation of the Poincaré homology 3-sphere and non-
PL spheres with few vertices, Electronic Geometry Models, 2003.04.001, http://www.eg-models.de/,
2003.
[18] D. R. Grayson and M. E. Stillman, Macaulay2: A Software System for Research in Algebraic Geom-
etry, available at http://www.math.uiuc.edu/Macaulay2/.
[19] M. Tancer, d-representability of simplicial complexes of fixed dimension, J. Comput. Geom., 2 (2011),
pp. 183–188.
[20] E. Helly, Über Mengen konvexer Körper mit gemeinschaftlichen Punkten, Jahresber. Deutsch. Math.-
Verein., 32 (1923), pp. 175–176.
[21] G. Kalai, Intersection patterns of convex sets, Israel J. Math., 48 (1984), pp. 161–174.
[22] G. Kalai, Characterization of f -vectors of families of convex sets in Rd . I. Necessity of Eckhoff ’s condi-
tions, Israel J. Math., 48 (1984), pp. 175–195.
[23] G. Kalai, Characterization of f -vectors of families of convex sets in Rd . II. Sufficiency of Eckhoff ’s
conditions, J. Combin. Theory Ser. A, 41 (1986), pp. 167–188.
