1. Introduction
The phenomenon of slow cortical waves (delta waves) is a regime of brain activity observed in all mammals during deep sleep or under anaesthesia [1,2]. It is characterized by a large-scale collective activation of groups of neurons, with a characteristic undulatory space-time pattern. Slow waves appear to propagate throughout the cortex, modulating the spiking frequency of the underlying neuronal populations. More specifically, as a slow wave passes, the involved neurons transit from a state of low spiking activity (down-state) to a more intense one (up-state). Since the first electroencephalogram (EEG) observations in the 1930s, many experimental studies have been conducted on the large-scale activity of neuronal populations. These studies have allowed researchers to observe the brain moving between various states of activity, according to the cognitive state of the subject. The awake activity is distinguished from sleep, which is in turn subdivided into the Rapid Eye Movement (REM) stage (when the brain activity is comparable with the awake state) and three non-REM (NREM) stages [3]. NREM-3 identifies the deepest sleep state, characterized by delta waves that express in the lowest part of the frequency spectrum; the cortical activity during NREM-3 is emulated by deep anesthesia states.
In recent years, the comprehension of both the dynamics and the functional role of delta waves has significantly improved. Current studies outline two complementary roles of slow waves, implemented respectively by the down-states and the up-states of neuronal populations [4,5,6]. According to these studies, the down-states would help neurons to rest, favoring their prophylactic maintenance; in other words, they would guarantee the restorative function of sleep [4]. On the other hand, the up-states would have a fundamental role in learning: the induction of spikes with appropriate temporal delays between pre- and post-synaptic neurons would trigger the mechanism of synaptic plasticity, enhancing and weakening the synaptic efficacies in order to encode the memories acquired during wakefulness [6,7]. Indeed, several studies have shown the negative consequences of sleep deprivation on many cognitive functions, such as memory consolidation and psychomotor vigilance [5].
Up to the present, electrophysiological recordings such as electroencephalography (EEG) and electrocorticography (ECoG) have been the standard methods for collecting slow wave data in humans. When combined with more invasive techniques like extracellular and intracellular recording, both in vivo and ex vivo, animal models provide many entry points toward the investigation of neuronal functionality [8,9,10]. Recently, cortex-wide mesoscopic optical imaging, combined with fluorescent indicators of activity, has provided new insight into the spatio-temporal propagation pattern of brain activity. A few studies have taken advantage of Voltage Sensitive Dyes (VSDs) to unravel the dynamics of cortical traveling waves across different anesthetic brain states, during slow wave sleep, and during resting wakefulness in mice [11,12]. Genetically encoded proteins are much less invasive than synthetic dyes, allow repeated measures on the same sample, and can be targeted to specific cell populations. The combination of genetically encoded calcium and voltage indicators (GECIs and GEVIs, respectively) with wide-field microscopy is a unique approach for examining neuronal activity in anesthetized and awakening mice at the mesoscale [13,14,15,16,17,18,19,20,21]. In detail, GECIs make it possible to visualize fluctuations in calcium concentration, which is an indirect reporter of neuronal spiking activity [14]. Compared to GEVIs, recently developed GECIs like GCaMP6 are characterized by a very high sensitivity, and within the GCaMP family, GCaMP6f (f = fast) represents the best compromise between sensitivity and kinetic speed in response to the presence of calcium [15]. By allowing the mapping of spontaneous network activity over a broad range of frequencies (including the delta band), GECIs have recently proved to be a powerful tool for dissecting the spatio-temporal features of slow wave activity [19,20]. Despite the lower temporal resolution compared to electrophysiological methods, this approach increases both the spatial resolution and the specificity of the recorded neuronal population.
Here we took advantage of large-scale fluorescence microscopy coupled with transgenic mice expressing GECIs in excitatory neurons. The description we adopt is that the passage of a slow wave (in the delta band) over a neuronal population causes the spiking frequency of each involved neuron to move from a lower rate (in the Down state) to a higher one (in the Up state). In this picture, the activity of the cortex during deep sleep is characterized by bi-stability, with neurons oscillating between two regimes, or modes, and this bi-modality is modulated by the passage of slow waves, which induce a coherent state transition in the invested cell assemblies. The timestamp of the transition has to be identified to quantify the slow wave dynamics across the cortex surface, and we assume (as discussed in detail below) that the dynamics of the fluorescence response function is fast enough to identify, at least, the presence of down- to up-state wavefronts. Rooted in this framework, and based on the time series extracted from the wide-field images of one cortical hemisphere of anesthetized mice (described in Section 2.1), we (i) developed a data analysis pipeline that can evaluate the main spatio-temporal features of wave propagation (Section 2.2); and (ii) built and validated a simple model of neuronal population activity (the Toy Model) that can simulate the propagation of cortical waves, to be employed as a calibration tool for the data analysis pipeline (Section 2.3). Section 3 illustrates the results obtained on the propagation of neuronal activity throughout the cortex; Section 4 is devoted to discussion and conclusions, with some remarks on future developments of the methods presented here and on further studies enabled by this approach.
2. Materials and Methods
In this section, we first describe the experimental procedures followed to collect the data; we then focus on the analysis pipeline and on the Toy Model.
2.1. Experimental Procedures and Data Processing
Experimental data, acquired from mice, were provided by the European Laboratory for Non-Linear Spectroscopy (LENS) (LENS Home Page, http://www.lens.unifi.it/index.php). All procedures involving mice were performed in accordance with the rules of the Italian Ministry of Health (Protocol Number 183/2016-PR). Mice were housed in clear plastic cages under a 12 h light/dark cycle and were given ad libitum access to water and food.
Mouse Model: We used a transgenic mouse line from Jackson Laboratories (Bar Harbor, ME, USA), the C57BL/6J-Tg(Thy1GCaMP6f)GP5.17Dkim/J (referred to as GCaMP6f mice; for more details, see The Jackson Laboratory, Thy1-GCaMP6f, https://www.jax.org/strain/025393). In this mouse model, the calcium indicator GCaMP6f is selectively expressed in excitatory neurons [22]. This fluorescent protein is ultra-sensitive to the calcium ion concentration [15,22], and the fluorescence signal associated with fluctuations in calcium concentration is eventually correlated with the firing rate of the neuronal populations underlying each pixel [14,23].
Surgery: Mice were anaesthetized with a mix of Ketamine and Xylazine, at doses of 100 mg/kg and 10 mg/kg respectively. Ketamine is an antagonist of the postsynaptic NMDA receptors [24], widely used in the literature to study anesthesia-related slow waves [25,26]. To obtain optical access to neuronal activity over the right hemisphere, the skin over the skull and the periosteum were removed, after applying the local anesthetic lidocaine (20 mg/mL). Wide-field imaging was performed right after the surgical procedure.
Wide-field Fluorescence Microscopy: For imaging of GCaMP6f fluorescence, a 505 nm LED light (M505L3 Thorlabs, Newton, NJ, USA) was deflected by a dichroic filter (DC FF 495-DI02 Semrock, Rochester, NY, USA) on the objective (2.5x EC Plan Neofluar, NA 0.085, Carl Zeiss Microscopy, Oberkochen, Germany). A 3D motorized platform (M-229 for the xy plane, M-126 for the z-axis movement; Physik Instrumente, Karlsruhe, Germany) allowed sample displacement. The fluorescence signal was selected by a band-pass filter (525/50 Semrock, Rochester, NY, USA) and collected on the sensor of a high-speed complementary metal-oxide semiconductor (CMOS) camera (Orca Flash 4.0 Hamamatsu Photonics, Bridgewater Township, NJ, USA). The fluorescence signal recorded in wide-field imaging originates from multiple cells in the cortical volume.
The dataset consists of a series of .tif images spaced by a time step of 40 ms (sampling frequency of 25 Hz). Each image covers a 25 mm2 (5 mm × 5 mm) area and offers a view from above of the brain cortex of two transgenic Thy1-GCaMP6f mice (hereafter referred to as Keta1 and Keta2). The total collection time for each mouse was about 5 minutes (eight sets of 1000 frames, each set corresponding to an observation period of 40 ms × 1000 = 40 s). Modern high-density electrode arrays for ECoG recordings can interrogate up to 32 sites per hemisphere in mice and can reach a sampling frequency of 25 kHz when the signal from each channel is digitized [27,28,29]. Compared to this, the resolution of wide-field optical microscopy is much worse from a temporal point of view (25 Hz for the optical technique, up to 25 kHz for the electrode grid), but higher spatially (for the optical technique, each pixel is 50 μm × 50 μm, to be compared with the 500 μm electrode spacing of the grid, which corresponds to a worsening of the spatial inspection by a factor of 100; the spacing of the electrodes can be smaller than 500 μm for ultra-thin flexible arrays [30], approaching the spatial resolution of imaging recordings).
In every analyzed image, the left cerebral hemisphere of the anaesthetized mouse is clearly visible, while the incomplete parts of the right hemisphere, if present, have been cut off at the beginning of the analysis pipeline. In addition, the black background appearing in the image has been masked, since it is not informative. After these first processing steps, the raw images are reduced in size. In Figure 1, a sample image is superimposed on the atlas of a mouse's brain cortex, in order to tag the different cortical areas in the data.
2.2. Data Analysis Pipeline
The data analysis pipeline, named Calcium Imaging Slow Wave Analysis Pipeline (CaImanSWAP; SWAP is the name given to the software tools developed for the study of electrophysiological recordings [29]), has been coded in Python, using the scikit-image package (Image Processing in Python, https://scikit-image.org) for the processing of the raw data.
The pipeline consists of three main sequences. The first concerns image initialization and signal cleaning, and it is further subdivided into three parts:
Image initialization. Each .tif image is imported and translated into a matrix of floats. Matrices are then assembled so that a collection of images belonging to the same recording set is stored in a 3D array: two dimensions are spatial (representing the cortex surface), while the third is time (quantized in 40 ms steps). Second, the data are manipulated in order to keep the informative content only, as already illustrated in Figure 1. Matrices are cut in the spatial dimensions, isolating the portion of interest of the dataset (e.g., eliminating parts of the right hemisphere). Afterwards, in order to remove the non-informative black background, we make use of a masking process. The edges of the mask are obtained with the find_contours function (skimage.measure Python package), which adopts the marching squares algorithm; each pixel outside the mask is forced to take the Not A Number (NaN) value. Since images in the same recording set share the same perspective on the cortex, the cropping process is applied en bloc to each dataset. The cropping process (cutting and masking) is the step of the analysis procedure that cannot be fully automatized at this stage of software development, because it depends on the quality of the image set under study.
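As an illustrative sketch of this initialization step: the real pipeline traces the mask border with skimage's find_contours (marching squares), while here, to keep the example self-contained, the boolean mask is passed in directly; all names and toy shapes are hypothetical.

```python
import numpy as np

def crop_and_mask(stack, row_slice, col_slice, mask):
    """Cut a (time, height, width) image stack to the region of interest
    and force every pixel outside the informative area to NaN, as done
    for the non-informative black background in the pipeline."""
    cropped = stack[:, row_slice, col_slice].astype(float)
    cropped[:, ~mask] = np.nan   # background pixels -> Not A Number
    return cropped

# toy usage: 10 frames of 8x8 pixels, keep a 4x4 window, mask one corner
stack = np.ones((10, 8, 8))
mask = np.ones((4, 4), dtype=bool)
mask[0, 0] = False
roi = crop_and_mask(stack, slice(2, 6), slice(2, 6), mask)
```

Since all frames of a recording set share the same perspective, the same slices and mask can be applied en bloc to the whole 3D array.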
Background subtraction and spatial smoothing. Once the informative parts of the data are isolated, the pipeline proceeds with signal cleaning. The constant background is estimated for each pixel as the mean of the signal computed over the whole image set; it is then subtracted from the images, pixel by pixel. Fluctuations are further reduced with a spatial smoothing: pixels are assembled in 2 × 2 blocks (macro-pixels); at each time, the value of a macro-pixel is the mean signal of the 4 pixels belonging to it (from now on, macro-pixel is meant each time pixel is mentioned). This step may be unnecessary given the cleanness and regularity of the datasets used to test and develop the analysis pipeline, but our goal in devising the analysis procedure was to arrange a set of methods applicable to any dataset, including noisy ones. The current implementation with macro-pixels corresponds to a factor of 4 reduction in the number of inspected channels, with an equivalent grid step of 100 μm for the pixel array, smaller than or comparable with the sizes currently accessible with surface multi-electrode arrays (MEAs), keeping this analysis competitive in terms of spatial resolution.
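These two cleaning operations can be sketched as follows, assuming a (time, height, width) array with even spatial dimensions; the function name and toy data are ours:

```python
import numpy as np

def clean_and_downsample(stack):
    """Subtract the constant background (per-pixel temporal mean over the
    whole image set) and average 2 x 2 pixel blocks into macro-pixels."""
    stack = stack - np.nanmean(stack, axis=0)        # background subtraction
    t, h, w = stack.shape                            # h and w assumed even
    blocks = stack.reshape(t, h // 2, 2, w // 2, 2)
    return np.nanmean(blocks, axis=(2, 4))           # macro-pixel = mean of its 4 pixels

rng = np.random.default_rng(1)
frames = clean_and_downsample(rng.normal(5.0, 1.0, size=(100, 8, 8)))
```

After this step, every macro-pixel trace has zero temporal mean, and the spatial grid is coarsened by a factor of 2 per side.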
Spectrum analysis and time smoothing. In order to identify the dominant frequency band in each dataset, a real Fourier transform (RFT) is computed for each pixel. A unique spectrum for each dataset is then obtained as the mean of the single pixels' RFTs. Given the mean spectrum, the frequency band of maximum intensity is identified, and the signal is further cleaned by applying a 6th-order band-pass Butterworth filter (provided by the scipy Python package, Scientific Python, https://www.scipy.org).
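The spectral step can be sketched as below; we use scipy's second-order-sections form of the 6th-order Butterworth filter with a zero-phase application for numerical robustness, and the band limits and toy trace are illustrative:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 25.0  # Hz, sampling frequency (one frame every 40 ms)

def band_filter(signals, low, high):
    """Mean spectrum over pixels via a real Fourier transform, then a
    6th-order band-pass Butterworth filter applied to each pixel.
    signals: (T, N) array, one column per pixel."""
    freqs = np.fft.rfftfreq(signals.shape[0], d=1.0 / FS)
    spectrum = np.abs(np.fft.rfft(signals, axis=0)).mean(axis=1)
    sos = butter(6, [low, high], btype="band", fs=FS, output="sos")
    return freqs, spectrum, sosfiltfilt(sos, signals, axis=0)

# toy trace: a 2 Hz "delta" component plus a 10 Hz nuisance component
t = np.arange(2000) / FS
x = (np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 10 * t))[:, None]
freqs, spectrum, filtered = band_filter(x, 1.0, 4.0)
```

On this toy trace, the mean spectrum shows the dominant band around 2 Hz, and the 10 Hz component is strongly attenuated after filtering.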
At the end of these preliminary steps, we obtain a 3D array containing only the informative, cleaned signal. Subsequently, for normalization purposes, the signal of each pixel is divided by its maximum value; the normalized signal is therefore expressed in dimensionless units.
The second part of the analysis consists of finding the set of upward transition times for each pixel, and of slicing the collection of transition times into waves. As already introduced, a wave is a coherent signal that propagates along the cortex surface and solicits the cell assemblies, inducing a Down-to-Up state transition in the involved neuronal populations. The frequency of the phenomenon is in the delta band, meaning that between one and four travelling signals (waves) can be observed in one second of recording. The most common definitions of the timestamp of the upward transition are: (i) the onset of the transition; (ii) the crossing of a threshold that separates Up and Down states. This second option is not suitable in the case of calcium imaging, because of the timing of the transfer function that determines the light emission (with the term transfer function we refer to the curve shown in [15], Figure 1b, in analogy with the transfer function of a filter; indeed, obtaining the luminosity signal as the convolution of the neuronal spiking activity with the response function illustrated in [15] can be schematised as a low-pass filter acting on the activity signal). Indeed, even though we used the fast variant (GCaMP6f), the dynamics of light emission is slower than that of the state transition. This is also the reason why we cannot estimate the slope of the Down-to-Up state transition from the profile of the function rising from the minimum: the fast slope is hidden in the slow variation of the transfer function. We assume, instead, that the trigger of the transitions can be recognized in the signal's minima of each pixel. This hypothesis is justified by the transfer function having a short characteristic rise time [15], comparable with the theoretical minimum time interval between the passage of two waves on a pixel (this assumption, made for the wave rise time, is not valid for the decay time, because the transfer function's decay time is considerably longer [15]). Indeed, assuming two different waves emitted in antiphase with a temporal distance of 250 ms (the maximum frequency of waves in the delta band is 4 Hz), the same neuronal population might be involved by the passage of two wavefronts in just 125 ms. Taking this borderline case into account, the actual sub-band where the phenomenon is observable is [0.5, 3.3] Hz (as better explained afterward, the deep anesthesia state causes the signal of interest to belong to this frequency sub-band). As long as the studied signal belongs to this sub-band, the technique we developed is able to detect the down- to up-state transition dynamics of a neuronal population throughout its whole duration. In detail, as a wave passes over a population, the corresponding pixel's signal is already decaying: the wave passage causes its ascent and, therefore, a local minimum of the signal, which can be identified with the onset of a new transition. According to this hypothesis, it is thus possible to catch the quick upward transitions relying on optical techniques characterized by fluorescence response times much slower than the transition times themselves.
The analysis of minima can be split into two steps:
Search for minima. For each pixel, the signal's minima are identified by comparing the intensity value at each instant (frame) to the previous and next ones. This operation corresponds to evaluating the difference quotient of the data, and it is used as the starting point for the subsequent interpolation step.
Parabolic interpolation of minima. A parabolic fit is evaluated around each minimum, using five data points. Analytically, for a fit with a second-order polynomial (three parameters), 5 points are enough to estimate the fit parameters. We also evaluated the option of using a higher-order polynomial (a quartic, five parameters), for which five points are theoretically enough, but seven or nine would in practice be required for a more accurate estimate of the function profile. However, under the assumption that each minimum corresponds to the passage of a wave, and that distinct waves are separated in time by at least 250 ms (neglecting the borderline case of two waves in antiphase), using five points (i.e., inspecting 160 ms around the minimum, with a sampling time of 40 ms) should allow each wave contribution to be isolated, while with seven or nine points (240 and 320 ms respectively) an overlap of more than one wave cannot be excluded. The three parabola parameters are saved into a proper data structure. The time of the minimum (interpolated between original frames) is obtained from the vertex of the parabola. With this information, assumed as the timestamp of the transition, it is possible to reconstruct the activity of the passing waves, following the movement both in time and space of the minima, as we will show in Results (Section 3).
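The two steps can be sketched together: local minima are located through the sign change of the difference quotient, and each is refined with a 5-point parabolic fit whose vertex gives the sub-frame timestamp. The function name and the toy trace are ours; the quadratic coefficient is kept because it enters the excitability measure discussed later.

```python
import numpy as np

FRAME_MS = 40.0  # sampling step of the recordings

def transition_times(signal):
    """Return (timestamp_ms, quadratic_coefficient) for each local minimum
    of a pixel trace, refined by a parabolic fit over 5 data points."""
    found = []
    for i in range(2, len(signal) - 2):
        if signal[i - 1] > signal[i] <= signal[i + 1]:      # local minimum
            idx = np.arange(i - 2, i + 3)                   # 5 points = 160 ms window
            a, b, c = np.polyfit(idx * FRAME_MS, signal[idx], 2)
            if a > 0:                                       # upward concavity only
                found.append((-b / (2.0 * a), a))           # parabola vertex, concavity
    return found

sig = (np.arange(11, dtype=float) - 5.2) ** 2   # clean dip between frames 5 and 6
minima = transition_times(sig)
```

On this exact parabola, the recovered vertex lands at 5.2 frames (208 ms), i.e., between the sampled frames, illustrating the sub-frame interpolation.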
Once the collection of minima of the whole dataset is obtained, it is partitioned into slow waves. The collection of minima is given as input to a MATLAB (MATLAB®, The MathWorks, Inc., Natick, MA, USA, https://www.mathworks.com/products/matlab.html) pipeline, already employed to analyze and elaborate data from electrodes [9,31,32,33] and recently revised and refined [29]. It is notable that the analysis pipeline we devised connects, after only minor adaptation efforts, with a well-established analysis workflow that was developed for the study of completely different experimental data. This fact is a result in itself, since it represents a first successful step towards the plan of delivering a set of tools and methods to extract quantitative information from electrophysiological signals and to compare experiments, overcoming differences in data taking and any systematic effects that may depend on the data acquisition techniques.
Concerning this pipeline, which we call WaveHunt, its action consists in splitting the set of transition times into separate waves. To do so, it is necessary to set an initial maximum time interval (Time Lag) between transitions, so that they can be labelled as part of the same wave. This value is iteratively reduced until every identified wave respects the unicity principle: no pixel can be involved more than once by the passage of a single wave. In addition, we adopt a globality principle: we only keep waves that recruit at least a given fraction of the total pixels (global waves). This selection guarantees that what we call a "wave" is actually a global collective phenomenon on the cortex. The procedure that returns the collection of waves from the set of trigger times at given positions on the cortex (channels or pixels) is a crucial issue. Criteria have to be defined to "cut" the time series, in order to segment the list of trigger times into waves. The unicity principle is one option, together with the globality request; of course, these assumptions select a subset of waves (global waves; predominance of regular propagation patterns, such as planar and spherical wavefronts) and rule out overlapping patterns and partial waves. In other words, in the attempt to disentangle a complex phenomenon like the propagation of delta waves, our choices focus on extracting regular patterns, assuming they constitute a relevant part of the picture.
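A much-simplified sketch of this segmentation logic follows; the shrink factor, the globality threshold, and all names are illustrative placeholders, not the values used in WaveHunt:

```python
def hunt_waves(events, n_pixels, time_lag, shrink=0.75, min_frac=0.75):
    """Split a time-sorted list of (time, pixel) transition events into
    waves: events closer than `time_lag` join the same wave. The lag is
    iteratively shrunk until no wave involves a pixel twice (unicity);
    waves recruiting less than `min_frac` of the pixels are then
    discarded (globality). Assumes a non-empty event list."""
    events = sorted(events)
    while True:
        waves, current = [], [events[0]]
        for ev in events[1:]:
            if ev[0] - current[-1][0] <= time_lag:
                current.append(ev)
            else:
                waves.append(current)
                current = [ev]
        waves.append(current)
        if all(len({pix for _, pix in w}) == len(w) for w in waves):
            break                 # unicity satisfied for every wave
        time_lag *= shrink
    return [w for w in waves if len(w) >= min_frac * n_pixels]

# two planar sweeps over 3 pixels, 0.5 s apart, starting from a too-large lag
events = [(0.00, 0), (0.05, 1), (0.10, 2), (0.50, 0), (0.55, 1), (0.60, 2)]
waves = hunt_waves(events, n_pixels=3, time_lag=1.0)
```

Starting from a Time Lag of 1.0 s, the two sweeps are initially merged into one wave that violates unicity; the lag is shrunk until the split into two global waves emerges.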
With the collection of waves, it is now possible to get to the third part of the analysis pipeline, where quantitative information on the slow wave activity is obtained. In this phase, three different types of measures are taken:
Excitability of the neuronal populations. For each minimum, the quadratic coefficient of the corresponding parabolic interpolation is taken. It is proportional to the concavity of the parabola, and therefore contains information on the excitability that the respective neuronal population (i.e., pixel) exhibits as a consequence of the passing wave. In detail, the quadratic coefficient is proportional to the second derivative of the function, and is a measure of the speed of variation of the first derivative; the first derivative indicates a variation of the function; the function is a measure of the luminosity, and is related to the state of the tissue (low/high emission due to the variation in the calcium concentration). The first derivative can identify the minima of the function, and thus the timestamps at which a most significant variation (a change of state, a Down-to-Up transition) is occurring; the faster this variation (monitored by the value of the second derivative), the faster the state transition, and thus, in our description, the more reactive (i.e., excitable) the underlying tissue. Since the normalized signal is dimensionless, the unit of the excitability is an inverse time squared. In order to study the distribution of excitability, the excitability values are collected in a histogram reporting the statistics of the entire sample of minima. In addition, an Excitability Map, representing the average excitability of each pixel, is produced (we point out that the information on excitability can be obtained without splitting the collection of minima into global waves, i.e., before the WaveHunt step of the pipeline, which is necessary only for origin points and velocity; this is clearly shown in the flowchart at the end of this section (Figure 2), in which the logic of the whole pipeline is represented in schematic form).
Origin Points of the waves. Once the collection of waves is obtained, each wave in the collection is examined separately. For each wave, there is a sequence of pixel activations; the first N pixels in the sequence are identified as the origin points (the value of N set in this work, given the number of non-NaN pixels in the images and the globality threshold, corresponds to a subset of origin points comprising only a small fraction of the pixels involved in each wave). Some pixels appear in this ranking more often than others, and these identify the dominant spatial origin of the waves.
Wave velocity. In order to obtain the average speed of each wave, the wave velocity is calculated at different points. If the passage time function T(x, y) of the wave is known, the speed of the wave at a point can be defined as the inverse of the modulus of the function's gradient, v = 1/|∇T| (the passage time function indicates, for each position (x, y), the time at which that position is reached by the wave during its propagation). Computing the gradient and taking its modulus, we obtain:

v(x, y) = 1 / sqrt( (∂T/∂x)² + (∂T/∂y)² ).

It should be noted that the function T represents the wave passage time (which we identify with the time of the minima) in the spatial continuum, but we have this information only for the discrete points corresponding to the pixels. For this reason, the partial derivatives above have been calculated as finite differences, with the distance between two pixels denoted as d:

∂T/∂x ≈ [T(x + d, y) − T(x − d, y)] / (2d),  ∂T/∂y ≈ [T(x, y + d) − T(x, y − d)] / (2d).

In other words, for any pixel identified by the pair of coordinates (x, y), the velocity of the wave at that pixel is computed using the information collected at the four adjacent pixels. Therefore, given the above calculation rule, for each wave only pixels with four valid neighbours are taken into account: these are the pixels for which both finite differences can be evaluated. By averaging the values obtained over the pixels that meet this condition, we obtain the average speed of a wave; this procedure is repeated for each wave, in order to build a histogram of average speeds.
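The calculation rule above can be sketched as follows; the grid step and the planar-wave test map are illustrative:

```python
import numpy as np

def average_wave_speed(T, d):
    """Average speed of one wave from its passage-time map T (a 2D array of
    transition times, one per pixel): v = 1 / |grad T|, with the partial
    derivatives computed as central finite differences of step d; only
    interior pixels, i.e., those with four valid neighbours, contribute."""
    dT_dx = (T[1:-1, 2:] - T[1:-1, :-2]) / (2.0 * d)   # x runs along columns
    dT_dy = (T[2:, 1:-1] - T[:-2, 1:-1]) / (2.0 * d)   # y runs along rows
    speed = 1.0 / np.sqrt(dT_dx ** 2 + dT_dy ** 2)
    return np.nanmean(speed)

# planar wave sweeping along x at 2.0 mm/s, sampled on a 0.1 mm grid
d = 0.1
T_plane = np.tile(np.arange(12) * d / 2.0, (12, 1))
v = average_wave_speed(T_plane, d)
```

NaN-masked pixels propagate into the finite differences, so border and masked regions are automatically excluded from the nanmean average.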
Through the measures listed above, we have been able to quantitatively compare the data coming from the two different mice, and from the simulation obtained with the Toy Model. A flowchart of the whole analysis pipeline is presented in Figure 2.
2.3. Toy Model
The theoretical model underlying the simulator is based on simple conceptual assumptions. The system is schematized as a two-dimensional grid of Pixels (Python objects that define the elements of the simulation are indicated in italics); the width and height of the grid can easily be set to reproduce the shape and size of the experimental images. Each Pixel, being ideally the representation of a macroscopic portion of the cerebral cortex, contains a number of Neurons extracted from a normal distribution (the resulting number of Neurons per Pixel is unrealistic and underestimates the density of cells in the assembly, but this is required to cope with the limited processing resources of the simulation, and it allows a faster computation; nevertheless, the impact of this choice is not relevant for the results of the Toy Model, since the Pixel's signal is normalized). Neurons are the actual sources of the signal, as Poissonian emitters. In the simplest theoretical modelling, only two options are admitted for the expected value of the Poisson distribution that emulates the spiking activity of the Neurons: one for the Down state, and the other for the Up state. The simplest model has Neurons as independent, identical units, each holding an internal binary state that identifies the Neuron as active (Up) or idle (Down). The internal state of each Neuron depends only on the state of the Pixel to which the Neuron is assigned. The Pixel itself can likewise be in two states: active (if invested by a Wave), or idle (if no Wave has passed by the Pixel's coordinates for a given time interval). If the Pixel is active, its Neurons are Up; if the Pixel is idle, its Neurons are Down. Consequently, the Poissonian emitters of that Pixel fire according to the corresponding expected value.
Experimental evidence [29] indicates that the ratio of the active-state firing rate to the idle-state firing rate can be estimated within a well-defined interval. Considering these results, it is reasonable to set this ratio at 5 for the Toy Model; this assumption is simplistic but rooted in experimental observations: in [29], we computed the distribution of the natural logarithm of the multi-unit activity for 11 murine subjects, with 32 channels per subject, and, despite the large variability of the sample, we observed that the activity in the Up state is larger than in the Down state by a factor lying in an interval that, expressed in powers of Euler's number e, brackets the value adopted here.
Each Neuron has an intrinsic time of permanence in the active state, after which it returns to the idle state. To fix this parameter (unique, for simplicity, for the entire population of Neurons), we looked at experimental observations that place the duration of the Up state within a well-defined range [29]; the permanence time used in the Toy Model is set within this range. A Pixel is declared inactive when all of its Neurons have returned to the idle state.
In order to model the presence of the fluorescent protein, from which the luminosity signal is obtained, the spiking activity of each Pixel undergoes a convolution with a response function. The analytical form of the response is derived from the inspection of the results reported in [15], aiming at reproducing the trend of the fast variant (GCaMP6f); the function used in the Toy Model is a Lognormal, defined for t > 0 as:

f(t; μ, σ) = [1 / (t σ √(2π))] exp( −(ln t − μ)² / (2σ²) ).
This function depends on the pair of parameters (μ, σ), and its shape varies with them. A first and significant constraint that the response function must respect is the agreement between the experimentally observed rise time and the modelled one. Looking at the results shown in Reference [15], it is possible to identify a range of validity for the rise time, while the modelled value can be taken as the mode of the Lognormal, exp(μ − σ²), resulting in the condition that the mode must fall within the experimental range. In order to univocally identify the parameters μ and σ, it would be necessary to impose an additional constraint, for example a moment of the distribution. However, given the difficulty of extracting such a piece of information from the visual inspection of the experimental response, we preferred to explore a set of reasonable (μ, σ) values compatible with the rise-time constraint and capable of reproducing as closely as possible the immediately visible characteristics of the function.
In Figure 3, a collection of Lognormal trends is plotted for different (μ, σ) values that satisfy the rise-time constraint. The trends are normalized to the median of the distribution, exp(μ), in order to facilitate the comparison between the reports in [15] and Figure 3. This comparison highlights the important characteristics of the selected pair (μ, σ): the corresponding function, when decaying, halves its maximum value after a short time interval, and reaches zero after about one second, mimicking the properties of the curve reported in [15] for the 6f variant of GCaMP. From these considerations, it emerges that the selected pair of parameters is a valid choice for the description of the experimental response function, being able to reproduce the immediately observable features of amplitude, asymmetry, and tail weight. This set of parameters has thus been used for the entire population of Neurons.
Once the simulation of the Poissonian emitters is completed, each Pixel collects the sum of the signals produced by the Neurons belonging to it. This population signal is then convolved with the Lognormal discussed above, to generate the fluorescence response, resulting in the brightness signal of the whole Pixel.
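The emission chain of a single Pixel can be sketched as follows; the firing rates, the neuron count, and the Lognormal parameters (mu, sigma) are placeholders rather than the fitted values, and only the Up/Down rate ratio of 5 comes from the discussion above:

```python
import numpy as np

rng = np.random.default_rng(0)
DT = 0.040                 # 40 ms simulation step
RATE_DOWN = 2.0            # Hz, idle-state rate (placeholder value)
RATE_UP = 5 * RATE_DOWN    # active/idle ratio fixed at 5

def pixel_brightness(up_mask, n_neurons=20, mu=-1.0, sigma=0.5):
    """Sum the Poisson spike counts of a Pixel's Neurons, whose rate is
    selected by the binary Up/Down state at each frame, and convolve the
    population signal with a Lognormal response kernel."""
    rates = np.where(up_mask, RATE_UP, RATE_DOWN)
    spikes = rng.poisson(rates * DT, size=(n_neurons, len(up_mask))).sum(axis=0)
    t = np.arange(1, 50) * DT                         # ~2 s kernel support
    kernel = np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2)) / t
    kernel /= kernel.sum()                            # unit-area response
    return np.convolve(spikes, kernel)[: len(up_mask)]

up_mask = np.zeros(200, dtype=bool)
up_mask[100:] = True       # 4 s of Down state, then 4 s of Up state
brightness = pixel_brightness(up_mask)
```

The resulting trace shows the slow fluorescence rise after the state switch, i.e., the low-pass filtering of the fast transition discussed in Section 2.2.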
Since the Neurons described in the Toy Model are intrinsically independent objects, no form of wave propagation or interaction would otherwise be possible. The dynamics is therefore implemented separately, according to two different conceptual schemes. A first possibility consists of defining a certain number of waves during the simulation initialization. A wave is nothing more than an object defined on the two-dimensional grid with its own parameters of shape, position, velocity, and acceleration. At each instant of time, the simulation evaluates the wave dynamics (processed according to the previously specified parameters) and checks the spatial overlap of waves and Pixels, assigning the activation status to each Pixel. Each wave is equipped with further parameters in order to make its dynamical possibilities more flexible and broad; these parameters include a birth position, a lifetime, and a condition of reflection or absorption at the edges of the grid. A second possibility, developed for a more accurate comparison with the experimental data, consists of declaring a sequence of activation times for each Pixel. The simulation then proceeds by checking, step by step, which Pixels must be activated to respect the time sequences given as input.
The logic implemented in the simulation, from the initialization phase up to the production of the final signal, is schematized in the form of a flowchart (
Figure 4).
The Toy Model is not intended to provide a complete picture of the light emission phenomenon, given its complexity and the fact that the fluorescence signal in wide-field calcium imaging results from the interplay of several factors that are not easily disentangled. The signal could indeed originate from neurites and somas of pyramidal neurons in both layer 2/3 and layer 5; it has indeed been reported that neuronal firing is one of the components of the observed emission, but a large fraction of the fluorescence signal would originate in the neuropil, reporting on the activation of dendritic and axonal calcium channels [
34]. We decided to focus on the neuronal firing component only, for three reasons: (i) it is the most straightforward approach to follow, enabled by a simplified neuron model as a point-like Poissonian emitter; (ii) in an oversimplified frame in which only one element is included, its contribution can be more clearly evaluated and emulated; in this case, since we observe a good agreement between simulated and experimental data, we can conclude that the impact of neuronal firing on the observed Ca signal is not negligible; (iii) rather than provide a valid description of the light emission phenomenon or enlarge our understanding of the cortical dynamics, our intent with the Toy Model is, less ambitiously, to offer an additional and complementary tool to calibrate and validate the analysis pipeline. The Toy Model produces an output that can emulate the data extracted from the set of images, and for the purpose of testing the stability and validity of the analysis procedure, we care more about the format than about an accurate simulation of the slow wave dynamics. The goal was to feed the pipeline with two separate and independent inputs, aiming to attest its flexibility and its potential to operate on different datasets without the need for custom tuning of the procedure; the results we obtained on this are encouraging, as discussed in the following section.
4. Discussion
In this paper, we discuss two different lines of work. The former consists of the analysis of experimental images: we introduced a complete analysis sequence capable of identifying the slow wave signal on the cortex. The latter consists of framing the experimental signals in a simple theoretical model that takes into account the statistics of the neuron activity and the fluorescence response of the proteins encoded in the mouse brain. A simulator (denoted as the Toy Model) was implemented to synthesize artificial signals based on this model. Afterwards, the artificial signals were processed through the same pipeline developed for the experimental data, with the aim of validating the analysis tools.
The first outcome we present is the release of the CaImanSWAP analysis pipeline. The preliminary version we distribute for the neuroscience community provides a set of tools able to convert raw images (calcium imaging data acquired with wide-field microscopy) into a format that can be received as input by an already established analysis workflow whose development has been guided by a completely different experimental set-up. The advantage of this approach is that the same scientific problem (in this case, the dynamics of slow wave activity in deeply anaesthetised brains) can be addressed by enriching the statistics of the data with differently acquired experimental samples, combining optical and multielectrode electrophysiology, with the analysis tools as the unifying element that overcomes any systematic effects due to the data acquisition. In this framework, quantitative comparisons of different datasets, and of experiments and simulations, are enabled without introducing tailored adjustments in the signal processing. In agreement with the objectives and conclusions stated in [
35], we aim at going beyond visual inspection, enlarging the offer of tools that support quantitative studies of neuronal dynamics from imaging data; in addition, we pursue the adoption of free software, potentially enlarging the number of users. Entering into the details of other analysis solutions, approaches like “optical flow” generate velocity vector fields to determine the speeds and directions of propagated activity (e.g., in [
35,
36]); conversely, our tool is not associated with vector fields but infers the spatiotemporal propagation features from the source signal (i.e., the localization of minimum values across the pixel field). Moreover, to estimate the major direction of propagation, the optical-flow approach relies on the selection of a region of interest, while our approach is not biased by this a priori choice. Finally, in the slow wave field, other studies have reported analyses based on Granger causality (e.g., in [
37]), with the limitations of an arbitrary choice of the region of interest and the inability to perform single-wave propagation analysis.
On the basis of the above considerations, the first direction of development that we foresee is to adopt the tools discussed here to compare optical data and electrophysiological measurements. Our approach consists of having a common analysis pipeline that branches at the very beginning, where each type of experimental data shows its own specificity and requires a dedicated treatment to extract the information from the raw signals. Once the raw data are processed into a collection of “trigger” times (i.e., whatever can be interpreted as related to transition or activation times) over a set of “channels” (i.e., pixels or electrodes) with a defined geometry, such a collection can be fed as input to the common next steps of processing, regardless of the peculiarities of the raw data underneath. In particular, a crucial point of the pipeline is the WaveHunt algorithm, which splits the collection of times into a collection of waves. Here, besides improvements and further validation processes that could be included in the algorithm, is where the wave analysis starts, which is the core of the quantitative investigation of the phenomenon. In addition to the measurements we have presented in this work, several other studies can be carried out, for instance: a principal component analysis (PCA), to identify the content of variability in the dataset and to define a parameter space in which to perform the wave clustering; and the evaluation of quantitative measurements (such as the mean speed and the mean propagation pattern) for each identified cluster. Some preliminary tests are ongoing, aiming at a direct comparison of an extended set of experimental data, necessary for a solid statistical assessment of the findings and a larger significance of the results.
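The role of the WaveHunt step can be illustrated with a minimal sketch that splits (channel, trigger time) pairs into candidate waves by temporal gaps and then applies a globality threshold; the gap-based criterion and all names here are simplifications chosen for illustration, not the actual WaveHunt implementation:

```python
def hunt_waves(triggers, max_gap=0.25, n_channels=100, globality=0.75):
    """Split (channel, time) trigger pairs into candidate waves.

    A new candidate wave starts whenever two consecutive triggers (after
    sorting in time) are separated by more than max_gap seconds; a
    candidate is kept as a "global" wave only if it involves at least
    globality * n_channels distinct channels. This is a toy criterion,
    not the actual WaveHunt algorithm.
    """
    events = sorted(triggers, key=lambda ct: ct[1])
    waves, current = [], []
    for ch, t in events:
        if current and t - current[-1][1] > max_gap:
            waves.append(current)
            current = []
        current.append((ch, t))
    if current:
        waves.append(current)
    # Globality selection: keep waves involving enough distinct channels
    return [w for w in waves if len({ch for ch, _ in w}) >= globality * n_channels]
```

Because the input is just a geometry-agnostic collection of trigger times per channel, the same sketch applies equally to pixels and to electrodes, which is the point of branching the pipeline only at the raw-data stage.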
A further direction of study could concentrate on data samples acquired from subjects in different neurophysiological states that still manifest slow wave activity; the approach we have developed could enable a quantitative comparison of different anesthetics (such as GABAergic agents like propofol, isoflurane and sevoflurane); also, the option of measuring the effects at different levels of anesthesia (from deep anesthesia towards wakefulness) could be evaluated.
The current status of the analysis pipeline is preliminary, and there is large room for improvement, addressing different aspects of the processing. One of them is the filtering of signals. At the present stage, we adopted Butterworth filters, which are designed to have a frequency response as flat as possible in the passband (maximally flat magnitude filters) but introduce a phase distortion that can affect the spatio-temporal patterns. By contrast, linear-phase filtering can be achieved with different approaches, such as Bessel filters (designed for a maximally linear phase response in the passband), filters with a symmetric impulse response such as causal finite impulse response (FIR) filters, or by filtering the signals bidirectionally, to compensate for any phase distortion and optimize for a constant group delay. Besides, the impact of the phase distortion itself should be evaluated; indeed, while a maximal impact is expected in audio applications, for which minimal distortion is required, the requirement could be less stringent for the kind of problem we are facing since, assuming a comparable spectral content for the pixels, the same phase distortion is introduced in all the channels, and the determination of the timestamps might not be significantly affected. Conversely, the phase distortion issue should be taken into account when trying to correlate the pixels’ signals with other inputs with different spectral content and sampling dynamics, such as Local Field Potential (LFP) data. In addition, the order of the filters should be investigated to optimize the performance of the analysis.
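The bidirectional (zero-phase) filtering option mentioned above can be sketched with SciPy, comparing a causal Butterworth filter with its forward-backward application; the band edges and sampling rate are illustrative assumptions, not the pipeline's actual settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt, lfilter

fs = 25.0  # assumed sampling rate [Hz]
# Band-pass around the delta range (cut-off values are illustrative)
b, a = butter(N=4, Wn=[0.5, 4.0], btype="bandpass", fs=fs)

# A noisy in-band oscillation standing in for a pixel signal
t = np.arange(0, 40, 1 / fs)
signal = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)

causal = lfilter(b, a, signal)       # causal filtering: introduces a group delay
zero_phase = filtfilt(b, a, signal)  # forward-backward filtering: compensates the delay
```

The forward-backward pass squares the magnitude response and cancels the phase response, which is why the zero-phase output stays temporally aligned with the underlying oscillation while the causal output is shifted by the group delay.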
As already anticipated, improvements can also be made to the WaveHunt algorithm. Here, in particular, the focus is on defining additional (enriched or complementary) criteria for extracting waves from the collection of trigger times that the algorithm receives as input. The obtained collection of waves depends on the defined criteria; the current implementation favours the inclusion of planar and spherical global waves (in the presented setup, the globality threshold is set to the participation of 75% of the channels), ruling out overlapping patterns, and implies the assumption of regular wavefronts as the dominant subset of propagation modes. This limitation is partly justified by the results of the visual inspection of the raw data, since the movie obtained from the raw images shows a dominance of regular propagation across the entire portion of cortex under observation. Taking the Slow Wave Activity (SWA) regime as the state exhibiting a regular pattern of propagation of the spiking activity, in opposition to the asynchronous regime (awake state), the definition itself of a proper “wave” can be restricted to regular propagation patterns, i.e., patterns fulfilling the unicity requirement we adopt in the pipeline, thus excluding cycloidal or aborted propagation. In addition, a more quantitative evaluation of the different propagation patterns could be reached by introducing complementary criteria, in order to quantify the relative weight of the different criteria and the relative extent (cardinality) of the wave collection depending on them. In other words, since some selections may be mutually exclusive (for instance, the unicity and globality principles cannot be compatible with the isolation of overlapping partial waves), the approach should be to define different sets of selection criteria, identifying for each set its capabilities and limitations, and comparing the resulting collections of waves.
This approach should be taken as a general guideline in the analysis of the data, at every step of the processing. Indeed, part of our investigation of the pipeline consists of identifying how results and outputs depend on the analysis settings, since the unavoidable manipulation that data analysis introduces has an impact and may highlight or isolate a given feature of the data while hiding others. One of our ambitions for the pipeline, named SWAP, is to equip the data analysis procedures with tools that enable the control and monitoring of the applied analysis criteria, in order to keep track of the adopted choices and to identify compatible and incompatible options or dependencies, towards a solid description of the processing steps aimed at a reliable comparison of datasets.
Concerning the Toy Model, the comparison between experimental and simulated data produced a reasonable agreement; future extended analyses may help identify additional features, for a deeper understanding of the phenomena. Despite the current simplicity of the model, our findings, obtained just by adopting a LogNormal transfer function inspired by the experimental response described in [
15], suggest that the spiking activity is, among the many contributions, one of the principal components of the observed fluorescence fluctuations. The analytical description of the transfer function could be further refined using the Toy Model, by assuming a given parametric description and training a decision algorithm, such as a neural network, to infer the parameters, which could eventually be adopted in a deconvolution process.
The reconstruction of the wavefronts obtained with this analysis can also be used for the estimation, with high spatial resolution, of effective connectivity matrices between the neuronal populations underlying the individual pixels. Eventually, the high-resolution spatio-temporal characterization of Slow Wave Activity enabled by our methods could be an important ingredient for tuning the parameters and the interconnections of spiking neural network simulations, like those empowered at similar spatio-temporal resolution by distributed engines [
7,
38]. This data-driven simulation approach will overcome the limits of current simulations, which rely on simplified and stereotyped assumptions, and it will make it possible to validate competing theoretical models by matching simulations with experimental data.