
Table of Contents

Introduction.................................................................................................................................................2
How to read these tutorials...................................................................................................................2
Presentation of the experiment.............................................................................................................3
Neurostorm folders................................................................................................................................3
Starting Neurostorm for the first time.................................................................................................4
Main interface window..........................................................................................................................5
The text is too small...............................................................................................................................5
Database structure.................................................................................................................................6
Database files.........................................................................................................................................7
Create your first protocol.....................................................................................................................7
Protocol exploration............................................................................................................................10
Set up a backup....................................................................................................................................11
Changing the temporary folder..........................................................................................................11
Summary..............................................................................................................................................12
Roadmap..............................................................................................................................................12
Moving a database...............................................................................................................................13
Tutorial 2: Import the subject anatomy.....................................................................................................14
Download..............................................................................................................................................15
Create a new subject.............................................................................................................................15
Right-click doesn't work........................................................................................................................17
Import the anatomy..............................................................................................................................17
Using the MRI Viewer............................................................................................................................19
Fiducial points........................................................................................................................................20
Validation..............................................................................................................................................22
Graphic bugs..........................................................................................................................................23
MNI normalization.................................................................................................................................23
Alternatives...........................................................................................................................................24
Tutorial 3: Display the anatomy.................................................................................................................25
Anatomy folder......................................................................................................................................25
Default surfaces.....................................................................................................................................26
MRI Viewer............................................................................................................................................27

MRI contact sheets................................................................................................................................28
MRI in 3D...............................................................................................................................................29
Surfaces.................................................................................................................................................30
Get coordinates.....................................................................................................................................31
Subcortical regions: Volume..................................................................................................................32
Subcortical regions: Surface..................................................................................................................34
Registration MRI/surfaces.....................................................................................................................35
Interaction with the file system.............................................................................................................35
On the hard drive: MRI..........................................................................................................................39
On the hard drive: Surface.....................................................................................................................41
Tutorial 4: Channel file / MEG-MRI coregistration.....................................................................................42
License...................................................................................................................................................43
Presentation of the experiment............................................................................................................43
Link the raw files to the database..........................................................................................................45
Automatic registration...........................................................................................................................46
New files and folders.............................................................................................................................49
Review vs Import...................................................................................................................................50
Display the sensors................................................................................................................................50
Sensor map............................................................................................................................................51
Manual registration...............................................................................................................................52
Multiple runs and head positions..........................................................................................................53
Edit the channel file...............................................................................................................................54
On the hard drive..................................................................................................................................55
Tutorial 5: Review continuous recordings.................................................................................................57
Open the recordings..............................................................................................................................57
Navigate in time....................................................................................................................................58
Epoched vs. continuous.........................................................................................................................60
Display mode: Butterfly/Column...........................................................................................................61
Montage selection.................................................................................................................................62
Channel selection..................................................................................................................................62
Amplitude scale.....................................................................................................................................63
Time and amplitude resolution.............................................................................................................64
Filters for visualization...........................................................................................................................65

Mouse and keyboard shortcuts.............................................................................................................66
Tutorial 6: Multiple windows.....................................................................................................................67
General organization.............................................................................................................................68
Automatic figure positioning.................................................................................................................68
Example.................................................................................................................................................69
Multiple views of the same data...........................................................................................................70
User setups............................................................................................................................................71
Uniform amplitude scales......................................................................................................................71
Tutorial 7: Event markers..........................................................................................................................72
Lists of events........................................................................................................................................73
Adding events........................................................................................................................................74
Extended events....................................................................................................................................75
Bad segments........................................................................................................................................75
Hide event groups.................................................................................................................................76
Channel events......................................................................................................................................77
Notes.....................................................................................................................................................78
Display modes.......................................................................................................................................78
Custom shortcuts...................................................................................................................................79
Saving modifications..............................................................................................................................80
Other menus..........................................................................................................................................81
On the hard drive..................................................................................................................................82
Tutorial 8: Stimulation delays....................................................................................................................86
Note for beginners.................................................................................................................................87
Documented delays...............................................................................................................................87
Evaluation of the delay..........................................................................................................................88
Detection of the analog triggers............................................................................................................89
Repeat on acquisition run #02...............................................................................................................93
Delays after this correction....................................................................................................................93
Detection of the button responses........................................................................................................93
Another example: visual experiments...................................................................................................96
Tutorial 9: Select files and run processes..................................................................................................97
Selecting files to process.......................................................................................................................97
Filter by name........................................................................................................................................99

Selecting processes..............................................................................................................................100
Plugin structure...................................................................................................................................101
Note for beginners...............................................................................................................................102
Search Database..................................................................................................................................102
Advanced search queries.................................................................................................................105
Saving a pipeline..................................................................................................................................107
Automatic script generation................................................................................................................108
Process: Select files with tag................................................................................................................109
Report viewer......................................................................................................................................110
Error management..............................................................................................................................111
Control the output file names.............................................................................................................112
Tutorial 10: Power spectrum and frequency filters.................................................................................113
Evaluation of the noise level................................................................................................................114
Interpretation of the PSD....................................................................................................................116
Elekta-Neuromag and EEG users.........................................................................................................120
Apply a notch filter..............................................................................................................................120
Evaluation of the filter.........................................................................................................................122
Some cleaning.....................................................................................................................................124
Note for beginners...............................................................................................................................125
What filters to apply?..........................................................................................................................126
When to apply these filters?................................................................................................................127
Filter specifications: Low-pass, high-pass, band-pass..........................................................................128
Filter specifications: Notch..................................................................................................................137
Filter specifications: Band-stop............................................................................................................138
On the hard drive................................................................................................................................138
Tutorial 11: Bad channels........................................................................................................................140
Identifying bad channels......................................................................................................................140
Selecting sensors.................................................................................................................................141
Marking bad channels.........................................................................................................................143
From the database explorer................................................................................................................144
Epoching and averaging.......................................................................................................................145
On the hard drive................................................................................................................................145
Tutorial 12: Artifact detection.................................................................................................................146

Observation.........................................................................................................................................147
Detection: Heartbeats.........................................................................................................................148
Detection: Blinks..................................................................................................................................149
Remove simultaneous blinks/heartbeats............................................................................................150
Run #02: Running from a script...........................................................................................................151
Artifacts classification..........................................................................................................................151
Detection: Custom events...................................................................................................................152
In case of failure..................................................................................................................................153
Other detection processes...................................................................................................................154
Tutorial 13: Artifact cleaning with SSP.....................................................................................................155
Overview.............................................................................................................................................156
The order matters................................................................................................................................157
SSP: Heartbeats...................................................................................................................................158
Evaluate the components....................................................................................................................159
Evaluate the correction.......................................................................................................................161
SSP: Eye blinks.....................................................................................................................................162
Run #02...............................................................................................................................................165
Note for beginners...............................................................................................................................166
SSP: Generic.........................................................................................................................................167
Averaged artifact.................................................................................................................................168
Troubleshooting..................................................................................................................................170
SSP Theory...........................................................................................................................................170
SSP Algorithm......................................................................................................................................171
Extract the time series.........................................................................................................................172
On the hard drive................................................................................................................................175
Tutorial 15: Import epochs......................................................................................................................176
Import in database..............................................................................................................................177
Review the individual trials..................................................................................................................179
Raster plot...........................................................................................................................................180
Run #02...............................................................................................................................................180
Epoch length........................................................................................................................................181
On the hard drive................................................................................................................................182
Tutorial 17: Visual exploration.................................................................................................................185

2D/3D topography...............................................................................................................................186
Magnetic interpolation........................................................................................................................187
2D Layout.............................................................................................................................................188
Display as image..................................................................................................................................189
Time selection.....................................................................................................................................189
Snapshots............................................................................................................................................190
Movie studio........................................................................................................................................191
Contact sheets.....................................................................................................................................192
Edit the figures....................................................................................................................................193
Mouse shortcuts..................................................................................................................................194
Keyboard shortcuts..............................................................................................................................195
Tutorial 18: Colormaps............................................................................................................................195
Colormap menus.................................................................................................................................196
Standard color arrays..........................................................................................................................196
Custom color arrays.............................................................................................................................198
Color mapping.....................................................................................................................................199
Colormap management.......................................................................................................................200
New default colormaps.......................................................................................................................201
JET Alternative.................................................................................................................................202
Tutorial 20: Head modeling.....................................................................................................................202
Why estimate sources?.......................................................................................................................203
The origins of MEG/EEG signals...........................................................................................................204
Source models.....................................................................................................................................205
Forward model....................................................................................................................................207
Computation........................................................................................................................................208
Database explorer...............................................................................................................................210
On the hard drive................................................................................................................................210
Tutorial 22: Source estimation................................................................................................................212
Ill-posed problem................................................................................................................................213
Source estimation options...................................................................................................................214
Method............................................................................................................................................215
Measure [TODO]..............................................................................................................................217
Source model: Dipole orientations [TODO].....................................................................................218

Sensors............................................................................................................................................218
Computing sources for an average......................................................................................................219
Display: Cortex surface........................................................................................................................220
Why does it look so noisy?..................................................................................................................221
Display: MRI Viewer.............................................................................................................................222
Display: MRI 3D...................................................................................................................................223
Sign of constrained maps.....................................................................................................................224
Unconstrained orientations.................................................................................................................225
Source map normalization...................................................................................................................227
Delete your experiments.....................................................................................................................230
Computing sources for single trials......................................................................................................231
Averaging in source space...................................................................................................................232
Note for beginners...............................................................................................................................239
Averaging normalized values...............................................................................................................239
Display: Contact sheets and movies....................................................................................................240
Model evaluation.................................................................................................................................241
Advanced options: Minimum norm.....................................................................................................242
Depth weighting..............................................................................................................................243
Noise covariance regularization [TODO]..........................................................................................243
Regularization parameter [TODO]...................................................................................................244
Output mode...................................................................................................................................245
Advanced options: LCMV beamformer................................................................................................245
Advanced options: Dipole modeling....................................................................................................246
Combining MEG+EEG for source estimation........................................................................................247
On the hard drive................................................................................................................................248
Tutorial 23: Scouts...................................................................................................................................251
Hypothesis...........................................................................................................................................251
Creating a scout...................................................................................................................................252
3D display options...............................................................................................................................254
Scout function.....................................................................................................................................255
Option: Absolute / relative..................................................................................................................256
Multiple conditions..............................................................................................................................258
Other regions of interest.....................................................................................................................259

Multiple scouts....................................................................................................................................261
From the database explorer................................................................................................................262
Sign flip................................................................................................................................................263
Scout toolbar and menus.....................................................................................................................264
Menu: Atlas.....................................................................................................................................264
Menu: Scout....................................................................................................................................266
Menu: Sources.................................................................................................................................267
Scout region.........................................................................................................................................267
On the hard drive................................................................................................................................268
Tutorial 24: Time-frequency....................................................................................................................270
Introduction.........................................................................................................................................270
Morlet wavelets...................................................................................................................................271
Edge effects.........................................................................................................................................272
Simulation............................................................................................................................................273
Process options....................................................................................................................................276
Display: Time-frequency map..............................................................................................................277
Display: Mouse and keyboard shortcuts..............................................................................................279
Display: Power spectrum and time series............................................................................................280
Normalized time-frequency maps.......................................................................................................281
Tuning the wavelet parameters...........................................................................................................286
Hilbert transform.................................................................................................................................287
MEG recordings: Single trials...............................................................................................................291
Display: All channels............................................................................................................................296
Display: Topography............................................................................................................................298
Scouts..................................................................................................................................................299
Full cortical maps.................................................................................................................................302
Unconstrained sources........................................................................................................................305
Getting rid of the edge effects.............................................................................................................306
On the hard drive................................................................................................................................306
Tutorial 26: Statistics...............................................................................................................................308
Random variables................................................................................................................................309
Histograms...........................................................................................................................................310
Statistical inference.............................................................................................................................315

Parametric Student's t-test..................................................................................................................317
Example 1: Parametric t-test on recordings........................................................................................318
Correction for multiple comparisons...................................................................................................321
Nonparametric permutation tests.......................................................................................................323
Example 2: Permutation t-test.............................................................................................................324
FieldTrip implementation....................................................................................................................326
Example 3: Cluster-based correction...................................................................................................328
Example 4: Parametric test on sources................................................................................................330
Directionality: Difference of absolute values.......................................................................................332
Example 5: Parametric test on scouts..................................................................................................334
Convert statistic results to regular files...............................................................................................336
Example 6: Nonparametric test on time-frequency maps...................................................................338
Export to SPM......................................................................................................................................341
On the hard drive................................................................................................................................341
Citations...............................................................................................................................................343
References...........................................................................................................................................344

Introduction
This software was developed primarily with support from the National Institutes of Health under grants
R01-EB026299, 2R01-EB009048, R01-EB009048, R01-EB002010 and R01-EB000473.

Primary support was provided by the Centre National de la Recherche Scientifique (CNRS, France) for
the Cognitive Neuroscience & Brain Imaging Laboratory (La Salpetriere Hospital and Pierre & Marie Curie
University, Paris, France), and by the Montreal Neurological Institute to the MEG Program at McGill
University.

Additional support also came from two grants from the French National Research Agency (ANR) to the
Cognitive Neuroscience Unit (PI: Ghislaine Dehaene; Inserm/CEA, Neurospin, France) and to the
ViMAGINE project (PI: Sylvain Baillet; ANR-08-BLAN-0250), and by the Epilepsy Center in the Cleveland
Clinic Neurological Institute.

The latest version of this product was prepared by Mrs. Farazmand, and all the changes she made are
reported in this document.

How to read these tutorials


The goal of these introduction tutorials is to guide you through most of the features of the
software. All the pages use the same example dataset. The results of one section are usually
reused in the following section, so read these pages in the correct order.

Advanced

Some pages may contain too many details for your level of interest or competence. The sections
marked as [Advanced] are not required for you to follow the tutorials until the end. You can skip
them the first time you go through the documentation. You can come back to the theory
later if you need to.

Please first follow these tutorials with the data we provide; this way you will be able to focus
on learning how to use the software. It is better to start with data that is easy to analyze.
After going through all the tutorials, you should be comfortable enough with the software to start
analyzing your own recordings.

You will observe minor differences between the screen captures presented in these pages and
what you obtain on your computer: different colormaps, different values, etc. Because the software
is constantly being improved, some results have changed since we produced the illustrations. When the
changes are minor and the interpretation of the figures remains the same, we do not necessarily
update the images in the tutorial.

If you are interested only in EEG or intracranial recordings, do not assume that a MEG-based
tutorial is not suited to you. Most of the practical aspects of data manipulation are very
similar in EEG and MEG. Start by reading these introduction tutorials with the MEG example
dataset provided; once you are familiar with the software, you will be ready to apply the same
tools to your own EEG or intracranial recordings.

Presentation of the experiment


All the introduction tutorials are based on a simple auditory oddball experiment:

 One subject, two acquisition runs of 6 minutes each.
 Subject stimulated binaurally with intra-aural earphones.
 Each run contains 200 regular beeps and 40 easy deviant beeps.
 Recordings with a CTF MEG system with 275 axial gradiometers.
 Anatomy of the subject: 1.5T MRI, processed with FreeSurfer 5.3.
 More details about this dataset will be given along the way.

Neurostorm folders
Neurostorm needs different directories to work properly. If you put everything in the same
folder, you will run into many problems. Try to understand this organization before creating a
new database.

1. Program directory: "Neurostorm3"

 Contains all the program files: Matlab scripts, compiled binaries, templates, etc.
 There is no user data in this folder.
 You can delete it and replace it with a newer version at any time; your data will be safe.
 Recommended location:
o Windows: Documents\Neurostorm3
o Linux: /home/username/Neurostorm3
o MacOS: Documents/Neurostorm3

2. Database directory: "Neurostorm_db"

 Created by user.
 Contains all the Neurostorm database files.
 Managed by the application: do not move, delete or add files by yourself.
 Recommended location:
o Windows: Documents\Neurostorm_db
o Linux: /home/username/Neurostorm_db
o MacOS: Documents/Neurostorm_db

3. User directory: ".Neurostorm"

 Created at Neurostorm startup. Typical location:

o Windows: C:\Users\username\.Neurostorm
o Linux: /home/username/.Neurostorm
o MacOS: /Users/username/.Neurostorm
 Contains:
o Neurostorm.mat: Neurostorm user preferences.
o defaults/: Anatomy templates downloaded by Neurostorm.
o mex/: Some mex files that have to be recompiled.
o plugins/: Plugins downloaded by Neurostorm
o process/: Personal process folder
o reports/: Execution reports
o tmp/: Temporary folder, emptied every time Neurostorm is started.
You may have to change the location of the temporary folder if you have a limited
amount of storage or a limited quota in your home folder (see below).

4. Original data files:

 Recordings you acquired and want to process with Neurostorm.
 Put them wherever you want, but not in any of the previous folders.
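
As an illustration, a setup following the recommendations above could look like this on a Windows
computer (the "D:\MEG_data" folder for the original recordings is just a hypothetical example; any
location outside the Neurostorm folders works):

   C:\Users\username\Documents\Neurostorm3\     <- program files only, safe to delete and replace
   C:\Users\username\Documents\Neurostorm_db\   <- database, managed exclusively by Neurostorm
   C:\Users\username\.Neurostorm\               <- user preferences, plugins, temporary files
   D:\MEG_data\                                 <- original recordings (hypothetical location)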

Starting Neurostorm for the first time


1. If you haven't read the installation instructions, do it now.
2. Start Neurostorm from Matlab or with the compiled executable. The following messages are
printed in the command window:

   BST> Starting Neurostorm:
   BST> =================================
   BST> Version: 28-Jan-2015
   BST> Checking internet connectivity... ok
   BST> Compiling main interface files...
   BST> Emptying temporary directory...
   BST> Deleting old process reports...
   BST> Loading configuration file...
   BST> Initializing user interface...
   BST> Starting OpenGL engine... hardware
   BST> Reading plugins folder...
   BST> Loading current protocol...
   BST> =================================

3. Read and accept the license file.
4. Select your Neurostorm database directory (Neurostorm_db).
5. If you do something wrong and don't know how to go back, you can always re-initialize
Neurostorm by typing "Neurostorm reset" in the Matlab command window, or by
clicking on [Reset] in the software preferences (menu File > Edit preferences).
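
For reference, here is a minimal sketch of the equivalent commands in the Matlab command window.
It is only a sketch: it assumes the launcher function is named after the toolbox, as suggested by
the "Neurostorm reset" command mentioned above, and that the program is installed in the
recommended location; adjust both to your setup.

   % Assumed setup: program installed in Documents\Neurostorm3, launcher function named "Neurostorm"
   addpath('C:\Users\username\Documents\Neurostorm3');   % make the program visible to Matlab
   Neurostorm                % start the graphic interface
   Neurostorm reset          % re-initialize the user preferences (same as the [Reset] button)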

Main interface window


The Neurostorm window described below is designed to remain on one side of the screen. All
the space of the desktop that is not covered by this window will be used for opening other
figures.

Do not try to maximize this window, or the automatic management of the data figures might not
work correctly. Keep it on one side of your screen, just large enough so you can read the file
names in the database explorer.

The text is too small


If you have a high-resolution screen, the text and icons in the Neurostorm window may not scale
properly, making the interface difficult or impossible to use. Select the menu File > Edit preferences:
the slider at the bottom of the options window lets you increase the scaling ratio of the Neurostorm
interface.


Database structure
Neurostorm allows you to organize your recordings and analysis with three levels of definition:

 Protocol
o Group of datasets that have to be processed or displayed together.
o A protocol can include one or several subjects.
o Some people prefer to call this an experiment or a study.
o You can only open one protocol at a time.
o Your Neurostorm database is a collection of protocols.
 Subject
o A person who participated in a given protocol.
o A subject contains two categories of information: anatomy and functional data.
o Anatomy: Includes at least an MRI volume and some surfaces extracted from the
MRI.
o Functional data: Everything that is related to the MEG/EEG acquisition.
o For each subject, it is possible to use either the actual MRI of the person or one of
the anatomy templates available in Neurostorm.
 Sub-folders
o For each subject, the functional files can be organized in different sub-folders.
o These folders can represent different recording sessions (aka acquisition runs) or
different experimental conditions.
o The current structure of the database does not allow more than one level of sub-
folders for each subject. It is not possible to organize the files by session AND by
condition.

Database files
 The database folder "Neurostorm_db" is managed completely from the graphic user
interface (GUI).
 All the files in the database have to be imported through the GUI. Do not copy files
into the Neurostorm_db folder yourself: it won't work.
 Everything in this folder is stored in Matlab .mat format, with the following architecture:
o Anatomy data: Neurostorm_db/protocol_name/anat/subject_name
o Functional data: Neurostorm_db/protocol_name/data/subject_name/subfolder/
 Most of the files you see in the database explorer in Neurostorm correspond to files on
the hard drive, but there is no one-to-one correspondence. Extra information is stored
in each directory to save properties, comments, default data, links between different
items, etc. This is one of the reasons why you should not manipulate the files in the
Neurostorm database directory directly.
 The structure of the database is saved in the user preferences, so when you start the
program or change protocols, there is no need to read all the files on the hard drive again.
 If Neurostorm or Matlab crashes before the database structure is correctly saved, the files
that are displayed in the Neurostorm database explorer may differ from what is actually
on the disk. When this happens, you can force Neurostorm to rebuild the structure from
the files on the hard drive: right-click on a folder > Reload.
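
For example, once the protocol described in the next section has been created and one subject
imported, the contents of the database folder could look like this (the subject and sub-folder
names below are hypothetical):

   Neurostorm_db/
       TutorialIntroduction/
           anat/
               Subject01/          <- MRI and surface files (.mat)
           data/
               Subject01/
                   Run01/          <- channel file, recordings and later analysis results
                   Run02/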

Create your first protocol

1. Menu File > New protocol.

2. Edit the protocol name and enter: "TutorialIntroduction".


It will automatically update the anatomy and datasets paths. Do not edit these
paths manually, unless you work with a non-standard database organization and know exactly what
you are doing.
3. Default properties for the subjects: These are the default settings that are used when
creating new subjects. It is then possible to override these settings for each subject
individually.
o Default anatomy: (MRI and surfaces)
 No, use individual anatomy:
Select when you have individual MRI scans for all the participants of your
study.
 Yes, use default anatomy:
Select when you do not have individual scans for the participants, and you
would like to use one of the anatomy templates available in Neurostorm.
o Default channel file: (Sensors names and positions)
 No, use one channel file per acquisition run: Default for all studies
Different head positions: Select this if you may have different head
positions for one subject. This is usually not the case in EEG, where the
electrodes stay in place for the whole experiment. In MEG, this is a common
setting: one recording session is split into multiple acquisition runs, and the
position of the subject's head in the MEG might differ between runs.
Different number of channels: Another use case is when you have multiple
recordings for the same subject that do not have the same number of
channels. You cannot share the channel file if the list of channels is not
strictly the same for all the files.

Different cleaning procedures: If you are cleaning artifacts from each
acquisition run separately using SSP or ICA projectors, you cannot share
the channel file between them (the projectors are saved in the channel
file).
 Yes, use one channel file per subject: Use with caution
This can be a setting adapted to EEG: the electrodes are in the same
position for all the files recorded on one subject, and the number of
channels is the same for all the files. However, to use this option, you
should not be using SSP/ICA projectors on the recordings, or they should
be computed for all the files at once. This may lead to some confusion and
sometimes to manipulation errors. For this reason, we decided not to
recommend this setting.
 Yes, use only one global channel file: Not recommended
This is never a recommended setting. It could be used in the case of an
EEG study where you use only standard EEG positions on a standard
anatomy, but only if you are not doing any advanced source
reconstruction. If you share the position of the electrodes between all the
subjects, it will also share the source models, which depend on the
quality of the recordings for each subject. This is complicated to
understand at this level; it will make more sense later in the tutorials.
4. In the context of this study, we are going to use the following settings:

o No, use individual anatomy: Because we have access to a T1 MRI scan of the
subject.
o No, use one channel file per condition/run: The typical MEG setup.

5. Click on [Create].

Protocol exploration
The protocol is created and you can now see it in the database explorer. It is represented by the
top-most node in the tree.

 You can switch between anatomy and functional data with the three buttons just above
the database explorer. Read the tooltips of the buttons to see which one does what.
 In the Anatomy view, there is a Default anatomy node. It contains the ICBM152 anatomy
from the Montreal Neurological Institute (MNI), which is one of the anatomy templates
distributed with Neurostorm.
 The Default anatomy node contains the MRI and the surfaces that are used for the
subjects without an individual anatomy, or for registering multiple subjects to the same
anatomy for group analysis.
 There are no subjects in the database yet, so the Functional data views are empty.
 Everything you can do with an object in the database explorer (anatomy files, subjects,
protocol) is accessible by right-clicking on it.


Set up a backup
Like any work stored on a computer, your Neurostorm database is always at risk. Software bugs,
computer or network crashes and manipulation errors can cause the loss of months of data
curation and computation. If the database structure gets corrupted or if you accidentally delete
or modify some files, you might not be able to get your data back. There is no undo button!

You have just created your database; now take some time to find a way to keep it safe. If you are not
familiar with backup systems, watch some online tutorials explaining how to set up an
automatic daily or weekly backup of your sensitive data. It might seem annoying and unnecessary
now, but it could save you weeks in the future.
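
If you want a quick manual snapshot while you set up a proper automatic backup, you can for
instance archive the whole database folder from the Matlab command window. This is only a sketch:
the destination folder "D:\Backups" is hypothetical, and the paths assume the recommended Windows
locations described earlier.

   % Manual backup sketch: compress the database folder to another drive (adjust the paths)
   dbDir     = fullfile(getenv('USERPROFILE'), 'Documents', 'Neurostorm_db');
   backupZip = fullfile('D:\Backups', ['Neurostorm_db_' datestr(now, 'yyyy-mm-dd') '.zip']);
   zip(backupZip, dbDir);    % creates a dated zip archive of the whole database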

Changing the temporary folder


If the amount of storage space you have for your user folder is limited (less than 10 GB), you may
have to change the temporary folder used by Neurostorm. If you work on a centralized network
where all the computers are sharing the same resources, the system admins may impose limited
disk quotas to all users and encourage them to use local hard drives instead of the limited and
shared user folder.

The default temporary folder of Neurostorm is located in your user folder


($HOME/.Neurostorm/tmp/), so when you import recordings or calculate large models, it may
quickly fill up your limited quota and at some point block your user account. To prevent this,
select the menu File > Edit preferences and set the temporary directory to a folder that is local
to your computer, in which you won't suffer from any storage limitation.


Summary
 Different folders for:
o the program (Neurostorm3).
o the database (Neurostorm_db).
o your original recordings.
 Never modify the contents of the database folder by yourself.
 Do not put the original recordings in any of the Neurostorm folders, import them with the
interface.
 Do not try to maximize the Neurostorm window: keep it small on one side of your screen.

Roadmap
The workflow described in these introduction tutorials includes the following steps:

 Importing the anatomy of the subjects, the definition of the sensors and the recordings.
 Pre-processing, cleaning, epoching and averaging the EEG/MEG recordings.
 Estimating sources from the imported recordings.
 Computing measures from the brain signals of interest in sensor space or source space.

Advanced

Moving a database
If you are running out of disk space or need to share your database with someone else, you may
need to copy or move your protocols to a different folder or drive. Each protocol is handled
independently by Neurostorm, therefore in order to move the entire database folder
(Neurostorm_db), you need to repeat the operations below for each protocol in your database.

Copy the raw files

The original continuous files are not saved in the Neurostorm database. The "Link to raw file"
entries depend on static paths on your local computer and cannot be moved easily to a new computer.
You can copy them inside the database before moving the database to a different computer or hard
drive using the menu: File > Export protocol > Copy raw files to database. This will make local
copies in .bst format of all your original files. The resulting protocol will be larger but portable.
This can also be done file by file: right-click > File > Copy to database.

Export a protocol

The easiest option to share a protocol with a collaborator is to export it as a zip file.

 Export: Use the menu File > Export protocol > Export as zip file.
Avoid using spaces and special characters in the zip file name.
 Import: Use the menu File > Load protocol > Load from zip file.
The name of the protocol created in the Neurostorm_db folder is the name of the zip file.

If there is already a protocol with this label, Neurostorm will return an error. To import
the protocol under a different name, simply rename the zip file before importing it.
 Size limitation: This solution is limited to smaller databases: creating zip files larger than
a few Gb can take a lot of time or even crash. For larger databases, prefer the other
options below.

Export a subject

Similar to the protocol export, but extracts only the files needed for a single subject.

 Export: Right-click on the subject > File > Export subject.


 Import as new protocol: Use the menu File > Load protocol > Load from zip file.
 Import in an existing protocol: Use the menu File > Load protocol > Import subject
from zip.

Move a protocol

To move a protocol to a different location:

1. [Optional] Set up a backup of your entire Neurostorm_db folder if you haven't done it
yet. There will be no undo button to press if something bad happens.
2. [Optional] Copy the raw files to the database (see above)
3. Unload: Menu File > Delete protocol > Only detach from database.
4. Move: Move the entire protocol folder to a different location. Remember a protocol
folder should be located in the "Neurostorm_db" folder and should contain only two
subfolders "data" and "anat". Never move or copy a single subject manually.
5. Load: Menu File > Load protocol > Load from folder > Select the new location of the
protocol
6. If you want to move the entire "Neurostorm_db" folder at once, make sure you detach all
the protocols in your Neurostorm installation first.

Duplicate a protocol

To duplicate a protocol in the same computer:

 Copy: Make a full copy of the protocol to duplicate in the Neurostorm_db folder, e.g.
TutorialIntroduction => TutorialIntroduction_copy. Avoid using any space or special
character in the new folder name.
 Load: Menu File > Load protocol > Load from folder > Select the new protocol folder

Tutorial 2: Import the subject anatomy


Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

Neurostorm organizes most of its database and processing stream around handling anatomical
information together with the MEG/EEG recordings, because its primary focus is to estimate
brain sources from MEG/EEG, which ideally requires an accurate spatial model of the head
and sensors.

If you don't have anatomical scans of your subjects or are not interested in any spatial display, various
solutions will be presented throughout the tutorials, starting from the last section of this page. Be patient,
follow everything as instructed, and you will get to the information you need.

Contents

1. Download
2. Create a new subject
3. Right-click doesn't work
4. Import the anatomy
5. Using the MRI Viewer
6. Fiducial points
7. Validation
8. Graphic bugs
9. MNI normalization
10. Alternatives

Download
The dataset we will use for the introduction tutorials is available online.

 Go to the Download page of this website, and download the file: sample_introduction.zip

 Unzip it in a folder that is not in any of the Neurostorm folders (program folder or database
folder).
 It is really important that you always keep your original data files in a separate folder: the
program folder can be deleted when updating the software, and the contents of the database
folder are supposed to be manipulated only by the program itself.

Create a new subject


The protocol is currently empty. You need to add a new subject before you can start importing
data.

1. Switch to the anatomy view (first button just above the database explorer).
2. Right-click on the top folder TutorialIntroduction > New subject.
Alternatively: Use the menu File > New subject.

3. The window that opens lets you edit the subject name and settings. It offers again the same
options for the default anatomy and channel file: you can redefine for one subject the default
values set at the protocol level if you need to. See previous tutorial for help.

4. Keep all the default settings and click on [Save].

Right-click doesn't work


If the right-click doesn't work anywhere in the Neurostorm interface and you cannot get to see
the popup menus in the database explorer, try to connect a standard external mouse with two
buttons. Some Apple pointing devices do not interact very well with Java/Matlab.

Alternatively, try to change the configuration of your trackpad in the system preferences.

Import the anatomy


For estimating the brain sources of the MEG/EEG signals, the anatomy of the subject must
include at least three files: a T1-weighted MRI volume, the envelope of the cortex and the
surface of the head.

Neurostorm cannot extract the cortex envelope from the MRI; you have to run this operation with an
external program of your choice. The results of the MRI segmentation obtained with the following
programs can be automatically imported: FreeSurfer, BrainSuite, BrainVISA, CAT12, and CIVET. CAT12 is
the only application fully interfaced with Neurostorm, and available for download as a Neurostorm
plugin. However, FreeSurfer is widely considered a reference in this domain, therefore this is the
solution we decided to demonstrate in these tutorials.

The anatomical information for this study was acquired with a 1.5T MRI scanner; the subject had a
marker placed on the left cheek. The MRI volume was processed with FreeSurfer 7.1, and the result of this
automatic segmentation process is available in the downloaded folder sample_introduction/anatomy.

1. Make sure that you are still in the anatomy view for your protocol.
2. Right-click on the subject folder > Import anatomy folder:

o Set the file format: FreeSurfer + Volume atlases

o Select the folder: sample_introduction/anatomy

o Click on [Open]
3. Number of vertices of the cortex surface: 15000 (default value).
This option defines the number of points that will be used to represent the cortex envelope. It
will also be the number of electric dipoles we will use to model the activity of the brain. This
default value of 15000 was chosen empirically as a good balance between the spatial accuracy
of the models and the computation speed. More details later in the tutorials.

4. The MRI views should be correct (axial/coronal/sagittal); you just need to make sure that the
marker on the cheek is really on the left side of the MRI. Then you can proceed with the fiducial
selection.

Using the MRI Viewer
To help define these fiducial points, let's start with a brief description of the MRI Viewer:

 Navigate in the volume:


o Click anywhere on the MRI slices to move the cursor.
o Use the sliders below the views.
o Use the mouse wheel to scroll through slices (after clicking on the view to select it).
o On a MacBook pad, use the two finger-move up/down to scroll.

 Zoom: Use the magnifying glass buttons at the bottom of the figure, or the corresponding
shortcuts (keyboard [+]/[-], or [CTRL]+mouse wheel).

 Image contrast: Click and hold the right mouse button on one image, then move up and down.
 Select a point: Place the cursor at the spot you want and click on the corresponding [Set]
button.

 More information about all the coordinates displayed in this figure: CoordinateSystems

Fiducial points
Neurostorm uses a few reference points defined in the MRI to align the different files:

 Required: Three points to define the Subject Coordinate System (SCS):


o Nasion (NAS), Left ear (LPA), Right ear (RPA)
o This is used to register the MEG/EEG sensors on the MRI.
 Optional: Three additional anatomical landmarks (NCS):
o Anterior commissure (AC), Posterior commissure (PC) and any interhemispheric point
(IH).
o Computing the MNI normalization sets these points automatically (see below), therefore
setting them manually is not required.
 For instructions on finding these points, read the following page: CoordinateSystems.

Nasion (NAS)

 In this study, we used the real nasion position instead of the CTF coil position.

MRI coordinates: 127, 213, 139

Left ear (LPA)

 In this study, we used the connection points between the tragus and the helix (red dot on the
CoordinateSystems page) instead of the CTF coil position or the left and right preauricular
points.

MRI coordinates: 52, 113, 96

Right ear (RPA)


MRI coordinates: 202, 113, 91

Anterior commissure (AC)


MRI coordinates: 127, 119, 149

Posterior commissure (PC)


MRI coordinates: 128, 93, 141

Inter-hemispheric point (IH)

 This point can be anywhere in the mid-sagittal plane, these coordinates are just an example.

MRI coordinates: 131, 114, 206

Type the coordinates

 If you have the coordinates of the fiducials already written somewhere, you can type or copy-
paste them instead of pointing at them with the cursor. Right-click on the figure > Edit
fiducials positions > MRI coordinates.

Validation
 Once you are done with the fiducial selection, click on [Save].
 The automatic import of the FreeSurfer folder resumes. At the end you get many new files in the
database and a 3D view of the cortex and scalp surface. Here again you can note that the
marker is visible on the left cheek, as expected.

 The next tutorial will describe these files and explore the various visualization options.
 Close all figures and clear memory: Use this button in the toolbar of the Neurostorm
window to close all the open figures at once and to clear all the temporary data that the
program keeps in memory for faster display.

Graphic bugs
If you do not see the cortex surface through the head surface, or if you observe any other issue
with the 3D display, there might be an issue with the OpenGL drivers. You may try the
following options:

 Update the drivers for your graphics card.


 Upgrade your version of Matlab.
 Run the compiled version of Neurostorm (see Installation).

 Turn off the OpenGL hardware acceleration: Menu File > Edit preferences > Software or
Disabled.

 Send a bug report to MathWorks.

For Linux users with both an integrated GPU and an NVIDIA GPU: if you experience the problems above,
or slow navigation in the 3D display (usually with 2 or more surfaces), verify that you are using the NVIDIA
GPU as the primary GPU. More information depending on your distribution: Ubuntu, Debian and Arch
Linux.

MNI normalization
For comparing results with the literature or with other imaging modalities, the normalized MNI
coordinate system is often used. To be able to get "MNI coordinates" for individual brains, an extra step
of normalization is required.

To compute a transformation between the individual MRI and the ICBM152 space, you have two
available options, use the one of your choice:

 In the MRI Viewer: Click on the link "Click here to compute MNI normalization".

 In the database explorer: Right-click on the MRI > MNI normalization.

Select the first option maff8: This method is embedded in Neurostorm and does not require the
installation of SPM12. It is based on an affine co-registration with the MNI ICBM152 template from the
SPM software, described in the following article: Ashburner J, Friston KJ, Unified segmentation,
NeuroImage 2005.

Note that this normalization does not modify the anatomy; it just saves a transformation that
enables the conversion between Neurostorm coordinates and MNI coordinates. After computing
this transformation, you have access to one new line of information in the MRI Viewer.

This operation also sets some anatomical points (AC, PC, IH) automatically if they are not defined
yet. After the computation, make sure they are correctly positioned. You can run this computation
while importing the anatomy, when the MRI Viewer is displayed for the first time; this will save
you the trouble of marking the AC/PC/IH points manually.

MacOS troubleshooting

Error "mexmaci64 cannot be opened because the developer cannot be verified":

 Neurostorm forum

 SPM12 website

 FieldTrip website

Alternatives
If you do not have access to an individual MR scan of the subject, or if its quality is too low to be
processed with FreeSurfer, you have other options:

 If you do not have any individual anatomical data: Use the default anatomy

 If you have a digitized head shape of the subject: Warp the default anatomy

Other options for importing the FreeSurfer anatomical segmentation:

 Automated import: We selected the menu Import anatomy folder for a semi-manual
import, in order to manually select the position of the anatomical fiducials and the number of
points of the cortex surface. If you are not interested in setting the positions of the fiducials
accurately, you can use the menu Import anatomy folder (auto): it computes the linear MNI
normalization first, uses default fiducials defined in MNI space, and automatically uses 15000
vertices for the cortex.

 FreeSurfer options: We selected the file format FreeSurfer + Volume atlases for
importing the ASEG parcellation into the database. This slows down the import and increases the
size on the hard drive. If you know you won't use it, select FreeSurfer instead. A third menu is
available to also import the cortical thickness as source files in the database.

Tutorial 3: Display the anatomy


Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

Contents

1. Anatomy folder
2. Default surfaces
3. MRI Viewer
4. MRI contact sheets
5. MRI in 3D
6. Surfaces
7. Get coordinates
8. Subcortical regions: Volume
9. Subcortical regions: Surface
10. Registration MRI/surfaces
11. Interaction with the file system
12. On the hard drive: MRI
13. On the hard drive: Surface

Anatomy folder
The anatomy of the subject "Subject01" should now contain all the files Neurostorm could import from
the FreeSurfer segmentation results:

 MRI: T1-weighted MRI, resampled and re-aligned by FreeSurfer.


 ASEG / DKT / Desikan-Killiany / Destrieux: Volume parcellations (including subcortical
regions)

 Head mask: Head surface, generated by Neurostorm.


If this doesn't look good for your subject, you can recalculate another head surface using
different parameters: right-click on the subject folder > Generate head surface.

 Cortex_336231V: High-resolution pial envelope generated by FreeSurfer.
 Cortex_15002V: Low-resolution pial envelope, downsampled from the original one by
Neurostorm.

 Cortex_cereb_17005V: Low-res pial envelope + cerebellum surface extracted from ASEG


 White_*: White matter envelope, high and low resolution.
 Mid_*: Surface that represents the mid-point between the white and cortex envelopes.
 Subcortical: Same FreeSurfer subcortical regions as in the ASEG volume, but tessellated as
surfaces.

 For more information about the files generated by FreeSurfer, read the FreeSurfer page.

Default surfaces
 There are four possible surface types: cortex, inner skull, outer skull, head.
 For each type of surface, one file is selected as the one to use by default for all the operations.
 This selected surface is displayed in green.

 Here, there is only one "head" surface, which is selected.


 The mid, cortex and white surfaces can all be used as "cortex" surfaces, only one can be selected
at a time. By default, the low-resolution cortex should be selected and displayed in green.

 To select a different cortex surface, you can double-click on it or right-click > Set as default.

MRI Viewer
Right-click on the MRI to get the list of the available display menus:

Open the MRI Viewer. This interface was already introduced in the previous tutorial. It
corresponds to the default display menu if you double-click on the MRI from the database
explorer. Description of the window:

 MIP Anatomy: Maximum Intensity Projection. When this option is selected, the MRI viewer
shows the maximum intensity value across all the slices in each direction. This maximum does
not depend on the selected slice, therefore if you move the cursor, the image stays the same.

 Neurological/Radiological: There are two standard orientations for displaying medical scans.
In the neurological orientation, the left hemisphere is on the left of the image, in the radiological
orientation the left hemisphere is on the right of the image.

 Coordinates: Position of the cursor in different coordinate systems. See: CoordinateSystems


 Colormap: Click on the colorbar and move up/down (brightness) or left/right (contrast)
 Popup menu: All the figures have additional options available in a popup menu, accessible
with a right-click on the figure. The colormap options will be described later in the tutorials, you
can test the other options by yourself.

MRI contact sheets
You can get collections of slices in any direction (axial, coronal or sagittal) with the popup
menus in the database explorer or the MRI Viewer figure.

 Zoom: mouse wheel (or two finger-move on a MacBook pad)


 Move in zoomed image: click + move

 Adjust contrast: right click + move up/down

MRI in 3D
Right-click on the MRI file in the database explorer > Display > 3D orthogonal slices.

 Simple mouse operations:

o Rotate: Click + move. Note that two different types of rotations are available: at the
center of the figure the object will follow your mouse, on the sides it will do a 2D rotation
of the image.

o Zoom: Mouse wheel, or two finger-move on a MacBook pad.


o Move: Left+right click + move (or middle-click + move).
o Colormap: Click on the colorbar and move up/down (brightness) or left/right
(contrast).

o Reset view: Double click anywhere on the figure.
o Reset colormap: Double-click on the colorbar.
o Move slices: Right click on the slice to move + move.
(or use the Resect panel in the Surface tab)

 Popup operations (right-click on the figure):


o Colormap: Edit the colormap, detailed in another tutorial.

o MRI display: For now, contains mostly the MIP option (Maximum Intensity
Projection).

o Get coordinates: Pick a point in any 3D view and get its coordinates.
o Snapshots: Save images or movies from this figure.
o Figure: Change some of the figure options or edit it using the Matlab tools.
o Views: Set one of the predefined orientations.
o Note the indications in the right part of the popup menu, they represent the keyboard
shortcut for each menu.
 Keyboard shortcuts:
o Views shortcuts (0,1,2...9 and [=]): Remember them, they will be very useful when
exploring the cortical sources. To switch from left to right, it is much faster to press a
key than having to rotate the brain with the mouse.

o Zoom: Keys [+] and [-] for zooming in and out.


o Move slices: [x]=Sagittal, [y]=Coronal, [z]=Axial, hold [shift] for reverse direction.
 Surface tab (in the main Neurostorm window, right of the database explorer):
o This panel is primarily dedicated to the display of the surfaces, but some controls can
also be useful for the 3D MRI view.
o Transparency: Changes the transparency of the slices.

o Smooth: Changes the background threshold applied to the MRI slices. If you set it to zero,
you will see the full slices, as extracted from the volume.

o Resect: Changes the position of the slices in the three directions.

Surfaces
To display a surface you can either double-click on it or right-click > Display. The tab "Surface" contains
buttons and sliders to control the display of the surfaces.

 The mouse and keyboard operations described for the 3D MRI view also apply here.
 Smooth: Inflates the surface to make all the parts of the cortex envelope visible.
This is just a display option, it does not actually modify the surface.

 Color: Changes the color of the surface.
 Sulci: Shows the bottom of the cortical folds with a darker color. We recommend keeping this
option selected for the cortex; it helps with the interpretation of source locations on smoothed
brains.

 Edge: Display the faces of the surface tesselation.


 Resect: The sliders and the buttons Left/Right/Struct at the bottom of the panel allow you to
cut the surface or reorganize the anatomical structures in various ways.

 Multiple surfaces: If you open two surfaces from the same subject, they will be displayed on
the same figure. Then you need to select the surface you want to edit before changing its
properties. The list of available surfaces is displayed at the top of the Surface tab.

 At the bottom of the Surface tab, you can read the number of vertices and faces in the
tesselation.

Get coordinates
 Close all the figures. Open the cortex surface again.
 Right-click on the 3D figure, select "Get coordinates".

 Click anywhere on the cortex surface: a yellow cross appears and the coordinates of the point
are displayed in all the available coordinates systems.
 You can click on [View/MRI] to see where this point is located in the MRI, using the MRI Viewer.

Subcortical regions: Volume


The standard FreeSurfer segmentation pipeline generates multiple volume parcellations of anatomical
regions, all including the ASEG subcortical parcellation. Double-click on a volume parcellation to open it
for display. This opens the MRI Viewer with two volumes: the T1 MRI as the background, and the
parcellation as a semi-transparent overlay.

 Adjust the transparency of the overlay from the Surface tab, slider Transp.
 The name of the region under the cursor appears at the top-right corner. The integer before
this name is the label of the ROI, i.e. the integer value of the voxel under the cursor in the
parcellation volume.

 Close the MRI viewer.
 Double-click again on the subject's MRI to open it in the MRI viewer.
 Observe that the anatomical label is also present at the top-right corner of this figure; in this
case, the integer represents the voxel value of the displayed MRI. This label information comes
from the ASEG file: whenever there are volume parcellations available for the subject, one of
them is loaded in the MRI Viewer by default. The name of the selected parcellation is displayed
in the figure title bar.
 You can change the selected parcellation with the right-click popup menu Anatomical atlas.
You can change the parcellation scheme, disable its use to make the MRI work faster, or show
the parcellation volume as an overlay (menu Show atlas). More information in the tutorial Using
anatomy templates.

Subcortical regions: Surface
Neurostorm reads the ASEG volume labels and tesselates some of these regions, then groups all
the meshes in a large surface file where the regions are identified in an atlas called "Structures".
It identifies: 8 bilateral structures (accumbens, amygdala, caudate, hippocampus, pallidum,
putamen, thalamus, cerebellum) and 1 central structure (brainstem).

These structures can be useful for advanced source modeling, but will not be used in the introduction
tutorials. Please refer to the advanced tutorials for more information: Volume source estimation and
Deep cerebral structures.

With the button [Struct] at the bottom of the Surface tab, you can see the structures separately.


Registration MRI/surfaces
The MRI and the surfaces are represented using different coordinate systems and could be
misregistered for various reasons. If you are using the automated segmentation pipeline from
FreeSurfer or BrainSuite you should never have any problem, but if something goes wrong, or in the case
of more manual import procedures, it is always good to check that the MRI and the surfaces are correctly
aligned.

 Right-click on the low-res cortex > MRI Registration > Check MRI/surface registration

 The calculation of the interpolation between the MRI and the cortex surface takes a few
seconds, but the result is then saved in the database and will be reused later.
 The yellow lines represent the re-interpolation of the surface in the MRI volume.

Advanced

Interaction with the file system


For most manipulations, it is not necessary to know exactly what is going on at the level of the
file system, in the Neurostorm database directory. However, many things are not accessible from
the Neurostorm interface, and you may sometimes find it useful to manipulate some pieces of data
directly from the Matlab command window.

Where are the files ?

 Leave your mouse for a few seconds over any node in the database explorer, and a tooltip will
appear with the name and path of the corresponding file on the hard drive.
 Paths are relative to the current protocol path (Neurostorm_db/TutorialIntroduction). What is
displayed in the Neurostorm window is a comment and may have nothing to do with the real file
name. For instance, the file name corresponding to "head mask" is
Subject01/tess_head_mask.mat.

 Almost all the files in the database are in Matlab .mat format. You can load and edit them easily
in the Matlab environment, where they appear as structures with several fields.
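
For example, here is a minimal Matlab sketch of how you could inspect one of these files from the
Matlab command window. The folder and file names below are only an illustration, based on the
"head mask" example above; use the path shown in the tooltip of the file you are actually interested in.

    % Minimal sketch (illustrative paths, adjust to your own Neurostorm_db location)
    ProtocolDir = fullfile('Neurostorm_db', 'TutorialIntroduction', 'anat');
    SurfaceFile = fullfile(ProtocolDir, 'Subject01', 'tess_head_mask.mat');
    % Load the file as a regular Matlab structure and list its fields
    sSurf = load(SurfaceFile);
    disp(fieldnames(sSurf));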

Popup menu: File

Right-click on a surface file: many menus can lead you to the files and their contents.

 View file contents: Display all the fields in the Matlab .mat file.

 View file history: Review the History field in the file, which records all the operations that were
performed on the file since it was imported into Neurostorm.

 Export to file: Export in one of the supported mesh file formats.


 Export to Matlab: Load the contents of the .mat file in the Matlab base workspace. It is then
accessible from the Matlab command window.

 Import from Matlab: Replace the selected file with the content of a variable from the Matlab
base workspace. Useful to save back in the database a structure that was exported and modified
manually with the Matlab command window.

 Copy / Cut / Paste: Allow you to copy/move files in the database explorer. Keyboard shortcuts
for these menus are the standard Windows shortcuts (Ctrl+C, Ctrl+X, Ctrl+V). The database
explorer also supports drag-and-drop operations for moving files between different folders.

 Delete: Delete a file. Keyboard shortcuts: Delete key.


 Rename: Change the Comment field in the file. It "renames" the file in the database explorer,
but does not change the actual file name on the hard drive. Keyboard shortcut: F2

 Copy file path to clipboard: Copies the full file name into the system clipboard, so that you
can paste it in any other window (Ctrl+V or Paste menu)

 Go to this directory (Matlab): Change the current Matlab path, so that you can access the
file from the Matlab Command window or the Matlab Current directory window

 Show in file explorer: Open a file explorer window in this directory.


 Open terminal in this folder: Start a system console in the file directory (Linux and MacOS
only).

What are all these other files ?

 If you look in Neurostorm_db/TutorialIntroduction with the file explorer of your operating
system, you'll find many other directories and files that are not visible in the database explorer.

 The protocol TutorialIntroduction is divided into anatomy (anat) and datasets (data) directories:

o Each subject in anat is described by an extra file: Neurostormsubject.mat

o Each folder in data is described by an extra file: Neurostormstudy.mat

 anat/@default_subject: Contains the files of the default anatomy (Default anatomy)


 data/@default_study: Files shared between different subjects (Global common files)
 data/@inter: Results of inter-subject analysis

 data/Subject01/@default_study: Files shared between different folders in Subject01
 data/Subject01/@intra: Results of intra-subject analysis (across different folders)

Advanced

On the hard drive: MRI


Right-click on the MRI > File > View file contents:

Structure of the MRI files: subjectimage_*.mat

 Comment: String displayed in the database explorer to represent the file.


 Cube: [Nsagittal x Ncoronal x Naxial] full MRI volume.
 Voxsize: Size of one voxel in millimeters (sagittal, coronal, axial).
 SCS: Defines the Subject Coordinate System. Points below are in MRI (millimeters)
coordinates.

o NAS: (x,y,z) coordinates of the nasion fiducial.


o LPA: (x,y,z) coordinates of the left ear fiducial.
o RPA: (x,y,z) coordinates of the right ear fiducial.
o R: [3x3] rotation matrix from MRI coordinates to SCS coordinates.

o T: [3x1] translation matrix from MRI coordinates to SCS coordinates.
o Origin: MRI coordinates of the point with SCS coordinates (0,0,0).
 NCS: Defines the MNI coordinate system, either with a linear or a non-linear transformation.
o AC: (x,y,z) coordinates of the Anterior Commissure.
o PC: (x,y,z) coordinates of the Posterior Commissure.
o IH: (x,y,z) coordinates of an Inter-Hemisperic point.
o (Linear transformation)
 R: [3x3] rotation matrix from MRI coordinates to MNI coordinates.

 T: [3x1] translation matrix from MRI coordinates to MNI coordinates.


o (Non-linear transformation)
 iy: 3D floating point matrix: Inverse MNI deformation field, as in SPM naming
conventions. Same size as the Cube matrix, it gives for each voxel its coordinates
in the MNI space, and is therefore used to convert from MRI coordinates to MNI
coordinates.

 y: 3D floating point matrix: Forward MNI deformation field, as in SPM naming
conventions. For some MNI coordinates, it gives their correspondence in the
original MRI space. To be interpreted, it has to be used with the matrix
y_vox2ras.

 y_vox2ras: [4x4 double], transformation matrix that converts from voxel
coordinates of the y volume to MNI coordinates.

 y_method: Algorithm used for computing the normalization ('segment'=SPM12
Segment)

o Origin: MRI coordinates of the point with NCS coordinates (0,0,0).


 Header: Header from the original file format (.nii, .mgz, ...)
 Histogram: Result of the internal analysis of the MRI histogram, mainly to detect background
level.

 InitTransf: [Ntransform x 2] cell-matrix: Transformations that are applied to the MRI before
importing the surfaces. Example: {'vox2ras', [4x4 double]}

 Labels: [Nlabels x 3] cell-matrix: For anatomical parcellations, this field contains the names and
RGB colors associated with each integer label in the volume. Example:
{0, 'Background', [0 0 0]}
{1, 'Precentral L', [203 142 203]}

 History: List of operations performed on this file (menu File > View file history).
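
As an illustration of how the SCS and linear NCS fields described above fit together, here is a
minimal Matlab sketch. It assumes that the MRI point is expressed in millimeters and that a linear
MNI transformation has been computed; check the CoordinateSystems page for the exact conventions
before relying on this in a real analysis.

    % Minimal sketch, assuming: MRI point in millimeters, linear MNI transformation available
    sMri  = load('subjectimage_T1.mat');       % hypothetical MRI file name
    P_mri = [127; 213; 139];                    % a point in MRI coordinates (e.g. the nasion above)
    P_scs = sMri.SCS.R * P_mri + sMri.SCS.T;    % MRI coordinates -> Subject Coordinate System
    P_mni = sMri.NCS.R * P_mri + sMri.NCS.T;    % MRI coordinates -> MNI coordinates (linear case)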

Useful functions

 /toolbox/io/in_mri_bst(MriFile): Read a Neurostorm MRI file and compute the missing fields.

 /toolbox/io/in_mri(MriFile, FileFormat=[]): Read a MRI file (format is auto-detected).

 /toolbox/io/in_mri_*.m: Low-level functions for reading all the file formats.

 /toolbox/anatomy/mri_*.m: Routines for manipulating MRI volumes.

 /toolbox/gui/view_mri(MriFile, ...): Display an imported MRI in the MRI viewer.

 /toolbox/gui/view_mri_3d(MriFile, ...): Display an imported MRI in a 3D figure.
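
A typical call with the functions listed above could look like the sketch below; the relative file
path is hypothetical and should be replaced by the actual path of your MRI file in the database
(shown in the tooltip of the database explorer).

    % Hypothetical usage, with a database-relative file path
    MriFile = 'Subject01/subjectimage_T1.mat';
    sMri = in_mri_bst(MriFile);    % read the MRI structure and fill in the missing fields
    view_mri(MriFile);             % open the same file in the MRI Viewer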

Advanced

On the hard drive: Surface


Right-click on any cortex surface > File > View file contents:

Structure of the surface files: tess_*.mat

 Atlas: Array of structures, each entry is one menu in the drop-down list in the Scout tab.
o Name: Label of the atlas (reserved names: "User scouts", "Structures", "Source model")
o Scouts: List of regions of interest in this atlas, see the Scout tutorial.
 Comment: String displayed in the database explorer to represent the file.
 Curvature: [Nvertices x 1], curvature value at each point.
 Faces: [Nfaces x 3], triangles constituting the surface mesh.

 History: List of operations performed on this file (menu File > View file history).
 iAtlas: Index of the atlas that is currently selected for this surface.
 Reg: Structure with registration information, used to interpolate the subject's maps on a
template.

o Sphere.Vertices: Location of the surface vertices on the FreeSurfer registered spheres.

o Square.Vertices: Location of the surface vertices in the BrainSuite atlas.


o AtlasSquare.Vertices: Corresponding vertices in the high-resolution BrainSuite atlas.
 SulciMap: [Nvertices x 1], binary mask marking the bottom of the sulci (1=displayed as darker).
 tess2mri_interp: [Nvoxels x Nvertices] sparse interpolation matrix MRI<=>surface.
 VertConn: [Nvertices x Nvertices] Sparse adjacency matrix, VertConn(i,j)=1 if i and j are
neighbors.

 Vertices: [Nvertices x 3], coordinates (x,y,z) of all the points of the surface, in SCS coordinates.
 VertNormals: [Nvertices x 3], direction (x,y,z) of the normal to the surface at each vertex.
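
With the Vertices, Faces and VertConn fields described above, you can also work with the mesh
directly using standard Matlab commands. The sketch below uses a hypothetical surface file name;
the real name can be read from the tooltip in the database explorer.

    % Minimal sketch: display a Neurostorm surface with standard Matlab graphics
    sTess = load('tess_cortex_pial_low.mat');    % hypothetical surface file name
    figure('Color', 'w');
    trisurf(sTess.Faces, sTess.Vertices(:,1), sTess.Vertices(:,2), sTess.Vertices(:,3), ...
        'FaceColor', [.8 .7 .7], 'EdgeColor', 'none');
    axis equal; camlight; lighting gouraud;
    % Number of neighbors of each vertex, from the sparse adjacency matrix
    nNeighbors = full(sum(sTess.VertConn, 2));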

Useful functions

 /toolbox/io/in_tess_bst(SurfaceFile): Read a Neurostorm surface file and compute the missing
fields.

 /toolbox/io/in_tess(TessFile, FileFormat=[], sMri=[]): Read a surface file (format is auto-detected).

 /toolbox/io/in_tess_*.m: Low-level functions for reading all the file formats.

 /toolbox/anatomy/tess_*.m: Routines for manipulating surfaces.

 /toolbox/gui/view_surface(SurfaceFile, ...): Display an imported surface in a 3D figure.

 /toolbox/gui/view_surface_data(SurfaceFile, OverlayFile, ...): Display a surface with a source
map.

 /toolbox/gui/view_surface_matrix(Vertices, Faces, ...): Display a mesh in a 3D figure.
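
For example (the relative file path is hypothetical and should be replaced by the actual path of
your cortex surface in the database):

    % Hypothetical usage, with a database-relative file path
    SurfaceFile = 'Subject01/tess_cortex_pial_low.mat';
    sTess = in_tess_bst(SurfaceFile);    % read the surface structure
    view_surface(SurfaceFile);           % display it in a 3D figure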

Tutorial 4: Channel file / MEG-MRI coregistration


Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

The anatomy of your subject is ready. Before we can start looking at the MEG/EEG recordings,
we need to make sure that the sensors (electrodes, magnetometers or gradiometers) are properly
aligned with the MRI and the surfaces of the subject.

In this tutorial, we will start with a detailed description of the experiment and the files that were
recorded, then we will link the original CTF files to the database in order to get access to the
sensors positions, and finally we will explore the various options for aligning these sensors on
the head of the subject.

Contents

1. License
2. Presentation of the experiment
3. Link the raw files to the database
4. Automatic registration
5. New files and folders
6. Review vs Import
7. Display the sensors
8. Sensor map
9. Manual registration
10. Multiple runs and head positions
11. Edit the channel file
12. On the hard drive
13. Additional documentation

License
This dataset (MEG and MRI data) was collected by the MEG Unit Lab, McConnell Brain Imaging Center,
Montreal Neurological Institute, McGill University, Canada. The original purpose was to serve as a
tutorial data example for the Neurostorm software project. It is presently released in the Public Domain,
and is not subject to copyright in any jurisdiction.

We would appreciate it, however, if you referenced this dataset in your publications: please acknowledge
its authors (Elizabeth Bock, Peter Donhauser, Francois Tadel and Sylvain Baillet) and cite the
Neurostorm project's seminal publication.

Presentation of the experiment


Experiment

 One subject, two acquisition runs of 6 minutes each.


 Subject stimulated binaurally with intra-aural earphones (air tubes+transducers), eyes open
and looking at a fixation cross on a screen.
 Each run contains:
o 200 regular beeps (440Hz).
o 40 easy deviant beeps (554.4Hz, 4 semitones higher).

 Random inter-stimulus interval: between 0.7s and 1.7s, uniformly distributed.
 The subject presses a button with the right index finger when detecting a deviant.
 Auditory stimuli generated with the Matlab Psychophysics toolbox.
 The specifications of this dataset were discussed initially on the FieldTrip bug tracker:
http://bugzilla.fieldtriptoolbox.org/show_bug.cgi?id=2300.

MEG acquisition

 Acquisition at 2400Hz, with a CTF 275 system, subject in sitting position

 Recorded at the Montreal Neurological Institute in December 2013


 Anti-aliasing low-pass filter at 600Hz, files saved with the 3rd order gradient
 Downsampled to a lower sampling rate, from 2400Hz to 600Hz: the only purpose of this
resampling is to make the introduction tutorials easier to follow on a regular computer.

 Recorded channels (340):


o 1 Stim channel indicating the presentation times of the audio stimuli: UPPT001 (#1)
o 1 Audio signal sent to the subject: UADC001 (#316)
o 1 Response channel recording the finger taps in response to the deviants: UDIO001 (#2)
o 26 MEG reference sensors (#5-#30)
o 274 MEG axial gradiometers (#31-#304)
o 2 EEG electrodes: Cz, Pz (#305 and #306)
o 1 ECG bipolar (#307)
o 2 EOG bipolar (vertical #308, horizontal #309)
o 12 Head tracking channels: Nasion XYZ, Left XYZ, Right XYZ, Error N/L/R (#317-#328)
o 20 Unused channels (#3, #4, #310-#315, #329-340)
 3 datasets:
o S01_AEF_20131218_01_600Hz.ds: Run #1, 360s, 200 standard + 40 deviants

o S01_AEF_20131218_02_600Hz.ds: Run #2, 360s, 200 standard + 40 deviants


o S01_Noise_20131218_02_600Hz.ds: Empty room recordings, 30s long
 Average reaction times for the button press after a deviant tone:
o Run #1: 515ms +/- 108ms

o Run #2: 596ms +/- 134ms

Head shape and fiducial points

 3D digitization using a Polhemus Fastrak device driven by Neurostorm (S01_20131218_01.pos)

 More information: Digitize EEG electrodes and head shape

 The output file is copied to each .ds folder and contains the following entries:
o The position of the center of CTF coils.
o The position of the anatomical references we use in Neurostorm:
Nasion and connections tragus/helix, as illustrated here.

o Around 150 head points distributed on the hard parts of the head (no soft tissues).

Link the raw files to the database


 Switch to the "functional data" view.

 Right-click on the subject folder > Review raw file

o Select the file format: "MEG/EEG: CTF (*.ds...)"

o Select all the .ds folders in: sample_introduction/data

o In the CTF file format, each session of recordings is saved in a folder with the extension
"ds". The different types of information collected during each session are saved as
different files in this folder (event markers, sensor definitions, bad segments, MEG
recordings).

 Refine registration now? YES


This operation is detailed in the next section.

 Percentage of head points to ignore: 0
If you have some points that were not digitized correctly and that appear far from the head
surface, you should increase this value in order to exclude them from the fit.

Automatic registration
The registration between the MRI and the MEG (or EEG) is done in two steps. We start with a
first approximation based on three reference points, then we refine it with the full head shape of
the subject.

Step 1: Fiducials

 The initial registration is based on the three fiducial points that define the Subject Coordinate
System (SCS): nasion, left ear, right ear. You have marked these three points in the MRI
viewer in the previous tutorial.

 These same three points have also been marked before the acquisition of the MEG recordings.
The person who recorded this subject digitized their positions with a tracking device (such as a
Polhemus FastTrak or Patriot). The position of these points are saved in the dataset.

 When we bring the MEG recordings into the Neurostorm database, we align them on the MRI
using these fiducial points: we match the NAS/LPA/RPA points digitized with the ones we
located in the MRI Viewer.
 This registration method gives approximate results. It can be good enough in some cases, but
not always, because of the imprecision of the measurements. The tracking system is not always very
precise, the points are not always easy to identify on the MRI slices, and the very definition of
these points does not offer a millimeter precision. All this combined, it is easy to end up with a
registration error of 1cm or more.

 The quality of the source analysis we will perform later is highly dependent on the quality of the
registration between the sensors and the anatomy. If we start with a 1cm error, this error will
be propagated everywhere in the analysis.

Step 2: Head shape

 To improve this registration, we recommend our users to always digitize additional points on the
head of the subjects: around 100 points uniformly distributed on the hard parts of the head
(skull from nasion to inion, eyebrows, ear contour, nose crest). Avoid marking points on the
softer parts (cheeks or neck) because they may have a different shape when the subject is
seated on the Polhemus chair or lying down in the MRI. More information on digitizing head
points.

 We have two versions of the full head shape of the subject: one coming from the MRI (the head
surface, represented in grey in the figures below) and one coming from the Polhemus digitizer at
the time of the MEG/EEG acquisition (represented as green dots).
 The algorithm that is executed when you choose the option "Refine registration with head
points" is an iterative algorithm that tries to find a better fit between the two head shapes (grey
surface and green dots), to improve the initial NAS/LPA/RPA registration. This technique usually
improves the registration between the MRI and the MEG/EEG sensors significantly.

 Tolerance: If you enter a percentage of head points to ignore greater than zero, the fit is
performed once with all the points, then the head points that are the most distant from the head
surface are removed, and the fit is executed a second time with the remaining points (see the
sketch after this list).
 The two pictures below represent the registration before and after this automatic head shape
registration (left=step 1, right=step 2). The yellow surface represents the MEG helmet: the solid
plastic surface in which the subject places his/her head. If you ever see the grey head surface
intersecting this yellow helmet surface, there is obviously something wrong with the
registration.
 At the end of the import process, you can close the figure that shows the final registration.

 A window reporting the distance between the scalp and the head points is displayed. You can
use these values as references for estimating whether you can trust the automatic registration
or not. Defining whether the distances are correct or abnormal depends on your digitization
setup.
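
As mentioned in the Tolerance bullet above, the head points to ignore are selected after a first fit.
The Matlab sketch below only illustrates this order of operations, it is not the actual Neurostorm
code; it assumes that headPoints [Npoints x 3] and scalpVertices [Nvertices x 3] are already
expressed in the same coordinate system after the first fit.

    % Illustrative sketch of the "percentage of head points to ignore" logic (not the actual code)
    tolerance = 0.10;                               % example: ignore 10% of the head points
    nPoints = size(headPoints, 1);
    d = zeros(nPoints, 1);
    for i = 1:nPoints                               % distance to the closest scalp vertex
        d(i) = sqrt(min(sum((scalpVertices - headPoints(i,:)).^2, 2)));
    end
    [~, iSort] = sort(d, 'descend');
    nRemove = round(tolerance * nPoints);           % drop the most distant points...
    headPointsKept = headPoints(iSort(nRemove+1:end), :);
    % ...then the rigid fit is executed a second time with headPointsKept only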

Defaced volumes

When processing your own datasets, if your MRI images are defaced, you might need to proceed in a
slightly different way. The de-identification procedures remove the nose and other facial features from
the MRI. If your digitized head shape includes points on the missing parts of the head, this may cause an
important bias in automatic registration. In this case it is advised to remove the head points below
the nasion before proceeding to the automatic registration, as illustrated in this tutorial.

New files and folders


Many new files are now visible in the database explorer:

 Three folders representing the three MEG datasets that we linked to the database. Note the tag
"raw" in the icon of the folders: this means that the files are considered as raw continuous files.
 S01_AEF_20131218_01_600Hz: Subject01, Auditory Evoked Field, 18-Dec-2013, run #01
 S01_AEF_20131218_02_600Hz: Subject01, Auditory Evoked Field, 18-Dec-2013, run #02
 S01_Noise_20131218_02_600Hz: Subject01, Noise recordings (no subject in the MEG)
 All three have been downsampled from 2400Hz to 600Hz.

Each of these new folders show two elements:

 Channel file: Defines the types and names of channels that were recorded, the position of the
sensors, the head shape and other various details. This information has been read from the MEG
datasets and saved as a new file in the database. The total number of data channels recorded in
the file is indicated in parentheses (340).

 Link to raw file: Link to the original file that you imported. All the relevant meta-data was
read from the MEG dataset and copied inside the link itself (sampling rate, number of samples,
event markers and other details about the acquisition session). But no MEG/EEG recordings
were read or copied to the database. If we open this file, the values are read directly from the
original files in the .ds folder.

Review vs Import
When trying to bring external data into the Neurostorm environment, a common source of
confusion is the difference between the two popup menus Review and Import:

 Review raw file: Allows you to create a link to your original continuous data file. It reads the
header and sensor information from the file but does not copy the recordings in the database.
Most of the artifact cleaning should be done directly using these links.

 Import MEG/EEG: Extracts segments of recordings (epochs) from an external file and saves
copies of them in the Neurostorm database. You should not be using this menu until you have
fully pre-processed your recordings, unless you are importing files that are already epoched or
averaged.

Display the sensors


Right-click on the CTF channel file and try all the display menus:

 CTF Helmet: Shows a surface that represents the inner surface of the MEG helmet.
 CTF coils (MEG): Display the MEG head coils of this CTF system: they are all axial
gradiometers, only the coils close to the head are represented. The small squares do not
represent the real shape of the sensors (the CTF coils are circular loops) but an approximation
made in the forward model computation.

 CTF coils (ALL): Display all the MEG sensors, including the reference magnetometers and
gradiometers. The orientation of the coils is represented with a red segment.

 MEG: MEG sensors are represented as small white dots and can be selected by clicking on
them.

 ECG / EOG: Ignore these menus, we do not have proper positions for these electrodes.

 Misc: Shows the approximate positions of the EEG electrodes (Cz and Pz).
 Use the [Close all] button to close all the figures when you are done.

Advanced

Sensor map
Here is a map with the full list of sensor names for this CTF system; it can be useful for
navigating in the recordings. Click on the image for a larger version.

Advanced

Manual registration
If the registration you get with the automatic alignment is incorrect, or if there was an issue when you
digitized the position of the fiducials or the head shape, you may have to manually realign the sensors
on the head. Right-click on the channel file > MRI Registration:

 Check: Show all the possible information that may help to verify the registration.
 Edit: Opens a window where you can manually move the MEG helmet relative to the head.
Read the tooltips of the buttons in the toolbar to see what is available, select an operation and
then right-click+move up/down to apply it. From a scientific point of view this is not exactly a
rigorous operation, but sometimes it is much better than using wrong default positions.
IMPORTANT: this refinement can only be used to better align the headshape with the
digitized points - it cannot be used to correct for a subject who is poorly positioned in the
helmet (i.e. you cannot move the helmet closer to the subject's head if they were not seated
that way to begin with!)

 Refine using head points: Runs the automatic registration described earlier.
There is nothing to change here, but remember to always check the registration scalp/sensors.

Advanced

Multiple runs and head positions


Between two acquisition runs, the subject may move in the MEG helmet, so the relative position of
the MEG sensors with respect to the head surface changes. At the beginning of each MEG run, the
positions of the head localization coils are detected and used to update the position of the MEG
sensors.

 The two AEF runs 01 and 02 were acquired successively. The position of the subject's head in the
MEG helmet was estimated twice, once at the beginning of each run.
 To evaluate visually the displacement between the two runs, select at the same time all the
channel files you want to compare (the ones for run 01 and 02), right-click > Display sensors >
MEG.

 Typically, we would like to group the trials coming from multiple acquisition runs.
However, because of the subject's movements between runs, it is usually not possible to
directly compare the MEG values between runs. The sensors may not capture the activity
coming from the same regions of the brain.
 You have three options if you consider grouping information from multiple runs:
o Method 1: Process all the runs separately and average between runs at the source
level: The more accurate option, but requires more work, computation time and
storage.

o Method 2: Ignore movements between runs: This can be acceptable if the
displacements are really minimal, less accurate but much faster to process and easier to
manipulate.

o Method 3: Co-register properly the runs using the process Standardize > Co-register
MEG runs: Can be a good option for displacements under 2cm.
Warning: This method has not been fully evaluated on our side, use at your own risk.
Also, it does not work correctly if you have different SSP projectors calculated for
multiple runs.

 In this tutorial, we will illustrate only method 1: runs are not co-registered.

Advanced

Edit the channel file


This window displays a table with all the information about the individual channels. You can edit all
the values.

 Right-click on the channel of the first folder (AEF#01) > Edit channel file:

 Index: Index of the channel in the data matrix. Can be edited to reorder the channels.
 Name: Name that was given to the channel by the acquisition device.
 Type: Type of information recorded (MEG, EEG, EOG, ECG, EMG, Stim, Other, "Delete", etc.)
o You may have to change the Type for some channels. For instance if an EOG channel
was saved as a regular EEG channel, you have to change its type to prevent it from being
used in the source estimation.
o To delete a channel from this file: select "(Delete)" in the type column.
 Group: Used to define sub-groups of channels of the same type.
o SEEG/ECOG: Each group of contacts can represent a depth electrode or a grid, and it can
be plotted separately. A separate average reference montage is calculated for each
group.
o MEG/EEG: Not used.
 Comment: Additional description of the channel.
o MEG sensors: Do not edit this information if it is not empty.
 Loc: Position of the sensor (x,y,z) in SCS coordinates. Do not modify this from the interface.
One column per coil and per integration point (information useful for the forward modeling).

 Orient: Orientation of the MEG coils (x,y,z) in SCS coordinates. One column per Loc column.
 Weight: When there is more than one coil or integration point, the Weight field indicates the
multiplication factor to apply to each of these points.

 To edit the type or the comment for multiple sensors at once, select them all then right-click.
 Close this figure, do not save the modifications if you made any.

Advanced

On the hard drive


Some other fields are present in the channel file that cannot be accessed with the Channel editor
window. You can explore these other fields with the File menu, selecting View file contents or Export to
Matlab, as presented in the previous tutorial.


Structure of the channel files: channel_*.mat

 Comment : String that is displayed in the Neurostorm database explorer.


 MegRefCoef: Noise compensation matrix for CTF and 4D MEG recordings, based on some
other sensors that are located far away from the head.

 Projector: SSP/ICA projectors used for artifact cleaning purposes. See the SSP tutorial.
 TransfMeg / TransfMegLabel: Transformations that were applied to the positions of the
MEG sensors to bring them in the Neurostorm coordinate system.

 TransfEeg / TransfEegLabel: Same for the position of the EEG electrodes.


 HeadPoints: Extra head points that were digitized with a tracking system.
 Channel: An array that defines each channel individually (see previous section).
 History: Describes all the operations that were performed with Neurostorm on this file. To get
a better view of this piece of information, use the menu File > View file history.

 IntraElectrodes: Definition of iEEG devices, documented in the SEEG tutorial.
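
To make this structure more concrete, here is a minimal Matlab sketch that loads a channel file and
lists the MEG channels, based only on the fields described in this section. The file name is
hypothetical; use the path shown in the tooltip of the channel file in the database explorer.

    % Hypothetical example: inspect a channel file from the Matlab command window
    ChannelMat = load('channel_ctf.mat');           % hypothetical file name
    allTypes   = {ChannelMat.Channel.Type};         % one type per channel (MEG, EEG, ECG, ...)
    iMeg       = find(strcmpi(allTypes, 'MEG'));    % indices of the MEG channels
    megNames   = {ChannelMat.Channel(iMeg).Name};   % names of the MEG sensors
    fprintf('Found %d MEG channels out of %d.\n', numel(iMeg), numel(ChannelMat.Channel));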

Useful functions

 /toolbox/io/import_channel.m: Read a channel file and save it in the database.

 /toolbox/io/in_channel_*.m: Low-level functions for reading all the file formats.

 /toolbox/io/in_bst_channel.m: Read a channel file saved in the database.

 /toolbox/sensors/channel_*.m: Routines for manipulating channel files.

 /toolbox/gui/view_channels(ChannelFile, Modality, ...): Display the sensors in a 3D figure.
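
For example (the relative file path is hypothetical and should be replaced by the actual path of
the channel file in your database):

    % Hypothetical usage, with a database-relative file path
    ChannelFile = 'Subject01/S01_AEF_20131218_01_600Hz/channel_ctf.mat';
    ChannelMat = in_bst_channel(ChannelFile);    % read the channel file from the database
    view_channels(ChannelFile, 'MEG');           % display the MEG sensors in a 3D figure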

Tutorial 5: Review continuous recordings


Authors: Francois Tadel, Elizabeth Bock, John C Mosher, Sylvain Baillet

Contents

1. Open the recordings


2. Navigate in time
3. Epoched vs. continuous
4. Display mode: Butterfly/Column
5. Montage selection
6. Channel selection
7. Amplitude scale
8. Time and amplitude resolution
9. Filters for visualization
10. Mouse and keyboard shortcuts

Open the recordings


Let's look at the first file in the list: AEF#01.
Right-click on the Link to raw file. Below the first two menus, you have the list of channel types:

 MEG: 274 axial gradiometers


 ECG: 1 electrocardiogram, bipolar electrode across the chest
 EOG: 2 electrooculograms (vertical and horizontal)
 Misc: EEG electrodes Cz and Pz
 ADC A: Unused
 ADC V: Auditory signal sent to the subject
 DAC: Unused
 FitErr: Fitting error when trying to localize the three head localization coils (NAS, LPA, RPA)
 HLU: Head Localizing Unit, displacements in the three directions (x,y,z) for the three coils
 MEG REF: 26 reference sensors used for removing the environmental noise
 Other: Unused
 Stim: Stimulation channel, records the stim triggers generated by the Psychophysics toolbox
and other input channels, such as button presses generated by the subject

 SysClock: System clock, unused
Select MEG > Display time series (or double-click on the file).

It will open a new figure and enable many controls in the Neurostorm window.

Navigate in time
The files we have imported here are shown the way they have been saved by the CTF MEG
system: as contiguous epochs of 1 second each. These epochs are not related with the stimulus
triggers or the subject's responses, they are just a way of saving the files. We will first explore
the recordings in this epoched mode before switching to the continuous mode.

From the time series figure

 Click: Click on the white or grey parts of the figure to move the time cursor (red vertical line). If you click on the signals, it selects the corresponding channels. Click again to unselect.

 Shortcuts: See the tooltips in the time panel for important keyboard shortcuts:
Left arrow, right arrow, page up, page down, F3, Shift+F3, etc...

 Bottom bar: The red square in the bottom bar represents the portion of the current file or epoch that is currently displayed. Right now we are showing all of epoch #1. This will be more useful in the continuous mode.

 Zoom: Scroll to zoom horizontally around the time cursor (mouse wheel or two-finger
up/down).

 [<<<] and [>>>]: Previous/next epoch or page

From the time panel

 Time: [0, 998]ms is the time segment over which the first epoch is defined.
 Sampling: We downsampled these files to 600Hz for easier processing in the tutorials.
 Text box: Current time, can be edited manually.
 [<] and [>]: Previous/next time sample - Read the tooltip for details and shortcuts
 [<<] and [>>]: Previous/next time sample (x10) - Read the tooltip for details and shortcuts
 [<<<] and [>>>]: Previous/next epoch or page - Read the tooltip for details and shortcuts

From the page settings

 Epoch: Selects the current time block that is displayed in the time series figure.
 Start: Starting point of the time segment displayed in the figure. Useful in continuous mode
only.

 Duration: Length of this time segment. Useful in continuous mode only.

Time selection

 In the time series figure, click and drag your mouse for selecting a time segment.

 At the bottom of the figure, you will see the duration of the selected block, and min/max
values.
 Useful for quickly estimating the latencies between two events, or the period of an oscillation.
 To zoom into the selection: Shift+left click, middle click, or right-click > Time selection > Zoom
into.

 Click anywhere on the figure to cancel this time selection.

Epoched vs. continuous


 The CTF MEG system can save two types of files: epoched (.ds) or continuous (_AUX.ds).
 Here we have an intermediate storage type: continuous recordings saved in "epoched" files. The
files are saved as small blocks of recordings of a constant time length (1 second in this case). All
these time blocks are contiguous, there is no gap between them.
 Neurostorm can consider this file either as a continuous or an epoched file. By default it imports
the regular .ds folders as epoched, but we need to change this manually.
 Right-click on the "Link to raw file" for AEF#01 > Switch epoched/continuous
You should get a message: "File converted to: continuous".

 Double-click on the "Link to raw file" again. Now you can navigate in the file without
interruptions. The box "Epoch" is disabled and all the events in the file are displayed at once.

 With the red square at the bottom of the figure, you can navigate in time (click in the middle
and drag with the mouse) or change the size of the current page (click on the left or right edge
of the red square and move your mouse).

 Increase the duration of the displayed window to 3 seconds (Page settings > Duration).

 Close the figure.


 Repeat this operation with the other files to convert them all to a continuous mode.
o AEF#02 > Switch epoched/continuous
o Noise > Switch epoched/continuous

Display mode: Butterfly/Column


 Close all the figures.
 Double-click on the AEF#01 Link to raw file to open the MEG recordings.
 What we see are all the traces of the 274 sensors overlaid on top of each other.
 Click on the "Display mode" button in the toolbar of the Record tab.

 All the signals are now displayed, one below the other, but because we have 274 MEG channels
the figure is still unreadable. We need to select only a subset of these sensors.

Montage selection
 You can use the montage menu to select a group of sensors. This menu is accessible in two
ways:
o Record toolbar > Drop-down menu.

o Figure popup menu > Right-click on the figure > Montage

 Pre-defined groups of channels are available for some common MEG and EEG systems.
Notice the keyboard shortcut on the right for All channels (Shift+A). You can define your own
(Shift+B, C...) if you go to Edit montages.

 You can also use this menu to create your own sensor selections or more complex montages.
A separate tutorial is dedicated to the montage editor.

 Select the group: CTF LT (Left Temporal).

 More information about the Montage editor.

Channel selection
If you click on the white or grey areas of the figure, it changes the current time.
If you click on the lines representing the recorded signals instead, it selects the corresponding channels.

 When some channels are selected, an additional menu "Channels" is visible in the figure popup.
 Select "View selected" or press [Enter] to open the selected channels in a separate window.

 The management of the bad channels will be introduced in a separate tutorial.

Amplitude scale
A variety of display options allows you to adjust the amplitude scale for the recordings (vertical
axis). Most of these options are available in the right part of the time series figure, some are
repeated in the Record tab of the Neurostorm window.

 Increase/decrease gain: Buttons [+] and [-] on the right side of the figure. The shortcuts for
these buttons are indicated in the tooltips (leave the mouse for a short while over a button):
right-click and move your mouse, hold the Shift key and scroll, or use the keys "+" and "-".

 Auto-scale amplitude: Button [AS] in the figure.
o Selected: The vertical scale is adapted to the new maximum amplitude when you scroll in the file.
o Not selected: The vertical scale is fixed; scrolling in the file does not affect the axis resolution.

 Flip Y axis: Exchange the direction of the Y axis, to have the peaks of negative values pointing
up. Useful mostly for clinical EEG.

 Set amplitude scale: Opens a window to enter the amplitude scale manually. The value
corresponds to the space between two horizontal lines in this figure.

 Set axis resolution: See section "Time and amplitude resolution" below.
 Remove DC offset: Button [DC] in the Record tab. When selected, the average value over
the entire current time window is subtracted from each channel. This means that if you change
the length of the time window, the value that is removed from each channel may change.
Always keep this option selected for unprocessed MEG recordings, unless you use a high-pass
filter.

 Normalize signals: Divide each signal by its maximal amplitude in the displayed time window.
The signals displayed with this normalization are unitless.

 Apply CTF compensation: Button [CTF] in the Record tab. Enable/disable the CTF noise
correction based on the reference sensors, when it is not already applied in the file. In the
current file, the CTF 3rd order gradient compensation is already applied, therefore this option is
not available.

 Vertical zoom: Use the zoom/scroll buttons on the right of the figure or your mouse (CTRL+Mouse wheel to zoom, middle-click+move to scroll) in order to look at specific channels without having to change the montage.

 Uniform amplitude scales: Force all the time series figures to use the same amplitude scale. This option is available with a button in the Record tab or from the figure options menu when at least two time series figures are visible (more details in the next tutorial).

Advanced

Time and amplitude resolution


In the Neurostorm interface, the axis resolution is usually set implicitly: you can set the size of the window, the duration of recordings reviewed at once and the maximum amplitude to show in the figure. These parameters are convenient for exploring the recordings interactively, but they don't allow reproducible displays with constant time and amplitude resolutions.

However, some applications are very sensitive to the horizontal and vertical scaling, such as the visual detection of epileptic spikes. The shapes of the traces that epileptologists try to identify are altered by the axis resolution. This is detailed in the tutorial EEG and Epilepsy.

For this reason, we also added an option to set the figure resolution explicitly. The distance unit on a screen is the pixel, so we can set precisely how much time is represented by one pixel horizontally and how much amplitude is represented by one pixel vertically.
Display menu in the right part of the figure > Amplitude > Set axis resolution (shortcut: CTRL+O)

Note that this interface does not store the input values; it just modifies the other parameters (figure size, time window, max amplitude) to fit the resolution objectives. If you modify these parameters after setting the resolution (resize the figure, leave the button [AS] selected and scroll in time, etc.), the resolution is lost and you have to set it again manually.

Filters for visualization


With the Filter tab, you can apply a band-pass filter to the recordings, or remove a set of specific
frequencies (example: the 50Hz or 60Hz power lines contamination and their harmonics). The
filters are applied only to the time window that is currently loaded. If the segment is too short for
the required filters, the results might be inaccurate.

These visualization filters provide a quick estimate for visualization only; the results are not saved anywhere. To properly filter the continuous files, please use the Process1 tab (see tutorial #10). The option "Filter all results" is not useful for now; it will be described later.

After testing the high-pass, low-pass and notch filters, uncheck them. Otherwise you may forget about them: they stay on until you restart Neurostorm. Note that as long as visualization filters are applied, the title of the Filter tab remains red.


Mouse and keyboard shortcuts


Keyboard shortcuts

 Left / right arrows:


o Change current time, sample by sample
o +Control key: Jump to previous/next epoch or page (same as [<<<] and [>>>])
o +Shift key: Jump to previous/next event (you need to have one event selected)
o MacOS: These shortcuts are different, please read the tooltips for [>], [>>] and [>>>]

 Page-up / page-down:
o Change current time, 10 samples at a time
o +Control key: Jump to the next/previous epoch or page, 10x faster
 F3/Shift+F3: Jump to the next/previous epoch or page (10% overlap between 2 pages)
 F4/Shift+F4: Jump to the next/previous half-page (50% overlap)
 F6/Shift+F6: Jump to the next/previous page with no overlap (0% overlap)
 Plus / minus: Adjust the vertical scale of the time series
 Shift + Letter: Changes the montage
 Control + B: Mark selected time segment as bad
 Control + D: Dock figure
 E: Add / delete event marker
 Control + E: Add / delete event marker for the selected channels
 Control + F: Open a copy of the figure, not managed by the Neurostorm window manager
 Control + H: Hide/show selected event group
 Control + I: Save figure as image

 Control + J: Open a copy of the figure as an image
 Control + O: Set axes resolution
 Control + L: Change display mode of events (dots, lines or hidden)
 Control + T: Open a 2D topography window at the current time
 Enter: Display the selected channels in a separate figure
 Escape: Unselect all the selected channels
 Delete: Mark the selected channels as bad
 1 2 3 4 5 6 7 8 9: User-defined shortcuts for new events (tutorial #7)

Mouse shortcuts

 Click on a channel: Select the channel


 Click: Change current time
 Shift + click: Force the selection of the current time (even when clicking on a channel)
 Click + move: Select time range
 Right-click: Display popup menu
 Right-click + move: Adjust the vertical scale of the time series
 Scroll: Zoom around current time
 Shift + scroll: Adjust the vertical scale of the time series
 Control + scroll: Zoom vertically
 Middle-click + move: Move in a zoomed figure
 Double click: Restore initial zoom settings (or edit the notes associated to the clicked event)

Tutorial 6: Multiple windows


Authors: Francois Tadel

Contents

1. General organization
2. Automatic figure positioning
3. Example
4. Multiple views of the same data

5. User setups
6. Uniform amplitude scales
7. Graphic bugs

General organization
This tutorial is a parenthesis to explain how the figures are positioned on the screen and how you can organize your workspace more efficiently. One interesting feature of the Neurostorm interface is the ability to easily open multiple views or multiple datasets simultaneously.

The buttons in the menu "Window layout options" can help you organize all the open figures in an efficient way. There are four options for the automatic placement of the figures on the screen, and you can also save your own specific working environment.

Remember that the Neurostorm window is designed to remain on one side of the screen. All the
space of the desktop that is not covered by this window will be used for opening other figures.
This available space is designated in the menus below as "Full area". Do not try to maximize the
Neurostorm window, or the automatic management of the data figures might not work correctly.

Automatic figure positioning


 Layout options: Defines how the figures are positioned on the screen
o Tiled: All the figures have similar sizes.
o Weighted: Some figures containing more information are given more space on the
screen. This mode is mostly useful when reviewing continuous recordings.

o Full area: Each figure takes all the space available for figures.

o None: The new figures are displayed at the default Matlab position, always at the same
place, and never re-organized after. Selecting this option can be useful if the auto-
arrangement does not work well on your system or if you want to organize your
windows by yourself. It is also automatically selected when using "user setups" (see
below).

 One screen / two screens: If you have multiple monitors, Neurostorm can try to place the
database window on one screen and all the other figures on the other screen. If you force
Neurostorm to use only one screen, all the figures should stay on the same screen.

 Full screen: If selected, the figures are set to their maximum size, covering the Neurostorm
window

 Show all figures: If you have many figures hidden behind some other fullscreen window (Matlab, Firefox, etc.), you don't have to click on each of them in the taskbar to get them back. Just make the Neurostorm window visible and click on this button: it will bring all the figures back (this may not work with some Linux window managers).

 User setups: You can save a combination of figures currently opened on your desktop and re-
use it later on a different dataset. It can be very useful for reviewing long continuous files.

 Close all figures: Last button in the toolbar. Close everything and free the allocated memory.

Example
 Double-click on AEF#01 Link to raw file to open the MEG sensors.

 Open the EOG signals for the same file: Right-click on the file > EOG > Display time series.

 Open a 2D topography for the MEG sensors: Right-click on the file > MEG > 2D sensor cap.
This view represents the values of all the MEG sensors at the current time point. This type of figure will be described in another tutorial.

 Cycle through the options: Tiled, Weighted, Full area.

 Select the option None, close all the figures (using the [Close all] button), and re-open them.
Notice that now the position of the figures is not managed by Neurostorm anymore.

 Select again Weighted: the figures are automatically re-arranged again.

 Test the option Full screen.

 If you have two screens connected, you can try the options One screen / Two screens.

Advanced

Multiple views of the same data


 Keep all the existing figures: MEG, EOG, 2D topography.

 Open another time series view of the same file, same MEG sensors.
o Note that if you double-click again on the file, it just selects the existing figure.
o To force opening another view: Right-click on the file > MEG > Display time series.

o Only the first view that was open on this file shows the events bar and the time
navigation bar at the bottom of the figure. If you want the two MEG figures
displayed in the exact same way, you can close everything, then start by opening
the EOG view, then two MEG views.
 Re-arrange the figures in a nicer way.
 Select montage "CTF LT" for one figure, and montage "CTF RF" for the other.

o You can change individually the display properties of each figure.


o When creating a new figure, it re-uses the last display properties that were used.
o To change the properties of one figure, you first have to select it. Clicking on the title bar of the figure is not enough, you have to click inside the figure (this is due to some limitations of the Matlab figure implementation).

o When the new figure is selected, the controls in the Record tab are updated, and
you can change the display properties for this figure.
 There is currently a limitation of the continuous file viewer: it is not possible to review two continuous datasets at the same time. This is usually not a problem because we typically review the continuous files one after the other. It will be possible to open multiple data files at once after we import them in the database, which is what is really useful.

Advanced

User setups
 Keep the four figures previously created (MEG LT, MEG RF, EOG, 2D sensor).
 In the menu "Window layout options" > User setups > New setup > "Test".

 Close all the figures (using the Close all button).


 Double-click again on the Link to raw file to open MEG sensors.
 In the menu "Window layout options" > User setups > Test.
It should restore your desktop exactly like it was when you saved it.

 Note that the layout None is now selected. Using custom window configurations disables the
automatic arrangement of the windows on the screen.

 This feature is interesting for users who need to review numerous files every day in a very specific way, for instance for the visual inspection of epilepsy recordings. Loading their reviewing environment in just one click can save them a substantial amount of time.

Advanced

Uniform amplitude scales

 Set the display mode "butterfly" for the two MEG time series figures:
Uncheck the first button in the Record tab.

 With the Uniform amplitude scale button in the Record tab, you can change the way the amplitude of multiple time series figures is scaled.

 Selected: All the time series figures with similar units have the same y-axis scale, so you can visually compare the amplitudes between two datasets.

 Not selected: Each figure is scaled independently to its own maximum amplitude.

Tutorial 7: Event markers


Authors: Francois Tadel, Elizabeth Bock, John C Mosher

Contents

1. Lists of events
2. Adding events
3. Extended events
4. Bad segments
5. Hide event groups
6. Channel events
7. Notes
8. Display modes
9. Custom shortcuts
10. Saving modifications
11. Other menus
12. On the hard drive

Lists of events
You probably noticed the colored dots on top of the recordings in the MEG figure. They represent the event markers saved in this dataset. In this documentation, the terms events, markers and triggers are used interchangeably. Some are stimulus triggers that were generated by the stimulation computer (Psychtoolbox-3), others are the subject responses recorded from a button box. This tutorial shows how to manipulate these markers.

 Open the MEG recordings for file AEF#01.

 Make sure it is configured as presented here: Montage "CTF LT", [DC] button selected, 3s pages.
 All the markers available in the file are listed in the Events section of the Record tab.
 On the left, you have the groups of events and the number of occurrences for each group:
o 200 standard audio stimulations

o 40 deviant audio stimulations


o 40 button responses: The subject presses a button with the right index finger when a
deviant is presented. This is a very easy task so all the deviants are detected

 On the right, you have the list of the time instants at which the selected event occurs.
 These two lists are interactive. If you click on an event group (left list), it shows the
corresponding occurrences in the right list. If you click on one particular event in the right list,
the file viewer jumps to it. It works the other way as well: if you click on a dot representing an
event in the MEG figure, the corresponding event group and occurrence are selected in the
Record tab.

Adding events
The markers can represent either stimulation triggers or subject responses that were recorded
during the acquisition. It can also be useful to add new markers during the analysis of the
recordings, to identify events of interest that are not detected at the time of the recordings, such
as artifacts (eye movements, heartbeats, subject movements) or specific patterns of brain activity
(epileptic spikes).

 Create a new group of events "Test" with the menu Events > Add group.

 Click on this new category to select it. It contains no occurrences at the beginning (x0).

 Then place the time cursor (red vertical bar) where you want to add a new marker "Test".
 Add a few occurrences with any of the three methods:
o In the Record tab: Select the menu Events > Add / delete event

o In the time series figure: Right-click > Add / delete event

o In the time series figure: Press key E

 If the display is too dense, it can be difficult to set the current time instead of selecting a
channel. Note that you can click outside of the white area to select the time (on top of the
figure), or use the shortcut Shift+click.

 Remove all the event occurrences in "Test", but not the group itself. Use any of the three
methods:
o In the Record tab: Select one or more event occurrences, press the Delete key.

o In the time series figure: Click on an event dot and right-click > Add / delete event.

o In the time series figure: Click on an event dot and press key E.

Extended events
You can also use this interface to create events that have a temporal extension, i.e., that last for
more than one time sample. This can be used to define bad segments in the recordings.

 In the time series window, select a time range (click + move).


 Add an event: menus or key E.

 The first occurrence you add in an event group defines its type: single time point (simple
events), or time range (extended events). You cannot mix different types of events in a group.
You get an error when you try to add a time segment in an event category that already contains
a simple event.

 Remove the event group "Test": Click on it in the list and press the Delete key.

Bad segments
It is very common to have portions of the recordings heavily contaminated by events coming from the subject (eye blinks, movements, heartbeats, teeth clenching) or from the environment (stimulation equipment, elevators, cars, trains, building vibrations...). Some of these artifacts are well defined and can be removed efficiently; others are too complex to be modeled. For this last category, it is usually safer to mark the noisy segments as bad, and ignore them for the rest of the analysis.

To mark a segment of recordings as bad, the procedure is the same as for defining an extended
event: select a time window, and then tag it as bad with one of the following methods.

 In the Record tab: Select the menu Events > Reject time segment,

 In the time series figure: Right-click > Reject time segment,

 In the time series figure: Press Ctrl + B

It creates a new event group BAD, and adds an extended event to it. Later, when epoching this
file (extracting time blocks and saving them in the database), the trials that contain a bad
segment will be imported but tagged as bad, and ignored in the rest of the analysis.

You can create multiple groups of bad segments, for instance to identify different types of
artifacts. Any event group that contains the tag "BAD" will be considered as indicating bad
segments.

Advanced

Hide event groups


When you have too many events in the viewer, seeing the ones you are interested in can be difficult. This will be the case, for instance, after we detect the heartbeats in the signal: we will have one event every second, which is not always interesting to see. Each event category can be selectively hidden.

 In the record tab, select the group of events you want to hide.
 Use the menu Events > Show/Hide group, or press the shortcut Ctrl+H.

 The event group is greyed out, and the corresponding markers disappear from the viewer.

Advanced

Channel events
Some events can be attached to only one or a few channels. This is useful for instance for
reviewing clinical EEG recordings, where neurologists are tagging epileptic activity only on a
subset of the channels.

 First select the channels of interest by clicking on them (the signals should turn red).
 Place the time cursor where you want to create the event (click on the white or grey areas of the
figure, or use the shortcut Shift+Click).
 Right-click anywhere on the figure > Add/delete channel event, or shortcut Ctrl+E.

 The event marker appears directly on the selected channel, and the name of the channel
appears in the list of event times (in the Neurostorm window).

 Then you can deselect the channel (click again on it) or press the Escape key before creating a
new event attached to a different channel.

 If no channel is selected, you can proceed in this alternate way: position the time cursor where you want to create the event, right-click directly on the channel to which you want to attach the event, and select "Add/delete channel event".

Advanced

Notes
Additional comments can be added to the event, in case additional details should be displayed in
the file viewer. This is mostly useful for reviewing clinical recordings as well.

 Right-click on any event marker or event text (or double-click on it) > Edit notes.

 Enter the text to display next to the marker.

 Alternatively, you can double-click on the event in the list of event times (in the Neurostorm
window).

Advanced

Display modes
Three display modes are available for the event markers: dots, lines or hidden. Select the
corresponding menu in the display options, or press CTRL+L multiple times.

Advanced

Custom shortcuts
When reviewing long recordings and manually adding lots of events (e.g. when marking epileptic spikes by hand), using the menus presented above is not convenient because they require many mouse clicks.

Using the menu Events > Edit keyboard shortcuts, you can associate custom events with the keys 1 to 9 of the keyboard. Define the name of the event type to create for each key, and then simply press the corresponding key to add/delete a marker at the current time position. Three options are available for each event type:

 Simple: Create a simple event where the red time cursor is placed.
 Full page: Create an extended event including the entire page of recordings, then move to the
next page of recordings. This option was added for a specific application (sleep staging) that
consists in labelling blocks of 30s through the entire file.

 Extended: Create an extended event with the time window indicated on the right of the panel
around the time cursor.

Saving modifications
Now you can delete all the event groups that you've just created and leave only the initial ones (button,
standard, deviant): select the event groups and press Delete, or use the menu Events > Delete
group.

When you close the continuous file viewer, or the last figure that shows a part of the raw file, the
dataset is unloaded, the file is released and the memory is freed.

If you edited the events for this file, you are asked whether to save the modifications or not. If
you answer "Yes", the modifications are saved only in the database link (Link to raw file), not in
the original file itself. Therefore, you would see the changes the next time you double-click on
the "link to raw file" again, but not if you open the original .ds file in another protocol or with an
external program.

Note that the events you edit are not automatically saved until that moment. As you would do with any other type of computer work, save your work regularly to limit the damage caused by a program or computer crash. In the Record tab, use the menu File > Save modifications.

Advanced

Other menus

File

 Import in database: Import blocks of the current continuous file into the database. Equivalent
to a right click on the "Link to raw file" in the database explorer > Import in database.

 Save modifications: Save the modifications made to the events in the database link.
 Add events from file: Import events from an external file. Many file formats are supported.
 Read events from channel: Read the information saved during the acquisition in a digital
auxiliary channel (eg. a stimulus channel) and generate events.

 Detect analog triggers: Detect transition events in an external analog channel, such as the
voltage of a photodiode exposed to light or a microphone recording a sound.

 Export all events: Save all the events in an external file.


 Export selected events: Same as "Export all events" but exports only the selected events.

Events

 Rename group: Rename the selected group of events. Shortcut: double-click.


 Set color: Change the color associated with an event group.
 Mark group as bad: Add the tag "bad" to the event name, so that it is considered as a bad segment.

 Sort groups: Reorders the event groups by name, or by time of the first occurrence.
 Merge groups: Merge two event groups into a new group. Initial groups are deleted. To keep
them, duplicate them before merging.

 Duplicate groups: Make a copy of the selected event groups.


 Convert to simple events: Convert a group of extended events (several time points for each
event), to simple events (one time point). An option window asks you whether to keep the first
or the last sample only of the extended events.

 Convert to extended events: Convert simple events to segments of a fixed length.


 Combine stim/response: Create new groups of events based on stim/response logic.
Example: Stimulus A can be followed by response B or C. Use this process to split the group A in
two groups: AB, followed by B; and AC, followed by C.

 Detect multiple responses: Finds the multiple responses (events that are too close to each
other)

 Group by name: Combine different event groups by name.


 Group by time: Combine simultaneous events and creates new event groups.
 Add time offset: Adds a constant time to all the events in a group, to compensate for a delay.
 Edit keyboard shortcuts: Custom associations between keys 1..9 and events
 Reject time segment: Mark the current time selection as bad.
 Jump to previous/next event: Convenient way of browsing through all the markers in a
group.
Shortcut: Shift + left/right

Advanced

On the hard drive


The nodes "Link to raw file" you see in the database explorer are represented by .mat files on the
hard drive. They contain all the header information extracted from the raw files, but do not
contain a full copy of the recordings.

All the additional information created from the Neurostorm interface (event markers, bad channels, SSP
projectors) are not saved back to the original raw files, they are only saved in the "Link to raw file". The
names of these files start with the tag data_0raw_, they share the same structure as all the imported
epochs (introduced later in the tutorials).

To explore the contents of these link files, right-click on them and use the popup menus File > View file contents or File > Export to Matlab.

Link to raw file structure: data_0raw_*.mat

 F: sFile structure, documents completely the continuous raw file, described below.
(for imported epochs, .F contains directly the MEG/EEG recordings [Nchannels x Ntime])

 Comment: String used to represent the file in the database explorer.


 Time: First and last time points recorded in the continuous file.
 ChannelFlag: [Nchannels x 1] list of good/bad channels (good=1, bad=-1)
 DataType: Type of data stored in this file.
o 'raw' = Link to a continuous raw file
o 'recordings' = Imported epoch
 Device: Acquisition system that recorded the dataset.
 Events: Not used in the case of links.
 Leff: Effective number of averages = Number of input files averaged to produce this file.
 History: Describes all the operations that were performed with Neurostorm on this file. To get
a better view of this piece of information, use the menu File > View file history.

sFile structure: This structure is passed directly to all the input/output functions on continuous
files.

 filename: Full path to the continuous raw file.


 format: Format of the continuous raw file.
 device: Acquisition system that recorded the dataset. Same as Link.Device.
 condition: Name of the folder in which this file is supposed to be displayed.
 comment: Original file comment.
 byteorder: Endianness, 'l' = Little Endian, 'b' = Big Endian
 prop: Structure, basic properties of the recordings
o times: First and last time points recorded in the continuous file.
o sfreq: Sampling frequency
o Leff: Number of files that were averaged to produce this file.
o currCtfComp: Level of CTF compensation currently applied.
o destCtfComp: Level of CTF compensation in which we want to view the file (usually: 3)
 epochs: Array of structures used only in the case of continuous recordings saved as "epochs"

 events: Array of structures describing the event markers in the file, one structure per event
group:

o label: Name of the event group


o color: [r,g,b] Color used to represent the event group, in Matlab format
o epochs: [1 x Nevt] Indicates in which epoch the event is located (index in the sFile.epochs array), or 1 everywhere for files that are not saved in "epoched" mode.
Nevt = number of occurrences of the event = number of markers in this group

o times: [1 x Nevt] Time in seconds of each marker in this group (times = samples / sfreq),
aligned on exact sample instants (times = round(times*sfreq)/sfreq).
For extended events: [2 x Nevt], first row = start, second row = end

o reactTimes: Not used anymore


o select: Indicates if the event group should be displayed in the viewer.
o channels: {1 x Nevt} Cell array of cell-arrays of strings. Each event occurrence can be
associated with one or more channels, by setting .channels{iEvt} to a cell-array of
channel names.

o notes: {1 x Nevt} Cell-array of strings: additional comments for each event occurrence
 header: Structure describing additional header information, depending on the original file
format.

 channelflag: List of good/bad channels, same information as Link.ChannelFlag.

Useful functions

 in_bst_data(DataFile, FieldsList): Read the structure for a data file.
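As an illustration, here is a minimal sketch of how these structures could be inspected from the Matlab command window with in_bst_data, assuming that calling it without a field list returns the whole structure. The relative file path is hypothetical; hover over the "Link to raw file" node to see the real file name.

    % Minimal sketch: inspect a "Link to raw file" (the relative path below is only an example)
    DataFile = 'Subject01/AEF01/data_0raw_run01.mat';   % hypothetical path
    DataMat  = in_bst_data(DataFile);                   % assumed to read all the fields of the link file
    sFile    = DataMat.F;                               % sFile structure described above
    disp(sFile.prop.sfreq)                              % sampling frequency, in Hz
    disp({sFile.events.label})                          % names of the event groups
    disp(size(sFile.events(1).times, 2))                % number of occurrences in the first event group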

Tutorial 8: Stimulation delays


Authors: Francois Tadel, Elizabeth Bock

The event markers that are saved in the data files might have delays. In most cases, the stimulation triggers saved by the acquisition system indicate when the stimulation computer requested a stimulus to be presented. After this request, the equipment used to deliver the stimulus to the subject (projector, screen, sound card, electric or tactile device) always introduces some delay. Therefore, the stimulus triggers are saved before the instant when the subject actually receives the stimulus.

For accurate timing of the brain responses, it is very important to estimate these delays precisely
and if possible to account for them in the analysis. This tutorial explains how to correct for the

different types of delays in the case of an auditory study, if the output of the sound card is saved
together with the MEG/EEG recordings. A similar approach can be used in visual experiments
using a photodiode.

Contents

1. Note for beginners


2. Documented delays
3. Evaluation of the delay
4. Detection of the analog triggers
5. Repeat on acquisition run #02
6. Delays after this correction
7. Detection of the button responses
8. Another example: visual experiments

Note for beginners


This entire tutorial can be considered as advanced. It is very important to correct for the
stimulation delays in your experiments, but if you are not using any stimulation device, you do
not need this information. However, if you skip the entire tutorial, you will have uncorrected
delays and it will be more difficult to follow along the rest of the tutorials. Just go quickly
through the actions that are required and skip all the explanations.

Advanced

Documented delays
Reminder: The full description of this auditory dataset is available on this page: Introduction dataset.

Delay #1: Production of the sound

 The stimulation software generates the request to play a sound, and the corresponding trigger is recorded in the stim channel by the MEG acquisition software.
 Then this request goes through different software layers (operating system, sound card drivers) and the sound card electronics. The sound card produces an analog sound signal that is sent at the same time to the subject and to the MEG acquisition system. The acquisition software saves a copy of it in an audio channel together with the MEG recordings and the stim channel.
 The delay can be measured from the recorded files by comparing the triggers in the stim channel and the actual sound in the audio channel. We measured delays between 11.5ms and 12.8ms (std = 0.3ms). These delays are not constant, so we should adjust for them. Jitter in the stimulus triggers causes the different trials to be aligned incorrectly in time, hence "blurred" averaged responses.

Delay #2: Transmission of the sound

 The sound card plays the sound, the audio signal is sent with a cable to two transducers located
in the MEG room, close to the subject. This causes no observable delay.

 The transducers convert the analog audio signal into a sound (air vibration). Then this
sound is delivered to the subject's ears through air tubes. These two operations cause a
small delay.
 This delay cannot be estimated from the recorded signals: before the acquisition, we placed a sound meter at the extremity of the tubes to record when the sound is delivered. We measured delays between 4.8ms and 5.0ms (std = 0.08ms). At a sampling rate of 2400Hz, this delay can be considered constant, so we will not compensate for it.

Delay #3: Recording of the signals

 The CTF MEG systems have a constant delay of 4 samples between the analog channels (MEG/EEG, auditory, etc.) and the digital channels (stim, buttons, etc.), because of an anti-aliasing filter that is applied to the former and not to the latter. This translates here to a constant "negative" delay of 1.7ms, meaning the analog channels are delayed when compared to the stim channels.

 Many acquisition devices (EEG and MEG) have similar hidden features; read the documentation of your hardware carefully before analyzing your recordings.

Evaluation of the delay


Let's display simultaneously the stimulus channel and the audio signal.

 Right-click AEF#01 link > Stim > Display time series: The stim channel is UPPT001.

 Right-click AEF#01 link > ADC V > Display time series: The audio channel is UADC001.

 In the Record tab, set the duration of display window to 0.200s.

 Jump to the third event in the "standard" category.
 We can observe that there is a delay of about 13ms between the time when the stimulus trigger is generated by the stimulation computer and the moment when the sound is actually played by the sound card of the stimulation computer (delay #1).

 What we want to do is to discard the existing triggers and replace them with new, more
accurate ones created based on the audio signal. We need to detect the beginning of the
sound on analog channel UADC001.
 Note that the representation of the oscillation of the sound tone is poor here. The frequency of this standard tone is 440Hz. It was correctly captured by the original recordings at 2400Hz, but not by the downsampled version we use in the introduction tutorials. It should still be good enough for detecting the onset of the stimulus.

Detection of the analog triggers


Detecting the standard triggers

Run the detection of the "standard" audio triggers on channel UADC001 for file AEF#01.

 Keep the same windows open as previously.


 In the Record tab, select the menu File > Detect analog triggers.

 This opens the Pipeline editor window with the process Events > Detect analog triggers selected. This window will be introduced later; for now we will just use it to configure the process options. Configure it as illustrated below:

Advanced

Explanation of the options (for future reference, you can skip this now):

 Event name: Name of the new event category created to store the detected triggers.
We can start with the event "standard", and call the corrected triggers "standard_fix".

 Channel name: Channel on which to perform the detection (audio channel UADC001).
 Time window: Time segment on which you want to detect analog triggers.
Leave the default time window or check the box "All file", it will do the same thing.

 Amplitude threshold: A trigger is set whenever the amplitude of the signal increases above X
times the standard deviation of the signal over the entire file. Increase this value if you want the
detection to be less sensitive.

 Min duration between two events: If the event we want to detect is an oscillation, we don't
want to detect a trigger at each cycle of this oscillation. After we detect one, we stop the
detection for a short time. Use a value that is always between the duration of the stimulus (here
100ms) and the inter-stimulus interval (here > 700ms).

 Apply band-pass filter before the detection: Use this option if the effect you are trying to
detect is more visible in a specific frequency band. In our case, the effect is obvious in the
broadband signal, we don't need to apply any filter.

 Reference: If you already have an approximation of the trigger timing, you can specify it here. Here we have the "standard" events and we want to detect a trigger in their neighborhood. If we do not use this option, the process creates only one new group containing all the detected audio triggers, without distinction between the deviant and standard tones.

 Detect falling edge (instead of rising edge): Detects the end of the tone instead of the
beginning.

 Remove DC offset: If the signal on which we perform the detection does not oscillate around
zero or has a high continuous component, removing the average of the signal can improve the
detection. This should be selected when using a photodiode with a pull-up resistor.

 Enable classification: Tries to automatically classify the different types of events that are
detected based on the morphology of the signal in the neighborhood of the trigger.
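For future reference, the same detection can also be run as a process from a script, as introduced in Tutorial 9. The sketch below is only an assumption of what such a call could look like: the process and option names are inferred from the GUI labels and are not guaranteed, so generate the actual call with the pipeline editor rather than copying this.

    % Hedged sketch only: process and option names are assumptions inferred from the GUI.
    sFiles = {'Subject01/AEF01/data_0raw_run01.mat'};    % hypothetical link file selected in Process1
    sFiles = bst_process('CallProcess', 'process_evt_detect_analog', sFiles, [], ...
        'eventname',   'standard_fix', ...   % name of the new event group
        'channelname', 'UADC001', ...        % analog channel on which to detect
        'threshold',   2, ...                % amplitude threshold, in standard deviations
        'blanking',    0.5, ...              % minimum duration between two events, in seconds
        'refevent',    'standard');          % detect only in the neighborhood of these events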

Results of the detection

 Navigate through a few of the new "standard_fix" events to evaluate if the result is correct. You
can observe that the corrected triggers are consistently detected after the rising portion of the
audio signal, two samples after the last sample where the signal was flat.
 This means that we are over-compensating delay #1 by 3.3ms. But at least this delay is constant
and will not affect the analysis. We can count this as a constant delay of -3.3ms.

Detecting the deviant triggers

 Repeat the same operation for the deviant tones.
 In the Record tab, select the menu File > Detect analog triggers.

Some cleaning

 We will use the corrected triggers only, so we can delete the original ones to avoid any confusion.
 Delete the event groups "deviant" and "standard" (select them and press the Delete key).

 Rename the group "deviant_fix" into "deviant" (double-click on the group name).
 Rename the group "standard_fix" into "standard".

 Close all: Answer YES to save the modifications.

Repeat on acquisition run #02


Repeat all the exact same operations on the link to file AEF#02:

 Right-click AEF#02 link > Stim > Display time series: The stim channel is UPPT001.

 Right-click AEF#02 link > ADC V > Display time series: The audio channel is UADC001.

 In the Record tab, select menu File > Detect analog triggers: standard_fix

 In the Record tab, select menu File > Detect analog triggers: deviant_fix

 Check that the events are correctly detected.


 Delete the event groups "deviant" and "standard" (select them and press the Delete key).

 Rename the group "deviant_fix" into "deviant" (double-click on the group name).
 Rename the group "standard_fix" into "standard".
 Close all: Answer YES to save the modifications.

Delays after this correction


We compensated for the jittered delays (delay #1), but not for the hardware delays (delay #2). Note that delay #3 is no longer an issue since we are not using the original stim markers, but the more accurate audio signal. The final delay between the "standard_fix" triggers and the moment when the subject gets the stimulus is now the sum of delay #2 and the -3.3ms over-compensation.

Final constant delay: 4.9 - 3.3 = 1.6ms

We decide not to compensate for this delay because it is very short and does not introduce any
jitter in the responses. It is not going to change anything in the interpretation of the data.

Advanced

Detection of the button responses


The subject presses a button with the right index finger when a deviant is presented. We don't
really need to correct this category of events because it is already correct. You can skip this
section if you are not interested in parsing digital channels.

The digital channel Stim/UDIO001 contains the inputs from the response button box (an optical device, negligible delay). Each bit of the integer value on this channel corresponds to the activation of one button. We can read this channel directly to get accurate timing for the button presses.
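As a small worked example of this bit encoding: the value 64 that we will read below equals 2^6, so only bit 7 of the channel is set, which identifies one specific button of the response box.

    % Worked example: decode which button bit is active in a value read on UDIO001
    value = 64;                                % value observed on the digital channel
    pressedBit = find(bitget(value, 1:16))     % returns 7, since 64 = 2^6 (only bit 7 is set)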

 Right-click AEF#01 link > Stim > Display time series: The response channel is UDIO001.

 In the Record tab: Set the page duration to 3 seconds.

 Note on the DC removal: You may see the base value of the UDIO001 channel "below" the zero line. This is an effect of the DC correction that is applied on the fly to the recordings: the average of the signals over the current page is subtracted from them. To restore the real value, you can uncheck the [DC] button in the Record tab. Alternatively, just remember that the reference line for a channel doesn't necessarily mean "zero" when the DC removal option is on.

 In the Record tab, select menu File > Read events from channel: UDIO001 / Value

 You get a new event category 64: this is the value of UDIO001 at the detected transitions. There are 40 of them, one for each button press. We can use this as a replacement for the original button category.

 To make things clearer: delete the button group and rename 64 into button.

 Close all: Answer YES to save the modifications.


 Optionally, you can repeat the same operation for the other run, AEF#02. But we will not use
the "button" markers in the analysis, so it is not very useful.
 Note that these events will have delay #3 (when compared to MEG/EEG) since they are
recorded on a digital channel.

Advanced

Another example: visual experiments


We have discussed here how to compensate for the delays introduced in an auditory experiment, using a copy of the audio signal saved in the recordings. A similar approach can be used for other types of experiments. Another typical example is the use of a photodiode in visual experiments.

When sending images to the subject using a screen or a projector, we usually have jittered delays
coming from the stimulation computer (software and hardware) and due to the refresh rate of the
device. These delays are difficult to account for in the analysis.

To detect accurately when the stimulus is presented to the subject, we can place a photodiode in
the MEG room. The diode produces a change in voltage when presented with a change in light
input, for example black to white on the screen. This is typically managed with a small square of
light in the corner of the stimulus screen - turning white when the stimulus appears on the screen
and then black at all other times. The signal coming from this photodiode can be recorded

together with the MEG/EEG signals, just like we did here for the audio signal. Depending on the
photodiode, it is recommended to use a pull-up resistor when recording the signal. Then we can
detect the triggers on the photodiode output channel using the menu "detect analog triggers",
including the use of the 'Remove DC offset' option.

Tutorial 9: Select files and run processes


Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

The Neurostorm window includes a graphical batching interface. With the two tabs Process1 and
Process2 in the lower part of the window, you can select files from the database explorer and
assemble a processing pipeline. Most of the operations available in the interface can also be
executed this way, including everything we've been doing with Neurostorm so far.

On the other hand, some features are only available this way. This is the case for the frequency filters we will need for the pre-processing of our auditory recordings. This tutorial is a parenthesis to explain how to select files and run processes; we will resume with the cleaning of the recordings in the next tutorial.

Contents

1. Selecting files to process


2. Filter by name
3. Selecting processes
4. Plugin structure

5. Note for beginners
6. Search Database
7. Saving a pipeline
8. Automatic script generation
9. Process: Select files with tag
10. Report viewer
11. Error management
12. Control the output file names
13. Additional documentation

Selecting files to process


The tab Process1 contains an empty box in which you can drag and drop any number of files or folders from the database explorer. The easiest way to understand how it works is to try it.

 Try to drag and drop, in Process1, all the nodes you currently have in your database explorer.
 You will see that it accepts all the folders and all the recordings, but not the channel files.
 When you add a new node, the interface evaluates the number of files of the selected type that each of them contains. The number in brackets next to each node represents the number of data files that were found in it.
 On top of the list, a comment shows the total number of files that are currently selected.

 The buttons on the left side allow you to select what type of file you want to process:
Recordings, sources, time-frequency, other. When you select another button, all the counts are
updated to reflect the number of files of the selected type that are found for each node.
 Right now, if you select another file type, it would show only "0" everywhere because there are
no sources or time-frequency decompositions available in the database yet.

 To remove files from the Process1 list:


o Select the nodes to remove (holding Shift or Ctrl key) and press the Delete key.

o Right-click on the list > Clear list

Filter by name
When you have lots of files in a folder, like multiple source reconstructions or time-frequency files for
each trial, it is difficult to grab just the ones you are interested in. After selecting your folders in the
Process1 box, you can refine the selection with the Filter search box at the bottom-right corner of the
window.

 The example below shows how to select the data files corresponding to the noise recordings: by
typing "Noise" in the search box and selecting the option "Search file paths". We cannot
perform the search "by name" because all the data files have the same name "Link to raw file".

 Reminder: To see the file name corresponding to a node in the database, leave your mouse over
it for a few seconds. You can do this both in the database explorer and the Process1 list.

The options offered in the Filter menu are:

 Search file paths: Look for the string in the full relative file paths (including the folder names).
 Search names: Look for the string in the names of the files, ie. what is displayed in the
database explorer to represent them (the .Comment field).

 Search parent names: Extends the search to the name of the parent files (applicable only to
source and time-frequency files, which can depend on a data file).

 Select files: Only the files that contain the string are selected.
 Exclude files: Only the files that DO NOT contain the string are selected.
 Reset filters: Removes the current file filters applied on Process1 and Process2.
 Case insensitive: Note that the search is not sensitive to case.
 Boolean logic: You can combine different keywords to make a more precise search using
advanced search queries. See the following section for more information.

Selecting processes
 Clear the file list and the search filters.
 Select all three datasets we have linked to our protocol.
You can select the three "link to raw file" nodes, the three folders or the entire subject node.

 Click on the [Run] button at the bottom-left corner of the Process1 tab.

 The Pipeline editor window appears. You can use it to create an analysis pipeline, i.e., a list of
processes that are applied to the selected files one after the other. The first button in the
toolbar shows the list of processes that are currently available. If you click on a menu, it's added
to the list.

 Some menus appear in grey. This means that they are not designed to be applied to the type of
data that you have in input, or at the end of the current pipeline.
 In the current example, we have a file with the type "continuous raw recordings", so we have
access mostly to menus to manipulate event markers, run cleaning procedures and import data

blocks. You can recognize a few operations that we executed in the previous tutorials: "Event >
Read from channel" and "Event > Detect analog triggers".

 When you select a process, a list of options specific to this process is shown in the window.
 To delete a process: Select it and press the Delete key, or use the [X] button in the toolbar.

 After selecting a first process, you can add another one. The output of the first process will be
passed to the second process without giving back the control to the user. This is how you can
build a full analysis pipeline with the interface.

 After adding a few processes, you can move a process up or down in the pipeline with the
[up arrow] and [down arrow] buttons in the toolbar. Click on a process in the pipeline to
edit its options.
 Select and delete a few processes to understand how this interface works. Just do not
click on RUN.

Plugin structure

All the menus available in the pipeline editor are actually plugins for Neurostorm. The processes
are functions that are independent from each other and automatically detected when starting
Neurostorm.

Any Matlab script that is added to the plugin folder (Neurostorm3/toolbox/process/functions/)


and has the right format will automatically be detected and made available in the GUI. This
mechanism makes it easy for external contributors to develop their own code and integrate it in
the interface.

More information: How to write your own process

To see where the function corresponding to a process is on the hard drive: select the process in
the pipeline editor, then leave your mouse for a few seconds over its title.
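To give an idea of what such a plugin looks like, here is a hedged sketch of the general shape of a process function. The sub-function names, the description fields and the dispatching line are assumptions for illustration only; follow the template described in the "How to write your own process" page for the actual requirements.

    % Hypothetical skeleton, saved for example as Neurostorm3/toolbox/process/functions/process_mytest.m
    function varargout = process_mytest(varargin)
        % Simplistic dispatcher, for illustration only: call the sub-function named in the first argument
        varargout{1} = feval(varargin{1}, varargin{2:end});
    end

    function sProcess = GetDescription()
        % Description used by the pipeline editor (field names are assumptions)
        sProcess.Comment     = 'Example: my test process';   % label displayed in the process menu
        sProcess.InputTypes  = {'raw', 'data'};               % file types the process accepts
        sProcess.OutputTypes = {'raw', 'data'};
    end

    function OutputFiles = Run(sProcess, sInputs)
        % Operate on the selected files here; return the files to keep in the database tree
        OutputFiles = {sInputs.FileName};
    end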

Note for beginners


Everything below is advanced documentation, you can skip it for now.

Advanced

Search Database
Sometimes when working with huge protocols, you can get lost in the size of your database tree. While filtering from the process box, as introduced in the previous section, is one way to select the files you are looking for, we have introduced a more straightforward approach to search for files in your database. At the right, below the protocol selection dropdown, you can click on the magnifying glass to open the search dialog.

From there, you can create a new search query from the GUI, or type / paste an existing search
query string (see the following section for more details). Let's select "New Search" to create a
new query from the GUI.

From this menu, you can create a search query to apply on your active protocol. It has different
options:

 Search by: The file metadata to use for the search.


o Name: Name of the file in Neurostorm
o File type: Type of the file, see dropdown when selected for possible values
o File path: Path of the file in the Neurostorm database folder
o Parent name: Name of any parent file in the database tree (e.g. Subject or Folder)
 Equality: Type of equality to apply.
o Contains: File metadata contains the entered value
o Contains (case): Same as contains, but case sensitive
o Equals: Exact equality, the file metadata is equal to the entered value
o Equals (case): Same as equals, but case sensitive
 Not: Whether to invert the selected equality, e.g. DOES NOT CONTAIN vs CONTAINS.
 Search for: The value to search for.
 Remove: To remove the search row if not needed anymore.

 + and: To add a search row with the AND boolean logic. If you have two rows A and B, the
returned files will match both searches A and B.

 + or: To add a search row with the OR boolean logic. If you have two rows A and B, the
returned files will match either search A or B.

In the above example, we are looking for raw files (File type = Raw data) whose parent name
contains the word "noise". This allows us to search for raw noise recordings.

Notice that you now have multiple tabs in your Neurostorm database. The "Database" tab contains all
files in your protocol, whereas the "noise" tab only contains the files that pass the search and their
parents. You can have multiple searches/tabs active so that you can easily create pipelines by dragging
and dropping different search results in the process box. Do keep in mind that if you drag and drop a
parent object in the process box (e.g. Subject01) with an active search, only files that pass the active
search will be processed by the pipeline.

Once a search is created, you can interact with it in different ways. You can right click on the tab
and Edit the search on the fly from the GUI, Copy the search to clipboard as a query string to use
it in a script, or Close the search.

You can also click on the magnifying glass when a search is active to get more options such as
Saving the search for later use and Generating a process call to apply this search in a script.


If you click Generate process call, a line of script will be generated for you to use your search
query as a process in a script. It will also be copied to clipboard.

Notice that your search was converted to a query string:

 ([parent CONTAINS "noise"] AND [type EQUALS "RawData"])

This advanced query syntax is described in the following section.

Advanced search queries

For advanced users, you can write more complex search queries that can combine multiple
keywords and types of keywords using boolean logic. You can do this using the Neurostorm
search GUI and then copy your search as text to re-use later. These queries work for both
database searches and process filters. The syntax is rigid such that the order of the commands is
important, so we recommend you use the search GUI whenever possible to avoid errors. Search
queries can contain the following types of elements:

 Search parameters: These are simple searches that are on a specific type of value. They need
to be written in [square brackets]. They look like the following:

o [searchFor EQUALITY NOT "value"]
o SearchFor: Which field of the file's metadata to search. It can have the following
values, in lower case:

 Name: Searches using the file name in Neurostorm


 Type: Searches using the file type in Neurostorm
 Path: Searches using the file path in the Neurostorm database folder
 Parent: Searches using the parent's name in the Neurostorm database tree
o Equality: The type of equality you want to use to compare the file value to the searched
value. It can have the following values, in upper case:

 CONTAINS: Whether the searchFor field contains the text "value"


 CONTAINS_CASE: Same as CONTAINS, but case sensitive
 EQUALS: Whether the searchFor field exactly equals the text "value"
 EQUALS_CASE: Same as EQUALS, but case sensitive
o NOT: (optional) add this reserved keyword to return the opposite results of the search,
so for example, all files that do NOT CONTAIN the text "value".

o "value": the text you want to search for, in double quotes.

 Boolean operators: These are used to group together search parameters and search blocks
using boolean logic. Considering search parameters a, b and c, the following will return files that
pass both searches a and b, or do not pass search c:

o (a AND b) OR NOT c
o AND: This combines search parameters and blocks such that both conditions have to
be met.

o OR: This combines search parameters and blocks such that either condition has to be
met.

o NOT: This precedes a search block or parameter such that the condition result is
reversed. So if a condition had to be met, it now has to not be met.

o Important note: AND and OR operators cannot be mixed together (you cannot have
both in the same search block), because this would create ambiguities.

 Search blocks: These are combinations of search parameters and boolean operators, wrapped
in (round brackets). You cannot have different boolean operators in the same block.

Example

(([name CONTAINS "test1"] AND [type EQUALS "Matrix"]) OR NOT [parent CONTAINS
"test2"])

Effect: This will match all matrix files whose name contains the text "test1", plus all files whose
parent does not contain the text "test2".
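If you need to apply such a query from a script, the Copy/Generate process call menus described above give you the exact line to paste. As a rough sketch of what such a call looks like (the process name and option name below are assumptions; rely on the generated call for the real ones):

% Hypothetical sketch: apply a search query as a file-selection step in a script
sFiles = bst_process('CallProcess', 'process_select_search', sFiles, [], ...
    'search', '(([name CONTAINS "test1"] AND [type EQUALS "Matrix"]) OR NOT [parent CONTAINS "test2"])');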

Limitations of the GUI

The GUI does not support multiple nested search blocks. It only allows for one OR block
followed by one AND block. If your query is more advanced than this, you will not be able to
edit it with the search GUI. We recommend you use the process filter box instead.

Advanced

Saving a pipeline
After preparing your analysis pipeline by listing all the operations to run on your input files, you
can either click on the [Run] button, or save/export your pipeline. The last button in the
toolbar offers a list of menus to save, load and export the pipelines.

 Load: List of pipelines that are saved in the user preferences on this computer.
 Load from .mat file: Import a pipeline from a pipeline_...mat file.
 Save: Save the pipeline in the user preferences.
 Save as .mat matrix: Exports the pipeline as a Matlab structure in a .mat file. Allows different
users to exchange their analysis pipelines, or a single user to transfer them between computers.

 Generate .m script: This option generates a Matlab script.


 Delete: Remove a pipeline that is saved in the user preferences.
 Reset options: Neurostorm automatically saves the options of all the processes in the user
preferences. This menu removes all the saved options and sets them back to the default values.

Advanced

Automatic script generation


Here is the type of Matlab script that is generated for such a pipeline.

Reading this script is easy: input files at the top, one block per process, one line per option. You
can also modify these scripts to add personal code, loops or tests. Many features are still missing in the
pipeline editor, but the generated scripts are simple enough for users with basic Matlab knowledge
to edit and improve.
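As a rough illustration of this structure (the file paths below are placeholders and the process call is only a commented template, not the actual content generated for your pipeline):

% Input files: the files that were selected in the Process1 box
sFiles = {...
    'Subject01/Run01/data_0raw_run01.mat', ...   % placeholder database paths
    'Subject01/Run02/data_0raw_run02.mat'};

% Start a new report
bst_report('Start', sFiles);

% One bst_process('CallProcess', ...) block per process, one line per option:
% sFiles = bst_process('CallProcess', 'process_...', sFiles, [], ...
%     'option1', value1, ...
%     'option2', value2);

% Save and display the execution report
ReportFile = bst_report('Save', sFiles);
bst_report('Open', ReportFile);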

Running this script from Matlab or clicking on the [Run] button of the pipeline editor produces
exactly the same results. In both cases there is no interaction with the script during the execution:
it can run without any direct supervision. You simply get a report at the end that describes
everything that happened during the execution.

These scripts cannot be reloaded in the pipeline editor window after being generated. If you work
on a long analysis pipeline, save it in your user preferences before generating the corresponding
Matlab script.

Advanced

Process: Select files with tag


Since we are discussing the file selection and the pipeline execution, we can explore a few more
available options. We have seen how to filter the files in the Process1 box using the Filter search box.
We can get to the exact same result by using the process File > Select files: By tag before the
process you want to execute, to keep only a subset of the files that were placed in the Process1 list.

It is less convenient in interactive mode because you don't immediately see the effect of your file
filter, but it can be very useful when writing scripts. You can also combine search constraints by
adding the same process multiple times in your pipeline, which is not possible with the search
box.

 Make sure you still have the three datasets selected in the Process1 list.
 Select the process: File > Select files: By tag

 Select the options: Search: "Noise", Search the file names, Select only the files with the tag.
 Click on [Run] to execute the process.

 This process is useless if not followed immediately by another process that does something with
the selected files. It does nothing but select the files, but we can verify with the report viewer
that the operation was actually executed.
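In a script, this file-selection step typically appears as one process call placed just before the next process. A hedged sketch (the option names below are assumptions; generate the script from your own pipeline to get the exact call):

% Keep only the files whose name contains the tag "Noise" (names assumed)
sFiles = bst_process('CallProcess', 'process_select_tag', sFiles, [], ...
    'tag',    'Noise', ...
    'search', 1, ...    % search the file names
    'select', 1);       % select only the files with the tag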

Advanced

Report viewer
Every time the pipeline editor is used to run a list of processes, a report is created and logs all the
messages that are generated during the execution. These reports are saved in the user home
folder: $HOME/.Neurostorm/reports/.

The report viewer shows, as an HTML page, some of the information saved in this report
structure: the date and duration of execution, the list of processes, and the input and output files.
It reports all the warnings and errors that occurred during the execution.

The report is displayed at the end of the execution only if more than one process was
executed, or if an error or a warning was reported. Otherwise, nothing is displayed.

You can always explicitly open the report viewer to show the last reports: File > Report viewer.

When running processes manually from a script, the calls to bst_report explicitly indicate when the
logging of the events should start and stop.

You can add images to the reports for quality control using the process File > Save snapshot, and
send the final reports by email with the process File > Send report by email.

With the buttons in the toolbar, you can go back to the previous reports saved from the same
protocol.

More information: Scripting tutorial

Advanced

Error management
 Select the same files and same process: File > Select files: By tag

 Note that the options you used during the previous call are now selected by default.
 Instead of "Noise", now search for a string that doesn't exist in the file name, such as "XXXX".

 Click on [Run] to execute the process. You will get the following error.

 If you open the report viewer, it should look like this.

Advanced

Control the output file names


If you run two processes with different parameters that produce exactly the same file paths
and file names, you would not be able to tell the outputs apart with this selection process. To avoid
this, immediately after calling any process you can add the process File > Add tag to tag one specific
set of files, so that you can easily re-select them later.

Example: If you run the time-frequency decomposition twice on the same files with different
options, tag each set of output files with a different tag right after computing them.


Tutorial 10: Power spectrum and frequency filters


Authors: Hossein Shahabi, Francois Tadel, Elizabeth Bock, John C Mosher, Richard Leahy,
Sylvain Baillet

We are now going to process our continuous recordings to remove the main sources of noise.
Typically, we can expect contaminations coming from the environment (power lines, stimulation
equipment, building vibrations) and from the subject (movements, blinks, heartbeats, breathing,
teeth clenching, muscle tension, metal in the mouth or the body). In this tutorial, we will focus
first on the noise patterns that occur continuously, at specific frequencies.

We can correct for these artifacts using frequency filters. Usually we prefer to run these notch and
band-pass filters before any other type of correction, on the continuous files. They can be applied
to the recordings without much supervision, but they may create important artifacts at the beginning
and the end of the signals. Processing the entire continuous recordings at once instead of the imported
epochs avoids adding these edge effects to all the trials.

Contents

1. Evaluation of the noise level


2. Interpretation of the PSD
3. Elekta-Neuromag and EEG users
4. Apply a notch filter
5. Evaluation of the filter
6. Some cleaning
7. Note for beginners
8. What filters to apply?

9. When to apply these filters?
10. Filter specifications: Low-pass, high-pass, band-pass
11. Filter specifications: Notch
12. Filter specifications: Band-stop
13. On the hard drive
14. Additional documentation

Evaluation of the noise level


Before running any type of cleaning procedure on MEG/EEG recordings, we always recommend
starting with a quick evaluation of the noise level. An easy way to do this is to estimate the power
spectrum of all the signals over the entire recordings.

 Clear the list of files in the Process1 tab.


 Select the three datasets we have linked to our protocol.
You can select the three "link to raw file" nodes, the three folders or the entire subject node.

 Click on [Run] to open the pipeline editor window.


 Select the process "Frequency > Power spectrum density (Welch)"

 This process evaluates the power of the MEG/EEG signals at different frequencies, using
Welch's method (see Wikipedia or MathWorks). It splits the signals into overlapping windows of a
given length, calculates the Fourier transform (FFT) of each of these short segments, and
averages the power of the FFT coefficients over all the overlapping windows (a minimal
illustration of this computation is sketched at the end of this list).

 Set the options as follows (click on [Edit] for the additional options):
o Time window: [All file]
Portion of the input file you want to use for estimating the spectrum.
It is common to observe huge artifacts at the beginning or the end of the recordings, in
this case you should exclude these segments from the calculation of the PSD.
In practice, using just the first 100s or 200s of the file can give you a good enough
impression of the quality of the recordings.

o Window length: 4 seconds


Estimator length = length of the overlapping time windows for which we calculate the
FFT. The number of time samples in the estimator is the same as the number of
frequency bins in the output file. Increasing this parameter improves the output
frequency resolution (it reduces the distance between two frequency bins) but degrades the stability
of the estimator, as it also decreases the total number of averaged time windows. A
Hamming window is applied to each estimator window before the computation of the
FFT. See forum post: Effect of window length on the PSD

o Overlap: 50%
How much overlap do we want between two consecutive time windows.

o Units: Physical: U^2/Hz


Scaling of the spectrum. This only affects the values on the Y axis of the spectrum.
Physical units should be used in most cases.

"Normalized" gives normalized frequencies from 0 to 2pi (Hz·s).
"Before Nov 2020" reproduces the older Neurostorm spectrum scaling (see this forum
post).

o Sensor types or names: MEG


Defines the list of channels (names or types) on which you want to apply the process.

o Frequency definition: Matlab's FFT default


You have the option to directly use the frequency binning returned by the FFT, or run an
additional step of averaging these bins in larger frequency bands. Note that you can
freely edit these frequency bands.

o Output: Save individual PSD value.


This option will separately estimate the PSD for each of the three files in input, and
create three files in output. If you select the other option (save average), it calculates
the same three files but averages them on the fly and saves only one file in the
database.

o Implementation details: See function Neurostorm3/toolbox/timefreq/bst_psd.m

 Click on [Run] to start the execution of the process.


 Troubleshooting: If you get "Out of memory" errors, try to run this PSD estimation on a
shorter time segment. For example, set the time window to [0,100s] instead of the full file. This
process starts by loading all the needed recordings in memory, you might not have enough
memory available on your system to fit the entire dataset.

 It produces three new files, which appear as dependents of the three datasets. The comments of
the files indicate how many overlapping windows could be used to estimate the PSD:
"179/4000ms" means 179 windows of 4s each (716s in total). With the 50% overlap, this sums up to
a little less than 2x the file length (360s).
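To make the method more concrete, here is a minimal stand-alone sketch of Welch's estimator on a synthetic signal. It only illustrates the principle described above (windowing, FFT, averaging); it is not the Neurostorm implementation, which lives in bst_psd.m:

% Welch's method, bare-bones illustration (not the bst_psd code)
sfreq  = 600;                                   % sampling frequency (Hz)
t      = 0:1/sfreq:360;                         % 360 s of signal
x      = sin(2*pi*10*t) + 0.5*randn(size(t));   % 10 Hz oscillation + noise
winlen = round(4 * sfreq);                      % 4-second estimator windows
step   = round(winlen / 2);                     % 50% overlap
nfft   = 2^nextpow2(winlen);                    % FFT length: next power of 2
w      = 0.54 - 0.46*cos(2*pi*(0:winlen-1)/(winlen-1));   % Hamming window
nWin   = floor((length(x) - winlen) / step) + 1;          % = 179 windows here
psd    = zeros(1, nfft/2+1);
for i = 1:nWin
    seg = x((i-1)*step + (1:winlen)) .* w;      % windowed segment
    X   = fft(seg, nfft);                       % Fourier transform
    P   = abs(X(1:nfft/2+1)).^2 / (sfreq*sum(w.^2));   % one-sided power (scaling simplified)
    psd = psd + P / nWin;                       % average over all the windows
end
freqs = (0:nfft/2) * sfreq / nfft;              % bins spaced by sfreq/nfft = 0.1465 Hz
figure; plot(freqs, 10*log10(psd));
xlabel('Frequency (Hz)'); ylabel('Power (dB)');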

Interpretation of the PSD

File: AEF#01

 Double-click on the new PSD file for run #01 to display it (or right-click > Power spectrum).

 The power spectrum is displayed in a figure similar to the time series, but the X axis
represents the frequencies. Most of the shortcuts described for the recordings are also
valid here. Clicking on the white parts of the figure or using the arrow keys of the
keyboard moves the frequency cursor. The current frequency can also be controlled with a
new slider displayed in the Neurostorm window, just below the time panel.
 Each black line represents the power spectrum of one channel. If you click on a channel, it gets
highlighted in red. Click again to deselect it. Right-click on a selected channel to read its name.

 The frequency resolution of this power spectrum, i.e. the distance between two frequency bins
represented in this figure, is about 0.15Hz. This resolution depends on the length of the
estimator window you used. The FFT is computed on the number of time samples per window
(4s*600Hz), rounded up to the next power of 2 (nextpow2), and represents the spectrum of the
file up to the Nyquist frequency (300Hz for a 600Hz sampling rate).
Frequency resolution = sampling_freq / 2^nextpow2(estimator_length*sampling_freq) =
600 / 4096 = 0.1465 Hz

 The shape of this graph is normal; it does not indicate anything unexpected:
o Peaks related to the subject's alpha rhythms: around 10Hz and 20Hz.

o Peaks related to the power lines: 60Hz, 120Hz and 180Hz.
These datasets were recorded in Canada, where the alternating powerline current is

delivered at 60Hz. In Europe you would observe similar peaks at 50Hz, 100Hz and
150Hz.

 Add a topography view for the same file, with one of the two methods below:
o Right-click on the PSD file > 2D Sensor cap.

o Right-click on the spectrum figure > 2D Sensor cap (shortcut: Ctrl+T)

 Scroll in frequencies to see the spatial distribution of each frequency bin:

 We have already identified two artifacts we will need to remove: the eye movements and
the 60Hz+harmonics from the power lines.

File: AEF#02

 Open the spectrum view for the run AEF#02.


To view the signal units instead of dB, select Display Tab > Measure > Power. Then from
the display options icon on the right of the figure, select Amplitude > Log scale

 Add a 2D sensor cap view for the same file. Scroll to zoom in/out.
To display the sensor names, right-click on the figure > Channels > Display sensors.

 This spectrum looks very similar to the run #01: same alpha and power line peaks.
 Additionally, we observe higher signal power between 30Hz and 100Hz on many occipital
sensors. This is probably related to some tension in the neck muscles due to an
uncomfortable seating position in the MEG. We will see later whether these channels need to be
tagged as bad or not.

File: Noise recordings

 Open the spectrum view for the noise recordings.

 This shows the power spectrum of the signals that are recorded when there is no subject
in the MEG room. It gives a good and simple representation of the instrumental noise. If

you had one bad MEG sensor, you would see it immediately in this graph. Here
everything looks good.

X Log-scale

 One option is worth mentioning when displaying power spectra: the logarithmic scale for the X
axis. In the display options for the PSD figure, select Frequency > Log scale. It is sometimes better
adapted to represent this type of data than a linear scale (especially with higher sampling
frequencies).

Elekta-Neuromag and EEG users


The Elekta-Neuromag MEG systems combine different types of sensors with very different
amplitude ranges, therefore you would not observe the same types of figures. The same goes for
EEG users: this might not look like what you observe on your own recordings.

For now, keep on following these tutorials with the example dataset to learn how to use all the
Neurostorm basic features. Once you're done, read additional tutorials in the section "Other analysis
scenarios" to learn about the specificities related with your own acquisition system.

Apply a notch filter


This filter was updated in 2019. In the new configuration, the user can define the 3-dB
bandwidth of the filter. Note that a smaller bandwidth means a sharper filter, which in
some cases can make the filter unstable. If you want to reproduce the old filter, check the
box "Use old filter implementation". The 3-dB bandwidth is not applicable to the old
configuration.

For illustration purposes, we will now run a frequency filter to remove the 60Hz+harmonics
from the continuous files. Notch filters are adapted for removing well identified contaminations
from systems oscillating at very stable frequencies.

 Keep all the three datasets selected in the Process1 box.
Remember to always apply the same filters on the subject recordings and the noise recordings.

 Click on [Run] to open the Pipeline editor.


 Run the process: Pre-process > Notch filter (a scripted equivalent is sketched at the end of this list)

o Process the entire file at once: NO

o Sensor types or names: MEG

o Frequencies to remove: 60, 120, 180 Hz

o 3-dB notch bandwidth: 2 Hz

o The higher harmonics (240Hz) are not clearly visible, and too high to bother us in
this analysis.
 This process creates three new datasets, with additional "notch" tags. These output files are
saved directly in the Neurostorm database, in a binary format specific to Neurostorm (.bst).

 If you delete the folders corresponding to the original files (before the filter), your
original recordings in the .ds folders are not altered. If you delete the folders
corresponding to the filtered files, you will lose your filtered recordings in .bst format.
 To check where the file corresponding to a "Link to raw file" is actually saved on the hard drive,
right-click on it > File > Show in file explorer.

 Important: This is an optional processing step. Whether you need this on your own recordings
depends on the analysis you are planning to run on the recordings (see advanced sections
below).
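For scripting, the same notch-filter step can be written as a single process call. The option names below are assumptions (the pipeline editor's "Generate .m script" menu gives the exact ones):

% Hypothetical sketch of the notch filter step in a script
sFiles = bst_process('CallProcess', 'process_notch', sFiles, [], ...
    'sensortypes', 'MEG', ...
    'freqlist',    [60, 120, 180], ...   % frequencies to remove (Hz)
    'cutoffW',     2, ...                % 3-dB notch bandwidth (Hz), assumed option name
    'useold',      0);                   % do not use the old filter implementation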

Evaluation of the filter


 Right-click on the Process1 list > Clear list.

 Drag and drop the three filtered files in Process1.

 Run again the PSD process "Frequency > Power spectrum density (Welch)" on the new
files, with the same parameters as before, to evaluate the quality of the correction.

 Double-click on the new PSD files to open them.

 Scroll to zoom in and observe what is happening around 60Hz (before / after).

 See below an example of how this filter can affect the time series: top=before, bottom=after.
We show the reference sensor BR2 because it shows a lot more 60Hz than any MEG sensor
(sensor type "MEG REF"), ie. oscillations with a period of 16.7ms.
Note the edge effect at the beginning of the signal: the signals below are 1.5s long, the notch
filter at 60Hz is visibly not performing well during the first 500ms (blue window).

 If you look in the list of events, you will see a new category "transient_notch". This
corresponds to the time period during which we can expect significant edge effects due to the
filtering. Neurostorm doesn't mark these blocks as bad by default; you would have to do it
manually - you will see how to do this in one of the following tutorials. In the case of this
dataset, the transient duration is much shorter than the delay before the first stimulation, so it is
not relevant in our processing pipeline. See the advanced sections below for more details about
the estimation of this transient duration.

Some cleaning
To avoid any confusion later, delete the links to the original files:

 Select the folders containing the original files and press Delete (or right-click > File > Delete).

 Always read the confirmation messages carefully, you will avoid some bad surprises.

 This is what your database explorer should look like at the end of this tutorial:

Note for beginners


Everything below is advanced documentation, you can skip it for now.

Advanced

What filters to apply?


The frequency filters you should apply depend on the noise present in your recordings, but also
on the type of analysis you are planning to use them for. This section provides some general
recommendations.

High-pass filter

 Purpose: Remove the low frequencies from the signals. Typically used for:
o Removing the arbitrary DC offset and slow drifts of MEG sensors (< 0.2Hz),

o Removing the artifacts occurring at low frequencies (< 1Hz, e.g. breathing or eye
movements).

 Warnings:

o Edge effects: Transient effects you should discard at the start and end of each
filtered signal because the filtering window extends into time periods outside
those for which you have data.
o Avoid using on epoched data: You need long segments of recordings to run a
high-pass filter.
o Be careful with the frequency you choose if you are studying cognitive processes
that may include sustained activity in some brain regions (eg. n-back memory
task).

Low-pass filter

 Purpose: Remove the high frequencies from the signals. Typically used for:
o If the components of interest are below, for example, 40Hz, you may discard the faster
components in the signal by applying a low-pass filter with a cutoff frequency around
40Hz.
o Removing strong noise occurring at high frequencies (eg. muscle contractions,
stimulators).
o Display averages: In an event-related design, you will import and average multiple trials.
You may low-pass filter these averages for display and interpretation purposes.
o Statistics: In an event-related study with multiple subjects, the latency of the brain
response of interest may vary between subjects. Smoothing the subject-level averages
before computing a statistic across subjects may help reveal the effect of interest.
 Warnings:

o Edge effects: Transient effects you should discard at the start and end of each
filtered signal because the filtering window extends into time periods outside
those for which you have data.

o It is always better to filter continuous (non-epoched) data when possible.
o When filtering averages: Import longer epochs, average them, filter, then remove
the beginning and the end of the average to keep only the signals that could be
filtered properly.

Band-pass filter

 Purpose: A band-pass filter is the combination of a low-pass filter and a high-pass filter, it
removes all the frequencies outside of the frequency band of interest.

 Warnings: The same considerations and warnings as for high and low pass filtering apply here.

Notch filter

 Purpose: Remove a sinusoidal signal at a specific frequency (power lines noise, head tracking
coils).

 Warnings:

o Use only if needed: It is not always recommended to remove the 50-60Hz power
lines peaks. If you don't have a clear reason to think that these frequencies will
cause a problem in your analysis, you don't need to filter them out.
o In an ERP analysis, the averaging of multiple trials will get rid of the 50-60Hz
power line oscillations because they are not time-locked to the stimulus.
o If you are using a low-pass filter, do not apply a notch filter at a higher
frequency (it would be useless).
 Alternatives: If the notch filter is not giving satisfying results, you have two other options.
o Band-stop filter: Similar to the notch filter, but more aggressive on the data.
Useful for removing larger segments of the spectrum, in case the power line peaks are
spread over numerous frequency bins or for suppressing other types of artifacts.

o Sinusoid removal: This process can do a better job at removing precise frequencies by
identifying the sinusoidal components and then subtracting them from the signals in the
time domain. This is not a frequency filter and works best on short segments of
recordings.
Run it on the imported epochs rather than on the continuous files.

When to apply these filters?


 Continuous files: Frequency filters used for pre-processing purposes should be applied before
epoching the recordings. In general, filters will introduce transient effects at the beginning and
the end of each time series, which make these parts of the data unreliable and they should be
discarded. If possible, it is safer and more efficient to filter the entire recordings from the
original continuous file at once.

 Before SSP/ICA cleaning: Artifact cleaning with SSP/ICA projectors requires all the channels
of data to be loaded in memory at the same time. Applying a frequency filter on a file that
contains projectors requires the whole file to be loaded and processed at once, which may cause
memory issues. Pre-processing filters are rarely changed, whereas you may want to redo the
SSP/ICA cleaning multiple times. Therefore it is more convenient to apply the filters first.

 Imported epochs: Filtering epochs after importing them in the database is possible but
requires extra attention: you may need to import longer epochs to be able to deal with the edge
effects.

 After averaging: You may low-pass filter the averaged epochs for display or statistical
purposes but again be aware of the edge effects.

 Empty room measurements: In principle, all the filters that are applied to the experimental
data also need to be applied, with the same settings, to the noise recordings. In the source
estimation process, we will need all the files to have similar levels of noise, especially for the
calculation of the noise covariance matrix. This applies in particular when some channels are
noisy.

 Think first: Never apply a frequency filter without a clear reason (artifacts, predefined
frequency ranges of interest, etc.) and without keeping the side effects under control. Avoid
when possible.

Filter specifications: Low-pass, high-pass, band-pass


 Process: Pre-process > Band-pass filter

 Process options:

o Lower cutoff frequency: Defines a high-pass filter (enter 0 for a low-pass filter)
o Upper cutoff frequency: Defines a low-pass filter (enter 0 for a high-pass filter)
o Stopband attenuation: The higher the attenuation, the higher the performance of the
filter, but the longer the transient duration. Use the default (60dB) unless you need shorter
edge effects.

o Use old filter: For replicating results obtained with older versions of Neurostorm.
o View filter response: Click on this button to display the impulse and frequency
response of your filter, and confirm that the responses appear reasonable.

 Filter design:
o Description: Even-order linear phase FIR filter, based on a Kaiser window design. The
order N is estimated using Matlab's kaiserord function and the filter generated with fir1.
Because the filters are linear phase, we can (and do) compensate for the filter delay by
shifting the sequence backward in time by M=N/2 samples. This effectively makes the
filters zero-phase and zero-delay.

o Ripple and attenuation: The allowed ripple in pass and attenuation in stop band are
set by default to 10^(-3) and 60dB respectively (note that with Kaiser window design,
errors in pass and stopband will always be equal). Transitions between pass and
stopbands are set to 15 percent of the upper and lower passband edges. However,
when the lower edge of the passband is 5 Hz or lower, we set the transition width to 50
percent of the lower passband edge.

o Filtering function: The FIR bandpass filter can be performed in frequency domain
(fftfilt function) or in time domain (filter function). The two approaches give the same
results, but they have different execution times depending on the filter order. The time-
domain filtering is faster for low-order filters and much slower for high-order filters. The
process automatically selects which approach to use.

 Edge effects:
o Transient (full): With any filtering operation there will always be a transient effect at
the beginning of the filtered data. For our filter, this effect will last for half of the filter
order: M=N/2 samples. We strongly recommend that your data records are sufficiently
long that you can discard these M=N/2 samples. Because we are using zero-phase
filtering, there is a similar N/2 effect at the end of the sampled data – these samples
should also be discarded.

o Transient (99% energy): For some filters, the full transient window might be longer
than your epochs. However, most of the energy is carried by the beginning of the filter,
and you can obtain amplitudes acceptable for most analysis after a fraction of this full
window. For this reason we also mention a much shorter window in the documentation
of the filter, which corresponds to the duration needed to obtain 99% of the total
energy in the impulse response. This duration corresponds to the "transient" event

markers that are added to the recordings when applying filters.

o Adjust the parameters: If possible, always discard the full transient window. If the
edge effect affects too much of your data, adjust the filter parameters to reduce filter
order (increase the lower cut-off frequency or reduce the stopband attenuation). If you
cannot get any acceptable compromise, you can consider discarding shorter transient
windows, but never go below this "99% energy" window.

o Mirroring: We included an option to mirror the data at the start and end of the record
instead of padding the signal with zeros. This will reduce the apparent N/2 transients at
the start and end of your data record, but you should be aware that these samples are
still unreliable and we do not recommend using them.

o [TODO] Check this "99% energy" criteria in the case of high-pass filters, it does not
seem very useful...

 Additional recommendations:
o Filter order: The key issue to be aware of when filtering is that the specification you
choose for your filter will determine the length of the impulse response (or the filter
order) which in turn will affect the fraction of your data that fall into the "edge" region.
The most likely factor that will contribute to a high filter order and large edge effects is if
you specify a very low frequency at the lower edge of the passband (i.e. the high pass
cut-off frequency).

o Detrending: If your goal is to remove the DC signal we recommend you first try
detrending the data (removes average and best linear fit) to see if this is sufficient. If
you still need to remove other low frequency components, then pick the highest cut-off
frequency that will fit your needs.

o Design optimization: If you are performing bandpass filtering and are not satisfied
with the results, you can investigate filtering your data twice, once with a low pass filter
and once with a high pass filter. The advantage of this is that you can now separately
control the transition widths and stop band attenuation of the two filters. When
designing a single BPF using the Kaiser (or any other) window, the maximum deviation
from the desired response will be equal in all bands, and the transition widths will be
equal at the lower and upper edges of the pass band. By instead using a LPF and a HPF
you can optimize each of these processes separately using our filtering function.

 Linear phase, no distortion, zero delay: As described earlier, FIR filters have a linear phase
in the frequency domain. This means that all samples of the input signal have the same delay in
the output. This delay is compensated after filtering. Consequently, no distortion happens
during the filtering process. To illustrate this property, we considered a chirp signal in which the
oscillation frequency grows linearly. The signal is band-pass filtered in two frequency ranges.
The following plot represents the original signal and its filtered versions with our proposed
filters. Results show that the input and output signals of this filter are completely aligned
without any delay or distortion.

 Function: Neurostorm3/toolbox/math/bst_bandpass_hfilter.m
 External call: process_bandpass('Compute', x, Fs, HighPass, LowPass, 'bst-hfilter', isMirror,
isRelax)

 Code:
function [x, FiltSpec, Messages] = bst_bandpass_hfilter(x, Fs, HighPass, LowPass, isMirror, isRelax, Function, TranBand, Method)
% BST_BANDPASS_HFILTER Linear phase FIR bandpass filter.
%
% USAGE: [x, FiltSpec, Messages] = bst_bandpass_hfilter(x, Fs, HighPass, LowPass, isMirror=0, isRelax=0, Function=[detect], TranBand=[], Method='bst-hfilter-2019')
%        [~, FiltSpec, Messages] = bst_bandpass_hfilter([], Fs, HighPass, LowPass, isMirror=0, isRelax=0, Function=[detect], TranBand=[], Method='bst-hfilter-2019')
%                              x = bst_bandpass_hfilter(x, Fs, FiltSpec)
%
% DESCRIPTION:
%    - A linear phase FIR filter is created.
%    - Function "kaiserord" and "kaiser" are used to set the necessary order for fir1.
%    - The transition band can be modified by user.
%    - Requires Signal Processing Toolbox for the following functions:
%      kaiserord, kaiser, fir1, fftfilt. If not, using Octave-based alternatives.
%
% INPUT:
%    - x        : [nChannels,nTime] input signal (empty to only get the filter specs)
%    - Fs       : Sampling frequency
%    - HighPass : Frequency below this value are filtered in Hz (set to 0 for low-pass filter only)
%    - LowPass  : Frequency above this value are filtered in Hz (set to 0 for high-pass filter only)
%    - isMirror : isMirror (default = 0 no mirroring)
%    - isRelax  : Change ripple and attenuation coefficients (default=0 no relaxation)
%    - Function : 'fftfilt', filtering in frequency domain (default)
%                 'filter', filtering in time domain
%                 If not specified, detects automatically the fastest option based on the filter order
%    - TranBand : Width of the transition band in Hz
%    - Method   : Version of the filter (2019/2016-18)
%
% OUTPUT:
%    - x        : Filtered signals
%    - FiltSpec : Filter specifications (coefficients, length, ...)
%    - Messages : Warning messages, if any

% @=============================================================================
% This function is part of the Neurostorm software:
% https://neuroimage.usc.edu/Neurostorm
%
% Copyright (c)2000-2020 University of Southern California & McGill University
% This software is distributed under the terms of the GNU General Public License
% as published by the Free Software Foundation. Further details on the GPLv3
% license can be found at http://www.gnu.org/copyleft/gpl.html.
%
% FOR RESEARCH PURPOSES ONLY. THE SOFTWARE IS PROVIDED "AS IS," AND THE
% UNIVERSITY OF SOUTHERN CALIFORNIA AND ITS COLLABORATORS DO NOT MAKE ANY
% WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF
% MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, NOR DO THEY ASSUME ANY
% LIABILITY OR RESPONSIBILITY FOR THE USE OF THIS SOFTWARE.
%
% For more information type "Neurostorm license" at command prompt.
% =============================================================================@
%
% Authors: Hossein Shahabi, Francois Tadel, John Mosher, Richard Leahy,
%          2016-2019


%% ===== PARSE INPUTS =====
% Filter is already computed
if (nargin == 3)
    FiltSpec = HighPass;
% Default filter options
else
    if (nargin < 9) || isempty(Method)
        Method = 'bst-hfilter-2019';
    end
    if (nargin < 8) || isempty(TranBand)
        TranBand = [];
    end
    if (nargin < 7) || isempty(Function)
        Function = [];   % Auto-detection based on the filter order later in the code
    end
    if (nargin < 6) || isempty(isRelax)
        isRelax = 0;
    end
    if (nargin < 5) || isempty(isMirror)
        isMirror = 0;
    end
    FiltSpec = [];
end
Messages = [];


%% ===== CREATE FILTER =====
if isempty(FiltSpec)
    % ===== FILTER SPECIFICATIONS =====
    Nyquist = Fs/2;
    % High-pass filter
    if ~isempty(HighPass) && (HighPass ~= 0)
        f_highpass = HighPass / Nyquist;   % Change frequency from Hz to normalized scale (0-1)
        switch Method
            case 'bst-hfilter-2019'
                if isempty(TranBand) || TranBand==0
                    if (HighPass <= 5)
                        LwTranBand = .5;   %Hz
                    else
                        LwTranBand = 1;    %Hz
                    end
                    f_highstop = f_highpass - LwTranBand/Nyquist;
                else
                    f_highstop = max(0, HighPass - TranBand) / Nyquist;
                    % f_highstop = max(0.2, HighPass - TranBand) / Nyquist;
                    TranBand = (f_highpass - f_highstop)*Nyquist;   % Adjusted Transition band
                end
            case 'bst-hfilter-2016'
                % Default transition band
                if (HighPass <= 5)   % Relax the transition band if HighPass<5 Hz
                    f_highstop = .5 * f_highpass;
                else
                    f_highstop = .85 * f_highpass;
                end
        end
    else
        f_highpass = 0;
        f_highstop = 0;
        LwTranBand = 1;
    end
    % Low-pass filter
    if ~isempty(LowPass) && (LowPass ~= 0)
        f_lowpass = LowPass / Nyquist;
        switch Method
            case 'bst-hfilter-2019'
                if isempty(TranBand) || TranBand==0
                    UpTranBand = 1;
                    UpTranBand = min(UpTranBand, LwTranBand);
                    f_lowstop = f_lowpass + UpTranBand/Nyquist;
                else
                    f_lowstop = f_lowpass + TranBand/Nyquist;
                end
            case 'bst-hfilter-2016'
                % Default transition band
                if f_highpass==0   % If this is a low-pass filter
                    f_lowstop = 1.05 * f_lowpass;
                else
                    f_lowstop = 1.15 * f_lowpass;
                end
        end
    else
        f_lowpass = 0;
        f_lowstop = 0;
    end
    % If both high-pass and low-pass are zero
    if (f_highpass == 0) && (f_lowpass == 0)
        Messages = ['No frequency band in input.' 10];
        return;
    % Input frequencies are too high
    elseif (f_highpass >= 1) || (f_lowpass >= 1)
        Messages = sprintf('Cannot filter above %dHz.\n', Nyquist);
        return;
    end
    % Transition parameters
    if isRelax
        Ripple = 10^(-2);
        Atten  = 10^(-2);   % Equals 40db
    else
        Ripple = 10^(-3);   % pass band ripple
        Atten  = 10^(-3);   % Equals 60db
    end

    % ===== DESIGN FILTER =====
    % Build the general case first
    fcuts = [f_highstop, f_highpass, f_lowpass, f_lowstop];
    mags  = [0 1 0];               % filter magnitudes
    devs  = [Atten Ripple Atten];  % deviations
    % Now adjust for desired properties
    fcuts = max(0, fcuts);         % Can't go below zero
    fcuts = min(1-eps, fcuts);     % Can't go above or equal to 1

    % We have implicitly created a bandpass, but now adjust for desired filter
    if (f_lowpass == 0)   % User didn't want a lowpass
        fcuts(3:4) = [];
        mags(3) = [];
        devs(3) = [];
    end
    if (f_highpass == 0)  % User didn't want a highpass
        fcuts(1:2) = [];
        mags(1) = [];
        devs(1) = [];
    end

    % Generate FIR filter
    % Using Matlab's Signal Processing toolbox
    if bst_get('UseSigProcToolbox')
        [n,Wn,beta,ftype] = kaiserord(fcuts, mags, devs, 2);
        n = n + rem(n,2);   % ensure even order
        b = fir1(n, Wn, ftype, kaiser(n+1,beta), 'noscale');
    % Using Octave-based functions
    else
        [n,Wn,beta,ftype] = oc_kaiserord(fcuts, mags, devs, 2);
        n = n + rem(n,2);   % ensure even order
        b = oc_fir1(n, Wn, ftype, oc_kaiser(n+1,beta), 'noscale');
    end

    % Filtering function: Detect the fastest option, if not explicitely defined
    if isempty(Function)
        % The filter() function is a bit faster for low-order filters, but much slower for high-order filters
        if (n > 800)   % Empirical threshold
            Function = 'fftfilt';
        else
            Function = 'filter';
        end
    end

    % Compute the cumulative energy of the impulse response
    E = b((n/2)+1:end) .^ 2;
    E = cumsum(E);
    E = E ./ max(E);
    % Compute the effective transient: Number of samples necessary for having 99% of the impulse response energy
    [tmp, iE99] = min(abs(E - 0.99));

    % Output structure
    FiltSpec.b = b;
    FiltSpec.a = 1;
    FiltSpec.order = n;
    FiltSpec.transient = iE99 / Fs;           % Start up and end transients in seconds (Effective)
    % FiltSpec.transient_full = n / (2*Fs);   % Start up and end transients in seconds (Actual)
    FiltSpec.f_highpass = f_highpass;
    FiltSpec.f_lowpass  = f_lowpass;
    FiltSpec.fcuts = fcuts * Nyquist;         % Stop and pass bands in Hz (instead of normalized)
    FiltSpec.function = Function;
    FiltSpec.mirror = isMirror;
    % If empty input: just return the filter specs
    if isempty(x)
        return;
    end
end

%% ===== FILTER SIGNALS =====
% Transpose signal: [time,channels]
[nChan, nTime] = size(x);
% Half of filter length
M = FiltSpec.order / 2;
% If filter length > 10% of data length
edgePercent = 2*FiltSpec.transient / (nTime / Fs);
if (edgePercent > 0.1)
    Messages = [Messages, sprintf('Start up and end transients (%.2fs) represent %.1f%% of your data.\n', 2*FiltSpec.transient, 100*edgePercent)];
end

% Remove the mean of the data before filtering
xmean = mean(x,2);
x = bst_bsxfun(@minus, x, xmean);

% Mirroring requires the data to be longer than the filter
if (FiltSpec.mirror) && (nTime < M)
    Messages = [Messages, 'Warning: Data is too short for mirroring. Option is ignored...' 10];
    FiltSpec.mirror = 0;
end
% Mirror signals
if (FiltSpec.mirror)
    x = [fliplr(x(:,1:M)), x, fliplr(x(:,end-M+1:end))];
% Zero-padding
else
    x = [zeros(nChan,M), x, zeros(nChan,M)];
end

% Filter signals
switch (FiltSpec.function)
    case 'fftfilt'
        if bst_get('UseSigProcToolbox')
            x = fftfilt(FiltSpec.b, x')';
        else
            x = oc_fftfilt(FiltSpec.b, x')';
        end
    case 'filter'
        x = filter(FiltSpec.b, FiltSpec.a, x, [], 2);
end

% Remove extra data
x = x(:,2*M+1:end);
% Restore the mean of the signal (only if there is no high-pass filter)
if (FiltSpec.f_highpass == 0)
    x = bst_bsxfun(@plus, x, xmean);
end
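 Usage sketch (illustration only, assuming Neurostorm is on the Matlab path; the parameters are arbitrary examples): design and apply a zero-phase 1-40 Hz band-pass on random data, then inspect the filter specifications returned by the function above.

% Zero-phase FIR band-pass between 1 and 40 Hz on synthetic data
Fs = 600;                      % sampling frequency (Hz)
x  = randn(2, 60*Fs);          % [nChannels x nTime]: 2 channels, 60 s
[xFilt, FiltSpec, Messages] = bst_bandpass_hfilter(x, Fs, 1, 40);
disp(FiltSpec.order)           % filter order N
disp(FiltSpec.transient)       % 99%-energy transient duration, in seconds
% Discard at least FiltSpec.transient seconds at both ends of xFilt before any analysis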

Filter specifications: Notch


 Description: 2nd order IIR notch filter with zero-phase lag (implemented with filtfilt).
 Reference: Mitra, Sanjit Kumar, and Yonghong Kuo. Digital signal processing: a computer-based
approach. Vol. 2. New York: McGraw-Hill, 2006. MatlabCentral #292960

 Edge effects: It is computed based on the 99% energy of the estimated impulse response.
 Function: Neurostorm3/toolbox/process/functions/process_notch.m

 External call: [x, FiltSpec, Messages] = Compute(x, sfreq, FreqList, Method, bandWidth)

Filter specifications: Band-stop


 Description: 4th order Butterworth IIR filter with zero-phase lag (implemented with filtfilt)
 Reference: FieldTrip: x = ft_preproc_bandstopfilter(x, sfreq, FreqBand, [], 'but', 'twopass')
 Edge effects: It is computed based on the 99% energy of the estimated impulse response.
 Function: Neurostorm3/toolbox/process/functions/process_bandstop.m
 External call: x = process_bandstop('Compute', x, sfreq, FreqList, FreqWidth)

On the hard drive


The names of the files generated by the process "Power spectrum density" start with the tag
timefreq_psd; they share the same structure as all the files that include a frequency dimension.
To explore the contents of a PSD file created in this tutorial, right-click on it and use the popup menus
File > View file contents or File > Export to Matlab.

Structure of the time-frequency files: timefreq_psd_*.mat

 TF: [Nsignals x Ntime x Nfreq] Stores the spectrum information. Nsignals is the number of
channels that were selected with the option "MEG" in the PSD process. Nfreq is the number of
frequency bins. There is no time dimension (Ntime = 1).

 Comment: String used to represent the file in the database explorer.


 Time: Window of time over which the file was estimated.
 TimeBands: Defined only when you select the option "Group in time bands".
Always empty for the PSD files because there is no time dimension.

 Freqs: [1 x Nfreq] List of frequencies for which the power spectrum was estimated (in Hz).
 RefRowNames: Only used for connectivity results.
 RowNames: [Nsignals x 1] Describes the rows of the TF matrix (first dimension). Here it
corresponds to the names of the MEG sensors, in the same order as in the .TF field.

 Measure: Function currently applied to the FFT coefficients {power, none, magnitude, log,
other}

 Method: Function that was used to produce this file {psd, hilbert, morlet, corr, cohere, ...}
 DataFile: File from which this file was calculated = Parent file in the database explorer.
 DataType: Type of file from which this file was calculated (file type of .DataFile).

o 'data' = Recordings
o 'cluster' = Recordings grouped by clusters of sensors
o 'results' = Source activations
o 'scouts' = Time series associated with a region of interest
 SurfaceFile / GridLoc / GridAtlas / Atlas: Used only when the input file was a source file.
 Leff: Effective number of averages = Number of input files averaged to produce this file.
 Options: Most relevant options that were passed to the function bst_timefreq.
 History: Describes all the operations that were performed with Neurostorm on this file. To get
a better view of this piece of information, use the menu File > View file history.

Useful functions

 in_bst_timefreq(PsdFile): Read a PSD or time-frequency file.


 in_bst(FileName): Read any Neurostorm file.
 bst_process('LoadInputFile', FileName, Target): The most high-level function for reading
Neurostorm files. "Target" is a string with the list of sensor names or types to load (field
RowNames).

 bst_psd(F, sfreq, WinLength, WinOverlap): Computation of the Welch's power spectrum
density.
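For example, here is a sketch of reading one of the PSD files computed in this tutorial and plotting the spectrum of one sensor, using the fields and functions listed above (the file path and the sensor name are placeholders to adapt to your own database):

% Plot the PSD of one channel read from the database (placeholder names)
PsdMat = in_bst_timefreq('Subject01/Run01/timefreq_psd_placeholder.mat');
iChan  = find(strcmp(PsdMat.RowNames, 'MLT11'));   % pick one MEG sensor by name
figure;
plot(PsdMat.Freqs, 10*log10(squeeze(PsdMat.TF(iChan,1,:))));
xlabel('Frequency (Hz)'); ylabel('Power (dB)');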

Tutorial 11: Bad channels


Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

It is common during the acquisition to have a few sensors that are recording values that will not
be usable in the data analysis. In MEG, a sensor can be damaged or unstable. In EEG, the quality
of the connection between the electrode and the scalp is sometimes too low to record anything
interesting.

It is important to identify the sensors with poor signal quality at an early stage of the pre-
processing, because the efficiency of the artifact removal will depend on it. If you try to remove
blink and cardiac artifacts while keeping some bad sensors in the data, it may not work very well
and, worse, it may propagate the bad signals to all the channels.

This tutorial will explain the various ways we have to handle the bad channels. Note that the
recordings from this auditory experiment do not contain any bad sensors, therefore the entire
tutorial is optional. If you are not interested, you can skip it and will still be able to follow the
next tutorials.

Contents

1. Identifying bad channels


2. Selecting sensors
3. Marking bad channels
4. From the database explorer
5. Epoching and averaging
6. On the hard drive

Advanced

Identifying bad channels


Some bad channels are easy to detect: their signals look either completely off or totally flat
compared with the other surrounding sensors. Some others are more difficult to identify. The
examples below are taken from other datasets.

 The power spectrum density (PSD) is usually a good way to spot a few bad channels; this is why
we always recommend computing it for all the datasets:

 Simply looking at the signal traces, some channels may appear generally noisier than the
others:

 Looking at a 2D sensor topography, if one sensor shows very different values from its neighbors
for extended periods of time, you can doubt its quality:

Advanced

Selecting sensors
 Double-click on the recordings for run #01 to open the MEG sensors.
 Right-click on the time series figure > View topography (or press Ctrl+T).

 Right-click on the topography figure > Channels > Display sensors (or press Ctrl+E).

 If you can't see anything because the topography figure is too small, you can change the way the
figures are automatically arranged. In the top-right corner of the Neurostorm figure, select the
menu "Window layout options > Tiled".

 You can select one channel by clicking on its signal or on the dot representing it in the
topography figure. Note that the sensor selection is automatically reported to the other figure.

 You can select multiple sensors at the same time in the topography figure:
Right-click on the figure, then hold the mouse button and move the mouse.

 Select a few sensors, then right-click on one of the figures and check out the Channels menu:

o View selected: Show the time series of the selected sensors.


o Mark selected as bad: Remove the selected sensors from the display and from all further
computations.

o Mark non-selected as bad: Keep only the selected channels.

o Reset selection: Unselect all the selected sensors.
o Mark all channels as good: Brings back all the channels to display.
o Edit good/bad channels: Opens an interface that looks like the channel editor, but
with one extra column to edit the status (good or bad) of each channel.

Advanced

Marking bad channels


 Select a few channels, right-click > Channels > Mark selected as bad (or press the Delete
key).

 The selected channels disappear from the two views. In the time series figure, the signals are
not visible anymore; in the topography, the corresponding dots disappear and the values of the
magnetic fields around the missing sensors are re-interpolated based on what is left.

 With the time series figure, you can display the signals that have been tagged as bad.
In the Record tab, select the montage "Bad channels".
In this view, you cannot select the channels, they are not available anymore.

 Right-click on a figure > Channels > Edit good/bad channels.


This menu opens a window very similar to the Channel Editor window, with additional green and
red dots to indicate the status of each channel. Click on the dot to switch it to good or bad.

Advanced

From the database explorer


Many options to change the list of bad channels are available from the database explorer.

 The menus are available if you right-click one data file (or link to raw file). In this case, the
selected operation is applied only on the selected file.

 The same menus are also available for all the folders. In this case, the selected operation is
applied recursively to all the data files (and links to raw files) that are found in the folder.

 With this batching ability of the database explorer, you can quickly tag some bad channels in all
the recordings of a subject or for the entire protocol. You can also get a quick overview of all the
bad channels in all the files at once with the menu View all bad channels.

 Restore all the good channels before moving to the next tutorial. For instance, right-click on
the protocol folder TutorialIntroduction > Good/bad channels > Mark all channels as good.

Advanced

Epoching and averaging


The list of bad channels is saved separately for each dataset.

At this stage of the analysis, the database contains only links to continuous files. When you
import epochs from a continuous file, the list of bad channels will be copied from the raw file to
all the imported data files.

Then you will be able to redefine this list for each epoch individually, tagging more channels as
bad, or including back the ones that are ok. This way it is possible to exclude from the analysis
the channels that are too noisy in a few trials only, for instance because of some movement
artifacts.

When averaging, if an epoch contains one bad channel, this bad channel is excluded from the
average but all the other channels are kept. If the same channel is good in other trials, it will be
considered as good in the average. This means that not all the channels have the same number of
trials for calculating the average.

This may cause the different channels of an averaged file to have different signal-to-noise ratios,
which may lead to confusing results. However, we decided to implement the average in this way
to be able to keep more data in the studies with a low number of trials and a lot of noise.

Advanced

On the hard drive


The list of bad channels is saved for each data file separately, in the field ChannelFlag.
This vector indicates for each channel #i if it is good (ChannelFlag(i)= 1) or bad (ChannelFlag(i)= -1).

Right-click on a link to a continuous file > File > View file contents:

This information is duplicated in the sFile structure (field F) in order to be passed easily to the low-level reading functions. If you plan to modify the list of bad channels manually, you need to change two fields: mat.ChannelFlag and mat.F.channelflag.
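
If you need to script this, a minimal sketch is shown below; the variable DataFile is a hypothetical path to the data file, and the field names are the ones described above:

    % Minimal sketch: mark one channel as bad directly in the file (illustration only)
    DataFile = 'Subject01/Run01/data_0raw_run01.mat';   % hypothetical file name
    mat  = load(DataFile);
    iBad = 12;                        % example: mark channel #12 as bad
    mat.ChannelFlag(iBad)   = -1;     % -1 = bad, 1 = good
    mat.F.channelflag(iBad) = -1;     % keep the duplicated copy in sync
    save(DataFile, '-struct', 'mat');

In practice, prefer the interface or the database explorer menus described above, which keep the database in a consistent state.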

Tutorial 12: Artifact detection


Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

The previous tutorials illustrated how to remove noise patterns occurring continuously and at specific frequencies. However, most of the events that contaminate the MEG/EEG recordings are not persistent: they span a large frequency range or overlap with the frequencies of the brain signals of interest. Frequency filters are therefore not appropriate to correct for eye movements, breathing movements, heartbeats or other muscle activity.

For getting rid of reproducible artifacts, one popular approach is the Signal-Space Projection (SSP). This method is based on the spatial decomposition of the MEG/EEG recordings for a selection of time samples during which the artifact is present. Therefore we need to identify when each type of artifact occurs in the recordings. This tutorial shows how to automatically detect two well-defined artifacts: the blinks and the heartbeats.

Contents

1. Observation
2. Detection: Heartbeats
3. Detection: Blinks
4. Remove simultaneous blinks/heartbeats
5. Run #02: Running from a script
6. Artifacts classification

7. Detection: Custom events
8. In case of failure
9. Other detection processes
10. Additional documentation

Observation
Let's start by observing the type of contamination the blinks and heartbeats cause to the MEG
recordings.

 Run #01: Double-click on the link to show the MEG sensors.


 Configuration: Page of 3 seconds, view in columns, selection of the "CTF LT" sensors (the
left-temporal sensors will be a good example to show at the same time the two types of
artifacts).

 EOG: Right-click on the link > EOG > Display time series. Two channels are classified as EOG:
o VEOG: Vertical electrooculogram (two electrodes placed below and above one eye)
o HEOG: Horizontal electrooculogram (two electrodes placed on the temples of the
subject)

o On these traces, there is not much happening for most of the recordings except for a
few bumps. This subject is sitting very still and not blinking much. We can expect MEG
recordings of a very good quality.
 ECG: Right-click on the link > ECG > Display time series.
The electrocardiogram was recorded with a bipolar montage of electrodes across the chest. You
can recognize the typical shape of the electric activity of the heart (P, QRS and T waves).

 Find a blink: Scroll through the recordings using the F3 shortcut until you see a large blink.
o Remember you can change the amplitude scale with many shortcuts (eg. right-click +
move).
o To keep the scale fixed between two pages: Uncheck the button [AS] (auto-scale).

o For instance, you can observe a nice blink at 20.8s (red cursor in the screen capture
below).
o On the same page, you should be able to observe the contamination due to a few
heartbeats, corresponding to the peaks of the ECG signal (eg. 19.8s, shown as a blue
selection below).
 The additional data channels (ECG and EOG) contain precious information that we can use for the automatic detection of the blinks and heartbeats. We strongly recommend that you always record these signals during your own experiments; they help a lot with the data pre-processing.

Detection: Heartbeats
In the Record tab, select the menu: "Artifacts > Detect heartbeats".

 It automatically opens the pipeline editor, with the process "Detect heartbeats" selected.
 Channel name: Name of the channel that is used to perform the detection. Select or type
"ECG".

 Time window: Time range that the algorithm should scan for amplitude peaks. Leave the
default values to process the entire file, or check the option [All file].

 Event name: Name of the event group created for saving the detected events. Enter
"cardiac".

 Click on Run. After the process stops, you can see a new event category "cardiac". The 464 (approx.) heartbeats for 360s of recordings indicate an average heart rate of 77 bpm; everything looks normal.

 You can check a few of them, to make sure the "cardiac" markers really indicate the ECG peaks.
Not all peaks need to be detected, but you should have a minimum of 10-20 events marked for
removing the artifacts using SSP, described in the following tutorials.

Detection: Blinks
Now do the same thing for the blinks: Menu "Artifacts > Detect eye blinks".

 Channel name: VEOG

 Time window: All file

 Event name: Blink

 Run, then look quickly at the 15 detected blinks (shortcut: Shift+Right arrow).

Remove simultaneous blinks/heartbeats


We will use these event markers as the input to our SSP cleaning method. This technique works
well if each artifact is defined precisely and as independently as possible from the other artifacts.
This means that we should try to avoid having two different artifacts marked at the same time.

Because the heart beats every second or so, there is a high chance that when the subject blinks there is a heartbeat not too far away in the recordings. We cannot remove all the blinks that are contaminated with a heartbeat because we would have no data left. But we have a lot of heartbeats, so we can do the opposite: remove the "cardiac" markers that occur during a blink.

In the Record tab, select the menu "Artifacts > Remove simultaneous". Set the options:

 Remove events named: "cardiac"

 When too close to events: "blink"

 Minimum delay between events: 250ms

After executing this process, the number of "cardiac" events goes from 465 to 456. The deleted
heartbeats were all less than 250ms away from a blink.

Run #02: Running from a script
Let's perform the same detection operations on Run #02, using this time the Process1 box.

 Close everything with the [X] button at the top-right corner of the Neurostorm window.

 Select the run AEF #02 in the Process1 box, then select the following processes:

 Events > Detect heartbeats: Select channel ECG, check "All file", event name "cardiac".
 Events > Detect eye blinks: Select channel VEOG, check "All file", event name "blink".
 Events > Remove simultaneous: Remove "cardiac", too close to "blink", delay 250ms.

 Open the Run#02 recordings (MEG+EOG+ECG) and verify that the detection worked as
expected. You should get 472 cardiac events and 19 blink events.
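
For reference, this pipeline can also be written as a Matlab script. The sketch below is only an illustration: the bst_process call syntax and the process names are assumptions based on the menus above, and the file path is hypothetical; check them against your Neurostorm installation.

    % Minimal sketch of the detection pipeline for run AEF #02 (illustration only)
    sFile = 'Subject01/Run02/data_0raw_run02.mat';   % hypothetical link to raw file
    % Detect heartbeats on the ECG channel
    bst_process('CallProcess', 'process_evt_detect_ecg', sFile, [], ...
        'channelname', 'ECG', 'timewindow', [], 'eventname', 'cardiac');
    % Detect eye blinks on the VEOG channel
    bst_process('CallProcess', 'process_evt_detect_eog', sFile, [], ...
        'channelname', 'VEOG', 'timewindow', [], 'eventname', 'blink');
    % Remove cardiac events closer than 250ms to a blink
    bst_process('CallProcess', 'process_evt_remove_simult', sFile, [], ...
        'remove', 'cardiac', 'target', 'blink', 'dt', 0.25, 'rename', 0);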

Advanced

Artifacts classification
If the EOG signals are not as clean as here, the detection processes may create more than one category,
for instance: blink, blink2, blink3. The algorithm not only detects specific events in a signal, it also
classifies them by shape. For two detected events, the signals around the event marker have to be
sufficiently correlated (> 0.8) to be classified in the same category. At the end of the process, all the
categories that contain less than 5 events are deleted.

In the good cases, this can provide an automatic classification of different types of artifacts, for instance:
blinks, saccades and other eye movements. The tutorial MEG median nerve (CTF) is a good illustration
of appropriate classification: blink groups the real blinks, and blink2 contains mostly saccades.


In the bad cases, the signal is too noisy and the classification fails. This leads either to many different categories, or to none if all the categories have less than 5 events. If you don't get good results with the process "Detect eye blinks", you can try to run a custom detection with the classification disabled.

On the contrary, if you obtain one category that mixes multiple types of artifacts and would like to automatically separate them into different sub-groups, you can try the process "Events > Classify by shape". It is more powerful than the automatic classification from the event detection process because it can run on multiple signals at the same time: first it reduces the number of dimensions with a PCA decomposition, then it runs a similar classification procedure.

Advanced

Detection: Custom events


These two processes "Detect heartbeats" and "Detect eye blinks" are in reality shortcuts for a generic
process "Detect custom events". This process can be used for detecting any kind of event based on
the signal power in a specific frequency band. We are not going to use it here, but you may have to use
it if the standard parameters do not work well, or for detecting other types of events.

 The signal to analyze is read from the continuous file (options "Channel name" and "Time
window").
 Frequency band: The signal is filtered in a frequency band where the artifact is easy to detect.
For EOG: 1.5-15Hz ; for ECG: 10-40Hz.

 Threshold: An event of interest is detected if the absolute value of the filtered signal exceeds a given multiple of its standard deviation. For EOG: 2xStd; for ECG: 4xStd.

 Minimum duration between two events: If the filtered signal crosses the threshold several times in relation to the same artifact (eg. muscle activity in an EMG channel), we don't want to trigger several events but just one at the beginning of the activity. This parameter tells the algorithm to keep only the maximum value over the given time window; it also prevents the detection of other events immediately after a successful detection. For the ECG, this value is set to 500ms, because it is very unlikely that the heart rate of the subject goes over 120 beats per minute.

 Ignore the noisy segments: If this option is selected, the detection is not performed on the
segments that are much noisier than the rest of the recordings.

 Enable classification: If this option is selected, the events are classified by shape into different categories, based on a correlation measure. In the end, only the categories that have more than 5 occurrences are kept; all the other successful detections are ignored.
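
To make the detection logic concrete, here is a minimal sketch of the core idea (band-pass filter, threshold at a multiple of the standard deviation, minimum delay between events). It is an illustration only, not the actual Neurostorm implementation; bandpass() is assumed to come from the Matlab Signal Processing Toolbox.

    % Threshold-based event detection (illustration only)
    % x: [1 x Ntime] channel signal, sfreq: sampling frequency in Hz
    k       = 4;                            % threshold in standard deviations (ECG example)
    minDist = round(0.5 * sfreq);           % minimum delay between events: 500 ms
    xf      = bandpass(x, [10 40], sfreq);  % filter in the band where the artifact is visible
    isAbove = abs(xf) > k * std(xf);
    events  = [];
    i = find(isAbove, 1);
    while ~isempty(i)
        % keep the local maximum within the refractory window after the crossing
        win = i : min(i + minDist, numel(xf));
        [~, iMax] = max(abs(xf(win)));
        events(end+1) = win(iMax);          %#ok<AGROW>
        i = find(isAbove & (1:numel(xf)) > win(end), 1);
    end
    eventTimes = events / sfreq;            % convert samples to seconds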

Advanced

In case of failure
If the signals are not as clean as in this sample dataset, the automatic detection of the heartbeats
and blinks may fail with the standard parameters. You may have to use the process "Detect
custom events" and adjust some parameters. For instance:

 If nothing is detected: decrease the amplitude threshold, or try to adjust the frequency band.
 If too many events are detected: increase the amplitude threshold or the minimum duration
between two events.
 If too many categories of events are generated and you end up with very few events: disable the classification.
 To find the optimal frequency band for an artifact, you can open the recordings and play with
the online band-pass filters in the Filter tab. Keep the band that shows the highest amplitude
peaks.

If you cannot get your artifacts to be detected automatically, you can browse through the recordings and mark all the artifacts manually, as explained in the tutorial Event markers.

Advanced

Other detection processes


Events > Detect analog trigger

 See tutorial Stimulation delays.

 This is used to detect events on any channel (MEG, EEG, STIM, Analog, etc.) where the baseline is relatively stable and the events predictably cross a threshold. This is useful when you want to detect a single time point (simple event) at the start of an event.

Events > Detect custom events

 See tutorial Artifact detection.

 This is used to detect events on any channel (MEG, EEG, STIM, Analog, etc.) where the baseline is relatively stable and the events predictably cross a threshold. This is useful when you want to detect a simple event at the peak of an event.

Events > Detect events above threshold

 See tutorial MEG visual: single subject.

 This is used to detect signal on any channel (MEG, EEG, STIM, Analog, etc.) that goes above a defined threshold value. This is useful when you want to detect all the time points when the signal is above the threshold (extended events).

 The extended event can be converted to a simple event (when the rising or falling edge is desired): in the Record tab, select the event to convert, then in the menu Events > Convert to simple event, select Start, Middle, or End to indicate where the marker should be placed.

Events > Detect other artifacts

 See tutorial Additional bad segments

Events > Detect movement

 See tutorial Detect subject movements

Synchronize > Transfer events

 See tutorial Synchronization with eye tracker

Artifacts > Detect bad channels: Peak-to-peak

 Reject channels and trials from imported data. Usually not recommended, as the amplitude of
the signal is not always a good marker of the quality of the channel.

Tutorial 13: Artifact cleaning with SSP


Authors: Francois Tadel, Elizabeth Bock, John C Mosher, Sylvain Baillet

As previously discussed, frequency filters are not adapted to remove artifacts that are transient or that overlap in the frequency domain with the brain signals of interest. Other approaches exist to correct for these artifacts, based on their spatial signatures.

If an event is very reproducible and occurs always at the same location (eg. eye blinks and heartbeats),
the sensors will always record the same values when it occurs. We can identify the topographies
corresponding to this artifact (ie. the spatial distributions of values at one time point) and remove them
from the recordings. This spatial decomposition is the basic idea behind two widely used approaches:
the SSP (Signal-Space Projection) and ICA (Independent Component Analysis) methods.

This introduction tutorial will focus on the SSP approach, as it is a lot simpler and faster but still very efficient for removing blinks and heartbeats from MEG recordings. For cleaning EEG data, ICA is often a better choice - the interface for running ICA decompositions is very similar and is described in an advanced tutorial.

Contents

1. Overview
2. The order matters
3. SSP: Heartbeats
4. Evaluate the components
5. Evaluate the correction
6. SSP: Eye blinks
7. Run #02
8. Note for beginners
9. SSP: Generic
10. Averaged artifact
11. Troubleshooting
12. SSP Theory
13. SSP Algorithm
14. Extract the time series
15. On the hard drive
16. Additional documentation

Overview
The general SSP objective is to identify the sensor topographies that are typical of a specific
artifact, then to create spatial projectors to remove the contributions of these topographies from
the recordings.

1. We start by identifying many examples of the artifact we are trying to remove. This is what
we've been doing in the previous tutorial with the creation of the "cardiac" and "blink" events.
2. We extract a short time window around each of these event markers and concatenate in time all
the small blocks of recordings.
3. We run a principal component analysis (PCA) on the concatenated artifacts in order to get a decomposition into various spatial components (number of components = number of sensors).
4. If it works well, we can find in the first few principal components some topographies that are very specific to the type of artifact we are targeting. We select these components to remove.
5. We compute a linear projector for each spatial component to remove and save them in the
database (in the "Link to raw file"). They are not immediately applied to the recordings.

6. Whenever some recordings are read from this file, the SSP projectors are applied on the fly to
remove the artifact contributions. This approach is fast and memory efficient.
7. Note that these tools are available on continuous files only ("Link to raw file") and cannot be
applied to recordings that have already been imported in the database.

The order matters


This procedure has to be repeated separately for each artifact type. The order in which you
process the artifacts matters, because for removing the second artifact we typically use the
recordings cleaned with the first set of SSP projectors. We have to decide which one to process
first.

It works best if each artifact is defined precisely and as independently as possible from the other
artifacts. If the two artifacts happen simultaneously, the SSP projectors calculated for the blink
may contain some of the heartbeat topography and vice versa. When trying to remove the second
artifact, we might not be able to clearly isolate it anymore.

Because the heart beats every second or so, there is a high chance that when the subject blinks there is a heartbeat not too far away in the recordings. Therefore a significant number of the blinks will be contaminated with heartbeats. But we usually have a lot of "clean" heartbeats, so we can start by removing those. To correctly isolate these two common artifacts, we recommend the following procedure:

 Remove the markers "cardiac" that are occurring during a blink (done in the previous tutorial),
 Compute the cardiac SSP (with no eye movements, because we removed the co-occurring
events),

 Compute the blink SSP (with no heartbeats, because they've already been taken care of).

If you have multiple modalities recorded simultaneously, for example MEG and EEG, you should run this
entire procedure twice, once for the EEG only and once for the MEG only. You will always get better
results if you process the different types of sensors separately. The same applies when processing Elekta-Neuromag recordings: process the magnetometers (MEG MAG) and the gradiometers (MEG GRAD) separately.

SSP: Heartbeats
Double-click on the link to show the MEG sensors for Run #01.
In the Record tab, select the menu: "Artifacts > SSP: Heartbeats".

 Event name: Name of the event to use to calculate the projectors, enter "cardiac".
 Sensor types: Type of sensors for which the projection should be calculated ("MEG"). Note
that you will always get better results if you process the different types of sensors separately.

 Compute using existing SSP projectors: You have the option to calculate the projectors
from the raw recordings, or from the recordings filtered with the previously computed SSP
projectors.
Unless you have a good reason for not considering the existing projectors, you should select this option. Then if the results are not satisfying, try again with the option disabled.
For this step it doesn't make any difference because there are no projectors in the file yet.

After the computation is done, a new figure is displayed that lets you select the active projectors.

 On the left: The projector categories where each row represents the result of an execution of
this process (usually one for each sensor type and each artifact).

 On the right: The spatial components returned by the PCA decomposition. The percentage indicated between brackets is the singular value for each component, normalized for this decomposition (percentage = Si / sum(Si), see technical details at the end of this page).

 Percentage: More practically, it indicates the amount of signal that was captured by the
component during the decomposition. The higher it is, the more the component is

representative of the artifact recordings that were used to calculate it. In the good cases, you
would typically see one to three components with values that are significantly higher than the
others.

 When a component is selected, it means that it is removed from the recordings. A spatial
projector is computed and applied to the recordings on the fly when reading from the
continuous file.
 Default selection: The software selects the first component and leaves the others
unselected. This selection is arbitrary and doesn't mean the cleaning is correct, you
should always manually review the components that you want to remove.

Evaluate the components


The percentage indicated for the first value (9%) is much higher than the following ones (5%, 5%, 4%, 3%...); this could indicate that it targets the cardiac artifact relatively well. Let's investigate this.

 Click on the first component, then click on the toolbar button [Display component topography]. This menu shows the spatial distribution of the sensor values for this component. Note that you don't have to select the component (ie. check the box) to display it. This topography seems to correspond to a strong dipolar activity located relatively far from the sensor array; it matches the type of artifact we expect from the heart activity.

 The second button "Display component topography [No magnetic interpolation]"
produces the same figure but without the reinterpolation of the magnetic fields that is typically
applied to the MEG recordings in Neurostorm, it may help understand some difficult cases. This
magnetic interpolation will be detailed later in the introduction tutorials.

 You can display multiple components in the same figure: select them at the same time in the list (holding the Shift/Ctrl/Cmd button of your keyboard) and then click on the button "Display topography". No other strong component looks like it could be related to the heartbeats.

 The last button in the toolbar, [Display component time series], opens a figure that represents the evolution of the contribution of this component over time. The higher the amplitude, the more present the selected topography is in the recordings. Click on it to show component #1, then display the ECG signal at the same time (right-click on the file > ECG > Display time series).

 We observe that the "SSP1" trace correlates relatively well with the ECG trace, in the sense that we captured most of the ECG peaks with this component. However, the component also seems to capture much more signal than just the heartbeats: many alpha oscillations and some of the ocular activity. The example below shows a blink in the EOG, ECG and SSP component #1.

 If you remove this component from the recordings, you can expect to see most of the artifacts
related with the cardiac activity to go away, but you will also remove additional signal
elements that were not really well identified. The job is done but it causes some unwanted side
effects.

 It is in general possible to refine the SSP decomposition by going back to the selection of
"cardiac" markers that we used to compute it. You could look at all the ECG peaks
individually and remove the markers located in segments of recordings that are noisier or
that contain a lot of alpha activity (~10Hz). You would need to delete this SSP decomposition and run the same process again.
 Alternatively, or if you don't manage to extract a clean cardiac component with a PCA/SSP
decomposition, you could try to run an ICA decomposition instead. You might be able to get
better results, but it comes with significant computation and manual exploration times. Note
that for some subjects, the cardiac artifact is not very strong and could be simply ignored in the
analysis.

Evaluate the correction


The topography of component #1 looks like it represents the heart activity and its temporal evolution shows peaks where we identified heartbeats. It is therefore a good candidate for removal; we just need to make sure the signals look good after the correction before validating this choice.

 Show the left-temporal MEG sensors (CTF LT) and select/unselect the first SSP component.
 Repeat this for different time windows, to make sure that the cardiac peaks in the MEG sensors
really disappear when the projector #1 is selected and that the rest is not altered too much.

 No correction:

 Cardiac component #1 removed:

 In this example we will consider that the current decomposition is good enough.
Make sure you select the component #1, then click on [Save] to validate the modifications.

 After this window is closed, you can always open it again from the Record tab with the menu
Artifacts > Select active projectors. At this stage of the analysis, you can modify the list of
projectors applied to the recordings at any time.

SSP: Eye blinks


Let's try the same thing with the eye blinks.

 Select the process "Artifacts > SSP: Eye blinks"

 Run it on the event type "blink", which indicates the peaks of the VEOG signal.
Select the option "Compute using existing projectors" (if this step doesn't seem to work correctly, try again without selecting this option).

 You now see a new category of projectors. Based on the distribution of values, the first component is most likely a good representation of the artifact we are trying to remove. The second one could be a good candidate as well.

 Select the first three components and display their topographies:

 Component #1: Most likely a blink,

 Component #2: Probably a saccade (another type of eye movement),

 Component #3: Not related with the eye movements (maybe related with the alpha activity).
 As a side note, if you had not selected the option "Compute using existing SSP/ICA projectors", you would have obtained the projectors below, which correspond to the topography of the artifact in the original signals (without considering the cardiac projector). It is normal that the topographies obtained after removing the cardiac peaks are slightly different: they are computed on a different subspace of the signals. The relative singular value is smaller after the cardiac correction, maybe because the recordings used to compute it already contained some eye movements.

 Display the time series for these three components, together with the EOG signals. You have to temporarily uncheck component #1 to be able to display its signal: when it is checked, it is removed from the signals and therefore corresponds to a flat trace.
The figure below shows the EOG and SSP values between 318s and 324s. The SSP1 trace matches the blink observed in VEOG and SSP2 matches the saccade observed in HEOG.

 Left-temporal MEG signals when there is no component selected:

 With only the component #2 selected (saccade):

 With components #1 and #2 selected (blink + saccade):

 Keep the components #1 and #2 selected and click on [Save] to validate your changes.

Run #02

Reproduce the same operations on Run #02:

 Close everything with the [X] button at the top-right corner of the Neurostorm window.
 Open the MEG recordings for run AEF #02 (double-click on the file link).

 Artifacts > SSP: Heartbeats: Event name "cardiac", sensors "MEG", use existing SSP
projectors.
Select component #1, click on [Save].

 Artifacts > SSP: Eye blinks: Event name "blink", sensors "MEG", use existing SSP projectors.

Select component #1, click on [Save].

 Note that in this second session, the representation of the saccade was not as clear as in the first file. The distribution of the percentage values does not show any clear component other than the blink one, and the topographies are not as clear. In general, the saccade processing requires a separate step; we will illustrate this in the next tutorial.
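
As for the detection in the previous tutorial, these operations can also be scripted. The sketch below is an illustration only: the process names and option names are assumptions to verify against your Neurostorm version, and sFile stands for the "Link to raw file" of run #02.

    % Minimal sketch of the SSP computation for run #02 (illustration only)
    bst_process('CallProcess', 'process_ssp_ecg', sFile, [], ...
        'eventname',   'cardiac', ...
        'sensortypes', 'MEG', ...
        'usessp',      1, ...     % compute using existing SSP projectors
        'select',      1);        % select the first component
    bst_process('CallProcess', 'process_ssp_eog', sFile, [], ...
        'eventname',   'blink', ...
        'sensortypes', 'MEG', ...
        'usessp',      1, ...
        'select',      1);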

Note for beginners


Everything below is advanced documentation; you can skip it for now.

Advanced

SSP: Generic
The SSP calculations for the heartbeats and the eye blinks are shortcuts to a more generic process "Artifacts > SSP: Generic". You may need this process if the standard parameters do not work or if you want to use this technique to remove other types of artifacts.

 Time window: What segment of the file you want to consider.


 Event name: Markers that are used to characterize the artifact. If you don't specify any event
name, it will use the entire time window as the input of the PCA decomposition.

 Event window: Time segment to consider before and after each event marker. We want this
time window to be longer than the artifact effect itself, in order to have a large number of time
samples representative of a normal brain activity. This helps the PCA decomposition to separate
the artifact from the ongoing brain activity.

 Frequency band: Definition of the band-pass filter that is applied to the recordings before
calculating the projector. Usually you would use the same frequency band as we used for the
detection, but you may want to try to refine this parameter if the results are not satisfying.

 Sensor types or names: List of sensor names or types for which the SSP projectors are
calculated. You can get better results if you process one sensor type at a time.

 Compute using existing SSP/ICA projectors: Same as in the heartbeats/blinks processes.


 Save averaged artifact in the database: If you check this option with an event name
selected, the process will save the average of all the artifact epochs (event marker + event
window) before and after the application of the first component of the SSP decomposition. This
is illustrated in the next section.

 Method to calculate the projector:


o PCA: What was described until now: SVD decomposition to extract spatial
components.

o Average: Uses only one spatial component, the average of the time samples at which
the selected events occur. This has no effect if there are no events selected.

Advanced

Averaged artifact
One efficient way of representing the impact of this artifact correction is to epoch the recordings
around the artifacts before and after the correction and compute the average of these epochs.

 Run the process "SSP: Generic" with:


o The default blink options: event "blink", [-200,+200]ms, [1.5-15]Hz.
o The option "Computing existing SSP/ICA projectors" disabled.

o The option "Save averaged artifact in the database" selected.

o The option panel should look like the screen capture in the previous section.
 Look at the topography of the first component. You can notice that the percentage value is higher than what we got previously, and that the topography looks different from what we obtained before.
 This difference comes from the fact that this time we did not use the cardiac SSP to compute the blink SSP ("Compute using existing SSP" disabled). This could indicate that there is some form of cross-contamination between the "blink" and "cardiac" events that we defined here. The common signals between the different segments of artifact are sometimes due to strong alpha waves (around 10Hz) that are present for most of the recordings. It doesn't matter much; you just have to remember that the computation order matters and that you can try variations of the suggested workflow to better fit your recordings.
 Otherwise, the difference between this topography and the previous one could simply be due to the fact that they represent the artifact in different subspaces (in the first case, one dimension has already been removed). Even if the two artifacts were completely independent (the two removed dimensions are orthogonal), the topographies would look slightly different.
 You should see now two additional files in your database. They are both the average of the 19
blinks identified in the recordings, [-200,+200]ms around the "blink" events. The top row shows
the average before the SSP correction, the bottom row the same average but recomputed after
removing the first component of the decomposition. The artifact is gone.

 Delete this new category, and make sure you get back to the previous settings (first
component of both "cardiac" and "blink" selected). Click on [Save] to validate this modification.

Advanced

Troubleshooting
You have calculated your SSP projectors as indicated here but you don't get any good results. No
matter what you do, the topographies don't look like the targeted artifact. You can try the
following:

 Review one by one the events indicating the artifacts, remove the ones that are less clear or that
occur close to another artifact.
 Select or unselect the option "Compute using existing SSP".
 Change the order in which you compute the projectors.
 Use the process "SSP: Generic" and modify some parameters:
o Use a narrower frequency band: especially for the EOG, if the projectors capture some of the alpha oscillations, you can limit the frequency band to [1.5 - 9] Hz.
o Reduce or increase the time window around the peak of the artifact.
o Change the method: Average / SSP.
 If you have multiple acquisition runs, you may try to use all the artifacts from all the runs rather
than processing the files one by one. For that, use the Process2 tab instead of Process1. Put the
"Link to raw file" of all the runs on both sides, Files A (what is used to compute the SSP) and Files
B (where the SSP are applied).

Always look at what this procedure gives you in output. Most of the time, the artifact cleaning
will be an iterative process where you will need several experiments to adjust the options and the
order of the different steps in order to get optimal results.

Advanced

SSP Theory
The Signal-Space Projection (SSP) is one approach to rejection of external disturbances. Here is a short
description of the method by Matti Hämäläinen, from the MNE 2.7 reference manual, section 4.16.

Unlike many other noise-cancellation approaches, SSP does not require additional reference
sensors to record the disturbance fields. Instead, SSP relies on the fact that the magnetic field
distributions generated by the sources in the brain have spatial distributions sufficiently different from those generated by external noise sources. Furthermore, it is implicitly assumed that the
linear space spanned by the significant external noise patterns has a low dimension.

Without loss of generality we can always decompose any n-channel measurement b(t) into its signal and noise components as:

 b(t) = b_s(t) + b_n(t)


Further, if we know that b_n(t) is well characterized by a few field patterns b_1...b_m, we can express the disturbance as

 b_n(t) = U c_n(t) + e(t) ,

where the columns of U constitute an orthonormal basis for b_1...b_m, c_n(t) is an m-component column vector, and the error term e(t) is small and does not exhibit any consistent spatial distributions over time, i.e., C_e = E{e e^T} = I. Subsequently, we will call the column space of U the noise subspace. The basic idea of SSP is that we can actually find a small basis set b_1...b_m such that the conditions described above are satisfied. We can now construct the orthogonal complement operator

 P⊥ = I - U U^T

and apply it to b(t) yielding

 b(t) ≈ P⊥ b_s(t) ,

since P⊥ b_n(t) = P⊥ U c_n(t) ≈ 0. The projection operator P⊥ is called the signal-space projection operator and generally provides considerable rejection of noise, suppressing external disturbances by a factor of 10 or more. The effectiveness of SSP depends on two factors:

1. The basis set b_1...b_m should be able to characterize the disturbance field patterns completely, and

2. The angles between the noise subspace spanned by b_1...b_m and the signal vectors b_s(t) should be as close to π/2 as possible.

If the first requirement is not satisfied, some noise will leak through because P⊥ b_n(t) ≠ 0. If any of the brain signal vectors b_s(t) is close to the noise subspace, not only the noise but also the signal will be attenuated by the application of P⊥ and, consequently, there might be little gain in signal-to-noise ratio.

Since the signal-space projection modifies the signal vectors originating in the brain, it is
necessary to apply the projection to the forward solution in the course of inverse computations.

Advanced

SSP Algorithm

The logic of the SSP computation is the following:

1. Take a small time window around each marker to capture the full effect of the artifact, plus
some clean brain signals before and after. The default time window is [-200,+200]ms for eye
blinks, and [-40,+40]ms for the heartbeats.
2. Filter the signals in a frequency band of interest, in which the artifact is the most visible (in
practice, we extract a segment long enough so that it can be filtered properly, and cut it after
filtering).
3. Concatenate all these time blocks into a big matrix A = [b_1, ..., b_m]

4. Compute the singular value decomposition of this matrix A: [U,S,V] = svd(A, 'econ')

5. The singular vectors U_i with the highest singular values S_i form an orthonormal basis of the artifact subspace that we want to subtract from the recordings. The software selects by default the vector with the highest singular value. It is then possible to redefine interactively the selected components.

6. Calculate the projection operator: P⊥_i = I - U_i U_i^T

7. Apply this projection to the MEG or EEG recordings F: F = P⊥_i F

8. The process has to be repeated separately several times for each sensor type and each artifact.

Steps #1 to #5 are done by the processes "Artifacts > SSP" in the Record tab: the results, the vectors U_i, are saved in the channel file (field ChannelMat.Projector(i).Components).

Steps #6 and #7 are calculated on the fly when reading a block of recordings from the continuous file: when using the raw viewer, running a process on the continuous file, or importing epochs in the database.

Step #8 is the manual control of the process. Take some time to understand what you are trying
to remove and how to do it. Never trust blindly any fully automated artifact cleaning algorithm,
always check manually what is removed from the recordings, and do not give up if the first
results are not satisfying.
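
The numerical core of this procedure (steps #4 to #7) fits in a few lines of Matlab. The sketch below is an illustration only, not the Neurostorm implementation; A is the [Nsensors x Nsamples] matrix of concatenated artifact segments (step #3) and F the recordings to clean.

    % SSP decomposition and projection (illustration only)
    [U, S, V] = svd(A, 'econ');                 % step #4: spatial decomposition
    singVal   = diag(S);
    pct       = 100 * singVal / sum(singVal);   % percentages shown in the interface
    Ui        = U(:, 1);                        % step #5: keep the first component (default)
    P         = eye(size(U,1)) - Ui * Ui';      % step #6: projection operator
    Fclean    = P * F;                          % step #7: apply the projector to the recordings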

Advanced

Extract the time series


It could be useful to save the SSP or ICA time series in a new file for further processing. Here is
one solution to get there:

 First, make sure you do not remove the components you are interested in: open the continuous
recordings, Record tab > Artifacts > Select active projectors, unselect the components you
want to study, so that they are kept in the imported data.

 Import the segments of recordings of interest from the continuous file: select the option Apply
SSP/ICA projectors, otherwise the projectors would be discarded from the new channel file
in the imported folder.

 To review the SSP/ICA time series (optional): open the recordings you just imported, and select
the menu Artifacts > Load projector as montages in the Record tab. The projectors are
made available in the montage menu.

 To create a new file with the SSP/ICA time series in the database: select the file you imported in
Process1 and run the process Standardize > Apply montage, with the option Create new
folders selected.

Advanced

On the hard drive


The projectors are saved in the channel file associated with the recordings. This means that they
will be shared by all the files that share the same channel file. As a consequence, you cannot
share the channel files between acquisition runs if you are planning to use different SSP
projectors for different runs.

You can find them in the field ChannelMat.Projector (array of structures):

 Comment: String representing the projector in the window "Select active projectors".
 Components: [Nsensors x Ncomponents], each column is one spatial component.
 CompMask: [1 x Ncomponents], Indicates if each component is selected or not (0 or 1).
 Status: 0=Category not selected, 1=Category selected, 2=Projectors already applied to the file.
 SingVal: [1 x Ncomponents], Singular values of the SVD decomposition for each component.
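
A minimal sketch for inspecting these projectors from Matlab is shown below; the channel file name is hypothetical and the field names are the ones listed above.

    % List the projector categories saved in a channel file (illustration only)
    ChannelMat = load('channel_ctf_acc1.mat');   % hypothetical channel file name
    for i = 1:numel(ChannelMat.Projector)
        proj = ChannelMat.Projector(i);
        fprintf('%s: %d components, %d selected, status %d\n', proj.Comment, ...
            size(proj.Components, 2), nnz(proj.CompMask), proj.Status);
    end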

Tutorial 15: Import epochs


Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

We can consider that our datasets are clean from any major artifact. We will now proceed to the
analysis of the brain signals we recorded in response to the auditory stimulation. There are two
major types of processing workflows for MEG/EEG, depending on whether we are dealing with
an event-related paradigm or a steady-state/resting-state study.

This tutorial will only focus on the event-related case: series of stimuli are sent to the subject and
we have the corresponding triggers marked in the recordings. We will base our analysis on these
triggers, import short epochs around each of them and average them. You will find in the
advanced tutorials a scenario of MEG resting-state analysis.

Import in database
Until now, we've only been looking at data that was read from continuous files. The raw file
viewer provides rapid access to the recordings, but many operations can only be applied to short
segments of recordings that have been imported in the database. We will refer to these as
"epochs" or "trials".

 Right-click on Run#01 > Import in database.

 Set the import options as described below:

 Time window: Time range of interest. We are interested in all the stimulations, so do not change this parameter. The default values always represent the entire file.
 Split: Useful for importing continuous recordings without events, as successive chunks of the same duration. We do not need this here.
 Events selection: Check the "Use events" option, and select both "standard" and
"deviant".
The number between parentheses represents the number of occurrences of each event in
the selected time window (changes if you modify the time definition at the top of the
window)
 Epoch time: Time segment that is extracted around each event marker. Set it to [-
100,+500]ms.
This option is disabled for extended events: if you want to enable it, you need to convert
the extended events to simple events first.

 Apply SSP/ICA projectors: Use the active projectors calculated during the previous pre-
processing steps. Always check the summary of the projectors that are selected.
Here there are 2 categories ("cardiac" and "blink") with a total of 3 projectors selected
(one in "cardiac" and two in "blink", the blink and the saccade). Keep this option
selected.
 Remove DC Offset: Check this option, select Time range: [-100, -1.7]ms. For each
epoch, it will:
o Compute the average of each channel over the baseline (pre-stimulus interval: [-100,-
1.7]ms)
o Subtract it from the channel at every time instant (full epoch interval: [-100,+500]ms); a minimal sketch of this operation is shown after this list.
o This option removes the baseline value of each sensor. In MEG, the sensors record
variations around a somewhat arbitrary level, therefore this operation is always needed,
unless it was already applied during one of the pre-processing steps.
o Note that a high-pass filter with a very low frequency (for instance 0.3Hz) can efficiently
replace this DC correction. If a high-pass filter has already been applied to the
recordings, you may want to unselect this option.

 Resample recordings: Keep this unchecked.


 Create a separate folder for each epoch type: Do not check this option.
o If selected: a new folder is created for each event type ("standard" and "deviant")
o If not selected: all the epochs are saved in a new folder, the same one for all the events,
that has the same name as the initial raw file. This is what we want because we have
two acquisition runs with different channel files (different head positions and different
SSP projectors) to import for the same subject. If we select this option, the "standard"
epochs of both runs would be imported in the same folder and would end up sharing
the same channel file, which is not correct.
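
As mentioned in the "Remove DC offset" option above, the baseline correction is a simple per-channel subtraction. A minimal sketch (illustration only, assuming F is one [Nchannels x Ntime] epoch and Time its time vector in seconds):

    % Remove the DC offset estimated on the pre-stimulus baseline (illustration only)
    iBaseline = (Time >= -0.100) & (Time <= -0.0017);   % baseline [-100, -1.7] ms
    F = F - mean(F(:, iBaseline), 2);                   % subtract the baseline mean of each channel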

One new folder appears in Subject01. It contains a channel file and two trial groups.

 The channel file is copied from the continuous file.


 To expand a group of trials and show all the files: double-click on it or click on the "+" next to it.
 The SSP projectors calculated in the previous tutorial were applied on the fly while reading from
the continuous file. These epochs are clean from eye blinks and power line contamination.

 Note that the trials that are overlapping with a BAD segment are tagged as bad in the
database explorer (marked with a red dot). All the bad trials are going to be ignored in the
rest of the analysis, because they are ignored by the Process1 and Process2 tabs (see next
tutorial).

Review the individual trials
After reviewing the continuous file with the "columns" view (channels one below the other) it
can be useful to also review the imported trials with the "butterfly" view (all the channels
superimposed).

 Double-click on the first trial for the "deviant" condition.

 Switch to the "butterfly" display mode: in the Record tab, click on the first button in the
toolbar.

 Right-click on the figure > Navigator > Next data file, or use the keyboard shortcut F3.
This way you can quickly review all the trials to make sure that there is no obvious
problem.
Mac users: The keys "Fx" are obtained by holding the "Fn" key simultaneously.

To manually tag a trial as bad, you have three options:

 Right-click on the trial file in the database > Reject trial.


 Right-click on the figure > Reject trial.
 Use the keyboard shortcut Ctrl+B.
 To set all the trials back as good in a group: right-click on the trials group > Accept bad
trials.

Raster plot
You can also get an overview of the values of one specific sensor over all the trials at once.

 Right-click on the group of trials "deviant" > Display as image > MEG.
 You can change the selected sensor with the drop-down menu in the Display tab, or use the up/down arrows on your keyboard after clicking on the figure.

 The bad trials are already marked, but if they were not this view could help you identify
them easily.

Run #02
Repeat the same operations for the second dataset:

 Right-click on Run#02 > Import in database.


 Import events "standard" and "deviant" with the same options.

Advanced

Epoch length
We imported epochs of 600ms (100ms baseline + 500ms post-stimulus) but did not justify this
choice.
The length of the epochs you import should be chosen very carefully. If you realize later that your epochs are too short or too long, you will have to start your analysis over from this point.
The epoch length to consider depends on:

The experimental design

 The minimum duration between two stimuli defines the maximum length you can consider
analyzing after the stimulus. You should design your experiment so that it always includes the
entire evoked response, plus an additional segment that you can use as a baseline for the
following epoch.
 In this study, the inter-stimulus interval (ISI) is random between 0.7s and 1.7s. The minimum ISI
(700ms) is long enough to include the entire auditory evoked response, but not the button press
that follows a deviant tone. In some cases (late subject response and short ISI), the following
stimulation occurs while the brain is still processing the button press. The baseline of some
epochs may contain motor and somatosensory components.
 For data processing, it is always better to have longer ISI, but it also means increasing the
duration of the experiment or decreasing the number of repetitions, which leads to other
problems. The trade-off between data quality and recording time in this experiment is
acceptable, very few trials are actually contaminated by the motor response to the previous
trial. We will ignore this problem in the following tutorials, but you could decide to reject these
few trials in your own analysis.

 Here we consider only a short baseline (100ms) to avoid including too much motor
activity.
We will only study the auditory response, therefore 500ms post-stimulus is enough.

The processing pipeline

You may have to artificially extend the epochs of interest for technical reasons. Most filters
cause edge effects, ie. unreliable segments of data at the beginning and the end of the signal.
When applied on short epochs, they might destroy all the data of interest.

To avoid this, you can add a few hundred milliseconds before and after your epoch of interest. It doesn't matter if it overlaps with the previous or the next epoch. After running the operations that require longer signals, you can cut your epochs back to the desired length. Examples:

 Time-frequency (Morlet wavelets):


When estimating the power at frequency f Hz, you get incorrect values for at least one
period (T=1/f) at the beginning and the end of the signal. For example, at 2Hz you need
to discard the first and last 500ms of your time-frequency maps (1/2Hz=0.5s).
 Low-pass filtering:
With any filtering operation there will always be a transient effect at the beginning of the filtered data. After filtering, you need to discard the time windows corresponding to these effects. Their duration depends on the order of the filter.
 Hilbert transform:
Same considerations as for the low-pass filter. This process starts by filtering the signals
in various frequency bands, using the same function as the band-pass and low-pass filters.
 Normalizations:
The normalization procedures that use a baseline from the same epoch (Z-score,
ERS/ERD, baseline correction) usually work better with longer baselines. The longer the
clean baseline, the better the estimation of the average and standard deviation over this
baseline. If your baseline is too short, the quality of your normalization will be low.
If you normalize time-frequency maps or filtered source averages, you have to
additionally exclude the edge effects from the baseline, and consider an even longer
baseline.

In this tutorial, we decided to work with very short epochs (600ms only) so that all the analysis
would run on most computers, including personal laptops. For any type of frequency analysis on
the recordings, this will be too short. When processing your own recordings, you should
increase the size of the epochs beyond the segment that you are actually planning to study.

Advanced

On the hard drive


Right-click on any imported epoch > File > View file contents:


Structure of the imported epochs: data_*.mat

 F: [Nchannels x Ntime] recordings time series, in Volts.


 Std: [Nchannels x Ntime] Standard deviation or standard error, when available (see next
tutorial).
 Comment: String displayed in the database explorer to represent this file.
 ChannelFlag: [Nchannels x 1] One value per channel, 1 means good, -1 means bad.
 Time: [1 x Ntime] Time values for each sample recorded in F, in seconds.
 DataType: Type of information saved in the F matrix.
 Device: Name of the acquisition system used to record this file.
 Leff: Effective number of averages. For averaged files, number of trials that were used to
compute this file.
 Events: Time markers available in the file (stimulus triggers or other events)
o label: Name of the event group.
o color: [r,g,b] Color used to represent the event group, in Matlab format.
o epochs: [1 x Nevt] Only ones for imported epochs.
o times: [1 x Nevt] Time in seconds of each marker in this group (times = samples /
sfreq).
For extended events: [2 x Nevt], first row = start, second row = end.
o reactTimes: Not used anymore.
o select: Indicates if the event group should be displayed in the viewer.

o channels: {1 x Nevt} Cell array of cell-arrays of strings. Each event occurrence
can be associated with one or more channels, by setting .channels{iEvt} to a cell-
array of channel names.
o notes: {1 x Nevt} Cell-array of strings: additional comments for each event
occurrence
 History: Operations performed on file since it was imported (menu "View file history").

File history

Right-click on any imported epoch > File > View file history:

List of bad trials

 There is no field in the file structure that says if the trial is good or bad.
This information is saved at the level of the folder, in the Neurostormstudy.mat file.
 Right-click on an imported folder > File > Show in file explorer.

 Load the Neurostormstudy.mat file into Matlab; the bad trials are listed in the cell array "BadTrials":

Useful functions

 in_bst_data(DataFile, FieldsList): Read an imported epoch.


 in_bst(FileName, TimeWindow): Read any Neurostorm data file with the possibility to
load only a specific part of the file. "TimeWindow" is a range of time values in seconds:
[tStart, tStop].
 bst_process('LoadInputFile', FileName, Target, TimeWindow): The most high-level
function for reading data files. "Target" is a string with the list of sensor names or types
to load.
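
A minimal usage sketch for these functions is shown below. The epoch file name is hypothetical and the calling conventions follow the descriptions above; verify them against your Neurostorm version.

    % Read an imported epoch in different ways (illustration only)
    DataFile = 'Subject01/Run01/data_deviant_trial001.mat';          % hypothetical file name
    DataMat  = in_bst_data(DataFile, 'F', 'Time', 'ChannelFlag');    % selected fields only
    sMat     = in_bst(DataFile, [0, 0.100]);                         % load only 0-100 ms
    sInput   = bst_process('LoadInputFile', DataFile, 'MEG', [0, 0.100]);  % MEG sensors only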

Tutorial 17: Visual exploration


Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

This tutorial illustrates the options Neurostorm offers to represent graphically and explore interactively the evoked responses computed in the previous tutorial. It shows how to produce spatial maps of the sensors and temporal averages, and how to save screen captures and movies.

2D/3D topography
The sensor values at one time instant can be represented on a surface. Each amplitude value gets associated with a color using a colormap (described in the next tutorial). We call this type of representation "sensor topography"; it shows the spatial distribution of the magnetic fields (or electric potentials).

 Show the MEG signals for the standard average in Run#01 (double-click on the file).
This gives us direct feedback on the current time instant and allows us to jump quickly to a different time.
 Right-click on the same file > MEG > select the menus: 3D sensor cap, 2D sensor cap,
2D disc.

 3D sensor cap: Represents the real 3D positions of the sensors.


 2D sensor cap: The sensors are projected on a 2D plane. Realistic distribution of the sensors.
 2D disc: The sensors are projected on a sphere, then displayed as a flat disc. Sometimes distorted...

 In each of these views, you can add markers to indicate the sensors and their labels.
Right-click on the figure > Channels > Display sensors/labels (or Ctrl+E).
 You can change the number of contour lines: Right-click > Contour lines > 0-20.

 In the 3D view, you can notice a hole in the right-occipital area. It corresponds to a damaged
sensor in the MEG system we used for collecting this dataset.

 For EEG/sEEG/ECoG recordings, there is an additional representation mode available:


"3D Electrode". This will be detailed in the advanced tutorials corresponding to these
modalities.

Advanced

Magnetic interpolation
By default, some of the views re-interpolate the fields that are recorded by the MEG sensors to
get smoother displays. A simple inverse problem and forward problem are solved to reconstruct
the magnetic fields on a high-resolution surface of virtual magnetometers (function
channel_extrapm.m).

On Elekta-Neuromag systems, this interpolation has the effect of converting the topographies of
the planar gradiometers into topographies of magnetometers, which deeply affects the display.

The menu "No magnetic interpolation" offers the same views, but without using this
reconstruction of the magnetic field. A spatial interpolation of the values between the sensors is
performed instead.

Advanced

2D Layout
The menu 2D Layout represents, in the same figure, the spatial information (the value of each channel is represented where the sensor is actually located) and the temporal information (instead of just one single value, we represent the signal around the current time).
The light gray lines represent the zero amplitude (horizontal) and the current time (vertical).

To zoom in/out in each small graph, use the buttons at the bottom-right corner of the figure, or the corresponding mouse shortcuts: Ctrl+mouse wheel and Shift+mouse wheel. To select
multiple sensors simultaneously: right-click and move your mouse to enlarge the selection
rectangle.

You can use this display mode to compare multiple files:


Select multiple files in the database explorer, right-click on any of them > 2DLayout.

Advanced

Display as image

The menu "Display as image" shows the same information as the "time series" view, but the
values for each sensor are represented with a color instead of a line.

Advanced

Time selection
Click somewhere on the white part of the time series figure, hold the mouse button, and drag
your mouse left or right: A transparent blue rectangle appears to represent the time selection. If
you right-click on the figure, new options become available in the popup menu:

 Set current time: Move the time cursor where the right-click occurred. The shortcut
Shift+Click can be useful when trying to move in time on dense displays in columns
view.
 Set selection manually: Type the beginning and end of the selected window (in
milliseconds).
 Average time: Average over the selected time window and save it as a new file in the
database.
Note that the best way to do this is to run the process "Average > Average time".
 Export to database: Extract the recordings and save them in a new file in the database.
If some sensors are selected, only their values are extracted, all the others are set to zero.
Note that the best way to do this is to run the process "Extract > Extract time".
 Export to file: Same, but in a user-defined file (not in the database).
 Export to Matlab: Same, but export the selection as a variable in the current Matlab
workspace.

Advanced

Snapshots
Using Neurostorm, you will quickly feel like saving the beautiful images you produce. Your
operating system already provides some nice tools for doing this. Many other options are
available in the "Snapshot" menu, accessible with a right-click on any Neurostorm figure.

Operating system

 Windows/Linux: Press the PrintScreen key on your keyboard and paste the copied
screen in your favorite image or text editor. The combination Alt+PrintScreen only
copies the figure that is currently selected.
 MacOS: Many more options are available; search online for the best screenshot tools for your system.

Snapshot menu

 The options available in the Snapshot menu depend on the type of data represented.
Examples:

 Save as image: Save the figure in a file, without the title bar and borders. Many formats
available.
 Open as image: Capture the figure and open it as an image. This can be useful if you
want to visually compare the selected figure with another one that you cannot display at
the same time (because they have different time or frequency definitions).
 Open as figure: Similar, but copies the figure as a new Matlab figure with some
interactivity.
 Contact sheet and movies: See next section.
 Export to database: Save the recordings in the figure as a new entry in the database.
If there are selected channels, only their values will be saved, the others being set to zero.
 Export to file: Extract the time series displayed in this figure (or only the selected
sensors), and save them in a file. Several exchange file formats available for exporting to
another program.
 Export to Matlab: Same thing, but exports the structure in a variable of the Matlab
workspace.
 Save as SSP projector: Create an SSP projector that removes the current topography.
 Save surface: Save the surface in a file, with the current modifiers applied (smooth,
resect).

Advanced

Movie studio
 Movie (horizontal/vertical): Rotate the 3D scene horizontally or vertically.
 Movie (time): Selected figure: Create .avi movies to show the evolution of the selected
figure.

o The dimensions of the movie depend on the actual size of the figure on the screen. Resize the figure to the appropriate dimensions for the movie before using this menu.

o Zoom in/out (mouse wheel) and move the image (middle click+move) to give
enough space to the time stamp that is added at the bottom-left of the rendered
movie.
o Don't do anything else while rendering: the captured figure must be visible all the
time.

 Movie (time): All figures: Instead of capturing one figure only, it captures them all.
Arrange your figures the way you want and create a movie of all your workspace at once.

Advanced

Contact sheets
A contact sheet is a large image representing many time frames of the same figure.

 Same recommendations as for movies: if you don't want the final image to be too big, reduce
the size of the figure, zoom in, move, hide the colorbar. Keep the figure visible during the
capture.

 At the end, the image is displayed in an image viewer with which you can zoom (menu or
wheel), move (click+move) and save the image (File > Save as).

 Example for the standard average, run#01:

Advanced

Edit the figures


All the figures can be edited with the Figure popup menu:

If you select both "Matlab controls" and "Plot edit toolbar", you will get all the tools available
in the Matlab environment to explore the data and edit the figure. Select the button "Edit plot" to edit the graphic properties of an object (e.g. select a signal, then right-click on it to edit its properties), or unselect it to get back to the regular Neurostorm figure interactivity.

Advanced

Mouse shortcuts
Scroll

 Mouse wheel: Zoom in / zoom out


 Control + mouse wheel: Change the length of the displayed time window (2D Layout)
 Control + mouse wheel: Vertical zoom (time series)

Click

 Left click + move: Rotate (3D) or select (time)


 Middle click + move: Move in zoomed figure (i.e. panning)
 Left click + right click + move: Move in zoomed figure (i.e. panning)
 Shift + left click: Force setting the current time, ignoring if a line was clicked (time
series)
 Right click + move: Vertical zoom (time series)
 Right click + move: Select sensors (2D topography)
 Right click: Popup menu
 Double-click: Restore initial view

Click on something

 Click on a line: Select a sensor


 Shift + click on a line: Select a sensor and unselect all the others (2D topography)
 Click on the colorbar + move: Change contrast (up/down) and brightness (left/right)

Advanced

Keyboard shortcuts
Here is a memo of all the keyboard shortcuts for time series and topography figures. If you don't
remember them, you can find most of them in the figure popup menus.

 Arrows: Left, right, PageUp, PageDown: Move in time


 Delete: Mark selected sensors as bad
 Shift + Delete: Mark non-selected sensors as bad (=keeps only the selected sensors)
 Enter: View time series for the selected sensors
 Escape: Unselect all the selected sensors
 Shift + Escape: Set all the bad sensors as good (=brings back all the channels in the
display)
 Ctrl + A: Show axis on 3D figures (X,Y,Z)
 Ctrl + B: Set trial as bad
 Ctrl + D: Dock/undock figure in Matlab's figures list
 Ctrl + E: Show sensors markers and labels (E=Electrode) or add an event marker
(E=Event)
 Ctrl + F: Copy the figure, remove all the callbacks and detach it from the Neurostorm figure management
 Ctrl + I: Save figure as image
 Ctrl + J: Open figure as an image
 Ctrl + R: Open Time series view (R=Recordings)
 Ctrl + S: Open Sources view (S=Sources)
 Ctrl + T: Open 2D sensor cap view (T=Topography)
 Shift + letter: Change selected montage
 F1, F2, F3: with or without Shift, calls the database navigator (F1=subject,
F2=condition, F3=file)
 1, 2, 3, 4, 5, 6, 7, 8, 9, 0: Set a pre-defined 3D view
 + / -: Increase/decrease the channel gain (vertical zoom for time series)
 =: Apply view to all figures
 *: Apply montage to all figures

 Notes for Mac users:

o PageDown = Fn + DOWN
o PageUp = Fn + UP

o F1 = Fn + F1
o Mouse wheel = Two finger up/down on the MacBook pad

Tutorial 18: Colormaps


Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet, Rana El Khoury Maroun

When displaying signals on the sensor array or on the cortex surface, we need to convert the
amplitude of the signals into colors. The way the values are mapped to colors has a lot of
influence on the visual interpretation of the figures. The selection of the appropriate colormap is
an important step of the data exploration.

Colormap menus
Neurostorm maintains a separate colormap configuration for each data type: anatomy, EEG, MEG, sources, stat, time, time-frequency, etc. You can go to the Colormaps menu in the main window to see this list.

Usually, you will use only popup menus from specific figures to edit the colormaps.

 Open a topography view for the standard average (right-click > MEG > 2D Sensor cap).
 Right-click on the figure, you will only see the menu "Colormap: MEG recordings".

 If you modify a colormap, the changes will be applied to all the figures, saved in your
user preferences and available the next time you start Neurostorm.

Standard color arrays


A colormap is an array of colors that are indexed and then mapped to values. It is represented by a [Nx3] matrix, where N is the number of colors available in it. Each color is coded with three values corresponding to its relative levels of red, green and blue. In Matlab, the colors are coded between 0 and 1. For an example, type "jet" in the Matlab command window: it returns the default values for the "jet" colormap.
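As a quick illustration (standard Matlab only, nothing specific to Neurostorm), you can check at the Matlab prompt that a colormap is just such an [Nx3] matrix:

% Standard Matlab colormap functions return [Nx3] matrices of RGB values in [0,1]
cmap = jet(8);           % [8x3] matrix: 8 colors, 3 columns (red, green, blue)
size(cmap)               % returns [8 3]
cmap(1,:)                % first color of the array (a shade of blue for "jet")
figure; imagesc(peaks(40)); colormap(cmap); colorbar;   % apply it to a test figure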

We offer two ways of creating this array of colors in Neurostorm: you can use standard color
arrays (modulated in contrast and brightness) or define your own.

Colormap name: The standard colormaps are referred to with names (bone, gray, jet, rwb, etc).
Pick a different color set in the menu to update all the figures with similar data types.

Brightness: Moves the center of the color array up and down. Example values: -80, 0, +80.
The term "brightness" is not well adapted for rwb, jet or hsv. It makes more sense for colormaps with only one tint that varies in intensity, such as the gray colormap. We use it here for lack of a better word.

Contrast: Changes the distance between the first and last colors. Example values: -80,0,+80.

You can modify these values by clicking directly on the color bar. Hold the mouse button,
then:

 Move up/down to change the brightness,


 Move left/right to change the contrast.

Advanced

Custom color arrays

To edit your own list of colors, use the menu "New..." at the end of the list of standard
colormaps.

 Open a 2D sensor cap view for the MEG sensors for the standard average (Run#01).
Right-click on the figure > Colormap: MEG recordings > Colormap > New.
 Enter the name of the new colormap and the number of colors it will contain.

 Each color in this color array is represented with a little square. The arrows in the second row can be selected and deleted (delete key) or edited (double-click). They represent the key colors between which Matlab interpolates the other colors. Click on the second row to add more key colors.

 Once you are satisfied with your new colormap, click on [Ok].
It will update the figure. A new menu is now available in the list of colormap names.

 To delete the custom colormap currently selected, use the menu "Delete".

Color mapping
After defining the colors, we need to define how to map them to the data values. The information necessary to do this color mapping is the value corresponding to the first and the last colors. The color indices are scaled linearly between these two extrema.
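To make this linear mapping explicit, here is a small Matlab sketch (plain Matlab, not Neurostorm code; the variable names are arbitrary) that converts data values into indices of a color array, given the bounds of the colorbar:

cmap = jet(256);                    % [256x3] color array
vmin = -200; vmax = 200;            % bounds of the colorbar (e.g. in fT)
values = [-200, -50, 0, 120, 200];  % data values to display
% Scale the values linearly between the two extrema, then pick the matching colors
idx = round(1 + (values - vmin) ./ (vmax - vmin) * (size(cmap,1) - 1));
idx = max(1, min(size(cmap,1), idx));   % clip values that fall outside [vmin, vmax]
colors = cmap(idx, :);              % [5x3] RGB colors used to display these values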

Absolute values: Display the absolute values of the recordings, instead of the original values.
This has the effect of constraining the color mapping to positive values only. It is not very useful
for exploring the recordings: in EEG and MEG, the sign of the values is very important.

Maximum: Method used to estimate the minimum and maximum values of the colorbar.

 Global: The bounds of the colormap are set to the extrema values found in the entire file. Example: if you use the rwb colormap and the min and max values are [-200fT, +200fT], the colors will be mapped in the following way: -200fT is blue, +200fT is red, 0fT is white. The mapping is identical for all the time samples. If you select this option at t=0ms, the 2D topography figure will turn almost white because the average values are low before the brain response.

 Local: Uses the local min and max values at the current time frame AND for each figure, instead of the global min and max. Example: at t=0ms, the extrema values are roughly [-30fT, +30fT], so the colors will be mapped in order to have: -30fT=blue and +30fT=red.
 Custom: You can manually set the min/max bounds of the colorbar. They do not have to be symmetrical around zero. If you set the values to [-40, +20] fT, the white colors would correspond to values around -10fT, and values around 0fT would be displayed in pale red.

 You can usually keep the option Local when looking at recordings, as it is easier to read. But keep in mind that flashy colors do not necessarily mean strong effects: it is always a matter of colormap configuration.

Range: Use symmetrical or non-symmetrical colormaps.

 [-max, max]: Symmetrical colorbar around the absolute value of the maximum. Example: at t=170ms, the range of values is [-220fT, +90fT], so the color mapping used is [-220fT, +220fT].
 [min, max]: Uses the real min and max. Useful for displaying values that are not centered on zero. Example: at t=170ms, the mapping used is [-220fT, +90fT], so white does not correspond to zero.

 This option is ignored when the option "Maximum: Custom" is selected.

Advanced

Colormap management
Remember that when you change any of the options above, it is saved in your user preferences.
If you close Neurostorm and start it again, the colormap configuration stays the same.

To reset the colormap to its default values:

 Double-click on the color bar, or

 Use the menu Restore defaults.

Two additional menus can help you manipulate the colormaps:

 Display colorbar: In case you want to hide the color bar. Useful for contact sheets and
movies.
 Permanent menu: Open a window that displays this colormap sub-menu, for faster
access.

Advanced

New default colormaps


Recently, the default colormaps of Neurostorm were changed because the previous defaults lacked important attributes of a good colormap: they did not have linear lightness and they were not perceptually uniform. This can either cause details in the visualization to be hidden or create features that do not exist in the underlying data, which results in a distortion of the perceived pattern. For that reason, new default colormaps were added to better represent the underlying data.

Here are the new colormaps created with their chosen names:

Three other colormaps were added: viridis and magma (taken from mpl colormaps) as well as a
variation of viridis (viridis2). The colormaps were created using the viscm tool, which allows
designing a colormap that has linear lightness and hue changes.

This paper presents the work done in more detail: colormap_optimization.pdf

JET Alternative

A new colormap created by Google, the Turbo colormap, was recently added. It is presented as an improved, perceptually more linear rainbow colormap that can be used as an alternative to the popular JET colormap.

More information can be found on the following Google Blog post:


https://ai.googleblog.com/2019/08/turbo-improved-rainbow-colormap-for.html

Tutorial 20: Head modeling


Authors: Francois Tadel, Elizabeth Bock, John C Mosher, Richard Leahy, Sylvain Baillet

The following tutorials describe how cerebral currents can be estimated from the MEG/EEG
recordings we have processed so far. To achieve this, we need to consider two distinct modeling
problems: the modeling of the electromagnetic properties of the head and of the sensor array
(a.k.a. head model or forward model), and the estimation of the brain sources which produced
the data, according to the head model in question. That second step is known as source
modeling or solving an inverse problem. It requires that forward modeling of head tissues and sensor characteristics is completed first. This tutorial explains how to compute a head model for the participant in the auditory oddball experiment.

Advanced

Why estimate sources?


Reconstructing the activity of the brain from MEG or EEG recordings involves several
sophisticated steps. Although Neurostorm simplifies the procedures, it is important to decide
whether source modeling is essential to answer the neuroscience question which brought you to
collect data in the first place.

If one of your primary objectives is to identify and map the regions of the brain involved in a
specific stimulus response or behavior, source estimation can help address this aspect. Empirical
interpretations of sensor topographies can inform where brain generators might be located: which
hemisphere, what broad aspect of the anatomy (e.g., right vs. left hemisphere, frontal vs.
posterior regions). Source estimation improves anatomical resolution further from the
interpretation of sensor patterns. The spatial resolution of MEG and EEG depends on source
depth, the principal orientation of the neural current flow, and overall SNR: still, a sub-
centimeter localization accuracy can be expected in ideal conditions, especially when contrasting
source maps between conditions in the same participant. As for other imaging modalities, spatial
resolution of group-level effects (i.e. after averaging across multiple participants) is limited by
the accuracy of anatomical registration of individual brain structures, which are very variable
between participants, and intersubject variations in functional specialization with respect to
cortical anatomy.

Source mapping is a form of spatial deconvolution of sensor data. In EEG in particular, scalp
topographies are very smooth and it is common that contributions from distant brain regions
overlap over large clusters of electrodes. Moving to the source space can help discriminate between contributing brain regions.

In MEG, source maps can be a great asset to alleviate some issues that are specific to the
modality. Indeed in MEG and contrarily to EEG, the head of the participant is not fixed with
respect to sensor locations. Hence data sensor topographies depend on the position of the
subject's head inside the MEG sensor array. Therefore, between two runs of acquisition, or
between subjects with different head shapes and sizes and positions under the helmet, the same
MEG sensors may pick up signals from different parts of the brain. This problem does not
exist in EEG, where electrodes are attached to the head and arranged according to standard
positions.

Another important point to consider when interpreting MEG sensor maps and that can be solved
by working in the MEG source space instead, is that MEG manufacturers use different types of
sensor technology (e.g., magnetometers vs. gradiometers; axial vs. tangential gradiometers, etc.
yielding different physical measures). This is not an issue with EEG, with essentially one sensor
type (electrodes, dry or active, all measuring Volts).

Nevertheless, if your neuroscience question can be solved by measuring signal latencies over
broad regions, or other aspects which do not depend crucially on anatomical localization (such as
global signal properties integrated over all or clusters of sensors), source modeling is not
required. Sorting out this question early will influence the time and computational resources required for data analysis (source analysis multiplies the needs in terms of disk storage, RAM and CPU performance).

Advanced

The origins of MEG/EEG signals


To better understand how forward and inverse modeling work, we need to have a basic
understanding of the physiological origins of MEG/EEG signals. Note that, as always with
modeling, we need to deal with various degrees of approximation.

Overall, it is assumed that most of - but not exclusively - the MEG/EEG signals are generated by
postsynaptic activity of ensembles of cortical pyramidal neurons of the cerebral cortex. The
reason is essentially in the morphology and mass effect of these cells, which present elongated
shapes, and are grouped in large assemblies of cells oriented in a similar manner
approximately normal to the cortex. Mass effects of close-to-simultaneous changes in post-
synaptic potentials across the cell group add up in time and space. These effects can conveniently
be modeled at a mesoscopic spatial scale with electric dipoles distributed along the cortical
mantle (green arrows in figure below). Note that there is growing evidence that MEG and EEG
are also sensitive to deeper, cortical and subcortical structures, including brain nuclei and the
cerebellum. Neurostorm features advanced models of these structures, as an option to your
analysis. The emphasis in this tutorial is on cortical source models, for simplicity.

The primary and volume currents generated by current dipoles create differences in electrical
potentials and magnetic fields that can be detected outside the head. They can be measured with
electrodes placed on the skin (EEG, with respect to a reference) or very sensitive magnetic
detectors (MEG).


Figure: Matti Hamalainen, 2007

Advanced

Source models
Dipole fitting vs distributed models

MEG/EEG source estimation consists in modeling brain activity with current dipoles. A current
dipole is a convenient model equivalent to the net post-synaptic electrophysiological activity of
local assemblies of neurons. Two main approaches have been explored for source MEG/EEG
estimation: dipole fitting methods - where the position and amplitude of one to a few equivalent
current dipoles (ECD) are estimated over relatively short time windows - and distributed
models - where the location (and typically, the orientation) of a large number of dipoles is fixed;
the dipoles sample a spatial grid covering the entire brain volume or the cortical surface -
requiring estimation of the amplitude of a vast number of dipoles in a fixed grid at each time
point.

Equivalent dipole fitting approaches are quite straightforward and can be adequate when the
number of brain regions expected to be active is small (ideally only one). Therefore, it is most
adequate for responses at early post-stimulus latencies. They cannot generalize to capture
complex dynamics over extended periods of time (epochs), and the associated estimation
techniques are quite sensitive to initial conditions (how many dipoles to fit? where does the
search start? etc). Our strategy in Neurostorm is to promote distributed source models, which are
less user dependent, can generalize to all experimental conditions, and yield time-resolved image
volumes that can be processed in many different, powerful ways (group statistics, spatial
segmentation, use of regions of interest, correspondence with fMRI, etc.)

Source constraints

When opting for distributed source models, the positions and orientations of the elementary
dipoles that will define the "voxel" grid of the source images produced need to be defined. This
set of dipoles is called the source space. By default, Neurostorm constrains the source space to
the cortex, where signal-to-noise and sensitivity is maximum in MEG/EEG. Note however that
more complete models that include subcortical structures and the cerebellum are available in
Neurostorm. Therefore, one decision you need to make before proceeding with source imaging is
whether more complete source spaces are required to answer your neuroscience question.

For this tutorial, we use the simple approach where current dipoles are automatically assigned to
each of the vertices of the cortical surface (see the nodes in the grey mesh in the leftmost image
below). When importing the anatomy of the subject, we downsampled the cortex surface to
15,000 vertices.

This default number of 15,000 vertices is empirical. In our experience, this balances the adequate
geometrical sampling of cortical folds with the volume of data to be analyzed. To use a smaller
number of vertices (sources) oversimplifies the shape of the brain; to use more vertices yields
considerably larger data volumes, without necessarily adding to spatial resolution, and may lead
to practical hurdles (CPU and memory issues.)

Orientation constraints

After defining the locations of the dipoles, we also need to define their orientations. Neurostorm
features two main options: unconstrained dipole orientations or orientations constrained
perpendicularly with respect to the cortical surface.

In the unconstrained case, three orthogonal dipoles are assigned to each vertex of the cortex
surface. This triplet can account mathematically for local currents flowing in arbitrary directions.
The total number of elementary sources used in that case amounts to 45,000 dipoles (3
orientations x 15,000 vertices).

In the constrained case, one dipole is assigned to each vertex with its orientation perpendicular
to the cortical surface. The benefit to this option is that it restricts the number of dipoles used to
15,000 (one per vertex). Results are also easier to process and visualize. However, there are
some instances where such a constraint is too strong and may bias source estimation, for instance
when the individual anatomy is not available for the participant.

In the Neurostorm workflow, this orientation constraint is offered as an option of the inverse
model and will be discussed in the following tutorial sections. In the present tutorial, we compute
the forward model corresponding to a grid of 15,000 cortical sources without orientation
constraints (hence a total of 45,000 dipoles). Note that the orientation constraint can be applied
subsequently in the workflow: We do not have to take such hard decision (constrained vs.
unconstrained source orientation) at this stage.

Whole-brain model

The constraint of restricting source locations to the cortical surface can be seen as too restrictive
in some cases, especially if subcortical areas and cerebellum are regions of interest to the study.
Neurostorm features the possibility to use the entire brain volume as source space (see green
dots below: they represent dipole locations sampling the entire brain volume). One minor
drawback of such a model is that the results produced are impractical to review. We encourage
users interested in more sophisticated approaches to add non-cortical structures to their
MEG/EEG model to consult the sections concerning Volume and Mixed Head Volumes in the
advanced tutorials about source modeling.

Advanced

Forward model
We now need to obtain a model that explains how neural electric currents (the source space)
produce magnetic fields and differences in electrical potentials at external sensors (the sensor
space), given the different head tissues (essentially white and grey matter, cerebrospinal fluid
(CSF), skull bone and skin).

 The process of modeling how data values can be obtained outside of the head with
MEG/EEG from electrical current dipoles in the brain is called forward modeling or
solving a forward problem.
 In Neurostorm, we call the outcome of this modeling step a "head model", a.k.a.
forward model, leadfield matrix or gain matrix in the MEG/EEG literature.
 In this tutorial we will use the default source space: a lower-resolution cortical surface
representation, with 15,000 vertices, serving as location support to 45,000 dipoles (see
above: models with unconstrained orientation). Note that we use the terms dipole and
source interchangeably.
 We will obtain a matrix [Nsensors x Nsources] that relates the activity of the 45,000
sources to the sensor data collected during the experiment.
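In matrix form, the forward problem is a simple multiplication: the predicted sensor values are the gain matrix multiplied by the amplitudes of the elementary dipoles. A minimal sketch, assuming the head model structure has been exported to Matlab (the variable names below are hypothetical):

% HeadModel.Gain: [Nsensors x 3*Nvertices] leadfield (unconstrained orientations)
[nSensors, nSources] = size(HeadModel.Gain);
nTime = 100;
SourceAmp = zeros(nSources, nTime);          % hypothetical source amplitudes, in A.m
SourceAmp(1:3, :) = 1e-9 * randn(3, nTime);  % activate the three dipoles of the first vertex
SensorData = HeadModel.Gain * SourceAmp;     % forward problem: [Nsensors x nTime]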

Available methods for MEG forward modeling

 Single sphere: The head geometry is simplified as a single sphere, with homogeneous
electromagnetic properties.
 Overlapping spheres: Refines the previous model by fitting one local sphere under each
sensor.
 OpenMEEG BEM: Symmetric Boundary Element Method from the open-source
software OpenMEEG.
 DUNEuro FEM: Finite Element Method from the open-source software DUNEuro.

Models recommended for each modality

 MEG: Overlapping spheres.


Magnetic fields are less sensitive to heterogeneity of tissue in the brain, skull and scalp
than are the scalp potentials measured in EEG. We have found that this locally
fittedspheres approach (one per sensor) achieves reasonable accuracy relative to more
complex BEM-based methods: [Leahy 1998], [Huang 1999].
 EEG: OpenMEEG BEM.
Since EEG measures differential electric potentials on the scalp surface, it depends on the effects of volume conduction (or secondary currents) to produce the signals we measure. As a result, EEG is very sensitive to variations in conductivity not only in the tissue near the brain's current sources but also within the skull and scalp. Some tissues are very conductive (brain, CSF, skin), others much less so (skull).
integrating their properties correctly. When computing a BEM model is not an option, for
instance if OpenMEEG crashes for unknown reasons, then Berg's three-layer sphere can
be an acceptable option.
 sEEG/ECoG: The OpenMEEG BEM option is the only model available for this data
modality.

Computation
The forward models depend on the anatomy of the subject and characteristics of EEG/MEG
sensors: the related contextual menus are accessible by right-clicking over channel files in the
Neurostorm data tree.

 In the imported Run#01, right-click on the channel file or the folder > Compute head
model.
Keep the default options selected: Source space=Cortex, Forward model=Overlapping
spheres.

 A new file will then appear in the database. Head model files are saved in the same folder as the channel file.
This file is required for EEG/MEG source estimation: this next step will be described in detail in the following tutorial sections.
 Right-click on the head model file > Check spheres. This window shows the spheres that
were estimated to compute the head model. You can visualize and verify their location by
following the indications written in green at the bottom of the window: use left/right
arrows. At each step, the current sensor marker is displayed in red, and the sphere shown
is the local estimation of the shape of the inner skull immediately below the sensor.

 Although in principle the overlapping-sphere method requires the inner skull surface, this surface is not always available for every participant. If not available, a pseudo inner-skull surface is estimated by Neurostorm using a dilated version of the cortex envelope.

Repeat the same operation for the other file. We now have two different acquisition runs with
two different relative positions of the head and of the sensors. We now need to compute two
different head models (one per head/sensor location set).

 In the imported Run#02, right-click on the channel file > Compute head model.

Advanced

Database explorer
This section contains additional considerations about the management of the head model files.

 If multiple head models were computed in the same folder (e.g., after experimenting
different forward models), one will be displayed in green and the others in black. The
model in green is selected as the default head model: it will be used for all the following
computation steps (e.g., source estimation). To change the default to another available
head model, double-click on another head model file (or right-click over that file > Set as
default head model).
 You can use the database explorer for batching the computation of head models (across
runs, subjects, etc.). The "Compute head model" item is available in contextual menus at
multiple instances and all levels of the database explorer. The same forward model type is computed recursively for all the folders contained in the selected node(s) of the database explorer.

Advanced

On the hard drive


Right-click on any head model entry > File > View file contents:


Structure of the head model files: headmodel_*.mat

 MEGMethod: Type of forward model used for MEG sensors ('os_meg', 'meg_sphere',
'openmeeg' or empty).
 EEGMethod: Type of forward model used for EEG sensors ('eeg_3sphereberg',
'openmeeg' or empty).
 ECOGMethod: Type of forward model used for ECoG sensors ('openmeeg' or empty).
 SEEGMethod: Type of forward model used for sEEG sensors ('openmeeg' or empty).
 Gain: Leadfield matrix, [Nsensors x Nsources] (in practice, equivalent to [Nsensors x
3*Nvertices]).
 Comment: String displayed in the database explorer to represent this file.
 HeadModelType: Type of source space used for this head model ('surface', 'volume',
'mixed').
 GridLoc: [Nvertices x 3], (x,y,z) positions of the grid of source points. In the case of a
surface head model, it corresponds to a copy of the 'Vertices' matrix from the cortex
surface file.
 GridOrient: [Nvertices x 3], directions of the normal to the surface for each vertex point
(copy of the 'VertNormals' matrix of the cortex surface). Empty in the case of a volume
head model.
 GridAtlas: In the case of mixed head models, contains a copy of the "Source model
options" atlas structure that was used for creating the model.

 SurfaceFile: Relative path to the cortex surface file related with this head model.
 Param: In case of a surface head model, it contains a description of the sphere that was
estimated for each sensor (Center/Radius).

 History: Date and brief description of the method used for computing the head model.

Gain matrix

The Gain matrix is the most important piece of information in the structure. It stores the leadfields for 3 orthogonal orientations (x,y,z) at each grid point (p1, p2, etc). The information relative to each pair sensor <-> grid source point is stored in successive columns of the matrix, ordered as: [p1_x, p1_y, p1_z, p2_x, p2_y, p2_z, ...]. For the tutorial introduction dataset, with 15002 sources, the gain matrix has 45006 columns.
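Given this column ordering, the [Nsensors x 3] leadfield of any single grid point can be extracted directly from the unconstrained gain matrix. A minimal sketch, assuming the HeadModel structure has been exported to Matlab (see the export instructions just below; the vertex index is arbitrary):

iVertex = 150;                               % pick any grid point (1..Nvertices)
iCols   = 3*(iVertex-1) + (1:3);             % columns [p_x, p_y, p_z] for this point
Lf      = HeadModel.Gain(:, iCols);          % [Nsensors x 3] leadfield of this source
% Same leadfield constrained to the normal orientation of the cortex at this vertex:
Lf_constr = Lf * HeadModel.GridOrient(iVertex, :)';   % [Nsensors x 1]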

To convert this unconstrained leadfield matrix to that of an orientation-constrained model,


where the orientation of each dipole is fixed and normal to the cortex surface:

 Export the head model file to the HeadModel structure: Right-click > File > Export to
Matlab.
 At the Matlab prompt:
> Gain_constrained = bst_gain_orient(HeadModel.Gain, HeadModel.GridOrient);
 The dimension of the output matrix is three times smaller (now only one source
orientation at each location): [Nsensors x Nvertices]

Useful functions

 in_bst_headmodel(HeadModelFile, ApplyOrient, FieldsList): Read contents of the head


model file.
 bst_gain_orient(Gain, GridOrient): Apply orientation constraints.

Tutorial 22: Source estimation


Authors: Francois Tadel, Elizabeth Bock, Rey R Ramirez, John C Mosher, Richard M Leahy,
Sylvain Baillet

You have in your database a forward model that explains how the cortical sources determine the
values on the sensors. This is useful for simulations, but what we need next is to solve the
inverse problem: how to estimate the sources when we have the recordings. This tutorial
introduces the tools available in Neurostorm for solving this inverse problem.

Ill-posed problem
Our goal is to estimate the activity of the thousands of dipoles described by our forward model.
However we only have a few hundred spatial measurements as input (the number of sensors).

This inverse problem is ill-posed, meaning there are an infinite number of source activity
patterns that could generate exactly the same sensor topography. Inverting the forward model
directly is impossible, unless we add some strong priors to our model.

Wikipedia says: "Inverse problems are some of the most important and well-studied
mathematical problems in science and mathematics because they tell us about parameters that we
cannot directly observe. They have wide application in optics, radar, acoustics, communication
theory, signal processing, medical imaging, computer vision, geophysics, oceanography,
astronomy, remote sensing, natural language processing, machine learning, nondestructive
testing, and many other fields."

Many solutions to the inverse problem have been proposed in the literature, based on different
assumptions on the way the brain works and depending on the amount of information we already
have on the effects we are studying. Among the many methods available, in Neurostorm, we
present three general approaches to the inverse problem that represent the most widely used
methods in MEG/EEG source imaging: minimum-norm solutions, beamformers, and dipole
modeling.

These approaches have the advantage of being implemented in an efficient linear form: the
activity of the sources is a linear recombination of the MEG/EEG recordings, such that it is
possible to solve the inverse problem by applying a linear kernel (in the form of a matrix that
multiplies the spatial data at each point in time) which is easily stored. Subsequent data
manipulation and source visualization is then much simpler, as are comparisons among these
techniques.
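In other words, the source time series are obtained with a single matrix product. The sketch below only illustrates this idea; the variable names are assumptions used for illustration, not the exact names used in the files:

% Hypothetical variables, for illustration only:
%   ImagingKernel : [Nsources x Nchannels] linear inverse operator
%   F             : [Nchannels x Ntime]    sensor recordings
%   iChannels     : indices of the channels used to compute the kernel
SourceMap = ImagingKernel * F(iChannels, :);    % [Nsources x Ntime] source time series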

Below we first describe the minimum norm imaging approach and its options, followed by the
beamformer and dipole modeling, both of which are actually quite similar and only use a subset
of the options available in the minimum norm approach.

Source estimation options


Before we start estimating the sources for the recordings available in our database, let's start with
an overview of the options available. This section focuses on the options for the minimum norm
estimates. The other methods are described in advanced sections at the end of this page.

Method

Minimum norm imaging

 Estimates the sources as the solution to a linear imaging problem, that can be interpreted
in various ways (Tikhonov regularization, MAP estimation). The method finds a cortical
current source density image that approximately fits the data when mapped through the
forward model. The "illposedness" is dealt with by introducing a regularizer or prior in
the form of a source covariance that favors solutions that are of minimum energy (or L2
norm).
 Min norm requires specification of a noise and a source covariance matrix. Users can
estimate a noise covariance matrix directly from recordings (for example, using pre-stim
recordings in event-related studies) or simply assume a white-noise identity matrix covariance, as described below. The source covariance prior is generated from the options discussed in detail below.

 In contrast to the LCMV beamformer, in which the data covariance is estimated directly
from the data, for minimum norm the data covariance is determined by the choice of
source and data covariances and the forward model.
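To make this more concrete, here is a schematic Matlab sketch of the standard Tikhonov-regularized linear estimator described above. This is a textbook formulation given for illustration only, not the exact Neurostorm implementation, and all variable names are arbitrary:

% G      : [Nchannels x Nsources] forward model (gain matrix)
% Cnoise : [Nchannels x Nchannels] noise covariance
% R      : [Nsources x Nsources]  source covariance prior (e.g. depth-weighted)
% lambda : regularization parameter (related to the assumed SNR)
K = R * G' / (G * R * G' + lambda^2 * Cnoise);   % [Nsources x Nchannels] imaging kernel
SourceEstimate = K * Recordings;                 % Recordings: [Nchannels x Ntime]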

LCMV beamformer

 Linearly constrained minimum variance (LCMV) beamformers compute an estimate of source


activity at each location through spatial filtering. The spatial data are linearly combined with
weights (the spatial filter) chosen separately for each location to ensure that the strength of a
dipolar source at that location is correctly estimated (assuming a perfect head model).
 The remaining degrees of freedom in selecting the weights are used to minimize the total
output power. This has the effect of suppressing contributions of sources from other locations
to the estimated signal at the location of interest.
 It should be noted, however, that correlation between sources can at times lead to partial or full
signal cancellation, and the method can be sensitive to accuracy of the head model.

 LCMV beamformers require specification of the data covariance matrix, which is


assumed to include contributions from background noise and the brain signals of interest.
In practice, the data covariance is estimated directly from the recordings. A linear kernel
(matrix) is formed from this data covariance matrix and the forward model. This kernel
defines the spatial filters applied at each location. Multiplying by the data produces an
output beamformer scanning image. These images can either be used directly, as is
common practice with LCMV methods, or the largest peak(s) can be fit with a dipolar
model at every time instance.

Dipole modeling [TODO]

 In some sense this is the simplest model: we fit a single current dipole at each point in time to
the data. We do this by computing a linear kernel (similar to the min norm and LCMV methods)
which when multiplied by the data produces a dipole scanning image whose strongest peak
represents the most likely location of a dipolar source.

 As with LCMV, the dipole scanning images can be viewed directly, or the single best
dipole fit (location and orientation) computed, as described in (LINK ?).

Recommended option

 Still under much debate, even among our Neurostorm team. In cases where sources are
expected to be focal (e.g. interictal spikes in epileptic patients, or early components of sensory
evoked responses) the single dipole can be precise in terms of localization. For cases where
sources are expected to be distributed, the min norm method makes the least restrictive source
assumptions. LCMV beamformers fall somewhere between these two cases.

 One advantage of Neurostorm is that all three approaches can be easily run and
compared. If the results are concordant among all three techniques, then our underlying
assumptions of source modeling, head modeling, and data statistics are confirmed. If the
results are disparate, then a more in-depth study is needed to understand the consequences of our assumptions and therefore which technique may be preferred. The next several sections discuss in detail the options associated with the "minimum norm imaging" method.

Measure [TODO]

The minimum norm estimate computed by Neurostorm represents a measure of the current found
in each point of the source grid (either volume or surface), the units are strictly kept in A-m, i.e.
we do not normalize by area (yielding A/m, i.e. a surface density) or volume (yielding A/m^2,
i.e. a volume density). Nonetheless, it is common to refer to these maps as "source density" or "current density" maps when displayed directly.

More commonly, however, current density maps are normalized. The value of the estimated
current density is normalized at each source location by a function of either the noise or data
covariance. Practically, this normalization has the effect of compensating for the effect of depth
dependent sensitivity and resolution of both EEG and MEG. Current density maps tend to
preferentially place source activity in superficial regions of cortex, and resolution drops
markedly with sources in deeper sulci. Normalization tends to reduce these effects as nicely
shown by (LINK ?). We have implemented the two most common normalization methods:
dSPM and sLORETA.

 Current density map: Produces a "depth-weighted" linear L2-minimum norm estimate


current density using the method also implemented in Matti Hamalainen's MNE software.
For a full description of this method, please refer to the MNE manual, section 6, "The
current estimates". Units: picoampere-meter (pA-m).
 dSPM: Implements dynamical Statistical Parametric Mapping (Dale, 2000). The MNE is
computed as above. The noise covariance and linear inverse kernel are then used to also
compute estimates of noise variance at each location in the current density map. The
MNE current density map is normalized by the square root (standard deviation) of these
variance estimates. As a result dSPM gives a z-score statistical map. Units: unitless "z".
 sLORETA: Standardized LOw Resolution brain Electromagnetic TomogrAphy (Pascual-Marqui, 2002). As with dSPM, the MNE current density map is normalized at each point. While dSPM computes the normalization based on the noise covariance, sLORETA replaces the noise covariance with the theoretical data covariance, as is assumed in the minimum norm estimation. The theoretical data covariance is the noise covariance plus the theoretical signal covariance. As discussed in (Pascual-Marqui, 2002), this theoretical data covariance simplifies sLORETA to an alternative form that results in a "resolution" kernel (eq. (17) of Pascual-Marqui, 2002). (We note that the theoretical data covariance is not the experimental data covariance estimated directly from the data, as is used in beamformers.) Units: unitless.

 Recommended option: Discussed in the section Source map normalization below.

Source model: Dipole orientations [TODO]

At each point in the source grid, the current dipole may point arbitrarily in three directions. In
this section of the options, we describe alternatives for constraining orientation:

 Constrained: Normal to cortex: Only for "surface" grids. At each grid point, we model
only one dipole, oriented normally to the cortical surface. This is based on the anatomical
observation that in the cortex, the pyramidal neurons are mainly organized in macro-
columns that are perpendicular to the cortex surface.
Size of the inverse operator: [Nvertices x Nchannels].
 Loose: Only for "surface" grids. As introduced by (LINK ?), at each point in the surface
grid the dipole direction is constrained to be normal to the local cortical surface. Two
additional elemental dipoles are also allowed, in the two directions tangential to the
cortical surface. As contrasted with "unconstrained," these two tangential elemental
dipoles are constrained to have an amplitude that is a fraction of the normal dipole,
recommended to be between 0.1 and 0.6. Thus the dipole is only "loosely" constrained to
be normal to the local cortical surface.
Size of the inverse operator: [3*Nvertices x Nchannels].
 Unconstrained: Either "surface" or "volume" grids. At each grid point, we leave
undefined the assumed orientation of the source, such that three "elemental" dipoles are
needed to model the source. In Neurostorm, our elemental dipoles are in the x, y, and z
("Cartesian") directions, as compared to other software that may employ polar
coordinates. Thus for "N" vertices, we are calculating the estimate for "3*N" elemental
dipoles.
Size of the inverse operator: [3*Nvertices x Nchannels].
 Recommended option: The constrained options use one dipole per grid point instead of
three, therefore the source files are smaller, faster to compute and display, and more
intuitive to process because we don't have to think about recombining the three values
into one. On the other hand, in the cases where its physiological assumptions are not
verified, typically when using an MNI template instead of the anatomy of the subject, the
normal orientation constraint may fail to represent certain activity patterns.
Unconstrained models can help in those cases. See further discussion on constrained vs
unconstrained solutions below in the section "Why does it look so noisy?".

Sensors

We automatically detect and display the sensors found in your head model. In the example
above, only one type of sensors is found ("MEG"). You can select one or all of the sensors found
in your model, such as MEG and EEG.

However, cross-modality calculations are quite dependent on the accuracy by which you have
provided adequate covariance calculations and consistency of the head models across sensor
types. As of Spring of 2018, we have also elected to NOT account for cross-covariances between
different sensor types, since regularization and stability of cross-modalities is quite involved. For
multiple sensor types, the recommendation is that you try each individually and then combined,
to test for discordance.

Computing sources for an average


Using the above selections, we now give explicit directions on how to compute and visualize the sources.

 In Run#01, right-click on the average response for the deviant stim > Compute sources
[2018].
Select the options: Minimum norm imaging, Current density map, Constrained:
Normal to cortex.

 The other menu "Compute sources" launches the interface that was used previously in
Neurostorm. We are going to keep maintaining the two implementations in parallel for a
while for compatibility and cross-validation purposes.

 The result of the computation is displayed as a dependent file of the deviant average
because it is related only to this file. In the file comment, "MN" stands for minimum
norm and "Constr" stands for "Constrained: normal orientation".

Display: Cortex surface


 Right-click on the sources for the deviant average > Cortical activations > Display on
cortex.

 Double-click on the recordings for the deviant average to have a time reference.
In the filter tab, add a low-pass filter at 40Hz.

 Change the current time (click on the time series figure or use the keyboard arrows) and note it
updates the source maps in the 3D figure. You can also use all the menus and shortcuts
introduced in the anatomy tutorial (like setting the view with the keys from 0 to 6).
 You can edit the display properties in the Surface tab:

o Amplitude: Only the sources that have a value greater than a given percentage of
the colorbar maximum are displayed.
o Min size: Hide all the small activated regions, i.e. the connected color patches that
contain a number of vertices smaller than this "min size" value.
o Transparency: Change the transparency of the source activity on the cortex
surface.

 Take a few minutes to understand what the amplitude threshold represents.


o The colorbar maximum depends on the way you configured your Sources
colormap. If the option "Maximum: Global" is selected, the maximum should be
around 150 pA.m. This value is a rough estimate of the maximum amplitude, and
this default value is not always adapted to your figure. To edit the maximum
value, use the colormap option "Maximum: Custom".
o On the screen capture below, the threshold value is set to 20%. It means that only
the sources that have a value over 0.20*150 = 30 pA.m are visible.
The threshold level is indicated in the colorbar with a horizontal white line.
 At the first response peak (91ms), the sources with high amplitudes are located around
the primary auditory cortex, bilaterally, which is what we are expecting for an auditory
stimulation.

Why does it look so noisy?
The source maps look very noisy and discontinuous, they show a lot of disconnected patches.
This is due to the orientation constraint we imposed on the dipoles orientations. Each value on
the cortex should be interpreted as a vector, oriented perpendicular to the surface. Because of the
brain’s circumvolutions, neighboring sources can have significantly different orientations, which
also causes the forward model response to change quickly with position. As a result, the
orientation-constrained minimum norm solution can produce solutions that vary rapidly with
position on the cortex resulting in the noisy and disjointed appearance.

It is therefore important not to always interpret disconnected colored patches as independent


sources. You cannot expect high spatial resolution with this technique (~5-10mm at best). Most
of the time, a cluster of disconnected source patches in the same neighborhood that show the
same evolution in time can be interpreted as "there is some significant activity around here, but
with some uncertainty as to its precise location".

To get more continuous maps for visualization or publication purposes, you can either smooth
the values explicitly on the surface (process "Sources > Spatial smoothing") or use
unconstrained source models.

For data exploration, orientation-constrained solutions may be a good enough representation of


brain activity, mostly because they are fast and efficient to compute. You can often get a better feeling of the
underlying brain activity patterns by making short interactive movies: click on the figure, then
hold the left or right arrows of your keyboard.

Activity patterns will also look sharper when we compute dSPM or sLORETA normalized
measures (later in this tutorial). In most of the screen captures in the following sections, the
contrast of the figures has been enhanced for illustration purposes. Don't worry if it looks a lot
less colorful on your screen. Of course, ultimately statistical analysis of these maps is required to
make scientific inferences from your data.

Display: MRI Viewer

 Right-click on the sources for the deviant average > Cortical activations > Display on
MRI (MRI Viewer).
 The MRI viewer was introduced in tutorials #2 and #3.
Additionally you can change the current time and amplitude threshold from the
Neurostorm window.
 This figure shows the sources computed on the cortical surface and re-interpolated in the
MRI volume. If you set the amplitude threshold to 0%, you would see the thin layer of
cortex in which the dipoles were estimated.

 You can configure this figure with the following options:

o MIP Anatomy: Checkbox in the MRI Viewer figure. For each slice, display the
maximum value over all the slices instead of the original value in the structural
MRI ("glass brain" view).
o MIP Functional: Same as for MIP Anatomy, but with the layer of functional
values.
o Smooth level: The sources values can be smoothed after being re-interpolated in
the volume. Right-click on the figure to define the size of the smoothing kernel
(in number of slices).
o Amplitude threshold: In the Surface tab of the Neurostorm window.
o Current time: At the top-right of the Neurostorm window (or use the time series
figure).


Display: MRI 3D
 Right-click on the sources for the deviant average > Cortical activations > Display on
MRI (3D).
 This view was also introduced in the tutorials about MRI and surface visualization.
Right-click and move your mouse to move the slices (or use the Resect panel of the
Surface tab).

Sign of constrained maps


You should pay attention to the sign of the current amplitudes that are given by the minimum
norm method: they can be positive or negative and they oscillate around zero. Display the
sources on the surface, set the amplitude threshold to 0%, then configure the colormap to show

relative values (uncheck the "Absolute values" option), you will see those typical stripes of
positive and negative values around the sulci. Double-click on the colorbar after testing this to
reset the colormap.

This pattern is due to the orientation constraint imposed on the dipoles. On both sides of a
sulcus, we have defined dipoles that are very close to each other, but with opposite orientations.
If we have a pattern of activity on one side of a sulcus that can be modeled as a current dipole (green arrow), the limited spatial resolution of the minimum norm model will blur this source using the dipoles that are available in the head model (red and blue arrows). Because of the dipoles’ orientations, the minimum norm image produces positive values (red arrows) on one side of the sulcus and negative values on the other side (blue arrows).

When displaying the cortical maps at one time point, we are usually not interested in the sign of
the minimum norm values but rather by their amplitude. This is why we always display them by
default with the colormap option "absolute values" selected.

However, we cannot simply discard the sign of these values because we need these for other
types of analysis, typically time-frequency decompositions and connectivity analysis. For
estimating frequency measures on the source maps it is essential that we retain the sign of the
time course at each location so that the correct oscillatory frequencies are identified.

Unconstrained orientations
In cases where the orientation constraint imposed on the dipole orientations produces implausible
results, it is possible to relax it partially (option "loose constraints") or completely (option
"unconstrained"). This produces a vector (3 component) current source at each location which

can complicate interpretation, but avoids some of the noisy and discontinuous features in the
current map that are often seen in the constrained maps. Unconstrained solutions are particularly
appropriate when using the MNI template instead of the subject's anatomy, or when studying
deeper or non-cortical brain regions for which the normal to the cortical surface obtained with
FreeSurfer or BrainSuite is unlikely to match any physiological reality.

In terms of data representation, the options "unconstrained" and "loose constraints" are very
similar. Instead of using one dipole at each cortical location, a base of three orthogonal dipoles is
used. Here we will only illustrate the fully unconstrained case.

 In Run#01, right-click on the average response for the deviant stim > Compute sources
[2018].
Select the options: Minimum norm imaging, Current density map, Unconstrained.
 Double-click on the new source file for the deviant average, open the time series
simultaneously. The two brain maps below represent the same file at 91ms, with different
colormap options (absolute values on the left, relative values on the right). Explanations
below.

 We have to be careful with the visual comparisons of constrained and unconstrained


source maps displayed on the cortex surface, because they are very different types of
data. In unconstrained source maps, we have three dipoles with orthogonal
orientations at each cortex location, therefore we cannot represent all the information at
once. To display them as an activity map, Neurostorm computes the norm of the
vectorial sum of the three orientations at each vertex.
S = sqrt(Sx^2 + Sy^2 + Sz^2) (see the short sketch after this list)

 This explains that we only observe positive values (no blue values when the colormap is
set to display positive and negative values): the norm displayed at each vertex is always
positive. The underlying values along each orientation (x,y,z) can be positive or negative
and oscillate around zero in time, but we cannot get access to this information with these
static cortical maps.
 The maps we observe here look a lot smoother than the constrained sources we
computed earlier. This can be explained by the fact that there is no sharp discontinuity in
the forward model between two adjacent points of the grid for a vector dipole represented
in Cartesian coordinates, whereas the normal to the surface at two nearby points can be
very different, resulting in rapidly changing forward models in the constrained case.

 Delete the unconstrained file, we will not explore this option in the introduction tutorials.
You can refer to the tutorial EEG and epilepsy for an example of analysis using
unconstrained sources.
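For reference, the norm computation described above can be written in a few lines of Matlab. This is only a sketch of the formula, not the Neurostorm code, and it assumes that the three orientations of each vertex are stored in consecutive rows (the actual storage layout is documented in the section "On the hard drive"):

% sources: [3*Nvertices x Ntime] unconstrained source time series (x,y,z components interleaved per vertex)
Sx = sources(1:3:end, :);             % x component, [Nvertices x Ntime]
Sy = sources(2:3:end, :);             % y component
Sz = sources(3:3:end, :);             % z component
S  = sqrt(Sx.^2 + Sy.^2 + Sz.^2);     % norm displayed on the cortex, always >= 0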

Source map normalization


The current density values returned by the minimum norm method have a few problems:

 They depend a lot on the SNR of the signal, which may vary significantly between subjects. Their
amplitude is therefore difficult to interpret directly.
 The values tend to be higher at the surface of the brain (close to the sensors).
 The maps are sometimes patchy and difficult to read.

Normalizing the current density maps with respect to a reference level (estimated from noise
recordings, pre-stimulus baseline or resting state recordings) can help with all these issues at the
same time. In the case of dSPM and sLORETA, the normalizations are computed as part of the
inverse routine and based on noise and data covariances, respectively. While dSPM does produce
a Z-score map, we also provide an explicit Z-score normalization that offers the user more
flexibility in defining a baseline period over which Neurostorm computes the standard deviation
for normalization.

The normalization options do not change the temporal dynamics of your results when
considering a single location but they do alter the relative scaling of each point in the min norm
map. If you look at the time series associated with one given source, it will be exactly the same
for all normalizations, except for a scaling factor. Only the relative weights change between the
sources, and these weights do not change over time.

dSPM, sLORETA

 In Run#01, right-click on the average recordings for the deviant stim > Compute
sources [2018].
Select successively the two normalization options: dSPM, sLORETA, (constrained).

 Double-click on all of them to compare them (screen capture at 143ms):

 Current density maps: Tends to highlight the top of the gyri and the superficial sources.
 dSPM: Tends to correct this behavior and may give higher values in deeper areas. The
values obtained are unitless and similar to Z-scores, therefore they are easier to interpret.
They are by default not scaled with the number of averages. To obtain correctly scaled
dSPM values, one has to call the process "Sources > Scale averaged dSPM", as
explained in the advanced section Averaging normalized values.
 sLORETA: Produces smoother maps where all the potentially activated area of the brain
(given the low spatial resolution of the source localization with MEG/EEG) is shown
as connected, regardless of the depth of the sources. The maps are unitless, but unlike
dSPM cannot be interpreted as Z-scores so are more difficult to interpret.

Z-score

 The Z-transformation converts the current density values to a score that represents the
number of standard deviations with respect to a baseline period. We define a baseline
period in our file (in this case, the pre-stimulus baseline) and compute the average and
standard deviation for this segment. Then for every time point we subtract the baseline
average and divide by the baseline standard deviation: Z = (Data - μ) / σ
(a minimal sketch of this computation is given at the end of this list).
 This measure tells how much a value deviates from the baseline average, in number of
standard deviations. This is done independently for each source location, so the sources with a
low variability during baseline will be more salient in the cortical maps post-stimulus.

 In Process1: Select the constrained current density maps (file MN: MEG(Constr)).
 Run process "Standardize > Baseline normalization", [-100,-1.7]ms, Z-score
transformation
Do not select "Use absolute values": We want the sign of the current values.

 Double-click on the new normalized file to display it on the cortex (file with the "|
zscore" tag).

 You can see that the cortical maps obtained in this way are very similar to the other
normalization approaches, especially with the dSPM maps.

 A value of 3 in this figure means: at this vertex, the value is 3 times higher than the
standard deviation from zero during the baseline. If the values during the baseline follow
a normal distribution N(μ,σ²), then the values we computed follow a N(0,1)=Z
distribution. We can get a level of significance from this well-known distribution: for
instance, a value Z=1.96 corresponds to a p-value of 0.05. These questions will be
discussed in more detail in the statistics tutorial.
 The Z-normalized source maps are not impacted by the visualization filters. If you
open simultaneously the time series and all the files you have now (MN, dSPM,
sLORETA, Z-score) and modify the options in the Filter tab, all the figures are updated
except for the Z-score one. We can filter easily all the linear models (MN, dSPM,
sLORETA), but we would lose the interesting properties of the Z-values if we were
filtering them (the values would not follow a Z-distribution anymore).
 If the baseline and the active state are not in the same file, you can use the Process2 tab:
place the baseline in the left list (Files A) and the file to normalize in the right list (Files
B).
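As announced above, here is a minimal Matlab sketch of the Z-score computation performed by the process "Standardize > Baseline normalization". The variable names are hypothetical and the actual Neurostorm process handles many more options:

% sources: [Nsources x Ntime] current density values
% Time:    [1 x Ntime] time vector, in seconds
iBaseline = (Time >= -0.100) & (Time <= -0.0017);   % baseline window [-100, -1.7] ms
mu    = mean(sources(:, iBaseline), 2);             % baseline average, one value per source
sigma = std(sources(:, iBaseline), 0, 2);           % baseline standard deviation, per source
zmap  = (sources - mu) ./ sigma;                    % Z = (Data - mu) / sigma
% Note: the implicit expansion used here requires Matlab >= R2016b (otherwise use bsxfun).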

Typical recommendations

 Use non-normalized current density maps for:


o Computing shared kernels applied to single trials.
o Averaging files across MEG runs.
o Computing time-frequency decompositions or connectivity measures on the single trials.

 Use normalized maps (dSPM, sLORETA, Z-score) for:


o Estimating the sources for an average response.
o Exploring visually the average response (ERP/ERF) at the source level.
o Normalizing the subject averages before a group analysis.
o Avoid averaging normalized maps (or computing any additional statistics)
 Recommended normalization approach:
o It is difficult to declare that one normalization technique is better than another. They
have different advantages and may be used in different cases. Ideally, they should all
converge to similar observations and inferences. If you obtain results with one method
that you cannot reproduce with the others, you should question your findings.
o dSPM and sLORETA are linear measures and can be expressed as imaging kernels, therefore
they are easier to manipulate in Neurostorm. sLORETA maps can be smoother but they
are difficult to interpret. dSPMs, as z-score maps, are much easier to understand and
interpret.
o Z-normalized current density maps are also easy to interpret. They represent explicitly a
"deviation from experimental baseline" as defined by the user. In contrast, dSPM
indicates the deviation from the data that was used to define the noise covariance used
in computing the min norm map.

Delete your experiments

 Select all the source files you computed until now and delete them.

Computing sources for single trials


Because the minimum norm model is linear, we can compute an inverse model independently
from the recordings and apply it to the recordings when needed. We will now illustrate how to
compute a shared inverse model for all the imported epochs.

 Right-click on the head model or the folder for Run#01 > Compute sources [2018].
Select: Minimum norm imaging, Current density map, Constrained: Normal to cortex

 Because we did not request to compute an inverse model for a specific block of
recordings, Neurostorm computed a shared inverse model. If you right-click on this new file, you
get a warning message: "Inversion kernel". It does not contain any source map, but only
the inverse operator that will allow us to convert the recordings into source maps.

 The database explorer now shows one source link to this inverse model for each block of
recordings available in this folder, single trials and averages. These links are not real files
saved on the hard drive, but you can use them exactly like the previous source files we
calculated for the deviant average. If you load a link, Neurostorm loads the corresponding
MEG recordings, loads the inverse kernel and multiplies the two on the fly before
displaying it. This optimized approach saves a lot of computation time and lot of space on
the hard drive.

Averaging in source space


Computing the average

 First compute the same source model for the second acquisition run.
In Run#02, right-click on the head model or the folder > Compute sources [2018].
Select: Minimum norm imaging, Current density map, Constrained: Normal to cortex

 Now that we have the source maps available for all the recordings, we can average them in
source space across runs. This allows us to average MEG recordings that were recorded
with different head positions (in this case Run#01 and Run#02 have different channel
files, so they could potentially have different head positions, preventing direct
averaging at the sensor level).
 Thanks to the linearity of the minimum norm model: MN(Average(trials)) =
Average(MN(trials))
The two following approaches are equivalent:
1. Averaging the sources of all the individual trials across runs,
2. Averaging the sources for the sensor averages that we already computed for each run.
 We will use the second option: using the sources for the sensor-level averages. It is a lot faster
because it needs to read 4 files (one average file per run and per condition) instead of 456 files
(total number of good trials in the two runs).

 Drag and drop to the Process1 tab the average recordings for Run01 and Run02, then
press the [Process sources] button on the left to select the source files instead of the
MEG recordings.
 Run process "Average > Average files":
Select "By trial group (subject average)" to average together files with similar names.
Select "Arithmetic average" function.
Check "Weighted average" to account for the different numbers of trials in both runs.

 The two averages that are produced (one for each condition) are saved in the folder
Intra-subject. This is where all the files computed using information from multiple
folders within the same subject are saved. If you prefer to have them somewhere else,
you can create new folders and move them there, just like you would do with a regular
file explorer.

 The file comments say "2 files" because they were computed from two averages each
(one from each run), but the number of corresponding trials is correctly updated in the
file structure.
Right-click on each of them > File > View file contents, and check the Leff field:
78 trials for the deviant condition, 378 trials for the standard condition.
 Double-click on the source averages to display them (deviant=top, standard=bottom).
Open the sensor-level averages as a time reference.
Use the predefined view "Left, Right" for looking at the two sides at the same time
(shortcut: "7").

Visualization filters

 Note that opening the source maps can take a long time because of the visualization
filters. Check the Filter tab: you may have a filter applied with the option
"Filter all results" selected. In the case of averaged source maps, the 15,000 source
signals are filtered on the fly when you load a source file. This filtering of the full source
files can take a significant amount of time, consider unchecking this option if the display
is too slow on your computer.

 It was not a problem until now because the source files were saved in the compact form
(Kernel*recordings) and the visualization filters were applied on the recordings, then projected
to the source space. This fast option is not available anymore with these averages across runs.
 The visualization filters will not be available anymore after we apply a Z-score normalization. If
we want to display Z-score source maps that are smoothed in time, we will have to apply
explicitly the filters on the file, with the Process1 tab.

Low-pass filter

 Clear the Process1 list, then drag and drop the new averages in it.

 Run process "Pre-process > Band-pass filter": [0,40] Hz

 Epochs are too short: Look at the filter response, the expected transient duration is at
least 78ms. The first and last 78ms of the average should be discarded after filtering.
However, doing this would get rid of almost all the 100ms baseline, which we need for
normalization. We should have imported longer epochs in order to filter and
normalize the averages properly.

Z-score normalization

 In Process1, select the two filtered averages.

 Run process "Standardize > Baseline normalization", baseline=[-100,-1.7]ms, Z-score.

 Four new files are accessible in the database: two filtered and two filtered+normalized.

 Double-click on the source averages to display them (deviant=top, standard=bottom).

 The Z-score source values at 90ms are higher for the standard condition (~25) than for the
deviant condition (~15). We observe this because the two conditions have very different signal-
to-noise ratios. The standard condition has about 5x more trials, therefore the standard
deviation over the baseline is a lot lower, leading to higher Z-scores.

 Delete the non-normalized filtered files, we will not use them in the following tutorials.

Note for beginners
Everything below is advanced documentation, you can skip it for now.

Advanced

Averaging normalized values


Averaging normalized source maps within a single subject requires more attention than
averaging current density maps. Since averaging reduces variance, the resulting source maps will
have a different statistical distribution than the nominal distribution of the individual maps.

For example, averaging z-score normalized maps will result in maps with variance less than 1.
The same holds true for dSPM maps. Assuming independent samples, the variance of an average
of N maps drops to 1/N. For this reason, it is generally recommended to select the "Weighted
average" option in the ‘Average files’ process when averaging trials or source maps (which
performs mean(x) = (N1*mean(x1) + N2*mean(x2) + …) / (N1+N2+…) ) in order to keep
track of the number of samples and the actual variance of averaged statistical maps.
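As a small numerical illustration of this weighted average (a sketch only; the run averages and trial counts used here are hypothetical):

% avg1, avg2: [Nsources x Ntime] source averages for Run#01 and Run#02
% N1, N2:     number of trials used to compute each run average (see the Leff field)
N1 = 39;  N2 = 39;                              % hypothetical trial counts
avgWeighted = (N1*avg1 + N2*avg2) / (N1 + N2);  % weighted average across the two runs
LeffTotal   = N1 + N2;                          % effective number of averages stored in the file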

dSPM

 An averaged dSPM map has variance equal to 1/N (and thus standard deviation equal to
1/sqrt(N)). Therefore one could multiply the averaged dSPM map by sqrt(N) in order to maintain
variance 1 under the null hypothesis. In previous versions of Neurostorm, this was done
automatically when visualizing the files, and when averaging source maps with the option
"Adjust normalized source maps for SNR increase". To simplify the interface and make the
interpretation of maps more intuitive and consistent with other cases (min-norm, z-scored), we
now dropped this option. Thus averaging dSPM maps now results in maps with variance less
than 1, and is consistent with handling min-norm, z-scored and sLORETA maps.

 Adjusting an averaged dSPM file by this sqrt(N) factor is still possible manually, e.g. in
order to visualize cortical maps that can be interpreted as Z values. Select the average
dSPM files in Process1 and run process "Sources > Scale averaged dSPM". This should
be used only for visualization and interpretation; scaled dSPM maps should never be averaged
or used for any other statistical analysis.

Z-score

 The same SNR issues arise while averaging Z-scores: the average of the Z-scores is lower than
the Z-score of the average.

 When computing averages at the subject level: Always avoid averaging Z-score maps.
Average the current density maps, then normalize.

sLORETA

 This normalization is not based on the SNR of the signal, but rather on the spatial smoothness of the
maps. Managing these maps is similar to managing min-norm maps: the variance of the individual maps is
not explicitly modeled or known analytically.
 As in other cases, sLORETA(Average(trials)) = Average(sLORETA(trials)), and this relationship is
guaranteed to hold when averaging uneven numbers of samples with the option "Weighted
average".

Advanced

Display: Contact sheets and movies


A good way to represent what is happening in time is to generate contact sheets or videos. Right-
click on any figure and go to the menu Snapshot to check out all the possible options. For a nicer
result, take some time to adjust the size of the figure, the amplitude threshold and the colormap
options (hiding the colorbar can be a good option for contact sheets).

A time stamp is added to the captured figure. The size of the text font is fixed, so if you want it
to be readable in the contact sheet, you should make your figure very small before starting the
capture. The screen captures below were produced with the colormap "hot".

 Contact sheet: Right-click on any figure > Snapshot > Time contact sheet: Figure

 Movies: Right-click on any figure > Snapshot > Movie (time): All figures

Advanced

Model evaluation
One way to evaluate the accuracy of the source reconstruction is to simulate recordings using the
estimated source maps. This is done simply by multiplying the source time series with the
forward model:
MEG_simulated [Nmeg x Ntime] = Forward_model [Nmeg x Nsources] * MN_sources [Nsources x Ntime]

Then you can compare visually the original MEG recordings with the simulated ones. More
formally, you can compute an error measure from the residuals (recordings - simulated).
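The comparison can also be summarized by a single number, for example a relative residual error, as in this hedged Matlab sketch (variable names are hypothetical; this is not a Neurostorm function):

% G: forward model (gain matrix),   [Nmeg x Nsources]
% S: estimated source time series,  [Nsources x Ntime]
% F: original MEG recordings,       [Nmeg x Ntime]
F_sim    = G * S;                                    % simulated recordings
residual = F - F_sim;                                % recordings - simulated
relError = norm(residual, 'fro') / norm(F, 'fro');   % 0 = perfect fit
fprintf('Relative residual error: %.1f%%\n', 100 * relError);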

To simulate MEG recordings from a minimum norm source model, right-click on the source file,
then select the menu "Model evaluation > Simulate recordings".

Open side-by-side the original and simulated MEG recordings for the same condition:

Advanced

Advanced options: Minimum norm


Right-click on the deviant average for Run#01 > Compute sources [2018].
Click on the button [Show details] to bring up all the advanced minimum norm options.

Depth weighting

Briefly, the use of various depth weightings was far more debated in the 1990s, before the
introduction of MNE normalization via dSPM, sLORETA, and other "z-scoring" methods, which
mostly cancel the effects of depth weighting (put another way, after normalization min norm
results tend to look quite similar whether depth weighting is used or not).

By modifying the source covariance model at each point in the source grid, deeper sources are
"boosted" to increase their signal strength relative to the shallower dipoles; otherwise, the
resulting MNE current density maps are too dominated by the shallower sources. If using dSPM
or sLORETA, little difference should be noted when using depth weighting. To understand how to
set these parameters, please refer to the MNE manual (options --depth, --weightexp and
--weightlimit).

Noise covariance regularization [TODO]

MNE and dipole modeling are best done with an accurate model of the noise covariance, which
is generally computed from experimental data. As such, these estimates are themselves prone to
errors that arise from relatively too few data points, weak sensors, and strange data dependencies
that can cause the eigenspectrum of the covariance matrix to be ill-conditioned (i.e. a large
eigenvalue spread or matrix condition number). In Neurostorm, we provide several means to
"stabilize" or "regularize" the noise covariance matrix, so that source estimation calculations are
more robust to small errors.

 Regularize noise covariance: The L2 matrix norm is defined as the largest eigenvalue of
its eigenspectrum. This option adds to the covariance matrix a diagonal matrix whose
entries are a fraction of the matrix norm. The default is 0.1, such that the covariance matrix is
stabilized by adding to it an identity matrix that is scaled to 10% of the largest
eigenvalue.
 Median eigenvalue: The eigenspectrum of MEG data can often span many decades, due
to highly colored spatial noise, but this broad spectrum is generally confined to the first
several modes only. Thus the L2 norm is many times greater than the majority of the
eigenvalues, and it is difficult to prescribe a conventional regularization parameter.
Instability in the inverse is dominated by defects found in the smallest eigenvalues. This
approach stabilizes the eigenspectrum by replicating the median (middle) eigenvalue for
the remainder of the small eigenvalues.
 Diagonal noise covariance: Deficiencies in the eigenspectrum often arise from
numerical inter-dependencies found among the channels, particularly in covariance
matrices computed from relatively short sequences of data. One common method of
stabilization is to simply take the diagonal of the covariance matrix and zero-out the
cross-covariances. Each channel is therefore modeled as independent of the other
channels. The eigenspectrum is now simply the (sorted) diagonal values.
 No covariance regularization: We simply use the noise covariance matrix as computed
or provided by the user.
 Automatic shrinkage: Stabilization method of Ledoit and Wolf (2004), still under
testing in the Neurostorm environment. Basically tries to estimate a good tradeoff
between no regularization and diagonal regularization, using a "shrinkage" factor. See
Neurostorm code "bst_inverse_linear_2018.m" for notes and details.
 Recommended option: This author (Mosher) votes for the median eigenvalue as being
generally effective. The other options are useful for comparing with other software
packages that generally employ similar regularization methods. [TODO]
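To make the first two options more concrete, here is a conceptual Matlab sketch of diagonal loading and median-eigenvalue regularization. This is not the Neurostorm implementation (see bst_inverse_linear_2018.m for the actual code):

% C: noise covariance matrix, [Nchannels x Nchannels]
[V, D] = eig((C + C') / 2);                % eigendecomposition of the symmetrized matrix
d = diag(D);
% "Regularize noise covariance" (default 0.1): diagonal loading with 10% of the largest eigenvalue
C_reg = C + 0.1 * max(d) * eye(size(C));
% "Median eigenvalue": replicate the median eigenvalue for the smaller eigenvalues
d_med = max(d, median(d));
C_med = V * diag(d_med) * V';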

Regularization parameter [TODO]

In minimum norm estimates, as mentioned above in the comparisons among methods, the data
covariance matrix is essentially synthesized by adding the noise covariance matrix to a modeled
signal covariance matrix. The signal covariance matrix is generated by passing the source prior
through the forward model. The source prior is in turn prescribed by the source model orientation
and the depth weighting.

A final regularization parameter, however, determines how much weight the signal model should
be given relative to the noise model, i.e. the "signal to noise ratio" (SNR). In Neurostorm, we
follow the definition of SNR as first defined in the original MNE software of Hamalainen. The
signal covariance matrix is "whitened" by the noise covariance matrix, such that the whitened
eigenspectrum has elements in terms of SNR (power). We find the mean of this spectrum, then
take the square root to yield the average SNR (amplitude). The default in MNE and in
Neurostorm is "3", i.e. the average SNR (power) is 9.

 Signal-to-noise ratio: Use SNR of 3 as the classic recommendation, as discussed above.

 RMS source amplitude: An alternative definition of SNR, but still under test and may
be dropped. [TODO]

Output mode

As mentioned above, these methods create a convenient linear imaging kernel that is "tall" in the
number of elemental dipoles (one or three per grid point) and "wide" only in the number of
sensors. At subsequent visualization time, we efficiently multiply the kernel with the data matrix
to compute the min norm images.

For some custom purposes, however, a user may find it convenient to pre-multiply the data
matrix and generate the full source estimation matrix. This would only be recommended in small
data sets, since the full results can become quite large.

 Kernel only: Saves only the linear inverse operator, a model that converts sensor values
into source values. The size of this matrix is: number of sources (15000) x number of
MEG sensors (274). The multiplication with the recordings is done on the fly by
Neurostorm in a transparent way. For long recordings or numerous epochs, this form of
compact storage saves a lot of disk space and computation time, and it speeds up
the display significantly. Always select this option when possible.
 Full results: Saves in one big matrix the values of all the sources (15,000) for all the time
samples (361). The size in memory of such a matrix is about 45Mb for 600ms of
recordings. This is still reasonable, so you may use this option in this case. But if you
need to process longer recordings, you may face "Out of memory" errors in Matlab, or
fill your hard drive quickly.

 Full results [15000x361] = Inverse kernel [15000x274] * Recordings [274x361]
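The on-the-fly multiplication performed by Neurostorm for the "Kernel only" option is simply this matrix product; a minimal Matlab sketch with hypothetical variable names:

% ImagingKernel: [Nsources x Nchannels] inverse operator saved in the kernel file
% F:             [Nchannels x Ntime]    recordings for the good channels
ImageGridAmp = ImagingKernel * F;     % full source maps, [Nsources x Ntime]
% e.g. [15000 x 361] = [15000 x 274] * [274 x 361]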

Advanced options: LCMV beamformer


As mentioned in the introduction above, two other methods can be selected for source
estimation, a beamformer and dipole modeling. In this section, we review the options for the
beamformer. On top of the noise covariance matrix, you need to estimate a data covariance
matrix in order to enable the option "LCMV beamformer" in the interface. Note that pre-
whitening with the noise covariance matrix has not yet been implemented for the LCMV
beamformer, and only the data covariance is used in the current version. Thus, until noise pre-
whitening is implemented, select the "No noise modeling (identity matrix)" option in the
contextual menu for the noise covariance to enable the option "LCMV beamformer" in the
"Compute sources" interface.

Measure

The only option "Pseudo Neural Activity Index" (PNAI), is named after the definition of the
Neural Activity Index (NAI). We have modified Van Veen’s definition to rely strictly on the data
covariance, without need for a separate noise covariance matrix, but the basic premise is the
same as in dSPM, sLORETA, and other normalizations. Viewing the resulting "map," in an
identical manner to that with MNE, dSPM, and sLORETA described above, reveals possibly
multiple sources as peaks in the map. The PNAI scores analogously to z-scoring.

Dipole orientations

We recommend you choose "unconstrained" and let the later Dipole scanning process, which
finds the best fitting dipole at each time point, optimize the orientation with respect to the data.

Data covariance regularization

Same definitions as in MNE, only applied to the data covariance matrix, rather than the noise
covariance matrix. Our recommendation is to use median eigenvalue.

Advanced options: Dipole modeling


Dipole modeling fits a single dipole at each potential source location to produce a dipole
scanning map. This map can be viewed as an indication of how well, and where, the dipole fits at
each time point. However, we recommend using the subsequent best-dipole fitting routine
(dipole scanning) to determine the final location and orientation of the dipole (one per time
point). Please note that this function does not fit multiple simultaneous dipoles.

Although not widely recognized, dipole modeling and beamforming are more alike than they are
different – when comparing the inverse operators required to compute the dipole scanning map
(dipole modeling) and the beamformer output map (LCMV), we see that they differ only in that
the former uses an inverse noise covariance matrix while the latter replaces this with the inverse
of the data covariance.

Measure

This field is now missing, but the resulting imaging kernel file is directly analogous to the PNAI
result from LCMV beamforming. The user can display this scanning measure just as with the
LCMV case, where again the normalization and units are a form of z-scoring.

Dipole orientations

Use "unconstrained source" modeling and let the process "dipole scanning" optimize the
orientation of the dipole for every time instance.

Noise covariance regularization

Similarly, use "median eigenvalue".

The tutorial "MEG current phantom (Elekta)" demonstrates dipole modeling of 32 individual
dipoles under realistic experimental noise conditions.

Advanced

Combining MEG+EEG for source estimation

Magnetoencephalography and EEG sensor data can be processed jointly to produce combined
source estimates. Joint processing presents unique challenges because EEG and MEG use head
models that exhibit differing sensitivities to modeling errors, which can in turn lead to
inconsistencies between EEG and MEG with respect to the (common) source model. In practice
joint processing is relatively rare (Baillet et al., 1999). However, these data are complementary,
which means that joint processing can potentially yield insights that cannot be seen with either
modality alone.

For example, in the evoked responses in the data set used here, the first peak over the occipital
areas is observed in MEG (90 ms) slightly before EEG (110 ms). This delay is too large to be
caused by acquisition imprecisions. This indicates that we are not capturing the same brain
processes with the two modalities, possibly because the orientation and type of activity in the
underlying cortical sources are different.

MEG and EEG have different sensitivities to source orientation and depth. Given the challenges
of joint processing, our advice is to first look at the source reconstructions for the two modalities
separately before trying to use any type of fusion technique.

Advanced

On the hard drive


Constrained shared kernel

Right-click on a shared inverse file in the database explorer > File > View file contents.

Structure of the source files: results_*.mat

Mandatory fields:

 ImagingKernel: [Nsources x Nchannels] Linear inverse operator that must be multiplied
by the recordings in order to get the full source time series. If defined, ImageGridAmp
must be empty.
 ImageGridAmp: [Nsources x Ntime] Full source time series, in Ampere.meter. If this
field is defined, ImagingKernel must be empty. If you want this field to be set instead of
ImagingKernel, make sure you select the advanced option Full results when estimating
the sources.
 Time: [1 x Ntime] Time values for each sample recorded in F, in seconds.
 nComponents: Number of dipoles per grid point: 1=Constrained, 3=Unconstrained,
0=Mixed. In the case of mixed head models, the atlas GridAtlas documents region by
region how the list of grid points matches the list of dipoles.
 Function: Type of values currently saved in the file: 'mn', 'mnp', 'dspm', 'dspm2018',
'dspm2018sc', 'sloreta', 'lcmv', 'lcmvp', 'lcmvnai', 'lcmvpow', 'gls', 'glsp', 'glsfit', 'glschi',
'zscore', 'ersd'...
 HeadModelType: Type of source space used for this head model ('surface', 'volume',
'mixed').
 HeadModelFile: Relative path to the head model used to compute the sources.
 SurfaceFile: Relative path to the cortex surface file related with this head model.
 Atlas: Used only by the process "Sources > Downsample to atlas".
 GridLoc: [Nvertices x 3], (x,y,z) positions of the grid of source points. In the case of a
surface head model, it is empty and you read directly the positions from the surface file.
 GridOrient: [Nvertices x 3], direction of the normal to the surface for each vertex point
(copy of the 'VertNormals' matrix of the cortex surface). Empty in the case of a volume
head model or unconstrained sources.
 GridAtlas: Atlas "Source model" used with mixed source models.
 GoodChannel: [1 x Nchannels] Array of channel indices used to estimate the sources.
 DataFile: Relative path to the recordings file for which the sources were computed. If
this field is set, the source file appears as a dependent of the DataFile.
 Comment: String displayed in the database explorer to represent this file.
 History: Operations performed on the file since it was created (menu "View file history").

Optional fields:

 Options: Structure that contains all the options of the inverse calculation. This is saved in
the file only for bookkeeping.
 Whitener: Noise covariance whitener computed in bst_inverse_linear_2018.m
 DataWhitener: Data covariance whitener computed in bst_inverse_linear_2018.m
 SourceDecompVa: [3 x Nsources] Concatenated right singular vectors from the SVD
decomposition of the whitened leadfield for each source (only for unconstrained sources).
 SourceDecompSa: [3 x Nvertices] Vector diagonal of the singular values from the SVD
decomposition of the whitened leadfield for each source (only for unconstrained sources).
 Std: For averaged files, standard deviation across the trials that were used to compute this file.

 DisplayUnits: String, force the display of this file using a specific type of units.
 ChannelFlag: [Nchannels x 1] Copy of the ChannelFlag field from the original data file.
 Leff: Effective number of averages. For averaged files, number of trials that were used to
compute this file. For source files that are attached to a data file, we use the Leff field
from the data file.

Full source maps

In Intra-subject, right-click on one of the normalized averages > File > View file contents.

This file has the same structure as a shared inverse kernel, with the following differences:

 It contains the full time series (ImageGridAmp) instead of an inverse operator
(ImagingKernel).
 The Z-score process updated the field Function ('mn' => 'zscore')

Source links

 The links are not real files on the hard drive; if you select the menu "View file contents" for any
of them, it displays the structure of the corresponding shared kernel.

 They are saved in the database as strings with a specific structure:
"link|kernel_file|data_file". This string associates a shared inverse operator with some recordings. The
two files have to be available to load this file. All the functions in Neurostorm are
equipped to reconstruct the full source matrix dynamically.

Filename tags

 _KERNEL_: Indicates that the file contains only an inverse kernel, it needs to be
associated with recordings to be opened.

Useful functions

 in_bst_results(ResultsFile, LoadFull, FieldsList): Load a source file and optionally
reconstruct the full source time series on the fly (ImagingKernel * recordings).
 in_bst(FileName, TimeBounds, LoadFull): Load any Neurostorm data file with the
possibility to load only a specific part of the file.
 bst_process('LoadInputFile', FileName, Target, TimeWindow, OPTIONS): The most
high-level function for reading data files, can compute scout values on the fly.
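A minimal usage example, assuming the signatures listed above (the file path is hypothetical and the exact argument format should be checked in the function headers):

ResultsFile = 'Subject01/Run01/results_MN_MEG_KERNEL_example.mat';   % hypothetical relative path
% Load the file header only (LoadFull = 0): ImagingKernel is available, ImageGridAmp is empty
sKernel = in_bst_results(ResultsFile, 0);
% Load the same file with the full source matrix reconstructed on the fly (ImagingKernel * recordings)
sFull = in_bst_results(ResultsFile, 1);
disp(size(sFull.ImageGridAmp));     % [Nsources x Ntime]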

Tutorial 23: Scouts


Authors: Francois Tadel, Elizabeth Bock, John C Mosher, Richard Leahy, Sylvain Baillet

In Neurostorm jargon, a scout represents a region of interest (ROI) in the available source space.
It is a subset of dipoles defined on the cortex surface or the head volume. This tutorial explains
how to create one or several scouts, use them to represent the activity in specific brain regions
and compare the responses between different experimental conditions.

Hypothesis
For all the brain imaging experiments, it is highly recommended to have a clear hypothesis to
test before starting the analysis of the recordings. With this auditory oddball experiment, we
would like to explore the temporal dynamics of the auditory network, the deviant detection and
the motor response. According to the literature, we expect to observe at least the following
effects:

 Bilateral response in the primary auditory cortex (P50, N100), in both experimental
conditions (standard and deviant beeps).
 Bilateral activity in the inferior frontal gyrus and the auditory cortex corresponding to
the detection of an abnormality (latency: 150-250ms) for the deviant beeps only.
 Decision making and motor preparation, for the deviant beeps only (after 300ms).

We will start by creating regions of interest corresponding to the auditory cortices to illustrate
the tools, then define other regions to better explore the dynamics of the brain response.

Creating a scout
Almost all the features related to scout manipulation are accessible in the Scout tab in the main
Neurostorm window. The scouts are automatically saved in the surface file from which they are
created, and they are loaded and automatically displayed each time the surface is loaded.

An atlas designates, in this context, a list of scouts. For one cortex surface, we can have as many
atlases as needed. An atlas can be an anatomical parcellation (like the ones loaded when using
FreeSurfer), a random parcellation generated by Neurostorm, or a user-defined list of ROIs. All
the surfaces contain, by default, an empty atlas called "User scouts", for the user to create new
regions of interest.

First vertex (seed)

 In Intra-subject, double-click on the normalized standard average.


Open the average recordings for the standard condition, to have a time reference.
Go to the first peak in the response, around 90ms.
In the Surface tab, increase the amplitude threshold to see something relatively focal. The
response is larger in the left hemisphere, so let's start with the left.

 Switch to the Scout tab, click on the first button in the toolbar (the big cross).
In the 3D figure, click where we expect to find the primary auditory cortex (rotate and
zoom before if necessary). A small green dot with the label "1" appears where you
clicked. Your first scout is created in the User scouts atlas, and contains only one vertex
for the moment.

 If you are not satisfied with the position of the vertex you selected, delete the new scout
(select it in the list and press the Delete key, or menu Scout > Delete) and try again.
 Rename it "A1L", for primary auditory left: double-click on it in the list, or menu Scout
> Rename.

 In light grey, you can see in the list the letters "LT"; this means that, based on the
anatomical atlases imported from FreeSurfer, the point you clicked is in the left temporal
lobe.

Growing a scout

For now, our scout contains only one vertex of the cortex surface. Most of the time, the aim of a
scout is to extract the average activity over a larger region. The buttons in the Scout size section
offer basic operations to define the scout extension.

 [>]: Add the closest vertex (with respect to the seed) to the current scout.
 [<]: Remove the furthest vertex (with respect to the seed) from the current scout.
 [>>]: Grow scout in all directions.
 [<<]: Shrink scout in all directions.
 'Constrained': If this button is pressed, only the vertices that have a source value above
the threshold will be added to the scout (its growth will be limited to the colored patch on
the surface).
 Add vertex manually: Select the Select point button, then select again the "A1L" scout.
When you click on a vertex on the surface, it is added to the selected scout.
 Remove vertex manually: Same as previously, but holding the SHIFT key at the same
time.

Grow the scout A1L to 20 vertices, not in constrained mode. You can read the number of
vertices and the estimated cortical area just below the [<<] and [>>] buttons.

Display time series

Select the scout in the list, click on the second button in the toolbar [Display scouts time series].
It displays the signal associated with this region for the entire time window (-100ms to 500ms).
We can now observe the early response at 50ms (P50) that was not very obvious before.

3D display options
In the toolbar on the right side of the scouts list, you can find a list of display options. Leave your
mouse over each button for a few seconds to get a short description.

 Load Atlas
 Save selected scouts
 Show all the scouts
 Show only the selected scouts
 Hide all: Uncheck both buttons above to hide the scouts in the 3D figure.
 Show / hide the contour line
 Show / hide the scouts labels
 Scout patch: Opaque
 Scout patch: Transparent
 Scout patch: None
 Display the color of the region instead of the scout color (only for anatomical atlases)
 Center MRI on scout (open the MRI viewer and show the position of the scout's seed)

Scout function
We have extended the scout A1L to 20 vertices. Because this is a source model with constrained
dipole orientations, we have only one source per vertex. The region A1L therefore corresponds to 20
signals.

The signals are grouped together into one unique signal that is then used to represent the activity
of the region of interest. In the list of available scouts, you can see the indication [Mean] next to
the name of the scout. It represents the name of the function that is used for combining all the
source signals into one. This function can be changed individually for each scout, with the menu
Scout > Set function.

Here is a description of the different options. In the case of unconstrained sources (3 signals for
each vertex, one for each orientation), the function is applied separately for each orientation and
produces 3 time series instead of one. For more details, see the code of bst_scout_value.m.

 Mean: Average all the signals.

 PCA: Take the first mode of the PCA decomposition of all the signals.
 Fast PCA: Same as PCA function, but computes the first PCA decomposition based only
on a subset of the most powerful signals. It gives very similar results, but its computation
is much faster for scouts that group a large number of dipoles (>50).

 Mean(norm): Average the absolute values of all the signals: mean(abs(signal))

 Max: For each time point, get the maximum across all the vertices.
Signed maximum: m = max(abs(signal)), scout = sign(signal(m)) * m

 Power: Average the square of all the signals: mean(signal^2)

 All: Do not apply any operation on the scouts signals, returns all the signals.
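For reference, here is a simplified Matlab sketch of a few of these aggregation functions for the constrained case (the actual implementation, including sign flipping and the PCA options, is in bst_scout_value.m; the variable names below are hypothetical):

% scoutSources: [Nvertices x Ntime] source signals of the scout (constrained orientations)
sMean  = mean(scoutSources, 1);             % Mean
sNorm  = mean(abs(scoutSources), 1);        % Mean(norm)
sPower = mean(scoutSources.^2, 1);          % Power
% Max (signed maximum): keep the sign of the value with the largest magnitude at each time point
[m, iMax] = max(abs(scoutSources), [], 1);
iLin = sub2ind(size(scoutSources), iMax, 1:size(scoutSources, 2));
sMax = sign(scoutSources(iLin)) .* m;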

Option: Absolute / relative


As said in the previous tutorial, the minimum norm current amplitudes can be positive or
negative depending on the dipole orientation. This means that the values of the scouts, depending
on the function that is used, may be positive or negative too. Most of the time we are interested
in visualizing the absolute values of the scouts time series, to compare the activation level in
different conditions or subjects. But it is sometimes easier to understand the temporal dynamics of
an ROI with the relative values.

At the bottom of the Scout tab, you can choose to display either Absolute or Relative values.
The effect of this option depends on whether you are processing source files with constrained (1
signal per vertex) or unconstrained (3 signals per vertex) dipole orientations:

Constrained: Apply the scout function to all source signals, then:

 Absolute: abs(ScoutFunc(sources))

 Relative: ScoutFunc(sources)

Unconstrained: Apply the scout function to the source signals for each orientation (Sx,Sy,Sz)
separately, and then returns either the norm of the 3 orientations, or each orientation separately:

 Absolute: Returns one time series:
sqrt( ScoutFunc(Sx)^2 + ScoutFunc(Sy)^2 + ScoutFunc(Sz)^2 )

 Relative: Returns three time series:
A1L1=ScoutFunc(Sx), A1L2=ScoutFunc(Sy), A1L3=ScoutFunc(Sz)

Display only: Note that this relative/absolute selection is a display option, it is not saved in the
scouts themselves. It is used only when displaying the scouts time series with the [Display scouts
time series] button of the Scout tab. In all other cases, such as the extraction of the scouts values
from a script, this relative/absolute option is ignored.

Multiple conditions
We can easily compare the activity between multiple conditions and multiple regions of interest.

 In Intra-subject, open at the same time the normalized sources for deviant and standard
condition.
 Select the scout A1L and click on [Display scouts time series].
This computes the scouts time series for all the files that are currently open.

 In the Record tab, you can use the button [=] to configure the amplitude of multiple axes.
Select it to force multiple graphs to use the same scale, unselect it to scale each graph
separately.
You can find the same option in the figure popup menu: Figure > Uniform figure scale.

 At the bottom of the Scout tab, select the option "Overlay: Files", then click [Display]
again.

 The Z-score value for the standard condition is a lot higher than for the deviant condition
because of the number of trials we used for computing the two averages (5x more trials
for the standard). The SNR is higher with more trials: the baseline has less variance, so the
Z-score is higher.
 To overlay the results of two conditions averaged with very different numbers of trials, it
makes more sense to display the scout time series for non-normalized maps (averaged
current density).

Other regions of interest


 Let's place all the regions of interest starting from the easiest to identify.

o Open the normalized average source files (standard and deviant), together with
the average recordings for the deviant and standard condition for Run#01 for an
easier time navigation.
o In the Surface tab, smooth the cortical surface at 70%.
o For each region: go to the indicated time point, adjust the amplitude threshold in the
Surface tab, identify the area of interest, click on its center, grow the scout, rename it.

o Grow all the regions to the same size: 20 vertices.

 A1L: Left primary auditory cortex (Heschl gyrus) - Already marked.


o The most visible region in both conditions. Active during all the main steps of the
auditory processing: P50, N100, MMN, P200, P300.

o Standard condition, t=90ms, amplitude threshold=50%

 A1R: Right primary auditory cortex (Heschl gyrus)


o The response in this region is not as obvious as in A1L. These binaural auditory
stimulations should be generating similar responses in both left and right auditory
cortices at early latencies. Possible explanations for this observation:

 The earplug was not adjusted on the right side and the sound was not well
delivered.
 The subject's hearing from the right ear is impaired.
 The response is actually stronger in the left auditory cortex for this subject.
 The orientation of the source makes it more difficult for the MEG sensors to
capture.

o Deviant condition, t=90ms, amplitude threshold=50%

 IFGL: Left inferior frontal gyrus (Brodmann area 44)


o Involved in the auditory processing, particularly while processing irregularities.
o You can use the atlas "Brodmann-thresh" available in the Scout tab for identifying this
region.

o Deviant condition, t=130ms, amplitude threshold=30%

 M1L: Left motor cortex


o The subject taps with the right index when a deviant is presented.
o The motor cortex responds at very early latencies together with the auditory cortex, in
both conditions (50ms and 100ms). The subject is ready for a fast response to the task.
o At 170ms, the peak in the standard condition probably corresponds to an inhibition: the
sound heard is not a deviant, there is no further motor processing required.
o At 230ms, the peak in the deviant condition is probably a motor preparation. At 350ms,
the motor task begins, the subject moves the right hand (recorded reaction times 500ms
+/- 108ms).
o We cannot expect to have clear responses during the motor response because of the
averaging. The response times are variable, so in order to get a better representation of
the regions involved we should import and average the trials based on the response
triggers.

o Deviant condition, t=440ms, amplitude threshold=25%

Advanced

Multiple scouts
We can display the activity of multiple regions simultaneously.

 Close everything (toolbar button [X]).

 Open the two normalized average source files (standard and deviant).
 In the Scout tab, select the A1L and IFGL scouts simultaneously.
 Select the option "Overlay:Scouts", do not select "Overlay:Files", Absolute values.
 Click on [Display scouts time series].

 Now select the option "Overlay:Files", do not select "Overlay:Scouts", click on


[Display].

 In both conditions, we observe similar early responses (P50 and N100), then it diverges.
In the deviant condition, we observe a pattern A1-IFG-A1 that is absent in the standard
condition.

Advanced

From the database explorer


You have to display the sources on the cortex to create the scouts. But once they are created, you
can directly display the scouts time series from the database explorer. It means that you can
quickly compare the values for a scout between many different conditions without having to
open them all.

 Close everything (toolbar button [X]).

 Select the option "Overlay:Files", do not select "Overlay:Scouts".


 Select the two average recordings in Run#01 folder > right-click > Scouts time series.
Note that this menu is present at all the levels in the database explorer.

 If no scout is currently loaded in the Scout tab, it shows all the scouts available in the
surface.
If a surface is loaded, and at least one scout selected in the Scout tab, this popup menu
would display only the selected scouts.

 Once the list of scouts is loaded in the Scout tab, you can select one of them, and then
display its values for all the trials of a single condition (overlay:files).

Advanced

Sign flip
In the case of source models with constrained orientations (normal to the cortex), the sign of
the current can be an issue. If the region is large enough to include vertices with normals in
opposite directions, averaging the source values may cancel out the activity.

Let's use the example from the previous tutorial and consider that one scout contains all the
dipoles corresponding to both the red arrows (positive values) and the blue arrows (negative
values). If we average all the values from this scout, we get a value close to zero.

To avoid this, a mechanism was added in the scout calculation, to flip the sign of sources with
opposite directions before the averaging. We start by finding the dominant orientation of the
scout, then flip the sign of the values that are not in the same direction (scalar product of the
orientations < 0).

If the sign of some sources is flipped, you get a message in the Matlab command window, for
example:
BST> Flipped the sign of 7 sources.

Advanced

Scout toolbar and menus
Menu: Atlas


 New atlas:
o Empty atlas: Create a new empty atlas.
o Copy atlas: Duplicate an atlas and all the scouts it contains.
o Copy selected scouts: Create a new atlas and copy only the selected scouts to it.
o Source model options: Create a special atlas "Source model", with which you
can attribute different modeling constraints to each region, after merging the
cortex with some subcortical regions. This is required to use the option "Custom
head model" (mixed models).
o Volume scouts: Create a volume atlas in order to define 3D scouts (volume head
models).
 From subject anatomy: Import as surface or volume scouts the ROIs defined in one of
the volume parcellation available in the subject's anatomy (MNI-based atlases or subject
parcellations from CAT12 or FreeSurfer).
 Load atlas: Load ROIs or a cortical parcellation coming from FreeSurfer or BrainSuite
as a new atlas.
 Rename atlas: Change the name that appears in the atlas list to refer to the selected atlas.
 Delete atlas: Delete the current atlas and all the scouts it contains.
 Add scouts to atlas: Load ROIs or a cortical parcellation and add them to the current
atlas.
 Subdivide atlas: Splits all the scouts of the current atlas into smaller ROIs. Available
options:

 Subdivide selected scouts: Same thing, but processes only the selected scouts.
 Surface clustering: Create a parcellation of the cortex surface and saves it as a new atlas.

Only the "Homogeneous parcellation" clustering is currently available.


 Save modifications: Force the current modifications to be saved to the cortex surface.
 Undo all modifications: Restore the atlas the way it was the last time the surface was
loaded.

Menu: Scout


 New: coordinates: Create a new scout starting from the vertex that is the closest to the
specified coordinates. You can enter coordinates in MRI, SCS or MNI coordinates.
 Add vertices: The user can select some points on the cortex surface and add them to the
scout.
Equivalent: Click on Select point (first toolbar button), then select the scout in the list.

 Edit in MRI: Open an interface to edit the selected scout as a 3D ROI, slice by slice.
Only the vertices contained in the 3D mask are kept, all the volume information is lost. It
means that the mask you intended to draw might be very different from what you get as a
scout at the end. This is not a very reliable tool, as there is no direct correspondence
between the volume and the surface.
 Set function: Set the scout function for the selected scouts.
 Set region: Set the cortical region for the selected scouts.
 Rename: Rename the selected scout. Shortcut: double-click.
 Set color: Change the display color of the selected scout.
 Delete: Delete selected scouts. Shortcut: Delete key
 Merge: Join two or more selected scouts.
 Export to Matlab: Export the structures of the selected scouts to the Matlab environment
and make them accessible from the command window. This menu can be useful to quickly get
the list of vertex indices or to modify a scout manually.
 Import from Matlab: Import scouts structures that you modified manually from your
Matlab command window directly as new scouts.
 Project to: Project the selected scout on another surface available in the database.
 Edit surface: Create a new surface containing only the desired parts (remove or keep
only the selected scouts). This is useful for instance for selecting one sub-cortical region
from the Aseg FreeSurfer atlas (see the FreeSurfer tutorial).

Menu: Sources


 Correlation with sensor: Create a new scout with all the sources that are strongly
correlated with a given sensor.
 Expand with correlation: Computes the correlation between the values for the scout's
seed (first point) and all the other sources. The sources whose correlation coefficient is
above a given threshold are added to the scout.
 Maximal value (new scout): Find the vertex with the maximal intensity at the current
time, and create a scout centered on it.
 Maximal value (selected scout): Move the scout's seed to the source that has the
maximum amplitude in the scout, at the current time.
 Simulate recordings: Multiply the selected scouts with the forward model. Simulate the
scalp data that would be recorded if only the selected cortex region was activated; all the
other sources are set to zero. Create a new data file in the database.
If no scout selected: simulate recordings produced by the activity of the entire cortex.

Advanced

Scout region
A scout is defined by its name, and it has several properties: a list of vertices and an aggregating
function. These are usually enough to explore the activity at the cortex level the way we did it in
these tutorials. An extra property can be defined on the scout: the explicit classification into a brain
region. This property is used only in more advanced functional connectivity analyses, for the
representation of the NxN connection graphs. It is introduced here for reference purposes.

A brain region in Neurostorm follows a hierarchy with three levels: hemisphere / lobe / sub-
region. The definition at each level is optional: a region can be classified only at the hemisphere
level, at the hemisphere+lobe level, or at neither of them. It depends on the level of the hierarchy you are
interested in when exploring the connectivity graphs.

The region for a scout can be set with the Scout > Set region menus, and is encoded in a string
that contains at least 2 characters: "HLxxx". H represents the hemisphere (L,R,U), L stands for
the lobe (F,PF,C,P,T,O,L,U), and xxx for the sub-region name (optional). For both the
hemisphere and the lobe, the value "U" stands for "Undefined", meaning that the classification is
simply not set. The menu Set region>Custom region... lets you directly edit this string.

When set, the region string is shown before the scout name in the list, representing only the
defined levels. It doesn't show the letters U for "undefined".

Advanced

On the hard drive


The scouts are saved in the surface file on which they have been defined.
In the anatomy view, right-click on the selected cortex surface (cortex_15002V) > View file
contents.

iAtlas: Index of the atlas that is currently selected for this surface.

Atlas: Array of structures, each entry is one menu in the drop-down list in the Scout tab.

 Name: Label of the atlas (reserved names: "User scouts", "Structures", "Source model").
 Scouts: Array of structures, one per scout in this atlas.
o Vertices: Array of indices of the vertices that are part of the scout.
o Seed: Index of the central point of the scout (or the most relevant one).
o Color: [r,g,b] color array, with values between 0 and 1.
o Label: Display name of the scout (must be unique in this atlas).
o Function: Scout function {'Mean', 'PCA', 'FastPCA', 'Mean_norm', 'Max', 'Power',
'All'}

o Region: Code name for indicating the anatomical region in which the scout is
located.
o Handles: Graphic handles if the scout is currently displayed (always empty in a
file).
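For illustration, a scout structure with these fields could be built from the Matlab command window as follows (an in-memory sketch only; the vertex indices and label are hypothetical, and scouts are normally created from the Scout tab or with Scout > Import from Matlab):

sScout = struct();
sScout.Vertices = [152 153 160 161 205];   % hypothetical vertex indices
sScout.Seed     = 152;                     % seed vertex
sScout.Color    = [0 1 0];                 % green, values between 0 and 1
sScout.Label    = 'A1L';                   % must be unique in the atlas
sScout.Function = 'Mean';
sScout.Region   = 'LT';                    % left hemisphere, temporal lobe
sScout.Handles  = [];                      % always empty in a file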

Useful functions

 bst_scout_value: Combine multiple signals into one.


 process_extract_scout: Process "Extract > Scouts time series"
 view_scouts: Compute the scouts time series and display them in a new figure.

Tutorial 24: Time-frequency


Authors: Francois Tadel, Dimitrios Pantazis, Elizabeth Bock, Sylvain Baillet

This tutorial introduces how to compute time-frequency decomposition of MEG/EEG recordings


and cortical currents using complex Morlet wavelets and Hilbert transforms.

Introduction
Some of the MEG/EEG signal properties are difficult to access in the time domain (time/amplitude
graphs). A lot of the information of interest is carried by oscillations at certain frequencies,
but the amplitude of these oscillations is sometimes much lower than the amplitude of the slower
components of the signal, making them difficult to observe.

Averaging in the time domain may also lead to a cancellation of these oscillations when they are
not strictly locked in phase across trials. Averaging trials in the time-frequency domain makes it
possible to extract the power of the oscillations regardless of the phase shifts. For a better
understanding of this topic, we recommend reading the following article: Bertrand O, Tallon-Baudry C
(2000), Oscillatory gamma activity in humans: a possible role for object representation.

In Neurostorm we offer two approaches for computing the time-frequency decomposition (TF): the
first is based on the convolution of the signal with a series of complex Morlet wavelets; the
second filters the signal in different frequency bands and extracts the envelope of the filtered
signals using the Hilbert transform.

Morlet wavelets
Complex Morlet wavelets are very popular in EEG/MEG data analysis for time-frequency
decomposition. They have the shape of a sinusoid, weighted by a Gaussian kernel, and they can
therefore capture local oscillatory components in the time series. An example of this wavelet is
shown below, where the blue and red curves represent the real and imaginary part, respectively.

Contrary to the standard short-time Fourier transform, wavelets have variable resolution in time
and frequency. For low frequencies, the frequency resolution is high but the time resolution is
low. For high frequencies, it's the opposite. When designing the wavelet, we essentially choose a
trade-off between temporal and spectral resolution.

To design the wavelet, we first need to choose a central frequency, i.e. the frequency at which
the mother wavelet is defined. All other wavelets will be scaled and shifted versions of the
mother wavelet. Unless you are interested in designing the wavelet for a particular frequency band,
the default 1Hz should be fine.

Then, the desirable time resolution for the central frequency should be defined. For example, we
may wish to have a temporal resolution of 3 seconds at frequency 1 Hz (default parameters).
These two parameters uniquely define the temporal and spectral resolution of the wavelet for all
other frequencies, as shown in the plots below. Resolution is given in units of Full Width Half
Maximum of the Gaussian kernel, both in time and frequency.
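
As an illustration, here is a minimal sketch of such a mother wavelet in Matlab. The names fc and FWHM_tc follow the definitions above; this is not Neurostorm's exact implementation, which also normalizes the wavelet energy:

% Complex Morlet wavelet designed at the central frequency fc,
% with a time resolution FWHM_tc (Full Width Half Maximum, in seconds)
fc      = 1;                                  % central frequency (Hz)
FWHM_tc = 3;                                  % time resolution at fc (s)
sigma_t = FWHM_tc / sqrt(8*log(2));           % FWHM -> std of the Gaussian kernel
t = -4*sigma_t : 0.001 : 4*sigma_t;           % time support (1000 Hz sampling)
w = exp(2i*pi*fc*t) .* exp(-t.^2 / (2*sigma_t^2));   % sinusoid weighted by a Gaussian
plot(t, real(w), 'b', t, imag(w), 'r');       % real (blue) and imaginary (red) parts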

Edge effects
Users should pay attention to edge effects when applying wavelet analysis. Wavelet coefficients
are computed by convolving the wavelet kernel with the time series. Similarly to any
convolution of signals, there is zero padding at the edges of the time series and therefore the
wavelet coefficients are weaker at the beginning and end of the time series.

From the figure above, which shows the design of the Morlet wavelet, we can see that the default
wavelet (central frequency fc=1Hz, FWHM_tc=3sec) has a temporal resolution of 0.6sec at 5Hz and
0.3sec at 10Hz. In that case, the edge effects are roughly half these durations: 300ms at 5Hz and
150ms at 10Hz.

More precisely, if f is your frequency of interest, you can expect the edge effects to span over
FWHM_t seconds: FWHM_t = FWHM_tc * fc / f / 2. Examples of such transients are given in
the figures below.
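
A quick way to evaluate this is to apply the formula above to the frequencies you plan to analyze, as in the sketch below (default mother wavelet: fc=1Hz, FWHM_tc=3s):

% Expected edge-effect duration for a few frequencies of interest
fc = 1;  FWHM_tc = 3;            % parameters of the default mother wavelet
f  = [2 5 10 20 50];             % frequencies of interest (Hz)
FWHM_t = FWHM_tc * fc ./ f / 2;  % edge-effect span in seconds
disp([f; FWHM_t]);               % 5Hz -> 0.3s, 10Hz -> 0.15s, 50Hz -> 0.03s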

We also need to consider these edge effects when using the Hilbert transform approach. The
band-pass filters used before extracting the signal envelope are relatively narrow and may cause
long transients. To evaluate the duration of these edge effects for a given frequency band, use the
interface of the process "Pre-process > Band-pass filter" or refer to the filter specifications.

Simulation
We will illustrate the time-frequency decomposition process with a simulated signal.

 The following code generates a sum of three sinusoids (2Hz, 20Hz, 50Hz) with random
white noise. The 50Hz and noise are present everywhere, the 2Hz and 20Hz start only
after two seconds.

% The time vector t and the matrix Data are assumed to be provided by the
% process "Simulate generic signals" when this code is evaluated.
f1 = 2; f2 = 20; f3 = 50;
i = 2000:6000;
Data(1,i) = sin(f1*2*pi*t(i)) + 0.4 * cos(f2*2*pi*t(i));
Data = Data + 0.2 * sin(f3*2*pi*t) + 0.4 * rand(1,6000);

 Empty the Process1 list (right-click > Clear list) then click on [Run].
 Run process: Simulate > Simulate generic signals.
Ntime=6000, Sampling frequency=1000Hz (signal duration = 6000/1000 = 6 seconds).
Copy-paste the few lines of code above to generate the sum of three sinusoids.

 Double-click on the new file to look at the simulated signal.

 In Process1, select the simulated signal.

 Run process: Frequency > Time-frequency (Morlet wavelets).


Select the option Spectral flattening: The normalization will be discussed later.

 Click on the button [Edit] to see all the process options.
Time definition: Same as input, Frequency definition: Linear 1:1:60, Compute measure:
Power.

Process options
Comment: String that will be displayed in the database explorer to represent the output file.

Time definition

 Same as input file: The output file has the same time definition as the input file.
In this example, it means: 6000 samples between 0 and 6s.
 Group in time bands: This option adds a computation step: it first computes the TF
decomposition for the full input file, then averages the power within each time band. To define a
time band:

o Enter your own time bands in the text area, one line per time band, with the
following format: "name / time definition / function"
o Click on the button [Generate] to automatically create a list of time bands with the same
length. You will be asked the maximal length of each time band.

o The function is the measure used to combine the values of all the individual time
samples into one value per time band. Possible values are: mean, max, std, median.

Frequency definition: Frequencies for which the power will be estimated at each time instant.

 Linear: You can specify the frequencies with the Matlab syntax start:step:stop.
The default is "1:1:60", which produces 60 values [1, 2, 3, 4, ..., 59, 60].
 Log: With the option start:N:stop, produces a list of N frequencies logarithmically scaled
between "start" and "stop". For example "1:40:80" is converted to [1, 1.5, 2.1, 2.7, ...,
61.5, 65.8, 75, 80]
 Group in frequency bands: As for the time definition, this option leads to a two-step
process. First it computes the TF decomposition for several values in the frequency band,
then it averages the power of TF coefficients per frequency band. To define a frequency
band:

o One line per frequency band, with the format "name / frequency definition /
function"
o The frequency definition is a Matlab expression evaluated with an eval() call. If
the frequency definition contains only two values, Neurostorm adds two extra
values in the middle so that the final averaged value is a bit more robust. Example
of valid expressions:
"2,4": Evaluates to [2,4], and then expands to the frequency vector [2, 2.66, 3.33,
4]
"2:0.5:4": Evaluates to [2 2.5 3 3.5 4]
"2, 2.5, 3, 3.5, 4": Evaluates to [2 2.5 3 3.5 4]
o The function is the measure we take to combine the values for all the individual
frequencies into one for the frequency band. Possible values are: mean, max, std,
median.

Morlet wavelet options

 Central frequency: Frequency where the mother wavelet is designed. All other wavelets
will be shifted and scaled versions of the mother wavelet
 Time resolution (FWHM): Temporal resolution of the wavelet at the central frequency
(in units of Full Width Half Maximum). Click [Display] to see the resolution of the
wavelet for other frequencies.

Compute the following measure:

 The convolution of the signal with complex Morlet wavelets returns the complex
coefficients for each frequency/time/sensor. Typically, what we display is the power of
the coefficients (square of the amplitude: abs(TF)^2). You can choose whether to apply
this transformation or not.

 Power: Computes the "power" transformation immediately after the TF decomposition.
This discards the phase information, but produces files that are half the size and a lot
easier to process.
 Magnitude: Save the magnitude of the complex values instead of the power: abs(TF).
 None: Save the TF coefficients as they are computed (complex values). This can be
useful if you plan to use these decompositions for other purposes that require the phase.
 Some combinations of options may disable this choice. If you select frequency bands, the
program will have to compute the power before averaging the values, therefore "none" is not
an option.
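
For reference, a minimal sketch of how these measures relate to the complex coefficients; TF is an assumed variable holding the array of complex wavelet coefficients:

power     = abs(TF).^2;              % "Power": square of the amplitude
magnitude = abs(TF);                 % "Magnitude": sqrt(power)
logpower  = 10 * log10(abs(TF).^2);  % "Log" measure, available at display time
phase     = angle(TF);               % phase, only accessible if "None" was selected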

Display: Time-frequency map


 Right-click on the new time-frequency file > Time-freq: One matrix (same as double-clicking).
This menu displays the time-frequency decomposition of the first (and only) signal. The
Neurostorm window shows two new elements: the tab "Display" and the frequency
slider.

 We can easily identify the three horizontal bars as the three sinusoids in the simulated signal,
as well as the trade-off between accuracy in time and accuracy in frequency. If you don't see them
clearly, set the measure to "magnitude" in the Display tab. Click on the figure to move the
time-frequency cursor and explore the two axes of the plane.

o 2Hz: High frequency resolution but poor time resolution (supposed to start
sharply at 2s)
o 20Hz: Better time resolution but poorer frequency resolution (17-24Hz)
o 50Hz: Continuous over the 6s - the frequency resolution gets even worse (40-60Hz).
It looks discontinuous because this oscillation has the same amplitude as the white
noise we added to the signal (weight 0.4, relative to the 2Hz oscillation).
 Current frequency: Slider that shows the current frequency selected in all the figures.
Just like the time, the frequency selection is centralized and managed by one control only
for all the figures. As a consequence, it is impossible to display TF files with different
frequency definitions at the same time. This can be perceived as an annoying limitation,
but it keeps all the simultaneous displays consistent at any time and makes the interface
more intuitive to manipulate, with a lower risk of misinterpreting the different figures.
 List of signals: This drop-down list shows the signal currently displayed in the selected
figure. In this case, there is only one channel of data called "s1". It will be more useful
later.
 Hide edge effects: When this option is selected, the time-frequency coefficients that
could not be properly estimated because of a lack of time samples are hidden. It allows
you to see only the information that is really reliable. The lower the frequency, the longer
the edge effects. In the screen capture below, the colormap has been changed to "jet" and
the maximum set manually to 0.2 (measure=power).

 Smooth display: Re-interpolates the time-frequency maps on a finer grid to produce


nicer plots.
 Measure: Type of measure that is currently represented in the selected figure. The entries
that are enabled depend on the type of data saved in the file. In this case, we saved the
power of the wavelet coefficients directly in the file and discarded the angle/phase
information, so the "phase" option is disabled. The other options are: Magnitude =
sqrt(power), Log = 10*log10(power).

 Colormap: As explained in the previous tutorials, you can change the colormap by
clicking+moving on the colorbar on the right of the figure. Double-click on the colorbar
to restore the defaults.

Display: Mouse and keyboard shortcuts


Mouse shortcuts

 Left-click: Selection of current time and frequency.


 Left-click + move: Select a time/frequency range. The legends of the X/Y axis show the
selection.

 Mouse wheel: Zoom in time, centered on the current time cursor.


 Control + mouse wheel: Zoom in frequencies, centered on the current frequency cursor.
 Right-click + move, or Control + left-click + move: Move in the zoomed image.
 Double-click: Restore initial view.

Keyboard shortcuts:

 Left/right arrows: Change the current time.


 Page-up/page-down: Change the current time, 10 time samples at a time.
 Up/down arrows: Change the sensor displayed in this figure.
 Control + up/down arrows: Change the current frequency.
 Enter: View the original time series for this sensor.
 Control + R: View the original MEG recordings.
 Control + T: View the time-frequency 2D topography.
 Control + I: Save as image.
 Control + D: Dock figure in the Matlab environment.

Figure popup menu


 Set selection manually: Does the same thing as drawing a time/freq selection square on a
figure, but by typing the values for time and frequency manually.
 Export to database: Save the selection for the displayed sensor in a new file in the
database.
 Export to file: Same as "Export to database", but the saved file is not registered in the
database.
 Export to Matlab: Same as "Export to database", but the output structure is sent to a
variable in the Matlab base workspace instead of being saved to a file.

Display: Power spectrum and time series


Right-click on the file in the database or directly on the time-frequency figure to access these
menus.

 Power spectrum: For the current time, shows the power for all the frequencies.
 Time series: For the current frequency, shows the power for all the time samples.

 Example: Power spectrum density at 0.5s and power time series at 2Hz.
We see the oscillation at 50Hz in the PSD plot, and the oscillation at 2Hz in the TS
plot.

 Example: Power spectrum density at 4s and power time series at 20Hz.


We see all three oscillations in the PSD plot, and the oscillation at 20Hz in the TS plot.

 Note that if you right-click on the file in the database explorer and then select one of
these menus, it will show all the signals. If you right-click on an existing time-
frequency figure, it will show only the selected signal. It doesn't make any difference
here because there is only one signal, but it will with the MEG recordings.

Normalized time-frequency maps


The brain is always active and the MEG/EEG recordings are never flat: some oscillations are always
present in the signals. Therefore we are often more interested in the transient changes of the
power at certain frequencies than in the actual power level. A good way to observe these changes
is to compute the deviation of the power with respect to a baseline.

There is another reason why we are usually interested in standardizing the TF values. The
power of the time-frequency coefficients is always lower at higher frequencies than at lower
frequencies: the signal carries a lot less power in the fast oscillations than in the slow
brain responses. This 1/f decrease in power is an observation we already made with the
power spectrum density in the filtering tutorial. If we represent the TF maps with a linear color
scale, we will always see values close to zero in the higher frequency ranges. Normalizing each
frequency separately with respect to a baseline helps obtain more readable TF maps.

No normalization

The values we were looking at were already normalized (checkbox "Spectral flattening" in the
process options), but not with respect to a baseline. We will now compute the non-normalized
power obtained with the Morlet wavelets and try the various options available for normalizing
them.

 In Process1, keep the simulated signal selected.

 Run process: Frequency > Time-frequency (Morlet wavelets), No spectral flattening.

 Double-click on the file. As expected, we only see the lower frequencies in this
representation: the power of the 2Hz oscillation is a lot larger than the power at 20Hz or
60Hz.

Spectrum normalization

 In Process1: Select this new non-normalized file "Power,1-60Hz".

 Run process: Standardize > Spectrum normalization, Method=1/f compensation.

 This produces exactly the same results as previously (option "Spectral flattening" in the
time-frequency process). It multiplies the power at each frequency bin by the frequency
value (eg. the power at 20Hz is multiplied by 20), in order to correct for the 1/f shape
we observe in the power spectrum. This works well for the lower part of the spectrum and
up to 60-80Hz, but past this range it tends to overcompensate the higher frequencies.
Note that it does not do any form of baseline correction: the 50Hz oscillation is visible
everywhere.
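
Under the hood, this amounts to the operation sketched below; TFpower and Freqs are assumed variables, a [Nsignals x Ntime x Nfreq] power array and the corresponding [1 x Nfreq] frequency vector:

% Spectral flattening: multiply the power at each frequency bin by its frequency
TFflat = bsxfun(@times, TFpower, reshape(Freqs, 1, 1, []));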

Baseline normalization

 The second way to proceed is to normalize the power with respect to its average level
during a reference time period. We can consider the oscillations at 2Hz and 20Hz as our
events of interest, and the 50Hz as noise we want to get rid of. The segment from 0 to 2
seconds does not contain any of the signals of interest, therefore we can consider it as a
baseline.
 However, we will not be able to use the full segment [0,2]s because of the edge effects
we described at the beginning of this tutorial. The time-frequency map at 2Hz with the
display option "Hide edge effects" (left figure below) shows that the power could not be
estimated correctly before 0.75s, therefore we shouldn't use that part as a baseline. The power
time series at 2Hz (right) shows that the power related to the 2Hz oscillation starts to
increase significantly after 1.25s, so beyond that point it is not really a "baseline" anymore.
This leaves only the segment [0.75,1.25]s available.

 At 20Hz, the expected transient effects are only 75ms long, therefore we could use a
much longer baseline if we were not interested in the lower frequencies: [0.075, 1.925]s.

 In this case, we have a very long "resting" time segment (2s), therefore the edge effects
are not a problem for picking a time window for the baseline normalization. We will use
the first time window mentioned, [0.75,1.25]s as it is long enough (500ms) to estimate
the baseline mean power. In real-life cases, with shorter epochs, it is sometimes difficult
to find an acceptable trade-off between the baseline duration and the exclusion of the
edge effects, especially for the lower frequencies.
 Run process: Standardize > Baseline normalization, Baseline=[0.75, 1.25]s.
Method=Event-related perturbation: ERS/ERD stands for "event-related
synchronization / desynchronization", a widely used normalization measure for time-
frequency power maps. It evaluates the deviation from the mean over the baseline, in
percent: (x-mean)/mean*100 (a minimal sketch of this computation is given at the end of this list).

 Double-click on the new "ersd" file. The colormap type changed from "Timefreq" to
"Stat2", which uses by default the "rwb" color set and shows relative values. Indeed, the
ERS/ERD values can be positive or negative: the power at a given time sample can be
higher or lower than the average level during the baseline. In the simple simulation we
used, there is no power decrease at any frequency after 2s, so the strong values are mostly
positive. However, if you look in the file, you would see that there are many small
negative values (due to the random noise we added).

 Note that the 50Hz disappeared because it was present during the baseline, while the
2Hz and 20Hz oscillations show high positive values (between 100% and 2000% increase
from baseline).
 Remember to select your baseline very carefully according to the frequency range you
are interested in. See below examples obtained with different baselines: 0.075-1.925s, 0-
2s, 0-6s.
Change the colormap to "jet" if you prefer, and adjust the colormap contrast as needed.

 This video may help you better understand the implications of the baseline
selection: http://www.mikexcohen.com/lecturelets/whichbaseline/whichbaseline.html
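
As mentioned above, here is a minimal sketch of the ERS/ERD computation; TFpower is an assumed [Nsignals x Ntime x Nfreq] power array and Time the corresponding time vector in seconds:

iBaseline = (Time >= 0.75) & (Time <= 1.25);          % samples of the baseline window
meanBl    = mean(TFpower(:, iBaseline, :), 2);        % mean baseline power, per signal and frequency
ersd      = bsxfun(@rdivide, bsxfun(@minus, TFpower, meanBl), meanBl) * 100;   % (x-mean)/mean*100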

Advanced

Tuning the wavelet parameters


Time resolution

You can adjust the relative time and frequency resolution of your wavelet transformation by
adjusting the parameters of the mother wavelet in the options of the process.

 Increasing the option "time resolution" will produce longer wavelets at a given frequency,
hence increase the frequency accuracy (lower Δf) and decrease the time accuracy (higher
Δt). Expect longer edge effects.
 Decreasing the time resolution will produce shorter wavelets at a given frequency, hence
decrease the frequency accuracy (higher Δf) and increase the time accuracy (lower Δt).
Expect shorter edge effects.
 You can modify one parameter or the other; what matters is the product of the two.
All the following combinations fc/FWHM_t produce the same results because their
product is constant: (1Hz,3s), (3Hz,1s), (6Hz,0.5s), (60Hz,0.05s).

Examples for a constant central frequency of 1Hz with various time resolutions: 1.5s, 4s, 10s.

Frequency axis

You can also obtain very different representations of the data by changing the list of frequencies
for which you estimate the power. You can change this in the options of the process.

Examples: Log 1:20:150, Log 1:300:150, Linear 15:0.1:25

Advanced

Hilbert transform

We can repeat the same analysis with the other approach available for exploring the simulated
signal in the time-frequency plane. The process "Frequency > Hilbert transform" first filters
the signals in various frequency bands with a band-pass filter, then computes the Hilbert
transform of the filtered signal. The magnitude of the Hilbert transform of a narrow-band signal
is a measure of the envelope of this signal, and therefore gives an indication of the activity in this
frequency band.
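
The principle can be sketched with basic Matlab/Signal Processing Toolbox calls; x is an assumed signal and fs its sampling frequency, and the simple Butterworth filter below is not the filter actually used by Neurostorm (see the method specifications further down):

% Envelope of the alpha band (8-12 Hz) of a signal x sampled at fs Hz
[b, a] = butter(4, [8 12] / (fs/2), 'bandpass');   % simple band-pass filter (illustration only)
xBand  = filtfilt(b, a, x);                        % zero-phase band-pass filtering
envPow = abs(hilbert(xBand)).^2;                   % power of the analytic signal (squared envelope)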

No normalization

Let's compute the same three results as before: non-normalized, spectral flattening, baseline
normalization.

 In Process1, select the simulated signal.

 Run process: Frequency > Hilbert transform, No spectral flattening, Do not mirror.

 In the advanced options panel, keep the default options: Default frequency bands and
Power.

 Double-click on the new file. The figure now has only 6 rows, one for each frequency
band.

 The non-normalized results are already easy to interpret:

o delta (2-4Hz): Includes the 2Hz oscillation, contribution starts at 2s


o beta (15-29Hz): Includes the 20Hz oscillation, contribution starts at 2s
o gamma1(30-59Hz): Includes the 50Hz oscillation, contribution starts at the
beginning (0s)

 Right-click on the file or figure > Time series. Example for delta and beta.

Normalization

 In Process1, select the non-normalized Hilbert-based decomposition.

 Run process: Standardize > Spectrum normalization, Method=1/f compensation.


 Run process: Standardize > Baseline normalization, Baseline=[0.75, 1.25]s,
Method=ERS/ERD.
 Display the two normalized files side by side.

Method specifications

 Band-pass filters: Same filters as in the process "Pre-process > Band-pass filter", with
the option "Stop-band attenuation: 60dB". For details, see the tutorial Power spectrum
and frequency filters.
 Edge effects: To estimate the duration of the transient effects for each frequency band,
select the process "Band-pass filter", enter the frequency band of interest and click "View
filter response". Example for the alpha band:

 Hilbert transformation: Using Matlab's hilbert() function.
 Extraction of the envelope: Power of the complex Hilbert transform, abs(hilbert(x))^2.

MEG recordings: Single trials


Let's go back to our auditory oddball paradigm and apply the concepts to MEG recordings. We
will use all the trials available for one condition to estimate the average time-frequency
decomposition.

Spectrum normalization

 In Process1, select all the deviant trials in Run#01.


 Run process: Frequency > Time-frequency (Morlet wavelets), No spectral flattening.
In the advanced options, select: Log 1:40:150, Power, Save average time-frequency
maps.

 Save individual TF maps: This option stops the computation here and saves in the
database one time-frequency file for each input file (40 files), each containing one TF map
per sensor.
 Save average TF maps: Instead of saving the TF of each file separately, it
automatically computes the average of the power of all the TF maps. This is a good choice if
you do not plan to use all the individual TF files, because it saves a lot of time and
disk space.
 Remove evoked response from each trial before computing TF: This option first
computes the average of all the trials, then subtracts this average from each trial before
computing the time-frequency decomposition. This brings the signals to a slightly more
stationary state, which may help for the evaluation of the frequency contents.

Baseline normalization

 Double-click on the new file Avg,Power,1-150Hz (MEG). Select "Hide edge effects".
In the drop-down list, select sensor MLP56 (the one with the strongest response at 90ms).
Right-click on the TF figure > Time series.

 Defining a baseline is now a lot trickier than with the 6s-long simulated signal. The epochs
are only 600ms long, and the power at many frequencies could not be estimated correctly.
If we want all the values to be "good" after 0s, we cannot use anything below 15Hz. If
we want to normalize the values, we have to go even higher: 30Hz if we want a baseline
of 50ms before 0.
 The epochs we use in this tutorial are too short to perform a correct time-frequency
analysis. We should have imported at least 200ms more on each side, just for controlling
the edge effects. You should always think carefully about the length of the epochs you
import in your database if you are planning to run any form of frequency or time-
frequency analysis.
 For the purpose of illustrating the tools available in Neurostorm, we will keep working with
these short epochs. Let's try to do the best we can with what we have here. We could use a
baseline of 50ms to get a correct estimation above 30Hz, but this is probably a bit too short. We
propose to include a bit more of the baseline (75ms), hoping there are no major edge effects
in this segment.
 In Process1, select the average time-frequency file.

 Run process: Standardize > Baseline normalization, Baseline=[-75, 0]ms,


Method=ERS/ERD.

 The new menus available to display this file are described in the next section.

Things to avoid

 Avoid computing the time-frequency decomposition of the average of the trials: you
would miss part of the induced response, i.e. the brain activity in higher frequencies that is
not strictly time-locked to the stimulus and not aligned in phase across trials. Always
prefer averaging the time-frequency power maps of the individual trials, as we did here.
This is well documented in: Bertrand O, Tallon-Baudry C (2000).
 Avoid using the Hilbert transform approach on short recordings or averages; always use
the wavelet approach in these cases. The band-pass filters used for the lower frequency
bands may have very high orders, leading to long transients. The example below shows
the expected transients for the default frequency bands using the process "Frequency >
Hilbert transform"; they can be much more problematic than with the process "Frequency
> Time-frequency (Morlet wavelets)".

Display: All channels


Three menus display the time-frequency of all the sensors with different spatial organizations.
All the figures below represent the ERS/ERD-normalized average. They use the "jet" colormap,
which is not the default configuration for these files. To get the same displays, change the
colormap configuration:
right-click on the figure > Colormap: Stat2 > Colormap > jet.

 All channels: All the maps are displayed one after the other, in the order they are saved
in the file.

 2D Layout (maps): Show each TF map where the sensor is located on a flattened 2D
map. Most display options are available, such as the colormap management and the
option "Hide edge effects".

 2D Layout (no overlap): Similar to the previous display, but the positions of the
images are reorganized so that they do not overlap.

 Image [channel x time]: Shows the values of all the sensors over time for one frequency.

Useful shortcuts for the first three figures:

 Click: Clicking on any small TF image opens a new figure with only the selected sensor.
 Shift + click: Opens the original recordings time series of the selected sensor, when
available. Here, we display an average of time-frequency maps, so this menu has no
effect.
 Mouse wheel: Zoom in/out.
 Right click + move: Move in a zoomed figure.

Display: Topography
The menus below show the distribution of TF power over the sensors, for one time point and one
frequency bin, very similarly to what was introduced in tutorial Visual exploration.

 2D Sensor cap / 2D Disc / 3D Sensor cap: 175ms, 8Hz

 2D Layout: 8Hz (black), 35Hz (white)

Useful shortcuts for these figures:

 Left/right arrows: Change the current time.


 Up/down arrows: Change the current frequency.
 Control + E: Display the sensors markers/names.
 Shift + click on a sensor: Displays the time-frequency decomposition for that specific
sensor.
 Right click + move: Select a group of sensors.
 Shift + scroll: Change the gain of the time series (2D Layout).
 Control + scroll: Change the length of the window displayed around the current time
(2D Layout).

Advanced

Scouts
Similar calculations can be done at the level of the sources, either on the full cortex surface or on
a limited number of regions of interest. We will start with the latter, as it is usually the easier
approach.

 Drag and drop all the deviant trials from both runs, select [Process sources].
 Run process "Frequency > Time-frequency (Morlet wavelets)".
Select the option "Use scouts" and select all the scouts defined in the previous tutorial.

 In the advanced options, select "Scout function: After" and "Output: Save average".
Run the process (it may take a while).

 The scout function was introduced in the previous tutorial. It is the method we use to
group the time series for the 20 dipoles we have in each scout into one unique signal.
When computing the TF of one scout, we have the choice between applying this function
before or after the time-frequency decomposition itself.

o Before: Extract the 20 source signals, apply the scout function to get one signal,
run the TF decomposition of this signal. This is faster but may lead to information
loss.
o After: Extract the 20 source signals, run the TF decomposition of the 20 signals,
apply the scout function on the power of the TF maps. Always prefer this option
when possible.

 Rename the new file to add a tag "Deviant" in it. Then right-click > Time-freq: All
scouts.

 In Process1, select the new average TF file.

 Run process: Standardize > Baseline normalization, Baseline=[-75, 0]ms,


Method=ERS/ERD.

Advanced

Full cortical maps


Computing the time-frequency decomposition for all the sources of the cortex surface is possible
but complicated because it can easily generate gigantic files, completely out of the reach of most
computers. For instance the full TF matrix for each trial we have here would be [Nsources x
Ntimes x Nfrequencies] = [15000 x 361 x 40] double-complex = 3.2 Gb!

We have two ways of working around this issue: compute the TF decomposition for fewer
frequency bins or frequency bands at a time, or, as we did previously, use only a limited number
of regions of interest.

 In Process1, keep all the deviant trials from both runs selected, and select [Process sources].
 Run process "Frequency > Hilbert transform", No spectral flattening, Mirror signal
before.

To process the entire brain, do not select the option "Use scouts".

 In the advanced options, select "Optimize storage: No", this option is not available when
computing on the fly the average of multiple trials. Save the power, Save the average
Hilbert maps.

 Optimize the storage of the time-frequency file: Let's describe this option in more
detail.
o When computing the TF decomposition of a source file, we are actually applying
sequentially two linear transformations to the original recordings: the TF analysis and
the source inversion. These two processes can be permuted: TF(Inverse(Recordings)) =
Inverse(TF(Recordings)).

o Therefore we can optimize the TF computation time by applying the wavelet
transformation only to the sensor recordings, and then multiplying the complex wavelet
coefficients by the inverse operator (ImagingKernel). This trick is
always used in the computation of the Hilbert and Morlet transforms.
o When we have the option to save the complex values (constrained sources and no
averaging), this can also be used to optimize the storage of the files. In these cases, we
save only the wavelet transformation of the sensor data. Later, when the file is loaded
for display, the imaging kernel is applied on the fly. This can be disabled explicitly with
this option.

 Rename the new Hilbert file to include the tag "Deviant", and select it in Process1.

 Run process: Standardize > Baseline normalization, Baseline=[-75, 0]ms,
Method=ERS/ERD.
 Right-click on the Hilbert file > Display on cortex.
The frequency slider now shows frequency bands ("alpha:8-12Hz") instead of
frequencies ("12Hz"). You can explore the source activity in time and frequency
dimensions. The screen capture below shows the activity at 175ms: a 60% increase in the
alpha band around the auditory cortex and a 20% decrease in the beta band around the
motor cortex.

 Shift + click on the cortex surface: Displays the TF decomposition of the selected source.
 Right-click on the brain: Selects the closest vertex and displays the popup menu at the
same time. The first three menus are relative to the source that was just clicked.

Advanced

Unconstrained sources
In the current example, we are working with the simple case: sources with constrained
orientations. The unconstrained case is more difficult to deal with, because we have to handle
correctly the three orientations we have at each vertex.

 Full cortex: Computes the TF decompositions for all the sources (3*15000=45000), then
sum at each location the power for the three orientations.
 Scouts: Option "Scout function" in the process.
o Before: Extract the 20*3=60 source signals, apply the scout function to get three
signals (one per orientation), run the TF decomposition of the three signals, and
finally sum the power of the three TF maps. This is faster but may lose some
frequency resolution (especially for constrained sources).
o After: Extract the 20*3=60 source signals, run the TF decomposition of the 60
signals, apply the scout function on the power of the TF maps for each orientation
separately, and finally sum the power obtained for the three orientations.
 The storage optimization option is not available with unconstrained sources.

Advanced

Getting rid of the edge effects


To avoid mistakes when manipulating the data and to produce more readable figures, we
encourage you to cut the edge effects out of your time-frequency maps after computation.

 In Process1, select the very first file computed in this tutorial: Test/Simulation/Power,1-60Hz | multiply

 Run the process: "Extract > Extract time", Time window = [0.75, 5.25]s
 Open the new file, select the option "Hide edge effects": Almost everything left in this
new file is correctly estimated. Neurostorm keeps track of the edge effects in the TFmask
field of the file.

 We recommend you do the same when epoching your recordings: import trials that are longer
than necessary, and after the time-frequency estimation, remove the unnecessary segments.

Advanced

On the hard drive


Right-click on one of the first TF files we computed > File > View file contents.


Structure of the time-frequency files: timefreq_*.mat

 TF: [Nsignals x Ntime x Nfreq] matrix containing all the values of the time-frequency
decomposition (complex wavelet coefficients, or double values for power/magnitude/Z-
score).
 TFmask: [Nfreq x Ntime] logical mask indicating the edge effects (0=edge, 1=valid
value).
 Std: [Nsignals x Ntime x Nfreq] standard deviation if this file is an average.
 Comment: String displayed in the database explorer to represent the file.
 DataType: The kind of data this file was computed from: {'data', 'results', 'scout',
'matrix'}
 Time: [1 x Ntime] Time vector used to estimate this file.
 TimeBands: [Ntimebands x 3] Cell array where each line represents a time band:
{'band_name', 'time definition', 'function'}
 Freqs: For regular frequency binning: vector containing all the frequencies.
If using frequency bands: [Nfreqbands x 3] cell array, where each line represents a
frequency band {'band_name', 'frequency definition', 'function'}
 RefRowNames: Used only for connectivity matrices.
 RowNames: [1 x Nsignals] Cell array of strings that describes each row of the TF matrix.
In this specific case, it would be the list of all the MEG sensor names. But it could also be
a list of names of scouts or clusters.

 Measure: Contains the name of the function that was applied right after the computation
of the wavelet coefficients. So it represents the type of data contained in the TF matrix.
Possible values:
o none: No measure applied, TF contains the complex wavelet coefficients.
o power: Power for each frequency, ie. the square of the amplitude: abs(coefficients)^2
o magnitude: abs(coefficients)
o log: 10 * log10(abs(coefficients)^2)
o phase: angle(coefficients)
 Method: String that identifies the process that generated the file:
{'morlet', 'fft', 'psd', 'hilbert', 'corr', 'cohere', 'granger', 'plv'}
 DataFile: Initial file from which this file was computed. In the database explorer, the TF
file will be shown as a child of this DataFile file.
 SurfaceFile / GridLoc / GridAtlas: Source space that was used, only for source files.
 Leff: Effective number of averages = Number of trials that were averaged to obtain this
file.
 ColormapType: String, force a specific colormap type to be used when displaying this
file.
 DisplayUnits: String, force to use specific units when displaying this file.
 Options: Options that were selected in the time-frequency options window.
 History: List of operations performed on this file (menu File > View file history).
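
A minimal sketch of how these fields can be accessed from a script; TimefreqFile is an assumed full path to a timefreq_*.mat file, and the functions listed below handle more cases (such as the kernel-optimized files):

tf = load(TimefreqFile);             % load the time-frequency structure
size(tf.TF)                          % [Nsignals x Ntime x Nfreq]
switch tf.Measure                    % recover the magnitude from the saved measure
    case 'power',     mag = sqrt(tf.TF);
    case 'magnitude', mag = tf.TF;
    case 'none',      mag = abs(tf.TF);
end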

Useful functions

 in_bst_timefreq(TimefreqFile): Read a time-frequency file.


 in_bst(FileName, TimeWindow): Read any Neurostorm file with the possibility to load
only a specific part of the file. "TimeWindow" is a range of time values in seconds:
[tStart, tStop].
 bst_process('LoadInputFile', FileName, Target, TimeWindow): The most high-level
function for reading data files. "Target" is a string with the list of sensor names or types
to load.
 morlet_transform(): Applies complex Morlet wavelet transform to the time series in
input.

Tutorial 26: Statistics


Authors: Francois Tadel, Elizabeth Bock, Dimitrios Pantazis, Richard Leahy, Sylvain Baillet

In this auditory oddball experiment, we would like to test for the significant differences between
the brain response to the deviant beeps and the standard beeps, time sample by time sample.
Until now we have been computing measures of the brain activity in time or time-frequency
domain. We were able to see clear effects or slight tendencies, but these observations were
always dependent on an arbitrary amplitude threshold and the configuration of the colormap.
With appropriate statistical tests, we can go beyond these empirical observations and assess the
significance of the effects in a more formal way.

Random variables
In most cases we are interested in comparing the brain signals recorded for two populations or
two experimental conditions A and B.

A and B are two random variables for which we have a limited number of repeated measures:
multiple trials in the case of a single subject study, or multiple subject averages in the case of a
group analysis. To start with, we will consider that each time sample and each signal (source or
sensor) is independent: a random variable represents the possible measures for one sensor/source
at one specific time point.

A random variable can be described with its probability distribution: a function that
indicates the chances of obtaining one specific measure if we run the experiment. By
repeating the same experiment many times, we can approximate this function with a discrete
histogram of observed measures.

Histograms
You can plot histograms like this one in Neurostorm; it may help you understand what you can
expect from the statistics functions described in the rest of this tutorial. For instance, seeing a
histogram computed with only 4 values would discourage you forever from running a group
analysis with 4 subjects...

Recordings

 Let's evaluate the recordings we obtained for sensor MLP57, the channel that was
showing the highest value at 160ms in the difference of averages computed in the
previous tutorial.

 We are going to extract only one value for each trial we have imported in the database,
and save these values in two separate files, one for each condition (standard and deviant).

In order to observe more meaningful effects, we will process the trials from the
two acquisition runs together. This is usually not recommended in MEG analysis, but it can be an
acceptable approximation if the subject didn't move between the runs.
 In Process1, select all the deviant trials from both runs.
Run process Extract > Extract values:
Options: Time=[160,160]ms, Sensor="MLP57", Concatenate time (dimension 2)

 Repeat the same operation for all the standard trials.


 You obtain two new files in the folder Intra-subject. If you look inside the files, you can
observe that the size of the Value matrix matches the number of trials (78 for deviant,
383 for standard). The matrix is [1 x Ntrials] because we asked to concatenate the
extracted values in the 2nd dimension.

 To display the distribution of the values in these two files:


select them simultaneously, right-click > File > View histograms.

 With the buttons in the toolbar, you can edit the way these distributions are represented:
number of bins in the histogram, total number of occurrences (taller bars for the
standard condition because it has more values) or density of probability (normalized by the total
number of values).
In addition, you can plot the normal distribution corresponding to the mean μ and
standard deviation σ computed from the set of values (using the Matlab functions mean and
std; a minimal sketch is given at the end of this section).
 When comparing two sample sets A and B, we try to evaluate if the distributions of the
measures are equal or not. In most of the questions we explore in EEG/MEG analysis, the
distributions are overlapping a lot. The very sparse sampling of the data (a few tens or
hundreds of repeated measures) doesn't help with the task. Some representations will be
more convincing than others to estimate the differences between the two conditions.

 The legend of the histograms shows the result of the Shapiro-Wilk normality test, as
implemented by Ahmed BenSaïda (Matlab FileExchange). The button "Q-Q plots" gives
another way to compare the current samples to the normal distribution (see: Wikipedia,
Matlab FileExchange).

 Everything seems to indicate that the values recorded on the sensor MLP57 at 160ms
follow a normal distribution, in both conditions.
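
As referenced above, a minimal sketch of what the histogram figure computes; values is an assumed [1 x Ntrials] vector extracted for one condition:

mu    = mean(values);                              % sample mean
sigma = std(values);                               % sample standard deviation
histogram(values, 20, 'Normalization', 'pdf');     % density of probability, 20 bins
hold on;
xi = linspace(min(values), max(values), 200);
plot(xi, exp(-(xi-mu).^2 / (2*sigma^2)) / (sigma*sqrt(2*pi)), 'r');   % corresponding normal curve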

Sources (relative)

 We can repeat the same operation at the source level and extract all the values for scout
A1L.
 In Process1, select all the deviant trials from both runs. Select button [Process sources].

Run process Extract > Extract values: Time=[160,160]ms, Scout="A1L"

 Repeat the same operation for all the standard trials.

 Select the two files > Right-click > File > View histogram.

 The distributions still look normal, but the variances are now slightly different. You have to pay
attention to this information when choosing which parametric t-test to run.

Sources (absolute)

 Run again the process Extract > Extract values, but this time select Compute absolute
values.
 Display the histograms of the two rectified files.

 The rectified source values are clearly not following a normal distribution: the shape
of the histogram has nothing to do with the corresponding Gaussian curves. As a
consequence, if you are using rectified source maps, you will not be able to run
independent parametric t-tests.
Additionally, you may have issues with the detection of some effects.

Time-frequency

 Time-frequency power for sensor MLP57 at 55ms / 48Hz (left=no normalization,
right=ERS/ERD):

 These sample sets are clearly not normally distributed. Parametric t-tests don't look like good
candidates for testing time-frequency power across trials.

Group studies and central limit theorem

 The observations above hold only for the specific case we are looking at: single subject
analysis, testing for differences across trials.
 In the context of a group analysis, we usually test subject averages between conditions
or populations. This corresponds to comparing the distributions of the mean of the trials
across subjects, which will tend to be normal when we increase the number of trials
(Central-limit theorem). In general, it is easier to obtain sample sets with normal
distributions at the group level.
 Additionally, some tricks can help bring the samples closer to a normal distribution,
such as averaging in time/space/frequency, or testing the square root of the data in the case
of time-frequency power. Some solutions are explored in (Kiebel, Tallon-Baudry &
Friston, HBM 2005).

Statistical inference
Hypothesis testing

To show that there is a difference between A and B, we can use a statistical hypothesis test. We
start by assuming that the two sets are identical (the null hypothesis), and then try to reject this
hypothesis. For all the tests we will use here, the logic is similar:

 Define a null hypothesis (H0:"A=B") and an alternative hypothesis (eg. H1:"A<B").


 Make some assumptions on the samples we have (eg. A and B are independent, A and B follow
normal distributions, A and B have equal variances).

 Decide which test is appropriate, and state the relevant test statistic T (eg. Student t-test).
 Compute from the measures (Aobs, Bobs) the observed value of the test statistic (tobs).
 Calculate the p-value. This is the probability, under the null hypothesis, of sampling a
test statistic at least as extreme as the one that was observed. A value of (p<0.05) for the
null hypothesis has to be interpreted as follows: "If the null hypothesis is true, the chance
that we find a test statistic as extreme or more extreme than the one observed is less than
5%".
 Reject the null hypothesis if and only if the p-value is less than the significance level
threshold (α).

Evaluation of a test

The quality of test can be evaluated based on two criteria:

 Sensitivity: True positive rate = power = ability to correctly reject the null hypothesis
and control for the false negative rate (type II error rate). A very sensitive test detects a
lot of significant effects, but also produces a lot of false positives.
 Specificity: True negative rate = ability to correctly accept the null hypothesis and
control for the false positive rate (type I error rate). A very specific test detects only the
effects that are clearly non-ambiguous, but can be too conservative and miss a lot of the
effects of interest.

Different categories of tests

Two families of tests can be helpful in our case: parametric and nonparametric tests.

 Parametric tests need some strong assumptions on the probability distributions of A and
B, then use some well-known properties of these distributions to compare them, based on
a few simple parameters (typically the mean and variance). The estimation of these
parameters is highly optimized and requires very little memory. The examples which will
be described here are the Student's t-tests.
 Nonparametric tests do not require any assumption on the distribution of the data. They
are therefore more reliable and more generic. On the other hand, they are a lot more
complicated to implement: they require a lot more memory because all the tested data has
to be loaded at once, and a lot more computation time because the same test is repeated
thousands of times.

Parametric Student's t-test


Assumptions

The Student's t-test is a widely-used parametric test to evaluate the difference between the means
of two random variables (two-sample test), or between the mean of one variable and one known
value (one-sample test). If the assumptions are correct, the t-statistic follows a Student's t-
distribution.

The main assumption for using a t-test is that the random variables involved follow a normal
distribution (mean: μ, standard deviation: σ). The figure below shows a few examples of normal
distributions.

t-statistic

Depending on the type of data we are testing, we can have different variants for this test:

 One-sample t-test (testing against a known mean μ0):
t = (m - μ0) / (σ / sqrt(n))
where m is the sample mean, σ is the sample standard deviation and n is the sample size.
 Dependent t-test for paired samples (eg. when testing two conditions across a group of
subjects). Equivalent to testing the difference of the pairs of samples against zero with a
one-sample t-test:
t = mD / (σD / sqrt(n))
where D = A - B, mD is the average of D and σD its standard deviation.
 Independent two-sample test, equal variance (equal or unequal sample sizes):
t = (mA - mB) / (sp * sqrt(1/nA + 1/nB)),   with   sp = sqrt(((nA-1)*σA² + (nB-1)*σB²) / (nA + nB - 2))
where σA² and σB² are the unbiased estimators of the variances of the two samples.
 Independent two-sample test, unequal variance (Welch's t-test):
t = (mA - mB) / sqrt(σA²/nA + σB²/nB)
where nA and nB are the sample sizes of A and B.
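
A minimal sketch of the independent two-sample t-statistic (equal variance), for two assumed vectors of observations A and B:

nA = numel(A);  nB = numel(B);
vA = var(A);    vB = var(B);                          % unbiased sample variances
sp = sqrt(((nA-1)*vA + (nB-1)*vB) / (nA+nB-2));       % pooled standard deviation
t  = (mean(A) - mean(B)) / (sp * sqrt(1/nA + 1/nB));  % t-statistic
df = nA + nB - 2;                                     % degrees of freedom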

p-value

Once the t-value is computed (tobs in the previous section), we can convert it to a p-value based
on the known distributions of the t-statistic. This conversion depends on two factors: the number
of degrees of freedom and the tails of the distribution we want to consider. For a two-tailed t-test,
the two following commands in Matlab are equivalent and can convert the t-values into p-values.

p = betainc(df./(df + t.^2), df/2, 0.5);  % Without the Statistics toolbox
p = 2*(1 - tcdf(abs(t), df));             % With the Statistics toolbox

The distribution of this function for different numbers of degrees of freedom:

Example 1: Parametric t-test on recordings
Parametric t-tests require the tested values to follow a normal distribution. The recordings
evaluated in the histogram sections above (MLP57/160ms) show distributions that match
this assumption relatively well: the histograms follow the traces of the corresponding normal
functions, and the two conditions have very similar variances. It looks reasonable to use a
parametric t-test in this case.

 In the Process2 tab, select the following files, from both runs (approximation discussed
previously):
o Files A: All the deviant trials, with the [Process recordings] button selected.
o Files B: All the standard trials, with the [Process recordings] button selected.

o The t-tests work well with unbalanced numbers of samples: it is better to use
all the samples you have, even if you have 80 trials in one condition and 400
in the other.

 Run the process "Test > Parametric test: Independent": Select all the data, do not
average.
Sensor types: Leave this empty instead of entering "MEG", it won't affect the results but
the computation will be faster (optimized when processing full files).
Test: Student's t-test (equal variance), two-tailed.

 Double-click on the new file and add a 2D topography (CTRL+T). The values displayed
in the 2D view are the significant t-values. All the sensors that have p-values higher than
the significance level threshold (α) are set to zero.
 With the new Stat tab you can control the significance level α and the correction you
want to apply for multiple comparisons (see next section).
 With the option minimum duration, you can exclude from the display all the data points
that are significant only for isolated time samples, and which are most likely false
positives. If this parameter is set to zero it has no impact on the display. Otherwise, all the
data points that are not significant continuously for at least this duration are set to zero.

Correction for multiple comparisons
Multiple comparison problem

The approach described in this first example performs many tests simultaneously. We test,
independently, each MEG sensor and each time sample across the trials, so we run a total of
274*361 = 98914 t-tests.

If we select a critical value of 0.05 (p<0.05), it means that we want to see what is significantly
different between the conditions while accepting the risk of observing a false positive in 5% of
the cases. If we run the test around 100,000 times, we can expect to observe around 5,000 false
positives. We need to better control for false positives (type I errors) when dealing with multiple
tests.

Bonferroni correction

The probability to observe at least one false positive, or familywise error rate (FWER) is
almost 1:
FWER = 1 - prob(no significant results) = 1 - (1 - 0.05)^100000 ~ 1

A classical way to control the familywise error rate is to replace the p-value threshold with a
corrected value, to enforce the expected FWER. The Bonferroni correction sets the significance
cut-off at α/Ntest. If we set (p ≤ α/Ntest), then we have (FWER ≤ α). Following the previous
example:
FWER = 1 - prob(no significant results) = 1 - (1 - 0.05/100000)^100000 ~ 0.0488 < 0.05

This works well in a context where all the tests are strictly independent. However, in the case of
MEG/EEG recordings, the tests have a high level of dependence: two adjacent sensors or
time samples often have similar values. In the case of highly correlated tests, the Bonferroni
correction tends to be too conservative, leading to a high rate of false negatives.

FDR correction

The false discovery rate (FDR) is another way of representing the rate of type I errors in null
hypothesis testing when conducting multiple comparisons. It is designed to control the expected
proportion of false positives, while the Bonferroni correction controls the probability of having at
least one false positive. FDR-controlling procedures have greater power, at the cost of increased
rates of type I errors (Wikipedia).

In Neurostorm, we implement the Benjamini–Hochberg step-up procedure (1995):

 Sort the p-values p(k) obtained across all the multiple tests (k=1..Ntest).
 Find the largest k such that p(k) ≤ k / Ntest * α.
 Reject the null hypotheses corresponding to the k smallest p-values.
 This is the same procedure as the Matlab call: mafdr(p, 'BHFDR', alpha)
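
A minimal sketch of this step-up procedure; p is an assumed vector of uncorrected p-values and alpha the desired FDR level:

alpha = 0.05;
[pSorted, iSorted] = sort(p(:)');                           % sort the p-values in ascending order
Ntest = numel(pSorted);
k = find(pSorted <= (1:Ntest) / Ntest * alpha, 1, 'last');  % largest k passing the criterion
h = false(size(p));                                         % h(i)=true: null hypothesis rejected
if ~isempty(k)
    h(iSorted(1:k)) = true;                                 % reject the k smallest p-values
end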

Note that there are different implementations of FDR. FieldTrip uses the Benjamini–Yekutieli
(2001) algorithm as described in (Genovese, 2002), which usually gives fewer true positive results.
Don't be surprised if you get empty displays when using the option "FDR" in the FieldTrip
processes, while you get significant results with the Neurostorm FDR correction.

In the interface

You can interactively select the type of correction to apply for multiple comparisons while
reviewing your statistical results. The checkboxes "Control over dims" allow you to select which
dimensions you consider as multiple comparisons (1=sensor, 2=time, 3=not
applicable here). If you select all the dimensions, all the values available in the file are
considered as the same repeated test, and only one corrected p-threshold is computed for all the
time samples and all the sensors.

If you select only the first dimension, only the values recorded at the same time sample are
considered as repeated tests: the different time points are corrected independently, with a
different corrected p-threshold for each time point.

When changing these options, a message is displayed in the Matlab command window, showing
the number of repeated tests that are considered, and the corrected p-value threshold (or the
average if there are multiple corrected p-thresholds, when not all the dimensions are selected):

BST> Average corrected p-threshold: 5.0549e-07 (Bonferroni, Ntests=98914)


BST> Average corrected p-threshold: 0.00440939 (FDR, Ntests=98914)

"It doesn't work"

If nothing appears significant after correction, don't start by blaming the method ("FDR doesn't
work"). In the first place, it's probably because there is no clear difference between your sample
sets, or simply because your sample size is too small. For instance, with fewer than 10 subjects you
cannot expect to observe very significant effects in your data.

If you have good reasons to think your observations are meaningful but cannot increase the
sample size, consider reducing the number of multiple comparisons you perform (test only the
average over a short time window, a few sensors or a region of interest) or using a cluster-based
approach. When using permutation tests, increasing the number of random permutations also
decreases the p-value of the very significant effects.

Nonparametric permutation tests


Principle

A permutation test (or randomization test) is a type of test in which the distribution of the test
statistic under the null hypothesis is obtained by calculating all possible values of the test statistic
under rearrangements of the labels on the observed data points. (Wikipedia)

If the null hypothesis is true, the two sets of tested values A and B follow the same distribution.
Therefore the values are exchangeable between the two sets: we can move one value from set A
to set B, and one value from B to A, and we expect to obtain the same value for the test statistic
T.

By taking all the possible permutations between sets A and B and computing the statistic for
each of them, we can build a histogram that approximates the permutation distribution.

Then we compare the observed statistic with the permutation distribution. From the histogram,
we calculate the proportion of permutations that resulted in a larger test statistic than the
observed one. This proportion is called the p-value. If the p-value is smaller than the significance
level (typically 0.05), we conclude that the data in the two experimental conditions are
significantly different.

The number of possible permutations between the two sets of data is usually too large to
compute an exhaustive permutation test in a reasonable amount of time. The permutation
distribution of the statistic of interest is therefore approximated with a Monte-Carlo approach: a
relatively small number of randomly selected permutations can give us a reasonably good
approximation of the distribution.

Permutation tests can be used for any test statistic, regardless of whether or not its distribution
is known. The hypothesis is about the data itself, not about a specific parameter. In the examples
below, we use the t-statistic, but we could use any other function.
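
A minimal Matlab sketch of a Monte-Carlo permutation test on two independent samples, using a t-like statistic (illustration only, with random placeholder data; this is not the Neurostorm implementation):

    A = randn(1, 40) + 0.5;                 % placeholder data, condition A
    B = randn(1, 40);                       % placeholder data, condition B
    nPerm = 1000;
    tstat = @(x, y) (mean(x) - mean(y)) / sqrt(var(x)/numel(x) + var(y)/numel(y));
    tObs  = tstat(A, B);                    % observed statistic
    allData = [A, B];
    nA = numel(A);
    tPerm = zeros(1, nPerm);
    for iPerm = 1:nPerm
        idx = randperm(numel(allData));     % random relabeling of the samples
        tPerm(iPerm) = tstat(allData(idx(1:nA)), allData(idx(nA+1:end)));
    end
    p = mean(abs(tPerm) >= abs(tObs));      % two-tailed p-value
    fprintf('Observed t = %.3f, permutation p = %.4f\n', tObs, p);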

Practical limitations

Computation time: If you increase the number of random permutations you use for estimating
the distribution, the computation time will increase linearly. You need to find a good balance
between the total computation time and the accuracy of the result.

Memory (RAM): The implementations of the permutation tests available in Neurostorm require
all the data to be loaded in memory before starting the evaluation of the permutation statistics.
Running this function on large datasets or on source data could quickly crash your computer. For
example, loading the data for the nonparametric equivalent to the parametric t-test we ran
previously would require:
276 (sensors) * 461 (trials) * 361 (time samples) * 8 (bytes) / 1024^3 ≈ 0.3 GB of memory.

This is acceptable on most recent computers. But to perform the same at the source level, you
need:
45000 (sources) * 461 * 361 * 8 / 1024^3 ≈ 56 GB of memory just to load the data. This is impossible on most
computers: we have to give up at least one dimension and run the test only for one region of
interest or one time sample (or average over a short time window).

Example 2: Permutation t-test


Let's run the nonparametric equivalent to the test we ran in the first example.

 In the Process2 tab, select the following files, from both runs (approximation discussed
previously):
o Files A: All the deviant trials, with the [Process recordings] button selected.
o Files B: All the standard trials, with the [Process recordings] button selected.
 Run the process "Test > Permutation test: Independent", set the options as shown
below.
Sensor type: You should enter explicitly "MEG" instead of leaving this field empty. It
will decrease the computation time (no optimization for full files in the nonparametric
tests).
Note that it may require more than 3 GB of RAM and take more than 10 minutes.

 Open the parametric and nonparametric results side by side: the results should be very
similar. You may have to increase the significance level α to 0.05 to see something
significant in the nonparametric version (edit the value in the Stat tab). Alternatively, you
can obtain lower p-values by running the same process with more randomizations (for
instance 2000 or 10000).

 In this case, the distributions of the values for each sensor and each time point (non-
rectified MEG recordings) are very close to normal distributions. This was illustrated
earlier in the section "Histograms". Therefore the assumptions behind the parametric
t-test are verified: the results of the parametric tests are correct and very similar to the
nonparametric ones.
 If you get different results with the parametric and nonparametric approaches, trust the
nonparametric one. If you want to increase the precision of a nonparametric test and decrease
the p-values, increase the number of random permutations.

FieldTrip implementation
FieldTrip functions in Neurostorm

Some of the FieldTrip toolbox functions can be called from the Neurostorm environment. If you
are running the compiled version of Neurostorm, these functions are already packaged with it;
otherwise you need to install FieldTrip on your computer, either manually or as a Neurostorm
plugin.

Cluster-based correction

One interesting method that has been promoted by the FieldTrip developers is the cluster-based
approach for nonparametric tests. In the type of data we manipulate in MEG/EEG analysis, the
neighboring channels, time points or frequency bins are expected to show similar behavior. We
can group these neighbors into clusters to "accumulate the evidence". A cluster can have a
multi-dimensional extent in space/time/frequency.

In the context of a nonparametric test, the test statistic computed at each permutation is the
extent of the largest cluster. To reject or accept the null hypothesis, we compare the largest
observed cluster with the randomization distribution of the largest clusters.
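
As a toy illustration of the cluster statistic along a single dimension (time): threshold the t-values, group contiguous suprathreshold samples into clusters, and keep the mass of the largest cluster ('maxsum', the cluster statistic used by the processes below). This is only a sketch with placeholder values, not the FieldTrip code:

    t = randn(1, 361) + [zeros(1,150), 2*ones(1,60), zeros(1,151)];   % placeholder t-values
    tThresh = 2.0;                            % cluster-forming threshold
    above = t > tThresh;                      % suprathreshold samples (positive clusters only)
    d = diff([0, above, 0]);
    starts = find(d == 1);
    stops  = find(d == -1) - 1;
    clusterSum = arrayfun(@(a,b) sum(t(a:b)), starts, stops);   % 'maxsum' of each cluster
    maxSum = max([clusterSum, 0]);
    fprintf('Largest positive cluster mass: %.2f\n', maxSum);
    % In the full procedure, the same quantity is recomputed for each random permutation,
    % and the observed value is compared to that null distribution.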

This approach solves the multiple comparisons problem, because the test statistic that is used
(the maximum cluster size) is computed using all the values at the same time, along all the
dimensions (time, sensors, frequency). There is only one test that is performed, so there is no
multiple comparisons problem.

The result is simpler to report but also a lot less informative than FDR-corrected nonparametric
tests. We have only one null hypothesis, "the two sets of data follow the same probability
distribution", and the outcome of the test is to accept or reject this hypothesis. Therefore we
cannot report the spatial or temporal extent of the most significant clusters. Make sure you
read this recommendation before reporting cluster-based results in your publications.

Reference documentation

For a complete description of nonparametric cluster-based statistics in FieldTrip, read the


following:

 Article: Maris & Oostenveld (2007)


 Video: Statistics using non-parametric randomization techniques (E Maris)
 Video: Non-parametric cluster-based statistical testing of MEG/EEG data (R Oostenveld)
 Tutorial: Parametric and non-parametric statistics on event-related fields
 Tutorial: Cluster-based permutation tests on event related fields
 Tutorial: Cluster-based permutation tests on time-frequency data
 Tutorial: How NOT to interpret results from a cluster-based permutation test
 Functions references: ft_timelockstatistics, ft_sourcestatistics, ft_freqstatistics

Process options

There are three separate processes in Neurostorm, to call the three FieldTrip functions.

 ft_timelockstatistics: Compare imported trials (recordings).


 ft_sourcestatistics: Compare source maps or time-frequency decompositions of sources.
 ft_freqstatistics: Compare time-frequency decompositions of sensor recordings or scouts
signals.

 See below the correspondence between the options in the interface and the FieldTrip
functions.

Test statistic options (name of the options in the interface in bold):

 cfg.numrandomization = "Number of randomizations"


 cfg.statistic = "Independent t-test" ('indepsamplesT') or "Paire t-test" ('depsamplesT)
 cfg.tail = One-tailed (-1), Two-tailed (0), One-tailed (+1)
 cfg.correctm = "Type of correction" ('no', 'cluster', 'bonferroni', 'fdr', 'max', 'holm',
'hochberg')
 cfg.method = 'montecarlo'

 cfg.correcttail = 'prob'

Cluster-based correction:

 cfg.clusteralpha = "Cluster Alpha"


 cfg.minnbchan = "Min number of neighbours"
 cfg.clustertail = cfg.tail (if not, FieldTrip crashes)
 cfg.clusterstatistic = 'maxsum'

Input options: All the data selection is done before, in the process code and functions
out_fieldtrip_*.m.

 cfg.channel = 'all';
 cfg.latency = 'all';
 cfg.frequency = 'all';
 cfg.avgovertime = 'no';
 cfg.avgchan = 'no';
 cfg.avgoverfreq = 'no';
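
Putting the options above together, the cfg structure passed to FieldTrip looks roughly like the sketch below (illustration only; FieldTrip additionally requires the design matrix, cfg.design/cfg.ivar, and a channel neighbour structure, which the process code is expected to prepare from the selected files):

    cfg = [];
    cfg.method           = 'montecarlo';
    cfg.statistic        = 'indepsamplesT';   % or 'depsamplesT' for the paired test
    cfg.numrandomization = 1000;              % "Number of randomizations"
    cfg.tail             = 0;                 % two-tailed
    cfg.correcttail      = 'prob';
    cfg.correctm         = 'cluster';         % "Type of correction"
    cfg.clusteralpha     = 0.05;              % "Cluster Alpha"
    cfg.clustertail      = cfg.tail;
    cfg.clusterstatistic = 'maxsum';
    cfg.minnbchan        = 2;                 % "Min number of neighbours" (example value)
    cfg.channel          = 'all';
    cfg.latency          = 'all';
    cfg.avgovertime      = 'no';
    % stat = ft_timelockstatistics(cfg, dataA_trials{:}, dataB_trials{:});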

Example 3: Cluster-based correction


Run again the same test, but this time select the cluster correction.

 Keep the same files selected in Process2.

 Run the process "Test > FieldTrip: ft_timelockstatistics", Type of correction = cluster
Note that it may require more than 5 GB of RAM and take more than 20 minutes: check the
Matlab command window for the FieldTrip progress report.

 Double-click on the new file, add a 2D topography to it (Ctrl+T). Note that in the Stat tab, the
options for multiple comparisons corrections are disabled because the values saved in the file
are already corrected; you can only change the significance threshold.

 Instead, you get a list of significant clusters, which you can display separately if needed.
The colored dots on the topography represent the clusters, blue for negative clusters and
red for positive clusters. You can change these colors in the Stat tab. Note that the
clusters have a spatio-temporal extent: at one time point they can be represented as two
separate blobs in the 2D topography, but these blobs are connected at other time points.

 Additional options are available for exploring the clusters; try them all. Values used to
represent the clusters: p=p-value, c=cluster statistic (maxsum), s=cluster size (connected
data points).

 Don't spend too much time exploring the clusters: in the previous cases, all the tests at
each sensor and each time point were computed independently, so we could report
individually whether each of them was significant or not. On the other hand, the cluster-
based approach only allows us to report that the two conditions are different, without
specifying where or when, which makes the visual exploration of clusters relatively
useless. Make sure you read this recommendation before reporting cluster-based results
in your publications.

Example 4: Parametric test on sources


We can reproduce similar results at the source level. If you are using non-normalized and non-
rectified current density maps, their distributions across trials should be normal, as illustrated
earlier with the histograms. You can use a parametric t-test to compare the two conditions at the
source level.

 Keep the same files selected in Process2. Select the button [Process sources] on both
sides.
 Run the process "Test > Parametric test: Independent", Select all the data, do not
average.
Use scouts: No. When this option is not selected, it uses the entire cortex instead.

 Double-click on the new file. Change the colormap definition to show only positive
values (right-click > Colormap: Stat2 > Uncheck: Absolute values) and use a different
colormap ("hot" or "jet"). The sign of the relative t-statistic is not meaningful: it depends
mostly on the orientation of the dipoles on the cortex.

Scouts and statistics

 From the Scout tab, you can also plot the scouts time series and get a summary of what is
happening in your regions of interest. Non-zero values indicate the latencies when at
least one vertex of the scout has a value that is significantly different between the two
conditions. The values that are shown are the averaged t-values in the scout. The figure
below shows the option "Values: Relative" to match the surface display, but absolute values
would make more sense in this case.

Unconstrained sources

There are some additional constraints to take into consideration when computing statistics for
source models with unconstrained orientations.

Directionality: Difference of absolute values


The test we just computed correctly detects the times and brain regions with significant
differences between the two conditions, but the sign of the t-statistic is ambiguous: it does not
tell us whether the response is stronger or weaker for the deviant stimulation.

After identifying where and when the responses are different, we can go back to the source
values and compute another measure that will give us the directionality of this difference:
abs(average(deviant_trials)) - abs(average(standard_trials))
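
Conceptually, this corresponds to the following Matlab sketch (sourceDeviant and sourceStandard are hypothetical [Nsources x Ntime x Ntrials] arrays of current density values, not actual Neurostorm variables):

    sourceDeviant  = randn(100, 361, 40);   % placeholder data
    sourceStandard = randn(100, 361, 40);   % placeholder data
    diffAbs = abs(mean(sourceDeviant, 3)) - abs(mean(sourceStandard, 3));
    % diffAbs > 0 : stronger response for the deviant condition
    % diffAbs < 0 : stronger response for the standard condition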

 Keep the same files selected in Process2. Select the button [Process sources] on both
sides.
 Run process "Test > Difference of means", with the option "Absolute value of
average".

 Double-click on the new file. Double-click on the colorbar to reset it to its defaults. The
sign of this difference is meaningful: red values mean "higher amplitude for the deviant
condition", blue values mean "higher amplitude for the standard condition", but we
don't know if they are statistically significant.

 The example above shows the two files at 148ms. The left figure shows the result of the
t-test (significant effects but ambiguous sign) and the right figure shows the difference of
absolute values (meaningful sign, but no statistical threshold). Superimposing the two
pieces of information shows that there is a significant increase of activity in the frontal
region for the deviant condition, but a decrease around the auditory cortex. The two can be
combined more formally as explained below.
 In Process2: FilesA = t-test results (sources), FilesB = difference deviant-standard (sources).

 Run process: Test > Apply statistic threshold, significance level α=0.01,
correction=FDR.

 Double-click on the new file, go to 148ms. The statistic threshold from the t-test file
was applied to the difference of rectified averages (deviant-standard): only the values
for which there is a significant effect between the two conditions are kept; all the others
are set to zero and masked. We observe areas colored in white where the two conditions
have equal amplitudes but different signs. Note that for displaying this file correctly, you
must keep the amplitude slider at 0% (in the Surface tab): a statistic threshold is
already applied to the source map, so you should not apply any additional arbitrary
amplitude threshold on it. A conceptual sketch of this masking operation is shown below.
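
A minimal sketch of the masking performed by "Apply statistic threshold" (placeholder data and a simple Bonferroni-corrected threshold for illustration; the process uses whichever correction you select, FDR in this example):

    pmap    = rand(100, 361);               % placeholder p-values [Nsources x Ntime]
    diffAbs = randn(100, 361);              % placeholder difference of rectified averages
    pCorr   = 0.01 / numel(pmap);           % corrected significance threshold
    mask    = pmap <= pCorr;                % significant data points only
    diffMasked = diffAbs .* mask;           % non-significant values are set to zero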

Example 5: Parametric test on scouts


The previous example showed how to test for differences across the full source maps, and then to
extract the significant scout activity. Another valid approach is to test directly for significant
differences in specific regions of interest, after computing the scouts time series for each trial.

This alternative has many advantages: it can be a lot faster and more memory-efficient, and it
reduces the multiple comparisons problem. Indeed, when you perform fewer tests at each time
point (the number of scouts instead of the number of sources), the FDR and Bonferroni corrections
are less conservative and lead to a higher corrected significance threshold. On the other hand, it
requires formulating stronger hypotheses: you need to define the regions in which you expect to
observe differences, instead of screening the entire brain.

 In Process2, select all the deviant trials (A) and standard trials (B). Select [Process
sources].
 Run process Test > Parametric test: Independent, Use scouts: A1L, IFGL, M1L.

 Double-click on the new file. It shows the time points where the scout signals (averaged
across vertices) are significantly different between the two conditions. We cannot represent this
new file over the cortex because we have restricted the test to the scouts and discarded all
the spatial information. As explained in the previous examples, the significant differences
are correctly detected but the sign of the t-statistic is ambiguous.

Advanced

Convert statistic results to regular files


Apply statistic threshold

You can convert the results of a statistic test to a regular file. This can be useful because many
menus and processes are not accessible for files with the "stat" tag displayed on top of their
icon.

 In Process1: Select the results for the test you ran on scouts time series (example #5).

 Run process: "Test > Apply statistic threshold": α=0.01, correction=FDR, dim=[all].

 It produces a new file with the same name but without the "stat" tag. This file is a regular matrix
that you can use with any process. When you open it, the Stat tab doesn't show up.

Simulate recordings from these scouts

 Compute a head model for the intra-subject folder:


Right-click on the channel file > Compute head model, keep all the default options.
Note that this channel file was created during one of the processes involving the two runs;
it contains an average of their respective channel files (average head positions).
 In Process1, select the new thresholded matrix file.

 Run process: "Simulate > Simulate recordings from scout", select option Save full
sources.

 This process creates two files. First it maps the scouts time series on the cortex: it
creates an empty source file with zeros everywhere, then for each scout it maps the
values of the input time series to the sources within the ROI. Then it multiplies
these artificial source maps with the forward model to simulate MEG recordings.

Advanced

Example 6: Nonparametric test on time-frequency maps


To run a test on time-frequency maps, we need to have the time-frequency decompositions of
each individual trial available in the database. So far, we saved only the averaged time-frequency
decompositions of all the trials.

 In Process1, select all the trials from both conditions and both runs.
 Run process "Frequency > Time-frequency (Morlet wavelets)". Select the options as
below.

Select only one sensor (e.g. MLP57) to make it faster. Save individual TF maps.
Measure=Magnitude: It is more standard to test the square root of power (amplitude).
Do not normalize the TF maps for a test within a single subject (only for group studies).

 In Process2, select all the deviant trials (A) and standard trials (B). Select [Process time-
freq].
 Run process: Test > Permutation test: Independent, 1000 randomizations, no
correction.
No need to select the option "Match signals between files" because the list of signals is
the same for all the trials. If you have marked bad channels in some trials during your
analysis, you would need to select this option.

 Double-click on the new file. In the Stat tab, select α=0.05 uncorrected.

 If you run this test on time-frequency files where the power has been saved, you get this
warning:

Now delete the TF decompositions for the individual trials:

 In Process1, select the files for which you computed the TF decomposition (all trials).

 Select the [Process time-freq] button.


 Run process: File > Delete files, option Delete selected files.

Advanced

Export to SPM
An alternative to running the statistical tests in Neurostorm is to export all the data and compute
the tests with an external program (R, Matlab, SPM, etc). Multiple menus exist to export files to
external file formats (right-click on a file > File > Export to file).

Advanced

On the hard drive


Right-click on the first test we computed in this tutorial > File > View file contents.


Description of the fields

 pmap: [Nsignals x Ntime x Nfreq]: p-values for all the data points. If empty, computed
from tmap (see the sketch after this list):
pmap = process_test_parametric2('ComputePvalues', tmap, df, TestType, TestTail);
 tmap: [Nsignals x Ntime x Nfreq]: t-values for all the data points.
 df: [Nsignals x Ntime x Nfreq]: Number of degrees of freedom for each test.
 Correction: Correction for multiple comparison already applied ('no', 'cluster', 'fdr', ...)
 Type: Initial type of the data ('data', 'results', 'timefreq', 'matrix').
 Comment: String displayed in the database explorer to represent the file.
 The other fields were copied from the files that were tested, and were described previously.
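
For a two-tailed Student t-test, recomputing pmap from tmap and df is equivalent to the following sketch (placeholder values; this uses tcdf from the Statistics Toolbox, whereas Neurostorm uses its own implementation in process_test_parametric2):

    tmap = randn(10, 361);                  % placeholder t-values
    df   = 459 * ones(size(tmap));          % placeholder degrees of freedom
    pmap = 2 * (1 - tcdf(abs(tmap), df));   % two-tailed p-values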

Useful functions

 process_test_parametric2: Two-sample independent parametric t-test


 process_test_parametric2p: Two-sample paired parametric t-test
 process_test_parametric1: One-sample parametric t-test (against zero)
 process_ttest_baseline: One-sample parametric t-test (against baseline)
 process_ft_timelockstatistics: FieldTrip tests for recordings (file type "data")
 process_ft_sourcestatistics: FieldTrip tests for source maps (file type "results")
 process_ft_freqstatistics: For time-frequency and scouts (file type "timefreq" and
"matrix")

 Conversion Neurostorm to FieldTrip: out_fieldtrip_data, out_fieldtrip_results,
out_fieldtrip_timefreq, out_fieldtrip_matrix

 process_extract_pthresh: Computes the pmap from the tmap, and saves thresholded
files.
 process_test_parametric2('ComputePvalues', t, df, TestType, TestTail)
 bst_stat_thresh: Computes the Bonferroni and FDR corrections in Neurostorm.

Citations
Brainstorm

 To include in all your publications using Brainstorm:


Data analysis was performed with Brainstorm (Tadel et al. 2011), which is documented
and freely available for download online under the GNU general public license
(http://neuroimage.usc.edu/brainstorm).
 Group analysis processing pipeline: Tadel et al. 2019
 Resting state processing pipeline: Niso et al. 2019

Forward models

 Spherical head model: Mosher et al. 1999


 Overlapping spheres: Huang et al. 1999
 OpenMEEG BEM: Gramfort et al. 2010 / Kybic et al. 2005

Inverse methods

 Tikhonov-regularized minimum norm: Baillet et al. 2001

Electromagnetic brain mapping, IEEE SP MAG 2001.

 dSPM: Dale et al. 2000


 sLORETA: Pascual-Marqui 2002

Preprocessing

 SSP: Uusitalo & Ilmoniemi 1997


 ICA: Makeig et al. 1996
 Good practice: Gross et al. 2013

Time-frequency

 Morlet wavelets: Bertrand et al. 2000 / Pantazis et al. 2005b


 ERS/ERD: Pfurtscheller 1992

Statistical analysis

 Non-parametric tests: Pantazis et al., 2005a

 FieldTrip cluster-based tests: Maris and Oostenveld, 2007


 Decoding: Cichy et al. 2014

Default anatomy

 MNI/ICBM152: Fonov et al. 2009 | more information


 MNI/Colin27: more information
 BCI-DNI_BrainSuite_2016: more information
 USCBrain: more information
 Infant7w: Kabdebon et al. 2014
 Oreilly_1y: Li et al. 2015, Shi et al. 2011

MRI processing

 FreeSurfer: Whether you are using FreeSurfer for the T1 segmentation (Dale et al. 1999),
the cortical atlases or the FSAverage subject default: please register on their website
(registration page) and cite the appropriate references
 BrainSuite: Shattuck et al. 2002
 BrainVISA: Rivière et al. 2003
 MNI normalization (SPM): Ashburner and Friston, 2005
 MRI coregistration (SPM): Ashburner and Friston, 2005

Tutorial datasets

 Introduction (auditory CTF MEG): more information


 Group analysis (visual Elekta MEG): more information
 EEG/Epilepsy: contact the authors
 SEEG/Epilepsy: contact the authors
 CTF MEG phantom: more information
 Elekta MEG phantom: more information
 OMEGA: more information
 HCP-MEG: more information

References
The previous citations refer to the following publications. Further reading is available in the
Publications section.

 Ashburner J, Friston KJ (2005)


Unified segmentation
NeuroImage 26, 839–851. doi: 10.1016/j.neuroimage.2005.02.018
 Baillet S, Mosher JC, Leahy RM (2001)
Electromagnetic Brain Mapping (pdf)
IEEE Signal Processing Magazine, 18(6): 14-30, Nov 2001

 Bertrand O, Tallon-Baudry C (2000)
Oscillatory gamma activity in humans: a possible role for object representation
Int J Psychophysiol, 38(3):211-23
 Cichy RM, Pantazis D, Oliva A (2014)
Resolving human object recognition in space and time
Nature neuroscience, 17:455
 Dale AM, Liu AK, Fischl BR, Buckner RL, Belliveau JW, Lewine JD, Halgren E (2000)
Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution
imaging of cortical activity
Neuron, Apr 2000, 26(1):55-67
 Dale AM, Fischl B, Sereno MI (1999)
Cortical surface-based analysis (pdf)
NeuroImage 9, 179–194. doi: 10.1006/nimg.1998.0395
 Fonov VS, Evans AC, McKinstry RC, Almli CR, Collins DL (2009)
Unbiased nonlinear average age-appropriate brain templates from birth to adulthood
NeuroImage 47(Suppl. 1):S102. doi: 10.1016/S1053-8119(09)70884-5
 Gross J, Baillet S, Barnes GR, Henson RN, Hillebrand A, Jensen O, et al (2013)
Good practice for conducting and reporting MEG research
NeuroImage 65, 349–363
 Gramfort A, Papadopoulo T, Olivi E, Clerc M (2010)
OpenMEEG: opensource software for quasistatic bioelectromagnetics
BioMedical Engineering OnLine 45:9, 2010
 Huang MX, Mosher JC, Leahy RM (1999)
A sensor-weighted overlapping-sphere head model and exhaustive head model
comparison for MEG (pdf)
Phys Med Biol, 44:423-440
 Kabdebon C, Leroy F, Simmonet H, Perrot M, Dubois J, Dehaene-Lambertz G (2014)
Anatomical correlations of the international 10–20 sensor placement system in infants
NeuroImage, 1 Oct 2014, 99:342-356
 Kybic J, Clerc M, Abboud T, Faugeras O, Keriven R, Papadopoulo T (2005)
A common formalism for the integral formulations of the forward EEG problem (pdf)
IEEE Transactions on Medical Imaging, 24:12-28, 2005
 Leahy RM, Mosher JC, Spencer ME, Huang MX, Lewine JD (1998)
A study of dipole localization accuracy for MEG and EEG using a human skull phantom
(pdf)
Electroencephalography and Clinical Neurophysiology, 107(2):159-73, 1998
 Makeig S, Bell AJ, Jung TP, Sejnowski TJ (1996)
Independent component analysis of electroencephalographic data
in Advances in Neural Information Processing Systems. Vol. 8, eds D. S. Touretzky, M.
C. Mozer, and M. E. Hasselmo (Cambridge, MA: MIT Press), 145–151.
 Maris E, Oostenveld R (2007)
Nonparametric statistical testing of EEG- and MEG-data
J Neurosci Methods, 164(1):177-90
 Mosher JC, Leahy RM, Lewis PS (1999)
EEG and MEG: Forward solutions for inverse methods (pdf)
IEEE Trans Biomedical Eng, 46(3):245-259, Mar 1999

 Niso G, Tadel F, Bock E, Cousineau M, Santos A, Baillet S (2019)
Brainstorm Pipeline Analysis of Resting-State Data from the Open MEG Archive
Frontiers in Neuroscience, Mar 2019
 Pantazis D, Nichols TE, Baillet S, Leahy RM (2005a)
A Comparison of Random Field Theory and Permutation Methods for the Statistical
Analysis of MEG data (pdf)
Neuroimage, 25(2):383-394, 2005
 Pascual-Marqui RD (2002)
Standardized low-resolution brain electromagnetic tomography (sLORETA): technical
details
Methods Find Exp Clin Pharmacol 2002, 24 Suppl D:5-12
 Pfurtscheller G (1992)
Event-related synchronization (ERS): an electrophysiological correlate of cortical areas at
rest
Electroencephalogr Clin Neurophysiol, 83(1):62-9
 Rivière D, Régis J, Cointepas Y, Papadopoulos-Orfanos D, Cachia A, Mangin JF (2003)
A freely available anatomist/brainVISA package for structural morphometry of the
cortical sulci
in Proceedings of the 9th HBM, Neuroimage, Vol. 19
 Shattuck D W, Leahy RM (2002)
Brainsuite: an automated cortical surface identification tool
Med. Image Anal. 6, 129–142. doi: 10.1016/S1361-8415(02)00054-3
 Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM (2011)
Brainstorm: A User-Friendly Application for MEG/EEG Analysis
Computational Intelligence and Neuroscience, vol. 2011, Article ID 879716, 13 pages,
2011. doi:10.1155/2011/879716
 Tadel F, Bock E, Niso G, Mosher JC, Cousineau M, Pantazis D, Leahy RM, Baillet S
(2019)
MEG/EEG Group Analysis With Brainstorm
Frontiers in Neuroscience, Feb 2019
 Uusitalo MA, Ilmoniemi RJ (1997)
Signal-space projection method for separating MEG or EEG into components
Med. Biol. Eng. Comput. 35, 135–140
