1. INTRODUCTION
The human brain is the most valuable creation of God. Man is called intelligent because of the brain. The brain translates the information delivered by the impulses, which then enables the person to react. But we lose the knowledge held in a brain when the body is destroyed after death, knowledge that might have been used for the development of human society. What happens if we create an artificial brain and upload the contents of a natural brain into it? One day, we may be able to scan ourselves into computers. Is this the beginning of eternal life?
1.4 How is it possible?
2. HOW WILL THE BLUE BRAIN PROJECT WORK?
2.2 Architecture of Blue Gene
Blue Gene/L is built using system-on-a-chip technology, in which all functions of a node (except for main memory) are integrated onto a single application-specific integrated circuit (ASIC). This ASIC includes two PowerPC 440 cores running at 700 MHz. Associated with each core is a 64-bit "double" floating point unit (FPU) that can operate in single instruction, multiple data (SIMD) mode. Each (single) FPU can execute up to two "multiply-adds" per cycle, which means that the peak performance of the chip is 8 floating point operations per cycle (4 under normal conditions, with no use of SIMD mode). This leads to a peak performance of 5.6 billion floating point operations per second (gigaFLOPS or GFLOPS) per chip or node, or 2.8 GFLOPS in non-SIMD mode. The two CPUs (central processing units) can be used in "co-processor" mode (one CPU with 512 MB RAM (random access memory) for computation, the other CPU processing the I/O (input/output) of the main CPU) or in "virtual node" mode (in which both CPUs, with 256 MB each, are used for computation). So the aggregate performance of a processor card in virtual node mode is 2 nodes = 2 x 2.8 GFLOPS = 5.6 GFLOPS, and its peak performance (optimal use of the double FPU) is 2 x 5.6 GFLOPS = 11.2 GFLOPS. A rack (1,024 nodes = 2,048 CPUs) therefore delivers 2.8 teraFLOPS (TFLOPS), with a peak of 5.6 TFLOPS. The Blue Brain Project's Blue Gene is a 4-rack system with 4,096 nodes, equal to 8,192 CPUs, and a peak performance of 22.4 TFLOPS. A 64-rack machine should provide 180 TFLOPS, or 360 TFLOPS at peak performance.
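These figures can be checked with a few lines of arithmetic. The following sketch reproduces them from the numbers given above; note that the quoted rack and system totals round 1,024 nodes per rack down to 1,000.

```python
# Check of the Blue Gene/L performance figures quoted above.
clock_hz = 700e6            # PowerPC 440 core clock
flops_per_cycle_simd = 8    # 2 cores x 2 multiply-adds x 2 ops, SIMD mode
flops_per_cycle_std = 4     # without SIMD

node_peak = clock_hz * flops_per_cycle_simd   # 5.6e9 -> 5.6 GFLOPS per node
node_std = clock_hz * flops_per_cycle_std     # 2.8e9 -> 2.8 GFLOPS per node

nodes_per_rack = 1024
rack_peak = nodes_per_rack * node_peak        # ~5.73e12; quoted as 5.6 TFLOPS
racks = 4                                     # the Blue Brain Project's system

print(f"nodes: {racks * nodes_per_rack}, CPUs: {racks * nodes_per_rack * 2}")
print(f"system peak: {racks * rack_peak / 1e12:.1f} TFLOPS")  # ~22.9 (quoted: 22.4)
```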
Neural microcircuits are composed of neurons and synaptic connections. To model neurons, the three-dimensional morphology, ion channel composition, and distributions and electrical properties of the different types of neurons are required, as well as the total numbers of neurons in the microcircuit and the relative proportions of the different types of neurons. To model synaptic connections, the physiological and pharmacological properties of the different types of synapses that connect any two types of neurons are required, in addition to statistics on which part of the axonal arborization is used (presynaptic innervation pattern) to contact which regions of the target neuron (postsynaptic innervation pattern), how many synapses are involved in forming connections, and the connectivity statistics between any two types of neurons.
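One way to make these requirements concrete is as a parameter schema. The sketch below is hypothetical; the class names and fields are illustrative, not the project's actual data model.

```python
from dataclasses import dataclass

@dataclass
class NeuronModel:
    """Data needed to model one type of neuron (illustrative schema)."""
    morphology_file: str             # three-dimensional reconstruction
    ion_channels: dict[str, float]   # channel name -> peak conductance (S/cm^2)
    electrical_type: str             # e.g. "regular-spiking", "fast-spiking"
    count_in_circuit: int            # total number of this type in the microcircuit

@dataclass
class SynapseModel:
    """Data needed to model connections between two neuron types (illustrative)."""
    pre_type: str                    # presynaptic neuron type
    post_type: str                   # postsynaptic neuron type
    physiology: dict[str, float]     # e.g. conductance, rise/decay time constants
    innervation_pattern: str         # which part of the axonal arbor contacts where
    synapses_per_connection: int     # synapses involved in forming one connection
    connection_probability: float    # connectivity statistics between the two types
```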
Fig. 1.2. Elementary building blocks of neural microcircuits.
Neurons receive inputs from thousands of other neurons, which are
intricately mapped onto different branches of highly complex
dendritic trees and require tens of thousands of compartments to
accurately represent them. There is therefore a minimal size of a
microcircuit and a minimal complexity of a neuron’s morphology that
can fully sustain a neuron. A massive increase in computational
power is required to make this quantum leap - an increase that is
provided by IBM’s Blue Gene supercomputer. By exploiting the
computing power of Blue Gene, the Blue Brain Project aims to build
accurate models of the mammalian brain from first principles. The
first phase of the project is to build a cellular-level (as opposed to a
genetic- or molecular-level) model of a 2-week-old rat somatosensory
neocortex corresponding to the dimensions of a neocortical column
(NCC) as defined by the dendritic arborizations of the layer 5
pyramidal neurons. The combination of infrared differential
interference microscopy in brain slices and the use of multi-neuron
patch-clamping allowed the systematic quantification of the
molecular, morphological and electrical properties of the different
neurons and their synaptic pathways in a manner that would allow
an accurate reconstruction of the column. Over the past 10 years, the
laboratory has prepared for this reconstruction by developing the
multi-neuron patch-clamp approach, recording from thousands of
neocortical neurons and their synaptic connections, and developing
quantitative approaches to allow a complete numerical breakdown of
the elementary building blocks of the NCC. The recordings have
mainly been in the 14- to 16-day-old rat somatosensory cortex, which is a
highly accessible region on which many researchers have converged
following a series of pioneering studies driven by Bert Sakmann.
Much of the raw data is located in the laboratory's databases, but a major
initiative is underway to make all these data freely available in a
publicly accessible database. The so-called 'blueprint' of the circuit,
although not entirely complete, has reached a sufficient level of
refinement to begin the reconstruction at the cellular level. Highly
quantitative data are available for rats of this age, mainly because
visualization of the tissue is optimal from a technical point of view.
This age also provides an ideal template because it can serve as a
starting point from which to study maturation and ageing of the NCC.
As NCCs show a high degree of stereotypy, the region from which the
template is built is not crucial, but a sensory region is preferred
because these areas contain a prominent layer 4 with cells
specialized to receive input to the neocortex from the thalamus; this
will also be required for later calibration with in vivo experiments. The
NCC should not be overly specialized, because this could make
generalization to other neocortical regions difficult, but areas such as
the barrel cortex do offer the advantage of highly controlled in vivo
data for comparison.
The mouse might have been the best species to begin with, because it
offers a spectrum of molecular approaches with which to explore the
circuit, but mouse neurons are small, which prevents the detailed
dendritic recordings that are important for modelling the nonlinear
properties of the complex dendritic trees of pyramidal cells (75-80% of
the neurons).
Fig. 1.3. The microcircuit in various stages of reconstruction. Only a small fraction of the reconstructed, three-dimensional neurons is shown; red indicates the dendritic and blue the axonal arborizations, and the columnar structure illustrates the layer definition of the NCC. From left to right: the microcircuits for layers 2, 3, 4 and 5; a single thick tufted layer 5 pyramidal neuron located within the column; one pyramidal neuron in layer 2, a small pyramidal neuron in layer 5 and the large thick tufted pyramidal neuron in layer 5; and an image of the NCC, with neurons located in layers 2 to 5.
3. INTERPRETING THE RESULTS
Running the Blue Brain simulation generates huge amounts of data. Analyses of individual neurons must be repeated thousands of times, and analyses of network activity must handle data that easily reaches hundreds of gigabytes per second of simulation. Using massively parallel computers, the data can be analyzed where it is created.
Visual exploration of the circuit is an important part of the analysis.
Mapping the simulation data onto the morphology is invaluable for an
immediate verification of single cell activity as well as network
phenomena. Architects at EPFL have worked with the Blue Brain
developers to design a visualization interface that translates the
Blue Gene data into a 3D visual representation of the column. A
different supercomputer is used for this computationally intensive
task. The visualization of the neurons' shapes is challenging, given that a column of 10,000 neurons rendered in a high-quality mesh accounts for roughly 1 billion triangles, for which about 100 GB of management data is required. Simulation data with
a resolution of electrical compartments for each neuron accounts for
another 150GB. As the electrical impulse travels through the column,
neurons light up and change colour as they become electrically active.
A visual interface makes it possible to quickly identify areas of
interest that can then be studied more extensively using further
simulations. A visual representation can also be used to compare the
simulation results with experiments that show electrical activity in
the brain.
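The storage figures quoted above can be reproduced with a back-of-the-envelope calculation; the bytes-per-triangle cost below is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope check of the visualization data volumes quoted above.
neurons = 10_000                # one neocortical column
triangles = 1_000_000_000       # quoted mesh size for the full column
bytes_per_triangle = 100        # assumed: vertices, normals, indices, bookkeeping

mesh_gb = triangles * bytes_per_triangle / 1e9
print(f"mesh management data: ~{mesh_gb:.0f} GB")          # ~100 GB, as quoted
print(f"triangles per neuron: {triangles // neurons:,}")   # 100,000
```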
Data on the anatomical and electrical types of neocortical neurons have reached a sufficient stage of convergence to generate community efforts to classify neurons, such as the Petilla Convention, a conference held in October 2005 on the anatomical and electrical types of neocortical interneurons. Single-cell gene expression studies of neocortical interneurons now provide detailed predictions of the specific combinations of more than 20 ion channel genes that underlie electrical diversity. A database of biologically accurate Hodgkin-Huxley ion channel models is being produced, and the NEURON simulator is used with automated fitting algorithms running on Blue Gene to insert ion channels and adjust their parameters to capture the specific electrical properties of the different electrical types found in each anatomical class. The statistical variations within each electrical class are also used to generate subtle variations in discharge behaviour in each neuron, so each neuron is morphologically and electrically unique. Rather than taking 10,000 days to fit each neuron's electrical behaviour with a unique profile, density and distribution of ion channels, applications are being prepared to use Blue Gene to carry out such a fit in a day.
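To make the fitting step concrete, the following minimal sketch uses NEURON's Python interface to build a single-compartment soma with Hodgkin-Huxley channels; a fitting loop would repeatedly adjust conductances such as gnabar and compare the simulated trace with recorded data. The geometry and parameter values are illustrative, not the project's actual models.

```python
from neuron import h
h.load_file("stdrun.hoc")        # NEURON's standard run system

# Single-compartment soma with the built-in Hodgkin-Huxley mechanism
soma = h.Section(name="soma")
soma.L = soma.diam = 20          # microns (illustrative geometry)
soma.insert("hh")
soma(0.5).hh.gnabar = 0.12       # S/cm^2, candidate sodium conductance

# Current-clamp stimulus
stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 5, 50, 0.1   # ms, ms, nA

# Record membrane potential and time
v = h.Vector().record(soma(0.5)._ref_v)
t = h.Vector().record(h._ref_t)

h.finitialize(-65)               # mV resting potential
h.continuerun(100)               # simulate 100 ms

# A fitting algorithm would compare this trace against target
# electrophysiology and adjust the conductances until features match.
print(f"peak Vm = {max(v):.1f} mV")
```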
These functionalized neurons are stored in a database. The three-dimensional neurons are then imported into Blue Builder, a circuit builder that loads neurons into their layers according to a "recipe" of neuron numbers and proportions. A collision detection algorithm is run to determine the structural positioning of all axo-dendritic touches, and neurons are jittered and spun until the structural touches match experimentally derived statistics. Probabilities of connectivity between different types of neurons are used to determine which neurons are connected, and all axo-dendritic touches are converted into synaptic connections. The manner in which the axons map onto the dendrites between specific anatomical classes and the distribution of synapses received by a class of neurons are used to verify and fine-tune the biological accuracy of the synaptic mapping
between neurons. It is therefore possible to place 10-50 million synapses in accurate three-dimensional space, distributed on the detailed three-dimensional morphology of each neuron. The synapses are functionalized according to the synaptic parameters for different classes of synaptic connection, within the statistical variations of each class; dynamic synaptic models are used to simulate transmission, and synaptic learning algorithms are introduced to allow plasticity. The distance from the cell body to each synapse is used to compute the axonal delay, and the circuit configuration is exported.

The configuration file is read by a NEURON subroutine that calls up each neuron and effectively inserts the location and functional properties of every synapse on the axon, soma and dendrites. One neuron is then mapped onto each processor, and the axonal delays are used to manage communication between neurons and processors. Effectively, processors are converted into neurons, and MPI (message-passing interface)-based communication cables are converted into axons interconnecting the neurons, so the entire Blue Gene is essentially converted into a neocortical microcircuit. Two software programs have been developed for simulating such large-scale networks with morphologically complex neurons. A new MPI version of NEURON has been adapted by Michael Hines to run on Blue Gene. The second simulator uses the MPI messaging component of the large-scale NeoCortical Simulator (NCS), developed by Philip Goodman, to manage the communication between NEURON-simulated neurons distributed on different processors. The latter simulator will allow embedding of a detailed NCC model into a simplified large-scale model of the whole brain. Both programs have already been tested, produce identical results and can simulate tens of thousands of morphologically and electrically complex neurons (as many as 10,000 compartments per neuron, with more than a dozen Hodgkin-Huxley ion channels per compartment).
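The neuron-to-processor mapping can be sketched in a few lines. The example below uses the mpi4py bindings, which is an assumption for illustration; the project's production codes are the MPI versions of NEURON and NCS described above.

```python
# Minimal sketch: round-robin mapping of neurons onto MPI ranks, in the
# spirit of the one-neuron-per-processor scheme described above.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

TOTAL_NEURONS = 10_000   # one neocortical column (illustrative)

# Each rank owns the neurons whose global id maps onto it.
local_ids = [gid for gid in range(TOTAL_NEURONS) if gid % size == rank]

# Spikes are exchanged between ranks; the axonal delay sets the latency
# budget for this communication (placeholder spike times below).
spikes = [(gid, 10.0) for gid in local_ids[:1]]   # (gid, spike time in ms)
all_spikes = comm.allgather(spikes)               # every rank sees every spike

if rank == 0:
    print(f"{size} ranks, {len(local_ids)} neurons on rank 0")
```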
Up to 10 neurons can be mapped onto each processor to allow simulations of the NCC with as many as 100,000 neurons. Optimization of these algorithms could allow simulations to run at close to real time.

The circuit configuration is also read by a graphic application, which renders the entire circuit in various levels of textured graphic formats. Real-time stereo visualization applications are programmed to run on the terabyte SMP (shared memory processor) Extreme series from SGI (Silicon Graphics, Inc.). The output from Blue Gene (any parameter of the model) can be fed directly into the SGI system to perform in silico imaging of the activity of the inner workings of the NCC. Eventually, the simulation of the NCC will also include the vasculature, as well as the glial network, to allow capture of neuron-glia interactions. Simulations of extracellular currents and field potentials, and the emergent electroencephalogram (EEG) activity, will also be modelled.
Simulating neurons embedded in microcircuits, microcircuits embedded in brain regions, and brain regions embedded in the whole brain, as part of the process of understanding the emergence of the complex behaviours of animals, is an inevitable progression in understanding brain function and dysfunction; the question is whether whole-brain simulations are at all possible. Computational power needs to increase about 1-millionfold before we will be able to simulate the human brain, with its 100 billion neurons, at the same level of detail as the Blue Column. Algorithmic and simulation efficiency (ensuring that all possible FLOPS are exploited) could reduce this requirement by two to three orders of magnitude. Simulating the NCC could also act as a test-bed for refining the algorithms required to simulate brain function, which could be used to produce field programmable gate array (FPGA)-based chips; FPGAs could increase computational speeds by as much as two orders of magnitude. The FPGAs could, in turn, provide the testing ground for the production of specialized NEURON-solver application-specific integrated circuits (ASICs) that could further increase computational speed by another one to two orders of magnitude. It could therefore be possible, in principle, to simulate the human brain even with current technology.
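Taking the optimistic end of each estimate, the arithmetic behind "possible in principle" runs as follows (a sketch using only the factors quoted above):

```python
# Rough feasibility arithmetic from the orders of magnitude quoted above.
required_speedup = 1e6   # "about 1-millionfold" for a cellular-level human brain
algorithmic_gain = 1e2   # simulation efficiency: two to three orders of magnitude
fpga_gain = 1e2          # FPGAs: up to two orders of magnitude
asic_gain = 1e1          # NEURON-solver ASICs: one to two orders of magnitude

remaining = required_speedup / (algorithmic_gain * fpga_gain * asic_gain)
print(f"hardware speedup still required: ~{remaining:.0f}x")   # ~10x
```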
The computer industry is facing what is known as a discontinuity, with increasing processor speed leading to unacceptably high power consumption and heat production. This is pushing a qualitatively new transition in the types of processor to be used in future computers, and these advances should begin to make genetic- and molecular-level simulations possible.

Modelling the brain with biological accuracy requires a cascade of software applications and data manipulation. Experimental results that provide the elementary building blocks of the microcircuit are stored in a database. Before three-dimensional neurons are modelled electrically, the morphology is parsed for errors and repaired where arborizations were damaged during
slice preparation. The morphological statistics for a class of neurons are used to clone multiple copies of neurons, generating the full morphological diversity and the thousands of neurons required in the simulation. A spectrum of ion channels is inserted, and conductances and distributions are altered to fit the neurons' electrical properties according to known statistical distributions, capturing the range of electrical classes and the uniqueness of each neuron's behaviour (model fitting/electrical capture). A circuit builder is used to place neurons within a three-dimensional column, to perform axo-dendritic collisions and, using structural and functional statistics of synaptic connectivity, to convert a fraction of axo-dendritic touches into synapses. The circuit configuration is read by NEURON, which calls up each modelled neuron and inserts the several thousand synapses onto the appropriate cellular locations. The circuit can be inserted into a brain region using the brain builder. An environment builder is used to set up the stimulus and recording conditions. Neurons are mapped onto processors, with integer numbers of neurons per processor. The output is visualized, analysed and/or fed into real-time algorithms for feedback stimulation.
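Read as a whole, the passage above describes a pipeline. The sketch below outlines its stages in code; every function is a hypothetical placeholder standing in for a Blue Brain tool, not the project's actual software.

```python
# Hypothetical outline of the data-manipulation cascade described above.

def parse_and_repair(morphology: str) -> str:
    """Check a reconstructed morphology for errors; repair damaged arbors."""
    return morphology

def clone_and_fit(morphology: str, n_copies: int) -> list[str]:
    """Clone statistical variants and fit ion channels (electrical capture)."""
    return [f"{morphology}/variant-{i}" for i in range(n_copies)]

def build_circuit(neurons: list[str]) -> dict:
    """Place neurons in the 3D column; convert axo-dendritic touches to synapses."""
    return {"neurons": neurons, "synapses": len(neurons) * 1000}

def simulate(circuit: dict, stimulus: str) -> dict:
    """Map neurons onto processors and run the NEURON-based simulation."""
    return {"circuit": circuit, "stimulus": stimulus}

raw = ["L5-thick-tufted-pyramidal"]     # morphology from the experimental database
neurons = [v for m in raw for v in clone_and_fit(parse_and_repair(m), n_copies=3)]
result = simulate(build_circuit(neurons), stimulus="current step")
print(len(result["circuit"]["neurons"]), "neurons simulated")
```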
4. APPLICATIONS OF THE BLUE BRAIN PROJECT
Fig. 1.4. The data manipulation cascade.
5. ADVANTAGES AND LIMITATIONS
5.1 Advantages
We can remember things without any effort.
Decisions can be made without the presence of the person.
Even after the death of a man, his intelligence can still be used.
The activity of different animals can be understood; that is, by interpretation of the electric impulses from the brains of animals, their thinking can be understood easily.
It would allow the deaf to hear via direct nerve stimulation, and would also be helpful for many psychological diseases. By downloading the contents of a brain that was previously uploaded into the computer, a person could be cured of mental illness.
5.2 Limitations
Further, these technologies will open up many new dangers; we will be susceptible to new forms of harm.
6. FUTURE PERSPECTIVE
The synthesis era in neuroscience started with the launch of the
Human Brain Project and is an inevitable phase triggered by a critical
amount of fundamental data. The data set does not need to be
complete before such a phase can begin; indeed, such synthesis is essential to guide reductionist research into the deeper facets of brain structure and function.
In short, we can hope to learn a great deal about brain function and
dysfunction from accurate models of the brain. The time taken to
build detailed models of the brain depends on the level of detail that
is captured. Indeed, the first version of the Blue Column, which has
10,000 neurons, has already been built and simulated; it is the
refinement of the detailed properties and calibration of the circuit
that takes time.
A model of the entire brain at the cellular level will probably take
the next decade. There is no fundamental obstacle to modeling the
brain and it is therefore likely that we will have detailed models of
mammalian brains, including that of man, in the near future. Even if
overestimated by a decade or two, this is still just a 'blink of an eye'
in relation to the evolution of human civilization.
7. CONCLUSION
In conclusion, we will be able to transfer ourselves into computers at some point. Most arguments against this outcome are seemingly easy to circumvent: they are either simplistic, or simply require more time for technology to advance. The only serious threats raised can also be overcome by combining biological and digital technologies.