
1. INTRODUCTION
The human brain is the most valuable creation of God. Man is called intelligent because of the brain. The brain translates the information delivered by nerve impulses, which enables a person to react. But the knowledge held in a brain is lost when the body is destroyed after death, knowledge that might otherwise have been used for the development of human society. What would happen if we created an artificial brain and uploaded the contents of a natural brain into it?

1.1 Blue Brain


"Blue Brain" is the name of the world's first virtual brain, that is, a machine that can function like a human brain. Scientists are currently researching how to create an artificial brain that can think, respond, take decisions and store information in memory. The main aim is to upload a human brain into a machine, so that a person can think and take decisions without any effort. After the death of the body, the virtual brain would act in place of the person, so that even after death the knowledge, intelligence, personality, feelings and memories of that person are not lost and can still be used for the development of human society. No one has yet fully understood the complexity of the human brain; it is more complex than any circuitry in the world. The question therefore arises: is it really possible to create a human brain? The answer is yes, because whatever man has created so far has been modelled on nature, and before the computer existed, such a device seemed equally impossible.

Technology is growing faster than ever. IBM is now researching how to create a virtual brain, called the "Blue Brain". If it succeeds, this would be the first virtual brain in the world. Within 30 years, we may be able to scan ourselves into computers. Is this the beginning of eternal life?

1.2 What is a Virtual Brain?


A virtual brain is an artificial brain: it is not the natural brain itself, but it can act like one. It can think like a brain, take decisions based on past experience, and respond as the natural brain would. This is made possible by a supercomputer with a huge amount of storage capacity and processing power, together with an interface between the human brain and the artificial one. Through this interface, the data stored in the natural brain can be uploaded into the computer, so the knowledge and intelligence of anyone can be preserved and used forever, even after the death of that person.

1.3 Why do we need a Virtual Brain?


We have progressed as a species because of our intelligence. Intelligence is an inborn quality that cannot be created. Some people have this quality to such an extent that they can think further than others can reach, and human society is always in need of such intelligence and such intelligent brains. But that intelligence is lost along with the body after death. The virtual brain is a solution to this: the brain and its intelligence would live on even after death. We also often have difficulty remembering things such as people's names and birthdays, the spellings of words, proper grammar, important dates, history and facts. In a busy life everyone wants some relief; could a machine not assist with all of this? A virtual brain may be the solution. What if we uploaded ourselves into a computer, were simply aware within a computer, or even lived in a computer as a program?

1.4 How is it possible?

First, it is helpful to describe the basic ways in which a person might be uploaded into a computer. Raymond Kurzweil has written an interesting paper on this topic, in which he describes both invasive and non-invasive techniques. The most promising approach is the use of very small robots, or nanobots. These robots would be small enough to travel through our circulatory systems. Travelling into the spine and brain, they would be able to monitor the activity and structure of our central nervous system.

They would be able to provide an interface with computers that is as close to our mind as possible while we still reside in our biological form. Nanobots could also carefully scan the structure of our brain, providing a complete readout of the connections between neurons, and record the current state of the brain. This information, when entered into a computer, could then continue to function as us. All that is required is a computer with enough storage space and processing power. Is the pattern and state of the neuron connections in our brain truly all that makes up our conscious selves? Many people firmly believe that we possess a soul, while some very technical people believe that quantum forces contribute to our awareness; but here we have to think technically. Note, however, that we need not know how the brain actually functions in order to transfer it to a computer; we need only know the medium and its contents. The mystery of how we achieved consciousness in the first place, or how we maintain it, is a separate discussion. Admittedly this concept appears very difficult and complex, and to approach it we first have to understand how the human brain actually works.

2. HOW WILL THE BLUE BRAIN PROJECT WORK?

2.1 Goals & Objectives


The Blue Brain Project is the first comprehensive attempt to
reverse-engineer the mammalian brain, in order to understand brain
function and dysfunction through detailed simulations. The mission
in undertaking The Blue Brain Project is to gather all existing
knowledge of the brain, accelerate the global research effort of reverse
engineering the structure and function of the components of the
brain, and to build a complete theoretical framework that can
orchestrate the reconstruction of the brain of mammals and man
from the genetic to the whole brain levels, into computer models for
simulation, visualization and automatic knowledge archiving by 2015.
Biologically accurate computer models of mammalian and human
brains could provide a new foundation for understanding functions
and malfunctions of the brain and for a new generation of
information-based, customized medicine.

Fig. 1.1. The Blue Gene/L supercomputer architecture

2.2 Architecture of Blue Gene
Blue Gene/L is built using system-on-a-chip technology, in which all functions of a node (except for main memory) are integrated onto a single application-specific integrated circuit (ASIC). This ASIC includes 2 PowerPC 440 cores running at 700 MHz. Associated with each core is a 64-bit "double" floating point unit (FPU) that can operate in single instruction, multiple data (SIMD) mode. Each (single) FPU can execute up to 2 "multiply-adds" per cycle, which means that the peak performance of the chip is 8 floating point operations per cycle (4 under normal conditions, with no use of SIMD mode). This leads to a peak performance of 5.6 billion floating point operations per second (gigaFLOPS or GFLOPS) per chip or node, or 2.8 GFLOPS in non-SIMD mode. The two CPUs (central processing units) can be used in "co-processor" mode (resulting in one CPU and 512 MB RAM (random access memory) for computation, with the other CPU used for processing the I/O (input/output) of the main CPU) or in "virtual node" mode (in which both CPUs, with 256 MB each, are used for computation). So, the aggregate performance of a processor card in virtual node mode is 2 x node = 2 x 2.8 GFLOPS = 5.6 GFLOPS, and its peak performance (with optimal use of the double FPU) is 2 x 5.6 GFLOPS = 11.2 GFLOPS. A rack (1,024 nodes = 2,048 CPUs) therefore has 2.8 TFLOPS, with a peak of 5.6 TFLOPS. The Blue Brain Project's Blue Gene is a 4-rack system that has 4,096 nodes, equal to 8,192 CPUs, with a peak performance of 22.4 TFLOPS. A 64-rack machine should provide 180 TFLOPS, or 360 TFLOPS at peak performance.
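To make the arithmetic above easy to check, the short Python sketch below (illustrative only, not project code) recomputes the per-node, per-rack and whole-system figures; small differences from the quoted values come only from rounding the per-rack figure to 5.6 TFLOPS.

# Peak-performance arithmetic for the Blue Gene/L configuration described above.
FLOPS_PER_CYCLE_SIMD = 8      # 2 FPUs x 2 multiply-adds per cycle (SIMD mode)
FLOPS_PER_CYCLE_NORMAL = 4    # without SIMD
CLOCK_HZ = 700e6              # 700 MHz PowerPC 440 cores

node_peak = FLOPS_PER_CYCLE_SIMD * CLOCK_HZ      # 5.6 GFLOPS per node
node_normal = FLOPS_PER_CYCLE_NORMAL * CLOCK_HZ  # 2.8 GFLOPS per node

NODES_PER_RACK = 1024                            # 2,048 CPUs per rack
RACKS = 4                                        # the Blue Brain Project system

rack_peak = NODES_PER_RACK * node_peak           # ~5.7 TFLOPS (quoted as 5.6 TFLOPS)
system_peak = RACKS * rack_peak                  # ~22.9 TFLOPS (quoted as 22.4 TFLOPS)

print(f"node peak:   {node_peak / 1e9:.1f} GFLOPS")
print(f"rack peak:   {rack_peak / 1e12:.1f} TFLOPS")
print(f"system peak: {system_peak / 1e12:.1f} TFLOPS")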

2.3 Modelling the Microcircuit


The scheme shows the minimal essential building blocks required to reconstruct a neural microcircuit. Microcircuits are composed of neurons and synaptic connections. To model neurons, the three-dimensional morphology, ion channel composition, distributions and electrical properties of the different types of neurons are required, as well as the total number of neurons in the microcircuit and the relative proportions of the different types of neurons. To model synaptic connections, the physiological and pharmacological properties of the different types of synapses that connect any two types of neurons are required, in addition to statistics on which part of the axonal arborization is used (presynaptic innervation pattern) to contact which regions of the target neuron (postsynaptic innervation pattern), how many synapses are involved in forming connections, and the connectivity statistics between any two types of neurons.

Fig. 1.2. Elementary building blocks of neural microcircuits.
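As one way of organizing these requirements, the following Python sketch (with hypothetical class and field names, not the project's actual data model) lists the parameters the text identifies for neuron types and for the synaptic pathways between them.

# A minimal sketch of the elementary building blocks of a microcircuit model:
# neuron types and the synaptic pathways that connect them (illustrative only).
from dataclasses import dataclass, field

@dataclass
class NeuronType:
    name: str                      # anatomical/electrical class, e.g. "L5 thick-tufted pyramidal"
    morphology_file: str           # 3D reconstruction of the dendritic/axonal arbor
    ion_channels: dict             # channel name -> density/distribution parameters
    count_in_circuit: int          # total number of this type in the microcircuit

@dataclass
class SynapticPathway:
    pre: str                       # presynaptic neuron type
    post: str                      # postsynaptic neuron type
    connection_probability: float  # connectivity statistics between the two types
    synapses_per_connection: int   # how many synapses form one connection
    innervation_pattern: str       # which axonal/dendritic regions are contacted
    physiology: dict               # kinetics, plasticity, pharmacological properties

@dataclass
class Microcircuit:
    neuron_types: list = field(default_factory=list)
    pathways: list = field(default_factory=list)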

7
Neurons receive inputs from thousands of other neurons, which are intricately mapped onto different branches of highly complex dendritic trees and require tens of thousands of compartments to represent them accurately. There is therefore a minimal size of microcircuit and a minimal complexity of a neuron's morphology that can fully sustain a neuron. A massive increase in computational power is required to make this quantum leap, an increase that is provided by IBM's Blue Gene supercomputer. By exploiting the computing power of Blue Gene, the Blue Brain Project aims to build accurate models of the mammalian brain from first principles. The first phase of the project is to build a cellular-level (as opposed to a genetic- or molecular-level) model of a 2-week-old rat somatosensory neocortex corresponding to the dimensions of a neocortical column (NCC) as defined by the dendritic arborizations of the layer 5 pyramidal neurons. The combination of infrared differential interference microscopy in brain slices and the use of multi-neuron patch-clamping allowed the systematic quantification of the molecular, morphological and electrical properties of the different neurons and their synaptic pathways in a manner that would allow an accurate reconstruction of the column. Over the past 10 years, the laboratory has prepared for this reconstruction by developing the multi-neuron patch-clamp approach, recording from thousands of neocortical neurons and their synaptic connections, and developing quantitative approaches to allow a complete numerical breakdown of the elementary building blocks of the NCC. The recordings have mainly been in the 14-16-day-old rat somatosensory cortex, which is a highly accessible region on which many researchers have converged following a series of pioneering studies driven by Bert Sakmann. Much of the raw data is located in our databases, but a major initiative is underway to make all these data freely available in a publicly accessible database. The so-called 'blueprint' of the circuit, although not entirely complete, has reached a sufficient level of refinement to begin the reconstruction at the cellular level. Highly quantitative data are available for rats of this age, mainly because visualization of the tissue is optimal from a technical point of view. This age also provides an ideal template because it can serve as a starting point from which to study maturation and ageing of the NCC. As NCCs show a high degree of stereotypy, the region from which the template is built is not crucial, but a sensory region is preferred because these areas contain a prominent layer 4 with cells specialized to receive input to the neocortex from the thalamus; this will also be required for later calibration with in vivo experiments. The NCC should not be overly specialized, because this could make generalization to other neocortical regions difficult, but areas such as the barrel cortex do offer the advantage of highly controlled in vivo data for comparison.
The mouse might have been the best species to begin with, because it offers a spectrum of molecular approaches with which to explore the circuit, but mouse neurons are small, which prevents the detailed dendritic recordings that are important for modelling the nonlinear properties of the complex dendritic trees of pyramidal cells (75-80% of the neurons). The image shows the microcircuit in various stages of reconstruction. Only a small fraction of the reconstructed, three-dimensional neurons is shown. Red indicates the dendritic and blue the axonal arborizations. The columnar structure illustrates the layer definition of the NCC.
 The microcircuits (from left to right) for layers 2, 3, 4 and 5.
 A single thick tufted layer 5 pyramidal neuron located within the
column.

 One pyramidal neuron in layer 2, a small pyramidal neuron in layer 5
and the large thick tufted pyramidal neuron in layer 5.
 An image of the NCC, with neurons located in layers 2 to 5.

2.4 Simulating the Microcircuit


Once the microcircuit is built, the exciting work of making the circuit function can begin. All 8,192 processors of the Blue Gene are pressed into service in a massively parallel computation, solving the complex mathematical equations that govern the electrical activity in each neuron when a stimulus is applied. As an electrical impulse travels from neuron to neuron, the results are communicated between processors via MPI. Currently, the time required to simulate the circuit is about two orders of magnitude larger than the biological time being simulated. The Blue Brain team is working to streamline the computation so that the circuit can function in real time, meaning that one second of activity is modelled in one second of computing time.
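As a rough illustration of this parallelization idea, the sketch below (assuming the mpi4py library is available; the neuron update is a placeholder, not the project's solver) distributes neurons over MPI ranks and exchanges spike events every timestep.

# Sketch: neurons distributed across MPI ranks, spikes exchanged each step.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_NEURONS = 10_000                  # one neocortical column
DT_MS = 0.025                       # integration timestep (illustrative value)

# Round-robin mapping of neurons onto processors ("one neuron per processor"
# in the ideal case, several per processor otherwise).
my_neurons = [gid for gid in range(N_NEURONS) if gid % size == rank]

def advance_local_neurons(neurons, dt):
    """Placeholder for solving each local neuron's membrane equations for one step."""
    return []                       # would return the spikes emitted this step

t_ms = 0.0
while t_ms < 100.0:                 # simulate 100 ms of biological time
    spikes = advance_local_neurons(my_neurons, DT_MS)
    # Exchange spikes with every other rank so synaptic inputs can be delivered
    # (the delivery step itself is omitted in this sketch).
    all_spikes = comm.allgather(spikes)
    t_ms += DT_MS

With this scheme, the real-time factor is simply wall-clock time divided by simulated biological time; as noted above, it is currently about one hundred.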

3. INTERPRETING THE RESULTS
Running the Blue Brain simulation generates huge amounts of data. Analyses of individual neurons must be repeated thousands of times, and analyses dealing with network activity must handle data that easily reach hundreds of gigabytes per second of simulation. Using massively parallel computers, the data can be analyzed where they are created (server-side analysis for experimental data, online analysis during simulation).

Fig. 1.3. Reconstructing the neocortical column.

Given the geometric complexity of the column, a visual exploration of the circuit is an important part of the analysis.
Mapping the simulation data onto the morphology is invaluable for an immediate verification of single-cell activity as well as network phenomena. Architects at EPFL have worked with the Blue Brain developers to design a visualization interface that translates the Blue Gene data into a 3D visual representation of the column. A different supercomputer is used for this computationally intensive task. Visualizing the neurons' shapes is challenging, given that a column of 10,000 neurons rendered as a high-quality mesh amounts to roughly 1 billion triangles, for which about 100 GB of management data is required. Simulation data with a resolution of electrical compartments for each neuron accounts for another 150 GB. As the electrical impulse travels through the column, neurons light up and change colour as they become electrically active. A visual interface makes it possible to quickly identify areas of interest that can then be studied more extensively using further simulations. A visual representation can also be used to compare the simulation results with experiments that show electrical activity in the brain.
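To see why the visualization data reach this scale, here is a back-of-envelope estimate; the triangles-per-neuron and bytes-per-triangle figures are my own assumptions, chosen only to reproduce the order of magnitude quoted above.

# Rough sizing of the mesh data for a 10,000-neuron column (assumed figures).
NEURONS = 10_000
TRIANGLES_PER_NEURON = 100_000      # assumed: high-quality mesh per cell
BYTES_PER_TRIANGLE = 100            # assumed: vertices, normals, indices, attributes

mesh_bytes = NEURONS * TRIANGLES_PER_NEURON * BYTES_PER_TRIANGLE
print(f"{NEURONS * TRIANGLES_PER_NEURON:.0e} triangles "
      f"~ {mesh_bytes / 1e9:.0f} GB of mesh management data")
# -> 1e+09 triangles ~ 100 GB, consistent with the figures quoted in the text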

3.1 Data Manipulation Cascade


Building the Blue Column requires a series of data manipulations. The first step is to parse each three-dimensional morphology and correct errors due to the in vitro preparation and reconstruction. The repaired neurons are placed in a database from which statistics for the different anatomical classes of neurons are obtained. These statistics are used to clone an indefinite number of neurons in each class to capture the full morphological diversity. The next step is to take each neuron and insert ion channel models in order to produce the array of electrical types. The field has reached a sufficient stage of convergence to generate efforts to classify neurons, such as the Petilla Convention, a conference held in October 2005 on the anatomical and electrical types of neocortical interneurons, established by the community. Single-cell gene expression studies of neocortical interneurons now provide detailed predictions of the specific combinations of more than 20 ion channel genes that underlie electrical diversity. A database of biologically accurate Hodgkin-Huxley ion channel models is being produced. The simulator NEURON is used with automated fitting algorithms running on Blue Gene to insert ion channels and adjust their parameters to capture the specific electrical properties of the different electrical types found in each anatomical class. The statistical variations within each electrical class are also used to generate subtle variations in discharge behaviour in each neuron, so each neuron is morphologically and electrically unique. Rather than taking 10,000 days to fit each neuron's electrical behaviour with a unique profile, density and distribution of ion channels, applications are being prepared to use Blue Gene to carry out such a fit in a day. These functionalized neurons are stored in a database.

The three-dimensional neurons are then imported into Blue Builder, a circuit builder that loads neurons into their layers according to a "recipe" of neuron numbers and proportions. A collision detection algorithm is run to determine the structural positioning of all axo-dendritic touches, and neurons are jittered and spun until the structural touches match experimentally derived statistics. Probabilities of connectivity between different types of neurons are used to determine which neurons are connected, and all axo-dendritic touches are converted into synaptic connections. The manner in which the axons map onto the dendrites between specific anatomical classes, and the distribution of synapses received by a class of neurons, are used to verify and fine-tune the biological accuracy of the synaptic mapping between neurons. It is therefore possible to place 10-50 million synapses in accurate three-dimensional space, distributed on the detailed three-dimensional morphology of each neuron. The synapses are functionalized according to the synaptic parameters for different classes of synaptic connection, within the statistical variations of each class; dynamic synaptic models are used to simulate transmission, and synaptic learning algorithms are introduced to allow plasticity. The distance from the cell body to each synapse is used to compute the axonal delay, and the circuit configuration is exported.

The configuration file is read by a NEURON subroutine that calls up each neuron and effectively inserts the location and functional properties of every synapse on the axon, soma and dendrites. One neuron is then mapped onto each processor, and the axonal delays are used to manage communication between neurons and processors. Effectively, processors are converted into neurons, and MPI (message-passing interface) based communication cables are converted into axons interconnecting the neurons, so the entire Blue Gene is essentially converted into a neocortical microcircuit. We developed two software programs for simulating such large-scale networks with morphologically complex neurons. A new MPI version of NEURON has been adapted by Michael Hines to run on Blue Gene. The second simulator uses the MPI messaging component of the large-scale NeoCortical Simulator (NCS), which was developed by Philip Goodman, to manage the communication between NEURON-simulated neurons distributed on different processors. The latter simulator will allow embedding of a detailed NCC model into a simplified large-scale model of the whole brain. Both of these programs have already been tested, produce identical results, and can simulate tens of thousands of morphologically and electrically complex neurons (as many as 10,000 compartments per neuron with more than a dozen Hodgkin-Huxley ion channels per compartment). Up to 10 neurons can be mapped onto each processor to allow simulations of the NCC with as many as 100,000 neurons. Optimization of these algorithms could allow simulations to run close to real time.

The circuit configuration is also read by a graphics application, which renders the entire circuit in various levels of textured graphic formats. Real-time stereo visualization applications are programmed to run on the terabyte SMP (shared memory processor) Extreme series from SGI (Silicon Graphics, Inc.). The output from Blue Gene (any parameter of the model) can be fed directly into the SGI system to perform in silico imaging of the activity of the inner workings of the NCC. Eventually, the simulation of the NCC will also include the vasculature, as well as the glial network, to allow capture of neuron-glia interactions. Simulations of extracellular currents and field potentials, and the emergent electroencephalogram (EEG) activity, will also be modelled.
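The cascade described above can be summarized as a pipeline. The sketch below is a self-contained toy version in Python: every function is a stand-in with made-up behaviour (random numbers in place of fitted parameters and measured statistics), intended only to show the order of the steps, not the project's actual tools.

# Toy data manipulation cascade: repair -> clone -> fit -> place -> synapses -> export.
import random

def repair_morphology(m):          # stand-in for fixing slice/reconstruction damage
    return dict(m, repaired=True)

def clone_neuron(template):        # clone within a class, with slight variation
    return dict(template, jitter=random.random())

def fit_ion_channels(neuron):      # stand-in for NEURON + Blue Gene model fitting
    neuron["channels"] = {"Na": random.uniform(0.1, 0.2), "K": random.uniform(0.01, 0.05)}

def build_column(templates, counts, connection_prob=0.1):
    neurons = []
    for cls, template in templates.items():
        for _ in range(counts[cls]):
            n = clone_neuron(repair_morphology(template))
            fit_ion_channels(n)
            neurons.append(n)
    # Convert a statistically constrained fraction of potential touches into synapses.
    synapses = [(i, j) for i in range(len(neurons)) for j in range(len(neurons))
                if i != j and random.random() < connection_prob]
    return {"neurons": neurons, "synapses": synapses}   # the "circuit configuration"

circuit = build_column({"L5_pyramidal": {"cls": "L5_pyramidal"}}, {"L5_pyramidal": 50})
print(len(circuit["neurons"]), "neurons,", len(circuit["synapses"]), "synapses")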

3.2 Whole Brain Simulations


The main limitations for digital computers in the simulation of biological processes are the extreme temporal and spatial resolution demanded by some biological processes, and the limitations of the algorithms that are used to model them. If each atomic collision is simulated, the most powerful supercomputers still take days to simulate a microsecond of protein folding, so it is, of course, not possible to simulate complex biological systems at the atomic scale. However, models at higher levels, such as the molecular or cellular levels, can capture lower-level processes and allow complex large-scale simulations of biological processes. The Blue Brain Project's Blue Gene can simulate an NCC of up to 100,000 highly complex neurons at the cellular level, or as many as 100 million simple neurons (about the same number of neurons as are found in a mouse brain).

However, simulating neurons embedded in microcircuits, microcircuits embedded in brain regions, and brain regions embedded in the whole brain, as part of the process of understanding the emergence of the complex behaviours of animals, is an inevitable progression in understanding brain function and dysfunction, and the question is whether whole-brain simulations are at all possible. Computational power needs to increase about a millionfold before we will be able to simulate the human brain, with its 100 billion neurons, at the same level of detail as the Blue Column. Algorithmic and simulation efficiency (which ensure that all possible FLOPS are exploited) could reduce this requirement by two to three orders of magnitude. Simulating the NCC could also act as a test-bed to refine the algorithms required to simulate brain function, which can be used to produce field-programmable gate array (FPGA)-based chips. FPGAs could increase computational speeds by as much as two orders of magnitude. The FPGAs could, in turn, provide the testing ground for the production of specialized NEURON-solver application-specific integrated circuits (ASICs) that could further increase computational speed by another one to two orders of magnitude. It could therefore be possible, in principle, to simulate the human brain even with current technology. The computer industry is facing what is known as a discontinuity, with increasing processor speed leading to unacceptably high power consumption and heat production. This is pushing a qualitatively new transition in the types of processor to be used in future computers. These advances in computing should begin to make genetic- and molecular-level simulations possible.
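To make this orders-of-magnitude argument concrete, the sketch below multiplies the conservative ends of the quoted improvement ranges against the millionfold requirement; the individual factors are taken from the text, but the framing is mine.

# How the quoted speedup factors combine multiplicatively (conservative ends of each range).
required_increase = 1e6            # ~millionfold more compute for a human brain

gains = {
    "algorithmic/simulation efficiency": 1e2,   # two to three orders of magnitude
    "FPGA acceleration": 1e2,                   # up to two orders of magnitude
    "NEURON-solver ASICs": 1e1,                 # one to two orders of magnitude
}

combined = 1.0
for source, factor in gains.items():
    combined *= factor

print(f"combined speedup: {combined:.0e}x")
print(f"remaining hardware gap: {required_increase / combined:.0e}x")
# With the optimistic ends of each range the gap closes entirely, which is why the
# text argues a human-brain simulation is possible in principle with current technology.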
Modelling the brain with biological accuracy requires a cascade of software applications and data manipulations. Experimental results that provide the elementary building blocks of the microcircuit are stored in a database. Before three-dimensional neurons are modelled electrically, the morphology is parsed for errors and for repair of arborizations damaged during slice preparation. The morphological statistics for a class of neurons are used to clone multiple copies of neurons to generate the full morphological diversity and the thousands of neurons required in the simulation. A spectrum of ion channels is inserted, and conductances and distributions are altered to fit the neuron's electrical properties according to known statistical distributions, to capture the range of electrical classes and the uniqueness of each neuron's behaviour (model fitting/electrical capture). A circuit builder is used to place neurons within a three-dimensional column, to perform axo-dendritic collisions and, using structural and functional statistics of synaptic connectivity, to convert a fraction of axo-dendritic touches into synapses. The circuit configuration is read by NEURON, which calls up each modelled neuron and inserts the several thousand synapses onto appropriate cellular locations. The circuit can be inserted into a brain region using the brain builder. An environment builder is used to set up the stimulus and recording conditions. Neurons are mapped onto processors, with integer numbers of neurons per processor. The output is visualized, analysed and/or fed into real-time algorithms for feedback stimulation.

4. APPLICATIONS OF BLUE BRAIN PROJECT

4.1 What can we learn from Blue Brain?

Detailed, biologically accurate brain simulations offer the opportunity to answer some fundamental questions about the brain that cannot be addressed with any current experimental or theoretical approaches. These include:

 Defining Functions of the Basic Elements.


 Understanding Complexity.
 Exploring the Role of Dendrites.
 Revealing Functional Diversity.
 Tracking the Emergence of Intelligence.
 Identifying Points of Vulnerability.
 Simulating Disease and Developing Treatments.
 Providing a Circuit Design Platform.

4.2 Applications of Blue Brain

 Gathering and Testing 100 Years of Data.


 Cracking the Neural Code.
 Understanding Neocortical Information Processing.
 A Novel Tool for Drug Discovery for Brain Disorders.
 A Global Facility.
 A Foundation for Whole Brain Simulations.
 A Foundation for Molecular Modeling of Brain Function.

Fig. 1.4. The data manipulation cascade

5. ADVANTAGES AND LIMITATIONS

5.1 Advantages
 We can remember things without any effort.
 Decisions can be made without the presence of the person.
 Even after the death of a person, his or her intelligence can still be used.
 The activity of different animals can be understood; that is, by interpretation of the electrical impulses from an animal's brain, its thinking can be understood easily.
 It would allow the deaf to hear via direct nerve stimulation, and could also be helpful for many psychological diseases. By downloading the contents of a brain that had been uploaded into the computer, a person could be freed from madness.

5.2 Limitations
Furthermore, these technologies will open up many new dangers, and we will become susceptible to new forms of harm.

 We would become dependent upon computer systems.


 Others may use technical knowledge against us.
 Computer viruses will pose an increasingly critical threat.
 The real threat, however, is the fear that people will have of new
technologies.
 That fear may culminate in a large resistance. Clear evidence of this
type of fear is found today with respect to human cloning.

6. FUTURE PERSPECTIVE
The synthesis era in neuroscience started with the launch of the
Human Brain Project and is an inevitable phase triggered by a critical
amount of fundamental data. The data set does not need to be
complete before such a phase can begin. Indeed, it is essential to
guide reductionist research into the deeper facets of brain structure
and function.

As a complement to experimental research, it offers rapid assessment of the probable effect of a new finding on preexisting knowledge, which can no longer be managed completely by any one researcher. Detailed models will probably become the final form of databases that are used to organize all knowledge of the brain and allow hypothesis testing, rapid diagnoses of brain malfunction, as well as development of treatments for neurological disorders.

In short, we can hope to learn a great deal about brain function and dysfunction from accurate models of the brain. The time taken to build detailed models of the brain depends on the level of detail that is captured. Indeed, the first version of the Blue Column, which has 10,000 neurons, has already been built and simulated; it is the refinement of the detailed properties and the calibration of the circuit that take time.

A model of the entire brain at the cellular level will probably take
the next decade. There is no fundamental obstacle to modeling the
brain and it is therefore likely that we will have detailed models of
mammalian brains, including that of man, in the near future. Even if
overestimated by a decade or two, this is still just a ’blink of an eye’
in relation to the evolution of human civilization.

7. CONCLUSION
In conclusion, we will be able to transfer ourselves into computers at some point. Most arguments against this outcome seem easy to circumvent: they are either simple-minded or merely require more time for technology to advance. The only serious threats that have been raised can also be overcome by combining biological and digital technologies.

8. REFERENCES
 Engineering in Medicine and Biology Society, 2008 (EMBS 2008), 30th Annual International Conference of the IEEE.
 Henry Markram, “The Blue Brain Project”, Nature Reviews Neuroscience, February 2006.
 “Simulated brain closer to thought”, BBC News, 22 April 2009.
 “Project Milestones”, Blue Brain, http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085
 Graham-Rowe, Duncan, “Mission to build a simulated brain begins”, New Scientist, June 2005, pp. 1879-85.
 Special issue on brain-computer interface technology: The third international meeting. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2006.
 E. Bart and S. Ullman. Cross-generalization: Learning novel classes
from a single example by feature replacement. In CVPR, 2005.
 L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. In Workshop on Generative Model Based Vision, 2004.
 R. Fergus, P. Perona, and A. Zisserman. Object class recognition by
unsupervised scale-invariant learning. In CVPR, 2003.
 B. Fisch. Fisch & Spehlmann’s EEG primer: Basic principles of digital
and analog EEG. Elsevier: Amsterdam, 2005.
 Gerson, L. Parra, and P. Sajda. Cortically-coupled computer vision for
rapid image search. IEEE Transactions on Neural Systems and
Rehabilitation Engineering, 14(2):174–179, 2006.
 K. Grauman and T. Darrell. The pyramid match kernel:
Discriminative classification with sets of image features. In ICCV,
2005.

 K. Grauman and T. Darrell. Approximate correspondences in high
dimensions. In NIPS, 2007.
 K. Grill-Spector. The neural basis of object perception. Current
opinion in neurobiology, 13:1–8, 2003.
 Y. Ivanov, T. Serre, and J. Bouvrie. Confidence weighted classifier
combination for multi-modal human identification. Technical Report
AI Memo 2005-035, MIT Computer Science and Artificial Intelligence
Laboratory, 2005.
 J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid. Local features and kernels for classification of texture and object categories: A comprehensive study. IJCV, 2006.
