
NanoElectronics

Mrs.Uma Balaji,
Assistant Professor/ECE,
SCSVMV
February 2021

1 Unit - I Introduction to Nanotechnology


1.1 Introduction
Nanotechnology is the understanding and control of matter at dimensions of
roughly 1 to 100 nanometers, where unique phenomena enable novel applica-
tions.
Encompassing nanoscale science, engineering and technology, nanotechnology
involves imaging, measuring, modeling, and manipulating matter at this length
scale.

We are talking about a "nano tidal wave". Not a single day passes without the
press reporting on major innovations in this area. Large industrialized countries
spend considerable amounts of money, around USD 10 billion per year, on this
field of study, which should have a positive effect on the economy and on
employment. Microelectronics and the steady miniaturization of components have
become commonplace; Moore's Law (a doubling of the number of transistors on the
same surface roughly every 18 months) illustrates this idea. This also makes us
think of the production of chips in laboratories.
With their engineers and technicians in uniform, these laboratories can be
considered the technological cathedrals of our times. Microcomputers,
microprocessors, mobile phones and MP3 players with a USB connection are
available to the general public. For several decades now, this technology has
been largely submicron, and the idea of nanoelectronics was created in the
laboratories. The current technological limits will soon be reached, even if
ongoing innovations push them a little further. Emerging technologies such as
carbon nanotubes will take over.
The nanoworld is the intermediary between the atom and the solid: it ranges from
the large molecule to the small solid object, where the ratio of surface to
volume becomes dominant. Strictly speaking, the nanoworld has existed for a long
time, and it has been up to chemists to study the structures and properties of
molecules. They have learnt (with the help of physicists) to manipulate them and
to build more and more complex structures. Progress in observation tools
(electron microscopes, scanning-tunneling microscopes and atomic force
microscopes) as well as in analysis tools (particularly X-ray, neutron and mass
spectrometry) has been a decisive factor. The production of nanoscopic materials
is constantly improving, as is the case for the catalytic processes and surfaces
used in the nanoworld.
A substantial number of new materials with nano elements such as ceramics,
glass, polymers and fibers are making their way onto the market and are present
in all shapes and forms in everyday life, from washing machines to architecture.
In 1959, the physicist Richard Feynman, winner of the Nobel Prize in Physics in
1965, introduced the brilliant concept of the nanoscale when he declared that
"there is plenty of room at the bottom" during a meeting of the American
Physical Society.

Figure 1: Where can we find the nanoworld?

Biology has been molecular for a long time. The areas of DNA, proteins, and
cellular machinery are all subjects of multidisciplinary research. Investigations


into these fields have been carried out by biologists, chemists, and physicists.
Furthermore, the tools that have been developed have created new areas of
specialization, such as bioinformatics. Observation, image-processing and sim-
ulation all benefit from the advances in information technology and, once more,
conceptual progress goes hand in hand with technical expertise.
The concept of the nanoworld is based on the convergence of a real mix of sci-
entific and technological domains which once were separate.
Even though the laws of quantum mechanics based on wave-particle duality are
not directly visible in our everyday world, except through lasers and
semiconductor components, they do govern the nanoworld. In the future, quantum
effects will be used in a large number of applications and in objects with new
properties, such as quantum cryptography, quantum computers, teleportation,
etc.
The evolution of our know-how, and of technological innovations, is already
having significant consequences. The Internet is the fruit of the union between
information technology and telecommunications, just as biochips are for
electronics and biology. Imaging at the molecular level has revolutionized the
techniques of medical examination. The borders between chemistry, physics,
mechanics and biology are disappearing with the emergence of new materials,
intelligent systems, nanomachines, etc.
This is where the nano tidal wave, which will have a considerable impact on
society, can be found. A comprehensive public debate is required on the real or
possible risks and their consequences. Will humanity be able to master these new
applications, or are we taking on a role we do not fully understand?

1.1.1 Two basic facts

The evolution of knowledge

This is a fabulous adventure in which the frontier between fundamental science
and applied science becomes an area of exchange and innovation. Just as the laws
of electricity made the electric motor possible, the understanding of the
electron made television possible. We are going from the macroscopic to the
microscopic.

Technological expertise
Progress in metallurgy and in chemistry has allowed scientists to process
silicon. Physicists, in particular, have highlighted its semiconductor
properties. Understanding these properties allowed the invention and the
production of the transistor. A long succession of successful discoveries and
innovations has meant that integrated circuits are now present in everyday
objects. If an object can be understood in detail at the microscopic level, we
can use our knowledge to apply it at the macroscopic level.
Furthermore, the concept of "nano" is becoming fashionable: it combines what we
already know with new concepts, and it conveys the idea of modern technology
(e.g. carbon nanotubes used in top-of-the-range tennis rackets, bicycle frames,
or golf clubs).

1.1.2 Two approaches

It seems that the level of knowledge and technical know-how has never been as
advanced. This in turn allows for the manufacture of intelligent objects
resulting from the merging of two approaches:
– top-down, which enables us to control the manufacture of ever smaller, more
complex objects, as illustrated by micro- and nanoelectronics;
– bottom-up, which enables us to build objects by assembling atoms and
molecules, as illustrated by supramolecular chemistry.
The traditional world has come together with the quantum world. Sectors that
were once separate are now coming together. The natural world is of interest to
physicists as well as to computer scientists and mathematicians. The divisions
between the different disciplines are disappearing and paving the way for new
paradigms.
These approaches come together in the nanometric domain.

1.1.3 Two key points

Miniaturization
This process makes it possible to see, work on and manufacture ever smaller
objects. In order to do so, increasingly sophisticated technology is required.

Complexity
The integration of ever smaller objects, coupled with a rise in their number,
leads to the emergence of new functionalities. The appearance of algorithms,
sometimes with unpredictable results, brings objects born of human ingenuity
closer to objects found in the biological world. The complexity of objects in
the biological world is strictly organized and, at the same time, they are
self-organizing. The processes of supramolecular chemistry and of the chemistry
of self-assembling materials function in the same fashion.

1.1.4 Nanoworld
Nanoscience is the study of phenomena and the manipulation of materials at
atomic, molecular and macromolecular scales, where properties differ
significantly from those at larger scales.
• Nanotechnology is the branch of science and engineering which deals with the
creation of materials, devices and systems through the manipulation of
individual atoms and molecules.
• Nanotechnologies are the design, characterisation, production and application
of structures, devices and systems by controlling shape and size at the
nanometre scale.
• The goal of nanotechnology is to control individual atoms and molecules to
create computer chips and other devices that are thousands of times smaller
than current technology allows.

Figure 2: Two technological approaches to the nanoworld: top-down and bottom-up

Figure 3: Two key points

Figure 4: Nanoworld

Figure 5: Nanometer scale
The prefix "nano" is derived from the Greek word for "dwarf".
• One nanometer is equal to one billionth of a meter (10^-9 m).
• Nanotechnology is the understanding and control of matter at dimensions of
roughly 1 to 100 nanometers, where unique phenomena enable novel applications.

At the nanoscale, the physical, chemical, and biological properties of mate-


rials differ in fundamental and valuable ways from the properties of individual
atoms and molecules or bulk matter.
Nanoscale science and technology, i.e., nanotechnology, is a young and
burgeoning field that encompasses nearly every discipline of science and
engineering.
• Nanotechnology is truly a multidisciplinary, interdisciplinary and
multifunctional field. Today, chemists, physicists, medical doctors, engineers,
biologists and computer scientists are working and collaborating on the
development of nanotechnology.

Figure 6: Nanometer scale

Figure 7: Nanometer scale

The first concept was presented in 1959 by the famous professor of physics
Dr. Richard P. Feynman.
• The term "nano-technology" was coined by Norio Taniguchi in 1974.
• In 1959, Feynman challenged physicists "to make the electron microscope 100
times better". This was achieved about 22 years later.
• Not only seeing atoms but also manipulating them became a reality in 1981,
when Gerd Binnig and Heinrich Rohrer of the IBM Zurich Research Laboratory
invented the Scanning Tunneling Microscope (STM), for which they were awarded
the Nobel Prize in 1986.
• In 1985, Binnig, along with Gerber and Quate, invented the Atomic Force
Microscope (AFM), which did not require the specimen to be conducting.

At very small sizes, the physical properties (magnetic, electric and optical)
of materials can change dramatically.

1.1.5 Benefits of Nanotechnology
“The power of nanotechnology is rooted in its potential to transform and rev-
olutionize multiple technology and industry sectors, including aerospace, agri-
culture, biotechnology, homeland security and national defense, energy, envi-
ronmental improvement, information technology, medicine, and transportation.
Discovery in some of these areas has advanced to the point where it is now
possible to identify applications that will impact the world we live in.”

1.1.6 What is a nanomaterial?

• A nanomaterial is defined as any material that has unique or novel properties
due to nanoscale (nanometre-scale) structuring.
• Nanomaterials are formed by the incorporation or structuring of nanoparticles.
• They are subdivided into nanocrystals, nanopowders, and nanotubes (a nanotube
consists of carbon atoms, related to the C60 fullerene structure, arranged in a
long thin cylindrical structure).
• Nanomaterial properties can be 'tuned' by varying the size of the particle
(e.g. changing the fluorescence colour so a particle can be identified).

Examples of nanomaterials
• Amorphous silica fume (nano-silica) in ultra-high-performance concrete – this
silica is normally thought to have the same human risk factors as non-nano,
non-toxic silica dust.
• Nano platinum or palladium in vehicle catalytic converters – the higher
surface-area-to-volume ratio of the particles gives increased reactivity and
therefore increased efficiency.
• Crystalline silica fume is used as an additive in paints or coatings, giving
e.g. self-cleaning characteristics – it has a needle-like structure and sharp
edges, so it is very toxic and is known to cause silicosis upon occupational
exposure.

1.1.7 Classification
Classification is based on the number of dimensions that are not confined to
the nanoscale range (<100 nm):
1) Zero-dimensional (0-D)
2) One-dimensional (1-D)
3) Two-dimensional (2-D)
4) Three-dimensional (3-D)

Zero-dimensional nanomaterials

Materials wherein all the dimensions are measured within the nanoscale.
• The most common representation of zero-dimensional nanomaterials are
nanodots.

One-dimensional nanomaterials

• One dimension is outside the nanoscale and the other two dimensions are in
the nanoscale.
• This leads to needle-shaped nanomaterials.
• 1-D materials include nanotubes, nanorods and nanowires.
• 1-D nanomaterials can be:
• amorphous or crystalline
• single-crystalline or polycrystalline
• chemically pure or impure
• metallic, ceramic or polymeric.

Two-dimensional nanomaterials

One dimension lies in the nanometer range and the other two dimensions are not
confined to the nanoscale.
• 2-D nanomaterials exhibit plate-like shapes.
• Two-dimensional nanomaterials include nanofilms, nanolayers and nanocoatings.

Three-dimensional nanomaterials

Three-dimensional nanomaterials are not confined to the nanoscale in any
dimension; they are characterized by having all three dimensions above 100 nm.
• Such materials possess a nanocrystalline structure or involve the presence of
features at the nanoscale.

1.2 Quantum Mechanics


Quantum mechanics can be thought of roughly as the study of physics on very
small length scales, although there are also certain macroscopic systems it di-
rectly applies to. The descriptor “quantum” arises because in contrast with
classical mechanics, certain quantities take on only discrete values. However,
some quantities still take on continuous values.
In quantum mechanics, particles have wavelike properties, and a particular wave
equation, the Schrodinger equation, governs how these waves behave. The
Schrodinger equation differs in a few ways from other familiar wave equations,
but these differences won't keep us from applying all of our usual strategies
for solving a wave equation and dealing with the resulting solutions.

In some respects, quantum mechanics is just another example of a system
governed by a wave equation. In fact, we will find below that some quantum
mechanical systems have exact analogies to classical wave systems, so the
results can be carried over with no modifications whatsoever. However, although
it is fairly straightforward to deal with the actual waves, there are many
things about quantum mechanics that are a combination of subtle, perplexing,
and bizarre. To name a few: the measurement problem, hidden variables along
with Bell's theorem, and wave-particle duality.

Even though there are many things that are highly confusing about quantum
mechanics, the nice thing is that it’s relatively easy to apply quantum mechanics
to a physical system to figure out how it behaves. There is fortunately no need
to understand all of the subtleties about quantum mechanics in order to use it.
Of course, in most cases this isn’t the best strategy to take; it’s usually not a
good idea to blindly forge ahead with something if you don’t understand what
you’re actually working with. But this lack of understanding can be forgiven
in the case of quantum mechanics, because no one really understands it. (Well,
maybe a couple people do, but they’re few and far between.) If the world waited
to use quantum mechanics until it understood it, then we’d be stuck back in
the 1920’s. The bottom line is that quantum mechanics can be used to make
predictions that are consistent with experiment. It hasn’t failed us yet. So it
would be foolish not to use it.

Before discussing the Schrodinger wave equation, let's take a brief (and by no
means comprehensive) look at the historical timeline of how quantum mechanics
came about. The actual history is of course never as clean as an outline like
this suggests, but we can at least get a general idea of how things proceeded.
1900 (Planck): Max Planck proposed that light with frequency ν is emitted in
quantized lumps of energy that come in integral multiples of the quantity
E = hν = ħω
The frequency of light is generally very large (on the order of 10^15 s^-1 for
the visible spectrum), but the smallness of h wins out, so the hν unit of energy
is very small (at least on an everyday energy scale). The energy is therefore
essentially continuous for most purposes. However, a puzzle in late
19th-century physics was the blackbody radiation problem. In a nutshell, the
issue was that the classical (continuous) theory of light predicted that
certain objects would radiate an infinite amount of energy, which of course
can't be correct. Planck's hypothesis of quantized radiation not only got rid
of the problem of the infinity, but also correctly predicted the shape of the
power curve as a function of temperature. (Recall that E = pc for light.)
Planck's hypothesis simply adds the information of how many lumps of energy a
wave contains, although strictly speaking, Planck initially thought that the
quantization was only a function of the emission process and not inherent to
the light itself.
1905 (Einstein): Albert Einstein stated that the quantization was in fact
inherent to the light, and that the lumps can be interpreted as particles,
which we now call "photons." This proposal was a result of his work on the
photoelectric effect, which deals with the absorption of light and the emission
of electrons from a material. We know that E = pc for a light wave. (This
relation also follows from Einstein's 1905 work on relativity, where he showed
that E = pc for any massless particle, an example of which is a photon.) And we
also know that ω = ck for a light wave. So Planck's E = ħω relation becomes
pc = ħ(ck), which gives p = ħk.
This result relates the momentum of a photon to the wavenumber of the wave it
is associated with.
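
As a quick numerical illustration (an addition to the notes, with an assumed
wavelength of 500 nm, typical of visible light), E = ħω and p = ħk evaluate to:

import math

h = 6.626e-34      # Planck constant, J*s
hbar = h / (2 * math.pi)
c = 3.0e8          # speed of light, m/s
eV = 1.602e-19     # J per electron-volt

lam = 500e-9                 # assumed wavelength of visible light, m
k = 2 * math.pi / lam        # wavenumber, rad/m
omega = c * k                # angular frequency (omega = ck for light), rad/s

E = hbar * omega             # photon energy, J
p = hbar * k                 # photon momentum, kg*m/s

print(f"E = {E / eV:.2f} eV")    # about 2.5 eV
print(f"p = {p:.2e} kg m/s")     # about 1.3e-27 kg m/s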
1913 (Bohr): Niels Bohr stated that electrons in atoms have wavelike proper-
ties. This correctly explained a few things about hydrogen, in particular the
quantized energy levels that were known.
1924 (de Broglie): Louis de Broglie proposed that all particles are associated
with waves, where the frequency and wavenumber of the wave are given by the
same relations we found above for photons, namely E = ħω and p = ħk. The larger
E and p are, the larger ω and k are. Even for the small E and p typical of a
photon, ω and k are very large because ħ is so small. So any everyday-sized
particle, with (in comparison) large energy and momentum values, will have
extremely large ω and k values. This (among other reasons) makes it virtually
impossible to observe the wave nature of macroscopic amounts of matter.
This proposal (that E = ħω and p = ħk also hold for massive particles) was a
big step, because many things that are true for photons are not true for
massive (and nonrelativistic) particles. For example, E = pc (and hence ω = ck)
holds only for massless particles (we'll see below how ω and k are related for
massive particles). But the proposal was a reasonable one to try. And it turned
out to be correct, in view of the fact that the resulting predictions agree
with experiments.
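
To put numbers on this (an illustrative addition; the particle masses and
speeds below are assumed examples), the de Broglie wavelength λ = h/p = 2π/k
can be compared for an electron and for an everyday object:

h = 6.626e-34  # Planck constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """de Broglie wavelength lambda = h / (m v) for a nonrelativistic particle."""
    return h / (mass_kg * speed_m_s)

# Electron at ~1e6 m/s (a few eV of kinetic energy): wavelength ~ 7e-10 m,
# comparable to atomic spacings, so diffraction effects are observable.
print(de_broglie_wavelength(9.11e-31, 1.0e6))

# A 0.15 kg ball at 30 m/s: wavelength ~ 1.5e-34 m, far too small to detect.
print(de_broglie_wavelength(0.15, 30.0))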
The fact that any particle has a wave associated with it leads to the so-called
wave-particle duality. Are things particles, or waves, or both? Well, it depends
what you’re doing with them. Sometimes things behave like waves, sometimes
they behave like particles. A vaguely true statement is that things behave like
waves until a measurement takes place, at which point they behave like parti-
cles. However, approximately one million things are left unaddressed in that
sentence. The wave-particle duality is one of the things that few people, if any,
understand about quantum mechanics.
1925 (Heisenberg): Werner Heisenberg formulated a version of quantum me-
chanics that made use of matrix mechanics. We won’t deal with this matrix
formulation (it’s rather difficult), but instead with the following wave formula-
tion due to Schrodinger (this is a waves book, after all).
1926 (Schrodinger): Erwin Schrodinger formulated a version of quantum me-
chanics that was based on waves. He wrote down a wave equation (the so-called
Schrodinger equation) that governs how the waves evolve in space and time.

We’ll deal with this equation in depth below. Even though the equation is
correct, the correct interpretation of what the wave actually meant was still
missing. Initially Schrodinger thought (incorrectly) that the wave represented
the charge density.
1926 (Born): Max Born correctly interpreted Schrodinger’s wave as a proba-
bility amplitude. By “amplitude” we mean that the wave must be squared to
obtain the desired probability. More precisely, since the wave (as we’ll see) is in
general complex, we need to square its absolute value. This yields the probabil-
ity of finding a particle at a given location (assuming that the wave is written
as a function of x).

This probability isn’t a consequence of ignorance, as is the case with virtu-


SC

ally every other example of probability you’re familiar with. For example, in
a coin toss, if you know everything about the initial motion of the coin (veloc-
ity, angular velocity), along with all external influences (air currents, nature of
the floor it lands on, etc.), then you can predict which side will land facing up.
Quantum mechanical probabilities aren’t like this. They aren’t a consequence of
missing information. The probabilities are truly random, and there is no further
information (so-called “hidden variables”) that will make things unrandom. The
topic of hidden variables includes various theorems (such as Bell’s theorem) and
experimental results that you will learn about in a quantum mechanics course.
1926 (Dirac): Paul Dirac showed that Heisenberg’s and Schrodinger’s versions
of quantum mechanics were equivalent, in that they could both be derived from
a more general version of quantum mechanics.

1.3 The Schrodinger equation
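
For reference in the discussion that follows (standard textbook form), the
one-dimensional time-dependent Schrodinger equation for a particle of mass m
moving in a potential V(x) is

iħ ∂Ψ(x,t)/∂t = −(ħ²/2m) ∂²Ψ(x,t)/∂x² + V(x) Ψ(x,t),

and, writing Ψ(x,t) = ψ(x) e^(−iEt/ħ) for a state of definite energy E, the
time-independent form is

−(ħ²/2m) d²ψ(x)/dx² + V(x) ψ(x) = E ψ(x).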


We cannot prove the Schrodinger equation; all we can do is check that the
theory is consistent with the real world. The more experiments we do, the more
comfortable we are that the theory is a good one. But we can never be
absolutely sure that we have the correct theory. In fact, odds are that it's
simply the limiting case of a more correct theory.
The Schrodinger equation actually isn’t valid, so there’s certainly no way that
we proved it. Consistent with the above point concerning limiting cases, the
quantum theory based on Schrodinger’s equation is just a limiting theory of
a more correct one, which happens to be quantum field theory (which unifies
quantum mechanics with special relativity). This in turn must be a limiting
theory of yet another more correct one, because it doesn’t incorporate gravity.
Eventually there will be one theory that covers everything (although this point
can be debated), but we’re definitely not there yet.
Due to the "i" that appears in Eq. (6), ψ(x) is complex. And in contrast with
waves in classical mechanics, the entire complex function now matters in quan-
tum mechanics. We won’t be taking the real part in the end. Up to this point
in the book, the use of complex functions was simply a matter of convenience,
because it is easier to work with exponentials than trig functions. Only the
real part mattered (or imaginary part – take your pick, but not both). But in
quantum mechanics the whole complex wavefunction is relevant. However, the
theory is structured in such a way that anything you might want to measure
(position, momentum, energy, etc.) will always turn out to be a real quantity.
This is a necessary feature of any valid theory, of course, because you’re not
going to go out and measure a distance of 2 + 5i meters, or pay an electrical
bill of 17 + 6i kilowatt hours.

1.4 Particles in a Box
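
As a minimal sketch (standard infinite-square-well results; the electron mass
and the 1 nm box width below are assumed example values), the allowed energies
of a particle of mass m confined to a 1D box of width L are
E_n = n² h² / (8 m L²), n = 1, 2, 3, ..., and the snippet below evaluates the
first few levels:

import math

h = 6.626e-34       # Planck constant, J*s
m_e = 9.11e-31      # electron mass, kg
eV = 1.602e-19      # J per electron-volt

def infinite_well_energy(n, L, m=m_e):
    """Energy E_n = n^2 h^2 / (8 m L^2) of level n in a 1D infinite square well of width L."""
    return n**2 * h**2 / (8 * m * L**2)

L = 1e-9  # assumed box width of 1 nm (a typical nanostructure dimension)
for n in (1, 2, 3):
    print(n, infinite_well_energy(n, L) / eV, "eV")
# E_1 is about 0.38 eV and E_n scales as n^2 (E_2 ~ 1.5 eV, E_3 ~ 3.4 eV), so
# nanoscale confinement produces level spacings comparable to thermal and
# optical energies -- the origin of size-dependent properties.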

1.5 Degeneracy
A term referring to the fact that two or more stationary states of the same
quantum-mechanical system may have the same energy even though their wave
functions are not the same. In this case the common energy level of the station-
ary states is degenerate. The statistical weight of the level is proportional to the
order of degeneracy, that is, to the number of states with the same energy; this
number is predicted from Schrödinger’s equation. The energy levels of isolated
systems (that is, systems with no external fields present) comprising an odd
number of fermions (for example, electrons, protons, and neutrons) always are
at least twofold degenerate.
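
As an illustrative sketch (an addition, not from the original notes), consider
a particle in a 3D cubic box, for which E is proportional to nx² + ny² + nz²;
states whose quantum numbers give the same sum of squares share an energy and
are therefore degenerate. The snippet below counts them:

from collections import defaultdict
from itertools import product

# Energy in a cubic box is proportional to nx^2 + ny^2 + nz^2, so states with
# the same sum of squares are degenerate.
levels = defaultdict(list)
for nx, ny, nz in product(range(1, 5), repeat=3):
    levels[nx**2 + ny**2 + nz**2].append((nx, ny, nz))

for e in sorted(levels)[:6]:
    print(e, len(levels[e]), levels[e])
# e.g. e = 6 (in units of pi^2*hbar^2/(2mL^2)) has degeneracy 3:
#      (1,1,2), (1,2,1), (2,1,1)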

1.6 Band theory of solids
There are usually two approaches to understanding the origin of the band theory
of solids: the "nearly free electron model" and the "tight-binding model".
1) Nearly free electron model:
In the nearly free electron approximation, interactions between electrons are
completely ignored. This model allows the use of Bloch's theorem, which states
that electrons in a periodic potential have wavefunctions and energies which
are periodic in wavevector up to a constant phase shift between neighboring
reciprocal lattice vectors.
2) Tight-binding model:
The opposite extreme to the nearly free electron model assumes that the
electrons in the crystal behave much like an assembly of constituent atoms.
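
As a minimal sketch of the tight-binding picture (an illustrative addition with
assumed parameter values), a 1D chain with one orbital per atom, on-site energy
E0, nearest-neighbour hopping t and lattice constant a gives a single band
E(k) = E0 − 2t·cos(ka) of width 4t:

import math

def tight_binding_band(k, E0=0.0, t=1.0, a=1.0):
    """1D nearest-neighbour tight-binding dispersion E(k) = E0 - 2t*cos(ka).

    E0, t and a are assumed example values (arbitrary units); the band runs
    from E0 - 2t to E0 + 2t, i.e. its width is 4t.
    """
    return E0 - 2.0 * t * math.cos(k * a)

# Sample the band across the first Brillouin zone, -pi/a <= k <= pi/a.
a = 1.0
ks = [(-math.pi + n * 2 * math.pi / 10) / a for n in range(11)]
for k in ks:
    print(f"k = {k:+.3f}, E = {tight_binding_band(k, a=a):+.3f}")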


1.7 Kronig-Penney Model


Various calculations of the electronic band structure of a one-dimensional
crystal can be performed with the Kronig-Penney (KP) model. This model has an
analytical solution and therefore allows for simple calculations. More
realistic models always require extensive numerical calculations, often on the
fastest computers available. The electronic band structure is directly related
to many macroscopic properties of the material and is therefore of great
interest. Nowadays, hypothetical (nonexistent) materials are often investigated
by band structure calculations – and if they show attractive properties,
researchers try to prepare these materials experimentally.
researchers try to prepare these materials experimentally.

The KP model is a strongly simplified one-dimensional quantum mechanical model
of a crystal. Despite the simplifications, the electronic band structure
obtained from this model shares many features with band structures that result
from more sophisticated models.

Details of the Kronig-Penney model
The KP model is a single-electron problem. The electron moves in a
one-dimensional crystal of length L. The periodic potential that the electrons
experience in the crystal lattice is approximated by a periodic sequence of
rectangular potential barriers.
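
In the limit where the barriers become thin and high (a Dirac-comb potential),
the KP model yields the well-known dispersion condition
cos(ka) = cos(αa) + P·sin(αa)/(αa), where α = sqrt(2mE)/ħ, a is the lattice
period and P measures the barrier strength; energies for which the right-hand
side has magnitude at most 1 form the allowed bands. The short sketch below is
an illustrative addition (P = 3π/2 is an assumed, textbook-style value) that
locates the band edges numerically:

import math

P = 1.5 * math.pi  # assumed dimensionless barrier-strength parameter

def kp_rhs(alpha_a, P=P):
    """Right-hand side of the Kronig-Penney condition in the delta-barrier limit:
    cos(ka) = cos(alpha*a) + P * sin(alpha*a) / (alpha*a).
    A real k exists only where the magnitude of this expression is <= 1."""
    if alpha_a == 0.0:
        return 1.0 + P
    return math.cos(alpha_a) + P * math.sin(alpha_a) / alpha_a

# Scan alpha*a (proportional to sqrt(E)) and print where bands start and stop.
in_band = False  # at alpha*a -> 0 the RHS equals 1 + P > 1, so we start in a gap
N = 2000
for i in range(1, N + 1):
    x = i * 4 * math.pi / N              # alpha*a from ~0 up to 4*pi
    allowed = abs(kp_rhs(x)) <= 1.0
    if allowed != in_band:
        state = "band edge (allowed region begins)" if allowed else "band edge (gap begins)"
        print(f"alpha*a = {x:6.3f}: {state}")
        in_band = allowed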


1.8 Brillouin Zones


The Brillouin zone is defined as the set of points in k-space that can be
reached from the origin without crossing any Bragg plane. Equivalently, it can
be defined as the Wigner–Seitz cell of the reciprocal lattice. In the case of
single-walled carbon nanotubes, the first Brillouin zone is given by an
irreducible set of equidistant lines whose length and spacing depend on the
values of two integers n and m.
The primitive cell of the reciprocal lattice may be taken to be the
parallelepiped spanned by b1, b2, b3. The parallelepiped contains one
reciprocal lattice point: each corner is shared with 8 parallelepipeds, so
there is 8 × 1/8 = 1 lattice point per parallelepiped.
In mathematics and solid state physics, the first Brillouin zone is a uniquely
defined primitive cell in reciprocal space. In the same way that the Bravais
lattice is divided up into Wigner–Seitz cells in the real lattice, the
reciprocal lattice is broken up into Brillouin zones. The boundaries of this
cell are given by planes related to points on the reciprocal lattice. The
importance of the Brillouin zone stems from the description of waves in a
periodic medium given by Bloch's theorem, in which it is found that the
solutions can be completely characterized by their behavior in a single
Brillouin zone.
The first Brillouin zone is the locus of points in reciprocal space that are
closer to the origin of the reciprocal lattice than they are to any other
reciprocal lattice points (see the derivation of the Wigner–Seitz cell).
Another definition is as the set of points in k-space that can be reached from
the origin without crossing any Bragg plane. Equivalently, this is the Voronoi
cell around the origin of the reciprocal lattice.

k-vectors exceeding the first Brillouin zone (red) do not carry any more
information than their counterparts (black) in the first Brillouin zone. k at
the Brillouin zone edge is the spatial Nyquist frequency of waves in the
lattice, because it corresponds to a half-wavelength equal to the inter-atomic
lattice spacing a.[1]

The Brillouin zone (purple) and the irreducible Brillouin zone (red) for a
hexagonal lattice.
There are also second, third, etc., Brillouin zones, corresponding to a
sequence of disjoint regions (all with the same volume) at increasing distances
from the origin, but these are used less frequently. As a result, the first
Brillouin zone is often called simply the Brillouin zone. In general, the n-th
Brillouin zone consists of the set of points that can be reached from the
origin by crossing exactly n − 1 distinct Bragg planes. A related concept is
that of the irreducible Brillouin zone, which is the first Brillouin zone
reduced by all of the symmetries in the point group of the lattice (point group
of the crystal).
The concept of a Brillouin zone was developed by Léon Brillouin (1889–1969), a
French physicist.
It is often useful to take the primitive cell as the smallest volume bounded by
planes normal to the G vectors of the nearest neighbours. This is just another
way of dividing up reciprocal space into identical cells which fill it
uniformly. Each cell contains one lattice site at the centre of the cell; this
is the first Brillouin zone. The same construction in the direct (real) lattice
gives the Wigner–Seitz cell.
The first Brillouin zone is the set of points that can be reached from the origin,
without crossing any Bragg plane. The second Brillouin zone is the set of points
that can be reached from the first zone by crossing only one Bragg plane.
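
As an illustrative sketch of this definition (an addition, not part of the
original notes), the snippet below determines which Brillouin zone a 1D
wavevector k belongs to by counting the Bragg "planes" (the points m·π/a)
crossed on the way out from the origin:

import math

def brillouin_zone_index(k, a=1.0):
    """Return the Brillouin-zone index of a 1D wavevector k for lattice constant a.

    In 1D the Bragg 'planes' are the points m*pi/a (m = +-1, +-2, ...), i.e. the
    perpendicular bisectors of the reciprocal lattice vectors m*2*pi/a. The n-th
    zone is reached from the origin by crossing exactly n - 1 of them.
    """
    crossings = int(abs(k) / (math.pi / a))   # planes strictly between 0 and k
    return crossings + 1

a = 1.0
for k in (0.5 * math.pi / a, 1.5 * math.pi / a, 2.5 * math.pi / a):
    print(f"k = {k:.3f}: zone {brillouin_zone_index(k, a)}")
# 0.5*pi/a -> zone 1, 1.5*pi/a -> zone 2, 2.5*pi/a -> zone 3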

The first and second Brillouin zones for a 1D reciprocal lattice: the sites are
spaced by 2π/a; the first zone is the inner (red) region and the second zone is
the disconnected, outer (blue) region.

The first and second Brillouin zones for a 2D reciprocal (square) lattice:
notice how each is generated, and that the second zone is disconnected.

2 Unit - II CMOS Scaling and its Limits


Gordon Moore famously predicted in his 1965 paper that the number of components
per chip would continue to increase by a factor of two every year. The goals of
following Moore's law are to decrease the cost per component and reduce the
power consumed per component. In 1975, Moore updated his earlier prediction by
forecasting that components per chip would increase by a factor of two every
two years, and that this would come from the combination of scaling component
size and increasing chip area. Back in 1965, the industry was producing chips
using a minimum feature size of approximately 50 µm, totaling about 50
components per chip. Today's leading chips use a minimum feature size of
approximately 10 nm and incorporate several billion transistors.
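
As a rough check of these numbers (an illustrative addition; the 1965 baseline
of about 50 components is the figure quoted above), doubling the component
count every two years gives:

start_year, start_count = 1965, 50        # ~50 components per chip in 1965 (from the text)
doubling_period_years = 2                 # Moore's 1975 forecast: 2x every two years

def projected_count(year):
    """Components per chip if the count doubles every two years from the 1965 baseline."""
    doublings = (year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

for year in (1975, 1995, 2020):
    print(year, f"{projected_count(year):.3g}")
# By 2020 this simple extrapolation gives on the order of 10^10 components,
# consistent with the "several billion transistors" quoted above.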


Robert Dennard and colleagues described in 1974 a scaling methodology for
metal-oxide-semiconductor field-effect transistors (MOSFETs) that would deliver
consistent improvements in transistor area, performance, and power reduction.
The methodology called for the scaling of transistor gate length, gate width,
gate oxide thickness, and supply voltage all by the same scale factor, and
increasing channel doping by the inverse of the same scale factor. The result
would be transistors with smaller area, higher drive current (higher
performance), and lower parasitic capacitance (lower active power). This method
for scaling MOSFET transistors is generally referred to as "classic" or
"traditional" scaling and was very successfully used by the industry up until
the 130-nm generation in the early 2000s. For the past 20 years, we have been
developing new generations of process technologies on a two-year cadence, and
each generation scaled the minimum feature size by approximately 0.7 times to
deliver an area scaling improvement of about 0.5 times. Thus, we have been
doubling transistor density every two years. But recent technology generations
(such as 14 nm and 10 nm) have taken longer to develop than the normal two-year
cadence, owing to increased process complexity and an increased number of
photomasking steps. Nonetheless, Intel's 14-nm and 10-nm technologies have
provided better-than-normal transistor density improvements that keep us on
pace with increasing transistor density at a rate of doubling about every two
years.
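
The first-order consequences of this classic scaling can be summarized in a
short sketch (an illustrative addition; the 0.7x-per-generation factor is the
one quoted above, and the returned ratios are textbook first-order estimates
rather than data from any particular process):

def dennard_scale(kappa=0.7):
    """First-order consequences of classic (Dennard) scaling by a factor kappa < 1.

    Gate length, width, oxide thickness and supply voltage all scale by kappa;
    channel doping scales by 1/kappa.
    """
    return {
        "area per transistor":  kappa ** 2,         # L * W
        "gate capacitance":     kappa,              # C ~ area / t_ox
        "gate delay":           kappa,              # ~ C * V / I
        "power per transistor": kappa ** 2,         # ~ C * V^2 * f, with f ~ 1/delay
        "power density":        1.0,                # power / area stays constant
        "transistor density":   1.0 / kappa ** 2,   # ~2x for kappa ~ 0.7
    }

for name, ratio in dennard_scale(0.7).items():
    print(f"{name:22s} x {ratio:.2f}")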

Transistor Innovations
As mentioned earlier, traditional MOSFET scaling worked well up until the
130-nm generation in the early 2000s. By that generation, the SiO2 gate oxide
thickness had scaled to about 1.2 nm, and electron tunneling through such a thin
dielectric was becoming a significant portion of total transistor leakage current.
We had reached the limit for scaling transistors using traditional methods, and
we needed to start introducing innovations in transistor materials and structure
to continue scaling.

One of the first significant innovations was the introduction of strained
silicon transistors on Intel's 90-nm technology in 2003. This innovation used
tensile strain in n-channel MOS (NMOS) transistor channels to increase electron
mobility and compressive strain in p-channel MOS (PMOS) channels to increase
hole mobility. Tensile strain was induced by adding a high-stress film above
the NMOS transistor. Compressive strain was induced by replacing the PMOS
source-drain regions with epitaxial SiGe depositions. The resultant increases
in electron and hole mobility provided increased transistor drive currents
without having to further scale the SiO2 gate oxide thickness. This strained
silicon technique has been adopted by all major semiconductor companies and
continues to be used on the latest 10-nm technologies.
The need to improve the transistor gate dielectric to continue scaling could
not be avoided, and Intel’s 45-nm technology in 2007 first introduced high-
k metal gate transistors. The traditional SiO2 gate oxide was replaced by a
hafnium-based high-k dielectric. The high-k dielectric both reduced gate oxide
leakage current and improved transistor drive current. The traditional doped-
polysilicon gate electrode was replaced by metal electrodes with separate materi-
als for NMOS and PMOS to provide optimal transistor threshold voltages. The
combination of high-k dielectric and metal gate electrodes was a revolutionary
process change that provided significant improvements in transistor performance
while also reducing transistor leakage current. High-k metal gate transistors are
now universally used on advanced logic technologies.


The next major transistor innovation was the introduction of FinFET (tri-
gate) transistors on Intel's 22-nm technology in 2011. Traditional planar MOS-
FETs had been able to scale transistor gate length down to about 32 nm to de-
liver good performance and density while also maintaining low off-state leakage.
But scaling the gate length below 32 nm was problematic without sacrificing
either performance or leakage. A solution was to convert from a planar transis-
tor structure to a 3D FinFET structure in which the gate electrode had better
electrostatic control of the transistor channel formed in a tall narrow silicon
fin. This improved electrostatic control provided scaled transistors with steeper
sub-threshold slope. Steeper sub-threshold slope either provided transistors
with lower off-state leakage or allowed threshold voltage to be reduced, which
enabled improved performance at low operating voltage. Operating integrated
circuits at a lower voltage is highly desired in order to reduce active power con-
sumption. All advanced logic technologies now use FinFET transistors for their
good density and superior low-voltage performance compared to planar tran-
sistors. When traditional MOSFET scaling ran out of steam in the early 2000s,
innovations such as strained silicon, high-k metal gate, and FinFETs were
needed, and we must now continually invent new transistor materials and
structures to continue scaling.
2.1 FinFETs
A FinFET is a transistor. Being a transistor, it is an amplifier and a switch.
Its applications include home computers, laptops, tablets, smartphones,
wearables, high-end networks, automotive, and more.

FinFET stands for a fin-shaped field-effect transistor. Fin because it has a
fin-shaped body – the silicon fin that forms the transistor's main body
distinguishes it. Field-effect because an electric field controls the
conductivity of the material.

A FinFET is a non-planar device, i.e., not constrained to a single plane. It is
also called 3D for having a third dimension.

To avoid confusion, it is essential to understand that different literature
uses different labels when referring to FinFET devices.

Why Use FinFET Devices in Place of MOSFETs?
Choosing FinFET devices instead of traditional MOSFETs happens for a variety of
reasons. Increasing computational power implies increasing computational
density. More transistors are required to achieve this, which leads to larger
chips. However, for practical reasons, it is crucial to keep the area about the
same.

As previously stated, one way of achieving more computational power is by
shrinking the transistor's size. But as the transistor's dimensions decrease,
the proximity between the drain and the source lessens the gate electrode's
ability to control the flow of current in the channel region. Because of this,
planar MOSFETs display objectionable short-channel effects.

Shrinking the gate length (Lg) below 90 nm produces a significant leakage
current, and below 28 nm the leakage is excessive, rendering the transistor
useless. So, as the gate length is scaled down, suppressing the off-state
leakage is vital.

Another way to increase computational power is by changing the materials used
for manufacturing the chips, but it may not be suitable from an economic
standpoint.

In short, FinFET devices display superior short-channel behavior, have
considerably lower switching times, and higher current density than
conventional MOSFET technology.
Computing FinFET Transistor Width (W)

The channel (fin) of the FinFET is vertical. This device requires keeping
specific dimensions in mind. Evoking Max Planck's "quanta," the FinFET exhibits
a property known as width quantization: its width is a multiple of its height.
Random widths are not possible.

The fin thickness is a crucial parameter because it controls the short-channel
behavior and the device's subthreshold swing. The subthreshold swing measures
the efficiency of a transistor: it is the variation in gate voltage required to
increase the drain current by one order of magnitude.

1. Lg: gate length
2. T: fin thickness
3. Hfin: fin height
4. W: transistor width (single fin)
5. Weff: effective transistor width (multiple fins)
6. For a double-gate fin: W = 2 Hfin
7. For a tri-gate fin: W = 2 Hfin + T
8. Multiple fins increase the transistor width.
9. Weff = n W, where n = number of fins (see the sketch below)
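
A small numerical sketch of these width-quantization rules (an illustrative
addition; the fin height and thickness values are assumed examples, not figures
for any real process):

def finfet_width(h_fin, t_fin, n_fins=1, tri_gate=True):
    """Effective FinFET width Weff = n * W, where W = 2*Hfin + T for a tri-gate
    fin and W = 2*Hfin for a double-gate fin (top of the fin not gated)."""
    w_single = 2 * h_fin + t_fin if tri_gate else 2 * h_fin
    return n_fins * w_single

# Assumed example dimensions in nm: fin height 40 nm, fin thickness 8 nm.
print(finfet_width(40, 8, n_fins=1))   # 88 nm for one tri-gate fin
print(finfet_width(40, 8, n_fins=3))   # 264 nm for three fins in parallel
# Width is quantized: only integer numbers of fins are possible, so drive
# strength is adjusted in steps of W rather than continuously.
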
FinFET (fin field-effect transistor) is a type of non-planar transistor, or ”3D”
transistor (not to be confused with 3D microchips).[16] The FinFET is a vari-
ation on traditional MOSFETs distinguished by the presence of a thin silicon
”fin” inversion channel on top of the substrate, allowing the gate to make two
points of contact: the left and right sides of the fin. The thickness of the fin
(measured in the direction from source to drain) determines the effective channel
length of the device. The wrap-around gate structure provides a better electri-
cal control over the channel and thus helps in reducing the leakage current and
overcoming other short-channel effects.

The first FinFET-type transistor was called a "Depleted Lean-channel Tran-
sistor” or ”DELTA” transistor, which was first fabricated by Hitachi Central
Research Laboratory’s Digh Hisamoto, Toru Kaga, Yoshifumi Kawamoto and
Eiji Takeda in 1989.[17][10][18] In the late 1990s, Digh Hisamoto began collab-
orating with an international team of researchers on further developing DELTA
technology, including TSMC’s Chenming Hu and a UC Berkeley research team
including Tsu-Jae King Liu, Jeffrey Bokor, Xuejue Huang, Leland Chang, Nick
Lindert, S. Ahmed, Cyrus Tabery, Yang-Kyu Choi, Pushkar Ranade, Sriram
Balasubramanian, A. Agarwal and M. Ameen. In 1998, the team developed the
first N-channel FinFETs and successfully fabricated devices down to a 17 nm
process. The following year, they developed the first P-channel FinFETs.[19]
They coined the term ”FinFET” (fin field-effect transistor) in a December 2000
paper.

In current usage the term FinFET has a less precise definition. Among
microprocessor manufacturers, AMD, IBM, and Freescale describe their
double-gate development efforts as FinFET[21] development, whereas Intel avoids
using the term when describing their closely related tri-gate
architecture.[22] In the
technical literature, FinFET is used somewhat generically to describe any fin-
based, multigate transistor architecture regardless of number of gates. It is
common for a single FinFET transistor to contain several fins, arranged side by
side and all covered by the same gate, that act electrically as one, to increase
drive strength and performance.[23] The gate may also cover the entirety of the
fin(s).
A 25 nm transistor operating on just 0.7 volt was demonstrated in December
2002 by TSMC (Taiwan Semiconductor Manufacturing Company). The ”Omega
FinFET” design is named after the similarity between the Greek letter omega
(Ω) and the shape in which the gate wraps around the source/drain structure. It
has a gate delay of just 0.39 picosecond (ps) for the N-type transistor and 0.88
ps for the P-type.
In 2004, Samsung Electronics demonstrated a "Bulk FinFET" design, which made it
possible to mass-produce FinFET devices. They demonstrated dynamic
random-access memory (DRAM) manufactured with a 90 nm Bulk FinFET process.[19]
In 2006, a team of Korean researchers from the Korea Advanced Institute of
Science and Technology (KAIST) and the National Nano Fab Center developed a
3 nm transistor, the world's smallest nanoelectronic device, based on FinFET
technology. In 2011, Rice University researchers Masoud Rostami and Kartik
Mohanram demonstrated that FinFETs can have two electrically independent gates,
which gives circuit designers more flexibility to design with efficient,
low-power gates.

In 2012, Intel started using FinFETs for its future commercial devices. Leaks
suggest that Intel's FinFET has an unusual shape of a triangle rather than
rectangle, and it is speculated that this might be either because a triangle
has a higher structural strength and can be more reliably manufactured or
because a triangular prism has a higher area-to-volume ratio than a rectangular
prism, thus increasing switching performance.

Vertical MOSFETs
A type of metal oxide semiconductor field effect transistor (MOSFET) used to
switch large amounts of current. Power MOSFETs use a vertical structure with
source and drain terminals at opposite sides of the chip. The vertical orientation
eliminates crowding at the gate and offers larger channel widths.
In addition, thousands of these transistor ”cells” are combined into one in order
to handle the high currents and voltage required of such devices.

Over the past 20 years, the channel length of MOS transistors has halved at
intervals of approximately every two or three years, which has led to a
virtuous circle of increasing packing density (more complex electronic
products), increasing performance (higher clock frequencies) and decreasing
costs per unit silicon area. To continue on this path, research is underway at
Southampton University to investigate an alternative method of fabricating
short-channel MOS transistors, so-called vertical MOSFETs. In these devices the
channel is perpendicular to the wafer surface instead of in the plane of the
surface. Vertical MOSFETs have three main advantages:

First, the channel length of the vertical MOS transistor is not defined by
lithography. This means no requirements for post-optical lithography techniques
such as x-ray, extreme ultra-violet, electron projection lithography, ion
projection lithography or direct-write e-beam, which are possibly prohibitively
expensive.

Second, vertical MOS transistors are easily made with both a front gate and a
back gate. Using this technology doubles the channel width per transistor area.
Combined with easier design rules, this leads to an increase in packing density
of at least a factor of four as compared to horizontal transistors.

One step further is the use of very narrow pillars with the gate surrounding
the entire pillar. This way, fully depleted transistors can be produced which
have all the advantages of SOI transistors. A third advantage of the vertical
MOSFET is the possibility to prevent short-channel effects from dominating the
transistor by adding processes that are not easily realised in horizontal
transistors, such as a polysilicon (or polySiGe) source to reduce parasitic
bipolar effects or a dielectric pocket to reduce drain-induced barrier lowering
(DIBL).

Why are vertical MOSFETs called power MOSFETs?

Power MOSFETs are usually constructed in a V-configuration, as shown in the
figure. That is why the device is sometimes called the V-MOSFET or V-FET. A
V-shaped cut penetrates from the device surface almost to the N+ substrate
through the N+, P and N layers.

2.2 Limits to scaling

Effects that result from scaling down eventually become severe enough to
prevent further miniaturization:

o Substrate doping

o Depletion width

o Limits of miniaturization

o Limits of interconnect and contact resistance

o Limits due to subthreshold currents

o Limits on logic levels and supply voltage due to noise

o Limits due to current density

For digital circuit design, the ideal MOSFET would be a perfect switch: it
would conduct infinite current in the on-state and zero current in the
off-state. Scaling of the device dimensions has been effective at increasing
the on-current of the device, but at the same time it causes an increase in the
off-current. For an NMOS device with the drain at the supply voltage and the
source, gate, and bulk at ground, ideally there should be no current flow.
However, for submicron devices, there may be significant drain current to the
source as subthreshold leakage, to the gate as tunneling current, and to the
bulk as gate-induced drain leakage. The need to minimize these leakage currents
while at the same time increasing on-current limits the scaling of MOSFETs.
Another characteristic of an ideal MOSFET would be an infinite lifetime.
Unfortunately, real devices tend to degrade when exposed to high electric
fields in either the gate oxide or the channel. High-field phenomena, such as
time-dependent dielectric breakdown and hot-carrier effects, are especially
worrisome since they can cause a chip to suddenly fail after operating
correctly for months or even years. Therefore, reliability concerns further
limit practical device designs.
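
To see why subthreshold leakage limits scaling, note that below threshold the
drain current falls off exponentially, roughly as I = I0·10^((VGS − VT)/S),
where S is the subthreshold swing (no better than about 60 mV/decade at room
temperature for a conventional planar MOSFET). The sketch below is an
illustrative addition with assumed, typical parameter values; it shows how
lowering the threshold voltage to recover on-current raises off-state leakage
exponentially:

def subthreshold_current(vgs, vt, i0=1e-7, swing_mv_per_dec=70.0):
    """Rough subthreshold leakage model: I = I0 * 10**((Vgs - Vt)/S).

    I0 (current at Vgs = Vt) and the swing S are assumed, typical values for
    illustration only; S cannot be below ~60 mV/decade at 300 K for a
    conventional MOSFET.
    """
    return i0 * 10 ** ((vgs - vt) * 1000.0 / swing_mv_per_dec)

# Off-state leakage (Vgs = 0) for two threshold voltages:
for vt in (0.4, 0.3):
    print(f"Vt = {vt:.1f} V -> Ioff = {subthreshold_current(0.0, vt):.2e} A")
# Reducing Vt by 100 mV increases off-state leakage by roughly 10^(100/70) ~ 27x.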

2.3 Nanomaterials
Nanomaterials describe, in principle, materials of which a single unit is sized
(in at least one dimension) between 1 and 100 nm (the usual definition of the
nanoscale).
Nanomaterials research takes a materials science-based approach to nanotech-
nology, leveraging advances in materials metrology and synthesis which have
been developed in support of microfabrication research. Materials with struc-
ture at the nanoscale often have unique optical, electronic, thermo-physical or
mechanical properties.
Nanomaterials are slowly becoming commercialized and beginning to emerge as
commodities.
In ISO/TS 80004, nanomaterial is defined as the ”material with any external
dimension in the nanoscale or having internal structure or surface structure in
the nanoscale”, with nanoscale defined as the ”length range approximately from
1 nm to 100 nm”. This includes both nano-objects, which are discrete pieces
of material, and nanostructured materials, which have internal or surface struc-
ture on the nanoscale; a nanomaterial may be a member of both these categories.

On 18 October 2011, the European Commission adopted the following definition of
a nanomaterial: "A natural, incidental or manufactured material containing
particles, in an unbound state or as an aggregate or as an agglomerate and
where, for 50% or more of the particles in the number size distribution, one or
more external dimensions is in the size range 1 nm – 100 nm. In specific cases
and where warranted by concerns for the environment, health, safety or
competitiveness, the number size distribution threshold of 50% may be replaced
by a threshold between 1% and 50%."
The most basic method to measure the size of nanoparticles is size analysis
from images taken with a transmission electron microscope (TEM), which can also
give the particle size distribution. For this analysis, preparation of
well-dispersed particles on the sample mount is the key issue.
The different methods used to synthesize nanomaterials are the chemical vapor
deposition method, thermal decomposition, hydrothermal synthesis, the
solvothermal method, pulsed laser ablation, the templating method, the
combustion method, microwave synthesis, the gas phase method, and the
conventional sol-gel method.
Nanotechnology is an emerging area of research which has the potential to
replace conventional micron-scale technologies and exploits the size-dependent
properties of functional materials.
The interest in nanoscience (the science of low-dimensional systems) is a
realization of a famous statement by Feynman that "There's Plenty of Room at
the Bottom".
Based on Feynman's idea, K. E. Drexler advanced the idea of "molecular
nanotechnology" in 1986 in the book Engines of Creation, where he postulated
the concept of using nanoscale molecular structures to act in a machine-like
manner to guide and activate the synthesis of larger molecules.
When the dimension of a material is reduced from a large size, the properties
remain the same at first, then small changes occur, until finally, when the
size drops below 100 nm, dramatic changes in properties can occur.
If only one dimension of a three-dimensional nanostructure is of nanoscale, the
structure is referred to as a quantum well; if two dimensions are of nanometer
scale, the structure is referred to as a quantum wire; and if all three
dimensions are of nanometer scale, the structure is referred to as a quantum
dot. Hence a quantum dot has all three dimensions in the nanorange and is the
ultimate example of a nanomaterial.
The word quantum is associated with these three types of nanostructures because
changes in properties arise from the physics of quantum mechanics.

Key issues in the fabrication of nanomaterials:

The interest in the synthesis of nanomaterials has grown because of their
distinct optical, magnetic, electronic, mechanical, and chemical properties
compared with those of bulk materials.
Fabrication and processing are the key issues in nanoscience and nanotechnology
for exploring the novel properties and phenomena of nanomaterials and realizing
their potential applications in science and technology. Many technological
approaches/methods have been explored to fabricate nanomaterials.

The following are the key issues or challenges in the fabrication of
nanostructured materials using any process or technique:
• Can you control the particle size?
• Can you control the shape of the nanoparticles?
• Can you control the structure, either crystalline or amorphous?
• Can you control the particle size distribution (monodisperse: all particles
are of the same size)?

Semiconductor Nanoparticles
Nanoparticles have recently attracted significant attention from the materials
science community. Nanoparticles, particles of a material with diameter in the
range 1 to 20 nm, promise to play a significant role in developing
technologies. They exhibit unique physical properties that give rise to many
potential applications in areas such as nonlinear optics, luminescence,
electronics, catalysis, solar energy conversion, and optoelectronics.

Two fundamental factors, both related to the size of the individual
nanocrystal, are responsible for these unique properties. The first is the
large surface-to-volume ratio, and the second is the quantum confinement
effect.

The wide band gap II-VI semiconductors are of current interest for
optoelectronic applications such as blue lasers, light emitting diodes,
photonic crystals and optical devices based on nonlinear properties.

The properties of semiconductor nanoparticles strongly depend on their size,
shape, composition, crystallinity and structure. Precisely controlling these
parameters is a great challenge and a prominent aim for synthetic
nanotechnologists.

Exact size- and shape-controlled synthesis of nanostructured materials is
becoming a great challenge for nanotechnologists.

Magnetic Nanoparticles:
Magnetic materials are also strongly affected by the small size scale of nanopar-
ticles. Magnetic nanoparticles are being looked at for applications in cancer
diagnosis and treatment.
Before widespread usage of nanoparticles in medicine can be realized, a number
of technical challenges must be met.

These include, though are not limited to, synthesizing uniformly sized, non-
toxic particles and coating the particles to make them attach to specific tissues.

The ferromagnetic (superparamagnetic) nanoparticles can be manipulated


by magnetic fields; they offer the potential to be a powerful tool for medicine

and pharmacology.

In order for magnetic nanoparticles to be used within the body, they must
meet several stringent criteria. Some of these criteria are biocompatibility, ease
of dispersion into solution for injection, and most importantly, nontoxicity.
In addition, the surfaces of the particles must be able to be functionalized
to attach and agglomerate into specific, targeted tissues.

This would allow magnetic nanoparticles to function in a wide range of


applications from drug targeting to improved resolution for nuclear magnetic
resonance imaging.

Recently it has been proposed that the nanoparticles could be used to treat
cancers through a treatment called thermotherapy or hyperthermia.

Iron oxides are one group of magnetic nanoparticles that meet the stringent
requirements for insertion into the body.

Methods of synthesis of nanomaterials:


Nanostructured materials have attracted a great deal of attention because their
physical, chemical, electronic and magnetic properties show dramatic changes
from their higher-dimensional counterparts and depend on their shape and size.

• Many techniques have been developed to synthesize and fabricate nanos-


tructure materials with controlled shape, size, dimensionality and structure.

• The performance of materials depends on their properties. The properties


in turn depend on the atomic structure, composition, microstructure, defects
and interfaces, which are controlled by the thermodynamics and kinetics of the
synthesis.
Classification of Techniques for the Synthesis of Nanomaterials
There are two general approaches for the synthesis of nanomaterials:
a) Top-down approach

b) Bottom-up approach

(a) Top-down approach


Top-down approach involves the breaking down of the bulk material into nano-
sized structures or particles. Top-down synthesis techniques are extension of
those that have been used for producing micron sized particles. Top-down ap-
proaches are inherently simpler and depend either on removal or division of
bulk material or on miniaturization of bulk fabrication processes to produce the
desired structure with appropriate properties.
The biggest problem with the top-down approach is the imperfection of surface
structure.
For example, nanowires made by lithography are not smooth and may contain
a lot of impurities and structural defects on their surfaces. Examples of such tech-
niques are high-energy wet ball milling, electron beam lithography, atomic force
manipulation, gas-phase condensation, aerosol spray, etc.

(b) Bottom-up approach


The alternative approach, which has the potential of creating less waste and
hence being more economical, is the ‘bottom-up’ approach.

The bottom-up approach refers to the building up of a material from the bottom:
atom-by-atom, molecule-by-molecule, or cluster-by-cluster.
Many of these techniques are still under development or are just beginning to
be used for commercial production of nanopowders.

The organometallic chemical route, reverse-micelle route, sol-gel synthesis, col-
loidal precipitation, hydrothermal synthesis, template-assisted sol-gel, electrode-
position, etc., are some of the well-known bottom-up techniques reported for the
preparation of luminescent nanoparticles.

3 Unit - III Fundamentals of NanoElectronics

3.1 Physical Limits to computation
Computers are physical systems: what they can and cannot do is dictated by
the laws of physics. In particular, the speed with which a physical device can
process information is limited by its energy and the amount of information that
it can process is limited by the number of degrees of freedom it possesses. This
paper explores the physical limits of computation as determined by the speed
of light c, the quantum scale and the gravitational constant G. As an example,
quantitative bounds are put to the computational power of an ‘ultimate laptop’
with a mass of one kilogram confined to a volume of one liter.

A computation, whether it is performed by electronic machinery, on an aba-



cus or in a biological system such as the brain, is a physical process. It is subject


to the same questions that apply to other physical processes: How much energy
must be expended to perform a particular computation? How long must it take?
How large must the computing device be? In other words, what are the physical
limits of the process of computation?

So far it has been easier to ask these questions than to answer them. To
the extent that we have found limits, they are terribly far away from the real
limits of modern technology. We cannot profess, therefore, to be guiding the
technologist or the engineer. What we are doing is really more fundamental.
We are looking for general laws that must govern all information processing,
no matter how it is accomplished. Any limits we find must be based solely on
fundamental physical principles, not on whatever technology we may currently
be using.
There are precedents for this kind of fundamental examination. In the 1940’s
Claude E. Shannon of the Bell Telephone Laboratories found there are limits
on the amount of information that can be transmitted through a noisy channel;


these limits apply no matter how the message is encoded into a signal. Shan-
non’s work represents the birth of modern information science. Earlier, in the
mid- and late 19th century, physicists attempting to determine the fundamental
limits on the efficiency of steam engines had created the science of thermody-
namics. In about 1960 one of us (Landauer) and John Swanson at IBM began
attempting to apply the same type of analysis to the process of computing.
Since the mid-1970’s a growing number of other workers at other institutions
have entered this field.
In our analysis of the physical limits of computation we use the term ”infor-
mation” in the technical sense of information theory. In this sense information
is destroyed whenever two previously distinct situations become indistinguish-
able. In physical systems without friction, information can never be destroyed;
whenever information is destroyed,
some amount of energy must be dissipated (converted into heat). As an exam-
ple, imagine two easily distinguishable physical situations, such as a rubber ball
held either one meter or two meters off the ground. If the ball is dropped, it

will bounce. If there is no friction and the ball is perfectly elastic, an observer
will always be able to tell what state the ball started out in (that is, what its
initial height was) because a ball dropped from two meters will bounce higher
than a ball dropped from one meter.
If there is friction, however, the ball will dissipate a small amount of energy
with each bounce, until it eventually stops bouncing and comes to rest on the
ground. It will then be impossible to determine what the ball’s initial state was;
a ball dropped from two meters will be identical with a ball dropped from one
meter. Information will have been lost as a result of energy dissipation.
Here is another example of information destruction: the expression 2 + 2 con-
tains more information than the expression = 4. If all we know is that we have
added two numbers to yield 4, then we do not know whether we have added 1

+ 3, 2 + 2, 0 + 4 or some other pair of numbers. Since the output is implicit


in the input, no computation ever generates information.

In fact, computation as it is currently carried out depends on many oper-


ations that destroy information. The so-called AND gate is a device with two
input lines, each of which may be set at 1 or 0, and one output, whose value
depends on the value of the inputs. If both inputs are 1, the output will be 1.
If one of the inputs is 0 or if both are 0, the output will also be 0. Any time the
gate’s output is a 0 we lose information, because we do not know which of three
possible states the input lines were in (0 and 1, 1 and 0, or 0 and 0). In fact, any
logic gate that has more input than output lines inevitably discards information,
because we cannot deduce the input from the output. Whenever we use such a
”logically irreversible” gate, we dissipate energy into the environment. Erasing
a bit of memory, another operation that is frequently used in computing, is also
fundamentally dissipative; when we erase a bit, we lose all information about
that bit’s previous state.
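
The quantitative form of this dissipation argument is the Landauer bound of roughly kT ln 2 of heat per erased bit; the short sketch below (Python) only evaluates that bound numerically, and the bit-rate used for comparison is an arbitrary example value.

# Minimal numeric illustration of the Landauer bound: erasing one bit of
# information must dissipate at least k_B * T * ln(2) of heat.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temperature_k=300.0):
    """Minimum heat dissipated per erased bit at the given temperature."""
    return K_B * temperature_k * math.log(2)

e_bit = landauer_limit_joules(300.0)
print(f"Landauer limit at 300 K: {e_bit:.3e} J per bit")  # ~2.9e-21 J

# For comparison: erasing 1e15 bits per second at this limit dissipates only a
# few microwatts -- far below what present-day logic actually dissipates.
print(f"Erasing 1e15 bits/s at the limit: {e_bit * 1e15 * 1e6:.2f} microwatts")
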


3.2 Logic devices


This section considers logic devices beyond the silicon CMOS device scaling
roadmap. The scope covers new device concepts, device physics, circuit design,
modeling, and device fabrication using novel nanoelectronic materials such as
carbon nanotubes and graphene, as well as novel concepts such as nanoelectro-
mechanical (NEM) relays.

We work on circuit-level performance modeling and optimization for end-


of-the-roadmap CMOS devices. As devices scale to small dimensions, parasitic
capacitances and parasitic resistances play an increasingly important role in
circuit/system level performance. We have developed accurate parasitic capaci-
tance and parasitic resistance models to enable circuit/device optimization and
to explore new device design options. Compact models for emerging devices such
as III-V FETs and carbon nanotube transistors have been developed and are
continually being refined to enable performance benchmarking and technology assessment
at the device and circuit level.

We continue to develop and enhance our carbon nanotube transistor compact
device model for circuit simulation. System-level optimization is enabled by the
development of non-iterative compact models of carbon nanotube transistors.
We are working on robust circuit design and fabrication for carbon nanotube
and graphene electronics including active devices (carbon nanotubes) and inter-
connects (graphene). We develop synthesis techniques to achieve high-density,
aligned growth of carbon nanotubes as well as low temperature carbon nan-
otube growth for electronics applications. Both digital logic and high-frequency
analog applications are explored.

3.3 Two terminal devices


1. There are many two-terminal devices which have a single P-N junction,
such as the Zener diode, varactor diode, Schottky diode, tunnel diode, etc. Let's
discuss them all.

Field effect devices

One of the most important physical mechanisms of importance to semi-


conductor devices is the field effect. Several important devices exploit this
effect in their operation, such as metal-oxide–semiconductor field-effect transis-
tors (MOSFETs), metal–semiconductor field-effect transistors (MESFETs), and
junction field-effect transistors (JFETs). In fact, the field effect transistor (FET)
is arguably the most important innovation that has fueled the computer and
information revolution. In this chapter, the fundamentals of the FET operation
are presented; the reader is directed to the references for a more comprehensive
study.


The field effect can be simply defined as the modulation of the conductivity of
an underlying semiconductor layer by the application of an electric field to a gate
electrode on the surface. As we learned in Chapter 11, the application of a bias
to a MIS structure results in a modulation in the carrier concentration within
the underlying semiconductor layer. If the semiconductor is naturally n type
and a positive gate bias is applied, electrons accumulate at the semiconductor-
insulator interface. Conversely, if a negative gate bias is applied to the same
structure, the electrons are repelled from the interface and, depending on the
magnitude of the bias, the underlying semiconductor layer is either depleted or
inverted. If the semiconductor becomes inverted, the carrier type changes.
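
As a hedged illustration of this gate-controlled conductivity, the sketch below implements the textbook long-channel (square-law) n-MOSFET model; the threshold voltage and transconductance parameter are placeholder values, not device data from this text.

# Sketch of the long-channel (square-law) n-MOSFET model, to make the field
# effect concrete: the gate voltage modulates the channel conductivity.
# All parameter values below are illustrative placeholders.

def nmos_drain_current(vgs, vds, vt=0.5, k=2e-4):
    """Square-law model. vt: threshold voltage (V); k = mu_n*Cox*W/L (A/V^2)."""
    vov = vgs - vt                            # overdrive voltage
    if vov <= 0:
        return 0.0                            # cutoff: channel not inverted
    if vds < vov:
        return k * (vov * vds - vds**2 / 2)   # triode (linear) region
    return 0.5 * k * vov**2                   # saturation region

for vgs in (0.4, 0.8, 1.2):
    print(f"Vgs = {vgs:.1f} V -> Id(sat) = {nmos_drain_current(vgs, 1.5)*1e6:.1f} uA")
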

3.4 Coulomb Blockade Devices


The semiconductor transistor has been one of the most remarkable inventions

of all time. It has become the main component of all modern electronics. The
miniaturisation trend has been very rapid, leading to ever decreasing device
sizes and opening endless opportunities to realise things which were considered
impossible. To keep up with the pace of large scale integration, the idea of
single electron transistors (SETs) has been conceived. The most outstanding
property of SETs is the possibility to switch the device from the insulating to the
conducting state by adding only one electron to the gate electrode, whereas a
common MOSFET needs about 1000–10,000 electrons. The Coulomb blockade
or single-electron charging effect, which allows for the precise control of small
numbers of electrons, provides an alternative operating principle for nanometre-
scale devices. In addition, the reduction in the number of electrons in a switch-
ing transition greatly reduces circuit power dissipation, raising the possibility

of even higher levels of circuit integration. The present report begins with a
description of Coulomb blockade, the classical theory which accounts for the
switching in SETs. We also discuss the work that has been done on realising
SETs and the digital building blocks like memory and logic.

Various structures have been made in which electrons are confined to small
volumes in metals or semiconductors. Perhaps not surprisingly, there is a deep
analogy between such confined electrons and atoms. Such regions, with dimensions
of only 1-100 nm and containing between 1,000 and 1,000,000 nuclei, are
referred to as ‘quantum dots’, ‘artificial atoms’ or ‘solid state atoms’. Such
quantum dots form the heart of the SET gates.

Coulomb Blockade
Single electron devices differ from conventional devices in the sense that the
electronic transport is governed by quantum mechanics. Single electron devices
consist of an ‘island’, a region containing localized electrons isolated by tunnel
junctions with barriers to electron tunneling. In this section, we discuss the
electron transport through such devices and how Coulomb blockade originates


in these devices. We also discuss how this is brought into play in SETs. The
energy that determines the transport of electrons through a single-electron de-
vice is Helmholtz’s free energy, F, which is defined as the difference between the
total energy, EΣ, stored in the device and the work done by the power sources, W.
The total energy stored includes all components that have to be considered when
charging an island with an electron:
F = EΣ − W
EΣ = EC + ∆EF + EN

The change in Helmholtz’s free energy a tunnel event causes is a measure of


the probability of this tunnel event. The general fact that physical systems tend
to occupy lower energy states is apparent in electrons favouring those tunnel
events which reduce the free energy.
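
For orientation, the following back-of-the-envelope sketch evaluates the single-electron charging energy EC = e²/2C, the dominant term in the total energy above, for a few illustrative island capacitances; the capacitance values are assumptions, not measured data.

# Illustrative numbers for single-electron charging: E_C = e^2 / (2C) and the
# temperature scale E_C / k_B below which Coulomb blockade becomes observable.
E = 1.602176634e-19   # elementary charge, C (numerically also J per eV)
K_B = 1.380649e-23    # Boltzmann constant, J/K

def charging_energy(c_farads):
    """Electrostatic energy cost of adding one electron to an island of capacitance C."""
    return E**2 / (2.0 * c_farads)

for c in (1e-15, 1e-16, 1e-18):   # 1 fF, 0.1 fF, 1 aF (illustrative values)
    ec = charging_energy(c)
    print(f"C = {c:.0e} F: E_C = {ec/E*1000:.2f} meV, E_C/k_B = {ec/K_B:.1f} K")
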

3.5 Spintronics
Spintronics (a neologism for “spin transport electronics”), also known as mag-
netoelectronics, is an emerging technology that exploits the intrinsic spin of the
electron and its associated magnetic moment, in addition to its fundamental
electronic charge.
Magnetic thermal annealing (MTA) is an effective process to enhance the perfor-
mance of magnetic devices and materials.
Thermal annealing involves raising, maintaining, and then slowly lowering
the temperature of a material. Annealing allows the atoms inside of a solid
to diffuse more easily to find their proper locations, and maintaining a solid
at a high temperature lets it achieve equilibrium, eliminating many structural

imperfections that would otherwise reduce its utility.


Annealing has been a widely used technique in metallurgy for quite some time.
However, a relatively new technique, called magnetic thermal annealing, puts a
new spin on this age-old method. The major difference between the two heat
treatments is that in magnetic annealing, an external magnetic field is applied
during the annealing process. This has some very interesting effects, especially
on ferromagnetic (FM) and antiferromagnetic (AFM) materials.
The age of electrically-based devices has been with us for more than six decades.
With more and more electrical devices being packed into smaller and smaller
spaces, the limits of physical space will prevent further expansion in the direc-
tion the microelectronics industry is currently going. Also, volatile memory,
which does not retain information upon being powered off, is significantly hin-
dering ultrafast computing speeds. However, a new breed of electronics, dubbed
“spintronics,” may change all of that.
Instead of solely relying on the electron’s negative charge to manipulate elec-
tron motion or to store information, spintronic devices would further rely on the
electron’s spin degree of freedom, the mathematics of which is similar to that of
a spinning top. Since an electron’s spin is directly coupled to its magnetic mo-

ment, its manipulation is intimately related to applying external magnetic fields.
The advantage of spin-based electronics is that they are nonvolatile, in contrast
to charge-based electronics, and quantum-mechanical computing based on
spintronics could achieve speeds unheard of with conventional electrical comput-
ing. Spintronics, also called magnetoelectronics, spin electronics, or spin-based
electronics, is an emerging scientific field. The research on spintronics can be
divided into several subfields.

One spintronic device that currently has wide commercial application is the

spin-valve. Most modern hard disk drives employ spin-valves to read each mag-
netic bit contained on the spinning platters inside. A spin-valve is essentially
a spin “switch” that can be turned on and off by external magnetic fields. Ba-
sically, it is composed of two ferromagnetic layers separated by a very thin
non-ferromagnetic layer. When these two layers are parallel, electrons can pass
through both easily, and when they are antiparallel, few electrons will penetrate
both layers.

The principles governing spin-valve operation are purely quantum mechani-


cal. Generally, an electron current contains both up and down spin electrons in
equal abundance. When these electrons approach a magnetized ferromagnetic
layer, one where most or all of the atomic magnetic moments point in the same direction, one
of the spin polarizations will scatter more than the other. If the ferromagnetic
layers are parallel, the electrons not scattered by the first layer will not be scat-
tered by the second, and will pass through both. The result is a lower total
resistance (large current). However, if the layers are antiparallel, each spin po-
larization will scatter by the same amount, since each encounters a parallel and
antiparallel layer once. The total resistance is then higher than in the parallel


configuration (small current).
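
A common way to picture this is the two-current (Mott) resistor model, sketched below; the spin-dependent resistances are arbitrary illustrative numbers, not values from the text.

# Two-current (Mott) resistor model of a spin valve: spin-up and spin-down
# electrons are treated as parallel channels, each scattering differently in
# each ferromagnetic layer.  Resistance values are illustrative only.

def parallel(r1, r2):
    """Resistance of two channels in parallel."""
    return r1 * r2 / (r1 + r2)

R_LOW, R_HIGH = 1.0, 4.0   # weak / strong scattering in one layer (arbitrary units)

# Parallel magnetizations: one spin channel scatters weakly in BOTH layers.
r_parallel = parallel(R_LOW + R_LOW, R_HIGH + R_HIGH)
# Antiparallel magnetizations: each spin channel scatters weakly in one layer
# and strongly in the other, so both channels see R_LOW + R_HIGH.
r_antiparallel = parallel(R_LOW + R_HIGH, R_LOW + R_HIGH)

gmr = (r_antiparallel - r_parallel) / r_parallel
print(f"R_P = {r_parallel:.2f}, R_AP = {r_antiparallel:.2f}, GMR ratio = {gmr:.0%}")
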

Thus, by measuring the total resistance of the spin valve, it is possible to


determine if it is in a parallel or antiparallel configuration, and since this is
controlled by an external magnetic field, the direction of the external field can
be measured. Since each bit in a hard drive either points in one direction or the
other, their orientation can easily be determined with a device using this mech-
anism. The two physicists who discovered the giant magnetoresistance (GMR)
effect in 1988 received the 2007 Nobel Prize in Physics.

3.6 Quantum Cellular Automata


A quantum cellular automaton (QCA) is an abstract model of quantum com-
putation, devised in analogy to conventional models of cellular automata intro-
duced by John von Neumann. The same name may also refer to quantum dot

cellular automata, which are a proposed physical implementation of ”classical”
cellular automata by exploiting quantum mechanical phenomena. QCA have
attracted a lot of attention as a result of its extremely small feature size (at the
molecular or even atomic scale) and its ultra-low power consumption, making
it one candidate for replacing CMOS technology.
In the context of models of computation or of physical systems, quantum cellu-
lar automaton refers to the merger of elements of both (1) the study of cellular
automata in conventional computer science and (2) the study of quantum in-
formation processing. In particular, the following are features of models of
quantum cellular automata:
The computation is considered to come about by parallel operation of mul-
tiple computing devices, or cells. The cells are usually taken to be identical,

finite-dimensional quantum systems (e.g. each cell is a qubit).


Each cell has a neighborhood of other cells. Altogether these form a network
of cells, which is usually taken to be regular (e.g. the cells are arranged as a
lattice with or without periodic boundary conditions).
The evolution of all of the cells has a number of physics-like symmetries. Lo-
cality is one: the next state of a cell depends only on its current state and that
of its neighbours. Homogeneity is another: the evolution acts the same every-
where, and is independent of time.
The state space of the cells, and the operations performed on them, should
be motivated by principles of quantum mechanics. Another feature that is of-
ten considered important for a model of quantum cellular automata is that it
should be universal for quantum computation (i.e. that it can efficiently simu-
late quantum Turing machines, some arbitrary quantum circuit[3] or simply all
other quantum cellular automata).

Models which have been proposed recently impose further conditions, e.g.
that quantum cellular automata should be reversible and/or locally unitary, and
have an easily determined global transition function from the rule for updating


individual cells.[2] Recent results show that these properties can be derived ax-
iomatically, from the symmetries of the global evolution.

Early proposals
In 1982, Richard Feynman suggested an initial approach to quantizing a model
of cellular automata.[9] In 1985, David Deutsch presented a formal development
of the subject.[10] Later, Gerhard Grössing and Anton Zeilinger introduced the
term ”quantum cellular automata” to refer to a model they defined in 1988,[11]
although their model had very little in common with the concepts developed by
Deutsch and so has not been developed significantly as a model of computation.

Models of universal quantum computation


The first formal model of quantum cellular automata to be researched in depth
was that introduced by John Watrous.[1] This model was developed further by
Wim van Dam,[12] as well as Christoph Dürr, Huong LêThanh, and Miklos
Santha,[13][14] Jozef Gruska,[15] and Pablo Arrighi.[16] However, it was later

realised that this definition was too loose, in the sense that some instances of it
allow superluminal signalling.[6][7] A second wave of models includes those of
Susanne Richter and Reinhard Werner,[17] of Benjamin Schumacher and Rein-
hard Werner,[6] of Carlos Pérez-Delgado and Donny Cheung,[2] and of Pablo
Arrighi, Vincent Nesme and Reinhard Werner.[7][8] These are all closely related,
and do not suffer any such locality issue. In the end one can say that they all
agree to picture quantum cellular automata as just some large quantum circuit,
infinitely repeating across time and space.

Models of physical systems


Models of quantum cellular automata have been proposed by David Meyer,[18][19]
Bruce Boghosian and Washington Taylor,[20] and Peter Love and Bruce Boghosian[21]

as a means of simulating quantum lattice gases, motivated by the use of ”clas-


sical” cellular automata to model classical physical phenomena such as gas dis-
persion.[22] Criteria determining when a quantum cellular automaton (QCA)
can be described as quantum lattice gas automaton (QLGA) were given by Asif
Shakeel and Peter Love.

Quantum dot cellular automata


An implementation of classical cellular automata by systems designed
with quantum dots has been proposed under the name ”quantum cellular au-
tomata” by Doug Tougaw and Craig Lent,[24] as a replacement for classical
computation using CMOS technology. In order to better differentiate between
this proposal and models of cellular automata which perform quantum compu-
tation, many authors working on this subject now refer to this as a quantum
dot cellular automaton.


3.7 Quantum Computing


Quantum computing is a type of computation that harnesses the collective prop-
erties of quantum states, such as superposition, interference, and entanglement,
to perform calculations. The devices that perform quantum computations are
known as quantum computers.

Quantum computing began in 1980 when physicist Paul Benioff proposed


a quantum mechanical model of the Turing machine. Richard Feynman and
Yuri Manin later suggested that a quantum computer had the potential to sim-
ulate things a classical computer could not feasibly do. In 1994, Peter Shor
developed a quantum algorithm for factoring integers with the potential to de-
crypt RSA-encrypted communications. In 1998, Isaac Chuang, Neil Gershenfeld
and Mark Kubinec created the first two-qubit quantum computer that could
perform computations. Despite ongoing experimental progress since the late
1990s, most researchers believe that ”fault-tolerant quantum computing [is] still

a rather distant dream.”[9] In recent years, investment in quantum computing
research has increased in the public and private sectors. On 23 October 2019,
Google AI, in partnership with the U.S. National Aeronautics and Space Ad-
ministration (NASA), claimed to have performed a quantum computation that
was infeasible on any classical computer, but whether this claim was or is still
valid is a topic of active research.
There are several types of quantum computers (also known as quantum comput-
ing systems), including the quantum circuit model, quantum Turing machine,
adiabatic quantum computer, one-way quantum computer, and various quan-
tum cellular automata. The most widely used model is the quantum circuit,
based on the quantum bit, or ”qubit”, which is somewhat analogous to the bit
in classical computation. A qubit can be in a 1 or 0 quantum state, or in a

superposition of the 1 and 0 states. When it is measured, however, it is always


0 or 1; the probability of either outcome depends on the qubit’s quantum state
immediately prior to measurement.
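
The measurement rule just stated (the Born rule) can be illustrated with a plain state-vector sketch; this is a generic simulation, not tied to any particular quantum-computing framework, and the chosen state is simply the equal superposition.

# Minimal state-vector picture of a single qubit: a normalized pair of complex
# amplitudes (alpha, beta); measurement yields 0 with probability |alpha|^2
# and 1 with probability |beta|^2 (Born rule).
import math, random

def measure(alpha, beta, shots=10000):
    """Simulate repeated measurements of the state alpha|0> + beta|1>."""
    p0 = abs(alpha) ** 2
    assert abs(p0 + abs(beta) ** 2 - 1.0) < 1e-9, "state must be normalized"
    return sum(1 for _ in range(shots) if random.random() < p0) / shots

# Equal superposition (|0> + |1>)/sqrt(2): about half the shots give 0.
a = b = 1 / math.sqrt(2)
print(f"P(measure 0) ~ {measure(a, b):.3f}  (ideal 0.5)")
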

Efforts towards building a physical quantum computer focus on technologies


such as transmons, ion traps and topological quantum computers, which aim to
create high-quality qubits. These qubits may be designed differently, depending
on the full quantum computer’s computing model, whether quantum logic gates,
quantum annealing, or adiabatic quantum computation. There are currently a
number of significant obstacles to constructing useful quantum computers. It
is particularly difficult to maintain qubits’ quantum states, as they suffer from
quantum decoherence and state fidelity. Quantum computers therefore require
error correction.

Any computational problem that can be solved by a classical computer can


also be solved by a quantum computer. Conversely, any problem that can be
solved by a quantum computer can also be solved by a classical computer,
at least in principle given enough time. In other words, quantum computers


obey the Church–Turing thesis. This means that while quantum computers
provide no additional advantages over classical computers in terms of computabil-
ity, quantum algorithms for certain problems have significantly lower time
complexities than corresponding known classical algorithms. Notably, quantum
computers are believed to be able to quickly solve certain problems that no
classical computer could solve in any feasible amount of time—a feat known as
”quantum supremacy.” The study of the computational complexity of problems
with respect to quantum computers is known as quantum complexity theory.
The quantum in ”quantum computing” refers to the quantum mechanics that
the system uses to calculate outputs. In physics, a quantum is the smallest
possible discrete unit of any physical property. It usually refers to properties of
atomic or subatomic particles, such as electrons, neutrinos, and photons.

3.8 DNA Computing

DNA computing is an emerging branch of computing which uses DNA, bio-
chemistry, and molecular biology hardware, instead of the traditional electronic
computing. Research and development in this area concerns theory, experi-
ments, and applications of DNA computing.
Leonard Adleman of the University of Southern California initially developed
this field in 1994. Adleman demonstrated a proof-of-concept use of DNA as a
form of computation which solved the seven-point Hamiltonian path problem.
Since the initial Adleman experiments, advances have occurred and various Tur-
ing machines have been proven to be constructible.
Since then the field has expanded into several avenues. In 1995, the idea for
DNA-based memory was proposed by Eric Baum[14] who conjectured that a
vast amount of data can be stored in a tiny amount of DNA due to its ultra-

high density. This expanded the horizon of DNA computing into the realm
of memory technology, although the in vitro demonstrations were made almost
a decade later.

The field of DNA computing can be categorized as a sub-field of the broader


DNA nanoscience field started by Ned Seeman about a decade before Len Adle-
man’s demonstration. Ned’s original idea in the 1980s was to build arbitrary
structures using bottom-up DNA self-assembly for applications in crystallog-
raphy. However, it morphed into the field of structural DNA self-assembly
which as of 2020 is extremely sophisticated. Self-assembled structures from a
few nanometers tall all the way up to several tens of micrometers in size were
demonstrated in 2018.

In 1994, Prof. Seeman’s group demonstrated early DNA lattice structures


using a small set of DNA components. While the demonstration by Adleman
showed the possibility of DNA-based computers, the DNA design was trivial
because as the number of nodes in a graph grows, the number of DNA compo-
nents required in Adleman’s implementation would grow exponentially. There-


fore, computer scientists and biochemists started exploring tile-assembly where


the goal was to use a small set of DNA strands as tiles to perform arbitrary
computations upon growth. Other avenues that were theoretically explored in
the late 90’s include DNA-based security and cryptography, computational ca-
pacity of DNA systems,[20] DNA memories and disks, and DNA-based robotics.

In 2003, John Reif’s group first demonstrated the idea of a DNA-based


walker that traversed along a track similar to a line follower robot. They used
molecular biology as a source of energy for the walker. Since this first demon-
stration, a wide variety of DNA-based walkers have been demonstrated.

Applications, examples, and recent developments


In 1994 Leonard Adleman presented the first prototype of a DNA computer.
The TT-100 was a test tube filled with 100 microliters of a DNA solution. He
managed to solve an instance of the directed Hamiltonian path problem. In
Adleman’s experiment, the Hamiltonian Path Problem was implemented no-

tationally as “travelling salesman problem”. For this purpose, different DNA
fragments were created, each one of them representing a city that had to be
visited. Every one of these fragments is capable of a linkage with the other
fragments created. These DNA fragments were produced and mixed in a test
tube. Within seconds, the small fragments form bigger ones, representing the
different travel routes. Through a chemical reaction, the DNA fragments repre-
senting the longer routes were eliminated. The remains are the solution to the
problem, but overall, the experiment lasted a week. However, current technical
limitations prevent the evaluation of the results. Therefore, the experiment isn’t
suitable for the application, but it is nevertheless a proof of concept.

3.9 Ultimate physical limits to computation


Computers are physical systems: the laws of physics dictate what they can and
cannot do. In particular, the speed with which a physical device can process
information is limited by its energy and the amount of information that it can
process is limited by the number of degrees of freedom it possesses. Here I
explore the physical limits of computation as determined by the speed of light
c, the quantum scale and the gravitational constant G. As an example, I put
quantitative bounds to the computational power of an ‘ultimate laptop’ with a
mass of one kilogram confined to a volume of one litre.
Over the past half century, the amount of information that computers are ca-
pable of processing and the rate at which they process it has doubled every
18 months, a phenomenon known as Moore’s law. A variety of technologies —
most recently, integrated circuits — have enabled this exponential increase in
information processing power. But there is no particular reason why Moore’s
law should continue to hold: it is a law of human ingenuity, not of nature. At
some point, Moore’s law will break down. The question is, when?

We should determine just what limits the laws of physics place on the power

of computers. At first, this might seem a futile task: since we don’t know the
technologies by which computers one thousand, one hundred, or even ten years
in the future will be constructed, how can we determine the physical limits of
those technologies? In fact, as will now be shown, a great deal can be determined
concerning the ultimate physical limits of computation simply from knowledge
of the speed of light...
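
One of the bounds referred to here is the Margolus–Levitin limit of at most 2E/(πħ) elementary operations per second for a system of total energy E; the sketch below simply evaluates it for the one-kilogram ‘ultimate laptop’, assuming all of the rest-mass energy is available for computation.

# Numeric check of the Margolus-Levitin bound used for the 'ultimate laptop':
# a system with total energy E can perform at most 2E / (pi * hbar) elementary
# logic operations per second.
import math

HBAR = 1.054571817e-34   # J*s
C = 299792458.0          # m/s

def max_ops_per_second(mass_kg):
    """Upper bound on ops/s if ALL rest-mass energy E = m c^2 is devoted to computation."""
    energy = mass_kg * C**2
    return 2.0 * energy / (math.pi * HBAR)

print(f"1 kg ultimate laptop: <= {max_ops_per_second(1.0):.2e} operations per second")
# ~5.4e50 ops/s, tens of orders of magnitude beyond present-day machines.
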

4 Unit - IV Nano Structure Devices


4.1 Resonant Tunneling Diode

Tunneling diodes (TDs) have been widely studied for their importance in achiev-
ing very high speed in wide-band devices and circuits that are beyond conven-
tional transistor technology. A particularly useful form of a tunneling diode
is the Resonant Tunneling Diode (RTD). RTDs have been shown to achieve a
maximum frequency of up to 2.2 THz as opposed to 215 GHz in conventional
Complementary Metal Oxide Semiconductor (CMOS) transistors. The very
high switching speeds provided by RTDs have allowed for a variety of appli-
cations in wide-band secure communications systems and high-resolution radar
and imaging systems for low visibility environments. Tunneling diodes provide
the same functionality as a CMOS transistor where under a specific external
bias voltage range, the device will conduct a current thereby switching the de-
vice “on”. However, instead of the current going through a channel between the
drain and source as in CMOS transistors, the current goes through the depletion
region by tunneling in normal tunneling diodes and through quasi-bound states
within a double barrier structure in RTDs.
A TD consists of a p-n junction in which both the n- and pregions are degener-
ately doped. There is a high concentration of electrons in the conduction band
(EC ) of the n-type material and empty states in the valence band (EV ) of the

p-type material. Initially, the Fermi level (EF) is constant because the diode is
in thermal equilibrium with no external bias voltage. When the forward bias
voltage starts to increase, the EF will start to decrease in the p-type material
and increase in the n-type material. Since the depletion region is very narrow
(<10 nm), electrons can easily tunnel through, creating a forward current. De-
pending on how many electrons in the n-region are energetically aligned to the
empty states in the valence band of the p-region, the current will either increase
or decrease. As the bias voltage continues to increase, the ideal diffusion cur-
rent will cause the current to increase. When a reverse-bias voltage is applied,

the electrons in the p-region are energetically aligned with empty states in the
n-region causing a large reverse-bias tunneling current.

The current-voltage (I-V) curve shows the negative differential resistance


(NDR) characteristic of RTDs. For a specific voltage range, the current is a
decreasing function of voltage. This property is very important in the circuit
implementation because it can provide for the different voltage-controlled logic
states corresponding to the peak and valley currents. RTDs utilize a quantum
well with identically doped contacts to provide similar I-V characteristics. It
consists of two heavily doped, narrow energy-gap materials encompassing an
emitter region, a quantum well in between two barriers of large band gap mate-
rial, and a collector region, as shown in Figure 3. A current method of growth for
this device is Metal Organic Chemical Vapor Deposition using GaAs-AlGaAs.
The quantum-well thickness is typically around 5nm and the barrier layers are
around 1.5 to 5 nm thick.
When there is no forward voltage bias, most of the electrons and holes are
stationary forming an accumulation layer in the emitter and collector region re-
spectively. As a forward voltage bias is applied, an electric field is created that
causes electrons to move from the emitter to the collector by tunneling through
the scattering states within the quantum well. These quasibound energy states

are the energy states that allow for electrons to tunnel through creating a cur-
rent. As more and more electrons in the emitter have the same energy as the
quasi-bound state, more electrons are able to tunnel through the well, resulting
in an increase in the current as the applied voltage is increased. When the
electric field increases to the point where the energy level of the electrons in the
emitter coincides with the energy level of the quasi-bound state of the well, the
current reaches a maximum.
Resonant tunneling occurs at specific resonant energy levels corresponding
to the doping levels and width of the quantum well. As the applied voltage
continues to increase, more and more electrons are gaining too much energy to
tunnel through the well and the current is decreased. After a certain applied
voltage, current begins to rise again because of substantial thermionic emission
where the electrons can tunnel through the non-resonant energy levels of the
well. This process produces a minimum “valley” current that can be classified
as the leakage current.
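
To visualise the peak-valley behaviour described above, the sketch below uses a simple phenomenological tunnel-diode-style model (a resonant term that peaks and decays, plus a diode-like term); all parameter values are illustrative and are not fitted to any real RTD.

# Phenomenological N-shaped I-V sketch of the kind used to illustrate RTD
# negative differential resistance: a resonant-tunneling term that peaks at V_p
# and then decays, plus an ordinary diode-like term that takes over at higher
# bias.  Parameter values are illustrative, not from any measured device.
import math

def rtd_current(v, i_peak=1.0e-3, v_peak=0.3, i_sat=1.0e-9, n_vt=0.06):
    tunnel = i_peak * (v / v_peak) * math.exp(1.0 - v / v_peak)   # peaks at v = v_peak
    thermionic = i_sat * (math.exp(v / n_vt) - 1.0)               # post-valley rise
    return tunnel + thermionic

for v in (0.1, 0.3, 0.6, 0.9):
    print(f"V = {v:.1f} V -> I = {rtd_current(v)*1e3:.3f} mA")
# The current rises to a peak near 0.3 V, drops through the NDR region,
# then rises again once the diode-like term dominates.
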
RTDs have a major advantage over TDs. When a high reverse bias voltage is
applied to TDs, there is a very high leakage current. However, RTDs have the
same doping type and concentration on the collector and emitter side. This

produces a symmetrical I-V response when a forward as well as a reverse bias
voltage is applied. In this manner the very high leakage current present in nor-
mal TDs is eliminated. Thus, RTDs are very good rectifiers.
RTD bandwidths were reported for InAs/AlSb RTDs at about 1.24 THz due
to their low ohmic contact resistance and short transit times. Higher band-

widths could be obtained using InAs Schottky-contact RTDs (SRTDs) because


of the higher tunneling current densities and shorter transit times. However,
InGaAs/AlAs/InP is usually used instead of InAs/AlSb because of its mature
fabrication and growth technologies.

4.2 Coulomb blockade in Quantum Dots


In the resonant tunnelling diode we have treated electrons as non-interacting
particles. We have discussed their wave nature, which led to quantized energy
states in small and coherent cavities. Qualitatively this is due to the requirement
that an integer number of Fermi wavelengths has to fit between the barriers.
Here we begin in the opposite limit by neglecting space quantization effects in
terms of self-interference and discussing single electron charging of small metallic
islands. Due to the small Fermi-wavelength in metals the energy spectrum of
such a system is quasi-continuous and the system can be treated classically,
except that due to the quantization of charge an integer number of particles
needs to reside on the island.

Single electron charging
The device which we want to consider is a so-called single electron transistor
where a small island with the self-capacitance C is weakly coupled to source and
drain contacts via tunnel barriers. At low enough temperatures and small bias
voltage, the energy cost to add an extra electron onto the island may exceed
the thermal energy and the current through the island is blocked. This is the
Coulomb blockade effect.
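
A rough feel for how small the island must be comes from requiring the charging energy e²/2C to exceed the thermal energy; the sketch below makes that estimate for an isolated spherical island with self-capacitance C = 4πε0r, which is a crude assumption rather than a device design.

# Back-of-the-envelope condition for Coulomb blockade: the charging energy
# e^2 / (2C) must exceed k_B * T.  Modeling the island as an isolated sphere
# with self-capacitance C = 4*pi*eps0*r gives a rough maximum island radius.
import math

E = 1.602176634e-19      # elementary charge, C
K_B = 1.380649e-23       # Boltzmann constant, J/K
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def max_island_radius(temperature_k, margin=10.0):
    """Radius at which e^2/(2C) = margin * k_B * T for a spherical island."""
    c_max = E**2 / (2.0 * margin * K_B * temperature_k)
    return c_max / (4.0 * math.pi * EPS0)

for t in (4.2, 77.0, 300.0):
    print(f"T = {t:5.1f} K -> island radius below ~{max_island_radius(t)*1e9:.1f} nm")
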

It was first suggested in the early 50’s by Gorter as an explanation for the obser-
vation of an anomalous increase of the resistance of thin granular metallic films
with a reduction in temperature. More than 30 years later Fulton and Dolan
observed Coulomb blockade effects in a microfabricated metallic sample and
initiated a huge number of experimental and theoretical studies. Today there
are many text books and reviews on single electron systems both in metals and
in semiconductor systems.


4.3 Carbon Nanotube

Carbon nanotubes (CNTs) are cylindrical large molecules consisting of a hexag-
onal arrangement of hybridized carbon atoms, which may be formed by rolling
up a single sheet of graphene (single-walled carbon nanotubes, SWCNTs) or by
rolling up multiple sheets of graphene (multiwalled carbon nanotubes, MWC-
NTs).
Carbon Nanotubes, long, thin cylinders of carbon, were discovered in 1991 by
Sumio Iijima. These are large macromolecules that are unique for their size,
shape, and remarkable physical properties. They can be thought of as a sheet
of graphite (a hexagonal lattice of carbon) rolled into a cylinder. These intrigu-
ing structures have sparked much excitement in recent years and a large amount
of research has been dedicated to their understanding. Currently, the physical
properties are still being discovered and disputed. Nanotubes have a very broad

range of electronic, thermal, and structural properties that change depending


on the different kinds of nanotube (defined by its diameter, length, and chirality,
or twist). To make things more interesting, besides having a single cylindrical
wall (SWNTs), nanotubes can have multiple walls (MWNTs) – cylinders nested
inside other cylinders.

Carbon Nanotubes and Moore’s Law


At the rate Moore’s Law is progressing, by 2019 it will result in transistors just a
few atoms in width. This means that the strategy of ever finer photolithography
will have run its course; we have already seen a progression from a micron, to
sub micron to 45 nm scale. Carbon Nanotubes, whose walls are just 1 atom
thick, with diameters of only 1 to 2 nm, seems to be one of the perfect candi-
dates to take us right to the end of Moore’s Law curve. We possibly cannot go
beyond that. So certainly carbon nanotubes have a promising future!

Key properties of Carbon Nanotubes


Carbon Nanotubes are an example of true nanotechnology: they are less than
100 nanometers in diameter and can be as thin as 1 or 2 nm. They are molecules


that can be manipulated chemically and physically in very useful ways. They
open an incredible range of applications in materials science, electronics, chem-
ical processing, energy management, and many other fields. Some properties
include
• Extraordinary electrical conductivity, heat conductivity, and mechanical prop-
erties.
• They are probably the best electron field-emitter known, largely due to their
high length-to-diameter ratios.
• As pure carbon polymers, they can be manipulated using the well-known and
the tremendously rich chemistry of that element.
Some of the above properties provide opportunity to modify their structure, and
to optimize their solubility and dispersion. These extraordinary characteristics
give CNTs potential in numerous applications.

Properties of Carbon Nanotubes

The structure of a carbon nanotube is formed by a layer of carbon atoms
that are bonded together in a hexagonal (honeycomb) mesh. This one-atom
thick layer of carbon is called graphene, and it is wrapped in the shape of a
cylinder and bonded together to form a carbon nanotube. Nanotubes can have
a single outer wall of carbon, or they can be made of multiple walls (cylinders
inside other cylinders of carbon). Carbon nanotubes have a range of electric,
thermal, and structural properties that can change based on the physical design
of the nanotube.

Single-walled carbon nanotube structure

Single-walled carbon nanotubes can be formed in three different designs:



Armchair, Chiral, and Zigzag. The design depends on the way the graphene is
wrapped into a cylinder. For example, imagine rolling a sheet of paper from
its corner, which can be considered one design, and a different design can be
formed by rolling the paper from its edge. A single-walled nanotube’s structure
is represented by a pair of indices (n,m) called the chiral vector. The chiral
vector is defined in the image below.
The structural design has a direct effect on the nanotube’s electrical proper-
ties. When n − m is a multiple of 3, then the nanotube is described as ”metallic”
(highly conducting), otherwise the nanotube is a semiconductor. The Armchair
design is always metallic while other designs can make the nanotube a semicon-
ductor.
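
The (n, m) rule just described can be turned into a few lines of code; the sketch below also uses the standard diameter formula d = a√(n² + nm + m²)/π with the graphene lattice constant a ≈ 0.246 nm. The example indices are arbitrary.

# Classify a single-walled nanotube from its chiral indices (n, m) using the
# rule above (metallic when n - m is a multiple of 3) and compute its diameter
# from the standard formula d = a * sqrt(n^2 + n*m + m^2) / pi.
import math

A_GRAPHENE = 0.246  # nm, graphene lattice constant

def classify_swcnt(n, m):
    kind = "metallic" if (n - m) % 3 == 0 else "semiconducting"
    diameter = A_GRAPHENE * math.sqrt(n * n + n * m + m * m) / math.pi
    return kind, diameter

for n, m in [(10, 10), (9, 0), (10, 0)]:   # armchair, zigzag, zigzag
    kind, d = classify_swcnt(n, m)
    print(f"({n},{m}): {kind}, diameter ~ {d:.2f} nm")
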

Multi-walled carbon nanotube structure

There are two structural models of multi-walled nanotubes. In the Russian


Doll model, a carbon nanotube contains another nanotube inside it (the inner
nanotube has a smaller diameter than the outer nanotube). In the Parchment
model, a single graphene sheet is rolled around itself multiple times, resembling

a rolled up scroll of paper. Multi-walled carbon nanotubes have similar prop-
erties to single-walled nanotubes, yet the outer walls on multi-walled nanotubes
can protect the inner carbon nanotubes from chemical interactions with out-
side materials. Multi-walled nanotubes also have a higher tensile strength than
single-walled nanotubes.

Strength
Carbon nanotubes have a higher tensile strength than steel and Kevlar.
Their strength comes from the sp² bonds between the individual carbon atoms.
This bond is even stronger than the sp³ bond found in diamond. Under high
pressure, individual nanotubes can bond together, trading some sp² bonds for

sp³ bonds. This gives the possibility of producing long nanotube wires. Carbon
nanotubes are not only strong, they are also elastic. You can press on the tip
of a nanotube and cause it to bend without damaging the nanotube, and
the nanotube will return to its original shape when the force is removed. A
nanotube’s elasticity does have a limit, and under very strong forces, it is pos-
sible to permanently deform the shape of a nanotube. A nanotube’s strength
can be weakened by defects in the structure of the nanotube. Defects occur
from atomic vacancies or a rearrangement of the carbon bonds. Defects in the
structure can cause a small segment of the nanotube to become weaker, which
in turn causes the tensile strength of the entire nanotube to weaken. The tensile
strength of a nanotube depends on the strength of the weakest segment in the
tube similar to the way the strength of a chain depends on the weakest link in
the chain.

Electrical properties
As mentioned previously, the structure of a carbon nanotube determines how
conductive the nanotube is. When the structure of atoms in a carbon nanotube
minimizes the collisions between conduction electrons and atoms, a carbon nan-


otube is highly conductive. The strong bonds between carbon atoms also allow
carbon nanotubes to withstand higher electric currents than copper. Electron
transport occurs only along the axis of the tube. Single walled nanotubes can
route electrical signals at speeds up to 10 GHz when used as interconnects on
semi-conducting devices. Nanotubes also have a constant resistivity.

Thermal Properties

The strength of the atomic bonds in carbon nanotubes allows them to with-
stand high temperatures. Because of this, carbon nanotubes have been shown
to be very good thermal conductors. When compared to copper wires, which
are commonly used as thermal conductors, the carbon nanotubes can transmit
over 15 times the amount of watts per meter per Kelvin. The thermal conduc-
tivity of carbon nanotubes is dependent on the temperature of the tubes and
the outside environment.

4.4 Band Structure
Band theory or band structure describes the quantum-mechanical behavior of
electrons in solids. Inside isolated atoms, electrons possess only certain discrete
energies, which can be depicted in an energy-level diagram as a series of dis-
tinct lines. In a solid, where many atoms sit in close proximity, electrons are
“shared.” The equivalent energy level diagram for the collective arrangement of
atoms in a solid consists not of discrete levels, but of bands of levels representing
nearly a continuum of energy values. In a solid, electrons normally occupy the
lowest lying of the energy levels. In conducting solids the next higher energy
level (above the highest filled level) is close enough in energy that transitions

are allowed, facilitating flow of electrons in the form of a current. In insulating


solids the next higher energy level lies far above the highest filled level (sepa-
rated from it by an energy gap), prohibiting electrical current. Semiconductors
are actually insulators, but their conduction is enough that they are classified
separately. Their gap lies between those of conductors and insulators; the en-
ergy gap is small.

Band structure is one of the most important concepts in solid state physics.
It provides the electronic levels in (ideal) crystal structures, which are charac-
terized by two quantum numbers, the Bloch vector k and the band index n.
Here the Bloch vector is an element of the reciprocal space (in units 1/length)
and the energy of the electron En(k) is a continuous function of k, so that one
obtains a continuous range of energies referred to as the energy band. Many
electrical, optical, and even some magnetic properties of crystals can be ex-
plained in terms of the bandstructure. Of particular importance is the location
of the Fermi energy, until which all levels are occupied at zero temperature. If
the Fermi energy is located in a band gap, the material is insulating (or semi-
conducting) while it is metallic otherwise.
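
The simplest concrete example of a band structure is the one-dimensional tight-binding chain, E(k) = E0 − 2t cos(ka); the sketch below samples it across the Brillouin zone with arbitrary illustrative values of E0 and t.

# Simplest possible band-structure example: a 1D tight-binding chain with one
# orbital per site, E(k) = E0 - 2*t*cos(k*a).  The dispersion forms a band of
# width 4*t; E0 and t below are arbitrary illustrative values.
import math

E0 = 0.0   # on-site energy (eV)
T  = 1.0   # nearest-neighbour hopping (eV)
A  = 1.0   # lattice constant (arbitrary units)

def energy(k):
    return E0 - 2.0 * T * math.cos(k * A)

# Sample the band across the first Brillouin zone, k in [-pi/a, pi/a].
ks = [(-math.pi + i * 2 * math.pi / 8) / A for i in range(9)]
for k in ks:
    print(f"k = {k:+.2f} -> E(k) = {energy(k):+.2f} eV")
print(f"Band width = {4 * T:.1f} eV")
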


4.5 2D Semiconductors
The two-dimensional (2D) semiconductors are non-carbon materials which, sim-
ilarly to graphene, exist as monolayers of unusual properties. In contrast to
graphene, these 2D materials often have a tunable bandgap in the visible – near
IR range, and exhibit rich redox chemistry which can be controlled through
material design and special processing. Many 2D semiconductors have di-
rect bandgaps whereas the corresponding bulk phases show indirect gaps with
smaller energies. Other interesting properties include high carrier mobility and
on/off ratio.

The fact that many of these non-carbon 2D materials are semiconductors


makes them an attractive choice for producing high-performing electronic switches,
photodetectors, photo-transistors and other optoelectronic devices.

2D semiconductors can complement graphene in devices and applications
where an energy bandgap is required.
2D Transition Metal Dichalcogenides (2D-TMDs)
This group of 2D
materials includes MoS2 and WS2, which show great promise for many di-
verse uses in gas sensing, bio-sensors, supercapacitors, lithium-ion batteries,
and sodium-ion batteries. Due to their large surface-to-volume ratio, 2D-TMDs
produce sensors with improved sensitivity, selectivity and low-power consump-
tion. The use of 2D-TMDs in energy storage is determined by their large surface
area, and large van der Waals gaps between neighbouring layers, which are suit-
able for intercalation of lithium, sodium and other ions.

Our 2D-TMD inks and pastes contain MoS2 or WS2 nanoflakes of narrow
particle size distribution, and with controlled structural and electronic proper-
ties. Using our inks, it is possible to deposit novel electronic, optoelectronic and
sensor devices on flexible and heat-sensitive substrates, such as paper, polymers
and textiles. Additionally, our inks can be used as intermediaries for producing
supercapacitor and battery electrodes.
Two-dimensional (2D) semiconductors beyond graphene represent the thinnest
stable known nanomaterials. The rapid growth of their family and applications
during the past decade has brought unprecedented op-
portunities to the advanced nano- and opto-electronic technologies. In this arti-
cle, we review the latest progress in findings on the developed 2D nanomaterials.
Advanced synthesis techniques of these 2D nanomaterials and heterostructures
were summarized and their novel applications were discussed. The fabrication
techniques include the state-of-the-art developments of the vapor-phase-based
deposition methods and novel van der Waals (vdW) exfoliation approaches for
fabrication both amorphous and crystalline 2D nanomaterials with a particular
focus on the chemical vapor deposition (CVD), atomic layer deposition (ALD)


of 2D semiconductors and their heterostructures as well as on vdW exfoliation


of 2D surface oxide films of liquid metals.

4.6 Graphene
Graphene is a one-atom-thick layer of carbon atoms arranged in a hexagonal lat-
tice. It is the building-block of Graphite (which is used, among others things,
in pencil tips), but graphene is a remarkable substance on its own - with a
multitude of astonishing properties which repeatedly earn it the title “wonder
material”.

Graphene’s properties

Graphene is the thinnest material known to man at one atom thick, and
also incredibly strong - about 200 times stronger than steel. On top of that,

graphene is an excellent conductor of heat and electricity and has interesting
light absorption abilities. It is truly a material that could change the world,
with unlimited potential for integration in almost any industry.
Potential applications

Graphene is an extremely versatile material and can be combined with other
elements (including gases and metals) to produce materials with various
superior properties. Researchers all over the world continue to
investigate and patent graphene to establish its properties and possible
applications, which include:

batteries
transistors
computer chips
energy generation
supercapacitors
DNA sequencing
water filters
antennas
touchscreens (for LCD or OLED displays)
solar cells
Spintronics-related products

Producing graphene
Graphene is indeed very exciting, but producing high quality materials is still
a challenge. Dozens of companies around the world are producing different
types and grades of graphene materials - ranging from high quality single-layer
graphene synthesized using a CVD-based process to graphene flakes produced
from graphite in large volumes.

High-end graphene sheets are mostly used in R&D activities or in extreme
applications such as sensors, but graphene flakes, produced in large volumes
and at lower prices, are adopted in many applications such as sports equipment,
consumer electronics, automotive products and more.

4.7 Atomistic simulation


Atomistic simulations, the most widely used methods in the nanomechanics
field, are important numerical tools for investigating the magnetic, electronic,
chemical, and mechanical properties of carbon nanostructures, since these
modeling approaches can accurately trace atomic positions and precisely capture
microscale physical mechanisms such as buckling. A large body of research on
carbon nanostructures has already been carried out using atomistic simulation.
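
To make the idea of tracing atomic positions concrete, the sketch below
integrates the motion of two atoms interacting through a Lennard-Jones potential
with the velocity-Verlet scheme, the basic loop used in molecular dynamics. It is
a toy example in reduced units; real carbon-nanostructure studies use many-body
potentials such as REBO/AIREBO, which are not reproduced here.

    import numpy as np

    # Toy molecular-dynamics loop in reduced Lennard-Jones units (epsilon = sigma = m = 1).
    def lj_force(r_vec):
        """Lennard-Jones force on atom 0 due to atom 1."""
        r = np.linalg.norm(r_vec)
        return 24.0 * (2.0 / r**13 - 1.0 / r**7) * (r_vec / r)

    dt = 1.0e-3
    pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])   # two atoms on the x axis
    vel = np.zeros_like(pos)
    f = lj_force(pos[0] - pos[1])
    forces = np.array([f, -f])

    for step in range(1000):                  # velocity-Verlet integration
        pos += vel * dt + 0.5 * forces * dt**2
        f_new = lj_force(pos[0] - pos[1])
        new_forces = np.array([f_new, -f_new])
        vel += 0.5 * (forces + new_forces) * dt
        forces = new_forces

    print("Separation after 1000 steps:", np.linalg.norm(pos[0] - pos[1]))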

5 Unit - V Logic Devices and Applications


Analog Devices’ high speed logic devices allow for logic functions, signal
routing, and signal integrity implementations with discrete IC products. These
logic devices address applications requiring tight specifications such as a high
data rate, low jitter, and improved signal integrity, while providing
indispensable functions such as logic gates, flip-flops, fanout buffers, and
NRZ-to-RZ converters. Such high speed logic devices are well suited to
applications including RF ATE, broadband test, digital logic systems and
measurement, and high speed data transmission.

Logic Devices

Logic devices can be broadly categorized as:

Fixed Logic Devices (FLDs)

Programmable Logic Devices (PLDs)

1) Fixed Logic Devices (FLDs) M


As the name indicates, the circuits in an FLD are permanent; they perform
one function or set of functions. Once manufactured, they cannot be changed.
2) Programmable Logic Devices (PLDs)

A PLD is an IC that contains a large number of gates, flip-flops, and registers
that are interconnected on the chip. PLDs can be reconfigured to implement
different logic functions as required.

Types of PLDs are:

(a) Simple PLDs (SPLDs)

PROM (Programmable Read-Only Memory)


PLA (Programmable Logic Array)
PAL (Programmable Array Logic)

(b) Complex PLDs (CPLDs)

(c) Field Programmable Gate Arrays (FPGAs)


Programmable Read Only Memory (PROM)
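
A PROM realizes combinational logic as a lookup table: the inputs drive the
address lines and the word programmed at each address holds the corresponding
output values. The Python sketch below illustrates this behaviour for a
hypothetical 3-input, 2-output function (a 1-bit full adder, chosen here only as
an example); it models the idea, not any particular PROM device.

    # Behavioural sketch of PROM-based logic: inputs form the address,
    # the programmed word at that address supplies the outputs.
    def program_prom():
        """Program the lookup table: address (a, b, cin) -> word (sum, carry)."""
        prom = {}
        for addr in range(8):                          # 3 address lines -> 8 words
            a, b, cin = (addr >> 2) & 1, (addr >> 1) & 1, addr & 1
            s = a ^ b ^ cin
            carry = (a & b) | (a & cin) | (b & cin)
            prom[addr] = (s, carry)
        return prom

    prom = program_prom()

    def read(a, b, cin):
        """Once programmed, the stored contents fix the logic function."""
        return prom[(a << 2) | (b << 1) | cin]

    print(read(1, 1, 0))   # (0, 1): sum = 0, carry = 1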


5.1 Silicon MOSFET


The transistor was first made at Bell Labs. New materials must be introduced
in the implementation of new CMOS generations.

Al has been replaced by Cu. Cu interconnects are now embedded in low
permittivity materials (low-K), like porous oxides.
Various silicides have been introduced as source, drain and gate contacts to
lower the device resistance; TiSi2 has been replaced by CoSi2, which maintains
a lower resistance.
High-K materials will replace the SiO2 gate insulator, and metal gates will be
used instead of poly-Si to address the tunneling and gate leakage problems.
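
The benefit of a high-K gate dielectric can be quantified through the equivalent
oxide thickness, EOT = t_high-K x (k_SiO2 / k_high-K): the same gate capacitance
is obtained with a physically thicker film, which suppresses direct tunneling.
The permittivity values in the sketch below (3.9 for SiO2, roughly 22 for HfO2)
are typical textbook figures used only for illustration.

    # Illustrative equivalent-oxide-thickness (EOT) calculation for a high-K gate stack.
    K_SIO2 = 3.9

    def eot_nm(t_high_k_nm: float, k_high_k: float) -> float:
        """EOT (nm) of a high-K layer of given physical thickness and permittivity."""
        return t_high_k_nm * K_SIO2 / k_high_k

    # A 4 nm HfO2 film (k ~ 22, assumed) is electrically equivalent to ~0.7 nm of SiO2
    # while remaining thick enough to keep gate tunneling leakage low.
    print(f"EOT of 4 nm HfO2: {eot_nm(4.0, 22.0):.2f} nm")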

Fundamentals of MOSFET devices


MOS capacitor
SC

The figure shows the structure of a MOS capacitor (on a p-type substrate),
together with the corresponding band diagram. Silicon dioxide has a bandgap of
about 9 eV, which results in large band offsets relative to silicon.
VG < 0: the Fermi level of the metal rises and an electric field is created in
the SiO2 (seen as the slope of the SiO2 conduction band). Because of the low
carrier concentration, the Si bands bend upward at the SiO2 interface, leading
to an accumulation of excess holes. To conserve charge, an equivalent number of
electrons accumulates on the metal side.
VG > 0: the Fermi level of the metal moves down and the silicon bands bend
downward, so the hole concentration near the interface decreases. This is called
the depletion condition. An equivalent amount of positive charge QM is induced
at the metal-oxide interface to balance the negative charge Qs in the
semiconductor: Qs = -QM, with Qs = Qd (the depletion charge).
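
The depletion charge Qd introduced above can be estimated from the standard
long-channel expressions: phi_F = (kT/q) ln(N_A/n_i), W_dm = sqrt(4 eps_Si phi_F / (q N_A))
and Qd = q N_A W_dm. The sketch below evaluates these textbook formulas at room
temperature for an assumed acceptor doping of 1e17 cm^-3 and a 2 nm oxide; the
numbers are illustrative and not taken from this text.

    import math

    # Textbook MOS electrostatics sketch (p-type substrate, ~300 K).
    q      = 1.602e-19            # C
    kT     = 0.0259 * q           # J
    eps0   = 8.854e-12            # F/m
    eps_si = 11.7 * eps0
    eps_ox = 3.9 * eps0
    n_i    = 1.0e16               # m^-3  (~1e10 cm^-3 for Si)

    N_A  = 1.0e23                 # m^-3  (1e17 cm^-3, assumed doping)
    t_ox = 2.0e-9                 # m     (assumed oxide thickness)

    phi_F = (kT / q) * math.log(N_A / n_i)               # bulk Fermi potential, V
    W_dm  = math.sqrt(4.0 * eps_si * phi_F / (q * N_A))  # maximum depletion width, m
    Q_d   = q * N_A * W_dm                               # depletion charge, C/m^2
    C_ox  = eps_ox / t_ox                                # oxide capacitance, F/m^2

    print(f"phi_F = {phi_F:.3f} V, W_dm = {W_dm * 1e9:.1f} nm")
    print(f"Qd = {Q_d * 1e2:.3f} uC/cm^2, Qd/Cox = {Q_d / C_ox:.3f} V")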


5.2 Ferroelectric Field Effect Transistor


Ferroelectric materials with spontaneous polarizations have piezoelectric,
pyroelectric, and electro-optic properties and are widely used in nonvolatile
memories, sensors, and electro-optic modulators that rely on heterogeneous
integration of field-effect transistors (MOSFETs) [2-5].

In addition, to change the electric field direction inside the semiconductor,
transitions from the on-state to the off-state (or from the off-state to the
on-state) are necessary, since these transitions change the direction of the
band bending. This change in the direction of the applied electric field is the
necessary condition for ferroelectric polarization switching in a ferroelectric
semiconductor.
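
This switching behaviour is often summarized by a polarization-field hysteresis
loop, which can be described empirically with tanh-shaped branches centred on
the coercive field +/-Ec (a Miller-type model). The sketch below traces such a
loop; the saturation polarization, coercive field and transition width are
assumed round numbers, not parameters of any specific ferroelectric FET.

    import numpy as np

    # Empirical (Miller-type) P-E hysteresis sketch with assumed parameters.
    Ps, Ec, delta = 0.30, 1.0, 0.25   # C/m^2, MV/cm, MV/cm (illustrative)

    def polarization(E, direction):
        """Polarization on the branch swept with increasing (+1) or decreasing (-1) field."""
        return Ps * np.tanh((E - direction * Ec) / (2.0 * delta))

    fields = np.linspace(-3.0, 3.0, 7)                    # MV/cm
    for E, P in zip(fields, polarization(fields, +1)):    # increasing-field branch
        print(f"up   E = {E:+.1f} MV/cm  P = {P:+.3f} C/m^2")
    for E, P in zip(fields[::-1], polarization(fields[::-1], -1)):  # decreasing branch
        print(f"down E = {E:+.1f} MV/cm  P = {P:+.3f} C/m^2")

The two branches coincide only at large field and differ near zero field; that
difference is the remanent polarization that nonvolatile FeFET memories exploit.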

5.3 NEMS

Nanoelectromechanical systems (NEMSs) are devices that integrate electrical
and mechanical functions at the nanoscale. They consist of miniaturized elec-
trical and mechanical apparatuses such as actuators, beams, sensors, pumps,
resonators, and motors. These components convert one form of energy into an-
other, which can be quickly and conveniently measured. These devices can func-
tion as biosensors to monitor important physiological variables during surgical
procedures, such as intracranial pressure, cerebrospinal fluid (CSF) pulsatility,
weight load, and strain.

NEMSs provide three main advantages as mechanical biosensors in surgery.


First, they can achieve mass resolution at the nanogram scale when operating
in a fluid environment, as the minimum detectable mass added is proportional

to the total mass of the device. Second, the ability of an NEMS device to be
displaced or deformed—known as mechanical compliance—increases with uni-
form reduction of its dimensions. This high degree of mechanical compliance
allows an applied force to be translated into a measurable displacement, such
that even the minuscule forces governing cellular and subcellular interactions
can be quantified. For example, NEMS sensors can resolve forces as small as 10
pN, making them sensitive enough to detect the breaking of hydrogen bonds.
Third, small fluidic mechanical devices can exhibit fast response times, which
would facilitate real-time monitoring of biological processes.
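
The first of these advantages follows from the basic resonator relation: adding
a small mass dm shifts the resonant frequency by df/f0 of about -dm/(2 m_eff),
so the smallest resolvable mass is roughly dm_min = 2 m_eff (df_min/f0). The
sketch below plugs in assumed order-of-magnitude numbers purely to show how the
detectable mass shrinks with the device mass.

    # Resonant mass-sensing scaling sketch: dm_min ~ 2 * m_eff * (df_min / f0).
    def min_detectable_mass(m_eff_kg: float, frac_freq_resolution: float) -> float:
        """Smallest added mass (kg) resolvable by a resonator of effective mass m_eff."""
        return 2.0 * m_eff_kg * frac_freq_resolution

    # Assume a fractional frequency resolution of 1 ppm (illustrative value).
    for m_eff in (1.0e-12, 1.0e-15, 1.0e-18):   # kg: from micro- down to nano-scale devices
        dm = min_detectable_mass(m_eff, 1.0e-6)
        print(f"m_eff = {m_eff:.0e} kg  ->  minimum detectable mass ~ {dm:.1e} kg")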

An implantable bioresorbable nanoporous silicon device, with dimensions of


1 mm×2 mm×0.08 mm, was found to provide a reliable assessment of intracra-
nial pressure in rats. The resistance of the sensing element increased monotoni-
cally in a linear manner across the full range of pressures (0–70 mmHg) that are
relevant to intracranial monitoring. The device was also amenable to wireless
transmission of information, and could therefore be used to monitor neurophys-
iological variables even after surgery. In vivo tests using the silicon device to
measure intracranial pressure compared favorably with current methods that


rely on wired sensors and are thus not suitable for implantation and postsur-
gical monitoring [45]. Furthermore, the device was seen to dissolve over time

when exposed to biofluids, such as CSF. As only biocompatible end products
were eventually formed, subsequent invasive procedures to remove implanted
NEMS devices could be rendered unnecessary in future clinical settings.
The implantable nature of these devices has significant implications for the
postsurgical follow-up of brain tumor patients. For example, implanted sensors
embedded in the resection cavity could facilitate a prompter detection of tumor
recurrence, compared to the current strategy that depends on interval MRI.
Sensor arrays could thus be designed to register changes in tissue impedance,
hypoxia, pH, or temperature to identify the hallmark signs of tumor progres-
sion. This early warning system would allow proactive rather than reactive
initiation of secondary therapies. Furthermore the integration of miniaturized

sensor arrays with an NEMS component to destroy adjacent tissue could enable
the immediate in situ ablation of recurring tumors. The administration of local
therapies through this neurally embedded system (e.g., hyperthermia induced
by passing a current between two electrodes, ultrasound or UV light, or release
of an aliquot of chemotherapy) could minimize the side effects of systemically
administered therapies. Of course, the introduction of foreign bodies such as
NEMS devices into the brain is inevitably associated with a certain degree of
parenchymal damage and local neuronal death, along with risks of bleeding, in-
fection, and seizures. Foreign bodies can also cause the activation of microglia
and astrocytes and reactive gliosis, which in turn can hinder the function of im-
planted NEMS devices. Future work will thus need to look into ways to improve
the biocompatibility and safety of implantable devices.

Apart from their potential use in neuromonitoring, NEMS technology is also


integral to the development of nanotools that could be used to perform intri-
cate nanosurgeries within the CNS. For instance, a recently developed nanoknife
has been successfully used to cut individual axons of peripheral nerves in an in
vivo mouse model. Such technology could eventually permit a more precise


mechanical disconnection of individual white matter bundles during neurosurgical
resection, and may lead to a significant improvement in surgical outcome
for brain tumor patients. In fact, avenues for intervention at the cellular and
subcellular level during neurosurgery could be made possible with NEMS tech-
nology. For instance, nanowires could be integrated with cellular components to
create a direct bridge between the cell and the external environment within the
control of neurosurgeons, in order to facilitate the delivery of biological com-
pounds. The safety and efficacy of nanowire technology was shown in a study
in which atomic force microscopy tips were repurposed for the delivery of fluo-
rescent nanoparticles [46]. The tip diameter used was less than 10 nm, as tip
lengths substantially smaller than the cell can mitigate physical damage to the
cells. With further development, NEMS technology could thus provide neuro-
surgeons with an unprecedented level of control over the cellular environment
within the brains of cancer patients.

5.4 MEMS
Micro-Electro-Mechanical Systems, or MEMS, is a technology that in its most
general form can be defined as miniaturized mechanical and electro-mechanical
elements (i.e., devices and structures) that are made using the techniques of mi-
crofabrication. The critical physical dimensions of MEMS devices can vary from
well below one micron on the lower end of the dimensional spectrum, all the
way to several millimeters. Likewise, the types of MEMS devices can vary from
relatively simple structures having no moving elements, to extremely complex
electromechanical systems with multiple moving elements under the control of
integrated microelectronics. The one main criterion of MEMS is that there are
at least some elements having some sort of mechanical functionality whether or

not these elements can move. The term used to define MEMS varies in different
parts of the world. In the United States they are predominantly called MEMS,
while in some other parts of the world they are called “Microsystems Technol-
ogy” or “micromachined devices”.

While the functional elements of MEMS are miniaturized structures, sensors,
actuators, and microelectronics, the most notable (and perhaps most in-
teresting) elements are the microsensors and microactuators. Microsensors and
microactuators are appropriately categorized as “transducers”, which are de-
fined as devices that convert energy from one form to another. In the case of
microsensors, the device typically converts a measured mechanical signal into
an electrical signal.
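
As a concrete example of such a transducer, the sketch below models a capacitive
accelerometer: an acceleration deflects the proof mass by x = m*a/k, and the
parallel-plate sense capacitance C = eps0*A/(g - x) changes accordingly. The
proof mass, spring constant, plate area and gap are assumed round numbers chosen
only to illustrate the conversion from a mechanical to an electrical signal.

    # Capacitive accelerometer sketch: acceleration -> deflection -> capacitance change.
    EPS0 = 8.854e-12      # F/m

    m = 1.0e-9            # kg   (proof mass, assumed)
    k = 1.0               # N/m  (suspension stiffness, assumed)
    A = 1.0e-6            # m^2  (sense-plate area, assumed)
    g = 2.0e-6            # m    (nominal gap, assumed)

    def sense_capacitance(accel_m_s2: float) -> float:
        """Capacitance (F) of the sense gap under a given static acceleration."""
        x = m * accel_m_s2 / k            # deflection of the proof mass
        return EPS0 * A / (g - x)

    c0 = sense_capacitance(0.0)
    c1 = sense_capacitance(9.81)          # 1 g of acceleration
    print(f"C(0 g) = {c0 * 1e12:.3f} pF, C(1 g) = {c1 * 1e12:.3f} pF, "
          f"change = {(c1 - c0) * 1e15:.1f} fF")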

Over the past several decades MEMS researchers and developers have demon-
strated an extremely large number of microsensors for almost every possible
sensing modality including temperature, pressure, inertial forces, chemical species,
magnetic fields, radiation, etc. Remarkably, many of these micromachined
sensors have demonstrated performances exceeding those of their macroscale

counterparts. That is, the micromachined version of, for example, a pressure
transducer, usually outperforms a pressure sensor made using the most pre-
cise macroscale level machining techniques. Not only is the performance of
MEMS devices exceptional, but their method of production leverages the same
batch fabrication techniques used in the integrated circuit industry – which can
translate into low per-device production costs, as well as many other benefits.
Consequently, it is possible to not only achieve stellar device performance, but

to do so at a relatively low cost level. Not surprisingly, silicon based discrete


microsensors were quickly commercially exploited and the markets for these de-
vices continue to grow at a rapid rate.

More recently, the MEMS research and development community has demon-
strated a number of microactuators including: microvalves for control of gas and
liquid flows; optical switches and mirrors to redirect or modulate light beams;
independently controlled micromirror arrays for displays, microresonators for
a number of different applications, micropumps to develop positive fluid pres-
sures, microflaps to modulate airstreams on airfoils, as well as many others.
Surprisingly, even though these microactuators are extremely small, they fre-
quently can cause effects at the macroscale level; that is, these tiny actuators
can perform mechanical feats far larger than their size would imply. For exam-
ple, researchers have placed small microactuators on the leading edge of airfoils
of an aircraft and have been able to steer the aircraft using only these micro-
miniaturized devices.

The real potential of MEMS starts to become fulfilled when these miniatur-
ized sensors, actuators, and structures can all be merged onto a common sili-
con substrate along with integrated circuits (i.e., microelectronics). While the
electronics are fabricated using integrated circuit (IC) process sequences (e.g.,
CMOS, Bipolar, or BICMOS processes), the micromechanical components are
fabricated using compatible “micromachining” processes that selectively etch
away parts of the silicon wafer or add new structural layers to form the mechan-
ical and electromechanical devices. It is even more interesting if MEMS can
be merged not only with microelectronics, but with other technologies such as

photonics, nanotechnology, etc. This is sometimes called “heterogeneous
integration.” Clearly, these technologies are filled with numerous commercial market
opportunities.

While more complex levels of integration are the future trend of MEMS
technology, the present state-of-the-art is more modest and usually involves a
single discrete microsensor, a single discrete microactuator, a single microsensor
integrated with electronics, a multiplicity of essentially identical microsensors
integrated with electronics, a single microactuator integrated with electronics,
or a multiplicity of essentially identical microactuators integrated with elec-
tronics. Nevertheless, as MEMS fabrication methods advance, the promise is
an enormous design freedom wherein any type of microsensor and any type of
microactuator can be merged with microelectronics as well as photonics, nan-
otechnology, etc., onto a single substrate.

This vision of MEMS, whereby microsensors, microactuators, microelectronics
and other technologies can be integrated onto a single microchip, is
expected to be one of the most important technological breakthroughs of the
future. This will enable the development of smart products by augmenting
the computational ability of microelectronics with the perception and control
capabilities of microsensors and microactuators. Microelectronic integrated cir-
cuits can be thought of as the “brains” of a system, and MEMS augments this
decision-making capability with “eyes” and “arms”, to allow microsystems to
sense and control the environment. Sensors gather information from the envi-
ronment through measuring mechanical, thermal, biological, chemical, optical,
and magnetic phenomena. The electronics then process the information de-
rived from the sensors and through some decision making capability direct the
actuators to respond by moving, positioning, regulating, pumping, and filter-

ing, thereby controlling the environment for some desired outcome or purpose.
Furthermore, because MEMS devices are manufactured using batch fabrication
techniques, similar to ICs, unprecedented levels of functionality, reliability, and
sophistication can be placed on a small silicon chip at a relatively low cost.
MEMS technology is extremely diverse and fertile, both in its expected
application areas and in how the devices are designed and manufactured.
Already, MEMS is revolutionizing many product categories by enabling com-
plete systems-on-a-chip to be realized.

Nanotechnology is the ability to manipulate matter at the atomic or molecular
level to make something useful at the nano-dimensional scale. Basically,
there are two approaches in implementation: the top-down and the bottom-
up. In the top-down approach, devices and structures are made using many of
the same techniques as used in MEMS except they are made smaller in size,
usually by employing more advanced photolithography and etching methods.
The bottom-up approach typically involves deposition, growing, or self-assembly
technologies. The advantages of nano-dimensional devices over MEMS involve
benefits mostly derived from the scaling laws, which can also present some
challenges as well.
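
The scaling laws referred to here describe how quantities change as every linear
dimension L shrinks: area scales as L^2, volume and mass as L^3, so the
surface-to-volume ratio grows as 1/L. The short sketch below tabulates these
generic proportionalities for an assumed shrink factor; the exponents are the
standard isometric-scaling ones, not values specific to any device.

    # Generic isometric scaling sketch: each linear dimension is reduced by a factor s.
    def scaled(exponent: int, s: float) -> float:
        """Relative value of a quantity scaling as L**exponent when L -> L/s."""
        return (1.0 / s) ** exponent

    s = 1000.0   # e.g. from the millimetre scale down to the micrometre scale (assumed)
    print("surface area   (L^2):", scaled(2, s))    # shrinks by s^2
    print("volume / mass  (L^3):", scaled(3, s))    # shrinks by s^3
    print("surface/volume (1/L):", scaled(-1, s))   # grows by s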
Some experts believe that nanotechnology promises to: (a) allow us to put
essentially every atom or molecule in the place and position desired, that is,
exact positional control for assembly; (b) allow us to make almost any structure
or material consistent with the laws of physics that can be specified at the
atomic or molecular level; and (c) allow us to have manufacturing costs not
greatly exceeding the cost of the required raw materials and energy used in
fabrication (i.e., massive parallelism).

Although MEMS and Nanotechnology are sometimes cited as separate and


distinct technologies, in reality the distinction between the two is not so clear-
cut. In fact, these two technologies are highly dependent on one another. The
well-known scanning tunneling microscope (STM), which is used to detect
individual atoms and molecules on the nanometer scale is a MEMS device.
Similarly the atomic force microscope (AFM) which is used to manipulate the
placement and position of individual atoms and molecules on the surface of a

substrate is a MEMS device as well. In fact, a variety of MEMS technologies
are required in order to interface with the nano-scale domain.
Likewise, many MEMS technologies are becoming dependent on nanotech-
nologies for successful new products. For example, the crash airbag accelerom-
eters that are manufactured using MEMS technology can have their long-term
reliability degraded due to dynamic in-use stiction effects between the proof
mass and the substrate. A nanotechnology known as Self-Assembled Monolayer
(SAM) coatings is now routinely used to treat the surfaces of the moving
MEMS elements so as to prevent stiction effects from occurring over the
product’s life.

Many experts have concluded that MEMS and nanotechnology are two differ-
ent labels for what is essentially a technology encompassing highly miniaturized
things that cannot be seen with the human eye. Note that a similar broad
definition exists in the integrated circuits domain which is frequently referred
to as microelectronics technology even though state-of-the-art IC technologies
typically have devices with dimensions of tens of nanometers. Whether or not
MEMS and nanotechnology are one and the same, it is unquestioned that there
are overwhelming mutual dependencies between these two technologies that will
only increase in time. Perhaps what is most important are the common bene-
fits afforded by these technologies, including: increased information capabilities;
miniaturization of systems; new materials resulting from new science at minia-
ture dimensional scales; and increased functionality and autonomy for systems.
