RM Nanoelectronics
Mrs. Uma Balaji,
Assistant Professor/ECE,
SCSVMV
February 2021
We are talking about a "nano tidal wave": hardly a day passes without the press reporting on major innovations in this area. Large industrialized countries spend considerable amounts of money, around USD 10 billion per year, on this field of study, which is expected to have a positive effect on the economy and on employment. Microelectronics and the steady miniaturization of components have become commonplace. Moore's Law (a doubling of the number of transistors on the same surface every 18 months) illustrates this idea. It also brings to mind the production of chips in laboratories.
With their engineers and technicians in uniform, these laboratories can be considered the technological cathedrals of our times. Microcomputers, microprocessors, mobile phones and MP3 players with a USB connection are available to the general public. For several decades now, this technology has been largely submicron, and the idea of nanoelectronics was born in the laboratories. The current technological limits will soon be reached, even if ongoing innovations push them somewhat further. Emerging technologies such as carbon nanotubes will then take over.
The nanoworld is the intermediary between the atom and the solid, ranging from the large molecule to the small solid object, and it is characterized by a strong ratio of surface to volume. Strictly speaking, the nanoworld has existed for a long time, and it has long been the task of chemists to study the structures and properties of molecules. They have learnt (with the help of physicists) to manipulate them and to build more and more complex structures. Progress in observation tools (electron microscopes, scanning-tunneling microscopes and atomic force microscopes) as well as in analysis tools (particularly X-ray, neutron and mass spectrometry) has been a decisive factor. The production of nanoscopic materials is constantly improving, as is the case for the catalytic processes and surfaces used in the nanoworld.
A substantial number of new materials with nano elements such as ceramics,
glass, polymers and fibers are making their way onto the market and are present
in all shapes and forms in everyday life, from washing machines to architecture.
In 1959, the physicist Richard Feynman, Nobel Prize winner for Physics in 1965,
came up with the brilliant concept of the nano when he said “there is plenty of
room at the bottom” during a conference of the American Physical Society.
Biology has been molecular for a long time. The areas of DNA, proteins, and
information technology and telecommunications, just as biochips are for electronics and biology. Imaging at the molecular level has revolutionized the techniques of medical examination. The borders between chemistry, physics, mechanics and biology are disappearing with the emergence of new materials such as intelligent systems, nanomachines, etc.
This is where the nano tidal wave, which will have a considerable impact on society, can be found. A comprehensive public debate is required on the real or possible risks and their consequences. Will humanity be able to master these new applications, or are we taking on an unfamiliar role?
Technological expertise
Progress in metallurgy and in chemistry has allowed scientists to process silicon. Physicists, in particular, have highlighted its semiconductor properties. The understanding of these properties allowed the invention and the production of the transistor. A long succession of discoveries and innovations has meant that integrated circuits are now present in everyday objects. If an object can be understood in detail at the microscopic level, we can use that knowledge to apply it at the macroscopic level.
Furthermore, the concept of nano is becoming fashionable: it combines what we already know with new concepts, and it conveys the idea of modern technology (e.g. carbon nanotubes used in top-of-the-range tennis rackets, bicycle frames, or golf clubs).
These approaches come together in the nanometric domain, bringing the creations of human genius closer together with objects found in the biological world. The complexity of objects in the biological world is strictly organized, and at the same time they are self-organizing. The processes of supramolecular chemistry and of the chemistry of self-assembling materials function in the same fashion.
1.1.4 Nanoworld
Nanoscience is the study of phenomena and the manipulation of materials at atomic, molecular and macromolecular scales, where properties differ significantly from those at larger scales.
• Nanotechnology is the branch of science and engineering which deals with the creation of materials, devices and systems through the manipulation of individual atoms and molecules.
• Nanotechnologies are the design, characterisation, production and application of structures, devices and systems by controlling shape and size at the nanometre scale.
• The goal of nanotechnology is to control individual atoms and molecules to create computer chips and other devices that are thousands of times smaller than current technologies permit.
Figure 4: Nanoworld
• The prefix "nano" is derived from the Greek word for "dwarf".
• One nanometer is equal to one billionth of a meter (10^-9 m).
• Nanotechnology is the understanding and control of matter at dimensions of roughly 1 to 100 nanometers, where unique phenomena enable novel applications.
Figure 7: Nanometer scale
• The first concept was presented in 1959 by the famous professor of physics Dr. Richard P. Feynman.
• The term "nano-technology" was coined by Norio Taniguchi in 1974.
• In 1959 Feynman challenged physicists "to make the electron microscope 100 times better". This was achieved about 22 years later.
• Not only seeing atoms but also manipulating them became a reality in 1981, when Gerd Binnig and Heinrich Rohrer of the IBM Zurich Research Laboratory invented the Scanning Tunneling Microscope (STM), for which they were awarded the Nobel Prize in 1986.
• In 1985 Binnig, along with Gerber and Quate, invented the Atomic Force Microscope (AFM), which does not require the specimen to be conducting.
1.1.5 Benefits of Nanotechnology
"The power of nanotechnology is rooted in its potential to transform and revolutionize multiple technology and industry sectors, including aerospace, agriculture, biotechnology, homeland security and national defense, energy, environmental improvement, information technology, medicine, and transportation. Discovery in some of these areas has advanced to the point where it is now possible to identify applications that will impact the world we live in."
1.1.6 Nanomaterials
• A nanomaterial is defined as any material that has unique or novel properties due to nanoscale (nanometre-scale) structuring.
• Nanomaterials are formed by the incorporation or structuring of nanoparticles.
• They are subdivided into nanocrystals, nanopowders, and nanotubes (a nanotube being a nanoscale sequence of carbon atoms, related to C60, arranged in a long, thin cylindrical structure).
• Nanomaterial properties can be 'tuned' by varying the size of the particle (e.g. changing the fluorescence colour so a particle can be identified).
Examples of Nanomaterials
• Amorphous silica fume (nano-silica) in Ultra High Performance Concrete; this silica is normally thought to have the same human risk factors as non-nano, non-toxic silica dust.
• Nano platinum or palladium in vehicle catalytic converters; the higher surface-area-to-volume ratio of the particles gives increased reactivity and therefore increased efficiency.
• Crystalline silica fume is used as an additive in paints or coatings, giving e.g.
1.1.7 Classification
Classification is based on the number of dimensions that are not confined to the nanoscale range (<100 nm):
1) Zero-dimensional (0-D)
2) One-dimensional (1-D)
3) Two-dimensional (2-D)
4) Three-dimensional (3-D)
In zero-dimensional (0-D) nanomaterials, all dimensions are confined to the nanoscale; typical examples are nanoparticles and quantum dots. They may be:
• Amorphous or crystalline
• Single-crystalline or polycrystalline
• Chemically pure or impure
• Metallic, ceramic or polymeric.
In two-dimensional (2-D) nanomaterials, one dimension lies in the nanometer range while the other two dimensions are not confined to the nanoscale.
• 2-D nanomaterials exhibit plate-like shapes.
• Two-dimensional nanomaterials include nanofilms, nanolayers and nanocoatings.
Three-dimensional (3-D) materials are not confined to the nanoscale in any dimension; they are characterized by having all three dimensions above 100 nm.
• Such materials may nevertheless possess a nanocrystalline structure or involve the presence of features at the nanoscale.
The Schrodinger equation is different in a few ways from the other wave equations we've seen in this book. But these differences won't keep us from applying all of our usual strategies for solving a wave equation and dealing with the resulting solutions.
Even though there are many things that are highly confusing about quantum mechanics, the nice thing is that it's relatively easy to apply quantum mechanics to a physical system to figure out how it behaves. There is fortunately no need to understand all of the subtleties of quantum mechanics in order to use it. Of course, in most cases this isn't the best strategy to take; it's usually not a good idea to blindly forge ahead with something if you don't understand what you're actually working with. But this lack of understanding can be forgiven in the case of quantum mechanics, because no one really understands it. (Well, maybe a couple of people do, but they're few and far between.) If the world had waited to use quantum mechanics until it understood it, we'd be stuck back in the 1920s. The bottom line is that quantum mechanics can be used to make predictions that are consistent with experiment. It hasn't failed us yet. So it would be foolish not to use it.
Before discussing the Schrodinger wave equation, let's take a brief (and by no means comprehensive) look at the historical timeline of how quantum mechanics came about. The actual history is of course never as clean as an outline like this suggests, but we can at least get a general idea of how things proceeded.
1900 (Planck): Max Planck proposed that light with frequency ν is emitted in quantized lumps of energy that come in integral multiples of the quantity
E = hν = ħω,
where ħ ≡ h/2π and ω = 2πν is the angular frequency.
The frequency of light is generally very large (on the order of 10^15 s^-1 for the visible spectrum), but the smallness of h wins out, so the ħω unit of energy is very small (at least on an everyday energy scale). The energy is therefore essentially continuous for most purposes. However, a puzzle in late 19th-century physics was the blackbody radiation problem. In a nutshell, the issue was that the classical (continuous) theory of light predicted that certain objects would radiate an infinite amount of energy, which of course can't be correct. Planck's hypothesis of quantized radiation not only got rid of the problem of the infinity, but also correctly predicted the shape of the power curve as a function of temperature. Planck's hypothesis simply adds the information of how many lumps of energy a wave contains, although strictly speaking, Planck initially thought that the quantization was only a function of the emission process and not inherent to the light itself.
1905 (Einstein): Albert Einstein stated that the quantization was in fact inherent to the light, and that the lumps can be interpreted as particles, which we now call "photons." This proposal was a result of his work on the photoelectric effect, which deals with the absorption of light and the emission of electrons from a material. We know from Chapter 8 that E = pc for a light wave. (This relation also follows from Einstein's 1905 work on relativity, where he showed that E = pc for any massless particle, an example of which is a photon.) And we also know that ω = ck for a light wave. So Planck's E = ħω relation becomes pc = ħω = ħ(ck), which gives p = ħk. This result relates the momentum of a photon to the wavenumber of the wave it is associated with.
1913 (Bohr): Niels Bohr stated that electrons in atoms have wavelike properties. This correctly explained a few things about hydrogen, in particular the quantized energy levels that were known.
1924 (de Broglie): Louis de Broglie proposed that all particles are associated with waves, where the frequency and wavenumber of the wave are given by the same relations we found above for photons, namely E = ħω and p = ħk. The larger E and p are, the larger ω and k are. Even for the small E and p typical of a photon, ω and k are very large because ħ is so small. So any everyday-sized particle with (comparatively) large energy and momentum values will have extremely large ω and k values. This (among other reasons) makes it virtually impossible to observe the wave nature of macroscopic amounts of matter.
This proposal (that E = ħω and p = ħk also hold for massive particles) was a big step, because many things that are true for photons are not true for massive (and nonrelativistic) particles. For example, E = pc (and hence ω = ck) holds only for massless particles (we'll see below how ω and k are related for massive particles). But the proposal was a reasonable one to try. And it turned out to be correct, in view of the fact that the resulting predictions agree with
experiments.
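The following short Python sketch makes the de Broglie argument concrete; the electron speed and the baseball mass and speed are illustrative values of our own choosing, not numbers from the text:

    # Comparing de Broglie wavelengths (lambda = h/p) to show why wave
    # behaviour is invisible for macroscopic objects.
    h = 6.626e-34                      # Planck's constant, J*s

    # Electron accelerated through ~100 V (v ~ 5.9e6 m/s)
    m_e, v_e = 9.11e-31, 5.9e6
    lam_e = h / (m_e * v_e)            # ~1.2e-10 m, comparable to atomic spacing

    # Baseball
    m_b, v_b = 0.145, 40.0
    lam_b = h / (m_b * v_b)            # ~1e-34 m, hopelessly unobservable

    print(f"electron: {lam_e:.2e} m, baseball: {lam_b:.2e} m")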
The fact that any particle has a wave associated with it leads to the so-called wave-particle duality. Are things particles, or waves, or both? Well, it depends on what you're doing with them. Sometimes things behave like waves, sometimes they behave like particles. A vaguely true statement is that things behave like waves until a measurement takes place, at which point they behave like particles. However, approximately one million things are left unaddressed in that sentence. The wave-particle duality is one of the things that few people, if any, understand about quantum mechanics.
1925 (Heisenberg): Werner Heisenberg formulated a version of quantum mechanics that made use of matrix mechanics. We won't deal with this matrix formulation (it's rather difficult), but instead with the following wave formulation due to Schrodinger (this is a waves book, after all).
1926 (Schrodinger): Erwin Schrodinger formulated a version of quantum mechanics that was based on waves. He wrote down a wave equation (the so-called Schrodinger equation) that governs how the waves evolve in space and time. We'll deal with this equation in depth below. Even though the equation is correct, the correct interpretation of what the wave actually meant was still missing. Initially Schrodinger thought (incorrectly) that the wave represented the charge density.
1926 (Born): Max Born correctly interpreted Schrodinger's wave as a probability amplitude. By "amplitude" we mean that the wave must be squared to obtain the desired probability. More precisely, since the wave (as we'll see) is in general complex, we need to square its absolute value. This yields the probability of finding a particle at a given location (assuming that the wave is written as a function of x).
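A minimal Python illustration of Born's rule follows; the three complex amplitudes are invented purely for the example:

    # Born's rule: probabilities come from the squared absolute value of
    # a complex amplitude.
    import numpy as np

    psi = np.array([1 + 1j, 2 - 1j, 0.5j])   # unnormalized amplitudes at 3 positions
    probs = np.abs(psi)**2                    # |psi|^2, not psi^2: psi is complex
    probs /= probs.sum()                      # normalize so probabilities sum to 1

    print(probs)          # -> [0.276 0.690 0.034]
    print(probs.sum())    # -> 1.0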
This probability isn't a consequence of ignorance, as is the case with virtually every other example of probability you're familiar with. For example, in a coin toss, if you know everything about the initial motion of the coin (velocity, angular velocity), along with all external influences (air currents, nature of the floor it lands on, etc.), then you can predict which side will land facing up. Quantum mechanical probabilities aren't like this. They aren't a consequence of missing information. The probabilities are truly random, and there is no further information (so-called "hidden variables") that will make things unrandom. The topic of hidden variables includes various theorems (such as Bell's theorem) and experimental results that you will learn about in a quantum mechanics course.
1926 (Dirac): Paul Dirac showed that Heisenberg's and Schrodinger's versions of quantum mechanics were equivalent, in that they could both be derived from a more general version of quantum mechanics.
All an experiment can show is that the theory is consistent with the real world. The more experiments we do, the more comfortable we are that the theory is a good one. But we can never be absolutely sure that we have the correct theory. In fact, odds are that it's simply the limiting case of a more correct theory.
The Schrodinger equation actually isn't exactly valid, so there's certainly no way that we proved it. Consistent with the above point concerning limiting cases, the quantum theory based on Schrodinger's equation is just a limiting theory of a more correct one, which happens to be quantum field theory (which unifies quantum mechanics with special relativity). This in turn must be a limiting theory of yet another more correct one, because it doesn't incorporate gravity. Eventually there will be one theory that covers everything (although this point can be debated), but we're definitely not there yet.
Due to the "i" that appears in Eq. (6), ψ(x) is complex. And in contrast with waves in classical mechanics, the entire complex function now matters in quantum mechanics. We won't be taking the real part in the end. Up to this point in the book, the use of complex functions was simply a matter of convenience, because it is easier to work with exponentials than trig functions. Only the real part mattered (or the imaginary part; take your pick, but not both). But in quantum mechanics the whole complex wavefunction is relevant. However, the theory is structured in such a way that anything you might want to measure (position, momentum, energy, etc.) will always turn out to be a real quantity. This is a necessary feature of any valid theory, of course, because you're not going to go out and measure a distance of 2 + 5i meters, or pay an electrical bill of 17 + 6i kilowatt-hours.
1.5 Degeneracy
Degeneracy refers to the fact that two or more stationary states of the same quantum-mechanical system may have the same energy even though their wave functions are not the same. In this case the common energy level of the stationary states is said to be degenerate. The statistical weight of the level is proportional to the order of degeneracy, that is, to the number of states with the same energy; this number is predicted from Schrödinger's equation. The energy levels of isolated systems (that is, systems with no external fields present) comprising an odd number of fermions (for example, electrons, protons, and neutrons) are always at least twofold degenerate.
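As an illustration (a standard textbook system of our own choosing, not one from the notes), the following Python sketch lists the degeneracies of a particle in a 2D square box, where E(nx, ny) = (nx^2 + ny^2) E0 with E0 = h^2/(8 m L^2):

    # Degeneracy of a particle in a 2D square box of side L.
    from collections import defaultdict

    levels = defaultdict(list)
    for nx in range(1, 5):
        for ny in range(1, 5):
            levels[nx**2 + ny**2].append((nx, ny))   # energy in units of E0

    for e in sorted(levels):
        states = levels[e]
        print(f"E = {e} E0: {states}  (degeneracy {len(states)})")
    # E = 5 E0 is twofold degenerate: (1,2) and (2,1) are distinct
    # wavefunctions with exactly the same energy.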
1.6 Band theory of solids
There are usually two approaches to understanding the origin of the band theory of solids: one is the "nearly free electron model" and the other the "tight-binding model".
1) Nearly free electron model:
In the nearly free electron approximation, interactions between electrons are completely ignored. This model allows use of Bloch's Theorem, which states that the wavefunction of an electron in a periodic potential can be written as a plane wave modulated by a function with the periodicity of the lattice:
ψ_k(x) = e^(ikx) u_k(x), with u_k(x + a) = u_k(x).
Details of the Kronig-Penney model
The KP model is a single-electron problem. The electron moves in a one-dimensional crystal of length L. The periodic potential that the electron experiences in the crystal lattice is approximated by a periodic square-wave function: V(x) = 0 in each well of width a, and V(x) = V0 in each barrier of width b, repeating with period a + b.
The importance of the Brillouin zone stems from the description of waves in a periodic medium given by Bloch's theorem, in which it is found that the solutions can be completely characterized by their behavior in a single Brillouin zone.
The first Brillouin zone is the locus of points in reciprocal space that are closer to the origin of the reciprocal lattice than they are to any other reciprocal lattice points (see the derivation of the Wigner–Seitz cell). Another definition is as the set of points in k-space that can be reached from the origin without crossing any Bragg plane. Equivalently, this is the Voronoi cell around the origin of the reciprocal lattice.
k-vectors exceeding the first Brillouin zone (red) do not carry any more information than their counterparts (black) in the first Brillouin zone. k at the Brillouin zone edge is the spatial Nyquist frequency of waves in the lattice, because it corresponds to a half-wavelength equal to the inter-atomic lattice spacing a.[1] See also Aliasing § Sampling sinusoidal functions for more on the equivalence of k-vectors.
The Brillouin zone (purple) and the Irreducible Brillouin zone (red) for a
hexagonal lattice.
There are also second, third, etc., Brillouin zones, corresponding to a sequence of disjoint regions (all with the same volume) at increasing distances from the origin, but these are used less frequently. As a result, the first Brillouin zone is often called simply the Brillouin zone. In general, the n-th Brillouin zone consists of the set of points that can be reached from the origin by crossing exactly n − 1 distinct Bragg planes. A related concept is that of the irreducible Brillouin zone, which is the first Brillouin zone reduced by all of the symmetries in the point group of the lattice (point group of the crystal).
The concept of a Brillouin zone was developed by Léon Brillouin (1889–1969), a French physicist.
It is often useful to take the primitive cell as the smallest volume bounded by planes normal to the G vectors of the nearest neighbours. This is just another way of dividing up reciprocal space into identical cells which fill it uniformly. Each cell contains one lattice site at its centre, and the cell around the origin is the first Brillouin zone. The same construction in the direct (real) lattice is called the Wigner–Seitz cell.
The first Brillouin zone is the set of points that can be reached from the origin without crossing any Bragg plane. The second Brillouin zone is the set of points that can be reached from the first zone by crossing only one Bragg plane.
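The following small Python sketch (lattice constant chosen arbitrarily) shows the standard folding of a 1D k-vector back into the first Brillouin zone, making concrete the earlier point that k-vectors outside the first zone carry no new information:

    # Folding a 1D wavevector back into the first Brillouin zone (-pi/a, pi/a].
    import numpy as np

    a = 1.0
    G = 2 * np.pi / a                       # reciprocal lattice spacing

    def to_first_bz(k):
        """Shift k by a multiple of G into (-pi/a, pi/a]."""
        return (k + np.pi / a) % G - np.pi / a

    k = 1.3 * np.pi / a                     # lies in the second zone
    print(to_first_bz(k) / (np.pi / a))     # -> -0.7 (in units of pi/a)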
The first and second Brillouin zones for a 1D reciprocal lattice. The sites are spaced by 2π/a. The first zone is the inner (red) area, and the second zone lies outside it.
The first and second Brillouin zones for a 2D reciprocal (square) lattice. Notice how each is generated, and that the second zone is disconnected.
Robert Dennard's classic MOSFET scaling methodology promised to deliver consistent improvements in transistor area, performance, and power reduction. The methodology called for the scaling of transistor gate length, gate width, gate oxide thickness, and supply voltage all by the same scale factor, and increasing channel doping by the inverse of the same scale factor (see Figure 1). The result would be transistors with smaller area, higher drive current (higher performance), and lower parasitic capacitance (lower active power). This method for scaling MOSFET transistors is generally referred to as "classic" or "traditional" scaling and was very successfully used by the industry up until the 130-nm generation in the early 2000s. For the past 20 years, we have been developing new generations of process technologies on a two-year cadence, and each generation scaled the minimum feature size by approximately 0.7 times to deliver an area scaling improvement of about 0.5 times. Thus, we have been doubling transistor density every two years. But recent technology generations (such as 14 nm and 10 nm) have taken longer to develop than the normal two-year cadence, owing to increased process complexity and an increased number of photomasking steps. Nonetheless, Intel's 14-nm and 10-nm technologies have provided better-than-normal transistor density improvements that keep us on pace with doubling transistor density about every two years.
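The quoted scaling numbers follow from simple arithmetic, as this small Python sketch shows:

    # A linear shrink of ~0.7x per generation gives ~0.5x area
    # (0.7^2 = 0.49), i.e. a doubling of transistor density per
    # two-year generation.
    shrink = 0.7
    area_scale = shrink ** 2
    density_gain = 1 / area_scale
    print(f"area x{area_scale:.2f}, density x{density_gain:.2f} per generation")

    # Cumulative density gain after g generations:
    for g in range(1, 6):
        print(g, f"{density_gain ** g:.1f}x")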
Transistor Innovations
As mentioned earlier, traditional MOSFET scaling worked well up until the
130-nm generation in the early 2000s. By that generation, the SiO2 gate oxide
thickness had scaled to about 1.2 nm, and electron tunneling through such a thin
dielectric was becoming a significant portion of total transistor leakage current.
We had reached the limit for scaling transistors using traditional methods, and
we needed to start introducing innovations in transistor materials and structure
to continue scaling.
One of the first significant innovations was the introduction of strained silicon transistors on Intel's 90-nm technology in 2003. This innovation used tensile strain in n-channel MOS (NMOS) transistor channels to increase electron mobility, and compressive strain in p-channel MOS (PMOS) channels to increase hole mobility. Tensile strain was induced by adding a high-stress film above the NMOS transistor. Compressive strain was induced by replacing the PMOS source and drain regions with epitaxially grown SiGe.
The next major transistor innovation was the introduction of FinFET (tri-gate) transistors on Intel's 22-nm technology in 2011. Traditional planar MOSFETs had been able to scale transistor gate length down to about 32 nm and deliver good performance and density while also maintaining low off-state leakage. But scaling the gate length below 32 nm was problematic without sacrificing either performance or leakage. A solution was to convert from a planar transistor structure to a 3D FinFET structure in which the gate electrode has better electrostatic control of the transistor channel formed in a tall, narrow silicon fin. This improved electrostatic control provided scaled transistors with a steeper sub-threshold slope. A steeper sub-threshold slope either provided transistors with lower off-state leakage or allowed the threshold voltage to be reduced, which enabled improved performance at low operating voltage. Operating integrated circuits at a lower voltage is highly desirable in order to reduce active power consumption. All advanced logic technologies now use FinFET transistors for their good density and superior low-voltage performance compared to planar transistors. As the figure shows, when traditional MOSFET scaling ran out of steam in the early 2000s, innovations such as strained silicon, high-k metal gate, and FinFETs were needed, and we must now continually invent new transistor materials and structures to continue scaling.
2.1 FinFETs
A FinFET is a transistor. Being a transistor, it is an amplifier and a switch. Its applications include home computers, laptops, tablets, smartphones, wearables, high-end networks, automotive systems, and more.
The name reflects the device's structure: "fin" because a fin-shaped silicon body forms the transistor's main channel, and "field-effect" because an electric field controls the conductivity of the material.
In a very short-channel planar MOSFET, the proximity between the drain and the source lessens the gate electrode's ability to control the flow of current in the channel region. Because of this, planar transistors suffer from increasing leakage as they are scaled down.
The channel (fin) of the FinFET is vertical, so specific dimensions must be kept in mind. Evoking Max Planck's "quanta," the FinFET exhibits a property known as width quantization: its effective width is a multiple of the fin height, so arbitrary widths are not possible. The key dimensions are:
1. Lg: gate length
2. Tfin: fin thickness
3. Hfin: fin height
4. W: transistor width (single fin)
5. Weff: effective transistor width (multiple fins)
6. For a double-gate fin: W = 2 Hfin
7. For a tri-gate fin: W = 2 Hfin + Tfin
8. Multiple fins increase the transistor width: Weff = n W, where n = number of fins (see the sketch below).
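A minimal Python sketch of these width relations follows; the fin dimensions used are illustrative, not values from any particular process:

    # FinFET width quantization using the relations listed above.
    def w_eff(n_fins, h_fin, t_fin, tri_gate=True):
        """Effective width: n*(2*Hfin + Tfin) for tri-gate, n*2*Hfin for double-gate."""
        w_single = 2 * h_fin + t_fin if tri_gate else 2 * h_fin
        return n_fins * w_single

    # e.g. 3 fins, Hfin = 42 nm, Tfin = 8 nm (tri-gate):
    print(w_eff(3, 42, 8))      # -> 276 nm
    # Width can only change in steps of one whole fin: arbitrary
    # ("random") widths are not possible.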
A FinFET (fin field-effect transistor) is a type of non-planar, or "3D", transistor (not to be confused with 3D microchips).[16] The FinFET is a variation on the traditional MOSFET distinguished by the presence of a thin silicon "fin" inversion channel on top of the substrate, allowing the gate to make two points of contact: the left and right sides of the fin. The thickness of the fin (measured in the direction from source to drain) determines the effective channel length of the device. The wrap-around gate structure provides better electrical control over the channel and thus helps in reducing the leakage current and overcoming other short-channel effects.
The first FinFET-type transistor was called a "Depleted Lean-channel Transistor" (DELTA), first fabricated by Hitachi Central Research Laboratory's Digh Hisamoto, Toru Kaga, Yoshifumi Kawamoto and Eiji Takeda in 1989.[17][10][18] In the late 1990s, Digh Hisamoto began collaborating with an international team of researchers on further developing DELTA technology, including TSMC's Chenming Hu and a UC Berkeley research team including Tsu-Jae King Liu, Jeffrey Bokor, Xuejue Huang, Leland Chang, Nick Lindert, S. Ahmed, Cyrus Tabery, Yang-Kyu Choi, Pushkar Ranade, Sriram Balasubramanian, A. Agarwal and M. Ameen. In 1998, the team developed the first N-channel FinFETs and successfully fabricated devices down to a 17 nm process. The following year, they developed the first P-channel FinFETs.[19] They coined the term "FinFET" (fin field-effect transistor) in a December 2000 paper.
In current usage the term FinFET has a less precise definition. Among microprocessor manufacturers, AMD, IBM, and Freescale describe their double-gate development efforts as FinFET development. In 2004, Samsung demonstrated dynamic random-access memory (DRAM) manufactured with a 90 nm Bulk FinFET process.[19] In 2006, a team of Korean researchers from the Korea Advanced Institute of Science and Technology (KAIST) and the National Nano Fab Center developed a 3 nm transistor, the world's smallest nanoelectronic device, based on FinFET technology. In 2011, Rice University researchers Masoud Rostami and Kartik Mohanram demonstrated that FinFETs can have two electrically independent gates, which gives circuit designers more flexibility to design with efficient, low-power gates.
In 2012, Intel started using FinFETs for its future commercial devices. Leaks suggest that Intel's FinFET has the unusual shape of a triangle rather than a rectangle; it is speculated that this might be either because a triangle has higher structural strength and can be more reliably manufactured, or because a triangular prism has a higher area-to-volume ratio than a rectangular prism, thus increasing switching performance.
Vertical MOSFETs
A vertical MOSFET is a type of metal oxide semiconductor field effect transistor used to switch large amounts of current. Power MOSFETs use a vertical structure with source and drain terminals on opposite sides of the chip. The vertical orientation eliminates crowding at the gate and offers larger channel widths. In addition, thousands of these transistor "cells" are combined in parallel in order to handle the high currents and voltages required of such devices.
Over the past 20 years, the channel length of MOS transistors has halved at intervals of approximately every two or three years, which has led to a virtuous circle of increasing packing density (more complex electronic products), increasing performance (higher clock frequencies) and decreasing cost per unit silicon area. To continue on this path, research is underway at Southampton University to investigate an alternative method of fabricating short-channel MOS transistors: so-called vertical MOSFETs. In these devices the channel is perpendicular to the wafer surface instead of in the plane of the surface. Vertical MOSFETs have three main advantages.
First, the channel length of the vertical MOS transistor is not defined by lithography. This means there is no requirement for post-optical lithography techniques such as x-ray, extreme ultra-violet, electron projection or ion projection lithography.
Second, vertical MOS transistors are easily made with both a front gate and a back gate. Using this technology doubles the channel width per transistor area. Combined with easier design rules, this leads to an increase in packing density of at least a factor of four compared to horizontal transistors. One step further is the use of very narrow pillars with the gate surrounding the entire pillar. This way, fully depleted transistors can be produced which have all the advantages of SOI transistors.
The third advantage of the vertical MOSFET is the possibility of preventing short-channel effects from dominating the transistor by adding processes that are not easily realised in horizontal transistors, such as a polysilicon (or poly-SiGe) source to reduce parasitic bipolar effects or a dielectric pocket to reduce drain-induced barrier lowering (DIBL).
Several factors limit the scaling of MOSFETs:
o Substrate doping
o Depletion width
o Limits of miniaturization
o Limits of interconnect and contact resistance
o Limits due to subthreshold currents
For digital circuit design, the ideal MOSFET would be a perfect switch. It would conduct infinite current in the on-state and zero current in the off-state. Scaling of the device dimensions has been effective at increasing the on-current of the device, but at the same time it causes an increase in the off-current. For an NMOS device with the drain at the supply voltage and the source, gate, and bulk at ground, ideally there should be no current flow. However, for submicron devices, there may be significant drain current to the source as subthreshold leakage, to the gate as tunneling current, and to the bulk as gate-induced drain leakage. The need to minimize these leakage currents while at the same time increasing on-current limits the scaling of MOSFETs. Another characteristic of an ideal MOSFET would be an infinite lifetime. Unfortunately, real devices tend to degrade when exposed to high electric fields in either the gate oxide or the channel. High-field phenomena, such as time-dependent dielectric breakdown and hot-carrier effects, are especially worrisome since they can cause a chip to suddenly fail after operating correctly for months or even years. Therefore, reliability concerns further limit practical device designs.
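To make the leakage trade-off concrete, here is a rough Python sketch of the standard exponential subthreshold-current model; I0, Vth and the swing S are illustrative values, not numbers from the text:

    # Below threshold, drain current falls exponentially with gate voltage:
    #     I = I0 * 10**((Vgs - Vth) / S),
    # where S is the subthreshold swing (>= ~60 mV/decade at 300 K).
    I0 = 1e-6        # current at threshold, A
    Vth = 0.3        # threshold voltage, V
    S = 0.080        # subthreshold swing, V/decade

    def i_off(vgs):
        return I0 * 10 ** ((vgs - Vth) / S)

    print(f"Vgs=0: {i_off(0.0):.2e} A")   # off-state leakage, ~1.8e-10 A
    # Lowering Vth by 100 mV to gain on-current raises this leakage by
    # 10**(0.1/0.08) ~ 18x: the on/off trade-off described above.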
2.3 Nanomaterials
Nanomaterials are, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 100 nm (the usual definition of the nanoscale).
Nanomaterials research takes a materials-science-based approach to nanotechnology, leveraging advances in materials metrology and synthesis which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, thermo-physical or mechanical properties. Nanomaterials are slowly becoming commercialized and beginning to emerge as commodities.
In ISO/TS 80004, nanomaterial is defined as the "material with any external dimension in the nanoscale or having internal structure or surface structure in the nanoscale", with nanoscale defined as the "length range approximately from 1 nm to 100 nm". This includes both nano-objects, which are discrete pieces of material, and nanostructured materials, which have internal or surface structure on the nanoscale; a nanomaterial may be a member of both these categories.
The European Commission has adopted the following definition of a nanomaterial: "A natural, incidental or manufactured material containing particles, in an unbound state or as an aggregate or as an agglomerate and where, for 50% or more of the particles in the number size distribution, one or more external dimensions is in the size range 1 nm – 100 nm. In specific cases and where warranted by concerns for the environment, health, safety or competitiveness, the number size distribution threshold of 50% may be replaced by a threshold between 1% and 50%."
The most basic method to measure the size of nanoparticles is size analysis from images taken with the transmission electron microscope (TEM), which can also give the particle size distribution. For this analysis, preparation of well-dispersed particles on the sample mount is the key issue.
A variety of approaches and methods have been explored to synthesize and fabricate nanomaterials.
Semiconductor Nanoparticles
Nanoparticles, particles of material with diameters in the nanometre range, have recently attracted significant attention from the materials science community. They exhibit unique physical properties that give rise to many potential applications in areas such as nonlinear optics, luminescence, electronics, catalysis, solar energy conversion, and optoelectronics.
Two fundamental factors, both related to the size of the individual nanocrystal, are responsible for these unique properties. The first is the large surface-to-volume ratio, and the second is the quantum confinement effect.
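A quick Python sketch of the surface-to-volume argument for spherical particles (radii chosen for illustration):

    # For a sphere, SA/V = (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r, so the
    # ratio (and the fraction of atoms at the surface) grows as the
    # particle shrinks.
    for r_nm in [100, 10, 2]:
        r = r_nm * 1e-9                     # radius in metres
        sa_over_v = 3 / r                   # m^-1
        print(f"r = {r_nm:4d} nm  ->  SA/V = {sa_over_v:.1e} m^-1")
    # Going from 100 nm to 2 nm raises SA/V by a factor of 50, which is
    # why surface-driven properties such as catalysis dominate at the
    # nanoscale.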
Wide-band-gap II-VI semiconductors are of current interest for optoelectronic applications such as blue lasers, light-emitting diodes, photonic crystals and optical devices based on nonlinear properties.
Magnetic Nanoparticles
Magnetic materials are also strongly affected by the small size scale of nanoparticles. Magnetic nanoparticles are being investigated for applications in cancer diagnosis and treatment. Before widespread usage of nanoparticles in medicine and pharmacology can be realized, a number of technical challenges must be met. These include, though are not limited to, synthesizing uniformly sized, nontoxic particles and coating the particles to make them attach to specific tissues.
In order for magnetic nanoparticles to be used within the body, they must meet several stringent criteria, including biocompatibility, ease of dispersion into solution for injection, and, most importantly, nontoxicity. In addition, the surfaces of the particles must be able to be functionalized to attach to, and agglomerate in, specific targeted tissues. Recently it has been proposed that nanoparticles could be used to treat cancers through a treatment called thermotherapy or hyperthermia. Iron oxides are one group of magnetic nanoparticles that meet the stringent requirements for insertion into the body.
Classification of Techniques for the Synthesis of Nanomaterials
There are two general approaches for the synthesis of nanomaterials:
a) Top-down approach
b) Bottom-up approach
In the top-down approach, bulk material is broken down into nanoscale structures. The alternative approach, which has the potential of creating less waste and hence being more economical, is the bottom-up approach, in which nanostructures are assembled from individual atoms or molecules.
3.1 Physical Limits to Computation
Computers are physical systems: what they can and cannot do is dictated by the laws of physics. In particular, the speed with which a physical device can process information is limited by its energy, and the amount of information that it can process is limited by the number of degrees of freedom it possesses. This paper explores the physical limits of computation as determined by the speed of light c, the quantum scale ħ and the gravitational constant G. As an example, quantitative bounds are put on the computational power of an 'ultimate laptop' with a mass of one kilogram confined to a volume of one liter.
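A back-of-envelope version of the 'ultimate laptop' bound can be reproduced in a few lines of Python, assuming the Margolus-Levitin limit of 2E/(πħ) elementary operations per second and taking E = mc² for the device's total energy:

    # Lloyd's 'ultimate laptop': maximum operations per second for a
    # 1 kg computer, from the Margolus-Levitin theorem.
    import math

    hbar = 1.055e-34      # reduced Planck constant, J*s
    c = 3.0e8             # speed of light, m/s
    m = 1.0               # mass of the 'laptop', kg

    E = m * c ** 2                        # total energy, J
    ops_per_sec = 2 * E / (math.pi * hbar)
    print(f"{ops_per_sec:.2e} ops/s")     # ~5.4e50 operations per second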
So far it has been easier to ask these questions than to answer them. To the extent that we have found limits, they are terribly far away from the real limits of modern technology. We cannot profess, therefore, to be guiding the technologist or the engineer. What we are doing is really more fundamental. We are looking for general laws that must govern all information processing, no matter how it is accomplished. Any limits we find must be based solely on fundamental physical principles, not on whatever technology we may currently be using.
There are precedents for this kind of fundamental examination. In the 1940s Claude E. Shannon of the Bell Telephone Laboratories found there are limits on the amount of information that can be transmitted through a noisy channel; these limits apply no matter how the message is encoded into a signal. Shannon's work represents the birth of modern information science. Earlier, in the mid- and late 19th century, physicists attempting to determine the fundamental limits on the efficiency of steam engines had created the science of thermodynamics. In about 1960 one of us (Landauer) and John Swanson at IBM began attempting to apply the same type of analysis to the process of computing. Since the mid-1970s a growing number of other workers at other institutions have entered this field.
In our analysis of the physical limits of computation we use the term "information" in the technical sense of information theory. In this sense information is destroyed whenever two previously distinct situations become indistinguishable. In physical systems without friction, information can never be destroyed; whenever information is destroyed, some amount of energy must be dissipated (converted into heat). As an example, imagine two easily distinguishable physical situations, such as a rubber ball held either one meter or two meters off the ground. If the ball is dropped, it will bounce. If there is no friction and the ball is perfectly elastic, an observer will always be able to tell what state the ball started out in (that is, what its initial height was), because a ball dropped from two meters will bounce higher than a ball dropped from one meter.
If there is friction, however, the ball will dissipate a small amount of energy with each bounce, until it eventually stops bouncing and comes to rest on the ground. It will then be impossible to determine what the ball's initial state was; a ball dropped from two meters will be identical with a ball dropped from one meter. Information will have been lost as a result of energy dissipation.
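Landauer's principle makes this connection quantitative: erasing one bit of information must dissipate at least kT ln 2 of energy as heat. A one-line Python check of the room-temperature value:

    # Minimum energy dissipated per erased bit (Landauer's principle).
    import math

    k = 1.381e-23        # Boltzmann constant, J/K
    T = 300.0            # room temperature, K

    E_min = k * T * math.log(2)
    print(f"{E_min:.2e} J per erased bit")   # ~2.9e-21 J
    # Real logic gates today dissipate many orders of magnitude more
    # than this per switching event, so the fundamental limit is far off.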
Here is another example of information destruction: the expression "2 + 2" contains more information than the expression "4". If all we know is that we have added two numbers to yield 4, then we do not know whether we have added 1 + 3, 2 + 2, 0 + 4, or some other pair of numbers.
We continue to develop and enhance our carbon nanotube transistor compact device model for circuit simulation. System-level optimization is enabled by the development of non-iterative compact models of carbon nanotube transistors. We are working on robust circuit design and fabrication for carbon nanotube and graphene electronics, including active devices (carbon nanotubes) and interconnects (graphene). We develop synthesis techniques to achieve high-density, aligned growth of carbon nanotubes as well as low-temperature carbon nanotube growth for electronics applications. Both digital logic and high-frequency analog applications are explored.
The field effect can be simply defined as the modulation of the conductivity of an underlying semiconductor layer by the application of an electric field to a gate electrode on the surface. As we learned in Chapter 11, the application of a bias to a MIS structure results in a modulation of the carrier concentration within the underlying semiconductor layer. If the semiconductor is naturally n-type and a positive gate bias is applied, electrons accumulate at the semiconductor-insulator interface. Conversely, if a negative gate bias is applied to the same structure, the electrons are repelled from the interface and, depending on the magnitude of the bias, the underlying semiconductor layer is either depleted or inverted. If the semiconductor becomes inverted, the carrier type changes.
The MOSFET is one of the most important inventions of all time. It has become the main component of all modern electronics. The miniaturisation trend has been very rapid, leading to ever-decreasing device sizes and opening endless opportunities to realise things which were once considered impossible. To keep up with the pace of large-scale integration, the idea of single-electron transistors (SETs) has been conceived. The most outstanding property of SETs is the possibility of switching the device from the insulating to the conducting state by adding only one electron to the gate electrode, whereas a common MOSFET needs about 1,000–10,000 electrons. The Coulomb blockade or single-electron charging effect, which allows for the precise control of small numbers of electrons, provides an alternative operating principle for nanometre-scale devices. In addition, the reduction in the number of electrons in a switching transition greatly reduces circuit power dissipation, raising the possibility of even higher levels of circuit integration. The present report begins with a description of Coulomb blockade, the classical theory which accounts for the switching in SETs. We also discuss the work that has been done on realising SETs and digital building blocks like memory and logic.
Various structures have been made in which electrons are confined to small volumes in metals or semiconductors. Perhaps not surprisingly, there is a deep analogy between such confined electrons and atoms. Such regions, with dimensions of only 1-100 nm and containing between 1,000 and 1,000,000 nuclei, are referred to as 'quantum dots', 'artificial atoms' or 'solid state atoms'. Such quantum dots form the heart of SET gates.
Coulomb Blockade
Single-electron devices differ from conventional devices in the sense that the electronic transport is governed by quantum mechanics. Single-electron devices consist of an 'island', a region containing localized electrons isolated by tunnel junctions with barriers to electron tunneling. In this section, we discuss electron transport through such devices and how Coulomb blockade originates in these devices. We also discuss how this is brought into play in SETs. The energy that determines the transport of electrons through a single-electron device is the Helmholtz free energy, F, defined as the difference between the total energy EΣ stored in the device and the work done by the power sources, W. The total energy stored includes all components that have to be considered when charging an island with an electron:
F = EΣ − W
EΣ = EC + ΔEF + EN
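As a rough numerical sketch of why Coulomb blockade demands tiny islands, the following Python snippet evaluates the single-electron charging energy EC = e²/(2C) for an illustrative 1 aF island capacitance and compares it with kT:

    # Coulomb-blockade condition: Ec = e^2/(2C) must far exceed k*T
    # for the blockade to survive thermal fluctuations.
    e = 1.602e-19        # elementary charge, C
    k = 1.381e-23        # Boltzmann constant, J/K

    C = 1e-18            # island capacitance, F (1 aF, illustrative)
    Ec = e ** 2 / (2 * C)
    print(f"Ec = {Ec:.2e} J = {Ec / e * 1000:.0f} meV")
    print(f"Ec/kT at 300 K: {Ec / (k * 300):.1f}")
    # Ec ~ 80 meV, only ~3x kT at room temperature: room-temperature SET
    # operation therefore demands islands of just a few nanometres.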
3.5 Spintronics
Spintronics (a neologism for "spin transport electronics"), also known as magnetoelectronics, is an emerging technology that exploits the intrinsic spin of the electron and its associated magnetic moment, in addition to its fundamental electronic charge.
MTA is an effective process for enhancing the performance of magnetic devices and materials. Thermal annealing involves raising, maintaining, and then slowly lowering the temperature of a material. Annealing allows the atoms inside a solid to diffuse more easily to find their proper locations, and maintaining a solid at a high temperature lets it achieve equilibrium, eliminating many structural defects.
Because spin is associated with a magnetic moment, its manipulation is intimately related to applying external magnetic fields. An advantage of spin-based electronics is nonvolatility, in contrast with charge-based electronics; moreover, quantum-mechanical computing based on spintronics could achieve speeds unheard of with conventional electrical computing. Spintronics, also called magnetoelectronics, spin electronics, or spin-based electronics, is an emerging scientific field, and research on spintronics can be divided into several subfields.
One spintronic device that currently has wide commercial application is the spin-valve. Most modern hard disk drives employ spin-valves to read each magnetic bit contained on the spinning platters inside. A spin-valve is essentially a spin "switch" that can be turned on and off by external magnetic fields. Basically, it is composed of two ferromagnetic layers separated by a very thin non-ferromagnetic layer. When the magnetizations of these two layers are parallel, electrons can pass through both easily; when they are antiparallel, few electrons penetrate both layers.
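A toy "two-current" resistor model (with illustrative resistance values of our own choosing) captures why the parallel configuration conducts better:

    # Two-current model of a spin-valve: spin-up and spin-down electrons
    # form parallel conduction channels through the two magnetic layers.
    R_maj, R_min = 1.0, 4.0   # ohms seen by majority vs minority spins

    # Parallel magnetizations: one spin channel is 'easy' in both layers.
    R_parallel = 1 / (1 / (R_maj + R_maj) + 1 / (R_min + R_min))
    # Antiparallel: every electron is a minority carrier in one layer.
    R_antiparallel = 1 / (1 / (R_maj + R_min) + 1 / (R_min + R_maj))

    print(R_parallel, R_antiparallel)   # -> 1.6 vs 2.5 ohms
    gmr = (R_antiparallel - R_parallel) / R_parallel
    print(f"GMR ratio: {gmr:.0%}")      # -> ~56%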
Quantum dot cellular automata (QCA) are a proposed physical implementation of "classical" cellular automata exploiting quantum mechanical phenomena. QCA have attracted a lot of attention as a result of their extremely small feature size (at the molecular or even atomic scale) and their ultra-low power consumption, making them one candidate for replacing CMOS technology.
In the context of models of computation or of physical systems, quantum cellular automaton refers to the merger of elements of both (1) the study of cellular automata in conventional computer science and (2) the study of quantum information processing. In particular, the following are features of models of quantum cellular automata:
The computation is considered to come about by the parallel operation of multiple computing devices, or cells. The cells are usually taken to be identical, finite-dimensional quantum systems.
Models which have been proposed recently impose further conditions, e.g. that quantum cellular automata should be reversible and/or locally unitary, and have an easily determined global transition function from the rule for updating individual cells.[2] Recent results show that these properties can be derived axiomatically, from the symmetries of the global evolution.
Early proposals
In 1982, Richard Feynman suggested an initial approach to quantizing a model
of cellular automata.[9] In 1985, David Deutsch presented a formal development
of the subject.[10] Later, Gerhard Grössing and Anton Zeilinger introduced the
term ”quantum cellular automata” to refer to a model they defined in 1988,[11]
although their model had very little in common with the concepts developed by
Deutsch and so has not been developed significantly as a model of computation.
It was later realised that the early definitions were too loose, in the sense that some instances allow superluminal signalling.[6][7] A second wave of models includes those of Susanne Richter and Reinhard Werner,[17] of Benjamin Schumacher and Reinhard Werner,[6] of Carlos Pérez-Delgado and Donny Cheung,[2] and of Pablo Arrighi, Vincent Nesme and Reinhard Werner.[7][8] These are all closely related, and do not suffer from any such locality issue. In the end one can say that they all agree in picturing quantum cellular automata as just a large quantum circuit, infinitely repeating across time and space.
For many years, fault-tolerant quantum computing was considered "a rather distant dream."[9] In recent years, investment in quantum computing research has increased in the public and private sectors. On 23 October 2019, Google AI, in partnership with the U.S. National Aeronautics and Space Administration (NASA), claimed to have performed a quantum computation that was infeasible on any classical computer, but whether this claim was or is still valid is a topic of active research.
There are several types of quantum computers (also known as quantum computing systems), including the quantum circuit model, quantum Turing machine, adiabatic quantum computer, one-way quantum computer, and various quantum cellular automata. The most widely used model is the quantum circuit, based on the quantum bit, or "qubit", which is somewhat analogous to the bit in classical computation. A qubit can be in a 1 or 0 quantum state, or in a superposition of the 1 and 0 states; when measured, however, the outcome is always either 0 or 1.
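A minimal Python sketch of a qubit as a normalized two-component complex vector, with the measurement probabilities it implies (the amplitudes are chosen arbitrarily for illustration):

    # |psi> = a|0> + b|1>, normalized so |a|^2 + |b|^2 = 1.
    import numpy as np

    psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # equal superposition of 0 and 1

    p0, p1 = np.abs(psi) ** 2
    print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # -> 0.50 and 0.50
    # A measurement always yields 0 or 1; the superposition only fixes
    # the probabilities of the two outcomes.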
Any problem that can be solved by a quantum computer can also be solved by a classical computer given enough time; in other words, quantum computers obey the Church–Turing thesis. This means that while quantum computers provide no additional advantage over classical computers in terms of computability, quantum algorithms for certain problems have significantly lower time complexities than the corresponding known classical algorithms. Notably, quantum computers are believed to be able to quickly solve certain problems that no classical computer could solve in any feasible amount of time, a feat known as "quantum supremacy." The study of the computational complexity of problems with respect to quantum computers is known as quantum complexity theory.
The quantum in "quantum computing" refers to the quantum mechanics that the system uses to calculate outputs. In physics, a quantum is the smallest possible discrete unit of any physical property. It usually refers to properties of atomic or subatomic particles, such as electrons, neutrinos, and photons.
DNA computing is an emerging branch of computing which uses DNA, biochemistry, and molecular biology hardware instead of traditional electronic computing. Research and development in this area concerns the theory, experiments, and applications of DNA computing.
Leonard Adleman of the University of Southern California initially developed this field in 1994. Adleman demonstrated a proof-of-concept use of DNA as a form of computation which solved the seven-point Hamiltonian path problem. Since the initial Adleman experiments, advances have been made and various Turing machines have been proven to be constructible.
Since then the field has expanded into several avenues. In 1995, the idea of DNA-based memory was proposed by Eric Baum,[14] who conjectured that a vast amount of data can be stored in a tiny amount of DNA due to its ultra-high density. This expanded the horizon of DNA computing into the realm of memory technology, although the in vitro demonstrations came almost a decade later.
Adleman's experiment solved an instance of what is known computationally as the "travelling salesman problem". For this purpose, different DNA fragments were created, each of them representing a city that had to be visited. Each of these fragments is capable of linking with the other fragments created. These DNA fragments were produced and mixed in a test tube. Within seconds, the small fragments formed bigger ones, representing the different travel routes. Through a chemical reaction, the DNA fragments representing the longer routes were eliminated. What remained was the solution to the problem; overall, however, the experiment lasted a week. Furthermore, current technical limitations prevent the evaluation of the results, so the experiment is not suitable for practical application, but it is nevertheless a proof of concept.
Over the past half century, the amount of information that computers are ca-
pable of processing and the rate at which they process it has doubled every two
years, a phenomenon known as Moore’s law. A variety of technologies—most
recently, integrated circuits—have enabled this exponential increase in informa-
tion processing power. There is no particular reason why Moore’s law should
continue to hold: it is a law of human ingenuity, not of nature. At some point,
Moore’s law will break down. The question is, When? [...]
We should determine just what limits the laws of physics place on the power
of computers. At first, this might seem a futile task: since we don’t know the
technologies by which computers one thousand, one hundred, or even ten years
in the future will be constructed, how can we determine the physical limits of
those technologies? In fact, as will now be shown, a great deal can be determined
concerning the ultimate physical limits of computation simply from knowledge
of the speed of light...
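Lloyd's headline number can be reproduced with a few lines of arithmetic. The
Margolus-Levitin theorem limits a system of average energy E to about 2E/(πħ)
elementary operations per second; taking E = mc² for the one-kilogram laptop
gives the famous figure of roughly 5 × 10⁵⁰ operations per second:

```python
# Back-of-envelope check of the "ultimate laptop" bound.
import math

c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J*s
m = 1.0            # laptop mass, kg

E = m * c**2                       # total rest energy, ~9.0e16 J
ops_per_sec = 2 * E / (math.pi * hbar)
print(f"{ops_per_sec:.2e} ops/s")  # ~5.4e50 operations per second
```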
Tunneling diodes (TDs) have been widely studied for their importance in achiev-
ing very high speed in wide-band devices and circuits that are beyond conven-
tional transistor technology. A particularly useful form of a tunneling diode
is the Resonant Tunneling Diode (RTD). RTDs have been shown to achieve a
maximum frequency of up to 2.2 THz as opposed to 215 GHz in conventional
Complementary Metal Oxide Semiconductor (CMOS) transistors. The very
high switching speeds provided by RTDs have allowed for a variety of appli-
cations in wide-band secure communications systems and high-resolution radar
and imaging systems for low visibility environments. Tunneling diodes provide
the same functionality as a CMOS transistor: under a specific external
bias voltage range, the device conducts a current, thereby switching the de-
vice “on”. However, instead of the current going through a channel between the
drain and source as in CMOS transistors, the current goes through the depletion
region by tunneling in normal tunneling diodes and through quasi-bound states
within a double barrier structure in RTDs.
A TD consists of a p-n junction in which both the n- and p-regions are degener-
ately doped. There is a high concentration of electrons in the conduction band
(EC) of the n-type material and empty states in the valence band (EV) of the
p-type material. Initially, the Fermi level (EF) is constant because the diode is
in thermal equilibrium with no external bias voltage. When the forward bias
voltage starts to increase, the EF will start to decrease in the p-type material
and increase in the n-type material. Since the depletion region is very narrow
(<10 nm), electrons can easily tunnel through, creating a forward current. De-
pending on how many electrons in the n-region are energetically aligned to the
empty states in the valence band of the p-region, the current will either increase
or decrease. As the bias voltage continues to increase, the ideal diffusion cur-
rent will cause the current to increase. When a reverse-bias voltage is applied,
the electrons in the p-region are energetically aligned with empty states in the
n-region causing a large reverse-bias tunneling current.
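The rise-fall-rise shape of the tunnel-diode current can be sketched with a
commonly used empirical model: a tunneling term that peaks at (Vp, Ip) and
then decays (the negative-differential-resistance region), superposed on the
ordinary diffusion-diode current that dominates at higher bias. The parameter
values below are illustrative, not from any datasheet:

```python
# Minimal empirical tunnel-diode I-V sketch (illustrative parameters).
import math

Ip, Vp = 1e-3, 0.1      # peak current (A) and peak voltage (V)
Is, VT = 1e-12, 0.026   # diode saturation current (A), thermal voltage (V)

def tunnel_diode_current(v):
    i_tunnel = Ip * (v / Vp) * math.exp(1 - v / Vp)  # peaks at v = Vp
    i_diff = Is * math.expm1(v / VT)                 # normal p-n diffusion
    return i_tunnel + i_diff

for v in [0.05, 0.10, 0.20, 0.40, 0.50]:
    print(f"V = {v:.2f} V -> I = {tunnel_diode_current(v)*1e3:.3f} mA")
```

Running it shows the current peaking near 0.1 V, falling to a valley around
0.4 V, then rising again as the diffusion term takes over.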
The RTD exhibits a pronounced negative differential resistance (NDR) in its
I-V characteristics. It consists of two heavily doped, narrow energy-gap materi-
als encompassing an emitter region, a quantum well in between two barriers of
large band gap material, and a collector region, as shown in Figure 3. A current
method of growth for this device is Metal Organic Chemical Vapor Deposition
using GaAs-AlGaAs. The quantum-well thickness is typically around 5 nm and
the barrier layers are around 1.5 to 5 nm thick.
When there is no forward voltage bias, most of the electrons and holes are
stationary forming an accumulation layer in the emitter and collector region re-
spectively. As a forward voltage bias is applied, an electric field is created that
causes electrons to move from the emitter to the collector by tunneling through
the scattering states within the quantum well. These quasibound energy states
are the energy states that allow for electrons to tunnel through creating a cur-
rent. As more and more electrons in the emitter have the same energy as the
quasi-bound state, more electrons are able to tunnel through the well, resulting
in an increase in the current as the applied voltage is increased. When the
electric field increases to the point where the energy level of the electrons in the
emitter coincides with the energy level of the quasi-bound state of the well, the
current reaches a maximum.
Resonant tunneling occurs at specific resonant energy levels corresponding
to the doping levels and width of the quantum well. As the applied voltage
continues to increase, more and more electrons gain too much energy to
tunnel through the well, and the current decreases. After a certain applied
voltage, current begins to rise again because of substantial thermionic emission
where the electrons can tunnel through the non-resonant energy levels of the
well. This process produces a minimum “valley” current that can be classified
as the leakage current.
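The sharply peaked current can be traced to the transmission probability of the
double-barrier structure. Near a quasi-bound level $E_r$ it is approximately
Lorentzian (the standard Breit-Wigner form, given here as a sketch rather than
taken from the text, with $\Gamma_L$ and $\Gamma_R$ the partial widths set by
the two barrier transparencies):

\[
T(E) \;\approx\; \frac{\Gamma_L\,\Gamma_R}{(E - E_r)^2 + \left(\dfrac{\Gamma_L + \Gamma_R}{2}\right)^2}
\]

At $E = E_r$ a symmetric structure ($\Gamma_L = \Gamma_R$) transmits almost
perfectly, which is why the peak current can be large even though each barrier
alone is nearly opaque.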
RTDs have a major advantage over TDs. When a high reverse bias voltage is
applied to TDs, there is a very high leakage current. However, RTDs have the
same doping type and concentration on the collector and emitter side. This
produces a symmetrical I-V response when a forward as well as a reverse bias
voltage is applied. In this manner the very high leakage current present in nor-
mal TDs is eliminated. Thus, RTDs are very good rectifiers.
RTD bandwidths were reported for InAs/AlSb RTDs at about 1.24 THz due
to their low ohmic contact resistance and short transit times. Higher band-
widths [...]
Single electron charging
The device which we want to consider is a so-called single electron transistor
where a small island with the self-capacitance C is weakly coupled to source and
drain contacts via tunnel barriers. At low enough temperatures and small bias
voltage, the energy cost to add an extra electron onto the island may exceed
the thermal energy and the current through the island is blocked. This is the
Coulomb blockade effect.
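A quick order-of-magnitude check (with an assumed, illustrative island
capacitance) shows why "low enough temperatures" matters: blockade requires
the charging energy Ec = e²/2C to exceed the thermal energy kBT.

```python
# Single-electron charging energy for an assumed 1 aF island.
e = 1.602e-19    # elementary charge, C
kB = 1.381e-23   # Boltzmann constant, J/K
C = 1e-18        # island self-capacitance, 1 aF (illustrative)

Ec = e**2 / (2 * C)     # ~1.3e-20 J, i.e. ~80 meV
T_cross = Ec / kB       # temperature where Ec ~ kB*T
print(f"Ec = {Ec/e*1e3:.0f} meV; comparable to kB*T near {T_cross:.0f} K")
```

Since clear blockade needs kBT well below Ec, a 1 aF island works only far
below ~900 K, and much smaller (sub-aF) islands are needed for robust
room-temperature operation.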
It was first suggested in the early 1950s by Gorter as an explanation for the obser-
vation of an anomalous increase of the resistance of thin granular metallic films
with a reduction in temperature. More than 30 years later Fulton and Dolan
observed Coulomb blockade effects in a microfabricated metallic sample and
initiated a huge number of experimental and theoretical studies. Today there
are many textbooks and reviews on single electron systems both in metals and
in semiconductor systems.
Carbon nanotubes (CNTs) are large cylindrical molecules consisting of a hexag-
onal arrangement of hybridized carbon atoms, which may be formed by rolling
up a single sheet of graphene (single-walled carbon nanotubes, SWCNTs) or by
rolling up multiple sheets of graphene (multiwalled carbon nanotubes, MWC-
NTs).
Carbon Nanotubes, long, thin cylinders of carbon, were discovered in 1991 by
Sumio Iijima. These are large macromolecules that are unique for their size,
shape, and remarkable physical properties. They can be thought of as a sheet
of graphite (a hexagonal lattice of carbon) rolled into a cylinder. These intrigu-
ing structures have sparked much excitement in recent years and a large amount
of research has been dedicated to their understanding. Currently, the physical
properties are still being discovered and disputed. Nanotubes have a very broad
range of electronic, thermal, and structural properties
that can be manipulated chemically and physically in very useful ways. They
open an incredible range of applications in materials science, electronics, chem-
ical processing, energy management, and many other fields. Some properties
include:
• Extraordinary electrical conductivity, heat conductivity, and mechanical prop-
erties.
• They are probably the best electron field-emitters known, largely due to their
high length-to-diameter ratios.
• As pure carbon polymers, they can be manipulated using the well-known and
tremendously rich chemistry of that element.
Some of the above properties provide opportunity to modify their structure, and
to optimize their solubility and dispersion. These extraordinary characteristics
give CNTs potential in numerous applications.
The structure of a carbon nanotube is formed by a layer of carbon atoms
that are bonded together in a hexagonal (honeycomb) mesh. This one-atom
thick layer of carbon is called graphene, and it is wrapped in the shape of a
cylinder and bonded together to form a carbon nanotube. Nanotubes can have
a single outer wall of carbon, or they can be made of multiple walls (cylinders
inside other cylinders of carbon). Carbon nanotubes have a range of electric,
thermal, and structural properties that can change based on the physical design
of the nanotube.
There are three nanotube designs: Armchair, Chiral, and Zigzag. The design depends on the way the graphene is
wrapped into a cylinder. For example, imagine rolling a sheet of paper from
its corner, which can be considered one design, and a different design can be
formed by rolling the paper from its edge. A single-walled nanotube’s structure
is represented by a pair of indices (n, m) called the chiral vector, defined as
Ch = n·a1 + m·a2, where a1 and a2 are the unit vectors of the graphene lattice.
The structural design has a direct effect on the nanotube’s electrical proper-
ties. When n − m is a multiple of 3, the nanotube is described as “metallic”
(highly conducting); otherwise the nanotube is a semiconductor. The Armchair
design is always metallic, while other designs can make the nanotube a semicon-
ductor, as the sketch below illustrates.
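The classification rule just quoted is easy to put into code. The sketch below
also evaluates the standard diameter formula d = a·sqrt(n² + nm + m²)/π,
with a ≈ 0.246 nm the graphene lattice constant; the example tubes are
illustrative:

```python
# Classify a few (n, m) nanotubes and compute their diameters.
import math

def classify(n, m):
    if n == m:
        kind = "armchair"
    elif m == 0:
        kind = "zigzag"
    else:
        kind = "chiral"
    conduct = "metallic" if (n - m) % 3 == 0 else "semiconducting"
    d = 0.246 * math.sqrt(n**2 + n * m + m**2) / math.pi  # diameter, nm
    return kind, conduct, d

for n, m in [(5, 5), (9, 0), (10, 0), (6, 4)]:
    kind, conduct, d = classify(n, m)
    print(f"({n},{m}): {kind}, {conduct}, d = {d:.2f} nm")
```

Note that every armchair tube (n = n) automatically satisfies the n − m
multiple-of-3 condition, which is why that design is always metallic.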
A multi-walled nanotube may consist of concentric graphene cylinders nested
inside one another, or of a single graphene sheet wrapped around itself like
a rolled-up scroll of paper. Multi-walled carbon nanotubes have similar prop-
erties to single-walled nanotubes, yet the outer walls on multi-walled nanotubes
can protect the inner carbon nanotubes from chemical interactions with out-
side materials. Multi-walled nanotubes also have a higher tensile strength than
single-walled nanotubes.
Strength
Carbon nanotubes have a higher tensile strength than steel and Kevlar.
Their strength comes from the sp² bonds between the individual carbon atoms.
This bond is even stronger than the sp³ bond found in diamond. Under high
pressure, individual nanotubes can bond together, trading some sp² bonds for
sp³ bonds. This gives the possibility of producing long nanotube wires. Carbon
nanotubes are not only strong, they are also elastic. You can press on the tip
of a nanotube and cause it to bend without damaging the nanotube, and
the nanotube will return to its original shape when the force is removed. A
nanotube’s elasticity does have a limit, and under very strong forces, it is pos-
sible to permanently deform the shape of a nanotube. A nanotube’s strength
can be weakened by defects in the structure of the nanotube. Defects occur
from atomic vacancies or a rearrangement of the carbon bonds. Defects in the
structure can cause a small segment of the nanotube to become weaker, which
in turn causes the tensile strength of the entire nanotube to weaken. The tensile
strength of a nanotube depends on the strength of the weakest segment in the
tube similar to the way the strength of a chain depends on the weakest link in
the chain.
Electrical properties
As mentioned previously, the structure of a carbon nanotube determines how
conductive the nanotube is. When the structure of atoms in a carbon nanotube
minimizes the collisions between conduction electrons and atoms, a carbon
nanotube is highly conductive. The strong bonds between carbon atoms also allow
carbon nanotubes to withstand higher electric currents than copper. Electron
transport occurs only along the axis of the tube. Single-walled nanotubes can
route electrical signals at speeds up to 10 GHz when used as interconnects on
semiconducting devices. Nanotubes also have a constant resistivity; the sketch
below shows why an ideal tube’s resistance is independent of its length.
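The length-independent resistance reflects ballistic transport: in the Landauer
picture each perfectly transmitting channel contributes the fixed conductance
2e²/h, and an ideal metallic single-walled tube carries two such channels. A
minimal numerical check:

```python
# Ideal ballistic resistance of a metallic SWCNT from the Landauer picture.
e = 1.602e-19   # elementary charge, C
h = 6.626e-34   # Planck constant, J*s

G0 = 2 * e**2 / h        # conductance quantum, ~77.5 microsiemens
R_swcnt = 1 / (2 * G0)   # two channels -> ~6.45 kilo-ohms
print(f"G0 = {G0*1e6:.1f} uS, ideal SWCNT resistance = {R_swcnt/1e3:.2f} kOhm")
```

Real tubes add contact resistance and, beyond the electron mean free path,
ordinary scattering, so measured values sit above this ideal floor.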
Thermal Properties
The strength of the atomic bonds in carbon nanotubes allows them to with-
stand high temperatures. Because of this, carbon nanotubes have been shown
to be very good thermal conductors. When compared to copper wires, which
are commonly used as thermal conductors, carbon nanotubes have a thermal
conductivity (in watts per metre per kelvin) over 15 times higher. The thermal conduc-
tivity of carbon nanotubes is dependent on the temperature of the tubes and
the outside environment.
4.4 Band Structure
Band theory or band structure describes the quantum-mechanical behavior of
electrons in solids. Inside isolated atoms, electrons possess only certain discrete
energies, which can be depicted in an energy-level diagram as a series of dis-
tinct lines. In a solid, where many atoms sit in close proximity, electrons are
“shared.” The equivalent energy level diagram for the collective arrangement of
atoms in a solid consists not of discrete levels, but of bands of levels representing
nearly a continuum of energy values. In a solid, electrons normally occupy the
lowest lying of the energy levels. In conducting solids the next higher energy
level (above the highest filled level) is close enough in energy that transitions
into it occur readily, allowing electrons to move and carry current.
Band structure is one of the most important concepts in solid state physics.
It provides the electronic levels in (ideal) crystal structures, which are charac-
terized by two quantum numbers, the Bloch vector k and the band index n.
Here the Bloch vector is an element of the reciprocal space (in units 1/length)
and the energy of the electron En(k) is a continuous function of k, so that one
obtains a continuous range of energies referred to as the energy band. Many
electrical, optical, and even some magnetic properties of crystals can be ex-
plained in terms of the bandstructure. Of particular importance is the location
of the Fermi energy, below which all levels are occupied at zero temperature. If
the Fermi energy is located in a band gap, the material is insulating (or semi-
conducting), while it is metallic otherwise.
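As a toy illustration (not from the text) of how a continuous band En(k)
arises, the sketch below evaluates the textbook one-dimensional tight-binding
dispersion E(k) = E0 − 2t·cos(ka): the discrete atomic level E0 broadens into
a band of width 4t as the Bloch vector k sweeps the reciprocal space. E0, t,
and a are illustrative parameters:

```python
# 1D nearest-neighbour tight-binding band: E(k) = E0 - 2t*cos(k*a).
import math

E0, t, a = 0.0, 1.0, 1.0   # on-site energy, hopping, lattice constant

def band(k):
    return E0 - 2 * t * math.cos(k * a)

# Sample k across the first Brillouin zone [-pi/a, pi/a].
for i in range(9):
    k = (-math.pi + i * 2 * math.pi / 8) / a
    print(f"k = {k:+.3f} -> E(k) = {band(k):+.3f}")
```

The band runs continuously from E0 − 2t to E0 + 2t; where the Fermi energy
falls within this range, the chain behaves as a metal.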
4.5 2D Semiconductors
The two-dimensional (2D) semiconductors are non-carbon materials which, sim-
ilarly to graphene, exist as monolayers of unusual properties. In contrast to
graphene, these 2D materials often have a tunable bandgap in the visible – near
IR range, and exhibit rich redox chemistry which can be controlled through
material design and special processing. Many 2D semiconductors have di-
rect bandgaps whereas the corresponding bulk phases show indirect gaps with
smaller energies. Other interesting properties include high carrier mobility and
on/off ratio.
2D semiconductors can complement graphene in devices and applications
where an energy bandgap is required.
2D Transition Metal Dichalcogenides (2D-TMDs): This group of 2D
materials includes MoS2 and WS2, which show great promise for many di-
verse uses in gas sensing, bio-sensors, supercapacitors, lithium-ion batteries,
and sodium-ion batteries. Due to their large surface-to-volume ratio, 2D-TMDs
produce sensors with improved sensitivity, selectivity and low-power consump-
tion. The use of 2D-TMDs in energy storage is determined by their large surface
area, and large van der Waals gaps between neighbouring layers, which are suit-
able for intercalation of lithium, sodium and other ions.
Commercially available 2D-TMD inks and pastes contain MoS2 or WS2
nanoflakes of narrow particle size distribution, with controlled structural and
electronic properties. Using such inks, it is possible to deposit novel electronic,
optoelectronic and sensor devices on flexible and heat-sensitive substrates, such
as paper, polymers and textiles. Additionally, these inks can be used as
intermediaries for producing supercapacitor and battery electrodes.
Two-dimensional (2D) semiconductors beyond graphene represent the thinnest
stable nanomaterials known. The rapid growth of their family and applications
over the last decade has brought unprecedented opportunities to advanced
nano- and opto-electronic technologies. Reviews of the field summarize the
latest findings on these 2D nanomaterials, advanced synthesis techniques for
the materials and their heterostructures, and their novel applications. The
fabrication techniques include state-of-the-art developments of vapor-phase-
based deposition methods and novel van der Waals (vdW) exfoliation
approaches for fabricating both amorphous and crystalline 2D nanomaterials,
with a particular focus on chemical vapor deposition (CVD) and atomic layer
deposition (ALD) [...]
4.6 Graphene
Graphene is a one-atom-thick layer of carbon atoms arranged in a hexagonal lat-
tice. It is the building block of graphite (which is used, among other things,
in pencil tips), but graphene is a remarkable substance on its own, with a
multitude of astonishing properties that repeatedly earn it the title “wonder
material”.
Graphene’s properties
Graphene is the thinnest material known to man at one atom thick, and
also incredibly strong - about 200 times stronger than steel. On top of that,
graphene is an excellent conductor of heat and electricity and has interesting
light absorption abilities. It is truly a material that could change the world,
with unlimited potential for integration in almost any industry.
Potential applications
• batteries
• transistors
• computer chips
• energy generation
• supercapacitors
• DNA sequencing
• water filters
• antennas
• touchscreens (for LCD or OLED displays)
• solar cells
• spintronics-related products
Producing graphene
Graphene is indeed very exciting, but producing high quality materials is still
a challenge. Dozens of companies around the world are producing different
types and grades of graphene materials - ranging from high quality single-layer
graphene synthesized using a CVD-based process to graphene flakes produced
[...] and at lower prices, are adopted in many applications such as sports
equipment, consumer electronics, automotive and more.
Logic Devices
Programmable Logic Devices (PLDs)
A PLD is an IC that contains a large number of gates, flip-flops, and registers
interconnected on the chip. PLDs can be reconfigured to perform any number
of logic functions at any time.
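As a rough illustration of what "reconfigurable" means here, the sketch below
models a PLA-style PLD in Python: two bit-matrices stand in for the
configuration fuses, with a programmable AND plane forming product terms
and a programmable OR plane summing selected terms. The fuse maps and the
example function f = a·b + ā·c are hypothetical, not taken from any real
device:

```python
# Toy PLA model: programmable AND plane + programmable OR plane.
# Each AND-plane row lists, per input, 1 (use input), 0 (use complement),
# or None (don't care). Configured here for f = a*b + (not a)*c.
and_plane = [
    [1, 1, None],   # term0 = a AND b
    [0, None, 1],   # term1 = (NOT a) AND c
]
or_plane = [[1, 1]]  # output0 = term0 OR term1

def pla(inputs):
    terms = [
        all(bit is None or inputs[i] == bit for i, bit in enumerate(row))
        for row in and_plane
    ]
    return [any(t for t, sel in zip(terms, row) if sel) for row in or_plane]

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print((a, b, c), "->", pla([a, b, c]))
```

Reprogramming the device corresponds to rewriting the two matrices; the
hardware itself does not change.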
High-κ insulators and metal gates will be used instead of poly-Si gates to
address the tunneling and gate-leakage problems.
The figure shows the structure of a MOS capacitor, together with the corre-
sponding band diagram. Silicon dioxide has a 9 eV bandgap, which results in
a large band offset relative to silicon.
VG < 0: The Fermi level of the metal increases, and an electric field is created
in the SiO2 (visible as a slope in the SiO2 conduction band). Due to the low
carrier concentration, the Si bands bend at the SiO2 interface, leading to an
accumulation of excess holes. To conserve charge, an equivalent number of
electrons accumulates on the metal side.
VG > 0: The Fermi level of the metal moves down and the silicon bands bend
downward. The hole concentration near the interface decreases; this is called
the depletion condition. An equivalent amount of positive charge QM is induced
at the metal-oxide interface to balance the negative charge Qs in the semicon-
ductor: Qs = −QM, with Qs = Qd (the depletion charge).
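The depletion charge can be estimated with the standard depletion
approximation (a sketch assuming a uniformly doped p-type substrate with
acceptor density $N_A$ and surface band bending $\psi_s$):

\[
W = \sqrt{\frac{2\,\varepsilon_{\mathrm{Si}}\,\psi_s}{q\,N_A}},
\qquad
Q_d = -\,q\,N_A\,W
\]

so the induced depletion charge $Q_d$ grows only as the square root of the
band bending, until inversion sets in and mobile electrons take over at the
interface.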
5.3 NEMS
Nanoelectromechanical systems (NEMSs) are devices that integrate electrical
and mechanical functions at the nanoscale. They consist of miniaturized elec-
trical and mechanical apparatuses such as actuators, beams, sensors, pumps,
resonators, and motors. These components convert one form of energy into an-
other, which can be quickly and conveniently measured. These devices can func-
tion as biosensors to monitor important physiological variables during surgical
procedures, such as intracranial pressure, cerebrospinal fluid (CSF) pulsatility,
weight load, and strain.
These devices benefit from scaling in several ways. First, the smallest mass
they can detect is roughly proportional to the total mass of the device, so
miniaturization improves mass sensitivity. Second, the ability of an NEMS device to be
displaced or deformed—known as mechanical compliance—increases with uni-
form reduction of its dimensions. This high degree of mechanical compliance
allows an applied force to be translated into a measurable displacement, such
that even the miniscule forces governing cellular and subcellular interactions
can be quantified. For example, NEMS sensors can resolve forces as small as 10
pN, making them sensitive enough to detect the breaking of hydrogen bonds.
Third, small fluidic mechanical devices can exhibit fast response times, which
would facilitate real-time monitoring of biological processes.
Conventional monitoring systems
rely on wired sensors and are thus not suitable for implantation and postsur-
gical monitoring [45]. Furthermore, the device was seen to dissolve over time
when exposed to biofluids, such as CSF. As only biocompatible end products
were eventually formed, subsequent invasive procedures to remove implanted
NEMS devices could be rendered unnecessary in future clinical settings.
The implantable nature of these devices has significant implications for the
postsurgical follow-up of brain tumor patients. For example, implanted sensors
embedded in the resection cavity could facilitate a prompter detection of tumor
recurrence, compared to the current strategy that depends on interval MRI.
Sensor arrays could thus be designed to register changes in tissue impedance,
hypoxia, pH, or temperature to identify the hallmark signs of tumor progres-
sion. This early warning system would allow proactive rather than reactive
initiation of secondary therapies. Furthermore, the integration of miniaturized
sensor arrays with an NEMS component to destroy adjacent tissue could enable
the immediate in situ ablation of recurring tumors. The administration of local
therapies through this neurally embedded system (e.g., hyperthermia induced
by passing a current between two electrodes, ultrasound or UV light, or release
of an aliquot of chemotherapy) could minimize the side effects of systemically
administered therapies. Of course, the introduction of foreign bodies such as
NEMS devices into the brain is inevitably associated with a certain degree of
parenchymal damage and local neuronal death, along with risks of bleeding, in-
fection, and seizures. Foreign bodies can also cause the activation of microglia
and astrocytes and reactive gliosis, which in turn can hinder the function of im-
planted NEMS devices. Future work will thus need to look into ways to improve
the biocompatibility and safety of implantable devices.
5.4 MEMS
Micro-Electro-Mechanical Systems, or MEMS, is a technology that in its most
general form can be defined as miniaturized mechanical and electro-mechanical
elements (i.e., devices and structures) that are made using the techniques of mi-
crofabrication. The critical physical dimensions of MEMS devices can vary from
well below one micron on the lower end of the dimensional spectrum, all the
way to several millimeters. Likewise, the types of MEMS devices can vary from
relatively simple structures having no moving elements, to extremely complex
electromechanical systems with multiple moving elements under the control of
integrated microelectronics. The one main criterion of MEMS is that there are
at least some elements having some sort of mechanical functionality whether or
not these elements can move. The term used to define MEMS varies in different
parts of the world. In the United States they are predominantly called MEMS,
while in some other parts of the world they are called “Microsystems Technol-
ogy” or “micromachined devices”.
Over the past several decades MEMS researchers and developers have demon-
strated an extremely large number of microsensors for almost every possible
sensing modality including temperature, pressure, inertial forces, chemical species,
magnetic fields, radiation, etc. Remarkably, many of these micromachined
sensors have demonstrated performances exceeding those of their macroscale
counterparts. That is, the micromachined version of, for example, a pressure
transducer, usually outperforms a pressure sensor made using the most pre-
cise macroscale level machining techniques. Not only is the performance of
MEMS devices exceptional, but their method of production leverages the same
batch fabrication techniques used in the integrated circuit industry – which can
translate into low per-device production costs, as well as many other benefits.
Consequently, it is possible not only to achieve stellar device performance, but
to do so at a relatively low cost.
More recently, the MEMS research and development community has demon-
strated a number of microactuators, including: microvalves for control of gas and
liquid flows; optical switches and mirrors to redirect or modulate light beams;
independently controlled micromirror arrays for displays; microresonators for
a number of different applications; micropumps to develop positive fluid pres-
sures; and microflaps to modulate airstreams on airfoils, as well as many others.
Surprisingly, even though these microactuators are extremely small, they fre-
quently can cause effects at the macroscale level; that is, these tiny actuators
can perform mechanical feats far larger than their size would imply. For exam-
ple, researchers have placed small microactuators on the leading edge of airfoils
of an aircraft and have been able to steer the aircraft using only these micro-
miniaturized devices.
The real potential of MEMS starts to become fulfilled when these miniatur-
ized sensors, actuators, and structures can all be merged onto a common sili-
con substrate along with integrated circuits (i.e., microelectronics). While the
electronics are fabricated using integrated circuit (IC) process sequences (e.g.,
CMOS, Bipolar, or BICMOS processes), the micromechanical components are
fabricated using compatible ”micromachining” processes that selectively etch
away parts of the silicon wafer or add new structural layers to form the mechan-
ical and electromechanical devices. It is even more interesting if MEMS can
be merged not only with microelectronics, but with other technologies such as
photonics and nanotechnology.
While more complex levels of integration are the future trend of MEMS
technology, the present state-of-the-art is more modest and usually involves a
single discrete microsensor, a single discrete microactuator, a single microsensor
integrated with electronics, a multiplicity of essentially identical microsensors
integrated with electronics, a single microactuator integrated with electronics,
or a multiplicity of essentially identical microactuators integrated with elec-
tronics. Nevertheless, as MEMS fabrication methods advance, the promise is
an enormous design freedom wherein any type of microsensor and any type of
microactuator can be merged with microelectronics as well as photonics, nan-
otechnology, etc., onto a single substrate.
This will enable the development of smart products by augmenting
the computational ability of microelectronics with the perception and control
capabilities of microsensors and microactuators. Microelectronic integrated cir-
cuits can be thought of as the ”brains” of a system and MEMS augments this
decision-making capability with ”eyes” and ”arms”, to allow microsystems to
sense and control the environment. Sensors gather information from the envi-
ronment through measuring mechanical, thermal, biological, chemical, optical,
and magnetic phenomena. The electronics then process the information de-
rived from the sensors and through some decision making capability direct the
actuators to respond by moving, positioning, regulating, pumping, and filter-
ing, thereby controlling the environment for some desired outcome or purpose.
Furthermore, because MEMS devices are manufactured using batch fabrication
techniques, similar to ICs, unprecedented levels of functionality, reliability, and
sophistication can be placed on a small silicon chip at a relatively low cost.
MEMS technology is extremely diverse and fertile, both in its expected ap-
plication areas, as well as in how the devices are designed and manufactured.
Already, MEMS is revolutionizing many product categories by enabling com-
plete systems-on-a-chip to be realized.
Nanotechnology promises enormous benefits, but it poses chal-
lenges as well.
Some experts believe that nanotechnology promises to: (a) allow us to put es-
sentially every atom or molecule in the place and position desired – that is,
exact positional control for assembly; (b) allow us to make almost any structure
or material consistent with the laws of physics that can be specified at the
atomic or molecular level; and (c) allow us to have manufacturing costs not
greatly exceeding the cost of the required raw materials and energy used in
fabrication (i.e., massive parallelism).
[...] substrate is a MEMS device as well. In fact, a variety of MEMS technologies
are required in order to interface with the nano-scale domain.
Likewise, many MEMS technologies are becoming dependent on nanotech-
nologies for successful new products. For example, the crash airbag accelerom-
eters that are manufactured using MEMS technology can have their long-term
reliability degraded due to dynamic in-use stiction effects between the proof
mass and the substrate. A nanotechnology called self-assembled monolayer
(SAM) coating is now routinely used to treat the surfaces of the moving
MEMS elements so as to prevent stiction effects from occurring over the prod-
uct’s life.
Many experts have concluded that MEMS and nanotechnology are two differ-
ent labels for what is essentially a technology encompassing highly miniaturized
things that cannot be seen with the human eye. Note that a similar broad
definition exists in the integrated circuits domain which is frequently referred
to as microelectronics technology even though state-of-the-art IC technologies
typically have devices with dimensions of tens of nanometers. Whether or not
MEMS and nanotechnology are one and the same, it is unquestioned that there
are overwhelming mutual dependencies between these two technologies that will
only increase in time. Perhaps what is most important are the common bene-
fits afforded by these technologies, including: increased information capabilities;
miniaturization of systems; new materials resulting from new science at minia-
ture dimensional scales; and increased functionality and autonomy for systems.