
Memcomputing: a computing paradigm to store and process information on the same physical platform

Massimiliano Di Ventra1 and Yuriy V. Pershin2

arXiv:1211.4487v2 [cs.ET] 8 Apr 2013

1 Department of Physics, University of California, San Diego, California 92093-0319, USA
2 Department of Physics and Astronomy and University of South Carolina Nanocenter, University of South Carolina, Columbia, South Carolina 29208, USA

In present-day technology, storing and processing of information occur on physically distinct regions of space. Not only does this result in space limitations; it also translates into unwanted delays in retrieving and processing the relevant information. There is, however, a class of two-terminal passive circuit elements with memory (memristive, memcapacitive and meminductive systems, collectively called memelements) that perform both information processing and storing of the initial, intermediate and final computational data on the same physical platform. Importantly, the states of these memelements adjust to input signals and provide analog capabilities unavailable in standard circuit elements, resulting in adaptive circuitry and providing analog massively-parallel computation. All these features are tantalizingly similar to those encountered in the biological realm, thus offering new opportunities for biologically-inspired computation. Of particular importance is the fact that these memelements emerge naturally in nanoscale systems, and are therefore a consequence and a natural by-product of the continued miniaturization of electronic devices. We discuss the various possibilities offered by memcomputing, outline the criteria that need to be satisfied to realize this paradigm, and provide an example showing the solution of the shortest-path problem, demonstrating also the healing property of the solution path.

I. INTRODUCTION

In conventional computers, information is stored in volatile (random-access memory, RAM) and non-volatile (e.g., hard drives, solid-state drives) memories, and it is then sequentially processed by a central processing unit (CPU). Such a mode of operation requires a significant amount of information transfer to and from the CPU and the appropriate memories. This necessarily imposes limits on the architecture's performance and scalability. For instance, in the traditional von Neumann architecture, the rate at which data is transferred between the CPU and the memory units is the true limiting factor of computational speed, the so-called von Neumann bottleneck1.

A possible way to partially reduce this problem is to employ parallel computing, in which the program execution is distributed over several processing cores that, in many configurations, access a physically close (local) memory at a faster rate than the (non-local) memory of the other processors. However, while several computing cores are normally combined in modern CPUs, considerable scaling on a single workstation is achieved only with specialized units (e.g., graphics processing units, GPUs) that have traditionally been used in video games and, only recently, also in scientific computing2.

A non-incremental change in computing performance therefore requires a paradigm shift from the traditional von Neumann architecture (or similar offsprings) to novel and efficient massively-parallel computing schemes, likely based on non-traditional electronic devices. Quantum computing has been hailed as one such scheme since its inception, attributed to Richard Feynman3. The parallelism in quantum computing essentially relies on the ability of a physical system to be in a superposition of states, thus allowing a massively-parallel solution of specific problems, such as integer factorization4, that cannot be practically handled by a classical computer. However, despite much effort in the past two decades, a practical quantum computer able to outperform a classical computer even in its most simple operations has yet to be fabricated. One of the main issues is related to the non-unitary evolution of quantum states when the system is in interaction with one or more environments, including the environment (apparatus) that has to eventually read the outcome of the computation5,6.
Can we then envision an alternative computing paradigm such that i) it is intrinsically massively-parallel, and ii) its information storing and computing units are physically the same? Our brain seems to fit such a requirement. Even though the full topology of the human brain is not yet known (the Human Connectome Project has been launched to address precisely this issue7), we know that the neurons and their connections (synapses) store and process information simultaneously, and that their operation is collective and adaptive. Moreover, the brain's operation is, to a certain extent, stable with respect to the failure of some amount of neurons. Therefore, our brain boasts an embedded self-healing mechanism, namely the ability to bypass broken connections with alternative paths that self-reinforce8.
In order to reproduce such features in electronic circuits, we would need elements that adapt to the incoming signal and retain information on demand. Traditional transistor-based architectures would indeed fit these requirements. However, transistors are active three-terminal devices, and their operation comes at the cost of relatively high power consumption and low density. While active elements are unlikely to be eliminated from electronics altogether, it would be desirable to keep them to a minimum and instead leave the storing and processing of information to passive elements, preferably with dimensions at the nanometer scale, namely comparable to, or even smaller than, their biological counterparts.

FIG. 1: (Color online) Symbols of the three memory elements that we consider for memcomputing: memcapacitive (left), memristive (center), and meminductive (right) systems.
In this work we introduce the concept of memcomputing, computing using memory circuit elements (memelements)9, which indeed satisfies requirements i) and ii), and does not rely on active elements as the main tools of operation. Memelements are two-terminal electronic devices whose resistance, capacitance or inductance keeps track of the system's past dynamics. They arise naturally at the nanoscale due to the delayed response of electrons and ions in condensed matter systems subject to external time-dependent fields9. Their general definition is quite straightforward: an n-th order u-controlled memory circuit element is defined by the set of equations9

y(t) = g(x, u, t) u(t),   (1)
dx/dt = f(x, u, t),   (2)

where f is a continuous n-dimensional vector function of internal state variables (e.g., spin polarization, position of ionic dopants, etc.10), u(t) is the input (voltage, charge, current, or flux), and y(t) is the output (the complementary constitutive variable of the voltage, charge, current, or flux). The following relations (including interchanges of u(t) and y(t) in each pair) then define the three different classes of elements (their symbols are shown in Fig. 1):

Memristive:    u(t) = current,  y(t) = voltage,   (3)
Memcapacitive: u(t) = charge,   y(t) = voltage,   (4)
Meminductive:  u(t) = current,  y(t) = flux.      (5)

In what follows, by massively-parallel processors we understand arrays of memelements combined with traditional circuit components. In these structures, information processing and storage are realized on the same platform. The computation is realized as the evolution of the system connected to external voltage (or current) sources. The collective circuit dynamics then results in an unprecedented increase11 in computational power, as we demonstrate below by applying memcomputing architectures to certain graph-theory optimization problems.

Massively-parallel analog and digital computing architectures based on memelements can be designed in several different ways using a variety of physical systems with memory10. All of them, however, have to satisfy general fundamental criteria in order for this paradigm to be of value. We therefore formulate the six most important criteria for memcomputing. These criteria should be used as a guideline for the rational design of computing architectures. Based on these criteria, we design a memristive processor and consider certain features of its collective dynamics.

Arguably, one of the most interesting aspects of its dynamics is self-reinforcement. This feature shares a striking similarity with the collective behavior of certain biological organisms, such as ant colonies12. Therefore, the massively-parallel analog processors we discuss are ideally suited for the hardware realization of a family of related ant-search optimization algorithms. As an illustration, we consider in this paper only the solution of the shortest-path problem, an important problem for several technological applications (e.g., transportation), and demonstrate the healing property of the solution path. We will then conclude this paper with considerations on future directions.
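As a concrete, if schematic, illustration of Eqs. (1) and (2), the sketch below integrates a first-order, current-controlled memristive system with forward Euler. The particular choices of f (a linear, current-driven drift of a bounded state variable) and g (a memristance interpolating between two limiting values) are our own illustrative assumptions, not models taken from this paper.

```python
import numpy as np

def simulate(u, dt, x0, Ron=10.0, Roff=200.0, beta=50.0):
    """Forward-Euler integration of Eqs. (1)-(2) for a first-order,
    current-controlled memristive system: u is the current, y the voltage,
    and g(x) = R(x) the memristance. f and g are illustrative choices."""
    x, y = x0, []
    for uk in u:
        R = Roff + (Ron - Roff) * x                 # g: memristance, x in [0, 1]
        y.append(R * uk)                            # Eq. (1): y = g(x, u, t) u
        x = min(1.0, max(0.0, x + dt * beta * uk))  # Eq. (2): dx/dt = f(x, u, t)
    return y

# A constant positive current drives the device from "OFF" (x = 0) toward
# "ON" (x = 1), so the voltage response decays as the memristance drops:
v = simulate([0.1] * 1000, dt=1e-3, x0=0.0)
```

Replacing g with a capacitance (and u with a charge) would turn the same skeleton into a memcapacitive-system sketch, per the table of classes above.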

II. MEMCOMPUTING CRITERIA

We now formulate a few basic criteria that are necessary and/or desirable for the implementation of computing with memory. Some of them are similar to those introduced by DiVincenzo in the field of quantum computation13, which in turn are similar to those required by any computing paradigm. Others are specific to memcomputing. As briefly mentioned above, both memcomputing and quantum computation rely on massive parallelism of information processing. The basic mechanisms of the massive parallelism, however, are quite different. While quantum computing relies on the superposition of states, memcomputing utilizes the collective dynamics of a large number of (essentially classical) systems. Its specific criteria are then as follows.
1. Scalable massively-parallel architecture with combined information processing and storage

Fundamentally, memcomputing is performed by an electronic circuit containing a collection of memelements (memristive, memcapacitive or meminductive systems, or their combinations) that simultaneously allow for information processing and storage. By definition, memelements store information in analog form in their response characteristics (in addition to the ability of memcapacitive and meminductive systems to store information in the electric and magnetic field energies, respectively). It is expected that all, or at least a large number of, memelements are involved in the parallel computation. This is the basis of the potential advantage of memcomputing over the traditional sequential one.

Combined information processing and storage is a useful feature of memelements. It simplifies the hardware design, reducing the number of components needed to realize computing functions. In this way, it becomes possible to achieve higher integration densities as well as more complex connectivities. Clearly, both factors improve the hardware functionality. For example, such a feature has recently been employed in memristive binary logic circuits, where the same memristive devices serve simultaneously as a gate and a latch14-17, or in memristive networks solving a maze problem11. Moreover, it has been shown that the performance of logic circuits improves if several types of memelements are combined together17. This fact should also be taken into account in memcomputing architecture design.
2. Sufficiently long information storage times

Next, let us consider requirements that should be imposed on individual memelements. First of all, they should provide sufficiently long information storage times; at the very least, much longer than the calculation time. Ideally, one would use memelements with non-volatile memory storage capabilities, such as emergent non-volatile memory cells10,18-22. Importantly, many of these elements are realized at the nanoscale, and thus many of them can be incorporated on a single chip.

Additionally, it is desirable to use memelements with low power consumption and short read/write times. Emergent non-volatile memory cells satisfy these requirements and thus are ideal candidates for memcomputing architectures. For example23, CMOS-compatible nanoionic resistive switches based on amorphous Si offer promising switching characteristics in terms of write speed (<10 ns), endurance (>10^5 cycles), retention (about 7 years), and scaling potential (<30 nm). Asymmetric high-endurance Ta2O5-x/TaO2-x bilayer structures sustain up to 10^12 write cycles24. Finally, the energy needed to write a bit of information into a nanoionic cell can be as small as 5x10^-14 J25, which represents a major energy saving.
3. The ability to initialize memory states

Memelements should be initialized before the computation begins. The initialization could be provided automatically in the case of volatile memelements, and may not be needed for memelements storing intermediate/final values of parameters. In any case, the computing device should provide a mechanism for the initialization of the relevant memelements. This is expected to be an easy task because, typically, memdevice response functions change between two limiting values. For instance, in the case of the most studied bipolar memristive devices, the application of a high-amplitude pulse of a given polarity for a sufficiently long time interval guarantees that the device switches into one of its limiting states (depending on the pulse polarity).
4. Mechanism(s) of collective dynamics; strong "memory content"

The device architecture should provide a mechanism of collective dynamics in which the evolution of a memdevice state depends on the states of several or all other devices. For example, in the case of memristive logic, voltages applied to a pair of memristive devices change the state of one of them depending on the state of the other. In order to obtain reliable switching, we also require a strong "memory content": the device characteristics in its limiting states should be sufficiently different to provide a significant influence on other memelements.
5. The ability to read the final result (from relevant memelements)

Once the computation is performed, the read-out of the information needs to be done (preferably) without modifying the states of the individual memelements. This can be accomplished if we choose a reading input u(t) such that the device state (described by the state equation (2)) stays constant or varies little. Generally, this is not a problem in a system based on memelements with a threshold10. It can also be done with threshold-less devices by appropriately tailoring the reading input u(t) to minimize the state change.
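This read-out condition is easy to see in a threshold-type model of the kind used in the Methods section: the state moves only when |I| exceeds the threshold I_t, so any sub-threshold read current leaves the stored value untouched. The sketch below uses the Methods parameter values (memristance limits, beta, I_t); the pulse lengths and currents are illustrative assumptions.

```python
def step(x, I, dt, beta=1e6, It=0.010, xmin=10.0, xmax=200.0):
    """One Euler step of a threshold-type memristive device (cf. Methods):
    the memristance x changes only when |I| exceeds the threshold It."""
    dxdt = 0.0
    if abs(I) >= It:
        sgn = 1.0 if I > 0 else -1.0
        dxdt = sgn * beta * (abs(I) - It)
    return min(xmax, max(xmin, x + dt * dxdt))

x = 150.0                          # some stored memristance (Ohms)
for _ in range(1000):              # long read with I = 1 mA < It
    x = step(x, 0.001, dt=1e-6)
x_after_read = x                   # state untouched by the sub-threshold read

for _ in range(1000):              # write pulse with I = 20 mA > It
    x = step(x, 0.020, dt=1e-6)
x_after_write = x                  # memristance raised by the write pulse
```

However long the read lasts, x_after_read equals the stored 150 Ohms exactly, while the super-threshold write pulse moves the state; this is the mechanism that makes non-destructive read-out trivial for threshold devices.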
6. Robustness against small imperfections and noise

Insensitivity to small imperfections of computer components or small "damages" of the architecture should be an essential feature of (most) memcomputing architectures. Indeed, small imperfections are always introduced during the fabrication of each element and are therefore unavoidable. The computer architecture should thus be robust with respect to such imperfections. "Damages" are more serious deviations from the ideal computing structure that can be introduced during the fabrication and/or use of the computing device. In the human brain, for example, although multiple neurons die each day, the overall brain functionality is not influenced for many years. Likewise, it is reasonable to require that the operation of memelement networks is not sensitive to relatively small "damages".

FIG. 2: Memristive processor consisting of a network of memristive elements in which each grid point is attached to several basic units. Each basic unit involves two memristive devices connected symmetrically (in parallel) and two switches (field-effect transistors). The switches provide access to individual memristive devices, while the in-parallel connection symmetrizes the response of the bipolar memristive elements.

III. MEMCOMPUTING SCHEMES

Several schemes that satisfy all or some of the above criteria have been suggested recently. These schemes include neuromorphic computing with memristive synapses17,26-31, massively-parallel computing with memristive networks11, logic with memory circuit elements14,16,17,32, and memristive cellular automata33. For instance, in the work of Ref. 11 a memristive network has been used to solve a popular optimization problem, namely the maze problem. Criterion 1 is fully satisfied by that network: the solution of the maze is found in an analog massively-parallel fashion, and it is locally stored in the system for essentially an unlimited time (criterion 2). The network can also be initialized easily, as explained in Ref. 11, thus satisfying criterion 3. The dynamics of the system is collective, and the difference between the low-resistance state and the high-resistance state was chosen so as to easily perturb all memelements in the system (criterion 4). Although not explicitly discussed in that work, criterion 5 can be easily accomplished with an appropriate choice of memelements. Finally, criterion 6 was naturally built into the problem: any change of topology of the network, and the consequent emergence of new maze solution(s), could be handled effortlessly. Similar considerations apply to the other schemes proposed in the literature.

We note at this point that, although they have not been extensively studied yet, memcapacitors and meminductors9 can also be used in the above memcomputing schemes by replacing memristors, albeit in a modified form. Since memcapacitors and meminductors may in principle be constructed to consume little or virtually no energy, their use in memcomputing is potentially more energy-efficient than the use of memristors. An important milestone in this field would be the demonstration of a memcomputing device with computing capabilities and power consumption comparable to (or better than) those of the human brain.
IV. APPLICATION OF MEMRISTIVE NETWORKS TO THE SHORTEST-PATH PROBLEM

We now provide an explicit example of computing with memelements, specifically with memristive networks, in order to exemplify further the criteria given above and the possibilities offered by this paradigm. Details of the specific memristive systems used and of the simulations can be found in the Methods section. The memristive network (processor) consists of a square array of grid points connected by basic units involving two bipolar memristive elements and two switches (see Fig. 2). Although not strictly necessary, in this example we have chosen the design of each basic unit to be symmetric, so as to conveniently make the circuit operation independent of the sign of the applied voltage. The switches serve two functions: i) to provide independent access to each individual memristive device, and ii) to define the network topology, if needed (see Ref. 11 for an example). Thus, the architecture of Fig. 2 provides access to each individual memristive device for the purpose of initialization and of reading the calculation result. The calculation consists in the evolution of the network state, defined as the collection of states of all memristive devices, when an appropriate pulse sequence is applied to a subset of grid points.
In order to demonstrate the main principles of memcomputing, let us consider the solution of the shortest-path problem along a symmetry direction of the memristive processor36. Specifically, we would like to find the shortest path between two pre-selected nodes (shown by red arrows in Fig. 3(a) and (b)) in the network. The algorithm used for this calculation is the following:

Initialization stage: all memristive devices are pre-initialized into the "OFF" (high-resistance) state.

Calculation stage: a voltage pulse of suitable amplitude and duration is applied to the pair of pre-selected nodes.

Calculation reading: the shortest path is given by the subset of basic units in their "ON" (low-resistance) state37.
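The three stages can be played out on a deliberately reduced toy network: instead of the full grid, take just two competing series chains of "OFF" devices between the input and output nodes, one short (two units) and one long (four units). The device parameters below follow the Methods section, but the topology, pulse length and time step are our own illustrative assumptions.

```python
import numpy as np

Roff, Ron, beta, It, dt, V = 200.0, 10.0, 1e6, 0.010, 1e-4, 6.0

# Initialization stage: every memristive unit starts "OFF" (high resistance).
short = np.full(2, Roff)           # short chain: 2 units in series
long_ = np.full(4, Roff)           # long chain: 4 units in series

# Calculation stage: a constant voltage pulse across both chains in parallel.
for _ in range(2000):
    I_short = V / short.sum()      # series current in the short chain
    I_long = V / long_.sum()       # series current in the long chain
    # Symmetrized basic units: any super-threshold current lowers R, so the
    # path carrying more current self-reinforces (ant-trail style).
    short = np.clip(short - dt * beta * max(I_short - It, 0.0), Ron, Roff)
    long_ = np.clip(long_ - dt * beta * max(I_long - It, 0.0), Ron, Roff)

# Calculation reading: the "ON" (low-resistance) units mark the shortest path.
```

With these numbers the short chain starts at 15 mA (above threshold) and runs away to the "ON" state, while the long chain sits at 7.5 mA (below threshold) and never switches. Note the simplification: under an ideal voltage source the two chains are decoupled, whereas in the real grid the winning path also steals current from the losing ones.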
Fig. 3(a) shows the initial state of the network, with all memristive devices in their "OFF" states. At the initial moment of time t = 0, a single constant-amplitude voltage pulse is applied to the input/output nodes shown by red arrows in Fig. 3(a). The final state of the memristive network (the calculation result) is presented in Fig. 3(b). Clearly, the two pre-selected nodes are connected by a chain of memristive devices in the "ON" state, giving the solution of the shortest-path problem. Note that Figs. 3(a) and (b) depict the memristance of each basic unit, consisting of two memristive devices.

FIG. 3: (Color online). Solution of the shortest path problem for the pair of nodes indicated by red arrows in (a) and (b) in an 11x11 memristive network. (a) Initial and (b) final states of the memristive network. Here, the memristance of each basic unit (involving two memristive devices) is represented by a color. Distributions of electron current corresponding to (a) and (b) are shown in (c) and (d), respectively. See Methods for details of all calculations.
Let us now consider the network evolution, the dynamics of the calculation stage, in more detail. First of all, we would like to mention a similarity between the process of computing as performed by the memristive processor and the ant-colony optimization algorithm12,34. The latter is an adaptable algorithm inspired by the observation that ants, upon finding food, mark their trails with pheromones, thus attracting other ants in order to reinforce the trail closest to their nest. A similar type of reinforcement is observed in the memristive network dynamics. Fig. 3(c) shows the current distribution in the network at the initial moment of time, when all the memristive devices are in the "OFF" state. In this case, the current flows along multiple paths. However, since the rate of memristance change is proportional to the current, the memristance of the least resistive path decreases faster, attracting more and more current. Therefore, the current flowing through the least resistive path reinforces this path, similarly to the trail reinforcement of the ant colony; see Fig. 3(d).
Moreover, it is interesting to note that the problem solution develops gradually, starting from both pre-selected nodes. This is clearly seen in Fig. 4(a), which presents the rate of change of the memristances along the solution path as a function of time.

In order to quantify the system evolution even further, we define a network entropy with respect to the memristances or, equivalently, the currents in the network. For example, by considering the currents through a vertical cross section of the network at its center, we can define the network entropy as (note that in the absence of capacitive components the total current is independent of the choice of this cross section at any given time, provided the cross section crosses the shortest-path solution and spans the far upper and lower ends of the network without self-intersecting)

σ(t) = - Σ_{i=1}^{N} Ĩ_ij(t) ln Ĩ_ij(t),   (6)

where N is the number of basic units connected in the horizontal direction, j is the index of the row of horizontal basic units crossed by the cross section, Ĩ_ij = I_ij/I_tot is the normalized current through a horizontally connected basic unit, and I_tot = Σ_{i=1}^{N} I_ij. Fig. 4(b) demonstrates (with N = 11 and j = 5) that the network entropy decreases in time as the computation proceeds. In statistical physics, the entropy is related to the number of states available to the system. Here, its decrease can thus be interpreted as due to the decrease in the number of paths available for the current, with a more pronounced decrease the larger the memory content in the system (as represented by the ratio R_off/R_on).

FIG. 4: (a) Dynamics of resistance switching within the calculation stage corresponding to the shortest-path problem solution presented in Fig. 3. This plot shows that the solution emerges from both sides and propagates to the center. (b) Network entropy as a function of time from Eq. (6) for networks of different memory content. A slight increase of the entropy for the R^M_off/R^M_on = 10 curve at a time of about 0.4 (arb. units) is due to a delayed switching of four vertical units directly connected to the input/output nodes.

In order to prove this last statement, we perform a calculation similar to that presented in Fig. 3 assuming, however, a much smaller difference between R^M_on and R^M_off of each memristive device (R^M_off/R^M_on = 1.25 compared to R^M_off/R^M_on = 20 used in the previous calculation). Fig. 5 demonstrates that now the solution of the shortest-path problem cannot be found exactly at the given bias. In fact, in addition to the switching of the memristive devices directly connecting the input and output nodes, many other memristive devices are also switched into the "ON" state (see Fig. 5). This example demonstrates the importance of the strong "memory content" requirement (criterion 4).

In order to analyze criterion 6 in more detail, let us now damage the shortest-path problem solution shown in Fig. 3(b) by removing three grid points in the central part of the network, as shown in Fig. 6(a). We note that the memristive network has a remarkable ability to repair damaged solutions, the healing ability we have mentioned above. Indeed, this property is close to the self-healing ability that can be ascribed to systems or processes which, by nature or by design, tend to correct any disturbances.

The healing of the damaged solution is performed by applying a single pulse of a certain amplitude and duration to the input and output nodes. The result presented in Fig. 6(b) shows that a new path connecting the two pieces of the initial shortest-path solution develops below the damaged region. The three missing grid points have been removed intentionally in an asymmetric fashion in order to show that the healing occurs along the shortest possible path around the damaged region. It is easy to understand the origin of this healing process: as soon as we switch on the pulse between the input and output nodes, the current flows through all possible paths. However, the shortest one is again the one that is mostly affected, and thus reinforced during the dynamics.
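The cross-section entropy of Eq. (6) is straightforward to evaluate; the sketch below is ours, not the paper's code. For the 11 horizontal units of the middle row, a uniform current distribution (all paths still open) gives ln 11 ≈ 2.4, consistent with the starting value of the curves in Fig. 4(b), while a single surviving path gives zero:

```python
import numpy as np

def cross_section_entropy(I_row):
    """Eq. (6): S = -sum_i p_i ln p_i with p_i = I_i / I_tot, where I_i are
    the currents through the N horizontal basic units of one cross section."""
    I = np.abs(np.asarray(I_row, dtype=float))
    p = I / I.sum()                # normalized currents Ĩ_ij = I_ij / I_tot
    p = p[p > 0]                   # 0 ln 0 -> 0 by convention
    return float(-(p * np.log(p)).sum())

e_uniform = cross_section_entropy([1.0] * 11)          # current in every path
e_single = cross_section_entropy([1.0] + [0.0] * 10)   # one path left
```

The entropy thus interpolates between ln N (maximally spread current) and 0 (fully collapsed onto one path) as the computation proceeds.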


FIG. 5: (Color online). Solution of the shortest path problem by a network of low memory content, R^M_off/R^M_on = 1.25.


FIG. 6: (Color online). Healing (b) of a damaged (a) solution. To heal the solution damage in (a), a single square pulse of appropriate width and duration is applied to the input and output nodes shown by the red arrows in (b).

The stability of the shortest-path problem solution against small imperfections of the system (e.g., finite-width distributions of the threshold currents and of the limiting values of memristance, etc.) is evident, and therefore does not deserve a closer inspection.

V. CONCLUSION

In conclusion, we have discussed the concept of memcomputing: storing and processing of information on the same physical platform. In particular, we have outlined the main criteria that need to be satisfied in order to realize such a paradigm, and we have analyzed a specific example to show the healing properties of the solution. Unlike other promising but more speculative proposals, like quantum computing, memcomputing is already a practical reality, at least in regard to some applications such as digital logic. It bypasses several of the bottlenecks of present-day computing architectures, and its constitutive units (memristors, memcapacitors, and meminductors) are already widely available. Indeed, these elements emerge quite naturally with the increasing miniaturization of electronic devices. The computational possibilities offered by this paradigm are varied, and due to its tantalizing similarities both with some features of the brain and with the collective properties of colonies of living organisms, it promises to open new directions in neuromorphic architectures and biological studies.

VI. ACKNOWLEDGMENT

This work has been partially supported by NSF grants No. DMR-0802830 and ECCS-1202383, and by the Center for Magnetic Recording Research at UCSD.

Methods

The numerical results presented in this paper have been obtained for a network of current-controlled bipolar memristive devices with threshold. Each memristive device is described by

V_M = R(x) I_M,   (7)

and

dx/dt = 0                           for |I_M| < I_t,
dx/dt = β sgn(I_M) (|I_M| - I_t)    for |I_M| >= I_t,   (8)

where V_M and I_M = dq(t)/dt denote the (time-dependent) voltage across and current through the device, respectively; R(x) ≡ x is the memristance, which changes between the two limiting values R^M_on and R^M_off; x is the internal state variable; β is a constant describing the rate of change of the memristance when the magnitude of the electric current I_M exceeds the threshold current I_t; and sgn is the sign function. We note that a current-controlled threshold-type memristive device model of this kind was used to describe switching in bipolar memristive devices35. Moreover, many models of voltage-controlled memristive devices can easily be reformulated in current-controlled form10.
All numerical results were obtained for an 11×11 memristive network using, in all calculations, the following model parameters: R_off^M = 200 Ω, R_ij(t = 0) = R_off^M, γ = 10^6 Ω/(s·A), and I_t = 10 mA. Figures 3, 4(a), and 6 are obtained with R_on^M = 10 Ω and an applied voltage V = 6 V; Fig. 4(b) is found using V = 6, 6.75, 10, and 15.25 V for the R_off^M/R_on^M = 20, 10, 4, and 1.25 curves, respectively; Fig. 5 is plotted with R_on^M = 160 Ω and V = 15.25 V. Note that R_on^M and R_off^M refer to individual memristive devices, while R_on and R_off (used in Figs. 3, 5, and 6) represent the limiting values of the memristance of the basic unit. While the "OFF" state of the basic unit is attained when both memristive devices are in their "OFF" states, the "ON" state of the basic unit corresponds to the "ON"/"OFF" combination of single-device states. In our simulations, at each time step the potential at all grid points is found as the solution of Kirchhoff's current-law equations obtained using a sparse-matrix technique.
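This Kirchhoff solve can be illustrated with a small self-contained sketch. The 11×11 grid size and the 200 Ω OFF-state resistance are taken from the text; the corner placement of the input/output nodes and the uniform link conductances are our own simplifying assumptions (the actual simulations use two-device basic units), and a dense solve stands in here for the sparse-matrix technique so that the example needs only NumPy:

```python
import numpy as np

# Kirchhoff's current law on a uniform 11 x 11 resistive grid.
# Every link starts in its OFF state (200 Ohm); input/output corners
# are held at fixed potentials (Dirichlet conditions).

N = 11
g = 1.0 / 200.0             # link conductance

def node(i, j):
    """Flat index of grid point (i, j)."""
    return i * N + j

# Graph-Laplacian (conductance) matrix for nearest-neighbour links;
# KCL at a free node n reads sum_m G[n, m] * phi[m] = 0.
G = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        for di, dj in ((0, 1), (1, 0)):
            if i + di < N and j + dj < N:
                a, b = node(i, j), node(i + di, j + dj)
                G[a, a] += g; G[b, b] += g
                G[a, b] -= g; G[b, a] -= g

# Dirichlet conditions: input corner held at V, output corner at 0 V.
V = 6.0
src, snk = node(0, 0), node(N - 1, N - 1)
rhs = np.zeros(N * N)
for n, v in ((src, V), (snk, 0.0)):
    G[n, :] = 0.0
    G[n, n] = 1.0
    rhs[n] = v

phi = np.linalg.solve(G, rhs)   # potentials at all grid points
print(phi[node(5, 5)])          # close to V/2 = 3.0 by symmetry
```

In the full simulation the link conductances would then be updated from the memristance change of Eq. (9) using the computed node potentials, and the solve repeated at each time step until the steady state is reached.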

∗ Electronic address: [email protected]
† Electronic address: [email protected]
1. J. Backus, Comm. ACM 21, 613 (1978).
2. J. D. Owens, D. Luebke, N. Govindaraju, M. Harris, J. Krüger, A. Lefohn, and T. J. Purcell, Computer Graphics Forum 26, 80 (2007).
3. R. P. Feynman, Foundations of Physics 16, 507 (1986).
4. P. W. Shor, SIAM J. Comput. 26, 1484 (1997).
5. I. L. Chuang, R. Laflamme, P. W. Shor, and W. H. Zurek, Science 270, 1633 (1995).
6. T. Pellizzari, S. A. Gardiner, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. 75, 3788 (1995).
7. http://www.humanconnectomeproject.org/.
8. W. M. Cowan, T. C. Südhof, and C. F. Stevens, eds., Synapses (The Johns Hopkins University Press, 2001).
9. M. Di Ventra, Y. V. Pershin, and L. O. Chua, Proc. IEEE 97, 1717 (2009).
10. Y. V. Pershin and M. Di Ventra, Advances in Physics 60, 145 (2011).
11. Y. V. Pershin and M. Di Ventra, Phys. Rev. E 84, 046703 (2011).
12. N. Monmarché, F. Guinand, and P. Siarry, Artificial Ants (Wiley-ISTE, 2010).
13. D. P. DiVincenzo, Fortschr. Phys. 48, 771 (2000).
14. D. Strukov and K. Likharev, Nanotechn. 16, 888 (2005).
15. G. S. Snider and R. S. Williams, Nanotechnology 18, 035204 (2007).
16. J. Borghetti, G. S. Snider, P. J. Kuekes, J. J. Yang, D. R. Stewart, and R. S. Williams, Nature 464, 873 (2010).
17. Y. V. Pershin and M. Di Ventra, Proc. IEEE 100, 2071 (2012).
18. R. Waser and M. Aono, Nat. Mat. 6, 833 (2007).
19. J. C. Scott and L. D. Bozano, Adv. Mat. 19, 1452 (2007).
20. A. Sawa, Mat. Today 11, 28 (2008).
21. S. F. Karg, G. I. Meijer, J. G. Bednorz, C. T. Rettner, A. G. Schrott, E. A. Joseph, C. H. Lam, M. Janousch, U. Staub, F. La Mattina, et al., IBM J. Res. Dev. 52, 481 (2008).
22. G. W. Burr, B. N. Kurdi, J. C. Scott, C. H. Lam, K. Gopalakrishnan, and R. S. Shenoy, IBM J. Res. Dev. 52, 449 (2008).

The corresponding change in the memristive states was computed using Eq. (9). The width of the voltage pulse is selected to be sufficiently long to reach the steady state in each calculation.

23. S. H. Jo, K.-H. Kim, and W. Lu, Nano Lett. 9, 870 (2009).
24. M. Lee, C. B. Lee, D. Lee, S. R. Lee, M. Chang, J. H. Hur, Y. Kim, C. Kim, D. H. Seo, S. Seo, et al., Nature Materials 10, 625 (2011).
25. ITRS, The International Technology Roadmap for Semiconductors, 2009 Edition, http://www.itrs.net (2009).
26. G. S. Snider, SciDAC Review 10, 58 (2008).
27. B. Linares-Barranco and T. Serrano-Gotarredona, Proc. IEEE-NANO, pp. 601-604 (2009).
28. A. Afifi, A. Ayatollahi, and F. Raissi, IEICE El. Express 6, 148 (2009).
29. Y. V. Pershin and M. Di Ventra, Neural Networks 23, 881 (2010).
30. S. H. Jo, T. Chang, I. Ebong, B. B. Bhadviya, P. Mazumder, and W. Lu, Nano Lett. 10, 1297 (2010).
31. I. Ebong and P. Mazumder, Proc. IEEE 100, 2050 (2012).
32. E. Lehtonen and M. Laiho, in Proceedings of the 2009 International Symposium on Nanoscale Architectures (NANOARCH'09) (2009), p. 33.
33. M. Itoh and L. O. Chua, Int. J. Bif. Chaos 19, 3605 (2009).
34. A. Colorni, M. Dorigo, and V. Maniezzo, in Actes de la première conférence européenne sur la vie artificielle, Paris, France (Elsevier Publishing, 1991), pp. 134-142.
35. M. D. Pickett, D. B. Strukov, J. L. Borghetti, J. J. Yang, G. S. Snider, D. R. Stewart, and R. S. Williams, J. Appl. Phys. 106, 074508 (2009).
36. Different grid and connectivity patterns are needed to solve the shortest path problem in more general cases. Such a study is out of the scope of this publication, which focuses on general criteria for memcomputing.
37. The memristance of a basic unit is that of two memristive devices connected in parallel. Note that an in-parallel connection of two memristive devices can be described as a single higher-order memristive device.
