Literature Review
Egala, Bhaskara S., et al. [2021] The rapid developments in the Internet of Medical Things (IoMT) help smart healthcare systems deliver more sophisticated real-time services. At the same time, IoMT also raises many privacy and security issues, and the heterogeneous nature of these devices makes it challenging to develop a common security standard. Furthermore, existing cloud-centric IoMT healthcare systems depend on cloud computing for electronic health records (EHR) and medical services, which is not suitable for a decentralized IoMT healthcare system. In this article, the authors propose a novel blockchain-based architecture that provides decentralized EHR and smart-contract-based service automation without compromising system security and privacy. The architecture introduces a hybrid computing paradigm with blockchain-based distributed data storage to overcome the drawbacks of cloud-centric IoMT healthcare systems, such as high latency, high storage cost, and a single point of failure. A decentralized selective ring-based access control mechanism is introduced, along with device authentication and patient-record anonymity algorithms, to improve the system's security capabilities. The authors evaluated the latency and cost-effectiveness of data sharing on the proposed system using blockchain, and a logical system analysis reveals that the architecture's security and privacy mechanisms are capable of fulfilling the requirements of decentralized IoMT smart healthcare systems.
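Hybrid architectures of this kind typically keep the bulky EHR off-chain and anchor only an integrity hash of the anonymized record on the ledger. The sketch below illustrates that general pattern only; the class, field names, and anonymization step are hypothetical and not taken from the paper.

```python
import hashlib
import json

def anonymize(record):
    # Replace the direct patient identifier with a one-way pseudonym
    pseudonym = hashlib.sha256(record["patient_id"].encode()).hexdigest()[:16]
    cleaned = {k: v for k, v in record.items() if k != "patient_id"}
    cleaned["pseudonym"] = pseudonym
    return cleaned

class HybridEHRStore:
    """Off-chain record storage paired with an on-chain integrity hash."""

    def __init__(self):
        self.off_chain = {}  # stands in for edge/distributed storage
        self.on_chain = []   # toy append-only ledger of (record_id, digest)

    def put(self, record_id, record):
        blob = json.dumps(anonymize(record), sort_keys=True).encode()
        self.off_chain[record_id] = blob
        self.on_chain.append((record_id, hashlib.sha256(blob).hexdigest()))

    def verify(self, record_id):
        # A record is trusted only if its digest matches a ledger entry
        digest = hashlib.sha256(self.off_chain[record_id]).hexdigest()
        return any(rid == record_id and d == digest
                   for rid, d in self.on_chain)
```

Because only digests live on-chain, the design avoids the high storage cost the authors attribute to fully on-chain records, while tampering with the off-chain copy is still detectable.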
Shafagh, Hossein, et al. [2017] Today the cloud plays a central role in storing, processing, and distributing data. Despite contributing to the rapid development of IoT applications, the current cloud-centric IoT architecture has led to a myriad of isolated data silos that hinder the full potential of holistic data-driven analytics within the IoT. In this paper, the authors present a blockchain-based design for the IoT that brings distributed access control and data management. They depart from the current trust model, which delegates access control of user data to a centralized trusted authority, and instead empower users with data ownership. The design is tailored for IoT data streams and enables secure data sharing. The authors enable secure and resilient access-control management by utilizing the blockchain as an auditable and distributed access-control layer over the storage layer. They facilitate the storage of time-series IoT data at the edge of the network via a locality-aware decentralized storage system managed with blockchain technology. The system is agnostic of the physical storage nodes and also supports the utilization of cloud storage resources as storage nodes.
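The core idea of using the ledger as an auditable access-control layer in front of storage can be sketched as follows. This is an illustrative simplification, not the authors' system: the class and method names are hypothetical, and a real deployment would replace the in-memory event list with blockchain transactions.

```python
class AccessLedger:
    """Toy append-only event log standing in for the blockchain
    access-control layer."""

    def __init__(self):
        self.events = []  # auditable history of (action, grantee, stream)

    def grant(self, grantee, stream):
        self.events.append(("grant", grantee, stream))

    def revoke(self, grantee, stream):
        self.events.append(("revoke", grantee, stream))

    def allowed(self, grantee, stream):
        # Replay the log; the most recent event for this pair wins
        state = False
        for action, g, s in self.events:
            if (g, s) == (grantee, stream):
                state = action == "grant"
        return state

class StorageNode:
    """Storage layer that consults the ledger before serving reads."""

    def __init__(self, ledger):
        self.ledger = ledger
        self.streams = {}

    def append(self, stream, chunk):
        self.streams.setdefault(stream, []).append(chunk)

    def read(self, requester, stream):
        if not self.ledger.allowed(requester, stream):
            raise PermissionError("no active on-chain grant")
        return self.streams[stream]
```

Replaying the event log rather than mutating a permission table is what makes the access history auditable: every grant and revocation remains visible after the fact.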
Aleem Ali et al. [2021] Blockchain and the Internet of Things are prominent across the technical world, and both innovations have a major impact on our daily lives. The internet collects and transmits a vast amount of data from devices all over the world. IoT comprises heterogeneous machines operating over a wide variety of networks and transmitting large volumes of both sensitive and non-critical data, raising questions about the protection and ownership of the data collected from users. The paper proposes an Internet of Things security architecture for devices operating in a constrained space. A secure framework is proposed using blockchain for IoT-based data communication, and a performance evaluation compares the proposed system with an existing IoT-based system in terms of processing time and writing time.
Gündoğan, Cenk, et al. [2021] Content objects are confined data elements that carry
meaningful information. Massive amounts of content objects are published and exchanged
every day on the Internet. The emerging Internet of Things (IoT) augments the network edge
with reading sensors and controlling actuators that comprise machine-to-machine
communication using small data objects. IoT content objects are often messages that fit into
a single IPv6 datagram. These IoT messages frequently traverse protocol translators at
gateways, which break end-to-end transport and security of Internet protocols. To preserve
content security from end to end via gateways and proxies, the IETF recently developed
Object Security for Constrained RESTful Environments (OSCORE), which extends the
Constrained Application Protocol (CoAP) with content object security features commonly
known from Information Centric Networking (ICN). This paper revisits the current IoT
protocol architectures and presents a comparative analysis of protocol stacks that protect
request-response transactions. We discuss features and limitations of the different protocols
and analyze emerging functional extensions. We measure the protocol performances of CoAP
over Datagram Transport Layer Security (DTLS), OSCORE, and the information-centric
Named Data Networking (NDN) protocol on a large-scale IoT testbed in single- and multi-
hop scenarios. Our findings indicate that (a) OSCORE improves on CoAP over DTLS in
error-prone wireless regimes due to omitting the overhead of maintaining security sessions at
endpoints, (b) NDN attains superior robustness and reliability due to its intrinsic network
caches and hop-wise retransmissions, and (c) OSCORE/CoAP offers room for improvement
and optimization in multiple directions.
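The distinction the paper draws between session security (DTLS) and object security (OSCORE) is that the latter binds protection to the message itself, so it survives protocol-translating gateways. OSCORE actually protects CoAP messages with COSE-encoded AEAD; the sketch below only illustrates the underlying idea using a plain HMAC, and the key and function names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical pre-established security context shared by the two endpoints
CONTEXT_KEY = b"endpoint-shared-key"

def protect(payload):
    # Object security: the authentication tag travels with the message
    # itself, not with the transport session
    tag = hmac.new(CONTEXT_KEY, payload, hashlib.sha256).digest()
    return payload, tag

def gateway_translate(message):
    # A protocol-translating gateway may re-frame the datagram, but it
    # holds no key and leaves the protected object untouched
    return message

def verify(message):
    payload, tag = message
    expected = hmac.new(CONTEXT_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```

With session security, the gateway would terminate the secure channel and re-encrypt, breaking the end-to-end guarantee; with object security, verification succeeds at the far endpoint regardless of how many translators the message crossed.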
Danijela Efnusheva et al. [2021] The usage of the Internet of Things (IoT) is growing rapidly. IoT devices can be found in our homes and in hospitals, reporting changes in the environment and providing many valuable functionalities. This vast application of IoT comes with security issues and concerns. The paper begins by introducing the architecture of IoT and analyzing the security issues, security challenges, and security requirements that exist in the three-layer architecture. Next, the IoT domains and the possible attacks that can occur in each domain are studied, and the possible countermeasures and necessary protective measures are examined.
N. K. Shukla et al. [2021] The basic IoT architecture has various sub-systems such as
applications, gateways, processors and sensors. These sub-systems require low power
and high-speed memories such as SRAM and ROM. The applications of IoT define the
type of memory required, i.e., either high speed or low power. Since low-power devices
are more in demand, ultra-low-power SRAMs are required for portable and handheld
devices. At the same time, the performance of the SRAM should not be degraded.
Wearable devices are the latest trend nowadays, so they require ultra-low-power
SRAMs. The need for memory in IoT systems depends on the application. For example,
in applications where a vast amount of data storage and handling is required, the memory
requirement switches to DRAM and flash memories. The applications that require a high
data transfer rate need fast SRAM memory. High-speed data transfer is essential for
communication between IoT devices. So, SRAM is chosen as a cache memory due to its
faster response. FinFET-based SRAMs are successful in various applications and are
being used nowadays in most of the mobile phones and many IoT devices. Similarly,
CNTFET- and TFET-based SRAMs have also been proposed, and in future, we may
expect their production. This chapter deals with various design issues of SRAMs from
CMOS to various nano-scale devices and their advantages and limitations to their design
and applications.
Prasad, Govind, et al. [2020] SRAM (static random-access memory) based cache memories are widely used due to their high speed. On-chip SRAMs are considered one of the central parts of SoC (system-on-chip) circuits, as they determine the power dissipation of SoCs and the speed of their operation. Hence, having low-power SRAMs is very important. Due to the downsizing of CMOS (complementary metal-oxide-semiconductor) technology, low-power design has become the major challenge of modern chip design in making devices portable and compact. The significant areas of concern in today's technology are speed, power consumption, size, and reliability, which must be balanced to accomplish better performance. In the conventional 6T SRAM, power consumption is very high.
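The low-power motivation can be made concrete with the standard CMOS switching-power relation P = α·C·Vdd²·f: the quadratic dependence on supply voltage is why voltage downscaling dominates low-power SRAM design. The numeric values below are purely illustrative, not figures from the paper.

```python
def switching_power(alpha, c_load, vdd, freq):
    """Classic CMOS dynamic power estimate: P = alpha * C * Vdd^2 * f."""
    return alpha * c_load * vdd ** 2 * freq

# Illustrative values: a 1 fF node toggling on 10% of cycles at 1 GHz
p_nominal = switching_power(0.1, 1e-15, 1.0, 1e9)
p_scaled = switching_power(0.1, 1e-15, 0.5, 1e9)  # halved supply voltage
```

Halving Vdd cuts the switching power by a factor of four, which is exactly the trade-off (power versus noise margin and speed) that the low-power SRAM designs surveyed here navigate.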
Xie, Mimi, et al. [2018] Emerging non-volatile memory technologies have been widely employed in intermittently powered Internet of Things (IoT) devices to bridge program execution across different power cycles. Together with register contents, the cache contents are checkpointed to non-volatile memory upon power outages. While a pure non-volatile-memory-based cache is an intuitive option, it suffers from inferior performance due to high write latency and energy overhead. The authors introduce a spin-transfer torque magnetic random-access memory (STT-RAM)-based hybrid cache which is specifically tailored for intermittently powered embedded systems. This cache design supports both normal memory access and checkpointing: during normal access, the large density of STT-RAM and the fast access speed of static random-access memory (SRAM) are fully exploited to achieve high performance and low energy consumption; during checkpointing, the most important cache blocks in SRAM are migrated to dead or unimportant clean cache blocks in STT-RAM. This design achieves instant resumption without restoring a large cache state.
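The checkpoint-time migration step can be sketched as follows. The block representation and the importance ranking here are hypothetical simplifications, not the paper's exact policy; the point is only that dead (invalid) and clean STT-RAM slots can be overwritten without losing data, so they make safe destinations for the SRAM contents that must survive the outage.

```python
def checkpoint_migrate(sram_blocks, sttram_blocks):
    """Move the most important SRAM blocks into STT-RAM slots that can be
    overwritten safely: invalid (dead) slots first, then clean ones."""
    victims = [b for b in sttram_blocks if not b["valid"]]
    victims += [b for b in sttram_blocks if b["valid"] and not b["dirty"]]
    migrated = 0
    for src in sorted(sram_blocks, key=lambda b: b["importance"],
                      reverse=True):
        if not victims:
            break  # remaining SRAM contents are lost at power-off
        dst = victims.pop(0)
        dst.update(valid=True, dirty=src["dirty"],
                   tag=src["tag"], data=src["data"])
        migrated += 1
    return migrated
```

Because the migrated blocks already sit in non-volatile STT-RAM when power returns, execution resumes without restoring a checkpoint image, which is the "instant resumption" property described above.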
Gupta, Navneet, et al. [2021] Conventional differential sensing is used in most CMOS memories; single-ended sensing has already been reported in the literature and used in CMOS memories, but is limited to specific use-cases such as 8T-CMOS SRAM, CMOS DRAM, Flash, and non-volatile memories. Single-ended sensing is a promising option for optimized TFET memory cells due to the unidirectionality property of TFETs, which is an obstacle to providing a differential output to the sense amplifier (SA). Therefore, most state-of-the-art TFET memories use single-ended sensing for reads. Most TFET memory bit cells presented in the literature have static power consumption several orders of magnitude below that of their CMOS counterparts but exhibit limited performance. Therefore, the main challenge in designing single-ended sensing for TFET memories is to reliably differentiate "1" and "0" while limiting the required bitline voltage drop with a compact SA, as for many applications the SA has to fit in the column pitch. Especially for compact memories, such as the 3T-TFET bit-cell memory presented in Chap. 3, either a standard SA with a tall and inefficient layout or an inverter-based SA can be used to meet the column pitch.
T. Sudha, et al. [2021] Sense amplifiers play a significant role in the performance, functionality, and reliability of memory circuits. In this paper, two new circuits are proposed. The proposed PMOS-biased sense amplifier provides very high output impedance and reduced sense delay and power dissipation. As such, the proposed circuit performs the same operations as conventional circuits, but with reduced sense delay and power consumption.
Monica Panjwani, et al. [2021] Memory design is one of the most interesting subjects in semiconductor technology; memories have fascinated the world through the storage of data values and program instructions. Cell structure and topology are governed by the technology. The proposed memory design takes into account the type of memory unit that is preferable for a given technology and application, as a function of the required memory size, the memory access time, other access patterns, and the configuration needed to optimize the memory architecture for low-power design and, more importantly, overall system requirements. The project deals with the basic memory architecture and its essential peripheral blocks, which include the address decoders, sense amplifiers, voltage references, drivers, buffers, and timing and control circuitry. The memory size is defined in bits; the proposed array size is 4–64 bits, i.e., 2×2 to 8×8 arrays. Moreover, this CMOS memory design also includes sensing amplifiers and circuits, row and column decoders, control circuitry, and the memory cells themselves. Initially, the design procedure consists of defining the architecture and laying out the memory cell; then, according to the architecture, the peripheral circuitry is laid out. Finally, memory checks and optimization procedures are carried out with various simulation tools.
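For the array sizes mentioned (2×2 up to 8×8), the widths of the row and column address decoders follow directly from the array dimensions. A minimal sketch (the function name is illustrative):

```python
import math

def decoder_widths(rows, cols):
    """Address bits that the row and column decoders must decode."""
    return math.ceil(math.log2(rows)), math.ceil(math.log2(cols))

# An 8x8 (64-bit) array needs a 3-to-8 row decoder and a 3-to-8
# column decoder; a 2x2 array needs only 1-to-2 decoders.
row_bits, col_bits = decoder_widths(8, 8)
```

This is why the peripheral decoder area grows only logarithmically with array size, while the cell array itself grows linearly with the bit count.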
Subodh Wairya et al. [2021] A shrink in technology leads to a decrease in supply voltage, which results in power leakage. This affects the data stability of the Static Random-Access Memory (SRAM) cell. The static noise margin (SNM) is needed to measure data stability in an SRAM cell: data stability relies on the largest DC noise that can be tolerated at the outputs of the cross-coupled inverters without changing the data stored in the cell. This paper presents the design and analysis of 6T, 7T, 8T, 9T, and 10T cells, which increase the data stability of SRAM during write and read modes. These cells are compared with respect to their read static noise margin (RSNM), hold noise margin (HSNM), write-0 delay, write-1 delay, average write delay, static power, average dynamic power, total power dissipation, and surface area. The finger method is used to create the layouts of the different SRAM cells, which reduces the surface area of a cell; this method also helps reduce parasitics in the layout design. The layouts of the different SRAM cells, and of an SRAM with a 4×4 array of 6T cells, are implemented on the Cadence Virtuoso tool using 45 nm technology. The simulation results show that the 10T cell has the maximum RSNM, which is 0.42 V, and the minimum total power dissipation: the 10T SRAM cell has an average dynamic power of 154.309 nW, a static power of 1.14022 μW, and a total power dissipation of 1.294529 μW. Process-variation and Monte Carlo simulations for the different SRAM cells are carried out: process variation is analyzed for the RSNM parameter of the memory cells, and Monte Carlo simulation is carried out for average dynamic power, static power, rise time, and fall time using 2000 samples. The simulations show that the performance of the 10T design is the best among all simulated SRAM cells. Comparison of the 6T, 8T, and 9T cell designs with previous work shows an improvement in RSNM at the price of write delay. A 10T SRAM cell using a CNTFET (carbon nanotube field-effect transistor), with a channel length of 11 nm at a supply voltage of 0.3 V, is also simulated; it is observed that the CNTFET-based 10T SRAM cell has low total power dissipation.
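The reported 10T figures are internally consistent: once the units are aligned, the total power dissipation equals the sum of the average dynamic power and the static power, as a quick conversion check shows.

```python
# Reported 10T SRAM figures from the paper
dynamic_nW = 154.309    # average dynamic power, in nW
static_uW = 1.14022     # static power, in uW
total_uW = 1.294529     # total power dissipation, in uW

# Convert the dynamic power from nW to uW and sum the components
computed_total_uW = dynamic_nW / 1000.0 + static_uW
```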
Nanda, Umakanta, et al. [2021] The latest electronic gadgets demand many functionalities, which require enhanced processor performance. To ensure this, cache based on Static Random-Access Memory (SRAM) is a vital part of electronic devices. However, leakage and transistor variability in the SRAM cell have become dominating factors below the 90 nm technology node. Recently, SRAM cells have been designed to deal with these problems; however, achieving a balanced performance across all SRAM cell parameters at nanometre-scale technologies remains an important task. A new SRAM design is presented in this paper to elevate performance. The proposed SRAM is designed and implemented in 90 nm CMOS technology and produces comparatively better performance in terms of Static Noise Margin (SNM), stability, and power dissipation when compared with the conventional 6T SRAM. The cell reduces energy consumption by using the stacking technique, and the stability of the proposed SRAM is also high compared to recent designs.
Neeta Pandey et al. [2021] This paper presents a comprehensive overview of leakage
reduction techniques prevailing in Static Random Access Memories (SRAMs) by classifying
them in three categories namely latch, bitline and read port. The performance of the
techniques is evaluated in terms of leakage reduction capability along with the impact on read
performance and hold stability through extensive simulative investigations at 32 nm
technology node by taking conventional SRAM cell as reference. Further, as SRAMs are
susceptible to inter-die as well as intra-die process variations, the performance at
different PVT corners is also captured to demonstrate the efficacy of each technique under
PVT variations. It is found that, among the techniques used for reducing latch leakage, the Multi-threshold CMOS technique possesses the highest leakage-reduction capability, followed by the Drowsy mode and Substrate-bias techniques. The results also indicate that the Negative word line technique is more effective at low supply voltages, whereas the Leakage-biased bitline technique is more effective at high supply voltages for reducing bitline leakage. Amongst
the read port leakage reduction techniques, Stack-effect and Dynamic control of power
supply rail techniques are capable of suppressing the leakages at high voltages whereas
Virtual cell ground technique is more efficacious at low voltages. The impact of technology
scaling on SRAM cell performance with leakage reduction techniques is also studied. For the
sake of completeness, suggestions are put forward for adopting a particular technique to
address leakages at latch, bitline and read port levels.