Sajal K. Das
Nabendu Chaki Editors
Algorithms and
Applications
ALAP 2018
Smart Innovation, Systems and Technologies
Volume 88
Series editors
Robert James Howlett, Bournemouth University and KES International,
Shoreham-by-sea, UK
e-mail: [email protected]
Editors
Sajal K. Das
Department of Computer Science
Missouri University of Science and Technology
Rolla, MO, USA

Nabendu Chaki
Department of Computer Science and Engineering
University of Calcutta
Kolkata, West Bengal, India
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd., part of Springer Nature. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore.
Preface
also toward conceptualizing the conference and helping us since the first meeting
held in early 2017. We appreciate the role of Prof. Ananya Kanjilal as Organizing
Chair for ALAP 2018. She and her colleagues took care of every detail that ensured
the success of the conference.
We appreciate the initiative of Mr. Aninda Bose and his colleagues at Springer Nature and their strong support toward publishing this volume. Finally, we thank all the authors and participants, without whose presence and support the conference would not have reached the expected standards.
Chief Patron
Patrons
General Chairs
Program Chairs
Organizing Chair
Program Committee
About the Editors
Sajal K. Das is a Professor of Computer Science and Daniel St. Clair Endowed
Chair at Missouri University of Science and Technology, USA, where he was
Computer Science Department Chair from 2013 to 2017. Prior to 2013, he was a
University Distinguished Scholar Professor of Computer Science and Engineering at
UT Arlington, USA. He also served as an NSF Program Director during 2008–2011.
His research interests include IoT, big data analytics, security, cloud computing,
wireless sensor networks, mobile and pervasive computing, cyber-physical systems
and smart environments, biological and social networks, and applied graph theory.
He has directed high-profile, funded projects and published over 700 papers in
journals and conference proceedings. He holds 5 US patents and has co-authored 52
chapters and 4 books. A recipient of 10 best paper awards, he has also received
numerous awards for teaching, mentoring, and research including IEEE Computer
Society Technical Achievement Award for pioneering contributions to sensor net-
works and mobile computing. He is the founding editor-in-chief of the Pervasive and
Mobile Computing Journal and associate editor of several other journals. He is an
IEEE Fellow.
Part I
VLSI and Embedded Systems
Taxonomy of Decimal Multiplier Research
1 Introduction
With the advent of digital computers, researchers and computer scientists engaged in a worldwide debate over whether the base architecture should be binary or decimal. Binary took the lead in 1946, with Burks, Goldstine, and von Neumann advocating a binary architecture. On a different note, many researchers were of the view that the architecture should comprise binary addressing supported by decimal data arithmetic [1]. Richards [2] provided a few decimal arithmetic architecture proposals as well as different decimal digit representation codes (e.g., 4221, 5211), which initiated appreciable research [3, 4] in the previous decade into decimal multiplier hardware. Decimal arithmetic was primarily sustained by software implementations [5] and libraries [6–8], but at the expense of slower processing, as the performance penalty of software over hardware implementations is 100–1000 times. In 1983, Intel became the first microprocessor manufacturer to introduce the 8087 numeric extension processor, supporting 18 decimal digits along with additional software routines [9] and allowing easy compatibility with IEEE 754-1985. The 8087 coprocessor served the financial community and Certified Public Accountants (CPAs) for almost 25 years. In 2008, IEEE rolled out the IEEE 754-2008 standard, defining two Decimal Floating-Point (DFP) formats, decimal64 and decimal128, with precisions of 16 and 34 digits, respectively. With the continually falling cost of die space and the potential speedup achievable in hardware implementations [10, 11], microprocessor manufacturers have already mirrored the current trend of decimal arithmetic hardware research by commercializing digital processors with embedded decimal arithmetic units [12–14]. DFP multiplier architecture proposals comprise decoding of the Densely Packed Decimal (DPD) numbers [15] into the equivalent BCD format, decimal multiplication, cohort selection [16], rounding [16–22], and eventual encoding [15] into the DFP format, with a few other important intermediate steps.

D. Sengupta (B)
Techno India-Batanagar, Kolkata, India
e-mail: [email protected]
M. Sultana
Techno India College of Technology, Kolkata, India
e-mail: [email protected]
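As a rough illustration of the precision and rounding step in DFP multiplication, Python's decimal module (which follows the General Decimal Arithmetic specification underlying IEEE 754-2008 decimal arithmetic) can emulate decimal64's 16-digit significand; the operand values below are arbitrary examples, not drawn from this chapter.

```python
from decimal import Context, Decimal, ROUND_HALF_EVEN

# decimal64 carries a 16-digit significand; emulate that precision.
ctx = Context(prec=16, rounding=ROUND_HALF_EVEN)

a = Decimal("1.234567890123456")   # 16 significant digits
b = Decimal("3.141592653589793")   # 16 significant digits

wide = a * b                 # computed under the default 28-digit context
narrow = ctx.multiply(a, b)  # rounded back to 16 digits, as a decimal64 multiplier must do

print(wide)
print(narrow)
```

The intermediate product of two 16-digit significands needs up to 32 digits, which is why a rounding stage is an unavoidable part of the multiplier pipeline.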
We provide a taxonomy of decimal multiplier hardware designs in this paper. We have generated the vocabulary from IEEE Xplore using "decimal multiplication/multiplier" as the seed term. The taxonomy has been created manually, following the steps of the automated taxonomy generation process based on the bibliometric method presented in [23], keeping the seed term constant.
2 Article Classification
A broad collection of research articles form the prerequisites for conducting a sur-
vey as well as generating taxonomy. In purview of taxonomy, raw data refers to the
published research articles. We have identified “IEEE Xplore” for generating the
vocabulary of raw data. “Decimal multiplication/multiplier” was used as the seed
term to shortlist the relevant articles from the online database. Each article in the raw
data was allotted a “Document Id” for further processing. Table 1 provides the nec-
essary allotments for all the published articles till date. The time span for the survey
was divided into three parts depending upon IEEE floating point standardizations:
Research on decimal arithmetic majorly focused after the year 2000. Therefore, we
provide a statistical analysis from year 2000 till date. Table 2 provides the year-wise
article count for Journal and Conference publications. Article count prior to 2000 has
been clubbed as a single entry as the number of published articles is highly discrete.
Figure 1 presents the publication count for Periods 1 through 3. If the publication count can be assumed to reflect the quantum of global research, then it can be observed from Fig. 1 that global research on decimal multiplier hardware has largely been conducted after IEEE rolled out the IEEE 754-2008 standard [16].
Figure 2 provides the categorical (journal/conference) publication count using data from Table 2. It can be observed from Fig. 3 that the rate of publication through the years has been logarithmic in nature. The following equations provide the rates of journal and conference publications, respectively, as observed from Fig. 3.
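A logarithmic publication trend of the kind described here can be recovered by a least-squares fit of count ≈ a·ln(t) + b; the year-wise counts below are purely illustrative stand-ins, not the data of Table 2.

```python
import numpy as np

# Hypothetical year-wise publication counts (illustrative only).
years = np.arange(2000, 2018)
counts = np.array([1, 1, 2, 2, 3, 4, 4, 5, 6, 6, 7, 7, 8, 8, 8, 9, 9, 9])

t = years - 1999                         # years elapsed since 1999
a, b = np.polyfit(np.log(t), counts, 1)  # fit counts ~ a*ln(t) + b
print(f"count = {a:.2f}*ln(t) + {b:.2f}")
```

A positive slope a indicates growth that slows over time, which is the shape claimed for Fig. 3.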
The equations reflect a growth in research over the previous years. One of the important measures of published material is its citation count, which can also be taken as a measure of how far a certain design has been studied, expanded, and fully explored by peers. Hence, we provide the citation count in Table 3 and Fig. 4 for all articles mentioned in Table 1. The median of the graph presented in Fig. 4 is 13, whereas the arithmetic, geometric, and harmonic mean values are 23.94, 11.95, and 5.49, respectively. Since the graph is populated with extreme outliers and no interrelated data, we consider the harmonic mean to give a more accurate measure of the citation trend; the differences between the arithmetic, geometric, and harmonic means likewise suggest that the harmonic mean gives a more representative average citation count. Therefore, going by the harmonic mean, we assume that research in any article that fetches a citation count greater than 5.49 has been further explored and expanded; such articles form the pivotal publications in decimal multiplier research. Hence, we classify the articles into four classes based on their citation count, as shown in Table 4.
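The behavior of the three means can be checked with Python's statistics module; the citation counts below are made-up stand-ins, not the actual data of Table 3. For positive data the harmonic mean never exceeds the geometric mean, which never exceeds the arithmetic mean, which is why it discounts extreme outliers:

```python
import statistics

# Hypothetical citation counts (illustrative, not Table 3's data).
citations = [1, 2, 3, 5, 8, 13, 21, 34, 55, 144]

am = statistics.mean(citations)
gm = statistics.geometric_mean(citations)   # requires Python 3.8+
hm = statistics.harmonic_mean(citations)    # counts must be positive

# The paper's heuristic: articles cited more often than the harmonic
# mean are treated as further explored and hence pivotal.
pivotal = [c for c in citations if c > hm]
print(am, gm, hm, pivotal)
```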
3 Taxonomy Proposal
[Figure: per-article counts plotted for PL1 through PL67; vertical axis from 0 to 140.]
term and the nodes at the bottom comprise specific terms. The topmost node is generally referred to as the "root" and the nodes at the bottom as "leaves". The expansion from the single root to multiple leaves reflects the interrelationship between the concepts of research. There are many functional uses of a taxonomy. We propose this taxonomy for a better understanding of the research based on the characteristic features of "Decimal Multiplier Architectures".
We have created the taxonomy using IEEE Xplore as the vocabulary source and "Decimal Multiplier" as the seed term, as mentioned earlier. The process described in the automated taxonomy generation methodology has been implemented manually, comprising the following four steps:
Data Processing,
Database Creation,
Taxonomy Generation, and
Visualization.
The cosine similarity index between two nodal terms is computed as

similarity(x, y) = n_(x,y) / (√n_x · √n_y)   (3)

where
n_(x,y) = number of articles containing both term "x" and term "y",
n_x = number of articles containing term "x", and
n_y = number of articles containing term "y".

Table 6 provides the data for n_x and n_y. For n_(x,y), we create Table 7, populating it with the number of articles containing both T_i and T_j, i, j ∈ [1, 17]. Therefore, Table 7 provides the numerator of Eq. 3. We then generate the denominator of Eq. 3 and populate Table 8; Table 8 basically contains the values of √n_x · √n_y. Using Tables 7 and 8, Table 9 is populated, which provides the cosine similarity index for two nodal terms, i.e., the measure of similarity between two keywords/phrases.
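Eq. 3 can be sketched in a few lines of Python; the toy articles and term names below are invented for illustration and are not the paper's TN vocabulary.

```python
from math import sqrt

# Toy corpus: each "article" is the set of nodal terms it mentions.
articles = [
    {"multiplier", "bcd"},
    {"multiplier", "rounding"},
    {"multiplier", "bcd", "rounding"},
    {"fpga"},
]

def n(term):
    """Number of articles containing the term (Table 6's role)."""
    return sum(term in a for a in articles)

def n_xy(x, y):
    """Number of articles containing both terms (Table 7's role)."""
    return sum(x in a and y in a for a in articles)

def similarity(x, y):
    """Cosine similarity index, as in Eq. 3: n_xy / (sqrt(n_x) * sqrt(n_y))."""
    return n_xy(x, y) / (sqrt(n(x)) * sqrt(n(y)))

print(similarity("multiplier", "bcd"))
```

Terms that never co-occur get a similarity of 0, and frequently co-occurring terms approach 1, matching the spread of values seen in Table 9.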
The taxonomy is generated using Table 9. Let us assume "i" to be the row index and "j" the column index. Each TNi is scanned horizontally row-wise, and the TNj giving the highest value is assigned as the child of TNi. Since the values in Table 9 are mirror images along the diagonal, once a certain combination (TNi, TNj) has been considered, the symmetric combination (TNj, TNi) is not considered again; in such cases, TNi becomes a final leaf node. For example, TN6 has the highest similarity with TN8, i.e., 0.72; hence, TN6 is considered to be the parent of TN8. Now, while considering TN8, it fetches the highest similarity value with TN6 only; hence, this relation is discarded and TN8 is considered a final leaf node. All the rows are scanned ∀i ∈ [1, 17], and the taxonomy is as follows:
Table 7 Term—Co-occurrence matrix: Weightage for two nodal terms in an article
TN1 TN2 TN3 TN4 TN5 TN6 TN7 TN8 TN9 TN10 TN11 TN12 TN13 TN14 TN15 TN16 TN17
TN1 – 10 5 32 1 17 11 12 12 9 23 5 9 8 3 19 1
TN2 10 – 3 8 0 2 1 2 3 1 4 1 2 2 1 3 0
TN3 5 3 – 3 0 1 1 1 0 1 3 1 1 1 1 2 0
TN4 32 8 3 – 0 10 6 8 8 6 14 4 7 6 3 12 0
TN5 1 0 0 0 – 3 1 1 0 1 1 2 0 0 0 0 0
TN6 17 2 1 10 3 – 10 19 4 8 7 17 3 0 2 11 5
TN7 11 1 1 6 1 10 – 8 0 2 3 7 3 0 3 9 0
TN8 12 2 1 8 1 19 8 – 1 7 9 14 2 1 2 7 4
TN9 12 3 0 8 0 4 0 1 – 2 4 0 2 5 0 5 0
TN10 9 1 1 6 1 8 2 7 2 – 9 4 2 1 0 2 1
TN11 23 4 3 14 1 7 3 9 4 9 – 3 5 4 1 3 1
TN12 5 1 1 4 2 17 7 14 0 4 3 – 2 0 2 6 3
TN13 9 2 1 7 0 3 3 2 2 2 5 2 – 4 2 4 5
TN14 8 2 1 6 0 0 0 1 5 1 4 0 4 – 0 3 0
TN15 3 1 1 3 0 2 3 2 0 0 1 2 2 0 – 3 0
TN16 19 3 2 12 0 11 9 7 5 2 3 6 4 3 3 – 1
TN17 1 0 0 0 0 5 0 4 0 1 1 3 5 0 0 1 –
Table 8 Multiplicative data (√n_x · √n_y)
TN1 TN2 TN3 TN4 TN5 TN6 TN7 TN8 TN9 TN10 TN11 TN12 TN13 TN14 TN15 TN16 TN17
TN1 – 24.29 17.18 45.44 13.30 42.07 27.69 36.84 26.61 27.69 41.36 33.48 25.48 23.04 15.36 36.03 17.18
TN2 24.29 – 7.07 18.71 5.48 17.32 11.40 15.17 10.95 11.40 17.03 13.78 10.49 9.49 6.32 14.83 7.07
TN3 17.18 7.07 – 13.23 3.87 12.25 8.06 10.72 7.75 8.06 12.04 9.75 7.42 6.71 4.47 10.49 5.00
TN4 45.44 18.71 13.23 – 10.25 32.40 21.33 28.37 20.49 21.33 31.86 25.79 19.62 17.75 11.83 27.75 13.23
TN5 13.30 5.48 3.87 10.25 – 9.49 6.24 8.31 6.00 6.24 9.33 7.55 5.74 5.20 3.46 8.12 3.87
TN6 42.07 17.32 12.25 32.40 9.49 – 19.75 26.27 18.97 19.75 29.50 23.87 18.17 16.43 10.95 25.69 12.25
TN7 27.69 11.40 8.06 21.33 6.24 19.75 – 17.29 12.49 13.00 19.42 15.72 11.96 10.82 7.21 16.91 8.06
TN8 36.84 15.17 10.72 28.37 8.31 26.27 17.29 – 16.61 17.29 25.83 20.90 15.91 14.39 9.59 22.49 10.72
TN9 26.61 10.95 7.75 20.49 6.00 18.97 12.49 16.61 – 12.49 18.65 15.10 11.49 10.39 6.93 16.25 7.75
TN10 27.69 11.40 8.06 21.33 6.24 19.75 13.00 17.29 12.49 – 19.42 15.72 11.96 10.82 7.21 16.91 8.06
TN11 41.36 17.03 12.04 31.86 9.33 29.50 19.42 25.83 18.65 19.42 – 23.47 17.86 16.16 10.77 25.26 12.04
TN12 33.48 13.78 9.75 25.79 7.55 23.87 15.72 20.90 15.10 15.72 23.47 – 14.46 13.08 8.72 20.45 9.75
TN13 25.48 10.49 7.42 19.62 5.74 18.17 11.96 15.91 11.49 11.96 17.86 14.46 – 9.95 6.63 15.56 7.42
TN14 23.04 9.49 6.71 17.75 5.20 16.43 10.82 14.39 10.39 10.82 16.16 13.08 9.95 – 6.00 14.07 6.71
TN15 15.36 6.32 4.47 11.83 3.46 10.95 7.21 9.59 6.93 7.21 10.77 8.72 6.63 6.00 – 9.38 4.47
TN16 36.03 14.83 10.49 27.75 8.12 25.69 16.91 22.49 16.25 16.91 25.26 20.45 15.56 14.07 9.38 – 10.49
TN17 17.18 7.07 5.00 13.23 3.87 12.25 8.06 10.72 7.75 8.06 12.04 9.75 7.42 6.71 4.47 10.49 –
Table 9 Cosine similarity index
TN1 TN2 TN3 TN4 TN5 TN6 TN7 TN8 TN9 TN10 TN11 TN12 TN13 TN14 TN15 TN16 TN17
TN1 – 0.41 0.29 0.70 0.08 0.40 0.40 0.33 0.45 0.32 0.56 0.15 0.35 0.35 0.20 0.53 0.06
TN2 0.41 – 0.42 0.43 0.00 0.12 0.09 0.13 0.27 0.09 0.23 0.07 0.19 0.21 0.16 0.20 0.00
TN3 0.29 0.42 – 0.23 0.00 0.08 0.12 0.09 0.00 0.12 0.25 0.10 0.13 0.15 0.22 0.19 0.00
TN4 0.70 0.43 0.23 – 0.00 0.31 0.28 0.28 0.39 0.28 0.44 0.16 0.36 0.34 0.25 0.43 0.00
TN5 0.08 0.00 0.00 0.00 – 0.32 0.16 0.12 0.00 0.16 0.11 0.26 0.00 0.00 0.00 0.00 0.00
TN6 0.40 0.12 0.08 0.31 0.32 – 0.51 0.72 0.21 0.41 0.24 0.71 0.17 0.00 0.18 0.43 0.41
TN7 0.40 0.09 0.12 0.28 0.16 0.51 – 0.46 0.00 0.15 0.15 0.45 0.25 0.00 0.42 0.53 0.00
TN8 0.33 0.13 0.09 0.28 0.12 0.72 0.46 – 0.06 0.40 0.35 0.67 0.13 0.07 0.21 0.31 0.37
TN9 0.45 0.27 0.00 0.39 0.00 0.21 0.00 0.06 – 0.16 0.21 0.00 0.17 0.48 0.00 0.31 0.00
TN10 0.32 0.09 0.12 0.28 0.16 0.41 0.15 0.40 0.16 – 0.46 0.25 0.17 0.09 0.00 0.12 0.12
TN11 0.56 0.23 0.25 0.44 0.11 0.24 0.15 0.35 0.21 0.46 – 0.13 0.28 0.25 0.09 0.12 0.08
TN12 0.15 0.07 0.10 0.16 0.26 0.71 0.45 0.67 0.00 0.25 0.13 – 0.14 0.00 0.23 0.29 0.31
TN13 0.35 0.19 0.13 0.36 0.00 0.17 0.25 0.13 0.17 0.17 0.28 0.14 – 0.40 0.30 0.26 0.67
TN14 0.35 0.21 0.15 0.34 0.00 0.00 0.00 0.07 0.48 0.09 0.25 0.00 0.40 – 0.00 0.21 0.00
TN15 0.20 0.16 0.22 0.25 0.00 0.18 0.42 0.21 0.00 0.00 0.09 0.23 0.30 0.00 – 0.32 0.00
TN16 0.53 0.20 0.19 0.43 0.00 0.43 0.53 0.31 0.31 0.12 0.12 0.29 0.26 0.21 0.32 – 0.10
TN17 0.06 0.00 0.00 0.00 0.00 0.41 0.00 0.37 0.00 0.12 0.08 0.31 0.67 0.00 0.00 0.10 –
The taxonomy generated in the previous section is mapped to the articles. Table 10 provides the document Ids of all the articles that contain a certain keyword/term; basically, it lists all the articles related to a certain node of interest in the taxonomy. This table is of particular research interest, as it provides all the relevant publications pertaining to a specific subarea of decimal multiplier research. For example, TN12 lists all the published articles related to rounding architecture proposals for decimal multipliers in the literature. Table 11 provides the inverse mapping of Table 10; it is populated using the keywords mapped to document Ids and gives the percentage coverage of the taxonomy by a given article. For instance, PL2 covers 59% of the nodes in the taxonomy, and going by Table 1, the research content of [4] justifies this finding.
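The percentages in Table 11 are consistent with coverage = (number of terms present) / 17; a quick check of that reading, taking the single-term entry PL11 → TN11 from the table:

```python
TOTAL_TERMS = 17  # nodal terms TN1 .. TN17

def coverage(term_ids):
    """Percentage of the 17 taxonomy terms covered by an article."""
    return round(100 * len(term_ids) / TOTAL_TERMS, 1)

# PL11 maps to the single term TN11 in Table 11:
print(coverage({"TN11"}))   # -> 5.9, matching the table's 5.9%
```

Some multi-term entries in the table appear to be rounded slightly differently (e.g., 23.6 vs. 4/17 ≈ 23.5), but the single-term value matches exactly.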
Table 11 (continued)
DiD Term Id % DiD Term Id % DiD Term Id %
PL11 TN11 5.9 PL34 TN12 5.9 PL57 TN1, TN4, 29.5
TN6, TN8,
TN12
PL12 TN6, TN8, 23.6 PL35 TN4, TN11, 23.6 PL58 TN1, TN11, 17.7
TN11, TN12 TN13, TN14 TN16
PL13 TN1, TN6, 41.3 PL36 TN6, TN8, 23.6 PL59 TN6, TN8, 17.7
TN7, TN8, TN12, TN17 TN12
TN12, TN15,
TN16
PL14 TN1, TN16 11.8 PL37 TN1, TN4 11.8 PL60 TN1 5.9
PL15 TN1, TN6, 23.6 PL38 TN1 5.9 PL61 TN1, TN4, 23.6
TN7, TN12 TN10, TN11
PL16 TN1, TN4, 23.6 PL39 TN1, TN2, 35.4 PL62 TN1, TN4, 35.4
TN9, TN14 TN4, TN9, TN6, TN8,
TN14, TN16 TN10, TN11
PL17 TN1, TN4, 23.6 PL40 TN1, TN2, 35.4 PL63 TN1, TN6, 23.6
TN7, TN16 TN4, TN6, TN8, TN11
TN9, TN16
PL18 TN1, TN2, 17.7 PL41 TN1, TN4, 23.6 PL64 TN1, TN6, 29.5
TN4 TN11, TN14 TN7, TN8,
TN12
PL19 TN1, TN4, 23.6 PL42 TN1, TN4, 23.6 PL65 TN1, TN9 11.8
TN7, TN16 TN8, TN11
PL20 TN1, TN9, 17.7 PL43 TN1, TN11 11.8 PL66 TN1 5.9
TN11
PL21 TN1, TN4, 23.6 PL44 TN1, TN2, 35.4 PL67 TN1 5.9
TN10, TN11 TN4, TN8,
TN10, TN11
PL22 TN1, TN3, 35.4 PL45 TN5, TN6, 17.7
TN10, TN11, TN12
TN13, TN14
PL23 TN1, TN6, 29.5 PL46 TN1, TN4, 23.6
TN7, TN8, TN13, TN15
TN16
4 Conclusion
We have provided taxonomy for decimal multiplier research in this study. The prime
objective of the taxonomy is to present a platform where node-wise research can be
conducted in future, i.e., beginning research in decimal rounding (T1.3.1.1.3, Term
TN12) will require an initial study of published literature provided in Table 10 along-
side TN12. This study classifies articles in decimal multiplier research into different
nodes of taxonomy so as to minimize the literature survey duration for initiating
research in a particular node within the domain. Table 11 provides the % content of
taxonomy in a certain published literature. Hence, articles having taxonomy cover-
age of more than 25% can be termed as state-of-the-art articles and referred to as
foundation literature for research on decimal multiplier architectures.
The present study can be further explored using more robust algorithms, i.e.,
Google similarity distance, etc. It can also be extended to form a sub-domain within
taxonomy for decimal arithmetic architectures.
References
16. IEEE Computer Society: IEEE Standard for Floating-Point Arithmetic, IEEE Std 754-2008, Aug 2008
17. Quach, N., Takagi, N., Flynn, M.: Systematic IEEE rounding method for high-speed floating-point multipliers. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 12(5), 511–521 (2004)
18. Even, G., Seidel, P.: A comparison of three rounding algorithms for IEEE floating-point mul-
tiplication. IEEE Trans. Comput. 49(7), 638–650 (2000)
19. Wang, L.-K., Schulte, M.: Decimal floating-point adder and multifunction unit with injection-based rounding. In: 18th IEEE Symposium on Computer Arithmetic (ARITH '07), Montpellier, pp. 56–68 (2007)
20. Wang, L.-K., Schulte, M., Thompson, J., Jairam, N.: Hardware designs for decimal floating-
point addition and related operations. IEEE Trans. Comput. 58(3), 322–335 (2009)
21. Tsen, C., Gonzalez-Navarro, S., Schulte, M., Hickmann, B., Compton, K.: A combined dec-
imal and binary floating-point multiplier. In: 2009 20th IEEE International Conference on
Application-specific Systems, Architectures and Processors, Boston, MA, pp. 8–15 (2009)
22. Tsen, C., Schulte, M., Gonzalez-Navarro, S.: Hardware design of a binary integer decimal-
based IEEE P754 rounding unit. In: IEEE International Conference on Application-specific
Systems, Architectures and Processors (ASAP) 2007, Montreal, Que, pp. 115–121 (2007)
23. Camina, S.: A comparison of taxonomy generation techniques using bibliometric methods. EECS Thesis, Massachusetts Institute of Technology (2010)
24. Guardia, C.: Implementation of a fully pipelined BCD multiplier in FPGA. In: VIII Southern
Conference on Programmable Logic (SPL), 2012, Bento Goncalves, pp. 1–6 (2012)
25. Carlough, S., Schwarz, E.: Decimal Multiplication using Digit Recoding. US, Patent US
7136893 B2, US. 14 Nov 2006
26. Baesler, M., Teufel, T.: FPGA implementation of a decimal floating-point accurate scalar prod-
uct unit with a parallel fixed-point multiplier. In: International Conference on Reconfigurable
Computing and FPGAs 2009, Quintana Roo, pp. 6–11 (2009)
27. Erle, M., Schulte, M.: Decimal multiplication via carry-save addition. In: Proceedings IEEE
International Conference on Application-Specific Systems, Architectures, and Processors, pp.
348–358 (2003)
28. Croy, J.: Improved arrangement of a decimal multiplier. IRE Trans. Electron. Comput. EC-9(2),
263 (1960)
29. Han, L., Ko, S.-B.: High-speed parallel decimal multiplication with redundant internal encod-
ings. IEEE Trans. Comput. 62(5), 956–968 (2013)
30. Castellanos, I., Stine, J.: Decimal partial product generation architectures. In: 51st Midwest
Symposium on Circuits and Systems 2008, Knoxville, TN, pp. 962–965 (2008)
31. Bozdas, K., Alkar, A.: Analysis on the column sum boundaries of decimal array multipliers.
In: IEEE 55th International Midwest Symposium on Circuits and Systems (MWSCAS) 2012,
Boise, ID, pp. 318–321 (2012)
32. Erle, M., Schulte, M., Hickmann, B.: Decimal floating-point multiplication via carry-save addition. In: 18th IEEE Symposium on Computer Arithmetic (ARITH '07), Montpellier, pp. 46–55, June 2007
33. Dadda, L., Nannarelli, A.: A variant of a Radix-10 combinational multiplier. In: IEEE Interna-
tional Symposium on Circuits and Systems (ISCAS 2008), pp. 3370–3373 (2008)
34. Hickman, B., Krioukov, A., Schulte, M., Erle, M.: A parallel IEEE P754 decimal floating-point
multiplier. In: 25th International Conference on Computer Design, 2007. ICCD 2007. Lake
Tahoe, CA, pp. 296–303 (2007)
35. Gorgin, S., Jaberipur, G.: Sign-magnitude encoding for efficient VLSI realization of decimal multiplication. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. (99), 1–13 (2016)
36. Erle, M., Hickmann, B.: Combined binary/decimal fixed-point multiplier and method. US,
Patent US8577952 B2, US. 5 Nov 2013
37. Dadda, L., Pisoni, M., Santambrogio, M.: A parallel-serial decimal multiplier architecture. In:
IEEE 15th International Conference on Computational Science and Engineering (CSE), 2012,
Nicosia, pp. 310–317 (2012)
38. Hickmann, B., Schulte, M., Erle, M.: Improved combined binary/decimal fixed-point multipli-
ers. In: IEEE International Conference on Computer Design, 2008. ICCD 2008. Lake Tahoe,
CA, pp. 87–94 (2008)
39. Gorgin, S., Jaberipur, G.: Fully redundant decimal arithmetic. In: 2009 19th IEEE International
Symposium on Computer Arithmetic, pp. 145–152 (2009)
40. Jaberipur, G., Kaivani, A.: Binary-coded decimal digit multipliers. IET Comput. Digit. Tech.
1(4), 377–381 (2007)
41. Jaberipur, G., Kaivani, A.: Improving the speed of parallel decimal multiplication. IEEE Trans.
Comput. 58(11), 1539–1552 (2009)
42. James, R., Jacob, K., Sasi, S.: High performance, low latency double digit decimal multiplier
on ASIC and FPGA. In: World Congress on Nature and Biologically Inspired Computing,
2009. NaBIC 2009, Coimbatore, pp. 1445–1450 (2009)
43. James, R., Shahana, T., Jacob, K., Sasi, S.: Decimal multiplication using compact BCD multiplier. In: Electronic Design, 2008. ICED 2008, Penang, pp. 1–6, Dec 2008
44. James, R., Shahana, T., Jacob, P., Sasi, S.: Fixed point decimal multiplication using RPS
algorithm. In: IEEE International Symposium on Parallel and Distributed Processing with
Applications 2008, Sydney, NSW, pp. 343–350 (2008)
45. Kaivani, A., Han, L., Ko, S.-B.: Improved design of high-frequency sequential decimal multi-
pliers. Electron. Lett. Inst. Eng. Technol. 50(7), 558–560 (2014)
46. Kenney, R., Schulte, M., Erle, M.: A high-frequency decimal multiplier. In: Proceedings IEEE
International Conference on Computer Design: VLSI in Computers and Processors, 2004.
ICCD 2004, pp. 26–29 (2004)
47. Lin, K., Chiu, Y., Lin, T.-H.: A decimal squarer with efficient partial product generation. In:
18th IEEE/IFIP International Conference on VLSI and System-on-Chip 2010, Madrid, pp.
213–218 (2010)
48. Navarro, S., Tsen, C., Schulte, M.: Binary integer decimal-based floating-point multiplication.
IEEE Trans. Comput. 62(7), 1460–1466 (2013)
49. Ohtsuki, T., Oshima, Y., Ishikawa, S., Yabe, H., Fukuta, M.: Apparatus for decimal multipli-
cation. US, Patent US 4677583 A, US. 30 June 1987
50. Osama, D., Khaleel, A., Tulic, N., Mhaidat, K.: FPGA implementation of binary coded dec-
imal digit adders and multipliers. In: 8th International Symposium on Mechatronics and its
Applications (ISMA), 2012, Sharjah, pp. 1–5 (2012)
51. Sutter, G., Todorovich, E., Bioul, G., Vazquez, M., Deschamps, J.: FPGA implementations
of BCD multipliers. In: International Conference on Reconfigurable Computing and FPGAs,
2009. ReConFig ‘09. Quintana Roo, pp. 36–41 (2009)
52. Ueda, T.: Decimal multiplying assembly and multiply module. US, Patent US 5379245, US.
Jan 1995
53. Vázquez, Á., Antelo, E., Bruguera, J.: Fast Radix-10 multiplication using redundant BCD
codes. IEEE Trans. Comput. 63(8), 1902–1914 (2014)
54. Veeramachaneni, S., Srinivas, M.: Novel high-speed architecture for 32-Bit binary coded deci-
mal (BCD) multiplier. In: 2008 International Symposium on Communications and Information
Technologies, 2008. ISCIT, Lao, pp. 543–546 (2008)
55. Véstias, M., Neto, H.: Iterative decimal multiplication using binary arithmetic. In: 2011 VII
Southern Conference on VII Southern Conference on Programmable Logic (SPL). Cordoba,
pp. 257–262 (2011)
56. Véstias, M., Neto, H.: Parallel decimal multipliers and squarers using Karatsuba-Ofman’s
algorithm. In: 15th Euromicro Conference on Digital System Design (DSD), 2012, Izmir, pp.
782–788 (2012)
57. Véstias, M., Neto, H.: Parallel decimal multipliers using binary multipliers. In: VI Southern
Programmable Logic Conference (SPL), 2010, Ipojuca, pp. 73–78 (2010)
58. Wahba, A., Fahmy, H.: Area efficient and fast combined binary/decimal floating point fused
multiply add unit. IEEE Trans. Comput. (99), 1 (2016)
59. Zhu, M., Baker, A., Jiang, Y.: On a parallel decimal multiplier based on hybrid 8421–5421
BCD recoding. In: 2013 IEEE 56th International Midwest Symposium on Circuits and Systems
(MWSCAS), Columbus, OH, pp. 1391–1394 (2013)
60. Zhu, M., Jiang, Y.: An area-time efficient architecture for 16 × 16 decimal multiplications. In:
Tenth International Conference on Information Technology: New Generations (ITNG), 2013,
Las Vegas, NV, pp. 210–216 (2013)
61. Baesler, M., Voigt, S.-O., Teufel, T.: A decimal floating-point accurate scalar product unit with
a parallel fixed-point multiplier on a virtex-5 FPGA. Int. J. Reconfigurable Comput. 2010, 13
(2010). (Article ID 357839)
62. Cui, X., Liu, W., Wenwen, D., Lombardi, F.: A parallel decimal multiplier using hybrid binary coded decimal (BCD) codes. In: IEEE 23rd Symposium on Computer Arithmetic (ARITH) 2016 (2016)
63. Varma, C., Ahmed, S., Srinivas, M.: A decimal/binary multi-operand adder using a fast binary
to decimal converter. In: 27th International Conference on VLSI Design and 2014 13th Inter-
national Conference on Embedded Systems (2014)
64. Eduardo, C., Guardia, M.: Implementation of a fully pipelined BCD multiplier in FPGA. In:
VIII Southern Conference on Programmable Logic (SPL) (2012)
65. Ding, H., Shu, P., Wang, X., Yang, J.: A design and implementation of decimal floating-point
multiplication unit based on SOPC. In: Third International Conference on Digital Manufactur-
ing and Automation (ICDMA) (2012)
66. Tsen, S., Gonzalez-Navarro, S., Schulte, M., Compton, K.: Hardware designs for binary integer
decimal-based rounding. IEEE Trans. Comput. 60(5), 614–627 (2011)
67. Vázquez, Á., Dinechin, F.: Efficient implementation of parallel BCD multiplication in LUT-6
FPGAs. In: International Conference on Field-Programmable Technology (FPT) (2010)
68. Navarro, S., Tsen, C., Schulte, M.: A binary integer decimal-based multiplier for decimal
floating-point arithmetic. In: Forty-First Asilomar Conference on Signals, Systems and Com-
puters, 2007. ACSSC 2007 (2007)
69. Rekha, K., Jacob, K., Sasi, S.: Performance analysis of double digit decimal multiplier on
various FPGA logic families. In: 5th Southern Conference on Programmable Logic, 2009.
SPL. Sao Carlos, pp. 165–170 (2009)
70. Minchola, C., Sutter, G.: A FPGA IEEE-754-2008 Decimal64 floating-point multiplier. In:
International Conference on Reconfigurable Computing and FPGAs, 2009. ReConFig ‘09.
Quintana Roo, pp. 59–64 (2009)
71. Kaivani, A., Chen, L., Ko, S.: High-frequency sequential decimal multipliers. In: 2012 IEEE
International Symposium on Circuits and Systems (ISCAS). Seoul, pp. 3045–3048 (2012)
72. Lang, T., Nannarelli, A.: A Radix-10 combinational multiplier. In: Fortieth Asilomar Confer-
ence on Signals, Systems and Computers, 2006. ACSSC ‘06. Pacific Grove, CA, pp. 313–317
(2006)
73. Gorgin, S., Jaberipur, G., Parhami, B.: Design and evaluation of decimal array multipliers. In:
2009 Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and
Computers, pp. 1782–1786. IEEE. Nov 2009
74. Neto, H., Vestias, M.: Decimal multiplier on FPGA using embedded binary multipliers. In:
2008 International Conference on Field Programmable Logic and Applications, Heidelberg,
pp. 197–202 (2008)
75. Gonzalez-Navarro, S., Tsen, C., Schulte, M.: A binary integer decimal-based multiplier for
decimal floating-point arithmetic. In: 2007 Conference Record of the Forty-First Asilomar
Conference on Signals, Systems and Computers, Pacific Grove, CA, pp. 353–357 (2007)
76. Baesler, M., Voigt, S.-O., Teufel, T.: An IEEE 754-2008 decimal parallel and pipelined FPGA
floating-point multiplier. In: 2010 International Conference on Field Programmable Logic and
Applications (FPL). Milano, pp. 489–495 (2010)
77. Erle, M., Schwarz, E., Schulte, M.: Decimal multiplication with efficient partial product gener-
ation. In: 17th IEEE Symposium on Computer Arithmetic, 2005. ARITH-17 2005, pp. 21–28
(2005)
78. Jouppi, N.: Wallace-tree multipliers using half and full adders. US, Patent US 6065033 A, US.
16 May 2000
79. Lehman, M.: Short-cut multiplication and division in automatic binary digital computers, with
special reference to a new multiplication process. Proc. IEEE—Part B: Radio Electron. Eng.
105(23), 496–504 (2010)
A Synoptic Study on Fault Testing in Reversible and Quantum Circuits
1 Introduction
the temperature in absolute scale. Though this value is quite low for a single bit of information loss, modern computers execute millions of instructions every second, and hence the heat dissipation becomes formidable. It was Bennett [2] who showed that in order to avoid that power dissipation, the computation must be reversible. This proof by Bennett has attracted the interest of many researchers toward reversible logic synthesis [14, 21, 27–29].
The evolution of a quantum bit, or qubit, is governed by unitary operators, and hence quantum computation is inherently reversible. Unlike a classical bit, a qubit can be in a superposition of both |0⟩ and |1⟩; a general quantum state is mathematically denoted as α|0⟩ + β|1⟩, where α and β are complex amplitudes with |α|² + |β|² = 1. Quantum computing promises to reduce the computational complexity of some problems [7, 23]. However, qubits are extremely prone to errors; hence, error correction is of utmost importance for the implementation of a quantum computer. This has led to studies on quantum error correction [6, 22, 24] and its efficient implementation [9, 13].
In the recent times, many researchers are focusing on the testing and fault modeling
for reversible and quantum circuits. In this paper, we present a review of various
faults occurring on reversible and quantum circuits, and their testing. Their testability
and design by k-CNOT gates are also studied. We also argue with an example that
some fault models that can be corrected in classical reversible circuits may not be
correctable in quantum circuits, which may prove to be a serious hindrance in the
design of fault-free quantum circuits.
The rest of the paper is organized as follows—In Sect. 2, we discuss some of
the basic reversible gates. In Sects. 3 and 4, we discuss the various fault models in
reversible circuits and their testing, respectively. Section 5 addresses some faults in
quantum circuit and in Sect. 6, we make our arguments on the impossibility of testing
SMGF in quantum circuits. We conclude in Sect. 7.
2 Basic Reversible Gates
NOT Gate: NOT gate is the simplest reversible gate (Fig. 1). It is a (1 × 1) gate, i.e.,
there is a single input and a single output. The function of this gate is to invert the
input value and is denoted as [{x} ⇒ {x̄}].
CNOT Gate: CNOT or Controlled-Not Gate [5] is a (2 × 2) gate (Fig. 2). One of the
inputs is the control and the other one is the target. After the gate operation, the
control input passes unchanged while the value of the target input is complemented
iff the value of control input is 1. The function is denoted as [{x, y} ⇒ {x, x ⊕ y}].
Toffoli Gate: The Toffoli gate [28] is a (3 × 3) gate where there are two control
lines and one target line (Fig. 3). If both the control lines are set to logical 1, then
the output value of the target line is inverted; otherwise, it remains the same. The function is denoted as [{x, y, z} ⇒ {x, y, xy ⊕ z}]. It is immediately evident that
NAND(x, y) = TOFFOLI(x, y, 1)
Since the NAND gate is universal and can be implemented using the Toffoli gate, the latter is also universal for classical reversible circuits.
Fredkin Gate: Fredkin gate is also a (3 × 3) gate (Fig. 4) where the second and third
inputs are swapped with each other if the first input is set to 1; otherwise, they remain unchanged [28]. The function is denoted as [{x, y, z} ⇒ {x, x̄y ⊕ xz, x̄z ⊕ xy}].
Peres Gate: Peres Gate (PG) is a (3 × 3) gate composed of two XOR gates and one
AND gate (Fig. 5) [19]. The function is denoted as [{x, y, z} ⇒ {x, x ⊕ y, xy ⊕ z}].
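As a concrete illustration of the mappings above, the five gates can be sketched as plain Boolean functions in Python (an illustrative sketch, not code from the paper):

```python
# Illustrative sketch (not from the paper): the five reversible gates above
# as bit-level mappings. Each function maps an input tuple to an output
# tuple, and each is a bijection on its input space.
from itertools import product

def not_gate(x):             # {x} => {x'}
    return (1 - x,)

def cnot(x, y):              # {x, y} => {x, x XOR y}
    return (x, x ^ y)

def toffoli(x, y, z):        # {x, y, z} => {x, y, xy XOR z}
    return (x, y, (x & y) ^ z)

def fredkin(x, y, z):        # swap y and z iff x == 1
    return (x, z, y) if x == 1 else (x, y, z)

def peres(x, y, z):          # {x, y, z} => {x, x XOR y, xy XOR z}
    return (x, x ^ y, (x & y) ^ z)

# NAND(x, y) appears on the third output of TOFFOLI(x, y, 1):
assert all(toffoli(a, b, 1)[2] == 1 - (a & b) for a in (0, 1) for b in (0, 1))

# Reversibility: every gate permutes its input space (all outputs distinct).
for gate, n in [(cnot, 2), (toffoli, 3), (fredkin, 3), (peres, 3)]:
    assert len({gate(*bits) for bits in product((0, 1), repeat=n)}) == 2 ** n
```

The final loop makes the reversibility claim explicit: each gate, viewed as a map on bit tuples, is a permutation and hence invertible.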
26 R. Dey et al.
3 Fault Models in Reversible Circuits
We are mainly focused on the faults that occur in a reversible circuit due to the
presence of k-CNOT gates (gates with k control lines). This is because, for such a gate, a single fault in one of the control lines propagates to the target line, leading to multiple faults in the circuit. Faults occurring on a k-CNOT gate can be categorized
into four basic groups—(i) Single Missing Gate Fault (SMGF), (ii) Repeated Gate
Fault (RGF), (iii) Partial Missing Gate Fault (PMGF), and (iv) Multiple Missing
Gate Fault (MMGF). We provide an overview of these faults individually and touch
on the idea of detecting these faults.
Single Missing Gate Fault (SMGF) [20]: This type of fault occurs when one of the
k-CNOT gates (Fig. 6) is not applied (or the gate fails, hence acting simply as a wire).
This means that the gate is effectively shorted out or missing. Such a fault can be detected by the test vector {x1, x2, x3} = {0, 1, 1}.
Repeated Gate Fault (RGF) [20]: This fault occurs when there are one or more unwanted repetitions of a k-CNOT gate (Fig. 7). It can be detected by the test vector {x1, x2, x3} = {0, 1, 1}, which is the same as that of the single missing gate fault.
Partial Missing Gate Fault (PMGF) [20]: Such a fault occurs when gate pulses are
partially misaligned or mistuned (Fig. 8). For example, one of the control signals may not work properly, thus introducing a fault in both the control line and the target line. Such a fault changes a k-CNOT gate into a p-CNOT gate, p < k. Hence, this fault is also called a (k − p)-th order PMGF. It is important to note that SMGF is a special case of PMGF (0-th order PMGF). The test set {x1, x2, x3} = {1, 0, 1} detects this fault.
Multiple Missing Gate Fault (MMGF) [20]: This fault occurs (Fig. 9) when two or
more consecutive k-CNOT gates are not applied. This fault is detected by the test vector {x1, x2, x3} = {0, 1, 1}.
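Since the circuits of Figs. 6–9 are not reproduced here, a minimal sketch assuming a single Toffoli gate illustrates how a test vector detects a missing or repeated gate (the specific vectors {0, 1, 1} quoted above belong to the figures' circuit):

```python
# Minimal sketch (assumes a single Toffoli gate, not the exact circuit of
# Figs. 6-9): a test vector detects a fault iff the faulty circuit's output
# differs from the fault-free output. Because the Toffoli gate is
# self-inverse, an RGF with one extra repetition behaves like an SMGF, so
# the same vectors detect both fault types.
from itertools import product

def toffoli(v):
    x, y, z = v
    return (x, y, (x & y) ^ z)

fault_free = toffoli
smgf = lambda v: v                      # missing gate: the lines act as wires
rgf = lambda v: toffoli(toffoli(v))     # gate applied twice instead of once

def detecting_vectors(faulty):
    return [v for v in product((0, 1), repeat=3) if faulty(v) != fault_free(v)]

# Only vectors that activate the gate (both controls at 1) expose the fault.
assert detecting_vectors(smgf) == [(1, 1, 0), (1, 1, 1)]
assert detecting_vectors(rgf) == detecting_vectors(smgf)
```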
4 Testing of Faults in Reversible Circuits
In this section, we review some of the techniques to test the presence of the faults,
introduced in the previous section, in a circuit. The testing can be done in three ways—
(i) by constructing a Diagnostic Test Set (DTS), (ii) using voter insertion techniques,
and (iii) using Binary Decision Diagrams (BDDs) (which is only applicable to SMGF
and MMGF). We review each of these techniques individually in brief.
1. Diagnostic Test Set (DTS) Method [15]: In this method, initially test patterns
from Automatic Test Pattern Generation (ATPG) are fed to the fault-free circuit
and the outputs are recorded. When a single fault is encountered in the circuit, the
same test patterns are applied on the circuit once more and the output is compared
with the fault-free output. This allows one to specify the location of the fault. If the
circuit encounters more than one fault, then the same test is repeated with different
test cases till all the faults are detected. For every positive result obtained, the test
pattern is recorded in a fault diagnosis table and the corresponding tree structure
is constructed. A tree structure has a lower search time compared to any linear data structure [4]; hence, the choice of a tree structure reduces the time complexity of the whole process. The test set that is used to detect the faults is
known as Diagnostic Test Set (DTS). A DTS of the reversible circuit shown in
Fig. 10 is {000, 001, 110, 111}.
2. Voter Insertion Technique: The voting technique for reversible logic was introduced based on majority multiplexing, where a reversible majority gate named MAJ was introduced [30]. In a MAJ gate, the output is the majority of the input bits. Hence, if a single input bit incurs a fault, then the output may change. This is called a single-point failure. Here, the two garbage outputs are used for fault diagnosis. There are two types of implementation: (a) Minimal Triplicated Voter (MTV) implementation and (b) Robust Triplicated Voter (RTV) implementation (Figs. 11 and 12).
In the case of MTV, there are four stages (i.e., four reversible gates). The analysis is made considering that a single fault results in three types of errors, viz., maskable error, recoverable error, and unrecoverable error. MTV still cannot properly detect faults that occur in the input lines of the voter. To increase the robustness, RTV was proposed. In RTV, the three copies of the voted value are produced independently and directly from the inputs. It reduces the chance of unrecoverable error by masking the single fault, at the cost of increased area and delay. The diagnosis can be at single-block level or multiple-block level. For a single block, if there are three copies of a sub-circuit with k gates and an RTV voter, then the diagnosis is on a block of (k + 8) gates. The garbage outputs only show the location of the fault on the data inputs used for fault detection in the sub-circuits. Each RTV produces two outputs.
In the case of multiple blocks with n RTVs, we obtain 2n diagnosis outputs. The locations of the faults are identified through these outputs.
Further, to reduce the time of monitoring, a special reversible circuit, called Diag-
nosis Collector (DC), was designed to collect all the diagnosis outputs as inputs
and give a single output indicating the location of the faults in the circuit (Fig. 13).
3. BDD Method: BDD, which stands for Binary Decision Diagram, is only applicable for SMGF and MMGF. It starts with the generation of test patterns. The main goal is to obtain a test set that covers all possible faults in the circuit with a minimum number of test patterns. The next and most vital step is the dependency
analysis, where the dependencies between all possible combinations of two fault
gates are analyzed using the dependency analysis algorithm [26].
Consider a circuit with m lines; then, the BDD contains 2m inputs and only one output. The same output can be represented by the minimum-weighted path, where the arc has a weight of 1 and otherwise a weight of 0. Here, the BDD containing the SMGF test patterns is altered for all the MMGFs, resulting in a BDD with n inputs (where n is the number of lines) and 2 · C(N, 2) outputs (where N is the number of gates).
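The comparison step shared by these techniques, most directly the DTS method, can be sketched as follows (hypothetical helper names; the circuit of Fig. 10 is not reproduced, so a single Toffoli with an SMGF stands in for it):

```python
# Hedged sketch of the DTS idea: feed ATPG patterns to the fault-free
# circuit, record the golden responses, then compare the circuit under
# test against them; mismatching patterns populate the fault diagnosis
# table. A Toffoli with an SMGF stands in for the circuit of Fig. 10.
from itertools import product

def toffoli(v):
    x, y, z = v
    return (x, y, (x & y) ^ z)

patterns = list(product((0, 1), repeat=3))
golden = {v: toffoli(v) for v in patterns}      # fault-free responses

def circuit_under_test(v):                      # SMGF: gate acts as a wire
    return v

# Fault diagnosis table: pattern -> observed faulty output, on mismatch.
diagnosis_table = {v: circuit_under_test(v)
                   for v in patterns if circuit_under_test(v) != golden[v]}
assert set(diagnosis_table) == {(1, 1, 0), (1, 1, 1)}
```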
Universal Test Set: All the techniques discussed above point us to the same
direction—the construction of a unique test set that can detect all types of faults [20].
        x1  x2  ...  xn   cx
      |  0   1  ...   1    1 |
      |  1   0  ...   1    1 |
SU =  |  .   .  ...   .    . |
      |  1   1  ...   0    1 |
      |  1   1  ...   1    0 |
T, cx. This is done by adding an extra control line (cx) and repeating the same sequence (x1, x2, x3, ..., xj, xk, t, cx) of gates (k-CNOT gates) as embedded in the circuit, such that the extra control line can give a controlled output (cx) and the output sequence is (c1, c2, c3, ..., cj, ck, T1, cx).
Lemma 1 We will get the same output at the target after augmentation as the given input t when cx = 1. This can be shown as follows:

T1 = T ⊕ (1, x1, x2, x3, ..., xj, ..., xk)
   = T ⊕ (x1, x2, x3, ..., xj, ..., xk)
   = t ⊕ (x1, x2, x3, ..., xj, ..., xk) ⊕ (x1, x2, x3, ..., xj, ..., xk)    (1)
   = t
Note that the process of augmentation must be repeated for each and every level of
logic implementation.
Lemma 2 A test set matrix SU of size (n + 1), shown in Fig. 14, will be enough to
detect all missing gate faults of order > 1.
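The structure of the matrix SU can be sketched directly (an illustrative construction; the matrix itself is from the paper):

```python
# Sketch of the test set S_U above: (n+1) vectors over the augmented lines
# x1..xn, cx, where vector i carries a single 0 in position i and 1s
# everywhere else.
def universal_test_set(n):
    return [[0 if j == i else 1 for j in range(n + 1)] for i in range(n + 1)]

S = universal_test_set(3)      # lines x1, x2, x3 plus the extra line cx
assert len(S) == 4 and all(row.count(0) == 1 for row in S)
assert S[-1] == [1, 1, 1, 0]   # final vector: all data lines 1, cx = 0
```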
5 Fault Models in Quantum Circuits

Although quantum circuits are reversible, the technologies that are used to implement a quantum circuit [1, 16, 25] are different from those of a classical reversible circuit. Some faults in classical and quantum circuits are similar in nature, such as
manufacturing fault, design error, physical defects, and probabilistic error. In addi-
tion to these, quantum circuits have some notable properties: (i) a quantum circuit
may contain a non-detectable fault at the time of measurement and (ii) a quantum
circuit may incorporate a fault that may be partially detected during measurements.
This demands the need to characterize faults observed so far only in quantum circuits.
An ideal quantum circuit, i.e., a circuit without any fault, is termed a Gold Circuit (GC). If, by executing a particular test set, one can determine all the faults present in a given circuit, then the test set is said to provide complete fault coverage. Fault localization [12] is a typical method inherited from the classical fault model to rectify the possible faults. By adapting the test set, this diagnostic method helps to locate the types of faults and their positions in the circuit. We provide a brief sketch of the fault models in quantum circuits.
Rx(θ) = |  cos(θ/2)    −i sin(θ/2) |,    Ry(θ) = |  cos(θ/2)   sin(θ/2) |
        | −i sin(θ/2)    cos(θ/2)  |             | −sin(θ/2)   cos(θ/2) |

and

Rz(φ) = | e^(−iφ/2)      0       |
        |     0       e^(iφ/2)   |
Pauli matrices form the basis of 2 × 2 matrix space. Hence, any such arbitrary
rotational matrix can be written as a linear combination of the Pauli matrices, whose
spectral decompositions are given in [17].
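The linear-combination claim can be checked numerically with the usual trace inner product (a sketch, not code from the paper):

```python
# Sketch: any 2x2 matrix (hence any rotation or fault operator) expands in
# the basis {I, sigma_x, sigma_y, sigma_z} with c_P = Tr(P† M) / 2; the
# Pauli matrices are Hermitian, so Tr(P† M) reduces to an elementwise sum.
import cmath

I2 = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

def coeff(P, M):
    # Tr(P† M) / 2 as an elementwise sum of conj(P[a][b]) * M[a][b]
    return sum(P[a][b].conjugate() * M[a][b] for a in range(2) for b in range(2)) / 2

def rx(theta):  # the rotation R_x(theta) defined above
    c, s = cmath.cos(theta / 2), cmath.sin(theta / 2)
    return [[c, -1j * s], [-1j * s, c]]

M = rx(0.7)
cs = [coeff(P, M) for P in (I2, sx, sy, sz)]
for i in range(2):
    for j in range(2):
        rebuilt = sum(c * P[i][j] for c, P in zip(cs, (I2, sx, sy, sz)))
        assert abs(rebuilt - M[i][j]) < 1e-12    # M recovered exactly
```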
Hence, any arbitrary fault can be written as an addition of one unwanted Pauli
matrix f in a quantum circuit (QC) at some error location l with probability p. Some
error correcting codes [6, 22, 24] have been proposed in the literature to correct a
single-qubit Pauli error. Many faults in quantum circuits can be modeled using the Pauli fault model, viz., the depolarizing channel [11], phase damping [8, 11], amplitude damping [8], initialization inaccuracies [18], and measurement inaccuracies [2, 18]. Moreover, errors closer to the classical circuit, like pulse length error [8], off-resonance effects [11], and refocusing errors [8, 22], can also be described by the Pauli fault model. In [3], it is shown that σx and σz faults (refer Eq. 3) are reachable with any computational basis input state from k-CNOT gates. The percentage accuracy of fault detection in the circuit can then be checked.
A qubit cos θ|0⟩ + sin θ|1⟩ can incur some rotational error Rn(δ), where n ∈ {x, y, z}, and the state changes to cos θ|0⟩ + e^(iδ) sin θ|1⟩. A qubit is said to have no initialization fault if it is not affected by such a rotational error. We have already stated that any rotational error can be represented as a linear combination of the Pauli matrices. Hence, correcting only the Pauli matrices suffices. To correct σx alone, a repetition code along with majority voting is sufficient. It was shown by Shor [22] that a phase flip error is equivalent to a bit flip in the {|+⟩, |−⟩} basis, where |±⟩ = (1/√2)(|0⟩ ± |1⟩).
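Shor's observation can be verified with a short matrix computation (a numerical sketch, not from the paper):

```python
# Numerical check of the claim above: conjugating a phase flip Z by the
# Hadamard H yields a bit flip X (H Z H = X), which is why a repetition
# code applied in the |+>, |-> basis corrects phase errors.
import math

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
Z = [[1, 0], [0, -1]]
X = [[0, 1], [1, 0]]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

HZH = mm(mm(H, Z), H)
assert all(abs(HZH[i][j] - X[i][j]) < 1e-12 for i in range(2) for j in range(2))
```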
Now, we explain initialization fault with an example. Suppose the primary state is |01c⟩, c ∈ {0, 1} (Fig. 15); then, by inverting the top qubit (|c⟩ ⇒ |c̄⟩), we get the final state as cos θ|01c⟩ − i sin θ|11c⟩. After the effect of a Toffoli gate, the state is modified such that the probability of error becomes (sin θ)². Similarly, Fig. 15 also shows the situation where the initialization fault impacts the center qubit. In order to
Fig. 15 Initialization errors impacting a 2CN Gate: a correct circuit, b–d various initialization
errors
detect an initialization fault, the qubit must be measured once in the σz basis for bit flip and then again in the σx basis for phase flip.
The lost phase model is basically a random phase error introduced by allowing an unwanted phase shift ±. A correct 3-CN gate may connect an input state |+++⟩|−⟩ to the output state (|000⟩ + |001⟩ + |010⟩ + |011⟩ + |100⟩ + |101⟩ + |110⟩ + |111⟩)|−⟩, where the superposition term that triggers the gate encounters a phase shift |n⟩ ⇒ e^(iπ)|n⟩.
The impact of phase faults on the input state |+++⟩ is described in Fig. 16 and Table 1. The first column shows the phase of each term before it is acted upon by the circuit. The column GC (a) shows the correct relative phase of each term in the entanglement, and the other columns depict phase changes due to the presence of a fault.
If such a fault is present in the circuit of Fig. 16, then the output will be (|000⟩ + |001⟩ + |010⟩ − |011⟩ + |100⟩ + |101⟩ + |110⟩ − |111⟩)|−⟩. From this output, it is visible that a relative phase shift occurs on both the states |011⟩ and |111⟩, since those
Fig. 16 CCNOT gate and phase fault: a Gold circuit, b weak top control, c weak second control,
d weak gate
activate the gate when the top control is broken. Another type of fault is phase damping, i.e., a noise process altering relative phases between quantum states.
Fig. 18 Measurement errors: a–c illustrate measurement faults that statistically favor logic 0; d–f contain measurement faults statistically favoring logic 1 [3]
6 On Testing SMGF in Quantum Circuits
Every quantum operator is associated with a unitary matrix [17]. However, some
of the quantum operators, viz., the Pauli operators, Hadamard, and CNOT, are self-adjoint. A matrix A is said to be self-adjoint if A = A†. Due to some error in technology, a particular gate G may be applied multiple times instead of the desired single occurrence of the gate. However, if G is a self-adjoint operator, then G·G† = G·G = I. Hence, if a self-adjoint operator G is applied n times due to a fault, two scenarios can occur: (i) if n is even, then G·G···G = I, and (ii) if n is odd, then G·G···G = G. So, if the gate is applied an odd number of times, it does not account for any fault, and if it is applied an even number of times, it is similar to an SMGF. So, for self-adjoint operators in a quantum circuit, multiple occurrences of a gate do not lead to any new fault model. A test pattern that can identify a single
missing gate fault will also be able to identify a fault due to multiple occurrences of the gate. This is indeed an advantage of quantum circuits over classical ones. However,
not every quantum operator is self-adjoint. Hence, this advantage is not general to
all quantum operators.
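The repetition argument above can be sketched with Pauli X as the self-adjoint gate (an illustrative check, not the paper's code):

```python
# Sketch of the repetition argument with Pauli X as the self-adjoint gate
# G: n applications give G for odd n (no observable fault) and I for even
# n (the gate effectively disappears, i.e., an SMGF).
X = [[0, 1], [1, 0]]
I2 = [[1, 0], [0, 1]]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply_n(G, n):
    M = I2
    for _ in range(n):
        M = mm(M, G)
    return M

assert apply_n(X, 3) == X     # odd: behaves like the intended single gate
assert apply_n(X, 4) == I2    # even: behaves like a missing gate (SMGF)
```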
7 Conclusion
In this paper, we have discussed fault testing in reversible and quantum circuits. We
have given a detailed description of fault models in classical and quantum circuits
and provided remedies for their detection. We have also shown with an example
that since it is not possible to distinguish two non-orthogonal quantum states, it may
not be possible to detect certain faults in a quantum circuit. While fault testing has
been studied largely in classical circuits, research on quantum circuits is still mostly
limited to error correction. It will be worthwhile to pursue more studies on quantum
fault models, which are likely to depend on and vary with the various technologies
that are used presently to implement a quantum computer.
References
1. Barrett, M.D., Schätz, T., Chiaverini, J., Leibfried, D., Britton, J., Itano, W.M., Jost, J.D., Knill,
E., Langer, C., Ozeri, R., et al.: Quantum information processing with trapped ions. In: AIP
Conference Proceedings, vol. 770, pp. 350–358. AIP (2005)
2. Bennett, C.H.: Logical reversibility of computation. IBM J. Res. Dev. 17(6), 525–532 (1973)
3. Biamonte, J.D., Allen, J.S., Perkowski, M.A.: Fault models for quantum mechanical switching
networks. J. Electron. Test. 26(5), 499–511 (2010)
4. Cormen, T.H.: Introduction to Algorithms. MIT Press, Cambridge (2009)
5. Feynman, R.P.: Quantum mechanical computers. Found. Phys. 16(6), 507–531 (1986)
6. Gottesman, D.: Stabilizer codes and quantum error correction (1997). arXiv preprint quant-
ph/9705052
7. Grover, L.K.: A fast quantum mechanical algorithm for database search. In: Proceedings of the
Twenty-eighth Annual ACM Symposium on Theory of Computing, pp. 212–219. ACM (1996)
8. Knill, E., Laflamme, R., Ashikhmin, A., Barnum, H., Viola, L., Zurek, W.H.: Introduction to
quantum error correction (2002). arXiv preprint quant-ph/0207170
9. Knill, E., Laflamme, R., Viola, L.: Theory of quantum error correction for general noise. Phys.
Rev. Lett. 84(11), 2525 (2000)
10. Landauer, R.: Irreversibility and heat generation in the computing process. IBM J. Res. Dev.
5(3), 183–191 (1961)
11. Lee, S., Lee, S.-J., Kim, T., Lee, J.-S., Biamonte, J., Perkowski, M.: The cost of quantum gate
primitives. J. Mult. Valued Log. Soft Comput. 12 (2006)
12. Li, C., Liu, L., Pang, X.: A dynamic probability fault localization algorithm using digraph. In:
2009 Fifth International Conference on Natural Computation, August 2009, vol. 6, pp. 187–191
(2009)
13. Majumdar, R., Basu, S., Mukhopadhyay, P., Sur-Kolay, S.: Error tracing in linear and concate-
nated quantum circuits (2016). arXiv preprint arXiv:1612.08044
14. Majumdar, R., Saini, S.: A novel design of reversible 2: 4 decoder. In: 2015 International
Conference on Signal Processing and Communication (ICSC), pp. 324–327. IEEE (2015)
15. Mondal, B., Das, P., Pradyut, S., Chakraborty, S.: A comprehensive fault diagnosis technique
for reversible logic circuits. Comp. Electr. Eng. 40(7), 2259–2272 (2014)
16. Munro, W.J., Nemoto, K., Spiller, T.P., Barrett, S.D., Kok, P., Beausoleil, R.G.: Efficient optical
quantum information processing. J. Opt. B: Quantum Semiclassical Opt. 7(7), S135 (2005)
17. Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information. Cambridge
University Press, Cambridge (2010)
18. Obenland, K.M., Despain, A.M., Turchette, T.Q.A., Hood, C.J., Lange, W., Mabuchi, H.,
Kimble, H.J., et al.: Impact of errors on a quantum computer architecture (1996)
19. Peres, A.: Reversible logic and quantum computers. Phys. Rev. A 32(6), 3266 (1985)
20. Rahaman, H., Kole, D.K., Das, D.K., Bhattacharya, B.B.: On the detection of missing-gate
faults in reversible circuits by a universal test set. In: 21st International Conference on VLSI
Design. VLSID 2008. pp. 163–168. IEEE (2008)
21. Saligram, R., Hegde, S.S., Kulkarni, S.A., Bhagyalakshmi, H.R., Venkatesha, M.K.: Design
of fault tolerant reversible multiplexer based multi-boolean function generator using parity
preserving gates. Int. J. Comput. Appl. 66(19) (2013)
22. Shor, P.W.: Scheme for reducing decoherence in quantum computer memory. Phys. Rev. A
52(4), R2493 (1995)
23. Shor, P.W.: Polynomial-time algorithms for prime factorization and discrete logarithms on a
quantum computer. SIAM Rev. 41(2), 303–332 (1999)
24. Steane, A.M.: Error correcting codes in quantum theory. Phys. Rev. Lett. 77(5), 793 (1996)
25. Strauch, F.W., Johnson, P.R., Dragt, A.J., Lobb, C.J., Anderson, J.R., Wellstood, F.C.: Quantum
logic gates for coupled superconducting phase qubits. Phys. Rev. Lett. 91(16), 167005 (2003)
26. Surhonne, A.P., Chattopadhyay, A., Wille, R.: Automatic test pattern generation for multiple
missing gate faults in reversible circuits. In: International Conference on Reversible Compu-
tation, pp. 176–182. Springer (2017)
27. Thapliyal, H., Ranganathan, N.: Design of reversible sequential circuits optimizing quantum
cost, delay, and garbage outputs. ACM J. Emerg. Technol. Comput. Syst. (JETC) 6(4), 14
(2010)
28. Toffoli, T.: Reversible computing. In: Automata, Languages and Programming, pp. 632–644
(1980)
29. Wille, R., Drechsler, R.: BDD-based synthesis of reversible logic for large functions. In: Pro-
ceedings of the 46th Annual Design Automation Conference, pp. 270–275. ACM (2009)
30. Zamani, M., Farazmand, N., Tahoori, M.B.: Fault masking and diagnosis in reversible circuits.
In: 16th IEEE European Test Symposium (ETS), pp. 69–74. IEEE (2011)
Part II
Distributed Systems
and Security
TH-LEACH: Threshold Value and
Heterogeneous Nodes-Based
Energy-Efficient LEACH Protocol
Abstract Sensor nodes are used to measure ambient conditions of the environment. A sensor network can be defined as a collection of sensor nodes, which sense the environment and send information to the base station. These types of networks face problems related to energy dissipation. LEACH (Low Energy Adaptive Clustering Hierarchy)
protocol is one of the most suitable protocols used for communication in a sensor
network. LEACH protocol is a cluster-based protocol, in which each cluster consists
of multiple nodes and one cluster head node. The cluster head aggregates all the
information from other nodes in the cluster and conveys it to the base station. In
the LEACH protocol, cluster head selection is performed in each round at the cost of some amount of energy. In our proposed solution, the energy consumption for electing a cluster head in every round is circumvented. This is achieved by threshold value-based cluster formation. The proposed method reduces the overhead of forming a cluster in every round, which helps in reducing energy consumption. Finally, the performance of the proposed solution is compared with a variety of LEACH protocols with respect to the lifetime of the network.
1 Introduction
A Wireless Sensor Network (WSN) consists of sensor nodes that collect information about the environment and transmit it to the base station. In this type of network, sensor nodes have a limited amount of energy. A WSN is very useful for observing various parameters in remote and hostile regions, as in detecting attacks, monitoring enemies' movements, etc. Routing protocols have a very important role in
this type of network. LEACH protocol is a cluster-based protocol that can be used
in WSN. The protocol is designed to preserve the maximum amount of energy. This
protocol comprises two phases—setup phase and steady-state phase.
In the LEACH protocol, the setup phase is required in each round [1]. Selection of the cluster head is based on a threshold value T(n):
T(n) = p / (1 − p × (r mod (1/p)))    (1)
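Equation (1) can be sketched in a few lines of Python (an illustrative sketch; `leach_threshold` and `is_cluster_head` are hypothetical names, not from the paper):

```python
# The LEACH threshold of Eq. (1): a node that has not yet served as cluster
# head in the current epoch becomes cluster head in round r if a uniform
# random draw falls below T(n).
import random

def leach_threshold(p, r):
    """T(n) = p / (1 - p * (r mod 1/p)) for nodes eligible this epoch."""
    return p / (1 - p * (r % (1 / p)))

def is_cluster_head(p, r, rng=random.random):
    return rng() < leach_threshold(p, r)

# With p = 0.1, the threshold grows over the 10 rounds of an epoch and
# reaches 1.0 in the last round, so every remaining eligible node is chosen.
assert abs(leach_threshold(0.1, 0) - 0.1) < 1e-9
assert abs(leach_threshold(0.1, 9) - 1.0) < 1e-9
```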
2 Literature Survey
There are many existing parameters for measuring the efficiency of a routing protocol, but among them, the most important parameter for measuring the performance of a wireless sensor network is the lifetime of the network, which is directly proportional to the energy available to the network.
In [3], LEACH-DT focuses on the selection of a cluster head depending upon the residual energy of each node and its distance from the base station. In the solution proposed in [3], the cluster head transfers responsibilities to the node with the highest residual energy. In the results section, the modified LEACH-DT and LEACH-DT
protocols are compared with respect to remaining energy available in the whole
network and lifetime of the network.
The K-LEACH [4] protocol concentrates on uniform clustering of nodes and optimal selection of cluster heads. The K-medoids algorithm is used for uniform selection of heads. Euclidean distance is also considered to make the selected cluster head closest to the previous cluster head. After the first round, cluster heads are elected depending upon the position of the previous cluster head, and so on. Here, the proposed work compares
the performance of LEACH protocol and K-LEACH protocol with respect to energy
retention and number of live nodes per round.
In paper [2], a very clear impression of the overall energy required for the setup phase and the steady-state phase is given. It includes the idea of calculating the length of each round. A round is defined as the period in which every node in a cluster completes one data transfer to the cluster head under the existing TDMA schedule. The performance of the LEACH protocol in wireless sensor networks is analyzed with respect to lifetime and throughput. The energy lost by the cluster head in each steady-state phase provides the basis for calculating the threshold value in this work.
IB-LEACH [5] is a protocol that supports a heterogeneous energy level for a few nodes and decreases the failure probability of nodes. As mentioned earlier, the IB-LEACH protocol consists of two phases: setup and steady-state. The setup phase is mainly divided into various components like gateway selection, cluster formation, and so on. The probability of electing a cluster gateway depends upon a threshold value.
In a cluster, there are two types of nodes—normal and advanced nodes. These nodes
are assigned with different probability values. Depending upon the probability values,
nodes are elected as cluster heads. At the end of this paper, IB-LEACH is compared
with LEACH protocol with respect to number of alive nodes available in each round.
3 Proposed Solution
Most of the existing works concentrate on the best cluster head selection in each round, which leads to some amount of energy consumption. In this work, we have tried to avoid cluster formation in each round so that we can save the energy required for the cluster formation or setup phase. A heterogeneous energy level is assigned to some of the sensor nodes, which helps in preserving energy. This is achieved by avoiding cluster formation as much as possible throughout the network's lifetime. There are two phases in the LEACH protocol—the setup phase and the steady-state phase. In this proposed solution, changes are made in the setup phase only.
The proposed solution is as follows:
1. Probability p = 0.1 implies that 10% of the nodes in the complete network are selected as cluster heads. Accordingly, 10% of the nodes of the whole network are made heterogeneous nodes. These nodes have a different level of energy than normal nodes.
2. The energy level of a heterogeneous node depends on the initial energy level of normal nodes Ein, i.e., Ech = 2*Ein.
3. The heterogeneous nodes are placed in such a way that they cover the whole area of the network and are elected as cluster heads in the first round. Normal nodes are placed at random positions. Coordinates for the heterogeneous nodes are calculated by x(i) = (s/2*N*p) + (i/p) and y(i) = (s/2*N*p) + (i/p).
end if
end if
Step 5: r=r+1 and send TDMA schedule;
Step 6: When r>1
Step 6.1: for i=1 to N
Step 6.1.1: if CH.E>T(n) // Energy of cluster head greater than threshold value
Step 6.1.1.1: r=r+1 send TDMA schedule;
Step 6.1.2: else // Energy of cluster head below Threshold value
Step 6.1.2.1: N=numbers of alive node
Step 6.1.2.2: goto Step 2.
5. The amount of energy saved in the case where the energy level of the cluster head is more than the threshold value is calculated as follows:
6. After selection, the cluster head sends signals to other nodes to join its cluster. Depending upon the signal strength received from different cluster heads, the other nodes join them. These are the modifications made in the setup phase; no changes are made in the steady-state phase.
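The round-handling logic of Steps 5 and 6 can be sketched as follows (a hedged sketch with hypothetical names such as `Node` and `run_rounds`; the energy figures are illustrative, not from the paper):

```python
# Hedged sketch of the TH-LEACH round loop described above: the setup
# phase is re-run only when a cluster head's residual energy falls below
# the threshold; otherwise, the existing clusters are reused.

class Node:
    def __init__(self, energy):
        self.energy = energy

def run_rounds(heads, threshold, per_round_cost, reselect, rounds):
    setups = 0
    for _ in range(rounds):
        if any(h.energy < threshold for h in heads):
            heads = reselect()          # setup phase only on demand
            setups += 1
        for h in heads:                 # steady-state phase drains the heads
            h.energy -= per_round_cost
    return setups

heads = [Node(2.0) for _ in range(3)]   # heterogeneous: double initial energy
n_setups = run_rounds(heads, threshold=0.5, per_round_cost=0.1,
                      reselect=lambda: [Node(2.0) for _ in range(3)], rounds=30)
# With 2.0 J heads, 0.1 J per round, and a 0.5 J threshold, re-clustering
# happens only once in 30 rounds instead of in every round as in LEACH.
assert n_setups == 1
```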
4 Simulation Environment
This work is simulated using the MATLAB simulation tool. In a region of 400 × 400 m², normal nodes are placed randomly. Heterogeneous nodes are placed in such a way that they are elected as cluster heads.
Coordinates for heterogeneous nodes are calculated by x(i) = (s/2*N*p) + (i/p) and y(i) = (s/2*N*p) + (i/p) (as explained in the solution strategy). Heterogeneous nodes are initialized with more energy than normal nodes. There are four scenarios
46 P. Sarkar and C. Kar
Fig. 1 Number of dead nodes in 300 rounds versus the number of nodes in a network (LEACH vs. the proposed solution)
considered for 100, 200, 300, and 400 nodes, and in each simulation, the lifetime of the LEACH protocol is compared with that of the proposed solution (Table 2).
5 Result Analysis
In this work, the LEACH protocol is compared with the proposed solution based on the following parameters: improvement in the lifetime of the network, first dead node, and number of dead nodes for 300 rounds.
First dead node denotes the round in which the first node of the network dies. The lifetime of a network is the minimum number of rounds in which all nodes of the network are dead.
Figure 1 shows the number of dead nodes in 300 rounds. In the proposed solution, the number of dead nodes is much smaller than in the LEACH protocol for all the different scenarios. The reason is that the selected cluster heads are heterogeneous nodes, which have more energy than other nodes and are therefore able to serve for more rounds.
Fig. 2 Round in which the first node of the network dies versus the number of nodes in the network (LEACH vs. the proposed solution)

The graph in Fig. 2 considers four different scenarios with 100, 200, 300, and 400 nodes. Here, the y-axis denotes the number of rounds, used to represent in which
round the first node of the network dies. The graph clearly shows that the proposed solution gives better results than the LEACH protocol. In the LEACH protocol, selection is done randomly, but in this work, it is done on the basis of maximum residual energy. In the first round, nodes are placed in such a way that the heterogeneous nodes are selected as cluster heads. Cluster selection is not performed in every round, and hence, the energy consumption for cluster formation is also reduced.
Figure 3 considers four different situations where the percentage improvement in lifetime is plotted against increasing numbers of nodes.

% of improvement in lifetime = (Lifetime in LEACH protocol / Lifetime in proposed solution) × 100    (6)
Figure 3 shows that in all scenarios, the proposed solution gives a better result than the LEACH protocol. The performance of the network improves because of the proper selection of cluster heads in the network.
Table 3 shows that the improvement in the round at which the first node dies, with respect to LEACH, outweighs the improvement in the round at which the last node dies. Our method shows better results than other standard methods when the second parameter is considered.
6 Conclusion
In this work, a novel approach is proposed for increasing the lifetime of the network. The proposed protocol for wireless sensor networks is a modification of the LEACH protocol. Figure 3 shows that the lifetime of the network has improved by more than 40%. The lifetime of the network is directly proportional to the energy available in the nodes. In this work, the selection of the cluster head depends on maximum residual energy, so the cluster head is able to serve for more rounds. Preserving the energy of nodes is achieved by avoiding cluster formation in each and every round. The formation of a cluster depends on the threshold value. This saves the energy required for creating a new cluster and also increases the lifetime of the network.
7 Future Scope
This paper shows promising results for saving the energy of a network and also for increasing its lifetime. Security issues related to the LEACH protocol will be addressed in future work, which may improve the accuracy and effectiveness of the work.
References
10. Bajelan, M., Bakhshi H.: An adaptive LEACH-based clustering algorithm for wireless sensor
networks. J. Commun. Eng. 2(4), (2013)
11. Xinhua, W., Sheng, W.: Performance comparison of LEACH and LEACH-C protocols by
NS2. In: 2010 Ninth International Symposium on Distributed Computing and Applications to
Business, Engineering and Science, pp. 254–258 (2010)
12. So-In, C., Udompongsuk, K., Phudphut, C., Rujirakul, K., Khunboa, C.: Performance evalu-
ation of LEACH on cluster head selection techniques in wireless sensor networks. In: The 9th
International Conference on Computing and Information Technology (IC2IT2013). Advances
in Intelligent Systems and Computing, vol. 209. Springer, Berlin, Heidelberg (2013)
13. Mehta, R., Pandey, A., Kapadia, P.: Reforming clusters using C-LEACH in wireless sensor
networks. In: International Conference on Computer Communication and Informatics, pp.
1–4. IEEE Press, India (2012)
14. Pantazis, N.A., Nikolidakis, S.A., Vergados, D.D.: Energy-efficient routing protocols in wire-
less sensor networks: a survey. IEEE Commun. Surv. Tutor. 15(2), 1–41 (2012)
15. Mahapatra, R.P., Yadav, R.K.: Descendant of LEACH based routing protocols in wireless sensor
networks. Procedia Comput. Sci. 57, 1005–1014 (2015)
A Novel Symmetric Algorithm
for Process Synchronization
in Distributed Systems
In a distributed system, mutual exclusion (ME) means, at any given time only one
process is allowed to access critical section (CS). The design of distributed mutual
exclusion algorithm (DMEA) is not easy because these algorithms have to deal with
unpredictable message delays and have incomplete knowledge of the system state.
There are three basic approaches to achieve mutual exclusion in a distributed envi-
ronment, which are symmetric non-token-based approach, token-based approach,
and quorum-based approach. In the non-token-based approach, two or more succes-
sive rounds of message exchanges between processes are required to decide which
process will enter the critical section next. In the token-based approach, a process gets
permission to enter the CS if it holds the token, and releases the token when the execution
of the CS is over. In the quorum-based approach, each process needs to seek permission
only from a subset of the other processes to execute the CS. In this manuscript, we will
propose a new algorithm for non-token-based symmetric mutual exclusion which is
an extension of Ricart–Agrawala (RA) algorithm. In Sect. 2, we have given a brief
description on permission-based DMEA. In Sect. 3, we have discussed our proposed
approach. The performance of the proposed approach has been given in Sect. 4. Sim-
ulation results establish the superiority of the performance of the proposed algorithm
as compared to RA which is given in Sect. 5.
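As a sketch of the RA scheme that the proposed algorithm extends — each request carries a logical timestamp, and a process enters its CS only after every other process grants a reply — consider the following; the in-memory process model and all names are ours, for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    pid: int
    clock: int = 0
    wants_cs: bool = False
    deferred: list = field(default_factory=list)

def request_cs(requester, peers):
    """One RA round: the requester sends a timestamped REQUEST to every
    peer and may enter its CS only after all of them grant a REPLY.
    Ties on the logical clock are broken by process id, as in RA."""
    stamp = (requester.clock, requester.pid)
    granted = 0
    for peer in peers:
        if not peer.wants_cs or stamp < (peer.clock, peer.pid):
            granted += 1                          # immediate REPLY
        else:
            peer.deferred.append(requester.pid)   # REPLY deferred until peer exits its CS
    # In total: N-1 REQUESTs plus N-1 (eventual) REPLYs = 2(N-1) messages.
    return granted == len(peers)

a = Process(pid=1, clock=5, wants_cs=True)
b = Process(pid=2, clock=7, wants_cs=True)
c = Process(pid=3)
assert request_cs(a, [b, c])      # a holds the older request, so it wins
assert not request_cs(b, [a, c])  # b must wait; a defers its REPLY
assert a.deferred == [2]
```

Every CS entry costs 2(N − 1) messages over all N − 1 peers; the proposed algorithm reduces this by confining the exchange to one priority level.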
exclusion in distributed systems using causal ordering. Here, they implemented
causal ordering in Suzuki–Kasami's [17] token-based algorithm that realizes mutual
exclusion among N processes. An energy-efficient algorithm for distributed mutual
exclusion in mobile ad hoc networks has been given in [18].
Start:
Total_num_process ← total number of processes present in the list
do {
    Create a new NODE node1 with values taken from record[] and priority[]
    if (head == NULL)
        head ← node1;
    else {
        external_priority_of_node1 ← floor(node1->priority)
        external_priority_of_head ← floor(head->priority)
        ...
    }
    do {
        Traverse the list corresponding to R.
        do {
            if (Pri is found)    // Pri is a requesting process at level i
            {
                1. Pri sends a request message to all nodes in its level.
                2. Any process that is not interested in getting into the CS sends a
                   "Go Ahead" message to Pri.
                3. Any "interested" process with internal priority higher than the
                   requesting process sends a "Stop" message to Pri, and the
                   process with the highest internal priority that wishes to get into
                   the critical section enters the CS.
                4. Process Pri enters the critical section after getting (Pi − 1) "Go
                   Ahead" messages.
                5. Calculate the turnaround time and waiting time of the CS executing process.
            }
            else
                Continue traversing;
Aging section: After a process executes its CS, the following actions are taken:
1. Increment the priority of all requesting but not yet served processes in the
   GL by a random factor (a1):
       node->priority ← node->priority + a1
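The request-and-aging scheme above can be sketched in Python as follows; this is our simplification, with a dictionary standing in for the group list and a uniform random stand-in for the aging factor a1:

```python
import math
import random

def run_once(waiting):
    """One CS grant in the level-based scheme: requests are served from the
    highest occupied external-priority level (the floor of the priority),
    and within that level the highest internal priority wins.  After the
    grant, every still-waiting process is aged by a random factor a1."""
    top_level = max(math.floor(p) for p in waiting.values())
    level = [pid for pid, p in waiting.items() if math.floor(p) == top_level]
    winner = max(level, key=lambda pid: waiting[pid])
    del waiting[winner]
    a1 = random.uniform(0.1, 0.5)   # aging factor, chosen per grant
    for pid in waiting:
        waiting[pid] += a1          # aging guarantees no process starves
    return winner

waiting = {"A": 3.5, "B": 3.2, "C": 1.9}   # pid -> priority
order = [run_once(waiting) for _ in range(3)]
assert order == ["A", "B", "C"]
```

Note how C, despite starting two levels down, is eventually served: its priority rises by a1 after every grant until its level becomes the topmost occupied one, which is exactly the starvation-freedom argument proved later.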
4 Performance Analysis
In this section, the proposed algorithm is evaluated from multiple perspectives. Issues
considered for performance evaluation include correctness of the algorithm in terms
of both progress condition and safety, message complexity, fairness, etc.
For each critical section access, 2 ∗ (Pi − 1) message exchanges are required, where Pi
is the number of processes present at level i.
[Figure: an example priority hierarchy — nodes A–N arranged in levels with external
priorities (1.9, 1.8, 2.6, 3.5, 3.2) and CS burst times (25 s, 10 s, 5 s, 30 s, 45 s,
35 s, 20 s, 16 s)]
(1/n) Σ_{i=1}^{n} 2 ∗ (Pi − 1) . . . (i), where n ≠ 0 and n, Pi ∈ N+
= (1/n) (2P1 − 2 + 2P2 − 2 + 2P3 − 2 + · · · + 2Pn − 2)
= (1/n) (2P1 + 2P2 + 2P3 + · · · + 2Pn) − (1/n) ∗ 2n
= (1/n) Σ_{i=1}^{n} (2 ∗ Pi) − 2 . . . (ii)
The best case occurs when every priority level has just one node, i.e., Pi = 1 for all i,
1 ≤ i ≤ n, as no message exchanges are required in such a situation. In the original
Ricart–Agrawala algorithm, the best-case message complexity is 2(n − 1).
60 S. Banerjee et al.
In the worst case, all processes have the same external priority. Thus, we infer that
P1 = N, i.e., all N processes are participating and all have the same external priority.
So the message passing complexity per critical section access is 2(P1 − 1) = 2(N − 1).
In the worst case, our proposed algorithm is thus equal to the RA algorithm.
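The averages derived in Eqs. (i)–(ii) above, and the best- and worst-case figures, can be checked numerically (the code is ours, not the authors'):

```python
def avg_messages(P):
    """Mean message count per CS access over n priority levels, where P[i]
    processes sit at level i: (1/n)*sum(2*(P_i - 1)), which Eq. (ii)
    rewrites as (2/n)*sum(P_i) - 2."""
    n = len(P)
    direct = sum(2 * (p - 1) for p in P) / n
    closed = 2 * sum(P) / n - 2
    assert abs(direct - closed) < 1e-9   # Eqs. (i) and (ii) agree
    return direct

assert avg_messages([1, 1, 1, 1]) == 0.0   # best case: one node per level
assert avg_messages([10]) == 18.0          # worst case: P1 = N = 10 -> 2(N-1)
```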
Delay due to aging
If there are m requesting processes, the amount of time delay spent on aging would
be
Σ_{i=0}^{m} (m − i) = m + (m − 1) + · · · + 0 = m(m + 1)/2 = O(m²).
If m is very small, then the delay due to aging is almost negligible, but for a good
distribution, a certain delay has been encountered.
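The aging-delay sum can be checked directly against its closed form:

```python
def aging_delay(m):
    """Total aging work over m requesting processes:
    sum_{i=0}^{m} (m - i) = m + (m-1) + ... + 0 = m*(m+1)/2, i.e. O(m^2)."""
    return sum(m - i for i in range(m + 1))

for m in (0, 1, 5, 100):
    assert aging_delay(m) == m * (m + 1) // 2
```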
Proof The proposed algorithm is said to be starvation free, if every requesting pro-
cess eventually is allowed access to its critical section. We prove this lemma by the
method of contradiction. We propose that
Low-priority processes will starve using the proposed algorithm . . . (Hypothesis 1)
Let A and B be two requesting processes, where A has the highest priority and B
has the lowest. Hence, A and B occupy the first and last subgroup of the request set
respectively. Let us assume that A is allowed access to its critical section and, as per
Hypothesis 1, process B never gets entry to its critical section.
According to the proposed algorithm, the priorities of all requesting and yet-
to-be-served processes are increased by the same finite random value every time a
process is allowed access to its CS. Hence, eventually, the requesting processes move
to upper-level subgroups as their priority values exceed the external priority value
of their assigned subgroup after a finite interval. Also, once all the requesting nodes
at the topmost priority level have been served, the proposed algorithm forces control to
descend to the next level. Thus, eventually, B will be allowed to access its CS due to
this upward movement in the priority hierarchy after every access.
Hence, our initial hypothesis that “low priority processes will starve using the
proposed algorithm” is found to be false even for process B that has the lowest
priority. This proves that the converse of our initial hypothesis is correct. Thus, the
proposed algorithm is starvation free.
Fairness will not hold for the proposed approach . . . (Hypothesis 1)
Let A and B be two requesting processes in the presently served level with priori-
ties pa and pb with pa > pb . We shall assume that B gets access before A to its critical
section. Let us also assume that B has requested before A. As per the proposed algo-
rithm, B sends a request message to A, A compares its priority with B. On seeing its
value is greater, by Lemma 2 it does not send a “go-ahead” message, hence B does
not enter its critical section. After that, it is A’s turn. A now repeats the algorithm and
gets a “go-ahead” from B and enters its critical section, hence violating our initial
assumption that B enters first.
This proves that the converse of our initial hypothesis that "Fairness will not hold
for the proposed approach" is correct. Hence, the proposed algorithm maintains fairness.
initially contains an equal number of processes with different internal priorities. Each
process has burst time 25 ms and all processes have requested for access to the critical
section. Our experimental data and results are given in Table 3. Average turnaround
time and average waiting time were calculated in ms.
It is observed from Table 3 that as the number of processes increases, the total
message complexity also increases. The total number of message exchanges is much
higher with the RA algorithm, whereas the proposed approach requires a smaller
number of message exchanges. In Fig. 4, it is observed that the RA algorithm shows
exponential growth with respect to the number of participating nodes, whereas the
proposed algorithm shows linear growth in the general cases observed. So, on
average, the proposed algorithm performs much better than the standard RA.
[Fig. 4: Number of messages for 100–1000 processes]
It has been observed from Table 3 and Figs. 5 and 6 that the proposed approach produces
a better average turnaround time and average waiting time than the RA algorithm. A
process has to wait a shorter time after requesting access to the critical section.
[Figs. 5–6: Average turnaround and waiting times (ms) for 100–1000 processes,
Ricart–Agrawala algorithm versus the proposed algorithm]
the nodes into priority levels. For example, if there are 1000 participating nodes,
distributing them into one hundred priority levels, each with a set of ten nodes, gives a fair
and symmetric distribution and helps us to obtain a linear time output in comparison
to RA. Thus, a hierarchy of height n/10 achieves a very good distribution.
6 Conclusions
The proposed algorithm arranges processes into one or more priority levels, with one
or more processes placed in each level. It is established in the text that the proposed algo-
rithm maintains safety, liveness, and fairness. The theoretical analysis and experi-
mental results presented in Sects. 4 and 5, respectively, establish that the message
complexity and execution time of the proposed solution are better than those of the
existing solutions compared. The number of message exchanges per critical section
access is in the range of 0 to 2(N − 1).
References
1. Ricart, G., Agrawala, A.K.: An optimal algorithm for mutual exclusion in computer networks.
Commun. ACM 24(1), 9–17 (1981)
2. Lamport, L.: Time, clocks, and the ordering of events in a distributed system. Commun. ACM
21(7), 558–565 (1978)
3. Lodha, S., Kshemkalyani, A.: A fair distributed mutual exclusion algorithm. IEEE Trans.
Parallel Distrib. Syst. 11(6), 537–549 (2000)
4. Kanrar, S., Chaki, N.: FAPP: a new fairness algorithm for priority process mutual exclusion in
distributed systems, special issue on recent advances in network and parallel computing. Int.
J. Netw. 5(1), 11–18 (2010). ISSN 1796-2056
5. Raymond, K.: A tree-based algorithm for distributed mutual exclusion. ACM Trans. Com-
put. Syst. 7(1), 61–77 (1989)
6. Lejeune, J., Arantes, L., Sopena, J., Sens, P.: Service level agreement for distributed mutual
exclusion in cloud computing. In: 12th IEEE/ACM International Conference on Cluster, Cloud
and Grid Computing (CCGRID’12) (2012)
7. Lejeune, J., Arantes, L., Sopena, J., Sens, P.: A fair starvation-free prioritized mutual exclusion
algorithm for distributed system. J. Parallel Distrib. Comput. (2015)
8. Swaroop, A., Singh, A.K.: A distributed group mutual exclusion algorithm for soft real-time
systems. Proc. World Acad. Sci. Eng. Technol. 26, 138–143 (2007)
9. Swaroop, A., Singh, A.K.: A token-based group mutual exclusion algorithm for cellular wireless
networks, In: India Conference (INDICON-2009), pp. 1–4 (2009)
10. Housini, A., Trehel, M.: Distributed mutual exclusion token-permission based by prioritized
groups. In: Proceedings of the ACS/IEEE International Conference, pp. 253–259 (2001)
11. Maekawa, M.: A √N algorithm for mutual exclusion in decentralized systems. ACM Trans.
Comput. Syst. 3(2), 145–159 (1985)
12. Atreya, R., Mittal, N., Peri, S.: A quorum-based group mutual exclusion algorithm for a dis-
tributed system with dynamic group set. IEEE Trans. Parallel Distrib. Syst. 18(10), 1345–1360
(2007)
13. Kanrar, S., Choudhury, S., Chaki, N.: A link-failure resilient token based mutual exclusion
algorithm for directed graph topology. In: Proceedings of the 7th International Symposium on
Parallel and Distributed Computing (ISPDC) (2008)
14. Kanrar, S., Chaki, N., Chattopadhyay, S.: A new hybrid mutual exclusion algorithm in absence
of majority consensus. In: Proceedings of the 2nd International Doctoral Symposium on
Applied Computation and security System, ACSS (2015)
15. Singhal, M.: A heuristically-aided algorithm for mutual exclusion for distributed systems. IEEE
Trans. Comput. 38(5), 70–78 (1989)
16. Naimi, M., Thiare, O.: Distributed mutual exclusion based on causal ordering. J. Comput. Sci.,
398–404 (2009). ISSN 1549-3636
17. Suzuki, I., Kasami, T.: A distributed mutual exclusion algorithm. ACM Trans. Comput. Syst.
(TOCS) 3(4), 344–349 (1985)
18. Sayani, S., Das, S.: An energy efficient algorithm for distributed mutual exclusion in mobile
ad-hoc networks. World Acad. Sci. Eng. Technol. 64, 517–522 (2010)
Part III
Big Data and Analytics
A Mid-Value Based Sorting
Abstract Proper ordering of data has always grabbed the attention of mankind, and
researchers have continually put forward ideas for efficient sorting. A new iterative
method for sorting a large number of data items in an array is presented. The proposed
method calculates an interval value from the user input and avoids repeated successive
scans and repeated adjacent swaps by comparing elements at that interval. It promises to
take less time to sort both small and large data than the existing sorting methods.
1 Introduction
Sorting is one of the key operations of data structures. It is the systematic arrangement
of data in either ascending or descending order. Efficient sorting improves the
performance of other data structure operations like searching, insertion, etc. [3].
Sorting methods are mainly categorized as traditional, divide-and-conquer, and greedy
methods, depending on the flow of mechanism, computational complexity, and space
complexity [4].
Traditional sorting methods, namely bubble sort, insertion sort, and selection sort,
carry a time complexity of O(n²) [2]. Bubble sort performs repeated adjacent swaps [7].
Insertion sort fetches an element and places it at the correct location in the array [5].
Selection sort repeatedly finds the smallest element and then swaps [8]. Traditional
methods have better applications in educational organizations. Divide-and-conquer
approaches such as quick sort, merge sort, heap sort, and radix sort, with O(n log n)
time complexity, and greedy approaches are far better at solving real-life problems [1].
Detailed analysis of the proposed algorithm based on mechanism, execution time,
ease of implementation and brief comparison with the traditional sorting methods is
done in the following sections.
2 Proposed Algorithm
The new algorithm calculates a block value from the number of inputs. Scanning from
the left, it jumps at a regular interval of the block value and swaps if two data items are
out of order. Hence, this algorithm takes less time than the traditional algorithms, as
repeated successive scans and repeated adjacent swaps are avoided. The following sub-
sections introduce the pseudocode of the proposed algorithm, followed by a detailed
explanation of its steps using a few elements in an array.
2.1 Pseudocode
Considering n elements as input, the block value (k) is calculated as n/2 for an even value
of n, otherwise as ((n/2) + 1). While k is positive and decreasing by one, the first
loop runs from 0 to (n−k) and swaps two data items if they are out of order. The second
loop runs from 0 to (k−1) and swaps out-of-order adjacent data, and finally the sorted
list is produced. The flow of mechanism of the proposed algorithm for an array size of
the sorted list. The flow of mechanism of the proposed algorithm for an array size of
8 has been shown below:
9 5 1 3 11 4 8 2
0 1 2 3 4 5 6 7
9 4 1 2 11 5 8 3
0 1 2 3 4 5 6 7
2 4 1 9 3 5 8 11
0 1 2 3 4 5 6 7
1 4 2 5 3 9 8 11
0 1 2 3 4 5 6 7
1 2 4 3 5 8 9 11
0 1 2 3 4 5 6 7
72 N. Sultana et al.
Step 5: Now the algorithm scans from left up to the kth (i.e. fourth in this case) position
and does an adjacent swap if an out-of-order pair is encountered; otherwise the list remains unchanged.
1 2 3 4 5 8 9 11
0 1 2 3 4 5 6 7
1 2 3 4 5 8 9 11
0 1 2 3 4 5 6 7
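The pseudocode and the worked example above can be rendered in Python as follows; this is our literal sketch of the described steps, with variable names of our own choosing:

```python
def mid_value_sort(a):
    """Mid-value sort as described above: the block value k starts at n/2
    (n/2 + 1 for odd n) and shrinks by one; each pass compares and swaps
    elements k apart, and a final left-to-right pass over the first k0
    positions fixes remaining adjacent inversions (Step 5)."""
    n = len(a)
    k0 = n // 2 if n % 2 == 0 else n // 2 + 1
    for k in range(k0, 0, -1):            # first scanning: gapped passes
        for i in range(n - k):
            if a[i] > a[i + k]:
                a[i], a[i + k] = a[i + k], a[i]
    for i in range(k0):                   # second scanning: adjacent swaps
        if i + 1 < n and a[i] > a[i + 1]:
            a[i], a[i + 1] = a[i + 1], a[i]
    return a

# Reproduces the paper's worked example for an array of size 8:
assert mid_value_sort([9, 5, 1, 3, 11, 4, 8, 2]) == [1, 2, 3, 4, 5, 8, 9, 11]
```

The sketch follows the pseudocode literally and reproduces the worked example; swaps are applied sequentially within each pass, one plausible reading of the description.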
Considering n number of elements in an array, let k be the block size such that initially
k = n/2.
The number of repetitions of step 6 in the above pseudocode depends on the
successive values of k and the number of iterations required by Steps 4 and 5 as
given below:
Number of comparisons required for the first scanning
= (3n² + 2n)/8 (1)
The second scan depends on Step 9, which in turn repeats (n/2) times.
Hence, the number of comparisons required for the second scanning
= (n/2) (2)
Thus, the total time complexity = {(3n² + 2n)/8} + (n/2) [from Eqs. (1) and (2)]
= (3n² + 6n)/8
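The totals above can be checked arithmetically for even n: counting n − k + 1 comparisons per gapped pass as k runs from n/2 down to 1, plus n/2 for the second scan (the check below is ours):

```python
# First scanning: sum_{k=1}^{n/2} (n - k + 1) = (3n^2 + 2n)/8 comparisons
# (Eq. (1)); the second scanning adds n/2 more (Eq. (2)), for a total of
# (3n^2 + 6n)/8.
for n in (2, 8, 100, 1000):
    first = sum(n - k + 1 for k in range(1, n // 2 + 1))
    assert first == (3 * n * n + 2 * n) // 8
    assert first + n // 2 == (3 * n * n + 6 * n) // 8
```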
5 Comparison Study
[Fig. 1: Time complexity of Mid-Value Sort, Bubble Sort, Selection Sort and
Insertion Sort for 100–1000 elements]
small data [4], whereas the newly proposed algorithm is found to require fewer compar-
isons (i.e., (3n² + 6n)/8) than the three and also performs well on large data.
Unlike the existing methods, the newly proposed algorithm consists of two scanning
phases. The first scanning repeatedly compares elements at interval positions in the list
until the block value decreases from k to zero. The second scanning swaps adjacent
elements in the first half of the list and produces the sorted list. The method is found to
take less computational time than the traditional sorting methods. The comparison
table for the execution time of different sorting methods, along with the number of
comparisons, and a graphical representation of time complexity are shown in Table 1
and Fig. 1, respectively.
6 Conclusion
A new sorting method has been introduced and briefly compared with the existing
sorting methods. The execution-time table, followed by the graphical representation
of the time complexity of different sorting methods, helped to demonstrate the
effectiveness of the new algorithm. It is found to need fewer comparisons and swaps
to sort a large number of data items, and it serves as a starting point for more efficient
future work. Inventions are nothing without real-life applications; improvement of this
algorithm for real-life applications and comparable research work is assured in the
near future.
References
1. Langsam, Y., Tenenbaum, A.M.: Data Structures Using C and C++, 2nd edn. Indian printing
(Prentice Hall of India Private Limited), New Delhi-110001
2. Lipschutz, S.: Data Structure & Algorithm, 2nd edn. Schaum’s Outlines Tata McGraw Hill,
ISBN13: 9780070991309
3. Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms, 2nd edn.
MIT Press and McGraw-Hill (2001). ISBN 0-262-03293-7. Problem 2-2, pp. 38
4. Knuth, D.: The Art of Computer Programming, vol. 3: Sorting and Searching, 3rd edn. Addison-
Wesley (1997). ISBN 0-201- 89685-0. pp. 106–110 of section 5.2.2: Sorting by Exchanging
5. Ergül, O.: Guide to Programming and Algorithms Using R. Springer, London (2013). ISBN
978-1-4471-5327-6
6. Yu, P., Yang, Y., Gan, Y.: Experiment analysis on the bubble sort algorithm and its improved
algorithms. In: Information Engineering and Applications. Lecture Notes in Electrical Engi-
neering, vol. 154, pp. 95–102. Springer, London (2002)
7. Min, W.: Analysis on Bubble Sort algorithm optimization. In: 2010 International Forum on
IEEE Information Technology and Applications (IFITA), Kunming, China (2010)
8. Yu, P., Yang, Y., Gan, Y.: Experimental study on the five sort algorithms. In: 2011 Second
International Conference on IEEE Mechanic Automation and Control Engineering (MACE),
China (2011)
9. www.sciencedirect.com/data_structure
10. www.answers.com/Q/What_are_advantages_and_disadvantages_of_selection_sort
11. www.answers.com/Q/What_are_advantages_and_disadvantages_of_insertion_sort
Tracking of Miniature-Sized Objects
in 3D Endoscopic Vision
Abstract The advent of the 3D endoscope has revolutionized the field of industrial and
medical inspection. It allows visual examination of inaccessible areas like under-
ground pipes and human body cavities. Miniature-sized objects like kidney stones and
industrial waste products like slags can easily be monitored using a 3D endoscope. In this
paper, we present a technique to track small objects in 3D endoscopic vision using
feature detectors. The proposed methodology uses the input of the operator to seg-
ment the target in order to extract reliable and stable features. Grow-cut algorithm is
used for interactive segmentation to segment the object in one of the frames and later
on, sparse correspondence is performed using SURF feature detectors. SURF feature
detection based tracking algorithm is extended to track the object in the stereo endo-
scopic frames. The evaluation of the proposed technique is done by quantitatively
analyzing its performance in two ex vivo environments and subjecting the target to
various conditions like deformation, change in illumination, and scale and rotation
transformations due to movement of the endoscope.
1 Introduction
The endoscope, which started as a rudimentary device to examine hollow cavities, has
come a long way. Advancements in the fields of lens design and fiber optics have made
this device one of the most powerful tools of present-day medicine and industry. The
advantage of visual examination of hollow cavity without creation of cuts and large
Z. Khanam (B)
Department of Computer Engineering, Aligarh Muslim University, Aligarh, India
e-mail: [email protected]
J. L. Raheja
CSIR-Central Electronics Engineering Research Institute, Pilani, India
e-mail: [email protected]
incisions has made it not only a diagnosis tool but also a crucial part of upcoming
minimum invasive and robotic surgery.
The 2D display of the region of interest was the major hurdle for the application of the
endoscope in visualization of internal body parts. Loss of depth perception causes
visual misperceptions. To compensate for this lack, 3D visualization using stereo
vision has been employed. This technological innovation has accelerated the transi-
tion from open surgical methods to minimally invasive surgery. Surgical robots like
the da Vinci Surgical System use a 3D endoscope for visualization in complex neuro-
surgery, maxillofacial, and orthopedic surgery. Similarly, industry also comprises
environments which are inaccessible and demand remote visual inspections. There-
fore, endoscopes have been deployed for rapid, nondestructive internal assessments
of objects in industrial environments. In recent years, 3D endoscopes have been used
to carry out safety checks in aerospace, pipelines, heavy plants, and manufacturing
industry, to name a few.
In this paper, we elucidate an algorithm in which video stream from the stereo
endoscopic camera is used to track miniature-sized objects. These objects because of
their appearance and size are difficult for the operator to track using naked eyes. This
computer vision framework will provide better assistance to doctors and inspectors.
One such case where this technique will be useful is the peculiar case of an ectopic
pelvic kidney with ureteropelvic junction obstruction and the presence of renal stones
[17]. Another is the tracking of dust particles in a blast furnace for improving gas flow
distribution [5].
In recent years, computer vision techniques have been used to computationally
infer 3D tissue surface, soft tissue morphology, and surgical motion using 3D endo-
scope [20]. Current research focuses on addressing the issue of organ shift and soft
tissue tracking [2, 7]. Several studies in the past have tracked regions using methods
based on optical flow, time of flight, structured lighting, natural anatomical fea-
tures, and fiducial markers [19]. However, direct application of tracking techniques
to the endoscopic vision of MIS is not possible due to the non-Lambertian surface
and contrastingly different visual appearances during surgery [18]. In [8], a prob-
abilistic framework was proposed to track anisotropic features. Extended Kalman
filter was designed to model the properties of affine invariant regions. However,
this methodology explored soft tissue tracking and targeted the problem of free
scale tissue deformation. Various techniques have been proposed to detect surgical
instruments and track them during MIS. Earlier, external tracking systems were used
which resulted in calibration errors. Later on, many works were produced without the
use of external tracking system. These methodologies rely on discriminative region
statistics between the target object and background [1]. However, no work targeting
tracking of miniature-sized rigid objects was found in the literature. Therefore, in
this work, inspiration was taken from works targeting the tracking of tissues and
organs in 3D endoscopic vision where feature detection techniques were used.
Feature detectors based tracking have been used as some feature detectors are
invariant to the global transformation. The extensions of Harris and Hessian cor-
ner detectors have been used to detect affine invariant region [14, 15]. Edge based
region (EBR) [23] is one of the view invariant detectors used for localization and
2 Proposed Method
The main aim of tracking is to provide further assistance to the operator, who has a
limited view of the inspection scene. A pair of stereo endoscopic cameras is inserted
through a small cavity for internal visualization. The interlaced output of the stereo
camera allows the perception of depth, which enhances the dexterity of the operator.
The proposed method uses 3D endoscopic vision as input and successfully tracks the
stones in the interlaced endoscopic vision. Figure 2 shows the flowchart depicting the
steps used for tracking.
Awaiba NanEye 2C is used as a binocular camera. This stereo camera has a footprint
of 2.2 × 1.0 × 1.7 mm. The dynamic illumination control is provided using fiber
light source at the tip of the camera as seen in Fig. 3b. Figure 3a shows the entire
assembly of NanEye 2C.
2.2 Preprocessing
The stereo endoscope generates the left and right images of the scene, which cannot
be used directly for interlace generation. A few mandatory steps are required as
preprocessing for correct 3D visualization. As shown in Fig. 2, stereo calibration
is the first step, where the intrinsic and extrinsic parameters of the stereo camera
are obtained. It also computes the geometrical relationship between the left and
right cameras. Zhang's method [25] of stereo calibration using a chessboard is applied.
Fig. 4 Side-by-side view of left and right images after application of rectification using a Bouguet
algorithm b Hartley algorithm
Harris corner detector [9] is used to detect internal corners of the chessboard. The
detected corner points serve as the reference point for stereo calibration. Miniature
size of the lens causes the stereo camera to suffer from severe lens distortion. Radial
lens distortion, which is mainly due to the shape of the lens, is removed. The last step
in the preprocessing phase is rectification. It aligns the left and right images into the
same plane in world space. This ensures that the corresponding coordinate of a pixel in
the other image lies in the same row, which reduces the computational time during sparse
correspondence. The Bouguet algorithm [4] was used, as it considers the calibration and
distortion coefficients and gave acceptable results in comparison to the Hartley algorithm
[10], as shown in Fig. 4.
2.3 3D Segmentation
Fig. 5 a Foreground and background selection indicated by green and blue respectively
b Segmented object
Algorithm 1 Segmentation
for ∀ q ∈ N(p) do                          // a pixel p is attacked by its neighbor q
    if g(||Cp − Cq||2) · θq > θp then      // pixel p is overpowered
        θp = Foreground
        Q(q+d) = g(||Cp − Cq||2) · θq
        lp = lq
    end if
end for
Figure 5b shows the final segmented kidney stone. The kidney stone with the fuzzy
boundary and irregular shape can be segmented.
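The grow-cut attack rule described above can be sketched in miniature; the following is our 1-D simplification (the paper applies it to 2-D frames), with g(d) = 1 − d/255 as an assumed monotonically decreasing function:

```python
def grow_cut_1d(intensity, labels, strength, iterations=10):
    """Minimal 1-D grow-cut: at each step a pixel p is 'attacked' by its
    neighbours q; it is overpowered when g(|C_p - C_q|) * theta_q exceeds
    its own strength theta_p, in which case it takes q's label."""
    g = lambda d: max(0.0, 1.0 - d / 255.0)
    for _ in range(iterations):
        new_labels, new_strength = labels[:], strength[:]
        for p in range(len(intensity)):
            for q in (p - 1, p + 1):
                if 0 <= q < len(intensity):
                    attack = g(abs(intensity[p] - intensity[q])) * strength[q]
                    if attack > new_strength[p]:
                        new_strength[p] = attack
                        new_labels[p] = labels[q]
        labels, strength = new_labels, new_strength
    return labels

# Two user seeds: foreground (1) on the dark side, background (2) on the
# bright side; the labels flood outward until the large intensity jump.
pixels = [10, 12, 11, 200, 210, 205]
labels = grow_cut_1d(pixels, [1, 0, 0, 0, 0, 2], [1.0, 0, 0, 0, 0, 1.0])
assert labels == [1, 1, 1, 2, 2, 2]
```

Each seed's influence decays across intensity edges, which is why the fuzzy stone boundary can still be recovered from a couple of user strokes.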
Sparse Correspondence After segmentation of the object in the left frame, the object
in the right frame needs to be segmented. Due to rectification, the search space for the
object is restricted to 1D. SURF [3] is used as the feature point detection technique.
Features are detected in the segmented object in the left frame and in the corresponding
horizontal region in the right frame, as illustrated in Fig. 6.
FLANN [16] was used for matching the features. Figure 7a illustrates the result
after application of FLANN. The stable matches with minimum Euclidean distance
are retained and remaining outliers are rejected as seen in Fig. 7b.
Disparity The disparity is calculated as the average of the difference between stable
features in the left and right image. If d is the disparity calculated, then the object is
segmented in the right image using algorithm 2.
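Algorithm 2 itself is not reproduced in the text; the following is a hypothetical Python rendering consistent with the description (average the horizontal offsets of the stable matches, then shift the left-frame mask by that disparity):

```python
def disparity_from_matches(left_pts, right_pts):
    """Average horizontal offset between stable feature matches; points are
    (x, y) pairs on rectified frames, so only x differs between views."""
    diffs = [lx - rx for (lx, _), (rx, _) in zip(left_pts, right_pts)]
    return round(sum(diffs) / len(diffs))

def shift_mask(left_mask, d):
    """Hypothetical Algorithm 2: reuse the left-frame segmentation mask for
    the right frame by shifting every set pixel d columns to the left.
    One pass over the image, hence O(n) in the image size."""
    h, w = len(left_mask), len(left_mask[0])
    right = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if left_mask[y][x] and 0 <= x - d < w:
                right[y][x - d] = 1
    return right

d = disparity_from_matches([(12, 3), (14, 5)], [(9, 3), (11, 5)])  # d == 3
mask = [[0, 0, 0, 1, 1]]
assert shift_mask(mask, d) == [[1, 1, 0, 0, 0]]
```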
Fig. 6 Features points in the object of the left image and corresponding horizontal patch in the
right image
The complexity of this algorithm is O(n), where n is the size of the image. Even when
the object is partially occluded, the algorithm can segment the object in the right frame,
owing to the small baseline difference between the left and right sensors. Figure 8 shows
the interlace generated from the segmented kidney stone. This allows the surgeon to
perceive depth and estimate the accurate size of the stone.
2.4 Tracking
SURF-based tracking algorithm [22] is used for tracking objects in the monocular
frames. This algorithm was extended for tracking objects in stereo frame. The object
is detected in the left frame using SURF feature tracking. The object is tracked in
the right frame using disparity algorithm mentioned above. Figure 9 illustrates the
algorithm used for tracking. The tracked objects are marked on left and right frame
and interlace of both the frames are generated for the surgeon to enable visualization
in 3D.
3 Experimental Results
In order to validate our algorithm, two simulated ex vivo environments were created.
The first environment is a customized matchbox with paper balls of different
colors and varying sizes (a few millimeters to 1 cm), shown in Fig. 10a. In the second
environment, real kidney stones are placed in an enclosed box, as shown in Fig. 10b.
Performance of the technique proposed was evaluated under scale and rotation
changes due to the movement of the endoscope, significant deformation of stones
during surgery, and illumination changes. The stereo endoscope captures the left and
right frames each of resolution 248 × 248 at 44 fps. Figure 11 illustrates the result
obtained after subjecting the environment to the following four conditions:
1. Scale invariance
2. Rotation (±30◦ )
3. Illumination changes
4. Rigid deformation
Fig. 11 Tracking during (a1 ) − (a3 ) scale changes (b1 ) − (b3 ) illumination changes (c1 ) − (c2 )
rotational changes (d1 ) − (d2 ) deformation of stones
The error is calculated as the minimum Euclidean distance between the stable feature point of the actual region and that of the detected region in the frame. Table 1 reports the mean and standard deviation of the error calculated while tracking the stones under different conditions over approximately 30 s (910 frames). It is evident from Table 1 that the technique performs better under scale and illumination changes than under rotation and deformation. Overall, however, the results show that the technique is robust and successful in tracking miniature-sized objects.
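The error statistic described above can be sketched as follows; the point coordinates are toy values of ours, not data from Table 1:

```python
import math

# Per-frame error: Euclidean distance between the stable feature point of
# the actual region and that of the detected region, followed by the mean
# and (population) standard deviation over the sequence.
def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mean_std(errors):
    m = sum(errors) / len(errors)
    var = sum((e - m) ** 2 for e in errors) / len(errors)
    return m, math.sqrt(var)

actual   = [(10, 10), (12, 11), (15, 13)]   # toy ground-truth points
detected = [(11, 10), (12, 13), (15, 13)]   # toy tracked points
errors = [euclidean(a, d) for a, d in zip(actual, detected)]
m, s = mean_std(errors)  # mean and std of the tracking error in pixels
```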
Tracking of Miniature-Sized Objects … 87
4 Conclusion
In this paper, we have presented a robust technique for tracking miniature-sized objects in 3D endoscopic vision. As demonstrated, the method is robust and able to track small objects under different environmental conditions. It detects stable and reliable features for object tracking using the grow-cut algorithm, and the extended SURF-based tracking algorithm is incorporated to track the miniature objects in real-time stereo endoscopic vision. The experimental results allow us to conclude that the proposed algorithms track stones successfully despite the scale and rotation changes due to the movement of the endoscope, and perform well under illumination changes in the stereo endoscopic vision and deformation of the objects. Quantitative experiments on the simulated environment under various conditions indicate that the technique is robust and reliable in tracking rigid objects varying in size from a few millimeters to 5 cm. This work focuses on tracking a single object; our future plan is to track multiple objects simultaneously. The experiments have been carried out in a simulated laboratory environment; in the future, in vivo data will be used to assess the practical value of the algorithm accurately, and the computational performance of the technique will be improved using optimal tracking strategies. The work presented in this paper will serve as a computational tool to assist surgeons and inspectors, and we hope it will be a building block for future tracking techniques in 3D endoscopic vision.
Acknowledgements This research work was financially supported by the CSIR-Network Project, “Advanced Instrumentation Solutions for Health Care and Agro-based Applications (ASHA)”. The authors would like to acknowledge the Director, CSIR-Central Electronics Engineering Research Institute, for his valuable guidance and continuous support. The authors would also like to extend their gratitude to Birla Sarvajanik Hospital, Pilani, for providing kidney stones for the experiments.
References
1. Allan, M., Ourselin, S., Thompson, S., Hawkes, D.J., Kelly, J., Stoyanov, D.: Toward detection
and localization of instruments in minimally invasive surgery. IEEE Trans. Biomed. Eng. 60(4),
1050–1058 (2013)
2. Baumhauer, M., Feuerstein, M., Meinzer, H.P., Rassweiler, J.: Navigation in endoscopic soft
tissue surgery: perspectives and limitations. J. Endourol. 22(4), 751–766 (2008)
3. Bay, H., Tuytelaars, T., Van Gool, L.: SURF: speeded up robust features. In: Computer Vision–ECCV 2006, pp. 404–417. Springer (2006)
4. Bouguet, J.Y.: Camera Calibration Toolbox for Matlab (22 Nov 2010). Available from: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html
5. Chen, Z., Jiang, Z., Gui, W., Yang, C.: A novel device for optical imaging of blast furnace
burden surface: parallel low-light-loss backlight high-temperature industrial endoscope. IEEE
Sens. J. 16(17), 6703–6717 (2016)
6. Ghosh, P., Antani, S.K., Long, L.R., Thoma, G.R.: Unsupervised grow-cut: cellular automata-based medical image segmentation. In: 2011 First IEEE International Conference on Healthcare Informatics, Imaging and Systems Biology (HISB), pp. 40–47. IEEE (2011)