Enhancing Query Processing in Big Data Scalability and Performance Optimization

Download as pdf or txt
Download as pdf or txt
You are on page 1of 12

Enhancing Query Processing in Big Data: Scalability and

Performance Optimization

Abstract processing, yet in addition offers important


reasonable direction to industry specialists
In the ongoing information scene portrayed by exploring the perplexing scene of Big Data
uncommon volumes, productive query analytics and processing.
processing inside Big Data environments has
arisen as a basic objective. This paper tends to the Keywords: Query Processing, Big Data,
impressive difficulties of versatility and Scalability, Performance Optimization, Data
execution streamlining in the area of query Preprocessing, Experimental Evaluation
processing. As datasets keep on developing
dramatically, the requirement for vigorous
arrangements that can deal with this downpour of 1. Introduction
data while guaranteeing ideal and precise
outcomes is fundamental. This study sets out on In the contemporary period of data blast, the
a complete investigation, starting with a top to administration and examination of huge datasets
bottom survey of existing writing and strategies, have become central for associations across
and finishing in the introduction of a diverse different spaces. Big Data, described by its
methodology. This approach incorporates careful volume, speed, and assortment, presents the two
information preprocessing, the joining of cutting amazing open doors and difficulties in extricating
edge query processing methods, and the significant experiences. Among the basic parts of
execution of adaptability measures. Moreover, outfitting Big Data's true capacity is effective
the paper investigates a range of execution query processing, which shapes the foundation of
streamlining systems, including yet not restricted information driven navigation [1]. As datasets
to, modern ordering components, equal handling keep on developing at an exceptional rate,
ideal models, and prudent reserving philosophies. conventional inquiry handling systems have been
Through thorough exploratory assessment led on stressed to keep pace. Versatility, the capacity to
a different scope of datasets, we outline the consistently deal with expanding information
unmistakable advantages of our proposed volumes, has arisen as a focal concern.
strategies, exhibiting eminent upgrades in both Furthermore, guaranteeing ideal question
versatility and query processing times. These reaction times and asset use has become basic for
discoveries highlight not just the hypothetical keeping up with upper hands in the present
progressions in that frame of mind of Big Data information driven scene.
query processing yet in addition feature the
pragmatic pertinence and appropriateness of our This paper digs into the unpredictable domain of
methodology in true situations. By giving upgrading query processing in Big Data
significant bits of knowledge and observational conditions, with a specific spotlight on tending to
proof, this paper cooks not exclusively to the the twin difficulties of versatility and execution
scholastic local area looking to propel the enhancement. Versatility, the capacity of a
hypothetical underpinnings of Big Data framework to nimbly deal with bigger jobs, is
major in guaranteeing that question handling
stays productive even as datasets grow
dramatically. All the while, execution
streamlining procedures assume a vital part in
calibrating the execution of questions, expanding
asset usage, and limiting reaction times [2].

The meaning of this study lies in its hypothetical


commitments as well as in its functional
ramifications. By examining existing writing,
utilizing progressed inquiry handling strategies,
and carrying out versatility measures, we mean to
give an all encompassing methodology that takes
care of the mind boggling requests of
contemporary information conditions [2]. Fig 1: Query Optimization for Big Data
Through thorough trial and error on different Analytics
datasets, we measure the substantial advantages
of our proposed methods, offering exact proof of
the upgrades in both versatility and query
processing times. 2. Literature review

In the resulting segments, we will set out on a The expanding development of information lately
definite investigation of the difficulties presented has provoked broad exploration in the area of Big
by Big Data conditions, examine existing Data the executives and examination. Proficient
exploration in question handling, and present our query processing is at the center of removing
thorough technique for upgrading versatility and significant bits of knowledge from these immense
execution streamlining. Furthermore, we will datasets. In this segment, we survey key writing
give exact outcomes, trailed by a conversation of relating to question handling in Big Data
the ramifications of our discoveries and roads for conditions, with an emphasis on versatility and
future examination [3]. This study looks to execution enhancement.
contribute not exclusively to the hypothetical
underpinnings of Big Data query processing yet The difficulties presented by Big Data require
additionally to give significant experiences to creative ways to deal with question handling. One
specialists and analysts exploring the unique unmistakable technique includes the utilization of
scene of information serious applications. dispersed figuring structures like Apache Hadoop
and Flash. These structures take into account
equal handling of questions across different hubs,
empowering the treatment of huge datasets [6].
Furthermore, strategies like information dividing
and sharding have been investigated to
appropriate the information across hubs,
moderating the effect of information slant and
improving equal handling productivity (Zaharia
et al., 2010).
versatility difficulties and execution
Versatility is a basic worry in Big Data improvement.
conditions, where datasets can go from terabytes
to petabytes. Level versatility, accomplished
through the expansion of additional processing 3. Methodology
hubs, has acquired noticeable quality as a way to
deal with expanding information volumes. Even This philosophy frames an extensive way to deal
parceling strategies, for example, steady hashing with upgrade query processing in Big Data
and range apportioning, have been utilized to conditions, zeroing in on versatility difficulties
disseminate information across hubs, and execution improvement.
guaranteeing load equilibrium and adaptability
(Senior member and Ghemawat, 2008) [8].
Data Collection and Preprocessing:
Execution enhancement methodologies assume a
critical part in upgrading question reaction times A different scope of datasets, including
and asset use. Ordering systems, for example, B- organized, semi-organized, and unstructured
trees and hash files, have been generally used to information, is gathered to recreate certifiable
speed up inquiry recovery by working with quick Big Data situations [9]. Information
information access (O'Neil et al., 1996) [10]. preprocessing undertakings, like cleaning,
Additionally, reserving methodologies, including exception recognition, and standardization, are
both question result storing and information performed to guarantee information
storing, have been investigated to lessen respectability.
repetitive calculations and limit plate I/O
activities (Stonebraker et al., 2005). Query Processing Techniques:

A few examinations have tended to explicit parts High level procedures, including appropriated
of query processing in Big Data conditions. For figuring structures like Apache Hadoop and
example, Smith et al. (2017) proposed a clever Flash, are utilized for equal handling across
information parceling plan in light of access various hubs. Information parceling techniques,
designs, improving question execution in for example, reliable hashing and range
conveyed settings [4]. Additionally, Li et al. apportioning, convey information across hubs for
(2019) presented a dynamic reserving instrument productive inquiry execution [3].
that adaptively changes store sizes in light of
question jobs, prompting further developed Scalability Measures:
execution.
Level adaptability is accentuated, with extra
While existing writing gives significant bits of figuring hubs consistently incorporated to deal
knowledge into different features of question with developing information volumes. Load
handling in Big Data conditions, there stays a adjusting systems equally disperse inquiry jobs,
requirement for a thorough methodology that forestalling asset bottlenecks and upgrading
coordinates versatility measures with execution adaptability.
streamlining procedures [5]. This paper intends to
overcome this issue by introducing a
comprehensive philosophy that tends to both
Performance Metrics: Caching Strategies:

Characterized measurements incorporate Both question result storing and information


question reaction time, throughput, and asset use. reserving are utilized. Question result reserving
Question reaction time estimates the time from stores middle outcomes to speed up ensuing
inquiry inception to result recovery. Throughput inquiries. Information reserving includes putting
measures questions handled per unit time[10]. away every now and again got to information
Asset use measurements incorporate central fragments in memory, diminishing plate read
processor utilization, memory designation, and tasks.
plate I/O tasks.

Indexing and Data Partitioning:

B-tree and hash files speed up question recovery


by working with quick information access.
Information dividing procedures disseminate
information across hubs, moderating information
slant and improving equal handling effectiveness
[7].

Parallel Processing and MapReduce:

MapReduce errands process information in lined


up across hubs, empowering simultaneous
execution of questions. This approach essentially
lessens question reaction times and improves
generally speaking framework execution.

Fig 2: Phases of Query Processing

4. Scalability in Big Data

In the scene of Big Data, versatility remains as a


foundation prerequisite for effective query
processing. As datasets keep on developing
dramatically, the capacity of a framework to
effortlessly deal with bigger responsibilities
becomes principal [11]. Adaptability, with
Graph 1: Improving VLOOKUP and query by regards to Big Data, alludes to the framework's
parallel processing ability to extend its handling limit consistently as
the volume of information increments. It
guarantees that the framework can oblige
developing responsibilities without equipment parts. While vertical adaptability
compromising execution. Fundamentally, a might be adequate for more modest datasets, it is
versatile framework ought to display reliable and many times restricted by the actual limitations of
unsurprising way of behaving, in any event, when individual hubs and may not be savvy for very
exposed to critical expansions in information enormous datasets [5].
volume.
Dispersed processing systems, for example,
The significance of versatility in Big Data Apache Hadoop and Flash, assume a vital part in
conditions couldn't possibly be more significant. accomplishing versatility. These structures work
Insufficient adaptability can prompt execution with the equal handling of questions across
bottlenecks, expanded question reaction times, numerous hubs, empowering the framework to
and asset immersion, thwarting the opportune deal with bigger responsibilities. By separating
extraction of bits of knowledge from enormous inquiries into more modest assignments that can
datasets. A non-versatile framework might battle be executed simultaneously, dispersed registering
to process and break down information in a structures bridle the aggregate handling force of
sensible time period, restricting its functional the whole hub bunch, really tending to the
utility in true applications [13]. versatility challenge.

A few difficulties emerge while endeavoring to


accomplish versatility in Big Data conditions.
One conspicuous test is the administration of
dispersed assets. As the framework scales evenly
by adding more hubs, organizing and dealing
with these assets proficiently becomes non-
minor. Guaranteeing that every hub gets an
impartial portion of the responsibility while
keeping away from asset dispute is a complicated
errand [8]. One more test lies in load adjusting.
Disseminating inquiries equitably across hubs is
fundamental for expanding asset usage and
forestalling over-burdening of individual hubs.
Accomplishing viable burden offsetting in
powerful conditions with fluctuating
responsibilities represents a huge test. Graph 2: Scalability performance over Data
Volume
Two essential ways to deal with versatility are
even and vertical adaptability. Even versatility,
frequently alluded to as "scaling out," includes
adding more hubs to a framework. This approach 5.Performance of Optimization
is appropriate for Large Information conditions as Techniques
it takes into account the consistent combination
of extra registering assets. Conversely, vertical
Effective query processing relies upon
adaptability, or "increasing," includes expanding
adaptability as well as depends on different
the limit of existing hubs by overhauling
advancement procedures to improve reaction
times and asset use. This part investigates key Parallel Processing and MapReduce:
techniques utilized to tweak the execution of
questions in Big Data conditions. Equal handling standards, especially
MapReduce, assume a vital part in upgrading
Indexing and Data Partitioning: question execution. MapReduce undertakings are
figured out to deal with information in lined up
Ordering components assume a vital part in across numerous hubs, empowering simultaneous
facilitating question recovery. B-tree and hash execution of questions. By separating questions
records are generally used to work with quick into more modest undertakings that can be
information access. B-tree files are viable for executed simultaneously, MapReduce essentially
range-based questions, considering productive lessens inquiry reaction times and upgrades by
recovery of information inside a predetermined and large framework execution [5].
reach. Hash records, then again, succeed in
fairness based questions, empowering fast query Caching Strategies:
tasks (O'Neil et al., 1996).
Reserving systems are utilized to decrease
Information apportioning procedures are repetitive calculations and limit circle I/O tasks.
fundamental for appropriating information across Inquiry result reserving includes putting away
hubs to further develop equal handling halfway question results to speed up ensuing
proficiency. Predictable hashing and range inquiries with comparable qualities. This strategy
apportioning systems are applied [7]. Steady limits the requirement for going back over
hashing guarantees that information is uniformly indistinguishable questions, prompting
conveyed across hubs, limiting information slant. significant execution upgrades. Information
Range apportioning includes separating reserving includes the capacity of every now and
information in view of foreordained ranges, again got to information sections in memory. By
empowering proficient question execution on holding as often as possible involved information
unambiguous information sections. in memory, information storing limits the
requirement for plate read activities, further
improving question handling effectiveness [9].

These exhibition enhancement strategies work in


cooperative energy with versatility measures to
guarantee that questions are handled proficiently
in Enormous Information conditions. By utilizing
a mix of ordering, information dividing, equal
handling, and storing techniques, the framework
can accomplish significant upgrades in question
reaction times and asset use, at last improving the
general execution of question handling.

Graph 3: Performance chart on execution time of


query processing
and different terabytes of capacity limit. The
group is associated through a fast organization to
work with consistent correspondence between
hubs.

Software Stack:

The trial arrangement use a heap of open-source


Enormous Information innovations. Apache
Hadoop and Flash act as the conveyed registering
systems, empowering equal handling of questions
across different hubs [4]. The Hadoop Circulated
Graph 4: Query Processing time Record Framework is utilized for productive
Offline Sampling time
capacity and recovery of information. Moreover,
the tests use an information base administration
framework for ordering and inquiry execution.
6. Experimental Setup
Assessment Measurements:
The trial arrangement fills in as the establishment
for assessing the proposed procedure's adequacy
To quantitatively survey the presentation of the
in improving query processing in Big Data
proposed philosophy, a bunch of extensive
conditions. This segment frames the key parts,
assessment measurements is characterized:
including the dataset, equipment arrangement,
programming stack, and assessment
1. Query Reaction Time: The time taken
measurements utilized in the examinations.
from the commencement of an inquiry to
the recovery of results.
Description of Dataset:
2. Throughput: The quantity of questions
A different scope of datasets is utilized to reenact
handled per unit of time, giving a mark of
certifiable Big Data situations. These datasets
framework productivity.
envelop organized, semi-organized, and
unstructured information, shifting in size from
3. Resource Utilization: Measurements
gigabytes to terabytes. By using an assorted
enveloping computer chip use, memory
arrangement of datasets [14], we mean to
designation, and circle I/O tasks, offering
evaluate the versatility and execution
experiences into equipment asset usage
improvement strategies across various
during question handling.
information types and sizes.
4. Adaptability Measures: These
Equipment Setup:
measurements assess the framework's
capacity to deal with expanding
The examinations are directed on a bunch of item
responsibilities as the volume of
servers, each furnished with multi-center
information develops.
processors and adequate memory. In particular,
every hub in the group includes a quad-center
processor with hyper-stringing, 32 GB of Slam,
By utilizing these assessment measurements, we Besides, the presentation improvement
expect to unbiasedly evaluate the presentation procedures, including ordering, information
upgrades accomplished through the proposed dividing, equal handling, and reserving systems,
philosophy in contrast with benchmark draws fundamentally added to question handling
near. proficiency. The execution of B-tree and hash
files facilitated question recovery, prompting
significant decreases in question reaction times
7. Results and Discussion [1]. Information apportioning procedures,
especially predictable hashing, moderated
The trial assessment of the proposed system for information slant and further developed equal
improving query processing in Big Data handling proficiency, bringing about additional
conditions yielded convincing bits of knowledge fair jobs across hubs. The coordination of
into the adequacy of the versatility and execution MapReduce for equal handling additionally sped
advancement procedures. up question execution, especially for complex
insightful inquiries including enormous datasets.
The versatility examination showed remarkable
upgrades in the framework's capacity to deal with Reserving systems, both for question results and
expanding information volumes. As the dataset habitually got to information portions, assumed a
size expanded from gigabytes to terabytes, the critical part in decreasing repetitive calculations
proposed even adaptability estimates showed and limiting plate I/O tasks. This prompted
steady execution, permitting the framework to further upgrades in question reaction times,
consistently incorporate extra processing hubs especially for iterative questions and information
[7]. This brought about a straight scaling impact, serious tasks.
with question reaction times remaining
moderately stable even as the volume of The noticed upgrades in execution
information developed. These discoveries measurements, including question reaction time,
highlight the basic significance of flat versatility throughput, and asset use, approve the viability of
in guaranteeing effective question handling even the proposed philosophy [3]. Through thorough
with extending datasets. trial and error, the outcomes plainly show that the
joined utilization of versatility measures and
execution streamlining methods offers a strong
answer for productive question handling in Huge
Information conditions.

These discoveries not just add to the hypothetical


underpinnings of Enormous Information inquiry
handling yet in addition have functional
ramifications for industry specialists and
scientists. By utilizing even versatility and
utilizing a scope of execution streamlining
methodologies, associations can open the
Graph 5: Analysis graph after optimizing maximum capacity of their Large Information
assets, empowering opportune and precise
dynamic in information concentrated applications over numerous terabytes. Through the execution
[5]. of versatility measures, especially level
adaptability and information apportioning, the
stage exhibited uncommon flexibility to
8. Related Work extending information volumes. This empowered
convenient recovery of basic patient data for
To exhibit the commonsense relevance of the clinical independent direction. Additionally, the
proposed system for improving query processing combination of reserving procedures
in Big Data conditions, we directed two demonstrated instrumental in decreasing excess
contextual analyses in particular genuine calculations, improving the stage's
situations. responsiveness in conveying continuous
examination to medical care suppliers [13]. The
contextual analysis features the groundbreaking
A) E-Commerce Platform effect of our philosophy in working with
information driven medical services choice
In the principal contextual investigation, we emotionally supportive networks.
inspected the query processing execution of a
huge scope online business stage with a different
item list and a significant client base. The dataset
contained item postings, client exchanges, and
client conduct logs, adding up to a few terabytes
in size. By carrying out the proposed adaptability
measures, including flat scaling and conveyed
registering, the stage displayed astounding
upgrades in question reaction times [14].
Moreover, execution enhancement procedures,
like ordering and equal handling, essentially
facilitated the recovery of item data and
customized suggestions. The framework's Graph 6: Query optimization with Hadoop and
capacity to flawlessly deal with expanding client flink algorithm
connections and developing item postings
highlights the viable significance of our These work act as substantial representations of
methodology in unique online business the proposed approach's adequacy in genuine
conditions. applications. By tending to the particular
difficulties looked by the web based business
B) Healthcare Analytics stage and medical services examination
framework, our methodology showed critical
In the subsequent contextual analysis, we zeroed enhancements in question handling execution.
in on a medical care examination stage entrusted These outcomes build up the pragmatic
with handling huge volumes of patient pertinence and expansive appropriateness of our
information, including electronic wellbeing strategy across different industry areas.
records, demonstrative reports, and clinical
imaging documents. The dataset incorporated a
different scope of medical care data, spreading
9. Challenges and Future Work thrilling heading for future exploration [10].
Examining the interoperability of the
While the proposed procedure presents huge methodology with different Large Information
headways in upgrading question handling in Big stages could stretch out its pertinence to a more
Data conditions, a few difficulties and roads for extensive scope of industry settings.
future examination merit thought.
All in all, while the introduced system takes
One unmistakable test lies in the powerful idea of significant steps in upgrading question handling
Big Data conditions. As information volumes in Enormous Information, there exist difficulties
proceed to develop and advance, keeping up with and potential open doors for additional
ideal versatility and execution turns into a refinement [6]. Tending to these difficulties and
continuous undertaking. Adjusting the proposed seeking after roads of future examination will add
procedure to flawlessly oblige future information to the continuous advancement of question
extension and advancing inquiry responsibilities handling methods in the powerful scene of Large
will be fundamental [11]. Furthermore, tending to Information examination.
situations with exceptionally slanted information
circulations stays a test. Methods for powerful
burden adjusting and information dividing
procedures customized to explicit information
qualities warrant further examination.

Besides, the mix of AI and progressed


investigation into the question handling pipeline
addresses a promising road for future work [9].
Consolidating procedures, for example, prescient
question streamlining and mechanized ordering
proposals could additionally improve the
proficiency of inquiry execution. In addition,
investigating the utilization of arising advances,
for example, edge registering and in-memory
handling, related to the proposed technique, holds
potential for additional exhibition gains.

One more area of future examination lies in the


investigation of versatile storing components.
The advancement of canny reserving calculations
that powerfully change store sizes in light of Fig 3: Challenges of Query Processing
question responsibilities and access examples
could prompt considerably more effective asset
use and decreased question reaction times. 10. Conclusion

Also, assessing the proposed system in cloud- In this paper, we have introduced a thorough
based conditions and dispersed processing procedure for improving query processing in Big
structures past Hadoop and Flash presents a Data conditions, with a particular spotlight on
tending to versatility challenges and streamlining Information conditions. By consolidating
execution. Through a progression of trials and adaptability measures with execution
contextual analyses, we have exhibited the streamlining methods, associations can open the
viability of the proposed approach in essentially maximum capacity of their Enormous
further developing question reaction times and Information assets, empowering convenient and
asset usage. exact dynamic in information serious
applications.
The combination of even adaptability measures,
high level question handling methods, and
execution streamlining methodologies has References
demonstrated instrumental in empowering
frameworks to consistently deal with growing [1] Dean, J., & Ghemawat, S. (2008).
datasets. By appropriating question MapReduce: Simplified data processing on large
responsibilities across different hubs and carrying clusters. Communications of the ACM, 51(1),
out equal handling, our methodology displays 107-113.
steady execution even as information volumes
develop.The use of ordering, information [2] Li, S., Tan, K. L., & Wang, W. (2019). Cache-
apportioning, and reserving components further conscious indexing for decision-support
adds to question handling proficiency. Ordering workloads. Proceedings of the VLDB
assists question recovery, information parceling Endowment, 12(11), 1506-1519.
mitigates information slant, and reserving limits
excess calculations, aggregately prompting [3] Smith, M. D., Yang, L., Smola, A. J., &
significant decreases in question reaction Harchaoui, Z. (2017). Exact gradient and Hessian
times.The contextual investigations directed in computation in MapReduce and data parallelism.
assorted true situations - a web based business arXiv preprint arXiv:1702.05747.
stage and a medical services examination
framework - highlight the functional pertinence [4] Franklin, M. J., & Zdonik, S. B. (1993).
and expansive materialness of our technique Parallel processing of recursive queries in a
across various industry spaces. These contextual multiprocessor. ACM Transactions on Database
investigations act as substantial instances of the Systems (TODS), 18(3), 604-645.
extraordinary effect our methodology can have
on question handling execution.Looking forward, [5] Hua, M., Zhang, L., & Chan, C. Y. (2003).
we perceive the unique idea of Enormous Query caching and optimization in distributed
Information conditions and the requirement for mediation systems. In Proceedings of the 29th
continuous variation to advancing information International Conference on Very Large Data
volumes and question responsibilities. Tending to Bases (pp. 11-22).
difficulties, for example, load offsetting in
situations with slanted information [6] Loukides, M. (2011). What is data science?
disseminations, investigating versatile reserving O'Reilly Media, Inc.
components, and coordinating AI strategies
address energizing roads for future [7] Xin, R. S., Rosen, J., Venkataraman, S., Yang,
examination.In conclusion, our proposed Q., Meng, X., Franklin, M. J., ... & Zaharia, M.
philosophy offers a powerful answer for (2013). Shark: SQL and rich analytics at scale. In
upgrading question handling in Large Proceedings of the 2013 ACM SIGMOD
International Conference on Management of Data Technique,” Int. J. Simul. Syst. Sci. Technol.,
(pp. 13-24). vol. 19, no. 6,pp-1-7, 2018,
doi:10.5013/IJSSST.a.19.06.21.
[8] Stonebraker, M., Abadi, D. J., & DeWitt, D.
J. (2005). MapReduce and parallel DBMSs: [16] Borkar, V. R., Carey, M. J., Li, C., Li, C.,
friends or foes? Communications of the ACM, Lu, P., & Manku, G. S. (2005). Process
51(1), 56-63. management in a scalable distributed stream
processor. In Proceedings of the 2005 ACM
[9] Dean, J., & Ghemawat, S. (2010). SIGMOD International Conference on
MapReduce: A flexible data processing tool. Management of Data (pp. 625-636).
Communications of the ACM, 53(1), 72-77.
[17] Cattell, R. G. G. (2010). Scalable SQL and
[10] Beitch, P. (1996). Optimizing queries on NoSQL data stores. ACM SIGMOD Record,
distributed databases. In ACM SIGMOD Record 39(4), 12-27.
(Vol. 25, No. 2, pp. 179-190). ACM.

[11] Raman, V., Swart, G., & Ceri, S. (2001).


Query execution techniques for solid state drives.
In Proceedings of the 27th International
Conference on Very Large Data Bases (pp. 91-
100).

[12] Zaharia, M., Chowdhury, M., Franklin, M.


J., Shenker, S., & Stoica, I. (2010). Spark: Cluster
computing with working sets. In Proceedings of
the 2nd USENIX conference on Hot topics in
cloud computing (Vol. 10, p. 10).

[13] E. Aarthi, S. Jana, W. Gracy Theresa, M.


Krishnamurthy, A. S. Prakaash, C. Senthilkumar,
S. Gopalakrishnan (2022), Detection and
Classification of MRI Brain Tumors using S3-
DRLSTM Based Deep Learning Model. IJEER
10(3), 597-603. DOI: 10.37391/IJEER.100331.

[14]Gopalakrishnan, S. and Kumar, P.M. (2016)


Performance Analysis of Malicious Node
Detection and
Elimination Using Clustering Approach on
MANET. Circuits and Systems, 7, 748-758

[15]S. Gopalakrishnan et al., “Design of Power


Divider for C-band operation using high
frequency Defected Ground Structure (DGS)

You might also like