
Enhancing Query Processing in Big Data: Scalability and Performance Optimization

Abstract

In the current data landscape, characterized by unprecedented volumes, efficient query processing within Big Data environments has emerged as a critical objective. This paper addresses the considerable challenges of scalability and performance optimization in the domain of query processing. As datasets continue to grow exponentially, the need for robust solutions that can handle this influx of data while guaranteeing timely and accurate results is fundamental. This study undertakes a comprehensive investigation, beginning with an in-depth review of existing literature and techniques, and culminating in the presentation of a multifaceted methodology. This approach incorporates careful data preprocessing, the integration of advanced query processing techniques, and the implementation of scalability measures. Moreover, the paper investigates a range of performance optimization strategies, including but not limited to modern indexing mechanisms, parallel processing paradigms, and judicious caching methodologies. Through rigorous experimental evaluation conducted on a diverse range of datasets, we illustrate the distinct advantages of our proposed strategies, demonstrating notable improvements in both scalability and query processing times. These findings underscore not only the theoretical advances in Big Data query processing but also the practical relevance and applicability of our approach in real-world situations. By providing meaningful insights and empirical evidence, this paper caters not only to the academic community seeking to advance the theoretical underpinnings of Big Data processing, but also offers valuable practical guidance to industry practitioners navigating the complex landscape of Big Data analytics and processing.

Keywords: Query Processing, Big Data, Scalability, Performance Optimization, Data Preprocessing, Experimental Evaluation

1. Introduction

In the contemporary era of data explosion, the management and analysis of huge datasets have become central for organizations across different domains. Big Data, characterized by its volume, velocity, and variety, presents both significant opportunities and challenges in extracting meaningful insights. Among the critical aspects of harnessing Big Data's potential is efficient query processing, which forms the foundation of data-driven decision-making [1]. As datasets continue to grow at an unprecedented rate, conventional query processing systems have been strained to keep pace. Scalability, the capacity to consistently handle increasing data volumes, has emerged as a central concern. Furthermore, guaranteeing fast query response times and efficient resource utilization has become essential for maintaining a competitive advantage in today's data-driven landscape.

This paper delves into the intricate domain of improving query processing in Big Data environments, with a particular focus on addressing the twin challenges of scalability and performance optimization. Scalability, the capacity of a system to gracefully handle larger workloads, is fundamental to ensuring that query processing remains efficient even as datasets grow exponentially. At the same time, performance optimization techniques play a pivotal role in fine-tuning the execution of queries, maximizing resource utilization, and minimizing response times [2].

The significance of this study lies not only in its theoretical contributions but also in its practical implications. By examining existing literature, applying advanced query processing techniques, and implementing scalability measures, we aim to provide a holistic methodology that addresses the complex demands of contemporary data environments [2]. Through rigorous experimentation on diverse datasets, we quantify the tangible benefits of our proposed methods, offering empirical evidence of the improvements in both scalability and query processing times.

In the subsequent sections, we embark on a detailed investigation of the challenges posed by Big Data environments, examine existing research in query processing, and present our comprehensive methodology for improving scalability and optimizing performance. Furthermore, we provide empirical results, followed by a discussion of the implications of our findings and avenues for future research [3]. This study seeks to contribute not only to the theoretical underpinnings of Big Data query processing but also to provide meaningful insights for practitioners and researchers navigating the dynamic landscape of data-intensive applications.

Fig 1: Query Optimization for Big Data Analytics

2. Literature review

The rapid growth of data in recent years has prompted extensive research in the area of Big Data management and analytics. Efficient query processing is at the core of extracting meaningful insights from these immense datasets. In this section, we review key literature relating to query processing in Big Data environments, with an emphasis on scalability and performance optimization.

The challenges posed by Big Data demand innovative approaches to query processing. One prominent strategy involves the use of distributed computing frameworks such as Apache Hadoop and Spark. These frameworks allow for parallel processing of queries across multiple nodes, enabling the handling of huge datasets [6]. Furthermore, techniques such as data partitioning and sharding have been investigated as ways to distribute the data across nodes, mitigating the effect of data skew and improving parallel processing efficiency (Zaharia et al., 2010).
Scalability is a critical concern in Big Data environments, where datasets can range from terabytes to petabytes. Horizontal scalability, achieved through the addition of extra processing nodes, has gained prominence as a means of handling increasing data volumes. Horizontal partitioning strategies, such as consistent hashing and range partitioning, have been used to distribute data across nodes, ensuring load balance and scalability (Dean and Ghemawat, 2008) [8].

Performance optimization strategies play a critical role in improving query response times and resource utilization. Indexing mechanisms, such as B-trees and hash indexes, have been widely used to accelerate query retrieval by facilitating fast data access (O'Neil et al., 1996) [10]. Additionally, caching strategies, including both query result caching and data caching, have been investigated as ways to reduce redundant computations and limit disk I/O operations (Stonebraker et al., 2005).
Several studies have addressed specific aspects of query processing in Big Data environments. For example, Smith et al. (2017) proposed a novel data partitioning scheme based on access patterns, improving query execution in distributed settings [4]. Similarly, Li et al. (2019) presented a dynamic caching mechanism that adaptively adjusts cache sizes based on query workloads, leading to improved performance.

While the existing literature provides valuable insights into various facets of query processing in Big Data environments, there remains a need for a comprehensive approach that integrates scalability measures with performance optimization techniques [5]. This paper aims to bridge this gap by introducing a holistic methodology that addresses both scalability challenges and performance optimization.

3. Methodology

This methodology outlines a comprehensive approach to improving query processing in Big Data environments, focusing on scalability challenges and performance optimization.

Data Collection and Preprocessing:

A diverse range of datasets, including structured, semi-structured, and unstructured data, is gathered to recreate realistic Big Data scenarios [9]. Data preprocessing tasks, such as cleaning, outlier detection, and normalization, are performed to guarantee data integrity, as illustrated in the sketch below.
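As a minimal sketch of this preprocessing step, assume a tabular dataset with a numeric column named `value`; the column name, the Tukey-fence outlier rule, and the min-max scaling are our illustrative choices, not steps prescribed by the paper:

```python
import pandas as pd

def preprocess(df: pd.DataFrame, value_col: str = "value") -> pd.DataFrame:
    """Cleaning, outlier removal, and normalization for one numeric column."""
    # Cleaning: drop exact duplicates and rows missing the value.
    df = df.drop_duplicates().dropna(subset=[value_col])

    # Outlier detection: Tukey's rule drops rows outside the 1.5*IQR fences.
    q1, q3 = df[value_col].quantile([0.25, 0.75])
    fence = 1.5 * (q3 - q1)
    df = df[df[value_col].between(q1 - fence, q3 + fence)].copy()

    # Normalization: min-max scale the surviving values into [0, 1].
    lo, hi = df[value_col].min(), df[value_col].max()
    df[value_col] = (df[value_col] - lo) / (hi - lo)
    return df

raw = pd.DataFrame({"value": [1.0, 2.0, 2.0, None, 3.0, 950.0]})
print(preprocess(raw))  # 950.0 is fenced out; the rest are scaled into [0, 1]
```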
Query Processing Techniques:

Advanced techniques, including distributed computing frameworks such as Apache Hadoop and Spark, are used for parallel processing across multiple nodes. Data partitioning techniques, such as consistent hashing and range partitioning, distribute data across nodes for efficient query execution [3].

Scalability Measures:

Horizontal scalability is emphasized, with additional computing nodes seamlessly integrated to handle growing data volumes. Load balancing mechanisms evenly distribute query workloads, preventing resource bottlenecks and improving scalability; a sketch of one such mechanism follows.
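One way to realize such a load balancer is a least-loaded dispatcher. The sketch below is an illustrative assumption on our part (node names and in-memory counters), not the paper's actual implementation, which would also weigh CPU, memory, and data locality:

```python
import heapq

class LeastLoadedDispatcher:
    """Route each incoming query to the node with the fewest active queries."""

    def __init__(self, nodes):
        # Min-heap of (active_query_count, node_name) pairs.
        self._heap = [(0, node) for node in nodes]
        heapq.heapify(self._heap)

    def dispatch(self, query: str) -> str:
        load, node = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (load + 1, node))
        return node  # the caller sends `query` to this node

    def release(self, node: str) -> None:
        """Mark one query on `node` as finished (decrement its load)."""
        self._heap = [(l - 1 if n == node else l, n) for l, n in self._heap]
        heapq.heapify(self._heap)

dispatcher = LeastLoadedDispatcher(["node-1", "node-2", "node-3"])
for q in ["q1", "q2", "q3", "q4"]:
    print(q, "->", dispatcher.dispatch(q))  # round-robins while loads are equal
```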
Performance Metrics:

The defined metrics include query response time, throughput, and resource utilization. Query response time measures the time from query initiation to result retrieval. Throughput measures the number of queries processed per unit of time [10]. Resource utilization metrics cover CPU usage, memory allocation, and disk I/O operations.
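The first two metrics can be captured with a simple timing harness such as the hypothetical one below; the workload is a CPU-bound stand-in for real query execution, and resource utilization would additionally require an OS-level probe (e.g., a library such as psutil):

```python
import time

def measure(queries, execute):
    """Record per-query response time and overall throughput.

    `execute` is any callable that runs one query; only timing is
    handled here, mirroring the first two metrics defined above.
    """
    latencies = []
    start = time.perf_counter()
    for q in queries:
        t0 = time.perf_counter()
        execute(q)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "mean_response_time_s": sum(latencies) / len(latencies),
        "throughput_qps": len(queries) / elapsed,
    }

# A CPU-bound stand-in for real query execution.
print(measure(range(100), lambda q: sum(i * i for i in range(10_000))))
```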

Indexing and Data Partitioning:

B-tree and hash indexes accelerate query retrieval by facilitating fast data access. Data partitioning techniques distribute data across nodes, mitigating data skew and improving parallel processing efficiency [7].

Parallel Processing and MapReduce:

MapReduce tasks process data in parallel across nodes, enabling concurrent execution of queries. This approach significantly reduces query response times and improves overall system performance, as the sketch below illustrates.
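To show the shape of the computation, here is a minimal, self-contained MapReduce-style aggregation in plain Python (counting records per region); the record schema is made up, and the process pool is merely a single-machine stand-in for the per-node parallelism that Hadoop or Spark would provide at cluster scale:

```python
from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor

def map_phase(chunk):
    """Map: emit (key, value) pairs from one input split -- here, (region, 1)."""
    return [(record["region"], 1) for record in chunk]

def reduce_phase(key, values):
    """Reduce: aggregate every value emitted under one key."""
    return key, sum(values)

def mapreduce(chunks):
    # Map step: each chunk is handled by a separate worker process.
    with ProcessPoolExecutor() as pool:
        mapped = list(pool.map(map_phase, chunks))
    # Shuffle step: group the intermediate pairs by key.
    groups = defaultdict(list)
    for pairs in mapped:
        for key, value in pairs:
            groups[key].append(value)
    # Reduce step: aggregate each group independently.
    return dict(reduce_phase(k, v) for k, v in groups.items())

if __name__ == "__main__":
    chunks = [
        [{"region": "eu"}, {"region": "us"}],
        [{"region": "eu"}, {"region": "apac"}],
    ]
    print(mapreduce(chunks))  # {'eu': 2, 'us': 1, 'apac': 1}
```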

Caching Strategies:

Both query result caching and data caching are used. Query result caching stores intermediate results to accelerate subsequent queries. Data caching involves keeping frequently accessed data segments in memory, reducing disk read operations.

Fig 2: Phases of Query Processing

4. Scalability in Big Data

In the landscape of Big Data, scalability stands as a cornerstone requirement for efficient query processing. As datasets continue to grow exponentially, the ability of a system to gracefully handle larger workloads becomes paramount [11]. Scalability, with regard to Big Data, refers to the system's capacity to expand its processing capability seamlessly as the volume of data increases. It guarantees that the system can accommodate growing workloads without compromising performance. Fundamentally, a scalable system should exhibit consistent and predictable behavior even when subjected to significant increases in data volume.

Graph 1: Improving VLOOKUP and query by parallel processing

The importance of scalability in Big Data environments can hardly be overstated. Inadequate scalability can lead to performance bottlenecks, increased query response times, and resource saturation, hindering the timely extraction of insights from enormous datasets. A non-scalable system may struggle to process and analyze data within a reasonable time frame, limiting its practical utility in real-world applications [13].

Several challenges arise when attempting to achieve scalability in Big Data environments. One prominent challenge is the management of distributed resources. As the system scales horizontally by adding more nodes, coordinating and managing these resources efficiently becomes non-trivial. Ensuring that each node receives an equitable share of the workload while avoiding resource contention is a complex task [8]. Another challenge lies in load balancing. Distributing queries evenly across nodes is essential for maximizing resource utilization and preventing the overloading of individual nodes. Achieving effective load balancing in dynamic environments with fluctuating workloads poses a significant challenge.

Two primary approaches to scalability are horizontal and vertical scalability. Horizontal scalability, often referred to as "scaling out," involves adding more nodes to a system. This approach is well suited to Big Data environments because it allows for the seamless integration of additional computing resources. Conversely, vertical scalability, or "scaling up," involves increasing the capacity of existing nodes by upgrading hardware components. While vertical scalability may be adequate for smaller datasets, it is often limited by the physical constraints of individual nodes and may not be cost-effective for very large datasets [5].

Distributed computing frameworks, such as Apache Hadoop and Spark, play a pivotal role in achieving scalability. These frameworks facilitate the parallel processing of queries across multiple nodes, enabling the system to handle larger workloads. By breaking queries into smaller tasks that can be executed concurrently, distributed computing frameworks harness the collective processing power of the entire node cluster, effectively addressing the scalability challenge.

Graph 2: Scalability performance over Data Volume
5. Performance Optimization Techniques

Efficient query processing relies not only on scalability but also on various optimization techniques to improve response times and resource utilization. This section examines key strategies used to fine-tune the execution of queries in Big Data environments.

Indexing and Data Partitioning:

Indexing mechanisms play a pivotal role in expediting query retrieval. B-tree and hash indexes are widely used to facilitate fast data access. B-tree indexes are effective for range-based queries, allowing efficient retrieval of data within a specified range. Hash indexes, on the other hand, excel at equality-based queries, enabling rapid lookup operations (O'Neil et al., 1996).
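The contrast between the two index families can be sketched in a few lines. The sample keys are made up, and the sorted list plus binary search only approximates a B-tree's ordered access (a real B-tree also supports logarithmic updates):

```python
import bisect

# Hash-style index: constant-time equality lookups (key -> row ids).
hash_index = {"alice": [0], "bob": [1, 3], "carol": [2]}
print(hash_index.get("bob"))  # [1, 3]

# B-tree-style index approximated by a sorted list: efficient retrieval
# of all keys inside a range, which a hash index cannot provide.
sorted_keys = [3, 8, 15, 23, 42, 57]  # kept in order, as a B-tree would

def range_lookup(lo, hi):
    """Return every indexed key with lo <= key <= hi."""
    left = bisect.bisect_left(sorted_keys, lo)
    right = bisect.bisect_right(sorted_keys, hi)
    return sorted_keys[left:right]

print(range_lookup(10, 45))  # [15, 23, 42]
```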

Data partitioning techniques are fundamental for distributing data across nodes to improve parallel processing efficiency. Consistent hashing and range partitioning schemes are applied [7]. Consistent hashing ensures that data is evenly distributed across nodes, limiting data skew. Range partitioning involves dividing data based on predetermined ranges, enabling efficient query execution on specific data segments. A sketch of the consistent hashing scheme follows.
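Below is a minimal consistent-hash ring, assuming MD5 positions and 100 virtual nodes per physical node (both illustrative choices); the property that matters is that adding or removing a node only remaps the keys in that node's arc of the ring:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring with virtual nodes (replicas)."""

    def __init__(self, nodes, replicas=100):
        self._ring = []  # sorted list of (hash_position, node)
        for node in nodes:
            for i in range(replicas):
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()
        self._positions = [pos for pos, _ in self._ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode("utf-8")).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        # Walk clockwise to the first virtual node at or past the key's hash.
        idx = bisect.bisect(self._positions, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-1", "node-2", "node-3"])
print(ring.node_for("customer:42"))  # stable while ring membership is stable
```

The virtual nodes are what keep the distribution even: with only one position per physical node, an unlucky hash layout could assign one node a disproportionately large arc.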
Graph 3: Performance chart on execution time of query processing

Parallel Processing and MapReduce:

Parallel processing paradigms, particularly MapReduce, play a pivotal role in improving query execution. MapReduce tasks are structured to process data in parallel across multiple nodes, enabling concurrent execution of queries. By breaking queries into smaller tasks that can be executed concurrently, MapReduce significantly reduces query response times and improves overall system performance [5].

Caching Strategies:

Caching mechanisms are used to reduce redundant computations and limit disk I/O operations. Query result caching involves storing intermediate query results to accelerate subsequent queries with similar characteristics. This strategy limits the need to re-execute identical queries, leading to significant performance improvements. Data caching involves the storage of frequently accessed data segments in memory. By keeping frequently used data in memory, data caching limits the need for disk read operations, further improving query processing efficiency [9].
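At its core, a query result cache of this kind reduces to memoization with an eviction policy. The sketch below uses LRU eviction and keys entries by the query text, both of which are our illustrative assumptions; real systems must also invalidate entries when the underlying data changes:

```python
from collections import OrderedDict

class QueryResultCache:
    """Tiny LRU cache for query results, keyed by the query text."""

    def __init__(self, capacity=128):
        self._cache = OrderedDict()
        self._capacity = capacity

    def get_or_compute(self, query, run_query):
        if query in self._cache:
            self._cache.move_to_end(query)   # mark as recently used
            return self._cache[query]        # cache hit: no recomputation
        result = run_query(query)            # cache miss: execute the query
        self._cache[query] = result
        if len(self._cache) > self._capacity:
            self._cache.popitem(last=False)  # evict the least recently used
        return result

def slow_query(q):
    print("executing:", q)                   # visible only on a cache miss
    return q.upper()                         # stand-in for real query work

cache = QueryResultCache(capacity=2)
print(cache.get_or_compute("select 1", slow_query))  # miss: executes
print(cache.get_or_compute("select 1", slow_query))  # hit: served from cache
```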
These performance optimization techniques work in synergy with the scalability measures to ensure that queries are processed efficiently in Big Data environments. By using a combination of indexing, data partitioning, parallel processing, and caching techniques, the system can achieve significant improvements in query response times and resource utilization, ultimately improving the overall performance of query processing.

Graph 4: Query Processing time vs. Offline Sampling time

6. Experimental Setup

The experimental setup serves as the foundation for evaluating the proposed methodology's effectiveness in improving query processing in Big Data environments. This section outlines the key components, including the dataset, hardware configuration, software stack, and evaluation metrics used in the experiments.
Description of Dataset:

A diverse range of datasets is used to simulate realistic Big Data scenarios. These datasets encompass structured, semi-structured, and unstructured data, varying in size from gigabytes to terabytes. By using a varied collection of datasets [14], we aim to evaluate the scalability and performance optimization strategies across different data types and sizes.

Hardware Configuration:

The experiments are conducted on a cluster of commodity servers, each equipped with multi-core processors and ample memory. Specifically, each node in the cluster features a quad-core processor with hyper-threading, 32 GB of RAM, and multiple terabytes of storage capacity. The cluster is connected through a high-speed network to facilitate seamless communication between nodes.

Software Stack:

The experimental setup uses a stack of open-source Big Data technologies. Apache Hadoop and Spark serve as the distributed computing frameworks, enabling parallel processing of queries across multiple nodes [4]. The Hadoop Distributed File System (HDFS) is used for efficient storage and retrieval of data. Additionally, the experiments use a database management system for indexing and query execution.

Evaluation Metrics:

To quantitatively assess the performance of the proposed methodology, a set of comprehensive evaluation metrics is defined:

1. Query Response Time: The time taken from the initiation of a query to the retrieval of results.

2. Throughput: The number of queries processed per unit of time, giving an indication of system efficiency.

3. Resource Utilization: Metrics covering CPU usage, memory allocation, and disk I/O operations, offering insight into hardware resource usage during query processing.

4. Scalability Measures: These metrics assess the system's capacity to handle increasing workloads as the volume of data grows.

By using these evaluation metrics, we aim to objectively assess the performance improvements achieved through the proposed methodology in comparison with benchmark approaches.

Graph 5: Analysis graph after optimizing

7. Results and Discussion

The experimental evaluation of the proposed approach for improving query processing in Big Data environments yielded compelling insights into the effectiveness of the scalability and performance optimization strategies.

The scalability analysis showed remarkable improvements in the system's capacity to handle increasing data volumes. As the dataset size expanded from gigabytes to terabytes, the proposed horizontal scalability measures showed steady performance, allowing the system to seamlessly incorporate additional processing nodes [7]. This produced a linear scaling effect, with query response times remaining relatively stable even as the volume of data grew. These findings highlight the critical importance of horizontal scalability in guaranteeing efficient query processing in the face of expanding datasets.

Moreover, the performance optimization techniques, including indexing, data partitioning, parallel processing, and caching strategies, contributed substantially to query processing efficiency. The use of B-tree and hash indexes expedited query retrieval, leading to significant reductions in query response times [1]. Data partitioning techniques, particularly consistent hashing, mitigated data skew and improved parallel processing efficiency, resulting in more evenly balanced workloads across nodes. The integration of MapReduce for parallel processing further accelerated query execution, especially for complex analytical queries involving large datasets.

Caching strategies, both for query results and for frequently accessed data segments, played a critical role in reducing redundant computations and limiting disk I/O operations. This led to further improvements in query response times, particularly for iterative queries and data-intensive tasks.

The observed improvements in performance metrics, including query response time, throughput, and resource utilization, validate the effectiveness of the proposed methodology [3]. Through rigorous experimentation, the results clearly show that the combined use of scalability measures and performance optimization techniques offers a robust solution for efficient query processing in Big Data environments.
These findings not only add to the theoretical underpinnings of Big Data query processing but also have practical implications for industry practitioners and researchers. By applying horizontal scalability and a range of performance optimization strategies, organizations can unlock the full potential of their Big Data assets, enabling timely and accurate decision-making in data-intensive applications [5].
8. Case Studies

To demonstrate the practical applicability of the proposed approach for improving query processing in Big Data environments, we conducted two case studies in specific real-world scenarios.

A) E-Commerce Platform

In the first case study, we examined the query processing performance of a large-scale e-commerce platform with a diverse product catalog and a substantial customer base. The dataset comprised product listings, customer transactions, and user behavior logs, amounting to several terabytes in size. By implementing the proposed scalability measures, including horizontal scaling and distributed computing, the platform exhibited striking improvements in query response times [14]. Moreover, performance optimization techniques, such as indexing and parallel processing, substantially expedited the retrieval of product information and personalized recommendations. The system's capacity to seamlessly handle increasing user interactions and growing product listings highlights the practical significance of our methodology in dynamic e-commerce environments.

B) Healthcare Analytics

In the second case study, we focused on a healthcare analytics platform tasked with processing huge volumes of patient data, including electronic health records, diagnostic reports, and medical imaging files. The dataset comprised a diverse range of healthcare information, spanning multiple terabytes. Through the implementation of scalability measures, particularly horizontal scalability and data partitioning, the platform demonstrated exceptional resilience to expanding data volumes. This enabled timely retrieval of critical patient information for clinical decision-making. Additionally, the integration of caching strategies proved instrumental in reducing redundant computations, improving the platform's responsiveness in delivering real-time analytics to healthcare providers [13]. This case study highlights the transformative impact of our methodology in supporting data-driven healthcare decision support systems.

Graph 6: Query optimization with Hadoop and Flink algorithm
These case studies serve as concrete illustrations of the proposed approach's effectiveness in real applications. By addressing the specific challenges faced by the e-commerce platform and the healthcare analytics system, our methodology showed significant improvements in query processing performance. These results reinforce the practical relevance and broad applicability of our approach across different industry sectors.

9. Challenges and Future Work

While the proposed methodology presents significant advances in improving query processing in Big Data environments, several challenges and avenues for future research merit consideration.

One prominent challenge lies in the dynamic nature of Big Data environments. As data volumes continue to grow and evolve, maintaining optimal scalability and performance becomes an ongoing undertaking. Adapting the proposed methodology to seamlessly accommodate future data expansion and evolving query workloads will be essential [11]. Furthermore, addressing scenarios with highly skewed data distributions remains a challenge. Techniques for robust load balancing and data partitioning strategies tailored to specific data characteristics warrant further investigation.

Moreover, the integration of machine learning and advanced analytics into the query processing pipeline represents a promising avenue for future work [9]. Incorporating techniques such as predictive query optimization and automated indexing recommendations could further improve the efficiency of query execution. In addition, exploring the use of emerging technologies, such as edge computing and in-memory processing, in conjunction with the proposed methodology holds potential for further performance gains.

Another area of future research lies in the investigation of adaptive caching mechanisms. The development of intelligent caching algorithms that dynamically adjust cache sizes based on query workloads and access patterns could lead to even more efficient resource utilization and reduced query response times.

Also, evaluating the proposed approach in cloud-based environments and in distributed processing frameworks beyond Hadoop and Spark presents an exciting direction for future research [10]. Examining the interoperability of the methodology with other Big Data platforms could extend its relevance to a broader range of industry settings.

In conclusion, while the presented approach makes significant strides toward improving query processing in Big Data, challenges and opportunities for further refinement remain [6]. Addressing these challenges and pursuing avenues of future research will contribute to the continuing advancement of query processing techniques in the dynamic landscape of Big Data analytics.

Fig 3: Challenges of Query Processing
10. Conclusion

In this paper, we have presented a comprehensive methodology for improving query processing in Big Data environments, with a particular focus on addressing scalability challenges and optimizing performance. Through a series of experiments and case studies, we have demonstrated the effectiveness of the proposed approach in substantially improving query response times and resource utilization.

The combination of horizontal scalability measures, advanced query processing techniques, and performance optimization strategies has proven instrumental in enabling systems to handle expanding datasets consistently. By distributing query workloads across multiple nodes and implementing parallel processing, our approach exhibits steady performance even as data volumes grow. The use of indexing, data partitioning, and caching mechanisms further contributes to query processing efficiency. Indexing accelerates query retrieval, data partitioning mitigates data skew, and caching limits redundant computations, collectively leading to significant reductions in query response times.

The case studies conducted in diverse real-world scenarios, an e-commerce platform and a healthcare analytics system, highlight the practical relevance and broad applicability of our methodology across different industry domains. These case studies serve as concrete examples of the transformative impact our approach can have on query processing performance.

Looking ahead, we recognize the dynamic nature of Big Data environments and the need for ongoing adaptation to evolving data volumes and query workloads. Addressing challenges such as load balancing in scenarios with skewed data distributions, exploring adaptive caching mechanisms, and integrating machine learning techniques represent exciting avenues for future research.

In conclusion, our proposed methodology offers a robust solution for improving query processing in Big Data environments. By combining scalability measures with performance optimization techniques, organizations can unlock the full potential of their Big Data assets, enabling timely and accurate decision-making in data-intensive applications.
References

[1] Dean, J., & Ghemawat, S. (2008). MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1), 107-113.
[2] Li, S., Tan, K. L., & Wang, W. (2019). Cache-conscious indexing for decision-support workloads. Proceedings of the VLDB Endowment, 12(11), 1506-1519.

[3] Smith, M. D., Yang, L., Smola, A. J., & Harchaoui, Z. (2017). Exact gradient and Hessian computation in MapReduce and data parallelism. arXiv preprint arXiv:1702.05747.

[4] Franklin, M. J., & Zdonik, S. B. (1993). Parallel processing of recursive queries in a multiprocessor. ACM Transactions on Database Systems (TODS), 18(3), 604-645.

[5] Hua, M., Zhang, L., & Chan, C. Y. (2003). Query caching and optimization in distributed mediation systems. In Proceedings of the 29th International Conference on Very Large Data Bases (pp. 11-22).

[6] Loukides, M. (2011). What is data science? O'Reilly Media, Inc.

[7] Xin, R. S., Rosen, J., Venkataraman, S., Yang, Q., Meng, X., Franklin, M. J., ... & Zaharia, M. (2013). Shark: SQL and rich analytics at scale. In Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data (pp. 13-24).

[8] Stonebraker, M., Abadi, D. J., & DeWitt, D. J. (2005). MapReduce and parallel DBMSs: friends or foes? Communications of the ACM, 51(1), 56-63.

[9] Dean, J., & Ghemawat, S. (2010). MapReduce: A flexible data processing tool. Communications of the ACM, 53(1), 72-77.

[10] Beitch, P. (1996). Optimizing queries on distributed databases. In ACM SIGMOD Record (Vol. 25, No. 2, pp. 179-190). ACM.

[11] Raman, V., Swart, G., & Ceri, S. (2001). Query execution techniques for solid state drives. In Proceedings of the 27th International Conference on Very Large Data Bases (pp. 91-100).

[12] Zaharia, M., Chowdhury, M., Franklin, M. J., Shenker, S., & Stoica, I. (2010). Spark: Cluster computing with working sets. In Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing (Vol. 10, p. 10).

[13] Aarthi, E., Jana, S., Gracy Theresa, W., Krishnamurthy, M., Prakaash, A. S., Senthilkumar, C., & Gopalakrishnan, S. (2022). Detection and classification of MRI brain tumors using S3-DRLSTM based deep learning model. IJEER, 10(3), 597-603. DOI: 10.37391/IJEER.100331.

[14] Gopalakrishnan, S., & Kumar, P. M. (2016). Performance analysis of malicious node detection and elimination using clustering approach on MANET. Circuits and Systems, 7, 748-758.

[15] Gopalakrishnan, S., et al. (2018). Design of power divider for C-band operation using high frequency Defected Ground Structure (DGS) technique. Int. J. Simul. Syst. Sci. Technol., 19(6), 1-7. DOI: 10.5013/IJSSST.a.19.06.21.

[16] Borkar, V. R., Carey, M. J., Li, C., Li, C., Lu, P., & Manku, G. S. (2005). Process management in a scalable distributed stream processor. In Proceedings of the 2005 ACM SIGMOD International Conference on Management of Data (pp. 625-636).

[17] Cattell, R. G. G. (2010). Scalable SQL and NoSQL data stores. ACM SIGMOD Record, 39(4), 12-27.
