CVR JOURNAL
OF
SCIENCE & TECHNOLOGY
EDITORIAL
It is with immense pleasure that we bring out Volume 11 of the biannual journal of our college,
CVR Journal of Science and Technology. We received a good number of research papers for review,
both from our own faculty and from outside the institution. Each submission passed through a rigorous
filtering process: an anti-plagiarism check by software followed by review by experts. The research
papers finally selected for publication appear in the present volume. We are also happy to share with
the readers that the college is accredited by NAAC with an 'A' grade and has also obtained NBA
accreditation. Affiliation for all courses, and all seats in all branches, has been obtained from JNTUH.
It is expected that the contributors will further enhance the reputation of the college through this journal.
The breakup of the papers among various branches is:
Civil 4, Mech 2, CSE 1, ECE 4, EEE 3, EIE 3, IT 2, H & S- 1
The research papers from Civil engineering cover the areas of rheology, fracture energy, non-
linear static procedures on high rise buildings. Research papers from Mechanical engineering cover
investigation on wear characteristics and analysis and design of plastic mold for male insulator. A survey
paper on computational intelligence is contributed by CSE department. The research papers in the ECE
branch cover interesting areas like SoC using ANFIS algorithm, SIFT algorithm, Static and Dynamic
Core Assignment and WT based speech compression using VC++.
The research papers from EEE Department cover the areas of Transformerless PV inverter,
Rooftop Solar Plants and DG with micro grid. Research papers received from EIE Department cover the
areas of orthogonal frequency division multiplexing for SDR, optimizing leakage power in FPGAs and
chemical combustion process in rotary kiln using LabVIEW. Contribution from IT department authors is
on dynamic load balancing and user level runtime systems. A brief overview on conservation of lakes in
India is contributed by H&S faculty.
The management is supporting research and Ph.D. programmes by liberally sanctioning study
leave for the faculty of this college. Faculty members working towards a Ph.D. or on research projects
are expected to contribute to the journal. The management also encourages the authors of research
papers with merit-based incentives.
I am thankful to all the members of the Editorial Board for their help in reviewing and
shortlisting the research papers for inclusion in the current volume of the journal. I wish to thank Dr. S.
Venkateshwarlu, HOD EEE and Associate Editor, for the pains he has taken in bringing out this volume.
Thanks are due to Dr. E. Narasimhacharyulu, HOD, H&S, and the staff of the English Department for
reviewing the papers to ensure that grammatical and typographical errors are corrected. I am also thankful
to Smt. A. Sreedevi, DTP Operator in the Office of the Dean, Research, for her effort in preparing the
papers in camera-ready form.
For more clarity on waveforms, graphs, circuit diagrams and figures, readers are requested to
browse the soft copy of the journal, which is available in color through a link on the college website
www.cvr.ac.in.
Prof. K. Lal Kishore
Editor
CONTENTS
1. Studies on Rheology, Strength and Cementing Efficiency of High Strength Grade Quaternary
Blended Self-Compacting Concrete Incorporating High Reactivity Metakaolin
M V Seshagiri Rao, S Shrihari, V Srinivasa Reddy
2. Need For Fracture Behaviour Based Designer Friendly Expressions For Fracture
Energy and Minimum Flexural Reinforcement
T. Muralidhar Rao, T. D. Gunneswara Rao
3. Investigation on Effects of Nonlinear Static Procedures on High Rise Buildings
Sreenath Mahankali, P.V.V.S.S.R.Krishna
4. Fracture Parameters of Plain Concrete Beams Using ANSYS
Manasa Koppoju, T. Muralidhara Rao
5. Transformerless Photo Voltaic Inverter Topologies for Low Power Domestic Applications
G. Janardhan, N.N.V. Surendrababu
6. A Distributed Generation System with Micro Grid for Effective Energy Management
Kalluri Deepika
7. Performance Analysis of Rooftop Solar Plants in CVR College of Engineering: A Case Study
P. Rajesh Kumar, Ch. Lokeshwar Reddy
8. Investigation of wear characteristics of Fe-Cr-C hardfacing alloy on AISI-304 steel
K.Sriker, P.Uma Maheswera Reddy, M.Venkata Ramana
9. Analysis and Design of Plastic Mold for Male Insulator of Solar Connector using Plastic Advisor 7.0
Lokeswar Patnaik, Sunil Kumar
10. SoC Based Sigma Delta ADC Using ANFIS Algorithm for ECG Signal Processing Systems
M. Alakananda, B. K. Madhavi
11. SoC Based SIFT Algorithm for Identification of the Quality Objects for Palletization Application
D.Renuka, B.K.Madhavi
12. Comparison of WT based Speech Compression Techniques using VC++
K.Arun Kumar, M.VinodKumar Reddy
13. User level Static and Dynamic Core assignment in Multicore System
Dhruva R. Rinku, M. Asha Rani
14. A Survey on Computational Intelligence Applications in Software Engineering and its Data
K. Narendar Reddy, Kiran Kumar Jogu
15. Dynamic load balancing in cloud using extended Hungarian method
S. Jyothsna, Bipin Bihari Jayasingh
16. Performance Analysis of load balancing queues in User Level Runtime systems for multi-core processors
Vikranth B
17. Implementation of Orthogonal Frequency Division Multiplexing for SDR using MATLAB
R. Prameela Devi
18. Efficient Design Methodologies for Optimizing the Leakage Power in FPGAs
O.VenkataKrishna, B.Janardhana Rao
19. Analysation of Industrial Parameters for Chemical Combustion Process in Rotary
KILN using LabVIEW with Wireless Technology
G.Venkateswarlu, K.Uday
20. A Brief Overview on Conservation of Lakes in India
Rohini.A
Abstract - The present work aims at determining the most suitable mix proportion that can produce metakaolin based quaternary blended high strength SCC of desirable strength. The results of this study will lead to a reduction in the usage of cement, furthering sustainable development in the concrete industry by reusing industrial waste by-products (SCMs) as cement replacements and reducing harmful impact on the environment. This study systematically investigates the synergistic effect of metakaolin (MK) and microsilica (MS) on the fresh and strength properties of fly ash based SCC of M80 grade. The results are compared to establish the enhanced micro-structural and engineering properties of metakaolin based quaternary blended SCC. By incorporating MK into MS+FA based ternary blended SCC mixes, the amount of fly ash used has almost doubled. From this observation, it can be concluded that MS in blended SCC mixtures imparts high strength, while MK inclusion enables the use of a high quantity of fly ash in SCC mixes for similar strengths and flow properties. The quaternary blended fly ash based M80 grade SCC mix made with MS and MK together is found to be superior to the ternary blended fly ash based M80 grade SCC mix made with MS or MK, for the reason that, for similar strength, less cement is used and more fly ash is consumed. The efficiency factor for the quaternary blended SCC mix reveals that, for similar strength, 50% of the cement can be replaced with an FA28%+MS11%+MK11% combination of pozzolanic mixture.

Index terms - Self-compacting concrete, metakaolin, quaternary blended, efficiency factor, rheology, compressive strength.

I. INTRODUCTION
Though self-compacting concrete (SCC) can be used on most construction sites, its rheological characterization must be enhanced to better control its placement. Also, the fresh SCC must be stable to safeguard the homogeneity of the mechanical strength of the structure. The stability of SCC can be improved by incorporating fine materials such as metakaolin (MK), micro silica (MS) and fly ash (FA), because an increase in cement content leads to a substantial rise in material cost and often has other negative effects on concrete properties (e.g. increased thermal stress and shrinkage) [1]. The use of such pozzolans may provide greater cohesiveness by improving the grain-size distribution and particle packing [2]. The use of mineral admixtures not only reduces the material cost but also improves the fresh and hardened properties of blended SCCs [3]. In recent years, there has been growing attention to the use of metakaolin (MK) as a mineral admixture to enhance the properties of concrete [4]. In the literature, however, the use of MK in the production of self-compacting concrete has not found adequate interest. Considered to have higher reactivity than most other pozzolans, metakaolin is a valuable admixture for concrete/cement applications [5]. Replacing 8-20% of Portland cement (by weight) with metakaolin produces a concrete mix which exhibits favorable engineering properties, owing to the filler effect, the acceleration of OPC hydration, and the pozzolanic reaction [6]. The filler effect and the hydration reaction are immediate, while the effect of the pozzolanic reaction occurs between 3 and 14 days.

II. OBJECTIVE
This study systematically investigates the synergistic effect of metakaolin (MK) and microsilica (MS) on the fresh and strength properties of fly ash based SCC of M80 grade. The results are compared to establish the enhanced micro-structural and engineering properties of metakaolin based quaternary blended SCC over ternary blended SCC. The use of appropriately proportioned metakaolin and microsilica in fly ash blended SCC reveals the benefits of their synergic effect in improving the rheological properties and strength characteristics of fly ash based M80 grade SCC. The primary objective of this research work is to quantitatively comprehend and assess the role of metakaolin (MK) in the development of early strength in fly ash based SCC of high strength grade (M80).
III. MATERIALS AND MIX PROPORTIONS
The materials used in the experimental investigation are locally available OPC cement, river sand (Zone II), coarse aggregate (10 mm), mineral and chemical admixtures such as fly ash, micro silica and metakaolin, a PCE based super plasticizer (SP), and a viscosity modifying agent (VMA). Based on the Nan Su mix design method, the material quantities required per cu.m, namely the powder content (cement + pozzolan), fine aggregate, coarse aggregate, water, and the dosages of SP and VMA, are evaluated for high strength grade (M80) Self-Compacting Concrete (SCC). The final mix proportions and the optimum proportions of FA, MS and MK combinations in binary, ternary and quaternary blended high strength SCC mixes are arrived at after several trial mixes on the material quantities computed using the Nan Su mix design method, subject to satisfaction of the EFNARC flow properties.

IV. EXPERIMENTAL INVESTIGATIONS
The present experimental investigations aim to obtain specific experimental data that help in understanding the effect of the synergic action of metakaolin (MK), micro silica (MS) and fly ash (FA) combinations in SCC mixes of high strength grade (M80) on rheological behavior and strength properties. Compressive strengths at 3, 7, 28, 60 and 90 days were determined by conducting detailed laboratory investigations on optimally blended high strength grade (M80) self-compacting concrete (SCC) mixes made with fly ash (FA), microsilica (MS) and metakaolin (MK). For calculating the efficiency of the metakaolin, microsilica and fly ash combination in quaternary blended SCC, an equation has been proposed by the author based on the principle of Bolomey's equation for predicting the strength of concrete containing mineral admixtures. The efficiency factors evaluated can be used for proportioning high strength grade (M80) binary, ternary and quaternary blended self-compacting concrete (SCC) made with SCMs such as fly ash (FA), microsilica (MS) and metakaolin (MK).

V. EVALUATION OF CEMENTING EFFICIENCY FACTORS
An effort is made to quantify the cementitious efficiency of fly ash (FA), microsilica (MS) and metakaolin (MK) in binary, ternary and quaternary blended self-compacting concrete (SCC) systems of high strength grade (M80). The effect of the synergic action of the metakaolin (MK), microsilica (MS) and fly ash (FA) combination on the strengths of binary, ternary and quaternary blended SCC may be modelled using a cementing efficiency factor (k). The concept of efficiency can be used for comparing the relative performance of SCMs, when incorporated into SCC, with the performance of OPC SCC. Efficiency factors found from Bolomey's strength equation are used to describe the effect of the SCMs combination replacement on the enhancement of the strength characteristics of SCC. This factor gives only an indication of the added material's effect on concrete strength, since it does not distinguish between the filler effect and chemical reactions. The well-known Bolomey's equation, often used to relate strength and water/cement ratio, is

S = A[(C + kP)/W - 0.5]

where S is the compressive strength in MPa, C is the cement content in kg/m3, W is the water content in kg/m3, A and B are Bolomey's coefficients (constants; B = 0.5 in the form above), P is the amount of SCMs replaced by weight of cement (bwc), and k denotes the efficiency factor of the SCMs combination. Knowing the amounts of C, P and W and the strength S achieved for each SCMs dosage replacement, the efficiency factor k has been computed for each of the replacement dosages. Thus W/(C + kP) is the water/effective-powder ratio and kP is the equivalent cement content of the SCMs combination. The SCMs/OPC ratio is an important factor for determining the efficiency of SCMs in SCC, so the SCMs proportioning is arrived at based on the strength data from experiments on SCMs blended SCC mixes. Efficiency factors found from this strength equation are used to describe the effect of the SCMs replacement.

VI. TEST RESULTS AND DISCUSSIONS
A. Optimization of Mix Proportions
The initial quantities calculated using the Nan Su method for the high strength (M80) grade SCC mix are tabulated in Table 1. The computed amount of total powder (i.e., OPC + FA) is 658 kg. For these quantities, even though flow properties conforming to the EFNARC guidelines are achieved, the high quantity of cement computed using the Nan Su method is a matter of concern. From a durability perspective, the maximum cement content is limited to 450 kg per cu.m of concrete as per clause 8.2.4.2 of IS 456-2000. After trial mixes, revised quantities in kg per cu.m for the high strength grade (M80) SCC mix are arrived at by (i) limiting the cement to the maximum permissible amount, and (ii) increasing the quantity of pozzolan (fly ash) to the maximum amount possible and adjusting the super plasticizer without compromising the EFNARC flow properties and the desired strength. The final revised quantities for the high strength M80 grade SCC mix are tabulated in Table 2.
Henceforth, the total amount of powder (cement + pozzolanic mixture) adopted for high strength M80 SCC is 700 kg/m3 and the water/powder ratio is 0.25 for all blended high strength M80 SCC mixes. For higher grades, the Nan Su mix design method computations yield very low powder contents; in fact, from the observations it may be stated that the Nan Su method is very difficult to apply for higher grades of concrete to arrive at appropriate material quantities.
Based on the above calculated base quantities for high strength grade (M80), twenty nine (29) blended SCC mixes were designed in three groups: binary, ternary and quaternary. Table 3 shows the various blended high strength grade (M80) SCC mixtures with mix numbers and mix designations. In a mix designation, the number indicates the percentage by weight of total powder content. One reference SCC mix was prepared with only OPC (Mix C1), while in the remaining mixtures (Mix B1 to B8, Mix T1 to T8 and Mix Q1 to Q12) OPC was partially replaced with fly ash (FA), microsilica (MS), metakaolin (MK) and their combinations. Mixes B1 to B8 are binary blended SCC mixtures made with either fly ash (FA), microsilica (MS) or metakaolin (MK); Mixes T1 to T8 are ternary blended fly ash based SCC mixtures made with microsilica (MS) or metakaolin (MK); and Mixes Q1 to Q12 are quaternary blended fly ash based SCC mixtures made with microsilica (MS) and metakaolin (MK) together.
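The designation arithmetic just described can be checked mechanically. The following Python sketch is an illustration only (the function and variable names are mine, not the paper's); it recomputes a quaternary designation from the base replacement percentages and the additional fly ash dosage, reproducing, for example, C50+FA28+MS11+MK11 for Mix Q11.

    def mix_designation(base_powder_kg, opc_pct, ms_pct, mk_pct, extra_fa_pct):
        """Recompute a blended-SCC mix designation.

        Base replacement percentages are by weight of the base powder
        content (700 kg/m^3 here); additional fly ash is also dosed as a
        percentage of the base powder, which grows the total powder content.
        """
        opc = base_powder_kg * opc_pct / 100.0
        ms = base_powder_kg * ms_pct / 100.0
        mk = base_powder_kg * mk_pct / 100.0
        fa = base_powder_kg - opc - ms - mk          # base fly ash fills the rest
        fa += base_powder_kg * extra_fa_pct / 100.0  # additional fly ash
        total = opc + fa + ms + mk                   # new total powder, kg/m^3
        pct = lambda x: round(100.0 * x / total)
        name = f"C{pct(opc)}+FA{pct(fa)}+MS{pct(ms)}+MK{pct(mk)}"
        return name, total, {"OPC": opc, "FA": fa, "MS": ms, "MK": mk}

    # Mix Q11: base 65% OPC / 7% FA / 14% MS / 14% MK, plus 30% additional FA
    # by weight of the 700 kg/m^3 base powder.
    name, total, qty = mix_designation(700, opc_pct=65, ms_pct=14, mk_pct=14,
                                       extra_fa_pct=30)
    print(name, total, qty)   # C50+FA28+MS11+MK11 with 910 kg/m^3 of powder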
In the binary blended high strength grade (M80) SCC mixtures, the percentage replacement of fly ash by weight of total powder content is 35%, i.e. 250 kg/m3, which is based on a preliminary calculation from the mix design method. For binary blended SCC mixtures made with percentage replacement of MS or MK, MS and MK are limited to 5-15% and 5-20% respectively. In the ternary blended MS+FA based high strength grade (M80) SCC mixtures (Mix T1 to T4), the percentage replacement of MS is limited to 5-20% by weight of total powder content. Similarly, in the ternary blended MK+FA based high strength grade (M80) SCC mixtures (Mix T5 to T8), the percentage replacement of MK is limited to 5-20% by weight of total powder content. In both the above ternary blended MS+FA based and MK+FA based SCC mixtures (Mix T1 to T8), the cement content is kept constant (65% by weight of total powder content). The high strength grade (M80) SCC mix C1, developed with 100% OPC, does not yield the desired strength even though the required flow properties are achieved. In the binary blended high strength grade (M80) FA based SCC mix (Mix B1) and MK based SCC mixes (Mix B5 to B8), the required flow properties are achieved but the desired strength is not realized, while in the binary blended MS based SCC mixes both the required flow properties and the desired strength are attained, provided the MS percentage replacement is limited to 5-10% by weight of powder. The optimal mix chosen for the binary blended MS based SCC is the 5% MS replacement (Mix B2). Henceforth, for high strength grade (M80) mixes, Mix B2 is taken as the reference mix. In the ternary blended MK+FA based high strength grade (M80) SCC mixtures (Mix T5 to T8), the required flow properties are satisfied but the desired strengths are not obtained for any of the mixes. For the MS+FA based ternary blended SCC mixes (T1 to T4), however, up to 15% MS by weight of powder, both the required flow properties and the desired strength are attained satisfactorily. So the C65+FA20+MS15 (Mix T3) SCC mix is considered optimal among the ternary blended high strength grade (M80) SCC mixes. In the quaternary blended high strength grade (M80) SCC mixtures (Mix Q1 to Q12), made with microsilica (MS) and metakaolin (MK) together and keeping the cement content constant (65% by weight of total powder content), the MS and MK contents are limited to 7-14%. For quaternary blended SCC mix Q1, 7% MS and 7% MK replacements are initially assumed, keeping the cement content constant (i.e. 65% by weight of total powder content) with the rest of the powder being fly ash; the required flow properties are satisfied but the desired strengths are not obtained. So MS and MK are gradually increased to 14% each, yet there is no substantial increase in strength even though the flow properties are satisfied. The author then proposed to additionally increase the fly ash content incrementally by 10% by weight of the powder content (700 kg/m3), thereby incrementally increasing the powder quantity by 70 kg. The optimum combination of cement and pozzolanic mixture is obtained for the C50+FA28+MS11+MK11 SCC mix (Mix Q11), where the final total powder content is 910 kg/m3, of which the cement content is 455 kg/m3 and the pozzolanic mixture is 455 kg/m3. For this optimum mix (Mix Q11), MS and MK are optimally proportioned at 14% each and the additional percentage of FA is 30% by weight of the powder content (700 kg/m3), for which the required flow and strength properties are fulfilled. From Table 3, three optimum SCC mixes are nominated, one each from the binary, ternary and quaternary SCC blends. From the experimental investigations, the mixes B2, T3 and Q11 are chosen as the optimum binary, ternary and quaternary blended high strength grade (M80) SCC mixes, where both the flow and the desired strength properties are met along with optimal usage of pozzolanic quantities, as shown in Tables 4 and 5.
Thus, by incorporating MK into the MS+FA based ternary blended SCC mixes, the amount of fly ash has almost doubled. From this observation, it can be understood that MS in blended SCC mixtures imparts high strength, while MK inclusion enables the use of a high quantity of fly ash in SCC mixes for similar strengths and flow properties. The quaternary blended fly ash based SCC mix made with MS and MK together is found to be superior to the ternary blended fly ash based SCC mix made with MS or MK, for the reason that, for similar strength, less cement is used and more fly ash is consumed to develop blended high strength grade (M80) SCC.
Based on the compressive strength attained at a specified age of curing, the efficacy of the pozzolans is understood. In this study, the pozzolans used for the blended SCC mixes are FA, MS and MK. MK blended fresh concretes set relatively quickly due to the high reactivity of MK, which prevents bleeding and settling of aggregates. MK, when compared to MS, has similar particle density and surface area but different morphology and surface chemistry. MK concrete normally requires a smaller SP dose than does the equivalent SF concrete. The workability of FA based SCC, without super plasticizer, increases significantly with increase in FA content due to the neutralization of positive charges on the cement particles and their resultant dispersal. Loss of workability due to the presence of MK or MS can be compensated for by the incorporation of FA. The degree of restoration of workability provided by FA is influenced significantly by the cement replacement level, the MK/FA ratio and the W/P ratio.
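Following the definition in Section V, the efficiency factor k can be extracted from the measured strengths. The sketch below is a hedged illustration, not the authors' computation: the Bolomey coefficient A is calibrated here from the control mix C1 rather than taken from the paper, and the water contents are assumed to follow the stated water/powder ratio of 0.25.

    def bolomey_A(S_control, C_control, W_control):
        """Calibrate the Bolomey coefficient A from a 100% OPC control mix,
        using S = A(C/W - 0.5) with k-terms absent."""
        return S_control / (C_control / W_control - 0.5)

    def efficiency_factor(S, C, P, W, A):
        """Solve S = A[(C + kP)/W - 0.5] for the cementing efficiency k."""
        return (W * (S / A + 0.5) - C) / P

    # Control mix C1: 700 kg/m^3 OPC, assumed W = 0.25 * 700, S = 72.35 MPa.
    A = bolomey_A(72.35, 700.0, 0.25 * 700.0)

    # Mix Q11: C = 455 kg/m^3, pozzolanic mixture P = 455 kg/m^3,
    # total powder 910 kg/m^3 (so assumed W = 0.25 * 910), S = 90.71 MPa.
    k = efficiency_factor(90.71, 455.0, 455.0, 0.25 * 910.0, A)
    print(f"A = {A:.2f}, k = {k:.2f}")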
Table 1: Quantities in kg per cu.m for high strength (M80) grade SCC obtained using the Nan Su method of mix design

Cement   Total pozzolana (fly ash)   Fine aggregate   Coarse aggregate   S.P.      Water
644      14                          810              788                11.84 L   150.56 L
Table 2: Final revised quantities in kg per cu.m for high strength M80 grade SCC mix after trial mixes

Cement   Total pozzolana (fly ash)   Total powder content   Fine aggregate   Coarse aggregate   S.P.      Water (w/p = 0.25)
450      250                         700                    714              658                12.21 L   167 L
Table 3: Trial mixes of various high strength grade (M80) blended SCC mixes to optimize quantities of pozzolans

Each row reads: Mix No. | Mix designation (numbers indicate percentage by weight of total powder P) | Replacement % (bwp*): OPC, FA, MS, MK | Additional % of FA (bwp*) | Quantities in kg per cu.m: OPC, FA, MS, MK | Total powder content P | Slump flow: slump diameter (mm), T-50 (sec) | V-funnel: T-0 min (sec), T-5 min (sec) | L-box blocking ratio | Achieved strength (MPa)
*bwp = by weight of powder
C1 C100 100 - - - - 700 0 - - 700 725 3.30 6.66 8.52 0.98 72.35
B1 C65+FA35 65 35 - - - 450 250 - - 700 728 3.24 6.26 8.56 0.94 58.94
B2 C95+MS5 95 - 5 - - 665 - 35 - 700 718 4.14 7.43 8.07 0.89 88.56
B3 C90+MS10 90 - 10 - - 630 - 70 - 700 700 4.29 8.12 10.63 0.91 106.04
B4 C85+MS15 85 - 15 - - 595 - 105 - 700 684 4.17 8.26 11.87 0.92 78.32
B5 C95+MK5 95 - - 5 - 665 - - 35 700 723 3.91 7.01 7.61 0.84 72.15
B6 C90+MK10 90 - - 10 - 630 - - 70 700 718 4.05 7.66 10.02 0.86 75.78
B7 C85+MK15 85 - - 15 - 595 - - 105 700 697 3.94 7.79 11.19 0.87 78.82
B8 C80+MK20 80 - - 20 - 560 - - 140 700 694 4.25 7.76 13.62 0.89 69.35
T1 C65+FA30+MS5 65 30 5 - - 455 210 35 - 700 702 3.58 6.64 8.08 0.89 81.23
T2 C65+FA25+MS10 65 25 10 - - 455 175 70 - 700 692 3.90 6.98 9.49 0.91 84.20
T3 C65+FA20+MS15 65 20 15 - - 455 140 105 - 700 687 3.61 7.12 9.58 0.93 90.54
T4 C65+FA15+MS20 65 15 20 - - 455 105 140 - 700 682 3.95 7.09 11.01 0.95 78.91
T5 C65+FA30+MK5 65 30 - 5 - 455 210 - 35 700 730 3.72 6.91 8.41 0.92 76.23
T6 C65+FA25+MK10 65 25 - 10 - 455 175 - 70 700 720 4.05 7.26 9.87 0.95 77.34
T7 C65+FA20+MK15 65 20 - 15 - 455 140 - 105 700 714 3.75 7.41 9.96 0.97 78.12
T8 C65+FA15+MK20 65 15 - 20 - 455 105 - 140 700 709 4.10 7.38 11.45 0.99 67.21
Q1 C65+FA21+MS7+MK7 65 21 7 7 - 455 147 49 49 700 677 4.42 7.68 12.67 0.99 74.88
Q2 C60+FA28+MS6+MK6 65 21 7 7 10 455 217 49 49 770 668 4.76 8.01 14.22 0.99 76.34
Q3 C54+FA34+MS6+MK6 65 21 7 7 20 455 287 49 49 840 668 4.76 8.01 14.22 0.99 72.17
Q4 C65+FA14+MS14+MK7 65 14 14 7 - 455 98 98 49 700 720 4.05 7.26 9.87 0.95 80.16
Q5 C59+FA22+MS13+MK6 65 14 14 7 10 455 168 98 49 770 714 3.75 7.41 9.96 0.97 81.23
Q6 C54+FA28+MS12+MK6 65 14 14 7 20 455 238 98 49 840 709 4.10 7.38 11.45 0.99 83.65
Q7 C50+FA34+MS11+MK5 65 14 14 7 30 455 308 98 49 910 677 4.42 7.68 12.67 0.99 71.37
Q8 C65+FA7+MS14+MK14 65 7 14 14 - 455 49 98 98 700 668 4.76 8.01 14.22 0.99 80.94
Q9 C58+FA16+MS13+MK13 65 7 14 14 10 455 119 98 98 770 677 4.42 7.68 12.67 0.99 83.25
Q10 C53+FA23+MS12+MK12 65 7 14 14 20 455 189 98 98 840 668 4.76 8.01 14.22 0.99 84.72
Q11 C50+FA28+MS11+MK11 65 7 14 14 30 455 259 98 98 910 730 3.72 6.91 8.41 0.92 90.71
Q12 C46+FA34+MS10+MK10 65 7 14 14 40 455 329 98 98 980 720 4.05 7.26 9.87 0.95 79.91
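Acceptance of each row of Table 3 against the EFNARC flow criteria referred to throughout can be scripted. The sketch below uses commonly quoted EFNARC (2002) ranges (slump flow 650-800 mm, V-funnel flow time 6-12 s, L-box blocking ratio at least 0.8); these limits are assumptions for illustration here, not values reproduced from the paper.

    # Commonly quoted EFNARC (2002) acceptance ranges, assumed here for
    # illustration; consult the guideline itself for authoritative limits.
    EFNARC = {
        "slump_flow_mm": (650.0, 800.0),
        "v_funnel_s": (6.0, 12.0),
        "l_box_ratio_min": 0.80,
    }

    def efnarc_ok(slump_flow_mm, v_funnel_s, l_box_ratio):
        """Check one Table 3 row against the assumed EFNARC ranges."""
        lo, hi = EFNARC["slump_flow_mm"]
        vlo, vhi = EFNARC["v_funnel_s"]
        return (lo <= slump_flow_mm <= hi
                and vlo <= v_funnel_s <= vhi
                and l_box_ratio >= EFNARC["l_box_ratio_min"])

    # Mix Q11 from Table 3: slump flow 730 mm, V-funnel T-0 = 6.91 s, L-box 0.92.
    print(efnarc_ok(730, 6.91, 0.92))   # True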
Table 4: Flow properties of optimized blended SCC mixes for various grades

Each row reads: Grade of SCC mix | Mix No. | Mix designation (numbers indicate percentage by weight of total powder P) | Replacement % (bwp*): OPC, FA, MS, MK | Additional % of FA (bwp*) | Slump flow: slump diameter (mm), T-50 (sec) | V-funnel: T-0 min (sec), T-5 min (sec) | L-box blocking ratio
B2 C95+MS5 95 - 5 - - 718 4.14 7.43 8.07 0.89
M80 T3 C65+FA20+MS15 65 20 15 - - 687 3.61 7.12 9.58 0.93
Q11 C50+FA28+MS11+MK11 65 7 14 14 30 730 3.72 6.91 8.41 0.92
Table 5: Final optimized mix proportions of blended SCC mixes for various grades

Each row reads: Grade of SCC mix | Mix No. | Mix designation (numbers indicate percentage by weight of total powder P) | Replacement % (bwp*): OPC, FA, MS, MK | Additional % of FA (bwp*) | Quantities in kg per cu.m: OPC (i), FA (ii), MS (iii), MK (iv) | Total powder content P = (i)+(ii)+(iii)+(iv) | Fine aggregate | Coarse aggregate | Water | S.P. | W/P ratio
B2 C95+MS5 95 - 5 - - 665 - 35 - 700 714 658 167 12.5 0.25
M80 T3 C65+FA20+MS15 65 20 15 - - 455 140 105 - 700 714 658 167 12.5 0.25
Q11 C50+FA28+MS11+MK11 65 7 14 14 30 455 259 98 98 910 714 658 167 12.5 0.25
Table 7 Efficiency factors for various grades of blended SCC mixes at different ages of curing
Wafa F. F. and Ashour S. A. [1993] presented experimental results of 20 high strength concrete beams tested in flexure to examine the minimum flexural reinforcement requirement without fracture mechanics principles. The variables were the flexural reinforcement ratio and the concrete compressive strength. Flexural reinforcement ratios ranging from 0.21 to 0.88% and 28-day concrete compressive strengths ranging from 45 to 90 MPa were considered. The test results were compared with the minimum reinforcement ratios of both the ACI 318-89 and ACI 318-95 codes. Effects of specimen size and fracture parameters were not considered in this paper.

Ruiz et al. [1998] performed experimental studies on lightly reinforced beams in which the importance of the concrete cover and of the bond between steel and concrete for the behavior of RC beams was described. Beams with a large cover showed a secondary peak between cracking of the concrete cover and steel yielding, which provides a hint as to the role of reinforcement cover in crack propagation.

The research results of M. Bruckner and R. Eligehausen [1998] suggest that the minimum reinforcement ratio depends on the size of the member. Experiments on beams with depths varying from 0.125 m to 0.5 m and with 0.15% reinforcement revealed that the load-deformation curves of these beams showed a large plateau after the peak load, indicating very ductile behavior. It was observed that the brittleness increased with increase in beam size. The need for a size-dependent definition of minimum reinforcement was emphasized; the value of the minimum reinforcement is obtained by solving the condition given in that work.

Shamu et al. [2004] performed fracture studies on reinforced concrete beams based on the bi-linear tension softening response of concrete. The proposed model predicts the minimum reinforcement in flexure as well as the crack width in RC beams.

Raghu Prasad et al. [2005] considered strain softening of concrete in tension along with fracture mechanics principles. An improved model based on the fundamental equilibrium equation for progressive failure of a plain concrete beam was presented and extended to lightly reinforced concrete beams.

Fantilli A. P. et al. [2005] experimentally investigated the transition from the un-cracked to the cracked phase in lightly reinforced concrete beams by testing five full-scale beams under three-point bending. All beams had the same dimensions and the same percentage of flexural reinforcement, but the diameters and the number of bars used were varied. The test results demonstrate that, during the growth of a crack, the moment-rotation diagrams, the tensile strains in concrete and the shape of the crack profiles depend on the mechanical response of the bar diameter.

Kumar S. and Barai V. S. [2008] presented a finite element formulation of the cohesive crack model for predicting the non-linear Mode-I fracture behaviour of geometrically similar notched plain concrete beams. The effect of the finite element mesh on the load bearing capacity of the beam was further analyzed. It was shown that, for normal size-range structures, the values of peak load determined using the concept of linear elastic fracture mechanics deviate from those obtained using the cohesive crack model. The influence of some important softening functions of concrete on the global response and the size-effect curve was also presented.
Carpinteri A. et al. [2010] presented a new fracture mechanics based model for the analysis of reinforced concrete beams in bending, describing both the cracking and crushing growth taking place during the loading process by means of the concept of strain localization. The nonlinear behaviour of concrete in compression is modelled by the Overlapping Crack Model. On the basis of the different nonlinear contributions due to concrete and steel, a numerical finite element algorithm is proposed. According to this approach, the flexural behaviour of reinforced concrete structural elements is analyzed by varying the main geometrical and mechanical parameters. It is concluded that the ductility is an increasing function of the compression steel percentage, the concrete compressive strength and the stirrup content, whereas it decreases as the tensile steel percentage and/or the structural dimension increase.

... 27.18% and 25.49% respectively, and the percent increase in minimum flexural reinforcement for a 1000 mm depth beam is 15.8%, 28.91% and 25.51% respectively. For a compressive strength of 20 MPa, the percent decrease in minimum flexural reinforcement when the beam depth is varied from 200 mm to 1000 mm is 55.29%, 21.45% and 25.19% respectively. The variation of minimum reinforcement with beam size for different compressive strengths of concrete is presented in Fig. 1, Fig. 2 and Fig. 3 respectively.

For a 200 mm beam depth with fy = 415 MPa, the percent decrease in minimum percent flexural reinforcement between Bosco et al. and Hawkins et al. is 16.43%, and between Bosco et al. and Ruiz et al. it is 18%. For a 200 mm beam depth with fy = 500 MPa, the percent decrease in minimum percent flexural reinforcement between Bosco et al. and Hawkins et al. is 16.44%, and between Bosco et al. and Ruiz et al. it is 18.4%. The variation of minimum reinforcement with beam size when the yield strength of steel is increased is presented in Fig. 4, Fig. 5 and Fig. 6 respectively. The percent decrease in minimum flexural reinforcement for 20 MPa compressive strength of concrete between Bosco et al. and Hawkins et al. and between Bosco et al. and Ruiz et al. is 16.43% and 18% respectively. The percent increase in minimum flexural reinforcement when the compressive strength of concrete is varied from 20 MPa to 100 MPa in Bosco et al., Hawkins et al., and Ruiz et al. is 55.76%, 59.4% and 61.18% respectively. The variation of minimum reinforcement with compressive strength of concrete is presented in Fig. 7.
... beams, International Journal of Fracture, 2010, pp. 161-173.
[16] AS 3600-2005, Australian Standard for Concrete Structures, Standards Australia, Sydney.
[17] BIS: IS 456-2000, Code of Practice for Design of Plain and Reinforced Concrete Structures, Bureau of Indian Standards, New Delhi.
[18] BS 8500:2003, British Standard Code of Practice for Structural Use of Concrete, Part II, British Standards Institution, London.
[19] L. Elfgren (ed.), Fracture Mechanics of Concrete Structures: From Theory to Applications, Report of RILEM TC 90-FMA, Chapman and Hall, London, 1989.
Abstract: The objective of this paper is to evaluate and compare the structural behavior and response demands obtained by nonlinear static procedures (NSPs), namely the Capacity Spectrum Method recommended in ATC 40 and the Displacement Coefficient Method recommended in FEMA 356. For the investigation of the two methods, two 3-dimensional high rise RC structures with different characteristics are analyzed. To obtain the nonlinear behavior of the buildings under lateral loads, base force-roof displacement graphs (capacity curves) are determined by pushover analysis. Four different seismic hazard levels are then considered and their corresponding structural responses are determined using the two evaluations, CSM and DCM. By comparing the structural response quantities (such as maximum displacements) obtained from the NSPs for the considered high-rise RC buildings, the effects of the different evaluations, DCM and CSM, on the performance evaluation of the structures are comparatively investigated.

Index Terms: Pushover Analysis, Capacity Spectrum Method, Displacement Coefficient Method

I. INTRODUCTION
Over the past two decades, structural collapses and damage due to severe earthquakes have caused great loss to the economy, mostly in large cities. It is therefore important to examine the present country codes and also to develop parallel methods which are more realistic in approach than the usual force based design. For this purpose, displacement, rather than the force of traditional force based methods, is chosen as the design quantity; this is called performance based design (PBD). In many nations, such as Japan and the United States of America, various codes have been developed, like Vision 2000 (SEAOC 1995), FEMA 356 (FEMA 2000), the Bluebook [Structural Engineers Association of California (SEAOC) 1999], FEMA 273 (FEMA 1997), and ATC 40 [Applied Technology Council (ATC) 1996]. Thus the term PBD (Performance Based Design) became popular in the branch of structural and earthquake engineering, and structural engineers have to take a keen interest in the concepts of PBD in order to design structures resistant to earthquake attack. The basic concept of PBD is that the structure should exhibit desirable characteristics even under unfavorable and sudden loadings. Moreover, it is not possible to verify and check the performance of a structure in different states by force based methods; for studying the performance of a structure in different states, PBD is the best approach. Even smaller earthquakes have caused abnormal inelastic behavior in buildings, and after recent earthquakes many buildings have suffered damage which cannot be repaired, or is highly uneconomical to repair.

The concepts in PBD, which involve multiple stages in design, provide an improvement over the presently available codes. To determine the response demands for earthquake assessment of structures within the performance based design concept, analysis procedures like nonlinear static procedures (NSPs) are becoming more popular in structural engineering due to their realistic nature. Some seismic codes, like Eurocode 8, have already included nonlinear static procedures (NSPs). Though nonlinear time history analysis is the most realistic approach for determining the seismic response demands of structures, it needs larger input data (damping ratio, sets of accelerograms, etc.) and provides results which are very difficult to interpret (such as the variation of seismic response demands with time, displacements, absorbed energy, etc.). To overcome such difficulties, NSPs are mostly used in ordinary engineering applications, avoiding the large assumptions required of the designer. As a result, simplified NSPs like those of ATC 40 and FEMA 356 became popular.

In NSPs, capacity curves are obtained by pushover analysis for a specified seismic hazard level, from which the maximum displacement can be determined. From this curve, other results like plastic rotations, story drifts, displacements, etc. are then extracted. In FEMA 356 and ATC 40, a single degree of freedom (SDOF) system approach is used in the determination of displacement demands in NSPs; these are called the displacement coefficient method (DCM) and the capacity spectrum method (CSM), respectively.

The aim of this study is to evaluate and compare structural and nonstructural response demands obtained from the CSM recommended in ATC 40 and the DCM recommended in FEMA 356, which are the methods most commonly used in practice for performance evaluation. In recent years high-rise buildings have become more common, and there are greater chances of collapse due to earthquakes. For these reasons, this investigation of the different NSPs is primarily focused on high-rise RC buildings. In this study, two three-dimensional high-rise RC buildings, including regular and irregular configurations, are studied. Then, four different seismic hazard levels, E1, E2, E3 and E4, are evaluated using CSM and DCM. In
order to determine the performance levels of the buildings, the maximum plastic rotation and maximum story drift demands are found for each structure, pushed until the related maximum displacement demand is achieved. In the study, the maximum displacements at the four hazard levels for the two different configurations, regular and irregular, are determined and compared.

A. The Pushover Analysis Method
In general, to determine the performance of the structure, lateral loads increasing from zero up to the required displacement level are applied to the building, and the weak points in the structure are found. The performance of the building is then determined by studying the status of the plastic hinges formed at the given target displacement, or performance point, related to the particular earthquake intensity level. The building is safe and efficient if the demand does not exceed the capacity at all hinges. Though the loads applied, the earthquake intensities and the evaluation procedures are theoretically consistent with real earthquake events, the results may differ from a rigorous dynamic analysis in many ways.

B. Evaluation Procedures
Though the methods for structure evaluation differ from one another, their basic approach is almost the same, and all of them use a bilinear approximation of the pushover curve. These static procedures equate the properties of multi degree of freedom (MDOF) structures to relevant single degree of freedom (SDOF) equivalents, and approximate the expected maximum displacement using the response spectrum of the relevant seismic intensity. The different methods in pushover analysis are: ATC 40 (1996), Capacity Spectrum Method (CSM); FEMA 356 (2000), Displacement Coefficient Method (DCM); FEMA 440 (2005), Equivalent Linearization (Modified CSM); and FEMA 440 [3] (2005), Displacement Modification (an improvement of the DCM).

II. DESIGN SPECIFICATIONS
For the investigation of pushover analysis on high rise buildings, two different types of buildings have been modeled in SAP2000. The first is a regular building and the second is an irregular building, and different seismic hazard levels have been considered.

A. Definitions of Seismic Hazard Levels
For determining the structural responses of the RC buildings, four different seismic hazard levels are investigated for the two different NSPs. These seismic hazard levels are:
1. Low-intensity earthquake (E1);
2. Moderate earthquake (E2);
3. Design earthquake (E3);
4. Maximum earthquake (E4).
As defined in ATC 40 (ATC 1996) and FEMA 356 (FEMA 2000), seismic hazard levels indicate approximately the maximum intensity of earthquake which is expected at the relevant site. In many codes (ATC 40, FEMA 356, TEC, etc.), the maximum, design, and moderate earthquakes for a building with a building importance factor (I) of 1 are those with a probability of 2%, 10%, and 50% of occurring within a period of 50 years, respectively. For the low-intensity earthquake, the seismic hazard level classifications given in ATC 40 (ATC 1996) and FEMA 356 (FEMA 2000) are used. The values related to the different earthquake intensity levels are taken from the design spectrum given in the TEC; with reference to it, E1 is 0.3 times E3, E2 is 0.5 times E3, and E4 is 1.5 times E3.

B. Description of the Building
The test building is a ten-storey reinforced concrete building, with each storey having a height of 4.00 m and bays of 8.00 m along both directions. The columns at the base are rectangular with dimensions 700 x 650 mm, and the same dimensions are continued up to the roof. The column longitudinal reinforcement may be taken in the range of 1.0% to 2.5%, while 8 mm diameter bars are used as transverse ties. Beams are designed with dimensions of 750 x 650 mm in all storeys and are lightly reinforced (nearly up to a 0.4% steel ratio). The cross-sectional dimensions of the columns are relatively narrow, so that the early designs would be as low-cost as possible in the usage of concrete, as it would be mixed in situ, conveyed manually and placed, and because of the relatively very low level of seismic action. Hence, in the test structure the columns are slender and not strong enough to carry a large amount of bending caused by the lateral forces generated during a seismic attack, and consequently they are more flexible than the beams. More details about the formwork and reinforcement can be found in the table. The building has been designed according to IS 456:2000, the Indian Standard design code, following allowable design stress procedures and simplified structural analysis models. The values of dead and live loads were specified as in the Indian Standard codes, which are still in effect today. The structural elements possess no special reinforcement bars for confinement in the critical sections, and no capacity design provisions were used in their design. In order to resist negative moments at beams due to gravity loads, the longitudinal bars in beams are bent upwards at their ends. However, high intensity earthquake vibrations can alter the moments at the ends of the beams (from sagging to hogging moment). As a result, the steel in the bottom section of the beams at the supports may not be adequate for earthquake resistance. Moreover, widely spaced stirrups (300 to 400 mm) do not provide the required confinement; hence the stirrups are unable to withstand the large curvature demand due to earthquake loads. For this project, two models of high-rise buildings of G+10 are considered. The design specifications of the two cases are shown in Table 1, and the 3D view and plan of the first case and second case are shown in Fig. 1, Fig. 2, Fig. 3 and Fig. 4 respectively.
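The hazard-level scaling quoted in subsection II.A (E1 = 0.3 E3, E2 = 0.5 E3, E4 = 1.5 E3) lends itself to a short script. The sketch below is a minimal illustration, not code from this study: only the scale factors come from the text above, while the spectrum shape and its ordinates are hypothetical placeholders.

    import numpy as np

    # Scale factors for the four hazard levels relative to the design
    # earthquake E3, as quoted above from the TEC design spectrum.
    HAZARD_FACTORS = {"E1": 0.3, "E2": 0.5, "E3": 1.0, "E4": 1.5}

    def hazard_spectra(periods, sa_e3):
        """Return spectral acceleration arrays for E1..E4 given the E3 spectrum.

        periods: array of periods T (s); sa_e3: Sa ordinates at hazard level E3.
        """
        sa_e3 = np.asarray(sa_e3, dtype=float)
        return {level: f * sa_e3 for level, f in HAZARD_FACTORS.items()}

    # Hypothetical E3 spectrum for illustration (flat plateau, then 1/T decay).
    T = np.linspace(0.05, 4.0, 80)
    sa_e3 = np.where(T < 0.5, 1.0, 0.5 / T)   # in units of g
    spectra = hazard_spectra(T, sa_e3)
    print({k: round(float(v.max()), 2) for k, v in spectra.items()})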
The design specifications of building 1 and building 2 are shown in Table 1.

Table 1: Design Specifications of Buildings

General configuration for design:
  Support configuration: Fixed
  Occupancy and use: Commercial
  Design standard code: IS 456:2000
  Initial damping: 5%
  Concrete: M30
  Reinforcement bars: HYSD415
Related geometric properties:
  Irregularities: Regular and symmetric building (Building 1); irregular, L-shaped and symmetric building (Building 2)
  Basements: Not considered
  Number of stories: 11
  Inter-story height: 4 m
  Distribution of bays: Uniform
  Typical bay length: 8 m in both directions
  Beam dimensions: 750 x 600 mm
  Column dimensions: 700 x 600 mm
Load configurations:
  Additional dead load: 2 kN/m
  Live load: 3 kN/m
Basic considerations for nonlinear analysis:
  Configuration model: 3D model
  Seismic hazard levels: Low, moderate, design and maximum
  Plastic moment hinges consideration: Both beams and columns
  Plastic moment hinges location: At 5% of span length from each node
  Software: SAP2000

The 3D view and plan of building case 1 and building case 2 are shown in Fig. 1, Fig. 2, Fig. 3 and Fig. 4.
IV. RESULTS
V. CONCLUSIONS
This paper presented a comparison between the two NSP methods using SAP2000. SAP2000 is one of the best software packages for analyzing structures, as it has very high accuracy, and it is a good tool for exploring and comparing different methods of analysis. Pushover analysis is one of the important approaches for analyzing the behavior of a structure under seismic attack. The effect of the earthquake hazard levels can be easily studied in the form of deformations of the structures. Nonlinear pushover analysis serves as the basis for determining the capacity of the RC building in terms of base shear and roof displacement when a displacement based approach is adopted.

The displacement based approach tends to give a realistic estimate of the demand on the building, as it uses roof displacement as the primary input parameter. In the future, these types of approaches will become more popular as high rise buildings increase in number. Capacity curves of the structures have been plotted. The graph is linear up to some level of base shear and then becomes constant; this is due to the formation of plastic hinges in the structure. The Capacity Spectrum Method is one of the good approaches among the nonlinear static procedures, as it gives results for displacements, spectral acceleration (Sa) and time periods (T).

The maximum displacements for frames by the Displacement Coefficient Method recommended by FEMA 356 are higher than those of the Capacity Spectrum Method recommended by ATC 40. For the regular building (Building Case 1), the percentage difference in maximum displacement between DCM and CSM gradually increases from the low hazard level to the maximum hazard level (E1 to E4). It is concluded that, for regular buildings, at low seismic hazard levels both methods give almost the same results, whereas at high seismic hazard levels the two methods differ greatly. For the irregular building (Building Case 2), the percentage difference in maximum displacement between DCM and CSM is constant for all hazard levels (E1 to E4). It is concluded that, even though the two approaches are different, they arrive at the same results in the case of irregular buildings.

Though the values are higher for the DCM method, it is one of the good approaches for the seismic evaluation of RC frames.

REFERENCES
[1] Erdal Irtem and Umut Hasgul (2009): Investigation of Effects of Nonlinear Static Analysis Procedures to Performance Evaluation on Low-Rise RC Buildings.
[2] Cinitha A., Umesha, Nagesh R. Iyer (2012): Nonlinear Static Analysis to Assess Seismic Performance and Vulnerability of Code-Conforming RC Buildings.
[3] Ajay, Shilpa, Babunarayan (2012): Sensitivity of Pushover Analysis to Design Parameters - An Analytical Investigation.
[4] V. Vysakh, Dr. Bindhu K. R., Rahul Leslie (2013): Determination of Performance Point in Capacity Spectrum Method.
[5] N. K. Manjula, Praveen Nagarajan, T. M. Madhavan Pillai (2013): A Comparison of Basic Pushover Methods.
[6] Rajesh P. Dhakal (2010): Structural Design for Earthquake Resistance: Past, Present and Future.
[7] L. E. Yamin, Hurtado, J. R. Rincón, J. F. Pulido, J. C. Reyes and A. H. Barbat (2014): Evaluation of Seismic Code Specifications Using Static Nonlinear Analyses of Archetype Buildings.
[8] Dominik H. Lang, Dr.Ing. Dr.philos (2007): Seismic Response Estimation Using Nonlinear Static Methods.
[9] Jorge Ruiz-García, Erick J. González (2013): Implementation of Displacement Coefficient Method for Seismic Assessment of Buildings Built on Soft Soil Sites.
[10] Ioannis Giannopoulos (2009): Seismic Assessment of a RC Building According to FEMA 356 and Eurocode 8.
[11] Bruce F. Maison and Carl F. Neuss (2015): Dynamic Analysis of a Forty-Four Story Building.
[12] Cinitha A., P. K. Umesha, Nagesh R. Iyer (2010): Seismic Performance and Vulnerability Analysis of Code-Conforming RC Buildings.
[13] Sinan Akkar and Asli Metin (2007): Assessment of Improved Nonlinear Static Procedures in FEMA-440.
[14] Mohammad Azaz (2015): Pushover Analysis on G+10 Reinforced Concrete Structure for Zone II and Zone III as per IS 1893 (2002).
[15] Dr. Mayank Desai and Darshit Jasani (2015): Application of Nonlinear Static Pushover Procedure to the Displacement Based Approach of Seismic Analysis of G+10 Storey Building Structure for Indian Terrain.
[16] M. Mouzzoun, Moustachi, A. Taleb, S. Jalal (2013): Seismic Performance Assessment of Reinforced Concrete Buildings Using Pushover Analysis.
Abstract - The present paper analyses the size dependency of the fracture energy and the fracture toughness of concrete determined as per the RILEM work-of-fracture method (WFM). Normal and high strength concrete notched beams have been modeled using the finite element software ANSYS 12.1 to study the variation of the fracture parameters. The fracture parameters (GF, KIC and SIF) are determined using the work-of-fracture method by testing geometrically similar notched plain normal and high strength concrete (20, 30, 40, 50, 60, 70 MPa) specimens of different sizes, in a size ratio of 1:4, with different notch depths (a0/d = 0.15, 0.30 and 0.45), under three point bending, through load-deflection curves. The variation of the fracture energy, fracture toughness and stress intensity factor as a function of specimen size and notch depth was determined using the RILEM work-of-fracture method. The fracture energy, fracture toughness and stress intensity factor calculated using the work-of-fracture method increase with increase in specimen size and decrease with increasing notch depth ratio.

Index Terms - Crack length, Fracture energy, Fracture toughness, Stress intensity factor, Brittleness, Peak load, Finite element analysis, ANSYS.

I. INTRODUCTION
Concrete, the most consumed material in the construction field, is endowed with the inherent qualities of easy mouldability to the desired architectural shape and finish, high resistance to fire, easily and economically available raw ingredients, and high compressive strength. Cracking in any material occurs when the principal tensile stress reaches the tensile strength of the material at that location. The study of the conditions around the crack tip is called fracture mechanics. None of the conventional strength theories, such as elastic or plastic theory, describes how cracks propagate in a structure. The safety and durability of concrete structures is significantly influenced by the cracking behavior of concrete. Therefore, concrete structures are mainly designed to satisfy two criteria, namely safety and serviceability. An adequate margin of safety of concrete structures against failure is assured by the accurate prediction of the ultimate load and the complete load-deformation behavior or moment-curvature response. Based on the tensile stress-deformation response, most engineering materials can be categorized into three main classes:

Brittle: the stress suddenly drops to zero when a brittle material fractures.
Ductile: the stress remains constant when a ductile material yields.
Quasi-brittle: characterized by a gradually decreasing stress after the peak stress.

A. Modes of Fracture
According to the mode of failure, fracture behaviour is classified into three categories. The three basic modes of failure are presented in Figure 1. Mode I failure is known as the opening mode failure. In this mode, the displacement of the crack surfaces is perpendicular to the plane of the crack. Mode II failure is known as the sliding mode or planar shear mode failure. In this mode, the displacement of the crack surfaces is in the plane of the crack and perpendicular to the leading edge of the crack. The third basic mode is known as the tearing mode or anti-plane shear mode failure. In this mode, the displacement is in the plane of the crack and parallel to the leading edge of the crack. In practice, it is difficult to develop pure mode II or mode III fractures in concrete structures. Thus, besides pure mode I, the mode of failure is often a combination of the basic modes, which is called mixed mode.

Figure 1. Modes of Fracture

B. Stress Intensity Factor KI
The stress intensity factor is used in fracture mechanics to predict the stress state ("stress intensity") close to the tip of a notch caused by a remote load or by residual stresses. It is a theoretical construct normally applied to a homogeneous, linear elastic material; it is helpful for giving a failure criterion for brittle materials and is a critical technique in the discipline of damage tolerance. The idea can likewise be applied to materials that exhibit small-scale yielding at a notch tip.

C. Fracture Energy Gf
The strain energy release rate (or simply energy release rate) is the energy dissipated during fracture per unit of newly created crack surface area. The energy release rate failure criterion states that a notch will grow when the available energy release rate G is greater
than or equal to a critical value Gc. The quantity Gc is the fracture energy.

D. Non-Linear Fracture Parameters
Fracture energy using the work-of-fracture method: based on a measured load-deflection curve of a fracture specimen, typically a three point bend beam (including the effect of its own weight), the work of the load P over the load-point displacement is calculated in the RILEM method as

Wf = integral of P d(delta)

Figure 2 shows a typical three point bend test set-up for the determination of fracture parameters using the RILEM work-of-fracture method.

Figure 2. Three point bend test set-up

The fracture energy according to the RILEM definition is

GF(a0, d) = Wf / (b (d - a0))

where a0/d is the notch-depth ratio, b is the beam width, d the beam depth and a0 the notch depth.

E. Fracture Toughness KIC
Fracture toughness is the property which describes the ability of a material containing a crack to resist fracture. If a material has high fracture toughness, it will presumably undergo ductile fracture. For two dimensional problems (plane stress, plane strain, anti-plane shear) involving cracks that move in a straight path, the Mode I fracture toughness is related to the energy release rate Gf by

KIC = sqrt(Gf E)

for plane stress, with E the modulus of elasticity.
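The work-of-fracture quantities defined in subsections D and E above can be evaluated numerically from a measured load-deflection curve. The following Python sketch is a minimal illustration under stated assumptions, not code or data from this paper: the load-deflection curve is synthetic, the trapezoidal rule stands in for the integral Wf = integral of P d(delta), and the plane-stress relation KIC = sqrt(GF E) is used with E = 5000 sqrt(fck) as given in IS 456.

    import numpy as np

    def fracture_energy(load_N, defl_mm, b_mm, d_mm, a0_mm):
        """RILEM work-of-fracture: GF = Wf / (b * (d - a0)).

        load_N/defl_mm: measured load-deflection curve (self-weight effects
        assumed already included, as the RILEM method requires).
        Returns GF in N/mm (multiply by 1000 for N/m).
        """
        # Trapezoidal rule for Wf = integral of P d(delta), in N*mm.
        wf = float(np.sum(0.5 * (load_N[1:] + load_N[:-1]) * np.diff(defl_mm)))
        ligament = b_mm * (d_mm - a0_mm)   # uncracked ligament area, mm^2
        return wf / ligament

    def fracture_toughness(gf_N_per_mm, E_MPa):
        """Plane-stress LEFM relation KIC = sqrt(GF * E); MPa*sqrt(mm)."""
        return (gf_N_per_mm * E_MPa) ** 0.5

    # Hypothetical softening curve for illustration only (not test data).
    defl = np.linspace(0.0, 1.0, 200)                         # mm
    load = 7262.28 * (defl / 0.05) * np.exp(1 - defl / 0.05)  # peaks ~7262 N

    gf = fracture_energy(load, defl, b_mm=100, d_mm=150, a0_mm=22.5)
    kic = fracture_toughness(gf, E_MPa=5000 * 20 ** 0.5)  # IS 456: 5000*sqrt(fck)
    print(f"GF = {gf:.3f} N/mm, KIC = {kic:.1f} MPa*sqrt(mm)")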
II. SAMPLE LOAD CALCULATION
The maximum load and the fracture load are observed to be different, and a unique value of the fracture load is obtained. The peak load carried by M20 grade concrete, for a beam size of 100 mm x 150 mm and a/D = 0.15, is calculated as follows.

Bending equation: M/I = sigma/y, with the flexural stress taken as sigma = 6.67 N/mm^2.
For a simply supported beam under a central point load, the maximum bending moment is M = wL/4 = 262.5 w N-mm (span L = 1050 mm).
Width of beam b = 100 mm.
Effective depth d = 150 - 22.5 = 127.5 mm (notch depth a0 = 0.15 x 150 = 22.5 mm).
Moment of inertia I = bd^3/12 = 17.272 x 10^6 mm^4.
Depth of neutral axis y = d/2 = 63.75 mm.
Setting sigma = (M/I) y = 6.67 MPa gives the live load w = 6884.28 N.
Self weight of beam: 0.1 x 0.15 x 25 = 0.36 kN/m = 360 N/m, giving a dead load wD = 378 N.
Total load = w + wD = 7262.28 N.

The peak load values of the various grades of concrete (M20 to M70) with different a/D ratios and different beam sizes are calculated in the same way and tabulated in Table I.
TABLE I. PEAK LOAD VALUES FOR BEAMS OF DIFFERENT SIZES, GRADES AND NOTCH-DEPTH RATIOS

Grade | Beam size (mm x mm) | Peak load (N) at a/D = 0.15 | 0.30 | 0.45
M20 | 100 x 75  | 3536.643 | 2428.88  | 454.79
M20 | 100 x 150 | 7262.28  | 5046.57  | 3260.209
M20 | 100 x 300 | 15280.77 | 10850.1  | 5764.11
M30 | 100 x 75  | 5255.13  | 3594.327 | 634.6788
M30 | 100 x 150 | 10699.27 | 7377.365 | 4699.154
M30 | 100 x 300 | 22154.84 | 15512    | 10154.77
M40 | 100 x 75  | 6975.346 | 4760.936 | 814.7379
M40 | 100 x 150 | 14139.69 | 9710.487 | 6139.539
M40 | 100 x 300 | 29035.78 | 20178.67 | 13034.46
M50 | 100 x 75  | 8695.558 | 5927.546 | 994.79
M50 | 100 x 150 | 17580.52 | 12043.61 | 7579.92
M50 | 100 x 300 | 35916.73 | 24845.33 | 15915.08
M60 | 100 x 75  | 10415.77 | 7094.155 | 1174.856
M60 | 100 x 150 | 21020.54 | 14376.73 | 9020.309
M60 | 100 x 300 | 42797.68 | 29512    | 1879.69
M70 | 100 x 75  | 12135.98 | 8260.765 | 1354.916
M70 | 100 x 150 | 24460.96 | 16426.35 | 10460.69
M70 | 100 x 300 | 49678.62 | 34178.67 | 21676.31
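The hand calculation of Section II generalizes directly to the other grades and sizes. The sketch below is my own reconstruction, not the authors' code: it assumes the flexural stress scales as fck/3 (which reproduces the 6.67 N/mm^2 used above for M20) and a span of 1050 mm (implied by M = wL/4 = 262.5 w); with these assumptions it reproduces the first M20 entry of Table I to within rounding.

    def peak_load_N(fck_MPa, b_mm, D_mm, aD, span_mm=1050.0,
                    self_weight_N_per_m=360.0):
        """Reproduce the Section II hand calculation for a notched TPB beam.

        Assumptions (not stated explicitly in the paper): flexural stress
        sigma = fck/3, and span 1050 mm implied by M = wL/4 = 262.5 w.
        """
        sigma = fck_MPa / 3.0                 # flexural stress, N/mm^2
        d = D_mm * (1.0 - aD)                 # effective (ligament) depth, mm
        I = b_mm * d**3 / 12.0                # second moment of area, mm^4
        y = d / 2.0                           # neutral-axis depth, mm
        M = sigma * I / y                     # bending moment at failure, N*mm
        w_live = 4.0 * M / span_mm            # central point load, N
        w_dead = self_weight_N_per_m * span_mm / 1000.0  # 360 N/m * 1.05 m
        return w_live + w_dead

    # M20, 100 x 150 mm, a/D = 0.15 -> about 7.26 kN, matching Table I.
    print(f"{peak_load_N(20, 100, 150, 0.15):.1f} N")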
TABLE II. DEFLECTION AND STRESS INTENSITY FACTOR FOR BEAMS OF DIFFERENT SIZES, GRADES AND NOTCH-DEPTH RATIOS
TABLE III. FRACTURE ENERGY FOR BEAMS OF DIFFERENT SIZES, GRADES AND NOTCH-DEPTH RATIOS

Figure 12. Peak load vs notch-depth ratio (M40 concrete)
Figure 23. Fracture energy vs notch depth (M30 concrete)
Figure 27. Fracture energy vs notch depth (M70 concrete)
Figure 34. Peak load vs SINT (M20 concrete, a/D: 0.15, 0.3 and 0.45)
Figure 35. Peak load vs SINT (M30 concrete, a/D: 0.15, 0.3 and 0.45)
Figure 36. Peak load vs SINT (M40 concrete, a/D: 0.15, 0.3 and 0.45)
TABLE IV

Grade | Beam size (mm x mm) | Value at a/D = 0.15 | 0.30 | 0.45
M20 | 100 x 75  | 1630.551 | 1172.099 | 224.4949
M20 | 100 x 150 | 3041.527 | 2004.835 | 1628.77
M20 | 100 x 300 | 5964.461 | 4421.192 | 2908.368
M30 | 100 x 75  | 2188.163 | 1715.451 | 307.7274
M30 | 100 x 150 | 4443.351 | 3238.135 | 2152.639
M30 | 100 x 300 | 8597.053 | 5918.291 | 4051.29
M40 | 100 x 75  | 2993.999 | 2254.136 | 403.6956
M40 | 100 x 150 | 6038.525 | 4310.684 | 2770.805
M40 | 100 x 300 | 10684.07 | 7665.504 | 6027.837

Figure 37. Peak load vs SINT (M50 concrete, a/D: 0.15, 0.3 and 0.45)
Figure 38. Peak load vs SINT (M60 concrete, a/D: 0.15, 0.3 and 0.45)
VI. CONCLUSIONS
The fracture behavior of the notched plain concrete beams of different sizes and notch-depth ratios, for different grades of concrete, has been analyzed based on the modelling of the beams in ANSYS. The variation of the fracture parameters has been studied and is presented below.
1. For a particular size of beam and a particular notch-depth ratio, the fracture energy and fracture toughness are observed to increase with the grade of the concrete. This is due to the increase in the depth of the uncracked ligament, which enhances the load-resisting capacity; hence the fracture energy of the larger-depth beams is higher. The same trend was observed with the increase in the notch-depth ratios.
2. For a particular grade of concrete and a particular size of beam, the fracture energy and fracture toughness are observed to decrease with increasing notch-depth ratio. This is due to the decrease in the depth of the uncracked ligament. The same trend was observed with the increase in the size of the beams.
3. When the grade of concrete and the size of the beam are held constant, the peak load and the deflection were found to decrease with increasing notch-depth ratio. This is due to the increase in the brittleness of the member. In other words, an increase in the crack length of a member makes it behave in a brittle manner.

Figure 42. Fracture Toughness vs Notch-depth ratio (M40 Concrete)

4. For a particular size of beam and a particular notch-depth ratio, the stress intensity factor is observed to increase with the grade of the concrete. This is due to the increased load-resisting capacity of the beam with the increase in the grade of concrete.
5. For a particular size of beam and a particular notch-depth ratio, the peak deflection value is observed to increase with the grade of the concrete. This is due to the increased load-resisting capacity of the beam with the increase in the grade of concrete.

Figure 43. Fracture Toughness vs Notch-depth ratio (M50 Concrete)

6. An increase in the notch-depth ratio (a/D) increases the brittleness of the member. In other words, an increase in the crack length in a structure pushes the structure to behave in a brittle manner.
7. An increase in the notch-depth ratio decreases the fracture energy. In other words, an increase in the crack length of a structure means less fracture energy is required to extend the crack. A decrease in the fracture energy for crack extension indicates the brittleness of the structure.
Fig. 3. Module integrated inverter

Multi String Inverter: This inverter was developed in 2005. It combines the advantages of string inverters and module inverters. Each string, made of several solar panels, is coupled to its own DC-DC converter with individual MPPT and feeds energy to a common DC-AC inverter, as shown in Fig. 4 [2]. The advantages of the topology are low cost, flexibility and high energy yield [14].

Fig. 4. Multi String Inverter

II. GRID CONNECTED INVERTERS
The DC voltage generated from the PV panels has to be converted into AC voltage of the required magnitude and frequency. A typical PV system consists of PV panels, a DC-DC converter, a DC-AC converter, and a line frequency transformer on the AC side or a high frequency transformer on the DC side. A PV topology using a transformer on the DC side is shown in Fig. 5, and a PV topology using a transformer on the AC side is shown in Fig. 6.
In high power applications, the PV systems include a transformer to provide galvanic isolation between the PV panels and the grid. This helps in reducing the common mode leakage currents and also provides safety [14].

Fig. 6. PV inverter topology using a low frequency transformer on the AC output side of the inverter

In low power domestic applications, the transformer occupies a large space and also reduces the efficiency of the PV system. By removing the transformer, the size of the PV system can be reduced and the efficiency can be improved. A PV transformerless topology is shown in Fig. 7.

Fig. 7. PV inverter topology without using a transformer

Since there is no transformer, issues like loss of galvanic isolation and a variable common mode voltage arise. A variable common mode voltage results in a leakage current that flows through the parasitic capacitances from the PV panels to the ground. This results in an increase in system losses and a reduction in the quality of the grid current [2].

III. LOW POWER GRID CONNECTED INVERTER TOPOLOGIES
In low power applications, the PV system consists of PV panels, a DC-DC converter and a DC-AC inverter. In single phase applications, a full-bridge inverter (H4 topology), shown in Fig. 8, can be used. The power semiconductor devices, IGBTs (Insulated Gate Bipolar Transistors), can be controlled using the following two modulation techniques: (i) bipolar PWM, (ii) unipolar PWM [14].
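To illustrate the two modulation techniques, the following Python sketch generates bipolar and unipolar PWM patterns for an H4 bridge and compares the RMS switching residue; all waveform parameters here are assumed for illustration and are not the paper's values.

import numpy as np

fs, f0, fc, m = 1e6, 50.0, 5e3, 0.8    # sample rate, fundamental, carrier, modulation index
t = np.arange(0, 0.02, 1 / fs)
ref = m * np.sin(2 * np.pi * f0 * t)   # sinusoidal reference
carrier = 2 * (2 * np.abs((fc * t) % 1 - 0.5)) - 1   # triangle carrier in [-1, 1]

# bipolar PWM: a single comparison switches both legs (+Vdc or -Vdc)
v_bip = np.where(ref > carrier, 1.0, -1.0)
# unipolar PWM: each leg has its own comparison, giving a 3-level output
leg_a = np.where(ref > carrier, 1.0, 0.0)
leg_b = np.where(-ref > carrier, 1.0, 0.0)
v_uni = leg_a - leg_b

def ripple(v):
    # RMS of the switching residue left after removing the reference
    return np.sqrt(np.mean((v - ref) ** 2))

print("bipolar ripple:", ripple(v_bip), "unipolar ripple:", ripple(v_uni))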
D) H6 Topology:

Fig. 14. Output voltage of the H4 full bridge inverter topology using bipolar PWM (fundamental (50 Hz) = 401, THD = 81.09%)

Fig. 18. FFT analysis of the output voltage using unipolar PWM

It is evident from Fig. 16 and Fig. 17 that the ripple using bipolar PWM is more when compared to the ripple using unipolar PWM. Thus, unipolar PWM is considered the better modulation choice.
Abstract: A Distributed Generation (DG) system with a Microgrid (MG), powered by renewable energy sources like solar, wind and fuel cells along with a battery, is under study. The profile of the power consumed from the grid and the power delivered back to the grid is found. Load forecasting data for industry and domestic loads is considered in the Microgrid system simulation, and it is observed that the load demand is met efficiently with the available energy resources, with some part of the energy fed back to the grid as well. Also, the voltage at the PCC is observed with different loads, such as domestic loads and industry loads.

Index Terms: Distributed Generation, Microgrid, Fuel Cell, Wind Energy, Solar Energy, Load Forecasting

I. INTRODUCTION
Today, local energy generation is very prominent for the sustainability of energy for the future. A distributed generation system refers to energy generated at the consumer point. A microgrid, along with the distributed generation system, can be used as a backup for the grid in case of emergencies like grid outages. A microgrid system gives scope for greener energies and also has the advantage of providing flexible electricity. Therefore, with microgrid systems, consumers become more energy independent and environment-friendly.
A microgrid is powered by renewable resources like wind energy, solar energy and fuel cells, and by backup diesel generators and batteries [1]. The microgrid infrastructure provides the platform for efficiency improvement and enhanced energy consumption within a small area. Power generation happening at the load end reduces the transmission losses. Also, the distribution system is comparatively less complex in construction. The load is considered uncontrolled in most situations, and hence the required amount of power must be generated dynamically. With conventional power plants, there are primary and secondary load control mechanisms. However, the scenario is different here, and unconventional control of the sources must be considered. The sources considered here are solar photovoltaic, fuel cell and wind power. Of these, the solar and wind power generated depend purely on the solar insolation levels and the available wind speed, respectively. The fuel cell power output is controllable, as the hydrogen and oxygen inputs are controlled at the inlet valve. Normally, combined heat and power is considered as a source, but under Indian conditions heat is not naturally available on a large scale, so it is not considered in the simulation. It is observed that efficiently managing the resources could save on the operating cost of the system, and hence more profit can be obtained. The coordination of the supply from different sources increases the overall efficiency of the system. It is a recommended practice to also have the power supply connection from the grid. This helps in meeting the load demand with the generation possible at the load end from the renewable energy sources, while the balance load demand is met by the grid. Whenever the load is completely met by the generation from these renewable energy sources and there is excess power available, it is required to feed the excess power back to the grid. The power exchange can be measured to observe whether the power is taken from the grid or fed back to the grid. This is called Net-Metering [2].
Considering the operation of the microgrid, traditionally there are two modes. Mode 1: the Microgrid is connected to the main grid so that power can be either consumed or delivered back to the grid (Grid Connected Mode (GCM)) [3]. Mode 2: the Microgrid is disconnected from the grid during emergencies and has the capability to run the local loads (Islanded Mode) [4]. A typical Microgrid system with utility connection is shown in figure 1.

Fig. 1 Schematic of a Microgrid with utility interconnection

It is also required to control the microgrid in the islanded and grid-connected modes [4], [5]. A study is done on the energy management in the microgrid in grid-connected mode, and its effectiveness is discussed.
The contents of the paper are as follows. The first section gives the introduction to the problem addressed. The objectives are discussed in the second section. The methodology used for the analysis is explained in the third section. The results and discussions are in the fourth section, followed by the conclusions.
Lighting and other auxiliary loads are assumed to be constant and are included as an RL model. There is no reserve considered in this model; this can be a future enhancement of the paper.

IV. SIMULATION, RESULTS AND DISCUSSION

A. MATLAB based simulation
A distributed generation system with solar, wind, fuel cell and the batteries [1] connected to a microgrid is simulated in the MATLAB-Simulink environment. Both the above loads are simulated as single-phase loading. A heavy industry load, which is a 3-Ph load, is also modeled. Analysis of the model is done for three different cases.
Case A: All the connected loads in the MG system are assumed to be domestic loads. In this case, the voltage fluctuations and the power exchange at the PCC are as shown in figures 5 and 6 respectively.
Case B: In this case, along with the domestic loads, a small-scale industry load is added. In this case, the voltage fluctuations and the power exchange at the PCC are as shown in figures 7 and 8 respectively.

B. Results
The results for the three different cases simulated are shown in the following figures. It is to be noted that in all the figures the x-axis corresponds to a 24-hr horizon, sampled at each minute.
The power met by the heavy industry load is shown in Fig 13. This is also similar to the industry load discussed above.

Fig. 13 Power met by heavy industry load

C. Discussion
It can be observed that there is always a power exchange between the utility grid and the micro-grid. Whenever the load demand is not met by the local generation, the utility supplies the required power; whenever there is excess power in the micro-grid, it is sent back to the utility grid.
The rated voltage at the PCC is 12.47 kV. Whenever there is a power exchange happening with the grid, the voltage tends to change. In figure 4, the voltage variation can be observed to deviate from 99.98% to 100.03% of the rated value. This is well within the allowable deviation of 10%. It can also be observed that the voltage is not different in the different phases. The power exchange at the PCC can be analyzed from figure 5. It is obvious from the graph that most of the time the microgrid is producing not only enough power for the local loads but also a surplus. So, the measured power shows negative values, indicating flow from the microgrid to the grid. The feeding to the grid is at a maximum during noon time, due to the maximum power generation from the solar photovoltaics.
The second case considered is the influence of the presence of a small-scale industrial load in the system, at one phase. This should cause unbalance in the system voltage, as different loads are connected on different phases. This can be observed in figure 6. The two household loads connected to two of the phases have similar voltage profiles, while the third phase has a voltage which fluctuates because of the quantity of load. It is again observed that the deviation is between 99.98% and 100.08%, which is much less than the allowable 10%.
The second observation is that the voltage fluctuations at the PCC are different in the three cases. It can be observed that when the industry load is introduced along with the domestic load at phase c, the fluctuations are more in that phase. When the heavy industry load is introduced, all the phases are disturbed, showing high voltage and current fluctuations. The power exchange is again predominantly from the microgrid to the grid. Two conclusions can be made from figure 7. Firstly, there is always scope for adding additional load in the microgrid area while keeping the power exchange close to zero but in the negative region. Secondly, load can be scheduled to be connected to the microgrid during noon time. This can flatten the peak in the graph, so that the power quality at the PCC is improved.
As mentioned for case 3, figure 8 shows the voltage fluctuation, where the voltage deviation is found to be predominant in all three phases. The deviation is between 91.42% and 104.25% of the rated value. This is again within the standards. The voltage profile now seems to shift from the negative to the positive region, i.e., power is now drawn predominantly from the grid. During night time, there is much fluctuation in the power flow.
All the load demands are met by the energy sources available at the microgrid as well as by the utility grid in all three cases. The individual load profiles shown in figure 10 and figure 11 show that, whatever the load demand, that power is supplied after considering the available sources of fuel cell, solar, wind and battery power. The loads at the household and the industry are chosen in such a way that there is always a load present, distributed over the 24-hr horizon. The power coming from solar can be precisely found here; these are assumed to be rooftop solar photovoltaic installations. The fluctuation in the load is considered to check the stability of the system under such conditions, and it is found that the sum of the generation plus the grid power exchange is capable of supplying uninterrupted power to the load. This is desirable so as to maintain the power quality at the load end.

V. CONCLUSIONS
In this paper, a micro-grid system that connects household loads and various industry loads is studied. The total load demand is met from the energy generated from renewable energy sources like solar, wind and fuel cells, from the UPS system, and from the grid. The power consumed and the power fed back to the grid in a day are analyzed and found to be well within the standards. It is seen that energy is fed back to the grid whenever the generated energy at the load side exceeds the demand. Also, the voltage fluctuations at the PCC are observed for various load conditions.

REFERENCES
[1] X. Guan, Z. Xu, and Q. S. Jia, "Energy-efficient buildings facilitated by microgrid", IEEE Trans. Smart Grid, Vol. 1, No. 3, pp. 243-252, 2010.
[2] Tesoro Elana Del Carpio Huayllus, Dorel Soares Ramos and Ricardo Leon Vasquez Arnez, "Microgrid Systems: Main Incentive Policies and Performance Constraints Evaluation for their Integration to the Network", IEEE Trans. Latin America, Volume 12, Issue 6, September 2014.
[3] Hussain Basha Sh, Venkatesh P, "Control of Solar Photovoltaic (PV) Power Generation in Grid-Connected and Islanded Microgrids", International Journal of Engineering Research and General Science, Volume 3, Issue 3, Part-2, May-June 2015.
[4] C. L. Moreira and A. G. Madureira, "Defining Control Strategies for MicroGrids Islanded Operation", IEEE Trans. on Power Systems, Volume 21, No. 2, May 2006.
[5] C. L. Moreira, F. O. Resende, and J. A. Peças Lopes, "Using Low Voltage MicroGrids for Service Restoration", IEEE Trans. on Power Systems, Volume 22, No. 1, February 2007.

APPENDIX

Time (hrs.) | Use [kW] | Gen [kW] | Grid [kW] | Solar [kW] | Fuel Cell [kW] | UPS [kW]
0.00 | 6.46 | 0.00 | 4.46 | 0.00 | 1.00 | 1.00
1.00 | 6.39 | 0.00 | 4.39 | 0.00 | 1.00 | 1.00
2.00 | 4.56 | 0.00 | 2.56 | 0.00 | 1.00 | 1.00
3.00 | 2.41 | 0.00 | 0.41 | 0.00 | 1.00 | 1.00
4.00 | 2.44 | 0.00 | 0.44 | 0.00 | 1.00 | 1.00
5.00 | 2.41 | 0.00 | 0.41 | 0.00 | 1.00 | 1.00
6.00 | 2.42 | 0.00 | 0.42 | 0.00 | 1.00 | 1.00
7.00 | 2.52 | 0.00 | 0.52 | 0.00 | 1.00 | 1.00
8.00 | 2.42 | 0.20 | 0.23 | 0.20 | 1.00 | 1.00
9.00 | 3.06 | 1.23 | -0.16 | 1.23 | 1.00 | 1.00
10.00 | 2.80 | 2.20 | -1.40 | 2.20 | 1.00 | 1.00
11.00 | 2.65 | 3.17 | -2.52 | 3.17 | 1.00 | 1.00
12.00 | 2.30 | 3.80 | -3.51 | 3.80 | 1.00 | 1.00
13.00 | 6.91 | 4.20 | 0.70 | 4.20 | 1.00 | 1.00
14.00 | 7.31 | 4.38 | 0.94 | 4.38 | 1.00 | 1.00
15.00 | 3.42 | 4.22 | -2.80 | 4.22 | 1.00 | 1.00
16.00 | 2.91 | 3.68 | -2.77 | 3.68 | 1.00 | 1.00
17.00 | 2.77 | 2.85 | -2.08 | 2.85 | 1.00 | 1.00
18.00 | 7.27 | 1.57 | 3.70 | 1.57 | 1.00 | 1.00
19.00 | 2.82 | 0.49 | 0.33 | 0.49 | 1.00 | 1.00
20.00 | 6.59 | -0.08 | 4.66 | 0.08 | 1.00 | 1.00
21.00 | 7.99 | 0.00 | 5.99 | 0.00 | 1.00 | 1.00
22.00 | 7.86 | 0.00 | 5.86 | 0.00 | 1.00 | 1.00
23.00 | 6.62 | 0.00 | 4.63 | 0.00 | 1.00 | 1.00
24.00 | 6.51 | 0.00 | 4.51 | 0.00 | 1.00 | 1.00
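The net-metering bookkeeping described in the paper can be sketched directly from the Appendix data: the grid power equals the load minus the local generation (solar plus the tabulated 1 kW fuel cell and 1 kW UPS entries), with negative values meaning export. A minimal Python example using three rows of the table:

rows = [  # (hour, use, solar, fuel_cell, ups) in kW, from the Appendix
    (0.00, 6.46, 0.00, 1.00, 1.00),
    (10.00, 2.80, 2.20, 1.00, 1.00),
    (18.00, 7.27, 1.57, 1.00, 1.00),
]
for hour, use, solar, fc, ups in rows:
    grid = use - (solar + fc + ups)        # matches the Grid column
    state = "import from grid" if grid > 0 else "export to grid"
    print(f"{hour:5.2f} h: grid = {grid:+.2f} kW ({state})")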
Abstract: The growing electrical energy demand in developing countries like India has triggered scientists and engineers to think of new and innovative methods in the field of renewable energy sources, especially solar energy. Grid connected PV systems have become the best alternative for bulk electrical power consumers like industries and other institutions. In this paper, the 280 kWp Photovoltaic Grid Connected Power Plant commissioned at CVR College of Engineering is taken up for research study. This plant uses three different mechanisms to trap the solar energy from the sun, namely seasonal tilting, single axis tracking and single axis polar tracking. The twelve months of a year are segmented into two time frames, viz. from September to March and from April to August. In one time frame the single axis polar tracking plant gives better performance, whereas in the other time frame the single axis tracking plant gives better output. Further research and investigation have to be done on these results to predict the performance of the plants in these two time frames and the exact reasons for these outputs.

Index Terms: Grid Connected Solar Power Plant, Polar Tracking, Single Axis Tracking, Solar Radiation, Seasonal Tilt.

I. INTRODUCTION
Increasing electrical energy demand, the high cost of fossil fuels, global warming and environmental issues increase the importance of the use of clean and renewable energy sources. There are many types of naturally available sources of energy which can be replenished over a period of time; solar energy, wind energy and biomass energy are a few examples. All these renewable sources of energy have the capability of converting the naturally available sources of energy into electrical energy [1]. Today's daily electrical energy needs, the cost of fossil fuels and the effect of greenhouse gases on the environment force industrial and other institutions to seek new ways to generate their own electrical demand using renewable energy.
India has an abundance of solar radiation, with the peninsula receiving more than 300 sunny days on average in a year. Due to its proximity to the equator, India receives abundant sunlight throughout the year [2]. For countries like India, solar energy has become the best alternative and a viable renewable energy to fulfill the electrical energy requirements of the majority of the people living in both urban and rural areas, for a variety of applications.
The most familiar way to generate electricity from solar energy is to use photovoltaic cells, which are made up of silicon and convert the solar energy falling on them directly into electrical energy. This is a direct energy conversion which involves the photo-electric effect. Large scale applications of photovoltaics for power generation, either on the rooftops of houses or in large fields, are connected to provide clean, safe and strategically sound alternatives for electrical power generation [3].
The solar PV solution has the potential to transform the lives of many people who rely on highly subsidized kerosene oil and other fossil fuels, primarily to light up their homes. Renewable energy is a practical solution to address the persistent demand-supply gap in the power industry.

II. TYPES OF PHOTOVOLTAIC SYSTEM
On the basis of working operation, PV systems can operate in four basic forms [4].

A. Grid Connected PV Systems

Figure 1. Block diagram of Grid connected PV Plant with net metering

Fig. 1 shows the block diagram of a grid connected PV plant with net metering. These systems are connected to a broader electrical network called the grid. The PV system is connected to the utility grid using high quality sine wave inverters, which convert the DC power generated by the solar array into AC power. During the day, the solar electricity generated by the system is either used immediately for local loads or exported to the electricity supply companies. In the evening, when the PV system is unable to generate, the local loads are supplied from the grid.
B. Standalone Systems
Fig. 2 shows the block diagram of a standalone PV system. PV systems not connected to the electric utility grid are known as off-grid PV systems and are also called stand-alone systems. Direct systems use the PV power immediately as it is produced, while battery storage systems can store energy to be used at a later time, either at night or during cloudy weather. These systems are used in isolation from the electrical grid and may be used to power local loads.

Figure 2. Block diagram of standalone PV system

C. Hybrid System
Fig. 3 shows the block diagram of a hybrid PV system. A hybrid system combines PV with other forms of power generation, usually a diesel generator; biogas is also used. The other form of power generation is usually of a type which is able to modulate its power output as a function of demand. However, more than one form of renewable energy may be used, e.g. wind and solar. The photovoltaic power generation then serves to reduce the consumption of fossil fuel.

Figure 4. Block diagram of Grid tied with Battery Backup PV System

Project location: Hyderabad; Latitude: 17.20 N; Longitude: 78.60 E; Elevation: 545.00 m
Climate data location: Hyderabad; Latitude: 17.45 N; Longitude: 78.47 E; Elevation: 545.00 m

The solar radiation data is not available at the exact project location, so the solar radiation data and other climate conditions at the project location are assumed to be the same as those available at the nearest climate data location given by NASA [7]. The climate data of the location, as per the information provided by NASA, is tabulated in Table I below.
TABLE I.
IV. PLANT DESCRIPTION

The grid connected solar power plant located on the rooftops of CVR College of Engineering has a total installed capacity of 280 kWp. This 280 kWp solar power plant is further decentralized into 5 sub-plants for proper operation and maintenance.
The electrical energy output of the solar power plant is directly proportional to the amount of solar radiation that the solar array receives at any point of time: the more solar energy received by the solar array, the more the electrical energy output, and vice-versa.
So solar power plants, classified by the way the solar array receives the solar energy from the sun, can be categorized into three types:
(A) Seasonal Tilt/Manual Tilt
(B) Single-Axis Tracking
(C) Single-Axis Polar Tracking
The complete capacity of the solar plant on each block of CVR College of Engineering is tabulated in Table II, including the dates on which each plant started operating at its full capacity.
Out of the 280 kWp of installed capacity, 120 kWp on the rooftop of the EEE Block and 20 kWp on the rooftop of the Library block make use of the seasonal tilt mechanism; 80 kWp on the Main block and 60 kWp on the CSE block use tracking mechanisms to extract the solar energy from the sun. Overall, 140 kWp of the solar plant uses the seasonal tilt mechanism and the remaining 140 kWp uses tracking mechanisms for electrical power generation [8].

TABLE II.
DETAILS OF NAMES OF SUB-PLANTS, THEIR CAPACITIES AND DATES OF COMMENCEMENT OF PLANT

Name of the Sub Plant | Installed Power | Date of Commencement of Plant
CVR EEE Block | 120.00 kWp | 03-03-2014
Single Axis Tracking-MB | 40.00 kWp | 18-01-2015
Library | 20.00 kWp | 23-02-2015
Polar Tracking-MB | 40.00 kWp | 11-03-2015
CVR CS Block | 60.00 kWp | 22-10-2015
Overall Plant Capacity | 280.00 kWp | ----

A. Seasonal Tilt/Manual Tilt
The modules are placed facing south, with some tilt with respect to the horizontal roof on which the panels are mounted. In this method, the tilt of the solar modules needs to be adjusted every month. This is called seasonal tilting; but since changing the position of the solar panels every month is quite difficult, the change of tilt is restricted to once every season.

B. Single-Axis Tracking
In this method the panels are oriented in the East-West direction. The moment the sun rises in the east, the solar panels automatically face the east; the panels are horizontal when the LST (local solar time) is 12:00 noon, and they face the west in the evening. This method of tracking the sun is called single-axis tracking.

C. Single-Axis Polar Tracking
In single-axis tracking, the modules are mounted flat, at 0 degrees with respect to the horizontal, while in single-axis polar tracking the modules are installed at a certain tilt (10 degrees) with respect to the horizontal axis. It works on the same principle as single-axis tracking, keeping the axis of the tube horizontal along the north-south line and rotating the solar modules from the east to the west throughout the day. This method of tracking is named single-axis polar tracking. These trackers are usually suitable for high latitude locations.

The main components of a grid connected PV plant are [9]:
1. Solar panels/solar modules/solar array
2. String inverter
3. DC cables
4. AC cables
5. Junction boxes
6. Net meter (bi-directional meter)

The solar panels/modules/array collect the energy from the sun in the form of solar radiation. This solar radiation is converted into DC electrical energy by the solar array. This DC electrical energy is given as input to grid-interactive string inverters through the DC cables. The string inverters convert the DC electrical energy into AC electrical energy. The output AC electrical energy of the inverter is sent to a local junction box through the AC cables. From the junction box, depending upon the local load requirement, the AC power is either pumped into the electrical grid or utilized for the local energy requirements. A bi-directional energy meter is installed at the incoming transformer from the external grid [10]. The generated solar electric power is synchronized at the 11 kV bus. The net meter increments the energy units whenever the local load requirement is more than the AC electrical energy output of the solar plant, and vice-versa.
Fig. 5 shows an on-site photograph of the 40 kWp single axis tracking power plant commissioned on the Main block of CVR College of Engineering. It is evident from the photograph that at 9:30 A.M. the solar array is facing towards the east.
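The tracking rule just described can be sketched in a few lines of Python. The 15 degrees-per-hour rotation is the standard solar hour angle; the rotation limit is an assumption for illustration, not a parameter from the paper.

def tracker_angle(lst_hours, limit=60.0):
    # rotation about the horizontal north-south axis, in degrees:
    # negative = facing east (morning), 0 = horizontal at 12:00 LST,
    # positive = facing west (evening)
    angle = 15.0 * (lst_hours - 12.0)   # hour angle of the sun
    return max(-limit, min(limit, angle))

for h in (9.5, 12.0, 16.0):
    print(h, "h LST ->", tracker_angle(h), "deg")   # 9.5 h gives -37.5 (east)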
Figure 9. Energy outputs of the single axis tracking power plant and the single axis polar tracking power plant over a monitored period
TABLE III.
Wear resistance, Wr = 1/Wa        (5)

C. Wear Test
The sliding wear tests are carried out for both tribo pairs at constant parameters using Ducom's Pin-on-Disc TR-20 machine at controlled room temperature, under an applied contact load of 30 N at a speed of 300 rpm, as per ASTM G-99A. The two tribo pairs are made to contact the surface of the SS alloy steel disc and revolved on a track diameter of 25 mm for 2000 revolutions each. Wear depends not only on the load but also on the sliding speed: at a higher sliding speed, more distance is covered by the sample, and the wear will be higher.

Volume loss = (Mass loss / Density) × 1000        (3)

D. Results

Figure 3. Wear Vs Sliding Time of Multilayer Hardfacing

The mass loss, volume loss and wear rate are higher for the monolayer, and its wear resistance is very poor when compared to the multilayer hardfacing depositions. This is due to the brittleness of the specimens and their poor binding strength. It may also be caused by the grinding and lathe-machining operations, which made the surface harder and more brittle.

E. Worn surfaces

TABLE 4.
WEAR CHARACTERISTICS OF PINS

The formation of wear debris occurred mainly on the single layer as compared with the multilayer, and the major loss of volume and wear rate also occurred on the single-layer surface.

Figure 8. Worn Surface of 10.5% Cr - 19% Ni Multilayer Hardfacing after Wear Test

One of the common features observed in both worn surfaces is the formation of ridges running parallel to the sliding direction. The damaged spots in the form of craters that can be seen in the single-layer depositions are decreased in the multilayer depositions. The multilayer hardfaced deposition showed a mild wear transition, and no severe cracks were formed. Loose wear debris formations were observed in the previous hardfacing, but no such features were observed here.

III. CONCLUSION

Multilayer hardfacing reduces the rate of pin-hole and crack formation, reduces the internal stress in the interface layers, and leads to an increase in the volume of carbides, which strongly influences the hardness of the material. Multilayering can reduce internal void formation and external surface defects, resulting in greater binding strength and hardness; as the carbon content is increased, the high fraction of carbide formation gives excellent resistance to external load and wear.
An increase in carbon content increases the hardness, but it also increases the brittleness of the material; on the other hand, it can form more carbides, which resist abrasive wear.
In some cases, the finishing operations are also responsible for the hardening of the surface layers.
The morphology of the worn surfaces of the multilayer showed only a plough nature on the surface and good wear resistance compared to the single layer, whereas the single layer showed uneven crack formation, voids and severe ploughing. XRD results showed that all the hardfacings are in the austenitic phase, and after the wear test no phase transformation was observed.
Broken flake particles from the specimens acted as abrasive particles and increased the wear rate, due to the lower binding strength. The Cr-Ni based hardfaced alloy showed good binding strength, good wear resistance and fewer oxide layer formations on the surfaces of the specimens as compared with all the hardfacings. A buffer layer is preferred for these hardfacings.
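Equations (3) and (5) amount to simple bookkeeping. The following minimal Python sketch interprets Wa as the wear (volume loss per unit sliding distance) and uses assumed mass-loss and density values, not figures taken from Table 4:

import math

mass_loss_g = 0.0123                      # assumed measured mass loss (g)
density_g_cm3 = 7.8                       # assumed deposit density (g/cm^3)
sliding_dist_m = math.pi * 0.025 * 2000   # 25 mm track diameter, 2000 revolutions

volume_loss_mm3 = mass_loss_g / density_g_cm3 * 1000   # eq. (3)
wear = volume_loss_mm3 / sliding_dist_m                # mm^3 per metre slid
wear_resistance = 1 / wear                             # eq. (5): Wr = 1/Wa
print(volume_loss_mm3, wear, wear_resistance)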
Abstract: The intricacies in plastic mold design are accommodated by a precise and correct methodology in the design steps and by taking the right factors into consideration. In this paper, a two-plate, finger-cam-actuated injection mold is designed for a component, namely the male insulator of a solar connector; the material selected for it is Poly Phenylene Oxide (Noryl-PPO). The male insulator component has intricate projections on its surface and has a threaded brass insert as well. The 3-D modelling of the component and the extraction of the core and cavities were performed in Plastic Advisor 7.0. Plastic Advisor is a powerful simulation tool to locate the gate location and predict the defects in the component.

Index Terms: Mold design, Male Insulator, Solar Connector, Plastic Advisor 7.0

I. INTRODUCTION
Injection molding is an ideal plastic manufacturing process, due to its ability to manufacture complex plastic parts with high precision and high production rates at low operating costs; it requires only a relatively high initial investment for mold design and fabrication [1]. The molding may cause defects, and its processing offers a challenge during the development phase. The cost of the mold is high, and any process that is not optimized renders heavy overheads during the development cycle and production. So, designing a mold which ensures the best suitability for the features on the component, with a smooth flow of molten plastic, is the demand of the plastic industry [2].
Solar insulators are single-contact electrical connectors commonly used in solar connectors. They consist of a plug and socket design. The plugs and sockets are placed inside a plastic shell that appears to be the opposite gender: the plug is inside a cylindrical shell that looks like a female connector but is referred to as male, and the socket is inside a square probe that looks male but is electrically a female. The female connector has two plastic fingers that have to be pressed slightly towards the central probe to insert into holes in the front of the male connector. When the two are pushed together, the fingers slide down the holes until they reach a notch cut into the side of the male connector, where they pop outward to lock the two together. The male insulator contains a threaded brass insert, which is placed inside the die before the molten plastic is introduced into the impression inside the mold.
Materials are cooled to get the desired form. The injection molding process can be divided into four stages: plasticizing, injection, packing and cooling [3]. The operations are to be carried out precisely. The tool is clamped together under high loads and is subjected to high injection pressures and high heat levels from the incoming polymer. During the cooling cycle, the mold is cooled until it reaches the ejection temperature. All these factors combine to make the mold tool a highly stress-dynamic heat exchanger. It is important, therefore, to ensure that the mold design takes all factors into consideration. Additionally, several other requirements need to be considered, among which are the type of tool needed (e.g. two-plate, side core, split, three-plate, hot runner), the mold material, the cavity construction, the required tool life and temperature control.
In addition to runners and gates, there are many other design issues that must be considered in the design of plastic molds. Firstly, the flow must allow the molten plastic to flow easily into all the cavities. The removal of the solidified part from the mold is equally important, so a draft angle must be applied to the mold walls. The design of the mold must also accommodate any complex features on the part, such as undercuts or threads, which will require additional mold pieces. Most of these devices slide into the part of the cavity through the side of the mold and are therefore known as sliders or side-actions.

II. PROCESS VARIABLES, MACHINE SPECIFICATIONS AND COMPONENT DETAILS

A. Speed related process variables
The process variables related to speed are the mold opening and closing speed, injection speed, screw rotation speed and component retracting speed.

B. Pressure related process variables
The process variables related to pressure are the injection pressure, holding pressure and hydraulic jack pressure.

C. Temperature related process variables
The process variables related to temperature are the melt temperature and the cooling water temperature.
Figure 1. 3D model of male insulator of solar connector

TABLE I.
CLAMPING UNIT

TABLE II.
INJECTION UNIT

Screw diameter | 28 mm
Shot weight | 96 gm
Theoretical shot volume | 108 cm³
Maximum injection pressure | 2670 kg/cm²
Injection rate | 55.2 cm³/s
Plasticizing capacity | 20.8 gm/s
Screw rotation | 253 rpm
III. SIMULATION
The simulations are performed in Plastic Advisor 7.0, which is a powerful tool to simulate the best gate location, confidence of fill and fill time, and to determine weld lines and air traps.

D. Fill time

F. Flow front temperature
The flow front temperature is the temperature of the polymer when the flow front reaches a specified point in the center of the plastic cross-section. It also helps in checking for uniform flowability and uniform shrinkage, and in identifying weld lines, hesitations, flow marks and material degradation due to high temperature in the component. The result for the flow front temperature is shown in figure 8.
The flow front temperature should not drop by more than 3 to 5 °C. While doing the simulation, one looks for hot spots and cold spots, checking where the material is cooling or heating excessively.

H. Quality prediction
The quality of the component can be predicted using mold flow analysis. The result of the quality prediction is shown in figure 10.

J. Air traps
Air traps cause blow holes in the component, decreasing its quality and strength. This is highly undesirable, and it has to be taken care of beforehand so that once production starts there is no need for design modification. The result for the air traps in the component is shown in figure 12.
Figure 14. Finger cam actuator

Working length, L = (M / sin θ) + (2c / sin θ)        (3)
where
M = split movement = 27.39 mm
θ = angle of finger cam = 15°
c = clearance = 0.5 mm
L = (27.39 / sin 15°) + (2 × 0.5 / sin 15°) = 107.83 mm
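A quick numeric check of eq. (3), as a sketch only: evaluating it directly with the quoted values gives about 109.7 mm, slightly different from the 107.83 mm stated above, which may reflect rounding in the source.

import math

M, c = 27.39, 0.5                   # split movement and clearance (mm)
theta = math.radians(15.0)          # finger cam angle
L = M / math.sin(theta) + 2 * c / math.sin(theta)   # eq. (3)
print(round(L, 2), "mm")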
O. Cooling system
A serial cooling channel system has been adopted for this mold for better cooling quality. For this, cooling holes of 6.0 mm diameter with 1/8 BSP (British Standard Pipe) plugs have been made.

REFERENCES
[1] Castro, C.E., Rios, M.C., B.L. and M.C.J. (2005), "Simultaneous Optimization of Mold Design and Processing Conditions in Injection Molding", Journal of Polymer Engineering, 25(6), 459-486.
[2] Jitendra Dilip Ganeshkar, Prof. R.B. Patil, Swapnil S. Kulkarni, "Design of plastic injection mold for an automotive component through flow analysis (CAE) for design enhancement", International Journal of Engineering Research and Studies, Jan 2014, E-ISSN 2249-8974.
[3] Alireza Akbarzadeh, Mohammad Sadegi, "Parameter study in plastic injection molding process using statistical methods and IWO algorithm", International Journal of Modeling and Optimization, Vol. 1, No. 2, June 2011.
[4] Nik Mizamzul Mehat, Shahrul Kamaruddin, Abdul Rahim Othman, "Modeling and Analysis of Injection Molding Process Parameters for Plastic Gear Industry Application", ISRN Industrial Engineering, Vol. 13, Article ID 869736, 2013.
[5] CITD, Die Design Handbook, 1980; Company standards book, Mold Master Data Hand Book, 2003.
The ADC can also be defined as the on-chip interface between the digital domain and the real domain of analog signals.
The general process of converting an analog signal into a digital signal consists of two steps: 1) sampling and 2) quantization. Consider the analog signal x(t): in the sampling process, the analog signal is converted into a sampled signal; the sampled signal is then quantized, and finally the signal is converted into digital form. Different types of analog-to-digital converters are the flash ADC, successive approximation ADC, pipeline ADC and time-interleaved ADC. The highest resolution ADC is the sigma-delta ADC [2], which is a cost-efficient, low-power converter. The basic building blocks of an ADC are comparators, amplifiers and integrators. A brief explanation of the sigma-delta ADC is given below.

A. SIGMA-DELTA ANALOG-TO-DIGITAL CONVERTER
The sigma-delta analog-to-digital converter [2] is a 1-bit sampling system. This type of ADC can also be called an oversampling ADC. Delta-sigma ADCs implement oversampling, decimation filtering and quantization noise shaping to achieve high resolution and excellent anti-aliasing filtering.
The block diagram of the ADC is shown in Fig 3. The components of the ADC are a summing amplifier, an integrator, a quantizer (comparator), a DAC and a digital filter. It is used for converting the analog ECG signal into a digital stream. This design is used for the proposed ECG signal processing system because of its characteristics: various sampling rates can be used, and it is a high-resolution ADC.
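As a rough illustration of the oversampling principle described above, here is a minimal first-order sigma-delta modulator in Python; the input tone, amplitude and rates are assumed for illustration and are not the paper's design values.

import numpy as np

osr = 64                                   # oversampling ratio (assumed)
fs = osr * 8000.0                          # modulator sampling rate (assumed)
t = np.arange(4096) / fs
x = 0.6 * np.sin(2 * np.pi * 50.0 * t)     # stand-in for the analog input

integ, fb = 0.0, 0.0
bits = np.empty_like(x)
for n, sample in enumerate(x):
    integ += sample - fb                   # summing amplifier + integrator
    bits[n] = 1.0 if integ >= 0 else -1.0  # 1-bit quantizer (comparator)
    fb = bits[n]                           # 1-bit DAC feedback
# decimation: a moving-average filter recovers a multi-bit estimate
recovered = np.convolve(bits, np.ones(osr) / osr, mode="same")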
The neural network has the ability to recognize patterns and adapt them to a changing environment.
The important steps for developing a neuro-fuzzy system are:
1) Fuzzification of the input parameters.
2) Computation of the degree of the linguistic terms.
3) Conjunction of the fuzzy inferred parameters.
4) Defuzzification of the output.
The adaptive neuro-fuzzy inference system architecture [4],[6], with two inputs X and Y and the corresponding output F, is given in Fig 4.

Fig. 4 ANFIS structure for two-input one output

Layer 3: Every node in this layer calculates the ratio of a rule's firing strength to the sum of all the firing strengths of the rules. The outputs of this layer are the normalized firing strengths.

The terms verylow, low, medium, high and veryhigh are the linguistic variables for the input parameters; HCLK, MCLK and LCLK are the linguistic variables for the output of the ANFIS system. An example of the different clocks is shown in Fig 5.
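The Layer-3 computation is a one-line normalization; a minimal Python sketch (the rule strengths below are made-up values):

def normalize_firing_strengths(strengths):
    # divide each rule's firing strength by the sum of all strengths
    total = sum(strengths)
    return [w / total for w in strengths]

print(normalize_firing_strengths([0.9, 0.3, 0.3]))   # -> [0.6, 0.2, 0.2]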
V. METHODOLOGY FOR THE PROPOSED DESIGN
The working flow of the proposed system is explained in detail in the flow chart shown in Fig. 7.

Fig. 8 Simulation block for the proposed design using Xilinx System Generator & MATLAB

The input given to the system is ECG signal data. The ECG signal data loaded into MATLAB is shown in Fig 9.

Fig. 9 Input ECG signal used for the simulation of the proposed system

Fig. 12 RTL Netlist for the Proposed Design

The specifications of the proposed design, which include the area, power, resolution, frequency and process, are shown in the table below:

TABLE III
CHIP RESULTS OF THE PROPOSED SYSTEM LEVEL DESIGN

Specification | Chip Results
Process | UMC 90 nm
Core Area | 121613 μm²
Power | 15 mW
Supply Voltage | 1.62 V
Operating Frequency | 300 MHz
Resolution | 16 bit
VIII. CHIP LAYOUT FOR THE COMPLETE DESIGN
The chip layout for the proposed design is shown in Fig 13. This process is performed in the Cadence Encounter tool. By performing this physical design we can obtain the netlist, the DEF (Design Exchange Format) file and the GDS II (Graphic Data System) file. With these files we can proceed to further steps, such as fabrication of the chip for the proposed design, which can be used for real-time applications.

Fig. 13 Physical design of the proposed ECG system chip layout in Cadence Encounter

IX. CONCLUSIONS
In this project, the system level design of a low power ECG signal processing system is implemented, which includes the integration of the sigma-delta ADC, the ANFIS system, a multiplexer, and up-sampling and down-sampling circuits. The sampling clock of the sigma-delta ADC is selected by a decision technique using the intelligent ANFIS algorithm, which can also be called an intelligent system. The design is modeled by taking the ECG signal features as the input parameters. As the ANFIS approach provides a general framework for the combination of neural networks and fuzzy logic, the efficiency of ANFIS in deciding the sampling clock of the ADC can be judged by observing the power consumption levels of the ADC at different clock periods. The SoC implementation of the proposed design reduces the area and the power consumed by the circuit. The results show that this work not only improves the quality of the ECG signals but also reduces the power consumption of the devices. The obtained chip area is about 121613 μm², and the power consumed by the device is 15 mW. The resolution of the ECG signal is increased by 0.14%. This is applicable for real-time health care monitoring applications.
REFERENCES
[1] Shih-Lun Chen, "A power-efficient adaptive fuzzy resolution control system for wireless body sensor networks", IEEE, Vol. 3, June 2015.
[2] S.-Y. Lee and C.-J. Cheng, "A low-voltage and low-power adaptive switched-current sigma-delta ADC for bio-acquisition microsystems", IEEE Trans. Circuits Syst. I, Reg. Papers, Vol. 53, No. 12, pp. 2628-2636, Dec. 2006.
[3] Sukhmeet Kaur, Parminder Singh Jassal, "Field programmable gate array implementation of a 14-bit sigma-delta analog to digital converter", Vol. 1, Issue 2, ISSN 2278-6856, August 2012.
[4] Braud Thomas Funsten, "ECG classification with an adaptive neuro-fuzzy inference system", August 2015.
[5] G.B. Moody, R.G. Mark, and A.L. Goldberger, "PhysioNet: A Web-based resource for the study of physiologic signals", IEEE Eng. Med. Biol. Mag., Vol. 20, No. 3, pp. 70-75, May/June 2001.
[6] T.M. Nazmy, H. El-Messiry, B. Al-Bokhity, "Adaptive neuro-fuzzy inference system for classification of ECG signals", in Proc. IEEE Conference, Ain Shams University, April 2010, pp. 71-76.
[7] Asim M. Murshid, Sajad A. Loan, Shuja A. Abasi, and Abdul Rehman, "VLSI architecture of fuzzy logic hardware implementation: A review", International Journal of Fuzzy Systems, Vol. 13, No. 2, June 2011.
[8] H. Kim et al., "A configurable and low-power mixed-signal SoC for portable ECG monitoring applications", in Proc. Symp. VLSI Circuits (VLSIC), June 2011, pp. 142-143.
[9] Henry José Block Saldaña, Carlos Silva Cárdenas, "Design and implementation of an adaptive neuro-fuzzy inference system on an FPGA used for nonlinear function generation", IEEE, December 2010.
[10] Mauricio Figueiredo and Fernando Gomide, "Design of fuzzy systems using neuro-fuzzy networks", IEEE, Vol. 10, No. 4, July 1999.
[11] P. Laguna, B. Simson, L. Sornmo, "Improvement in high-resolution ECG analysis by interpolation before time alignment", IEEE, Vol. 24, pp. 0276-6547, 1997.
[12] Philip T. Vuong, Asad M. Madni and Jim B. Vuong, "VHDL implementation for a fuzzy logic controller", IEEE Conference, Los Angeles, August 2006.
[13] Gurpreet S. Sandhu and Kuldip S. Rattan, "Design of a neuro-fuzzy controller", Electron. Lett., Vol. 41, No. 11, May 2005.
[14] H.C. Kim, O. Urban, and T.-G. Chang, "Post-filtering of DCT coded images using fuzzy blockiness detector and linear interpolation", IEEE Trans. Circuits, Vol. 53, No. 3, pp. 1125-1129, Aug. 2007.
[15] Yagiz, N., and Sakman, "Fuzzy logic control of a full vehicle without suspension gap degeneration", Int. J. Vehicle Design, 42, pp. 198-212.
[16] System Generator for DSP User Guide, December 2009; www.physionet.in.
Abstract: The main purpose of this paper is to compress the speech signal using the wavelet transform. Psychoacoustics is the scientific study of sound perception. From the psychoacoustic point of view, we have selected wavelet analysis for digital speech compression. The Wavelet Transform also eliminates the irrelevancies and redundancies present in the speech signal. The two popular models of the Wavelet Transform for speech compression are the filter bank model and the lifting scheme. The filter bank model is also called the subband filtering model. Both models decompose the speech signal into approximate and detailed components, but the lifting scheme is fast compared to the filtering model. We have tested and implemented some of the lifting-scheme WT algorithms, like Haar, the Daubechies series, and the Cohen-Daubechies and Cohen-Daubechies-Feauveau biorthogonal wavelets.

Keywords: Wavelet Transform, Psychoacoustics, Speech compression, Haar wavelet, Daubechies series, Cohen-Daubechies-Feauveau wavelets.

I. INTRODUCTION
The generation of human speech is very complicated. It involves the lungs, vocal tract and vocal folds. The message is formulated in the brain and is converted into an acoustic signal by the vocal tract system. The vocal tract system is similar to electronic parts like a power supply, oscillator and resonator [2]. The structure of speech generation involves the lungs, ribcage and abdominal muscles; a controlled airstream is produced between the vocal folds by this combination. The chest cavity expands and contracts to force air from the lungs. The opening and closing of the glottis changes the resistance to the air. Finally, a sound is produced by the vibration of the vocal cords [2]. The human speech generation system is shown in Figure 1.

Figure 1. Generation of Human speech

The generation of human speech is analogous to the electronic circuit shown in Figure 2.

Figure 2. Electronic structure of speech generation

For efficient transmission and storage, we compress the signal. If the signal is uncompressed, it requires large memory. Consider the telephone-level speech signal in the range of 300 Hz to 3400 Hz, coded at 8 bits per sample with a sampling rate of 8000 samples per second: this is defined as an uncompressed speech signal [5]. This non-compressed signal requires a storage capacity of 28.8 MBytes for one hour of speech and a transmission rate of 64 kbps. The main idea of a speech compression algorithm is to represent the non-compressed speech with a smaller number of bits at optimum speech quality. Wavelets are used for the speech compression.

II. SPEECH COMPRESSION MODELS

With modern telecommunication, speech compression plays an important role. Speech compression is also called speech coding. It is the process of representing a digital speech signal with a small number of bits while keeping normalized speech quality and low computational complexity [1]. These techniques also remove the unimportant components from the original speech.
The main requirements of speech compression algorithms are:
1) High compression ratio: the compression ratio determines the memory size and transmission rate required after compression.
2) Low computational complexity: for real-time encoding and decoding, to minimize the power and coding delay, the computational complexity should be minimum.

The sub-band coder is a waveform coding method in the frequency domain [8]. At the transmitter, the speech is passed through an analysis filter bank and, by down-sampling, the bandwidth of each sub-band of the input speech signal is reduced. At the receiver, by up-sampling and a synthesis filter bank, the original speech is retrieved [9]. The structure of the sub-band coder is shown in figure 3.

A wavelet is a mathematical function used to represent data. Another definition is that a wavelet is a 'small wave' having its energy concentrated in time. Any function can be represented in terms of wavelets, similar to a Fourier series [1]. Wavelets also provide time and frequency analysis simultaneously. Let f(t) be a real-valued signal; the Continuous Wavelet Transform of f(t) is given by
W(a, b) = (1/√|a|) ∫ f(t) ψ*((t − b)/a) dt        (3)

where ψ(t) is the mother wavelet, a is the scale parameter and b is the translation parameter.

c) Predict stage:
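The equations originally numbered (6), (8) and (9) for the lifting stages did not survive extraction, so the following Python sketch shows only the generic split/predict/update pattern of a Haar-style lifting step; it is an illustration, not the paper's exact formulation.

def haar_lifting_forward(x):
    even, odd = x[0::2], x[1::2]                        # split stage
    detail = [o - e for o, e in zip(odd, even)]         # predict stage
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update stage
    return approx, detail

approx, detail = haar_lifting_forward([4.0, 6.0, 5.0, 5.0])
print(approx, detail)   # [5.0, 5.0] [2.0, 0.0]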
We have implemented the wavelet compression techniques using Visual C++ software. Here, we compare the compression ratio and the quality of speech for various wavelet transform algorithms: Haar, CDF, the cubic lifting method, Daub-4, Daub-8 and Daub-16. We have also measured the statistical parameters of speech quality: MSE, RMSE, SNR and mean deviation.
For the Haar transform, the compression ratio and speech quality are shown in the following figure. It is a simple algorithm, and a compression ratio of 3.31 is achieved.
For the Daub-4 wavelet algorithm, the compression ratio and speech quality are given below; a compression ratio of 3.9 is achieved.

Figure 14. Compression ratio vs Speech quality for Daub-8

For Daub-16, a compression ratio of 4.61 is achieved. The compression ratio and speech quality of this wavelet are given below.

Figure 15. Compression ratio vs Speech quality for Daub-16

Figure 16. Compression ratio of various wavelet algorithms

VI. CONCLUSION

REFERENCES
[1] A. Verbuch, B. Gutman, "Speech compression using wavelet packet transform & vector quantisation", SPIE Proceedings, Vol. 2569, 2014.
[2] Pramila Shrinivasan & Leah H. Jamieson, "High quality audio compression using adaptive wavelet packet decomposition and psycho-acoustic modelling", IEEE Transactions on Signal Processing, Vol. 46, No. 4, April 2010.
[3] P.S. Sathidevi & Y. Venkataramani, "Applying wavelet analysis for coding of speech & audio signals for multimedia applications" (textbook).
[4] Amara Graps, "An introduction to wavelets", IEEE Computational Science and Engineering, Vol. 2, No. 2, 2011.
[5] Howard L. Resinkoff & Raymond O. Wells, "Wavelet analysis: the scalable structure of information".
[6] Andreas S. Spanias, "Speech coding: A tutorial review", Proceedings of the IEEE, Vol. 82, No. 10, October 2014.
[7] N. Benevuto et al., "The 32 Kb/s coding standard", AT&T Technical Journal, Vol. 65(5), pp. 12-22, Sept.-Oct. 2006.
[8] R. Crochiere, S. Webber, and J. Flanagan, "Digital Coding of Speech in Sub-bands", The Bell System Technical Journal, Vol. 55(8), p. 1069, Oct. 2006.
[9] Langlias, Masson, Montagna, "Real-time implementation of a 16 kbps subband coder with vector quantization", Proceedings of EUSIPCO-86, Signal Processing-III, Part 1, pp. 419-422.
[10] http://www.wavelet.org
[11] P.P. Vaidyanathan, "Multi-rate digital filters, filter banks, polyphase networks and applications: A tutorial review", Proceedings of the IEEE, Vol. 41, No. 2, 1993.
[12] Raghuveer Rao & Bopardikar, "Wavelet transform: Introduction to theory & applications", Addison-Wesley, Pearson Education.
shown in Figure 1. Any number of processes can be listed in the drop-down list of the application, and the n cores are given as buttons. Whenever the user selects a particular process from the drop-down list and assigns it to a particular core, the process is assigned to that core. The system() call and the taskset utility are used to achieve processor (CPU) affinity.

Simplified load balancing algorithm:

// check the process id of the process name from the PID list through /proc
process_pid = get_pid("process name");
read file /proc/stat;
read each field using the file pointer;
while cpu count is less than 5
    read CPU usage for each core
    store the values of CPU core usage data
    increment count until it reaches the number of cores
CPUx usage = time elapsed between reads - CPUx idle time
compare the loads to find the least loaded core
assign the process to this core using: taskset -p <least loaded core> <process pid>
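A minimal runnable sketch of this algorithm, assuming a Linux host with the taskset utility on the PATH; the target PID and the sampling interval below are illustrative, not values from the paper's code.

import subprocess
import time

def per_core_counters():
    # Parse /proc/stat and return {core_index: (idle_jiffies, total_jiffies)}.
    stats = {}
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("cpu") and line[3].isdigit():
                fields = [int(x) for x in line.split()[1:]]
                core = int(line[3:line.index(" ")])
                stats[core] = (fields[3], sum(fields))   # 4th field is idle time
    return stats

def least_loaded_core(interval=3.0):
    # Sample twice; the core with the smallest busy fraction is least loaded.
    first = per_core_counters()
    time.sleep(interval)
    second = per_core_counters()
    def busy(core):
        idle = second[core][0] - first[core][0]
        total = second[core][1] - first[core][1]
        return (total - idle) / total if total else 0.0
    return min(second, key=busy)

def assign(pid):
    core = least_loaded_core()
    # taskset -cp <core> <pid> pins the process to that core.
    subprocess.run(["taskset", "-cp", str(core), str(pid)], check=True)

if __name__ == "__main__":
    assign(1234)   # hypothetical PID of the target process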
Figure 3 Static core assignment

Figure 3 shows the output of static core assignment of processes; it shows a browser process running on core 1.

B. Dynamic Core Assignment
The main program was compiled using gcc along with the other source files. The camera application cheese was taken as the program to be assigned dynamically to the least loaded core. The cheese program was launched, and then the load balancer program task_switch was launched in another terminal. The task_switch program displayed the loads of the various cores and identified the least loaded one. The camera application was assigned to the least loaded core by changing its affinity mask, which could be 2, 4, or 8 for core 1, 2, or 3 respectively.

The monitoring period was taken as three seconds: every three seconds, the terminal window was updated with the current load values of all cores, and the affinity mask was updated accordingly. The same was verified using the top command running in another terminal. Screenshots are given below in Figure 4. All the cores were used efficiently, with fine load balancing.

Figure 4 Dynamic core assignment

V. CONCLUSIONS

This paper touched on a couple of significant possibilities for achieving optimal performance of a system beyond OS kernel-level scheduling. It also demonstrated that application-level programs may be used to further enhance scheduling efficiency using various monitoring techniques and assignment methods. Further, considering static and dynamic core assignments, an algorithm can be developed that mixes the two in a single program, to get the best of both worlds and give the user control over running the program.

REFERENCES
[1] G. Amdahl, "Validity of the single processor approach to achieving large-scale computing capabilities," 1967. http://www-inst.eecs.berkeley.edu/~n252/paper/Amdahl.pdf
[2] J. H. Anderson, "Real-time scheduling on multicore platforms," in Real-Time and Embedded Technology and Applications Symposium, 2006.
[3] Robert A. Alfieri, "Apparatus and method for improved CPU affinity in a multiprocessor system," http://www.google.com/patents/US5745778
[4] S. Brosky, "Shielded CPUs: real-time performance in standard Linux," Linux Journal, 2004.
[5] R. Knauerhase, "Using OS observations to improve performance in multicore systems," IEEE Micro, May 2008.
[6] Simon Derr and Paul Menage, "CPUSETS," http://www.kernel.org/doc/Documentation/cgroups/cpusets.txt
[7] Geoffrey Blake, Ronald G. Dreslinski, and Trevor Mudge, "A survey of multicore processors," IEEE Signal Processing Magazine, November 2009, 1053-5888/09.
[8] Keng-Mao Cho, Chun-Wei Tsai, Yi-Shiuan Chiu, and Chu-Sing Yang, "A high performance load balance strategy for real-time multicore systems," The Scientific World Journal, Volume 2014, Article ID 101529, 14 pages.
[9] Suresh Siddha, Venkatesh Pallipadi, and Asit Mallick, "Chip multi processing aware Linux kernel scheduler," 2006 Linux Symposium, Volume Two, pp. 330-340.
[10] A. Vajda, "Multi-core and many-core processor architectures," in Programming Many-Core Chips, Springer Science+Business Media, 2011, pp. 9-36, DOI 10.1007/978-1-4419-9739-5_2.
[11] "A study on setting processor or CPU affinity in multi-core architecture for parallel computing," International Journal of Science and Research, ISSN (Online) 2319-7064, Volume 4, Issue 5, May 2015, pp. 1987-1990.
[12] "Parallel task scheduling on multicore platforms," Department of Computer Science, The University of North Carolina at Chapel Hill.
[13] Suresh Siddha, "Multi-core and Linux* kernel," Intel Open Source Technology Center.
[14] "Real-time scheduling on multicore platforms," 04-07 April 2006, pp. 179-190, Print ISBN 0-7695-2516-4, doi: 10.1109/RTAS.2006.35.
[15] J.-P. Lozi, B. Lepers, J. Funston, F. Gaud, V. Quema, and A. Fedorova, "The Linux scheduler: a decade of wasted cores."
[16] Laercio L. Pilla, Christiane Pousa Ribeiro, Daniel Cordeiro, Chao Mei, and Abhinav Bhatele, "A hierarchical approach for load balancing on parallel multi-core systems," pp. 119-129.
[17] Josep Torrellas, Andrew Tucker, and Anoop Gupta, "Benefits of cache-affinity scheduling in shared-memory multiprocessors: a summary," Computer Systems Laboratory, Stanford University, CA 94305, pp. 272-274.
[18] White paper, "Processor affinity multiple CPU scheduling," November 3, 2003.
Abstract: Ubiquitous software has become an indispensable technology for science, engineering, and business. Software is everywhere: as a standalone system, as part of a new technology, or as a service in the cloud. Hence, it is of paramount importance. As the size and complexity of software systems increase, software engineering problems such as software effort estimation, software testing, software defect prediction, software project scheduling, software reliability maximization, software module clustering, and software maintenance become more difficult to handle. In order to reduce the high cost of performing software engineering activities and to increase software quality and reliability, computational intelligence techniques are being used for problem solving and decision support. Computational intelligence has been used in different fields for a long time, and there has been a recent surge of interest in applying computational intelligence techniques to software engineering. Search-based software engineering and machine learning for software engineering are the areas of computational intelligence research showing promising results in this context. Search-based software engineering reformulates software engineering problems as optimisation problems, which are then solved using optimisation algorithms. Software engineering produces a lot of data related to software, such as effort estimates, source code, test cases, data on bugs and fixes, version data, and metrics data. As part of analytics on software data, machine learning techniques are used to solve some software engineering problems and to support effective decision making. The objective of this survey paper is to identify software engineering problems and applications of computational intelligence techniques that solve them. In the survey, computational intelligence applications for solving different software engineering problems are identified and presented, and some research questions indicating research directions, along with possible research topics, are given. New research issues and challenges posed by the hard problems in software engineering could stimulate further development of new theories and algorithms in computational intelligence.

Index Terms: Computational Intelligence, Software Engineering Problems, Search Based Software Engineering, Optimisation Techniques, Machine Learning, Software Data Analytics.

I. INTRODUCTION

As the size and complexity of software systems increase, software engineering problems become more difficult to handle. In order to increase software reliability and to reduce the high cost of performing software engineering tasks, computational intelligence techniques and algorithms are being used as problem-solving, decision-support, and research-oriented approaches. As per the definition in [1], computational intelligence is the study of adaptive mechanisms to enable or facilitate intelligent behavior in complex and changing environments. As such, computational intelligence combines artificial neural networks, evolutionary computing, swarm intelligence, and fuzzy systems.

Software sizes are becoming bigger. The complexity of software increases non-linearly with size, and it increases further because software changes very rapidly to keep pace with changing user business dynamics and needs. Increased software complexity poses many problems, and computational intelligence is an appropriate vehicle to address them. In this paper, two areas of computational intelligence which are showing promising results in software engineering are considered for review: 1) Search-Based Software Engineering (SBSE), and 2) Machine Learning for Software Engineering.

Search-Based Software Engineering (SBSE) [5] reformulates software engineering problems as optimisation problems; these reformulated problems are then solved using optimisation algorithms. Some of the software engineering problems which can be reformulated as optimization problems are: 1) software project scheduling, with the aim of minimizing the cost and completion time of the different tasks; 2) test case design, with the aim of maximizing code coverage and bug detection; 3) test case design, with the aim of minimizing testing effort and maximizing bug detection; 4) at the start of an iteration in the WinWin spiral model, identifying the set of requirements that maximizes user satisfaction and the probability of completion within the given time and cost. Similarly, many more problems with different optimization criteria can be reformulated as optimization problems. Some of the search-based optimisation techniques being used are: 1) genetic programming, 2) genetic algorithms, 3) ant colonies, 4) particle swarm optimization, 5) hill climbing, and 6) simulated annealing.

As per the definition in [2], machine learning deals with the issue of how to build programs that improve their performance at some task through experience. Software systems process data, but software is data too [3]. Software engineering activities and different stakeholders produce a lot of data related to software, and machine learning makes use of this software data to create models that solve software engineering problems. Some of the software engineering problems which can be addressed by machine learning models are: 1) predicting the effort for the next project with a model created from the data of previously completed projects; 2) predicting the duration of the next project with a model created from the data of previously completed projects; 3) predicting software defects with models created from previous versions of the software.

Major types of machine learning include decision trees, concept learning, artificial neural networks, reinforcement learning, Bayesian belief networks, genetic programming and genetic algorithms, instance-based learning, and analytical learning.

Some problems can be viewed either as optimization or as machine learning problems. For example, test case design can be viewed as an optimization or a machine learning problem. That is where research comes into the picture, to figure out which approach and technique is better.

The aim of this paper is to carry out a basic survey (not an exhaustive one) to find research openings in the context of applying computational intelligence techniques to software engineering.

II. SOFTWARE ENGINEERING PROBLEMS AND SOFTWARE DATA

An iteration in a software development model consists of the following activities (phases): 1. software requirements collection, specification and planning phase; 2. design phase; 3. implementation and testing phase. The maintenance activity takes place in parallel with development: while the next version is under development, previously deployed versions are under maintenance. Some problems which are encountered frequently are identified and listed in this paper. Problems encountered during software engineering have been identified by different authors in [3], [4], [5], [6], [7], [8], [9], [10], [11]. They are categorized and listed below.
A. Software Engineering Problems
1. Software requirements collection, specification and planning phase:
   Eliciting and recording all functions and constraints
   Ambiguity, completeness, conflicts in requirements; prototyping; requirements tracing
   Cost and time estimation for the project
   Tasks, dependencies, duration, resources for the project
2. Design phase:
   High-level (architectural): architectural design problems, modularity, coupling
   Low-level (detailed): algorithm selection, complexity of modules, cohesion
   Design alternatives, inconsistencies in software design
3. Implementation and testing phase:
   Software reuse
   Source code searching
   Integration method, defect prediction
   Test case generation, test case prioritization
   Prediction of test effectiveness
   Bug management/triage, debugging
4. Maintenance phase:
   Software understanding and comprehension
   Impact analysis, ripple effects during changes
   Regression testing
   Automatic software repair, quality enhancement
   Reengineering legacy software, software module clustering
5. Problems related to umbrella activities:
   Configuration management
   Prediction of software quality and reliability
   Classification of software components

B. Software Data
A lot of data is generated during software development and maintenance; over time, data from different projects populate databases in the range of terabytes and more. Data related to software are listed below.
   Source code, data on versions, code analysis data
   Test cases designed, test execution data
   Data on bugs and fixes
   Metrics data on size, design, code, testing, project, process, and maintenance
   Cost and schedule data
   Usage and run-time data of deployed software
   User feedback

III. COMPUTATIONAL INTELLIGENCE TECHNIQUES TO SOLVE SOFTWARE ENGINEERING PROBLEMS

In this section, computational intelligence techniques which have been applied to software engineering problems are surveyed and presented. The survey is not exhaustive; it gives a basis for further refinement in subsequent work.

A. Solving Software Engineering Problems with Optimization Techniques (SBSE)
Search-based software engineering reformulates software engineering problems as optimization problems. The problems and optimization techniques are discussed in detail in [4], [5], [8], [9], [12], and the optimization techniques have been used by different researchers in solving different software engineering problems. The simulated annealing approach is used in solving many problems, such as improving software quality prediction [13], the next release problem [14], and program flaw finding [15]. Genetic programming applications can be found in software cost predictive modeling [16], in reliability modeling [17], and in a model for software quality enhancement [18]. Software release planning [19] and software test data generation [20] have been solved by genetic algorithms. The hill climbing technique is used to improve program structure by module clustering [21], and hill climbing and genetic algorithms are applied to regression test case prioritization [22]. Software modularization using hill climbing, simulated annealing, and genetic algorithms is presented in [23], [24]. Search-based software engineering techniques use metrics as fitness functions in one form or the other [25]; proposing new metrics for use in optimization techniques is also an important research area. There is little work on combinations of search algorithms [26], and research exploring the potential of such combinations has much scope. Bug detection using particle swarm optimization is given in [27]. Ant colony optimization [28] and particle swarm optimization [29] techniques have the potential to be used in solving software engineering problems and have not been used much in the literature. In addition to the papers listed in this section, researchers interested in SBSE can look into the repository of SBSE papers maintained by Y. Zhang [49].
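To make the reformulation concrete, the sketch below hill-climbs a module-to-cluster assignment under a toy fitness function; the module names, dependency edges, and fitness are invented stand-ins for the metrics-based fitness functions discussed above [21], [25].

import random

def modularization_quality(deps, assign):
    # Toy fitness: reward intra-cluster dependencies, penalize inter-cluster ones.
    intra = sum(1 for a, b in deps if assign[a] == assign[b])
    return 2 * intra - len(deps)

def hill_climb(modules, deps, n_clusters, iterations=10_000):
    # Repeatedly move one module to a random cluster; keep non-worsening moves.
    assign = {m: random.randrange(n_clusters) for m in modules}
    best = modularization_quality(deps, assign)
    for _ in range(iterations):
        m = random.choice(modules)
        old = assign[m]
        assign[m] = random.randrange(n_clusters)
        score = modularization_quality(deps, assign)
        if score >= best:
            best = score            # keep the move
        else:
            assign[m] = old         # revert a worsening move
    return assign, best

# Hypothetical dependency graph over module names.
modules = ["ui", "auth", "db", "report", "cache"]
deps = [("ui", "auth"), ("auth", "db"), ("db", "cache"), ("ui", "report")]
print(hill_climb(modules, deps, n_clusters=2))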
B. Solving Software Engineering Problems with Machine Learning Techniques
Applications of machine learning in the context of software engineering are discussed in [6], [7], [10], [11], and solutions to different software engineering problems using machine learning techniques have been attempted by many researchers. Instance-based learning is used in component retrieval and reuse [6] and in software project effort estimation [30]. Genetic programming is applied to understanding and validating user software requirements [31]. One of the supervised learning methods, concept learning, is used to derive functional requirements from legacy software [6]. To predict the effort required for software development, artificial neural networks and decision trees [32], Bayesian analysis [33], artificial neural networks and naive Bayes classifiers [34], and artificial neural networks [35] have been applied and the results presented. For predicting software defects, Bayesian belief networks have found applications [36], [37], [6]. Detecting bad aspects of code and design is important because it enables refactoring to improve the quality of code and design; a Bayesian approach is used in detecting design and code smells [38]. An overview of machine learning techniques used for software engineering is given in [39]. Even though there is active research on machine learning applications in software engineering, more research is possible, as machine learning has the potential [40].
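As a concrete illustration of the effort prediction idea in [32], the sketch below fits a decision tree to historical project data; it assumes scikit-learn is available, and the feature names and training values are invented for illustration.

from sklearn.tree import DecisionTreeRegressor

# Hypothetical historical projects: [size_kloc, team_size, complexity_1_to_5]
X = [[10, 4, 2], [50, 9, 4], [25, 6, 3], [80, 12, 5], [5, 2, 1]]
y = [12, 95, 40, 170, 6]   # person-months actually spent (invented values)

model = DecisionTreeRegressor(max_depth=3).fit(X, y)

# Predict the effort of the next project from its planned attributes.
print(model.predict([[30, 7, 3]]))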
IV. DISCUSSION AND RESEARCH DIRECTIONS ON COMPUTATIONAL INTELLIGENCE APPLICATIONS IN SOFTWARE ENGINEERING

Standalone software, distributed software, and software as a service in the cloud need to be designed, coded, and tested before deployment. As part of evolution and maintenance (corrective, perfective, adaptive, and preventive), software is changed, and during a change the software is not available for use. Change is applicable to all the above-mentioned types of software, and the computational intelligence techniques and research directions presented in this paper are applicable to standalone, distributed, and cloud software as a service.

The application of computational intelligence techniques in software engineering started relatively recently. The diversity, size, and complexity of software systems are increasing. In addition, due to frequent changes made to keep pace with changing user business requirements and personal needs, software systems deteriorate and become complex. Due to this scenario, software engineering problems that were once easily solvable have become more challenging, and researchers have started exploring the application of computational intelligence techniques to address them. The survey indicates that the results are encouraging and that there is much potential for further research in this area. Computational intelligence techniques are grouped here under two headings, optimization techniques and machine learning techniques, both listed in the introduction. Some techniques serve both as optimization and as machine learning techniques. Similarly, some problems can be viewed either as optimization or as machine learning problems; project estimation, for example, can be viewed either way. That is where research comes into the picture, to figure out which approach and technique is better. Some frequently encountered problems are identified and listed in this paper.

A. Research Directions in Software Data Analytics
There are many research questions that need to be addressed in software data analytics. These questions indicate the scope for research work along those lines. Some of them are listed below.
   What data, and what kind of analysis of that data, will address a particular software engineering problem?
   How can heterogeneous data be integrated? Much of the software data is unstructured, while some is stored in structured format [3].
   Which algorithm is better for data analysis? Do we need to design a new algorithm for a particular situation?
   How can existing algorithms be customized to suit the data, problem, and situation?

B. Some of the More Specific Possible Research Topics in the Category of Software Data Analytics
1) Project cost and time estimation: design and use of hybrid models to predict estimates.
2) Building machine learning models using metrics data to predict the defects and design quality of software that is going to be deployed. The prediction helps to decide whether further testing is required.
3) Building machine learning models using source code and source code analysis data to design and prioritize test cases.
4) Building machine learning models using source code and source code analysis data for automatic bug repair.
5) Building machine learning models using coupling data to predict regression testing effort. The prediction helps in deciding whether a change can be implemented or has to be deferred.
6) Building machine learning models for software testing [40].
7) Building machine learning models using bug data for defect prediction.
8) Building machine learning models using version data to predict the software maturity index.

Research on software data analytics requires software data; but from where can the data be obtained? Some options are available:
1) Use software data from reliable public-domain sites contributed by research groups. Example: http://promisedata.org/repository [41] and other repositories [42].
2) A lot of software data is available from the sites of standard open source tools. Example: software data on different plug-ins to Eclipse.
3) Data published in research papers and textbooks. Example: the COCOMO database containing 63 projects published in Boehm's textbook [43].
4) Create small programs with problems and parameters related to your research interest. Research findings based on small programs need to be justified as applicable to larger programs (scalability).
5) Use synthetic data. Synthetic data are "any production data applicable to a given situation that are not obtained by direct measurement" according to the McGraw-Hill Dictionary of Scientific and Technical Terms. Synthetic data are generated to meet specific needs or certain conditions. This can be useful when designing any type of technique, algorithm, or method, because the synthetic data are used as a simulation or as a theoretical value, situation, etc.; a sketch of this option is given below.
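As a sketch of option 5, the following generates synthetic (size, effort) pairs from an assumed COCOMO-like power law; the coefficients, size range, and noise level are illustrative assumptions, not calibrated values.

import random

def synthetic_effort_dataset(n, a=2.4, b=1.05, noise=0.15):
    # Generate (kloc, effort) pairs from effort = a * kloc**b with
    # multiplicative noise, mimicking a COCOMO-style relationship.
    rows = []
    for _ in range(n):
        kloc = random.uniform(2, 100)
        effort = a * kloc ** b * random.uniform(1 - noise, 1 + noise)
        rows.append((round(kloc, 1), round(effort, 1)))
    return rows

for row in synthetic_effort_dataset(5):
    print(row)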
C. Research Directions in the Application of Optimization Techniques
There are many research questions that need to be addressed in solving software problems using optimization techniques (search-based software engineering). These questions indicate the scope for research work along those lines. Some of them are listed below.
   What are the benefits of solving a software engineering problem using optimization techniques, and can the benefits be quantified? For example, in test case design that maximizes code coverage, the extra code coverage achieved over traditional techniques should be quantified.
   For a particular optimization criterion or set of criteria, which algorithms perform better, and why?
   Can we customize the algorithms for better results?
   Can we propose new metrics which can be input to existing optimization techniques for solving a problem? How are these metrics different?
   Can we create a hybrid algorithm from existing algorithms? And why?
   Can we identify a new problem to which existing optimization techniques can be applied?
   In solving a problem, compare the use of optimization techniques against machine learning models (software data analytics). Why are the results not the same? Justify.

D. Some of the More Specific Possible Research Topics in the Category of Application of Optimization Techniques in Software Engineering
1) Modular design (clustering) with the aim of reducing change propagation due to ripple effects during maintenance and regression testing.
2) Modular design with the aim of maximizing cohesion and minimizing coupling. Applications of search-based optimization techniques in design can be found in [44].
3) Refactoring software design during an iteration in a sprint of the Scrum agile process model, with the aim of reducing coupling and increasing cohesion. Refactoring using SBSE is presented in [45].
4) Test case design for the Scrum agile process with the aim of reducing testing time.
5) Test case design with the aim of maximizing code coverage and bug detection.
6) Test case design with the aim of maximizing bug detection and minimizing testing effort. Some work on test case design is given in [46].
7) Prioritising test cases with the aim of reducing testing effort at the same bug coverage. Recent work on prioritizing test cases can be found in [47].
8) Test case design to detect critical defects [48].
9) Debugging with the aim of minimizing the time to locate the cause of a bug. Code, cohesion, and coupling metrics can be used for this purpose.
10) Reengineering the software with the aim of minimizing coupling and maximizing cohesion.
11) Version management with the aim of minimizing redundancy and the retrieval time of a required component.
12) During reuse, identifying and retrieving the most appropriate component for a given situation.
13) Model-based testing with the aim of maximizing the likelihood of early defect detection and estimating software testing effort.

Carrying out research using optimization techniques in software engineering requires software (a program) as input. This program can be developed for research purposes, or any free open source tool can be downloaded.

The initial subjects of interest of computational intelligence were fuzzy systems, neural networks, evolutionary computation, and swarm intelligence. But different authors of research papers treat computational intelligence as an umbrella under which more and more algorithms, techniques, and methods are gradually added, as advances take place in basic and applied research. That is how the subjects of interest of computational intelligence have grown.
V. CONCLUSIONS

The software engineering problems which can be addressed by computational intelligence are identified from different publications. Some of them are: software effort estimation, software testing, software defect prediction, software project scheduling, software reliability maximization, software module clustering, and software maintenance. Different computational intelligence techniques which can be used to solve some of these problems are surveyed and presented. As the size and complexity of software systems increase, software engineering problems become more difficult to handle. In this context, based on the survey, it is found that computational intelligence techniques, which include optimization techniques and software data analytics, play a significant role in solving software engineering problems and in developing high-quality software products with low maintenance cost. Based on the survey, some research questions and research topics related to software data analytics and the application of optimization techniques to software engineering problems are identified and presented. The research topics indicate that computational intelligence in software engineering, and its data, has tremendous scope for aspiring researchers.

Detailed research directions for software in the cloud will be presented in future work. Some of the research directions in this context are: virtualization, multi-tenant modeling, testing as a service (TaaS), design of cloud services user interfaces, design of cloud computing metrics, performance testing of SaaS, security testing of SaaS, and architectures for dynamic scalability.

REFERENCES
[1] Andries P. Engelbrecht, Computational Intelligence: An Introduction, Wiley, 2002.
[2] T. Mitchell, Machine Learning, McGraw-Hill, 1997.
[3] Andrian Marcus and Timothy Menzies, "Software is data too," FoSER, ACM, pp. 229-231, 2010.
[4] Witold Pedrycz, "Computational intelligence as an emerging paradigm of software engineering," SEKE '02, ACM, pp. 7-14, 2002.
[5] Mark Harman, "The current state and future of search based software engineering," Proceedings of Future of Software Engineering (FOSE '07), IEEE, pp. 342-357, 2007.
[6] Du Zhang, "Applying machine learning algorithms in software development," Proceedings of the Monterey Workshop on Modeling Software System Structures in a Fastly Moving Scenario, pp. 275-290, Italy, 2000.
[7] Mark Harman, "The role of artificial intelligence in software engineering," RAISE '12: Proceedings of the First International Workshop on Realizing AI Synergies in Software Engineering, IEEE, pp. 1-6, 2012.
[8] Mark Harman, S. A. Mansouri, and Y. Zhang, "Search-based software engineering: trends, techniques and applications," ACM Computing Surveys, 45(1), Article No. 11, 2012.
[9] W. Pedrycz and J. F. Peters (eds.), Computational Intelligence in Software Engineering, World Scientific, 1998.
[10] Du Zhang and Jeffrey J. P. Tsai, "Machine learning and software engineering," Proceedings of the 14th IEEE International Conference on Tools with Artificial Intelligence, pp. 22-29, 2002.
[11] Du Zhang and J. J. P. Tsai, Machine Learning Applications in Software Engineering, World Scientific, 2005.
[12] Ilhem Boussaid, Julien Lepagnot, and Patrick Siarry, "A survey on optimization metaheuristics," Information Sciences, pp. 82-117, March 2013.
[13] S. Bouktif, H. Sahraoui, and G. Antoniol, "Simulated annealing for improving software quality prediction," GECCO 2006: Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, ACM, Volume 2, pp. 1893-1900, 2006.
[14] M. Harman, K. Steinhofel, and A. Skaliotis, "Search based approaches to component selection and prioritization for the next release problem," 22nd International Conference on Software Maintenance (ICSM '06), 2006.
[15] N. Tracey, J. Clark, and K. Mander, "Automated program flaw finding using simulated annealing," International Symposium on Software Testing and Analysis (ISSTA '98), pp. 73-81, 1998.
[16] J. J. Dolado, "On the problem of the software cost function," Information and Software Technology, 43(1), pp. 61-72, 2001.
[17] Eduardo Oliveira Costa, Aurora Trinidad Ramirez Pozo, and Silvia Regina Vergilio, "A genetic programming approach for software reliability modeling," IEEE Transactions on Reliability, 59(1), pp. 222-230, 2010.
[18] Taghi M. Khoshgoftaar, Yi Liu, and Naeem Seliya, "A multiobjective module-order model for software quality enhancement," IEEE Transactions on Evolutionary Computation, 8(6), pp. 593-608, 2004.
[19] D. Greer and G. Ruhe, "Software release planning: an evolutionary and iterative approach," Information and Software Technology, 46(4), pp. 243-253, 2004.
[20] Christoph C. Michael, Gary McGraw, and Michael A. Schatz, "Generating software test data by evolution," IEEE Transactions on Software Engineering, 27(12), pp. 1085-1110, 2001.
[21] Kata Praditwong, Mark Harman, and Xin Yao, "Software module clustering as a multi-objective search problem," IEEE Transactions on Software Engineering, 37(2), pp. 264-282, 2011.
[22] Z. Li, M. Harman, and R. Hierons, "Search algorithms for regression test case prioritization," IEEE Transactions on Software Engineering, 33(4), pp. 225-237, 2007.
[23] S. Mancoridis, B. S. Mitchell, C. Rorres, Y.-F. Chen, and E. R. Gansner, "Using automatic clustering to produce high-level system organizations of source code," International Workshop on Program Comprehension (IWPC '98), IEEE, pp. 45-53, 1998.
[24] S. Mancoridis, B. S. Mitchell, Y.-F. Chen, and E. R. Gansner, "Bunch: a clustering tool for the recovery and maintenance of software system structures," IEEE International Conference on Software Maintenance, pp. 50-59, 1999.
[25] M. Harman and J. Clark, "Metrics are fitness functions too," 10th International Software Metrics Symposium (METRICS 2004), IEEE, pp. 58-69, 2004.
[26] K. Mahdavi, M. Harman, and R. M. Hierons, "A multiple hill climbing approach to software module clustering," IEEE International Conference on Software Maintenance, pp. 315-324, 2003.
[27] Arun Reungsinkonkarn and Paskorn Apirukvorapinit, "Bug detection using particle swarm optimization with search space reduction," ISMS '15: Proceedings of the 2015 6th International Conference on Intelligent Systems, Modelling and Simulation, IEEE, pp. 53-57, 2015.
[28] M. Dorigo and C. Blum, "Ant colony optimization theory: a survey," Theoretical Computer Science, 344(2-3), pp. 243-278, 2005.
[29] X. Zhang, H. Meng, and L. Jiao, "Intelligent particle swarm optimization in multiobjective optimization," IEEE Congress on Evolutionary Computation, Volume 1, pp. 714-719, 2005.
[30] M. Shepperd and C. Schofield, "Estimating software project effort using analogies," IEEE Transactions on Software Engineering, 23(12), pp. 736-743, 1997.
[31] M. Kramer and D. Zhang, "Gaps: a genetic programming system," Proc. of the IEEE International Conference on Computer Software and Applications (COMPSAC 2000).
[32] K. Srinivasan and D. Fisher, "Machine learning approaches to estimating software development effort," IEEE Transactions on Software Engineering, 21(2), pp. 126-137, 1995.
[33] S. Chulani, B. Boehm, and B. Steece, "Bayesian analysis of empirical software engineering cost models," IEEE Transactions on Software Engineering, 25(4), pp. 573-583, 1999.
[34] Jyoti Shivhare and Santanu Ku. Rath, "Software effort estimation using machine learning techniques," Proceedings of the 7th India Software Engineering Conference (ISEC '14), ACM, Article No. 19, 2014.
[35] C. Mair, G. Kadoda, M. Lefley, K. Phalp, C. Schofield, M. Shepperd, and S. Webster, "An investigation of machine learning based prediction systems," The Journal of Systems and Software, 53(1), pp. 23-29, 2000.
[36] N. Fenton and M. Neil, "A critique of software defect prediction models," IEEE Transactions on Software Engineering, 25(5), pp. 675-689, 1999.
[37] V. U. B. Challagulla, F. B. Bastani, I.-L. Yen, and R. A. Paul, "Empirical assessment of machine learning based software defect prediction techniques," International Journal on Artificial Intelligence Tools, 17(2), pp. 389-400, 2008.
[38] F. Khomh, S. Vaucher, Y.-G. Gueheneuc, and H. A. Sahraoui, "A Bayesian approach for the detection of code and design smells," Proc. of the Int. Conf. on Quality Software, pp. 305-314, 2009.
[39] T. Menzies, "Practical machine learning for software engineering and knowledge engineering," in Handbook of Software Engineering and Knowledge Engineering, World Scientific, December 2001. Available from http://menzies.us/pdf/00ml.pdf
[40] Lionel C. Briand, "Novel applications of machine learning in software testing," Proceedings of the 8th International Conference on Quality Software, IEEE, pp. 3-10, 2008.
[41] G. Boetticher, T. Menzies, and T. Ostrand, "PROMISE repository of empirical software engineering data," 2007. Available at http://promisedata.org/repository
[42] D. Rodriguez, I. Herraiz, and R. Harrison, "On software engineering repositories and their open problems," International Workshop on Realizing AI Synergies in Software Engineering (RAISE '12), 2012.
[43] B. W. Boehm, Software Engineering Economics, Prentice-Hall, 1981.
[44] Outi Raiha, "A survey of search-based software design," Computer Science Review, Elsevier, 4(4), pp. 203-249, 2010.
[45] M. Harman and L. Tratt, "Pareto optimal search-based refactoring at the design level," Proc. 9th Ann. Conf. on Genetic and Evolutionary Computation (GECCO '07), ACM Press, pp. 1106-1113, 2007.
[46] Shaukat Ali, L. C. Briand, Hadi Hemmati, and Rajwinder K. Panesar-Walawege, "A systematic review of the application and empirical investigation of search-based test case generation," IEEE Transactions on Software Engineering, 36(6), pp. 742-762, 2010.
[47] Alessandro Marchetto, Md. Mahfuzul Islam, Waseem Asghar, Angelo Susi, and Giuseppe Scanniello, "A multi-objective technique to prioritise test cases," IEEE Transactions on Software Engineering, 42(10), pp. 918-940, 2016.
[48] A. Baresel, H. Sthamer, and J. Wegener, "Applying evolutionary testing to search for critical defects," Proc. Conf. on Genetic and Evolutionary Computation (GECCO '04), LNCS 3103, Springer, pp. 1427-1428, 2004.
[49] Y. Zhang, Repository of SBSE papers, http://crestweb.cs.ucl.ac.uk/resources/sbse_repository/
appropriate algorithm such that the cost of processing is minimized and the number of users serviced is maximized. Here the virtual machine manager is responsible for allocating and monitoring the virtual machines, and the data center controller processes the tasks (cloudlets) using the load balancer algorithm. This paper discusses the design of an optimal load balancing algorithm that minimizes cost.

Market-based and auction-based schedulers are suitable for regulating the supply and demand of cloud resources. Market-based resource allocation is effective in a cloud computing environment where resources are virtualized and delivered to the user as a service. Service provisioning in clouds is based on Service Level Agreements (SLAs). An SLA is a contract signed between the customer and the service provider stating the terms of the agreement, including non-functional requirements of the service specified as Quality of Service (QoS), obligations, and penalties in case of agreement violations. Thus there is a need for scheduling strategies that consider multiple SLA parameters and allocate resources efficiently. The focus of the model is to provide a fair deal to users and consumers, enhanced quality of service, and generation of optimal revenue.

C. System Level Scheduling
System level scheduling is the scheduling of virtual machines onto the corresponding physical machines. While scheduling virtual machines, the scheduler needs to consider the capacity of each physical machine, and a threshold value must be chosen for each physical machine depending on its capacity. The load of each physical machine must be updated on every assignment, and when the load reaches the threshold value, migration or transfer of virtual machines must take place. The transfer may be static or dynamic: static transfer considers the transfer of the volume only, whereas dynamic transfer has to consider the state of the virtual machine along with the volume. The downtime of a virtual machine is important to consider for improving the performance of the cloud. Dynamic transfer is complex to implement with little or no downtime, while static transfer has more downtime; this needs to be considered in server scheduling.

III. MATHEMATICAL MODEL

A. Hungarian Method
To solve the assignment problem, create a table based on the values of the data; we call it the cost matrix. With the determined optimal solution we can compute the maximal profit. For example:
   Worker1 => Machine2 - 9
   Worker2 => Machine4 - 11
   Worker3 => Machine3 - 13
   Worker4 => Machine1 - 7

Steps
1. Find the minimum of each row and subtract it from all entries of that row.
2. Find the minimum of each column and subtract it from all entries of that column.
3. Cover all zeros with a minimum number of lines.
4. Create additional zeros if required.
5. Assign one zero to each row and column; mapping all the zeros then gives the minimum cost.

Existing work used balanced assignment, i.e., the assignment was a square matrix in which the numbers of sources and destinations are equal. This type of assignment problem is a balanced assignment and can be represented as a square matrix; but in a cloud environment this may not work, as the cloud services a number of requests with minimal resources compared to the number of requests.
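The steps above can be checked against a library implementation. The sketch below assumes SciPy is available (scipy.optimize.linear_sum_assignment performs the same optimal assignment that the manual steps compute); since the paper does not give the underlying cost matrix, the values below are invented so that the optimal solution reproduces the worker/machine pairs listed above.

import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical 4x4 cost matrix: cost[i][j] = cost of worker i on machine j.
cost = np.array([
    [14,  9, 12, 15],
    [16, 13, 15, 11],
    [18, 12, 13, 17],
    [ 7, 10, 14, 12],
])

rows, cols = linear_sum_assignment(cost)   # optimal (Hungarian-style) assignment
for w, m in zip(rows, cols):
    print(f"Worker{w + 1} => Machine{m + 1} - {cost[w, m]}")
print("total cost:", cost[rows, cols].sum())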
The following table gives the general Hungarian assignment, where the number of tasks (sources) equals the number of virtual machines (destinations), so that it can be represented as a square matrix. The servicing time is represented as the cost matrix.

B. Hungarian Method for Load Balancing in Cloud
The load balancing problem can be represented as a two-dimensional cost matrix: resources are represented in the columns and tasks that need resources in the rows.

The Hungarian method can be used to assign tasks to virtual machines. Based on each virtual machine's current load and capacity, the tasks are assigned to the corresponding virtual machines dynamically. Whenever the cost matrix of an assignment problem is not square, that is, whenever the number of sources is not equal to the number of destinations, the assignment problem is called an unbalanced assignment problem. In such problems, dummy rows (or columns) are added to the matrix to complete it into a square matrix; the dummy rows or columns contain all cost elements as zeros.

A load balancing algorithm periodically checks the availability of and load on each virtual machine, and then assigns a task to a virtual machine if the machine's capacity is more than that required to process the request and its availability status is free.
repeat {
   Step 1: Initialize the capacity VC[i] of each virtual machine (VM): VC[i] = VM capacity in MIPS.
   Step 2: Initialize the length of each task: TL[j] = length of request in MI.
   Step 3: Initialize the availability status of each VM: VMa[i] = available.
   Step 4: Find the execution time of each request on each VM with VMa[i] = available: Ex_Time[i][j] = TL[j] / VC[i] seconds.
   Step 5: Construct the expected cost matrix ECM[V, R].
   Step 6: Find the minimum execution time in each row and subtract it from the entire row.
   Step 9: Find the minimum value not covered by any line, subtract it from the uncovered values, and add it to any value covered twice.
   Step 10: Compare this matrix with the original matrix, choose the best resource (VM) for the assignment of each task, and update the assignment status of the VM to busy.
} until there is no job left unassigned
Step 11: The above process is repeated indefinitely.

The cloud provides service-based computing for any number of users at all times, until no resource is available; ideally a resource should be available all the time, as the cloud is by definition a collection of an enormous pool of resources. Once a request has been serviced, the availability status of the virtual machine must be updated to available again, so the load balancer algorithm should update the availability status of and load on each resource (virtual machine) periodically.

IV. IMPLEMENTATION AND RESULTS

When we apply the above algorithm to the test data, it shows the following results.

Steps 1&2: According to the considered capacities and task lengths, the following values were taken:

   VM capacity (MIPS)   Task length (MI)
   120                  9000
   150                  8000
   180                  6000
   200                  5000

Steps 3&4: According to the estimated execution times calculated on the corresponding virtual machines, the following execution time matrix is formed. There are four tasks and three VMs, so it is an unbalanced assignment:

   Task/VM   Task1   Task2   Task3   Task4
   VM1       30      40      50      40
   VM2       40      30      25      30
   VM3       40      35      45      40
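A sketch of the dummy-row padding described in Section B, applied to the execution time matrix above (SciPy assumed); the task assigned to the dummy VM would simply wait for the next scheduling round.

import numpy as np
from scipy.optimize import linear_sum_assignment

# Execution time matrix from the results above: rows = VMs, columns = tasks.
ex_time = np.array([
    [30, 40, 50, 40],   # VM1
    [40, 30, 25, 30],   # VM2
    [40, 35, 45, 40],   # VM3
])

# Unbalanced (3 VMs, 4 tasks): add a dummy VM row of zero costs to square it.
square = np.vstack([ex_time, np.zeros(4)])

vms, tasks = linear_sum_assignment(square)
for vm, task in zip(vms, tasks):
    label = f"VM{vm + 1}" if vm < len(ex_time) else "dummy VM (task deferred)"
    print(f"Task{task + 1} -> {label}")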
Abstract: The speed of a single processor is limited by the speed of light and of electrons. Hence, processor manufacturers are packing multiple lower-speed processors, or cores, onto the same chip; these are called multi-core processors, and the number of processors on a single chip is gradually increasing. Though multi-core processors are similar to Symmetric Multi-Processors (SMP), there are notable differences between them, such as the shared last-level cache. Operating systems treat these multi-core processors as SMP and apply the corresponding methods for task scheduling and load balancing, but these strategies cannot fully exploit the details of multi-core processors. Software must also be written to take advantage of multi-core processors. In this context, user-level runtime systems evolved with the task as the primitive concurrency construct: tasks created by the program during execution are added to queues at the user-level runtime. In this paper, we analyze the performance of various user-level queues and their contention using Java.

Index Terms: Concurrency, Task, Work stealing, Centralized queue, Multi queue, Double ended queue.

I. INTRODUCTION

The goal of parallelism is to maximally exploit the number of CPU cores present at the hardware level. The goals of parallelism are:
   Increasing throughput
   Reducing latency
   Reducing power consumption
   Scaling with the number of cores (CPUs)
   Preventing loss of data locality

Since operating systems treat multi-core processors as SMPs, the user-level scheduler has to schedule the tasks created by the user in an optimal fashion [1]. The common practice in popular operating systems for multiprocessors is to maintain a single queue, or multiple queues, one per processor. Processes or threads are added to these queues dynamically at runtime and are scheduled onto the CPUs: the rear end of a queue is used for adding work, and the front end is used for popping work for execution. The smallest execution unit in operating systems like Linux is a process or a thread (in Linux, a process and a thread are created using fork() and pthread_create() respectively). Both of these calls involve the clone() system call of the kernel, which carries much overhead, making them heavyweight.

Though thread creation has less overhead than process creation, it is still a considerable load on performance, involving:
   system call overhead involving a trap
   kernel-level data structure access on every operation

Because of these disadvantages of the native thread API, the modern approach of parallel programming runtime systems evolved from the kernel level to the user level. These user-level runtime systems are popular and have become a de facto standard for parallel programming [1]. Popular parallel runtimes such as OpenMP, Cilk and TBB follow this approach in their runtime implementations. These runtime systems introduce a new scheduling entity called a task, which is even lighter weight than a thread since it is maintained completely at user level. The runtime systems provide API calls to create and maintain tasks, and the programmer follows a sequence of API calls to implement parallel programs:
   1. init(): initialize the runtime
   2. spawnTask(): create tasks wherever a parallel activity is to be done
   3. join(): wait for the tasks to complete
   4. release(): free the runtime
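This four-call pattern can be sketched as follows; the toy Python runtime below (the paper's own study is done in Java) uses a single global queue feeding a fixed worker pool, i.e., the centralized-queue organization analyzed in Section II.

import threading, queue

class Runtime:
    # Toy user-level runtime mirroring the init/spawnTask/join/release sequence.
    def __init__(self, workers=4):                      # init()
        self.tasks = queue.Queue()
        self.pool = [threading.Thread(target=self._worker, daemon=True)
                     for _ in range(workers)]
        for t in self.pool:
            t.start()

    def _worker(self):
        while True:
            body, args = self.tasks.get()
            if body is None:                            # shutdown sentinel
                break
            body(*args)
            self.tasks.task_done()

    def spawn_task(self, body, *args):                  # spawnTask()
        self.tasks.put((body, args))

    def join(self):                                     # join(): wait for all tasks
        self.tasks.join()

    def release(self):                                  # release(): stop the workers
        for _ in self.pool:
            self.tasks.put((None, ()))
        for t in self.pool:
            t.join()

rt = Runtime()
rt.spawn_task(print, "hello from a task")
rt.join()
rt.release()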
init(): During the initialization of the user-level runtime system, a pool of native threads (pthreads on Linux) is created, along with a single queue or multiple queues. The threads created during initialization of the runtime are called worker threads, or workers. Since this is a one-time duty performed during initialization, we need not enter the kernel every time a parallel execution entity is to be created.

spawnTask(): Spawning a task allocates enough memory for the task body and its parameters, and this task object is added to a queue.

join(): All coarse-grain or fine-grain tasks wait at the joining points until all the remaining tasks of that level have completed.

release(): This is the last call to the runtime; it performs a join operation on the native threads of the thread pool and deallocates the queue objects created during init().

In this paper, our focus is on studying the impact of the queue data structures used by the worker threads to queue the tasks created by the programmer. We implemented three types of queues:
   Centralized or single queue
   Multiple queues without work stealing
   Multiple queues with work stealing
In this paper, the load balancing strategies are implemented and their performance evaluated using a matrix multiplication benchmark; to the best of our knowledge this has not been studied in the previous literature. Section II describes the various types of queues used in user-level runtime systems. Section III describes the experimental setup and result analysis.

II. TYPES OF LOAD BALANCING QUEUES

A. Single Queue
In this approach, a single global queue data structure is created during the initialization of the runtime system. This global queue is responsible for queuing the tasks created using taskSpawn(), and it is shared among all worker threads. Every worker thread is bound to a hardware-level core or processor; if hyper-threading is enabled in the BIOS setup, the number of worker threads can equal the number of hyper-threads. All the worker threads attempt to perform a dequeue operation on this global queue to get a task object when they become available; once the dequeue succeeds, the worker gets a task object and invokes the task body on its associated core.

The single queue approach is the simplest mechanism for implementing a user-level runtime system. When a worker thread is ready to execute a task, it attempts to dequeue a task from the global queue. This operation is a critical section: the worker must acquire a lock before the operation and release it afterwards. The approach may suffer from the following disadvantages, which can affect the overall performance of the parallel application:
   contention among the worker threads for access to the single global queue
   the locality of the queue may cause cache performance isolation problems and false sharing

Figure: User-level runtime task queues

B. Multi Queue
To overcome the main disadvantage of the single queue approach, a separate queue is associated with every worker thread; this is the multi queue approach, and it is the first step towards distributed load balancing. The tasks created by the programmer are added to the separate queues associated with the individual worker threads [2]. This approach guarantees transparent load balancing under the constraint that all tasks are of equal duration. Since each worker thread has access to its own queue, workers need not contend on the dequeue operation.

But this multi queue approach is not effective when tasks are of varying duration: if tasks have different durations, the load across the queues becomes unbalanced.
C. Multiple Queues with Work Stealing
In this approach, each new task gets added at the tail of a queue associated with each processor. When execution reaches a fork point, such as a spawn or a parallel loop, one or more new tasks are created and put on a queue. The main strategies by which idle workers find new tasks are:
   find a task in their own work queue
   distributed work queues with randomized stealing
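The owner-pops-tail, thief-steals-head discipline can be sketched as below; a per-deque lock stands in for the lock-free deques of [4], and the two-worker usage is illustrative.

import collections, random, threading

class Worker:
    # Each worker owns a deque: the owner pushes/pops at the tail (LIFO),
    # while idle thieves steal from the head of a random victim (FIFO).
    def __init__(self, wid, all_workers):
        self.deque = collections.deque()
        self.lock = threading.Lock()
        self.wid = wid
        self.all = all_workers

    def spawn(self, task):
        with self.lock:
            self.deque.append(task)              # tail push by the owner

    def next_task(self):
        with self.lock:
            if self.deque:
                return self.deque.pop()          # tail pop: good cache locality
        victim = random.choice(self.all)         # otherwise: randomized stealing
        if victim is not self:
            with victim.lock:
                if victim.deque:
                    return victim.deque.popleft()   # steal from the head
        return None

workers = []
workers.extend(Worker(i, workers) for i in range(4))
workers[0].spawn("task-A")
task = None
while task is None:          # retry until the random victim is worker 0
    task = workers[1].next_task()
print(task)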
III. EXPERIMENTAL SETUP AND RESULT ANALYSIS

The benchmark measures how effectively load balancing is done across the queues. It is executed on an IBM x3400 server with a 4-core (8 hyper-threaded) Xeon E5-2401 and Linux kernel version 3.16.2.200. The number of worker threads in our experimental setup equals the number of cores (4), with hyper-threading disabled.

It can be observed from the execution times presented in Table 1 that the work stealing queue approach gives better performance than the centralized queue and multi worker queue approaches. Though the difference in execution times between the multi worker queue approach and work stealing is small, it grows for bigger matrix sizes. The main difference is due to the stealing approach balancing the load across the queues, which is not addressed in the plain multi queue approach. As stated in the introduction, the centralized queue approach gives poor performance due to contention among the worker threads for access to the single global queue; the difference in execution time is the cost of the critical section code that guards the global queue.

Table 1 Execution times for the matrix multiplication benchmark
   Matrix size   Centralized queue   Multi queue   Work stealing
   128           2                   1             1
   256           7                   6             5
V. CONCLUSIONS

In this paper, we studied the effect of different load balancing queues on the performance of task-parallel applications. It is observed that the work stealing queue approach performs better than the centralized queue approach and the multi queue approach.

REFERENCES
[1] Sergey Blagodurov and Alexandra Fedorova, "User-level scheduling on NUMA multicore systems under Linux," Proceedings of the Linux Symposium, 2011.
[2] P. E. Hadjidoukas, G. Ch. Philos, and V. V. Dimakopoulos, "Exploiting fine-grain thread parallelism on multicore architectures," Scientific Programming, Vol. 17, No. 4, Nov. 2009, pp. 309-323.
[3] Karl-Filip Faxen, "Wool - a work stealing library," ACM SIGARCH Computer Architecture News, 36(5), pp. 93-100, 2009.
[4] Danny Hendler et al., "A dynamic-sized nonblocking work stealing deque," ACM, 2005.
Abstract: Wireless communication is an emerging field of research. Users need to make many changes in the hardware to obtain a high data rate and low latency at low cost. To avoid Inter Symbol Interference (ISI) in a single carrier system, the delay time must be very small compared to the symbol period. The data rate is inversely proportional to the symbol period, so a long symbol period means a low data rate, and a system with a long symbol period is not considered an efficient communication system. In a multi carrier system such as the Frequency Division Multiplexing (FDM) technique, the available bandwidth is used for multi carrier transmission by dividing the total bandwidth into sub bands. To obtain high data rates, the multiple carriers can be placed closely in the spectrum. But due to the small gap between the multiple carriers in the spectrum, there is a chance of Inter Carrier Interference. To avoid this interference, guard bands need to be inserted, which in turn results in a low data rate. This paper focuses on the implementation of an Orthogonal Frequency Division Multiplexing (OFDM) system for testing purposes and its verification. Most of the stages in a communication system can be replaced by software; for this purpose a new system called the software defined radio came into existence. An SDR is a radio which can be used for signal processing at minimum cost. The main goal is to create an OFDM signal using MATLAB for signal processing, synchronization, equalization, demodulation and detection. The paper also gives the design of an OFDM transmitter and receiver for Software Defined Radio.

Keywords: FDM, OFDM, ICI, Software Defined Radio and MATLAB.

I. INTRODUCTION

The requirement for high speed data transmission has increased due to the rapid growth in the area of digital communication. To keep up with demand, a new modulation technique, Orthogonal Frequency Division Multiplexing (OFDM), is currently being implemented. Nowadays processor power has increased, yet handling such speeds remains a big task. The new multiplexing technique OFDM has become more important as it can deliver high speed with low interference. From the study of the multi carrier system OFDM in many books and journals, it is clear that OFDM will have a very good impact on future communication systems.

The main problem found in any communication system at high data rates is Inter Symbol Interference (ISI) [1]. ISI is the interference that arises when a transmission interferes with itself, so that at the receiver side it becomes difficult to decode the transmission correctly. As the receiver receives the signal from many obstacles, over many reflected paths, the received signal contains many delayed versions of the original signal along with the actual one. This effect is called multipath. These delayed copies may cause interference at the receiver, called ISI.

The main objectives of any multi carrier system are efficient usage of spectrum, low power consumption, robustness to multipath propagation and low implementation complexity. These objectives may conflict with one another, so implementations or techniques are selected which offer the best possible trade-off between them. The best means of reducing the gap between achieved performance and channel capacity is OFDM modulation, which combats multipath fading.

In addition to error correction coding techniques like turbo, spherical and Low Density Parity Check codes, OFDM is a good choice against multipath [2]; the fading effect can also be reduced to a certain extent. To improve the efficiency of the spectrum, the best method is to increase the capacity of the channel, which can easily be done with the multi carrier system OFDM.

In the OFDM method, many orthogonal, overlapping signals are transmitted. OFDM divides the available spectrum bandwidth into sub-channels. The gap between the sub-carriers is theoretically minimal, so that the channel bandwidth is fully utilized. The main reason to opt for OFDM is that it can handle the effects caused by multipath fading. Multipath causes mainly two problems: frequency selective fading and inter symbol interference. The resulting flat narrow sub-bands overcome the frequency selective fading, and modulation at a low symbol rate removes the ISI by making the symbols longer than the channel impulse response. Using better error correcting methods along with frequency and time interleaving, more robustness can be achieved against frequency selective fading. And by inserting the required amount of guard interval between the symbols of an OFDM signal, the effects of interference can be reduced even further, so that the equalizer at the receiver can be removed.

As the system's information rate increases, the time available for each transmission becomes less. Since the delay time caused by multipath does not change, ISI is the main problem in high speed communication. The new modulation method, OFDM, avoids this problem by transmitting many low speed signals in parallel. Observe Figure 1 below, which shows two ways of transmitting the same data: if the transmission takes four seconds, then serially each data symbol has a duration of one second, but OFDM can send all the data simultaneously in the same period.
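To make the parallel sub-carrier idea concrete, the sketch below builds a single OFDM symbol in MATLAB, the tool used in this paper. It is a minimal illustration under assumed parameters (sub-carrier count, prefix length, QPSK mapping), not the authors' actual script:

% Minimal OFDM symbol construction (illustrative parameters assumed).
nCarriers = 64;                            % number of sub-carriers (assumed)
cpLen     = 16;                            % cyclic prefix length (assumed)

bits = randi([0 3], nCarriers, 1);         % random QPSK symbol indices
data = exp(1j*(pi/4 + bits*pi/2));         % unit-magnitude QPSK points

% The IFFT places one point on each orthogonal sub-carrier, so all
% nCarriers low-rate streams are sent in parallel in one symbol period.
ofdmSym = ifft(data, nCarriers);

% Cyclic prefix: the tail is copied in front to absorb multipath delay.
txSym = [ofdmSym(end-cpLen+1:end); ofdmSym];

Sending nCarriers points per symbol period is what lets OFDM keep each sub-carrier's symbol long (robust to multipath) while the aggregate data rate stays high.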
A Software Defined Radio (SDR) is a system that applies software methods to the digital signal. In a communication
A. Frame Guard

The main block in the OFDM transmitter is the modulator, which modulates the input data. The modulated digital data stream is divided into frames based on a variable symbols-per-frame setting, which gives the number of symbols per frame for a carrier. In this paper the data is taken from an image file: the image pixel data is converted into a matrix, and the matrix data is then chunked based on the chosen modulation type. The number of symbols per frame is decided by the modulation type. If the number of symbols to be transmitted in a data stream is less than the number of symbols per frame, then the data is not converted into frames. Also, if no data stream is sufficiently long to divide into multiple frames, then zeros are padded, with two guard intervals at either end of the signal; this is used to find the starting point of the substantial portion of the signal.

To avoid interference in channels due to multipath fading, a guard interval is included; during this period a cyclic prefix is inserted before the OFDM block. The OFDM symbol with cyclic prefix, and the resulting transmitted signal, are represented as shown below.
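The original equations did not survive the page layout; as a stand-in, the standard textbook form of the cyclic-prefix construction (assumed here, not necessarily the authors' exact notation) is:

\[
x[n] = \frac{1}{N}\sum_{k=0}^{N-1} X_k\, e^{j 2\pi k n/N}, \qquad
x_{\mathrm{cp}}[n] = x\big[(n - N_{\mathrm{cp}}) \bmod N\big],\quad n = 0,\dots,N+N_{\mathrm{cp}}-1,
\]

where \(X_k\) are the modulated sub-carrier symbols, \(N\) is the IFFT length and \(N_{\mathrm{cp}}\) is the cyclic-prefix length; the transmitted frame is the concatenation of such prefixed symbols.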
B. OFDM Modulator

The modulator generates a PSK signal matrix composed of complex numbers whose phases are the translated phases and whose magnitudes are all set to one. For the remaining processing, these complex numbers are converted into rectangular form [4].
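As a rough MATLAB illustration of this step (the matrix shape and the PSK order are assumptions for the example, not the paper's values):

% Form unit-magnitude PSK points from phases, then split into
% rectangular (in-phase/quadrature) components.  Sketch only.
M      = 16;                         % PSK order (assumed)
symIdx = randi([0 M-1], 4, 8);       % example symbol matrix (assumed shape)
phases = 2*pi*symIdx/M;              % translated phases

points = exp(1j*phases);             % magnitude 1, phase as computed
iPart  = real(points);               % rectangular form: in-phase part
qPart  = imag(points);               % rectangular form: quadrature part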
V. SOFTWARE ALGORITHM

Figure 4: Images received at the OFDM receiver with BPSK technique for (a) SNR=0 (b) SNR=5 (c) SNR=10 (d) SNR=20
VI. RESULTS
A script file that sets up the OFDM parameters is run first; it invokes all the parameters of the OFDM system and initializes all the variables to start the simulation. Some variable values are entered by the user; the remaining variable values are derived from the input data. The user is prompted to enter:
1) the image file name, an 8-bit grayscale (256-level) BMP file;
2) the IFFT length (a power of two);
3) the total number of carriers, less than (IFFT length)/2 - 2;
4) the modulation type (PSK);
5) the clipping of peak power in dB;
6) the SNR in dB.
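A minimal MATLAB sketch of such a parameter prompt and its sanity checks (variable names invented; not the authors' script):

% Prompt for simulation parameters and validate them (sketch).
ifftLen  = input('IFFT length (power of two): ');
carriers = input('Total number of carriers: ');
snrDb    = input('SNR in dB: ');

% A positive power of two has exactly one set bit.
assert(ifftLen > 0 && bitand(ifftLen, ifftLen-1) == 0, ...
       'IFFT length must be a power of two');
assert(carriers < ifftLen/2 - 2, ...
       'Carrier count must be less than (IFFT length)/2 - 2');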
Figure 5: Images received at the OFDM receiver with QPSK technique for (a) SNR=0 (b) SNR=5 (c) SNR=10 (d) SNR=15 (e) SNR=20 (f) SNR=25
Figure 6: Images received at the OFDM receiver with 16PSK technique for (a) SNR=0 (b) SNR=10 (c) SNR=35 (d) SNR=40

Figure 7: Images received at the OFDM receiver with 256PSK technique for (a) SNR=20 (b) SNR=50

VII. CONCLUSIONS

OFDM modulation and demodulation were successfully simulated using MATLAB in this paper, and all the main components of an OFDM system were implemented. Some of the problems faced while developing the OFDM simulation program were mapping the modulation and demodulation stages to each other and maintaining the data format throughout the process.

Possible future work includes adding a feature to accept the data in word size rather than in bits, including modulation techniques other than PSK, such as QAM, and implementing multiplexing for OFDM with multiple inputs, thereby increasing the data rate even further. With an increase in data rate, the performance of the system may degrade; to overcome this, Forward Error Correction can be implemented, and performance can be further enhanced by including diversity methods.
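The trend summarized in Table I below, the error rate falling as SNR rises, can be reproduced with a generic MATLAB BPSK-over-AWGN sketch (illustrative only; this is not the paper's image-transmission pipeline):

% BER of BPSK over an AWGN channel at several SNRs (sketch).
snrDb = [0 5 10 15 20];
nBits = 1e5;
bits  = randi([0 1], nBits, 1);
tx    = 2*bits - 1;                          % BPSK: 0 -> -1, 1 -> +1

for k = 1:numel(snrDb)
    noiseStd = sqrt(10^(-snrDb(k)/10));      % unit signal power assumed
    rx  = tx + noiseStd*randn(nBits, 1);
    ber = mean((rx > 0) ~= bits);            % hard-decision detection
    fprintf('SNR %2d dB -> BER %.5f\n', snrDb(k), ber);
end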
The table below shows a comparison of different parameters for BPSK, QPSK, 16PSK and 256PSK modulation techniques for the given input data. It is clearly observed that, for any modulation technique, the error rate decreases as the signal-to-noise ratio increases. Transmission time and receiving time decrease as the number of bits per symbol is increased relative to BPSK.

TABLE I. COMPARISON TABLE

Modulation   SNR   BER (%)   Avg. phase error (degree)   % pixel error
BPSK           0   14.745    48.21                        70.77
BPSK           5    1.61     25.1                          9.045
BPSK          10    0.701    14.31                         1.626
BPSK          15    0.648     8.33                         1.58
BPSK          20    0.669     5.236                        1.588
BPSK          25    0.000     2.299                        0.000
QPSK           0   42.98     48.31                        89.10
QPSK           5   16.07     26.39                        47.85
QPSK          10    3.375    16.04                         8.222
QPSK          15    2.485    12.11                         3.919

REFERENCES
[1] Usama S. Mohammed, H. A. Hamada, "Image transmission over OFDM channel with rate allocation scheme and minimum peak-to-average power ratio," Journal of Telecommunications, Vol. 2, Issue 2, May 2010.
[2] Pawan Sharma, Seema Verma, "Performance Analysis of Peak-to-Average Power Ratio Reduction Techniques for Wireless Communication Using OFDM Signals," IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 6, November 2010.
[3] Marek Tichy, Karel Ulovec, "OFDM System Implementation Using a USRP for Testing Purposes," 22nd International Conference Radioelektronika, 2012.
[4] Roshan Jain, Sandhya Sharma, "Simulation and performance analysis of OFDM with different modulation techniques," International Journal of Engineering and Technical Research (IJETR), ISSN: 2321-0869, Vol. 1, Issue 1, March 2013.
[5] Wasiu Lawal, S. O. Adewuyi, E. O. Ogunti, "Effect of Cyclic Prefix on OFDM over AWGN Channel," International Journal of Innovative Research in Advanced Engineering (IJIRAE), ISSN: 2349-2163, Vol. 1, Issue 9, October 2014.
I. INTRODUCTION
The continuous scaling of the CMOS process has
attracted FPGA vendors to integrate more and more devices
on the same chip to increase the chip functionality. As a
result, the power dissipation of modern FPGAs increased
significantly. Much of this increase in power dissipation is
attributed to the increase in leakage power dissipation
which is expected to exceed 50% of the FPGA power
dissipation as modern FPGAs start using the 65nm CMOS
process. In addition, the excessive scaling of the MOS gate
oxide thickness tox resulted in a significant increase in the
gate oxide tunneling current, thus exacerbating the leakage
problem. In recent experiments, it was found that both the
sub-threshold and gate leakage power dissipation increase
by about 5X and 30X, respectively, across successive
technology generations [1].
This paper provides architectural modifications to FPGA designs to reduce the impact of leakage power dissipation on modern FPGAs. Firstly, multi-threshold CMOS (MTCMOS) techniques are introduced to FPGAs to permanently turn OFF the unused resources of the FPGA; FPGAs are characterized by low utilization percentages that can reach 60%. Moreover, such an architecture enables the dynamic shutting down of the FPGA's idle parts, thus reducing the standby leakage significantly, since the leakage current is limited when the sleep transistor is turned OFF. Employing the MTCMOS technique in FPGAs requires several changes to the FPGA architecture.

Figure 1: Modern FPGA fabric.

A) CAD for FPGAs

Generally, all FPGAs are implemented with a huge number of programmable switches, and logic functions are designed and implemented using those switches. The Computer Aided Design (CAD) tools for FPGAs transform the design into a stream of binary bits, 1s and 0s only. The design entry is either a schematic or a hardware description language. These binary streams of 0s and 1s are used to program the FPGA with the proper configuration. Figure 2 represents the flow diagram of the CAD tools for FPGA design.
Grouping of unutilized blocks with the same activity then takes place: blocks with similar activity profiles are forced into a standby mode together. The next stage is the T-V Pack algorithm, which is integrated with the activity profile generation method and the packing algorithm; the resulting algorithm is AT-V Pack, as shown in figure 4(b). Finally, a modified power estimation model is introduced to properly calculate the power savings in this proposed MTCMOS FPGA architecture.

Sleep transistors can be sized to optimize area [8], and the average overhead of MTCMOS architectures using fine granularity in FPGAs is around 5% [9, 10]. For the sleep transistor implementation, two types of devices are considered: header or footer devices. PMOS transistors are used as header devices to block the current path from the supply line to the pull-down network; footer devices are implemented with NMOS transistors to block the ground path, as shown in figure 7.
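To illustrate the kind of bookkeeping such a power estimation model performs, here is a toy MATLAB sketch; every number and name in it is invented for the example:

% Toy leakage-savings estimate for an MTCMOS FPGA (illustrative only).
nBlocks    = 1000;          % logic blocks in the fabric (assumed)
util       = 0.60;          % fraction of blocks actually used (assumed)
pLeakBlock = 1e-6;          % leakage per active block, watts (assumed)
sleepRatio = 0.05;          % residual leakage when gated OFF (assumed)

nUsed   = round(util * nBlocks);
nUnused = nBlocks - nUsed;

% Baseline: every block leaks.  MTCMOS: unused blocks are gated OFF.
pBase  = nBlocks * pLeakBlock;
pGated = nUsed * pLeakBlock + nUnused * pLeakBlock * sleepRatio;

savings = 100 * (pBase - pGated) / pBase;
fprintf('Estimated leakage saving: %.1f %%\n', savings);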
The conventional FPGA architecture is used in most modern FPGAs. This architecture consists of logic blocks, each implemented with a four-input Look-Up Table (LUT), a flip-flop and a 2x1 multiplexer, as shown in figure 5.
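As a quick illustration of what such a logic block computes, a four-input LUT is simply a 16-entry truth table; the MATLAB sketch below assumes arbitrary programmed contents:

% A 4-input LUT is a 16-entry truth table (sketch, arbitrary contents).
lut = logical(randi([0 1], 16, 1));   % programmed configuration bits
a = 1; b = 0; c = 1; d = 1;           % example logic inputs

addr = a*8 + b*4 + c*2 + d + 1;       % four bits -> index (MATLAB is 1-based)
q    = lut(addr);                     % combinational output; the flip-flop
                                      % may register q, and the 2x1 mux
                                      % selects the registered or raw value.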
V. CONCLUSIONS
This paper proposed several methodologies for leakage
power reduction in modern nanometer FPGAs. The use of
supply gating using Multi-Threshold CMOS (MTCMOS)
techniques was proposed to enable turning OFF the unused
resources of the FPGA, which are estimated to be close to
30% of the total FPGA area. Moreover, the utilized
resources are allowed to enter a sleep mode dynamically
during run-time depending on certain circuit conditions.
Several new activity profiling techniques were proposed to identify the FPGA resources that share common idleness periods, such that these resources can be turned OFF together.

Figure 8: The proposed pin reordering algorithms with VPR CAD flow.

Figure 9 shows the different leakage savings due to the LPR phase. From the figure, we can see that the maximum power savings are generated by the input swapping phase. The leakage power saving from putting unutilized blocks into a low leakage mode is very small, because the number of unutilized logic blocks or resources is at an absolute minimum.

Another technique proposed in this paper for leakage power reduction in FPGAs is the pin reordering algorithm. The pin reordering algorithm makes use of the input-state dependency of leakage power to place as much as possible of the FPGA circuits in a low leakage mode without incurring any physical or performance penalties. The guidelines for finding the lowest leakage power dissipation mode were derived, and it was shown how they vary with every process node depending on the relative magnitude of the sub-threshold and gate leakage power components. The proposed pin reordering technique was applied to several FPGA benchmarks and resulted in an average of 50% leakage power savings. Furthermore, another version of the proposed algorithm was developed that improves performance by 2.5%, while achieving an average leakage power reduction of 48%. This paper presented new CAD methods for the reduction of power dissipation in FPGAs.

REFERENCES
[1] S. Borkar, "Design Challenges of Technology Scaling," IEEE Micro, vol. 19, no. 4, pp. 23-29, 1999.
[2] A. Gayasen, Y. Tsai, N. Vijaykrishnan, M. Kandemir, M. J. Irwin, and T. Tuan, "Reducing leakage energy in FPGAs using region constrained placement," in Proc. of ACM Intl. Symp. on Field Programmable Gate Arrays, 2004, pp. 51-58.
[3] J. Kao and A. Chandrakasan, "Dual-Threshold Voltage Techniques for Low Power Digital Circuits," IEEE J. Solid-State Circuits, vol. 35, no. 7, pp. 1009-1018, July 2000.
[4] V. Betz, J. Rose, and A. Marquardt, Architecture and CAD for Deep-Submicron FPGAs. Norwell, MA: Kluwer Academic Publishers, 1999.
[5] Z. Hu, A. Buyuktosunoglu, V. Srinivasan, V. Zyuban, H. Jacobson, and P. Bose, "Microarchitectural Techniques for Power Gating of Execution Units," in Proc. of Intl. Symp. on Low Power Electronics and Design, 2004, pp. 32-37.
[6] S. V. Kosonocky, M. Immediato, P. Cottrell, T. Hook, R. Mann, and J. Brown, "Enhanced Multi-Threshold (MTCMOS) Circuits using Variable Well Bias," in Proc. of Intl. Symp. on Low Power Electronics and Design, 2001, pp. 165-169.
[7] H.-O. Kim, Y. Shin, H. Kim, and I. Eo, "Physical Design Methodology of Power Gating Circuits for Standard-Cell-Based Design," in Proc. of IEEE/ACM Design Automation Conf., 2006, pp. 109-112.
[8] B. Calhoun, F. Honore, and A. Chandrakasan, "A Leakage Reduction Methodology for Distributed MTCMOS," IEEE J. Solid-State Circuits, vol. 39, no. 5, pp. 818-826, May 2004.
[9] T. Tuan, S. Kao, A. Rahman, S. Das, and S. Trimberger, "A 90nm Low-Power FPGA for Battery-Powered Applications," in Proc. of ACM Intl. Symp. on Field Programmable Gate Arrays, 2006, pp. 3-11.
[10] R. S. Guindi and F. N. Najm, "Design Techniques for Gate-Leakage Reduction in CMOS Circuits," in Proc. of IEEE Intl. Symp. on Quality of Electronic Design, 2003, pp. 61-65.
[11] A. Marquardt, V. Betz, and J. Rose, "Timing-Driven Placement for FPGAs," in Proc. of ACM Intl. Symp. on Field Programmable Gate Arrays, 2000, pp. 203-213.
[12] M. Anis, S. Areibi, and M. Elmasry, "Design and Optimization of Multi-threshold CMOS (MTCMOS) Circuits," IEEE Trans. Computer-Aided Design, vol. 22, no. 10, pp. 1324-1342, Oct. 2003.
[13] K. Roy, S. Mukhopadhyay, and H. Mahmoodi-Meimand, "Leakage Current Mechanisms and Leakage Reduction Techniques in Deep-Submicrometer CMOS Circuits," Proc. IEEE, vol. 91, no. 2, pp. 305-327, Feb. 2003.
[14] J. Anderson, F. N. Najm, and T. Tuan, "Active Leakage Power Optimization for FPGAs," in Proc. of ACM Intl. Symp. on Field Programmable Gate Arrays, 2004, pp. 33-41.
Abstract: Analysis and control of industrial parameters is of prime importance for most process control applications. But deploying cables from various transducers and final control elements to the monitoring and controlling stations becomes costly and complex. This problem calls for wireless implementations for monitoring and control applications. This paper presents a low cost wireless combustion process parameter analysing system based on the ZigBee communication protocol. The system contains a Rotary KILN unit, sensors, ZigBee devices, a PIC micro-controller, and a PC (LabVIEW). It uses a Microchip PIC16F72 controller as the signal processing unit in the field of the combustion process and an NI LabVIEW interface as the analysing unit. It utilizes a precision temperature sensor and an outlet pressure sensor which, being low cost sensors, provide an authentic output within their operating range. The system follows a request-response methodology, where the field unit responds to the request made by the analysis unit of LabVIEW.

Index Terms: LabVIEW, ZigBee, PIC micro-controller, Rotary KILN.

I. INTRODUCTION

The measurement and analysis of Rotary KILN combustion parameters are very crucial in order to optimize the performance of the combustion process. Moreover, reliable data acquisition determines the accuracy of the analysis algorithm [1]. In the field of industry, a plant may have a host of independent processes with various control objectives, and the analysis of all these processes proves the conclusive factor in process performance and safety. Interfacing the sensors in the process field to a central analysis unit by cable results in high installation and maintenance costs, and makes the system susceptible to wear and tear, thereby reducing its reliability. Wireless network communication, on the other hand, offers lower set-up and maintenance costs with better flexibility to rebuild and update the wireless network. Further, the sensors of a wireless sensor network can also be installed in hazardous and secluded environments like nuclear power plants. The quick improvement in wireless systems has resulted in a vast number of possibilities in different commercial environments and applications. In the past, various methods for industrial control over wireless sensor networks have been designed and implemented, but industrial and commercial adaptations of most of these methods are yet to be realized, principally due to the deployment and complexity costs involved. Wireless Highway Addressable Remote Transducer based methods prove to be the most encouraging for chemical process control applications, but require huge investments for deployment.

The prototype distillation column system adopts the ZigBee wireless communication protocol to acquire real-time data and cool down the high temperature of a communication room. The system is designed to be simple, highly reliable and cost effective. In this system, the measured parameters from the Rotary KILN of the chemical process are processed by a Microchip PIC16F72 controller, and the processed data is transmitted by a ZigBee transmitter module to a ZigBee receiver module, which is interfaced with the PC LabVIEW software through RS-232 communication. The data received from the ZigBee module can be processed and analyzed by the graphical terminal units of LabVIEW programming.

This idea represents a low cost system for wireless analysis of the Rotary KILN temperature and pressure, suitable for small-scale industrial applications. The fundamental aim of this paper is to develop a wireless Rotary KILN combustion [2] parameter analysing system which enables analysis of the Rotary KILN combustion process parameters using ZigBee technology and presents the parameters on the front panel of the LabVIEW software on a PC screen.

The system mainly contains two parts: one is the Rotary KILN combustion unit [2] along with the ZigBee transmitter node, and the other is the receiver node. The transmitter part consists of the physical variable sensors, the microcontroller and ZigBee. The receiver consists of a PC interfaced with ZigBee through a serial port. Here, parameters like temperature and steam pressure are detected by the sensors, given to the microcontroller, and transmitted to the receiver part through the wireless medium, ZigBee.

II. SYSTEM DESIGN AND ARCHITECTURE

Fig. 1 represents the system design and architecture, which incorporates two stand-alone units: one is the field unit and the second one is the analyzing unit, where a thermocouple senses the temperature of the combustion product at the Rotary KILN junctions [3]. The analyzing unit is in wireless synchronization with the combustion unit. It shows the
A. ZigBee module

This system is designed for wireless network communication using ZigBee modules. Two different ZigBee modules are used: one is the ZigBee transmitter and the other is the receiver. The ZigBee transmitter is used at the Rotary KILN combustion process to transmit the measured data to LabVIEW over a wireless communication range of 30-40 meters. The ZigBee communication network protocol supports different network topologies like mesh, tree, star, etc. This communication support is needed for interfacing a larger number of sensor modules from the different field stations of the process.
The type of network topology can be selected based on the requirements for interfacing sensors and actuators in the process field. If the number of process fields is large, then a suitable network topology is to be selected for better communication [6]. This ZigBee network layer is applicable for networks without a high power transmitter, and a huge number of nodes can be handled by it. Here, the RS-232 recommended standard is used for the communication between the ZigBee receiver module and the PC on which LabVIEW is installed.
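On the PC side this amounts to a request-response read over the serial link. LabVIEW itself is graphical, so as a textual stand-in here is a MATLAB sketch of the same exchange; the port name, baud rate, command and reply format are all assumptions, not the paper's specification:

% Request-response read from the ZigBee receiver over RS-232 (sketch).
s = serialport("COM3", 9600);            % port and baud rate assumed

writeline(s, "REQ");                     % hypothetical request command
reply = char(readline(s));               % e.g. 'T1=950,T2=1030,P=85' (assumed)

vals = sscanf(reply, 'T1=%f,T2=%f,P=%f');
fprintf('Bed %g C, Gas %g C, Pressure %g PSI\n', vals);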
B. Microchip PIC16F72 controller:

The Microchip PIC16F72 is a CMOS flash-based 8-bit microcontroller. It features a 200 ns instruction execution speed and a 5-channel, 8-bit analog-to-digital converter, which makes it flexible for interfacing analog sensors from the process field. In this system, two analog sensors are interfaced to the Microchip PIC16F72 controller, and the sensed signal is processed through the micro-controller program; the processed data is transmitted to the ZigBee receiver module through wireless communication.

This microchip has a high performance RISC CPU [6]. It can be operated at industrial temperatures, and its wide operating voltage range is from 2.0 V to 5.5 V. One more important feature of the microchip is its inbuilt PWM module, so it can be used for the control of industrial field devices along with a suitable signal conditioning circuit [7].

If the operators are unable to maintain a sufficient level of pressure at the KILN OUTLET-DOOR (KOD) at the Rotary KILN, it causes blasting of the chamber. The KILN produces pressure in the range of 30-150 PSI at the KOD chamber. The pressure sensor is sealed from water. In this prototype system, a SIKA-make product is used for the measurement of the KILN outlet door pressure; its operating temperature range is from 0°C to 80°C. Here, the pressure sensor produces an analog output voltage proportional to the KILN outlet pressure changes. This voltage is in the range of volts, so we can directly interface this signal to the PIC micro-controller without any signal conditioning circuit and easily process the signal through micro-controller programming. The pressure sensor used in the prototype system is shown below in Figure 4.
Both the ZigBee transmitter and receiver modules operate at the 2.4 GHz frequency. These two modules need to be configured through PC programming or through the unit's micro-controller. Before the system was set up, the two modules of this existing system were configured through a personal computer to have the same personal area network ID and bit rate; the configuration of the two ZigBee modules was done using the configuring software TMFT V2.6 from the module manufacturer.

B. Rotary KILN configuration unit:

In this application, the rotary KILN combustion unit produces sponge iron as the final product from the raw material, iron ore. The quality of the product depends on the temperature at the junctions and the size of the iron ore; here, production quality is analyzed by considering the necessary parameters, temperature and pressure. Three temperature sensors are used to measure the temperatures of the bed, gas and reduction. These K-type thermocouples (temperature sensors) produce an output signal in the millivolt range, which is amplified to volts. This voltage signal is calibrated into a temperature using micro-controller programming, displayed on an LCD at the field unit, and transmitted to the analysis unit. The pressure sensor output is processed like the temperature signal. A prototype of this Rotary KILN combustion unit is shown below in Figure 6. The microcontroller's 10-bit analog-to-digital converter reads the signals from the temperature sensor and the pressure sensor and stores them into memory for transmission and historical reporting.
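The millivolt-to-temperature calibration performed in the micro-controller can be pictured with a small MATLAB sketch. The gain and the linear approximation are assumptions for illustration; a real K-type conversion would use the standard thermocouple tables:

% Thermocouple voltage -> junction temperature (illustrative sketch).
vAmp = 4.10;                  % amplified sensor output in volts (assumed)
gain = 100;                   % amplifier gain (assumed)
vTc  = vAmp / gain;           % recover the thermocouple-level voltage

% A K-type thermocouple produces roughly 41 microvolts per degree C,
% so a linear approximation of the calibration is:
tempC = vTc / 41e-6;
fprintf('Junction temperature: %.0f C\n', tempC);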
Different levels of temperature must be maintained at every junction in order to maintain perfect chemical reactions; if the chemical reactions are perfect, then the product grade, or quality, is good. The different chemical reactions taking place at the KILN junctions are shown below:

C + O2 → CO2
CaO + CO2 → CaCO3
MgO + CO2 → MgCO3
C + CO2 → 2CO
3Fe2O3 + CO → 2Fe3O4 + CO2
Fe3O4 + CO → 3FeO + CO2
FeO + CO → Fe + CO2

In this process, operators need to maintain different levels of temperature for the bed, gas and reduction. The temperature profile along the length of the rotary KILN of the sponge iron process can be adjusted to heat the raw iron ore to the reduction temperature within a short time, and these bed temperature levels are maintained to carry out the reduction and achieve the desired level or quality of Ferrous (Fe). In general, the temperature profile differs for various iron ores depending upon their reducibility characteristics. The typical temperatures to be maintained in a 100 TPD KILN are shown in Table 1 below.

Table 1. Rotary KILN junction temperatures (°C)

      T1    T2    T3    T4    T5    T6    T7
REC   750   860   960   1020  120   1030  1030
GAS   930   1030  1080  1080  1090  1100  1120
BED   850   980   1020  1030  1040  1050  --
The quality of the sponge iron product depends not only on the temperature and pressure levels in the combustion process, but also on the size of the raw iron ore. If the temperature levels at the junctions of the Rotary KILN are controlled perfectly, then the resulting Ferrous % at different iron ore sizes is as shown in Table 2 below. This paper completely describes the range of temperatures to be maintained in the combustion process for the reduction of iron, and also describes how grade levels change with respect to unwanted changes in the reduction of iron.
Communication between the ZigBee receiver and LabVIEW is configured in the Measurement and Automation Explorer (MAX) of LabVIEW. The data from the communication ports

Figure 8. Logical flow diagram of the receiver module at the analyzing unit.

REFERENCES
[1] Dr. Md. Fakhruddin Ansari, "Design and Implementation of SCADA Based Induction Motor Control," Journal of Engineering Research and Applications, ISSN: 2248-9622, Vol. 4, Issue 3, pp. 05-18, March 2014.
[2] K. Gowri Shankar, "Control of Boiler Operation using PLC-SCADA," Proceedings of the International Multi Conference of Engineers and Computer Scientists, Vol. II, IMECS 2008, Hong Kong, 19-21 March 2008.
[3] Nabil Daoud Talka, Qatar Steel Company Limited, "Utilization of Sponge Iron in Electric Arc Furnace," originally presented at AISU's 2nd Electric Furnace Symposium, Damascus, Syria.
[4] D. Roy Choudhury and Shail B. Jain, Linear Integrated Circuits, 2nd Edition, New Age International (P) Limited.
[5] G. Venkateswarlu, "SCADA Based Automatic Direct Reduction of Iron Process using Allen-Bradley Programmable Logic Controllers," CVR Journal of Science and Technology, Vol. 10, June 2016.
[6] Quan Luo, Linlen Qin, Xiaofeng Li, Gang Wu, "The implementation of wireless sensor and control system in greenhouse based on ZigBee," 2016 35th Chinese Control Conference (CCC).
[7] Xian-Jun Yi, Mi Zhou, Jian Liu, "Design of smart home control system by internet of things based on ZigBee," 2016 IEEE 11th Conference on Industrial Electronics and Applications (ICIEA).
[8] J. Ashely Jenifer, T. Sivachandar Banu, A. Darwin Jose Raju, "Automation and energy management of smart home using LabVIEW," 2016 International Conference on Energy Efficient Technologies for Sustainability (ICEETS).
[9] Xuhui Ren, Shili Rong, Zhuohan Li, "LabVIEW based temperature control platform design for a 4L jacketed reactor," 2016 35th Chinese Control Conference (CCC).
[10] Yasin Karan, Salim Kahveci, "Wireless measurement of thermocouple with micro controller," 2015 23rd Signal Processing and Communications Applications Conference.
Abstract: Aquatic ecosystems like lakes worldwide are being severely altered or destroyed at a rate greater than at any other time, and far faster than they are being restored. This article focuses on the restoration and conservation measures to be followed for the management of lakes, which face environmental degradation due to population explosion, urbanization, industrialization, discharge of domestic sewage, industrial effluents, chemical-intensive agriculture, dumping of municipal solid waste, idol immersion etc. These measures should be strictly followed by the people and the government, because lakes and their surroundings are unique assets and valuable ecosystems for both society and nature. Lakes have social, cultural, aesthetic and economic values, and it is our responsibility to retain the glory and pristine beauty of the lakes.

Index terms: Lakes, Degradation of lakes, Urbanization, Eutrophication, Siltation, Lake pollution, Lake conservation, Dal lake, Hussain Sagar lake.

I. INTRODUCTION

Water is the source of life on earth, and earth is known as the Watery Planet because it is the only planet in the solar system with an abundant source of water. About 97 percent of earth's total water supply lies in the oceans, which is unsuitable for human consumption due to its saline content; 2 percent is frozen in the polar ice caps, and the remaining 1 percent is available in lakes, rivers and groundwater (Table 1), which is suitable for human consumption. Lakes are the best available fresh water resources on the earth's surface, as we can freely access only the water in lakes. Lakes have traditionally served the function of meeting the water requirements of the people for drinking, for household uses like washing, for agriculture, for fishing, and also for religious and cultural purposes. Apart from these functions, lakes are known to recharge ground water, control runoff, moderate the hydrological extremes of drought and flood, host a variety of flora and fauna, and provide a wide array of recreational activities and aesthetic benefits for humans to enjoy.

With rapid urbanization and expansion of city boundaries, a number of lakes in urban areas are facing issues of over-exploitation, encroachment, pollution etc. Therefore, efforts need to be initiated to restore and conserve the lakes.

TABLE 1
DISTRIBUTION OF WATER ON EARTH (%)

Oceans                97.3
Glaciers and icecaps  2.14
Ground water          0.61
Lakes                 0.017
Rivers                0.0001
Atmosphere            0.001

II. CAUSES FOR DEGRADATION OF LAKES

For mankind, water is a basic need. It influences and alters the social, cultural, political and religious heritages of different communities. The need for a plentiful supply of water is universally demanded. However, much importance is not given to the quality of water. All over the world, the first victims of water pollution are water bodies like lakes, so that even one-time drinking water resources are facing a crisis. In the last half of the 20th century, lakes underwent unprecedented environmental degradation. The major factors that lead to the degradation of lakes are:

- Rapid urbanization and encroachment
- Continuous flow of untreated sewage
- Intensive agricultural runoff
- Discharge of industrial toxic effluents
- Dumping of debris and garbage
- Heavy siltation and pollution due to idol immersion.

Fig. 1. Encroachment along the lake boundary

Fig. 2. Disposal of garbage in the lake

Various problems are associated with urban water bodies. Human settlements and effluents generated from various sources are the chief factors for the degradation of lakes. Anthropogenic pressure has also resulted in degradation due to deforestation, extensive agricultural use, and the flow of silt and harmful chemicals. The tourists who come to visit the lakes pollute them by throwing harmful waste and polythene bags. Increasing encroachment on the banks of the lakes causes deterioration of water quality and disturbs the biodiversity of the lake; all these have an impact on climate change as well.

Rapid urbanization has the following impacts on lakes:

A. Eutrophication
Lakes in urban areas receive enough nutrients, like nitrates and phosphates from sewage, industrial effluents, and fertilizer- and pesticide-rich run-off from agricultural fields, to promote the rapid and lush growth of oxygen-consuming algae, especially blue-green algae, and aquatic weeds like water hyacinth. This growth deoxygenates the water, and the depleted levels of oxygen lead to a situation where aquatic life forms cannot survive. The dead organisms undergo anaerobic decomposition, releasing anoxic gases like methane, hydrogen sulphide and carbon dioxide, which make the lakes stink, emitting a foul smell. This process of nutrient enrichment of lakes leading to excessive plant growth is called Eutrophication, and such lakes are termed Eutrophic lakes.

Fig. 3. Eutrophication: infestation with water hyacinths

B. Siltation or Sedimentation
Siltation is a form of water pollution where water flowing into the lakes brings sediment, either silt or clay, which settles at the bottom of the lakes. Activities like deforestation, intensive agricultural cultivation and land clearance for construction loosen the top soil, which finds its way into the lakes.

IV. IMPACT OF LAKE POLLUTION

Lake pollution has the following effects:

A. Toxic chemical effects
Toxic chemical substances like heavy metals and pesticides in sewage, industrial effluents and agricultural run-off pollute the lakes, affecting aquatic organisms and humans. Pesticides like DDT and heavy metals like lead, mercury and cadmium, which are not water soluble, are absorbed into the tissues of organisms from the polluted water and accumulate in the organism's body. This process is called Bioaccumulation. The concentration of these toxic substances builds up at successive levels of the food chain. This process is called Biomagnification. These phenomena cause various types of diseases in humans, affecting different body organs.

B. Water borne diseases
Waste water, especially sewage, that is discharged into lakes contains pathogenic (disease-causing) organisms like bacteria, viruses and parasites that are capable of transmitting water-borne diseases to humans. Some of the water-borne diseases are diarrhoea, cholera, typhoid, dysentery, jaundice, gastroenteritis etc.

V. NEED FOR LAKE CONSERVATION

Lakes, when restored and conserved, have the following environmental and ecological benefits:

- Harvest rainwater and recharge ground water.
- Reduce water logging and flood risk.
- Increase economic activities through ecotourism and recreational activities.
- Enhance biodiversity in and around the lakes.
- Improve the health conditions of the people living in the lake surroundings.

VI. LAKE CONSERVATION STRATEGIES

A. Lake protection steps:
- Preventing encroachment of lake surroundings for different activities.
- The shore line of the lakes must be properly fenced to protect from encroachment.

B. Lake management steps:
- Construction of sewage treatment plants for treating sewage before letting the water into the lakes.
- Separating waste water from the storm water.
- The inlets and outlets of the lakes should be identified and monitored at regular intervals.
- Encouraging proper management and handling of municipal solid waste.
- Beautification of the lake bund by landscaping and plantation.
- Plantation on the lake surroundings to prevent soil erosion.
- Increasing community participation.
- Reducing pollution through idol immersion by following the Green Ganesha drive, i.e., making use of environmentally friendly idols.
- Environmental education and awareness.

C. Lake restoration steps:
- De-silting the lake bed by dredging.
- De-weeding.
- Removal of floating aquatic plant species.

VII. LEGAL FRAMEWORK FOR LAKE CONSERVATION IN INDIA

The lakes and water bodies of India are directly influenced by a number of legal and regulatory frameworks. The fundamental duties enshrined in the Constitution of India, Article 51A(g), state: "It shall be the duty of every citizen of India to protect and improve the natural environment including forests, lakes, rivers and wild life, and to have compassion for living creatures."

REFERENCES