
SEPTEMBER 8-10, 2021

BUDVA, MONTENEGRO

ICONST EST’21
ENGINEERING SCIENCE
4th INTERNATIONAL CONFERENCE ON
ENGINEERING SCIENCE AND TECHNOLOGY
www.iconst.org

Proceedings & Abstracts Book


ICONST EST 2021
International Conferences on Science and Technology
Engineering Science and Technology
September 8-10 in Budva, MONTENEGRO

ABSTRACTS
&
PROCEEDINGS BOOK
ICONST EST 2021
International Conferences on Science and Technology
Engineering Science and Technology
September 8-10 in Budva, MONTENEGRO

Editors
Dr. Mustafa Karaboyacı
Dr. Kubilay Taşdelen
Dr. Abdullah Beram
Dr. Hamza Kandemir
Dr. Ergin Kala
MSc. Serkan Özdemir

Technical Editors
MSc. Tunahan Çınar
MSc. Şerafettin Atmaca
MSc. Doğan Akdemir
Ma. Fıratcan Çınar

Cover design & Layout


MSc. Kubilay Yatman

Copyright © 2021
All rights reserved. The papers can be cited with appropriate references to the publication. Authors are responsible for the
contents of their papers.

Published by
Association of Kutbilge Academicians, Isparta, Turkey
E-Mail: [email protected]

Publication Date: 28/09/2021


ISBN: 978-605-70965-2-4
ICONST EST 2021
International Conferences on Science and Technology
Engineering Science and Technology
September 8-10 in Budva, MONTENEGRO

Scientific Honorary Committee


Prof. Dr. Rade RATKOVIC, Fakultet za biznis i turizam Budva University, MONTENEGRO
Prof. Dr. İlker Hüseyin ÇARIKÇI, Suleyman Demirel University, TURKEY
Prof. Dr. İbrahim DİLER, Isparta University of Applied Sciences, TURKEY
Prof. Dr. Vujadin VEŠOVIĆ, Faculty of Transport Communications and Logistics, MONTENEGRO
Prof. Dr. Bujar DEMJAHA, Rector of AAB College, KOSOVO
Prof. Dr. Samedin KRRABAJ, University of Prizren, KOSOVO
Prof. Dr. Edmond HAJRİZİ, University for Business and Technology, KOSOVO
Prof. Dr. Fadıl HOCA, International Vision University, MACEDONIA
Prof. Dr. Naime BRAJSHORİ, Kolegji Heimerer, KOSOVO
Prof. Dr. Harun PARLAR, Parlar Research & Technology-PRT, GERMANY
Prof. Dr. Ahmad UMAR, Science of Advanced Materials, KINGDOM OF SAUDI ARABIA
Prof. Dr. Kürşad ÖZKAN, Isparta University of Applied Sciences, TURKEY
Prof. Dr. Mehmet KILIÇ, Suleyman Demirel University, TURKEY

Organizing Committee
Dr. Mustafa Karaboyacı, Suleyman Demirel University, TURKEY
Dr. Hamza Kandemir, Isparta University of Applied Sciences, TURKEY
Dr. Kubilay Taşdelen, Isparta University of Applied Sciences, TURKEY
Dr. Abdullah Beram, Isparta University of Applied Sciences, TURKEY
MSc. Serkan Özdemir, Isparta University of Applied Sciences, TURKEY
Dr. Ergin Kala, University of Prizren, KOSOVO

Technical Committee
MSc. Kubilay Yatman, Isparta University of Applied Sciences, TURKEY
MSc. Doğan Akdemir, Balıkesir University, TURKEY
MSc. Şerafettin Atmaca, Suleyman Demirel University, TURKEY
MSc. Fatih Yiğit, Isparta University of Applied Sciences, TURKEY
MSc. Tunahan Çınar, Isparta University of Applied Sciences, TURKEY
Ma. Fıratcan Çınar, Isparta University of Applied Sciences, TURKEY
ICONST EST 2021
International Conferences on Science and Technology
Engineering Science and Technology
September 8-10 in Budva, MONTENEGRO

Scientific Committee
Dr. Alev Akpınar Borazan, Bilecik Seyh Edebali University, Turkey
Dr. Amer Kanan, Al-Quds University, Palestine
Dr. Andrea G. Capodaglio, University of Pavia, Italy
Dr. Aybeyan Selim, International Vision University, North Macedonia
Dr. Apostolos Kiritsakis, Alexander Tech. Educational Ins. of Thessaloniki, Greece
Dr. Ayodeji Olalekan Salau, Obafemi Awolowo University, Nigeria
Dr. Bülent Derviş, International Vision University, North Macedonia
Dr. Cristian Fosalau, Technical University of Iasi, Romania
Dr. Driton Vela, University of Business and Technology, Kosovo
Dr. Eda Mehmeti, University of Business and Technology, Kosovo
Dr. Elvida Pallaska, University of Business and Technology, Kosovo
Dr. Ermek A. Aubakirov, Al-Farabi Kazakh National University, Kazakhstan
Dr. Fecir Duran, Gazi University, Turkey
Dr. Gauss M. Cordeiro, Federal University of Pernambuco, Brazil
Dr. Gholamhossein Hamedani, Marquette University, USA
Dr. Gülcan Özkan, Süleyman Demirel University, Turkey
Dr. Hamid Doost Mohammadian, FHM University of Applied Sciences, Germany
Dr. Ines Bula, University of Business and Technology, Kosovo
Dr. Izabela Zimoch, Silesian University of Technology, Poland
Dr. Joanna Boguniewicz-Zabłocka, Opole University of Technology, Poland
Dr. Kari Heliövaara, University of Helsinki, Finland
Dr. Kłosok-Bazan Iwona, Opole University of Technology, Poland
Dr. Kubilay Akçaözoğlu, Niğde Ömer Halisdemir University, Turkey
Dr. Leyla Tavacıoğlu, Istanbul Technical University, Turkey
Dr. Lulzim Beqiri, University for Business and Technology, Kosovo
Dr. Mathew Ademola Jayeola, Obafemi Awolowo University, Nigeria
Dr. Mehmet Kılıç, Suleyman Demirel University, Turkey
Dr. Mehmet Kitiş, Suleyman Demirel University, Turkey
Dr. Merita Barani, University for Business and Technology, Kosovo
Dr. Meruyert Kaygusuz, Pamukkale University, Turkey
Dr. Mirosław Kwiatkowski, AGH University of Science and Technology, Poland
Dr. Mohd Aswadi Bin Alias, University Kuala Lumpur - BMI, Malaysia
Dr. Muhamet Ahmeti, University of Business and Technology, Kosovo
Dr. Naushad Ali Mamode Khan, University of Mauritius, Mauritius
Dr. Nicholas Baldacchino, Malta College of Arts, Science & Technology, Malta
Dr. Nuray Benli Yıldız, Duzce University, Turkey
Dr. Rahmon Ariyo Badru, Obafemi Awolowo University, Nigeria
Dr. Ramazan Şenol, Suleyman Demirel University, Turkey
Dr. Salina Muhamad, Universiti Selangor, Malaysia
Dr. Sami Makolli, University of Business and Technology, Kosovo
Dr. Serhat Oğuzhan Kıvrak, Hitit University, Turkey
Dr. Shpend Dragusha, University of Business and Technology, Kosovo
Dr. Şule Sultan Uğur, Suleyman Demirel University, Turkey
Dr. Valmir Hoxha, University of Business and Technology, Kosovo
Dr. Vehebi Sofiu, University of Business and Technology, Kosovo
Dr. Vincenzo Naddeo, University of Salerno, Italy
Dr. Zhandos T. Mukayev, Shakarim State University of Semey, Kazakhstan
ICONST 2021
International Conferences on Science and Technology
Engineering Science and Technology
Life Science and Technology
Natural Science and Technology
September 8-10, 2021 in Budva, MONTENEGRO

Dear Readers;

The fourth ICONST conference was held in Budva, Montenegro, on 8-10 September 2021, once again with the theme of 'science for sustainable technology'. In recent years, weather changes due to climate change have reached a level perceptible to everyone and have become a major concern. For this reason, scientific studies that make technological progress sustainable are seen as the only solution for humanity's salvation. Here we ask ourselves, "which branch of science is responsible for sustainability?" Sustainability science is an interdisciplinary field of study that covers all basic sciences together with their social, economic and ecological dimensions. If we consider technology as the practical application of scientific knowledge, the task of scientists under these conditions is to design products that consume less energy, require fewer raw materials, and last longer.

ICONST organizes congresses on sustainability issues in three main fields of study at the same time, in order to present different perspectives to scientists. This year, 157 papers from 28 different countries were presented by scientists at the ICONST conferences.

85 papers from 17 countries were presented at our International Conference on Engineering Science and Technology, organized under ICONST. Turkey leads the way with 49% of the participants, followed by Kosovo and Moldova with 8.2%, North Macedonia with 4.7%, Algeria, Azerbaijan, Hungary, Italy, Montenegro and Poland with 3.5%, and Croatia, the Czech Republic, the Kingdom of Saudi Arabia, Japan, Kyrgyzstan, Portugal and Russia with 1.2%.

57 papers from 13 countries were presented at our International Conference on Life Science and Technology, organized under ICONST. Turkey leads the way with 49% of the participants, followed by Poland with 12.7%, Kosovo with 11%, the United Kingdom with 5.4%, Kazakhstan, the USA, Tunisia and Croatia with 3.6%, and Serbia, Israel, the Czech Republic and Montenegro with 1.8%.

Finally, 15 papers from 8 countries were presented at our International Conference on Natural Science and Technology, organized under ICONST. Turkey leads the way with 47% of the participants, followed by Kosovo with 11%, and Serbia, Egypt, Bosnia and Herzegovina, Italy, Poland, North Macedonia and Romania with 6%.

As the ICONST organization, we will continue to organize events with the value you deserve, in order to exchange ideas in the face of the greatest threat facing humanity, to inspire each other, and to contribute to science. See you at future events.

ICONST Organizing Committee


ICONST EST 2021
International Conferences on Science and Technology
Engineering Science and Technology
September 8-10 in Budva, MONTENEGRO

Contents

Advanced Functional Nanomaterials: From Growth to Applications
Ahmad Umar (Online Presentation, Kingdom of Saudi Arabia) ... 1

Efficiency of Singularity and PCA Mapping of Mineralization-Related Geochemical Anomalies: A Comparative Study Using BLEG and <180µm Stream Sediment Geochemical Data in Eskisehir-Sivrihisar Region
Fatma Nuran Sönmez, Hüseyin Yılmaz (Online Presentation, Turkey) ... 2

The Approach of The Critical Size Bone Defects by Decellularized Vascularized Bone Allografts
Pavlovschi Elena, Stoian Alina, Verega Grigore, Nacu Viorel (Online Presentation, Moldova) ... 3

Submarine Active and Potentially Active Faults, Gas Seeps and Diapirs in the Kusadasi Gulf and Surroundings, Aegean Sea
Savaş Gürçay (Online Presentation, Turkey) ... 4

Determination of Geological-Geochemical Properties of Magnesite Formations Observed in Kizildag Ophiolites
Yusuf Topak (Online Presentation, Turkey) ... 5

Denim Clothing Design with Ecological Footprint
Şükriye Yüksel (Online Presentation, Turkey) ... 6

Extraction of Heavy Elements Using Liquid-Liquid Extraction
F. Ghebghoub, D. Barkat (Online Presentation, Algeria) ... 7

Encryption Technique With Catalan Numbers
Aybeyan Selim (Online Presentation, North Macedonia) ... 8

Between History And Technology: The Small Villas Of The Late Nineteenth Century In The Syracuse Countryside. The Recovery Of Villa Ortisi
Fernanda Cantone, Francesca Castagneto (Online Presentation, Italy) ... 9

General Usage and Features of Data Visualization Software
Fehmi Skender, Aybeyan Selim, Ilker Ali (Online Presentation, North Macedonia) ... 10

The Chance of Architectural Heritage of Our Recent History
Péter Fejérdy (Online Presentation, Hungary) ... 11

Identification And Measures to Eliminate Delays in The Construction Sector in Kosovo
Muhamet Ahmeti (Online Presentation, Kosovo) ... 12

The Impact of Digital Technology on Advanced Business Processes
Ylber Limani, Edmond Hajrizi (Online Presentation, Kosovo) ... 13

The Geology And Paleogeographic Evolution of Saraykoy (Denizli)
Mahmut Ziya Görücü (Online Presentation, Turkey) ... 14

Lessons Learnt from Chernobyl and Fukushima Accidents Applicable for the Protection Against CBRN Attacks
Jozef Sabol (Oral Presentation, Czech Republic) ... 15

Effect of Rare Earth Metals on Electrode Potential in Anodic Oxidation as A Novel Electrode for Different Kind of Contaminants
Dilara Öztürk, Abdurrahman Akyol (Oral Presentation, Turkey) ... 16

Biomimetic Approaches to Develop Safe-By-Design Antimicrobial Textiles
Isabel C. Gouveia, Frederico Nogueira, Cláudia Mouro, Ana P. Gomes (Online Presentation, Portugal) ... 17

An Estimation of the Hydrologic Water Budget Components of the Seyfe Lake Basin by Using Hydrologic Characteristics and Hydrometeorological Data
Cansu Yurteri, Türker Kurttaş (Oral Presentation, Turkey) ... 18

Comparative Analysis of Dimension Reduction and Classification using Cardiotocography Data
Mahmut Tokmak, Ecir Uğur Küçüksille (Oral Presentation, Turkey) ... 19

Main Sources of Microplastic Pollution in Aquatic Environments
Kamila Sobkowiak (Oral Presentation, Poland) ... 20

Product Debugging Facility Design Using Image Processing For Defense Industry Gunpowder Production
Fatih Ilgın, Mustafa İlker Erdursun (Oral Presentation, Turkey) ... 21

Concentrated Kefir Production by Ultrafiltration
Firuze Ergin, Gülfide Tair, Ezgi Tekşan, Gözde Nuran Balcı, Ahmet Küçükçetin (Online Presentation, Turkey) ... 22

Behavior of Sugar Consumption and Lifestyle in the Republic of Moldova
Aurica Chirsanova, Tatiana Capcanari, Rodica Sturza, Olga Deseatnicova (Poster Presentation, Moldova) ... 23

A Research on Urban Furniture Design: Example of Isparta
Abdullah Beram (Oral Presentation, Turkey) ... 24

Impact of Pandemic (Covid 19) on air quality in Prishtina
Besa Veseli, Shkumbin Shala, Vehebi Sofiu (Poster Presentation, Kosovo) ... 25

The light metals minerals of Montenegro
Biljana Zlaticanin, Sandra Kovacevic (Poster Presentation, Montenegro) ... 26

The influence of the process parameters on the microstructure of Al-Cu-Mg-Ti alloys
Biljana Zlaticanin, Sandra Kovacevic (Poster Presentation, Montenegro) ... 27

Multivariate Methods for Seasonal Characterization of Air Pollution
Virgjina Lipoveci, Mirjana Čurlin (Poster Presentation, Hungary) ... 28

The Assessment of Trihalomethanes (THMs) Concentrations in Drinking Water from Selected Distribution Systems in Opole Province
Iwona Klosok-Bazan, Joanna Boguniewicz, Agnieszka Drozdek (Poster Presentation, Poland) ... 29

Application of the Rutherford Backscattering Method in Powder Nanotechnology
A.A. Tatarinova, A.S. Doroshkevich, M. Kulik, M.A. Balasoiu, V. Almasan, D. Lazar (Poster Presentation, Russia) ... 30

Process of Drying Peaches by Forced Convection
Natalia Tislinscaia, Vitali Visanu, Mihail Balan, Mihail Melenciuc (Poster Presentation, Moldova) ... 31

Innovation Strategies of Functional Plant Yogurt Production for Personalized Nutrition
Tatiana Capcanari, Aurica Chirsanova, Rodica Siminiuc (Poster Presentation, Moldova) ... 32

Evaluation Des Biomateriaux A Base Des Residus Agricoles [Coques De Noix] Activee Par L'Acide De Citron, NaOH Et H3PO4. Application Au Traitement Des Eaux
Amel Aidi, Assia Slimani, Ammar Fadel (Poster Presentation, Algeria) ... 33

Physico-Chemical Properties of Rapeseed Honey from the Republic of Moldova
Chirsanova Aurica, Tatiana Capcanari, Alina Boistean (Poster Presentation, Moldova) ... 34

Study of the Ohrid Traditional Ottoman Houses Local Architecture in Sustainability Context
Levent Menga (Online Presentation, North Macedonia) ... 35

Cluster Analysis of Mobile Devices
Samedin Krrabaj (Oral Presentation, Kosovo) ... 36

Proposal and Analysis of the Geothermal Energy Based Plant; Thermodynamic Assessment
Oğuzhan Akbay, Fatih Yılmaz (Online Presentation, Turkey) ... 37

Comparative Performance Investigation of a Transcritical CO2 Power Plant Using with Waste Heat
Fatih Yılmaz (Online Presentation, Turkey) ... 47

The Effects of Coronavirus in the Construction Industry: A Case of Turkey
Pınar Usta, Başak Zengin, Kübra Arslan (Online Presentation, Turkey) ... 55

Sustainable Thinking, Educational Opportunities in Interior Architecture Projects
Munteanu Angela (Online Presentation, Moldova) ... 74

Reducing the Estimation Error of the Measure of Proximity Between Objects in Pattern Recognition
Rahim Mammadov, Gurban Mammadov, Sevinj Aliyeva (Online Presentation, Azerbaijan) ... 79

Hide Data In 24-Bit And 8-Bit Bmp And Tiff Files, Reading Confidential Data And Comparing With Image Quality Criteria According To Steganography Principles
Remzi Gürfidan, Ziya Dirlik (Online Presentation, Turkey) ... 88

Detection of Hail Damage in Fruits Using Image Processing Techniques with Kinect Sensor
Enes Açıkgözoğlu, Remzi Gürfidan (Online Presentation, Turkey) ... 97

A Novel Multi-attribute Visual CAPTCHA Model Approach
Ziya Dirlik, Ayhan Arısoy (Online Presentation, Turkey) ... 104

Assistant Referee Offside Signals Training Simulator System Design
Ayhan Arısoy, Enes Açıkgözoğlu (Online Presentation, Turkey) ... 110

Internet of Things Based Real-Time Fatigue Detection System for Drivers with Kinect Sensors
Enes Açıkgözoğlu, Ziya Dirlik, Ayhan Arısoy (Online Presentation, Turkey) ... 116

Performance Analysis Of Advanced Encryption Standard Algorithm Using Parallel Computing For Embedded Systems
Muhammet Cihat Mumcu, Güner Tatar (Online Presentation, Turkey) ... 121

Changes in Agrochemical Indicators of Soils Under the Rotational Technique of Pasture Use in The Conditions of the Kyrgyz Republic
Totubaeva N.E., Shalpykov K.T. (Online Presentation, Kyrgyzstan) ... 132

The “sustainable” Landscape: Learning from the Building Tradition of the Hyblean Countryside to Prepare for the Future
Gianfranco Gianfriddo, Luigi Pellegrino, Matteo Pennisi (Online Presentation, Italy) ... 142

Evaluation of Parameters Affecting Frequency Response Analysis Measurements in Power Transformers
Selim Köroğlu, Akif Demirçalı, Mustafa Yıldız (Online Presentation, Turkey) ... 152

Simultaneous Hybrid Use of Drinking Water Pump Energy from Grid and Solar Energy
Aydın Güllü (Online Presentation, Turkey) ... 162

Effects of Different Stitch Combinations on The Seam Bursting Characteristics of PET/Co Workwear
Sükran Kara (Online Presentation, Turkey) ... 169

Geomorphometric Analysis of the Sub-watersheds in the Eastern Black Sea Region, Turkey
Senem Tekin, Tolga Çan (Online Presentation, Turkey) ... 176

Influence of Cell Transportation Microchannel Wall Quality on Cell Deposition Rate: a DPM Analysis
Daver Ali (Online Presentation, Turkey) ... 188

Preprocessing of Seismic Signals on base of AI
Ramziyya Garazade, Naila Allahverdiyeva (Online Presentation, Azerbaijan) ... 192

The Effect of Holding Time on the Mechanical Properties of TFP Produced Thermoplastic Matrix
Hasan Kara, Mustafa Özgür Bora, Emine Baş (Online Presentation, Turkey) ... 203

Image Data Augmentation Techniques for Fracture Detection of Dogs
Gülnur Begum Ergün, Selda Güney (Online Presentation, Turkey) ... 214

An Example of Construction Structures and Design Planning as a Sustainable-Eco-Village Akbaş Village Analysis
Ayşe Arıcı (Online Presentation, North Macedonia) ... 220

Moving Towards Sustainable Construction: A primitive transitional guide
Hagar Ali Habib, Gökhan Gelişen (Online Presentation, Turkey) ... 229

Relationship And Differences Between Leadership And Management In Construction
Hagar Ali Habib, Gökhan Gelişen (Online Presentation, Turkey) ... 235

Thermodynamic Assessment of Solar-Driven Rankine Cycle for Supercritical Working Fluids
Serpil Çelik Toker, Gamze Soytürk, Hiroshi Yamaguchi, Önder Kızılkan (Oral Presentation, Japan) ... 241

Comparative Thermodynamic Investigation of Ground Coupled Refrigeration System for Supercritical Refrigerants
Gamze Soytürk, Serpil Çelik Toker, Önder Kızılkan (Oral Presentation, Turkey) ... 252

Modelling the Color Removal Efficiency of an Electrochemical Process from Organic Wastewater by Response Surface Method
Oğuz Şahiner, Murat Solak (Oral Presentation, Turkey) ... 265

Description of 7.5kW Plant Pollution In PV System
Vehebi Sofiu, Sami Gashi, Besa Veseli, Shkelzim Ukaj, Muhaxherin Sofiu (Oral Presentation, Kosovo) ... 278

Diagnostic Expert Systems
Rahimova N.A., Abdullayev V.H. (Oral Presentation, Azerbaijan) ... 284

Investigation of the Relationship Between Bridge Equipment Location, Fatigue and Mental Workload by Using Piper Fatigue Scale and NASA-TLX
Leyla Tavacıoğlu, Bayram Barış Kızılsaç, Neslihan Gökmen İnan, Özge Eski, Can Tanguç (Oral Presentation, Turkey) ... 293

The Developing Automation and Applications in Maritime Transformation Process of Freights
Leyla Tavacıoğlu, Bayram Barış Kızılsaç, Özge Eski, Neslihan Gökmen İnan, Mehmet Mert Dalyan, Ercan Emre Erköse (Oral Presentation, Turkey) ... 300

Identification of Defective Cherries Using Convolutional Neural Network
Ali Kaygısız, Abdulkadir Çakır (Oral Presentation, Turkey) ... 312

Estimation of the NiTi alloy Corrosion Rate Dependence on the Percentage of Oxygen in Three Different Seawater Environments
Nataša Kovač, Špiro Ivošević, Radmila Gagić (Oral Presentation, Montenegro) ... 323

Estimation Forest Cover Map With Fusion Lidar And Sentinel Data
Nuray Bas (Oral Presentation, Turkey) ... 335

Observations on Public Space in The City: the Town Hall Square in Vigonza (Italy)
Enrico Pietrogrande, Alessandro Dalla Caneva (Oral Presentation, Italy) ... 347

Crease-Resistance Treatments of Cotton Fabrics by Electrostatic self-Assembly
Buse Sağgün, Şule Sultan Uğur, Okan Ayvacık (Oral Presentation, Turkey) ... 359

All Optical Gate Based on Photonic Crystal Ring Resonator
Lila Mokhtari, Hadjira Badaoui, Mehadji Abri, Rahmi Bachir, Lallam Farah, Moungar Abdelbasset (Oral Presentation, Algeria) ... 363

Air Pollution Prediction Based on LSTM Neural Network: Sample of Isparta Province
Mahmut Tokmak (Oral Presentation, Turkey) ... 371

Pro and Contra for Self-Driving Car: Public Opinion in Serbia
Livija Cveticanin, Ivona Ninkov (Oral Presentation, Hungary) ... 378

Effects of Activated Carbon on Medium Density Fiberboard Properties
Ayşe Ebru Akın, Mustafa Karaboyacı (Oral Presentation, Turkey) ... 391

Performance Analysis of the FBMC-OFDM Waveform in Multipath Fading Channels
Halil Alptuğ Dalgıç, Kubilay Taşdelen (Oral Presentation, Turkey) ... 402

The Normative Regulations, Legislation and Standards on the Control and Preservation of Electronic Records in the Northern Countries of Europe
Lana Žaja (Poster Presentation, Croatia) ... 410

E-learning Technology in Higher Education: A Review
Faton Kabashi, Zamir Dika, Lamir Shkurti, Vehbi Sofiu (Poster Presentation, Kosovo) ... 428

The Experience in TUMnanoSAT Launch Preparation
Viorel Bostan, Valentin Ilco, Vladimir Melnic, Alexei Martiniuc, Vladimir Vărzaru, Nicolae Secrieru (Poster Presentation, Moldova) ... 442

Signal Performance with eon-xr Technology and Frequency Simulation Mode with Radio Telescope on the MATLAB Platform
Vehebi Sofiu, Faton Kabashi, Naim Baftiu (Poster Presentation, Kosovo) ... 453

Sun Dyeing of Wool Yarns with Pyracantha coccinea Roem. Fruits
Selime Çolak, Meruyert Kaygusuz, Fatoş Naslihan Arğun (Oral Presentation, Turkey) ... 461

Synthesis and Characterization of Cellulose Acetate from Waste Spartium Juncem Flowers
Özlem Karaboyacı, Semra Kılıç (Oral Presentation, Turkey) ... 467

Determination of Volatile Component and Saponin Content of Jujube Tree Leaves Pre-Fruit and Post-Harvest
Musa Denizhan Ulusan, Mustafa Karaboyacı (Oral Presentation, Turkey) ... 473

Dye Sensitized Solar Cell Production by Doctor Blade Method Using Bezathren Yellow 5GF Vat Dye
Kamila Sobkowiak, Mustafa Karaboyacı (Oral Presentation, Poland) ... 478
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021 – Keynote Talk

Advanced Functional Nanomaterials: From Growth to


Applications

Ahmad Umar1,2*

Abstract: Nanoscale materials, which are at the foundation of Nanoscience and


Nanotechnology, have sparked a lot of interest and anticipation in recent years because they
possess a high surface area and exhibit chemical and physical characteristics that differ from both the bulk phase and individual molecules. Nanoscale materials or
nanomaterials research has grown at a rapid pace, and it is now one of the most popular research
topics among scientists and engineers due to their diverse structures, intriguing properties, and
high-tech applications in electronics, catalysis, chemical engineering, pharmaceutics, biology,
magnetic recording, and other fields. Due to their broad structural, physical, and chemical
characteristics and functions, metal oxide semiconductor nanostructures stand out as one of the
most prevalent, most diversified, and most likely richest classes of materials among
semiconductor nanostructures. Metal oxide nanostructures' unique and tunable characteristics,
such as optical, optoelectronic, magnetic, electrical, mechanical, thermal, catalytic, and photo-
electrochemical, among others, make them ideal candidates for a variety of high-level
technological applications.

In this lecture, I will demonstrate the growth, properties and applications of functional
nanomaterials, especially the pure and doped metal oxide nanomaterials. Various metal oxide
nanomaterials such as zinc oxide (ZnO), copper oxide (CuO), iron oxide (Fe2O3), cerium oxide
(CeO2), etc., and their composites will be explored from their synthesis to their potential
applications. This lecture will cover a variety of applications based on metal oxide
nanostructures, including chemical and biosensors, photocatalysis, dye-sensitized solar cells,
and so on.

1Department of Chemistry, Faculty of Science and Arts, Najran University, Najran-11001, Kingdom of Saudi Arabia
2Promising Centre for Sensors and Electronic Devices (PCSED), Najran University, Najran-11001, Kingdom of Saudi Arabia
* Corresponding author: [email protected]
1
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Efficiency of Singularity and PCA Mapping of Mineralization-


Related Geochemical Anomalies: A Comparative Study Using
BLEG and <180μm Stream Sediment Geochemical Data in
Eskişehir-Sivrihisar Region

Fatma Nuran Sönmez1, Hüseyin Yılmaz1*

Abstract: Fine-grained stream sediments are the most common sampling media in regional
geochemical exploration programs, with analysis of Au, Ag and other elements extracted by
bulk cyanide leach (BLEG) methods used in the early stages of stream sediment-based exploration, followed by aqua regia extractions at later stages. The Eskisehir-
Sivrihisar region in Western Turkey includes several orogenic type mineral deposits
including Au-bearing quartz vein systems. The purpose of this study is to delineate
geochemical anomalies of ore and related elements and track their dispersion, which may
lead to discovery of unknown ore deposits. Using a geochemical database generated
through company’s exploration campaigns (Eurogold, Normandy Mining Ltd., Australia),
this research also compares the capability of conventional statistical methods (such as Q-
Q, Mean + 2STD, 3SD and 4SD) and principal component analysis (PCA), with
concentration area (C-A) and number-size/concentration (N-S/C) fractal methods and
singularity index method to differentiate anomalous and background Au distributions or
define areas with geochemical signals related to mineralization (given singularity index
mapping/S.M does not define threshold values). Known Au mineralization in the region of
interest is strongly reflected in stream sediment BLEG Au patterns, which have robust
singularity indices with C-A and N-S multifractal modeling and PCA. All (100%) of the Au deposits were detected using BLEG Au and Ag singularity index mapping with C-A fractal analysis, whereas factor analysis showed 85% efficiency. Several strong Au-Ag anomalies defined by the singularity index and factor analysis in this study need further follow-up for the discovery of new deposits. Conventional approaches to anomaly detection in the BLEG and <180µm stream sediment data failed to detect a significant proportion of the deposits, including some major deposits in the vicinity of Sivrihisar.

Keywords: Singularity mapping; C-A; N-S; PCA; multi-fractal models; Eskişehir.
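As a rough illustration of the conventional thresholding and PCA steps named above, the sketch below computes a mean + 2SD anomaly cut-off and a two-component PCA on log-transformed stream sediment data (a minimal sketch only, not the authors' workflow; the file name, the column names and the use of scikit-learn are assumptions):

# Hypothetical sketch: mean + 2SD threshold and PCA on log-transformed stream
# sediment geochemistry; file and column names are illustrative only.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("bleg_stream_sediments.csv")   # assumed columns: Au, Ag, As, Sb
elements = ["Au", "Ag", "As", "Sb"]
logged = np.log10(df[elements])                 # geochemical data are usually log-transformed

# Conventional threshold: samples above mean + 2*SD of log10(Au) are flagged as anomalous.
threshold = logged["Au"].mean() + 2 * logged["Au"].std()
df["Au_anomalous"] = logged["Au"] > threshold

# PCA on standardized log values; the component with high Au-Ag loadings is usually
# read as the mineralization-related signal in this kind of study.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(logged))
df[["PC1", "PC2"]] = scores
print(df[["Au_anomalous", "PC1", "PC2"]].head())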

1
Dokuz Eylül University, Faculty of Engineering, Department of Geology, İzmir, Turkey
* Corresponding author: [email protected]
2
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

The approach of the critical size bone defects by decellularized


vascularized bone allografts. Preliminary report of the in-vivo
experiment.

1
Pavlovschi Elena, 1,2Stoian Alina, 2Verega Grigore, 1Nacu Viorel

Abstract: Tissue transplantation is a successful approach to rebuild the osteoarticular defects.


Critical bone defects remain a dilemma for reconstructive surgery. Decellularization of
organs, including bone, gives an acellular biological graft, which keeps their extracellular
three-dimensional structure. Theoretically, maintaining the osteoplastic properties of the
vascularized autograft, combining them with the orthotopic characteristics of an allogenic
bone, the vascularized bone allograft would be a successful alternative for the reconstructive
surgery of the skeletal system. The aim was to extract the cellular component from the vascularized bone allograft by the combined method, according to the algorithm, without injuring the extracellular structure and matrix, in order to obtain a graft suitable for subsequent inclusion in the host blood circulation without immunosuppression.

The bone segment was taken from a domestic rabbit (New Zealand White Rabbit). The femur was harvested together with the internal iliac artery, between the upper part of the greater trochanter and the distal third of the femoral shaft, respecting vascular continuity. The graft
was processed, gradually, with a series of solutions, during mechanical agitation.

The optimal segment for vascularized allografting (in the rabbit model) was determined to be the upper third of the femur, with the vessels up to the level of the internal iliac artery. The
decellularization process was applied according to the established protocol. Used
decellularizing agents were physical, chemical, and biological. They assured the efficient
removal of cellular content from the tissue, without damaging the three-dimensional structure
of the extracellular matrix. The greatest part - the cells, were removed first, and then the
protein and lipid residues. In the last step, the smallest compartments DNA and RNA, were
eliminated. The grafts were examined radiologically, histological and morphologically.

The combined process of decellularizing of vascularized bone tissue can generate bone grafts
devoid of immunological agents. The vascularized allogeneic bone without
immunosuppression would be a perfect alternative in the treatment of the massive bone
defects.

Keywords: vascularized bone allograft, combined decellularization, surgical


revascularization

1
Laboratory of Tissue Engineering and Cellular Culture, the University of Medicine and Pharmacy "Nicolae
Testemițanu", Chișinău, Republic of Moldova.
2
Department of Orthopaedics and Traumatology, the University of Medicine and Pharmacy "Nicolae
Testemițanu", Chișinău, Republic of Moldova
*
Corresponding author: [email protected], tel. +373 79 050 049
3
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Submarine Active and Potentially Active Faults, Gas Seeps and


Diapirs in the Kusadasi Gulf and Surroundings, Aegean Sea

Savas Gurcay1*

Abstract: The Gulf of Kuşadası and its surroundings are an important area in terms of active faults, both under the sea and on land. The Karaburun Fault, Tuzla (Orhanlı) Fault, Seferihisar Fault,
Gümüldür Fault and Küçük Menderes Fault are the most important faults in the study area and
surroundings. Each of them can be traced up to the shoreline of the study area. Compared to the previous marine seismic data collected around the study area, the high-resolution Chirp marine seismic data used in this study have much higher resolution and cover a larger area. In the light of these data, the features and
distributions of the seafloor deformation made by submarine active faults, potentially active
faults, gas seeps and diapirs in Kuşadası Gulf and surroundings were investigated. For this purpose, the previously collected high-resolution Chirp marine seismic data were processed to obtain two-dimensional seismic profiles. The results were illustrated on a map in
detail after interpretation of these profiles depending on their structural properties.

Keywords: Chirp-Seismic, Submarine Active Faults, Aegean Sea

1
Canakkale Onsekiz Mart University, Marine Technology Vocational School, Canakkale, Turkey
* Corresponding author: [email protected]
4
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Determination of Geological-Geochemical Properties of Magnesite


Formations Observed in Kızıldağ Ophiolites

Yusuf Topak1*

Abstract: In this study, the geochemical characteristics of the economically valuable


magnesite (MgCO3) in an area of approximately 200 km2 in the region between the
Samandağ Central district of Hatay province and Iskenderun in southeast Turkey were
evaluated by geological and geochemical analyses. In this study, magnesite mineralizations
developed along the cracks of ultramafic rocks of Kızıldağ ophiolite were sampled to
determine their formation and origins. The study area is geologically composed of Arabian
Platform, melange, Kızıldağ ophiolite and cover sediments. The sedimentary units of the
Arabian platform include Lower Cambrian to Lower Carboniferous and Triassic to
Cretaceous sediments and outcrops in the Amanos Mountains. The Arabian platform starts
with fine-coarse-grained clastic units at the bottom and passes upwards to limestone-bearing
units. The Mesozoic units of the platform start with large clastic units and pass the
Cenomanian-Turonian aged platform carbonates and unconformably overlie the Paleozoic
units. The melange unit presents small outcrops under the Kızıldağ ophiolite within the
tectonic window observed around Kömürçukuru, which is called the Amanos olistostrome.
The matrix of the unit consists of sheared serpentinites and is observed on the eastern and
western slopes of the Amanos Mountains. The blocks within the matrix are diverse and
include harzburgite, dunite, gabbro and pillow lavas, as well as limestone and sandstones. The
Kızıldağ ophiolite melange unit is tectonically located around the town of Kömürçukuru with
a low-angle fault. The ophiolite begins with serpentinized tectonites containing large
limestone blocks that were combined with peridotites during thrusting onto the continent at the
bottom. The Kızıldağ ophiolite represents one side of the spreading ridge and the other side of
this spreading ridge is represented by the Troodos ophiolite. For geochemical analysis, 10 samples, all of them magnesite, were taken from 3 different locations in the field and XRD analyses were performed. The whole-rock geochemistry analyses show that the magnesite has a purer structure than the dolomites and incorporates fewer major, trace and rare earth elements into its structure. The isotope analysis shows that the low δ18OV-PDB (‰) value indicates that the water involved in the formation of the magnesite was of meteoric origin. As a result, the magnesite found in the study area was formed by the alteration caused by the circulation of meteoric water in the ultramafics. For this reason, it has been determined that the magnesite is not located along the fracture and crack zones.

Keywords: Hatay-Çevlik, Magnesite, Carbon and oxygen isotopes, Geochemistry

1
AdıyamanUniversity, Mining and Mineral Extraction Department, Adıyaman, Turkey
* Corresponding author: [email protected]
5
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Denim Clothing Design with Ecological Footprint

Sukriye Yuksel1*

Abstract: Natural resources in the world are being consumed rapidly. Environmentally conscious consumers are turning to sustainable fashion products whose aim is to do less harm, or no harm, to nature, because the textile industry is a major consumer of natural resources. According to recent fashion research, since the 1970s denim in particular has become the most desired material for the global fashion industry, and the use of denim varieties in textiles has developed for decades. However, the process of producing a denim garment, from cotton to finished product, consumes about 1 kg of cotton and about 10,800 litres of water*. For this reason, denim companies conscious of how to design denim products with less damage to nature have formulated the process with a program called the EIM (Environmental Impact Measuring) score. In this design project, the EIM score of a denim clothing design was technically measured as a green project. The process and testing of denim wash water consumption were formulated to lower the impact, and the dyeing steps of the denim material are explained. The fit and scale of the denim garment design process are explained through the presentation; the design is not only considered sustainable and "green label" for future fashion trends, but is also made with a "redone look", which adds considerable value to the products, presented for global fashion followers, merchandise buyers and nature-conscious consumers. The design creation and the collection production, with detailed technical drawings, aim to serve as an important source of inspiration in the textile industry for a future eco-system. *(https://www.vatekcevre.com/blog/bir-urunun-uretimi-asamasinda-ne-kadar-su-kullaniliyor-biliyor-muyuz)

Keywords: EIM score, Denim clothing, Ecological garment designs, Sustainable Denim
textile

Istanbul Technical University, Textile Engineering Faculty, Textile Technologies and Design Department,
Istanbul, Turkey/
Yıltem Konfeksiyon, Yıkama ve Boya, Istanbul; Turkey
* Corresponding author: [email protected]
6
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Extraction of heavy elements using liquid-liquid extraction

F. GHEBGHOUB1* , D. BARKAT2*

Abstract: The extraction of Co(II) and Cu(II) with bis(2-ethylhexyl)phosphoric acid is


investigated at 25°C with the following parameters: pH, concentration of the extractant, and
the nature of diluent. The effect of the diluent using polar and nonpolar solvents in the
extraction of nickel(II) is discussed. The extracted copper(II) species were CuL2 in 1-octanol and methyl isobutyl ketone and CuL2·2HL in toluene, carbon tetrachloride, and cyclohexane. The extracted cobalt(II) species were CoL2 in 1-octanol and methyl isobutyl ketone and CoL2·2HL in toluene, carbon tetrachloride, and cyclohexane.
Keywords: liquid–liquid extraction; cobalt(II); copper(II); di(2-ethylhexyl)phosphoric acid; diluent effect
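For readers unfamiliar with how such extracted species are inferred, the slope-analysis relation commonly used in studies of extraction with bis(2-ethylhexyl)phosphoric acid can be sketched as follows (a generic form, not taken from this paper; it assumes the usual dimeric form (HL)2 of the extractant):

\[
\mathrm{M^{2+}_{(aq)}} + n\,\mathrm{(HL)_{2\,(org)}} \;\rightleftharpoons\; \mathrm{ML_{2}\cdot(2n-2)HL_{(org)}} + 2\,\mathrm{H^{+}_{(aq)}}
\]
\[
D = \frac{[\mathrm{M}]_{\mathrm{org}}}{[\mathrm{M}]_{\mathrm{aq}}}, \qquad
\log D = \log K_{\mathrm{ex}} + 2\,\mathrm{pH} + n\,\log[\mathrm{(HL)_{2}}]
\]

Under this scheme, a slope of 2 in log D versus pH and a slope of n in log D versus log[(HL)2] distinguish simple ML2 species (n = 1) from ML2·2HL adducts (n = 2), which is the kind of evidence behind species assignments of this type.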

1
University of Biskra, Biskra, Algeria.
* Corresponding author: [email protected]

7
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Encryption Technique With Catalan Numbers

Aybeyan Selim1*

Abstract: The permanent development of information technologies enables us to exchange


and process a large amount of data. Due to the expansion of computer networks and the development of attack techniques, every connected computer, and above all the data it handles, is potentially endangered. It is necessary to protect data with cryptographic methods against unwanted attacks that allow unauthorized access and data modification. The paper provides an overview of cryptographic techniques that ensure the security of exchanged or stored data. The method in this research explains data protection with Catalan numbers and computational geometry. Computational geometry is a discipline of computer science that deals with solving geometrical problems with computers and today has various areas of application. Our encryption scenario contains four phases. In the first phase, we triangulate the 3D object with the Delaunay triangulation algorithm. In the second phase, we select a polygon in the triangulated object. The third phase contains the creation of Catalan keys from a triangulation problem. In the fourth phase, we encrypt the information with our technique. After presenting the developed procedure, we performed cryptanalysis of the generated cryptographic keys and gave proposals for the practical application of the scenario developed in this research.

Keywords: Encryption, Decryption, Catalan numbers, Cryptographic keys and Polygon


triangulation.
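As a rough illustration of the first and third phases described above, the sketch below triangulates a 3D point set and reports the size of the Catalan key space for a selected polygon (a minimal sketch only, not the authors' implementation; the random point cloud, the polygon size and the use of scipy.spatial.Delaunay are assumptions):

# Hypothetical sketch: Delaunay triangulation of a 3D point cloud (phase 1) and the
# number of candidate Catalan keys for a selected polygon (phase 3).
from math import comb
import numpy as np
from scipy.spatial import Delaunay

def catalan(n: int) -> int:
    # C_n = binom(2n, n) / (n + 1): the number of triangulations of a convex (n+2)-gon.
    return comb(2 * n, n) // (n + 1)

rng = np.random.default_rng(0)
points = rng.random((50, 3))      # phase 1 input: a 3D object sampled as 50 points
tri = Delaunay(points)            # phase 1: Delaunay triangulation (tetrahedra in 3D)
print("simplices:", len(tri.simplices))

n_vertices = 12                   # phase 2: assume the selected polygon has 12 vertices
print("key space size:", catalan(n_vertices - 2))   # phase 3: its possible triangulations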

1
International Vision University, Faculty of Engineering and Architecture, Department of Computer Science,
Gostivar, North Macedonia
* Corresponding author: [email protected]
8
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Between History And Technology: The Small Villas Of The Late


Nineteenth Century In The Syracuse Countryside. The Recovery
Of Villa Ortisi
Fernanda Cantone1*, Francesca Castagneto2*

Abstract: The building heritage claims the unity of a culture that, to manifest itself
completely, uses different linguistic structures, all referable to a single expressive code. It is
also an expression of "conflict between oblivion and memory, between protection and abuse,
between Viollet-le-Duc and Ruskin, between present and future, between tourism and
museum ...". Today, interventions on the building heritage involve an overlapping of traces and documents that make it a protected but, above all, usable asset.

The research theme is the built heritage of the late nineteenth century and, in particular, the
small villas, "villini" in Italian, built by wealthy families to spend part of the summer season. In particular, the architectural model is a cottage, an imposing building,
located in a peripheral area of Syracuse, Sicily, Italy close to others, which is in a state of
neglect.

The research objective is to preserve these unique testimonies of a very interesting and
particular past and to reuse these villini for a contemporary use.

The sustainable arrangement of a space, composed of several levels of liveability, plays a


central role in the reuse project carried out as part of this research. In this sense, the re-use
project does not only entail recovery but absolute ‘fitting’ to the environment, intention and
invasion. The aim is to obtain close correspondence between the environmental and
technological systems so that they fit together, developing a set of values that are contiguous
with that of the past.

The intention of such research is to recover the lost identity of materials, geometric forms,
volumes and empty spaces and foster the emergence of social, civil and symbolic values in
which we recognize the architectural model of “villini” in Syracuse, Sicily, Italy. A project
that aims to reconcile conservation and change, defining the possibilities of transforming the
building property, enhancing its identity. The pre-existence becomes, in this sense, the
starting point and the main reference system, in a contemporary vision of intervention. These
aspects governed the design process with regards to the environment, the sites and the identity
of the architectural model.

Keywords: adherence, integration, compatibility, respect.

1
University of Catania, SDS Architecture, Department DICAR, Siracuse, Italy
* Corresponding author: [email protected]
2
University of Catania, SDS Architecture, Department DICAR, Siracuse, Italy
* Corresponding author: [email protected]

9
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

General usage and features of data visualization software

Fehmi Skender1*, Aybeyan Selim, Ilker Ali

Abstract: With the development of technology, the importance of easy presentation and perception of data has increased. People attach great importance to visuals: in addition to attracting attention within the content, they also make the content easier to remember. Today, with the use of multiple devices and the much more widespread use of the internet compared to the past, visuals in the modern digital world have begun to attract the attention of users more than text, and the visualization and processing of data has become crucial.
The research explains, beyond visualizing the data, how important visualization is, first for scientific research and then for analysis in other fields. Production data originating from different geographies around the world has been accumulated and processed. Different big data visualization software packages are available today; although they have different interfaces and algorithms, most of them share common functions.

Keywords: big data, data visualization, data visualization tools.

1
International Vision University, Faculty of Informatics, Gostivar , North Macedonia
* Corresponding author: [email protected]
10
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

The chance of architectural heritage of our recent history

Péter Fejérdy1*

Abstract: Nowadays the ecological point of view of buildings is becoming an important requirement. It is becoming ever clearer that this means not only energy renewal, but that it is important to take into account the whole ecological cycle of a building. The buildings of our recent history are in a special situation: not enough time has passed for their value, which is clear to professionals, to be accepted by society. Because of this, the buildings of this period are in danger. Through restoration they can easily lose their character, by changing their shutters or by covering them with heat insulation without thinking about the character of the former facade. Many times these buildings are demolished simply because, in the post-communist region, the assessment of this period is negative. We have to change our relationship with these buildings, because they are part of our memory.
Keywords: recent past, monuments, renewal, heritage

1
Budapest University of Technology and Economics, Faculty of Architecture, Dept. of Public Building Design,
Budapest, Hungary,
* Corresponding author: [email protected]
11
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Identification And Measures to Eliminate Delays in The


Construction Sector In Kosovo

Muhamet AHMETI1*

Abstract: In the construction sector in Kosovo, whether in the public or private sector, there
are delays and non-compliance with the dynamic plans approved in the implementation of
contracted projects. Delays in the implementation of various projects directly affect the
Kosovo economy in general. Delays in implementation of the construction of various projects
also mean the slowdown of development in all other interrelated areas. Therefore, the main
objective of this paper is to analyze the different types of delays based on the results of
questionnaires and identify the reasons for causing these delays that are currently affecting the
implementation of various construction projects in Kosovo. Delays in the construction
industry are one of the acute and common problems in the construction sector.
Moreover, in writing this paper we have analyzed 132 questionnaires that were realized by
various construction companies which operate in different municipalities of Kosovo and carry
out various construction works.
Based on the data from these assessments, there are proposed measures aiming to reduce or
eliminate these delays in general, based on various methods that affect the reduction and
mitigation of delays. Hence, there are different types of delays identified. It is important to
identify whether the delay is critical or not as identifying critical delays helps to take
appropriate action on time. Delays may be unjustifiable (caused by the contractor or other factors), for which the client or consultant must have the means of organizing and managing the project effectively, or compensable delays (caused by the employer), for which compensation is owed by the employer.
Delays can come from both parties and can be harmful at the same time and directly affect the
deadline for completion of works. The reasons for delays are mainly due to an unreasonable
project goal, project defects, inadequate organization and planning, and lack of risk
management systems and measures. The contractor further contributes to the delay due to a
lack of resources (financial, manpower, mechanization, etc.) and labor productivity.
Over-ambitious assessments of the company's capacity, inaccurate assessment of the task and
work to be performed, lack of clarity of task, lack of experience for various construction
works, delays of the main project/approval of changes, and interference in the decision-
making process by the employer are factors that directly affect the delay in the
implementation of projects in general.

Keywords: Delays, construction, factors, critical, employer.

1
UBT – Higher Education Institution – Prishtinë Kosovë
* Corresponding author: [email protected]

12
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

The Impact of Digital Technology on Advanced Business


Processes

Ylber Limani1* , Edmond Hajrizi 1

Abstract: Business processes are undergoing complex challenges related to the


fast technological changes. The proper functionality of processes necessitates increased
flexibility, higher reliability and augmented working speed of production systems and
processes. The integration of information technology is accomplished by the development and
use of cyberphysical systems, which actually are the enablers of the industrial alteration
named “Industry 4.0”. The debates about the digital transformation and competitive
challenging advantages directed the industries to the creation of a new business vision named
"Industry 4.0". Since the concept of Industry 4.0 and its impact on business processes is
creating various challenges, this research addresses and examines the consequences and
potentials of Industry 4.0 on advanced business transformation processes. The scope of this
research is limited to the study of functional integration of Cyber-Physical Systems, Artificial
Intelligence (AI) and Data Science (data security) providing the potential for the functioning
of new technologies with focus on developing countries. The research utilizes qualitative and
quantitative approaches to data collection and analysis based on literature and on the case
studies. The contribution of this research is focused on the identification and analyses of
needs, problems, and benefits related to the implementation of digital technology on business
automated processes in developing countries.

Keywords: Digital technology, AI, Data Science, Industry 4.0.

1
University for Business and Technology, 10000, Prishtina, Kosovo.
* Corresponding author: [email protected]

13
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

The Geology And Paleogeographic Evolution of Sarayköy (Denizli)

Mahmut Ziya GÖRÜCÜ 1*

Abstract: Basement rocks of the Tırkaz-Sarayköy area consist of the Palaeocene-Early


Eocene Zeybekölen Tepe Formation and tectonically overlying Late Triassic-Early Jurassic
Gereme Formation and Mid Jurassic-Late Cretaceous Çataltepe Limestone all of which
belong to the Menderes Massif (Okay, 1989). All these formations are unconformably overlain by the Kolonkaya Formation, which belongs to the Denizli Group of Neogene age (Late Miocene). The Kolonkaya Formation is overlain by the Asartepe Formation of Pleistocene age. All these formations are covered with travertine, terrestrial sediments, slope debris and alluvial sediments. We collected diatomite samples from different levels of
Sazak, Sakızcılar and mostly Kolonkaya formations to describe the species by comparing the electron microscope images to a diatom catalogue. During this study we described Cymbella
brehmii Hustedt, Eunotia sudetica Müller, Navicula cf. Phyllepta Kützing, Cyclotella
meneghiana Kützing, Stephanodiscus subtranssylvanicus Gasse, Cyclotella plitvicensis
Hustedt, Fragilaria sp., Cyclotella meneghiana Kützing, Raphoneis amphiceros Ehrenberg
and put in the spreading map.
According to this work and research, we realize that the Neogene sediments in particular include diatoms. These diatoms are usually found in the Sazak, Sakızcılar and Kolonkaya formations, which are of Late Miocene age. At the same time, wherever we find diatomites, they occur together with marl deposits. The strata alternation shows that, throughout the column at the 13 points we drilled, marl and diatomite levels occur together. Therefore, it is certain that the marl and diatomite were deposited in a place with sufficient silica and that the basin was a shallow or marginal mesopelagic zone. According to these core data, we try to describe the paleogeographic conditions and climate by looking at both the vertical and horizontal spreading of the diatoms. On the other hand, there are many gypsum levels as well, and we use and interpret these data too for describing the paleogeographic conditions of the region.
Keywords: Diatomite, Upper Miocene, Paleogeography, Denizli, Tırkaz, Gypsum.

1
Istanbul University-Cerrahpasa, Engineering Faculty, Department of Geology Hadımkoy-Istanbul-Turkey
* Corresponding author: [email protected]

14
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Lessons learnt from Chernobyl and Fukushima accidents


applicable for the protection against CBRN attacks

Jozef Sabol1*

Abstract: Both accidents involving nuclear power plants, at Chernobyl in Ukraine and Fukushima in Japan, caused releases of tremendous amounts of radioactive material into the environment, resulting in widespread contamination and the overexposure of some workers as well as the general public, especially in the vicinity of these nuclear facilities. This
year, it is the 10th anniversary of the nuclear accident at Fukushima which occurred almost
exactly 25 years after the Chernobyl nuclear accident in 1986. Analysis of each of these
accidents and their consequences provided valuable late and early lessons that could prove
helpful to minimize the impact of any emergency situation with massive radioactivity
discharges which may contribute to the exposure of people affected. These events have been
extensively publicized and led to the creation of a negative attitude towards the application of
radiation and nuclear technologies for peaceful purposes. It is well known that some of this
information has been exaggerated and has formed an undesirable perception, especially among the lay population. It is obvious that this was partially caused by the lack of, and negligence in, radiation risk communication with the public. Now, after so many years, the situation has
considerably improved namely in terms of the upgraded nuclear safety and security,
quantification and monitoring of radiation exposure in the case of accidents, and enhanced
risk assessment and its mitigation. Most of these incentives come as lessons learned from the
accidents at Chernobyl and Fukushima. The paper will discuss two major issues: a) the factual
assessment of the impact of nuclear accidents on the human exposure and the environment,
and b) the current state of the preparedness and response to a similar accident based on our
present knowledge and lessons learned from the past misfortunes.

Keywords: Chernobyl, nuclear, Ukraine.

1
Police Academy of the Czech Republic
* Corresponding author: [email protected]
15
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Effect of Rare Earth Metals on Electrode Potential in Anodic


Oxidation as A Novel Electrode for Different Kind of
Contaminants

Dilara ÖZTÜRK1*, Abdurrahman AKYOL2*

Abstract: Nowadays, novel metal electrodes have become a very interesting subject for both
electrooxidation (EO) studies and catalyst production. Many different types of alloys and
coatings are tried to increase the catalytic activity of the electrodes to be produced. Rare earth
elements have started to gain an important place among these studies. Although rare earth elements are relatively abundant in nature, they are rarely processed, which is why they are named rare earth elements, and this has made them an interesting research area. It was thought that rare earth elements could increase the catalytic activity and speed up electron transfer. It is well known that rare
earth oxides are powerful oxidants and widely used as catalysts for oxidation reaction. So,
various rare earth oxides were used to modify coating to increase the electrochemical
performance of electrode.
In this study, we intend to improve the electro-catalytic activity and stability of Ti/TuO2-IrO2
electrode by modification of Lantane and Cerium. The electro-catalytic activity and stability
of the La and Ce modified Ti/TuO2-IrO2 electrode were compared with those of undoped
Ti/TuO2-IrO2, Ce-La doped Ti/TuO2-IrO2, and BDD electrodes, it has been tried both in the
removal of paracetamol, which is a model pharmaceutical compound, and in the use of anodic
oxidation in the removal of textile wastewater. Commercial electrodes in PST removal, Ti
/‫ܱݎܫ‬ଶ /ܴ‫ܱݑ‬ଶ electrodes exhibit the same performance with BDD electrode, providing 40%
TOC and 100% PST removal at pH 5 and the current density of 350 A/m² for 90 minutes. In
textile wastewater treatment trials, it was achieved faster and higher removal efficiency than
BDD electrode, which is the most efficient commercial electrode known at the point of color
removal, and 98% removal was achieved for both electrodes at the same conditions at 15
minutes. The results show promise for the new electrode compositions to be used in EO
systems.
Keywords: Rare earth elements, anode materials, anodic oxidation, textile wastewater,
paracetamol.

1
Gebze Technical University, Engineering Faculty, Environmental Engineering, Kocaeli, Turkey
* Corresponding author: [email protected]

Biomimetic Approaches to Develop Safe-By-Design Antimicrobial Textiles

Isabel C. Gouveia1*, Frederico Nogueira, Cláudia Mouro, Ana P. Gomes

Abstract: Antimicrobial textile materials may significantly reduce the risk of infections and, because
they are able to absorb substances from the skin and release therapeutic compounds to the skin, they
can also find applications as complementary therapy of skin diseases as part of standard management.
Although functional textiles may be a promising area in skin disease/injury management, few offer
complementary treatment, even though they are well known to reduce scratching, aid emollient
absorption, reduce infection, and alleviate pruritus. The reason for this may lie in the low quality of the
supporting evidence and in the negative effects that antimicrobial agents may exert on the skin
microbiome, for example by additionally irritating vulnerable skin and by causing resistant bacteria.

Several antimicrobial agents have been tested in textiles: quaternary ammonium compounds, silver,
polyhexamethylene biguanides and triclosan have been used with success. They have powerful
bactericidal activity, but the majority have a reduced spectrum of microbial inhibition and may cause
skin irritation, ecotoxicity and bacterial resistance. Furthermore, the rising number of strains resistant to
last-resort antibiotics rekindles interest in alternative strategies. In this regard, new functional textiles
incorporating antimicrobial agents highly specific towards pathogenic bacteria are required. Recent
research has been conducted on naturally occurring antimicrobials as novel alternatives to antibiotics.
Conscious of this need, our team first reported new approaches using L-cysteine and antimicrobial
peptides (AMP). Briefly, we were able to develop different immobilization processes achieving a 6-log
reduction against bacteria such as S. aureus and K. pneumoniae. Therefore, here we present several
innovative antimicrobial textiles incorporating AMP and L-cysteine, which may open new avenues for
the medical textiles market and biomaterials in general. Team references will be discussed as an
overview and for comparison purposes in terms of potential therapeutic applications.

1
FibEnTech Research Center, Faculty of Engineering University of Beira Interior, Covilhã – Portugal.
* Corresponding author: [email protected] 17

An Estimation of the Hydrologic Water Budget Components of the Seyfe Lake Basin by Using Hydrologic Characteristics and Hydrometeorological Data

Cansu Yurteri1*, Türker Kurttaş2*

Abstract: A water budget calculated for a defined system consists of an account of all water
flowing into and out of an area of interest, along with the change of water storage in the area. The
water budget, also known as the water balance, is computed over a specific time period for a particular
system or region and is based on the conservation of mass equation. Water budgets can help to
provide a clearer understanding of past circumstances, as well as an indication of how future
changes in hydrology, population, supply, demand, land use, and climate factors may influence the
water resources in the basins. Water budget calculations are an essential tool for devising sustainable
solutions regarding surface water and groundwater resources, water supply planning, and monitoring
and designing water systems. Human activities such as groundwater withdrawals and irrigation alter
natural flow patterns, which must be accounted for in the water budget calculation. In this research, the
Seyfe Lake Basin was selected as the study area. Seyfe Lake is a closed basin, and the lake area and its
surroundings are a protected Ramsar Site. The lake is located 15 km northeast of the Mucur district,
Kırşehir province, in Central Anatolia. The lake basin has a 1447 km² catchment area, in which 10,192
inhabitants reside. This paper aims to estimate the water budget components of the Seyfe Lake basin
and the influence of the budget components over a 50-year (1970-2020) period. The estimation of
groundwater budget components is one of the most difficult issues in basin-scale management. The
main issue with calculating the water budget components is a lack of routinely collected data. In this
study, precipitation, evapotranspiration and surface outflow were calculated by using annual data for
the lake catchment area, and the changes in storage reserves were calculated for the study area. The
Seyfe Lake Basin is recharged by precipitation and groundwater inflow. The lake area is discharged via
evaporation, drainage channels and the withdrawal of water for irrigation and drinking purposes. The
difference between the lake recharge and discharge components for the period 1970-2020 was calculated
to be 15.7×10⁶ m³/year. Based on the groundwater budget calculations, the total annual groundwater
recharge of 554.49×10⁶ m³/year comes from precipitation (552×10⁶ m³/year), water flow returning
from irrigation (1.59×10⁶ m³/year) and recharge from marble lithologies into the basin (0.90×10⁶
m³/year). The total annual groundwater discharge was calculated as 570.6×10⁶ m³/year. The discharge
components include evapotranspiration loss (474.23×10⁶ m³/year), surface runoff taken up by drainage
canals (63.1×10⁶ m³/year), domestic water usage (0.73×10⁶ m³/year), irrigation water usage (13.28×10⁶
m³/year), evaporation from the swamp-wetland surface (16.26×10⁶ m³/year) and discharge of
groundwater from the marble units out of the basin (3.17×10⁶ m³/year). According to the budget
results, evaporation and human activities are effective processes in the lake basin. Furthermore, the
difference between the recharge and discharge quantities causes a decrease in the flow rates of the springs
throughout the basin. This corresponds to a decrease in groundwater levels in the wells, thus leading to
an overall decrease in the lake level throughout the entire basin.
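As a simple illustration of the basin-scale bookkeeping described above, the short Python sketch below sums the recharge and discharge components reported in this abstract. The component values are taken directly from the text; the recomputed totals are plain sums of the listed items, so small rounding differences from the reported totals are to be expected, and the dictionary layout is only an illustrative way of organizing the data.

    # Illustrative check of the Seyfe Lake groundwater budget components reported above
    # (all values in 10^6 m^3/year, taken from the abstract).
    recharge = {
        "precipitation": 552.0,
        "irrigation return flow": 1.59,
        "inflow from marble lithologies": 0.90,
    }
    discharge = {
        "evapotranspiration loss": 474.23,
        "surface runoff to drainage canals": 63.1,
        "domestic water usage": 0.73,
        "irrigation water usage": 13.28,
        "evaporation from swamp-wetland surface": 16.26,
        "groundwater outflow through marble units": 3.17,
    }

    total_recharge = sum(recharge.values())
    total_discharge = sum(discharge.values())
    print(f"Total recharge : {total_recharge:.2f} x10^6 m3/year")
    print(f"Total discharge: {total_discharge:.2f} x10^6 m3/year")
    print(f"Difference     : {total_discharge - total_recharge:.2f} x10^6 m3/year")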

Keywords: Water balance, human activities, precipitation, irrigation water usage, Seyfe Lake Basin,
Central Anatolia

This research has been supported by Hacettepe University Scientific Research Projects Coordination
Unit (Project Number:18960-2021).

1
Hacettepe University, Engineering Faculty, Hydrogeological Engineering Department, Ankara, Turkey
* Corresponding author: [email protected] 18

Comparative Analysis of Dimension Reduction and Classification using Cardiotocography Data

Mahmut TOKMAK1*, Ecir Uğur KÜÇÜKSİLLE2

Abstract: Dimension reduction is the transformation of data from a high-dimensional space to
a low-dimensional space in a way that does not lose its meaning. Processing high-dimensional
data requires more processing overhead. Therefore, dimension reduction is frequently used in
fields such as signal processing, speech recognition, pattern recognition and bioinformatics,
where a large number of observations and variables are examined. Cardiotocography (CTG) is
a tool used for recording the fetal heart rate (FHR) and uterine contractions (UC) during
intrauterine life. As a technique for diagnosing fetal well-being, CTG is often used to help
obstetricians obtain detailed physiological information about the fetus and the pregnant woman.

In this work, three of the prominent dimensionality reduction techniques, Principal Component
Analysis (PCA), Auto-Encoder (AE) and Stacked Auto-Encoder (SAE), are investigated together
with popular Machine Learning (ML) algorithms, namely the Support Vector Machine (SVM),
Random Forest classifier and Naive Bayes classifier, using the publicly available Cardiotocography
(CTG) dataset from the University of California, Irvine (UCI) Machine Learning Repository. The
obtained results are presented comparatively.
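For readers who want a concrete starting point, the hedged scikit-learn sketch below illustrates one of the compared pipeline types (PCA followed by an SVM) on CTG-style features. The CSV path, the class column name "NSP" and the train/test split are placeholders and do not reproduce the exact preprocessing or settings used by the authors.

    # Minimal sketch of a PCA + SVM pipeline on CTG-style data (illustrative only).
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    # Assumed file layout: feature columns plus an 'NSP' class column (hypothetical path).
    data = pd.read_csv("ctg.csv")
    X, y = data.drop(columns=["NSP"]), data["NSP"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

    # Standardize, reduce to 10 principal components, then classify with an RBF SVM.
    model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
    model.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, model.predict(X_te)))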

Keywords: Cardiotocography, Machine Learning, Dimension Reduction, Classification

1
Isparta University of Applied Sciences, Gelendost Vocational School, Gelendost, Isparta, Turkey
2
Süleyman Demirel University, Engineering Faculty, Department of Computer Engineering, Isparta, Turkey
* Corresponding author: [email protected]

Main Sources of Microplastic Pollution in Aquatic Environments

Kamila Sobkowiak*

Abstract: Microplastics can be intentionally added to products (primary microplastics) or
formed incidentally (secondary microplastics); the latter arise as a result of the degradation of
larger plastic products such as plastic bags, bottles and fishing nets, or due to the mechanical wear
of materials. The latter aspect accounts for nearly two-thirds (63.1%) of global microplastic
emissions to the seas and oceans, occurring in the wake of the washing of synthetic fabrics (34.8%)
or the abrasion of car tires while driving (28.3%). It is suspected that microplastics accumulating
in living organisms may play a large role in the development of neoplastic diseases and
hormonal disorders and can therefore be toxic in the long run. Cosmetics represent only a
small fraction of all sources of plastic microbeads found in the aquatic environment; however,
consciously eliminating products containing these elements from our daily life might have an
enormous influence on both the fauna and flora of aquatic environments. Accordingly, the
introduction of a restriction on the use of microplastics in cosmetic products will not solve the
environmental problem; on the other hand, these are pollutants added to products on purpose,
so it is worth limiting their emissions, especially as the scale of these emissions is contingent
on us.

Keywords: Microplastics, aquatic, environment, effects.

1Lodz University of Technology, Chemistry Faculty, Polymer and Dye Technology Department, Lodz, Poland
* Corresponding author: [email protected]

Product Debugging Facility Design Using Image Processing For Defense Industry Gunpowder Production

Fatih ILGIN¹*, Mustafa İlker ERDURSUN²*

Abstract: Nowadays, mechatronic systems are used extensively in industry due to
technological developments and Industry 4.0. The defense industry is one of the essential areas
that use robotic arms with different numbers of axes. In this study, the aim was to separate the
gunpowder used in the firing of artillery shells in the defense industry with the help of robot arms,
using image processing techniques to extract the faulty products formed after manufacturing. The
grains of cannonball gunpowder are cylindrical, with a single bore or multiple bores. In multi-hole
gunpowder, the number of holes is usually seven. The knife that slices the gunpowder can
sometimes close existing holes by smearing them shut. It is important that the holes in the middle
of the gunpowder are open, as the closing of these holes affects the combustion surface of the
gunpowder, the combustion pressure, and therefore the rate of bullet output.
Our system consists of two conveyor belts and 3-axis robot arms working in opposite
directions. After manufacturing, the products coming out of the first conveyor belt are checked
and packaged, while on the other conveyor belt, running in the opposite direction, the faulty
products go back to production for re-processing. Gunpowder images coming along the moving
belt were taken with the existing camera, and faulty items were detected by applying
image processing techniques with a Python program. The process of picking the
faulty object from the belt and transferring it to the belt running in the opposite direction was
carried out with the help of a robot arm.
Thanks to this system, which is controlled in real time, products are detected as suitable for
packaging or not. The system created in this study is a prototype and will
save time and labor if it is adapted to real systems. In addition, because the substance
produced is dangerous, work accidents that may be caused by individual errors will be
prevented. This system can also be used in different industrial production plants using new
algorithms.
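The abstract describes Python-based image processing for spotting grains whose holes have been smeared closed. As a rough, hypothetical illustration of that idea (not the authors' code), the OpenCV sketch below thresholds a grain image and counts interior holes via the contour hierarchy; the file name, the thresholding choice and the expected hole count of seven are assumptions made for demonstration.

    # Illustrative OpenCV sketch: count the perforations of a gunpowder grain and flag
    # grains whose holes appear closed (expected count of 7 is assumed).
    import cv2

    EXPECTED_HOLES = 7                                      # seven-hole grain geometry, assumed
    img = cv2.imread("grain.png", cv2.IMREAD_GRAYSCALE)     # hypothetical image file
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # RETR_CCOMP yields a two-level hierarchy: outer contours and the holes inside them.
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    holes = 0 if hierarchy is None else sum(1 for h in hierarchy[0] if h[3] != -1)

    print("holes detected:", holes)
    print("grain OK" if holes >= EXPECTED_HOLES else "faulty grain -> divert for re-processing")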

Keywords: Defense technologies, Mechatronics, Image processing, Robotic arm, Gunpowder

1
Machinery Chemistry Industry, Gunpowder Factory, Maintenance and Repair Directorate, Kırıkkale/
TÜRKİYE
2
Hitit Universty, Osmancık Ömer Derindere Vocational School, Computer Programing (Lecturer)

* Corresponding author: milkererdursun@hitit.edu.tr



Concentrated Kefir Production by Ultrafiltration

Firuze Ergin1*, Gülfide Tair1, Ezgi Tekşan1, Gözde Nuran Balcı1, Ahmet Küçükçetin1

Abstract: Milk-based powders (skim milk powder, sodium caseinate, whey
concentrate/isolate) or several filtration techniques (traditional cloth bag, centrifugation,
reverse osmosis) can be used in the manufacture of concentrated dairy products in order to improve
the texture and the chemical and nutritional properties of the products. Besides, membrane technology
is one of the methods used to concentrate milk components. In the ultrafiltration technique, a membrane
with pores of certain sizes that allows the passage of water and small molecules is used. When
skim milk is ultrafiltrated, casein and whey proteins, which are larger than the
membrane pores, are concentrated and collected in the retentate. Lactose and minerals in the soluble
phase of milk are removed with the permeate. In previous studies, the ultrafiltration technique was used
for Greek-style yoghurt, dahi and labne production. In recent years, the consumption of kefir has
increased due to its nutritive value and positive health properties. Kefir is an acidic, slightly alcoholic
and viscous fermented dairy beverage that has health benefits including anti-obesity, anti-
oxidative, cholesterol-lowering, anti-allergenic, anti-inflammatory, anti-tumour, and anti-
microbial properties. The aim of this work was to manufacture concentrated kefir by using the
ultrafiltration technique. In this study, concentrated kefir was produced with two different
ultrafiltration approaches: ultrafiltration of milk prior to the fermentation process (UFM) and
ultrafiltration of kefir (UFK). Kefir was also filled into a cloth bag to produce traditional concentrated
kefir (TCK). The concentrated kefir samples were stored at 4°C for 30 days, and the
physicochemical, microbiological, and sensory properties of the kefir samples were determined
on days 1, 15 and 30 of storage.

Keywords: kefir, rheology, storage, ultrafiltration

1
Akdeniz University, Faculty of Engineering, Department of Food Engineering, Antalya, TURKEY
* Corresponding author: [email protected]

Behavior of sugar consumption and lifestyle in the Republic of Moldova

Aurica Chirsanova 1*, Tatiana Capcanari2*, Rodica Sturza3*, Olga Deseatnicova4*

Abstract: The population of the Republic of Moldova faces the double burden of the
consequences related to nutritional behavior. On the one hand, malnutrition and nutritional
deficiencies, characteristic of developing countries, on the other hand - overweight and
obesity, characteristic of developed countries. 6% of children up to 5 years have growth
retardation, caused by insufficient energy, and one-fifth of children suffer from anemia. About
a third of women of childbearing age and more than 40% of pregnant women have anemia.
Half of the adult population is overweight or obese.

The main objectives of the study were focused on the decisive aspects of food consumption
behavior in relation to lifestyle trends, culture and traditions, common values and economic
and social changes and the identification of knowledge about the risk of eating foods and
beverages high in sugar. The data collection was carried out between January and April 2021
by applying a questionnaire structured around three types of questions: the socio-demographic
profile; eating style and food preferences; and the consumption of high-sugar products. The
study took into account 1,989 responses from adults. It was found that the citizens of
the Republic of Moldova consume an amount of sugar four times higher than the daily limit
recommended by the WHO. This leads to an increase in the number of cases of weight gain,
obesity, diabetes, fatty liver disease, hypertension, etc., which, in the context of the COVID-
19 pandemic, considerably increases the risk of severe complications.

It is worth mentioning that adult citizens of the Republic of Moldova consume large amounts
of foods rich in sugar, but predominantly low in nutritional value, instead of a balanced diet
based on fresh vegetables and fruits, meat, fish, etc. At the same time, the adult citizens in this
study have an excessive consumption of table salt, refined carbohydrates, unhealthy fats and
others. A close relationship has been established between the culinary traditions applied by
consumers, the unfavorable economic environment and food consumption habits in the
Republic of Moldova. The eating behavior research of the respondents allows us to conclude
that it is in line with international trends based on fast and cheap food with a high sugar
content.

Keywords: sugar consumption, questionnaire, nutritional behaviour, Republic of Moldova

1
Technical University of Moldova, Faculty of Food Technology, Food and Nutrition Department, Chisinau,
Republic of Moldova1
2
Technical University of Moldova, Faculty of Food Technology, Department of Oenology and Chemistry,
Chisinau, Republic of Moldova

* Corresponding author: [email protected]



A Research on Urban Furniture Design: Example of Isparta

Abdullah Beram1*

Abstract: Urban construction and renewal efforts have become more important in recent years.
Most municipalities are aware of the importance of renovation works, and this is reflected
in many studies. Urban furniture is an indispensable element for municipalities seeking
to satisfy the public. Local administrators aim to provide urban functions in common areas
and to give the city a contemporary and aesthetic look.

The furniture in use in the city center of Isparta was evaluated. A face-to-face interview was
conducted with 128 people from different age groups. Questions were asked about the
function, aesthetics, form, material, color, texture and ergonomics of Isparta urban furniture.
As a result, the original and creative designs of urban furniture satisfy the 16-25 and 26-35
age groups. It has been revealed that the group between the ages of 46-55 cares about
materials and ergonomics. Design, color and aesthetic importance are more prominent in
young and middle age groups.

Keywords: Urban furniture, design, local, ergonomic, satisfy.

1
Isparta University of Applied Sciences, Faculty of Forestry, Isparta, Turkey
* Corresponding author: [email protected]


Impact of Pandemic (Covid 19) on air quality in Prishtina

Besa Veseli1*, Shkumbin Shala 2*, Vehebi Sofiu3*

Abstract: Kosovo is a small country with an area of about 10,887 km². Pollution at the country
level is very high, but the main pollution problems are in urban areas, which are heavily polluted;
the main causes of this pollution are industry, the KEK power plants, road transport, district
heating companies (in Prishtina, Gjakova and Mitrovica), urban and industrial waste
disposal (with different local impacts), and wood and lignite used for home heating (World, 2011).
Among the regions, the Prishtina region is the area with the highest air pollution, caused
by the KEK power plants located nearby, other smaller industries, transport, heating, and other
individual heating facilities (Botrore, 2011). Since air pollutants know no boundaries, of greatest
concern are volatile organic compounds (VOC), CO2, NOx, CO, sulfur compounds (SO2),
PM10, PM2.5, etc. (MESP, 2015). In this paper we present the air quality in the
Prishtina region, where air quality analyses were obtained from KHMI for the pandemic
(COVID-19) period by measuring the parameters SO2, CO, NO2, O3, PM10 and PM2.5,
all of them in µg/m³, always referring to Directive 2008/50/EC and the
Law on Air Protection from Pollution (No. 03/L-160).
Keywords: Air, Pollution, CO, NOX, SO2, O3, PM10, PM2.5, MESP, IHMK, WHO.

1
Institution of Higher Education, UBT-College, Energy Efficiency Engineering
* Corresponding author: [email protected]


The light metals minerals of Montenegro

Biljana Zlaticanin1*, Sandra Kovacevic2*

Abstract: In the area of Montenegro, which belongs to the southeastern Dinarides, there are
red karst bauxites and white karst bauxites. According to the conditions and mode of origin,
bauxite deposits are divided into three genetic groups: weathering deposits, sedimentary
deposits and metamorphosed deposits. Significant results have been achieved regarding knowledge
of the characteristics of the bauxite formations, and their economic significance was successfully
defined a long time ago. This study is based on the up-to-date technical achievements
undertaken for the processing of bauxites.

Keywords: red karst bauxites, white bauxites

1
University of Montenegro, Faculty of Metallurgy and Technology, Cetinjski put bb, 81000 Podgorica,
Montenegro
2
Central School of Chemical Technology Spasoje Raspopović, 81000 Podgorica, Montenegro
* Corresponding author: [email protected]

The influence of the process parameters on the microstructure of Al-Cu-Mg-Ti alloys

Biljana Zlaticanin1*, Sandra Kovacevic2

Abstract: The results presented in this paper contribute to the investigation of the influence of
process parameters on the microstructure of samples during solidification of Al-Cu-Mg-Ti
alloys. To this aim, 30 samples were solidified at different growth rates. The growth rate is
a very important factor in the crystallization process. The obtained results give us the possibility
to create the desired microstructure through the growth parameters. Similar microstructures were
observed for very close values of the growth rate.

Keywords: Al-Cu-Mg-Ti alloys, growth rate

1
University of Montenegro, Faculty of Metallurgy and Technology, Cetinjski put bb, 81000 Podgorica,
Montenegro
2
Central School of Chemical Technology Spasoje Raspopović, 81000 Podgorica, Montenegro
* Corresponding author: [email protected]

Multivariate methods for seasonal characterization of air pollution

Virgjina Lipoveci1, Mirjana Čurlin2*

Abstract: A multivariate analysis of air quality monitoring data in the Kosovo region was
performed. The aim of this work is seasonal classification based on air quality monitoring
datasets in 2017. Different chemometric methods were used to process the dataset, such as
basic statistical methods, Pearson correlation coefficients, principal component analysis
(PCA) and cluster analysis (CA). The results obtained show and explain the seasonal
distribution of SO2 and NO2 as air quality indicators. This study makes it possible to obtain
new information from the monitoring datasets, which is necessary for the establishment of
guidelines in the framework of the health protection of the population in this region.
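A minimal sketch of the kind of chemometric workflow described above is given below, assuming a monitoring table with SO2 and NO2 columns, one row per station and month. The file name and column labels are placeholders, and the snippet only mirrors the Pearson correlation, PCA and cluster-analysis steps rather than the authors' full treatment of the 2017 Kosovo dataset.

    # Illustrative chemometric treatment of an SO2/NO2 monitoring dataset (assumed layout).
    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from scipy.cluster.hierarchy import linkage, fcluster

    data = pd.read_csv("kosovo_air_2017.csv")       # hypothetical file: one row per station/month
    features = data[["SO2", "NO2"]]

    print(features.corr(method="pearson"))          # Pearson correlation between indicators

    scaled = StandardScaler().fit_transform(features)
    pca = PCA(n_components=2).fit(scaled)
    print("explained variance ratio:", pca.explained_variance_ratio_)

    # Hierarchical cluster analysis to group observations into seasonal patterns.
    labels = fcluster(linkage(scaled, method="ward"), t=4, criterion="maxclust")
    data["cluster"] = labels
    print(data.groupby("cluster").size())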

Keywords: multivariate analysis, correlation, air pollution, quality indicators

1
National Centre of Labour Medicine in Gjakova, Kosovo
2*
University of Zagreb, Faculty of Food Technology and Biotechnology, Department of Process Engineering,
Section for Fundamental of Engineering Pierottijeva 6, 10000 Zagreb
*Corresponding author: [email protected]


The Assessment of Trihalomethanes (THMs) Concentrations in Drinking Water from Selected Distribution Systems in Opole Province

Iwona Klosok-Bazan1*, Joanna Boguniewicz2*, Agnieszka Drozdek3*

Abstract: Decreasing water abstraction, low flow velocities and variable water transport
directions have become the cause of mineral and organic sediment deposition in water supply
networks. These deposits, in the presence of chlorine, increase the likelihood of THM
formation, the main representative of which is chloroform. These compounds have
carcinogenic and mutagenic effects on humans and animals, so their presence in water should
be strictly controlled. Biofilm development contributes to an increase in color intensity,
turbidity and organic matter content in tap water. The result is a significant increase in the risk
of microorganisms and coliforms in the water. It is therefore necessary to maintain an
adequate amount of disinfectant in the water, such as chlorine, which at a rate as low as 0.2g
Cl2/m3 ensures that the microbiological risk of the water is reduced. Using high doses of Cl2
for water treatment increases the risk of trihalomethanes in the water. The selection of an
appropriate water disinfection method is important in this case.

Analyses of THMs levels in water from the water supply network in selected distribution
systems in Brzeg, Glubczyce, Opole, Nysa and Krapkowice were performed. In none of the five
selected distribution systems was the permissible total THMs concentration exceeded in the analyzed period;
however, it can be noticed that at a few measurement points the concentration of this
compound reaches 19 μg/l, while in the remaining measurement points it does not exceed 8
μg/l. This may indicate good technical conditions of water supply networks as well as
efficient operation of disinfection methods.

Additionally, the analysis of the data obtained from these five distribution systems shows that
67% of them have a slight excess of chloroform concentration, while in 7% of cases the
excess is almost three times higher than the recommendations. The reasons for this may be
sought in the quality of the intake water at the given measurement points for which the
exceedance occurred. The temperature of the water was considerably elevated due to the time
of the year, pH within 7 and the content of total organic carbon could have led to such effects.
In order to maintain the sanitary safety of the drinking water, it is therefore necessary to
control not only microbiological contamination but also the results of disinfection i.e.
disinfection by-products.

Keywords: drinking water, disinfection by-products, trihalomethanes, distribution system,

1
Opole University of Technology, Faculty of Mechanical Engineering, Department of Thermal Engineering and
Industrial Facilities, Mikolajczyka 5, Opole 45-271, Poland
* Corresponding author: [email protected]

Application of the Rutherford Backscattering Method in powder nanotechnology

A.A. Tatarinova1*, A.S. Doroshkevich, M. Kulik, M.A. Balasoiu, V. Almasan, D. Lazar

Abstract: Rutherford Backscattering Spectrometry (RBS) is an ion scattering technique used
for the compositional analysis of thin films that are less than 1 μm thick. During an RBS analysis,
high-energy He2+ ions with energies in the region from several hundred kiloelectron-volts to 2
- 3 MeV are directed onto the sample, and the energy distribution and yield of the He2+ ions
backscattered at a given angle are measured. Since the backscattering cross section
for each element is known, it is possible to obtain a quantitative compositional depth profile
from the RBS spectrum obtained. The capabilities of this method can be significantly
expanded. In particular, the method can be used in powder nanotechnology to study the elemental
composition of microscopically small objects. The application of methods based on
Rutherford Backscattering Spectrometry is extremely interesting for adsorption energy
devices; in particular, these methods can be used with maximum efficiency for various
chemoelectronic converters. A unique opportunity is to study the elemental composition of
adsorbates on the surface and phase separation in functional nanostructured layers. For this
reason, the preparation of planar-distributed chemoelectronic converters and the study of the
elemental composition of adsorbates using the Rutherford Backscattering Spectrometry
technique were the purpose of this investigation. The tasks of this study included: the development
and optimization of the technology for producing planar chemoelectronic converters with a
functional layer in the form of rounded drops containing monodisperse nanosized (7.5 μm)
particles of a solid solution of the ZrO2 - 3 mol% Y2O3 (YSZ) system in a PVA polymer
matrix; the study of the theoretical characteristics of the obtained chemoelectronic converters [1];
and the study of the elemental composition of the obtained chemoelectronic converters using
Rutherford Backscattering Spectrometry. The atomic and chemical composition of these
layers has been studied using nuclear and atomic methods. The thickness of the oxide layers
was found to be approximately the same for all implanted samples. These values were
determined on the basis of Rutherford Backscattering Spectrometry and nuclear reactions
(RBS/NR).
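For orientation, the sketch below evaluates the standard RBS kinematic factor, K = ((M1*cos(theta) + sqrt(M2^2 - M1^2*sin^2(theta))) / (M1 + M2))^2, which fixes the backscattered energy E1 = K*E0 for a given target mass. This is textbook RBS kinematics added here purely for illustration; the 2 MeV He-4 beam energy and 170° detector angle are assumed values, not parameters reported in the abstract.

    # Kinematic factor for elastic backscattering of a He-4 beam (standard RBS relation).
    import math

    def kinematic_factor(m_beam, m_target, theta_deg):
        """K = ((m1*cos(theta) + sqrt(m2^2 - m1^2*sin(theta)^2)) / (m1 + m2))^2."""
        th = math.radians(theta_deg)
        root = math.sqrt(m_target**2 - (m_beam * math.sin(th))**2)
        return ((m_beam * math.cos(th) + root) / (m_beam + m_target))**2

    E0 = 2.0                    # incident beam energy in MeV (assumed)
    theta = 170.0               # scattering angle in degrees (assumed detector geometry)
    for element, mass in [("O", 16.0), ("Y", 88.9), ("Zr", 91.2)]:
        K = kinematic_factor(4.0, mass, theta)
        print(f"{element}: K = {K:.3f}, backscattered energy = {K * E0:.2f} MeV")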

The study was performed in the scope of the H2020/MSCA/RISE/SSHARE number 871284
and the RO-JINR Projects within the framework of themes FLNP JINR: 04-4-1105-
2011/2022 and 03-4-1128-2017/2022.

Keywords: RBS, Powder Nanotechnology, chemoelectronic converters.

1
Joint Institute for Nuclear Research, Dubna, Russia
* Corresponding author: [email protected]

Process of Drying Peaches by Forced Convection

Natalia Tislinscaia1*, Vitali Visanu, Mihail Balan, Mihail Melenciuc

Abstract: For the Republic of Moldova, peaches represent a strategic economic product,
especially on the fresh fruit market, but because peaches are a perishable product,
huge quantities remain unused. The solution would be dehydration, which involves both
economic and health benefits. As raw material, average peaches (firmness 1.05
kgf/cm², dry matter 11.5%, humidity 88.5%) were dried by forced convection at temperatures of 50-90°C,
with different speeds of the working agent (0.5-2.5 m/s) and at different thicknesses of the
product layer (2-10 mm).

The study of the convective drying kinetics of peaches revealed that increasing both the thermal agent
temperature and speed, and decreasing the thickness of the slices, leads to an intensification of the
process. Therefore, for the convective drying of peaches, a temperature of 60°C, a
heating agent speed of 2.0 m/s and a slice thickness of about 3×10⁻³ m are recommended
for obtaining an optimal drying process.

Keywords: Moldova, material, health.

1
Technical University of Moldova, Moldova
* Corresponding author: [email protected]

Innovation strategies of functional plant yogurt production for personalized nutrition

Tatiana Capcanari1*, Aurica Chirsanova1, Rodica Siminiuc1

Abstract: Speaking of personalized nutrition, we are talking about the exclusion of specific
foods or chemical elements from the diet of a certain person for various reasons. The reason can be
a state of health as well as personal taste preferences or principles of life. For a long time, people
have been concerned with the individuality of our organisms in terms of biochemical functioning.
The reactions of different people to the same ingredient can be completely
opposite. A person's energy, health and resilience depend largely on nutrition, and
prolonged malnutrition quickly leads to serious health problems.
The frequency of iron deficiency anemia in the Republic of Moldova is high, being detected
in certain population groups, such as women of childbearing age (especially pregnant
women), young children and adolescents. The risk groups also include the elderly,
vegetarians, people with a poor socio-economic level, as well as people suffering from certain
chronic conditions. Hypocalcemia is a decrease in the level of calcium in the blood, which can
be caused by a problem with the parathyroid glands, as well as by bad nutrition. As
hypocalcemia progresses, muscle cramps are common, as well as confusion, depression,
memory problems, tingling in the lips, fingers and toes, but also tension and muscle pain. Therefore,
the development of an assortment of functional fermented products for the Republic of
Moldova is extremely relevant.
A technology for producing plant yogurt was developed using fermentation technology, and a
range of natural, fortified and enriched yoghurts was elaborated. Rice, oat and coconut milk
were used as the main raw materials. Flax, sesame and chia seeds, which are rich in vitamins,
minerals and dietary fibers, were used to produce a range of enriched yoghurts. To obtain fortified
yoghurts, minerals such as iron and calcium, which prevent the development of
anemia and hypocalcemia, were used. The problem solved by the invention consists in expanding the
base of fermented vegetarian products for the personalized nutrition of people with anemia,
hypocalcemia, lactose intolerance, avitaminosis or gastrointestinal disorders,
by improving the chemical composition and increasing the biological value of the yogurts through
fortification with vitamins, minerals, dietary fiber and highly active natural antioxidants, and by
reducing the amount of stabilizer and the duration of fermentation.
The experimental assortment of plant yoghurts was assessed by physicochemical and organoleptic
methods. All developed samples meet the standards of the technical documentation for this type
of food product. The organoleptic characteristics were highly appreciated. The developed
products are an opportunity for many people to return to a normal healthy diet.

Keywords: personalized nutrition, functional food products, hypocalcemia, iron deficiency anemia, natural antioxidants, dietary fiber, yogurt production.

1
Technical University of Moldova, Faculty of Food Technology, Food and Nutrition Department, Chisinau,
Republic of Moldova
* Corresponding author: [email protected]

Evaluation of Biomaterials Based on Agricultural Residues (Walnut Shells) Activated by Citric Acid, NaOH and H3PO4: Application to Water Treatment

Amel Aidi 1*, Assia Slimani2*,Ammar Fadel3*

Abstract: Surface waters contain organic matter (humic substances). These substances are
responsible for the coloration of the water and possess ion-exchange and complexation properties.
They can be a vehicle for most toxic substances (heavy metals, etc.) and also contribute to the
corrosion of the distribution system and to the fouling of resins and membranes. Adsorption is one
of the techniques most widely adopted for the removal of such pollutants, because of its great
capacity to purify contaminated water. In this context, the main objective of this work is to prepare
three biomaterials with a high adsorption capacity for organic matter (humic substances, HS):
materials based on an agricultural residue that undergoes a heat treatment up to 600°C, followed
by chemical activation with three activating agents (sodium hydroxide NaOH, phosphoric acid
H3PO4, and citric acid). The materials were characterized by Fourier-transform infrared
spectroscopy (FTIR) and X-ray diffraction (XRD). Their capacities in adsorption processes for
the removal of organic substances (HS) in water treatment were then demonstrated. The optimal
values of the reaction parameters, notably the temperature, the stirring speed, the contact time and
the mass of the material, were determined for each activating agent and then compared. The
isotherm study shows that the Langmuir and Freundlich models describe the adsorption process
of the humic substances well, with linear correlation coefficients reaching 97%. The
pseudo-second-order model is the kinetic model established in these studies. It can also be
concluded that walnut shells are an inexpensive natural residue, representing a major advantage
for the treatment of surface waters.
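As a hedged illustration of the isotherm treatment mentioned above, the sketch below fits the Langmuir and Freundlich models to equilibrium data with scipy. The (Ce, qe) points are invented placeholders, not the experimental values from this work, and the fitted parameters are therefore purely demonstrative.

    # Illustrative non-linear fits of the Langmuir and Freundlich isotherms (placeholder data).
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(ce, qmax, kl):
        return qmax * kl * ce / (1.0 + kl * ce)

    def freundlich(ce, kf, n):
        return kf * ce**(1.0 / n)

    # Hypothetical equilibrium concentrations (mg/L) and adsorbed amounts (mg/g).
    ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 60.0])
    qe = np.array([4.1, 7.9, 11.8, 15.6, 18.4, 19.3])

    for name, model, p0 in [("Langmuir", langmuir, (20.0, 0.1)), ("Freundlich", freundlich, (3.0, 2.0))]:
        popt, _ = curve_fit(model, ce, qe, p0=p0)
        residuals = qe - model(ce, *popt)
        r2 = 1.0 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)
        print(f"{name}: parameters = {popt.round(3)}, R^2 = {r2:.3f}")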
Keywords: H3PO4, humic substances, NaOH

1
Département de Chimie Industrielle / Université Mohamed Khider, Biskra, Algérie
* Corresponding author: [email protected]

Physico-chemical properties of rapeseed honey from the Republic of Moldova

Chirsanova Aurica1*, Tatiana Capcanari2*, Alina Boistean3*

Abstract: Bee honey is an empirical and image product of the Republic of Moldova. At the
same time, rapeseed honey is becoming more and more appreciated and sought after by local
consumers. In the last 5 years, the surface of agricultural lands sown with rapeseed is
constantly growing (from 36 thousand hectares in 2019 to 45 thousand hectares in 2021),
which also ensured the increase of the volume of rapeseed honey proposed for consumption.
In the context of a major focus on the authenticity of bee honey worldwide, this study is
relevant.
Melissopalynological analysis of the rapeseed honey samples showed that the dominant pollen is
Brassica spp., in proportions of 57.4%-68.3%. Therefore, the presence of over 45% of the
pollen grains of Brassica spp allows us to say that the honey samples are part of the
monofloral category. The moisture content ranged from 17.02% to 18.6%, and the pH from
4.19 to 4.28. Free acidity analysis is useful for assessing the freshness of honey. With the
alteration of honey, the value of free acidity increases as a result of the fermentation of sugars
into organic acids. A low acidity of 16.02-16.9 milliequivalents acid / kg was recorded in the
analyzed samples. Another physico-chemical parameter that indicates the degree of freshness
of honey is the content of hydroxymethylfurfural (HMF) which varied in the range from 11.21
mg / kg to 38.12 mg / kg which is below the maximum limit of 40 mg / kg allowed by
European standards. The electrical conductivity gives us information about the botanical
origin of honey. The analysis of this parameter is very often used, being considered a good
criterion to be able to identify the botanical origin and implicitly the purity of honey. Thus, in
the rapeseed honey samples, the electrical conductivity was in the range of 160.1 µS / cm to
182.9 µS / cm, which denotes a low electrical conductivity. The functional properties of
honey are related to the amount of natural antioxidants in bee pollen and floral nectar. The
antioxidant effects of bee honey are attributed to polyphenols, flavonoids and other compounds. Thus,
in the analyzed samples the total content of polyphenols was between 23.71 mg GAE/100 g
and 25.09 mg GAE/100 g, and that of flavonoids between 19.05 mg QE/100 g and 21.15 mg QE/100 g
of honey. The DPPH method was used as a means to determine the antioxidant activity of the honey
samples. DPPH radical inhibition activity in rapeseed honey ranged from 53.12% to 56.78%.
Thus, the research showed that rapeseed honey from the Republic of Moldova meets the
requirements of the admissible norms and is recommended for consumption.

Keywords: rapeseed honey, physical and chemical indicators, HMF, polyphenols, flavonoids

1
Technical University of Moldova, Faculty of Food Technology, Food and Nutrition Department, Chisinau,
Republic of Moldova
* Corresponding author: [email protected]

Study Of The Ohrid Traditional Ottoman Houses Local Architecture In Sustainability Context

Levent Menga1*

Abstract: Ohrid is the most important city in North Macedonia in terms of tourism. One of
the main reasons for this is Lake Ohrid, from which the town takes its name.
Lake Ohrid is the largest of the three natural lakes in
North Macedonia. Within the scope of this study, we examine the historical settlement area
around the Ohrid castle. This area, one of the first residential areas in the region, has historical
buildings and monasteries built with stone streets around the castle. In this paper, we show a
residential area as an example of a sustainable local settlement and analyze the sustainability
of the local architecture. The analysis we do is about the socio-economic, socio-cultural, and
environmental analysis criteria. In analyzing the socio-economic context, supporting
autonomy, promoting local events, optimizing construction work, extending the building life,
and protecting resources are examined. In the environmental context, respect for nature,
suitable location selection, reducing pollution and waste material, contributing to health
quality, and reducing natural hazards items are analyzed. Cultural protection, transferring
building cultures, developing creativity, recognizing moral values, and promoting social
cohesion are considered in the socio-cultural context. As a result of these examinations, we
have rated each criterion as “good,” “bad,” “average,” or “ineffective”; we converted the
results into a table and drew conclusions about the sustainability of this area.
Keywords: Sustainability, local architecture, environment, old town and cultural protection.

1
International Vision University, Faculty of Engineering and Architecture, Department of Computer Science,
Gostivar, North Macedonia
* Corresponding author: [email protected]

Cluster Analysis of Mobile Devices

Samedin Krrabaj1*

Abstract: Today, with the rapid development of technology, mobile devices have become one
of the most important elements of our lives. With the hardware and software developments,
mobile devices have become more than just a means of communication and have become used
in many parts of our daily lives with software applications developed in many areas such as
health, finance, social, photography, games, education and business life. Technology
companies make various strategic plans in the production and marketing of mobile devices that
have become a part of daily life. Mobile device manufacturers aim to produce mobile devices
that will appeal to every budget by making various pricing according to various hardware
features. Thus, they achieve targeted mobile device sales within their market share and increase their
profit share. In this study, 8 attributes were determined by using hardware (processor speed,
battery, ram, storage space, camera, weight, NFC, fingerprint) data of mobile devices. Together
with the attributes obtained, the price information of 163 mobile devices was discussed in four
categories as 0-249 €, 250-499 €, 500-749 €, 750-1000 €. Cluster analysis was performed
according to attribute and price categories. In the study, Expectation Maximization, one of the
clustering analysis algorithms, was used and a success rate of 88 percent was achieved.
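As a rough sketch of the clustering step described above, the snippet below runs an Expectation-Maximization Gaussian mixture over a small feature table. The CSV path, feature column names and the choice of four components (one per price category) are assumptions made for illustration and do not reproduce the authors' dataset or the reported 88 percent result.

    # Illustrative EM clustering of mobile-device attributes with a Gaussian mixture model.
    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.mixture import GaussianMixture

    # Hypothetical table: one row per device, 8 hardware attributes plus a price category.
    devices = pd.read_csv("devices.csv")
    attributes = ["cpu_speed", "battery", "ram", "storage", "camera", "weight", "nfc", "fingerprint"]

    X = StandardScaler().fit_transform(devices[attributes])
    gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
    devices["cluster"] = gmm.fit_predict(X)

    # Compare the discovered clusters against the labelled price categories.
    print(pd.crosstab(devices["cluster"], devices["price_category"]))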

Keywords: Artificial Intelligence, Data Mining, Clustering.

1University of Prizren, Faculty of Computer Science, Prizren, Kosovo.


* Corresponding author: [email protected]

Proposal and Analysis of the Geothermal Energy Based Plant; Thermodynamic Assessment

Oğuzhan AKBAY1*, Fatih YILMAZ2*

Abstract: Due to global warming and environmental problems, the importance of renewable
energy sources continues to increase day by day, so it can be specified that research on
renewable energy power generation plants has increased in recent years. In this proposed study,
thermodynamic investigation of the geothermal energy supported plant is conducted according
to the n-butane fluid. In this regard, the energy and exergy efficiencies of the flash-binary
geothermal power plant are examined, along with the impacts of parameters such as
geothermal fluid temperature, geothermal outlet pressure, and environmental temperature on
system performance. According to the thermodynamic results, the energy and
exergy efficiency of the proposed total plant is found to be 15.97% and 44.63%, respectively.

Keywords: Energy, exergy, geothermal, sustainability, renewable energy

1. Introduction

The rising population and industrial development lead to a rising need for energy, and
the use of fossil fuels has increased in parallel. The consumed fossil fuels have caused global
warming problems due to the effect of various greenhouse gases (Ratlamwala et al., 2012). It
is known that renewable energy sources are preferable to fossil fuels because they are
clean, environmentally friendly, reliable and sustainable (Takan and Kandemir, 2020). There is
increasing interest in geothermal energy, derived from hot liquid or hot dry rock systems
heated by the Earth's core (Altun and Kilic, 2020).

Geothermal fluid is used in low and medium temperature ranges in organic Rankine cycles
(ORC). ORC systems are capable of generating electricity at sources of 150 °C and lower
temperatures (Yilmaz, 2018). In power generation cycles with a low temperature source,
organic fluid is preferred, which has a high saturation pressure at atmospheric pressure at
condensation temperature and is in the state of hot steam at the turbine output. There are several
studies in the literature on the performance analysis of power plants driven by geothermal.

Erdemir, (2020) has developed an underground pumped hydro-energy storage system (UPHES)
with geothermal sources integrated with ORC and regional heating. As a result of
thermodynamic analysis, 60 MWh of electricity was stored by the UPHES. The heating needs of
100 homes, amounting to 2.8 MW, were met and 1.7 MW of energy was produced by the ORC system.
Siddiqui et al., (2019) developed a new renewable energy source model. The system comprised
a flash steam plant and a Cu-Cl process. As a result of the study, the energetic and exergetic
performance of the trigeneration model were computed as 19.6% and 19.1%, respectively.

1
Isparta University of Applied Science, Graduate Education Institute, Isparta, Turkey
2
Isparta University of Applied Science, Faculty of Technology, Mechatronics Engineering, Isparta, Turkey
* Corresponding author: [email protected]

Yilmaz and Koyuncu, (2020) modeled and optimized the dual geothermal power plant in Afyon
province using artificial neural network-based genetic algorithm method. After that, they found
the repayment period of the electricity produced at the plant to be 2.87 years and the exergy
cost to be 0.0176 $/kWh. Yuksel and Ozturk, (2020) performed a thermodynamic evaluation of
a multigeneration system with geothermal resources. As a result of their studies, they concluded
that multi-generation plants are more efficient than single- and co-generation systems.
Gnaifaid and Ozcan, (2020) studied the thermodynamic and economic analysis of power plant
for the generation of natural freshwater, power, cooling and heating using geothermal energy.
Furthermore, they calculated that the total energy and exergy efficiency of the plant could be
as high as 61% and 37.8%, respectively, and the cost of the plant ranged from $160-330/hour.

In the above studies, it is seen that different geothermal power plants in our country are
examined, thermodynamic performance analyses are made, and cycle proposals with different
refrigerants are presented. In this study, the thermodynamic performance of the geothermal
energy supported power generation plant is examined for the n-butane fluid. The energetic and
exergetic effectiveness of the flash-binary power generation model with the n-butane refrigerant
are investigated and compared with those of different refrigerants. In addition, the influence of
parameters such as geothermal fluid temperature, geothermal outlet pressure and ambient
temperature on the plant performance is examined.

2. Modeled plant description

This suggested model consists of a flash unit, a direct steam turbine and an ORC system. The
geothermal water enters the plant at a temperature of 200 °C and a pressure of 1600 kPa, and it is
assumed that it emerges from the geothermal well as a liquid-vapor mixture. The flow chart of
the system is given in Figure 1.

Figure 1. The layout of the working points of the modeled plant

Briefly, the geothermal fluid enters the separator at state 2 and goes to the steam turbine as
saturated steam at state 3. Afterward, the geothermal fluid in the saturated liquid phase at state
5 transfers its heat to the ORC subsystem in the heat exchanger (HEX), providing the thermal
energy required for the ORC. Finally, at state 11, it returns to the re-injection
well.

2.1. Thermodynamic analysis

In this proposed study, thermodynamic performance analysis was examined parametrically with
respect to the n-butane fluid. Thermodynamic analyses are performed in this studied system
according to four basic balance equations (Cengel and Boles, 2015; Dincer and Rosen, 2013).
In Table 1, the general thermodynamic balance equations written for the proposed model
according to the thermodynamic laws are presented.

Table 1. Thermodynamic balance equations

Mass:     \sum \dot{m}_{in} = \sum \dot{m}_{out}
Energy:   \dot{Q} + \sum \dot{m}_{in} h_{in} = \dot{W} + \sum \dot{m}_{out} h_{out}
Entropy:  \sum \dot{m}_{in} s_{in} + \sum \dot{Q}_k / T_k + \dot{S}_{gen} = \sum \dot{m}_{out} s_{out}
Exergy:   \sum \dot{m}_{in} ex_{in} + \sum \dot{Ex}^{Q} = \sum \dot{m}_{out} ex_{out} + \dot{Ex}^{W} + \dot{Ex}_{dest}

In addition, the mass, energy, entropy and exergy balance equations for each subcomponent of
the system are presented in Table 2.

Table 2. General thermodynamic analysis of system subcomponents: mass, energy, entropy and exergy balance equations written for the separator, turbine, ORC turbine, condenser, pump and heat exchanger, following the general forms given in Table 1.

Taking into account the balance equations in Tables 1 and 2, the overall energy and exergy
efficiency of the entire system was found using the following equations.

\eta_{en,overall} = \dot{W}_{net,total} / [\dot{m}_{geo} (h_{geo} - h_0)]    (2.1)

\eta_{ex,overall} = \dot{W}_{net,total} / [\dot{m}_{geo} ((h_{geo} - h_0) - T_0 (s_{geo} - s_0))]    (2.2)
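To show how overall efficiencies of the form of Eqs. (2.1) and (2.2) are evaluated in practice, the Python/CoolProp sketch below (the paper itself uses EES) computes energy and exergy efficiencies from the net power output. The geothermal mass flow rate and wellhead vapor quality are illustrative assumptions, so the printed values are not expected to reproduce the 15.97% and 44.63% reported later in Table 3.

    # Minimal sketch of the overall-efficiency definitions for the flash-binary plant.
    from CoolProp.CoolProp import PropsSI

    T0, P0 = 298.15, 101325.0        # dead-state conditions (K, Pa), assumed
    T_geo = 200.0 + 273.15           # wellhead temperature from the paper (K)
    x_geo = 0.15                     # wellhead vapor quality, assumed
    m_geo = 30.0                     # geothermal mass flow rate (kg/s), assumed
    W_net = 3038.0e3                 # net power output reported in the paper (W)

    h_geo = PropsSI('H', 'T', T_geo, 'Q', x_geo, 'Water')
    s_geo = PropsSI('S', 'T', T_geo, 'Q', x_geo, 'Water')
    h0 = PropsSI('H', 'T', T0, 'P', P0, 'Water')
    s0 = PropsSI('S', 'T', T0, 'P', P0, 'Water')

    ex_geo = (h_geo - h0) - T0 * (s_geo - s0)       # specific flow exergy of the brine
    eta_en = W_net / (m_geo * (h_geo - h0))         # Eq. (2.1)
    eta_ex = W_net / (m_geo * ex_geo)               # Eq. (2.2)
    print(f"energy efficiency = {eta_en:.3f}, exergy efficiency = {eta_ex:.3f}")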

3. Results and Discussion

In this suggested work, power generation from the flash-binary power plant supported by
geothermal energy was examined from a thermodynamic point of view. In the ORC subsystem,
n-butane fluid was used as the working fluid and thermodynamic examination was also
performed. In the study, all calculations were modeled using the EES package program. The
results obtained are tabulated in Table 3.

Table 3. Analysis results of the proposed plant and the ORC system

Parameter                            ORC      Overall system
Energy efficiency (%)                10.24    15.97
Exergy efficiency (%)                42.09    44.63
Net power generation (kW)            565.4    3038
Total exergy destruction rate (kW)   -        3663

A net power of 3038 kW is produced from the cycle with a reservoir temperature of 70 °C. The
energetic and exergetic performance of the planned model are computed as 15.97% and
44.63%, respectively. After that, the influence of the flash pressure change on the performance
of the ORC is observed and specified in Figure 2. As a result of increasing the flash pressure
from 300 kPa to 800 kPa, the energy and exergy performance of the ORC decrease by 0.15%
and 0.95%, respectively.

Figure 2. Impact of flash pressure change on the performance of the ORC

In Figure 3, the influence of flash pressure change on the performance of the whole model was
examined and as a result of increasing the flash pressure from 300 kPa to 800 kPa, the energy
efficiency of the whole plant increased from 0.15% to 0.21%, and the exergy efficiency from
0.4% to 0.59%. The reason for this increase is that with the increase of flash pressure, the fluid
going to the steam turbine goes at higher pressure and temperature, so the performance of the
proposed plant also increases.

Figure 3. Effect of flash pressure variation on the performance of the entire plant

The effect of flash pressure change on turbine inlet temperature and net power production of
the whole plant has been investigated and given in Figure 4. When the flash pressure increases
from 300 kPa to 800 kPa, the produced power rate increases and the turbine inlet temperature
also increases by about 33 °C. The cause for this rise can be expressed as the higher temperature
and enthalpy of the fluid going to the steam turbine at high flash pressures. The influence of the
turbine isentropic efficiency on the net power generation of the suggested plant is examined and
given in Figure 5. As a result of increasing the turbine isentropic efficiency by 20%, the turbine
and ORC system power generation rates also increase.

Figure 4. Effect of flash pressure change on turbine inlet temperature and power generation

Figure 5. Net power generation according to turbine isentropic efficiency change (Wnet,total, Wturbine and Wnet,ORC, in kW, plotted against turbine isentropic efficiencies of 0.75-0.95)

Changes in the performance of the modeled plant and ORC versus various turbine isentropic
efficiency are presented in Figure 6. As a result of 20% rise in turbine isentropic efficiency, the
energy efficiency of the suggested plant increases from approximately 12.8% to 15.97%, and
the exergy efficiency increases from 37.2% to 44.63%. Under the same situations, the energetic
efficiency of the ORC increased from about 8% to 10.24%, and the exergy performance
increased from 34% to 42.09%.

Figure 6. ORC and overall system performance change according to turbine isentropic
efficiency change

In Figure 7, the change in net power generation according to the ORC turbine input pressure is
examined. Once the ORC turbine input pressure is increased from 1500 kPa to 2500 kPa, it is
realized that the power generation rate obtained from both ORC and the overall system tends
to increase. The cause for this tendency is that with the rise of turbine input pressure, the fluid
entering the turbine reaches higher enthalpy values.

Figure 7. Power generation rate with various ORC turbine inlet pressure

The influence of ORC turbine input pressure on the ORC and whole system performances was
investigated and given in Figure 8. By increasing the ORC turbine pressure by 1000 kPa, the
energetic and exergetic performances of the ORC enhance by approximately 2.75% and 10%,
respectively.

Figure 8. System and ORC performance variation at different ORC turbine inlet pressures

In Figure 9, the effects on the overall system due to the PPT of HEX change are examined.
When the PPT of the HEX is increased by about 25 °C, a linear reduction occurs in the
energetic and exergetic performance of the total model. The reason for this decrease is that as
the temperature of the coolant entering the ORC turbine decreases with the increase of the PPT
of HEX, the enthalpy of the working fluid decreases, so the power generation drops in and as a
result its performance decreases.

Figure 9. Effect of PPT of HEX change on overall system performance

In Figure 10, the energetic and exergetic performances of two different working fluids used in
the proposed system are given for the entire system. As can be seen in Figure 10, when n-butane
is used as the working fluid, the energetic and exergetic performances of the plant were
calculated as 15.97% and 44.63%, respectively. If isopentane is used as the working fluid, the
energy and exergy performances of the suggested model were found to be 14.40% and 40.26%,
respectively. When these two fluids are compared under the same conditions, n-butane performs
better than isopentane.

Figure 10. Performance comparison of the whole plant for two working fluids

4. Conclusion

Geothermal energy, which is one of the renewable energy sources, is widely used today.
In this work, thermodynamic performance analyses of a geothermal energy assisted power
generation system were carried out for n-butane as the working fluid. Under the same
circumstances, the influence of parameters such as the flash pressure, the turbine isentropic
efficiency, the ORC turbine inlet pressure and the PPT of the HEX on the modeled system was examined

and presented in graphs. By applying the first and second laws of thermodynamics, performance
analyses of this system were performed and some important results obtained are as follows:

1. The power generated from the entire plant was calculated as 3038 kW for the n-butane
fluid.
2. Energetic and exergetic efficiencies of the modeled ORC for the n-butane fluid were
calculated as 10.24% and 42.09%, respectively.
3. Energy and exergy performance of the total power plant are found as 15.97% and
44.63% for n-butane fluid, respectively.

References

Altun, A. F., & Kilic, M. (2020). Thermodynamic performance evaluation of a geothermal ORC
power plant. Renewable Energy, 148, 261-274.

Cengel, Y. A., & Boles, M. A. (2015). Thermodynamics: An Engineering Approach 8th Edition.
The McGraw-Hill Companies, Inc., New York.

Dincer, I., & Rosen, M. (2013). Exergy: energy, environment, and sustainable development.

Erdemir, D. (2020). Development and assessment of geothermal‐based underground pumped
hydroenergy storage system integrated with organic Rankine cycle and district
heating. International Journal of Energy Research, 44(13), 10894-10907.

Erdogan, A., & Kucuka, S. (2019). Bir jeotermal enerji santralinin termodinamik analizi ve
hava ve su soğutmalı çevrim performanslarının değerlendirilmesi. Ulusal Tesisat
Mühendisliği Kongresi, 17-20 Nisan 2019, İzmir.

Gnaifaid, H., & Ozcan, H. (2021). Development and multiobjective optimization of an
integrated flash-binary geothermal power plant with reverse osmosis desalination and
absorption refrigeration for multi-generation. Geothermics, 89, 101949.

Ratlamwala, T. A. H., Dincer, I., & Gadalla, M. A. (2012). Thermodynamic analysis of a novel
integrated geothermal based power generation-quadruple effect absorption cooling-
hydrogen liquefaction system. International Journal of hydrogen energy, 37(7), 5840-
5849.

Siddiqui, O., Ishaq, H., & Dincer, I. (2019). A novel solar and geothermal-based trigeneration
system for electricity generation, hydrogen production and cooling. Energy Conversion
and Management, 198, 111812.

Takan, M. A., & Kandemir, S. Y. Türkiye’deki Jeotermal Enerjinin Birincil Enerji Arzı
Yönünden Değerlendirilmesi. Avrupa Bilim ve Teknoloji Dergisi, 381-385.

Yilmaz, F. (2018). Jeotermal enerji destekli güç ve temiz su üretim sisteminin incelenmesi ve
termodinamik analizi. Akademik Platform Mühendislik ve Fen Bilimleri Dergisi, 6(2),
86-93.

Yilmaz, C., & Koyuncu, I. (2021). Thermoeconomic modeling and artificial neural network
optimization of Afyon geothermal power plant. Renewable Energy, 163, 1166-1181.

Yuksel, Y. E., & Ozturk, M. (2020). Jeotermal enerji destekli çok fonksiyonlu enerji üretim
sisteminin termodinamik analizi. Pamukkale Üniversitesi Mühendislik Bilimleri
Dergisi, 26(1), 113-121.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Comparative Performance Investigation of a Transcritical CO2 Power Plant Using Waste Heat

Fatih YILMAZ1*

Abstract: The energy generation sector is one of the most important contributors to today's
environmental challenges. Therefore, the effective use of energy resources, such as energy
management exploiting waste heat, is important for humanity at this point. The proposed study
deals with the thermodynamic performance analysis of a transcritical CO2-based Rankine cycle
using waste heat as the energy source. The system mainly comprises a Rankine cycle (RC)
operating with CO2 as the working fluid under transcritical conditions. A detailed performance
analysis is made to investigate the RC and the overall system. A parametric analysis is also
conducted to examine the effects of some important parameters on the system performance and
the exergy destruction rate. In addition, the efficiency comparison for co-generation is discussed
by making some modifications to this Rankine cycle. The thermodynamic analysis results show
that the energetic and exergetic efficiencies of the overall system are 55.66% and 36.06%,
respectively.

Keywords: Energy, exergy, Rankine cycle, waste heat

1. Introduction

Today, as a result of the excessive use of fossil fuels, it is a known fact that greenhouse gas
emissions cause great harm not only to the environment and the atmosphere but also to human
health (Karapekmez and Dincer, 2021). Despite these known damages, fossil fuels still rank first
in energy generation (Gunderson et al., 2020; Zhang et al., 2020), and environmental problems
therefore continue to increase day by day. One of the most important ways to prevent this
increase and to fight it as humanity is the use of renewable energy sources together with energy
efficiency methods. In this context, energy production from waste heat comes to the fore, and in
recent years there have been many academic studies in this field, both in practice and in the
literature.

Liao et al. (2020) examined an organic Rankine cycle (ORC) using waste heat by means of the
advanced exergy analysis method. They also conducted an energy and exergy analysis to
examine the impact of some constraints on the system performance. They computed the
optimum compression ratio for a simple supercritical CO2 ORC as 1.8. Kizilkan (2020)
presented a performance evaluation of waste heat recovery in a cement plant. This author also
conducted a detailed comparative analysis of supercritical fluids and stated that the highest
energy efficiency, 27.9%, was obtained in the closed CO2 Brayton cycle. Feng et al. (2020)
performed a parametric and thermo-economic analysis of an ORC utilizing waste heat.
1 Isparta University of Applied Science, Faculty of Technology, Mechatronics Engineering, Isparta, Turkey
* Corresponding author: [email protected]

In their proposed study, parametric analyses of supercritical and subcritical cycles were
performed for the R1234ze fluid. Butcher and Reddy (2007) examined a waste heat recovery
power plant using the exergy analysis method. They also investigated the effect of some
important parameters, such as the heat recovery steam generator (HRSG) temperature, on the
system performance. They stated that increasing the HRSG pinch point temperature (PPT)
decreased the exergetic efficiency of the proposed system.

In short, it is a fact that there are many studies on waste heat management and power production.
In this study, however, a CO2-based transcritical (t-CO2) cycle is preferred for power
generation, and a detailed energetic and exergetic analysis is carried out for the performance
evaluation of the waste heat-assisted tCO2-RC system.

2. Explanation of Proposed System

In this work, a waste heat supported RC system with t-CO2 as the working fluid is proposed, as
shown in Fig. 1. In short, the thermal energy for the RC plant is met by waste heat. First, the
waste heat enters the heat recovery steam generator (HRSG) at state 7, where the heat transfer
takes place. The CO2 fluid then enters the turbine of the Rankine cycle at state 3 in the
superheated vapor phase, where power production takes place. The high-temperature CO2 at the
turbine outlet transmits its heat to the water in the heat recovery system. The water entering
HEX 1 under environmental conditions is heated here, hot water is produced, and cogeneration
is achieved with the proposed system, as shown in Fig. 1. The working fluid enters the pump in
the saturated liquid phase at the gas cooler outlet, where it is pressurized before entering the
HRSG component again. As a result, the RC system is employed for heating and power
generation from waste heat.

[Fig. 1 schematic: the waste heat stream enters the HRSG at state 7 and leaves at the waste heat outlet; the tCO2 loop passes through the HRSG, the turbine (states 3-4, net power), HEX 1 (hot water for heating), the gas cooler and the pump (states 1-2) back to the HRSG; the remaining numbered state points mark the water streams.]
Fig.1 Design Layout of the proposed model system

2.1. Thermodynamic Modeling

In this suggested study, as mentioned above for Fig. 1, a detailed thermodynamic performance
evaluation is conducted to examine the plant effectiveness from the energy and exergy
efficiency perspectives. Before moving on to the thermodynamic analysis, some of the
assumptions made for the system in its most general form are given below:

• The modeled plant is expected to operate under steady-state flow conditions.
• Kinetic and potential energy changes are disregarded.
• Waste heat is considered as air in the analysis.
• The working fluid at the turbine inlet is modeled in the saturated vapor phase.
• Pumps and turbines are considered adiabatic.
• The effectiveness of the heat exchangers is taken as 80%.
• Reference temperature and pressure values are taken as 25 °C and 101.325 kPa.

With the aforementioned assumptions, the general thermodynamic balance equations used for
the modeling can be written as follows (Cengel and Boles, 2007; Dincer and Rosen, 2012;
Kotas, 2013):

Mass balance: \sum \dot{m}_{in} = \sum \dot{m}_{out}   (1)

Energy balance: \dot{Q}_{in} + \dot{W}_{in} + \sum \dot{m}_{in} h_{in} = \dot{Q}_{out} + \dot{W}_{out} + \sum \dot{m}_{out} h_{out}   (2)

Entropy balance: \sum \dot{m}_{in} s_{in} + \sum \dot{Q}_{k}/T_{k} + \dot{S}_{gen} = \sum \dot{m}_{out} s_{out}   (3)

Exergy balance: \sum \dot{Ex}_{Q} + \dot{W}_{in} + \sum \dot{m}_{in} ex_{in} = \sum \dot{Ex}_{Q,out} + \dot{W}_{out} + \sum \dot{m}_{out} ex_{out} + \dot{Ex}_{D}   (4)

In equation (4), the \dot{Ex}_{Q} and \dot{Ex}_{W} terms describe the exergy transfer rates associated with heat and work, and they are written as:

\dot{Ex}_{Q} = (1 - T_{0}/T_{k}) \dot{Q}_{k}   (5)
\dot{Ex}_{W} = \dot{W}   (6)

After these assumptions and general equations, the specific exergy term, i.e. the physical exergy, can be defined as follows:

ex = (h - h_{0}) - T_{0}(s - s_{0})   (7)

As a result, the power generation and the thermal energy input of the modeled system can be written as (state numbers refer to Fig. 1):

\dot{W}_{Turbine} = \dot{m}_{CO2}(h_{3} - h_{4})   (8)
\dot{W}_{Pump} = \dot{m}_{CO2}(h_{2} - h_{1})   (9)
\dot{Q}_{HRSG} = \dot{m}_{CO2}(h_{3} - h_{2})   (10)

After calculating the produced and consumed power rates, the net power generation rate can be formulated as:

\dot{W}_{net} = \dot{W}_{Turbine} - \dot{W}_{Pump}   (11)

Finally, the energy and exergy efficiencies of the overall system are defined as:

\eta_{ov} = (\dot{W}_{net} + \dot{Q}_{heating}) / \dot{Q}_{in}   (12)

\psi_{ov} = (\dot{W}_{net} + \dot{Ex}_{Q,heating}) / \dot{Ex}_{in}   (13)
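To make the use of Eqs. (8)-(13) concrete, a minimal numerical sketch is given below. All state enthalpies, flow rates and exergy values in it are placeholder assumptions chosen only for illustration; they are not the property data of the analyzed plant.

```python
# Minimal sketch of Eqs. (8)-(13); every numerical value below is an assumed placeholder.
m_co2 = 5.0              # CO2 mass flow rate through the RC, kg/s (assumed)
h3, h4 = 480.0, 455.0    # turbine inlet/outlet enthalpy, kJ/kg (assumed)
h1, h2 = 280.0, 286.0    # pump inlet/outlet enthalpy, kJ/kg (assumed)

W_turbine = m_co2 * (h3 - h4)      # Eq. (8), kW
W_pump    = m_co2 * (h2 - h1)      # Eq. (9), kW
W_net     = W_turbine - W_pump     # Eq. (11), kW

Q_in      = m_co2 * (h3 - h2)      # Eq. (10): heat picked up in the HRSG, kW
Q_heating = 460.0                  # hot water load delivered by HEX 1, kW (assumed)
Ex_q_heat = 90.0                   # exergy content of the heating load, kW (assumed)
Ex_in     = 500.0                  # exergy supplied by the waste heat stream, kW (assumed)

eta_ov = (W_net + Q_heating) / Q_in      # Eq. (12): cogeneration energy efficiency
psi_ov = (W_net + Ex_q_heat) / Ex_in     # Eq. (13): exergy efficiency

print(f"W_net = {W_net:.1f} kW, eta_ov = {eta_ov:.1%}, psi_ov = {psi_ov:.1%}")
```

With these placeholder values the sketch returns roughly 95 kW of net power and efficiencies of about 57% and 37%, which is the same order of magnitude as the results reported later in Table 2.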

3. Results

The proposed study addresses the thermodynamic analysis of the waste heat-supported tCO2-RC
for cogeneration purposes. For this purpose, power generation and system performance are
investigated in terms of energetic and exergetic efficiencies by using waste heat at 200 °C in an
RC system with CO2 as the working fluid for advanced energy management. In light of the
assumptions presented in Table 1, the thermodynamic analysis is applied to the proposed system
with the EES program, and the results are shown in Table 2.
Table 1. Proposed system assumptions
Parameter                            Unit    Value
Waste heat inlet temperature         °C      200
Waste heat inlet pressure            kPa     101.325
Pump compression rate                -       1.5
Pinch point temperature of HRSG      °C      15
Pump inlet pressure                  kPa     5800
Pump isentropic efficiency           %       85
Turbine isentropic efficiency        %       92
HEX effectiveness                    %       80
Reference temperature                °C      25
Reference pressure                   kPa     101.325

The total net power generation capacity of the proposed system, using the waste heat, is 92.43 kW.
According to the results presented in Table 2, the system produces a heating load of 463.4 kW,
and the energetic and exergetic efficiencies of the RC plant are 9.08% and 31.08%,
respectively. As a result, the modeled power plant has an energy performance of 55.66% and
an exergy performance of 36.06%.

Table 2. Thermodynamic analysis results


Unit Value
Net power generation rate kW 92.43
Heat production rate kW 463.4
Energy efficiency for RC % 9.08
Exergy efficiency for RC % 31.08
Energy efficiency for whole system % 55.66
Exergy efficiency for whole system % 36.06
Total exergy destruction rate kW 279.6
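As a back-of-the-envelope check, if the overall energy efficiency in Table 2 is interpreted as in Eq. (12), that is (net power + heating load)/(heat input), the implied thermal input from the waste heat stream is roughly (92.43 kW + 463.4 kW) / 0.5566 ≈ 999 kW. This figure is only indicative, since the heat input itself is not reported in the table.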

Since the basis of this study is waste heat, the waste heat temperature is an important
parameter. Fig. 2 examines the impact of the waste heat temperature change on the
performance of the RC plant. With the rise in the waste heat temperature, the energy efficiency
of the proposed RC plant increases from 8.9% to 9.25%, since the temperature of the fluid
entering the turbine at state 3 increases. At the same time, however, the exergy performance of
the RC system decreases, because the exergy entering the system with the heat increases.

Another figure, Fig. 3, discusses the effect on overall system performance in the same
waste heat temperature range. As can be seen in this figure, the rise in the waste heat
temperature increases the energy efficiency as it increases the useful outputs in the whole
system. However, it reduces the exergy efficiency. Therefore, waste heat temperature
management is important in system designs.

Fig.2. Impact of the waste heat inlet temperature on the RC plant performance

Fig.3. Impact of the waste heat inlet temperature on the overall plant performance

The last figure on waste heat temperature, Fig.4, evaluates the net power generation and the
irreversibility of the entire plant. With the increase in waste heat temperature, both the power
produced and the irreversibility increase. While the rise in electricity production is a positive
situation, the rise in the irreversibility is an undesirable situation. Therefore, waste heat
temperature management in power generation systems can be expressed as a very important
parameter.

Fig.4. Influence of the waste heat inlet temperature on the net power and irreversibility

Figs. 5 and 6 examine the impact of the pump compression ratio on net power production,
irreversibility, RC performance and overall system performance, respectively. In short, it is
clearly seen in Fig. 5 that, while the power generation from the system increases with the
increase of the pump compression ratio, the exergy destruction of the whole system decreases.
Accordingly, the energetic and exergetic efficiencies of the plant and the RC, shown in Fig. 6,
also increase.

Fig.5. Net power and total irreversibility vs pump pressure ratio

Fig.6. Performance of the whole and RC plant vs pump pressure ratio

In this proposed study, the last figure, Fig. 7, presents the change in overall system performance
and power rate with increasing pinch point temperature of the HRSG. As the HRSG temperature
rises from 5 °C to 35 °C, the net power output drops by approximately 2.5 kW. Depending on
this situation, both energy efficiency and exergy performance are in a downward trend.

Fig.7. Effect of the PPT of HRSG on net power rate and system’s performance

4. Discussion and Conclusions

The aim of this study is to examine the thermodynamic performance of the waste heat supported
transcritical CO2-RC for power and heating production applications. In this context, a
parametric study is made to examine the effects of changes in some parameters, such as the
waste heat inlet temperature, the compression ratio and the pinch point temperature of the
HRSG, on the system performance. To summarize, some important results obtained from the
analysis can be stated as follows:

i. The power and heat generation capacity of the whole system was determined as 92.43
kW and 463.4 kW. In addition, the total exergy destruction was calculated as 279.6 kW.
ii. While the entire system has an energy efficiency of 55.66%, it has an exergy efficiency
of 36.06%.
iii. The highest exergy destruction was seen in the HRSG subcomponent.

As a result, it is very important to increase system efficiencies with cogeneration, trigeneration
or waste heat management and to design environmentally friendly systems by applying energy
management.

References

Butcher, C. J., & Reddy, B. V. (2007). Second law analysis of a waste heat recovery based
power generation system. International Journal of Heat and Mass Transfer, 50(11-12),
2355-2363.

Cengel, Y. A., & Boles, M. A. (2007). Thermodynamics: An Engineering Approach 6th Edition
(SI Units). The McGraw-Hill Companies, Inc., New York.

Dincer, I., & Rosen, M. A. (2012). Exergy: energy, environment and sustainable development.
Newnes.

Feng, Y. Q., Zhang, W., Niaz, H., He, Z. X., Wang, S., Wang, X., & Liu, Y. Z. (2020).
Parametric analysis and thermo-economical optimization of a Supercritical-Subcritical
organic Rankine cycle for waste heat utilization. Energy Conversion and
Management, 212, 112773.

Gunderson, R., Stuart, D., & Petersen, B. (2020). The fossil fuel industry’s framing of carbon
capture and storage: Faith in innovation, value instrumentalization, and status quo
maintenance. Journal of Cleaner Production, 252, 119767.

Karapekmez, A., & Dinçer, İ. (2021). Development of a multigenerational energy system for
clean hydrogen generation. Journal of Cleaner Production, 299, 126909.

Kizilkan, O. (2020). Performance assessment of steam Rankine cycle and sCO2 Brayton cycle
for waste heat recovery in a cement plant: A comparative study for supercritical
fluids. International Journal of Energy Research, 44(15), 12329-12343.

Kotas, T. J. (2013). The exergy method of thermal plant analysis. Elsevier.

Liao, G., Jiaqiang, E., Zhang, F., Chen, J., & Leng, E. (2020). Advanced exergy analysis for
Organic Rankine Cycle-based layout to recover waste heat of flue gas. Applied
Energy, 266, 114891.

Zhang, W., Yan, Q., Yuan, J., He, G., Teng, T. L., Zhang, M., & Zeng, Y. (2020). A realistic
pathway for coal-fired power in China from 2020 to 2030. Journal of Cleaner
Production, 275, 122859.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

The Effects of Coronavirus in the Construction Industry: A Case of Turkey

Pınar Usta1*, Başak Zengin2 and Kübra Arslan3

Abstract: The coronavirus (COVID-19) pandemic, which has affected the whole world, has
changed the balance of life. Unlike the epidemics in history, it has become a more far-reaching
global problem. While solutions were being sought for problems on the health side, new
problems emerged in different sectors as a consequence of the measures taken. The construction
sector, which is one of the sectors that drive global economic development, was greatly affected
by the fluctuations of this process.

To investigate the impact of the pandemic on the construction sector in Turkey, a descriptive
analysis based on quantitative measurement techniques was conducted through survey
fieldwork. The study showed that the construction industry was affected by the global economic
difficulties. Even though precautions were taken in terms of health, there was unemployment
anxiety as well as concern about the health of the employees and their environment. In addition,
another major problem was the procurement of materials. According to the findings, economic
contractions were observed in the construction sector. The study revealed that more measures
should be taken in case the pandemic progresses further or a similar situation occurs.

Keywords: COVID 19, Epidemic, Construction Sector and Descriptive Study

1. Introduction

Epidemic diseases have affected the health of living creatures and deteriorated their quality of
life. The recent COVID-19 epidemic has caused severe damage at the community level. While
certain systems and locations are not equally affected, it is noteworthy that the coronavirus
outbreak is global, rapid and widespread, significantly disrupting current practices. The post-
epidemic period has been called "the beginning of a new era". Following the rapid infection of
3 million patients worldwide, the World Health Organization announced an increase in
measures (COVID-19 STRATEGY UPDATE 2020). Many cities and countries have
implemented quarantine measures and decided to close many borders and roads, while
workplaces, schools and universities were also closed.

The anxiety, fear and depression brought about by the epidemic caused psychological, sociological
and economic problems by disturbing individuals, sectors and therefore businesses.
1 Isparta University of Applied Sciences, Technology Faculty, Civil Engineering Department, Isparta, Turkey. ORCID: 0000-0001-9809-3855
2 Kahramanmaraş Istiklal University, Elbistan Vocational High School, Kahramanmaraş, Turkey. ORCID: 0000-0003-3719-9423
3 Istinye University, Faculty of Fine Arts, Design and Architecture, Interior Architecture and Environmental Design, Istanbul, Turkey. ORCID: 0000-0002-2803-7185
* Corresponding author: [email protected]

Due to these effects, problems related to the epidemic progressed faster (Barua; Wang et al.
2020). While the pandemic is a serious problem for countries and the rapid increase in deaths
shows that the virus is an enemy that must be fought collectively, the importance of measures on
which the world will work together to eliminate this problem has emerged.

While investigating the effects of COVID-19, it was seen that there were major crises. In
addition to the crises experienced in terms of health, there were wide-ranging crises in other
areas. When the effect of the epidemic on the economy was investigated, a meta-transition was
encountered; it has been described as a meta-transition because the pandemic affects more than
one regime at the same time. According to this transition theory, multiple system elements must
be taken into account, including technology and innovation, markets, business, government,
behaviors and norms, regulatory and governance frameworks, and pathways of change. This
multi-parameter way of working is integrated into the analysis. Such systematic changes are
examined under two main headings: government (management and regulation) and economics
(business, markets and finance). States have faced major problems in managing these multiple
parameters; it is a process for which states were not ready. Sectors that have grown through
globalization have experienced the most difficult constraints.

International air travel, shipping and trade suffered the most, and small businesses and self-
employed people have gone through more troublesome processes (Wells et al. 2020). When the
COVID-19 pandemic first manifested itself, production and general economic activities in
China, and later all over the world, shrank together with demand. These negative effects were
felt in most countries. Vasiev et al. (2020) conducted input-output analyses of 42 sectors in 31
provinces in order to investigate how and how much the Chinese economy was affected by the
epidemic. In this context, they developed various scenarios for changes in production and
consumption depending on 22 different parameters in the economic, environmental and social
fields. The analysis showed that there are 23 sustainable factors for China's regional
development and that the pandemic has a strong impact on hazardous waste, carbon dioxide
emissions and energy resource efficiency. For the Galician economy, it was estimated that
revenue would decrease by 8.5% in the positive scenario and by 12.7% in the negative scenario
if the pandemic lasted more than a quarter (Ellison, 2020). During the Covid-19 pandemic, the
economy in Iran contracted, and according to preliminary data a 30% decrease in oil exports was
observed (Duddu 2020).

Fornaro and Wolf (2020) examined the effects of supply and demand shocks on economic
growth within the scope of the macroeconomic effects of the COVID-19 pandemic. The
Covid-19 crisis has dragged many sectors toward collapse in many countries around the world
and caused a rapid acceleration in inflation and unemployment rates; if this process continues
for a few more periods, there will be further losses in production (Chakraborty and Maity 2020)
(Fig. 1).

Figure 1. Effects of the Covid-19 crisis process on different sectors (before and after)

Regarding the Indian economy, Jaya (2020) examined the impact of the Covid-19 pandemic on
sectors such as agriculture, industry, tourism, finance and the economy in the period of March-June
2020. According to these findings, financial and real estate services in India declined by 17.3%,
mining and quarrying by 14.7%, basic consumption/energy (electricity, gas and water supply)
by 13.9%, the construction/building sector by 13.3%, general production by 6.3%, and trade,
tourism and travel by 9%, with safety declining by 0.4%, while the agriculture, forestry, fishing
and nutrition sectors increased by 1.3% compared to the past. Research findings also showed
that industrial production decreased during the COVID-19 pandemic in India, unemployment
reached its highest level in the last 45 years, and private sector investments recently started to
decline rapidly (Dev and Sengupta 2020).

The subsistence of individuals relies on critical items during the outbreak, and this need can
be met by appropriately exploiting critical resources, such as raw materials, employees, and
active logistics systems, as explained in the inner circle of Figure 2.

Fig.2 Covid crisis process affects different sectors

The recent example of the coronavirus COVID-19 outbreak clearly shows the necessity of
this new perspective. This research introduces a new angle in supply chain (SC) resilience
research, in which resistance to extraordinary disruptions needs to be considered at the scale of
viability. The intertwined supply network (ISN) is elaborated in terms of viability, and viability
formation is illustrated through dynamic game-theoretic modelling of a biological system that
resembles the ISN, together with some future research areas (Ivanov and Dolgui 2020; Mishra
et al. 2021; Singh et al. 2021).

Examining the Egyptian economy under the influence of the pandemic, Diao et al. (2020)
estimated that, if the pandemic continued, revenue in the Egyptian economy would decrease by
0.7-0.8% every month and household consumption would decrease by 10%, and that if the crisis
continued for 3 or 6 months, revenue would decrease by approximately 2.1-4.8% by the end of
2020. Another study investigating the impact of the pandemic across different areas compared
63 sectors such as agriculture, transportation, mining and construction in Myanmar. According
to the findings, with the contraction of the economy by 41% during the pandemic, the decline in
demand and in exports would play an important role for the agriculture and food sector. Cajner
et al. (2020) found that approximately 13 million wage earners in the USA lost their jobs within
two weeks due to the pandemic, a rate about twice that observed over the corresponding period
of the 2008 Global Financial Crisis.

Examining the reflections of COVID-19 on a sectoral basis, Evelina et al. (2020) emphasized
that the tourism sector has an important share in the economy of Namibia and that the impact of
the pandemic is pessimistic in the manufacturing, construction, mining and quarrying sectors as
well as in tourism, while the outlook for the agriculture, forestry and fishing sector and for
technology is optimistic. In Turkey, where the first case was detected on March 11, 2020, the
economic, social, sociological and psychological effects of the pandemic have been felt to a
large extent, alongside its effects on health. It is obvious that the Covid-19 pandemic has
affected Turkey's economy as it has so many other countries. Considering this process, sectoral
employment has become more fragile (Koyuncu and Meçik, 2020).

Although there were negative effects on the sectors after the pandemic, it ensured that
precautions were taken for the problems that may occur from now on. Larger investments and
budgets began to be allocated for management systems, especially in the education and health
sectors. Vaccination studies were carried out, which affected the treatment process on a large
scale. Equipment used in hospitals was developed. A new era has begun in the education system
that concerns all fields. The distance education system, which is generally preferred in adult
education, has turned into a system used by everyone. Compulsory education and working
environments were created with the remote system. In that period, the tendency towards the
virtual shopping system increased. With the increasing trend in e-commerce, developments in
this direction have been observed. The tendency towards agriculture has increased with
renewable energy resources. With the transition of life online, the need for cyber precautions
has increased even more. The most significant improvement in this process has been observed
in energy resources. Considering the carbon dioxide measurements, it was determined that the
rates in the emission measurements decreased (Karakaş 2021; Karakaya, and Uzmanı 2021;
Doğan and Doğan 2020).

1.1 New coronavirus disease (COVID-19)

It is a virus that was first identified on January 13, 2020, as a result of research conducted on
a group of patients who developed respiratory symptoms (fever, cough, shortness of breath) in
the Wuhan Province of China at the end of December 2019. The name COVID-19 combines
"Co" from "corona", "vi" from "virus", "d" from the English word "disease" and "19" because
the disease first appeared in 2019. Considering the impact of Covid-19 on the global
community, the World Health Organization (WHO) declared the coronavirus-borne Covid-19
disease, which threatens the whole world, a "pandemic", meaning a global epidemic, on
March 11, 2020 (Cucinotta and Vanelli 2020; Valtonen et al. 2019).

Coronaviruses (CoV) are a large family of viruses that cause a variety of illnesses, from the
common cold to more serious diseases such as Middle East Respiratory Syndrome (MERS) and
severe acute respiratory syndrome (SARS) (UNCTAD 2020). The new type of coronavirus,
coded as COVID-19, usually causes diseases of the respiratory and gastrointestinal systems in
humans. According to clinical results in adults, it can cause the common cold, bronchitis,
pneumonia, severe acute respiratory distress syndrome (ARDS) and multiple organ failure
resulting in death (Reyad 2020; Aslan and Özdemir 2020). According to the WHO report, public
health and social measures must be implemented with the participation of all members of
society in order to slow down or stop the spread of Covid-19, and a global struggle must be
carried out (World Health Organization, 2020). Looking from history to the present, outbreaks
such as the Plague, also known as the "Black Death", in the 1300s, "Bleeding Fever" in the
1500s, "Cholera" in the 1900s, "SARS" in the early 2000s, and later "Swine Flu" and "Ebola" in
2009 and 2014 have threatened public health and the economy in large areas around the world.
As in all pandemics in world history, the Covid-19 pandemic manifests itself not only in health
but also in many areas such as the economy, finance, education, transportation, industry, public
services and tourism (WHO, 2020).

The global epidemic spread rapidly from China and the Far East to other Asian, European,
American and African countries. While countries have struggled against the coronavirus
(Covid-19) epidemic with different approaches, they have also revealed that they were generally
caught unprepared (Dyer 2020; Manderson and Levine 2020). So much so that, while the health
systems and infrastructure in many countries were questioned, the insufficiency of health
personnel and the lack of medical equipment and supplies created complete confusion.

1.2 The effect of covid-19 on the construction industry

The construction sector is one of the sectors affected by the coronavirus epidemic, both in the
world and in our country. Like many other industries, the construction industry was negatively
affected. The slowdown in building production, the decrease in working capacity, the disruption
of allowances, the insufficient supply of materials and the adverse effects of rising exchange
rates caused great difficulties. Sectoral studies in Turkey reveal a contraction in line with the
worldwide trend. Studies have shown that, alongside tourism, trade, aviation and social areas,
the construction sector has also been affected by this epidemic and its crises. Especially during
the pandemic in China, studies on tunnel construction sites far from settlements observed great
difficulties in the availability of workers and the sustainability of the work (Guo et al. 2020). As
the spread of COVID-19 has continued since December 2019, stay-at-home requirements
around the globe have shifted many activities, such as attending school and doing our jobs,
from physical to virtual interaction; however, some activities, such as construction work, are in
essence impracticable to accomplish virtually.

Therefore, the construction sector has been severely impaired by the current pandemic. The
construction sector is a key constituent of countries' economies, accounting for approximately
13% of global GDP; as such, maintaining the ability to perform construction activities with
minimal spread of COVID-19 might support the financial response to the outbreak. In this
context, this analysis aims to comprehend the potential impact of COVID-19 on construction
employees using an agent-based modelling approach. Activities are classified as being of low,
medium or high hazard for workers, and the spread of COVID-19 is simulated among the
construction workers in a project. This research establishes that the workforce of a construction
project might be reduced by 30% to 90% due to the spread of COVID-19. Understanding how
COVID-19 may spread among construction workers may assist construction project managers
in creating adequate conditions for workers to perform their jobs, minimizing the chances of
becoming infected with COVID-19. Such investigations also contribute to quantifying the
benefits of using multiple working shifts to ease the spread of COVID-19 among construction
laborers (Araya 2021a; Araya 2021b).

Singapore’s construction sector employs more than 450,000 laborers. During the peak of the
COVID-19 pandemic in Singapore from April to June 2020, migrant workers were
disproportionately affected, including many working in the construction industry. Shared
accommodation and construction worksites emerged as hotspots for COVID-19 transmission.
Official government resources, including COVID-19 epidemiological data, 43 advisories and
19 circulars by Singapore’s Ministries of Health and Manpower, were examined over an 8-
month period from March to October 2020. From a peak COVID-19 incidence of
1,424.6/100,000 workers in May 2020, the prevalence declined to 3.7/100,000 workers by
October 2020. Multilevel safe management measures were implemented to enable the phased
resumption of construction worksites from July 2020. Using the Swiss cheese risk management
model, the authors described the various governmental, industry, workplace and worker-
specific interventions to prevent, detect and contain COVID-19 for the safe resumption of work
in the construction industry (Zhang et al. 2021).

COVID-19 prevention policies have also been found to hinder the arrival of materials to be
used in construction (García-Alberti et al. 2021). Similar events have been experienced in our
country. For the construction sector, which covers hundreds of business lines and contributes to
the economy and employment, packages such as reducing the housing down payment to 10%,
discounts in bank credit rates and easier access to loans have been prepared in order to revive
the sector in our country, and the search for bigger incentives has begun. For a while, there has
been trading activity in the sector. In this research, survey data were used to analyze the
situation of the construction sector in Turkey. The pandemic caused problems in terms of
working conditions in the sector, work follow-up and allowances. Considering the effect of this
process on people, a study was carried out on the kinds of problems experienced in
construction. Based on the data obtained from the studies, the main ways in which COVID-19
has affected the construction industry are:

• Import and export issues.


• Changing oil prices.
• Social distance effect.
• Curfew situation.
• Impact on newly graduated and experienced engineers.
• The subject of taking adequate precautions.
• New policies developed.
• New conditions in the working environment.
• Measures.

This policy brief discusses how the COVID-19 pandemic exposes the fractures in the current
worldwide socio-technical order and opens up multiple alternative future pathways. The policy
brief explores the pandemic through the lens of the multi-level perspective on socio-technical
transitions. The pandemic is framed as a meta-transition event at the landscape level of
unprecedented scale, pace and pervasiveness, such that it permeates all socio-technical regimes
simultaneously. The prospects for the future are then defined on an axis that compares the
strength of civil society with that of financial structures. The result is four distinct scenarios
that are linked to contemporary conversations on socio-economic futures: business as usual;
managed transition; chaotic transition; and successful degrowth. This provides a socio-technical
transitions perspective for assessing future sustainability following the COVID-19 pandemic.
The scenarios are presented as a starting point for policy discussion and for the participation of
societal actors in defining social and economic options for the future, and the implications that
the various pathways would have for ecological responsibility. Overall, the COVID-19
pandemic can act as a catalytic event in which the authenticity and efficacy of existing economic
and political structures will be questioned and reformed, and hence it is an opportunity to
redefine the ecological responsibility our actions cause.

2. Material and Method

In this section, the research design, population and sample, measurement tool, validity and
reliability study and analysis of the data are explained.

2.1 Material and researched background

The aim of this research is to investigate how the construction industry has been affected by the
COVID-19 pandemic. By determining these effects, it is aimed to share the existing problems
with the public, civil engineers, technicians and company managers. As a result, awareness will
be raised about the creation of emergency action plans against other possible future epidemics.
It is thought that this research will fill an important gap, since there are a limited number of
studies conducted abroad and domestically on how the epidemic has affected the construction
sector.

The research population consists of civil engineers, civil engineering technicians and final-year
civil engineering students in the Republic of Turkey. The study universe of the research consists
of civil engineers, technicians and civil engineering senior students working in the districts on
the European side of Istanbul. People from widely differing socio-economic and socio-cultural
backgrounds live in these districts. Based on this, the civil engineers, technicians and civil
engineering senior students working in these districts have the variety and number to reflect
this structure (Figure 3).

Figure 3. Surveyed research group

In line with the calculations made, the contact information of 170 civil engineers, technicians
and civil engineering senior students in the research area was obtained, and the 156 people who
returned properly completed questionnaires formed the study group of this study. The general
characteristics of the research group are given in Table 1.

Table 1. Total research group
                                    n     %
Gender      Male                    136   87.2
            Female                  20    12.8
Age         20-27                   59    37.8
            28-35                   35    22.4
            36-43                   30    19.2
            44 and over             32    20.5
Seniority   0-4 years               95    60.9
            5-10 years              42    26.9
            11-15 years             12    7.7
            16-20 years             3     1.9
            21 years and over       4     2.6
Job         Engineer                78    50
            Technician              63    40.4
            Intern                  15    9.6
Education   Associate degree        22    14.1
            Undergraduate           91    58.3
            Postgraduate            43    27.6
Total                               156   100
When Table 1 is examined, the distribution of the participants by gender is 87.2% male (n = 136)
and 12.8% female (n = 20); by age group, 37.8% were aged 20-27 (n = 59), 22.4% were aged
28-35 (n = 35), 19.2% were aged 36-43 (n = 30) and 20.5% were aged 44 and over (n = 32); by
seniority, 60.9% had 0-4 years (n = 95), 26.9% 5-10 years (n = 42), 7.7% 11-15 years (n = 12),
1.9% 16-20 years (n = 3) and 2.6% 21 years and above (n = 4); by profession, 50% were
engineers (n = 78), 40.4% technicians (n = 63) and 9.6% trainees (n = 15); and by education
level, 14.1% had an associate degree (n = 22), 58.3% an undergraduate degree (n = 91) and
27.6% a postgraduate degree (n = 43).

2.2. Collection of data

In this study, the data were collected with the "Effect of Coronavirus on the Construction
Industry" scale. According to the validity and reliability results, the Cronbach's Alpha value of
the managerial strength scale was calculated as .88, and the Cronbach's Alpha value of the
technology literacy scale was calculated as 0.86. Cronbach's Alpha values of 0.60 and above
indicate that a scale is reliable and valid, and values of .80 and above indicate that it is highly
reliable and valid. Table 2 gives the evaluation of the questions according to the subtitles of
the research.
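For reference, Cronbach's alpha can be computed directly from the item scores. A minimal sketch is given below; the response matrix in it is made up for illustration and is not the survey data used in this study.

```python
# Minimal sketch: Cronbach's alpha for a set of Likert-type items.
# Rows = respondents, columns = items; the values below are made-up placeholders.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

responses = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [4, 4, 5, 4, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```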

Table 2. Evaluation according to the subtitles of the research and the questions

Dimension: Epidemic Measures (Q1-Q5); factor loadings: Q1 0.755, Q2 0.719, Q3 0.71, Q4 0.701, Q5 0.699; variance 60.25; Cronbach's Alpha 0.95; average 4.21; S. deviation 0.59
Dimension: Economics (Q6-Q12); factor loadings: Q6 0.732, Q7 0.727, Q8 0.721, Q9 0.823, Q10 0.541, Q11 0.558, Q12 0.668; variance 9.686; Cronbach's Alpha 0.86; average 4.26; S. deviation 0.51
Dimension: Labour conditions (Q13-Q20); factor loadings: Q13 0.591, Q14 0.726, Q15 0.679, Q16 0.657, Q17 0.763, Q18 0.673, Q19 0.627, Q20 0.664; variance 10.18; Cronbach's Alpha 0.83; average 4.16; S. deviation 0.63
Dimension: State assistance (Q21-Q26); factor loadings: Q21 0.837, Q22 0.871, Q23 0.833, Q24 0.859, Q25 0.817, Q26 0.865; variance 4.83; Cronbach's Alpha 0.81; average 4.27; S. deviation 0.54
Dimension: Unemployment (Q27-Q30); factor loadings: Q27 0.531, Q28 0.823, Q29 0.793, Q30 0.558; variance 11.21; Cronbach's Alpha 0.83; average 3.96; S. deviation 0.66
KMO = 0.897; chi-square = 14021.932

As can be seen in Table 2, in the scale factor analysis assessing the impact of the pandemic on
the construction industry, attention was paid to the factor load values being higher than 0.50 and
to the explained variance being greater than 50%. In the phase of determining the factors, a total
of five factors were determined, paying attention to the eigenvalues of the factors being greater
than 1. These factors are pandemic measures, economy, working conditions, state aid and
unemployment. The variance value for items 1, 2, 3, 4 and 5 of the scale, which constitute the
pandemic precautions factor, was found to be 60.25 and the Cronbach's Alpha value to be 0.95.
The variance value for items 6, 7, 8, 9, 10, 11 and 12, which constitute the economy factor, was
found to be 9.686 and the Cronbach's Alpha value to be 0.86. The variance value for items 13,
14, 15, 16, 17, 18 and 19, which constitute the working conditions factor, was 4.83 and the
Cronbach's Alpha value 0.81. The variance value for items 20, 21, 22, 23, 24, 25 and 26, which
constitute the state aid factor, was found to be 11.21 and the Cronbach's Alpha value 0.83. The
variance value for items 27, 28, 29 and 30, which constitute the unemployment factor, was
found to be 4.7 and the Cronbach's Alpha value to be 0.8.
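The "eigenvalues greater than 1" rule mentioned above (the Kaiser criterion) can be checked from the correlation matrix of the items. A minimal sketch with random placeholder data, not the survey responses, is shown below.

```python
# Minimal sketch of the Kaiser criterion: retain factors whose eigenvalues of the
# item correlation matrix exceed 1. The data below are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(156, 30)).astype(float)   # 156 respondents x 30 items (assumed shape)

corr = np.corrcoef(items, rowvar=False)       # 30 x 30 item correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # eigenvalues sorted in descending order
n_factors = int((eigenvalues > 1.0).sum())
print(f"Eigenvalues > 1: {n_factors} factors retained")
```

With structured survey data the retained factors would correspond to the dimensions of the scale; with random data the count is only illustrative.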

2.3 Data Analysis

After the data was obtained, the data were expressed in the form of distributions of
frequencies and percentages, and comparisons were made between them by creating various
graphics and tables. In the analysis of the data, the findings obtained in the study were evaluated
using the arithmetic mean, standard deviation (s), percentage (%) among the descriptive
statistical methods. SPSS (Statistical Package for Social Sciences) 20.0 For Windows software
was used in the analyses.

In the study, firstly, normality distribution coefficients were examined to determine whether
the data obtained from the "Technology Literacy Scale" and the "Managerial Strength Scale"
showed a normal distribution. A normal distribution of the data is required to perform
parametric tests. In this study, the Kolmogorov-Smirnov test was used to test the normality of
the data distribution; this test is used when the group size is greater than 50. A value of p < 0.05
obtained from the Kolmogorov-Smirnov test indicates that the data are not normally distributed,
whereas p > 0.05 is interpreted as the data being normally distributed. For the comparison of
scale items between paired groups (gender), the parametric t-test was used, and the ANOVA
test was used for multiple groups (age, seniority, occupation, educational status).
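A minimal sketch of the test sequence described above (normality check, independent-samples t-test, one-way ANOVA with a Tukey post-hoc test) is given below. It uses randomly generated placeholder data rather than the survey responses, and assumes that scipy and statsmodels are available.

```python
# Minimal sketch of the analysis pipeline: normality test, t-test by gender,
# one-way ANOVA across age groups, and Tukey's post-hoc test (placeholder data).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
scores = rng.normal(loc=3.5, scale=0.7, size=156)                # sub-dimension scores (assumed)
gender = rng.choice(["male", "female"], size=156, p=[0.87, 0.13])
age = rng.choice(["20-27", "28-35", "36-43", "44+"], size=156)

# Kolmogorov-Smirnov test against a normal distribution with the sample mean/std
ks_stat, ks_p = stats.kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))

# Independent-samples t-test between the two gender groups
t_stat, t_p = stats.ttest_ind(scores[gender == "male"], scores[gender == "female"])

# One-way ANOVA across the four age groups
groups = [scores[age == g] for g in ["20-27", "28-35", "36-43", "44+"]]
f_stat, f_p = stats.f_oneway(*groups)

print(f"KS p={ks_p:.3f}, t-test p={t_p:.3f}, ANOVA p={f_p:.3f}")
if f_p < 0.05:
    print(pairwise_tukeyhsd(scores, age))   # which age groups differ
```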

3. Findings

This section presents the findings regarding the relationship between the impact of the
pandemic on the construction sector and the demographic variables. Table 3 gives the analysis
of the study by gender.

Table 3. Analysis of the research by gender

Dimension           Gender    n     Average   SS    t      p
Epidemic Measures   Female    20    3.08      .97   2.20   .071
                    Male      136   3.50      .76
Economics           Female    20    2.07      .57   6.96   .369
                    Male      136   3.02      .57
Labour conditions   Female    20    2.85      .98   3.51   .114
                    Male      136   3.52      .76
State assistance    Female    20    2.76      .90   3.03   .061
                    Male      136   3.23      .59
Unemployment        Female    20    1.71      .67   7.07   .109
                    Male      136   2.99      .76

When Table 3 is examined, the effect of the pandemic on the construction sector does not differ
significantly by gender in the dimension of pandemic measures (p = .071, t = 2.20). The average
of the female participants is at the agree level (x̄ = 3.08) and the average of the male participants
is at the partially agree level (x̄ = 3.50). Gender also does not cause a significant difference in
the economy sub-dimension (p = .369, t = 6.96); the average of the female participants is at the
disagree level (x̄ = 2.07), and the average of the male participants is at the undecided level
(x̄ = 3.02). The working conditions sub-dimension did not differ significantly by gender
(p = .114, t = 3.51); the average of the female participants is at the undecided level (x̄ = 2.85),
and the average of the male participants is at the agree level (x̄ = 3.52). The state support sub-
dimension does not differ significantly by gender (p = .061, t = 3.03); the average of the female
participants is at the undecided level (x̄ = 2.76), and the average of the male participants is at
the agree level (x̄ = 3.23). The unemployment sub-dimension does not differ significantly by
gender (p = .109, t = 7.07); the average of the female participants is at the disagree level
(x̄ = 1.71), and the average of the male participants is at the undecided level (x̄ = 2.99). Table 4
gives the analysis of the study by education group.

Table 4. Analysis of the research by education group

Dimension           Source of variation   Sum of squares   Sd    Quadratic mean   f        Significance level
Epidemic Measures   Between groups        3.570            2     1.78             2.82     .063
                    Within groups         96.810           153   .633
                    Total                 100.380          155
Economics           Between groups        9.589            2     4.795            12.751   .000
                    Within groups         57.533           153   .376
                    Total                 67.122           155
Labour conditions   Between groups        6.475            2     3.238            4.981    .008
                    Within groups         99.450           153   .650
                    Total                 105.926          155
State assistance    Between groups        1.842            2     .921             2.165    .118
                    Within groups         65.101           153   .425
                    Total                 66.943           155
Unemployment        Between groups        24.640           2     12.320           20.525   .000
                    Within groups         91.835           153   .600
                    Total                 116.476          155

As a result of the participants' responses to the scale of the impact of the pandemic on the
construction sector in Table 4, there is no significant difference between the pandemic measures
sub-dimension and educational status (F = 2.82, p > .05). There is a significant difference
between the economics sub-dimension and educational status (F = 12.75, p < .05). The Tukey
test was applied to understand from which educational status this difference arises; according to
its results, the averages of the participants with a postgraduate degree are higher than those of
the participants with an associate degree. There is a significant difference between the working
conditions sub-dimension and educational status (F = 4.98, p < .05). According to the results of
the Tukey test, the averages of the undergraduate participants are lower than those of the
postgraduate participants. There is no significant difference between the state support sub-
dimension and educational status (F = 2.16, p > .05). There is a significant difference between
the unemployment sub-dimension and educational status (F = 4.98, p < .05). According to the
results of the Tukey test, the averages of the participants with a postgraduate degree are higher
than those of the associate degree and undergraduate graduates. Table 5 gives the analysis of the
study by profession.

Table 5. Analysis of the research by profession

Dimension           Source of variation   Sum of squares   Sd    Quadratic mean   f        Significance level
Epidemic Measures   Between groups        3.327            2     1.663            2.62     .076
                    Within groups         97.053           153   .634
                    Total                 100.380          155
Economics           Between groups        13.740           2     6.870            19.690   .000
                    Within groups         53.383           153   .349
                    Total                 67.122           155
Labour conditions   Between groups        10.909           2     5.454            8.783    .000
                    Within groups         95.017           153   .621
                    Total                 105.926          155
State assistance    Between groups        3.41             2     1.770            4.272    .016
                    Within groups         63.403           153   .414
                    Total                 66.943           155
Unemployment        Between groups        36.939           2     18.469           35.529   .000
                    Within groups         79.537           153   .520
                    Total                 116.476          155

As a result of the participants' responses to the scale of the impact of the pandemic on the
construction sector in Table 5, the pandemic measures sub-dimension does not show a
significant difference in terms of the occupation variable (F = 2.62, p > .05). The economics
sub-dimension shows a significant difference in terms of profession (F = 18.69, p < .05). The
Tukey test was applied to understand which profession caused this difference; according to its
results, the average of the participants working as technicians is lower than that of the other
participants. The working conditions sub-dimension shows a significant difference in terms of
profession (F = 8.78, p < .05). According to the results of the Tukey test, the average of the
participants working as technicians is lower than that of the other participants. The state support
sub-dimension shows a significant difference in terms of profession (F = 4.27, p < .05).
According to the results of the Tukey test, the average of the participants working as interns is
lower than that of the other participants. The unemployment sub-dimension shows a significant
difference in terms of occupation (F = 35.52, p < .05). According to the results of the Tukey
test, the average of the participants working as technicians is lower than that of the other
participants. Table 6 gives the analysis of the study by seniority.

Table 6. Analysis of the research by seniority

Dimension           Source of variation   Sum of squares   Sd    Quadratic mean   f        Significance level
Epidemic Measures   Between groups        3.327            4     2.316            3.83     .075
                    Within groups         91.114           151   .603
                    Total                 100.380          155
Economics           Between groups        15.608           4     3.902            11.438   .090
                    Within groups         51.514           151   .341
                    Total                 67.122           155
Labour conditions   Between groups        11.753           4     2.938            4.711    .001
                    Within groups         94.172           151   .624
                    Total                 105.926          155
State assistance    Between groups        6.978            4     1.745            4.393    .072
                    Within groups         59.965           151   .397
                    Total                 66.943           155
Unemployment        Between groups        37.937           4     9.484            18.234   .000
                    Within groups         78.539           151   .520
                    Total                 116.476          155

As a result of the participants' responses to the scale of the impact of the pandemic on the
construction sector in Table 6, the pandemic measures sub-dimension does not show a
significant difference in terms of the seniority variable (F = 3.83, p > .05). The economy sub-
dimension does not show a significant difference in terms of seniority (F = 11.43, p > .05). The
working conditions sub-dimension shows a significant difference in terms of seniority (F =
4.71, p < .05). The Tukey test was applied to understand from which seniority interval this
difference arises; according to its results, the averages of the participants in the 0-4 years
seniority range are lower than those of the participants in the 11-15 years seniority range. The
state support sub-dimension does not show a significant difference in terms of seniority (F =
4.39, p > .05). The unemployment sub-dimension shows a significant difference in terms of
seniority (F = 18.23, p < .05). The Tukey test was applied to understand from which seniority
interval this difference arises; according to its results, the average of the participants in the 0-4
years seniority interval is lower than that of the participants in the other seniority intervals.
Table 7 gives the analysis of the study by age group.

Table 7. Analysis of the research by age

Dimension           Source of variation   Sum of squares   Sd    Quadratic mean   f         Significance level
Epidemic Measures   Between groups        5.724            3     1.908            3.064     .030
                    Within groups         94.657           152   .623
                    Total                 100.380          155
Economics           Between groups        31.260           3     10.420           44.166    .000
                    Within groups         35.862           152   .236
                    Total                 67.122           155
Labour conditions   Between groups        17.723           3     5.908            10.180    .000
                    Within groups         88.203           152   .580
                    Total                 105.926          155
State assistance    Between groups        5.162            3     1.721            4.233     .007
                    Within groups         61.781           152   .406
                    Total                 66.943           155
Unemployment        Between groups        90.157           3     30.052           173.568   .000
                    Within groups         26.318           152   .173
                    Total                 116.476          155

As a result of the responses of the participants to the scale of the impact of the pandemic on the
construction sector in Table 7, the sub-dimension of pandemic measures shows a significant
difference in terms of the age variable (F = 3.06, p <.05). According to the results of the Tukey
test conducted regarding the source of this difference; The averages of the participants aged 44
and over are higher than those between the ages of 20-27. The economy sub-dimension shows
a significant difference in terms of age (F = 44.16, p <.05). Tukey test was applied to understand
from which age range this difference arises. According to the results of the Tukey test, the
average of the participants between the ages of 20-27 is lower than the other age groups.
Working conditions sub-dimension shows a significant difference in terms of age (F = 10.18, p
<.05). Tukey test was applied to understand from which age range this difference arises.
According to the results of the Tukey test, the averages of the participants between the ages of
20-27 are lower than the averages of the participants between the ages of 36-43 and 44 and
over. State support sub-dimension shows a significant difference in terms of age (F = 4.23, p
<.05). Tukey test was applied to understand from which age range this difference arises.

According to the results of the Tukey test, the averages of the participants aged 44 and over
are higher than the participants between the ages of 20-27. The unemployment sub-dimension
shows a significant difference in terms of age (F = 173.56, p <.05). Tukey test was applied to
understand from which age range this difference arises. According to the results of the Tukey test,
the averages of the participants between the ages of 20-27 are lower than the averages of the
other participants.
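
The F values reported in Tables 6 and 7 follow the standard one-way ANOVA decomposition. As a minimal illustrative sketch in C# (not the authors' statistical software; the group scores are assumed to be available as arrays), the between-group and within-group sums of squares and the F ratio can be computed as follows:

static class AnovaSketch
{
    // One-way ANOVA: between-group and within-group sums of squares and the F ratio,
    // as reported in Tables 6 and 7.
    public static (double ssBetween, double ssWithin, double f) OneWay(double[][] groups)
    {
        int n = 0; double grandSum = 0;
        foreach (var g in groups) { n += g.Length; foreach (var v in g) grandSum += v; }
        double grandMean = grandSum / n;

        double ssBetween = 0, ssWithin = 0;
        foreach (var g in groups)
        {
            double mean = 0;
            foreach (var v in g) mean += v;
            mean /= g.Length;
            ssBetween += g.Length * (mean - grandMean) * (mean - grandMean);
            foreach (var v in g) ssWithin += (v - mean) * (v - mean);
        }

        int dfBetween = groups.Length - 1;   // 4 for five seniority groups, 3 for four age groups
        int dfWithin  = n - groups.Length;   // 151 or 152 for the 156 participants
        double f = (ssBetween / dfBetween) / (ssWithin / dfWithin);
        return (ssBetween, ssWithin, f);
    }
}

The Tukey post-hoc test then compares each pair of group means against a critical difference derived from the within-group mean square and the group sizes.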

4. Conclusion
Employment and education policies in Turkey should be restructured with the sector's
sensitivity to shocks in mind, and giving the public sector a stronger role in financing these
areas of employment policy is expected to increase the effectiveness of macroeconomic policy.
This study of the short-term effects of the COVID-19 pandemic in Turkey, particularly the
changes in the industry and construction sectors, can then serve as a source for the measures
to be taken.

Analyzing the effects of the pandemic in the short, medium and long term with new research at
both macroeconomic and sectoral level will be of great importance in terms of understanding
the impact mechanisms of this process that affects all segments of the economy.

When examined in general, according to the findings, construction sites that continued
operating experienced difficulties in this sense. Materials could not be procured. The
workers could not work at full capacity. Necessary occupational health measures could not be
taken. It was stated that unemployment is increasing and no new jobs have been created.
With people's increasing anxiety, different problems have arisen. Due to this sudden, unexpected
process, tenders have been extended and it has been revealed that some companies are in
difficulty. People whose priorities have shifted have tried to pay more attention to nutrition and
health.

In this context, prioritizing the implementation of sector-specific policies in the economy
and emphasizing digital transformation in the medium and long term, especially in the
industry, manufacturing, retail trade and service sectors, can alleviate the sectoral effects of
possible epidemics; strengthening and encouraging remote working opportunities in these
sectors is expected to reduce the fragility of these sectors and to prevent employment losses
in them.

In the light of the research, the decrease in the working group's share of national income
during the pandemic, the decrease in wages, the increase in unemployment and the
contraction of capital reflected the economic problems. In this context, it is emphasized that
institutional instruments such as transfers in the short term, and a basic income system and
family insurance in the long term, should be strengthened so that the income-inequality
conditions of those who do not have access to financial markets do not worsen. By improving
standards in construction project and cost management and applying new technologies,
construction can play an important part in the economic recovery from the coronavirus. The
sessions in the Working Week examine problems and potential solutions in construction
investment and delivery and link new technologies to these potential solutions in a pragmatic
way. In the longer term, these improvements will allow construction to contribute meaningfully
to the attainment of the sustainable development goals, which will remain a challenge long
after the pandemic has abated.

References
Araya, F., (2021a). Modeling Working Shifts in Construction Projects Using an Agent-Based
Approach to Minimize the Spread of COVID-19. Journal of Building Engineering 41 doi:
10.1016/j.jobe.2021.102413.

Araya, F., (2021b). Modeling the Spread of COVID-19 on Construction Workers: An Agent-
Based Approach. Safety Science 133 (September 2020). doi: 10.1016/j.ssci.2020.105022.

Aslan, M. T., Aslan, I. O., Özdemir, O., (2020). COVID-19 (Yeni Tip Koronavirüs)
    Günlerinde Dahi Anne Sütü Yine Çok Önemli! Journal of Biotechnology and Strategic
    Health Research 1 (April): 111–115. doi:10.34084/bshr.721702.

Barua, S., (2020). Understanding Coronanomics: The Economic Implications of the
    Coronavirus (COVID-19) Pandemic.

WHO (2020). COVID-19 Strategy Update.
    https://www.who.int/docs/default-source/coronaviruse/covid-strategy-update-14april2020.pdf.

Domenico, C., Vanelli, M., (2020). WHO Declares COVID-19 a Pandemic. Acta Biomedica
    91. Mattioli 1885. doi:10.23750/abm.v91i1.9397.

Dev, S. M., Sengupta, R., (2020). Covid-19: Impact on the Indian Economy. Indian Journal of
Labour Economics 63 (October). Springer: 105–111. doi:10.1007/s41027-020-00264-z.

Diao, X., Nilar A., Wuit Y. L., Zone P, P., Thurlow J. (2020). Assessing the Impacts of COVID-
19 on Myanmar’s Economy: A Social Accounting Matrix (SAM) Multiplier Approach.
doi:10.2499/p15738coll2.133745.

Doğan, Y., Doğan S., (2020). Koronavirüs Pandemisi ve Türkiye’de Bitkisel Üretime Etkisi.
Artuklu Kaime International Journal of Economics and Administrative Researches
Y.2020. Vol. 3. https://dergipark.org.tr/en/pub/artuklu/729961.
Duddu P., (2020). Iran Coronavirus (Covid-19): Updates on the Outbreak, Measures & Impact.
https://www.pharmaceutical-technology.com/features/iran-coronavirus-covid-19-death-toll-cases-ncov-measures-impact/.

Dyer, O., (2020). Trump Claims Public Health Warnings on Covid-19 Are a Conspiracy against
Him. BMJ (Clinical Research Ed.) 368 (March). NLM (Medline): m941.
doi:10.1136/bmj.m941.

Ellison, G. (2020). Implications of Heterogeneous SIR Models for Analyses of COVID-19.
    Cambridge, MA. doi:10.3386/w27373.

Evelina, J., Nuugulu, S., Julius, L. H. (2020). Estimating the Economic Impact of COVID-19: A
    Case Study of Namibia, April.

Fornaro, L., Wolf, M. (2020). Covid-19 Coronavirus and Macroeconomic Policy. Economics
    Working Papers. Department of Economics and Business, Universitat Pompeu Fabra.
    https://ideas.repec.org/p/upf/upfgen/1713.html.

García-Alberti, M., Fernando S., Isabel C., Juan C. (2021). Challenges and Experiences of Online
    Evaluation in Courses of Civil Engineering during the Lockdown Learning Due to the
    COVID-19 Pandemic. Education Sciences 11 (2). MDPI AG: 59.
    doi:10.3390/educsci11020059.

Ghebreyesus, T. A. (2020). WHO Director-General's Opening Remarks at the Media
    Briefing on COVID-19 - 25 March 2020.

Guo, Yan Rong, Qing Dong Cao, Zhong Si Hong, Yuan Yang Tan, Shou Deng Chen, Hong
Jun Jin, Kai Sen Tan, De Yun Wang, and Yan Yan. 2020. “The Origin, Transmission and
Clinical Therapies on Coronavirus Disease 2019 (COVID-19) Outbreak - An Update on
the Status.” Military Medical Research. BioMed Central Ltd. doi:10.1186/s40779-020-
00240-0.

Hamins-Puertolas, A., Kurz C. (2020). Tracking Labor Market Developments during the
COVID-19 Pandemic: A Preliminary Assessment. Finance and Economics Discussion
Series 2020 (030). Board of Governors of the Federal Reserve System.
doi:10.17016/feds.2020.030.

Indranil C., Maity. P., (2020). COVID-19 Outbreak: Migration, Effects on Society, Global
Environment and Prevention. Science of the Total Environment 728 (August). Elsevier
B.V.: 138882. doi:10.1016/j.scitotenv.2020.138882.

Ivanov, Dmitry, and Alexandre Dolgui. 2020. “Viability of Intertwined Supply Networks:
Extending the Supply Chain Resilience Angles towards Survivability. A Position Paper
Motivated by COVID-19 Outbreak.” International Journal of Production Research 58
(10). Taylor & Francis: 2904–2915. doi:10.1080/00207543.2020.1750727.

Jaya, S. S. (2020). COVID-19: An Overview of Economic Waves on Indian Economy. Shanlax
    International Journal of Economics 8 (3). Shanlax International Journals: 114–119.
    doi:10.34293/economics.v8i3.3201.

Karakaş M. 2021. “Covid-19 Salgınının Çok Boyutlu Sosyolojisi ve Yeni Normal Meselesi.”
İstanbul Üniversitesi Sosyoloji Dergisi 40 (1): 541–573. Accessed May 3.

doi:10.26650/SJ.2020.40.1.0048.

Karakaya, E. (2021). COVID-19 Krizinin Ekonomi, Enerji ve Emisyonlara Etkileri: Mevcut
    Durum ve Olası Post-Corona Senaryoları. Accessed May 3.
    https://www.iklimhaber.org/covid-19-krizinin-ekonomi-enerji-ve-emisyonlara-etkileri-mevcut-durum-.

Koyuncu, T., Meçik, O. (2020). COVID-19 Pandemics of Sectoral and Cross-Sectoral Effects
    on Economic Growth in Turkey. Ankara.
    http://www.is-be.org/Content_Files/Content/ISBE2020 Proceeding.pdf.
Manderson, L., Susan L., (2020). COVID-19, Risk, Fear, and Fall-Out. Medical Anthropology:
Cross Cultural Studies in Health and Illness. Taylor and Francis Inc.
doi:10.1080/01459740.2020.1746301.

Rahimi, R., (2014). The Effect of Using Different Rock Failure Criteria in Wellbore Stability
Analysis. http://scholarsmine.mst.edu/masters_theses.

Reyad, O., (2020). Novel Coronavirus COVID-19 Strike on Arab Countries and Territories: A
Situation Report I. ArXiv. arXiv.

UNCTAD. (2020). How COVID-19 Is Changing the World: A Statistical Perspective.
    Committee for the Coordination of Statistical Activities, 1–90.
    https://unstats.un.org/unsd/ccsa/documents/covid19-report-ccsa.pdf.

Valtonen, M, Matti W, Tytti V, Erkki E, Antti J. H, Katja M., Wilma G., Olli J. H., Olli R.
(2019). Common Cold in Team Finland during 2018 Winter Olympic Games
(PyeongChang): Epidemiology, Diagnosis Including Molecular Point-of-Care Testing
(POCT) and Treatment. British Journal of Sports Medicine 53 (17). BMJ Publishing
Group: 1093–1098. doi:10.1136/bjsports-2018-100487.

Vasiev, M., Kexin B., Artem D., Vladimir B., (2020). How COVID-19 Pandemics Influences
Chinese Economic Sustainability. Foresight and STI Governance (Foresight-Russia till
No. 3/2015) 14 (2). National Research University Higher School of Economics: 7–22.
https://ideas.repec.org/a/hig/fsight/v14y2020i2p7-22.html.

Wang, C., Riyu P., Xiaoyang W., Yilin T., Linkang X., Cyrus S, Roger C. (2020). Immediate
Psychological Responses and Associated Factors during the Initial Stage of the 2019
Coronavirus Disease (COVID-19) Epidemic among the General Population in China.
International Journal of Environmental Research and Public Health 17 (5). MDPI AG:
1729. doi:10.3390/ijerph17051729.

Wells, P., Wessam A., Stephen P., Anthony B., (2020). A Socio-Technical Transitions
Perspective for Assessing Future Sustainability Following the COVID-19 Pandemic.
Sustainability: Science, Practice, and Policy 16 (1). Taylor & Francis: 29–36.
doi:10.1080/15487733.2020.1763002.

Zhang, Xiaofeng, Huanyun Yu, Fangbai Li, Liping Fang, Chuanping Liu, Weilin Huang,
Yanhong Du, Yemian Peng, and Qian Xu. 2021. “Covid-19 and Return to Work for the

Construction Sector: Lessons from Singapore.” Science of the Total Environment.
Occupational Safety and Health Research Institute, 136204.
doi:10.1016/j.shaw.2021.04.001.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Sustainable thinking, educational opportunities in interior architecture projects

Munteanu Angela1*

Abstract: The current ecological situation in the world has become a catastrophic problem, and
sustainability can provide many opportunities to protect and save the world. The primary
concern of mankind must be the rational use of natural resources in protecting the environment
and for the benefit of future generations, one of the major challenges of our time. Therefore, an
approached thinking of the existentialism of humanity to solve the objectives, can be intervened
through institutional educational projects in various fields, including interior architecture,
offering solutions for recycling and reuse of different materials, and technologies for
transformation into objects of furniture, lighting fixtures, wall finishes, floors, etc.Thus, the
method of sustainability is an example to follow, which offers many opportunities to protect
and save the environment. The architecture of residential and non-residential interior space is
the field of application through sustainable and effective design methods of recycling, reuse,
reorganization, in a comfortable space. Thus, we get more spacious, brighter, healthier interiors
through materials that regain a new utility. Sustainable theoretical and practical research models
reflect results in the design of engineering products, sustainable design, which ensures an
impact on future generations of specialists - architects, engineers, and interior designers, with a
sustainable vision of the environment and the future of humanity.

Keywords: interior architecture, sustainability, design, recycled materials

1. Today's reality
Over the centuries, the Earth has changed beyond recognition. The air has become poisoned
by emissions. Even ecologically clean areas and regions are no longer the same as they were a
few centuries ago. Impurities are now present almost everywhere. Today we are facing multiple
ecological problems: environmental pollution, floods, and fires that devastate everything
around us, global warming which leads to a global disaster. Thus, the destruction of nature as a
result of climate change is the most pressing problem facing humanity. The problem affecting
today's world, demonstrated by scientific research, is that humans are responsible for global
climate change. Through a prompt approach and thinking of rational use of natural resources,
stopping harmful emissions, reducing energy consumption, recycling, and reuse, etc., we can
save the future of generations [1; 6].

1 Technical University of Moldova, Faculty of Urbanism and Architecture, Department of Architecture,
Chisinau, Moldova
* Corresponding author: [email protected]

2. Consciously approached thinking

Therefore change starts with us! Why throw away and pollute nature if it is possible to reduce
the impact, reuse and recycle - the result of sustainability! Sustainability is the ability to exist
and develop without depleting natural resources for the future. And sustainable development is
the impetus that meets the needs of the present without compromising the ability of future
generations to meet their own needs. Earth's resources are finite and should therefore be used
conservatively and carefully to ensure that they are sufficient for future generations without
diminishing the quality of life today. A sustainable society must be socially responsible,
focusing on environmental protection and dynamic balance in human and natural systems.
Sustainability offers many opportunities to protect and save the world. [2].

2.1 Education and sustainable approach

Sustainable education, through existentialist thinking and a healthy approach to a sustainable
and bright future, starts in college. The principles of contemporary existentialism and of the
environment are manifested in the sense that they place the destiny of humanity in its own
hands. The famous observation of the French philosopher René Descartes, "I think, therefore
I am", remains a current issue, focused on a lived experience of thought, senses and actions in
becoming a human being responsible for the environment [3].

Table 1. Eco-design principles

Low impact materials: the use of non-toxic materials, durable or recycled products, which require little energy for processing;
Energy efficiency: the use of manufacturing processes and the production of products that require less energy;
Quality and durability: products that last longer and work better need to be replaced less often, reducing the impact of replacements;
Design for recycling: products, processes and systems should be designed for performance in a "life beyond" the commercial one;
Renewability: materials should come from nearby renewable sources (local or bioregional), sustainably managed, which can be composted when their usefulness has been exhausted.

Sustainable design also includes social considerations: occupational safety and health; utility;
responsible social use; the origin of the materials; design according to human needs. Sustainable
design is the philosophy of designing physical objects, the built environment, and services to
respect the principles of social, economic, and ecological sustainability. Namely, through the
interior architecture, we can intervene with the principles of sustainability, by designing spaces
and using intelligent and economical lighting systems, with many windows to provide natural
lighting during the day, and the use of materials and furniture by reuse.

Eco-innovation is any creative innovation that leads to significant progress towards the goal of
sustainable development. Ecodesign supports the need to incorporate environmental and
sustainability criteria into the basic requirements of product design, such as cost, function,
utility, aesthetics, reliability, safety, etc. (tab. 1) [4; 5].

Thus, together with the student-architects, year V from the Technical University of Moldova,
Faculty of Urbanism and Architecture, Architecture department, within the course unit: Interior

space architecture, we researched and approached the topic of ecodesign, through recycling
and reuse, and through creative thinking in the elaboration of objects for interior architecture.
This event of the Institutional Project was highly publicized by the media (TV, press, radio,
news portals on the Internet, etc.) and presented in several Scientific-Practical Seminars (first
edition 2019, second edition 2021), which each time gather a larger public interested in the
issue of sustainability [8; 9].

Figure 1. Cionanu Jana, Iordan Ana-Lucia, st. gr. ARH-161, UTM, FUA, Architecture
department. Living room table, made of recycled wood

Examples of sustainability are student works, transformable and functional furniture, lighting
objects made from recycled materials: cardboard sheets and tubes, wood material (plywood,
construction wood, old or degraded furniture), envelopes, old objects, metal, etc. Take wood,
for example, used for multiple purposes and shapes: forests are the lungs of the earth, and they
are badly affected today. Wood's most important use worldwide is as fuel. Wood is also used as
a building material in the architecture of wooden houses, in bridges, railway sleepers, furniture,
parquet, and various interior design elements. But mass deforestation is frequent, and its impact
on the environment is imminent: intensified soil erosion; more frequent droughts;
impoverishment of flora and fauna leading to global warming; more frequent landslides [5; 6; 7].

Figure 2. Malanici Maria, st. gr. ARH-162, UTM, FUA, Architecture department. Sustainable
lighting fixture

To avoid such harmful problems, one approach is to recycle and reuse garden crates and
wooden pallets to create a living-room table, made of four elements with storage spaces that
aesthetically complete the space in a harmony of chromatic contrast, a work by the authors,
students of gr. ARH-161, Architecture department, UTM, FUA, Cionanu Jana and Iordan

Ana-Lucia (fig. 1). A lamp that mimics the shape of the human body, made of solid wood, can
be comfortable and can serve as a support for a book or phone; it is the work of a student of
gr. ARH-162, Malanici Maria (fig. 2). Thus, as the authors mention, art is contemplation, the
pleasure of the mind that seeks in nature and describes the spirit with which Nature itself is
animated ...

Figure 3. Iordan Ana-Lucia, st. gr. ARH-161, UTM, FUA, Architecture department. Project
with sustainable furniture

Figure 4. Furniture models, sustainable lighting objects, presented at the Scientific-Practical


Seminar eco-design, developed by architecture students, DA, FUA, UTM

The final results of the project are models of utilitarian and functional objects, aesthetically
defined by their finishes and framed in residential and non-residential spaces developed within the

interdisciplinary project "Urban Metamorphosis of 31 AUGUST 1989, Chisinau"; they are kept
in the EXPO hall of the FUA and the museum of the Department of Architecture (fig. 3, 4) [10].

Student satisfaction culminates in the desire, prompted by the approached theme, to recycle
and reuse unnecessary objects and things rather than discard them in nature. The educational
message of the Project is addressed to the academic environment and to society: recycle and
reuse, with everyone contributing through healthy thinking and behavior to a healthy
environment on our planet Earth!

... What human needs is not only the persistent questioning of the final questions,
but the sense of what is feasible, of what is possible, of what is right, here and now ...
/Hans-Georg Gadamer, Truth and method /

3. Conclusions

In conclusion, we mention that the architecture of the residential and non-residential


interior space is the field through which we can manage and educate sustainability in the
academic and social environment, through our own examples of recycling and reuse.
Following the elaboration of these projects, we notice that there are no non-recyclable
materials. Various materials can be combined to obtain original and interesting objects for
interior architecture. Such an approach can lead to both technological and economic progress.
Through work and creation, we achieve pleasant goals, both in terms of aesthetics and quality.

References

1. Aloone M., Bey, N., (2009). Improving the environment through product development -
guide, Danish EPA, Copenhagen Denmark, ISBN 978-87-7052-950-1, 46 p.
2. Benyus J., (1997). Biomimicry: Innovation Inspired by Nature. New York, USA: William
Morrow & Company. ISBN 978-0-688-16099-9.
3. Descartes R., (1701). Regulae ad directionem ingenii, Amsterdam.
4. Kant I., (2008). Observations on the feeling of beauty and sublime, translated by Rodica
Croitoru, All Publishing House, Bucharest, ISBN 5-322-00020-8, 120 p.
5. Lindahl M., (2003). Designer's use of environmentally friendly and sustainable methods.
Progress of the first international workshop on "Sustainable Consumption", Tokyo, Japan,
Non-Traditional Technology Society (SNTT) and the Life Cycle Assessment Research
Center (AIST).
6. Shedroff N., (2010). Design is the problem: the future of design must be sustainable. Design
Journal: vol 13, nr. 1.
7. What is sustainability - https://www.twi-global.com/locations/romania/ce-facem/intrebari-frecvente-faq/ce-este-sustenabilitatea (visited 09.05.2021).
8. Sustainability - Wikipedia, the free encyclopedia, https://ro.wikipedia.org/wiki/Sustenabilitate (visited 25.06.2021).
9. World Recycling Day, https://utm.md/blog/2021/03/22/fua-utm-a-marcat-ziua-reciclarii-mondiale-prin-seminarul-eco-design/.
10. The interdisciplinary project Metamorphosis str. 31 August, Chisinau, https://utm.md/blog/2021/01/27/studentii-arhitecti-ai-fua-utm-isi-propun-un-proiect-de-metamorfoza-urbana/.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Reducing the Estimation Error of the Measure of Proximity Between Objects in Pattern Recognition

Rahim Mammadov1*, Gurban Mammadov2, Sevinj Aliyeva3

Abstract: When similar or close objects are recognized, the reliability of recognition becomes
very low when the value of the measure of proximity between objects is close to the value of
the resulting error. There are several reasons for this: the presence of a modulus sign in the
existing formulas for the assessment of the measure of proximity between objects; the
correlation coefficient between the measurements not being taken into account (it is assumed
to be absent), although the correlation exists; and gross errors that appear during measurements
not being taken into account. To reduce random errors, it is necessary to carry out repeated
measurements, which reduces the processing speed. This report suggests an algorithm for
solving this problem. In this algorithm, the reference parameters are subjected to a large number
of measurements in the training mode. The parameter of the researched object is measured a
number of times that does not affect the speed of the system. During recognition, each measured
parameter is compared with all measured values of the reference parameter. Thus, the number
of repeated measurements is artificially increased sufficiently. Since the comparison process is
performed on a computer in software, it does not affect the speed of the system. In this case,
errors caused by statistical processing, the correlation coefficient, the use of the modulus sign
in formulas, and gross errors are eliminated, resulting in increased accuracy. This algorithm was
simulated on a computer and positive results were obtained. The distance between objects is
decided by analyzing the range of change of the parameters, not on the basis of the Manhattan
or other formulas. For the individual measurements, three options are proposed, replacing the
current differences with the values of the lower, middle or upper limit of the interval into which
they fall. The processing of the results showed that the proposed algorithm can significantly
increase the accuracy of estimating the measure of proximity between objects in all three
options. In this case, the speed of the system will not be affected.

Keywords: pattern recognition, recognition reliability, measurement errors, random errors, interval analysis, correlation coefficient

1. Introduction

The efficiency of flexible automated production (FAP) and mobile robots (MR) largely
depends on the reliability of the pattern recognition (PR) of the technical vision system
(TVS), which is one of the main components giving them flexibility and adaptability [1-7].
The reliability of pattern recognition is determined by the accuracy of the estimation of the
measure of proximity between objects, the features of which are determined by measurement.
1 Azerbaijan State Oil and Industry University, “Instrumentation engineering” department, Baku, Azerbaijan
2 Azerbaijan State Scientific-Research Institute for Labor Protection and Occupational Safety, Baku, Azerbaijan
3 Azerbaijan Technical University, Baku, Azerbaijan
* Corresponding author: [email protected]

The errors allowed when measuring the values of the features of images, summing up
according to the most complex law, create an error in assessing the measure of proximity
between objects, which in the computer vision system is commensurate with the actual value
of the distance between the features of objects [8-10]. Therefore, these errors, reducing the
values of the reliability of image recognition, are a serious obstacle to the use of technical
vision systems for the widespread introduction of flexible automated production and mobile
robots [11,12].

2. Analysis of the current situation

Many works have been devoted to minimizing systematic errors in estimating the proximity
measure between objects, and recommendations have been given to reduce them [13-15].

However, theoretical and experimental studies of errors in estimating the measure of


proximity between objects show that these methods cannot significantly increase the accuracy
of the final assessment, because random errors associated with hidden influences, commensurate
with the actual value of the measure of proximity between objects, are large enough, and
minimizing these errors is necessary.
that the nature and function of the impact of these destabilizing factors on the occurrence of
errors in estimating the measure of proximity between objects are somewhat different from
our knowledge. For example, the transfer characteristic of the converter, contrary to our
knowledge, may differ slightly from the linear one, and even in different areas these
nonlinearities are different in value and direction. The current technique is not able to detect
these nonlinearities.

To do this, it is necessary to analyze the sources of the occurrence of random errors, and it is
important to eliminate them. In the search for methods to reduce random errors in estimating
the measure of proximity between objects, various destabilizing factors were analyzed as
sources of creating systematic and random errors.

The action of these factors creates additive, multiplicative, and higher-order errors. The latter
arise together with existing destabilizing factors which are either not identified, or whose effect
is difficult to take into account, or whose individual effects are so small that they are not
considered separately. However, these errors add up to create random errors whose value and
polarity are impossible to predict. Experiments have shown that random errors in measuring
the values of the features of the recognized and the reference image are distributed according
to the normal law. On the basis of these errors and the correlation coefficient between them, it
is possible to find the random errors in estimating the measure of proximity between objects,
since the latter are a composition of the laws of distribution of the random errors in measuring
the values of the features of the recognizable and reference images and should also be
distributed according to the normal law. Since in the computer vision system all features are
measured by one measuring device and under the same conditions, when assessing the measure
of proximity between objects the random errors in measuring the values of individual features
should, by subtraction, have significantly reduced the total error in assessing the measure of
proximity between objects [13-16].

However, the presence of the modulus sign in the formulas for assessing the measure of
proximity between objects has a negative impact on the formation of random errors in
estimating the measure of proximity between objects. In this case, since errors with a negative
sign become positive, the distribution of errors in estimating the measure of proximity
between objects becomes truncated and the estimated value shifts to the positive direction.
This disadvantage appears when the recognized and reference images are so close that their
measure of proximity between objects is commensurate with the error in estimating its value.
Since there are a lot of such cases in the practice of using the technical vision system, the
additional error that appears, commensurate with the value of the standard deviation of the
estimate of the measure of proximity between objects, makes a significant negative
contribution to pattern recognition [17,18].

Therefore, when evaluating the measure of proximity between objects, its value is shifted to
the right side by an indefinite amount, which makes the result incorrect and there is an error
associated with the use of the modulus sign in formulas for evaluating the measure of
proximity between objects. The displacement continues until the real value of the measure of
proximity between objects becomes equal to or greater than the minimum of the differences
between the values of individual features of the recognized and reference objects. Such real
distributions of random errors in estimating the measure of proximity between objects are
completely in the positive plane of the Euclidean metric. Therefore, the use of different
methods and techniques to reduce the influence of these destabilizing factors does not have a
significant effect. Thus, the direct reduction of random errors in the estimation of the measure
of proximity between objects is necessary.

The use of traditional methods of statistical processing of measurement results using multiple
measurements of the values of the features of patterns increases the time of pattern
recognition, which is undesirable.

Therefore, the development of algorithms that use a limited number of repeated measurements
of feature values, which does not reduce the capabilities of the computer vision system in terms
of pattern recognition time while significantly reducing the level of random errors in
estimating the measure of proximity between objects, is relevant [17-19].

3. Problem statement

It is known that scientific research is carried out not only on the Earth, but also by sending
mobile robots to uninhabitable space, underwater and other planets. In order for the sent
robots to adapt to the environment and perform the given tasks, it is important to first
recognize the images. The main issue of reliability of image recognition is to find a match
between images taken from real objects and reference images. But in some cases, it is
impossible to obtain reference images and to carry out training, since it is impossible for a
person to go to the listed places. In that case, the recognition operation is done only on the
basis of the real images of the input. The reliability of the image recognition depends on the
correct calculation of the measure of proximity between objects. Certain formulas (for
example, Manhattan, Euclidean, Canberra, etc.) are used to find the measure of proximity
between objects in known recognition and control systems. These formulas are highly

integrated and all of them use the modulus sign. Because these formulas are integral, the
characteristics of the individual values are not taken into account; the object is given one
overall score. The modulus sign also truncates the error distribution and biases it in the positive
direction. Therefore, it is not possible to determine the exact value of the measure of proximity
between objects.

The reliability of pattern recognition depends on the correct calculation of the measure of the
proximity between objects. The more accurately we calculate the measure of proximity
between objects, the more accurate the results will be. In addition, the following shortcomings
lead to incorrect results when these formulas are used (a small numerical sketch after this list
illustrates the first of them):

1. The presence of a modulus sign in the existing formulas for the assessment of the measure
of proximity between objects;
2. The correlation coefficient between the measurements is not taken into account (it is
assumed to be absent), although the correlation exists;
3. To reduce random errors, it is necessary to carry out repeated measurements, which
reduces the processing speed;
4. Gross errors that appear during measurements are not taken into account.

For these reasons, the development of algorithms that use a limited number of repeated
measurements of feature values, which do not reduce the capabilities of the technical vision
system in terms of pattern recognition time while significantly reducing the level of random
errors in the evaluation of the measure of proximity between objects, is relevant.
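
The bias introduced by the modulus sign can be illustrated with a small, hypothetical C# simulation (not the authors' software): two identical feature vectors measured with zero-mean noise still yield a clearly positive Manhattan distance, although the true distance between the objects is zero.

using System;

static class ModulusBiasSketch
{
    static void Main()
    {
        var rnd = new Random(1);
        int n = 1000;
        double z = 0;
        for (int i = 0; i < n; i++)
        {
            double errX = Gauss(rnd), errY = Gauss(rnd);    // measurement errors of x_i and y_i
            z += Math.Abs((5.0 + errX) - (5.0 + errY));     // true feature difference is 0
        }
        Console.WriteLine(z / n);   // about 1.13 (2*sigma/sqrt(pi) for sigma = 1) instead of 0
    }

    // Standard normal sample via the Box-Muller transform (sigma = 1).
    static double Gauss(Random r) =>
        Math.Sqrt(-2.0 * Math.Log(1.0 - r.NextDouble())) * Math.Cos(2.0 * Math.PI * r.NextDouble());
}

Averaging more such terms does not drive the estimate toward zero, which is exactly the shortcoming listed in item 1 above.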

4. Problem solving

Modern development of nanotechnology, information technology and computers allows a
more intelligent approach to estimating the measure of proximity between objects: abandoning
the formulas makes possible the development of new methodologies for analyzing the errors
in measuring the values of each parameter of the known and compared images. In the presented
report, an algorithm is proposed that achieves the solution of this issue. The algorithm is
implemented as follows:

The input parameters xi and reference parameters yi (i = 1, ..., n) are entered into the computer.
The program then calculates the arithmetic mean, the standard deviation, the correlation
coefficient and the final error of the measure of proximity between objects from the values of
the input and reference parameters [3,5].

According to the Manhattan formula, the input and reference parameters are checked for
compatibility with each other [2]:

Z = Σi |xi − yi|                (1)

Because x and y obey the normal distribution, Z will also obey the normal distribution law.
Therefore, it is necessary to find out in what part of this distribution the difference between the
input and reference parameters falls. The difference between the input and reference parameters
is a = x − y. x and y are measured n times, and every value of x is checked against every value
of y. That is:

a11, a12, …, a1n
a21, a22, …, a2n
…
an1, an2, …, ann                (2)
The difference a is checked over the range −3σ to +3σ in steps of 0.5σ ([−3σ, −2.5σ],
[−2.5σ, −2σ], [−2σ, −1.5σ], [−1.5σ, −σ], [−σ, −0.5σ], [−0.5σ, 0], [0, 0.5σ], [0.5σ, σ],
[σ, 1.5σ], [1.5σ, 2σ], [2σ, 2.5σ] and [2.5σ, 3σ]).
If a does not fall in the first interval, the program checks whether it falls in one of the other
intervals. When it falls into an interval, the smallest value of that interval is accepted as the
value of a and added to the running total, so the possible deviations are minimized. In
measurement technique, errors of up to ±3σ are accepted; anything greater is rejected as a gross
error. Therefore, values of a greater than 3σ or smaller than −3σ are not taken into account.
The values of a in the intervals [−3σ, 0] and [0, 3σ] are then summed and the average value is
found by dividing by the number of measurements. The final values found in the usual way by
the Manhattan formula and those found by our algorithm are then compared. We take the
number of preliminary measurements as NK instead of n; NK varies from 1 to n. It is of interest
how the algorithm behaves for different numbers of measurements. In fact, the result should be
"0", because the X and Y arrays are deliberately taken to be identical in advance; they differ
from zero only because of measurement errors collected by the modulus sign. According to
mathematical statistics, the greater the number of repeated measurements, the smaller the
random errors. However, the presence of the modulus sign in the formula greatly weakens this
rule, so the effectiveness of repeated measurements is lost. Therefore, using interval analysis
with a relatively small number of repeated measurements neither slows down the recognition
system nor sacrifices the required accuracy, which is reflected in this program. In other words,
the values of zm (Manhattan) and zk (suggested) in the algorithm are close to the corresponding
values obtained at the maximum value of n.
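
A minimal sketch of the interval-based estimate described above is given below (hypothetical C# code, assuming the reference parameter y has been measured n times in training mode, the input parameter x NK times, and sigma is the known standard deviation of the measurement errors):

using System;

static class IntervalSketch
{
    // Compares every input measurement with every reference measurement, discards
    // differences larger than 3*sigma as gross errors, replaces each remaining
    // difference by a bound of the 0.5*sigma interval it falls into, and averages.
    public static double Estimate(double[] x, double[] y, double sigma,
                                  Func<double, double, double> pick)
    {
        double sum = 0; int count = 0;
        foreach (double xi in x)
            foreach (double yj in y)
            {
                double a = xi - yj;
                if (Math.Abs(a) > 3 * sigma) continue;          // gross error, rejected
                double step  = 0.5 * sigma;
                double lower = Math.Floor(a / step) * step;     // lower bound of the interval
                double upper = lower + step;                    // upper bound of the interval
                sum += pick(lower, upper);                      // min, mean or max variant
                count++;
            }
        return count > 0 ? sum / count : 0;
    }
}

// Usage of the three variants discussed below:
//   zkMin  = IntervalSketch.Estimate(x, y, sigma, (lo, hi) => lo);
//   zkMean = IntervalSketch.Estimate(x, y, sigma, (lo, hi) => (lo + hi) / 2);
//   zkMax  = IntervalSketch.Estimate(x, y, sigma, (lo, hi) => hi);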

The proposed algorithm for estimating the measure of proximity between objects was
mathematically modeled in software on a computer. By replacing the value of the current
parameter a with the minimum (Table 1 and Figure 1), average (Table 2 and Figure 2) and
maximum (Table 3 and Figure 3) values of the given interval, the program calculates the zk
(suggested) values. Then the errors of the calculated values of zm (Manhattan) and zk
(suggested) are calculated by repeating the measurements NK times.

First of all, when analyzing Table 1 and Figure 1, obtained at the minimum value of the current
parameter a, it appears that with NK repeated measurements the random measurement error of
the proposed algorithm is smaller, and this variant can be considered the best among the options.

When taking the average value of the current parameter a in the given interval, the results are
also very good and the accuracy increases significantly as NK repeated measurements are
performed.

The results obtained in the computer modeling are good when the current parameter a takes
the maximum value of the interval as well, and they are much better than those of the classical
approach.

Thus, as can be seen from the tables and figures, the result of the proposed algorithm is much
better than the results obtained by the classical method; when the proposed algorithm is used,
the accuracy of the technical vision system increases significantly while the processing speed
remains at the required level.

Table 1. The results obtained when the number of repeated measurements of the reference
and input parameters is the same

NK    MZNK    ZM    ZK(min)    ZK(max)    ZK(mean)

1 9 9 8,74 13,11 10,92

2 4,5 4,5 6,55 10,92 7,1

3 ‐0,33 6,33 4,85 6,74 5,07

4 ‐2 6,5 4,37 5,94 4,61

5 ‐1,2 5,6 4,19 6,04 4,5

6 ‐4,33 8 5,58 6,87 6,23

7 ‐2 8,57 4,91 6,06 5,26

8 ‐1,5 7,75 4,58 5,9 4,96

9 ‐0,66 7,55 4,8 5,88 5,2

10 ‐0,1 7,3 4,42 6,21 4,87

11 ‐0,09 6,81 4,88 6,14 5,42

12 ‐0,33 6,67 4,67 6,29 5,23

13 ‐1,07 6,92 4,66 6,02 5,25

14 ‐0,93 6,5 4,3 6,11 5,05

15 ‐1,73 6,93 4,99 6,03 5,77

16 ‐1,125 7 4,63 5,68 5,42

17 ‐0,411 7,23 4,17 5,34 4,81

18 0 7,22 4,52 5,43 5,27

[Figure 1: line chart of MZNK, ZM, ZK(min), ZK(max) and ZK(mean) plotted against the number of repeated measurements NK]

Figure 1. The results obtained when the number of repeated measurements of the reference
and input parameters is the same

Table 2.Results obtained when the number of repeated measurements is different under time
constraints
NK    zm (Manhattan)    zk_max    zk_mean    zk_min
1 9 8,01 5,820 3,640
2 4,5 7,28 5,340 3,400
3 6,33 7,77 5,780 3,800
4 6,5 7,65 5,640 3,640
5 5,6 7,57 5,560 3,540
6 8 7,36 5,480 3,600
7 8,57 7,25 5,390 3,530
8 7,75 7,19 5,290 3,400
9 7,55 7,12 5,240 3,370
10 7,3 7,09 5,220 3,350
11 6,81 7,19 5,320 3,440
12 6,66 7,2 5,310 3,420
13 6,92 7,32 5,410 3,510
14 6,5 7,32 5,400 3,480
15 6,93 7,41 5,480 3,560
16 7 7,36 5,440 3,530
17 7,23 7,34 5,430 3,520
18 7,22 7,31 5,400 3,490

[Figure 2: line chart of zm (Manhattan), zk_max, zk_mean and zk_min plotted against the number of repeated measurements NK]

Figure 2. Results obtained when the number of repeated measurements is different under time
constraints

5. Conclusion

It should be noted that since all these shortcomings have been eliminated, it is more
expedient to use this methodology in recognition and control systems instead of the existing
formulas. Thus, along with the elimination of shortcomings, there are a number of advantages.
In this case, errors caused by statistical processing, the correlation coefficient, the use of the
modular sign in formulas, and gross error are eliminated, resulting in increased accuracy. This
algorithm was simulated on a computer and positive results were obtained. The distance between
objects is determined not by a formula, but by analyzing the range of changes in parameters.
As can be seen from the mathematical analysis of the given graphs, the best option is to take
the minimum of a, and the random error is reduced by 15-30%. The processing of the results
showed that the proposed algorithm can significantly increase the accuracy of estimating the
measure of proximity between objects. In this case, the speed of the system will not be
affected.

References

1. Bastian Hartmann, Christoph Schauer and Norbert Link, “Worker Behavior Interpretation
for Flexible Production,” Engineering And Technology International Journal Of Industrial
And Manufacturing Engineering, 2009, pp. 1224-1232.
2. Tushar Jain and Meenu, “Automation and Integration of Industries through Computer
Vision Systems,” International Journal of Information and Computation Technology,
2013, pp. 963-970.
3. Keith Jacksona, Konstantinos Efthymioua, John Borton, “Digital manufacturing and
flexible assembly technologies for reconfigurable,” Changeable, Agile, Reconfigurable &
Virtual Production Conference, 2016, pp. 274-279.
4. F. Leighton, R. Osorio, G. Lefranc, “Modelling, Implementation and Application of a
Flexible Manufacturing Cell,” International Journal of Computers, Communications &
Control, 2011, pp. 278-285.
5. Phansak Nerakae, Pichitra Uangpairoj, Kontorn Chamniprasart, “Using machine vision for
flexible automatic assembly system,” International Conference on Knowledge Based and
Intelligent Information and Engineering Systems, 2016, pp. 428-435.

6. Petar Marić, “Computer Vision Systems For The Enhancement Of Industrial Robots
Flexibility,” Facta Universitatis, Ser. Mechanics, Automatic Control and Robotics, 2011,
pp. 1-18.
7. Herakovic N., “Robot Vision in Industrial Assembly and Quality Control Processes,”
Robot Vision / Edited by Ales Ude, 2010, pp. 501-534.
8. Mammadov R.K., Mutallimova A.S., Aliyev T.Ch., “Ispol'zovaniye momentov inertsii
izobrazheniya dlya invariantnogo k affinnym preobrazovaniyam raspoznavaniya,”
Vostochno-Yevropeyskiy zhurnal peredovykh tekhnologiy, 2012, pp. 4-7.
9. Caldwell D.G., “Robotics and automation in the food industry. Current and future
technologies,” Woodhead Publishing Limited, 2013, 523.
10. Siciliano B., Khatib. O., “Springer Handbook of Robotics,” Springer-Verlag Berlin
Heidelberg, 2008, 1628 p.
11. Vimal Sudhakar Bodke, Omkar S Vaidya, “Object Recognition in a Cluttered Scene using
Point Feature Matching,” International Journal for Research in Applied Science &
Engineering Technology, 2017, pp. 286-290.
12. Toshiaki Ejima, Shuichi Enokida, Toshiyuki Kouno, “3D Object Recognition based on the
Reference Point Ensemble,” International Conference on Computer Vision Theory and
Applications, 2014, pp. 261-269.
13. Mammadov R.K., Aliyev T.Ch., “Kontrol' polozheniya 3D-ob"yektov v gibkikh
avtomatizirovannykh sistemakh. Povysheniye dostovernosti raspoznavaniya”. LAP
Lambert academic publishing, 2014, 90 p.
14. Mammadov R.K., Imanova U.G. Improving the reliability of decision-making in pattern
recognition, Elektronnoe Modelirovanie (Electronic Modeling), Kiev, 2014, vol. 36, No. 5,
pp. 115-121.
15. Skachkov V.V., Chepkiy V.V., et al. Minimization of the dominant error in problems of
measuring the information parameters of a "noisy" signal sample, Information and
Telecommunication Sciences, 2016, Volume 7, Number 2, Odessa, Ukraine, pp. 62-69.
16. Volodin I.N. Lectures on probability theory and mathematical statistics. Kazan: (Publisher),
2006, 271 p.
17. Xiang Bai, Xingwei Yang, Longin Jan Latecki, “Detection and recognition of contour
parts based on shape similarity,” Pattern Recognition, 2008, pp. 2189-2199.
18. Konrad Schindler, David Suter, “Object Detection by Global Contour Shape,” Pattern
Recognition, 2008, pp. 3736-3748.
19. Mohammad Arafah, Qusay Abu Moghli, “Efficient Image Recognition Technique Using
Invariant Moments and Principle Component Analysis,” Journal of Data Analysis and
Information Processing, 2017, pp. 1-10.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Hide Data In 24-Bit And 8-Bit Bmp and Tiff Files, Reading
Confidential Data and Comparing with Image Quality Criteria
According to Steganography Principles

Remzi Gürfidan1*, Ziya DİRLİK2

Abstract: Information hiding techniques are a subject area that all societies, from the past to
the present, have worked on and cared about. Steganography means "secret writing" in ancient
Greek and is the name given to the science of concealing information. Many different data
hiding methods have been found and used throughout history and today. The biggest advantage
of steganography over encryption is that a person who sees the information cannot understand
that what they see is important information; when people look at an object that appears natural
but has something hidden inside, they do not look for information in it. Moreover, an encrypted
message attracts attention because of its mystery, even if it is difficult to solve. The carrier
containing the confidential data is called a stego object. Although encryption methods are
preferred among information protection methods, steganography has the significant advantage
of performing this function without attracting attention. In this study, data of different sizes
are hidden in Bmp and Tiff format images. Afterwards, the stego image and the original image
were compared using different image quality criteria.

Keywords: Steganography, Image Quality, Data Hiding, Confidential Data

1. Introduction

The confidentiality of information has been of great importance in the communications of


persons in important positions of all kingdoms and states from the past to the present. For this
reason, many different data hiding methods have been found and used from the earliest times
to the present day. Steganography means " hidden writing "in ancient Greek and is the name
given to the science of concealing information. The biggest advantage of steganography over
encryption is that a person who sees the information cannot realize that what they see is important
information; when they look at an object with something hidden in it, they do not look for
information in it. However, an encrypted message attracts attention because of its mystery,
even if it is difficult to solve (Narayana and Prasad, 2010, Seth and Ramanathan et al., 2010,
Usha and Kumar et al., 2011). The carrier containing confidential data is called stego (Usha
and Kumar et al., 2011,Koçak, 2015). Although encryption methods are preferred in
information protection methods, steganography has a significant advantage due to its lack of
attention (Gençoğlu, 2021).

1 Isparta University of Applied Sciences, Yalvac TBMYO VHS, Isparta, Turkey
* Corresponding author: [email protected]

If it is necessary to explain the purposes and methods of the science of steganography, it is
useful to look at the best-known examples from history to the present day. Covering the
inscriptions engraved on wooden tablets with wax can be considered the first example of
steganography in history. In another case, an agent's scalp was shaved and tattooed with a
secret message; after the hair grew back, the tattoo was hidden, and the agent carried the
message to the desired place without attracting attention. Another example of the use of
steganography is a tortured prisoner opening and closing his eyes in accordance with Morse
code. During World War II, a Japanese agent reported American movements by hiding them
among letters containing doll orders, one of the most striking examples of steganography in
history. Again during World War II, messages that seemed very ordinary in radio news
conveyed that enemy forces would bomb a city the next day. During World War I, the German
Embassy in Washington, D.C. sent two messages via telegram to its headquarters in Berlin.
Read directly, the messages do not give a meaningful idea of what the Germans were doing;
reading the first letter of each word in the first message, or the second letter of each word in
the second message, reveals the confidential information in the message
(https://e-bergi.com/y/veri-gizleme-bilimi/ Date: 17.07.2021, Johnson and Sushil, 1998).

Many different techniques have been used to hide data. In the literature, methods such as hiding
information within certain rules, hiding information through psychomotor movements in
mutual communication, hiding data in images, and hiding data in audio files have been used.
In this study, the method of hiding data in images, one of the communication methods
performed using a technological infrastructure, was studied experimentally. In the details of
the study, an attempt was made to hide data in image files with bitmap and tiff extensions.
The data hidden in the image is then decoded, and the stego image is compared with the raw
image in terms of visual quality. This benchmarking was carried out using six different quality
criteria.

Zhang and Luo's study proposed a most significant bit (MSB) replacement-based high-
capacity reversible data hiding method that can embed hidden messages into color images.
In their study, the multiple-MSB replacement-based technique performed better both in terms
of data embedding rate and the PSNR values of the restored images. Furthermore, it does not
involve very complex calculations, and therefore it performed very quickly in terms of
computational complexity (Zhang and Luo, 2020). Kumar and colleagues conducted a study
on grayscale images similar to this one (Zhang and Luo, 2020). They proposed a highly
imperceptible information hiding technique that uses the characteristics of the MSBs of cover
image (CI) pixels. The CI is initially grouped into two different segments. Later, the segments,
along with the MSBs of each pixel, are used to embed the hidden bits into the LSBs of each
pixel. The results of the study were compared with some high-quality existing techniques.
A comparison of various metrics showed that the proposed study outperforms other related
studies (Kumar, Kumar et al., 2021).

2. Proposed Method

The first step is to choose the image in which the information will be stored, by the
application user. 8-bit or 24-bit pixel format is determined by the application according to the
quality of the selected picture. After this process, the size of the data that can be stored in the
picture, the size of the header and the size of the picture in terms of bits, bytes are calculated.
Afterwards, the message to be kept in the picture is entered by the user. The size of the
entered message in bytes is displayed to the user. When the storage process is completed, the
image with data embedded in it is presented to the user. The user can save this picture via the
application to send it to someone else. The uploaded picture and the stored message regarding

the picture are shown in the left and middle part of Figure 1. The mentioned information
hiding process is explained in detail in sections 2.1 and 2.2.

The same application also has a section that reveals the message of a picture in which data is
hidden. The image with the hidden data is uploaded to the application by the user. The pixel
format of the uploaded image, 8-bit or 24-bit, is then determined. With the initiation of
the read process, the low bits of the picture are read, and the hidden message is revealed. Said
reading process can be seen in the middle and right part of Figure 1. How the secret message
on the picture is revealed is explained in detail in 2.3.

Figure 1. Fully functional screenshot of the application


Quality criteria based on different mathematical models were selected to evaluate the image
in which the information is stored. The calculated values are shown at the
bottom in Figure 1. How the values of the quality criteria are calculated is explained in detail
in 2.4.
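
As an illustration of how such criteria compare the original and stego images, the following hedged C# sketch computes two commonly used metrics, MSE and PSNR; it is not the application's own code, and the full set of six criteria is described in section 2.4.

using System;
using System.Drawing;

static class QualitySketch
{
    // Mean squared error over the RGB channels and the corresponding PSNR in dB.
    public static (double mse, double psnr) MseAndPsnr(Bitmap original, Bitmap stego)
    {
        double sum = 0;
        for (int y = 0; y < original.Height; y++)
            for (int x = 0; x < original.Width; x++)
            {
                Color a = original.GetPixel(x, y);
                Color b = stego.GetPixel(x, y);
                sum += Math.Pow(a.R - b.R, 2) + Math.Pow(a.G - b.G, 2) + Math.Pow(a.B - b.B, 2);
            }
        double mse  = sum / (3.0 * original.Width * original.Height);
        double psnr = 10.0 * Math.Log10(255.0 * 255.0 / mse);   // higher PSNR: stego closer to original
        return (mse, psnr);
    }
}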

2.1. Image upload process


In the image loading process, three global variables named bitBoyut, byteBoyut and
baslikBoyut are used. The bitBoyut variable holds the total number of bits of data that can be
hidden in the image. The byteBoyut variable holds the total number of bytes of data that can
be hidden in the image. The baslikBoyut variable holds the number of header bits used to store
the length (in bits) of the data to be hidden, which depends on the size of the image.

A bmp or tiff image file is selected from the dialog box and loaded into pictureBox1.
radioButton1 or radioButton2 is set to the selected state depending on whether the loaded image
is 24-bit or 8-bit. According to whether the uploaded image is 24-bit or 8-bit, the contents of
the bitBoyut, byteBoyut and baslikBoyut variables are calculated and shown in the corresponding
label places on the form. bitBoyut is the product of the image's width and height. byteBoyut is
bitBoyut divided by 8. baslikBoyut is the number of bits needed to represent the bitBoyut value.
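
A short C# sketch of this bookkeeping, reusing the paper's variable names, is shown below; the header-size rule used here (the number of bits needed to represent the largest possible payload length) is an assumption.

using System;
using System.Drawing;

static class CapacitySketch
{
    public static (int bitBoyut, int byteBoyut, int baslikBoyut) Compute(Bitmap image)
    {
        int bitBoyut    = image.Width * image.Height;                     // one hidden bit per pixel
        int byteBoyut   = bitBoyut / 8;                                   // payload capacity in bytes
        int baslikBoyut = (int)Math.Ceiling(Math.Log(bitBoyut + 1, 2));   // header bits for the length field
        return (bitBoyut, byteBoyut, baslikBoyut);
    }
}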

2.2. Data hiding process
The data to be hidden, entered by the user, is compared with the maximum number of bytes
stored in the byteBoyut variable, ensuring that the amount of data to be hidden does not
exceed the image capacity. Data hiding is then started according to whether the loaded
image is 24-bit or 8-bit.

Algorithm 1: Data hiding algorithm.


Input: Message
Output: Picture with data hidden inside
1 Calculate char lenght x 8
2 mesajBoyut ! = baslikBoyut ? Adding “0” : ConvertStringArray
3 for (i=0; i<image.height;i++)
4 for (k=0; k<image.width; k++)
5 GetBlueValues();
6 WriteLSB ();
7 return new_image;

2.2.1 Data hiding process for 24 bits


The veriGizle24Bit method takes the data to be hidden as a parameter. First, the number of characters of the text stored in the message variable is multiplied by 8, which gives the number of bits in the message, and this number is converted to binary. This information is stored in the mesajBoyut (message size) variable. Then, by padding the mesajBoyut value with leading zeros as needed, it is made to contain exactly as many bits as specified by the baslikBoyut variable. The message itself is then converted to a string of bits consisting of 1s and 0s; this value is kept in the binaryMesaj variable. Each pixel of the image is then visited with two nested loops, and only the B value is taken from the RGB values of the pixel. The bits of the mesajBoyut variable, which indicate how many bits the message consists of, are written into the least significant bit (LSB) of the B value of the first pixels, using as many pixels as the number of bits given by baslikBoyut. In the pixels that follow, the bits contained in the binaryMesaj string are written from left to right into the lowest bit of the B value of each pixel, hiding the message.
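A simplified C# sketch of the 24-bit hiding step described above is shown below. It assumes a System.Drawing Bitmap and 8-bit characters; the method name follows the text, while the helper details are illustrative rather than the exact original code.

```csharp
using System;
using System.Drawing;
using System.Text;

public static class Steganography24
{
    // Hides 'message' in the LSB of the blue channel: first the header
    // (message length in bits, padded to baslikBoyut bits), then the message bits.
    public static Bitmap VeriGizle24Bit(Bitmap image, string message, int baslikBoyut)
    {
        // Header: number of message bits, in binary, padded with leading zeros.
        string mesajBoyut = Convert.ToString(message.Length * 8, 2).PadLeft(baslikBoyut, '0');

        // Message converted to a string of '0'/'1' characters (8 bits per character).
        var binaryMesaj = new StringBuilder();
        foreach (char c in message)
            binaryMesaj.Append(Convert.ToString(c, 2).PadLeft(8, '0'));

        string bits = mesajBoyut + binaryMesaj.ToString();   // header followed by payload
        int index = 0;

        for (int i = 0; i < image.Height && index < bits.Length; i++)
        {
            for (int k = 0; k < image.Width && index < bits.Length; k++)
            {
                Color p = image.GetPixel(k, i);
                // Clear the LSB of the blue value and write the current bit.
                int b = (p.B & 0xFE) | (bits[index] - '0');
                image.SetPixel(k, i, Color.FromArgb(p.A, p.R, p.G, b));
                index++;
            }
        }
        return image;
    }
}
```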
2.2.2 Data hiding process for 8 bits
The veriGizle8Bit method takes the data to be hidden as a parameter. As in the 24-bit case, the number of characters of the text stored in the message variable is multiplied by 8 to obtain the number of bits in the message, and this number is converted to binary and stored in the mesajBoyut variable, padded with leading zeros so that it contains exactly baslikBoyut bits. The message is then converted to a string of 1s and 0s and kept in the binaryMesaj variable. Each pixel of the image is then visited with two nested loops. The bits of the mesajBoyut variable, which indicate how many bits the message consists of, are written into the least significant bit (LSB) of the first pixels, using as many pixels as the number of bits given by baslikBoyut. In the pixels that follow, the bits contained in the binaryMesaj string are written from left to right into the lowest bit of each pixel value, hiding the message.

2.3. Data reading process
If the image to be decoded is 24 bits, the veriOku24Bit method is executed; if it is 8 bits, the veriOku8Bit method is executed. Both methods take the carrier image as a parameter.

Algorithm 2: Data reading algorithm


Input: Image
Output: Hidden_message
1 for (i=0; i<image.height;i++)
2 for (k=0; k<image.width; k++)
3 CheckBlueValues();
4 Hidden_message += FirstBit(baslikBoyut)
5 for (i=0; i< Hidden_message_Length; i++)
6 for (k=0; k< Hidden_message_Length; k++)
7 Hidden_message += ReadLSB ();
8 for (j=0; j< Hidden_message_Length; j++)
9 Hidden_message += (Convert_Int(Hidden_message / 8)).ToChar()
10 return Hidden_message;

2.3.1 Data reading process for 24 bits


First, with two nested loops, the pixels of the image are scanned in order from the beginning, and the B value of each pixel is examined. The lowest bit of the B value of consecutive pixels is appended to a string, for as many pixels as the number of bits held in the baslikBoyut variable. This sequence of bits obtained from the first pixels is converted to a number, which reveals how many bits of information are hidden in the image. The image is then scanned again with two nested loops from the first pixel; the pixels that come after those containing the header information are taken in order, and the lowest bit of their B values is appended to the string variable. In this loop structure, the bits of the hidden message are collected in a string. This sequence of bits is then divided into groups of 8. Each group of 8 bits is first converted to a numeric value of byte type, then to a character of char type. The hidden text is obtained by combining the characters. The function completes the operation by returning the text it obtained.
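A corresponding C# sketch of the 24-bit reading procedure is given below, again under the assumption that one bit is stored per pixel in the LSB of the blue channel; it is a simplified illustration, not the original code.

```csharp
using System;
using System.Drawing;
using System.Text;

public static class SteganographyReader24
{
    // Recovers the hidden text: the first baslikBoyut LSBs give the number of
    // message bits, the following LSBs are the message itself.
    public static string VeriOku24Bit(Bitmap image, int baslikBoyut)
    {
        var lsbBits = new StringBuilder();

        // Collect the LSB of the blue value of every pixel, row by row.
        for (int i = 0; i < image.Height; i++)
            for (int k = 0; k < image.Width; k++)
                lsbBits.Append(image.GetPixel(k, i).B & 1);

        string all = lsbBits.ToString();

        // Header: length of the hidden message in bits.
        int messageBits = Convert.ToInt32(all.Substring(0, baslikBoyut), 2);

        // Message bits follow the header; convert each group of 8 to a char.
        string payload = all.Substring(baslikBoyut, messageBits);
        var text = new StringBuilder();
        for (int j = 0; j < payload.Length; j += 8)
            text.Append((char)Convert.ToByte(payload.Substring(j, 8), 2));

        return text.ToString();
    }
}
```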
2.3.2 Data reading process for 8 bits
First, as in the 24-bit case, the pixels of the image are scanned in order with two nested loops and the value of each pixel is examined. The lowest bit of the value of consecutive pixels is appended to a string, for as many pixels as the number of bits held in the baslikBoyut variable. This sequence of bits obtained from the first pixels is converted to a number, which reveals how many bits of information are hidden in the image. The image is then scanned again with two nested loops from the first pixel; the pixels that come after those containing the header information are taken in order, and their lowest bit is appended to the string variable. In this loop structure, the bits of the hidden message are collected in a string. This sequence of bits is then divided into groups of 8. Each group of 8 bits is first converted to a numeric value of byte type and then to a character of char type. The hidden text is obtained by combining the characters and is returned by the function.

2.4. Picture quality measurements and value calculations
Quality assessment algorithms are needed for optimization operations where quality is
maximized at a certain cost, to perform comparative analysis between different alternatives,
or to perform quality monitoring in real-time applications (Silpa and Mastani, 2012).
In the calculation operations, cover and carrier images are compared according to image
quality criteria and values are calculated according to six different criteria.
2.4.1. Mean square error calculation
Image quality assessment is important in various image processing operations. Experimental results show that MSE and PSNR are simple, easy to implement, and have low computational complexity. However, these methods do not always give good results: MSE and PSNR are acceptable for image similarity measurement only when the images differ by increasing distortion of a particular type, and they cannot reliably capture perceived image quality across different types of distortion (Silpa and Mastani, 2012).
The mathematical model shown in Equation 1 was used to calculate the mean square error
value.

MSE = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[x(i,j)-y(i,j)\right]^{2}        (Equation 1)
2.4.2. Peak Signal Noise Ratio calculation
Peak Signal Noise Ratio (PSNR) is a metric that shows the ratio of a signal's maximum possible power to the power of the noise on the signal. The signal represents the original data, while the noise represents the compression-induced error. When comparing compression encodings, PSNR can be considered an approximation of human quality perception. The
mathematical model shown in Equation 2 was used to calculate the Peak Signal Noise Ratio
value.
PSNR = 10\log_{10}\!\left(\frac{MAX^{2}}{MSE}\right)        (Equation 2)

where MSE is the mean square error and MAX is the maximum possible pixel value of the image.
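As a worked illustration of Equations 1 and 2, the following C# sketch computes MSE and PSNR between a cover image x and a stego image y of the same size, using the average of the RGB channels as the pixel intensity. It is a simple reference implementation under these assumptions, not the code used in the study.

```csharp
using System;
using System.Drawing;

public static class QualityMetrics
{
    // Mean square error between two images of equal size (Equation 1),
    // computed on the average RGB intensity of each pixel.
    public static double Mse(Bitmap x, Bitmap y)
    {
        double sum = 0;
        for (int i = 0; i < x.Height; i++)
        {
            for (int j = 0; j < x.Width; j++)
            {
                Color cx = x.GetPixel(j, i);
                Color cy = y.GetPixel(j, i);
                double a = (cx.R + cx.G + cx.B) / 3.0;
                double b = (cy.R + cy.G + cy.B) / 3.0;
                sum += (a - b) * (a - b);
            }
        }
        return sum / (x.Width * x.Height);
    }

    // Peak signal-to-noise ratio in dB (Equation 2), with MAX = 255.
    public static double Psnr(Bitmap x, Bitmap y)
    {
        double mse = Mse(x, y);
        return 10.0 * Math.Log10(255.0 * 255.0 / mse);
    }
}
```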
2.4.3. Average difference calculation
The mathematical model shown in Equation 3 was used to calculate the average difference
value.
AD = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[x(i,j)-y(i,j)\right]        (Equation 3)
2.4.4. Structural Content calculation
Structural content is a type of metric that measures the similarity between the reference and test images. The mathematical model shown in Equation 4 was used to calculate the structural content value.

SC = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N} x(i,j)^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{N} y(i,j)^{2}}        (Equation 4)
2.4.5. Normalized Cross-Correlation calculation
The normalized cross-correlation (NCC) between the reference and test images is calculated as a normalized sum of their cross-correlation. The NCC metric is preferred because some other metrics are affected by changes in illumination (Mohamed and Adulla et al., 2018, Dankers and Barnes et al., 2007, Zhang and Tay et al., 2011).
NCC = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N} x(i,j)\,y(i,j)}{\sum_{i=1}^{M}\sum_{j=1}^{N} x(i,j)^{2}}        (Equation 5)
2.4.6. Normalized Absolute Error
This technique measures the difference between the processed image and the original image; it is the normalized absolute difference between the restored and the original image. The result of this method falls into the interval between 0 and 1 (Gustafson and Yu, 2012). Results near zero mean that the image has high similarity to the original one, while results near one indicate that the image has very poor quality. The mathematical model shown in Equation 6 was used to calculate the normalized absolute error value.
NAE = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N} \left|x(i,j)-y(i,j)\right|}{\sum_{i=1}^{M}\sum_{j=1}^{N} \left|x(i,j)\right|}        (Equation 6)

3. Results

Quality metric values were measured by hiding the same information inside bmp and tiff images. The resulting values are shown in Table 1.

Table 1. Quality metric values of the image file for different hidden data sizes

Image Format | Pixel Size | Hidden Data Size (Byte) | MSE    | PSNR    | AD     | SC     | NCC    | NAE
BMP / TIFF   | 500x500    | 30                      | 7,5277 | 38,7771 | 0,0872 | 0,9915 | 1,0096 | 8,3784
BMP / TIFF   | 500x500    | 15                      | 4,7379 | 40,7879 | 0,0509 | 1,0022 | 0,9911 | 4,8921
BMP / TIFF   | 500x500    | 10                      | 3,1824 | 42,5161 | 0,0352 | 0,9975 | 0,9973 | 3,3871
BMP / TIFF   | 500x500    | 5                       | 1,8557 | 44,8585 | 0,0207 | 1,0074 | 0,9955 | 1,9896

MSE, NAE, and PSNR values changed significantly depending on the hidden data sizes.
Although SC, NCC, and AD values vary depending on the hidden data size, the change
amounts remain negligible.

4. Discussion and Conclusions

In this study, a data hiding operation was performed on images in tiff and bmp formats. The error metrics of these two formats were measured depending on the size of the data hidden in the image. The obtained findings are shown in detail in the previous section. According to these findings, the error metric values calculated in both formats decrease as the amount of hidden data decreases. When the rate of change of the error metrics is examined with respect to the size of the hidden data, the MSE metric was found to change more than the other metrics. In the next step of the study, it is planned to perform error metric measurements for data hiding operations in different image formats and at different resolutions.

References

Dankers, A., Barnes, N., & Zelinsky, A. (2007). MAP ZDF segmentation and tracking using active stereo vision: Hand tracking case study. Computer Vision and Image Understanding, 108(1-2), 74-86.

Dolay, B. https://e-bergi.com/y/veri-gizleme-bilimi/ , Accessed: 29.07.2021

Gençoğlu, M. T. (2021). Enhancing the data security by using audio steganography with Taylor series cryptosystem. Turkish Journal of Science and Technology, 16(1), 47-64.

Gustafson Jr, W. I., & Yu, S. (2012). Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor. Atmospheric Science Letters, 13(4), 262-267.

Johnson, N. F., & Jajodia, S. (1998). Exploring steganography: Seeing the unseen. Computer, 31(2), 26-34.

Koçak, C. (2015). Kriptografi ve stenografi yöntemlerini birlikte kullanarak yüksek güvenlikli veri gizleme. Erciyes Üniversitesi Fen Bilimleri Enstitüsü Fen Bilimleri Dergisi, 31(2), 115-123.

Kumar, K. S., Kumar, C. M., Kumar, B. S., & Cristin, R. (2021). Highly imperceptible data hiding technique using MSB in the grayscale image. Materials Today: Proceedings.

Mohamed, A., et al. (2018). Depth estimation based on pyramid normalized cross-correlation algorithm for vergence control. IEEE Access, 6, 65199-65211.

Zhang, X., & Tay, L. P. (2011). A spatial variant approach for vergence control in complex scenes. Image and Vision Computing, 29(1), 64-77.

Narayana, S., & Prasad, G. (2010). Two new approaches for secured image steganography using cryptographic techniques and type conversions. Signal & Image Processing: An International Journal (SIPIJ), 1(2), 60-73.

Seth, D., Ramanathan, L., & Pandey, A. (2010). Security enhancement: Combining cryptography and steganography. International Journal of Computer Applications, 9(11), 3-6.

Silpa, K., & Mastani, S. A. (2012). Comparison of image quality metrics. Int. J. Eng. Res. Technol., 1(4), 1-5.

Usha, S., Kumar, G. S., & Boopathybagan, K. (2011, December). A secure triple level encryption method using cryptography and steganography. In Proceedings of 2011 International Conference on Computer Science and Network Technology (Vol. 2, pp. 1017-1020). IEEE.

Zhang, Y., & Luo, W. (2020). A multi-MSB replacement based approach for high capacity data hiding in color images.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Detection of Hail Damage in Fruits Using Image Processing Techniques with Kinect Sensor

Enes AÇIKGÖZOĞLU1*, Remzi GÜRFİDAN

Abstract: A study was carried out on the determination of hail damage in fruits, which have an important share in agricultural production. Images of the lateral surfaces of six apples randomly taken from an apple tree exposed to hail damage were obtained, and the damage rates were estimated by applying various image processing techniques to these images. In order to capture the images, a damage detection table was designed, and the images were processed in the Matlab environment. It is planned to develop methods to differentiate disease damage from hail damage in future studies.

Keywords: Image processing, fruit damage detection, hail damage

1. Introduction
All living things must be fed in order to survive. People started to meet their nutritional needs
by hunting in ancient times, and over time they focused on agriculture and animal husbandry.
Agriculture, which has developed with its economic dimension in addition to the field of basic
needs, has become a source of livelihood for many people today. The agricultural sector has
spread to many fields of study, such as agricultural machinery production, pesticide
initiatives, air-conditioning environments, soilless farming techniques, disease detection and
separation in harvested products by taking advantage of the developing technology. Although
it is supported by technological infrastructures, the agricultural production sector, which is
carried out in the open field, is exposed to the natural conditions of the environment and
climate. Among the natural factors affecting agricultural production in the open field,
precipitation, wind, frost events and extreme temperatures can be listed. When choosing the products they will buy, consumers look at whether there is damage to the external appearance of fruits or vegetables (Wang et al., 2013).
Producers strive to grow their produce as free of disease as possible and to meet optimum irrigation needs. However, there is no measure that can be taken to prevent precipitation damage. For this reason, producers resort to insuring their production orchards in order to compensate for possible losses they may encounter.
Insurance transactions can be carried out in the categories of orchard area, number of trees, productivity value, agricultural disease and precipitation damage. Insurance transactions are carried out manually by agricultural engineers who go to the producer's production orchard; persons who carry out this process are called experts. In order to determine the insurance cost, the experts take random product samples from different sides of
1
Isparta University of Applied Sciences, Keçiborlu VHS, Isparta, Turkey
* Corresponding author: [email protected]
different trees in the production orchard. Based on their experience, they evaluate the samples they collect and decide for the purposes of the insurance.
In this study, a system was designed to determine the damage detection rate of a tree fruit
affected by precipitation damage by image processing methods. The motivation of this study
is to transfer the damage detection process, which will be carried out by the expert by hand
and eye, to a technological autonomous system. Thanks to this proposed design, it is foreseen
that the mistakes of carelessness, fatigue and bias that can be made by the expert will be
minimized and a standard will be established among the experts in determining the damage
detection rates. In order to test the proposed model, apple fruit, which is easily accessible in
the region where the study was carried out, was preferred.
Processes such as the quality classification of harvested fruits and vegetables and damage detection can be performed faster and more reliably by utilizing technological opportunities. A model using image processing and artificial intelligence technologies has been proposed for fast and accurate damage detection of litchi fruit: the damage areas are determined by the image processing infrastructure, and the support vector machine algorithm is used for the classification process (Xiong et al., 2018). In another study, different packaging methods were investigated by classifying the fruits (Eissa and F R, 2009; Pathmanaban et al., 2019). Lü and Tang proposed a system for detecting hidden bruises in kiwifruit by hyperspectral imaging. According to the experimental results of their study, the hidden bruises of kiwifruit could only be detected at a rate of 14.5% with hyperspectral imaging (Lü and Tang, 2011). Again using the hyperspectral imaging technique, Pan et al. tried to detect cold damage in peach fruit with artificial neural networks. In the proposed model, they achieved an accuracy rate of 70% to 90% depending on the cold temperature (Pan et al., 2016). Along with hyperspectral systems, artificial vision systems and ultraviolet rays are used to reveal invisible defects in fruits and vegetables and to determine their quality (Cubero et al., 2011). In a study on Jonagold apples, a few images taken from apple surfaces were processed to classify the fruits (Leemans and Destain, 2004). Pattnayak and Patra identified the damaged parts of different fruits without human intervention using the salience detection technique (Pattnayak and Patra, 2020). It can be said that their work is promising in this field.
In the second part of the study, the architecture of the proposed model is explained in Figure
1, the working principle of the system is shown with the flow diagram in Figure 2, and the
electronic circuit drawing of the proposed system is shown in Figure 3. In the third part,
image processing techniques and application images obtained for damage detection are given.
According to the values obtained in the second part, the results of the study are mentioned.

2. Proposed Method
In the proposed model, a spindle on which the apple will be positioned is attached to the rotation shaft of the stepper motor on the damage detection table. Opposite the spindle is the camera that captures the image. The architecture of the proposed system is shown in Figure 1.

Figure 1. Architecture of Fruit Damage Detection System
After the fruit is placed on this spindle, the button is pressed, the previously uploaded code on the Arduino Uno starts to run, and the motor first calibrates itself to the initial rotation position. It then starts to rotate so as to scan the entire lateral surface of the fruit. After each 120-degree rotation, there is a 1-second wait while the image is taken. The acquired images are combined side by side so that the entire lateral surface is flattened out. Afterwards, feature extraction is performed using image processing techniques and the fruit damage data is calculated. The flow diagram of the system is shown in Figure 2 in detail.

Figure 2. Flow Diagram of Fruit Damage Detection System


The electronic circuit diagram of the proposed system is shown in Figure 3 in detail.

Figure 3. Circuit Diagram of Fruit Damage Detection System
Using image processing techniques, the damaged fruit image was first converted to grayscale form. Afterwards, the obtained image was converted to binary format and the damaged spots on the lateral area of the fruit were made clear. The counting process was completed by coloring the damaged spots.
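The study itself performed these steps in Matlab; purely as an illustration of the same grayscale-threshold-count pipeline, a rough C# sketch is shown below. The threshold values and the way the fruit region is separated from the background are assumptions, not the parameters used in the study.

```csharp
using System.Drawing;

public static class DamageAnalyzer
{
    // Converts the image to grayscale, thresholds it, and estimates the
    // damage percentage as damaged pixels / fruit pixels * 100.
    // The threshold values below are illustrative only.
    public static double DamagePercentage(Bitmap fruitImage,
        int backgroundThreshold = 240, int damageThreshold = 100)
    {
        int fruitPixels = 0, damagedPixels = 0;

        for (int y = 0; y < fruitImage.Height; y++)
        {
            for (int x = 0; x < fruitImage.Width; x++)
            {
                Color c = fruitImage.GetPixel(x, y);
                int gray = (c.R + c.G + c.B) / 3;          // grayscale conversion

                if (gray >= backgroundThreshold) continue;  // skip bright background

                fruitPixels++;                              // pixel belongs to the fruit
                if (gray < damageThreshold)                 // dark spot -> hail damage
                    damagedPixels++;
            }
        }

        return fruitPixels == 0 ? 0 : 100.0 * damagedPixels / fruitPixels;
    }
}
```

For instance, with the values reported in Table 1 below, 9850 damaged pixels over a total lateral area of 616547 pixels gives 100 x 9850 / 616547, approximately 1.60%, matching the first row of the table.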

Figure 4. Image Processing Algorithms Results of Fruit Damage Detection System

3. Discussion and Conclusions


The results obtained by evaluating the sample selected for the detection of damaged fruit in
the system proposed in the study are shown in Table 1. In Table 1, a scenario was prepared on
the evaluation of fruit samples taken from a tree exposed to hail damage. In Table 1, starting
from the first image of the damaged fruit, pictures of the image processing processes, the
calculated total lateral area of the fruit, the detected damaged area, the number of damage
points and the total damage percentage are given.

Table 1. Evaluation Data of the Proposed Model in the Sample Selected for the Sample Scenario

Image Sample  | Total Area of Fruit | Detected Damaged Area | Number of Damaged Points | Damage Percentage of Fruit
Apple 1       | 616547              | 9850                  | 40                       | 1,597607319
Apple 2       | 437077              | 6509                  | 19                       | 1,489211283
Apple 3       | 481148              | 2153                  | 10                       | 0,447471464
Apple 4       | 347412              | 926                   | 4                        | 0,266542319
Apple 5       | 249317              | 3868                  | 15                       | 1,55143853
Apple 6       | 243827              | 3920                  | 16                       | 1,607697261
Overall Total | 2375328             | 27226                 | 104                      | 6,959968176

(The image sample column of the original table contains the photographs of the six apple samples.)

In this study, the proposed system is presented as a prototype. The necessary hardware and
costs for the installation of the system are given in Table 2. The prototype cost of the
proposed system was realized with a very affordable budget of 2165 TL.
Table 2. Hardware Components and Costs of the Proposed System
Hardware Component Hardware Cost
Arduino Uno Development Board 54 TL
Step Motor 34 TL
Step Motor Driver Circuit 15 TL
Camera 40 TL
Button 1 TL
Capacitor 1 TL
Supply Circuit 20 TL
Computer 2000 TL
The Overall Total 2165 TL

The agricultural expert using the system proposed in this study will be able to quickly obtain numerical damage detection rates from the samples taken from the tree of interest. We hope this will reduce the random errors that arise from personal measurement and evaluation. In future studies, we plan to work on differentiating hail damage from wounds and fruit diseases.
References
Cubero, S., Aleixos, N., Moltó, E., Gómez-Sanchis, J., & Blasco, J. (2011). Advances in
machine vision applications for automatic inspection and quality evaluation of fruits and
vegetables. Food and bioprocess technology, 4(4), 487-504.
Eissa, A., & F R, G. (2009). Operational modal analysis and damage detection in fruit quality assessment using different methods of packaging. ERJ. Engineering Research Journal, 32(1), 33-47.
Leemans, V., & Destain, M. F. (2004). A real-time grading method of apples based on
features extracted from defects. Journal of Food Engineering, 61(1), 83-89.
Lü, Q., & Tang, M. (2012). Detection of hidden bruise on kiwi fruit using hyperspectral
imaging and parallelepiped classification. Procedia Environmental Sciences, 12, 1172-1179.

Pan, L., Zhang, Q., Zhang, W., Sun, Y., Hu, P., & Tu, K. (2016). Detection of cold injury in
peaches by hyperspectral reflectance imaging and artificial neural network. Food
chemistry, 192, 134-141.
Pathmanaban, P., Gnanavel, B. K., & Anandan, S. S. (2019). Recent application of imaging
techniques for fruit quality assessment. Trends in Food Science & Technology, 94, 32-42.
Pattnayak, S. B., Patra, T. K. (2020). An Image Processing Approach to Detect Fruit
Damage. International Research Journal of Engineering and Technology, 667-671.
Wang, L., Li, A., & Tian, X. (2013, November). Detection of fruit skin defects using machine
vision system. In 2013 Sixth International Conference on Business Intelligence and Financial
Engineering (pp. 44-48). IEEE.
Xiong, J., Lin, R., Bu, R., Liu, Z., Yang, Z., & Yu, L. (2018). A micro-damage detection
method of litchi fruit using hyperspectral imaging technology. Sensors, 18(3), 700.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

A Novel Multi-attribute Visual CAPTCHA Model Approach

Ziya DİRLİK1*, Ayhan ARISOY

Abstract: One of the methods we encounter in the data entry screens of applications and used
to distinguish between real users and artificial users is to use CAPTCHA. With the
development of technology, bot applications that imitate real users have become able to solve CAPTCHAs correctly. In order to overcome this problem, CAPTCHAs have changed over time, and pictures, questions, shapes, etc. have been used instead of simple letters and numbers. In this study, a new CAPTCHA model created with color, shape and
numerical value is presented.

Keywords: Captcha, bot detection, user confirm, secure verification

1. Introduction
One of the most important problems encountered on user-interactive platforms is the difficulty of distinguishing real users from artificial users. Correctly distinguishing the artificial users designed to exploit a system from its real users, and thereby maintaining the correct operation of the system, is one of the main features that increases the security of the system. Interactive solution methods have been developed to distinguish the real user and the artificial user correctly. The most well-known of these methods is the captcha. A captcha can be defined as an automatic verification test developed to correctly distinguish between real users and artificial users. CAPTCHA, meaningless as a word, is an acronym formed from the initials of Completely Automated Public Turing test to tell Computers and Humans Apart (Von Ahn et al., 2003).
In the operation of the captcha technique, the workflow continues according to the correctness of the answers given to simple questions directed to the software user. The questions asked should be capable of distinguishing between real users and artificial users. As captcha formats were solved and answered successfully by artificial users over time, the captcha structure was differentiated and strengthened. Different techniques have been developed, such as having the user retype text-based expressions presented on the screen, asking the user for the results of simple mathematical operations, and selecting objects shared over different visual expressions. However, for each new technique developed, artificial users have discovered different breaking methods over time (Athanasopoulos and Antonatos, 2006, Chow and Susilo, 2011, Desai and Patadia, 2009). When the process is considered, the detection of real users by traditional captcha methods looks like an endless chase in which new

1
Isparta University of Applied Sciences, Keçiborlu VHS, Isparta, Turkey
* Corresponding author: [email protected]
cracking solutions are developed for every gap that is closed. Some examples of captchas developed with different techniques are shown in Figure 1.

Figure 1. The most well-known captcha examples


The main purpose of scientists in captcha studies is to develop techniques that real users can easily perceive and perform, while artificial users fall into complexity and err. For this purpose, studies have proposed many approaches: asking for the result of a basic mathematical operation on two simple numbers; asking users to rewrite textual expressions with letter and number complexity, written on an image in different thicknesses and at different angles (Imsamai and Phimoltares, 2010, Singh and Pal, 2014, Saalo, 2010); asking the user for results obtained through logic gates (Choudhary and Kaur, 2015); asking the user to select the images containing a common object from among several visual pieces mixed with similar images that do not contain that object (Fujita and Sano et al., 2016); dividing an image into a number of parts and asking the user to select the parts containing pieces of an object scattered over them (Gao, 2010); asking the user to write what he hears after listening to a simple sound (Gritzalis and Soupionis, 2010); asking the user to select a particular type from a set of small, very similar images placed on an image (Lin, 2011); and cutting a small piece out of an image, positioning it at a random point with drag-and-drop features, and asking the user to drag the piece back to where it belongs (Lin, 2011, Ali and Karim, 2014, Chaudhari et al., 2011), or asking the user to make a selection with face recognition (Goswami and Gaurav et al., 2012). Some field experts, on the other hand, argue that captcha is not a strong enough discriminator on its own and think that captcha and graphical password schemes should be used together (Bin and Zhu et al., 2014). However, the weak point of their work is that the model they developed is quite open to brute-force attacks.
In the second part of the study, the models of the suggested captcha method will be shown. In
the third part, statistical data of real people using the proposed captcha test are shown

according to different criteria. In the last part of the study, the evaluation of the findings of the
proposed model and the future study plans of the proposed model are given.
2. Proposed Method
The captcha model proposed in this study contains all of the shape, color and numerical
relationship criteria. The user trying to log in to the system should interpret all three criteria at
the same time and choose the right option. Some captcha examples of the proposed model are
shown in Figure 2.

Figure 2. Some captcha examples of the proposed model


In the designed captcha examples, inductive and deductive methods are preferred in figure
flow. Numerical and character expressions written on the figures also form a meaningful
sequence in themselves.
3. Results
The proposed captcha model was tested with the participation of 208 people. The classification of the participants according to age range and educational status is shown in Figure 3. The number of participants between the ages of 20 and 30 is higher than in the other age ranges.

Figure 3. Demographic data of survey participants


The average time spent per question by the participants performing the captcha test is shown
in Figure 4. Participants spent an average of 51 seconds for the second question and 18
seconds on average for the fourth question.

Figure 4. Average response time per question

The total time spent by the participants on the 4 suggested captcha questions exceeds 5 hours.
The average and total times spent per question are shown in Table 1.

Table 1. Average Response Time and Total Time Per Question


Question Id Average Time (sec) Total Time (hour)
1 00:00:34.2533333 01:21:38.4600000
2 00:00:51.1633333 01:53:24.5933333
3 00:00:30.5200000 01:05:06.5366667
4 00:00:18.4066667 00:40:48.2266667

The number of correct and incorrect answers given by all participants to the captcha questions
and the average percentage of correct answers per question are shown in Figure 5. An
accuracy rate of 86% was achieved in the fourth question.

Figure 5. Correct and wrong answers per question

4. Discussion and Conclusions

In the proposed captcha model, it was seen that the 2nd captcha question was more challenging than the other questions, and the 4th captcha question was answered faster and
with higher accuracy than the other captcha questions. The measurement values obtained for these two extreme captcha questions of the proposed model are at an acceptable level when compared with the values accepted in the literature and in practice. For this reason, the proposed models are considered suitable for daily use. In the next study, it is planned to produce more captcha examples, to test their security against the cyber attacks that they may be exposed to during implementation, and to strengthen their vulnerabilities.
References
Ali, F. A. B. H., & Karim, F. B. (2014). Development of captcha system based on puzzle. In 2014 International Conference on Computer, Communications, and Control Technology (I4CT) (pp. 426-428). IEEE.

Athanasopoulos, E., & Antonatos, S. (2006). Enhanced captchas: Using animation to tell humans and computers apart. In IFIP International Conference on Communications and Multimedia Security (pp. 97-108). Springer, Berlin, Heidelberg.

Chaudhari, S. K., et al. (2011). 3D drag-n-drop captcha enhanced security through captcha. In Proceedings of the International Conference & Workshop on Emerging Trends in Technology (pp. 598-601).

Chow, Y.-W., & Susilo, W. (2011). AniCAP: An animated 3D captcha scheme based on motion parallax. In International Conference on Cryptology and Network Security (pp. 255-271). Springer, Berlin, Heidelberg.

Desai, A., & Patadia, P. (2009). Drag and drop: A better approach to captcha. In 2009 Annual IEEE India Conference (pp. 1-4). IEEE.

Gao, H., et al. (2010). A novel image based captcha using jigsaw puzzle. In 2010 13th IEEE International Conference on Computational Science and Engineering (pp. 351-356). IEEE.

Goswami, G., et al. (2012). Face recognition captcha. In 2012 IEEE Fifth International Conference on Biometrics: Theory, Applications and Systems (BTAS) (pp. 412-417). IEEE.

Imsamai, M., & Phimoltares, S. (2010). 3D captcha: A next generation of the captcha. In 2010 International Conference on Information Science and Applications (pp. 1-8). IEEE.

Kaur, R., & Choudhary, P. (2015). A novel captcha design approach using Boolean algebra. International Journal of Computer Applications, 975, 8887.

Lin, R., et al. (2011). A new captcha interface design for mobile devices.

Saalo, V. (2010). Novel captcha schemes.

Sano, A., Fujita, M., & Nishigaki, M. (2016). DirectCha: A proposal of spatiometric mental rotation captcha. In 2016 14th Annual Conference on Privacy, Security and Trust (PST) (pp. 585-592). IEEE.

Singh, V. P., & Pal, P. (2014). Survey of different types of captcha. International Journal of Computer Science and Information Technologies, 5(2), 2242-2245.

Soupionis, Y., & Gritzalis, D. (2010). Audio captcha: Existing solutions assessment and a new implementation for VoIP telephony. Computers & Security, 29(5), 603-618.

Von Ahn, L., et al. (2003). Captcha: Telling humans and computers apart automatically. In Proceedings of Eurocrypt.

Zhu, B. B., et al. (2014). Captcha as graphical passwords—A new security primitive based on hard AI problems. IEEE Transactions on Information Forensics and Security, 9(6), 891-904.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Assistant Referee Offside Signals Training Simulator System Design
Ayhan ARISOY*, Enes AÇIKGÖZOĞLU2

Abstract: The football industry has been growing in recent years and the number of its stakeholders is increasing. Football has become more than a sport with the increase in the number of fans of national teams and the league teams of countries. Undoubtedly, the share of the referees who manage football matches is very important in this sector. The skills and correct management of the referees in a match are followed by all fans and clubs. For this reason, referee education and training is an issue that should be considered carefully. In this study, a simulation system is proposed to assist the education and training of linesmen in executing offside decisions. In the last part of the study, a cost analysis of the proposed system was carried out, showing how affordable it is. We hope that the system will be adopted and put into practice in a short time.

Keywords: Kinect Sensor, Referee Training Simulation, Offsides Training, Body Skeleton
Joint Detection

1. Introduction
Football, the most well-known and most followed sport today, was founded in England in
1857. There are 403 professional football clubs in the world. This number increases
considerably when football teams and amateur initiatives are included (Aydın, 2008). The
worldwide football economy consists of clubs, players, facilities, broadcasters, and fan
expenditure stakeholders. The economy in question has a budget of billions of Euros.
This large budget is distributed to professional clubs in proportion to their success in
competitions on official platforms. This situation shows that it is of vital importance that the
competitions are managed fairly and in accordance with the football game rules. The most
critical football stakeholder to execute and fulfill these conditions is the referees. As in every
sport, certain rules are determined and updated in the football branch. In the field, a total of 7
referees, including VAR (Video Assistant Referee) referees, are responsible for the fair and
proper administration of football matches, with new applications. During the match, all
spectators, technical team and football players follow the game flow with the physical signs of
the referee and assistant referees specified in the football game rules. For this reason, referees
must reflect the game rules on the field physically and mentally during the game (Kürkçü and
Uluşar, 2014). Possible misleading or wrong physical sign negatively affects the course of the
game.

1
Isparta University of Applied Sciences, Senirkent VHS, Isparta, Turkey
* Corresponding author: [email protected]
Assistant referees notify their decisions in the competition through the referee flag. For this
reason, the use of flags by the assistant referees must be error-free and closed to
interpretation. During the referee training process, the flag holding positions and angles of the
assistant referees is the main issue that needs to be studied meticulously. Even today, this
physical training and education process is carried out by the referee candidates watching each
other, recording the images and watching them later, or in front of a mirror. In learning
psychology, the immediate feedback given to the learner at the time of learning contributes
positively to the effectiveness and quality of learning. For this reason, the need for a system
that can give instant feedback to the learner in training and flag training of assistant referees is
clearly seen. The motivation source of the study is the preparation of the necessary training
and training environment for the referees to have the right effect of the football game rules on
the field.
The adaptation of football, which appeals to large audiences, with technology started at the
end of the 90s by increasing the communication quality of the referee and assistant referee.
These systems enabled the referee and assistant referees to communicate independently of
distance and ambient noise. The growing economic portfolio of the football industry has
caused all stakeholders in the field to get closer to technology to optimize their roles. The goal
line technology, which is one of the most well-known of these technologies, is a system that
helps the referee by deciding whether the ball has crossed the goal line through electronic
sensors (FIFA, 2012). This system has been developed by scientists and a system based on
radio frequency recognition and faster decision making has been proposed (Ghosh and Sasmal
et al., 2019). The contribution of the goal line technology to the continuation of the match
with the right decisions has been accepted by all football segments. This satisfaction has
created the opinion that a technological structure that will help the referee in all critical
decisions, except the goal line technology, will be positive. The VAR model, which started to
be constructed after 2010, was officially implemented for the first time in 2016 in a
preparation competition (IFAB, 2016). As a result of the positive feedback received, it has
been actively used in many national and international organizations today. The technological
infrastructures developed for the scenarios during the competition are not limited to the
competition, athlete, referee analysis, injury evaluations, athlete and referee training, athlete
and referee training. Scientists develop various software to analyze the competition (Abdullah
and Razali et al., 2016) or make use of artificial intelligence technologies (Kumar, 2013).
Similarly, the running distances, game zones, defense and attack contributions, tactical
compatibility, and physical strength of the football players and referees in the competition can
also be followed [Gong and Cui et al., 2019, Kürkçü and Uluşar, 2014]. Thanks to the
developed IoT (Internet of Things) based systems, the training environments and training
programs of football players and referees can be prepared in an optimized way. In order to
realize this system, using ZigBee technology, parameters such as the athlete's blood pressure,
blood oxygen rate, body movements, sweating rate, body temperature, past health stories are
instantly taken and transferred to cloud-based systems. Thus, the responsible personnel who
manage the training or training can follow the instant physical conditions of all learners
(Ikram and Alshehri et al., 2015). As a result of this research, it has been determined that the technological structures developed so far are designed mainly for the detection and monitoring of the physical condition of the athletes. It is obvious that the systems developed for education are very limited and that the sector has a clear need in this regard. With the proposed model, it is aimed to

contribute to the physical training of the assistant referee, who is an important figure in the
football industry.
In the second part of the study, how the proposed model was developed and its working
principles, the findings obtained in the third part, and the results of the study carried out in the
fourth part are given.
2. Proposed Method
The Kinect V2 is a capable sensor for game consoles that can detect human movements and the skeletal and joint structure. It is used by scientists in academic studies thanks to library support with which the skeletal structure and joints can be obtained (Taşdelen and Gürfidan, 2015).
The Kinect sensor has an infrared camera, RGB camera, infrared emitter and multiple
microphones. The resolution of the infrared camera is 512x424 pixels (px), the resolution of
the RGB camera is 1920x1080 px. The lens viewing angle capability of the RGB camera is
70x60 degrees. It has an image detection rate of 30 frames per second. The detection area
ranges from 0.5 meters to 4.5 meters. The Kinect v2 sensor is shown in Figure 1.

Figure 1. Kinect V2 Sensor


Correct and incorrect flag positions captured by the proposed system are shown in Figure 2.

Figure 2. Assistant referee offside flag positions

To determine the accuracy of the movements, the segment from the wrist to the elbow and the segment from the elbow to the shoulder are expressed as vectors obtained from the differences of the joint coordinates. The angle between these two vectors is then calculated using the cosine theorem. As shown in Figure 3, the angle calculation is performed when the x, y and z coordinates of the joints are known. In order to apply the cosine theorem, the distances between the joints must first be converted to vectors. This conversion is calculated with Formula 1 and Formula 2.
PA = (x_1 - x_2,\ y_1 - y_2,\ z_1 - z_2)        (Formula 1)
PB = (x_3 - x_2,\ y_3 - y_2,\ z_3 - z_2)        (Formula 2)

The expressions PA and PB denote the vectors between the points, and x1, x2, x3, y1, y2, y3, z1, z2, z3 denote the coordinates of the joints. In order to calculate the angle between two vectors starting from the same point and extending in different directions, the lengths of the vectors must be calculated separately. The lengths of the vectors are calculated as shown in Formula 3 and Formula 4.

|PA| = \sqrt{PA_x^{2} + PA_y^{2} + PA_z^{2}}        (Formula 3)

|PB| = \sqrt{PB_x^{2} + PB_y^{2} + PB_z^{2}}        (Formula 4)

Figure 3. Converting joint distances to vector and calculating the angle between them
(Taşdelen and Gürfidan, 2015)
After the preprocessing required for the angle calculation is completed, the angle between the two vectors is calculated as shown in Formula 5.
\cos\theta = \frac{PA \cdot PB}{|PA|\,|PB|}        (Formula 5)
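A short C# sketch of this angle calculation is given below. The joint coordinates are represented with a simple struct rather than the actual Kinect SDK types, so the names are illustrative assumptions.

```csharp
using System;

public struct Joint3D
{
    public double X, Y, Z;
    public Joint3D(double x, double y, double z) { X = x; Y = y; Z = z; }
}

public static class AngleCalculator
{
    // Angle (in degrees) at joint p2 between the segments p2->p1 and p2->p3,
    // e.g. the elbow angle between the wrist-elbow and shoulder-elbow segments.
    public static double AngleAt(Joint3D p1, Joint3D p2, Joint3D p3)
    {
        // Formula 1 and 2: vectors from the common joint to the two end joints.
        double ax = p1.X - p2.X, ay = p1.Y - p2.Y, az = p1.Z - p2.Z;
        double bx = p3.X - p2.X, by = p3.Y - p2.Y, bz = p3.Z - p2.Z;

        // Formula 3 and 4: vector lengths.
        double lenA = Math.Sqrt(ax * ax + ay * ay + az * az);
        double lenB = Math.Sqrt(bx * bx + by * by + bz * bz);

        // Formula 5: cosine theorem, then convert to degrees.
        double cos = (ax * bx + ay * by + az * bz) / (lenA * lenB);
        return Math.Acos(cos) * 180.0 / Math.PI;
    }
}
```

The computed angle can then be compared with the reference angles of the correct flag positions to give the trainee instant feedback.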

3. Discussion and Conclusions
In order to apply the model proposed in the study to real life, cost analyses were carried out.
All components required for efficient and correct use of the system are given in Table 1 with
cost calculations.
Table 1. Components of the System and Cost Analysis

Hardware Equipment Cost Accounting


Notebook / Desktop 3500 TL
Monitor 1500 TL
Operating System 600 TL
Kinect V2 Sensor 1100 TL
TOTAL COST 6700 TL

As can be seen in Table 1, this education and training system can be installed at a cost of 6700 TL, which is a very reasonable amount considering the critical role of the referees. In addition, with the development of suitable software, the system has the potential to contribute not only to the offside training of linesmen but also to many other kinds of physical training.
References
Abdullah, Mohamad Razali, et al. Development of tablet application based notational analysis
system and the establishment of its reliability in soccer. Journal of Physical Education
and Sport, 2016, 16.3: 951.
Aydın, E. Futbol Ekonomisi: 2 Ülke Kıyaslaması (İngiltere ve Türkiye). 2008. PhD Thesis.
Yüksek Lisans Tezi, Marmara Üniversitesi Sosyal Bilimler Ens. İşletme Anabilim Dalı
Uluslararası İşletmecilik Bilim Dalı.
FIFA (2012). "Testing Manual". FIFA Quality Programme for Goal Line Technology.
Ghosh, S., Sasmal, S., Bhui, S., Dutta, S., Mukherjee, S., Majumder, A., & Ganguly, B.
(2019, March). Radio Frequency Identification based Goal Line Technology for Quick
Decision Making in a Football Match. In 2019 Devices for Integrated Circuit
(DevIC) (pp. 441-445). IEEE.
Gong, Bingnan, et al. The validity and reliability of live football match statistics from
champdas master match analysis system. Frontiers in psychology, 2019, 10: 1339.
Ikram, Mohammed Abdulaziz; ALSHEHRI, Mohammad Dahman; HUSSAIN, Farookh
Khadeer. Architecture of an IoT-based system for football supervision (IoT Football).
In: 2015 IEEE 2nd World Forum on Internet of Things (WF-IoT). IEEE, 2015. p. 69-74.
Kumar, G. (2013). Machine learning for soccer analytics. University of Leuven.
Kürkçü, Cengiz; Uluşar, Ümit Deniz. Position and motion analysis of referees during soccer
games. In: 2014 22nd Signal Processing and Communications Applications Conference
(SIU). IEEE, 2014. p. 124-127.

Minutes of the 130th Annual General Meeting of the International Football Association
Board". IFAB. pp. 13–17, 2016.
Taşdelen, K., Gürfidan, R. "Control of robot arm by using kinect technology". Master's Thesis, Süleyman Demirel Üniversitesi, Elektronik Bilgisayar Eğitimi Anabilim Dalı, 2015.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Internet of Things Based Real-Time Fatigue Detection System for Drivers with Kinect Sensors
Enes AÇIKGÖZOĞLU1*, Ziya DİRLİK2, Ayhan ARISOY3

Abstract: A precaution and warning system is proposed that makes use of Internet of Things devices and image processing techniques in order to detect the fatigue of vehicle drivers. In the study, the Kinect sensor, which can detect human joints, was preferred in order to detect the forward tilt of the head when the driver dozes off or falls asleep. In addition, a simple camera facing the driver's face was used to detect the closing of the eyes without the head tilting in cases of fatigue and sleep. The data obtained and processed in real time can be used to make decisions for fatigue detection and can play an accident-preventing role by warning the driver.

Keywords: Internet of things, fatigue detection, Kinect sensor, image processing

1. Introduction
As of 2020, there are approximately one billion two hundred and fifty million vehicles in the
world. These vehicles are used for public transportation, logistics services, commercial
activities and personal purposes. Transportation has become an essential need in every aspect
of daily life. For this reason, traffic is a multidimensional phenomenon that is active, alive and
growing every moment of the day. Traffic phenomenon has stakeholders such as traffic rules
and violations, traffic accidents, road conditions, number of vehicles and pedestrians. Among
these stakeholders, traffic accidents appear as the dimension that most concerns and affects
human life. According to official figures, over 14 million accidents have occurred in our
country in the last 10 years. In these accidents, a total of 64,634 people lost their lives and
over 3 million 247 thousand people were injured (Tuik, 2021). This number is much larger
worldwide. Although there are many causes of traffic accidents, driver faults constitute 89.39% of them. When the causes of the accidents are examined, the distribution includes 2% hitting a stationary vehicle, 6% hitting a fixed object, 15% going off the road, and 6% head-on collisions (Polis Akademisi, 2019). When these causes are examined, it is clear that the driver is careless and inattentive in these accidents. Vehicle companies such as
Nissan, Toyota, Volkswagen are updating and developing driver assistance software to
prevent vehicle accidents (Sikander and Anwar, 2018).
In this study, a system is proposed in which the driver's fatigue can be detected in real time while the vehicle is in motion. In the proposed system, images taken from the Kinect sensor and images obtained from the camera are processed in real time by using image processing techniques, triggering a warning system. Fatigue detection is a subject that has been studied before; scientists have tried to develop different methods to prevent drivers from dozing off or sleeping at the wheel (Wang and Yang et al., 2006). Wang et al. proposed a real-time driving fatigue detection system based on a wireless EEG headset. They used power spectrum and

1
Isparta University of Applied Sciences, Keçiborlu VHS, Isparta, Turkey
* Corresponding author: [email protected]
sample entropy to detect the mental fatigue of the driver (Wang and Dragomir et al., 2018). In another study, fatigue detection was performed with the help of the OpenCV library by detecting the face of the vehicle driver in both daylight and night vision mode by means of a camera system installed inside the vehicle (Brandt and Stemmer et al., 2004). Rogado et al. proposed a detection system to detect early signs of fatigue using data from biological and vehicle peripherals. This detection system decides whether the driver of the vehicle is fit to drive or not. In that study, the driver's heart rate changes, the pressure on the steering wheel grip, and the vehicle's interior and exterior temperature values are obtained and analyzed to determine fatigue indirectly (Rogado and Garcia et al., 2009). Kong et al. proposed a fatigue detection system based on artificial vision. They examined the face, eye and mouth regions of the driver in the images taken from the camera. The open-closed states of the eyes and the mouth opening conditions were examined and the fatigue status of the driver was determined (Kong and Zhou, 2015). Devi and Bajaj, on the other hand, suggested the detection of fatigue by monitoring and measuring eye openness (Devi and Bajaj, 2008, Eriksson and Papanikotopoulos, 1997).
The motivation for the study includes the accidents, injuries and deaths that could be prevented by using the system. In addition, a different method has been developed for fatigue detection based on the Kinect sensor, one of the elements of the Internet of Things, and on measurements of the driver's neck angle. This method has not been used for fatigue detection before. In the second part of the study, the proposed model is explained in detail. In the third chapter, the findings obtained from the proposed model are discussed and the results are presented.
2. Proposed Method
In the proposed fatigue detection system, a two-stage fatigue verification method is preferred. First, the skeletal system of the vehicle driver is extracted by using the camera and the infrared module of the Kinect sensor. From the obtained image, the skeletal structure of the driver's neck is converted into vectors and the angle of the neck is calculated. In normal use, the neck angle of the vehicle driver varies between 169 and 174 degrees. In cases of fatigue and sleep, the neck bends forward, causing the angle in the skeletal system to decrease. In this case, a warning code is sent to the central control point in real time, on the assumption that the vehicle driver is outside normal use. The measurement values corresponding to the normal use scenario and to fatigue or sleep detection are shown in Figure 1.

Figure 1: Fatigue detection by obtaining neck angle with Kinect sensor


For the accuracy of fatigue detection, the segment from the chest to the neck and the segment from the neck to the head are expressed as vectors obtained from the differences of the joint coordinates. The angle between these two vectors is then calculated using the cosine theorem. As shown in Figure 2, the angle calculation is performed when the x, y and z coordinates of the joints are known. In order to apply the cosine theorem, the conversion to vectors is first calculated with Formula 1 and Formula 2. The lengths of the vectors are calculated as shown in Formula 3 and Formula 4.

Figure 2: Converting distances to vectors and calculating the angle between them

PA = (x_1 - x_2,\ y_1 - y_2,\ z_1 - z_2)        (Formula 1)

PB = (x_3 - x_2,\ y_3 - y_2,\ z_3 - z_2)        (Formula 2)

|PA| = \sqrt{PA_x^{2} + PA_y^{2} + PA_z^{2}}        (Formula 3)

|PB| = \sqrt{PB_x^{2} + PB_y^{2} + PB_z^{2}}        (Formula 4)

After the preprocessing required for the angle calculation is completed, the angle between the two vectors is calculated as shown in Formula 5.
\cos\theta = \frac{PA \cdot PB}{|PA|\,|PB|}        (Formula 5)

The second fatigue verification step is a system positioned in front of the vehicle driver that constantly monitors the driver's eyes. Here, the driver's eyes are detected by image processing techniques, and a warning code is sent to the central control point if they close or squint. The EmguCV library, a .NET wrapper for the OpenCV framework, was used in the software designed to detect the eyes of the driver. The fact that the driver is wearing glasses does not cause any loss or error in detecting the eyes. The images obtained from the software prepared for the detection of the eyes are shown in Figure 3.
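As an illustration only, the following C# sketch shows how eye regions could be detected with EmguCV's Haar cascade classifier; the cascade file name and the detection parameters are assumptions, not the exact configuration used in the study.

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

public static class EyeDetector
{
    // Returns the rectangles of detected eyes in a single camera frame.
    // "haarcascade_eye.xml" is the standard OpenCV eye cascade (assumed path).
    public static Rectangle[] DetectEyes(Mat frame)
    {
        using (var eyeCascade = new CascadeClassifier("haarcascade_eye.xml"))
        using (var gray = new Mat())
        {
            // Work on a grayscale copy of the frame.
            CvInvoke.CvtColor(frame, gray, Emgu.CV.CvEnum.ColorConversion.Bgr2Gray);

            // Detect eye candidates; if none are found for several consecutive
            // frames, the eyes are assumed to be closed and a warning is raised.
            return eyeCascade.DetectMultiScale(gray, 1.1, 4, new Size(20, 20));
        }
    }
}
```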

Figure 3: Checking whether the eyes are open with image processing techniques

In the general operation of the system, the neck angle of the vehicle driver and the open or closed state of his eyes are measured continuously. If the obtained measurements are within the required values, the image measurement process continues. As soon as abnormal values are detected, a warning code is sent to the central control point. As soon as warning codes arrive from both the camera and the Kinect sensor, the warning system is activated and the vehicle driver is warned with both sound and light. The general operating architecture of the fatigue detection system is shown in Figure 4.
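The overall decision logic described above can be summarized with the following C# sketch. The threshold of 169 degrees comes from the normal-use range given earlier, while the frame counts and the warning mechanism are illustrative assumptions.

```csharp
public class FatigueMonitor
{
    private const double NeckAngleThreshold = 169.0; // below the normal 169-174 degree range
    private const int RequiredFrames = 30;           // consecutive abnormal frames (assumed)

    private int abnormalNeckFrames;
    private int closedEyeFrames;

    // Called for every processed frame with the latest measurements.
    // Returns true when the driver should be warned with sound and light.
    public bool Update(double neckAngleDegrees, bool eyesDetected)
    {
        // Kinect branch: neck bent forward for too long.
        abnormalNeckFrames = neckAngleDegrees < NeckAngleThreshold ? abnormalNeckFrames + 1 : 0;

        // Camera branch: eyes not visible (closed) for too long.
        closedEyeFrames = eyesDetected ? 0 : closedEyeFrames + 1;

        bool kinectWarning = abnormalNeckFrames >= RequiredFrames;
        bool cameraWarning = closedEyeFrames >= RequiredFrames;

        // Only when both sources report a warning is the driver alerted,
        // as described in the system architecture.
        return kinectWarning && cameraWarning;
    }
}
```

Requiring several consecutive abnormal frames acts as a simple filter against momentary glances or measurement noise before the warning is triggered.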

Figure 4: Architecture of the Proposed Fatigue Detection System

3. Discussion and Conclusions
In this study, a system is proposed in which the driver's fatigue can be detected in real time while the vehicle is in motion. This system, which makes use of pupil detection and neck angle values for fatigue detection, gives successful results and is open to development. As with all developed and proposed systems, the system proposed in this study has limitations and difficulties. Among these, it can be noted that the driver may move his head out of the camera angle while driving, for road checks and similar purposes. In this situation the warning system may be activated even though there is no fatigue, since correct angle detection in the skeletal system and detection of the eyes are prevented. In future studies, new techniques will be developed to address this problem.


International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Performance analysis of advanced encryption standard algorithm


using parallel computing for embedded systems

Muhammet Cihat MUMCU1*, Güner TATAR2

Abstract: In today's technological world, data transfer is an important issue. It has even
become one of the most discussed topics. With the advancement of technology, processing
and storing vast quantities of data in computers and resource-constrained devices within the
Internet of Things, and transmitting them from one location to another through electronic
communication channels, has become a routine activity of everyday life. However, since the
communication networks used for data processing are accessible to everyone, there is a risk
that such data will be shared, lost, or accessed by unauthorized (third)
parties. At this stage, some transformations must be performed for the messages to be
rendered incomprehensible by third parties and transmitted via accessible electronic
communication channels. This is accomplished by the use of encryption or cryptographic
operations. The cryptographic algorithms used today may be insufficient in terms of security
and efficiency, especially for limited resource devices used in Internet of Things
environments. As a result, special lightweight cryptography algorithms built with the
constraints of restricted devices are commonly used. The implementation and analysis of the
symmetric AES algorithm in parallel computing using different methods on different
platforms were performed in this research. AES algorithm has been computed in parallel
using different software platforms in the Complex Instruction Set Computer (CISC)
architecture, and better efficiency has been achieved by making parallel hardware design in
the folded architecture.

Keywords: Internet of Things (IoT), Embedded Systems, Parallel Computing, Lightweight Cryptography.

1. Introduction

In today's world where technology is developing rapidly, the internet and computers have
become indispensable elements of our lives. In parallel with this development, the security
gaps that have emerged are just as important. It has become compulsory to use the most
advanced encryption methods in applications requiring high security such as online sales (e-
commerce), banking transactions and credit card transactions. Various encryption, keying and
decoding algorithms are provided through the science of cryptography for reliable
transmission and acquisition of data (Kahate 2013; Goldreich 2009).

Cryptography is a term that refers to a combination of math and security engineering. It


provides us with the tools that are at the heart of most current security measures. It is the most
1
Maltepe University, Faculty of Engineering and Natural Science, Department of Electrical and Electronics
Engineering, Istanbul, Turkey
2
Fatih Sultan Mehmet Vakıf University, Faculty of Engineering, Department of Electrical and Electronics
Engineering, Istanbul, Turkey
* Corresponding author: [email protected]
appropriate key enabling approach for safeguarding various systems, but it is regrettably
difficult to implement correctly (Diffie and Hellman 1976). The cryptology and computer
security communities have been drifting apart over the last 20 years. Users of security
systems do not always grasp the tools of cryptology, and they do not always understand the
real-world problems.

In the face of hostile external attackers, cryptography is a method of transferring special


information through communication routes. It covers many problems such as authentication,
encryption and the distribution of keys to a restricted set of persons. The field of modern
cryptography gives users the ability to understand these problems on a theoretical foundation,
to evaluate protocols that can solve them, and to build the protocols needed to achieve
confidence in security (Eisenbarth et al. 2007).

Today's cryptographic methods may be insufficient in terms of security and performance,


particularly for devices with low resources utilized in Internet of Things environments. As a
result, specific lightweight cryptography methods designed for limited devices are frequently
employed. This work involved the implementation and analysis of the symmetric AES
algorithm in parallel computing using various approaches on various platforms. The AES
algorithm has been computed in parallel using different software platforms in the Complex
Instruction Set Computer (CISC) architecture, and better efficiency has been achieved by
making parallel hardware design in the folded architecture.

2. Material and Method

The cryptography techniques are generally divided into two groups:

2.1. Symmetric Algorithms

The same structure is used for encryption and decryption in symmetric key algorithms whose
general structure is shown in Figure 1. This key is called a secret key. This secret key is
known to both parties (sender and receiver).

Figure 1. General Structure of Symmetric Key Algorithm

Symmetric algorithms work faster than asymmetric algorithms. Examples of symmetric
algorithms include AES, DES, 3DES, Blowfish, IDEA, RC4 and TEA. Symmetric ciphers are
still widely used, especially for data encryption, data decryption and integrity check of the
messages. Symmetric cryptography algorithms can be divided into two types, block ciphers
and stream ciphers (Chandra et al. 2014; Ebrahim et al. 2014).

2.2 Asymmetric Algorithms

Different keys termed public and private keys are used for encryption and decryption in this
cryptography method, and the overall structure of systems with asymmetric keys is depicted
in Figure 2. These two keys are used together. However, it is computationally infeasible to
derive the private key from the public key.

Figure 2. General Structure of Asymmetric Key Algorithm

Asymmetric algorithms are more secure and more difficult to break than symmetric
algorithms. However, their performance is quite low compared to symmetric algorithms. In
asymmetric algorithms, each person has a key pair. A person's private key is for his own use
only and should not be in the hands of others. Only the intended recipient can decrypt the
message, using his private key. Some examples of public-key cryptography algorithms are
Elgamal, RSA, ECC, Diffie-Hellman and DSA (Maqsood et al. 2017; Garg and Yadav 2014).
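To make the public/private key idea concrete, the following is a toy C++ sketch that mimics RSA-style encryption with deliberately tiny textbook numbers (p = 61, q = 53). Real RSA uses keys of 2048 bits or more; the values and helper function here are purely illustrative and offer no security.

```cpp
#include <cstdint>
#include <iostream>

// Modular exponentiation: computes (base^exp) mod m by square-and-multiply.
uint64_t modpow(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main() {
    // Toy key pair: n = p*q, public exponent e, private exponent d with e*d = 1 mod (p-1)(q-1).
    const uint64_t n = 61 * 53;   // 3233
    const uint64_t e = 17;        // public key (e, n)
    const uint64_t d = 2753;      // private key (d, n)

    uint64_t message = 65;                      // plaintext, must be smaller than n
    uint64_t cipher  = modpow(message, e, n);   // anyone with the public key can encrypt
    uint64_t plain   = modpow(cipher, d, n);    // only the private-key holder can decrypt

    std::cout << "cipher = " << cipher << ", decrypted = " << plain << "\n";
    return 0;
}
```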

2.3. Advanced Encryption Standard

AES is an encryption standard adopted by the United States. It is a standard block cipher, also
known as Rijndael (Daemen and Rijmen 2001). The algorithm, developed by the two Belgian
researchers Joan Daemen and Vincent Rijmen, was selected as the new standard as a result of
the competition organized by NIST (National Institute of Standards and Technology) to
determine a new encryption standard after the DES algorithm had weakened and lost
credibility against developing technology. After a long period of standardization and
verification, it was published by NIST on 26 November 2001 as the AES FIPS 197 standard
(Nechvatal et al. 2001; Schneier and Whiting 2000).

Advanced encryption standard (AES) is a block cipher algorithm that encrypts data in 128-bit
chunks. There are three types, AES-128, AES-192 and AES-256, according to the key length it

uses. In AES, a 128-bit data block is treated as 4 words, each consisting of 32 bits. When the
encryption process starts, a 128-bit data block consisting of 4 words is written into the state
array, and all the operations of the algorithm are performed on this array. When the last
operation required for encryption is finished, the final state of the array is written to the
output (Selent 2010).

The AES algorithm generally consists of two blocks: the first is the round transformation and
the second is the key generation (key expansion) block. The algorithm has a repetitive
structure; depending on whether the key length is 128, 192 or 256 bits, the round
transformation is repeated 10, 12 or 14 times. The number of repetitions is given in Table 1.

Table 1. AES Key-Block-Round Comparison

          Block Size (bit)   Key Length (bit)   Number of Rounds (Nr)
AES-128   128                128                10
AES-192   128                192                12
AES-256   128                256                14

At the start of encryption, the block to be encrypted is written into the state array according to
Figure 3. The encryption process starts with the addition of the input key to the state array.
Depending on the key length, the round transformation is repeated 10, 12 or 14 times.

Figure 3. The Structure of AES Encryption

The block diagram of the detailed encryption and decryption process is given in Figure 4.
During the round transformation, the SubBytes (byte substitution), ShiftRows, MixColumns
and AddRoundKey sub-operations are applied to the state array. The 128-bit data obtained as
the output of the round transformation is added to the round key produced by the key
generation process. Performing the last round and adding the final round key yields the
encrypted block. The operations performed in the last round differ from those of the previous
rounds: MixColumns is not applied in the last round.

Figure 4. The Detailed Encryption and Decryption Rounds of AES-128
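The round structure described above can be summarized in a short C++ sketch. Only the control flow of AES-128 encryption is shown; subBytes, shiftRows, mixColumns and expandKey are placeholder stubs here, so the sketch illustrates the sequence of operations rather than providing a working cipher.

```cpp
#include <array>
#include <cstdint>

using State    = std::array<uint8_t, 16>;   // 128-bit block, viewed as 4 words of 32 bits
using RoundKey = std::array<uint8_t, 16>;

// Stub sub-operations (bodies omitted): in a real implementation SubBytes applies the
// S-box, ShiftRows rotates the rows of the state, and MixColumns mixes each column in GF(2^8).
void subBytes(State&)   {}
void shiftRows(State&)  {}
void mixColumns(State&) {}

// AddRoundKey really is just a byte-wise XOR with the round key.
void addRoundKey(State& s, const RoundKey& k) {
    for (int i = 0; i < 16; ++i) s[i] ^= k[i];
}

// Stub key expansion: a real implementation derives 11 distinct round keys from the cipher key.
std::array<RoundKey, 11> expandKey(const RoundKey& key) {
    std::array<RoundKey, 11> rk{};
    rk.fill(key);                     // placeholder only
    return rk;
}

// AES-128 encryption of one block: 10 rounds, with MixColumns omitted in the last round.
State aes128EncryptBlock(State state, const RoundKey& key) {
    const auto roundKeys = expandKey(key);
    addRoundKey(state, roundKeys[0]);          // initial key addition

    for (int round = 1; round <= 9; ++round) { // rounds 1..9
        subBytes(state);
        shiftRows(state);
        mixColumns(state);
        addRoundKey(state, roundKeys[round]);
    }

    subBytes(state);                           // final round: no MixColumns
    shiftRows(state);
    addRoundKey(state, roundKeys[10]);
    return state;
}

int main() {
    State block{};             // all-zero plaintext block, for illustration only
    RoundKey key{};            // all-zero key
    State out = aes128EncryptBlock(block, key);
    return out[0];             // use the result so the call is not optimized away
}
```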

2.4. Instruction Set Architecture (ISA)

In computer science, an instruction set architecture (ISA), often known as computer


architecture, is an abstract model of a computer. Devices such as central processing
units (CPUs) that execute an ISA are referred to as "implementations". An ISA defines, for a
family of implementations, the supported data types and the I/O and memory model (such as
memory consistency, addressing modes and virtual memory) (Goodacre and Sloss 2005).

An ISA specifies the behavior of machine code running on ISA implementations in a way that
is independent of those implementations' characteristics, allowing binary compatibility across
them. This allows multiple implementations of an ISA to run the same machine code while
differing in performance, physical size, and monetary cost (among other things). This allows a
lower-performance, lower-cost machine to be replaced with a higher-cost, higher-performance
machine without replacing the software. It also enables the evolution of ISA implementation
microarchitectures, allowing a newer, higher-performance ISA implementation to run
software written for previous generations of implementations (Jamil 1995).

Processor instruction set architectures vary. Each architecture has its own instruction length,
structure and complexity, and these variations lead to differences in processor design.

CISC (Complex Instruction Set Computing) was the first instruction set architecture to be
developed. Instructions in this architecture vary in length and complexity. Both memory and
instruction count are saved, since many operations are merged into a single instruction. The
intricacy of the instructions, on the other hand, adds to the complexity of the processor
architecture. The instructions in the RISC instruction set architecture are all the same length
and have a simple structure. This makes processor design easier. In comparison to the CISC
design, however, more instructions are required to execute an operation (Bhandarkar 1997).

2.4.1 Complex Instruction Set Computing (CISC)

When Intel's processor series based on the x86 architecture first appeared in the 1970s, RAM
was expensive and limited, and design architects who advocated supporting high-level
languages while using these resources economically collaborated to create the CISC
architecture. This architecture is the result of a design philosophy that is simple to program
and makes efficient use of memory. Although it limits performance and complicates the
processor, it simplifies the software.

Variable-length commands are one of the two distinguishing features of CISC architecture,
along with complex commands. Commands with variable and complex lengths save memory.
Because complex instructions combine two or more instructions into a single instruction, they
save both memory and the number of instructions that must be included in the program. A
complex command necessitates a complex architecture. As the architecture becomes more
complex, undesirable situations in processor performance arise. Low memory usage when
installing and running programs, on the other hand, can eliminate this issue. A typical CISC
instruction set contains 120-350 instructions in variable format. It has a good memory
management system and more than a dozen addressing modes (Bhandarkar and Clark 1991).

The CISC architecture is based on the multi-stage processing model. The first tier is where the
high-level language is written. The next level is machine language, which, as a result of
compiling the high-level language, translates a series of commands into machine language. In
the next step, the commands translated into machine language are decoded and converted into
the simplest operable codes (microcode) that can control the hardware units of the
microprocessor. At the lowest level, the necessary tasks are carried out through the hardware
that receives the workable codes.

CISC processors are designed to complete one instruction before moving on to the next. In
practice, however, this is not the case, because the instructions are too complex to be
processed in a single cycle and they cause delays in the pipeline. As a result, most processors
divide instruction execution into several distinct stages. When a stage is completed, the result
is passed on to the next stage (Lee et al. 2002).

2.4.2 Reduced Instruction Set Computing (RISC)

RISC architecture was developed as an alternative to the CISC architecture, in response to the
market's desire to eliminate the weaknesses of existing processors. Companies such as IBM,
Apple, and Motorola worked diligently to develop RISC. Adherents of the RISC philosophy
believed that computer architecture needed to be completely overhauled and that almost all
traditional computers had architectural flaws and were therefore obsolete (Wolfe and Chanin
1992). They believed that computers were becoming increasingly complex and that they
needed to be set aside and restarted from scratch (Kane 1988).

Advances in semiconductor technology began to close the speed gap between main memory
and processor chips in the mid-1970s. As memory speed rose and high-level languages
supplanted assembly language, the CISC's primary benefits began to fade. Instead of only
speeding up hardware, computer designers began to experiment with different methods to
boost computer performance (Aletan 1992).

IBM is recognized as being the first to define the RISC architecture in the 1970s. In reality,
the universities of Berkeley and Stanford expanded on this study in order to uncover core
architectural models. Three essential concepts underpin RISC's philosophy. A requirement of
performance parity is that all instructions be performed in a single cycle. It can only be
realized if specific characteristics are present. Instruction code must have a fixed width equal to
or smaller than the external bus to minimize decoding delays, operands must not be
supported, and instructions must be vertical and simple. Only the "load" and "store"
commands should be used to access the memory. This principle is a logical extension of the
first. It takes many cycles to execute an instruction that directly manipulates memory for its
own purpose. The command is retrieved, and memory is examined. The RISC processor loads
data from memory into a register, reviews the register and then writes the contents of the
register to the main memory. This sequence necessitates at least three instructions. To maintain
performance with register-based processing, a large number of general-purpose
registers are required. All execution units must be run directly from the hardware, without the
use of microcode. Using microcode necessitates a large number of cycles to load arrays and
similar data. As a result, it is difficult to use in the execution of single-cycle execution units
(Waterman 2016).

2.5. Parallel Programming Methods

In general terms, parallel programming is the principle of using multiple sources and
processors to solve a problem. In this type of programming, the problem is divided into
smaller steps and instructions are given to the processors to solve them simultaneously. Thus,
compared to a work done using serial programming, a serious advantage is gained in the
completion time of the work by using the parallel programming technique (Karunadasa and
Ranasinghe 2009). Today, new types of computer systems have hardware that allows this type
of programming. When parallel structures are used, it offers advantages over simultaneous
programming in terms of accelerating all processes, achieving fast results and saving time. On
the other hand, the use of parallel computing methods has high energy requirements as it uses
a large number of processor cores, and besides, such programming methods are more difficult
to learn than simultaneous programming. As it is known, there are five main methods in

parallel programming. One of them is the modern compilers, called compile aid, which are
used to automatically parallelize the program written by the user. Examples are Intel AVX
and Intel Parallel Studio XE. Another is the calling of parallelized libraries. Multi-core
architectures can be exploited in this process, even NVIDIA GPUs, using libraries built on
CUDA is a big step towards parallelization. Another one, even one of the important ones, is
OpenMP (Graham et al. 2005; Gabriel et al. 2004). In basic terms, OpenMP is a multi-
platform API as well as an application development interface that supports multi-platform
shared memory multiprocessing in C, C++ and Fortran programming languages. OpenACC
(open accelerator) is known as a programming standard for parallel computing. It makes
parallel programming of standard heterogeneous CPU/GPU systems simpler. That is, it can
initiate computational code on both CPU and GPU architectures. Our fourth parallel
programming technique is known as low-level hardware targeting, and even includes
hardware description languages used in FPGA programming such as VHDL and Verilog. The
focus here is on CUDA programming, mostly CPU and GPU memory allocation, data
transfer, and computing "kernels" mapped between thread blocks in the GPU (Farber 2011;
Cheng et al. 2014). GPUs essentially implement single-instruction, multiple-data (SIMD)
paradigms. CUDA is low-level programming, and the difficulty lies in knowing the basic
structure of GPUs and how a computation maps onto the hardware. The last but not least
important parallel programming method is MPI (message passing interface). MPI is the
"standard" for distributed memory parallelism, that is, the parallel use of networked node
clusters. Using efficient threading and intra-node communication methods, the MPI is a great
fit for the symmetric multiprocessing (SMP) node. Performance-wise, it's just as good as
direct threading methods like OpenMP. Today, it has become a standard for hybrid
programming in which OpenMP and MPI are used together. MPIs are also used with
hardware accelerators like GPUs. So multi-core architectures are suitable for MPI
acceleration. An example of distributed shared memory is given in Figure 5. The relationship
between Shared and Distributed memory can be understood here.

Figure 5. Distributed Shared Memory Architecture
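As an illustration of the OpenMP approach, the sketch below shows how independent 16-byte blocks of a buffer could be encrypted in parallel. It assumes a per-block routine (here a stub standing in for a real AES-128 block encryption) and a mode of operation, such as ECB or CTR, in which blocks can be processed independently; it is not the authors' implementation.

```cpp
#include <omp.h>
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

using Block = std::array<uint8_t, 16>;

// Stub standing in for a real AES-128 block encryption (see the earlier round-structure sketch).
Block aes128EncryptBlock(Block in, const Block& key) {
    for (int i = 0; i < 16; ++i) in[i] ^= key[i];   // placeholder work only
    return in;
}

void encryptBuffer(std::vector<Block>& blocks, const Block& key) {
    // Each 16-byte block is independent, so the loop iterations can be split across threads.
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < static_cast<long>(blocks.size()); ++i) {
        blocks[i] = aes128EncryptBlock(blocks[i], key);
    }
}

int main() {
    std::vector<Block> data(1 << 20);   // about 16 MB of plaintext blocks
    Block key{};                        // illustrative all-zero key

    double t0 = omp_get_wtime();
    encryptBuffer(data, key);
    double t1 = omp_get_wtime();

    std::printf("Encrypted %zu blocks in %.3f s using up to %d threads\n",
                data.size(), t1 - t0, omp_get_max_threads());
    return 0;
}
```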

3. Results

In this study, encryption and decryption performance analyses of the AES-128 algorithm on
texts of different sizes were performed. For the comparison of the algorithm on the serial,
OpenMP and CUDA platforms, an HP workstation with an Intel® Core™ i7-7700HQ @ 2.80
GHz processor (8 CPUs, CISC architecture) and an NVIDIA Quadro M1200 graphics card
was used.

The results of computing the AES-128 algorithm for both encryption and decryption in serial
and parallel mode on plain texts are presented in Figure 6 and 7.

Figure 6. Comparison of Serial vs. Parallel Computing for Encryption

Figure 7. Comparison of Serial vs. Parallel Computing for Decryption

As can be seen in Figures 6 and 7, a significant performance improvement has been observed
in the implementation of the AES algorithm using parallel computing methods. In particular,
CUDA showed a much better result than the OpenMP method. AES is a block cipher
algorithm based on a substitution-permutation design; therefore, encryption and decryption
times are expected to be similar. This similarity is observed in Figures 6 and 7 for all
computing methods.

4. Discussion and Conclusions

In this work, encryption and decryption implementations of the AES-128 algorithm were
created for texts of various sizes, and OpenMP and CUDA applications were developed to
boost performance. The algorithm's parallel implementation yields considerable performance
benefits compared to execution in serial mode. The application of comparable methodologies
to other cryptology algorithms will make major contributions to future research in the fields
of the Internet of Things and information security.

References

Aletan, S. O. (1992, April). An overview of RISC architecture. In Proceedings of the 1992
ACM/SIGAPP Symposium on Applied Computing: Technological challenges of the
1990's (pp. 11-20).

Bhandarkar, D., & Clark, D. W. (1991, April). Performance from architecture: comparing a
RISC and a CISC with similar hardware organization. In Proceedings of the fourth
international conference on Architectural support for programming languages and
operating systems (pp. 310-319).

Bhandarkar, D. (1997). RISC versus CISC: a tale of two chips. ACM SIGARCH Computer
Architecture News, 25(1), 1-12.

Chandra, S., Bhattacharyya, S., Paira, S., & Alam, S. S. (2014, November). A study and
analysis on symmetric cryptography. In 2014 International Conference on Science
Engineering and Management Research (ICSEMR) (pp. 1-8). IEEE.

Cheng, J., Grossman, M., & McKercher, T. (2014). Professional CUDA c programming. John
Wiley & Sons.

Daemen, J., & Rijmen, V. (2001). Rijndael: The Advanced Encryption Standard. Dr. Dobb's
Journal: Software Tools for the Professional Programmer, 26(3), 137-139.

Diffie, W., & Hellman, M. (1976). New directions in cryptography. IEEE transactions on
Information Theory, 22(6), 644-654.

Ebrahim, M., Khan, S., & Khalid, U. B. (2014). Symmetric algorithm survey: a comparative
analysis. arXiv preprint arXiv:1405.0398.

Eisenbarth, T., Kumar, S., Paar, C., Poschmann, A., & Uhsadel, L. (2007). A survey of
lightweight-cryptography implementations. IEEE Design & Test of Computers, 24(6),
522-533.

Farber, R. (2011). CUDA application design and development. Elsevier.

Gabriel, E., Fagg, G. E., Bosilca, G., Angskun, T., Dongarra, J. J., Squyres, J. M., ... &
Woodall, T. S. (2004, September). Open MPI: Goals, concept, and design of a next
generation MPI implementation. In European Parallel Virtual Machine/Message
Passing Interface Users’ Group Meeting (pp. 97-104). Springer, Berlin, Heidelberg.

Garg, N., & Yadav, P. (2014). Comparison of asymmetric algorithms in cryptography.
Journal of Computer Science and Mobile Computing (IJCSMC), 3(4), 1190-1196.

Goldreich, O. (2009). Foundations of cryptography: volume 2, basic applications. Cambridge
University Press.

Goodacre, J., & Sloss, A. N. (2005). Parallelism and the ARM instruction set architecture.
Computer, 38(7), 42-50.

Graham, R. L., Woodall, T. S., & Squyres, J. M. (2005, September). Open MPI: A flexible
high performance MPI. In International Conference on Parallel Processing and Applied
Mathematics (pp. 228-239). Springer, Berlin, Heidelberg.

Jamil, T. (1995). RISC versus CISC. IEEE Potentials, 14(3), 13-16.

Kahate, A. (2013). Cryptography and network security. Tata McGraw-Hill Education.

Kane, G. (1988). MIPS RISC Architecture. Prentice-Hall, Inc.

Karunadasa, N. P., & Ranasinghe, D. N. (2009, December). Accelerating high performance
applications with CUDA and MPI. In 2009 International Conference on Industrial and
Information Systems (ICIIS) (pp. 331-336). IEEE.

Lee, J. H., Lee, W. C., & Cho, K. R. (2002, August). A novel asynchronous pipeline
architecture for CISC type embedded controller, A8051. In The 2002 45th Midwest
Symposium on Circuits and Systems, 2002. MWSCAS-2002. (Vol. 2, pp. II-II). IEEE.

Maqsood, F., Ahmed, M., Ali, M. M., & Shah, M. A. (2017). Cryptography: A comparative
analysis for modern techniques. International Journal of Advanced Computer Science
and Applications, 8(6), 442-448.

Nechvatal, J., Barker, E., Bassham, L., Burr, W., Dworkin, M., Foti, J., & Roback, E. (2001).
Report on the development of the Advanced Encryption Standard (AES). Journal of
Research of the National Institute of Standards and Technology, 106(3), 511.

Schneier, B., & Whiting, D. (2000, April). A Performance Comparison of the Five AES
Finalists. In AES Candidate Conference (pp. 123-135).

Selent, D. (2010). Advanced encryption standard. Rivier Academic Journal, 6(2), 1-14.

Waterman, A. S. (2016). Design of the RISC-V instruction set architecture. University of
California, Berkeley.

Wolfe, A., & Chanin, A. (1992). Executing compressed programs on an embedded RISC
architecture. ACM Sigmicro Newsletter, 23(1-2), 81-91.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Changes in agrochemical indicators of soils under the rotational


technique of pasture use in the conditions of the Kyrgyz Republic

Totubaeva N.E.1*, Shalpykov K.T.

Abstract: Intensive use of pastures, without the use of rotational technique in the Kyrgyz
Republic, over the past 30 years has led to the deterioration of their ecological condition, as a
result more than 60% of the country's pastures are degraded. One of the indicators of pastures’
state is the agrochemical indicators of pasture soils. Therefore, the purpose of our research
was to study changes in agrochemical indicators of soils, with different types of pasture use.
We created demonstration plots of 1 hectare each, with an interval of 1 year of withdrawal
from common use. Our studies showed that the pasture plot fenced 1 year ago contained more
particles smaller than 0.001 mm (18.04%), while the plot that was not fenced and was
subjected to overgrazing had the lowest content of such particles (15.64%). In contrast, the
content of large 1.0-0.25 mm particles in the unfenced plot was elevated, at 9.45%, whereas in
the fenced plots it was 2.49 and 2.71%. The Ca2+ content in the unfenced plot was lower than
in the fenced plots, at 0.008 % / 0.40 mg-eq. Since an increase of Ca2+ in soil is an important
indicator of improved soil structure and general condition, we can assume that introducing
rotational grazing is important for improving soil structure and pasture productivity, for
establishing good practice of sustainable grazing, and for improving the life and health of a
local population vulnerable to climate change.
Keywords: sustainable pasture use, biodiversity, anthropogenic pressure, soil degradation,
ecosystem, agrochemical indicators


1. Introduction

Many works are devoted to methods of sustainable pasture management (Ehsan Elahi,2021;
W. Crewett, 2012; Silva de Oliveria, 2017; Amorim H.C., 2020; Ragimov, A., 2020;
Andreeva O.V. et al, 2021). The relevance is caused by the intensification of pasture
degradation with the continuous growth of the world population and the possible problem
with food security, especially in arid areas (Aerts R. , 2000; FAO, 2020; Xie et al., 2020;
Elser J. J., 2010; Picasso, V. D.,2014). Many projects have been implemented to identify
vulnerable communities and target areas to implement locally appropriate sustainable
rangeland management options (FAO, 2020). However, most developments in sustainable
pasture management focus on socio-economic management issues and decision-making
theory rather than on natural processes and ecosystem responses to negative anthropogenic
impacts (Andreeva O. V., 2021; Donald M., 2021; WOCAT Database, 2020). The most
vulnerable link in unsustainable pasture management is the soil: it is an indicator of the ecological

1
Kyrgyz-Turkish Manas University, Bishkek, KYRGYZSTAN.
* Corresponding author: [email protected]
state of the environment and the initial link in the productive chains of the ecosystem (Kovyazin
V.F., 2008), and ensuring its health is one of the most important tasks of our time. Soil health is
the ability of soil to function as a single living system in nature and within the boundaries of
land use, maintain plant and animal productivity, maintain or improve water and air quality,
and contribute to plant and animal health (Reynolds, W. D., 2007; Duval, M. E., 2013).
Anthropogenic decline in pasture soil health is a pressing environmental problem in the
Kyrgyz Republic (Wibke Crewett, 2012; Dörre A., 2012). Pastures in the country occupy
almost half of the country's land area, or about 80% of agricultural land (FAO, 2018). For
more than half of the local population of the country, livestock farming is the main source of
income (SIP "Kyrgyzgiprozem" under the Ministry of Agriculture and Food, 2018). Recently,
the intensification of degradation of pastures has been observed, which in a moderately arid
climate can lead to irreversible processes of aridization of vast areas not only of the Kyrgyz
Republic, but also of all Central Asian countries, affecting the strategic resources of economic
development, food security and environmental health (Nóbrega RLB, 2017). According to
FAO (2000), pasture productivity in the Kyrgyz Republic has steadily declined since the
1960s, and by 1993 it was reported to be about 300 kg/ha dry substance due to increased
grazing pressure and poor grazing management (FAO, 2018). One of the informative
indicators of soil condition is its agrochemical characteristic, which can characterize the
qualitative condition of soil (Alexander K.G., 1991; Pierret A., 1999; Dorana J, 2000; Hunke,
P.,2015; Hunke P, 2015, Lavelle, Patrick, 2000; Mandal, A., 2020) and depends largely on its
granulometric composition and degree of degradation (Donald R.G., 1987; Dolgopolova N.,
2018). This indicator is often used for the ecological assessment of urban areas (Kovyazin
V.F., 2008). However, its application to assess the condition of pastures is of particular
interest. In this regard, the purpose of our research was to study changes in the agrochemical
indicators of the country's pasture soil withdrawn from general use for different terms.
2. Materials and Methods
The studies were conducted in the spring and fall pastures of the Chu valley of the Kyrgyz
Republic, located at an altitude of more than 1600 m a.s.l. Chui region is located in the
northern part of the Kyrgyz Republic, and occupies Chui, Chon-Kemin, high-mountain
Suusamyr valley, as well as slopes of Kyrgyz, Zaili, Kungey Ala-Too, Suusamyr-Too and
Djumgal ranges.
Soil sampling was done in the Shamshy area of the Issyk-Ata District (Map-scheme 1).
Map-scheme 1. Points of experimental plots (Shamshy ur., Chui oblast, Kyrgyz Republic).

In order to study the agrochemical parameters of pasture soils, we selected soil samples
shown in Table 1:
Table 1. Soil sampling scheme

#    Pilot site plans                                                          Selection coordinates                                  Height above sea level, m
1    Demonstration plot withdrawn from pasture use in 2020, total area 1 ha    42°35'29.1"N, 75°24'08.53"E (42.591411, 75.402371)     1654
1a   Demonstration plot withdrawn from pasture use in 2019, total area 0.1 ha  42°35'29.1"N, 75°24'08.53"E (42.591411, 75.402371)     1654
2    Demonstration plot withdrawn from pasture use in 2020, total area 1 ha    42°35'17.64"N, 75°27'06.49"E (42.588233, 75.451805)    1948
3    Control, intensively used pasture; a background soil sample was taken at a distance of 10 m from the fenced plots    42°35'17.64"N, 75°27'06.49"E (42.588233, 75.451805)    1948

Figure 1 shows the demonstration plots of the study areas (Figure 1)
a) Demonstration site #1
b) Demonstration site #2

Soil samples were taken according to GOST 17.43.01-83, at a depth of 0-20 cm, from a 1x1 m
sample plot of 5 points that were combined into one total sample of 400-500 g.
Soil pH was measured with a universal ionometer EB-74.
Soil humus was determined by Tyurin's method modified by CINAO. Soil humidity was
determined by gravimetric method by drying samples in a drying oven at a temperature of
105 °C. Total nitrogen was determined by the Meshcheryakov method, gross forms of
phosphorus and potassium by the Machigin method, and the mechanical (particle-size)
composition by the sieve method, using sieves from 0.001 to 10.0 mm. The aqueous extract
was determined according to GOST 26423-85.
All data analyses were performed using Statistica 13.0 (USA). All figures were created in
Excel from the Microsoft Office package.

3. Results and Discussion


According to the International Fund for Agricultural Development, Chui oblast in the Kyrgyz
Republic is classified as a region vulnerable to climate change (Map-scheme 2).
Map-scheme 2: Levels of vulnerability to climate change in the Kyrgyz Republic (Source:
IFAD, Livestock and Market Development Program II (LMDP II). Project Completion
Report. WG 6. Impact of Climate Change on Pastures and Livestock Systems - Summary
Report, 2017)

The situation is aggravated by unsustainable pasture use. The study of agrochemical
properties of pastures under different pasture use showed that rotational technique
significantly affects their composition.
The mechanical composition of soils is characterized by aggregates of different shapes and
sizes, among which sandy particles (1-0.05 mm), dust (0.05-0.001 mm), silt (0.001-0.0001
mm), colloids ( <0.0001 mm) stand out (Esenzhanova G.K., 2019). The soils we studied had
the mechanical composition given in Table 2.
As shown in Table 2, the demonstration plots of pastures that were fenced 1 year ago (1a)
contained more particles of size <0.001mm (18.04%), and the plot that was not fenced in and
used in full contained minimal particles of size <0.001mm (15.64%) (Fig.2). In contrast, the
content of coarse particles of 1.0-0.25mm size, in the unfenced plot was higher and was
9.45%, while in fenced plot 1 and 1a the content of particles of 1.0-0.25mm size was 2.49 and
2.71%, which was 6.74 times lower than in the unfenced plot (Fig.3).

Table 2. Mechanical and microaggregate composition of soils under different types of pasture use, in %

Sampling   Fractional composition, % (particle size, mm)                             Sum of particles
number     1.0-0.25   0.25-0.05   0.05-0.01   0.01-0.005   0.005-0.001   <0.001      <0.01
1          2.71       22.81       35.36       9.80         12.64         16.68       39.12
1a         2.49       24.35       35.16       8.64         11.32         18.04       38.00
2          8.24       27.36       30.40       5.52         11.24         17.04       33.80
3          9.45       24.27       34.60       5.52         10.52         15.64       31.68

Fig.2 Change in the sum of particles <0.01 at different types of pasture use


Fig.3. Changes in the texture of soils under different types of pasture use

Thus, in order to preserve and improve the productivity of mountain pastures, the use of
rotational grazing is a necessary and important measure, one that also supports the resilience
of mountain ecosystems to various types of risks. Further, according to the research program,
we studied the chemical composition of the studied soils.

The indicators of soil chemical composition were ambiguous and do not reveal any clear
dynamics in chemical composition with changes in the type of pasture use (Table 3);
however, some changes can still be traced (Fig. 4). The chemical composition of the soils
showed that the plot fenced for more than 1 year had the highest soil pH, 7.80, while the pH of
the rest of the plots was 7.45 (Fig. 5). The CO2 content in the unfenced plot was 0.4%, while
the other plots had a value of 0.44%. However, the humus content in the unfenced plot was
higher than in the other variants of the studied soils, at 8.85%, while in plot 1a (fenced more
than 1 year ago) the humus content was 6.55%, the lowest of all the soils studied.

Table 3. Chemical composition of soils under different types of pasture use

#     pH     CO2, %   Humus, %   Total nitrogen, %   Mobile form of phosphorus (P2O5), mg/kg   Exchangeable potassium, mg/kg   Exchange capacity, mg-eq/100 g soil   Absorbed Na, mg-eq
#1    7.45   0.44     7.17       0.245               18.2                                      600.0                           29.0                                  0.25
#1a   7.80   0.44     6.55       0.215               21.0                                      240.0                           28.0                                  0.20
#2    7.45   0.44     7.64       0.290               22.5                                      275.0                           27.2                                  0.25
#3    7.45   0.40     8.85       0.323               24.0                                      240.0                           28.6                                  0.25


Fig.4 Changes in the chemical composition of the studied soils

However, as shown in Table 4, the Ca2+ content in the unfenced plot was lower than in the
fenced plots and was 0.008/0.40 %/mg.eq., while its content in the plot fenced more than 1
year ago was highest at 0.012/0.24 %/mg.eq. (Fig.6). As Ca2+ increase is an important
indicator of improved soil structure and general condition, it can be assumed that the
establishment of rotational grazing is important to improve pasture structure, which entails the
conservation of unique mountain plants and the entire biodiversity of the studied ecosystems.

Fig.5 Changes in the pH of the studied soils

Table 4 Indicators of aqueous extract of soils

# Dense alkalinity
sludge, % Total by HCO3 Cl- Na
%
mg-eq
1 0,057 0,026 0,001 0,012 0,006 0,001 0,007
0,43 0,03 0,24 0,80 0,08 0,32
1а 0,077 0,038 0,001 0,016 0,012 0,001 0,007
0,62 0,03 0,32 0,60 0,08 0,29
2 0,081 0,046 0,001 0,012 0,010 0,001 0,010
0,75 0,03 0,24 0,50 0,08 0,44
3 0,067 0,022 0,001 0,024 0,008 0,001 0,009
0,36 0,03 0,48 0,40 0,08 0,39


Fig.6 Ca2+ content in the studied soils

Thus, the changes in the mechanical and microaggregate composition of soils under different
types of pasture use have shown that rotational pasture use and the creation of micro plots
withdrawn from common use for one year are an effective way to improve the
microaggregate condition of soils. This, in turn, will lead to the restoration and improvement
of the ecological functions of pastures and create the conditions needed to improve their
productivity.
Acknowledgements
All activities within the project "Protection of wild tulips and support of rangeland
communities in the mountains of Kyrgyzstan" were implemented jointly with partner
organisations: Fauna & Flora International FC in the Kyrgyz Republic, Bioresurs PF and
Association of Forest and Land Users of Kyrgyzstan, with financial support from the UK
Government's Darwin Initiative.

References
1. Andreeva O.V., Lobkovsky V.A., Kust G.S., Sonne I.S. Current status of the concept and
development of a typology of sustainable land use models/Aridic ecosystems, 2021, Vol. 27, No. 1
(86), pp. 3-14
2. Dolgopolova N. V., Pigorev I. Y., Grudinkina V. V. Methodology of crop rotation design,
agrochemical characteristics of soils and optimal structure of sown areas in adaptive-landscape
farming (by example of Central Chernozem region) // Vestnik of Kursk State Agricultural
Academy, 2018, no. 6.
3. State Research Institute "Kyrgyzgiprozem" under the Ministry of Agriculture of the Russian
Federation. 2018. Land resources of the Kyrgyz Republic. [Electronic resource
http://data.movegreen.kg/indicator/6].
4. IFAD, Livestock and Market Development Programme II (LMDP II). Project Completion Report.
WG 6. Climate Change Impacts on Pasture and Livestock Systems - Synthesis Report, 2017.
5. Esenzhanova G.K., Totubaeva N.E., Tokpaeva J.K., Talaibekova G.T., Kozhobaev K.A. Changes
of some indicators of soils and grounds of Balykchy city polluted with oil products after
remediation / Problems of Regional Ecology, 2019
6. Kovyazin V.F. Dynamics of agrochemical properties of soils of St.Petersburg / Fertility, 2008, №
3, p.34-37
7. FAO. Carbon Sequestration through Climate Investment in Forests and Grasslands in the Kyrgyz
Republic (SUPKILPCR), 2018, p. 155.
Amorim, H. C., Ashworth, A. J., Moore Jr, P. A., Wienhold, B. J., Savin, M. C., Owens, P. R. & Xu, S.
Soil quality indices following long-term conservation pasture management practices. Agriculture,
Ecosystems & Environment, 2020, 301, 107060.
8. Alexander K.G. & Miller M.H. The effect of soil aggregate size on early growth and shoot‐root
ratio of maize (Zea mays L.). Plant and Soil, 1991, 138, 189–194.
9. Aerts R. and Chapin F. S. The mineral nutrition of wild plants revisited: a re‐evaluation of
processes and patterns. Adv. Ecol. Res., 2000, 30: 1–67.
10. Crewett Wibke "Improving the Sustainability of Pasture Use in Kyrgyzstan," Mountain Research
and Development, 32(3), 267-274, (1 August 2012)
11. Dörre Andrei and Borchardt Peter "Changing Systems, Changing Effects—Pasture Utilization in
the Post-Soviet Transition," Mountain Research and Development 32(3), 313-323, (1 August
2012). https://doi.org/10.1659/MRD-JOURNAL-D-11-00132.1
12. Donald R.G., Kay B.D. & Miller M.H. The effect of soil aggregate size on early shoot and root
growth of maize (Zea mays L.). Plant and Soil, 1987, 103, 251–259.
13. Donald M. Ecological Intensification: A Step Towards Biodiversity Conservation and Management
of Terrestrial Landscape. Ecological Intensification of Natural Resources for Sustainable
Agriculture, 2021, p. 77-102.
14. Duval, M. E., Galantini, J. A., Iglesias, J. O., Canelo, S., Martinez, J. M., & Wall, L. Analysis of
organic fractions as indicators of soil quality under natural and cultivated systems. Soil and Tillage
Research, 2013, 131, 11-19.
15. Dorana John W.Zeissb, Michael R.Soil health and sustainability: managing the biotic component of
soil quality/ Applied Soil Ecology, Volume 15, Issue 1, August 2000, р. 3-11
16. Ehsan Elahia, Hongxia Zhanga, Xing Lironga, Zainab Khalid, Haiyun Xu Understanding cognitive
and socio-psychological factors determining farmers’ intentions to use improved grassland:
Implications of land use policy for sustainable pasture production/ Land Use Policy, 2021, Vol.102,
105250
17. Elser J. J. et al. Biological stoichiometry of plant production: metabolism, scaling and ecological
response to global change. New Phytol., 2010, 186: 593–608.
18. FAO. [Electronic resource http://www.fao.org/land-water/land/sustainable-land-
management/slm-decisionmaking/ru (accessed 05.08.2020)].
19. Hunke, P., Roller, R., Zeilhofer, P., Schröder, B., & Mueller, E. N. Soil changes under different
land-uses in the Cerrado of Mato Grosso, Brazil. Geoderma Regional, 2015, 4, 31-43.
20. Hunke P, Mueller EN, Schröder B, Zeilhofer P. The Brazilian Cerrado: assessment of water and
soil degradation in catchments under intensive agricultural use. Ecohydrology. 2015;8: 1154–1180.

21. Lavelle, Patrick. Ecological challenges for soil science, Soil Science: January 2000, Volume 165,
Issue 1, p 73-86
22. Mandal, A., Sarkar, B., Mandal, S., Vithanage, M., Patra, A. K., & Manna, M. C. Impact of
agrochemicals on soil health. In Agrochemicals Detection, Treatment and Remediation, 2020, pp.
161-187.
23. Nóbrega RLB, Guzha AC, Torres GN, Kovacs K, Lamparter G, Amorim RSS. Effects of
conversion of native cerrado vegetation to pasture on soil hydro-physical properties,
evapotranspiration and streamflow on the Amazonian agricultural frontier. PLoS ONE, 2017,
12(6): e0179414. https://doi.org/10.1371/journal.pone.0179414
24. Pierret A., Moran C.J. & Pankhurst C.E. Differentiation of soil properties related to the spatial
association of wheat roots and soil macropores. Plant and Soil, 1999, 211, 51–58.DOI:
10.1023/a:1004490800536
25. Picasso, V. D., Modernel, P. D., Becoña, G., Salvo, L., Gutiérrez, L., & Astigarraga, L.
Sustainability of meat production beyond carbon footprint: a synthesis of case studies from grazing
systems in Uruguay. Meat science, 2014, 98(3), 346-354.
26. Ragimov, A., Mazirov, M., Nikolaev, V., Shitikova, A., & Malakhova, S. Impact Of Different
Type Of Cattle Grazing On The Processes Of Agrochemical Degradation and Digression Of Soil
Cover. In E3S Web of Conferences, 2020, Vol. 220
27. Reynolds, W. D., Drury, C. F., Yang, X. M., Fox, C. A., Tan, C. S., & Zhang, T. Q. Land
management effects on the near-surface physical quality of a clay loam soil. Soil and Tillage
Research, 2007, 96(1-2), 316-330.
28. Silva de Oliveira, R., Barioni, L. G., Hall, J. J., Moretti, A. C., Veloso, R. F., Alexander, P., ... &
Moran, D. Sustainable intensification of Brazilian livestock production through optimized pasture
restoration. Agricultural systems, 2017, 153, 201-211.
29. Xie H., Zhang Y., Wu Z., Lv T. A bibliometric analysis on land degradation: Current status,
development, and future direction // Land., 2020, No. 9. 37 p.
30. WOCAT Database. Global Database on Sustainable Land Management, 2020 [Electronic
resource https://www.wocat.net/en/global-slm-database].

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

The “sustainable” landscape: learning from the building tradition


of the Hyblean countryside to prepare for the future

Gianfranco Gianfriddo1*, Luigi Pellegrino1, Matteo Pennisi1

Abstract: Artifacts built in the past in a different economic and social context often contain
principles that are valid for building the present and foreshadowing the future. The necessary
condition for such human artifacts to be still relevant is their ability to go beyond the
historical reality that generated them. Observation of the few stones still standing thus
becomes the means by which to draw out those “timeless” ideas and construction principles
that are the foundation of architecture.

In Sicily, a particular geographical area contains a remarkable density of artefacts with these
characteristics: the Hyblean plateau. Here, over time, man has succeeded in building an
extraordinary landscape through a clear structuring of territory that leads from the city house,
through the streets, to the country house. The buildings in the Hyblean countryside represent a
lesson in sustainable living “ante litteram”, capable of guiding us today through the
increasingly necessary search for new approaches to building in terms of man's impact on the
environment. Through comparisons and reflections, the text will make it clear that these
“new” approaches have in fact already been placed at the foundation of the construction of the
Hyblean landscape and that therefore the sustainability we seek has already been achieved in
the past by these small but significant country houses.

The construction of the Hyblean countryside thus becomes a virtuous example of world-
building from which to draw valuable lessons to be updated with today's means, capable of
guiding us in imagining human life in the future, an increasingly urgent issue that can no
longer be postponed given the times we live in.

Keywords: Countryside, Sustainability, Tradition, Landscape, Architecture, Project

1. Introduction: the farmer's knowledge

«Pay attention to the forms with which the farmer builds. For they are heritage handed down
by the wisdom of the fathers. But try to discover the reasons that led to that shape [...] Don't
think about the roof, but about the rain and the snow. This is how the farmer thinks, and
consequently builds»1.

This idea underlies the research conducted on the country houses of the Hyblaean plateau. A
study centred on how the combination of several minimal living units, based on reasons of

1
Loos, 1921.

1
S.D.S. di Architettura di Siracusa, Università degli Studi di Catania, Catania, Italia.
* Corresponding author: [email protected]
necessity and sustainability, has shaped the construction of the landscape. It must be said that
the research has never seen in all this heritage of houses a mere “minor architecture”, that is,
testimonies of a knowledge less “elevated” than that of the best-known architectures, although
these buildings belong to the so-called “spontaneous architecture”2.
This work, on the contrary, aims to detect in these buildings the traces and reasons for
Architecture, understood in its highest sense.

The study has been articulated through the analysis of artefacts that have not been studied so
far from this point of view, both in the intrinsic relations between the parts and in those that
they establish with the territorial palimpsest, in order to understand how these houses have
managed to transform a territory into a landscape.

The tools of the research were drawing and history, for Siza the only two tools available to the
architect to continue learning3. In essence, in relation to the research, drawing was necessary
to translate historical knowledge into constructive experience. Drawing served to draw out
from these artefacts the “timeless” lessons they contain. The orthogonal representation
allowed the settlement reasons underlying an entire man-made landscape to appear. The fact
that it was not possible to start from any existing documentation in this sense demanded, on
the one hand, greater practical effort and, on the other, a decisive intellectual effort in the
quality of the research aimed at the “invention” of these drawings. These drawings are
reasoned at several territorial scales and not according to a principle of approaching the
studied artefact from the vast context, but following the idea that each scale communicates its
own specific relations and reasons different from the others. On the largest scale, the
relationship between the house and the neighbouring town and the territorial structure that
holds them together emerges; on the intermediate scale, the relationship with the soil and the
site of the “cugno” or the “cava”; on the more “architectural” scale, the spatial and
dimensional relationship between the spaces of the house and their articulation is revealed.
The truly extraordinary thing was to be faced with constructions, each of which wisely solves
all the questions of environmental sustainability that we are so convinced of today. These
houses seem to come out of the ground and base their balance on that of the surrounding
environment. Sustainability in this territorial context is not an aspect that was considered in
the construction but a real settlement principle capable of shaping the whole Hyblean
countryside. It is strange to think that centuries ago, with the few means available, they
understood how to settle in the landscape without the risk of “impact”, precisely because the
building seems to fit into the metabolism of nature and form an integral part of it. This
extraordinary context therefore represents a treasure trove of ideas for our future, from which
we can take as many lessons as possible in an attempt to make sustainability truly a principle
rather than a mere technical aspect.

2. The Hyblean minimum house

The Hyblean minimal house is a building, mostly single- or two-storey, made of limestone on
sack masonry with cantonments of rough-hewn or sometimes squared stone and a wooden
roof. An elementary structure whose essence is a state of necessity and limited economic
possibilities. The house itself, in the sense of a covered volume, only covers a limited part of
the plot, in most cases ranging from two to five “tumuli” (about 2,200 to 5,500 square

2
(Rudofsky, 1979)
3
“Learning – the acquisition of the capacity to learn continuously – continues to be concentrated, in my
understanding, in drawing – in learning to see, to understand, to express – and in history – in the sense of
gaining awareness of the present in the making.” (Siza, 2008)

metres). Therefore, the uncovered areas used for the various tree crops were very important,
the choice of which depended strictly on the characteristics of the soil and the most
advantageous species to be cultivated. A valley with a strong presence of springs, for
example, would have been more suitable for the cultivation of vegetables; an arid plateau for
legumes; the slopes were perfect for grazing. In this extraordinary place, every human action
is almost, we might say, the “artificial continuation” of a natural accident.

The social class that initiated this structuring of the landscape through the small rural house
was that of sharecroppers and farm labourers. The economic conditions for this fragile section
of the population were anything but flourishing due to the poorly paid work on the large
estates, so much so that most of these people barely managed to procure enough to survive. It
was therefore indispensable for them to extend their city house into the countryside, in order
to acquire the land to be able to set up the small building to serve the crops necessary to
guarantee the family a dignified survival. This is in fact the socio-economic function behind
these buildings: to respond to a need arising from an imbalance in the distribution of wealth in
society. Having said this, it is easy to see that the purpose of the small rural house is still that
of a productive building, i.e. dedicated to the agricultural activity of the plot on which it
stands. In this sense, this type of building constructs the Hyblean countryside in the true sense
of the word, if by countryside we mean that part of the territory devoted to agricultural
production. As we have seen, the house is only a small part of the area in which it stands.
Other elements in fact, although without a real roof, contribute to the construction of the lot.

2.1. The elements of the house

Access to the lot is via the roads that connect the city to the countryside, the backbone of
settlement throughout the area. A dense network of secondary roads branches off from these
routes, giving shape with exceptional clarity to a “spontaneous” layout, in the sense that it is
not the result of the application of a pre-established plan, but only of the condition of
necessity due to the accidents of the soil and the shape of the plateau, an approach that has
guided the choices of the people who have settled here. There are two ways of accessing the
plot: if the building is far from the main road, access is by means of a dirt track that cuts through the
property; if, on the other hand, one of the sides of the building borders on the main or
secondary road, then access is direct without a further route.
If the roads are the backbone, the dry-stone walls are the rest of the framework. In an overall
view, dry-stone walls constitute a complex network that structures the territory geometrically,
dividing it into smaller and smaller areas according to work requirements. Dry-stone walls are
not only the most obvious anthropic sign in the landscape, but also those which, at a closer
look, clearly reveal man's interaction with the land and his ability to make it a place to live in.
The specific feature that a piece of the world must have in order to be “domestic”, and
therefore habitable, is measure, and dry-stone walls tame the territory, on the one hand regulating
the relationships between individuals in the subdivision of property, and on the other
guaranteeing human feeling a sense of protection from the outside world, necessary to
perceive living “inside” something.
Among the elements of the house there is the “Mediterranean garden”, a space built up from a
few elements which nevertheless manage to define an identifiable spatial idea, thanks to the
clarity with which these objects are related. «[…] the Mediterranean Garden made up of small
plots of land, with its small walls over which runs the tangle of suburban lanes, nestled
between the whitewashed boundary walls surmounted by the glossy green foliage of the trees

[…]»4. What is worth highlighting in Sereni's words is his ability to render, in an artefact, the idea of
a place where the interaction between man and nature is almost total. The truth is that
the Mediterranean garden, while not being a formally intended volume, is in any case a
construction in its own right, part in turn of the general construction of the plot and the entire
Hyblaean agrarian landscape. This ability to pass from the garden to the whole landscape
shows that what makes a construction “human” is its response to “internal” and at the same
time “external” rules, fitting correctly into another, larger construction, without ever losing its
recognizability and autonomy with respect to the whole it is composed of.

The “baglio” is the front part of the house and functionally its access. This space is
characterized by dry stone walls which, according to the layout of the perimeter walls, run
parallel to the elevations of the house. The “baglio” extends the inside of the house to the
outside, delimiting an area and providing valuable shade on the hottest days, thanks to the
work of a few necessary trees placed along the dry-stone walls.

The position of each of the elements listed in the plot, and therefore also of the house itself, is
a consequence of the geology of the soils within the perimeter. The territory consists of a
large limestone plateau which, in the places where the rock does not outcrop, guarantees the
possibility of cultivation. In the innermost areas, in fact, there is a sufficient layer of ground
above the stony layer to allow for some cultivation. In other places, however, the bare rock
emerges, thus making cultivation impossible due to the lack of soil. In most cases both
conditions are easy to find within a plot. In these cases, the farmer wisely sets up his house on
the rock, relegating all the precious space with the soil to cultivation. The farmer's “choice”
may seem to us, and rightly so, more a necessity than an arbitrary decision, so much so that in
a certain sense we doubt whether it can really be called a “choice”. In truth, the farmer chose
the position of his house, and the fact that today, once the work has been completed, it seems
to have come out of the earth, only testifies that the reasons that guided the farmer were those
appropriate to the place and the right ones to shape it according to his needs and living
conditions.

In analysing the elements, it becomes clear that the logic behind the construction of the small
rural house is far removed from that which would see the house as “full” and the rest of the
plot as “empty”. Whoever built this landscape started from totally different assumptions. In
these buildings there is an inseparable knot between house and lot, so much so that we could
say that they are the same thing, since the idea behind them is the same. The house, the roofed
unit, is as much a part of the living organism as the rest of the lot; it is only the place of
functions that necessarily, for various reasons, require the presence of a roof. The house
belongs to the same world as the garden, the wall and all the other elements that make up this
unicum.

The small rural house has nothing more than what is strictly necessary to make a house
“habitable”, not only with respect to a rational organization of the rooms but precisely with
respect to an idea of “measurability”, a space in which a man can feel good as a human being
living in the world. The rural house is in this sense “domestic”, and the use of the most carefully
worked stone in the cantonments is enough to clarify the matter and add another
fundamental aspect. The fact that in the cantonments it was often decided to sketch out a
geometry, making a considerable effort, is not a marginal thing. Without the cut stone in the
corner of a building it would be as if the corner did not know it was a corner, but at the same

4
(Sereni, 1987)

time it is true that staggered stone blocks in the corner make the structure stronger and more
solid. This correspondence between the static wellbeing of the building and the perception of
solidity by those who live in it is the basis of the construction of the Hyblaean countryside.

3. Sustainability as a principle of the minimum house

In the minimal house it is impossible to clearly identify the function of each element that
builds it; take, for example, a tree whose role might be: to reduce the radiation on the
perimeter walls, to create a comfortable shady space in which to stand, to produce fruit
necessary for subsistence, to improve the perceptive wellbeing of the inhabitants. All these
reasons coexist together and this is one of the greatest lessons of the Hyblean countryside: not
to compartmentalize the world. This applies as much to the tree as to any other element that
makes up the house.

A principle of sustainability pervades the meaning of these constructions. It is precisely an
approach that unites both the individual technical-constructive choices and the general
settlement idea underlying the relationship between man and the environment. Reducing
radiation on the building volume goes hand in hand with the creation of a space of shade for
the wellbeing of the inhabitants of the house; it would seem obvious that they go together, but
in reality, it is not so evident that an apparently exclusive aim of the technological sphere
(protection from the sun's rays) is mingled with one relating to the wellbeing of the
inhabitants (a shaded outdoor space). This approach to the overall dimension of the Hyblaean
plateau results in a human artefact that we could define as a “sustainable landscape” shaped
by the perfect union of several choices made in individual lots, all of which are necessary,
each of which is a wise response to the needs of man and the environment, which are never
seen as two opposing realities but which conform to each other.

The systematic analysis of the object of the research through the tool of drawing, carried out
with scientific precision, has made it possible to “make these artefacts speak”, drawing from
them all the questions and lessons that they still conceal beneath the surface. Drawing the
small houses has made it possible to clarify and define the characters that make this
structure a landscape construction. By bringing together different scales of representation,
from the broadest to the most detailed, the small houses showed all their capacity to construct
both the specific area of relevance and the entire territorial palimpsest in general. These
drawings are the most evident tools for understanding the relationships between the parts
within the artefact and at the same time between the single artefact and the palimpsest: they
are drawings for “doing”. The idea was to produce drawings that could clearly show how
these houses manage to build a landscape through a principle of necessity and sustainability.
As a matter of fact, no regulation or prescription can ever ensure that new buildings are really
sustainable; at most it can ensure compliance with certain parameters, detached from any constructive
idea. On the contrary, we believe that drawings created with the aim of being useful to the
work by explaining a possible way of sustainable building that has shaped a landscape in the
past are the most effective tools for tracing the paths to be taken to reconnect, if possible, with
that tradition and try to start doing it again today.

4. Discussion and Conclusions: from the minimum house to the future

Today, although the concept of sustainability is enormously widespread and on everyone's
lips, surprisingly it seems to have lost the “active” role it had in the past when it was not
actually the subject of discussion. As if in a real paradox, just when sustainability was not an

issue to be discussed it took on the far more incisive role of a building principle. While
sustainability was not explicitly talked about, it was seen as a principle underlying the
construction of the world. It is clear that there is a kind of inverse proportion: on the one hand
sustainability is the subject of heated debate, but on the other hand it has lost its importance,
being relegated to the fulfilment of specific requirements. Precisely today, at a time when
ethically everyone seems to be sensitive to the subject, sustainability has gone from being a
construction principle capable of holding together the small house and the whole landscape
(which it was) to being a mere answer to specific technical questions.

There is an increasing tendency to understand sustainability as a matter of law or
environment. In this way, planners see only restrictions on their work and consequently their
approach is limited to meeting requirements without sustainability being understood as a
construction principle.

Making a building sustainable is commonly perceived as an obligation (mostly only
bureaucratic-formal) and not as a fundamental necessity of living. If we look at the past, at
country houses, it is clear that the well-being of the farmer who lived on his land was
inextricably linked to the well-being of the land with which he had a relationship in which
both parties interacted and benefited. Country houses are still able to clearly manifest the idea
with which the farmer stood in relation to the surrounding environment, a way of standing in
no way in opposition but collaborative, so that every element of the house is affected by this
basic point of view. On the contrary, today we understand the surrounding environment as
something that in any case will be attacked by our building action and therefore to be
protected through the mitigation of our intervention. It would be far better for both man and
the environment to learn from the farmer and try to establish a firm foothold in the land, to
establish a relationship with it in which man's actions do not damage the environment but
rather improve it, for example by maintaining it. Man's work in the countryside is in fact
fundamental from this point of view too, in keeping a rural area in excellent condition, which
in turn guaranteed the necessary fruits for the people who lived there. It is a symbiotic
relationship in which human action takes the form of a corrugation of the soil, a modification
that gives shape to the accidents of the orography.

The disconnect between our well-being and that of the environment is creating an ever-
increasing hiatus that produces effects contrary to what we all hope for. Seeing the
environment as something to be protected from our action rather than as an active element in
shaping construction only aggravates our interaction with it.

It is clear that we cannot see the minimum house as a formal reference for sustainable
building today. To give an example, we should not proceed thinking of re-proposing today all
the elements of the minimal house as they are. It would not make much sense today to bring
the cistern of the country house back into daily use as it was. Instead, we must seize, quoting
Grassi, that “technical-practical power that the place holds”5 and not so much the visible
results of the application of that “power”. The configuration and final appearance of the
cistern in rural houses is in fact the result of numerous factors linked to the time when the
house was built, the quality of the workforce and above all the resources and knowledge
available at that time. Today we must use our technical knowledge but guided by the method
and the idea that the “ancients” show us through what remains of their artefacts. Because
while it is true that we can add newer techniques, knowledge and resources to those

5
(Grassi, 1996)

techniques, it is also true that the theoretical scope and idea of that settlement are
insurmountable, so much so that the only thing left for us, and that is no small thing, is to try
to understand it in order to restore sustainability to its rightful place in today's world, not as an
obligation to be formally respected but as an idea of minimal living.

Figure 1. Relationship between the city and the minimal house

Figure 2. Minimum house near Buscemi

Figure 3. Settlement relations with the palimpsest and between the parts

Figure 4. Synthesis of minimal living

Figure 5. Abacus of the construction elements of the minimum house

References

Grassi, G., (1996). I progetti, le opere e gli scritti. Electa, Milano.

Loos, A., (1972, ed. or. 1921). Parole nel vuoto. Adelphi Editore, Milano, 271-272

Rudofsky, B., (1979). Le meraviglie dell’architettura spontanea: note per una storia naturale
dell’architettura con speciale riferimento a quelle specie che vengono tradizionalmente
neglette o del tutto ignorate. Editori Laterza, Bari.

Sereni, E., (1987). Storia del paesaggio agrario italiano. Editori Laterza, Roma-Bari, 268-268

Siza, A., (2008). Sulla pedagogia. Casabella, 770, 3-3

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Evaluation of Parameters Affecting Frequency Response Analysis
Measurements in Power Transformers /
Güç Transformatörlerinde Frekans Tepki Analizi Ölçümlerini
Etkileyen Parametrelerin Değerlendirilmesi

Selim Köroğlu1*, Akif Demirçalı2*, Mustafa Yıldız3*

Abstract: Power transformers are among the most expensive and most important pieces of equipment in
high-voltage energy transmission systems. It is very important that these devices operate smoothly
in order to maintain safe power transmission. For this reason, power transformers are closely
monitored from the manufacturing phase until the end of their operational life. They are subjected
to a series of routine controls and tests in service in order to monitor their general condition and
to detect and prevent possible malfunctions early. The frequency response analysis (FRA) test, which
has come into use in recent years, is an effective and sensitive method for diagnosing winding
failures and structural defects in the core, and especially for determining the structural
displacements that occur in transformers. In this study, the basic principles of the FRA method used
in the diagnosis of power transformer faults are given and the test connection configurations are
explained. In addition, the effects of parameters such as tap changer position, temperature,
connection group, DC voltage and measurement direction on the FRA results are discussed.

Keywords: Power Transformers, Fault analysis, Frequency Response Analysis

Özet: Güç transformatörleri, yüksek gerilimli enerji iletim sistemlerinde kritik görev alan en
pahalı ve en önemli ekipmanlardan birisidir. Güç aktarımının güvenli bir şekilde
sürdürebilmesi için bu aygıtların sorunsuz çalıştırılması oldukça önemlidir. Bu nedenle, güç
transformatörleri imalat aşaması dahil işletme ömürlerini tamamlayana kadar yakın takip
edilirler. Genel durumlarının izlenmesi, olası arızaların erken teşhisi ve önüne geçilmesi için
işletmede bir dizi rutin kontrol ve testlere tabi tutulurlar. Son yıllarda uygulamaya başlanan
frekans tepkisi analizi (FRA) testi; transformatörlerde meydana gelen sargı arızaları ve
nüvedeki yapısal bozuklukların teşhisinde, özellikle de transformatörlerde meydana gelen
yapısal kaymaların belirlenmesinde kullanılan etkin ve hassas bir yöntemdir. Bu çalışmada,
güç transformatörleri arızalarının teşhisinde kullanılan FRA yönteminin temel prensipleri
verilmiş, test bağlantı konfigürasyonları açıklanmıştır. Ayrıca, kademe değiştirici, sıcaklık,
bağlantı grubu, DC gerilim, ölçüm yönü vb., parametrelerin FRA sonuçlarına etkileri
tartışılmıştır.

Anahtar Kelimeler: Güç Transformatörleri, Hata analizi, Frekans Tepkisi Analizi

1
Pamukkale Üniversitesi, Mühendislik Fakültesi, Elektrik Elektronik Mühendisliği, Denizli, Türkiye
2
TEİAŞ 21. Bölge Müdürlüğü, Denizli, Türkiye
* Sorumlu yazar: [email protected]
1. Giriş

Güç transformatörleri, elektrik enerjisi iletim sisteminin en pahalı ekipmanlarından biri olup
işletmede oldukça kritik bir görevi yerine getirmektedirler. Transformatörlerde yaşanacak bir
arıza iletim sisteminde sorunlara yol açacaktır. Bu açıdan transformatörlerin sağlıklı ve
sorunsuz bir şekilde çalıştırılması, güç sistemi yönetimi açısından önemlidir. Bunlarda
yaşanacak herhangi bir arıza nedeniyle meydana gelecek enerji kesintisi tüketicileri olumsuz
yönde etkiler ve ciddi mali kayıplara neden olur. Bu nedenle, güç transformatörlerinin en az
sorunla çalıştırılması elektrik enerji sisteminin sürekliliği açısından gereklidir (Bohatyrewicz et
al., 2019; CIGRE Working Group A2.37, 2015). Bunların yanında, bir hata sonucu
transformatörde meydana gelecek patlamalar, can ve mal gibi önemli kayıplara ve çevresel
risklere de neden olur. Elektrik güç sistemlerinin sürdürülebilir bir şekilde işletilmesi, güç
transformatörlerinin güvenilirliği ve kullanılabilirliği ile yakından ilgilidir (Mirzai et al.,
2006). Güç transformatörlerinde yeni başlayan arızaların erken teşhis edilmesi ile arıza
durumuna hızlı bir şekilde müdahale ederek arızanın ilerlemesi durdurulabilir, ekonomik
kayıplar azaltılır ve onarım süresi kısaltılabilir.

Transformatörler imal edilirken mekanik ve elektriksel zorlanmalara dayanacak şekilde
tasarlanmasına rağmen nakliye aşamasındaki dikkatsizlikler, depremler, doğal yaşlanma,
izolasyon bozulması, kısa devre arızaları gibi durumlar sargı ve çekirdek deformasyonuna
neden olabilir. Transformatörlerin işletmede sorunsuz bir şekilde çalışmasının sağlanması
onun teknik durumuyla yakından ilgilidir. Transformatörlerin korunmasına yönelik birçok
önlem alınmakla birlikte test ve bakımlarının düzenli yapılması gerekir. Bu amaçla
işletmedeki durumlarının değerlendirilmesi, olası arıza durumlarına karşı erken önlem
alınması, arıza durumunun acil bir şekilde belirlenmesi maksadıyla birçok koruma, mekanik
ve elektrik testlerin yanında kimyasal analizlere tabi tutulurlar. Transformatörlere uygulanan
testlerle ilgili detaylı çalışmalar literatürde sunulmuştur. Korumaya yönelik önlemler
içerisinde diferansiyel koruma örnek verilebilir (Faiz & Heydarabadi, 2015; Gajić, 2008).
Arıza teşhisine yönelik uygulanan elektriksel testler; AC-DC izolasyon, sarım oranı, DC direnç,
yağ üzerinde yapılan güç faktörü (%PF), delinme dayanımı vb. şeklinde sıralanabilir
(Koroglu, 2016; Mendes et al., 2004). Bunlarının yanında izlemeye yönelik test ve analizler
arasında; kısmi deşarj ölçümü, frekans yanıt analizi (FRA), yağda çözünmüş gaz analizi
(DGA), dielektrik tepki ölçümü, yağdaki nem, dinamik termal modelleme gösterilebilir
(International Electrotechnical Commission, 2015; Tenbohlen et al., 2016).

Son yıllarda, güç transformatörü arızalarının tanılanmasında FRA, yeni nesil test yöntemleri
arasındaki yerini almaktadır. FRA yöntemi özellikle transformatörlerde mekanik
deformasyonun gözlenmesinde etkin bir yöntem olarak kullanılmaktadır (IEEE, 2013;
Suwarno & Donald, 2010). Ancak, FRA ölçüm sonuçlarının karşılaştırma esasına dayanan bir
yöntem olması nedeniyle hatanın türü ve ciddiyeti hakkında kesin bir bilgi elde etmek hala
zordur. Referans verilerin karşılaştırılması ile birlikte aynı zamanda uzman bilgisine de
ihtiyaç duyulmaktadır (Al-Ameri et al., 2021). Birçok araştırmacı, FRA sonuçlarının
değerlendirilmesinde insan müdahalesini en aza indirmek veya kaldırmak maksatlı çalışmalar
da yapmaktadır (Khalili Senobari et al., 2018). FRA sonuçlarının yorumlanmasının
iyileştirilmesi için güç transformatörleri üzerinde farklı tipte mekanik arızaların test edilmesi
gerekmektedir. Yapılan bir çalışmada, güç transformatörü üç boyutlu sonlu elemanlar
yöntemi ile modellenmiştir. Bu modelde transformatör sargılarında oluşturulan çeşitli radyal
deformasyonların, eksenel yer değiştirmelerin ve aynı zamanda radyal-eksenel
deformasyonların FRA imzaları elde edilmiştir. Bu sonuçlar istatistiksel göstergeler

kullanılarak temel FRA izi ile karşılaştırılarak arıza değerlendirmesi yapılmıştır (Mahvi et al.,
2020).

Tüm bunlara ilaveten FRA, transformatörün frekans cevabının karşılaştırması esasına dayalı
bir yöntem olduğundan, her bir deney koşulunun birbirine yakın olmasına dikkat edilmelidir.
Ölçme işlemini etkileyecek parametrelerin titizlikle dikkate alınması yine sağlıklı bir
değerlendirme için önemli olacaktır. Bu çalışmada, FRA temel ölçme prosedürleri irdelenmiş
ve FRA sonuçlarını etkileyen faktörlerin (kademe değiştirici, sıcaklık, bağlantı grubu, DC
gerilim, ölçüm yönü ve diğer faktörler) sonuçlar üzerindeki etkileri tartışılmıştır.

2. Frekans Tepkisi Analizi

FRA, güç transformatörünün sargı ve demir çekirdekteki arızaların tespit edilmesinde hassas
bir yöntem olup mekanik deformasyonların tespiti için etkili ve ekonomik bir teşhis
tekniğidir. Aynı zamanda transformatörün aktif parçalarının geometrisi hakkında güvenilir
bilgi sağlayabilir (Ni et al., 2020). Genel manada bir transformatör gövde, nüve ve sargılardan
oluşmaktadır. Transformatör doğası gereği sargılarının direnci, endüktansı, sargı-gövde
arasındaki kapasite değeri vb. gibi elektriksel parametrelerle ifade edilir. Bu yapı ve
etkileşimler dikkate alındığında kompleks bir RLC devresi şeklinde düşünülebilir.

Şekil 1. Bir güç transformatörünü için temel FRA prensip şeması

FRA prensip şeması Şekil 1’de gösterildiği gibi geniş bir bant aralığında transformatörün
frekans cevabının gözlenmesine dayanmaktadır. Frekans tepkisi, transformatörün bir
terminaline uygulanan belli frekanslardaki alçak gerilim sinyalinin diğer bir terminalden
genlik ve faz açısı olarak ölçülmesidir. Çıkıştaki sinyal küçük olacağından dolayı genellikle
dB olarak ölçülür. Ölçüm ulaşılabilir olan bütün terminaller için yapılır. Her frekanstaki
transfer fonksiyonu, transformatörün RLC ağının etkin empedansının bir ölçüsüdür
(Alsuhaibani et al., 2016). Transfer fonksiyonu ölçülen V2 çıkış geriliminin, referans V1 giriş
gerilimine oranı olup, sistemin genlik cevabı denklem 1’den, işaretin faz açısı cevabı ise
denklem 2’den hesaplanır.

k(f) = 20 log10 |V2(f) / V1(f)|   [dB]                    (1)

θ(f) = arg( V2(f) / V1(f) )   [°]                         (2)
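
Denklem (1) ve (2)'nin uygulanışını göstermek amacıyla aşağıda kısa bir Python taslağı verilmiştir; fonksiyon ve
değişken adları ile örnek sayısal değerler tamamen varsayımsaldır ve gerçek bir FRA cihazına ait değildir.

import numpy as np

def fra_cevabi(V1, V2):
    # Denklem (1): genlik cevabi (dB), Denklem (2): faz acisi (derece).
    # V1, V2: ayni frekans noktalarinda olculen kompleks gerilim fazorleri.
    H = V2 / V1                                # transfer fonksiyonu V2/V1
    genlik_dB = 20.0 * np.log10(np.abs(H))     # denklem (1)
    faz_derece = np.angle(H, deg=True)         # denklem (2)
    return genlik_dB, faz_derece

# Varsayimsal tek frekans noktasi icin ornek kullanim:
V1 = np.array([1.0 + 0.0j])                    # referans giris gerilimi
V2 = np.array([0.05 - 0.02j])                  # olculen cikis gerilimi
print(fra_cevabi(V1, V2))                      # yaklasik -25.4 dB ve -21.8 derece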
FRA ölçümü bir nevi transformatörler için tanımlanmış parmak izi olarak da değerlendirilir.
Herhangi bir fiziksel ve elektriksel hata sonucu, transformatör RLC devresinin empedansında
meydana gelecek değişimlere bağlı olarak devrenin transfer fonksiyonunun değeri de değişecektir.
Bu yöntemle, hataya bağlı empedans değişiklikleri etkin bir şekilde izlenebilir hale
gelmektedir. Böylelikle transformatördeki hatalar, ölçülen FRA sonuçlarının, önceki test
sonuçları ile karşılaştırılması neticesinde tespit edilebilir (Kraetge et al., 2009).

3. FRA Test Konfigürasyonu

FRA yöntemi transformatör empedansının frekans cevabının faz ve genlik olarak
incelenmesini esas alır. Ölçme işlemi bu amaç için geliştirilmiş test cihazlarından
yararlanılarak gerçekleştirilir. FRA test cihazlarında, bağlantı için genellikle referans ucu,
sinyalin gönderildiği canlı uç ve ölçüm ucu şeklinde üç adet çıkış bulunur. Referans ucu
toprağa bağlanırken, ölçümü yapılan transformatör ile aynı topraklama noktasına irtibatlı
olmasına dikkat edilir. Ölçme işlemi her fazda ve istenilen kademelerde tamamlanır. Daha
önce yapılmış sonuçlarla karşılaştırılarak, önceki değerlerle uygunluğu kontrol edilir. FRA
ölçüm sonuçlarının karşılaştırılması zaman tabanlı, tip tabanlı ve fazlar arası karşılaştırma
olmak üzere üç farklı biçimde yapılabilmektedir (Kraetge et al., 2009; Picher, 2008).
Bunlardan zaman tabanlı karşılaştırma aynı transformatörde yapılan ölçümlerin daha önce
kaydedilen FRA ölçümleriyle karşılaştırmasını ifade eder. Tip tabanlı karşılaştırma, bir
transformatörün FRA sonuçlarının ikiz aynı tip başka bir transformatörün sonuçları ile
karşılaştırılması anlamına gelir. Fazlar arası karşılaştırma ise her bir faza ait sonuçların
birbiriyle kıyaslanmasından ibarettir. Karşılaştırılan sonuçlar arasında herhangi bir sapma
yoksa transformatörün sağlıklı, tersi bir durum gözlenirse transformatörde anormal bir
durumun varlığı değerlendirilir.
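
Karşılaştırmanın sayısal bir göstergeye dönüştürülmesine örnek olarak, aşağıdaki kısa Python taslağı iki FRA
izinin seçilen bir frekans bandındaki korelasyon katsayısını hesaplamaktadır; korelasyon katsayısı literatürde
kullanılan istatistiksel göstergelerden yalnızca biridir ve buradaki fonksiyon adları ile örnek veriler
varsayımsaldır.

import numpy as np

def bant_korelasyonu(f, iz_referans, iz_yeni, f_alt, f_ust):
    # Iki FRA izinin (dB) secilen frekans bandindaki korelasyon katsayisi.
    # 1'e yakin deger benzerligi, dusuk deger olasi bir sapmayi gosterir.
    maske = (f >= f_alt) & (f <= f_ust)
    return np.corrcoef(iz_referans[maske], iz_yeni[maske])[0, 1]

# Varsayimsal ornek veri: 10 Hz - 1 MHz arasinda 500 nokta
f = np.logspace(1, 6, 500)
iz_referans = -40.0 + 10.0 * np.sin(np.log10(f))             # referans iz (dB)
iz_yeni = iz_referans + np.random.normal(0.0, 0.5, f.size)   # guncel olcum (dB)
print(bant_korelasyonu(f, iz_referans, iz_yeni, 1e3, 1e5))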

FRA ölçme yöntemleri standartlarda açıkça tanımlanmış olup temel olarak dört ölçümden
ibarettir (IEEE, 2013; International Electrotechnical Commission, 2012). Bunlar sargı sonu-
sargı sonu açık devre testi, sargı sonu-sargı sonu kısa devre testi, sargılar arası kapasitif test ve
sargılar arası endüktif test şeklindedir.

3.1. Sargı sonu – sargı sonu açık devre testi

Sargı sonu-sargı sonu açık devre testindeki ölçümlerde, sinyal faz veya nötr terminalden
uygulanabilmektedir. Sinyal bir fazın sargı ucundan uygulanır ve diğer terminalin sargı ucuna
iletilen sinyal ölçülür. Bu test basit ve her faza uygulanabildiğinden, yaygın bir şekilde tercih
edilmektedir. Şekil 2’de, YNyn0 bağlantı gruplu transformatör için sargı sonu-sargı sonu açık
devre örnek ölçüm şeması verilmiştir. Bu ölçüm, her üç faza da uygulanır.

Şekil 2. Sargı sonu-sargı sonu açık devre testi şeması

3.2. Sargı sonu – sargı sonu kısa devre testi

Bu ölçüm sargı sonu-sargı sonu açık devre ölçümüyle benzer olup, tek farklı nokta,
transformatörün sekonder faz sargılarının kısa devre edilmesi şeklindedir. Üç fazlı bir
transformatörde, bu test için, her faz sargısı sırasıyla kısa devre edilebileceği gibi, üç fazın
sargıları da kısa devre edilebilir. Şekil 3’te sargı sonu-sargı sonu kısa devre testine ait örnek
ölçüm şeması verilmiştir.

Şekil 3. Sargı sonu-sargı sonu kısa devre testi şeması

3.3. Sargılar arası kapasitif test

Bu ölçümde sinyal, primer sargının bir ucundan uygulanarak sekonder sargının aynı fazından
sinyal çıkışı ölçülür. Bu işlem, oto transformatörlerin seri ve ortak sargılarında uygulanamaz.
Bu ölçüm daha ziyade, transformatörlerde alçak frekans bölgesindeki değişimlerin
gözlenmesi, yani nüve problemlerin tespiti amaçlı uygulanır. Şekil 4’te, ölçüm işlemi için
gerekli bağlantı şeması verilmiştir.

Şekil 4. Sargılar arası kapasitif test bağlantı şeması

3.4. Sargılar arası endüktif test

Bu ölçümde sinyal, primer sargının bir terminaline uygulanırken, çıkış sinyali sekonder
sargının diğer ucundan ölçülür. Bu ölçüm tekniği ile kapasitif ölçüm arasındaki fark, bu
ölçüm tekniğinde transformatör primer ve sekonder sargılarının birer uçlarının
topraklanmasıdır. Bu ölçümün alçak frekans oranı, transformatörün sarım oranını verir. Şekil
5’te, sargılar arası endüktif ölçüm işleminin bağlantı şeması verilmiştir.

Şekil 5. Sargılar arası endüktif test bağlantı şeması

4. FRA Ölçümlerini Etkileyen Faktörler

Birkaç Hz’den birkaç MHz’e kadar olan FRA ölçümü, transformatör sargılarının durum
değerlendirmesi için yaygın olarak kullanılmakta olup çeşitli mekanik ve elektrik arıza
durumlarının tespit edilmesinde etkin bir yöntemdir. Test esnasında elde edilen FRA sonucu,
önceki bir ölçümle veya üç fazlı bir transformatörün fazları arasındaki frekans yanıtlarıyla
görsel olarak karşılaştırılır (Picher et al., 2017). Güç transformatörü sargı yapısındaki yüksek
frekanslı elektriksel etkileşimlerin karmaşıklığı, ölçüm esnasındaki diğer ekipman ve çevresel
faktörlerin varlığı FRA yorumlamasında bazı zorluklar getirmektedir. Dolayısıyla, doğru bir
yorumlama için deneyimli uzman bilgisi yanında ölçme işlemine ait şartların da iyi bilinmesi
gerekir. Karşılaştırma esasına dayanan bu yöntemde, sağlıklı bir değerlendirme yapılabilmesi
için, test şartlarının benzer olması gerekmektedir. Bunların yanında FRA test işlemlerinde
dikkate alınması gereken ve ölçüm sonuçlarını etkileyen önemli parametreler; kademe
değiştirici etkisi, sıcaklık, bağlantı grubu, DC gerilim, ölçüm yönü vb. şeklinde sıralanabilir.

4.1. Kademe değiştirici etkisi

Transformatörlerde yapılan FRA ölçümünde, kademe pozisyonu önemli olmaktadır.
Standartlarda önerilen, kademe değiştiricinin tüm sargıları içerisine alacak seviyede
bulunmasıdır. Her kademe için ayrı ölçüm yapılabilir, ancak bu durum çok fazla ölçme
gerektireceğinden pratik değildir (IEEE, 2013; International Electrotechnical Commission,
2012; Yousof et al., 2015). Bu nedenle, yapılan ölçümlerde kademe değiştiricinin pozisyon
değeri mutlaka belirtilmelidir. Bununla birlikte, yük altında kademe değiştiricinin en düşük
kademeden en yüksek kademeye doğru gitmesi ile tam tersi yönde gitmesi arasında yapılan
ölçümlerde de farklılıklar olabileceği dikkate alınmalıdır.

4.2. Sıcaklık etkisi

FRA ölçümü, bir anlamda transformatör empedansının frekansa verdiği cevap olarak
değerlendirilebilir. Dolayısıyla, sıcaklık ile direnç arasındaki ilişki ve sargı sıcaklığına bağlı
olarak, direnç değerinde bir miktar değişim söz konusu olmaktadır. Direnç değeri
değiştiğinden, ölçüm sonuçlarının genlik değerlerinde de bir miktar değişimler görülebilir.
Değişim, yüksek frekanslı bölgede ve yüksek sıcaklık farklarında daha fazla gözlenmektedir
(International Electrotechnical Commission, 2012; Reykherdt & Davydov, 2011). Bu sebeple,
ölçme anındaki sıcaklık değerlerinin tespit edilip kaydedilmesi, yerinde olacaktır.

4.3. Bağlantı grubu ve tersiyer sargı etkisi

Bazı güç transformatörlerinde, primer ve sekonder sargıların haricinde, üçüncü sargı olarak
tersiyer sargılar mevcuttur. Tersiyer sargı, yıldız bağlı transformatörlerde kullanılan yardımcı
bir sargıdır. Yıldız bağlı transformatörlerde sargı sonları yıldız noktasına bağlı iken, üçgen
sargılı transformatörlerde sargılar ardı sıra bağlıdır. Üçgen bağlantı uçlarının açık ve kapalı
olması, özellikle orta frekansta rezonansa sebep olmakta, bu da ölçüm sonuçlarını
etkilemektedir. Yıldız bağlı transformatörlerde, harici olarak yıldız noktası değiştirilebiliyor
ise, yıldız noktasının bağlı veya açık olması da, yine ölçüm sonuçlarını değiştirecektir
(Reykherdt & Davydov, 2011).

4.4. DC gerilim etkisi

Sahada yapılan test işlemlerinde, ilk olarak FRA ölçümü yapılması istenir. Bunun nedeni,
ölçüm sonucu elde edilecek frekans cevabının diğer test cihazlarından kaynaklanan DC
gerilimden etkilenmesini önlemektir. Nüvede meydana gelen artık mıknatısiyet, ölçüm
sonuçlarını etkiler (International Electrotechnical Commission, 2012). Yapılan bir çalışmada,
DC gerilimin FRA ölçüm sonuçları üzerindeki etkisi gözlenmiştir. İlgili çalışmada, DC
gerilim uygulanmadan ve uygulandıktan sonra frekans cevabının 0-1 kHz bölgesinde, genlik
bakımından önemli kaymalar görülmüştür (Abeywickrama et al., 2008). Bu nedenle,
transformatör nüvesinden kaynaklı artık mıknatısiyet etkisinin en aza indirilmesi gerekir.
Bunu sağlamak için, FRA ölçümleri sargı direnç ölçümlerinden önce yapılmalı veya ölçme
işleminden önce nüvedeki artık mıknatısiyet demagnetize edilmelidir.

4.5. Buşing ve ölçüm yönü etkisi

Bu ölçme yönteminde ölçüm cihazının bağlantı terminalleri, transformatör buşinglerine
irtibatlandırılır. Sistem karakteristiğinin etkilenmemesi için, önceki ölçümde irtibatlanan
buşing kullanılmalıdır. Farklı tip buşing kullanıldığı taktirde, buşing izolasyon
malzemelerinin karakteristiğine bağlı olarak, ölçüm sonuçları da farklılık gösterebilir. Ayrıca,
ölçüm yönü de, test sonuçlarını etkileyen diğer önemli bir faktör olarak karşımıza
çıkmaktadır. Örneğin; A fazı sargısından Nötr (N) noktasına doğru yapılan ölçümle, N
noktasından A fazına doğru yapılan ölçüm arasında farklılıklar meydana gelebilir. FRA
analizinde bu etki, daha çok, yüksek frekans bölgesinde görülmektedir (International
Electrotechnical Commission, 2012).

4.6. Diğer etkiler

Ayrıca, FRA ölçüm işlemini etkileyebilecek diğer faktörler ise; ölçüm cihazının standartlar
dahilinde olmaması, bağlantı hataları, testi yapan operatörün ilgili testi standartlara uygun bir
şekilde gerçekleştirmemesi şeklinde sıralanabilir. Bütün bu faktörler, test sonuçlarının
güvenilirliği ve testlerin tekrar edilebilirliği açısından önemli birer unsurdur.

5. Sonuçlar

FRA, transformatörlerde meydana gelen sargı arızaları ve nüvedeki yapısal bozuklukların
teşhisinde, özellikle de transformatörlerde meydana gelen sargı ve nüvedeki yapısal
kaymaların belirlenmesinde kullanılan etkin bir yöntemdir. Bu çalışmada yöntemin temel
prensipleri, ölçme yöntemleri, ölçmeyi etkileyen önemli faktörler açıklanarak tartışılmıştır.

FRA testlerinin yapılmasında, ölçüm sonuçlarının yorumlanması ve değerlendirilmesinde
dikkat edilmesi gereken önemli etkenler aşağıdaki gibi not edilmiştir.
- Bir önceki FRA sonuçlarıyla karşılaştırmayla değerlendirme yapılacaksa test
şartlarının ve ölçme yönteminin aynı olmasına dikkat edilmelidir.
- Ölçme işleminden önce transformatörde artık mıknatısiyetin olmadığından emin
olunmalı veya demagnetize işlemi yapılmalıdır.
- Ölçme bağlantı uçlarının seçilmesinde ölçüm yönünün tüm test işlemleri için aynı
seçilmesi önemlidir.
- Transformatördeki yaşlanma etkileri ve izolasyon yağının durumundaki değişimler de
hesaba katılmalıdır.

Tüm bu durumlar dikkate alındığında FRA sonuçlarının yorumlanmasında uzmanlık bilgisine
ihtiyaç duyulmaktadır. FRA yöntemi transformatör arızalarının teşhisinde hassas ve başarılı
sonuçlar vermekte olup test şartlarının önceki ölçüm şartlarıyla uyumlu olmasına dikkat edilmelidir.
Aynı zamanda test esnasında deney şartları, ölçme yöntemi ve transformatöre ait karakteristik
değerler (sıcaklık, nem, izolasyon yağının durumu vb.) titizlikle not edilmelidir.

Kaynaklar

Abeywickrama, N., Serdyuk, Y. V., & Gubanski, S. M. (2008). Effect of core magnetization
on frequency response analysis (FRA) of power transformers. IEEE Transactions on
Power Delivery, 23(3). https://doi.org/10.1109/TPWRD.2007.909032

Al-Ameri, S. M., Kamarudin, M. S., Yousof, M. F. M., Salem, A. A., Siada, A. A., &
Mosaad, M. I. (2021). Interpretation of frequency response analysis for fault detection
in power transformers. Applied Sciences (Switzerland), 11(7).
https://doi.org/10.3390/app11072923

Alsuhaibani, S., Khan, Y., Beroual, A., & Malik, N. H. (2016). A review of frequency
response analysis methods for power transformer diagnostics. Energies, 9(11).
https://doi.org/10.3390/en9110879

Bohatyrewicz, P., Płowucha, J., & Subocz, J. (2019). Condition assessment of power
transformers based on health index value. Applied Sciences (Switzerland), 9(22).
https://doi.org/10.3390/app9224877

CIGRE Working Group A2.37. (2015). Transformer Reliability Survey. In Cigre (Issue
December). Cigre.

Faiz, J., & Heydarabadi, R. (2015). Diagnosing power transformers faults. Russian Electrical
Engineering 2014 85:12, 85(12), 785–793.
https://doi.org/10.3103/S1068371214120207

Gajić, Z. (2008). Differential protection methodology for arbitrary three-phase power
transformers [Lund University]. https://doi.org/10.1049/cp:20080009

IEEE. (2013). C57.149-2012 - IEEE Guide for the Application and Interpretation of
Frequency Response Analysis for Oil-Immersed Transformers. IEEE.

International Electrotechnical Commission. (2012). IEC 60076-18 Ed. 1.0 b:2012 Power
Transformers - Part 18: Measurement Of Frequency Response.

International Electrotechnical Commission. (2015). IEC 60599:2015 Mineral oil-filled
electrical equipment in service - Guidance on the interpretation of dissolved and free
gases analysis.

Khalili Senobari, R., Sadeh, J., & Borsi, H. (2018). Frequency response analysis (FRA) of
transformers as a tool for fault detection and location: A review. Electric Power Systems
Research, 155, 172–183. https://doi.org/10.1016/j.epsr.2017.10.014

Koroglu, S. (2016). A Case Study on Fault Detection in Power Transformers Using Dissolved
Gas Analysis and Electrical Test Methods. Journal of Electrical Systems, 12(3), 442–
459.

Kraetge, A., Krüger, M., Valásquez, J. L., Viljoen, H., & Dierks, A. (2009). Aspects of the
Practical Application of Sweep Frequency Response Analysis (SFRA) on Power
Transformers. CIGRÉ 2009 6th Southern Africa Regional Conference.

Mahvi, M., Behjat, V., & Mohseni, H. (2020). Analysis and interpretation of power auto-
transformer winding axial displacement and radial deformation using frequency
response analysis. Engineering Failure Analysis, 113.
https://doi.org/10.1016/j.engfailanal.2020.104549

Mendes, J. C., Marcondes, R. A., & Nakamura, J. (2004). On-site Tests on HV Power
Transformers. Cigre 2004. http://www.cigre.org

Mirzai, M., Gholami, A., & Aminifar, F. (2006). Failures Analysis and Reliability Calculation
for Power Transformers. Journal of Electrical Systems, 2(1).

Ni, J., Zhao, Z., Tan, S., Chen, Y., Yao, C., & Tang, C. (2020). The actual measurement and
analysis of transformer winding deformation fault degrees by FRA using mathematical
indicators. Electric Power Systems Research, 184.
https://doi.org/10.1016/j.epsr.2020.106324

Picher, P. (2008). Mechanical Condition Assessment of Transformer Windings Using
Frequency Response Analysis (Fra). Evaluation, April.

Picher, P., Tenbohlen, S., Lachman, M., Scardazzi, A., & Patel, P. (2017). Current state of
transformer FRA interpretation: On behalf of CIGRE WG A2.53. Procedia
Engineering, 202. https://doi.org/10.1016/j.proeng.2017.09.689

Reykherdt, A. A., & Davydov, V. (2011). Case studies of factors influencing frequency
response analysis measurements and power transformer diagnostics. IEEE Electrical
Insulation Magazine, 27(1). https://doi.org/10.1109/MEI.2011.5699444

Suwarno, & Donald, F. (2010). Frequency Response Analysis (FRA) for diagnosis of power
transformers. ECTI-CON 2010 - The 2010 ECTI International Conference on Electrical
Engineering/Electronics, Computer, Telecommunications and Information Technology.

Tenbohlen, S., Coenen, S., Djamali, M., Müller, A., Samimi, M. H., & Siegel, M. (2016).
Diagnostic measurements for power transformers. In Energies (Vol. 9, Issue 5).
https://doi.org/10.3390/en9050347

Yousof, M. F. M., Ekanayake, C., & Saha, T. K. (2015). An investigation on the influence of
tap changer on Frequency Response Analysis. Proceedings of the IEEE International
Conference on Properties and Applications of Dielectric Materials, 2015-October.
https://doi.org/10.1109/ICPADM.2015.7295434

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Simultaneous Hybrid Use of Drinking Water Pump Energy from
Grid and Solar Energy
Aydın GULLU1*

Abstract: The importance of renewable energy sources has increased with the drive to use limited
energy resources efficiently. Sources such as wind and sun are used in energy production; however,
their output varies depending on environmental conditions, so energy may not be available from the
sun or wind at the moment it is needed. Energy can be stored by a suitable arrangement of the
system, but this is not always enough, so renewable energy sources are used in hybrid with the
grid. In this study, a deep well water pump is operated with electrical energy produced by solar
panels. The water is collected in a tank and fed into the drinking water network; when the water
level in the tank decreases, the pump is activated automatically. The efficiency of this
arrangement depends on using the solar panels to the maximum possible extent. The energy from the
solar panels is connected to the DC bus input of an AC motor drive; at the same time, the mains
voltage is connected to the drive via a disconnector. The system was installed for the village of
Aliçopehlivan, İpsala, Edirne. In this application, the solar panels and electrical panels were
placed on the existing infrastructure. In the system tests, the pump had to be operated in the
range of 45-50 Hz in order to lift water; for this reason, maximum power point tracking could be
used only to a very limited extent. The grid is used when the energy from the solar panels is not
sufficient, and a disconnector is used for this transition. When the sun alone is not enough, both
the grid and the solar panels are connected at the same time to operate the system. In the
developed setup, 25,300 W of solar panels, a 22 kW AC motor drive and an 18.5 kW pump were used.

Keywords: Hybrid Energy, Solar Energy, Solar Pump Control

1. Introduction

The use of clean energy sources is desired by many people; however, such sources are not sufficient
in terms of continuity and installed power (Akinsipe, Moya, & Kaparaju, 2021). Solar panels
generate electricity based on daylight, while wind turbines depend on the wind. Since the
availability of these clean resources is not under human control, electricity cannot be produced
at the desired power and at the desired time (Ozan & GÜLLÜ, 2016); such resources can only be
exploited efficiently with careful planning and statistical calculation. The aim of this study is
to use solar panels to supply the energy of an electric pump serving a water need. Although the
power provided by these solar panels can operate the pump, this operation is not possible for
24 hours a day, since solar energy depends on sunlight whereas the water requirement is independent
of daylight. For this reason, the aim is to meet the energy of the pump, which meets the water
need, in a hybrid manner (Upasani & Patil, 2018). The hybrid operating model and the installed
system are explained in chapter 3.
1
Trakya University, Ipsala Vocational School, Electronics and Automations Dept., Edirne, Turkey
* Corresponding author: [email protected]
2. Materials and Methods

The materials and methods used to build the system are described in this section. First,
information about the solar panels is given. Then, the inverters used to convert the
energy produced by the solar panels are described.

2.1. Solar Panels

Solar panels convert the energy they receive from the sun into electricity. Solar cells are used
for this conversion, and solar panels are produced by connecting several solar cells in series or
in parallel. The structure of the cells determines the structure of the solar panel: panels can be
monocrystalline (Munzer, Holdermann, Schlosser, & Sterk, 1999) or polycrystalline according to the
cell type. Monocrystalline and polycrystalline panels are shown in Figure 1 (Hörömpöli & Rácz,
2018). The energy is produced as direct current at a certain voltage and current. If the generated
energy is to be used as DC, it is reduced to the appropriate voltage level; if AC power is
required, the direct current must be converted to alternating current. This conversion is done
with the help of inverters, which are described in section 2.2.

Figure 1. Types of Solar Panels

2.2. Inverters

In this study, the energy is used in a water pump working with alternating current. Since the pump
works with alternating current, the direct current produced by the solar panels has to be
converted, and an inverter is used for this conversion. DC-AC inverters are produced and sold
commercially (Dogga & Pathak, 2019). These inverters convert to AC voltage at grid frequency; the
rated conversion power is fixed for a given device, and the device is selected in accordance with
the panel power (Güllü, Kuşçu, & Yılmazlar, 2020). There are varieties such as full-wave (pure)
sine and modified sine. There are off-grid types that are independent of the grid, as well as
on-grid inverters that work connected to the grid. The off-grid inverter connection diagram is
shown in Figure 2. On-grid inverters synchronize the energy they produce with the grid frequency
and phase angle. In this way, the energy produced is in phase with the grid and, when desired,
energy can be transferred to the grid with the help of these inverters (Chand, Prasad, Mamun,
Sharma, & Chand, 2019). The on-grid inverter connection diagram is shown in Figure 3.

Figure 2. Off-grid inverters

Figure 3. On-grid inverters

In this study, a motor is controlled; for this reason, an AC motor frequency inverter
(variable-frequency drive) is used directly instead of a DC-AC inverter. AC motor drives operate
as follows: the supply, usually AC, is first converted to DC, and the DC voltage is then converted
back to AC at the desired frequency and transferred to the motor. On most AC frequency inverters
operating on this principle, the DC-link terminals are accessible, so the drive can also be used
without an AC input by applying an external DC voltage to these terminals. However, since the
output potential (380 V AC phase-to-phase for Turkey) has to be produced, the external DC voltage
should be close to the corresponding DC-link level. When a 3-phase sine voltage is rectified with
a bridge rectifier, the DC voltage is calculated as in equation (I).

Vdc = (3√3 / π) · Vm · cos α                              (I)

Vm = √2 · Vrms                                            (II)

Here Vm is the peak phase voltage: it is 311 V for 220 V rms between phase and neutral. In
formula (I), taking α = 0 for maximum output (a diode bridge), the DC voltage becomes about 514 V
for a 380 V grid. This calculated voltage is the DC-link voltage of the motor drive. If a voltage
above this bus voltage is applied externally to the DC bus, current will flow from the external
source. The system structure is described in chapter 3.
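
As an illustration of equations (I) and (II), the short Python sketch below reproduces the voltage
figures quoted in the text; the variable names are illustrative assumptions, not part of the
installed system.

import math

# Equation (II): peak of the 220 V rms phase-to-neutral voltage
v_m = math.sqrt(2) * 220.0                                  # about 311 V

# Equation (I) with firing angle alpha = 0 (diode bridge, maximum output)
v_dc = (3.0 * math.sqrt(3.0) / math.pi) * v_m * math.cos(0.0)

print(v_m, v_dc)   # roughly 311 V and 514 V, the DC-link level quoted in the text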

A 22 kW Delta C2000 series inverter was used in this project. Figure 4 shows the inverter and its
connection diagram, in which the 3-phase input and motor connections are indicated. The energy
from the solar panels is connected to the DC bus (DC+ and DC- terminals). Since direct current has
a defined polarity, attention must be paid to these connections.

Figure 4. Delta C2000 Invertor

3. Structure of the System

The system is designed to supply the energy of the drinking water pump of the village of
Aliçopehlivan, İpsala. The existing drinking water pump is 18.5 kW. The water needs of the village
are met by transferring water from the well to an elevated tank, and the pump is designed to fill
this tank. Since water usage is high in the summer months, the pump must run 24 hours a day; in
summer, the tank is never completely full. 110 solar panels were placed in the area next to this
pumping station. The solar panel array is shown in Figure 6. The panels are arranged as 5 arms
(strings) of 22 panels each. PLM-230P-60 model 230 W polycrystalline solar panels are positioned
on each arm; a panel is shown in Figure 5. Each panel produces 30.15 V DC and its current capacity
is 7.63 A. With the calculation made from formula (I), the grid-fed DC bus voltage is 514 V. In
some cases, voltage fluctuations may raise the mains phase-to-phase voltage above 380 V, and a
corresponding increase is then observed in the DC bus voltage. In order for the panel currents to
be used simultaneously with the grid, the panel potential must be higher than the DC potential
derived from the grid; otherwise, no current may flow. For this reason, strings of 22 panels were
formed, taking into account the layout of the land on which the system was installed and the total
of 110 panels. Since each panel voltage is 30.15 V, the string voltage is 663.3 V. To obtain this
voltage, the 22 panels of each arm are connected in series and the 5 arms are connected in
parallel. Separate + and – cables were drawn from the arms. Each arm is protected with a diode so
that it is not exposed to reverse current; when there is no sun, reverse current is prevented from
flowing back through the panels. The electrical panel built for the system is shown in Figure 7.

Figure 5. Solar Panel
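
The string sizing described above can be summarized with the simple check below; the figures are
those given in the text (230 W panels, 30.15 V and 7.63 A per panel, 22 panels per arm, 5 arms in
parallel), and the constant names are illustrative.

PANEL_VOLTAGE_V   = 30.15   # per panel, from the text
PANEL_CURRENT_A   = 7.63    # per panel, from the text
PANEL_POWER_W     = 230     # PLM-230P-60
PANELS_PER_ARM    = 22
PARALLEL_ARMS     = 5
GRID_DC_BUS_V     = 514     # DC link fed from the grid, from equation (I)

string_voltage_v = PANELS_PER_ARM * PANEL_VOLTAGE_V                 # 663.3 V per arm
array_current_a  = PARALLEL_ARMS * PANEL_CURRENT_A                  # 38.15 A total
array_power_w    = PANELS_PER_ARM * PARALLEL_ARMS * PANEL_POWER_W   # 25300 W installed

# The arm (string) voltage must stay above the grid-fed DC bus voltage
# so that current can flow from the panels into the drive.
print(string_voltage_v > GRID_DC_BUS_V, string_voltage_v, array_current_a, array_power_w)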

The current flows unidirectionally from the panels to the DC bus of the drive. With this method,
both the grid and the panels are integrated into the system at the same time, and the system is
operated from a hybrid energy source. In addition, a contactor is connected to the system: when
the sun is sufficient, the mains current is completely cut off and the system is disconnected from
the mains. However, in the tests carried out, it was not possible to meet all of the pump energy
from the sun; the efficiency of the panels was observed to be around 70%, as the panels had
already been in use for a long time and were re-arrayed for this project.

Figure 6. Solar Panel Array

In order to use the sun efficiently in solar systems, the output is reduced in a controlled manner
according to the available solar power. This is achieved by reducing the frequency of the AC motor
drive. With the method called maximum power point tracking (MPPT), the aim is to use the maximum
power of the solar panels (Espinosa, 2017; Pathare et al., 2017): the power drawn from the panels
is kept at the maximum level and the output power is adjusted accordingly. However, in this system
water is drawn from a well, and when the pump frequency drops below 45 Hz no water comes out, so
the pump frequency can only be varied within a very limited band. In places where water demand is
high, maximum operation of the pump is usually desired. In this study, the MPPT method was used
and the minimum frequency of the pump was set via the human-machine interface (Figure 8). However,
because a maximum of 50 Hz is preferred and the panel efficiency is low most of the time, the
system mostly works with grid support.
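
A minimal sketch of how a perturb-and-observe style power-point search could be combined with the
45-50 Hz constraint described above is given below; it is not the control code actually used in the
drive, and the function names, step size and update logic are assumptions.

def adjust_frequency(freq_hz, last_direction, pv_power_w, prev_power_w,
                     step_hz=0.5, f_min=45.0, f_max=50.0):
    # Perturb-and-observe style step: keep perturbing in the same direction
    # while the PV power increases, reverse the direction when it falls, and
    # clamp the result to the 45-50 Hz band needed to lift water from the well.
    direction = last_direction if pv_power_w >= prev_power_w else -last_direction
    new_freq = max(f_min, min(f_max, freq_hz + direction * step_hz))
    return new_freq, direction

# Hypothetical use inside a periodic control loop:
# freq, direction = adjust_frequency(freq, direction, read_pv_power(), last_power)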

Figure 7. Electrical Panel

While the motor current is around 40 A when the daylight is at its most efficient, only about 10 A
of this is observed to be drawn from the grid, the rest coming from the solar panels. However, in
the tests and measurements, it was observed that the pump could not be run from the solar panels
alone at its installed power. If the motor frequency is reduced to 38 Hz, the motor can operate
from the solar panels alone; however, in this case, no water comes out. Hybrid operation is the
most suitable in terms of system performance. If the number of panels were increased, the system
could be switched over and operated from the panels alone.

Figure 8. Human Machine Interface

4. Discussion and Conclusions

In this study, an 18.5 kW pump was operated together with 110 solar panels totalling 25,300 W and
the grid. In operation, current flows from the solar panels to the DC bus; according to the solar
yield, some of the current is taken from the panels and some from the grid. In this way, the
panels are integrated into the system around the clock. If sufficient energy is provided by the
solar panels, the system is disconnected from the grid. In the measurements made during the day,
it was observed that while the motor draws about 40 A, only about 10 A is drawn from the mains and
the rest comes from the panels. This was observed as the most efficient case; the situation is
reversed when the efficiency of the solar panels decreases. The solar panels are isolated with a
diode so that no reverse current flows, which allows the current to flow in one direction only.
22 panels are placed in each of the 5 branches, thus keeping the potential of the panels above
that of the grid-fed DC bus, and even if the panel voltage drops, protection is provided by the
diode. It is planned to monitor the status of the system, together with long-term daylight data,
in a separate project. The tests show that even when the solar panels are not sufficient on their
own, they can be used actively as long as the grid is available. In this way, as long as there is
daylight there will always be a current contribution from the solar panels according to the
daylight level, which makes a serious contribution to the costs; in addition, the system runs
continuously.

References

Akinsipe, O. C., Moya, D., & Kaparaju, P. (2021). Design and economic analysis of off-grid
solar PV system in Jos-Nigeria. Journal of Cleaner Production, 287, 125055.
Chand, A. A., Prasad, K. A., Mamun, K. A., Sharma, K. R., & Chand, K. K. (2019). Adoption
of grid-tie solar system at residential scale. Clean Technologies, 1(1), 224-231.
Dogga, R., & Pathak, M. (2019). Recent trends in solar PV inverter topologies. Solar Energy,
183, 57-73.
Espinosa, C. L. (2017). Asynchronous non-inverter buck-boost DC to DC converter for
battery charging in a solar MPPT system. Paper presented at the 2017 IEEE
URUCON.
Güllü, A., Kuşçu, H., & Yılmazlar, E. (2020). Efficiency Analysis in Solar Panel Energy
Systems: AC-DC Conversion Cost and DC-DC Energy Use.
Hörömpöli, B., & Rácz, E. (2018). Statistical analysis of power measurements made on
mono-and polycrystalline solar cells. Paper presented at the 2018 IEEE 16th World
Symposium on Applied Machine Intelligence and Informatics (SAMI).
Munzer, K. A., Holdermann, K. T., Schlosser, R. E., & Sterk, S. (1999). Thin monocrystalline
silicon solar cells. IEEE transactions on electron devices, 46(10), 2055-2061.
Ozan, A., & GÜLLÜ, A. (2016). Solar Pump Project Feasibility Study For The Village
Kumdere. Paper presented at the UNITECH 16, Gabrova Bulgaria.
Pathare, M., Shetty, V., Datta, D., Valunjkar, R., Sawant, A., & Pai, S. (2017). Designing and
implementation of maximum power point tracking (MPPT) solar charge controller.
Paper presented at the 2017 International Conference on Nascent Technologies in
Engineering (ICNTE).
Upasani, M., & Patil, S. (2018). Grid connected solar photovoltaic system with battery
storage for energy management. Paper presented at the 2018 2nd International
Conference on Inventive Systems and Control (ICISC).

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Effects of different stitch combinations on the seam bursting
characteristics of PET/Co workwear

Sukran Kara1*

Abstract: Workwears are produced to protect the wearer against occupational or environmental
hazards. One of the shared properties of several workwear types is resisting external forces
during work. As with the workwear fabrics themselves, the seams of the workwear should be strong
enough to resist uniaxial or multiaxial forces in order to maintain the fabric properties in the
cut and sewn parts of the garment. Although the uniaxial seam strengths of classical fabrics for
various end-products were determined in several studies, the multiaxial strength of workwear
fabrics has not been studied in detail yet. Therefore, in this study, the effects of different
stitch types and their combinations on the workwear bursting characteristics were evaluated in
order to determine their resistance against multiaxial forces. According to the results, the use
of stitch combinations clearly contributed to the seam bursting strength. The stitch rows
responded to the bursting forces together; therefore, the bursting failures occurred in a single
step.
Keywords: Seam bursting, bursting height, workwear, stitch combinations.

1. Introduction

Workwears should resist external forces during occupation and provide protection to the wearer. Therefore, workwears are produced with thick and heavy fabrics made of specialty fibers or traditional synthetic/natural fibers. The raw materials of workwear fabrics are selected carefully in order to balance the performance and the cost of the end-products (Paul, 2019; Midha et al, 2010; Haifa, 2013). As for workwear fabrics, the seams of the workwear should be strong enough to resist the applied forces in order to maintain the unsewn fabric properties at the sewn areas.

In the ready-wear industry, six standard classes of stitches are used: chain stitch, hand stitch, lock stitch, overlock stitch, multi-yarn chain stitch and cover stitch (Gurarda, 2019; ASTM D6193, 2016). These stitch types can provide advantages such as security, strength, flexibility, low cost, ease of unstitching, etc., for different application areas. Therefore, the effects of different stitch types on the seam quality of workwears should be investigated in detail.

In the literature, the seam strength of fabrics for different application areas has been studied in detail (Unal and Baykal, 2018; Choudhary and Goel, 2013; Vijay Kirubakar Raj and Renuka Devi, 2017; Frydrych and Greszta, 2016; Sular et al, 2015; Farhana et al, 2015; Namiranian et al, 2014; Bharani et al, 2012; Gurarda, 2008; Yesilpinar and Bahar, 2007; Tarafder et al, 2007; Gribaa et al, 2006; Mukhopadhyay et al, 2004; Chowdhary and Poynor, 2006). In most of these studies, lock stitch was utilized, as it is a universal stitch type and the lock stitch machine is
1
Dokuz Eylul University, Engineering Faculty, Textile Engineering Department, Izmir, Turkey
* Corresponding author: [email protected]
the most available sewing machine. Nevertheless, in recent years, the effects of other stitch types on the seam strength and other seam properties have been evaluated (Kara, 2020; Islam et al, 2020; Ates et al, 2019; Akter and Khan, 2015). On the other hand, there is a very limited number of research studies on the bursting strength of sewn fabrics (Yusof, 2013; Kovalova et al, 2019; Rajput et al, 2018; Yesilpinar, 1997).

As seen from the literature search, the seam properties of workwear are very important as they ensure the continuity of the fabric properties in the joint areas. Although the uniaxial seam strengths of different fabrics have been determined in several studies, the multiaxial (bursting) strength of workwear fabrics has not been studied in detail yet. Therefore, the main goal of this study was to evaluate the effects of stitch types and stitch combinations on the multiaxial bursting strength of sewn workwear fabrics. This study differs from the literature in that it determines the bursting strength of seams and compares 7 different stitch types/combinations for workwear.

2. Materials and Methods

As materials, a 280 g/m2 PET/Co (65/35) blend fabric and a 60 tex polyester core-spun sewing thread were utilized, as they are suitable for many workwear areas (Haifa, 2013; Midha et al., 2010; Verdu et al, 2009). The fabric was 0.46 mm thick and its weave was 2/2 warp rib. The warp and weft densities of the fabric were 55.1 and 19.1 threads/cm, respectively. The sewing thread was a 2-ply yarn and its breaking strength was 34.5 N.

Seven types of seams were produced by using basic stitch types (lock stitch, 2-yarn chain stitch and 3-yarn overlock stitch) and their combinations as 2 rows of stitches. The sample codes, stitch schematics and important stitch dimensions such as seam allowance and distances between stitch rows are given in Figure 1. The seam density for all types of stitches was 3 stitches/cm. Seam allowances were folded on the back side of the fabric and ironed before conditioning the test samples. For all samples, seams were formed both in the warp direction and the weft direction. All the samples were conditioned under standard atmosphere conditions (20±2ºC, 65±4 % relative humidity) for 24 h before the tests.

Figure 1. Sample codes and seam and stitch information

The bursting strength and bursting height of the non-sewn reference samples and the sewn samples were determined according to the ASTM D6797-15 standard, utilizing an Instron 4411 tensile tester with a ball-burst attachment. The sample size was kept as 14 cm x 14 cm and the test speed was 300 mm/min. Normally, the bursting strength test examines the strength of samples in all directions, so the test is not repeated for the warp and weft directions. However, in this study, the bursting test samples were prepared to have seams in the warp and weft directions, as shown in Figure 2.

Figure 2. Seam placements and sample definition (Kara and Yesilpinar, 2021)

Also, seam bursting efficiencies of samples were calculated as given in Equation 1.

Seam bursting efficiency, BE (%) = (bursting strength of the sewn sample / bursting strength of the non-sewn reference sample) * 100    Eq. (1)
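As a minimal illustration of Equation 1, the short Python sketch below computes the seam bursting efficiency from a pair of bursting strength readings; the numeric values are placeholders, not measurements from this study.

# Minimal sketch of Eq. (1): seam bursting efficiency in percent.
# The strength values below are placeholders, not measured data.

def bursting_efficiency(sewn_strength, reference_strength):
    """Seam bursting efficiency, BE (%), as defined in Eq. (1)."""
    return sewn_strength / reference_strength * 100.0

reference_n = 1250.0  # bursting strength of the non-sewn reference fabric (N), placeholder
sewn_n = 1150.0       # bursting strength of a sewn sample (N), placeholder
print(f"BE = {bursting_efficiency(sewn_n, reference_n):.2f} %")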

3. Results

Bursting strength results of samples are given in Figure 3. In addition, seam bursting
efficiencies of samples are given in Table 1.

According to the results, for the 3 main stitch types (L1, C1 and O3), the bursting strengths in the warp and weft directions were lower than those of the non-sewn reference samples (Figure 3). O3 exhibited the lowest bursting strength among these 3 stitch types. On the other hand, the samples sewn with stitch combinations (L2, C2, O3L and O3C) exhibited higher bursting strengths when compared to the non-sewn reference, especially for the warp samples. A similar result was obtained by Yesilpinar (1997), who compared the bursting strength of 1-row lock stitched samples with their 2-row lock stitched counterparts.

The second row of stitches in the combinations importantly supported the seam line against bursting. The chain stitch containing samples (C1, C2, O3C) exhibited higher bursting strengths when compared to their lock stitch containing counterparts (L1, L2, O3L), but the differences were small. In the literature, the bursting strength of lock stitched samples was found to be slightly higher than that of chain stitched samples; however, that study was made on knitted fabrics and the bursting test procedure was different (Rajput et al, 2018).

Figure 3. Bursting strength of samples

For the stitch combinations, the seam efficiencies against bursting were higher than 100% (Table 1). For all sample types, the bursting efficiency of the warp samples was higher than that of the weft samples. During the bursting tests, the stitches were generally broken for the 1-row stitch types (L1, C1, O3), whereas for the combination stitches (L2, C2, O3L, O3C) fabric tear near the seamline accompanied the stitch breakages, or only fabric tear near the seamline was observed in the bursting failure.

Table 1. Seam bursting efficiency


Seam Bursting Efficiency
Sample code Warp (%) Weft (%)
No seam / /
L1 92.25 76.25
C1 93.31 80.46
O3 68.95 56.64
L2 133.19 124.70
C2 129.69 122.15
O3L 124.71 88.23
O3C 126.33 93.23

The bursting heights of the samples are given in Figure 4. Only the L2 and C2 samples exhibited slightly higher bursting heights for the weft samples when compared to the non-sewn reference. Also, the bursting heights of the combination stitch samples were slightly higher when compared to the main stitch samples (L1, C1, O3). The bursting heights of the warp and weft samples were similar for all sample types.

Figure 4. Bursting height of samples

4. Discussion and Conclusions

In this study, the bursting properties of a workwear fabric sewn with 7 different stitches/stitch combinations were evaluated. The stitch combinations were formed by using lock stitch, chain stitch and overlock stitch together, as 2-row stitches.

Normally, the main function of the second stitch row is to form a safety stitch rather than to increase the overall performance of textile items. In spite of this fact, in this study, the usage of stitch combinations obviously contributed to the seam bursting strengths. The combined stitch rows responded to the bursting forces together and their bursting strengths were higher than those of the non-sewn reference fabrics, especially in the warp direction. This provided seam bursting efficiencies higher than 100 %. During the bursting tests, fabric tear near the seamline accompanied the stitch breakages, or only fabric tear near the seamline was observed, for the combination stitches. This phenomenon would make it harder to repair bursting failures in workwear. This situation was not valid for the samples sewn with single-row basic stitches. The bursting heights of all sewn samples were similar to those of the non-sewn reference samples.

In further studies, the seam strengths of the selected samples will be determined to observe the behavior of the stitch combinations during uniaxial seam strength tests. In addition, lock stitch or chain stitch can be selected, and the seam strength and seam bursting properties of workwear with lapped seams can be studied.

References

Akter, M., Khan, M. R., (2015). The effect of stitch types and sewing thread types on seam
strength for cotton apparel. International Journal of Scientific and Engineering
Research, 6(7), 198-205.

ASTM D6193 – 16. Standard practice for stitches and seams.

ASTM D6797 – 15. Standard test method for bursting strength of fabrics constant-rate-of-extension (CRE) ball burst test.

Ates, M., Gurarda, A., Ceven, E. K., (2019). Investigation of seam performance of chain
stitch and lockstitch used in denim trousers. Tekstil ve Muhendis, 26(115), 263-270.

Bharani M., Shiyamaladevi P. S. S., Mahendra Gowda R. V., (2012). Characterization of


seam strength and seam slippage on cotton fabric with woven structures and finish.
Research Journal of Engineering Sciences, 1(2), 41-50.

Choudhary, A. K., Goel, A., (2013). Effect of some fabric and sewing conditions on apparel
seam characteristics. Journal of Textiles, 2013, 1-7.

Chowdhary U, Poynor D., (2006). Impact of stitch density on seam strength, seam elongation,
and seam efficiency. International Journal of Consumer Studies, 30(6), 561-568.

Farhana, K., Syduzzaman, M., Yeasmin, D., (2015). Effect of sewing thread linear density on
apparel seam strength: A research on lapped and superimposed seam. Journal of
Advancements and Engineering and Technology, 3(3), 1-7.

Frydrych, I., Greszta, A., (2016). Analysis of lockstitch seam strength and its efficiency.
International Journal of Clothing Science and Technology, 28(4), 480-491.

Gribaa, S., Amar, S. B., Dogui, A., (2006). Influence of sewing parameters upon the tensile
behavior of textile assembly. International Journal of Clothing Science and Technology,
18(4), 235-246.

Gurarda, A., (2008). Investigation of the seam performance of PET/nylon-elastane woven


fabrics. Textile Research Journal, 78(1), 21-27.

Gurarda, A., (2019). Seam performance of garments. In: Textile Manufacturing Processes.
ISBN: 178985105X, 9781789851052.

Haifa, I. H., (2013). Seam properties of workwear. Pakistan Textile Journal, 62(1), 42-46.

Islam, M. R., Asif, A. A. H., Razzaque, A., Al Mamun, A., Maniruzzaman, M., (2020).
Analysis of seam strength and efficiency for 100% cotton plain woven fabric.
International Journal of Textile Science, 9(1), 21-24.

Kara, S., (2020). Comparison of sewn fabric bending rigidities obtained by heart loop method:
Effects of different stitching types and seam directions. Industria Textila 71(2), 105-
111.

Kara, S., Yesilpinar, S. (2021). Comparative study on the properties of taped seams with
different constructions. Fibres & Textiles in Eastern Europe, 29, 2(146), 54-60.

Kovalova, N, Kulhavy, P., Vosahlo, J., Havelka, A., (2019). Experimental and numerical
study of sewing seams of automobile seat covers under unidirectional and multiaxial
loading. Tekstil ve Konfeksiyon, 29(4), 322-335.

Midha, V. K., Kothari, V. K., Chattopadhyay, R., Mukhopadhyay, A., (2010). Effect of workwear fabric characteristics on the changes in tensile properties of sewing threads after sewing. Journal of Engineered Fibers and Fabrics, 5(1), 31-38.

Mukhopadhyay, A., Sikka, M., Karmakar, A.K., (2004). Impact of laundering on the seam
tensile properties of suiting fabric. International Journal of Clothing Science and
Technology, 16(4), 394-403.

Namiranian, R., Shaikhzadeh Najar, S., Etrati, S. M., Manich, A. M., (2014). Seam slippage
and seam strength behavior of elastic woven fabrics under static loading. Indian Journal
of Fibre and Textile Research, 39(3), 221-229.

Paul, R. (Ed.)., (2019). High performance technical textiles. John Wiley & Sons.

Rajput, B., Kakde, M., Gulhane, S., Mohite, S., Raichurkar, P. P., (2018). Effect of sewing
parameters on seam strength and seam efficiency. Trends in Textile Engineering and
Fashion Technology, 4(1), 4-5.

Sular, V., Mesegul, C., Kefsiz, H., Seki, Y., (2015). A comparative study on seam
performance of cotton and polyester woven fabrics. The Journal of the Textile Institute,
106(1), 19-30.

Tarafder, N., Karmakar, R., Mondal, M., (2007). The effect of stitch density on seam
performance of garments stitched from plain and twill fabrics. Man-Made Textiles in
India, 50(8), 298-302.

Unal, B. Z., Baykal, P. D., (2018). Determining the effects of different sewing threads and
different washing types on fabric tensile and sewing strength properties. Tekstil ve
Konfeksiyon, 28(1), 34-42.

Verdu, P., Rego, J. M., Nieto, J., Blanes, M., (2009). Comfort analysis of woven
cotton/polyester fabrics modified with a new elastic fiber, part 1 preliminary analysis of
comfort and mechanical properties. Textile Research Journal, 79(1), 14-23.

Vijay Kirubakar Raj, D., Renuka Devi, M., (2017). Performance analysis of the mechanical
behaviour of seams with various sewing parameters for cotton canopy fabrics. Fibres
and Textiles in Eastern Europe, 25, 4(124), 129-134.

Yesilpinar, S., (1997). Kullanim sirasinda giysilerde oluşan dikis patlamalari üzerine bir
arastirma. Tekstil ve Mühendis, 11(56), 30-41.

Yeşilpınar, S., Bahar, S., (2007). The effect of sewing and washing processes on the seam
strength of denim trousers. AATCC review, 7(10), 27-31.
Yusof, N. A., (2013). Effect of seam type on selected seam tensile behaviour under multi-
axial forces (Doctoral dissertation, University of Otago).

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Geomorphometric Analysis of the Sub-watersheds in the Eastern Black Sea region, Turkey

Senem TEKİN1*, Tolga ÇAN2

Abstract: Topographic and drainage network features are important parameters in watershed management planning, since the physical characteristics of the watershed control the sustainability of water resources. In this study, the physical characteristics and geomorphometric features of the sub-watersheds of the Eastern Black Sea, one of the 25 main watersheds of Turkey, were evaluated. GIS based geomorphometric analyses are important in terms of erosional processes and the interpretation of the watershed condition. The geomorphometric evaluations were carried out by considering linear (Bifurcation ratio, Length ratio, Texture ratio), areal (Drainage density, Basin shape, Stream frequency) and surface morphometry (Hypsometric integral/curve) parameters.

Keywords: Watershed, drainage network, Eastern Black Sea, geomorphometric analysis.

1. Introduction

Watershed morphometry and the development of stream profiles are configured over long time periods, constructing the boundary conditions that control the river flow system. These controls constrain the range of river behaviour and the associated morphological attributes within a watershed. By its nature, regional geologic and climatic factors, among others, mainly regulate the topography, sediment transport and the discharge regime (Fryirs and Brierley 2013). Topographic and drainage network features of watersheds are important parameters in watershed management planning, since the physical characteristics of the basin control the sustainability of water resources. With the surface morphometric features, the erosion cycles of the basins can be determined and numerical interpretations can be made. Therefore, geomorphometric studies using digital basin analyses are important in terms of the erosional processes in the basin and the interpretation of the basin situation. Geomorphometric evaluations are carried out by considering linear (Bifurcation ratio, Length ratio, Texture ratio), areal (Drainage density, Basin shape, Stream frequency) and surface morphometry (Hypsometric integral/curve) parameters. In this study, linear, areal and surface geomorphometric evaluations were made in 38 sub-watersheds with an areal size greater than 100 km2 in the Eastern Black Sea main drainage basin of 22,867.06 km2, which is one of the 25 main watersheds in Turkey (Figure 1).

1
Adiyaman University, Mining and Mineral Extraction Department, School of Technical Sciences, Adıyaman,
Turkey.
2
Çukurova University, Department of Geology Engineering, Adana, Turkey
* Corresponding author: [email protected]
Figure 1. Location map of the study area.

2. Eastern Black Sea Basin of General Features

The Eastern Black Sea basin is one of the 25 main watersheds of Turkey. The Black Sea region consists of 3 sub-regions: East, Central and West. Owing to both its climatic characteristics and its mountainous settlements, the Eastern Black Sea basin constitutes a region with a lower population compared to the Central and Western Black Sea.

The digital elevation model (DEM) is the most basic parameter of geomorphometry studies. In this study, ASTER GDEM data, which is available free of charge and has a resolution of 28 meters, was used. According to these data, the elevation varies between 0 and 3866 m in the Eastern Black Sea basin and increases relatively as one goes from the coast towards the inner parts of the basin (Figure 2). Areas below 1000 m altitude cover 40.44% of the basin, while the upper parts of the basin above 2000 m correspond to 20%. A slope map was prepared by using the DEM of the Eastern Black Sea basin (Figure 3). According to this map, slopes below 20° correspond to 44% of the basin, while areas with slopes greater than 40° correspond to 6%.

Figure 2. Digital elevation model of the Eastern Black Sea basin.

Figure 3. Slope map of the Eastern Black Sea basin.

The landform map was prepared according to Jenness (2006) by using the topographic position index. Accordingly, the basin consists of the Valleys (small drainage systems, 5.61%), Lower Slopes (0.64%), Gentle Slopes (17.25%), Steep Slopes (16.08%), Upper Slopes (41.60%) and Ridges (18.81%) classes (Figure 4).

According to the 1/100000 scale CORINE Land Cover Classification, which consists of three different levels determined by the European Environment Agency, the Eastern Black Sea basin consists of 9 different classes (Figure 5). Among these classes, agricultural lands are generally located in areas below 1000 m and correspond to 34% of the basin. Forest cover, which is seen throughout the basin and has the largest areal extent with 37%, is generally located in the middle and upper parts of the basin.

Figure 4. Landform classification of the Eastern Black Sea basin.

Figure 5. CORINE land cover classification of the Eastern Black Sea basin.

According to the Köppen classification, which is the most commonly used climate classification globally, there are 10 different climate types in Turkey. Climate data prepared by Worldclim (http://www.worldclim.org/version1) as a series of global climate layers with a spatial resolution of approximately 1 km2 were evaluated for the Eastern Black Sea basin. According to the annual precipitation data for 1960-1990, the lowest precipitation in the basin was recorded as 463 mm and the highest precipitation as 2230 mm (Figure 6a). According to the predictions for the future based on today's data, it is thought that there will be an increase of approximately 200 mm in the annual precipitation of the Eastern Black Sea basin by 2070 (Figure 6b).

Figure 6. Annual precipitation values for the years 1960-1990 (a) and predicted annual precipitation data for the year 2070 (b), based on current data (http://www.worldclim.org/version1).

3. Methodology

Depending on the formal features and numerical values of the rivers in the basins, the
morphometric features of the river networks are evaluated numerically and the relationship
between streams of different sizes, called the "drainage composition", can be expressed
mathematically (Horton, 1945, Strahler, 1952). Geomorphometry is defined by Pike (2002),
as “the science of digital land surface analysis”. With geomorphometry, the characteristic
features and morphological processes of water catchments are examined (Horton, 1932;
Strahler, 1952; Chorley, 1957; Patton, 1976; Keller and Pinter, 1996; Pike, 2009).

When examined in general, geomorphometric analyses are the whole of the evaluations concerning the stage of the erosional activity of streams. They are evaluated under three main headings: linear, areal and surficial (Table 1). Linear morphometric analyses include the detailed examination of the rivers in stream catchments. These analyses mainly include the Bifurcation Ratio (Rb) (Schumm, 1956), Length Ratio (RL) (Schumm, 1956) and Texture Ratio (T) (Horton, 1945) parameters. While linear morphometric analyses are evaluated only on the drainage network of the basin, areal morphometric properties are obtained from the values of both the drainage network and the entire basin surface (Ritter et al., 1995). They are important parameters in terms of the collection of precipitation falling into the basin and the accumulation of surface runoff. Areal morphometric evaluations consist of the Drainage Density (Dd), Stream Frequency (Fs) (Horton, 1945) and Basin Shape (RF) (Horton, 1945) parameters. Linear and areal morphometric features are calculated from in-basin values such as the total number of tributaries of an order (Nu), the total number of tributaries of the next higher order (N(u+1)), the basin perimeter (P), the basin area (A), and the total stream lengths of an order (Lu) and of the next higher order (Lu+1) (Table 1). The evaluations of the surface morphometry were determined by the hypsometric curve and hypsometric integral parameters. The hypsometric curve is obtained by plotting the ratio of the area above a contour line of height h to the area of the entire drainage basin (a/A, on the x axis) against the ratio of that contour elevation to the highest elevation of the basin (h/H, on the y axis). The hypsometric integral (HI) is an important parameter in determining the watershed characteristics (Ritter et al., 2002). According to the HI values, basins with HI ≤ 0.3 are defined as old (monadnock stage), in which the erosional processes are balanced and the catchment is completely in equilibrium; values of 0.3 ≤ HI ≤ 0.6 define the mature stage; and HI ≥ 0.6 defines young or non-equilibrium basins, that is, basins highly susceptible to erosion (Strahler 1952; Sarangi et al., 2001). The hypsometric analyses were performed using the hypsometry extension in the ArcGIS environment.
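As an illustration of how the hypsometric curve and integral can be derived from the elevation values of a basin, the following Python sketch uses NumPy; it works on a plain array of elevations and approximates HI with the widely used elevation-relief ratio, which is an assumption and not necessarily the exact procedure of the ArcGIS hypsometry extension.

import numpy as np

def hypsometric_curve(elev, n_points=50):
    # Relative area above each relative elevation: returns (a/A, h/H) value pairs.
    elev = np.asarray(elev, dtype=float).ravel()
    h_min, h_max = elev.min(), elev.max()
    rel_h = np.linspace(0.0, 1.0, n_points)                     # h/H values
    thresholds = h_min + rel_h * (h_max - h_min)
    rel_a = np.array([(elev >= t).mean() for t in thresholds])  # a/A values
    return rel_a, rel_h

def hypsometric_integral(elev):
    # Elevation-relief ratio approximation of the hypsometric integral.
    elev = np.asarray(elev, dtype=float).ravel()
    return (elev.mean() - elev.min()) / (elev.max() - elev.min())

# Placeholder elevations (m); a real case would use the DEM cells inside one sub-basin.
elev = np.random.default_rng(0).uniform(200.0, 2800.0, size=10_000)
hi = hypsometric_integral(elev)
stage = "young" if hi >= 0.6 else ("mature" if hi >= 0.3 else "old (monadnock)")
print(f"HI = {hi:.2f} -> {stage}")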

Table 1. Linear, areal and relief aspects calculated for the morphometric analysis
Index Equation Source
Bifurcation Ratio Rb=Nu/N(u+1) Schumm (1956)
Length Ratio Rl=Lu/Lu+1 Horton (1945)
Texture Ratio T = Nu1*(1/P) Horton (1932)
Drainage Density Dd = Lu/A Horton (1932)
Stream frequency Fs = Nu/A Horton (1932)
Basin shape Ff = A/Lb² Horton (1932)
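To make the formulas in Table 1 concrete, the short Python sketch below evaluates them for a single hypothetical basin; all input counts, lengths and areas are placeholders chosen only for illustration, not values taken from this study.

# Placeholder inputs for one hypothetical basin (illustrative values only).
Nu = 120          # number of streams of order u (here the 1st order)
Nu_next = 48      # number of streams of order u+1
Lu = 400.0        # total length of order-u streams (km)
Lu_next = 210.0   # total length of order-(u+1) streams (km)
P = 250.0         # basin perimeter (km)
A = 1200.0        # basin area (km2)
Lb = 70.0         # basin length (km)

Rb = Nu / Nu_next           # bifurcation ratio, Rb = Nu / N(u+1)
Rl = Lu / Lu_next           # length ratio, Rl = Lu / Lu+1
T = Nu * (1.0 / P)          # texture ratio, T = Nu1 * (1/P)
Dd = (Lu + Lu_next) / A     # drainage density, total stream length over basin area
Fs = (Nu + Nu_next) / A     # stream frequency, total stream count over basin area
Ff = A / Lb**2              # basin shape (form factor), Ff = A / Lb^2

print(f"Rb={Rb:.2f} Rl={Rl:.2f} T={T:.2f} Dd={Dd:.3f} Fs={Fs:.3f} Ff={Ff:.2f}")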

4. Eastern Black Sea Basin Geomorphometry

All basin classes of the Eastern Black Sea basin, including the stream drainage branches down to micro dimensions, were derived from the DEM using the Hydrology extension in the ArcGIS environment. Flow direction and cumulative flow values were prepared in raster format, while the drainage networks and basin boundaries were prepared in vector format. By determining the starting points of the basin, the rivers in the Eastern Black Sea basin and all the basin boundaries formed by these rivers were created using the ArcGIS model configuration technique. For the calculation of the geomorphometric analyses, the branches in the drainage network were ranked hierarchically according to their positions using the Strahler method. With this method, a drainage network model up to the 5th degree was obtained in the 22,867.06 km2 basin, with a total of 1682 river segments. The Eastern Black Sea basin has 83 sub-basins, whose areal sizes vary between 30.45 and 1241.04 km2. In this study, linear, areal and surface geomorphometric evaluations were made in the 38 basins (Figure 7) with an area of more than 100 km2 in the Eastern Black Sea basin.
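The hierarchical ranking of the drainage branches mentioned above follows the Strahler rule, which can be sketched in plain Python as below; the tiny stream-segment tree used here is hypothetical and only demonstrates the ordering rule, not the actual network extracted in ArcGIS.

# Minimal Strahler ordering sketch on a hypothetical stream-segment tree.
# Each key lists the IDs of the segments flowing directly into it.
upstream = {
    "outlet": ["a", "b"],
    "a": ["c", "d"],
    "b": [],
    "c": [],
    "d": [],
}

def strahler(segment):
    # Headwater segments get order 1; a segment's order rises only when at
    # least two of its upstream children share the current maximum order.
    children = upstream.get(segment, [])
    if not children:
        return 1
    orders = [strahler(child) for child in children]
    top = max(orders)
    return top + 1 if orders.count(top) >= 2 else top

print({segment: strahler(segment) for segment in upstream})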

Figure 7. Rivers and sub-basin boundaries of the Eastern Black Sea basin.

4.1. Linear Morphometric Evaluations of the Eastern Black Sea Sub-Basins

Linear morphometric parameters consist of the numerical examination of the river systems forming the basins (Ritter et al., 1995; Özdemir, 2011; Hajam et al., 2013). The basis of the linear parameters is the number of river tributaries obtained by stream ordering and their relations with each other. The linear morphometric evaluations of the Eastern Black Sea sub-basins were carried out with the bifurcation ratio (Rb), average river length (Lum), river length ratio (Rl) and texture ratio parameters. Among the 38 sub-basins of the Eastern Black Sea basin, 15 are composed of river systems with 2 indexes (Table 2). Among these basins, it is seen that the 1st index of the 6th basin has the lowest average river length, and the 2nd index of the 20th basin has the highest. The bifurcation ratio (Rb) values range from 1.13 to 2.00 and present relatively low values, which can be interpreted as a high drainage density. The basin with the highest average river length is basin 18. The number of basins formed by river systems with 3 indexes is 15, and the average river length of these basins has been calculated as 4.44. The lowest average river length was calculated as 2.37 in the basin numbered 26, while the second index of basin no. 30 was calculated as 19.24. The average bifurcation ratio of these basins varies between 1.53 and 7.08 (Table 3). Among the sub-basins of the Eastern Black Sea basin, basins 8, 12, 15, 23, 31 and 35 consist of 4 indexes. Their average river lengths vary between 3.51 and 5.87 (Table 4), and the 1st index of basin 8 has a total length of 621.96 km. The bifurcation ratio parameter varies between 1.64 and 2.64, and basin no. 31 has the highest bifurcation ratio. Among the sub-basins in the Eastern Black Sea basin, only basin no. 16 consists of 5 indexes. The indexes contain 136, 66, 22, 14 and 32 segments, respectively. Looking at the lengths of the indexes, it is seen that the 4th index has the lowest total length (47.83 km) and the 1st index has the highest, with 441.62 km. The average bifurcation ratio of basin no. 16 was calculated as 1.77. The texture ratio (T) is calculated as the ratio between the total number of streams belonging to the 1st index and the perimeter of the basin, based on the indexes created according to the Strahler method. Accordingly, it varies between 0.03 and 0.37 for the Eastern Black Sea basin.

Table 2. Bifurcation ratio (Rb), Average 1 7 15.81 2.26 - -
37
river length (Lum) and river length ratio 2 6 31.27 5.21 1.17 0.51
(Rl) parameter values of basins with 2
indexes.
Lu Lum
Basin
index L Rb Rl
Number
(km) (km) Table 3. Bifurcation ratio (Rb), Average
river length (Lum) and river length ratio
1 6 60.86 10.14 - -
1 (Rl) parameter values of basins with
2 5 18.30 3.66 1.20 3.33 3 indexes.
1 8 34.32 4.29 - - Lu Lum
2 Havza inde
2 7 36.60 5.23 1.14 0.94 Number x
L (km) (km) Rb Rl

1 5 8.45 1.69 - - 1 16 55.30 3.46 - -


6
2 3 26.10 8.70 1.67 0.32 3 2 5 16.09 3.22 3.20 3.44

1 5 20.38 4.08 - - 3 9 52.15 5.79 0.56 0.31


7
2 4 20.94 5.23 1.25 0.97 1 11 66.33 6.03 - -

1 4 21.91 5.48 - - 10.8


11 4 54.36
2 5 7 2.20 1.22
2 3 23.92 7.97 1.33 0.92
3 4 16.72 4.18 1.25 3.25
1 4 14.53 3.63 - -
13 1 12 32.23 2.69 - -
2 3 15.21 5.07 1.33 0.96
9 2 6 22.47 3.75 2.00 1.43
1 3 34.09 11.36 - -
18 3 5 37.86 7.57 1.20 0.59
2 2 5.68 2.84 1.50 6.00
1 31 96.74 3.12 - -
1 5 15.21 3.04 - -
19 10 2 15 64.74 4.32 2.07 1.49
2 4 21.74 5.44 1.25 0.70
3 15 65.80 4.39 1.00 0.98
1 3 12.94 4.31 - -
20 1 26 63.66 2.45 - -
2 2 35.31 17.66 1.50 0.37
14 2 7 36.06 5.15 3.71 1.77
1 4 8.11 2.03 - -
24 3 18 62.08 3.45 0.39 0.58
2 2 17.87 8.93 2.00 0.45
1 12 38.25 3.19 - -
1 7 25.53 3.65 - -
25 17 2 5 18.16 3.63 2.40 2.11
2 6 53.02 8.84 1.17 0.48
3 6 30.46 5.08 0.83 0.60
1 2 18.22 9.11 - -
27 1 12 31.13 2.59 - -
2 1 14.10 14.10 2.00 1.29
21 2 9 29.79 3.31 1.33 1.05
1 8 24.16 3.02 - -
28 3 2 10.88 5.44 4.50 2.74
2 7 36.82 5.26 1.14 0.66
1 12 31.13 2.59 - -
1 9 34.43 3.83 - - 22
34 2 9 29.79 3.31 1.33 1.05
2 8 33.75 4.22 1.13 1.02

3 2 10.88 5.44 4.50 2.74 4 32 107.30 3.35 0.63 0.60

1 34 80.73 2.37 - - 1 36 129.41 3.59 - -

26 2 18 69.43 3.86 1.89 1.16 2 17 62.87 3.70 2.12 2.06


12
3 15 53.52 3.57 1.20 1.30 3 5 16.95 3.39 3.40 3.71

104.5 4 13 55.42 4.26 0.38 0.31


1 37 8 2.83 - -
1 25 78.71 3.15 - -
29
2 22 62.51 2.84 1.68 1.67
2 15 92.01 6.13 1.67 0.86
3 14 42.99 3.07 1.57 1.45 15
3 7 57.30 8.19 2.14 1.61
1 11 31.67 2.88 - -
4 2 12.00 6.00 3.50 4.78
19.2
30 38.48
2 2 4 5.50 0.82 1 39 124.32 3.19 - -

3 5 42.33 8.47 0.40 0.91 2 19 97.00 5.11 2.05 1.28


23
1 15 70.74 4.72 - - 3 10 29.41 2.94 1.90 3.30

2 13 38.52 2.96 1.15 1.84 4 9 30.29 3.37 1.11 0.97


32
13.0 1 44 139.05 3.16 - -
2.84
3 1 2.84 0 13.57
2 23 83.59 3.63 1.91 1.66
1 16 48.09 3.01 - - 31
3 4 15.20 3.80 5.75 5.50
33 2 11 36.73 3.34 1.45 1.31
4 16 55.09 3.44 0.25 0.28
3 4 22.39 5.60 2.75 1.64
1 43 144.49 3.36 - -
1 8 23.89 2.99 - -
2 21 82.71 3.94 2.05 1.75
36 2 5 21.60 4.32 1.60 1.11 35
3 17 63.13 3.71 1.24 1.31
3 2 7.25 3.62 2.50 2.98
4 4 21.47 5.37 4.25 2.94
1 11 42.72 3.88 - -

2 7 28.81 4.12 1.57 1.48


38 Table 5. Bifurcation ratio (Rb), Average
2.3 2.3 river length (Lum) and river length ratio
12.39
3 3 4.13 3 2
(Rl) parameter values of basins with 5
indexes.
Table 4. Bifurcation ratio (Rb), Average
Lu Lum
river length (Lum) and river length ratio Havza Number index
(Rl) parameter values of basins with 4 L (km) (km) Rb Rl

indexes. 1 136 441.62 3.25 - -

Lu Lum 2 66 235.24 3.56 2.06 1.88


Havza No
Dizin L (km) (km) Rb Rl 16 3 22 67.48 3.07 3.00 3.49

1 90 326.07 3.62 - - 4 14 47.83 3.42 1.57 1.41

8 2 37 123.96 3.35 2.43 2.63 5 32 92.43 2.89 0.44 0.52

3 20 64.63 3.23 1.85 1.92

4.2. Areal Morphometric Evaluations of the Eastern Black Sea Sub-Basins

The morphometric parameters formed by the spatial characteristics of the basins are very important in terms of the collection of precipitation falling into the basin and the accumulation of surface runoff. In the Eastern Black Sea basin, the perimeter lengths of the 38 sub-basins that are larger than 100 km2 in area vary between 50 and 370. Basin 16, which has the largest areal size, is 3306.46 km2. The drainage density (Dd), which expresses the degree of fragmentation of the basins by the rivers, varies between 0.2 and 0.6 according to the parameter calculations. It depends on the interaction of more than one factor and itself gives clues about the water and sediment transport of the streams. Among the factors that determine the drainage density are climate, vegetation, soil and rock structure, surface features, and erosion and deposition processes (Malik et al., 2011; Elbaşı and Özdemir, 2018). The drainage density values of the sub-basins of the Eastern Black Sea basin are in the range that is considered to be low, which shows that the basin has a high infiltration capacity owing to its vegetation and soil structure. According to the stream frequency analysis, the frequency of the rivers in the 38 basins varies between 0 and 0.3. It is seen that the sub-basins of the Eastern Black Sea basin have a low stream frequency, and the stream frequency values present similar characteristics to the drainage density parameter. It is known that as the value calculated for the circularity ratio parameter approaches 1, the basin displays a longitudinal basin shape.
4.3. Surface Morphometric Evaluations of the Eastern Black Sea Sub-Basins

The hypsometric curve and the hypsometric integral indicate the erosional state of the basin and its stage of youth, maturity or old age (Strahler, 1952; Ritter et al., 2002). According to the hypsometric curve and integral results, it is seen that 13 basins are in the maturity stage and have completed their erosion processes (Figure 9a). On the other hand, 18 basins approach the maturity stage; in general, more than 60% of these basins have been eroded by erosional processes, and erosion continues in the regions towards the bottoms of the basins (Figure 9b). Erosion is observed to continue in the remaining basins 9, 16, 22, 23, 26, 30 and 31 (Figure 9c). According to the hypsometric curve and integral values, the youngest basin among the sub-basins of the Eastern Black Sea basin is basin 26.
5. Results

In this study, geomorphometric evaluations were carried out as linear, areal and surface morphometric features for the basins located in the Eastern Black Sea basin and larger than 100 km2 in area. According to the linear morphometric parameter results, similar results were obtained in the 38 basins and it was calculated that they generally have low bifurcation degrees. Bifurcation ratio values lower than 3 show that almost all of the basins have a high drainage density, that the basins are not affected by any tectonic activity, and that they also have geologically heterogeneous features. According to the areal morphometric properties, low stream frequencies and similarly low drainage density values were calculated. These results show that lithological units with a high infiltration capacity outcrop throughout the basin. According to the results of the surface morphometric analysis, erosion still continues in 7 of the 38 sub-basins, and the youngest one among them is basin 26.

Figure 9. Distribution of the sub-basins in the Eastern Black Sea basin according to the mature (a), old (b) and young (c) stages.

References

Chorley, R. J., (1957). Climate and morphometry. The Journal of Geology, 65(6), 627–638.

CORINE, (2006). (Coordination of Information on the Environment), http://www.corine.itu.edu.tr/typography.html.

Elbaşı, E., Ozdemir, H., (2018). Morphometric Analysis of the Marmara Sea River Basins.
Journal of Geography. 63-84. 10.26650/JGEOG418790.

Fryirs, K.A. and Brierley, G.J., (2013). Geomorphic Analysis of River Systems: An Approach
to Reading the Landscape, Blackwell Publishing Ltd. 360p.

Hajam, R.A. Hamid, A., Bhat, S., (2013). Application of morphometric analysis for geo-
hydrological studies using geo-spatial technology-A case study of Vishav Drainage Basin.
Hydrol Current Res. 4. 1-12.

Horton, R. E., (1932). Drainage basin characteristics. American Geophysics Union, 13(1),
350–361.

Horton, R. E., (1945). Erosional development of streams and their drainage basins;
Hydrophysical approach to quantitative morphology. Bulletin of The Geological Society of
America, 56, 275–330.

Keller, E. A., and Pinter, N., (1996). Active tectonics: Earthquakes, uplift and landscape.
London, UK: Pearson.

Özdemir, H., (2011). Havza morfometrisi ve taşkınlar. D. Ekinci (Ed.), Fiziki coğrafya
araştırmaları: Sistematik ve bölgesel içinde (s. 507–526). İstanbul: Babil.

Patton, P. C., and Baker, V.R., (1976). Morphometry and floods in small drainage basins
subject to diverse hydrogeomorphic controls. Water Resources Research, 12(5), 941–952.

Pike, R., Evans, I., and Hengl, T. (2009). Geomorphometry: A brief guide. In T. Hengl and H.
I. Reuter (Eds.), Geomorphometry: Concepts, software, applications (pp. 3–30). New York,
NY: Elsevier.

Pike, S., (2002). Destination image analysis: A review of 142 papers from 1973-2000.
Tourism Management. 23(5): 541-549.

Ritter, D.F., Kochel, R.C., and Miller, J.R., (1995). Process geomorphology. Dubuque, IA:
William C. Brown.

Malik, M.I., Bhat, M.S., Kuchay, N.A. (2011). Watershed based drainage morphometric analysis of Lidder catchment in Kashmir valley using Geographical Information System. Recent Research in Science and Technology, 3(4), 118–126.

Schumm, S. A., (1956). Evolution of drainage systems and slopes in badlands at Perth
Amboy, New Jersey. GSA Bulletin, 67, 597–646. https://doi.org/10.1130/0016-
7606(1956)67[597:EODSAS]2.0. CO;2.

Strahler, A. N., (1952). Quantitative analysis of watershed geomorphology. Transamer


Geophys Union, 38, 913–920.

Worldclim (http://www.worldclim.org/version1).

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Influence of Cell Transportation Microchannel Wall Quality on Cell Deposition Risk: a DPM Analysis

Daver ALİ1*

Abstract: The dynamic cell culture process has been used in tissue engineering. Like other microfluidic systems, these devices always run the risk of particle deposition and eventually clogging up. Since microchannels came into use, one of the difficulties has been producing microchannels with ideal surface smoothness. In this study, the movement of stem cells through a microchannel with roughness was investigated theoretically using discrete phase computational fluid dynamics. The surface waviness was modeled using sinusoidal equations with different severities. Also, four cell sizes of 10, 15, 20, and 30 μm were selected for the cell discrete phase modeling. The analysis results showed that a surface roughness of 5 µm in the microchannel could increase the risk of its obstruction through more cell sedimentation. This phenomenon was more severe in models with larger cells or a lower fluid flow rate. This study further elucidates the effect of microchannel surface quality on dynamic cell culture success.

Keywords: Microchannel, Surface Roughness, Cell Culture, Stem-cells, DPM Analysis.

1. Introduction

Microfluidic devices are a highly integrated, multidisciplinary applied science that has been in development for over 30 years (Whitesides, 2006). One of the areas where micro-devices are widely used is biology and cellular studies. Dynamic cell culture is a modern method of obtaining tissue containing living cells and is widely used by tissue engineers (Kato et al., 2018). Many parameters are involved, including the transfer rate and the number of healthy cells delivered to the destination from the bioreactor (Lee et al., 2021). Dynamic cell culture is performed on a small scale, and therefore the equipment used in such a process is minimal. For example, cells are transported by tiny connection pipes to scaffolds at very low speeds. Immediate success in the cells' migration to their destination (usually a three-dimensional scaffold) is the key to success in the later stages of such a process (Campos Marin et al., 2017). An effective cell transfer requires a meticulous design of the microchannels and an inherent quality of their surfaces. As in other microchannels, one of the risks in cell transfer is the risk of cells settling on the microchannel wall, which potentially depends on the medium flow boundary conditions. This problem can be caused by several factors (Zhou et al., 2021, Li et al., 2021). Microchannel studies have already shown the effect of the surface roughness on the modality of fluid flow within microchannels. For example, the relationship of phenomena such as pressure drop or heat transfer rate in a microchannel to its surface roughness has been shown. Therefore, it is conceivable that the surface roughness affects cell migration dynamics within them (Yuan et al., 2016). Because it is difficult and expensive to obtain microchannels with perfectly smooth surfaces, understanding the effect of microchannel surface quality on cell culture efficiency can be valuable in optimizing the time and cost in dynamic cell culture systems. In
1
Karabuk University, Faculty of Engineering, Department of Medical Engineering, Karabuk, Turkey
* Corresponding author: [email protected]
this study, the effect of surface roughness of one of the walls of a cell transfer microchannel
(bottom wall) on cell deposition was theoretically investigated using discrete cell modeling.
2. Material and Method

The surface roughness of microchannels can vary between 1-10 µm depending on the production techniques (Weaver et al., 2011). There are different approaches to modeling surface roughness in microchannels. One of them is modeling the surface irregularity with sinusoidal curves (Dharaiya and Kandlikar, 2013). In this work, a microchannel with a rectangular cross-sectional area of 3000×1000 µm and a length of 40 mm (Marin et al., 2017) was created, and a roughness of 5 µm was applied to the bottom surface using sinusoidal curves. Based on the results of Torres et al. (Cámara-Torres et al., 2020), the density and the related viscosity assigned to the culture media were 1024 kg/m3 and 0.025 Pa.s, respectively. Also, four different inlet flow rates of 20, 50, 90, and 180 μl/min were selected to investigate the effect of fluid velocity on the fate of cells within the microchannel. The governing equations for the CFD and DPM analyses of the models can be found elsewhere (Ali, 2019).
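To illustrate the sinusoidal idealization of the bottom-wall roughness, the short Python sketch below generates a wavy wall profile with a 5 µm amplitude along the 40 mm channel; the 200 µm wavelength is an assumed value used only for demonstration and is not taken from the CFD model of this study.

import numpy as np

length_um = 40_000.0    # channel length (40 mm) expressed in micrometres
amplitude_um = 5.0      # roughness amplitude matching the 5 um waviness
wavelength_um = 200.0   # assumed wavelength, for illustration only

# Sample the bottom-wall height along the channel axis.
x = np.linspace(0.0, length_um, 4001)
y_wall = amplitude_um * np.sin(2.0 * np.pi * x / wavelength_um)

print(f"peak-to-valley height: {y_wall.max() - y_wall.min():.1f} um")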

Figure 1. The microchannel model with a rough surface used in this study.

3. Results

To understand the effect of surface roughness on the number of cells that failed to exit the microchannel, the percentage of trapped cells was compared with that of a similar microchannel with a smooth surface, as shown in Figure 2.

Figure 2. The number of sedimented cells was normalized with the total number of injected
cells for a cell size of a) 10 µm, b) 15 µm, c) 20 µm, and d) 30 µm, respectively.

As can be seen, the three factors of fluid flow rate, cell size, and surface roughness were effective on the number of sedimented cells. A fluid flow rate of 20 μl/min in all models, regardless of cell size or microchannel surface roughness, carries a high risk of sedimentation. Except for the first group with a cell size of 10 µm, surface roughness increased cell deposition in all groups.

4. Discussion and Conclusions

What can be deduced from the results of this theoretical work is that many parameters, including the fluid flow rate and the size of the cells themselves, are involved in the design and regulation of a dynamic cell culture system. However, the quality of the microchannel, in particular its surface quality, together with the mentioned factors, can be the determining factor in the smooth and unobstructed transfer of cells to the destination.

References

ALI, D. 2019. Effect of scaffold architecture on cell seeding efficiency: A discrete phase
model CFD analysis. Computers in biology and medicine, 109, 62-69.
CÁMARA-TORRES, M., SINHA, R., MOTA, C. & MORONI, L. 2020. Improving cell
distribution on 3D additive manufactured scaffolds through engineered seeding media
density and viscosity. Acta Biomaterialia, 101, 183-195.
CAMPOS MARIN, A., GROSSI, T., BIANCHI, E., DUBINI, G. & LACROIX, D. 2017. 2D
µ-Particle Image Velocimetry and Computational Fluid Dynamics Study Within a 3D
Porous Scaffold. Annals of Biomedical Engineering, 45, 1341-1351.
DHARAIYA, V. V. & KANDLIKAR, S. G. 2013. A numerical study on the effects of 2d
structured sinusoidal elements on fluid flow and heat transfer at microscale.
International Journal of Heat and Mass Transfer, 57, 190-201.

KATO, Y., KIM, M.-H. & KINO-OKA, M. 2018. Comparison of growth kinetics between
static and dynamic cultures of human induced pluripotent stem cells. Journal of
Bioscience and Bioengineering, 125, 736-740.
LEE, H., MARIN-ARAUJO, A. E., AOKI, F. G., HAYKAL, S., WADDELL, T. K., AMON,
C. H., ROMERO, D. A. & KAROUBI, G. 2021. Computational fluid dynamics for
enhanced tracheal bioreactor design and long-segment graft recellularization.
Scientific Reports, 11, 1187.
LI, C., KUSS, M., KONG, Y., NIE, F., LIU, X., LIU, B., DUNAEVSKY, A., FAYAD, P.,
DUAN, B. & LI, X. 2021. 3D Printed Hydrogels with Aligned Microchannels to
Guide Neural Stem Cell Migration. ACS Biomaterials Science & Engineering, 7, 690-
700.
MARIN, A. C., GROSSI, T., BIANCHI, E., DUBINI, G. & LACROIX, D. 2017. µ-Particle
tracking velocimetry and computational fluid dynamics study of cell seeding within a
3D porous scaffold. Journal of the Mechanical Behavior of Biomedical Materials, 75,
463-469.
WEAVER, S. A., BARRINGER, M. D. & THOLE, K. A. 2011. Microchannels With
Manufacturing Roughness Levels. Journal of Turbomachinery-Transactions of the
Asme, 133, 8.
WHITESIDES, G. M. 2006. The origins and the future of microfluidics. Nature, 442, 368-
373.
YUAN, X., TAO, Z., LI, H. & TIAN, Y. 2016. Experimental investigation of surface
roughness effects on flow behavior and heat transfer characteristics for circular
microchannels. Chinese Journal of Aeronautics, 29, 1575-1581.
ZHOU, Z., CUI, F., WEN, Q. & ZHOU, H. S. 2021. Effect of vimentin on cell migration in
collagen-coated microchannels: A mimetic physiological confined environment.
Biomicrofluidics, 15, 034105.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Processing of Seismic Signals on the Basis of AI during Oil Exploration

Ramziyya Garazade1*, Naila Allahverdiyeva2

Abstract: Preprocessing of seismic data is one of the most widely used methods in oil and gas exploration, providing comprehensive information about the various layers and rock properties underneath the Earth without any expensive drilling operations. The accuracy and efficiency of the seismic data analysis mainly depend on the quality and the quantity of the seismic receivers - geophones or hydrophones - located on the Earth or marine surface. Furthermore, the choice of the seismic source energy is also an important factor in the reliability of the acquired seismic data. Unfortunately, such seismic data usually appear with an excessive amount of noise generated by various sources and with erratically missing information due to inaccessible points in the field. The attenuation or suppression of random noise is of great importance for geologists to achieve high quality and precise seismic data in oil and gas exploration. From these data, potential oil or gas resources can be identified, and even an approximate capacity of the reservoir can be estimated by expert geoscientists and engineers. Many methods have been investigated for random noise attenuation and each of them has certain advantages and disadvantages. The selection of the appropriate method depends upon the preferred criteria for the acquired results in the seismic data analysis. In this paper, the application of an Artificial Neural Network (ANN) to seismic data analysis using MATLAB is considered. As a result, an ANN filter is designed. When compared with other classical filters, this method has shown its efficiency.

Keywords: Seismic data, preprocessing, Artificial Neural Network, root-mean-square error, seismic signals, filter, signal-to-noise ratio.

1. Introduction

In recent years, major improvements and innovations have been developed in energy exploration. Oil and gas exploration is one of the most expensive processes in fuel energy acquisition. Millions of dollars are spent to find out whether there is a potential petroleum reservoir before continuing to drill and complete a well. Seismic surveys have become one of the most efficient methods applied in energy exploration, providing a huge return on investment. The seismic technique is a remarkable option for analyzing the subsurface structure in advance of drilling and for determining the design of the well trajectories in order to reach the reservoir in the safest and most effective way. Obtaining a detailed record of the geological formations, which is known as well logging, allows the geologists to gather comprehensive information about the different layers of rock formations. This contributes to greater certainty about whether or not hydrocarbons exist beneath the Earth's surface. If there is no latent petroleum reservoir, the drilling process needs to be stopped and the well should be abandoned to prevent incurring

1
Baku Higher Oil School, Informatics and Control in Technical Systems, Process Automation Engineering,
Baku, Azerbaijan
2
Baku Higher Oil School, Process Automation Engineering, Baku, Azerbaijan
* Corresponding author: [email protected]
higher costs of completing a well. Consequently, seismic data analysis leads to more profitable hydrocarbon extraction with fewer wells drilled.

Seismic data with high quality are very crucial in seismic exploration to accurately analyze the subsurface structure. However, in real industrial experiments, the seismic signals are usually obtained with high frequency noise, which can cause the loss of useful information about the rock formations underneath the Earth. Therefore, noise attenuation is one of the important stages in seismic signal analysis. Although a number of methods have been suggested to attenuate seismic noise, the efficiency of these methods is evaluated by the preservation of the original signal amplitude. Traditional attenuation algorithms, such as transform domain algorithms (V. Oropeza and M. Sacchi, (2011), Y. Chen, H. Chen, K. Xiang, and X. Chen, (2017), W. Chen, M. Bai, and H. Song, (2019)), spatial domain algorithms (A. Stumpf, N. Lachiche, J.-P. Malet, N. Kerle, and A. Puissant, (2014)), curvelet transform (H. Zhang, S. Diao, H. Yang, G. Huang, X. Chen, and L. Li, (2018)) and comprehensive denoising algorithms (Y. Chen and S. Fomel, (2015), Q. Zhao, Q. Du, X. Gong, and Y. Chen, (2018), W. Liu, S. Cao, Z. Jin, Z. Wang, and Y. Chen, (2018), et al.), can manage to eliminate the noise to some extent, but they usually come with drawbacks such as inaccurate design assumptions and problems in estimating parameter values (M. Bai, J. Wu, S. Zu, and W. Chen, (2018), R. Anvari, A. R. Kahoo, M. Mohammadi, N. A. Khan, and Y. Chen, (2019), D. Zhang, D. J. Verschuur, S. Qu, and Y. Chen, (2020)). Furthermore, comprehensive experience and knowledge about the noise are required before the traditional methods are applied in seismic noise attenuation. As the random noise is obscure in the real exploration process, it needs to be tested at different variances, which leads to a time-consuming and inefficient process. Considering the drawbacks of the traditional methods, contemporary science and industry demand more effective and intelligent denoising methods. In recent years, deep learning and ANNs have developed rapidly and extensively, and consequently they have also been applied in the seismic exploration field.

2. Experimental Data Preparation and Artificial Neural Network Application

The suggested method for the preprocessing of the raw seismic signal learns the difference between noisy and clean data. The data representation and noise cancellation are performed using MATLAB software. MATLAB has a range of features for filtering the noisy signal, and its Neural Network Toolbox enables the design of an Artificial Neural Network structure with the desired number of layers and flexible parameters. There are mainly five steps in the method described in this paper:
1. Generate a clean, ideal output signal using traditional filtering methods.
2. Design a neural network framework.
3. Train the network and test the network performance.
4. Compare the denoising results of the neural network and the traditional methods.
5. Calculate the signal-to-noise ratio to check the efficiency of the result.

2.1. Experimental data representation

The input data is represented as tabular data which shows the measurements of the seismic sensors on 513 traces of the Earth. When the seismic sources – air guns – are fired and send acoustic waves to the lower layers of the Earth surface, the seismic receivers or hydrophones towed behind the seismic ship collect the depth values of the rocks according to the time duration between the sent and received signals. As a result of a multitude of shots from the air guns, a table of data of size 886 x 513 is obtained. Each column in the table depicts one trace or layer in the Earth subsurface and each row shows the time at which the sample is acquired from the sensors. Considering the acoustic signal speed in the water (v) and two sequential depth samples (d1, d2), the data parameters are initialized in Table 1 below:

Table 1. Acoustic signal parameters

v 1500
d1 2.385
d2 1.803

Using the known signal parameters represented above, the sampling time (Ts) and the sampling frequency (fs) parameters can also be easily calculated as shown below:

Ts = |d2 - d1| / v = |1.803 - 2.385| / 1500 ≈ 0.388
fs = 1 / Ts ≈ 2.577

Each trace (column) contains error measurements due to the sensor accuracy or environmental noise. Therefore, the experimental raw data obtained directly from the hydrophones or geophones are not helpful for geologists and geoscientists to predict the potential of a hydrocarbon resource. The noise interference on the pure signal is apparent in the 2D representation of the original experimental data in the time and frequency domains (Figure 1):

Figure 1. 2D representation of original seismic data

2.2 Noise attenuation using lowpass filter

Prior to the neural network design, we need target data to train the network. This target data can be obtained by applying traditional filters to the input data in MATLAB. As the random noise signals are generally high frequency signals, we need a lowpass filter to remove the noise from the signal.
While designing the low-pass filter, the choice of the cutoff frequency at which the signal is sufficiently cleaned is very important. Based on the frequency response of the noisy signal, the cutoff frequency was chosen as approximately 0.02 kHz in order to minimize the amplitude of the noise on the seismic signal. Furthermore, the passband and stopband frequencies should be less than the Nyquist frequency, which is half of the sampling frequency. The low-pass Butterworth filter with minimum order was designed using a MATLAB tool (Figure 2):

Figure 2. Low-pass filter design
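A comparable filtering step can be reproduced outside MATLAB; the Python/SciPy sketch below applies a Butterworth low-pass with a 0.02 kHz cutoff to a synthetic trace sampled at 2.577 kHz. The fixed 4th order and the synthetic trace are assumptions made for illustration, unlike the minimum-order design and the real field data used in this work.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 2577.0   # sampling frequency (Hz), from the acquisition parameters above
fc = 20.0     # cutoff frequency (Hz), i.e. 0.02 kHz

# 4th-order Butterworth low-pass; Wn is normalized by the Nyquist frequency.
b, a = butter(N=4, Wn=fc / (fs / 2.0), btype="low")

# Synthetic stand-in for one noisy trace of 886 time samples (not real field data).
t = np.arange(886) / fs
trace = np.sin(2.0 * np.pi * 5.0 * t) + 0.3 * np.random.default_rng(1).standard_normal(t.size)

clean = filtfilt(b, a, trace)   # zero-phase filtering of the trace
print(f"std before: {trace.std():.3f}, after: {clean.std():.3f}")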


After applying the designed low-pass Butterworth filter to the original noisy seismic signal, the noise was attenuated and an ideal, noise-free signal was obtained (Figure 3 & Figure 4):

Figure 3. Noisy and Filtered signal in time domain

Figure 4. Noisy and Filtered signal in frequency domain

Compared with the original noisy signals, the acquired results are much smoother and closer to the ideal in both the time and frequency domain representations. There are some ripples in the time domain, as demonstrated in Figure 3; this is because of the allowable passband attenuation indicated in the magnitude specifications of the filter.
In summary, the obtained signal will be the target data for the neural network to learn the system and will be used to validate the acquired results.

2.3. Noise attenuation using Artificial Neural Network

The ANN learns how the system performs ideally and afterwards applies the same behaviour to other samples. As discussed in the previous subsection, the input data was first filtered using a traditional low-pass filter. The input signal together with the ideal filtered signal are the two main inputs of the neural network, enabling the calculation of the respective weights of the hidden layers. Since the provided seismic signal data set is quite big, only a portion of the data is used to train the system. In this instance, 200 samples of traces obtained over 100 time samples were used for the learning purpose of the neural network, and the next 200 samples were used to test and validate the results. MATLAB has a function which opens the Neural Network/Data Manager window in order to import the input data, create a neural network with the desired parameters and export the neural network results.

After the input and target data were imported, a neural network structure was designed. The neural network used in this work is a 7-layer (6 hidden, 1 output) network with 10 neurons in each hidden layer. The hidden layers have a hyperbolic tangent activation function, which is more successful than the sigmoid function and is applied in many neural network problems. As this is a regression-type prediction problem, a linear activation function has been selected for the output layer. After creating the neural network, it needs to be trained with the imported input and target data. The training parameters are specified as represented in Figure 5:

Figure 5. Training parameters of neural network
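For illustration, an equivalent feedforward structure (6 hidden layers of 10 tanh neurons and a linear output layer) could be built as in the following Python/Keras sketch. This is not the MATLAB network of this work; the optimizer, loss and epoch count are assumptions.

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    def build_denoiser(n_samples):
        # n_samples: number of time samples per trace
        model = keras.Sequential()
        model.add(keras.Input(shape=(n_samples,)))
        for _ in range(6):                                   # 6 hidden layers
            model.add(layers.Dense(10, activation='tanh'))   # 10 neurons each
        model.add(layers.Dense(n_samples, activation='linear'))  # regression output
        model.compile(optimizer='adam', loss='mse')
        return model

    # noisy: (200, n_samples) raw traces; target: the low-pass filtered traces
    # model = build_denoiser(noisy.shape[1])
    # model.fit(noisy, target, epochs=100, validation_split=0.2)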

The training process lasted approximately 13 minutes, and the goal was achieved after 57
iterations (Figure 6):

Figure 6. Neural network training GUI

The training performance and the error histogram, represented in Figure 7, also confirm the
successful results of the neural network:

Figure 7. Regression performance and Error histogram of the neural network

3. Results

As mentioned above, 200 samples different from the training data were used to check and
validate the neural network performance. When the test data was fed as input to
the trained network model, the obtained result was compared with the desired output in the time
(Figure 8) and frequency domains (Figure 9):

Figure 8. Signal comparison in time domain

Figure 9. Signal comparison in frequency domain
As can be seen from the figures above, the results are very close to the desired ones, and the
noise on the signal has been considerably attenuated. Different sets of data were tested with
the designed network, and all results were as satisfying as those represented in Figure 8 and Figure 9.

Furthermore, the statistical properties of the input and output signals were also calculated to
verify the model efficiency (Table 2):

Table 2. Statistical properties of the input and output signals

The derived values show that the signal has been successfully cleaned from the noise, as the
corresponding statistics decreased as expected. Another important parameter, the root-mean-
square error, is almost the same for the training and testing data. Finally, the most
essential parameter to confirm the noise attenuation in the signal is the signal-to-noise ratio
(SNR). A significant increase in the signal-to-noise ratio value also implies the noise reduction
in the output signal.
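For reference, the root-mean-square error and the signal-to-noise ratio used in this verification can be computed as in the short sketch below (an illustrative Python implementation, not the code used in the study; the SNR definition assumes the filtered signal as the reference).

    import numpy as np

    def rmse(predicted, target):
        # Root-mean-square error between network output and target signal
        return np.sqrt(np.mean((predicted - target) ** 2))

    def snr_db(reference, noisy):
        # SNR in dB, treating the difference from the reference as noise
        noise = noisy - reference
        return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))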

4. Discussion and Conclusions

In conclusion, as in many other areas of industry, a neural network can successfully be
applied in oil and gas exploration to achieve a desired result in a more efficient and eco-friendly
way. A detailed and systematic analysis of seismic data can guide geologists in deciding the
potential capability of a reservoir and can save millions of dollars for oil and gas companies
if a well is not worth drilling. Moreover, careful preprocessing and interpretation of
seismic signals can result in an accurate drilling direction and depth thanks to precise assumptions
about rock formations. Although the preprocessing of seismic signals using an ANN model is time
consuming, it can deal with the seismic signal without requiring any prior knowledge or
comprehensive analysis of the signal. Despite the fact that the traditional methods are more
accurate and reliable, Artificial Intelligence (AI) and Machine Learning have become a main focus
of the industry due to their convenience and flexibility.
This paper mainly discusses the application of AI to noise cancellation in seismic signals, which is
known as the preprocessing of seismic signals. A future work could be to apply a CNN in seismic
signal processing and improve the processing of seismic signals in order to make a judgement
about reservoir capability and to identify rock properties. Certainly, this requires deep
knowledge about various rock formations and their characteristics; afterwards, a neural network
can be trained and implemented in seismic signal processing.

Acknowledgements

This is a complete paper describing my work during the spring of 2021 for the International
Conferences on Science and Technology held on September 8-10, 2021. This
work prompted me to do deep research in the field of seismic signal analysis and extended my
knowledge of neural network applications in noise attenuation. I gained an
appreciable educational benefit, and I believe that I will take full advantage of this work in the
future.
I would like to thank my supervisor, Associate Professor Naila Allahverdiyeva, for her
valuable guidance and advice. She greatly encouraged me to complete my research project and
to work on this paper. Equally, I would like to express my gratitude to the authority of Baku
Higher Oil School (BHOS) and the Department of Process Automation Engineering for providing
me with a good environment and facilities to accomplish my work.
Last but not least, I would like to thank my parents for their eternal encouragement and support,
which made me very enthusiastic and determined to succeed in this research project.
References

V. Oropeza and M. Sacchi, (2011). "Simultaneous seismic de-noising and reconstruction via
multichannel singular spectrum analysis (MSSA)," Geophysics, vol. 76, no. 3, pp. V25–V32.

W. Chen, M. Bai, and H. Song, (2019). "Seismic noise attenuation based on waveform
classification," J. Appl. Geophys., vol. 167, pp. 118–127.

Y. Chen, H. Chen, K. Xiang, and X. Chen, (2017). "Preserving the discontinuities in least-
squares reverse time migration of simultaneous-source data," Geophysics, vol. 82, no. 3,
pp. S185–S196.

A. Stumpf, N. Lachiche, J.-P. Malet, N. Kerle, and A. Puissant, (2014). "Active learning in the
spatial domain for remote sensing image classification," IEEE Trans. Geosci. Remote
Sens., vol. 52, no. 5, pp. 2492–2507.

H. Zhang, S. Diao, H. Yang, G. Huang, X. Chen, and L. Li, (2018). "Reconstruction of 3D
non-uniformly sampled seismic data along two spatial coordinates using non-equispaced
curvelet transform," Explor. Geophys., vol. 49, no. 6, pp. 906–921.

Y. Chen and S. Fomel, (2015). "Random noise attenuation using local signal-and-noise
orthogonalization," Geophysics, vol. 80, no. 6, pp. WD1–WD9.

Q. Zhao, Q. Du, X. Gong, and Y. Chen, (2018). "Signal-preserving erratic noise attenuation
via iterative robust sparsity-promoting filter," IEEE Trans. Geosci. Remote Sens., vol.
56, no. 6, pp. 3547–3560.

W. Liu, S. Cao, Z. Jin, Z. Wang, and Y. Chen, (2018). "A novel hydrocarbon detection
approach via high-resolution frequency-dependent AVO inversion based on variational
mode decomposition," IEEE Trans. Geosci. Remote Sens., vol. 56, no. 4, pp. 2007–2024.

W. Huang, D. Feng, and Y. Chen, (2018). "De-aliased and de-noise Cadzow filtering for
seismic data reconstruction," Geophys. Prospecting, vol. 68, pp. 553–571.

D. Zhang, D. J. Verschuur, S. Qu, and Y. Chen, (2020). "Surface-related multiple leakage
extraction using local primary-and-multiple orthogonalization," Geophysics, vol. 85, no.
1, pp. V81–V97.

R. Anvari, A. R. Kahoo, M. Mohammadi, N. A. Khan, and Y. Chen, (2019). "Seismic random
noise attenuation using sparse low-rank estimation of the signal in the time–frequency
domain," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 12, no. 5, pp. 1612–1618.
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

The effect of holding time on the mechanical properties of TFP-produced thermoplastic matrix
composites under compression molding process

Hasan KARA1*, Mustafa Özgür BORA2*, Emine BAŞ3*

Abstract: The purpose of this study is to characterize, optimize and manufacture glass fiber
reinforced thermoplastic composites using the compression molding process for the production
of improved automotive parts. Composites offer many benefits; key among them are corrosion
resistance, design flexibility, durability, light weight, and specific strength.
Thermoplastic composites offer some interesting advantages over their thermoset counterparts,
such as higher toughness, faster manufacturing and their recyclable nature. In this study,
woven glass fiber/Nylon 6 (PA6) was produced by using Tailored Fiber Placement (TFP). TFP
is an embroidery-based technology that allows the fibre tows to be placed exactly where they
are most needed for structural performance and stitched into position on a compatible textile
or polymer substrate. After this production step, the compression molding process was applied to
obtain the final composite materials. During the compression molding process, various
holding time values were selected for determining the mechanical properties (tensile strength,
tensile modulus, flexural strength and flexural modulus) of the woven glass fiber reinforced
PA6 composites. The aim of the study is to investigate the influence of the holding time
parameter on the mechanical properties of the woven glass fiber reinforced PA6 composites.
Tensile and 3-point bending tests were applied to the thermoplastic matrix composites which
were produced and manufactured by TFP and the compression molding process. Tensile strength,
tensile modulus, flexural strength and flexural modulus values were determined. It was
found that the holding time parameter only slightly affected the composite materials'
mechanical properties.
Keywords: Tailored Fiber Placement (TFP), Thermoplastic Matrix Composites, Compression
Molding, Mechanical Properties.

1. Introduction

Material selection has become one of the most important issues for the automotive sector as the product
development process has become well established in recent years. There are two main reasons
why materials selection is required: firstly, to redesign an existing product for better
performance, lower cost, increased reliability and reduced weight, and secondly, to select a
material for a new product [1]. Car manufacturing companies compete extremely
strongly, which leads to a larger model variety and shorter model cycles. The application of
lightweight design principles is one of the most important trends to meet the above requirements.
1 Plascam A.Ş, Maden Caddesi No:22 Pelitli Köyü, Gebze 41480, Kocaeli, Turkey
2 Aviation Material Research and Development Laboratory, Faculty of Aeronautics and Astronautics, Kocaeli
University, Kocaeli, Turkey
* Corresponding author: [email protected]

As such, new design concepts require new materials [2]. Government regulations around the
world continue to demand tighter restrictions on vehicle emissions. Among the responses are
lightweight materials such as plastics and composites. It is also interesting to see that as
fuel economy standards increase, total vehicle mass and steel content decrease and are
replaced by composite materials. The transport sector notably accounts for about
25% of worldwide production of glass fiber, and cars produced in the United States can
contain as much as 100 kilograms of composites, compared with slightly less than 30
kilograms for cars built in Europe [3].
Polymeric materials are selected because they are replacing traditional materials in many
engineering applications due to their attractive properties such as excellent strength, stiffness-
to-weight ratio, chemical resistance, corrosion resistance, impact resistance, fatigue resistance,
thermal resistance, wear resistance and low processing cost [4]. Automotive composites,
reinforced plastics and polymers are among the widely preferred alternatives for
lightweighting the automobile, as they offer enhanced properties such as impact strength, easy
moldability, improved aesthetics, and reduced weight compared to conventional
automotive components. Their main advantages, which offer opportunities in the automotive
industry, are their potential for maximum mass reduction of the automobile and their carbon emission
reduction potential through lightweighting of the vehicle. All material industries, plastics and
polymer composites as well as steel, aluminum, and magnesium, are working to respond to
the automotive industry's changing needs. Since they first came into use, advanced plastics and
polymer composites have helped improve the appearance, functionality, and
safety of automobiles while reducing vehicle weight and delivering superior value to
customers at the same time [5].
Polymer matrix composites are divided into two main groups according to the matrix material: thermoset and
thermoplastic. Although thermoset and thermoplastic sound similar, they have very different
properties and applications. Thermoplastic materials are processed with heat. When
enough heat is added to bring the temperature of the plastic above its melting point, the plastic
melts, liquefies, or softens enough to be processed. Thermoplastics tend to be tougher and less
brittle than thermosets. They can have better chemical resistance, do not need refrigeration as
uncured thermosets (prepreg materials) frequently do, and can be more easily recycled and
repaired. Table 1 presents a comparison between thermoset and thermoplastic materials [6].
Thermoplastic composites (TPCs) are more advantageous than thermosets in terms of
reduced cycle times, improved toughness and potential for recycling. TPCs are used in
products within the automotive industry, including battery trays, seat structures, front end
modules and load floors. Most current applications utilize glass mat thermoplastic (GMT),
based usually on random fibres within a polypropylene matrix. Where higher mechanical
properties are required, materials based on aligned fibres, usually in the form of
woven fabrics, have been developed. The main problem here is to reduce the
consolidation pressure required to achieve a high fibre volume fraction. This can be
solved by minimising the required flow distance for the matrix. A number of
approaches have been developed, usually involving combining the fibres and matrix at the tow
level. As with GMTs, preheating is undertaken in a separate oven, as the mold temperature is below
the melt temperature of the matrix [7].

Table 1: Comparison between thermoset and thermoplastic materials [6]

Processing
- Thermoset: The product does not remelt when heat is applied, making thermosets ideal for high-heat
applications such as electronics and appliances. Thermosets are often used for sealed products due
to their resistance to deformation. They cannot be recycled and cannot be remolded or reshaped
(they do not melt when heated). It is easy to wet the reinforcing fibers and fillers.
- Thermoplastic: Remelting when heat is applied allows thermoplastics to be remolded and recycled
without negatively affecting the material's physical properties. They commonly offer high strength,
shrink resistance, and easy bendability. They are highly recyclable and can melt if heated, giving
remolding/reshaping capabilities. It is more difficult to wet the reinforcing fibers and fillers.

Features and benefits
- Thermoset: More resistant to high temperatures than thermoplastics; highly flexible design;
thick-to-thin wall capabilities; excellent aesthetic appearance; high levels of dimensional stability;
more difficult to surface finish; cost-effective.
- Thermoplastic: High impact resistance; chemical resistant; hard crystalline or rubbery surface
options; aesthetically superior finishes; eco-friendly manufacturing; generally more expensive than
thermosets.

Tailored Fiber Placement (TFP) solves the problem of how to affordably, reliably and quickly
fabricate the resulting complex shapes, whose performance in the finished part depends on minimal
quantities of precisely oriented fibers. TFP is a unique stitched preforming process that has the
potential to answer and eliminate most of the limitations of drilling/machining holes in composites [8].
Compared with other textile preforming methods, TFP reinforcement fibres can be orientated in any
direction and can also be curved in a radius, allowing the fibre structure to align with the load paths
in the material and taking full advantage of the anisotropic properties of fibre reinforced plastics [9].

E. Richter et al. [10] investigated the effect of short or continuous glass fiber reinforcement on
the mechanical properties of polyamide 6.6 (PA 6.6). They produced short and
continuous glass fiber reinforced PA 6.6 materials by using the TFP method. The results showed
that by using continuous glass fiber, failure load values under tension and compression
loading increased by about 120% compared to short fibre reinforcement. K. Gliesche et al. [11]
investigated the influence of an open hole on the mechanical properties of
carbon/epoxy laminated composites which were produced by using TFP technology. The test
plates included a hole in the center, and stress-field-aligned local reinforcement was applied
around the open hole of the tensile plate. The reference plate from textile MAG
preforms without a hole reached a specific failure load of 90 kN. Due to the hole, this value
decreased to 55 kN. Applying the TFP reinforcement, the value increased to 85 kN. In
addition, it was shown that TFP technology is an advanced technique for designing
reinforcements with only a small weight increase, due to the fact that the fiber
alignment takes place only along the major stress trajectories.

Compression molding offers specific advantages such as low cost, high efficiency, low
internal stress, small buckling deformation, good mechanical stability, and excellent product
repeatability for producing composite samples. As the required part quantities increase, the
molding method boasts a strong competitive advantage in industry. However, the process
parameters of compression molding (e.g., preheating temperature, molding temperature,
molding pressure, pressure holding time, cooling rate, exhaust pressure, exhaust times, and
blank holder force) directly affect the flow of the matrix material, the impregnation of the
reinforcing fibers and also the mechanical properties of the composite samples. This effect
exerts an impact on both the quality and mechanical performance of the material once coupled
with the interaction between process parameters. Consequently, to find the best compression
molding process parameters and optimize the mechanical performance of the material, it is
critical to analyze the interaction between the various process parameters and the mechanical
properties of the manufactured material [12]. G. Başer et al. [13] developed a new process to
produce non-crimp glass fabric (NCGF) reinforced poly(butylene terephthalate) (PBT) matrix
composites. Isothermal and semi-isothermal processing of NCGF reinforced IPCBT composites
via in-situ polymerisation of cyclic butylene terephthalate (CBT)/glass fiber prepregs was
successfully performed by means of the compression molding technique. The results showed
that the prepreg production temperature which gave the highest mechanical properties was
160 °C. The optimum compression parameters which gave the highest mechanical properties
were identified as a 200 °C compression temperature, 30 minutes compression time and 1.6 MPa
compression pressure for the isothermal process. The semi-isothermal process with a 180 °C
demolding temperature gave higher tensile and flexural properties than the isothermal process.
To enhance the quality and mechanical performance of a carbon fiber-reinforced polymer (CFRP)
workpiece, J. Xie et al. [12] prepared a polyacrylonitrile (PAN)-based carbon fiber-reinforced
thermosetting polymer (CFRTP) laminated board through compression molding, and carried out
orthogonal tests and single-factor tests to disclose the effects of different process parameters
(i.e., compression temperature, compression pressure, pressure-holding time, and cooling rate)
on the mechanical performance of the CFRTP workpieces. The results showed that the optimal
process parameters for compression molding included a compression temperature of 150 °C, a
pressure-holding time of 20 min, a compression pressure of 50 T, a cooling rate of 3.5 °C/min,
and a mold-opening temperature of 80 °C. Under this parameter combination, the tensile strength,
bending strength, and interlaminar shear strength (ILSS) of the samples were, respectively,
785.28, 680.36, and 66.15 MPa.

From literature surveys, it was not easy to find an article investigating the effect of the holding
time parameter on the mechanical properties of woven glass fiber/Nylon 6 (PA6) composite
samples produced by using both TFP and compression molding technology. Therefore, in this
study, woven glass fiber/Nylon 6 (PA6) fabric was produced by using TFP. After TFP production,
the compression molding process was applied to produce composite samples under various holding
time parameters (for the tensile test, 1.0 mm thick composite samples: 3.30 min., 4.00 min.,
4.30 min.; for the bending test, 2.0 mm thick composite samples: 4.30 min., 5.00 min., 5.30 min.).
Tensile and three-point bending tests were applied to the composite samples. Tensile strength,
tensile modulus, flexural strength and flexural modulus values were determined.

2. Materials and Methods
2.1. Materials
In this study, Polyamide 6 (PA6) [14] and E-glass fibre were supplied by Universal Fibers (USA)
and Owens Corning, respectively. The E-glass fibre is SE1200 Type 30 single-end roving [15],
which is designed for excellent processing in knitting and weaving with polyester, vinyl
ester, and epoxy resins. The E-glass fibre is also suitable for Long Fiber Thermoplastic (LFTP) PA
compounding applications.
2.2. Tailored Fiber Placement (TFP)
In this study, unidirectional (UD) E-glass fabrics were produced by using Tailored Fiber
Placement (TFP) technology, which was an invention of the Institute for Polymer Research
Dresden. This production method was used to produce fiber preforms with stress-field-
aligned fiber orientations. It is based on the well-known embroidery technique used for
fabrics. The principle of TFP technology is shown in Fig. 1. In this process, a roving is fixed
on a base material by stitching with a needle yarn. As can be seen in Fig. 1,
between the stitches the base material is moved in both the X and Y directions. In this
way the roving is fixed with zigzag stitches on either side of the roving.
The roving can be made of carbon, glass, or other types of fiber. The advantage of TFP
technology, compared to common textile technologies, is the ability to arrange
reinforcing fibers in every direction of the reinforcing area, at any angle from 0° to 360° [16].

Figure 1: The principle of TFP technology


The system is ideally suited for series production because the highly automated production
process allows good reproducibility. Another advantage is that there is almost no fiber waste,
because the preforms are made near net shape using only the required material [16].

Figure 2: UD E-glass/PA6 fabrics manufactured by using TFP


For the produced fabrics, Michelman PA845H was used to improve the thermal stability of
the material. The UD E-glass/PA6 fabrics were produced by Coats in Bursa (Fig. 2). The
fabrics were produced in different thicknesses, 1 mm and 2 mm, for the tensile and bending
tests. The size of the fabrics was 305 mm x 305 mm.

2.3. Compression molding process
A 100-tonne hydraulic hot press with Roctool equipment was used in the study (Fig. 3). With
the Roctool equipment, thermoplastic matrix composite samples are produced faster in the
compression molding process. Thus, this process is suitable for a wide range of industrial,
commercial, and consumer parts and products, ranging from very small parts to large automobile
panels. After the TFP production of the GF-PA6 fabrics, the materials were manufactured by using the
compression molding process with Roctool equipment.

Figure 3: Compression molding process and used molds

The GF-PA6 fabrics were pressed in the molds at 160 bar and 300 °C. A cooling
system was also installed for suitable cooling of the composite samples in the molds. The mold
dimensions are 830 mm x 380 mm x 145 mm, and the mold has two cavities, 1.00 mm and 2.00 mm deep.
Each cavity dimension is 305 mm x 305 mm. In the study, both cavities were used for
producing 1 mm and 2 mm thick composite samples. For the 1.00 mm and 2.00 mm thick
composite samples, the weight of the GF-PA6 fabrics was measured as 185 g and 367 g,
respectively. In this study, composite samples with different thicknesses were produced in the
compression molding process under various holding times. The holding times were 3.30 min.
/ 4.00 min. / 4.30 min. for the 1.00 mm thick composite samples. For the 2.00 mm composite
samples, the holding times were 4.30 min. / 5.00 min. / 5.30 min. After production, a
water jet system was used for cutting the composite samples according to the standards for
the tensile and 3-point bending tests. The specimens were then kept in a dehumidifier for
24 hours because of the moisture sensitivity of PA6.

2.4. Mechanical tests
2.4.1 Tensile Test
The ISO 527 test standard was used for determining the maximum tensile strength and tensile
modulus of the GF-PA6 composite samples which were produced under various holding times
(3.30 min. / 4.00 min. / 4.30 min.). In this study, the Type A test specimen shown in Fig. 4 was used.
The dimensions of the specimen were 250 mm x 15 mm x 1 mm. The sides of each individual
specimen were parallel to within 0.2 mm. The tensile tests were performed at 1 mm/min on a
Shimadzu AGS-X series test device. Tensile modulus and tensile strength values were
determined under the various holding times. The standard allows alternatives including
tabs made from the material under test, mechanically fastened tabs, unbonded tabs made of
rough materials (such as emery paper or sandpaper), and the use of roughened grip faces.
Sandpaper was used for the tabs. All specimens were also marked at the tab locations before
the test started, and these marks were checked and monitored as the test proceeded.

Figure 4: Type A test specimen used in this study (250 mm x 15 mm x 1 mm)
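For reference, the tensile quantities reported below follow the usual definitions; a minimal statement of the relations is given here with symbols introduced only for illustration (ISO 527 evaluates the modulus between the strains 0.0005 and 0.0025):

    \sigma_M = \frac{F_{\max}}{A}, \qquad
    E_t = \frac{\sigma_2 - \sigma_1}{\varepsilon_2 - \varepsilon_1},
    \quad \varepsilon_1 = 0.0005,\ \varepsilon_2 = 0.0025

where F_max is the maximum load, A the initial cross-sectional area (width x thickness), and sigma_1, sigma_2 the stresses measured at the two strains.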

2.4.2. 3-Point Bending Test


The ASTM D7264/D7264M test standard was used for determining the flexural modulus and flexural
strength values of the GF-PA6 composite samples (2.00 mm thickness) which were manufactured
under various holding times (4.30 min. / 5.00 min. / 5.30 min.). The test method determines
the flexural stiffness and strength properties of polymer matrix composites and was developed
for optimum use with continuous-fiber-reinforced polymer matrix composites.
In this study, a span-to-thickness ratio of 32:1 was selected. The standard specimen thickness is
2.00 mm, and the standard specimen width is 13.00 mm, with the specimen length being about
20% longer than the support span. The 3-point bending tests were performed at 1 mm/min on a
Shimadzu AGS-X series test device, shown in Fig. 5.

Figure 5: 3-point bending test performed at 1 mm/min on the Shimadzu AGS-X series test device
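For reference, the flexural strength and flexural modulus reported below are obtained from the measured load–deflection data through the standard three-point bending relations for a rectangular specimen (symbols introduced here only for illustration):

    \sigma_f = \frac{3 P L}{2 b h^{2}}, \qquad
    E_f = \frac{L^{3} m}{4 b h^{3}}

where P is the applied load, L the support span, b the specimen width, h the specimen thickness, and m the slope of the initial linear portion of the load–deflection curve.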

3. Results and Discussion


3.1 Tensile Test

In this work, the effect of holding time on the tensile properties of the GF-PA6 composites was
investigated. Tensile test results according to the holding time values of the manufactured composite
samples having 1 mm thickness are given in Fig. 6. As can be seen from Fig. 6, as the holding time
increased, the tensile modulus and tensile strength values increased.
Comparing the obtained results, the tensile modulus showed close values at both 3.30 min.
and 4.00 min. Besides this, when the holding time was increased to 4.30 min., the tensile
modulus improved to nearly 20 GPa. In addition, tensile strength values increased
significantly as the holding time increased from 3.30 min. to 4.00 min. For the
composite samples manufactured at the 4.30 min. holding time, the tensile strength was
determined as 577 MPa. The ideal holding time for both tensile
modulus and tensile strength was therefore 4.30 min. Xie et al. [19] investigated the effect
of compression molding parameters (compression temperature, compression pressure,
pressure-holding time, and cooling rate) on the mechanical properties of a polyacrylonitrile
(PAN)-based carbon fiber reinforced thermosetting polymer (CFRTP). In that study, the
authors selected pressure-holding times from 10 min. to 25 min. The test results showed that the
mechanical performance of the test material gradually increased with the elapse of the
pressure-holding time, before the time reached a certain threshold. The mechanical
performance remained constant after the threshold because the resin flow and impregnation
both improve with the extension of the pressure-holding time, but the flow ceases after the
resin is fully impregnated. Taking tensile strength as the main criterion, the optimal pressure-
holding time was determined as 20 min. for the mechanical properties of the samples. With
respect to the effect of holding time, it is clear that the tensile strength increased with
increasing holding time from 3.30 to around 4.30 minutes, consistent with the trend stated in Ref. [20].

Figure 6: Tensile test results according to holding time values of manufactured composite
samples having 1.00 mm thickness

3.2 Three Point Bending Test

Figure 7 summarizes the three-point bending test results according to the holding time values of the
manufactured composite samples having 2 mm thickness. From Fig. 7, the flexural modulus and
flexural strength values decreased as the holding time increased. As can be
seen from Fig. 7, the maximum flexural modulus value, 28.77 GPa, was reached at 4.30 min.
Comparing the other holding times, there was no significant change in the flexural
modulus between the 4.30 min. and 5.00 min. holding times. However, between 5.00 min. and
5.30 min., the flexural modulus decreased. In addition, the maximum flexural strength, 689 MPa,
was obtained at 4.30 min. compared to the other composite samples which were manufactured at
the 5.00 min. and 5.30 min. holding times. The flexural strength values were also determined as
672 MPa (5.00 min.) and 608 MPa (5.30 min.). Xu et al. [21] investigated the bending properties
of unidirectional continuous glass fiber-reinforced poly(ether ether ketone). They held
composite materials at the mold temperature for holding times ranging from 10 min. to 150 min.
Bending strength and modulus values were determined under the various molding times with the same
molding temperature of 400 °C and the same cooling rate of 10 °C/min. The results showed
that the bending strength and modulus increased as the holding time increased. This is
because the matrix gradually penetrated into the glass fibers under pressure, and the
dispersibility of the glass fibers was improved. In addition, when the holding time was increased to
120 min, the bending strength and modulus reached their maximum, at 941.1 MPa and 38.3
GPa, respectively. This indicates that a sufficient holding time is critical to improving the
bending performance of composites based on the co-wrapped yarn method. In another
study, Fujihara et al. [22] determined an optimum fabrication process for continuous carbon fiber
reinforced PEEK matrix composites. They selected three processing temperatures (380,
410, and 440 °C) and three holding times (20, 40 and 60 min.). The results showed that the
bending modulus, around 95 GPa, did not change with increasing holding time from 20 min. to 60 min.
for the specimens fabricated at 380 and 410 °C, while a slight modulus drop was seen in the case of
440 °C. A similar tendency was seen in the bending strength, which was around 1300 MPa in the case
of 380 and 410 °C; at 440 °C the bending strength dropped by about 200 MPa from the 40 min. to the
60 min. holding time.

Figure 7: Three point bending test results according to holding time values of manufactured
composite samples having 2.00 mm thickness
4. Conclusion

In this study, the aim was to determine the mechanical properties of thermoplastic matrix
composite materials (GF-PA6) produced via TFP followed by the compression molding process. Tensile
and 3-point bending tests were applied to the thermoplastic matrix composites which were
produced and manufactured by TFP and the compression molding process. Tensile strength,
tensile modulus, flexural strength and flexural modulus values were determined.
- The tensile modulus showed close values at both the 3.30 min. and 4.00 min. holding times.
When the holding time was increased to 4.30 min., the tensile modulus improved to nearly
20 GPa. For the composite samples manufactured at the 4.30 min. holding time, the tensile
strength was determined as 577 MPa. The ideal holding time for both tensile modulus and
tensile strength was 4.30 min.
- Flexural modulus and flexural strength values decreased as the holding time increased.
The maximum flexural modulus value, 28.77 GPa, was reached at 4.30 min. The maximum
flexural strength, 689 MPa, was obtained at 4.30 min. compared to the other composite
samples which were manufactured at the 5.00 min. and 5.30 min. holding times.
- No correlation was found between sample thickness and holding time. For both the tensile
and flexural composite samples, the optimum holding time was found to be 4.30 min.
Acknowledgement

We would like to thank Plascam A.Ş for supporting and funding this scientific research.
References
1. Campbell F.C. (2010). Structural Composite Materials, ASM International.
2. Ngo, T.-D. (2020). Introduction to Composite Materials. Composite and Nanocomposite
Materials – From Knowledge to Industrial Applications.
3. William D. Callister, Jr. and David G. Rethwisch, Materials Science and Engineering: An
Introduction, 8th ed.
4. Biron M. Thermoplastics and Thermoplastics Composites.

5. Mazumdar, Sanjay K. (2001). Composite Manufacturing: Materials, Product, and Process
Engineering.
6. Park C.H. and Lee W.I. Compression Molding in Polymer Matrix Composites, University
of Le Havre, France and Seoul National University, Korea.
7. Long, A. C., Wilks, C. E., & Rudd, C. D. (2001). Experimental characterisation of the
consolidation of a commingled glass/polypropylene composite. Composites Science and
Technology, 61(11), 1591–1603. doi:10.1016/s0266-3538(01)00059-8
8. Koricho, E. G., Khomenko, A., Fristedt, T., & Haq, M. (2015). Innovative tailored fiber
placement technique for enhanced damage resistance in notched composite laminate.
Composite Structures, 120, 378–385. doi: 10.1016/j.compstruct.2014.10.016
9. Crothers, P. J., Drechsler, K., Feltin, D., Herszberg, I., & Kruckenberg, T. (1997). Tailored
fibre placement to minimise stress concentrations. Composites Part A: Applied Science
and Manufacturing, 28(7), 619–625. doi:10.1016/s1359-835x (97)00022-5
10. E.Richter, K. Uhlig, A.Spickenheuer, L.Bittrich, E.Mader,G.Heinrich, Thermoplastic
Composite Parts Based on Online Spun Commingled Hybrid Yarns With Continuous
Curvilinear Fibre Patterns, ECCM16-16th European Conference on Composite Materials,
Seville, Spain, 22-26 June 2014
11. Gliesche, K. (2003). Application of the tailored fibre placement (TFP) process for a
local reinforcement on an “open-hole” tension plate from carbon/epoxy laminates.
Composites Science and Technology, 63(1), 81–88. doi:10.1016/s0266-3538(02)00178-1
12. J.Xie,S.Wang,Z.Cui and J.Wu, Process Optimization for Compression Molding of
Carbon Fiber-Reinforced Thermosetting Polymer, MDPI, Materials 2019, 12(15), 2430
13. G.Başer, Production of Fiber Reinforced Thermoplastic Composites, ITU, Department
of Polymer Science and Technology Polymer Science and Technology Programme,
October 2012
14. Universal Fibers, – 1000D PA6
15. Owens Corning, SE1200 TYPE30 Single-End Roving
16. Mattheij, P., Gliesche, K., Feltin, D. (1998). Tailored Fiber Placement-Mechanical
Properties and Applications. Journal of Reinforced Plastics and Composites, 17(9), 774–
786.
17. EN ISO 527-5, Plastics – Determination of tensile properties
18. D 7264/D 7264M – 07 – Standard Test Method for Flexural Properties of Polymer
Matrix Composite Materials.
19. Xie, J., Wang, S., Cui, Z., & Wu, J. (2019). Process Optimization for Compression
Molding of Carbon Fiber–Reinforced Thermosetting Polymer. Materials, 12(15), 2430.
doi:10.3390/ma12152430
20. Tharazi, I., Sulong, A. B., Muhamad, N., Haron, C. H. C., Tholibon, D., Ismail, N. F.,
… Razak, Z. (2017). Optimization of Hot Press Parameters on Tensile Strength for
Unidirectional Long Kenaf Fiber Reinforced Polylactic-Acid Composite. Procedia
Engineering, 184, 478–485. doi: 10.1016/j.proeng.2017.04.150
21. Xu, Z., Zhang, M., Wang, G., & Luan, J. (2018). Bending property and fracture
behavior of continuous glass fiber-reinforced PEEK composites fabricated by the wrapped
yarn method. High Performance Polymers. doi:10.1177/0954008318767500
22. Fujihara, K., Huang, Z.-M., Ramakrishna, S., & Hamada, H. (2004). Influence of
processing conditions on bending property of continuous carbon fiber reinforced PEEK
composites. Composites Science and Technology, 64(16), 2525–2534. doi:
10.1016/j.compscitech.2004.05.014

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Image Data Augmentation Techniques for Fracture Detection of Dogs

Gülnur Begum ERGÜN1*, Selda GÜNEY2

Abstract: The image collection and preparation phases are very costly for machine learning
algorithms. Machine learning algorithms, and especially deep learning algorithms, require a lot
of labeled data. Hence, the image pre-processing method of data augmentation is commonly used.
Since there are many proposed methods for this task, this comparison study is presented as a
supporting guide for researchers working in this field. In addition, the scarcity of studies with
animal-based data sets makes this study more valuable. The study is carried out on a comprehensive
medical image data set consisting of X-ray images of many different dogs. The goal is to
determine fractures of the long bones in dogs. Three traditional augmentation methods are
employed on the data set: flipping, rotating, and changing the brightness of the images. The
experimental work shows that the flipping and brightness-changing methods are
more successful than rotation.

Keywords: biomedical image processing; bone fractures; data augmentation; preprocessing.

1. INTRODUCTION

Great advances have been made in the use of deep learning models. They have found a place in many
application areas. One of the most active areas is the biomedical field [1]. In orthopedics, these deep
architectures are utilized for fracture and bone disease detection [2].

Despite all these advances, deep architectures require lots of labeled data. This problem leads us to one of
the most preferred pre-processing methods, data augmentation. Data augmentation is a transition from
limited data to more data [3]. On one hand, it helps to reduce the overfitting problem; on the other hand, it
balances unbalanced data sets.

In order to prevent overfitting, it is also possible to modify the network structure. Batch normalization
and drop-out can be given as examples of such modifications. Data augmentation techniques are
different from them, as they are basically a pre-processing step [4].

In one work, the authors explored and compared data augmentation methods in image classification.
They applied simple techniques, such as cropping, rotating, and flipping the images. They also
experimented with Generative Adversarial Neural Networks (GANNs) and a proposed method named
neural augmentation. Their experiments show that the traditional augmentation methods are more
effective than the others [5].

1,2 Department of Electrical and Electronic Engineering, Başkent University, Ankara, Turkey
* Corresponding author: [email protected]
In another research paper, a variety of augmentation strategies, horizontal flips, random crops, and
principal component analysis (PCA), are investigated. That work shows that the augmentation strategy
greatly affects classification performance [6].

Shijie et al. used several data augmentation methods in their paper, including GAN/WGAN, flipping,
cropping, shifting, PCA jittering, color jittering, noise, rotation, and some combinations. According to
the results of the study, four individual methods (cropping, flipping, WGAN, rotation) generally perform
better than the others, and some appropriate combination methods are slightly more effective
than the individual ones [7].

In recent years, GANNs have become very popular for synthesizing images [8-10]. Further works
related to traditional methods can be found in [11-15]. Nevertheless, these methods can
be highly affected by the data sets, and it is difficult to find studies in the literature using comprehensive
data sets of X-ray images of dogs.

In this work, a comprehensive data set created from dogs is employed. The aim is to detect long bone
fractures in dogs. Since a lot of labeled data is needed for this task, three data augmentation
techniques are investigated: flipping, rotating, and changing the brightness of the images. These
traditional methods can positively affect classification performance. Additionally, they are easy to
apply. After the pre-processing step, a deep neural model, a CNN, is applied for detection of
the long bone fractures.

2. MATERIAL AND METHODS

A. Data set

The data set consists of 2027 X-ray images of the long bones of many different dogs, taken from the Ankara
Metropolitan Municipality Stray Animals Temporary Nursing Home. 479 images of the data set were
labeled as fractured, and the remaining 1548 images were labeled as no fracture. For more detail about
the data set, readers are referred to our previous study [16]. The .png images have a size
of 227x227x3. For better understanding, an example is given in Fig. 1. Both images in the figure belong
to the radius-ulna.

a) b)

Figure 1. A visual example of the data set


a) fractured, b) no fracture

B. Methods for Data Augmentation

Image data augmentation techniques can be divided into two classes. These are:

a. Position augmentation: cropping, flipping, padding, rotation.

b. Color augmentation: brightness, contrast, saturation, hue.

Although there are various methods for augmentation, because of their simplicity three common data
augmentation methods are investigated in this study.

1. Flipping: For augmentation, images can be flipped horizontally and vertically. In this study, three flipping
options are applied: horizontal, vertical, and both horizontal and vertical.

2. Rotation: The images are rotated at specific angles. After the rotation process, the image
dimensions are no longer the same, so the images are resized to the original dimensions
again. In this study, three different angles are applied: +30, -30 and +45 degrees.

3. Brightness: Another way to augment is to change the brightness of the image. The resulting
images become lighter or darker. In this study, three brightness settings are applied.

After this process, the size of the data set increased from 2027 images (479 broken, 1548 non-broken
bones) to 8108 (1916 broken, 6192 non-broken bones) for each technique. The outputs from all
techniques after augmentation are given in Fig. 2, and a minimal sketch of these operations is shown below.
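The three operations can be sketched as in the following Python snippet using the Pillow library; it is illustrative only, and the brightness factors are assumed values rather than the exact settings used in this study.

    from PIL import Image, ImageEnhance, ImageOps

    def augment(path):
        img = Image.open(path)                      # 227x227x3 X-ray image

        # 1. Flipping: horizontal, vertical, and both
        flips = [ImageOps.mirror(img), ImageOps.flip(img),
                 ImageOps.flip(ImageOps.mirror(img))]

        # 2. Rotation: +30, -30 and +45 degrees; the rotated image is resized
        #    back to the original dimensions
        rotations = [img.rotate(a, expand=True).resize((227, 227))
                     for a in (30, -30, 45)]

        # 3. Brightness: factors below 1 darken, above 1 lighten (assumed values)
        brightness = [ImageEnhance.Brightness(img).enhance(f)
                      for f in (0.6, 0.8, 1.4)]

        return flips + rotations + brightness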

C. Methods for Classification

In this study, a convolutional neural network (CNN) is used for the training and test phases because of its
success in image processing [17]. A convolutional neural network consists of five primary layer types: an
input layer, convolution layers, pooling layers, fully connected layers and an output layer [18]. The
purpose of the convolution layer is to extract features from the input image by performing a dot
product between image patches and filters. After the convolution layer, a pooling layer is used to reduce the
dimensions of the feature matrix. Finally, the output feature matrix is flattened and passed to the
fully connected layer for the classification process [16].

For the classification, a CNN network is created. The network has 7 layers (6 convolutional layers, each
of them followed by a max-pooling layer, and 1 fully connected layer).

The filter sizes used in the convolutional layers are selected as 3x3, and in the pooling layers the maximum
pooling method is preferred. The Adam optimizer is implemented, and the batch size of the network is
chosen as 32. The training and test sets are randomly selected as 80% and 20% of the data set,
respectively.
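A minimal sketch of such a network is given below in Python/Keras. It is illustrative only: the filter counts per convolutional layer are assumptions, since the text above specifies only the 3x3 filter size, max pooling, the Adam optimizer and the batch size.

    from tensorflow import keras
    from tensorflow.keras import layers

    def build_cnn(input_shape=(227, 227, 3)):
        model = keras.Sequential()
        model.add(keras.Input(shape=input_shape))
        # 6 convolutional layers with 3x3 filters, each followed by max pooling
        for filters in (16, 16, 32, 32, 64, 64):   # assumed filter counts
            model.add(layers.Conv2D(filters, (3, 3), activation='relu', padding='same'))
            model.add(layers.MaxPooling2D((2, 2)))
        model.add(layers.Flatten())
        model.add(layers.Dense(1, activation='sigmoid'))  # fractured / no fracture
        model.compile(optimizer=keras.optimizers.Adam(),
                      loss='binary_crossentropy', metrics=['accuracy'])
        return model

    # model = build_cnn()
    # model.fit(x_train, y_train, batch_size=32, epochs=30, validation_data=(x_test, y_test))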

a) b) c)
Figure 2. Outputs after the augmentation process
a) after flipping, b) after rotating 30 degrees, c) after changing the brightness

3. RESULTS AND DISCUSSION

The objective is to classify the images into their classes, fractured or no fracture. In order to
achieve this goal, the CNN is utilized, and the results are given in Table I.

Despite the diversity of augmentation techniques, three of them were studied in this work, because
these methods are very easy to implement and have very low cost.

From Table I, it can be seen that the classification accuracy on the raw data is only 77.57%. On the
other hand, after the augmentation methods, the accuracy increased. The most effective method is
changing the brightness of the images in the data set, with 89.34% classification success. Therefore,
the outcomes of the study confirm the expectation that augmentation increases classification success.
However, the rotation method did not make much of a change. The essential problem here might be that
the image size changes after the rotation process and the image is resized again; during these stages,
some information loss occurs.

Table I. Classification Results for the Fracture Detection

Augmentation Technique    Accuracy (%)    Selectivity
Raw Data                  77.57           0.7807
Flipping                  85.13           0.8095
Rotation                  79.49           0.6915
Brightness                89.34           0.8952

4. CONCLUSION

This paper presents a basic solution, with a comparison, for the problem of overfitting in
veterinary medicine. Since deep architectures need big data sets, data augmentation is a very powerful
technique for creating bigger data sets.

A lot of related work exists in the literature, but all of these methods depend on the data sets. For this
reason, we wanted to contribute to the literature with a data set containing canine X-ray
images. The results of the study are promising for future work.

REFERENCES

[1] J.Schmidhuber, “Deep learning in neural networks: An overview”, Neural Networks, 2015,
vol. 61, pp. 85-117.

[2] Adams M, Chen W, Holcdorf D, McCusker M W, Howe P D, Gaillard F., “Computer vs
human: deep learning versus perceptual training for the detection of neck of femur fractures,” J Med
Imaging Radiat Oncol; vol.63, pp. 27–32, 2019.

[3] Raúl de la Fuente Lopes, “Wild Data Part 1: Augmentation”, https://blog.stratio.com/wild-
data-part-one-augmentation-2/ access: 19.04.2021, 18.56.

[4] Shorten, C., Khoshgoftaar, T.M. “A survey on Image Data Augmentation for Deep Learning,”
J Big Data 6, 60, 2019. https://doi.org/10.1186/s40537-019-0197-0

[5] Luis Perez, Jason Wang, “The Effectiveness of Data Augmentation in Image Classification
using Deep Learning,” Computer Vision and Pattern Recognition, 2017.

[6] Hussain Z, Gimenez F, Yi D, Rubin D. “Differential Data Augmentation Techniques for
Medical Imaging Classification Tasks,” AMIA Annu Symp Proc. pp. 979-984, 2018.

[7] J. Shijie, W. Ping, J. Peiyi and H. Siping, "Research on data augmentation for image
classification based on convolution neural networks," Chinese Automation Congress (CAC), pp. 4165-
4170, 2017.

[8] Calimeri F., Marzullo A., Stamile C., Terracina G. “Biomedical Data Augmentation Using
Generative Adversarial Neural Networks.” Artificial Neural Networks and Machine Learning –
ICANN 2017. Lecture Notes in Computer Science, vol 10614. Springer, Cham.
https://doi.org/10.1007/978-3-319-68612-7_71,

[9] M. Frid-Adar, E. Klang, M. Amitai, J. Goldberger and H. Greenspan, "Synthetic data
augmentation using GAN for improved liver lesion classification," 2018 IEEE 15th International
Symposium on Biomedical Imaging (ISBI 2018), 2018, pp. 289-293, doi: 10.1109/ISBI.2018.8363576

[10] Shin HC. et al. “Medical Image Synthesis for Data Augmentation and Anonymization Using
Generative Adversarial Networks,” Simulation and Synthesis in Medical Imaging. SASHIMI 2018.
Lecture Notes in Computer Science, vol 11037. Springer, Cham. https://doi.org/10.1007/978-3-030-
00536-8_1

[11] Jia S, Wang P, Jia P, Hu S. “Research on data augmentation for image classification based on
convolutional neural networks” Chinese automation congress, 2017. p. 4165–70.

[12] Shunjiro Noguchi, Mizuho Nishio, Masahiro Yakami, Keita Nakagomi, Kaori Togashi, “Bone
segmentation on whole-body CT using convolutional neural network with novel data augmentation
techniques,” Computers in Biology and Medicine, vol. 121, 2020.

[13] Hernández-García A., König P. “Further Advantages of Data Augmentation on Convolutional
Neural Networks,” Artificial Neural Networks and Machine Learning ICANN 2018. Lecture Notes in
Computer Science, vol 11139. Springer, Cham. https://doi.org/10.1007/978-3-030-01418-6_10.

[14] Sajjad M. et. al. “Multi-grade brain tumor classification using deep CNN with extensive data
augmentation,” Journal OF Computational Science, vol. 30, pp.174-182, 2019.

[15] Abdollahi B., Tomita N., Hassanpour S. (2020) “Data Augmentation in Training Deep
Learning Models for Medical Image Analysis,” Deep Learners and Deep Learner Descriptors for
Medical Applications. Intelligent Systems Reference Library, vol 186. Springer, Cham.
https://doi.org/10.1007/978-3-030-42750-4_6.

[16] Ergün G. B. , Güney S., Ergün T.G., Köpeklerdeki Uzun Kemiklerin Evrişimsel Sinir Ağları
Kullanılarak Sınıflandırılması, Fırat Üniversitesi Fen Bilimleri Dergisi, vol. 33, pp. 125-132, 2021.

[17] K. Guo et al., "Angel-Eye: A Complete Design Flow for Mapping CNN onto Customized
Hardware," 2016 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), Pittsburgh, PA,
2016, pp. 24-29.

[18] LeCun, Y.; Boser, B.; Denker, J. S.; Henderson, D.; Howard, R. E.; Hubbard, W.; Jackel, L.
D. (December 1989). "Backpropagation Applied to Handwritten Zip Code Recognition". Neural
Computation. 1 (4): 541–551.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Sürdürülebilir Eko Köy Olarak İnşaat Yapıları ve Tasarım Planlaması Örneği: Akbaş Köyü İncelemesi
An Example of Construction Structures and Design Planning as a Sustainable Eco-Village: Akbaş Village Analysis

Ayse Arıcı1*

Özet: Araştırma alanı olarak, sahip olduğu doğal güzellikler, yöresel konut tipi, kültürel
değerleri ve eko turizm potansiyeli açısından önemli bir yerleşim yeri olan Antalya İli Serik
İlçesi'ne bağlı Akbaş Köyü seçilmiştir. Akbaş Köyü'nün Türkiye'de modern besi-süt işletmeleri,
tarımsal turizm, eko turizm ve örnek sürdürülebilir köy çalışmalarına örnek teşkil etmesi,
alanın seçilmesinde etkili olmuştur. Bu çalışmada, “Akbaş Köyü, kırsal
planlama çerçevesinde mevcut konut tipleri-kullanılan yapı malzemeleri incelenerek
sürdürülebilir mimari için kullanılması gereken yapı malzemeleri belirlenmelidir. Mevcut
konutların kullanılabilmesi için nasıl yapı malzemeleri ile restorasyon yapılmalıdır? Yörede
nasıl amaçlara hizmet edecek yapılar gerekmektedir? Yapıların nasıl bir konfor sağlanması
beklenmektedir? Yörede ekonomik ve kültürel sürdürülebilirliğin sağlanması için neler
gereklidir” sorularına yanıt aranmıştır. Araştırmada yörede ki mevcut durumu incelenmiş ve
yerel halkın görüşleri alınmıştır. Bu kapsamda, yöre halkına bu sorular yöneltilerek
araştırmada anket çalışması yapılmıştır. Yörede fotoğraf çekilerek yerinde inceleme
yapılmıştır. Nitel ve nicel veriler toplanarak değerlendirilmiştir. Bu kapsamda Akbaş Köyü
için kırsal kalkınma ve yerel kimlik açısından önemli bir potansiyele sahip olduğu
görülmüştür. Sonuç olarak, kırsal alanlarda mevcut doğal- kültürel kimliği ve iklimsel
değerleri- tarım-hayvancılık faaliyetleri-turizm çalışmalarına ve çalışma alanında mevcut
kültürel değerlere sahip olan konutların uygun yapı malzemeleri kullanılarak yaşatılması ve
yeni planlanacak konutlar ve yapılar için yöreye uyumlu yeni yapı malzemeleri ile geleneksel
yapı malzemelerinin harmanlanarak sürdürülebilirliği bağlamında sonuç ve önerilere yer
verilmiştir.

Anahtar Kelimeler: Sürdürülebilir yapılar, eko köy, yapı malzemeleri, kırsal sürdürülebilirlik, fonksiyonel yapı malzemeleri.

Abstract: Akbaş Village in the Serik District of Antalya Province, which is an important settlement
in terms of its natural beauty, local housing type, cultural values and eco-tourism potential, was
chosen as the research area. The fact that Akbaş Village sets an example for modern fattening and
dairy farms, agricultural tourism, eco-tourism and exemplary sustainable village studies in Turkey
was influential in the selection of the area. In this study, answers were sought to the following
questions: within the framework of rural planning in Akbaş Village, which housing types and building
materials exist, and which building materials should be used for sustainable architecture? With
which building materials should the existing houses be restored so that they can continue to be used?
What kinds of buildings, serving which purposes, are needed in the region? What level of comfort is
expected from the buildings? What is necessary to ensure economic and cultural sustainability in the
region? In the research, the current situation in the region was examined and the opinions of the
local people were obtained. In this context, a survey was conducted by asking these questions to the
local people. Photographs were taken in the region and an on-site examination was carried out.
Qualitative and quantitative data were collected and evaluated. It was seen that Akbaş Village has an
important potential in terms of rural development and local identity. As a result, conclusions and
suggestions are presented, in the context of sustainability, on keeping alive the existing natural and
cultural identity, climatic values, agricultural and livestock activities, tourism activities and the
houses carrying the existing cultural values of the study area by using appropriate building
materials, and on blending traditional building materials with new building materials compatible
with the region for the newly planned houses and structures.

1 International Vizyon University, Faculty of Engineering and Architecture, Department of Civil Engineering, Gostivar,
Northern Macedonia.
* Corresponding author: [email protected]

Keywords: Sustainable buildings, eco village, building materials, rural sustainability, functional building materials.

1. Introduction

Rural development covers the activities carried out to achieve economic and social development and
growth in rural areas. Rural development projects are usually programmed as economic activities and
investments aimed at processing, adding value to and marketing agricultural products, and at
encouraging production. In rural development, the tourism factor should be considered as much as
agricultural and livestock production.

The main emphasis of the sustainable development approach is not economic growth but people, as
social beings, and environmental issues. In this approach, development is not merely economic
growth; economic growth alone does not raise the living standard of the majority. Beyond mere
economic growth, the important issues are the efficient use of natural resources, the establishment of
infrastructure, making the values handed down from the past efficient and sustainable so that they can
be passed on to the future, and restoring existing buildings so that they become functional and
healthy. The unbalanced use of natural resources and the destruction of resources for short-term
purposes are very dangerous for future generations.

Therefore, the main goal of sustainable development is to eliminate poverty and the causes that lead
to poverty (Gülçubuk, 2007).

In sustainable development, social necessity, equity, participation and the development of human
resources are interrelated principles. Realizing all of them results in the empowerment of society,
which is one of the main goals of development. An empowered society should find ways to benefit
continuously from natural resources while protecting them without destroying them. A rural area can
then define its own problems and develop solutions to them without the need for external
intervention (Mutlu, 2002).

Sustainable Buildings – Sustainable Planning – Sustainable Development

Figure 1. Design proposal for a sustainable model village (construction units and comfortable
buildings with new building materials; economic sustainability; social and environmental
sustainability)

The concept of environmental sustainability covers the protection and management of resources,
especially non-renewable resources. The concept of social sustainability means respecting human
rights by creating equal opportunities for everyone in society. The concept of economic
sustainability means addressing the level of welfare at different levels of society and the
cost-effectiveness of all economic activities (UNEP/WTO, 2005; Gurung, 2012).

2. Material and Method

Akbaş Village, in the Serik District of Antalya Province, was selected as the study area. The
condition of the existing dwellings in Akbaş Village was surveyed, the types of building
structures in the village were examined, and the existing types of commercial activity and the
buildings in which they take place were identified. Whether these buildings meet the needs of
the local people, and what their expectations are, were determined. The opinions and
experience of the local people were treated as an important source for the research. The
literature, maps and satellite images were examined, and photographs were taken in the area.
Information was obtained from the local residents and from the village headman (muhtar),
and interviews were held with local people and administrators about problems and possible
solutions. Qualitative and quantitative information about the changes in the area from the
past to the present was collected, and on-site inspection of the current situation was carried
out. Land-use characteristics were examined through field studies. A sustainable ecological
village proposal will be developed for the area using sustainable architecture and sustainable
building materials, and solutions using modern and comfortable building materials are
proposed for the building structures of the local architecture.

For planning a model sustainable village, existing model villages were examined within a
systems approach, and an attempt was made to create a new sustainable village example with
new and original buildings. Within this approach, the village is first toured together with its
leading figures; residents are interviewed and some families are visited, and preliminary
assessments are made through observations of the village's problems, conditions and
expectations, the development of the villagers, job opportunities, occupations and
self-sufficiency. The current condition of the dwellings in the village was determined, and
information was collected on how well they meet the needs of the villagers, what they are
dissatisfied with, and what kinds of buildings they want. Within the scope of the study, a
group representing the village was formed and a SWOT analysis was carried out.
Photographic work in the area supported qualitative and quantitative observations.

3. Findings

The problems identified through the SWOT analysis need to be discussed on a broader basis.
To develop the suggestions coming from the local people and to produce joint solutions
together with them, working groups are formed around selected topics (comfortable buildings
and functional structures, infrastructure and equipment, agricultural management, tourism,
eco-village work, shared history and culture, agricultural buildings, settlement areas and a
culture of comfortable building with new construction materials, environment, rural
appearance, etc.). In these working groups, opinions were gathered on the effects expected in
the area from implementing solution proposals based on those expectations.

Model sustainable villages should be created in order to give rural areas a contemporary
appearance. However, in establishing model sustainable villages and creating their living
spaces, it is very important that the expectations of the village culture are met. It has been
observed that, at present, these needs are not being met. To remedy the observed deficiencies
and to respond to expectations, projects can be developed for model sustainable village
designs. A design competition can be held in order to produce different projects, examine
different perspectives, and select the most efficient scheme. The main theme of such a
competition is to make the most efficient use of the money to be spent by studying the
villagers and the village carefully from a construction point of view. In this context, the
building facilities that should be created for Akbaş Village, listed as design units, are:

1. Educational buildings and practice garden
2. Teachers' house
3. Guest room
4. Library, information centre and reading room
5. Conference hall
6. Hotel and bungalow houses
7. Children's playground
8. Village park
9. Health facilities
10. Animal health post
11. Social institutions
12. Museum of agriculture and village handicrafts
13. Youth club
14. Bathhouse (hamam)
15. Place of worship (mosque)
16. Recreation areas
17. Cooperatives
18. Village shops
19. Sports field
20. Breeding stations for poultry, rabbits and bees
21. Breeding barn
22. Slaughterhouse
23. Dairy
24. Modern cemetery
25. Animal cemetery
26. Forage crop land
27. Grove
28. Village manure depot
29. Modern sheepfold
30. Marketplace
31. Fair, celebration and festival ground

In model village implementations, as part of the physical planning, dwellings capable of
housing a family of four to five, with an adjoining barn and feed stores, were established
together with the other socio-cultural facilities of the village. These ideal villages were
intended, as models, to replace the old and underdeveloped village type (Anonymous, 1934).

Alongside the diversification of tourism, the area's potential assets include its natural beauty,
its mountains, pond and historical features, and its highly valuable cave. By building
bungalow houses compatible with the local landscape and away from city life, the area can
host tourists, and by planning safari tours and special nature excursions, many alternative
forms of tourism, spread across the whole year, can be developed for domestic and foreign
visitors. Physical planning of tourism is of great importance for protecting the environment;
for sustainable tourism to develop, space must be used rationally in the physical planning of
tourism.

In plans to reorganize the village settlement, that is, in efforts to create tourism villages, the
culture, traditions, customs and architectural form of the village must be preserved, and
restoration must be carried out in keeping with its architecture using new, comfortable
building materials. New buildings must be designed in harmony with the old architecture,
and sustainability must be ensured through the selection of appropriate building materials.

Housing use in Akbaş Village, dwelling types for agricultural and livestock activities, and the
building materials used:

There are 200 dwellings in the village. The oldest houses are 80-100 years old, and there are
50 of them. Although these old houses are still standing, they have been abandoned; no one
has lived in them for 20-25 years. They were built with stone masonry walls and topped with
timber roofs. Although there are various house plans, the most common types are the
two-room and four-room houses. In the two-room type, one room is designed for sitting
during the day and sleeping at night and contains built-in cupboards; these cupboards are
special handcrafted pieces, carved and painted by hand. The other room is designed as a
kitchen for baking bread, but it can also be used as a bedroom from time to time as needed.
The toilet and bathroom are located outside the house, in the garden, owing to the belief that
spaces for washing and cleansing should not be inside the house. From the 1990s onwards
this view changed, and toilets and bathrooms began to be incorporated into the house
(Arıcı, 2021).

Figure 2 shows images of the single-storey traditional house type; the third image shows the
toilet located outside.

Traditional Single-Storey Village House

Figure 2. Traditional single-storey village house (Arıcı, 2021)

In the other traditional house type, the four-room house, two families usually live together.
Three rooms are used for sitting during the day and are converted into bedrooms at night by
laying out beds. Again, one room is used for baking bread and cooking over a fire, and also
serves for heating. The toilet and bathroom are located outside in the garden. At the front of
these houses, attached to the dwelling, wide and airy sitting areas of between 20 m2 and
40 m2, called "köşk", were designed; in summer this köşk could be used for sleeping as well
as for functional purposes such as eating and sitting. Every house has an animal barn, large or
small, and stoves are used for heating the dwellings in winter; there are 200 animal barns in
the village. Figure 3 shows a traditional house design in which local materials were used.
Although it was abandoned over time and fell out of use, it remains an important cultural
asset in the village fabric, conveying information about the village identity. The fact that
such a building was designed by the local people with their own means and built with their
own workmanship, blending local materials to meet local demand, gives it additional value
for understanding the village identity. Determining which purposes the building no longer
serves is also important for identifying the expectations of the local people from their
dwellings.

Traditional Two-Storey Village House

Figure 3. Traditional two-storey village house (Arıcı, 2021)

As the traditional village house became insufficient over time, an addition was made to the
dwelling. However, because this addition was not built with the house's own traditional
materials, it caused the dwelling to lose its identity (Figure 4). Awareness should be raised
about such cases. Protecting cultural value by selecting suitable building materials and
appropriate workmanship in traditional house types is very important. Restoration of
traditional house types should respect the fabric and design of the building, and any additions
should be implemented only after following the appropriate procedures.

Traditional Village House

Figure 4. Reinforced-concrete addition subsequently made to a traditional village house (Arıcı, 2021)

The village mosque, which is more than 100 years old, has never undergone restoration. At
present, the ablution fountains are insufficient and the place of worship itself is inadequate.
The mosque has no insulation; it is very hot in summer, and there is no thermal comfort.
Although the climate is mild in winter, the lack of thermal comfort has a negative effect on
users. Appropriate restoration work should be carried out inside the mosque, and a more
functional environment should be designed for the courtyard outside, using suitable building
materials.

Village Mosque

Figure 5. Views of the village mosque (Arıcı, 2021)

According to the officials who present the cave, the formation of Zeytintaşı Cave in the
village took approximately 14 million years to complete. The carbon dioxide level inside the
cave must be kept within certain limits; when it rises, deformations occur inside the cave. For
this reason, the cave can only host a limited number of visitors within a given period, and
cameras and digital recording are not allowed inside. Buildings with suitable cafés,
restaurants and waiting lounges should be constructed so that visitors to the cave can spend
their waiting time in a pleasant and comfortable environment. At present there are no places
where visitors can spend quality time, and this deficiency is an important shortcoming
affecting the sustainability of the village (Figure 6).

Zeytintaşı Cave

Figure 6. Exterior views of Zeytintaşı Cave (Arıcı, 2021)

4. Discussion and Conclusions

Akbaş Village can be planned as a sustainable ecological village, making possible ecological
tourism and a zero-waste life intertwined with nature.

The village has the potential to offer a distinctive alternative for those who wish to move
away from city life and lead a healthy life close to nature. Designs that bring this quality to
the fore and set an example of a sustainable eco-village should be developed.

The misuse of agricultural land, one of our most important natural resources, must be strictly
prevented; landowners should be made aware of this issue, and non-agricultural investments
and service enterprises should be encouraged to locate outside productive land.

Within the sustainable ecological village, café-restaurants and entertainment areas where
guests of the hotel and bungalow houses can socialize, places of worship, health facilities,
public buildings, educational buildings, information centres such as libraries, and sports and
recreation areas should be provided within walking distance, using original and sustainable
designs and appropriate building materials. Particular attention must be paid to thermal and
moisture insulation in the building materials to be used; under the humid conditions of the
Mediterranean region and Antalya Province, buildings should be designed to remain safe and
comfortable for a long time.

Themes suited to a nature-compatible, sustainable and ecological approach should be
selected.

The eco-village interpretations proposed in this study should serve as a solution model for
today's ecological and social problems, set an example for eco-village practice in Turkey,
and contribute to the development of eco-village awareness.

- To develop human resources, the effectiveness of rural physical infrastructure services,
together with technological infrastructure services in particular, should be increased; model
villages should be created in this context.

- The necessary restoration work should be carried out while preserving the architectural
culture of the village.

- The village infrastructure systems (drinking and domestic water, sewerage, transport
networks) should be planned and improved.

- Food, beverage and accommodation facilities reflecting the culture and architecture of the
village should be created.

- Transport between the village and the district and provincial centres should be organized,
and relations with the surrounding area should be developed.

- Suitable café-restaurants and waiting areas should be built around the cave, paying
attention to insulation and functional use in the new construction.

Acknowledgements

I owe a debt of gratitude to the local people of Akbaş Village and to International Vision
University for supporting this study on "Building Structures and Design Planning for a
Sustainable Eco-Village: The Case of Akbaş Village".

References

Arıcı, A. (2021). Antalya-Serik-Akbaş Köyü'nün Yeniden İşlevlendirilerek Post Covid veya
Salgın Hastalıklar Döneminde Sosyal Mesafeli Sürdürülebilir Konaklama Amaçlı Ekolojik
Köy Önerisi. Uluslararası Fen ve Uygulamalı Bilimler Kongresi, 13-14 Temmuz 2021,
Adıyaman, Türkiye.

Anonymous (1934). 2510 Sayılı İskân Kanunu.

Gülçubuk, B. (2007). Uluslararası Tarım Politikalarının Kırsal Yoksulluk Üzerine Etkileri.
Ulusal Tarım Kurultayı, 15-17 Kasım 2006, Adana.

Gurung, L. (2012). Exploring Links between Tourism and Agriculture in Sustainable
Development: A Case Study of Kagbeni VDC, Nepal. Master thesis (unpublished), Lincoln
University, Faculty of Environment, Society and Design, Department of Social Sciences,
Parks, Recreation, Tourism and Sport, Christchurch, Canterbury, New Zealand, 138 p.

Mutlu, N. (2002). Avrupa Birliği ve Türkiye'de Kırsal Kalkınma Politikaları. Güneydoğu
Anadolu Projesi Bölge Kalkınma İdaresi Başkanlığı, Ankara.

Şahinkaya, S. (2008). İdeal Cumhuriyet Köyü "Cumhuriyeti Kuranların Tahayyülüne Bir
Örnek". Mülkiye Dergisi, Cilt XXIV, Sayı 225, Ankara.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Moving Towards Sustainable Construction: A primitive


transitional guide

Hagar Ali LABIB1*, Gökhan GELISEN2*

Abstract: This research clearly defines the concept of sustainability and its historical traces as
it highlights the significance of sustainability, sustainable development, and sustainable
construction. Furthermore, this research aims to provide solutions and alternatives that aid in
transitioning towards sustainable construction through gradual and progressive alterations that
act as a primitive guide to help achieve the purpose of recognizing the environment and its
exhaustible resources.

Keywords: Sustainability, Sustainable development, Sustainable construction, Environment,


Construction, Renewable energy, Recycling, Energy efficiency.

Introduction
The definition of sustainability is not as relatively straightforward as it might seem, likewise
with the definition of sustainable development. The fact that there are hundreds of distinct
definitions for what defines sustainability and sustainable development is a good illustration
of this. The indefinite ability to persist substantially throughout many realms of life is referred
to as sustainability. It refers to the capacity of the Earth's ecosystem and human civilization to
coexist in the twenty-first century.
On the other hand, Sustainable Development has been the current development catchphrase
during the last few years. It has been adopted as the new development paradigm by a wide
variety of nonprofit and governmental organizations. Sustainable development is a framework
for achieving human development goals while also preserving natural systems' ability to
supply the natural resources and ecosystem services that the economy and society rely on.
In simple words, the key to the survival of all human beings on this planet is sustainability. At
the same time, sustainable development is the development that satisfies current demands
without jeopardizing future generations' capacity to meet their own. Sustainability refers to
the Earth's inherent life-supporting system's capacity to adapt to changing conditions. Human
activities have a significant impact on the Earth's natural environment and ecosystem.
1 Bahcesehir University, Faculty of Engineering and Natural Sciences, Civil Engineering Department, Construction Management Program, Istanbul, Turkey
2 Bahcesehir University, Faculty of Engineering and Natural Sciences, Civil Engineering Department, Istanbul, Turkey

* Corresponding author: [email protected] [email protected]

Therefore, these activities should be conducted without deteriorating or degrading the quality
or function of the biological system.
In civil engineering and construction management, the concept of sustainability is becoming
more prevalent. Construction engineering and management encompass all stages of a
building's life cycle, such as design, construction planning and management, construction
works, maintenance, and rehabilitation of structures or infrastructure items.
When it comes to new constructions, sustainable construction involves the use of renewable
and recyclable materials while also decreasing energy usage and wastage. Thus, the main
objective of sustainable building is to mitigate the industry's environmental effect. This
research aims to provide steps and solutions for moving towards sustainable construction
through gradual and progressive alterations to achieve the purpose of recognizing the
environment and its definite resources.
Sustainability Traces and Background
From the dawn of civilization to the present, the history of sustainability tracks human-
dominated ecological systems. This history is marked by a society's growing regional
prosperity, followed by crises that were either addressed, resulting in sustainability, or not,
resulting in decline.
Since the 18th century, interest in the environment has been increasing. The conception of the
idea of "sustainability," or "Nachhaltigkeit" in German, can be traced back to Hans Carl von
Carlowitz (1645–1714). Carlowitz suggested proposals for the forest's "sustainable
use." His belief that just as much wood should be harvested as can be regrown through
planned or organized reforestation initiatives became a guiding concept in contemporary
forestry.
Coal was utilized to power increasingly efficient engines and, eventually, to create electricity.
In the mid-twentieth century, a growing environmental movement highlighted that the
numerous material gains that were now being experienced had ecological consequences. The
energy crises of 1973 and 1979 revealed how reliant the global society had become on
nonrenewable energy supplies.
The threat presented by the human-induced increased greenhouse effect, primarily caused by
forest clearance and the combustion of fossil fuels, is becoming more widely recognized in
the twenty-first century.
Sustainable Construction
The technique of building a healthy environment based on ecological principles is known as
sustainable construction. According to Professor Charles J. Kibert, sustainable building is
based on six principles: "conserve, reuse, recycle/renew, protect nature, develop non-toxic
and high-quality materials." Furthermore, sustainable construction should not cease once
construction is over; the structure should have a lower environmental effect during its
lifecycle. This indicates that components in the building design should have a long-term
positive impact on the building's ecological impact. Proper insulation to minimize heat loss,
solar panels to reduce energy use, and long-lasting building materials are just a few examples.

The objective of sustainable construction is to decrease the industry's environmental effect by
implementing sustainable development methods, increasing energy efficiency, and deploying
green technologies. Although many various business sectors are striving to become far more
sustainable, the construction industry is unique. It could have a substantial impact on how
these practices are implemented. This is due to the industry's enormous use of resources and
energy. Below are few methods that can be followed or taken as a guide towards transitioning
into sustainable construction.
Training Employees
Sustainability starts with proper awareness and education. Construction companies must
invest in staff training to keep up with new methodologies, insights, and strategies used on
their projects. In addition, employee skills must be improved in concert with technology and
procedures as the sector moves towards a digital future. Not only will this boost employee
productivity and performance, but it will also boost job satisfaction and workplace
contentment.
Embodied Carbon Mitigation
Embodied carbon is the total influence of a material's greenhouse gas emissions across its
entire life cycle. Embodied carbon contributes to more than 10% of total global greenhouse
gas emissions. Embodied carbon is estimated to account for nearly half of all new
construction emissions between now and 2050. Concrete, steel, and ceramics are the materials
with the highest embodied carbon. Consequently, substituting part of the cement in concrete
with fly ash on a construction site can reduce embodied carbon.
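As a rough illustration of the fly-ash substitution mentioned above, the sketch below estimates the binder-related embodied carbon of one cubic metre of concrete. The emission factors and mix quantities are assumed, illustrative values, not figures from this paper.

```python
# Illustrative only: binder-related embodied carbon of 1 m3 of concrete when a
# share of the cement is replaced by fly ash.  Emission factors and the binder
# content are assumed round numbers, not data from this paper.
CEMENT_EC_KG_CO2E_PER_KG = 0.90   # assumed factor for Portland cement
FLY_ASH_EC_KG_CO2E_PER_KG = 0.03  # assumed factor for fly ash (a by-product)

def binder_embodied_carbon(binder_kg_per_m3: float, fly_ash_share: float) -> float:
    """Embodied carbon of the binder in 1 m3 of concrete [kg CO2e]."""
    cement = binder_kg_per_m3 * (1.0 - fly_ash_share)
    fly_ash = binder_kg_per_m3 * fly_ash_share
    return cement * CEMENT_EC_KG_CO2E_PER_KG + fly_ash * FLY_ASH_EC_KG_CO2E_PER_KG

baseline = binder_embodied_carbon(350.0, 0.00)  # plain Portland-cement binder
blended = binder_embodied_carbon(350.0, 0.30)   # 30 % of the binder as fly ash
print(f"baseline: {baseline:.0f} kg CO2e/m3, with 30% fly ash: {blended:.0f} kg CO2e/m3")
```

With these assumed factors, a 30% substitution cuts the binder's embodied carbon by roughly a quarter, which is the kind of saving the mitigation step above is aiming at.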
Modernized construction methods
Modern construction methods aim to be more sustainable since they are dedicated to saving
money, increasing delivery and construction speed, and avoiding budget overruns. On a
project, there is a range of approaches that may be used. Floors, walls, and roofs, for example,
can be manufactured in factories and delivered to the job site. Modular construction is
comparable to traditional construction, except it entails fabricating prefabricated rooms in a
factory environment that may be joined to make a complete structure. This can potentially
avoid more than 50% of the waste generated by conventional methods.
Management and Waste Reduction
Construction organizations can save money and materials by employing prefabricated parts,
since components are manufactured off-site in an environment where leftover materials can
be readily recycled. In addition, companies will pay less for waste services as a result of this
method.
Additionally, incineration is another option for getting rid of waste. This is especially
beneficial for hazardous chemicals since it eliminates the risk of contamination in the
surrounding area. While Incineration can result in CO2 emissions, the gas can be stored in
subterranean spaces, and the heat generated by burning can be utilized to generate power or
heat water. Finally, the resulting or the end product fly-ash can be used for manufacturing
purposes. In fact, fly-ash has been proven to act as an excellent cement substitute in concrete.

Renewable and Recyclable materials
Sourcing environmentally friendly materials might be challenging; however, it is not
impossible. Timber and recyclable materials such as concrete, metals, plastics, rubbers, and
composites are among these materials. In addition, several forms of renewable materials are
employed in various end-use applications to decrease the detrimental impacts on the
environment.
Insulation materials, light structural walls, natural paints and finishes, thatch, and geotextiles
are all made from crop-based renewable resources. As all-natural, renewable, and efficient
insulating materials, various sustainable materials such as Rockwool, sheep's wool, and
recycled paper are employed. For example, Rockwool, produced from molten stone, saves
100 times the amount of CO2, SO2, and NO2 emitted during its manufacture.
Virtual meetings, data and video sharing between stakeholders
The current COVID-19 pandemic has caused many people to work from home and use video
conferencing services to communicate. Continuing to participate in virtual meetings after
returning to work can help decrease carbon emissions by reducing travel. Of course, some site
visits will inevitably be necessary, but for those who aren't, virtual meetings can assist in
minimizing carbon emissions while also saving time when sharing information. In addition,
without needing to travel to the project site, video conferencing may be utilized to view the
equipment or the data as a team.
Utility Usage Tracking
Utility usage tracking is critical for identifying essential cost sinks and carbon sources on a
building site. Furthermore, documenting information regarding gasoline and gas is crucial
since it has a more significant impact on a job site than other utilities. Several online tools,
such as Green Badger, may be used to track water, energy, and trash across multiple building
sites to see where substantial improvements might be made.
Discouraging the Use of Paper Blueprints Drawings and Specs
Although it may appear trivial, avoiding the usage of paper blueprints, drawings, and
specifications may save a significant number of trees. But, most significantly, it will save a lot
of time, cut down on material waste, and speed up the completion of the project. It is highly
recommended to invest in construction management software instead of utilizing paper.
Users may use this software to arrange the entire construction project, calculate and regulate
various costs, manage the portfolio and documents, assess risk, and follow the project's
progress, among other things. Most businesses provide cloud-based solutions that allow on-
site and off-site employees to communicate in real-time. The project will undoubtedly have a
beneficial influence on the environment due to greater on-site productivity and decreased
waste.
Encouraging the use of Energy-Efficient Construction and Material Handling
Equipment
Using energy-efficient construction and material-handling equipment is yet another method to
assure sustainability. Reduce energy waste by using the appropriate equipment. You may
prevent squandering fuel, for example, by using a suitable on-site generator. Similarly,
energy-efficient overhead cranes and other construction equipment may be used. However,
repairing and maintaining energy-efficient construction equipment can be challenging. In

addition, handling such equipment may need further training for the employees. That is why
you should always purchase equipment from local suppliers familiar with local rules, weather
conditions, and material handling needs.
Monitoring Transportation
One of the most critical on-site construction operations is transportation. A transportation
management system may be implemented to decrease the transportation fleet's carbon impact.
It will allow tracking drivers and their driving habits, setting speed limits, planning the
optimal routes, and doing real-time preventive maintenance. All of these elements will aid in
the reduction of air pollution.
Conclusion
The significance of sustainability, sustainable development, and sustainable building is
highlighted in this research, which clearly explains the concept of sustainability, sustainable
development, and its historical origins. Furthermore, the objectives of this research were to
offer solutions and alternatives that will assist in the transition to sustainable construction
through gradual and incremental changes that can positively impact the environment.
Sustainability refers to the Earth's inherent life-supporting system's capacity to adapt to
changing conditions, and the key to the survival of all human beings on this planet is
sustainability. At the same time, sustainable development is basically the development that
satisfies current demands without jeopardizing future generations' capacity to meet their own.
In civil engineering and construction management, the concept of sustainability has been
becoming more prevalent. The technique of building a healthy environment based on
ecological principles is known as sustainable construction. The objective of sustainable
construction is to decrease the industry's environmental effect by implementing sustainable
development methods, increasing energy efficiency, and deploying green technologies.
Several methods can be followed or taken as a guide towards transitioning into sustainable
construction. This includes employee training, mitigating embodied carbon through the
replacement of cement with a byproduct such as fly-ash, following modernized construction
methods, managing and reducing waste, usage of recyclable and renewable materials, tracking
utility, discouraging the use of paper blueprints drawings and specs, encouraging virtual
meetings and the use of Energy-Efficient Construction and Material Handling Equipment
and finally, monitoring transportation to enhance time and waste management. Historically,
the construction sector has been seen as a major polluter of the environment. The construction
sector, on the other hand, is striving to develop more sustainable building methods. As a
result, construction trucks, equipment, and building materials are becoming more energy-
efficient and environmentally friendly.
References

10 Ways to Green Up Your Construction Site. Sustainable Investment Group. (2020, April
30). https://sigearth.com/10-ways-to-green-up-your-construction-site/.

10 ways to make construction more sustainable. List secondary lists page | Construction
Global. (n.d.). https://constructionglobal.com/top10/10-ways-make-construction-more-
sustainable/modern-construction-methods.

BigRentz, I. (2021, April 19). HOME. BigRentz. https://www.bigrentz.com/blog/sustainable-
construction.

Glick, V. (2019, May 20). Building a Better Tomorrow: 6 Ways to Create a More Sustainable
Construction Site. construction21.org.
https://www.construction21.org/articles/h/building-a-better-tomorrow-6-ways-to-create-
a-more-sustainable-construction-site.html.

Hans Carl von Carlowitz and "Sustainability." Environment & Society Portal. (n.d.).
http://www.environmentandsociety.org/tools/keywords/hans-carl-von-carlowitz-and-
sustainability.

Lélé, S. M. (1991). Sustainable development: a critical review. World Development, 19(6),


607-621.

Sustainability and Sustainable Development. Circular Ecology. (2020, May 17).
https://circularecology.com/sustainability-and-sustainable-development.html.

Sustainable construction. British. (n.d.). https://www.british-assessment.co.uk/insights/what-is-
sustainable-construction-and-why-is-it-important/.

Sustainable Development. International Institute for Sustainable Development. (2013, January
6). https://www.iisd.org/about-iisd/sustainable-development.

What is sustainability? What is biomimicry? Explain why learning from the earth is a key to
learning how to live more sustainably. Bartleby learns. (n.d.).
https://www.bartleby.com/

Why Is Sustainability Important?: BluGlacier - Top-quality salmon producer. BluGlacier.


(2021, March 11). https://bluglacier.com/why-is-sustainability-important/.

Wikimedia Foundation. (2021, May 10). History of sustainability. Wikipedia.


https://en.wikipedia.org/wiki/History_of_sustainability.

Wikimedia Foundation. (2021, June 14). Sustainability. Wikipedia.


https://en.wikipedia.org/wiki/Sustainability.

Zavadskas, E. K., Šaparauskas, J., & Antucheviciene, J. (2018). Sustainability in construction


engineering.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Relationship and Differences Between Leadership and


Management in Construction

Hagar Ali LABIB1*, Gökhan GELISEN2*

Abstract: This research clearly defines leadership and management in construction


management as it investigates the relationship, attributes, and differences and the link that
binds them both. The goal of this research is to convey a fundamental comprehension of the
contrasts between managers and leaders while examining the characteristics of an ideal leader
in order to achieve success in construction management.
Keywords: Leadership, Management, Construction, Motivation, Delegation.

Introduction
What is leadership? This is a simple question that, despite its simplicity, continues to
be a source of consternation for both experts and non-experts.

It is prevalent to encounter the misconception between a leader and a boss. It mainly refers to
seniority or a particular person's position in the hierarchy of a company or an organization.
When the word "leader" is mentioned, it is typical for people's first thoughts to divert towards
a domineering, bossy, or senior executive kind of individual. However, this is not the case
because there is quite a difference between a manager and a leader. In simple words,
according to the Oxford Languages dictionary, it can be agreed that a leader is said to be
someone who leads a specific group or an organization.

Whereas leadership is best described as a process of social influence, which maximizes the
efforts of others towards the achievement of a goal (Kruse, April 2013). Nevertheless, a
'leader' or 'leadership' has several definitions that will be discussed later in depth in this
research. On the other hand, a manager, by definition, is said to be an individual who is in
charge of a worker, group of people, or organization according to Oxford Languages. To
elaborate, a manager is a person who is usually responsible for other staff members,
commands or gives orders, and is held accountable for company goals and developing
employees. Basically, a manager manages their employees, while a leader inspires them to
innovate, think creatively, and strive for perfection. The ability to properly lead and manage
in the construction business is vital as it aids in evading negative issues that may arise during
a project's life cycle. This research intends to deliver a genuine understanding of the
differences between a boss and a leader while discussing an ideal leader's attributes to achieve
success in construction management.

1 Bahcesehir University, Faculty of Engineering and Natural Sciences, Civil Engineering Department, Construction Management Program, Istanbul, Turkey
2 Bahcesehir University, Faculty of Engineering and Natural Sciences, Civil Engineering Department, Istanbul, Turkey
* Corresponding author: [email protected] [email protected]
Management in Construction

Management is the coordination and organization of assignments to accomplish an objective.


The management activities include determining corporate strategy and coordinating the
efforts of employees to achieve these goals with the help of available resources. Additionally,
in a company or an organization, management could also refer to the seniority structure of
employees.

Construction management is a branch of civil engineering known as a profoundly proficient


framework intended to work with planning, coordinating, and controlling a construction
project from commencement to completion. It is the role of the construction manager to work
as a leader throughout the life of the construction project. This is because working as a leader
allows the construction manager to efficiently and effectively plan, monitor, and control the
progression of a construction project.

A construction manager's need for leadership ability depends on the tasks, teams,
organizational environment, manager's capabilities, project resources, available time, and
budget.

Leaders versus Managers

Over the past decades, one of the recurring questions has been the difference between
managers and leaders. There is no direct straight-line answer for that; however, this research
intends to compare and contrast management versus leadership skills by exploring their
similarities and differences to provide a clear picture to the reader. It's important to note that
there is no such thing as a hundred percent manager or a hundred percent leader. It's the job of
the construction manager to do a little bit of both.

Managers are clearly in managerial positions since they are tied to an official position,
whereas leaders can lead anywhere. A leader should lead by example by motivating their team
and inspiring them to move forward towards the set goals. Managers tend to have more of a
controlled mindset, and they tend to focus more on the administration of processes, the
structure, the resources of the organization. They are very much into maintaining their status
quo as maintenance plays a crucial part in their work. They also tend to be very task-focused.
So overall, managers work on day-to-day tasks and making sure all the activities get
accomplished in an orderly fashion. Leaders, however, can be slightly different since their
approach to pursuing goals can vary from managers. Leaders can be inclined to have a more
persuasive approach to communication rather than controlling.

According to Bill Hybels, leaders are also focused on taking the organization from 'here' to
'there.' This basically means casting a vision by setting goals for the team members to get
from point A to point B for the betterment of the organization. Leaders are also known for
taking risks; instead of maintaining their status quo and keeping the working environment
running under steady conditions, they try to stretch the organization past what it is currently
doing, which usually involves taking risks. Additionally, Leaders are inclined to be people or
relationship-focused as they mentor, coach, and teach. It can be noticed that most of the
leadership skills mentioned above, such as persuasive communication, take the organization
from 'here' to 'there' through casting a vision well as relationship-focused approach are all
communication-centric, which means that a leader must have outstanding and sophisticated

communication skills to practice those leadership functions well. As mentioned above,
nowadays it is essential to be both a good manager and a good leader.

Managerial and Leadership Skills

In simple words, leadership is the process or the action of leading a group of people towards a
common goal. Being a leader means one is required to inspire, motivate and encourage. Some
of the top leadership skills that leaders are required to have will be discussed below.

Leaders tend to communicate with any group effectively as they have outstanding
communication skills. They are inclined to motivate and inspire their team members towards
the success of the organization. Leaders have proficient delegation skills as they tend to
delegate certain activities best suited or done best by someone else. They encourage a positive
environment even at the worst times since positivity leads to measurable performance
improvement. They represent trustworthiness since trust is the glue that binds the leader to
their team members or followers as it brings forth the capacity for organizational and
leadership success.

Additionally, one of the crucial characteristics of an effective leader is promoting creativity.


It fosters a prosperous and healthy workplace environment. It opens up opportunities in
problem-solving, achieving goals, and inspiring teams to be creative and find unlikely
perspectives. One of the advantages of being a leader over a manager is that leaders are more
likely to feed the behavior of giving constructive feedback rather than destructive feedback
without offending any team members. Moreover, leaders take responsibility for actions and
performance. A responsible behavior combined with a responsible attitude gives the leader
powerful influence and accelerates their leadership growth.

Last but not least, leaders are quite committed to their team, work, and the organization. This
is because commitment is a trait of leadership that motivates and draws others. It
demonstrates that the leader is committed to the cause and believes in it. Before they believe
in the vision, a team will believe in the team leader. Commitment is a heart issue. Finally,
leaders are flexible and adaptable. This is because leaders with an elastic cognitive style can
employ various thinking processes and mental frameworks. Leaders may better understand
how their team thinks and feels and how their customers believe by increasing their awareness
and perspective.

On the other hand, while leadership is the process or the action of leading a group of people
towards a common goal, management is the process of dealing with or controlling things or
people. And while it's the leader's job to inspire, motivate and encourage, it's the manager's
job to plan, organize and coordinate. Some of the top managerial skills would start with
interpersonal skills. Interpersonal communication skills are quite vital since they could help
the manager become more productive at work, create solid and constructive connections with
coworkers, and execute team tasks efficiently. Strong interpersonal skills could significantly
affect the confidence and efficiency of the entire team or department.

Like leaders, managers are expected to communicate, motivate, and delegate effectively with
their team members. Managers are expected to forward a plan by thinking ahead and the
organization's path to ensure it's on the right track towards achieving its next goal. Besides,
managers are supposed to have efficient organizational skills such as time management,
scheduling, prioritization through to-do and to-don't lists, project management skills,

continuous communication, multi-tasking, and flexibility and adaptation are all examples of
organizational abilities. Managers are more likely to adopt strategic thinking. It
aims to identify and create unique possibilities to generate value by facilitating a provocative
and innovative debate among those who could influence its performance, such as the board of
directors and management.

Furthermore, managers are expected to have analytical and robust problem-solving skills
since the managerial problem-solving operation is a non-stop cycle of planning, doing,
checking, and acting while keeping an eye on the issue and the results. Managers alter their
plans as required so that the team may continue to work toward a solution that will lead to
improved company results. Last but not least, managers tend to have commercial awareness.
Commercial awareness is the capacity to comprehend what makes a company or organization
prosperous, whether via the purchase or sale of goods or the provision of services to a market.

Business awareness or organizational awareness are other terms for commercial awareness.
Lastly, a good manager is required to be a mentor since mentoring is essential in management.
It is mainly about assisting team members in becoming more productive. It's a mentoring
relationship that aims to provide the mentee/team member the confidence and support they
need to take charge of their growth and job.

Conclusion

This research intends to deliver a genuine understanding of the differences between a boss
and a leader while discussing an ideal leader's attributes to achieve success in construction
management. The ability to properly lead and manage in the construction business is vital as it
aids in evading negative issues that may arise during a project's life cycle. Leadership is the
act of leading a group of people by inspiring, motivating, and encouraging them. In contrast,
on the other hand, managing focuses on planning, organizing, and coordinating people.
Managers are clearly in managerial positions since they are tied to an official position,
whereas leaders can lead anywhere. Managers and leaders have several skills that they share
in common: motivation, delegation, and effective communication. Finally, a leader is required
to have outstanding leadership and managerial skills. Leadership and management cannot replace
one another, and in today's world, it is crucial to find the proper combination of being a little
bit both.

"Leadership is the art of getting someone else to do something you want to be done because
he wants to do it" Eisenhower, D. D. (2012)

"Management is doing things right; leadership is doing the right things." Peter, D. (2009)

References

SkillsYouNeed. (n.d.). Developing Commercial Awareness.
https://www.skillsyouneed.com/general/commercial-awareness.html.

6 Essential Organizational Skills for Leadership Success - mysimpleshow. simpleshow video


maker. (2018, April 17). https://videomaker.simpleshow.com/6-essential-
organizational-skills-leadership

Drucker, P. (2009). Management is doing things right; leadership is doing the right things. In
US Naval Institute Proceedings (Vol. 135, No. 4, p. 96).

Everything You Need To Know About the Importance of Interpersonal Communication at


Work. Indeed Career Guide. (n.d.). https://www.indeed.com/career-advice/career-
development/importance-of-interpersonal-communication.

Eisenhower, D. D. (2012). Leadership: the art of getting someone else to do something you
want to be done because he wants to do it. Leadership.

Gharehbaghi, K., & McManus, K. (2003). The construction manager is a leader. Leadership
and management in engineering, 3(1), 56-58.

Kruse, K. (2015, September 2). What Is Leadership? Forbes.


https://www.forbes.com/sites/kevinkruse/2013/04/09/what-is-
leadership/?sh=685c3cc45b90.

Keating, K. (2021, March 10). 3 Traits of Adaptable Leaders. Main.


https://www.td.org/insights/3-traits-of-adaptable-leaders.

Lyon, A. (2017). Management vs. Leadership. YouTube. YouTube.


https://www.youtube.com/watch?v=Tddlkly1cC0.

Managers Must Be Effective Problem-Solvers. CMOE. (2019, November 18).


https://cmoe.com/blog/managers-must-effective-problem-
solvers/#:~:text=The%20managerial%20problem%2Dsolving%20process,them%20to%
20better%20business%20results.

Oxford Languages and Google - English. Oxford Languages. (n.d.).


https://languages.oup.com/google-dictionary-en/.

Peck, A., Roddy, S., & Clark, E. (n.d.). The Importance of Creative Leadership. The
Importance of Creative Leadership | Clutch. co.
https://clutch.co/hr/resources/importance-of-creative-leadership.

What is a Boss? Three Types of Bosses. Job Search. (n.d.).


https://www.indeed.com/hire/c/info/types-of-bosses.

What is Construction Management? (And Why It Matters): Stonemark. Stonemark


Construction Management. (2021, May 24). https://stonemarkcm.com/blog/what-is-
construction-management/.

What Is Management? Definitions and Functions. Indeed Career Guide. (n.d.).


https://www.indeed.com/career-advice/career-development/what-is-management.

What is 'strategic thinking'? Effective Governance. (n.d.).


https://www.effectivegovernance.com.au/page/knowledge-centre/news-articles/what-is-
strategic-thinking.

Zenger, J. (2015, July 16). Taking Responsibility Is The Highest Mark Of Great Leaders.
Forbes. https://www.forbes.com/sites/jackzenger/2015/07/16/taking-responsibility-is-
the-highest-mark-of-great-leaders/?sh=65d3738448f2.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Thermodynamic Assessment of Solar-Driven Rankine Cycle for


Supercritical Working Fluids

Serpil ÇELİK TOKER1*, Gamze SOYTÜRK1, Hiroshi YAMAGUCHİ2,


Önder KIZILKAN1

Abstract: In this study, thermodynamic analysis of transcritical Rankine cycle integrated with
evacuated solar collectors operating by various supercritical working fluid types is conducted.
In order to examine the performance of the integrated system, the energy and exergy analyses
are first made, followed by a parametric study for determining the effect of turbine inlet
temperature, solar irradiation, and working fluid on the system performance. Using the
meteorological data of Isparta province, the performance of the integrated system has been
examined. The results indicate that when the working fluid is taken as N2O, the performance
of the system gets better compared to that based on other types of working fluids. Also, that
the highest power generation is calculated for the cycle using R170 with a net power
generation of 0.2479 kW, followed by R744. Results show that the highest exergy efficiency
is calculated for the system using N2O as 10.1%.

Keywords: Evacuated U-tube, transcritical Rankine cycle, supercritical fluids, energy.

1. Introduction

An important rise in the world population and technological advances has significantly
increased the energy demand. However, fossil fuels used to generate energy are consumed
rapidly, and environmental pollution due to this usage rises. In this context, renewable energy
resources and innovations are seen to be the primary solution. Solar energy, which is one of
the renewable energy sources, has become important due to its features such as sustainability,
abundance, and environmental friendliness. Researchers have focused on studies on electricity
generation from solar energy in recent years. Within these workings low- temperature solar
energy application has been widely developed. There are many thermodynamic cycles such as
Organic Rankine, transcritical CO2 Rankine, and Kalina cycle that operate using low-
temperature sources (Mehran et al., 2019).

Work is increasing on transcritical Rankine cycles to improve the performance of energy


conversion at low temperatures. The first step when designing a transcritical Rankine cycle is
the choice of the appropriate working fluid. Working fluids with relatively low critical
temperatures and pressures can be compressed directly to their supercritical pressures and
heated to their supercritical state before expansion so as to obtain a better thermal match with
the heat source (Kizilkan, 2019). In a transcritical Rankine cycle, the working fluid is heated
directly from the liquid state into the supercritical state after passing the two-phase region,
which allows it to have a better thermal match with the heat source, resulting in less exergy
1 Isparta University of Applied Sciences, Department of Mechanical Engineering, Isparta/Turkey
2 Doshisha University, Department of Mechanical Engineering, Kyoto/Japan
* Corresponding author: [email protected]
loss. Furthermore, by avoiding the boiling process, the configuration of the heating system is
potentially simplified. If the condensation temperature is close to the critical temperature,
only a small fraction of the heat rejection occurs through condensation, and the cycle is then
called a condensation cycle. In this case, both a pump and a compressor are sometimes used for the
compression process (Dostal, 2006).

There are several general criteria, including physical and chemical characteristics, personal,
environmental and operational safeties, design, and economy, that the working fluid should
ideally satisfy to be used in power cycle. Several working fluids have been proposed within
the last decade (Sarkar, 2015). Various organic working fluids including R23, R32, R125,
R143a, R227ea, R234, R236fa, R245fa, R134a, R218, isobutene, propane, propylene, and
R170 have been used in the transcritical Rankine cycle (Karellas and Schuster, 2008). Cayer
et al. (2010), used CO2, ethane, R125 as working fluids in the transcritical Rankine cycle.
Among various working fluids, CO2 is a non-flammable and non-toxic fluid and has less
influence on the environment and personal safeties than other working fluids (Zhang et al.,
2007). Nitrous oxide is another working fluid that has a very similar molecular weight, critical
pressure, and temperature causes nearly the similar behavior to CO2 with respect to system
temperature and pressure, properties, and compactness, but it remains largely unexplored
(Sarkar, 2015).

Chen et al. (2010), Chen (2010) and Gao et al. (2010) reviewed various working fluids for
transcritical Rankine cycle and used various selection criteria such as physical and
thermodynamic properties, efficiencies, turbine shape factor, stability of the fluid and
compatibility with materials. Zhang et al. (2007), have also conducted research on the
supercritical CO2 power cycle. Their experiments revealed that the power generation
efficiency was 8.78% to 9.45%, and the COP for the overall outputs from the cycle was 0.548
and 0.406, respectively, on a typical summer and winter day in Japan. Karellas and Schuster
(2008) suggested organic fluids such as isobutene, propane, propylene, difluoromethane, and
R-245fa for the supercritical Rankine cycle. They concluded that supercritical fluids
could maximize the efficiency of the system. Gu and Sato (2001), used propane, R125 and
R134a as the working fluids in the transcritical power cycle and showed that propane and R-
134a are appropriate working fluids of supercritical cycles for geothermal binary design.

In this study, thermodynamic analysis of a transcritical Rankine cycle integrated with


evacuated tube solar collectors operating by various working fluid types is conducted. In
order to examine the performance of the integrated system, the energy and exergy analyses of
the system are conducted. After that, parametric analysis is performed to examine the
variation of system performance with solar radiation values and turbine inlet temperature
using various working fluids. In addition, the monthly performance of the evacuated tube
solar collector with the meteorological data of Isparta province is analyzed.

2. Transcritical Rankine Cycle with Regenerator

The layout of the transcritical Rankine cycle with regenerator is shown in Figure 1. In the
transcritical Rankine cycle, the working fluid is pumped above its critical pressure (1–2) and
then heated to a supercritical state in the evacuated tube solar collector (3–4). The
supercritical fluid is expanded in the turbine to generate work (4–5). After expansion, the
fluid condenses in the condenser (6–1), and the condensed liquid is then pumped back to high
pressure, completing the cycle. As seen in the figure, the regenerator is used to recover the
turbine exhaust heat (5–6) and to preheat the liquid entering the evacuated tube solar
collector (2–3).

Figure 1. Transcritical Rankine cycle with regenerator

The selection of the working fluid is one of the essential factors influencing system
performance and also affects the environment. Generally, an appropriate working fluid
should have suitable thermodynamic properties and low environmental impact. In this study,
the cycle fluids are chosen from among the types most commonly used in the literature. Six
supercritical fluids are considered, namely R744, R125, R41, SF6, R170, and N2O. Table 1
shows the basic features of the selected supercritical fluids.

Table 1. Thermophysical properties of supercritical working fluids (Kizilkan, 2019; Sarkar, 2015)

Chemical Name        Refrigerant  Molecular Mass  ODP    GWP    Critical           Critical
                     Number       (kg/kmol)                     Temperature (°C)   Pressure (kPa)
Carbon dioxide       R744         44.01           0      1      30.97              7377.3
Pentafluoroethane    R125         120             0      3500   66.023             3617.7
Fluoromethane        R41          34.03           0      92     44.13              5897
Sulfur hexafluoride  SF6          146.1           0      22800  45.57              3755
Ethane               R170         30.07           0      3      32.2               4872
Nitrous oxide        N2O          44.01           0.017  298    36.4               7245

Abbreviations: GWP, global warming potential; ODP, ozone depletion potential

For the performance analysis of the evacuated tube solar collector assisted transcritical
Rankine cycle for different supercritical fluids, the design parameters are given in Table 2.

Table 2. Operating parameters of transcritical Rankine cycle (Kizilkan, 2020)

Parameter                              Value
Turbine inlet temperature, °C          200
Turbine outlet pressure, kPa           Pcrit × 1.13
Pressure ratio                         1.5
Turbine isentropic efficiency, %       92
Pump isentropic efficiency, %          85
Heat exchanger effectiveness, %        65

3. Thermodynamic Analysis

A thermodynamic model is constructed using the Engineering Equation Solver (EES) software
(Klein, 2020) in order to evaluate the energetic and exergetic performance of the solar-powered
transcritical Rankine cycle for different supercritical working fluids. The main assumptions of
the model are given as:

• The system operates at steady-state conditions.
• Kinetic and potential energy and exergy changes are ignored.
• Pressure is constant in the heat exchangers.
• There are no heat losses in the heat exchangers.
• The turbine and pump operations are assumed to be adiabatic.
• The reference state properties are 22°C and 101.325 kPa.

In order to calculate the solar energy absorbed by the evacuated tube solar collector, the
relations developed in reference (Kalogirou, 2009) are utilized. The useful energy absorbed
from the sun is calculated as:

$\dot{Q}_u = F_R A \left[ S - U_L (T_{in} - T_a) \right]$   (1)

where $S$ is the solar irradiance, $F_R$ is the heat removal factor, $U_L$ is the overall heat
loss coefficient, $A$ is the collector area, $T_a$ is the ambient air temperature, and $T_{in}$
is the collector inlet temperature of the working fluid. For determining the working fluid
temperature at the collector exit, the useful solar energy collected by the evacuated solar
tubes can also be written as:

$\dot{Q}_u = \dot{m} c_p (T_{out} - T_{in})$   (2)
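To illustrate how Eqs. (1) and (2) can be combined to estimate the collector exit temperature, a minimal Python sketch is given below. All collector parameters (F_R, U_L, area, mass flow rate, specific heat) are illustrative assumptions, not values from the paper; only the August peak irradiation of 974 W/m2 reported later for Isparta is reused.

```python
# Minimal sketch combining Eqs. (1)-(2): useful solar gain of the evacuated
# tube collector and the resulting fluid outlet temperature.
def collector_outlet_temperature(S, T_in, T_a, FR=0.8, UL=1.5, A=4.0,
                                 m_dot=0.05, cp=1200.0):
    """S in W/m2, temperatures in K, UL in W/(m2 K), A in m2,
    m_dot in kg/s, cp in J/(kg K); all defaults are assumed values."""
    Q_u = FR * A * (S - UL * (T_in - T_a))   # Eq. (1), useful solar gain in W
    return T_in + Q_u / (m_dot * cp)         # rearranged Eq. (2)

# e.g. at the August peak irradiation reported for Isparta (974 W/m2)
T_out = collector_outlet_temperature(S=974.0, T_in=330.0, T_a=300.0)
print(f"collector outlet temperature ~ {T_out - 273.15:.1f} C")
```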

The mass balance equation for steady‐state and steady‐flow processes can be written as
(Cengel and Boles, 2006):

$\sum \dot{m}_{in} = \sum \dot{m}_{out}$   (3)

In the above equation, $\dot{m}$ is the mass flow rate, and the subscripts in and out stand for
inlet and outlet, respectively. The energy balance equation can be written as:

$\dot{Q} + \sum \dot{m}_{in} h_{in} = \dot{W} + \sum \dot{m}_{out} h_{out}$   (4)

Here, $\dot{Q}$ is the heat transfer rate, $\dot{W}$ is the work rate, and $h$ is the specific
enthalpy. For the exergy analysis, the balance equation is defined as (Dincer and Rosen, 2007):

$\dot{Ex}_Q + \sum \dot{Ex}_{in} = \dot{Ex}_W + \sum \dot{Ex}_{out} + \dot{Ex}_{dest}$   (5)

where the first and the second terms are the exergy rates associated with heat and work,
respectively, $\dot{Ex}$ is the flow exergy rate, and $\dot{Ex}_{dest}$ is the exergy destruction
rate. Each term in the above equation is defined as follows:

$\dot{Ex}_Q = \dot{Q} \left( \dfrac{T - T_0}{T} \right)$   (6)

$\dot{Ex}_W = \dot{W}$   (7)

$\dot{Ex} = \dot{m} \, ex$   (8)

$\dot{Ex}_{dest} = T_0 \dot{S}_{gen}$   (9)

In equation (8), $ex$ is the specific flow exergy and can be calculated using the equation below:

$ex = (h - h_0) - T_0 (s - s_0)$   (10)

By applying the above balance equations to each system element, the capacity and exergy
destruction rate equations for each system component can be obtained as follows.

Evacuated Tube Solar Collector:

$\dot{Q}_{col} = \dot{m} (h_4 - h_3)$   (11)

$\dot{Ex}_{dest,col} = \dot{Ex}_{Q,sol} + \dot{Ex}_3 - \dot{Ex}_4$   (12)

Turbine:

$\dot{W}_{T} = \dot{m} (h_4 - h_5)$   (13)

$\dot{Ex}_{dest,T} = \dot{Ex}_4 - \dot{Ex}_5 - \dot{W}_{T}$   (14)

Regenerator:

$\dot{Q}_{reg} = \dot{m} (h_5 - h_6) = \dot{m} (h_3 - h_2)$   (15)

$\dot{Ex}_{dest,reg} = \dot{Ex}_2 + \dot{Ex}_5 - \dot{Ex}_3 - \dot{Ex}_6$   (16)

Condenser:

$\dot{Q}_{cond} = \dot{m} (h_6 - h_1)$   (17)

$\dot{Ex}_{dest,cond} = \dot{Ex}_6 - \dot{Ex}_1 - \dot{Ex}_{Q,cond}$   (18)

Pump:

$\dot{W}_{P} = \dot{m} (h_2 - h_1)$   (19)

$\dot{Ex}_{dest,P} = \dot{Ex}_1 - \dot{Ex}_2 + \dot{W}_{P}$   (20)

The energy efficiency of the solar-assisted transcritical Rankine cycle with regenerator is
expressed as:

$\eta_{en} = \dfrac{\dot{W}_{T} - \dot{W}_{P}}{\dot{Q}_{col}}$   (21)
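As a rough numerical illustration of how the balance equations above can be evaluated outside EES, the sketch below computes the turbine and pump work and the cycle energy efficiency for R744 using the open-source CoolProp property library. The turbine inlet temperature, pressure levels, and isentropic efficiencies are taken from Table 2; the condenser outlet state, the mass flow rate, and the simplification of neglecting the regenerator are our own assumptions, so this is only a sketch, not the paper's calculation.

```python
# Minimal energy-balance sketch of the regenerative transcritical Rankine
# cycle for R744, using CoolProp for fluid properties.
from CoolProp.CoolProp import PropsSI

fluid = "CO2"                       # R744; any of the six fluids could be tried
P_crit = PropsSI("Pcrit", fluid)    # critical pressure, Pa
P_low = 1.13 * P_crit               # turbine outlet pressure (Table 2)
P_high = 1.5 * P_low                # pressure ratio of 1.5 (Table 2)
T_turb_in = 200.0 + 273.15          # turbine inlet temperature (Table 2), K
eta_turb, eta_pump = 0.92, 0.85     # isentropic efficiencies (Table 2)
m_dot = 0.05                        # assumed mass flow rate, kg/s

# State 1: condenser (gas cooler) outlet at the low pressure; the 35 C value
# is an assumption, the paper does not report this state explicitly.
h1 = PropsSI("H", "T", 35.0 + 273.15, "P", P_low, fluid)
s1 = PropsSI("S", "T", 35.0 + 273.15, "P", P_low, fluid)

# Pump, state 1 -> 2 (isentropic enthalpy rise corrected by the efficiency)
h2s = PropsSI("H", "P", P_high, "S", s1, fluid)
h2 = h1 + (h2s - h1) / eta_pump

# Turbine, state 4 -> 5
h4 = PropsSI("H", "T", T_turb_in, "P", P_high, fluid)
s4 = PropsSI("S", "T", T_turb_in, "P", P_high, fluid)
h5s = PropsSI("H", "P", P_low, "S", s4, fluid)
h5 = h4 - eta_turb * (h4 - h5s)

W_turb = m_dot * (h4 - h5)          # Eq. (13), W
W_pump = m_dot * (h2 - h1)          # Eq. (19), W

# The regenerator is not modelled here (state 3 taken equal to state 2), so
# the collector duty of Eq. (11) is overestimated and eta is conservative.
Q_col = m_dot * (h4 - h2)
eta_energy = (W_turb - W_pump) / Q_col   # Eq. (21)
print(f"W_net = {W_turb - W_pump:.0f} W, eta_energy = {eta_energy:.3f}")
```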

4. Results and Discussion

The solar-assisted Rankine cycle was analyzed in terms of the first and second laws of
thermodynamics in order to determine the thermodynamic performance characteristics of the
system. The Engineering Equation Solver (EES) software was used to determine the
thermodynamic properties of the working fluids.

In Figure 2, the variation of the mean solar radiation over the months of the year according to
the meteorological data of Isparta is given. It is clear from the figure that the mean solar
radiation increases gradually from January to August and reaches a maximum value of 974
W/m2 in August for Isparta. The lowest solar radiation, 497 W/m2, is recorded in January.

Figure 2. Monthly solar radiation data of Isparta

Figure 3 shows the variation of the temperature at the exit of the evacuated tube solar
collectors with solar irradiation for the different working fluid types (R744, R125, R41, SF6,
R170, and N2O). The figure indicates that the collector exit temperature increases with
increasing solar irradiation for all cycle fluids: as the solar irradiation rises, the fluid
temperature at the collector outlet increases, which in turn raises the enthalpy and temperature
at the turbine inlet.

Figure 3. Variation of the solar collector exit temperature with solar irradiation for different
supercritical working fluids

Figure 4 shows the monthly averaged variation of the collector energy efficiency for different
working fluid types. The results show that when R744 is used as the cycle fluid, the collector
efficiency has the highest value among the investigated fluids.

Figure 4. The monthly averaged variation of collector efficiency for different cycle fluid
types

For the investigation of the effects of the system characteristics on the system performance,
parametric analyses were conducted. Figure 5 shows the effect of the turbine inlet temperature
on the net power generation. During the analysis, the turbine inlet temperature was varied
between 120°C and 220°C, and the net power generation increased for all supercritical fluids.
As seen from the figure, R125 has the lowest net power generation, whereas N2O has the
highest.

Figure 5. Variation of net power generation with the turbine inlet temperature

In Figure 6, the energy efficiencies are given as a function of the turbine inlet temperature.
From the figure, it can be seen that the energy efficiencies increase with the turbine inlet
temperature for all working fluids; however, the slope of the increase for R744 and R41 is
nearly constant. The energy efficiencies of the integrated system for R744 and N2O are almost
equal to each other and are the highest.

Figure 6. Variation of energy efficiencies with the turbine inlet temperature

The net work generated in the cycle for different supercritical fluids with reference to July is
given in Figure 7. As seen from the figure, the highest net electricity production was obtained
for R170 with a value of 0.24 kW.

Figure 7. Comparison of net work produced in the cycle with respect to supercritical fluids

In Figure 8, the energy and exergy results of the solar-assisted transcritical Rankine cycle are
given comparatively. According to the results of the analyses, the best performance for the
transcritical Rankine cycle is obtained using CO2 and N2O, followed by R41, R170, SF6, and
R125, respectively. N2O has a molecular weight, critical pressure, and critical temperature very
similar to those of CO2 and therefore shows nearly the same behavior with respect to system
temperature and pressure levels, properties, and compactness; however, the GWP value of
N2O is considerably higher than that of CO2.

Figure 8. Comparison of energy and exergy efficiency of cycle with respect to supercritical
fluids

5. Conclusions

In this study, the thermodynamic performance of a system comprising a transcritical Rankine
cycle and an evacuated tube solar collector is assessed using six different supercritical working
fluids.
For this purpose, energy and exergy analyses were carried out for determining the system
performance indicators such as net power generation and energy and exergy efficiencies. In
addition, comprehensive parametric analyses were conducted for transcritical Rankine cycle
using different supercritical working fluids. According to the results of the study, the
important findings are summarized as follows:

• The solar energy potential of Isparta was found to be relatively high. According to the
solar data, the maximum solar radiation was 974 W/m2 for Isparta in August.

• Among the supercritical working fluids examined in this study, it was observed that
the maximum outlet temperature (202.9°C) from the evacuated tube collector was
reached when R125 and SF6 were used.

• It was found that the highest and lowest net power generations are 0.2479 kW and
0.05214 kW for R170 and SF6, respectively.

• According to the results of the comparative analyses for the transcritical Rankine
cycle using supercritical fluids, the best performance was obtained using N2O,
followed by R744, R41, R170, SF6, and R125, with energy efficiencies of 9.6%,
9.5%, 9.3%, 9.1%, 7%, and 6.7%, respectively.

• The results of the parametric analyses showed that the turbine inlet temperature has
significant effects on the system performance. For all supercritical fluids, the energy
efficiencies increased with the turbine inlet temperature.

• The exergy analysis, carried out with the operating parameters given in Table 2,
showed that the highest and the lowest exergy efficiencies are 0.101 and 0.071 for
N2O and R125, respectively.

• According to the performance analysis results of the cycle, among the supercritical
working fluids investigated in the present study, N2O and R744 have great potential
for transcritical power generation applications utilizing low-grade thermal energy. The
critical temperature and pressure values of N2O and CO2 are similar, but the GWP
value of N2O is very high. Because CO2 has a relatively lower environmental impact
and low cost, it has been preferred in low-temperature applications in recent years.

References

Cayer, E., Galanis, N., Nesreddine, H., (2010). Parametric study and optimization of a
transcritical power cycle using a low temperature source. Applied Energy, 87(4), 1349–
1357.

Cengel, Y.A., Boles, M.A., (2006). Thermodynamics: an engineering approach. McGraw-Hill,
New York.

Chen, H., Goswami, D.Y., Stefanakos, E.K., (2010). A review of thermodynamic cycles and
working fluids for the conversion of low-grade heat. Renewable Sustainable Energy
Reviews, 14(9), 3059–3067.

Chen, H., (2010). The conversion of low-grade heat into power using supercritical Rankine
cycles (PhD Thesis). University of South Florida.

Dincer, I., Rosen, M.A., (2007). Exergy: Energy, Environment and Sustainable Development.
Elsevier Science.

Dostal, V., (2006). A supercritical carbon dioxide cycle for next generation nuclear reactors
(PhD Thesis). Department of Nuclear Engineering, Massachusetts Institute of
Technology.

Gao, H., Liu, C., He, C., Xu, X., Wu, S., Li, Y., (2012). Performance analysis and working
fluid selection of a supercritical organic Rankine cycle for low grade waste heat
recovery. Energies, 5, 3233–3247.

Gu, Z.L., Sato, H., (2001). Optimization of cyclic parameters of a supercritical cycle for
geothermal power generation. Energy Conversion and Management, 42, 1409–1416.

Kalogirou, SA., (2009). Solar Energy Engineering: Processes and Systems. Academic Press:
Oxford.

Karellas, S., Schuster, A., (2008). Supercritical fluid parameters in organic Rankine cycle
applications. International Journal of Thermodynamics, 11(3), 101–108.

Kizilkan, O., (2019). Evaluation of transcritical Rankine cycle driven by low-temperature
geothermal source for different supercritical working fluids. International Journal of
Technological Sciences, 11(3), 155-169.

Kizilkan, O., (2020). Performance assessment of steam Rankine cycle and sCO2 Brayton
cycle for waste heat recovery in a cement plant: A comparative study for supercritical
fluids. International Journal of Energy Research, 44, 12329-12343.

Klein, S.A., (2020). Engineering Equation Solver (EES). F-Chart.

Mehran, A., Shahram, K., Samad, J., (2019). Exergoeconomic analysis of a novel integrated
transcritical CO2 and Kalina 11 cycles from Sabalan geothermal power plant. Energy
Conversion and Management, 195, 420-435.

Sarkar, J., (2015). Review and future trends of supercritical CO2 Rankine cycle for low-grade
heat conversion. Renewable and Sustainable Energy Reviews, 48, 434-451.

Zhang, X.R., Yamaguchi, H., Uneno, D., (2007). Thermodynamic analysis of the CO2-based
Rankine cycle powered by solar energy. International Journal of Energy Research, 31(14),
1414–1424.

Zhang, X., Yamaguchi, H., Uneno, D., (2007). Experimental study on the performance of
solar Rankine system using supercritical CO2. Renewable Energy, 32, 2617–2628.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Comparative Thermodynamic Investigation of Ground Coupled Refrigeration System for
Supercritical Refrigerants

Gamze SOYTÜRK1*, Serpil ÇELİK TOKER1, Önder KIZILKAN1

Abstract: The purpose of this study is to examine the thermodynamic performance of the
ground-coupled refrigeration system using different supercritical refrigerants. For this purpose,
six different supercritical refrigerants are analyzed in terms of the first and second laws of
thermodynamics. The analyses are made for a refrigeration capacity of 75 kW and cold room
temperature of -5 °C. The energy performance indicator COP and exergy performance indicator
exergy efficiency and destruction rates of the system are determined. Results showed that the
best COP value and exergy efficiency were obtained using R125. Besides, the highest exergy
destruction was found to be 13.49 kW for R170, followed by R744 with a value of 13.44 kW.
A parametric study was carried out to examine the effect of ground and cold room temperatures
on the energy and exergy efficiency of the system. According to the parametric studies, COP
values decreased with the increase of ground temperature for all refrigerants, while they
increased with the increase of evaporator temperature.

Keywords: Supercritical refrigerant, ground-coupled, refrigeration, energy, exergy.

1. Introduction

The rapid growth of the population and accelerating process of urban modernization give rise
to a greater demand for energy consumption. Compared with the year 1990, it was predicted
that the energy consumption of 2050 could increase by up to 275% (Fridleifsson, 2001).
However, a great portion of the energy used for space heating/cooling and electricity generation
comes from the burning of fossil fuels (Aresti et al. 2018), which are available in limited
quantities and are harmful to the environment (Atam and Helsen, 2016). A reduction in fossil fuel use and an
improvement of the global energy structure to include more sustainable energy is required
(Mohammadi et al. 2017). Nguyen and Eslami-Nejad (2019) investigated the concept of a
“ground source heat pump” (GSHP), which uses soil as an environmentally friendly and
sustainable source for heating and cooling. In cold climates, ground source heat pump (GSHP)
systems are increasingly deployed for heating, cooling, and air-conditioning in residential,
commercial, and institutional buildings. These systems rely on a relatively constant ground
temperature throughout the year to operate with higher efficiencies than conventional air-source
heat pump systems.

The refrigerants used in the ground-coupled refrigeration system must have a low global
warming potential (GWP) and zero ozone depletion potential (ODP) since these two major
environmental concerns are important for the future development of refrigeration industries.
ODP and GWP have become two of the most important criteria in analyzing new alternatives
1 Isparta University of Applied Sciences, Department of Mechanical Engineering, Isparta/Turkey
* Corresponding author: [email protected]
to chlorofluorocarbon (CFC) and hydrochlorofluorocarbon (HCFC) refrigerants in vapor-
compression refrigeration systems. CFC and HCFC-type refrigerants have been used
predominantly in refrigeration systems for the past few decades. HCFCs have both a nonzero ODP and
a high GWP and may not be produced or imported after 2030 in developed nations and after 2040 in
developing countries. The Kigali Amendment to Montreal Protocol (UNEP, 2016) requires the
participating parties to gradually reduce HFCs use by 80–85% by the late 2040s.

There have been many publications on ground-coupled heat pumps and refrigeration systems
in the literature for heating and cooling applications. Sakulpipatsin et al. (2010), presented a
method for exergy analysis of buildings and Heating Ventilation Air Conditioning systems.
They illustrated an office building equipped with low-temperature heating and high-
temperature cooling systems situated in the Netherlands. They found the overall exergy
efficiencies of the system to be 17.15% and 6.81%. They also noted that the thermal energy
emission and control system and the energy conversion system were the main causes of the
exergy inefficiencies in the heating and cooling cases. Healy and Ugursal (1997) researched the effect
of system parameters, such as the depth of the ground heat exchanger, the brine flow rate, and the
spacing between pipes, on the performance of a ground heat exchanger in Canada. They also
concluded that ground source heat pumps were more advantageous than other traditional
heating and cooling systems. Esen et al. (2007), have reported a detailed techno-economic
analysis of a ground source heat pump system and six conventional heating systems for the
climate conditions of Turkey in a heating season of 2002–2003. In hot climates such as in
Turkey, ground source heat pump systems represent a viable alternative to air source heat pump
systems and conventional space cooling and heating systems because of their higher operating
efficiency, especially during the cooling season. Coskun et al. (2008) and Pulat et al. (2009)
experimentally studied the COPs of ground source heat pumps constructed in Bursa, Turkey,
for heating and cooling. Using the ANSYS software, the same researchers obtained temperature
distributions near the ground source heat exchanger from the measured inlet and outlet brine
temperatures. Jin et al. (2016) introduced the concept of a CO2 hybrid source coupled heat
pumping system for a warm climate. This hybrid system utilized the heat sink by combining
city water, ground, and ambient air in cooling mode, while the ground is used as the only heat
source in heating mode. Alkan et al. (2014), examined the thermodynamic analysis of the
system by using different alternative refrigerants in the ground source heat pump. They chose
R22, R404a, R410a, R407c, R134a, and R600 fluids as refrigerants in their studies.

They found that R600 had the best performance among the investigated refrigerants. Xu et al. (2013)
compared the performance of the R410a and R32 refrigerants used in heat pump systems.
They reported that the use of R32, which has a lower global warming potential, is 10%
and 9% better in terms of capacity and COP, respectively, than the use of R410a.
Li et al. (2014), investigated the energy and exergy performance of secondary loop systems
using R152a and R290 in automotive air-conditioning systems. They found that the COP was
increased by 8% to 15% with the use of R290 instead of R134a. Cho et al. (2016), compared
the heating and cooling performance of a heat pump system with R32 or R410a. Cheng et al.
(2014) replaced R22 and R410a with R32 and R290, respectively, as heat pump refrigerants and
studied their cycle performance. Nawaz et al. (2017), focused on the substitution of R290 for
R134a, and they also analyzed the cycle performance of 13 low global warming potential
(GWP) refrigerants in their follow-up studies (Nawaz and Ally, 2019). Joybari et al. (2013),
carried out exergy analysis to investigate the performance of a domestic refrigerator for R134a
and R600a. They also applied the Taguchi method to design experiments to minimize exergy
destruction while using R600a.

In this study, the performance of the ground-coupled refrigeration system is investigated for six
supercritical fluids. These supercritical fluids are pentafluoroethane (R125), methyl fluoride
(R41), sulfur hexafluoride (SF6), ethane (R170), nitrous oxide (R744A), and carbon dioxide
(R744). In order to determine the energetic and exergetic performance of the ground-coupled
refrigeration system, the first and second laws of thermodynamics are applied to the system.
The highest energy and exergy efficiency of the ground-coupled refrigeration system is found
for R125 fluid, while the lowest energy and exergy efficiency is found for R170 fluid. A
parametric study is carried out to examine the effect of ground and cold room temperature on
the energy and exergy efficiency of the system.

2. System Description

The system consists of the ground-coupled refrigeration system, a cold room to be maintained
at -5°C, and a ground source for the heat rejection process from the condenser (Figure 1). The
refrigerant circulating in the vapor compression refrigeration cycle evaporates by extracting
heat from the room air in the evaporator and then enters the compressor. In the compressor, the
temperature and pressure of the refrigerant are increased, and the fluid is sent to the condenser.
In the condenser, the refrigerant rejects its heat to the ground; it then passes through the
throttling valve, where its pressure is reduced, and is sent to the evaporator, thus completing the
cycle. The T-s diagram of the ground-coupled sub-critical refrigeration system is given in
Figure 2.

Figure 1. Schematic representation of the ground-coupled sub-critical refrigeration system

Figure 2. T-s diagram of the sub-critical refrigeration cycle

The selection of working fluid is one of the essential factors influencing the system performance
and also has an effect on the environment. Generally, an appropriate working fluid should have
suitable thermodynamic properties and low environmental impacts. The analyses are made for
six supercritical refrigerants with low ODP values. The general properties of these refrigerants
are tabulated in Table 1, and T-s diagrams are given in Figure 3.

Table 1. Thermophysical and environmental properties of supercritical working fluids
(ASHRAE 2004; Restrepo et al., 2008; Calm and Hourahan, 2011)

Name                 Refrigerant Number   Chemical Formula   Molecular Mass (kg/kmol)   Critical Pressure (kPa)   Critical Temperature (°C)   ODPa    GWPb
Pentafluoroethane    R125                 CHF2CF3            120.02                     3617.5                    66.02                       0       3500
Methyl fluoride      R41                  CH3F               34.03                      5897                      44.13                       0       92
Sulfur hexafluoride  SF6                  SF6                146.5                      3755                      45.57                       0       22800
Ethane               R170                 CH3CH3             30.07                      4872.2                    32.17                       0       3
Nitrous oxide        R744A                N2O                44.01                      7245                      36.37                       0.017   298
Carbon dioxide       R744                 CO2                44.01                      7377.3                    30.98                       0       1

a relative to R11; b relative to CO2

Figure 3. T-s diagram of the supercritical refrigerants

The refrigeration load from the cold room is absorbed by a secondary fluid, an ethylene glycol
(EG) solution, selected in order to prevent freezing problems. The freezing point of the EG solution is
-18.84 °C for the specified concentration. The condenser heat load is absorbed by cooling
water, and the water is cooled down by means of the ground-coupled heat exchangers. In order
to analyze the performance of the system for the different supercritical refrigerants, the
operating parameters are tabulated in Table 2.

Table 2. Design parameters of the ground‐coupled sub‐critical refrigeration system

Refrigeration capacity, Q 75 kW
Cold room temperature, TCR ‐5 °C
Ground temperature, TG 14 °C
Evaporator temperature, TE TCR ‐ 5°C
Condenser temperature, TC TG + 12°C
Superheating temperature, ΔTSH 0.1 °C
Subcooling temperature, ΔTSc 0.1 °C
Isentropic efficiency of compressor, ηis 0.88
Mechanical efficiency of compressor, ηmec 0.92
Electrical efficiency of compressor, ηelec 0.86
Ethylene ‐glycol concentration 0.30

3. Thermodynamic Balance Equations

The performance characteristics of the ground-coupled refrigeration system are assessed by
applying first and second law analyses of thermodynamics. The balance equations are used to
determine the work and heat interactions, energy and exergy efficiencies, and exergy
destruction rates for each system component. The general mass balance equation for steady-
state and steady-flow processes can be written as (Cengel and Boles, 2006):

$\sum \dot{m}_{in} = \sum \dot{m}_{out}$   (1)

The energy balance equation is defined as:

$\sum \dot{E}_{in} = \sum \dot{E}_{out}$   (2)

Equation (2) can be written in the form given below:

$\dot{Q} + \sum \dot{m}_{in} h_{in} = \dot{W} + \sum \dot{m}_{out} h_{out}$   (3)

In the above equations, $\dot{m}$ is the mass flow rate, $\dot{E}$ is the rate of net energy,
$\dot{Q}$ is the rate of net heat, $\dot{W}$ is the rate of net work, and $h$ is the specific
enthalpy. The subscripts in and out stand for inlet and outlet, respectively.

The second law of thermodynamics brings in the concepts of entropy and exergy. Exergy
analysis of systems allows determining the irreversibility and the available energy (exergy) in
the system. These analyses reveal the efficiency of the systems in terms of the first and second
laws of thermodynamics (Göktürk et al., 2013). For a steady-state operation, the general exergy
balance equation can be defined as (Dincer and Rosen, 2007):

$\sum \dot{Ex}_{in} - \sum \dot{Ex}_{out} = \sum \dot{Ex}_{dest}$   (4)

The exergy balance equation can also be written more explicitly as:

$\dot{Ex}_Q - \dot{Ex}_W + \sum \dot{m}_{in} e_{in} - \sum \dot{m}_{out} e_{out} = T_0 \dot{S}_{gen}$   (5)

where $\dot{Ex}_Q$ and $\dot{Ex}_W$ are the exergy rates of heat and work, respectively, $e$ is
the specific exergy, $T_0$ is the reference state temperature, and $\dot{S}_{gen}$ is the entropy
generation rate. The exergy destruction rate and the exergy rates of heat and work are given
below (Kotas, 1985).

$\dot{Ex}_{dest} = T_0 \dot{S}_{gen}$   (6)

$\dot{Ex}_Q = \dot{Q} \left( 1 - \dfrac{T_0}{T} \right)$   (7)

$\dot{Ex}_W = \dot{W}$   (8)

The specific exergy is expressed relative to the environmental conditions as:

$e = (h - h_0) - T_0 (s - s_0)$   (9)

where s is entropy, P is the pressure, and the subscript 0 indicates properties at the reference
state.

The performance of the ground-coupled refrigeration system can be determined using the
following energy and exergy efficiency definitions:

$COP = \dfrac{\dot{Q}_E}{\dot{W}_{comp,elec}}$   (10)

$\eta_{ex} = \dfrac{\dot{Ex}_{Q_E}}{\dot{W}_{comp,elec}}$   (11)

where $\dot{Q}_E$ represents the refrigeration capacity and $\dot{W}_{comp,elec}$ represents the
compressor's electric power consumption.
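A minimal sketch of how Eqs. (10) and (11) can be evaluated is given below. The refrigeration capacity, the ground and evaporator temperatures, and the compressor efficiencies are taken from Table 2; the compressor specific work, the refrigerant mass flow rate, and the choice of the ground temperature as the reference temperature are illustrative assumptions, and the cooling-load exergy is written with the usual Carnot factor for a load extracted below the reference temperature.

```python
# Minimal evaluation of Eqs. (10)-(11) for the ground-coupled refrigeration system.
T0 = 14.0 + 273.15        # reference temperature, assumed equal to the ground temperature (Table 2), K
T_E = -10.0 + 273.15      # evaporator temperature = cold room temperature - 5 K (Table 2), K
Q_E = 75.0                # refrigeration capacity, kW (Table 2)
eta_is, eta_mec, eta_elec = 0.88, 0.92, 0.86   # compressor efficiencies (Table 2)

w_is = 35.0               # assumed isentropic specific compressor work, kJ/kg
m_dot = 0.55              # assumed refrigerant mass flow rate, kg/s

W_elec = m_dot * w_is / (eta_is * eta_mec * eta_elec)   # electric power input, kW

COP = Q_E / W_elec                                      # Eq. (10)
Ex_QE = Q_E * (T0 / T_E - 1.0)    # exergy of the cooling load delivered below T0, kW
eta_ex = Ex_QE / W_elec                                 # Eq. (11)
print(f"W_elec = {W_elec:.1f} kW, COP = {COP:.2f}, eta_ex = {eta_ex:.3f}")
```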

4. Results and Discussion

The performance of the ground-coupled refrigeration system is compared for six supercritical
refrigerants in this study. In order to analyze the ground-coupled sub-critical refrigeration
system for supercritical refrigerants, several assumptions have to be made:

• The system operates at steady state.
• Kinetic and potential energy and exergy changes are ignored.
• Pressure losses through pipelines are neglected.
• The pump operations are assumed to be adiabatic.
• The directions of heat transfer to the system and work transfer from the system are taken as positive.
• Heat losses and heat gains from or to the system are neglected.

Using the balance equations and under the assumptions given above, the analyses are performed
for different supercritical refrigerants by EES software. In Figure 4, the COP value is given for
all supercritical refrigerants. It can be seen from the figure that the best COP value is obtained
using R125, followed by R41, SF6, and N2O.

Figure 4. COP values for different supercritical refrigerants
Figure 5 shows the exergy efficiency of the ground-coupled refrigeration system for different
supercritical fluids. The exergy efficiency trend is the same as that of the COP. As can be seen
from the figure, the best exergy efficiency occurs when using R125, while the lowest exergy
efficiency is obtained when using R744.

Figure 5. Exergy efficiency of the system for different supercritical fluids.

In Figure 6, the exergy destruction of the system is given for the different supercritical fluids.
The highest exergy destruction is obtained when R170 and R744 are used in the system.

Figure 6. Exergy destruction of the system for different supercritical fluids.

In Figures 7 to 9, the variations of the COP, exergy efficiency, and exergy destruction rates are
given with the variation of the ground temperature. As can be seen from the figures, with the
increase of the ground temperature, the COP and exergy efficiency values decrease for all
refrigerants, whereas the exergy destruction rates increase.

Figure 7. Variation of COP with ground temperature

Figure 8. Variation of exergy efficiency with ground temperature

Figure 9. Variation of exergy destruction with ground temperature

Figure 10 shows the variation of the COP with the evaporator temperature. As expected, while
the COP increases with the evaporator temperature, the exergy efficiency and exergy destruction
rates decrease for all refrigerants (Figures 11 and 12).

Figure 10. Variation of COP with evaporator temperature

Figure 11. Variation of exergy efficiency with evaporator temperature

Figure 12. Variation of exergy destruction with evaporator temperature

5. Conclusions

In this study, the performance of the ground-coupled refrigeration system using different
supercritical refrigerants has been investigated. Analyses were made for six different
supercritical refrigerants. The results of the analyses showed that the best COP was obtained
using R125, followed by R41, SF6, and N2O. The best exergy efficiency was also obtained
using R125, while the lowest exergy efficiency was obtained when using R744. Parametric
studies were carried out to see the effect of the ground temperature and the evaporator
temperature on the system performance. The results of the parametric studies showed that the
increase of the evaporator temperature had a positive effect on the coefficient of performance,
exergy efficiency, and exergy destruction of the system, while the increase of the ground
temperature had a negative effect.

References

Alkan, R., Kabul, A., Kızılkan, Ö., (2014). Thermodynamic analysis of a ground source heat
pump for different refrigerants. Journal of Thermal Science and Technology, 34(1), 27-
34 (In Turkish).

Aresti, L., Christodoulides, P., Florides, G., (2018). A review of the design aspects of ground
heat exchangers. Renewable Sustainable Energy Reviews, 92, 757–763.

ASHRAE, (2004). Designation and safety classification of refrigerants. ANSI/ASHRAE
Standard 34-2001, Atlanta, GA, USA.

Atam, E., Helsen, L., (2015). Ground-coupled heat pumps: part 1 – literature review and
research challenges in modeling and optimal control. Renewable Sustainable Energy
Reviews, 54, 1653–1667.

Calm, J.M., Hourahan, G.C., (2011). Physical, safety, and environmental data summary for
current and alternative refrigerants. Proceedings of the 23rd International Congress of
Refrigeration, Prague, Czech Republic, 21-26.08.2011.

Cengel, Y.A., Boles, M.A. (2006). Thermodynamics: an engineering approach. 5th ed.,
McGraw-Hill, New York, USA.

Cheng, S., Wang, S., Liu, Z., (2014). Cycle performance of alternative refrigerants for domestic
air-conditioning system based on a small finned tube heat exchanger. Applied Thermal
Engineering, 64, 83–92.

Cho, I.Y., Seo, H.J., Kim, D., (2016). Performance comparison between R410A and R32 multi-
heat pumps with a sub-cooler vapor injection in the heating and cooling modes. Energy,
112, 179–187.

Coskun, S., Pulat, E., Unlu, K., Yamankaradeniz. R., (2008). Experimental performance
investigation of a horizontal ground source compression refrigeration machine.
International Journal Energy Research, 32, 44-56.

Dincer, I., Rosen, M.A., (2007). Exergy: Energy, environment and sustainable development.
1st ed., Elsevier Science; Oxford, UK.

Esen, H., Inalli, M., Esen, M., (2007). A techno-economic comparison of ground-coupled and
air-coupled heat pump system for space cooling. Building and Environment, 42, 1955-
1965.

Fridleifsson, IB., (2001). Geothermal energy for the benefit of the people. Renewable
Sustainable Energy Reviews, 299–312.

Göktürk, M., Oztop, H.F., Hepbaslı, A., (2013). Energy and exergy assessments of a perlite
expansion furnace in a plaster plant. Energy Conversion and Management, 75, 488–497.

Healy, PF., Ugursal, VI., (1997). Performance and economic feasibility of ground source heat
pumps in cold climate. International Journal Energy Research, 21, 857-870.

Jin, Z., Eikevik, T.M., Nekså, P., (2016). Investigation on CO2 hybrid ground coupled heat
pumping system under warm climate. International Journal of Refrigeration, 62, 145-152.

Joybari, M.M., Hatamipour, M.S., Rahimi, A., Modarres, F.G., (2013). Exergy analysis and
optimization of R600a as a replacement of R134a in a domestic refrigerator system.
International Journal of Refrigeration, 36, 1233-1242.

Kotas, T.J., (1985). The exergy method of thermal plant analysis. Butter-Worths, London, UK.

Li, G., Eisele, M., Lee, H., Hwang, Y., Radermacher, R., (2014). Experimental investigation of
energy and exergy performance of secondary loop automotive air-conditioning systems
using low-GWP (global warming potential) refrigerants. Energy, 68, 819-831.

Nawaz, K., Shen, B., Elatar, A., Baxter, V., Abdelaziz, O., (2017). R290 (propane) and
R600a(isobutane) as natural refrigerants for residential heat pump water heaters. Applied
Thermal Engineering, 127, 870-883.
Nawaz, K., Ally, M., (2019). Options for low-global-warming-potential and natural
refrigerants Part 2: Performance of refrigerants and systemic irreversibilities.
International Journal of Refrigeration, 106, 213-224.

Nguyen, A., Eslami-Nejad, P. A., (2019). Transient coupled model of a variable speed
transcritical CO2 direct expansion ground source heat pump for space heating and
cooling. Renewable Energy, 140, 1012-1021.

Pulat, E., Coskun, S., Unlu, K., Yamankaradeniz, N., (2009). Experimental study of horizontal
ground source heat pump performance for mild climate in Turkey. Energy, 34(9), 1284-
1295.

Restrepo, G., Weckert, M., Brüggemann, R., Gerstmann, S., Frank, H., (2008). Ranking of
Refrigerants. Environmental Science & Technology, 42, 2925–2930.

Sakulpipatsin, P., Itard, L.C.M., Kooi, H.J., Boelman, E.C., Luscuere, P.G., (2010). An exergy
application for analysis of buildings and HVAC systems. Energy and Buildings, 42, 90–99.

UNEP, (2016). Amendment to the Montreal Protocol on Substances that Deplete the Ozone
Layer, Kigali. (Date of Access: 25.04.2021).

Xu, X., Hwang, Y., Radermacher, R., (2013). Performance comparison of R410A and R32 in
vapor injection cycles. International Journal of Refrigeration, 36(3), 892–903.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Modelling the Color Removal Efficiency of an Electrochemical Process from Organic
Wastewater by Response Surface Method

Oğuz ŞAHİNER1*, Murat SOLAK2

Abstract: European Union member countries use the spectral absorption coefficient (SAC)
method, within the framework of the standards set in EN ISO 7887, for the analysis of the color
parameter in industrial wastewaters. In this study, the color removal efficiency of the discharge
wastewater of the yeast production industry was investigated in terms of SAC units by an
electrochemical process using titanium and stainless steel electrodes. The effects of operating
parameters such as pH, current density, and electrolysis time on the removal of the SAC436,
SAC525, and SAC620 color parameters were optimized by the Response Surface Method
(RSM).

Keywords: Yeast industry discharge wastewater, electrochemical treatment, titanium electrode,
stainless steel electrode, optimization


1. Introduction

The food industry is one of the industrial production sectors of strategic importance for meeting
growing basic needs; it requires large amounts of water at every stage of production and,
accordingly, its wastewater generation is very high. Yeast production facilities operating in the
food industry hold an important place both worldwide and in Turkey (Balcıoğlu, 2013). In
baker's yeast production, molasses, a by-product of sugar factories, is used as the raw material
of the yeast industry, an example of industrial symbiosis, which is one of the approaches to
industrial sustainability.

1 Düzce University, Directorate of Construction and Technical Works, Düzce/Turkey
2 Düzce University, Department of Environmental Engineering, Faculty of Engineering, Düzce/Turkey
* Corresponding author: [email protected]

Yeast industry wastewaters contain pollutants at high concentrations, such as chemical oxygen
demand (COD), biochemical oxygen demand (BOD), total organic carbon (TOC), nitrogen,
phosphorus, and color (Balcıoğlu, 2013). The intense (dark brown) color of the wastewater
generated in yeast production processes originates from melanoidins, biopolymer complexes
formed as a result of biochemical reactions, and these compounds are very difficult to degrade
(Alkan, 2010). Biological and chemical treatment techniques are used in the treatment of such
wastewaters. Biological processes are designed to remove the organic matter in the wastewater
by settling, in a sedimentation basin, the flocs of microorganisms that degrade it. Conventional
biological processes are activated sludge processes, trickling filters, and rotating biological
contactors (Aydın, 2020). In addition, processes such as chemical coagulation, chemical
precipitation, electrocoagulation, and the Fenton process are used for color removal from
organic pollutants (Haksevenler et al., 2014). Electrochemical processes are a treatment
technique whose importance has recently increased and which can be an alternative to chemical
processes and to other processes used for the removal of various pollutants (Cansu, 2018). In
terms of the mechanisms they involve, electrochemical treatment processes can encompass one
or more of coagulation, adsorption, absorption, precipitation, flotation, and oxidation (Ihara et
al., 2004; İlhan et al., 2007). The electrooxidation (EO) process oxidizes organic matter using
an insoluble anode material (Fil, 2004; Kul, 2005). The electrodes commonly used in the EO
process are insoluble electrodes such as graphite (Kannan et al., 1995), titanium (Xion et al.,
2003), stainless steel (Bejankiwar et al., 2005), and boron-doped diamond (Martínez-Huitle et
al., 2008). The EO process can basically take place by two different routes, direct or indirect
oxidation. In direct oxidation (anodic oxidation), the pollutants are adsorbed on the anode
surface and are then decomposed by the anodic electron transfer reaction. In indirect oxidation,
the reaction takes place through strong oxidants such as hypochlorite/chlorine, ozone, and
hydrogen peroxide (Alfredo et al., 2014).

In this study, wastewater generated in yeast production processes, treated to discharge standards
in the plant's treatment facilities and discharged to the receiving environment, was sampled,
and its color removal efficiencies by the EO process using titanium and stainless steel
electrodes were examined in terms of the RES (spectral absorption coefficient) parameter in
order to determine whether the water can be recovered with respect to the color parameter.

2. Materials and Methods

2.1. Wastewater Characterization

The wastewater used in the experimental studies was taken from the outlet of the treatment
plant of a yeast production factory (discharge water). The characterization of the raw
wastewater is shown in Table 1.

Table 1. Raw wastewater characterization

Parameter              Value/Concentration      Parameter       Value/Concentration
pH                     7.66 ± 0.2               Color (m-1)
Conductivity (mS/cm)   6.43                     RES436 (m-1)    6.81
TDS (mg/L)             3.97                     RES525 (m-1)    5.21
COD (mg/L)             300 ± 10                 RES620 (m-1)    4.14

2.2. Optimization Studies

In the study, the effects of operating parameters such as pH, current density, and electrolysis
time on color removal from the wastewater were investigated with a model reactor, using
wastewater sampled at the discharge point of a yeast-producing plant that complies with the
regulatory limits of the Turkish Water Pollution Control Regulation (SKKY). The parameter
ranges were determined through a literature survey and preliminary experiments. In order to
determine the effective removal ranges of the parameters, an experimental series prepared
according to the statistical design was applied with pH in the range 4.5–10.5, current density
90–150 A/m2, and electrolysis time 30–60 min (Table 2). The Response Surface Method was
used in the optimization studies. The 3D plots and the ANOVA analyses were prepared with
the Design Expert software.

Table 2. Parameter ranges for the different electrode types

Factors                      Titanium    Stainless Steel
pH                           4.5–9.5     4.5–9.5
Current density (A/m2)       80–140      60–120
Electrolysis time (min)      30–60       15–75

2.3. Experimental Setup

In the experimental studies in which color removal was carried out by the electrochemical
process, the current and voltage were controlled with a DC power supply. The volume of the
model reactor used in the experiments is 500 mL. The electrode dimensions are 50 × 80 × 0.5
mm, and the submerged section in which electrolysis takes place measures 50 × 55 mm (for the
electrodes with an active surface area of 165 cm2, the currents calculated and supplied to the
system for the current densities of 90, 120, and 150 A/m2 are 1.5 A, 2 A, and 2.5 A,
respectively).
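As a quick consistency check of the electrode geometry described above, the short sketch below converts the stated current densities into the applied currents for the 165 cm2 active area.

```python
# Consistency check: applied current = current density x active electrode area.
active_area_m2 = 165e-4                 # 165 cm2 converted to m2
for j in (90, 120, 150):                # current densities, A/m2
    print(f"{j} A/m2 -> {j * active_area_m2:.2f} A applied current")
```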

A schematic representation of the process used in the study of color removal from yeast
industry discharge wastewater by the electrochemical process is given in Figure 1.

Figure 1. Schematic representation of the EO process

2.4. Methods

2.4.1. Color Determination

The RES (spectral absorption coefficient) method was used to determine the color removal
efficiency of the yeast industry post-treatment discharge water. According to EN ISO 7887,
measurement of the color parameter with the RES method is divided into three color analyses
(measurements at wavelengths of 436 nm for Remazol Yellow RR gran, 525 nm for Remazol
Red RR gran, and 620 nm for Remazol Blue RR gran), and the color values are reported as
RES-436, RES-525, and RES-620 in units of m-1 [EPA].

The absorbance values of the sample were read at wavelengths of 436, 525, and 620 nm on a
Hach Lange DR6000 spectrophotometer; these absorbance values were substituted into
Equation 1, and the RES 436, RES 525, and RES 620 values were calculated.

$RES(\lambda) = \dfrac{A(\lambda)}{d} \cdot f$   (1)

A(λ): absorbance of the solution at wavelength λ
d: cuvette path length (mm)
f: factor to express the spectral absorption coefficient in m-1, f = 1000
RES(λ): color number (spectral absorption coefficient) at wavelength λ (m-1)
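A minimal sketch of Equation 1 is given below. The cuvette path length and the absorbance readings are illustrative assumptions, chosen only so that the computed values match the order of magnitude of the raw-wastewater RES values in Table 1; the treated-water value used in the removal-efficiency example is likewise hypothetical.

```python
# Minimal sketch of Eq. (1): converting spectrophotometer absorbance readings
# into RES (SAC) values in 1/m, as defined in EN ISO 7887.
def res(absorbance: float, cuvette_mm: float = 10.0, f: float = 1000.0) -> float:
    """RES(lambda) = A(lambda) / d * f, with d in mm and the result in 1/m."""
    return absorbance / cuvette_mm * f

# hypothetical readings chosen to reproduce the raw-wastewater values of Table 1
readings = {436: 0.0681, 525: 0.0521, 620: 0.0414}
for wavelength, a in readings.items():
    print(f"RES{wavelength} = {res(a):.2f} 1/m")

# Removal efficiency relative to the raw wastewater, e.g. for RES436
res_raw, res_treated = 6.81, 0.20        # 1/m; the treated value is hypothetical
print(f"RES436 removal = {(res_raw - res_treated) / res_raw * 100:.1f} %")
```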

2.4.2. COD, pH, and Conductivity Determination

COD analysis was performed according to the SM 5220-D method using a Hach DR6000
spectrophotometer, while the pH and conductivity measurements were made with a Hanna
instrument according to the electrometric method (Standard Method 4500-H+) (APHA, 2005).

3. Results

The ANOVA results for the RES436, RES525, and RES620 color parameters obtained in the
optimization studies of the electrochemical process with titanium electrodes are given in
Table 3.

Considering the p-values in the ANOVA, it can be said that for the RES436 color value the pH
and current density parameters are more effective than the electrolysis time (p < 0.05). For the
RES525 color value only pH is effective, while for the RES620 color value pH and current
density were determined to be the parameters affecting the process. These findings can also be
seen in the 3D plots.

As a result of the statistical analysis, which was found to fit a quadratic model, the R2 values
for RES436, RES525, and RES620 are 0.99, 0.99, and 0.98, respectively.

Table 3. ANOVA results for titanium

Source          Sum of Squares   df   Mean Square   F Value   p-value (Prob > F)
RES 436
Model 0.59 9 0.066 49.66 0.0002
A-pH 0.25 1 0.25 189.51 < 0.0001
B-A.Y 0.054 1 0.054 40.94 0.0014
C-E.S. 2.000E-004 1 2.000E-004 0.15 0.7141
AB 9.025E-003 1 9.025E-003 6.79 0.0480
AC 2.500E-005 1 2.500E-005 0.019 0.8963
BC 0.021 1 0.021 15.81 0.0106
A2 0.024 1 0.024 18.33 0.0079
B2 1.442E-004 1 1.442E-004 0.11 0.7553
C2 0.24 1 0.24 182.29 < 0.0001
Residual 6.650E-003 5 1.330E-003
Lack of Fit 6.650E-003 3 2.217E-003
Pure Error 0.000 2 0.000
Cor Total 0.60 14
R2 0.99
Adj R2 0.97
RES 525
Model 1.11 9 0.12 53.39 0.0002
A-pH 0.37 1 0.37 159.86 < 0.0001
B-A.Y 4.050E-003 1 4.050E-003 1.75 0.2431
C-E.S. 0.011 1 0.011 4.86 0.0786
AB 9.000E-004 1 9.000E-004 0.39 0.5601
AC 3.600E-003 1 3.600E-003 1.56 0.2675
BC 0.096 1 0.096 41.54 0.0013
A2 1.641E-004 1 1.641E-004 0.071 0.8006
B2 0.093 1 0.093 40.01 0.0015
C2 0.56 1 0.56 240.70 < 0.0001
Residual 0.012 5 2.313E-003
Lack of Fit 0.011 3 3.833E-003 115.00 0.0086
Pure Error 6.667E-005 2 3.333E-005
Cor Total 1.12 14
R2 0.99
Adj R2 0.97
RES 620
Model 0.66 9 0.073 33.10 0.0006
A-pH 0.44 1 0.44 197.34 < 0.0001
B-A.Y 0.017 1 0.017 7.73 0.0389
C-E.S. 2.000E-004 1 2.000E-004 0.090 0.7759
AB 0.026 1 0.026 11.56 0.0193
AC 2.500E-005 1 2.500E-005 0.011 0.9195
BC 6.250E-004 1 6.250E-004 0.28 0.6180
A2 0.17 1 0.17 78.86 0.0003
B2 1.131E-003 1 1.131E-003 0.51 0.5069
C2 3.692E-004 1 3.692E-004 0.17 0.7000
Residual 0.011 5 2.215E-003
Lack of Fit 0.011 3 3.692E-003
Pure Error 0.000 2 0.000
Cor Total 0.67 14
R2 0.98
Adj R2 0.95

Figure 2 shows the effect of the pH and current density parameters on the RES436, RES525,
and RES620 color removal efficiencies. Accordingly, pH was found to be effective in the
removal of all color parameters. Up to a point, increasing the current density moved in parallel
with the increase in RES525 removal efficiency; beyond approximately 100 A/m2 the removal
efficiency leveled off. For RES436 and RES620, increasing the current density adversely
affected the removal efficiency, as also seen in the ANOVA. In a study investigating the
electrochemical treatment of textile industry wastewater with titanium electrodes, removal
efficiencies above 80% were reported for the COD, BOD, and color parameters after an
electrolysis time of 18 minutes (Kocaer et al., 2002; Vlyssides et al., 2000).


Figure 2. Effect of the pH and current density parameters on the color removal efficiency in
the EO process using titanium electrodes: a) RES 436, b) RES 525, c) RES 620
(electrolysis time: 45 min)

The model equations obtained as a result of the study are given in Table 4.

Table 4. Equations determined for the different color values

RES436 RES525 RES620


+0.76 +0.87 +0.98
-0.18 *A -0.21 *A -0.23 *A
-0.083 *B -0.023 *B +0.046 *B
-5.000E-003 *C +0.037 *C +5.000E-003 *C
-0.047 * AB -0.015 * AB +0.080 * AB
+2.500E-003 * AC -0.030 * AC -2.500E-003 * AC
+0.072 * BC +0.16 * BC -0.013 * BC
-0.081 * A2 +6.667E-003 * A2 -0.22 * A2
-6.250E-003 * B2 -0.16 * B2 +0.018 * B2
-0.26 * C2 -0.39 * C2 -0.010 * C2
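Assuming, as is usual for Design Expert output, that the coefficients in Table 4 refer to coded factors in the range [-1, 1] (with the coding based on the titanium ranges in Table 2), the model can be evaluated as in the sketch below; the evaluation point is simply the center of the design and is not a result from the paper.

```python
# Sketch: evaluating the coded quadratic RSM model of Table 4 (RES436, titanium).
def code(x, lo, hi):
    """Map a natural factor value onto the coded interval [-1, 1] (assumed coding)."""
    return (2.0 * x - (hi + lo)) / (hi - lo)

def res436_removal(pH, current_density, time_min):
    A = code(pH, 4.5, 9.5)               # factor A: pH (range from Table 2)
    B = code(current_density, 80, 140)   # factor B: current density, A/m2
    C = code(time_min, 30, 60)           # factor C: electrolysis time, min
    # coefficients taken from the RES436 column of Table 4
    return (0.76 - 0.18 * A - 0.083 * B - 5.0e-3 * C
            - 0.047 * A * B + 2.5e-3 * A * C + 0.072 * B * C
            - 0.081 * A**2 - 6.25e-3 * B**2 - 0.26 * C**2)

# e.g. at the centre of the design (pH 7.0, 110 A/m2, 45 min)
print(f"predicted RES436 removal fraction: {res436_removal(7.0, 110, 45):.2f}")
```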

In Figure 3, the optimum values for maximum RES436, RES525, and RES620 color removal
efficiency were determined as pH 4.57, current density 139.84 A/m2, and electrolysis time 58
min. Under the optimum conditions, the RES436 removal efficiency was approximately 97%,
while the RES525 and RES620 color removal efficiencies were > 99.99%.

Figure 3. Optimum values maximizing the color removal efficiencies

The ANOVA results for the RES436, RES525, and RES620 color parameters obtained from
the experimental study of the electrochemical process with stainless steel electrodes are shown
in Table 5. As a result of the statistical analysis, found to fit a quadratic model, the R2 values
for RES436, RES525, and RES620 were determined as 0.99, 0.97, and 0.99, respectively.

In the ANOVA, when the p-values of the pH, current density, and electrolysis time parameters
are examined for the RES436 color value, the pH and current density parameters are seen to
be more effective than the electrolysis time (p < 0.05). For the RES525 color value only pH is
effective, while for the RES620 color value the pH and current density parameters were
determined to be the ones affecting the process. These findings can also be seen in the 3D
plots.

Figure 4 shows the effect of the pH and current density parameters on the RES436, RES525,
and RES620 color removal efficiencies. Accordingly, pH was effective in the removal of all
color parameters; in particular, for pH < 5 the color removal efficiencies were > 80%. For the
RES436, RES525, and RES620 removal efficiencies, increasing the current density adversely
affected the removal efficiency. In a study on color removal from a sample containing a textile
dye in an electrochemical process using steel as the cathode and aluminum and iron as anodes,
the effects of variable parameters such as pH, current density, and electrolysis time on the color
removal efficiency were investigated. While the color removal was 20% at a current density of
2.5 mA/cm2, it reached 98% at 12.5 mA/cm2. The optimum current density for that process
was determined as 11.25 mA/cm2. The lowest removal efficiency was obtained at pH < 2. The
removal efficiency did not change between pH 5 and 9, but increased above pH 9 (Daneshvar
et al., 2007).

Figure 4. Effect of the pH and current density parameters on the color removal efficiency in
the EO process using stainless steel electrodes: a) RES 436, b) RES 525, c) RES 620
(electrolysis time: 30 min)

Table 5. ANOVA results for stainless steel

Source          Sum of Squares   df   Mean Square   F Value   p-value (Prob > F)
RES 436
Model 0.59 9 0.066 49.66 0.0002
A-pH 0.25 1 0.25 189.51 <
B-A.Y 0.054 1 0.054 40.94 0.0014
C-E.S. 2.000E-004 1 2.000E- 0.15 0.7141
AB 9.025E-003 1 9.025E- 6.79 0.0480
AC 2.500E-005 1 2.500E- 0.019 0.8963
BC 0.021 1 0.021 15.81 0.0106
A2 0.024 1 0.024 18.33 0.0079
B2 1.442E-004 1 1.442E- 0.11 0.7553
C2 0.24 1 0.24 182.29 <
Residual 6.650E-003 5 1.330E-
Lack of Fit 6.650E-003 3 2.217E-
Pure Error 0.000 2 0.000
Cor Total 0.60 14
R2 0.99
Adj R2 0.97
RES 525
Model 1.11 9 0.12 53.39 0.0002
A-pH 0.37 1 0.37 159.86 <
B-A.Y 4.050E-003 1 4.050E- 1.75 0.2431
C-E.S. 0.011 1 0.011 4.86 0.0786
AB 9.000E-004 1 9.000E- 0.39 0.5601
AC 3.600E-003 1 3.600E- 1.56 0.2675
BC 0.096 1 0.096 41.54 0.0013
A2 1.641E-004 1 1.641E- 0.071 0.8006
B2 0.093 1 0.093 40.01 0.0015
C2 0.56 1 0.56 240.70 <
Residual 0.012 5 2.313E-
Lack of Fit 0.011 3 3.833E- 115.00 0.0086
Pure Error 6.667E-005 2 3.333E-
Cor Total 1.12 14
R2 0.99
Adj R2 0.97
RES 620
Model 0.66 9 0.073 33.10 0.0006
A-pH 0.44 1 0.44 197.34 <
B-A.Y 0.017 1 0.017 7.73 0.0389
C-E.S. 2.000E-004 1 2.000E- 0.090 0.7759
AB 0.026 1 0.026 11.56 0.0193
AC 2.500E-005 1 2.500E- 0.011 0.9195
BC 6.250E-004 1 6.250E- 0.28 0.6180
A2 0.17 1 0.17 78.86 0.0003
B2 1.131E-003 1 1.131E- 0.51 0.5069
C2 3.692E-004 1 3.692E- 0.17 0.7000
Residual 0.011 5 2.215E-
Lack of Fit 0.011 3 3.692E-
Pure Error 0.000 2 0.000
Cor Total 0.67 14
R2 0.98
Adj R2 0.95

The model equations obtained as a result of the study are shown in Table 6.

Table 6. Equations determined for the different color values

RES436 RES525 RES620


+0.44 +0.53 +0.81
-0.31 *A -0.22 *A -0.11 *A
-0.058 *B -0.079 *B -0.11 *B
+0.12 *C +0.12 *C +0.034 *C
-0.060 * AB -0.10 * AB -0.028 * AB
+0.035 * AC +0.073 * AC -7.500E-003 * AC
-0.035 * BC -0.030 * BC +0.015 * BC
+0.11 * A2 +0.16 * A2 -0.024 * A2
+0.084 * B2 +0.062 * B2 +8.333E-003 * B2
-0.026 * C2 -0.023 * C2 +0.048 * C2

In Figure 5, the optimum values for maximum RES436, RES525, and RES620 color removal
efficiency were determined as pH 4.84, current density 60.15 A/m2, and electrolysis time 44
min. Under the optimum conditions, the RES436 removal efficiency was approximately 97%,
the RES525 removal efficiency approximately 95%, and the RES620 color removal efficiency
> 99.99%.

Figure 5. Optimum values maximizing the color removal efficiencies.

The color change between the yeast industry post-treatment discharge water (the raw
wastewater used in the experimental studies) and the wastewater after the EO process under
the optimized conditions is shown in Figure 6.

Figure 6. The plant's discharge wastewater and the water treated by the EO process

4. Discussion and Conclusions

In the optimization studies of the electrochemical processes carried out with the response
surface method, for the titanium electrode the RES436, RES525, and RES620 color removal
efficiencies were determined as 89%, 98%, and 99.99%, respectively, under the optimum
conditions of pH 4.55, current density 84.23 A/m2, and electrolysis time 43 min. For the
stainless steel electrode, the RES436, RES525, and RES620 color removal efficiencies were
determined as 98%, 95%, and 99.99%, respectively, under the optimum conditions of pH 4.84,
current density 60.15 A/m2, and electrolysis time 45 min.

As a result of the statistical analysis, in which the models were found to fit the quadratic model,
the R2 values were found to be 0.99, 0.99, and 0.98 for RES436, RES525, and RES620 for
titanium, and 0.99, 0.97, and 0.99 for RES436, RES525, and RES620 for stainless steel,
respectively. Depending on the current densities applied, the energy consumption of the process
varied in the ranges 34.2–122.2 kWh/m3 and 4.64–42.86 kWh/m3, respectively. For the same
removal efficiencies, the use of the stainless steel electrode is therefore considered to be more
favorable than the titanium electrode in terms of energy cost.

It was determined that the electrochemical processes using titanium and stainless steel
electrodes are highly effective in degrading and removing the melanoidins that produce the
color in yeast industry discharge waters and whose removal is quite complex. In particular, the
removal efficiency for all color parameters of the discharge water is > 89%. Accordingly, it is
considered that the water discharged by the plant after treatment can be further treated by
electrochemical processes with a view to reuse, and that, for in-plant reuse of the treated water,
the applied processes should also be evaluated with respect to parameters other than color.

References

Alfredo G., Veronica M., Ivan G. M.,Perla T. A., Monserrat C., Ivonne L. (2014). Industrial
wastewater treatment by electrocoagulation–electrooxidation processes powered by
solar cells, Fuel 149 (2015) 46–54.

Alkan R., 2010, Melanoidin içeren atık suların renginin mikroorganizmalarla giderilmesi,
Ankara Üniversitesi Çevre Bilimleri Dergisi, 2, 89-94.

APHA. (2005). American Public Health Association (APHA), Standard Methods for the
Examination of Waste and Wastewater (19th ed.), Washington.

Aydın S., (2020). Erzurum biyolojik atıksu arıtma tesisi arıtma çamuru yönetiminin
incelenmesi, Yüksek Lisans Tezi, Atatürk Üniversitesi Fen Bilimleri Enstitüsü,
Erzurum, Türkiye.

Balcıoğlu G., (2013). Biyolojik olarak arıtılmış ekmek mayası endüstrisi atıksularının ileri
arıtım alternatiflerinin incelenmesi, Yüksek Lisans Tezi, İstanbul Üniversitesi Fen
Bilimleri Enstitüsü, İstanbul, Türkiye.

Bejankiwar R., Lalman J., A., Seth R., Biswas N., (2005). Electrochemical degradation of 1,2-
dichloroethane (DCA) in a synthetic groundwater medium using stainless-steel
electrodes, Water Research, 39, 4715–4724.

Cansu E., (2018). Atık aktif çamurun elektrooksidasyon yöntemi ile ön arıtımının
incelenmesi, Yüksek Lisans Tezi, Atatürk Üniversitesi Fen Bilimleri Enstitüsü,
Erzurum, Türkiye.

Chen G., (2004). Electrochemical technologies in wastewater treatment, Separation and


Purification Technology, 38 (1), 11-41.

Daneshvar N., Khataee A., R., Amani G., A., R., Rasoulifard M., H., (2007). Decolorization
of C.I. Acid Yellow 23 solution by electrocoagulation process: Investigation of
operational parameters and evaluation of specific electrical energy consumption
(SEEC), Journal of Hazardous Materials, 148, pp. 566–572.

Demir N., M., (2012). İleri biyolojik arıtma proseslerinde nütrient giderimi ve
mikroorganizme türlerinin incelenmesi, Doktora Tezi, Yıldız Teknik Üniversitesi Fen
Bilimleri Enstitüsü, İstanbul Türkiye.

EPA (2009). http://water.epa.gov/drink/contaminants/upload/mcl-2.pdf Europa Norm, 1994.


EN ISO 7887.(Erişim Tarihi:02.06.2021).

Haksevenler G., B., H., Doğruel S., Alaton A., İ., 2019. Kimyasal arıtma proseslerinin
karasuyun boyutsal dağılımı üzerindeki etkilerinin incelenmesi, Uludağ Üniversitesi
Mühendislik Fakültesi Dergisi, Cilt 24, Sayı 3.

Ihara I., Kanamura K., Shimada E., et.al., (2004). High gradient magnetic separation
combined with electrocoagulation and electrochemical oxidation for the treatment of
landfill leachate, Ieee Transactions On Applied Superconductivity, 14-2, 1558-1560.

İlhan F., Kurt U., Apaydın Ö., Arslankaya E., Gönüllü M., T., (2007). Elektrokimyasal arıtım
ve uygulamaları” TÜRKAY 2007 AB sürecinde Türkiye’de katı atık yönetimi ve çevre
sorunları sempozyumu.

Kannan K., Sivadurai S., N., John Brechmans, L., Vijayavalli R., 1995. Removal of phenolic
compounds by electrooxidation method, J. Environ. Sci. Health, A 30, 2185.

Karaoğlu M., H., (2007). Sulu çözeltilerden bazı boyarmaddelerin fizikokimyasal yöntemlerle
giderimi, Doktora Tezi, Balıkesir Üniversitesi Fen Bilimleri Enstitüsü, Balıkesir,
Türkiye.

Kocaer F., O., Alkan U., (2002). Boyar madde içeren tekstil atıksularının arıtım alternatifleri,
Uludağ Üniversitesi Mühendislik-Mimarlık Fakültesi Dergisi, Cilt 7, Sayı 1.

Kul S., (2005). Zeytin karasuyunun elektrooksidasyon yöntemi ile arıtımının incelenmesi,
Doktora Tezi, Atatürk Üniversitesi Fen Bilimleri Enstitüsü, Erzurum, Türkiye.

Martínez-Huitle C., A., Alfaro M., A., Q., (2008). Elmas elektrodun son çevresel
uygulamaları: Kritik inceleme, J. Environ. Eng. Manage., 18 (3), 155-172.

Stone C., (1998). Yeast Products in the Feed Industry , Diamond V Mills, Inc. Cedar Rapids,
Iowa, 3-15.

276
Ünal T, (2011). Ekmek mayası endüstrisi seperasyon prosesi atıksularında ozon ve
ozon/hidrojen peroksit oksidasyonu ile renk giderimi. Yüksek Lisans Tezi, İstanbul
Teknik Üniversitesi Fen Bilimleri Enstitüsü, İstanbul, Türkiye.

Xion Y., He C., Karlsson H., T., Zhu X., (2003). Performance of three-phase three-
dimensional electrode reactor for the reduction of COD in simulated wastewater-
containing phenol, Chemosphere, 50, 131–136.

Vardar B.,(2006). Tekstil endüstrisi reaktif boya banyolarının elektrokimyasal yöntemler ile
arıtımı, Yüksek Lisans Tezi, İstanbul Üniversitesi Fen Bilimleri Enstitüsü, İstanbul,
Türkiye.

Vlyssides A., G., Papaioannou D., Loizidoy M., Karlis P., K., Zorpas A., A., (2000). Testing
an electrochemical method for treatment of textile dye wastewater, waste management,
20, 569-574.

Yılmaz E., (2014). Maya endüstrisi atıksuyunun ses ötesi dalgalarla arıtılması, Yüksek Lisans
Tezi, Hitit Üniversitesi Fen Bilimleri Enstitüsü, Çorum, Türkiye.

277
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Description of 7.5KW Plant Pollution in PV System

Vehebi Sofiu1*; Sami Gashi2*, Besa Veseli3*; Shkelzim Ukaj4*; Muhaxherin Sofiu5*

Abstract: The effect of aerosol particle pollution on radiation efficiency and on the validity of the tilt angle is analyzed for the solar panel models used in a residential installation in Prizren. It is concluded that aerosol pollution has a direct impact on the efficiency and effectiveness of sunlight collection, taking into account all climatic conditions, including the tilt angle of the solar panels and the pollution deposited on their surface. The particles found on the solar panels result from fossil fuel combustion and CO2 emissions, as follows from the monitoring of the electronic station located in Prizren, which records PM10, PM2.5, SO₂, NO₂, CO, O₃ and NOx. The optimal tilt angle for satisfactory use of renewable energy, especially of the solar energy source with a zero-emissions balance, is 33°. The tilt angle of the solar panel in the Prizren region is 35.5°, which is close to the optimal value obtained from the analytical measurements of the solar panel placement. Based on the solar radiation results of this study, it is concluded that Prizren has very good conditions for the use of sunlight, with 12.15 hours/day of sunshine, or 2015 hours/year. The electronic measurements performed in the 7.5 kW plant in Prizren show a solar radiation of 1580 kWh/m²/year; therefore Prizren has good climatic conditions for the production of solar energy.

Keywords: Radiation efficiency, aerosol particles, electronic station, slope angle.

1. Introduction

The generating plant with a 7.5 kW photovoltaic system is composed of modules and solar-panel strings connected in series, divided into groups connected to an inverter, which is considered the basic unit of the plant, depending on the installed power for converting PV energy. Pollution from fossil fuels arrives whether from greenhouse gas emissions, dust, the burning of old vehicles, or from rainfall coming from the atmosphere. Such aerosol pollution takes the form of smog and adheres as a film to the surface of the solar panels, which creates many dilemmas about what should actually be done in the future to eliminate these obstacles to the efficiency of solar radiation. Common and simple solar panel models have been developed and integrated into many engineering programs, including data simulation on the Matlab platform. However, in this research these models have turned out to be inadequate for application to the hybrid power system, as they need to be adapted to certain atmospheric parameters depending on the locations where the solar plants are installed. Therefore, this study presents a step-by-step procedure for simulating PV cells/modules/arrays with the Matlab/Simulator Tag tools, with a DS-100M solar panel used as the reference model. The operating characteristics of PV arrays in large and medium systems, and in some cases even small PV systems, were studied with respect to aerosol deposition, from which convincing results have been obtained that aerosol pollution affects the efficiency of converting the sun's rays (V. Sofiu, 2019).

1 UBT - Higher Education Institution
* Corresponding author: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]

1.1. Reflection of sunlight by aerosol particles in space

Aerosol particles formed by the burning of fossil fuels and the associated CO2 are distributed in the atmosphere in the form of dark, cloud-like clusters of various shapes, which are reflected in the ozone layer; at the same time they disturb the radiative equilibrium that drives climate change and alter the flow of the sun's radiant energy into the atmosphere and onto the Earth's surface. The scattering of sunlight in the atmosphere, together with the presence of aerosol particles on the surface of solar panels, directly changes the intensity of the radiation reaching the cells and hence the current they deliver. Figure 1 shows how aerosol particles are measured in the atmosphere.

Figure 1: Measurement of aerosol particles in air


Aerosol particles vary enormously depending on whether their sources are natural or artificial (car traffic, industry, or power plants), and they directly affect climate change, which is one of the most important and pressing challenges facing atmospheric science worldwide today. Figure 1 illustrates the measurement of aerosol particles in the atmosphere with the help of two airplanes carrying measuring instruments placed in different positions, used to determine how the variety and amount of aerosol changes along the flight paths (Sofiu, V., 2018).

1.2. Climate change from aerosol particles

Particles suspended in the atmosphere, even in small quantities, affect the climate by changing the flow of radiant energy from the sun to the earth's surface and within the atmosphere. One effect is to raise the temperature at the earth's surface as a result of the thermal energy trapped by atmospheric gases. This occurs directly through the scattering and absorption of sunlight, and indirectly through the formation of clouds and changes to their rain and snow properties, which shape the atmospheric mixture; atmospheric aerosol particles are therefore extremely varied, because their sources are both artificial and natural in composition. Understanding how strongly aerosol particles affect climate change and the greenhouse effect is considered one of the most difficult challenges that science faces today in the presence of global warming. The change of the solar energy flux with aerosol optical depth (AOD) is an important aerosol effect on climate, known as the radiative forcing of solar radiation (Sofiu, V., 2013).

Figure 2. Flux of solar radiation

Figure 2 shows the reduction of sunlight at the surface caused by the aerosol effect; according to the measured values, the solar radiation flux covers wavelengths from 350 to 700 nm, within the visible range of the sun (Redemann & Pilewskie, 2006).

1.3. Discussion of results

The electronic monitoring stations of KHMI in Prizren lie at altitudes above 400 m, at latitude 42° N and longitude 20° E. For this study, the data measured in the 7.5 kW system are elaborated together with the data generated over the one-year period following the commissioning of the PV plant. The impact of air polluted with aerosol particles on the solar panels is analyzed in detail, since their presence affects the intensity of the radiation available for electricity generation. The average hourly values of Global Horizontal Irradiance (GHI) for two such days (23 and 24 April 2019) are presented in Table 1 (note that 1 MJ/m² corresponds to about 278 Wh/m²). As expected, the GHI value dropped during the rainstorm with aerosol pollution: the average daily GHI decreased from 250 W/m² on April 23, 2019 to 225 W/m² on April 24, 2019 (a reduction of 10% from the previous day), and the peak value was reduced from 300 W/m² to 250 W/m² (a reduction of about 16% from the previous day) (KHMI, 2019).

Table 1: Measured hourly GHI values (MJ/m²) during the storm with aerosol pollution, April 23 & 24, 2019

Hour of day   GHI (MJ/m²)   GHI (MJ/m²)
8             0.5           0.6
10            2             1.9
12            2.75          3.1
13            2.8           2.9
14            2.7           2.8
16            1.7           1.8
18            0.2           0.3

The main reason a detailed study of solar radiation has been carried out in this research is to analyze the external factors that help or hinder electricity generation by the PV system, in particular the effect of aerosol pollution on the panel surface, which resulted in energy loss in the case studied.

A careful approach is also needed when placing the solar panels, orienting the tilted surface at a suitable angle so as to maximize the incident solar energy. In such situations it is necessary to calculate the Global Tilted Irradiance (GTI or IT). In the standard isotropic-sky formulation, the tilted irradiance IT is related to the beam and diffuse components of the horizontal irradiance, Ib and Id, by (Solar Energy, 2015):

    IT = Ib·rb + Id·rd + (Ib + Id)·rr                                    (1)

where the tilt factors for beam radiation, diffuse radiation and reflected radiation are:

    rb = [sin δ · sin(φ − β) + cos δ · cos ω · cos(φ − β)] /
         [sin δ · sin φ + cos δ · cos ω · cos φ]                         (2)

    rd = (1 + cos β) / 2                                                 (3)

    rr = ρ · (1 − cos β) / 2                                             (4)

Here the latitude (φ), hour angle (ω), declination angle (δ) and solar zenith angle (θz) depend on the location of the site and on the time and day of the year (Sukhatme & Nayak, 2013). The 275 W PV panel is installed at the tilt angle (β) discussed above, and the ground reflectance (ρ) is assumed to be 0.2. The tilt factor for diffuse radiation is based on the assumption that the sky is an isotropic source; there are other models that account for the anisotropy of diffuse radiation, but the isotropy assumption may be sufficient to predict the yield of the PV plant (Kalogirou, 2015).
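A minimal numerical sketch of equations (1)-(4) is given below, assuming the isotropic-sky model stated above; the irradiance components, declination, hour angle and albedo are illustrative values only, not measurements from the 7.5 kW plant.

```python
import math

def tilt_factors(lat_deg, decl_deg, hour_angle_deg, beta_deg, albedo=0.2):
    """Tilt factors of the isotropic-sky model, equations (2)-(4):
    beam (r_b), diffuse (r_d) and ground-reflected (r_r). Angles in degrees."""
    phi, delta, omega, beta = map(
        math.radians, (lat_deg, decl_deg, hour_angle_deg, beta_deg))
    cos_theta = (math.sin(delta) * math.sin(phi - beta)
                 + math.cos(delta) * math.cos(omega) * math.cos(phi - beta))
    cos_theta_z = (math.sin(delta) * math.sin(phi)
                   + math.cos(delta) * math.cos(omega) * math.cos(phi))
    r_b = cos_theta / cos_theta_z
    r_d = (1 + math.cos(beta)) / 2
    r_r = albedo * (1 - math.cos(beta)) / 2
    return r_b, r_d, r_r

def tilted_irradiance(i_beam, i_diffuse, factors):
    """Equation (1): I_T = I_b*r_b + I_d*r_d + (I_b + I_d)*r_r."""
    r_b, r_d, r_r = factors
    return i_beam * r_b + i_diffuse * r_d + (i_beam + i_diffuse) * r_r

# Illustrative case: latitude 42 N (Prizren), 35.5 deg tilt, solar noon in spring,
# 600 W/m2 beam and 200 W/m2 diffuse horizontal irradiance.
f = tilt_factors(lat_deg=42.0, decl_deg=10.0, hour_angle_deg=0.0, beta_deg=35.5)
print(round(tilted_irradiance(600.0, 200.0, f), 1))  # GTI in W/m2
```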

Table 2: Estimated values of GTI (or IT)

Time (h)   6     8     10    12    14    16    18    20    22    24
GTI        100   200   300   400   500   600   700   800   900   1000

It is clear that, due to the large decrease in DHI, the GTI value is significantly reduced when compared with the GHI shown in Table 1. Thus, it is necessary to consider the tilted radiation, through the change of its scattered component, as reflected in the total solar radiation of Table 2 that is available for electricity generation (Sofiu, V., 2019).

Table 3. Optimal tilt angle and solar energy per m² for the Prizren region

Month        Angle (°)   Solar energy (kWh/m²/day)   No. of days   Solar energy (kWh/m²)
January      68.5        2.03                        31            152.6
February     49.1        3.33                        28            93.24
March        38.9        4.55                        31            136.5
April        25.2        -                           30            166.5
May          5.9         -                           31            222
June         4.6         -                           30            299.9
July         1.5         -                           31            296
August       17.3        -                           31            186.6
September    32.1        -                           30            116.6
October      49.3        -                           31            91.8
November     59.5        -                           30            99
December     73.4        -                           31            62.4
Year         35.5        3.14                        365           152.6

Looking at the monthly and annual data of Table 3, it is concluded that the optimal average tilt angle for the whole year, at which the solar collector captures the maximum solar energy, is 35.5°. The decrease in power due to aerosol particles is therefore expected to be of the same order (B. Ravindra, 2011).

Conclusion

Based on mathematical models and simulations of radiation efficiency and tilt-angle validity, using the specifications of the solar panels studied, it has been concluded that the particles deposited on the panels as a result of fossil-fuel combustion, i.e. aerosol pollution, have a direct impact on the efficiency and effectiveness of sunlight conversion.

With the solar panels oriented in the adequate south-east direction, the tilt angle found for Prizren is between 33.5° and 35°, which is close to the optimal value obtained from the analytical study of the 7.5 kW PV system.

The pollutants with exceeded values are PM10 and PM2.5 particles, together with pollution from fossil fuels and greenhouse gas emissions. Based on the available air quality data, emission data and other statistical data, it can be assessed that the air in Kosovo is of unsatisfactory quality and outside the allowed limits.

The correlation coefficients describing the dependence of the intensity of solar radiation falling on the earth's surface on the relative humidity of the air range from 0.150 to about 0.440. Empirical analytical equations, combined with the hierarchical communication system, are used to make the necessary corrections according to the constructed algorithmic schemes.

Future work will therefore look at ways of eliminating these atmospheric obstacles by introducing new technology, including SMART equipment for monitoring particulate matter together with automatic, sensor-driven cleaning of the solar panels in the installed plant.

References

Sofiu, V. (2019). Mathematical modelling of aerosols in solar panels.
Sofiu, V. (2018). Impact of aerosol optical depth on seasonal temperatures in Kosovo.
Sofiu, V. (2013). Global access to living environmental protection.
Redemann, J., & Pilewskie, P. (2006). Airborne measurements of spectral direct aerosol radiative forcing in the Intercontinental Chemical Transport Experiment / Intercontinental Transport and Chemical Transformation of Anthropogenic Pollution, 2004.
Ames tracking airborne AOD (aerosol optical depth), Global Monitoring Laboratory, 2018.
KHMI (IHMK) - Hydrometeorological Institute, Pristina (2019).
Solar irradiance forecast using aerosols measurements. Solar Energy, vol. 122, no. 1158, 2015.
Sofiu, V. (2019). Mathematical modeling of the effect of aerosols on solar panels and climate conditions in order to support the use of alternative energy sources.
Sukhatme, S. P., & Nayak, J. K. (2013). Solar Energy: Principles of Thermal Collection and Storage (3rd ed.). New Delhi: Tata McGraw Hill.
Kalogirou, S. A. (2015). Solar Energy Engineering.
Ravindra, B. (2011). Performance of a crystalline silicon photovoltaic power plant during sandstorms.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Diagnostic expert systems

Rahimova N.A1*, Abdullayev V.H.2*

Abstract: The object of this research is expert diagnostic systems. A natural field of application of diagnostic expert systems is medicine. This article considers the expert system "System of diagnosis and programming of treatment of acute surgical abdominal diseases, pancreatic diseases and ophthalmology". The issues relevant to the implementation of this system are considered, and the features of expert systems are discussed. The article focuses on the characteristics and symptoms of many eye diseases and their grouping into special categories. In addition, the conceptual model of the database, which was implemented in DBASE IV, is presented. Finally, the structure and phases of the diagnostic expert system for identifying the disease are discussed.

Keywords: expert systems; diagnostics; information systems; artificial intelligence; diagnostic expert
systems.

1. Introduction

The medical field is considered the most natural one for diagnostics, which is why diagnostic systems are most widely used in this field. However, there are still many issues to be investigated in this area: the system of facts characterizing diagnostics for the application area, an adequate description of the knowledge, the correct estimation of the facts and of the validity of the results produced by the rules, and the adaptation of the diagnostic strategy to the real situation. [1] In this regard, the medical diagnostic systems developed by the scientists of the Department of Computer Science and Programming at the Azerbaijan State Oil Academy and the Departments of Surgery and Ophthalmology of the Azerbaijan Medical University - diagnosis and treatment programming systems for "Acute surgical diseases of the abdominal cavity", diseases of the pancreas, and ophthalmology - may be considered relevant. The following issues were investigated and solved for each system:
● Investigation of expert systems related to the subject area and the findings and
drawbacks in this area;
● Investigation of the subject area, determination of its main objects and relationships
between objects;
● Diagnostic decision-making strategy and its main factors;
● Determine the structure of the data base and knowledge base and develop a
conceptual model based on the study of the subject area;
● Design and formal description of rules and metadata;
● Creation of logical output mechanism;
● Development of software for expert system implementation;
● System setup and implementation. [2, 3]
The modern level of development of computer technology, mass production of high-
performance and wide range of hardware, the emergence of a wide range of software
1 ASOIU, Baku, Azerbaijan
* Corresponding author: [email protected]
packages have facilitated the implementation and expansion of intelligent systems, including
expert systems. [4] Expert systems are currently used in many areas (planning, forecasting,
management, diagnostics, etc.). The most effective of these is diagnostics (failure of complex
technical devices and systems, diagnosis of disease in humans), because the solution of
diagnostic problems by humans in the usual way often does not give the desired result. [5]
There are many reasons for this:
● Lack of complete and accurate information about the object whose condition is being diagnosed;
● The greater the volume of information, the more difficult the analysis and logical inference become, and the easier it is to make mistakes;
● The human factor - limited experience, fatigue, memory impairment, etc. [6]
Let us look at the "Eye Diseases" system created at City Clinical Hospital No. 1 with the help of the scientist-ophthalmologists of the Department of Ophthalmology of AMU.
The clinical history of outpatients is collected in the admission department; complaints are studied and clarified; objective examinations are performed in a special sequence (visual acuity, visual field, examination of the eyeball by focal light, indirect and direct ophthalmoscopy, determination of eye refraction, biomicroscopy, palpation of intraocular pressure, and binocular vision examination); and, based on the results of these examinations, information about the functional condition of each patient's eyes is obtained. The preliminary data are listed in the following order:
● Patient's passport data;
● Doctor's indications;
● Patient's medical history;
● Complaints (local complaints of the patient, general complaints of the patient). [7]
● Examinations:
■ Stage I (objective examination);
■ Stage II (visual acuity, atropinization, etc.);
■ Stage III (ophthalmoscopy);
■ Stage IV (visual field, intraocular pressure, USM). [8]
The study included a total of 150 eye diseases and about 1,000 symptoms associated
with those diseases in the 10 most common groups of diseases that cover ophthalmology. [9,
10]

2. Materials and Methods

Before compiling the conceptual scheme of the database, the structure and boundaries of the data and the queries that can be submitted to the system were analyzed. When creating a conceptual scheme, not only the information interests of the user but also the information needs of the subject area are taken into account for structuring the subject area. All requirements are summarized in a conceptual model that allows the full information content of the subject area to be seen. A conceptual diagram showing the structure of the data stored in the database and the relationships between the sections of the subject area is given in Tables 1 to 9.

The conceptual scheme of the system database is based on the relational model. The relational model is based on the description of data structures in the form of relations and tables. Based on the developed conceptual scheme, the DBASE IV environment was selected to build the physical model of the database.
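For readers without access to DBASE IV, the fragment below sketches how the PATIENT relation of Table 1 could be expressed in a generic relational environment (SQLite via Python is used here purely as a stand-in); the column subset and type mappings are approximations, not the authors' original DBASE definitions, and the inserted record is invented.

```python
import sqlite3

# Approximate mapping of Table 1 field types: N -> INTEGER, C -> TEXT,
# D (date) and M (memo) -> TEXT. Only a subset of the columns is shown.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE patient (
        id            INTEGER PRIMARY KEY,
        first_name    TEXT,
        last_name     TEXT,
        second_name   TEXT,
        date          TEXT,
        sex           TEXT,
        age           TEXT,
        family_status TEXT,
        education     TEXT,
        workplace     TEXT,
        address       TEXT,
        diagnosis     TEXT,
        phone         TEXT
    )
""")
conn.execute(
    "INSERT INTO patient (id, first_name, last_name) VALUES (?, ?, ?)",
    (1, "Sample", "Patient"))
print(conn.execute("SELECT id, first_name, last_name FROM patient").fetchall())
```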

3. Research results and discussion

According to the conceptual scheme, the database is composed of nine linked tables: PATIENT, DOCTOR, LOCAL COMPLAINTS, GENERAL COMPLAINTS, HISTORY, and EXAMINATIONS (STAGES I, II, III AND IV). Their structure is described in Tables 1 to 9.
Table 1. Table of Patient
Field name Type Size

Id N 15

First_Name C 15

Last_Name C 15

Second_Name C 15

Date D

Sex C 8

Age C 45

Family_status C 10

Education C 35

Workplace C 250

Address C 250

Diagnosis M

Phone C 10

Have_you_been_examined? C 5

Where_was_examined? C 200

Table 2. Table of Doctor


Field name Type Size

Id N 15

D_First_Name C 15

D_Last_Name C 15

D_Second_Name C 15

Table 3. Table of Local Complaints


Field name Type Size

Id N 15

Cipher C 15

Local_Complaints C 55

OD C 15

OS C 15

OU C 15

Table 4. Table of General Complaints


Field name Type Size

Id N 15

Cipher C 15

General_complaints C 60

Table 5. Table of Anamnesis


Field name Type Size

Id N 15

Cipher C 50

Anamnesis C 100

OD C 25

OS C 25

OU C 25

Table 6. Table of Phase I


Field name Type Size

Id N 15

Cipher C 50

Phase_I C 100

OD C 25

OS C 25

OU C 25

Table 7. Table of Phase II


Field name Type Size

Id N 15

Cipher C 50

Phase_II C 100

OD C 15

OS C 15

OU C 15

Table 8. Table of Phase III


Field name Type Size

Id N 15

Cipher C 15

Phase_III C 100

OD C 15

OS C 15

OU C 15

Table 9. Table of Phase IV


Field name Type Size

Id N 15

Cipher C 55

Phase_IV C 100

OD C 25

OS C 25

OU C 25

For example, glaucoma is described as follows:


Q.1. Congenital glaucoma
Q.2. Primary glaucoma
Q.2.1. Primary glaucoma (by angle)
Q.2.1.1. Primary open glaucoma
Q.2.1.2. Primary closed-angle glaucoma
Q.2.1.3. Primary mixed angle glaucoma
Q.2.1.4. Suspicion of primary glaucoma
Q.2.2. Primary glaucoma (by stage)
Q.2.2.1. Phase I primary glaucoma
Q.2.2.2. Phase II primary glaucoma
Q.2.2.3. Phase III primary glaucoma
Q.2.2.4. Phase IV primary glaucoma
Q.3. Secondary glaucoma
Q.3.1. Secondary glaucoma due to uveitis
Q.3.2. Phacogenic secondary glaucoma
Q.3.3. Secondary glaucoma after retinal thrombosis
Q.3.4. Secondary glaucoma, neoplastic
Q.4. Intraocular pressure is normal
Q.5. Glaucoma operations
The knowledge base formed in the system consists of about 400 heuristic rules. Each rule reflects the decision-making of the expert physician, and the rules are continually refined during the research process.
Compilation of the rules uses the common symptoms of eye diseases together with symptom weights. The weights are heuristic and are divided into three conditional groups:
● 0.9 - pathognomonic;
● 0.8-0.7 - general;
● 0.5-0.3 - according to differential indicators and some medical data.
The pathognomonic (0.9) symptoms accepted for a disease play an important role in its correct diagnosis. For example, the main pathognomonic symptoms of congenital glaucoma selected by the physician are structured in the following sequence:
● 1.4.1. Newborn - babies 1-6 months old;
● 1.4.2. Child - 6 months to 12 years;
● 2.30.1. Fear of light (weak) - photophobia;
● 2.31. Irritation of the eyes;
● 5.4.2. The volume of the eyeball is larger than normal (macrophthalmia);
● 5.4.4. Buphthalmos;
● 6.8.1. Clouding (haziness) of the cornea;
● 6.12.3. Anterior chamber depth;
● 6.15.1.3. Dilation of the pupil (mydriasis).
While these are the main symptoms of congenital glaucoma, they only allow the diagnosis of one type of glaucoma. Using this structure, a knowledge base is created on the principle of IF-THEN rules. For example:

IF the patient is a newborn (1-6 months old) {1.4.1.},
OR is 6 months to 12 years old {1.4.2.},
AND there is fear of light (photophobia) {2.30.},
AND the eye is irritated {2.31.},
AND the volume of the eyeball is larger than normal (macrophthalmia) {5.4.2.},
OR the volume of the eyeball is buphthalmic {5.4.4.},
AND there is clouding of the cornea, the optical system of the eye {6.8.1.},
AND the anterior chamber is deep {6.12.3.},
AND the pupil of the eye is dilated (mydriasis) {6.15.1.3.},
THEN the patient is diagnosed with congenital glaucoma (Q.1.).
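The fragment below is a simplified, hypothetical illustration of how a weighted IF-THEN rule of this kind could be evaluated programmatically. The symptom codes follow the congenital glaucoma example and the heuristic weight groups described above, but the scoring scheme (averaging the weights of matched symptoms) is only a sketch, not the inference mechanism actually implemented by the authors.

```python
# Hypothetical evaluation of one weighted IF-THEN rule (congenital glaucoma, Q.1).
RULE_Q1 = {
    "any_of": {"1.4.1", "1.4.2"},          # newborn OR child
    "all_of": {"2.30.1", "2.31", "5.4.2",  # photophobia, irritation, macrophthalmia,
               "6.8.1", "6.12.3", "6.15.1.3"},  # corneal clouding, deep chamber, mydriasis
}
# Heuristic symptom weights (0.9 pathognomonic, 0.8-0.7 general, 0.5-0.3 differential).
WEIGHTS = {"1.4.1": 0.5, "1.4.2": 0.5, "2.30.1": 0.9, "2.31": 0.7,
           "5.4.2": 0.9, "6.8.1": 0.9, "6.12.3": 0.8, "6.15.1.3": 0.9}

def evaluate(rule, observed):
    """Return an averaged weight score if the rule fires, otherwise None."""
    if not rule["any_of"] & observed:       # at least one of the age conditions
        return None
    if not rule["all_of"] <= observed:      # all remaining conditions present
        return None
    matched = (rule["any_of"] | rule["all_of"]) & observed
    return sum(WEIGHTS.get(code, 0.0) for code in matched) / len(matched)

patient = {"1.4.1", "2.30.1", "2.31", "5.4.2", "6.8.1", "6.12.3", "6.15.1.3"}
print("Q.1 congenital glaucoma score:", evaluate(RULE_Q1, patient))  # -> 0.8
```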

The inference (extraction) mechanism in the system depends on the state of the working memory and on the composition of the knowledge base. At the selection stage, the active collection of data and rules (modules) is chosen. After all the information has been gathered in the database and in the rule knowledge base, the patient's complaints and symptoms recorded during the doctor's examination are displayed in turn, and the answers are collected in the computer's memory block.
The activation phase determines which active modules are ready for use with which active data. At each stage, the inference mechanism compares all rules and data for the diagnosis of the disease: the facts collected in the memory block are compared with the diseases stored in the knowledge base. The structure of the diseases is defined as a tree-like hierarchical scheme (Figure 1).

Figure 1. Classification scheme of eye diseases


The tree structure is of great importance for testing hypotheses. On the one hand it reflects the possible relationships between diseases, and on the other hand it determines the strategy of the diagnostic search process, whose main goal is to search for the right data. There are high-level group diseases, low-level diseases, and intermediate-level diseases, which combine diseases that are not very different from each other. The purpose of the diagnostic system is to detect the disease. Second-level diseases include several diseases that share a certain number of common properties; if the system cannot identify the specific disease, it responds to the user with the corresponding second-level disease. The first level of diseases is pathologically common and is called a group disease.
In the conflict-resolution phase, the inference mechanism evaluates the rules for their relevance to the current goal.
At the execution stage, the rules (modules) selected during the conflict-resolution stage are applied, and finally the result of the inference mechanism's run is output to the user.
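A toy sketch of this hierarchical fallback is shown below: if exactly one specific (leaf) disease is supported, it is reported; if several leaves under the same branch remain plausible, the corresponding second-level disease is returned instead. The tree fragment and the selection logic are illustrative only, drawn from the glaucoma classification above, not from the authors' implementation.

```python
# Hypothetical fragment of the disease tree (cf. the glaucoma classification above).
DISEASE_TREE = {
    "Glaucoma (group)": {
        "Primary glaucoma": ["Primary open glaucoma", "Primary closed-angle glaucoma"],
        "Secondary glaucoma": ["Phacogenic secondary glaucoma"],
    }
}

def diagnose(plausible_leaves):
    """Return the most specific diagnosis supported by the plausible leaf diseases."""
    for group, subgroups in DISEASE_TREE.items():
        for subgroup, leaves in subgroups.items():
            hits = [leaf for leaf in leaves if leaf in plausible_leaves]
            if len(hits) == 1:
                return hits[0]        # one specific disease identified
            if hits:
                return subgroup       # ambiguous -> report the second-level disease
    return None                       # nothing in this branch matches

print(diagnose({"Primary open glaucoma"}))
print(diagnose({"Primary open glaucoma", "Primary closed-angle glaucoma"}))
```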
The algorithm for selecting the operating mode of the system is described in Figure 2.
The software is organized in modules. These modules make it possible to retrieve information about the patient at any time, enter the patient's symptoms, make a diagnosis, edit the information, quickly search the database for the necessary information, make a decision, print a medical history, and more.

Figure 2. Search module algorithm
This tree-like structure contains information about all the modules and allows the user to look at the sub-branches to become acquainted with the working principles of the other modules.
The facts section describes the working principles of the modules, which ensure that
the patient's symptoms are included in the program. These modules are mainly:
 anamnesis;
 examinations;
 complaints.
The anamnesis module provides input of the patient's anamnesis data into the
program.
The "Examinations" module provides for the entry into the program of the patient's data, i.e. the symptoms identified as a result of the examinations. The entry process is carried out according to the following examination stages:
 Stage I (objective examination);
 Stage II (visual acuity, atropinization, etc.);
 Stage III (ophthalmoscopy);
 Stage IV (visual field, intraocular pressure, USM).
The "Complaints" module provides for the entry into the program of the data identified from the patient's complaints, i.e. the symptoms. The entry process is conducted in accordance with the following types of complaints:
 local complaints;
 common complaints.
In the "Diagnosis" module, the diagnosis is made by entering the main symptoms both for patients with a permanently maintained medical history and for those without one. Diagnosis is made according to the following sub-modules:
● diagnosis with permanent maintenance of the medical history;
● diagnosis with a non-permanent (operative) history of the disease.

The "Help" module informs the user about the program, the author, as well as the
working principles of the individual modules of the program. This module also explains the
working principle of the Inquiry Book. This module consists of 4 parts:
 About medicines;
 Optics;
 Teaching part;
 Part of public service.
The research materials included 500 patients admitted to the "Eye Diseases"
department of Baku Clinical Hospital No. 1 with eye diseases.
4. Conclusions

Expert systems, which play a special role in the field of artificial intelligence, are very important for the development of technology and for its future achievements.
With the help of diagnostic expert systems, a type of expert system, great advances have been made both in technology and in the medical world. With the correct use of these systems, it is possible to make accurate diagnostic assessments.

References

[1] J. Yanase, E. Triantaphyllou, “A Systematic Survey of Computer-Aided Diagnosis in


Medicine: Past and Present Developments”. Expert Systems with Applications 138, 2019
[2] S. Jabeen, G. Zhai, “A Prototype Design for Medical Diagnosis by an Expert System”. 7th
International Workshop on Computer Science and Engineering (WCSE 2017)
[3] P. Patra, D. Sahu, I. Mandal, “An Expert System for Diagnosis Of Human Diseases”.
International Journal of Computer Applications 1(13), 2010
[4] J. Singla, D. Grover, A. Bhandari, “Medical Expert Systems for Diagnosis of Various
Diseases”. International Journal of Computer Applications 93(7), 2014.
[5] D. Matthias, O. Udo, “Expert System for Medical Diagnosis of Hypertension and Anaemia”. MAYFEB Journal of Environmental Science 3, 2017.
[6] S. Abu-Naser, R. Aldahdooh, A. Mushtaha, M. El-Naffar, “Knowledge Management in
ESMDA: Expert System for Medical Diagnostic Assistance”. ICGST-AIML Journal 10(1),
2010.
[7] I. Abundez, E. Rendon, C. Estrada, S. Zagal, “Diagnosis of Medical Images Using an
Expert System”.
https://www.researchgate.net/publication/220942958_Diagnosis_of_Medical_Images_Using_
an_Expert_System
[8] S. Abu-Naser, O. Abu Zaiter, “An Expert System for Diagnosing Eye Diseases Using CLIPS”. Journal of Theoretical and Applied Information Technology 4(10), 2008.
[9] S. Sikchi, S. Sikchi, M. S. Ali, “Artificial intelligence in medical diagnosis”. International
Journal of Applied Engineering Research 7(11), 2012.
[10] A. Oluwafemi J., I. A Jimoh, “Expert System for Diagnosis Neurodegenerative
Diseases”. International Journal of Computer and Information Technology 4(4), 2015.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Investigation of the Relationship Between Bridge Equipment


Location, Fatigue and Mental Workload by Using Piper Fatigue
Scale and NASA-TLX

Leyla Tavacıoğlu1*, Bayram Barış Kızılsaç2*, Neslihan Gökmen İnan3*,


Özge Eski4*, Can Tanguç5*

Abstract: Fatigue is the decrease in an individual's physical and mental capacity caused by many factors, such as the individual's work. Fatigue also occupies an important place by negatively affecting the individual, her or his relations with the environment, and working life. Maritime trade, which has a large share in the growing world trade, is indisputably affected by these situations. Thanks to developing technology, the quality of the equipment used becomes better and more efficient over time. Nevertheless, due to the increasing workload, the fatigue of employees gradually increases, and this causes certain negative effects in individuals. This study aims to investigate the relationship between bridge equipment position, mental workload and fatigue. In this context, 42 seafarers who have worked or are working on ships participated and answered the NASA-TLX and the Piper Fatigue Scale. NASA-TLX measures performance, effort, frustration, and mental, physical and temporal demand. The Piper Fatigue Scale has four subdimensions: behavior, affective, sensory and cognitive mood. The study reveals a statistically significant positive correlation between fatigue and mental workload (p<0.05). In addition, mental workload and fatigue are found to be associated with bending posture and the degree of eye strain from the bridge equipment when using ECDIS and RADAR. In future studies, mental workload assessment can be made for different operational processes by increasing the sample size.
Keywords: piper fatigue scale, NASA-TLX, bridge equipment location, maritime


transportation

1. Introduction

Approximately 90% of global trade is carried out by maritime transport. Even the slightest interruption of this trade deeply affects world markets, from the largest to the smallest. The safe handling and management of ships is a responsibility of seafarers. For this reason, the health and well-being of seafarers is known to be an important building block for the flow of trade to continue undisturbed (Kinchington, 2020).

Seafaring is one of the most dangerous occupations, and stress levels are also high. The working environment on board includes
1 Istanbul Technical University, Maritime Faculty, Basic Sciences, Istanbul, Turkey
* Corresponding author: [email protected]
sound pollution, vibration effect, intense work tempo, hot and cold weather conditions
(Öztürk, 2021).
Fatigue is a common denominator in the maritime sector, as in many other sectors, and is an
important problem. In general, it is stated that 70-80% of the accidents that occur in the sea
are caused entirely by human error. Fatigue is shown as the main cause of the majority of
these accidents (Reyner&Baulk, 1998).

As the fatigue level of seafarers increases, which is frequently encountered in the maritime community, costly disasters occur. The decrease in performance caused by fatigue leads to health problems, damage to the environment, and a reduction in the time individuals can work on board (Smith et al., 2001).

In this study, the relationship between bridge equipment position, fatigue and mental workload is investigated. The Piper Fatigue Scale (PFC) and the rating and weighting models of NASA-TLX are used, and an attitude-based survey of seafarers is conducted. The sub-dimensions of the PFC and the rating part of NASA-TLX are analyzed among themselves, and it is also examined whether there is a correlation between them.

2. Material and Method

The research is carried out to measure the relationship between the position of bridge equipment, mental workload and fatigue. The research is conducted in the form of data collection on a survey basis. The survey consists of three parts. First, 6 questions on demographic characteristics are asked, covering the gender, age and education status of the participants, their duties on the ship, the length of time they have worked at sea, and the types of ships they have mostly worked on. Following the demographic data, the Piper Fatigue Scale items and the bridge-related questions are added. Finally, the NASA-TLX rating and weighting questionnaires are administered. The questionnaire is created entirely on an online platform and sent to the participants via Google Forms. To investigate the correlation between two normally distributed variables, Pearson correlation analysis is used. IBM SPSS Statistics for Windows, Version 24.0, is used, and the significance level is set at 5%.
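As a small illustration of the analysis step (the study itself used SPSS), a Pearson correlation coefficient and its p-value can be computed in Python as shown below; the two score vectors are made-up placeholders, not data from the survey.

```python
from scipy.stats import pearsonr

# Placeholder score vectors (e.g. a PFC sub-dimension vs. a NASA-TLX dimension);
# these are not the actual survey data.
fatigue_scores = [5.1, 6.3, 4.8, 7.0, 5.5, 6.1, 4.2, 6.8]
workload_scores = [6.0, 7.2, 5.1, 8.1, 6.4, 6.9, 4.9, 7.5]

r, p = pearsonr(fatigue_scores, workload_scores)
print(f"r = {r:.3f}, p = {p:.4f}")  # significant at the 5% level if p < 0.05
```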

2.1. Piper fatigue scale (PFC)

A comprehensive measurement model for fatigue, the Integrated Fatigue Model, was created by Piper et al. in 1987. When first developed the scale consisted of 42 items; with the changes made over time, it now consists of 22 items scored 0-10 on a VAS (Visual Analog Scale) basis and evaluated in four sub-dimensions of patient fatigue. These parts are: the behavior/severity sub-dimension, which evaluates the effect and severity of fatigue (items 2-7); the affective sub-dimension, which covers the emotional meaning attributed to fatigue (items 8-12); the sensory sub-dimension, which reflects the mental, physical and emotional symptoms of fatigue (items 13-17); and the cognitive/mood sub-dimension, which shows the effect of fatigue on cognitive functions and mood (items 18-23). In addition, there are 5 items (item 1 and items 24-27) that are not used in calculating the fatigue score but are important for evaluating data related to fatigue. Of these, item 1 deals with the course of fatigue, while items 24-27 allow ill individuals to express their thoughts on fatigue (Clark et al., 2006).

According to the mean scores of the Piper scale, they are ranked as follows:
0 points: No fatigue
1-3 points: Mild feeling of fatigue
4-6 points: Moderate fatigue
7-10 points: Extreme level of fatigue
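A minimal helper reflecting the banding above is sketched below; how values falling between the listed bands should be handled is not specified in the scale description, so the cut-offs used here are a simplifying assumption.

```python
def piper_fatigue_category(mean_score):
    """Map a mean Piper Fatigue Scale score (0-10) to the bands listed above.
    Cut-offs between bands are assumed, not taken from the original scale."""
    if mean_score == 0:
        return "No fatigue"
    if mean_score <= 3:
        return "Mild feeling of fatigue"
    if mean_score <= 6:
        return "Moderate fatigue"
    return "Extreme level of fatigue"

print(piper_fatigue_category(5.66))  # e.g. a sub-dimension mean -> Moderate fatigue
```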

2.2. NASA-TLX

Although the concept of workload is gaining increasing attention in the academic community, there is still no clear consensus on its definition. Existing definitions are generally based on three variables: the amount of work, its duration, and the individual's psychological experience. When workload was first defined, the focus was on physical workload; however, with the advancement of technology and the increasingly efficient use of machines in place of physically demanding work, attention has shifted to workload in a mental rather than physical sense (Taç, 2018).

NASA-TLX (NASA Task Load Index) is a subjective, multidimensional workload assessment scale created by Hart and Staveland to evaluate the degree of mental workload of individuals. Since NASA-TLX was originally designed for use in the aviation industry, most of the early work was done on air traffic control and on civil or military cockpits. More recently, studies on automobile drivers, clinical research, and portable technological devices such as computers or mobile phones have been increasing day by day (Taç, 2018).

NASA-TLX is a multidimensional assessment procedure that combines the ratings of six subscales into an overall workload score. These dimensions are as follows: performance, effort, frustration, and mental, physical and temporal demand. Individuals define the contribution made by each of the six dimensions in order to make the intensity of the workload in question clear.

NASA-TLX evaluates workload in two stages: weighting and rating. In the first stage, each dimension describing the workload is evaluated comparatively in terms of its contribution to the workload; the NASA-TLX weighting form is used for this. Based on the person's preferences, the number of times each factor is selected is counted, and weights (from 0 to 5) are assigned to the six factors. The second stage consists of grading the six dimensions that make up the workload separately on a numerical basis; the NASA-TLX rating form is used for this, based on independent scoring of all six workload factors. The person completes the form by giving a score between 0 (low) and 20 (high) for each workload dimension of the specified task. Finally, the person's overall weighted mental workload score (between 0 and 100) is obtained (Taç, 2018; Hart, 2006).
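The short sketch below illustrates the two-stage computation just described: pairwise-comparison weights (0-5, summing to 15) are combined with the 0-20 ratings, which are rescaled to 0-100 before the weighted average is taken. The rescaling step and all numbers are illustrative assumptions, not values from this study.

```python
# Illustrative NASA-TLX computation for one respondent.
ratings = {"mental": 14, "physical": 12, "temporal": 15,     # 0-20 rating form
           "performance": 16, "effort": 13, "frustration": 12}
weights = {"mental": 4, "physical": 2, "temporal": 3,        # 0-5 weighting form
           "performance": 1, "effort": 3, "frustration": 2}

assert sum(weights.values()) == 15  # the 15 pairwise comparisons in total

# Rescale 0-20 ratings to 0-100, then take the weight-adjusted mean.
overall = sum(weights[d] * ratings[d] * 5 for d in ratings) / 15
print(f"Overall weighted workload score: {overall:.1f} / 100")  # -> 68.0
```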

3. Results

A total of 42 seafarers who have worked or are working on ships participated and answered the demographic questions, the NASA-TLX and the Piper Fatigue Scale. NASA-TLX measures performance, effort, frustration, and mental, physical and temporal demand. The Piper Fatigue Scale has four subdimensions: behavior, affective, sensory and cognitive mood.

Table 1. Distribution of Demographics

N %
Age 20 and below 4 12
21-30 33 79
31-40 5 9
Education Faculty 40 95
Vocational School 2 5
Position Deck Intern 23 55
Watch Officer 14 33
First Officer 4 10
Captain 1 2
Working at sea Up to 2 years 29 69
2-4 years 11 27
4-8 years 1 2
8-12 years 1 2
Type of vessel Tanker 23 55
Dry cargo 6 14
Container 6 14
Ro-Ro 3 7
Other 4 10

A significant part of the participants are male seafarers. The main reason for this is that women did not work at sea for many years; however, this perception has started to change and women's employment in the sector continues to increase. It can be said that the majority of the participants are young, and most have a faculty-level education. More than half of the respondents are deck trainees, followed by watch officers. The majority of the participants have up to 2 years of sea experience, and more than half have mostly worked on tanker-type ships (Table 1).

Table 2. Distribution of Bridge Related Questions

The degree of eye strain from bridge equipment lights (Mean, SD):
  00.00-04.00: 5.45 (1.316)
  04.00-08.00: 6.50 (2.057)
  08.00-12.00: 4.97 (1.494)
  12.00-16.00: 4.85 (1.197)
  16.00-20.00: 5.64 (1.686)
  20.00-24.00: 5.16 (1.229)

(N, %):
  Whether the chart table affects the bow view: Yes 30 (71.4), No 12 (28.6)
  Color in which the radar is generally used at night: White 5 (11.9), Blue 14 (33.3), Green 18 (42.9), Orange 5 (11.9)
  Body bending in the use of RADAR and ECDIS: Yes 30 (71.4), No 12 (28.6)
  Presence of any obstacle that may affect the bow viewpoint from the captain's and pilot's seat: Yes 7 (16.7), No 35 (83.3)
  Eye disease: Yes 24 (57.1), No 18 (42.9)
  If there is an eye disease, whether it occurred while working on board: Yes 2 (4.8), No 16 (38.1)

SD: Standard Deviation

According to Table 2, the watch in which fatigue is felt most is 04.00-08.00, while the watch in which it is felt least is 12.00-16.00. 71.4% of the participants stated that the chart table affects the bow view and that they bend their body when using RADAR and ECDIS, and 42.9% answered that green is the color generally used on the radar at night. 4.8% of the participants have an eye disease that occurred while working on board.

Table 3. Distribution of PFC and NASA-TLX (Mean, SD)

PFC
  Behavior/severity:  5.658 (2.035)
  Affective/meaning:  5.971 (1.943)
  Sensory:            5.523 (2.065)
  Cognitive mood:     5.13  (1.928)
NASA-TLX
  Mental demand:      7.26  (2.26)
  Physical demand:    6.93  (2.37)
  Temporal demand:    7.5   (2.2)
  Performance:        8.07  (1.28)
  Effort:             6.43  (2.42)
  Frustration:        6.07  (2.9)

Since the general averages of all four sub-dimensions remain in the 4-6 point band, it can be concluded that the individuals participating in the study felt moderately fatigued. In the NASA-TLX form, each dimension ranges between 0 and 20 points (very low to very high). The results show that the participants have a medium level of mental workload, close to the midpoint of 10.

Table 4. Correlation analysis between PFC and NASA-TLX (r, p)

NASA-TLX \ PFC      Behavior/severity   Affective/meaning   Sensory         Cognitive mood
Mental demand       0.116, 0.464        0.339, 0.028        0.070, 0.659    0.298, 0.055
Physical demand     0.477, 0.001        0.566, <0.001       0.350, 0.023    0.439, 0.004
Temporal demand     0.284, 0.068        0.392, 0.010        0.222, 0.157    0.293, 0.060
Performance         0.157, 0.322        0.180, 0.255        0.032, 0.842    -0.006, 0.972
Effort              0.380, 0.013        0.293, 0.060        0.234, 0.134    0.460, 0.002
Frustration         0.512, 0.001        0.488, 0.001        0.509, 0.001    0.527, <0.001

r, p: Pearson correlation coefficient and significance value
According to Table 4, there is a statistically significant positive moderate correlation between behavior/severity and physical demand, effort and frustration. There is a statistically significant positive moderate correlation between affective/meaning and frustration and physical, mental and temporal demand. There is a statistically significant positive moderate correlation between sensory and physical demand and frustration. In addition, a significant positive correlation is found between cognitive mood and physical demand, effort and frustration.

4. Discussion and Conclusions

In this study, the relationship between the position of bridge equipment and its effects on seafarers was examined using the Piper Fatigue Scale and NASA-TLX. First, the distribution of gender, age, education level, duties on the ship, working time at sea and, finally, the ship types the participants mostly worked on was considered.

Participants reported that fatigue was felt most at the end of the 04.00-08.00 watch on the bridge (6.5 out of 10), while the 12.00-16.00 watch was recorded as the one in which fatigue was felt least compared with the other watches (4.85 out of 10).

According to the first sub-dimension of the Piper fatigue scale, which evaluates the effect and severity of fatigue, the mean value was approximately 5.65 points out of 10, showing that the effect is felt moderately. For the second, affective sub-dimension, covering the emotional meaning attributed to fatigue, the mean is 5.97 out of 10, the highest score among the dimensions, again a moderate level. For the third, sensory sub-dimension, which reflects the mental, physical and emotional symptoms of fatigue, the score is 5.52 out of 10, so the degree to which it is felt is also moderate. Finally, the fourth, cognitive/mood sub-dimension, which reflects the effect of fatigue on cognitive functions and mental state, is 5.13 points out of 10 and has the lowest value among the four dimensions.

In general, when the average value of all the sub-dimensions of the Piper fatigue scale is
taken, it is clearly seen that the participants feel moderately fatigued because they are at the
level of 4-6 points.

In the correlation analysis of the PFC and the NASA-TLX ratings, the 4 Piper sub-dimensions and the 6 NASA-TLX dimensions were taken and a total of 24 pairwise relationships were evaluated, 11 of which show a significant positive moderate correlation. Frustration and physical demand are significantly correlated with all PFC sub-dimensions, so it can be said that as frustration and physical demand increase, fatigue increases, and vice versa. In addition, mental workload and fatigue can be associated with bending posture and with the degree of eye strain from the bridge equipment when using ECDIS and RADAR, since these apply to a large part of the sample.

In the following studies, mental workload assessment can be made for different operational
processes by increasing the number of samples.

References

Carrieri-Kohlman, V., Lindsey, A. M., & West, C. M. (2003). Pathophysiological phenomena
in nursing: Human responses to illness (p. 640). Philadelphia, PA: Saunders.

Clark, P. C., Ashford, S., Burt, R., Aycock, D. M., & Kimble, L. P. (2006). Factor analysis of
the Revised Piper Fatigue Scale in a caregiver sample. Journal of nursing measurement, 14(2),
71-78.

Hart, S. G. (2006, October). NASA-task load index (NASA-TLX); 20 years later. In


Proceedings of the human factors and ergonomics society annual meeting (Vol. 50, No. 9, pp.
904-908). Sage CA: Los Angeles, CA: Sage publications.

IBM Corp. Released 2016. IBM SPSS Statistics for Windows, Version 24.0. Armonk, NY:
IBM Corp.

Kinchington, F. Under whose flag? The race to dominate natural resources: an examination of
the evolving power dynamics of superpowers and flag protectionism on global trade and
maritime security.

Reyner, L., & Baulk, S. (1998). Fatigue in ferry crews: a pilot study (p. 34). Cardiff: Seafarers
International Research Centre.

Smith, A. P., Lane, T., & Bloor, M. (2001). Fatigue offshore: A comparison of offshore oil
support shipping and the offshore oil industry. Seafarers International research Centre/Centre
for Occupational and health Psychology, Cardiff University, Cardiff.

Taç, Umut. (2018). Gemiadamlarının Bilişsel Yeteneklerinin Durumsal Farkındalık Açısından


Modellenmesi, İstanbul Teknik Üniversitesi, Fen Bilimleri Enstitüsü.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

The Developing Automation and Applications in Maritime


Transformation Process of Freights

Leyla Tavacıoğlu1*, Bayram Barış Kızılsaç2*, Özge Eski3*, Neslihan Gökmen İnan4*,
Mehmet Mert Dalyan5*, Ercan Emre Erköse6*

Abstract: Due to a variety of facilities created by the globalized world trade, the products and
services have recently been able to meet the relevant consumers in a geographically wide
scope of the area. The trade implemented in the global scope does naturally involve trade
activities with global features wherein transportation plays a critical role. In meeting any
products, particularly low-cost ones, with the consumers, one of the highest cost items is
transportation. In this context, supply chain management involves not only supplying the
goods but also comprises planning, implementing, monitoring, and controlling the whole
process including transportation from the supply point to the final consumer in a manner that
could yield minimum cost and maximum utility. The ever-increasing severe competition has
made it inevitable for ports to get operated efficiently and ceaselessly providing customers
with added value. Regarding this, providing ceaseless and efficient port services requires a
performance measuring device. There are mainly four logistical processes carried out at
container terminals. Those are ship operations, moving containers, warehousing, and handling
containers. The efficient and effective operation of these four basic processes increases the
competitive power of ports, which means that the mentioned processes are necessary to
optimize. Such optimization requires thorough analyses of the contribution to the system
made by any aspects affecting the basic processes and where needed, the system operations
must be interfered with.
The purpose of this research is to review a logistics-oriented decision support model, as a decision support instrument for port management, that aims to contribute to such basic topics as comprehending, analyzing and evaluating the logistical structure of ports as well as port performance indicators, planning port capacity, increasing port efficiency, developing internal port logistical processes and predicting the future needs of the port. Eight specific trends that will have a combined impact on the port industry's outlook in 2030 are explained in this study. This research also examines how the major maritime trends are affected by various factors, as well as what this means for the port business in the future.

Keywords: automation, maritime transportation, freights, logistics

1. Introduction

Ports constitute the nodal points for many activities in the transportation chain. In addition to their basic functions such as loading/unloading, towage and storage, they are in contact with many organizations and individuals such as shippers,
1 Istanbul Technical University, Maritime Faculty, Basic Sciences, Istanbul, Turkey
* Corresponding author: [email protected]
exporters, importers, logistics companies, state authorities, banks, insurance companies. With
this feature, ports provide important contributions to the country's economy by providing the
coordination of many commercial and legal transactions as well as the transfer of cargo. Ports
have very different and important functions in terms of micro and macro aspects. In the
fulfillment of national and international marketing functions, these functions must be carried
out effectively and economically (Esmer, 2009).

A well-functioning freight transportation infrastructure is critical for the economy and for
maintaining a good standard of living. Intelligent transportation systems aspire to enhance the
utilization of current transportation systems, capacity from existing physical infrastructure,
safety, and security, while lowering the negative environmental consequences of freight
transportation (Ranaiefar, 2012). Innovative solutions may assist operators in the organization
of freight management and handling operations at freight terminals, promoting intermodal
transport by lowering terminal handling times and costs. Driven by mechanization of
manufacturing, automated guided transport systems and vehicles for commercial purposes
were developed in the early 1950s in the United States and about 10 years later in Europe,
with the goal of optimizing material flows and decreasing labor demands. Automation was
first utilized in manufacturing and warehousing (Flämig, 2016), but automated freight
transport systems have yet to be used in public open space because they require specialized
infrastructure and laws. With autonomous driving hitting the market, (Neuweiler &Riedel,
2017) discovered a research gap in finding competitive advantages. In terms of ‘technology,'
there has been a lot of work put into researching new technologies for transportation systems,
and there has been significant development in recent years. However, research on the
microeconomic and macroeconomic advantages and costs of these advances has been
minimal, and further study is needed. The purpose of this research is to review a logistics-oriented decision support model, as a decision support instrument for port management, that aims to contribute to such basic topics as comprehending, analyzing and evaluating the logistical structure of ports as well as port performance indicators, planning port capacity, increasing port efficiency, developing internal port logistical processes and predicting the future needs of the port.

2. Material and Method

In this study, the importance of ports in the supply chain is emphasized, and the relationship between cargo handling systems in ports and logistics is examined.

Two separate and simultaneous techniques were used to perform the review:

• finding relevant material published internationally on the review subjects, including


academic papers, reports, trials and experiences, and any other evidence on the issue by
contacting a pool of experts
• using online web search engines like Google and Google scholar to find relevant materials
The status of technical progress, its implications to date, and its application to various
operational settings were all taken into account for each new technology (defined in relation
to modes of transport and location in the supply chain). A two-stage filtering and ranking
procedure was used to find, select, and prioritize candidate source documents for inclusion.
The first phase examined the evidence's relevance and transferability, followed by a second
evaluation based on the source's perceived importance.

a. At ports and depots, automated loading methods are used. A container port is a point where
the supply chain is broken (Franke, 2008). As an intermodal transshipment point, it is
vulnerable to variations in arrival and departure times, as well as to a lack of information that
leads to inefficiencies in lead times. Automation in a container port can help solve problems
caused by space constraints. According to Tavasszy (2016), automation can increase the
productivity of a container port: if the order of truck arrivals at a terminal is known ahead of
time, yard design can be more effective. As a result, port terminals must combine an efficient
marine terminal on the shore side with an inland intermodal interface center (Franke, 2008). In
this ideal Agile Port System concept, the efficient marine terminal and the intermodal interface
center are connected by a dedicated railway line (Franke, 2008). The main principles of the Agile
Port System are as follows:

• move containers quickly between the terminal and the intermodal interface center by train
• sort containers between trains according to their final destination
• handle as many containers as possible between vessels and trains, avoiding terminal storage
• load and unload vehicles that serve the surrounding area at the intermodal interface facility

b. The Agile Port idea is based on a mix of enhanced semi-automated equipment that allows
for direct container transshipment from vessel to train and vice versa at the quay without
sacrificing performance. In effect, instead of being stored at the port terminal, load units may
be kept close to the client (Franke, 2008). The Port of Hamburg is a good example of how
Noell improved on the original efficient marine terminal concept by developing the ‘Mega
Hub' concept, which allowed 360 boxes to be transshipped between trains in less than 100
minutes. Because yard transfer trucks become redundant, the efficient marine terminal's major
benefit is a decrease in machinery and personnel expenses. The system includes improved
semi-automated ship-to-shore cranes, semi-automated cantilevered and rail-mounted gantry
cranes, and a box mover based on rail-mounted, automated shuttle cars powered by linear
motor technology.

3. Results

3.1. The importance of ports in the supply chain and port-related dynamics

Supply chain management is the integration of essential business activities from the end user
to the initial supplier in order to provide consumers and partners with value-added goods,
services, and information (Stock&Lambert, 2001).

In order to obtain a competitive edge, businesses are reorganizing their relationships with
their suppliers and consumers. Particularly noteworthy is the tight collaboration created with
suppliers; it is clear that they contribute significantly to issues such as enhancing product
quality, lowering purchasing costs, boosting production and distribution flexibility, and
raising customer satisfaction. Since the 1990s, the logistics concept aimed at creating an
integrated structure inside a single company has begun to spread along the distribution route,
which extends towards both supply sources and customers. This method, known as "supply
chain," attempts to adopt an integrated approach not only
inside the framework of a single company, but also across the distribution channel process at
all suppliers, manufacturers, wholesalers, retailers, and even customers (Tuna, 2001).

All parties participating in the fulfillment of client needs, whether directly or indirectly, are
included in the supply chain. Not only manufacturers and suppliers are part of the supply
chain, but so are transportation, warehouses, retailers, and even customers. Any function in a
firm that fulfills consumer needs is included in the supply chain. New product creation,
marketing, operations, distribution, financing, and customer support are some of these
functions (Chopra&Meindhl, 2007).

Supply chain management is a larger notion than logistics in that it manages both the
materials in the process from the raw material source necessary for manufacturing to the
ultimate customers, as well as the relationships between the distribution channel's
intermediaries (Johnson et al., 1998). Beyond being mere transit hubs in the conventional
sense, ports play an essential role in the supply chain, evolving to become logistics centers.
Without a doubt, changes in the dynamics influencing the port business have influenced this
progress. Mangan et al. (2008) explain these dynamics and their consequences under the
following headings: the consequences of maritime transport, developments in the port sector,
rivalry between global port operators and inter-port competition, the economic contribution of
ports, and port-based logistics and supply chain strategies. These topics are covered in depth
below.

3.1.1 Port operators around the world

The ten largest global container terminal operators handle about 37% of all container handling
at the world's ports (UNCTAD, 2014). Global container terminal operators such as
Mediterranean Shipping Company (MSC), APM Terminals, and Mitsui O.S.K. Lines have tight
relationships with shipping firms. According to the Drewry Research Company's Global
Container Operators 2014 Annual Report, as of the end of 2013, PSA International (8.2 percent)
was the container terminal operator with the largest TEU handling share in the world, followed
by Hutchison Port Holdings (7.0 percent), APM Terminals (5.5 percent), DP World (5 percent),
and China Merchants Holdings International (CMHI) (3.6 percent). The top five container
operators handled around 41% of all containers handled globally in 2014 (Table 1).

Table 1. Major Global Container Terminal Operators, Handling Quantities and Market Share
(UNCTAD, 2014)

Operator                                        Handling Quantity (Million TEU)   Market Share (%)
The Port of Singapore Authority (PSA)           50.9                              8.2
Hutchison Port Holdings (HPH)                   44.8                              7.2
APM Terminals (APMT)                            33.7                              5.4
Dubai Ports World (DP World)                    33.4                              5.4
China Ocean Shipping (Group) Company (Cosco)    17.0                              2.7
Terminal Investment Ltd.                        13.5                              2.2
China Shipping Terminal Development             8.6                               1.4
Hanjin                                          7.8                               1.3
Evergreen                                       7.5                               1.2
Eurogate                                        6.5                               1.0

3.1.2. Economic development and ports

Global growth was expected to remain steady in 2018-19, at the same pace as in 2017. Global
growth was projected at 3.7 percent for 2018-19, 0.2 percentage points below the earlier
forecast. The downward revision reflects unexpectedly weak activity in some major advanced
economies in early 2018, the negative effects of trade measures implemented or approved
between April and mid-September, and a weaker outlook for some key emerging market and
developing economies due to country-specific factors, tighter financial conditions, geopolitical
tensions, and higher oil prices (WTO, 2018). The high pace of GDP growth in emerging nations
lifted global GDP growth to around 5% between 2004 and 2008. However, with the financial
crisis of 2008 a sharp decline was experienced. After 2008, the negative impacts of the crisis
began to fade, and the GDP growth rate has been steady at above 3%. This GDP growth has been
3.8 percent since 2008, and by 2023 this value is anticipated to remain unchanged.

Figure 1. Change of World GDP (UNCTAD, 2018)

The global seaborne commerce is performing well, aided by the world economy's recovery in
2017. Global marine trade grew at a rate of 4%, the fastest in five years, gaining traction and
boosting confidence in the shipping sector. Total volumes reached 10.7 billion tons, up 411
million tons from the previous year, with dry bulk commodities accounting for roughly half of
the total (UNCTAD, 2018). As crude oil slows to 2.4 percent, there is a considerable increase
in containerized commerce (6.4 percent) and dry bulk freight (4%). The outlook for seaborne
commerce is bright; UNCTAD (2018) forecasts a 4% rise in volume in 2018, which is similar
to 2017. UNCTAD forecasts a 3.8 percent compound annual growth rate between 2018 and
2023, assuming or continuing favorable global economic trends.
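For illustration, a 3.8% compound annual growth rate over 2018-2023 would imply total volumes of
roughly 10.7 x 1.038^5 ≈ 12.9 billion tons by 2023, using the 10.7 billion tons reported for 2017
as an approximate base; this is a back-of-the-envelope figure, not an UNCTAD projection.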

Figure 2. Change of Volume of Goods Traded (UNCTAD, 2018)

Martin Stopford proposed a relationship between freight rates, shipbuilding, and ship sales, in
which freight rates rise for roughly four years and then fall, with the other two following. This
pattern held for a long time. However, during the 1990s, when global economic growth surged
from 2% to 4%, the scenario changed accordingly. Freight rates began to rise in earnest in the
early 2000s and peaked in 2008. As a result of this deceptive boom, shipbuilding investments
grew and second-hand ship values climbed, making it hard to find free shipyard capacity. With
the onset of the financial/economic crisis in 2008, freight rates dropped to rock bottom, and many
maritime firms were forced to close as a result. Stopford's cyclical pattern has not reasserted
itself, and uncertainty and instability have persisted for the past ten years.

As a result of this circumstance, maritime operators are looking for new ways to operate.
Conducting scientific study on a new trend in marine transportation is necessary in order for
individuals to develop a new idea for forecasting improvements and making long-term plans.

The idea is to develop a system for estimating maritime transport growth from global trade and
economic growth, based on credible data. Such an application would help to better characterize
the supply-demand curve in order to forecast freight rates, as well as future shipbuilding and
sales expectations. After excluding two crisis years, the average growth of world trade is 5.29
percent, while global GDP growth is 2.14 percent; that is, the growth of global trade is almost
2.5 times greater than the growth of global GDP. The average growth rate of seaborne trade is
4.15 percent, whereas the global GDP growth rate is 2.14 percent, so the growth of seaborne
trade is around twice the growth of global GDP. There is therefore a fairly stable ratio between
GDP growth and seaborne trade growth, and the expansion of the global economy may be
utilized to forecast the rise of seaborne trade.
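As a simple illustration of this rule of thumb, the following Python sketch scales a GDP growth
forecast by the trade-to-GDP growth ratio discussed above; the GDP figures used below are
hypothetical placeholders, not forecasts taken from the cited sources.

# Minimal sketch of the rule of thumb described above: seaborne trade growth is
# approximated as a fixed multiple of global GDP growth. The ratio is taken from
# the text; the yearly GDP forecasts are illustrative assumptions only.

TRADE_TO_GDP_RATIO = 2.0  # seaborne trade grows roughly twice as fast as GDP (see text)

def forecast_seaborne_trade_growth(gdp_growth_pct: float) -> float:
    """Return the implied seaborne trade growth (%) for a given GDP growth forecast (%)."""
    return TRADE_TO_GDP_RATIO * gdp_growth_pct

if __name__ == "__main__":
    for year, gdp in [(2021, 3.0), (2022, 3.5), (2023, 3.8)]:  # hypothetical GDP forecasts
        trade = forecast_seaborne_trade_growth(gdp)
        print(f"{year}: GDP growth {gdp:.1f}% -> seaborne trade growth ~{trade:.1f}%")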

3.2. Automated loading systems at ports, depots and technology's role

A mathematical model that can be assessed analytically cannot adequately explain most
complicated real-world systems containing stochastic aspects. As a result, simulation is

sometimes the only form of inquiry available (Law, 1982). Simulation may be used to verify
the accuracy of assumptions and a specific model. An analytic model, on the other hand,
might offer acceptable options to test in a simulation. The concept of a system is central to
every simulation research (Graybeal, 1980). A system is more than just a collection of
physical things and their interactions. A container port terminal is viewed as a system, with its
activities and interactions viewed as a collection of objects in our example. One of the
benefits of simulating a system's performance, such as a container terminal, is that it allows
you to assess alternatives capable of meeting the design criteria before they are implemented.
It also calculates the operating costs of such design configurations, which may be compared to
the prices of alternative changes. Proposed operational enhancements and port developments
can be integrated gradually into the simulation model to assess local terminal performance
while keeping the global viewpoint in mind (Seeley&Griffiths, 1992). It also gives you a way
to ensure that the most productive configuration is being pursued at any given time. Ships
generally arrive in a random manner that can be characterized by some statistical distribution;
the most common assumption is a negative exponential distribution of inter-arrival periods (and
hence a Poisson arrival rate). The ship turn-around time comprises the waiting time after arrival
and the length of time a berth is occupied (the service time). With vessel arrivals, single or
multiple servers, and unlimited queues at an anchorage, ports or, more accurately, ship-to-berth
links can be treated as queuing systems (Radmilovich, 1992). Only operational
improvements are included in the models created for comparison in this article. They
encompass actions that take place within the terminal, namely ship-to-shore operations and
container transfer from the ship to the stacking area. The models do not include events that
take place outside of the terminal gates (e.g., land transport). A quick description of the two
terminals is provided in order to have a good grasp of the systems and sequences.
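As a brief illustration of the queuing view described above, the following minimal Python sketch
simulates a single-berth ship-to-berth link with Poisson arrivals (exponential inter-arrival times)
and exponential service times; the arrival and service rates are illustrative assumptions, not
figures taken from either terminal.

import random

# Single-berth queuing sketch: Poisson ship arrivals and exponential berth
# service times, first-come-first-served. The rates below are assumed values.

def simulate_turnaround(n_ships=10_000, arrival_rate=0.8, service_rate=1.0, seed=42):
    """Return the mean ship turn-around time (waiting + berth occupation), in days."""
    random.seed(seed)
    clock = 0.0            # running arrival time
    berth_free_at = 0.0    # time at which the single berth next becomes free
    total_turnaround = 0.0
    for _ in range(n_ships):
        clock += random.expovariate(arrival_rate)    # next ship arrival
        service = random.expovariate(service_rate)   # berth occupation time
        start = max(clock, berth_free_at)            # wait if the berth is busy
        berth_free_at = start + service
        total_turnaround += berth_free_at - clock    # waiting + service time
    return total_turnaround / n_ships

if __name__ == "__main__":
    print(f"Mean turn-around time: {simulate_turnaround():.2f} days")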

3.3. Self-Driving or remote-controlled units and stacking equipment


Vehicle automation is based on a number of technologies that provide varying degrees of
functionality and capacity. The technologies most often linked with autonomous cars are
described in the following section.

Radar uses radio frequencies to offer continuous distance (and, to a lesser extent, object size)
monitoring by measuring the time it takes radio waves to travel to and back from an object. In a
trucking application, radar sensors using both long-range and short-range radar are mounted on
the front bumper area of the truck. Long-range radar focuses further down the road
(approximately 820 ft, 18°), whereas short-range radar has a shorter reach but a broader field of
vision (approximately 230 ft, 130°).

LIDAR is a radar-like concept that collects information about the surroundings using lasers
rather than radio waves. While LIDAR provides unique benefits over radar, adoption has been
hampered by the equipment's size, weight, cost, and power consumption. According to Google,
its self-driving car has a roof-mounted LIDAR with 64 lasers spinning at roughly 900 rpm to
provide a 360-degree view; this unit is estimated to cost about $75,000. Due to the trailer and
the additional height requirements for a rooftop installation, 360° vision in a trucking scenario
would be difficult to achieve with a comparable arrangement (Figure 3).

Signs, highway striping, and other characteristics of the surrounding transportation
infrastructure and environment are read using video camera systems. Currently available video
camera applications assist truck drivers in maintaining lanes and alert them to potential
collisions with cars and pedestrians. The same functions may be available through the video
camera in an autonomous truck, but they would be automated.

Figure 3. Location of Technologies that Enable Automation (Url-1)

In 1999, the Federal Communications Commission (FCC) designated 75 MHz of spectrum in the
5.9 GHz band for Dedicated Short-Range Communications (DSRC) for use in intelligent
transportation systems. When coupled with accurate vehicle location, this band has been
intensively researched in safety applications to determine whether DSRC might improve
autonomous vehicle safety systems or enable new communication-based safety applications.
Because of the short range of 5.9 GHz DSRC, embedded DSRC transceivers are required every
quarter mile or so to sustain connection. Although the DSRC range is limited, the 5.9 GHz
frequency enables exceptionally fast data transfer rates. LTE (Long-Term Evolution) is a high-
speed wireless communications technology used most often by smartphones, and 5G is the name
given to the next iteration of this terrestrial infrastructure. 5G technology is projected to be
"10-100 times quicker than today's typical 4G LTE connections," enabling accident avoidance
and vehicle platooning via cellular communications. 4G wireless communications, while capable
of functioning over a significantly wider range than 5.9 GHz DSRC, have a slower data
transmission rate.

The Differential Global Positioning System (DGPS) expands on the Global Positioning
System (GPS) by incorporating ground-based correction stations that serve as a third point of
reference between the vehicle and a GPS satellite. This improves precision from a few meters
to a few millimeters. When used in real time, such precision might help a vehicle hold its
driving lane when lane markings are lacking.

Combinations of these technologies are now being utilized to create autonomous systems.
Radar and video camera technologies are used in the Freightliner L3 truck mentioned
previously. OTTO employs the same technology categories but adds three LIDAR units and
precise mapping data, bringing the system to L4. It is likely that connected vehicle technologies
(e.g., 5.9 GHz DSRC), often known as V2X or vehicle-to-everything, will improve and enable
L3-L5 technologies. When two V2X devices come within range of each

other, V2X communicates directly between them using wireless local area network (WLAN)
technology.

Table 2. V2X Categories

V2X Technology                      Functionality
Vehicle-to-Infrastructure (V2I)     Enables vehicles to communicate with and gain awareness of infrastructure such as traffic signals.
Vehicle-to-Vehicle (V2V)            Enables vehicles to communicate with and gain awareness of other vehicles.
Vehicle-to-Pedestrian (V2P)         Enables vehicles to communicate with and gain awareness of pedestrians, bicyclists and others.

These technologies may enable cars to monitor and respond to their environment more
quickly and accurately than a human driver. V2I, for example, may enable a vehicle to
anticipate traffic light changes or to travel at various speeds as it crosses different roads. As
long as the signal is not blocked, V2V may allow a car to quickly detect and respond to
another vehicle ahead of it that is suddenly stopping, or to spot a vehicle around an urban
street corner (Table 2).

Figure 4. The Port Industry in 2030 (Url-2)

Today's port isn't the port of tomorrow. Several major developments are predicted to influence
the marine sector as a result of demographic, technological, and sustainability forces. We've
identified eight specific trends, as well as three that reflect wider trends, that will have a
combined impact on the port industry's outlook in 2030. This research will examine how the
major maritime trends are influenced by the various factors, as well as what this means for the
port business in the future.

Ports are progressively using a variety of technology to execute improvements across the
whole value chain.

More technical solutions are required.

• To boost productivity, technological solutions such as robots and the Internet of Things are
required. As a consequence, supply networks will become more automated, digitalized, and
linked, requiring less physical labor.
• It assists in the transformation of the port ecosystem from a basic logistics and transportation
node to an open and efficient community capable of participating in the global landscape of
integrated international trade.

Increased susceptibility to cyber attacks

• The increased usage of automated, digitalized, and linked supply chains increases the
vulnerability to cyber assaults.
• Ports have traditionally been key infrastructure and must be secured against cyber-attacks
that may shut them down or steal data.

There is a lesser emphasis on physical infrastructure investments.

• On the one hand, upgrades to the port's supply chains will necessitate investments in
digitization and automation. To safeguard this new supply chain, however, investments in
cyber security are required.
• Physical port infrastructure investments are projected to diminish as a result of a shift in
investment priorities toward more technological alternatives.

Case examples:

• Terminal automation is taking place all over the world: from the Hamburg-Le Havre range to
the main Middle East hubs, from Chinese mega-terminals to South African IoT pilots, 5G
networks in Antwerp, smart port platforms in Rotterdam, unique track and trace in Vancouver,
and so on.

4. Discussion and Conclusions

In this study, automation systems at ports are explained. An Agile Port System combines an
efficient marine terminal with an inland intermodal interface center connected by a dedicated
rail link, so that containers can be transshipped directly between vessels and trains with minimal
terminal storage. The importance of ports in the supply chain and port-related dynamics are
discussed. Without a doubt, changes in the dynamics influencing the port business have
influenced this progress. Mangan et al. (2008) explain these processes and their consequences
under the headings of the consequences of maritime transport, developments in the port sector,
rivalry between global port operators and inter-port competition, the economic contribution of
ports, and port-based logistics and supply chain strategies.
The economic situation is evaluated from 2003 to 2023. According to UNCTAD (2018), world
GDP and the percentage change in the volume of goods traded follow a similar pattern over this
period: both decreased in 2009, started to increase after 2009, and are projected to follow a
stable pattern after 2018. Consequently, freight volumes and economic development are related
to each other, and the development of the port industry is therefore of significant importance.

In this study, eight specific trends were identified, as well as three that reflect wider trends,
which will have a combined impact on the port industry's outlook in 2030. The future of the
global ports and shipping industry is still uncertain, but four important aspects are expected to
change: trade routes, the competitive position of ports, ecosystems, and cargo distribution, each
affected by underlying trends.

Automation systems can be examined in more detail in future studies and economic factors
can be investigated to show the relationships between port operations and percentage of
volume of goods traded.

References

Chopra, S., Meindl, P. (2007). Supply Chain Management, Strategy, Planning, and Operation,
3rd edition, Pearson Prentice Hall, 130.

Dülger, M. C. (2006). Denizcilik Gücünün Geleceği. Yüksek Lisans Tezi. Gebze İleri
teknoloji Enstitüsü, Sosyal Bilimler Enstitüsü, Gebze.

Esmer S., Yıldız G., Tuna O., (2008) Konteyner terminallerinde gemi-rıhtım bağlantısının
benzetim yöntemi ile modellenmesi. Yöneylem Araştırması ve Endüstri Mühendisliği
XXVII. Ulusal Kongresi,İzmir.

Esmer, S., (2009). Konteyner Terminallerinde Lojistik Süreçlerin Optimizasyonu ve Bir


Simülasyon Modeli., Doktora Tezi, Dokuz Eylül Üniversitesi. Sosyal Bilimler
Enstitüsü. İzmir.

Flämig, H. (2016). Autonomous vehicles and autonomous driving in freight transport.


In Autonomous driving (pp. 365-385). Springer, Berlin, Heidelberg.

Graybeal, W.J. (1980), Concept of a System: Simulation, Principles and Methods, p.3.

Johnson, James C., Wood, Donald F., Wardlow, Daniel L. et. al. (1998). Contemporary
Logistics. Seventh Edition. Prentice Hall, Inc: New Jersey.

Law, A. (1982). Advantages and disadvantages of simulation. Simulation Modeling and


Analysis, p. 8.

Mangan, J., Lalwani, C., Fynes, B. (2008). Port-centric logistics. The International Journal of
Logistics Management, Vol. 19, No. 1, pp. 29-41.

Neuweiler, L., Riedel, P. V. (2017). Autonomous Driving in the Logistics Industry: A multi-
perspective view on self-driving trucks, changesin competitive advantages and their
implications.

Radmilovich, Z.R. (1992). Ship-berth link as bulk queuing system in ports. Journal of
Waterway, Port Coastal, and Ocean Engineering.

Ranaiefar, F. (2012). Intelligent Freight Transportation Systems. Institute of Transportation


Studies.

Review of Maritime Transport (2008) UNCTAD.

Seeley, D., Griffith, T. (1992), Objects used in Simview. Understanding Systems with
Simview.

Stock, J.R. ve D. M. Lambert (2001) Strategic Logistics Management, 4. Baskı, 2001,


McGraw-Hill Higher Education.

Tuna, O. (2001) Türkiye İçin Lojistik ve Denizcilik Stratejileri: Uluslararası ve Bölgesel


Belirleyiciler. Dokuz Eylül Üniversitesi Sosyal Bilimler Enstitüsü Dergisi,Cilt 3, Sayı:2,
2001

UNCTAD (2005) Free Trade Zone and Port Hinterland Development, Research report of the
Economic and Social Commission for Asia and the Pacific, New York.

Url-1. “Safer, More Efficient Commercial Trucks.” Accessed October 25, 2016.
http://www.freightlinerinspiration.com/technology/

Url-2. “The Port Industry in 2030.” Accessed October 25, 2016.


https://www2.deloitte.com/content/dam/Deloitte/nl/Documents/consumer-
business/deloitte-nl-cb-global-port-trends-2030.pdf

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Identification of Defective Cherries Using Convolutional Neural Network

Halil KAYGISIZ1*, Abdulkadir ÇAKIR2*

Abstract: Worldwide, 2.45 million tons of cherries were produced in 2017, 2.57 million tons
in 2018 and 2.59 million tons in 2019. It is very important to sort defective sweet cherries to
ensure that the export capacity of sweet cherry producing countries remains high, because defects
in fruits are contagious: a single rotten sweet cherry can cause all sweet cherries to rot.
Therefore, a model was proposed in the present study to prevent the spread of decay.
Defective and non-defective sweet cherry images are classified in the proposed model.

In developing countries, defective fruits are sorted manually during or after harvest of fruits.
Checking defective fruits during harvest requires a special effort and is time consuming. This,
in turn, increases the cost of labor. The cost of labor is reduced with the proposed model.

A data set consisting of 1,050 images with a resolution of 224x224 pixels was created in the
study. A Convolutional Neural Network (CNN) was used for feature extraction and a softmax
layer was used for classification. Other methods were also examined using the
transfer learning approach to compare the performance of the proposed system. The highest
success rate in the classification of sweet cherries was achieved with the proposed method.

Keywords: Cherry Defect Detection, Convolutional Neural Network, Transfer Learning, Deep
Learning

1. Introduction

Approximately 3 million tonnes of sweet cherries are produced annually (Worldatlas, 2020).
The data for sweet cherry production reveal the size of the sweet cherry market in the world.
Exporting is very important for countries in creating added value. The removal of defective
products in fruit exports needs to be performed very carefully, because a rotten fruit may cause
all fruits to rot. Defective fruits are generally sorted by manpower. Using
manpower increases both the error rate and labor cost. Sorting defective fruits by a system
increases efficiency, while reducing error rate and labor costs.

Recently, machine learning and deep learning have become widespread in computer vision
applications and successful results have been obtained. Such companies as Unitec, Compac,
Greefa, Buhler and GpGraders have systems that can perform fruit classification and defect
analysis. However, those systems cost millions of dollars. Thus, apart from those systems,
there is a need for new systems with low cost and high success rate.

1 Korkuteli Vocational School, Akdeniz University, Antalya, Turkey.
2 Faculty of Technology, Isparta University of Applied Sciences, Isparta, Turkey.
* Corresponding author: [email protected]
2. Related Work

Arango et al. conducted a study to perform the quality control of apples. They primarily took
infrared and color images of the apple in their study. Then, they attempted to classify the data
set by creating a Convolutional Neural Network model. It was stated that the success rate of
the model was 97% (Arango et al., 2021).

Jana et al. studied the classification of rotten fruits. They created a Convolutional Neural
Network model for a data set consisting of 1,200 apple, banana and orange images with a size
of 64x64 pixels. The success rate of the model with four convolutional layers was reported to
be 97.7% on average for all fruits (Jana et al., 2021).

Bongulwar tried to classify defective fruits in a dataset consisting of apple, banana, grape,
litchi and mango images. The model uses Convolutional Neural Networks to identify fruits
from images. The accuracy obtained is 92.23% (Bongulwar, 2021).

An automatic detection method is proposed by Wu et al. for apple defects based on laser-
induced light backscattering imaging and convolutional neural network (CNN) algorithm.
Laser backscattering spectroscopic images of apples are obtained using a semiconductor laser.
An AlexNet model with an 11-layer structure is established and trained to identify apple
defects, and the recognition performance of the model on apple defects is analyzed.
The proposed CNN model for the detection of apple defects achieves a higher recognition rate
of 92.5%, and the accuracy is better than conventional machine learning algorithms (Wu et
al., 2020).

The method proposed by Nur Alam et al. is based on a deep learning approach using the
SqueezeNet architecture. The apple images are extracted and fed into a deep network for
training and testing. The proposed SqueezeNet architecture utilizes a convolutional neural
network with bypass connections between the fire modules. The method was evaluated on the
authors' own dataset, and the experimental results show that it is efficient and effective, with a
general detection rate of 92.23% (Nur Alam et al., 2021).

3. Material and Methods

3.1. Dataset acquisition and pre-processing

First of all, the dataset was created for the study. The data set comprised 1,050 images; Figure 1
shows some sample images from it. The resolution of the images was 224x224 pixels. The
convolution process is applied to all images in the data set in the Convolutional Neural Network
model, which is a long process with considerable computational overhead. The images in RGB
format are therefore converted to 224x224x3 numpy arrays for faster access during the
convolution process. Then, all images are labelled into two classes, defective and non-defective.
When training with transfer learning, image augmentation is applied, validation is performed in
parallel with training, and the model is then evaluated on the test set (Figure 1).
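As an illustration of the preprocessing step described above, the following minimal Python sketch
resizes the images to 224x224 pixels, converts them to 224x224x3 numpy arrays and labels them as
defective or non-defective; the folder names and the helper function are illustrative assumptions
rather than the authors' actual implementation.

import os
import numpy as np
from PIL import Image

# Assumed folder layout: cherry_dataset/non_defective/*.jpg and
# cherry_dataset/defective/*.jpg. Labels: non-defective = 0, defective = 1.

def load_dataset(root="cherry_dataset", size=(224, 224)):
    images, labels = [], []
    for label, folder in enumerate(["non_defective", "defective"]):
        folder_path = os.path.join(root, folder)
        for name in os.listdir(folder_path):
            img = Image.open(os.path.join(folder_path, name)).convert("RGB").resize(size)
            images.append(np.asarray(img, dtype=np.float32) / 255.0)  # scale pixels to [0, 1]
            labels.append(label)
    return np.stack(images), np.array(labels)

# Example usage: X, y = load_dataset(); X.shape is then (1050, 224, 224, 3)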

Figure 1. Sample Images from the dataset

3.2. Convolutional neural networks

Deep learning is very widely used in machine learning applications since it produces very
successful results across different types of data. The Convolutional Neural Network model used
here for image classification is a deep learning approach (Hussain et al., 2018). The main
structure of the Convolutional Neural Network model is presented in Figure 2.

Figure 2. Convolutional neural network architecture

The collection of pixels that constitutes an image can represent different patterns, such as edges
and shadows. Convolution is the process used to detect these patterns. The image is converted to
a matrix before the convolution process; the size of the matrix depends on the resolution of the
image. A matrix of size 500x300x3 is created for an image with a resolution of 500x300 in RGB
format. Patterns are detected by sliding a filter matrix over the image matrix and, at each
position, multiplying the overlapping values element-wise and summing them. Although the size
of the filter matrix may vary, 3x3 filters are widely used. The convolution process is performed
by shifting the filter from the first pixel of the image to the last. The new matrix obtained at the
end of the convolution process is a feature map to be passed to the next layer (Krizhevsky et al.,
2012; Yang et al., 2021).

A pooling layer is used after the convolution layer in the CNN model. This layer reduces the
size of the feature map; the pool size was set to 2x2. The last layer of the CNN model is a fully
connected layer. Before this layer, the feature map is flattened, i.e., converted into a single
vector, which is then passed to the fully connected layer (Thenmozhi et al., 2019).
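The following toy numpy sketch illustrates the convolution and 2x2 max-pooling operations
described above on a single-channel image; the image and filter values are arbitrary examples,
and a real CNN layer applies many such filters in parallel.

import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image (stride 1, no padding), summing element-wise products."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Down-sample the feature map by taking the maximum of each size x size block."""
    h, w = feature_map.shape
    return feature_map[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size).max(axis=(1, 3))

image = np.random.rand(6, 6)                                   # toy 6x6 grayscale image
edge_filter = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])   # simple vertical-edge kernel
features = np.maximum(conv2d(image, edge_filter), 0)           # convolution followed by ReLU
pooled = max_pool(features)                                    # 4x4 feature map -> 2x2
print(features.shape, pooled.shape)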

3.3. Proposed model for classification of fresh and rotten fruits

In the present study, the CNN model shown in Figure 3 is proposed for the classification of
defective and non-defective sweet cherries. Sixteen filters of size 3x3 are applied in the first
convolution layer of the model. Initializers are passed to the layers through the
kernel_initializer and bias_initializer keyword arguments, and a kernel regularizer and bias
regularizer of 0.05 are used. The initial weights of the neural network are drawn from a
random_uniform distribution between -0.05 and 0.05 and are then updated towards the values
that give better results. In the model, the regularizer is used as a penalty mechanism in the
optimization of the layer; these penalties are added to the loss function of the neural network. A
normalization is applied after each convolution layer before the max-pooling layer; by means of
this process, the activation mean approaches 0 and the activation standard deviation approaches
1. The rectified linear unit (ReLU), a piecewise linear activation function, is used after the
normalization: ReLU passes positive values through unchanged and otherwise outputs 0. The
max-pooling layer reduces the number of parameters by down-sampling. Since this layer
reduces the size of the data, hardware resources are used efficiently and time savings are
achieved, while the features required for classification are retained and redundant data
irrelevant to classification are discarded. Sixteen 3x3 filters were used in the second convolution
layer, as in the first layer; after the convolution layer, ReLU was used for the activation and a
max-pooling layer for data reduction. Sixteen 3x3 filters were also used in the third convolution
layer. After the third ReLU and max-pooling layers comes the fully connected layer. First, the
feature map obtained from the third convolution layer is converted into a one-dimensional array;
this process is called flattening. In this study, the loss function used is categorical cross-entropy
with the Adam optimizer and a learning rate of 0.0001. The architecture of the proposed CNN
model is shown in Figure 3.

Figure 3. Architecture of proposed model
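A minimal Keras sketch consistent with the description above is given below: three blocks of
sixteen 3x3 filters with normalization, ReLU and 2x2 max pooling, followed by flattening and a
two-class softmax output, compiled with categorical cross-entropy and Adam at a learning rate of
0.0001. Two points are assumptions rather than statements from the paper: the normalization is
implemented here as batch normalization (which matches the described behaviour of driving
activations toward zero mean and unit standard deviation), and the regularizer is taken to be of
L2 type with the stated value of 0.05.

from tensorflow import keras
from tensorflow.keras import layers, regularizers, initializers

# Sketch of the described architecture; see the lead-in for the assumptions made.
init = initializers.RandomUniform(minval=-0.05, maxval=0.05)
reg = regularizers.l2(0.05)

def conv_block(x):
    x = layers.Conv2D(16, (3, 3), padding="same", kernel_initializer=init,
                      kernel_regularizer=reg, bias_regularizer=reg)(x)
    x = layers.BatchNormalization()(x)   # activations toward mean 0, std 1
    x = layers.Activation("relu")(x)     # ReLU: positive values pass, otherwise 0
    return layers.MaxPooling2D((2, 2))(x)

inputs = keras.Input(shape=(224, 224, 3))
x = conv_block(inputs)
x = conv_block(x)
x = conv_block(x)
x = layers.Flatten()(x)
outputs = layers.Dense(2, activation="softmax")(x)  # defective / non-defective

model = keras.Model(inputs, outputs)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()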

3.4. Cherry classification using transfer learning

Transfer learning, a research problem in machine learning, focuses on retaining the knowledge
gained while solving one problem and then applying it to a different but related problem. It can
also be defined as adapting and applying a model that gives successful results on one problem to
other problems. With transfer learning, only the classification part of the network is trained
when applying another model to our dataset. For feature extraction, the fully connected layer is
adjusted to our own data set and classification is performed without changing the weights
obtained in the convolution layers of the pre-trained models. Thus, the feature maps can be used
without retraining the convolution layers, which would require serious hardware and take a long
time (Pardede et al., 2021).

The proposed model was compared with the AlexNet, GoogleNet, Vgg19 and ResNet50 transfer
learning models in the study.
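The following sketch illustrates the transfer learning setup described above, using VGG19 (one of
the compared backbones) with a frozen convolutional base and a new fully connected head; the
size of the added dense layer is an illustrative assumption.

from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG19

# Transfer-learning sketch: the pre-trained convolutional base is frozen and
# only the newly added classification head is trained on the cherry dataset.
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the ImageNet convolutional weights fixed

model = keras.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(128, activation="relu"),    # assumed head width
    layers.Dense(2, activation="softmax"),   # defective / non-defective
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])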

3.4.1. Alexnet

Although the application of deep learning is said to have first appeared in the article published
by LeCun in 1998 (Lecun et al., 1998), its worldwide recognition occurred in 2012. The AlexNet
model, designed with a deep learning architecture, won the ImageNet competition held that year.
The study was published as an article with the title "ImageNet Classification with Deep
Convolutional Networks" (Krizhevsky et al., 2012) and had received 16,227 citations as of
October 2017. This architecture allowed the computerized object identification error rate to
decrease from 26.2% to 15.4%. The architecture given in Figure 4 is composed of five
convolution layers and three fully connected layers and was designed to classify 1,000 object
classes. In the AlexNet model the first-layer filters are 11x11 in size and the stride is four.

Figure 4. AlexNet architecture

3.4.2. GoogLeNet

GoogLeNet is a complicated architecture because of the inception modules in its structure.
GoogLeNet, with 22 layers, was the winner of the ImageNet competition in 2014 with an error
rate of 5.7%. Its architecture is among the first CNN architectures that avoided simply stacking
convolution and pooling layers on top of each other in a sequential structure. In addition, this
model has an important place in terms of memory and power usage, because stacking all layers
together and adding many filters brings calculation and memory costs and increases the
possibility of overfitting. Modules connected in parallel are used in GoogLeNet to overcome this
situation (Mohammed, 2018).

3.4.3. Vgg19

The main layers of the VGG-19 architecture comprise 16 convolutional, five pooling and three
fully connected layers, giving a total of 24 main layers. Small filters of 3x3 pixels are used in
the convolutional layers to limit the number of parameters, since VGG-19 has a deep network
structure. The VGG-19 architecture consists of approximately 138 million parameters (Pardede
et al., 2021).

3.4.4. ResNet50

The ResNet model is an architecture designed deeper than earlier architectures; its deepest
variant consists of 152 layers. ResNet was also the winner of the ImageNet competition in 2015
with a 3.6% error rate (people typically have an error rate of 5-10%, depending on their skills
and expertise) (Tran et al., 2019).

4. RESULTS AND DISCUSSIONS

In the study, we first classified the data set we created, which includes the images of defective
and non-defective sweet cherries. Sixty percent of the data set was randomly selected for
training, 10% for validation and 30% for testing. The effects of the parameters used during
training were examined, and the necessary changes were made so that the proposed model could
give the best results. The Python library Keras was used to implement this deep CNN model on
Google Colab, which provides an NVIDIA Tesla K80 GPU, 12.72 GB of RAM, and 68.40 GB of
disk space.

4.1. The model parameters of the proposed CNN model and Effects of hyper-parameters
of the proposed model

This model uses the Adam optimizer with a learning rate of 0.0001, a batch size of 16 and 32
epochs. The accuracy rates of the models were calculated with the test data.
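For illustration, a training run with these hyper-parameters could look as follows; the sketch
assumes the load_dataset helper and the model from the illustrative sketches in Sections 3.1 and
3.3, and the 60/10/30 split matches the proportions reported above.

from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical

# Assumes load_dataset() and model from the earlier sketches (both illustrative).
X, y = load_dataset()
y = to_categorical(y, num_classes=2)                       # one-hot labels for cross-entropy
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, train_size=0.6,
                                                  shuffle=True, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.75,
                                                random_state=0)  # 10% val, 30% test

model.fit(X_train, y_train, validation_data=(X_val, y_val), batch_size=16, epochs=32)
print(model.evaluate(X_test, y_test))                      # test loss and accuracy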

4.1.1. Effect of Batch Size

Convolutional neural networks are sensitive to batch size; small changes in batch size may have
large effects on the success of the model (Houlsby et al., 2019). A small batch size creates a
regularization effect. In the batch process, the data set is divided into parts according to the
value chosen as the batch size, and the model is trained on one part in each iteration. In some
cases, however, the data may be grouped within themselves; this creates a correlation within the
data set, gives an artificially high success rate on a test set selected from the same data, and
leads to overfitting. Before training starts and the data set is divided into parts, the data set
should therefore be shuffled to prevent overfitting (Salman et al., 2019). Random selection of
the data is important when choosing the batch size. The batch size is generally selected from
powers of 2. The accuracy rates of the model with batch sizes of 8, 16, 32 and 64 in the present
study are given in Table 1. The highest success rate was achieved for a batch size of 16.

Table 1. Effect of Batch Size

Batch Size    Accuracy (%)
8             93
16            97
32            96
64            68

4.1.2. Effect of Number of epochs

Not all data are included in the training process simultaneously while the model is being
trained; the data take part in training in a certain number of parts. The first part is used for
training, the success of the model is tested, and backpropagation updates the weights based on
the error. Then the model is trained with the next part of the training set and the weights are
updated again. The optimal weight values are calculated for the model by repeating this process
(Kishore et al., 2021). Each complete pass over the training data is called an "epoch". In deep
learning, success is low in the first epochs, since the optimum weight values for solving the
problem are found step by step; the success of the model then increases with an increasing
number of epochs. Table 2 presents the change in accuracy rate depending on the number of
epochs.

Table 2. Effect of Number of Epochs

Epochs    Accuracy (%)
4         58
8         83
16        87
32        97

4.1.3. Effect of optimizers

In deep learning applications, the minimum of the error function must be found for the learning
process to converge well (Onishi et al., 2019). This is performed using optimization methods,
which minimize the error, i.e. the difference between the output value produced by the network
and the actual value. Various algorithms are available for the optimization of artificial neural
networks. The accuracy rates obtained with the Rmsprop, Adagrad, Adam and Nadam
optimization algorithms tested in the study are provided in Table 3.

Table 3. Effect of Optimizers

Optimizer    Accuracy (%)
Rmsprop      85
Adagrad      55
Adam         97
Nadam        78

4.1.4. Effect of learning rates

In the training of neural networks, the weights are updated by a certain amount at each step; the
size of this update is governed by the learning rate. This parameter, which takes a value between
0 and 1, is an important hyperparameter of the CNN model. In the study, four learning rates were
used and their effect on the accuracy rate was examined. Table 4 shows the learning rates used
and their corresponding success rates.

Table 4. Effect of Learning Rates

Learning Rate    Accuracy (%)
0.1              23
0.01             65
0.001            82
0.0001           97
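As a brief illustration of the role of the learning rate, each weight is adjusted by a step of the
form w_new = w_old - η · ∂E/∂w, where η is the learning rate and ∂E/∂w is the gradient of the
error with respect to the weight (this is the plain gradient-descent step; Adam, used in this study,
applies adaptive per-parameter scaling on top of it). A large η such as 0.1 tends to overshoot the
minimum, while a very small η converges slowly but, as Table 4 shows, more reliably.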

4.2. Comparison of classification accuracies of the proposed and transfer learning models

The best result in the training process of the neural network was achieved with the following
combination of hyperparameters: batch size = 16, number of epochs = 32, optimizer = Adam,
and learning rate = 0.0001. The test procedure was performed on the test data set after training
with the selected parameters, and an accuracy rate of 97.9% was obtained. The same data set
was also tested on the AlexNet, GoogleNet, Vgg19 and ResNet50 models using transfer
learning; the accuracy rates obtained are given in Table 5. Among the pre-trained models,
ResNet50 produced the highest accuracy rate at 94.6%, followed by Vgg19. The model proposed
in the study saves time through the use of fewer filters and is able to operate with lower
hardware requirements. As seen in Figure 3, with the appropriate combination of convolution
and pooling layers and the appropriate hyperparameters determined during training, the
proposed model produced the highest accuracy rate. The normalization and ReLU activation
applied between the convolution and max-pooling layers reduced the overfitting that would
otherwise occur during training, the regularizers acted as a penalty mechanism during
optimization, and the error loss was reduced by the Adam optimization function. Table 5 shows
the accuracy of the proposed model and the pre-trained models, and a comparison with studies
in the literature is shown in Table 6.

Table 5. Accuracy of pre-trained and proposed model

Model             Accuracy (%)
AlexNet           23.6
GoogleNet         65.3
Vgg19             82.4
ResNet50          94.6
Proposed Model    97.9

Table 6. Comparison of proposed CNN model with state-of-the-art methods

Author                Accuracy
Arango et al.         97%
Jana et al.           97.7%
Bongulwar             92.23%
Wu et al.             92.5%
Nur Alam et al.       92.23%
Proposed CNN model    97.9%

5. CONCLUSION

Detecting and sorting defective fruits is of great importance for the agricultural economy.
Performing this process by an automated system will make a significant contribution to the
agricultural economy. In the present study, a data set consisting of a total of 1,050 defective
and non-defective sweet cherry images was classified by creating a CNN model. The effects of
different hyper-parameters, i.e. batch size, number of epochs, optimizer, and learning rate, were
investigated in this work. In addition, the AlexNet, GoogleNet, Vgg19 and
ResNet50 models were also applied to the same data set using the transfer learning. The
model proposed in the study produced the highest success rate with 97.9% compared to the
other models. Thus, the proposed CNN model with high success rate proved that an
automated system can successfully detect defective sweet cherries.

References

Arango J. D., Staar B., Baig A. M., Freitagac M., (2021). Quality control of apples by means
of convolutional neural networks-Comparison of bruise detection by color images and
near-infrared images. Procedia CIRP, Volume 99, 2021, Pages 290-294.
https://doi.org/10.1016/j.procir.2021.03.043

Bongulwar D. M., (2021). Identification of Fruits Using Deep Learning Approach, IOP Conf.
Series: Materials Science and Engineering. doi:10.1088/1757-899X/1049/1/012004

Houlsby N., Giurgiu A., Jastrzebski S., Morrone B., (2019). “Parameter-Efficient Transfer
Learning for NLP”, Machine Learning, arXiv: 1902.00751

Hussain M., Bird J.J., Faria D.R., (2018). “A Study on CNN Transfer Learning for Image
Classification”, Advances in Intelligent Systems and Computing, vol. 840. Springer,
Cham

Jana S., Parekh R., Sarkar B., (2021). Detection of Rotten Fruits and Vegetables using Deep
learning, Computer Vision and machine learning in agriculture, Algorithms for
intelligent systems, https://doi.org/10.1007/978-981-33-6424-0_3

Kishore M., Kulkarni S. B., Babu K. S., (2021). Fruits and Vegetables Classification using
Progressive Resizing and Transfer Learning, Journal of University of Shanghai for
Science and Technology, Volume 23, Issue 1, 489-498

Krizhevsky A., Sutskever I., Hinton G., (2012). "ImageNet classification with deep
convolutional neural networks." In NIPS’2012. 23, 24, 27, 100, 200, 371, 456, 460

Lecun Y., Bottou L., Bengio Y., Haffner P., (1998). "Gradient-based learning applied to
document recognition." Proceedings of the IEEE 86(11): 2278–2324.

Mohammed N.A., (2018). Evaluation of CNN, Alexnet and GoogleNet for fruit recognition
Indonesian. J. Electr. Eng. Comput. Sci. 12(2), 468–475

Nur Alam M. D., Ullah I., Al-Absi A. A. (2021). Deep Learning-Based Apple Defect
Detection with Residual SqueezeNet, Proceedings of International Conference on Smart
Computing and Cyber Security, Lecture Notes in Networks and Systems 149,
https://doi.org/10.1007/978-981-15-7990-5_12

Onishi Y., Yoshida T., Kurita K., (2019). “An automated fruit harvesting robot by using deep
learning”. Robomech J 6, 13 doi:10.1186/s40648-019-0141-2

Pardede J., Sitohang B., Akbar S., Khodra M. L., (2021). Implementation of Transfer
Learning Using VGG16 on Fruit Ripeness Detection, I.J. Intelligent Systems and
Applications, 2021, 2, 52-61

Salman S., Liu X., (2019). “Overfitting Mechanism and Avoidance in Deep Neural
Networks”, Machine Learning, arXiv: 1901.06566

Thenmozhi K., Reddy U.S. (2019). Crop pest classification based on deep convolutional
neural network and transfer learning. Computers and Electronics in Agriculture, 164:
104906. https://doi.org/10.1016/j.compag.2019.104906

Tran T. T., Choi J. W., Le T. T. H., Kim J. W., (2019). A comparative study of deep CNN in
forecasting and classifying the macronutrient deficiencies on development of tomato
plant. Applied Sciences, 9(8), 1601.

Worldatlas (2020). World cherry production, https://www.worldatlas.com/articles/the-world-


leaders-in-cherry-production.html, (Accessed 20 October 2020)

Wu A., Zhu J., Ren T. (2020). Detection of apple defect using laser-induced light
backscattering imaging and convolutional neural network. Computers & Electrical
Engineering, 81: 106454. https://doi.org/10.1016/j.compeleceng.2019.106454

Yang X., Zhang Z., Qiu C., Wang L., (2021). Study On Feature Layer Of Adaptive Selection
Pyramid For Small Object Detection In Complex Environments, Fresenius
Environmental Bulletin, Volume 30– No. 01/2021 pages 474-483.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Estimation of the NiTi alloy corrosion rate dependence on the percentage of oxygen in three
different seawater environments

Nataša Kovač1, Špiro Ivošević 2*, Radmila Gagić3*

Abstract: This research is based on the analysis of empirical data obtained in a real experiment
that monitored the corrosion behaviour of a NiTi shape memory alloy exposed to three different
seawater environments. In their work so far, the authors have conducted extensive research on
the rate of corrosion processes of specific alloys subjected to real experiments under various
external influences, and on modelling these complex processes by linear and nonlinear
probabilistic analysis. Here, however, the corrosion process is viewed from the standpoint of
changes in the chemical composition of the alloy. The empirical database was formed from
measurements obtained by Energy Dispersive X-Ray Analysis, with data systematically collected
after 6, 12, and 18 months of exposure of the samples to the seawater environment. The statistical
analysis is performed with the aim of establishing a correlation between the corrosion rate and
the percentage of oxygen in the sample.
Keywords: corrosion, oxygen, SMA, statistical analysis

1. Introduction

In recent decades, the use of smart materials, such as shape memory materials, has attracted
special attention from researchers. Various families of alloys based on Cu, Al, Ni, Ti, Fe are
used in various branches of industry, such as medicine, transport, robotics, aviation, traffic,
etc. (Jani et. al., 2014; Huang, 1998). To enable the application of these alloys in the maritime
industry, numerous studies are performed in the laboratory and real conditions.

Under the influence of external and complex environmental factors in which metallic
materials are located, physical changes in the metal are manifested through a decrease in the
thickness of the metal and its weight. These influences will also affect the change of metal
surface and chemical composition of metals, predominantly through the process of oxidation
and reduction of the share of less noble metals. Changes in the material's physical shape and
chemical composition are the product of a corrosion process that increases with time, both in
terms of volume and depth of the material.

The loss of metal weight and the depth of corrosion increase with time, and according to previous
research there are different linear and nonlinear models of corrosion. A linear model is developed
in (Guedes Soares and Garbatov, 1998), while non-linear models are presented in (Yamamoto and
Ikegami, 1998; Paik et. al., 1998; Paik, 2003; Paik, 2004; Melchers, 1999a; Melchers, 2003) and
in the work of other researchers.
1 University of Donja Gorica, Faculty of Applied Sciences, Mathematical Department, Podgorica, Montenegro
2 University of Montenegro, Maritime Faculty, Kotor, Montenegro
* Corresponding author: [email protected]
Basically, all models developed so far are based on exposure time, while some more advanced
models take into account other parameters such as salinity, pH, seawater temperature and flow
rate, dissolved oxygen content, sulfur pollution, and fouling (Melchers, 1999a; Melchers,
1999b).

Considering the above, through an experiment conducted in real marine environment conditions,
this paper analyzes the influence of the different marine environments in which the materials are
located (atmosphere, tide, and sea), the elapsed time, and the changes in the chemical
composition of the alloy surface over time. Specifically, for each of the three environmental
conditions, we consider the dependence of the corrosion depth (expressed in nm of alloy wear)
on the change in the percentage of oxygen over exposure times of 6, 12, and 18 months.

The paper is structured through 4 chapters. The second chapter discusses materials and
methods, the third presents research results, while the fourth chapter presents discussion and
concluding remarks.

2. Materials and methods

2.1. Materials
Pure metals were used to produce NiTi alloys by the classical casting process. Ni (99.99
wt.%) and Ti (99.99 wt.%) were delivered by Zlatarna Celje d.o.o., Slovenia. A total of 9 disk
samples with a diameter of 42.3 mm and a thickness of 3.4 mm were used in this research
(Figure 1). Three samples were located in the atmosphere (close to shore), three samples were
located in the tide area, and three were located in the sea three meters below the sea surface.

Figure 1. Sample of NiTi alloy disk

2.2. Methods

All samples placed in the real seawater environment were analyzed over a period of 18 months.
Three samples were checked after 6 months, three samples after 12 months and three samples
after 18 months of exposure in the analyzed areas. At each 6-month interval, the test samples
were monitored and measured in order to calculate the depth of corrosion in nm and to monitor
the chemical composition of the samples.

2.2.1 Collecting data methods

In this article, two methods for collecting data were used. The first method was used for
measuring the depth of corrosion and the second was used for calculating the percentage of
oxygen contents in the alloy.

Material characterization by the FIB method (Focused Ion Beam on a scanning electron
microscope) is used for measuring the depth of corrosion (Ivošević et al., 2021). This method
was used to measure the depth of corrosion (in nm) of the NiTi alloys after 6, 12, and 18 months
of exposure and to image the samples below the surface.

For calculating the percentage content of oxygen in the NiTi alloy, semi-quantitative analysis
was used. More precisely, the chemical composition of the selected NiTi alloy was determined
using a high-resolution Field Emission SEM Sirion 400 NC (FEI, USA). The microscope
contains an Energy Dispersive Spectrometer (EDS) - Oxford INCA 350 - for microchemical
analysis (Ivošević et al., 2020). This EDX semi-quantitative analysis determined the chemical
composition of the materials after corrosion, as well as the oxygen content on the surface of the
examined samples, which was used for further comparison.

2.2.2 Multivariate linear regression

In the process of studying the behavior of a physical phenomenon, it is often necessary to have
a tool that will enable the prediction of future quantities that characterize the observed
phenomenon. Therefore, one of the requirements of the statistical analysis of empirical data is
the formation of a model on the basis of which a general conclusion can be drawn about
measured values. For this purpose, one or more dependent variables are selected whose values
are to be predicted as functions of other explanatory input variables, and whose all values are
known in advance.
One of the basic ways to build a predictive model is linear regression. In the first phase of
linear regression, the independent variables that will participate in the formation of the model
are defined, as well as the dependent variables that will represent the response of the model.
The next phase involves estimating the values of the model parameters to minimize errors in
predicting the dependent variable. The final step should include tests to show whether the
model adequately monitors the behavior of the dependent variable.
There are two basic types of linear regression. If the model has only one independent variable,
we are talking about a simple linear regression. In the case that the model is built from several
independent variables, it is a multivariate linear regression (Heckler, 2005).
If the dependent variable is denoted by y, and x = (x1, x2, …, xp) represents the vector of independent variables of dimension p × 1, then the linear regression model can be written in the following way:

y = β0 + β1x1 + β2x2 + ⋯ + βpxp    (1)

The common name for y is the response variable, while x is known as the predictor (Willard, 2020). The coefficient β0 represents the fixed value of the model, i.e., the expected mean value of y when x = 0, and it is known as the intercept. The coefficient values β1, β2, β3, …, βp are unknown and need to be estimated in such a way that the linear dependence of y on x is best captured. Once the model parameters are determined, the model can be further used to predict the value of y for a given value of the vector x.
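As an illustration of how the parameters of model (1) can be estimated, the short sketch below fits a linear model by ordinary least squares with NumPy; the data, coefficient values and variable names are hypothetical placeholders rather than the measurements analyzed in this paper.

```python
import numpy as np

# Hypothetical example: 20 observations of p = 3 explanatory variables.
rng = np.random.default_rng(0)
X = rng.random((20, 3))
y = 1.5 + X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0.0, 0.1, 20)

# Design matrix with a leading column of ones so that beta_0 (the intercept)
# is estimated together with beta_1 ... beta_p, as in formula (1).
A = np.column_stack([np.ones(len(X)), X])
beta, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
print("estimated coefficients beta_0 ... beta_p:", np.round(beta, 3))

# A prediction for a new observation is the same linear combination.
x_new = np.array([1.0, 0.2, 0.5, 0.8])  # [1, x1, x2, x3]
print("predicted y:", float(x_new @ beta))
```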

3. Results

The main goal of statistical analysis performed in this paper is to establish a relationship
between NiTi alloy corrosion rate and the observed process of oxygen development on the
material surface caused by seawater environment influence and time of exposure. To predict
NiTi alloy corrosion rate degree (measured in nm/month) in three different seawater
environments, regression analysis was used to form appropriate prediction models. Oxygen
levels expressed in percentage and samples exposure time, expressed in months, were used as
explanatory variables in each model.
At the beginning of the experiment (zero month), there were no traces of corrosive processes
on the NiTi samples. In addition, the samples used in this experiment were not treated with
anti-corrosion coatings or any other technique that could further affect the surface structure
and behavior of the alloy during the experiment. Alloy samples were placed in three different
seawater environments (air, tide, sea) and changes were monitored at 6, 12, and 18 months.
Each sample was subjected to FIB and EDX methods, after 6, 12, and 18 months of exposure
to one of the observed marine influences, and then the values of corrosion depth, percentage
of oxygen in the alloy, and months of environmental exposure were detected. In this way, a
comprehensive empirical database was formed, which was the subject of this statistical
analysis.
Table 2. shows the basic descriptive statistics of the formed empirical database. The values
related to the percentage of oxygen representation as well as the corresponding values of the
corrosion rate that was manifested on the NiTi alloy samples are presented.

Table 2. Descriptive statistics of input data

Environment Variable N Mean StD Min Q1 Median Q3 Max


air oxygen 31 11.41 10.15 0.00 2.02 8.28 19.34 30.00
corr. depth 31 43.36 8.84 30.00 37.50 40.00 50.00 58.33
tide oxygen 41 29.90 8.68 9.14 24.77 28.57 35.61 48.41
corr. depth 41 38.78 10.86 22.50 30.00 37.50 45.83 66.67
sea oxygen 49 26.67 13.16 0.00 15.04 29.82 37.63 52.48
corr. depth 49 281.3 158.5 41.7 168.8 308.3 414.6 575.0

Figure 2. graphically shows the corrosion rate dependence of the observed NiTi alloy and
visualizes the empirical database. The data are grouped with respect to the three seawater
environments, so that Figure 2. shows the empirical data for the corrosion rate generated by
air, tides, and sea. Corrosion rate values are shown on a color scale with the lowest values
assigned in blue, while the highest values are shown in red. In all three images, the corrosion
rate is shown as a function of two independent variables that are the basis of the future
regression model (oxygen percentage and exposure time).

Figure 2. Visualization of NiTi alloy corrosion data measured under the influence of air, tide,
and sea

Changes in the values of one variable can affect changes in the values of another variable.
Understanding this causal relationship is especially important in creating models that aim to
predict and estimate the future values of a dependent variable. The correlation can reflect
these changes from the point of view of the trend and the strength of the joint changes of the
two variables. Various types of correlation are represented in the statistical analysis. In this
paper, one of the most common correlation coefficients, the so-called Pearson correlation
coefficient, is used (Lee Rodgers et. al., 1988).
Figure 3. shows the results of the correlation analysis between the corrosion rate, the
percentage of oxygen, the elapsed time since the beginning of the experiment, as well as the
mutual simultaneous influence of oxygen and the time of exposure of the sample to the
environmental influences. Correlation analysis is given here in the form of a correlation
matrix.

Figure 3. Correlation analysis

A Pearson's correlation coefficient can take values between -1 and 1. Negative values indicate a negative correlation, while positive values of the correlation coefficient indicate a positive correlation between the two observed variables. As can be seen in Figure 3., in all three observed environments, only positive correlation effects occur. This means that an increase in the value of one observed variable is accompanied by an increase in the value of the other observed variable. The degree of correlation may be strong, moderate, small, or absent: these degrees are characterized by coefficient values between ±0.50 and ±1, between ±0.30 and ±0.49, and below ±0.29, respectively. If the correlation coefficient is zero, then it is said that there is no correlation. In the correlation matrices shown in Figure 3., the correlation degrees are additionally indicated through the color intensity assigned to each cell of the matrix.
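A correlation matrix of this type can be computed with standard numerical tools; the minimal sketch below uses NumPy's corrcoef on hypothetical arrays standing in for exposure time, oxygen percentage and corrosion depth (the values are placeholders, not the measured data).

```python
import numpy as np

# Placeholder observations standing in for the measured data:
time_months = np.array([6, 6, 12, 12, 18, 18], dtype=float)
oxygen_pct  = np.array([5.0, 8.0, 15.0, 20.0, 25.0, 30.0])
corr_depth  = np.array([35.0, 40.0, 45.0, 55.0, 60.0, 70.0])

# Each row passed to corrcoef is treated as one variable, so stacking the
# three series yields a 3 x 3 Pearson correlation matrix.
corr_matrix = np.corrcoef(np.vstack([time_months, oxygen_pct, corr_depth]))

for label, row in zip(["time", "oxygen", "corrosion depth"], corr_matrix):
    print(label, np.round(row, 3))
```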

One correlation matrix is given for each observed seawater environment, on the basis of
which positive values of Pearson's correlation coefficient can be observed for all considered
variables, but with different degrees of correlation, whose values are shown in the cells of the
matrix. As expected, the corrosion rate increases with increasing exposure time to the
environment, as well as with an increase in the percentage of oxygen, in all three seawater
environments.

The generalized regression model presented in formula (1) was applied to the empirical database of NiTi alloys and the measured values of corrosion rate and percentage of oxygen for 6, 12, and 18 months of exposure to three seawater environments, i.e., air, tide, and sea. At the beginning of the experiment, the oxygen content was zero, and so was the corrosion rate, for all samples and all observed seawater environments. As a result, the intercept was set to zero in all models in the regression analysis, i.e., β0 = 0 in all three regression models. The corrosion rate in a given seawater environment was considered as the dependent variable, so that three regression models were obtained, which we refer to as da, dt, and ds, respectively. As explanatory variables, the time elapsed since the beginning of the experiment, the percentage of oxygen in the alloy, and the simultaneous interaction of oxygen and elapsed time (obtained by multiplying these two values) were observed. These three independent variables are denoted by x1, x2, and x3, respectively, in each produced regression model. The results of the regression analysis are shown in Table 3.

Table 3. Table of parameters for the three formed models of corrosion for air, tide, and sea

Environment  Coefficient  Estimate    Std. error  t-statistics  p-value
air          x1           3.4271      0.228483    14.9993       6.534×10⁻¹⁵
             x2           2.28241     0.451786    5.05198       0.0000240593
             x3           -0.157748   0.0296944   -5.31239      0.0000118121
tide         x1           4.00726     0.439707    9.11348       4.2203×10⁻¹¹
             x2           0.71392     0.119184    5.99005       5.87456×10⁻⁷
             x3           -0.0814676  0.0158206   -5.14946      8.31654×10⁻⁶
sea          x1           12.086      2.5077      4.81956       0.0000160634
             x2           -0.212609   1.68564     -0.126129     0.900179
             x3           0.347143    0.0952672   3.64388       0.000680433

The third column of Table 3. shows the values of the coefficients of the three independent variables in the case of the regression models under the influence of air, tide, and sea. In the last column of Table 3., the corresponding p-values are given. As the confidence level in the regression analysis was set to 95%, i.e., the significance level to 0.05, all p-values less than 0.05 indicate that the observed model parameters have a significant effect on the corrosion rate in the observed environment. From this, it follows that all three independent variables (environmental exposure time, oxygen percentage, and the interaction of oxygen and exposure time) have a significant share in describing corrosive processes in the air and tide environments.

Under the influence of sea, the regression model showed a small difference in terms of corrosion rate behavior. Namely, the coefficient of x2 has a high p-value, which indicates that the single oxygen observation does not have a significant share in the corrosion rate modeling. Despite the fact that initial research has shown that the percentage of oxygen has an impact on the formation of corrosive processes in the sea environment, regression analysis shows that combining the influence of the oxygen percentage with other factors significantly reduces the degree of oxygen influence as a separate factor. Based on this, it can be concluded that it is necessary to further calibrate the model for corrosion rates under the influence of sea, due to the very complex factors that can affect the corrosive processes in that environment.

By including obtained specific values of regression coefficients for the considered empirical
data related to the NiTi alloy in three different seawater environments, three regression linear
models were formed whose patterns are given in formulas (2) - (4).

da = 3.4271·x1 + 2.28241·x2 − 0.157748·x3    (2)
dt = 4.00726·x1 + 0.71392·x2 − 0.0814676·x3    (3)
ds = 12.086·x1 − 0.212609·x2 + 0.347143·x3    (4)

The coefficient of determination, i.e., the R² statistic, was used to estimate the goodness of fit for the three formed regression models represented by expressions (2) - (4). This statistic shows what proportion of the variation in the dependent variable can be explained by the independent variables. The corresponding values of R² for the three formed regression models da, dt, and ds are 0.959489, 0.961901, and 0.962615, respectively. This means that approximately 95.95%, 96.19%, and 96.26% of the variation in the corrosion rate can be explained by the obtained linear models (2) - (4).
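Once the coefficients in (2) - (4) are available, predicting a corrosion rate reduces to evaluating the corresponding linear combination. The sketch below encodes the fitted coefficients from Table 3; the input values in the example call are illustrative only, and the function name is ours.

```python
# Coefficients of the fitted models (2)-(4), taken from Table 3.
# Term order: x1 = exposure time (months), x2 = oxygen (%), x3 = x1 * x2.
MODELS = {
    "air":  (3.4271, 2.28241, -0.157748),
    "tide": (4.00726, 0.71392, -0.0814676),
    "sea":  (12.086, -0.212609, 0.347143),
}

def corrosion_rate(environment, months, oxygen_pct):
    """Evaluate the linear model for the given environment."""
    b1, b2, b3 = MODELS[environment]
    return b1 * months + b2 * oxygen_pct + b3 * months * oxygen_pct

# Illustrative call: 12 months of exposure with 20 % oxygen on the surface.
for env in MODELS:
    print(env, round(corrosion_rate(env, 12, 20.0), 1))
```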

To test variability within a regression model and to address the model significance, Analysis
of Variance (ANOVA) is used (Miller, 1997). Simultaneous and individual effects of the
percentage of oxygen and the elapsed time of exposure to the influence of the seawater
environment on the corrosion rate of NiTi alloys were verified by ANOVA test with a 95%
confidence level. ANOVA results are summarized in Table 4.

Table 4. Regression Analysis of Variance for three observed seawater environment

Environment  Source      DF  AdjSS    AdjMS    F-Value  P-Value
air          Regression  3   58169.5  19389.8  221.06   0.000
             Error       28  2456.0   87.7
             Total       31  60625.5
tide         Regression  3   63852.8  21284.3  319.80   0.000
             Error       38  2529.1   66.6
             Total       41  66381.9
sea          Regression  3   4891817  1630606  394.81   0.000
             Error       46  189984   4130
             Total       49  5081800
The F-test belongs to the group of statistical tests on the basis of which the conclusion is
drawn whether the regression model better describes the data than the model in which the
explanatory variables would not be included (Bingham and Fry, 2010). For each value of F
statistics, the corresponding p-value is calculated. As the significance level in this paper is set
at 0.05, all p-values less than 0.05 provide sufficient evidence to conclude that the formed
regression model corresponds well to the empirical data for the corrosion rate. In all three
seawater environments, the calculated p-values for F statistics are close to zero. Based on this,
it is concluded that the regression models presented in expressions (2) - (4) well represent the
values of NiTi alloy corrosion rate in the observed seawater environments. More specifically,
these models can be used to predict future corrosion rate values generated under the influence
of oxygen and the time of exposure of the sample to the seawater environment. It can be noticed that these values are in accordance with the obtained values for R² and the
conclusions that these values impose.

As another way of checking the quality of the regression model, a residual normality plot can
be used (Anscombe, 1973). These graphs are shown in Figure 4. for all three considered
seawater environments. If the regression model follows well the changes in the dependent
variable, the residual plot should show that the residuals follow the normal distribution, i.e.,
that the graph of the normal probability of the residual lies approximately on a straight line
extending along the main diagonal of the graph. It is evident that in all three regression
models corresponding to the influence of air, tide, and sea, the graphs approximately follow a
straight line, so it is concluded that there are no deviations, unexpected behaviors, or evident
existence of some unidentified variable.
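Such a residual normality check can be reproduced, for instance, with SciPy's probplot; in the sketch below the residual values are placeholders standing in for the residuals of one of the fitted models.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Placeholder residuals standing in for those of one fitted model.
residuals = np.array([-8.2, -3.1, 0.4, 1.7, 2.9, 5.5, 7.8, -1.2, 3.3, -4.6])

# probplot orders the residuals and plots them against theoretical normal
# quantiles; points close to the straight line indicate approximate normality.
stats.probplot(residuals, dist="norm", plot=plt)
plt.title("Normal probability plot of residuals")
plt.show()
```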


Figure 4. Normal probability plot of residuals for corrosion depth under air, tide, and sea
influence
Statistical analysis shows that the formed regression models with three explanatory variables
that indicate the time of exposure of the sample to the environment, the percentage of formed
oxygen on the sample surface, and the joint impact of the previous two variables, adequately
monitor changes in the corrosion rate of NiTi alloy in all three seawater environments.
Therefore, the formed models (2) - (4) can be used as a tool for predicting the value of
corrosion rate as a function of the described independent variables. Predicted values for
corrosion rates caused by air, tides, and sea are shown in Figures 5-7. These figures clearly
show the different behavior of corrosion rates in the three observed seawater environments.
The values for the corrosion rate are shown in these figures by a scale on which the lowest
values are assigned blue dots while the highest values correspond to red dots.

Figure 5. Predicted corrosion rate values based on regression model for air

Figure 6. Predicted corrosion rate values based on regression model for tide

Figure 7. Predicted corrosion rate values based on regression model for sea

4. Discussion and Conclusions

In this paper, the authors have dealt with the development of regression models that will
adequately describe the corrosion rate behavior for NiTi alloy formed under the influence of
air, tide, and sea. For the explanatory variables of the model, the time of exposure to the
influence of the environment, the percentage of formed oxygen on the sample surface, and the
joint simultaneous influence of both variables were selected. The conducted statistical
analysis provides sufficient evidence that the selected model parameters are valid for
describing the corrosion rate as a dependent variable, and that the formed regression models
can be further used to predict the corrosion rate of NiTi alloy in all three observed
environments.

Despite the fact that preliminary research has shown that the influence of oxygen on the
formation of corrosion of NiTi alloy under the influence of the sea is noticeable, a more
detailed statistical analysis revealed an interesting fact. Namely, oxygen as a standalone factor
of the regression model is not statistically significant, but in interaction with other factors, it
has shown that it is an unavoidable factor of regression analysis. Based on this, it can be
concluded that it is necessary to make an effort and form even more complex regression
models, which will include a larger number of variables and their mutual interactions in the
formation of regression models for corrosion rate.

A more detailed insight into the correlation coefficients given in the correlation matrices
(Figure 3.) can also show the effects of multicollinearity. Multicollinearity is not an
aggravating factor if the main goal of regression analysis is only to predict the value of the
dependent variable, as is the goal of statistical analysis in this paper. However, based on this
multicollinearity, it is possible to define future research directions that could also have a
stricter relationship between explanatory variables and corrosion rate as output variables.
Therefore, this research can be extended with principal component analysis or partial least
squares regression.

Acknowledgements

This paper is a result of the initial phase of research into the different influences of the sea and the atmosphere on the production and application of smart Shape Memory Alloy materials in the nautical industry. Project PROCHA-SMA is a part of the EUREKA Project which is
jointly realized by the Faculty of Stomatology in Belgrade, Zlatarna Celje, and the Faculty of
Maritime Studies Kotor, University of Montenegro.

References

Anscombe, F. J. (1973). Graphs in statistical analysis. The american statistician, 27(1), 17-21.
Heckler, C. E. (2005). Applied multivariate statistical analysis.
Bingham, N. H., & Fry, J. M. (2010). Regression: Linear models in statistics. Springer
Science & Business Media.
Huang, W. (1998). Shape memory alloys and their application to actuators for deployable
structures. University of Cambridge Department of Engineering.
Ivošević, Š., Vastag, G., Majerič, P., Kovač, D., & Rudolf, R. (2020). Analysis of the
Corrosion Resistance of Different Metal Materials Exposed to Varied Conditions of the
Environment in the Bay of Kotor.
Ivošević, Š., Kovač, N., Vastag, G., Majerič, P., & Rudolf, R. (2021). A Probabilistic Method
for Estimating the Influence of Corrosion on the CuAlNi Shape Memory Alloy in
Different Marine Environments. Crystals, 11(3), 274.
Jani, J. M., Leary, M., Subic, A., & Gibson, M. A. (2014). A review of shape memory alloy
research, applications and opportunities. Materials & Design (1980-2015), 56, 1078-
1113.
Lee Rodgers, J., & Nicewander, W. A. (1988). Thirteen ways to look at the correlation
coefficient. The American Statistician, 42(1), 59-66.
Melchers, R. E. (1999a). Corrosion uncertainty modelling for steel structures. Journal of
Constructional Steel Research, 52(1), 3-19.
Melchers, R. E. (1999b). Factors Influencing the Immersion Corrosion of Steels in Marine
Water. In Proceedings of the 14th International Corrosion Congress, Cape Town, South
Africa.
Melchers, R. E. (2003). Probabilistic model for marine corrosion of steel for structural
reliability assessment. Journal of Structural Engineering, 129(11), 1484-1493.
Miller Jr, R. G. (1997). Beyond ANOVA: basics of applied statistics. CRC press.
Paik, J. K., Kim, S. K., & Lee, S. K. (1998). Probabilistic corrosion rate estimation model for
longitudinal strength members of bulk carriers. Ocean Engineering, 25(10), 837-860.
Paik, J. K. (2003). A time-dependent corrosion wastage model for bulk carrier structures. Int J
Marit Eng R Just Naval Arch, 145(2), 61-87.

Paik, J. K. (2004). Corrosion analysis of seawater ballast tank structures. International
Journal of Maritime Engineering, 146(A1), 1-12.
Soares, C. G., & Garbatov, Y. (1999). Reliability of maintained, corrosion protected plates
subjected to non-linear corrosion and compressive loads. Marine Structures, 12(6), 425-
445.
Willard, C. A. (2020). Statistical Methods: An Introduction to Basic Statistical Concepts and
Analysis. Routledge.
Yamamoto, N., & Ikegami, K. (1998). A study on the degradation of coating and corrosion of
ship’s hull based on the probabilistic approach.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Estimation Forest Cover Map with Fusion Lidar And Sentinel Data

Nuray Bas1*

Abstract: With the development of remote sensing technology, information about the surface and its changes can be determined more precisely and more accurately. In this context, forest cover density is one of the important forest structural parameters in forest management. This extremely important issue in forest management is determined with the help of various satellite images, and a wide variety of satellite images are used for this purpose depending on the study. The forest cover map can be derived from the spectral properties of optical images and also, with high accuracy, from 3D point cloud data such as LiDAR data. In this study, a point cloud forest cover map was determined using a Sentinel-2 optical image and 3D LiDAR data for a wooded area located in the province of Adapazarı, Turkey. The benefits and shortcomings of both data sets were revealed in terms of utilization possibilities and forest management. Then, the values of the Forest Index (FI), Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Red Edge Ratio Vegetation Index (RERVI) were calculated and the results were compared in the forest area. At the end of the study, user accuracy for forest cover density for LiDAR data and Sentinel data is very high, at 76% and 85%.

Keywords: NDVI, 3D LiDAR, Sentinel-2, EVI, RERVI

1. INTRODUCTION
Forestry monitoring plays an important role in sustainable forest management (Foody 2002), (Tucker et al. 1985). It is possible to determine the amount of biomass as well as the spatial determination of forest information with remote sensing techniques (Sinha et al. 2015), (Lu et al. 2014), (Eisfelder et al. 2012), (Li et al. 2020). Forest information is frequently used in environmental, biomass and biodiversity applications as well as forest inventory studies (Li et al. 2014), (Wulder et al. 2012), (Hansen et al. 2010).

Forest maps are crucial for global environmental change assessment and local forest
management planning (Waser et. al. 2015). It has been discussed as a research topic in many
subjects such as agriculture, environment and sustainable forest management (Ampadu et al.
2020), (Boyd et al. 2002), (Hansen et al. 2013). However, the cost of measurement of large
area, need high accuracy map and the limited area coverage make monitoring problematic
(Shimizu et al. 2020). Therefore, optical data can be combined with synthetic data (Sánchez et
al. 2018).

Another important consideration is the need to produce high-level cover maps when the production of forest maps requires spatial accuracy and detailed content information (Ganz et al. 2020). Obtaining spatial information about a forested area requires on-site measurement work.

1Sivas Cumhuriyet University, Engineering Faculty, Geomatic Engineering Department, Sivas/Turkey


* Corresponding author: [email protected]
This is possible in local area studies. However, it is very difficult to produce a map of forest
areas by measuring with a total station or Global Positioning System (GPS) at the world scale
and especially in large and hard-to-reach areas.

Consequently, their development is time-consuming and therefore limited to relatively small areas (Naesset, 1997), (McRoberts 2007). In this case, we come across the Airborne Light
Detection and Ranging (LiDAR) technique, which has been popular in the science of remote
sensing in recent years. With this technique, both positional information and canopy height
information are determined with high accuracy (Wulder et al. 2012), (Li et al. 2020).

The fact that LiDAR provides height information as well as spatial information to users is an
important advantage especially in tree height determination and biomass calculation studies.
For this reason, it is widely used in operations such as individual tree height, volume and
biomass determination. (Yao et al. 2012), (Stereńczak et al. 2020).

However, in some cases, optical and synthetic data may also need to be considered. In addition to LiDAR, optical and SAR (Synthetic Aperture Radar) data are also widely used in forestry studies. Long observation times, temporal data characteristics, wide spatial coverage and multiple bands can provide abundant information about forest structure (McCue et al. 1971), (Gao et al. 2014), (Fagua et al. 2019). Sensor selection, spatial variations, radiometric and temporal resolutions are also of great importance in land cover classification studies (Lu and Weng 2007).

Various satellite images can be used, depending on factors such as the study area and the required precision of the study. Low-resolution images such as Landsat (Gao and Zhang 2009), (Manandhar et al. 2009), SPOT (Gxumisa and Breytenbach 2017), MODIS (Aredehey et al. 2017) and ASTER (Zhao et al. 2019) can be used, or classification can be made with QuickBird (Lu et al. 2010) and WorldView (Ranaie et al. 2018) images on a more local scale. Multispectral sensors are used for issues such as vegetation character, species determination, and plant health (Lu et al. 2016).

In this study, we used a multisensor fusion approach involving both LiDAR data and Sentinel-2 data. The applied methods provide an almost completely automated land use classification and are therefore suitable for area-wide forest mapping using optical data and high-resolution LiDAR data. We show that the forest cover map can be derived from Sentinel-2 data in combination with LiDAR data. In addition, this result is supported by forest indices such as the Enhanced Vegetation Index (EVI), Normalized Difference Vegetation Index (NDVI), and Red Edge Ratio Vegetation Index (RERVI).

2. DATA AND METHODS

2.1. Study Area

The study area is located in the Karasu region of Sakarya in Turkey and covers an area of 1.311 km².
Sakarya Province is very rich in terms of natural vegetation. The mountains, which are
extensions of the northern Anatolian coastal mountains are covered with dense forests. There
are abundant ash forests in the east of Adapazarı. Here, elm and alder trees are mixed among
the ash trees. There are settlement areas, cultivated areas and forest areas in places.
Vegetation weakens around the lower Sakarya valley in the plains. The climate in Karasu,
where the study area is located, is classified as warm and temperate. The average temperature of the province is 14.1 °C. Even in the driest months, the amount of precipitation is quite high. The study area lies at a minimum elevation of 34.79 m and a maximum elevation of 594.44 m above sea level.

Figure 1: Study Area

2.2 Data Acquisition and Pre-Processing

In this study, three different data sets were used, namely LiDAR, Sentinel-2 and Google Earth
data. The LiDAR data have a density of 5.1 samples/m² and a point spacing of 0.44 m. First, second, first-of-many and last returns were recorded, with a scan angle of 26°. A Riegl VQ580 LiDAR sensor was used. The flight altitude was on average 450-500 m above ground level, and the average helicopter speed was 75-80 km/h. The data are in the UTM zone 36 WGS 84 coordinate system. Since the LiDAR data were obtained in October, a Sentinel-2 image acquired on 10 October 2020 was selected from the European Space Agency by determining the time interval with minimum cloudiness. Ten bands were used for analysis, as shown in Table 1. The Sentinel-2 image was atmospherically corrected using QGIS software; since the image was acquired at Level-1C, orthorectification had already been applied.

Table 1: Sentinel-2 bands used for analysis


Bands                          Central wavelength (µm)   Resolution (m)   Bandwidth (nm)
Band 2 - Blue                   0.490                      10               65
Band 3 - Green                  0.560                      10               35
Band 4 - Red                    0.665                      10               30
Band 5 - Vegetation Red Edge    0.705                      20               15
Band 6 - Vegetation Red Edge    0.740                      20               15
Band 7 - Vegetation Red Edge    0.783                      20               20
Band 8 - NIR                    0.842                      10               115
Band 8A - Narrow NIR            0.865                      20               20
Band 11 - SWIR                  1.610                      20               90
Band 12 - SWIR                  2.190                      20               180

3. LiDAR DATA PROCESS

Forest canopy density is frequently used in applications such as biomass estimation and biodiversity determination in environmental and forestry applications. Canopy density or canopy cover ratios can be derived from airborne LiDAR data. In order to calculate the canopy cover ratio from LiDAR data, the raw point cloud must first be classified into ground returns (bare earth) and non-ground returns.

In this study, the Cloth Simulation Filter (CSF) method was used for the classification of the LiDAR data (Zhang et al. 2016). The CSF method first inverts the original point cloud. As a second step, a rigid cloth is draped over the inverted surface formed in the first step, and at the end of this process an approximation of the ground surface is obtained. In the final step, the original point cloud and the final shape of the simulated cloth are compared with each other to separate ground from non-ground points.

At the end of the LiDAR data classification process, ground and non-ground classes were obtained. Since the elevation values in the flight data are heights above sea level, they were normalized to obtain local height values.
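As an illustration of the normalization step, the sketch below subtracts a ground surface interpolated from the classified ground points from the non-ground point elevations; the arrays and the use of scipy.interpolate.griddata are an assumption for illustration, not the exact toolchain of this study.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical classified points as x, y, z arrays (z = height above sea level).
ground_xyz     = np.array([[0.0, 0.0, 40.0], [10.0, 0.0, 41.0],
                           [0.0, 10.0, 42.0], [10.0, 10.0, 43.0]])
non_ground_xyz = np.array([[5.0, 5.0, 60.5], [2.0, 8.0, 55.0]])

# Interpolate the ground elevation (a simple DTM) at the non-ground locations.
ground_at_points = griddata(ground_xyz[:, :2], ground_xyz[:, 2],
                            non_ground_xyz[:, :2], method="linear")

# Normalized heights: height above the local ground instead of above sea level.
normalized_height = non_ground_xyz[:, 2] - ground_at_points
print(normalized_height)
```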

Figure 2: LiDAR data classification results: a) unclassified data, b) classified data, c) normalized height values

4. RESULTS

4.1. LULC Land Classification Using Sentinel-2 Data

In remote sensing, the concept of land use and land cover (LULC) classification can be defined as the determination of socioeconomic and environmental changes at local, regional and global scales (Li et al. 2014), (Forkuor et al. 2017).

Region of Interest (ROI) polygons were created in the classification process to define the
spectral characteristics of the land classes. ROIs are polygons drawn over homogeneous areas
of the image that overlay pixels belonging to the same land cover class. The image is
segmented around a pixel seed including spectrally homogeneous pixels using region growing
algorithm. ROI polygons were stored as training input. It is worth pointing out that
classification is always based on spectral signatures. In the algorithm used, the spectral
signatures of the fields determined as training input are used and the program assigns a microclass ID to each class. The Spectral Angle Mapping (SAM) method was used in
the classification process.

The Spectral Angle Mapping calculates the spectral angle between the spectral signatures of image pixels and the training spectral signatures. The spectral angle θ is defined as (Kruse et al., 1993), Eq. (1):

θ(x, y) = cos⁻¹ [ Σi=1..n (xi yi) / ( (Σi=1..n xi²)^(1/2) · (Σi=1..n yi²)^(1/2) ) ]    (1)

where
x = spectral signature vector of an image pixel,
y = spectral signature vector of a training area,
n = number of image bands.
Therefore a pixel belongs to the class having the lowest angle, that is, Eq. (2):

x ∈ Ck  ⟺  θ(x, yk) < θ(x, yj) for all k ≠ j    (2)

where
Ck = land cover class k,
yk = spectral signature of class k,
yj = spectral signature of class j.
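A minimal implementation of the angle in Eq. (1) and the decision rule in Eq. (2) could look as follows; the class names and spectral signatures are hypothetical placeholders.

```python
import numpy as np

def spectral_angle(x, y):
    """Spectral angle (radians) between a pixel spectrum x and a reference y, Eq. (1)."""
    cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def sam_classify(pixel, signatures):
    """Assign the pixel to the class with the smallest spectral angle, Eq. (2)."""
    angles = {cls: spectral_angle(pixel, sig) for cls, sig in signatures.items()}
    return min(angles, key=angles.get)

# Hypothetical mean training signatures (one reflectance value per band).
signatures = {
    "conifer":      np.array([0.03, 0.05, 0.04, 0.35, 0.30]),
    "uncultivated": np.array([0.10, 0.12, 0.14, 0.20, 0.25]),
}
pixel = np.array([0.04, 0.06, 0.05, 0.33, 0.29])
print(sam_classify(pixel, signatures))  # -> "conifer"
```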

Figure 3: Sentinel-2 data Spectral Angle Mapping classification results

4.2.Forest Index Based on NDVI, EVI and RERVI Index

NDVI is a simple but widely used index for determining the amount of green vegetation. This
index normalizes green leaf scattering at near Infrared wavelengths with chlorophyll
absorption at red wavelengths. The value range of NDVI is -1 to 1. Values close to zero (-0.1
to 0.1) generally correspond to barren rocky areas, sand or snow areas with the least
vegetation. Low, positive values represent shrubs and grasslands (approximately 0.2 to 0.4),
while high values represent high wooded areas (values approaching 1), such as tropical
rainforests with the densest vegetation. NDVI is defined as Eq (3) , (Rouse Jr. et al.1974).
NDVI = (NIR − RED) / (NIR + RED)    (3)

The Enhanced Vegetation Index (EVI) is an index designed to monitor the amount of biomass in improved vegetation with reduced atmospheric effects (Huete et al. 1997) (Eq. 4). NDVI is sensitive to chlorophyll, while EVI is more sensitive to canopy structural variations, including leaf area index (LAI), canopy type, plant physiognomy, and canopy architecture. The two vegetation indices complement each other in global vegetation studies and improve upon the detection of vegetation changes and the extraction of canopy biophysical parameters. Another difference between NDVI and EVI is that NDVI decreases in the presence of snow, whereas EVI increases (Huete, 2002).

EVI = 2.5 × (NIR − RED) / (NIR + C1·RED − C2·BLUE + L)    (4)

where NIR, RED and BLUE are atmospherically corrected surface reflectances and C1, C2, and L are coefficients that correct for atmospheric conditions. For the standard MODIS EVI product, L = 1, C1 = 6 and C2 = 7.5. The range of values for EVI is -1 to 1, with healthy vegetation generally around 0.20 to 0.80. The red-edge ratio vegetation index (RERVI) (Cao et al., 2013) is calculated from three visible bands (B2, B3, B4), two red-edge bands (B5, B6), and three NIR bands (B7, B8, and B8a). To improve the saturation problem of NDVI, the red-edge ratio is considered. Among the red-edge bands in Sentinel-2, the B5 and B6 bands are selected because they have similar reflectance values at low vegetation densities and because there is a gradual increase in the difference between the two bands in relation to increased biomass. Finally, we propose the Red Edge Ratio NDVI (RERNDVI), which is formed by multiplying the red-edge ratio and NDVI, as in Eq. 5.
RERNDVI = NDVI × sqrt(B6/B5)    (5)

As a result, the NDVI, EVI and RERVI indices were calculated, as shown in the figure, to estimate vegetation rates in the study area. When these indices are compared with the classification results obtained from the high-resolution LiDAR data, a clear similarity is seen. Namely, the conifer area seen in the LiDAR data shows values closest to 1 in the NDVI, EVI and RERVI index maps. Similarly, the Sentinel index values corresponding to the areas classified as uncultivated in the LiDAR data are closer to zero. Thus, classification accuracy can be increased by using both the calculated index values and the corresponding areas of the LiDAR classification to support the classes derived from the Sentinel data.
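The indices in Eqs. (3) - (5) can be computed band-wise with simple array arithmetic; the sketch below assumes the relevant Sentinel-2 bands have already been read into NumPy reflectance arrays (the variable names and values are placeholders).

```python
import numpy as np

def vegetation_indices(blue, red, nir, b5, b6):
    """NDVI, EVI and RERNDVI from Sentinel-2 reflectance arrays, Eqs. (3)-(5)."""
    ndvi = (nir - red) / (nir + red)
    # Coefficients as given in the text: L = 1, C1 = 6, C2 = 7.5.
    evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
    rerndvi = ndvi * np.sqrt(b6 / b5)
    return ndvi, evi, rerndvi

# Tiny illustrative arrays (placeholders for full Sentinel-2 rasters).
blue = np.array([0.04, 0.06])
red  = np.array([0.05, 0.08])
nir  = np.array([0.40, 0.25])
b5   = np.array([0.15, 0.12])
b6   = np.array([0.30, 0.18])
print(vegetation_indices(blue, red, nir, b5, b6))
```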

Figure 4: a) Google Earth image, b) LiDAR forest cover map, c) NDVI map, d) RERVI map, e) EVI map

4.3. Accuracy Assessment

Reference points were determined in Google Earth software as a reference data set for
accuracy. For each land class, 50 reference points were determined randomly distributed
throughout the image. In this way, the reference point file was created (Figure 5).

Figure 5: Study Area Reference Points

In the second step, an error matrix was created for the accuracy assessment. The error matrix allows various accuracy metrics to be calculated from the data. This matrix consists of an n × n array, where n is the number of data classes. The columns in the error matrix represent the reference data, and the rows are the mapped classes created from the remotely sensed data. Fifty samples were created for each of the building, plantation, uncultivated area, conifer and road classes. These sample areas were then compared with the classified data, and as a result the error matrix shown in Table 2 was created.

Table 2: Error matrix showing the relationship between the reference data and the classified data for the samples

                              Reference
Classification   Building  Plantation  Uncultivated  Conifer  Road   Total  Correct  PA (%)  UA (%)
Building            32         2            3           3       4      44      32     0.78    0.73
Plantation           2        35            4           3       2      46      35     0.78    0.76
Uncultivated         3         2           39           2       3      49      39     0.74    0.80
Conifer              2         3            4          38       5      52      38     0.76    0.73
Road                 2         3            3           4      47      59      47     0.77    0.80
Total               41        45           53          50      61     250     191
OE (%)              22        22           26          24      30
CE (%)              37        24           20          27      20
Overall Accuracy: 76%    Kappa Statistics: 0.75

Highlighted diagonal items in the error matrix represent correctly classified areas. These
diagonal values show the accuracy of our classification process. The accuracy is determined
with the help of the following equation (Eq.6).

The overall classification accuracy = number of correct points / total number of points    (6)

In the example above, 32 of the 50 building reference points, 35 of the plantation class, 39 of the uncultivated class, 38 of the conifer class and 47 of the road class are correctly identified on the classified Sentinel-2 map.

At the end of the process, the Omission Error (OE) and Commission Error (CE) percentages were calculated. OE indicates false negatives, that is, pixels of a known class that are classified as something other than that class. CE indicates false positives, i.e., pixels that are assigned to the wrong class. The non-diagonal elements in the rows of the confusion matrix are divided by the total number of pixels assigned to the Sentinel image class corresponding to the row, and the resulting value gives CE. CE defines the probability that a pixel assigned to a particular class actually belongs to one of the other classes.

Errors of omission are related to the producer's accuracy, while errors of commission are related to the user's accuracy. The Total column shows the number of points that were identified as a given class according to the classified map. The Kappa statistic of agreement gives an overall assessment of the accuracy of the classification. Overall accuracy, on the other hand, shows how accurately the classification is made according to the reference data. The diagonal values in the table show the correctly classified values. Overall accuracy was calculated as 76% according to the classification result.
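For reference, the accuracy measures discussed above can be derived directly from an error matrix such as the one in Table 2; the sketch below computes overall accuracy, producer's and user's accuracy, and the kappa statistic with NumPy.

```python
import numpy as np

# Error matrix from Table 2: rows = classified map, columns = reference data.
# Class order: building, plantation, uncultivated, conifer, road.
matrix = np.array([
    [32,  2,  3,  3,  4],
    [ 2, 35,  4,  3,  2],
    [ 3,  2, 39,  2,  3],
    [ 2,  3,  4, 38,  5],
    [ 2,  3,  3,  4, 47],
], dtype=float)

total = matrix.sum()
overall_accuracy = np.trace(matrix) / total                 # Eq. (6)
producers_accuracy = np.diag(matrix) / matrix.sum(axis=0)   # per reference class
users_accuracy = np.diag(matrix) / matrix.sum(axis=1)       # per mapped class

# Cohen's kappa: agreement corrected for chance agreement.
chance = (matrix.sum(axis=0) * matrix.sum(axis=1)).sum() / total ** 2
kappa = (overall_accuracy - chance) / (1.0 - chance)

print(overall_accuracy, producers_accuracy, users_accuracy, kappa)
```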

5. CONCLUSIONS

In this study, forest indices were calculated for forest areas and a land classification map was
created using Sentinel-2 and LiDAR data. Thus, the densely forested area, cultivated areas,
uncultivated areas and the settlement areas in the region were determined by automatic
classification algorithms. In addition, the spectral values obtained from the NDVI, EVI and RERVI indices derived from the Sentinel-2 data were compared with the LiDAR classification results. A vegetation index (VI) is a mathematical combination of bands with spectral features of green plants, and it acts as a factor that supports the classification result. In conclusion, the Red Edge, NDVI and EVI indices were calculated for the conifer, plantation, uncultivated area and road classes. The relationship between these vegetation indices and the classification results has been revealed, and when this relationship was examined, it was concluded that it was positive. Vegetation indices were found to be high in areas with high canopy biomass. In the study, it was concluded that the choice of remote sensing data should depend on the intended results. Using Sentinel-2 data in the classification process is an option when very high accuracy is not required. Another option is to obtain a more accurate terrain map with LiDAR data for areas that require high sensitivity or for more localized areas. In addition, since LiDAR data allows observation of areas that are difficult to reach from the air, classification over wide areas is possible as well as with high accuracy. Another result is that a terrain map can be obtained through the fusion of Sentinel and LiDAR data; thus, a more accurate classification map will be obtained. Interpreting the results obtained from the combination of Sentinel data and LiDAR data together with the spectral index values will increase the accuracy of the resulting classification map. In general, additional higher-resolution data are required in studies that map agricultural and forest areas with low-resolution remote sensing data; in this case, LiDAR data becomes an important resource. This study has shown that Sentinel-2 and LiDAR data can be used to generate accurate forest cover maps. As a result of the process, 76% overall accuracy was obtained with the Sentinel data and 85% with the LiDAR data.
ACKNOWLEDGEMENTS
This work is supported by the Scientific Research Project Fund of Sivas Cumhuriyet University under the project number “M-797”.
REFERENCES

Aredehey, G., Mezgebu, A., and Girma, A. (2017). Land-use land-cover classification
analysis of Giba catchment using hyper temporal MODIS NDVI satellite images.
International Journal of Remote Sensing, 39(3), 810–821.
doi:10.1080/01431161.2017.1392639

Boyd, D., Foody, G. And Ripple, W. (2002). Evaluation of approaches for forest cover
estimation in the Pacific Northwest, USA, using remote sensing. Applied Geography,
22(4), 375–392. doi:10.1016/s0143-6228(02)00048-6

Cao, Q., Miao, Y. , Wang, H., Huang, S., Cheng, S., Khosla, R., Jiang, R. (2013). Non-
destructive estimation of rice plant nitrogen status with Crop Circle multispectral active
canopy sensor. Field Crops Research 154, 133–144. doi:10.1016

Eisfelder, C., Kuenzer, C., and Dech, S. (2012). Derivation of biomass information for semi-
arid areas using remote-sensing data. International Journal of Remote Sensing, 33(9),
2937–2984. doi:10.1080/01431161.2011.620034

Fagua, J.C., Jantz, P., Rodriguez-Buritica, S.,Duncanson, L., Goetz, S.J. (2019). Integrating
LiDAR, Multispectral and SAR Data to Estimate and Map Canopy Height in Tropical
Forests. Remote Sens. 11, 2697.

Foody, G. M. (2002). Status of land cover classification accuracy assessment. Remote
Sensing of Environment, 80(1), 185–201. doi:10.1016/s0034-4257(01)00295-4

Forkuor, G., Dimobe, K., Serme, I., and Tondoh, J. E. (2017). Landsat-8 vs. Sentinel-2:
examining the added value of sentinel-2’s red-edge bands to land-use and land-cover
mapping in Burkina Faso. GIScience & Remote Sensing, 55(3), 331–354.
doi:10.1080/15481603.2017.1370169

Gao, Y., and Zhang, W. (2009). LULC Classification and Topographic Correction of Landsat-
7 ETM+ Imagery in the Yangjia River Watershed: the Influence of DEM Resolution.
Sensors, 9(3), 1980–1995. doi:10.3390/s90301980

Gao, M.L., Zhao, W.J., Gong, Z.N., Gong, H.L., Chen, Z., Tang, X.M. (2014). Topographic
correction of ZY-3 satellite images and its effects on estimation of shrub leaf biomass in
mountainous areas. Remote Sens. 6, 2745–2764.

Ganz, S., Adler, P., and Kändler, G. (2020). Forest Cover Mapping Based on a Combination
of Aerial Images and Sentinel-2 Satellite Data Compared to National Forest Inventory
Data. Forests, 11(12), 1322. doi:10.3390/f11121322

Gxumisa, A., and Breytenbach, A. (2017). Evaluating pixel vs. segmentation based classifiers
with height differentiation on SPOT 6 imagery for urban land cover mapping. South
African Journal of Geomatics, 6(3), 436. doi:10.4314/sajg.v6i3.12

Gyamfi-Ampadu, E., Gebreslasie, M., and Mendoza-Ponce, A. (2020). Mapping natural forest
cover using satellite imagery of Nkandla forest reserve, KwaZulu-Natal, South Africa.
Remote Sensing Applications: Society and Environment, 18, 100302.
doi:10.1016/j.rsase.2020.10030 2

Hansen, M. C., Stehman, S. V., and Potapov, P. V. (2010). Quantification of global gross
forest cover loss. Proceedings of the National Academy of Sciences, 107(19), 8650–
8655. doi:10.1073/pnas.0912668107

Hansen, M. C., Potapov, P. V., Moore, R., Hancher, M., Turubanova, S. A., Tyukavina, A.,
Townshend, J. R. G. (2013). High-Resolution Global Maps of 21st-Century Forest
Cover Change. Science, 342(6160), 850–853. doi:10.1126/science.1244693

Huete, A., Didan, K., Miura, T., Rodriguez, E. P., Gao, X., Ferreira, L. G. (2002). Overview of the
radiometric and biophysical performance of the MODIS vegetation indices. Remote
Sensing of Environment, 83, 195-213. doi:10.1016/S0034-4257(02)00096-2

Kruse, F. A., Lefkoff, A. B., Boardman, J. W., Heidebrecht, K. B., Shapiro, A. T., Barloon, P.
J., and Goetz, A. F. H. (1993). The spectral image processing system (SI PS)—
Interactive visualization and analysis of imaging spectrometer data. Remote Sensing of
Environment, 44(2-3), 145–163. doi:10.1016/0034-4257(93)90013-n.

Li, Y., Li, M., Li, C., and Liu, Z. (2020). Forest aboveground biomass estimation using
Landsat 8 and Sentinel-1A data with machine learning algorithms. Scientific Reports,
10(1). doi:10.1038/s41598-020-67024-3

Li M, Zang S, Zhang B, Li S, Wu C. (2014). A review of remote sensing image classification
techniques: The role of spatio-contextual information. European Journal of Remote
Sensing, 47(1), 389-411. doi:10.5721/EuJRS20144723

Li, W., Niu, Z., Shang, R., Qin, Y., Wang, L., and Chen, H. (2020). High-resolution mapping
of forest canopy height using machine learning by coupling ICESat-2 LiDAR with
Sentinel-1, Sentinel-2 and Landsat-8 data. International Journal of Applied Earth
Observation and Geoinformation, 92, 102163. doi:10.1016/j.jag.2020.102163

Li, M., Im, J., Quackenbush, L. J., & Liu, T. (2014). Forest Biomass and Carbon Stock
Quantification Using Airborne LiDAR Data: A Case Study Over Huntington Wildlife
Forest in the Adirondack Park. IEEE Journal of Selected Topics in Applied Earth
Observations and Remote Sensing, 7(7), 3143–3156. doi:10.1109/jstars.2014.2304642

Lu, D., Chen, Q., Wang, G., Liu, L., Li, G., and Moran, E. (2014). A survey of remote
sensing-based aboveground biomass estimation methods in forest ecosystems.
International Journal of Digital Earth, 9(1), 63–105. doi:10.1080/17538947.2014.990

Lu D, Weng Q . (2007). A survey of image classification methods and techniques for
improving classification performance. International Journal of Remote Sensing.
28(5):823-870. Doi: 10.1080/01431160600746456

Lu D, Chen Q, Wang G, Liu L, Li G. and Moran E. (2016). A survey of remote sensing-
based aboveground biomass estimation methods in forest ecosystems. International
Journal of Digital Earth, 9(1), 63-105. doi:10.1080/17538947.2014.990526

Lu, D., Hetrick, S., & Moran, E. (2010). Land Cover Classification in a Complex Urban-Rural
Landscape with QuickBird Imagery. Photogrammetric Engineering & Remote Sensing,
76(10), 1159–1168. doi:10.14358/pers.76.10.1159

Manandhar, R., Odeh, I., and Ancev, T. (2009). Improving the Accuracy of Land Use and
Land Cover Classification of Landsat Data Using Post-Classification Enhancement.
Remote Sensing, 1(3), 330–344. doi:10.3390/rs1030330

McCue, G. A., Williams, J. G., and Morford, J. M. (1971). Optical characteristics of artificial
satellites. Planetary and Space Science, 19(8), 851–868. doi:10.1016/0032-
0633(71)90137-1

McRoberts, R.E., and Tomppo, E.O. (2007). Remote sensing support for national forest in-
ventories. Remote Sens. Environ. 110, 412–419.

Naesset, E., (1997). Determination of mean tree height of forest stands using airborne
laserscanner data. ISPRS J. Photogramm. Remote. Sens. 52, 49–56.

Ranaie, M., Soffianian, A., Pourmanafi, S., Mirghaffari, N., and Tarkesh, M. (2018).
Evaluating the statistical performance of less applied algorithms in classification of
worldview-3 imagery data in an urbanized landscape. Advances in Space Research,
61(6), 1558–1572. doi:10.1016/j.asr.2018.01.004

Rouse, J. W., Haas, R. H., Schell, J. A. And Deering, D. W. (1974) . Monitoring vegetation
systems in the great plains with ERTS. in: Proceedings of the Third Earth Resources
Technology Satellite-1 Symposium, NASA SP-351 (pp. 309-317).

Shimizu, K., Ota, T., Mizoue, N. and Saito, H. (2020). Comparison of Multi-Temporal
PlanetScope Data with Landsat 8 and Sentinel-2 Data for Estimating Airborne LiDAR
Derived Canopy Height in Temperate Forests. Remote Sensing, 12(11), 1876.
doi:10.3390/rs12111876

Stereńczak, K., Kraszewski, B., Mielcarek, M., Piasecka, Ż., Lisiewicz, M. and Heurich, M.
(2020). Mapping individual trees with airborne laser scanning data in an European
lowland forest using a self-calibration algorithm. International Journal of Applied Earth
Observation and Geoinformation, 93, 102191. doi:10.1016/j.jag.2020.102191

Sánchez Sánchez, Y., Martínez-Graña, A., Santos Francés, F.and Mateos Picado, M. (2018).
Mapping Wildfire Ignition Probability Using Sentinel 2 and LiDAR (Jerte Valley,
Cáceres, Spain). Sensors, 18(3), 826. doi:10.3390/s18030826

Sinha, S., Jeganathan, C., Sharma, L. K. and Nathawat, M. S. (2015). A review of radar
remote sensing for biomass estimation. International Journal of Environmental Science
and Technology, 12(5), 1779–1792. doi:10.1007/s13762-015-0750-0

Tucker, C. J., Townshend, J. R. And Goff, T. E. (1985). African Land -Cover Classification
Using Satellite Data. Science, 227(4685), 369–375. doi:10.1126/science.227.4685.369

Waser, L., Fischer, C., Wang, Z. and Ginzler, C. (2015). Wall-to-Wall Forest Mapping Based
on Digital Surface Models from Image-Based Point Clouds and a NFI Forest Definition.
Forests, 6(12), 4510–4528. doi:10.3390/f6124386

Wulder, M. A., White, J. C., Nelson, R. F., Næsset, E., Ørka, H. O., Coops, N. C.,Gobakken,
T. (2012). LiDAR sampling for large-area forest characterization: A review. Remote
Sensing of Environment, 121, 196–209. doi:10.1016/j.rse.2012.02.001 143–3156.

Yao, W., Krull, J., Krzystek, P., Heurich, M. (2014). Sensitivity analysis of 3D individual tree
detection from LiDAR point clouds of temperate forests. Forests, 5, 1122–1142.

Zhang W, Qi J, Wan P, Wang H, Xie D, Wang X, Yan G. (2016). An Easy-to-Use Airborne
LiDAR Data Filtering Method Based on Cloth Simulation. Remote Sensing, 8(6), 501.

Zhao, B., Yang, F., Zhang, R., Shen, J., Pilz, J., and Zhang, D. (2019). Application of
unsupervised learning of finite mixture models in ASTER VNIR data-driven land use
classification. Journal of Spatial Science, 1–24. doi:10.1080/14498596.2019.1570478

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Observations on public space in the city: the Town Hall Square of Vigonza (Italy)

Enrico Pietrogrande1*, Alessandro Dalla Caneva1

Abstract: This contribution examines the building complex planned and constructed in
Vigonza, a populous town situated to the north-east of Padua, between 1937 and 1939. The
almost exclusive use of exposed terracotta brickwork and the abstract sinusoidal design of
the main façade make the architecture scenario characteristic and crystalised, a fundamental
element in the identity of Vigonza although its inhabitants are not aware, today, of the issues
that produced it.

The authors of this article are deeply convinced that the public quality of life in the city is
connected with the formal choices made by the project plan. They think that it is possible to
trace a close relationship of identification between the community and the architectural
forms that create a sense of belonging in those places. One of the identifying forms of the
city is certainly provided by the piazza, the place par excellence throughout history in which
the community recognises itself.

From the city of classical times to the medieval city followed by the Renaissance city and
continuing up to the present day, the piazza persists as a space of identity with continuity in
the layout of the urban structure. Its maximum expression is seen in the Renaissance where
the project of the city was planned as a place of the representation of civil life and its
institutions were recognisable in the form of the places.

Keywords: Vigonza, public space, reuse of built heritage, functional conversion

1. Introduction

Vigonza is a town situated in the province of Padua in the region of Veneto. In Vigonza a
very large urban transformation was initiated at the end of the 1930s, designed by the thirty-
year-old architect Quirino De Giorgio (1907-1997). The transformation was centred on the
town hall which stood in an isolated position, outside the populated area historically
secluded around the parish church of Saint Margaret. The intention was to define a new lay
centre, a town hall square that would be bordered on all sides by a series of new buildings in
only a few months: the House of the Fascist Party (Casa del fascio), the theatre, rural houses,
and residential blocks for the employees.

The town hall square or piazza in Vigonza, with its evident metaphysical character,
nowadays constitutes the back-drop of most local shows and events. The naturally non-
monumental approach De Giorgio used in all his work [Monti, 2006], the full trust of the
1 University of Padua, Department of Civil, Environmental and Architectural Engineering, Padua, Italy
* Corresponding author: [email protected]
client, the almost exclusive use of terracotta brickwork, the sinusoidal design of the façade
facing the town hall make this a very particular place. It survived intact the Second World
War and recently underwent a restoration that contributed to valorise its urban presence.

Figure 1. View of an unhealthy house that was burned down and replaced by new rural houses promoted by the Fascist Party at the end of the Thirties.
Figure 2. Quirino De Giorgio, general planimetry of the transformation implemented in
Vigonza between 1937 and 1939 where the town hall had been built a long way from the
centre of the town. At the time the centre was secluded around the parish church. Please note:
North is the opposite direction to that stated on the layout.

The construction of ordered housing in five buildings in sequence for the rural population
was one of the policies of the fascist regime to improve the living conditions of this poor
people (fig. 1). A comment in the “Padua” magazine reveals that thirteen accommodation
units were built, “inhabited by one hundred and four people. (…). There was a seventeen
thousand square metre piazza in front of the houses for which sixteen thousand cubic metres
of soil was transported in two thousand working days. The piazza is adorned by green lawns,
with a flagpole for raising a flag and an artistic well” [Rigamo 1938].

The wave of rural houses concluded to the west in the architecture of the theatre. This
building presents a convex front on the piazza that together with the concave profile of the
adjacent covered market, gave rise to a second sinusoid that was in a very close relationship
with the first. The high entrance portico opens into the central part of the theatre façade,
corniced by the compact stonework and divided by five columns, these also made of
exposed terracotta bricks.

The joints between the courses of bricks in the columns of the façade of the theatre are
cancelled by the peculiar shape of the pieces whose bases are inclined so as to contain the
binder in the internal part: this results in the smooth surfaces of the cylinders and their shiny bright response to the light contrasting with the shade of the portico. If, on the one hand,
there is no evidence of representations held in the theatre by 1943, that is, before the fall of
the fascist regime, on the other hand, it is certain that the Padua Federation of the National
Fascist Party quickly but vainly attempted to rid itself of the building, considered to be too
large for Vigonza and too onerous to maintain.

Figure 3. The House of the Fascist Party (casa del fascio) in the Town Hall Square of
Vigonza. The picture was taken in 1940.
Figure 4. The same building in the post-war period, after the partial demolition of the façade.
Figure 5. The current condition after the repair and restoration in the image of House of the
Fascist Party completed in 1995 with De Giorgio present as consultant.

The adjacent covered market is composed of an oblong portico, open towards the piazza and
to the back through a series of completely curved arches obtained in the composition of the
brickwork. The concave arch in plan of the covered market concludes, on the part opposite
the theatre, with a celebrative element, that is, a stylised pillar on which the motto “believe,
obey, fight” is inscribed. This is also made in brickwork and was meant to support a
sculpture that was never made.

The architecture of the House of the Fascist Party was obtained by covering the façades
facing the public space of a pre-existing building originally used as a school.

The general planimetry of the transformation presented by De Giorgio in the propaganda


publication “Three years of marches by Paduan fascism” (fig. 2) also includes the prismatic
buildings planned for the employees. The planning methods of De Giorgio in this period can
be seen here: volumetric layout in blocks, exposed brick masonry, hidden guttering, slots all
of the same size and shape and open with regularity [De Giorgio 1940].

2. The latest restyling of the building that once hosted the House of the Fascist Party in
the Town Hall Square (1995)

After the fall of fascism in Vigonza, as everywhere else in the Kingdom of Italy [Mangione
2003], evidence of the regime was removed [Lenci and Segato, 1996] including the covering
of the old school buildings that De Giorgio had studied in order to change the building into
the office of the local Fascist Party (fig. 3). Consequently, after the fall of the regime, the
House of the Fascist Party in Vigonza, having only operated for a few years, shared the fate of
many buildings that hosted the structures of fascism and marked its presence in the territory.

No substantial variation affected the side along the road, apart from the fitting of the slots in
the portico with glass, while the stylised Fascist tower on the side facing the piazza was
knocked down and the curved concave wall was demolished. The mystic value precisely
assigned to the façade on the piazza by De Giorgio (fig. 4) explains how the determination to
erase evidence of the National Fascist Party was concentrated here, bringing the school façade
back to light.

The recent history of this building makes it a distinctive case in the national panorama. In
1995 the characterisation in exposed terracotta brickwork planned in 1938 by De Giorgio was
restored (fig. 5), not giving any consideration to the opinions of the antifascist associations.

The work involved rebuilding the concave wing-wall from top to bottom without any other
opening except the small door in the middle, restoring meaning to the construction, which is
now no longer merely an answer to a function but once again represents an ideal value. The
symbolic Fascist tower was also rebuilt. Consultant to the work was Quirino De Giorgio
himself, by now almost ninety years old, who had been thirty years old when he planned the
new lay centre of Vigonza developed around the Town Hall.

3. The theatre in the piazza

The architecture of the theatre closes the piazza to the west, and the planner resolved it almost
entirely in exposed brickwork, thereby contributing to the alienating character of the square,
generated by the same material apart from modest exceptions (fig. 6).

The characterisation by using exposed terracotta brickwork is particularly intense in the space
of the portico that opens onto the piazza beyond the columns (fig. 7): the walls on the two
sides, the ceiling and the floor are finished in brick. The only plasterwork used is on the
bottom wall, on which the four doors of the entrance are arranged and above which are the
round windows of the room that leads to the gallery. The covered market branches out from
the theatre in the form of a concave portico facing the piazza, entirely built in exposed
terracotta brickwork too.

Completed in 1939, the theatre was never used before the fall of fascism in 1943. In fact, from
the very start the Padua Federation of the National Fascist Party tried to disassociate itself
from the building, considering it to be excessively large for Vigonza and also too expensive to
maintain.

The theatre has undergone various modifications since the post-war period that respect
neither the plan nor the building site work of De Giorgio. In particular, the parish transformed
it into an orphanage in the 1960s. After this, the municipal administration promoted its return
to its original function, while the entrance hall was converted into a library. On the outside,
later additions were built, cladding was put on part of the arches of the lateral porticoes and
on all of those of the covered market, new window openings were made, and the round slots
of the entrance portico were increased in number, like the doors beneath. A recent building
operation corrected the main alterations suffered by the building [Zanella, 1996-1997].

Figures 6, 7. Quirino De Giorgio, theatre in the new piazza in Vigonza (1938-1939),
photographed by the architect De Giorgio himself in 1940. General view and detail of the
colonnaded portico.
4. Restoration of the rural houses

The rural houses (fig. 8) were restored from the autumn of 2012. The method used was very
conservative and replaced a previous transformative approach, which was also due to the
work of the Superintendency for Architectural and Landscape Heritage.

The transformation involved the five buildings arranged according to the sinusoidal line. The
land use designation given to the rooms on the first floor was as habitation while the premises
on the ground floor were expected to see the installation of craft workshops. Therefore, a
series of samples were taken from the building to see how much of the original finishing was
still there even if concealed from view, to document the construction process even after
seventy years of the environments being used. Furthermore, traces of the original openings in
the most altered façades were found, those facing the countryside. In fact, no drawings of the
original situation exist, not a single copy of the architectural plan has been found in the
archive of the architect nor in those of public administrations, and furthermore, the rich
archive of the construction company Grassetto which carried out the work has always been
inaccessible.

The rural houses of Vigonza constitute an especially fortunate case in that they belong to the
State Property Office, which has maintained the unity of the building complex. In the
case of a twin transformation carried out by De Giorgio in the same period, the plan and the
construction of rural houses in Candiana [Longhin, 2009], the breaking up of the property,
which passed to the individual occupants, generated the almost complete loss of the original
structure and layout of the architecture and of the image of the village.

Therefore, the original floor made of terracotta tiles was identified a few centimetres under
the existing floor in nearly all of the ground floor rooms (fig. 9), confirming that the
heightened tendency of De Giorgio in those years was to use one material as much as
possible in developing an architectural project.

Figure 8. Quirino De Giorgio, the rural houses that close off the southern side of the new
piazza in Vigonza (1937-1938). General view photographed by De Giorgio in 1940.

Figures 9, 10. The rural houses that close off the southern side of the new piazza in Vigonza
(1937-1938). Ground floor, the original floor of terracotta square tiles, laid without joints
simulating continuous flooring (preliminary phase of the restoration, 2012-2013).

Whereas at other times the planner preferred travertine stone, here he adopted terracotta bricks
not only for the façades on the piazza but also for the floors on the ground, made using square-
shaped tiles. The laying of tiles without joints served to simulate a continuous surface
throughout the whole room (fig. 10). The restoration of the tiles unfortunately turned out to be
impossible since the cement-based mortar used for bedding was so firmly attached to the
pieces that they could not be re-used once the new insulation screed had been supplied.

The intermediate floors are made of wood with beams of a modest cross-section, sometimes
with a square cross-section, often put in place at varying distances. On top of these, 2.5
centimetre-thick planking functioned as the floor. The joists were fixed to the beams to
support the trellis to which the plasterwork of the ceiling of the floor below was fixed (fig.
11).

According to the choice De Giorgio often made, these houses are covered with single-pitched
roofs, descending towards the back. Consequently, he obtained greater height for the main
façade and eliminated from the most significant view of the architectural scene those
functional accidents, such as the drainpipes and the guttering, that can dissipate the
metaphysical aura he focused on. As his own photographs show, he looked for a pure and
strong contrast between light and shade. The demolition of the ceilings on the first floor
facilitated the discovery of the technique De Giorgio used to put the single-pitched solution
into effect and at the same time make an appreciable saving in the cost of the construction: the
masonry of the façade on the piazza above the ceilings was built only 12 centimetres thick,
instead of 24, with small supporting pilasters under each roof beam (fig. 12).

This is an example of how the planner applied himself to reducing the cost to the Paduan
Federation of the Fascist Party, to whom he answered for his actions, while at the same time
aiming at the greatest expressive strength in order to capture the attention of the head of state,
Benito Mussolini. Mussolini was expected in Padua on 24 September 1938 [Bertolini, 1938],
but did not go to Vigonza, perhaps because the work had not been completed in time
[Pietrogrande, 2011].

During the restoration work on the houses, the 12 centimetre-thick brickwork that concluded
the façades in the part above the first floor was strengthened from the inside using a second
brick course, so that the structural framework guarantees that the building is safe and will
remain so in the future.

The openings in these buildings, the doors and windows, were provided for by De Giorgio
without external shutters (fig. 13). In this way another accident of a practical nature did not
call into question the ideal and everlasting destiny of his architecture, and the shadows were to
strongly mark the composition of the façades (fig. 14).

Figures 11, 12, 13. The rural houses that close off the southern side of the new piazza in
Vigonza (1937-1938). Details of the original situation: the wooden intermediate floors, the
masonry that supports the beams of the single-pitched roof, the doors and windows made of
wood and painted white.

The windows had frames of a significant size and were painted white to generate the
chromatic relationship between the white and the colour of the terracotta bricks that

characterises most historic buildings in the Venetian area – but where the white is that of the
stone.

In general, the transformation also involved the demolition of several additions constructed at
the back of the buildings so as to reduce the volumes to the initial stereometry desired by the
planner. Upon the restoration of the original appearance, other initiatives were activated such
as the modification of the terrace on the portico behind the convex building with a pitched
roof.

5. General considerations. The piazza as a place of community identity

In general, two ideas of piazza can be recognised. They refer to two different urban models:
the closed city and the open city. The piazza in the idea of the closed city has a distinctive
form protected from the surrounding territory and has its own size. The buildings that extend
along the sides of the piazza are arranged so that they give the empty space form.

Figure 14. The rural houses in Vigonza at the end of the restoration, view of
the first buildings from the east (November 2015). The façades on the piazza are made of
exposed brick with the exception of the corresponding portion on the upper floor of the
convex building (on the right, partially cut off).

This idea of piazza is the opposite of the open piazza in which the empty space becomes even
more determining since it constitutes the motivation for the relationships that the main
elements of the city establish between themselves. The idea of the piazza proposed by Le
Corbusier still recognises the compositional elements of the project such as the transport
network, the building fabric, and the public buildings but assembled more freely as a
consequence of a new element that enters the composition: the surrounding open space
offered by nature.

The piazza is the archetypal element in the form of the city, recurring in and identifying the
city. It appears in the architecture of classical times as the result of two or more stoas
(porticoes in ancient Greece). The arrangement of the volumes generates a public space
surrounded by porticoes that is the origin of the forum or the agora. In this case the portico
defines an introverted space that can be read as being the precursor of the construction of the
public piazza in the city. The Greek agora signals a starting point, a real and true invention.
Thanks to it, the social life and the relationships between human beings take on a new form,
more socially and culturally evolved.

The orthogonality of the axes in the Roman foundation city piazza, the cardo and decumanus,
is at the origin of the structure of the urban spaces. The axes are necessary to identify at their
crossing a symbolic centre and boundary, the mundus and the terminus or pomerium. The first
act of an architect in the planning phase is to draw the layout of the city which then shows the
division of the city into areas inside it, obtained from the framework of the urban fabric with
the main and minor roads being delineated. Then he traces the space where both the
residential buildings and the public ones with a religious or political function have to be
positioned. The succession of phases in the transformation is given by the erection of the city
wall that surrounds it and delimits the internal area and then by the subdivision of the open
spaces, roads, and piazzas. The public buildings are arranged around the symbolic empty
space of the forum composing a scenic urban wing-wall representative of the collective place
in the city. The Roman city of Timgad belongs to a settlement model typical of a closed city
surrounded by walls (fig. 15).

The forum in Pompeii and the imperial forums in Rome are constructed where the cardo and
decumanus form a crossroads and the regular empty space of the civil space is defined by
colonnaded porticoes that unify the representative buildings according to precise relationships
of axiality. The piazza constitutes one of the urban spaces of the Roman city that, by virtue of
their introverted character, conform by starting from the principle of the limit.

Figures 15, 16. Alessandro Dalla Caneva, graphic restoration of the city of Timgad with
identification of the piazza (on the left), the city of Siena and identification of the Piazza del
Campo.

Figures 17, 18. Alessandro Dalla Caneva, the city of Pienza with identification of the piazza
(on the left), partial graphic restoration of the city of Paris and identification of the Place
Vendome.

The act of delimiting the space belongs to a way of thinking in the founding and construction
of the Roman city. On the other hand, the theme of the limit has its origin in identifying and
delimiting one space through religious rites, a dimension that has assumed civil values with
the passing of the centuries.

The medieval city (fig. 16) was superimposed on the ancient city giving shape to the space
according to analogous principles. The space in the civic and religious piazza is analogously
limited and protected by urban wing-walls that configure the space by giving it an identity.
The relationship between empty and filled spaces constitutes the reason for a picturesque
composition of the parts between volumes that are related according to an apparently free
arrangement of the parts giving life to irregular but unitary spaces. The medieval roads and
piazzas break the orthogonal grid of the city founded by the Romans into pieces but also
constitute the texture of continuous change alongside which the buildings of the city stand
that are recognised by role and position within a wider and general urban framework. The
founding principles of the medieval city always arose from the same idea that considers the
relationships between the house and road, the public buildings and the piazza as invariable
elements.

In the sixteenth century the space of the piazza assumed a unitary layout, aimed at urban
decorum in line with the theories developed on the idea of the ideal city. Starting from Leon
Battista Alberti, the idea of the ordered city is reflected in the design of the piazza, recognised
by its symmetrical shape, which the experience of urban planners in developing piazzas such
as those in the cities of Pienza (fig. 17) and Vigevano attempts to emulate.

The eighteenth and nineteenth centuries saw general ideas on the modernisation of spaces in
large cities proliferate. This is the case with the Paris plan by urban planner Pierre Patte (fig.
18). The collage of piazzas inside historic Paris redesigned the look of the city by re-ordering
the spaces using a new order in form that copied the same principles of construction as the
historic city. Road and piazza retain the same generating role but change their
dimensions. Piazzas represent the fulcrum of a series of relationships and routes that unify the
individual piazza with a unitary urban spatial system. A hierarchical order can be recognised
in the size of piazzas, from the exceptional dimensions to the small ones. This hierarchy
represents the celebrative role of the public space very well, a space through which real power
manifests its own meaning and role.

The idea of the piazza, conceived in this way for millennia, enclosed inside a symbolic
enclosure representing the values identifying a community, entered into crisis at the start of
the twentieth century because of the emergence of a new ideal of the city and of spatiality.

Figures 19, 20. Le Corbusier, planimetry of the civic centre of Saint Dié, 1945. On the right
plan of the city of Parma. The two planimetries identify the two models of historical city, the
open city and the closed city, to which correspond the open piazza and the closed piazza.

The end of the traditional piazza is found in the unrealised project Le Corbusier developed for
the city of Saint Dié in which free volumes in close visual relationship constitute the meaning
of the new urban piazza (figs. 19, 20).

Arranging autonomous volumes in space in close interdependent relationship so that they


form spatial harmony constitutes what the architectural historian Siegfried Giedion defined as
the first design conception in space: architecture understood as sculpture. This principle of
composing by relationships between individual and autonomous buildings, this perception of
the spatial effect that volumes, as plastic figures, produce in their close relationship, are
carried over in twentieth century architecture.

The idea of the ancient piazza is still valid today at least in the historic city. This model
becomes invalid with the removal of the ancient walls that affirms an urban form constructed
on the relationship between urban elements and natural context. Saint Dié is therefore
constructed on a specific principle: the form of the place results from the system of the
relationships that are established between distinct urban elements and the surrounding nature.
An analogous principle is stated by the organisation of two places: the system of the
architectures in the Athenian Acropolis (the Parthenon in relationship with the Propylaea and
the Erechtheion, and with the slopes of Mount Pentelicus) and the relationships that the
distinct volumes of the Baptistry, the Cathedral, and the leaning tower establish in the empty
space of the Field of Miracles in Pisa.

6. Conclusions

Quirino De Giorgio's urban project rejects the model of the open city and is in continuity with
the ways of composing the traditional city that conceives space by arranging the volumes
around a void with a recognizable shape. The close relationship between the street and the
houses, typical of the way of composing the ancient city, is still evident, as is the
scenographic position of the theater which makes its hierarchical role within the square
recognizable.

De Giorgio gave form to the piazza of the town hall of Vigonza by interweaving curvilinear
forms, and subdivided the piazza into specific functional areas. A green area in front of the
theatre was initially enclosed in a circular flower bed with a tree in the middle, alluding to the
theme of the garden. Opposite on the far side was the area with the well, with the well-head

made of terracotta bricks raised on three small concentric steps placed on trachyte stone
paving. The red bricks of the yard extended in the middle. This tricolour, like the Italian flag,
was probably a symbolic reference that De Giorgio made to represent the large scale of the
village of Vigonza. All of this was progressively removed in the second half of the last
century, until it was largely reduced by the invasion of a homogeneous tarmac surface suited
to parking and the weekly market. Together with the restoration of the houses came the
transformation of the piazza, for which the tarmac covering was removed, leaving a
continuously paved space now no longer used for parking vehicles.

The monumental complex planned by De Giorgio nowadays constitutes one of the most
important references in the life of the population of Vigonza, also because of the marked
characterisation influenced by the metaphysical poetry. At first sight not identifiable as a
work of the fascist regime [Portoghesi, Mangione, Soffitta, 2006], the piazza is also one of the
most complete examples of how fascism, through its provincial federations, intervened in the
transformation of villages on an urban scale. It is also because of the lack of a grandiloquent
and rhetorical character of the square that the inhabitants of Vigonza live in this space happily
as a useful resource for the life of the community, regardless of its origins and ignoring them
in many cases.

In conclusion, it should be noted that the corresponding author has followed the restoration of
the first two rural houses in the framework of the agreement between the Commune of
Vigonza and the Department of Civil, Environmental, and Architectural Engineering of the
University of Padua.

References

Bertolini, A., (1938). La grande giornata, in «Padova», n. 10, 10.


De Giorgio, Q., (1940). Tre anni di marcia del fascismo padovano, Padua.
Lenci, G., Segato, G., (Eds., 1996), Padova nel 1943, dalla crisi del regime fascista alla
resistenza, Padua.
Longhin, S., (2009). Quirino De Giorgio a Candiana. Il Borgo del Littorio, in «Quaderni di
storia Candianese», n. 5, 9-35.
Mangione, F., (2003). Le case del fascio in Italia e nelle terre d’oltremare, Rome.
Monti, G., (2006). Quirino De Giorgio, in «Territorio e ambiente veneti», n. 2, 71-79.
Pietrogrande, E., (2011). L'opera di Quirino De Giorgio (1937-1940). Architettura e
classicismo nell'Italia dell'impero, Milan.
Portoghesi, P., Mangione, F., Soffitta, A., (Eds., 2006), L’architettura delle case del fascio,
Florence.
Rigamo, R., (1938). Vasta opera rigeneratrice nelle campagne padovane. Dal casone alla
ridente casa rurale, in «Padova», n. 10, 44.
Zanella, M., (1996-1997). I borghi rurali di Candiana e Vigonza progettati da Quirino De
Giorgio, degree thesis, Institute of Architecture and Urbanism, University of Padua
(supervisor Vittorio Dal Piaz).

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Crease-Resistance Treatments of Cotton Fabrics by Electrostatic


Self-Assembly

Buse Sağgün1*, Şule Sultan Uğur2*, Okan Ayvacık3*

Abstract: Cotton fabrics are subjected to anti-wrinkle finishing processes because of their
tendency to wrinkle, especially during use, washing and drying. Although many
different chemical finishing agents and innovative methods have been proposed for this
purpose, solutions that can provide a wide range of uses are still needed, since existing
treatments are often not economically suitable or damage the physical properties of the fabric
while providing wrinkle resistance. In this study, the aim was to give cotton fabric a wrinkle-free
property by using the layer-by-layer coating technique. The laboratory studies carried out for
this purpose also aimed to minimize the loss of strength, color change and cost in the cotton
fabric. In this way, the layer-by-layer coating method, a nanofabrication method, can be used
commercially without additional investment costs to gain new functional features through both
the wrinkle-free property and the nanoparticles used.

Keywords: crease-resistance, electrostatic self-assembly, cotton.

1. Introduction

Cellulose-based fibers have been among the most preferred textile materials for years, due to their
natural origin, biodegradability, sustainability, renewability and superior wearing comfort.
However, wrinkling, a common and undesirable condition in cellulose-based fabrics, breaks the
form of the fabric and causes significant user discomfort. Wrinkles occur in cellulose fibers
because of the presence of free hydroxyl groups: when any force is applied to the fabric,
these groups form new hydrogen bonds with the adjacent polymer chain and affect the
appearance of the fabric, causing lines and fold marks (Yuen et al., 2007). An anti-wrinkle
chemical finishing process is applied to fabrics to prevent wrinkling. With the chemical
finishing process, the mobility of the functional groups is limited and crease resistance is gained
by the formation of cross-links between neighboring cellulose molecules. However, functional
groups whose movement is restricted are known to cause loss of strength in the fabric and a
harder handle. In addition, some chemicals used in wrinkle finishing processes release
formaldehyde, a carcinogenic substance that has a negative effect on human health, causing
color changes or yellowing in the fabric. In recent years, intensive efforts have been made to
develop formaldehyde-free cross-linking agents for cotton to replace formaldehyde-based
reagents, due to the negative effects of formaldehyde-based cross-linkers on human health.
In particular, many studies have been carried out with polycarboxylic acids such as citric acid
(CA), maleic acid (MA) and 1,2,3,4-butanetetracarboxylic acid (BTCA). According to the
literature, BTCA may be the most promising to replace
other crosslinkers, but it is not suitable for commercial use due to its high cost (Lam et al.
2011; Hebeish et al. 2011; Sarwar et al. 2019).

1,2 Süleyman Demirel University, Engineering Faculty, Textile Engineering Department, Isparta, Turkey
3 Söktaş Textile Industry and Trade Inc., Aydın, Turkey
* Corresponding author: [email protected]

In the study, the applicability of commercially used anti-wrinkle chemicals with ionic charges
in the electrostatic self-assembly coating method was investigated and compared with the
proposed metal oxide nanoparticles. The fact that it will be possible to use both a new method
and new chemical groups to improve the easy-care properties of cotton fabrics will allow this
work to be evaluated commercially.

2. Material and Method

100% cotton fabric with 50/1 yarn and 215 g/m² weight was purchased from Söktaş Textile
Industry and Trade Inc. and used for obtaining nanofilm-coated fabrics. Before the multilayer
film coating process, cotton fabric surfaces were pretreated with polyethylenimine (PEI, 0.1 g/l,
pH 10, dip-coating method) to obtain cationic surface charges. Anatase titanium dioxide
(TiO2) nanoparticles, silicon dioxide (SiO2) nanoparticles, poly(sodium 4-styrene sulfonate)
(PSS) and poly(diallyldimethylammonium chloride) (PDDA) were purchased from Aldrich
and used as received. Aqueous solutions of the polyelectrolytes were prepared at a
concentration of 3 mM using deionized water. Nanoparticle suspensions of 1 g/l were
prepared at 50 W for 1 h with a Sonics Vibra-Cell ultrasonic homogenizer. Knittex FA
crosslinking agent (FA) was purchased from Huntsman and used as the commercial agent. In
the electrostatic self-assembly process, cotton fabrics were deposited with 20 multilayer films
by using a laboratory-type padding machine for continuous processing. Air permeability, DP
rating, crease recovery angle and tensile strength analyses were performed to examine the
effect of the electrostatic self-assembly process on the cotton fabric properties.

3. Results

Wrinkling, the major disadvantage of cotton fabric, can be addressed by crease-resistant
finishing processes. The results of the DP rating and wrinkle recovery angle analyses used to
examine the crease-resistance process on the cotton fabrics are shown in Table 1. The test
results proved that the applied electrostatic self-assembly processes, especially those with FA
content, are very effective in improving the wrinkling properties. A minimum DP rating value
of 3 was obtained for all treated samples.

Table 1. DP rating and crease recovery angle of different types of fabrics

Samples    Durable Press (DP) Rating    Wrinkle Recovery Angle WRA (°)    % Difference
1 Untreated fabric 1 151,04 -
2 PSS/PDDA (20°C) 3 190,2 25,9
3 TiO2/TiO2 (20°C) 3,5 220,2 45,7
4 TiO2/TiO2 (50°C) 3 216 43
5 SiO2/TiO2 (20°C) 3 221,2 46,4
6 SiO2/TiO2 (50°C) 3 215 42,3
7 SiO2/PDDA (20°C) 3 217,5 44
8 SiO2/PDDA (50°C) 3 220,5 45,9
9 FA/PPDA (20°C) 3 211,5 40
10 FA/PPDA (50°C) 3 228 50,9
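
The “% Difference” column in Table 1 appears to be the simple relative change in wrinkle
recovery angle with respect to the untreated fabric; the same convention, with the opposite
sign, matches the air permeability differences reported later in Table 3. A short Python check
of this assumed formula, reproducing the reported values to within rounding, is sketched below.

# Hypothetical check of the '% Difference' columns: relative change vs. the untreated fabric.
untreated_wra = 151.04                                    # WRA of the untreated fabric, Table 1
wra = {"PSS/PDDA (20C)": 190.2, "SiO2/TiO2 (50C)": 215.0}
for name, value in wra.items():
    diff = (value - untreated_wra) / untreated_wra * 100
    print(name, round(diff, 1))                           # 25.9 and 42.3, as in Table 1

untreated_air = 410.3                                     # air permeability of the untreated fabric
print(round((350.3 - untreated_air) / untreated_air * 100, 1))   # -14.6, as in Table 3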

The mechanical tests were performed on an electronic tensile strength machine according to
the EN ISO 2062 standard. The breaking strength of warp and weft yarns extracted from the
untreated and multilayer-film-deposited fabrics was tested at fracture, and all the test results
are given in Table 2. It was observed that the tensile strength values of the cotton fabrics
decreased after multilayer film deposition in both warp and weft directions.

Table 2. Strength evaluation results of different types of fabrics

Samples Warp Tear Strength Weft Tear Strength


1 Untreated fabric 2138,5 1741
2 PSS/PDDA (20°C) 1239 522
3 TiO2/TiO2 (20°C) 1370 587
4 TiO2/TiO2 (50°C) 913 391
5 SiO2/TiO2 (20°C) 1174 587
6 SiO2/TiO2 (50°C) 913 652
7 SiO2/PDDA (20°C) 1109 652
8 SiO2/PDDA (50°C) 978 456
9 FA/PPDA (20°C) 1109 848
10 FA/PPDA (50°C) 1370 913

A TexTest Instruments FX 3300 Air Permeability Tester III instrument was used to obtain the
air permeability values of the untreated and multilayer-film-deposited cotton fabrics
according to the EN ISO 9237 standard. Fabric air permeability tests were performed 10 times
at 100 Pa pressure for all the samples. Table 3 shows the air permeability values of untreated
and multilayer-film-deposited fabrics. After multilayer film deposition on the cotton fibers,
the air permeability values decreased. These results verified the presence of the deposited
layers on the cotton fiber.

Table 3. Air permeability and percentage differences of different types of fabrics

Samples    Air Permeability    % Difference


1 Untreated fabric 410,3 -
2 PSS/PDDA (20°C) 350,3 -14,6
3 TiO2/TiO2 (20°C) 277 -32,4
4 TiO2/TiO2 (50°C) 253,3 -38,2
5 SiO2/TiO2 (20°C) 323 -21,2
6 SiO2/TiO2 (50°C) 241,6 -41,11
7 SiO2/PDDA (20°C) 316 -22,9
8 SiO2/PDDA (50°C) 260 -36,6
9 FA/PPDA (20°C) 314,3 -23,3
10 FA/PPDA (50°C) 333,7 -18,6

4. Discussion and Conclusions

In the study, the applicability of commercially used anti-wrinkle chemicals with ionic charges
in the electrostatic self-assembly coating method was investigated and compared with the
proposed metal oxide nanoparticles. The fact that it will be possible to use both a new method

and new chemical groups to improve the easy-care properties of cotton fabrics will allow this
work to be evaluated commercially.

Acknowledgements

This work was supported by Scientific Research Fund of the Suleyman Demirel University.
Project Number: FKP-2021-8296. Project partner firm is Söktaş Textile Industry and Trade
Inc.

References

Yuen C.W.M., Ku S.K.A., Kan C.W., Cheng Y.F., Choi P.S.R., Lam Y.L., (2007). Using
nano-TiO2 as co-catalyst for improving wrinkle-resistant of cotton fabric. Surf Rev Lett,
14(4), 571–575.

Lam Y, Kan C, Yuen C., (2011). Wrinkle-resistant finishing of cotton fabric with BTCA - the
effect of co-catalyst. Textile Research Journal, 81(5), 482-493.

Hebeish A., Moustafa F.A., Fouda M. G., Elsaid Z., Essam S., Tammam G. H., Drees E. A.,
(2011). Green synthesis of easy care and antimicrobial cotton fabrics. Carbohydrate Polymers,
86, 4, 1684-1691.

Sarwar N., Ashraf M., Mohsin M., Rehman A., Younus A., Jayid A., Iqbal K., Raz S., (2019).
Multifunctional Formaldehyde Free Finishing of Cotton by Using Metal Oxide Nanoparticles
and Ecofriendly Cross-Linkers. Fibers and Polymers, 20, 2326–2333.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

All Optical Gate Based on Photonic Crystal Ring Resonator

Lila Mokhtari*1, Hadjira Badaoui, Mehadji Abri, Rahmi Bachir,

Lallam Farah, Moungar Abdelbasset

Abstract: The aim of this paper is to propose and design a photonic crystal drop filter based
on ring resonators and to study its properties numerically. The structure is built on a two-
dimensional square lattice. The resonant wavelength of the proposed PCRR is λ = 1.553 μm,
and the extraction efficiency exceeds 99% with a quality factor of 5177. To study the all-optical
OR and XOR logic gate functions, we calculated the electric field distribution of the 2D
photonic crystal for the 1.553 μm signal light.

Keywords: photonic crystals; filter; ring resonators; logic gates; OR; XOR.

1. Introduction

PCs are periodic optical nanostructures composed of two different materials with low and high
dielectric constants [1-2]. As a result of this periodicity, they possess a photonic band gap
(PBG), in which the transmission of light in a certain frequency range is absolutely zero [3].
Depending on the geometry of the structure, PCs can be divided into three broad categories,
namely one-dimensional (1D), two-dimensional (2D) and three-dimensional (3D) structures.
2D PCs, due to their complete PBG and ease of design and fabrication, attract more attention
than 1D structures [4].
In this study, we propose a new design of PCRR based on a square-lattice photonic crystal ring
resonator with a flower shape. COMSOL Multiphysics, based on the finite element method
(FEM), is used to simulate the distribution and transmission of the electromagnetic wave. In
our design, 100% dropping efficiency with a quality factor of 5177 is achievable at the
wavelength λ = 1.553 μm, which is a satisfactory result in comparison with other T-shaped
channel drop filters based on photonic crystal ring resonators.
Optical logic gates are essential components required for optical signal processing and optical
communication networks. Saidani [5] proposed a multifunctional logic gate in a 2D PC
waveguide structure using the multimode interference concept. By switching the optical signal
to different input waveguides, different functions such as XOR, OR, NOR and NOT gates have
been obtained. An all-optical NOR gate has been proposed by Isfahani [9]. We used the
PCRR presented here to realize the logic gates for the OR and XOR functions; they are
demonstrated by studying the electric field distribution of the 2D photonic crystal for the
1.553 μm signal light.

1
STIC Laboratory, Faculty of Technology, University of Tlemcen, Algeria.
* Corresponding author: [email protected]
2. Structural characteristics

2.1. Band gap structure


To determine the physical parameters of the filter, it is necessary to calculate the band gap
diagram of the design. The latter is computed using the plane wave expansion (PWE) method
in the COMSOL Multiphysics software [7]. The dielectric rod radius is r = 0.188×a and the
lattice constant is taken as a = 0.64 μm. As shown in Fig. 1, the PC structure supports photonic
band gaps in the regions 0 < ωa/2πc < 0.455, 0.525 < ωa/2πc < 0.545 and 0.675 < ωa/2πc < 0.750 for the TE
mode.

The resonant frequency is chosen such that there will not be a propagative mode in a photonic
structure without defects, as shown in Fig. 1. At the wavelength 1.553 μm (ωa/2πc = 0.412), we
observe the absence of modes in these regions. The electric field is reflected back because of
the existence of the PBG, as shown in Fig. 1.
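
As a quick consistency check (using the standard convention that the normalized frequency
ωa/2πc equals a/λ, which the paper does not state explicitly), the quoted value 0.412 does
correspond to the 1.553 μm operating wavelength for a = 0.64 μm; a one-line Python sketch:

# Assumed relation: normalized frequency = a / lambda (standard for photonic band diagrams).
a = 0.64              # lattice constant in micrometres
wavelength = 1.553    # operating wavelength in micrometres
print(round(a / wavelength, 3))   # 0.412, matching the value quoted above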

Figure 1. Schematic of photonic band gap

2.2. Field formulation


Using the Helmholtz field equation and starting from the frequency-domain governing equation:

∇ × (μr⁻¹ ∇ × E) − k0² (εr − jσ/(ωε0)) E = 0    (1)
The total electric field, E, can be decomposed into two components:

E = Etotal= Ebackground+ Erelative (2)

In the mode analysis and the boundary mode analysis, COMSOL Multiphysics solves equation
(1); the electric field in the spectral domain is given by:

E(x, y, z) = Ẽ(x, y) e^(−αz)    (3)

The spatial parameter is α = δz + jβ, where:
- β: propagation constant
- δz: attenuation constant
The scattering boundary condition is used to make a boundary transparent for a scattered wave. The

(4)
2.3. Design of the channel drop filter

In this study, the structure of the two-dimensional photonic crystal considered is formed by a
square lattice of dielectric cylindrical rods of GaAs embedded in an air background. The
numerical simulations are based on the finite element method, exploiting the commercial
software COMSOL. The rods have a refractive index of n = 3.28 and a radius of r = 0.188×a,
where a = 640 nm is the lattice constant of the photonic crystal structure, defined as the distance
between the centers of two adjacent rods, with a resolution of 20 rods horizontally and 20 rods
vertically. Fig. 2 shows the schematic structure of a channel drop filter (CDF) based on a PCRR.
In this structure the ring resonator is created by removing a 7 × 7 square of dielectric rods and
then replacing it with four flower-shaped clusters of holes, each separated by a hole of radius
r1 = 0.2356×a. In this study the mesh used is non-uniform and the type of sequence used is a
physics-controlled mesh with a scattering boundary condition.
Fig. 2. (a) Single ring PCRR. (b) Normalized transmission spectra at the two output ports 2 and 3
for the PCRR. The design parameters of the proposed NRC-QSRR: a = 640 nm, r = 120.32 nm,
rin = 151.3 nm, aNRC = 551.36 nm, rNRC = 130.34 nm, d = 1608.36 nm, l = 1169.61 nm.

The optical waves enter the structure through port 1 and exit through port 2, but during
resonance, the optical wavelengths will be transferred to the drop guide via the resonant ring
and exit through port 3. At the resonance wavelength λ = 1.553 μm, the extraction efficiency
exceeds 99% with a quality factor of 1411.
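
Assuming the usual definition of the quality factor, Q = λres/Δλ (the paper does not give the
bandwidth explicitly, so the values below are inferred rather than reported), the quoted quality
factors imply a very narrow drop bandwidth; a minimal Python sketch:

# Inferred drop bandwidth from Q = lambda_res / delta_lambda (assumed definition).
lambda_res = 1553.0                 # resonant wavelength in nm
for q in (1411, 5177):              # the two quality factors quoted in the text
    print(q, round(lambda_res / q, 2), "nm")   # about 1.1 nm and 0.3 nm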

Fig.3. Electric field pattern of the ring resonator at a) λ = 1.553 μm (the resonant wavelength).
b) λ = 1.556 μm (the off-resonance)

The electromagnetic wave transverse component Ez is presented around the wavelengths
λ=1.553 μm and λ=1.556 μm where the positive pulses are in red and the negative pulses are in
blue.

3.2. OR gate

The proposed OR gate structure is formed from two waveguides and two ring resonators, with
a resolution of 38 rods horizontally and 23 rods vertically. Two symmetrical optical waveguides
AY and BY were formed along the Γ–M direction by removing two rows of GaAs rods, and
two ring resonators were placed between them. The refractive index, radius and lattice constant
of the structure are the same as in the PCRR structure. The final schematic of our proposed OR
gate structure is shown in Fig. 4.

The all-optical OR logic gate operation is presented by studying the electric field distribution
of the 2D photonic crystal for the 1.553 μm signal light, and the calculated results are shown in
Fig. 4. If a signal is injected into input port A, then the signal light can transmit through the
optical waveguide AY and be output from port Y, as shown in Fig. 4(b). If a single beam is
injected into input port B, then the signal light can transmit through the optical waveguide BY
and be output from port Y, as shown in Fig. 4(c). If two beams are injected into input ports A
and B simultaneously, then the signal light can transmit through optical waveguides AY and
BY, as shown in Fig. 4(d). Thus, an all-optical OR logic gate can be achieved very easily.

Fig. 4. (a) OR gate structure. (b) 1 OR 0 = 1. (c) 0 OR 1 = 1. (d) 1 OR 1 = 1. The OR gate
structure parameters are set as: n = 3.28, r = 0.188×a and a = 640 nm.

3.3. XOR gate

To study the all-optical XOR logic gate function, the same 2D photonic structure as the OR
gate is used, adding one column of rods after the first ring resonator, as presented in
Fig.5. The optical XOR logic gate operation is presented by studying the electric field
distribution of the 2D PCRR device for a particular wavelength λ=1.553 μm.

First, we insert a signal light into only port A of the input waveguide. A large part of this signal
travels to port Y through the ring resonator waveguide. This is identified as the logic
operation “1 XOR 0 gives 1” and it is shown in Fig. 5(b).

A similar situation occurs when the signal is incident on port B only, and we get an output of 1.
This corresponds to the logic operation “0 XOR 1 gives 1”, as shown in Fig. 5(c).

When signals are given to input ports A and B simultaneously, a phase difference occurs
between these two signals due to the path difference, and we get destructive interference. As a
result, there is approximately zero output at port Y. This corresponds to the logic operation
“1 XOR 1 gives 0”, as shown in Fig. 5(d).

When both input signals are the same (“0”, “0” or “1”, “1”) the output of the XOR gate is zero
(“0”), and when they are different (“0”, “1” or “1”, “0”) the output is one (“1”).
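
As a simple numerical illustration of the interference argument above (an idealization assuming
two equal-amplitude signals differing only by a phase Δφ, which the paper does not quantify),
the combined relative intensity scales as |1 + e^(jΔφ)|²/4 = cos²(Δφ/2); a minimal Python sketch:

import numpy as np

# Idealized two-beam interference: relative output intensity vs. phase difference.
for dphi in (0.0, np.pi):
    intensity = abs(1 + np.exp(1j * dphi))**2 / 4
    print(round(dphi, 3), round(intensity, 3))   # 1.0 when in phase, 0.0 when out of phase (1 XOR 1)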

Fig. 5. (a) XOR gate structure. (b) 1 XOR 0 = 1. (c) 0 XOR 1 = 1. (d) 1 XOR 1 = 0. The XOR
gate structure parameters are set as: n = 3.28, r = 0.188×a and a = 640 nm.

4. Conclusion

In this article, a photonic crystal ring resonator based channel drop filter is designed and
investigated. First we designed a flower-shaped PCRR based on only one photonic crystal ring
resonator. By combining two ring resonators, we proposed OR and XOR gates operating with
TE-mode optical signals. Photonic crystal manufacturing is one of the main drawbacks that can
be encountered, as it is expensive in production. Sub-100 nm dimensions generally require the
use of high-resolution electron beam lithography (EBL).

References

[1] S.H. Kim, H.Y. Ryu, H.G. Park, G.H. Kim, Y.S. Choi, Y.H. Lee, ‘Two-dimensional
photonic crystal hexagonal waveguide ring laser’, Appl. Phys. Lett. 81 PP.2499–2501,
2002.

[2] V. Dinesh Kumar, T. Srinivas, A. Selvarajan,’ Investigation of ring resonators in photonic


crystal circuits’, Photon. Nanostruct. 2 pp.199–206, 2004.

[3] Z. Qiang, W. Zhou, R.A. Soref, Optical add-drop filters based on photonic crystal ring
resonators, Opt. Express 15 pp.1823–1831, 2007.

[4] S. Robinson, R. Nakkeeran, ‘Investigation on two dimensional photonic crystal resonant


cavity based band pass filter’, Optik, Volume 123, pp.451–457, 2012.

[5] H.Alipour Banaeia, S.Seraj mohammadib,F.Mehdizadehc,Alloptical NOR and NAND


gate based on nonlinear photonic crystal ring resonators,Optik Volume 125 pp.5701–5704,
2014.

[6] Mahmoud M Y, Bassou G, Taalbi A, Chekroun Z M. Optical channel drop filter based on
photonic crystal ring resonators. Optics communications 2012; 285:368-372.

[7] M. A. M. Birjandi, and M. R. Rakhshani, “ Anew design of tunable four-port wavelength


demultiplexer by photonic crystal ring resonators” Optik (2013),
http://dx.doi.org/10.1016/j.ijleo.2013.04.128.

[8] L. Mokhtari, H. A. Badaoui, M. Abri, M. Abdelbasset, F. Lallam, and B. Rahmi, "Proposal


of a New Efficient or/Xor Logic Gates and All-Optical Nonlinear Switch in 2D Photonic
Crystal Lattices," Progress In Electromagnetics Research C, Vol. 106, 187-197, 2020.
doi:10.2528/PIERC20051501 http://www.jpier.org/pierc/pier.php?paper=20051501
(scopus)

[8] Saidani N, Belhadj W, Abdel Malek F. ‘Novel all-optical logic gates based photonic crystal
waveguide using self imaging phenomena’. Optical Quantum Electron 47:1829–46, 2015.

[9] Isfahani BM, AhamdiTameh T, Granpayeh N, Javan AM.’ All optical NOR gate based on
nonlinear photonic crystal microring resonators’. Optical Society of America, volume
26,pp.1097–102,May 2009.

[10] Moungar A, Badaoui H, Abri M, ‘16-Channels Wavelength Efficient Demultiplexing
around 1.31/1.55 m in 2D Photonic Crystal Slab’, Optik (2019),
https://doi.org/10.1016/j.ijleo.2019.04.032

[11] T. Skauli, P. S. Kuo, K. L. Vodopyanov, T. J. Pinguet, O. Levi, L. A. Eyres, J. S. Harris


and M. M. Fejer, B. Gerard, L. Becouarn, and E. Lallier ‘Improved dispersion relations for
GaAs and applications to nonlinear optics’, Journal of Applied Physics, Vol. 94, N. 10, pp
6447-6455, 2003.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Air Pollution Prediction Based on LSTM Neural Network: Sample


of Isparta Province

Mahmut TOKMAK1*

Abstract: In recent years, air pollution has become an important problem as a result of the
increase in population, the advancement of technology and the growth of developing industries
and cities. In particular, pollutants such as PM10 (particulate matter) and SO2 have been shown
to threaten human health. Therefore, the detection and prediction of PM10 air pollution is an
important issue. The Province of Isparta, Turkey, is among the provinces with first-degree
pollution risk due to its geographical location, and for this reason it has been taken as the case
study.

In this work, we propose an approach to forecast PM10 concentration using a Recurrent Neural
Network (RNN) with Long Short-Term Memory (LSTM). In the estimation of air pollution,
hourly temperature, pressure, humidity, wind speed and wind direction meteorological
parameters and PM10 concentrations obtained between 2016 and 2021 were used. The data
were trained and tested with the established LSTM model. In assessing the accuracy of the
model predictions, we used the coefficient of determination (R2), Root Mean Square Error
(RMSE) and Mean Absolute Error (MAE). The obtained values of R2, RMSE and MAE were
0.90, 9.43 and 5.99, respectively. The result shows that the proposed approach can effectively
forecast the value of PM10.

Keywords: Air pollution, Deep Learning, RNN, LSTM

1. Introduction

With the rapid development of urbanization, air pollution is becoming an increasingly serious
environmental problem affecting human health and sustainable development worldwide (D.-R.
Liu et al., 2021). The rapid development of industrial technology has caused many negative
environmental impacts. Air pollution is one of them (Chang et al., 2020).

Typical sources of air pollution include industrial emission and traffic emission, and the main
pollutants are PM2.5, PM10, NO2, SO2, O3, etc. The correlation between health risk and the
concentration of air pollutants has been studied (J. Fan et al., 2017). According to the World
Health Organization (WHO), determining the sulfur dioxide (SO2) and particulate matter (PM)
values, pollutants that change the natural composition of the air and give it the characteristic of
polluted air, is sufficient to decide on the level of air pollution in a region, and measuring them
in every country has been suggested (Özel, 2019; Tsai et al., 2018).
Isparta Province in Turkey is among the ones with the highest pollution risk (ICSİM, 2021). In
order to prevent or reduce the harmful effects of air pollution on human health and the

1
Isparta University of Applied Sciences, Gelendost Vocational School, Gelendost, Isparta, Turkey
* Corresponding author: [email protected]
371
environment, it is aimed by the authorities to achieve the determined air quality targets.
Therefore, to effectively monitor and forecast PM concentration is an important issue.

When the literature on the estimation of air quality parameters is examined, it can be seen that
various studies have been carried out on air pollution using data sets from different cities in
different countries and various machine learning methods. These methods can be basically
divided into two groups as traditional classification algorithms and deep learning methods.
Traditional classification algorithms refer to classification algorithms developed to perform
data mining tasks. Examples of these algorithms are Support Vector Machines (SVM) (Zhu et
al., 2018), Random Forest (Kumar, 2018), k Nearest Neighbor algorithms (KNN) (Y. Fan et
al., 2018) Artificial Neural Network (ANN) (Maleki et al., 2019). The methods used as deep
learning methods are Deep Neural Networks (DNN) (Eslami et al., 2020), Convolutional Neural
Networks (CNN) (Park et al., 2020; Zhang et al., 2020) and RNN (J. Fan et al., 2017; Krishan
et al., 2019; D. Liu et al., 2020; D.-R. Liu et al., 2021; Tsai et al., 2018).

Air pollution is also affected by meteorological factors such as wind speed, direction,
temperature, pressure and humidity. The most important role of meteorology in atmospheric air
pollution is to be effective in the stages of distribution, transport and separation from the
atmosphere (Özel, 2019). Therefore, in this study, an LSTM model was established by
combining meteorological parameters with pollutant parameters. The data used in the model is
retrieved from the General Directorate of Meteorology and Turkey Ministry of Environment
and Urbanisation from year 2016 to 2021 and is combined into 7 dimensions dataset. The
performance of the established model was evaluated with R2, RMSE and MAE criteria.

2. Material and Method

2.1. Data Preprocessing and Experiments Design

The meteorological data used in this study were obtained from the General Directorate of
Meteorology for the Province of Isparta. These data consist of hourly temperature, humidity,
pressure, wind direction and wind speed information for the years 2016-2021. PM10 and SO2
data were obtained from the air quality station data web page of the Turkish Ministry of
Environment and Urbanisation (RTMEU, 2021).

Table 1. Dataset Statistics

         Pressure    Wind direction    Humidity    Temperature    Wind speed    SO2    PM10
count 43844 43841 43844 43845 43841 41651 41725
mean 902.21 179.91 60.67 13.42 1.66 11.74 53.79
std 4.43 115.92 22.25 9.43 1.27 12.60 52.51
min 878.80 0.00 7.00 -13.70 0.00 0.00 0.02
max 918.20 360 99 37.40 20.90 201.62 1646.03

First of all, the meteorological parameters and pollutant parameters were combined. When
there were missing values in the data, the average value was used to fill them in. Min-Max
normalization (1) was then used to limit the values in each dimension to between 0 and 1.
Min-Max normalization is used to avoid excessive iterations of the neural network and a
decrease in accuracy.

xnorm = (x − xmin) / (xmax − xmin)    (1)
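
A minimal preprocessing sketch reflecting the two steps described above (mean imputation of
the missing values and min-max scaling of each dimension to [0, 1]) is given below in Python;
the column names and file name are illustrative assumptions, not the exact ones used here.

import pandas as pd

# Illustrative column names for the 7-dimensional dataset (assumed, cf. Table 1).
cols = ["pressure", "wind_direction", "humidity", "temperature",
        "wind_speed", "SO2", "PM10"]
df = pd.read_csv("isparta_hourly.csv")                     # hypothetical file name

df[cols] = df[cols].fillna(df[cols].mean())                # fill missing values with the mean
df[cols] = (df[cols] - df[cols].min()) / (df[cols].max() - df[cols].min())   # Equation (1)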

2.2. RNN and LSTM

An RNN is a variation of the feed-forward neural network (FNN): an FNN consists of layers
stacked on top of each other, where each layer is composed of neurons, and all connections
between layers follow the same direction. RNNs introduce a cyclic structure into the neural
network, which is implemented by the self-connection of each node (neuron). By using self-
connected nodes, historical inputs can be memorized by the RNN (J. Fan et al., 2017; Tsai et
al., 2018). Training of an FNN is done with back propagation. Since an RNN processes
sequence data and takes the transfer of ‘memory’ into account, its training process stacks
back-propagation results over the time dimension, resulting in the back-propagation-through-
time algorithm (J. Fan et al., 2017).

The LSTM deep learning algorithm is a recurrent neural network introduced by Hochreiter and
Schmidhuber in 1997 to overcome the disadvantages of the RNN architecture (Hochreiter &
Schmidhuber, 1997). The difference between the LSTM and the traditional RNN is that each
node in the LSTM is a memory cell. The LSTM links the previous data information to the
current nodes. Each node contains three gates: an input gate, a forget gate, and an output gate
(Tsai et al., 2018).

Figure 1. LSTM Unit (Xiao & Yin, 2019)

In the LSTM architecture (Figure 1), first of all, xt and ht−1 are used as inputs, and it is decided
which information to delete. These operations are done in the forget layer (ft) using Equation
(2), and the sigmoid is used as the activation function. Secondly, the input layer, where the
new information will be determined, comes into play: first the information (it) is updated with
the sigmoid function using Equation (3); then the candidate information C̃t that will form the
new information is determined by the tanh function with Equation (4). The new cell state is
created by Equation (5). Finally, the output data is obtained by using Equations (6) and (7) in
the output layer. The weight parameters (W) and bias parameters (b) are learned by the model
in a way that minimizes the difference between the actual training values and the LSTM output
values (Xiao & Yin, 2019).

ft = σ(Wf⋅[ht−1, xt] + bf) (2)
it = σ(Wi⋅[ht−1, xt] + bi) (3)
C̃t = tanh(WC⋅[ht−1, xt] + bC) (4)
Ct = ft∗Ct−1 + it∗C̃t (5)
ot = σ(Wo⋅[ht−1, xt] + bo) (6)
ht = ot∗tanh(Ct) (7)

Three different statistical evaluation criteria were used to evaluate the prediction performance
of the proposed LSTM model. These criteria are: R2, RMSE and MAE. R2 (8) , RMSE (9) and
MAE (10) are commonly used as a measure of the difference between predicted and observed
values (Delavar et al., 2019; D. Liu et al., 2020).

R² = 1 − [Σ (yi − ŷi)²] / [Σ (yi − ȳ)²]    (8)

RMSE = √[(1/n) Σ (yi − ŷi)²]    (9)

MAE = (1/n) Σ |yi − ŷi|    (10)

In Equations (8)-(10), n is the number of samples used for the statistical evaluation criteria, yi
is the true value of the observation, ŷi is the estimated value of the observation and ȳ represents
the average of the actual observation values.
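
The three criteria can be computed directly from the predicted and observed series; a short
Python sketch using scikit-learn (which is listed among the tools used in this study), with the
helper function name chosen here for illustration only:

import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

def evaluate(y_true, y_pred):
    r2 = r2_score(y_true, y_pred)                          # Equation (8)
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))     # Equation (9)
    mae = mean_absolute_error(y_true, y_pred)              # Equation (10)
    return r2, rmse, mae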

3. Results

The proposed model and baseline models are implemented using Python, Theano, Keras and
Scikit-learn, and executed on a computer with an Intel Core i5-4200 CPU at 2.50 GHz and
16 GB RAM. The dataset was divided into two groups, approximately 90% for training and
10% for testing. The training and test data of PM10 are shown in Figure 2.

Figure 2. PM10 Training Set and Test Set

The established LSTM network consists of 3 layers and there are 120 neurons in each layer.
The dropout rate is set to 0.2. The parameters used for the LSTM network are given in Table 2.

Table 2. LSTM Model Parameters

Structures of LSTM
Value
Model
Number of Records 43849
Inputs 7
Neuron 120
Layers 3
Dropout 0.2
Outputs 1
Batch Size 64
Epoch 50
Loss Function Mean Square Error
Optimizer Adam
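
A minimal Keras sketch consistent with the hyperparameters listed in Table 2 is given below;
the exact layer arrangement, the length of the input time window and the use of the TensorFlow
backend are assumptions, since only the values in the table are reported here.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

timesteps = 1        # assumed length of the input window (one hourly record)
n_features = 7       # the 7 input dimensions of Table 1

model = Sequential([
    LSTM(120, return_sequences=True, input_shape=(timesteps, n_features)),
    Dropout(0.2),
    LSTM(120, return_sequences=True),
    Dropout(0.2),
    LSTM(120),
    Dropout(0.2),
    Dense(1),                                   # single PM10 output
])
model.compile(loss="mean_squared_error", optimizer="adam")
# model.fit(X_train, y_train, batch_size=64, epochs=50)   # training settings as in Table 2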

After the model was trained, as a result of the test, R2, RMSE and MAE scores of 0.90, 9.43
and 5.99 were obtained, respectively (Table 3).

Table 3. Test Results

R2 RMSE MAE
Test Results 0.90 9.43 5.99

The representation of the estimated values and the actual values on the graph is given in Figure
3. Forecast data and actual data for the last 100 hours are shown in Figure 4.

Figure 3. Hourly Timestep on Test Set

Figure 4. Last 100 Hours on Test Set

4. Discussion and Conclusions

Forecasting air pollution is a significant issue for human health. Measures to be taken as a result
of correct predictions will have positive results. Countries, administrators, non-governmental
organizations and scientists are doing various studies on this subject.

In this study, the LSTM model, which is a deep learning approach, is proposed to predict air
pollutant values. Hourly meteorological parameter and pollutant parameter data of Isparta
Province of Turkey between 2016 and December-2020 were used to train the proposed model
and evaluate its performance. These data were divided into two groups as training and test sets.
While the training data was used only in the learning process of the model, the test data were
not used in the learning process. After the learning process of the model was completed, the
test data were used while evaluating the performance of the algorithm. Three different statistical
evaluation criteria were used to evaluate the prediction performance of the proposed LSTM
model. These criteria are R2, RMSE and MAE. As a result of the test, high prediction accuracy
(R2 = 0.90) was obtained. It is concluded that improving air pollution forecast accuracy will have
significant positive effects on public health and environmental policy making.

References

Chang, Y.-S., Chiao, H.-T., Abimannan, S., Huang, Y.-P., Tsai, Y.-T., & Lin, K.-M. (2020).
An LSTM-based aggregated model for air pollution forecasting. Atmospheric Pollution
Research, 11(8), 1451–1463. https://doi.org/10.1016/j.apr.2020.05.015
Delavar, M., Gholami, A., Shiran, G., Rashidi, Y., Nakhaeizadeh, G., Fedra, K., & Hatefi
Afshar, S. (2019). A Novel Method for Improving Air Pollution Prediction Based on
Machine Learning Approaches: A Case Study Applied to the Capital City of Tehran.
ISPRS International Journal of Geo-Information, 8(2), 99.
https://doi.org/10.3390/ijgi8020099
Eslami, E., Salman, A. K., Choi, Y., Sayeed, A., & Lops, Y. (2020). A data ensemble approach
for real-time air quality forecasting using extremely randomized trees and deep neural
networks. Neural Computing and Applications, 32(11), 7563–7579.
Fan, J., Li, Q., Hou, J., Feng, X., Karimian, H., & Lin, S. (2017). A Spatiotemporal Prediction
Framework for Air Pollution Based on Deep RNN. ISPRS Annals of the
Photogrammetry, Remote Sensing and Spatial Information Sciences, IV-4/W2, 15–22.
https://doi.org/10.5194/isprs-annals-IV-4-W2-15-2017

Fan, Y., Hou, L., & Yan, K. X. (2018). On the density estimation of air pollution in Beijing.
Economics Letters, 163, 110–113.
Hochreiter, S., & Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation,
9(8), 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735
Isparta Çevre ve Şehircilik İl Müdürlüğü. (2021). Isparta Çevre ve Şehircilik İl Müdürlüğü.
https://isparta.csb.gov.tr/ilimiz-hava-kirliligi-degerlendirmesi-haber-257500, (Access
Date: 01.07.2021)
Krishan, M., Jha, S., Das, J., Singh, A., Goyal, M. K., & Sekar, C. (2019). Air quality modelling
using long short-term memory (LSTM) over NCT-Delhi, India. Air Quality,
Atmosphere & Health, 12(8), 899–908. https://doi.org/10.1007/s11869-019-00696-7
Kumar, D. (2018). Evolving Differential evolution method with random forest for prediction
of Air Pollution. Procedia Computer Science, 132, 824–833.
Liu, D., Lee, S., Huang, Y., & Chiu, C. (2020). Air pollution forecasting based on attention‐
based LSTM neural network and ensemble learning. Expert Systems, 37(3).
https://doi.org/10.1111/exsy.12511
Liu, D.-R., Hsu, Y.-K., Chen, H.-Y., & Jau, H.-J. (2021). Air pollution prediction based on
factory-aware attentional LSTM neural network. Computing, 103(1), 75–98.
https://doi.org/10.1007/s00607-020-00849-y
Maleki, H., Sorooshian, A., Goudarzi, G., Baboli, Z., Tahmasebi Birgani, Y., & Rahmati, M.
(2019). Air pollution prediction by using an artificial neural network model. Clean
Technologies and Environmental Policy, 21(6), 1341–1352.
https://doi.org/10.1007/s10098-019-01709-w
Özel, G. (2019). Markov Zinciri Kullanarak Ankara İli İçin Hava Kirliliği Tahmini. 3(2), 144–
151. https://doi.org/10.30516/bilgesci.546317
Park, Y., Kwon, B., Heo, J., Hu, X., Liu, Y., & Moon, T. (2020). Estimating PM2. 5
concentration of the conterminous United States via interpretable convolutional neural
networks. Environmental Pollution, 256, 113395.
Republic of Turkey Ministry of Environment and Urbanisation. (2021). Republic of Turkey
Ministry of Environment and Urbanisation. https://sim.csb.gov.tr/STN/STN_Report/
StationDataDownloadNew, (Access Date: 07.04.2021)
Tsai, Y.-T., Zeng, Y.-R., & Chang, Y.-S. (2018). Air Pollution Forecasting Using RNN with
LSTM. 2018 IEEE 16th Intl Conf on Dependable, Autonomic and Secure Computing,
16th Intl Conf on Pervasive Intelligence and Computing, 4th Intl Conf on Big Data
Intelligence and Computing and Cyber Science and Technology Congress
(DASC/PiCom/DataCom/CyberSciTech), 1074–1079.
https://doi.org/10.1109/DASC/PiCom/DataCom/CyberSciTec.2018.00178
Xiao, Y., & Yin, Y. (2019). Hybrid LSTM neural network for short-term traffic flow prediction.
Information, 10(3), 105.
Zhang, Q., Lam, J. C., Li, V. O., & Han, Y. (2020). Deep-AIR: A Hybrid CNN-LSTM
Framework forFine-Grained Air Pollution Forecast. ArXiv:2001.11957 [Eess].
http://arxiv.org/abs/2001.11957
Zhu, S., Lian, X., Wei, L., Che, J., Shen, X., Yang, L., Qiu, X., Liu, X., Gao, W., & Ren, X.
(2018). PM2. 5 forecasting using SVR with PSOGSA algorithm based on CEEMD,
GRNN and GCA considering meteorological factors. Atmospheric Environment, 183,
20–32.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Pro and Contra for Self-Driving Car: Public Opinion in Serbia

Livija Cveticanin1*, Ivona Ninkov2

1 University of Novi Sad, Faculty of Technical Sciences, Novi Sad, Serbia
2 Obuda University, Doctoral School of Safety and Security Sciences, Budapest, Hungary
* Corresponding author: [email protected]

Abstract: The self-driving car (SDC) considered in this paper is a cyber-physical system, i.e. a fully autonomous vehicle without a human driver. All activities of the vehicle are expected to be performed automatically. Based on data obtained from the sensors for perception and navigation and on data given by passengers, decisions would be made by a system with artificial intelligence. Such SDCs are expected to be on public roads very soon. Although the first manufactured SDCs have already been tested in some cities of the world, surveys of public opinion in more than 150 countries show some doubts and distrust regarding their application. The aim of this paper is to present the results of a survey carried out in Serbia among a wide range of the population. The questionnaire is extended in comparison to those already used and is modelled as tripartite, with cognitive, affective and behavioural components. It was found that the replies depend on gender, age, professional orientation and living place. The interviewed persons expressed their opinion about the technical, social, economic, safety and security aspects of SDCs. It is concluded that the results obtained in Serbia differ significantly from those obtained in highly developed countries with a high level of traffic in cities and on highways. As most people in Serbia are skeptical about SDCs, education and dissemination of knowledge on the topic are necessary.

Keywords: autonomous vehicle, questionnaire, benefits of self-driving car, barriers for self-
driving car

1. Introduction

Currently, one of the most investigated transport innovations is the so-called 'self-driving car' (SDC). There are many definitions of the SDC, but the most widespread is that 'it is an autonomous car without a human driver' (SAE, 2018). Namely, SAE classifies cars into six groups depending on the level of automation. The SDC is believed to represent the most sophisticated version, in which riding requires no human driver to operate or supervise the vehicle. The SDC is a cyber-physical system in which all activities of the mechanical parts are directed by computers, i.e. hardware and software. To function and make decisions, the algorithms need data obtained from various sensors, but also from the passengers of the car. The SDC project is multidisciplinary: besides mechanical and electrical engineering, IT, architecture, civil engineering and traffic engineering, the aspects of environmental protection, economics, social sciences, etc. have to be considered in both the realization and the application of the SDC.
Scholars believe that SDCs will change the world (Myrick, 2019). The major benefit of SDCs would be to eliminate many car accidents, saving tens of thousands of lives per year and preventing hundreds of thousands of injuries and their associated economic toll. Nowadays, human errors are recognized as a major factor in traffic crashes. More than 90% of traffic crashes can be tied to a human error or a human choice (NHTSA, 2016). SDCs, replacing fallible
human drivers, are expected to largely reduce traffic crashes. In addition, the adoption of SDCs in traffic promises to (Liu & Xu, 2020): reduce traffic congestion, air pollution and transportation emissions; increase the mobility of those who are currently unable to drive; improve fuel efficiency, space utilization and productivity; and decrease the risks and challenges related to safety, security, legal liability and regulation issues (Penmetsa et al, 2019; Anderson et al, 2016; Fagnant and Kockelman, 2015; NHTSA, 2016).

As analysts predict that completely autonomous cars will be for sale by 2025-2030 (Ilkova & Ilka, 2017), it is necessary to know whether the SDC will be accepted by the population. Namely, SDCs are technically almost ready for testing and application on public roads, and their inclusion in everyday transportation seems necessary. For this process to be realized, users need to accept the SDC. But are people really ready for autonomous vehicles? To obtain the answer, populations worldwide have been interviewed about SDCs.

According to an online survey conducted in 2011 on two thousand persons from the USA and the UK, 49% of them were ready to use an SDC. According to Reiss & Pitts (2021), confidence in the future of the SDC has continued to grow despite the global pandemic gripping the world.

In a 2012 survey of about a thousand German drivers by the automotive researcher Puls, 22% of the respondents had a positive attitude towards these cars, 10% were undecided, 44% were skeptical and 24% were hostile (Floridi, 2020). Similar results were obtained in surveys made in the USA, the UK and Australia (Schoettle & Sirak, 2014).

In 2015 a questionnaire survey by Delft University of Technology explored the opinion of five
thousand people from 109 countries on automated driving (Kyriakidis et al, 2015). Results
showed that respondents, on average, found manual driving the most enjoyable mode of driving.
22% of the respondents did not want to spend any money for a fully automated driving system.
Respondents were found to be most concerned about software hacking/misuse, and were also
concerned about legal issues and safety. Finally, respondents from more developed countries
(in terms of lower accident statistics, higher education, and higher income) were less
comfortable with their vehicle transmitting data. The survey also gave results on potential consumer interest in purchasing an automated car: 37% of the surveyed current owners were either "definitely" or "probably" interested in purchasing one.

In a 2018 survey, 57% of 1500 interviewed persons in China stated that "they would be likely to ride in a car controlled entirely by technology that does not require a human driver" (Qu et al, 2019). Most were willing to trust automated technology.

A Pew Research Center survey (Smith & Anderson, 2017) of more than four thousand adults from the USA found that 94% of them had heard about SDCs and 44% were ready to ride in one. The reasons against riding in an SDC were: no trust in the control (42%), no trust in the safety (30%), the enjoyment of driving being eliminated (9%), a feeling that the technology is not ready for everyday use (3%), fear of being hacked (2%) and other (8%). The reasons to accept a ride in an SDC were: the "cool" experience 37%, safer driving 17%, being able to do other things 15%, less stress 13%, greater independence 4%, convenience 4% and others 11%.

In 2019 a new standardized questionnaire about autonomous vehicle acceptance or rejection was introduced. The questionnaire includes an additional description which helps respondents to better understand the implications of different automation levels (Montoro et al, 2019). Using these questions, the public opinion on SDCs among various groups of respondents was analyzed (Kyriakidis et al, 2015). Results showed that partial automation (regardless of level), which requires higher driver engagement (usage of hands, feet and eyes), was more supported by the population than the SDC with full autonomy. The perceived and predicted safety determine the level of intention to use highly autonomous SDCs.

The questionnaire used in the aforementioned studies was used as the basis for forming the corresponding one for Serbia, in the Serbian language. The questionnaire is extended to be of tripartite type, with cognitive, affective and behavioural components.

The aim of this paper is to analyze the opinion of the population on SDCs in Serbia and to compare it with that obtained in other countries. It will be suggested how to improve the knowledge of the population in Serbia, with the aim of increasing the acceptance level and removing the barriers to accepting SDCs.

2. Investigation Method

Most of the studies on attitudes toward SDCs mentioned in the previous section of this paper were made through the traditional view with one-dimensional bipolar scales (Eagly and Chaiken, 1993; Marletto, 2019), which implies that answers in the questionnaire are positive, negative (Nielsen & Haustein, 2018), or neutral, i.e. uncertain (Hulse et al, 2018). However, we think that this approach is too simple and not sophisticated enough to include ambivalent and indifferent opinions. To overcome this lack, in our investigation the model for estimation is conceptualized as tripartite (Rosenberg and Hovland, 1960), with cognitive, affective and behavioural components. Cognitive components define the object by perceptions, beliefs and thoughts. Affective components describe the feelings and emotions which are linked to the object. The behavioural components include the behavioural intention and verbal statements. So the gradation of an answer is, for example, very likely – somewhat likely – somewhat unlikely – very unlikely.

The questionnaire is given in the Appendix. In the questionnaire a short description of the SDC is given. After that, the questions are divided into two parts: personal questions and questions regarding the SDC. The personal questions concern gender, age, level of education and living place. The second group of questions leads to conclusions about the level of knowledge about the SDC and its acceptance or rejection.

The interview involved 150 persons: 75 males and 75 females. (The proportion of persons in the interview is equal to that in the interviews already carried out in other countries and reported in the previous section.)

3. Results

The distribution of interviewed persons by education and living location is plotted in Fig. 1, and by age in Fig. 2.

Figure 1. Education and Living Location (number of female and male respondents by education: Technics / Non-technics, and living location: Urban / Rural)

Figure 2. Age distribution (number of female and male respondents per age group: <18, 19-29, 30-65, >66)
Figure 3. Expected time until SDCs are on public roads (number of female and male respondents: <10 years, 10-50 years, >50 years)

Only 24 females and 58 males had heard of the SDC. Only one female and 25 males without a technical education knew something about the SDC.
In Fig. 3 the expected time until SDCs appear on public roads is shown. The opinion regarding the SDC is plotted in Fig. 4.

Figure 4. Opinion regarding SDC (number of female and male respondents: very positive, somewhat positive, neutral, somewhat negative, very negative)

As many as 29 females and 68 males would like to be part of an SDC project as a designer, manufacturer or owner. In Fig. 5 the expected decrease of crashes with SDCs in comparison with conventional cars is shown.

Figure 5. Decrease of crashes with SDC in comparison with a conventional car (number of female and male respondents: very likely, somewhat likely, somewhat unlikely, very unlikely)

Among the female respondents, 15 consider it very likely, 27 somewhat likely, 20 somewhat unlikely and 13 very unlikely that fatal crashes would be reduced with SDCs in comparison to conventional cars. Among the male respondents the replies are: 25 very likely, 34 somewhat likely, 12 somewhat unlikely and 4 very unlikely.

Figure 6. Decrease of emission (number of female and male respondents: very likely, somewhat likely, somewhat unlikely, very unlikely)

In Fig. 6 the opinion about the decrease of emissions is plotted. Opinions about the reduction of fuel consumption are shown in Fig. 7.

Figure 7. Decrease of fuel consumption (number of female and male respondents: very likely, somewhat likely, somewhat unlikely, very unlikely)

59 females and only 22 males said that they would be worried while riding in an SDC.

Figure 7. Activity of females during driving in SDC (number of females per activity)

Figure 8. Activity of males during driving in SDC (number of males per activity)

The activities of females and males during a ride in an SDC are presented in Fig. 7 and Fig. 8, respectively.
Figure 9. Benefits of riding SDC (number of female and male respondents per benefit)

In Fig.9 the benefits and in Fig.10 the barriers of riding SDC are given.

Figure 10. Barriers of riding SDC (number of female and male respondents per barrier)

4. Discussion and Conclusions

In general, based on the results of the interview, the following is concluded:

1. Analysing the results of the interview, it is concluded that the population in Serbia supports SDC acceptance. This support is in the ratio 55% to 45%. The support for the application of SDCs is based on the assumption that the number of crashes will decrease. A special influence on the decision is the expectation that mortality in accidents will be lower. This result agrees with that published in Liu et al (2019a).
2. Other benefits of SDCs, in the opinion of respondents, are connected with comfortable long-distance trips, faster trips in cities, cheaper transportation costs, the availability of rides for older and disabled persons, use of the time in the car for other activities such as work and rest, but also with a 'healthy life' in an environment with decreased pollution, i.e. life outside areas with polluted air.
3. The opinion is that SDCs would decrease environmental pollution and conventional fuel consumption, too. Respondents believe that SDCs would be more efficient in reducing carbon emissions than conventional vehicles. The population expects manufacturers to place a greater emphasis on emission reductions and conventional fuel consumption (Liu et al, 2019b).
4. When SDCs can be used throughout the day, traffic will be optimally organized and many parking spaces in cities and towns may be eliminated in the future.
5. The population in Serbia is fearful about cyber-security and privacy in SDCs. The population is worried that SDCs will be easily hacked because of the abundance of digital infrastructure required for them to work, and agrees that the cybersecurity of SDCs has to be improved before the vehicles are included in traffic. The population is afraid that criminals will use the data they retrieve, hacking the vehicle and getting it to perform actions the user is unaware of and unable to undo, maliciously causing harm to persons in the car. Similar consequences of cyber-attacks are mentioned by Stevens (2018). If cyber-criminals take over a vehicle, they can cause minor nuisances (closing or opening windows), or they can create greater threats (disabling the car's ability to read stop signs, causing crashes, harming passengers), or they can use SDCs for terrorist purposes (transporting and detonating bombs).
These positive, neutral or negative opinions on the inclusion of SDCs on roads in Serbia are independent of the gender, age or education of the respondent. There is, however, a group of questions
where answers differ if they are given by males or females, younger or older persons, technically or non-technically educated persons, or those living in an urban or rural environment.
1. Persons from urban areas give higher support to SDCs than those from rural areas. Namely, the latter are mainly indifferent and not interested in SDCs. The reasons for urban inhabitants to accept SDCs are traffic jams, insufficient parking space, saving driving time, better use of the time spent driving, and, finally, an economic aspect. Living in the suburbs or in a rural area while working in the city gives the benefit of not paying for expensive flats downtown and of living in a more ecological environment. Individuals would be able to rent further away from the centres of towns and cities because of the ease of commuting with an SDC and, in addition, the reduced costs. It would be a new way of living: a reduction in urbanisation and a spreading out of the population throughout the region (Lim & Taeihagh, 2018). People would have less of a need to live in cities. The importance of this item is seen in the pandemic situation, when human activity was possible only in the fields and inside the house. It is worth saying that in Serbia the urban population lives mainly in flats.
2. The interview highlighted that there is a difference in perceptions about SDCs between men and women. Men are ready to accept SDCs on public roads, but women have serious worries. Males trust SDCs and consider them to be safer than conventional cars. Men are ready to ride in an SDC for the experience and have less stress and fear in comparison to females. Females are less enthusiastic and more fearful about their safety in an SDC. As advantages of the SDC, the females mentioned long-distance trips and the possibility of doing other activities during the ride.
3. Female and male opinions about the joy of driving differ, too. As is known, the joy of driving is one of the primary pleasures of the vehicle (Kemp, 2018). The interview shows that most men are not ready to give up the joy of driving. In contrast, this is not the case for women. For a significant number of females, driving is a necessary activity for fulfilling everyday duties, but for most men it is a form of pleasure: a connection with the surroundings, a sense of adventure and control, a form of relaxation, etc. Males think that SDCs would threaten this joy.
4. Females have a higher affinity for doing things other than driving in comparison to males. Females are primarily ready to phone, read, sleep or watch the road. Males would spend the time phoning, playing games, working and watching films.
5. The interview shows that there is fear during a ride in an SDC. The stress and fear are significantly higher in the female than in the male population. In addition, the fear is stronger than when driving a conventional car. It is interesting to note that females are afraid to ride in an SDC independently of age. Similar results are reported by Johnsen et al. (2017) and Naughton (2019).
6. The age of the respondents proved to be a significant factor in decision making. This conclusion has already been presented in some publications (see, for example, Rahman et al. 2019). In general, younger people are ready to accept the SDC. There is also a group of older persons, non-drivers and disabled persons who see the benefits of SDCs for their inclusion in normal life. They consider that SDCs could potentially reduce inequality in the population, and they have a positive opinion about accepting SDCs (Abraham et al, 2017; Lee et al, 2017). It is worth saying that school children were included in the interview, as they would most likely be the users of this technical innovation in the future.
7. Persons with a technical education have much more information about SDCs than others. However, their knowledge is insufficient. Both males and females are ready to be included in projects considering this autonomous vehicle. The population needs education in this segment of life. Popular and informative lectures on the topic are necessary at all levels and for various ages (from children up to old persons). Scholars have to disseminate knowledge about SDCs, while manufacturers and sellers have to invest in SDC advertising.
Finally, based on the interview, a new and quite unexpected aspect of the SDC appeared. There is an additional drawback of using SDCs due to the Covid-19 pandemic. Namely, car-sharing, which is a fundamental property of the SDC ride, is prohibited for persons who are not from one family. This argument was mentioned in the interview against the SDC. At the moment, a solution to this problem is not evident.

Acknowledgement

The investigation is supported by the Faculty of Technical Sciences in Novi Sad, Serbia (Proj.
No. 054/21).

References

Abraham, H., Lee, C., Brady, S., Fitzgerald, C., Mehler, B., Reimer, B. & Conghlin, J.F.,
(2017). Autonomous vehicles and alternatives to driving: trust, preferences, and effects
of age. Transportation Research Board, Conference paper, pages 17.

Anderson, J.M., Kalra, N., Stanley, K.D., Sorensen, P., Samaras, C. & Oluwatola, T., (2016). Autonomous vehicle technology: A guide for policymakers. RAND Corporation, 9780833083982.

Eagly, A.H. & Chaiken, S., (1993). The psychology of attitudes. New York, Harcourt Brace
Jovanovich College Publisher.

Fagnant, D.J. & Kockelman K., (2015). Preparing a nation for autonomous vehicles:
opportunities, barriers and policy recommendations. Transportation Research Part A 77,
167-181.

Floridi, L., (2020). The pulse of autonomous driving. Puls, pages 52, audi-study-autonomous-
driving.pdf

Hulse, L., Xie, H, & Galea, E.R., (2018). Perceptions of autonomous vehicles: Relationships
with road users, risk, gender and age. Safety Science, 102, 1-13.

Ilkova, V. & Ilka, A., (2017). Legal aspects of autonomous vehicles – an overview. Proceedings
of the 2017, 21st International Conference on Process Control, Strbsko Pleso, Slovakica,
June 6-9 2014, 428-433.

Johnsen, A., Strand, N., Andersson, J., Patten, C, Kraetsch, C. & Takman, J, (2017). Literature
review in the acceptance and road safety, ethical, legal, social and economic implications
of automated vehicles. BRAVE No. 723021, pages 76.

Kemp, R., (2018). Autonomous vehicles – who will be liable for accidents. Digital Evidence
and Electronic Signature Law Review, 15, 33-47.

Kyriakidis, M. Happee, R. & De Winter, J.C.F., (2015). Public opinion on automated driving:
Results of an international questionnaire among 5000 respondents. Transportation
Research Part F: Traffic Psychology and Behaviour 32, 127–140.

Lee, C., Ward, C., Raue, M., D'Ambrosio, L. & Coughlin, J.F., (2017). Age differences in
acceptance of self-driving cars: a survey of perceptions and attitudes. In: Zhou J, Salvendy
G, (eds) Human aspects of it for the aged population, Aging, design and user experience,
London, Springer, 3–13.

Lim, H.S.M. & Taeihagh, A., (2018). Autonomous vehicles for smart and sustainable cities: An
in-depth exploration of privacy and cybersecurity implications. Energies, 11(5), pages 24.

Liu, P., Zhang, Y. & He, Z., (2019a). The effect of population age on the acceptable safety of
self-driving vehicles. Reliability, Engineering & System Safety, 185, 341-347.

Liu, P., Ma, Y. & Zuo, Y., (2019b). Self-driving vehicles: Are people willing to trade risks for
environmental benefits? Transportation Research Part A Policy and Practice, 125, 139-
149.

Liu, P. & Xu, Z., (2020). Public attitude toward self-driving vehicles on public roads: Direct
experience changed ambivalent people to be more positive. Technological Forecasting
and Social Change, 151, 119827.

Marletto, G., (2019). Who will drive the transition to self-driving? A socio-technical analysis
of the future impact of automated vehicles. Technological Forecasting & Social Change
139, 221-234.

National Highway Traffic Safety Administration (NHTSA), (2016). ODI Resume – Investigation: PE 16-007, Office of Defects Investigations.

Montoro, L., Useche, S.A., Alonso, F., Lijarcio, I., Boso-Segni, P. & Marti-Belda, A., (2019).
Perceived safety and attitude value as predictors of the intention to use autonomous
vehicles: A national study with Spanish drivers. Safety Science 120, 865-876.

Myrick, J.G., Ahern, L., Shao, R. & Conlin, J., (2019). Technology name and celebrity
endorsement effects of autonomous vehicle promotional messages: mechanisms and
moderators. Science Communication, 41(1), 38-65.

Naughton, K., (2019) Americans still fear self-driving cars. BLOOMBERG, March 13, 2019,
https:// www.bloomberg.com/news/articles/2019-03-14/americans-still-fear-self-
driving-cars.

Nielsen, T.A.S. & Haustein, S., (2018). On sceptics and enthusiasts: What are the expectations
towards self-driving cars? Transport Policy, 66, 49-55.

Penmetsa, P., Adanu, E., Wood, D., Wang, T. & Jones, S., (2019). Perceptions and expectations
of autonomous vehicles - snapshot of vulnerable road user opinion. Technological
Forecasting and Social Change, 143, 9-13.

Qu, W., Xu. J., Ge, Y., Sun, X. & Zhang, K. (2019) Development and validation of a
questionnaire to assess public receptivity toward autonomous vehicles and its relation
with the traffic safety climate in China, Accident Analysis and Prevention 128, 78-86.

Rahman, M.M., Deb, S., Strawderman, L., Burch, R. & Smith, B., (2019). How the older
population perceives self-driving vehicles, Transportation Research Part F, Traffic
Psychology ad Behavior, 65(8), pages 12.

Reiss, M. & Pitts, B., (2021). Accenture on operations: Objects may be closer than they appear.
Modern Materials Handling, 3 pages,
https://www.mmh.com/article/accenture_on_operations_
objects_may_be_closer_than_they_appear

Rosenberg, M.J. & Hovland, C.I., (1960). Cognitive, affective and behavioral components of attitudes. In: Rosenberg, M.J. & Hovland, C.I., Eds., Attitude Organization and Change: An Analysis of Consistency among Attitude Components. Yale University Press, New Haven.

Schoettle, B. & Sirak, M., (2014). A survey of public opinion about autonomous and self-
driving vehicles in the U.S., the U.K., and Australia. UMTRI Report No–2014-021,
University of Michigan, Transportation Research Institute, pages 42.

Smith, A. & Anderson, M., (2017). American attitudes toward driverless vehicles. Pew
Research Center, https://www.pewresearch.org/internet/2017/10/04/americans-attitudes-
toward-driver less -vehicles/

Society of Automotive Engineers (SAE), (2018). International: Taxonomy and definitions for
terms related to driving automation systems for on-road motor vehicles. Standard
J3016_201806, USA, https://www.sae.org/standards/content/j3016_201806/

Stevens, T., (2018). Global cybersecurity: New directions in theory and methods. Politics and
Governance, 5(2), 1-4.

Appendix: QUESTIONNAIRE ABOUT SELF-DRIVING CAR (SDC)

(The SDC is an autonomous vehicle which does not need a human driver. On receiving your call, the SDC would pick you up and transport you to the desired location in the shortest time, along the optimal path and in the most comfortable way.)
Remark: Circle only one answer at a time!

Personal questions:
1. What kind of vehicle do you use most often for transportation?
a) Pedestrian b)Personal car c)Bicycle d) Motorcycle
e) Public transportation

2. What is your gender?


a) Female b) Male

3. What is your age?


a) Under 18 b) 19 to 29 c) 30 to 49 d) 50 to 64 e) 65 or older

4. What is your level and type of education?


a) Student in non-technics
b) Student in technics

c) Undergraduate in non-technics
d) Undergraduate in technics
e) Graduate in non-technics
f) Graduate in technics

5. Living place
a) Urban b) Rural

Questions in acceptance of SDC


1. Had you ever heard of SDC before participation in this survey?
a) Yes b) No

2. What is your opinion regarding SDC?


a) Very positive b)Somewhat positive c)Neutral d)Somewhat negative e)Very negative

3. In how many years do you believe the SDC will be on roads?


a) Less than 10 years b) 10 to 50 years c) More than 50 years

4. Would you like to be the part of the SDC project (designer, producer, owner)?
a) Yes b) No

5. Would you be ready to study engineering considering the SDC?


a) Yes b) No

6. How likely do you think that fewer crashes would occur with SDC?
a) Very likely b) Somewhat likely c) Somewhat unlikely d)Very unlikely

7. How likely do you think a reduction of severe crashes with fatalities would occur?
a) Very likely b) Somewhat likely c) Somewhat unlikely d)Very unlikely

8. How likely do you think that lower emission would occur with SDC?
a) Very likely b) Somewhat likely c) Somewhat unlikely d)Very unlikely

9. How likely do you think the reduction of fuel consumption would occur?
a) Very likely b) Somewhat likely c) Somewhat unlikely d)Very unlikely

10. If you were to ride in a SDC what do you think you would use the extra time doing
instead of driving?
a) Phoning and mailing b)Read c)Resting and sleeping
d)Watch movies/TV e)Playing games f)Working
g)Eating h)Watching road i)Not ride in SDC j)Other (please specify)

11. Would you be worried during driving in SDC?


a) Yes b) No

12. I would ride in SDC because:


a) Experience b)Safer than conventional car c)Can do other things
d) Less stress e)Long-time trip convenience

13. I would not ride in SDC because:

a) Do not trust b)Enjoy of driving c)Feel technology is not ready
d) Hacking e) Safety concerns f)Worry of privacy

Thank you for completing this survey about SDC.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Effects of Activated Carbon on Medium Density Fiber Board Properties

Ayşe Ebru Akın1*, Mustafa Karaboyacı1

Abstract: There is growing concern all over the world about the health effects of the formaldehyde emission coming from the adhesive used in MDF production. In this work, we investigated the effect of activated carbon addition into urea-formaldehyde resin on the total formaldehyde emitted from MDF plates produced using the resin modified in this way.

First, the gel time behavior of the resin was studied by monitoring the pH, gelation time, solid content, flow time and viscosity of the modified resin in comparison to the reference resin containing no activated carbon. The dosing of the activated carbon in the dry resin was kept at 1 wt%, 3 wt% and 5 wt%. The modified resin was then used in the production of 40x40 cm MDF samples on a fully automated laboratory-scale press line. Internal bonding strength, surface soundness, screw holding resistance, water absorption and thickness swelling were also measured, in addition to the main parameter of interest, the formaldehyde emission level, which was determined via a spectrometric technique following an extraction procedure.

The threshold value for activated carbon was determined to be 1 wt%. The addition of 1 wt% activated carbon into the urea formaldehyde adhesive decreased the formaldehyde emission by 52% in comparison to the reference, whereas the addition of activated carbon above this threshold level provided a 47% decrease.

Keywords: activated carbon, formaldehyde emission, MDF, adsorbent

1. Introduction

The increase in diseases around the world has increased awareness of the chemical hazards coming from the products we use. For this reason, reducing the chemicals released over time from products constantly used in the furniture industry has become an important issue. Medium density fiberboard (MDF), which is used in different areas such as schools and homes, is an important composite material of the wood panel industry and contains urea formaldehyde. Urea formaldehyde resin has an important use in the wood panel industry. This resin is preferred because it is cheap and transparent, but it has disadvantages: low water resistance and the high formaldehyde emission measured from MDF. As is known, formaldehyde has many negative effects on human health, one important effect being an increased risk of cancer. In 2004 the International Agency for Research on Cancer (IARC) classified formaldehyde as a chemical harmful to the human body (Pizzi 1994).

1 Süleyman Demirel University, Engineering Faculty, Chemical Eng. Department, Isparta, Turkey
* Corresponding author: [email protected]
Formaldehyde is used in the production of the urea formaldehyde resins used in the MDF industry and, depending on the reaction conditions between urea and formaldehyde, some amount of formaldehyde can remain in the environment without reacting. In addition, some formaldehyde is released due to bond formation in the condensation stage of the resin, which develops during the pressing stage of MDF production. For these reasons, some formaldehyde, called free formaldehyde, remains in the fiberboard plate produced (Pizzi 1989).

Consequently, the use of formaldehyde in wood panels has currently been reduced to particular levels regarded as not harmful to human health. Formaldehyde emission can be lowered by several methods. Several methods for producing low formaldehyde emission MDF panels have been studied, such as reducing the formaldehyde to urea mole ratio and the addition of formaldehyde scavengers into the resin. However, the mechanical and physical properties of the wood-based panels are then mostly affected badly. In addition, decreasing the formaldehyde mole ratio lengthens the curing time in MDF production, which requires more energy and time. In the 21st century, where energy and time are important, this is an undesirable situation. Other factors affecting formaldehyde emission are the wood type, resin type, type of hardener, press conditions, amount of resin used in MDF production and storage time. Moreover, modification of the resin with different amine-containing chemicals is also important to reduce formaldehyde emission.

Activated carbons have been used as adsorbents in various fields, for instance solvent recovery, gas separation and deodorization. Activated carbon is characterized by a strong adsorption capacity, which is attributed to its large internal surface area, porosity and high degree of surface reactivity. In relation to this, the use of activated carbon is one of the possible methods to reduce formaldehyde emission (Kumar et al., 2013).

The use of activated carbon as a formaldehyde absorbent has been analyzed by many researchers: rayon-based activated carbon has been used as a formaldehyde absorbent, and activated charcoal has been used as a bio-scavenger for decreasing formaldehyde emission from melamine formaldehyde resin.

In this context, this work aimed to investigate the effect of activated carbon addition on the properties of urea formaldehyde resin, the formaldehyde emission values of MDF, and the mechanical and physical properties of MDF.

2. Materials and Methods

2.1. Materials

The urea and formaldehyde used in the resin synthesis were provided by AGT AĞAÇ SAN.TİC.A.Ş. Mixed wood fibers, consisting of soft and hardwood fibers, to be used in the MDF production were provided by AGT. The activated carbon powder, with a 200 mesh particle size, 900-950 m2/g surface area, an iodine number greater than 900 and pH 8-10, was procured from ECS KİMYA.

2.2. Synthesis of urea formaldehyde resin

Urea formaldehyde resin synthesis is basically divided into two stages: an alkaline condensation stage in which mono-, di- and trimethylolurea forms are formed, and a condensation stage of the formed methylolureas in an acid environment.

In the synthesis process, 45% industrial-type aqueous formaldehyde solution and powdered urea were used. The mole ratio of formaldehyde/urea was taken as 1.04/1.00. Powdered urea in the appropriate mole ratio was weighed into a three-necked glass reaction balloon flask assembly and pure water was added; the flask was then placed in a heated magnetic stirrer unit and set to heat at 40 °C. At this stage, the appropriate molar amount of formaldehyde was added gradually and the pH of the reaction medium was adjusted to 8.20 with 20% NaOH solution by weight. The reaction was continued at 40 °C for 30 minutes. Then, for polycondensation, the pH was adjusted to ~4.5 with formic acid. The reaction was continued at 90 °C for 100-120 minutes while controlling the flow time of the resin with a DIN Cup 4. Then, while the resin was cooled to 70 °C, its pH was adjusted to 8.5 and the reaction was continued for a while. Finally, vacuum drying was applied to the solution, the resin was cooled to 40 °C, and the solid content of the resin was reduced from 60% to 58% by weight.
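As a rough illustration of charging the reactor at the stated 1.04/1.00 formaldehyde/urea mole ratio, the masses follow from the molar masses and the 45% solution strength; the batch size below is an assumed value, not a figure from the study.

```python
# Illustrative charge calculation for an F/U mole ratio of 1.04 : 1.00
M_FORMALDEHYDE = 30.03    # g/mol
M_UREA = 60.06            # g/mol
SOLUTION_STRENGTH = 0.45  # 45 wt% aqueous formaldehyde solution

urea_mol = 10.0                           # assumed batch size in moles of urea
formaldehyde_mol = 1.04 * urea_mol

urea_mass_g = urea_mol * M_UREA
formalin_mass_g = formaldehyde_mol * M_FORMALDEHYDE / SOLUTION_STRENGTH

print(f"Urea: {urea_mass_g:.0f} g, 45% formaldehyde solution: {formalin_mass_g:.0f} g")
```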

In order for the synthesized 1.04 mole ratio urea formaldehyde resin to cure sufficiently in the plate pressing stage, it must be used with a hardener. As the hardener, a 20% by weight aqueous ammonium chloride solution was used, constituting 4% by weight based on the resin solid content.

Table 1. Resin manufacturing parameters


Resin manufacturing parameters
Parameters Values
pH 8.20 ±10
Viscosity (cP@ 25°C) 160 cP±10 at 30 rpm
Flow time (second @ 25°C) 25±5
Gelation time (second) 60±5
Solid content (%) 58±1

2.3. Mixing of activated carbon with urea formaldehyde resin

To obtain a uniform dispersion of activated carbon powder in the urea formaldehyde resin, mechanical stirring with a YOKEŞ VBR-600 high-shear disperser mixer was carried out for 30 min at 1200 rpm using a Cowles-type blade. Activated carbon was added to the urea formaldehyde resin at 1%, 3% and 5% by weight of the resin solid content. The modified resins were named AC1, AC3 and AC5 according to the percentage added; AC0 indicates the reference resin, with no activated carbon powder in the resin.
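Because the dose is expressed on the resin solids, the mass of powder for a given batch follows from the solid content; a small illustrative calculation is given below (the 1000 g batch and the 58% solid content are assumed round figures, not prescriptions from the study).

```python
def activated_carbon_dose(resin_mass_g, solid_content=0.58, dose_fraction=0.01):
    """Grams of activated carbon for a dose expressed on the resin solid weight."""
    return resin_mass_g * solid_content * dose_fraction

for label, dose in (("AC1", 0.01), ("AC3", 0.03), ("AC5", 0.05)):
    grams = activated_carbon_dose(1000.0, dose_fraction=dose)
    print(f"{label}: {grams:.1f} g activated carbon per 1000 g resin")
```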

2.4. Characterization of physical properties of activated carbon containing urea formaldehyde resins

Viscosity measurements were made with a Brookfield LV DV2T viscometer using spindle no. 1 at 30 rpm and 25°C. Flow time measurements were made with a 4 mm DIN cup. Gelation time tests were performed using a water bath at 100 °C with stirring, according to the related standard test method.

2.5. Preparation of medium density fiberboard and physical and mechanical testing

The resin-free wood fibers (a mixture of 15% beech wood fiber + 85% pine wood fiber) with an average moisture content of 30% were dried in an industrial oven for approximately 6 hours until 2%-4% humidity was achieved. The amount of dry fiber was calculated theoretically and the activated carbon added urea formaldehyde resin was weighed as 12% based on the dry fiber amount. The activated carbon added resin was sprayed onto the wood fibers with the help of a mixer with a nozzle system, and a homogeneous glue-fiber mixture was obtained. A 3 g fiber sample taken from the resinated wood fiber mixture was analyzed in a moisture analyzer; it was determined to have an average moisture content of 9%-10%, which is appropriate for pressing. The glued fibers containing activated carbon added resin were transferred into a 40x40 cm mold with the help of a vacuum suction unit and the preform was formed before the press. It was then transferred to the IMAL PAL laboratory press unit and pressed at a pressure of 120 N/cm2 for 326 seconds. Table 2 shows all the details of the MDF containing activated carbon added resin and of the reference MDF. The boards were then conditioned to attain uniform moisture content in the panels. After that, the boards were cut and tested according to the related standard test methods for the determination of internal bond strength (EN 319), edge screw holding resistance (EN 320) and surface soundness (EN 311). Physical tests of the samples, thickness swelling and water absorption (EN 317) and moisture content determination (EN 322), were carried out. The mechanical properties of the MDF panels were evaluated according to TS EN 622-5. Internal bonding tests and other mechanical tests were carried out with a universal testing machine (IMAL IB800 board property tester).
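Thickness swelling and water absorption are reported as relative changes of specimen thickness and mass after 24 h of water immersion; a minimal calculation sketch is shown below. The input values are illustrative only, and the full specimen conditioning and measurement procedure is that of EN 317.

```python
def thickness_swelling_pct(t_before_mm, t_after_mm):
    """Thickness swelling (%) after 24 h water immersion."""
    return 100.0 * (t_after_mm - t_before_mm) / t_before_mm

def water_absorption_pct(m_before_g, m_after_g):
    """Water absorption (%) after 24 h water immersion."""
    return 100.0 * (m_after_g - m_before_g) / m_before_g

# Illustrative specimen values, not measurements from this study
print(thickness_swelling_pct(17.0, 20.3), water_absorption_pct(45.0, 66.7))
```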

Table 2. MDF manufacturing parameters with different loading activated carbon


MDF manufacturing parameters
Parameters Values
Size 400*400 mm
Thickness 17 mm ± 1
Target density 740±20 kg/m3
Press Pressure 120 N/cm2
Pressing Time 326 seconds
Press temperature (for both top and bottom plate) 190 °C
UF resin wt % of dry wood fibers 12 wt%
Activated carbon wt % of solid resin content 1%, 3% and 5%
Number of boards for each type of concentration 4

2.6. Formaldehyde emission testing

The formaldehyde emissions from the MDF panels were evaluated using EN 120 (the perforator method). A 100 g sample was put in a round-bottomed flask containing 600 ml of toluene. 1000 ml of distilled water was poured into the perforator attachment. The samples were boiled with the toluene for 2 hours. In this test method the distilled water absorbs the formaldehyde and the volatile organic compounds captured by the boiling toluene. The formaldehyde trapped by the water is then quantitatively determined using a UV spectrophotometer.
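The photometric step amounts to converting an absorbance reading into a formaldehyde mass via a calibration curve and scaling it to 100 g of board. The sketch below assumes a linear calibration and omits the moisture correction prescribed by EN 120, so it only outlines the arithmetic; all numbers are illustrative, not readings from this study.

```python
def perforator_value(absorbance, calib_slope_mg_per_l, water_volume_l, board_mass_g):
    """Approximate formaldehyde content in mg per 100 g of board (simplified)."""
    conc_mg_per_l = absorbance * calib_slope_mg_per_l   # from an assumed linear calibration curve
    formaldehyde_mg = conc_mg_per_l * water_volume_l    # total HCHO trapped in the absorption water
    return formaldehyde_mg * 100.0 / board_mass_g

# Illustrative reading for a 100 g sample and 1000 ml of absorption water
print(perforator_value(absorbance=0.65, calib_slope_mg_per_l=34.0,
                       water_volume_l=1.0, board_mass_g=100.0))
```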

3. Results

3.1. Effect of activated carbon on the resin physical properties

As shown in Table 3, increasing the amount of activated carbon increased the viscosity of the urea formaldehyde resin and extended the gelation time.

Table 3. Resin properties with the addition of activated carbon

Sample      pH     Flow Time (s @ 25°C)   Gelation Time (s)   Viscosity (cP @ 25°C)   Solid Content (%)
Reference   8.15   20.00                  53                  164                     58.76
AC1         8.30   20.12                  66                  174                     59.12
AC3         8.52   23.91                  88                  197                     59.69
AC5         8.63   25.13                  96                  227                     60.15

The reactivity of the UF resin depends on the amount of free formaldehyde, which makes the medium more acidic during the curing process when the hardener is added (Moslemi 2020). The pH values in Table 3 are the values measured with the activated carbon only, without the addition of hardener. The gelation time tests were carried out after adding the ammonium chloride hardener solution. Since the pH of the resin medium is high, we expect the gel time to be extended; this is in line with the literature. The high pH value of the activated carbon increased the pH value of the resin and extended the gel time even after the addition of hardener, since the medium was not acidic enough at increasing concentrations of activated carbon.

Resin flow time, viscosity and solid content increased with the addition of activated carbon.
Increasing the resin viscosity and flow time will decrease the resin fluidity and cause a
decrease in the adhesive property. This situation may cause weakening of the mechanical
strength of MDF (Anjum 2020).

3.2. Physical and mechanical properties of MDF panels

Physical tests were carried out for water absorption and thickness swelling over 24 h. For the water absorption tests, the results for the MDF samples coded AC0, AC1, AC3 and AC5 are 48.13%, 49.19%, 45.92% and 53.1%, respectively. The results are shown in Figure 1.

Figure 1. Water absorption results of MDF panels (24 h WA % vs. activated carbon concentration, AC0-AC5)

For the thickness swelling tests, the results for the MDF samples coded AC0, AC1, AC3 and AC5 are 19.48%, 19.66%, 19.46% and 22.01%, respectively, as shown in Figure 2.

Figure 2. Thickness swelling results of MDF panels (24 h TS % vs. activated carbon concentration, AC0-AC5)

When the results of the MDF samples were evaluated, there was an increase of 0.91% in the swelling value compared to the reference in the 24-hour swelling tests with the addition of 1% activated carbon, and a 10% decrease and a 13% increase for the 3% and 5% concentrations, respectively. Based on these results, the addition of 1% activated carbon did not cause a significant increase in the swelling value, while the addition of 3% activated carbon caused a decrease. When all concentrations are evaluated, it can be stated that the threshold value is 3% for the swelling test, because the addition of 5% activated carbon negatively affected the swelling value of the system with an increase of 13%. With the addition of activated carbon at increasing rates, the change in water absorption values compared to the reference is an increase of 2.2%, a decrease of 4.6% and an increase of 10.4%, respectively.

Table 4. Thickness swelling and water absorption values of MDF panels
Sample      Density (kg/m3)   Moisture (%)   24 h TS (%)   24 h WA (%)
Reference 756.64 4.93 19.48 48.13
AC1 753.48 4.99 19.66 49.19
AC3 758.37 4.63 19.46 45.92
AC5 755.34 4.90 22.01 53.01

According to the thickness swelling (TS) and water absorption (WA) results in Table 4, the addition of activated carbon particles did not affect the moisture and density values of the MDF panels and did not cause a significant change in the TS and WA values. This was also observed in a study in the literature (Kumar et al, 2013).

Mechanical tests were carried out for internal bonding strength, surface soundness and screw holding resistance. For the internal bonding tests, the results for the MDF samples coded AC0, AC1, AC3 and AC5 are 0.32 N/mm2, 0.34 N/mm2, 0.32 N/mm2 and 0.30 N/mm2, respectively; the results are shown in Figure 3. The results for the surface soundness tests are 0.80 N/mm2, 0.69 N/mm2, 0.75 N/mm2 and 0.84 N/mm2, respectively; the results are shown in Figure 4. For the screw holding resistance tests, the results for the MDF samples coded AC0, AC1, AC3 and AC5 are 690.00 N, 737.50 N, 668.50 N and 660.75 N, respectively; the results are shown in Figure 5.

Figure 3. Internal bonding results of MDF panels (internal bonding, N/mm2, vs. activated carbon concentration, AC0-AC5)

Figure 4. Surface soundness results of MDF panels (surface soundness, N/mm2, vs. activated carbon concentration, AC0-AC5)

Figure 5. Screw holding resistance (edge) results of MDF panels (screw holding, N, vs. activated carbon concentration, AC0-AC5)

When the mechanical tests were examined, there was a 6.25% increase in the internal bonding value for the 1% concentration. With the addition of activated carbon at the 3% and 5% ratios, there were decreases of 5.9% and 12.5% in the internal bonding values, respectively. When the screw holding resistance values are analyzed, an increase of 6.4%, a decrease of 3.1% and a decrease of 4.2% were observed at increasing concentrations compared to the reference, respectively. When the surface soundness test results are examined, there is a 12.5% decrease, a 6.25% decrease and a 5% increase compared to the reference at increasing rates, respectively.

Based on the MDF internal bonding strength test results, it can be deduced that MDF with less activated carbon added mostly exhibits higher strength than the control MDF. This can be explained by the fact that the incorporation of activated carbon in MDF fills the space between the fibers, thereby intensifying the close contact of the fiber-carbon-fiber system and strengthening the hydrogen bonds and Van der Waals forces (Darmawan et al., 2010).

Because the activated carbon holds the formaldehyde, the free formaldehyde in the resin is prevented from escaping from the reaction medium during curing, which strengthens cross-linking. However, at higher activated carbon loadings (above 1%), formaldehyde is retained less effectively due to the agglomeration of the activated carbon particles, and the internal bonding strength is reduced (Resmi et al., 2017).

3.3. Formaldehyde emission tests of MDF panels

Figure 6 shows the formaldehyde emission test results obtained by the perforator method. The formaldehyde emission tests were carried out with samples having 5% moisture content. The formaldehyde emission values of the samples named AC0, AC1, AC3 and AC5 are 22.33, 10.60, 10.67 and 11.78 mg/100 g board, respectively.

Figure 6. Formaldehyde emission results of MDF panels (mg/100 g board vs. activated carbon concentration, AC0-AC5)

According to the formaldehyde emission test results, decreases of 52.5%, 52.2% and 47.2% were observed in the emission values with the addition of activated carbon, respectively.
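The reported reductions follow directly from the perforator values given above; a short check of the arithmetic (values copied from Figure 6):

```python
reference = 22.33  # mg/100 g board, AC0
for name, value in (("AC1", 10.60), ("AC3", 10.67), ("AC5", 11.78)):
    reduction = 100.0 * (reference - value) / reference
    print(f"{name}: {reduction:.1f}% lower than the reference")  # 52.5, 52.2, 47.2
```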

It was found that the addition of activated carbon into the resin system reduces the formaldehyde emission values at all concentrations. This lowering is caused by the capability of the microstructure of the activated charcoal to adsorb formaldehyde in the MDF (Rong et al., 2002; Pari et al., 2006). Further, the porous structure affords a greater surface area of the adsorbent (activated carbon), and the adsorbate (formaldehyde) is held by the activated charcoal through the secondary forces of hydrogen bonding as well as Van der Waals interactions. This enhances the take-up of the adsorbate on the surface of the adsorbent, thereby intensifying the adsorption of formaldehyde by the activated carbon incorporated in the MDF. Since the activated carbon used has an iodine number above 900 and a high surface area, its high adsorption capacity was expected based on the information in the literature (Medek 2006).

4. Discussion and Conclusions

The main purpose of this study is to produce MDF panels that are sensitive to the environment and human health by reducing formaldehyde emission. For this reason, activated carbon, which is a good adsorbent due to its surface area and porous structure, was used as a filler in the urea formaldehyde resin system. The study also aimed to preserve the mechanical and physical strength values while reducing the emission values. For this reason, an attempt was made to determine the threshold value for the addition of activated carbon into the resin system.

In addition to the emission tests, when all the mechanical and physical strength tests were examined, it was decided that the optimum activated carbon concentration is 1%. With the addition of 1% activated carbon, the emission value was reduced, while the internal bonding strength and screw holding resistance were increased and a value close to the reference was obtained in the surface soundness tests. For the thickness swelling and water absorption values, the addition of activated carbon does not cause a significant change compared to the reference.

Acknowledgements

We would like to thank AGT Company and Süleyman Demirel University for allowing us to
use the laboratory facilities and test devices in the studies.

References

Anjum A., Khan G.M.A.(2020) Effect of synthesis of conditions on the molecular weight and
activation energy of urea formaldehyde prepolymer and their relationship. Journal of Eng.
Advancements, 01(04), 123-129.

Darmawan S, Sofyan K, Pari G, Sugiyanto K (2010) Effect of activated charcoal addition on


formaldehyde emission of medium density fiberboard. J For Res 7(2):100–111.

Kim S, Kim HJ, Kim HS, Lee HH (2006) Effect of bio-scavengers on the curing behavior and
bonding properties of melamine–formaldehyde resins. Macromol Mater. Eng., 291(9):1027–
1034.

Kumar A, Gupta A, Sharma K, Nasir M, Khan TA (2013) Influence of activated charcoal as


filler on the properties of wood composites. Int J Adhesive, (46), 34–39.

Liu C., Luo J., Li X., Gao Q., Li J. (2018) Effects of compounded curing agents on properties
and performance of urea formaldehyde resin. J. Polym. Environ. 26:158–165.

Medek J., Weishauptova Z., Kovar L. (2006). Micropor. Mesopor. Mater., 89, 276.

Moslemi A., K. Mohsen, Behzad T., Pizzi A. (2020) Addition of cellulose nanofibers
extracted from rice straw to urea formaldehyde resin; effect on the adhesive characteristics
and medium density fiberboard properties, International journal of adhesion and adhesives,
99(10), 25-82.

Pari G., S. Kurnia, S. Wasrin. (2006) Tectona grandis activated charcoal as catching agent of
formaldehyde on plywood glued with urea formaldehyde. Proceedings of the 8th pacific rim
bio-based composites symposium. Kuala Lumpur. Malaysia.

Pizzi A. (1983), Aminoresin Wood Adhesives in Wood Adhesives, Chemistry and


Technology 59-104.

Pizzi A. (1994), Advanced Wood Adhesives Technology.
Resmi V.C., Narayanankutty S.K. (2017) Effect of charcoal on formaldehyde emission,
mechanical, thermal and dynamic properties of resol resin. Int J Plast Technol 21(1):55–69.

EN 120, 1992. Wood based panels determination of formaldehyde content-extraction method


called perforator method. European standard.

EN 317, 1993. Particleboards and fiberboards, Determination of swelling in thickness after


ımmersion in water, CEN, Brussels.

EN 319, 1993. Particleboards and fiberboards. Determination of Tensile Strength


Perpendicular to the Plane of the Board.

EN 322, 1993. Wood based panels, Determination of density, Brussels.

EN 311, 2002. Wood based panels, Surface soundness test method.

EN 320, 2011. Particleboards and fiberboards, Determination of resistance to axial


withdrawal of screws test method.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Performance Analysis of FBMC-OFDM Waveform in Multipath Fading Channels

Halil Alptug Diver1*, Kubilay Tasdelen1*

Abstract: In Fifth Generation (5G) wireless communication, more flexible resource distribution is needed in order to support different requirements. To enable flexible resource distribution, different Orthogonal Frequency Division Multiplexing (OFDM) techniques, such as Filter Bank Multicarrier (FBMC), have been proposed rather than conventional OFDM. FBMC techniques give much more flexible resource distribution in the time-frequency domain, because in OFDM systems the subcarrier spacing must be fixed across the whole used spectrum. In this study, FBMC, a new OFDM-based waveform, is examined together with the waveforms used in Wi-Fi and 4G, the most common communication technologies, and their usage areas. In addition to existing communication methods, there is research on 5th generation communication systems and technologies beyond them, and on new waveforms for these systems. The study also includes simulations for doubly-selective channels, and Bit Error Rate (BER) performance and Peak-to-Average Power Ratio (PAPR) analyses of conventional OFDM and FBMC waveforms in a doubly-selective channel.
Keywords: OFDM, FBMC, PAPR, 5G and Beyond

1. Introduction
With the development of digital technology, communication techniques have been developed to
increase data rates as well as to keep the accuracy of data transfer high. The OFDM modulation
techniques in use today are employed in Wi-Fi and 4G, the most popular systems, and are also
the subject of much research. Since the OFDM waveform used in the aforementioned systems cannot
meet the demands of 5G and beyond, new waveforms have become one of the popular topics of
current research. Within the scope of this study, the new waveforms in the literature and their
comparison with the current OFDM waveform are presented through computer-simulated studies.

Orthogonal waveforms used in the 4G and LTE standards are called Cyclic Prefix-OFDM (CP-OFDM).
In these waveforms, the last portion of the OFDM signal, chosen longer than the channel's delay
spread, is placed at the beginning of each OFDM symbol to avoid inter-symbol interference.
However, repeating the same data in this way reduces the efficiency. For example, a ¼ cyclic
prefix (CP) length is used in Wi-Fi signals. This reduces the efficiency of the system by ¼, and
waveforms such as Universal Filtered Multicarrier (UFMC) and Filter Bank Multicarrier (FBMC)
are planned to be used in order to prevent this.
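As a rough illustration of this overhead, the short Python sketch below is a minimal example
assuming NumPy; the 64-point FFT size, the ¼ CP fraction and the helper name add_cyclic_prefix
are illustrative choices, not taken from any specific standard. It simply prepends the last
quarter of a time-domain OFDM symbol as its cyclic prefix, so 64 useful samples end up being
carried in 80 transmitted samples.

```python
import numpy as np

def add_cyclic_prefix(ofdm_symbol, cp_fraction=0.25):
    """Prepend the last cp_fraction of a time-domain OFDM symbol as its cyclic prefix."""
    cp_len = int(len(ofdm_symbol) * cp_fraction)
    return np.concatenate([ofdm_symbol[-cp_len:], ofdm_symbol])

# Illustrative values (not from any specific standard): 64-point IFFT, 1/4 CP
n_fft = 64
rng = np.random.default_rng(0)
freq_symbols = rng.standard_normal(n_fft) + 1j * rng.standard_normal(n_fft)
time_symbol = np.fft.ifft(freq_symbols)
tx_symbol = add_cyclic_prefix(time_symbol, cp_fraction=0.25)

# 64 useful samples are carried in 80 transmitted samples
useful_fraction = n_fft / len(tx_symbol)
print(len(tx_symbol), useful_fraction)
```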

1
Isparta University of Applied Sciences, Faculty of Technology, Electrical and Electronics Engineering,
Isparta, Turkey
* Corresponding author: [email protected]
The current 4G multi-carrier technology, CP-OFDM, has many problems. The cyclic prefix in the
OFDM modulation technique is used to reduce multipath effects and hence inter-symbol
interference (ISI) and inter-carrier interference (ICI), but the cyclic prefix reduces efficient
spectral utilization. In addition, CP-OFDM has high Out-of-Band (OOB) emission towards adjacent
sidebands and high PAPR. Hence, two 5G waveforms, UFMC and FBMC-OQAM, which do not use cyclic
prefixes and increase spectral efficiency, are discussed here. UFMC and FBMC systems include
additional filtering that can reduce OOB emission. In addition, system performance is analyzed
across different fading channels (Ravindran and Viswakumar, 2019).
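To make the OOB comparison concrete, the following sketch assumes NumPy and SciPy are available;
the subcarrier counts echo Table 1 but the signal itself is random and purely illustrative, and
the helper name psd_db is hypothetical. It estimates the power spectral density of a multicarrier
signal with Welch's method, which is one common way to visualize how much energy a waveform
leaks outside its active band.

```python
import numpy as np
from scipy.signal import welch

def psd_db(signal, fs=1.0):
    """Welch PSD estimate (two-sided, in dB), usable to compare out-of-band leakage."""
    f, pxx = welch(signal, fs=fs, nperseg=1024, return_onesided=False)
    return np.fft.fftshift(f), 10 * np.log10(np.fft.fftshift(pxx))

# Illustrative multicarrier signal: 24 active subcarriers out of 196 (values echo Table 1)
rng = np.random.default_rng(0)
n_sub, n_active, n_sym = 196, 24, 100
grid = np.zeros((n_sym, n_sub), dtype=complex)
grid[:, :n_active] = (rng.standard_normal((n_sym, n_active))
                      + 1j * rng.standard_normal((n_sym, n_active)))
tx = np.fft.ifft(grid, axis=1).ravel()

freqs, spectrum = psd_db(tx)
```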

To support high data rates in 5G and beyond, high-order modulations are intended to be used, so
how 256 and 1024 QAM modulations perform in FBMC waveforms has also been studied (Kamurthi,
2020). In the aforementioned study, the aim is for the new waveforms to show the same bit error
rate as existing OFDM waveforms. In this way, they are expected to exhibit the same performance
as normal OFDM systems while eliminating the problems of different subcarrier lengths, PAPR,
OOB emission and the cyclic prefix.

The existing OFDM modulation technique is not sufficient to meet the need for more flexible
resource allocation in 5G and beyond communication techniques. For this reason, an alternative
technology called the UFMC technique has emerged. In the UFMC technique, Quadrature Amplitude
Modulation (QAM) is used, which avoids orthogonality problems and is also suitable for Multiple
Input Multiple Output (MIMO) technology. In his study, Kamurthi describes the performance of the
UFMC modulation technique. In that article, 256 and 1024 QAM mapping techniques were selected,
the PAPR values of the 256-QAM and 1024-QAM techniques were observed in the simulation results,
and the 1024-QAM technique was found to have lower PAPR values. Therefore, 1024-QAM is the best
mapping technique for UFMC. In the article, the performance of the UFMC technique in terms of
PAPR and spectrum usage was also evaluated, and all PAPR and BER values for both 256-QAM and
1024-QAM are included. Systems based on the OFDM technique are vulnerable to high power
amplifiers (HPA). Also, due to multi-carrier signal overlap, the 4G OFDM technique suffers from
high PAPR. High PAPR causes non-linear distortion in high power amplifiers, and it is concluded
that the UFMC multi-carrier technique also has high PAPR values, which indicates high power
consumption (Kamurthi, 2020). However, FBMC waveforms have a more complex structure than normal
CP-OFDM systems: various filters are used to give the same performance as CP-OFDM signals, and
these filters are included in the filter bank. Studies in the literature aim to reduce the
complexity of FBMC systems and to create suitable filter banks.

FBMC/OQAM (Filter Bank Multicarrier/Offset QAM) systems attract the attention of researchers due
to their advantages over the classical CP-OFDM system. In the cited work, a processing and
synchronization scheme is developed for the promising FBMC/OQAM system; the proposed scheme is
simulated and a plot of the dependence of the bit error probability on the signal-to-noise ratio
is obtained (An, Kim and Ryu, 2016; Doré, Gerzaguet, Cassiau and Ktenas, 2017; Abenov, 2019).

FBMC is one of the candidate modulation techniques for 5G and beyond communication systems,
owing to the disadvantages of OFDM signals such as the use of a cyclic prefix, OOB emission and
so on. It was created as an alternative to the standard w-OFDM (windowed OFDM) waveforms used in
current communication systems such as 4G, LTE and Wi-Fi. FBMC can be adapted to more specific
purposes by using different filters; for example, the OOB emission problem can be solved in this
way. It is a technique of filtering subcarriers. Unlike the normal OFDM technique, the cyclic
prefix is not used, which provides better spectrum utilization. In the FBMC modulation
technique, each subcarrier is filtered separately.

In this study, channel estimation and bit error rates of the FBMC modulation technique in
multipath fading channels were investigated and its BER performance was compared with normal
OFDM systems. Similar bit error rates and PAPR results were obtained in the simulations.
However, the filters used in FBMC modulation create more complexity in the receiver and
transmitter than the OFDM technique.

2. FBMC-OQAM Modulation

Fig. 1. Transmitter and Receiver Block Diagram of FBMC-OQAM Modulation

The fact that OFDM-OQAM modulation does not require the use of a cyclic prefix provides an
advantage in terms of spectral efficiency compared to CP-OFDM. The increase in spectral
efficiency is achieved by modulating each subcarrier with a prototype filter/function without
the need to add redundant guard fields. Good localization of the prototype filter/function used
to modulate the subcarriers is important for robustness against channel variations. While the
localization of the filter/function in time aims to limit intersymbol interference, its
localization in frequency aims to limit the intercarrier interference caused by the Doppler
effect.

It is important that the orthogonality between subcarriers is preserved after modulation.


OFDM modulation using localized filters/functions that only guarantee orthogonality over the
real values is called OFDM-OQAM.

Each subcarrier in the OFDM-OQAM method carries real-valued symbols $a_{m,n}$ corresponding to
the real or imaginary part of a complex OFDM symbol $c_{m,n}$, where m is the frequency index
and n is the time index (El Tabach, Javaudin and Hélard, 2007).

Mathematically, FBMC signals can be expressed with the following equations:

$s(t) = \sum_{m=0}^{M-1} \sum_{n} a_{m,n}\, g_{m,n}(t)$    (1)

$a_{m,n} \in \left\{ \Re\{c_{m,n}\},\ \Im\{c_{m,n}\} \right\}$    (2)

Here $g_{m,n}(t)$ is defined as the pulse shape filter in the transmitter and must satisfy the
following real-field orthogonality condition (3):

$\Re\left\{ \int_{-\infty}^{\infty} g_{m,n}(t)\, g^{*}_{m',n'}(t)\, dt \right\} = \delta_{m,m'}\, \delta_{n,n'}$    (3)

$g_{m,n}(t) = g(t - nT)\, e^{j 2\pi m F t}\, e^{j \frac{\pi}{2}(m+n)}$    (4)

In Equation (4), $g_{m,n}(t)$ is a time- and frequency-shifted version of the prototype filter
$g(t)$: the time shift is $nT$ and the frequency shift is $mF$. The prototype filter $g(t)$ was
derived from Hermite polynomials (Haas and Belfiore, 1997):

$g(t) = \frac{1}{\sqrt{T_0}} \sum_{i = 0,4,8,\ldots} \alpha_i\, H_i\!\left( 2\sqrt{\pi}\, \frac{t}{T_0} \right) e^{-2\pi (t/T_0)^2}$    (5)

With the filter in Equation (5), coefficient values $\alpha_i$ that provide orthogonality at a
time shift of $T = T_0$ and a frequency shift of $F = 2/T_0$ were obtained.
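A minimal continuous-time sketch of Equations (1) and (4) is given below (Python with NumPy
assumed). The Gaussian prototype_filter is only a placeholder standing in for the Hermite-based
pulse of Equation (5), whose coefficients are not reproduced here; the pi/2*(m+n) phase term
follows the usual OQAM staggering convention, and the subcarrier and symbol counts are
illustrative rather than those of the simulation model.

```python
import numpy as np

def prototype_filter(t, t0=1.0):
    """Placeholder prototype pulse (a simple Gaussian). The paper's prototype is the
    Hermite-based pulse of Equation (5) (Haas and Belfiore, 1997); its coefficients
    are not reproduced here."""
    return (2.0 / t0) ** 0.25 * np.exp(-np.pi * (t / t0) ** 2)

def fbmc_oqam_modulate(a, T, F, t):
    """Synthesize s(t) = sum_{m,n} a[m,n] g(t - nT) exp(j2*pi*m*F*t) exp(j*pi*(m+n)/2),
    i.e. Equations (1) and (4), for real-valued OQAM symbols a[m,n]."""
    n_sub, n_sym = a.shape
    s = np.zeros_like(t, dtype=complex)
    for m in range(n_sub):
        for n in range(n_sym):
            g_mn = (prototype_filter(t - n * T)
                    * np.exp(1j * 2 * np.pi * m * F * t)
                    * np.exp(1j * np.pi * (m + n) / 2))
            s += a[m, n] * g_mn
    return s

# Tiny illustrative run: 4 subcarriers, 6 real-valued +/-1 OQAM symbols each
rng = np.random.default_rng(1)
a = np.sign(rng.standard_normal((4, 6)))
t = np.linspace(-2.0, 8.0, 4000)
s = fbmc_oqam_modulate(a, T=0.5, F=1.0, t=t)
```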

2.1. System Model

To test the FBMC and OFDM modulations in doubly-selective multipath and Gaussian noise channels,
a simulation model was established. The parameters of this model are given in Table 1
(Chiavaccini and Vitetta, 2000).

Table 1. Simulation Model Channel Parameters

Parameter                       Value
Carrier Spacing                 15 kHz
SNR Range                       10:5:40 dB (10 to 40 dB in 5 dB steps)
Number of Samples               2,940,000
Number of Subcarriers           196
Number of Active Subcarriers    24
FBMC Prototype Filter           Hermite
Doppler Frequency               0 Hz
Channel Model                   Pedestrian A
Carrier Frequency               2.5 GHz
OFDM Symbol Count               14
FBMC Symbol Count               30

A single-input single-output channel is modeled. The channel parameters used in the simulation
are shown in Table 1. For this channel model, both BER performance and PAPR analyses of the FBMC
and OFDM signal structures were performed (Al-Jawhar, Ramli, Taher, et al., 2021). It is assumed
that the receiver knows the channel and that one-tap channel equalization is performed in the
receiver.
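The one-tap equalization assumed here amounts to dividing each received subcarrier symbol by the
known channel coefficient, as in the minimal sketch below (NumPy assumed; the random channel
taps and QPSK-like symbols are placeholders, not the Pedestrian A model or the paper's actual
transmit data).

```python
import numpy as np

def one_tap_equalize(rx_symbols, channel_freq_response):
    """Zero-forcing one-tap equalization: each received subcarrier symbol is divided
    by the (assumed known) channel coefficient of that subcarrier."""
    return rx_symbols / channel_freq_response

# Illustrative use with perfect channel knowledge on 196 subcarriers (as in Table 1);
# the channel values below are random placeholders, not the Pedestrian A model.
rng = np.random.default_rng(2)
n_sub = 196
h = (rng.standard_normal(n_sub) + 1j * rng.standard_normal(n_sub)) / np.sqrt(2)
tx = np.sign(rng.standard_normal(n_sub)) + 1j * np.sign(rng.standard_normal(n_sub))
rx = h * tx + 0.05 * (rng.standard_normal(n_sub) + 1j * rng.standard_normal(n_sub))
equalized = one_tap_equalize(rx, h)
```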

3. Results
3.1 BER Results in Multipath Channel
In this section, FBMC and OFDM signals are tested with 16, 64 and 256 QAM modulations, and
one-tap channel equalization is performed at the receiver.
The Pedestrian A channel model was used as the channel type and it was assumed that the
receiver and transmitter were fixed, that is, the Doppler frequency was zero.
Figure 2. OFDM vs FBMC 16-QAM BER Performances

Figure 3. OFDM vs FBMC 64-QAM BER Performances

Figure 4. OFDM vs FBMC 256-QAM BER Performances

As seen in Figure 2, Figure 3 and Figure 4, the FBMC waveforms performed very close to the OFDM
waveforms. By not using a cyclic prefix, better resource utilization is achieved. Since channel
codes are used in traditional communication systems, this small BER difference between them can
be eliminated by channel coding.

3.2 FBMC and OFDM PAPR Analysis


Figure 5. OFDM vs FBMC 16-QAM PAPR

Figure 6. OFDM vs FBMC 64-QAM PAPR

Figure 7. OFDM vs FBMC 256-QAM PAPR

In Figure 5, Figure 6 and Figure 7, PAPR analyses of the OFDM and FBMC waveforms are presented
as CCDF (Complementary Cumulative Distribution Function) graphs. In these graphs, the PAPR
performance of the two waveforms is observed to be the same. This means that both waveforms will
perform similarly when a non-ideal amplifier is used.
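For reference, PAPR and its CCDF can be computed as in the following sketch (NumPy assumed; the
random multicarrier symbols and threshold grid are illustrative and do not reproduce the paper's
simulation). The CCDF value at a threshold is simply the fraction of symbols whose PAPR exceeds
that threshold.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of one multicarrier symbol, in dB."""
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

def ccdf(papr_values, thresholds_db):
    """CCDF: probability that the PAPR exceeds each threshold (as plotted in Figures 5-7)."""
    return np.array([(papr_values > th).mean() for th in thresholds_db])

# Illustrative CCDF over 1000 random multicarrier symbols (not the paper's exact setup)
rng = np.random.default_rng(3)
n_sym, n_sub = 1000, 196
grid = rng.standard_normal((n_sym, n_sub)) + 1j * rng.standard_normal((n_sym, n_sub))
symbols = np.fft.ifft(grid, axis=1)
papr_per_symbol = np.array([papr_db(s) for s in symbols])
thresholds = np.arange(4.0, 12.0, 0.5)
print(ccdf(papr_per_symbol, thresholds))
```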

4. Evaluation and Conclusion

For doubly-selective channels, FBMC can provide almost the same performance as normal OFDM.
Different waveform models can be used according to their intended purpose, especially in the
middle SNR region. Considering sector needs and the complexity of the structures involved, the
new waveforms will remain a subject of research for future communication technologies, since
they can be used as OFDM alternatives in different channel models and in different areas. In the
selected channel model, the chosen scenario is one of the most realistic in terms of the time
resolution of the channel impulse responses. In this context, considering that the study is
performed with 64-QAM sub-modulation, a BER performance of 10^-1 around 15 dB is sufficient. In
future studies, the real-time performance of low-subcarrier OFDM systems such as Wi-Fi protocols
should be examined against FBMC waveforms with the help of SDR.

References

Abenov, R. R., Pokamestov, D. A., Rogozhnikov, E. V., Anatoliy, D. Y., & Kryukov, Y. V.
(2019, October). FBMC/OQAM Equalization Scheme with Linear Interpolation. In
2019 International Multi-Conference on Engineering, Computer and Information
Sciences (SIBIRCON) (pp. 0130-0133). IEEE.

Al‐Jawhar, Y. A., Ramli, K. N., Taher, M. A., Shah, N. S. M., Mostafa, S. A., & Khalaf, B.
A. (2021). Improving PAPR performance of filtered OFDM for 5G communications
using PTS. ETRI Journal, 43(2), 209-220.

An, C., Kim, B., & Ryu, H. G. (2016, December). Waveform comparison and nonlinearity
sensitivities of FBMC, UFMC and W-OFDM systems. In 8th International Conference
on Networks & Communications (pp. 83-90).

Chiavaccini, E., & Vitetta, G. M. (2000). Error performance of OFDM signaling over doubly-
selective Rayleigh fading channels. IEEE communications letters, 4(11), 328-330.

Doré, J. B., Gerzaguet, R., Cassiau, N., & Ktenas, D. (2017). Waveform contenders for 5G:
Description, analysis and comparison. Physical communication, 24, 46-61.

El Tabach, M., Javaudin, J. P., & Hélard, M. (2007, June). Spatial data multiplexing over
OFDM/OQAM modulations. In 2007 IEEE International Conference on
Communications (pp. 4201-4206). IEEE.

Haas, R., & Belfiore, J. C. (1997). A time-frequency well-localized pulse for multiple carrier
transmission. Wireless personal communications, 5(1), 1-18.

Kamurthi, R. T., Chopra, S. R., & Gupta, A. (2020, March). Higher Order QAM Schemes in
5G UFMC system. In 2020 International Conference on Emerging Smart Computing
and Informatics (ESCI) (pp. 198-202). IEEE.

Ravindran, R., & Viswakumar, A. (2019, November). Performance evaluation of 5G


waveforms: UFMC and FBMC-OQAM with Cyclic Prefix-OFDM System. In 2019 9th
International Conference on Advances in Computing and Communication (ICACC) (pp.
6-10). IEEE.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

The normative regulations, legislation and standards on the control and preservation of electronic records in the northern countries of Europe

Lana Žaja1*

The northern countries of Europe, namely: Iceland, Norway, Sweden, Finland, Denmark,
Lithuania, Latvia and Estonia, have taken the lead in developing their own standards and thus
carry significant weight in the development of archival legislation and other similar normative
regulations. Normative frameworks are key elements in the safeguarding of electronic
records. Within this framework, archives are faced with the issue of record control, as well as
the construction of an IT infrastructure that must preserve downloaded electronic records in
the long run. The Open Information Archive System (OAIS) reference model and
international ISO standards provide solutions to the presented tasks and are the major models
that underpin the development of record architecture in northern Europe. The aforementioned
digital archives are developed in a modular fashion to allow the development of each IT
section separately. For example, sections such as: receiving digital records, the physical
storage of records, record management and access to records, must be modularly separated to
ensure long-term usability and the possibility of implementing technological upgrades. By
adapting their own standards to legal requirements and international standards, these countries
continuously create and maintain the ideal conditions for long-term archival record keeping.
Keywords: electronic records, standards, norms, legislation, northern European countries.

1. Introduction

Governments around the world, with the help of archives, increasingly scrutinize private,
public and governmental bodies through normative regulations, legislation and standards.
Electronic data laws are a growing challenge and they must comply with various legal and
enforcement rules. Understanding and complying with regulations can be a challenging task,
like treading through a minefield while harmonizing regulations with standards. The northern
countries of Europe have a regulatory regime in which various stakeholders are required to
deliver information on a variety of media, while following standardized instructions. Data and
content standards set out requirements with which public authorities must comply. This poses
a challenge for long-term access to information. Compliance with laws and standards is the
focus of data control and archives are now facing the task of constructing an IT infrastructure
that preserves downloaded electronic records for long-term periods. In the specific Croatian
example, Vlatka Lemić presented the experiences of Scandinavian countries with regard to
electronic records,1 because their example is an interesting and advanced system in terms of

1
Lemić, V. (2003). Archives and electronic records – experiences of Scandinavian countries. Croatia. Bulletin
d'archives. 46(1), pp. 179-207. URL: https://hrcak.srce.hr/7378. (09.12.2019.)

1
Croatian State Archives, Zagreb, Croatia
* Corresponding author: [email protected]
drafting laws and standards tailored to the needs of their own state apparatus. A similar paper,
authored by L. Žaja in 2019, included research into the establishment of digital archives and
the policy of long-term preservation of digital records in selected national archives of EU
countries. The aim was to investigate the policies and practices of digital preservation of all
types of records in publicly available data and strategic documents in the archives of Austria,
Belgium, France, Croatia, Italy, Germany, Poland, Slovenia and the United Kingdom.2 Using
relevant standards helps digital archives harmonize different electronic systems inter-
institutionally. Compliance with standards also enables digital archives to continuously audit
the certification for managing digital information systems. Standardization and certification
contribute to competitiveness, as well as enhancing the image of archives as a reliable and
organized partner. It is beneficial from a marketing standpoint and provides a
guarantee for the safe handling of records and sensitive (classified) data within precisely
defined responsibilities and authorities. David Giaretta3 was chairman of the panel that
produced the reference model OAIS (ISO 14721),4 accepted today as the de facto standard for
building digital archives, on which most northern European countries rely. His organization
also leads a group that has developed an ISO standard for auditing and certifying digital
repositories ISO 16363: 2012.5 At the Faculty of Humanities and Social Sciences of the
University of Zagreb (Croatia), the scholarship on digital archives is being refined and
continuously developing under the leadership of Professor Hrvoje Stančić, PhD – Head of the
Department of archival and documentation sciences,6 Department of Information and
Communication Sciences.7 Although northern European countries are leading the
development of their own standards, they rely heavily on internationally proven standards to
ensure the best possible quality, which they then further develop with their own scientific and
information resources. This is particularly evident in the example of the National Estonian
Archives, which uses the OAIS reference model for its records architecture and the National
Archives of Latvia, whose original standards are based on international ISO standards.

Iceland
The Icelandic Public Archives Act8 (2014) prescribes the obligation to transfer all types of
records and the right to make information accessible. Records covered by the transfer
obligation must be submitted to the public archive when they reach the age of 30 years.
2
Žaja, L. (2019). Digital preservation policy in publicly available data and strategic documents on the websites
of selected national archives of European Union countries. 51. Counseling of the Croatian Archivist Society:
Management of Electronic Material and Contemporary Archival Practice. Croatia. Slavonski Brod., pp. 147-169.
3
Giaretta Associates Ltd. URL: http://giaretta.org/digital-preservation/standards/. (09.12.2019.)
4
Reference Model for an Open Archival Information System (OAIS). URL:
https://public.ccsds.org/pubs/650x0m2.pdf. (09.12.2020.)
5
Space data and information transfer systems — Audit and certification of trustworthy digital repositories. ISO
16363: 2012. URL: https://www.iso.org/standard/56510.html. (09.12.2020.)
6
Stančić, H. (2005). A theoretical model of the persistent preservation of the authenticity of electronic
information objects. Doctoral thesis. University of Zagreb. Faculty of Philosophy. Croatia. URL:
https://bib.irb.hr/datoteka/244465.Ocuvanje_autenticnosti_e-informacijskih_objekata.pdf. (10.12.2020.)
7
Lectures of Prof. Stančić at graduate studies level: Digital Archives, Archival Legislation, Digitization and
Migration of Documents, Digitization of 3D Objects and Spaces, Planning and Design of Material Management
Systems, Management and Business in Archives and Protection of Electronic Material. Lectures of prof. Stančić
at postgraduate study: Disruptive technologies and long-term preservation of e-content and Preservation of
digital record authentication. URL:
https://inf.ffzg.unizg.hr/index.php/hr/odsjek/katedre/arhivistika-i-dokumentalistika. (10.12.2020.)
8
Icelandic Public Archives Act. URL: https://skjalasafn.is/files/docs/ThePublicArchivesAct-in-Iceland-No-77-
2014.pdf. (27.11.2019.)

However, electronic and digital records are generally submitted no later than 5 years. In both
cases, the date of the reference is the date of the last metadata entered or last entered in the
closed case. The „Rules on Electronic Public Data and Data Delivery“9 comprise a set of rules
on electronic data systems used by institutions which are obligated to provide their data.
Government bodies that intend to store their information in electronic form must comply with
these rules, including notifying all electronic systems in which the data are stored. The brief
instructions on electronic archiving outline two basic rules that a provider must report to the
electronic archive data system. Archival electronic data systems are defined by the electronic
institution filing system described in the first rule and submitted to the archive together with
the electronic files / databases as defined in the second rule. The notification of data
submission must be sent to the archive in electronic form according to the attached forms. The
archives must receive the submitted documentation within one month after notification. The
documentation is described in the electronic filing systems with a description of how the data
is searched, the rules for using the electronic system, and a technical description of the data
structure in that system. If the system will not be used as a relational database, the notice must
also include a statement that the submitted version can be transformed into a relational
database in accordance with the rules of the National Archives of Iceland,10 which provide
information on the electronic data systems of those institutions subject to the delivery data.

Norway

The Norwegian Archives Act,11 in force since 1999, together with additional regulations,
provides a complete legal framework for all issues related to archival affairs in public
administration from the original creation of records to the operational documentation of day-
to-day business. The differences in regulations for paper and electronic records relate to
differences in the management and transfer to storage. The Rulebook on Supplementary
Technical and Archival Regulations for the Treatment of Public Archival and Documentary
Materials12 (archival regulation of the Norwegian Archives) states the requirements for the
system of archiving and electronic processing of archival records in Chapter 3. These
requirements are in fact instructions describing the responsibilities, protocols and legal rights
associated with the creation, receipt, exchange, maintenance and use of archives. They
describe the following responsibilities and procedures: responsibility for assigning and
updating user rights; special rights for processing archival material granted to system users
with assigned roles, types of authentication and signatures for documents, as well as rules and
procedures for signing documents, including the use of a digital signature; responsibilities and
procedures for secured quality filing and responsibilities and procedures for recording the
sending and receiving of archived documents. With regard to the requirements for document
formats and functions for export to the electronic archive system, it indicates that electronic
archive documents are stored in one or more document formats that are specified in separate
chapters. This does not apply to documents that can be destroyed after 10 years or less. After
completing the process, the institution must confirm that the conversion to the default archive
format has been completed correctly and that the documents are legible. The National

9
Icelandic electronic documentation. URL: https://skjalasafn.is/rafraen_skjalavarsla. (27.11.2020)
10
Iceland National Archives. URL: https://skjalasafn.is/. (10.10.2020.)
11
Norwegian Archives Act. URL: https://www.arkivverket.no/forvaltning-og-utvikling/regelverk-og-
standarder/lover-og-forskrifter-for-arkiv/arkivloven. (7.11.2020.)
12
Norwegian Ordinance on Supplementary Technical and Archival Regulations for the Treatment of Public
Archival and Documentary Records. URL: https://lovdata.no/dokument/SF/forskrift/2017-12-19-
2286#KAPITTEL_3. (27.11.2020.)

Archives of Norway13 provide that systems that store electronic archival documents have such
export functions that ensure that the stored electronic records can be transferred to another
system or submitted to the archive. When exporting data for delivery or storage, this system
must comply with the requirements set by the archive. This provision does not apply if all
records in the system are allowed to be extracted after 10 years or less, in accordance with the
Archives Act. Regarding the storage and safe preservation of records, protocols have been
drawn up with the following guidelines: the storage media and formats for use have been
specified for the types of records to be submitted electronically, as well as the statutory forms
for those records. These must be submitted in paper form. Responsibilities and procedures for
converting records to the default format include conversion times, guidelines for the disposal
of archived paper and electronically digitized records. Furthermore, the plan includes the
preparation of records, which should be transferred to the archives by implementing protocols
and safeguards that include information security. Noark14 is the name for the Norwegian
Document Management Standard. This standard was developed by the Norwegian State and
its National Archives. All government agencies must use Noark approved systems for record
keeping and electronic filing. Noark 5 is the latest release of the Noark standard and was
officially released in 2008. Noark 5 is a conceptual standard where its technical application is
left to the open market, and the quality of individual solutions in various sectors is not
controlled by the archive itself. This standard specifies the type of information that should be
processed, but does not specify any technical specifications. Therefore, when public
authorities procure new records management systems, it is important to differentiate what the
archive approval system includes and what the company must control in the procurement
process. Noark 5 archive approval is displayed so that the system is logged in and that further
updates to the standard are granted. Final approval is based on a vendor statement, which is
ultimately the most important document when purchasing an electronic database system. The
buyer of this solution must then determine which requirements he wants and test them so that
they fulfill all the specified functions of the electronic system.

Sweden

The Swedish National Archives regulations15 are binding for all government bodies which
hold various types of records. With the support of the Archives Act and other segments of
archival regulation, the regulations regulate, among other things, how records are created,
organized, evaluated, processed, stored, protected and submitted to the archive. General
regulations and general advice of the National Archives of Sweden16 have been published in
the National Archives Collection RA-FS.17 The regulations are generally applied by all
government agencies, including individual bodies that hold public records and documents, but
there are several regulations in the series RA-FS intended only for a specific group of
authorities and this is stated in the title of the regulation. The rules are set out in the following
way:
1. Rules specific to procedures in different types of media. This applies to electronic,
digital, paper or microfilmed records.

13
National Archives of Norway. URL: https://www.arkivverket.no/. (10.10.2020.)
14
NOARK – Norwegian Document Management Standard. URL: https://www.arkivverket.no/forvaltning-og-
utvikling/noark-standarden. (27.11.2020.)
15
Swedish Archival Regulations RA-FS i RA-MS. URL https://riksarkivet.se/offentlig-forvaltning. (28.11.
2020.)
16
National Archives of Sweden. URL: https://riksarkivet.se/startpage. (10.10.2020.)
17
Swedish General Regulations RA-FS. URL: https://riksarkivet.se/generella-foreskrifter. (28.11.2020.)

2. Technical regulations pertaining to requirements for different media, such as
requirements for formats for keeping electronic records or for hard-copy archives. These
regulations should always be read in conjunction with specific media practices.
3. Regulations on archival repositories can be found in a separate section of the RA-FS
on the planning, execution and operation of archival repositories (RA-FS 2013: 4).
4. The general regulations for the extraction and destruction of records in the RA-FS
series deal with the operational or current documentary material available to all or most state
agencies. Much of this regulation concerns the elimination of documentary material with
specific retention periods for financial, personnel, procurement and application documents.
The basis of this regulation is the records of universities and colleges on research and project
cooperation with the EU. By applying general rules, public authorities in their activities must
document the actions of, for example, an internal implementation decision.
Regulations on the management of archives relating to competent archival data are included
in a separate series of titles RA-MS.18 So, this is a series that involves administrative powers
related to both managing, extracting and destroying records. These decisions are for a specific
body or group of authorities. The structure of the RA-MS regulation is used in the following
cases:
1. Reducing the scope („thinning“) of material that is not covered by the RA-FS General
regulations. The most common case of such treatment is the return of records to the applicant
as an alternative to disposal.
2. Exceptions to the General Rules (i.e. exceptions to the RA-FS) apply to the design of
archival repositories or formats for electronic documents. In certain cases, the National
Archives may prescribe exceptions for "thinning out", i.e. keeping those records that would
otherwise be extracted in accordance with the recommendations of the Constitutional
Register.
3. Lending or depositing of records to the competent archives (instead of handover).
The Technical Committee of the Archives participated in the work with the International
Organization for Standardization ISO,19 as well as with special professional bodies such as the
International Council on Archives ICA,20 the Society of American Archivists SAA,21 the
Document lifecycle management DLM,22 and Research Libraries Group RLG.23 RLG has
developed guidelines for Trustworthy Repositories,24 where the idea is that institutions which
meet the high standardized requirements for advanced digital storage have the ability to audit
with a trusted storage certificate, while the National Archives is still exploring options for
certification. ICA has developed de facto standards for describing archival records.
Furthermore, these de facto standards were developed in order to address everything from
questions about the structure of an electronic system to the interconnection of metadata and
digital preservation objects. SAA develops its work on standardization by working with some
de facto standards that have proved useful for inter-archival cooperation. Once a year, a
meeting is held at which relevant working groups share experiences about what the annual
reports should cover. The Technical Committee participates and develops international and
national standards relating to many aspects of business information management such as:

18
Swedish General Regulations RA-MS. URL: https://riksarkivet.se/ansok-om-gallring. (28.11.2020.)
19
International Organization for Standardization ISO. URL: https://www.iso.org/home.html. (27.11.2020.)
20
International Council on Archives ICA. URL: https://www.ica.org/en. (8.11.2020)
21
Society of American Archivists SAA. URL: https://www2.archivists.org/. (28.11.2020.)
22
Document lifecycle management DLM. URL. https://www.webpdf.de/blog/en/dlm-document-lifecycle-
management/. (18.11. 2020.)
23
Research Libraries Group RLG. URL: http://www.rlg.org/. (28.11.2020.)
24
Giaretta, D. (2011). Advanced Digital Preservation. ISBN 9783642168086.

archive records, preservation, archiving, metadata, conversion formats, the migration of
information, digitization and security risk management, accountability, copying and designing
work processes. This Committee also works to ensure that standards are applied to their full
extent in Sweden, for example by authorizing public bodies to process inquiries in a way that
is then usable for certification.

Finland

The implementation of comprehensive reform of archival legislation was crucial in reforming


the National Archives Service.25 The following laws are currently in force in Finland: The
Archives Act26 (1994), the Electronic Communications Act27 (2003), the State and Private
Archives Act28 (2006), the Act on State Aid to Private Archives29 (2006), the National
Archives Act30 (2016), and the Government Ordinance on State Archives31 (2017). In
accordance with the new National Archives Act and the Government Ordinance, the name of
the National Archives was changed to the State Archives (in Helsinki), and the regional
archives are the offices of the State Archives. The name Sámi Archives (Arhiva Sámija) was
retained; it is a part of the National Archives, and its special duty is to be the „keeper of
the documentary cultural heritage of the Sámi“. These regulations removed the outdated model of the
county administration and the inefficient use of the resources of the National Archival Office
in a situation where appropriations were being reduced. The proposed legislation on private
archives would make cooperation between the National Archives of Finland and private
archives funded from the national budget more efficient as the regulation of functions has
been specified and this fosters development that is mutually beneficial.32 The decision to build
a joint central archival facility, as well as the right to reasonably destroy the digitized
documents, enables the storage of analogue records. The implementation of the
comprehensive digitization project for permanent records eliminated the need to build more
archival facilities suitable for permanent storage after the completion of the central archive.
The achievement of strategic goals was evaluated by the self-assessment conducted in 2017
and 2018. Based on these evaluations, an international assessment for the remaining strategic
period was the basis for recommendations and guidelines which will be used in the planning
of the forthcoming strategy leading up to the year 2025. The permanent storage of records
relating to the screening strategy is ensured by various sufficiently standardized processes that
are conditionally controlled according to the research stages. The goal of record management
is to ensure lasting preservation, usability and availability, and the primary focus is on
managing the entire record life cycle in an electronic operating environment. The structured
data included in information systems is also available separately, in line with the goals of

25
Strategy of the National Archives of Finland 2020. URL: https://www.arkisto.fi/en/the-national-archives-
2/copy-of-strategy-2020. (2.10.2020.)
26
Law on the Archives of Finland. URL: http://www.finlex.fi/fi/laki/ajantasa/1994/19940831 (2.11.2020.)
27
Electronic Communications Act of Finland. http://www.finlex.fi/fi/laki/ajantasa/1994/19940831. (02.12.2020.)
28
Law on the Private and National Archives of Finland. URL:
http://www.finlex.fi/fi/laki/ajantasa/2006/20061006. (2.11.2020.)
29
Law on state aid to the private archives of Finland. URL: http://www.finlex.fi/fi/laki/ajantasa/2006/20061006.
(2.10.2020.)
30
Law on the National Archives of Finland. URL: http://www.finlex.fi/fi/laki/alkup/2016/20161145.
(2.10.2020.)
31
Government Ordinance on the National Archives of Finland. URL:
http://www.finlex.fi/fi/laki/ajantasa/2017/20170039. (2.11. 2020.)
32
National Archives of Finland. URL: https://www.arkisto.fi/. (10.10.2020.)

open public information. The Finnish Archives promote electronic archiving within public
administration by determining which records are of lasting importance. The contents of
records relating to basic administrative functions shall be permanently stored in digital form.
The amount of permanently stored analogue records in the possession of the public
administration is evaluated in such a way as to allow the transmission cycle to be transferred
to new media. The final point is that analogue records transferred to permanent storage must
be digitized as part of the process of the transfer itself. Institutions responsible for the transfer
of records are also responsible for the costs of digitization. At the same time, the proportion of
documentation stored in analogue format is evaluated. Digitization, in addition to improving
record accessibility, bridges the functional gap between analogue records and original digital
records. An international survey is currently underway for evaluating the criteria for the
disposal or preservation of digitized records and records of permanent value in analogue form,
which is part of the international practice of preserving and permanently storing analogue
records. Analog records will not be destroyed before the completion of the survey and the
preparation of the final report. The ultimate goal is to have 80-90% of the records transferred
to new media, to be disposed of in permanent storage or destroyed in accordance with the
retention periods.

Denmark

The Danish Ministry of Culture manages the National Archives. The powers of the Minister
in connection with the archiving activity of public authorities are defined by the Archives Act
of 1992,33 which has been modified three times, in 1997, 2000 and 2007. The provisions of
the Act on Management of Records of State Bodies apply to all activities carried out by public
administration bodies and the judiciary. However, rules and regulations differ from one type
of government to another. Therefore, only state bodies, courts and categorized institutions are
required to submit records to the National Archives of Denmark.34 The law further authorizes
the Minister of Culture to determine the rules and regulations relating to the handover of
records to archives. Finally, the Minister of Culture is empowered to determine that in certain
circumstances the rules and regulations governing archives may apply to certain private
companies, institutions and associations that are not considered as part of the public
administration. Specific provisions relating to the activities of the Danish National Archives
are laid down in the Executive Order on Archives signed by the Minister of Culture. This
order stipulates that the State Archivist has the authority to issue further rules, guidelines and
regulations on the following: the downloading of records by state bodies, including approval
and entry into the archiving system; installation, design, restriction and operation of the filing
system; measures to ensure that electronic filing systems are independent; technical
requirements for archival records in various media relating to the storage of records;
evaluation of government bills; time limits for the transfer of records of government bodies,
including the transfer of electronic systems to the archive; lending of records to a state body
which was transferred by the same body to the archive, and evaluation and handing over of
municipal records. The Executive Order on Archives also stipulates that the National
Archives of Denmark may require the submission of the necessary information by public
authorities for expert review and archiving. If a public authority neglects such archival
considerations, the archives may issue an order to take the necessary measures to comply with
the relevant archival regulations. National authorities are obliged to inform the archives of

33
Danish Archives Act. URL: https://www.sa.dk/wp-content/uploads/2014/12/Danish-Archives-Act.pdf.
(6.11.2020.)
34
National Archives of Denmark. URL: https://www.sa.dk/en/. (10.10. 2020.)

their electronic filing systems prior to their application, followed by the evaluation of the
system and, if it meets the set standards, to set a time limit for the transfer of electronic
records to the archive system. This process usually takes place after a period of approximately
five years. The electronic file management systems that need to be preserved must be further
professionally monitored for approval (certification). To this end, the Danish National
Archives shall determine whether the system fulfills the requirements laid down for public
authorities with regard to the management of their documentation. The expert review focuses
on the organizational and technical aspects of the electronic system. The legal requirement for
information on electronic filing systems means that an archive must, in principle, have
information on all electronic filing systems used in the Danish Central Administration.
However, decisions regarding long-term digital preservation or storage may, in some cases, be
made in collaboration with local archives. All public authorities must adhere to the national
standard for the submission of electronic system data. The “Executive Order on Submission
Information Packages – Danish National Standard“35 gives an overview of the elements and
structure in the information packages. The handover of the information packet of data and all
documents in the IT system contains: general rules on information packets for submission,
data structure, data content, and information on the packet to be submitted to the archive. The
data structure contains general rules for data structure, the location of folders and files, index
folders, tables, record contexts, schemas, and documents. The data content relates to table
content along with data type, conversion of table content to digital archive, audio, video or
other file formats, text formatting, digital records, audio and video, geographic features,
compression and optimization without degradation (without deterioration). This is followed
by the Handover Data Package Information, which contains the archive descriptive file,
context documentation, content information in the handover information packet tables, and
SQL queries. Lastly, we find the choice of media for the transmission of information
packages.

Lithuania

The Republic of Lithuania law on documents and archives36 currently in force, uses the
professional term “official electronic record”, which denotes those records created by the
Lithuanian public and state sectors and various non-governmental organizations. Electronic
records shall be prepared in accordance with the specifications on electronic documents
approved by the Central Archives of Lithuania.37 It should be noted that the professional term
“official electronic record” is defined in response to the application of the 2011 Regulation of
the European Parliament and of the Council. Its specification basically describes the contents
of the record, that is, the main components of an official electronic record, such as metadata
and electronic signatures. The Lithuanian Chief Archivist approved the ADOC-V1.0 (current
version)38 specification for electronic records with electronic signatures. The application of
this specification for electronic documents is further defined by the 2017 Order of the

35
Executive Order on Information Packages and Submission – Danish National Standard. URL:
https://www.sa.dk/wp-content/uploads/2014/12/Executive-Order-on-Submission-Information-Packages-Danish-
national-standard.pdf. (7.10.2020.)
36
Laws of the Lithuanian Archives Office. URL: http://www.archyvai.lt/lt/teisine-
informacija_51/teisesaktai/el_specifikacijos.html. (6.12.2020.)
37
Central Archive of Lithuania. URL: http://www.archyvai.lt/lt/lvat.html. (10.10.2020.)
38
Specification ADOC-V1.0 for an electronic record with the electronic signature of the Lithuanian Archives
Office. URL: https://www.e-tar.lt/portal/lt/legalAct/TAR.11EFBB8DA962/tgLnzGXfEL. (6.10.2020.)

Archives of Lithuania specification for official electronic records,39 which currently requires
institutions to accept official electronic records, equivalent to analogue records, prepared in
accordance with ADOC-V1.0. The Archives adhere to Regulation (EU) No. 910/2014 of the European
Parliament and of the Council (eIDAS)40 on electronic identification
and trust services for electronic transactions in the internal market, and efforts are being made
to increase confidence in electronic transactions. This applies in particular to providing a
common basis for secure electronic interaction between citizens, businesses and public
authorities, simpler and more secure transactions, and facilitates mutual recognition of
electronic identification. The electronic signature has already been previously regulated by
Directive 1999/93/EC, and the eIDAS regulation removes existing obstacles to the cross-
border use of electronic identification used in member states for authentication to access
online public services. The Regulation defines which means of electronic identification must
be recognized, establishes the conditions under which Member States recognize the means of
electronic identification of natural and legal persons covered by another Member State's
notified electronic identification system, establishes rules for trust services, in particular for
electronic transactions, and establishes a legal framework for electronic signatures, electronic
stamps, electronic timestamps, electronic documents, electronically registered delivery
services and certification services for web site authentication. One of the key activities in
establishing a digital single market and promoting its values for the digital economy of the
future is to create the appropriate conditions for the mutual recognition of key cross-border
factors such as electronic identification, electronic documents, electronic signatures and
electronic delivery services, and conditions for interoperable services of eGovernment across
the European Union. Building trust in the online environment is crucial to economic and
social development because consumers, businesses and public authorities are often reluctant
to conduct transactions electronically and find issues with the introduction of new services.
With regard to standards, the Lithuanian Archives adhere to the following international
standards for electronic records: ISO 19005-1: 2008; ISO 12234-2: 2008; ISO / IEC 29500-1:
2009; ISO / IEC 29500-2: 2009; ISO / IEC 29500-3: 2009; ISO / IEC 29500-4: 2009 and
ETSI TS 119 101 V1.1.1.

Latvia

The work of the National Archives of Latvia41 is based on the Archives Act42 of 2011. It
defines the basic principles for the collection, preservation, accessibility and management of
the national documentary heritage. The archive implements the following regulations: lists
with time limits for the storage of archival and documentary material and model
nomenclature,43 procedures for using archives in the reading room of the Latvian Archives,44

39
Specification for official electronic records of the Lithuanian Archives Office. URL: https://www.e-
tar.lt/portal/lt/legalAct/a79ba6a0f29a11e692c5977c7316c9b5. (6.10.2020.)
40
REGULATION (EU) NO. 910/2014 EUROPEAN PARLIAMENT AND COUNCIL. URL: https://eur-
lex.europa.eu/legal-content/HR/TXT/HTML/?uri=OJ:JOL_2014_257_R_0002&from=EN. (6.10.2020.)
41
National Archives of Latvia. URL: https://www.arhivi.gov.lv/. (accessed 10 December 2019)
42
Law on Archives of the Republic of Latvia. URL: http://likumi.lv/doc.php?id=205971#p20. (6.10.2020.)
43
Latvian Lists with deadlines for archival and documentary storage and model nomenclature URL:
https://www.arhivi.gov.lv/content.aspx?id=466&mainId=127 (6.10.2020.)
44
Procedures for Using Documents in the Reading Room of the Latvian Archives. URL:
https://www.arhivi.gov.lv/files/files/Ieksejie%20normativie%20akti/Groz_lasitavas%20darbibas%20noteikumos
_1.pdf. (6.10.2020.)

the ethical codex,45 rules of the Latvian National Archives,46 statute of the Commission for
the Accreditation of Private Archives,47 guidelines for the digitization of archives and
documents in public archives,48 guidelines for the application of criminal offenses in
administrative offenses,49 terms of use of the common information system of the State
Archives,50 terms of use of the portal of the common state archival information system51 and
guidelines on conditional partial immunity from fines in cases of administrative violations of
the law.52 In addition to the aforementioned laws, the Latvian Archives has developed seven
of its own standards based on the international ISAD (G)53 standard, accredited by the
International Council on Archives ICA.54 The first Latvian standard AA (VP)55 provides
general guidance on how to make an archival description. The AA (VP) standard is based on
generally accepted theoretical principles of archiving and provides intellectual control over
the authenticity and availability of all types of records described throughout their life cycle.
This standard is primarily concerned with the preparation of descriptions of archival records
after they have been selected for permanent retention, but its provisions can also be applied at
an earlier stage of the record lifecycle, which is of particular relevance to electronic records.
Its policies apply regardless of medium or format, but do not include instructions for
describing specific types of records, such as audio tracks or maps or postage stamps. The
second Latvian standard LVS ISO 1110856 is identical to the international standard ISO
11108: 1999. „Information and documentation – Requirements for persistence and
durability“. This standard establishes requirements for the preservation of original archives
that have not yet been published, as well as publications that are frequently used but also

45
Code of Ethics for the Latvian Archives. URL:
https://www.arhivi.gov.lv/files/files/Ieksejie%20normativie%20akti/LNA%20Etikas%20kodekss.pdf. (6.10.
2020.)
46
Rules of the Latvian National Archives. URL:
https://www.arhivi.gov.lv/files/files/LNA_reglaments_08_01_19__grozjijumi-1.pdf (6.10.2020.)
47
Statute of the Commission for Accreditation of Private Latvian Archives. URL:
https://www.arhivi.gov.lv/files/files/Ieksejie%20normativie%20akti/Privatu_arhivu_akreditacijas_komisijas_noli
kums.pdf. (6.10.2020.)
48
Guidelines for the digitization of archival and documentary material in Latvian public archives. URL:
https://www.arhivi.gov.lv/files/files/Vadlinijas%20digitalizacijai%20arhivos.pdf. (6.10.2020.)
49
Guidelines for the application of criminal offenses in administrative offenses in Latvia.URL:
https://www.arhivi.gov.lv/files/files/Ieksejie%20normativie%20akti/Vadlinijas_adm_parkapumu_lieatas.pdf.
(6.10.2020.)
50
Terms of use of the common information system of the Latvian State Archives. URL:
https://www.arhivi.gov.lv/files/files/Ieksejie%20normativie%20akti/VVAIS_Lietosanas_noteikumi.pdf.
(6.10.2020.)
51
Terms of use of the portal of the common Latvian state archival information system. URL:
https://www.arhivi.gov.lv/files/files/Ieksejie%20normativie%20akti/VVAIS_Portala_Lietosanas_noteikumi.pdf.
(6.10.2020.)
52
Guidelines on conditional partial immunity from fines in cases of administrative breach of law in Latvia. URL:
https://www.arhivi.gov.lv/files/files/Vadl%C4%ABnijas.pdf. (6.10.2020.)
53
ISAD(G): General International Standard Archival Description – Second edition 2011. URL:
https://www.ica.org/en/isadg-general-international-standard-archival-description-second-edition. (6.10.2020.)
54
International Council on Archives ICA. URL: https://www.ica.org/en. (6.10.2020.)
55
The first Latvian standard LVS 369: 2004. „Archive description. General principles: aa (vp) “.
56
The second Latvian standard LVS ISO 11108. „Information and Records – Archival Paper – Requirements for
Persistence and Long Term“.

permanently stored. The third Latvian standard LVS ISO 1179857 is identical to the
international standard ISO 11798: 1999. „Information and documentation – Durability and
authenticity of written, printed and copied paper records – Requirements and test methods”.
This international standard lists libraries, archives and museums according to stability
requirements, which means that special test methods evaluate the durability and authenticity
of paper, printed or copied records over a long period in a protected environment. The fourth
Latvian standard LVS ISO 11800: 200358 is identical to the international standard ISO 11800:
1999. „Information and documentation – Requirements for materials and methods“. This
international standard specifies the production methods and materials to be used in the
commercial production of hard and soft-bound books. This does not apply to hand-binding,
individual sewing or collecting archives, nor to works of art that are movable cultural
heritage. The basic purpose of this standard is not to have a permanent protection function, for
example, it cannot provide guidance for carved artwork on a book cover. This International
Standard has two prescribed annexes and one supplement with a set of guidelines. Each of
them places requirements for a specific category of publication production. The fifth Latvian
standard LVS EN 1047-2: 200659 is identical to the international standard „Secure storage units
– Classification and test methods for fire resistance – Part 2: Storage areas and data
containers“. This part of EN 1047 defines the requirements and includes
test methods to determine the capability of the storage facilities for data storage and the
storage containers for serving content protection. The parameters of sensitivity to humidity
and temperature, as well as protection against the effects of fire outside and inside the storage
depository, are specifically defined. A test method has also been defined to measure the
resilience of data storage spaces and data containers to this effect. The sixth Latvian standard
LVS ISO 14416: 200560 is identical to the international standard ISO 14416: 2003
„Information and documentation – Requirements for binding books, periodicals, serials and
other paper records – Methods and materials.“ This International Standard applies to the
binding of books, periodicals and archival records with special requirements for permanent
preservation. The frequency of use, as well as the extraction of libraries from archival records,
varies significantly. The choice of binding method is based on the specific requirements of the
library or archive. This standard does not apply to those records which the expert has valued
as high artistic or historical values or because of their physical characteristics, records cannot
and should not be imported in accordance with this standard. Special treatment of specific
species must be carried out separately. The standard was prepared as part of the project
„National Unified Library Information System“. The seventh Latvian standard LVS ISO
1179961 is identical to the international standard ISO 11700: 2003 (E), „Requests for
information and records of archives and books“. This international standard specifies the
characteristics of universal repositories for the long-term storage of archives and library
material. This includes the location and construction of the facility as well as the necessary
installations and equipment. It does not include special requirements for long-term

57 Third Latvian standard LVS ISO 11798. „Information and documentation – Durability and authenticity of analogue, printed and copied records – Requirements and test methods“.
58 The fourth Latvian standard LVS ISO 11800: 2003. „Information and Documentation – Material and Method Requirements for Books“.
59 The fifth Latvian standard LVS EN 1047-2: 2006. „Safe repositories: Classification and test methods for fire resistance. Part 2: Storage areas and data containers“.
60 The sixth Latvian Standard LVS ISO 14416: 2005. „Information and documentation. Requirements for binding books, periodicals, serials and other paper records for archives and libraries. Methods and materials“.
61 The seventh Latvian standard LVS ISO 11799. „Information and documentation. Requirements for the preservation of records of archives and library material“.

preservation of records or specific types of records, such as parchment or photographic records, nor does it include repository management procedures. In a number of areas, national or local building codes may already address the construction and safety of public buildings, or of buildings containing valuable items, with respect to natural disasters, robberies or terrorist attacks; these may include professional installations such as alarms and security doors. This international standard therefore avoids detailed rules on the matters listed here, except where its guidelines supplement those requirements.

Estonia

The work of the State Archives of Estonia (fn 62) is based on the Archives Act (fn 63) and the Archives Policy (fn 64), the main document that lays down the general requirements for archival file formats. The activities of the archives fall within the purview of the Ministry of Education and Research and are based on the archiving program, which requires that all state records entered into the archives be kept in a publicly accessible register of records. The archive's implementing regulations are structured as follows: the Statute of the State Archives (fn 65), the Statutes of the Archival Departments (fn 66), the Statute of the Digital Archives (fn 67), the Statute of the Office of Research and Publications (fn 68), the Statute of the Law Office (fn 69), the Statute of the Film Archive (fn 70) and the Film Archive Policy (fn 71). This archive uses a migration strategy for the long-term preservation of digital records, which means that digital records are always stored in a format that is easy to manage with current hardware and software. An analysis of the various file formats has produced a list of so-called "archive formats" containing recommended file formats suitable for long-term storage, and all files uploaded to the digital archive are migrated to these default formats. At the same time, international support for these file formats is monitored, the list of "archive formats" is updated as necessary, and the files are moved to new formats. Digital record descriptions, or metadata, also play an important role, so that all digital records remain easily searchable and manageable. Physically, the digital records are stored as equivalent copies in different places, which ensures that records are preserved even if the data is lost in one location. As with file formats, new storage solutions are analysed to identify the most appropriate media, as well as the hardware needed to read them. Currently, the archive simultaneously stores digital records in

62 National Archives of Estonia. URL: http://www.ra.ee/. (10.12.2020.)
63 Estonian Law on Archives. URL: https://www.riigiteataja.ee/akt/106012016006?leiaKehtiv. (10.12.2020.)
64 Estonian Archive Policy. URL: https://www.riigiteataja.ee/akt/131052017011?leiaKehtiv. (10.12.2020.)
65 Statute of the State Archives of Estonia. URL: https://www.riigiteataja.ee/akt/130112011009?leiaKehtiv. (10.12.2020.)
66 Statute of the archival departments of the Estonian Archives. URL: http://www.ra.ee/wp-content/uploads/2019/11/ra_osakondade_p%C3%B5him%C3%A4%C3%A4rus_2019.pdf. (10.12.2020.)
67 Statute of the Estonian Digital Archives. URL: http://www.ra.ee/wp-content/uploads/2018/01/Digitaalarhiivi-p%C3%B5him%C3%A4%C3%A4rus.pdf. (10.12.2020.)
68 Statute of the Estonian Research and Publications Office. URL: http://www.ra.ee/wp-content/uploads/2018/01/Teadus-ja-publitseerimisb%C3%BCroo-p%C3%B5him%C3%A4%C3%A4rus.pdf. (10.12.2020.)
69 Statute of the Estonian Law Office. URL: http://www.ra.ee/wp-content/uploads/2018/01/Haldusb%C3%BCroo-p%C3%B5him%C3%A4%C3%A4rus.pdf. (10.12.2020.)
70 Statute of the Estonian Film Archive. URL: http://www.ra.ee/wp-content/uploads/2016/11/fa_pm.pdf. (accessed 8 December 2019)
71 Politics of the Estonian Film Archive. URL: http://www.ra.ee/wp-content/uploads/2019/01/Filmiarhiivi.tegevuspohimotted_vers.1.1.pdf. (10.12.2020.)

a network array of disks and magnetic tapes. The digital archive is being developed modularly, which means that digital archive modules such as digital record reception, physical storage, content management and access to records are mutually separate, thus ensuring long-term digital preservation of records.
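The migration-based preservation approach described above can be pictured as a simple rule applied at the point of ingest: files already in an approved "archive format" are stored as-is, while other files are converted to a default format before entering the repository. The following minimal Python sketch illustrates this idea only; the format lists, migration targets and function names are illustrative assumptions and do not describe the actual formats or software used by the Estonian digital archive.

```python
# Minimal illustrative sketch of a format check at ingest, assuming a
# hypothetical list of approved "archive formats" and migration targets.
# None of these values are taken from the Estonian archive's real system.

from pathlib import Path

ARCHIVE_FORMATS = {".pdf", ".tiff", ".xml", ".csv"}                     # assumed approved formats
MIGRATION_TARGETS = {".doc": ".pdf", ".docx": ".pdf", ".jpg": ".tiff"}  # assumed mappings

def plan_ingest(filenames):
    """Decide, for each uploaded file, whether it can be stored as-is,
    must be migrated to a default archive format, or needs manual review."""
    plan = []
    for name in filenames:
        ext = Path(name).suffix.lower()
        if ext in ARCHIVE_FORMATS:
            plan.append((name, "store as-is"))
        elif ext in MIGRATION_TARGETS:
            plan.append((name, f"migrate to {MIGRATION_TARGETS[ext]}"))
        else:
            plan.append((name, "reject or review manually"))
    return plan

if __name__ == "__main__":
    for name, action in plan_ingest(["report.docx", "scan.jpg", "minutes.pdf"]):
        print(f"{name}: {action}")
```

In a real repository such a check would be only one module among several (reception, storage, content management, access), and the approved-format list itself would be revised as international support for individual formats changes.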
Estonian standards for information and document management are mainly prepared by the Estonian Technical Committee for Standardization EVS / TC 22, Information and documentation (fn 72). The main activity of this committee is to cooperate on European and international standards, primarily because Estonia is required to adopt all European standards as Estonian standards in their unaltered state. Participation in European and international standardization work is key to promoting best practices and, among other things, to avoiding requirements that are not appropriate to the set objectives.

The number of valid Estonian standards in December 2020:

                                              Number of valid      Available in
                                              Estonian standards   Estonian
Original Estonian standards (fn 73)                    280              280
Implemented European standards (fn 74)              26,555            1,238
    CEN                                             15,637            1,030
    CLC                                              6,222              193
    ETSI                                             4,696               15
Implemented international standards                    343              222
    ISO                                                293              178
    IEC                                                 50               44
Total number of valid Estonian standards            27,178            1,740

In the absence of an appropriate European or international standard, the drafting of an original Estonian standard may be initiated. It must be ensured that the standard to be drafted does not contain requirements that contradict any European or international standard (differences from a European standard must be introduced as a special national condition into the European standard).
An Estonian standard may be:
1) an original standard;
2) an implemented international or European standard, adopted by:
a) the endorsement method;
b) the reprint method;
c) the translation method.
A standard is designed in such a way as to ensure that it does not conflict with the law or with a European or international standard or its draft. The elements of archival description used by the Estonian archive are based on the ISAD(G) and ISAAR(CPF) standards developed under the

72 Estonian Technical Committee for Standardization EVS / TC 22. Information and documentation. URL: https://www.evs.ee/Standardimine/Tehnilisedkomiteed/EVSTK22/tabid/218/Default.aspx. (10.12.2020.)
73 The original Estonian standards are drafted in Estonian and are generally not available in other languages.
74 The implemented European standards are generally available in English and some of them in Estonian. Other languages are available depending on the existence of official texts.

auspices of the International Council on Archives. The digital archive is based on the OAIS reference model, ISO 14721: 2003 (fn 75). Within the scope of information and documentation description, the Estonian committee works in nine standardization areas of its own: CEN / SS F17 – Administrative records; CEN / TC 346 – Preservation of cultural heritage; ISO / TC 171 – Records management applications; ISO / TC 46 – Information and documentation; ISO / TC 46 / SC 10 – Record storage requirements and conditions for long-term preservation; ISO / TC 46 / SC 11 – Archives / records management; ISO / TC 46 / SC 4 – Technical interoperability; ISO / TC 46 / SC 8 – Quality – Statistics and performance evaluation; and ISO / TC 46 / SC 9 – Identification and description.

Conclusion

International standards and archival legislation are open working categories for upgrading the architecture of records in archives worldwide. The countries of northern Europe involved in the evolution of these standards thus gain access to relevant knowledge, resources and experience, and have the opportunity to network with a worldwide community of experts in this field. This enables them to develop the profession, to gain the recognition of the global archival community and to establish inter-institutional professional cooperation. No matter which storage strategy and standardized digital preservation security strategy they use, decision makers will still face the following questions:

Can it be said with certainty that the stored content will remain unchanged within the set time frame of record preservation?
How will this technology be updated to ensure long-term record availability?
Does this technology make it easier for institutions to hand over records in response to a legal request within the set time limit?
Can this technology evolve alongside regular operational, business and legal requirements?
Can standards-based and model-based technology be used with other content-generation applications?
How will this record-storage architecture accommodate ongoing workflows, and how easily will archival professionals understand it?

In order to meet regulatory compliance requirements, archives must focus on collecting, safekeeping and easily retrieving key records. Once they know which electronic data laws and standards affect them, archives must follow best practices and build an IT architecture that supports all legal requirements. However, due to their complex nature, most regulations still do not provide a clear map of protocols for compliance, and best practice often proves to be the decisive element of any record management project. Archives should by no means disregard regulations, but should adjust the standards to a mutually acceptable outcome for private, state and public institutions. On the other hand, institutions that have to comply with electronic data laws need all questions regarding the handover of records to be answered, including questions on the standardized processes, people and technology required for the efficient management and maintenance of electronic records. Certainly, by investing adequate time in the development of record architecture and thinking strategically about best practices for archiving and protecting records, northern European countries can satisfy the legal requirements and thus continuously create and maintain the ideal conditions for the long-term preservation of archival records.

75 See under fn 4.

Bibliography:
Giaretta, D. (2011). Advanced Digital Preservation. ISBN 9783642168086.
Lemić, V. (2003). Archives and electronic records - experiences of Scandinavian countries.
Croatia. Bulletin d'archives, 46(1), pp. 179-207. URL: https://hrcak.srce.hr/7378.
(09.12.2020.)
Stančić, H. (2005). A theoretical model of the persistent preservation of the authenticity of
electronic information objects. Doctoral thesis. University of Zagreb. Faculty of
Philosophy. Croatia. URL: https://bib.irb.hr/datoteka/244465.Ocuvanje_autenticnosti_e-
informacijskih_objekata.pdf. (10.12. 2020.)
Žaja, L. (2019). Digital preservation policy in publicly available data and strategic documents
on the websites of selected national archives of European Union countries. 51.
Counseling of the Croatian Archivist Society: Management of Electronic Material and
Contemporary Archival Practice. Croatia. Slavonski Brod., pp. 147-169.

Sources:
Central Archive of Lithuania. URL: http://www.archyvai.lt/lt/lvat.html. (10.12.2020.)
Code of Ethics for the Latvian Archives. URL:
https://www.arhivi.gov.lv/files/files/Ieksejie%20normativie%20akti/LNA%20Etikas%2
0kodekss.pdf. (10.12.2020.)
Danish Law on Archives. URL: https://www.sa.dk/wp-content/uploads/2014/12/Danish-
Archives-Act.pdf. (10.12.2020.)
Document lifecycle management DLM. URL: https://www.webpdf.de/blog/en/dlm-document-lifecycle-management/. (10.12.2020.)
Electronic Communications Act of Finland. URL:
http://www.finlex.fi/fi/laki/ajantasa/1994/19940831. (10.12.2020.)
Estonian Archive Policy. URL: https://www.riigiteataja.ee/akt/131052017011?leiaKehtiv.
(10.12.2020.)
Estonian Law on Archives. URL: https://www.riigiteataja.ee/akt/106012016006?leiaKehtiv.
(10.12.2020.)
Estonian Technical Committee for Standardization EVS / TC 22. Information and
documentation. URL:
https://www.evs.ee/Standardimine/Tehnilisedkomiteed/EVSTK22/tabid/218/Default.asp
x. (10.12.2020.)
Executive Order on Information Packages and Submission – Danish National Standard. URL:
https://www.sa.dk/wp-content/uploads/2014/12/Executive-Order-on-Submission-
Information-Packages-Danish-national-standard.pdf. (10.12.2020.)
Giaretta Associates Ltd. URL: http://giaretta.org/digital-preservation/standards/. (10.12.2020.)
Government Decree on the National Archives of Finland. URL:
http://www.finlex.fi/fi/laki/ajantasa/2017/20170039. (10.12.2020.)
Guidelines for the application of criminal offenses in administrative offenses in Latvia. URL:
https://www.arhivi.gov.lv/files/files/Ieksejie%20normativie%20akti/Vadlinijas_adm_pa
rkapumu_lieatas.pdf. (10.12.2020.)
Guidelines for the digitization of archival and documentary material in Latvian public
archives. URL:
https://www.arhivi.gov.lv/files/files/Vadlinijas%20digitalizacijai%20arhivos.pdf.
(10.12.2020.)
Guidelines on conditional partial immunity of fines in cases of administrative breach of law in
Latvia. URL: https://www.arhivi.gov.lv/files/files/Vadl%C4%ABnijas.pdf.
(10.12.2020.)
Iceland National Archives. URL: https://skjalasafn.is/. (10.12.2020.)

Icelandic electronic documentation. URL: https://skjalasafn.is/rafraen_skjalavarsla.
(10.12.2020.)
Icelandic Law on Public Archives. URL:
https://skjalasafn.is/files/docs/ThePublicArchivesAct-in-Iceland-No-77-2014.pdf.
(10.12.2020.)
International Council on Archives ICA. URL: https://www.ica.org/en. (10.12.2020.)
International Organization for Standardization ISO. URL: https://www.iso.org/home.html.
(10.12.2020.)
ISAAR (CPF): International Standard Archival Authority Record for Corporate Bodies,
Persons and Families, 2nd Edition. URL: https://www.ica.org/en/isaar-cpf-international-
standard-archival-authority-record-corporate-bodies-persons-and-families-2nd.
(10.12.2020.)
ISAD (G): General International Standard Archival Description - Second edition 2011. URL:
https://www.ica.org/en/isadg-general-international-standard-archival-description-
second-edition. (10.12.2020.)
Latvian Lists with deadlines for archival and documentary storage and model nomenclature.
URL: https://www.arhivi.gov.lv/content.aspx?id=466&mainId=127. (10.12.2020.)
Law on Archives of the Republic of Latvia. URL: http://likumi.lv/doc.php?id=205971#p20.
(10.12.2020.)
Law on state aid to the private archives of Finland. URL:
http://www.finlex.fi/fi/laki/ajantasa/2006/20061006. (10.12.2020.)
Law on the Archives of Finland. URL: http://www.finlex.fi/fi/laki/ajantasa/1994/19940831
Law on the National Archives of Finland. URL:
http://www.finlex.fi/fi/laki/alkup/2016/20161145. (12.12.2020.)
Law on the Private and National Archives of Finland. URL:
http://www.finlex.fi/fi/laki/ajantasa/2006/20061006. (12.12.2020.)
Laws of the Lithuanian Archives Office. URL: http://www.archyvai.lt/lt/teisine-
informacija_51/teisesaktai/el_specifikacijos.html. (12.12.2020.)
National Archives of Denmark. URL: https://www.sa.dk/en/. (12.12.2020.)
National Archives of Estonia. URL: http://www.ra.ee/. (12.12.2020.)
National Archives of Finland. URL: https://www.arkisto.fi/. (12.12.2020.)
National Archives of Latvia. URL: https://www.arhivi.gov.lv/. (12.12.2020.)
National Archives of Norway. URL: https://www.arkivverket.no/. (12.12.2020.)
National Archives of Sweden. URL: https://riksarkivet.se/startpage. (12.12.2020.)
NOARK – Norwegian Document Management Standard. URL:
https://www.arkivverket.no/forvaltning-og-utvikling/noark-standarden. (12.12.2020.)
Norwegian Law on Archives. URL: https://www.arkivverket.no/forvaltning-og-
utvikling/regelverk-og-standarder/lover-og-forskrifter-for-arkiv/arkivloven.
(12.12.2020.)
Norwegian Ordinance on Supplementary Technical and Archival Regulations for the
Handling of Public Archival and Documentary Materials. URL:
https://lovdata.no/dokument/SF/forskrift/2017-12-19-2286#KAPITTEL_3.
(12.12.2020.)
Politics of the Estonian Film Archive. URL: http://www.ra.ee/wp-
content/uploads/2019/01/Filmiarhiivi.tegevuspohimotted_vers.1.1.pdf. (14.12.2020.)
Procedures for Using Documents in the Reading Room of the Latvian Archives. URL:
https://www.arhivi.gov.lv/files/files/Ieksejie%20normativie%20akti/Groz_lasitavas%20
darbibas%20noteikumos_1.pdf. (14.12.2020.)
Reference Model for an Open Archival Information System (OAIS). URL: https://public.ccsds.org/pubs/650x0m2.pdf.
REGULATION (EU) NO. 910/2014 EUROPEAN PARLIAMENT AND COUNCIL. URL: https://eur-lex.europa.eu/legal-content/HR/TXT/HTML/?uri=OJ:JOL_2014_257_R_0002&from=EN. (14.12.2020.)
Research Libraries Group RLG. URL: http://www.rlg.org/. (14.12.2020.)
Rules of the Latvian National Archives. URL:
https://www.arhivi.gov.lv/files/files/LNA_reglaments_08_01_19__grozjijumi-1.pdf.
(14.12.2020.)
Society of American Archivists SAA. URL: https://www2.archivists.org/. (14.12.2020.)
Space data and data transmission systems - Audit and certification of trusted digital
repositories. ISO 16363: 2012. URL: https://www.iso.org/standard/56510.html.
(14.12.2020.)
Specification for official electronic records of the Lithuanian Archives Office. URL:
https://www.e-tar.lt/portal/lt/legalAct/a79ba6a0f29a11e692c5977c7316c9b5.
(14.12.2020.)
Specification of ADOC-V1.0 for electronic record with electronic signature of the Lithuanian
Archives Office. URL:
https://www.etar.lt/portal/lt/legalAct/TAR.11EFBB8DA962/tgLnzGXfEL.
(14.12.2020.)
Statute of the Archival Departments of the Estonian Archives. URL: http://www.ra.ee/wp-
content/uploads/2019/11/ra_osakondade_p%C3%B5him%C3%A4%C3%A4rus_2019.p
df. (14.12.2020.)
Statute of the Commission for Accreditation of Private Latvian Archives. URL:
https://www.arhivi.gov.lv/files/files/Ieksejie%20normativie%20akti/Privatu_arhivu_akr
editacijas_komisijas_nolikums.pdf. (14.12.2020.)
Statute of the Estonian Digital Archives. URL: http://www.ra.ee/wp-
content/uploads/2018/01/Digitaalarhiivi-p%C3%B5him%C3%A4%C3%A4rus.pdf.
(14.12.2020.)
Statute of the Estonian Film Archive. URL: http://www.ra.ee/wp-
content/uploads/2016/11/fa_pm.pdf. (14.12.2020.)
Statute of the Estonian Law Office. URL: http://www.ra.ee/wp-
content/uploads/2018/01/Haldusb%C3%BCroo-
p%C3%B5him%C3%A4%C3%A4rus.pdf. (15.12.2020.)
Statute of the Estonian Research and Publications Office. URL: http://www.ra.ee/wp-
content/uploads/2018/01/Teadus-ja-publitseerimisb%C3%BCroo-
p%C3%B5him%C3%A4%C3%A4rus.pdf. (15.12.2020.)
Statute of the State Archives of Estonia. URL:
https://www.riigiteataja.ee/akt/130112011009?leiaKehtiv.
Strategy of the National Archives of Finland 2020. URL: https://www.arkisto.fi/en/the-
national-archives-2/copy-of-strategy-2020. (15.12.2020.)
Swedish Archival Regulations RA-FS i RA-MS. URL https://riksarkivet.se/offentlig-
forvaltning. (15.12.2020.)
Swedish General Regulations RA-FS. URL: https://riksarkivet.se/generella-foreskrifter.
(15.12.2020.)
Swedish General Regulations RA-MS. URL: https://riksarkivet.se/ansok-om-gallring.
(15.12.2020.)
Swedish International Standardization. URL: https://riksarkivet.se/Standardisering.
(15.12.2020.)
Terms of use of the joint information system of the Latvian State Archives. URL:
https://www.arhivi.gov.lv/files/files/Ieksejie%20normativie%20akti/VVAIS_Lietosanas
_noteikumi.pdf. (15.12.2020.)

Terms of use of the portal of the common Latvian state archival information system. URL:
https://www.arhivi.gov.lv/files/files/Ieksejie%20normativie%20akti/VVAIS_Portala_Li
etosanas_noteikumi.pdf. (15.12.2020.)
University of Zagreb. Faculty of Philosophy. Department of Information and Communication
Sciences. Chair of Archival and Documentary Studies. URL:
https://inf.ffzg.unizg.hr/index.php/hr/odsjek/katedre/arhivistika-i-dokumentalistika.
(15.12.2020.)

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

E-learning Technology in Higher Education: A Review

Faton Kabashi1*, Zamir Dika 2*, Lamir Shkurti3* and Vehbi Sofiu4*

Abstract: Nowadays e-learning is an established way of learning in higher education that brings electronic media into the field of education. The rapid development of technology has had a great impact on education. E-learning is now one of the most important learning tools; it is becoming a core part of today's education system and opens new avenues to higher education. Today, almost all institutions of higher education deliver programs that incorporate digital media into an online environment to provide versatile learning opportunities, regardless of time and location. A major issue for faculty and educational developers in higher education is to determine which e-learning technology is most appropriate to support their particular teaching needs and provide optimum learning opportunities for students. Many higher education institutions now rely on various technological advances, changing their pedagogical practices and teaching methods and implementing various e-learning strategies such as the Flipped Classroom (FC), Blended Learning, MOOCs, Open Educational Resources (OER), etc. In this paper, a review of e-learning technology in higher education is presented. Fifty relevant papers, published between 2009 and 2021, were reviewed and analyzed. The paper provides a background for e-learning in higher education institutions, the characteristics, advantages and disadvantages of e-learning, its requirements, and the various e-learning strategies used in higher education institutions.

Keywords: E-learning, Higher Education, Open Educational Resources (OERs), Blended Learning, Flipped Classroom, Massive Open Online Courses (MOOCs)

1. Introduction
E-learning is the use of electronic tools, information technologies and communication in
education. E-learning is broadly inclusive of all forms of educational technology in learning
and teaching [1]. E-learning is characterized as the use of information and communication
technology in various educational processes to support and enhance learning in higher
education institutions, which includes the use of information and communication technology
as a complement to traditional classrooms, online learning or a combination of the two modes
[2].

E-learning focuses on the use of technology in the field of education and learning. E-learning refers to the use of advanced information and communication technology in the learning process, where the advanced technology comprises electronic media [3]. Put another way, e-learning can be defined as learning without paper instructional material, using technology to teach [4]. Therefore, it is seen as the opposite of classroom instruction,
1 UBT – Higher Education Institution, Pristina, Kosovo
* Corresponding author: [email protected]

traditional teaching or face-to-face teaching. Several terms are used to cover e-learning, such as online learning, virtual learning, networked learning and web-based learning [4].
ICT is a diverse set of technological tools and resources used to communicate and to create, disseminate, store and manage information [5]. This broad definition of ICT includes technologies such as radio, television, video, DVD, telephone, satellite systems, and computer and network hardware and software, as well as the equipment and services associated with these technologies, such as videoconferencing and electronic mail [5]. The success of ICT-based education depends upon the teacher's ability to keep pace with these developments, since teachers are responsible for quality control, improvement of learning and the aggregate effectiveness of the learning process [6].
The introduction of ICT in higher education has profound implications for the whole education process, ranging from investment to the use of technologies in dealing with key issues of access, equity, management, efficiency, pedagogy and quality [7].
1. Student-centered Learning: ICT provides a technology that has the capacity to promote
and encourage the transformation of education from a teacher directed enterprise towards
student-centered models. As more and more students use computers as information sources
and cognitive tools, the influence of the technology will increase to support their studies
[8][9][10][11].
2. Supporting Knowledge Construction: Learning approaches using contemporary ICTs
provide many opportunities for constructivist learning and support for resource-based, student
centered settings by enabling learning to be related to context and to practice [5].
3. Anyplace Learning: With the help of ICT, educational institutions can offer programs at
a distance mode. Today many students can use this facility through technology-facilitated
learning settings [5].
4. Anytime Learning: Technology-facilitated educational programs remove the
geographical barriers. Students are able to undertake education anywhere and at any time. This flexibility has provided learning opportunities for many more learners who were previously constrained by other commitments [5].
 5. Information Literacy: The growing use of ICT as a tool of everyday life has seen the pool of generic skills expand in recent years to include information literacy. Future developments and growth in technology are highly likely to further support information literacy [5].

Information technology (IT) is considered one of the most fundamental forces for change in all sectors of our lives [12]. Today many students want to learn online and obtain degrees from colleges and universities worldwide, but cannot travel because they live in isolated areas without proper communication systems [13][14]. Consequently, many researchers encourage taking courses under the e-learning system, as it saves the time and energy of students living in regions far away from the universities or colleges in which they are enrolled [15][16]. Indeed, e-learning adoption is increasing in most universities and institutions of higher learning all around the world. E-learning, also known as web-based learning, is defined as the delivery of education in a flexible and easy way through the use of the internet to support individual learning or organizational performance goals [17][18]. Furthermore, there are different kinds of e-learning systems, such as Blackboard and Second Life; both are used to attend lectures, do homework and access many other services.
As IT becomes more robust and easier to use, it increasingly infiltrates academic activities in
higher education. Course management systems let teachers easily integrate technology into
their instruction. Online communication and information access expand a course’s range to
wherever and whenever a professor or student logs on. Higher network bandwidth provides a
quick and efficient conduit to accomplish these activities [19]. As an increasing number of
institutions adopt e-learning strategies, their successes depend not only on the availability of
technology but also on the extent to which faculty and students are supported as they explore
and develop innovative ways to integrate technology into the learning experience.
Pedagogical practices must be adapted, technical proficiency becomes more important, and a
reliable and robust technical infrastructure must be maintained in order to use e-learning
effectively. These demands are translated into a host of new professor and student support
requirements that institutions must address [20]. The use of technology in education,
commonly defined as e-learning, has become a standard component in many courses.
Technology applications are not limited to the classroom – they are also replacing some classroom sessions with virtual sessions or fully replacing classroom courses with online courses.
As institutions adopt e-learning, some important new issues arise:

 institutions must provide an adequate and reliable technical infrastructure to support e-learning activities;
 teachers and students must possess the technical skills to use e-learning tools;
 professors must redesign their courses to incorporate e-learning effectively into their
pedagogy [21].
E-learning is an educational process that uses information and communication technologies to create courses, distribute study content, enable communication between students and teachers, and support study management. However, the success of an e-learning system depends on understanding certain antecedent factors that influence students' acceptance and usage of such systems.
This paper is organized as follows. The second section presents the history of e-learning; the third section describes the research methodology; the fourth section covers the characteristics, advantages and disadvantages of e-learning; the fifth section presents e-learning technology; and the sixth section summarizes the conclusions.

2. History of e-learning
A revolution in information technology and the emergence of the web have made human society take a huge leap. The focus of society has shifted from industry to information. The appearance of information technology has been the most important event at the start of this century; it suddenly became an important element of every aspect of our society, and education is no exception. The use of multimedia and networking is welcomed
by the field of education [22]. Historically, distance education can be traced back to the 18th
century, to the beginning of print-based correspondence study in the United States. In the
mid-19th century correspondence education started to develop and spread in Europe (Great
Britain, France, and Germany) and the United States. Isaac Pitman, the English inventor of
shorthand, is generally recognized as the first person to use correspondence courses [23].
Some experts refer to the education in 21st century as a multimedia network education.

Educational information is being accepted and promoted by nations around the world. According to the National Centre for Education Statistics, in 2008 there were 18 million students enrolled in some online program worldwide, a 1.6% increase from 2002 [22].
ICT supported education quickly became the hot topic in the 1990s due to spreading use of
the World Wide Web and its fast developing applications. These new technologies have
opened up new opportunities for the non-traditional learner as well as for the traditional
training institutions [23].
The education system and the teaching methods and many other things related to the
education field are changing. And this transformation has given birth to e-learning.
Nowadays, almost all available ICT developments are being used for distance education, or –
with today’s more popular term – for e-Learning.
3. Methodology
This review takes into consideration studies published between 2009 and 2021 in major online scientific databases, including Google Scholar, IEEE Xplore, Web of Science, Scopus, Elsevier and ResearchGate. The keywords used were e-learning, higher education, and e-learning technology. In total, 49 documents, including articles, books and web pages, were selected based on specific inclusion/exclusion criteria.
4. Characteristics, advantages and disadvantages of e-learning
E-Learning is the use of ICT to deliver information for education where instructors and
learners are separated by distance, time, or both in order to enhance the learner’s learning
experience and performance [24] [25]. The promise of e-Learning is that it brings powerful
new tools for improving competency and capability, speed, and performance whether an
organization operates at one geographical location or at many. Just as the rise of ICTs
fundamentally changed the nature of how work and communication gets done, the emergence
of e-Learning technologies is fundamentally changing the nature of how people learn. People
are more and more encouraged to learn by themselves and to only learn what they really need
to know to perform their task optimally [23]. A major part of effective e-learning is interactive. Because learners also need a good measure of self-regulation skills, in most cases a coach is provided to support them throughout their learning path. In terms of greater flexibility and timeliness, e-learning can serve training needs 24 hours a day, 7 days a week, whereas traditional classroom-based training initiatives are far more disruptive. Rather than having to wait until a full class of students can be assembled, e-learning allows training to be conducted for individuals at their own convenience.
The adoption of e-learning in education, especially in higher education institutions, has several benefits, and given these advantages, e-learning is considered among the best methods of education. Several studies and authors have described the benefits and advantages derived from the adoption of e-learning technologies in schools [26].

These are some advantages of adoption of e-learning in education obtained from review of
literature:
 Class work can be scheduled around work and family
 Reduces travel time and travel costs for off-campus students
 Students may have the option to select learning materials that meets their level of
knowledge and interest

 Students can study anywhere they have access to a computer and Internet connection
 Self-paced learning modules allow students to work at their own pace
 Flexibility to join discussions in the bulletin board threaded discussion areas at any
hour, or visit with classmates and instructors remotely in chat rooms
 Instructors and students both report eLearning fosters more interaction among students
and instructors than in large lecture courses [27].

In spite of the advantages it offers when adopted in education, e-learning also has some disadvantages. Regardless of these disadvantages, there are many benefits that inspire its use and encourage the search for ways to reduce them. Disadvantages of e-learning listed in various studies include:
 Learners with low motivation or bad study habits may fall behind
 Without the routine structures of a traditional class, students may get lost or confused
about course activities and deadlines
 Students may feel isolated from the instructor and classmates
 Instructor may not always be available when students are studying or need help
 Slow Internet connections or older computers may make accessing course materials
frustrating
 Managing computer files and online learning software can sometimes seem complex
for students with beginner-level computer skills
 Hands-on or lab work is difficult to simulate in a virtual classroom [27].

5. E-learning Technology
To deliver and manage their learning processes, institutions use learning platforms. A learning platform is a set of interactive online services that gives learners access to data, tools and resources to support instructional delivery and management. Learning platforms exist as proprietary or open-source software systems. Proprietary platforms are distributed as closed-source programs, with learning management system (LMS) license prices typically based on a per-user fee. Open-source programs work under the terms of the GNU General Public License (GPL), which is meant to guarantee the freedom to share and modify the program and to ensure that it remains free for all users.
The e-learning requirements are:

 A comprehensive infrastructure, fast communication tools, and modern computer labs.
 Training academics to use technology.
 Building attractive instructional curricula and materials.
 An effective program for the academic processes of student registration, follow-up, and analysis.
 Providing these instructional materials around the clock.
 Reducing costs [28].
E-learning is the use of technology to connect teachers and students who are physically apart, and the training can be delivered by a number of means. Many teachers in schools, colleges and universities already rely on different technological advancements, changing their pedagogical practices and teaching methods. Some of the technologies being used for e-learning in higher education are the Flipped Classroom (FC), Blended Learning (BL), Open Educational Resources (OERs) and Massive Open Online Courses (MOOCs).
5.1 The Flipped Classroom
A Flipped Classroom (FC) is an instructional strategy and a type of blended learning that aims to increase student engagement and learning by having students complete readings at home and work on live problem-solving during class time [29]. In an FC approach, students watch online lectures, work together as a team in online discussions and perform research at home, and then apply the concepts in the classroom with the guidance of a teacher or mentor [30]. In an FC, students can watch lesson videos at any time convenient to them and arrive in the classroom prepared to actively participate, having done their homework [31].

In a Traditional Classroom, students are typically given projects where they would research
their topic at home, watch a lecture/presentation in school, and then given homework to carry
out at home, based on the information they have gathered. In a FC, students are given
information about their topic, along with a presentation/lecture prior to class. In class,
students discuss the information they have received with their peers and teachers and then
carry out homework at home based on their classmate’s and their own findings.
The differences between the Traditional Classroom and the FC are shown in Figure 1.

Figure 1. Differences between the Traditional Classroom and the Flipped Classroom
Figure 2 summarizes the core components of the FC model discussed above separately for in-
class and out-of-class learning [32].

Figure 2. The components of the traditional Flipped Classroom model
The advantage of the FC is that students can watch the e-content or videos at their convenience, so each student can learn at his or her own pace. Secondly, the teacher gains more time to address the learning and emotional needs of students. Thirdly, students get ample coverage and new perspectives on the course material and can spend more time with scientific tools that can only be used in the classroom, which motivates them to perform the work they want to do.
The disadvantage of the FC mode is that students may come to class unprepared. Secondly, students may lack smartphones, tablets or computers, and may have internet connectivity problems [33].
5.2 Blended Learning
Blended Learning (BL) is a model of learning which combines the benefits of both traditional face-to-face learning and ICT-supported learning, including both offline and online learning, with flexibility over time, location, path or pace [34]. In the preparation of university students, BL includes a combination of mobile learning, e-learning, distance learning, and massive open online courses.

BL is a third wave in designing and implementing learning environments and is used in public education, job training, and higher education. With the emergence of information and communication technology and the increased penetration of the Internet, the use of these facilities in learning environments has gradually allowed e-learning to compensate for the shortcomings of traditional face-to-face learning systems [35]. The BL approach appeared to contribute to the effective achievement of learning outcomes, to increase uptake among various audiences and to decrease some educational costs [36]. The goal of this movement is to improve learning quality, extend the boundaries of education and decrease educational costs.

The concept of BL is shown with the help of Figure 3 [33].

Figure 3. Concept of Blended Learning

The advantages of this model are: it saves time, it is cost-effective, and it makes learning more effective and efficient; it gives participants flexible access to learning material and enables learners to study the subject matter independently; materials available online are utilized; learners can hold discussions with teachers or other learners outside of face-to-face teaching; teachers do not spend too much energy on delivery; material can be enriched through internet facilities; the range of learning/training is expanded; and results and the attractiveness of learning are enhanced [37]. The disadvantages of BL are the lack of required infrastructure and the dearth of teachers with expertise in technological advancements.

5.3 Open Educational Resources


Open Educational Resources (OERs) are among the latest technological educational tools in present-day society. Any form of learning and teaching material that is freely accessible, resides in the public domain, or may be used under an open license (such as Creative Commons) is termed an Open Educational Resource. OERs include course materials, modules, textbooks, lecture notes, assignments, tests, projects, software tools, audio, video, and animations [38].
OERs are teaching, learning and research materials in any medium – digital or otherwise – that reside in the public domain or have been released under an open license that permits no-cost access, use, adaptation and redistribution by others with no or limited restrictions. The new definition explicitly states that OER can include both digital and non-digital resources. It also lists several types of permitted use, inspired by the 5R activities of OER [39].
5R activities/permissions were proposed by David Wiley, which include:

 Retain – the right to make, own, and control copies of the content (e.g., download,
duplicate, store, and manage)
 Reuse – the right to use the content in a wide range of ways (e.g., in a class, in a
study group, on a website, in a video)
 Revise – the right to adapt, adjust, modify, or alter the content itself (e.g., translate
the content into another language)
 Remix – the right to combine the original or revised content with other material to
create something new (e.g., incorporate the content into a mashup)
 Redistribute – the right to share copies of the original content, your revisions, or
your remixes with others (e.g., give a copy of the content to a friend) [40].

Advantages of using OER include:

 Expanded access to learning – can be accessed anywhere at any time


 Ability to modify course materials – can be narrowed down to topics that are
relevant to course
 Enhancement of course material – texts, images and videos can be used to support
different approaches to learning
 Rapid dissemination of information – textbooks can be put forward quicker online
than publishing a textbook
 Cost saving for students – all readings are available online, which saves students
hundreds of dollars [41].
Disadvantages of using OER include:

 Quality/reliability concerns – some online material can be edited by anyone at


anytime, which results in irrelevant or inaccurate information
 Limitation of copyright protection – OER licenses change "All rights reserved" into "Some rights reserved" [42], so that content creators must be careful about what materials they make available
 Technology issues – some students may have difficulty accessing online resources
because of slow internet connection, or may not have access to the software
required to use the materials [41].
5.4 Massive Open Online Courses (MOOCs)
Massive Open Online Courses (MOOCs) are web-based online courses for an unlimited number of participants, held by professors or other experts. The term MOOC was originally used by George Siemens and Stephen Downes in 2008, and it has since gained popularity in the USA, especially after Sebastian Thrun, a Stanford professor, offered an artificial intelligence course for free [43]. MOOCs are a new addition to open educational provision; they are offered mainly by prestigious universities on various commercial platforms and are among the latest e-learning initiatives to attain widespread popularity among universities.
So far, MOOCs can be characterized as follows:

 they are online courses


 with no formal entry requirement
 no participation limit
 are free of charge
 and do not earn credits [44].
Basically, any individual with an Internet connection can join a MOOC to access the available resources, interact with other students, and reflect on and share what they have learned with others [45][46]. MOOCs are structured courses where e-content is provided to the learner in the form of a virtual class through a web-based portal, preferably via an LMS (Learning Management System). They can be accessed from any suitable device, i.e., desktop, laptop, tablet or smartphone. The e-content is arranged in a logical sequence, in either topic-wise or weekly format, for learners to meet specific learning outcomes. In addition to the e-content, various activities are provided to the virtual group of learners, such as online quizzes, discussion forums, live chat and live videos [47].
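As a simple illustration of how such a course can be laid out behind an LMS portal, the sketch below models a weekly MOOC structure with its associated activities. It is a minimal sketch under assumed names: the module titles, activity types and field names are illustrative only and are not taken from any particular MOOC platform or LMS.

```python
# Illustrative sketch only: a minimal data structure for a weekly MOOC layout.
# Module titles, activity types and field names are assumptions, not the
# schema of any real LMS or MOOC provider.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Activity:
    kind: str   # e.g. "video", "quiz", "forum", "live_chat"
    title: str

@dataclass
class WeeklyModule:
    week: int
    topic: str
    activities: List[Activity] = field(default_factory=list)

course = [
    WeeklyModule(1, "Introduction to the course", [
        Activity("video", "Welcome lecture"),
        Activity("quiz", "Self-check quiz 1"),
    ]),
    WeeklyModule(2, "Core concepts", [
        Activity("video", "Lecture 2"),
        Activity("forum", "Discussion: applying the concepts"),
        Activity("live_chat", "Weekly Q&A session"),
    ]),
]

# Print the logical sequence a learner would follow.
for module in course:
    print(f"Week {module.week}: {module.topic}")
    for activity in module.activities:
        print(f"  - [{activity.kind}] {activity.title}")
```

Arranging the e-content this way makes the weekly sequencing explicit and leaves room for the additional activities (quizzes, forums, live sessions) that the cited studies describe.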

Nowadays, users can access the courses via mobile devices such as tablets and smartphones more than ever before. With this in mind, and for the convenience of users, providers offer mobile applications for their MOOCs. Moreover, these applications support multiple platforms such as Android and iOS, allowing learners to use mobile devices to enrol, access course content, and participate in all course activities [48]. As of December 2020, roughly 180 million students were registered for MOOC courses, offered by more than 950 universities through around 16,300 courses [20], provided by numerous suppliers such as Coursera, edX, FutureLearn and Swayam [49].
Table 1 shows how the top MOOC providers look in terms of users and offerings [49].

Table 1. Top MOOC providers by users and offerings
Provider        Learners      Courses   Microcredentials   Degrees
Coursera        76 million    4,600     610                25
edX             35 million    3,100     385                13
FutureLearn     14 million    1,160     86                 28
Swayam          16 million    1,130     0                  0

The advantage of MOOCs is that one can learn at one's own pace, at any convenient time. Another advantage is that MOOCs make learning learner-centric and interactive, unlike the traditional method of teaching, which is basically teacher-centric [33].
The disadvantage of MOOCs is that humans learn best socially, so MOOCs are not well suited to individuals who struggle with motivation. Secondly, the low completion rate is a major disadvantage: only around 5% of the total enrolled learners complete MOOCs [34].
Conclusions
E-learning information systems are among the most used recent IT facilities in higher academic institutions. Technological advancement and e-learning form an approach that requires the reframing and restructuring of the traditional mode of teaching and learning. The research has highlighted that e-learning is not simply a mirror of traditional ways of learning, and that numerous institutional and industry factors might have an impact on the overall success of an e-learning strategy. Preparing higher education institutions to adopt e-learning requires a well-designed plan that includes all the individuals of the educational hierarchy, from top to bottom. E-learning is also inseparable from its various advantages and disadvantages, but despite these, learning through e-learning strongly supports the current learning process. This paper has presented a background for e-learning in higher education institutions, its requirements, and the techniques of e-learning used in teaching and learning in higher education institutions, such as Open Educational Resources (OERs), Blended Learning, the Flipped Classroom (FC), and Massive Open Online Courses (MOOCs).

REFERENCES
[1] Adina-Petruta Pavela, Andreas Fruthb, & Monica-Nicoleta Neacsuc (2015). ICT and E-
Learning – Catalysts for Innovation and Quality in Higher Education. Procedia
Economics and Finance, 23(2015), 704 – 711. doi: 10.1016/S2212-5671(15)00409-8
[2] S. Kannadhasan1, M. Shanmuganantham2, Dr. R. Nagarajan3 and S. Deepa, The Role of
Future E-Learning System and Higher Education, International Journal of Advanced
Research in Science, Communication and Technology (IJARSCT), Volume 12, Issue 2,
December 2020.
[3] Himanshu Agarwal1, G. N. Pandey2, Impact of E-Learning in Education, International
Journal of Science and Research (IJSR), ISSN (Online): 2319-7064.
[4] Goyal S, (2012). E-Learning: Future of Education, Journal of Education and Learning.
Vol.6 (2) pp. 239-242.
[5] Ulka Toro (Gulavani) and Millind Joshi, ICT in Higher Education: Review of Literature
from the Period 2004-2011, International Journal of Innovation, Management and
Technology, Vol. 3, No. 1, February 2012
[6] K. Balasubramanian, Willie Clarke-kah, “ICTs for higher education. Background paper
from the common wealth of learning UNESCO,” World Conference on Higher
Education Paris, 2009.
[7] A. Garcia-Valcarcel Munoz-Repiso and F. J. Tejedor, “Use of information and
communication technology in higher education and lecturers competencies,”
[8] B. Loing, “ICT and higher education - general delegate. of ICDE at UNESCO,” 9th
UNESCO / NGO, Collective Consultation on Higher Education, 2005, 6-8 April
[9] O. Ron, “The role of ICT in higher education for the 21st century: ICT as a change agent
for education, Edith Cowan University, Perth, Western Australia
[10] M. Fengchun, “Constructive approach to ICT in education,” APPLIEDUNESCO,
Bangkok, 2010
[11]Y.S.Kiranmayi,“Management of higher education in India,” Crown Publication, New
Delhi, 2009
[12] Alshurideh, M., & Alkurdi, B. (2012). The Effect of Customer Satisfaction upon
Customer Retention in the Jordanian Mobile Market: An Empirical Investigation.
European Journal of Economics, Finance and Administrative Sciences, 47, 69-78
[13] Tarhini, A., Hone, K., & Liu, X. (2014a). The effects of individual differences on e-
learning users’ behaviour in developing countries: A tructural equation model.
Computers in Human Behavior, 41, 153-163
[14] Darawsheh, S., ALshaar, A., & AL-Lozi, M. (2016). The Degree of Heads of
Departments at the University of Dammam to Practice Transformational Leadership
Style from the Point of View of the Faculty Members. Journal of Social Sciences
(COES&RJ-JSS), 5 (1), 56-79.).
[15] Hubackova, S., & Golkova, D. (2014). Podcasting in Foreign Language Teaching.
Procedia Social and Behavioural Sciences, 143, 143-146

[16] Alenezi, A., & Shahi, K. (2015). Interactive E-Learning through Second Life with
Blackboard Technology. Procedia Social and Behavioural Sciences, 176, 891-897.
[17] Clark, R., & Mayer, R. (2011). E-Learning and the Science of Instruction: Proven
Guidelines for Consumers and Designers of Multimedia Learning. Pfeiffer; 3rd Edition
(August 16, 2011).
[18] Maqableh, M., Masa’deh, R., & Mohammed, A. B. (2015). The Acceptance and Use of
Computer Based Assessment in Higher Education. Journal of Software Engineering and
Applications, 8 (10), 557.
[19] Andersson, A., Grönlund, A. (2009). A conceptual framework for E-learning in
developing countries: A critical review of research challenges. The Electronic Journal
on Information Systems in Developing Countries, 38(2), pp. 1-16.
[20] Wright, N. (2010). E-Learning and implications for New Zealand schools: A literature
review, Report to the Ministry of Education, New Zealand, pp. 23-27
[21] Andreea-Maria Tîrziua, Cătălin Vrabieb, Education 2.0: E-Learning Methods, 5th World
Conference on Learning, Teaching and Educational Leadership, WCLTA 2014
[22] Himanshu Agarwal, G. N. Pandey, Impact of E-Learning in Education, International
Journal of Science and Research (IJSR) ISSN (Online): 2319-7064
[23] Attila Nagy, The Impact of E-Learning, ICT Business Consultancy, Budapest, Hungary
[24] Keller, C., Hrastinski, S., & Carlsson, S. A. Students' Acceptance of E-Learning
Environments: A Comparative Study in Sweden and Lithuania. International Business,
395-406.
[25] Tarhini, A., Teo, T., & Tarhini, T. (2016). A Cross-Cultural Validity of the E-Learning
Acceptance Measure (ElAM) in Lebanon and England: A Confirmatory Factor
Analysis. Education and Information Technologies, 21 (5), 1269-1282
[26] Algahtani, A.F. (2011). Evaluating the Effectiveness of the E-learning Experience in
Some Universities in Saudi Arabia from Male Students' Perceptions, Durham theses,
Durham University.
[27] Nageswara Rao Posinasetti (2014). What are the advantages and challenges of online
learning and teaching? Retrieved May 12, 2018, from
https://www.researchgate.net/post/What_are_the_advantages_and_challenges_of_online
_learning_and_teaching
[28] https://helearning.wordpress.com/requirements-of-e-learning/
[29] Europass Teacher Academy: Flipped Classroom, 2020. URL: https://www.teacheracademy.eu/course/flipped-classroom/
[30] Bajunury, A., An Investigation into The Effects of Flip Teaching on Student Learning.
Master's Thesis, Ontario Institute for Studies in Education of the University of Toronto.
[31] S. Bal and M. Gupta, “Technology and E-Learning in Higher Education Technology and
E-Learning in Higher Education,” no. May, 2020.

[32] Ina Blau, Tamar Shamir-Inbal, Re-designed flipped learning model in an academic
course: The role of co-creation and co-regulation, DOI: 10.1016/j.compedu.2017.07.014
[33] Satinder Bal, Monika Gupta, Technology and E-Learning in Higher Education,
https://www.researchgate.net/publication/341734948
[34] A. Picciano, C. Dziuban, and C. Graham, “Blended learning: Research perspectives,”
vol. 2, 2014.
[35] S.Bocconi& G. Trentin, G. Modelling blended solutions for higher education: teaching,
learning and assessment in the network and mobile technology era. Educational
Research and Evaluation. (2014), 20(7-8), 516-535, DOI: 10.1080/13803611.
2014.99636.
[36] C.J.Bonk,& C.R. Graham (Eds.). Handbook of blended learning: Global Perspectives,
local designs. SanFrancisco, CA: Pfeiffer Publishing. (2012). P, 211- 214.
[37] Kintu M J, Zhu C and Kagambe E 2017 Blended learning effectiveness: the relationship
between student characteristics, design features and outcomes International Journal of
Educational Technology in Higher Education
[38] Khalid Al-Hussaini, Huda al-qozani, A review of E-Learning in Higher Education,
https://www.researchgate.net/publication/349847609
[39] Wiley, David; Hilton Iii, John Levi (2018). "Defining OER-Enabled Pedagogy". The
International Review of Research in Open and Distributed
Learning. 19 (4). doi:10.19173/irrodl.v19i4.3601
[40] http://opencontent.org/definition/
[41] "Open Chemistry Education Resources: Advantages and Disadvantages". Board of
Regents of the University of Wisconsin System. Retrieved 24 April 2019.
[42] "Open Chemistry Education Resources: Advantages and Disadvantages". Board of
Regents of the University of Wisconsin System. Retrieved 24 April 2019.
[43] Hu, H. (2013). MOOC migration. Diverse: Issues in Higher Education, 30(4), 10-11.
[44] Michael Gaebel, Book: MOOCs Massive Open Online Courses, pg 3, January 2013
[45] Kop, R. (2011). The challenges to connectivist learning on open online networks:
Learning experiences during a massive open online course. International Review of
Research in Open and Distance Learning
[46] Koutropoulos, A., Gallagher, M. S., Abajian, S. C., de Waard, I., Hogue, R. J., Keskin,
N. O.,& Rodgriguez, C. O. (2012). Emotive Vocabulary in MOOCs: Context &
Participant Retention. European Journal of Open, Distance and E-Learning
[47] Chatterjee, P., & Nath, A., Massive open online courses (MOOCs) in education—A case
study in Indian context and vision to ubiquitous learning. In MOOC, Innovation and
Technology in Education (MITE), 2014 IEEE International Conference on pp. 36-41,
2014.
[48] J. Chauhan, “An Overview of MOOC in India,” no. July 2017, 2018, doi:
10.14445/22312803/IJCTT-V49P117.

[49] https://www.classcentral.com/report/mooc-stats-2020

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

The Experience in TUMnanoSAT Launch Preparation

Viorel Bostan1*, Valentin Ilco2*, Vladimir Melnic3*, Alexei Martiniuc4*, Vladimir Vărzaru5*, Nicolae Secrieru6*

Abstract: The National Center for Space Technologies of Technical University of Moldova
(TUM) team was selected by the United Nations Office for Outer Space Affairs (UNOOSA)
and the Japan Aerospace Exploration Agency (JAXA) for the 4th round of the KiboCUBE Program for the
launch of the TUMnanoSAT nanosatellite from the International Space Station (ISS). In this
paper, a brief overview of the educational and scientific missions of TUMnanoSAT is presented,
together with the impact of these challenges on the system design and the educational
opportunities they create. The primary mission of TUMnanoSAT is to provide hands-on
experience to students in designing, building, and testing a space system with a specific
task/mission. The basic testing procedures of the nanosatellite systems are described, including
the structure, electrical power supply, communications and attitude control.

Keywords: TUMnanoSAT nanosatellite, International Space Station, KiboCube module, JAXA, UNOOSA

1. Introduction

The National Center for Space Technologies (NCST) of Technical University of Moldova
(TUM) has been working towards a series of nanosatellites conforming to the international
CubeSat standard. In 2019, the NCST team participated in the fourth round of the KiboCUBE
Program with the “TUMnanoSAT” nanosatellite project proposal and won this competition
for a free launch by JAXA. The project includes student-led design and fabrication of
critical components, including the payload and CubeSat modules.

TUM spurred the installation of facilities such as a clean room to support building, testing,
and integration of space hardware. Through this process, students acquired experience with
industrial level integration and testing procedures. Undergraduate teams working on
TUMnanoSAT led the design and fabrication of the payload, structure, and system
integration, providing experience with systems engineering, technical writing, and various
cross-disciplinary applications. Over fifty undergraduate students, several graduate students
and faculty members from several departments were involved in this project in both the
development and testing of the TUMnanoSAT nanosatellite subsystems to enhance
understanding of the fundamentals of engineering.

The KiboCube program has had a major impact for the NCST of TUM on improving the
quality of engineering studies based on modern space technologies, attracting young students
to develop and strengthen scientific research in space exploration. The purpose of this paper is

1 Technical University of Moldova, Space Technologies Center, Chisinau, Rep. of Moldova
* Corresponding author: [email protected]

to present the overview of the TUMnanoSAT nanosatellite and to describe our testing and
verification experience in preparation for its launch.

2. TUMnanoSAT setup and overall systems

The NCST of TUM, focusing on the international CubeSat standard, decided to develop a
series of satellites with specific and efficient missions. For the first mission of TUMnanoSAT
our primary objective is to verify under real conditions the functionality of the various
satellite modules and subsystems for future missions. The internal 3D schema and its real
stack implementation of satellite modules are presented in figure 1. The basic missions of
these satellites are:
- testing of nano-structure-based sensors in space conditions;
- to establish effective communication subsystem "satellite-ground station" with the
possibility to modify the communication rate range and ensure high reliability;
- to check the communication protocol "satellite-ground station" with different levels of
access;
- testing of power supply system and the search for the optimal modes of accumulated energy
distribution;
- testing of the attitude-determination sensor subsystem (magnetometers, micro-gyroscopes,
sun sensors) in order to optimize the satellite attitude control process;
- testing of the operation of COTS electronic components, including the on-board computer
and digital memories, under radiation conditions.

Figure 1. Overview of TUMnanoSAT: a) internal 3D schema; b) real stack of nanosatellite modules.

2.1. TUMnanoSAT structure

The main purpose of the structural subsystem is to provide a rigid, reliable structure
that would withstand all harsh launch conditions. The main idea in designing the structural
subsystem is to maximize usable interior space while minimizing the complexity of the
subsystem. The basic constraints imposed on the TUMnanoSAT structure are given by the
CubeSat Design Specifications and JEM Payload Accommodation Small Satellite

Deployment Interface Control Document. Following these standards, the TUMnanoSAT
dimensions are 100 mm x 100 mm x 113.5 mm. The material used in the TUMnanoSAT
structure is aluminum alloy 6061. Because TUMnanoSAT is designed to be launched from
the International Space Station, 3 deployment switches were added to the rails. The switches
physically cut all power lines in the satellite, so that while the satellite is installed in the
deployer no early deployment of antennas or activation of subsystems can occur. The finite
element model of the structure was created using ANSYS Mechanical. The mass properties
were used to construct a model with approximately the same mass as the components. The
natural frequency and static load simulations performed with ANSYS Workbench modal
analysis revealed that the minimum fundamental frequency is 366.57 [Hz], which is higher
than the required 60 [Hz], and that the maximum stresses on the satellite were 94.5 MPa,
100.8 MPa and 19.5 MPa, within the necessary load limits. Stress levels on various parts of
the satellite are displayed in Figure 2, which shows the FEM with the input load, acceleration
and constraint conditions for each analysis case. The margin of safety for the various
components was computed using a factor of safety of 1.5 for yield strength (Fty) and 2.0 for
ultimate strength (Ftu).
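To make the margin-of-safety calculation explicit (the allowable stresses of the 6061 alloy are not listed in the paper, so the numbers below are illustrative assumptions only):

MoS_yield = Fty / (1.5 · σ_max) − 1,   MoS_ultimate = Ftu / (2.0 · σ_max) − 1

Taking, purely for illustration, typical 6061-T6 allowables of Fty ≈ 276 MPa and Ftu ≈ 310 MPa and the reported peak stress of 100.8 MPa gives MoS_yield ≈ 276/(1.5 × 100.8) − 1 ≈ +0.83 and MoS_ultimate ≈ 310/(2.0 × 100.8) − 1 ≈ +0.54; both margins are positive, which is the usual acceptance criterion.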
Figure 2. TUMnanoSAT finite element model (a) and analysis results with: b) X direction acceleration; c) Y direction acceleration; d) Z direction acceleration.

Subsequently, real vibration tests were performed along the X, Y and Z axes, with a low-level
sinusoidal sweep and random vibration. The verification points are: no breakage in the main
structure; the main structure needs to satisfy the specified natural frequency; the natural
frequency before and after the tests needs to remain unchanged; no improper antenna
deployment and no malfunction of the CubeSat; no breakage in glass material such as the
solar battery cover; no loosening of any fasteners. The low-level sinusoidal sweep is adequate
for model verification of simple structures with relatively rigid components, whose flexibility
is confined to mounting bracketry or frequency isolation hardware. It is performed on each
axis over the frequency range 20~2000 [Hz] with an amplitude of 0.5 [G].

The random vibration test is performed over the frequency range 20~2000 [Hz] with an
amplitude of 0.2-0.4 [G^2/Hz]. This level is the envelope of the environments for HTV,
SpaceX Dragon and NG Cygnus (reference: JX-ESPC-101132). This test level was defined by
the Structure Fracture Control Evaluation Form. Some results are presented in Figure 3.

Figure 3. Overview of TUMnanoSAT: a) acceleration measurement point (Z-axis); b) Z-axis vibration at control sensor with satellite.
2.2 The power subsystem of TUMnanoSAT

The power subsystem of TUMnanoSAT has one integrated Li-Po battery pack that contains
two Varta Li-Po cells with a total capacity of 10 Wh. The EPS (Electrical Power
Subsystem) also includes five solar panels. Each Solar Panel Channel has a DC-DC step-up converter
with Maximum Power Point Tracking (MPPT). The output energy for each solar panel is
monitored. The Solar Panel Channels can handle input voltages up to 5.5V and the current
maximum threshold for overcurrent protection is set to 1.8A. The operating temperature range
is from -40 °C to +150 °C and the over temperature threshold is set to +155 °C (the module
will turn off if this threshold is reached and restart automatically when the temperature
decreases to +130 °C). The efficiency of the step-up converters is up to 95%. The step-up
converters work at a 100 kHz fixed frequency. The duty cycle is controlled by the MPPT
algorithm. The boosted output voltage can be accessed through the PC/104 connector for
additional functionality such as charging of the external battery pack, super capacitors, etc.
The general diagram of power subsystem is presented in the figure 4.
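The MPPT algorithm itself is not specified in the paper; a minimal perturb-and-observe loop, sketched below in Python purely for illustration, captures the usual idea (read_panel_voltage, read_panel_current and set_duty_cycle are hypothetical placeholders for the EPS hardware interface):

def perturb_and_observe(read_panel_voltage, read_panel_current, set_duty_cycle,
                        iterations=1000, duty=0.5, step=0.01, d_min=0.1, d_max=0.9):
    """Minimal perturb-and-observe MPPT sketch (illustrative only)."""
    last_power = read_panel_voltage() * read_panel_current()
    direction = 1
    for _ in range(iterations):
        duty = min(d_max, max(d_min, duty + direction * step))
        set_duty_cycle(duty)
        power = read_panel_voltage() * read_panel_current()
        if power < last_power:       # the last perturbation reduced the panel power,
            direction = -direction   # so perturb the duty cycle in the other direction
        last_power = power
    return duty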

All nanosatellite subsystems require a nominal voltage stabilized at 3.3 V or 5 V for normal
operation. The voltage on the battery can vary in the 3.5 V - 4.2 V range, so for the 3.3 V rail
a DC-DC converter with the Buck (step-down) topology is used, and for the 5 V rail a DC-DC
converter with the Boost (step-up) topology. To assure a correct and efficient power
distribution, a simulation model of the power subsystem was created in Simulink.

Figure 4. Diagram of the power subsystem of TUMnanoSAT.
Figure 5. Battery characteristics (left: charge, right: discharge).
For the simulation, the converter models from the standard Simulink library were used. The
parameters of the converters were set according to the technical specifications of all the EPS
components. The simulation showed that the overall battery state of charge increases after a
full cycle (sun and shadow), which shows that in the given configuration all nanosatellite
subsystems will be sufficiently supplied, without disturbance, for any length of time until a
failure of the photovoltaic panels or the battery or the appearance of some faults in the control
system, but not for the 1.0 W consumption case. For EPS reliability, both hardware and
firmware battery protection were used. Each battery has its own overcurrent, overcharge and
overdischarge protections ensured by an integrated protection circuit module (PCM). A special
firmware algorithm is implemented for protection of the batteries from short circuit, deep
discharge and overheating.
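The actual protection thresholds and firmware logic are not published in the paper; the Python fragment below only sketches the kind of threshold supervision described (deep discharge, short circuit, overheating), with made-up limit values:

DEEP_DISCHARGE_V = 3.0   # [V]    illustrative limit, not the real TUMnanoSAT value
SHORT_CIRCUIT_A  = 5.0   # [A]    illustrative limit
OVERHEAT_DEG_C   = 60.0  # [degC] illustrative limit

def battery_faults(voltage, current, temperature):
    """Return the set of fault flags raised by one telemetry sample."""
    faults = set()
    if voltage < DEEP_DISCHARGE_V:
        faults.add("DEEP_DISCHARGE")   # loads should be disconnected
    if abs(current) > SHORT_CIRCUIT_A:
        faults.add("SHORT_CIRCUIT")    # battery must be isolated immediately
    if temperature > OVERHEAT_DEG_C:
        faults.add("OVERHEAT")         # suspend charging/discharging
    return faults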

The EPS subsystem was subjected to real tests; priority was given to the characteristics of the
batteries, since they are COTS components. Therefore, charge and discharge characteristic
tests were performed for each battery cell before and after the environmental tests. These tests
confirmed that the charge and discharge characteristics do not change due to the environmental
tests and are within the nominal range (Figure 5). It is important that, prior to handover after
the environmental tests of TUMnanoSAT, the charge/discharge characteristic of the battery
inside the nanosatellite is measured to confirm that there is no damage. It should be mentioned
that the charge/discharge characteristic test was measured over the range between the
maximum voltage and the minimum voltage.

2.3 Communication Subsystem

The communication subsystem is responsible for receiving commands, sending telemetry and
payload data. The efficiency of the satellite-ground station communication depends on the
distribution of the level functions on the satellite components. The distribution of level
functions is proposed as follows in figure 6:
- The physical level is implemented by the RF module: the transmission / reception and,
concomitantly, the modulation / demodulation of radio signals (in particular AFSK) of the data
provided or accumulated by the local microcontroller of the communication module.
- The level of encapsulation / decapsulation of data according to the AX.25 protocol is
performed on the on-board computer (OBC) of the nanosatellite;
- The application level is performed on the on-board computer (OBC) of the
nanosatellite, which includes the acquisition of data from the basic sensors, including payload
data and captured images.
Therefore, the communication software consists of the following processes, which were
developed within FreeRTOS:
- Application Communication Task - the application communication process with the
ground station;
- DataLink Task - the transport process / communication link;
- Channel / Physical layer Com Task - the process of transmitting / receiving at the
physical level of communication.

In order for any radio amateur to be able to connect to the “university” satellite in the ground
station - university satellite communication system, the AX.25 protocol is adopted as the
standard data transfer protocol at the channel level (the second lowest level in the OSI model). It is
intended for communication between radio amateurs, which is why it is widely used in
amateur packet-based radio communication networks.
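To make the AX.25 encapsulation step concrete, the Python fragment below sketches how AX.25 address fields and an unnumbered-information (UI) frame body are typically formed; the callsigns are hypothetical and the FCS computation and bit stuffing performed at the radio layer are omitted:

def ax25_address(callsign, ssid, last=False):
    """Encode one AX.25 address field: six callsign bytes shifted left by one bit,
    then an SSID byte whose least significant bit marks the end of the address chain."""
    chars = callsign.upper().ljust(6)[:6]
    field = bytes((ord(c) << 1) & 0xFE for c in chars)
    ssid_byte = 0x60 | ((ssid & 0x0F) << 1) | (0x01 if last else 0x00)
    return field + bytes([ssid_byte])

def ax25_ui_frame(dest, src, payload):
    """Destination and source addresses, control byte 0x03 (UI), PID 0xF0 (no layer 3)."""
    return ax25_address(dest, 0) + ax25_address(src, 0, last=True) + bytes([0x03, 0xF0]) + payload

frame = ax25_ui_frame("GROUND", "TUMSAT", b"telemetry packet")   # hypothetical callsigns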

The TUM Space Technology Center has a good terrestrial infrastructure for satellite
communications, monitoring and control. A further development of a satellite data capture
system with flexible configuration is based on SDR technology. The telemetry
communications station is equipped with specialized hardware to provide the uplink and
downlink connections between the satellite in orbit and the ground infrastructure. It is
connected to a set of telemetry antennas and to the parabolic antenna with mixed purpose. The
telemetry antennas and the parabolic antenna are able to orient on two axes towards the
nanosatellite in orbit flight through the actuator drivers of Rotor BIG-RAS HR model.

The stations can operate in the two main radio amateur bands for communication with small
satellites. These bands are the VHF (2m ) and UHF (70 cm) band. The antennas, mounted on
the mast, are connected with RF ecoflex cable to the LNAs (Low Noise Amplifiers). X-Quad
70cm and X-Quad 2m to LNA SP70 and LNA SP200 respectively. The next node connected
from the LNAs consists of coaxial relays which split the signal for feeding it into the ICOM
IC-9100 and USRP B200/E310 to be further processed. Besides the main RF connections
there is also a data line connection from the PC Ground station to the signal processing units
(ICOM and USRP) and a control line connection to the rotator controller and relays.

Figure 6. Hierarchy and interaction of nanosatellite communication processes.

Figure 7. Example of the implementation in GNU Radio
of the communication algorithm with the nanosatellite “TUMnanoSAT”.

The “TUMnanoSAT - Ground Stations” communication test scenario consists in checking the
connection with the nanosatellite, configuring/setting/resetting the parameters of its modules
and requesting payload data or images by the ground station operator, as well as sending the
requested data from the satellite to the station. Owing to the development of transceiver-type
algorithms for the USRP B200 and USRP E310 peripheral devices of the NCST ground
stations, we were able to efficiently verify communication with the nanosatellite
“TUMnanoSAT” through a half-duplex channel.

The communication tests confirmed that the communication procedures and algorithms,
which support a diversified dialogue with different transmission rates and different ways of
packing and encoding messages, ensure efficient and reliable communication between the
nanosatellite and the ground stations.

2.4 Attitude and Orbit Control Subsystem

The attitude control subsystem is a very important one for any mission. In the case of
TUMnanoSAT, only a low-performance ADCS is required to orient the nanosatellite towards
the nadir direction, because the camera has a low resolution and a large aperture angle (56
degrees); on the other hand, the antenna also has a transmit/receive pattern with an angle of
120 degrees. Based on these, the following requirements were formulated for the ADCS
components. The TUMnanoSAT is equipped with five Solar Panels. There is a network of
sensors and a magnetorquer on the Solar Panel PCB, and they can be interfaced to
an Attitude Determination and Control System (ADCS). The network can be all or a
combination of the following: temperature sensor, Sun sensor, magnetorquer, and
gyroscope. The temperature sensor and Sun sensor (photodiode) are positioned on the top
surface of the solar panel whereas the magnetorquer and gyroscope are positioned within the
solar panel and not visible. The magnetorquer is a series of large electrical coils positioned
over several layers of a multi-layer PCB. Furthermore, the PCB is equipped with a
connector for an external magnetorquer. To calibrate the sensors and to test the magnetorquers
and attitude control algorithms, a facility was built to simulate geomagnetic conditions for the
satellite.

The verification of the attitude control algorithms was performed in the terrestrial magnetic
field simulation stand. This stand was developed by NCST and reproduces, in a computerized
way, the magnetic field intensity and direction at any point of the orbit of the
nanosatellite. The results obtained confirm the correctness and quality of the satellite
orientation, which are partially shown in Figure 8.
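The paper does not state which detumbling law was simulated; a common choice for magnetorquer-only CubeSats is the B-dot controller, shown below as an illustrative Python sketch (the gain and saturation values are arbitrary):

import numpy as np

def bdot_dipole(b_now, b_prev, dt, gain=1.0e4, m_max=0.2):
    """B-dot detumbling: command a magnetic dipole opposing the measured rate of
    change of the body-frame magnetic field (b_now, b_prev in tesla, dt in seconds)."""
    b_dot = (np.asarray(b_now) - np.asarray(b_prev)) / dt
    m = -gain * b_dot                  # dipole moment opposes the field variation
    return np.clip(m, -m_max, m_max)   # respect the magnetorquer saturation [A*m^2]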

Figure 8. TUMnanoSAT attitude control:
a-b) Helmholtz triaxial system for testing and calibration of satellite magnetometers and
for testing the working algorithms of magnetorquer developed in-house at NCST;
c) Detumbling simulation algorithm results.

3. Discussion and Conclusions

The TUMnanoSAT for KiboCube design is compliant with the Safety Requirements reported
in the “JEM Payload Accommodation Handbook Small Satellite Deployment Interface
Control Document (JX-ESPC-101133)”, especially the specific ones related to the operations
inside the ISS, including the possibility to be handled by astronaut on board.

High level functional tests on TUMnanoSAT subsystems and assemblies are required for
validation. The functionality of the components shall be verified at different moments during
the acceptance campaign. It should be noted that the testing of the TUMnanoSAT nanosatellite
was conducted by NCST staff using the infrastructure of ROSA, on the basis of a cooperation
agreement between TUM and the Space Science Institute.

This project of the first TUMnanoSAT nanosatellite within the KiboCube program includes
several missions. Starting from the general concept, a 3D model of the TUMnanoSAT
satellite was developed during the Critical Design Review stage, and finally the flight model
of the nanosatellite was built (Figure 9). These missions have mainly educational objectives,
in the realization of which the students are involved; other objectives include elements of
research and technological verification. The experimental tests in terrestrial conditions give
us confidence in their efficient operation in space conditions.

Based on the KiboCube TUMnanoSAT project, the TUM Space Center aimed to directly
involve students in each phase of the development of CubeSat space missions: design,
development and testing of nanosatellite subsystems and processing and use of spatial data to
promote students' interest in engineering and space technologies.

Acknowledgements

This paper reflects the results of the development and testing of the TUMnanoSAT
nanosatellite at the Center for Space Technologies at the Technical University of Moldova
within the 20.80009.5007.09 "Development and launch of the series of nanosatellites with
research missions from the International Space Station, their monitoring, post-operation and
the promotion of space technologies” project, which is to be launched free of charge by JAXA
based on the KiboCube program in the 4th round.

Figure 9. TUMnanoSAT final assembly: a) 3D model of the TUMnanoSAT satellite after the Critical Design Review stage; b) real TUMnanoSAT after final assembly.

References

J. Farkas, (2005). CPX: Design of a Standard Cubesat Software Bus, California State
University, California, USA, 2005.
L. Dusseau et al., (2005). CUBE SAT SACRED: a student project to investigate radiation
effects, In: RADECS 2005 Proceedings, Cap d’Agde, France, 2005.

B. Larsen, The Montana nanosatellite for science, engineering, and technology for the
AFRL/NASA university nanosat program.

J. Bouwmeester et al., (2008). Advancing nanosatellite platforms: the Delfi program, - In:
Proceedings of the 59th International Astronautical Congress, Glasgow, Scotland 2008.

J. Bouwmeester, J. Guo, (2010). Survey of worldwide pico- and nanosatellite missions,
distributions and subsystem technology. Acta Astronautica 67 (2010), 854–862.

CEOS EO handbook – catalogue of satellite missions. – In: http://database.eohandbook.com/database/missiontable.aspx

World's largest database of nanosatellites, more than 1700 nanosats and CubeSats. – In:
http://www.nanosats.eu/

CubeSat Design Specification (CDS) Rev. 13, (2013). The CubeSat Program, Cal Poly SLO,
2013. – In: http://cubesat.org

TUMnanoSAT proposal for CubeSAT Mission Application for the Fourth Round in the
framework of United Nations/Japan Cooperation Programme on CubeSat Deployment
from the International Space Station (ISS) Japanese Experiment Module ”KiboCube”.
– Technical University of Moldova. Chișinau, 2019. 63 p.

Infrastructure of the Spatial Sciences Institute (ROSA - Romania). – In: http://www2.spacescience.ro/?page_id=22&lang=en

The United Nations/Japan Cooperation Programme on CubeSat Deployment from the
International Space Station (ISS) Japanese Experiment Module (Kibo) "KiboCUBE" – In:
http://www.unoosa.org/oosa/en/ourwork/psa/hsti/kibocube_2019.htm

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Signal performance with eon-xr technology and frequency simulation mode with radio telescope on the MATLAB platform

Vehebi Sofiu1*, Faton Kabashi2*, Naim Baftiu3*

Abstract: The idea of using a large spherical plate using eon-xr technology is a depressing
feature of the simulation representations in the 3D system of representing elements with a
broad-spectrum characteristic that allows a telescope to operate with a large surface area of
radius 250 m, which is arranged in a Y-shaped formation and acts as a single telescope with a
range of 24 miles (38 kilometers) in the entire hanging space with an opening angle of 112º.
The shape of the telescope function according to the technology represents the largest radio
telescope in the universe which traverses radio waves whose frequencies are simulated with
wavelengths through the MATLAB platform. Generating frequencies of magnetic wave vary
in different wavelengths of about 0.05 inches or 65 miles (120km). In this research, light has
been used as a great variety of cosmic phenomena including planets, fossil fuel clouds and
dust, which have formed black holes in space. The large radio telescope is positioned at a 40º
zenith angle and corrects spherical deviations in the ground to represent complex
transformational systems from the adjustability of the vectorial system in the form of
polarizations to discrete convergence transformation zones. To capture a signal on the
operating system of the radio transmitter using a hardware platform interface for processing
the simulation of data recorded in the FM transmission spectrum in a file which is read as a
spectrum analyzer in the System toolbox that highlights the local stations of the transmission.
Technology research has reduced the cost of electronics as a source of information which
empowers scientific instruments globally with collaboration offers for all users and
developers of CASPER applications such as compatibility of programming and collaboration
environments.
Keywords: Radio telescope, wavelength, transform signals, specter performance, MATLAB

1.0 Large radio telescope

The radio telescope is an astronomical instrument consisting of a radio receiver and an
antenna system used to detect radio frequency radiation between a wavelength of 0.05 inches
or 65 miles which observes celestial bodies radiating electromagnetic waves in the spectral
area of electromagnetic waves. The radio telescope shown in figure 1 has a 76 m diameter
guided paraboloid antenna. The reflective surface of the Arecibo telescope fills a naturally
occurring plate-shaped depression with a diameter of 305 m. The Arecibo installation is equipped
with a radar transmitter for studying radar signals reflected by such celestial objects as their
planets and satellites. Technological development trends over the years have made it possible
to make continuous improvements to increase the reliability of new electronic amplifiers with
low noise and ultra-advanced resolution, so that today the telescope is used all over the world [1].
1,2 UBT - Higher Education Institution, Pristina, Kosova; 3 University of Prizren, Prizren, Kosova
* Corresponding author: [email protected]

A radio telescope according to eon-xr technology is simply a telescope that was created to
receive radio waves from space. In its simplest form, it has three components:
• Great beginning of the verse
• Large middle of the group
• Big end of group [2]

Figure 1: Large Array Radio Telescope eon-xr


An antenna is a metal device that serves to radiate or receive radio waves. In a word, it
represents a transitional structure between free space and a guiding device. [3]
The transmitting device or transmission line may take the form of a coaxial line or hollow
tube and is used to carry electromagnetic energy from the transmitting source to the antenna
or from the antenna to the receiver. In this case we have the transmitting antenna and the
receiving antenna. [4]
The transmission line is presented in line with Zc (characteristic impedance) and the antenna
is presented as Za
Za = (Rl + Rr) + jXa (1)
which is connected to the transmission line.
• Rl represents the conductivity and dielectric losses associated with the antenna structure.
• Rr represents the radiation resistance that serves to represent the radiation of the
antenna.
• Xa represents the imaginary part of the impedance associated with the antenna
radiation.
Ideally, the energy generated by the source should be totally transferred to Rr, which serves to
indicate the radiation of the antenna. [5]
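As a concrete (purely illustrative) reading of these quantities: the fraction of the accepted power that is actually radiated is the radiation efficiency

e_r = Rr / (Rr + Rl)

so, for example, with assumed values Rr = 50 Ω and Rl = 2 Ω one obtains e_r ≈ 0.96, i.e. about 96% of the delivered power is radiated; these numbers are not from the paper and are given only to illustrate the relation.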
Steady waves can be reduced and stored energy can be minimized by adjusting the Za to Zc of
the line. In addition to receiving or transmitting power, an antenna in an advanced wireless

system is usually needed to amplify the radiation energy in some directions and extinguish it
in other directions. There are one or more antennas to collect incoming radio waves. Most
antennas are parabolic plates that reflect radio waves to a receiver, in the same way as a
curved mirror that can focus visible light at a point. The receiver and amplifier are used to
amplify the very weak radio signal to a measurable level. These days amplifiers are extremely
sensitive and are normally cooled to very low temperatures to minimize interference due to the
noise generated by the movement of atoms in the metal (called thermal noise). Most radio
telescopes nowadays record the data directly on a form of computer disk memory, and
astronomers use sophisticated software to process and analyze the data. [7]
1.1 The performance of the large radio telescope

The radio Large Aperture Telescope is an Arecibo-type spherical telescope that illustrates the
optical geometry of the FAST and its extraordinary features: support structure, secondary
reflector, main active reflector of 500 m which directly corrects spherical deviation and partial
spatial of cable-driven amplifier, servomechanism with adjustable secondary system to hold
the most precise parts of the receivers and a parallel robot. Inside the cab, multi-beam and
multi-band receivers will be installed, covering a frequency range of 70MHz - 5 GHz. Based
on the Communications Toolbox mode, simulated forms of signals with different frequencies
can be used to capture RF signals from the air, with data-generation interference, using
Software Defined Radio (SDR) hardware. Effective visualization is the best way to communicate
information even when the data modeling according to the programming platform is presented
in complex forms. [8]
The platform used by MATLAB simply extracts information about any signals used that it
can capture in the workspace or directly into a file for processing after capture in the
Simulator.
The capture function is used to record the FM Transmission spectrum in a file which is later
read again in a DSP System Toolbox spectrum analyzer that highlights the communication
peaks corresponding to the local transmission stations. The wave capture function is used to
receive an LTE frame from a local antenna in the workspace with the MATLAB application.
The LTE Toolbox is used to decipher the known physical cell identifier to verify the reception
shown in Figure 2. Large radio and telescope systems involve all technological stakeholders
in order to follow innovative trends for the near future by alluding to optimal specifications
and structural criteria which are unified within and outside community structures. [9]

The simulation information is given with the same frequencies f1 = f2 and different f1 ≠ f2
through two communication fields from 0 Hz to 30 Hz and magnitudes from 0-1 figure 3. The

communication signals with the analytical signal application have generated different data of
constant shape and sinusoidal shape. The obtained data were filtered with the Filter Design
application, describing the time-frequency behaviour during exploration as the impulse
response of the signal processing tools as the required response. [10]
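The spectrum estimation described in this section can also be reproduced offline from captured IQ samples; the short Python sketch below (the sample rate, tone offset and noise level are assumed test values, not data from the paper) estimates a two-sided power spectrum with Welch's method, analogous to the spectrum-analyzer step mentioned above:

import numpy as np
from scipy.signal import welch

fs = 2.4e6                                  # assumed SDR sample rate [Hz]
t = np.arange(200_000) / fs
# Stand-in for captured IQ samples (in practice e.g. np.fromfile("capture.iq", dtype=np.complex64))
iq = np.exp(2j * np.pi * 250e3 * t) + 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

f, psd = welch(iq, fs=fs, nperseg=4096, return_onesided=False)   # two-sided Welch PSD
f, psd = np.fft.fftshift(f), np.fft.fftshift(psd)                # centre the spectrum on 0 Hz

print("Strongest carrier offset: %.1f kHz" % (f[np.argmax(psd)] / 1e3))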

Figure 3: Communication signal between frequencies with filter analyzer


1.2 Signal power spectrum

The use of key concepts to update signal integrity through rules of acceptance and analytical
approximation with numerical and measured data ensures the integrity of mathematical path
design and improvement as an engineering practice for performance improvement [11]. The
power spectrum is estimated through the Fourier transform using the direct analytical form
according to equation (1). Frequency signal data is converted into data received as a function
of time. Based on the data simulation and simulation download we provide pulse waves by
the way of transmitting wave motions divided according to the samples in equal frequency
positions, or variable depending on the given modeling, which adjusts the positions through the
spatial interpolation used in the spatial values. located at x-y coordinates as the control routine
on the MATLAB platform. The uniform signal sampling platform defines the connections of
the autocorrelation function to the modeled signal data in a uniform network. Detection of lost
samples at points of change resembles signals of the power spectrum, which is computed through
Fourier transforms.
There are two techniques of Fourier analysis:
- Fourier series analysis resolves a periodic signal - an energy signal - into an infinite sum of
sinusoidal waves.
- Fourier transform analysis performs a similar role in the analysis of non-periodic signals,
which are most used in power signal processing.
Infinite signals have components of sinusoidal waves whose frequency is the main component
imposed on the amplitude of the signal. [12]
Spectral power is represented by the relation:

F_s(f) = \int_{-\infty}^{\infty} v_{xx}(t)\, e^{-j 2\pi f t}\, dt    (2)

F_s[m] = \sum_{n=0}^{N-1} v_{xx}[n]\, e^{-j 2\pi m n / N},   m = 0, 1, 2, 3, ..., N-1    (3)

where v_{xx}(t) and v_{xx}[n] are the autocorrelation functions. Because the autocorrelation
function is symmetric, the sine terms in the Fourier series are zero and the expressions take a
simplified form containing only the real cosine part [13]:

F_s(f) = \int_{-\infty}^{\infty} v_{xx}(t) \cos(2\pi f t)\, dt    (4)

F_s[m] = \sum_{n=0}^{N-1} v_{xx}[n] \cos(2\pi m n / N),   m = 0, 1, 2, 3, ..., N-1    (5)

Equations (4) and (5) represent cosine transforms.

The energy approach relates the analog signal x(t) directly to the signal energy, the integral of
its squared magnitude over time:

E = \int_{-\infty}^{\infty} |x(t)|^2\, dt    (6)

A major attribute in relating the generated power data, in its integrated-square form, to spectral
power densities is Parseval's theorem [14]:

\int_{-\infty}^{\infty} |x(t)|^2\, dt = \int_{-\infty}^{\infty} |X(f)|^2\, df    (7)

The Parseval relation also allows us to calculate the energy of a signal from its Fourier series
coefficients c_k:

\frac{1}{T} \int_{T} |x(t)|^2\, dt = \sum_{k=-\infty}^{\infty} |c_k|^2    (8)

Therefore |X(f)|^2 is equal to the energy density function over the frequency, or simply the
energy spectrum (FS).
In the "direct approach", the energy spectrum is also computed as the squared magnitude of the
Fourier transform:

E_s(f) = |X(f)|^2    (9)
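As a numerical cross-check of relations (6)-(9), the short Python sketch below verifies the discrete form of Parseval's theorem for an arbitrary sampled signal, using NumPy's FFT convention:

import numpy as np

x = np.random.default_rng(0).standard_normal(1024)   # arbitrary sampled signal

X = np.fft.fft(x)
energy_time = np.sum(np.abs(x) ** 2)                 # sum of |x[n]|^2
energy_freq = np.sum(np.abs(X) ** 2) / len(x)        # (1/N) * sum of |X[m]|^2

# The two energies agree to numerical precision, as Parseval's theorem requires.
assert np.isclose(energy_time, energy_freq)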
From the generated sampled data, we have constructed power spectra with data uniformed by
the signal models as part of the noises generated by the different frequencies from 0 to 0.3
amplitudes shown as in figure 4. In these different estimates are derived the results of
spectrograms at the specified frequency determining the treatable time velocity and the
structured modeling time of 4ms. [15]

Figure 4: Presentation of time amplitudes obtained from noise


In practice it is much more important to make some characteristic measurements of
waveforms before classifying the signal. The measurements capture quantitative descriptions
of the existing state and of the differences between the signals produced by the activity of the
simulated samples, generating modeled data by identifying the characteristic features
(sound, frequency, time and speed) as a signal-processing technique with automated
measurements of the data obtained from the MATLAB platform, shown in Figure 5. [16]

Figure 5: Difference of sampled power-frequency signals


Signals which have poor radio coverage are channeled from the feed tree to a receiver located
in the focus cabin on top of the telescope, which has several brass tubular horns. Radio
receivers amplify the input signal of the system, which optimizes different frequency
applications. Wave interference consists of two or more separate antennas widely connected
to transmission lines. Controlling and digitizing analog transmitter signals is acceptable if the
frequency of reception and the frequency of
transient transmission of energy are close to the band of interest. [15]
1.3 Frequency and time simulation mode

The time-dependent frequency simulation mode accelerates the simulation of systems with
different frequencies which characterize the maximum step size for the variables of variable
operators. Phase analysis of such systems uses blocks of Periodic Operators of Physical
Signals. Frequency simulation analyzes the transient effects and modality of the data
generated to perform the phase analysis of a model. [16]
The obtained data are based on analytic calculations whose variables are divided into two
categories:
- Time variables with nominal period 2π / ω0
- Frequency variables at the nominal frequency, x = dx + ax cos(ω0t) + bx sin(ω0t)
The time simulation mode is limited by a small fraction of the nominal frequency. The effects
of accelerating the solution of complex problems appear in the case of the presentation of
sinusoidal variables allowed by the variable selector with large steps. The frequency and time
simulation mode should have a slow dynamic in relation to the simulated values obtained
compared to the constant value variables. [17]
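The decomposition x = dx + ax cos(ω0 t) + bx sin(ω0 t) used for the frequency variables can be illustrated by recovering the three coefficients from samples of a test signal by least squares; the numbers below are arbitrary test values, not data from the paper:

import numpy as np

w0 = 2 * np.pi * 50.0                              # nominal angular frequency [rad/s]
t = np.linspace(0.0, 0.1, 1000)
x = 1.5 + 0.8 * np.cos(w0 * t) - 0.3 * np.sin(w0 * t)   # known test coefficients

A = np.column_stack([np.ones_like(t), np.cos(w0 * t), np.sin(w0 * t)])
dx, ax, bx = np.linalg.lstsq(A, x, rcond=None)[0]  # least-squares fit onto [1, cos, sin]
print(dx, ax, bx)                                  # recovers approximately 1.5, 0.8, -0.3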
Signal blockages outside the physical network are not considered valid sinusoidal sources and
we cannot execute frequency simulation data that does not meet the criteria of model analysis.
[18]

Conclusion
Classroom 3.0 learning applications are based on eon-xr technology that offers practical
benefits over implementation of complex 3D problem solving even remotely.
The use of analytical practices has updated the signal transmission through the analytical
approximation of Fourier transforms for the generating power of numerical data and metered
data.
Generation of sampled data according to power spectra from signal modeling as part of the
noise generated their frequencies range from 0 to 0.3 amplitudes.
MATLAB platform applications are used in the same operating field with different
frequencies between communications from 0 Hz to 30 Hz.
The transformation approach gives the large power data generated in the integrated squares
part which refers to the spectral power density from the Parseval theorem.
Signals with poor wave coverage are channeled from the data source to the top of the large
telescope which amplifies and optimizes different frequencies by using phase analysis of
systems in blocks.
The provision of generating modulations depends on the way of transmitting the wave
movements of communications in separate frequencies according to the simulation samples in
equal frequency positions.
The connections of the uniform autocorrelation functions have the analytical symmetry by
which the transformations in the Fourier series are simplified to the real part. Analytical data
of energy Fourier transforms are an adaptation of the interactive data language provided by
simulated modeled measurements.
Frequency adjustment accelerates the modulation of systems with different frequencies which
characterize the variability of periodic signal operators which directly illustrates the spherical
deviation with adjustable secondary system to keep the most precise parts of the receivers in
the system.
Frequency dependence on time accelerates the simulation of different speeds that characterize
the magnitude of coverage of physical operators.
Reference
[1] Kenneth I. Kellermann; Ellen N. Bouton; Sierra S. Brandt; The National Radio
Astronomy Observatory and Its Impact on US Radio Astronomy, USA, 2021
[2] www.eon-xr (Large Array Radio Telescope), USA, 2021
[3] Constantine A. Balanis; Antenna Theory: Analysis and Design; 2014.
[4] Mario Garcia-Sanz; Robust Control Engineering; 2017
[5] William A. Imbriale; Large Antennas of the Deep Space Network; 2008
[6] Bernard F. Burke; An Introduction to Radio Astronomy 3rd Edition; 2010
[7] Spectrum Management for Science in the 21st Century, USA, 2010

[8] Claus O. Wilke; Fundamentals of Data Visualization; 2019
[9] James W. Mar, Harold Liebowitz; Structures Technology for Large Radio and Radar
Telescope Systems, 2003
[10] Daniel Aronsson; Modeling and Simulation of Signal Processing Applications with
MATLAB and Simulink; 2021
[11] Eric Bogatin; Signal and Power Integrity, 2020
[12] Khamies M. A. El-Shennawy; Fourier Series and Power Spectra; 2014
[13] John Semmlow, Signals and Systems for Bioengineers, 2012
[14] https://support.lumerical.com/hc/en-us/articles/360034394274-Using-Parseval-s-
theorem-to-check-for-energy-conservation-between-the-time-and-frequency-domain
[15] Modeling and Simulation of Systems Using MATLAB and Simulink; Modeling and
Simulation of Systems Using MATLAB and Simulink,2011
[16] Dac-Nhuong Le, Abhishek Kumar Pandey, Sairam Tadepalli, Pramod Singh Rathore,
Jyotir Moy Chatterjee; Network Modeling, Simulation and Analysis in MATLAB, 2019
[17] Mahmood Nahvi; Signals and Systems; California, 2012
[18] Katalin Popovici, Pieter J. Mosterman; Real-Time Simulation Technologies: Principles,
Methodologies, and Applications, 2013
[19] Elena Cordero; Luigi Rodino, Time-Frequency Analysis of Operators; 2020

[20] Franz Hlawatsch; Time-Frequency Analysis and Synthesis of Linear

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Sun Dyeing of Wool Yarns with Pyracantha coccinea Roem. Fruits

Selime ÇOLAK1, Meruyert KAYGUSUZ2*, Fatoş Naslihan ARĞUN3

Abstract: People have used the natural dyes for millennia and have perfected the art of
natural dyeing. Today’s consumers have preferred using natural dyed fabrics or clothes, so the
appeal of natural dye is returning. Most natural dyeing methods have required a heat source to
extract dye. Solar dyeing relies on natural heat from the sun. Aside from being
environmentally friendly and super low cost, this method requires very few tools and little
work, and results in non-repeatable, beautiful colors. Pyracantha coccinea Roem. or scarlet
firethorn is one of the shrubs belonging to the Rosaceae family, which remains green in
summer and winter. It has plentiful orange-red globose fruits in the form of grape clusters.
The most important feature of the plant is that it is resistant to cold and drought. For this
reason, it is used to create hedges since even in winter, the fruits on it do not fall off, creating
a decorative image. In this study, ecological sun dyeing process of wool carpet yarns with P.
coccinea fruits extract at different pHs was investigated. When the colors obtained were
evaluated, it was seen that the use of different chemicals can create different tones of brown
color in wool yarns. Thus, it was revealed that P. coccinea fruits can be used as a natural dye
source and the sun dyeing method is appropriate for textile dyeing.

Keywords: Eco-friendly method, Sun dyeing, Natural dyes, Pyracantha coccinea, Wool
yarns

1. Introduction

Natural dyes have a historical, cultural and economic importance and value in the coloring of
textile products. The art of using plant dyes has been improved today with the development of
science and technology, and has gone far beyond the traditional practices (Erdem Işmal,
2019). Recently, awareness of humans and their grown demands on eco-protection, eco-safety
and health have created new approaches for the use of natural colorants that have no adverse
effects on the environment and aquatic ecosystem (Yusuf et al., 2017). The issue of advanced
approaches for natural bio-resources and bio-based colorants and their sustainable use for
functional clothing is currently being reconsidered in textile research and development.
Therefore, the appeal of natural dyes is returning.

The interest in natural dyes, especially in the textile industry, is increasing day by day. The
reason for this situation is that synthetic dyes can have allergic, toxic and carcinogenic effects
on the human body and environmental concerns that have arisen recently. Not only to prevent

health problems, but also to prevent the degeneration of cultures and their loss by losing their
values, people show interest in products that carry the traces of their cultures and take their
source from nature. Based on this idea, aesthetic perception also changes (Taylan and Atlıhan,
2018). Many craftsmen and textile designers or artists have also started to use natural
materials and natural dyes.

Most natural dyeing methods have one thing in common; a heat source is required to extract
dye. Sun dyeing or solar dyeing is different by the type of heat source used. Solar dyeing
relies on natural heat from the sun, whereas other dye methods commonly use artificial heat
sources like an electric stove top. Aside from being environmentally friendly and super low
cost, this method of natural dyeing is suitable for everyone and can be completed in just a few
steps (Irwin, 2020). Solar dyeing is a natural method that requires very little work and results
in non-repeatable, beautiful colors. It also requires very few tools, unlike stove-top dyeing and
other dyeing techniques. It is an easy, natural and enjoyable way to give the yarn color and
character (Ffrench, 2017).

Pyracantha coccinea Roem. or scarlet firethorn is one of the thorny perennial shrubs
belonging to the Rosaceae family, which is up to three meters tall and remains green in
summer and winter. It produces small, bright red berries. The fruit can be cooked to make
jellies, jams, sauces and marmalade (www.wikipedia). Its red fruits, which are in the size of
peas, are also known by names such as "dog apple", "rabbit apple" and "bird apple"
(Sarıkürkçü and Tepe, 2015). It is naturally found in Tekirdağ, Istanbul, Bursa, Bolu,
Zonguldak, Sinop, Tokat, Trabzon, Artvin, Konya, Ankara, İçel and Hatay regions in our
country (Kambur, 2009). The most important feature of the plant is that it is resistant to cold
and drought. For this reason, it is used to create hedges since even in winter, the fruits on it do
not fall off, creating a decorative image.

In this study, ecological sun dyeing process of wool carpet yarns with P. coccinea fruits
extract at different pHs was investigated.

2. Material and Method


2.1. Materials
The plant samples used in the research were collected from Gerzele region of Denizli
Province in September 2020 and their fruits were separated (Fig. 1).

Figure 1. The collected Pyracantha coccinea (a) and its separated fruits (b)

Double twisted wool (100%) carpet yarn was used. Tannic acid (Tekkim, Turkey) was used in
the mordanting process. pH adjustment was performed by vinegar and sodium bicarbonate.

Four glass jars with lids were used for sun dyeing process. The glass transmits the sun rays
and contributes to the heating of the dye solution inside the jars.

2.2. Method
Extraction of plant dye
100 g of fresh Pyracantha coccinea fruits were immersed in 2 L of water and boiled for 2
hours. The extract was cooled for 30 minutes. The remnants of plant matter were removed
from the extract and the obtained dye solution was used in subsequent applications. pH value
of the dye solution was determined to be 5.5.

Preparation of yarn
Wool yarn was washed in a solution of soda and detergent in order to remove the impurities
on them. They were made ready for dyeing by rinsing several times with warm water.

Sun dyeing application


In solar dyeing applications, plant materials can be put directly into the jars with the yarn or
fabrics, but in our case the extraction of the dye solution was carried out to produce an even dye.
The equal amount (300 mL) of the prepared dye solution from P. coccinea was transferred to
each glass jar. The first jar with only extract solution was marked as control. In the second jar
pH of the dye solution was adjusted to 3.5 with vinegar. The pH value of the solution in the
third jar was arranged to 9.0 by sodium carbonate. To the dye solution in the fourth jar tannic
acid was added as biomordant and pH was measured to be 5.0 (Fig. 2). 1 g of wool yarn was
placed in each dye solution. All the jars were closed with lids and were left in sunny place to
sun dye for one week. Long agitation may cause fibers to felt (Ffrench, 2017). After solar
dyeing process, the wool yarns were squeezed, rinsed with water and dried at room
temperature. Color evaluations were made for the dried samples.

Figure 2. The arranged dye solutions for sun dyeing: a) control with pH 5.5, b) pH 3.5 by
vinegar, c) pH 9.0 by carbonate and d) pH 5.0 by tannic acid

3. Results and Discussion

The most suitable fibers for natural dyeing are cationic fibers such as wool and polyamide
because they absorb the color beautifully. The raw material of woven handicraft products is
wool. Naturally dyed products, unlike synthetic ones, do not adversely affect humans or the
environment, and these materials, which are already a part of nature, are put to human use
by producing healthy products (Çolak et al.,
2020). The obtained colors of wool carpet yarns after sun dyeing with P. coccinea in four
different pHs are shown in Figure 3.


Figure 3. Colors of wool yarns a) undyed, b) control pH 5.5, c) pH 3.5, d) pH 9.0 and
e) with tannic acid pH 5.0

As seen from Fig. 3, different color and tones have emerged as a result of the sun dyeing with
P. coccinea fruits extract. While the color of the raw wool yarn was light beige, the color of
the yarn dyed with the P. coccinea extract in the dyeing trial without using any mordant or
chemical substance was in brown tone (Fig. 3). Whereas the adjustment of pH of the dye
solution to 3.5 by vinegar was resulted in fulvous color of the yarn, the arrangement of pH to
9.0 by sodium bicarbonate caused obtaining of darker tone of yarn (Fig. 3). Even though the
use of biomordant causes to change the dye solution to dark, the obtained color shade of the
yarn was ivory or paler fulvous than the tone provided by the dye solution with pH 3.5. The
color of the yarn, which was mordanted with tannic acid and dyed with the extract, created a
light color (Fig. 3). In our study, this color difference is thought to be caused by both
arrangement of pH and the use of biomordant. Furthermore, the higher pH values of 5.5 and 9.0
resulted in darker tones. Tannic acid is a mordant that makes chemical bonds
between the dye molecules and the functional groups of the fibers, and generally change the
color produced by the dye (El Khatib et al., 2016). It was stated that for best results the solar
dyeing process can be performed longer, but in summer most dyes will give long lasting
beautiful color after two to three weeks if placed in a sunny spot (Aka, 2019). In this study it
was determined that one week of solar dyeing was enough to obtain effective coloration of
wool yarn especially at mild acidic and basic pHs.

The process of dyeing involves adsorption of the dye on the fiber surface and then spreading
it to the inside. The process of adsorption and dispersion depends on the nature of the fibers to
be dyed. Fiber and animal fibers (such as wool and silk) containing amine and carboxylic
groups are aliphatic (Al-Khateeb, 2019). The chemical composition of wool makes it
chemically compatible with most types of pigments. It was reported that P. coccinea fruits are
good source of glutathione, vitamin C, β-carotene and lycopene (Çöteli and Karataş, 2017).
Beta carotene is a natural pigment that gives a yellow-red or orange color and is used in food
coloring. The carotenoid dye structure has long-chain conjugated double bonds, which acts as
chromophore (Gupta, 2019) and gives a yellow to red color (Venil et al., 2020). In addition,
the natural wool has the ability to easily absorb dyes.

4. Conclusions

In this study, the wool carpet yarns were put in the dye solution obtained from the fruits of P.
coccinea and solar dyeing process was carried out after pH adjustment. When the colors
obtained were evaluated, it was seen that different pHs can create different tones in wool
yarns. It is understood from the results obtained that darker colors are obtained at slightly
acidic pH as 5.5 or at basic pH 9.0, while light colors are obtained at more acidic pHs as 3.5
with vinegar and 5.0 by tannic acid. Thus, in the light of the data obtained, it was revealed

that P. coccinea fruits can be used as a natural dye source and solar dyeing method is
appropriate and creative for dyeing of wool fibers.

Solar dyeing is by far the easiest and most straight forward of all the natural dyeing methods.
It is enjoyable to watch the color change every day as the yarns are exposed to the sun. It was
seen that this method works well with natural fiber. It is also a great way to connect with the
surroundings since natural solar dyeing uses natural materials such as leaves, flowers, veggies
or fruits as in our case. It should be taken into consideration that the geographical location and
time of year will influence the temperature that solar dye reaches and this can affect the
resulting colour.

References
Aka, V.M.A. (2019). The Beginners Guide to Solar Dyeing.
https://lacreativemama.com/beginners-guide-solar-dyeing/ Accessed on 20.08.2021.
AL-Khateeb, D. S. M. (2019). Extraction Dyes From Two Natural Plants Olive Leaves and
Beta vulgaris and The Uses in dyeing Textile. J. Phys.: Conf. Ser. 1294.
Çolak, S., Kaygusuz, M., Arğun, F.N. (2020). Dyeing of Wool Yarns with Parthenocissus
quinquefolia L. Leaves Extract. In "Theory and Research in Engineering", Ed. A.
Hayaloğlu, Gece kitaplığı, Ankara.
Çöteli, E.and Karataş, F. (2017). Ateş Dikeninin (Pyracantha coccinea Roemer var. lalandi)
Kırmızı Meyvelerindeki A, E, C Vitamini, β-Karoten, Likopen, Glutatyon ve
Malondialdehit Miktarlarının Araştırılması. Fırat Üniv. Fen Bilimleri Dergisi, 29(1), 41-
46.
El Khatib, E. M., Ali N. F., El-Mohamedy, R. S. R. (2016). Enhancing dyeing of wool fibers
with colorant pigment extracted from green algae. Journal of Chemical and
Pharmaceutical Research, 8(2), 614-619.
Erdem İşmal, Ö. (2019). Doğal boya uygulamalarının değişen yüzü ve yenilikçi yaklaşımlar,
YEDİ: Sanat, Tasarım ve Bilim Dergisi, Yaz 2019 (22), s. 41- 58.
Ffrench, C. (2017). Kissed by the Sun: The Art of Solar Dyeing.
https://spinoffmagazine.com/kissed-sun-art-solar-dyeing/ Accessed on 20.08.2021.
Gupta, V. K. (2019). Fundamentals of Natural Dyes and Its Application on Textile Substrates,
In "Chemistry and Technology of Natural and Synthetic Dyes and Pigments", Eds. A.
K. Samanta, N. S. Awwad and H. M. Algarni, IntechOpen.
Irwin, C. (2020). Beginner’s guide to solar dyeing. https://caitlynirwin.com/blog/beginners-
guide-to-solar-dyeing Accessed on 20.08.2021.
Kambur, S. (2009). Rhus coriaria L., Pyracantha coccinea M. Roemer, Cotoneaster
nummularia Fisch.&Mey. Türlerinin Tohum ve Çimlenme Özelliklerinin Belirlenmesi.
Yüksek Lisans Tezi. Artvin Çoruh Üniversitesi Fen Bilimleri Enstitüsü Orman
Mühendisliği Anabilim Dalı, 34 p.
Sarıkürkçü, C. and Tepe, B. (2015). Biological activity and phytochemistry of firethorn
(Pyracantha coccinea MJ Roemer). Journal of Functional Foods, 19, 669-675.
Taylan M, Atlıhan Ş. (2018). Tekstil Tasarımında Doğal Elyaf ve Doğal Boya Kullanımı. Idil,
7(43), 319-326.
Venil, C. K., Velmurugan, P., Dufossé, L., Devi, P. R., Ravi, A. V. (2020). Fungal Pigments:

Potential Coloring Compounds for Wide Ranging Applications in Textile Dyeing. J.
Fungi 6(68), 1-24.
Yusuf, M., Shabbir, M., Mohammad, F. (2017). Natural Colorants: Historical, Processing and
Sustainable Prospects. Nat. Prod. Bioprospect. 7, 123-145.
Internet sources:
https://en.wikipedia.org/wiki/Pyracantha_coccinea Accessed on 20.08.2021.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Synthesis and Characterization of Cellulose Acetate from Waste
Spartium junceum Flowers

Özlem Karaboyacı1*, Semra Kılıç2

Abstract:

In this study, it was aimed to re-incorporate Spartium junceum flowers, which become waste
after essential oil extraction, into the raw material cycle through cellulose extraction and thus
to ensure recycling. Cellulose extracted from the waste flowers is a commercial product and is
used here in the synthesis of cellulose acetate, which is the raw material of our further studies.
Cellulose and holocellulose were extracted from the waste flowers for the synthesis of
cellulose acetate, and both were then used to synthesize cellulose acetate. At the end of the
study, cellulose acetate was obtained with a yield of 8.37% via cellulose extraction from the
waste flowers, while cellulose acetate was obtained with a yield of 8.14% via the
holocellulose route.

Keywords: Cellulose, Cellulose Acetate, Waste Spartium Junceum Flowers, Extraction

1. Introduction

Cellulose is one of the natural polymers produced by living plant organisms (Şahin, 2019) and
is found in plants, herbs and trees. Cellulose, which forms the cell wall of plants, is
synthesized in nature at an annual rate of about 10^10 to 10^11 tons and is the most abundant
carbohydrate in nature (Arslan et al., 2014).

Figure 1. Progressive structure of cellulose from plants (Rojas et al., 2015)


1 Süleyman Demirel University, Engineering Faculty, Bioengineering Department, Isparta, Turkey
2 Süleyman Demirel University, Science and Literature Faculty, Biology Department, Isparta, Turkey
* Corresponding author: [email protected]
Cellulose Esters

Cellulose esters are formed by the reaction of the hydroxyl groups of cellulose with acids. The
formation of cellulose esters is theoretically possible with all inorganic and organic acids.
Among these esters, the most important ones from a commercial and technological point of
view are cellulose nitrate, cellulose xanthate and cellulose acetate (Kırıcı et al., 2001).

Cellulose Acetate and Its Types

Cellulose acetate is a cellulose-based polymer that is widely used in various industries, for
example in cigarette filters, plastics, films and paints. With its mechanical durability, high
abrasion resistance, transparency, dyeability, wide machinability, mouldability and high
dielectric properties, it is the most important organic-acid-based cellulose derivative (Kırıcı et
al., 2001). Cellulose acetate is produced in two main types, fiber and plastic, and cellulose
acetate fiber holds the largest market share.

Figure 2. Cellulose acetate and its types: pulp, powdered and fiber cellulose acetate

Cellulose acetate is made from cellulose by an acetylation process, and the cellulose can be
isolated from plants or other biomass. Empty palm oil bunches (EPOB) and dried jackfruit
leaves (DJL) are two sources of biomass available in Indonesia.

Indonesia is one of the countries with the largest palm oil plantations in the world. Therefore,
many palm oil factories have been established in Indonesia and produce large amounts of
empty palm oil bunches. Empty palm oil bunches are the biggest waste stream in the
processing of palm into palm oil; their average production is 20-23% of total oil palm
production in Indonesia, and their cellulose content is 38.76%. In addition to palm oil
bunches, there is another potential material, namely jackfruit leaves. Indonesia is a tropical
country, so it is suitable for growing plants, including the jackfruit plant. This plant is found in
almost all parts of Indonesia and is favored by most people.

Therefore, jackfruit leaves can also be used as a raw material for cellulose acetate. The
cellulose content of dried jackfruit leaves is 21.45% (Tristantini et al., 2018). Cellulose acetate
is one of the industrially most important derivatives of cellulose, and it is estimated that about
1.5 billion pounds are manufactured globally each year (Das et al., 2014). The Brazilian paper
and cellulose industry comprises about 220 companies which produce around 10.1 million
metric tons of cellulose and 8.6 million metric tons of paper each year. This production
corresponds to 1.4% of the Brazilian gross domestic product (GDP) and makes Brazil the
seventh largest producer of cellulose, the leader in the production of short-fiber cellulose and
the 11th largest producer of paper. Brazilian paper and cellulose products are manufactured
exclusively from wood of forests planted in degraded areas, avoiding the cutting of native
trees (Rodrigues Filho et al., 2008).

2. Material and Method

2.1. Cellulose Extraction

The cellulose extraction from waste Spartium junceum flowers was carried out by the
Kurschner and Hoffer method (1931). Approximately 2 g of sample was treated with 100 mL
of a 1:4 (V/V) mixture of nitric acid and ethanol and allowed to boil under reflux for 1 hour.
After boiling for 1 hour, the sample was filtered, and this process was repeated 3 times. After
cooling, the remaining cellulose was filtered off and washed with distilled water until the
filtrate was neutral.
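
As an aside (not part of the original procedure), the cellulose content determined by this gravimetric method is simply the mass of the washed, dried residue relative to the dry mass of the starting material. A minimal sketch, with hypothetical masses chosen only for illustration:

```python
def cellulose_content(sample_dry_mass_g: float, residue_dry_mass_g: float) -> float:
    """Cellulose content (%) = dried residue mass remaining after the nitric acid/ethanol
    treatment divided by the oven-dry mass of the starting material, times 100."""
    return 100.0 * residue_dry_mass_g / sample_dry_mass_g

# Hypothetical example: 2.00 g of dried flower pulp leaving 0.62 g of washed residue
print(f"Cellulose content: {cellulose_content(2.00, 0.62):.1f} %")  # -> 31.0 %
```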

2.2. Holocellulose Extraction

Holocellulose is the carbohydrate complex that remains after the lignin substance of the plant
has been removed. In the study, the chlorite method developed by Wise and Karl (1962) was
used to determine the amount of holocellulose.

80 mL of distilled water was added to 2.5 g of waste flower pulp. 0.3 mL of acetic acid and
0.75 g of sodium chlorite were added, and the reaction was carried out at 80 °C for 1 hour
under reflux. Afterwards, 0.3 mL of acetic acid and 0.75 g of sodium chlorite were added
again and the mixture was kept for another 1 hour. This process was repeated 3 times in total.
After 3 hours, the mixture was filtered, washed with distilled water and acetone, and dried.
For the production of cellulosic fiber, it is necessary to convert the cellulose and holocellulose
obtained in this way into cellulose acetate.

The cellulose acetate production reaction was based on the procedures described by Djuned et
al. (2014), Rodrigues Filho et al. (2008) and Cerqueira et al. (2007); these studies were
examined, the procedures were modified in accordance with our study, and the reactions were
carried out as described below.

2.3. Obtaining Cellulose Acetate from Cellulose

1 g of cellulose was added to 25 mL of acetic acid and stirred for 1 hour for activation. 0.1
mL of sulfuric acid (as catalyst) and 30 mL of acetic anhydride were added to the mixture,
and stirring continued for 4 hours at room temperature (22 °C). After 4 hours, the mixture was
filtered to remove unreacted particles. 500 mL of distilled water was added to the obtained
filtrate, and the mixture was left for hydrolysis to begin and cellulose acetate to precipitate.
The precipitate was filtered and washed with distilled water until the pH was 7. It was dried
in an oven at 105 °C.

2.4. Obtaining Cellulose Acetate from Holocellulose

1 g of holocellulose was added to 25 mL of acetic acid and stirred for 1 hour for activation.
0.1 mL of sulfuric acid (as catalyst) and 30 mL of acetic anhydride were added to the mixture,
and stirring continued for 4 hours at room temperature (22 °C). After 4 hours, the mixture was
filtered to remove unreacted impurities. 500 mL of distilled water was added to the obtained
filtrate, and cellulose acetate was allowed to precipitate. The precipitate was filtered and
washed with distilled water until neutral. It was dried in an oven at 105 °C.

Figure 3. Cellulose acetate derived from holocellulose
Figure 4. Cellulose acetate derived from cellulose

3. Results

Holocellulose and cellulose obtained from the wastes of the broom (Spartium junceum) flower
were converted into cellulose acetate. A color difference was observed between the cellulose
acetates: as can be seen in the figures, the cellulose acetate obtained from cellulose (Figure 4)
is more yellowish than the cellulose acetate obtained from holocellulose (Figure 3). As a
result of the experimental studies, cellulose acetate was obtained with 8.37% yield from
cellulose and with 8.14% yield from holocellulose.
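
For context only (this back-of-the-envelope calculation is not part of the original study), the stoichiometry of acetylation sets an upper bound on the mass of product obtainable per gram of cellulose: full conversion to the triacetate raises the repeat-unit molar mass from about 162 g/mol (anhydroglucose) to about 288 g/mol, i.e. roughly 1.78 g of cellulose acetate per gram of cellulose. If the reported yields are expressed on a mass basis relative to the starting material, values around 8% therefore indicate that only a small fraction of the feedstock was recovered as cellulose acetate.

```python
# Molar masses of the repeating units (g/mol); standard textbook values
M_ANHYDROGLUCOSE = 162.14   # cellulose repeat unit, C6H10O5
M_TRIACETATE     = 288.25   # fully acetylated repeat unit, C6H7O2(OCOCH3)3

# Theoretical maximum mass of cellulose triacetate obtainable from 1 g of cellulose
max_gain = M_TRIACETATE / M_ANHYDROGLUCOSE
print(f"Theoretical maximum: {max_gain:.2f} g acetate per g cellulose "
      f"({100 * max_gain:.0f}% on a mass basis)")
```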

FTIR spectra of the cellulose acetates obtained from cellulose and holocellulose are shown in
Figures 5 and 6. The fact that the two spectra are very similar indicates that a product of
similar purity was obtained by both methods. In the FTIR spectra, the bands at 1748, 1384
and 1240 cm-1, corresponding to the characteristic C=O, C-H and -CO- bands of cellulose
acetate, are clearly observed in both spectra.

Figure 5. FTIR spectrum of Cellulose Acetate derived from cellulose

Figure 6. FTIR spectrum of Cellulose Acetate derived from holocellulose
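
The band assignment given in the FTIR discussion above can also be screened programmatically on exported spectra. The sketch below is an illustration only, not the procedure used by the authors; it assumes the spectrum is available as NumPy arrays of wavenumbers and absorbance and looks for local maxima near the characteristic acetyl bands.

```python
import numpy as np
from scipy.signal import find_peaks

def has_acetate_bands(wavenumbers, absorbance, targets=(1748, 1384, 1240), tol=15):
    """Return True if an absorbance maximum lies within `tol` cm-1 of every
    target band (C=O, C-H and C-O stretches characteristic of cellulose acetate)."""
    peak_idx, _ = find_peaks(absorbance, prominence=0.01)
    peak_positions = wavenumbers[peak_idx]
    return all(np.any(np.abs(peak_positions - t) <= tol) for t in targets)

# Hypothetical usage: wn, ab = np.loadtxt("spectrum.csv", delimiter=",", unpack=True)
# print(has_acetate_bands(wn, ab))
```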

4. Discussion and Conclusions

The recycling of the flower waste (pulp) released as a result of the distillation of the broom
(Spartium junceum) plant and the data obtained were evaluated, and a positive effect was
observed. With the implementation of recycling, the amount of waste going to landfill will be
reduced, and less space and energy will be used for the transportation and storage of these
wastes. The FTIR spectra show that cellulose acetate was successfully synthesized from the
waste flowers. The obtained cellulose acetate can be used for fiber and nanofiber spinning in
the next stages.

Acknowledgements

This work was supported by Süleyman Demirel University Scientific Research Projects
Coordination Unit (BAP, Project Number: 8081). We thank the Scientific Research Projects
(BAP) unit for their support.

References

Arslan, S., & Erbaş, M. (2014). Selüloz ve Türevi Diyet Liflerin Özellikleri ve Fırın
Ürünlerinde Kullanım İmkanları. Gıda, 39, 243-250.

Cerqueira, D. A., Rodrigues Filho, G., Silva Meireles, C. (2007). Optimization of sugarcane
bagasse cellulose acetylation. Carbohydrate Polymers, 69(3), 579-58
Das, A. M., Ali, A. A., & Hazarika, M. P. (2014). Synthesis and characterization of cellulose
acetate from rice husk: Eco-friendly condition. Carbohydrate polymers, 112, 342-349.

Djuned, F. M., Asad, M., Ibrahim, M. N. M., Daud, W. R. W. (2014). Synthesis and
characterization of cellulose acetate from TCF oil palm empty fruit bunch pulp.
BioResources, 9(3), 4710-4721.
Kırıcı, H., Ateş, S., Akgül, M. (2001). Selüloz Türevleri ve Kullanım Yerleri. Fen ve
Mühendislik Dergisi, 4(2), s. 119-130.
Pinestrength, (2017). COST Action FP1406: Pine pitch canker strategies for management of
Gibberella circinata in greenhouses and forests (Pinestrength).
http://www.pinestrength.eu
Rodrigues Filho, G., Monteiro, D. S., da Silva Meireles, C., de Assunçao, R. M. N.,
Cerqueira, D. A., Barud, H. S., Messadeq, Y. (2008). Synthesis and characterization of
cellulose acetate produced from recycled newspaper. Carbohydrate Polymers, 73(1), 74-
82.
Rojas, J., Bedoya, M., Ciro, Y. (2015). Cellulose-Fundamental Aspects and Current Trends.
Current Trends in the Production of Cellulose Nanoparticles and Nanocomposites for
Biomedical Applications
Şahin, H. T. (2019, August 7). Selüloz ve Kağıt Üretimi Üzerine Bir Değerlendirme. Accessed
August 12, 2020.
Tristantini, D., Sandra, C. (2018). Synthesis of cellulose acetate from palm oil bunches and
dried jackfruit leaves. In E3S Web of Conferences (Vol. 67, p. 04035). EDP Sciences.
TÜİK, (2015). Turkish Statistical Institute. http://www.tuik.gov.tr/Start.do (Date of Access:
12.04.2017)

472
International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Determination of Volatile Components and Saponin Content of
Jujube Tree Leaves Pre-Fruit and Post-Harvest

Musa Denizhan Ulusan*1, Mustafa Karaboyacı2

Abstract: In this study, the leaf volatile components and saponin contents of cultured jujube
trees grown in the Isparta region in spring and autumn were analyzed. There is a well-developed
sector in the Isparta region where essential oils of plants such as lavender and rose are
obtained. For this reason, the volatile components of the leaves of the jujube tree, which has
been used for centuries for medicinal and various other purposes, were studied with the SPME
method; 18 volatile components were determined before the fruiting period and 14
components after the harvest season. The leaves contain 2.49 mg/mL saponin before fruiting
and 4.29 mg/mL after the harvest season.

Keywords: Ziziphus zizyphus, jujube, leaf, volatile, saponin

1. Introduction

The jujube tree is a thorny tree 4-5 meters in height that blooms with fragrant yellow flowers
between April and May. The fruits have a red-brown skin and hard seeds; the wild ones are
the size of an oleaster fruit, and those of grafted cultivated plants are the size of an average
walnut. Although the original homeland of the jujube (Ziziphus zizyphus) is Syria, it is a plant
that has been grown in China for 4000 years and is known to have 400 varieties. India,
Russia, Southern Europe, North Africa, the Middle East and Anatolia are its natural
distribution areas. The jujube plant, which was taken to the United States in 1837, found a
place to grow in the southwestern region of the country (Yılmaz, 2019).

Jujube is a temperate climate plant that grows at elevations of up to 1700 meters above sea
level and withstands temperatures down to -20 °C. It tolerates heavy precipitation on
well-drained, fertile soils and is not affected by drought. It can grow well in sandy-loam,
neutral or slightly alkaline soils in regions with an altitude of 0-1500 m, average temperatures
of 7-13 °C in winter and 37-48 °C in summer, and an annual average rainfall of 120-2200 mm
(Kavas and Dalkılıç, 2015).

When the literature about the plant is examined, it is seen that there are many studies on the
usage areas of its fruits, wood and leaves. Outlaw et al. (2002) explained the versatile uses of
the plant as follows: Few plants are as versatile as the jujube. First, the wood itself is valuable.
Strong, durable, and smooth, it is used for the manufacture of musical instruments, artwork,
carts and miscellaneous items. It has also been crafted into gears and caskets, which bring
honor to the deceased. Second, it is a source of fodder for cattle, camels, and goats. Third, it
has been used medicinally for 3000 years.
1 Süleyman Demirel University, Isparta, Turkey
2 Süleyman Demirel University, Engineering Faculty, Chemical Engineering Department, Isparta, Turkey
* Corresponding author: [email protected]
All parts of the plant (kernel, flower, fruit, leaves, bark, wood, and root) have been used
medicinally. Fourth, the fruit has been used for every imaginable edible purpose: fresh, dried,
candied, in teas, in myriad recipes, and for the production of wine and vinegar. Last, the
jujube is a major honey source in China.

As can be understood from this concise summary, every part of the plant is used, from the
fruit to the stem, bark and leaves. Since our article is about the leaves of the plant, we focus
here on the studies on the leaves.

Elaloui et al. (2016) conducted a study on the phytocomponents of jujube plants harvested in
Tunisia. At the end of the study, they found that Z. jujuba leaf extracts were distinct due to
their richness in linolenic, palmitic, oleic and linoleic acids, and in β-sitosterol, stigmasterol
and flavonoid compounds, especially rutin and apigenin, which justifies their use in cosmetics
and pharmacology.

Guo et al. (2011) carried out a study determining triterpenic acids, saponins and flavonoids in
the leaves of two Ziziphus species. They found fourteen constituents, including three
flavonoids, two saponins and nine triterpenic acids, in Z. jujuba and Z. jujuba var. leaves.
Since jujube leaves are rich in quercetin-3-O-rutinoside and triterpenic acids, they could be
promising natural sources for future industrial research on quercetin-3-O-rutinoside and
triterpenic acids with potential benefits for human health.

Damiano et al. (2017) studied the antioxidant and antibiofilm activities of secondary
metabolites from Ziziphus jujuba leaves used for infusion preparation. They reported that
jujube leaf infusion is a healthy antioxidant bedtime beverage and associated it with a
previously unreported anti-caries activity. Z. jujuba Mill. leaf extracts could also be employed
for the development of alternative or adjunctive natural anti-caries prevention remedies, as
well as included in products for oral hygiene such as toothpastes and mouthwashes.

In this study, we tried to elucidate the changes in the volatile components and saponin content
of the leaves of the jujube plant grown in the Isparta region before fruiting in the spring and
after the fruits are harvested in the autumn.

2. Material and Methods

Fresh Ziziphus zizyphus leaves were collected from cultivated fields in the Gönen district of
Isparta on 10 June and 10 November. The leaves were kept in the shade at room temperature
and ground for analysis with the aid of a blender. Powdered leaves (10.0 g) were extracted at
room temperature with 100 mL of distilled water for 12 hours and filtered.

2.1. Determination of total saponins

The total saponin contents of the Ziziphus zizyphus leaf extracts were determined by the
vanillin-sulfuric acid method (Hiai et al., 1976). 0.25 mL of extract was reacted with 0.25 mL
of vanillin/ethanol (8%) and 2 mL of sulfuric acid (72%). Then the mixture was incubated at
60 °C for 10 min. After incubation, the mixture was cooled for another 15 minutes at room
temperature, followed by absorbance measurement at 538 nm. Quillaja saponin was used as a
standard and the content of total saponins was expressed as Quillaja equivalents.
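
A minimal sketch of the calibration arithmetic implied by this method is given below; it assumes a linear Quillaja standard curve at 538 nm, and all concentrations and absorbance readings are hypothetical placeholders, not data from this study.

```python
import numpy as np

# Hypothetical Quillaja saponin standards (mg/mL) and their absorbance at 538 nm
std_conc = np.array([0.5, 1.0, 2.0, 4.0])
std_abs  = np.array([0.12, 0.23, 0.47, 0.93])

# Least-squares linear calibration: A = slope * c + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def total_saponins_mg_per_ml(sample_abs: float) -> float:
    """Total saponins expressed as Quillaja saponin equivalents (mg/mL)."""
    return (sample_abs - intercept) / slope

# Hypothetical sample reading
print(f"A538 = 0.58 -> {total_saponins_mg_per_ml(0.58):.2f} mg/mL Quillaja equivalents")
```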

2.2. Determination of volatile components

The GC-MS SPME technique was used in the analysis of the volatile components of the leaf
extracts. SPME is a simple and sensitive sample preparation method that extracts the
components of the sample to be analyzed onto a silica fiber coated with a thin polymeric
stationary phase.

3. Results

Figure 1 shows the extracts of the jujube leaves. It can be seen from the picture that the leaves
collected before the fruit season give clearer extracts. After the harvest season, the pigments
contained in the leaves increase and a darker solution is obtained. In addition, the stable foam
layer clearly observed on the solutions is an indication that both contain saponins.

Figure 1. Extracts of jujube leaves; the June extract on the left and the November extract on the right

Peak  Name                                                              Area (%)
1     Ethyl alcohol                                                     4.76
2     Propanenitrile, 2-hydroxy-2-methyl- (CAS) Acetone cyanohydrin     3.16
3     2,3-Butanedione (CAS) Diacetyl                                    2.68
4     Hexane (CAS) n-Hexane                                             21.38
5     1-Propene, 3-bromo- (CAS) Allyl bromide                           2.33
6     2-Hexenal, (E)- (CAS) (E)-2-Hexenal                               2.06
7     6-Methyl-5-hepten-2-one                                           3.33
8     dl-Limonene                                                       2.05
9     6-Octen-1-ol, 3,7-dimethyl-, formate (CAS) Citronellyl formate    4.89
10    Benzofuran, 2,3-dihydro-                                          18.76
11    Guaiacol 4-vinyl-                                                 5.04
12    Eugenol                                                           19.68
13    5,9-Undecadien-2-one, 6,10-dimethyl- (CAS) Dihydropseudoionone    9.88
Table 1. Volatile components of jujube leaves after the harvest season in November

Table 1 shows the volatile components of jujube leaves after harvest. As seen from the table,
hexane, 2,3-dihydrobenzofuran, eugenol and dihydropseudoionone are the major components
of the leaves. In total, 14 volatile components were detected in the leaves by the SPME
method, but these four components make up 69.7 percent of the total peak area.
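
The percentages quoted here are relative peak areas. As a small check, using only the values already listed in Table 1, the share of the four major components can be totalled as follows:

```python
# Relative peak areas (%) of the four major components from Table 1
major_components = {
    "n-hexane": 21.38,
    "2,3-dihydrobenzofuran": 18.76,
    "eugenol": 19.68,
    "dihydropseudoionone": 9.88,
}

share = sum(major_components.values())
print(f"Major components account for {share:.1f}% of the total peak area")  # -> 69.7%
```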

Table 2 shows the result of the SPME analysis of the water extract of leaves collected from
the jujube tree in June before fruiting. As a result of the analysis, 18 different components
were identified. The main compounds are ethyl alcohol, 2-pentanone, 2-hexenal, 3-octanone,
2-octanone, eugenol and 3-methylpentane. These 7 compounds constitute 75.51 percent of the
total components, and the share of each of them is over five percent. The presence of 23.63%
ethyl alcohol alone is too high to ignore, and the leaves also contain a high proportion
(14.35%) of eugenol.

The total amount of saponin determined according to the method of Hiai et al. (1976) was on
average 2.49 mg/mL in the leaves before the fruiting period. In the leaves collected in
November after the fruit harvest, this value was 4.29 mg/mL on average. As can be seen, the
plant continues to produce saponin as long as the leaves remain on the tree, so this value is
higher in autumn leaves.

Peak  Name                                                              Area (%)
1     Ethanol (CAS) Ethyl alcohol                                       23.63
2     Propanenitrile, 2-hydroxy-2-methyl- (CAS) Acetone cyanohydrin     2.26
3     2,3-Butanedione (CAS) Diacetyl                                    2.90
4     Pentane, 3-methyl- (CAS) 3-Methylpentane                          5.12
5     2-Pentanone (CAS) Methyl propyl ketone                            9.30
6     2-Hexenal, (E)- (CAS) (E)-2-Hexenal                               9.40
7     2-Hexen-1-ol, (Z)- (CAS) cis-Hex-2-en-1-ol                        3.47
8     2-Heptanone (CAS) Heptan-2-one                                    1.85
9     7-Octen-2-one                                                     2.29
10    3-Octanone (CAS) Eak                                              7.65
11    2-Octanone (CAS) Octan-2-one                                      6.06
12    Octanal (CAS) n-Octanal                                           2.73
13    l-Limonene                                                        1.85
14    Linalool                                                          2.63
15    Phenol, 4-ethenyl-2-methoxy-                                      1.33
16    Eugenol                                                           14.35
17    5,9-Undecadien-2-one, 6,10-dimethyl- (CAS) Dihydropseudoionone    1.10
18    1,2-Benzenedicarboxylic acid, bis(2-methylpropyl) ester           2.08
Table 2. Volatile components of leaves extracted before the fruit season in June

4. Discussion and Conclusions

In this study, the volatile components and saponin contents of the leaves of jujube trees grown
in the Isparta region were analyzed. As it is known, secondary metabolites produced by plants
vary according to the climate and soil structure of the region where they are located. For this
reason, this preliminary study on the analysis of the injured components contained in this
plant, which has been used for medicinal purposes for centuries and which has been
extensively cultivated in this region in recent years, and in which sectors it can be used, offers
new ideas to researchers and us.

References

Guo, S., Duan, J. A., Tang, Y., Qian, Y., Zhao, J., Qian, D., ... & Shang, E. (2011).
Simultaneous qualitative and quantitative analysis of triterpenic acids, saponins and
flavonoids in the leaves of two Ziziphus species by HPLC–PDA–MS/ELSD. Journal of
Pharmaceutical and Biomedical Analysis, 56(2), 264-270.

Hiai, S., Oura, H., Nakajima, T. (1976). Color reaction of some sapogenins and saponins with
vanillin and sulfuric acid. Planta Medica, 29, 116-122.

Yılmaz, G. (2019). Hünnap (Zizyphus zizyphus) ağacı yaprak ve meyve ekstratlarının
antioksidan ve antimikrobiyal özelliklerinin araştırılması (Master's thesis, Namık Kemal
Üniversitesi).

Kavas, İ., Dalkılıç, Z. (2015). Bazı Hünnap Genotiplerinin Morfolojik, Fenolojik ve
Pomolojik Özelliklerinin Belirlenmesi ve Melezleme Olanaklarının Araştırılması.
Adnan Menderes Üniversitesi Ziraat Fakültesi Dergisi, 12(1), 57-72.

Outlaw Jr, W. H., Zhang, S., Riddle, K. A., Womble, A. K., Anderson, L. C., Outlaw, W. M.,
Thistle, A. B. (2002). The jujube (Ziziphus jujuba Mill.), a multipurpose plant.
Economic Botany, 56(2), 198-200.

Elaloui, M., Laamouri, A., Ennajah, A., Cerny, M., Mathieu, C., Vilarem, G., Hasnaoui, B.
(2016). Phytoconstituents of leaf extracts of Ziziphus jujuba Mill. plants harvested in
Tunisia. Industrial crops and products, 83, 133-139.

International Conferences on Science and Technology
Engineering Sciences and Technology
ICONST EST 2021

Dye Sensitized Solar Cell Production by Doctor Blade Method
Using Bezathren Yellow 5GF Vat Dye

Kamila Sobkowiak1* , Mustafa Karaboyacı2

Abstract: In this study, the structure and working principle of dye-sensitized solar cells were
investigated, as well as their production from cheap organic vat dyes. The conductive glass
part of the cell was produced with a SnO2 nano coating by the spray pyrolysis method. In
order to increase the efficiency of the conductive glasses, doping with fluorine was applied.
The FTO glass surfaces were coated with TiO2 by the well-known and easy-to-operate doctor
blade method.

The current-voltage (J-V) values of the prepared solar cells under light and dark conditions
were measured with an AM 1.0 solar simulator, and the efficiency was calculated for each
cell from the obtained values. The highest efficiency, η = 0.00429%, was obtained for the
solar cell sensitized with Bezathren Yellow 5GF dye prepared by the paste method.

Keywords: DSSC, vat dyes, solar cell, FTO

1. Introduction

The development of renewable energy sources has become an alternative focus point for
reducing the use of fossil resources. The most abundant and remarkable energy source is
photovoltaic or solar energy, which converts solar energy directly into electrical energy
(Feldt, 2013; Fukurozaki et al., 2013). Solar energy is the main renewable energy source
available today because it provides energy for growth and development to all living beings in
the world through the process of photosynthesis. An important advantage of solar energy is
that it can be used locally and commercially, and it benefits not only the individual owners
but also the environment. Solar energy can be turned into useful heat or electricity. Electricity
is a form of energy that can be made accessible easily. For this reason, scientists and
engineers are currently trying to use solar radiation to generate electricity directly with
economical devices (Kumar et al., 2015).

Vat dyes are a type of dyestuff that maintains its importance in cotton dyeing due to its high
fastness in use. They show high fastness to wet processes; the good wet fastness is due to the
formation of water-insoluble compounds. Light fastness is also generally very good.
1Lodz University of Technology, Chemistry Faculty, Polymer and Dye Technology Department, Lodz, Poland
2Süleyman Demirel University, Engineering Faculty, Chemical Engineering Department, Isparta, Turkey
* Corresponding author: [email protected]
Vat dyes are not only resistant to light, acids and alkalis, but also to strong oxidizing bleaches.
Vat dyes are water-insoluble, so before application to the dyebath they need to be reduced to
the water-soluble leuco form, and after the dye has been absorbed by the fiber it needs to be
re-oxidised to its original form.

These reduction-oxidation properties of vat dyes gave us the idea that they can also be used in
solar cells. Bezathren Yellow 5GF vat dye is produced by CHT Bezema, and its redox
potential is -840 mV according to its MSDS. Egerton and Assaad (1970) also studied the
photochemical behaviour of vat dyes and reported that photoreduction of some vat dyes is
possible.

For preparing the solar cell, the simplest method for depositing titania paste on the FTO glass
substrate was used. The technique is known as the doctor blade method, and the thickness of
the titania layer is determined by the thickness of an adhesive tape placed on both sides of the
FTO glass.

2. Material and Methods

Some combinations were applied, and various results were obtained as described below.

2.1. HNO3 method

3 g of TiO2 was mixed with HNO3 and 2 drops of wetting agent, giving a medium-density
paste. Two sides of the surface were covered with tape. A thin, smooth layer of the paste was
distributed on the conductive glass surface and the tape was removed. Afterwards, the glass
was heated in the oven for 30 min at 500 °C.

2.2. H2O method

3 g of TiO2 was mixed with distilled water and 2 drops of wetting agent, giving a
medium-density paste. A thin, smooth layer of the paste was distributed on the conductive
glass surface. Afterwards, the glass was heated in the oven for 30 min at 500 °C.

2.3. H2O + Ethanol method

3 g of TiO2 was mixed with a 1:1 mixture of distilled water and ethanol and 2 drops of
wetting agent, giving a medium-density paste. A thin, smooth layer of the paste was
distributed on the conductive glass surface. Afterwards, the glass was heated in the oven for
30 min at 600 °C.

2.4. TiO2 + Dye premixed method

TiO2 was mixed with the dye, distilled water and 2 drops of wetting agent, giving a
medium-density paste. A thin, smooth layer of the paste was distributed on the conductive
glass surface. Afterwards, the glass was heated in the oven for 30 min at 400 °C.

3. Results

In the Material and Method section, the preparation of 4 different pastes was explained. With
method 2.4 (TiO2 + dye premixed), a completely homogeneous and surface-covering titania
coating was obtained, and the experiments were continued with it. In the other trials, the TiO2
coating partially or completely detached from the surface after impregnation with the dye
solution. For this reason, those trials were not studied further.

Figure 1 shows the J-V curves of the DSSC prepared with Bezathren Yellow 5GF, and Table
1 shows the JSC, VOC, Pin, FF and efficiency values of the solar cell. The solar cell was
prepared by the paste method using only the TiO2 paste and was sintered at 400 °C. The dye,
in a mixture of H2O + ethanol as solvent, was applied to the coated surface afterwards.

Figure 1. J-V graph of the solar cell measured with an AM 1.0 solar simulator (dark and light curves)

Table 1. Results for Bezathren Yellow 5GF

Sample         JSC (mA)   VOC (V)   Pin (Watt)   Fill factor (%)   Efficiency (%)
Paste method   0.01       0.213     1.5          30.21             0.00429
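
The quantities in Table 1 are related by the standard photovoltaic relations FF = Pmax / (Jsc × Voc) and η = (Jsc × Voc × FF) / Pin. A minimal sketch of this calculation is given below; the numerical inputs are one unit-consistent combination chosen purely for illustration (Jsc converted to amperes and an assumed incident power of 15 mW), and they may not correspond exactly to the units as printed in the table.

```python
def fill_factor(p_max_w: float, j_sc_a: float, v_oc_v: float) -> float:
    """Fill factor FF = Pmax / (Jsc * Voc), dimensionless."""
    return p_max_w / (j_sc_a * v_oc_v)

def efficiency(j_sc_a: float, v_oc_v: float, ff: float, p_in_w: float) -> float:
    """Power conversion efficiency eta = (Jsc * Voc * FF) / Pin."""
    return j_sc_a * v_oc_v * ff / p_in_w

# Illustrative, unit-consistent inputs (assumptions, not the measured raw data):
# Jsc = 0.01 mA = 1.0e-5 A, Voc = 0.213 V, FF = 30.21 %, assumed Pin = 15 mW
eta = efficiency(j_sc_a=1.0e-5, v_oc_v=0.213, ff=0.3021, p_in_w=0.015)
print(f"Efficiency: {100 * eta:.5f} %")  # ~0.00429 % with these assumptions
```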

4. Discussion and Conclusions

The results show that vat dyes are photosensitive and can be used for solar cells. However,
the efficiency obtained from this experiment is as low as 0.00429%. We intend the study to be
useful as a preliminary study on the preparation of dye-sensitized solar cells with vat dyes and
as a guide for us and for those interested in the subject in the future. The other attractive point
of the subject is the cheapness and abundant availability of vat dyes.

References

Feldt, S. (2013). Alternative redox couples for dye-sensitized solar cells (Doctoral
dissertation, Acta Universitatis Upsaliensis).

Fukurozaki, S. H., Zilles, R., Sauer, I. L. (2013). Energy payback time and CO2 emissions of
1.2 kWp photovoltaic roof-top system in Brazil. Int J Smart Grid Clean Energy, 2, 164-9.

Egerton, G. S., & Assaad, N. E. N. (1970). Photochemical Behaviour of Vat Dyes I —
Reaction with the Polymer Substrate. Journal of the Society of Dyers and Colourists, 86(5),
203-208.

Kumar, A., Richhariya, G., & Sharma, A. (2015). Solar photovoltaic technology and its
sustainability. In Energy sustainability through green energy (pp. 3-25). Springer, New Delhi.

