Volume 18
President:
Cornel Panait, Vice-rector, Constanta Maritime University, Romania
Members:
Violeta-Vali Ciucur, Rector, Constanta Maritime University, Romania
Eliodor Constantinescu, Vice-rector, Constanta Maritime University, Romania
Mihail Alexandrescu, Dean, Transport Faculty, Politehnica University of Bucharest, Romania
Toader Munteanu, Professor, Dunarea de Jos University, Galati, Romania
Mariana Jurian, Professor, Faculty of Electronics, Communications and Computers, University of Pitesti, Romania
Alexandru Jipa, Dean, Physics Faculty, University of Bucharest, Romania
Corneliu Burileanu, Vice-rector, Politehnica University of Bucharest, Romania
Silviu Ciochina, Chair of Telecommunications, Faculty of Electronics, Telecommunications and Information Technology, Politehnica University of Bucharest, Romania
Teodor Petrescu, Professor, Politehnica University of Bucharest, Romania
Marin Dragulinescu, Professor, Chair of Applied Electronics and Informatics Engineering, Faculty of Electronics, Telecommunications and Information Technology, Politehnica University of Bucharest, Romania
Dean, Faculty of Electronics, Telecommunications and Information Technology, Politehnica University of Bucharest, Romania
Professor, Faculty of Electronics, Telecommunications and Information Technology, Politehnica University of Bucharest, Romania
Cornel Ioana, Associate Professor - Researcher, Grenoble INP/ENSE3, GIPSA-lab, Department Images-Signal, France
Ovidiu Dragomirescu, Professor, Faculty of Electronics, Telecommunications and Information Technology, Politehnica University of Bucharest, Romania
Boyan Mednikarov, Rector, N.Y. Vaptsarov Naval Academy, Bulgaria
Ricardo Rodriguez-Martos, Director, Department of Nautical Sciences and Engineering, Polytechnical University of Catalonia, Spain
Donna J. Nincic, Professor and Chair, Department of Maritime Policy and Management, California Maritime Academy, California State University
Anto Raukas, Professor, Estonia Maritime Academy
Wang Zuwen, President of Dalian Maritime University
De Melo German, Professor, Faculty of Nautical Studies - Polytechnical University of Catalonia, Spain
Mykhaylo V. Miyusov, President, Odessa National Maritime Academy, Ukraine
Guler Nil, Dean, Maritime Faculty, Istanbul Technical University, Turkey
Cwilewicz Romuald, President, Gdynia Maritime University, Poland
Sag Osman Kamil, Rector, Piri Reis Maritime University, Turkey
Gruenwald Norbert, Rector, Hochschule Wismar, University of Technology, Business and Design, Germany
Dimitar Angelov, Professor, N.Y. Vaptsarov Naval Academy, Bulgaria
Oh Keo-Don, President of Korea Maritime University, Korea
Eisenhardt William, President, California Maritime Academy, USA
Laczynski Bogumil, Professor, Faculty of Navigation, Gdynia Maritime University, Poland
Malek Pourzanjani, President, Australian Maritime College
Yu Schicheng, President of Shanghai Maritime University, China
Boris Pritchard, Professor, Faculty of Maritime Studies, University of Rijeka, Croatia
Elena Croitoru, Professor, Faculty of Letters, Dunarea de Jos University of Galati, Romania
Lavinia Nadrag, Professor, Faculty of Letters, Ovidius University of Constanta, Romania
Gabriela Dima, Professor, Faculty of Letters, Dunarea de Jos University of Galati, Romania
Clive Cole, Professor, World Maritime University, Malmo, Sweden
Roxana Hapan, Professor, Bucharest University of Economic Studies, Romania
Elena Condrea, Professor, Ovidius University of Constanta, Romania
Costel Stanca, Dean, Faculty of Navigation, Constanta Maritime University, Romania
Nicolae Buzbuchi, Professor, Constanta Maritime University, Romania
Dumitru Dinu, Professor, Constanta Maritime University, Romania
Razvan Tamas, Professor and Director, Department of European Research Programmes, Constanta Maritime University, Romania
Paulica Arsenie, Associate Professor and Director, Department of Navigation, Constanta Maritime University, Romania
Mircea Georgescu, Associate Professor, Faculty of Navigation, Constanta Maritime University, Romania
Dan Popa, Professor and Director, Department of Electronics and Telecommunications, Constanta Maritime University, Romania
Danut Argintaru, Director, Department of Fundamental Sciences and Humanities, Constanta Maritime University, Romania
Ion Omocea, Director, Department of Engineering Sciences in Electrical, Constanta Maritime University, Romania
Liviu Stan, Director, Department of Engineering Sciences in Mechanics and Environmental, Constanta Maritime University, Romania
Felicia Surugiu, Director, Department of Transport Management, Constanta Maritime University, Romania
Alexandra Raicu, Director, Department of Engineering Sciences General, Constanta Maritime University, Romania
CONTENTS
SECTION I NAVIGATION AND MARITIME TRANSPORT
1. LEGAL PROVISIONS ON LAYTIME AND DEMURRAGE IN CHARTERPARTIES
ADASCALITEI OANA, Constanta Maritime University, Romania ..... 13
2. STANDARD CLAUSES OF VOYAGE CHARTER SHIFTING RISK OF DELAY AND READINESS
ADASCALITEI OANA, Constanta Maritime University, Romania ..... 17
3. LEGAL IMPLICATIONS OF THE VOYAGE CHARTERPARTY PERFORMANCE
ADASCALITEI OANA, Constanta Maritime University, Romania ..... 21
4. BEHIND THE THEORY OF SAFETY AGAINST CAPSIZING AND ASSESSING SHIP STABILITY
ANDREI CRISTIAN, LAMBA MARINEL-DANUT, HANZU-PAZARA RADU, Constanta Maritime University, Romania ..... 25
5. ROMANIAN NAVAL AUTHORITY AND THE MARINE ENVIRONMENT PROTECTION
BERESCU SERBAN, Romanian Naval Authority, Constanta, Romania ..... 31
6. DEVELOPMENT OF THE COMPUTER-BASED QUALITY CONTROL SYSTEM USED FOR TRAINING SPECIALISTS IN NAVIGATION
DAVYDOV VOLODYMYR, MAIBORODA OLEXANDR, DEMYDENKO NADIYA, Kyiv State Maritime Academy, Ukraine ..... 37
7. CONTRIBUTIONS AT QUAY CRANES EXPLOITATION OPTIMIZATION
DRAGOMIR CRISTINA, Constanta Maritime University, Romania; PINTILIE ALEXANDRU, Ovidius University of Constanta, Romania ..... 41
8. THE ANALYSIS OF INTACT SHIP STABILITY REGULATIONS
LAMBA MARINEL-DANUT, ANDREI CRISTIAN, HANZU-PAZARA RADU, Constanta Maritime University, Romania ..... 45
9. RISK MANAGEMENT IN HIGHER EDUCATION
POPA LILIANA-VIORICA, Constanta Maritime University, Romania ..... 49
10. THE BENEFITS OF THE IMPLEMENTATION MECHANISMS FOR THE INTEGRATED SYSTEM IN SMES
POPA LILIANA-VIORICA, Constanta Maritime University, Romania ..... 53
11. COCONET PUTTING TOGETHER SEAS WITH ROMANIA AS WORK PACKAGE LEADER FOR BLACK SEA PILOT PROJECT
SURUGIU GHEORGHE, SURUGIU IOANA, SURUGIU FELICIA, Constanta Maritime University, Romania ..... 57
12. COLLISIONS RISK ANALYSIS
TROMIADIS (BEJAN) RAMONA, Constanta Maritime University, Romania ..... 63
13. A CONSEQUENCE OF THE SECOND WORLD WAR: THE BELGRADE AGREEMENT (AUGUST 18, 1948) AND ITS CONSEQUENCES UPON THE NAVIGATION ON THE DANUBE
TULUS ARTHUR-VIOREL, University "Dunarea de Jos" Galati, Romania ..... 67
14. MAIN GOVERNING EQUATIONS FOR A SHIP INVOLVED IN A SOFT GROUNDING EVENT
VARSAMI ANASTASIA, Constanta Maritime University, Romania ..... 73
15. THE DEVELOPMENT OF FORUM NON CONVENIENS AND LIS ALIBI PENDENS DOCTRINES IN THE INTERNATIONAL MARITIME LAW
XHELILAJ ERMAL, LAPA KRISTOFOR, University Ismail Qemali, Vlora, Albania ..... 77
27. DOMESTIC SOLAR WATER HEATING POTENTIAL IN THE SOUTH-EASTERN REGION OF ROMANIA
PARASCHIV SPIRU, MOCANU CATALIN-BOGDAN, PARASCHIV SIMONA, Dunarea de Jos University of Galati, Romania ..... 139
28. ANALYSIS OF RESIDENTIAL PHOTOVOLTAIC ENERGY SYSTEMS
PARASCHIV SIMONA, MOCANU CATALIN-BOGDAN, PARASCHIV SPIRU, Dunarea de Jos University of Galati, Romania ..... 143
29. CONTRIBUTIONS TO KNOWING THE ZOOPLANKTON ON SEVERAL LAKES OF SOUTH-WEST DOBROGEA
RADU ADINA, Eco-Museal Research Institute, Tulcea, Romania
30. NEW APPROACHES FOR THE MATHEMATICAL MODEL OF INJECTION TECHNOLOGY PROCESSES
RAICU ALEXANDRA, Constanta Maritime University, Romania
31. STATIC ANALYSIS OF CYLINDER LINERS FROM DIESEL ENGINES USING FEM
SIMIONOV MIHAI, Dunarea de Jos University of Galati, Romania
36. QUANTIFYING HARMONIC DISTORTION
DORDEA STEFAN, NEDELCU ELENA, Constanta Maritime University, Romania
37. GRIGORESCU LUIZA, DIACONESCU IOANA, Dunarea de Jos University of Galati, Engineering Faculty of Braila, Romania ..... 191
38. TELEMEDICINE AND ETHICS
HNATIUC MIHAELA, Constanta Maritime University, Romania; IOVS CATALIN JAN, Gr. T. Popa University of Medicine and Pharmacy, Iasi, Romania ..... 195
39. IMPROVEMENTS OF THE DIRECT TORQUE CONTROLLED INDUCTION MOTOR DRIVES
PATURCA SANDA-VICTORINNE, BOSTAN VALERIU, MELCESCU LEONARD, University Politehnica of Bucharest, Romania ..... 199
40. CLOUD CONTENT DISTRIBUTION NETWORKS FOR DVB APPLICATIONS
SUCIU GEORGE, HALUNGA SIMONA, University POLITEHNICA of Bucharest, Romania ..... 205
41. FUZZY CONTROL OF A NONLINEAR PROCESS BELONGING TO THE NUCLEAR POWER PLANT WITH A CANDU 600 REACTOR
VENESCU BOGDAN, JURIAN MARIANA, Institute of Nuclear Research, Pitesti, Romania ..... 209
42. PARAMETERS THAT INFLUENCE THE TRANSMISSION IN DVB-T2
VULPE ALEXANDRU, FRATU OCTAVIAN, CRACIUNESCU RAZVAN, MUNTEANU ALEXANDRA, Politehnica University of Bucharest, Telecommunication Department, Romania ..... 215
43. SEASONAL VARIATIONS OF THE TRANSMISSION LOSS AT THE MOUTH OF THE DANUBE DELTA
ZARNESCU GEORGE, Constanta Maritime University, Romania ..... 223
44. ENERGY-EFFICIENT TRANSMISSION METHOD FOR UNDERWATER ACOUSTIC MODEMS
ZARNESCU GEORGE, Constanta Maritime University, Romania ..... 227
46. THE EVALUATION OF GRAVITATIONAL PERTURBATION ACCELERATION ACTIONS ON GPS SATELLITES
LUPU SERGIU, Mircea cel Batran Naval Academy, Constanta, Romania ..... 235
47. THE EFFECTS CAUSED BY NON-GRAVITATIONAL PERTURBATIONS: THE ANISOTROPIC THERMAL EMISSION AND ANTENNAS EMISSION ON GPS SATELLITES
LUPU SERGIU, Mircea cel Batran Naval Academy, Constanta, Romania ..... 239
52. CREATIVE THINKING ACTIVITIES IN FOREIGN LANGUAGE TEACHING
SIRBU ANCA, Constanta Maritime University, Romania
53. TRANSLATING MARITIME IDIOMS
VISAN IOANA-RALUCA, Constanta Maritime University, Romania
SECTION VI - TRANSPORT ECONOMICS
54. INDICATORS FOR THE PERFORMANCE AND FOR THE EFFORT IN TRANSPORT
CARP DOINA, Constanta Maritime University, Romania
55. MEASURING MARKET CONCENTRATION ACCORDING TO EUROPEAN COMPETITION POLICY
DOBRE CLAUDIA, Ovidius University of Constanta, Romania
56. QUALITY STRATEGIES IN THE MARKET PROCESS
DRAGAN CRISTIAN, Constanta Maritime University, Romania
59. THE IMPORTANCE OF RELATIONS BETWEEN GEORGIA AND ROMANIA FOR THE PROGRESS OF ENERGY PROJECTS
GEORGESCU STEFAN, MUNTEANU MARLENA (Andrei Saguna University, Constanta), GARAYEV TABRZ (Bucuresti University), STANCA COSTEL (Constanta Maritime University), Romania
63. REPUTATION BUILD ON THE COMPANIES VALUES
GRIGORUT CORNEL, Ovidius University of Constanta, Romania
64. THE ROMANIAN CENTRALIZED ORGANIZATIONS RESISTANCE TO CHANGE
MINA SIMONA, Ovidius University of Constanta, Romania
65. ECONOMICAL AND ENVIRONMENTAL COORDINATES OF BLACK SEA REGION
NEDEA PETRONELA-SONIA, Commercial and Touristic Faculty, Christian University "Dimitrie Cantemir", Bucharest, Romania
66. PRICE STABILITY
OLTEANU ANA-CORNELIA, CRISTEA VIORELA-GEORGIANA, Constanta Maritime University, Romania ..... 327
67. CAPITAL REQUIREMENT FOR OPERATIONAL RISK
OLTEANU ANA-CORNELIA, CRISTEA VIORELA-GEORGIANA, Constanta Maritime University, Romania ..... 331
68. OPERATIONAL RISK MANAGEMENT
OLTEANU ANA-CORNELIA, Constanta Maritime University, Romania
69. INFLUENCE OF TRANSPORTS ON ENVIRONMENT QUALITY
PASCU EMILIA, Commercial and Touristic Faculty, Christian University "Dimitrie Cantemir", Bucharest, Romania
70. TRENDS ANALYSIS IN MANAGING MARITIME E-LEARNING TECHNOLOGIES
RAICU GABRIEL, Constanta Maritime University, Romania
71. DAMAGES TO CARGO AND SHIPS - GENERAL AND PARTICULAR AVERAGES
SURUGIU FELICIA, Constanta Maritime University, Romania
72. TEMPERATURE AND HUMIDITY - TWO MAJOR CLIMATIC RISK FACTORS AFFECTING THE QUALITY OF CARGOES CARRIED BY SEA
SURUGIU FELICIA, Constanta Maritime University, Romania
73. GOODS, SHIPS AND PORTS - INTEGRATED CONCEPTUAL APPROACH FOR THE INTERNATIONAL MARITIME TRANSPORT
SURUGIU FELICIA, Constanta Maritime University, Romania
SECTION I
NAVIGATION AND MARITIME TRANSPORT
LEGAL PROVISIONS ON LAYTIME AND DEMURRAGE IN CHARTERPARTIES
ADASCALITEI OANA, Constanta Maritime University, Romania
1. INTRODUCTION
The period of time within which the loading or discharging operation is required to be completed, as prescribed in the charterparty, is known as laytime [1]. If this period is exceeded, the charterer will have to pay compensation to the shipowner in the form of demurrage or damages for detention [1]. Laytime and demurrage constitute a complicated field, both from a technical and a legal standpoint [2]. As a consequence of this complexity, laytime and demurrage give rise to many difficulties and frequent disputes [2].
2. THE CALCULATION OF LAYTIME
Most charterparties contain an express term fixing laytime. This can be done directly, by specifying the number of days, or less directly, by an agreement that a specified weight or measurement of cargo will be loaded or discharged in a particular period of time [2]. In a general cargo charterparty it is usual to exclude from the computation of laytime days which are not worked at the port, such as Saturdays and Sundays, or Thursday afternoons and Fridays in Muslim countries [3]. In these cases only working days or weather working days may count as laytime [3]. "Working days" means all days on which work is ordinarily done at the port, excluding Sundays and holidays (Fridays in Muslim countries) [4]. The term describes a day of work, and it is immaterial that on a working day the charterer is prevented from loading, unless the cause of delay is covered by an exception [4]. Evidence of custom is admissible to explain the meaning of "working day" [4]. The number of hours in a particular working day on which a ship is required to load will depend on the custom of the port, and Saturday will normally count as a whole day although it may not be customary to work in the afternoon [1]. The number of hours in a particular working day may also be settled by express or implied agreement [4]. "Weather working day" excludes from the calculation of laytime those working days on which loading would have been prevented by bad weather [1]. In jurisprudence it was held that the weather must affect the loading process and not merely the safety of the vessel, with the result that the mere threat of bad weather, which resulted in a ship being ordered from her berth by the harbour master, did not prevent the period in question from counting as weather working days [1]. When bad weather occurs for part of a weather working day, a reasonable apportionment should be made of the day according to the incidence of the weather upon the length of the day that the parties either were working or might be expected to have been working at the time [4]. Therefore, if two hours are lost due to rain, laytime is not suspended for two hours out of 24, but rather for one quarter of the 24-hour conventional day if the normal working hours are eight [5]. It is irrelevant whether any work was actually taking place on a working day on which the weather was fine [5]. Where the clause refers to a weather working day of 24 consecutive hours, the ratio method is not used and a deduction is made of the actual amount of time that has been lost or, in the case of a vessel waiting for a berth, would have been lost [5]. In a tanker charterparty there are not likely to be such exceptions to laytime, due to the nature of the working period at the oil terminal [3]. However, there will be other express exceptions to laytime, such as the time it takes to shift the ship from her anchorage to her berth (in a berth charterparty this will be considered part of the voyage, even though the charterparty has allowed the notice of readiness to be given earlier than arrival on the berth) or deballasting, as this is a ship operation needed to make the ship ready to load [3]. "Cargo to be discharged at the average rate of not less than - tons per day." Such a clause, where the tonnage of the cargo divided by the average rate of discharge gives a fraction over a day, does not allow the charterer the whole of the last day [4]. Probably the fraction is to be computed by the proportion of hours used to the hours in the working day [4]. The weight of cargo actually loaded or discharged, and not the nominal cargo on which freight may be payable, must be used for the calculation [4]. "Cargo to be loaded at the average rate of - tons per hatch per weather working day." The provision requires the stipulated rate to be multiplied by the number of hatches which the vessel possesses, the product being divided into the tonnage of cargo carried [4]. "Cargo to be loaded at the average rate of not less than 150 metric tons per available working hatch per day." A working hatch is a hatch into which there is still cargo to be loaded or from which there is still cargo to be discharged [4].
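To make the arithmetic of these rules concrete, here is a minimal Python sketch (an illustration added in editing, not part of the original paper); the function names and the sample figures are assumptions chosen to mirror the examples in the text (an eight-hour working day, two hours lost to rain, a per-hatch loading rate and a fixed daily demurrage rate).

```python
def laytime_allowed_days(cargo_tons, rate_per_hatch_per_day, hatches):
    """Laytime allowed when the charter fixes a rate per hatch per weather working day."""
    daily_rate = rate_per_hatch_per_day * hatches
    return cargo_tons / daily_rate

def weather_deduction_hours(hours_lost, normal_working_hours=8, consecutive_24h_clause=False):
    """Time deducted from laytime for bad weather.

    Under a plain 'weather working day' clause the lost time is apportioned:
    2 hours of rain in an 8-hour working day suspends one quarter of the
    24-hour conventional day (6 hours).  Under a 'weather working day of 24
    consecutive hours' clause only the actual time lost is deducted.
    """
    if consecutive_24h_clause:
        return hours_lost
    return (hours_lost / normal_working_hours) * 24

def demurrage_due(laytime_used_days, laytime_allowed_days, daily_rate):
    """Demurrage accrues at the agreed daily rate once laytime is exceeded."""
    return max(0.0, laytime_used_days - laytime_allowed_days) * daily_rate

# Hypothetical example: 8,000 t of cargo, 150 t per hatch per day, 4 hatches
allowed = laytime_allowed_days(8000, 150, 4)   # about 13.33 weather working days
rain = weather_deduction_hours(2)              # 6 hours deducted, not 2
owed = demurrage_due(16, allowed, 2000)        # detention beyond laytime at 2,000 per diem
print(f"allowed {allowed:.2f} days, weather deduction {rain:.1f} h, demurrage {owed:.0f}")
```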
Charterparties, in regard to the time for loading or discharge, fall into two classes: first, those providing for discharge in a fixed time and, second, those providing for discharge in a time not definitely fixed [4]. The approach to be applied where laytime is not fixed is summarized in jurisprudence as follows: if no time is fixed expressly or impliedly by the charterparty, the law implies an agreement by the charterers to discharge the cargo within a reasonable time, having regard to all the circumstances of the case as they actually existed, including the custom or practice of the port, the facilities available thereat, and any impediments arising therefrom which the charterers could not have overcome by reasonable diligence [2]. The charterers will be excused for any obstruction, such as a strike of the dock labourers, the lack of an available berth due to congestion in the port or arrest of the vessel, which effectively interrupts the loading, provided that it is outside their control and that otherwise they have conducted the operation with reasonable dispatch [1]. However, circumstances to be taken into account do
Once the laytime has expired the charterer is in breach and would be liable for damages for detention [3]. However, the majority of charterparties include a clause providing that he may retain the vessel for additional days in order to complete the loading or discharging operation on payment of a fixed daily amount, known as demurrage [1]. It is common practice in voyage charters to specify a demurrage rate, that is, an amount payable as agreed damages for each day or part of a day that a vessel is detained by the charterer [2]. The charter either stipulates a fixed number of days on demurrage or no time limit is expressed, as in e.g. "eight days for loading, after which demurrage at 2,000 per diem" [1]. An agreement to pay demurrage is not, therefore, the payment of the contractual price for the exercise of a right to detain; it is no different in nature from any agreement providing for payment of liquidated damages [2]. Demurrage is recoverable by the shipowner irrespective of whether he suffered damage by the detention of the vessel [7]. Demurrage will cover losses of freight arising under subsequent charterparties affected by the delay or from a consequent reduction in the number of voyages possible under a consecutive voyage charterparty [1]. An agreement to pay demurrage is normally treated as preventing the shipowner from recovering from the charterer more than the agreed sum for the wrongful detention of his vessel. This is so however the delay is caused, whether by simply failing to load or
Where there is no provision in the charterparty for the payment of demurrage, a charterer will be liable for damages for detention for all the time he detains the vessel after the expiration of the laydays [1]. Another situation where damages for detention are payable is where a charterparty stipulates a fixed number of days for the payment of demurrage and these days have expired [1; 2]. If the charterer is in breach of the charterparty in other respects and delay is caused, the charterer will not be liable for demurrage but for damages for detention [3]. This is the case, for example, if the charterer delays the ship at the load port once loading has been completed, either because it has not paid its agents and the ship is therefore prevented from leaving, or because the charterer fails to nominate the discharge port [3]. Even delays in loading or discharging may give rise to losses that fall outside the demurrage provisions [5]. Thus, delay may cause less cargo to be loaded than required by the charter, and this will give rise to a claim for dead freight [5]. It is important to determine whether a claim is one for demurrage or for damages for detention for a number of reasons [3]. First, the rate of damages is an agreed rate for demurrage but not for damages for detention, unless the charterparty expressly provides otherwise [3]. In the latter case the owner would have to prove its loss and adduce evidence as to the market rate for the ship [3]. Where there are no provisions in the charterparty for the payment of demurrage, damages are at large and will be assessed by the court in relation to the actual loss suffered by the shipowner [1]. If the charterparty provides a fixed number of days for the payment of demurrage and those days have expired, the court will normally assess the damages at a
There are a number of legal and technical implications relating to laydays, demurrage and damages for detention. First, they refer to the conditions in which the charterer will not be held liable: lack of exceptions in the contract, if impediments arise from the
[1] WILSON John F., Carriage of Goods by Sea, 7th ed., London: Longman, 2010;
[2] DOCKRAY Martin, Cases & Materials on the Carriage of Goods by Sea, 3rd ed., Cavendish Publishing Limited, 2004;
[3] BAATZ Yvonne, Charterparties, in Southampton on Shipping Law, Institute of Maritime Law, London, Informa, 2008;
[4] EDER Bernard, FOXTON David QC, BERRY Steven QC, SMITH Christopher QC, BENNETT Howard, Scrutton on Charterparties and Bills of Lading, 22nd ed., Sweet & Maxwell, 2011;
[5] BAUGHEN Simon, Shipping Law, 4th ed., Routledge Cavendish, 2009;
[6] CARR Indira, International Trade Law, 4th ed., Routledge Cavendish, 2010;
[7] DERRINGTON Sarah, PANNA Andrew, in M. W. D. WHITE (ed.), Australian Maritime Law, 2nd ed., Federation Press, Annandale, N.S.W., Australia, 2000;
STANDARD CLAUSES OF VOYAGE CHARTER SHIFTING RISK OF DELAY AND READINESS
ADASCALITEI OANA, Constanta Maritime University, Romania
ABSTRACT
The article aims to describe the main features of the standard clauses of voyage charterparties that transfer the risk of delay. Use of the clauses relating either to a port or to a specific berth determines the moment when laytime will begin to run. These clauses are an exception to the usual rule, which states that laytime starts to run at the moment the notice of readiness is given.
Keywords: clauses requiring the charterer to nominate a reachable berth; time lost waiting for a berth clause; time to count whether in berth or not / whether in port or not clauses; clauses designed for specific ports; notice of readiness.
1. INTRODUCTION
The very moment the vessel becomes an arrived ship, the charterer is entitled to full use of the laydays. A series of standard clauses are designed to transfer the risk of delay from the charterer to the shipowner. The outcome depends on the type of charterparty, i.e. berth charterparty or port charterparty.
2. CLAUSES REQUIRING THE CHARTERER TO NOMINATE A REACHABLE BERTH
Voyage charterparties may contain clauses which require the charterer to nominate a berth "reachable upon arrival" or a berth "always accessible" [1]. In jurisprudence it was held that the berth was not reachable upon arrival, and the charterer was in breach of his obligations, in situations where the berth was congested, tugs or pilots were lacking, or night navigation was prohibited [2]. It was also held that, unlike wibon or time lost clauses, there is no distinction between congestion and bad weather [3]. Likewise, the berth is not reachable on arrival if there is insufficient depth of water in the berth or in the port [1]. On the other hand, the word "arrival" means arrival at the point, whether within or outside the commercial or fiscal limits of the port, where the indication or nomination of the particular loading place would become relevant if the vessel were able to proceed without being held up [1]. From that moment the charterer has to bear the risk of any delay, in that he will be liable for damages for breach of contract in failing to nominate a reachable berth [4]. Since the clause has no incidence on laytime, the time which is normally excluded from laytime (e.g. Sundays and holidays) is not excluded from the computation of the damages [1]. From the time the vessel is an arrived ship the charterers are entitled to full use of the permitted laytime, and the owners cannot recover damages at large for the breach during the running of such time and cannot recover both demurrage and damages for the same delay
[1]. The charterers, instead, could trade off the time saved in loading against the initial time lost while they were prevented from nominating a reachable berth [4]. If the ship is not an arrived ship, the shipowner can recover damages for the delay, and their calculation must take into account delays which would have occurred in any event had the ship berthed at once [1]. Time saved on discharge cannot be credited against time lost waiting for a reachable berth to be nominated [4].
3. CLAUSE TIME LOST WAITING FOR A BERTH
This clause, usually used in Gencon contracts, provides that time lost waiting for a berth is to count as loading/unloading time [4]. The use of the time lost clause, or of standard clauses related to particular ports whose waiting place is outside the limits of the port, may well seem particularly appropriate to cases where the charterparty reserves to the charterer an option to choose a loading/unloading place out of a range of ports, at some of which the risk of congestion may be greater than at others, or at some of which the usual waiting place lies inside and at others outside the limits of the port [5]. In English doctrine it was argued that, in the absence of any express provisions, the existence of the option means that the charterer, by the way he requires the contract to be carried out, may influence the incidence or extent of the risk to be borne by the shipowner [5]. The clause shifts the risk before the vessel becomes an arrived ship, i.e. from the moment when it could have entered a berth had one been available [4]. When the clause operates, the charterer will still be able to rely on the laytime exceptions [3]. Specifically, the clause would not allow the shipowner to count as laytime periods when excepted causes such as holidays, bad weather and strikes would have prevented laytime from running had the vessel been in berth [1]. Likewise, in the case of a port charterparty the clause does not allow the shipowner to count as laytime,
In the case of ports which are frequently congested, or where the normal waiting place is outside the port limits, standard clauses are designed which provide that laytime is to run from the time the vessel reaches a specific point but is unable to proceed further because of the shortage of berths or other obstruction [4]. Such a clause will be effective even though the vessel does not become an arrived ship at that time [4].
The charterers require notice of the arrival of the ship so that they can arrange to load the ship promptly [2]. The moment the notice of readiness is given provides a starting point for the calculation of laytime [4]. Usually this moment is precisely determined in charterparties [4]. Unless the charterparty provides to the contrary, for a notice of readiness to be valid the requirements which entitle the notice to be given, such as being an arrived ship and a ship ready to load, must exist at the moment the notice of readiness is given [4]. In this respect, in jurisprudence it was held that where the holds require fumigation after the notice was given, such notice is invalid even though the work necessary to make the vessel ready takes only a short time and is completed before a loading berth becomes available [3]. However, it was recognised that a valid notice of readiness could be given even though some preliminary routine matters, such as removal of hatch covers, still needed to be attended to, provided that they were unlikely to cause delay [3]. Where an invalid notice of readiness is given, it will not become valid when the facts change so as to justify a notice being given [1]. In other words, an invalid notice cannot be treated as inchoate, becoming effective when the cargo becomes available for discharge [8; 9]. In the absence of a valid notice of readiness laytime will not start and, as a consequence, not only have the owners earned no demurrage but they are also obliged to pay the charterers dispatch money [8]. Laytime will not count even if the ship commences loading/discharging operations [1]. By contrast to the normal position, a charterparty may indicate that laytime is to run after the service of a notice of readiness even if the ship was not in fact ready, provided that the notice was served in good faith [1]. Usually the charterparty provides that laytime shall not commence before the commencement date except with the charterer's sanction [2]. If the charterer requests the ship to tender notice of readiness and berth before the earliest lay day, then the charterer has sanctioned the earlier commencement of laytime [2]. If an owner gives a notice of readiness which is premature because it is given before the earliest permissible lay day then, by contrast with a notice which is invalid because the ship is unready for loading/discharging or is not an arrived ship, it takes effect on the earliest lay day [1]. Most charterparties require the shipowner to obtain free pratique before giving the notice of readiness, and laytime will not commence before that moment [3]. Other clauses may provide that notice of readiness may be tendered after arrival of the vessel in the loading port, at any moment, provided that the vessel is cleared by the port authorities [10]. The commencement of laytime shall run from the moment the notice of readiness was served, provided that the requirement for port clearance to be given before notice of readiness was waived by the charterers [10]. Many standard forms of charterparty provide that laytime will not commence until six hours after notice of readiness
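As a rough illustration of how these commencement rules combine, the following Python sketch models them in simplified form (added in editing; the six-hour turn time and the parameter names are assumptions based on the text, and the many further qualifications discussed above, such as free pratique, are deliberately ignored):

```python
from datetime import datetime, timedelta

def laytime_commencement(nor_tendered, arrived_ship, ready_to_load,
                         earliest_layday, turn_time_hours=6):
    """Very simplified model of when laytime starts after a notice of readiness (NOR).

    - A NOR is treated as valid only if the ship is an arrived ship and ready
      to load when the notice is given; an invalid NOR never starts laytime.
    - A premature but otherwise valid NOR takes effect on the earliest lay day.
    - A 'turn time' (here six hours) is added before laytime begins to run.
    """
    if not (arrived_ship and ready_to_load):
        return None  # invalid NOR: laytime does not commence
    effective = max(nor_tendered, earliest_layday)
    return effective + timedelta(hours=turn_time_hours)

start = laytime_commencement(
    nor_tendered=datetime(2013, 5, 10, 9, 0),
    arrived_ship=True,
    ready_to_load=True,
    earliest_layday=datetime(2013, 5, 12, 0, 0),
)
print("laytime commences:", start)  # premature NOR takes effect on the earliest lay day
```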
A ship must be ready to load, so as to prevent the cancelling clause from operating, although she may not have complied with some requirement necessary before laytime starts [1]. Whether or not a ship is ready to load depends on a variety of factors, such as the position of the vessel, whether it is physically capable of receiving the cargo, and whether it has complied with all the port health and documentary requirements [4]. The ship must actually be ready, subject to de minimis [2]. Notice of readiness to load can be given even though it is impossible to commence the loading operation because the vessel is not in berth [4]. The test of readiness to load is less stringent if applied in respect
The moment the notice of readiness is given provides a starting point for the calculation of laytime. Charterers seek to use charterparty provisions shifting the risk of delay. Risk transfer is accomplished in voyage charterparties in a variety of ways. Under clauses requiring the charterer to nominate a reachable berth, owners cannot recover damages at large for the
[1] EDER Bernard, FOXTON David QC, BERRY Steven QC, SMITH Christopher QC, BENNETT Howard, Scrutton on Charterparties and Bills of Lading, 22nd ed., Sweet & Maxwell, 2011;
[2] BAATZ Yvonne, Charterparties, in Southampton on Shipping Law, Institute of Maritime Law, London, Informa, 2008;
LEGAL IMPLICATIONS OF THE VOYAGE CHARTERPARTY PERFORMANCE
ADASCALITEI OANA, Constanta Maritime University, Romania
1. INTRODUCTION
Liability differs in the various stages of performance of the voyage charterparty. It can be channelled to the master in the carrying voyage or in the delivery of goods stage. There may also be a division of responsibility between the shipowner and the multitude of actors involved in the discharge operation. Only by express contractual provisions or exemptions will the responsible person be discharged.
2. THE CARRYING VOYAGE
On completion of the loading operation, the responsibility for continued performance of the charterparty is transferred to the shipowner [1]. Thus the captain is the agent of the owners in providing those necessaries for the voyage which by the terms of the charter are to be paid for by the owners [1], or he acts as an agent of necessity under two conditions: first, the necessity for an extraordinary action such as sale, borrowing money on bottomry, salvage agreements, transhipment or jettison and, second, no possibility of communicating with, or obtaining instructions from, his principals, whether shipowners or cargo-owners [2]. The necessity remains if it proves impossible to obtain instructions because, although the cargo-owners have been communicated with, they have failed to give instructions [2]. In modern times, however, the master has lessened authority owing to the increased facility of communication [2], and the quality of agent of necessity will be rare in practice. If an agency of necessity is established it may entitle the agent to reimbursement from the principal of the necessary expenses incurred and, in some circumstances, remuneration for necessary services [2]. In the recent case ENE Kos 1 v. Petroleo Brasileiro (The Kos) [2010] it was held that the owners were not entitled to remuneration after the vessel was withdrawn. The owners were not doing anything more than was required of a gratuitous bailee by way of caring for the cargo during the 2.64 days that elapsed before the vessel sailed away. Where there was no element of accident, emergency or necessity, remuneration which had not been expressly or impliedly agreed could not be due [3]. In substantiation of the decision, the Cargo ex Argos [1872] case was cited: not merely is a power given, but a duty is cast on the master, in many cases of accident and emergency, to act for the safety of the cargo in such manner as may be best under the circumstances in which it may be placed; and as a correlative right he is entitled to charge its owner with the expenses properly incurred in so doing [3]. Also, if an agency of necessity is established it may afford the agent a defence to a tort action, e.g. for conversion [2]. The master has the duty of taking reasonable care of the goods entrusted to him by doing what is necessary to preserve them on board the ship during the ordinary incidents of the voyage, e.g. ventilation, pumping or other proper means [2]. Reasonable measures also include those necessitating expenses to prevent or check the loss or deterioration of goods by reason of accidents for the necessary consequences of which the shipowner is, by reason of the bill of lading, under no original liability, and the shipowner will be liable for any neglect of such duty by the master [2]. The master will have a lien on the goods for any expenses incurred in the performance of such duty [2]. As the master has to exercise a discretionary power, his owner will not be liable unless it is affirmatively proved that the master has been guilty of a breach of duty [2]. In jurisprudence it was held that if the master cannot communicate with the cargo-owner he will be entitled to sell the goods which are damaged or perishable [2]. If, however, the master can, but does not, communicate with the cargo-owner before selling the goods, the cargo-owner will be entitled to recover damages for conversion even though the sale is reasonable [2]. Where the vessel in which goods are shipped is hindered by an excepted peril from completing the contract voyage, the shipowner must, if the obstacle can be overcome by reasonable expenditure or delay, do his
There are a number of legal implications of voyage charterparty performance in the carrying voyage and unloading stages. First, the quality of agent of necessity will be rare in practice. If an agency of necessity is established it may entitle the agent to reimbursement from the principal of the necessary expenses incurred and, in some circumstances, remuneration for necessary services. Also, if an agency of necessity is established it may afford the agent a defence to a tort action. Secondly, from a legal point of view the carrying voyage is a question of responsibility. The division of responsibility in the case of discharge operations may be modified by the custom of the port or by express provisions in the charterparty. In jurisprudence it was established that such a reallocation of risk by agreement is permissible and the shipowner
BEHIND THE THEORY OF SAFETY AGAINST CAPSIZING AND ASSESSING SHIP STABILITY
ANDREI CRISTIAN, LAMBA MARINEL-DANUT, HANZU-PAZARA RADU, Constanta Maritime University, Romania
ABSTRACT
The paper presents considerations about mathematical modelling and its use in the assessment of ship stability. A stability criterion is defined in the form of a mathematical expression. The connection between stability criteria and safety against capsizing is expressed by means of ordinal measures. The paper proposes a classification of stability criteria according to their possibility of dissimilarity.
Keywords: safety, capsizing, stability, criteria.
1. INTRODUCTION
Mathematics is part of our life and, moreover, of research. One definition of mathematics, as a subject in which we never know what we are talking about, nor whether what we are saying is right or wrong, was given by the British philosopher and mathematician Bertrand Russell. If we analyse this definition we may find it startling, because in general terms mathematics can be considered the most exact and purest of sciences. Of course, this can be considered true only in respect of pure mathematical exercises, where theorems, axioms and definitions can prove what we are doing and the results we are obtaining. The problem is completely different in situations where we try to interpret our mathematical findings, for example when we use mathematical models to express real facts. There are areas of study, such as engineering education, where the difference between mathematics, on the one hand, and the interpretation of mathematical relationships correlated with the validity of their application to real facts, on the other hand, is sometimes widely neglected. This is an extremely important problem, because many are not aware of how to correlate mathematical expressions with models of real situations. The aspect is particularly important when one tries to build mathematical models of real situations which are not yet sufficiently modelled. The development of a pure mathematical relationship is a logically determined, straightforward operation, but building a mathematical model is not. The mathematical model can be considered a practical act of research aimed at finding a mathematical relationship that might be used as a model to express real situations and facts. Of course, the heuristically created relationship cannot always be considered a valid model of reality, because it is only a hypothesis which still has to be proven. With respect to reality, each mathematical model has deficiencies in its proving procedures. To make the mathematical model practicable it is, in the majority of situations, hard to avoid neglecting some of the conditions and facts which are present in reality, and the necessity of simplification is involved.
The effect of such simplifications will be discovered only by applying the hypothesis repeatedly and interpreting the results against what really happens. In analysing the hypothesis, two situations have to be differentiated. Firstly, the value and integrity of a mathematical model may lie in explaining qualitatively what really happens; for example, it can be taken from the Mathieu equation that under certain conditions extremely heavy rolling of a ship travelling in head or following seas may occur, or it follows from the linearized equation of the roll motion that the roll amplitudes depend on the excitation frequency. We can accept such explanatory models and there is nothing wrong with them, but it can be an improper use of mathematics if the authors put the models forward as exact solutions of a problem, especially in the case of a real problem. Secondly, the main purpose of a mathematical model is to provide quantitative predictions. In almost all cases such predictions are subject to errors. Those errors are, in some situations, measuring errors which inevitably occur when the input and output of a model are determined in the corresponding reality in order to test the accuracy of the model. In other situations, more frequently, the errors come from the neglects in the model. In this way the validation of the model is made formally, although a formal judgement of the errors could be made by other methods, for example statistical methods. Hence, what is taken into account is a practical degree of accuracy. Thereby the accuracy of the model is weighed against its complexity, and usually the engineer does not aim for the maximum achievable physical correctness but for results which are good enough for his purposes and can be obtained with an acceptable effort. In view of the problem related to the capsizing of ships, various mathematical models have been issued, but there is not yet a comprehensive and valid one. Many of the mathematical models that abound in the literature claim to offer a solution related to capsizing, although those models only explain some qualitative aspects of the problem or, sometimes, infer from the mathematical complexity of the methods used that the results are valid. The latter aspect has been amply described and criticized in [1]: "Put all your faith in
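For orientation, the two explanatory models invoked above have well-known textbook forms; the equations below are generic ones added in editing, not formulas reproduced from this paper:

```latex
% Linearized roll motion with linear damping and harmonic wave excitation:
\[
  (I_{xx} + A_{44})\,\ddot{\varphi} + B_{44}\,\dot{\varphi}
  + \rho g \nabla\, \overline{GM}\,\varphi = M_{w}\cos(\omega_{e} t)
\]
% Mathieu-type equation obtained when the restoring term varies periodically
% with wave encounter in head or following seas (parametric rolling):
\[
  \ddot{\varphi} + \bigl(\omega_{\varphi}^{2} + \varepsilon \cos(\omega_{e} t)\bigr)\,\varphi = 0
\]
```

Here φ is the roll angle, I_xx + A_44 the roll inertia including added mass, B_44 the linear damping coefficient, ∇ the displaced volume, GM the metacentric height, ω_e the wave encounter frequency and ω_φ the natural roll frequency; the Mathieu-type equation has unstable, heavily rolling solutions for certain ratios of ω_e to ω_φ, which is the qualitative statement referred to in the text.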
It is right to emphasize that mathematics by itself cannot solve a physical problem. The most important step is the modelling, i.e. a formulation which includes the most significant features of the problem to a sufficient approximation, and this must be tested by experiment. In the history of ship hydrodynamics, cases where theories are obviously physically relevant, or where mathematics has preceded the experiments and vice versa, are abundant in the literature. Examples are revealed in the calculation of virtual mass, the development of ocean spectra and breaking waves, or the wave motion in a rolling tank. In fields where the mathematics is not yet developed, the engineer's insight and experience can offer an adequate solution. In the fundamental problem of a stability criterion based on values of the righting lever, a mathematician might note that nobody appears to have counted the numbers of independent dimensionless
[1] KUO C., WELAYA Y., Reply to a critique by A.Y. Odabasi, Ocean Engineering, 1982.
[2] THORNDIKE B., Comprehensive Desk Dictionary, Garden City, N.Y., 1958.
[3] KRAPPINGER O., Stability of Ships and Modern Safety Concepts, International Conference on Stability, Glasgow, 1975.
ROMANIAN NAVAL AUTHORITY AND THE MARINE ENVIRONMENT PROTECTION
BERESCU SERBAN, Romanian Naval Authority, Constanta, Romania
1. INTRODUCTION
Cleaner seas represent an objective which involves the maritime states all over the world. The European Union member states are strongly committed to acting in a harmonised manner to protect and intervene in case of maritime pollution, as per the requirements of the specific IMO conventions and European directives and regulations. To understand how important the struggle against marine pollution is, and also the role played by the national administration, it is important to present briefly the Romanian Naval Authority (RNA), the specialized technical body acting as a state authority in the field of the safety of navigation, which represents and fulfils the obligations assumed by Romania with regard to international agreements and conventions such as those connected with environmental protection against marine pollution from ships. Maintaining a competitive level and a sustainable development are the major objectives, even in the context of the negative effects of the world crisis. RNA provides high quality services in accordance with the provisions of the legal and regulatory requirements which are included in the quality system policy and the procedures of the Management System, making an essential contribution to the company's competitiveness. The Romanian Naval Authority's motto, "Safety through Quality", reflects the importance given to the highest standards within the company. The main tasks of the Romanian Naval Authority regarding the fight against pollution include the following:
- Inspection, control and surveillance of navigation in Romanian maritime waters and inland waterways;
- Fulfilment of the obligations assumed under the international agreements and conventions to which Romania is a party;
- Representing the Romanian Government within the international organizations in the field of naval transports;
- Implementation of international rules, regulations and conventions into Romanian legislation;
- Development, endorsement and submission of draft laws and mandatory norms to the Ministry of Transports for approval;
- Port State Control and Flag State Control;
- Coordination of search and rescue activities in the Romanian navigable waters and of the actions to be taken in case of navigation accidents and casualties;
- Protection of navigable waters against pollution by vessels;
- Sanctioning of contraventions and investigation of navigation accidents and casualties;
- Technical surveillance and certification of maritime and inland water ships, of offshore drilling units flying the Romanian flag and of naval equipment, as per RNA regulations;
- Supervising the compliance of the Romanian naval transports with the provisions of the ISM Code and ISPS Code.
To meet and apply the requirements set in the international conventions such as SOLAS/1974, SAR/1979, MARPOL 1973/1978 and OPRC/1990, the Romanian Naval Authority has been legally appointed as the responsible authority to perform the management and mission coordination for SAR and oil response activities, and also to monitor vessel traffic within the area under Romania's responsibility, through the Maritime Coordination Centre.
Figure 1 Area of responsibility for national intervention in case of pollution and SAR operations
The Maritime Coordination Centre aims to minimize loss of life, injury, property damage and risk to the environment by maintaining the highest professional standards. These objectives make it possible to provide an effective SAR service for all risks, to protect the marine environment and to improve the safety and efficiency of navigation within the maritime area of responsibility. The legislative framework has been implemented, containing all the relevant provisions with regard to EU Directives and IMO Conventions. Romania has ratified important conventions, protocols and agreements concerning the protection of the marine environment, such as MARPOL 73/78 with all annexes, OPRC 1990, CLC 1992 and Bunkers 2001, as well as the regional agreements: the Bucharest Convention, 1992, the Odessa Ministerial Statement, 1993, and the Regional Contingency Plan. The following EU Directives were transposed, implemented and enforced: Directive 2000/59/EC on port reception facilities for ship-generated waste and cargo residues; Decision 2850/2000/EC setting up a Community framework for cooperation in the field of accidental or deliberate marine pollution; Directive 2005/33/EC amending Directive 1999/32/EC as regards the sulphur content of marine fuels; Directive 2002/59/EC of the European Parliament and of the Council of 27 June 2002. At EU level, Constanta MCC is part of the Consultative Technical Group for Marine Pollution Preparedness and Response. All staff of the Maritime Coordination Centre are duly qualified and trained by an authorized body of the International Maritime Organization to act as coordinators for SAR missions and oil response incidents. The MCC's entire personnel are trained to operate all the modern equipment, being able to perform missions in close cooperation with the other appropriate organizations from the Black Sea region. In order to improve the efficiency of the personnel and to maintain a high level of response, training sessions and exercises are carried out at regular intervals.
Figure 2 CleanSeaNet EU planned images
In case of heavy marine pollution, the MCC requests partial or total activation of the National Contingency Plan (NCP) through the General Coordinator of the Operative Commandment for Marine Pollution (OCDM), sends alerts and keeps contact, in emergency situations, with the relevant national and international authorities (including IMO, EMSA and the Black Sea Commission). According to the NCP, the MCC has been designated as Maritime National Operational Contact Point (M-NOCP), with 24-hour capability. The main tasks of the MCC as M-NOCP are to receive alerts for oil pollution incidents and to ensure offshore response communications, directly or through RADIONAV SA. In accordance with the Regional Contingency Plan, the M-NOCP exchanges information with the Black Sea Commission and all Black Sea MCCs regarding the major pollution incidents in the Black Sea and keeps the national competent authorities informed on related situations.
3. CLEANSEANET SYSTEM
The CleanSeaNet system was developed for the detection of oil slicks at sea using satellite surveillance and was offered by EMSA to all EU member states, according to Directive 2005/35/EC. The system is based on marine oil spill detection by on-scene checking of the satellite images. The service, integrated into the national and regional response chain, aims to strengthen operational pollution response for accidental and deliberate discharges from ships and to assist Coastal States in locating and identifying polluters in areas under their jurisdiction. CleanSeaNet delivers oil spill alerts in near real time (30 minutes) to both the Coastal State(s) and EMSA for detected slicks, as well as giving access to the satellite image(s) and associated information over the web (and via e-mail for low resolution images). In case of a detected oil slick, an alert message is delivered to the operational contact point. Each Coastal State has access to the CleanSeaNet service through the dedicated CSN Browser. This web map interface tool allows the viewing of all low resolution images, with oil spill detection analysis
Figure 3 Satellite image received showing the existence of an oil slick
The vessel in question was navigating from the Turkish port of Martas to the Romanian port of Galati, located on the Danube River. On 4 November 2008 at 14.00 LT the m/v GUZIDE S arrived in the port of Galati and, after the completion of the arrival formalities, the representatives of the pollution department within RNA commenced the investigation of the reported pollution incident. The investigation consisted in analysing and verifying the ship's certificates, the navigation log book, the engine log book and the oil and bunkering record books. The notifications transmitted by the ship to Galati Harbour Master prior to arrival and the documents submitted upon arrival were analysed as well. An extended control regarding the ship's compliance with MARPOL requirements, including the questioning of the crew, was performed. It was found that the quantity of bilge water in the port side bilge tanks was smaller than the quantity of bilge water mentioned in the Oil Record Book. The figures from the Oil Record Book did not correspond with the reality revealed by the tank soundings. According to the ship's navigation log book, on 3 November 2008 at 08.12 UTC the vessel was in the position reported by the Italian Monitoring Satellite Image Centre, in full accordance with the moment of the pollution. Faced with this evidence, the captain of the vessel admitted the violation of the MARPOL requirements and that the marine pollution was related to his ship, being due to a negligent transfer of bunkers. In accordance with the provisions of Government Decision 876/2007, the captain of the ship was punished with a substantial contravention fine.
Ecological disasters that have occurred in recent years in Europe have demonstrated the importance of compliance by all ships with the MARPOL provisions. Major pollution from ships can be avoided only through efficient organization, preparedness for response and cooperation between countries in order to develop all national systems by strengthening the cleaner seas concept. The organizational system described in this article and the case report presented, having access to new
DEVELOPMENT OF THE COMPUTER-BASED QUALITY CONTROL SYSTEM USED FOR TRAINING SPECIALISTS IN NAVIGATION
DAVYDOV VOLODYMYR, MAIBORODA OLEXANDR, DEMYDENKO NADIYA
Kyiv State Maritime Academy, Ukraine
ABSTRACT
The paper presents the analysis of an objective computer-based quality control system intended for training specialists in marine navigation. This system of quality control has been applied at the Navigation and Ship Handling Department of Kyiv State Maritime Academy (KSMA) in the process of training students of the 2nd to 4th proficiency levels. The informational and methodological computer-based assessment tool package for the students trained for Bachelor, Specialist and Master degrees has been developed on the basis of the following constituents: a) the system of control, b) three e-textbooks on basic theoretical subjects, c) a built-in matrix for self-assessment to verify the testing results. Both the experimental and the current implementation of the computer-based quality control system proved it to be a reliable and objective assessment tool for the quality evaluation of training on the basic theoretical subjects included in the curriculum during the whole course of studies. The savings in time and financial resources spent on carrying out control sessions are significant. The main advantage of the system has been its efficiency, which currently demonstrates a considerable increase in the competency level of the students in Navigation.
Keywords: navigation, computer-based quality control system, efficiency of testing
1. INTRODUCTION
Despite the difficulties and complexities which took place during the long period of reforms in the field of higher education, Ukraine has managed to preserve the best features of the old school and to bring its maritime education and training (MET) to prominent results, thus playing a leading role in the world's shipping industry as far as the supply of a qualified work force is concerned. The evidence is the great demand for seafarers with Ukrainian diplomas (Bachelor or Master) in the Merchant Navy. According to the statistics of BIMCO (Baltic and International Maritime Council), Ukraine occupies the leading place in the world by the percentage of higher ranks (Captains, Chief Officers, Chief Engineers) and the 5th place by the total number of seafarers supplied to the world's labour market. "Eastern Europe has become increasingly significant with a large increase in officer numbers. Thus, improved training and recruitment levels need to be maintained to ensure a future pool of suitably qualified and high calibre seafarers" (BIMCO 2010:1). Preservation of this status over the longer period in the future is the priority of the Ukrainian maritime institutions. This task is directly connected to the national standards and the quality of MET in Ukraine. Several factors affect the situation, among which the most challenging are:
- the quality and integrity of international and national standards in the field of training the higher crew ranks;
- the general proficiency level of school leavers or students entering the Academy;
- the proficiency level of the teaching staff, especially of those having the experience of command positions in the Merchant Navy;
- the availability of up-to-date certified training facilities and a simulator base;
- the availability of an objective computer-based quality control system for training specialists in navigation.
Most of these parameters are supported by the national regulations which standardize the process of maritime education and training and specify its quality requirements. At the same time, the system of quality control of trainees' academic and professional competency level has not been standardized in some aspects. In most of the Ukrainian maritime institutions the only quality control instrument, both in semester sessions and in final (state) examinations for all proficiency levels, is the traditional assessment with the help of examination cards and face-to-face teacher-student contact. This approach, while acceptable in common practice, does not always allow an objective and full-scope evaluation of the quality of a student's educational level, mainly with reference to the requirements of the branch standard for higher education developed by the Ministry of Education and Science, Youth and Family of Ukraine in the educational qualification curriculum for Bachelors, Specialists and Masters. This national standard specifies that the final/state assessment for Bachelors and Specialists on 14 academic subjects should be held in the format of testing. From the point of view of objective characteristics, this number of subjects presumes the application of tests and rejects the traditional way of evaluation. In addition, the traditional form of final examinations usually leads to extreme physical overload both of the examination board members and of the students. Aiming at higher qualitative characteristics in the process of training, attention was paid to the
YK =    (5)
where
Acp = 1 / Pcp
is the average alternation value for the testing session. In accordance with these parameters, the system is able to correct the results obtained automatically, taking the value of the guessing factor into account. A comparison based on frequency analysis for two basic subjects was performed for the 2012 testing sessions. The objective character and authenticity of the testing results were also confirmed by the analysis of selected groups and subjects. 3. CONTENTS OF THE COMPUTER-BASED QUALITY CONTROL SYSTEM The comparative analysis takes into account the number of students in the groups taking the tests: on Navigation and piloting, 208 Bachelors; on Safety of navigation, 113 Specialists. Both results are valid when considered from the point of view of the Gaussian law of normal distribution. The database of tests corresponds to the contents of the modules of the national standards in the Marine and River Transport training curriculum for specialists in Navigation and to the minimum competency standards for the OOW defined in STCW 78/95. The total number of test assignments included in the final (state) examination is: 2843 on 15 basic subjects for Bachelors; 1760 on 15 basic subjects for Masters. The general information on the academic subjects and the parameters of testing, as well as the methods of defining the number of assignments depending on the number of modules in a credit, were analysed in research papers published during the period of preparation and implementation of the informational computer-based complex of quality control, beginning with 2006. Since then, more than 2000 students of educational levels 2-4 have passed the testing. The diagram below (Figure 1) presents the main data on testing the students' theoretical competency during final examinations, where the vertical axis represents the number of students, the horizontal axis indicates the year(s) of final testing, red depicts Bachelors, blue depicts Specialists and green represents Masters. All teachers and instructors of the leading departments performed the final examinations using the materials of the computer testing. Hence, the total number of students subjected to this type of final control is 4.5 thousand, which is statistically reliable for further research aimed at increasing the quality of students' training in theoretical subjects.
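Since the validity of the comparative analysis above rests on the test scores following the Gaussian law of normal distribution, this assumption can be checked with a few lines of code. The sketch below is illustrative only: the score sample is synthetic, and the 100-point scale and the 0.05 significance level are assumptions rather than values taken from the Academy's testing regulations.

import numpy as np
from scipy import stats

# Illustrative placeholder: in practice these would be the final-test scores
# of one group (e.g. the 208 Bachelors tested on Navigation and piloting).
rng = np.random.default_rng(seed=1)
scores = rng.normal(loc=74.0, scale=9.0, size=208).clip(0, 100)

# D'Agostino-Pearson test of the null hypothesis that the sample
# comes from a normal distribution.
statistic, p_value = stats.normaltest(scores)
print(f"mean = {scores.mean():.1f}, std = {scores.std(ddof=1):.1f}")
print(f"normality test p-value = {p_value:.3f}")
if p_value > 0.05:   # assumed significance level
    print("No evidence against the Gaussian assumption.")
else:
    print("Scores deviate significantly from a normal distribution.")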
(1)
for the 2nd type of assignments (multiple choice with several correct answers) it is
(2)
(3)
where Bi is the mark in points for each assignment of the test and N is the total number of assignments in the test. The initial score for the testing session is defined as the fraction of correct answers, according to the formula:
(4)
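Formula (4) above defines the initial score as the fraction of correct answers, and the text notes that the system then corrects the result for the guessing factor. The exact correction formula used by the system is not reproduced here, so the sketch below applies the classical linear correction for guessing as an assumed stand-in, with the guessing probability taken as one over the number of answer options.

def initial_score(num_correct: int, num_assignments: int) -> float:
    """Fraction of correct answers, as in formula (4)."""
    return num_correct / num_assignments

def corrected_score(num_correct: int, num_assignments: int,
                    options_per_question: int = 4) -> float:
    """Score corrected for random guessing.

    Assumption: a classical linear correction is applied, in which the
    expected number of lucky guesses among the wrong answers is
    subtracted; the real system may scale the primary mark differently.
    """
    p_guess = 1.0 / options_per_question
    raw = num_correct / num_assignments
    corrected = (raw - p_guess) / (1.0 - p_guess)
    return max(0.0, corrected)

# Example: 36 correct answers out of 50 single-answer assignments
print(initial_score(36, 50))      # 0.72
print(corrected_score(36, 50))    # about 0.63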
Figure 1 Comparative diagram of applying the computer-based system of quality control in 2006-2012
The demand for the computerized control system and for its objectivity and authenticity created the preconditions for designing the Informational and Methodological Complex of Quality Control Tools for Deck Officers. Its principal characteristics are: a) the application of the Opentest-2 software designed by the Kharkiv National Radioelectronics University; b) the implementation of an improved database comprising four types of assignments with different levels of complexity and a new design presentation; c) the implementation of the self-assessment system supplementing the 3 e-textbooks on major subjects; d) the flexibility and multifunctional character of the quality control complex, based on the combination of its educational, training and controlling functions, as well as on its functional re-arrangement for specific purposes. The objectivity and authenticity of the competency quality assessment are achieved owing to the following factors: the results of testing are independent of subjective interference; the same criteria are used for the final assessment and the same parameters of testing are applied, as approved by the corresponding Testing Regulations of the Academy; the database for each module (subject) is sufficiently large, exceeding by ten times the number of questions included in a test by random sampling; the sequence of questions is variable; the correlation between correct answers and the probability of guessing is reflected in the primary mark and its scaling. 4. CONCLUSIONS
/Manpower_Study_handout_2010.ashx
The profound research and the 7-year experience with the objective computer-based quality control system for training specialists in navigation of proficiency levels 2-4 lead to the following conclusions: 1. The Department of Navigation and Ship Handling of Kyiv State Maritime Academy has created the informational and methodological complex of computer-based tools for quality control of navigation students' proficiency, which comprises a) a computer-based system
[2] MYKHAILOV V., KUDRIAVTSEV V., DAVYDOV V., Navigation and Piloting, E-textbook, Kyiv State Maritime Academy, 2009 [3] MYKHAILOV V., KUDRIAVTSEV V., DAVYDOV V., Practical nautical astronomy, E-textbook, Kyiv State Maritime Academy, 2009 [4] KUDRIAVTSEV V., DAVYDOV V., SOKOLOVSKY D., Insuring Safety of Navigation, E-textbook, Kyiv State Maritime Academy, 2012 [5] DAVYDOV V., On the problem of step-by-step training of senior officers for sea and river transport, Water transport, Kyiv State Maritime Academy, Issue 10, 2009, pp. 149-158. [6] DAVYDOV V., SOKOLOVSKY D., Informational and methodological complexes of quality control as a means of quality improvement of deck officers' proficiency, Water transport, Kyiv State Maritime Academy, Issue 12, 2011, pp. 121-126. [7] STCW Convention Comprehensive Review, IAMU, 2010
ABSTRACT Quay cranes play an important role in cargo operations in ports and are considered the leaders of the port operator's technological process. This paper presents calculations for determining grab efforts and proposes changes in the grab structure for optimizing the exploitation of quay cranes. Keywords: quay crane, grab, exploitation, port, cargo
1.
INTRODUCTION
Quay cranes are means of production that allow hard, low-skilled work to be replaced with easier, highly skilled work, and they enable the mechanization and automation of production processes, thus ensuring increased productivity, reduced cost price and shorter execution times in all areas of economic activity. They play an important role in the development of maritime transport, inland waterways, road and rail. They also lead to increased productivity in industrial and civil construction, bridge construction, viaducts, railways and other domains. Quay cranes form the group of the main lifting machinery used in ports; most of the loading and unloading of ships is done with them. This is the reason for which quay cranes are considered the leaders of the whole technological process. In order to load and unload vessels, quay cranes must perform a series of movements (maneuvers) so that goods can be placed in or removed from any part of the ship. In order to optimize the traffic of ships in the port of Constanta and to optimize ship operation, the structural characteristics and specific functions of each type of ship must be analyzed, in close relation to the structural features of the operating berths [1]. Achieving high quay crane productivity is imposed by the need to ensure quick loading and unloading of ships and barges, which is of prime importance in the activities of operating ports. The current and future trend in solving the loading, unloading, transport and handling of goods, materials and spare parts by quay cranes is to create machinery with reliable operation and a high degree of mechanization and automation, machinery that occupies small workspaces, is effective, has increased productivity and is easy to maintain and operate. 2. DETERMINATION OF GRAB EFFORTS
Figure 1 Grab forces
In figure 2 the following forces appear: - FB is the force in the arm - Hi is the closing force - S is the force in the cable.
The weights of the empty grab's main components are:
G0 = G1 + G2 + G3 = g (m1 + m2 + m3) [kN]   (1)
where: m1 - mass of the upper beam; m2 - mass of the buckets; m3 - mass of the lower beam.
According to the literature, the masses of the empty grab's components can be approximated using the algorithm below [3].
A clamshell or grab consists of hoist drum lagging, clamshell bucket, tag line, and wire ropes to operate holding and closing lines [2].
(2)
(3)
(4)
(8)
(9)
(5)
where: F1 - force in the upper beam; F2 - force in the articulation of the bucket with the bars; F3 - force in the lower beam; i - transmission ratio (for corn, with a density of 760 kg/m3, i has values between 3 and 4 [3]; we choose i = 3); η - efficiency of the closing hoist (from calculations, η = 0.97).
Discussion: When choosing the transmission ratio i, it is taken into account that a ratio as high as possible is favourable. This, however, means more cable wound on the drum and therefore a longer bucket-closing time, so the grab's productivity decreases. Therefore, we chose the lowest value in the range in order to obtain an increase in productivity.
Calculations: S - force in the cable; from the calculation, S = 90.32 kN   (6)
F1 = 24 + (3 · 0.97 - 1) · 90.32 = 196.51 kN
F2 = 40 kN
F3 = 16 + 3 · 0.97 · 90.32 = 278.83 kN
2.2 The equilibrium of forces
All the forces acting in the grab are concurrent in O. Reducing these forces in point O, the grab bucket will be in equilibrium under the action of the resultants R2 and R3 and of the closing force Hi. Next, we must calculate the resultants R2 and R3, as well as the angles they make with the horizontal, in order to write the equation of equilibrium, i.e.:
Hi + R2 cos(α + β) = R3 cos β   (7)
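The closing-hoist forces computed above can be reproduced numerically. The symbolic form used in the sketch (F1 = 24 + (i·η - 1)·S and F3 = 16 + i·η·S, with the constants 24 kN and 16 kN copied from the worked example) is inferred from the printed numbers, since the underlying formulas are not reproduced here; it is a cross-check, not the paper's own derivation.

# Values taken from the worked example in the text.
S = 90.32      # force in the closing cable [kN]
i = 3          # transmission ratio of the closing hoist
eta = 0.97     # efficiency of the closing hoist

# The constants 24 kN and 16 kN appear in the printed calculation; their
# physical meaning (weight terms of the grab members) is assumed here.
F1 = 24 + (i * eta - 1) * S    # force in the upper beam
F3 = 16 + i * eta * S          # force in the lower beam

print(f"F1 = {F1:.2f} kN")     # 196.51 kN, matching the text
print(f"F3 = {F3:.2f} kN")     # 278.83 kN, matching the text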
R1 = 223.30 kN
2.4 Determination of the R2 reaction force
We apply the law of cosines (see fig. 2.c):
a² = b² + c² - 2bc cos A   (11)
R2² = F2² + R1² - 2 F2 R1 cos(180° - γ)   (12)
=> R2² = 40² + 223.30² - 2 · 40 · 223.30 · cos 152°
R2² = 1600 + 49862.89 - 17864 · (-0.88)
R2² = 67183.21   (13)
So, R2 = 259.19 kN.
2.5 Determination of the angle β
By applying the law of cosines:
F2² = R1² + R2² - 2 R1 R2 cos β   (14)
that is,
cos β = (R1² + R2² - F2²) / (2 R1 R2)   (15)
resulting in
β = arccos[(R1² + R2² - F2²) / (2 R1 R2)]   (16)
Calculus:
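The numerical calculation can be sketched in Python as a cross-check of relations (11)-(16); the small differences from the printed values come from the rounding of cos 152° to -0.88 in the text.

import math

F2 = 40.0      # kN, force in the articulation of the bucket with the bars
R1 = 223.30    # kN
gamma = 152.0  # degrees, i.e. 180 deg - 28 deg

# Law of cosines, relations (11)-(12)
R2 = math.sqrt(F2**2 + R1**2 - 2 * F2 * R1 * math.cos(math.radians(gamma)))
print(f"R2 = {R2:.2f} kN")   # about 259.3 kN (259.19 kN with cos rounded to -0.88)

# Angle between R1 and R2, relations (14)-(16)
cos_beta = (R1**2 + R2**2 - F2**2) / (2 * R1 * R2)
beta = math.degrees(math.acos(cos_beta))
print(f"beta = {beta:.1f} degrees")  # about 4.2 deg; section 2.7 of the text uses 5 deg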
2.6 Determination of the angle δ
δ = 90° - φ   (17)
where φ is the maximum angular half-opening of the cups. According to [3], 2φ = 156°...160°; we choose 2φ = 160°, so φ = 80°. Resulting: δ = 90° - 80°, so δ = 10°.
2.7 Determination of the (α + β) angle
α = 90° - γ   (18)
α = 90° - 28°   (19)
Resulting: α = 62°; α + β = 5° + 62°, so α + β = 67°.
2.8 Determination of the closing force
The closing force results from the equilibrium equation of the forces reduced in point O:
Hi = R3 cos β - R2 cos(α + β)   (20)
R3 results from the R3F3H3 triangle:
cos β = F3 / R3   (21)
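Relations (20) and (21) can be combined into a short numerical sketch. It assumes that the angle in (21) is the same β that appears in (20); under that assumption R3·cos β equals F3, so the closing force follows directly from the values already obtained.

import math

F3 = 278.83           # kN, force in the lower beam
R2 = 259.19           # kN, reaction force from relation (13)
alpha_plus_beta = 67  # degrees, from section 2.7

# Relation (21): cos(beta) = F3 / R3, hence R3 * cos(beta) = F3 (assumption: same beta).
# Relation (20): Hi = R3*cos(beta) - R2*cos(alpha + beta)
Hi = F3 - R2 * math.cos(math.radians(alpha_plus_beta))
print(f"Hi = {Hi:.1f} kN")    # about 177.6 kN under these assumptions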
(22)
Figure 3 Rods bending in FEMAP
For optimization, the tie rods could be replaced by hydraulic cylinders.
4. CONCLUSIONS
We propose the following measures for optimizing grab exploitation: 1. To increase the closing force (in order to minimize cargo loss between the buckets and to increase the digging force), the weight of the lower beam must be reduced. 2. According to studies, the closing force is at a maximum when the grab is open and decreases as it closes, because Hi is proportional to the weight of the grab and to the gear ratio i of the hoist. For these reasons, a transmission ratio as high as possible is recommended. This, however, requires more cable wound on the drum, hence a longer closing time, which leads to lower
By replacing a pair of rods with a single hydraulic cylinder, the grab's weight is reduced. Due to the hydraulic action, the closing force increases and the closing time decreases. The hydraulic action also ensures good sealing of the jaws and eliminates cargo losses during exploitation. The hydraulic drive can be separate for each bucket, ensuring more precise control over each cup (together or separately). 5. REFERENCES
[1] BERESCU, Ș., Studiu constructiv dinamic al acvatoriului portului Constanța și considerații privind creșterea eficienței operaționale în port, Ed. Nautica, Constanța, 2012. [2] http://www.tpub.com/eqopbas/147.htm, accessed 20.11.2012
ABSTRACT This paper presents the history and the problems of the intact ship stability regulations that have entered into force over the years. The problems of ships losing stability and capsizing have concerned the maritime community from the very beginning, such problems having always been a part of maritime safety. Maritime casualties related to the loss of intact ship stability continue to occur despite the fact that the ships comply with stability criteria. The necessity of a new generation of stability criteria, taking additional factors into account, has to be considered. Keywords: ship stability, stability criteria, weather criterion, righting moments, metacentric height, lever arm curves, capsize. 1. INTRODUCTION methods, based on approximations, were invented to overcome this problem, but the final solution came with the appearance of computers. Ship stability was judged mainly on the calculated value of the metacentric height, which even nowadays is still wrongly viewed as the main factor. In 1939 Rahola carried out extensive statistical investigations into ship stability. Various still-water lever arm curves of capsized ships were analyzed, and he concluded that a large number of ships had righting levers below the minimum values recommended by experts at that time. He identified that the ships had various values of righting levers, from too small, according to the maritime boards, to critical and sufficiently large lever arms. His investigations finally resulted in the definition of a standard lever arm curve defined by minimum levers at 20 and 30 degrees of heel, the maximum lever being at 35 degrees of heel and the angle of vanishing stability at 60 degrees. All lever arm curves are accepted as equivalent when the enclosed area up to 40 degrees is equal to or larger than that of the standard curve. Rahola's investigation was a success and proved, later on, to be the basis of the minimum stability criteria adopted over the years. Even the present intact ship stability criterion, issued by the International Maritime Organization through Resolution MSC.267(85), is based on Rahola's conclusions. From his investigation, it is important to note that ships that capsized due to dynamic effects like resonant rolling or shifting of cargo had, in most situations, been categorized as safe ships with sufficiently large still-water lever arms. Thus, dynamic influences were considered neither directly nor indirectly in Rahola's minimum requirements. Provisions concerning intact ship stability were introduced at a later stage in the international regulations on ship safety. The necessity of intact stability rules was indeed uncertain until SOLAS 48, which, in the Recommendations contained in Annex D, recommended to the Administrations a more detailed examination of intact ship stability. The first international intact ship stability rule at IMO originated from a recommendation contained in the conclusions of SOLAS 60, when for the first time it was recommended to initiate studies on the basis of information referred by
The history and analysis of the regulations related to intact ship stability, as well as the improvement of a practical methodology for the assessment of ship stability on board the vessel, are the main objectives of this article. The basic motivation came from the feeling that key advances in the knowledge, understanding and applicability of ship stability principles, stated in regulations and correlated with practical problems, can be integrated within a single framework. 2. THE HISTORY AND ANALYSIS OF INTACT SHIP STABILITY REGULATIONS The problem of the stability of floating bodies, which can be traced back to Archimedes himself, has never ceased to interest scientists and engineers and has become an important part of academic studies. Intact ship stability has been understood for a very long time in terms of positive righting moments. In 1747 Bouguer defined, in his work Traité du navire, de sa construction et de ses mouvements, the metacentre as the intersection of two vertical axes passing through the centre of buoyancy (the centre of gravity of the displaced fluid) at two slightly different angles of heel. Two years later, Euler, in Scientia navalis seu Tractatus de Construendis ac Dirigendis Navibus (1749), gave a general criterion of ship stability based on the restoring moment: the ship remains stable as long as the couple formed by the weight (applied in the vessel's centre of gravity) and the buoyancy force (applied in the vessel's centre of buoyancy) creates a restoring moment. In 1757 Bernoulli discovered the relationship between the metacentric height (GM) and the rolling period of ships. Later on, Moseley introduced the dynamic approach with respect to the area under the lever arm curves. Around 1900, the problem of ship stability was considered solved, based on the knowledge needed to evaluate the dynamic stability of existing ships. In fact, only the theoretical considerations were solved; the main problem was to apply these fundamentals to practical calculations of ship stability in terms of righting levers, given the complex geometry of ships' hulls. This problem remained up until the last decades of the 20th century; several
The general belief is that the current ship stability regulations reflect little of the state of the art of ship behaviour and seakeeping in different practical situations, especially in rough seas. Additionally, ships that are categorized as safe continue to lose their intact stability due to the influence of factors which depend, directly or indirectly, on the minimum stability requirements. There is a need to rethink the stability problems arising from the new ship design trends and the new, economically driven ship operation, as well as a need for competitive officers on board vessels capable of facing the new
[1] Belenky V, Jan Otto de Kat, Umeda N On Performance-Based Criteria for Intact Stability, ABS Technical Papers 2007 [2] Francescutto A. - Intact ship stability-the way ahead, Marine Technology, Vol. 41,2004, pp.31-37 [3] Francescutto A. The intact ship stability code present status and future developments, Marine Technology, 2009, pp.199-206 [4] Kuo C. and Welaya Y. A review of intact ship stability research and criteria, Ocean Engineering, Vol.8, Issue 1, pp.65-85 [5] Res. A 167 Recommendation on intact stability for passenger and cargo ships under 100 metres in length, IMO, London 1968 [6] Res. A 562 Recommendation on a severe wind and rolling criterion (weather criterion) for the intact stability of passenger and cargo ships of 24 metres in length and over, IMO, London 1985 [7] Res. A 749(18) - Code on Intact Stability for All Type of Ships Covered by IMO Instruments, as amended by MSC. 75(69), IMO, London 1999 [8] Res. MSC. 267(85) International Code on intact Stability, 2008 (2008 IS Code), IMO, London 2008
Risk management is "the formal process by which an organization establishes its risk management goals and objectives, identifies and analyzes its risks, and selects and implements measures to address its risks in an organized fashion". Today's risk management process encompasses more than just insurance, work safety and health, and legal liability management. It also includes an ongoing and complex process of evaluating and minimizing inherent, enduring organizational risks, in this case those of the academic institution, students, community agencies, community members, and others involved in the service-learning experience. To avoid health and legal liability issues, risk management procedures need to be considered before starting any service-learning experience. This fact sheet provides background information and describes a systematic approach to establishing a safe, minimal-risk environment for all participants: students, faculty, supervisors, transporters, community agency representatives, and others. Keywords: risk management, students, process, objectives, components
1.
INTRODUCTION
Start by inquiring about the policies and procedures that may already be in place on your campus: does your campus have a risk management policy for community-based educational experiences, for community service, or for clinical placements? To avoid duplication of effort, be sure to consult with administrators and faculty in other schools and departments on your campus that have an existing service-learning or community-based learning program in place. If your campus has an Office of Service-Learning or related office, consult with them as well. Learn from their stories of both the successes and the challenges involved in managing risks and avoiding liability. When available, request pertinent documents, such as student and agency orientation materials, consent forms, university-agency agreement forms and liability policies, to review as templates for your program. In general, the more the service-learning environment is sanctioned by the academic institution, the greater the potential for liability to the academic institution. Conversely, the less the service-learning environment is sanctioned by the academic institution, the greater the potential for liability to the participating students and agency. For example, if the student does community service on his or her own, outside the scope of a credit-bearing course or official campus program, the student is probably not covered by the institution's liability insurance. In either scenario, it is important to create signed agreements that clarify the liability insurance coverage provided by the community partner and the academic institution involved in service-learning. Be sure to check your state's requirements; for example, worker's compensation insurance may be required by state law. Worker's compensation for students is often the responsibility of the academic institution if the service-learning experience is a requirement. Although it is critical to have some form of liability insurance coverage at both the community agency and the academic institution, financial losses will only be an
issue if an adverse event occurs; the ultimate goal is to prevent any adverse occurrences. It is not just financial losses that are at stake; one must also consider the prevention of other losses, including the loss of trust and mutual understanding in community-campus relations, which is the foundation of a successful partnership. Liability prevention involves the systematic identification, analysis, measurement and reduction of risks. It encompasses several aspects of the service-learning experience, including the community agency (e.g., slipping on a wet stairway), product or service delivery (e.g., quality of care provided), transportation (e.g., motor vehicle accident), and worker's compensation, among others. An example of risk prevention is training students in safe needle disposal before working in health clinics. An example of risk reduction is assuring that gloves are available for student use in health care environments, or a review of emergency response procedures, such as fire exits. If an adverse event occurs that involves legal intervention, consider the following: Injury to a student in the service-learning experience: typically, medical costs are paid through workers' compensation when the student's injuries occurred while he/she was providing service within the scope of the service-learning experience. Injury to someone else by a student or faculty member in the service-learning experience: in this case, there is the possibility of litigation; anybody involved in the situation could be named as a defendant, including the student, the academic institution, the faculty member, or the community agency. The academic institution would defend and indemnify the student and the faculty member if each were operating within the scope of their student or faculty roles. Risk management is an ongoing process that requires continuous revision in response to changing governmental and workplace policies. To assure the sustainability of your community-campus partnership, adequate planning, orientation, and continual evaluation are essential. Furthermore, involving all stakeholders
[1] Boise State University http://servicelearning.boisestate.edu/aboutsl/risk.asp. This site includes Risk Management and Insurance with Service-Learning FAQ's, an Informed Consent Form for service-learning trips, incident report procedures, and safety tips for students. [2] Brigham Young University Idaho www.byui.edu/ServiceLearning/subpages/fgliability.htm This page describes the set of steps BYUI staff and faculty should take in ensuring students are properly covered when leaving the campus for service-learning experiences, including a Master Service-Learning Placement form and a BYUI Student Service-Learning Agreement. [3] California State University System http://www.calstate.edu/cce/resource_center/servlearn_ri sk.shtml. This guidebook on Managing Risk in Service-Learning offers guiding principles to reduce risk in servicelearning, describes a process for implementing risk management, and provides a number of tools and checklists. [4] Iowa State University http://www.celt.iastate.edu/ServiceLearning/risk.html This page on Risk Management and Service-Learning is intended to assist faculty in assessing program risk issues. [5] Maricopa Community College www.maricopa.edu/legal/rmi/ This extensive site, created by Maricopa's Office of the General Counsel Risk Management Division, includes forms, information, resources, presentations, and new items in areas such as assumption of risk, claims, insurance, international education, and motor vehicle usage. [6] St. Edward's University http://www.stedwards.edu/risk This site includes a risk management manual, procedures for international trips and study abroad, and a checklist of when to use risk management forms. [7] Suffolk University blogs.cas.suffolk.edu/servicelearning/resources-forfaculty/suffolk-university-service-learning-riskmanagement-manual/
THE BENEFITS OF THE IMPLEMENTATION MECHANISMS FOR THE INTEGRATED SYSTEM IN SMES
POPA LILIANA-VIORICA
Constanta Maritime University, Romania ABSTRACT To survive and develop their activities in an increasingly competitive environment, small and medium-sized enterprises have to increase their competitiveness and progressively reduce their operational costs. It is necessary to develop a flexible and unique management system that these enterprises can use to integrate all management systems and activities related to quality, safety and environmental issues, to improve their overall business performance and also to get prepared for certification according to the relevant international standards. The output of this paper is a route map of activities for the implementation of the integrated management system, incorporating tools that address specific management areas and use quality, safety and environmental issues to focus them. The route map has the potential to integrate the overall management activities of an organization. The tools of the route map were partially implemented in two SMEs, giving positive validation of the concepts. Key words: standard, SMEs, route map, safety, quality.
1.
INTRODUCTION
SMEs are and will remain a significant concern within the framework of the European Union economy. The global trend indicates that today and in the near future big firms will merge into giant enterprises, which will dominate the market, influencing all its parameters and determining prices. For these reasons, SMEs will face many perils and run the risk of being excluded from the marketplace if they do not manage issues like cost reduction and competitiveness immediately and in the most efficient way. The European Union's concern about enabling SMEs to survive in a rather difficult business environment is very obvious. It is important to mention that: In the 1996 British Quality Foundation & EFQM edition of the Business Excellence Model and the process of Self Assessment, special guidance for Small Businesses is provided for the first time. The Regulation (EEC) 1836/93 for the Eco-Management and Audit Scheme (EMAS) gives emphasis to the way in which it can be implemented in SMEs. Most of the Research Programs funded by the European Union provide opportunities for projects that deal with SME development and performance improvement. Additionally, management system standards like ISO 14004:2005 "Environmental Management Systems - General Guidelines on Principles, Systems and Supporting Techniques" and BS 8800:1996 "Guide to Occupational Health and Safety Management Systems" make special reference to their applicability to SMEs. Quality, safety and environmental issues reflect all aspects of competitiveness. More precisely: the concept of Quality Management, as a means to achieve benefits for all stakeholder groups through sustained customer satisfaction (ISO 9001: 2008
"Quality Management Systems - Guidelines"), Safety, as a management field for controlling and reducing all kinds of losses and the relevant cost Environment, as a synthesis of internal and external parameters to be managed for the benefit of both the organization and the society create, in synergy, a triangle basis for developing a management system that SMEs need and are able to implement and which can be extended to serve the overall management needs of these enterprises. 2. DEVELOPMENT OF AN INNOVATIVE UNIQUE GENERIC SYSTEM FOR MANAGING QUALITY, SAFETY AND ENVIRONMENTAL ISSUES By examining the special characteristics and needs of SMEs, among which the necessity for cost-effective management systems and procedures and the limited resources are the most critical ones, the importance of establishing and using management systems that unify a number of issues in these organizations becomes very obvious. Although a number of articles have addressed the IMS approach for quality, safety and environmental issues, no generic management basis has been established yet for SMEs. Standards, models and regulatory documentation demonstrate a structural relationship between the management methodologies of these three issues but there is a big difference between their objectives and orientation as they appear to be put in practice at the moment: Quality management basically aims at satisfying customer expectations and needs Safety management primarily aims at fulfilling legal obligations Environmental management aims at proving the existence of a social responsibility. By integrating these issues through the development of a unique management system and a common
The combination of organizational flexibility, operational cost reduction and continuous total performance improvement assessment will increase SMEs' competitiveness dramatically. An IMS route-map can act as a platform for the development of a new management style that will be based on: simplified, purposeful procedures; value-adding process planning; front-line management practices. 7.1. Inducing vertical subcontracting: the Korean way Korea's experience is of special interest since the rapid development of its subcontracting system allowed the SME sector to greatly expand its role in manufactured output and exports in a relatively short period: the two decades since the mid-1970s. The radical change in industrial size structure during that period was partly a result of the changing composition of industrial output by sector, and partly due to a policy imperative to spread the fruits of industrial growth more widely (Baek, 1992). The later shift from a low-wage strategy to a development model in which interfirm networks gained importance (Cho, 1995, 2) also played a role. A dense subcontracting system was built on cultural, economic and policy factors, and on direct incentives. Many linkages rest on mutual trust and interpersonal respect based on social relationships, such as common schooling and regional or family background (Cho, 1995, 13). At the same time, market forces encouraging subcontracting were complemented by government policy and pressure. Some of the new small firms are spin-offs from the large enterprises for which they subcontract, while others arose independently. Legislation enacted in 1982 specified the SME industries to be promoted, excluded large firms from activities reserved for small ones and promoted subcontracting (Cho, 1995, 4). Since the late 1980s, externalization (transfer of production activities
The particular tools of the IMS route-map aim at: introducing a systems approach to areas that, generally, are not managed that way in SMEs; fulfilling the needs of SMEs; satisfying the requirements of the applied standards.
COCONET PUTTING TOGETHER SEAS WITH ROMANIA AS WORK PACKAGE LEADER FOR BLACK SEA PILOT PROJECT
ABSTRACT CoCoNet is the abbreviation of a research project whose full title is "Towards COast to COast NETworks of marine protected areas, coupled with sea-based wind energy potential", funded under the OCEAN.2011-4 theme of the European Union's Seventh Framework Programme (better known as an FP-7 project). This collaborative project comprises 39 partner institutions from 22 countries, including Romania. Environmental policies focus on protecting habitats that are considered valuable because of the biodiversity they encompass. Such policies also aim at producing energy in cleaner ways. The establishment of Marine Protected Area (MPA) networks and the installation of Offshore Wind Farms (OWF) are important ways to achieve these goals. The scope of this paper is to highlight, on the one hand, the work packages (WP) established within the CoCoNet Project and, on the other hand, to point out the Romanian partnership in the CoCoNet Project. Keywords: CoCoNet Project, Black Sea, Mediterranean Sea, marine protected areas, work package. 1. INTRODUCTION Mediterranean and the Black Sea, shifting from a local perspective (centred on single MPAs) to the regional level (network of MPAs), and finally to the basin as a whole (network of networks). The identification of the physical and biological connections that exist among MPAs will then be useful to elucidate the patterns and processes of biodiversity distribution; to explore where offshore wind farms might be established, producing an enriched wind atlas for the Mediterranean and the Black Sea.
The CoCoNet Project is intended to identify groups of putatively interconnected MPAs in the Black and Mediterranean Seas, shifting from the local (single MPA) to the regional (networks of MPAs) and basin (network of networks) scales. Amongst other points of interest, the coastal focus will be widened to offshore and deep-sea habitats, including them in the MPA networks. These activities will also identify areas where Offshore Wind Farms might become established, avoiding overly sensitive habitats but acting as stepping stones through MPAs. Socioeconomic studies will be integrated into knowledge-based environmental management aiming at both environmental protection (MPAs) and clean energy production (OWF). Current legislation is crucial for providing guidelines and finding legal solutions to problems concerning the use of maritime space. Two pilot projects (one in the Mediterranean Sea and one in the Black Sea) will test in the field the assumptions of the theoretical approaches. The Project covers a large number of countries and involves researchers covering a vast array of subjects, developing a timely holistic approach and integrating the Mediterranean and Black Sea scientific communities through intense collective activities and a strong communication line with stakeholders and the public at large [1]. It is the project's aim to produce the guidelines to design, manage and monitor networks of MPAs, and an enriched wind atlas for both the Mediterranean and the Black Seas, creating a permanent network of researchers that will also work together in the future, making their expertise available to their countries and to the European Union. The CoCoNet project has two main themes [1]: to identify prospective networks of existing or potential Marine Protected Areas (MPAs) in the
2.
Several work packages (WP) were established within the CoCoNet Project and split between the participants, as follows [1]: 2.1. WP1-Management Objectives: manage, direct and monitor the overall performance of the project; ensure the correct progress of the work so that the results of the project adhere to the contract's requests; ensure adequate collaboration between the groups working in different work packages within the project; coordinate the production of deliverables and the organization of meetings, both in person and by video conference; prepare and deliver the periodic progress report to the Commission; administer project resources; supervise the decision-making process; create the processes, templates and instructions to control the performance of work plans and resources with the support of a specific web-based system;
2.2. WP2-Habitat mapping: state of knowledge, data integration and scenarios of protection The knowledge about habitat distribution and extent is critical for the conservation and the management of the marine system. Both data gathering through habitat mapping and data management systems to synthesize the available information about habitat distribution at large scale require the use of standard approaches to enable comparisons between areas and organize information in maps and reports. Different key concepts and methods in dealing with marine habitat classifications and mapping have been developed by different disciplines (e.g. marine geologists and ecologists, oceanographers). A unifying approach focusing on the definition of habitats, the measurable features to describe it, the scale and the hierarchical framework to be used is needed to provide up to date information about the distribution of habitats at basin scale. WP2 will integrate EU and non-EU experience in habitats classification and mapping from coastal areas to the deep sea. Data about habitat distribution will be combined to a spatial analysis of the distribution of overlapping threats at basin scale. Data modelling will further refine results of habitat mapping and use of site selection algorithms will combine results from different disciplines to provide scenarios of MPA networks. The aims of WP2 are: revision of habitat classification schemes and identification of common criteria to combine multi-scale geological, oceanographic and biological data to derive habitat maps at Mediterranean and Black Seas scales; review of habitat distribution and extent including the Mediterranean and Black Sea. Data mining, integration and production of a multi scale habitat cartography, with specific focus for habitats included in the Habitats Directive (92/43/EEC); mapping of human threats in coastal and deep sea habitats at basin scale; use of site selection algorithms to combine results coming from the contribution of different disciplines (i.e. modelling approaches) to provide scenarios of MPA networks at Mediterranean scale. 2.3. WP3-Species assemblages, dispersal and connectivity Objectives are defined through questions: what is the existing network of connectivity? what is the current effectiveness of the present network of MPAs for favouring conservation? what knowledge is lacking and how to expand the existing network of MPAs in order to maximize the conservation of biodiversity and resilience of ecological communities, while
2.10. WP10-Black Sea Pilot Project Objectives: acquisition of new geological, biological and oceanographic data in the Black Sea pilot area relevant for MPA implementation; identification, within the pilot area, of the key variables regarding connectivity (distance, size, strength and direction of currents, propagule supply) to be considered in the design of the MPA network; definition of what is specific to the Black Sea and what can be generalized at a larger scale within management plans in terms of connectivity processes; examination of the main natural and human-driven causes of change potentially affecting the functioning and dynamics of the Pilot Area's ecosystems, and description of the potential implications for the establishment of MPA networks; assessment of ecosystem resilience and of the implications for MPA network design and management in the Pilot Area; evaluation of the impacts of offshore wind farm development on ocean circulation, wave action, bottom morphology and marine life near or within the pilot network of MPAs in the pilot area; identification of the socio-economic impacts caused by offshore wind farm development within the network of MPAs; transfer of the field data generated by WP10 to the WP9 Geodatabase, and contribution via other WPs to the final synthesis. Note: Romania is designated as work package leader for this part of the CoCoNet Project. 2.11. WP11-Mediterranean Sea Pilot Project
2.7. WP7-Information Dissemination and Outreach Objectives: to disseminate the outputs (information, data, know-how, etc) made by the Project to all stakeholders in the most effective ways; to raise awareness of the stakeholders, particularly policy makers, students, and teachers, on MPAs and wind energy; to create and provide a common platform to facilitate the dialogue between all stakeholders. 2.8. WP8-Training and capacity building Objectives: background for the WPs through Focus workshops; training students, researchers and stakeholders through summer school courses. 2.9. WP9-Data Management and synthesis This work package is designed to provide a common framework for data management and final synthesis of the outcomes of WPs 2, 3, 4, 5, 6, 10 and 11. A decentralized Geodatabase and a WEBGIS system will be the linking tool for all partners, regions and thematic research (WPs 2-6, 10, 11). It will involve the entire consortium at different levels in topics such as data provision, GIS products, GIS interpretation, data archiving and data exchange. The work is organized around the following main objectives: assess the rules for data and metadata sharing between partners reviewing the existing common European protocols and standards; design and implement data repositories (Marine Geodatabase) to store and retrieve the spatial data collected during the lifespan of the project for the Mediterranean and Black Sea areas and for the pilot study areas; develop the COCONET WebGIS to integrate the multi scale GIS layers derived from WP 26, 10, 11 in all regions; develop an analytical and evaluative framework for designing, managing and monitoring regional networks of MPAs, including wind
Objectives: acquisition of new geological, biological and oceanographic data in the Mediterranean Sea pilot area relevant for MPA implementation; identification, within the pilot area, of the key variables regarding connectivity (distance, size, strength and direction of currents, propagule supply) to be considered in the design of the MPA network; definition of what is specific to the Mediterranean Sea and what can be generalized at a larger scale within management plans in terms of connectivity processes; examination of the main natural and human-driven causes of change, potentially affecting
3. ROMANIAN PARTICIPATION IN THE SUSTAINABLE DEVELOPMENT OF THE BLACK SEA REGION AS PART OF THE COCONET PARTNERSHIP At governmental level, Romania is part of the following regional agreements [2]: the Bucharest Convention, 1992; the Odessa Ministerial Statement, 1993; the Regional Contingency Plan. The following EU Directives were transposed, implemented and enforced within the national legislation mainly through the activity of the Romanian Naval Authority (RNA) [3]: Directive 2000/59/EC on port reception facilities for ship-generated waste and cargo residues; Decision 2850/2000/EC setting up a Community framework for cooperation in the field of accidental or deliberate marine pollution; Directive 2005/33/EC amending Directive 1999/32/EC as regards the sulphur content of marine fuels; Directive 2002/59/EC of the European Parliament and of the Council of 27 June 2002. As part of the CoCoNet partnership, Romania participates through two entities: a) GeoEcoMar is a research and development institute of national interest, performing research in geology, geophysics and geo-ecology, with a focus on aquatic, marine, deltaic and fluvial environments. GeoEcoMar represents a pole of excellence in marine research, working as a European and national centre for studies of sea-delta-fluvial macro-systems. A modern research infrastructure, based mainly on marine and fluvial research vessels, enables GeoEcoMar to undertake complex, multidisciplinary studies in national and international programs [6]. b) The Romanian Marine Research Institute (RMRI) was established in 1970 by the unification of the marine research institutes existing in Romania at that time. In 1999, it was reorganized as the National Institute for Marine Research and Development "Grigore Antipa" (NIMRD),
The Black Sea is well known as a typical continental sea under continuous stress caused by human factors (pollution, overexploitation of resources, tourism and so on); finding solutions to this continuous stress and restoring the marine environment is imperative, and one way is the creation of marine protected areas (MPAs) [3]. It is important, however, that these protected areas are not simple scattered oases but work as a whole. The FP-7 project CoCoNet (inter-coastal networks of marine protected areas), in which NIMRD "Grigore Antipa" and GeoEcoMar are partners, aims to interconnect individual marine protected areas through consistent management practices and to promote education and cooperation between the various administrations and the people who work and live in these areas. 5. REFERENCES
[1] www.coconet-fp7.eu [2] BERESCU, S., MARPOL and OPA conventions regarding oil pollution, Ovidius University Annals Series: Civil Engineering Volume 1, Issue 12, June 2010. [3] BERESCU, S., NI, A., RAICU, G., Modern Solutions used in Maritime Pollution Prevention, Ovidius University Annals Series: Civil Engineering Volume 1, Issue 12, June 2010. [4] HAMILTON, DANIEL AND MANGOTT, GERHARD (eds.), The Wider Black Sea Region in the 21st Century: Strategic, Economic and Energy Perspectives (Washington, D.C.: Center for Transatlantic Relations, 2008). [5] www.blacksea-commission.org [6] www.geoecomar.ro [7] www.rmri.ro
The review of shipping accident analyses indicates that the current approaches have targeted only certain perspectives. However, the occurrence of shipping accidents commonly depends upon various shortfalls in different segments of the safety barriers. The principal focus of this paper is to provide an analysis which aims at clarifying the probability and importance of the various factors leading to a shipping accident. Keywords: accidental loads, collision, fault tree, navigational area, human factor. 1. INTRODUCTION The main framework of these new harmonised regulations should follow the concept of SOLAS Part B-1, but include the main features of IMO Res. A.265 and the current deterministic regulations of SOLAS Chapter 8, also referred to as SOLAS 90. The recently published statistical reports have highlighted that there is still an enormous number of shipping accidents. The consequent impacts of shipping accidents include loss of life, marine pollution, damage to the ship or cargo, and others. The factors that lead to shipping accidents can be human errors, technical and mechanical failures, and environmental factors. Under these conditions, the prevention of shipping accidents is still a focal matter of maritime interest. 2. PROBABILITY OF A COLLISION
In the last decade, international maritime authorities have made significant efforts to promote safety at sea in the shipping transportation industry. In particular, the International Maritime Organization (IMO) encouraged the establishment of a safety management system (SMS) in shipping companies in accordance with the international management code for the safe operation of ships and for pollution prevention (ISM Code). The first international probabilistic concept for damage stability regulation, Resolution A.265, IMO (1971), was adopted by IMO in 1971. The probabilistic rules were an optional alternative to the deterministic passenger vessel regulation in the SOLAS Convention and were developed for passenger vessels only. The passenger vessel regulation was followed in 1990 by the adoption of subdivision and damage stability rules for dry cargo vessels, SOLAS (1990), also based on the probabilistic concept. While the probabilistic rules for cargo vessels are generally based on the same overall principles and damage statistics as the passenger vessel rules, there are some differences, especially in the treatment of the vertical extent of damage. The damage statistics for the passenger vessel A.265 regulation are based on data collected for casualties occurring in the 1950s and 1960s and cover vessels commonly used at that time. These vessels were considerably different from the ship designs of today. Many of the vessels were often designed with many decks. Even though the shortcomings of the statistics were well known, the same statistics were used for the dry cargo regulation in 1990. The shortcomings mainly arise from the lack of updating of the statistics, but also from the fact that the statistics are based on only 296 ship collisions. The International Maritime Organization (IMO) is currently seeking to harmonise the damage stability regulations for all types of vessels using the probabilistic damage stability concept. Following the introduction of the probabilistic damage stability requirements for dry cargo ships, SOLAS Part B-1, IMO put the harmonisation of all damage stability requirements in SOLAS, using a probabilistic concept of survival, on its work programme.
When determining the probability of a collision, all the factors related to the risk of a collision must be identified. In the present analysis, a method of splitting the probability of a collision into two separate analyses is used. First, the number of possible ship collisions is estimated, assuming that no evasive manoeuvres are made. The result of this analysis is mostly determined by the waterway and the size of the involved vessels. Then the causation probability, i.e. the probability that such a situation results in a collision event, is estimated. The causation probability is influenced by a large number of factors related to the waterway, to the involved vessels and to the human factor. Methods of analysis for finding the causation factor may include fault trees. When all the factors related to the risk of a collision are identified, they can be separated into two groups. The first group contains the factors which can be controlled; factors in this group may be denoted as risk control options. The other group contains factors which cannot be controlled; factors in this group are mainly related to the environmental conditions. Factors which may reduce or increase the consequences of the collision must also be identified.
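The splitting described above can be expressed as a simple product model: the expected collision frequency is the number of geometric collision candidates (computed as if no evasive manoeuvres were made) multiplied by the causation probability, the latter estimated for instance from a fault tree. The figures in the sketch below are purely illustrative assumptions, not values from this paper.

def expected_collisions(n_geometric_candidates: float,
                        causation_probability: float) -> float:
    """Expected number of collisions per year for one waterway.

    n_geometric_candidates: collision candidates per year if no evasive
        manoeuvres were made (depends on traffic and waterway geometry).
    causation_probability: probability that a candidate situation is not
        resolved and ends in a collision (human factor, equipment,
        environment), e.g. estimated from a fault tree.
    """
    return n_geometric_candidates * causation_probability

# Illustrative numbers only (assumptions, not taken from the paper):
n_candidates = 120.0    # geometric collision candidates per year
p_causation = 2.0e-4    # causation probability
print(expected_collisions(n_candidates, p_causation))   # about 0.024 per year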
The first step of the analysis is to define the system of interest. This means defining the structure and grouping the elements and their relationships by defining the output from the system and the impacts of interest. Determination of the probability of a collision requires a combination of knowledge and modelling of risk, involving human factors, the nature of the waterway, the description and modelling of the ship structure deformation, the global motions of the vessels and the technical installations, both in connection with the waterway and on board the vessels. Reducing the probability of a collision may also be referred to as preventing the vessels from colliding. Preventing a vessel from having accidents is one of the main objectives of the shipping industry, as accidents in many cases result in loss of life, lost operational time, lost income and insurance claims from passengers, authorities or cargo owners. The factors that may influence a collision are: the navigational area system, the involved vessels and the human factor. The vessels cannot be analysed in isolation from the waterway; the ship and the waterway are complex and interdependent systems that involve physical and human elements. Nevertheless, in order to identify factors related to risk and to identify where to implement preventive actions, this seems to be a good separation. Some of the factors are difficult to change, but most of the factors mentioned in the following can be considered risk control options, which can be used as parameters in cost-benefit analyses. 3.1 The navigational area system The navigational area system is analysed considering the traffic in that particular area, the management of the navigational area system and the environmental conditions. The analysis of the traffic includes information about the types and sizes of the passing vessels and about the traffic intensity. The factors describing the traffic can normally not be changed, as they are a result of the surrounding harbours. Most regions of the world are not restricted in navigation; only the rules of the sea apply. Other areas are equipped with a vessel traffic system (VTS) or
The consequences of a collision involve the consequences for the vessel, for human safety or for the environment. From these may follow consequences for the shipping company. Human safety is affected by a collision in the case of severe damage to the vessel or where the vessel may capsize and lives may be lost. Minor injuries may also occur during the collision. The consequences for the vessel can be separated into four cases: minor damage, severe damage, capsizing, and total loss of the vessel. Severe damage is damage to the vessel resulting in fracture of the ship's hull. The consequence is repair of the vessel, which has economic consequences. The vessel will normally be delayed, and cargo owners or passengers might consider using another shipping company the next time. Fracture of the hull may also result in oil outflow, leading to environmental consequences, or in stability problems, which may again result in capsizing of the vessel. Capsizing can be a result of severe damage, but a vessel may also capsize due to reduced stability in connection with water inflow from smaller damages. Capsizing might cause oil outflow and have
[1] METIN CELIK, SEYED MIRI LAVASANI, JIN WANG, A risk-based modelling approach to enhance shipping accident investigation, 2010 [2] MARIE LUTZEN, Ship collision damage: Developing a General Overseeing Model for Small Queues, Ph.D. thesis, Technical University of Denmark, 2001 [3] KOLOV K., BELEV BL., Development of models and scenarios of navigational accidents realization in maritime transport system, Varna, 2011 [4] RADU HANZU PAZARA, CORINA POPESCU, VARSAMI ANASTASIA, The role of teamwork abilities and leadership skills for the safety of navigation, IAMU, 2011
A CONSEQUENCE OF THE SECOND WORLD WAR: THE BELGRADE AGREEMENT (AUGUST 18, 1948) AND ITS CONSEQUENCES UPON THE NAVIGATION ON THE DANUBE
TULUS ARTHUR-VIOREL
University "Dunarea de Jos" Galati, Romania ABSTRACT Today, the Belgrade Agreement (August 18, 1948) is, with minor revisions, the official document that regulates navigation on the Danube. The Convention is not unfavorable to the small Danube riparian states, but the undiplomatic and unceremonious treatment applied to the great Western powers (especially Great Britain, France and the United States) when the text was drafted and voted had serious consequences on trade and navigation on the Danube. The economic spoliation of the small Danubian communist countries by their Soviet comrade, and the manner in which Stalin circumvented the principles of the Belgrade Convention, along with his conflict with the Jugoslavian leader, Tito, managed to negatively affect navigation on the Danube. The Danube's "thaw", which occurred after Stalin's death (after 1953), managed to partially correct the wrong that had been committed: the Danube's removal from the great international commercial routes. Keywords: the Danube, the Belgrade Conference (July 30 to August 18, 1948), the Belgrade Agreement (August 18, 1948), the Stalinist period, the Tito-Stalin conflict.
1.
INTRODUCTION
In 1945, the same year the Allies achieved their military goals through the capitulation of Germany and Japan, the war coalition fell apart and the misunderstandings between the Great World Powers (the United States, Great Britain and the Soviet Union) began to determine the new world policy. In fact, the crisis between East and West was triggered by the problematic status of the European regions declared free by the Red Army. Without taking account of Western sensibilities, Moscow considered it opportune to export communism to Eastern Europe at that time. Previously, on a diplomatic level, Stalin had tried to quench the fears and precautions of his Western allies. "We don't have", stated the Soviet leader on November 6, 1941, "and we shall never have a war objective aiming towards imposing our regime and will on the Slavic people or the other European nations that count on our help" [1]. However, during the war, Stalin's perceptions changed and, in the course of the major conferences (Tehran, Yalta and Potsdam) or of unofficial meetings, the communist leader was able to impose his views on the Western allies, who so eagerly ceded territories to the Kremlin's army and propaganda. Furthermore, the United States and Great Britain did not even have a common standpoint concerning the means to counter the Soviet influence. While the president of the United States, F. D. Roosevelt, was determined to support an international moralism based on the democratic principles of state sovereignty, ethically hostile to the concept of dividing the world into spheres of influence, Winston Churchill, the British Prime Minister, in the spirit of realpolitik, embraced this kind of delimitation, as we can see from the agreement between Churchill and Stalin concluded after the British Prime Minister's visit to Moscow (9-17 October 1944). London considered that the only way to
save some countries was to sacrifice others to their greedy Eastern ally. The discussions seem to have continued during the famous meeting between the three great leaders F.D. Roosevelt, W. Churchill and I.V. Stalin at Yalta (February 1945), although in recent years international and Romanian historians have tended to deny the existence of such negotiations. Moscow's policy of faits accomplis transformed the conflict between East and West from a mere diplomatic dispute between the two global superpowers into a fierce existential struggle between two social systems and ideologies (communism versus capitalism) which could not end without the triumph of one camp alone. Therefore, what happened within the Danube Conference, held between July 30 and August 18, 1948, in the capital of Jugoslavia, went beyond the normal understanding of international negotiations. Our study does not aim to legally analyze the Soviet project or the Western contradictions, but the manner in which these works were carried out and, especially, the consequences of the Belgrade Agreement on navigation on the Danube river [2].
2. PRELIMINARIES OF THE BELGRADE DANUBE RIVER CONFERENCE
Establishing communist totalitarianism in Eastern Europe was a process that took place on two almost inseparable levels: imposing communist totalitarianism in the individual countries and forming the camp that was at first "anti-imperialist and democratic" and afterwards became "socialist" [3]. The Soviet military presence and the inclusion of the Eastern European countries under the military and political control of the Soviet Union were crucial for the coming to power of the communist parties in these countries. Moreover, the communization of the states around the Soviet Union, regarded as a protection line, was also used as a disguise
Statistics show that the non-inclusion of the Danube River among the international river waterways seriously
1. INTRODUCTION
An overall view on groundings categorize the accidents in two major groups: 1. Grounding on soft sea beds, so-called Soft Groundings. The damage to the hull in terms of crushing at the point of ground contact is limited but the hull girder may fail in a global mode due to shear force and bending moment exceeding the hull girder capacity. 2. Grounding on hard bottoms, so-called Hard Groundings. The primary concern here is the local crushing and tearing of the ship bottom due to a cutting rock. The work concerning grounding on soft sea bottoms includes the following main aspects: Identification of the governing grounding mechanics. The hull girder is modelled as a linear elastic beam and the loads considered are gravity, hydrostatic pressure, hydro- dynamic pressure and a ground reaction. Establishment of a model for the hydrodynamic loads which takes into account the generation of waves and shallow water effects. Establishment of a model for the ground response to the penetrating bow. Based on observations from laboratory experiments, the idea of this theoretical model is that the bow generates a flow of pore water in the soil. The pressure of the pore water on the bow becomes decisive for the soil reaction. Derivation of the governing equations for the hull girder based on the Timoshenko beam theory. The solution is found both by the finite element method and by a modal super-position approach and the results of the two approaches are shown to be equivalent. Investigation of the sectional forces compared to the strength of a ship in a grounding event on a soft sea bed. It is shown that the grounding-induced loads may well exceed the wave bending moment and shear force capacity of the hull girder. The effect of the hull flexibility is found to be important in a dynamic analysis because the flexible deformation of the hull girder
unloads the grounding force and because the dynamic amplification of the sectional forces is significant for some grounding events. The effect of bow lift due to a receding tide is also investigated and it is shown that even a very smooth grounding event may lead to catastrophic failure if it is followed by a receding tide. 2. GOVERNING EQUATIONS
2.1 Generalities When a ship runs aground on a soft seabed the principal energy absorbing mechanisms which stop the ship are normally: 1. Deformation of the sea bed. 2. Friction between sea bed and hull. 3. Change of potential energy of the ship and the surrounding water. 4. Deformation of the hull. 5. Hydrodynamic damping. The solution method applied here for theoretical analysis of the soft-grounding problem is numerical integration of the equations of motion for the ship, i.e. time simulation. Alternatively, an overall simplified approach based on the conservation of energy and momentum could be applied. Such an approach was presented by the specialty literature and it gives a good picture of the overall grounding mechanics. However, to obtain detailed information about loads during the impact, it is necessary to resort to time simulation. The following sections describe the load modeling corresponding to the five effects listed above and the corresponding equations of motion. Previous experimental and numerical studies focused on the rigid body motion of the ship. This is relevant in connection with the design of protective islands for bridges and for the prediction of the tug-load necessary to refloat the ship. The finite crushing strength of the bow could also be included but as a first approximation, it is assumed rigid. The flexible
Figure 1 Coordinate systems (x, y, z) and (X, Y, Z) and definition of sectional forces
It is assumed that both the centre of gravity of the structural and hydrodynamic mass per unit length and the bending neutral axis of the hull girder can be considered as a straight line. This line is the x-axis of a coordinate system fixed with respect to the ship and with origin amidships. The y-axis points towards the port side and the z-axis points upwards, see Figure 1. A global coordinate system, (X, Y, Z), is fixed on the ground with origin at the point of the initial contact between bow and seabed. The displacements and rotation in surge, heave and pitch in the (x, y, z) coordinate system are denoted (u, w, θ). Sway and yaw are not considered. The basic idea is to calculate accelerations in the local coordinate system, transform these to the global system and perform time integration in the global system to get the time history of velocities and displacements.
2.2 Loads from Water and Gravity
Hydrostatic Loads - By combining the loads from the hydrostatic pressure with the gravity load on the structure in the so-called restoring load, modelling becomes simple. The ship is assumed to be in equilibrium in the initial configuration. Then, when a section is lifted out of the water, it experiences a static downward load due to the difference between weight and buoyancy. Since the weight is unchanged by lifting, the restoring force/moment is approximately given by the change of buoyancy alone. By assuming that the sides of the hull are parallel (vertical), the restoring load due to a static lift, w(x), of a hull section can be expressed as
$$q_{hs,z}(x) = -\rho_w\, g\, B(x)\, w(x) \qquad (1)$$
For a harmonic heave motion with a frequency ω, the hydrodynamic load on a section can be written as
$$q_{hd,z}(x,t) = -a_z(\omega)\,\ddot{w}(x,t) - b_z(\omega)\,\dot{w}(x,t) \qquad (2)$$
where the coefficients a_z(ω) and b_z(ω) are denoted 'added mass' and 'damping'. The added mass is seen to be the part of the force which is in phase with the acceleration and the damping is the part in phase with the velocity. Equation 2 holds true for a harmonic motion with a well-defined frequency. In the general case of a transient motion the specialty literature says that the hydrodynamic load can be written
$$q_{hd,z}(x,t) = -a_z^{\infty}(x)\,\ddot{w}(x,t) - \int_0^{t} h_z(\tau)\left[\dot{w}(x,\,t-\tau) - \dot{w}(x,\,t=0)\right]\mathrm{d}\tau \qquad (3)$$
Figure 2 Experimental data for added mass and damping for a 310 m tanker (CB = 0.85) at restricted water depths. The non-dimensional quantities are defined as
$$a_z' = \frac{a_z}{\rho_w \nabla}, \qquad b_z' = \frac{b_z}{\rho_w \nabla \sqrt{g/L}}, \qquad \kappa = \frac{h}{T} \qquad (4)$$
The assumption of vertical ship sides is good for large tankers with C_B around 0.85, but for small fast vessels Equation 1 will only hold true during the initial impact.
Hydrodynamic Loads, Added Mass and Damping - Since modeling of the hydrodynamic loads on a ship is of interest in several areas of marine engineering such as maneuverability, sea-keeping and ship vibration, a substantial amount of literature has been published in this field. The characteristics for whole ships can be determined experimentally but, since this is expensive and inconvenient at the design stage, many theoretical methods have been developed.
$$h_z(x,t) = \frac{2}{\pi}\int_0^{\infty} b_z(x,\omega)\cos(\omega t)\,\mathrm{d}\omega \qquad (5)$$
where b_z(x, ω) is the two-dimensional damping of the cross-section in heave at frequency ω. Since the damping thus comes as a weighted integral of the velocity history from the beginning of the motion, it is often denoted a 'memory effect' - contrary to the added mass, it is a function of the past. With Equation 3, the problem is now the determination of the added mass and the damping coefficients, a_z(ω) and b_z(ω). As mentioned, several theoretical methods exist for unrestricted water depth, but only a few cover the restricted water depths relevant for grounding.
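To make the memory-effect formulation concrete, the following Python sketch builds the retardation kernel of Equation (5) from a frequency-dependent damping curve and then evaluates the convolution term of Equation (3) for a prescribed heave history. The damping curve, the infinite-frequency added mass and the motion used below are invented placeholders, not data for any particular hull section.

```python
import numpy as np

def retardation_kernel(b_of_omega, omegas, times):
    """h_z(t) = (2/pi) * integral_0^inf b_z(omega) cos(omega t) d omega, Eq. (5)."""
    domega = omegas[1] - omegas[0]
    return np.array([(2.0 / np.pi) * np.sum(b_of_omega * np.cos(omegas * t)) * domega
                     for t in times])

def hydrodynamic_load(a_inf, w_ddot, w_dot, h_z, dt):
    """q_hd,z(t) per Eq. (3): infinite-frequency added mass term plus memory integral."""
    q = np.empty_like(w_ddot)
    for i in range(len(q)):
        # sum_k h_z(t_i - t_k) * (w_dot(t_k) - w_dot(0)) * dt
        mem = np.sum(h_z[: i + 1][::-1] * (w_dot[: i + 1] - w_dot[0])) * dt
        q[i] = -a_inf * w_ddot[i] - mem
    return q

if __name__ == "__main__":
    dt = 0.01
    t = np.arange(0.0, 5.0, dt)
    omegas = np.linspace(0.01, 10.0, 2000)
    b = 8.0e3 * omegas**2 * np.exp(-omegas)        # made-up sectional damping curve [kg/(m s)]
    h_z = retardation_kernel(b, omegas, t)
    w = 0.2 * (1.0 - np.cos(0.8 * t))              # made-up heave (lift) history [m]
    w_dot = np.gradient(w, dt)
    w_ddot = np.gradient(w_dot, dt)
    q = hydrodynamic_load(a_inf=5.0e4, w_ddot=w_ddot, w_dot=w_dot, h_z=h_z, dt=dt)
    print(f"hydrodynamic load per unit length at t = {t[-1]:.1f} s: {q[-1]:.1f} N/m")
```

In a full grounding simulation this evaluation would be repeated for every hull section at every time step of the equations of motion.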
The sectional added mass at infinite frequency is here approximated as
$$a_z^{\infty}(x) = C_V\,\rho_w A_S \qquad (6)$$
where A_S is the submerged cross sectional area of the considered section and C_V is defined from the draught to breadth ratio and the sectional area coefficient C_S as:
$$C_V = \frac{(1+a_1)^2 + 3a_3^2}{1 - a_1^2 - 3a_3^2}$$
with
$$C = \frac{3(1+\lambda) + \sqrt{(1+\lambda)^2 + 8\lambda\left(1 - 4C_S/\pi\right)}}{4\left[(1+\lambda)^2 - \lambda\left(1 - 4C_S/\pi\right)\right]}, \qquad a_1 = C(1-\lambda), \qquad a_3 = C(1+\lambda) - 1,$$
$$\lambda = \frac{2T}{B}, \qquad C_S = \frac{A_S}{BT}$$
A semi-empirical expression for the modification of added mass due to restricted water depth was given by the specialty literature, as:
$$\frac{a_z(h,T,C_S)}{a_z(h=\infty,T,C_S)} = 1 + 2\left(C_S - 0.2\right)\frac{T}{h} \qquad (7)$$
The expression is based on experiments with κ = h/T exceeding 1.5 and it is seen to have a maximum of 2.6 when κ = C_S = 1. Thus, it does not depict the behaviour shown in Figure 2 for very small bottom clearances. The idea here is to retain the functional dependence on the sectional geometry and find another function for the dependence on the depth to draught ratio κ = h/T, based on the results presented by the specialty literature. The final result for the modification of added mass at infinite frequency due to restricted water depth becomes
$$\frac{a_z(h,T,C_S)}{a_z(h=\infty)} = 1 + 0.54\left(C_S - 0.2\right)\left(\kappa^2 - 1\right)^{-0.91} \qquad (8)$$
To limit the added mass at the point of contact (where the under-keel clearance tends to zero) in Equation 8, it is necessary to take into account the three-dimensional flow near the ends of the ship. Based on the results of the specialty literature, the maximum value of the correction factor, Equation 8, is here assumed to be 6.0. As this large added mass only occurs in a very limited area around the point of contact, the final results are not sensitive to this assumption, so its validity will not be discussed in further detail.
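A minimal Python sketch of the semi-empirical depth correction of Equation (7) is given below; the sectional data are arbitrary illustrative values, not taken from the paper.

```python
def depth_correction_eq7(C_S, T, h):
    """Semi-empirical restricted-water-depth factor on the sectional added mass, Eq. (7)."""
    return 1.0 + 2.0 * (C_S - 0.2) * (T / h)

if __name__ == "__main__":
    # Illustrative tanker-like section: sectional area coefficient 0.98, draught 15 m
    for h in (60.0, 30.0, 22.5, 15.0):
        print(f"h/T = {h / 15.0:4.2f} -> added-mass factor {depth_correction_eq7(0.98, 15.0, h):.2f}")
```

With C_S = 1 and h = T the factor evaluates to 2.6, which reproduces the maximum value quoted in the text above.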
3. GROUND REACTION
The greatest challenge of developing a theoretical model for grounding on soft sea beds is establishing a model for the calculation of the soil reaction. As the soil reaction induces the hull girder loads and eventually causes the ship to stop, it is of paramount importance to have a good model for the response of the ground to the penetrating bow. In the analysis presented by the specialty literature, the bow was assumed to move in the plane of the undisturbed slope and an effective coefficient of friction was adopted. To obtain a good correspondence between theory and model tests, an effective coefficient for the bow/soil interaction equal to μ = 0.78 was assumed. The coefficient of friction between steel and sand is typically 0.3 - 0.4, so this effective coefficient of friction includes the normal pressure on the bow, which must thus be quite significant. In the present analysis it is necessary to have a more sophisticated model for the soil behavior so that it can be applied in a time simulation scheme. The stopping force acting on a beaching ship is the result of ruptures in the soil in the areas of contact between bow and soil. The mechanics of this rupture is complicated, which is illustrated by an example where a ship with a cylindrical bow with vertical sides is rammed horizontally into a sand slope of 1:6, see Figure 3. The bow is semi-circular in shape with a radius of r = 378 mm and it has a flat bottom. The sand is very uniform in gradation, with a mean diameter of d_m = 0.125 mm, a permeability coefficient k = 9·10⁻⁵ m/s and a frictional angle of 39°. The ship is forced with a constant velocity and it is locked in the horizontal position so that it cannot heave or pitch. Figure 3 shows the horizontal and vertical soil reactions for different impact velocities as functions of the horizontal position, for both dry and submerged slopes. It is noted that the force in the submerged case is 10 - 20 times greater than for the corresponding dry case. Figure 3 also shows that the reaction is clearly a function of the impact velocity. It could be claimed that the dependence on impact velocity is due to the change in momentum of the soil being pushed by the bow but, since no dependence on impact velocity is observed for the dry sand, this cannot be the case. The results are important because they show that in a grounding event on a sand beach, the behavior of the soil is strongly
Figure 3 Soil reactions on a penetrating bow. Dry and submerged slopes, different velocities In classical soil rupture theory, conditions are assumed to be either drained or un-drained. If conditions are drained an incremental load increase on a soil element is carried solely by additional stresses in the grain skeleton ('effective stresses'), and if conditions are un-drained an additional load is carried by an additional pressure in the pore fluid alone. Both drained and undrained conditions are considered independent of the time history of the load - in this case of the impact velocity. According to Figure 3, which shows a clear dependence on impact velocity, neither of the two theories would therefore be suited for modeling of ship grounding events. The consolidation theory is a theory which includes the time variance of loads and deformations. The specialty literature presented a general set of equations governing the behavior of a saturated linear elastic porous solid under dynamic conditions. For standard geotechnical consolidation problems, however, the grain skeleton is most often assumed to be linear-elastic and inertia forces neglected. Obviously, these restrictions do not apply to grounding problems where strains are far beyond the elastic limit. The equations can be generalized to non-linear material behavior if the constitutive relation is written incrementally. A fully consistent theoretical analysis of a ship grounding event would require numerical solution of these equations, for example by use of the finite element method. The solution would include phenomena like elastic compression, rupture with very large strains, liquefaction, dilatation of the soil in rupture and flow of pore water. Use of such a model for grounding simulations would require extensive computer facilities and would be prohibitively time consuming. Therefore, the soil mechanics model used here has been based on very substantial simplifications and it is to some extent phenomenological. The pore-water creates strong effective stresses in the soil which act on the hull both as normal and tangential stresses. The question is how these large
Over the past decades there has been a continuous increase in the public concern about general risk issues. Many of the past improvements in the safety of marine structures have been triggered by disasters, but there is a change in this trend. The maritime society is beginning, albeit slowly, to think and work in terms of safety assessment of individual ships instead of the much generalized prescriptive regulations, which have evolved over the past 150 years. In line with these aspects, it is clear that rational procedures for evaluating the consequences of accidental loads are highly desirable, not to say necessary. A fundamental problem with the rational consideration of grounding and collision in rules is that there are no simple measures of a ship's defense against these loads. An idea would be to consider the statistical correlation between major design changes and the amount of oil spilled, but the amount of oil spilled seems to be a random process. Within a reasonable time span this makes it impossible to draw cause and effect conclusions from statistics alone, and attempts to do so would most likely be highly reactionary, with questionable effectiveness.
5. REFERENCES
[1] BARSAN, E., MUNTEAN, C., Combined Complex Maritime Simulation Scenarios for Reducing Maritime Accidents Caused by Human Error, Proceedings of the WSEAS 3rd International Conference on Maritime and Naval Science and Engineering, Constanta, Romania, 2010
[2] PEDERSEN, P.T., Collision and Grounding Mechanics, pages 125-158, Danish Society of Naval Architecture, Copenhagen, May 1995
[3] VARSAMI, A., POPESCU, C., DUMITRACHE, C., HANZU, R., CHIRCOR, M., ACOMI, N., The Influence of Grounding Events on Maritime Industry, Annals of DAAAM for 2011 & Proceedings of the 22nd International DAAAM Symposium, volume 22, no. 1, Vienna, Austria, 2011
THE DEVELOPMENT OF FORUM NON CONVENIENS AND LIS ALIBI PENDENS DOCTRINES IN THE INTERNATIONAL MARITIME LAW
ABSTRACT
The forum non conveniens doctrine, referred to by prominent authors as the choice of a foreign jurisdiction by a court, is generally considered a constructive development in the jurisdictional system of the English courts. Lis alibi pendens refers to a case where a legal action concerning the same parties and the same matter is adjudicated simultaneously in two diverse jurisdictions. The authors of this study take the view that a significant relationship seems to exist between forum non conveniens and lis alibi pendens, since the latter is considered a supplementary factor relevant to the determination of the natural forum in forum non conveniens cases; both doctrines operate basically under the same legal principles; and they are in search of the same objective, i.e. to render better justice in legal disputes, particularly in maritime disputes, considering the international nature that this domain reflects. Despite the negative opinions surrounding the doctrine, forum non conveniens is regarded as an improvement in the rule of jurisdiction since it assists the litigants to consider the advantages of diverse legal systems and to be adjudicated according to the most appropriate forum, thus promoting better fairness and justice in dispute resolution in maritime affairs.
Keywords: Maritime law, international law, forum non conveniens, lis alibi pendens, legal disputes, maritime disputes, natural forum, law of the sea, English maritime law.
1. INTRODUCTION
Conflict of laws or private trans-national law is a significant discipline dealing with legal actions involving a foreign element1. The principal purpose of conflict of laws is to resolve trans-jurisdictional disputes by reference to the laws of all jurisdictions involved2. The doctrines of forum non conveniens and lis alibi pendens play important functions within this discipline. Although the genesis of these doctrines is relatively old, their effective application in the common law, as well as to some extent in the civil law system, has received wider acceptance only in the last 30-40 years. These doctrines are regarded as constructive approaches towards the clarification of incongruity issues in the conflict of laws3, since their implementation in the judiciary system assists the courts to prevent injustice in legal disputes, thus allowing the litigants to be adjudicated according to the most appropriate forum principle4. Consequently, by providing better fairness and justice in the resolution of legal cases, the application of these doctrines appears to be a positive step towards the development of the jurisdiction system. Several notable scholars, such as William Tetley in his prominent study International Conflict of Laws, Dicey and Morris in Conflict of Laws, and Cheshire and North in Private International Law, have shed light
1 Albert Venn Dicey, Dicey and Morris on the Conflict of Laws, 13th ed. (London: Sweet and Maxwell, 2000), at 3.
2 Proshanto K Mukherjee, The Law of Maritime Liens and Conflict of Laws, JIML 9 (2003) 6, at p. 546.
3 William Tetley, International Conflict of Laws: Common, Civil and Maritime (Montreal, Quebec: International Shipping Publications, 1994), at 43.
towards a comprehensive analysis of these important doctrines. Nevertheless, recent developments in the international and domestic legal domain necessitate the requirement for further research concerning the relevance and application of these doctrines. In this respect, the purpose of this paper is to analyse the interrelationship among the aforesaid doctrines by seeking to answer questions such as: What are the main features of these doctrines? What is their contribution towards the legal dispute resolution? What are the positive and negative points of the doctrines? Most importantly, is there any relationship among them? In light of these considerations, this paper will first contain a discussion regarding the evolution and the fundamental principles of forum non conveniens; secondly, the doctrine of lis alibi pendens viewed from the forum non conveniens perspective will be analysed; finally a comprehensive examination will be attempted towards the relationship among forum non convenience and lis alibi pendens with the focal point on the later. 2. EVOLUTION OF THE FORUM NON CONVENIENS DOCTRINE The roots of the doctrine of forum non conveniens are found back in the nineteenth century from the deliberations of the Scottish Courts5. The principle behind the forum non conveniens doctrine was that the court, after taken into account the interests of the litigants and the requirement of justice, may refuse to exercise jurisdiction based on the position that some other forum rather than the Scottish court is more appropriate for the case to be adjudicated6.
North, Cheshire and North's Private International Law, at 336. 37 Dicey, Dicey and Morris on the Conflict of Laws, at 395. 38 Ibid., at 397. 39 North, Cheshire and North's Private International Law, at 336-37. 40 Ibid., at 338-39. 41 Dicey, Dicey and Morris on the Conflict of Laws, at 396.
See Re Harrods (Buenos Aires) Ltd (1992) Ch. 72 (C.A.) 70 Dicey, Dicey and Morris on the Conflict of Laws, at 394. 72 See The Abedin Daver (1984) AC 398 at 411-412 73 Tetley, International Conflict of Laws: Common, Civil and Maritime, at 796. 74 Joost Pauwelyn, Conflict of Norms in Public International Law (Cambridge: Cambridge University Press, 2003), citation of Lowe at 115-16.
North, Cheshire and North's Private International Law, at 348. 76 See De Dampierre v De Dampierre (1988) AC 92, 1987 2 ALL ER 1 77 Dicey, Dicey and Morris on the Conflict of Laws, at 401. 78 See Cleveland Museum of Art v Capricorn Art International SA (1990) 2 Lloyds Rep 166 79 North, Cheshire and North's Private International Law, at 349. 80 See Henry v Henry case (1995) 185 CLR 571 81 Dicey, Dicey and Morris on the Conflict of Laws, at 401.
As a result, after a certain time period of ambiguity and uncertainty in the English courts, the doctrine of forum non conveniens was finally accepted by the English judicial system in the Abedin Daver case and subsequently in Spiliada case, wherein its main principles were laid down and summarized effectively by the House of Lords. Whereas many common law jurisdictions have adopted effectively the doctrine, the Australian approach appears to be ambiguous due to the integration of civil law principles within its practice, which in turn may obscure the application of the forum non conveniens doctrine. Notwithstanding the negative opinions surrounding the doctrine that its procedures may be time consuming; may lead to arbitrary decisions; and can sometimes complicate the legal practice, the doctrine of forum non conveniens is regarded as an improvement in the rule of jurisdiction since it assists the litigants to consider advantages of diverse legal systems and to be adjudicated according to the most appropriate forum; promoting thus better fairness and justice in the dispute resolution. Therefore, the doctrine appears to be a valuable, effective and well established principle, particularly, in common law jurisdictions. In addition, this paper revealed that a significant relationship seems to exist between forum non conveniens and lis alibi pendens since the later is considered a supplementary factor relevant to the determination of the natural forum in forum non conveniens cases; both doctrines operate basically under the same legal principles and; they are in search of the same objective i.e. to render better justice in legal disputes. Moreover, forum non conveniens may be
Marco Pistis, Forum non conveniens, at 4-5 North, Cheshire and North's Private International Law, at 347. 85 North, Cheshire and North's Private International Law, at 347. 86 Marco Pistis, Forum non conveniens, at 10 87 Ibid., at 3
BOOKS [1]. Adrian Briggs, The Conflict of Laws (Oxford: Oxford University Press, 2002) [2]. Albert Venn Dicey, Dicey and Morris on the Conflict of Laws, 13th ed. (London: Sweet and Maxwell, 2000) [3]. David McClean, Morris: The Conflict of Laws (London: Sweet & Maxwell Ltd, 2000) [4]. David J. Sharp & Wylie W. Spicer, New directions in maritime law, (Toronto: Carswell, 1984) [5]. Joost Pauwelyn, Conflict of Norms in Public International Law (Cambridge: Cambridge University Press, 2003), [6]. Peter North, Cheshire and North's Private International Law, 13th ed. (London: Butterworths, 1999) [7]. O.C. Giles, N.J.J. Gaskell, C. Debatista & R.J Swatton, Chorley and Giles Shipping Law. (8th ed. London: Financial Times Pitman Publishing, 2003). [8]. William Tetley, International Conflict of Laws: Common, Civil and Maritime (Montreal, Quebec: International Shipping Publications, 1994) [9]. William Tetley, International Maritime and Admiralty Law, Forum non conveniens and anti-suit injunctions at 412-413, (Cowansville Quebec: Edition Yvon Blais, 2002) ARTICLES [10]. Proshanto K Mukherjee, The Law of Maritime Liens and Conflict of Laws, JIML 9 (2003)6 [11]. David W. Robertson, Forum Non Conveniens in America and England: A Rather Fantastic Fiction (103 L.Q.R 398, 1987)
SECTION II
MECHANICAL ENGINEERING AND ENVIRONMENT
1. INTRODUCTION
Clamshell buckets are automatic loading and unloading devices. Considering their way of action, they can be cable operated (with one, two or four command cables), electric motor operated or hydraulically operated. Clamshell bucket design varies according to the specific task. Single cable operated clamshell buckets are those with flexible traction devices used for lifting, lowering, opening or closing operations. Double cable clamshell buckets are the most used due to their simple design and efficiency. The hoist drum braking and the winding and unwinding of the closing wire rope lead to clamshell bucket closing or opening. Simultaneous winding and unwinding of both wire ropes, at the same speed, lifts or lowers the clamshell bucket. Maintaining the same speed is absolutely necessary for proper clamshell bucket running. A simple or double pulley is required to open or close the clamshell bucket. Suspending the clamshell bucket on a double wire rope increases operating stability. Digging is a result of the clamshell bucket's own weight, so a bucket with a lower weight than the one required by the load consistency will slip over the surface of the material, impeding the loading, while an excessively heavy one will dip into the load. Grapple buckets are recommended equipment for handling bulk loads of larger granulation.
Clamshell buckets are closing and opening devices attached to lifting equipment. They dig and grab the material and unload it at a designated place or into transportation equipment. Any clamshell bucket has two mechanisms: the closing-opening clamshells mechanism and the lifting-lowering one. Grabbing and releasing the load are automated operations, compared to other grabbing devices (hooks, buckets) for which an operator is required. Double clamshell buckets hold a certain volume. They are designed to handle finely granulated material: sand, gravel, salt, cement, etc.
2. ASSEMBLY MODELING
Clamshell bucket modeling using the NX7 software allows 3D parametric solid design. The 3D model is based on sketches created with the Sketcher application, activated by the Sketch command. The Sketcher environment is left after designing the curves and applying the sketch's constraints. After that, the sketch is used in order to create solids or surfaces using modeling operations. The solids can be modified by adding extra operations to reach the desired shape. The technical characteristics of the designed assembly are: bucket capacity: 6.3 m³; maximum opening: 4500 mm; closed clamshell bucket height: 4150 mm; opened clamshell bucket height: 5120 mm; closing wire rope diameter: 28; lifting wire rope diameter: 28. Each component was designed using the NX 7.5 software. The work started with a sketch for every single part, using the following commands: Sketch, Profile (Line), Arc, Circle, Quick trim, Quick extend, Constraints. Extra commands can be used according to each part's design requirements: Chamfer, Rotate, Mirror curve, Offset curve, etc. Dimensioning and extruding were realized after sketching, using the Extrude and Revolve commands, considering each particular case. At this stage, the design could be modified according to the requirements (holing, detaching certain volumes to obtain certain shapes or surface particularities). Achieving that involved the use of certain commands: Hole, Edge blend, Chamfer, Unite, Subtract, etc.
Figure 1 Clamshell bucket parts: a) clamshell; b) fork; c) small roller; d) big roller
Figure 2
Figure 3
I have realized the assembly after each component was designed. In order to do that, I have brought the part files into the working window and assembled the equipment piece by piece. I have used assembly constraints (Touch Align, Angle, Fix, Parallel, Concentric and Distance) for a more accurate assembling process.
3. MODELING RESULTS
The modeling results are illustrated as follows:
Figure 4
Figure 5
The components of the clamshell bucket have been assembled piece by piece. The modeling results have been presented in images. First, I have presented the clamshell bucket with closed clamshells but without the wire ropes (Figure 2), then the components assembled one by one (Figure 3), and after that the clamshell bucket with opened clamshells (Figure 4).
4. CONCLUSIONS
There are different types of clamshell buckets and their efficiency and price vary according to the producer. Their productivity also varies according to their specific task. The production costs get lower as the design solution becomes more versatile and the production is well managed. I have paid attention to the analysis of the technical parameters, considering the advantages and disadvantages of each design solution for different assignments. The comparative analysis involving different design and constructive solutions considered the dimensioning and the specific calculations, starting from the 2D model.
5. REFERENCES
[1] ALAMOREANU, M., Mașini de ridicat. Organe specifice, mecanismele și acționarea mașinilor de ridicat, Ed. Tehnică, București, 1996
[2] MING C. LEU (Keith and Pat Bailey Professor), NX 7 for Engineering Design
[3] RANDY H. SHIH, Parametric Modeling with I-DEAS, 2008
[4] OPROESCU, GHE., Mașini și instalații de transport industrial, Editura EDMUNT, 2001
INFLUENCE OF NOISE ON THE PHYSIOLOGICAL ACTIVITY OF THE BLUE MUSSEL (MYTILUS GALLOPROVINCIALIS) FROM THE BLACK SEA
1ATODIRESEI DINU, 2CHITAC VERGIL, 3PRICOP MIHAIL, 4PRICOP CODRUTA, 5ONCIU MARIA-TEODORA
1,2,3Mircea cel Batran Naval Academy, Constanta, 4Constanta Maritime University, 5Ovidius University, Constanta, Romania
ABSTRACT
In recent years, the ecosystem of the Black Sea has been highly changed and disturbed; extensive pollution, coastal development, disturbance caused by extensive vessel traffic and over-fishing were several of the causes. In this paper we present the influence of vibration (as a source of noise) on the physiological activity of the Black Sea blue mussel (Mytilus galloprovincialis, Lamarck, 1819). Marine organisms are forced to implement their own strategy of defence against stress factors. Along their evolution, under the influence of environmental factors, mussels put in place anti-oxidant enzyme systems and non-enzymatic defence systems against the action of harmful free radicals resulting from metabolic processes of organic xenobiotics. We can consider that in the Black Sea shallow waters, where the dominant species on rocky substrata is the blue mussel, noises or vibrations of high frequency are harmful for the ecosystem.
Keywords: Mytilus galloprovincialis, oxidative stress, noise, spectrogram, physiological activity, Black Sea
1. INTRODUCTION
Free radicals of oxygen (ROS) are transitory molecules or molecular fragments of high complexity, able to interact with all bio-molecules (lipids, proteins, carbohydrates and nucleic acids), altering them structurally and functionally. The free radicals of oxygen are the superoxide radical, the hydroxyl radical, hydrogen peroxide and singlet oxygen [9]. In marine organisms, the free radicals of oxygen can be generated under hypoxia or experimentally induced environmental hyperoxia conditions [15]. The oxygen free radicals can be neutralized by the cellular antioxidant defence system consisting of enzymes such as superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GPX) and some non-enzymatic antioxidants such as vitamins A, C, E, GSH, flavonoids and ubiquinone [6][7]. In the body of molluscs (bivalves), the components of the antioxidant enzyme system, such as superoxide dismutase, catalase and glutathione peroxidase, have been identified, as well as numerous non-enzymatic substances with antioxidant activity and high molecular weight, for example vitamins A, C, E and GSH [1][2][12][15].
2. MATERIALS AND METHODS USED FOR EXPERIMENTAL RESEARCH OF ACOUSTIC EMISSIONS ON THE PHYSIOLOGICAL ACTIVITY OF THE BLUE MUSSEL
The equipment used in the experiments was obtained under the RoNoMar contract, from the Bruel & Kjaer Company (B & K), and was used to perform measurements of underwater noise. The following equipment was used: three type 8106 hydrophones, two type 8104 hydrophones, a type 2713 power amplifier, a LAN-XI data acquisition system, a laptop with PULSE 14 software, cables and a type 4229 calibrator (Figure 1). The underwater sound measurements were processed and analyzed with the help of FFT analysis (Fast Fourier Transform) and Wavelet analysis. In order to study the influence of vibration (as a source of noise) on the physiological activity of the Black Sea blue mussel (Mytilus galloprovincialis, Lamarck, 1819), healthy individuals were collected by scraping mussels off the epibiosis of berth no. 79, located to the south of Constanta Port and less influenced by port operations. A quantity of 180 l of seawater was taken from the same area to fill the experimental ponds / tanks (Figure 2).
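As a rough illustration of the FFT-based processing applied to the hydrophone recordings, the Python sketch below computes a Welch power spectral density and a spectrogram for a synthetic stand-in signal (a 300 Hz tone, the excitation frequency of the first experiment, plus broadband noise at an assumed 44.1 kHz sampling rate); it is only a sketch and does not use the actual PULSE data.

```python
import numpy as np
from scipy import signal

fs = 44_100                        # assumed sampling rate [Hz], not stated in the paper
t = np.arange(0, 10.0, 1.0 / fs)   # 10 s synthetic record
# Stand-in hydrophone signal: 300 Hz tone plus background noise
x = 0.5 * np.sin(2 * np.pi * 300.0 * t) + 0.05 * np.random.randn(t.size)

# Welch-averaged power spectral density (FFT-based)
f, pxx = signal.welch(x, fs=fs, nperseg=8192)
print(f"dominant component at about {f[np.argmax(pxx)]:.1f} Hz")

# Spectrogram, the kind of time-frequency picture referred to in the keywords
f_s, t_s, sxx = signal.spectrogram(x, fs=fs, nperseg=4096, noverlap=2048)
print("spectrogram shape (frequency bins x time frames):", sxx.shape)
```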
Figure 1 Equipment used for the experimental research on acoustic emissions in the coastal marine environment: a - components of an 8106-type hydrophone; b - 8104-type hydrophone; c - LAN-XI data entry system; d - type 2713 power amplifier; e - screenshot of the PULSE 14 software from Bruel & Kjaer
Oxygen in the tank water was provided by bubbling air. For each experimental stage 2 tanks of different dimensions were used: the first, a 30 litre tank, and the latter, a 50 litre one. For all the experiments a witness (control) tank was used for monitoring the mussels' evolution in normal conditions (Figure 4).
Figure 2 Place of sampling mussels (a) by scraping the epibiosis (b)
Vibration in the sampling zone was determined (16/03/2010 and 09/04/2010), together with the biochemical parameters of the collected mussels, as reference values for the two series of experiments (Figure 3).
Figure 3 Determination of the vibration values of the marine environment when sampling: a - an overview of the equipment used; b - recording of the vibrations on the laptop screen
A total of 80-90 mussels (length classes of 35-40 mm and 45-50 mm) were kept in ponds / tanks / aquariums of 40 L, or in 60 L glass ponds, at temperatures ranging from 10 to 13 °C, salinity from 14.1 to 14.6, pH = 8.2 to 8.4 IU, food being provided by algal cultures obtained from the diatoms existing in seawater enriched with a culture of Tetraselmis sp.
Figure 4 Ponds (aquariums) used in the experiment: a - experimental; b, c - witness
The experiment was conducted over a period of 4 weeks (19.03-22.04.2010), with two repetitions, every 2 weeks, for each of them. During this period, the mussels were "stressed" with acoustic signals of different frequencies. By using the PULSE program from B & K, a 7.071 V RMS signal was generated and sent through the LAN-XI DAQ and the type 2713 power amplifier to two type 8104 hydrophones. The noise was measured with two type 8106 hydrophones, which were connected to the DAQ, and the signals were then transmitted to the laptop
Figure 5 Experimental configuration
In the first experiment the emission frequency of the signal was 300 Hz, while in the second it was 16000 Hz. The signals were generated using the type 8104 hydrophones as transmitters. A second source of noise was the air pump, with a fundamental frequency of 50 Hz. In conclusion, during the experiment two sources of noise were continuously present in the basins. After a period of 72 hours from the beginning of the experiment the blue mussels were sacrificed for analysis; their tissue was examined in order to determine some metabolic parameters of oxidative stress: the superoxide dismutase activity, the catalase activity, the concentration of reduced glutathione (GSH) and the protein concentration. In determining the metabolic oxidative stress parameters we used the following methods:
- Superoxide dismutase (SOD) - method based on its ability to inhibit the reduction of a tetrazolium salt - Nitro Blue Tetrazolium (NBT) - by superoxide radicals [14]
- Catalase (CAT) activity - kinetic method based on the decomposition of the hydrogen peroxide existing in the reaction medium, as a result of catalase activity, using a spectrophotometer at 240 nm and 25 °C, dt = 60 seconds [3]
- Reduced glutathione (GSH) - classical colorimetric method based on the DTNB reaction in the presence of GSH, using a spectrophotometer at 412 nm [4]
- Malondialdehyde (MDA) - conventional colorimetric method described by Drapper and Hadley, 1990 [5]
All of the reagents used for these experiments were purchased from Sigma-Aldrich (Steinheim, Germany). The data were statistically interpreted using the statistical analysis software Origin Pro 7.5. The difference is considered statistically significant when p ≤ 0.05.
3. MEASUREMENTS (RECORDS) OF THE NOISE EMISSIONS AND THEIR EFFECTS ON THE SPECIES MYTILUS GALLOPROVINCIALIS
It was observed that the oxidative stress is not obvious if appropriate equipment and experimental approaches are used (the experimental values for the noise measurements were summarized by number of experiments, pools and sources) (Figure 6);
Figure 6 Measured values of noise and graphic representation of its evolution in the two experiments: a graphical representation of the evolution of noise for the first experiment, b - graphical representation of noise evolution for experiment no 2 We found that biochemical indicators for oxidative stress (superoxide dismutase, catalase, glutathione reduced malonildialdehida) in tissues of mussels exposed to different noise emissions reveal that the action of low frequency sounds (from 300 Hz) in experimental ponds over 144 hours, do not induce oxidative stress (Table 1). Table 1 Mean values of superoxide dismutase, catalase, reduced glutathione and of malonildialdehide after 144 hours of exposure at vibrations of 7 V RMS at 300 Hz Experimental group SOD Statis -tics index XES laboratory control group n t P +/M% XES n t P +/CAT of GSH ng mg of proteins-1 1.10 0.28 10 9.68 0.001 141.81 2.91 0.12 9 2.06 0.03 + MDA nmol mg of proteins-1 2.59 0.92 10 5.33 0.001 + 172.6 1.61 0.31 9 3.16 0.001
experimental group 1
During the experiments, we chose a fixed frequency for the signal generated by the hydrophones over a broadband signal in order to identify the contribution of hydrophones to SPL. For lower frequencies the oxygen injectors noise spectrum dominates the frequency spectrum, and when we have higher frequencies, the noise of hydrophones is the only noise source. In the second experiment, the two experimental groups were labeled 3 and 4, and were exposed to sounds with an intensity of 173 dB at 16 kHz. The other environmental conditions (temperature, water salinity, pH, and algal diet, equally air injectors) were the same as in the first experiment. The values of the studied biochemical parameters determined for the mussels collected from the Constanta Harbour are in concordance with those offered by the references. [11] If mussels are exposed to 72 hour high frequency sound (starting with 16 kHz) (Table 3) or for more than 72 hours (Table 4), induction of oxidative stress is observed, together with an activation of the biosynthesis mechanisms of superoxide dismutase (SOD) and catalase (CAT), directly involved in cell defense against damaging actions of oxygen free radicals resulting from metabolic processes. GSH can react directly with the oxygen free radicals or through enzyme whose cofactor is. This explains why the concentration of GSH decreases in experimental group 3. Lipid peroxidation index (MDA) concentration remains constant, proving that the cellular antioxidant system still gets through the concentration of free radicals. Table 3 Mean values of superoxide dismutase, catalase, reduced glutathione and of malonildialdehide after 72 hours of exposure at vibrations of 173 dB at 16 kHz Experimental group SOD Statis -tics index XES n XES n t P XES n CAT GSH ng mg of proteins-1 2.35 0.59 10 2.11 0.32 10 NS 1.42 0.24 10 MDA nmol mg of proteins-1 0.38 0.08 10 0.36 0.02 10 NS 0.46 0.14 10
EUmg of proteins-1 7.81 2.54 10 4.48 0.001 + 92.83 4.10 1.02 10 4.27 0.001 1.41 0.37 10 7.02 0.001 + 182.0 0 0.65 0.20 10 6.38 0.001 -
EUmg of proteins-1 1.69 0.14 10 7.69 0.67 8 27.67 0.001 4.39 0.98 10 0.57 0.18 10 0.71 0.25 9 NS 0.65 0.23 9
Table 4 Mean values of superoxide dismutase, catalase, reduced glutathione and of malonildialdehide after 144 hours of exposure at vibrations of 173 dB at 16 kHz ExperiSOD CAT GSH MDA mental Statisng mg nmol tics group mg of EUmg of of index proteproteproteins-1 ins-1 ins-1 XES 4.69 0.76 0.52 0.80 labora1.26 0.32 0.22 0.68 tory n 10 10 10 10 control t 7.46 9.55 group P 0.001 NS 0.001 NS XES 6.22 1.22 0.22 0.49 1.01 0.22 0.08 0.29 experimental n 10 10 10 10 group 3 t 2.97 3.67 4.02 P 0.008 0.001 0.001 NS XES 6.09 1.46 0.36 0.51 1.79 0.15 0.18 0.24 experimental n 10 10 10 10 group 4 t 2 6.12 P 0.05 0.001 NS NS In the last 72-hour exposure to a noise of 173 dB (totaling 216 hours), the mean values of biochemical parameters analyzed showed a direct involvement of cellular antioxidant defense systems in the free radicals, and even an inability to protect the cells (Table 5), which is a proof of the fact that medium oxidative stress appeared. Marine organisms are forced to implement their own strategy of defense against stress factors. Along their evolution, under the influence of environmental factors, mussels put in place anti-oxidant enzyme systems and non-enzymatic defense systems against the action of harmful free radicals resulting from metabolic processes of organic xenobiotics [11]. Dissolved oxygen is a critical component of the aquatic environment. During normal cell metabolic reactions, oxygen is converted to 0.1-0.2% oxygen free radicals [10] Table 5 Mean values of superoxide dismutase, catalase, reduced glutathione and of malonildialdehide after 216 hours of exposure at vibrations of 173 dB at 16 kHz ExperiSOD CAT GSH MDA Statis mental ng mg nmol -tics group mg of EUmg of of index proteins-1 proteproteins-1 ins-1 labora- XES 6.50 1.14 1.750. 0.78 tory 0.52 0.28 22 0.24 control n 10 10 10 10 group t 27.55 5.57 3.11 4.91
experimental group 4
4. CONCLUSIONS
Studying the impact of underwater sound / noise on a few Black Sea populations determined us to introduce the experimental method into the working protocol; the mussels (Mytilus galloprovincialis) experimentally exposed to different noise emissions, according to the analyses carried out on their tissues, presented changes of the biochemical indicators of oxidative stress (superoxide dismutase, catalase, reduced glutathione and malondialdehyde), from which we can draw the following conclusions:
- the action of low frequency sounds (from 300 Hz) for 144 hours does not induce oxidative stress;
- the increase of the malondialdehyde concentration for a longer exposure (216 hours) to low frequency sounds explains the oxidative stress hypothesis;
- mussels' exposure to high frequency sound (16 kHz) for over 72 hours induces oxidative stress.
5. REFERENCES
[1] Akcha, F., Izuel, C., Venier, P., Budzinski, H., Burgeot, T., Narbonne, J.-F., Enzymatic Biomarker Measurement and Study of DNA Adduct Formation in Benzo Pyrene-Contaminated Mussels, Mytilus galloprovincialis, Aquat. Toxicol, 49, 2000: 269287 [2] Akcha, F., Izuel, C., Venier, P., Budzinski, H., Burgeot, T., Narbonne, J.-F., Pollution-Mediated Mechanism of Toxicity in the Common Mussel, Mytilus edulis L. and other molluscs. Funct. Ecol. 4, 2000: 415 424 [3] Beers, R.F, and Sizer, I.W., A Spectrophotometric Method for Measuring the Breakdown of H2O2 by Catalase, J. Biol. Chem, 195,1952: 133-140 [4] Beutler, E., Red Cell Metabolism in: A manual of biochemical methods 2, Ed. Grune and Stratton publishers London, 1975 [5] Brcan, A., Sutcu, R., Gokirmak, M., Hiczlmaz, H., Akkaya, A., Ozturk, O., -Total antioxidant capacity and C-Reactive protein levels in patients with community acquired pneumonia. Turk. J. Med. Sci. 38 : (6), 2008: 537 544
1. INTRODUCTION
The recent evolution of railway systems in the world has emphasized the necessity of more efficient management and, at the same time, of developing new, more accurate design approaches for reducing costs and increasing the safety and reliability of railway systems. Among all the sub-systems and components that are part of a railway system, the wheel/rail interface is one of the most delicate, as the performance of the train, as well as its safety, depends on it. Through the wheel/rail interface, in fact, the dynamic and static loads pass from the rail to the wheel in a really small contact area, whose extension and geometry can vary during the in-service period. The behaviour of the wheel/rail interface is paramount for being certain that adequate comfort, stability and safety are guaranteed during the train trip. Computer based methods are the appropriate instruments of investigation which offer an overview regarding the phenomenon in engineering [3], [4].
2. MATERIAL AND METHODS
The loads induced in the contact area have been determined by FEM analysis. The figures present the standard rail profile UIC60 and the position of the forces on the contact surface. The load induced by one wheel is about 10 tonnes. The contact surfaces are assumed to be elliptical, rectangular and circular. In this research the axle load is considered to be about 22 tonnes. The loadings that are induced on the rail and the wheel are 50 to 200 kN. The model of the wheel and the rail uses the UIC60 rail profile and the wheel of the two-axle H665 bogie.
In this paper, the analysis of contact stress for two rolling bodies is presented using the finite element method (FEM). The first stage is to determine the critical surface of stress, the positions of the critical stresses being identified by FEM. From the beginning, some differences in the values of these stresses are expected, an aspect presented in the section on stress analysis theory. Vertical cracks will grow faster than other cracks. According to analytical and numerical results, the direction of the crack growth and also the prediction of the fracture depend on the values of maximum stress and displacement, or on the value of the product of maximum stress and displacement. All these - maximum stress, displacement, critical surface and stress - may be calculated by the use of the FEM. In order to analyse the contact stresses, numerical methods are used.
Figure 1 Finite element model of wheel and rail
Due to the nonlinearity of the contact analysis, the contact region requires a very fine mesh. In this research, a fine mesh with an average element length of less than 2 mm is used near the contact area. A pilot node located at the wheel centre is connected to the wheel using rigid link elements. All the external loading and boundary conditions of the wheel load are applied on the pilot node. Both loadings and boundary conditions can be obtained through field measurements.
Figure 5 Stress distribution in rail
Figure 2 Finite element modeling wheel and rail with static loads
Figure 6 Stress distribution in wheel
3. DESCRIPTION OF THE CONTACT FATIGUE CRACK INITIATION MODEL
A comprehensive model of the contact fatigue life prediction of mechanical elements should consider the time history of the applied contact loads, regarding their range of variation. The rolling-sliding contact loads, typical for mechanical elements such as gears, wheels and rails, rolling bearings etc., are generally stochastic in a certain range, due to the stochastic character of some contact parameters. For a description of a general case of contact loading, one has to estimate average normal and tangential contact forces for the computational determination of surface and subsurface contact stresses. The actual contact problem is transformed into its generalised form applying the Hertzian theory, i.e. the equivalent contact cylinder is generated from the curvature radii of the considered contact mechanical elements at the point of the actual contact. The equivalent Young's modulus and Poisson's ratio are also computed from the respective data of the contacting bodies (figure 7).
Figure 4 Deflection of the wheel and rail
Stress distribution individually has been investigated by finite element analysis (FEA) and the
Figure 7 Equivalent model of two contacting cylinders
$$p(x) = \frac{2F_N}{\pi a^{2}}\sqrt{a^{2}-x^{2}} \qquad (1)$$
where F_N is the normal contact force per unit width and a is the half length of the contact area, which can be determined from
$$a = \sqrt{\frac{4F_N R}{\pi E}} \qquad (2)$$
where E and R are the equivalent Young's elastic modulus and the equivalent radius, respectively, defined as
$$\frac{1}{E} = \frac{1-\nu_1^{2}}{E_1} + \frac{1-\nu_2^{2}}{E_2} \qquad (3)$$
$$\frac{1}{R} = \frac{1}{R_1} + \frac{1}{R_2} \qquad (4)$$
where E_1, R_1, ν_1 and E_2, R_2, ν_2 are the Young's moduli, the curvature radii and the Poisson's ratios of the contacting cylinders, see Figure 7. Next, the maximum contact pressure p_0 = p(x=0) can be determined as
$$p_0 = \frac{2F_N}{\pi a} \qquad (5)$$
In the analysis of real mechanical components, some partial sliding occurs during time-dependent contact loading, which can originate from different effects (complex loading conditions, geometry, surface, etc.) and it is often modelled with a traction force due to the pure Coulomb friction law. In the analysed case the frictional contact loading q(x) is a result of the tractive force action (tangential loads) due to the relative sliding of the contact bodies and is here determined by utilising the previously mentioned Coulomb friction law
$$q(x) = \mu\, p(x) \qquad (6)$$
where μ is the coefficient of friction between the contacting bodies. For the general case of elastic contact between two deformable bodies in a standing situation, the analytical solutions are well known. However, using the general Hertzian equations it is hard to provide the loading cycle history and/or the simulation of the contact pressure distribution of a moving contact in an analytical manner. Therefore, the finite element method is used for simulating two-dimensional frictional contact loading in this case, and the same procedure is usually used when dealing with complex contact loading conditions. The equivalent contact model is spatially discretised in the region of interest, where a finer mesh is used around material points (x_i, y_i) on and under the contact region. The computational model for evaluating contact stresses is a two-dimensional rectangle, with assumed plane strain conditions, Figure 8.
Figure 8 Finite element model for determination of contact loading cycles
The loading boundary conditions comprise the Hertzian normal contact loading distribution and the tangential contact loading due to frictional forces in the contact. The loading is moving along the contact surface of the generalised computational model. The stress analysis of the generalised contact model is performed in the framework of the finite element method. The appropriate stress state for each observed material point is computed for each position of the moving contact loading. This procedure results in the generation of real stress loading cycles of material points in one pass of the rolling-sliding contact, which are necessary for the following contact fatigue analysis.
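The line-contact relations in Equations (1)-(5) can be evaluated directly. The short Python sketch below does so for illustrative input values (an assumed effective contact width, a wheel radius taken from the quoted 0.89 m diameter, a flat rail in the rolling plane and steel properties); these are placeholders rather than the parameters of the finite element model used here.

```python
import math

def hertz_line_contact(F_N, R1, R2, E1, nu1, E2, nu2):
    """Half-width a and peak pressure p0 for 2D (line) Hertzian contact.
    F_N is the normal force per unit width [N/m]."""
    E_eq = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)   # equivalent modulus, Eq. (3)
    R_eq = 1.0 / (1.0 / R1 + 1.0 / R2)                     # equivalent radius, Eq. (4)
    a = math.sqrt(4.0 * F_N * R_eq / (math.pi * E_eq))     # half contact length, Eq. (2)
    p0 = 2.0 * F_N / (math.pi * a)                         # peak pressure, Eq. (5)
    return a, p0

def pressure(x, a, p0):
    """Hertzian pressure distribution p(x), Eq. (1); zero outside the contact."""
    return p0 * math.sqrt(1.0 - (x / a)**2) if abs(x) <= a else 0.0

if __name__ == "__main__":
    # Illustrative only: 146.2 kN wheel load smeared over an assumed 12 mm effective width
    F_N = 146.2e3 / 0.012                         # N per metre of width (assumption)
    a, p0 = hertz_line_contact(F_N, R1=0.445, R2=math.inf,
                               E1=210e9, nu1=0.3, E2=210e9, nu2=0.3)
    print(f"half-width a = {a * 1e3:.2f} mm, peak pressure p0 = {p0 / 1e6:.0f} MPa")
    print(f"tangential traction at x = 0 with mu = 0.3: {0.3 * p0 / 1e6:.0f} MPa")  # Eq. (6)
```

In the moving-load simulation, p(x) and q(x) evaluated in this way serve as the travelling boundary load on the discretised model.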
4. NUMERICAL EXAMPLE
The rail profile is chosen according to the UIC regulation; the considered rail is the most common UIC60 profile, as shown in the UIC technical documentation. The wheel diameter is about 0.89 m and the wheel profile is chosen according to the AAR standard, having a wide flange contour. The vertical load is assumed to be the maximum design load, which is 146.2 kN. In this study the rail length is considered to be 700 mm. The initial contact point is assumed to occur at the railhead centre and wheel tread centre. The results of the static load analysis of the wheel and rail contact are shown in figures 9 and 10. Figure 9 presents the von
5. CONCLUSIONS
A multiaxial fatigue life prediction investigation is presented in this paper. In this study, the effects of different parameters have been studied individually. Future research needs to consider interactive effects of those parameters because the wheel/rail contact problem is highly nonlinear. Also, other effects, such as residual stress from manufacturing, brake loading, thermal loading, dynamic and impact loadings, material defects, etc. need to be included in the proposed methodology.
Nevertheless, the presented numerical model enables a better understanding of the process of fatigue crack initiation in the contact area, where rolling-sliding boundary conditions on the contact surface are taken into consideration. This causes permanent damage to mechanical elements. Moreover, this model may also be used for practical applications of the contact fatigue of mechanical elements, e.g. the contact fatigue of gear teeth flanks in contact.
6. REFERENCES
[1] HOWELL, M., HAHN, G.T., RUBIN, C.A., Finite element analysis of rolling contact for nonlinear kinematic hardening bearing steel, ASME J Tribol 1995; 117:729-36
[2] GUO, Y.B., BARKEY, M.E., Modeling of rolling contact fatigue for hard machined components with process-induced residual stress, Int J Fatigue 2004; 26(6):605-13
[3] OANTA, E., NICOLESCU, B., Computer-aided approaches - a path to the information of synthesis in engineering, Proceedings of the 5th International Conference on Quality, Reliability and Maintenance QRM2004, ISBN 1-86058-440-3, University of Oxford, 1-2 April 2004, pag. 265-268
[4] OANTA, E., NITA, A., An Original Method to Compute the Stresses in Applied Elasticity, Journal of Optoelectronics and Advanced Materials - Rapid Communications (OAM-RC), Editor in-chief: Prof. Dr. Mihai A. Popescu, ISSN: Print: 1842-6573, Vol. 3, No. 11, November 2009, pp. 1226-1230
[5] SLADKOWSKI, A., SITARZ, M., 2005, Analysis of wheel-rail interaction using FE software, Wear, 258, 1217-1223
[6] WIEST, M., KASSA, E., DAVES, W., Assessment of methods for calculating contact pressure in wheel-rail/switch contact, Wear, 265, 1439-1445
1. INTRODUCTION
Fatigue is progressive damage occurring in materials subjected to cyclic loads. The study of this phenomenon assumes special importance in the design of machinery and structures, since this is the most frequent cause of service rupture. As railway axle loads and speeds increase, and wear prevention methods become more effective, it is crucial to implement solutions to prevent rolling contact fatigue. For example, increasing the wear resistance of rails may imply that incipient surface cracks, previously eliminated by wear, are no longer eliminated. On wheels, fatigue cracks can be initiated not only on the surface but also under it. The initiation of surface cracks seems to be highly influenced by the presence of residual stresses and thermal loads caused by forced braking. According to elastic analyses the maximum shear stress appears between 4 and 5 mm under the wheel surface; however, some cracks can be initiated at depths between 4 and 20 mm. The phenomenon of fatigue in rails is more complicated because of load randomness. The maximum shear stress appears at a depth of 3 mm and cracks initiate between 3 and 15 mm below the rail surface. Initiation of cracks under the rail surface is very common in heavy haul rail.
2. FINITE ELEMENT MODEL
A 300 mm long rail was used to simulate the passage of the wheel and special attention was given to the contact surfaces, where smaller elements were used to define them correctly, since contact stresses are highly dependent on the contact surface geometry. Figure 1 presents the mesh used for the rail. In order to reduce computational time, only a small part of the wheel was used, as presented in figure 2, because only its surface geometry was needed. As shown in figure 3, a refined mesh was used on the contact surfaces.
Figure 2 Wheel mesh
Parametric studies have been carried out for free rolling in order to obtain the contact stresses in the normal and tangential directions. These data are fundamental for the investigation of friction and wear and are offered to tribology experts for evaluation. In the next
Figure 3 Model mesh
Dang Van proposed a fatigue initiation criterion based on the instantaneous value of the shear stress τa(t) and of the hydrostatic stress σh(t), [4]. This criterion states that fatigue failure will occur if condition (1) is verified:
τa(t) + aDV · σh(t) > τ-1 (1)
where: τa(t) - instantaneous shear stress value at a specific point; σh(t) - instantaneous hydrostatic stress value at the considered point; τ-1 - material fatigue limit in reversed torsion; aDV - adimensional constant, which represents the influence of the hydrostatic stress, and can be determined by:
(2)
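As a minimal illustration of how condition (1) can be checked numerically, the sketch below evaluates the left-hand side of the criterion along a discretized stress history at one material point. The stress history, the constant aDV and the torsion fatigue limit used here are placeholder values chosen only for the example, not data from this study.

```python
import numpy as np

def dang_van_check(tau_a, sigma_h, a_dv, tau_f):
    """Evaluate the Dang Van condition (1) over a stress history.

    tau_a   : instantaneous shear stress values tau_a(t), MPa
    sigma_h : instantaneous hydrostatic stress values sigma_h(t), MPa
    a_dv    : adimensional material constant a_DV
    tau_f   : fatigue limit in reversed torsion, MPa
    Returns the peak equivalent stress and True if crack initiation
    is predicted at this point.
    """
    eq = np.asarray(tau_a) + a_dv * np.asarray(sigma_h)
    return eq.max(), bool(eq.max() > tau_f)

# Placeholder loading cycle at one point of the railhead.
t = np.linspace(0.0, 1.0, 200)
tau_a = 180.0 * np.abs(np.sin(2.0 * np.pi * t))   # MPa
sigma_h = -120.0 * np.sin(2.0 * np.pi * t)        # MPa
peak, initiates = dang_van_check(tau_a, sigma_h, a_dv=0.3, tau_f=250.0)
print(f"peak equivalent stress = {peak:.1f} MPa, initiation predicted: {initiates}")
```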
3.
The shape profiles of the wheels and of the rail sections are S1002 and UIC60. Table 1 summarizes the main geometric features of the wheelset-rails system.
Table 1. Main geometric features of the wheel-rails system
Internal gauge: 1360 mm
Wheel radius: 0.457 m
Rail tilt: 1/20
Figure 5 Cartesian coordinate systems and independent degrees of freedom of the axle
4. NUMERICAL RESULTS
The geometry of the investigated wheel as well as an appropriate finite element discretization is presented
One can notice the following aspects:
- the numerical solutions differ significantly from the Hertzian solution, because of the nonlinearity of the finite element model;
- the maximum contact pressure is overestimated by the Hertzian theory;
- for some cases no corresponding Hertzian solution can be found, due to the curvature at the initial contact points.
5. WHEEL-RAIL CONTACT APPROACH
The rigid-contact hypothesis is based mainly on two approximations: (1) the elastic deformation of the wheel and the rail is insignificant, allowing both of them to be treated as rigid solids; (2) the contact is maintained between the wheel and the rail throughout the trajectory, ignoring the possibility of the wheel occasionally becoming separated from the rail. Given the wheel and rail profiles, for each specific pair of values of the wheelset degrees of freedom y and a, only one specific pair of values (z, f) satisfies the rigid contact condition for both wheels (rigid contact hypothesis). The elastic-contact hypothesis takes into account the existence of contact patches between the bodies, due to their elasticity; they are no longer regarded as rigid solids. The z and f degrees of freedom, which were dependent in the rigid-contact hypothesis, now become independent, since the condition that the contact between the wheel and the rail occurs at a point is no longer applicable. The phenomenon of a double contact point between wheel and rail can be resolved simply, by identifying the two interpenetration areas and calculating the normal forces separately, figures 9, 10 and 11.
Figure 7 Parameter definition of the contact setup, radii of curvature
Linear elastic material behaviour has been assumed, with E = 210 GPa and ν = 0.3. The results depend significantly on the contact conditions of wheel and rail at the contact point, due to the different radii of curvature of the wheel with lateral shift. In general rolling conditions, the lateral wheel position on the rail is not fixed to the centre line. Thus, parametric studies for different laterally shifted positions of the wheel were carried out, as presented in figure 7. For the velocity v = 200 km/h and a wheel load of 90 kN, the comparison of the contact patches is presented in figure 8.
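For orientation, the Hertzian reference solution used in this comparison can be estimated with a few lines of code. The sketch below treats the contact as circular, which is only a simplification of the real elliptical wheel-rail patch, and the two radii of curvature passed in the example are assumed values, not the exact profile data of the model.

```python
from math import pi

def hertz_circular_contact(load_N, r1_m, r2_m, E_Pa=210e9, nu=0.3):
    """Hertzian estimate for a circular contact patch between two elastic
    bodies of the same material (a simplification of the elliptical
    wheel-rail patch)."""
    E_star = E_Pa / (2.0 * (1.0 - nu**2))        # combined elastic modulus
    R = 1.0 / (1.0 / r1_m + 1.0 / r2_m)          # effective radius of curvature
    a = (3.0 * load_N * R / (4.0 * E_star)) ** (1.0 / 3.0)   # contact radius
    p_max = 3.0 * load_N / (2.0 * pi * a**2)     # peak contact pressure
    return a, p_max

# Wheel load of 90 kN as in the text; the wheel rolling radius and the
# assumed railhead crown radius below are placeholder inputs.
a, p_max = hertz_circular_contact(90e3, r1_m=0.457, r2_m=0.30)
print(f"contact radius ~ {a*1e3:.1f} mm, peak pressure ~ {p_max/1e6:.0f} MPa")
```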
Figure 9. Single-point contact
Figure 12. Wheel and rail profiles are defined by discrete points
Figure 13. Indentation of the contact patch
To model the profiles, discrete points are used (separated by 0.5 mm to 3 mm), and spline equations are used to interpolate between these points, figure 12. Spline functions offer a set of important advantages in the creation of computer-based models where interpolation is required, [5]. The profile of the wheel is scanned to delimit the zone or zones where interpenetration occurs. In each of these zones, by using the cubic spline curves, the maximum indentation that occurs in the direction normal to the wheel profile is calculated, figure 13. Using the Hertz theory, and given the value of the maximum indentation, the radii of curvature of the solids at the contact point and the elastic constants of the material, the normal force that would have to be applied in order to produce such an indentation can be calculated. The relationship between force and indentation is nonlinear and is governed by the well-known expression N = C·δ^(3/2), where N is the normal force, δ is the indentation and C is a constant that depends on the curvatures and the elastic constants of the bodies. The data strictly necessary for the subsequent dynamic simulation are the number and position(s) of the contact point(s), and the contact angles, indentations and curvatures of each of them. The normal forces could be
Figure 11 No point contact
The search for the contact patches between the wheel and the rail, and the calculation of the corresponding normal forces, is relatively expensive in computational terms. For this reason, to reduce the calculation times, the problem is normally solved in two dimensions, assuming that the contact patches are located on a vertical diametric section of the wheel. This is equivalent to considering that the influence of the yaw angle a on the location of the contact patches is ignored. By neglecting this degree of freedom, the
Figure 15. Rigid contact approach
Figure 14. Longitudinal scan of the solids and the contact patch
In order to include all the possible positions that the wheelset could reach during a dynamic simulation, it is necessary to calculate the wheel-rail contact data for a large number of cases. During the simulation, the elastic contact information at each integration step is obtained by interpolation of the adjacent data in the table. The crucial point is that the number of discrete positions calculated in the tables has to be large enough to detect the two-point contact cases if they may happen. For example, when the wheelset moves laterally and the contact point jumps from the tread to the flange of the wheel, two-point contact will probably occur for certain lateral displacements and angles of attack (y, a) of the wheelset. The size of the table should be adequate in order to include enough realistic cases of two-point contact for these displacements. In this way, their interpolation will be possible during the simulation. However, the technique that probably contributes most to the efficiency of the method described here is the use of the results obtained for the rigid-contact problem as a base for the elastic-contact calculations. The values of indentation are very small, since the wheel and the rail are made of steel and their stiffnesses are high. There is a direct relationship between the position of the wheelset relative to the track and the indentations in the contact patches, which can be obtained using linear interpolation. The normal forces, however, are related to the indentations by the expression:
N = C·δ^(3/2) (3)
so their relationships to the degrees of freedom of the wheelset are non-linear. One option, since the value of the indentation is known, is to recalculate the corresponding normal force using the Hertz theory. The other option is to interpolate the coefficients
C = N / δ^(3/2) (4)
instead of interpolating the normal forces. This coefficient and the interpolated indentation give the normal force using (3). It has been concluded that this method greatly improves the accuracy of the results. The semi-axes of the contact ellipse also depend non-linearly on the indentation value. The relationship in this case is:
a = K·δ^(1/2) (6)
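A minimal sketch of this look-up-table scheme is given below, using made-up table entries rather than results from the paper; C and K are the coefficients defined by Eqs. (3)-(4) and (6), and only the lateral displacement y is used as table input for brevity (the real tables also depend on the angle of attack).

```python
import numpy as np

# Made-up table: for precomputed lateral displacements y, store the
# indentation delta, the coefficient C = N / delta**1.5 and the coefficient
# K = a / delta**0.5.  All numbers are placeholders.
y_table     = np.array([0.0, 1.0, 2.0, 3.0]) * 1e-3      # lateral displacement [m]
delta_table = np.array([35.0, 38.0, 44.0, 55.0]) * 1e-6  # indentation [m]
C_table     = np.array([8.8, 9.0, 9.5, 10.3]) * 1e10     # [N/m^1.5]
K_table     = np.array([0.9, 0.95, 1.05, 1.2])           # [m^0.5]

def contact_from_table(y):
    """Interpolate delta, C and K at lateral displacement y, then recover
    the normal force and the contact semi-axis from Eqs. (3) and (6)."""
    delta = np.interp(y, y_table, delta_table)
    C = np.interp(y, y_table, C_table)
    K = np.interp(y, y_table, K_table)
    return C * delta**1.5, K * delta**0.5

N, a = contact_from_table(1.4e-3)
print(f"N ~ {N/1e3:.1f} kN, semi-axis a ~ {a*1e3:.2f} mm")
```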
Figure 14. Inputs and outputs of the look-up table with the elastic three-dimensional contact data
where a and b are the dimensions of the major and minor axes of the contact ellipse. The interpolation procedure is the same as that described for the normal forces, except that we now interpolate the K coefficients, obtained from:
K = a / δ^(1/2) (8)
5. CONCLUDING REMARKS
A nonlinear analysis of the stress/strain state resulting from the passage of one wheel over a segment of rail was performed using non-linear finite elements. The analysis is presented in this paper through the data corresponding to the first passage; subsequent work will consist of a fatigue behaviour characterization of the rail using a more refined mesh and simulating the number of passages necessary to reach a steady state characterized by the repetition of the loading path. The residual stress state is another direction of investigation, prior experience being available, [7]. The differences in wear indices, running safety coefficient and forces in the contact areas can be significant, depending on whether a 2-D or 3-D contact analysis is used, as long as the yaw angle reaches significant values. These differences become insignificant on straight tracks or in large-radius curves. Therefore, the aforementioned method is especially suitable when analysing vehicles that will run on tracks with tight curves.
[1] BATHE, K., Finite-Elemente-Methoden, Springer-Verlag, Berlin, 1990.
[2] CRISFIELD, M. A., Non-Linear Finite Element Analysis of Solids and Structures - Advanced Topics, Wiley & Sons, New York, 1998.
[3] DANG VAN, K., GRIVEANU, B. O., On a new multiaxial fatigue limit criterion: theory and applications, in: Biaxial and Multiaxial Fatigue, edited by M. W. Brown and K. J. Miller, Mechanical Engineering Publications, London, 1989, p. 479-496.
[4] DANG VAN, K., MAITOURNAM, M. H., On some recent trends in modelling of contact fatigue and wear in rail, Wear, Vol. 253 (2002), p. 219-227.
[5] GAVRILA, G., OANTA, E., Interpolation and Computer Based Models, Annals of DAAAM for 2009 & Proceedings of the 20th DAAAM World Symposium, Vienna, 25-28 Nov 2009, Vienna, Austria, organized by the Danube Adria Association for Automation & Manufacturing, Vienna University of Technology, University of Applied Sciences Technikum Vienna, Austrian Society of Engineers and Architects - OIAV 1848, editor B. Katalinic, ISBN 978-3-901509-70-4, ISSN 1726-9679, ID 835, pag. 579-580.
[6] JOHNSON, K. L., Contact Mechanics, Cambridge University Press, 1985.
[7] NITA, A., OANTA, E., Improving the quality of the molded polymeric parts by reducing the residual stress, Proceedings of the 2nd International Conference on Manufacturing Engineering, Quality and Production Systems (MEQAPS '10), ISSN 1792-4693, ISBN 978-960-474-220-2, pp. 77-82, Constantza Maritime University, Constantza, Romania, September 3-5, 2010.
[8] VADILLO, E. G., GIEMENEZ, J. G., Influence of the yaw rotation on the wheel rail contact parameters, Proceedings of the 8th International Wheelset Congress, 1985, 1, II.4, 1-15.
1. INTRODUCTION
Today's world is complex and dynamic, and the instruments employed to explore its unknown facets must be fast, accurate and intelligent. Emergency situations are events which require decisions intended to minimize the loss of life, the financial losses and the additional long-run threats; these decisions must be made in a short time, taking into account various aspects. Because of the high complexity of emergency situations, optimum decisions must be supported by accurate qualitative and quantitative instruments which allow the analyst to observe the events, to predict their evolution, to elaborate the decision and to control the outcomes. Information technology offers versatile instruments employed to assist decision making, and the paper presents three directions for such instruments: knowledge management, advanced graphics for the representation of the emergency scene and complex science-based models for emergency situations.
2. ASPECTS REGARDING THE KNOWLEDGE MANAGEMENT FOR EMERGENCY SITUATIONS
The process of data acquisition and reduction employed to support decisions has three aspects: selection of the classes of information which are useful for decision support; methods of data reduction which can offer refined information; equipment necessary for data management at the different locations of the headquarters.
2.1 Classes of information
Several classes of information are necessary in order to have an overview of the on-going phenomena, [4]. Geographical information represents the stage of the emergency situation. GIS systems offer such
information. It is important to export the content in order to create a customized representation of the stage. Weather conditions are important both for the expansion of the pollutants or fire and for the intervention means. There are sites which present the wind speed and temperature. Additional information is needed, such as the speed of the marine currents.
Figure 1 Data concentration for emergency situations
Computer-aided models of different structures, such as buildings, towers, bridges, tunnels, dams, piers and cranes, are also needed. At least some parameterized prototypes are necessary in order to create a particular model of a given structure. Computer-aided models of the phenomena are important in order to predict the evolution of the pollutant/fire and to select the most effective intervention option. Pollution scenarios must consider a domain which may span several countries. This is why it is important to gather information from all the countries which may be involved in the event, including the law which may be applied in that given case. Access to the databases regarding the persons involved in the event must be allowed, prior agreements with the various countries and institutions being important to sign.
A word of wisdom says that an image is worth a hundred words. The human eye has a remarkable capacity to integrate information and to notice the corresponding trends. Human vision has a dedicated processor, in effect a brain of its own. From this standpoint, advanced graphics facilities are very important for the decision-making process in an emergency situation. Virtual reality is an intelligent resource which has not yet proved its full power. Several research projects have been dedicated to the use of virtual reality for various purposes, including emergency situations. Figures 3 and 4 present some applications of virtual reality for an oil leak scenario.
Figure 2 Equipment for the situation room within the headquarters of the emergency committee
The headquarters must offer fast links to the data sources or to the datacenter. Because of this condition, there are three possible locations: a unique location which has fast connections; an itinerant headquarters which can use the network of an administrative organization in order to set up a situation room; or a specialized vehicle which is used to access the refined data, ready to be interpreted and used. These three possibilities imply measures to be undertaken in terms of data acquisition, data processing, technical teams and expert teams which must back up the management level, and equipment which must be installed in all these types of headquarters. A certain degree of redundancy of the headquarters to be used in these operations should also be considered. Moreover, all the equipment must be periodically tested and updated in order to maintain its operational capacity. As a final remark, one can notice that in all these situations data protection is paramount.
Figure 3 A virtual reality solution which uses ECMA script for the engine of functionalities
Figure 3 proposes a set of specific functionalities based on ECMAScript and VRML. First of all, it should be noticed that the scenario is included in a quadrant which presents the geographical particulars: islands, lighthouses, depths of the seabed. The transparency attribute of the sea is adjustable, so the user can see the objects underneath the sea surface, including the seabed. Objects hanging on the front wall may be touched and are consequently introduced into the virtual world. Once included in the scene, if an object is touched, a system of axes appears. The object can be moved in the scene if the axes are touched: the lines of the axes are used for translation and the arrows are used for rotations. If the object is touched one more time, the axes disappear and the vehicle is realistically presented in the 3D scene. Besides the vehicles, there are also instruments, such as systems of axes and grids on horizontal or lateral geometrical planes. Geometrical data are generated from the geographical information, which is converted in such a way that the
Figure 4 Advanced virtual reality solution which uses JAVA for the engine of functionalities
As it can be seen, the domain is also organized in quadrants. On the walls of the quadrant, different images may be displayed, presenting maps, reports, the actors who act on the scene and other information.
Figure 5 Outcome of the levels of intelligence of the approach
Figure 6 Integration of the results of various models in engineering
A set of generalized complex models for the most common scenarios of emergency situations may be created, such as fluid-structure interaction, the study of the building-soil system under various loads, the dissemination of a pollutant, the minimum path for emergency situation vehicles, metal structures under extreme temperatures, and new chemical and biological means of depollution. The identification of methods to calibrate these models according to the on-going event is another problem to be solved in order to have accurate results.
5. CONCLUSIONS
Science today offers technologies and instruments which may be very useful in supporting decisions in emergency situations. The paper is a synthesis of ideas which offer the opportunity to explore some directions to be followed in the upcoming years in order to lay the grounds of scientifically based decisions in a world where intelligence has an increasing importance.
6. ACKNOWLEDGEMENT
Several of the ideas presented in the paper are the result of the models developed in the framework of the MIEC2010 bilateral Ro-Md research project, Oanta, E., Panait, C., Lepadatu, L., Tamas, R., Constantinescu, M., Odagescu, I., Tamas, I., Batrinca, G., Nistor, C., Marina, V., Iliadi, G., Sontea, V., Marina, V., Balan, V. (2010-2012), Mathematical Models for Inter-Domain
[1] BERESCU, S., MARPOL and OPA conventions regarding oil pollution, Ovidius University Annals, Series: Civil Engineering, Volume 1, Issue 12, June 2010.
[2] BERESCU, S., NITA, A., RAICU, G., Modern Solutions used in Maritime Pollution Prevention, Ovidius University Annals, Series: Civil Engineering, Volume 1, Issue 12, June 2010.
[3] OANTA, E., Virtual Reality Original Instrument Employed in Crises Management, Proceedings of the 12th International Congress of the International Maritime Association of the Mediterranean (IMAM 2007), Varna, Bulgaria, 2-6 September 2007, Maritime Industry, Ocean Engineering and Coastal Resources, editors: Guedes Soares & Kolev, Taylor & Francis Group, London, 2008, ISBN 978-0-415-45523-7, pg. 1095-1102.
[4] OANTA, E., TAMAS, I., ODAGESCU, I., A Proposal for a Knowledge Management System for Emergency Situations, Proceedings of the 4th International Conference on Knowledge Management: Projects, Systems and Technologies, Section III. KM Projects, organized by the Project Management Association, Carol I National Defense University, Academy of Economic Studies, November 6-7, 2009, Bucharest, Romania, editors: Toma Plesanu, Luiza Kraft, ISBN DVD 978-973-663-784-1, ISBN hardcopy 978-973-663-783-4, pp. 143-145.
[5] OANTA, E., PANAIT, C., MARINA, V., MARINA, V., LEPADATU, L., CONSTANTINESCU, E., BARHALESCU, M. L., SABAU, A., DUMITRACHE, C. L., Mathematical Composite Models, a Path to Solve Research Complex Problems, Annals of DAAAM for 2011 & Proceedings of the 22nd International DAAAM Symposium, ISBN 978-3-901509-83-4, ISSN 1726-9679, pg. 0501-0502, editor Branko Katalinic, DAAAM International, Vienna, Austria, 2011.
A NEW INNOVATIVE DIRECT DISTRIBUTED INJECTION SYSTEM OF FUEL FOR INTERNAL COMBUSTION ENGINES
ABSTRACT
This paper proves, via numerical simulation using the well-known Fluent software, that the proposed patented invention is feasible as a viable solution for improving the combustion conditions inside the combustion chambers of internal combustion engines. The Romanian Invention Patent No. 123482 is protected under international law. Fuel injection is a system for admitting fuel into an internal combustion engine. It has become the primary fuel delivery system used in automotive engines, having replaced carburetors during the 1980s and 1990s. A variety of injection systems have existed since the earliest usage of the internal combustion engine. The authors propose a new concept of a Direct Distributed Injection System of Fuel for combustion engines. The fuel injection systems that exist and are deployed in practice have an essential shortcoming: the fuel is injected inside the cylinder using a single injector which, regardless of its complexity, being placed in a central position, cannot fill the combustion chamber completely; since the fuel droplets leave from a single central point, they cannot collide with one another, so the fuel-air mixing rates are lower and the dimensions of the fuel droplets remain relatively coarse. The invention proposes a shift of the injection paradigm: instead of using one central injector, an injection system is put in place which leads to better collision conditions of the fuel droplets against each other, resulting in a finer diameter of the fuel droplets, which in turn leads to better combustion conditions.
Keywords: Fuel Injection; Internal Combustion Engines; Direct Distributed Injection System; Injection Plate; Numerical Simulation; Invention Patent
1. INTRODUCTION/HISTORY
Fuel injection is a system for admitting fuel into an internal combustion engine. It has become the primary fuel delivery system used in automotive engines, having replaced carburetors during the 1980s and 1990s. A variety of injection systems have existed since the earliest usage of the internal combustion engine. The primary difference between carburetors and fuel injection is that fuel injection atomizes the fuel by forcibly pumping it through a small nozzle under high pressure, while a carburetor relies on suction created by intake air accelerated through a Venturi tube to draw the fuel into the airstream. Modern fuel injection systems are designed specifically for the type of fuel being used. Some systems are designed for multiple grades of fuel (using sensors to adapt the tuning for the fuel currently used). Most fuel injection systems are for gasoline or diesel applications. The functional objectives for fuel injection systems can vary. All share the central task of supplying fuel to the combustion process, but it is a design decision how a particular system is optimized. There are several competing objectives, such as: power output, fuel efficiency, emissions performance, ability to accommodate alternative fuels, reliability, driveability and smooth operation, initial cost, maintenance cost, diagnostic capability, range of environmental operation, and engine tuning. The modern digital electronic fuel injection system is more capable at optimizing these competing objectives consistently than earlier fuel delivery systems (such as carburetors). Carburetors have the potential to atomize fuel better (see Pogue and Allen Caggiano patents). Operational benefits to the driver of a fuel-injected car include smoother and more dependable engine response during quick throttle transitions, easier and more dependable engine starting, better operation at extremely high or low ambient temperatures, increased maintenance intervals, and increased fuel efficiency. On a more basic level, fuel injection does away with the choke, which on carburetor-equipped vehicles must be operated when starting the engine from cold and then adjusted as the engine warms up. Fuel injection generally increases engine fuel efficiency. With the improved cylinder-to-cylinder fuel distribution of multi-point fuel injection, less fuel is needed for the same power output (when cylinder-to-cylinder distribution varies significantly, some cylinders receive excess fuel as a side effect of ensuring that all cylinders receive sufficient fuel). Exhaust emissions are cleaner because the more precise and accurate fuel metering reduces the concentration of toxic combustion byproducts leaving the engine, and because exhaust cleanup devices such as the catalytic converter can be optimized to operate more efficiently since the exhaust is of consistent and predictable composition.
a.
b.
Figure 2 The injection plate
The figure above shows two versions of the injection plate (3), which can be made in one piece (Fig. 2-a), out of two separate pieces (Fig. 2-b), or assembled from multiple pieces. If made out of multiple pieces, the machining of the fuel injection nozzles (11) is simpler, but problems in ensuring the sealing between the cylinder cover and the crank case are to be expected, whereas if made out of a single circular piece, the machining of the nozzles is more difficult but more effective in terms of sealing. The functioning of the system is simple: the fuel comes via the high-pressure fuel duct (7) to the injection plate (3), which, having the nozzles (11) machined into it, sprays the fuel inside the combustion chamber. A detail of the nozzle (11) penetrating the injection plate (3) is given in the figure below.
Figure 1 3D View and Cross-section of a combustion cylinder with Direct Distributed Injection System
Figure 3 Detail of the nozzle (11) inside the injection plate (3)
Functionally, the position of the nozzle (11) relative to the centre of the combustion chamber is of paramount importance.
Figure 5 Pistons' upper part machined with guiding grooves
As already mentioned, the pistons may have a special design, with the upper part machined with guiding grooves for the fuel jets (Figure 5). These grooves may guide the fuel jets and ensure enough space for the jets to develop, taking into account that at the moment of injection the width of the combustion chamber is very narrow. The proposed injection system does not take into account how and by which means the fuel is pressurized before the injection stage; any electro-mechanical-hydraulic system may be deployed to ensure this function. In plain words, the invention replaces the spray tip of a normal fuel injector with an injection plate, the rest of the injector remaining the same.
3. NUMERICAL SIMULATION
Figure 4 Position of the nozzle (11) directions relative to the centre
If, as in Figure 4-a, the nozzle directions target the centre, this is the arrangement which leads to the most intense collision rates between fuel droplets, which in turn lead to the finest droplet dimensions due to their fragmentation.
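The geometric idea of Figure 4-a can be expressed in a few lines: every nozzle on the circular plate is aimed at the same point on the chamber axis, so opposite jets meet head-on. The sketch below only illustrates this; the nozzle count, plate radius and target depth are assumed values, not dimensions from the patent.

```python
from math import cos, sin, pi, sqrt

def nozzle_directions(n_nozzles, plate_radius_m, centre_depth_m):
    """Unit direction vectors for nozzles placed on a circle of radius
    plate_radius_m in the injection plate, all aimed at a point located
    centre_depth_m below the plate on its axis (the Figure 4-a arrangement).
    The nozzle count, plate radius and target depth are assumed values."""
    dirs = []
    for k in range(n_nozzles):
        phi = 2.0 * pi * k / n_nozzles
        x, y = plate_radius_m * cos(phi), plate_radius_m * sin(phi)
        vx, vy, vz = -x, -y, -centre_depth_m   # vector towards the target point
        norm = sqrt(vx * vx + vy * vy + vz * vz)
        dirs.append((vx / norm, vy / norm, vz / norm))
    return dirs

for d in nozzle_directions(n_nozzles=8, plate_radius_m=0.04, centre_depth_m=0.02):
    print(f"({d[0]:+.3f}, {d[1]:+.3f}, {d[2]:+.3f})")
```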
In order to demonstrate the efficiency of the proposed invention a numerical simulation was conducted following two separate scenarios. The geometry of the combustion chamber is given below, the
Injection plate
Figure 7 Dynamic pressures for the computed scenarios
A maximum of 3.75e4 Pa is reached for the normal injection near the walls of the combustion chamber, while a maximum of 1.64e4 Pa is reached at the centre of the chamber.
Figure 6 Geometry of the combustion chamber
Keeping the geometry and the injection parameters constant, two scenarios were evaluated; the first scenario assumes a normal central injector with nozzles of 0.22 mm diameter and 0.006 m length. The second scenario assumes the injection plate with the same type of nozzles. The software involved was Fluent. The model used 133,339 finite volume cells with 27,326 nodes. The fuel was injected via a plain-orifice atomizer, with 10 particle streams each, the droplets being injected from 0.0 s to 0.0026 s. The calculation followed the evolution of the combustion process from 0.0 s to 0.065 s. The injection flow rate is 0.013 kg/s at a temperature of 350 K. The turbulent dispersion was modelled via the stochastic tracking model with a time scale constant of 0.15. The under-relaxation factors used were 0.3 for pressure, 1 for density, 1 for body forces and 0.7 for momentum. All the other model settings had essentially the standard values; the solver used was the segregated, implicit, unsteady, 3-D one, and the viscous model used was the standard k-epsilon model. The model was iterated until convergence was reached.
3.1 Computed Dynamic and Total Pressure
As seen in the following figure, the dynamic and total pressures developed for the two scenarios are:
Normal injection
Injection plate
Figure 8 Total pressures for the computed scenarios
A maximum of 4.15e4 Pa is reached for the normal injection near the walls of the combustion chamber, while a maximum of 1.37e4 Pa is reached at the centre of the chamber. Judging the pressure distribution 65 ms after the injection start, for the computed scenarios, it is quite visible that the pressure distribution inside the combustion chamber for the normal injection has bigger values with bigger pressure peaks, whereas for the plate
Normal injection
Normal injection
Normal injection
Injection plate
Figure 10 Temperatures for the computed scenarios
3.4 Computed turbulence kinetic energy and turbulence intensity
As is known, the turbulence developed inside the combustion chamber is a key parameter for a good combustion process.
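Turbulent kinetic energy and turbulence intensity are related through the fluctuating velocity. A small sketch of the usual conversion is given below; it uses the peak k values quoted in this section, but the reference velocity is an assumed value, so the resulting percentages are only illustrative and are not meant to reproduce Fluent's reported intensity values.

```python
from math import sqrt

def turbulence_intensity(k_m2s2, u_ref_ms):
    """Turbulence intensity (in %) from the turbulent kinetic energy k,
    using the usual isotropic estimate u' = sqrt(2k/3)."""
    return 100.0 * sqrt(2.0 * k_m2s2 / 3.0) / u_ref_ms

# Peak turbulent kinetic energies reported for the two scenarios; the
# reference velocity of 50 m/s is an assumption for illustration only.
for label, k in [("normal injection", 4.39e3), ("injection plate", 5.69e3)]:
    print(label, f"I ~ {turbulence_intensity(k, u_ref_ms=50.0):.0f} %")
```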
Injection plate
Figure 9 Velocities for the computed scenarios
A maximum of 3.17e2 m/s is reached for the normal injection near the walls of the combustion chamber, while a maximum of 2.96e2 m/s is reached at the centre of the chamber. Once again, one may see that the velocity distribution inside the combustion chamber has a smoother shape.
3.3 Computed Temperatures
The temperature fields developed for the two scenarios are given in Figure 10. A maximum of 3.25e3 K is reached for the normal injection near the walls of the combustion chamber, while a maximum of 3.32e3 K is reached at the centre of the chamber. By now it is quite clear that the injection plate ensures better combustion conditions, since the maximum reached temperatures are bigger and the shape of
Injection plate
Figure 12 Mean mixture fractions for the computed scenarios
As seen in the above figure, the mean mixture fraction fields developed for the two scenarios are: a maximum of 9.31e-1 for the normal injection, reached near the walls of the combustion chamber, and a maximum of 1, reached at the centre of the chamber. The mean mixture fraction defines the reaction rates developed in the combustion process and, as seen, the normal injection provides only 10% of the solution proposed via the invention patent.
5. CONCLUSIONS
Injection plate
Figure 11 Turbulent kinetic energy and turbulence intensity for the computed scenarios
As seen in figure 11, the turbulent kinetic energy and turbulence intensity fields developed for the two scenarios are: a maximum of 4.39e3 m2/s2 for the turbulent kinetic energy and 5.41e3 % for the turbulence intensity for the normal injection, reached near the walls of the combustion chamber, and a maximum of 5.69e3 m2/s2 for the turbulent kinetic energy and 6.11e3 % for the turbulence intensity, reached at the centre of the chamber.
3.5 Computed Mean mixture fraction
This paper has proved, via numerical simulation using the well-known Fluent software, that the proposed patented invention is feasible as a viable solution to improve the combustion conditions inside the combustion chambers of internal combustion engines. The Romanian Invention Patent No. 123482 is protected under international law.
Normal injection
STUDY ON THE EFFECT OF NOISE ON THE PHYSIOLOGICAL ACTIVITY OF THE ROUND GOBY FROM THE BLACK SEA
1 CHITAC VERGIL, 2 PRICOP MIHAIL, 3 ATODIRESEI DINU, 4 PAZARA TIBERIU, 5 PRICOP CODRUTA, 6 DRAGOMIR COPREAN, 7 ONCIU MARIA-TEODORA, 8 RADU MARIUS
1,2,3,4 Mircea cel Batran Naval Academy, Constanta, 5 Constanta Maritime University, 6,7,8 Ovidius University, Romania
ABSTRACT
The Romanian coastal area of the Black Sea presents all types of artificial noise sources (ranging from naval activities to military applications, construction activities, drilling platforms, etc.), with a strong effect on the acoustic sensitivity of hydrobionts. In this paper we present the influence of anthropogenic sound on the physiological activity of the Round goby (Apollonia (Neogobius) melanostomus, Pallas, 1814) kept in cages. The analysis of some biochemical indicators of oxidative stress (superoxide dismutase, catalase, reduced glutathione and malonildialdehide), performed on liver tissue of the Round goby from the Black Sea exposed to different qualities of anthropogenic noise, indicated that in the shallow-water parts of the Black Sea, where goby fishes (with different species, characteristic for each biotope) are the dominant fish species, noises/vibrations with high intensity are harmful for the ecosystem.
Keywords: Apollonia (Neogobius) melanostomus, oxidative stress, noise, spectrogram, physiological activity, goby fishes, Black Sea
1. INTRODUCTION
This paper presents the results of the MUNROM project, part of RoNoMar (Romanian and Norwegian Maritime Project), which had the following objectives: to determine the ambient underwater noise level in the coastal region and in the harbour areas of the Romanian Black Sea coast, and to determine the impact of noise produced by artificial sources on some species characteristic of the NW area of the Black Sea: goby, blue mussel, dolphins. The effect of anthropogenic sounds on fish may vary and depends upon: (1) properties of the sound, such as frequency spectrum, source level, duration, rise and fall times in level, and repetition rate, (2) background noise (masking), (3) sound level, duration and spectrum of the sound as received by the animal, (4) hearing properties of the species (sensitivity, directivity index and critical ratio), and (5) species-specific or individual variation in the reaction to sound [13]. For the reasons mentioned above, extrapolation of the effects of anthropogenic sound upon fish is notoriously difficult, making the results of an experiment on caged fish applicable only to that specific situation. The effects of sound are also known to vary within a species; Popper et al. [15] found that the effect of sound on rainbow trout, tested under the same conditions, varied between groups of fish tested in different years. The reason for this was not entirely clear but may be a result of genetics or of differences in the conditions during the early life history of the fish [20]. For animals living in an aquatic environment, total dissolved oxygen is a particularly important factor for normal physiological processes. Some 2-4% of the total amount of oxygen consumed in metabolic processes is transformed into intermediate chemical species or oxygen free radicals (ROS). Oxygen free radicals are highly complex transient chemical species, able to react
with almost all bio-molecules, altering their biochemical and functional properties. The cellular antioxidant defence system against oxygen free radicals consists of both enzymatic and non-enzymatic substances. The main antioxidant enzymes are superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase, while the non-enzymatic antioxidant defence system is represented by vitamins A, E and C, reduced glutathione (GSH), Coenzyme Q10, bilirubin, uric acid and β-carotene, [9], [10], [18]. There is a close balance between the concentration of oxygen free radicals and the substances involved in the cellular defence system. If the antioxidant system is overwhelmed, the phenomenon of oxidative stress is observed. Thus, in biological systems, oxidative stress is characterized by the increase of the concentration of oxidizing chemical species, the decrease of low-molecular-weight antioxidants, frequent redox-balance disturbances and some injuries of bio-membranes, proved by the increase of the malonildialdehide (MDA) concentration [1], [11].
2. MATERIALS AND METHODS
In order to study the influence of vibrations (as a source of noise) on the physiological activities of fishes, healthy individuals of the Round goby Apollonia (Neogobius) melanostomus Pallas 1814 were collected from the southern rocky shore of the Romanian littoral and transported in 100 l barrels filled with sea water at 16 °C (Figure 1).
Figure 1 Preparing the fish for transport
The experiment took place around the maritime station where the ship "Noordkaap" was moored (Figure 2); three experimental cages were carried on board.
Figure 4 The Nordkaap ship and the cages location: 1, 2, 3 - the cages with the experimental groups; 4, 5 - hydrophones; 6 - the source of noise
The experiment was conducted between 8th and 18th October; the sea was calm, and there were no rains or strong winds. The water temperature at the surface varied between 15.5 °C and 18 °C (average value 16.9 °C). The equipment used in the experiment consisted of two type 8106 hydrophones (general purpose transducers with a frequency range from 7 Hz to 80 kHz) (Bruel & Kjaer), a LAN-XI acquisition system (Bruel & Kjaer), and a laptop with the Pulse 14 software installed. As a source of underwater noise we used a hydro-pneumatic mechanism based on an air compressor (11 bar), with a fundamental frequency of 61-90 Hz, located near EG-2 at 0.5 m above the top of the cage. This mechanism was used 8 hours a day, in successive steps of one hour of emission and two hours of break. The hydrophones were placed 1 m away from the cage, on the side. The source of noise had to be placed near cage no. 2, because the sound was quickly attenuated: the first hydrophone (located near the source) recorded a higher level of noise compared to the second hydrophone (located 5-6 m away from the source). The difference between the two hydrophones ranged from 6 dB to 9 dB, depending on the variation of the air pressure in the compressor tank (when the air pressure in the tank drops below 6 bar, its engine starts automatically, increasing the pressure to 11 bar). After a period of 72 hours from the beginning of the experiment, the fish were sacrificed for analysis; their liver tissue was examined in order to determine some metabolic parameters of oxidative stress: the superoxide dismutase activity, the catalase activity, the reduced glutathione (GSH) concentration and the protein concentration. In determining the metabolic oxidative stress parameters we used the following methods: superoxide dismutase (SOD) - method based on its ability to inhibit the reduction of the tetrazolium salt Nitro Blue Tetrazolium (NBT) by superoxide radicals [19]; the activity of catalase (CAT) - kinetic method based on the decomposition of the hydrogen peroxide radicals existing in the reaction medium, as a result of catalase
Figure 2 The site of the experiment (www.joie.ro/2010/07/gara-maritima-constanta/)
The cages were made of two kinds of fishing net: one, with a mesh of 12 mm, was used to build the side walls of a one-metre cube. The same material was used to make the upper wall, which has a trap that allows access into the cage. Fishing net with a mesh of 7 mm was used to make the bottom of the cage (to allow mussel attachment). The cage walls were reinforced with 7 mm diameter metal rods (Figure 3).
Figure 3 Structure of the cages
Thirty fish were placed in each cage with the help of a fish landing net; the fish were given their favourite food [3]: live mussels and chunks of fish meat and beef. The cages were sunk in the water column, 12 m deep, one metre above the seabed. A cage containing the control group (GC) was located on the starboard side of
XES n XES
4.99 0.52 5 0.70 0.007 6 8.04 0.001 -85.97 4.25 0.17 5 1.33 NS > 0.05 -14.82 2.37 0.57 6 3.35 0.01
2.01 0.33 6 0.31 0.007 6 5.09 0.001 -84.57 1.66 0.12 6 1.00 NS >0.05
n t p +/M% XES
During the whole experimental period, the control group (CG) was also affected by the ambient noise generated by the traffic in the harbour. The levels of the ambient noise are relatively constant, between 121 and 123 dB re 1 µPa (Figure 5). The ten days of captivity with moderate underwater noise led to the installation of oxidative stress, expressed by the accentuated increase of the concentration of the antioxidant enzymes superoxide dismutase (SOD) and catalase (CAT) and also of reduced glutathione (GSH), compared with the values obtained from the analyses performed on animals collected directly from the sea. This is not associated with membrane lipid peroxidation: the parameter that indicates the degree of lipid peroxidation, malonildialdehide (MDA), remained almost unchanged in this first period compared with the test organisms sacrificed at the beginning of the experiment (Table 1).
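The tables report group means with their standard errors together with a Student t value, a significance threshold p and a relative change M%. One common way such a comparison can be computed from summary data is sketched below, with made-up numbers rather than values taken from the tables.

```python
from math import sqrt

def t_and_percent_change(mean_ref, se_ref, mean_exp, se_exp):
    """Student-type t statistic for two group means given their standard
    errors, plus the relative change of the experimental group versus the
    reference (the +/- M% column of the tables)."""
    t = (mean_exp - mean_ref) / sqrt(se_ref**2 + se_exp**2)
    percent = 100.0 * (mean_exp - mean_ref) / mean_ref
    return t, percent

# Made-up means +/- standard errors for a reference and an exposed group.
t, pct = t_and_percent_change(mean_ref=5.0, se_ref=0.5, mean_exp=2.4, se_exp=0.6)
print(f"t = {t:.2f}, change = {pct:+.1f} %")
```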
n t p +/M% XES
-17.41 -65.43 -28.35 0.99 0.10 5 2.91 0.01 1.56 0.06 6 10.29 0.001 1.02 0.10 5 7.24 0.001
n t p +/- M%: -52.50 -103.0 -76.03 +52.2
LEGEND: X ± ES - arithmetic mean ± standard error; n - the number of individual values which lead to the average; t - Student's 't' test; p - threshold of statistical significance; NS - statistically insignificant.
Between 10th and 12th October, the experimental groups of gobies were exposed to the noise from the source (the hydro-pneumatic mechanism) with a rhythm of one hour of vibrations and two hours of silence. The noise levels were in the range from 157 dB re 1 µPa to 163 dB re 1 µPa. After the first 72-hour period of exposure to vibrations, the superoxide dismutase (SOD) activity decreased as a result of the increased concentration of oxygen free radicals in the experimental groups held near the noise source (EG 2) (-30.46% vs. M, p < 0.05) and at the bow (EG 1) (-56.91% vs. M, p < 0.001). Catalase (CAT) acts on the substrate generated by superoxide dismutase; therefore there is a latency period between the moment of activation of superoxide dismutase and that of catalase, as confirmed by the scientific literature [2], [12] (Table 2).
Table 2 Mean values of superoxide dismutase (SOD), catalase (CAT), reduced glutathione (GSH) and malonildialdehide (MDA) in the liver after a rhythmical exposure of Round gobies to noises ranging between 157 dB re 1 µPa and 163 dB re 1 µPa during a 72-hour period
Experimental group | Statistics index | SOD, CAT (EU·mg of proteins-1) | GSH (mcg·mg of proteins-1) | MDA (nmol·mg of proteins-1)
Figure 5. Spectrogram of ambient underwater noise in the control group area
Table 1 Mean values of superoxide dismutase (SOD), catalase (CAT), reduced glutathione (GSH) and malonildialdehide (MDA) in the liver after 10 days of exposure of Round gobies to vibrations of 121-123 dB re 1 µPa
SOD | CAT | GSH (mcg·mg of proteins-1) | MDA (nmol·mg of proteins-1)
Experimental group
Statistics index
EUmg of proteins-1
XES n XES
EG 1 (15.10. 2010)
-56.91 -53.73 -50.69 3.27 0.34 6 2.38 0.05 1.56 0.17 6 1.55 NS >0.05 2.73 0.20 6 2.51 0.05 58.06 2.62 0.28 6 3.57 0.05
EG 2 (15.10. 2010)
-30.46 -22.38 4.79 0.20 6 1.76 NS >0.05 -2.00 1.78 0.18 6 2.09 NS >0.05
EG 3 (15.10. 2010)
n t p +/M%
-11,44 -59.75
The catalase (CAT) activity in our experimental model is reduced only in experimental group 1 (EG 3) (-53.73% vs. M, p < 0.001). The reduced glutathione (GSH) level is dependent on the concentration of oxygen free radicals, so when there is a high concentration of oxygen free radicals, the GSH level decreases. In our experimental model, the level of GSH decreases as a response to changes in the normal concentrations of ROS, as presented for EG-2 (-58.06% vs. M, p < 0.05), EG-3 (-59.75% vs. M, p < 0.05) and EG-1 (-50.69% vs. M, p < 0.01). Malonildialdehide is an indicator of membrane lipid peroxidation; its levels increase only if the cell membrane suffers changes [16], [17]. It should be noted that in our experimental model all three experimental groups recorded increases in the level of malonildialdehide, especially EG-1 (Table 2). Summarizing, the exposure of goby fishes to vibrations ranging between 157 dB re 1 µPa and 163 dB re 1 µPa during one hour, followed by two hours of silence, a rhythm reiterated over a three-day period, was harmful for the fishes in the proximity of the noise source; the oxidative stress was observed especially in EG-1, where the very next day all specimens from the cage were dead. Between 13th and 15th October (the last 72-hour period), the rhythm of exposure of the fishes to vibration was modified. The Diesel generators of the ship were used as noise-producing sources. On the 14th, after measuring the ambient noise, one Diesel generator was turned on and ran for approximately 25 minutes. Then, the second Diesel generator was turned on and we
Figure 6 Spectrogram of the noise measurements: a = with one generator (139 dB re 1 µPa); b = with two generators (145 dB re 1 µPa); c = with two generators and the noise source (159 dB re 1 µPa)
During the last 48 hours, the noise source was set to generate noise of increasing level. Also, an underwater video camera was placed in front of the cage with experimental group 2 (EG 2). The movement of the gobies was recorded during those trials. The levels ranged from 156 dB re 1 µPa to 167 dB re 1 µPa. The superoxide dismutase (SOD) activity decreased significantly in both experimental groups (EG 2 and EG 3) (Table 3). This suggests that the oxygen free radical concentration exceeded the reference value
CAT
EUmg of proteins-1 2.37 0.57 6 2.76 0.16 6 5.52 0.01 0.99 0.10 5 0.93 0.15 6 5.90 0.001
XES n XES
EG 2 (18.10. 2010)
-44.68 -53.73 -68.66 +192.53 2.65 0.19 6 6.12 0.001 1.63 0.03 5 4.56 0.05 1.76 0.15 6 8.43 0.001 1.34 0.13 6 7.34 0.01
EG 3 (18.10. 2010)
This suggests the following: in the two experimental groups, the hydrogen peroxide radical concentration increased beyond the capacity of the enzyme, which is clearly overwhelmed and no longer able to fulfil its biochemical role, and this will ultimately lead to the installation of the phenomenon of oxidative stress. Regarding the level of reduced glutathione (GSH), in our experimental model there is a visible decrease of this bio-peptide in both experimental groups (EG 2 and EG 3) (Table 3). Reduced glutathione (GSH) is consumed in the process of annihilation of the oxygen free radicals resulting from the cellular metabolic processes [12]. In this experimental model, the oxidative stress is also expressed by the increase of malonildialdehide (MDA), showing a biochemical and functional alteration of the cell membranes (Table 3). Round gobies are free-swimming, are able to produce some sounds during the reproductive season and are
The analysis of some biochemical indicators of oxidative stress (superoxide dismutase, catalase, reduced glutathione and malonildialdehide), performed on liver tissue of the Round goby from the Black Sea, Apollonia (Neogobius) melanostomus Pallas, 1814, exposed to different qualities of noise, leads to the following conclusions:
- In Constanta harbour, the ambient noise ranged between 121 and 123 dB re 1 µPa and it had adverse effects on the captive gobies, expressed by the values of the biochemical indicators of oxidative stress in liver tissue, except for the concentration of malonildialdehide, which indicates that cellular splitting did not occur;
- The increase of the vibration intensity (157 to 163 dB re 1 µPa) with a constant rhythm of exposure (one hour of activity of a hydro-pneumatic mechanism, two hours of pause) determined the installation of oxidative stress after 72 hours; this is confirmed by the decrease of the concentrations of superoxide dismutase, catalase and reduced glutathione as a result of the increased concentration of oxygen free radicals generated by stress; some captive fishes situated in the proximity of the noise source died after an exposure of 72 hours (EG-1).
- The exposure of Round gobies to increased and varying vibration intensity (156 dB re 1 µPa to 167 dB re 1 µPa) for another period of 72 hours induced severe oxidative stress, confirmed by the values of the specific biochemical indicators determined in the fish's liver, especially the concentration of malonildialdehide, which showed that cellular lysis also appeared.
5. REFERENCES
[1] ALLEN, R. G., LAMB, G. D., WESTERBLAD, H., Skeletal muscle fatigue: cellular mechanisms, Physiol. Rev. 88: 287-332; as a pollution-mediated mechanism of toxicity in the common mussel, Mytilus edulis L. and other molluscs, Funct. Ecol. 4, 2008: 415-424.
[2] BABICH, H., PALACE, M. R., STERN, A., Oxidative Stress in Fish Cells: in Vitro Studies, Arch. Environ. Contam. Toxicol. 24, 1993: 173-178.
[3] BANARESCU, P., Pisces-Osteichthyes, Fauna R.P.R., Editura Academiei RPR, Bucuresti, 13, 1964: 998 p.
[4] BEERS, R. F., SIZER, I. W., A Spectrophotometric Method for Measuring the Breakdown of H2O2 by Catalase, J. Biol. Chem. 195, 1952: 133-140.
1. INTRODUCTION
As is well known, water is an important aggressive agent with major negative effects on the walls or construction elements it comes in contact with. Porous construction materials (bricks, mortars, wall connection means) absorb water. In this way, moisture crumbles concrete and might ruin plaster. The most important negative effects of moisture are: corrosion of the reinforcement and of the pieces embedded in concrete, which leads to the reduction and even complete loss of the mechanical characteristics of the elements; degradation caused by freeze-thaw cycles; changes in the physical-mechanical characteristics of materials; reduction of thermal capacity; appearance of dampness. Capillary attraction allows water to flow in all directions, creating a blotter effect. The capillary attraction action depends on the surface tension of water, the hydrophilic surface of the construction material and the thickness of the porous material. Besides liquid water, water vapour can also pass through porous materials by diffusion. To eliminate moisture from building walls, several methods are used: chemical methods, electro-osmotic methods and physical methods.
2. WATERPROOFING CHEMICAL WAYS OF INTERIOR WALLS AND BUILDINGS FRONT SIDES
The waterproofing of interior and exterior walls must be dealt with in different ways. Exterior insulation is considered the best, but is restrictive. It is estimated that 60% of insulations are made on the inside. Generally, four ways of chemical waterproofing of the inside walls with different chemical substances are known, as follows: a. with silicone hydrophobic substances; b. with pore-blocking substances (paraffin, resin); c. with sodium silicate; d. with a gel that occupies the capillaries (launched by REMMERS - Germany). In what follows, we present some aspects of the chemical waterproofing of interior walls and buildings front sides.
3.
Silicones are polymeric compounds in which silicon atoms are connected through interleaved oxygen atoms, and the silicon valences not bound to oxygen are saturated with at least one organic radical.
The R radical is most often a methyl group (CH3). Depending on the number and nature of the organic substituents, liquid silicones, resins and rubber-like types are obtained. They are obtained from organo-silicon compounds by hydrolysis with water. Silicones occupy an intermediate position between inorganic and organic compounds, especially between silicates and organic polymers [3], [4]. Their thermal stability, together with other properties, leads to the use of silicones in many domains. The film-forming capacity, hydrophobicity and separating effect are the most interesting properties of silicones, which recommend them for waterproofing and for paper/pasteboard treatment. The film-forming capacity is explained by the low surface tension of silicones. The surface film is created more easily when the film-forming agent and the substrate (construction material, paper, etc.) are compatible [5]. Silicones easily create films on most solid bodies, penetrating even the least accessible places, with narrow slits and pores. Hydrophobic behaviour is a general characteristic of all silicones. A water drop cannot spread on a silicone-treated surface. Because of this, silicones are used especially to render surfaces hydrophobic.
Table 1 presents Baysilon products for conferring hydrophobicity to construction materials. Table 2 presents the characteristics of the RHODORSIL SILICONATE 51 product, which is an aqueous solution of potassium methyl-siliconate.
Table 2
Characteristics - Values
Dry substance content, % - approx. 47
Active substance, % - approx. 28
Density at 25 °C, g/cm3 - 1.34
pH - approx. 13
Coagulation point, °C - below -20
Thinner - water
3.2 Interior walls waterproofing
The holes through which the product is injected must be drilled at 10 or 20 cm from the ground. The holes must have a diameter of 10 to 16 mm (preferably 12 mm), and the distance between them must be from 10 to 20 mm (preferably 12 mm).
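As a quick planning aid, the depth of the holes and the quantity of product for one wall can be estimated from the rules given in this section (a hole depth of 2/3 of the wall thickness and a consumption of 4-20 l of solution per linear metre, both quoted further below). The wall dimensions and the mid-range consumption used in the sketch are assumed values, so the output is illustrative only.

```python
def injection_estimate(wall_length_m, wall_thickness_m, consumption_l_per_m=10.0):
    """Estimate hole depth and agent quantity for the injection treatment of
    one wall: the depth follows the 2/3-of-thickness rule given later in
    section 3.2, and the consumption per linear metre is an assumed mid-range
    value from the 4-20 l interval quoted there."""
    hole_depth_m = 2.0 / 3.0 * wall_thickness_m
    agent_litres = consumption_l_per_m * wall_length_m
    return hole_depth_m, agent_litres

# Hypothetical 6 m long, 40 cm thick wall.
depth, litres = injection_estimate(wall_length_m=6.0, wall_thickness_m=0.40)
print(f"hole depth about {depth*100:.0f} cm, roughly {litres:.0f} l of solution")
```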
Appliance: low-porosity materials; low-porosity materials; porous materials; porous materials
100-200; 100-200; 200-600; 200-600; 200-600; 200-600
Figure 4 Interior walls waterproofing
Hole depth: 2/3 of the wall thickness; the drilling angle must vary between 30 and 45°. The waterproofing agent can be introduced gravitationally (preferably) or by injection under low pressure (0.5-10 bar), and then the holes must be closed up with a special primer. Injection continues until the support is saturated, which is indicated by the waterproofing agent seeping out of the support. Depending on the wall thickness, the waterproofing agent is injected in different ways, as follows: a. for 30 cm walls: injection from one side, at one depth; b. for 40 cm walls: injection from one side, but at two depths (at 1/3 and at 2/3 of the wall thickness); c. for walls thicker than 40 cm: injection from two sides, using the method shown for 40 cm walls. Depending on the structure and nature of the wall, the consumption of RHODORSIL silicone varies between 4 and 20 l of solution per linear metre of wall.
3.3 Chemical waterproofing of the front sides
Often the rain can penetrate the exterior walls. By impregnating the front sides with silicone resins or fluorochemicals, this problem can be easily fixed. The unaesthetic consequences of dampness (such as mould attacks) and severe frost damage can also be completely eliminated after front side impregnation with silicone resins or fluorochemicals.
Table 4 Fluorochemical application mode (ZONYL products)
Figure 5 Chemical waterproofing of the front sides
The pores and capillaries of the construction materials are covered with the waterproofing agent as an extremely thin, almost invisible film. In this way, the permeability to air, gases and water vapour is preserved.
Table 5 Silicone agents application mode for front sides waterproofing
Assortment (ZONYL): 210; 225; 8740; 321; 9027; 329
Assortment: Baysilon S; Baysilon LN
Thinner: gasoline or another hydrocarbon with a boiling point of 140-150 °C; water
Dilution proportion
Applicable to: all natural and artificial construction materials, strongly or weakly absorbent, regardless of colour; it can be applied once or twice on very moist surfaces.
Impregnation times
Table 5 presents a series of application ways for the Baysilon-type waterproofing silicone agents. The most adequate way of applying the impregnation solution is by intense spraying. Construction materials such as facing bricks cannot be impregnated. The PROTESIL silicone product is suitable for waterproofing natural stone, plaster, exposed masonry and exposed concrete; it is applied by spraying or with a brush or roller, in 1-2 layers. PROTESIL consumption is 0.2-0.4 l/m2, depending on the surface absorption. Another silicone product used for waterproofing front sides is DRYFILL, which is applied without dilution in 2 layers and has a coverage of 3-7 m2/l, depending on the surface quality. A surface protection of 12-15 years is guaranteed. The FUNOOSIL product is used for the hydrophobic protection of concrete in the transport domain (streets, bridges, support and noise-protection walls, guide bars, multi-storey car parks, parking surfaces). For concrete, the specific consumption is 0.3-0.5 l/m2, and for fresh concrete it is 1.0 l/m2.

4. WATERPROOFING SILICONE AGENTS USED ON PLASTER AND IN APPLICATIONS UNDER DISPERSION PAINTS

The pleasant aspect and the durability of coatings made with dispersion paints can suffer heavily in the presence of moisture. If fissures appear, a painted exterior wall can be degraded by atmospheric rainfall. The same phenomenon can take place as the coating ages. As water penetrates, bubbles appear at the coating surface, the crust peels off or the surface becomes covered with efflorescences. These early damages can be avoided if the exterior walls are rendered hydrophobic with a silicone waterproofing agent before the paint is applied. As a hydrophobic treatment applied before painting with dispersion paints, the Baysilon LN agent is especially recommended, the procedure being the same as for the impregnation of exterior walls. Subsequently covering the silicone-impregnated surface with dispersion paints is not difficult if the first coat is made with an undiluted paint. By adding water (one part water to 2-3 parts paint) a faultless coat can be obtained.
The waterproofing of interior walls and front sides is a particular problem that can be solved by chemical, electro-osmotic or physical methods. The use of silicone agents in constructions is possible because there is chemical compatibility between the silicates in the construction materials and the silicones. Chemical waterproofing is an efficient solution and easy to apply. The specific consumptions of the different waterproofing agents were presented above; the waterproofing efficiency is conditioned by the nature and the chemical composition of the products.

7. REFERENCES
[1] CAMPIAN C., MOGA C., STREZA T., Constructions Magazine nr. 2 (68)/2005
[2] NOLL W., Chemie und Technologie der Silicone, Second Edition, Verlag Chemie, Weinheim, 1968
[3] PROCA G., Constructions silicons, Ed. Matrix Rom, Bucharest, 1999
[4] Du Pont, ZONYL Surface Protection Solution - make everyday easy
[5] CHERCHES M., PUTINA P., Constructions Magazine nr. 16 (68)/2006
THE STUDY OF NAVAL POWER PLANT: EXPENSES INCURRED BY THE SHIP AFTER VOYAGES MADE
LUPCHIAN MARIANA
"Dunarea de Jos" University of Galati, Romania ABSTRACT This paper presents the analysis of transport costs for oil tanker at various operating regimes. Are determined annual maintenance and exploitation for a ship. During a voyage, the ship is navigational several situations and main engine and auxiliary machinery does not always work the same load. Keywords: engine power, oil-tanker, cost of transport, ballast, full load.
1. INTRODUCTION
Current naval propulsion plants are composed of compression-ignition engines. The operating regimes of these power plants are quasi-stationary, given the static mechanical characteristics of the propulsion engine, of the power transmission, of the power consumers and of the control mechanisms in their positions. The main engine of a ship is the main consumer of fuel and the major producer of energy on board. In order to improve the economy of the ship propulsion installation, marine main diesel engines are run on heavy fuel oil with a high sulphur content, especially the slow-speed and semi-rapid diesel engines [2]. The ship's propulsion system must function safely and with minimized expenses, so that the specific cost of transport is as small as possible. To evaluate the amount of these expenses, one must keep in mind that during a voyage the ship passes through a variety of navigational situations, and the main engine and auxiliary machinery do not work all the time at the same load.

2. CASE STUDY
Nine operating modes of the ship are analyzed (ballast and full load). The specific (annual) cost of transport is [1]:

C_ST,an = Ch_an,totale / (G · 2R · N_v)   [/tMm]   (1)

where:
- Ch_an,totale = f(v, Rt) - annual maintenance and operating expenses;
- G [t] - quantity of products shipped per year;
- R [Mm] - distance between the route extremities;
- N_v - number of annual trips, N_v = f(v, R, G); N_v = f(v).

The minimum specific cost of transport, C_ST,an = 0.002583 [/tMm], resulted for the mode in which the ship sails with the speed v_b = 11 kn and engine speed n_b = 80.865 rpm in ballast and v_m = 14.87 kn, n_m = 121.87 rpm at full load, making a total of 10.140 voyages per year; the total annual cost of transport is Ch_an,total = 8,511,407.357 [/year] (Figure 1).
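As an illustration of relation (1), the short Python sketch below recomputes the specific cost of transport from the quantities defined above. The reported totals (annual cost and number of voyages) are taken from the text, but the cargo quantity and route distance are not given in the recovered text, so the values used for them are assumptions for the example only.

def specific_transport_cost(ch_annual_total, cargo_t, route_distance_mm, voyages_per_year):
    """Relation (1): specific annual transport cost per tonne and nautical mile.

    ch_annual_total   - annual maintenance and operating expenses
    cargo_t           - G, quantity of products shipped [t]
    route_distance_mm - R, distance between the route extremities [Mm]
    voyages_per_year  - Nv, number of annual trips
    """
    return ch_annual_total / (cargo_t * 2.0 * route_distance_mm * voyages_per_year)


# Example: reported totals plus assumed cargo and route values.
ch_total = 8_511_407.357   # annual maintenance and operating expenses (reported)
G = 37_000.0               # assumed cargo quantity [t] (the ship's deadweight, used as a placeholder)
R = 4_300.0                # assumed route distance [Mm]
Nv = 10.140                # voyages per year (reported)

print(f"C_ST,an = {specific_transport_cost(ch_total, G, R, Nv):.6f} per t*Mm")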
The oil tanker is fitted with a single propeller, the propulsion of the vessel being provided by a 6-cylinder MAN B&W diesel engine with an output of 9480 kW at 127 rpm; the deadweight in sea water is 37,000 tdw [3]. The ship is equipped with three diesel generators of 960 kW each, running at 900 rpm. The crew consists of 31 persons. It should be noted that after some time in service, fouling builds up on the ship hull, which makes the propeller hydrodynamically heavier and requires extra power for ship propulsion [3]. Current marine compression-ignition engines operate at variable power and speed regimes, which entail changes of the indicated and effective parameters that characterize the operating mode of the engine. The operating mode of the propulsion engine depends on: the type of ship, the navigational conditions, the hull design, the propeller type and the mode of transmitting the power from the engine to the propeller.
Figure 1 Specific annual transport cost

Figure 2 presents the specific cost of transport (per voyage and per year) and the total annual maintenance and operating expenses for the mode in which the ship sails with the speed v_b = 15 kn, n_b = 119.5 rpm (ballast) and v_m = 15 kn, n_m = 123.120 rpm (full load).
Figure 2 Specific variation of the cost of transport and total annual maintenance and operating cost for the ship sailing at v = 15 kn

Figure 3 The share of costs: 1 - Chsal; 2 - ChCAS; 3 - ChPs; 4 - Chamort; 5 - Chrep; 6 - Chwater; 7 - ChSuez; 8 - ChCL; 9 - Chasig; 10 - Chcom; 11 - Chdiv; 12 - Chalte ch

Fuel and lubricant expenses represent the largest share (29.28%) of the total cost of transportation.

3. CONCLUSIONS

The annual operating and maintenance costs are: Chsal = 8.27%; ChPs = 0.12%; ChCAS = 2.48%; Chamort = 18.80%; ChSuez = 28.21%; Chrep = 7.05%; ChCL = 29.28%; Chapa = 0.01%; Chasig = 2.35%.

4. REFERENCES
[1] SIMIONOV, M., Instalatii de propulsie navala, Galati University Press, 2009.
[2] LUPCHIAN, MARIANA, The profit made by a oil tanker after a voyage, CIEI 2011, Proceedings of the International Conference on Industrial Power Engineering, 8th Edition, Vasile Alecsandri University of Bacau, Alma Mater Publishing House, Bacau, Romania, April 14-25, ISSN-L 2069-9905, Bacau, 2011.
[3] LUPCHIAN, MARIANA, Determination of optimum operating regime for a naval power plant based on minimum fuel consumption, Conference New Face of TMCR, Proceedings of the 16th International Conference Modern Technologies, Quality and Innovation, volume I, ModTech, Iasi, 2012.
[4] MAN B&W Diesel A/S, S50MC-C Project Guide, 6th Edition, January 2009.
[5] MC Programme Engine Selection Guide, MAN B&W Diesel A/S Two-stroke Engines, 2nd Edition, February 1992.
1. INTRODUCTION
The problem of approximations is inherent in any study, and a high level of awareness in this matter is paramount for the accuracy of the results. It is important to notice that a non-trivial approach to this problem should start by noting the several levels which operate with concepts based on approximations. Let us consider three levels: philosophy, theory and method. Before considering the philosophical level, which may be defined as the rational investigation of truths and principles, it is easier to approach the levels of theory and method. In this way, the results may be used to access a higher level of understanding and a possible generalization from a philosophical, inter-domain standpoint. The essay emphasizes some of the approximations employed in the theories and in the methods currently used in structural studies.

2. THEORY-LEVEL APPROXIMATIONS
The basic 'building blocks' of a theory are the hypotheses, which state a certain behaviour of a given aspect of the phenomena. Let us analyse the way the hypotheses influence the definition and computation of the strains in the Theory of Elasticity. The displacement functions with respect to the axes of coordinates are:

u = u(x, y, z);  v = v(x, y, z);  w = w(x, y, z)   (1)

The complete (nonlinear) definitions of the strains are:

εx = ∂u/∂x + (1/2)·[(∂u/∂x)² + (∂v/∂x)² + (∂w/∂x)²]
εy = ∂v/∂y + (1/2)·[(∂u/∂y)² + (∂v/∂y)² + (∂w/∂y)²]   (2)
εz = ∂w/∂z + (1/2)·[(∂u/∂z)² + (∂v/∂z)² + (∂w/∂z)²]

γyz = ∂v/∂z + ∂w/∂y + (∂u/∂y)(∂u/∂z) + (∂v/∂y)(∂v/∂z) + (∂w/∂y)(∂w/∂z)
γzx = ∂w/∂x + ∂u/∂z + (∂u/∂z)(∂u/∂x) + (∂v/∂z)(∂v/∂x) + (∂w/∂z)(∂w/∂x)   (3)
γxy = ∂u/∂y + ∂v/∂x + (∂u/∂x)(∂u/∂y) + (∂v/∂x)(∂v/∂y) + (∂w/∂x)(∂w/∂y)

By the use of the small deflections hypothesis, the nonlinear terms may be neglected and the above definitions may be approximated as:

εx = ∂u/∂x;  εy = ∂v/∂y;  εz = ∂w/∂z   (4)

respectively

γyz = ∂v/∂z + ∂w/∂y;  γzx = ∂w/∂x + ∂u/∂z;  γxy = ∂u/∂y + ∂v/∂x   (5)

Definitions (4) and (5) are widely used for most common problems, including in Strength of Materials. Even so, there are geometrical aspects regarding the definition of the strains which add more accuracy to this approach. In order to evaluate the rationality of some approximations, the magnitudes of the strains must be evaluated; they will be used in the following proof. According to Hooke's law we have

σ = E·ε;  τ = G·γ   (6)

and the corresponding strains are

ε = σ/E;  γ = τ/G   (7)

By considering the maximum values of the allowable stresses and the minimum values of the corresponding moduli of elasticity in their common ranges, one can calculate the maximum values of the strains. For the current materials the values of the strains are less than 0.004. For steel one can consider

ε = 250 MPa / (2·10^5 MPa) = 1.25·10^-3;  γ = 150 MPa / (0.8·10^5 MPa) = 1.875·10^-3   (8)
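A quick numerical check of (8), and of the size of the quadratic terms neglected in (2)-(3), can be done as in the Python sketch below; it simply uses the allowable-stress and modulus values quoted above.

# Maximum strains for steel, relation (8), and the relative size of the
# quadratic terms neglected in the strain definitions (2)-(3).
sigma_allow = 250e6      # allowable normal stress [Pa]
tau_allow   = 150e6      # allowable shear stress [Pa]
E = 2.0e11               # Young's modulus [Pa]
G = 0.8e11               # shear modulus [Pa]

eps   = sigma_allow / E  # 1.25e-3
gamma = tau_allow / G    # 1.875e-3

# If the displacement gradients are of the order of the strains themselves,
# the quadratic terms are of order eps**2, i.e. roughly eps times smaller.
print(f"eps = {eps:.3e}, gamma = {gamma:.3e}")
print(f"quadratic term ~ eps^2 = {eps**2:.3e} "
      f"({eps**2 / eps:.2%} of the linear term)")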
The shear strain may be defined using the scheme presented in figure 1 and relation (9):

γyz = αyz + αzy   (9)

The angles αyz and αzy are even smaller than the maximum value of the angular strain γyz. This is why such a small angle may be replaced either by its tangent or by its sine. It is interesting to evaluate how small the angle must be in order to have a relatively accurate approximation. Calculating the values of the sine and tangent functions, one can notice that the overall relative error is smaller than 1% if the angle is smaller than 10°, that is

α < 10°   (10)

Condition (10) is respected for the current values of (8), that is for the fair conditions of the so-called small deflections hypothesis. With the notations of figure 1, the two angles are

αyz ≈ tg(αyz) = B'B''/AB';  αzy ≈ tg(αzy) = C'C''/AC'   (11)

Replacing B'B'', AB', C'C'' and AC' as they are expressed in figure 1, it results

αyz = (∂w/∂y)·dy / (dy + (∂v/∂y)·dy) = (∂w/∂y) / (1 + ∂v/∂y)
αzy = (∂v/∂z)·dz / (dz + (∂w/∂z)·dz) = (∂v/∂z) / (1 + ∂w/∂z)   (12)

In the expressions above there may be identified the linear strains εy and εz, as they are presented in (4), so that

γyz = (∂w/∂y)/(1 + εy) + (∂v/∂z)/(1 + εz)   (13)

εy and εz are much smaller than 1, so they may be neglected in (13), and it results

αyz ≈ ∂w/∂y;  αzy ≈ ∂v/∂z   (14)

The corresponding shear strain is

γyz = αyz + αzy = ∂w/∂y + ∂v/∂z   (15)

a relation already presented in (5). The only technical literature source where expression (13) was found is [6], without any discussion regarding the size of the linear strains εy and εz.

Apart from the definitions, let us consider the analytical relations employed to compute the strains for a rotated coordinate system. Segment AB, figure 2, is translated so as to obtain A'B'. In order to define the components of the strain, additional geometric constructions are made in figure 2 (the segments B'C and B'C' related to the OY and OZ axes, together with CC' and B'D drawn with respect to A'B'). The components of the strain related to the point B' are: with respect to the Y axis, B'C; with respect to the Z axis, B'C', where

B'C  = (∂v/∂y)·dy + (∂v/∂z)·dz   (16)
B'C' = (∂w/∂y)·dy + (∂w/∂z)·dz   (17)
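The 1% threshold quoted for condition (10) can be verified numerically; the short sketch below simply compares an angle with its sine and tangent for a few values.

import math

# Relative error made when a small angle (in radians) is replaced by its
# sine or by its tangent; condition (10) states it stays below about 1%
# for angles smaller than 10 degrees.
for deg in (1, 5, 10, 15):
    a = math.radians(deg)
    err_sin = abs(math.sin(a) - a) / a
    err_tan = abs(math.tan(a) - a) / a
    print(f"{deg:>2} deg: sin error = {err_sin:.3%}, tan error = {err_tan:.3%}")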
By projecting these components onto the direction of A'B' and onto its normal, using the geometric constructions of figure 2, relations (19)-(32) are obtained; in these relations the additional rotation angle, being very small, is neglected within the sums of angles (cos(θ + β) ≈ cos(θ), sin(θ + β) ≈ sin(θ)), and the products of small quantities are neglected as well. The derivation finally yields the linear strain along a direction rotated by the angle θ:

ε(θ) = (∂v/∂y)·cos²(θ) + (∂w/∂z)·sin²(θ) + γyz·sin(θ)·cos(θ) = εy·cos²(θ) + εz·sin²(θ) + γyz·sin(θ)·cos(θ)   (33)

or, in the double-angle form,

ε(θ) = (εy + εz)/2 + [(εy - εz)/2]·cos(2θ) + (γyz/2)·sin(2θ)   (34)

A similar relation may be derived for the shear stress variation with respect to a rotated coordinate system. To conclude this paragraph regarding the theory, it can be noticed that both the definitions and the mathematical relations are based on approximations which may be considered reliable for a certain class of problems. These definitions and mathematical relations are used in Strength of Materials without any proof, so the approximations are inherent to all the subsequent calculi. Similar approximations may also be found in the theory of the Strength of Materials.

3. METHOD-LEVEL APPROXIMATIONS

The term method may be considered for applications in both the Theory of Elasticity and Strength of Materials, as well as for methods originating in other sciences, such as mathematics, numerical methods, computer science and experimental methods. Let us consider the model presented in figure 3, which represents an on-board crane employed to load or discharge the cargo of a ship. It is a statically indeterminate system which can be solved by the use of various methods. At a certain level, all the solving methods must consider the compatibility of deformations. The geometrical solution is based on two hypotheses: the infinite rigidity of the horizontal bar and the parallelism of the vertical bars in the deflected position with respect to the initial position. Based on these hypotheses, a relation may be written between the sides of the similar triangles in figure 3. It results a relation between the deflections of the vertical flexible bars. This is an example of using hypotheses, which means applying approximations, in the solution of a concrete problem. A more accurate approach should consider the influence of the so-called rigid horizontal bar on the results of the study.
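As a small check of relation (34), the sketch below evaluates the rotated strain for an arbitrary plane strain state and confirms that the two forms (33) and (34) coincide; the numerical strain values are made up for the example, only their order of magnitude follows (8).

import math

def strain_rotated(eps_y, eps_z, gamma_yz, theta):
    """Relations (33)/(34): linear strain on a direction rotated by theta [rad]."""
    direct = (eps_y * math.cos(theta) ** 2
              + eps_z * math.sin(theta) ** 2
              + gamma_yz * math.sin(theta) * math.cos(theta))
    double_angle = ((eps_y + eps_z) / 2
                    + (eps_y - eps_z) / 2 * math.cos(2 * theta)
                    + gamma_yz / 2 * math.sin(2 * theta))
    assert abs(direct - double_angle) < 1e-15   # the two forms are identical
    return double_angle

# Example strain state (illustrative values only).
print(strain_rotated(1.25e-3, -0.4e-3, 1.875e-3, math.radians(30.0)))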
Figure 3 Model of a lifting system made of rigid and flexible bars

The Theory of Elasticity offers a set of general equations and a portfolio of methods to solve these equations. These methods are synchronous with the technological level of the computing instruments of that age of science. In the class of the so-called computing instruments there may be included: mathematical methods, numerical methods, computer algorithms. Some of the hypotheses were conceived especially to compensate for the shortcomings of the computing instruments of a given technological level. An interesting aspect regarding the solving methods included in the theory of elasticity books is the presence of the experimental methods [15]. A solution widely used in elasticity problems was to consider Fourier and/or trigonometric series for the approximation of the unknown functions. The property of orthogonality of the functions was used, the final solution being expressed as a series of simple or double trigonometric functions. The concrete by-hand calculus uses, at most, the first two terms of the series. Approximation may also be found in this method, the computer-based solution which considers several terms of the series requiring appropriate algorithms which must avoid inaccuracy and even run-time errors of the original software. Besides its known weak points, the Finite Difference Method [10] is another important method which offers solutions to a wide class of problems: structural problems, heat transfer, computational fluid dynamics and others. It also uses approximations, starting with the expansion in series and followed by the size of the grid employed to solve the set of equations specific to the given phenomenon, a size which may influence the convergence of the solution. It is a discrete method, so its approximations are added to the approximations of the basic theory of the model. It may be reliable if an original computer-based solution is conceived and parameterized software is developed.
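Returning to the lifting-system model of figure 3, a minimal numerical sketch of the geometric compatibility approach described above is given below; the bar positions, stiffnesses and load are invented example data, not values from the paper.

import numpy as np

# Rigid horizontal bar hung on three vertical elastic bars: the rigidity
# hypothesis makes the deflections vary linearly along the bar.
x = np.array([0.0, 2.0, 5.0])          # attachment points along the rigid bar [m]
k = np.array([8.0e7, 5.0e7, 5.0e7])    # axial stiffness E*A/L of each bar [N/m]
x_P, P = 3.0, 1.0e5                    # position [m] and value [N] of the load

# Unknowns: bar forces N1..N3 and the rigid-body motion (a, b) of the bar,
# the compatibility hypothesis giving delta_i = a + b*x_i = N_i / k_i.
A = np.zeros((5, 5))
rhs = np.zeros(5)
A[0, 0:3] = 1.0;  rhs[0] = P           # vertical equilibrium
A[1, 0:3] = x;    rhs[1] = P * x_P     # moment equilibrium about x = 0
for i in range(3):                     # compatibility: N_i/k_i - a - b*x_i = 0
    A[2 + i, i] = 1.0 / k[i]
    A[2 + i, 3] = -1.0
    A[2 + i, 4] = -x[i]

N1, N2, N3, a, b = np.linalg.solve(A, rhs)
print("bar forces [N]:", N1, N2, N3)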
Each instrument, theory or method, is based on a set of hypotheses, that is on approximations. It is important to conceive a strategy which uses the strength of each such instrument and compensates their weak points. This means creating a research strategy which embeds several types of studies: analytic, numeric, experimental, the integration being done by the use of some original software applications. A first example regards the way the differential equation of the neutral fibre of a bar subjected to bending is deduced. The relation of the curvature is given by differential geometry and it is

κ = u'' / [1 + (u')²]^(3/2)   (35)

where u is the deflection and κ is the curvature. The rotation is defined by

φ = du/dx = u'   (36)

and it is considered to be very small. Moreover, (u')² is even smaller. Consequently it is neglected in (35), which becomes

κ = u'' = d²u/dx²   (37)

This differential equation may be easily integrated and it leads to the differential relation which may be integrated by the use of the method of initial parameters. If (u')² is not neglected, other methods of integration should be used. Paper [11] presents an original computer-based method and its limits with respect to the size of the cross-section of the bar subjected to bending. In this case it was imperative to use numerical integration and an appropriate optimised algorithm. In this way the analytic and the numerical methods complete each other. There are other cases where experimental and numerical studies complete each other [13]. A computer-based procedural model is presented in reference [9]. It uses experimental data acquired by the
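The size of the error introduced by dropping (u')² in (35) can be illustrated numerically; the deflection shape used below is an arbitrary example, not one of the cases analysed in [11].

import numpy as np

# Compare the exact curvature (35) with the linearized one (37) for an
# example deflection shape u(x) = u0 * sin(pi * x / L).
L, u0 = 2.0, 0.02                      # span [m] and mid-span deflection [m]
x = np.linspace(0.0, L, 201)
u = u0 * np.sin(np.pi * x / L)

du  = u0 * (np.pi / L) * np.cos(np.pi * x / L)        # u'
ddu = -u0 * (np.pi / L) ** 2 * np.sin(np.pi * x / L)  # u''

kappa_exact  = ddu / (1.0 + du ** 2) ** 1.5           # relation (35)
kappa_linear = ddu                                    # relation (37)

err = np.max(np.abs(kappa_exact - kappa_linear)) / np.max(np.abs(kappa_exact))
print(f"max slope u' = {np.max(np.abs(du)):.4f}, relative curvature error = {err:.3%}")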
Until now, based on an extensive analysis of the technical literature [1], [2], [3], [4], [14], [15], [16], [17], [18], a quantitative analysis of the notions on which the approximations in structural studies are based could not be found. This essay unveils the true nature of the theoretical models, especially of the analytic models, which are all based on more or less accurate approximations. There are several levels where approximations are considered, starting with the basic definitions and mathematical relations of the theory of elasticity, up to the solution of concrete structural problems. The essay offers some suggestions and examples to be considered in order to create models with a higher degree of accuracy. Coming back to the philosophical level mentioned at the beginning of the essay, Knuth [5] quoted von Neumann, who said "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin." Mutatis mutandis, one can say "Anyone who considers standalone analytic instruments to model real systems is, of course, in a state of sin." A final conclusion is that the 'unity in diversity' principle applied to the scientific instruments, the difference between their natures, enriches the growth of knowledge.

6. ACKNOWLEDGEMENT
Ideas regarding the computer based instruments in applied elasticity are the result of the models developed in the framework of the MIEC2010 bilateral Ro-Md research project, Oanta, E., Panait, C., Lepadatu, L., Tamas, R., Constantinescu, M., Odagescu, I., Tamas, I., Batrinca, G., Nistor, C., Marina, V., Iliadi, G., Sontea, V., Marina, V., Balan, V. (2010-2012), Mathematical Models for Inter-Domain Approaches with Applications in Engineering and Economy, MIEC2010 - Bilateral Romania-Moldavia Scientific Research Project, under the supervision of the National Authority for Scientific Research (ANCS), Romania, that is the follow-up of the ID1223 scientific research project: Oanta, E., Panait, C., Nicolescu, B., Dinu, S., Pescaru, A., Nita, A., Gavrila, G., (2007-2010), "Computer Aided Advanced Studies in Applied Elasticity from an Interdisciplinary Perspective", under the supervision of the National University Research Council (CNCSIS), Romania. Ideas regarding the maritime structures are the result of the models developed in the framework of the scientific research study Development of computer assisted marine structures, Emil Oanta, Cornel Panait, Ghiorghe Batrinca, Alexandru Pescaru, Alexandra Nita, Feiza Memet, which is a component of the RoNoMar project, 2010.
[1] BUZDUGAN, G., Rezistența materialelor, Editura Academiei RSR, București, 1986
[2] FILONENCO-BORODICI, M.M., Teoria elasticității, Editura Tehnică, București, 1962
[3] GREEN, A.E.; ZERNA, W., Theoretical elasticity, Oxford at the Clarendon Press, 1968
[4] IEȘAN, D., Teoria termoelasticității, Editura Academiei RSR, București, 1979
[5] KNUTH, D.E., The Art of Computer Programming, Vol. 2, 2nd ed., Reading, MA: Addison-Wesley, 1981
[6] MOCANU, D.R.; THEOCARIS, P.S.; ATANASIU, C.; BOLEANTU, L.; BUGA, M.; BURADA, C.; CONSTANTINESCU, I.; ILIESCU, N.; PASTRAV, I.; TEODORU, M., Analiza experimentală a tensiunilor, Volumul 1: Bazele teoretice ale metodelor tensometrice și indicații practice privind utilizarea acestora, Editura Tehnică, București, 1976
[7] NITA, A.; BARSANESCU, P., Impact of the finite element method for the measuring of the residual stress in molded polymeric parts, 6th International Conference on the Management of Technological Changes, SEP 03-05, 2009, Alexandroupolis, Greece, in Management of Technological Changes, Vol. 1, pp. 533-536, ISBN 978-960-89832-7-4, IDS Number: BMO93
[8] NITA, A.; OPRAN, C., Testing Charpy impact strength of polymeric materials, Annals of DAAAM for 2009 & Proceedings of 20th DAAAM International Symposium, 1529-1531, 25-28 November 2009, ISBN 978-3-901509-70-4, ISSN 1726-9679, pp. 765, Editor Branko Katalinic, Published by DAAAM International, Vienna, Austria
[9] OANTA, E.; TARAZA, D., Experimental Investigation of the Strains and Stresses in the Cylinder Block of a Marine Diesel Engine, Paper 2000-01-0520, SAE 2000 World Congress, Detroit, Michigan, March 6-9, 2000, ISSN 0148-7191
[10] OANTA, E., Fundamente teoretice în programarea aplicațiilor de inginerie mecanică asistată de calculator, 294 pages, Editura Fundatiei Andrei Saguna, Constanta, 2000, ISBN 973-8146-04-6, Preface by Aram Constantin, member of the Romanian Science Academy
[11] OANTA, E.; NICOLESCU, B., An original approach in the computer aided calculus of the large deflections, Analele Universității Maritime Constanța,
1. INTRODUCTION
Solar energy is practically inexhaustible and clean; it can be transformed into other forms of energy. The paper proposes a study of a solar absorption refrigerating installation. The difficulties of using solar energy are:
- its intermittent character;
- its low density per surface;
- its variation depending on the season and the climatic region;
- the small storage possibilities for longer periods.
The solar power density achieved in Romania is about 600 W/m2.
The advantages of absorption refrigeration systems are:
- they can use low-potential heat;
- long operating life;
- they do not use compressors;
- the electricity consumption of the pumps is low;
- good behaviour at partial load;
- the working solutions used do not damage the ozone layer.

2. SYSTEM DESCRIPTION
2.1. Solar absorption refrigeration scheme
This is shown in Figure 1. The main parts are: ST - storage tank; Ab - absorber; G - steam generator; RS - regenerative heat exchanger; CS - solar panel; P1, P2 - pumps; Cd - condenser; V - surface evaporator; VL1, VL2 - expansion valves.

Figure 1 Absorption refrigeration scheme

In what follows we will refer to: the thermochemical compressor (composed of the steam generator, the absorber, the regenerative heat exchanger and expansion valve 1) and the thermochemical expansion (composed of the condenser, expansion valve 2 and the surface evaporator).

2.2. Solar panel
This is shown in Figure 2. The main parts are: CS - solar panels; TI - thermometers; AV - automatic valves; FM - flowmeter; V - valves; ST - storage tank; SV - safety valve.
Figure 2 Drawing of the solar panel

The efficiency of the solar collector is

η = (Ge · cp · ΔT / I) · 100   [%]   (2)

and the characteristic temperature is

θ = (Ti + Te - 2Ta) / (2I)

Notations: Ge - specific mass flow rate of the heat carrier [kg/(m2·s)]; cp - specific heat [J/(kg·K)]; ΔT - temperature rise in the collector [K]; I - intensity of the solar radiation [W/m2]; η - efficiency [%]; θ - characteristic temperature [K·m2/W]; Ti, Te, Ta - inlet, exit and ambient temperatures [K].

Figure 3 Efficiency variation depending on the characteristic temperature

2.3. Exergetic flow diagram
This is shown in Figure 3. In this diagram two units can be seen: TC - thermochemical compressor; TE - thermochemical expansion.

Figure 3 Exergetic flow diagram

Notations: Ex_QF + Pp - total exergetic flow introduced; Exp_TC - exergy loss (in TC); Ex_TC - exergetic flow supplied by TC; Q0 - cooling power; Ex_Q0 - exergetic flow associated with the cooling power; Exp_TE - exergy loss (in TE); Exp_t - total exergy loss.
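A small numerical sketch of relation (2) and of the characteristic temperature, as reconstructed above, is given below; the flow rate and temperatures are assumed example values, not measurements from the paper (only the 600 W/m2 radiation level is quoted in the text).

def collector_efficiency(ge, cp, dT, I):
    """Relation (2): instantaneous collector efficiency in percent.
    ge [kg/(m2*s)], cp [J/(kg*K)], dT [K], I [W/m2]."""
    return ge * cp * dT / I * 100.0

def characteristic_temperature(t_in, t_out, t_amb, I):
    """Characteristic temperature (Ti + Te - 2*Ta) / (2*I) in K*m2/W."""
    return (t_in + t_out - 2.0 * t_amb) / (2.0 * I)

# Assumed example operating point.
ge, cp = 0.008, 4185.0                 # water flow per collector area, specific heat
t_in, t_out, t_amb = 40.0, 48.0, 25.0  # inlet, exit and ambient temperatures [C]
I = 600.0                              # solar radiation intensity quoted for Romania [W/m2]

print(f"efficiency = {collector_efficiency(ge, cp, t_out - t_in, I):.1f} %")
print(f"theta = {characteristic_temperature(t_in, t_out, t_amb, I):.4f} K*m2/W")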
3.

The coefficient of performance of the absorption refrigeration installation (ARI) is

PC_ARI = Q0 / (QF + PP)   (5)

The exergetic efficiency of the installation is

η_ex,ARI = Ex_Q0 / (Ex_QF + Pp) = 1 - Exp_t / (Ex_QF + Pp)   (6)

and for the two units

η_ex,TC = Ex_TC / (Ex_QF + Pp)   (7)

η_ex,TE = Ex_Q0 / Ex_TC   (8)

Result:

η_ex,ARI = η_ex,TC · η_ex,TE   (9)

4. CONCLUSIONS

1 - The energy losses due to the efficiency may be compensated by increasing the solar surface.
2 - The exergetic efficiency depends on the two units: the thermochemical expansion and the thermochemical compressor.
3 - An auxiliary heat source can be used in order not to oversize the storage tank and the solar panels excessively.
4 - The regenerative heat exchanger increases the exergetic efficiency of the installation.
5 - The ship engine cooling water may be a source of energy for the absorption refrigeration installation.

5. REFERENCES

[1] Radcenco V., Porneala S., Dobrovicescu A., Procese in instalatii frigorifice, Editura didactica si pedagogica, Bucuresti, 1983.
[2] Danescu Al. si col., Utilizarea energiei solare, Editura tehnica, Bucuresti, 1980.
[3] Porneala S., Beju C., Reducerea pierderilor exergetice prin utilizarea regeneratorului de caldura intre solutii la instalatia frigorifica cu absorbtie, Revista de termotehnica, anul V, nr. 1/2001.
[4] Nerescu I., Radcenco V., Analiza exergetica a proceselor termice, Editura tehnica, Bucuresti, 1970.
[5] Leca A., Stan M., Transfer de caldura si masa, Editura tehnica, Bucuresti, 1998.
[6] Bejan A., Termodinamica tehnica avansata, Editura tehnica, Bucuresti, 1996.
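A minimal sketch of relations (5)-(9) is given below; all flow and power values are assumed placeholders used only to show how the quantities combine.

# Relations (5)-(9) for the absorption refrigeration installation (ARI);
# the numerical inputs are assumed example values.
Q0, QF, Pp = 10.0, 14.0, 0.3          # cooling power, driving heat, pump power [kW]
Ex_QF, Ex_Q0, Ex_TC = 3.5, 0.9, 1.6   # exergetic flows [kW]

pc_ari = Q0 / (QF + Pp)                        # (5) coefficient of performance
eta_ex_tc = Ex_TC / (Ex_QF + Pp)               # (7) thermochemical compressor
eta_ex_te = Ex_Q0 / Ex_TC                      # (8) thermochemical expansion
eta_ex_ari = eta_ex_tc * eta_ex_te             # (9), equal to Ex_Q0 / (Ex_QF + Pp) of (6)

print(f"PC_ARI = {pc_ari:.3f}")
print(f"eta_ex,TC = {eta_ex_tc:.3f}, eta_ex,TE = {eta_ex_te:.3f}, eta_ex,ARI = {eta_ex_ari:.3f}")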
DOMESTIC SOLAR WATER HEATING POTENTIAL IN THE SOUTH- EASTERN REGION OF ROMANIA
ABSTRACT

One of the most effective methods to include ecological technology in a house is the use of solar systems for water heating. This paper determines the solar water heating (SWH) potential for the South-Eastern region of Romania. It resulted that the use of solar energy covers approximately 35-50% of the thermal energy needs for water heating from January to April and from October to December, and 80-100% from May to September. This solar system reduces by up to two thirds the need to use traditional methods for water heating and minimizes the costs for electricity or for the fuel used in heating water, thus reducing the environmental impact.

Keywords: solar water heating, solar radiation, energy consumption
1. INTRODUCTION
The use of solar energy for domestic hot water supply has proved to be a perfectly viable solution. The operating principle of a system heating water with solar energy is simple, and the technology is already well known and reliable. Solar energy is non-polluting, inexhaustible, ecological and reliable. This facilitates energy savings without producing waste or emitting polluting gases such as carbon dioxide. Beyond the pollution problems and the greenhouse gas impact, domestic hot water supply represents a considerable part of a building's energy bill, which can be reduced by using solar energy.

2. SOLAR HEATING SYSTEMS
Solar systems for domestic hot water preparation are among the first uses of solar energy. At present, they have reached a considerable development because solar energy is a clean, non-polluting energy, whose use leads to the reduction of greenhouse gas emissions. From the first attempts up to the present, the composition solutions, namely the functional-structural schemes, have evolved greatly and are still developing. Equipment manufacturers, stimulated by new standards related to low-energy or energy-positive buildings, have developed a varied range of products for hot water preparation. They can serve:
- individual or collective residential buildings;
- accommodation buildings;
- social and cultural buildings;
- swimming pools.
The functional-structural schemes are drawn up so as to best respond to a series of criteria, characteristics and user requirements, to the buildings they serve and to the location. Among these, the following are very important: the hot water needs, the location characteristics, the climate calculation characteristics, the solutions for heat supply for the buildings served, the available control and adjustment equipment and systems, and the financial resources. Due to the increased energy cost and the fact that fossil fuel resources are limited, the idea of recovering solar energy as a renewable energy source is increasingly popular around the world, because mankind has realized the multiple benefits of solar heating. A comparison between solar water heating systems and conventional systems is presented in Table 1.

Table 1 Comparison between solar water heating systems and conventional systems
- Initial investment: solar water heater 1650-2600; gas or electric water heater 200-400
- Annual operating costs: solar 30-45; gas or electric 350-470
- Typical lifetime: solar 15-40 years; gas or electric 8-20 years
- Lifetime operating cost: solar 400 to 1600; gas or electric 3000 to 9500
- Total lifecycle cost: solar 2100-4300; gas or electric 2800-9800
- Emissions: solar zero; gas or electric fossil fuel emissions
- Return on investment: solar 10 to 30%; gas or electric none
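The figures of Table 1 can be combined into a simple payback estimate, as sketched below; the mid-range values picked from the table are only one possible reading of the ranges given.

# Rough payback estimate from the mid-range values of Table 1 (illustrative only).
solar_initial, solar_annual = (1650 + 2600) / 2, (30 + 45) / 2
conv_initial, conv_annual = (200 + 400) / 2, (350 + 470) / 2

extra_investment = solar_initial - conv_initial
annual_saving = conv_annual - solar_annual
print(f"extra investment: {extra_investment:.0f}, annual saving: {annual_saving:.0f}")
print(f"simple payback: {extra_investment / annual_saving:.1f} years")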
3. SOLAR SYSTEM SELECTED TO COVER THE NEEDS OF HEAT

To ensure the domestic hot water supply at a minimum temperature of 45°C, an installation using the Junkers FCB-1S solar collector was chosen. This type of installation contains a plane solar collector, a heat exchanger, a reservoir for hot water storage and preparation, two pumps and an electric resistance that keeps the water in the storage tank at a minimum temperature of 60°C throughout the year. Each pump is equipped
Figure 1 Solar system

The main characteristic parameters of the solar system are presented in Table 2. Calculations were made for a single-family dwelling occupied by 4 persons, whose total consumption of domestic hot water is 200 l/day, with a specific consumption of qs = 50 l/person per day. The provisions of STAS 1343 and 1478 were taken into account for the water consumption. The distribution of the hot water consumption within 24 hours is shown in Table 3.
Table 3 Distribution of hot water consumption within 24 hours
Time [hour]:  0    2    4    6      8     10   12   14   16     18      20      22     24   Total
V(t) [l]:     0    0    0    55.4   39    0    0    0    11.3   51.5    23.1    19.7   0    200
Share [%]:    0    0    0    24.2   19.5  0    0    0    9.15   25.75   11.55   9.85   0    100
We can note in Table 3 that a high consumption of domestic hot water occurs between 18:00 and 22:00. The energy required to heat the daily volume of water is calculated with the following relation:
4. ENERGY ANALYSIS OF THE THERMAL SYSTEM In order to determine the energy efficiency of the system a numerical program was developed in Matlab that calculated, based on the climatic data and energy consumption, monthly energy produced by solar panels and the auxiliary energy necessary to cover heat load in the months with a lower intensity of solar radiation.
Q = Vt · ρ · cp · ΔT   (6.25)

where:
Vt - daily volume of water necessary for the house (200 l);
ρ - density of water;
cp - specific heat of water;
ΔT - temperature difference between the required hot water temperature and the cold water temperature.
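The paper's calculations were made with a MATLAB program; a short Python sketch of this daily energy calculation is given below, with the cold water and hot water temperatures assumed for illustration (the paper only specifies a minimum delivery temperature of 45°C).

# Daily energy needed to heat the domestic hot water volume (illustrative values).
V_t = 200.0                  # daily hot water volume [l]
rho = 1.0                    # water density [kg/l]
cp = 4185.0                  # specific heat of water [J/(kg*K)]
t_cold, t_hot = 10.0, 45.0   # assumed cold water and required hot water temperatures [C]

Q_joule = V_t * rho * cp * (t_hot - t_cold)   # [J]
Q_kwh = Q_joule / 3.6e6
print(f"daily heating energy: {Q_kwh:.2f} kWh")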
Monthly auxiliary energy [kWh]: January 177.06; February 117.53; March 83.24; April 24.56; May to September 0.00; October 56.97; November 134.60; December 172.21; annual total 766.
Figure 2 shows the variation of the solar radiation over a year, falling on a horizontal surface and, respectively, on a surface inclined at an angle of 30°.
Figure 4 shows the monthly variation of thermal energy needs and the monthly variation of auxiliary consumption (from other sources) in order to ensure the heat demand of the house.
Figure 2 Variation in solar radiation in a year

Figure 3 represents the monthly variation of the energy produced by the solar thermal system and the monthly variation of the energy consumption needed to ensure the heat demand of the house.
Figure 4 The monthly variation of thermal energy needs and of the auxiliary consumption (from other sources) in order to ensure the heat demand of the house

Figure 5 shows the monthly variation of the heat produced by the solar collectors compared with the monthly variation of the auxiliary consumption needed to cover the heat demand of the house.
Figure 3 Monthly variation of the energy produced by the heating system and of the energy consumption in order to ensure the heat demand of the house
Figure 5 The monthly variation of the heat produced by the solar collectors and the monthly variation of auxiliary consumption so as to cover the heat demand of the house
Figure 6 The monthly variation of thermal energy needs, auxiliary consumption and heat produced

5. CONCLUSIONS
The aim of this paper was to determine the solar water heating (SWH) potential in the South-Eastern region of Romania for a single-family dwelling occupied by 4 persons. After analyzing the system operation, it resulted that the use of solar energy covers approximately 35-50% of the thermal energy needs for water heating from January to April and from October to December, and approximately 70-100% from May to September. This solar system reduces by up to two thirds the need to use traditional methods for water heating and
[1] Cengel Y.A., (2003), Heat Transfer: A Practical Approach, 2nd Ed. McGraw-Hill. [2] Hans Dieter B., Karl S., (2006), Heat and Mass Transfer. 2nd Ed. Springer. [3] Holman J.P., (1997), Heat Transfer, New York, Mcgraw-Hill Inc., 459-472. [4] Incropera F.P., DeWitt D.P., Bergman Th. L., Lavine A.S., (2007), Fundamentals of Heat and Mass Transfer, 6th Ed., John Wiley & Sons. [5] John A. Duffie, William A. B., (1991), Solar Engineering of Thermal Processes, John Wiley & Sons, 331-376. [6] Luminosu V.I., De Sabata T.C., De Sabata I.A., (2010), Studies on solar radiation and solar collectors, Thermal Science, 14, 157-169.
ABSTRACT

A solar photovoltaic system is a renewable energy system which uses PV modules to convert sunlight into electricity. The recent escalation of fossil fuel energy prices and likely future carbon dioxide emission cap-and-trade programs will substantially improve the cost-effectiveness of investment in energy conservation and renewable energy resources. A solar PV system is a very reliable and clean source of electricity that can suit a wide range of applications. In this paper, a technical study was carried out on the implementation of photovoltaic (PV) modules which can be installed on the rooftop of a house and used as a clean energy source. The system's mathematical model is developed in MATLAB.

Keywords: photovoltaic system, solar radiation, energy demand.
1. INTRODUCTION

At the start of this millennium the use of photovoltaic panels has increased at an accelerated rhythm. One reason for this growth is the improvement of solar cell manufacturing technology, followed by the fact that the price of photovoltaic panels decreased while classic fuels became more expensive. We do not have to pay any money for the sun's energy; still, we have to pay for the energy capturing equipment. However, this cost has decreased lately and will decrease further in the future, while the price of oil and natural gas will keep growing as before. Photovoltaic panels consist of several photovoltaic cells connected in series and in parallel, so as to ensure the current and voltage for which they were designed. The efficiency with which monocrystalline photovoltaic cells (the most commonly used) transform the incident solar energy into electric energy is about 16-18%. One of the earliest uses of solar photovoltaic energy was for remote applications. In areas and residences which are too far away to be viably connected to the electrical network, photovoltaic energy can be used for almost any electric energy need of the house. Photovoltaic panels can provide home lighting and electric energy for appliances, televisions and household accessories. There are two types of electrical installations with solar panels:
1. freestanding installations (isolated type), mainly used for supplying power to consumers who do not have access to the power grid. They generally have low power, because the energy produced by the panels is stored in electric batteries.
2. installations connected to the electrical network. In this case, the power consumption is provided by the solar panels, when they have the conditions to produce it, or by the electrical network, when the panels cannot produce power (for example, at night or when the sky is cloudy). When the current produced by the panels exceeds the consumption needs, the surplus power is delivered into the electrical network.

2.
A classic stand-alone (insular) photovoltaic system consists of the following components:
- photovoltaic panels,
- battery charge controller,
- group of batteries of 12, 24 or 48 V DC,
- inverter, which transforms the direct current (DC) into the alternating current (AC) necessary for domestic consumers.
The advantages of using photovoltaic panels are primarily represented by the possibility to ensure electricity in remote locations with no access to the electricity supply network. Such a system is easy to install, it does not require special knowledge in the energy field, and the maintenance of the panels is easy; the panels only require cleaning of the impurities deposited on their surface. The most important advantages of photovoltaic (PV) systems are:
1. Photovoltaic (PV) systems provide green, renewable power by exploiting solar energy.
2. Unlike wind turbines, photovoltaic (PV) panels operate autonomously, without any noise generation, as they do not incorporate any moving mechanical parts.
3. With respect to operating and maintenance costs, photovoltaic (PV) panels, unlike other renewable energy technologies, require minimum operating or maintenance costs; just performing some regular cleaning of the panel surface is adequate to keep them operating at the highest efficiency levels stated by the manufacturers' specs.
Photovoltaic (PV) panels can be ideal for distributed power generation, as they are highly suitable for remote applications. Figure 1 shows a system for the production and use of current by means of photovoltaic panels. It can be noticed that the photovoltaic panel is not the only component of the system. Since the moment when electric energy is needed is not the same as the moment when solar radiation is present, the electricity supplied by the panel is accumulated in one or more batteries to be used when needed. Between the photovoltaic panel
Figure 1 The operation scheme for a solar photovoltaic system of isolated type. 1 PV generator; 2 charge controller; 3 batteries; 4 DC/AC inverter; 5 - load
4. ENERGY ANALYSIS OF THE ELECTRIC SYSTEM

Table 2 shows the results of the energy analysis for the photovoltaic system consisting of ten panels.
Table 2 Results of the energy analysis
Month        ES (solar radiation) [kWh]   EPV (PV) [kWh]
January              285                        26
February             670                        66
March               1282                       125
April               1617                       157
May                 1865                       177
June                2035                       189
July                2321                       210
August              2257                       205
September           1612                       150
October              952                        91
November             409                        39
December             236                        21
Annual             15541                      1456
Where: ES (solar radiation) - the intensity of solar radiation (monthly or annual) accumulated on the photovoltaic panels, inclined at an angle of 36°; EPV (PV) - the energy produced (monthly or annually) by the photovoltaic panels; EE (excess) - the surplus of energy (monthly or annually) produced by the solar panels; EA (auxiliary) - the additional energy (monthly or annually) necessary to complete the electric energy obtained from the solar panels; EC (consumed) - the electricity required (monthly or annually) for the house. Figure 2 shows the monthly variation of the electric energy produced by the Siemens solar panels and the monthly variation of the electricity demand.
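From the two recovered columns of Table 2, the overall conversion ratio EPV/ES of the ten-panel system can be computed as sketched below (the paper's own analysis was done in MATLAB; this is a Python sketch using only the table values).

# Conversion ratio EPV/ES of the photovoltaic system, from the Table 2 values.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
ES  = [285, 670, 1282, 1617, 1865, 2035, 2321, 2257, 1612, 952, 409, 236]  # kWh
EPV = [26, 66, 125, 157, 177, 189, 210, 205, 150, 91, 39, 21]              # kWh

for m, es, epv in zip(months, ES, EPV):
    print(f"{m}: {100.0 * epv / es:.1f} %")
print(f"annual: {100.0 * sum(EPV) / sum(ES):.1f} %")   # about 9.4 %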
In Figure 3 we can notice monthly variation of the need of electricity for a house and monthly variation of auxiliary consumption (from other sources) to ensure electrical needs of the house.
Figure 3 Variation of the need of electricity for a house and monthly variation of auxiliary consumption (from other sources)

Figure 4 shows the monthly variation of the electric energy produced by the PV panels and the monthly variation of the auxiliary consumption (from other sources) needed to ensure the electrical needs of the house.
Figure 2 Electric energy produced by Siemens solar panels and monthly variation of electricity demand
Figure 4 Monthly variation of energy produced and monthly variation of auxiliary consumption

Figure 5 shows the monthly variation of the electric energy demand compared with the monthly variation of the auxiliary consumption (from other sources) needed to ensure the electricity demand of the house, together with the monthly variation of the electric energy produced by the PV panels.
Figure 5 The monthly variation of electric energy demand, auxiliary consumption and electric energy produced

5. CONCLUSIONS
We can notice that the energy produced by PV panels in January, February, March, October, November and December is below the electricity demand for a house, and for the other months of the year this energy
[1] Yun G.Y., McEvoy M., Steemers K., Design and overall energy performance of a ventilated photovoltaic façade, Solar Energy 2007;81:383-94.
[2] Mei L., Infield D., Eicker U., Fux V., Thermal modelling of a building with an integrated ventilated PV façade, Energy Build 2003;35:605-17.
[3] Balocco Carla, A simple model to study ventilated facades energy performance, Energy Build 2002;34:469-75.
[4] Onar O.C., Uzunoglu M., Alam M.S., Modeling, control and simulation of an autonomous wind turbine/photovoltaic/fuel cell/ultra-capacitor hybrid power system, Journal of Power Sources 2008;185(2):1273-83.
[5] Reichling J.P., Kulacki F.A., Utility scale hybrid wind-solar thermal electrical generation: a case study for Minnesota, Energy 2008;33:626.
[6] Yang H., Zhou W., Lu L., Fang Z., Optimal sizing method for stand-alone hybrid solar-wind system with LPSP technology by using genetic algorithm, Solar Energy 2008;82:354
The zooplankton from the Danube and the Danube Delta has been a constant concern, as an object of detailed study, for many researchers, but the zooplankton from the lakes of the flooded area received less attention and remains almost unknown. The river estuaries of south-western Dobrogea have been studied too little or not at all from a hydrobiological point of view. The specialty literature is poor in data on the plankton composition of these aquatic ecosystems: only Enaceanu (1947) published an analysis of the plankton in the ponds of Oltina, and in 1961 Popescu-Gorj and Costea published a wide analysis of the hydro-biological conditions, also in the Oltina puddles (Iortmac, Ciamurlia and Oltina). Thus, in August 2000, within the international project "Monitoring of the large European rivers basins", the study of the zooplankton populations from the river estuaries Oltina, Dunareni and Bugeac was started. The three estuaries belong to the Ostrov-Cernavoda sector of the Danube flooded meadow, a sector located in the hilly Danube Dobrogea subunit, in its south-western part, corresponding to the Levantine coastal platform. A feature of the Levantine platform is the depression-bays in which the river estuaries are located [12]. Thus, Oltina, Dunareni (Marleanu) and Bugeac are lake units formed on secondary valleys at the confluence area with the Danube and have the form of depressions with high and abrupt banks, flat bottoms and no irregularities [8].

2. MATERIAL AND METHOD
In the period 2000-2004, 71 quantitative zooplankton samples were collected every season, although not in all the years could data be obtained for all seasons. The quantitative zooplankton samples were obtained by filtering 100 l of surface water through a silk net with a mesh size of 100 μm. The samples were preserved directly in the field with a 4% formaldehyde solution. The zooplankton samples were examined under a microscope and a stereomicroscope. For the species

The research carried out in the river estuaries Oltina, Dunareni and Bugeac (Figure 1) in the period 2000-2004 reveals very low values of the specific diversity, totalling together 52 holoplanktonic species and varieties belonging to the groups Rotatoria (54%), Cladocera (27%) and Copepoda (Cyclopoida - 12% and Calanoida - 5%). Of the total taxonomic spectrum of the three lake ecosystems, 86% fall within the primary consumers category (Cp) and 14% within the secondary consumers category (Cs). In the study period, the zooplankton was represented by a small number of species, the diversity and equitability indices indicating specific features for each entity (Table 1).
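The diversity and equitability indices mentioned above can be computed from abundance data as in the short sketch below; the species counts used here are invented example numbers, not values from the annexes.

import math

def shannon_diversity(abundances):
    """Shannon-Wiener diversity H' and equitability E = H'/ln(S)."""
    total = float(sum(abundances))
    props = [n / total for n in abundances if n > 0]
    h = -sum(p * math.log(p) for p in props)
    return h, h / math.log(len(props))

# Example abundances (ex/m^3) for a handful of species - illustrative only.
sample = [500, 250, 120, 60, 40, 20, 8, 2]
h, e = shannon_diversity(sample)
print(f"H' = {h:.3f}, equitability = {e:.3f}")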
In terms of specific composition, all three aquatic ecosystems are dominated by rotifers, the cladocerans (their food competitors and the most energetic group of these water bodies) being less represented. Comparing the resulting data with those obtained by Popescu-Gorj and Costea (1961), in Oltina about 70% fewer species were identified, namely 42 fewer species of rotifers, 14 fewer species of cladocerans and one fewer species of calanoid copepods. The rest of the species reported by the above-mentioned authors belong to other taxonomic groups, only the main zooplankton groups (rotifers, cladocerans and calanoid and cyclopoid copepods) being presented in this paper. The analysis of the biotope preferences of the zooplankters revealed a mixture of typical planktonic forms (species of the genera Brachionus and Keratella, Filinia passa, Polyarthra vulgaris, Synchaeta pectinata among the rotifers; Bosmina longirostris, the species of the genus Daphnia, Moina micrura dubia among the cladocerans; Eudiaptomus gracilis, Cyclops vicinus vicinus among the copepods), phytophilous forms (Euchlanis parva, Trichocerca similis, Testudinella patina among the rotifers, Chydorus sphaericus among the cladocerans), nectobenthic forms (the rotifer Rotaria sp.) and hyponeustonic forms (the cladoceran Scapholeberis kingi), the planktonic forms having the predominant role. Analyzing the qualitative composition of the zooplankton also in terms of nutritional preferences, the vast majority are primary consumers, sediment micro-filterers (species of the genera Brachionus, Keratella, Filinia) and inefficient micro-filterers (Diaphanosoma orghidani, Bosmina longirostris), which exploit the particulate organic matter well. From a quantitative point of view, the zooplankton from the river estuaries recorded an annual average numerical abundance of 213,439 ex·m-3 (Annex 1). The structure by taxonomic groups reveals the dominance of the cladocerans with 54%, followed by the rotifers with 35%, the cyclopoids with 8% and the calanoids with 0.007%. The numerical abundance of the primary consumers represents 70% of the total and that of the secondary consumers 30%. At the systematic group level, the numerical dominance is provided by the rotifers (52%) in the case of
Figure 2 The accumulated zooplankton density from the river estuaries (2000-2004)
Figure 3 The accumulated zooplankton biomass from the river estuaries (2000-2004)

Lake Dunareni recorded an average numerical abundance of 204,930 ex·m-3 and an average biomass of 10,565.59 mg·m-3 (Annex 2). Numerically, the cyclopoids are dominant (50%), while gravimetrically the cladocerans dominate (96%), although in terms of numerical abundance they represent only 10%. The C1/C2 ratio with respect to the numerical abundance emphasizes the dominance of the primary consumers (77%) compared to the secondary ones (23%), while in the case of the biomass the secondary consumers (89%) are eight times more abundant than the primary ones. A significant contribution to the numerical density is made by the nauplial and juvenile forms of the cyclopoids and the adults of Mesocyclops crassus, together with the rotifers Brachionus diversicornis and Polyarthra
The first 10 species with the highest ecological significance (Annex 1), ordered by density, have a high frequency (generally over 50%) and a summed dominance of over 60% of the entire population; ordered by biomass, although they do not have a high frequency (except for the nauplial and juvenile stages of the copepods), their summed dominance represents over 80% of the whole population. Among the most important species in terms of density, the rotifer species (Brachionus diversicornis, Brachionus angularis, Brachionus calyciflorus var. amphiceros, Keratella cochlearis var. tecta) particularly stand out, and among the cyclopoids the larval stages and the adults of Acanthocyclops vernalis vernalis and Mesocyclops crassus occupy the first places (Annex 2). The most important species in terms of biomass, however, are found particularly among the cladocerans, namely Leptodora kindti, which ranks first as biomass in all three ecosystems studied, Diaphanosoma orghidani and Moina micrura dubia, but also among the cyclopoids, such as Acanthocyclops vernalis vernalis and Mesocyclops crassus (Annex 2). A series of recent studies provide information on the species indicating the trophic status of an aquatic ecosystem [13], [1], [15]. In general, good indicators of eutrophic waters are rotifers such as the species of the genus Brachionus, Anuraeopsis fissa, Keratella cochlearis, Keratella cochlearis var. tecta, Keratella quadrata, Trichocerca pusilla, Trichocerca cylindrica, Polyarthra euryptera, Pompholyx sulcata, Pompholyx complanata and Filinia longiseta, and crustaceans such as Bosmina longirostris, Chydorus sphaericus and Mesocyclops crassus. Many of these species are found in the plankton of the river estuaries of south-western Dobrogea and are even dominant, leading to the conclusion that these lakes fall within the category of eutrophic waters. This statement is also supported by the physical-chemical parameter values of the water (Annex 3) that we determined during the study period, and even by the absence of the macrophyte vegetation. Also, Trk and Dinu (2006) consider that the numerical abundance of the cyanobacteria exceeds the algal blooming point. The recorded values of the physical parameters (depth, transparency, colour) and of the chemical parameters of the water (pH, oxygen, nitrogen, nitrates, phosphates, calcium and magnesium content) generally place these estuaries in a good environmental status and indicate a trend of evolution of these ecosystems towards the eutrophic state, according to the norms on surface water quality classification. There were, however, deviations from the accepted values, which is reflected in modifications of the structure of the biocenoses of these ecosystems, as a consequence of human activities (agriculture, deforestation). Thus, the heat exchange that occurs in the
Based on the analysis of the 71 quantitative zooplankton samples from the river estuaries Bugeac, Oltina and Dunareni and on the information resulting from the specialty literature, the following conclusions can be drawn:
1. the taxonomic spectrum of the zooplankton from the river estuaries totals, under the conditions of the years 2000-2004, 52 species and varieties, a relatively low diversity compared with the literature data;
2. in terms of quantity, the zooplankton from the river estuaries Bugeac, Oltina and Dunareni records an annual average numerical abundance of 213,439 ex·m-3;
3. the numerical dominance is ensured by the copepods and the rotifers;
4. the dominance of the species indicating eutrophic waters (rotifers such as the species of the genus Brachionus and Keratella cochlearis, and among the crustaceans Bosmina longirostris and Mesocyclops crassus), correlated with the values of the physical-chemical parameters of the water and the absence of the macrophyte vegetation, places the river estuaries in the category of eutrophic to hypertrophic waters;
5. in conjunction with the numerical abundance, the zooplankton from the studied aquatic ecosystems is characterized by high biomass values, registering a multiannual average of 8,211.78 mg·m-3, above the limit characteristic of natural shallow lake ecosystems of the temperate continental zone of Europe;

Special thanks are extended to Prof. Dr. Marian-Traian Gomoiu for his valuable advice and comments that helped improve the manuscript.

6. REFERENCES
[1] BERZINS, B., PEJLER, B., 1989 - Rotifer occurrence in relation to oxygen content, Hydrobiol., 183: 165-172.
[2] DAMIAN-GEORGESCU, Andriana, 1963 - Copepoda. Fam. Cyclopidae (forme de apă dulce), Fauna R.P.R., Ed. Acad. R.P.R., 4, 6, 204 p.
[3] DAMIAN-GEORGESCU, Andriana, 1964 - Dinamica copepodelor în complexul de bălți Crapina-Jijila (zona inundabilă a Dunării), Hidrobiologia, 5, Ed. Acad. R.P.R., București: 105-122.
[4] DAMIAN-GEORGESCU, Andriana, 1966 - Copepoda. Calanoida (forme de apă dulce), Fauna R.S.R., 4, 8, 130 p.
[5] DUSSART, B.H., DEFAYE, D., 1995 - Copepoda. Introduction to the Copepoda, in Guides to the Identification of the Microinvertebrates of the Continental Waters of the World, coord. Ed. H.J.F. DUMONT, SPB Academic Publishing, bv. 1995.
[6] ENCEANU, V., 1947 - Contribution a la connaissance du plancton des lacs Oltina, Ciamurlia et Iortmac (Roumanie), Not. Biol., 5: 1-3.
[7] FOWLER, J., COHEN, L., JARVIS, Ph., 1998 - Practical statistics for field biology, John Wiley & Sons, 259 p.
[8] GÂȘTESCU, P., 1971 - Lacurile din România, Limnologie generală, Ed. Academiei R.S.R., București: 40-48, 123-160.
[9] GOMOIU, M-T., SKOLKA, M., 2001 - Ecologie, Metodologii pentru studii ecologice, Ed. Ovidius University Press, Constanța, 170 p.
[10] HARDING, J.P., SMITH, W.A., 1974 - A key to the British Fresh-water Cyclopid and Calanoid Copepods, Biol. Ass., Sc., Pub., 18.
[12] IANA, Sofia, 1971 - Dobrogea de sud-vest, Rezumat teză de doctorat, Universitatea București: 13-15.
[13] KARABIN, A., 1985 - Pelagic zooplankton (Rotatoria + Crustacea) variation in the process of lake eutrophication. I. Structural and quantitative features. Ekologia Polska, 33, 4: 576-616.
[14] KIEFER, Fr., 1960 - Ruderfuss Krebse (Copepoden). Kosmos Verlag Franckh, Stuttgart, 99 p.
[15] MATVEEVA, L.K., 1991 - Can pelagic rotifers be used as indicators of lake trophic state? Verh. Internat. Verein. Limnol., 24: 2761-2763.
[16] NEGREA, St., 1983 - Cladocera (Crustacea), Fauna R.S.R., IV, 12, Ed. Acad. R.S.R.
Annex 1 General characteristics of the zooplankton populations from the river estuaries (2000-2004)
[Tabular data not reproducible from the extracted text. For each taxon the annex lists the group (rotifer/cladoceran), the species code, the frequency of occurrence F%, the average density Davg (specimens·m-3), the relative density DD%, the weighted density WD and the density rank RkD, together with the average biomass Bavg (mg·m-3), the relative biomass DB%, the weighted biomass WB and the biomass rank RkB. Taxa covered: Ascomorpha sp., Brachionus angularis, Br. calyciflorus var. amphiceros, Br. calyciflorus var. pala, Brachionus diversicornis, Br. diversicornis var. homoceros, Brachionus forficula, Brachionus quadridentatus, Br. q. var. brevispinus, Br. q. var. cluniorbicularis, Br. q. var. melheni, Br. q. var. rhenanus, Brachionus urceolaris, Epiphanes macrourus, Euchlanis parva, Filinia passa, Keratella cochlearis, Keratella cochlearis var. tecta, Keratella quadridentata, Lecane luna, Phylodina sp., Polyarthra vulgaris, Pompholyx complanata, Rotaria sp., Synchaeta pectinata, Testudinela patina, Trichocerca gracilis, Trichocerca similis, Trichocerca sp., Alona rectangula coronata, Alonella nana.]
General characteristics of the zooplankton populations from the Oltina, Dunareni and Bugeac lakes (2000-2004)
[Tabular data not reproducible from the extracted text. For each of the three lakes (Oltina, Dunareni, Bugeac) the annex lists, per species, the frequency of occurrence F%, the average density Davg (specimens·m-3), the weighted density WD and the density rank RkD, and the average biomass Bavg (mg·m-3), the weighted biomass WB and the biomass rank RkB, together with the number of species per group (Nsp) and the group totals. In addition to the rotifer taxa of the previous table, the species list includes the cladocerans Bosmina longirostris, Chydorus sphaericus, Daphnia cucullata, Daphnia galeata, Daphnia longispina, Diaphanosoma orghidani, Ilyochryptus agilis, Ilyochryptus sordidus, Macrothrix laticornis, Moina micrura dubia, Scapholeberis mucronata and Scapholeberis kingi, the copepods (cyclopid nauplii, juveniles C1, calanid nauplii, Eudiaptomus gracilis, Calanoida sp.), Asplanchna herricki, Leptodora kindti, and the cyclopoids Acanthocyclops vernalis, Acanthocyclops viridis, Cyclops scutifer, Cyclops vicinus vicinus, Eucyclops serrulatus and Mesocyclops crassus.]
[Tabular data not reproducible from the extracted text. The table gives, for each sampling of the Oltina, Dunareni and Bugeac lakes between 17.07.2001 and 05.11.2004: the physical characteristics (water depth in cm, Secchi depth in cm and the Secchi/depth ratio T/A, water temperature in °C), pH, dissolved oxygen O2 (mg/l), and the chemical characteristics (PO4 and total P, NO3, NO2 and NH4, all in mg/l), plus a final paired-value column (e.g. 115/93).]
1.
INTRODUCTION
One of the most important polymer processing operations is injection molding. The process involves the following sequence of steps: (a) heating and melting the polymer, (b) pumping the polymer to the shaping unit, (c) forming the melt into the required shape and dimensions, (d) cooling and solidification [1]. Thermoplastics are usually processed in the molten state. Molten polymers have very high viscosity values and exhibit shear-thinning behaviour: as the rate of shearing increases, the viscosity decreases, due to alignment and disentanglement of the long molecular chains. The viscosity also decreases with increasing temperature. In addition to the viscous behaviour, molten polymers exhibit elasticity; the elastic effects include stress relaxation and normal stress differences. Slow stress relaxation is responsible for frozen-in stresses in injection molded and extruded products, while the normal stress differences are responsible for some flow instabilities during processing. A challenge is to optimize the mold design so that it leads to the most homogeneous filling. In order to predict and model complex polymer flows, one must first have a basic understanding of the mathematics that govern the flow: the conservation of mass, the conservation of momentum and the conservation of energy.

2. THE MATHEMATICAL MODEL SOLUTION

The injection molding process is a very complex process and it must satisfy certain physical laws, so we must first have a basic understanding of the mathematical equations that govern the flow. These laws are expressed in mathematical terms as the conservation of mass, the conservation of momentum and the conservation of energy. In addition to these three conservation equations, there may also be one or more constitutive equations that describe the material properties.
Solving these equations raises several practical problems. Due to the characteristically thin walls of molded components, it is possible to make some reasonable assumptions that lead to a simplification of the governing equations. These simplified equations describe what is called Hele-Shaw flow and may be readily solved in complex geometries; they are the equations used in commercial plastics CAE analysis software [2]. Although analytical solutions of the conservation equations are available for some simple two-dimensional shapes, more complex two-dimensional problems or three-dimensional analyses require numerical methods, as shown in Figure 1.

2.1 Conservation of mass: The flow of a viscous fluid in the mold is mathematically described by the governing equations of conservation of mass, momentum and energy. For a material with density ρ and specific heat at constant pressure C_p, in the most general form these equations can be written as follows [2, 3, 4]. Conservation of mass is:
$$\frac{D\rho}{Dt} + \rho\,(\nabla \cdot \mathbf{v}) = 0 \qquad (1)$$

where the notation

$$\frac{D}{Dt} = \frac{\partial}{\partial t} + \mathbf{v} \cdot \nabla \qquad (2)$$
denotes the material derivative, a particular kind of time derivative in which the material point is held constant.

2.2 Conservation of momentum: Conservation of momentum is described by the following equation:
$$\rho \frac{D\mathbf{v}}{Dt} = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho\,\mathbf{g} \qquad (3)$$

Figure 1  The mathematical model solution for the couple mold injection of polymeric material

When considering the theory that governs plastic injection molding, we may begin by looking at the fluid flow through the mold. One of the most general laws governing fluid flow is the Navier-Stokes equation (3). In these equations v is the velocity vector, p is the pressure, τ is the stress tensor and g is the body force per unit mass acting on the fluid. Equations (1) and (3) are sufficient to describe the fluid flow, but there is no method to solve them analytically in such a general form. For practical needs these equations are simplified; typical simplifications are based on assumptions that allow the less important terms to be excluded from the equations. Usually, an isotropic material and a symmetric stress tensor are assumed. If the material density is assumed to be constant, ρ = const., equation (1) reduces for the incompressible fluid to:

$$\nabla \cdot \mathbf{v} = 0 \qquad (4)$$

One should take into account that the simplified equation (4) holds only for materials which are not only incompressible but also have a low thermal expansion coefficient, so that their density does not change much with temperature [5]. The assumption of constant material density during the injection molding filling stage does not introduce a large error, because most fluids are practically incompressible at the pressures encountered at the filling stage.

2.3 Conservation of energy: If the material has constant density and constant thermal conductivity, k = const., equation (5) describes the conservation of energy, where C_p is the specific heat at constant pressure:

$$\rho C_p \frac{DT}{Dt} = k\,\nabla^2 T + \boldsymbol{\tau} : \nabla\mathbf{v} \qquad (5)$$

2.4 Constitutive equations: The general form of the constitutive equation for an incompressible non-Newtonian fluid is:

$$\boldsymbol{\tau} = \eta(\dot{\gamma})\,\dot{\boldsymbol{\gamma}} \qquad (6)$$

where τ is the viscous stress tensor, γ̇ is the rate-of-deformation tensor and η is the fluid viscosity. The viscosity of a Newtonian fluid is independent of the shear rate; most polymers are non-Newtonian fluids. Fluids for which equation (6) holds are called generalized Newtonian fluids; the term "generalized" indicates that the viscosity is a function of the shear rate,

$$\eta = \eta(\dot{\gamma}) \qquad (7)$$
The type of viscosity function depends on the material properties. If the viscosity-versus-shear-rate curve turns downward as the shear rate increases, the fluid is called shear thinning or pseudoplastic. Most polymers exhibit this type of non-Newtonian behaviour, and polymer injection molding feedstocks, as materials based on polymer binders, have the same type of viscosity. Since the pioneering work of Hieber and Shen, the Hele-Shaw model has been widely adopted to simulate two-dimensional injection molding. This model approximates 3D polymer melt flows between two flat plates, assuming that the gap thickness is much smaller than the characteristic length of the channel or cavity. The popularity of this strategy can be measured by the number of works available in the literature and by its use in commercial packages such as Moldflow and C-Mold [6, 7]. A more realistic formulation would require a viscosity description consistent with the rheological model and a point-wise evaluation of the shear strain rate and temperature over the gap thickness. The shear-thinning regime can be described by a number of empirical, semi-empirical or theoretical functions.

3. THE CONSTITUTIVE MODEL FOR THE COUPLE MOLD POLYMERIC MATERIAL

In injection molding the fluid flows through a relatively small gap. Applying this assumption, as well as others related to plastic injection molding, we choose the Ostwald-de Waele model (Figure 2) [4].
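The shear-thinning laws mentioned above are easy to explore numerically. The following is a minimal sketch, not taken from the paper, that evaluates two of these empirical functions, the power-law (Ostwald-de Waele) model and the Cross model, over a typical range of shear rates; all parameter values (m, n, eta0, lam) are illustrative assumptions rather than material data used by the authors.

# Minimal sketch (illustrative parameters): two empirical shear-thinning viscosity laws.
import numpy as np

def eta_power_law(gamma_dot, m=1.0e4, n=0.35):
    """Ostwald-de Waele: eta = m * gamma_dot**(n-1); shear thinning for n < 1."""
    return m * gamma_dot ** (n - 1.0)

def eta_cross(gamma_dot, eta0=1.0e4, lam=0.05, n=0.35):
    """Cross model: eta = eta0 / (1 + (lam*gamma_dot)**(1-n)); tends to eta0 at low shear."""
    return eta0 / (1.0 + (lam * gamma_dot) ** (1.0 - n))

gamma_dot = np.logspace(0, 5, 6)    # 1 ... 1e5 1/s, a typical injection molding range
for g in gamma_dot:
    print(f"{g:9.1f} 1/s   power-law: {eta_power_law(g):10.1f} Pa.s   Cross: {eta_cross(g):10.1f} Pa.s")
# both curves decrease with increasing shear rate, i.e. the melt is shear thinning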
Figure 3  Molding phases

Mold cavity filling is characterized by the fountain effect, in which elements of the molten polymeric fluid undergo complex shear and stretching motions as they catch up with the free flow front and then move outwards to the cold walls. This phenomenon can impart considerable orientation to the resulting injection molded part. While molecular orientation is used in extrusion to improve the mechanical properties, in injection molding orientation is generally a nuisance, and it is further exacerbated during the packing stage. The consequent frozen-in stresses can cause parts to become distorted, especially at elevated temperatures. Figure 4 shows the streamlines and the fluid element deformation in fountain flow.
Figure 2  Flow geometry through a cylindrical channel

For unidirectional flow in cylindrical coordinates the constitutive equation becomes:

$$\tau_{rz} = m\left(\frac{dv_z}{dr}\right)^{n} \qquad (8)$$

Figure 4  Fountain flow

We consider that the filling of the mold is isothermal and that the packing phase begins after the filling is finished.
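Equation (8), combined with a momentum balance for fully developed, isothermal flow in a cylindrical channel, gives the standard closed-form power-law velocity profile. The sketch below evaluates it; the channel radius, length, pressure drop and power-law constants are assumed illustrative values, not data from the paper.

# Minimal sketch (assumed geometry and material constants): power-law tube-flow profile.
import numpy as np

m, n = 1.0e4, 0.35        # consistency [Pa.s^n] and power-law index (assumed)
R, L = 2.0e-3, 50e-3      # channel radius and length [m] (assumed)
dP   = 5.0e6              # pressure drop over length L [Pa] (assumed)

def v_z(r):
    """Analytical Ostwald-de Waele (power-law) velocity profile in a tube of radius R."""
    k = (dP / (2.0 * m * L)) ** (1.0 / n)
    e = (n + 1.0) / n
    return (n / (n + 1.0)) * k * (R ** e - np.abs(r) ** e)

for ri in np.linspace(0.0, R, 6):
    print(f"r = {ri*1e3:5.2f} mm   v_z = {v_z(ri)*1e3:8.2f} mm/s")
# the profile is much flatter near the axis than the Newtonian parabola -- the plug-like
# shape typical of a shear-thinning melt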
Figure 5  Frozen skin layer in the mold

The mold cooling process involves a three-dimensional, cyclic, transient heat conduction problem with convective boundary conditions on the cooling channel and mold base surfaces. The mold temperature fluctuates periodically with time; what matters is not the actual mold temperature but the effect of the mold temperature on the heat transfer of the molded part. The fountain effect at the free flow front is responsible for complex shear and stretching motions that result in fluid element deformations and molecular orientation phenomena. The filling stage of injection molding, typically about 1 second in duration, is followed by a very short packing stage (about 0.1 second), necessary to pack more polymer into the mold to offset the shrinkage during cooling. During the packing stage there is no net fluid flow, only motions due to density differences, which require a pVT behaviour analysis. Very high shear rates arise in injection molding operations, so low-viscosity thermoplastic polymer grades are used, both to limit the temperature increase from viscous heating and to facilitate easy filling.

4. CONCLUSIONS
To obtain the mathematical models it is necessary to combine: 1. physical laws (conservation of mass, momentum and energy); 2. material response models (constitutive equations for viscosity and/or viscoelasticity, pVT behaviour); 3. information about the process (geometry and processing variables). The injected parts are real bodies with a highly viscous behaviour. In thermoplastics processing, and in the injection process in particular, the deformation is accompanied by structural changes and by variation of the rheological properties, the melt having a non-Newtonian behaviour. The rheological parameters and physical ...
[1] SERES, I.: Mold Injection, (In Romanian Language). West Printing House, Oradea, 1999, pp.347. ISBN 9739329-48-9 [2] BILOVOL, V. V.: Mould filling simulations during powder injection moulding, 2003, pp.136, ISBN 90806734-4-7 [3] MIHAIL R., STEFAN: The simulation of the polymers processes, (In Romanian Language). Technical Printing House, Bucharest, 1989, pp. 368, ISBN 973-310102-8 [4] NI (RAICU), A: Researches and contributions regarding the optimization of the mold injection process to increase the quality for some components and accessories of polymeric materials, Ph.D. thesis Constanta Maritime University, 2010 [5] OANTA, E.; PANAIT, C.; MARINA, V.; MARINA, V.; LEPADATU, L.; CONSTANTINESCU, E.; BARHALESCU, M. L.; SABAU, A. & DUMITRACHE, C. L.: Mathematical Composite Models, a Path to Solve Research Complex Problems, Annals of DAAAM for 2011 & Proceedings of the 22nd International DAAAM Symposium, ISBN 978-3901509-83-4, ISSN 1726-9679, pp 0501-0502, Editor Branko Katalinic, Published by DAAAM International, Vienna, Austria 2011 [6] RAICU G., STANCA C, Advanced concepts in nanomanipulations, Advanced Topics in Optoelectronics, Microelectronics, and Nanotechnologies IV, Proceedings SPIE Vol. 7297, 72971Z, Jan. 7 2009 [7] OAN E., BRHLESCU M., SABU A., Management of Change Based on Creative InterDomain Syntheses, Proceedings of the 7th International Conference on Management of Technological Changes, September 1st-3rd, 2011, Alexandroupolis, Greece, Editor: Costache Rusu, Vol II, ISBN (Vol. II) 978-96099486-3-0, ISBN 978-960-99486-1-6, Democritus University of Thrace, pp. 589-592
An intermediate steel has properties between those of ferrite and cementite. Compared with ferrite it is harder and less plastic, but not as hard and brittle as cementite. The hardness, tensile strength and plasticity of a steel depend primarily on the ratio of the concentrations of ferrite and cementite. The dependence of hardness, tensile strength, toughness and elongation at break on the carbon content is shown in Figure 1 [3].
Figure 2  Construction of the cylinder liner in two layers [5]: 1 - upper part; 2 - iron insertion; 3 - lower part.

Chromium, a carbide-forming element, has a very favourable influence on resilience and increases the corrosion resistance of steels, both at room temperature and at high temperatures. The corrosion resistance of chromium steels is higher the higher the chromium content [3].
Figure 1  Variation of the mechanical characteristics of carbon steels [5].

The characteristics of a steel are strongly influenced by the presence of alloying elements in its structure.
The FEM calculation of the cylinder liners was performed using the FEMAP software. For this purpose, a cylinder liner with the geometrical dimensions corresponding to the Diesel engine D-103 was taken into account. The structural model of the cylinder liner is shown in Figure 4:
Figure 3  Macrostructure of the bimetallic steel - cast iron cylinder liner [5]: 1 - graphite; 2 - steel layer; 3 - cast iron layer; 4 - transition zone.

Cast iron with type A lamellar graphite shows a predominantly pearlitic basic structure and gives higher strength properties (grades Fc250 - Fc400 or Fcx200 - Fcx350). The mechanical properties of nodular cast irons are strongly influenced by the structure (the modulus of elasticity of cast iron with nodular graphite is between 16,500 and 18,500 daN/mm2, the fatigue strength is superior to that of cast iron with lamellar graphite, with values between 12 and 18 daN/mm2, while the vibration damping capacity is lower than that of irons with lamellar graphite). By alloying the cast iron with Ni, Cr, Mo, Cu and Ti, greatly improved properties are obtained [3]. Engine construction usually uses Al-Si alloys. They normally contain from 2% to 14% Si and various impurities: Fe up to 1.4%, Mg up to about 0.15%, Cu max. 0.6%, etc. The mechanical and technological characteristics of these alloys can be improved by alloying with Mg, Mn, Cu and Ni. The alloys used are [3]: 1) Al - Si - Mg (Si 2 - 14%, Mg 2% and additions of iron, manganese, titanium); 2) Al - Si - Cu (Si 5 - 12%, Cu max. 5% and small additions of manganese, iron, etc.). These alloys, besides the aforementioned properties, have good weldability and can be subjected to heat treatment for hardening and aging. Pimosenko [5] developed a method for obtaining bimetal cast iron - steel cylinders by an overlapping process combined with a process of saturation with the released carbon. Between the two layers there is a transition zone, which produces a smooth change of the carbon concentration from 2.14% down to that of the steel layer. The thickness of the transition zone is 1 to 2 mm (Figure 2 and Figure 3).
Figure 4  Discretized calculation model.

The model consists of 3575 nodes and 3985 solid elements (brick8). Two cylinder liners, one made of steel and one of cast iron, were considered in the calculations, with wall thicknesses of 2 mm, 4 mm, 6 mm and 8 mm.

3. THE CYLINDER LINER LOAD
Figure 5 shows the variation of the forces acting on the D-103 motor mechanism at the speeds taken into account in the theoretical and experimental calculations: 1320, 1500, 1620, 1755 and 1830 rot/min ([7], [8]). In this paper the FEM calculation was performed for the speed n = 1755 rot/min. Corresponding to this speed, the normal force exerted on the cylinder liner is FN = 6825 N. In the FEM model the normal force was distributed along the cylinder liner generatrix of the Diesel engine; thus, the normal force was divided between five nodes of the cylinder liner.

4. CONSTRAINT CONDITIONS
For this structure, the cylinder liner areas used to fix the liner into the cylinder block of the Diesel engine are considered fixed, at both the top and the bottom.
Figure 5  Total force and normal force variation in the D-103 motor mechanism [7].

5. RESULTS OBTAINED BY THE STATIC ANALYSIS

The nodal displacements obtained from the static analysis show that the maximum values occur in the motion plane of the connecting rod, on the driving side of the piston, from top dead center to bottom dead center. The displacement values along the generatrix of the cylinder liner made of cast iron are given in Table 1 and Figure 6.

Table 1. Displacement values for the cylinder liner made of cast iron.

Distance [mm]   2 [mm]     4 [mm]     6 [mm]     8 [mm]
239             0.000      0.000      0.000      0.000
230             0.160782   0.036784   0.015506   0.008339
220             0.212691   0.050691   0.022154   0.012594
210             0.258464   0.062635   0.027618   0.015448
200             0.284983   0.067362   0.030147   0.017460
190             0.321032   0.072299   0.032103   0.018176
180             0.320120   0.070638   0.031355   0.018147
170             0.244519   0.067771   0.029724   0.016774
160             0.187564   0.054977   0.024338   0.014101
150             0.155447   0.042481   0.018933   0.010981
140             0.128731   0.034277   0.015207   0.008859
130             0.104871   0.027946   0.012328   0.007159
120             0.083721   0.022691   0.010020   0.005813
110             0.065307   0.018001   0.008016   0.004655
100             0.049236   0.013833   0.006226   0.003628
90              0.035444   0.010159   0.004603   0.002683
80              0.023978   0.006913   0.003116   0.001808
70              0.014241   0.003948   0.001745   0.001004
55              0.000      0.000      0.000      0.000

Figure 6  The variation of nodal displacements along the cylinder liner generatrix made of cast iron, for the four thicknesses.

Figure 7 presents the distorted shape of the cast iron cylinder liner with 4 mm wall thickness (an amplification factor is applied to the distorted shape in order to highlight the nodal displacements).
Figure 7  Distorted cylinder liner made of cast iron with 4 mm wall thickness.

The displacement values along the generatrix of the cylinder liner made of steel are shown in Table 2 and Figure 8.

Table 2. Displacement values of the cylinder liner generatrix made of steel (the 8 mm column of the original table is not recoverable from the extracted text).

Distance [mm]   2 [mm]     4 [mm]     6 [mm]
239             0.000      0.000      0.000
230             0.122066   0.027926   0.011786
220             0.161183   0.038412   0.016791
210             0.195775   0.047384   0.020958
200             0.215662   0.051055   0.022842
190             0.247414   0.054775   0.024349
180             0.242131   0.053459   0.023751
170             0.185017   0.051332   0.022544
160             0.141990   0.041626   0.018445
150             0.117692   0.032195   0.014362
140             0.097523   0.025989   0.011544
130             0.079511   0.021202   0.009365
120             0.063531   0.017230   0.007619
110             0.049611   0.013686   0.006103
100             0.037408   0.010536   0.004749
90              0.027038   0.007759   0.003519
80              0.018380   0.005300   0.002390
70              0.011004   0.003043   0.001345
55              0.000      0.000      0.000
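As a quick cross-check of the material effect discussed in the conclusions, the sketch below compares the maximum generatrix displacements read from Tables 1 and 2 with the ratio of the Young's moduli of the two materials. The moduli used are generic textbook values assumed for illustration; the paper does not state the exact values used in the FEMAP model.

# Minimal sketch: displacement ratio cast iron / steel versus assumed modulus ratio.
u_max_iron  = {2: 0.321032, 4: 0.072299, 6: 0.032103}   # [mm], from Table 1
u_max_steel = {2: 0.247414, 4: 0.054775, 6: 0.024349}   # [mm], from Table 2

E_steel, E_iron = 2.1e5, 1.6e5      # [MPa], assumed typical values (not from the paper)

print(f"E_steel / E_iron = {E_steel / E_iron:.2f}")
for t in (2, 4, 6):
    print(f"t = {t} mm:  u_iron / u_steel = {u_max_iron[t] / u_max_steel[t]:.2f}")
# in a linear static analysis the displacement ratio follows the inverse stiffness ratio,
# which is what the tabulated results show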
Figure 8  Variation of nodal displacements along the cylinder liner generatrix made of steel, for the four thicknesses.

For the cylinder liners made of cast iron and of steel, a comparison of the nodal displacements is shown in Figures 9-12.

Figure 9  Variation of nodal displacements for cylinder liners with 2 mm wall thickness made of cast iron and steel.

Figure 10  Variation of nodal displacements for cylinder liners with 4 mm wall thickness made of cast iron and steel.

Figure 11  Variation of nodal displacements for cylinder liners with 6 mm wall thickness made of cast iron and steel.

Figure 12  Variation of nodal displacements for cylinder liners with 8 mm wall thickness made of cast iron and steel.

6. CONCLUSIONS
The results obtained from the FEM static analysis of the cylinder liners made of cast iron and steel, for the four thickness variants, lead to the following conclusions: a) the nodal displacement amplitudes for the cylinder liner made of cast iron are greater than those of the steel liner (because of the difference between the moduli of elasticity of the two materials); b) with decreasing cylinder liner thickness, the nodal displacement values increase; c) for cylinder liners with thicknesses over 2 mm, the variation of the nodal displacement values is not significant when the thickness increases; d) as seen from the data presented, the maximum displacements are obtained near the top dead center, after the piston changes its support surface and travels towards the bottom dead center, in agreement with measurements of the diffraction line width and the dislocation density [2]. From the evolution of the crystalline lattice dislocation density it follows that, for cylinder liners with a thickness of 4 mm, the destruction process does not begin by the agglomeration of dislocations in areas where there are obstacles to dislocation movement, such as dissolved atoms, ...
[1] Bortevskii I.T., Mirosnicenko A.F., Pogodaev L.I., "Porsnie kavitationnoi stoikosti dvigatelei vnutrennego sgorania", Kiev, 1980. [2] CRUDU I., SIMIONOV M., GHEORGHIES C., The Tension State in the Superficial Layer in the Vibration Cavitation Case, 8th International Conference on Tribology NORDTRIB98, Ebeltoft, Denmark, 1998 [3] Geru N., s.a., "Materiale metalice. Structura proprietati, utilizari", Editura Tehnica, Bucuresti, 1985. [4] Grnwald V.,"Teoria, calculul si constructia motoarelor pentru autovehicule rutiere", Editura Didactica si Pedagogica, Bucuresti, 1984. [5] PIMOSENKO A.P., Zasita sudovh dizelei ot kavitationh rezrusenii, "Sudostroenie", Leningrad, 1983. [6] Pogodaev L.I., Sevcenko P.A., "Ghidroabrazini I kavitationni irnos sudovogo oborudovania", Sudostroenie, Leningrad, 1984. [7] SIMIONOV M., The Studies and the Researches Concerning the Cavitation Destruction of theCylinder Liners from the Diesel Engine (in Romanian), Ph.D. Thesis, University DUNAREA DEJOS of Galati, Romania, 1997 [8] SIMIONOV M., The cavitation of the cylinder liners from the Diesel engines, Mongabit Printing Galati Press, Galati, 2000.
SECTION III
ELECTRONICS, ELECTRICAL ENGINEERING AND COMPUTER SCIENCE
S.C. FORMENERG S.A., Ministry of Transportation and Infrastructure, The University of South-East Europe Lumina, Romania
ABSTRACT The operational capability - the functionality - represents the defining feature of an installation. The goal of this paper is to establish a manner of evaluating, from a qualitative and quantitative perspective, the operational status of a technical equipment under exploitation. The quantification of this size consists, on the one hand, in estimating the functionality status of a system based on the estimations of a group of experts and, on the other hand, in associating to these opinions a probability related to the operational status of the technical equipment. Thereby, a conjunctive couple of operational status AND reliability is obtained, intended to offer the manager a more reliable image concerning the level of the operational performance of the equipment subjected to technical investigation. Keywords: Fuzzy scale, Fuzzy operators, Experton, Mathematical expectation of the experton, Hamming distance, Ranks correlation factor, Logical conjunctions and disjunctions.
1.
INTRODUCTION
The functionality of a technical equipment is defined by the following concepts:
- the operational status, which includes the basic parameters and the dynamic disruptive phenomena (vibrations, axial displacements, heat, noise) whose admissible values are given in the technical datasheets for various exploitation conditions; the operational status is given as non-numeric (linguistic) estimations using a hierarchical scale arbitrarily established by the assessor;
- the operational safety, given by the functioning probability (the reliability of the technical equipment).

Figure 1  Septenary scale

2. THE ESTIMATION OF THE OPERATIONAL STATUS OF A TECHNICAL SYSTEM

Let there be an estimation scale made up of a pre-established number of levels, congruous with an accepted semantics of that particular scale. For the presented case a septenary scale consisting of seven levels was preferred (Figure 1), where k is the index of the level and λ_k the level associated with the status of the technical system; the figure shows the graph of the non-linear scale together with the graph of the linear scale. In the following we propose a septenary non-linear scale, depicted using two parabolically shaped arcs; the allure of the convex-concave, S-shaped and asymmetrical graph is specific to the logistic function. In order to build the non-linear scale we use the fuzzy logic operators frequently employed in [1]: the concentrative operator (λ²) and the dilator operator (λ^{1/2}), where λ is a certain level of the linear scale. For levels 2 and 3 the concentrative operator is applied, and for levels 5 and 6 the dilator operator is applied [2]; here λ_2^l, λ_3^l, λ_5^l, λ_6^l and λ_2^nl, λ_3^nl, λ_5^nl, λ_6^nl are the levels of the scale in its linear and non-linear version, l is the golden number of the Fibonacci string, l = lim a_n/a_(n-1) ≈ 1.618034, with a_n, a_(n-1) consecutive Fibonacci numbers, and e is the base of the natural logarithms, e = 2.71828. The graph has an inflection point at M(n = 4; λ = 0.5). Because sizes of great impact, such as the operational status and/or the reliability of a technical system, have to be assessed, and considering the slow-convex / fast-concave phenomenological type of evolution of the size under analysis, this construction offers a likelihood superior to the frequently encountered cases where only the concentrative operator λ² is used. It is clear that the decider is the one who, according to the problem, ...
Δλ^l = 1/6 ≈ 0.1666... represents the step between two consecutive levels of the linear scale, the non-linear levels λ_k^nl being obtained from the linear ones λ_k^l as described above. An experton Z is a statistical edifice presented as a table with two columns, in which the relative cumulated frequencies associated with the levels of the scale are registered as intervals:

$$f_{cr}(\lambda_k) = \left[ f_{cr,\inf}(\lambda_k);\ f_{cr,\sup}(\lambda_k) \right]$$

where f_cr(λ_k) represents the relative cumulated frequency, with the inferior/superior boundaries corresponding to the scale levels. Between the left/right elements a_k^s, a_k^d of the two columns of the experton we have a_k^s ≤ a_k^d. The defining size of an experton is its mathematical expectation, also given as an interval.

Sometimes, in the case of the concentrative/dilator fuzzy operators, the need arises to completely or partially change the initial semantics of the scale. For instance, in the case of the non-linear septenary scale proposed in this case study, the linguistics of the levels 1, 2, 3, 4, 5, 6, 7 can be maintained as originally proposed for the linear version; but in the case of the 5th level, to which the word "good" was associated, corresponding to the linear level λ_5^l = 0.667, in the non-linear version, using the dilatation method, this level has been considerably modified compared to its previous value (from λ_5^l = 0.667 to λ_5^nl = 0.861, an augmentation of approximately 29%); consequently, in the following this level will be called "more than better", symbolized by MTB. These simple, formal considerations do not affect the calculus. However, it is noticeable that if the number of levels of a scale increases, the likelihood of the estimation drops: using a scale with eleven levels, with the step Δλ = 0.1, makes the semantic hierarchisation of the levels more difficult. The option for a certain scale, however, is the choice of the analyst. The building of expertons in the case of the logical conjunction or logical disjunction uses the logical operators [3]: ∧, which means AND and also minimum, and ∨, which represents the word OR and has the meaning of maximum. For the relations between entities or sentences we use the logical symbols ∧ (AND) and ∨ (OR).
Table 1

Level k   Linear version λ_k^l   Semantics                Non-linear version λ_k^nl
1         0.000                  Unsatisfactory           -
2         0.167                  Almost unsatisfactory    -
3         0.333                  Less satisfactory        -
4         0.500                  Satisfactory             0.500
5         0.667                  Good                     0.861
6         0.833                  Almost very good         0.935
7         1.000                  Very good                1.000
For the calculation of this size we use: a) for the linear scale,

$$E_{\inf/\sup}(Ex(Z)) = \frac{1}{n-1} \sum_{k} f_{cr,\inf/\sup}(\lambda_k) \qquad (1)$$

b) for the non-linear scale,

$$E_{\inf/\sup}(Ex(Z)) = \sum_{k} f_{cr,\inf/\sup}(\lambda_k)\,\Delta\lambda_k \qquad (2)$$

both relations being expressed in terms of the relative cumulated frequencies f_cr(λ_k). The mathematical expectation of an experton is the average value m(Z) of the intervals associated with the estimations regarding the operational status, given according to the scale in use:

$$E(Ex(Z)) = m(Z) \qquad (3)$$

where

$$m(Z) = \left[ \frac{\sum_j a_{\inf,j}}{m};\ \frac{\sum_j a_{\sup,j}}{m} \right], \qquad j = 1, \ldots, m \qquad (4)$$

[a_inf,j; a_sup,j] being the interval associated with the estimation of the j-th expert and m the number of experts.

3. CASE STUDY

3.1. The operational status of the technical system. Let there be a segment of a hydraulic grid delivering heat to some consumers (Figure 2).

Figure 2  Schematics of the hydraulic grid

Here CA is the accumulating pipeline, EP1, EP2 are electropumps, VR1, VR2 are adjustment valves and B1, B2 are boilers (heaters used at overload). The thermal agent is steam taken from an adjustable outlet of a heating turbine. Heat is delivered to the consumer in two ways:
- version number 1: any of the EP1 and EP2 electropumps is working at full capacity (100%) or EP3 is working at partial load (50%);
The opinions obtained in the two investigation actions are compared using the following tools: the Hamming distance and the ranks correlation factor. The data of the FIRST EXPERTISE and of the SECOND EXPERTISE are given in Tables 2 and 3; they include the assessments of the first/second group of experts, the associated fuzzy intervals and their locations (ranks), corresponding to the preferential hierarchy. The Hamming distances [4] between the first and the second expertise are given, for the subsystems SS1 and SS2, by relations of the form

$$d_H(SS_1 - SS_1^*) = \left| m(SS_1) - m(SS_1^*) \right|, \qquad d_H(SS_2 - SS_2^*) = \left| m(SS_2) - m(SS_2^*) \right|$$

yielding d_H(SS1 - SS1*) = 0.0148. Both Hamming distances are significantly lower than the critical level δ* = 10% set according to [4]: %d_H(SS1 - SS1*) > %d_H(SS2 - SS2*) << δ*. It can be concluded that the second expertise validates the first expertise. This conclusion is also confirmed by the Spearman ranks correlation factor [5]:

$$\rho = 1 - \frac{6 \sum_j d_j^2}{m\,(m^2 - 1)}$$

where ρ is the ranks correlation factor, d_j is the difference between the locations (ranks) assigned by the two groups of experts e_j and e_j*, and m is the number of experts; the sizes of the two groups may or may not be identical. In effect, we are measuring the similarity of the opinions of the two groups of experts. Based on the locations computed in the second and third tables, the fourth table is built. The ranks correlation factor is used to analyse links defined on qualitative features. After replacing, we obtain ρ_1 = 0.80 and ρ_2 = 0.90. According to [5], the values obtained define a strong link: ρ = {0.80; 0.90} ⊂ [0.75; 0.95).

Table 5 presents the experton resulting from the logical disjunction - the consequence of the two actions, the first and the second expertise - together with the mathematical expectation of this experton, E(Ex(SS*)). The expertons of the system in the first-expertise version (S) and in the second-expertise version (S*), as well as those of the subsystems SS1, SS2, SS1*, SS2*, make up Table 6.
Table 2  First expertise

              SS1 subsystem                       SS2 subsystem (heaters-valves)
Expert e_j    A_1j         I_1j           L_1j    A_2j         I_2j           L_2j
e1            MTB          0.861          5       AVG          0.935          2
e2            AVG          0.935          2.5     MTB; AVG     0.861; 0.935   4
e3            MDG; AVG     0.861; 0.935   4       MDG; AVG     0.861; 0.935   4
e4            AFB; FB      0.935; 1       1       AFB; FB      0.935; 1       1
e5            AVG          0.935          2.5     MTG; AVG     0.861; 0.935   4
m(E_q)                     0.9054; 0.9332                      0.8906; 0.9480
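As a check on relation (4), the following minimal sketch recomputes the mean intervals m(E1) and m(E2) of Table 2 from the five expert intervals; the results are the values listed in the table.

# Minimal sketch: mean interval of the expert estimations (relation (4)), data from Table 2.
intervals_E1 = [(0.861, 0.861), (0.935, 0.935), (0.861, 0.935), (0.935, 1.0), (0.935, 0.935)]
intervals_E2 = [(0.935, 0.935), (0.861, 0.935), (0.861, 0.935), (0.935, 1.0), (0.861, 0.935)]

def mean_interval(intervals):
    m = len(intervals)
    return (sum(a for a, _ in intervals) / m, sum(b for _, b in intervals) / m)

print("m(E1) = [%.4f; %.4f]" % mean_interval(intervals_E1))   # [0.9054; 0.9332]
print("m(E2) = [%.4f; %.4f]" % mean_interval(intervals_E2))   # [0.8906; 0.9480]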
Table 3  Second expertise

              SS1 subsystem                        SS2 subsystem (heaters-valves)
Expert e_j*   A_1j*        I_1j*           L_1j*   L_2j*
e1*           MDG          0.861           4.5     1.5
e2*           AVG; VG      0.935; 1        1       3
e3*           MTB          0.861           4.5     4.5
e4*           AVG          0.935           2       1.5
e5*           MTB; AVG     0.861; 0.935    3       4.5
m*(E_q)                    0.8906; 0.9184
(the assessments A_2j* and the intervals I_2j* of the second subsystem are not recoverable from the extracted text)

Table 4  Rank differences between the two expertises

Expert   L_1j    L_1j*   d_1j    d_1j^2    L_2j    L_2j*   d_2j    d_2j^2
e1       5       4.5     +0.5    0.25      2       1.5     +0.5    0.25
e2       2.5     1       +1.5    2.25      4       3       +1      1
e3       4       4.5     -0.5    0.25      4       4.5     -0.5    0.25
e4       1       2       -1      1         1       1.5     -0.5    0.25
e5       2.5     3       -0.5    0.25      4       4.5     -0.5    0.25

Table 5
(the experton resulting from the logical disjunction of the two expertises; the tabulated values are not recoverable from the extracted text)
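The following minimal sketch recomputes the Spearman ranks correlation factors of the case study from the locations (ranks) collected in Tables 2-4; it reproduces the values ρ1 = 0.80 and ρ2 = 0.90 quoted in the text.

# Minimal sketch: Spearman ranks correlation from the ranks of Tables 2 and 3.
def spearman(ranks_a, ranks_b):
    m = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1.0 - 6.0 * d2 / (m * (m ** 2 - 1))

L1,  L2  = [5, 2.5, 4, 1, 2.5], [2, 4, 4, 1, 4]          # first expertise (Table 2)
L1s, L2s = [4.5, 1, 4.5, 2, 3], [1.5, 3, 4.5, 1.5, 4.5]  # second expertise (Table 3)

print(f"rho_1 = {spearman(L1, L1s):.2f}")   # 0.80
print(f"rho_2 = {spearman(L2, L2s):.2f}")   # 0.90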
Note: The symbol (*) refers to the second-expertise information. The size of the group of experts that conducted the first and, respectively, the second expertise is the same, but the persons are different.

Table 7 presents the mathematical expectations of the expertons Ex(SS1), Ex(SS2), Ex(SS1*), Ex(SS2*), obtained according to (3); relation (4) is checked.
3.2. The reliability of the technical system

The reliability results from the schematics presented in Figure 3 (the reliability schematics for operational version number one), for the first operational version.
Table 6 / Table 7  [the expertons Ex(SS1), Ex(SS2), Ex(SS1*), Ex(SS2*) and their mathematical expectations m(SS1), m(SS2), m(SS1*), m(SS2*); the tabulated values are not recoverable from the extracted text]

Figure 3  Reliability schematics for operational version number one

Figure 4

Figure 5
Figure 8  The reliability schematics

In this manner a serial-parallel schematics was obtained. In this new schematics the reliabilities of the elements resulting after the transfiguration [7], [8] are established as: RA = 1 - (1 - Ra)(1 - Rh); RB = 1 - (1 - Rc)(1 - Ra); RQ = 1 - (1 - Rh)(1 - Rc). Therefore RA = 0.9998815, RB = 0.9990000, RQ = 0.9999526. The reliability of the system, according to the schematics of Figure 6, is R(S) = RA (RB Rb + RQ Re - RB Rb RQ Re), from which it can easily be deduced that R(S) = 0.99895. Comparing the three methods, we can conclude that the error is insignificant: R(S)a) = 0.99894; R(S)b) = 0.99891; R(S)c) = 0.99895.

Operational scenario number two. The EP1 and EP2 electropumps are working simultaneously, or only the EP3 electropump is working (at full capacity). Any heater can cover the demand under overload conditions. The reliability schematics corresponding to these conditions is given in Figure 9:
Figure 6

It can be concluded that:

R(S) = (Ra + Rh - Ra Rh)(Rb + Re - Rb Re) Rc + (Ra Rb + Re Rh - Ra Rb Re Rh)(1 - Rc)     (4)

By replacing the values of the reliabilities of the elements a, b, c, e, h, the reliability of the system is obtained as R(S) = 0.99894929.

B. The second method is based on the structure function F(S) of the system, a size defined by the logical sum of the minimal pathways of the reliability schematics presented in Figure 4. A simple evaluation of this schematics highlights the following minimal pathways: ab, he, ace, hcb. Therefore, the structure function is a logical sum of products (the structure function is of SOP type - sum of products [6], [7]):
F(S) = ab + he + ace + hcb
Figure 9  Reliability schematics, operational version two

The expression of the reliability of the system in this version is R(S) = (Ra Rd + Rf - Ra Rd Rf) Rc Rg (Rb + Rc - Rb Rc), where Ra = 0.95; Rb = 0.97; Rc = 0.98; Rd = 0.96; Re = 0.97; Rf = 0.96; Rg = 0.98. It is obtained that R(S)_V2 = 0.95644. This represents a diminishment of approximately 4.5% compared with the operational status of version number one.
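The reliability figures quoted above can be reproduced with a few lines of code. In the sketch below, Ra ... Rg are the element reliabilities listed in the text; Rh is not given on this page, so the value used here is back-calculated from RA = 0.9998815 and is therefore an assumption of the sketch, not a figure taken from the paper.

# Minimal sketch: recomputing the quoted reliability values.
Ra, Rb, Rc, Rd, Re, Rf, Rg = 0.95, 0.97, 0.98, 0.96, 0.97, 0.96, 0.98
Rh = 0.99763                      # assumed (consistent with the RA and RQ values in the text)

# Transfigured serial-parallel schematics (Figure 8), operational version one
RA = 1 - (1 - Ra) * (1 - Rh)
RB = 1 - (1 - Rc) * (1 - Ra)
RQ = 1 - (1 - Rh) * (1 - Rc)
RS_v1 = RA * (RB * Rb + RQ * Re - RB * Rb * RQ * Re)
print(f"RA={RA:.7f}  RB={RB:.7f}  RQ={RQ:.7f}  R(S)_v1={RS_v1:.5f}")   # ~0.99895

# Operational version two (Figure 9)
RS_v2 = (Ra * Rd + Rf - Ra * Rd * Rf) * Rc * Rg * (Rb + Rc - Rb * Rc)
print(f"R(S)_v2={RS_v2:.5f}")                                          # ~0.95644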
The product of the two sizes, the operational status estimation and the reliability R(S), defines the probable operational status of the system, or the operational potential PO of the system. The following values are obtained: PO_version1 = 0.9332 × 0.99893 ≈ 0.93220; PO_version2 = 0.9332 × 0.95644 ≈ 0.89255.

4. CONCLUSIONS
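A minimal sketch of the operational-potential product (the status estimate 0.9332 and the two reliability values are taken from the text; the variable names are ours):

# Minimal sketch: operational potential PO = status estimate x reliability.
status = 0.9332                                   # operational status estimation
for name, R in (("version 1", 0.99893), ("version 2", 0.95644)):
    print("PO %s: %.5f" % (name, status * R))     # 0.93220 and 0.89255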
The estimations of the groups of experts, obtained from both investigation actions (the first and the second expertise), offer the decider a sufficiently realistic image of the operational capacity of the installation under analysis. The likelihood increases when the two operations validate the correctness of the points of view of the groups of experts. The fuzzy scale used must be sufficiently comprehensive to model and interpret, as reliably as possible, the conclusions provided by the experts in situations where such opinions are interpreted relatively ambiguously. The estimation and the reliability are able to offer the decider optimum solutions concerning the implementation of an adequate management for exploiting a technical equipment at a performant operational level. For high values of the reliability of a system, its operational potential is determined by the performance function. This size, OP = 0.932 for version number 1, represents 99.7% of the second-to-last level of the scale, which, by defuzzification, means "almost very good"; version number 2 reports a factor of 95.3% compared to the same performance level.
[1] OFRAN E., BIZON N., IONIȚĂ S., RĂDUCU R., Sisteme de control fuzzy (Fuzzy control systems), Editura All Educațional, București, 1998.
[2] CÂRLAN M., COROIU N., DEMENI I., A Decisional Fuzzy Model on Nonlinear Subintervals of Maximum Presumption, The Sixth Energy System Conference, Torino, Italy, 2006.
[3] MURGAN A. D., Principiile teoriei informației în ingineria informației și a comunicațiilor (Principles of information theory in information and communications engineering), Editura Academiei Române, București, 1998.
[4] KAUFMANN A., ALUJA I. G., Tehnici pentru gestiunea prin experți (Techniques for managing by experts), Editura Expert, București, 1995.
[5] BARON T., BIJI E., TOVISSI L., WAGNER P., ISAIC-MANIU A., KORKA M., POROJAN D., Statistica teoretică și economică (Theoretical and economical statistics), Editura Didactică și Pedagogică, București, 1996.
[6] CĂTUNEANU V. M., MIHALACHE A., Bazele teoretice ale fiabilității (Theoretical fundamentals of reliability), Editura Academiei Române, București, 1983.
[7] CÂRLAN M., Probleme de optimum în ingineria sistemelor tehnice (Optimum problems in technical systems engineering), Editura Academiei Române, București, 1994.
[8] CĂTUNEANU V. M., BACIVAROF I. C., Fiabilitatea sistemelor de telecomunicații (Reliability of communications systems), Editura Militară, București, 1985.
[9] TÂRCOLEA C., FILIPOIU A., BONTAȘ S., Tehnici actuale în teoria fiabilității (Modern techniques in the theory of reliability), Editura Științifică și Enciclopedică, București, 1989.
1.
INTRODUCTION
The paper analyses the CVT electrical characteristics during linear and nonlinear loading. The application showed that one of the main drawbacks of using a CVT is its inability to protect the equipment from voltage interruptions. Therefore, an application was conducted to find out whether the RTT (ride-through transformer) prototype can perform like a typical CVT while protecting the equipment from voltage sags and interruptions.

2. PERFORMANCE: LINE CURRENT DISTORTION

A resistive linear load consisting of incandescent lamps was connected to the output of the CVT. The load was increased in ten equal increments from 0 to 8.3 A (the output current rating of the CVT). Next, a bridge rectifier (such as the type that might be used in electric-vehicle battery chargers) was connected to the CVT. The rectifier and its resistive load (incandescent lamps) formed the complex nonlinear load of the CVT. By adding lamps, this complex load was increased in ten equal increments from approximately 0.4 A (rectifier with no lamps connected) to 8.3 A. Figure 1 and Figure 2 show the line-current distortion during these tests compared with the line-current distortion for the same loads connected directly to the electric-service supply. At no load, the power consumption of the CVT was approximately 120 W (core losses only). With the full linear load, the total losses increased to approximately 134 W (core losses plus load losses); with the full nonlinear load, the total losses dropped to approximately 110 W.
Figure 2  Line-current distortions for a nonlinear load

Notice in Figure 1 and Figure 2 that, while the y-axis current-distortion magnitudes are significantly different, the absolute current-distortion values of the CVT's input current with either linear or nonlinear load are nearly identical. Current distortion at the CVT's input terminals was practically independent of the type of load connected to the output (approximately 40% at no load to approximately 5% at full load). When a linear, low-distortion load was connected to the CVT output, the CVT contributed to the current distortion at its input terminals from the electric-service power ...
For both linear and nonlinear loads, the size of the load affected the input power factor of the CVT. While the CVT was loaded at less than 40% of its output power rating (approximately 3.3 A), the power factor ranged from 0.65 to 0.95. While the CVT was loaded at greater than 40%, the power factor was greater than 0.95 for the linear load and greater than 0.90 for the nonlinear load. For the linear load, the power factor crossed from lagging to leading at approximately 60% load (approximately 5 A). Figure 3 and Figure 4 show the power factors for the linear and nonlinear load (without and with the CVT), respectively.
Figure 5  Schematic of a ride-through transformer

As shown in Figure 5, the ride-through transformer (RTT) is designed to protect single-phase process controls. Unlike traditional CVTs, the RTT uses all three phases of the supply voltage as its input. This enables the RTT to access energy in the unsagged phases of the supply voltage during one- or two-phase voltage sags and interruptions. EPRI PEAC tested the prototype 1-kV, 480-V RTT [12] to determine its ability to protect process controls during single-phase, two-phase and three-phase voltage sags and interruptions. The particular prototype acquired for testing was connected to a load bank that consisted of a mixture of 12 industrial control components: ice-cube relays, motor starters, contactors, a programmable logic controller, a linear dc power supply, and a switch-mode power supply. Figure 6a and Figure 6b show the response of an RTT to phase-to-neutral and phase-to-phase sags.
Figure 4  Power factor for a nonlinear load

The CVT significantly affected the power factor of the load. At low loading, the nonlinear load without the CVT had a power factor as low as 0.44. With the CVT, the total power factor of the nonlinear load ranged from 0.61 to near unity. However, when loaded at less than 50%, the CVT significantly reduced the power factor of the linear, resistive load, which normally has unity power factor. Note that in most CVT applications the aggregate facility loading is significantly small, so it would not be prudent to attempt any power-factor correction at individual CVT operating loads. Power-factor correction ...
Figure 6 a/b  Performance of a ride-through transformer (RTT) during a ten-cycle voltage interruption and voltage sag; voltage regulation of an RTT during a single-phase voltage interruption (top: input; bottom: output)

To get the most out of a CVT with a three-phase input, the most trouble-free voltage phases of the electric-service supply have to be determined. For example, if most voltage sags occur on phase A or B, then the center tap on the transformer primary should be connected to phase C. Although this prototype transformer promises to retail at a price substantially higher than that of a traditional single-phase CVT, the price differential can be greatly reduced by a reduction in size. Because the performance of a traditional CVT greatly depends upon loading, CVTs are often oversized for the connected load. A smaller but more heavily loaded RTT should be able to perform as well as the derated, traditional CVT.

5. CONCLUSIONS
The test results revealed that the prototype RTT protected the connected process controls from most of the applied voltage sags and interruptions. In addition, it was observed that the RTT performance depended greatly on the phase configuration (that is, single-, two- or three-phase) of the voltage sags or interruptions and, to a much lesser extent, on the loading of the RTT output. It ...
[1]. Sola/Hevi-Duty Corp., About Sola/Hevi-Duty, www.sola-hevi-duty.com/about/solahist.html, October 18, 2009. [2]. Advance Galatrek, CVT Background Data, http://www.aelgroup.co.uk/hb/hb003.htm, October 18, 2008. [3]. EPRI, System Compatibility Projects to Characterize Electronic Equipment Performance under Varying Electric Service Supply Conditions, EPRI PEAC, Knoxville, TN, May 2003. [4]. Godfrey, S., Ferroresonance, http://www.physics.carleton.ca/courses/75.364/mp1html/ node7.html,October 18, 2002. [5]. Cadicorp, Ferro-Resonance, Technical Bulletin 004a, www.cadicorp.com, October 18, 2002. [6]. Groupe Schneider, Ferroresonance, No. 190, www.schneiderelectric.com, October 19, 2002. [7]. IEEE, Standard for Ferroresonant Voltage Regulators, IEEE Std. 449-1998, Institute of Electrical and Electronics Engineers, Piscataway, NJ, 1998. [8]. EPRI, Sizing Constant-Voltage Transformers to Maximize Voltage Regulation for Process Control Devices, PQTN Application No. 10, EPRI PEAC, Knoxville, TN, October 1997. [9]. EPRI, Ferro-Resonant Transformer Output Performance under Varying Supply Conditions, PQTN Brief No. 13, EPRI PEAC, Knoxville, TN, May 1993. [10]. EPRI, Ferro-Resonant Transformer Output Performance under Dynamic Supply Conditions, PQTN Brief No. 14, EPRI PEAC, Knoxville, TN, January 1994. [11]. EPRI, Ferro-Resonant Transformer Input Electrical Characteristics during Linear and Nonlinear Loading, PQTN Brief No. 16, EPRI PEAC, Knoxville, TN, February 1994. [12]. EPRI, Testing a Prototype Ferro-Resonant Transformer, EPRI PEAC, Knoxville, TN, unpublished.
High power monolithic bridge drivers are a replacement for discrete transistors and half bridges in applications such as DC motor or stepper motor driving. The device contains four push-pull power drivers which can be used independently or as two full bridges. The driver is controlled by TTL-level logic inputs, and the drivers are equipped with an enable input which controls a whole bridge. Short circuits to ground can be protected against by the circuit: the upper transistor of the output stage is turned off, interrupting the short-circuit current, and when the short is removed the circuit recovers automatically. For DC motor driving in applications where the rotation is always in the same sense, a single driver can be used to drive a small DC motor; the motor may be connected either to the supply or to ground. Keywords: current flow, DC motor, bridge, motor rotation.
1.
INTRODUCTION
In general, monolithic bridge drivers are an attractive replacement for discrete transistors in applications such as DC motor or stepper motor driving. The monolithic bridge driver may be controlled by a logic input, and each pair of drivers (a bridge) may be controlled by an enable input. The drivers can be used independently or as two full bridges. Monolithic bridge drivers are used with short circuit protection or for DC motor driving.
Figure 1  Internal structure

The internal structure of the device consists of four push-pull drivers; the device does not have external emitter connections.

2. SHORT CIRCUIT PROTECTION

The monolithic bridge driver can be damaged by short circuits from the output to ground or to the supply. Short circuits can be protected against by the circuit shown in Figure 2, which consists of a resistor, a capacitor and a transistor. When the output is short-circuited, the input is pulled low after a delay of roughly 10 µs. This period of time is determined by the RC time constant. The upper transistor of the output stage is thus turned off, interrupting the short-circuit current. When the short is removed, the circuit recovers automatically. The waveforms are shown in Figure 3. If the short circuit is removed while V1 is high, the output stays low because the capacitor C is charged to VIH. The system is reset by the falling edge of V1, which discharges C.

Figure 2  This circuit protects a driver from output short circuits to ground

Figure 3  Waveforms illustrating the short-circuit protection provided by the circuit of Figure 2

3. DC MOTOR DRIVING

A single driver may be used to drive a small DC motor. The motor may be connected either to the supply or to ground, as in Figure 4. The control logic may be inverted. The maximum motor current is 1 A. Care should be taken to avoid exceeding the maximum power dissipation of the package. Each motor in this configuration is controlled ...
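The roughly 10 µs protection delay set by the RC network can be estimated from the first-order step response of the capacitor voltage. The component values and the logic threshold in the sketch below are illustrative assumptions, not values from a datasheet or from the paper.

# Minimal sketch (assumed component values): delay of the RC short-circuit protection.
import math

Vs  = 5.0        # supply / logic-high level driving the RC network (assumed)
Vth = 2.0        # input threshold voltage, here taken as V_IH (assumed)
R   = 10e3       # ohms (assumed)
C   = 2.2e-9     # farads (assumed)

tau   = R * C
delay = -tau * math.log(1.0 - Vth / Vs)    # first-order RC step response
print(f"tau = {tau*1e6:.1f} us, delay = {delay*1e6:.1f} us")   # roughly 11 us for these values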
When Vs is relatively low, it may be sufficient to control the differential voltage drop by just limiting the overvoltage with the power supply capacitor, without having to use the Schottky diodes. The feasibility of this solution should nevertheless be verified experimentally in the specific application.
Vinh   +A   +B
H      H    H
H      L    L
L      X    X
Note: L = low; H = high; X = don't care. The enable/inhibit input is used for a free-running stop: it turns off the four transistors of the bridge when low. A very rapid stop may be achieved by reversing the current; this requires a tachometer dynamo and ...
ABSTRACT The harmonic distortion level may be significant in electric propulsion systems, as the main loads usually are variable speed propulsion/thruster drives with frequency converters. It is therefore necessary to be able to predict the harmonic distortion, evaluate its effects, and apply the proper measures to manage the voltage distortion, without functional faults over the lifetime of the installation. Keywords: harmonic distortion, periodic waveform, current converters, voltage converters.
1.
INTRODUCTION
A non-linear load connected to a network will distort the sinusoidal voltages. This deviation from a sinusoidal voltage or current waveform is named harmonic distortion. The distorted waveform may cause electromagnetic interference or erroneous measurement signals. It is particularly necessary that the measurement systems of monitoring and protection devices are made for true RMS measurements in order to function properly. The harmonic distortion level may be significant in electric propulsion systems, as the main loads usually are variable speed propulsion/thruster drives with frequency converters. Rules and regulations normally give guidelines or requirements that limit the harmonic distortion in a ship network. However, these limitations are not a guarantee of proper functionality. It is therefore necessary to be able to predict the harmonic distortion, evaluate its effects, and apply the proper means to manage the voltage distortion, without functional faults over the lifetime of the installation. Distortion of currents and supply voltage waveforms may lead to:
- Accelerated aging of insulation material. Increased power dissipation (losses) in equipment connected to the network, such as generators, motors, transformers, cables etc., caused by the harmonic currents, may lead to overheating, deterioration of the insulation and reduced lifetime of the equipment.
- Overloading of electronic equipment. Increased load current of electronic equipment that has been designed for a sinusoidal voltage supply may cause overheating and malfunction of this equipment.

2. HARMONICS OF VOLTAGE SUPPLY CONVERTERS

A Fourier series, i.e. the infinite series of sinusoidal components and a DC term, can in general express any periodic waveform:
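Writing the DC term as $a_0$, the harmonic coefficients as $a_h$ and $b_h$, and the fundamental pulsation as $\omega_1 = 2\pi/T$ (notation chosen here for clarity), the series referred to is the standard one:

$$f(t) = a_0 + \sum_{h=1}^{\infty}\big[a_h \cos(h\omega_1 t) + b_h \sin(h\omega_1 t)\big]$$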
By observation, one can see that the current into the 12-pulse rectifier (fig. 2) is equal to that of the 6-pulse rectifier, but the 30-degree phase shift of the Y-connected transformer winding determines which harmonics cancel in the primary, as discussed below.
$$THD(u) = 100\% \cdot \frac{\sqrt{\sum_{h=2}^{\infty} u_{(h)}^{2}}}{u_{(1)}}$$

where u(1), i(1) are the fundamental RMS values of the voltage and current, and u(h), i(h) are the RMS values of the h-th harmonic of the voltage (or current). Normally, one will only regard harmonics up to and including the 50th harmonic order. For a 6-pulse converter, the characteristic harmonic current components are of order

$$h = 6n \pm 1, \quad n = 1, 2, 3, \dots$$

3. HARMONICS OF CURRENT SUPPLY CONVERTERS

For a CS converter, the characteristic harmonics will be similar to those of a VS converter. However, the decoupling between the line supply and the motor sides is not as ideal as for the VS converter, and the harmonics of the line side currents are strongly influenced by the motor side harmonics. In addition to the pure harmonics, a CS drive also generates non-integer harmonics in the power network. Non-integer harmonics are interfering components at frequencies that are not exact multiples of the system frequency. In a CS drive these non-integer harmonics are due to the DC pulsation frequencies caused by the machine converter and are therefore synchronous with the motor frequency, according to the following formula:
In a 12-pulse converter (fig.2), multiples of sixth (+/-1) harmonics, which are present in the secondary and tertiary windings of the feeding transformer, will due to the 30-degree shift be cancelled in the primary windings and thus the remaining harmonic current components will be of order:
$$h = 12n \pm 1, \quad n = 1, 2, 3, \dots$$
$$f_i = h \cdot f_N \pm p \cdot f_M$$
where: f_i is the non-integer harmonic component; h the characteristic harmonic component from the drives (1, 5, 7, 11, 13 etc.); f_N the network frequency; p the pulse number of the drive; f_M the machine frequency. The amplitude of the non-integer harmonic components is mainly determined by the size of the DC inductor, i.e. the larger the inductor, the lower the amplitudes. Secondly, the amplitudes are in general much smaller than those of the integer harmonic components.

4. HARMONICS OF IDEAL 6 AND 12 PULSE CURRENT WAVEFORMS

Figure 2 12-pulse converter

For the idealized current waveforms, one can establish the harmonic spectrum by the following relation (since the waveform is an odd function with average zero):
$$i_h = \frac{2}{T} \int_{-T/2}^{T/2} i(t)\, \sin\!\left(\frac{2\pi h}{T}\, t\right) dt$$

The Total Harmonic Distortion (THD) is a measure of the total content of harmonic components in a measured current, THD(i), or voltage, THD(u):

$$THD(i) = 100\% \cdot \frac{\sqrt{\sum_{h=2}^{\infty} i_{(h)}^{2}}}{i_{(1)}}$$
Using this relation, one can find the following spectrum, where i_1 is the amplitude of the fundamental current: i_2 = i_3 = i_4 = 0, i_5 = i_1/5, i_7 = i_1/7 and so on; only harmonics of order h = 6n ± 1 remain, each with amplitude i_1/h.
Figure 3 Harmonics up to the 37th of a six-pulse current waveform (measurements after Alf Kåre Ådnanes)
In the 12-pulse current waveform, the harmonics of order 5, 7, 17, 19 etc. are cancelled due to the 30-degree phase shift of the three-winding transformer. These harmonics still flow in the secondary windings of the transformer, but with opposite phases, so when summed they circulate inside the transformer only and do not flow into the network. The total harmonic distortion of these current waveforms can be found by the relation
$$THD(i) = 100\% \cdot \frac{\sqrt{\sum_{h=2}^{\infty} i_{(h)}^{2}}}{i_{(1)}}$$
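As a quick numerical check of the idealized spectra (harmonic amplitudes of 1/h at the characteristic orders, which is an assumption of this sketch), the relation above can be evaluated in a few lines of Python; truncating at the 50th order reproduces the roughly 30 % and 15 % figures quoted in the conclusions.

```python
import numpy as np

def ideal_thd(pulse_number, h_max=50):
    """THD of an idealized p-pulse current: harmonics h = p*n +/- 1 with amplitude 1/h."""
    n = np.arange(1, h_max)
    h = np.unique(np.concatenate([pulse_number * n - 1, pulse_number * n + 1]))
    h = h[(h > 1) & (h <= h_max)]
    return 100 * np.sqrt(np.sum((1.0 / h) ** 2))   # in percent of the fundamental

print(f"6-pulse : THD(i) = {ideal_thd(6):.1f} %")    # roughly 30 %
print(f"12-pulse: THD(i) = {ideal_thd(12):.1f} %")   # roughly 15 %
```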
Figure 4 Characteristic harmonics of a six- and a twelve-pulse current waveform. These are ideal current waveforms; in practice, impedance due to inductance, resistance and capacitance alters the current shape (fig. 5). The corresponding harmonic spectra that can be measured in a typical installation with VS converters are shown in Figure 4.
Figure 5 Characteristic harmonics of a 12-pulse current waveform, comparing the real values of a practical installation with the ideal amplitudes

5. CONCLUSIONS
The total harmonic distortion of the current is about THD(i) = 30% for the 6-pulse current and about 15% for the 12-pulse current.

6. REFERENCES
[1] DORDEA, Şt., ZBURLEA, E., Active steering by 4 electric thrusters, Constanta Maritime University Annals, Year XII, 15th issue, Editura Nautica, Constanta, 2011, ISSN 1582-3601
[2] DORDEA, Şt., ZBURLEA, E., Electric propulsion with turbo generators, Constanta Maritime University Annals, Year XII, 15th issue, Editura Nautica, Constanta, 2011, ISSN 1582-3601
[3] DORDEA, Şt., ZBURLEA, E., Active steering Electric thrusters, Constanta Maritime University Annals, Year XI, 14th issue, Editura Nautica, Constanta, 2010, ISSN 1582-3601
ABSTRACT It is necessary to be able to predict harmonic distortion, evaluate its effects, and apply the proper means to manage the voltage distortion, without functional faults over the lifetime of the installation. There are two types of simulation tools available: time domain simulation and, more commonly applied, tools which calculate in the frequency domain. The benefit of the frequency domain calculation tools is that the time and work for modeling and calculation of large systems is much shorter than for a time domain simulation. However, the accuracy will normally be lower, since one has to decide the harmonic content of the load current, which in reality depends on the network configuration and can only be determined by time domain simulation or by equivalent figures from similar systems. Special considerations should be made for PWM type controllers and for the use of passive filters, where time domain simulations are strongly recommended in order to obtain the results that are necessary for correct design and dimensioning. Keywords: harmonic distortion, periodic waveform.
1. INTRODUCTION
The harmonic currents drawn by a non-linear load from the network will be distributed in the network and flow through the other equipment in the power network. If the load is regarded as a source of harmonic current components, it is obvious that the harmonic currents will flow through the paths with the lowest impedance for the harmonics. These are normally the running generators, large motors, or large distribution transformers to other (higher or lower) voltage levels.
There are two types of simulation tools available: time domain simulation and, more commonly applied, tools which calculate in the frequency domain. The benefit of the frequency domain calculation tools is that the time and work for modeling and calculation of large systems is much shorter than for a time domain simulation. However, the accuracy will normally be lower, since one has to decide the harmonic content of the load current, which in reality depends on the network configuration and can only be determined by time domain simulation or by equivalent figures from similar systems. Special considerations should be made for PWM type controllers (Fig. 1 and 2) and for the use of passive filters, where time domain simulations are strongly recommended in order to obtain the results that are necessary for correct design and dimensioning.

2. FREQUENCY DOMAIN HARMONIC INJECTION
Figure 1 Thruster drives with diode bridge and drilling drives with thyristor bridge running simultaneously (by Stefan Dordea)
In this method, the nonlinear load is represented by a harmonic current source, injecting harmonic currents into the network. The network, in turn, is modeled as a system where its various parts (generator, cable, transformer, motors, etc.) are represented by an appropriate impedance model, giving the impedance seen by the harmonic frequency currents injected by the harmonic current source. An example of such a model is shown in Fig. 3, with a harmonic current source representing the frequency converter, and impedance models for generator, cable, transformers, and loads, e.g. motors. By calculating the resulting voltages from the harmonic currents, the harmonic voltages are found in the branches or points of interest. Summing these up, the harmonic voltage distortion is finally found. There are several calculation software programs assisting in building up and calculating harmonic distortion in the frequency domain. Building up large networks is quite simple using library models, and calculation times are short.
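A minimal numerical sketch of this injection method is given below; the network impedance, the supply data and the injected harmonic currents are assumed illustrative values, not data from any particular installation.

```python
import numpy as np

f1 = 60.0                          # fundamental frequency [Hz] (assumed)
w1 = 2 * np.pi * f1
R_net, L_net = 0.01, 50e-6         # assumed equivalent network resistance [ohm] and inductance [H]
U1 = 690.0 / np.sqrt(3)            # fundamental phase voltage [V] (assumed)

# Idealized 12-pulse injection: harmonic currents of amplitude I1/h at h = 12n +/- 1
I1 = 400.0                         # fundamental load current [A] (assumed)
orders = [11, 13, 23, 25, 35, 37, 47, 49]
I_h = {h: I1 / h for h in orders}

# Harmonic voltage at the bus: U_h = |Z(h)| * I_h, with Z(h) = R + j*h*w1*L
U_h = {h: abs(R_net + 1j * h * w1 * L_net) * i for h, i in I_h.items()}

thd_u = 100 * np.sqrt(sum(u ** 2 for u in U_h.values())) / U1
for h in orders:
    print(f"h={h:2d}: U_h = {U_h[h]:6.2f} V")
print(f"THD(u) = {thd_u:.1f} %")   # a few percent for these assumed figures
```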
3. TIME DOMAIN NETWORK SIMULATION

By building up a circuit model of the network, with discrete impedance models, one can perform a time domain simulation of the system. Initial values of voltages and currents are chosen and, after some simulation time, the system has stabilized sufficiently to represent stationary conditions.
By taking one fundamental period of the voltage or current waveform of interest, one can then perform a Fourier transformation and find the harmonic spectrum at any point or branch of the system. A simplified circuit model for the same system as in Figure 3 is depicted in Figure 4.
It is quite obvious that a complex network is cumbersome to model and time consuming to simulate. Time step in the simulation must also be relatively short in order to give accurate results. The great benefit is that this model gives an accurate calculation of the voltages and currents, and also the harmonic spectrum of the nonlinear loads.
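The post-processing step described here (one fundamental period, then a Fourier transformation) can be sketched as follows; the waveform synthesized below is an idealized six-pulse line current, used only as a stand-in for the output of a real time-domain circuit simulation.

```python
import numpy as np

f1, N = 60.0, 3600                       # fundamental frequency and samples per period (assumed)
t = np.arange(N) / (N * f1)              # one fundamental period
theta = 2 * np.pi * f1 * t

# Idealized six-pulse line current: 120-degree rectangular conduction blocks
i = np.zeros(N)
i[(theta > np.pi / 6) & (theta < 5 * np.pi / 6)] = 1.0
i[(theta > 7 * np.pi / 6) & (theta < 11 * np.pi / 6)] = -1.0

spectrum = np.abs(np.fft.rfft(i)) * 2 / N   # harmonic amplitudes (bin k = harmonic k)
fund = spectrum[1]
thd = 100 * np.sqrt(np.sum(spectrum[2:51] ** 2)) / fund

for h in (5, 7, 11, 13):
    print(f"h={h:2d}: {spectrum[h] / fund:.3f} of fundamental (ideal: {1 / h:.3f})")
print(f"THD(i) up to the 50th order: {thd:.1f} %")
```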
In networks with high voltage distortion one should avoid the use of tube lighting with capacitive compensators. An example of waveforms of the switchboard voltage and of the currents to the thruster, calculated with the time domain analysis program KREAN, is shown in the four figures below.
4. CONCLUSIONS
Frequency domain calculations are widely used because of the simple modeling and short calculation times. If the harmonic representation of the converter currents is accurate, the results are also accurate. It is not always straightforward to find the harmonic representation, which may strongly be influenced by the network characteristics.
A time domain simulation can then be used, either for a complete calculation, or for a part of the system that is representative enough to give a good harmonic model of the converter, feeding these results into a frequency domain calculation for the complete system. To illustrate the potential faults: comparing the results of the time domain simulation with a corresponding frequency domain calculation of the same system, using the ideal harmonic currents of a 12-pulse rectifier, the real voltage distortion gives THD = 8%,
[1] DORDEA, Şt., ZBURLEA, E., Active steering by 4 electric thrusters, Constanta Maritime University Annals, Year XII, 15th issue, Editura Nautica, Constanta, 2011, ISSN 1582-3601
[2] DORDEA, Şt., ZBURLEA, E., Electric propulsion with turbo generators, Constanta Maritime University Annals, Year XII, 15th issue, Editura Nautica, Constanta, 2011, ISSN 1582-3601
[3] DORDEA, Şt., ZBURLEA, E., Active steering Electric thrusters, Constanta Maritime University Annals, Year XI, 14th issue, Editura Nautica, Constanta, 2010, ISSN 1582-3601
[4] DORDEA, Şt., ZBURLEA, E., Electric drives for azimuth propulsors, Constanta Maritime University Annals, Year XI, 14th issue, Editura Nautica, Constanta, 2010, ISSN 1582-3601
[5] DORDEA, Şt., ZBURLEA, E., Proportional steering system, Constanta Maritime University Annals, Year XI, 13th issue, Editura Nautica, Constanta, 2010, ISSN 1582-3601
[6] DORDEA, Şt., ZBURLEA, E., Anti-hunt control system, Constanta Maritime University Annals, Year XI, 13th
ABSTRACT The paper proposes another point of view on problems regarding the spectrum of the impulse signal, a dissident among signals. The numerical calculation, applied in an absolutely identical manner and with positive results to any other kind of signal known under the form of a discrete numerical function $f_i(t_i)$, $i = 1, 2, 3, \dots$, for which $\Delta t = t_i - t_{i-1}$, regardless of the fact that it may have discontinuities here and there, cannot be applied to the impulse signal. This has determined me to change, only now, and in a radical manner, the software processing the signals, which included the impulse type signals. Keywords: Dirac function, periodic signal, Fourier transform, spectrum.
1. INTRODUCTION
Throughout the paper I will be using for the impulse signal an expression which is quite similar to the Dirac function $\delta(t)$, written as follows:

$$\delta(t) = \begin{cases} 0, & t \neq t_0 \\ \infty, & t = t_0 \end{cases}$$

and

$$\int_0^{\infty} \delta(t)\,dt = 1 \qquad (1)$$

It is easily seen that the Dirac function has no definite form, but rather properties. The impulse signal I will be working with has a rectangular shape, with parallel flanks, but it could be defined with other shapes too (triangular, parabolic, etc.), on condition that the properties (1) are respected. The shape of the signal is shown in figure 1 and, in order for it to be even closer to a Dirac function, it will also follow the rule

$$A \cdot dt\,\big|_{dt \to 0} = 1 \qquad (2)$$

so as to enable, for values $dt \to 0$, the value of the function to tend to infinity, $A\big|_{dt \to 0} \to \infty$. The impulse signal may also be practically realised as a periodic signal with the period $T_0$. In this case the moment $t_0$ at which the signal is defined has the role of a phase, and it will thus be used as the phase in applications.

2. THE TERMS OF THE FOURIER SERIES (THE HARMONIC COMPONENTS)

Considering the impulse signal as a periodic signal with the pulsation $\omega_0 = 2\pi/T_0$, the terms of the Fourier series will be as follows.

2.1 The $a_0$ term (movement on the horizontal axis)

Taking into account (2) we arrive at

$$a_0 = \frac{1}{T_0}\int_0^{T_0}\delta(t)\,dt = \frac{A}{T_0}\int_{t_0}^{t_0+dt}dt = \frac{A\,dt}{T_0} = \frac{1}{T_0} \qquad (3)$$

From (3) it is concluded that an unperiodic signal (with the period $T_0 \to \infty$) has zero movement. The shorter the period is, the more noticeable the movement of the signal will be (the constant component will have a bigger value).

2.2 The harmonic components of order i

We will subsequently take into account (2) and the properties of the trigonometrical functions when their argument is infinitely small, according to which $\sin(\alpha)\big|_{\alpha \to 0} = \alpha$ and $\cos(\alpha)\big|_{\alpha \to 0} = 1$.

$$a_i = \frac{2}{T_0}\int_0^{T_0}\delta(t)\cos(i\omega_0 t)\,dt = \frac{2A}{T_0}\int_{t_0}^{t_0+dt}\cos(i\omega_0 t)\,dt = \frac{2A}{i\omega_0 T_0}\big[\sin(i\omega_0(t_0+dt)) - \sin(i\omega_0 t_0)\big] \approx \frac{2A\,dt}{T_0}\cos(i\omega_0 t_0)$$

$$a_i = \frac{2}{T_0}\cos(i\omega_0 t_0), \quad i = 1, 2, 3, \dots \qquad (4)$$

$$b_i = \frac{2}{T_0}\int_0^{T_0}\delta(t)\sin(i\omega_0 t)\,dt = \frac{2A}{T_0}\int_{t_0}^{t_0+dt}\sin(i\omega_0 t)\,dt = -\frac{2A}{i\omega_0 T_0}\big[\cos(i\omega_0(t_0+dt)) - \cos(i\omega_0 t_0)\big] \approx \frac{2A\,dt}{T_0}\sin(i\omega_0 t_0)$$

$$b_i = \frac{2}{T_0}\sin(i\omega_0 t_0), \quad i = 1, 2, 3, \dots \qquad (5)$$

The amplitude of the harmonic of order i is

$$A_i = \sqrt{a_i^2 + b_i^2} = \frac{2}{T_0} \qquad (6)$$

and the phase

$$\varphi_i = \arctan\frac{a_i}{b_i} \qquad (7)$$

From (6) and (7) it can be seen that for an impulse signal all the harmonic components have a constant value according to (6). If the impulse is defined in the origin ($t_0 = 0$ or $t_0 = T_0$), the phase is $\pi/2$. For impulse signals defined anywhere on the interval of a period, the phase of the harmonic components varies with the order of the harmonic function. The important thing is that up to $i = \infty$ all the harmonic functions have a constant amplitude. The larger the frequency of the impulse signals is, the proportionally larger the amplitude of the harmonics is, according to (6), which can also be written as

$$A_i = 2 f_0 \qquad (8)$$

where $f_0$ is the frequency of the impulses.

3. THE FOURIER TRANSFORM AND THE SPECTRAL FUNCTION

We know the transformation formula of a time function $f(t)$ into a function of complex variable $F(j\omega)$, where $\omega$ is the pulsation,

$$F(j\omega) = \int_{-\infty}^{\infty} f(t)\,e^{-j\omega t}\,dt \qquad (9)$$

called the Fourier transform. The relationship (9) can be written using Euler's relations as

$$F(j\omega) = \mathrm{Re}(\omega) + j\,\mathrm{Im}(\omega) \qquad (10)$$

In (10) the function $f(t)$ is replaced with the impulse function described above, with the following additions:
- the inferior limit of integration will be 0, because the signal is defined on positive time intervals;
- the superior limit of integration will be an integer multiple n of the period $T_0$, respectively the duration of the periodic signal.
We thus obtain:

$$\mathrm{Re}(\omega) = \int_0^{nT_0}\delta(t)\cos(\omega t)\,dt = A\sum_{i=1}^{n}\int_{(i-1)T_0+t_0}^{(i-1)T_0+t_0+dt}\cos(\omega t)\,dt \approx A\,dt\sum_{i=1}^{n}\cos\{\omega[(i-1)T_0+t_0]\} = \sum_{i=1}^{n}\cos\{\omega[(i-1)T_0+t_0]\} \qquad (11)$$

$$\mathrm{Im}(\omega) = \int_0^{nT_0}\delta(t)\sin(\omega t)\,dt = A\sum_{i=1}^{n}\int_{(i-1)T_0+t_0}^{(i-1)T_0+t_0+dt}\sin(\omega t)\,dt \approx \sum_{i=1}^{n}\sin\{\omega[(i-1)T_0+t_0]\} \qquad (12)$$

The module of the Fourier transform is called a spectral function $S(\omega)$ and has the expression

$$S(\omega) = \sqrt{\mathrm{Re}^2(\omega) + \mathrm{Im}^2(\omega)} \qquad (13)$$

representing the spectrum on the continuous domain of the signal pulsation. It can be seen from the above that the maxima of the spectral function are obtained from the condition

$$\frac{d}{d\omega}\sqrt{\mathrm{Re}^2(\omega) + \mathrm{Im}^2(\omega)} = 0 \qquad (14)$$

The numerical solving of the above raises a rarely encountered problem which is difficult to deal with. Normally, the differentiation operator dt from the analytical calculation of the integrals becomes, when solving numerically, a simple finite number $\Delta t$, however small but not null. The solving precision and the duration of the calculations depend on the concrete, non-null value of $\Delta t$. For the impulse signal, whose duration is $dt \to 0$, solving through numerical methods requires integrating on the interval $[t_0, t_0+dt]$, which becomes the interval $[t_0, t_0+\Delta t]$ and has to be divided into subintervals with small but finite values, under the value of $\Delta t$. This means integrating with two different norms regarding the division of intervals (namely, the differentiation operator): one of them, marked dt, for the interval $(-\infty, +\infty)$ (respectively $[0, nT_0]$ for finite duration signals), and another one, smaller than dt, of the form d(dt), for the interval $[t_0, t_0+dt]$, which is not correct, since the basic formula (9) starts from an integration with a unique norm of this operator. This is why the numerical calculation, applied in an absolutely identical manner and with positive results to any other kind of signal known under the form of a discrete numerical function $f_i(t_i)$, $i = 1, 2, 3, \dots$, for which $\Delta t = t_i - t_{i-1}$, regardless of the fact that it may have discontinuities here and there, cannot be applied to the impulse signal.

Due to the complexity of equation (14), the analytical solving can be easily done only for concrete values of n. As a result of the calculations, considering for simplicity a signal defined at $t_0 = 0$, we get:
- for n = 1, $S(\omega) = 1$ for any $\omega$, namely a continuous and constant spectrum of value 1;
- for n = 2, $\omega = \omega_k = k\,\omega_0$, $k = 0, 1, 2, 3, \dots$ and $S(\omega_k) = 2$, namely a periodic variable spectrum, reaching maxima at pulsations which are multiples of the fundamental pulsation of the signal; for the values $\omega = k\pi/T_0 = k\,\omega_0/2$, $k = 1, 3, 5, \dots$ the spectral function has null values;
- for n > 2 we preferred the numerical solving, which also confirms the cases n = 1 and n = 2 and which shows that the maxima are equal to the duration n of the signal, occurring at the multiples of the fundamental pulsation.
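These statements can be checked with a short numerical sketch (the period T0 = 1 s and t0 = 0 are assumed values), evaluating S(ω) directly from relations (11)-(13):

```python
import numpy as np

T0 = 1.0                             # period of the impulse train (assumed value)
w0 = 2 * np.pi / T0                  # fundamental pulsation
t0 = 0.0                             # position of the impulse inside the period
w = np.linspace(1e-6, 6 * w0, 4000)  # pulsation axis

def spectral_function(w, n, T0=T0, t0=t0):
    """S(w) = sqrt(Re^2 + Im^2), with Re and Im given by relations (11)-(12)."""
    i = np.arange(1, n + 1)
    phase = np.outer(w, (i - 1) * T0 + t0)       # w * [(i-1)T0 + t0]
    return np.hypot(np.cos(phase).sum(axis=1), np.sin(phase).sum(axis=1))

for n in (1, 2, 8):
    S = spectral_function(w, n)
    S_at_kw0 = spectral_function(np.arange(1, 6) * w0, n)
    print(f"n={n}: max S = {S.max():.2f}, S(k*w0) = {np.round(S_at_kw0, 2)}")
# n = 1 gives a flat spectrum of value 1; for n >= 2 the maxima equal n and
# occur at the multiples of the fundamental pulsation w0.
```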
5. REFERENCES
[1] CARTIANU, Gh. ş.a., Signals, circuits and systems, Editura Didactică şi Pedagogică, Bucureşti, 1980.
[2] MATEESCU, A., Signals, circuits and systems, Editura Didactică şi Pedagogică, Bucureşti, 1984.
[3] MATEESCU, A., DUMITRU, N., STANCIU, L., Signals and systems, Editura Teora, Bucureşti, 2001.
[4] NAFORNIŢĂ, M., ISAR, A., Time-frequency representations, Editura Politehnica, Timişoara, 1998.
[5] OPPENHEIM, A.V., SCHAFER, R.W., Discrete-Time Signal Processing, Prentice-Hall, 1989.
[6] SĂVESCU, M., Signals, circuits and systems. Problems, Editura Didactică şi Pedagogică, Bucureşti, 1982.
[7] STANOMIR, D., STĂNĂŞILĂ, O., Mathematical methods in signal theory, Editura Tehnică, Bucureşti, 1980.
[8] TEOLIS, A., Computational signal processing with wavelets, McGraw-Hill, 1999.
1 Constanta Maritime University, Romania, 2 Gr. T. Popa University of Medicine and Pharmacy, Iasi, Romania
ABSTRACT The technological insertion in medicine in the past years has dramatically changed medical practice. Based on the need to optimize the medical team reaction time and on the request for better medical information management, under continuous expansion with the technological advance, the telemedicine concept is taken into account and implemented more and more. The obvious advantages, such as the power of patient and illness management, availability at any time, scalability of the information, better access to medical care and lower medical services cost, are balanced by side effects such as the vulnerability of the confidentiality concept, additional harm put on the patient and even on the quality of the health services and the doctor-patient relationship. The question we are asking here is whether side effects such as the mentioned ones should worry us. In this paper we address this question, underlining the ethical aspects related to the implementation of telemedicine in current medical practice. Keywords: telemedicine, patient, bioethics.
1. INTRODUCTION

About 10-20 years ago, the majority would have considered bioethics unrelated to technology. The need for better diagnostics, the increase in the quantity and quality of information, the optimization of medical treatment and the wish for a better quality of life asked for technological applications in medicine. Thus, the insertion of technology into medicine evolved more and more. Combined with the popularization of the abuse cases in medical research (abuses that have happened since the beginning of the 20th century), such as the Tuskegee experiment or the Nazi experiments, this connection won a very important place and has been getting stronger and stronger. Today it is actually necessary for a better understanding of the moral effects. The moral effects are referenced to some ethical principles on medical research on human subjects, extended to the post-research period, in long term usage. The four most important bioethical principles are the principle of justice, the principle of autonomy, the principle of non-maleficence and the principle of beneficence. They were defined and framed into documents and guides for medical research on humans. The already mentioned Tuskegee case (following up untreated syphilis for about 40 years) or the Nazi experiments (for instance, humans were placed in cold, frozen water, half dressed, to check the limits of the human body, in order to improve the pilots' protection equipment in case of plane failure) led to the well known Nuremberg Code and the Declaration of Helsinki. Medical technology allowed the rise of new medical sub-domains. The current focus is on telemedicine, as the totality of the technologies that provide remote investigations. From this moment on, even the patient-doctor relationship knew variants, moving from the classic face-to-face one to the technologically mediated one.

2. TELEMEDICINE

Telemedicine has already become not only a widely used term in current medical practice, based on technology, but also a concept that is evolving every day, a trend. Romania has known it for several years now, in various applications from medical training to remotely assisted surgery. Even if the standard definition is not completely applied, there are certain specific elements that are used, both in medical follow-up for long term surveillance of chronically diseased patients and in acute cases, post surgery. In this last case the patient carries monitoring equipment for the recording of clinical, vital signs. Later, after a period decided together with the physician, either the data are uploaded from home by the patient himself to the health care central server (using the Internet) or the patient brings the device in person to the doctor's office, where the data are downloaded locally and read by the physician. Either way the final result is the same: the physician reads the information and presents the results to the patient together with further therapy suggestions. There are cases in which chronically ill patients carry the equipment all the time, without an on-line connection, and in case of any emergency a panic button activates a GPS tracking system from the health care center that optimizes the intervention time for the patient. Telemedicine is a technology used for health care service delivery especially when the distance to the health care provider is a critical item. The currently used types of telemedicine systems are the communication between hospitals, between hospitals and primary health care providers, and between physicians in general medicine and specialists in various medical fields. It can be seen as an example of telematics applied in health care services, with a wide application range, because it includes informatics or information technology, with the aim of increasing health care provision efficiency or better health care system management. It intends to increase the general health status of society (comfort and prevention). By telemedicine, the remote health care
3. DISCUSSIONS
The bioethics discourse definitely supposes balancing the advantages and benefits against the side effects the technology raises. The main interest is, on one side, the way this technology improves the patient's quality of life and, on the other side, the side effects the users could potentially face.
Among the advantages we mention:
- the optimization of the emergency intervention time, in case the patient faces a medical condition requesting such an intervention;
- the possibility of continuous patient surveillance inside and outside the hospital, keeping the daily routine, by setting up and personalizing the system at home or based on the user preferences; this is the scalability option available;
- the possibility to share the medical decisions and thoughts among various other practitioners on different sites;
- the data from the patients have a high level of accuracy, due to the consistent length of the period over which the medical status and vital signs are surveyed.
The disadvantages are more related to the moral aspects than to the technical issues, as follows:
- Sometimes the cost prohibits the patient from getting access to this technology. The justice principle is in this way somehow broken: not all the patients would afford it.
- Limited amount of equipment available at a given moment. The increased number of persons in need produces a limitation of the technical supplies the health care providers can share. It is thus a problem of justice, of resource allocation. On average, in the case of a theoretical approach in which each person pays the same money to the medical system, how and who decides which patient will be the first, how would the priority list be filled up? The principle of justice is however under breaching risk.
- Patient marginalization by social isolation and the lack of social contact. This issue is a big problem especially in the case of mentally ill patients, with a medical prescription of social contact. A secondary issue is that the telemedicine system cuts out the human feelings the physician could get from real life contact.
- The fact that the patient should have technical skills prior to using the system, or should acquire them by the time he is supposed to use the telemedicine resources, is a real barrier for many users. The learning curve is quite long as the patients' average age is higher. As the
Our intention in this paper is only to inform both the doctors and the patients, and even those designing such technologies, about the related issues. The main risks associated with this technology are related to autonomy and confidentiality breaches, to the informed consent from the patient's side and to the damage of the human-to-human relationship. Telemedicine has a great health care providing potential, especially in areas that are inaccessible on a daily basis. It provides virtual clinics, but it does not exclude the face-to-face meeting, needed in order to better get a feeling of the patient's medical status and thoughts about the treatment and the procedures. This is indeed important for those patients without family support, living alone. In spite of the general idea that bioethics is many times delaying the technological process, which is however getting ahead, its aim is only to inform in any possible way about both the good and the bad effects.
[1] KLEMENS, J., Ethical considerations of privacy and cyber-medical information, Legal: Cyber Law, April 2, 2008
[2] NORMANDIN, S. R., An international quality care solution, Patient Safety & Quality Healthcare, January/February 2008
[3] KARANTH, G. K., Patient, doctor and telemedicine. Some ethical issues, http://nbc.ijme.in/nbcpdfs/KaranthGK.pdf
[4] SILVERMAN, R. D., Current legal and ethical concerns in telemedicine and e-medicine, Journal of Telemedicine and Telecare, 2003; 9: S1
[5] MADDOX, P. J., Ethics and the brave new world of e-health, Online Journal of Issues in Nursing, 2002; 8
[6] YADAV, H., LIN, W. Y., Patient confidentiality, ethics and licensing in telemedicine, Asian Pacific Journal of Public Health, 2002; 13: S
[7] YEO, C. J. J., Ethical Dilemmas of the Practice of Medicine in the Information Technology Age, Singapore Med J 2003, Vol 44(3): 141-144
[8] BEAUCHAMP, T. L., CHILDRESS, J. F., Principles of Biomedical Ethics, 4th ed., New York: Oxford University Press, 1994
[9] STRODE, S. W., GUSTKE, S., ALLEN, A., Technical and Clinical Progress in Telemedicine, JAMA 1999; 281: 1066-8
[10] PERRY, J., BEYER, S., FRANCIS, J., HOLMES, P., Ethical issues in the use of telecare, ADULTS SERVICES report 30, Great Britain, May 2010
[11] OGAWA, M., TOGAWA, T., Monitoring Daily Activities and Behaviors at Home by Using Brief Sensor, 1st Annual International IEEE-EMBS Special Topic Conference on Microtechnology in Medicine & Biology, Lyon, France, 2000
[12] DAVID, P. L., Assessing technological barriers to telemedicine: technology-management implications, IEEE Transactions on Engineering Management, vol. 46, no. 3, August 1999
[13] PENDERS, J., Privacy in (mobile) telecommunications services, Ethics and Information Technology, no. 6, pp. 247-260, 2004
[14] ROSS, E., Smart Home: Managing Care Through the Air, IEEE Spectrum, December 2004, pp. 14-19
[15] STANBERRY, B., Telemedicine: barriers and opportunities in the 21st century, Journal of Internal Medicine, no. 247, pp. 615-628, 2000
ABSTRACT This paper presents two methods for reducing the torque ripple of the conventional Direct Torque Control (DTC) of induction motor drives. The methods were implemented on a Digital Signal Controller, their effectiveness was evaluated for a set of motor operating points and the experimental results are comparatively presented. Keywords: Direct torque control, torque ripple, pulse width modulation, space vector modulation.
1. INTRODUCTION
Direct Torque Control (DTC) is a high performance control strategy for induction motor drives fed by Voltage Source Inverters (VSI) [1]. The main advantages of DTC are the simple control scheme, a very good torque dynamic response, and the absence of inner current loops and of rotor position/speed measurement. These advantages make DTC an attractive option for applications that require a very fast torque response, a torque reference input (no speed control, but torque control only) and improved reliability in harsh environments (dust, vibration conditions), due to the absence of the rotor position transducer. However, the main drawback of the conventional DTC is the high torque ripple generated in steady state operation. In conventional DTC, the voltage vector selection is based on the torque and flux errors, but small and large errors are not differentiated by the hysteresis controllers. The voltage vectors are applied for the entire sample period, even for small errors, resulting in large torque overshoots in steady-state regime. One approach to reduce the ripple is to increase the number of voltage vectors applied in a sampling period, using some sort of pulse-width modulation (PWM). Among the possible choices, a simpler one is to use two voltage vectors: a nonzero one, applied for a fraction of the sampling period, and the null vector for the rest. The duty ratio must be calculated each sample period, and by varying it between its extreme values it is possible to apply more voltage levels to the motor, according to the desired torque variation. In [2], an analytical online algorithm calculates the optimum duty ratio each sampling period, by using a torque ripple minimization condition which is based on ripple equations. However, this algorithm requires high computational effort and additional motor parameters to be known. In [3] the duty ratio value is provided by a fuzzy logic module, whose inputs are the stator flux position, the electromagnetic torque and an input defining the motor operating point, given by the speed and torque values. This algorithm involves expert knowledge and needs the rotor speed. To overcome this problem, the authors proposed two modified DTC schemes, which are based on the idea of
increasing the number of voltage vectors applied in a sample period. This makes it possible to obtain more voltage levels at the inverter output, in accordance to the desired torque and flux variations. The proposed schemes were experimentally tested for a set of motor operating points. Their effectiveness was evaluated by calculating the root mean square (RMS) deviation of the instant torque values with respect to the load torque, in steady state regime. The paper presents the DTC principle, the proposed schemes, graphical and numerical experimental results of the tests that were conducted to evaluate the effectiveness of the proposed control schemes, and the conclusions. 2. DIRECT TORQUE CONTROL PRINCIPLE
The basic model of the classical DTC induction motor scheme is shown in figure 1. It consists of torque and stator flux estimators, torque and flux hysteresis comparators, a switching table and a VSI. The basic idea of DTC is to choose the optimum inverter voltage vector in order to control both stator flux and electromagnetic torque of machine simultaneously [1].
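For illustration, the selection logic described here can be condensed into a few lines; the sector convention, the comparator outputs and the choice of the zero vector in this sketch follow the classic textbook switching table and are assumptions of the sketch, not details taken from the paper.

```python
import numpy as np

# Inverter voltage vectors u1..u6 as (SA, SB, SC); u0/u7 are the zero vectors.
VECTORS = {1: (1, 0, 0), 2: (1, 1, 0), 3: (0, 1, 0), 4: (0, 1, 1), 5: (0, 0, 1), 6: (1, 0, 1),
           0: (0, 0, 0), 7: (1, 1, 1)}

def flux_sector(psi_alpha, psi_beta):
    """Sector 1..6 of the stator flux vector, sector 1 centred on the alpha axis."""
    angle = np.degrees(np.arctan2(psi_beta, psi_alpha)) % 360.0
    return int(((angle + 30.0) % 360.0) // 60.0) + 1

def select_vector(sector, d_flux, d_torque):
    """d_flux in {+1,-1}: increase/decrease flux; d_torque in {+1, 0, -1}."""
    if d_torque == 0:
        return VECTORS[0]                       # zero vector: torque decays slowly
    step = {(+1, +1): +1, (+1, -1): -1, (-1, +1): +2, (-1, -1): -2}[(d_flux, d_torque)]
    return VECTORS[(sector - 1 + step) % 6 + 1]

# Example: flux in sector 1, both flux and torque must increase -> vector u2 = (1,1,0)
print(select_vector(flux_sector(1.0, 0.1), +1, +1))
```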
Figure 1 Block diagram of the conventional DTC

The stator flux space vector, ψs, is calculated in the stationary reference frame (α, β) using the stator voltage equations [5]. The circular trajectory of the stator flux is divided into six sectors, Sk.
$$\delta = \frac{\varepsilon_T}{Z_T} + \frac{\varepsilon_F}{Z_F}\left(1 - \frac{\varepsilon_T}{Z_T}\right) \qquad (1)$$

where ε_T and ε_F are the torque error and the flux error respectively, and Z_T and Z_F are the corresponding base values used for normalization. The second term in (1) was introduced in order to avoid the flux decrease at low frequencies, due to the stator ohmic drops, which can occur when the torque error becomes too small. This term takes into account the flux error, which is normalized in the same manner as the torque error. The duty ratio is limited to a minimum value, δ_min, in order to account for the maximum switching frequency of the inverter. Considering the voltage duty ratio, the line voltage at the inverter output is given by (2):

$$U_l = \delta \cdot U_{DC} \qquad (2)$$

where U_l is the line voltage at the inverter output and U_DC is the DC link voltage.
At the current sample time, the (α, β) components of the stator voltage are calculated using (3) and (4):

$$u_{s\alpha} = \frac{2}{3}\, U_{DC}\left(S_A - \frac{S_B + S_C}{2}\right) \qquad (3)$$

$$u_{s\beta} = \frac{\sqrt{3}}{3}\, U_{DC}\left(S_B - S_C\right) \qquad (4)$$
where δ is the duty ratio calculated at the previous sample time. Every sample time, one of the six nonzero voltage vectors is selected from the switching table and applied to the inverter using symmetric PWM switching.

Figure 2 Block diagram of the first proposed method
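A minimal sketch of this first method is given below, using the duty ratio of relation (1) and the voltage components of relations (3)-(4) as reconstructed above; the per-unit errors, base values and DC link voltage in the example are assumed figures.

```python
import numpy as np

def duty_ratio(err_T, err_F, Z_T, Z_F, delta_min=0.05):
    """Duty ratio of relation (1), limited to a minimum value as described in the text."""
    d = err_T / Z_T + (err_F / Z_F) * (1.0 - err_T / Z_T)
    return float(np.clip(d, delta_min, 1.0))

def alpha_beta_voltage(S_A, S_B, S_C, U_DC):
    """Stator voltage components of relations (3)-(4) for the applied switch states."""
    u_alpha = (2.0 / 3.0) * U_DC * (S_A - 0.5 * (S_B + S_C))
    u_beta = (np.sqrt(3.0) / 3.0) * U_DC * (S_B - S_C)
    return u_alpha, u_beta

# Example with assumed per-unit errors and base values:
d = duty_ratio(err_T=0.4, err_F=0.1, Z_T=1.0, Z_F=1.0)
print(f"duty ratio = {d:.2f}")                  # active vector applied for d*Ts, zero vector for the rest
print(alpha_beta_voltage(1, 1, 0, U_DC=540.0))  # active vector u2 = (110)
```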
2.1.2. DTC using Space Vector Modulation

In this second method, a reference stator voltage space vector is calculated, in terms of magnitude and phase, using the instant values of the torque and stator flux errors, and the flux position. The magnitude is expressed by the modulation index

$$m = \frac{\varepsilon_T}{Z_T} + \frac{\varepsilon_F}{Z_F}\left(1 - \frac{\varepsilon_T}{Z_T}\right) \qquad (5)$$
For torque and flux errors situated in these zones, m is calculated using expression (5); otherwise m equals unity [9]. As regards the voltage vector phase, its calculation is based on the conventional DTC principle. As presented in Section 2, the main idea of DTC is to choose the optimum voltage vector in order to achieve a simultaneous and decoupled control of the stator flux and electromagnetic torque. Every control sample time, one of the inverter voltage vectors is selected according to the torque and flux errors and the sector in which the actual flux vector is situated. The phase difference, Δθ, between the selected voltage vector and the middle of the flux sector is kπ/3, where k ∈ {-2, -1, 1, 2}. It can be proved that, for a given voltage vector and a flux sector, the torque slope depends on the flux vector phase angle. This determines an irregular torque ripple in steady state operation, especially when the sector changes. In order to obtain a uniform torque ripple, the above mentioned dependency can be eliminated by considering a voltage vector which is phase shifted by Δθ relative to the flux vector phase angle (θψ), instead of relative to the middle of the sector as in the conventional DTC. For simplicity, the phase difference quantities were preserved. The phase of the reference voltage vector is calculated by adding to the flux phase angle the phase difference Δθ, whose value is simply derived from the signs of the torque and flux errors, as in equation (6).
Figure 3 The voltage space phasor angle for two cases of torque and flux commands The principle of the presented method is illustrated in figure 3, for two cases of torque and flux commands. In figure 4 it is presented the block diagram of the proposed DTC-SVM scheme [5].
$$\Delta\theta = \mathrm{sign}(\varepsilon_T)\,\big(3 - \mathrm{sign}(\varepsilon_F)\big)\,\frac{\pi}{6} \qquad (6)$$

$$\alpha_{ref} = \theta_\psi + \Delta\theta \qquad (7)$$
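For clarity, evaluating (6) for the four sign combinations of the errors gives

$$\varepsilon_T>0,\ \varepsilon_F>0 \Rightarrow \Delta\theta = \tfrac{\pi}{3}; \quad \varepsilon_T>0,\ \varepsilon_F<0 \Rightarrow \Delta\theta = \tfrac{2\pi}{3}; \quad \varepsilon_T<0,\ \varepsilon_F>0 \Rightarrow \Delta\theta = -\tfrac{\pi}{3}; \quad \varepsilon_T<0,\ \varepsilon_F<0 \Rightarrow \Delta\theta = -\tfrac{2\pi}{3}$$

which recovers the phase differences kπ/3, k ∈ {-2, -1, 1, 2}, mentioned above.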
The reference voltage vector, with the calculated magnitude m and phase α_ref, is applied to the inverter using Space Vector Modulation. In the linear region, the voltage vector at the inverter output is

$$U_{ref} = m \cdot \frac{\sqrt{3}}{2}\, u_i\, e^{j\alpha_{ref}} \qquad (13)$$

i.e. its magnitude is a fraction m of the maximum voltage available in the linear region, m = U_ref / U_max.

4. EXPERIMENTAL RESULTS

The proposed methods, along with the conventional DTC, were implemented on a Digital Signal Controller and tested for a series of motor operating points, given by torque and speed. The results obtained for an induction motor drive by using the conventional DTC and the DTC with the proposed methods of torque ripple reduction are comparatively presented in this section. The experimental set-up is presented in figure 5.
Figure 6 The scheme of the experimental set-up

The components of the experimental set-up are:
- 3-phase IGBT power module, 750 W, Technosoft ACPM 750 v3.4;
- controller board MSK28335, with TMS320F28335 Digital Signal Controller, floating point, 150 MHz;
- induction motor Sieber, 370 W, with incremental encoder;
- electromagnetic brake, with magnetic powder, DeLorenzo DL1019P, capable of maintaining constant torque regardless of the operating speed;
- software development tool Technosoft DMC28x Developer Pro, used for application development, reference input and data logging;
- PC with RS232 communication.
The control sampling period was set to 0.1 ms for all three control schemes, and in both proposed control schemes the PWM frequency was set to 10 kHz. The tests were conducted so as to emphasize:
- the torque time response, as compared to the conventional DTC;
- the effectiveness of the proposed methods of torque ripple reduction, at different speeds and loads.
All the graphical results of the subsequent tests are expressed in relative units. In order to correctly evaluate the torque transient response provided by the proposed methods, a first test was performed in which the torque reference was directly imposed, with no outer speed loop. The reference was programmed in advance, using the reference generator built into the DMC tool, in the following sequence: at t = 0 s, Tref = 2Tn, followed by Tref = Tn at t = 0.2 s and Tref = 0 at t = 0.4 s. The results, comparatively presented in figure 7 for the conventional DTC and the two proposed DTC methods, show that the fast torque response of the conventional DTC was preserved, with an insignificant delay. Moreover, the steady state error of the torque, which is inherent in conventional DTC [9], is eliminated by the proposed schemes, due to the torque reference correction [8].
Figure 7 Comparative results from the first experimental test, emphasizing the torque response in the case of conventional DTC (a), and DTC with the first and second proposed methods (b) and (c) respectively. The quantities represented, from top to bottom, are: two stator phase currents, estimated stator flux module and the flux component on phase alpha, torque reference and estimated torque, measured speed.
Figure 8 Comparative results from the second experimental test, emphasizing the torque ripple reduction, as compared to conventional DTC (a), achieved by the first and second proposed methods (b) and (c) respectively. The quantities represented, from top to bottom, are: two stator phase currents, estimated stator flux module and the flux component on phase alpha, torque reference and estimated torque, measured speed.

The second test was conducted to evaluate the effectiveness of the proposed methods of torque ripple reduction at different values of speed and load torque. For this purpose, an external speed loop was added to the control scheme, so that the torque behavior can be clearly observed at constant speed. Note that the speed transient response is not relevant in this case, since the test was focused on the steady state behavior. The test consists of running the motor at low and high speeds, for different loads, in the cases of the conventional DTC and of the two proposed methods of torque ripple reduction respectively. The test revealed that the torque ripple is significantly reduced by using the proposed methods, for both low and rated speed operation. Sample results of the performed tests are comparatively presented in figure 8, for the case of a load equal to the rated torque and two speed levels of 25% and 100% of the rated speed. The quantities represented are: two stator phase currents, estimated stator flux module and the flux component on phase alpha, torque reference and estimated torque, measured speed. All quantities are expressed in relative units, with respect to the corresponding rated values. For a quantitative evaluation of the effectiveness of the proposed methods, table 1 presents the root mean square (RMS) deviation of the developed torque with respect to the load torque. The presented values are calculated for a set of operating points, defined by speed and load torque, in steady state regime. For better readability, the speed and torque are expressed in percent of the corresponding rated values, and the torque deviation is expressed in percent of the rated torque.

Table 1. Root mean square (RMS) deviation of the instant torque values from the load torque (in percent of TN)

Speed [%]   Torque [%]   Conventional DTC
25          25           34
25          100          23
100         25           33.5
100         100          17.3

5. CONCLUSIONS
The paper presented two methods of torque ripple reduction for DTC. In order to test the proposed methods and to compare the resulting motor behavior, all the methods were implemented on a Digital Signal Controller and tested on a 3-phase voltage inverter drive. The first proposed method has the advantage of preserving the structural simplicity of the conventional DTC, while adding very little computational effort. With the second method, the torque ripple is reduced by a significantly larger amount, but at the expense of an increased complexity, which is inherently added by the space vector modulation.
6. ACKNOWLEDGEMENTS

The work has been co-funded by the Sectoral Operational Programme Human Resources Development 2007-2013 of the Romanian Ministry of Labour, Family and Social Protection through the Financial Agreement POSDRU/89/1.5/S/62557.

7. REFERENCES
[1] I. TAKAHASHI, T. NOGUCHI, A new quick-response and high efficiency control strategy of an induction motor, IEEE Trans. Ind. Applicat., IA-22, 1986.
[2] J. KANG, S. SUL, New direct torque control of induction motor for minimum torque ripple and constant switching frequency, IEEE Trans. Ind. Applicat., vol. 35, 1999.
[3] L. ROMERAL, A. ARIAS, E. ALDABAS and M. G. JAYNE, Novel Direct Torque Control Scheme With Fuzzy Adaptive Torque-Ripple Reduction, IEEE Trans. Ind. Applicat., vol. 50, June 2003.
ABSTRACT In this paper we present the integration of Digital Video Broadcasting (DVB) applications and Cloud Content Delivery Networks (CDN). DVB works reasonably well in the sense that the system is very scalable and can sustain very high request rates. But it also has its limitations, and the lack of geographic replication is one of them in the context of CDN use. So offering a true solution for getting video and audio web content rapidly to browsers across the world is most welcome. With SlapOS, the proposed open source distributed cloud system, we implement a testbed for a content distribution service that caches content at different locations based on the access patterns of the individual users. We demonstrate that by using distributed cloud computing our platform is more scalable and resilient than many other DVB to IP gateway systems, and much easier to handle when it comes to offering access to different types of fixed and mobile IP terminals. Keywords: DVB, cloud computing, DTT, broadcasting, content delivery network.

1. INTRODUCTION

Internet users are increasingly adding video content to existing online services and applications, with the effect that the number of people viewing videos online has grown over the past year and the time spent per viewer has increased accordingly. Google sites, including YouTube, continue to be the most watched online video sites, with more than 35.4 million Google sites visitors watching YouTube [1]. We will focus in this paper on using existing DVB platforms and adding cloud technology for improving content distribution to IP interconnected devices. We will also introduce SlapOS [2], the first open source operating system for Distributed Cloud Computing. SlapOS is based on a grid computing daemon called slapgrid, which is capable of installing any software on a PC and instantiating any number of processes of potentially infinite duration of any installed software [3]. The slapgrid daemon receives requests from a central scheduler, the SlapOS Master, which collects back accounting information from each process. SlapOS Master follows an Enterprise Resource Planning (ERP) model to handle at the same time process allocation optimization and billing. SLAP stands for Simple Language for Accounting and Provisioning. This structure has been implemented for cloud-based automation of ERP and CRM software for small businesses, and aspects are under development under the framework of the European research project Cloud Consulting [4]. The goal of Cloud Consulting is to create new technologies which automate the configuration of ERP and Customer Relationship Management software for the benefit of SMBs. DVB consists of a transmitting system for compressed digital signals over the available channel frequencies that makes it possible to broadcast more channels, with better-quality pictures and sound, than traditional analogue television. In its recent research on television policy, the Romanian Government called for the development of digital terrestrial television (DTT) to optimize the shortage of the broadcast spectrum for regular television broadcasting [5]. The objectives of our work are to demonstrate that cloud computing is a well-developed and mature technology that can be used to improve the scalability of DVB content distribution applications over IP networks and to offer full access to video programs to all IP based terminals such as set-tops, TV, smart phones and multimedia PCs. The paper is structured as follows. Section II describes a distributed architecture for cloud platforms and identifies common functions. Section III presents an example of the design and implementation of an open source test platform for a cloud content delivery network. Section IV describes an approach to test different DVB to IP cloud solutions and an example of a testbed using SlapOS. The conclusion summarizes the contributions.

2. ARCHITECTURE DESCRIPTION
2.1 Cloud Architecture

SlapOS is an open source Cloud operating system which was inspired by recent research in Grid Computing, and in particular by BonjourGrid [6], a meta Desktop Grid middleware for the coordination of multiple instances of Desktop Grid middleware. It is based on the motto that everything is a process. SlapOS is based on a Master and Slave design. In this chapter we provide an overview of the SlapOS architecture and explain in particular the role of the Master node and of the Slave nodes, as well as the software components which they rely on to operate a distributed cloud for telemetry applications. Slave nodes request from Master nodes which software they should install and which software they should run, and report to the Master node how many resources each running software has been using for a certain period of time. Master nodes keep track of the available slave node capacity and of the available software. The Master node also acts as a Web portal and Web service so that end users and software bots can request software instances which are then allocated on Slave nodes.
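As an illustration of this request model only (the class and method names below are invented for the sketch and do not reproduce the actual SlapOS API), the marketplace-style matching of requests against registered capacity can be pictured as follows:

```python
# Hypothetical sketch of the request flow described above; names are illustrative only.
class MasterNode:
    def __init__(self):
        self.offers = []                      # (slave_id, software, conditions)

    def register_offer(self, slave_id, software, conditions):
        self.offers.append((slave_id, software, conditions))

    def request(self, software, conditions):
        """Match a request against registered capacity, like a marketplace back office."""
        for slave_id, sw, cond in self.offers:
            if sw == software and all(cond.get(k) == v for k, v in conditions.items()):
                return {"slave": slave_id, "software": software, "status": "allocated"}
        return {"status": "no matching partition"}

master = MasterNode()
master.register_offer("slave-42", "mariadb", {"region": "China", "cpu": "64bit"})
print(master.request("mariadb", {"region": "China"}))   # allocated on slave-42
print(master.request("kvm", {"region": "Europe"}))       # no matching partition
```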
Figure 1 Example of master-slave architecture [2]

Any user, software or slave node with an X509 certificate may request resources from the SlapOS Master node. The SlapOS Master node plays here the same role as the back office of a marketplace. Each allocation request is recorded in the SlapOS Master node as if it were a resource trading contract in which a resource consumer requests a given resource under certain conditions. The resource can be a NoSQL storage, a virtual machine, an ERP, etc. The conditions can include price, region (e.g. China) or specific hardware (e.g. 64 bit CPU). Conditions are somewhat like the Service Level Agreements (SLA) of other architectures, but they are considered here rather as trading specifications than guarantees. It is even possible to specify a given computer rather than relying on the automated marketplace logic of the SlapOS Master [7].

2.2 Cloud Kernel

SlapOS relies on mature software: buildout and supervisord. Both are controlled by SlapGrid, the only original software of SlapOS. SlapGrid acts as a glue between the SlapOS Master node (ERP5) and both buildout and supervisord, as shown in Figure 2. SlapGrid requests from the SlapOS Master node which software should be installed and executed. SlapGrid uses buildout to install software and supervisord to start and stop software processes. SlapGrid also collects the accounting data produced by each running software and sends it back to the SlapOS Master. SlapOS Master nodes keep track of the identity of all parties which are involved in the process of requesting, accounting and billing Cloud resources. This includes end users (Person) and their company (Organisation). It includes suppliers of cloud resources as well as consumers of cloud resources. It also includes so-called computer partitions which may run a software robot to request Cloud resources without human intervention. It also includes Slave nodes which need to request from the SlapOS Master which resources should be allocated.

Figure 2 SlapOS Kernel and User Software example [2]

2.3 State-of-the-art and comparison with other cloud approaches

Many real-world systems involve large numbers of highly interconnected heterogeneous components over the Internet. The Cloud is among the more promising systems that will be deployed at a large scale in the near future, because the field already counts many success stories: Amazon EC2, Windows Azure or Google App Engine [4]. The architecture used in our approach is a distributed cloud environment that is deployed on volunteer PCs at home, at the office or in small data centers and runs SlapOS [2], an open source cloud provisioning system, either standalone or in combination with existing virtualization technologies (OpenStack, OpenNebula, Eucalyptus, OCCI [8], VMWare, etc.). Recent research on Cloud Computing has focused on the implementation of Service Level Agreements (SLA) and the operation of large data centers. However, in case of force majeure such as natural disaster, strike, terrorism or unpreventable accident, SLAs no longer apply. Rather than centralizing Cloud Computing resources in large data centers, Distributed Cloud Computing resources are aggregated from a grid of standard PCs hosted in homes, offices and small data centers. Based on the implemented scenario, several questions arise regarding its performance and efficiency.
Figure 3 SlapOS Slave implementation [2]

The SlapOS approach of computer partitions was designed to reduce costs drastically compared to approaches based on disk images and virtualization. As presented in Figure 3, our current implementation does not prevent running virtualization software inside a computer partition, which makes SlapOS at the same time cost efficient and compatible with legacy software. The SlapOS Slave software consists of a POSIX operating system, SlapGrid, supervisord and buildout. SlapOS is designed to run on any operating system which supports GNU's glibc and supervisord. Such operating systems include for example GNU/Linux, FreeBSD, MacOS/X, Solaris, AIX, etc.

4. CLOUD DVB TEST PLATFORM
We will use the SlapOS platform implemented during the Cloud Consulting project [4], hosted on several servers running an Ubuntu Linux Apache MySQL template with the current software release. SlapOS Master runs ERP5 Cloud Engine, a version of the ERP5 open source ERP capable of allocating processes in relation with accounting and billing rules. Initial versions of SlapOS Master were installed and configured by humans. Newer versions of SlapOS Master are implemented themselves as SlapOS Nodes, in a completely reflexive way. A SlapOS Master can thus allocate a SlapOS Master, which in turn can allocate another SlapOS Master, and so on. By default, SlapOS Master acts as an automatic marketplace. Requests are processed by trying to find a Slave node which meets all the conditions which were specified. SlapOS thus needs to know which resources are available at a given time, at which price and with which characteristics. Last, SlapOS Master also needs to know which software can be installed on which Slave node and under which conditions. SlapOS Slave nodes are relatively simple compared to the Master node. Every slave node needs to run the software requested by the Master node. It is thus on the Slave nodes that software is installed. To save disk space, Slave nodes only install the software which they really need. Each slave node is divided into a certain number of so-called computer partitions. One may view a computer partition as a lightweight secure container, based on UNIX users and directories rather than on virtualization. A typical barebone PC can easily provide 100 computer partitions and can thus run 100 wordpress blogs or 100 e-commerce sites, each of them with its own independent database. A larger server can contain 200 to 500 computer partitions.
By using virtual servers running on a cloud-based platform it is simple to deploy new servers with specific roles configured in a single file and boot them on demand. The applications running on the cloud service provider test platform are presented in Figure 4.
Figure 4 Cloud service provider platform for DVB applications

Examples of cloud-based apps tested are cloud-based program recommendation systems for digital TV that enhance the traditional electronic program guides (EPG) by offering suggestions based on statistics obtained from data mining on large data sets, and the possibility of searching for TV shows based on popular keywords. Our test platform is designed for a DVB to IP cloud content distribution network using current DVB-T software releases, and we intend to develop configuration templates also for DVB-T2 and future versions. We implemented an automation mechanism which allows automatic movement of running virtual servers
5. CONCLUSIONS

SlapOS is capable of allocating resources for content distribution networks beyond the borders of traditional Cloud Computing by providing an ecosystem of virtual machines, application servers and databases for the delivery of next generation IP services. That said, SlapOS can be used to operate Smart TV, Video on Demand and IPTV applications more efficiently. The Cloud CDN has the potential to enhance the broadcast technology by enabling the provision of additional services quicker and at a lower cost than at any time in the past. To take advantage of these technologies, however, a next generation of systems integration is required, both at the network and at the software interface level.

6. ACKNOWLEDGEMENTS

This research activity was supported by the Ministry of Communications and Information Society of Romania under grant no. 106/2011 "Evolution, implementation and transition methods of DVB radiobroadcasting using efficiently the radio frequencies spectrum" and by the project Valorificarea capitalului uman din cercetare prin burse doctorale (ValueDoc), co-financed from the European Social Fund through POSDRU, financing contract POSDRU/107/1.5/S/76909.

7. REFERENCES

[1] A. Sutherland, The Story of Google, The Rosen Publishing Group, 2012
[2] http://www.slapos.org (October 2012)
[3] The Free Cloud Alliance (FCA) Blueprint, http://www.freecloudalliance.org (October 2012)
[4] G. Suciu, O. Fratu, S. Halunga, C. G. Cernat, V. A. Poenaru, and V. Suciu, Cloud Consulting: ERP and Communication Application Integration in Open Source Cloud Systems, 19th Telecommunications Forum TELFOR 2011, IEEE Communications Society, 2011
[5] Politics and strategy for the transition from analog terrestrial television to digital terrestrial broadcasting and implementation of digital multimedia services at national level, http://ec.europa.eu/information_society/policy/ecomm/doc/current/broadcasting/switchover/ro_2009_switchover.pdf (October 2012)
[6] H. Abbes, C. Cerin, and M. Jemni, A decentralized and fault-tolerant Desktop Grid system for distributed applications, Concurrency and Computation: Practice and Experience, vol. 22, no. 03, 2010
[7] http://www.osoe-project.org (October 2012)
[8] http://occi-wg.org/ (October 2012)
[9] D. Strohmeier, S. Jumisko-Pyykko, K. Kunze, and M. O. Bici, The Extended-OPQ Method for User-Centered Quality of Experience Evaluation: A Study for Mobile 3D Video Broadcasting over DVB-H, EURASIP Journal on Image and Video Processing, Hindawi Publishing Corporation, 2011
[10] T. Ng, G. Wang, The impact of virtualization on network performance of Amazon EC2 data center, IEEE INFOCOM 2010 - 29th IEEE International Conference on Computer Communications, vol. 29, no. 01, 2010
[11] A. Davies, Using the cloud, Broadcast Engineering, vol. 54, 2012
FUZZY CONTROL OF A NONLINEAR PROCESS BELONGING TO THE NUCLEAR POWER PLANT WITH A CANDU 600 REACTOR
ABSTRACT The present paper presents a highly intelligent configuration, capable of controlling, without the need of the human factor, a complete nuclear power plant type of system, giving it the status of an autonomous system. The need for such a controlling system is justified by the number of drawbacks that appear in real life as disadvantages, losses and sometimes even inefficiency in the current control and command systems of nuclear reactors. The application consists in the command sent from the auxiliary feedwater flow control valves to the steam generators. As an environment fit for development I chose Matlab Simulink to simulate the behaviour of the process and the adjusted system. Comparing the results obtained after the fuzzy regulation with those obtained after the classical regulation, we can demonstrate the necessity of implementing artificial intelligence techniques in nuclear power plants and we can agree on the advantages of being able to control everything automatically. Keywords: Distributed Control System (DCS), smart control, simulation, artificial intelligence, fuzzy controller
1. INTRODUCTION
The multilateral evolution of human society, determined by the expansion of technical progress, is required to meet growing needs, both in terms of quality and quantity. In order to manufacture certain products or to conduct certain processes, experts had to build and put into operation large industrial plants and complexes, which they considered to be complex systems [1]. It can be said that a nuclear unit (NU), regardless of the chain reactor embedded, is indisputably a complex system. The unit consists of several main systems: the Primary Heat Transport System (primary), the Turbine-Generator System (secondary) and the Steam Boiler System, considered to be the interface between the first two systems. All three systems contain more than 100 technological subsystems placed in different hierarchical structures. For instance, the water supply system (second tier) of the steam generator (first tier) brings together many subordinate systems (third tier): the adjustment of feedwater flow, the boiler feed pumps, the auxiliary pump subsystem and the condenser system. The structure of these systems/subsystems is made up of thousands of components of different sizes and with different destinations. Due to the uniqueness of the technological process (nuclear processes), this domain is highly prone to hazardous events. Specific data have shown that the complex system called nuclear power plant must reach a global optimum in relation to three criteria: system efficiency, system cost and operational security. Besides the specific attributes of any complex system, the NPP domain highlights a number of technological features that are worth mentioning:
the increased complexity which, among other things, implies the following demands: to organize and execute all maintenance activities at a high professional level; to ensure a far greater use of the appliances; to ensure efficient management. The high degree of risk associated with some activities is the third criterion of performance (operational security). Among these activities, the following stand out: the development of safety standards, included in policies and laws, that form the target of nuclear safety; the implementation of security systems meant to ensure compliance with these norms. As is known, security systems are waiting systems: they need to keep their intervention capacity unaltered and to preserve it, although the need for such an intervention is not at all desired. Security analysis is a means of confirming (or denying) this preservation, and the efficiency of these systems comes out of the risk analysis. 2. SMART CONTROL WITH A HIGH DEGREE OF AUTONOMY FOR THE NPP THAT CONTAINS A CANDU 600 TYPE REACTOR The smart autonomous command and control system (SIACC) for the NPP with a CANDU 600 type of reactor that is described in this paper can be used for any sort of nuclear reactor, due to a property through which the system is capable of auto-renewal. Automatic renewal is necessary in order for the system to make the proper decision when dealing with a multitude of dangerous situations in an NPP. SIACC will have a hierarchical architecture, qualifying itself for the standards of autonomous smart systems, as shown in [2].
Figure 1 The architecture of the smart control solution with a high degree of autonomy for NPP
On the first level of the hierarchy we can find the micro-agents, coordinated by the macro-agents. On the second level we have the coordination agent and on the last level the supervisor. The supervisor has the role of maintaining the smooth functioning of the entire NPP system. It is the only one that can give the order for it to shut down. The coordination agent manages communication between macro-agents, giving them priority in communication and negotiating communication channels between them. It also interprets the commands given by the supervisor and sends those commands to the macro-agents. The macro-agents control the component systems of the NPP, while the micro-agents deal with the functional units of the system. Micro-agents are responsible for the subsystems within the major systems of the reactor. They are designed to retrieve and monitor field parameters. Monitoring also includes framing the values in the allowable variation domains that are characteristic to each of them, and setting off the proper alarms when thresholds are exceeded. In the system described here, there is no need for old-fashioned tuning; we can simply use smart techniques, namely those based on fuzzy logic. The degree of intelligence that the macro-agents possess is superior to that of the micro-agents. Micro-agents are implemented using small-dimensional knowledge-based systems capable of understanding and interpreting information received from a higher level as well as from a lower level (two different languages). An important factor in demonstrating the high degree of intelligence is, as mentioned before, the capacity of the macro-agents to auto-renew their systemic parameters so as to allow optimal system operation. This demonstrates their usefulness and efficiency in commanding systems other than those destined for NPPs with reactors of the CANDU type. On the next hierarchical level there is the coordination agent, which has the role of ensuring communication between the supervisor and the macro-agents. The coordinator takes some crisp values of the parameters that have an essential role in the dynamics of the plant and processes them, turning them into linguistic indicators of the system's well-functioning. These indicators are transmitted to the supervisor in order to be used as parameters of the functions performed by it.
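The hierarchical monitoring idea just described (micro-agents framing field values in their allowable domains, macro-agents aggregating them per system, the coordination agent turning crisp results into linguistic indicators for the supervisor) can be sketched as follows. The class names, parameter names and thresholds are illustrative assumptions only; they are not part of SIACC.

```python
# Illustrative sketch of the hierarchical agent structure described above.
class MicroAgent:
    def __init__(self, name, low, high):
        self.name, self.low, self.high = name, low, high

    def check(self, value):
        # Frame the measured value in its allowable variation domain.
        if value < self.low or value > self.high:
            return f"ALARM: {self.name}={value} outside [{self.low}, {self.high}]"
        return None


class MacroAgent:
    def __init__(self, system, micro_agents):
        self.system, self.micro_agents = system, micro_agents

    def monitor(self, measurements):
        alarms = [a.check(measurements[a.name]) for a in self.micro_agents]
        return [msg for msg in alarms if msg]


class Coordinator:
    def report(self, macro_agent, measurements):
        alarms = macro_agent.monitor(measurements)
        # Turn crisp results into a linguistic indicator for the supervisor.
        return ("degraded" if alarms else "normal"), alarms


if __name__ == "__main__":
    feedwater = MacroAgent("steam generator feedwater",
                           [MicroAgent("flow_kg_s", 50.0, 250.0),
                            MicroAgent("level_m", 1.0, 3.5)])
    print(Coordinator().report(feedwater, {"flow_kg_s": 20.0, "level_m": 2.1}))
```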
Figure 2 Block diagram of FCV 119 system
Given these conditions, the mathematical model of the valve actuator piston movement is defined by the following system of equations:
$$\begin{cases} m\ddot{x} + f\dot{x} = S\,(p_1 - p_2) \\ \dfrac{V_1}{B}\,\dot{p}_1 + S\dot{x} = k'_{Qp}\left[k_{pu}(u - k_x x) - p_1\right] \\ \dfrac{V_2}{B}\,\dot{p}_2 - S\dot{x} = k''_{Qp}\left[k_{pu}(u - k_x x) - p_2\right] \end{cases} \qquad (1)$$
The time of closing/opening for the flow control valves, as well as for the motorized valves for flow isolation, is 20 s. The adjustment of a larger gain can sometimes cause the system to remain stuck in the closed position, while the adjustment of a smaller gain causes failure in the open position. The thermal-hydraulic parameters of the water supply for the steam generators at different power levels of the reactor have their values given in Table 2. 3.2 The mathematical model The constitutive assumptions of the simplified linear mathematical model of the motion of subsystem FCV 119 are the following: a) the actuator has equal active surfaces of the piston; b) the dead zone thresholds of the controller are neglected
where:
x - the output variable of the valve actuator (piston displacement) [cm]
p1 - pressure variation in the upper chamber of the actuator (held upright) [daN/cm2]
p2 - pressure variation in the lower chamber of the actuator [daN/cm2]
u - the control signal [mA]
m - mass of the moving part of the valve (including the piston actuator) [daN/cm2]
f - coefficient of viscous friction piston-cylinder actuator [daN s/cm]
S - active surface of the piston actuator [cm2]
V1, V2 - volumes of the actuator cylinder chambers for the initial position of the piston, when the valve is fully open (piston in its upper position) [cm3/s]
B - compressibility modulus of the air [daN/cm2]
k'Qp, k''Qp - flow-pressure coefficients of the output of the control amplifier; they are considered distinct on supply and on exhaust [cm5/(daN x s)]
kpu - pressure-current coefficient of the positioner [daN/(cm2 x mA)]
$$H(s) = \frac{\mathcal{L}\{x(t)\}}{\mathcal{L}\{u(t)\}} = \frac{X(s)}{U(s)}$$
This corresponds to a system of relative degree three (given the difference in degree between the denominator and the numerator), without special problems of stability, pole-zero cancellations, etc. Numerical simulations will confirm this assumption. After laborious mathematical calculations, the transfer function of the valve system is obtained:
$$H(s) = \frac{28.3778\,s + 21.2444}{0.00508\,s^4 + 0.08401\,s^3 + 17.759\,s^2 + 36.3864\,s + 16.7194}$$
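The paper simulates this transfer function in Matlab Simulink. As a rough cross-check, the same open-loop behaviour can be reproduced with SciPy; the sketch below applies a unit step (the paper uses a reference of level 20.32, which only scales the linear response) and estimates a 2% settling time as a stand-in for the transient time read from Figure 5. It is a sketch, not the authors' simulation model.

```python
# Minimal sketch: open-loop step response of the valve transfer function above.
import numpy as np
from scipy import signal

num = [28.3778, 21.2444]
den = [0.00508, 0.08401, 17.759, 36.3864, 16.7194]
valve = signal.TransferFunction(num, den)

t = np.linspace(0, 100, 5000)            # 100 s horizon, as in the simulation
t, y = signal.step(valve, T=t)

y_final = y[-1]
# 2% settling time as a rough measure of the transient time discussed below.
outside = np.where(np.abs(y - y_final) > 0.02 * abs(y_final))[0]
t_settle = t[outside[-1] + 1] if len(outside) else t[0]
print(f"final value ~ {y_final:.3f}, 2% settling time ~ {t_settle:.1f} s")
```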
Figure 5 Open loop system response highlighting transitional time
In Figure 5 (in which the transient time is highlighted) we can see that the transient time of the system is approximately 10 s. This shows that in an emergency situation the valve will not shut off the water supply immediately, and incidents might happen. Notice that the system stabilizes at the value xst = 20.32 cm, which means that the valve will close completely (x = 0 - valve fully open, x = 20.32 - valve fully closed). The fuzzy controller will now have the duty of cutting down the transient time, thus obtaining a response appropriate to the requirements of the system. 3.4 Designing the fuzzy controller The fuzzy controller must ensure the minimization of the transient regime of the flow control valve of the water supply system. The water supply must circulate towards the steam generators at a certain reference level without affecting the output in steady state. A control system with a fuzzy controller on the direct path has the following structure:
A first simulation of this system gives the open loop system response (without fuzzy adjustment). The reference signal will be a step of level 20.32. This signal is the value of the output variable x for which the valve is fully closed. As we mentioned earlier, the situation that we wish to create for this regulation is a failure: the FCV119 valve should close, thus obtaining zero flow. In the present application we will not plot the output flow characteristic because it is a function of x. The representation of the system response (time variation of the valve piston position) is sufficient to establish the correctness of the fuzzy control law and of the control solution presented in the paper. To achieve the simulation we used the Matlab Simulink simulation environment. Thus, we built the open loop diagram of the system (Figure 3).
Figure 3 Open circuit diagram of the valve system
The open loop system response was obtained by simulating the above scheme for a period of 100 seconds (Figure 4).
Figure 6 Control scheme with fuzzy controller
The fuzzy controller receives at its input an error signal, a crisp value (e), and provides at its output a control value (u) that is also crisp. In this case, the fixed part (PF in Figure 6) is the nonlinear process - the flow control valve of the water supply that runs through the circuit up to the steam generators.
Figure 7 Basic structure of a fuzzy controller
The derivative and/or integral computation module (MCDI) determines some of the controller input values by numerical differentiation and/or numerical integration. The numerical differentiation of the crisp input value e(k) is calculated in the following manner:

De(k) = e(k) - e(k-1)
where e(k) is the current crisp input value (the error) and e(k-1) is the value from the previous step. The numerical integration can be determined by summing up the crisp values of the error with the formula:
eI(k) = e(1) + e(2) + ... + e(k) (4)
Note: from all the above, we can see that a fuzzy controller has, unlike a conventional controller, multiple input values. The fuzzy controller inputs are the linguistic variables that appear in the premises of the rules. In this case the fuzzy controller will have two inputs: the error, calculated with the formula
e(k) = r(k) - y(k) (5)
and the error variation De(k) defined above (6)
The fuzzification module (MF) performs the following functions: a) it turns (scales) the crisp input value into a normalized value; this optional computation is required here mainly because of the numerical processing; b) it converts the crisp input values into fuzzy sets. 3.5 Building the controller in Matlab After the controller has been designed (after the linguistic variables and membership functions were set, the inputs and outputs of the system were chosen and the inference table was built), we must then build it in Matlab and use it in the FCV119 valve adjustment scheme.
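A minimal sketch of a Mamdani-type fuzzy controller with the two inputs of equations (5) and (6) is given below. The normalized universes, triangular membership functions and the 3x3 rule table are illustrative assumptions; the actual linguistic variables, membership functions and inference table of the paper's controller are those built in Matlab (Figure 10).

```python
# Minimal Mamdani-type fuzzy controller sketch (illustrative, not the paper's design).
import numpy as np


def tri(x, a, b, c):
    """Triangular membership function with corners a < b < c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)


U = np.linspace(-1.0, 1.0, 201)                     # normalized output universe
SETS = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}
# Rule table: RULES[e_label][de_label] -> output label
RULES = {"N": {"N": "N", "Z": "N", "P": "Z"},
         "Z": {"N": "N", "Z": "Z", "P": "P"},
         "P": {"N": "Z", "Z": "P", "P": "P"}}


def fuzzy_control(e, de):
    agg = np.zeros_like(U)
    for e_lab, s_e in SETS.items():
        for de_lab, s_de in SETS.items():
            w = min(tri(e, *s_e), tri(de, *s_de))          # rule firing strength
            out = SETS[RULES[e_lab][de_lab]]
            agg = np.maximum(agg, np.minimum(w, tri(U, *out)))   # max-min inference
    return np.sum(U * agg) / (np.sum(agg) + 1e-12)         # centroid defuzzification


e_prev = 0.0
r, y = 1.0, 0.0                       # normalized reference and measurement
e = r - y                             # e(k) = r(k) - y(k), eq. (5)
de = e - e_prev                       # De(k) = e(k) - e(k-1), eq. (6)
print(fuzzy_control(e, de))
```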
Figure 10. Construction of the fuzzy controller rule base The Matlab Simulink diagram on how to simulate the system is shown in Figure 11:
Figure 11 Diagram that explains how to adjust using a fuzzy controller Controlled system response is shown in Figures 12 and 13:
Figure 13 Response of the controlled system emphasizing the transitional time
We can see in Figure 13 that the fuzzy controller genuinely decreased the transient time, bringing it to a value of 3.5 s. It decreased by 65%, demonstrating how efficient smart control is in industrial processes. 4. CONCLUSIONS
In this paper we proposed a smart control solution at a reduced scale; more specifically, we proposed a fuzzy controller to control a flow control valve in a system at the Cernavoda NPP. With this purpose in mind, we started from the mathematical model of the valve system and calculated the transfer function, based on which we simulated the valve behaviour in the absence of smart control. Using the open loop system response, we assessed the performance and set new performance requirements, so that we could build the smart controller. Thus, we witnessed the superiority of the adjustment solutions based on artificial intelligence techniques over the classical ones. The latter also lose ground in terms of the simplicity of determining the control law. In nonlinear processes the control law can be difficult to derive and involves very complicated mathematical calculations. In contrast to the facts presented above, the adjustment techniques using smart controllers are
[1] Controlul si comanda centralelor nuclearoelectrice, Stefanescu P.
[2] Ingineria reglarii automate, Dumitrache I.
ABSTRACT This paper proposes to analyze the opportunity of introducing Digital Terrestrial Television (DTT) in Romania by using the DVB-T2 standard directly, instead of DVB-T. To this end we propose a testbed for performing measurements related to the functioning of DVB-T, DVB-T2 and DVB-H networks. We also search for an appropriate configuration for sending video or data streams from a DVB-T2 transmitter to a receiver. For this, the transmission parameters will be modified to simulate in-field requirements. Results show that no configuration is better than another, because in different environments we may have different error sources or different conditions that may need special tuning of each and every parameter of the transmitter. Keywords: Digital TV, Digital video broadcasting, Test equipment, Modulation, Code Rate.
1. INTRODUCTION
In Romania, as in other countries, the demand for the release of broadcast spectrum for non-broadcast applications (e.g. mobile cellular communication) has increased, as has the demand for broadcasting frequency spectrum. This makes the maximization of spectrum efficiency a necessity. Romania does not yet have an approved plan for the transition to digital television, but initially the country had committed itself to the European Commission to complete the transition to digital and stop analog transmission by the end of 2012. The National Authority for Management and Regulation in Communications in Romania (ANCOM), in collaboration with the Romanian Ministry of Communications and Information Society (MCSI), had produced a draft strategy in early 2010, but the plans were abandoned and, at the moment, an implementation strategy is not yet completed. The Digital Video Broadcasting (DVB) Project is a worldwide alliance of companies in the video broadcasting, equipment manufacturing and broadcasting network operation domains. For wireless terrestrial video broadcasting, there are two main standards developed under the DVB Project: DVB-T (finalized in 1997) and DVB-T2 (2009). Both of them also offer large versatility in implementation, by choosing proper parameters of the transmitted signal in order to optimize the overall performance according to the desired requirements. The DVB-T2 standard is based on the same principles as DVB-T, offering high flexibility for the transmitting modes. DVB-T2 offers a higher transfer rate for the information, or a signal better shielded from errors. The high transfer rate, along with MPEG-4 encoding, means that more than 2 HD channels can be sent on the same multiplex [1]. The rest of the paper is organized as follows. Section 2 gives some insight into the current state of digital terrestrial television implementation in Romania; Section 3 gives some scenarios and issues regarding how
the implementation of DTT in Romania should be handled, while Section 4 describes a testbed on UPB premises for carrying out different experiments for analyzing different aspects of DVB-T, DVB-T2 and DVB-H networks. Section 5 highlights some basic measurements available on the UPB DVB-T2 testbed, while Sections 6-10 present some measurements carried out on the same testbed to evaluate the performance of DVB-T2 systems for various parameter settings. Finally, Section 11 draws the conclusions. 2. STATE OF DIGITAL TERRESTRIAL TELEVISION TRANSITION IN ROMANIA The transition from analog to digital television is in progress around the world, but Europe is the most advanced continent in this regard. The European Commission has launched a series of actions for spectrum release and for establishing a plan for its future use, so that EU citizens enjoy all the benefits of digitization. It is expected that all EU countries migrate to digital transmission (the so-called digital switchover) by 2015, if not earlier. The European Commission had initially recommended to member states that this process be completed by the end of 2012, but some countries have delayed the analog switch-off. Terrestrial digital broadcasting has already been introduced in 21 EU Member States and covers different geographical areas. Analog transmissions have already been completely stopped in 17 European countries [1]. At the moment, in Romania digital terrestrial television (DTT) is only broadcast in the cities of Bucharest (2 locations) and Sibiu (2 locations) using the DVB-T standard. The signal, however, can be received over a fairly large distance around the cities, reaching up to about 60 km [2]. These are part of the pilot DTT program that was implemented by the Romanian Society of Radio-communications (RADIOCOM), which deployed the first digital television transmitters, the one in Bucharest in March 2006 and the one in Sibiu in November 2006. The multiplexes are broadcast on channels 54 (738 MHz 3 SD programs and 1 HD
Figure 1 Romanian allotments in the UHF band and number of channels planned per allotment before (normal) and after (italic) DD2
of 2/3 and 3/4. There is also a Single-Frequency Network made up of transmitters in 3 locations in Bucharest, operated by a private broadcaster, which broadcasts on channel 30 (546 MHz 2 HD programs) with the same modulation parameters as the ones operated by RADIOCOM. At the Geneva Regional Radiocommunication Conference 2006 (RRC-06), a plan for digital television was adopted, and at the World Radiocommunication Conference 2007 (WRC-07) the decision was made to allocate the 790-862 MHz band (channels 61-69) for mobile services. According to the final acts [3] of the RRC-06, Romania has a total of 36 allotments in the UHF band, with each allotment having a number of channels that can be used in the future for the DTT network. The Romanian authorities had produced a draft strategy [4] for implementing DTT in Romania that was later abandoned. It stated that there were two free-to-air multiplexes (1 and 2) which would carry national television and some other programs of public interest, and another 4 multiplexes (3 to 6) that would carry commercial TV programs. Under this strategy, each allotment contains mainly 5, 6 or 7 channels in the UHF band, with the exception being the zones in the middle of the country with 9 channels (Fig. 1). The recent decision of the World Radiocommunication Conference (WRC-12) in Geneva to allocate the 694-790 MHz band for mobile services [5, 6] (the so-called second digital dividend or DD2) puts more pressure on the broadcasting spectrum allocation in Romania, and it is highly unlikely that there will be more than 4 national multiplexes in the UHF band (channels 21-48). So, there is a need for rethinking the frequency assignments for these national multiplexes. The new number of channels per allotment is given in Fig. 1. Regarding the total closure of analog services, the Romanian Government has changed the initial date of 01.01.2012, moving the final date for ending analog broadcasting to 17.06.2015, in line with the ITU Geneva 2006 (GE06) agreement on the analog switch-off date.
Some of the issues that have to be taken into account when moving from analog to digital TV broadcasts are [6]: The transition period is lengthier and more difficult the higher the percentage of viewers that depend on the terrestrial platform; There will be the need for simulcasting for a certain period; There may be a need for additional spectrum to accomplish the transition; There will be a need for incentives for viewers in order to accept the transition to newer techniques, because this implies an upgrade of equipment, which will have to be paid for. Since Romania has not yet started the transition to operational DTT, having only a limited number of pilot / experimental DVB-T transmitters, it may be suitable to introduce DVB-T2 as the DTT standard. This is driven by several factors: The advantages of DVB-T2 over DVB-T (outlined in Section III); The recent decision of a mobile allocation in the 694-790 MHz band (the so-called second digital dividend); A national DVB-T network has yet to be implemented, which means that a strong argument against implementing DVB-T2 (the fact that DVB-T2 is not backward compatible with DVB-T) is not an issue. Some of the existing infrastructure from analog TV can be re-used (antennas, amplifiers, repeaters etc.) when implementing a DVB-T2 network. Apart from that, there will be a need for modulators, gateways, MIP inserters for SFNs, monitoring equipment, and possibly filters. Also, on the receiving side, there will be a need for new DVB-T2 capable TV sets or DVB-T2 set-top boxes. There will, of course, be a simulcast period required, and its length will largely depend on the penetration rate of the existing analog television services. It is estimated that around 20% (about 1.4 million) of households in Romania receive terrestrial analog television [7]. With such a relatively small percentage of the population, there is only a need for a short simulcast period. It is also possible to take the approach that Germany has taken, where a region-wise transition was preferred, in order to avoid abrupt changes across the country. Also, the simulcast situation is virtually straightforward, since DVB-T2 can carry more programs in a multiplex than analog TV and there is only need for a small number of multiplexes, which is in line with the second digital dividend. To this end, some administrative measures should be taken, especially by the authorities: Before the start of the transition, and also during the simulcast period, there has to be an information campaign so that the users understand the differences between analog and digital TV and also the advantages of the latter; Since there is a need for new TV sets or set-top boxes, there will be a need for investment from the consumer in this new receiving equipment. The authorities should take into consideration subsidizing this equipment, especially for low-income families
Figure 2 shows a proposed testbed for analyzing different aspects of DVB-T, DVB-T2 and DVB-H networks. It comprises two main parts: Transmission and Reception for each of the three technologies. Each of them is constructed of equipment that may or may not be common to one or more of the three chains. Details of the equipment are given below. Transmission. Common to all three chains is a Windows 7-based laptop running different streaming server software that comes bundled with the corresponding equipment it is connected to. Details are given next for every technology:
For the experimental measurements, the equipment is composed of the LabMod DVB-T2 Modulator and the
ReFeree T2, and two computers, one for sending the data stream and one for processing the received information. This corresponds to the center section of the transmitting chain depicted in Figure 2. The software used for the measurements was the one offered by the equipment providers. For the transmitter, the console was accessible using a web browser, and from there we adjusted all the parameters of the transmission (constellation, code rate, bandwidth). In order to transmit the data stream, the DiviSuite 1.0 software was used. At the receiver, the signal was analyzed using the ReFeree v1.0 software and, after each transmission ended, it delivered reports so the data could be properly analyzed. The initial settings used for the transmitter were: bandwidth of 8 MHz; FFT mode of 32k normal; guard interval of 1/128; 64-QAM constellation with a code rate of 2/3.
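To relate the chosen FFT mode and guard intervals to actual symbol timing, the short sketch below assumes the standard DVB-T/T2 elementary period of 7/64 microseconds for 8 MHz channels and compares the initial 1/128 guard interval with the 19/256 value used later in Section 6. The numbers are derived from the standard's timing rules, not measured on the testbed.

```python
# Sketch: OFDM symbol timing for the 8 MHz, 32k configuration used above.
# Assumes the standard elementary period T = 7/64 us for 8 MHz channels.
FFT_SIZE = 32768
T_US = 7.0 / 64.0                      # elementary period [us]
TU_US = FFT_SIZE * T_US                # useful symbol duration [us]

for gi_num, gi_den in [(1, 128), (19, 256)]:
    gi = gi_num / gi_den
    guard_us = gi * TU_US
    overhead = gi / (1.0 + gi)         # fraction of air time spent on the guard
    print(f"GI {gi_num}/{gi_den}: Tu = {TU_US:.0f} us, "
          f"guard = {guard_us:.1f} us, overhead = {overhead:.1%}")
```

The longer guard interval buys more protection against echoes at the cost of a larger timing overhead, which is one reason why no single configuration is best in every environment.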
Figure 3 illustrates the SNR and BER variations when trying to properly calibrate the equipment (i.e. place it where the received DVB-T2 signal could be decoded and the stream could be played). The fluctuations of the signal are caused by the different distances between the receiver's antenna and the transmitter. The final distance at which we placed the antenna is 1 m, and all the other tests were done with the system in this position. 6. CHANGING THE OUTPUT PARAMETERS
After positioning the antenna in such a way that the signal was properly received, the next step was to tune the parameters and observe the results at the receiver. In order to see those changes, modifications were done to the following parameters: guard interval, modulation, code rate, constellation rotation.
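Before looking at the individual parameter changes, it helps to reduce each receiver report to a few summary numbers. The sketch below does this for the samples listed in Table 1 further down (guard interval 19/256); the values are transcribed from that table and only the simple statistics are added here, with units as reported by the receiver.

```python
# Sketch: summarizing a receiver log such as the one in Table 1 (GI = 19/256).
import statistics

# (time_s, signal_level, snr_dB, ber) -- values transcribed from Table 1
samples = [(0, -69.7, 18.6, 8.4e-3), (2, -69.9, 21.5, 8.4e-3), (3, -69.9, 21.5, 8.4e-3),
           (5, -69.7, 21.5, 8.1e-3), (6, -69.7, 21.5, 8.1e-3), (8, -69.7, 21.6, 8.6e-3),
           (9, -69.7, 21.6, 8.6e-3), (11, -69.8, 21.8, 8.9e-3), (12, -69.8, 21.8, 8.9e-3),
           (14, -69.9, 21.8, 9.0e-3), (17, -69.9, 21.6, 8.6e-3), (20, -69.7, 21.6, 7.2e-3),
           (23, -69.7, 21.5, 7.2e-3), (26, -69.6, 22.2, 6.7e-3), (29, -69.7, 22.7, 3.4e-3),
           (32, -70.0, 23.6, 3.8e-3), (35, -70.0, 23.6, 3.8e-3)]

snr = [s[2] for s in samples]
ber = [s[3] for s in samples]
print(f"SNR: mean {statistics.mean(snr):.1f} dB, range {min(snr)}-{max(snr)} dB")
print(f"BER: mean {statistics.mean(ber):.2e}, final {ber[-1]:.1e}")
```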
Figure 7 SNR variation when changing modulation scheme Figure 8 BER variation when changing modulation scheme
Table 1 Measurements from the receiver for GI of 19/256 (quality measurements)

Time (sec)   Signal Level   SNR    BER
0            -69.7          18.6   8.40E-03
2            -69.9          21.5   8.40E-03
3            -69.9          21.5   8.40E-03
5            -69.7          21.5   8.10E-03
6            -69.7          21.5   8.10E-03
8            -69.7          21.6   8.60E-03
9            -69.7          21.6   8.60E-03
11           -69.8          21.8   8.90E-03
12           -69.8          21.8   8.90E-03
14           -69.9          21.8   9.00E-03
17           -69.9          21.6   8.60E-03
20           -69.7          21.6   7.20E-03
23           -69.7          21.5   7.20E-03
26           -69.6          22.2   6.70E-03
29           -69.7          22.7   3.40E-03
32           -70            23.6   3.80E-03
35           -70            23.6   3.80E-03

Related to the configuration used in Section 5, only the guard interval was changed, from 1/128 to 19/256. The report from the receiver contains the signal level, SNR and BER of the received signal. The output is summarized in Table 1. Figures 5 and 6 represent the curves obtained from the information provided by the receiver, since such a representation is more intuitive and can be easily interpreted by the user. 7. CHANGE IN MODULATION SCHEME FROM 64-QAM TO 256-QAM
For a change in constellation from 64-QAM to 256-QAM, the Signal to Noise Ratio (SNR) and the Bit Error Rate (BER) curves are the ones depicted in Figures 7 and 8. It can be seen that the signal becomes stable and, as the SNR rises, the BER values drop. 8. CHANGE OF CODE RATE FROM 1/2 TO 5/6
For a code rate of 5/6, the received signal parameters analyzed by the receiver are shown in Figures 9 and 10. Comparing this to the response we get when the code rate is set to 1/2, we can see in Figure 11 that the Signal to Noise Ratio has a better value and a more linear shape. Figure 13 shows that the signal received is of a better quality for this configuration than for the previous one. 9. CHANGE IN CONSTELLATION FROM ROTATED TO NORMAL (FOR 64 QAM) For this test the settings chosen were the following: bandwidth of 8 MHz; FFT mode of 32k; guard interval of 19/256; pilot pattern PP8; normal 64-QAM modulation; code rate of 5/6. According to Table 2 and to Figures 13 and 14, which interpret the data gathered in the table, the signal is weaker than in the previous case, but it is strong enough to cover the noise and to offer a good Signal to Noise Ratio. Also, from the BER graph we can deduce that there are some errors that the equipment tries to correct, but still the values on the graph are smaller than in the previous cases.
Figure 11 SNR variation when the Code Rate is changed from 5/6 to 1/2
Figure 12 BER variation when the Code Rate is changed from 5/6 to 1/2
10. CHANGE IN CONSTELLATION FROM ROTATED TO NORMAL (FOR 256 QAM) For the last test, the idea was to keep the normal constellation but to change its type from 64-QAM to 256-QAM. Table 3 offers the information about the data received. After this, Figures 15 and 16 depict the interpretations of each column of Table 3. The last picture (Figure 17) is the monitoring window of the receiver, and it illustrates the variations of the overall bit rate and net bit rate. If we compare the values of the signal level in Table 3 with the ones from the previous test in Table 1, we can see that the values indicate a weaker signal in the case of the 64-QAM configuration, even if for the 64-QAM configuration the values seem to fluctuate more. Another remark can be made concerning the graph of the SNR: for 256-QAM it shows a more stable variation, without the significant jumps between values that can be observed in the 64-QAM case (Figure 13).
In the case of the bit error rate, we can also see a difference between the two cases, Figure 14 compared to Figure 16. In Figure 16 there is a smoother variation indicating there are no severe errors to be corrected by the system, whereas in Figure 14 the fluctuations of the graph and the small values at its end indicate a rather high level of errors that the system needs to manage. 11. CONCLUSIONS The equipment used is able to verify the fluctuations that appear for the following parameters: Signal level, SNR, Pre LDPC BER, Post LDPC BER, Post BCH FER and also can depict the curves for the Overall and Net Bitrate. With these experiments we tried
Figure 17 GUI illustration of the receiver measurements for 256-QAM to show the way in which the parameters that we set at the transceiver can alter the parameters of the received signal and bitrate of the stream for a given distance between the transmitter and receiver. By changing the parameters and by analyzing the measurement graphs we can optimize the parameters of a real network to fit the given conditions. We cannot declare that one configuration is better than another because in different environments we may have different error sources or different conditions that may need special tuning of each and every parameter of the transmitter. 12. ACKNOWLEDGMENTS This research activity was supported by UEFISCDI Romania under grant no. 20/2012 Scalable Radio Transceiver for Instrumental Wireless Sensor Networks - SaRaT-IWSN, by the Ministry of Communications and Information Society of Romania under grant no.106/2011 Evolution, implementation and transition methods of DVB radiobroadcasting using efficiently the radio frequencies spectrum and by the Sectorial Operational Programme Human Resources Development 2007-2013 of the Romanian Ministry of Labour, Family and Social Protection through the Financial Agreement POSDRU/107/1.5/S/76903
SEASONAL VARIATIONS OF THE TRANSMISSION LOSS AT THE MOUTH OF THE DANUBE DELTA
ZARNESCU GEORGE
Constanta Maritime University, Romania ABSTRACT Underwater communication devices, such as underwater acoustic modems (UAM), are designed using the passive sonar equation. At the beginning of the design phase we must know very well the parameters that compose this equation, if we want the modem operation to depend as little as possible on the variability of the transmission channel. The only parameter that is not known a priori is the transmission loss (TL). The measurement of this parameter is fairly expensive because it involves at least one marine research platform, trained personnel and numerous devices. Therefore we need to estimate this parameter, and an inexpensive solution is to simulate the underwater acoustic channel (UAC) in the region where we want to deploy the underwater acoustic modem. Using conductivity, temperature and depth (CTD) information taken from the NOAA database, information about the wind speed at the surface and information about the geoacoustical properties of the sea floor, we modeled the underwater acoustic channel at the mouth of the Danube Delta. With the help of the AcTUP simulation software we were able to estimate the seasonal variations of the transmission loss in the region of interest using a frequency dependent simulation method. These results will be used later to adapt the underwater acoustic modem to the transmission channel. Keywords: Transmission loss, passive sonar equation, underwater acoustic channel, underwater acoustic modem, frequency dependent simulation, channel modeling, channel simulation.
1. INTRODUCTION
An underwater acoustic modem is a communication device designed to transmit to the surface the data acquired by sensors. Multiple underwater acoustic modems compose an underwater wireless sensor network (UWSN). These communication devices transmit information wirelessly using acoustic waves, with a projector, and receive the information with a hydrophone. Usually a UWSN is placed on the seafloor with the purpose of monitoring chemical and biological phenomena of interest [1]. A UAM is designed using the passive sonar equation. At the beginning of the design phase we must know very well all the parameters that compose this equation, if we want the modem to operate correctly in an underwater transmission channel whose parameters vary with temperature, salinity, depth, wind speed at the sea surface and the geoacoustical properties of the seafloor. The only parameter that is not known a priori is the transmission loss. The measurement of this parameter is fairly expensive because it involves at least one marine research platform, trained personnel and numerous devices. Therefore we need to estimate this parameter, and an inexpensive solution is to simulate the underwater acoustic channel (UAC) in the region where we want to deploy the underwater acoustic modem [2]. Using information obtained from the National Oceanic and Atmospheric Administration (NOAA) database [3], information about the wind speed at the surface and information about the geoacoustical properties of the seafloor, we modeled the underwater acoustic channel at the mouth of the Danube Delta. Using the Acoustic Toolbox User-interface and Post-processor (AcTUP) simulation software we were able to estimate
the seasonal variations of the transmission loss in the region of interest using a frequency dependent simulation method. These results will be used later to adapt the underwater acoustic modem to the transmission channel. In the next section we will present the proposed underwater acoustic channel model and the method with which the transmission loss was computed. In section 3 we present the seasonal variations of the transmission loss obtained by simulating the propagation of the underwater acoustic waves in the considered transmission channel. In the final section we present the conclusions of this article and future work. 2. UNDERWATER ACOUSTIC CHANNEL MODELLING The region of interest is shown in Figure 1. It is geographically located on 45.3 N and 29.8 E latitude and longitude respectively. At this location were recorded 465 CTD data between 1986 and 1991. These data have been introduced in equation 1 to compute the sound speed profile (SSP).
where c is the speed of sound in m/s, T is the temperature in degrees Celsius, S is salinity in parts per thousand (ppt) and z is the depth measured in meters [4]. This equation is valid for
The depth of our location is 20 m. It will be assumed that near this location the depth will be constant.
Figure 1 The region at the mouth of the Danube Delta
2.1 Seasonal variations of the sound speed profile The mean sound speed profile was computed for each season using the data obtained from NOAA. We also computed the standard deviation (std) of the SSP. These data were used to define two new sound speed profiles: one was obtained by adding the std data to the mean sound speed profile and the other one was obtained by subtracting the std data from the mean SSP. These profiles are shown in Figure 2.
Figure 2 Seasonal variation of the sound speed profile at the mouth of the Danube Delta
In Figure 2 we observe a large variation of the sound speed. This variation is between 1430 and 1510 m/s. In winter and spring we have the smallest sound speeds, which are due to low temperatures. The highest sound speeds are observed during summer and autumn. We also observe that the sound speeds in the mean SSP, the green trace, are less than 1500 m/s (the average value of the underwater sound speed on the globe). This is due to the fact that the average salinity at the mouth of the Danube Delta, 17 ppt, is much smaller than the global average salinity of 35 ppt. The low salinity is due to the fresh water brought by the Danube into the Black Sea. Referring to the mean SSP, we observe in Figure 2 a) a positive sound speed gradient. This is called the mixed layer and is due to the harsh conditions in the winter. The bad meteorological conditions determine the mixing of layers with different temperatures, resulting in a layer with a constant temperature over the entire water column. In Figure 2 b), in the mean sound speed profile, we observe again the mixed layer. Also in Figure 2 d), between 0 and 10 m, the mixed layer is present. Between 10 and 20 m we notice a negative sound speed gradient. This is called the thermocline. Also during summer, Figure 2 c), because of the calm and sunny conditions we notice the thermocline. This is represented by a decrease in temperature with increasing water column depth. 2.2 Seafloor sound speed profile and geophysical properties The seafloor consists of three sedimentary layers. The first layer is composed of silty-clay or mud. This is a dynamic layer which consists of river deposits continuously brought by the Danube. The second layer consists of silt and the third layer is made of sand.

Table 1 Geophysical properties of the seafloor sediments

Properties    Depth (m)   Sound speed (m/s)   Density (kg/m3)   Attenuation (dB/λ)
Silty-Clay    0.15        1491                1480              0.15
Silt          0.05        1575                1700              1
Sand          >1          1650                1900              0.8
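The per-season profile computation described in Section 2.1 above can be sketched as follows. Since the paper's equation (1) is not reproduced here, the sketch uses the common Medwin (1975) sound-speed approximation as a stand-in, and the toy CTD casts in the example are placeholders, not the NOAA records.

```python
# Sketch of the Section 2.1 processing: mean and mean +/- one std SSP per season.
import numpy as np


def sound_speed(T, S, z):
    """Approximate sound speed [m/s] from temperature [deg C], salinity [ppt], depth [m]
    using the Medwin (1975) formula (assumed; the paper's equation 1 is not shown here)."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)


def seasonal_profiles(casts, depths):
    """casts: list of (T(z), S(z)) arrays sampled at `depths`."""
    c = np.array([sound_speed(T, S, depths) for T, S in casts])
    mean, std = c.mean(axis=0), c.std(axis=0)
    return mean, mean - std, mean + std


if __name__ == "__main__":
    z = np.arange(0.0, 21.0, 1.0)                                   # 0-20 m water column
    winter_casts = [(np.full_like(z, 7.0), np.full_like(z, 17.0)),  # toy CTD casts
                    (np.full_like(z, 8.0), np.full_like(z, 17.5))]
    mean, lo, hi = seasonal_profiles(winter_casts, z)
    print(mean[:3], lo[:3], hi[:3])
```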
Figure 3 Seafloor sound speed profile. The third sedimentary layer is deeper than 1 m. 2.3 Underwater acoustic channel modeling We envisioned an underwater wireless sensor network at the mouth of the Danube Delta, consisting of two modems, placed just above the seafloor, which can communicate horizontally. Using the data presented in sub-sections 2.1 and 2.2, we created in AcTUP simulation software [7], which is a MATLAB plug-in, 12 underwater acoustic environments, one for each sound speed profile. In figure 4 we present the proposed underwater acoustic channel. The sea surface was considered a reflector with 1.75 m rms roughness. The bottom was modeled as a flat reflector and attenuator. The sea depth is considered to be 20 m. The transmitter and receiver were placed at 50 cm above the seafloor in a horizontal configuration. The transmission distance between them is considered to be 500 m.
In equation 2, the channel frequency response is expressed in terms of the amplitude and phase of each component of the impulse response, the delay of each impulse (i.e. its time of arrival relative to the first impulse), the transmission distance and the transmission frequency. The transmission loss was computed using equation 3 and the frequency response from equation 2. We must emphasize that the presented method has several advantages over the experimental one. A first advantage is that it is less expensive than the experimental one, because it requires only the simulation of a mathematical model with real input data. The simulation results will be satisfactory if the underwater acoustic channel is modelled realistically. Another advantage of this method is that we can simulate the transmission losses for a wide range of frequencies. A third advantage is that we can change the current simulation model at any time. 3. SEASONAL VARIATIONS OF THE TRANSMISSION LOSS The simulation results are shown in Figure 5 for each season and for each sound speed profile. In each sub-figure the upper plot, 1, corresponds to the mean minus one std sound speed profile. The middle plot, 2, is determined by the mean SSP and the lower plot, 3, corresponds to the mean plus one std SSP.
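A minimal sketch of the transmission-loss step just outlined: the arrival amplitudes, phases and relative delays produced by the propagation code are combined into a frequency response, and TL is then taken as minus 20 times the base-10 logarithm of its magnitude. The three arrivals below are made-up placeholders, not output of the AcTUP runs, and the symbol names are the sketch's own.

```python
# Sketch: transmission loss from a multipath impulse response (placeholder arrivals).
import numpy as np

# (amplitude, phase [rad], delay relative to first arrival [s]) per eigenray
arrivals = [(3.2e-2, 0.0, 0.0), (1.1e-2, 2.1, 1.8e-3), (0.6e-2, -0.7, 4.5e-3)]

f = np.linspace(100.0, 100e3, 2000)                    # 0.1-100 kHz band
H = np.zeros_like(f, dtype=complex)
for A, phi, tau in arrivals:
    H += A * np.exp(1j * phi) * np.exp(-2j * np.pi * f * tau)

TL = -20.0 * np.log10(np.abs(H) + 1e-20)               # transmission loss [dB]
print(f"TL at 27 kHz ~ {TL[np.argmin(np.abs(f - 27e3))]:.1f} dB")
```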
Figure 4 Underwater acoustic channel model at the mouth of the Danube Delta. The sea depth, z, is measured in meters, cw(z) represents the water sound speed profile and cb(z) the seafloor SSP. 2.4 Transmission loss computation The method used to compute the transmission loss is described in detail in [8]. We briefly present the most important steps that were performed to compute the transmission loss at the mouth of the Danube Delta.
In this article we present the variations of the transmission loss at the mouth of the Danube Delta in response to changes from the mean sound speed profile. We also show these changes for each season. We modeled the underwater acoustic channel using real data and AcTUP simulation software. We used the simulation results to compute the changes in the transmission loss for each season. These results will be used in designing an underwater acoustic modem. In the near future we want to install in the considered region an underwater wireless sensor network consisting of two modems placed on the bottom in a horizontal link. 5. REFERENCES
Figure 5 Seasonal variations of the transmission loss
We observe in Figure 5 a.1 three pronounced notches around 8, 40 and 90 kHz. As the sound speed increases we notice in a.2 that these notches are smaller, but others appear around 70 and 80 kHz. In a.3 we see the appearance of a notch around 22 kHz and two pronounced notches around 32 kHz. The one at 8 kHz disappeared and the one at 40 kHz is still present. The notches around 70 and 80 kHz have moved 10 kHz away. We see that the frequency selectivity in Figure 5 b.1 and b.2 is much larger than that in Figure 5 a). A quasi-linear decay is observed in b.3, where the notches are very small.
[1] Akyildiz, I., Pompili D. and Melodia, T., Underwater acoustic sensor networks: research challenges, Ad Hoc Networks 3 (3), pp. 257-279, 2005. [2] Etter, C. P., Underwater Acoustic Modeling and simulation, 3rd ed., Spon Press, 2003. [3] National Oceanic and Atmospheric Administration, NOOA, http://www.noaa.gov/, 2012. [4] Brekhovskikh, L. M., Fundamentals of Ocean Acoustics, 3rd edition, Springer, 2003. [5] Oaie, Ghe., Secrieru, D., Black Sea Basin: Sediment Types and Distribution, Sedimentation processes, Proceedings of Euro-EcoGeoCentre Romania, 2004. [6] Jensen, F. B., Computational Ocean Acoustics, 2nd edition, Springer, 2011. [7] Maggi, A. M., AcTUP v 2.2, Acoustic Toolbox Userinterface and Post-processor, Installation and User Guide, 2010. [8] Xiao, Y., Underwater acoustic sensor networks, CRC Press, Boca Raton, 2010. [9] Porter, M. B., The Bellhop Manual and Users Guide, 2011. [10] Porter, M. B. and Liu, Y. C., Finite-Element Ray Tracing, Theoretical and Computational Acoustics 2, 947-956, 1994. [11] A. J. Duncan, A. L. Maggi, A Consistent, User Friendly Interface for Running a Variety of Underwater Acoustic Propagation Codes, Proceedings of ACOUSTICS 2006, pp. 471-477, 2006.
1. INTRODUCTION
Ocean acoustic engineers have designed and implemented two types of underwater modems: cabled and acoustic. A cabled modem transmits the acquired information to a data center placed near shore through a fiber optic cable placed on the ocean or sea floor [1]. Through this cable the modem is also powered and can function for a very long time, virtually unlimited. We must emphasize that this method of monitoring has a high cost of implementation and maintenance, because the cable is very long and expensive and must be placed on the ocean floor. Furthermore, it is not a reliable monitoring method because the cable can break at any time when the weather is bad. An acoustic modem is an underwater communication device used to acquire scientific data from the marine environment through the use of a sensor module and to transmit the data acoustically to another modem or to the surface. Afterwards the data are saved on a server placed near shore for immediate or later processing [2]. We must emphasize that a big shortcoming of this monitoring method is that the modem will operate for a short period of time because it is powered by batteries. Even if this is an important disadvantage, at the present time underwater acoustic modems are widely used due to the fact that they have a smaller manufacturing cost than cabled modems, but the costs are still high [3]. A big advantage is that an acoustic modem can be recovered easily. It is provided with an acoustic release, which can be operated remotely. The acoustic release opens and the modem comes to the surface. Afterwards the communication device is recovered, fixed and placed in the water again [4]. Although this operation is quite fast, it is expensive because it involves at least one marine research platform, trained personnel and numerous sophisticated devices. It can be repeated several times in a year because most of
the time the energy from the batteries is depleted quite fast. This is due to the fact that the modem uses a lot of power to transmit the data over short or medium distances. Efficient use of this energy for data transmission will increase the life of the underwater acoustic modem. This will be possible if we adapt the modem to the transmission channel. It means that we have to know the variations of the underwater acoustic channel ahead of time, or to estimate them [5]. The energy-efficient transmission method described in this article is based on the idea of estimating the variations of the underwater acoustic channel in a particular region. The area of interest is located in the north-western part of the Black Sea, belonging to Romania. We split this area into two important regions. In figure 1 we highlight the Danube Delta region and in figure 2 we show the Constanta region. If the method described in this article is used to design an underwater acoustic modem, it can reduce the energy used for transmission and the modem will be adapted to the underwater acoustic channel. Another advantage of this method is that the design and technical maintenance cost will be reduced, which will determine a reduction in the total production cost of an UAM. In the next section we will present the energy-efficient transmission method and we will highlight its use and performance with an example. In section 3 we will present the results obtained using this method for the variations of the underwater channel in the considered regions. In section 4 we present the conclusions of this article. 2. ENERGY-EFFICIENT TRANSMISSION METHOD The method presented in this article is based on the passive sonar equation, which is shown in equation 1.
SNR = SL - TL - NL (1)
where TVR is the transmitting voltage response and k is the amplification. We must emphasize that the TVR is defined as the output sound intensity level generated at 1 m range by a transducer for an input voltage of 1 V. The TVR profile for various transducers could be obtained from different manufacturers [6]-[8]. In equation 6 we show the new form of the passive sonar equation. SNR = TVR + 20log10(k) - TFR (6)
From equation 6 we observe that, for a given SNR at the receiver, we can find the optimum amplification k. Then, for this amplification, we can find the optimum transmission frequency. This method will offer good results if one can accurately estimate the transmission loss in the region of interest [9]. The amplification k, as a function of frequency, can be computed using equation 7.
Figure 2 Constanta region
where SNR is the expected signal-to-noise ratio at the receiver, SL is the source level of the projector, TL is the transmission loss experienced by an underwater sound wave when it travels from the transmitter to the receiver, and NL is the noise level in the underwater acoustic channel produced by various sources. These four parameters are expressed in decibels relative to the intensity of a plane wave of rms pressure 1 µPa and are functions of frequency. We must emphasize that the parameters of the left side of equation 1 are characterized by positive values only. The single parameter in equation 1 that can be modeled by an underwater modem designer is the SL. The parameters TL and NL depend on the specific characteristics of the underwater acoustic environment and can only be measured or estimated. The transmission loss will be estimated for each link configuration in the underwater acoustic channel. The noise level is presented in equation 2 for the frequency range 0.1-100 kHz:
NL = 50 + 7.5w^0.5 + 20log10(f) - 40log10(f + 0.4) (2)
where w is the wind speed in m/s. Next we define the parameter total frequency response (TFR), which is shown in equation 3 and represents the cumulative effect of transmission loss and ambient noise:
TFR = TL + NL (3)
Figure 3 Transmission loss estimated using real acoustic data acquired in the Constanta region
For the estimation of the parameter TL, an underwater acoustic propagation modelling software named AcTUP (Acoustic Toolbox User interface and Post processor) was used [10]. This software runs under Matlab and is a graphical user interface written by Amos Maggi and Alec Duncan which facilitates the rapid application of different acoustic propagation codes from the Acoustic Toolbox written by Mike Porter [11].
k = 10^((SNR - TVR + TFR)/20) (7)
Next, for a given transmission bandwidth (B) around the optimum frequency, we can find the amplification that will ensure the chosen SNR for the entire band of frequencies. 2.1 UAM design example We propose to ensure at the receiver an SNR equal to 60 dB re 1 µPa. In figure 3 we present an estimate of the parameter TL for the frequency range 0.1-100 kHz, computed for a specific Tx-Rx configuration with real acoustic data acquired in the region of Constanta.
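The design example can be followed step by step in the sketch below, which evaluates equations (2), (3) and (7) over the 0.1-100 kHz band and then takes the largest amplification over a bandwidth B centred on the optimum frequency. The TL and TVR curves are rough placeholder interpolations (the real ones come from the channel simulation and the transducer datasheet), and the noise formula is assumed to take f in kHz.

```python
# Sketch of the amplification selection described by equations (2), (3) and (7).
import numpy as np

f_khz = np.linspace(1.0, 100.0, 1000)
TL = np.interp(f_khz, [1, 30, 100], [52.0, 60.0, 85.0])     # placeholder TL estimate [dB]
TVR = np.interp(f_khz, [1, 30, 100], [125.0, 148.0, 138.0]) # placeholder TVR [dB re 1 uPa/V @ 1 m]

w = 5.0                                                     # wind speed [m/s]
NL = 50 + 7.5 * np.sqrt(w) + 20 * np.log10(f_khz) - 40 * np.log10(f_khz + 0.4)   # eq. (2)
TFR = TL + NL                                               # eq. (3)

SNR = 60.0                                                  # target SNR at the receiver [dB]
k = 10 ** ((SNR - TVR + TFR) / 20.0)                        # eq. (7), per frequency

f_opt = f_khz[np.argmin(k)]                                 # optimum transmission frequency
B = 8.0                                                     # transmission bandwidth [kHz]
band = (f_khz >= f_opt - B / 2) & (f_khz <= f_opt + B / 2)
print(f"f_opt ~ {f_opt:.1f} kHz, amplification over the band: {k[band].max():.1f} V/V")
```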
We will use the above notation and rewrite equation 1. This is highlighted in equation 4. SNR = SL - TFR (4)
Figure 6 The amplification in the region of the optimum frequency for a given transmission bandwidth
We obtained an amplification of 27 V/V (red dot) for an 8 kHz transmission bandwidth. The optimum transmission frequency is 27 kHz (light green dot). 3. TRANSMISSION METHOD RESULTS
In this section we present the results obtained using the transmission method described in section 2. We computed the amplification for transmission bandwidths between 1 kHz and 20 kHz. In figure 7 we show the results for the region of Constanta and in figure 8 we show the results for the Danube Delta region. For the computation of optimum amplification we used the transmission voltage response shown in figure 5, the noise level described mathematically in equation 2 and estimated transmission losses from the considered regions.
In this article we present an energy-efficient transmission method that could be used by an ocean acoustic engineer in designing an underwater acoustic modem. This method described in section 2 can reduce the energy used for transmission and we could say that the modem will be adapted to the underwater acoustic channel. This method will offer good results if one could estimate accurately the transmission loss in the region of interest. For the results presented in section 3 we used transmission loss estimates based on acoustic data recorded for 108 years. Another advantage of this method is that the design and technical maintenance cost will be reduced which will determine a reduction in the total production cost of an UAM. 5. REFERENCES
Figure 8 Transmitter amplification as a function of transmission bandwidth in the Danube Delta region
We wish to specify that these amplifications were computed for the minimum SNR for which the amplification was greater than 1. We observe in both figures, for all the transmission distances, that the smallest amplification is obtained in summer. In Figure 7 a-c we observe that for a transmission distance of 500 m the amplification increases exponentially with bandwidth, but this growth is slow. In Figure 7 d, between 1-10 kHz we notice an amplification smaller than 4. For distances of 500 and 1000 m this amplification is maintained, but for a distance of 2000 m the amplification is approximately 4 times greater. In Figure 8 a, b and d we observe high amplifications. This is due to the fact that the seabed in the Danube Delta
[1] Maffei, A.R., et. al., NEPTUNE Gigabit Ethernet submarine cable system, Proceedings. IEEE Oceans, 2001. [2] Akyildiz, I., Pompili D. and Melodia, T., Underwater acoustic sensor networks: research challenges, Ad Hoc Networks, 2005. [3] Benson, B., et. al., Design of a low-cost underwater acoustic modem, IEEE Embedded Systems Letters, 2010. [4] Teledyne Benthos, Inc., Acoustic Releases, http://www.teledyne.com, 2012. [5] Zrnescu, G., Low cost adaptive underwater acoustic modem for the Black Sea environment, Advanced topics on optoelectronics, microtehnologies and nanotechnologies, 2012. [6] Benthos, Inc., Underwater transducers, http://www.benthos.com, 2012. [7] EvoLogics GmbH, Underwater transducers, http://www.evologics.de, 2012. [8] LinkQuest, Inc., Underwater transducers, http://www.link-quest.com, 2012. [9] Xiao, Y., Underwater acoustic sensor networks, CRC Press, Boca Raton, 2010. [10] Duncan, A., Maggi, A., Underwater acoustic propagation modelling software AcTUP v2.2l, http://cmst.curtin.edu.au. [11] Porter, M., The Bellhop manual and users guide, http://oalib.hlsresearch.com, 2011. [12] Brekhovskikh, L. M., Fundamentals of Ocean Acoustics, 3rd edition, Springer, 2003. [13] National Oceanic and Atmospheric Administration, NOOA, http://www.noaa.gov/, 2012. [14] Oaie, Ghe., Secrieru, D., Black Sea Basin: Sediment Types and Distribution, Sedimentation processes, Proceedings of Euro-EcoGeoCentre Romania, 2004.
SECTION IV
MATHEMATICAL SCIENCES AND PHYSICS
ABSTRACT This paper deals with the adaptive control of the uncertain hyper-chaotic Yujun system with unknown parameters. We determine adaptive control laws that stabilize the Yujun system to one of its unstable equilibrium points and we derive update laws for the estimation of the system parameters. Numerical simulations are presented to validate and demonstrate the effectiveness of the adaptive control scheme derived in the paper. Keywords: Adaptive control, hyper-chaos, stabilization.
1. INTRODUCTION
The control of chaotic systems means designing state feedback control laws that stabilize a chaotic system around one of its unstable equilibrium points. It is presently a topic of intense research interest due to its potential applications in many fields, including chemical reactions [1], electrical systems [2], meteorology [3], nonlinear aero-elasticity [4], control of unstable modes in multimode lasers [5], the ship capsize problem [6], etc. Since the seminal work by Ott et al. [7], a variety of linear and non-linear techniques have been proposed for the control of chaotic systems. They can be categorized from different points of view [8]. One of the most important categories is that of adaptive control methods. When a dynamical system has unknown parameters in its describing equations, an identification algorithm is usually coupled with the control algorithm to provide an adaptive control system. In such a technique, parameter estimation and control are performed simultaneously. Adaptive control has been used widely for controlling chaos in many discrete- and continuous-time systems [9-13].
The main objective of this paper is to apply the adaptive control method for the stabilization of an uncertain hyper-chaotic Yujun system with unknown parameters to one of its unstable equilibria. The paper is organized as follows. The system description is given in Section 2. The determination of the adaptive control functions and of the update laws for the estimation of the system parameters is presented in Section 3. Simulation examples that demonstrate the performance of the proposed method are provided in Section 4. We close with a short summary and conclusions in Section 5.
2. SYSTEM DESCRIPTION
In 2010, Yujun et al. [14-15] reported a new hyper-chaotic system, obtained by adding a nonlinear controller to a known three-dimensional autonomous chaotic system. The generated system undergoes hyper-chaos, chaos and various periodic orbits as the control parameters are changed. It is described by
ẋ = a(y - x) + y z
ẏ = c x - y - x z + w
ż = x y - b z
ẇ = -x z + d w                                                        (1)
where x, y, z and w are state variables, while a, b, c and d are real constants. For a = 35, b = 8/3, c = 55 and d = 1.5 the system (1) shows hyper-chaotic behaviour: it has two positive Lyapunov exponents, λ1 = 1.4944 and λ2 = 0.5012, while the remaining exponents are non-positive.
Figure 1 The phase planes of the Yujun system (1) with a = 35, b = 8/3, c = 55 and d = 1.5
The equilibria of the system (1) are obtained by setting ẋ = ẏ = ż = ẇ = 0. After some algebra the equilibrium condition reduces to a bi-quadratic equation in y, the remaining coordinates being given by
x = a b y / (a b - y²),   z = x y / b,   w = y + x z - c x            (3)
For a = 35, b = 8/3, c = 55 and d = 1.5 the system (1) has the following equilibrium points:
E0: (0, 0, 0, 0),  E1: (49.9962, 8.7725, 164.4719, 5482),  E2: (-49.9962, -8.7725, 164.4719, -5482)
All these equilibria are unstable. To see this, we calculate the Jacobian of (1),
J = [  -a      a + z     y     0
      c - z     -1      -x     1
        y        x      -b     0
       -z        0      -x     d  ]                                   (4)
and evaluate it at the three equilibria. For the trivial equilibrium E0 the Jacobian has the eigenvalues {-65.0532; 29.0532; -2.667; 1.5}, two of which are positive. For the equilibria E1 and E2 a similar computation again yields eigenvalues with positive real part, so all three equilibria are unstable.
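As a quick numerical check (an illustration added here, not part of the original derivation), the Jacobian (4) can be evaluated at E0 and its eigenvalues computed:

# Sketch: eigenvalues of the Jacobian of the Yujun system at the origin E0.
import numpy as np

a, b, c, d = 35.0, 8.0 / 3.0, 55.0, 1.5

def jacobian(x, y, z, w):
    return np.array([[-a,     a + z,  y,   0.0],
                     [c - z,  -1.0,  -x,   1.0],
                     [y,       x,    -b,   0.0],
                     [-z,      0.0,  -x,   d]])

eigvals = np.linalg.eigvals(jacobian(0.0, 0.0, 0.0, 0.0))
print(np.sort(eigvals.real))   # approx. [-65.05, -2.67, 1.5, 29.05]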
In order to stabilize the system (1) at one of its unstable equilibrium points we introduce the deviations xc = x - x0, yc = y - y0, zc = z - z0, wc = w - w0, with (x0, y0, z0, w0) one of the equilibrium points mentioned above. Doing this, the system (1) becomes
ẋc = a(yc - xc) + yc zc + y0 zc + z0 yc + u1
ẏc = c xc - yc - xc zc + wc - x0 zc - z0 xc + u2
żc = xc yc - b zc + x0 yc + y0 xc + u3
ẇc = -xc zc + d wc - x0 zc - z0 xc + u4                               (6)
where u1, u2, u3 and u4 are four controllers to be designed. The controllers are chosen so that the nonlinear terms and the terms containing the equilibrium coordinates are cancelled, with the unknown parameters a, b, c and d replaced by their estimates ã, b̃, c̃ and d̃, and a linear feedback -k1 xc, -k2 yc, -k3 zc, -k4 wc is added, where k1, k2, k3 and k4 are positive constants. With this choice we obtain the closed-loop error dynamics
ẋc = ea (yc - xc) - k1 xc
ẏc = ec xc - k2 yc
żc = -eb zc - k3 zc
ẇc = ed wc - k4 wc                                                    (9)
where the parameter estimation errors are defined as
ea = a - ã,   eb = b - b̃,   ec = c - c̃,   ed = d - d̃
Since a, b, c and d are constants,
ėa = -dã/dt,   ėb = -db̃/dt,   ėc = -dc̃/dt,   ėd = -dd̃/dt             (10)
Consider the quadratic Lyapunov function
V = (1/2)(xc² + yc² + zc² + wc² + ea² + eb² + ec² + ed²)              (12)
Its time derivative along the trajectories of (9) and (10) is
V̇ = xc ẋc + yc ẏc + zc żc + wc ẇc + ea ėa + eb ėb + ec ėc + ed ėd
  = -k1 xc² - k2 yc² - k3 zc² - k4 wc² + ea (yc xc - xc² - dã/dt) + eb (-zc² - db̃/dt) + ec (xc yc - dc̃/dt) + ed (wc² - dd̃/dt)
By choosing the parameter update laws as
dã/dt = yc xc - xc² + k5 ea
db̃/dt = -zc² + k6 eb
dc̃/dt = xc yc + k7 ec
dd̃/dt = wc² + k8 ed                                                   (13)
where k5, k6, k7 and k8 are positive constants, we obtain
V̇ = -k1 xc² - k2 yc² - k3 zc² - k4 wc² - k5 ea² - k6 eb² - k7 ec² - k8 ed² < 0     (14)
so V̇ is negative definite. By Lyapunov stability theory the controlled system is therefore stabilized at the chosen equilibrium point, while the parameter estimation errors converge to zero.
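As an illustration of the scheme (a Python sketch, not the authors' MATLAB implementation), the closed-loop error dynamics (9) together with the update laws (13) can be integrated directly; the gains and initial conditions below are those used in Subsection 3.1:

# Sketch: closed-loop error dynamics (9) with the parameter update laws (13).
import numpy as np
from scipy.integrate import solve_ivp

k = np.full(8, 2.0)          # k1..k8, as in the simulations of Subsection 3.1

def closed_loop(t, s):
    xc, yc, zc, wc, ea, eb, ec, ed = s
    k1, k2, k3, k4, k5, k6, k7, k8 = k
    dxc = ea * (yc - xc) - k1 * xc
    dyc = ec * xc - k2 * yc
    dzc = -eb * zc - k3 * zc
    dwc = ed * wc - k4 * wc
    # e = true parameter - estimate, so de/dt = -(update law)
    dea = -(yc * xc - xc**2 + k5 * ea)
    deb = -(-zc**2 + k6 * eb)
    dec = -(xc * yc + k7 * ec)
    ded = -(wc**2 + k8 * ed)
    return [dxc, dyc, dzc, dwc, dea, deb, dec, ded]

s0 = [5, 9, 2, 7, 10, 1/3, 10, 23/6]          # initial state and parameter errors
sol = solve_ivp(closed_loop, (0.0, 4.0), s0, rtol=1e-8, atol=1e-10)
print(np.abs(sol.y[:, -1]).max())             # all components decay towards zero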
3. NUMERICAL SIMULATIONS
3.1 Stabilization to the equilibrium point E0
This case corresponds to x0 = y0 = z0 = w0 = 0. The calculations were carried out using the MATLAB solver ODE23s, selecting the parameters of the Yujun system (1) as a = 35, b = 8/3, c = 55 and d = 1.5. The initial values were chosen, as an example, as (xc(0), yc(0), zc(0), wc(0)) = (5, 9, 2, 7) and (ea(0), eb(0), ec(0), ed(0)) = (10, 1/3, 10, 23/6). When the control is activated and the constants ki, i = 1, 2, ..., 8, are set equal to 2, the controlled Yujun system (1) converges to the equilibrium point E0 exponentially, as shown in Figure 2. Figure 3 delineates the estimation of the uncertainties. We may note that the control is rapidly achieved and the unknown parameters are quickly estimated to their true values.
Figure 2 Time responses of the controlled Yujun system (stabilization to E0)
Figure 3 Estimation of the unknown parameters a = 35, b = 8/3, c = 55 and d = 1.5
3.2 Stabilization to the equilibrium point E1
This case corresponds to x0 = 49.9962, y0 = 8.7725, z0 = 164.4719, w0 = 5482, with ki = 2, i = 1, 2, ..., 8, and initial values (xc(0), yc(0), zc(0), wc(0)) = (20, 10, 30, 50) and (ea(0), eb(0), ec(0), ed(0)) = (8, 1, 5, 6). When the adaptive laws are applied, the controlled Yujun system (1) converges to the equilibrium E1 exponentially, as indicated in Figure 4.
Figure 4 Time responses of the controlled Yujun system
3.3 Stabilization time
Here we restrict attention to the influence of the positive constants ki, i = 1, 2, ..., 8, on the stabilization process. To this purpose, we fix k1 = k2 = ... = k8 in the remainder of this subsection. It seems obvious that high values of ki will lead to small values of the stabilization time. To verify this assumption, we performed a set of numerical experiments whose outputs were the stabilization time ts to the equilibrium point E0 and the time tu required for the uncertainties to be estimated to their true values. The absolute error used in the simulations was fixed at 0.0001. Our findings are presented in Table 1. Both ts and tu converge to zero as ki approaches infinity. An interesting situation occurs when ki tends to zero: as expected, tu becomes larger and larger, but ts, after a normal increase, has a maximum at ki ≈ 0.4 and then begins to decrease. We plan to address this behaviour in a future paper.
Table 1 Dependence of ts and tu on ki
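The experiment behind Table 1 can be reproduced in outline as follows; this is a sketch built on the same closed-loop model as above, using a 0.0001 absolute-error criterion, and it is not the original MATLAB code:

# Sketch of the Table 1 experiment: stabilization time t_s (state norm below
# 1e-4) and estimation time t_u (parameter errors below 1e-4) versus a common
# gain k1 = ... = k8 = ki.  Illustrative values only.
import numpy as np
from scipy.integrate import solve_ivp

def times_for_gain(ki, s0, t_end=80.0):
    def rhs(t, s):
        xc, yc, zc, wc, ea, eb, ec, ed = s
        return [ea * (yc - xc) - ki * xc,
                ec * xc - ki * yc,
                -eb * zc - ki * zc,
                ed * wc - ki * wc,
                -(yc * xc - xc**2 + ki * ea),
                -(-zc**2 + ki * eb),
                -(xc * yc + ki * ec),
                -(wc**2 + ki * ed)]
    t = np.linspace(0.0, t_end, 8001)
    sol = solve_ivp(rhs, (0.0, t_end), s0, t_eval=t, rtol=1e-9, atol=1e-12)
    state_err = np.abs(sol.y[:4]).max(axis=0)
    param_err = np.abs(sol.y[4:]).max(axis=0)
    # first time the error drops below the threshold (inf if it never does)
    first = lambda err: t[np.argmax(err < 1e-4)] if (err < 1e-4).any() else np.inf
    return first(state_err), first(param_err)

s0 = [5, 9, 2, 7, 10, 1/3, 10, 23/6]
for ki in (0.2, 0.4, 1.0, 2.0, 5.0):
    ts, tu = times_for_gain(ki, s0)
    print(f"ki = {ki:>4}: t_s = {ts:.2f}, t_u = {tu:.2f}")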
4. CONCLUSIONS
In this paper we proved that, using the adaptive control method, we can simultaneously obtain the control of a chaotic system to one of its unstable equilibria and the estimation of its constant unknown parameters. We investigated two problems. In Section 2 we derived the adaptive control scheme and the update laws for the estimation of the parameters of the hyper-chaotic Yujun system with unknown parameters. In Subsection 3.3 we presented a short study of the influence of some constants involved in the adaptive control scheme on the stabilization time. The numerical simulations presented in the paper allow us to reliably suggest that the adaptive control method is very effective and convenient for achieving chaos control of hyper-chaotic systems.
5. REFERENCES
[1] SCHWARTZ, I.B., TRIADOF, I., Controlling unstable states in reaction-diffusion systems modelled by time series, Physical Review E, 50(4), p. 2548-2552, 1994
[2] GE, S.S., WANG, C., Adaptive control of uncertain Chua's circuit, IEEE Transactions on Circuits and Systems, 47(9), p. 1397-1402, 2000
[3] LIAO, T.L., LINI, S.H., Adaptive control and synchronization of Lorenz system, Journal of the Franklin Institute, 336, p. 925-937, 2007
[4] VIRGIL, L.N., DOWELL, L.H., Nonlinear aeroelasticity and chaos, Computational Nonlinear Mechanics in Aerospace Engineering, 146, p. 531-546, AIAA, Washington DC, 1992
[5] COLET, P., ROY, R., WIESENFELD, K., Controlling hyper-chaos in a multimode laser model, Physical Review E, 50, p. 3453-3457, 1994
ABSTRACT The precision of determining various points on the Earth's surface, on sea or in the air using GPS receivers suffers not only because of the precision of determining the satellite position on its orbit, but also because of the technique that measures the distance between the satellite and the receiver. The orbital errors of the satellites are caused mainly by the gravitational and non-gravitational perturbations of the satellites. This article proposes to evaluate the main gravitational perturbations that act upon a GPS satellite. Keywords: GPS satellite, Kepler orbit, Runge-Kutta method, gravitational perturbation.
1. INTRODUCTION
There are two methods we can use to calculate a GPS satellite's orbit. The first method is based on the analytical solutions of Lagrange's planetary equations, expressed in terms of Kepler's orbital elements. The second method is based on the numerical solution of the 2nd order differential equation of perturbed relative motion,
r̈ = -(μ/r³) r + δ                                                     (1)
expressed in Cartesian coordinates, in an inertial rectangular geocentric equatorial reference system (with the axis Ox pointing towards the true vernal point at the J2000 epoch) [3], where μ is the Earth's gravitational parameter and δ is the sum of all perturbation accelerations that act on the GPS satellite.
2. THE ANALYTICAL SOLUTION
The analytical solution used in the precise calculation of short GPS orbit arcs represents an extension of the 1st order perturbation theory. Generally speaking, the linear solution offers a precision of 2 m in the geocentric radius vector for all the perturbation forces acting on the satellites, except for the predominant effect of the 2nd zonal harmonic (of coefficient C20 or J2). For this harmonic it is necessary to include the 2nd order perturbations; in principle, this requires the 2nd order solution of Lagrange's planetary equations, using the 1st order (linear) solution to evaluate the right-hand member of these equations. The comparative analysis of the 1st order analytical solution against the numerical solution applied to GPS satellites shows that the 2nd order perturbations generally have magnitudes of the order of 30 m, mainly in the direction tangent to the orbit. The 2nd order effects are included in the analytical solution using Taylor series. For example, for a Keplerian element qi, its derivative with respect to time can be written as
dqi/dt = (dqi/dt)(q0) + Σ (j=1..6) [∂(dqi/dt)/∂qj] (qj - qj0) + ...    (2)
where q0 is a reference Keplerian orbit used in the 1st order linear solution, having the role of a Taylor series node. The 2nd order derivative of the orbital element qi with respect to time will be the sum of the 1st order and the 2nd order solutions:
dqi/dt = d(Δ1qi)/dt + d(Δ2qi)/dt                                       (3)
The 2nd order perturbations will be calculated with the following relation:
Δ2qi = Σ (j=1..6) ∫ (∂q̇i/∂qj) Δ1qj dt                                  (4)
Replacing (2) and (3) in (4), Δ1qj are the solutions of the integrated Newton-Euler equations. The perturbation force function has the following form [12]:
F = (μ/a) (ae/a)^l Σ (m=0..l) Σ (p=0..l) Σ (q=-∞..+∞) Flmp(i) Glpq(e) Slmpq(ω, Ω, M, θ)     (5)
For GPS satellites the following observations are available: for simplicity, the Newton-Euler equations are integrated using only one term in (5); the 2nd order perturbations are necessary only for e, ω and M; the sum over the index j in equation (4) only requires including the 1st order solutions for e, ω and M; only two terms of the expansion of the perturbing force function are necessary for each qj.
The 2nd order solution is obtained by isolating the most important 2nd order terms and is consistent with the numerical solution, the only disadvantage being the length of the calculations [11]. The maximum errors per revolution reach the order of 1 m. For the same period, the mean square error has the value of 0.6 m. A much more efficient 2nd order solution is that of Aksnes and Kinoshita, based on a transformation of the differential equation system into a canonical form. More precisely, Aksnes used Hill's variables, while Kinoshita adopted a modified form of Delaunay's variables. In both cases, the 2nd order perturbations reach the order of 1 m or smaller for GPS satellite orbits.
3. THE NUMERICAL SOLUTION
The numerical solution of the GPS satellite's orbit is based on the direct numerical integration of the 2nd order differential equations of perturbed relative motion, in Cartesian coordinates. This method, also known as Cowell's method, has the advantage of a simple formulation of the equations of motion. In Cartesian coordinates it has the following form:
ẍi = -(μ/r³) xi + δẍi,   for i = 1, 2, 3                               (6)
where r is the geocentric radius vector, δẍi is the sum of the perturbing accelerations caused by the non-central terrestrial gravitational field, the Moon-Sun gravitational attraction and the relativistic effects, and the coordinates xi are defined in an inertial geocentric equatorial reference system. The equations of motion are complete when each perturbation acceleration is evaluated and transformed into this reference system. The transformation between the CTS (Conventional Terrestrial System) and the CIS (Conventional Inertial System) is made with the help of the 4 rotation matrices, based on the relation
xi[CTS] = R_M R_S R_N R_P xi[CIS]                                      (7)
where: RP - the rotation matrix for the precessional motion; RN - the rotation matrix for nutation; RS - the rotation matrix for sidereal time; RM - the rotation matrix for polar motion. The arguments of the R matrices are based on the new definitions adopted for the reference epoch J2000 (J = 2451545.0). The accelerations due to the Moon-Sun gravitational attraction can be expressed directly in the adopted reference system; the resulting acceleration is proportional to the Moon-Sun gravitational attraction on the GPS satellite minus the acceleration of the geocentre caused by the same perturbing force. Under these conditions, equation (6) becomes
ẍi = -(μ/r³) xi + ∂V/∂xi + ẍiL + ẍiP                                   (8)
where ∂V/∂xi is the acceleration due to the non-central part of the geopotential and ẍiL, ẍiP are the perturbing accelerations due to the third-body (Moon-Sun) attraction, all terms being expressed in the CIS. The integration of the 2nd order differential equation of perturbed relative motion of the GPS satellite can be made using the 4th order Runge-Kutta algorithm.
The numerical integration of this system is realised by applying a 4th order Runge-Kutta algorithm [2]. This method can also be applied for integrating the Newton-Euler equation system, which has the advantage of consisting of 1st order equations. Considering the Newton-Euler equations, where qi is any of the satellite's Keplerian elements, the integration of the system is made with a constant (sufficiently small) time step Δt, thus:
wi = q̇i(qj(t), t) Δt                                                   (12)
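As an illustration of Cowell's method, the following sketch propagates equation (6) with a classical 4th order Runge-Kutta integrator, retaining only the J2 zonal harmonic as perturbing acceleration; the constants are rounded textbook values and the initial state is a generic GPS-like orbit, not data taken from the paper:

# Sketch: Cowell-type propagation of eq. (6) with an RK4 integrator and the
# J2 zonal harmonic as the only perturbation (illustrative values).
import numpy as np

MU = 398600.4418        # km^3/s^2, Earth's gravitational parameter
AE = 6378.137           # km, Earth's equatorial radius
J2 = 1.08263e-3

def deriv(state):
    # state = [x, y, z, vx, vy, vz]; returns its time derivative
    r = state[:3]
    rn = np.linalg.norm(r)
    a_central = -MU * r / rn**3
    # J2 perturbing acceleration in the inertial equatorial frame
    zx = r[2] / rn
    f = 1.5 * J2 * MU * AE**2 / rn**4
    a_j2 = f * np.array([r[0] / rn * (5 * zx**2 - 1),
                         r[1] / rn * (5 * zx**2 - 1),
                         r[2] / rn * (5 * zx**2 - 3)])
    return np.concatenate((state[3:], a_central + a_j2))

def rk4_step(state, h):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * h * k1)
    k3 = deriv(state + 0.5 * h * k2)
    k4 = deriv(state + h * k3)
    return state + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# GPS-like circular orbit with 55 deg inclination (assumed initial conditions)
r0 = 26560.0
v0 = np.sqrt(MU / r0)
inc = np.radians(55.0)
state = np.array([r0, 0.0, 0.0, 0.0, v0 * np.cos(inc), v0 * np.sin(inc)])

h = 30.0                              # s, integration step
for _ in range(int(12 * 3600 / h)):   # roughly one orbital period of 12 hours
    state = rk4_step(state, h)
print(state[:3])                      # position after one revolution, in km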
Figure 1 The variation of the acceleration components
The perturbation accelerations of the zonal harmonics J2, J3, J4, J5 and J6 have magnitudes between about 5.6 x 10^-13 and 2.5 x 10^-11 km/s², expressed in terms of: r - the geocentric radius vector of the GPS satellite; aec - the Earth's semi-major axis; e - the eccentricity of the GPS satellite's orbit; c - the speed of light.
Figure 4 The variation of the acceleration components due to the J4 perturbation
Figure 5 The variation of the acceleration components due to the J5 perturbation
Figure 6 The variation of the acceleration components due to the J6 perturbation
Perturbation accelerations due to the Sun and Moon gravitational attraction
Figure 8 The variation of the acceleration components due to the Moon's gravitational attraction
Perturbation accelerations due to relativistic effects
5. CONCLUSIONS
Out of all the gravitational perturbations that act upon a GPS satellite, the non-central terrestrial gravitational field has the most significant effect. The odd zonal harmonics produce a very small perturbation effect, because the perturbing potential contains sine terms inside the Legendre polynomials. The J2 zonal harmonic is about three orders of magnitude larger than the other zonal harmonics. The Sun's and the Moon's attraction represents the second important category of gravitational influences upon the satellite's motion. The perturbation acceleration induced by the relativistic effect is a consequence of the fact that artificial satellites move in the Earth's gravitational field. From the graphs of the variation of the perturbation accelerations one can observe secular (accumulating in time), long-period, short-period (with a variation period smaller than the orbital period of 12 hours) or mixed effects of the accelerations upon GPS satellites.
6. REFERENCES
[1] Arnold V., Kozlov V., Neishtadt A., Mathematical aspects of classical and celestial mechanics, Springer, Germany, 2006
[2] Beutler G., Methods of Celestial Mechanics I & II, Springer, Germany, 2005
[3] Brouwer, D., Clemence, G.M., Methods of Celestial Mechanics, Academic Press, New York and London, 1961, p. 11.
[4] Claessens, S.J., Featherstone, W.E., Computation of geopotential coefficients from gravity anomalies on the ellipsoid, Western Australian Centre for Geodesy, 2004
[5] Cojocaru, S., Tratat de navigatie maritima - Metodele moderne ale navigatiei maritime, Ed. Ars Academica, Bucuresti, 2008
[6] Collins G., The Foundations of Celestial Mechanics, USA, 2004
[7] Hofmann-Wellenhof, B. et al., GPS - Theory and Practice, New York, 1993.
[8] Lupu S., Elemente de dinamica sistemului global de pozitionare NAVSTAR - GPS, Ed. Academiei Navale Mircea cel Batran, Constanta, 2011
[9] Mathuna D., Integrable systems in celestial mechanics, Springer, USA, 2008
[10] Montenbruck O., Gill E., Satellite Orbits: Models, Methods and Applications, Springer, Germany, 2000
[11] Nakiboglu, S.M., et al., A multistation, multi-pass approach to Global Positioning System improvement and precise positioning, Geodetic Survey of Canada, Rep. 85-003, Ottawa, 1985.
[12] Seeber, G., Satellite Geodesy, Walter de Gruyter, 2003
[13] Su, H., Precise orbit determination of global navigation satellite system of second generation, Germany, 2000
[14] Taylor F.W., Elementary Climate Physics, Oxford University Press, United Kingdom, 2005.
THE EFFECTS CAUSED BY NON-GRAVITATIONAL PERTURBATIONS: THE ANISOTROPIC THERMAL EMISSION AND ANTENNAS EMISSION ON GPS SATELLITES
LUPU SERGIU
Mircea cel Batran Naval Academy, Constanta, Romania
ABSTRACT From the category of non-gravitational perturbations acting on the Earth's artificial satellites, the most important is the solar radiation pressure. These perturbations act both directly and indirectly. The indirect action of the solar radiation pressure is manifested through the albedo phenomenon, anisotropic thermal emission, antenna emission and eclipses. This article proposes to evaluate the orbital errors caused by two indirect effects of the solar radiation pressure acting on GPS satellites: anisotropic thermal emission and antenna emission. Keywords: GPS satellites, indirect solar radiation pressure, orbital elements.
1. INTRODUCTION
Taking into consideration the perturbation accelerations acting in the sense of deviating the satellite from the Keplerian motion, the 2nd order inhomogeneous differential equation of the satellite's perturbed motion, in vector form, is [4], [12]:
d²r/dt² = -(μ/r³) r + δ                                                (1)
Of all non-gravitational perturbations, the most important is the solar radiation pressure. Its mode of acting on an artificial satellite of the Earth is both direct and indirect. Among the indirect effects of the solar radiation pressure are the anisotropic thermal emission and the antenna emission. These produce a very small effect compared to the central acceleration. Starting from the initial conditions (position and velocity) of a GPS satellite, the 4th order Runge-Kutta algorithm was applied for the integration of the unperturbed equation of motion [3], [9]. The software determined the values of the radius vector and velocity for the selected period and for each integration step, and wrote the values to a text file. The software also solved Laplace's problem and determined the Keplerian elements (a, e, i, Ω, ω, M) [5], [10], where: a - the semi-major axis; e - the eccentricity; i - the inclination; Ω - the longitude of the ascending node; ω - the perigee argument; M - the mean anomaly. The parameters of the Keplerian elliptical orbit define the position of the orbit in the inertial space (i, Ω), the orientation of the orbit in its plane (ω), the shape of the orbit (a, e) and the satellite's position on the orbit (M). The only orbital parameter that depends on time is the mean anomaly.
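A standard way to carry out that last step, recovering the Keplerian elements from the integrated position and velocity, is sketched below; this is a generic textbook conversion (assuming a non-circular, non-equatorial orbit), not the author's software:

# Sketch: Keplerian elements (a, e, i, RAAN, argp, M) from a position/velocity
# state vector in an inertial geocentric equatorial frame.
import numpy as np

MU = 398600.4418  # km^3/s^2

def rv_to_elements(r, v):
    r = np.asarray(r, float); v = np.asarray(v, float)
    rn, vn = np.linalg.norm(r), np.linalg.norm(v)
    h = np.cross(r, v); hn = np.linalg.norm(h)          # angular momentum
    n = np.cross([0.0, 0.0, 1.0], h); nn = np.linalg.norm(n)  # node vector
    e_vec = np.cross(v, h) / MU - r / rn                # eccentricity vector
    e = np.linalg.norm(e_vec)
    a = 1.0 / (2.0 / rn - vn**2 / MU)                   # vis-viva
    i = np.arccos(h[2] / hn)                            # inclination
    raan = np.arctan2(n[1], n[0]) % (2 * np.pi)         # ascending node
    argp = np.arccos(np.clip(np.dot(n, e_vec) / (nn * e), -1, 1))
    if e_vec[2] < 0:
        argp = 2 * np.pi - argp                         # argument of perigee
    nu = np.arccos(np.clip(np.dot(e_vec, r) / (e * rn), -1, 1))
    if np.dot(r, v) < 0:
        nu = 2 * np.pi - nu                             # true anomaly
    E = 2 * np.arctan(np.sqrt((1 - e) / (1 + e)) * np.tan(nu / 2))
    M = (E - e * np.sin(E)) % (2 * np.pi)               # Kepler's equation
    return a, e, np.degrees(i), np.degrees(raan), np.degrees(argp), np.degrees(M)

print(rv_to_elements([26560.0, 0.0, 0.0], [0.0, 2.2, 3.1]))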
2. ACCELERATION PRODUCED BY THE ANISOTROPIC THERMAL EMISSION
An indirect effect of the interaction between the solar radiation and the artificial satellite is due to the fact that the temperature distribution on the satellite becomes uneven, because of the different orientation of the various satellite parts with respect to the solar heating. This leads to a new force, since the thermal photons emitted by the hot surface areas have more kinetic energy than those emitted from the colder areas. For a spinning satellite there will be two asymmetries in the distribution of temperature [11]. For a nearly spherical body that rotates with high velocity, the flow of energy absorbed by the satellite's surface elements has the form given in relation (2), where α is the absorption coefficient.
Figure 3 Eccentricity variation due to anisotropic thermal emission
Figure 5 The ascending node longitude variation due to anisotropic thermal emission
The semi-major axis suffers short periodic perturbations with a period equal to an orbital period and an amplitude of 4 x 10^-5 km. The eccentricity suffers a short periodic perturbation with a period of 6 hours and an amplitude of 6 x 10^-10, superimposed over a secular perturbation. The orbit's inclination suffers a short period perturbation with a period equal to an orbital period and an amplitude of 3 x 10^-8. The longitude of the ascending node suffers a mixed periodic perturbation: short periodic perturbations with a period of 3 hours and an amplitude of 1 x 10^-7 degrees, and long periodic perturbations with a period of 24 hours.
3. ACCELERATION PRODUCED BY ANTENNAS EMISSION
The emission of the GPS satellites' navigation antennas produces a constant radial acceleration and, as a consequence, a change of the acceleration in this direction. GPS satellites continuously emit with a power between 70 and 80 watts in the antennas' direction when transmitting the two fundamental frequencies L1 and L2.
The force, expressed in newtons, due to the absorption of photons from the incident radiation flux E is given by relation (5), where E is expressed in W/m² and c is the speed of light in vacuum, expressed in m/s. Thus, according to Newton's third law, when a power signal W is emitted there is an equal and opposite reactive force acting in the opposite direction [17]. The resulting acceleration is given by
Fr = W / (M c)                                                         (6)
where M is the satellite's mass. Considering an antenna emission power of 80 watts, the resulting acceleration on the different GPS satellites is [17]:
Table 1 The acceleration produced by GPS satellites
GPS satellite type     Acceleration [m/s²]
Block I                5.3 x 10^-10
Block II               3.0 x 10^-10
Block IIR              2.4 x 10^-10
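Relation (6) is easy to evaluate; the short sketch below reproduces the order of magnitude of Table 1, with the satellite masses taken as assumed illustrative values (they are not given in the paper):

# Sketch: antenna-emission recoil acceleration a = W / (M c), eq. (6).
C = 299_792_458.0      # m/s, speed of light
W = 80.0               # W, emitted power assumed above

# Assumed (approximate) satellite masses in kg -- illustrative only
masses = {"Block I": 500.0, "Block II": 900.0, "Block IIR": 1100.0}

for block, m in masses.items():
    print(f"{block:10s}: {W / (m * C):.1e} m/s^2")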
Figure 9 The right ascension variation of the ascending node due to antenna emission
The semi-major axis suffers several short period perturbations: a perturbation with a period equal to an orbital period and an amplitude of 4 x 10^-5 km, and a perturbation with a period of 3 hours and an amplitude of 1 x 10^-5 km. The eccentricity suffers a short period perturbation with a period of 3 hours and an amplitude of 2 x 10^-10, superimposed over a secular perturbation. The inclination suffers several periodic perturbations: a short period perturbation with a period equal to the orbital period and an amplitude of 3 x 10^-8 degrees, and a mixed periodic perturbation superimposed over the short periodic one. The perturbations that act upon the longitude of the ascending node are mixed periodic, with periods of 3, 6, 18 and 48 hours and amplitudes between 2 x 10^-8 and 1.2 x 10^-7 degrees.
Figure 7 Eccentricity variation due to anisotropic thermal emission
5. CONCLUSIONS
Anisotropic thermal emission does not produce long term or secular effects on the semi-major axis, but only 1st order effects on the eccentricity, on the longitude of the satellite's ascending node and on the inclination. The emission of the GPS satellites' radio waves towards the Earth produces a recoil force similar in nature to the solar radiation pressure.
[1] Barkstrom B.R., The Earth Radiation Budget Experiment, Bulletin of the American Meteorological Society, 65(11), p. 1170-1185, 1984
[2] Bar-Server Y., Kuang D., Improved Solar Radiation Pressure Models for GPS Satellites, NASA Tech Briefs (NPO-41395), 2004
[3] Beutler G., Methods of Celestial Mechanics I & II, Springer, Germany, 2005
[4] Brouwer, D., Clemence, G.M., Methods of Celestial Mechanics, Academic Press, New York and London, 1961, p. 11.
[5] Cojocaru, S., Tratat de navigatie maritima - Metodele moderne ale navigatiei maritime, Ed. Ars Academica, Bucuresti, 2008
[6] Fliegel H., Gallini T., Swift E., Global Positioning System Radiation Force Model for Geodetic Applications, Journal of Geophysical Research, 97(B1), p. 559-568, 1992
[7] Fliegel H., Gallini T., Solar Force Modelling of Block IIR Global Positioning System satellites, Journal of Spacecraft and Rockets, 33(6), p. 863-866, 1996
SECTION V
ENGLISH FOR SPECIFIC PURPOSES
1. INTRODUCTION
Some years ago I had the opportunity to attend a course in the USA, at the Defense Language Institute, a place considered by teachers of EFL working for the military the heart of teaching and sharing knowledge. My course was called Materials Development Seminar, and I did not know what to expect from a course that stated it could improve my personal way of developing teaching materials. My personal experience in teaching based on my own teaching materials was pretty large: I had had 12 years of teaching at that moment (4 in primary school, 3 in secondary and high school, and 5 in the University) and I was overconfident in my skills. The course was given to us by two of the best English teachers I have ever had the opportunity to work with, and it was based on using the internet for developing our own teaching materials. At the end of the course I had the feeling that my teaching career was going to change as a result of my new approach towards teaching and learning. And it changed, and it is continuously changing, because my courses and seminars are constantly changing (improving, I may add), becoming more personalized, trying to respond individually to my students' individual needs in improving their skills.
2. MATERIAL DEVELOPMENT
As an example of how web-based learning can be sustained, I developed some activities answering ESP requests for my military students. The topic is TERRORISM, a subject that is present in our lives more than we would want, because of the rise of terrorist attacks nowadays. I have chosen to develop my students' speaking skill, knowing that it is the second most difficult skill to develop in English after writing, but also the one most needed by anyone who has to communicate in English.
2.1 Pre-Speaking Activity
I considered that a pre-speaking activity based on listening would better introduce the students to the topic and would be a good way of introducing/clarifying information about the informative speech. Go to the following web page: http://www.americanrhetoric.com/rhetoricofterrorism.htm. Click on the President's Address to the Nation 9-11-01. Click on the MP3 play button. Listen to the entire speech.
2.2 Speaking Activity
After listening to the speech, the students could be given general information about WHEN, HOW and WHY this kind of speech is delivered.
THE INFORMATIVE SPEECH
There are some features you should take into consideration when referring to an informative speech: the occasion on which the speech is delivered, the construction of the speech, and the function of this kind of speech. In addition to the already given information, I considered it helpful for the students to read and discuss a given chart about the construction of the informative speech.
THE CONSTRUCTION OF THE INFORMATIVE SPEECH
1. INTRODUCTION
1. Salutation - the speaker greets the audience
2. Background Information - the speaker provides the audience with some general information about the topic of the speech
3. Topic Sentence - the speaker introduces the main idea of the speech
2. BODY
1. First Argument - the speaker presents the first main point of his speech
2. Second Argument - the speaker discusses the second main point of his speech
3. Third Argument
4. Fourth Argument (rarely found in a speech)
3. CONCLUSION
1. Restatement of Topic Sentence
2. Concluding Statement
A second listening would help the students identify the three parts of the speech and its topic. Listen again to President Bush's address to the American people from the Oval Office on 9/11/01. While you are listening, mark the different parts of the speech on the given transcript. Underline the topic sentence from the Introduction. To expand the area concerning the informative speech, additional reading could be given: go to the web address http://writing.colostate.edu/references/speaking/infomod/index.cfm and read about the purpose of informative speeches, the major types of informative speeches, and structuring, outlining and delivering informative speeches. Having so much information, each student should be able to start the construction of his informative speech. To help them, I considered it useful to give some roles and situations they might choose from in order to deliver their speeches:
1. You are the U.S. Secretary of Defense and you have to inform the public opinion about a car-bombing attack in Philadelphia.
2. You are the U.K. Prime Minister and you have to inform the public opinion about the capture of the man who was responsible for the London terrorist attacks.
3. You are the U.S. President and you have a press conference on the capture of Osama bin Laden.
4. You are the Egyptian President and you inform the press about the investigation of the terrorist attacks in Sharm el Sheik.
5. You are the U.S. Vice-president and you inform the chiefs of the secret agencies about the steps to be followed after a terrorist attack.
6. You are the French President and you have a press conference about a terrorist attack on the Tour Eiffel.
7. You are the wing commander and you inform your pilots about an attack on a resistance nest in El Salvador.
3. CONCLUSIONS
Teaching English for Specific Purposes had appeared an impossible mission to me years ago but, thanks to continuous learning, assiduous work and training courses like the one mentioned at the beginning of this paper, ESP - the military branch - has given me the most fulfilling satisfaction.
4. REFERENCES
[1] http://www.americanrhetoric.com/MovieSpeeches/moviespeechcrimsontide.html
[2] http://www.americanrhetoric.com/MovieSpeeches/moviespeechreturnoftheking.html
[3] http://www.americanrhetoric.com/rhetoricofterrorism.htm
[4] http://www.whitehouse.gov/vicepresident/newsspeeches/
[5] http://writing.colostate.edu/references/speaking/infomod/index.cfm
1. INTRODUCTION
In the years since the emphasis in the language classroom began to move from the classical approaches to the communicative one, we, as classroom teachers and researchers, have bravely addressed the accompanying problem of how to measure our students' success in a way appropriate to this new type of language teaching. The older established methods of teaching have been used with various degrees of success, but rarely with a confidence that we were measuring exactly what we intended to teach, and certainly with strong reservations that the result would tell us what we wanted to know.
2. TEST RELIABILITY
So overly concerned with test reliability were we that we often overlooked the compromise we were making with evaluating student competences in potentially real situations. The obvious flaw was that we simply were not measuring enough of what we were aiming to have the student acquire. Tests play a fundamental and controversial role in allowing access to the limited resources and opportunities that our world provides. The importance of understanding what we test, how we test, and the impact that the use of tests has on individuals and societies cannot be understated. Testing is more than a technical activity; it is also an ethical enterprise. (Fulcher 2010) The idea is generally shared that the most important implication of the concept of communicative competence is undoubtedly the need for tests that measure an ability to use language effectively to attain communicative goals. It would appear that communicative competence cannot be successfully tested with the normal procedures of classical test methodology, in which the student is mostly involved in classroom activities. Such competences had better be evaluated in a more tolerant atmosphere suggestive of real life situations, which would seem to indicate an interview event. It has been estimated that nonverbal and paralinguistic communicative systems might account for as much as 65 per cent of the actual meaning in a spoken message (Littlejohn & Foss 2005). Nevertheless, we still
believe that the interview is the method with the greatest potential for measuring communicative abilities. However, the interview must take on as much of a non-classroom, real-life atmosphere as possible; it must be conversational in tone, with less of the searching aspects so often associated with other language interviews. That is to say, the students must be judged on their ability to communicate, not on the grammar or structures they do not know or seemingly cannot use. They must be allowed to direct the conversation according to their level of competence, avoiding whatever is difficult for them and using the schemes and clues they have at their disposal. The teacher will judge how well all of these aspects come together to produce effective communication, which is the bottom line in a true, non-classroom situation. Once the interview/conversation has taken place, the teacher faces a problem long associated with testing communicative abilities. While a discussion with each student would be ideal, time does not usually permit such initiatives. The use of a universal rating scale is good on condition that it follows the same philosophy as the interview. A scale that judges the grammatical components of the conversation as more important than continuity or comprehension and response does little to encourage the skills we are trying to enhance. Furthermore, the former approach does not adhere to the spirit and intent of the interview. We are using the scale to describe the levels of communication, which require some degree of grammatical competence, that is, more than an elementary knowledge of sentence-level grammar. The scale shall describe the requirements of conversational communication, which must be presented in a language tailored to the students' comprehension ability. Teaching English in a communicative style has the clear advantage of bringing the trainees much closer to active communication which is not restricted to a particular issue. In this way, the trainers will understand the need to change the language-learning approach from studying a subject to learning a skill. This requires new methods of evaluation. We can no longer test material as subject matter which is not being taught as such. A method of interview/conversation is geared to the non-discrete parameters of the acquisition of communicative language. The interview is open-ended
Considering the above views, the interview is one method that evaluates what a communicative teacher is trying to achieve: competence in a real communicative situation. The method of evaluating the performance in the interview is a scale that has been designed to be personal and encouraging as much as evaluative. If the students are to learn anything from their ratings, these should be presented in a user-friendly way. The ratings are often presented in a manner that shows their subjectivity, as shown by Professor Herb Marsh in his study "Making Students' Evaluations of Teaching Effectiveness Effective: The Critical Issues of Validity, Bias, and Utility", published in American Psychologist in 1997. Furthermore, conversation between two people is prone to misunderstanding on either side, and the students are helped to understand that a problem arising from faulty communication is not necessarily one-sided. Topics that are introduced within the interpretative framework of personal usage cannot effectively be tested with the memory-oriented structures of conventional testing procedures. We consider that interviewing procedures have the potential to allow students to choose more creatively the communicative techniques at their disposal and to show their true skill in a certain language used for communication. There is no task inherent in the interview/conversation except that of communication. However, even interviews can be non-communicative if they are too rigidly structured. This is the result of an interviewer
No matter how valuable, a single short interview can cover only some elements of a communicative situation. There is a need to develop more innovative procedures that shall measure only clear and accurate communication. Thus, taped or recorded interviews are sometimes used in order to better understand the approach trainers should adopt while assessing the knowledge of their trainees. Researchers such as Allan Pease and Barbara Pease, in The Definitive Book of Body Language, New York, Bantam, 2004, have even included the impact of body language or even some psychological profiling in the interview ratings. They claim that non-verbal cues are layered on and contribute a great deal to the linguistic impact of the communicative act. To conclude, we must once again stress the fact that a new approach is needed in interviewing as a testing technique, since it is definitely a complex process which brings together language practice, theory, ethics, and personality.
5. REFERENCES
[1] Helt, R., Developing communicative competences: A Practical Model, Modern Language Journal, 1982
[2] Savignon, S., Communicative Competence: Theory and Classroom Practice, Ed. Addison-Wesley, 1983
[3] Littlejohn, S.W. and K.A. Foss, Theories of Human Communication, Ed. Belmont, California, 2005
[4] Marsh, H.W., & Roche, L.A., Making students' evaluations of teaching effectiveness effective, American Psychologist, issue no. 52, 1997
[5] Fulcher, G., Practical Language Testing, London, Hodder Education, 2010
1. INTRODUCTION
The standard phrases involved in communication prove to be much less debated in the specialty literature than other aspects of teaching a language for specific purposes (i.e. Business English), such as the introduction of lexical elements or the basic grammar involved. An explanation may lie in the general impression that communication structures are too explicit and direct to create any problems in learning. Though a good point indeed, this does not mean that proper teaching of such material is useless, or too easy and straightforward to consider. On the contrary, it implies that the approaches to introducing such ready-made elements to students can be rather dull or uninspired, and in many situations it boils down to the rather non-didactic instruction to learn the given formulas by heart. Mundane and boring activities have to be avoided (Hogan 2003: 18). Instead, some interactive activities, which are both efficient and appealing, can be devised. Thus, the result will be much more rewarding for trainer and trainee alike.
2. DIDACTIC APPROACH TO THE ELEMENTS OF COMMUNICATION
2.1 Context issue
Standard phrases that are useful in making a presentation (Sweeney 1997: 50-54), in a negotiation of contractual terms, in a problem solving session or an ordinary meeting (Goodale 1987: 119-123) are obviously context bound and make full sense only in the authentic context. Elements of standard conversation, business communicational structures and economic vocabulary ancillary to the processes involving human interactions and transactions represent bases of business activities. Such elements are rather hard to teach in context, as that is neither easy to render by means of didactic creation of text, nor are such communicational contexts readily available for didactic purposes. Thus, just a few exemplifications of partial contexts will have to suffice, and teachers will need to do without the authentic text that represents, for instance, a basic tool for the introduction of BE vocabulary.
2.2 Usefulness of creative methods
When approaching communication structures, teachers of English who work with Economics students shall have to resort to distinct methods, creating activities that can ensure the introduction of such material in a pleasant and effective manner. By solving a puzzle or by playing a word game, several elements of communication can well be introduced and then reinforced, so that the students shall not simply learn the useful phrases by heart, which is a highly inefficient and resented approach. It is nevertheless true that, more often than not, the use of only a few interactive drills will not do. They cannot, in themselves, be sufficient for ensuring the assimilation of material which will be further used in communicational circumstances. However, this will constitute a good beginning and can provide an appealing atmosphere, a status so necessary for getting down to work. There shall, of course, follow other exercises on the same phrases, to reinforce that material. Then other similarly interactive activities shall be introduced. They are meant to produce synapses that will allow effective mental storage and will later assure the ability of reproducing the communicational standard phrases, together with the usage of connections, in any given communicative environment, which should sound as natural as their own language (Cartwright 2002: 48).
3. USE OF INTERACTIVE DRILLS AND VISUAL AIDS
3.1 A strong mnemotechnic mix
The mixture of colours, shapes and attractive activities to perform during classroom activities should prove very effective. Categorising, classifying and introducing structures in table cells to easily create visual elements that provide better memorising, i.e. the use of various forms, shapes, shades and strategic positioning, are indeed good methods for an overview of the elements. However, the introduction of drills such as gap filling or matching shall be more appropriate, assuring focus on the matter
Figure 1 Conducting a meeting
Then, other examples of phrases will be given and students will have to add them to one of the previously discussed categories that they consider suitable. Being given an image containing several communication structures, students are asked to name the category represented. In the situation provided in Figure 2, they are to recognise the category of controlling [2, pp. 118-120].
reach a peak / reach a maximum; stabilize / level out; stay the same
Such lexical units are fundamental for the communication process specific to the business activity, and studying them is of utmost importance. Other means of stressing the importance of vocabulary acquisition are crossword puzzles. Thus, they can be produced by using actions across and counteractions down, as exemplified in Figure 3:
Figure 2 Other standard phrases used in controlling a meeting
Such visual aids prove extremely efficient with students of all ages. Using diverse shapes and figures, colours and highlights, a series of categories can be covered through conceptually similar exercises.
Figure 3 Action/Counteraction puzzle and solution
Much in the same way, word search games can be created, with actions vertically and counteractions horizontally (see Figure 4).
Figure 4 Word search game and solution
In the former case, that is the example in Figure 3, even definitions are unnecessary, given the specification that each number has one descriptor across and its opposite down. As for the latter, much in the same way, the indication to find antonymic pairs, vertically and horizontally, may be added, on condition that students are told that the parts of a phrase can be found linked together. Depending on the students' level of language knowledge, distinct degrees of difficulty can be introduced by using more or fewer words, sometimes with letters switched around, etc. Furthermore, antonymic pair matching drills can also be given, or students can be required to identify synonymic structures in a certain word group. 3.3 Enjoying the acquisition of phrases
Figure 5 Negotiation puzzle - Base
Then, the students are given several cards (i.e. six for each rectangle, so twenty four for our example), each containing one standard phrase involved in the activities specified. The cards are shaped and cut in such a way that they represent the puzzle pieces which fit in the rectangles. The students will be required to sort the cards according to the stages of negotiation that the phrases belong to and then solve the puzzle (Sweeney 1997: 111), as given in Figure 6.
Figure 6 Negotiation puzzle - Solution
All in all, the standard phrases involved in business communication are inherent elements of Business English teaching, and the approaches that can be most appropriate for providing proper instruction are worth considering and discussing. As providers of education, the teachers of English for specific purposes (i.e. Economics) shall find the best methods to ensure effective assimilation and further good use of linguistic resources. As McKenna (1998: 15) believes, "[I]n all communication, the key to success is knowledge."
As educators, we should elaborate activities that the students will feel attracted to, built from otherwise bland material, in which even passive learning is possible. Nowadays didactic methodology is based more and more on interactive activities and interactive presentation of material, so that learning is primarily ensured by cognitive processes other than repeated reading and reproducing prior to memorising (Jensen 1998: 99-101). As a consequence, information is no longer served plain but is intervened upon and filtered, rearranged and rebuilt in numerous forms of presentation, as appealing as possible. This happens when indirect and passive learning is assured by awakening psychological triggers such as the cognitive-affective processes. They link knowledge to emotion, and further reinforcement of the former is provided subliminally and unconsciously. Therefore, activities that arouse interest and appeal to the emotional side are regarded as pleasant, attractive and seemingly not demanding a big intellectual effort, though straining the intellect in a disguised form. These
[1] CARTWRIGHT, Roger, Communication, Oxford, Capstone Publishing, 2002
[2] Communication Skills, 2nd edition, Careers Skills Library, New York, Ferguson, Facts on File, 2004.
[3] GOODALE, Malcolm, The Language of Meetings, Thomson Language Teaching Publications, Geneva, 1987.
[4] HOGAN, Kevin; STUBBS, Ron, Can't Get Through: 8 Barriers to Communication, Pelican Publishing Company, 2003.
[5] JENSEN, Eric, Teaching with the Brain in Mind, Association for Supervision and Curriculum Development, Alexandria, Virginia, USA, 1998.
[6] McKENNA, Colleen, Powerful Communication Skills, Career Press, 1998.
[7] PICARDI, Richard P., Skills of Workplace Communication: a Handbook for T&D Specialists and their Organizations, Westport, Quorum Books, 2001.
[8] SWEENEY, Simon, English for Business Communication, Student's Book, Cambridge University Press, 1997.
ABSTRACT Maritime universities all over the world consider the training of future Deck Officers to be a sensitive issue. Since the International Maritime Organization (IMO) introduced training on simulators as an integrated part of the education of future seafarers, training future Deck Officers on simulators has become a very important component of the maritime education process. Over the past decades, the education of professional officers has undergone many evolutions. Today's maritime universities, academies and faculties, using advanced methods of teaching, modern simulators with communication in Maritime English and other sophisticated equipment, must not forget that practical training on board a ship still plays an invaluable role in officers' education. Still, it must be acknowledged that proper training on simulators is a good start for theoretical training that could eventually be applied on board. In this paper we try to point out the fact that, without the use of simulators combined with a proper knowledge of Maritime English, university graduates would face real difficulties when trying to apply for a job with crewing and shipping companies. Keywords: Deck Officer, simulator, university, training, maritime education
1. INTRODUCTION
Simulator classes provided inside maritime universities offer a proper training programme for future officers by making Maritime English the main language used in specialized communication. This happens because it has been proven that Maritime English represents an important part of a future navigating officer's training, and it will continue to gain in importance as long as the shipping industry progresses. Young seafarers are provided with all the necessary conditions to get acquainted with Maritime English during the university years. Therefore it is only up to them whether they reach the level of knowledge necessary for proper watchkeeping. They should always be aware of the fact that their lives, other crew members' lives and the ship's integrity might depend on this particular aspect.
2. MARITIME ENGLISH THEORETICAL TRAINING
Decades ago, America and Britain were the world's greatest sea-going nations. Eighty percent of crews were native English speakers. By the end of the nineteen seventies the situation was the opposite: eighty percent of crews did not speak English as a first language. It was clear that, in order to keep the seas safe, the shipping industry would have to find new ways of passing information over the radio. A new way of speaking resulted after a period of time in which experts in language worked closely with experts in shipping in order to reach an agreement convenient for both parties regarding communication at sea. The new language was called Seaspeak. The International Maritime Organization made Seaspeak the official language of the seas in nineteen eighty-eight. Seaspeak defined the rules of how to talk on the radio between ships. The official book of Seaspeak says that messages between ships should be of direct interest to the crew.
Messages should be short and clear. Such messages should be in words simple enough for a non-native speaker of English to understand. Therefore Seaspeak contains a list of about five thousand words. The main focus of the content is on words which are specific to ships and the sea, and the rest of the words are in general use by all English speakers. But there is another very important thing about Seaspeak. It uses seven really important words, called message markers. A message marker is meant to tell the listener what kind of message is going to be transmitted. Message markers are words such as: Question, Warning, and Information. Ships fully manned by a single nationality were common during the eighties and nineties, with only Filipino, Russian, Indian, Romanian, Bulgarian etc. crews on board and no reason for those seafarers to speak any other language than their native one. The Master may have had to know a smattering of essential sentences like "pleeese, where be I" or "big ship, big ship, get out the way", but the engineer down in the engine room had no need for English, a very useful ignorance when port state control started snooping around or the superintendent visited. For the most part, the working language of many ships was whatever the predominant language was on board. Deck officers simply memorized some essential sentences in English that enabled the ship to get into and out of some foreign port, and when the COLREG Regulations had to be applied in the open sea and ship-to-ship communication was needed in order to avoid maritime accidents, they simply preferred to avoid talking on the VHF. Nowadays, many ships are manned from top to bottom with officers and crews of varying origin working side by side over failing engines. Most of them are able to talk to each other better than any British seafarer ever could, and up on the bridge the Master is speaking the Queen's English over the radio to some Welsh harbour Master whom nobody can understand outside of Wales.
A safe shipping environment means that all seafarers across the world should reach high standards of competence and professionalism in the duties they perform on board. The International Convention on Standards of Training, Certification and Watchkeeping for Seafarers 1978, as amended in 1995 (STCW-95), has the role of setting these standards, governing the awarding of certificates and controlling watchkeeping arrangements. Its provisions apply not only to seafarers, but also to ship-owners, training establishments such as maritime universities, and national maritime administrations. Therefore, all affiliations and memberships of a maritime university, along with the fact that maritime universities are evaluated every year by naval authorities, are solid proofs of a proper implementation of the 1995 STCW Convention in these institutions. All training programmes and assessments in maritime universities are provided in connection with the STCW-95 certificate and comply with STCW-95 standards, being approved by national and international maritime authorities. Maritime education and training institutes all over the world have installed integrated bridge simulation systems, based on which maritime teaching and training have been designed and experimented. In response to these changes, course and syllabus design and organization, as well as instruction and evaluation, have undergone reforms, since attention has been particularly drawn to simulator training. Within language-skills-targeted integrated bridge simulation system training, all the means of linguistic communication employed in real ship operation should be properly fitted, so as to simulate navigational and safety communications from ship to shore and vice versa, from ship to ship, as well as on board ship. Maritime English course design and organization is critically important throughout the whole training programme. It ought to take into account the emphasis the IMO guidelines on ship management lay on the need for good
[1] BARSAN, E., HANZU-PAZARA, R., ARSENIE, P. and GROSAN, N., The Impact of Technology on Human Resources in Maritime Industry, 6th International Conference of Management of Technological Changes, Alexandropolis, Greece, Publisher: Democritus University of Thrace, pp. 641-644, 2009
[2] BATRINCA, G., VARSAMI, A. and POPESCU, C., The Sustainability of Maritime Education and Training On Board Training Ships in the Present Economic Conditions, 6th International Seminar on the Quality Management in Higher Education, Tulcea, Romania, Vol. 1, pp. 35-38, 2010
[3] BELEV, B., Information Capabilities of Integrated Bridge Systems, The Journal of Navigation, 57, p. 145-151, 2004
[4] BELEV, B., GECHEVSKI, P., DUNDOV, N., Using Simulators for Education and Training - Condition, Necessity, Development, Proceedings of BulMet 2005, Varna, p. 39-44, 2005
[5] BUTMAN, B.S., STCW and Beyond: Minimal Requirements and Additional Knowledge for Marine Engineers, 8th IAMU Annual General Assembly, Odessa, Ukraine, Edited by Dmitriy Zhukov, Odessa National Maritime Academy, pp. 57-67, 2007
[6] HANZU-PAZARA, R., The shipping companies' role in increasing onboard personnel competencies, Marine Transport & Navigation Journal, Vol. 1, No. 1, pp. 111-116, 2009
[7] HANZU-PAZARA, R., STAN, L., GROSAN, N. and VARSAMI, A., Particularities of cadets' practice inside of a multinational crew, 10th General Assembly of International Association of Maritime Universities, St. Petersburg, Russia, published in MET Trends in the XXI Century, pp. 99-105, 2009
1. INTRODUCTION
Knowing a language implies not only awareness and mastery of that language, i.e. of grammar and lexis; it also means being able to use it effectively in social situations, selecting the appropriate style, matching language to context, perceiving the speaker's intention, and performing speech acts (Cunningsworth 1983:8). Teachers have become more and more aware of the necessity of providing opportunities for spontaneous discussion and for impromptu activities that simulate actual situations in which students might find themselves in the foreign culture. As a result, writers of teaching materials are concentrating on producing dialogues containing useful patterns that students may adapt in order to express their meaning in their own way. Consequently, more emphasis should be placed on state-of-the-art exercises and activities in order to encourage student creativity. Despite the inherent difficulties of daily classroom situations, students must be encouraged to perform as naturally as possible in practice sessions. Therefore, a range of creative exercises may be proposed, allowing students of varying levels to achieve success along these lines. 2. STUDENT-TAILORED EXERCISES
Initially, the teacher may request restricted answers to simple questions containing the essence of the desired response: elementary comprehension exercises in which the students are able to extract the answers and produce them on the basis of minimal changes to the original material. The comprehension technique allows for a wide range of difficulty levels and can be used by teachers according to the needs and levels of their students. Comprehension exercises should be aided by means of thoroughly selected reading activities. Thus, students may take in information and discourse expressed in the appropriate registers of the target language. These reading activities are aimed at activating the students' capacity to recognise appropriate forms of expression, while also being based on the students' field of interest and the goal of the specific training. As the students' mastery of the course material develops, they will be able to improve the precision of their semantic control of the language. Likewise, paraphrasing exercises prove to be useful. Their aim is to rephrase the content in different words while keeping the meaning of the new sentence or text as close to the original as possible. Subsequent analysis of the students' versions provides a good opportunity for discussion of the nuances of meaning, emphasis, tonality or levels of formality. The tremendous impact of tonality is shown in the table below, conceived by Carpenter in Principles of Management (online version, 2010:303); the emphasised word in each row is marked with asterisks.

Placement of the emphasis                      What it means
*I* did not tell John you were late.           Someone else told John you were late.
I did *not* tell John you were late.           This did not happen.
I did not *tell* John you were late.           I may have implied it.
I did not tell *John* you were late.           But maybe I told Sharon and José.
I did not tell John *you* were late.           I was talking about someone else.
I did not tell John you *were* late.           I told him you still are late.
I did not tell John you were *late*.           I told him you were attending another meeting.
Textual commentary can also serve a similar end: students may be asked to analyze the language used by the writer, explaining the intended aim of the material under discussion along with the level of formality and the linguistic expression of the concepts presented. In addition, short exercises based on dialogue creation can come in handy. In a gap-filling type of exercise, the student plays the part of one speaker and is required to match his responses to the context and the meaning of the other person's utterances. Despite a slight element of contrast in this type of exercise,
Among creative thinking activities, oral composition, though quite demanding, can be used in classes for advanced students. This type of activity may be structured as an interview or a dialogue with someone in the appropriate field of study, or as a short dissertation on a topic of significance such as "At the crewing office", "At the customs office", etc. This gives students the opportunity to create, by means of their own resources, an imaginative and extended discourse which can provide a good use of language in its fullest sense. These exercises need to be prepared carefully in advance, making sure that students have a good knowledge of the linguistic items, structural and lexical, adequate to the task chosen, together with all the necessary skills to be applied in the oral composition. However, while the activities suggested above encourage a progression from the relatively constrained forms of simple answers to the creative freedom of oral composition, they lack the essential feedback of a communicative situation, in which flexibility of response is related to the personal and social objectives both of the speaker and of his dialogue partner, which in turn makes students respond appropriately to other people's trains of thought and objectives in a natural manner. Further on, we should consider the purpose and benefits of communicative language teaching, which are beautifully and explicitly stated by Jack C. Richards:
The basis of knowing a foreign language is the ability of the speakers to produce, in an appropriate and flexible manner, many statements based on a limited experience and by means of a specific corpus of language. This ability derives from the capacity to learn the rules of a language and to apply them correctly under various circumstances. Consequently, knowledge of a foreign language implies not only being familiar with its vocabulary and grammar but also a good understanding of other factors that influence the student's choice of verbal expressions according to appropriateness criteria. Such factors might include noise, seating arrangements, relationships, space, and even ventilation. The value of the activities described in this paper resides in the opportunity to analyze the students' performance, to explain deficiencies and to suggest improvements in their use of the language, meant to contribute to their communicative skills. 5. REFERENCES
[1] CUNNINGSWORTH, A., English Teaching Forum, issue no. 4, October 1983
[2] RICHARDS, J. C., Communicative Language Teaching Today, Cambridge University Press, 2006
[3] CALLAN, R., Callan Method: Teacher's Handbook, 3rd edition, Grantchester, Orchard Publishing Ltd., 1995
[4] CARPENTER, M., Principles of Management, Barnes and Noble, online edition, 2010
The historical significance of the sea cannot be overlooked by a translator working into a foreign language like English. The translator may come across a myriad of nautical words and expressions that must be carefully handled in translation. As is generally known, English and Romanian use different linguistic forms, and these forms represent only one aspect of the difference between the two language systems. The most challenging are the cultural meanings that are intricately woven into the texture of the language, and it is the translator's task to catch and render them appropriately. Thus, the differences between the source language culture (SLC) and the target language culture (TLC) make the translating process in general, and the translation of maritime idioms in particular, a real challenge. In this case, translation strategies and techniques are of paramount importance. Since maritime idioms represent a special cultural image, the translator should ideally be bilingual and, most importantly, bicultural. Keywords: maritime idioms, translation strategy, equivalence

1. INTRODUCTION

Both English and Romanian are permeated with phrases and expressions which originated at sea, deriving from the customs and traditions of seafarers who spent more time on board ship than on land, and therefore carrying with them a large amount of cultural information. Thus, the unique talk of sailors has found its way into the land's speech, which got beautifully coloured and gained metaphorical significance. Considerable work on idioms has been done by several linguists (Lipka 1991, Cruse 1986, Carter 1987) and by researchers into vocabulary and language teaching (Carter & McCarthy 1988, Nattinger & DeCarrico 1992). There are innumerable, intricate definitions of the term idiom in the literature, but this study will adopt the one provided by Webster's Encyclopedic Unabridged Dictionary of the English Language, where an idiom is understood as a construction or expression of one language whose parts correspond to elements in another language but whose total structure or meaning is not matched in the same way in the second language (1997: 707). In terms of their translatability, idioms are considered one of the most complicated elements of language. Since English has its own ways of expressing certain things, corresponding expressions may not be found in Romanian. This language-fixity makes the translation of maritime idioms rather problematic, and thus it is important to take a closer look at their possible translation strategies as well.

2. MARITIME IDIOMS

Maritime idioms are very elusive, and the difficulty of characterizing them exactly is perhaps one of the reasons why relatively little attention has traditionally been accorded to these expressions, in spite of their unquestionable relevance. In my opinion, maritime idioms constitute a real challenge to compositional models of language comprehension due to their widespread use in the language and their property of carrying a metaphorical sense that makes their understanding difficult. Irrespective of the idiom type, translators need to have a well-developed phraseological competence (Croitoru & Dumitraşcu 2006, Heltai 2001). They have to know the ready-made phrases used in the nautical register in the language cultures brought into contact, namely English and Romanian, as well as to match and evaluate them from a sociolinguistic perspective. Sometimes the meaning of maritime idioms cannot be predicted from the meaning of their constituent parts, e.g. cut a dido - a face o manevră complicată; on the dags - în concediu (despre un marinar); dress a ship overall - a ridica marele pavoaz; make heavy weather - a suporta un balans puternic; ship a heavy sea - a lua apă cu bordul; keep full for stays - a pregăti volta în vânt. Thus, idioms are said to be non-compositional because their meanings are not the sum of the meanings of their parts (Cruse 1986, 2000, Nattinger & DeCarrico 1992). However, it could be argued that other multi-word units are also non-compositional (of course, in effect) and yet are not generally considered idioms. The verb and noun in the idiom sling the cat, for example, have at least two meanings: their default context-free literal meanings, and the meanings that are induced by the idiom context. In non-idiomatic contexts, the verb sling has the meaning "to throw or drop something" and the word cat the meaning "a small animal with fur, four legs, a tail and claws." In the idiom context, these words have a dual meaning, retaining their literal meanings but also acquiring the idiomatic meanings of "empty" and "the contents of the stomach." Popa (1992: 224) gives the following Romanian equivalents: a da la peşte, a avea rău de mare. Another maritime idiom worth mentioning is high and dry. This idiom refers to a vessel aground above the high water mark (Royce 1993: 221). According to Huff (2004: 61), this phrase now refers to a person left without support or resources, but it was originally used for a vessel left high upon the shore and dry by an ebbing tide. Lindquist (2009: 94) labels this idiom as opaque (i.e. opaque idioms are the full idioms whose
The term strategy is used in different ways in translation studies, and a variety of other terms can be used to mean the same thing: procedures, techniques of adjustment, transformations, transfer operations, etc. Superceanu (2006: 259) defines translation strategies as "[...] individual cognitive procedures operating on a large or small scale. They are used consciously or unconsciously for the solution of a translation problem, for example, search, checking, monitoring, inferring, and correlating" (Superceanu 2006: 259). Translation methods, translation techniques and translation strategies are all goal-oriented; however, only translation strategies are problem-oriented, and they are used when the translator realizes that the usual procedure is not sufficient for reaching a certain goal. Lörscher (1991: 76) defines a translation strategy as "a potentially conscious procedure for the solution of a problem which an individual is faced with when translating a text segment from one language to another". In our view, translation strategies for maritime idioms are applied when a translation difficulty occurs and the translator wishes to solve the problem and produce a good translation. We shall also understand a strategy for translating idioms according to Leppihalme's view on strategies, considered to be "means which the translator, within the confines of his/her existing knowledge, considers to be the best in order to reach the goals set by the translation task" (Leppihalme 1997: 28). There are some arguable points regarding idioms. Dollerup (2006) brings these issues into discussion when he states that "[...] notably literal translation of idiomatic expressions is one of the most quoted types of error in translation" (Dollerup 2006: 36). Thus, a straightforward word-for-word substitution cannot be allowed for where maritime idioms are concerned. Regarding idioms, Baker (1992) suggests some translation strategies whose acceptability or non-acceptability depends on the context in which a certain idiom is translated. 3.1. Translation by an idiom with similar meaning and form This translation strategy involves the use of an idiom in the TL which conveys the same meaning as that of the SL idiom and consists of equivalent lexical items. In order to exemplify, we shall use maritime English idioms together with their meaning and translation into Romanian: to execute antics to make tactical
Not only does maritime language encode particular meanings, but by virtue of these meanings and the forms employed to symbolise these meanings which constitute part of shared knowledge within a particular maritime
[1] BAKER, M., In Other Words, London: Routledge, 1992
[2] BEZIRIS, A., POPA, C., SCURTU, G., BANTAŞ, A., Dicţionar Maritim Român-Englez, Bucureşti: Editura Tehnică, 1982
[3] CARTER, R., Vocabulary. Applied Linguistic Perspectives, London: Allen & Unwin, 1987
[4] CARTER, R., McCARTHY, M. J., Vocabulary and Language Teaching, London: Longman, 1988
[5] CROITORU, E., DUMITRAŞCU, A., Collocations and Colligations in Specialized Texts, in Specialized Discourse: Theory and Practice, Galaţi: Europlus Publishing House, 2006, pp 103-113
[6] CRUSE, D. A., Lexical Semantics, Cambridge: Cambridge University Press, 1986
SECTION VI
TRANSPORT ECONOMICS
1. INTRODUCTION
We will consider transport as a system included in the logistics system. The sub-systems of the transport system are the means of transport: by road, by sea, by rail or by air. Multimodal transport is not merely the combination of several types of transportation. The management of multimodal transportation involves the entire transport infrastructure: terminals, consolidation warehouses, ports, airports and so on, and it requires the highest degree of coordination from all of those involved in the logistics process. The cooperation and integration of several types of transport need special and dedicated indicators. If their definition captures the specificity of the transport, they can be used for the comparison of different systems of transport and for the measurement of their relative influence in the market. 2. INTER-MODALITY AND MULTIMODALITY
The term inter-modality is used for a system of transport which consists of modes of transport that are more different than similar. The method involves the transportation of freight in an inter-modal vehicle or container, using multiple modes of transportation (rail, ship and truck), without any handling of the freight itself when changing modes. This reduces cargo handling, and so improves security, reduces damage and loss, and allows freight to be transported faster. As a consequence, each mode of transport will have its own development and the differences between them will increase. There are two different approaches to inter-modality: either the inter-modal system consists of hubs (ports, airports, terminals, warehouses) and the network, or the inter-modal system consists of hubs (ports, airports, terminals, warehouses) only. It is obvious that nothing exists completely independently; everything is included in the whole. Multimodal transport means the simultaneous or alternative use of different modes of transport. It solves a large part of the cargo mobility problems. Multimodal transport refers to a transport system usually operated by one carrier, with more than one mode of transport under the control or ownership of one operator. It involves the use of more than one means of transport, such as a combination of truck, railcar, aeroplane or ship in succession. The most important advantages of multimodal transport are the following: it is coordinated and planned as a single operation, and it minimizes the loss of time and the risk of loss, pilferage and damage to the cargo at trans-shipment points. The market is physically shrunk by the faster transit of goods, and the distance between the origin or source of materials and the customers becomes insignificant.
2.1 Inter- and multimodal characterisation of a system of transport
A system of transport can be in one of m states. Shannon [3] characterised the multimodal diversity of such a system by its entropic level:

$H = -\sum_{i=1}^{m} p_i \lg p_i$   (1)

This formula is not related to the tendency of the system to change its state from state i to state j, where $i, j \in \{1, 2, \ldots, m\}$. For this reason, the average entropy $H_M$ of the system under a transition matrix is

$H_M = \sum_{i=1}^{m} p_i H_i$   (2)

where

$H_i = -\sum_{j=1}^{m} p_{ij} \lg p_{ij}$   (3)

and

$H_{M\,\max} = \lg m$   (4)

If the system lies in state i, it can pass to any state j. The entropy $H_i$ only captures the uncertainty of the change of state, without any suggestion about its nature. Let $T_{ij}$ be the transition of the system to a state j with a certain level of efficiency (given by the indicators $e_{ij}$).
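To make formulas (1)-(4) concrete, the short Python sketch below (an illustration added here, not part of the original paper) computes the entropy of a modal split, the per-state entropies of a transition matrix and the average entropy. The modal shares and the transition matrix are invented numbers, and lg is taken as the base-10 logarithm used in the text.

```python
import math

def entropy(p, base=10):
    """Shannon entropy H = -sum(p_i * lg p_i), as in formula (1)."""
    return -sum(x * math.log(x, base) for x in p if x > 0)

# Illustrative modal split over m = 3 modes (road, rail, sea) -- invented numbers.
p = [0.5, 0.3, 0.2]
m = len(p)

# Illustrative transition matrix P[i][j]: probability of switching from mode i to mode j.
P = [
    [0.7, 0.2, 0.1],
    [0.3, 0.6, 0.1],
    [0.2, 0.2, 0.6],
]

H = entropy(p)                                # formula (1)
H_i = [entropy(row) for row in P]             # formula (3), one value per state i
H_M = sum(pi * hi for pi, hi in zip(p, H_i))  # formula (2)
H_M_max = math.log(m, 10)                     # formula (4): lg m

print(f"H = {H:.4f}, H_M = {H_M:.4f}, H_M_max = {H_M_max:.4f}")
```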
Informational measures which take this level of efficiency into account were introduced by Theiler and Tovissi [4]. For the usual entropy, if $p_{ij} = 1$ (a doubtless transition), then $H_i = 0$ (the disorder is at a minimum, for any j). For the weighted entropy, if there exists a doubtless transition ($p_{ij} = 1$), then

$H_i^{p} = \lg w_{ij} = h_{ij}$

so the minimum disorder is expressed by the inefficiency $h_{ij}$ of the transition from i to j; in other words, the disorder is at a minimum if the transition from i to j is inefficient. The maximum value is

$H_{i\,\max}^{p} = \lg \sum_{j=1}^{m} w_{ij}$   (8)

and is fulfilled for

$p_{ij} = \dfrac{w_{ij}}{\sum_{j=1}^{m} w_{ij}}, \quad j = 1, 2, \ldots, m$

The weighted entropy for the entire system [2] is

$H_M^{p} = \sum_{i=1}^{m} p_i H_i^{p}$

and

$H_{M\,\max}^{p} = H_{i\,\max}^{p}$   (9)

In the case of a transhipment platform for goods, we can consider the following three different possible situations [5]: 1. containerised; 2. unitised (i.e. palletised goods); 3. bulk. The transitions between these can be described as follows: a part of the bulk merchandise is unitised, going from state 3 to state 2; the units can pass into a container (from state 2 to state 1), and so on. It is possible to define the matrix of these passages. In inter-modal freight transport, the goods lie in a state i and can pass into a state j.

The trans-information, in absolute value T and relative value $T_r$, is

$T = H(X) + H(Y) - H(X,Y)$   (13)

$T_r = \dfrac{T}{H(X)}$   (14)

Let us now consider the equivocation

$H(X/Y) = H(X,Y) - H(Y)$   (15)

as a measure of the equivocation on the input field X when the output field Y is known. In a dynamic interpretation, this represents the equivocation of the past modal distribution starting from the present modal distribution. In a static interpretation, H(X/Y) represents the equivocation of the traffic generators' distribution starting from the present modal distribution of the traffic. Let us now consider the average error

$H(Y/X) = H(X,Y) - H(X)$   (16)

as a measure of the uncertainty of the output field Y when the input field X is known. In a dynamic interpretation, this represents the uncertainty of the future modal distribution knowing the present modal distribution. In a static interpretation, H(Y/X) represents the uncertainty of the future modal distribution resulting from the known initial distribution of the traffic generators. The trans-information T is also equal to:
$T = H(X) - H(X/Y) = H(Y) - H(Y/X)$   (17)

and represents the average value of the mutual information related to the field X obtained from the field Y, or the average amount of information passing through the system. Based on the trans-information, the system can be in one of the following situations:
1. There exists a one-to-one function between X and Y; there is no equivocation on the input signals and no error on the output signals, so $T_r = 1$. The traffic on each transport mode is provided by only one traffic generator or sender.
2. To any signal from Y there corresponds only one signal in X, so there is no equivocation on the signal $x_i$ entering the system when the signal $y_j$ at the exit is known. At the same time, for the same signal $x_i$ entering the system, many signals $y_j$ are generated at the exit, and this facilitates the genesis of errors during the process. For transport, this is the situation of a dispatch started by road, divided into many pieces, some of which continue their way by another mode of transport.
3. A signal entering the system generates only one signal at the exit, so there is no error. However, if the signal at the exit is known, there exists equivocation on the entering signal:

$T_r = \dfrac{H(Y)}{H(X)}$   (18)

4. There exists equivocation related to the entering signals and error on the exit signals. This creates an uncertain situation about the work inside the system and

$T_r = \dfrac{H(X) + H(Y) - H(X,Y)}{H(X)}$   (19)

5. If all the signals (entering, leaving, or their reunion) have the same probability within their category,

$p(x_i, y_j) = \dfrac{1}{mn}, \quad p(x_i) = \dfrac{1}{m}, \quad p(y_j) = \dfrac{1}{n}, \quad i = 1, \ldots, m, \; j = 1, \ldots, n$   (20)

then the entropies take their maximum values and it follows that

$H(X) = \lg m, \quad H(Y) = \lg n, \quad H(X,Y) = \lg mn = H(X) + H(Y), \quad T_r = 0$   (21)

The significance of this situation is the possible connection of each entering signal to any signal arriving at the exit, so the uncertainty is dominant.
The relative trans-information can thus define the quality of the entrance-exit connections through the system as follows: $T_r = 0$ means maximum complexity; $T_r = 1$ means a one-to-one correspondence between entrance and exit; $T_r \neq 0$, $T_r \neq 1$ means a random connection between entrance and exit. Into an intermodal terminal, during an interval of time, come in a number of transport units from different
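As a numerical illustration of relations (13)-(17), the Python sketch below computes the entropies of the input and output fields, the equivocation, the average error and the trans-information from a joint distribution; the joint matrix is an invented example, not data from the paper.

```python
import math

def lg(x):
    return math.log(x, 10)

# Invented joint distribution p(x_i, y_j) over m = 2 input modes and n = 3 output modes.
P_XY = [
    [0.30, 0.10, 0.10],
    [0.05, 0.25, 0.20],
]

p_x = [sum(row) for row in P_XY]              # marginal of the input field X
p_y = [sum(col) for col in zip(*P_XY)]        # marginal of the output field Y

def H(dist):
    return -sum(p * lg(p) for p in dist if p > 0)

H_X, H_Y = H(p_x), H(p_y)
H_XY = H([p for row in P_XY for p in row])    # joint entropy H(X,Y)

equivocation = H_XY - H_Y                     # H(X/Y), formula (15)
average_error = H_XY - H_X                    # H(Y/X), formula (16)
T = H_X + H_Y - H_XY                          # formula (13)
Tr = T / H_X                                  # formula (14)

# Formula (17) holds as an identity:
assert abs(T - (H_X - equivocation)) < 1e-12
assert abs(T - (H_Y - average_error)) < 1e-12
print(f"H(X)={H_X:.4f}  H(Y)={H_Y:.4f}  H(X,Y)={H_XY:.4f}  T={T:.4f}  Tr={Tr:.4f}")
```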
5. CONCLUSIONS
The movement of goods from supplier to receiver implies changing between different modes of transport and a lot of inter-modal and logistic operations. The complexity of the transport activities is best measured using different indicators. The indicators are also necessary for ranking the different systems of transport and for characterizing the informational and operational connections in hubs and junction points. There is not only one possible system of indicators. The proposed indicators do not completely cover the multimodal transport problems, but each of them clarifies some important and new aspects. The suggested algorithm establishes a connection between the multimodal transport problems and the multi-criteria linear optimisation problems.
6. REFERENCES
[1] CARP, D., Noduri şi reţele de transport, Editura Didactică şi Pedagogică, ISBN 978-973-30-2539-9, Bucureşti, 2009
[2] CUNCEV, I., Analiza entropică a sistemelor din transporturi, Revista transporturilor şi telecomunicaţiilor, nr. 1, 1978
[3] SHANNON, C. E., A Mathematical Theory of Communication, Bell System Technical Journal, vol. 27, 1948
[4] THEILER, G., TOVISSI, L., Măsuri informaţionale care ţin seama de nivelul de eficienţă al fiecărei stări, Revista de statistică, nr. 3, 1977
[5] European Intermodal Association, Intermodal Transport in Europe, EIA, Brussels, ISBN 90-901991-36, 2005
1. INTRODUCTION
Economic theory indicates that concentration is an important determinant of market behaviour and market results. Monopolistic practices are more likely where a small number of leading firms account for the bulk of an industry's output than where even the largest firms are of relatively small importance. Therefore, in the explanation of business policy, the characteristics of an industry captured by a concentration index are likely to play an important part. This relation to the degree of monopoly has motivated most of the empirical studies involving the measurement of concentration. Concerns and general suspicion about market concentration have a long history in the United States, dating back to the earliest days of the new republic. The fact that economic and political liberties were seen as inextricably linked fostered the sentiment that the concentration of economic power invariably leads to the concentration of political power. As Dirlam and Kahn (1954, p. 17) observe: "Clearly we are not devoted to a competitive system only for economic reasons. It is also associated with such social and political ideals as the diffusion of private power and maximum opportunities for individual self-expression. If the economy will run itself, government interference in our daily life is held to a minimum." Market concentration is useful as an economic tool because it indicates the degree of competition in the market. In this regard, Tirole (1988, p. 247) notes that Bain's (1956) original concern with market concentration was based on an intuitive relationship between high concentration and collusion. There are game-theoretic models of market
interaction that anticipate a future growth in market concentration resulting in higher prices and lower consumer welfare even when collusion in the sense of cartelization (i.e. explicit collusion) is absent. Such examples are the Cournot oligopoly and the Bertrand oligopoly for differentiated products. Empirical studies designed to test the relationship between market concentration and prices are jointly known as price-concentration studies. Any study that claims to examine the relationship between price and the level of market concentration is also testing whether the market definition (according to which market concentration is being calculated) is relevant; that is, whether the boundaries of each market are being drawn neither too narrowly nor too broadly, so as not to make the defined "market" meaningless from the point of view of the competitive interactions of the firms that it includes (or is made of). As a matter of public policy, the measurement of market concentration is important and lies at the heart of decisions about whether to approve mergers and acquisitions that might have a potentially harmful impact on consumers in terms of both prices and the availability of goods and services. These issues have been addressed by antitrust laws in the U.S. dating back to the Sherman Antitrust Act of 1890 [Hays and Ward, 2011]. By contrast, it was not until 1989 that EU policy makers realized the usefulness and necessity of a common merger regulatory framework [Lipczynski and Wilson, 2001] and responded with the European Council Merger Regulation (ECMR) on the control of concentrations, prompted by the increased cross-border activities of European firms in the second half of the 1980s [Jacobson and Androsso-O'Callaghan, 1996].
HHI can be calculated using the data from the table:

HHI = 25² + 20² + 15² + 15² + 10² + 10² + 5² = 1,700

Starting from this example, if the second and third largest firms in the market were to merge, what will happen to the HHI index? To arrive at the result, we have to calculate the new HHI under the market shares existing after the merger:

Table 2
Firm               Market share (%)    Squared market share
1                  25                  625
2 and 3 (merged)   20 + 15 = 35        1,225
4                  15                  225
5                  10                  100
6                  10                  100
7                  5                   25
Total              100                 HHI = 2,300
Source: own calculations using random data

We can see in the table that, after the merger of firms 2 and 3, the square of the merged firm's market share is much higher than the sum of the squares of the individual shares before the concentration. The merger increases the HHI from 1,700 points to 2,300.
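The calculation above can be reproduced in a few lines of Python; the pre-merger shares are those implied by the squared terms in the HHI sum above (illustrative random data, as in the tables).

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (shares in percent)."""
    return sum(s ** 2 for s in shares)

pre_merger  = [25, 20, 15, 15, 10, 10, 5]   # pre-merger shares implied by the HHI sum above
post_merger = [25, 20 + 15, 15, 10, 10, 5]  # firms 2 and 3 merge into a 35% firm

hhi_pre, hhi_post = hhi(pre_merger), hhi(post_merger)
delta = hhi_post - hhi_pre

print(f"HHI before merger: {hhi_pre}")   # 1700
print(f"HHI after merger:  {hhi_post}")  # 2300
print(f"Delta:             {delta}")     # 600
```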
Merger policy is seen as preventing excessive market concentration and monopoly power. The concern is that excessive concentration may cause a substantial lessening of competition or the creation of a dominant position, which may increase prices and reduce consumer welfare. The lessening of competition resulting from a concentration is more likely to be substantial the larger the market share of the incumbent, the greater the competitive significance of the potential entrant, and the greater the competitive threat posed by this potential entrant relative to others. In the analysis undertaken, the HHI index is 2,300 and the delta is 600. According to the Commission Guidelines, the Commission is also unlikely to identify competition concerns in a merger with a post-merger HHI above 2,000 and a delta below 150. Since both the post-merger HHI and the delta in our analysis exceed these thresholds, we are facing an anticompetitive merger (in the practice of the European Union, high post-merger HHIs and large changes in HHIs tend to be associated with anticompetitive mergers). However, not all mergers with these characteristics create or enhance market power. In markets with highly differentiated products, mergers may allow for unilateral price increases irrespective of market shares or HHI calculations. In my opinion, the HHI is more complete and elaborate than other market indicators such as the concentration ratio or the market share, because it is a weighted average of the market shares of all firms. Concentration ratios do not take account of the relative sizes of the leading companies. For example, a market which has four firms each with a 20% market share will have the same C4 ratio as a market in which the leading four firms have market shares of 55%, 20%, 3% and 2%. But it is very probable that the competitiveness of the two markets will differ. For instance, in the latter case there is a clear potential leader for the other firms to follow, whereas in the former case there might be fierce competition to become
[1] DIRLAM, J., KAHN, A., Fair Competition: The Law and Economics of Antitrust Policy, Cornell University Press, 1954
[2] TIROLE, J., The Theory of Industrial Organization, MIT Press, 1988
[3] HAYS, F., WARD, S. G., Understanding market concentration: internet-based applications from the banking industry, Journal of Instructional Pedagogies, 2011
[4] LIPCZYNSKI, J., WILSON, J., Industrial Organisation. An Analysis of Competitive Markets, London: Financial Times/Prentice Hall, 2001
[5] JACOBSON, D., ANDROSSO-O'CALLAGHAN, B., Industrial Economics and Organization: A European Perspective, Maidenhead: McGraw-Hill, 1996
[6] Council Regulation (EC) No 139/2004, art. 3
[7] ELZINGA, K., Unmasking Monopoly: Four Types of Economic Evidence, in R. Larner and J. Meehan (eds.), Economics and Antitrust Policy, 1989
[8] HAY, G. A., Market Power in Antitrust, 60 Antitrust L.J. 807-808, 1992
[9] European Commission, Notice on the Definition of the Relevant Market for the Purposes of Community Competition Law, Official Journal C 372, December 1997
[10] BOZIAN, L., The Relevant Market Concept and the Hypothetical Monopolist Test, in Profil: Concurenţa, no. 1, 2009
[11] European Commission, Guidelines on the assessment of horizontal mergers under the Council Regulation on the control of concentrations between undertakings (2004/C 31/03), Official Journal of the European Union
In order to withstand the competitive environment of the market or to consolidate their leading position in the field, organizations are increasingly interested in implementing a quality management system and in adopting quality-oriented strategies for their market processes. Also, in order to increase customer satisfaction, the organization's management is always interested in improving the effectiveness and efficiency of processes, products and services through the implementation of continuous improvement programmes, including the preventive and corrective actions that are necessary. In this article, we argue for the necessity of adopting quality improvement strategies with a direct impact on the market performance of the organization. Keywords: quality improvement, competitiveness
1. INTRODUCTION
The term quality management has a specific meaning within many business sectors. This specific definition, which does not aim to assure 'good quality' in the more general sense, but rather to ensure that an organization or product is consistent, can be considered to have four main components: quality planning, quality control, quality assurance and quality improvement. Quality management is focused not only on product or service quality, but also on the means to achieve it. Quality management therefore uses quality assurance and the control of processes as well as products to achieve more consistent quality. 1.1. Short history of quality The concept of quality as we think of it now first emerged during the Industrial Revolution. Previously, goods had been made from start to finish by the same person or team of people, with handcrafting and tweaking of the product to meet 'quality criteria'. Mass production brought huge teams of people together to work on specific stages of production, where one person would not necessarily complete a product from start to finish. In the late 19th century, pioneers such as Frederick Winslow Taylor and Henry Ford recognized the limitations of the methods being used in mass production at the time and the resulting variation in the quality of output. Birland established Quality Departments to oversee the quality of production and the rectification of errors, and Ford emphasized the standardization of design and of component standards to ensure that a standard product was produced. Management of quality was the responsibility of the Quality department and was implemented through the inspection of product output in order to 'catch' defects. The application of statistical control came later, as a result of World War production methods, and was advanced by the work of W. Edwards Deming, a statistician after whom the Deming Prize for quality is named. Joseph M. Juran focused more on managing for quality. The first edition of Juran's Quality Control Handbook was published in 1951. He also developed "Juran's trilogy", an approach to cross-functional
management that is composed of three managerial processes: quality planning, quality control and quality improvement. These functions all play a vital role when evaluating quality. Quality as a profession, and the managerial process associated with the quality function, were introduced during the second half of the 20th century and have evolved since then. Over this period, few other disciplines have seen as many changes as the quality profession. The quality profession grew from simple control, to engineering, to systems engineering. Quality control activities were predominant in the 1940s, 1950s and 1960s. The 1970s were an era of quality engineering, and the 1990s saw quality systems emerge as a field. Like medicine, accounting and engineering, quality has achieved the status of a recognized profession. 2. QUALITY - AN INDISPENSABLE TOOL TO SURVIVE IN A COMPETITIVE ENVIRONMENT When we discuss our country, we can observe a particularity that is becoming more and more obvious in our business environment: the awareness of the importance of quality in all organizational processes and in customer relationships, both internal (employees) and external (consumers, investors, suppliers, partners, etc.). In order to withstand the competitive environment of the market, more and more organizations in Romania have implemented and certified a quality management system in accordance with the standard SR EN ISO 9001:2008. On the other hand, to maintain their market leadership, organizations implement integrated management systems concerning Quality, Environment, Safety and Ethics, or they adopt different strategies for the continuous quality improvement of their internal and external processes, including market processes. In this sense, organizations define their own quality policy and objectives, document the organization's quality management system and work towards promoting at least the following basic principles related to quality assurance processes:
3. WAYS OF DEALING WITH QUALITY IN THE MARKET PROCESSES In order to increase customer satisfaction, the organization's management is always interested in improving the effectiveness and efficiency of processes, activities and products. Organizations choose between two strategies of continuous quality improvement of processes, namely: INNOVATION, a strategy that refers to radical improvement projects, or to reviewing and improving existing processes; and KAIZEN, a strategy that follows slower improvement activities, developed by the employees who work on the existing processes. Radical improvement projects involve a significant redesign of existing processes and include: defining objectives and presenting process improvements, implementation of improvement actions, checking the improved process, and evaluation of the improvements. In the second case, improvements are made in small steps, and the best source of ideas is represented by the employees of the organisation. When an improvement project is elaborated, the interests of all parties are taken into account. Whichever method is selected, the improvement process should include the following components: evaluation of the existing situation; identification of possible risks; effect evaluation; implementation of the new solution; evaluation of the efficiency of the process. 4. CUSTOMER FOCUS
Organizations depend on their customers and therefore should understand current and future customer needs, should meet customer requirements and should strive to exceed customer expectations. Key benefits: increased revenue and market share obtained through flexible and fast responses to market opportunities; increased effectiveness in the use of the organization's resources to enhance customer satisfaction; improved customer loyalty leading to repeat business. 5. LEADERSHIP Leaders establish unity of purpose and direction for the organization. They should create and maintain the internal environment in which people can become fully involved in achieving the organization's objectives. Applying the principle of leadership typically leads to: people understanding and being motivated towards the organization's goals and objectives; activities being evaluated, aligned and implemented in a unified way; miscommunication between the levels of an organization being minimized; considering the needs of all interested parties, including customers, owners, employees, suppliers, financiers, local communities and society as a whole; establishing a clear vision of the organization's future; setting challenging goals and targets; creating and sustaining shared values, fairness and ethical role models at all levels of the organization; establishing trust and eliminating fear; providing people with the required resources, training and freedom to act with responsibility and accountability; inspiring, encouraging and recognizing people's contributions. 6. INVOLVEMENT OF PEOPLE
People at all levels are the essence of an organization and their full involvement enables their abilities to be used for the organization's benefit. Applying the principle of involvement of people typically leads to: motivated, committed and involved people within the organization; innovation and creativity in furthering the organization's objectives; people being accountable for their own performance; people eager to participate in and contribute to continual improvement; people understanding the importance of their contribution and role in the organization; people identifying constraints to their performance; people accepting ownership of problems and their responsibility for solving them; people evaluating their performance against their personal goals and objectives;
7. PROCESS APPROACH A desired result is achieved more efficiently when activities and related resources are managed as a process. Applying the principle of the process approach typically leads to: lower costs and shorter cycle times through the effective use of resources; improved, consistent and predictable results; focused and prioritized improvement opportunities; systematically defining the activities necessary to obtain a desired result; establishing clear responsibility and accountability for managing key activities; analysing and measuring the capability of key activities; identifying the interfaces of key activities within and between the functions of the organization; focusing on factors such as resources, methods and materials that will improve key activities of the organization; evaluating the risks, consequences and impacts of activities on customers, suppliers and other interested parties. 8. SYSTEM APPROACH TO MANAGEMENT
Identifying, understanding and managing interrelated processes as a system contributes to the organization's effectiveness and efficiency in achieving its objectives. Applying the principle of the system approach to management leads to: integration and alignment of the processes that will best achieve the desired results; the ability to focus effort on the key processes; providing confidence to interested parties as to the consistency, effectiveness and efficiency of the organization; structuring a system to achieve the organization's objectives in the most effective and efficient way; understanding the interdependencies between the processes of the system; structured approaches that harmonize and integrate processes; providing a better understanding of the roles and responsibilities necessary for achieving common objectives, thereby reducing cross-functional barriers; understanding organizational capabilities and establishing resource constraints prior to action; targeting and defining how specific activities within a system should operate; continually improving the system through measurement and evaluation.
1. INTRODUCTION
The need for creating a department concerned with risk management is given by the following reasons: the changes that have taken place in the leadership and management of public organizations; the need to facilitate the efficient and effective achievement of the organization's goals based on risk management; the need to ensure the basic conditions for organizing a proper internal control system; the requirement of an annual review and elaboration of plans for managing the risks of the activities carried out, in order to limit the losses that risk involves, naming the employees responsible for implementing those plans; the identification of the inherent risks associated with any actions or inactions that can lead to the organization's objectives not being fulfilled; the implementation of an efficient internal control system that can lead to the standardization of risk management in public organizations; the elimination of inappropriate, ineffective managerial acts and the removal of overly centralized, inefficient management systems that may affect the achievement of objectives. 2. RISK MANAGEMENT - DEFINITION
Risk management is the process of identification, analysis and either acceptance or mitigation of uncertainty in investment decision-making. Essentially, risk management occurs any time an investor or fund manager analyzes and attempts to quantify the potential for losses in an investment and then takes the appropriate action (or inaction) given their investment objectives and risk tolerance. Inadequate risk management can result in severe consequences for companies as well as individuals. For example, the recession that began in 2008 was largely caused by the loose credit risk management of financial firms. Simply put, risk management is a two-step process: determining what risks exist in an investment and then handling those risks in the way best suited to your investment objectives. Risk management occurs everywhere in the financial world. It occurs when an investor buys low-risk government bonds over riskier corporate debt, when a fund manager hedges their currency exposure with currency derivatives and when a bank performs a credit check on an individual before issuing them a personal line of credit. 3. OBJECTIVES OF THE RISK MANAGEMENT FUNCTION The objective of risk management is to identify potential problems before they occur, so that risk-handling activities may be planned and invoked as needed across the life of the product or project to mitigate adverse impacts on achieving objectives. In short, this is what risk management employees should do: identify and prioritise potential risk events; help develop risk management strategies and risk management plans; use established risk management methods, tools and techniques to assist in the analysis and reporting of identified risk events; find ways to identify and evaluate risks; develop strategies and plans for lasting risk management. 4. ANALYSIS OF THE RISK MANAGEMENT MODEL a. Management strategies Success in applying good risk management strategies in the organization is determined by: planning the actions that must be carried out in order to achieve the proposed objectives; planning the internal control actions that are required in order to manage risk properly; planning the strategy that must be implemented in case risks materialize. b. Risk analysis
Risk analysis is a technique to identify and assess factors that may jeopardize the success of a project or
In an organization's activity, risk refers to not reaching the established objectives in terms of performance (not reaching the quality standards), timelines and costs (budget overruns). A risk element is any factor that has a measurable probability of deviating from the plan. This, of course, requires the existence of a plan. The strategies, plans and programmes of an organisation are elements that allow the prefiguration of reality and then make possible the comparison between actual achievements and the expected ones. In order to attain the objectives, the development of a set of activities is required. In the risk identification phase, the potential hazards, their effects and their probabilities of occurrence are evaluated, in order to decide which of those risks can be avoided. 1. Risk evaluation and quantification
There are three important principles in risk evaluation: to ensure that there is a clearly structured process in which both the probability and the impact of each risk are taken into account; to record the risk assessment in a way that facilitates the monitoring and the identification of risk priorities; and to understand very clearly the difference between inherent and residual risk. The evaluation should be based, as much as possible, on independent and impartial evidence. All the factors affected by a risk must also be taken into account, and confusion between risk evaluation and the appreciation of the acceptable character of the risk must be avoided. 2. Risk prioritization
Once the risks have been evaluated, the organization's priorities concerning risks will emerge. If a risk represents a great exposure, then that risk will become a priority. The major risks should always be taken into account by the permanent council of the organization. Specific activities that must be implemented: Determining risk exposure. A problem, when you have a number of possible risks, is that it can be difficult to decide which risks are worth putting effort into addressing. Risk exposure is a simple calculation that gives a numeric value to a risk, enabling different risks to be compared. Risk tolerance evaluation. Risk tolerance represents the "quantity" of risk that an organization is prepared to accept, or to which it can be exposed at a certain time. The risk concept has different significations depending on the nature of the risk, which can be
The problem of controlled or uncontrolled risks can be discussed depending on risk tolerance. In this context, the subjects are the uncontrolled risks or the partially controlled risks. Alternative strategies adopted for risk control: Acceptance (risk tolerance): in such a situation, no measure has to be taken, even if permanent monitoring of the risk is necessary in order to find out whether the exposure level has increased. Risk transfer: the possibility of transferring the risk to a destination outside the organization by signing an insurance policy. Risk mitigation: it involves the appropriate application of the risk control system in order to reduce the identified inherent risk to a minimum level. Risk avoidance: this strategy consists in eliminating the activities that generate risks. We must mention the fact that the option of avoiding risks is more limited in the public sector than in the private one. Ending risks: achieved by stopping the activity that generates the risk, but this can affect the achievement of objectives. Handling difficult situations: risk response is the action phase of the risk management cycle, which aims at the elimination of risks, at risk mitigation or at splitting risks. Handling difficult situations consists in elaborating a plan that aims at reducing the impact of the risks that cannot be avoided. Concluding what was presented above, we can deduce that handling risks means controlling them using internal measures. 4. Monitoring, reviewing and reporting risks A. Monitoring risks
Risk monitoring and control continues throughout an organisation until the objective is reached. Risk monitoring and control is the process of identifying and analyzing new risks, keeping track of these new risks and forming contingency plans in case they arise. It ensures that the resources that the company puts aside for a project are used properly.
The risk management function ends with the development of a Risk Register for each department. In order to prepare the Risk Register, we must follow the next steps: develop an operational working procedure; establish the main general and specific objectives of each department; identify the risks associated with the department's activities; evaluate and quantify the inherent risk in order to determine its exposure; prepare the Risk Register, which will contain the inherent risks and the residual risks remaining after applying the control strategies; monitor the risk management process, which keeps the residual risk exposures under control in order to maintain them within normal tolerability limits; develop an action plan in which the strategy adopted for each specific risk will be mentioned, that is, the actions that will be taken by implementing the most appropriate forms and instruments of control to minimize the risk. The development of the Risk Register will be made in accordance with the model given by the legal framework.
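As an illustration of the Risk Register steps listed above, the Python sketch below builds a minimal register and ranks its entries by risk exposure. The probability-times-impact scoring, the rating scales and the tolerance threshold are common conventions assumed here for the example, not values prescribed by the paper, and the risk entries themselves are invented.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    department: str
    description: str
    probability: int   # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int        # 1 (negligible) .. 5 (critical) -- assumed scale

    @property
    def exposure(self) -> int:
        # Risk exposure as probability x impact: a simple numeric value
        # that lets different risks be compared and prioritised.
        return self.probability * self.impact

# Invented example entries for a departmental Risk Register.
register = [
    Risk("Procurement", "Key supplier fails to deliver on time", probability=3, impact=4),
    Risk("IT",          "Loss of critical data",                 probability=2, impact=5),
    Risk("HR",          "Shortage of qualified staff",           probability=4, impact=2),
]

TOLERANCE = 9  # assumed tolerability limit; exposures above it need a response strategy

for risk in sorted(register, key=lambda r: r.exposure, reverse=True):
    action = "mitigate / transfer / avoid" if risk.exposure > TOLERANCE else "accept and monitor"
    print(f"{risk.department:12} exposure={risk.exposure:2}  -> {action}")
```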
ABSTRACT Lean means creating more value for customers with fewer resources, by minimizing waste. Although traditionally this concept is applied in manufacturing, the Lean management improvement principles can be also applied in the case of educational institutions. This paper presents three case studies of implementing Lean in UK and USA universities that can be useful examples for implementing Lean in any university environment. Keywords: Lean, Lean management, Lean thinking
1. INTRODUCTION
Lean thinking is a new paradigm that has become the foundation for continuous process improvement and excellence in manufacturing and service organizations around the world. Lean is focused on creating value through the relentless elimination of waste [1]. A lean organization understands customer value and focuses its key processes on continuously increasing it. The ultimate goal is to provide perfect value to the customer through a perfect value creation process that has zero waste. To accomplish this, lean thinking changes the focus of management from optimizing separate technologies, assets and vertical departments to optimizing the flow of products and services through entire value streams that flow horizontally across technologies, assets and departments to customers. Eliminating waste along entire value streams, instead of at isolated points, creates processes that need less human effort, less space, less capital and less time to make products and services at far lower costs and with far fewer defects compared with traditional business systems. Companies are able to respond to changing customer desires with high variety, high quality, low cost and very fast throughput times. Also, information management becomes much simpler and more accurate. A popular misconception is that lean is suited only to manufacturing. Lean applies in every business and every process, and in this paper we will refer to lean applied in academia. It is not a tactic or a cost reduction programme, but a way of thinking and acting for an entire organization. Businesses in all industries and services, including healthcare and governments, are using lean principles as the way they think and act [2]. In any business there are three types of activities: 1. activities that add value, which are those activities that, from the point of view of the customer, make a product or service more valuable; 2. necessary activities that do not add value: in terms of the customer, such activities do not make a product or service more valuable, but from the point of view of the supplier they cannot be eliminated;
3. unnecessary activities that do not add value, which are those activities that can be eliminated. The Lean concept refers to the effective management of an organization's production processes by eliminating waste, i.e. processes that do not add value and are not required. The focus of Lean-based management is on value, the customer, efficiency and effectiveness, as well as on savings, sustainability and increasing performance. There are several steps to implementing Lean in an organization, including the creation of Value Stream Maps. Firstly, a current Value Stream Map is identified. That means designing a chart that includes all the steps necessary to go from receiving an order from a customer to the delivery of the required product. After that, a future Value Stream Map is drawn, including the opportunities for improvement identified through the analysis of the current map. This step of implementing Lean, among other steps, is referred to in the following case studies. The Lean reference list consists of the works of James P. Womack and Daniel T. Jones (Lean Thinking), which is one of the earliest books describing the Lean philosophy, Taiichi Ohno (The Toyota Production System: Beyond Large-Scale Production), Jeffrey Liker (The Toyota Way), Mike Rother and John Shook (Learning to See: Value Stream Mapping to Add Value and Eliminate Muda) and others such as Don Tapping, Tom Luyster and Tom Shuker, Kevin J. Duggan or Kenneth Dailey. Lean thinking is a relatively new concept in the Romanian management literature, though there are several national authors who offer a personal perspective on Lean [3]. 2. MODELS OF LEAN IMPLEMENTATION IN A HIGHER EDUCATION ORGANISATION The following case study presents how Lean, a technique traditionally used only in manufacturing, is tailored to the particularities of higher education processes and is implemented in a US university. 2.1 The four-step model of Lean implementation at the University of Central Oklahoma
THE IMPORTANCE OF RELATIONS BETWEEN GEORGIA AND ROMANIA FOR THE PROGRESS OF ENERGY PROJECTS
Andrei Saguna University, Constanta; Bucuresti University; Constanta Maritime University, Romania
ABSTRACT Romania and Georgia have developed close relations during the past two decades. They have excellent bilateral relations, collaborating in a wide range of fields. Georgia is an important partner of Romania in the wider Black Sea area, while Romania is the most active European partner of Georgia and one of the strongest supporters of Georgia's Euro-Atlantic integration. As part of the Southern Energy Corridor, both countries are very interested in the delivery of Caspian energy resources to Europe through projects that include them as transit countries. Although Nabucco was for a long time the most important project for them, nowadays the realization of AGRI has become the most important goal. The relations between these two countries are thus vital for the development of this energy project. Keywords: energy project, hydrocarbons, energy corridor, Nabucco, AGRI, South Stream, North Stream, liquefied natural gas, Southern Caucasus
1. INTRODUCTION
During the two decades since the establishment of diplomatic relations between Romania and Georgia, these two countries and their relations have evolved considerably in all areas. Currently, the partnership between Bucharest and Tbilisi has new opportunities to deepen, strongly supported by the interests of each of the two countries as well as by the interests of world powers such as the EU and NATO. Firstly, Georgia wants to integrate itself into Euro-Atlantic structures, and Romania, as a member of NATO and the EU, supports Georgia's democratic development and its European and Euro-Atlantic aspirations and is open to sharing its experience in the preparation for accession. Also, Georgia is interested in confirming its position at the regional level as a transit country for energy resources from the Southern Caucasus and Central Asia, strongly supporting energy projects in the region, a very important project currently being the Azerbaijan-Georgia-Romania-Hungary liquefied natural gas Interconnector (AGRI), in which Romania and Georgia are partners. Regarding Romania, its interests are both those of a member of the Euro-Atlantic structures - solving conflicts in the region, the fight against terrorism and energy security - and its own interests of safety in the Black Sea region and access to Caspian hydrocarbons, by positioning Romania on the energy routes from the Southern Caucasus and Central Asia to European markets. In an interview in 2005, Romanian President Traian Basescu explained the importance of relations with Georgia by the fact that it can provide contact with the "wider Black Sea area, providing 50% of the energy required in the EU. That is why our interests are major ones". As can be seen, both parties are equally interested in securing a transit role in regional energy projects, such projects having both economic and geostrategic value for the two states. Although over the years many variations of these projects were circulated, of major importance
for Romania and Georgia are the Nabucco and AGRI pipelines, designed to supply Europe with Caspian natural gas while avoiding transit through Russia. Implementation of these projects remains uncertain for the moment, everything depending on the development of relations between the participants, both regional and global, in this "Caspian game". However, the existence of cooperative relations between Romania and Georgia represents a small but important step towards achieving them. 2. EVOLUTION OF RELATIONS BETWEEN GEORGIA AND ROMANIA AND THEIR BACKGROUND Shortly after World War I, when the Russian Empire was dismantled, Romania recognized on 18 February 1921 the independence of the Democratic Republic of Georgia. Also, after the dissolution of the USSR, on 27 August 1991, the Romanian Government welcomed the "Declaration of the Parliament of Georgia on the restoration of state independence" and expressed its willingness to develop friendly relations and cooperation with Georgia, based on the UN Charter and the principles of international law, Romania being the first state to recognize the restoration of Georgia's independence [9]. Diplomatic relations between the two countries were established on 25 June 1992, and the Romanian Embassy in Tbilisi was inaugurated on 25 February 1998. Beginning with the visit of the President of Georgia, Eduard Shevardnadze, to Bucharest on 30 June 1995 at the BSEC meeting [9], reciprocal official visits of the leaders of the two countries have been conducted periodically and Romanian-Georgian relations have constantly improved. For example, after Romania joined the EU in 2007, Georgia became a priority state for development assistance under EU and international principles. Since that year, the Ministry of Foreign Affairs of Romania has financed development projects in Georgia worth about 2 million euros in areas of common interest such as
The Caspian game has, in the last two decades, become increasingly complicated. The stakes are high for all parties: consumers, carriers and suppliers. It is about geopolitics, money, power and energy. The European Union wants to diversify its energy sources and routes to reduce the Russian monopoly, a direction supported by the U.S. However, uncertainty about future energy projects that avoid Russia makes the EU hesitate when it comes to confronting this local hegemon. On the other hand, Russia is not willing to lose its status as Europe's energy supplier and its control of the "near abroad". For this reason, it strongly opposes any energy project that does not include Russia and any approach of the states from its former sphere of influence towards the Euro-Atlantic structures. In addition to these great powers, the other countries involved in the Caspian game each adopt the position they find most advantageous in that context. In this paper we have discussed the importance of relations between Romania and Georgia for the great Caspian game. These two Black Sea littoral states have an important role as transit countries in energy projects that could bypass Russia. Both have cold relations with Russia, especially Georgia. Romania and Russia have different approaches to the problems in the Republic of Moldova and Transnistria and look in opposite directions, Romania being on good terms with the EU and NATO, with which Russia has many disagreements. Regarding Georgia, the situation is more than clear. Georgia wants sovereignty and independence from Russia, hoping it can obtain them through accession to the Euro-Atlantic structures, while Russia is not willing to give up its power over the Caspian region. The pro-Western outlook of both countries has brought them closer together and encouraged them to move towards liberalization from the Russian monopoly, at least in energy terms. Thus, the two showed their willingness to participate in Nabucco, the first large-scale project that does not include Russia either as a supplier or as a transit country. But since this project is delayed, Romania took
[1] Chamber of Deputies, 2011. Information on the official visit to Tbilisi of Ms. Roberta Alma Anastase, President of the Chamber of Deputies, head of a parliamentary delegation, www.cdep.ro [2] Ministry of Foreign Affairs, Bilateral relations: Georgia, www.mae.ro [3] Ciubotaru, R. 2008. Romania-Georgia relations, foreign policy litmus, www.cotidianul.ro [4] Cristescu, A. 2011. Romania under the burden of energy dependence? Top European states by dependence on imported energy, www.econtext.ro [5] Drăgan, C., Competitivity aspects on Romanian maritime transports, Constanta Maritime University's Annals, Year XIII, 17th Issue [6] Drăgan, C., Developments of maritime transport economy in Europe, Constanta Maritime University's Annals, Year XIII, 17th Issue [7] EU does not want to depend on Russian gas, www.evz.ro [8] Fati, S. 2011. Why fear Russia and Turkey to Romania, Romania Libera, www.romanialibera.ro [9] http://south-stream.info [10] Line ferry to Batumi, between Romania and Georgia, could be reactivated soon, www.b1.ro [11] Ministry of Foreign Affairs, 2012. Press Release: Celebration of 20 years of diplomatic relations between Romania and Georgia, www.mae.ro [12] Ministry of Foreign Affairs, 2012. Press Release: Consultation of the Foreign Minister of Romania, Titus Corlățean, with his Georgian counterpart Grigol Vashadze, www.mae.ro [13] Ministry of Foreign Affairs, 2012. Press Release: NGO Forum Romania - Georgia, www.mae.ro [14] National Foundation for Romanians Abroad, 2012. EUMM Georgia, www.romanii.ro [15] Necula, F. 2012. Romania, a new power in the gas league - Asia Times, www.ziare.com [16] Petrescu, A. 2012. Corlățean: We want to implement the joint energy project of the Azerbaijan-Georgia-Romania-Hungary Interconnector, Epoch Times, http://epochtimes-romania.com [17] Prompt Media, 2010. Ferry lines between Romania and Georgia, from Constanta to Batumi, will be restarted, www.romedia.gr
Andrei Saguna University, Constanta; Bucuresti University; Constanta Maritime University, Romania
ABSTRACT South Caucasus (also referred to as Transcaucasus), is a region situated to the south of the Greater Caucasus Mountain Range, composed of Georgia, Azerbaijan and Armenia. Due to the rich oil reserves of the Caspian Sea basin and geostrategic importance of the Caucasus as a crossroad between Europe and Asia, this region has always constituted a pole of attraction for the great powers of the world after the collapse of USSR. Not only neighboring countries like Russia, Iran, Turkey and Central Asian states (Kazakhstan and Turkmenistan), but also the United States, European Union and China are becoming actively involved in this region. Thus, while Armenia has been allied with Russia and Iran, considering these two powers as a counterweight to Turkey - its main enemy in the region, Azerbaijan and Georgia have developed geostrategic alliance with Turkey, and the United States by promoting cooperation with NATO member countries. Moreover, the conflict in NagornoKarabakh had deprived Armenia of the possibility of cooperation with other South Caucasian states. Armenia, which bases itself mainly on the relationship with Russia, believes that maintaining good relations with Iran is vital in terms of its national security, therefore, Armenia encourages active presence of Iran in the region. Meanwhile, Azerbaijan and Georgia, which have developed geo-economic relations between them in course of time and expanded strategic partnership with Western democracies, particularly through the NATO alliance, put forth their best efforts in order to leave the sphere of influence of Russia. Keywords: South Caucasus, energy project, energy corridor, Caspian Sea, strategic interests, economic interests, Caspian energy, oil, energy security
1. INTRODUCTION
According to the 2012 Statistical Review of World Energy of British Petroleum (BP), global energy consumption increased again in 2011, with a growth rate of 2.5%, a value near the average for the last ten years. Consumption growth is attributable especially to the emerging economies, because in the OECD countries (Organisation for Economic Co-operation and Development) demand fell in 2011 for the third time in four years. Fossil fuels continue to dominate the energy market, with a market share of 87% of the energy mix, oil being the market leader (33.1%). Even if renewable energy is becoming increasingly used, it currently represents only 2% of global consumption. Research in recent years has shown that there are sufficient sources of hydrocarbons to meet demand growth, as evidenced each year by BP in its statistics on proven reserves, but problems in accessing these resources in some regions and in transporting them to consumers create challenges in trying to secure supply at reasonable prices [BP, 2012]. For this reason, a significant part of foreign policy is concerned with the availability of pipelines and terminals, with future pipeline routes, partnerships, etc. [Dolghin, 2004], or, in short, with energy security. To ensure energy security in the last two decades, after the dissolution of the Union of Soviet Socialist Republics (USSR) in December 1991, the European Union (EU) and the United States (U.S.) have tried to develop relations with the three countries of the South Caucasus (Azerbaijan, Georgia and Armenia), in order to
gain access through these countries to the rich energy resources of the Caspian basin. The Caspian Sea region (South Caucasus and Central Asia) holds approximately three to four percent of global oil reserves and four to six percent of global natural gas reserves [BP, 2012]. The proportion of Caspian hydrocarbon reserves in the world total is not significant, but given the uncertainty of oil supply from the Persian Gulf to international markets, and the possibility for Russia to use its energy supplier status as a tool for local hegemony, the transport of energy from the South Caucasus and Central Asia (Kazakhstan and Turkmenistan) to Western countries through the Caucasus has become important for the EU and the U.S. [de Haas, 2006]. But not only the EU and the U.S. have energy interests in the Caspian Sea; other players such as Russia, Iran, Turkey, China and the neighboring countries in Central Asia would also like to gain control of oil and gas production or of the pipelines through which the hydrocarbons will be transported to world markets [Neguț et al., 2008]. The U.S. wants to diversify the energy routes from the South Caucasus to international markets, especially to Europe, in order to avoid the Russian monopoly and strengthen the independence of the states in the region, while Russia is keen to maintain its local hegemony. For Turkey and the EU, the South Caucasus is a bridge to the Caspian and Central Asian hydrocarbons, while Iran and the Central Asian states see the South Caucasus as a transport route for energy resources to the West [Mehtiyev, 2004]. China's role in this discussion is given by the fact that, as the second largest energy consumer in the world after the United States, the country imports large quantities of Caspian hydrocarbons from Kazakhstan and hence has a
Even more than twenty years after the collapse of the USSR, Russia continues to regard the South Caucasus states as part of its legitimate sphere of influence and tries to restore its traditional geopolitical hegemony in the region, fighting actively, but also subtly, for dominance over its neighbors in the "near proximity" [Nuriyev, 2001]. In addition to these geopolitical interests, Russia has economic claims regarding the abundant energy resources of the Caspian Sea, wishing the new republics of the South Caucasus to export most of these resources to Western countries through pipelines that cross Russia. Thus, Russia would sit at the intersection of the energy routes to Europe, with the EU becoming increasingly dependent on the Kremlin leadership. In addition, Russia has lately focused primarily on the ex-Soviet states of the South Caucasus because the good relations between Georgia and Azerbaijan, which are closer than ever to NATO and the EU, could reduce the Russian sphere of influence and bring long-term security problems. Sunny (2010), like Nuriyev (2001), considers that the main goal of Russia in the South Caucasus is to restore its local hegemony in the "near proximity", as opposed to U.S. ambitions to achieve global hegemony. In this region, Russia is able to demonstrate to the European Union and NATO that it is not willing to cede power over the ex-Soviet states, the South Caucasus and Central Asia being the parts of the former Soviet Union most vulnerable to the influence of the great Western powers. If, until 2008, Russia used "soft power" to try to prevent the increase of American and European influence in the region, in August 2008 Russia demonstrated through the Russo-Georgian war that it can resort to "hard power" if its competitors exceed the limits imposed by the Kremlin. Through these events, Russia has shown that if its interests in the region are neglected, both Azerbaijan and Georgia, the two South Caucasus countries open to the West, will suffer serious consequences, since Russia has the capability to manipulate the frozen ethnic conflicts in these two countries and restart the wars in Nagorno-Karabakh or Abkhazia. Control of Georgia is essential for the "energy game" played by Russia, as Moscow considers energy the key to its return to the world stage. Since Georgia is the only alternative for transporting hydrocarbons from the South Caucasus and Central Asia to Europe while avoiding Russia, removing this alternative would be a great step towards regaining the title of world power and energy control over its European neighbors. The most vulnerable point is Georgia's Black Sea coast, Georgia being the only one of the three South Caucasus countries with access to the Black Sea, and poorly protected against a sea invasion, a fact that
The three small states of the South Caucasus have each gained more attention from the United States than expected. The explanation is given by Azerbaijani oil, the strong international Armenian diaspora and the pro-Western standing of Georgia [Olcott Brill, 2002]. U.S. involvement in the region is manifested by a desire to achieve and ensure the stability of the area by solving the frozen conflicts, and to ensure the exploitation and transportation of Caspian oil to international markets by removing the Russian monopoly. As noted in the previous section, Russia, since the collapse of the USSR in 1991, has expressed a desire to control the ex-Soviet states, a fact disliked by the world powers, including the U.S. The latter was attracted by Azerbaijan's oil reserves, and many U.S. oil companies such as Chevron, ExxonMobil, Unocal and Amerada
Although directly interested in the Caspian riches, as the largest global oil consumer and the main recipient of an East-West energy corridor, until the early 2000s, the EU has preferred to leave the initiative in regard to action in Caspian region to NATO, U.S. and their regional allies (Turkey), desiring not to worsen relations with Russia, the main supplier of energy in Europe. Initially, immediately after the collapse of the USSR, the EU has shown interest in the Caspian region as a potential supplier of oil, taking advantage of the chaos of the Russian Federation beginnings. In this respect, the EU launched major energy projects as TRACECA (Transport Corridor Europe-Caucasus-Central Asia) and INOGATE (International Oil and Gas Transport to Europe) that would have to link Europe to the Caspian region, but regional escalation of conflicts and Russia's return to power on the European energy market has led to stagnation of these projects. The EU also decided not to get involved in solving frozen conflicts in the Caucasus, leaving this task to others international organizations [Aldea, 2008]. Lately, however, EU enlargement to Eastern Europe, by the accession of Bulgaria and Romania in 2007, brought the EU to the border with the South Caucasus, which has increased the Union's interests for the region. In this context, European strategies regarding the Caspian area were reviewed and coordinated with the U.S. and NATO efforts. EU decided to become more active in the Caspian region, both as a mediator of conflicts, but also by reconsidering energy projects in the region, seeking to ensure energy security by diversifying energy sources. The latter can not be obtained without solving serious security problems of Caspian region both internally, given the political tensions and separatist conflicts, and externally, being influenced by geopolitical rivalries of regional actors. In fact, the South Caucasus states are also interested in developing relations with the West, which are solid security guarantees from major world powers such as the EU or NATO, needed to secure their political independence and economic viability [Cornell et. al., 2005]. The inclusion of the South Caucasus in the European Neighbourhood Policy in 2004 was a small step in this direction since announcing intensification of cooperation between the EU and South Caucasus
Turkey is the second most important regional player, after Russia, bordering all three South Caucasus states and being related to the region in historical, cultural and linguistic terms. In the last two decades the South Caucasus has gained strategic importance for Turkey, for two reasons in particular. The first is the need for regional stability after the collapse of the USSR, for the Turkish state's own security. The second reason is the economic growth offered by Turkey's participation in energy projects in the region as a transit country for the natural gas and oil pipelines running from the Caucasus and Central Asia to international markets [Szymanski, 2009]. Thus, we can conclude that Turkey shares the common interests of the U.S. and the EU in ensuring the stability and security of the South Caucasus through the peaceful resolution of the frozen conflicts in the region and the realization of energy projects in the southern corridor that avoid transiting Russia. For the South Caucasus states, several features of Turkey make it regarded as an indispensable partner in the region. Among these is the fact that Turkey is a NATO member, close to the EU, with a traditional alliance with the Western democracies. Also, Turkey's position in the heart of Eurasia, at the intersection of Asia, the Middle East and Europe, gives it a strategic geopolitical importance as a transit country. Moreover, its embrace of democracy and an open market economy makes Turkey a model for the Caucasian countries and an attractive partner for cooperation and investment. However, Turkey has a differentiated strategy in its relations with the South Caucasus states. It considers Georgia and Azerbaijan as natural allies in the South Caucasus
Iran is also an important geopolitical actor in "The Great Caspian Game", being in the vicinity of the South Caucasus and with historical, economic, cultural and ideological interests in the region. With the collapse of the USSR, Iran hoped to be able to restore its historical influence on South Caucasus states [Nuriyev, 2001] categorically opposing the involvement of the Western powers in the South Caucasus and Caspian Sea region. Noting the opening of Azerbaijan and Georgia to cooperation with the West, including with Turkey, seen as a rival in the region, Iran has decided to ally with Armenia, supporting it at the beginning of the conflict of the Nagorno-Karabakh. Moreover, the assistance offered to Armenia helped to improve the relations between Iran and Russia, the two countries having common interests in the Caucasus, and subsequently led to the establishment of the axis Russia - Armenia - Iran [Sadegh-Zadeh, 2008]. Over the years, Yerevan and Tehran have built strong relationships, especially regarding the energetic cooperation, a first gas pipeline connecting the two countries being already operational. However, Iran's position regarding the conflict between Azerbaijan and Armenia is not the same as before, Tehran moving to certain neutrality and becoming interested in its diplomatic solution. Its northern border instability and possible involvement of third parties in the renewal of hostilities is a source of concern for Iran. Like Russia, Iran is very interested in what happens in Azerbaijan, especially in the Caspian Sea. The fact that Azerbaijan has strengthened its cooperation with the
Even though after 1990 the Caspian basin became an important element in international geopolitical discourse because of its energy potential, the term "Caspian", besides defining the sea of the same name and the depression in which it lies, has never denoted an entity, either culturally or politically. Besides being for a long time a space over whose control the Russian Empire and Iran rivaled, the two regions east and west of the Caspian Sea are relatively foreign to each other. The reason is that, in the past, ties between Europe and Asia were made either through the southern Iran-Turkey axis to the Mediterranean or through the north, via Russia [Peyrouse, 2009]. However, with the implosion of the USSR, the countries of the South Caucasus and Central Asia have tried to regain their role as intermediaries between Europe and Asia because, by developing bilateral relationships, these states can open themselves to new markets, those in Central Asia being interested in the Turkish and Iranian markets, and those in the South Caucasus in the Chinese and South Asian markets. Peyrouse (2009) argues that the interests are both economic and strategic, most of these countries wanting to reduce Russian dominance in the region while being, in fact, influenced by the major world powers, the United States trying to achieve an east-west axis instead of the traditional north-south axis, and China trying to gain
In 2011, China recorded the largest increase in consumption of oil and gas worldwide, being the second largest energy consumer globally after the U.S. [BP, 2012]. Thus, with continued growth in consumption, China is looking for new energy sources and routes of transportation of hydrocarbons. Therefore, even if, so far, China has not shown a special interest in energy projects in the South Caucasus, this could occur in the near future. It is known that Iran, Kazakhstan and Russia already exports oil to China, between Kazakhstan and China being in operation a pipeline that starts near the Caspian Sea. Thus, the development of the submarine trans-caspian project could also facilitate exports of Azerbaijani hydrocarbons to the East. Steps in this direction have already been taken by China, which over recent years has significantly improved relations with the South Caucasus states. In addition, China has expressed a desire to ensure stability in the region, this being necessary to develop energy transport on the East-West axis. It can therefore be concluded that although still far from maturity, relations between the Caucasus and China will grow in importance in the near future, given the growing presence of China in the region, aimed to find new markets for its products and energy resources, but also of transport corridors to Europe. 9. CONCLUSIONS
Its geostrategic position and rich energy reserves turned the Caucasus from an area unknown to the West into the new "star" of the world stage. The interest of the world's main powers, highly industrialized and large energy consumers, in an era in which energy consumption is growing faster than the discovery of new resources, was drawn to this newly independent region immediately after the collapse of the USSR, each of them trying to secure benefits from it. As emphasized throughout this paper, Russia has the highest authority over the development projects in the region, being able to use both "soft power" and "hard power" to impose its position. This regional actor can be stopped by a more active EU or U.S. presence. To date the EU has not imposed a sufficiently strong position in the region, in order to avoid a conflict with Russia, and the U.S. is not interested enough in the Caspian region to become more involved. Thus, the South Caucasus countries, still feeling the threat of Russia and lacking strong international support, do not have the power to solve the ethnic conflicts and to secure the peace and stability that investors expect before starting valuable energy projects. The uncertainty and unpredictability that dominate this region leave unknown the direction in which the three
Ovidius University of Constanta; National Institute of Economic Research "Costin Kiritescu", Bucharest; Constanta Maritime University, Romania
ABSTRACT Marketing Intelligence Systems are tools that allow organizations to conduct business in a new way, with a new integrative vision that includes the customers' needs, requirements and desires. The activity of the organization should focus on fulfilling them. The marketing knowledge and information held by the organization about customers, the market, competition, suppliers, distribution channels and, generally, about the environment in which it operates can be easily processed using the technologies specific to computerized systems that support marketing decisions. Thus, a strategic advantage is created for solving, in real time, the problems of the organization. Certainly, Marketing Intelligence Systems, implemented and operated with the efficiency of expert systems, satisfy the desire of every marketing professional to have a "smart tool" that emulates human thinking for activities specific to his or her area of expertise. Keywords: Marketing Intelligence, Market Intelligence, Business Intelligence, Marketing Intelligence Systems
1. INTRODUCTION
The new information technologies that brought numerous dot-com businesses have created a global marketplace, restructured whole industry sectors and redefined how business is done. Romanian electronic enterprises increasingly consider information an important resource. The challenge consists in using all the integration techniques of the day, whether they concern information, data or application systems, to build a marketing intelligence platform that can meet the demands of real-time businesses [1], [2]. The importance of a marketing intelligence system (MkIS) in any business is justified because it is impossible to develop a competitive strategy without gathering and correctly analysing the information from the market [3]. 2. INTELLIGENT SYSTEM VS. ARTIFICIAL INTELLIGENCE The English-Romanian economic dictionary translates the concept of intelligence as: informație, știre, informații confidențiale, inteligență (information, news, confidential information, intelligence) [1], [3]. Intelligence involves identifying the problems in the organization: why and where they occur and with what effects. This broad set of information and activities is required to inform managers on how well the organization is performing and where problems exist. For instance, consider a commercial organization marketing a large number of different products and product variations. The management will want to know, at frequent intervals, whether sales targets are being achieved. Ideally, the information system will report only those products/product variations which are performing substantially above or below target. In order to understand the thematic spectrum of our research, we need to introduce the concept of intelligent system. Like all powerful concepts which science
operates nowadays, the concept of intelligent system is a fuzzy one and it is characterized by a significant dynamic semantics. In fact, this should not surprise us because the phrase contains the concept of intelligence, which is a powerful concept discussed and analyzed semantically. This idea was developed in the engineering sciences and then amplified in the sciences of the artificial. More specifically, in the field of research devoted to artificial intelligence. In this context, the intelligent system is that system able to perform the functions of the human brain. In particular, the name of intelligent system was given to a software system that would perform decision processes similar to those performed by natural intelligence [1], [2], [3], [4], [5]. In the context of organizational dynamics, an intelligent system [4] is characterized by the following functional properties: it has the ability to obtain data, information and knowledge both from its internal and external environment1. it has the ability to process data, information and knowledge both synchronously and asynchronously. it has the ability to analyze its internal condition in relation to external environmental conditions and to determine the degree of adaptation necessary for survival. it has the ability to decide on the optimal use of resources and capabilities to achieve a competitive advantage relative to the other competitors on the market. it has the ability to innovate and adapt to continuous innovation requirements of foreign competition and to the level of performance required by the internal decision environment.
The external environment can be defined as the field of external forces of an organization, which can directly or potentially influence it.
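As a purely illustrative sketch, and not part of the paper, the functional properties listed above can be expressed as a minimal programming interface; all class and method names below are hypothetical.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class IntelligentSystem(ABC):
    """Illustrative skeleton of the functional properties listed above (hypothetical names)."""

    @abstractmethod
    def acquire(self) -> Dict[str, Any]:
        """Obtain data, information and knowledge from the internal and external environment."""

    @abstractmethod
    def process(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        """Process the acquired data and information, synchronously or asynchronously."""

    @abstractmethod
    def assess_adaptation(self, state: Dict[str, Any]) -> float:
        """Analyze the internal condition against external conditions; return a degree of adaptation."""

    @abstractmethod
    def decide(self, state: Dict[str, Any]) -> List[str]:
        """Decide on the use of resources and capabilities to pursue competitive advantage."""

    @abstractmethod
    def innovate(self, feedback: Dict[str, Any]) -> None:
        """Adapt to external competition and to internally required performance levels."""
```

Such an abstract interface only mirrors the list of properties; a concrete marketing intelligence system would supply implementations for each method.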
Figure 1 The marketing information system [10], [11] Moreover, as Kotler's definition says, MIS is more than a system of data collection or a set of information technologies: a marketing information system is a continuing and interacting structure of people, equipment and procedures used in order to gather, sort, analyze, evaluate, and distribute pertinent, timely and accurate information for use by marketing decision makers, in order to improve their marketing planning, implementation, and control [10], [11]. A marketing information system (MIS) has four components: (a) the internal reporting system; (b) the marketing research systems; (c) the marketing intelligence system (MIS) and (d) marketing models [4]: Internal reports include orders received, inventory records and sales invoices. Marketing research takes the form of purposeful studies either ad hoc or continuously. By contrast, marketing intelligence (MI) is less specific in its purposes; it is chiefly carried out in an informal manner and by managers themselves rather than by professional marketing researchers [3], [6], [7]. 2.2 Marketing research Marketing research is defined as the systematic and objective search for and analysis of, information relevant to the identification and solution of any problem relevant to the firm's marketing activity and marketing decision makers [24]. 2.3 Marketing Intelligence (MI) On Wikipedia, MI is referred to as the information relevant to a companys markets, gathered and analyzed specifically for the purpose of accurate and confident decision-making in determining market opportunity, market penetration strategy, and market development metrics. Marketing Intelligence is necessary when entering a foreign market. Marketing Intelligence determines the intelligence needed, collects it by searching the environment and delivers it to those marketing managers who need it. [6], [7], [8] Marketing intelligence is systematic collection and analysis of publicly available information about competitors and developments in the marketing environment [10], [11], [25]. Marketing Intelligence (MI) is not the same as Market Intelligence (MARKINT). Hence, Marketing Intelligence professionals often research information and use those tools that take data from disparate data sources like web analytics, Business Intelligence (BI), call centre and sales data, which often arrive in separate reports. It is the role of Marketing Intelligence (MI) to put this data into a single environment [5], [6], [7]. For these reasons, it is often mistakenly perceived to be (or be part of) Business Intelligence. It is also sometimes mistakenly perceived to be (or be part of) Competitive Intelligence because organizationally, Marketing Intelligence can be the name of the department that performs both the market intelligence and the competitor analysis roles [6], [7], [8], [13], [14], [15]. 2.4 Market Intelligence (MARKINT) Market intelligence is another intelligence discipline that is often confused with the other intelligence disciplines. As surprising as it may sound, it is most often misperceived to be (or be part of) Business Intelligence [7], [9]. On Wikipedia [8], Market Intelligence is referred to as a branch of Market research, involving collation and analysis of the available and relevant information and data on specific markets. Market intelligence typically involves the collation of data from various sources such
Figure 2 Market Intelligence areas [7], [13], [14] Market intelligence yields an ongoing and comprehensive understanding of the market. Each of the four knowledge areas [7] - competitor intelligence, product intelligence, market understanding, and customer insight - interacts to form a complete understanding of the market. Each competitors strategies will impact their product actions, the overall trends of market growth and segment interaction will impact the strategies, and underlying all of this, the customers behaviors and attitudes will ultimately drive the market dynamics in terms of growth rates and product acceptance. This integration of all four knowledge areas is ultimately deliverable for market intelligence [15]. 2.3.1 Market intelligence vs. marketing research Marketing research is a critical and significant source of information. However, it does not encompass all the information areas which are covered by Market intelligence. The scope of information covered is one of the key differences between marketing research and market intelligence [16], [25]. When examining the communication feature of the market intelligence pyramid, the most important difference between market intelligence and marketing research is that good market intelligence involves a dialogue between the market intelligence analyst and the client/decision maker. Conversely, marketing research provides an assessment of a specific issue, or measures a specific market dynamic. While it clearly involves communication with the client/decision maker, it typically consists of limited interaction versus the full dialogue of market intelligence [13]. We can now see that Market Intelligence is actually a rather very different discipline from Business Intelligence, and it is actually much closer to a pure market research activity [6], [7], [8].
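As a hedged illustration of how the four knowledge areas might be integrated into a single market view, a minimal sketch follows; the equal weighting and the scores are invented for the example and are not taken from the cited sources.

```python
from dataclasses import dataclass


@dataclass
class MarketIntelligenceView:
    """Illustrative scores (0-10) for the four knowledge areas described above."""
    competitor_intelligence: float
    product_intelligence: float
    market_understanding: float
    customer_insight: float

    def overall_assessment(self) -> float:
        """Equal-weight integration of the four areas (an illustrative choice, not a standard)."""
        parts = (self.competitor_intelligence, self.product_intelligence,
                 self.market_understanding, self.customer_insight)
        return sum(parts) / len(parts)


# Example with hypothetical scores for one market segment
view = MarketIntelligenceView(6.5, 7.0, 5.5, 8.0)
print(f"Integrated market view score: {view.overall_assessment():.1f}")
```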
Although Business intelligence (BI) is widely used by companies these days, its terms and exact definition is often confused and mixed with other types of intelligence (marketing intelligence, market intelligence, and competitive intelligence) an organization looks to gather. It is therefore important for us to put things in order and help in order to better distinguish between these types of intelligence [7], [8], [9]. CIO.com [12] defines BI as ...an umbrella term that refers to a variety of software applications used to analyze an organizations raw data. BI as a discipline is made up of several related activities, including data mining, online analytical processing, querying and reporting. Companies use BI to improve decision making, cut costs and identify new business opportunities. BI is more than just corporate reporting and more than a set of tools to coax data out of enterprise systems. CIOs use BI to identify inefficient business processes that are ripe for re-engineering. On Wikipedia [8], BI is defined as ...computerbased techniques used in identifying, extracting, and analyzing business data, such as sales revenue by products and/or departments, or by associated costs and incomes. BI technologies provide historical, current and predictive views of business operations. Common functions of business intelligence technologies are reporting, online analytical processing, analytics, data mining, process mining, complex event processing, business performance management, benchmarking, text mining, and predictive analytics. The main mission of Business Intelligence is to support better business decision-making and it is often referred to as a decision support system. While BI is sometimes used as a synonym for competitive intelligence, because they both support decision making, BI uses technologies, processes, and applications to analyze mostly internal, structured data and business processes. It goes without saying that both disciplines are important for organizations to utilize [3], [5], [7]. 2.6 Competitive Intelligence (CI) Competitive Intelligence - the action of defining, gathering, analyzing, and distributing information about products, customers, competitors and any aspect of the environment needed to support executives and managers in making strategic decisions for an organization [7], [8], [9], [15], [17]. We like to look at it much more simply: to stay ahead of the competition, you need as much relevant data as possible to make good decisions. Thats where clear competitive intelligence comes in. Being able to easily monitor any information or webpage allows businesses to focus on what they do best - running their business [5], [17]. The term Competitive Intelligence is often viewed as synonymous with Competitor Analysis, but competitive intelligence is more than just analyzing competitors it is about making the organization more competitive relative to its entire environment: customers,
A marketing intelligence system is a set of procedures and sources used by managers to obtain their everyday information about pertinent developments in the environment in which they operate. The marketing intelligence system supplies data about the market [11]. Another definition of marketing intelligence system is that it is a system for capturing the necessary information for business marketing decision making [22]. The fundamental purpose of marketing intelligence is to help marketing managers make decisions they face each day, in their various areas of responsibility. A marketing intelligence system is a set of procedures and data sources used by marketing managers to sift information from the environment, information that they can use in their decision making. [15]. 4.1. Intelligence using Open Source Data More often, companies began using open source data in developing marketing intelligence. The term is defined as the scanning, finding, gathering, exploitation, validation, analysis, and sharing with intelligence-seeking clients of publicly available print and digital/electronic data from unclassified, non-secret, and grey literature sources[17]. Open source intelligence is the most frequently used form intelligence gathering in business enterprises, desirable because it is easy, inexpensive and produces abundant raw material for further processing[15], [17]. Managers have been known to spend several hours a day searching for information, later realizing that much of the information they acquired has little relevance or value toward meeting their needs. Companies typically spend far more time gathering information than they processing, analyzing and exploiting it. This study shows that practitioners would like to reverse this equation, and spend more time processing, analyzing and exploiting data as opposed to just gathering it. 4.2. Collecting Marketing Intelligence on the Internet According to Kotler and Keller, the marketers can research the strength and weaknesses of the competitor's online on five different ways; (a) independent customer goods and service review forums; (b) distributor or sales agent feedback sites; (c) combo sites offering customer reviews and expert opinions; (d) customer complaint sites; (e) public blogs [26]. This scanning of the economic and business environment can be undertaken in a variety of ways, including:
Unfocused scanning: The manager, by virtue of what he/she reads, hears and watches, exposes him/herself to information that may prove useful. Whilst the behavior is unfocused and the manager has no specific purpose in mind, it is not unintentional.
Semi-focused scanning: Again, the manager is not searching for particular pieces of information, but does narrow the range of media that is scanned. For instance, the manager may focus more on economic and business publications, broadcasts etc. and pay less attention to political, scientific or technological media.
Informal search: This describes the situation where a fairly limited and unstructured attempt is made to obtain information for a specific purpose. For example, the marketing manager of a firm considering entering the business of importing frozen fish from a neighboring country may make informal inquiries as to prices and demand levels of frozen and fresh fish. There would be little structure to this search, with the manager making inquiries with traders he/she happens to encounter as well as with other ad hoc contacts in ministries, international aid agencies, trade associations, importers/exporters etc.
Formal search: This is a purposeful search for information carried out in a systematic way. The information will be required to address a specific issue. Whilst this sort of activity may seem to share the characteristics of marketing research, it is carried out by the manager him/herself rather than by a professional researcher. Moreover, the search is likely to be narrow in scope and far less intensive than marketing research.
4.3. Key Elements of a Marketing Intelligence System
4.3.1. Information and data
A continuous flow of information is the lifeblood of a good marketing intelligence system - information about new technologies, markets, customers, the economic and regulatory environment etc. Both formal (routine reporting, factual) and informal information (gossip, opinions) must be tapped [13].
4.3.2. Information Management Processes
With many professionals having external information delivered to their desktops, from online services and increasingly from the Internet, it is easy to believe that users have all the information and data they need on tap. However, this is raw information and it needs to be transformed into intelligence [11]. Before that, however, this information must be classified, stored and made accessible, applying good practice principles of information resources management [13].
4.3.3. Intelligence Development Processes
A good intelligence system is more than information. It is a recurring cycle of linking the needs of decision makers to the processes of turning the information into actionable intelligence [17]. This requires human interpretation, communicating and sharing of information and perspectives between internal and external experts [3].
4.3.4. Computer Systems
A comprehensive Marketing Intelligence System (MkIS) will combine many of the features of decision support systems, online databases and library systems [13]. It is therefore likely to include many of the following: for gathering information - CD-ROMs, online database access, data feeds, email, Internet access, filters, intelligent agents etc.; for storage and retrieval - database/document management facilities, text retrieval, search engines, intelligent agents; for processing and analysis - modeling and visualization software, groupware, group decision support systems.
4.3.4.1. Marketing Decision Support System
The system represents a decision support system for marketing activity. It consists of information technology, mainly based on internet systems, marketing data and modeling capabilities that enable the system to provide predicted outcomes from different scenarios and marketing strategies, thus answering "what if?" questions [7].
4.3.4.2. Internal records (Database)
An electronic collection of information obtained from data sources within the company [25].
4.3.5. An Organizational Focus
Although many professionals do much of their own information gathering and analysis, there still needs to be a clear focal point of the Marketing Intelligence System
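As an illustration of the "what if?" capability described in 4.3.4.1 above, the following minimal sketch projects revenue under alternative price scenarios; the constant-elasticity assumption, the elasticity value and the baseline figures are all hypothetical and are not taken from the paper.

```python
def project_revenue(base_volume: float, base_price: float,
                    new_price: float, price_elasticity: float = -1.5) -> float:
    """Project revenue for a new price under a simple constant-elasticity demand assumption."""
    price_change = (new_price - base_price) / base_price
    new_volume = base_volume * (1 + price_elasticity * price_change)
    return new_volume * new_price


# "What if we raise the price by 5% or cut it by 10%?" (illustrative baseline figures)
baseline_volume, baseline_price = 10_000, 20.0
for scenario_price in (baseline_price * 1.05, baseline_price * 0.90):
    revenue = project_revenue(baseline_volume, baseline_price, scenario_price)
    print(f"Price {scenario_price:.2f} -> projected revenue {revenue:,.0f}")
```

A real marketing decision support system would, of course, draw its demand model and data from the internal records and market research described above rather than from fixed constants.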
- Sometimes, marketing intelligence systems were also called expert systems because they have integrated within their knowledge a series of domain-specific knowledge, at the level of human expertise. Despite the successes in information technology, specialists have failed so far to achieve intelligent computational systems that replicate human intelligence. This did not prevent professionals to continue and even expand the semantic field of the concept of intelligent research. - Marketing Intelligence Systems are intended to support management decision making processes. Management has five distinct functions and each requires support from an MkIS. These are: planning, organizing, coordinating, decision and controlling. - Marketing Intelligence Systems have to be designed to meet the way in which managers tend to work. Research suggests that a manager continually addresses a large variety of tasks and is able to spend relatively brief periods on each of these. Given the nature of the work, managers tend to rely upon information that is timely and verbal (because this can be assimilated quickly), even if this is likely to be less accurate than more formal and complex information systems. - Some enterprises will approach marketing intelligence gathering in a more deliberate fashion and will train its sales force, after-sales personnel and district/area managers in order to take cognizance of competitors' actions, customer complaints and requests and distributor problems. Enterprises with vision will also encourage intermediaries, such as collectors, retailers, traders and other middlemen to be proactive in conveying market intelligence back to them. - Managers play at least three separate roles: interpersonal, informational and decisional. Marketing Intelligence Systems, in electronic form or otherwise, can support these roles in varying degrees. Marketing Intelligence Systems have less to contribute in the case of a manager's informational role than for the other two. - Three levels of decision making can be distinguished from one another: strategic, control (or tactical) and operational. Again, Marketing Intelligence Systems have to support each level. Strategic decisions are characteristically one-off situations. Strategic decisions have implications for changing the structure of an organization and therefore the Marketing Intelligence Systems must provide precise and accurate information. Control decisions deal with broad policy issues and operational decisions concern the management of the organizations marketing mix. 6. REFERENCES
[1] SIMON, H.A., The Sciences of the Artificial (3rd ed.), Cambridge, MA: The MIT Press, 1996 [2] POPA, G., The financial law, circulation of money and inflation in Romania. Publicația Rolul euroregiunilor în dezvoltarea durabilă în contextul crizei mondiale, Editura Tehnopress (acreditat CNCSIS), Iași, 2012. [3] GRIGORUT, C., Marketing, BREN (acreditat CNCSIS), București, 2004.
Ovidius University of Constanta; National Institute of Economic Research "Costin Kiritescu", Bucharest, Romania
ABSTRACT Controlling outlines the business policy of an enterprise; the term derives from the English verb "to control" - to check, manage, set rules and direct. The controller's duty is to serve the management as an economic navigator and to ensure that the company's ship reaches its profit targets. The controller has to be sure that he or she has organizational support from the top management. It has been suggested to establish a controlling department, which can be applied to economic systems and which would be directed towards recognizing and forecasting the future. Nowadays, a modern enterprise can successfully cope with competition and crisis only if it puts efficient controlling processes into practice. The goal of controlling is to recognize and solve problems, or to suggest measures for solving them, in order to avoid such problems in the future. Keywords: controlling, controlling system, controller, operational controlling
1. INTRODUCTION
In a historical perspective, the management appears as an integrative science that has developed due to the contribution of other areas and only to a small extent by its own evolution. One possible explanation may be given by the complexity and dynamics of organizations, these being in constant competition for resources and market shares in an increasingly turbulent economic environment. In this context, controlling was created by an interesting combination of knowledge from the theory of automatic regulation with the pragmatic accounting ones, in an institution. Perhaps this is why there has been created a semantic confusion of the concept, by the tendency to identify its origin in the control function of management [1]. Controlling is not about product features; it is about the correlation between planned activities and the progress of their interpretation, using, as a metric, the company's financial and accounting support. By controlling, we assess the difference between what was planned and what has been achieved and there are highlighted the causes that contributed to this gap and the measures that must be taken in order to reduce or eliminate the difference. Controlling is much more, namely a concept of functional management, with the role to coordinate the planning, control and information in order to achieve the desired results. The controller is the "economic awareness" of the company [1]. 2. THE DEFINITION OF CONTROLLING
Controlling consists of verifying whether everything occurs in conformity with the plans adopted, the instructions issued and the principles established. Controlling ensures that there is effective and efficient utilization of organizational resources so as to achieve the planned goals. Controlling measures the deviation of actual performance from the standard performance, discovers the causes of such deviations and helps in taking corrective actions. According to Brech, "Controlling is a systematic exercise which is called as a process of checking actual performance against the standards or plans with a view to ensure adequate progress and also recording such experience as is gained as a contribution to possible future needs" [1]. According to Donnell, "Just as a navigator continually takes readings to ensure whether he is relative to a planned action, so should a business manager continually take readings to assure himself that his enterprise is on the right course" [2]. 3. WHY CONTROLLING?
Controlling can be considered as an internal management which a general manager and other managers use in making decisions and it gives answers to the following questions [1] Do you know exactly what products record profit and where this profit has to be allocated? Do you learn in advance if everything goes according to the plan or if there is deviation from the plan? Do you know the causes of these deviations from the plan? Do you know how certain actions affect your results? Do you know the results achieved according to the business principles, i.e. without tax adjustment? Can you implement the company's strategy in concrete action plans and results? What leads to indirect cost increase? Do you know what is the best investment alternative?
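A small, hypothetical illustration of how controlling compares actual performance against the plan and flags deviations for investigation; the account names, figures and the 5% tolerance are invented for the example and are not taken from the paper.

```python
# Hypothetical planned vs. actual figures for one period (invented for illustration)
plan = {"revenue": 500_000, "material_cost": 200_000, "indirect_cost": 80_000}
actual = {"revenue": 470_000, "material_cost": 215_000, "indirect_cost": 95_000}

for item, planned in plan.items():
    deviation = actual[item] - planned
    pct = deviation / planned * 100
    flag = "investigate cause" if abs(pct) > 5 else "within tolerance"  # illustrative 5% limit
    print(f"{item:14s} plan={planned:>9,} actual={actual[item]:>9,} "
          f"deviation={deviation:>+9,} ({pct:+.1f}%) -> {flag}")
```

The point of such a comparison is not the report itself but the follow-up: identifying the causes of each flagged deviation and deciding on corrective action, as described above.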
4. THE CONTROLLER WITHIN THE INSTITUTION / COMPANY Basically, the controller has two different tasks of coordination, both in relation to the planning system and to the information system. On the one hand, it deals with structuring and development, and, on the other hand, it deals with daily functioning (permanent coordination). Firstly, the controller supports management, and, secondly, he/she is active at the decentralized level in the sense of the implementation of the idea of controlling among the employees. Its contribution to the planning process and to the information needed leads to the controllers direct subordination to the company's management. The controller transforms himself/herself from a simple service provider in a management consultant [1], [3]. The controllers role within the planning is to coordinate the partial plans and to organize the entire planning process. Therefore, not the controller but the manager is the one who normally plans and coordinates; The controllers role in the information system is to disseminate the information needed, to obtain and to process them (in accounting) and to transmit them in reports. The controlling departments are usually placed in the head office (there is a head office or a team on a managements side) as a controlling department [1]. Using the controlling, the management of an enterprise can effectively fulfil its role, which can be observed through several most important directives such as managing through defining the clear goals - from the top to the bottom and vice versa. 4.1. Controlling system (an example made by Horvath & Partners) Every manager (financial manager and commercial manager) is faced with questions concerning his or her controlling organization and the appropriate resources in the company [1]. Do we have a suitable controlling system? Does management within controlling stick to the dotted line as it should? Do we have too many controllers on board? Do procedures and results in planning and reporting live up to best practice expectations? 4.1.1. The solution
The Controller uses a short analysis based on benchmarks to answer these and other questions on the current state of management accounting and controlling within your company and provides concrete
The activities of the controller consist of five core elements which are run in a tried-and-tested, step-by-step approach [1]:
- Developing a target position for your controlling: The future challenges and goals for your company shape the demands upon corporate management and thus upon your management accounting and controlling. This mid- to long-term target position is first defined with top management at the beginning of the controlling's activities (controlling audit). The audit's findings are then used to firm up the target position and adopt it formally. The future role of your controlling department is anchored in a set of controlling guidelines. Management's understanding of its role and the demands upon it are critical parameters for the future target position of your management accounting and control.
- Analyzing the current state: An evaluation of the current situation within your controlling organization is carried out by means of questionnaire-based interviews and workshops with managers and controllers. The analysis focuses on the structure of the organization, the controlling processes, the methods and approaches applied, the tools and instruments used and the resources. Here, both the way the controller sees the controlling department and the way customers see things are recorded and systematically evaluated.
- Reflecting the results of the current-state analysis using benchmarks (positioning): Underlying both the quantitative and the qualitative benchmarks is the most extensive and comprehensive controlling database. Using a standardized process model for all relevant controlling activities, the highest possible degree of comparability is both guaranteed and achieved. The results of the evaluation are laid down concisely in a detailed report.
- Deriving the need for action (gap analysis) using best practices: In order to make this possible, the controller will identify and analyze the gap between the target position of your controlling and its current state, as sketched in the example below. What is decisive here is the integration of the various optimization measures into an overall controlling concept.
- Drawing up the implementation plan (realization): The concrete implementation planning in the last stage of the controlling system ensures rapid transition from analysis and concept to realizing the optimization measures.
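A minimal sketch of the gap-analysis step described above, comparing an assumed target position with an assumed current state on a simple 1-5 maturity scale; the dimension names and scores are hypothetical and are not taken from the Horvath & Partners model.

```python
# Hypothetical maturity scores (1 = ad hoc, 5 = best practice)
target_state = {"planning": 4, "reporting": 5, "cost_transparency": 4, "tools": 4}
current_state = {"planning": 2, "reporting": 3, "cost_transparency": 2, "tools": 3}

# Derive the need for action: largest gaps first
gaps = {dim: target_state[dim] - current_state[dim] for dim in target_state}
for dim, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    action = "priority measure" if gap >= 2 else "minor adjustment"
    print(f"{dim:18s} current={current_state[dim]} target={target_state[dim]} "
          f"gap={gap} -> {action}")
```

In practice the scores would come from the questionnaire-based interviews and benchmark database mentioned above, and the resulting measures would be integrated into the overall controlling concept.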
The controller shows you very quickly the current state of management accounting and controlling in your company in comparison with industry benchmarks and provides you with concrete recommendations on how to improve performance and achieve a defined target position for your controlling. With the help of the initial implementation plan you can get down to realization very quickly.
4.2. Building a Controlling system
A controlling system is typical of modern businesses. In addition to central controlling (company controlling), which takes on general controlling tasks for the various divisions (departments) and functions as a transversal function, there are decentralized controlling departments for each activity, sector and branch. Significant elements of an effective controlling system specific to companies in this field are:
- the overall controlling of the company: financial controlling; strategy planning and organization; investment controlling; coordination of subsidiaries and agencies; the general management of clients;
- development controlling: the growing importance of this function, which developed both in terms of costs and strategically as a central factor for business success, increased the interest in presenting this field (shipping) more transparently and economically. Development controlling regularly faces a lack of activity standardization, which has a series of implications for planning and control. Similarly to development, delivering projects is more and more presented as a success factor, which justifies the existence of project controlling;
- staff controlling: flexible planning of staff involvement and productivity measurement based on scanner data - "qualification controlling";
- logistics controlling: trying to replicate all the functions of Porter's value chain, emphasizing the issues related to procurement and distribution.
Which is the most important factor that generates the need to implement and improve Controlling in Romanian companies?
- As the business environment becomes more competitive, Controlling can help maintain an overview
- As companies expand and diversify their activity, the management needs reliable data about the profitability of products/services
- The need to improve the performance, planning and reporting instruments accepted in the entire company
- Revenues fall and costs grow; Controlling should provide analysis and support in taking the right measures for lowering costs
- The main competitors are already using Controlling instruments
When asked "Which are the major challenges in creating a Controlling department and adopting different controlling tools?", the majority admitted that the Romanian market lacks know-how and experts in this field. This is not surprising, since no university in Romania offers a specialization in Controlling, or even a controlling course. Many managers still perceive Controlling as Accounting or Audit. Horvath & Partners stated in 2011 [3] that, in today's competitive business environment, the informational role of Accounting is no longer sufficient. The decisions of a manager must be based on information about the future, on analysis and anticipation of future problems. This applies to an even greater extent to Romanian managers, who often make decisions without a solid basis, guided by current opportunities or acting on instinct. The message is clear: 49% of the managers attending training courses [4] recognized what Controlling is supposed to help their organization with - being a business partner that supports the decisions of management. Under these conditions, the market is seeking training possibilities in order to adopt state-of-the-art controlling practices. This is the reason why the specialists in Controlling, unlike in the initial plan, have decided to offer Romanian companies different solutions and best-practice examples.
Which are the major challenges in creating a Controlling department and adopting modern Controlling instruments?
- Controlling is not needed, as revenues and growth rates are (still) high
- Clear sharing of responsibilities (Financial Accounting and decisions about the business)
- Availability of / access to Controlling expertise and know-how on the Romanian market
- Availability of professional training
- Inaccurate interpretation of Controlling as Internal Audit or as a function of Control
Figure 2 Tasks for creating a Controlling department
Besides the search for know-how and good Controlling practices, leading Romanian companies are actively developing their Controlling departments and processes. This is seen as an appropriate strategy to improve management control and the transparency of results, performance and costs.
As a conclusion, we can state that companies in Romania have realized that, in order to grow profitably in a setting of growing complexity of operations, higher investments and higher risks, they are in great need of coordinating their business units with each other and with the external social and economic environment - the main task of Controlling. Along this tricky road, the experts in Controlling offer them individualized, innovative solutions and help them build a performance-oriented Controlling department more easily.
7. REFERENCES
[1] HORVATH & PARTNERS, Controlling. Sisteme eficiente de creștere a performanțelor firmei, Ed. a II-a, Editura C. H. Beck, București, 2009;
[2] Controlling System activities, available online at http://www.managementstudyguide.com/controlling_function.htm, accessed 03.12.2012;
[3] PEROVIC, V., NERANDZIC, B., TODOROVIC, A., Controlling as a useful management instrument in crisis times, African Journal of Business Management, Vol. 6 (6), 15 February 2012, pp. 2101-2106, available online at http://www.academicjournals.org/AJBM;
[4] HORVATH & PARTNERS Focus, Romania, available online at http://www.controlingakademie.de;
[5] POPA, G., IMF role in the context of the Romanian financial law, OVIDIUS University Annals, Economic Sciences Series, 2012;
[6] POPA, G., The financial law, circulation of money and the inflation in Romania, in publicația Rolul euroregiunilor în dezvoltarea durabilă în contextul crizei mondiale, Editura Tehnopress, Iași, 2012.
While self-esteem is important for reputation, the main goal of reputation is not for the organization to be liked by others, but to set it apart from competitors. Reputation precedes faith. Values are the basis of reputation, since they determine organizational decisions. Reputation may be the most important asset entrusted to the top management of the institution. As an intangible asset, it can help define and meet the needs, interests and expectations of collaborators and the public, being a differentiating factor in the competition. It is an asset which can hardly be restored, as it is based on perceptions and expectations (confirmed or not). Keywords: Reputation, Mission, Visual brand identity, High visibility, Vision, Values, Competitive advantage.
1. INTRODUCTION
Reputation, as a dimension of management, is a state of competitiveness achieved through a high level of efficiency and productivity, which ensures a sustainable market presence, given the complex interaction of many factors [1]. The reputation philosophy implies the existence of a relationship of mutual appreciation between managers and performers; it has to start from mutual trust and develop through communication and participation in the management process [1], [2]. The reputation philosophy relies on the skills the manager is endowed with: customer focus; continuous product renewal; renewal of the methods used and of the organizational structures; stimulating a sense of pride among the employees, arising from their participation in its achievement [1], [3].
2. DEFINING REPUTATION
When trying to define reputation, we find that, in the last century, this notion has undergone significant changes: (1) until the twentieth century, reputation was synonymous with "honour", respect, fame, "dignity", "merit", celebrity, "value", and it was used in relation to individuals; (2) afterwards, this notion became more widely used, extending to institutions; (3) in recent years, reputation has been treated as synonymous with "social responsibility" [1].
- In Micul dicționar enciclopedic, reputation means a favourable or unfavourable public opinion about someone or something; the way one is known or appreciated [1].
- In Petit Larousse illustré, reputation is defined as a favourable or unfavourable public opinion, with meanings oriented towards having a good reputation, esteem, or compromising one's reputation [1].
- In its ordinary meaning [1], "reputation is a social evaluation obtained by someone and which is the general opinion about the qualities, merits and deficiencies of any particular individual" [6], [8].
In classic marketing, reputation and visibility are defined through the following principles [4], [17], [18], [19]:
- Be obsessed with your product and service: nothing comes close to superior product quality in influencing the way people feel about your organisation.
- Deserve confidence: lead from the front and engender trust from employees and customers.
- Be available: build relationships with customers, employees and suppliers.
- Admit mistakes: if mistakes are made, admit them and respond rapidly.
- Engage people's interest: get all staff involved.
- Have something to say: top management of the company can use their own and the business's personality to communicate with impact and colour.
Marketers focus on identity, image (brand image) and awareness, which is what Walter Lippmann (1922) called "mind images" [10], [19].
3.1. Brand, identity and reputation
"The company's identity (...) is the sum of all those visual clues by which the public recognizes the company and distinguishes it from other companies" [10], [11]. The company's identity varies depending on the circumstances, the institution's policies and audiences, and on the public categories which the company wishes to address. These three terms are sometimes used interchangeably: brand and identity; visual brand identity - image and reputation [4], [5], [7].
3.1.1. Brand identity - a step in building reputation
The outward expression of a brand - including its name, trademark, communications and visual appearance - is brand identity. Because the identity is assembled by the brand owner, it reflects how the owner wants the consumer to perceive the brand and, by extension, the branded company, organization, product or service. This is in contrast to the brand image, which is a
The recognition and perception of a brand is highly influenced by its visual presentation. A brand's visual identity is the overall look of its communications. Effective visual brand identity is achieved by the consistent use of particular visual elements to create distinction, such as specific fonts, colors and graphic elements. At the core of every brand identity is a brand mark, or logo [5].
a. Visibility vs. high visibility
Being visible means that an individual has generated a high level of awareness in the market segment that he or she serves. Many people in that segment have heard of the person and may even know the person [14], [17]. The person's visibility might stem from other characteristics, such as notoriety, outstanding physical or mental features, and so on. For many individuals and organizations, visibility can make the difference between success (n.a. reputation) and unrealized potential. To achieve the level of success a personal or corporate brand is capable of, visibility is a core requisite. For this reason, organizations and individuals should think as seriously about their brand strategy as they do about their product or service strategy [16]. A highly visible individual is one that has achieved memory lock, or long-term recall, becoming one of the top two or three individuals people remember. However, a highly visible person does not necessarily need to achieve memory lock across many sectors [17]. In organizations, highly visible people can serve as role models and an inspiration for the employees of the organization. They can also serve to crystallize the product, as the person becomes an extension of it.
b. Brand loyalty
Brand loyalty, in marketing, consists of a consumer's commitment to repurchase or otherwise continue using the brand and can be demonstrated by the repeated buying of a product or service, or by other positive behaviors such as word-of-mouth advocacy [16].
3.2. Image vs. reputation
This image or set of images thus contributes to the reputation of the organisation. Roger Muchielli defines the image as a representation or idea which is formed by the individuals
Kotler defines the mission in more sustainable terms, i.e. "the rationale of the company"; it reflects the fundamental purpose of the existence of this entity. The company's mission is the element that cannot be changed. The company's operations and sphere of activity are flexible, but they must remain aligned with this central line [18].
3.3.2. Vision
The vision can be defined as an image of the favourable future state which the company wants to reach. It clearly shows what the company aspires to become and accomplish. In order to define all these things, a company must create a mental picture of the future, given the corporate mission [18].
3.3.3. Values - the base of reputation
The values express (in words) a set of corporate priorities and the managerial efforts towards their inclusion in business practices, with the hope that they will reinforce the behaviours that bring benefits to the company and to the communities inside or outside it, which further strengthens the values of the institution [13], [16]. On the other hand, the values can be considered "institutional standards of behaviour of a company" [17]. In Leoncioni's perspective, there are four different types of corporate values:
Therefore, reputation clearly defines a unique identity and consolidates it with authentic integrity, in order to build a strong image for the public. The company's identity is the institution's calling card to the public. It shows how the organization is structured and what its values and nature are. The company's identity and image are fundamental elements of the organizational strategy and of the successful implementation of efficient reputation management. A modest reputation remains at that level for a long time before breaking through to a high level of visibility.
7. REFERENCES
[1] PETRESCU, I., Managementul Reputației, Editura EXPERT (acreditat CNCSIS), București, 2007;
[2] http://www.dictionare-online.ro/, accessed 05.12.2012;
[3] CHIROVEANU, A., MNCIU, M., NICOLESCU, N., RDULESCU, Gh., UTEU, V., Micul Dicționar Enciclopedic, Editura Științifică și Enciclopedică, București, 1978, p. 825;
[4] SOWELL, Thomas, A Conflict of Visions, New York, 1987, p. 32;
[5] MUCHIELLI, R., Psychologie de la publicité et de la propagande, Librairies Techniques, Paris, 1970, p. 110;
[6] FLORESCU, C., MLCOMETE, P., POP, N., Marketing. Dicționar explicativ, Editura Economică, București, 2003;
[7] http://en.wikipedia.org/, accessed on 26.11.2012;
[8] ARGENTI, P.A., DRUCKENMILLER, B., Reputation and the corporate brand, Corporate Reputation Review, 6(4), 2004, pp. 368-374;
[9] POPA, G., Globalization and Money Laundering, International Conference on European Integration, Realities and Perspectives (EIRP), 2012.
1. INTRODUCTION
Leadership is a process that is not specifically a function of the person in charge. Leadership is a function of individual wills and individual needs, and the result of the dynamics of a collective will organized to meet those various needs. Second, leadership is a process of adaptation and evolution. Leadership and management are processes of dynamic exchange and interchanges of value. Leadership is deviation from convention and a process of energy, not structure. This is the way in which leadership differs from management: managers ensure stability, while leadership is all about change ([1], Barker, 2001, p. 491). Leadership and change go hand in hand. They are also two of the most contentious and problematic elements of organizational life, with much debate and controversy over what constitutes leadership and what the benefits of change are ([2], Beer and Nohria, 2000). As Burnes ([3], 2009; [4], 2012) points out, implementing change in organizations is not a trivial matter; only around 30% of all change initiatives are successful ([5] Bessant and Haywood, 1985; [6] Crosby, 1979; [7] Hammer and Champy, 1993). Entities, whether they be individuals, managers, teams or organizations, which do not adapt to change in a timely manner are unlikely to survive. Fortune magazine first published its list of America's top 500 companies in 1956. Sadly, only twenty-nine companies from the top 100 on the original list remain today. The other seventy-one have disappeared through dissolution, merger or downsizing. Survival, even for the most successful companies, cannot be taken for granted. Giants such as General Motors, Ford and Chrysler know that, to survive, they must adapt to accelerating and increasingly complex environmental dynamics. Today's norms of change bring problems, challenges and opportunities.
Those individuals, managers and organizations that recognize the inevitability of change, learn to adapt to it and attempt to manage it will be the most successful. Change is the coping process of moving from a present state to a more desired state in response to dynamic internal and external factors. Essentially, change means that we have to do things differently in the future. In general, most people dislike change because of the uncertainty between what is and what might be. To successfully implement changes, managers need to possess the skills to convince others of the need for change, identify gaps between the current situation and desired conditions, create visions of desirable outcomes, design appropriate interventions and implement them so that the desired outcomes are obtained. There are two major types of change. The first is unplanned change, which is forced on an organization by the external environment. This type of change occurs and is dealt with as it happens, through emergency measures - a practice often called fire-fighting. Sources of unplanned change include technology, economic conditions, global competition, world politics, social and demographic changes and internal challenges. The other type of change is planned. It results from deliberate attempts by managers to improve organizational operations. One example is total quality management, with its focus on continuous process improvement. Short-run programs of implementing changes usually have unintended dysfunctional effects on participant satisfaction and on the long-term goals of the organization. Change programs that are aimed at improving long-term effectiveness, efficiency and participant well-being are usually more successful. Lewin's experiment established the three phases of planned change: unfreeze, change and refreeze ([8], Lewin, 1951). In the first phase, a manager needs to help people accept that change is needed because the existing
2.1. Leaders' Identity Work. Transformational leadership
The ways individuals craft, uphold and revise their identities - captured within the definition of identity work - have gathered much attention from organizational scholars. Researchers have elucidated how individuals shape their self-conceptions within social interactions in order to transition into or sustain a desired role ([10] Ibarra, 1999; [11] Kreiner, Hollensbe & Sheep, 2006; [12] Pratt, Rockmann & Kaufmann, 2006; [13] Thornborrow & Brown, 2009, apud [14] Petriglieri, 2012) or to avoid the taint associated with a stigmatized one ([15] Ashfort & Kreiner, 1999; [16] Snow & Anderson, 1987). These studies have deemed identity work successful if individuals manage to craft identities that sustain their self-esteem and grant them social validation in their roles, and have provided the foundations for an emergent stream of organization studies concerned with the identity dynamics underpinning the emergence and exercise of leadership ([17] Day & Harrison, 2007; [18] DeRue & Ashford, 2010; [19] Ibarra, Snook & Guilen Ramo, 2010; [20] Lord & Hall, 2005). Leadership is not synonymous with occupying a position of formal authority or enacting requisite styles, and this stream endeavors to account for the interaction of intra-psychic and social dynamics in the making, and demise, of leaders ([21] DeRue & Ashford, 2010). Leaders are more effective when their message is deeply personal and yet touches shared concerns ([22] Petriglieri, 2011, p. 6). Other authors describe leadership performance in terms of transformational attributes (considering contingencies to determine the best interventions). The best strategy for changing a given situation depends on various contingencies. Key factors to consider include time; importance; anticipated resistance; power situations; ability; knowledge and resources required; and the source of relevant data. If a change needs to be made quickly and it is not critically important, and if resistance is not anticipated, using direct authority may be appropriate. However, if the change is important, resistance is anticipated, and the power of position of
The compulsive personality reaches the top of the career ladder faster. Although from management's point of view they are considered masters, the effects on the organization are toxic. Compulsive managers fear that they might find themselves in certain forms of dependency on people or events. Their main preoccupation is to dominate or to be in control. Interpersonal relationships are interpreted in terms of domination and obedience. In areas where they are in charge, they insist that the lower-ranking personnel accomplish their objectives without any objections. The compulsives have a sense of perfectionism which hinders them from seeing the big picture ([24] Manfred Kets de Vries, 181:2003). They are preoccupied with details, establishing norms, regulations and procedures corresponding to fairly easy tasks. They prefer routine; they are not able to stray from the planned activities or from their environment and are inflexible to change. They lack creativity and spontaneity; form is more important than essence, and the needs for affiliation and relating do not manifest themselves. The usual characteristics of this manner of leadership are thoroughness, dogmatism and stubbornness, which translate into an excessive preoccupation with the rational-legal climate, with management's organizing function and with efficiency. Managing decisions is a very difficult task; important decisions are often delayed, due to the fear that the actions might not be as efficient as expected. The compulsive manager is a workaholic; the perception created is one of hard work, devotion and involvement, leading to the neglect of interpersonal relationships. The organizational culture created by this type of leadership is resistant to change and promotes an untrustworthy environment. In order to implement the coordinating function of management, the leader will prefer formal mechanisms of control over people's willingness, and demand over determination. The decisional process is accompanied by persuasion, beyond the limit of manipulation and suspiciousness. The leader's preoccupation with control will limit the employees' freedom of action, in a conflicting manner. Intrinsic motivations do not work, the organization is not decentralized and the human resources structure is divided between the decisional players (executive managerial level) and the laborers (operational level). The bureaucratic culture of
Figure 1 Biological gender
This relative imbalance is explained by the number of headship positions in comparison with the executor positions within the organizational ensemble. After identifying the issues, we proceeded to code the presence or absence of each of the five issues for every investigated subject. The presence of an issue was coded with a value of 1, whereas its absence has a value of 0.
4. CONCLUSIONS
With regard to the active period within the institution, the interviewed subjects have an average of 9.26 years, a median of 10 years and a standard deviation of 3.09 years. The most frequent category is that of 11 years, the active period of the employees ranging between 4 and 14 years. Based on the median value ([33] Schermerhon, 2002, pp. 349-356), we can consider the subjects with less than 10 years as relatively new within the institution, and the ones with over 10 years as older. From the aforementioned data it results that the distribution of the 80 interviewed subjects with regard to their active time within the institution is symmetrical (Skewness = -0.21; Skewness standard deviation = 0.26) and mesokurtic (Kurtosis = -1.06; Kurtosis
A number of 53 interviewees (66.25%) mentioned in their protocols that the union leaders are weak negotiators, whereas 27 people (33.75%) did not touch on this issue. The personal interest issue was identified in the case of 50 subjects (62.5%), whereas 30 subjects (37.5%) did not raise it in their answers. Political co-involvement was identified as an issue by 60 people (75%), whereas 20 subjects (25%) did not perceive this issue with regard to the union leaders.
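The descriptive statistics reported above and the group comparisons in Tables 4-8 rely on standard routines. The sketch below, which uses invented placeholder data rather than the authors' interview protocols, only illustrates the type of computation involved: mean, median, skewness and excess kurtosis of tenure, followed by a Mann-Whitney U test on a median split by seniority.

```python
# Minimal sketch of the descriptive statistics and Mann-Whitney U comparison.
# The tenure and issue codings below are illustrative placeholders, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tenure = rng.integers(4, 15, size=80)          # years in the institution (4..14)
issue_present = rng.integers(0, 2, size=80)    # 1 = issue coded present, 0 = absent

print("mean  :", tenure.mean())
print("median:", np.median(tenure))
print("std   :", tenure.std(ddof=1))
print("skewness       :", stats.skew(tenure))
print("excess kurtosis:", stats.kurtosis(tenure))   # about 0 would be mesokurtic

# Median split: 'newer' (< 10 years) vs 'older' (>= 10 years) employees,
# then compare how often each group raises a given issue (as in Table 8).
newer = issue_present[tenure < 10]
older = issue_present[tenure >= 10]
u, p = stats.mannwhitneyu(newer, older, alternative="two-sided")
print("Mann-Whitney U =", u, " p =", p)
```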
Table 4 Absence of the negotiation attributes of leadership
                      Frequency   Percent   Valid percent   Cumulative percent
Absence of issue      27          33.75     33.75           33.75
Presence of issue     53          66.25     66.25           100.0
Total                 80          100.0     100.0
Table 5 Individual interest
                      Frequency   Percent   Valid percent   Cumulative percent
Absence of issue      30          37.5      37.5            37.5
Presence of issue     50          62.5      62.5            100.0
Total                 80          100.0     100.0
Table 6 Political implication
                      Frequency   Percent   Valid percent   Cumulative percent
Absence of issue      20          25.0      25.0            25.0
Presence of issue     60          75.0      75.0            100.0
Total                 80          100.0     100.0
Table 7 Generates conflicts in working groups
                      Frequency   Percent   Valid percent   Cumulative percent
Absence of issue      25          31.25     31.25           31.25
Presence of issue     55          68.75     68.75           100.0
Total                 80          100.0     100.0
Table 8 Test statistics (grouping variable: Seniority)
                         Negotiation   Individual   Political       Mass media      Managing
                         process       interest     determination   relationships   conflicts
Mann-Whitney U           603.000       552.000      722.000         598.000         637.000
Wilcoxon W               1684.000      1633.000     1803.000        1679.000        1718.000
Z                        -2.127        -2.669       -0.779          -2.110          -1.758
Asymp. Sig. (2-tailed)   0.033         0.008        0.436           0.035           0.079
5. REFERENCES
[1] Barker, R.A., The nature of leadership, Human Relations, 54(4), pp. 469-494, 2001.
[2] Beer, M., & Nohria, N., Breaking the code of change, Boston, MA: Harvard Business School Press, 2000.
[3] Burnes, B., Managing change, 5th Edition, London, FT/Prentice Hall, 2009.
[4] Burnes, B., By, R.T., Leadership and change: The case for greater ethical clarity, Springer Science+Business Media B.V., Journal of Business Ethics, pp. 240-249, DOI 10.1007/s10551-011-1088-2, 2011.
[5] Bessant, J., & Haywood, B., The introduction of flexible manufacturing systems as an example of computer integrated manufacture, Brighton Edition, 1985.
[6] Crosby, P.B., Quality is free, New York: McGraw-Hill, 1979.
[7] Hammer, M., & Champy, J., Re-engineering the corporation, London: Nicolas Brealey, 1993.
[8] Lewin, K., Field Theory in Social Sciences, New York, Harper & Row, 1951.
[9] Grigoru C., Anechitoae C., Creativity and Innovation - Supporting the Employability of Graduates on the Labor Market, UNISO 2010. UNIversity in SOciety, 9th Edition, Lifelong Learning - Support for Economic Growth, 07-11 July 2010, Timișoara, pp. 111-117, http://media.unibuc.ro/stiri-din-universitate/cea-de-a-ixa-editie-a-universitatii-de-vara-universitatea-in-societateuniso-2010
[10] Ibarra, Herminia, Provisional selves: Experimenting with image and identity in professional adaptation, Administrative Science Quarterly, 44, pp. 764-791, 1999.
[11] Kreiner, Glen E., Hollensbe, Elaine C., & Sheep, Mathew L., Where is the me among the we? Identity work and the search for optimal balance, Academy of Management Journal, 49, pp. 1031-1057, 2006.
[12] Pratt, Michael G., Rockmann, Kevin W., & Kaufmann, Jefrey B., Constructing professional identity: the role of work and identity learning cycles in the
1. INTRODUCTION
The Black Sea is one of the most remarkable regional seas in the world. It is almost cut off from the rest of the world's oceans, is over 2200 m deep and receives the drainage from a 1.9 million km2 basin covering about one third of the area of continental Europe. Its only connection is through the Bosphorus Strait, a 35 km natural channel, as little as 40 m deep in places. This channel has a two-layer flow, carrying about 300 km3 of seawater to the Black Sea from the Mediterranean along the bottom layer and returning a mixture of seawater and freshwater with twice this volume in the upper layer. Every year, about 350 km3 of river water enters the Black Sea from an area covering almost a third of continental Europe and including significant areas of seventeen countries: Austria, Belarus, Bosnia and Herzegovina, Bulgaria, Croatia, Czech Republic, Georgia, Germany, Hungary, Moldova, Slovakia, Slovenia, Romania, Russia, Turkey, Ukraine and Serbia. Europe's second, third and fourth largest rivers (the Danube, Dnipro and Don) all flow to the Black Sea. Isolation from the flushing effects of the open ocean, coupled with its huge catchment, has made the Black Sea particularly susceptible to eutrophication (the phenomenon that results from an over-enrichment of the sea by plant nutrients). Eutrophication has led to radical changes in the Black Sea ecosystem in the past three decades, with a major transboundary impact on biological diversity and human use of the sea, including fisheries and recreation. Prior to the 1990s, little or no action had been taken to protect the Black Sea. Political differences during the Soviet era, coupled with a lack of general knowledge of the environmental situation, resulted in an absence of effective response. In
1992 the Black Sea countries signed the Bucharest Convention, followed closely by the first Black Sea Ministerial Declaration (the Odessa Declaration) in 1993.
2. GEOGRAPHIC BOUNDARIES. BATHYMETRY
The Black Sea is an inland Eurasian sea bordering Ukraine and the Russian Federation to the north, Bulgaria and Romania to the west, Georgia to the east and Turkey to the south (figure 1). The Black Sea is located between latitudes 40°56'N and 46°33'N, and longitudes 27°27'E and 41°42'E. It is located in the east-west depression between two alpine fold belts, the Pontic Mountains to the south and the Caucasus Mountains to the northeast. The topography of the north-western coast (except for Crimea) is relatively low and flat [8]. The Black Sea is a semi-enclosed sea connected to the shallow (10-20 m) Azov Sea through the Kerch Straits and to the Mediterranean Sea through the Bosporus Straits, the Marmara Sea and the Dardanelles Straits. The flat abyssal plain (20% of the surface area, depth over 2000 m) rises to the continental shelves. The northwestern shelf (mean depth 50 m) has a shelf break at about 100 m between the Crimean peninsula and Varna in the south. The Danube and the Kerch fans are gentle continental slopes. The other portions of the shelf are narrow (20 km), fractured by canyons, abrupt ridge extensions and steep continental slopes. The only connection to other marine water bodies is through the winding Istanbul (Bosporus) Straits, a 35 km natural channel, as little as 40 m deep in places [10].
Figure 1 Geographical boundaries in the Black Sea Region
The Black Sea is up to 2212 metres deep (north of İnebolu) and receives the drainage from a 1.9 million km2 basin, covering about one third of the area of continental Europe. The Bosporus has a two-layer flow, carrying about 300 km3 of seawater to the Black Sea from the Mediterranean along the bottom layer and
Figure 2 Black Sea bathymetry. Source: *** Black Sea Transboundary Diagnostic Analysis, May 2007
3. COASTLINE CHARACTERISTICS
The continental shelf covers 24.1% of the Black Sea surface area and has a 0.5-5° slope. This area generally extends from the shoreline down to about 90 m depth. The continental shelf is very important for fishing, although it is quite narrow along the Anatolian and Caucasus coasts. The length of the national Black Sea coastlines is presented in table 1, Ukraine having the longest coast and Romania the shortest [9].
The length of the Black Sea shoreline is approximately 4340 km (table 1). The Black Sea has geological properties similar to those of the major oceans and is classified geomorphologically into three key sections, namely:
- the continental shelf;
- the continental slope;
- the abrasion platform.
Social and economic changes within the Black Sea Basin both impact the ecosystem and are impacted by many of the environmental changes that have been brought about during the last century. The historical socio-economic conditions of the Black Sea have largely shaped practices that continue to date. The shift from the Soviet economic system to a more free-market system in the Warsaw Pact states, the movement towards EU accession of some countries and the economic fluctuations of the 1990s have influenced the ecosystem of the Black Sea. The Black Sea countries' coastal zones are estimated to contain about 20 million people. However, the situation with regard to Istanbul is confusing, since the coastal administrative unit which includes Istanbul has a short Black Sea coastline. Thus, if the population of this area is also included, the value increases to over 40 million people. The proportion of national populations living within Black Sea coastal administrative areas varies widely: 0.6% in Russia, 4.5% in Romania, 10.5% in Turkey (excluding Istanbul), 14.4% in Ukraine, 26.5% in Bulgaria, 37.1% in Turkey (including Istanbul) and 38.6% in Georgia. Available data suggest that the proportions of populations living in coastal administrative areas which are connected to sewerage systems range from about 53% in Russia, through 70% in Turkey (excluding Istanbul), to over 90% in Bulgaria, Georgia and Romania. However, intuitively these values do appear to be on the high side, and bear no relationship to the level of treatment that is applied to the wastewater produced. A coastal population of some 7 million inhabitants is connected to sewerage systems discharging directly into the sea.
5. LEGISLATION FOR TRANSBOUNDARY COOPERATION
Since the beginning of the 1990s, the countries of the region, with financial assistance from the international community, have started to co-operate in order to promote the sustainable use of transboundary water resources. The 1992 Bucharest Convention and its Protocols, the 1993 Odessa Declaration and the 1996 Black Sea Strategic Action Programme for the Protection of the Black Sea against Pollution provided
The Black Sea biota reflects the general historical processes that have influenced the ecosystem of the sea. The main biotopes are sandy-bottom shallow-water areas, especially in the north-western part of the Black Sea and the Sea of Azov. The coasts of the southern Crimea, the Caucasus, Anatolia, some capes in the south-western part of the Black Sea (Kaliakra, Emine, Maslen Nos, Galata) and Zmeiny Island are mostly rocky. The sea beds are mostly mud in the zone between 10-20 m and 150-200 m depth. The total area of Black Sea coastal wetlands is about 10 000 km2. These are sites of reproduction, feeding and wintering grounds of many rare and commercially valuable fish species, including the sturgeon family, and are therefore biotopes of special importance. Anoxic conditions occurring below about 120-200 m depth delimit the vertical distribution of planktonic and nektonic organisms, as well as of bottom-living organisms. The structure of marine ecosystems differs from that of the neighbouring Mediterranean Sea in that species variety is lower and the dominant groups are different. However, the abundance, total biomass and productivity of the Black Sea are much higher than in the Mediterranean Sea [1].
Figure 3 Physiography of the Black Sea biogeographical region. Source: *** European Environment Agency, Europe's biodiversity - biogeographical regions and seas, Biogeographical regions in Europe, The Black Sea Region - shores and delta, ZooBoTech HB, Sweden, Linus Svenson (final edition), 2010
7. PROTECTED AREAS
The statistics show that the largest marine protected areas (MPAs) are designated by Ukraine, while protected wetlands and coastal terrestrial areas are the largest in Romania. Romania leads in terms of protected marine area per unit shoreline, followed by Ukraine and Georgia. In Bulgaria, the coverage of MPAs is clearly insufficient. Turkey has no designated MPAs and the least coverage of coastal protected areas compared with the other Black Sea countries, albeit that Russian data were not provided.
The Black Sea community has a global responsibility to preserve the character of its varied ecosystems and landscapes, and to conserve the migratory species that cross the region and the threatened species that it hosts. Measures taken to conserve or restore habitats and species in the Black Sea entail the establishment of protected areas as a major approach to in situ biodiversity conservation. The total surface of Black Sea marine and coastal protected areas by country is given in table 2.
Table 2 Total surface of Black Sea marine and coastal protected areas by country and marine protected areas (MPA) per unit shoreline
Country              Marine (ha)   Coastal wetlands (ha)   Coastal terrestrial (ha)   Total (ha)   Shoreline length (km)   MPA (ha) / Shoreline (km)
Bulgaria             1160          16902.23                115589.9                   133652.13    300                     4
Georgia              15742         0                       28571                      44313        310                     51
Romania              21000         339336.98               226008                     586344.98    225                     93
Russian Federation   -             -                       -                          -            475                     -
Turkey               0             31335                   3000                       34335        1 400                   0
Ukraine              123530.7      92497.7                 68658                      284686.4     1 628                   76
Total                161432.7      480071.9                441826.9                   1083331.5    4 338                   -
Source: *** Black Sea Transboundary Diagnostic Analysis, May 2007
The majority of protected marine and coastal areas (93%) were declared during the 1990s, which is indicative of significant recent progress in in situ conservation of biodiversity in the Black Sea region.
Romania ranks first (56%) regarding the surface of protected areas designated during the 1990s, followed by Ukraine (22%), Bulgaria (10%) and Georgia (4%), while
As a general conclusion, I can say that we depend on the seas for our survival and yet the marine environment is deteriorating fast. This requires better ways of managing it. The protection of the marine environment is the responsibility of everyone. We must be conscious of the pollution threats to our waterways and oceans and of the serious effects that may result. Biodiversity in the Black Sea region is highly threatened. Many rare species of plants are likely to become extinct in the near future unless the countries in the region take conservation action.
ABSTRACT We can talk about price stability when neither inflation nor deflation phenomena are observed. Thus the European Central Bank defines price stability as an increase of up to 2% per annum in the harmonized index of consumer prices. The lowest inflation rates were recorded in countries such as Greece and Sweden, and the highest in Hungary, followed by Romania, according to recent data provided in November 2012 by the European Central Bank. Overall, most countries faced low and relatively stable levels of inflation, explained by the fact that, although the individual prices of products in some sectors saw a substantial increase, overall these were compensated by price reductions in other sectors, finally reaching a relatively stable general price level. Keywords: stability of prices, HICP, deflation, inflation.
1. INTRODUCTION
As defined by the European Central Bank, price stability represents an increase of up to 2% per annum in the harmonized index of consumer prices (HICP) over the medium term. Therefore we can talk about price stability in the absence of deflation or inflation. These two notions are economic phenomena that negatively affect the economy of any state, characterized by a generalized decrease, respectively increase, in prices over the long term, affecting the purchasing power of money. Given the major influence of inflation on price stability, its measurement is an essential element: inflation exists when prices are rising, especially taking into account the dynamics of prices in a market economy. Inflation for the euro area is determined by the Harmonized Index of Consumer Prices (HICP), an indicator used to determine changes in consumer prices over time. Through it, data from different countries can be compared; this is suggested by the name "harmonized", indicating that all Member States apply the same methodology. The introduction of this indicator was essential considering that, in the past, before the euro became the common currency, inflation was determined by each country using its own methods and techniques, frequently causing major variations and thus limiting comparisons between countries.
2. ANALYZING THE INFLATION IN THE EUROPEAN UNION AND ROMANIA
Determining the HICP is critical to maintaining price stability, increasing economic welfare, supporting global economic growth and creating new jobs. It is therefore necessary to comply with a set of legally binding rules and to complete certain steps by which the variation of the prices of consumer goods and services is determined, both at country level and for the entire euro area:
- Collecting monthly prices of different goods and services by trained observers.
- Calculating the share of product groups according to their importance, considering all consumers (rich and poor, young and old).
- Calculating the weight of each country in proportion to its total consumption expenditure in the euro area.
In the 1990s, the inflation rate reached significantly lower values (due to two important elements: the European Central Bank's monetary policy and the countries' preparation for the launch of the euro) compared with the 1970s and 1980s, when inflation reached very high values in many EU countries.
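To make the aggregation steps listed above concrete, the short sketch below combines product-group sub-indices with expenditure weights and then computes annual inflation as the percentage change against the same month of the previous year. The weights and index levels are invented for illustration; they are not Eurostat figures and the weighting scheme is a simplification of the official HICP methodology.

```python
# Illustrative weighted price index and year-on-year inflation calculation.
# All numbers below are hypothetical.
product_weights = {"food": 0.20, "energy": 0.11, "services": 0.44, "goods": 0.25}

# Hypothetical sub-index levels: same month, current year vs. previous year
index_now  = {"food": 118.0, "energy": 131.0, "services": 109.5, "goods": 105.2}
index_prev = {"food": 114.0, "energy": 122.0, "services": 107.0, "goods": 104.0}

def weighted_index(levels, weights):
    """Aggregate sub-indices with fixed expenditure weights."""
    return sum(levels[group] * w for group, w in weights.items())

hicp_now = weighted_index(index_now, product_weights)
hicp_prev = weighted_index(index_prev, product_weights)

# Annual inflation: percentage change against the same month of last year
annual_inflation = (hicp_now / hicp_prev - 1.0) * 100.0
print(f"HICP annual inflation: {annual_inflation:.1f}%")
```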
Figure 1 Inflation rates by countries. Sources: Eurostat
Based on calculations using the Harmonized Index of Consumer Prices (HICP) in October 2012, the lowest inflation rates were recorded in countries such as Greece (0.9%), Sweden (1.2%) and Latvia (1.6%). At the opposite pole are countries like Hungary (6.0%), Romania (5.0%) and Estonia (4.2%). Inflation had a fluctuating trend in the European Union compared to September of the same year: it fell in 13 Member States, increased in 10 and remained stable in 4 of them.
Figure 3 The inflation rate based on the Harmonized Index of Consumer Prices (HICP). Sources: Eurostat
The figure shows the annual percentage change in the general level of prices of consumer goods and services since 1996, considering the percentage change compared with the same month of the previous year. The graph shows data for the euro area. The inflation peak was reached in July 2008 and the trough in July 2009. Among the factors influencing the price level we can mention:
- changes in interest rates, which influence short-term consumption and savings, and thus the supply of products and services on the market;
- changes in market liquidity, which affect the general level of prices rather than unemployment and real income, the latter depending on population, technology, fiscal and social policies, etc.
Figure 2 Inflation rates by countries. Sources: Eurostat
The items that had a major contribution to increasing inflation in October 2012 were transport (with a growth rate of 4.1%) and alcohol & tobacco (an increase of 4%), unlike communications (a rate of 3.5%), recreation and culture (a rate of 1.0%) and clothes and household goods (each at a rate of 1.2%), which had the lowest growth rates.
Overall, most industrialized countries have experienced, in recent years, low and relatively stable levels of inflation, explained by the fact that, although individual product prices in certain sectors (e.g. fuel, energy) saw a substantial increase, this was compensated by decreases in prices in other sectors (e.g. telephone, TV, etc.), finally reaching a relatively stable general price level. By achieving price stability, objectives such as the following can be attained:
- maintaining low unemployment by creating jobs;
- maintaining a high standard of living of the population;
- a steady increase in economic activity, i.e. rapid economic growth.
4. REFERENCES
[1] European Central Bank (2012), Price stability objective, Eurosystem Mp.001, http://www.ecb.europa.eu
[2] Gerdesmeier, Dieter (2011), Price stability: why is it important?, European Central Bank, Germany, ISBN (Online) 978-92-899-0704-0, http://www.ecb.europa.eu
[3] European Central Bank (2012), Euro area annual inflation down to 2.5%, 160/2012 - 15 November 2012
[4] National Bank of Romania (November 2012), Inflation report, www.bnro.ro
[5] F. Surugiu, Gh. Surugiu, Consumers' Identity - The Role of the Self Concept in the Consumer Behavior, Analele Universității Maritime Constanța, vol. XVII, 2012
[6] G. Raicu, A. Nita, Modern approach to improve economical performance using knowledge management, Black Sea 2008 Conference, Ninth International Conference on Marine Sciences and Technologies, October 23-25, Varna, Bulgaria, 2008, ISBN 978-95490156-5-2, pp. 265-268
*** http://www.ecb.int/ecb/educational/hicp/html/index.ro.html
*** http://www.ecb.int/stats/prices/hicp/html/inflation.en.html
ABSTRACT Over time it has been concluded that associated risk is a vital component of all economic activities, one that can only be managed, not fought; if no risk is assumed, opportunities to win may be lost, which means that risk assumed under well-established conditions can bring value to the institution, so that the risk management process becomes a competitive advantage. The increase in operational risk in recent years has been enhanced by the creation of ever more complex products and services, financial innovations, increased competition, etc. This required an adequate operational risk management, included in internal capital estimation and allocation. Due to the novelty and importance of operational risk treatment, I chose this theme, showing how to calculate the capital requirement needed to cover operational risk for an institution in Romania, using both simple and advanced methods, in order to highlight the best approach. Keywords: Operational Risk, BIA, SA, AMA, LDA, EL, UL
1. INTRODUCTION
2. DETERMINING THE CAPITAL REQUIREMENT
In the following, by means of an econometric study, the capital requirements were determined for an institution in Romania, using the methodologies proposed by the Basel Committee on Banking Supervision. The losses incurred due to operational risk have been identified and mapped to the related international standards:
- the eight business lines: corporate finance, trading and sales, retail banking, commercial banking, payments and settlement, agency services, retail brokerage, asset management;
- and the seven types of events: internal fraud; external fraud; risks arising from relationships with clients, products and business practices; damage to physical assets; business disruption and system failures; execution, delivery and process management; and employment conditions and workplace safety.
Thus, under the Basic Indicator Approach, the capital for operational risk is determined by applying a factor of 15% to the average gross income obtained during three consecutive years. Figure 1 presents the evolution of the gross income of the institution analyzed.
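As a rough illustration of the Basic Indicator Approach just described, the sketch below applies the 15% alpha factor to the three-year average of positive gross income. The gross income figures are invented, and the exclusion of non-positive years is the usual reading of the Basel II rule rather than a detail stated in this paper.

```python
# Minimal sketch of the Basic Indicator Approach (BIA) capital charge.
# Gross income values are illustrative only.
ALPHA = 0.15

def bia_capital(gross_income_3y, alpha=ALPHA):
    """Average only the years with positive gross income, as Basel II prescribes."""
    positive = [gi for gi in gross_income_3y if gi > 0]
    if not positive:
        return 0.0
    return alpha * sum(positive) / len(positive)

# Example: three consecutive years of gross income (e.g. millions RON)
print(bia_capital([120.0, 135.0, 150.0]))   # -> 20.25
```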
Risk is a future and uncertain event that cannot be disregarded, but only managed (ensuring the minimization of its likelihood and potential effects so that the entity obtains maximum profit), considering that if no risk is assumed, opportunities to win may be lost. Over time many opinions have been formulated on operational risk and its management methods, but the most complete definition in terms of causes, which can be adopted by any institution, is the one formulated by the Basel Committee on Banking Supervision, which considers operational risk as the "risk of direct or indirect loss resulting from deficiencies or failures of procedures, personnel, internal systems or external events". Thus this definition includes legal risk as its main component - a "manifestation" of potential operational risk, an indirect consequence arising from one or more causes (personnel, processes, systems or events outside the organization) - but excludes strategic risk (because the financial loss incurred is difficult to determine) and reputational risk (because, although its consequences can be identified, they appear diffuse and hard to quantify in advance). The Basel Committee proposed three methods to quantify operational risk, with varying degrees of difficulty: the Basic Indicator Approach, the Standardized Approach and the advanced methods. The first two approaches are considered rather security measures and not measures for determining exposures. The advanced methods include the Loss Distribution Approach, the Internal Measurement Approach and the Scorecard Approach, thereby providing institutions with flexibility in designing approaches based on internal models and databases included in operational risk management. About these approaches, two views have emerged in the literature: practitioners who recommend the complex approaches, since they offer advantages, and practitioners who consider these methods too expensive and prefer a simple approach.
Figure 1 Evolution of the gross income of the institution analyzed
High gross income values for a particular business line show the size and intensity of the activity in
Figure 3 Allocation of the number of losses and of the loss amounts to each cell of the matrix
Next, for each cell of the operational risk matrix, the observations were modeled in terms of the severity distribution and the frequency distribution. The frequency distribution was modeled with the Poisson distribution, whose parameter was set to the average number of losses arising in each cell of the operational risk matrix. For modeling the severity distribution, the parameters of the Gamma, Exponential, Normal, Pareto and Weibull distributions were estimated for each cell of the operational risk matrix and, based on goodness-of-fit tests (chi-square, Kolmogorov-Smirnov, Anderson-Darling and quantile-quantile plots), the Exponential distribution was selected as the closest to the empirical distribution. After repeating the process for each cell of the matrix, Monte Carlo simulation is used to aggregate the frequency and severity distributions, using the VaR methodology commonly applied to the aggregate loss distribution, so that the capital requirement (value at risk) is fixed for each group of events. The mean of the aggregate loss distribution is determined as the product between the mean of the Poisson distribution and the mean of the severity distribution (exponential or normal), the resulting level representing the provision (expected loss). The capital reserve to cover unexpected losses is determined as the difference between the value at risk (VaR) of the aggregate loss distribution and the expected loss.
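A minimal Monte Carlo sketch of this frequency/severity aggregation for a single cell of the matrix is given below. The Poisson and Exponential parameters, the number of simulated years and the 99.9% confidence level are illustrative assumptions, not the institution's fitted values.

```python
# Sketch of one Loss Distribution Approach cell: Poisson frequency, Exponential
# severity, Monte Carlo aggregation, then EL, VaR and UL. Parameters are invented.
import numpy as np

rng = np.random.default_rng(42)

lam = 12.0            # average number of loss events per year (Poisson parameter)
mean_severity = 50.0  # average loss per event, e.g. thousand RON (Exponential mean)
n_sim = 100_000       # simulated years
confidence = 0.999    # Basel-style confidence level

annual_losses = np.empty(n_sim)
for i in range(n_sim):
    n_events = rng.poisson(lam)
    annual_losses[i] = rng.exponential(mean_severity, size=n_events).sum()

expected_loss = annual_losses.mean()              # provision (EL) ~ lam * mean_severity
var_999 = np.quantile(annual_losses, confidence)  # aggregate VaR at 99.9%
unexpected_loss = var_999 - expected_loss         # capital reserve (UL)

print(f"EL = {expected_loss:.1f}, VaR(99.9%) = {var_999:.1f}, UL = {unexpected_loss:.1f}")
```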
Figure 2 The capital requirement under the Standardized Approach, by business lines
Thus, as shown in Figure 2, an increasing proportion of the capital requirement is allocated to the payments and settlement business line, followed by the trading and sales business line, consistent with the results obtained and with the information revealed by Figure 1. Among the advanced methods, considering the information provided by the institution on which the analysis was done, the Loss Distribution Approach was applied and is presented individually. The risk profile was first developed by building the operational risk matrix of event types and business lines, the starting point for identifying the sources of risk, as each cell of the risk matrix may have different statistical properties for the frequency and severity distributions of losses.
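For comparison with Figure 2, a sketch of the Standardized Approach calculation is shown below, using the regulatory beta factors per business line from Basel II and invented gross income figures. The per-year floor at zero follows the common reading of the rule and is an assumption about details not spelled out in the text.

```python
# Sketch of the Standardized Approach (TSA): beta-weighted gross income per
# business line, averaged over three years. Gross income values are illustrative.
BETA = {
    "corporate finance": 0.18, "trading and sales": 0.18, "retail banking": 0.12,
    "commercial banking": 0.15, "payments and settlement": 0.18,
    "agency services": 0.15, "asset management": 0.12, "retail brokerage": 0.12,
}

def tsa_capital(gross_income_by_year):
    """gross_income_by_year: list of 3 dicts {business line: gross income}."""
    yearly = [
        # each year's weighted sum is floored at zero before averaging
        max(sum(BETA[line] * gi for line, gi in year.items()), 0.0)
        for year in gross_income_by_year
    ]
    return sum(yearly) / len(yearly)

year = {"trading and sales": 60.0, "payments and settlement": 55.0, "retail banking": 20.0}
print(tsa_capital([year, year, year]))   # 0.18*60 + 0.18*55 + 0.12*20 = 23.1
```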
Figure 5 Synthesis of the methods for the institution
Analyzing the first two approaches (the Basic Indicator Approach and the Standardized Approach), we find an increase in the capital requirement, explained by the fact that the main lines of business are trading and sales and payments and settlement, for which β = 15% and β = 18%. The minimum capital is allocated using the Loss Distribution Approach, because it can identify, measure and manage operational risk more effectively, achieving a consistent estimate of the economic capital needed to cover this risk.
3. CONCLUSIONS
With the increasing complexity of the approach, the required capital decreased significantly, explained by the fact that, by using advanced approaches, operational risk can be identified, measured and managed more effectively, achieving a consistent estimate of the economic capital for operational risk.
4. REFERENCES
Figure 4 The expected loss, unexpected loss and total loss by business lines and event types for the institution
To cover the average loss, a provision for the expected loss should be constituted, and if the institution wants to protect the stability of its activity, it must constitute additional capital against the potential losses related to the unexpected loss. If the institution fails to create the reserves above, losses equal to the sum of the expected and unexpected losses may occur, affecting profitability and thus the result for the shareholders. Capital
[1] Anghelache, G.-V., Olteanu, A.-C., Radu, A.-N. (2011), Risks Assessment for Financial Institutions in Romania, Revista Română de Statistică, Trim. I, ISSN 1018-046x
[2] Basel Committee on Banking Supervision (2001), Operational Risk, Working Papers, Bank for International Settlements, Basel, Switzerland, www.bis.org
[3] Kuritzkes, Andrew, Schuermann, Til (2007), What we know, don't know and can't know about bank risk: a view from the trenches, Wharton Financial Institutions Center Working Paper No. 06-05
[4] Arsenie, Paulica, Hanzu-Pazara, Radu, Barsan, Eugen, Bocanete, Paul, Surugiu, Felicia, Raicu, Gabriel, Scriosteanu, Ionut, Parasca, Nicoleta, Ships bridge equipment and human errors, Intermodal Conference on Safety Management and Human Factors, Sydney, Australia, 2007
[5] Tinca, Adrian (2006), The Operational Risk in the Outlook of the Basel II Accord Implementation, pp. 31-34, www.ectap.ro/articole/217.pdf
Associated risk is a vital component of economic activities; undertaken under well-established conditions, it can bring value, so that the risk management process becomes a competitive advantage (the "art" of making decisions and acting on the basis of insufficient data). The Basel Committee on Banking Supervision has developed rules and regulations which recognize the impact of operational risk (emphasizing that the implementation of proper risk management is vital for the existence of a financial institution). This paper aims to establish the optimal method for determining the capital requirements for the institution analyzed. Keywords: Operational risk, Operational risk management, expected loss, PE, LGE, unexpected loss, provisions, capital.
1. INTRODUCTION
The banks' capitalization required under Basel I and II proved to be insufficient, so complex prudential policies grouped under the new Basel III became necessary. These new standards aim at improving risk management, strengthening transparency requirements and addressing systemically important banks - in a word, decreasing the negative effects of the financial crisis by increasing the requirements on capital adequacy, liquidity and leverage. Operational risk, as defined by the Basel Committee on Banking Supervision, is the "risk of direct or indirect loss resulting from deficiencies or failures of procedures, personnel, internal systems or external events". Thus the main categories of operational risk are:
- Internal fraud: intentional losses due to violations of the institution's internal regulations, policies or the law, involving at least one employee of the company;
- External fraud: losses caused by a third party through fraud, circumventing compliance or appropriating goods/values, or violations of security systems;
- Risks arising from relationships with clients, products and business practices: the product of negligence towards customers or professional obligations, in the nature and design of the product, or of improper business or market practices;
- Damage to physical assets: losses materialized in damage to, or loss of, the physical assets of the organization and the impact on the business;
- Business disruption and system availability failures resulting from operations;
- Execution, delivery and process management: losses due to the faulty registration and execution of transactions, or faulty monitoring and reporting;
- Employment conditions and job security: losses due to activities contrary to the law and to conventions on employment, health and safety at work.
Because of the administration of many monetary instruments, the monitoring and correction of large exposures, the increased number of transactions in a relatively short
period and from several locations, including e-banking services, as well as the complexity and volatility of the banking system and breaches of legislation and regulation (in a word, due to the internal and external factors that determine operational risk), an institution can record a series of losses or profits below those estimated as realizable. A review of the literature shows a wide variety of views on operational risk and on the methods used to manage it. Operational risk has been treated as the "residual" financial risk remaining after credit and market risk are eliminated, a vague definition which also includes business risk (market positioning, management competence, etc.). It is also considered that operational risk arises from the conduct of financial transactions, with error sources such as disorders and deficiencies of systems, equipment, people and techniques, regardless of whether the acts are intentional frauds committed by employees or by persons outside the institution. Operational risk has further been treated as the risk, excluding business risk, which arises from an inadequate internal control system, a view that also takes into account catastrophic natural events and dishonest acts inside and outside the institution. A final treatment considers operational risk to be the direct or indirect loss resulting from technological processes, inadequate internal control procedures, technological disturbances, unauthorized activities of employees or external influence. The growing importance of operational risk is explained by changes [16] in the business environment, infrastructure and organizations: competition becoming more heated, automated technologies and electronic commerce, the emergence of increasingly complex products, globalization, decentralization, changes in the banking system through mergers, acquisitions and consolidations, and the increased activity of retail trade, all of which have led to a more careful management of this risk, materialized in its assessment and in the allocation of capital. Risk management is a management process that includes the techniques and methods used for risk
Figure 1: Capital requirement for operational risk
As seen in the figure above, the minimum assigned capital is achieved by using the Internal Measurement Approach, since the institution can identify the affected business lines and the main operational risk factors.
3. CONCLUSIONS
Recently there has been intense interest in operational risk, because it can occur in any sector of the economy, not only in banking. We can therefore state that this risk may be caused by:
- internal processes, including risks arising from relationships with clients, products and business practices, and execution, delivery and process management;
- human risk, including internal fraud, employment practices and workplace safety, and risks arising from relationships with clients, products and business practices;
- business disruption and system failures arising from operations;
- external events, such as external fraud, damage to physical assets and business and operational interruption.
Establishing the actual optimal level of capitalization is particularly important because it allows capital to fulfill its protective function (absorbing unexpected losses, decreasing the probability of bankruptcy of the bank and increasing the level of implicit trust in the banking system). There is no single best method to quantify operational risk; each of the quantification approaches proposed by the Basel Accord has shortcomings that implicitly lead to an incorrect measurement of this risk:
- while the "simple" methods do not require a complex database and use simple calculation formulas, they cannot establish any connection between the indicators of operational risk exposure (increased income is penalized by capital growth and ultimately results in a lower gain for the entity, which is the opposite of its strategy); in addition, there is a negative correlation between losses and capital needs. These methods also do not take into account the internal, process-specific differences between institutions and
[1] CAROL ALEXANDER (2002) Operational Risk Management: Advanced Approach
[2] GABRIELA-VICTORIA ANGHELACHE, CARMEN OBREJA, BOGDAN COZMÂNCĂ, ADRIANA CĂTĂLINA HÂNDOREANU, ANA CORNELIA OLTEANU, ALINA NICOLETA RADU (2009) "Synthesis of research report on the draft capital adequacy for operational risk in the context of moral hazard", ASE, http://www.acrom.ase.ro/Documente/Sinteza2009.pdf
[2] CROUHY, M., GALAI, D., MARK, R., Risk Management, McGraw-Hill, New York, 2001, pp. 478-479
[3] Basel Committee on Banking Supervision (2004) Guidelines on Operational Risk Management, National Bank of Austria, available online at www.bis.org
[4] Basel Committee on Banking Supervision (2003) Sound Practices for the Management and Supervision of Operational Risk, no. 86, 91 and 96, www.bis.org
[5] Basel Committee on Banking Supervision (2001) "Operational Risk" Working Papers, Bank for International Settlements, Basel, Switzerland, www.bis.org
[6] CAROL ALEXANDER (2003) "Operational Risk: Regulation, Analysis and Management", Financial Times Prentice Hall, Pearson Education
[7] KELLER JAMES S. (2001) "Request for Comments on the New Basel Capital Accord", The PNC Financial Services Group, Pittsburgh, Pennsylvania
[8] ANDREW KURITZKES, TIL SCHUERMANN (2007) "What we know, don't know and can't know about bank risk: a view from the trenches", Wharton Financial Institutions Center Working Paper No. 06-05
[9] MARSHALL, CH., Measuring and Managing Operational Risk in Financial Institutions, John Wiley and Sons
[10] Merrill Lynch & Co., Inc. (2001) Merrill Lynch response to the Basel Committee and European Commission
[11] ANCA ELENA NUCU (2011) "Basel III: Challenge for the Romanian banking system", Theoretical and Applied Economics, Volume XVIII (2011), No. 12 (565), pp. 57-68
[12] TOSHIHIKO MORI, EIJI HARADA (2001) "Internal Measurement Approach to Operational Risk Capital Charge", Bank of Japan, http://www.boj.or.jp/en/type/ronbun/ron/wps/kako/data/fwp01e02.pdf
[13] FELICIA SURUGIU, SIMONA UTUREANU, CRISTINA DRAGOMIR, Human Resources Management in a Navigation Company in the Globalized Context, Analele Universității Ovidius, Seria: Științe
1. INTRODUCTION
The contemporary technical and scientific revolution has generated new products with superior performance, but it has also caused a conflict between man and nature, affecting the correlation between society and nature. Human interventions on nature have generated a series of successive or simultaneous crises with alarming social consequences, such as the ecological crisis [3].
2. QUALITY OF PRODUCTS AND THE RELATIONSHIP TO ENVIRONMENTAL PROTECTION
The pressures aimed at protecting the environment, exerted ever more strongly by ecologists, consumers and legislators, determine companies to find new solutions for environmental protection by creating products and using environmentally friendly procedures (figure 1) [3]. The natural environment is negatively affected by a significant number of physical factors (air, water, light), biological factors (food, disease) and economic factors (certain technological processes, the operating conditions and use of goods and so on). The negative effects of these factors can be temporary, or they may become permanent due to human intervention. Protection of the natural environment depends, in part, on capitalizing on residues and waste by establishing closed production cycles of the type: raw material - production - product - raw material. In other words, the protection of the natural environment has become an issue closely linked to the quality of the products and services available to man today [8]. The reuse of recoverable materials and their reintegration into the economic cycle is acutely required, both as a matter of environmental protection and as an economic problem of regaining raw materials. It is considered that among the sources of environmental pollution an important place is occupied by: waste from vehicles, pesticides and herbicides, radioactive substances, and the noise made by a range of devices, cars and equipment (goods). Waste from clothing, furniture, household appliances, soap, detergents, cosmetics, pharmaceuticals and household chemical products, and especially packaging waste, is increasingly diversified and steadily increasing in quantity. Given that a large proportion of this waste is not biodegradable, and some of it is directly toxic (e.g. household insecticides), natural purification takes place with difficulty, thus affecting the bacterial flora of the wastewater.
Figure 1 Environmental pressures
The legislation adopted in the environmental field by the EU member states and candidate countries has changed the perspective of business people. Pollution is costly and the laws regulating this matter are very strict. The importance of environmental protection depends, in the public's mind, on the evidence regarding global warming, the depletion of the ozone layer, increasing waste generation, the destruction of tropical forests and the extinction of species. Environmental legislation has expanded in most countries and this will positively influence how business is done.
Figure 2 The 12 global environmental problems [4]
3. THE INFLUENCE OF TRANSPORT ON ENVIRONMENTAL QUALITY
Road, sea and air transport means that use internal combustion engines release noxious products such as carbon monoxide, nitrogen oxides and unburned hydrocarbons. Petrol-powered vehicles release the largest quantity of pollutants into the environment, among which lead, a very dangerous pollutant. As is known, lead is added to gasoline as tetraethyl lead in order to increase its octane rating. To reduce the impact on environmental quality, unleaded petrol was introduced and catalytic converters were fitted to purify the exhaust gases of motor vehicles. Analyses carried out along highways have identified high contamination of the soil and vegetation with lead, exceeding in crowded areas the limits recommended by the World Health Organization by 2-4% (maximum 2 micrograms/m3 of air). Transport is responsible for around 25% of total greenhouse gas emissions, with important consequences for the sustainability of the planet. A program developed by the International Union of Railways (ECOPASSENGER) allows the comparison of air, road and rail travel routes by calculating the energy consumption, CO2 emissions and other pollutants (nitrogen oxide emissions, particulate emissions, hydrocarbon emissions) for each selected route [5]. The program also recommends choosing the train as the means of making the journey, its use being a solution for reducing greenhouse gas emissions from the transport sector. Transport services were among the first common policies of the European Union; since the entry into force of the Treaty of Rome in 1958, transport policy has been based on removing border obstacles between Member States, thus contributing to the free movement of persons and goods. The main objectives of transport policy are [2]: completion of the internal market; achieving sustainable development; expansion of the transport network across Europe; maximizing the use of European space; improving transport safety; and developing international cooperation. The single market has been a turning point for the common transport policy because, together with the 2001 White Paper, this policy has developed the different modes of transport harmoniously and simultaneously, in particular by using each available mode of transport (land, sea or air) in a more efficient way. In order to reduce greenhouse gas emissions, numerous actions have been undertaken (research, the introduction of alternative solutions, especially in road transport) and the European Union has defined a policy to promote biofuels and to reduce emissions from road and air transport. Since the 1970s, directives on motor vehicle emissions have been adopted, which have resulted in a gradual reduction of gaseous pollutant emissions and, to a certain extent, of the noise emissions of second-hand vehicles. The aim is to further reduce the impact of transport on the environment and, to that effect, air emission reductions are set by the "EURO" standards from I to V,
Despite the progress made over time, transport continues to be a burden, especially in terms of greenhouse gas emissions. Some studies have shown that improving technologies cannot solve the problems expected in the future, and that the effort to improve this sector, and not least to reduce its contribution to climate change, should be increased. Transport makes an important contribution to climate change. The basic relationship between transport and climate change is simple: transport is almost entirely dependent on oil, which, along with other fossil fuels (coal, natural gas), is the main source of CO2.
[1] BĂLĂCEANU C., APOSTOL D.M., The theory of "growth poles" and its impact on sustainable development, SGEM 2012, Volume IV, Conference Proceedings
[2] NEDEA P.S., Traficul rutier transfrontalier în sud-vestul României - Impact asupra mediului și a așezărilor, Editura Universitară, București, 2011
[3] STANCIU, I., Managementul calității totale, Editura Cartea Universitară, București, 2004
[4] SADGROVE, K., Ghidul ecologic al managerilor, Editura Tehnică, București, 1998
[5] Synthèses de la législation de l'UE - Transports, énergie et environnement, http://europa.eu/legislation_summaries/transport/transport_energy_environment/l28165_fr.htm
1. INTRODUCTION
Education encompasses teaching and learning specific skills, and also something less tangible but more profound: the imparting of knowledge, positive judgment and well-developed wisdom. Education has as one of its fundamental aspects the imparting of culture from generation to generation (see socialization). Education means to draw out, facilitating the realisation of the self-potential and latent talents of an individual. It is an application of pedagogy, a body of theoretical and applied research relating to teaching and learning that draws on many disciplines such as psychology, philosophy, computer science, linguistics, neuroscience and sociology. Family teaching can have an effect more profound than is often realized, though it may function very informally. Lifelong, or adult, education has become widespread in many countries. However, education is still seen by many as something aimed at children, and adult education is often branded as adult learning or lifelong learning. Adult education takes on many forms, from formal class-based learning to self-directed learning. Lending libraries provide inexpensive informal access to books and other self-instructional materials. The rise in computer ownership and internet access has given both adults and children greater access to both formal and informal education. In Scandinavia a unique approach to learning termed folkbildning has long been recognised as contributing to adult education through the use of learning circles.
Formal Education - the hierarchically structured, chronologically graded education system, running from primary school through the university and including, in addition to general academic studies, a variety of specialized programs and institutions for full-time technical and professional training.
Informal Education - the truly lifelong process whereby every individual acquires attitudes, values, skills and knowledge from daily experience and from the educative influences and resources in his or her environment: from family and neighbours, from work and play, from the market place, the library and the mass media.
Non-Formal Education - any organized educational activity outside the established formal system, whether operating separately or as an important feature of some broader activity, that is intended to serve identifiable learning clienteles and learning objectives.
2. EDUCATION TECHNOLOGY
Technology is an increasingly influential factor in education. Computers and mobile phones are widely used in developed countries both to complement established education practices and to develop new ways of learning such as online education (a type of distance education). This gives students the opportunity to choose what they are interested in learning. The proliferation of computers also means the increase of programming and blogging. Technology offers powerful learning tools that demand new skills and understandings of students, including multimedia, and provides new ways to engage students, such as virtual learning environments [1], [2]. Technology is being used more not only in the administrative duties of education but also in the instruction of students. The use of technologies such as PowerPoint and interactive whiteboards is capturing the attention of students in the classroom. Technology is also being used in the assessment of students. One example is the Audience Response System (ARS), which allows immediate feedback in tests and classroom discussions. Information and communication technologies (ICTs) are a diverse set of tools and resources used to communicate, create, disseminate, store and manage information [3]. These technologies include computers, the Internet, broadcasting technologies (radio and television), and telephony. There is increasing interest in how computers and the Internet can improve education at all levels, in both formal and non-formal settings [4], [5]. Older ICT technologies, such as radio and television, have for over forty years been used for open and distance learning, although print remains the cheapest, most accessible and therefore most dominant delivery mechanism in both developed and developing countries.
Pedagogical elements are an attempt to define structures or units of educational material: for example, a lesson, an assignment, a multiple choice question, a quiz, a discussion group or a case study. These units should be format-independent, so although they may be implemented in any of the following methods, pedagogical structures would not include a textbook, a web page, a video conference or an iPod video. When beginning to create e-learning content, the pedagogical approaches need to be evaluated. Simple pedagogical approaches make it easy to create content [6], but lack flexibility, richness and downstream functionality. On the other hand, complex pedagogical approaches can be difficult to set up and slow to develop, though they have the potential to provide more engaging learning experiences for students. Somewhere between these extremes is an ideal pedagogy that allows a particular educator to effectively create educational materials while simultaneously providing the most engaging educational experiences for students [7]. The use of computers and the Internet is still in its infancy in developing countries, if these are used at all, due to limited infrastructure and the attendant high costs of access. Usually, various technologies are used in combination rather than as the sole delivery mechanism. For example, the Kothmale Community Radio Internet uses both radio broadcasts and computer and Internet technologies to facilitate the sharing of information and provide educational opportunities in a rural community in Sri Lanka. The Open University of the United Kingdom (UKOU), established in 1969 as the first educational institution in the world wholly dedicated to open and distance learning, still relies heavily on print-based materials supplemented by radio, television and, in recent years, online programming. Similarly, the Indira Gandhi National Open University in India combines the use of print, recorded audio and video, broadcast radio and television, and audio conferencing technologies. The term computer-assisted learning (CAL) has been increasingly used to describe the use of technology in teaching.
4. VIRTUAL CLASSROOMS
Communication technologies are generally categorized as asynchronous or synchronous. Asynchronous activities use technologies such as blogs, wikis and discussion boards. The idea here is that participants may engage in the exchange of ideas or information without depending on other participants' involvement at the same time. Electronic mail (e-mail) is also asynchronous, in that mail can be sent or received without both participants being involved at the same time. Synchronous activities involve the exchange of ideas and information with one or more participants during the same period of time. A face-to-face discussion is an example of synchronous communication. Synchronous activities occur with all participants joining in at once, as with an online chat session or a virtual
The term e-Learning 2.0 is used to refer to new ways of thinking about e-learning inspired by the emergence of Web 2.0. From an e-Learning 2.0 perspective, conventional e-learning systems were based on instructional packets that were delivered to students using Internet technologies. The role of the student consisted in learning from the readings and preparing assignments, which were then evaluated by the teacher. In contrast, the new e-learning places increased emphasis on social learning and the use of social software such as blogs, wikis, podcasts and virtual worlds such as Second Life. This phenomenon has also been referred to as Long Tail Learning. The first 10 years of e-learning (e-learning 1.0) were focused on using the internet to replicate the instructor-led experience. Content was designed to lead a learner through the material, providing a wide and ever-increasing set of interactions, experiences, assessments and simulations. E-learning 2.0, by contrast (patterned after Web 2.0), is built around collaboration. e-Learning 2.0 assumes that knowledge (as meaning and understanding) is socially constructed. Learning takes place through conversations about content and grounded interaction about problems and actions. Advocates of social learning claim that one of the best ways to learn something is to teach it to others. Distance education has long had trouble with testing. The delivery of testing materials is fairly straightforward: they are made available to the student, who can read them at leisure. The problem arises when the student is required to complete assignments and testing. Online courses have had difficulty controlling cheating in quizzes, tests or examinations because of the lack of teacher control. In a classroom situation a teacher can monitor students and visually uphold a level of integrity consistent with an institution's reputation. However, with distance education the student can be removed from supervision completely. Some schools address integrity issues concerning testing by requiring students to take examinations in a controlled setting [9]. Assignments have adapted by becoming larger, longer and more thorough, so as to test for knowledge by forcing the student to research the subject and prove they have done the work. Quizzes are a popular form of testing knowledge and many courses go by the honor system regarding cheating. Even if the student is
Figure 1 Web based IMO Tanker Courses
We use Moodle as a Knowledge Management System, through which we pursue an explicit knowledge management objective of some type, such as collaboration or the sharing of good practice. Moodle is a Course Management System (CMS), also known as a Learning Management System (LMS) or a Virtual Learning Environment (VLE). It is a free web application that educators can use to create effective online learning sites. A Learning Management System (LMS) is a set of software tools designed to manage user learning interventions. LMSs go far beyond conventional training records management and reporting, and their added value lies in the extensive range of complementary functionality they offer. Via the internet and the LMS, the participants have access to internal tests on different topics and the students can enrol themselves directly on the website [13], [14].
7. FUTURE MARKETS AND CONCLUSIONS
LMS buyers generally report poor satisfaction, based on survey results from the American Society for Training and Development (ASTD) and the eLearningGuild. The proportion of ASTD respondents who were very unsatisfied with their LMS purchase doubled, while those that were very satisfied decreased by 25%. The number that was very satisfied or satisfied edged over 50% (about 30% were somewhat satisfied). Nearly one quarter of respondents intended to purchase a new LMS or outsource their LMS functionality over the next 12 months. eLearningGuild respondents report significant barriers including cost, IT support, integration and customization. They also report significant implementation effort, with a median of 23 months being reported.
[1] LOUTCHKO, IOURI; KURBEL, KARL; PAKHOMOV, ALEXEI: Production and Delivery of Multimedia Courses for Internet Based Virtual Education; The World Congress "Networked Learning in a Global Environment: Challenges and Solutions for Virtual Education", Berlin, Germany, May 1-4, 2002
[2] PARKER, QUIN (2007-04-06). A second look at school life, The Guardian
[3] DORVEAUX, XAVIER (2007-07-15). Apprendre une langue dans un monde virtuel, Le Monde. Retrieved on 2007-07-15
According to the maritime commercial literature, the damage is the total of the extraordinary expenses and prejudice brought to the ship and the cargo onboard, or to only one of them, subsequent to loading and departure and until their return and unloading. In the opinion of some authors, the damage is a prejudice brought to a ship or her cargo as a result of a navigation accident or of any force majeure event, as well as the expenses or sacrifices incurred during transport in order to prevent a danger threatening the ship. Damages represent all deteriorations caused to the ship's hull and her inventory, as well as to the cargo onboard intended for transport. Thus, there are two corresponding categories of damages: damages to the ship and damages to the cargo. Damages to the ship represent the total prejudices brought to the ship's hull and her installations, being mainly generated by navigation accidents such as: collision, grounding, fire onboard, explosion onboard, touching the sea bottom, leakage and engine damage. Cargo damages represent the total prejudices brought to the goods due to damages to the ship or to the cargo itself. Considering the manner of cost coverage and the setting of responsibility, there are two distinct forms of damages: general average and particular average. All extraordinary expenses and prejudices willingly incurred for the salvage of the ship and goods are considered general average, while the prejudices brought and the expenses incurred in respect of the ship alone or of the goods alone are considered particular average.
2. ELEMENTS OF GENERAL AND PARTICULAR AVERAGES
2.1. Definition elements and general damages typology
The common damage (general average) is the extraordinary maritime sacrifice or extraordinary expense intentionally and reasonably incurred by the ship's master in order to save the ship and the cargo (the goods loaded onboard) from a danger threatening them during the maritime voyage; it is borne by the parties benefiting from it, in proportion to the asset values at risk.
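A minimal sketch of the proportional contribution rule stated above is given below: a general average sacrifice or expense is shared by the interests saved (ship, cargo, freight) in proportion to their values at risk. The contributory values and the amount of the sacrifice in the usage lines are hypothetical figures chosen only for illustration, and the sketch deliberately ignores the further corrections made in a real general average adjustment under the York-Antwerp Rules.

def apportion_general_average(ga_amount: float, contributory_values: dict) -> dict:
    """Share a general average amount among the saved interests,
    in proportion to their contributory (at-risk) values."""
    total_value = sum(contributory_values.values())
    return {
        interest: ga_amount * value / total_value
        for interest, value in contributory_values.items()
    }

# Hypothetical contributory values (currency units) and sacrifice
values = {"ship": 8_000_000, "cargo": 11_000_000, "freight_at_risk": 1_000_000}
sacrifice = 500_000
for interest, share in apportion_general_average(sacrifice, values).items():
    print(f"{interest} contributes {share:,.0f}")

The printout shows that the interest saved at the highest value bears the largest share, which is exactly the proportionality principle expressed in the definition above.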
2.5. Damages due to the nature of goods and to the vicinity of other goods
2.5.1. Damages due to the nature of goods
These are caused by the intrinsic characteristics of the goods, that is, by their physical-chemical characteristics. Special attention should be paid to hazardous cargo and its special stacking, handling, transport and storage conditions. Failure to observe the specified conditions can lead to self-ignition or explosions with destructive effects on both ship and cargo. One of the special conditions to observe for such goods is that they are not to be stacked near heating sources or in the vicinity of goods which support burning and may self-ignite [6].
2.5.2. Damages due to the vicinity of other goods
These are damages caused by the unfit stacking of goods or by the fact that goods of a different nature were loaded in the same area (bilge compartment, holds, cargo holds). The following recommendations should be taken into account: two or more sorts of bulk goods are not to be loaded in the same hold; when liquid goods are carried, a good sealing of the connection pipes must be provided in order to avoid infiltrations from one tank to another (in the case of an oil tanker).
2.6. Damages caused prior to the loading of goods onboard and those caused by handling and transport
a) Damages caused prior to loading
The damages caused prior to loading refer to damages which can affect the goods under the following circumstances: while travelling certain distances to the loading port, where the goods are handled on and off various means of transportation, from the place of origin up to the loading port; in the port, when the goods are stored in warehouses or at berths until the arrival of the ship which will take over the cargo. Under such circumstances, the packing and even the goods can be damaged. Therefore, when the goods are
In conclusion, the following situations are pointed out as examples of inappropriate handling of goods:
- negligent or inappropriate handling of winches (the ship's own loading/unloading installations), allowing the goods to be loaded at too high a speed, hoisted abruptly or overloaded beyond the loading limit of the installations, which can break down and cause the goods to fall from heights;
- the use, during loading/unloading operations, of tools which are incompatible with the type of goods handled;
- defective and negligent handling of goods, which can lead to the flattening of packages and upset the balance of the whole stack;
- inadequate securing during stowage which, together with the ship's rolling, implicitly causes the movement of goods and their rubbing against each other or against the holds' walls. The damage of goods by rubbing can cause severe losses, especially in the case of rolls of electric or telephone cable, etc.
4. REFERENCES
[1] ALEXA, C., CIUREL, V., SEBE, E., MIHĂESCU, A., Asigurări și reasigurări în comerțul internațional, Editura All, București, 1992
[2] BĂTRÂNCA, GH., Comerț maritim internațional, Editura Arvin Press, București, 2004
[3] BEZIRIS, A., Transport maritim, Editura Tehnică, București, 1988
[4] CARAIANI, GH., TUDOR, M., Asigurările în transporturile maritime, Editura Lumina Lex, București, 1998
[5] VĂCĂREL, I., BERCEA, F., Asigurări și reasigurări, Editura Expert, București, 1998
[6] BĂTRÂNCA, G., RAICU, G., POPESCU, C., Considerations about introduction of new technologies in the maritime field and their impact on safety at sea, 6th International Conference on the Management of Technological Changes, Management of Technological Changes, Vol. 1, Pages: 453-456, 2009, ISI Web of Knowledge indexed.
TEMPERATURE AND HUMIDITY - TWO MAJOR CLIMATIC RISK FACTORS AFFECTING THE QUALITY OF CARGOES CARRIED BY SEA
SURUGIU FELICIA
Constanta Maritime University, Romania
ABSTRACT
Every day, millions of tons of temperature-sensitive goods are produced, transported, stored or distributed worldwide. For all these products the control of temperature, and consequently of humidity, is essential, especially when the goods are transported by sea. The quality of these products may change rapidly when adequate temperature and relative humidity conditions are not maintained during transport and storage. Temperature variations can occur in warehousing, handling and transportation. Recent studies show that temperature-controlled shipments rise above the specified temperature in 30% of trips from the supplier to the distribution centre, and in 15% of trips from the distribution centre to the store. Lower-than-required temperatures occur in 19% of trips from supplier to distribution centre and in 36% of trips from the distribution centre to the store (White, 2007). It is the aim of this paper to highlight the impact of air temperature and atmospheric humidity on the quality of goods carried by sea onboard maritime ships.
Keywords: Temperature, humidity, air circulation velocity, cargo, maritime transport.
1. INTRODUCTION
Throughout the process of transportation, special attention should be paid to the preservation of the merchandise properties and to the prevention of quality risks, in order to eliminate or diminish the degradation and depreciation which may occur as a result of certain risk factors. By merchandise properties we understand the cluster of typical features consistent with the specific functions of a product, its utilization value, as well as its quality. Among the major risk factors acting mainly in maritime transport, we refer herein to temperature, humidity and the effects of air circulation velocity on the quality of goods shipped by sea.
2. TEMPERATURE AND HUMIDITY - CLIMATIC RISK FACTORS AT SEA
2.1. Impact of air temperature on the quality of goods shipped by sea
For each type of product intended for maritime transport it is necessary to ensure an optimal temperature regime, because keeping the temperature at a certain level during preservation influences both the maintenance of quality and the lifespan of those products. The preservation temperature must not fluctuate too much, especially in the case of food products. This goal may be reached by proper ventilation of the storage area, performed either naturally (by opening silo hatchways and ventilation cowls) or by special ventilation installations. A temperature drop below the levels set forth by standards may lead to alterations such as the freezing and dilation of products, precipitation, and changes in the solubility and viscosity of oils and fats. An increase in temperature above the standard levels entails a range of physical alterations, such as dilation
and high pressure inside the tanks, up to the point of explosion. Metabolic processes are also accelerated and quantitative losses occur in the products' weight. Any merchandise sensitive to temperature fluctuations demands the observation of certain requirements in this respect. If the temperature of the storage areas on maritime vessels complies with the requirements throughout the transport, the necessary premises for maintaining the quality of the shipped merchandise are thus ensured.
2.2. Transport temperature fluctuation interval
The transport temperature is the optimal storage temperature of a product, which provides the best conditions for maintaining its quality. For most goods (which do not fall into the category of those under a mandatory temperature control regime), the optimal transport value ranges between +5°C and +20°C. Of course, when different climate zones are crossed, different values are to be expected, that is, temperatures higher than +20°C in subtropical areas and lower than +5°C (even negative) in temperate areas in winter time. In such situations, preventive steps are called for, so that the temperature of the storage area does not exceed the admitted high level and does not decrease below the admitted low level. If the admitted high temperature level is exceeded, considerable quality depreciation and even total spoilage of goods may occur due to the intensification of enzymatic and microbiological processes. High temperatures may lead to the overheating and even self-ignition of the shipped cargo (such as products with a high content of oils). Another significant example of spoilage is that of tobacco leaves which, exposed to temperatures above the admitted high threshold, enter a stage of over-ripening, crumbling and turning into powder.
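A minimal sketch of the temperature-window check implied by this section follows; the +5°C and +20°C limits are the ones quoted in the text for goods outside the mandatory temperature control category, while the logged readings are hypothetical values used only for illustration.

LOW_ADMITTED_C = 5.0     # below this: freezing, dilation, precipitation risks (section 2.1)
HIGH_ADMITTED_C = 20.0   # above this: dilation, overheating, even self-ignition risks

def out_of_range_readings(readings_c, low=LOW_ADMITTED_C, high=HIGH_ADMITTED_C):
    """Return (index, value) pairs for hold temperature readings outside the admitted interval."""
    return [(i, t) for i, t in enumerate(readings_c) if t < low or t > high]

hold_log_c = [6.5, 12.0, 19.5, 22.3, 4.1, 15.0]   # hypothetical watch-by-watch hold log
for index, temperature in out_of_range_readings(hold_log_c):
    print(f"Reading {index}: {temperature} degC is outside the admitted interval "
          f"[{LOW_ADMITTED_C}; {HIGH_ADMITTED_C}] degC - check ventilation or cooling")

For goods under mandatory temperature control, or for voyages crossing several climate zones, the low and high limits would simply be replaced with the values admitted for the product in question.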
Figure 1 Mollier diagram of the possible ratio between temperature fluctuation and relative air humidity [17]
b) Sorption behaviour of goods
Hygroscopicity is the term which describes the capacity of goods to respond to the water content of the air; the phenomenon manifests itself either by absorption or by the release of water vapours. The crucial elements in the analysis of the hygroscopicity of goods are: relative air humidity; air temperature; and the water content of the goods.
2.4. Classification of goods according to water content
The water content of a product is the quantity of water in the total weight of that product, expressed as a percentage. Many hygroscopic products intended for maritime transport are organic. However, there are also inorganic products (many chemical products) which are hygroscopic; therefore, special attention should be paid to this characteristic during all transport phases. Hygroscopic goods can also cause the degradation of products which are neutral from the hygroscopic point of view, such as metals or chemical products, and therefore represent a risk factor with regard to the occurrence of corrosion. Hygroscopic goods have the specific feature of a variable water content, being able to absorb humidity from the environment or to release water vapours into it. Thus, in an environment with relatively low humidity such goods release water vapours, while in an environment with relatively high humidity they absorb humidity from the air. In this way, in the case of hygroscopic goods, the water content changes and alterations in their total weight occur. Such a situation can generate more severe effects: besides quality alteration, depreciation of any kind may occur, up to the total depreciation of such goods. A product is deemed dry when its water content does not affect its quality throughout the transport under normal weather conditions. For example, in the case of organic goods, a high water content may generate the
2.6. Effects of air circulation velocity on the quality of goods shipped by sea
The prevention of condensation water can be ensured by good ventilation of the storage spaces, providing the cargo with optimal storage conditions throughout the maritime transport. Adequate ventilation shall provide a constant flow of air in the storage areas, so that the heat, gases and smells emanating from the goods may be evacuated, thus providing the temperature the goods need for adequate keeping. Storage ventilation is carried out through cowls, which shall be oriented towards the resultant between the ship's heading and the wind direction. Besides the natural ventilation system, modern vessels are also equipped with an artificial ventilation system. This system, consisting of electric fans installed in the cowls, provides controlled ventilation in the sense of forced air intake or exhaust within the storage areas. The method of manually adjustable cowls or wind sails is simple and classic for all vessels carrying general goods. In such cases, temporary wooden air shafts are used inside the storage areas to distribute the cowl-intake air among the stacks. Cargo that emanates gases facilitates the formation of heavy sweat, and the stowage manuals even recommend opening the hatch covers during transport, under strict supervision and depending on the weather conditions, for the natural ventilation of the cargo. Of course, where automatic ventilation installations are fitted inside the storage areas, such ventilation manoeuvres are no longer necessary: a permanent air flow of 1-4 m/s is provided under centralized control. In any case, an important ventilation measure such as opening the hatch covers shall be permanently supervised and constantly entered in the log book; this remains material evidence for any survey report, should it be required in the future. In case of bad weather, too high humidity, too high outside temperature, rain or waves going over the deck, the storage ventilation is to be cut off.
3. CONCLUSIONS
Most of the cargo loss or damage resulting in cargo claims can be prevented by proper maintenance of vessels and proper care of cargo. If a vessel causes loss of or damage to her cargo and the carrier is held liable, the carrier has to compensate the cargo interests for their damages. Furthermore, extra time and costs will be incurred in discharging the damaged cargo. In the worst case, cargo receivers might refuse to take delivery of the damaged cargo, which results in a delay in the vessel's departure. Moreover, the carrier's reputation may deteriorate, which might result in loss of business. Accordingly, carriers are required to take proper care of cargo throughout the loading, navigating, discharging and delivering operations.
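To make the ventilation guidance of section 2.6 more concrete, the sketch below compares the dew point of the outside air with the conditions of the air in the hold before deciding whether ventilation is advisable, which is one common way of avoiding the condensation described above. The Magnus dew-point approximation and its coefficients are a standard meteorological formula rather than something taken from the paper, the 2°C safety margin is an assumed value, and the readings in the usage lines are hypothetical.

import math

def dew_point_c(temp_c: float, rel_humidity_pct: float,
                a: float = 17.62, b: float = 243.12) -> float:
    """Dew point (degC) from dry-bulb temperature and relative humidity, Magnus approximation."""
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def ventilation_advisable(outside_t: float, outside_rh: float,
                          hold_t: float, hold_rh: float,
                          margin_c: float = 2.0) -> bool:
    """Ventilate only if the outside dew point is safely below both the hold air
    dew point and the hold temperature, otherwise sweat may form on cargo or steelwork."""
    dew_outside = dew_point_c(outside_t, outside_rh)
    dew_hold = dew_point_c(hold_t, hold_rh)
    return dew_outside + margin_c < min(dew_hold, hold_t)

# Hypothetical watch readings
print(ventilation_advisable(outside_t=28.0, outside_rh=85.0, hold_t=18.0, hold_rh=70.0))  # False: keep closed
print(ventilation_advisable(outside_t=10.0, outside_rh=60.0, hold_t=18.0, hold_rh=75.0))  # True: ventilate

The same comparison lies behind the rule quoted above that ventilation is cut off in bad weather or when the outside humidity and temperature are too high.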
GOODS, SHIPS AND PORTS - INTEGRATED CONCEPTUAL APPROACH FOR THE INTERNATIONAL MARITIME TRANSPORT
SURUGIU FELICIA
Constanta Maritime University, Romania
ABSTRACT
Maritime transport is an important factor in the economic development of every maritime country. Its basic task is to provide shipping services, which may therefore be considered the product of the shipping economic activity. Given the current international shipping crisis, the key to success for every shipping organization, region and maritime country lies, on the one hand, in the efficiency and safety of its maritime shipping services and, on the other hand, in an integrated conceptual approach to the key elements, i.e. goods, ships and ports. It is the aim of this paper to broadly emphasize the particularities of each contributing key element.
Keywords: maritime transport, goods, ships, ports, transshipment.
1. INTRODUCTION
The fundamental aim of maritime transport and trade is to ensure the regular and safe domestic and international circuit of goods, in coordination with economic efficiency and according to the conventions, laws and contract terms in force. Transport is an element indispensable to life because it offers people the possibility to know, perceive and assimilate, as easily as possible, what human civilization and culture have to offer. The existence and improvement of means of transportation have allowed contact between various countries and nations, which has shaped the economic, political and cultural life of mankind. Maritime transport contributes to bringing geographical areas closer together, to the development of economic branches and to the territorial distribution of production and marketing. The level of development of maritime transport has a direct impact on the social division of labour, which, in its turn, determines specialization, as well as the increase in the degree of accessibility to natural resources and to the fruits of human labour. The basic elements indispensable to the achievement of the fundamental aim of transport are the following: goods - as the object of maritime transport; ships - as the maritime means of transporting goods; ports - as flow nodes for the transshipment and warehousing of goods.
2. GOODS, SHIPS AND PORTS AS KEY ELEMENTS OF MARITIME TRANSPORT
2.1 Goods - as objects of the maritime transport
It is obvious that, in the development of maritime transport through its three basic elements, the goods have an essential role, both for the development of ports and for the evolution of ships.
All three elements are permanently interdependent; however, past research has indicated that the main element in the economy of maritime transport is the goods, either as raw materials, through the diversity, quantity and regularity of their traffic, or as manufactured products: the more diverse, complex and demanded they are in international trade, the more advanced the economic, scientific and technical progress. Advanced technologies have influenced the ports, which have expanded and modernized in recent years in order to allow the profitable handling of goods. At the same time, at the request of owners, innovative processes have made possible the transition from the classic freighter to specialized vessels incorporating state-of-the-art technologies, following the changes that occurred on the freight markets, imposed by the qualitative and quantitative evolution of goods in maritime traffic. It is worth mentioning that the propelling element of maritime transport is the quality-quantity leap of the goods factor, the other two, ships and ports, being the effects which, in their turn, influence the cause, forming a dialectic deterministic chain. Considering the opinions expressed in the specialized literature, we may claim that the goods influence the development of ships and ports through their physical condition, the quantity and regularity of traffic on various transport routes, their quality, diversity and handling and stacking characteristics, their nuisance value, sensitivity, perishability and the specific freight of each type of goods. According to their physical condition and their handling and stacking characteristics, the goods subjected to maritime transport can be classified into two large categories: bulk cargo (or continuous goods), comprising homogenous lots of unpacked goods large enough to fill by themselves the transport capacity of a ship or of a cargo hold, which allow a continuous or nearly continuous loading flow; and general cargo (or discrete goods) which, by its nature, consists of non-homogenous lots of packed goods, smaller in size, which does not allow a continuous flow of loading and requires
2.3. Importance of ports as flow nodes for the transshipment and warehousing of goods
In the opinion of maritime transport experts, the modern maritime commercial port is a specially arranged seashore area where the maritime and land transport ways of the continental area served by the port are joined and where there is a continuous and organized two-way trade in goods. Initially, ports were defined as simple places where goods were loaded or unloaded. In the course of time, they have evolved from the status of a simple interface between maritime and land transport (first generation ports) to the current phase of industrial and commercial clusters where several services are rendered (third generation ports). Thus, we reach the concept of value-adding logistics, which means that besides the primary loading and unloading functions, the ports add value to the goods. Precisely in order to respond to this new aspiration, ports are currently designed and developed as close as possible to the places where goods are manufactured and distributed, within a wide area. Taking into account the opinions expressed in the specialized literature, we can state that ports, regardless of their size, have three important functions: transshipment, storage and industry. The transshipment function is very important and refers to the transfer of goods from ships to shore and back, in order to provide optimal conditions for the flow of goods from the shipper to the consignee. Improvement of this function depends on the following: an increase in operating speed and the introduction of a continuous flow of goods handling; a reduction of the lay time, thus leading to a decrease in the transshipment time; the modernization of maritime terminals, fitting them with modern handling installations and means of partial or total processing of raw materials; the efficiency of the infrastructure (piers, basins and quays), as well as of the superstructure works, represented by the means of transshipment placed alongside the berthing area, considering that transshipment takes place in the port basin, on quays or at operating berths; and active cooperation between ship and quay. The port storage function has two forms of manifestation: transitional storage and warehousing storage. Transitional storage refers to the situation when stocks are formed in order to decrease the gap between
As a first conclusion, modern maritime ports simultaneously fulfil the following functions: transit gate between maritime and land routes; maritime terminal, as an organizational unit for improving transit; and centre for the regional processing of mass-produced goods. As a second conclusion, it can be stated that maritime transport is a highly complex economic activity of national and international interest, which must be considered and developed in such a way as to meet the needs and to ensure profitability. The main function of maritime transport is to ensure the link between production and consumption, and it is characterized by two essential economic features: economic efficiency, in the sense of complying with defined requirements; and profitability, as an essential prerequisite of a broad economic activity, which involves transport costs and transport-related operations costs.