• Determination of the present and future state of the ocean
remains one of the greatest challenges in ocean sciences. • Physical and biological processes in the ocean and overlying atmosphere act and interact over a broad range of time and space scales. • Direct, simultaneous measurement on all of these scales is beyond the capabilities of even our most advanced observational tools. • Numerical simulations can provide insight into ocean behavior on many scales, but are dependent on both approximations for unresolved processes and on imposed surface and boundary conditions of questionable fidelity. • These dependencies limit a model’s ability to accurately simulate and predict the ocean. Ocean Sampling Networks • Ocean Sampling Day was initiated by the EU-funded Micro B3 (Marine Microbial Biodiversity, Bioinformatics, Biotechnology) project to obtain a snapshot of the marine microbial biodiversity and function of the world’s oceans. • It is a simultaneous global mega-sequencing campaign aiming to generate the largest standardized microbial data set in a single day. • This will be achievable only through the coordinated efforts of an Ocean Sampling Day Consortium, supportive partnerships and networks between sites. • This commentary outlines the establishment, function and aims of the Consortium and describes our vision for a sustainable study of marine microbial communities. Ocean Sampling Networks • NOAA has been involved in marine microbial sciences for many decades. • NOAA’s interest in marine microbes stems from the need to better identify diversity, number, distribution and function of marine microbes in ecosystem functioning and services. • Recently, NOAA joined the international efforts of the Ocean Sampling Day and of the Genomics Observatories. • The Ocean Sampling Day (OSD) is a simultaneous sampling campaign of the world’s oceans that will take place on the Summer solstice (June 21st) of 2014. 
• These cumulative samples, related in time, space and environmental parameters, will provide insights into fundamental rules describing microbial diversity and function and will contribute to the blue economy through the identification of novel, ocean-derived biotechnologies. • NOAA’s various line office Laboratories (NOS/CCEHBR and COL, NMFS/NWFSC, OAR/AOML and OER) participated in a pilot sampling project on June 21, 2013, establishing 10 sampling sites around the country, and also took part in the 2013 Winter solstice pilot sampling. Ocean Sampling Networks • The Ocean Sampling Day (OSD) is a simultaneous sampling campaign of the world’s oceans at sites selected as part of the initial OSD network. • The goal of the project is to make available a large-scale dataset of marine viral, bacterial, archaeal and protist genomes and metagenomes to study microbial diversity and function and to define new targets for biotechnological applications. • The database will provide a baseline for the marine environment, readily accessible to the research community, industry, the public and policy makers. Ocean Sampling Network • OSD builds on past efforts including the Global Ocean Sampling expedition (GOS), the International Census of Marine Microbes (ICoMM), and Tara Oceans. • In addition, OSD is being put together in collaboration with the Genomic Observatories Network (GOs Network), the Earth Microbiome Project (EMP), and the Global Genome Initiative (GGI). Monitoring systems
• Monitoring systems have mainly focused on measuring, monitoring, surveilling, and controlling underwater environments. • Sensor nodes to measure water parameters (salinity, conductivity, turbidity, pH, oxygen, temperature, depth, etc.) • Sediment and pollution sensor nodes • Acoustic sensors • Martinez et al. design, build, and test a portable, watertight, and user-friendly autonomous underwater sound recording device (USR) that monitors the underwater sound and pressure waves generated by anthropogenic activities, such as underwater blasting and pile driving. • It uses two hydrophones or other dynamic pressure sensors, allowing up to approximately 1 h and 55 min of data collection from both sensors simultaneously. • They proposed two versions. • The first is a submersible model deployable to a maximum depth of 300 m, and the second is a watertight but not fully submersible model. • The submersible version is appropriate for collecting long-duration measurements at depths that would require very long hydrophone cables or extension cables, while the non-submersible version is better suited to short-duration underwater monitoring. Pollution Monitoring • Ocean pollution was ignored for years, but in recent decades its consequences have become more visible. • On an individual level, pollutants can be detrimental to the activities, health, and survival of marine organisms and humans. • On a larger scale, pollution threatens biodiversity, climate, and the preservation of some of the most treasured locations on the planet. Pollution Monitoring • Many of the pollutants we think about have a smaller impact on ocean health than others, some of which we cannot even see. • Even too much of a seemingly harmless substance can have deleterious effects on the environment. • For instance, small quantities of elemental phosphorus and nitrogen are vital to life for people, animals, aquatic plants, and food crops. • When these nutrients are released into aquatic ecosystems in high concentrations, though, they can drastically over-fertilize algae. 
• Because high nutrient levels are linked to algal overgrowth, dissolved oxygen reduction, dead zones, and fish kills, they are now recognized as leading pollutants in the world’s coastal zones. • CO2 is another example of an invisible substance that may have quite harmful systemic impacts on the ocean when present in excess. Pollution Monitoring • Ocean pollutants vary widely, ranging from toxic chemicals to discarded toys to sound waves. • They are grouped into different classes based on similar characteristics, sources, and effects. • Different pollutant classes can also have different degrees or spatial scales of impact. • For example, oil slicks are dangerous to local marine organisms but usually don’t affect life outside the spill area. • Greenhouse gases, though, can result in widespread ecosystem changes that cover the globe, even in areas uninhabited by humans. • More research is needed to understand the risks posed by emerging contaminants and thereby manage their effects. Network Monitoring • The Ocean Bottom Seismometer (OBS) presented by Manuel et al. in [7] has been designed for long-duration seismic surveys. • It has low power consumption and is able to store large volumes of data with high resolution and Signal-to-Noise Ratio (SNR). • One of its key points is the low noise level of the acquisition system. • Pipeline networks can be used to allow a remote facility to detect and report the positions of any leakage, defect, or risk. • One of the main differences between networks used for pipelines and other networks is that a pipeline network is structured along a line, with all sensor nodes distributed on that line. • This characteristic imposes some reliability challenges in monitoring pipeline infrastructures. • These challenges are linked directly to the reliability of the network connecting the sensor nodes. 
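The line-topology point can be made concrete with a small sketch (ours, not from the source): assuming independent link failures, an end-to-end delivery along a pipeline's chain of hops succeeds only if every hop does, so reliability decays multiplicatively with chain length.

```python
# Hypothetical sketch: end-to-end reliability of a linear (pipeline) sensor
# network, assuming independent link failures. Every hop must succeed for a
# reading to reach the remote facility, so reliability multiplies per hop.

def chain_reliability(link_reliabilities):
    """Probability that a message traverses every hop of the line topology."""
    r = 1.0
    for p in link_reliabilities:
        r *= p
    return r

# Ten hops, each 99% reliable: the chain is noticeably weaker than any link.
ten_hops = chain_reliability([0.99] * 10)
```

This is why a long line of sensor nodes makes reliable monitoring hard: a single weak link bounds the reliability of the whole chain.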
• Having a reliable network is one of the main conditions for a reliable pipeline monitoring system. Environmental Monitoring and Tactical Surveillance Systems • Architecture • Each sensor node carries a set of sensors. These sensors can be lowered to a calculated depth via a cable, i.e., the sensors of the same node stay at the same depth. • The cable that connects the sensors to the surface buoy is used for communication between the sensors and the buoy. • By choosing a wired medium as the communication link between sensors and the surface buoy, we eliminate the difficulties of the acoustic medium, such as reverberation, environmental noise, etc. • Sensor nodes communicate with each other through the wireless medium over the sea surface by using the antennas at the surface buoys. • The buoys collect the sensed data from their sensors and convey this data to the data collecting buoy (cbuoy), i.e., the sink. Environmental Monitoring and Tactical Surveillance Systems Major challenges in underwater sensor networks • Battery power is limited, and batteries usually cannot be recharged, in part because solar energy cannot be exploited. • The available bandwidth is severely limited. • Channel characteristics include long and variable propagation delays, multipath, and fading problems. • High bit error rates. • Underwater sensors are prone to failures because of fouling, corrosion, etc. • Wired underwater networking is not feasible in all situations: – Temporary experiments – Breaking of wires – Significant cost of deployment – Experiments over long distances. Factors that affect the UWSN • The hardware constraints lead sensor nodes to frequently fail or be blocked for a certain amount of time. • These faults may occur because of a lack of power, physical damage, environmental interference, or software problems. • The large number of inaccessible and unattended sensor nodes, which are prone to frequent failures, makes topology maintenance a challenging task. 
• The successful operation of a WSN relies on reliable communication between the nodes in the network. • In a multi-hop sensor network, nodes communicate through a wireless medium, creating links between each other. Wireless Sensor Network • WSN (Wireless Sensor Network) is one of the most widely used technologies in commercial and industrial applications, owing to technical advances in processors, communication, and low-power embedded computing devices. • A wireless sensor network architecture is built with nodes that are used to observe the surroundings: temperature, humidity, pressure, position, vibration, sound, etc. • Applications – smart detecting – discovery of neighbor nodes – data processing and storage – data collection – target tracking – monitoring and control – synchronization – node localization – effective routing between the base station and nodes. Wireless Sensor Network • A Wireless Sensor Network is a kind of wireless network that includes a large number of circulating, self-directed, minute, low-powered devices named sensor nodes, also called motes. • These networks cover a huge number of spatially distributed, small, battery-operated, embedded devices that are networked to collect, process, and transfer data to the operators, and that have limited computing and processing capabilities. • Nodes are tiny computers that work jointly to form networks. WSN Architecture • The most common wireless sensor network architecture follows the OSI model. • The architecture of the WSN includes five layers and three cross layers. • Most sensor networks require five layers, namely application, transport, network, data link, and physical. • The three cross planes are power management, mobility management, and task management. • These layers are used to organize the network and make the sensors work together in order to raise the overall efficiency of the network. 
WSN Architecture • The architecture used in WSN is sensor network architecture. • There are two types of wireless sensor architectures: – Layered Network Architecture – Clustered Network Architecture WSN Architecture - Layered Network Architecture • This kind of network uses hundreds of sensor nodes as well as a base station. Here the network nodes are arranged into concentric layers. It comprises five layers as well as three cross layers, which include the following. • The five layers in the architecture are: – Application Layer – Transport Layer – Network Layer – Data Link Layer – Physical Layer • The three cross layers include the following: – Power Management Plane – Mobility Management Plane – Task Management Plane WSN Architecture - Layered Network Architecture • Application Layer • The application layer is responsible for traffic management and offers software for numerous applications that convert the data into a clear form to extract useful information. Sensor networks are deployed in numerous applications in different fields such as agriculture, military, environment, medicine, etc. WSN Architecture - Layered Network Architecture • Transport Layer • The function of the transport layer is to deliver congestion avoidance and reliability; many of the protocols intended to offer this function are applied on the upstream. • These protocols use dissimilar mechanisms for loss recognition and loss recovery. • The transport layer is especially needed when the system is planned to contact other networks. WSN Architecture - Layered Network Architecture • Network Layer • The main function of the network layer is routing. It has many tasks depending on the application, but the main concerns are power conservation, partial memory, buffers, and the fact that sensors do not have a universal ID and have to be self-organized. 
WSN Architecture - Layered Network Architecture • Data Link Layer • The data link layer is responsible for multiplexing of data streams, data frame detection, medium access control (MAC), and error control, and it ensures the reliability of point-to-point and point-to-multipoint connections. WSN Architecture - Layered Network Architecture • Physical Layer • The physical layer provides an edge for transferring a stream of bits over the physical medium. • This layer is responsible for frequency selection, carrier frequency generation, signal detection, modulation, and data encryption. • IEEE 802.15.4 is suggested as the standard for low-rate wireless personal area networks and wireless sensor networks because of its low cost, low power consumption, density, and communication range, which improve battery life. • CSMA/CA is used, and both star and peer-to-peer topologies are supported. There are several versions of IEEE 802.15.4. • The main benefit of using this kind of architecture in a WSN is that every node engages only in short-distance, low-power transmissions to neighboring nodes, due to which power utilization is lower than in other kinds of sensor network architecture. • This kind of network is scalable and has high fault tolerance. WSN Architecture - Clustered Network Architecture • In this kind of architecture, individual sensor nodes join groups known as clusters, based on the “LEACH Protocol” because it uses clusters. • The term ‘LEACH’ stands for “Low-Energy Adaptive Clustering Hierarchy”. • The main properties of this protocol include the following. • It is a two-tier hierarchical clustering architecture. • A distributed algorithm is used to organize the sensor nodes into groups, known as clusters. • In every cluster formed, the cluster head node creates the TDMA (Time-Division Multiple Access) schedules. • It uses the data fusion concept, which makes the network energy efficient. 
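A minimal sketch of LEACH's randomized cluster-head election, as commonly described in the literature (function and parameter names here are ours, not from the source): each round, an eligible node elects itself head with threshold T(n) = P / (1 - P * (r mod 1/P)), so every node tends to serve once per cycle of 1/P rounds.

```python
import random

# Illustrative sketch of LEACH's randomized cluster-head election.
# P is the desired fraction of cluster heads per round; a node that has
# already served as head in the current cycle of 1/P rounds is ineligible.

def leach_threshold(P, round_number, was_head_this_cycle):
    """LEACH self-election threshold T(n) for one node in one round."""
    if was_head_this_cycle:
        return 0.0                      # ineligible until the cycle ends
    return P / (1 - P * (round_number % int(1 / P)))

def elect_heads(node_states, P, round_number, rng=random.random):
    """Return ids of nodes electing themselves cluster head this round.
    node_states maps node id -> whether it was head this cycle."""
    heads = []
    for node_id, was_head in node_states.items():
        if rng() < leach_threshold(P, round_number, was_head):
            heads.append(node_id)
    return heads
```

Note how the threshold rises toward 1.0 late in the cycle, guaranteeing that the remaining eligible nodes eventually take a turn as head, which is what spreads the energy cost across the cluster.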
WSN Architecture • Wireless sensor nodes are the essential building blocks in a wireless sensor network. – sensing, processing, and communication – a node stores and executes the communication protocols as well as data processing algorithms • The node consists of sensing, processing, communication, and power subsystems. – trade-off between flexibility and efficiency WSN Architecture • The ADC converts the output of a sensor - which is a continuous, analog signal - into a digital signal. • The processor subsystem – interconnects all the other subsystems and some additional peripherals – its main purpose is to execute instructions pertaining to sensing, communication, and self-organization – It consists of: • processor chip • nonvolatile memory - stores program instructions • active memory - temporarily stores the sensed data • internal clock • Communication subsystem • Fast and energy-efficient data transfer between the subsystems of a wireless sensor node is vital. • However, the practical size of the node puts restrictions on the system buses. • Communication via a parallel bus is faster than serial transmission, – but a parallel bus needs more space. Protocols • Although many recently developed network protocols for wireless sensor networks exist, the unique characteristics of the underwater acoustic communication channel require new efficient and reliable data communication protocols, whose design is affected by many challenges such as: – The propagation delay is five orders of magnitude higher than in electromagnetic terrestrial channels due to the low speed of sound (1500 m/s). – The underwater acoustic channel is severely impaired, especially due to time-varying multipath and fading. 
– The available acoustic bandwidth depends on the transmission distance due to high environmental noise at low frequencies (lower than 1 kHz) and high medium absorption at high frequencies (greater than 50 kHz); only a few kHz may be available at tens of kilometers, and tens of kHz at a few kilometers. – High bit error rates and temporary losses of connectivity (shadow zones) can be experienced. – Underwater devices are prone to failures because of fouling and corrosion. – Batteries are energy constrained and cannot be recharged easily (solar energy cannot be exploited underwater). Protocols • Most impairments of the underwater acoustic channel can be addressed at the physical layer by designing receivers that are capable of dealing with high bit error rates, fading, and the intersymbol interference (ISI) caused by multipath. • Conversely, characteristics such as the extremely long and variable propagation delays, limited and distance-dependent bandwidth, and temporary loss of connectivity must be addressed at higher layers. Protocols • Medium Access Control Protocols – Code Division Multiple Access (CDMA) is a promising physical and MAC layer technique in this environment because: • it is robust to frequency-selective fading, • it compensates for the effect of multipath by exploiting Rake filters at the receiver, • it enables receivers to distinguish among signals simultaneously transmitted by multiple devices. Protocols • Routing Protocols – Proactive Protocols • Proactive protocols (e.g., Destination-Sequenced Distance Vector [DSDV], Optimized Link State Routing [OLSR]) incur a large signaling overhead to establish routes for the first time and each time the network topology is modified because of mobility, node failures, or channel state changes, since updated topology information must be propagated to all network devices. • In this way, each device can establish a path to any other node in the network, which may not be required in underwater networks. 
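Stepping back to the channel challenges listed earlier, the five-orders-of-magnitude delay gap can be checked with nominal numbers (a back-of-the-envelope sketch; the speed values are the usual nominal figures, not measurements from the source):

```python
# Sanity check: acoustic propagation delay underwater vs radio in air.
SPEED_OF_SOUND_SEA = 1500.0   # m/s, nominal value used in the text
SPEED_OF_LIGHT = 3.0e8        # m/s, approximate radio propagation speed

def propagation_delay(distance_m, speed_m_per_s):
    """One-way propagation delay in seconds."""
    return distance_m / speed_m_per_s

d = 1000.0                                          # one kilometre
acoustic = propagation_delay(d, SPEED_OF_SOUND_SEA) # about two-thirds of a second
radio = propagation_delay(d, SPEED_OF_LIGHT)        # a few microseconds
ratio = acoustic / radio                            # ~2e5: five orders of magnitude
```

Delays of this scale are why handshake-heavy MAC and routing protocols designed for terrestrial radio perform poorly underwater.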
• Also, scalability is an important issue for this family of routing schemes. • For these reasons, proactive protocols may not be suitable for underwater networks. Protocols • Destination-Sequenced Distance Vector (DSDV) routing – is a table-driven routing scheme for ad hoc mobile networks based on the Bellman–Ford algorithm. – It was developed by C. Perkins and P. Bhagwat in 1994. – The main contribution of the algorithm was to solve the routing loop problem. – Each entry in the routing table contains a sequence number; the sequence numbers are generally even if a link is present, else an odd number is used. – The number is generated by the destination, and the emitter needs to send out the next update with this number. – Routing information is distributed between nodes by sending full dumps infrequently and smaller incremental updates more frequently. Protocols • Optimized Link State Routing Protocol • OLSR is a proactive link-state routing protocol that uses hello and topology control (TC) messages to discover and then disseminate link-state information throughout the mobile ad hoc network. • Individual nodes use this topology information to compute next-hop destinations for all nodes in the network using shortest-hop forwarding paths. Protocols • Routing Protocols – Reactive Protocols • Reactive protocols (e.g., Ad hoc On-demand Distance Vector [AODV], Dynamic Source Routing [DSR]) are more appropriate for dynamic environments, but incur a higher latency and still require source-initiated flooding of control packets to establish paths. • Reactive protocols may be unsuitable for underwater networks because they also cause a high latency in the establishment of paths, which is amplified underwater by the slow propagation of acoustic signals. Protocols • Routing Protocols – Geographical Routing Protocols • Geographical routing protocols (e.g., Greedy-Face-Greedy [GFG], Partial Topology Knowledge Forwarding [PTKF]) are very promising for their scalability and limited signaling requirements. 
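The DSDV table-update rule described above can be sketched in a few lines (a hedged illustration; the data structures and names are ours, not from the DSDV specification): a received advertisement replaces the current entry only if it carries a newer destination sequence number, or the same sequence number with a better metric.

```python
# Illustrative sketch of DSDV's route-update rule: freshness (sequence
# number) wins first, then the hop-count metric breaks ties. This ordering
# is what prevents routing loops from stale information.

def dsdv_update(table, dest, seq, metric, next_hop):
    """Apply one received route advertisement to the routing table."""
    current = table.get(dest)
    if (current is None or seq > current["seq"]
            or (seq == current["seq"] and metric < current["metric"])):
        table[dest] = {"seq": seq, "metric": metric, "next_hop": next_hop}
    return table

table = {}
dsdv_update(table, "B", seq=100, metric=3, next_hop="C")
dsdv_update(table, "B", seq=100, metric=5, next_hop="D")  # ignored: worse metric
dsdv_update(table, "B", seq=102, metric=5, next_hop="D")  # accepted: newer seq
```

After the three updates, the entry for "B" holds the newest sequence number (102) via next hop "D", even though its metric is worse than the older route: freshness dominates.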
– However, global positioning system (GPS) radio receivers do not work properly in the underwater environment. – Still, underwater sensing devices must estimate their current position, irrespective of the chosen routing approach, to associate the sampled data with their 3D position. Protocols • Transport Layer Protocols – A transport-layer protocol is required to achieve reliable transport of event features and to perform flow and congestion control. – A transport layer protocol designed for the underwater environment, Segmented Data Reliable Transport (SDRT), was recently proposed. – The basic idea of SDRT is to use Tornado codes to recover erroneous packets and so reduce retransmissions. – The data packets are transmitted block-by-block, and each block is forwarded hop-by-hop. – SDRT keeps sending packets inside a block until it receives positive feedback, and thus it wastes some energy. Protocols • Goals of reliable data transport for underwater sensor networks: – high energy efficiency – high channel utilization – simple protocol management. • Segmented Data Reliable Transport (SDRT) is a hybrid approach exploiting both Forward Error Correction (FEC) and Automatic Repeat reQuest (ARQ). • SDRT assumes that receivers can detect corrupted packets. • When a packet is completely lost or unrecoverable from corruption, it is treated as “lost”. Protocols • In SDRT, a data source node first groups data packets into blocks. • Then the data packets are delivered from the source to the destination, block by block and hop by hop. • An intermediate node encodes each data block using an efficient erasure coding scheme and pumps the encoded packets into the channel. • After a receiver receives the encoded packets, it decodes and reconstructs the original block. • After the reconstruction is done, the receiver encodes the block again and relays it to the next-hop node. • For each relay of a data block, the sender keeps pumping encoded packets until receiving a positive ACK from its next hop. 
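SDRT's keep-pumping behaviour at a single hop can be sketched as follows. This is a simplified model, not the real protocol: actual SDRT uses Tornado erasure codes, while here we only keep the property that a block of k packets is decodable once any k encoded packets arrive; the loss model and all names are our assumptions.

```python
import random

# Simplified single-hop model of SDRT's "pump until ACK" block transfer.
# The sender transmits encoded packets over a lossy acoustic channel; once
# the receiver has accumulated k of them it can decode the block and ACK.

def send_block_one_hop(k, loss_rate, rng=random.random):
    """Pump encoded packets until the receiver can decode (k arrivals).
    Requires loss_rate < 1. Returns the number of packets transmitted;
    anything beyond k models the energy wasted before the ACK arrives."""
    sent = received = 0
    while received < k:          # no ACK yet -> keep pumping
        sent += 1
        if rng() > loss_rate:    # packet survived the channel
            received += 1
    return sent

random.seed(7)
cost = send_block_one_hop(k=32, loss_rate=0.2)  # always >= 32 transmissions
```

On a lossless channel the sender stops after exactly k packets; with 20% loss it typically sends around k / 0.8 packets, which illustrates the energy overhead the text attributes to SDRT.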
GPS • The Global Positioning System (GPS), originally Navstar GPS, is a satellite-based radio-navigation system owned by the United States government and operated by the United States Space Force. • It is one of the global navigation satellite systems (GNSS) that provide geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. • Obstacles such as mountains and buildings block the relatively weak GPS signals. GPS • GPS does not require the user to transmit any data. • It operates independently of any telephonic or internet reception, though these technologies can enhance the usefulness of the GPS positioning information. • GPS provides critical positioning capabilities to military, civil, and commercial users around the world. • The United States government created the system, maintains it, and makes it freely accessible to anyone with a GPS receiver. Underwater GPS • The global positioning system (GPS) is commonly used on land and in the air to obtain position and timing information. • However, the radio frequencies used by GPS cannot penetrate seawater, requiring a different system for underwater positioning. • This information is useful to a variety of ships, automated vehicles, and even individuals. Underwater GPS • One solution is to measure positions relative to a framework of baseline stations. • Classic underwater positioning systems: – Long baseline (LBL) systems – Short baseline (SBL) systems – Ultra-short baseline (USBL) systems • In a long baseline system, baseline stations are placed on the seafloor and their locations are measured precisely. • The underwater target, such as an autonomous underwater vehicle (AUV), remotely operated vehicle (ROV), or a diver, transmits an acoustic signal that is received by the baseline stations. 
• The baseline stations send an acoustic signal back to the underwater target, which records the response. • Times of arrival of the signals from the baseline stations are used to estimate the location of the underwater target. Underwater GPS • The Positioning System for Deep Ocean Navigation (POSYDON) program is working to extend the concept of long baseline systems to allow positioning across ocean basins. • An undersea target would use signals transmitted by a small number of long-range acoustic sources to obtain continuous, accurate positioning without surfacing. Underwater GPS • Short baseline systems and ultra-short baseline systems are frequently mounted on vessels. • Short baseline systems have three or more sonar transducers connected by wires to a central control box. • Ultra-short baseline systems have three or more sonar transducers mounted on a rigid pole. • The underwater target transmits an acoustic signal that is received by the sonar transducers. • Times of arrival of the signals from the underwater target are used to estimate its location. Underwater GPS • GPS Intelligent Buoys (GIBs) are a portable tracking system comprising a network of surface buoys equipped with GPS receivers and submerged hydrophones. • Each hydrophone receives acoustic signals transmitted by a synchronized pinger onboard an underwater target. • The buoys communicate the times of arrival of the received signals to a central station, such as a local support vessel, where the position of the underwater target is estimated. Autonomous Underwater Vehicles • An autonomous underwater vehicle (AUV) is a robot that travels underwater without requiring input from an operator. 
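The time-of-arrival principle shared by the LBL and GIB systems above reduces to the same computation: travel times from surveyed stations give ranges (time times sound speed), and the target sits where the distances best match those ranges. The toy sketch below uses a coarse 2D grid search in place of a real solver; the station coordinates, target position, and 1500 m/s sound speed are illustrative assumptions, not values from the source.

```python
import math

# Toy long-baseline positioning: convert one-way travel times to ranges,
# then search for the position whose distances to the surveyed stations
# best match those ranges.
SOUND_SPEED = 1500.0  # m/s, nominal speed of sound in seawater

stations = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]  # surveyed positions

def locate(one_way_times, step=10.0):
    """Grid-search the (x, y) position minimizing total range mismatch."""
    ranges = [t * SOUND_SPEED for t in one_way_times]
    best, best_err = None, float("inf")
    for xi in range(101):
        for yi in range(101):
            x, y = xi * step, yi * step
            err = sum((math.hypot(x - sx, y - sy) - r) ** 2
                      for (sx, sy), r in zip(stations, ranges))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Simulate a target at (400, 300): each travel time is distance / speed.
true_pos = (400.0, 300.0)
times = [math.hypot(true_pos[0] - sx, true_pos[1] - sy) / SOUND_SPEED
         for sx, sy in stations]
estimate = locate(times)
```

Real systems solve this with least-squares in 3D and must also correct for sound-speed variation with depth, temperature, and salinity, but the range-intersection geometry is the same.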
• AUVs constitute part of a larger group of undersea systems known as unmanned underwater vehicles, a classification that also includes non-autonomous remotely operated underwater vehicles (ROVs), which are controlled and powered from the surface by an operator/pilot via an umbilical or remote control. • In military applications, an AUV is more often referred to as an unmanned undersea vehicle (UUV). Autonomous Underwater Vehicles • The U.S. Navy Unmanned Undersea Vehicle (UUV) Master Plan identified the following UUV missions: – Intelligence, surveillance, and reconnaissance – Mine countermeasures – Anti-submarine warfare – Inspection/identification – Oceanography – Communication/navigation network nodes – Payload delivery – Information operations – Time-critical strike Autonomous Underwater Vehicles • The Mesobot is currently being designed to study mesopelagic (midwater) processes. • Mesobot will use cameras and lights to non-invasively follow mesopelagic animals, • track the fate of descending particles, • and follow rising bubbles and droplets, • enabling scientists to characterize in situ behavior over extended periods for the first time.