Article

A New Method for Training CycleGAN to Enhance Images of Cold Seeps in the Qiongdongnan Sea

Yuanheng Li, Shengxiong Yang, Yuehua Gong, Jingya Cao, Guang Hu, Yutian Deng, Dongmei Tian and Junming Zhou
1 Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou), Guangzhou 511458, China
2 Ocean College and Institute of Ocean Research, Fujian Polytechnic Normal University, Fuzhou 350300, China
3 Guangzhou Marine Geological Survey, Guangzhou 510075, China
* Author to whom correspondence should be addressed.
Submission received: 21 November 2022 / Revised: 31 January 2023 / Accepted: 31 January 2023 / Published: 3 February 2023

Abstract
Clear underwater images can help researchers detect cold seeps, gas hydrates, and biological resources. However, the quality of these images suffers from nonuniform lighting, a limited range of visibility, and unwanted signals. CycleGAN has been widely studied for underwater image enhancement, but it is difficult to apply the model to the further detection of the Haima cold seeps in the South China Sea because the model can be hard to train when the dataset used is not appropriate. In this article, we devise a new method of building a dataset using MSRCR, choosing the best images based on the widely used UIQM scheme. The experimental results show that a well-performing CycleGAN can be trained with a dataset built using the proposed method. The model has good potential for applications in detecting the Haima cold seeps and can be applied to other cold seeps, such as those in the North Sea. We conclude that the proposed dataset-building method can be applied to train CycleGAN for enhancing images of cold seeps.

1. Introduction

Exploring cold seeps in the deep water of the South China Sea has attracted extensive scientific interest [1]. Current research reveals that fluid escape at submarine cold seeps often reflects the existence of methane hydrates or deep natural gas [2]. Feng et al. conducted a quantitative study on the contribution of cold seep fluids to the bottom-water carbon reservoir of the cold seeps in the South China Sea; such cold seeps might advance our understanding of the dynamics and environmental impacts of hydrocarbons [3]. Di et al. determined the bubble size distribution, release rate, and diameters using a semiautomatic bubble-counting algorithm and observed that the Haima cold seeps in the South China Sea may be a source of methane for the ocean [4].
Clear underwater images can be a powerful tool for researchers in detecting cold seeps, gas hydrates, and biological resources. However, complex illumination conditions in the water, such as nonuniform lighting, diminished colors, and unwanted signals, limit the range of optical detection [5]. To solve these problems, underwater image enhancement technology has been widely studied [6]. The white balance method and the shades-of-gray method have been successfully applied [7,8,9]. At the same time, however, these methods may lead to image distortion because of the overcompensation of the red channel [10]. These methods do not consider the fact that the color perceived by the retina is determined by an object's ability to reflect long-, medium-, and short-wavelength light; thus, they always suffer from an imbalance among the color channels. In consideration of the issues mentioned above, a visual model named retinex was created to explain human color perception [11,12,13]. Joshi and Kamathe applied retinex to underwater image enhancement [14]. With single-scale retinex processing, one can obtain either dynamic range compression or color and lightness rendition, but not both simultaneously. Therefore, Rahman et al. presented a multi-scale retinex model to overcome this limitation [15]. Furthermore, Jobson et al. proposed a multi-scale retinex model with color restoration (MSRCR) to solve the distortion problem caused by strong local contrasts [16]. With the development of retinex, MSRCR has also been applied to underwater image enhancement, and good results have been achieved [17]. However, the underwater environment is complex, and MSRCR is not flexible enough; its parameters must be tuned many times to obtain good enhancement results, which limits its application range [6]. To solve this problem, Hu et al. introduced the natural image quality evaluation index and a gravitational search algorithm into a multi-scale retinex model to obtain parameters with an excellent adaptive ability relative to environmental changes [18].
With the development of artificial intelligence (AI), underwater image enhancement based on AI has achieved good results [19]. Perez et al. [20] proposed an underwater image enhancement method using a convolutional neural network (CNN), which can be regarded as the first application of deep learning to underwater image enhancement. Anwar et al. later enhanced underwater images by constructing an underwater CNN [21]. Although both CNNs and underwater CNNs have been successfully applied in the field of underwater image enhancement, the complexity and scarcity of underwater images seriously restrict their applications [22]. Deep learning frameworks can effectively improve the enhancement quality, although they fail to preserve the details of far-off objects in underwater images. To overcome these issues, Han et al. proposed a deep supervised residual dense net: it first uses residual dense blocks to extract features and then adds residual path blocks between the encoder and decoder to reduce the semantic differences between low-level and high-level features, thus achieving good qualitative and quantitative enhancement effects [23]. Hu et al. introduced the natural image quality evaluation index to give the generated images higher contrast; the images enhanced by this algorithm are clearer than the ground-truth images provided by the existing dataset [24]. Zhang et al. used a two-stage detection network and a self-built dataset to construct an end-to-end model and improved the detection speed by 6% [25].
Generative adversarial nets (GANs) provide another possibility for transferring an image from one domain to another [26]. Domain conversion can be performed by CycleGAN in the absence of paired data [27], thereby providing a new idea for underwater image enhancement. Fabbri et al. proposed UGAN, which uses CycleGAN to generate degraded underwater image data and improve the image quality [28]. A weakly supervised model inspired by CycleGAN and built on the GAN framework of Goodfellow et al. [26] uses structural loss rather than the cycle consistency loss of CycleGAN to generate a restored image from a distorted one. This method addresses the influence of different types of water on the enhancement results, but it often produces strange textures and is highly dependent on the training dataset. Ahn et al. [29] used matched clear and unclear underwater images as a training dataset and adjusted the model structure of CycleGAN to improve the effect. The multichannel CycleGAN technology proposed by Lu et al. [30] also addresses the influence of different types of water on the enhancement results, but it is likewise highly dependent on the training dataset. An end-to-end underwater image enhancement method based on CycleGAN for unpaired datasets was proposed by Du et al. in 2022; they used the URPC2019 and EUVP datasets to train the model, effectively restored the blue-green background, and transformed blurred, degraded underwater images into clear ones [31]. Their work proves that CycleGAN can effectively enhance underwater images, but because its dataset requires clear and fuzzy underwater images at the same time, building such a dataset is often difficult in practical work; thus, the application of CycleGAN is limited [5]. Moreover, because CycleGAN can render photographs into their respective styles [27], when a large number of underwater images are required to train other AI models, a trained CycleGAN can also be used to generate photographs with the style of underwater images. CycleGAN plays a basic role in unpaired dataset training, but cycle consistency limits the training efficiency. To address this issue, Jiang et al. proposed EnlightenGAN [32], an effective and efficient GAN suitable for unpaired training; applied to low-light image enhancement, it easily enhances real-world low-light images from different domains.
Although the method based on MSRCR has been widely used, it suffers from difficulties in parameter setting, limiting its practical application. Although the underwater image enhancement method based on CNNs has a good effect, it is difficult to apply because of the complex underwater environments it must operate in and the scarcity of data resources. CycleGAN provides a novel scheme for solving these problems, but training the network is difficult, especially when an inappropriate land dataset is used, and the training effect of a GAN may then be elusive. Unfortunately, in the process of deep-sea cold seep exploration, it is often difficult to obtain a large number of clear images for training, due in part to the limitation of capital costs. As shown in Figure 1, when the dataset is not appropriate, it is difficult for CycleGAN to obtain good results via training. More effort needs to be directed toward finding a suitable method that can make CycleGAN practical for applications in cold seep underwater image enhancement.
When we used the standard model for training, we observed that it could not correctly enhance cold seep images from all environments, and when we used a self-built dataset that was inappropriate, we were not able to complete the model training. To solve these issues, we propose a new method of building datasets with the help of MSRCR and the underwater image quality measure (UIQM) to achieve good results. We used actual images collected by a remotely operated vehicle (ROV) at the Haima cold seeps. We then chose images of the representative working conditions of the Haima ROV to test the enhancement effect of MSRCR with different parameters. After an automatic comparison using UIQM, the MSRCR parameters with the best effect were used to build the dataset to train CycleGAN. After hundreds of epochs, the model was trained for practical applications. The enhancement experiment using the trained CycleGAN demonstrated that the images were enhanced very well and that CycleGAN had a good generalization ability. Considering the above works, the main contributions of this paper are as follows:
  • The effect of cold seep image enhancement using MSRCR was tested on different imaging devices of different detectors, showing that a single enhancement coefficient or a fixed parameter table cannot meet the requirements of different scenes.
  • CycleGAN was trained using the standard dataset and applied to the enhancement of cold seep images. It was observed that the model worked well under some conditions and failed under others.
  • We found an effective way to build datasets to train CycleGAN with the help of MSRCR for cases in which a clear image dataset is difficult to obtain.
  • Finally, an underwater image enhancement CycleGAN was trained that can be used in practical applications rather than only on standard datasets. Compared with previous studies, the training ideas proposed in this paper may be applied to any underwater scene, with good universal applicability.
The rest of this paper is organized as follows. The basic methods of MSRCR and CycleGAN are briefly presented in Section 2. Section 3 provides the details of the tests and the results of the experiments. A discussion and conclusions are presented in Section 4 and Section 5, respectively.

2. Materials and Methods

2.1. Principle of MSRCR

The word retinex is a combination of retina and cortex [11]. Retinex theory covers two aspects: the color of an object is determined by its ability to reflect light rather than by the absolute intensity of the reflected light, and the perceived color is not affected by the uniformity of illumination [16]. Ma et al. [33] systematically introduced the idea of underwater image enhancement using MSRCR. Although MSRCR can yield good results when applied to underwater image enhancement, it is often necessary to test and update the relevant parameters according to the underwater illumination and turbidity conditions to obtain good image enhancement and color restoration, which limits its application [33].
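For context, the multi-scale retinex (MSR) output of [15,16] is commonly written as

$$R_{MSR}(x, y) = \sum_{k=1}^{K} w_k \left\{ \log I(x, y) - \log \left[ I(x, y) * G_k(x, y) \right] \right\},$$

where $I$ is the input image, $G_k$ is a Gaussian surround function at the $k$-th scale, $w_k$ are weights that sum to one, and $*$ denotes convolution. MSRCR then multiplies each channel $i$ by a color restoration factor of the form $C_i(x, y) = \beta \left[ \log \left( \alpha I_i(x, y) \right) - \log \sum_j I_j(x, y) \right]$, with gain $\beta$ and nonlinearity $\alpha$ [16]. These standard forms are given here for reference; the simplified variant actually used in this study is described in Section 3.1.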

2.2. Principle of CycleGAN

2.2.1. Net Structure

As shown in Figure 2, CycleGAN has four basic networks: two generators and two discriminators [27]. The generators map between the two data distributions, M and N: generator G translates data from domain M to domain N, and generator F translates data from N to M. Their purpose is to generate images that deceive the two discriminator networks. Discriminators D_M and D_N judge the translated images, verifying whether a given image is a real sample from M or N or an image produced by a generator. A six-block ResNet [34] is used in CycleGAN to preserve the properties of the original input. As in [35], each discriminator network takes an image as input and tries to predict whether it is an original image or the output of a generator.
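For concreteness, the following is a minimal PyTorch-style sketch of this four-network layout; the layer choices are illustrative assumptions for exposition, not the authors' MindSpore implementation.

```python
# Sketch of the CycleGAN layout: two generators (G: M->N, F: N->M)
# and two patch discriminators (D_M, D_N). Illustrative, not the paper's code.
import torch.nn as nn

class ResnetBlock(nn.Module):
    """Residual block; the skip connection preserves properties of the input."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch))

    def forward(self, x):
        return x + self.body(x)

def make_generator(ch=64, n_blocks=6):
    """Encoder -> six ResNet blocks -> decoder, following the structure above."""
    return nn.Sequential(
        nn.Conv2d(3, ch, 7, padding=3), nn.ReLU(True),
        nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(True),  # downsample
        *[ResnetBlock(ch * 2) for _ in range(n_blocks)],
        nn.ConvTranspose2d(ch * 2, ch, 3, stride=2, padding=1, output_padding=1),
        nn.ReLU(True),
        nn.Conv2d(ch, 3, 7, padding=3), nn.Tanh())                     # back to RGB

def make_discriminator(ch=64):
    """Discriminator that scores image patches as real or generated."""
    return nn.Sequential(
        nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
        nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
        nn.Conv2d(ch * 2, 1, 4, padding=1))  # per-patch real/fake scores

G, F = make_generator(), make_generator()             # M -> N and N -> M
D_M, D_N = make_discriminator(), make_discriminator()
```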

2.2.2. Cycle Consistency Loss

We followed the design of reference [26] to calculate the loss of CycleGAN. As shown in Figure 3, G(m) is obtained by substituting data m into generator G; substituting G(m) into the inverse generator F yields F(G(m)) ≈ m, and the forward consistency loss is obtained by calculating the loss L. Similarly, the backward consistency loss is obtained by calculating G(F(n)) ≈ n.
The cycle consistency loss $L_{cyc}$ can be expressed as

$$L_{cyc}(G, F) = \mathbb{E}_{m \sim p_{data}(m)}\left[ \left\| F(G(m)) - m \right\|_1 \right] + \mathbb{E}_{n \sim p_{data}(n)}\left[ \left\| G(F(n)) - n \right\|_1 \right],$$

where $\mathbb{E}$ denotes the expected value and $p_{data}$ is the data distribution. $F$, $G$, $m$, and $n$ have the same meanings as in Figure 3, and $\| \cdot \|_1$ is the L1 norm. The full objective is

$$L(G, F, D_M, D_N) = L_{GAN}(G, D_N, M, N) + L_{GAN}(F, D_M, N, M) + \lambda L_{cyc}(G, F),$$

where $\lambda$ is a constant that controls the relative importance of the two objectives. The two generator–discriminator pairs are trained adversarially to solve

$$G^*, F^* = \arg \min_{G, F} \max_{D_M, D_N} L(G, F, D_M, D_N).$$
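Under the same illustrative assumptions as the sketch in Section 2.2.1, these losses could be computed as follows; the least-squares form of the adversarial term and λ = 10 are common CycleGAN choices [27], not necessarily this study's exact settings.

```python
# Cycle consistency loss and full generator objective for the networks above.
import torch.nn.functional as Fn

def cycle_consistency_loss(G, F, m, n):
    """L_cyc(G, F): L1 error after the round trips M->N->M and N->M->N."""
    forward_loss = Fn.l1_loss(F(G(m)), m)    # F(G(m)) ~ m
    backward_loss = Fn.l1_loss(G(F(n)), n)   # G(F(n)) ~ n
    return forward_loss + backward_loss

def generator_objective(G, F, D_M, D_N, m, n, lam=10.0):
    """Two adversarial terms plus lambda * L_cyc (least-squares GAN form)."""
    adv_g = ((D_N(G(m)) - 1) ** 2).mean()    # G tries to make D_N say "real"
    adv_f = ((D_M(F(n)) - 1) ** 2).mean()    # F tries to fool D_M
    return adv_g + adv_f + lam * cycle_consistency_loss(G, F, m, n)
```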

3. Tests and Results

3.1. Dataset Preparation Using the MSRCR Method

Because the original MSRCR has many parameters, which makes its application difficult, in this study, we selected the simplified MSRCR method from the GNU image manipulation program, which reduces the number of key parameters to one [36]. The method first calculates the mean value $Mean$ and mean square deviation $Var$ of each RGB channel of the image $R(x, y)$. Then, the minimum and maximum values, $Min$ and $Max$, of each channel are calculated as

$$Min = Mean - Dyn \cdot Var, \quad Max = Mean + Dyn \cdot Var,$$

where $Dyn$ is the parameter used to adjust color saturation contamination around the new average color. This is the parameter to tweak for optimal results, because its effect is extremely image-dependent. Finally, the method performs a linear mapping to output the enhanced image $\hat{R}(x, y)$:

$$\hat{R}(x, y) = \frac{Value - Min}{Max - Min} \times 255,$$

where $Value$ indicates the RGB value of each pixel in the image.
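A NumPy sketch of this mapping, as we read the equations above, is shown below; clipping values outside [Min, Max] before the linear mapping, and the function name, are our assumptions.

```python
import numpy as np

def dyn_stretch(R, dyn):
    """Simplified (GIMP-style) MSRCR output mapping: clamp each RGB channel of
    the retinex response R to [Mean - Dyn*Var, Mean + Dyn*Var], then map the
    result linearly to 0..255. Var follows the text's notation; GIMP's retinex
    uses the per-channel standard deviation here."""
    out = np.empty_like(R, dtype=np.float64)
    for c in range(3):                                # process each RGB channel
        ch = R[..., c].astype(np.float64)
        mean, var = ch.mean(), ch.std()
        lo, hi = mean - dyn * var, mean + dyn * var   # Min and Max
        ch = np.clip(ch, lo, hi)
        out[..., c] = (ch - lo) / (hi - lo) * 255.0   # linear mapping
    return out.astype(np.uint8)
```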
The parameter Dyn plays an important role in this method. According to the representative working conditions in the actual detection of cold seeps with the ROV, we adjusted the relevant parameters for the corresponding images and then modified the relevant parameters of MSRCR. Figure 4 and Table 1 present the quantitative enhancement effect of each parameter. In Figure 4, the first row shows the original images collected by the ROV, and the other rows depict the enhanced images. To give the parameter a wide variation range with a sufficiently small interval, we varied Dyn from 0.4 to 6.0 in steps of 0.2 to enhance the cold seep images under different operating modes: Dyn = 6.0 was already too high and Dyn = 0.4 too low, with both extremes leading to deteriorated enhancement, so these were taken as the maximum and minimum values. The quality of the enhancement was judged by the UIQM introduced in [37], and the training dataset was then chosen automatically based on the principle of maximizing the UIQM value. For the different cameras of the Haima ROV under different working conditions, the best Dyn parameter of MSRCR was distributed across the selected parameter range listed in Table 1. For the sake of brevity, Table 1 only lists the statistics of MSRCR enhancement for a step size of 0.8.
To fairly assess these different parameters, we selected the standard metric UIQM and used the same parameters as in [37]. We then assessed the enhancement results of each image in the original dataset under the different Dyn values and identified the one that maximized the UIQM value. Finally, the selected images were inserted into the training dataset as the conversion dataset of the original underwater dataset to train CycleGAN.
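For reference, the UIQM of [37] is a linear combination of a colorfulness measure (UICM), a sharpness measure (UISM), and a contrast measure (UIConM), with the weights given in that work:

$$\mathrm{UIQM} = c_1 \cdot \mathrm{UICM} + c_2 \cdot \mathrm{UISM} + c_3 \cdot \mathrm{UIConM}, \quad c_1 = 0.0282, \; c_2 = 0.2953, \; c_3 = 3.5753.$$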
After a comparison guided by the principles that every detail in the image should be visible and that the colors should be as natural as possible, we chose the Dyn parameters that led to the maximum UIQM value and inserted the corresponding images into the training dataset. Although MSRCR can enhance underwater images, different parameters need to be used for different scenarios, which seriously affects its practical application. Therefore, we unified the images with the best MSRCR enhancement effect under different parameters into one dataset covering the different use scenarios and used this dataset to train the CycleGAN model, as sketched below. A total of 80% of the images were randomly selected for training, and the other 20% were used for verification.
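The selection and split could be organized as in the following sketch, where msrcr(img, dyn) and uiqm(img) are stand-ins for the enhancement of Section 3.1 and an implementation of the metric in [37].

```python
import random
import numpy as np

def build_training_set(images, dyn_values=np.arange(0.4, 6.01, 0.2)):
    """For each raw ROV image, keep the MSRCR result whose Dyn value maximizes
    UIQM, then split the kept images 80/20 into training and verification sets."""
    chosen = []
    for img in images:
        candidates = [msrcr(img, d) for d in dyn_values]
        chosen.append(max(candidates, key=uiqm))   # highest-UIQM result wins
    random.shuffle(chosen)                         # random 80/20 split
    split = int(0.8 * len(chosen))
    return chosen[:split], chosen[split:]          # train, verification
```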

3.2. CycleGAN Training and Underwater Image Enhancement

We used the MindSpore framework. Compared with other AI frameworks, such as TensorFlow and PyTorch, MindSpore remains consistent with the Python language style, and its intermediate representation technology ensures efficiency [38]. The parameters referenced in [6] were used for model training. On this basis, to make full use of the hardware resources and improve the image resolution, we set the training image resolution to 512 × 512.
All tests were run on a Lenovo P920 workstation. The operating system was CentOS Linux 8.6, and the deep learning framework was MindSpore 1.9.0. Using the method proposed in this paper, the dataset contained a total of 2438 images, corresponding to the five representative underwater working conditions in Figure 4. The CPU was an Intel Xeon 5218R, and the GPU was an NVIDIA 3090. All the data were stored on a Lexar NM800 hard disk.
The training loss curves (G_loss and D_loss) reflecting the cycle consistency training are shown in Figure 5. In the first 100 epochs, the loss curves descended rapidly, and Table 1 confirms this observation: in the first 100 epochs of training, the UIQM values of the images enhanced by the trained CycleGAN model were not yet stable because the variance was too large, whereas after 100 epochs, the mean and variance for the images in the cold seep dataset tended to be stable. The average training time was 17′56″ per epoch, so the full 350-epoch training took ~105 h. Because the loss decreased very slowly after 200 epochs, the final CycleGAN parameters were chosen by also weighing the time cost of training.
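Schematically, each epoch repeats a step like the following, which produces the two curves; this continues the PyTorch-style sketches of Section 2.2 and describes standard alternating CycleGAN updates, not a transcription of the authors' MindSpore code.

```python
def train_step(G, F, D_M, D_N, opt_g, opt_d, m, n, lam=10.0):
    """One iteration: update the generators on the full objective (G_loss),
    then the discriminators on real vs. generated batches (D_loss)."""
    opt_g.zero_grad()
    g_loss = generator_objective(G, F, D_M, D_N, m, n, lam)
    g_loss.backward()
    opt_g.step()

    opt_d.zero_grad()  # least-squares terms: real -> 1, generated -> 0
    d_loss = (((D_N(n) - 1) ** 2).mean() + (D_N(G(m).detach()) ** 2).mean()
              + ((D_M(m) - 1) ** 2).mean() + (D_M(F(n).detach()) ** 2).mean())
    d_loss.backward()
    opt_d.step()
    return g_loss.item(), d_loss.item()
```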
Figure 6 shows the effect of using the trained CycleGAN to enhance images from several working conditions corresponding to those shown in the first row of Figure 4. It can be observed from the figure that the images under all working conditions were brighter after enhancement, with richer colors and better color consistency. As a comparison, the images in the second row show the enhancement results of the network trained with the standard EUVP dataset (http://irvlab.cs.umn.edu/resources/euvp-dataset, accessed on 22 September 2022). Although the images enhanced by the EUVP-trained CycleGAN seem more colorful, that model failed in a specific scene (the leftmost image).
Figure 7 shows the chosen images of the cold seeps in the North Sea [39]. The statistics of the UIQM values for MSRCR and for the CycleGAN training steps are provided in Table 1. From the images and the UIQM values, we can see that, before enhancement, the image colors were dim and the target objects were not clear. After enhancement, the image colors were rich and the contrast was obvious. Moreover, as in the dataset tests, the UIQM value was still better than that of MSRCR. As seen in the third row of Figure 7, the CycleGAN trained with the EUVP dataset failed to enhance the images collected from the cold seeps in the North Sea [39].

3.3. Versatility Test of the CycleGAN

Training a classification model or a target recognition model requires a large amount of labeled data. However, in practical applications, directly obtaining a substantial amount of underwater data is difficult. Using CycleGAN to degrade images may provide another route to training underwater deep models. As shown in Figure 8, bright images could be degraded into underwater-style images.
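As an illustration, a trained reverse generator could be applied as follows; this continues the PyTorch-style sketches above, whereas the authors' models were trained in MindSpore.

```python
import torch
from torchvision import transforms

to_tensor = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # map to [-1, 1] for tanh output
])

@torch.no_grad()
def degrade_to_underwater(F, pil_image):
    """Render a bright in-air photograph in the underwater style using the
    trained reverse generator F, yielding synthetic training data."""
    F.eval()
    x = to_tensor(pil_image).unsqueeze(0)   # 1 x 3 x 512 x 512 batch
    y = F(x).squeeze(0) * 0.5 + 0.5         # back to [0, 1]
    return transforms.ToPILImage()(y.clamp(0, 1))
```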

3.4. EnlightenGAN Test

To verify the generality of the method proposed in this article, we tested it using EnlightenGAN [32]. The hardware used in the training process was the same as that used for CycleGAN. The AI framework was PyTorch 1.12.1. We made very few changes to the source code shared on GitHub to adapt it to the new version of PyTorch. The test took 4 h and 17′56″ to finish 200 training epochs, so the training efficiency was significantly improved compared with that of CycleGAN. Table 2 shows the statistical UIQM values for each number of EnlightenGAN training epochs. The table indicates that EnlightenGAN can achieve a good result after very little training. The statistical UIQM value reached its maximum at 120 epochs, and further training did not improve the enhancement effect.
Figure 9 shows the images corresponding to Figure 4 and Figure 6, enhanced with the EnlightenGAN parameters obtained after 120 training epochs. It can be seen from the figure that the underwater images were effectively enhanced. The experiment proves that this method is also applicable to EnlightenGAN.

4. Discussion

By adjusting the parameters of MSRCR, enhanced images with different levels of contrast and sharpness can be obtained. However, a more general scheme is often needed in practical work to reduce the time cost of on-site debugging. In this sense, the CycleGAN trained in this study has obvious advantages because the quality of the enhanced images was stable across a variety of scenes in the Haima cold seep area. We also tested the ability of CycleGAN to degrade images. The results reveal that the model reproduced the blue background of the sea well, and brighter images were obviously degraded. Generally, CycleGAN provides a stable degradation output, offering a dataset for training underwater image classification or target recognition models.
Although a robust CycleGAN model that can be applied to detect cold seep areas can be trained using the method in this study, the contrast of the images enhanced by the trained model still needs to be improved. In addition, a more efficient network implementation needs to be considered for high-definition, real-time video stream processing.
The parameters of MSRCR need to be adjusted by conducting experiments to obtain good enhancement effects. Thus, in this study, we creatively employed MSRCR to build a dataset to train CycleGAN. An image enhancement CycleGAN model suitable for a variety of different scenes in the cold seep area of Haima was then obtained. Using the relevant interfaces provided by the MindSpore architecture, the model can easily be deployed in an ROV or in a shipboard computer to improve underwater video quality.

5. Conclusions and Future Steps

In this article, we presented a new method for building a dataset with the help of MSRCR and UIQM. Using the dataset, CycleGAN was trained to enhance the images collected from the Haima cold seeps, with good results. Compared with the CycleGAN trained on the public dataset, the model that we trained can be widely used to enhance images of cold seeps and will be helpful for the detection of cold seeps in deep seas.
We implemented an easy-to-expand dataset-building method that provides effective solutions for underwater image enhancement in different environments.
There are still some shortcomings in this work. The training effect depends on the dataset itself, and CycleGAN could not be trained to reach a higher statistical UIQM value than that of the dataset we built. In addition, mainly the classical CycleGAN was used for the experiments; although this model has been widely used, its training efficiency and accuracy are relatively limited compared with newly proposed networks. Thus, the method still admits improvements that could be pursued in future studies, such as the following:
  • A conventional underwater image enhancement method better than MSRCR could be identified and used to build a new training dataset.
  • Improved datasets can be obtained by implementing better enhancement evaluation methods.
  • Our method is generic; in future work, we will apply newer models to explore the path of underwater image enhancement and investigate more dataset-building schemes.

Author Contributions

Methodology, numerical experiments, and writing—original draft, Y.L.; project proposal, planning, and design, S.Y.; collection of the images from the cold seeps, Y.G.; writing—review and editing, J.C. and J.Z.; organization of the images, G.H.; EUVP dataset download and checking, Y.D.; formal analysis, D.T. All authors have read and agreed to the published version of the manuscript.

Funding

This article was supported by the Natural Science Foundation of Guangdong (No. 2020B1111510001), the National Natural Science Foundation of China (No. 62071123), the PI Project of the Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou) (No. GML2020GD0802), the Basic Foundation of Guangzhou (No. 202201011192), the Natural Science Foundation of Fujian (No. 2020J05247), and the Key Research and Development Program of Guangzhou (No. 202206050002).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The MindSpore framework and basic code of CycleGAN can be downloaded from https://www.mindspore.cn/ (accessed on 4 January 2022) and https://gitee.com/mindspore/models (accessed on 4 January 2022), respectively. The basic code of EnlightenGAN can be downloaded from https://github.com/VITA-Group/EnlightenGAN (accessed on 25 January 2023). The generated dataset, trained model, and enhanced image in the current study are not publicly available because they are the property of the project, but they can be made available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to thank the Guangzhou Artificial Intelligence Public Computing Center for providing computing power and algorithm support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, L.; Liu, B.; Xu, M.J.; Liu, S.X.; Guan, Y.X.; Gu, Y. Characteristics of active cold seeps in Qiongdongnan Sea Area of the northern South China Sea. Chin. J. Geophys. 2018, 61, 2905–2914. (In Chinese)
  2. Zhao, J.; Liang, Q.Y.; Wei, J.G.; Tao, J.; Yang, S.X.; Liang, J.Q.; Lu, J.A.; Wang, J.J.; Fang, Y.X.; Gong, Y.H.; et al. Seafloor geology and geochemistry characteristic of methane seepage of the "Haima" cold seep, northwestern slope of the South China Sea. Geochimica 2020, 49, 108–118. (In Chinese)
  3. Feng, J.; Li, N.; Luo, M.; Liang, J.; Yang, S.; Wang, H.; Chen, D. A quantitative assessment of methane-derived carbon cycling at the cold seeps in the northwestern South China Sea. Minerals 2020, 10, 256.
  4. Di, P.; Feng, D.; Tao, J.; Chen, D. Using time-series videos to quantify methane bubbles flux from natural cold seeps in the South China Sea. Minerals 2020, 10, 216.
  5. Raveendran, S.; Patil, M.; Birajdar, G.K. Underwater image enhancement: A comprehensive review, recent trends, challenges and applications. Artif. Intell. Rev. 2021, 54, 5413–5467.
  6. Liu, R.; Fan, X.; Zhu, M.; Hou, M.; Lu, Z. Real-world underwater enhancement: Challenges, benchmarks, and solutions under natural light. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 4861–4875.
  7. Buchsbaum, G. A spatial processor model for object color perception. J. Frankl. Inst. 1980, 310, 337–350.
  8. Forsyth, D.A. A novel algorithm for color constancy. Int. J. Comput. Vis. 1990, 5, 5–35.
  9. Liu, Y.C.; Chan, W.H.; Chen, Y.Q. Automatic white balance for digital still camera. IEEE Trans. Consum. Electron. 1995, 41, 460–466.
  10. Bae, Y.; Jang, J.H.; Ra, J.B. Gamut-adaptive correction in color image processing. IEEE Int. Conf. Image Process. 2010, 3597–3660.
  11. Land, E.H.; McCann, J.J. Lightness and retinex theory. J. Opt. Soc. Am. 1971, 61, 1–11.
  12. Land, E.H. The retinex theory of color vision. Sci. Am. 1977, 237, 108–128.
  13. Land, E.H. An alternative technique for the computation of the designator in the retinex theory of color vision. Proc. Natl. Acad. Sci. USA 1986, 83, 3078–3080.
  14. Joshi, K.R.; Kamathe, R.S. Quantification of retinex in enhancement of weather degraded images. In Proceedings of the 2008 International Conference on Audio, Language and Image Processing, Shanghai, China, 7–9 July 2008; pp. 1229–1233.
  15. Rahman, Z.; Jobson, D.J.; Woodell, G.A. Multi-scale retinex for color image enhancement. In Proceedings of the 3rd IEEE International Conference on Image Processing, Lausanne, Switzerland, 19 September 1996; Volume 3, pp. 1003–1006.
  16. Jobson, D.J.; Rahman, Z.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
  17. Ke, L.; Xujian, L. De-hazing and enhancement methods for underwater and low-light images. Acta Optica Sin. 2020, 40, 73–85.
  18. Hu, K.; Zhang, Y.; Lu, F.; Deng, Z.; Liu, Y. An underwater image enhancement algorithm based on MSR parameter optimization. J. Mar. Sci. Eng. 2020, 8, 741.
  19. Anwar, S.; Li, C. Diving deeper into underwater image enhancement: A survey. Signal Process. Image Commun. 2020, 89, 115978.
  20. Perez, J.; Attanasio, A.C.; Nechyporenko, N.; Sanz, P.J. A deep learning approach for underwater image enhancement. In Biomedical Applications Based on Natural and Artificial Computing, Proceedings of the International Work-Conference on the Interplay Between Natural and Artificial Computation, IWINAC 2017, Corunna, Spain, 19–23 June 2017; Ferrández, V.J., Álvarez-Sánchez, J., Paz, L.F., Toledo, M.J., Adeli, H., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2017; Volume 10338, pp. 183–192.
  21. Anwar, S.; Li, C.; Porikli, F. Deep underwater image enhancement. arXiv 2018, arXiv:1807.03528.
  22. Li, J.; Skinner, K.A.; Eustice, R.M.; Johnson-Roberson, M. WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images. IEEE Robot. Autom. Lett. 2018, 3, 387–394.
  23. Han, Y.; Huang, L.; Hong, Z.; Cao, S.; Zhang, Y.; Wang, J. Deep supervised residual dense network for underwater image enhancement. Sensors 2021, 21, 3289.
  24. Hu, K.; Zhang, Y.; Weng, C.; Wang, P.; Deng, Z.; Liu, Y. An underwater image enhancement algorithm based on generative adversarial network and natural image quality evaluation index. J. Mar. Sci. Eng. 2021, 9, 691.
  25. Zhang, X.; Fang, X.; Pan, M.; Yuan, L.; Zhang, Y.; Yuan, M.; Lv, S.; Yu, H. A marine organism detection framework based on the joint optimization of image enhancement and object detection. Sensors 2021, 21, 7205.
  26. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. arXiv 2014, arXiv:1406.2661.
  27. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2242–2251.
  28. Fabbri, C.; Islam, M.J.; Sattar, J. Enhancing underwater imagery using generative adversarial networks. In Proceedings of the IEEE International Conference on Robotics and Automation, Brisbane, Australia, 22–25 May 2018; pp. 7159–7165.
  29. Ahn, J.; Yasukawa, S.; Sonoda, T.; Ura, T.; Ishii, K. Erratum to: Enhancement of deep-sea floor images obtained by an underwater vehicle and its evaluation by crab recognition. J. Mar. Sci. Technol. 2017, 22, 758–770.
  30. Lu, J.; Li, N.; Zhang, S.; Yu, Z.; Zheng, H.; Zheng, B. Multi-scale adversarial network for underwater image restoration. Opt. Laser Technol. 2019, 110, 105–113.
  31. Du, R.; Li, W.; Chen, S.; Li, C.; Zhang, Y. Unpaired underwater image enhancement based on CycleGAN. Information 2022, 13, 1.
  32. Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349.
  33. Ma, J.; Fan, X.; Ni, J.; Zhu, X.; Xiong, C. Multi-scale retinex with color restoration image enhancement based on Gaussian filtering and guided filtering. Int. J. Mod. Phys. B 2017, 31, 1744077.
  34. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. arXiv 2015, arXiv:1512.03385.
  35. Guo, Y.; Li, H.; Zhuang, P. Underwater image enhancement using a multiscale dense generative adversarial network. IEEE J. Ocean. Eng. 2019, 45, 862–870.
  36. Neumann, S. Drawing and layer modes. In GIMP Pocket Reference; Neumann, S., Ed.; O'Reilly & Associates: Sebastopol, CA, USA, 2008; pp. 93–95.
  37. Panetta, K.; Gao, C.; Agaian, S.S. Human-visual-system-inspired underwater image quality measures. IEEE J. Ocean. Eng. 2016, 41, 541–551.
  38. Chen, L. Unsupervised learning: Deep generation model. In Introduction to Deep Learning with MindSpore; TsingHua University Press: Beijing, China, 2020; pp. 158–185.
  39. Deimling, J.S.; Linke, P.; Schmidt, M.; Rehder, G. Ongoing methane discharge at well site 22/4b (North Sea) and discovery of a spiral vortex bubble plume motion. Mar. Pet. Geol. 2015, 68, 718–730.
Figure 1. CycleGAN enhancement of an underwater image with an inappropriate dataset.
Figure 2. Basic structure of CycleGAN.
Figure 3. (a) Forward and (b) backward consistency losses.
Figure 4. Enhancement of MSRCR under different parameters.
Figure 5. Training loss for CycleGAN.
Figure 6. Enhancement results for the trained CycleGAN.
Figure 7. Results for the trained CycleGAN's enhancement of images in the North Sea [39].
Figure 8. Degradation of enhancement results.
Figure 9. Enhanced images using EnlightenGAN.
Table 1. MSRCR and CycleGAN image enhancement statistics of UIQM values for the Qiongdongnan and North Sea cold seeps.

                            Qiongdongnan Cold Seeps    North Sea Cold Seeps
MSRCR Dyn value             Mean      Variance         Mean      Variance
Original images             2.180     0.204            2.454     0.382
Msrcr-0.4                   1.034     0.047            1.488     0.110
Msrcr-1.2                   2.054     0.163            2.143     0.233
Msrcr-2.0                   2.045     0.219            1.813     0.247
Msrcr-2.8                   1.923     0.234            1.567     0.244
Msrcr-3.6                   1.752     0.236            1.520     0.200
Msrcr-4.4                   1.586     0.228            1.337     0.164
Msrcr-5.2                   1.440     0.213            1.186     0.135
Msrcr-6.0                   1.318     0.197            1.063     0.110
Msrcr best value chosen     2.444     0.189            2.572     0.197

CycleGAN training epochs    Mean      Variance         Mean      Variance
20                          2.344     0.202            2.572     0.126
40                          2.398     0.181            2.606     0.140
60                          2.478     0.157            2.590     0.171
80                          2.421     0.158            2.572     0.208
100                         2.343     0.171            2.544     0.218
120                         2.489     0.174            2.637     0.215
140                         2.422     0.166            2.628     0.187
160                         2.499     0.154            2.839     0.112
180                         2.465     0.173            2.635     0.234
200                         2.426     0.177            2.647     0.260
Table 2. The statistical UIQM values for each number of EnlightenGAN training epochs.

EnlightenGAN training epochs    Mean     Variance
20                              2.355    0.190
40                              2.384    0.165
60                              2.352    0.174
80                              2.199    0.177
100                             2.231    0.178
120                             2.500    0.174
140                             2.213    0.187
160                             2.189    0.192
180                             2.186    0.193
200                             2.264    0.176