Deep learning-based super-resolution for harmful algal bloom monitoring of inland water

ABSTRACT Harmful algal blooms (HABs) frequently occur in inland waters, making it challenging to understand the spatiotemporal features of algal dynamics. Recently, remote sensing has been applied to effectively detect algal spatiotemporal behavior in expansive water bodies. However, the resolution limitations of imaging sensors can hinder the understanding of spatiotemporal features in relatively small water bodies. In addition, few studies have improved the resolution of remote sensing images to investigate inland water quality. Therefore, this study applied deep learning-based super-resolution to transform 20 m satellite imagery into 5 m airborne-scale imagery. After performing atmospheric correction on the acquired images, we adopted super-resolution (SR) methodologies using a super-resolution convolutional neural network (SRCNN) and a super-resolution generative adversarial network (SRGAN) to estimate the chlorophyll-a (Chl-a) concentration in the Geum River, South Korea. Both methods generated SR images of water reflectance at 665, 705, and 740 nm. A two-band ratio algorithm at the 665 and 705 nm wavelengths was then applied to the reflectance images to estimate Chl-a concentration maps. The SRCNN model outperformed SRGAN and bicubic interpolation, with a peak signal-to-noise ratio (PSNR), mean square error (MSE), and structural similarity index measure (SSIM) for the validation dataset of 24.47 dB, 0.0074, and 0.74, respectively. SR maps from the SRCNN provided more detailed spatial information on Chl-a in the Geum River than the original satellite images. These findings demonstrate the potential of deep learning-based SR algorithms to provide further information on algal dynamics for inland water management with remote sensing images.


Introduction
Harmful algal bloom (HAB) phenomena degrade inland water quality and aquatic ecosystems by releasing toxic and odorous compounds, such as microcystin and 2-methylisoborneol (Baek et al. 2022; Gerber 1977). Recently, the size and duration of algal blooms have increased owing to rapid urbanization, global warming, and climate change, in connection with increasing nutrient loading and warmer water (O'Neil et al. 2012). Previous studies have indicated that water monitoring campaigns should quantitatively and qualitatively characterize algal blooms to mitigate the degradation of inland water quality (Jang et al. 2022). However, traditional in situ monitoring makes it difficult to understand the spatiotemporal distribution of algal dynamics because it is conducted at specific times and locations in rivers and reservoirs (Park, Tae Kim, and Hyoung Lee 2020). Therefore, advanced monitoring of spatiotemporal variations is vital for preventing HABs in water quality management.
Remote sensing data from airborne and satellite monitoring have been introduced to acquire the spatiotemporal features of algal dynamics in expansive water bodies (Pyo et al. 2022). Such multidimensional data characterize HAB phenomena using water spectral reflectance, from which the chlorophyll-a (Chl-a) concentration can be estimated (Hong et al. 2021). Lin et al. (2018) used satellite imagery to identify cyanobacterial blooms in a eutrophic lake. He et al. (2020) estimated Chl-a concentration using Chl-a retrieval algorithms with satellite-derived reflectance. However, remote sensing images face challenges stemming from imaging sensor limitations in spatial, spectral, and temporal resolution (Yang et al. 2015). Satellite remote sensing is used for a wide range of environmental monitoring but generally provides low-resolution spectral and spatial information, making it difficult to detect features in relatively small regions (Tao et al. 2019).
Super-resolution (SR) technology enhances image quality by reconstructing images from low-resolution imagery, providing complementary spatial information (Yang et al. 2019). Recently, SR algorithms have progressed with deep learning based on convolutional neural network (CNN) and generative adversarial network (GAN) models (Dong et al. 2015; Ledig et al. 2017). These deep learning algorithms provide high-resolution imagery that addresses the challenges associated with the low spatial resolution of satellite imagery (Yang et al. 2015). SR algorithms have also been applied to obtain preliminary optical properties associated with water quality. Zhang and Huang (2011) used a machine learning method to improve the spatial resolution of satellite visible bands. Su et al. (2021) used a CNN-based model to super-resolve subsurface temperature imagery on a global scale from satellite remote sensing data. Although remote sensing with SR techniques can be useful for environmental monitoring, relatively few studies have applied SR algorithms to HAB monitoring.
Here, we propose deep learning algorithms for the SR of satellite imagery of the Geum River, South Korea. Our study adopted three SR approaches: bicubic interpolation, a super-resolution convolutional neural network (SRCNN), and a super-resolution generative adversarial network (SRGAN). The three approaches generated SR imagery from which the Chl-a concentration could be estimated using inland water reflectance. The main objectives of our study were to (1) conduct remote sensing via airborne and satellite imagery to acquire the spatiotemporal features of algal dynamics in an expansive water body, (2) perform single-image super-resolution to generate water reflectance and compare the performance of the SR methods, and (3) acquire a fine-resolution map of the Chl-a distribution using a bio-optical algorithm and SR imagery.

Study area
The Geum River is a major river and the third largest in the mid-western province of the Republic of Korea. Figure 1 shows the Geum River basin, which reaches the neighboring sea around the Korean Peninsula (N 36.35°-36.52°, E 127.48°-127.60°). It supplies water to surrounding cities, such as those in Chungcheong province, for municipal, domestic, agricultural, and industrial use. The basin area and length of the Geum River are 9,912.15 km² and 360.70 km, respectively (Lee et al. 2018). There are nine intake stations and several industrial complexes along the mainstream of the Geum River. Moreover, this region is dominated by a monsoon climate associated with intense rainfall (Kim et al. 2022). Over the last three decades, the annual mean temperature has been 10.9°C, and precipitation from June to August has averaged 1,295 mm (Choi et al. 2021). For this reason, the Geum River experiences annual HABs due to the inflow of non-point and point sources from intensive runoff and industrial complexes (Lee et al. 2016). In this study, we chose three representative regions along the Geum River basin, as shown in Figure 1.

Research overview
For HAB monitoring, we used remote sensing imagery with enhanced resolution and water reflectance to provide further spatiotemporal information on algal dynamics. We applied deep learning-based SR to estimate the Chl-a concentration in three steps (Figure 2): (1) input data preparation, (2) application of deep learning-based SR, and (3) performance evaluation and generation of Chl-a distribution maps. Two monitoring campaigns were conducted using an airborne approach to measure hyperspectral high-resolution (HR) images. Additionally, we collected Sentinel-2 satellite multispectral low-resolution (LR) images. Atmospheric effects on the hyperspectral and multispectral reflectance signals, related to the adjacency effect, heterogeneous land surfaces, water vapor, and aerosols, were reduced using dedicated atmospheric correction software. Subsequently, water surface reflectance bands, including B04 (665 nm), B05 (705 nm), and B06 (740 nm), were prepared as multispectral input data for the deep learning models. For efficient SR training, the input data were normalized. SRCNN and SRGAN were then applied to obtain single-image SR from LR to HR images. The Chl-a concentration was estimated from the generated SR reflectance imagery by applying a bio-optical algorithm using spectral information related to Chl-a biomass. Finally, SR Chl-a maps were generated and compared to assess the feasibility of deep learning-based super-resolution for water monitoring.

Data acquisition
In this study, we conducted two monitoring campaigns (airborne and satellite sensing) and collected hyperspectral and multispectral images of the Geum River on 30 September 2019 and 24 October 2020. Airborne hyperspectral imagery was acquired using an AISA Eagle sensor (SPECIM Inc., Finland) installed perpendicularly on a Cessna 208 multipurpose aircraft (Fig. S1). The airborne campaigns were performed under specific conditions, including a flying altitude of 3 km, a monitoring window of 3 h starting at 8:30 AM, and fair weather with low wind speed. The spectral range of the hyperspectral imagery was 400-970 nm, with spectral and spatial resolutions of 4 nm and 2 m, respectively (Table S1). This hyperspectral dataset comprised a total of 47 sections across the Geum River monitoring campaigns. The multispectral images were Sentinel-2 Level-1C products downloaded from the Sentinels Scientific Data Hub (ESA, https://scihub.copernicus.eu/). The Sentinel-2 satellites orbit at a mean altitude of 786 km and provide continuous remote sensing imagery with a five-day revisit frequency (Lanorte et al. 2019). The multispectral instrument measures 13 optical bands with spatial resolutions of 10 m, 20 m, and 60 m over the range 443-2,290 nm (Tables S1 and S2). We collected two multispectral images at 20 m resolution covering the Geum River, with cloud cover percentages of 0.93% and 0.00%, respectively.

Airborne and satellite image preprocessing
The image preprocessing applied geometric and atmospheric corrections to the airborne images. Geometric correction was applied to the hyperspectral imagery to reduce the geometric distortion of remote sensing images (Luan et al. 2014). Atmospheric correction was then applied to eliminate atmospheric and illumination effects on the hyperspectral imagery using the Atmospheric and Topographic Correction 4 (ATCOR 4) software (Tuominen and Lipping 2011). ATCOR 4 calculates the radiative transfer function by adopting the Moderate Resolution Atmospheric Transmission version 6 (MODTRAN6) model, computing optical parameters according to weather and observation conditions (Richter and Schläpfer 2002). These corrected images were treated as HR imagery and resized to a spatial resolution of 5 m using a weighted average of pixels.
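The resizing step above can be illustrated as block-average downsampling. This is a minimal NumPy sketch assuming an integer downsampling factor (the actual 2 m to 5 m resampling would require fractional-factor area weighting); the function name is illustrative, not the authors' implementation.

```python
import numpy as np

def block_average(img, factor):
    """Downsample a 2-D image by averaging non-overlapping factor x factor
    pixel blocks (a simple weighted average with equal weights)."""
    # Crop so the image dimensions are exact multiples of the factor.
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    img = img[:h, :w]
    # Reshape into blocks and average within each block.
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```

For example, averaging a 4 x 4 reflectance tile with `factor=2` yields a 2 x 2 tile whose pixels are the means of each 2 x 2 block.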
The Sentinel-2 reflectance data were distributed as Level-1C products containing Top-of-Atmosphere (TOA) reflectance, which is influenced by atmospheric effects including aerosol particles, water vapor, ozone, and clouds (Nazeer et al. 2021). The TOA reflectance was converted to Bottom-of-Atmosphere (BOA) reflectance using the Sen2Cor processor within the Sentinel Application Platform (SNAP) software to reduce atmospheric effects (Mueller-Wilm, Devignot, and Pessiot 2019). The Sen2Cor processor supports terrain, cirrus, and atmospheric correction as well as scene classification for Sentinel-2 Level-1C products (Main-Knorn et al. 2017). Furthermore, water indices were used to classify water area reflectance, separating water and non-water pixels in the imagery (Mondejar and Tongco 2019).

Super-resolution of satellite imagery using deep-learning models
Our study applied deep learning models to perform super-resolution on satellite imagery. The super-resolution algorithms adopted CNN- and GAN-based models. CNN models are widely used for multidimensional imagery, extracting meaningful image features through forward and backward propagation during model training (Naranjo-Torres et al. 2020). In a CNN, convolutional layers with kernels are moved along the input data to learn image features by calculating weights and biases. The GAN was designed to generate new data using two neural networks competing with each other (Goodfellow et al. 2014). These networks, comprising a generator and a discriminator, can produce images and distinguish between real and fake images. In this study, we implemented the CNN-based SRCNN model and the GAN-based SRGAN model to enhance the resolution of multidimensional imagery containing water reflectance bands, leading to the calculation of Chl-a concentration maps. Prior to simulation, we applied max normalization as data preprocessing to rescale the dataset to a range of zero to one. The input data were then divided into training and validation sets of approximately 60% and 40% across the monitoring campaigns. The remote sensing data were then fed into the deep learning models to increase the spatial resolution of the LR imagery. This study used the Python 3.6 programming language and TensorFlow API version 2.5 for the deep learning simulations. Our models were run on an Intel® Core i9-11900K 3.50 GHz processor, an NVIDIA GeForce RTX 3090 graphics card, and 128 GB of DDR4 random-access memory.
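The preprocessing described above (max normalization to [0, 1] followed by a roughly 60/40 train/validation split) can be sketched as follows. This is a minimal NumPy illustration with hypothetical function names, assuming the patch dataset fits in memory; it is not the authors' pipeline.

```python
import numpy as np

def max_normalize(patches):
    """Max normalization: rescale reflectance patches to [0, 1] by the global maximum."""
    m = patches.max()
    return patches / m if m > 0 else patches

def split_train_val(patches, train_frac=0.6, seed=0):
    """Shuffle patch indices and split approximately 60/40 into training and validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(patches))
    n_train = int(round(train_frac * len(patches)))
    return patches[idx[:n_train]], patches[idx[n_train:]]
```

Normalizing before the split keeps both subsets on the same [0, 1] scale, which the paper notes is important for efficient SR training.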

Super-resolution CNN (SRCNN)
Our study implemented super-resolution using SRCNN as a suitable algorithm to enhance image resolution (Ahn, Kang, and Sohn 2018; Kim, Kwon Lee, and Mu Lee 2016). The SRCNN was proposed as a CNN for single-image super-resolution. It directly learns an end-to-end mapping, represented by the CNN, between LR images as input data and enhanced-resolution images as output data (Dong et al. 2015). We designed a stack of satellite imagery as input data so that the SRCNN model could learn the features of water reflectance (Figure 3(a)). Feature vectors were extracted from the multidimensional LR imagery, with a kernel size of 5 × 5 and 32 kernels. To extract the water reflectance features, a ResNet structure containing convolutional layers, a batch normalization layer, and a PReLU activation function was used in the SRCNN model (Kaiming et al. 2016). The SRCNN model increased the image quality of the LR imagery using convolutional layers and an upscaling factor (4×) in the up-sampling layer. Finally, our model reconstructed high-quality images with water reflectance bands to provide further information for remote sensing. To minimize the loss between the SR and HR images, the SRCNN model applied a mean square error (MSE) loss function during model training.
The following equation indicates the MSE:

MSE = (1/n) Σᵢ₌₁ⁿ ‖Fᵢ − Xᵢ‖²

where n is the number of training samples, F is the HR image, and X is the output image generated as SR from the LR satellite images (Dong et al. 2015). The loss was minimized using stochastic gradient descent with standard backpropagation (Leibe et al. 2016). Therefore, we reduced the SRCNN model error between the SR and HR images using the Adam optimizer to update the weights.
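The pixel-wise training objective above can be written directly in code. This is a minimal NumPy sketch of the batch MSE loss using the paper's symbols (F for HR targets, X for SR outputs); the function name is illustrative.

```python
import numpy as np

def srcnn_mse_loss(sr_outputs, hr_targets):
    """Pixel-wise SRCNN loss: (1/n) * sum_i ||F_i - X_i||^2 over n samples."""
    n = sr_outputs.shape[0]
    diff = hr_targets.astype(float) - sr_outputs.astype(float)
    return float(np.sum(diff ** 2) / n)
```

In practice a framework optimizer (e.g. Adam, as in the paper) would minimize this quantity with respect to the network weights via backpropagation.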

Super-Resolution Generative Adversarial Network (SRGAN)
The SRGAN model is a type of GAN that enhances the resolution of LR imagery by incorporating two neural networks, a generator and a discriminator, which generate SR images and distinguish between SR and HR images, respectively (Ledig et al. 2017).
The SRGAN model is shown in Figure 3(b,c). The generator network, consisting of ResNet blocks, produces an SR image from the satellite LR images. In contrast, the discriminator network distinguishes between SR and HR images. Realistic SR images are thereby produced to deceive the discriminator (Goodfellow et al. 2014). The SRGAN model is represented by the value function V, which is computed over SR images by the generator and discriminator.
The following equation represents the total network:

min_G max_D V(D, G) = E_x[log D(x)] + E_z[log(1 − D(G(z)))]

where G is the generator; the discriminator D is trained to maximize the probability log(D(x)) for the HR images x; G(z) represents the generator output when the LR images z are given, trained to minimize the pixel-wise error measured by D; E_x indicates the expected value over all HR images; and E_z is the expected value over all random inputs to G.
In our study, we defined the loss function l^SR as the sum of a content loss and an adversarial loss (Equation 4). The content loss l^SR_MSE is based on the MSE, the most widely applied optimization target for SR (Equation 5). The adversarial loss l^SR_Gen was used to generate realistic SR images that deceive the discriminator (Equation 6). The loss functions of SRGAN are:

l^SR = l^SR_MSE + 10⁻³ · l^SR_Gen    (4)
l^SR_MSE = (1 / (r²WH)) Σₓ₌₁^(rW) Σ_y₌₁^(rH) (I^HR_(x,y) − G_θ(I^LR)_(x,y))²    (5)
l^SR_Gen = Σₙ₌₁^N −log D(G_θ(I^LR))    (6)

where I^HR_(x,y) and I^LR represent the reflectance values of the HR and LR image data at point (x, y), W and H are the width and height of the LR image in pixels, r is the upscaling factor (4×), θ denotes the network weights and biases, and N is the number of image data.
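The three loss terms above can be sketched numerically. This is a minimal NumPy illustration under the assumption of single-band images and the 10⁻³ adversarial weight of the standard SRGAN formulation; function names are illustrative, and `d_sr` stands for discriminator probabilities on generated images.

```python
import numpy as np

def content_loss(sr, hr, r=4):
    """l_MSE^SR = (1/(r^2 W H)) * sum over HR pixels of (I_HR - I_SR)^2,
    where W, H are the LR grid dimensions (HR is rW x rH)."""
    H, W = hr.shape[0] // r, hr.shape[1] // r
    return float(np.sum((hr - sr) ** 2) / (r * r * W * H))

def adversarial_loss(d_sr):
    """l_Gen^SR = sum of -log D(G(I_LR)) over discriminator scores on SR images."""
    return float(np.sum(-np.log(np.clip(d_sr, 1e-12, 1.0))))

def srgan_loss(sr, hr, d_sr, weight=1e-3):
    """l^SR = l_MSE^SR + 10^-3 * l_Gen^SR (content plus weighted adversarial loss)."""
    return content_loss(sr, hr) + weight * adversarial_loss(d_sr)
```

When the discriminator assigns probability 1.0 to a generated image, the adversarial term vanishes and only the content loss remains.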

Generation of super-resolution map of Chlorophyll-a concentration
This study generated a spatial distribution map from the SR imagery using a bio-optical algorithm derived from the optical properties of Chl-a to detect and estimate algal blooms in the surface water system (Pyo et al. 2016). The SR algorithms generated high-quality imagery with enhanced resolution, to which the bio-optical algorithm was applied to determine pigment concentration from the apparent optical properties of inland water reflectance (Mishra, Schaeffer, and Keith 2014). We applied a two-band ratio algorithm, a typical semi-empirical algorithm for Chl-a estimation that uses the 665 nm and 705 nm spectral bands related to Chl-a concentration (Gitelson et al. 2009; Moses et al. 2009). The Chl-a concentration can be estimated from the band ratio as

Chl-a ∝ R(λ₂)/R(λ₁)

where λ₁ and λ₂ denote the B04 (665 nm) and B05 (705 nm) remote sensing reflectance [sr⁻¹], respectively. This study produced Chl-a concentration ratio maps using SR imagery containing spectral information associated with Chl-a.
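The two-band ratio map above can be computed per pixel. This is a minimal NumPy sketch, assuming B04 and B05 reflectance arrays on the same grid; pixels with non-positive B04 (e.g. masked non-water pixels) are set to zero, a choice made here for illustration.

```python
import numpy as np

def chla_band_ratio(b04, b05):
    """Two-band ratio proxy for Chl-a: Rrs(705)/Rrs(665), i.e. B05/B04.
    Non-positive B04 pixels (masked/non-water) map to 0 to avoid division by zero."""
    return np.divide(b05, b04, out=np.zeros_like(b05, dtype=float), where=b04 > 0)
```

The resulting ratio map is proportional to Chl-a concentration and can be calibrated against in situ measurements, as the paper does via the relationship reported in Hong et al. (2022).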

Performance evaluation
We applied evaluation metrics, the MSE, peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM), to assess image quality. The image quality of the HR and SR images was compared using the MSE and PSNR, which are widely applied indices (Sara, Akter, and Shorif Uddin 2019). The SSIM compares the similarity of two images (Dou et al. 2020). The metrics were obtained using the following equations:

MSE = (1/(M·N·O)) Σ (I^SR − I^HR)²
PSNR = 10 · log₁₀(MAX_f² / MSE)
SSIM = ((2·μ_I^SR·μ_I^HR + c)(2·σ_I^SR,I^HR + c)) / ((μ_I^SR² + μ_I^HR² + c)(σ_I^SR² + σ_I^HR² + c))
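The three metrics can be sketched as follows. This is a minimal NumPy illustration assuming reflectance scaled to [0, 1]; the SSIM here is computed globally over the whole image rather than with the sliding window used by standard implementations, so values will differ slightly from library results.

```python
import numpy as np

def mse(sr, hr):
    """Mean square error over all pixels and channels."""
    return float(np.mean((sr - hr) ** 2))

def psnr(sr, hr, max_val=1.0):
    """Peak signal-to-noise ratio in dB; infinite when images are identical."""
    e = mse(sr, hr)
    return float('inf') if e == 0 else float(10 * np.log10(max_val ** 2 / e))

def ssim_global(sr, hr, max_val=1.0):
    """Global (single-window) SSIM with the usual stabilizing constants."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_x, mu_y = sr.mean(), hr.mean()
    var_x, var_y = sr.var(), hr.var()
    cov = ((sr - mu_x) * (hr - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```

Identical SR and HR images give MSE = 0, infinite PSNR, and SSIM = 1, matching the limiting behavior described in the text.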

Spatial variability of water reflectance in the processed satellite and airborne data
We obtained the BOA reflectance using the Sen2Cor library, which applied atmospheric correction to the visible (VIS), near-infrared (NIR), and shortwave infrared (SWIR) bands in order to reduce the atmospheric effect in the TOA reflectance (Figures 4-5) (Louis et al. 2016). The TOA reflectance contains the spectral information recorded during the monitoring campaigns, comprising the water reflectance as well as aerosol and gas molecule contributions, whereas the BOA reflectance is the corrected product in which atmospheric effects from water vapor and aerosol optical thickness have been removed (Main-Knorn et al. 2017). The average reflectance decreased from TOA to BOA after atmospheric correction. In particular, the average reflectance in B01 (443 nm) and B02 (490 nm) was reduced by approximately 0.09 and 0.06, respectively (Figures 4-5). These bands are strongly affected by aerosols and gaseous molecules (Kokhanovsky 2008). However, the averaged B09 (945 nm) reflectance increased slightly from TOA to BOA because the 935-955 nm wavelength region is influenced concurrently by aerosol and water vapor effects.

Training and validation of SRCNN
When comparing SRCNN with the bicubic method, the interpolation method produced results of relatively poorer quality than the deep learning-based SR techniques. It performs poorly for small and narrow inland waters because the bicubic method is calculated from the nearest pixel values, which may be missing owing to non-water pixels in the imagery. The interpolation method showed relatively lower PSNR values than the SRCNN. The bicubic interpolation method yielded PSNR values ranging from 11.78 to 16.57 dB for the training data and from 8.09 to 25.02 dB for validation. Moreover, the SR images produced by the interpolation method were blurry and lacked detail compared with those from the SRCNN model. The bicubic method is directly affected by the low resolution of the satellite imagery because the interpolation technique produces HR images by convolution using a weighted average of pixels in the nearest 4 × 4 neighborhood (Viaña-Borja and Ortega-Sánchez 2019). Keys (1981) introduced the cubic convolution interpolation method to estimate missing pixels using a weighted average of nearby pixels with known values; however, this interpolation-based approach renders the generated image overly blurry, and fine details vanish.
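The cubic convolution idea from Keys (1981) can be illustrated in one dimension. This is a minimal sketch using the common kernel parameter a = -0.5 and edge clamping; function names are illustrative, and a full bicubic resampler would apply this kernel separably along both image axes.

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    """Keys (1981) cubic convolution kernel, nonzero on |x| < 2."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def bicubic_1d(samples, t):
    """Interpolate a 1-D signal at fractional position t from its 4 nearest samples."""
    i = int(np.floor(t))
    acc = 0.0
    for k in range(-1, 3):
        j = min(max(i + k, 0), len(samples) - 1)  # clamp indices at the borders
        acc += samples[j] * keys_kernel(t - (i + k))
    return acc
```

Away from the borders the kernel reproduces linear signals exactly, which is why bicubic output looks smooth but cannot recover the fine texture that the learned SR models reconstruct.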

Training and validation of SRGAN
The overall visual comparison of the SR images for the SRGAN is presented in Figures 6-9, together with the model performance values. For the training of B04 (665 nm), Figure 6 shows PSNR values ranging from 14.74 to 19.95 dB. In the validation, the PSNR values for B04 ranged from 14.70 to 25.35 dB (Figure 8). Furthermore, the SR images from the SRGAN model for the B05 (705 nm) and B06 (740 nm) bands were evaluated using the PSNR. For B05 (705 nm), the SRGAN model yielded PSNR values ranging from 18.84 to 25.83 dB for the training data and from 17.19 to 23.71 dB for the validation data (Figures 7 and 9). Salgueiro Romero, Marcello, and Vilaplana (2020) applied a deep learning algorithm for single-image super-resolution, increasing a 10-m spatial resolution to 2 m using a GAN-based SR model.
Figures 6-9 and S2-S3 present a comparison of the spatial distributions from the SRGAN and the bicubic interpolation method for both training and validation. Over all reflectance bands, the training dataset showed averaged PSNRs of 17.8, 21.2, and 22.2 dB for the SRGAN, while the bicubic method had averaged PSNRs of 14.7, 13.5, and 14.1 dB for B04 (665 nm), B05 (705 nm), and B06 (740 nm), respectively. The SR algorithms super-resolved the 20-m imagery to 5-m SR images. The deep learning-based SR images captured the absorption features related to Chl-a concentration better than the interpolation method, providing preliminary information for monitoring HAB phenomena. This implies that the deep learning-based SR models generate finer spatial-resolution images than bicubic interpolation for monitoring narrow and small rivers and reservoirs. Galar et al. (2020) applied a GAN model to the RGB and NIR bands of Sentinel-2 imagery to enhance the resolution from 10 m to 5 m or 2.5 m. However, the SRGAN results exhibit checkerboard artifacts, which are inevitably produced from the noise in real-world images (Figures 6-9) (Wang, Chen, and Hoi 2020). Such artifacts can compromise the stability of enhanced monitoring using real-world imagery with deep learning-based SR algorithms. Kim et al. (2020) showed that a GAN-based model avoided the checkerboard effect by combining interpolation and convolutional modules, resulting in stable and enhanced image quality. To produce stable SR images from real-world imagery, the SRGAN model could be improved by combining an interpolation module and additional input images.

Performance comparison of SRCNN with SRGAN
This study compared the SR techniques against the training and validation performance presented in Table 1. The SR images generated using the deep learning models were similar to the airborne reflectance across the entire site. The SRCNN model achieved the best performance, with the highest PSNR and SSIM values, averaging 25.21 dB and 0.79, respectively (Table 1). This implies that the SRCNN model is suitable for generating SR images from remote sensing data (Chang and Luo 2019). A previous study showed that the proposed model maintained the spectral radiometry of the LR imagery in the SR imagery after super-resolution. Additionally, the SRCNN model performs an end-to-end mapping process to reconstruct super-resolved imagery from image features extracted between the LR and HR input datasets (Dong et al. 2015). For the SRGAN model, the PSNR, MSE, and SSIM averaged 21.08 dB, 0.0111, and 0.61, respectively. The deep learning-based SR results for the Geum River were better than those obtained using the interpolation method. This implies that SR algorithms with deep learning can be used to characterize water bodies in remote sensing (Wang, Bayram, and 2022). SR algorithms using deep learning techniques employ various loss functions and architectures in the data-driven learning process between LR and HR images (Ledig et al. 2017). However, real-world spectral imagery is generally non-uniform, and its characteristics vary across water bodies (Honggang et al. 2022). Thus, the SRGAN model generated relatively low-quality imagery compared with the end-to-end mapping process of the SRCNN model. Moreover, the SRGAN visual results often exhibited checkerboard patterns caused by the deconvolution in the generator (Zhao et al. 2019). This means that the model architecture should be chosen appropriately for the input data, which may demand substantial computing power to achieve a suitable simulation. Xia et al. (2021) also reported that model complexity can negatively affect simulation through excessive computational demands depending on the input data, requiring an appropriate model structure to achieve suitable performance.

Fine resolution map of Chlorophyll a distribution from SRCNN and SRGAN
In this study, we estimated the Chl-a concentration ratio using the bio-optical algorithm with SR images, as shown in Figures 10-11. The bio-optical algorithm was applied to the reflectance at B04 (665 nm) and B05 (705 nm), which are associated with Chl-a, producing spatial distribution maps of the concentration ratio using the band ratio (B05/B04) of the HR and SR images. Previous studies reported a positive relationship between Chl-a concentration and this bio-optical algorithm, with coefficient of determination and root MSE values of 0.75 and 24.64, as described in detail by Hong et al. (2022). Comparing the spatial distribution maps, the SRCNN imagery provided finer spatial detail of the Chl-a distribution (Figures 10-11). Moreover, the SRGAN results showed checkerboard patterns caused by the transposed convolutional layer (Lei, Shi, and Zou 2019). The deeper the SRGAN network, the more difficult it is to train and to restore fine texture details for super-resolution (Yang et al. 2019). Cai, Meng, and Ho (2020) stated that it is difficult for deeper networks to achieve SR reconstruction of high-resolution imagery in the real world, often resulting in incorrect SR simulation imagery.

Super-resolution using deep learning for water remote sensing
Previous studies considered that real-world images contain a high signal-to-noise ratio. However, further studies must be performed to establish additional data acquisition and monitoring areas.
The fine-resolution distribution of Chl-a was estimated as a proxy indicator for HAB-related phenomena. However, owing to non-linear relationships influenced by habitat-specific factors and interactions, relying solely on the Chl-a distribution may be insufficient for accurate assessment of HAB phenomena. Wang et al. (2023) proposed a data-based inferential model to characterize the variability of Chl-a and its relationship with the occurrence of algal blooms. They also emphasized the influence of repeated bloom phenomena on other biogeochemical factors, including salinity, in triggering Chl-a. As a result, they suggested that the analysis of bloom indicators should consider the uncertainties and spatial distribution of blooms to account for multiple triggering factors. Additionally, multiple trigger factors could be investigated to address the uncertainty and sensitivity associated with Chl-a attributed to variation in algal bloom phenomena. Pianosi et al. (2016) showed that sensitivity analyses can be used in environmental modeling to investigate dominant parameters and uncertainty assessments. Therefore, algal bloom-specific indicators based on remote sensing information may require the dominant bands of satellite imagery to account for sensitivity factors attributed to variations in algal blooms. Together, these results provide crucial insights into the application of deep learning-based super-resolution algorithms and remote sensing for overcoming the spatial resolution challenges arising from equipment limitations and for providing further information for water management.

Conclusion
This study determined whether remote sensing with deep learning models can provide preliminary information for monitoring water quality with respect to HAB phenomena. To achieve remote sensing super-resolution in the Geum River, we applied SR algorithms based on CNN and GAN architectures to increase the quality of LR imagery toward HR imagery. LR satellite and HR airborne spectral images were employed for model training. We performed airborne remote sensing of HABs to monitor eutrophic phenomena and to train and validate the deep learning models during the 2019 and 2020 monitoring campaigns. The deep learning models were developed to super-resolve LR imagery into HR images using the retrieved reflectance information. Furthermore, we estimated the Chl-a concentration map using a two-band ratio algorithm with the SR imagery. The major findings of our study are as follows: (1) Remote sensing can provide the spatiotemporal distribution of water quality for water resource management. The remote sensing results showed that atmospheric effects such as aerosols and water vapor influence the measurement of water reflectance for water quality monitoring.
(2) The SRCNN model was the best-performing of the deep learning-based SR algorithms, with the highest PSNR and SSIM and the lowest MSE values across the evaluation metrics.
(3) The generated SR images provided preliminary information by estimating the Chl-a concentration using a bio-optical algorithm, which could be applied to monitor HAB phenomena for water quality management in narrow and small water bodies.
In dealing with water quality issues related to eutrophication, this study shows that remote sensing with deep learning-based SR has significant potential to provide further information associated with algal dynamics. Moreover, our study contributes to overcoming the limitations of remote sensing of inland water for water quality monitoring.

Figure 1 .
Figure 1. Location of the Geum River: (a) downstream, (b) midstream, and (c) upstream of the Geum River in the Republic of Korea.

Figure 2 .
Figure 2. Research flowchart for achieving SR from LR imagery using the deep learning-based SR algorithms to acquire the fine-resolution map of the Chl-a distribution: (a) image processing to prepare LR and HR input data; (b) application of deep learning-based SR, including the SRCNN and SRGAN models; (c) performance evaluation of the SRCNN and SRGAN SR image generation and generation of the Chl-a distribution maps.

Figure 3. Description of the SRCNN and SRGAN models: (a) the SRCNN model; (b) and (c) the generator and discriminator of the SRGAN model, respectively.
The quality of the super-resolved images was evaluated using the MSE, PSNR, and SSIM:

MSE = (1 / (M N O)) * Σ_{i=1}^{M} Σ_{j=1}^{N} Σ_{k=1}^{O} [I^HR(i, j, k) − I^SR(i, j, k)]²

PSNR = 10 log₁₀(MAX_f² / MSE)

SSIM = [(2 μ_{I^SR} μ_{I^HR} + c)(2 σ_{I^SR;I^HR} + c)] / [(μ_{I^SR}² + μ_{I^HR}² + c)(σ_{I^SR}² + σ_{I^HR}² + c)]

where M and N indicate the super-resolved image row and column numbers, O is the number of image channels, MAX_f indicates the peak signal level in the image data, μ_{I^SR} and μ_{I^HR} represent the mean values of the images I^SR and I^HR, σ_{I^SR;I^HR} is the covariance of the images I^SR and I^HR, and c indicates a constant value to avoid a divide-by-zero error. The absence of noise between the SR and HR imagery means that the MSE is zero and the PSNR is infinite.
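The three evaluation metrics can be computed directly from their definitions. The sketch below assumes reflectance arrays scaled so that MAX_f = 1, and uses a single global SSIM (one mean, variance, and covariance per image) rather than the windowed variant; the constant values c1 and c2 are illustrative assumptions.

```python
import numpy as np

def mse(hr, sr):
    """Mean square error over all M x N x O pixels."""
    return float(np.mean((np.asarray(hr, float) - np.asarray(sr, float)) ** 2))

def psnr(hr, sr, max_f=1.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(hr, sr)
    return float("inf") if m == 0 else 10.0 * np.log10(max_f**2 / m)

def ssim_global(hr, sr, c1=1e-4, c2=9e-4):
    """Global (unwindowed) SSIM; c1 and c2 avoid division by zero."""
    hr, sr = np.asarray(hr, float), np.asarray(sr, float)
    mu_h, mu_s = hr.mean(), sr.mean()
    cov = ((hr - mu_h) * (sr - mu_s)).mean()
    return ((2 * mu_h * mu_s + c1) * (2 * cov + c2)) / (
        (mu_h**2 + mu_s**2 + c1) * (hr.var() + sr.var() + c2)
    )
```

For identical SR and HR inputs these return an MSE of 0, an infinite PSNR, and an SSIM of 1, matching the limiting behavior described above.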
This study developed an SRCNN model to enhance image resolution using LR reflectance spectra from satellite imagery and HR airborne imagery. The SRCNN increased the spatial resolution of the satellite imagery from 20 m to the 5 m resolution of the airborne imagery. The SRCNN model performed single-image super-resolution by minimizing the pixel-wise error between the SR and HR imagery. For the B04 (665 nm), B05 (705 nm), and B06 (740 nm) bands of the Sentinel-2 satellite, the qualitative and quantitative results for the representative areas, including the PSNR, are shown in the generated SR images in Figures 6-9 and S2-S3. Figures 6-9 show the generated reflectance maps of these bands in the representative areas of the Geum River. The SR imagery generated by the SRCNN has reflectance values relatively similar to, and spatial patterns that follow, those of the HR imagery. Moreover, the SRCNN model showed PSNR values ranging from 23.28 to 29.49 dB for the spatial distribution images of B04 (665 nm). In addition, the model performance for the validation dataset was evaluated as 22.86 dB, 25.48 dB, and 27.89 dB, respectively (Figures 6 and 8). For B05, the PSNR values ranged from 22.10 to 30.43 dB for the training dataset and from 17.81 to 30.13 dB for the validation dataset (Figures 7 and 9). The SRCNN model produces a single super-resolved image, which can provide preliminary information on remote sensing reflectance. The SRCNN model directly produces SR images with high learning ability by adopting a pixel loss for network optimization (Dong et al. 2015). Galar et al. (2019) presented an SRCNN-based model to generate high-spatial-resolution imagery from Sentinel-2. Huang et al. (2017) demonstrated super-resolution reconstruction of real-world remote sensing imagery using the SRCNN model, which increased the resolution of Sentinel-2 remote sensing images. Müller et al. (2020) applied a deep convolutional neural network model to multispectral satellite imagery to increase image resolution.
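The SRCNN of Dong et al. (2015) cited above is a three-layer network (patch extraction, non-linear mapping, reconstruction) applied to a bicubic-upscaled input. The forward pass below is a NumPy sketch using the standard 9-1-5 filter sizes with 64 and 32 filters from that paper; the weights are random placeholders, so it illustrates only the architecture, not the trained model used in this study.

```python
import numpy as np

def conv2d(x, w, b):
    """'Same' convolution: x is (H, W, Cin), w is (k, k, Cin, Cout)."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    h, wd = x.shape[:2]
    out = np.zeros((h, wd, w.shape[3]))
    for i in range(k):              # accumulate one filter tap at a time
        for j in range(k):
            out += np.tensordot(xp[i:i + h, j:j + wd], w[i, j], axes=([2], [0]))
    return out + b

def srcnn_forward(x, seed=0):
    """SRCNN: 9x9 conv (64 filters) -> ReLU -> 1x1 (32) -> ReLU -> 5x5 (1)."""
    rng = np.random.default_rng(seed)
    c = x.shape[2]
    w1, b1 = rng.normal(0, 0.01, (9, 9, c, 64)), np.zeros(64)
    w2, b2 = rng.normal(0, 0.01, (1, 1, 64, 32)), np.zeros(32)
    w3, b3 = rng.normal(0, 0.01, (5, 5, 32, 1)), np.zeros(1)
    h1 = np.maximum(conv2d(x, w1, b1), 0)   # patch extraction + ReLU
    h2 = np.maximum(conv2d(h1, w2, b2), 0)  # non-linear mapping + ReLU
    return conv2d(h2, w3, b3)               # reconstruction layer
```

Training minimizes the pixel-wise MSE between the network output for the bicubic-upscaled LR input and the HR target, which is the pixel loss referred to above.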

Figure 4. Reflectance spectra from 465 to 955 nm in the Geum River. The dash-dot lines represent the highest and lowest reflectance values for September 30, 2019. The solid line with blue markers indicates the mean reflectance values for the satellite imagery bands (B01-B09) in the downstream (a, d, and g), midstream (b, e, and h), and upstream (c, f, and i) regions of the Geum River.

Figure 5. Reflectance spectra from 465 to 955 nm in the Geum River. The dash-dot lines represent the highest and lowest reflectance values for October 24, 2020. The black line with blue markers indicates the mean reflectance values for the satellite imagery bands (B01-B09) in the downstream (a, d, and g), midstream (b, e, and h), and upstream (c, f, and i) regions of the Geum River.

Figure 6. Qualitative training results of the super-resolved imagery for representative areas: (a) downstream, (b) midstream, and (c) upstream of the Geum River basin for B04 (665 nm) on September 30, 2019. The red squares indicate zoomed-in views of the representative images produced by the SR methods. The red circles mark checkerboard artifacts in the visual results.

Figure 7. Qualitative training results of the super-resolved imagery for representative areas: (a) downstream, (b) midstream, and (c) upstream of the river basin for B05 (705 nm) on September 30, 2019. The red squares indicate zoomed-in views of the representative images produced by the SR methods. The red circles mark checkerboard artifacts in the visual results.

Figure 8. Qualitative validation results of the super-resolved imagery for representative areas: (a) downstream, (b) midstream, and (c) upstream of the Geum River basin for B04 (665 nm) on October 24, 2020. The red squares indicate zoomed-in views of the representative images produced by the SR methods. The red circles mark checkerboard artifacts in the visual results.

Figure 9. Qualitative validation results of the super-resolved imagery for representative areas: (a) downstream, (b) midstream, and (c) upstream of the river basin for B05 (705 nm) on October 24, 2020. The red squares indicate zoomed-in views of the representative images produced by the SR methods. The red circles mark checkerboard artifacts in the visual results.
The SR maps produced by the SRCNN (Figures 10-11 (d), (i), and (n)) for the training and validation datasets showed spatial distribution patterns of the Chl-a ratio similar to those of the HR imagery. These results imply that remote sensing SR imagery can provide preliminary information for monitoring the Chl-a concentration with respect to water quality. Su et al. (2021) suggested a CNN-based SR model using remote sensing imagery that provides higher-resolution spatial data to observe mesoscale phenomena in the subsurface temperature field. As shown in Figures 10-11 (e), (j), and (o), the SRGAN model underestimated the Chl-a concentration.
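The band-ratio maps in Figures 10-11 are simple per-pixel ratios of the super-resolved reflectance bands. The sketch below assumes reflectance arrays for B05 (705 nm) and B04 (665 nm) and masks pixels where the denominator is too small; the threshold value is an illustrative assumption rather than the one used in this study.

```python
import numpy as np

def chla_band_ratio(b05, b04, eps=1e-6):
    """Per-pixel B05/B04 ratio used as a proxy for Chl-a; invalid pixels -> NaN."""
    b05, b04 = np.asarray(b05, float), np.asarray(b04, float)
    ratio = np.full(b04.shape, np.nan)
    valid = b04 > eps                       # mask near-zero red reflectance
    ratio[valid] = b05[valid] / b04[valid]
    return ratio
```

Applying this function to the SR reflectance bands produced by each method (bicubic, SRCNN, SRGAN) yields the comparative Chl-a ratio maps shown in the figures.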
simultaneously. Satellite and airborne monitoring methods entail time lags dictated by their respective monitoring schedules. Satellites traverse specific orbits at defined intervals and capture images depicting the reflectance of inland water surfaces. Airborne monitoring campaigns are executed under predetermined conditions, including parameters such as flying altitude, monitoring time, and weather conditions. This temporal disparity between satellite and airborne monitoring could complicate validation of the practical application of deep learning-based SR research for real-world monitoring images and SR imagery. Wang et al. (2021) performed super-resolution analysis using 40 aerial images, because

Figure 10. Spatial distribution maps from the different SR algorithms applying the band ratio (B05/B04) to estimate the Chl-a concentration ratio. The maps are for September 30, 2019, for the downstream (a-e), midstream (f-j), and upstream (k-o) regions of the Geum River.

Figure 11. Spatial distribution maps from the different SR algorithms applying the band ratio (B05/B04) to estimate the Chl-a concentration ratio. The maps are for October 24, 2020, for the downstream (a-e), midstream (f-j), and upstream (k-o) regions of the Geum River.