Black and Odorous Water Detection of Remote Sensing Images Based on Improved Deep Learning

Abstract Black and odorous water seriously affects the ecological balance of rivers and the health of people living nearby. Satellite remote sensing technology, with its advantages of wide coverage, long time series, low cost, and high efficiency, has provided a new approach to water quality detection. Much archived remote sensing satellite data can be further processed and used as a data source for black and odorous water detection. In this paper, Gaofen-2 remote sensing data with a spatial resolution of 1 m is leveraged as the data source. To enrich the data sources in the northern coastal zone of China, we have built a high-quality remote sensing dataset, called the remote sensing images for black and odorous water detection (RSBD) dataset, which is collected from the Gaofen-2 satellite in Yantai, China. In addition, we propose a network with an encoder-decoder discriminant structure for black and odorous water detection. In the network, an augmented attention module is designed to capture a more comprehensive semantic feature representation. Further, the median balancing loss function is adopted to address class-imbalance issues. Experimental results demonstrate that the network is superior to other state-of-the-art semantic segmentation methods on our dataset.


Introduction
Black and odorous water (BOW) has two main manifestations: the color is typically black, dark green, or brown, and the smell is usually described as a metallic, fishy, or sewage-like odor (Duan et al. 2014).
BOW can be extremely harmful to both the environment and human health. The high levels of organic materials can lead to a decrease in the dissolved oxygen levels in the water, which can lead to the death of aquatic organisms. Additionally, BOW can contain bacteria and other microorganisms that can cause water-borne illnesses if ingested by humans (Meng et al. 2020; Saha et al. 2017). Pollution sources in the external environment and the production of internal sediments together aggravate the adverse effects of BOW. The "Great Stink" on the London Thames in 1858 is a typical historical case: thousands of people died of cholera after drinking from the sewage-filled river (Luckin 2006). Since then, the prevention and detection of BOW have attracted the attention of many experts and scholars. Horbe and Santos (2009) analyzed the water quality components of the main blackwater branches in the western Amazon rivers of Brazil at low tide. Yu et al. (2009) studied the occurrence pattern of odor in numerous rivers in China. It is therefore of great necessity for economic and social development and ecological progress to effectively detect BOW.
In previous research, the identification of BOW was mainly based on site visits and field tests. Meanwhile, evaluation indices of BOW were constructed by combining organic pollutant indicators to identify the degree of black odor in water bodies (C. Liu et al. 2011; G.-H. Lu et al. 2011). However, when monitoring a large-scale water body, the traditional BOW detection method requires substantial human and financial resources and is prone to missed and erroneous detections. Nowadays, remote sensing photography technology has achieved milestone breakthroughs, which shed new light on BOW detection. It can observe water bodies over a wide area and continuously, and obtain information about them in an all-around way. This progress makes up for the difficulty of obtaining information about large-scale water bodies with conventional detection methods (Kutser et al. 2016; J. Zhao et al. 2013).
However, for centimeter-to-meter resolution water pollution detection, few image datasets are suitable for this task. The existing methods (Olmanson et al. 2011) based on low- and medium-resolution remote sensing data are not suitable for accurately detecting BOW in narrow rivers due to their limited resolution. With the continuous launch of high-resolution satellites, sub-meter and meter-level spatial resolution remote sensing has provided data resources for accurate BOW detection (Moortgat et al. 2022). On August 19, 2014, China launched the Gaofen-2 (GF-2) satellite, the first Chinese sub-meter level earth observation satellite (Tong et al. 2016). At present, several studies have used GF-2 satellite data to detect river water quality. Yao et al. (2019) designed a BOW index (BOI) combining red and green bands. This index was developed by leveraging the remote sensing reflectance characteristics of BOW in Shenyang, China. Shen et al. (2019) proposed a color purity index derived from remotely sensed reflectance and analyzed the BOW of Shenyang, China using GF-2 data. In this paper, GF-2 satellite images are used to detect BOW. The deep convolutional neural network (DCNN) has undergone rapid development (Z. Liu et al. 2023; Wambugu et al. 2021). It is widely applied to extract all kinds of information about ground objects in the remote sensing field. DCNN utilizes the spectral properties and texture information of remote sensing images to detect ground objects. Shao et al. (2022) adopted semantic segmentation models based on DCNN to detect BOW and incorporated attention blocks into the network. Many studies have shown that DCNN is an effective method for BOW detection (Pu et al. 2019; Wang et al. 2022). However, DCNN requires a substantial amount of accurately labeled data for training, which can be both difficult and time-consuming to obtain. There is still a scarcity of open research data resources available for BOW detection (Nambiar et al. 2022).
In recent years, Chinese northern coastal areas have experienced increasing water and energy shortages and offshore environmental pollution. This has limited the sustainable development of the regional economy and society. Yantai City, a major port city in the Bohai Sea region, is bordered by the Yellow Sea to the south and the Bohai Sea to the north. BOW in the urban area has become a major obstacle to the development of Yantai City. Unfortunately, there is a lack of research data resources for detecting BOW in Yantai City (R. Liu et al. 2021; Y. Lu et al. 2020; Xu et al. 2016). In this study, we explore BOW detection using a DCNN model and establish a new dataset based on GF-2 remote sensing data of Yantai City. This dataset serves as a benchmark resource for evaluating and improving water pollution detection. BOW is usually distributed in small rivers, which makes BOW detection more difficult. To increase the performance of BOW detection, we design an improved U-Net semantic segmentation network (Ronneberger et al. 2015), which adds an augmented attention module to capture global context information. Three main contributions are summarized as follows: 1. A GF-2 remote sensing image dataset, called the RSBD dataset, is built to detect BOW.

Study area
Yantai City, located in the northeast of the Shandong Peninsula, China, is bordered by the Yellow Sea to the south and the Bohai Sea to the north. It is one of the first 14 coastal cities in China and one of the top three core development cities in Shandong Province. Its gross domestic product ranked third in the province in 2022. The city has a well-developed river network with many small and medium-sized rivers. Industrialization and urbanization have led to serious pollution in many rivers, resulting in complex pollution sources and black odor phenomena. As a result, these rivers are suitable data sources for detecting BOW (Wang et al. 2013). The study area is represented in Figure 1 and Figure 2. Yantai City has taken a proactive approach to implementing BOW treatment and environmental protection measures. The city has also released a list of rivers with black odor characteristics in urban areas, in order to better protect the water resources within the city. According to the list of rivers with black odor characteristics in Yantai City published by the Yantai Urban Administration Bureau (http://cgj.yantai.gov.cn/art/2016/1/11/art_160_451908.html) in 2016, there were 22 rivers with black odor characteristics in Yantai City. We select Laishan District and Muping District as the study area, containing a total of 6 rivers with black odor characteristics, namely the Dongdu River, Dongfeng River, Yuniao River, Xiaozhang River, Sanba River, and Furong River. These rivers are polluted for various reasons, such as industrial water discharge, river obstruction, and garbage dumping. They vary in length, ranging from 0.75 km to over 6.36 km. The rivers with black odor characteristics in Laishan District and Muping District are representative, and hence we have chosen them as the research objects. Table 1 shows detailed information about the study area.
BOW has the following characteristics: abnormal water color, river blockage, and a harsh environment on the shore. Polluted rivers are typically gray-black or dark green in color, while normal water bodies are usually pure green or blue, with a clean and impurity-free surface. BOW often presents in small branches of rivers, which have narrow channels that can easily become blocked on both sides, leading to increased water pollution. Furthermore, BOW is often located near factories or garbage dumps (Zhang et al. 2022). These features provide the basis for the visual interpretation of the BOW annotation in this paper.

Dataset and analysis
Data acquisition and refinement

GF-2 was launched on August 19, 2014. It carries a high-resolution camera with a panchromatic (PAN) resolution of 1 m and a multispectral (MS) resolution of 4 m, achieving high spatial resolution. The sub-satellite points have a spatial resolution of 0.8 m, and the swath width is 45 km. Its sub-meter spatial resolution and more accurate positioning capability significantly improve the comprehensive observation effectiveness of the satellite. GF-2 provides data support for a variety of research fields, including mineral resources development and monitoring, air environment monitoring, and water environment monitoring (L. Chen et al. 2022; Sun et al. 2020; Tong et al. 2016; Wei et al. 2021). Table 2 shows the parameters of GF-2.
GF-2 captures PAN images with only one band, while the MS images have four bands: blue, green, red, and near-infrared, denoted by B1, B2, B3, and B4, respectively.
In this work, the GF-2 remote sensing images were collected from 2015 to 2016 in Yantai, China, with a total of 10 raw images. Cloud cover in these images is less than 30%, which meets the requirements of the application. The GF-2 data were downloaded through the China Center for Resources Satellite Data and Application. The information on the remote sensing data is listed in Table 3. Radiometric calibration is the process of converting the digital number of an image into a radiance brightness value. The conversion process can be expressed as Equation (1):

L = Gain × DN + Offset (1)

where L is the radiance brightness value after conversion, Gain is the calibration slope, DN is the digital number of the image element, and Offset is the absolute calibration coefficient offset. Atmospheric correction aims to eliminate the influence of factors such as atmosphere and light on the reflection of features.
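Applied per band, Equation (1) is a simple linear rescaling of the raw digital numbers. A minimal sketch follows; the Gain and Offset values below are hypothetical — the real coefficients are distributed with each GF-2 scene's metadata:

```python
import numpy as np

def radiometric_calibration(dn, gain, offset):
    """Convert raw digital numbers (DN) to radiance via Equation (1):
    L = Gain * DN + Offset."""
    return gain * np.asarray(dn, dtype=np.float64) + offset

# Hypothetical calibration coefficients for one band.
gain, offset = 0.17, 0.0
dn_band = np.array([[120, 340], [255, 0]], dtype=np.uint16)
radiance = radiometric_calibration(dn_band, gain, offset)
```

In practice the calibration is applied independently to each of the four MS bands and the PAN band, each with its own Gain and Offset pair.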
The QUick Atmospheric Correction (QUAC) tool in ENVI software is used to perform atmospheric correction of the high-resolution remote sensing images. The central wavelengths of the four bands of the input image are 0.514 μm, 0.546 μm, 0.656 μm, and 0.822 μm. To eliminate geometric distortion in the remote sensing images, the rational polynomial coefficient file that accompanies the PAN and MS images of GF-2 is used to perform orthorectification. When performing orthorectification, we resample the MS and PAN images to resolutions of 4 m and 1 m, respectively. Image fusion involves resampling the MS and PAN images to create a remote sensing image with multispectral features and high spatial resolution. The nearest neighbor diffusion pan sharpening tool is used to fuse the data and generate a 1 m spatial resolution fusion image. Figure 3 shows the preprocessing of the PAN and MS images.
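The fusion step above relies on ENVI's nearest neighbor diffusion tool. As an illustrative stand-in (not the same algorithm), the following Brovey-style sketch shows the basic idea of pan-sharpening: upsample each 4 m MS band to the 1 m PAN grid, then rescale it by the ratio of PAN to the mean-band intensity so that spatial detail comes from PAN while spectral ratios come from MS:

```python
import numpy as np

def brovey_pansharpen(ms, pan, scale=4):
    """Brovey-style fusion: upsample each MS band to the PAN grid
    (nearest neighbour via np.kron), then rescale by PAN / intensity.
    A simple stand-in for the NNDiffuse tool used in the paper."""
    up = np.stack([np.kron(b, np.ones((scale, scale))) for b in ms])  # (C, H, W)
    intensity = up.mean(axis=0) + 1e-12        # mean-band intensity, avoid /0
    return up * (pan / intensity)              # each band scaled by PAN ratio

rng = np.random.default_rng(0)
ms = rng.random((4, 2, 2)) + 0.1               # toy 4-band MS patch at 4 m
pan = rng.random((8, 8)) + 0.1                 # matching PAN patch at 1 m
fused = brovey_pansharpen(ms, pan)
```

By construction the per-pixel mean of the fused bands reproduces the PAN image, which is the defining property of this family of intensity-substitution methods.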

Dataset production
The size of each raw image is 29,200 × 27,620 pixels. However, due to hardware limitations, the existing server cannot train and predict on large-scale high-resolution remote sensing images directly. Therefore, it is necessary to cut the raw images into smaller patches. The raw images are cut into 256 × 256 pixel patches using the ROI tool in ENVI software. At the same time, to reduce the impact of positive and negative sample imbalance on the performance of the classifier, we remove the samples whose background pixels account for more than 90%. Finally, we select 329 original images to build the dataset. The location of each BOW label is obtained from the list of rivers with black odor characteristics in Yantai City published by the Yantai Urban Administration Bureau. At the same time, the visual interpretation signs of BOW and general water bodies are combined for labeling. Each pixel is labeled into two categories, BOW and others, using the Labelme tool (Torralba et al. 2010), with pixel-level marking in two different colors: white and black. The 329 original images are split into a training set and a test set at a ratio of 7:3. To compensate for the inadequacy of the dataset, we leverage data augmentation methods, including brightness adjustment, color adjustment, and flipping (Q. Chen et al. 2019; Singh et al. 2020; Xia et al. 2017). For flipping, both horizontal and vertical flipping are utilized. The training set consists of 1,155 images and the corresponding labels, and the test set consists of 490 images and their labels. Some typical images and their ground truth are displayed in Figure 4.
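The patch-filtering and flipping steps described above can be sketched as follows. The 90% background threshold follows the description; the label encoding (0 for others, 1 for BOW) is an assumption for illustration, and brightness/color adjustment are omitted:

```python
import numpy as np

def keep_patch(label_patch, max_background=0.90):
    """Keep a 256 x 256 patch only if background ('others', encoded as 0)
    covers at most 90% of its pixels, as in the RSBD balancing step."""
    return np.mean(label_patch == 0) <= max_background

def flip_augment(image):
    """Return the horizontally and vertically flipped copies used for
    augmentation."""
    return [image[:, ::-1].copy(), image[::-1, :].copy()]

label = np.zeros((256, 256), dtype=np.uint8)
label[:32, :] = 1                  # ~12.5% BOW pixels -> the patch is kept
kept = keep_patch(label)
```

A patch whose label is entirely background would be discarded, which is how the dataset keeps the BOW/others pixel ratio from degenerating.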
Given the high spatial resolution of GF-2, the geometry of the captured river scenes is much clearer and finer, posing additional challenges for image classification. Due to variations in terrain elevation, the BOW in different scenes may appear in different sizes and orientations. In addition, the flight altitude and shooting direction of the satellite and the sun elevation angle can greatly influence the appearance of BOW. The BOW in each sample image has a different shape, size, and proportion, and the samples are extracted at different times and seasons. These factors give the RSBD dataset higher intra-class variation, i.e., samples from the same class vary widely in attributes or characteristics. In addition, the RSBD dataset has lower inter-class dissimilarity, i.e., samples from different classes vary little in attributes or characteristics. BOW and general water bodies mostly show similar shapes and may belong to the same river, giving low dissimilarity. In conclusion, the RSBD dataset has higher intra-class variation and lower inter-class dissimilarity. These variabilities may be closely related to geographical factors, pollution sources, environmental factors, etc. The dataset thus helps in developing a BOW detection method with stronger generalization ability.

The proposed method
The main challenge in detecting BOW is that it is often located in hard-to-detect areas such as branches of rivers and small rivers. U-Net has been proven to be an effective architecture for semantic segmentation tasks. U-Net is a typical encoder-decoder structure. Its skip connections and symmetric encoder-decoder structure make it effective in capturing both local and global information, which is crucial for accurate segmentation. Additionally, U-Net can train effectively with limited data and patch-based augmentation techniques, making it suitable for different segmentation applications.

Network architecture
In this paper, we propose an encoder-decoder network based on U-Net, named BDNet, which is designed to improve the accuracy of BOW detection. BDNet is made up of two main components: the encoder and the decoder. In the encoder part, four augmented attention modules are introduced to focus on the spatial and channel features of the BOW, which emphasize the BOW semantic information and enrich the feature representation. The decoder part fuses the low-resolution semantic features and high-resolution semantic features. Additionally, the median balancing loss (MBL) function is adopted to address the imbalance between BOW features and other features. The MBL function is a variant of the cross-entropy function that sets a corresponding weight for each category, effectively solving the problem caused by the imbalance of object categories.
As illustrated in Figure 5, the encoder part consists of convolution blocks, max pooling layers, and the augmented attention blocks. Each convolution block comprises two convolutions with a 3 × 3 kernel and a rectified linear unit (ReLU) activation function. The input image is first fed into a convolution block to obtain the basic feature map A1. Then, by repeating the max pooling layer and the convolution block three times, the feature maps A2, A3, and A4 are obtained. The feature maps A1, A2, A3, and A4 are connected with the features that have passed through the convolution block as the input to the next layer of convolution blocks. The features acquired in the last layer are passed to the decoder. In the feature fusion stage, the features acquired in the previous layer are connected with the corresponding features in the encoder part, and then deconvolution and upsampling operations are performed to recover the size of the feature map. Finally, one convolution block with a 1 × 1 kernel is used to achieve dimensionality reduction.

Augmented attention module
To address the problem of missed detections caused by BOW of tiny shapes, the augmented attention module is introduced to optimize and improve the network. It contains a channel attention module (CAM) and a spatial attention module (SAM) (Woo et al. 2018). CAM obtains a stronger ability to extract BOW by integrating the channel information of the input image and generating a global association between channels. SAM makes the convolutional network pay more attention to which positions of the image play a crucial role in the final output of the network, i.e., which locations have the greatest impact on the final prediction. CAM and SAM are combined sequentially to form a complete attention module.
Figure 6 shows the structural details of CAM. Max pooling and average pooling are performed on the input features F, which have dimensions H × W × C, where H × W is the height and width of the feature map and C is the number of channels. The max-pooled and average-pooled features are fed into a shared network consisting of a multilayer perceptron (MLP) with a hidden layer. The two output features are then summed, and a sigmoid activation is applied to generate a channel attention matrix M_c of dimension 1 × 1 × C:

M_c = σ(MLP(AvgPool(F)) + MLP(MaxPool(F)))

Finally, M_c is multiplied with the input feature map to generate the output feature F' = M_c ⊗ F.

Figure 7 presents the structure of SAM. F' of dimension H × W × C is input into max pooling and average pooling along the channel axis, and the resulting maps are concatenated along the channel dimension to obtain an H × W × 2 feature map. After a 7 × 7 convolution operation, the dimensionality is reduced to an H × W × 1 feature map, and the sigmoid function is used to produce the spatial attention matrix M_s:

M_s = σ(f^{7×7}([AvgPool(F'); MaxPool(F')]))

Finally, the spatial attention map M_s and the input feature map F' are multiplied to obtain the final spatial feature map F'' = M_s ⊗ F'.
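A numpy sketch of the two modules, following the CBAM formulation of Woo et al. (2018) that the paper cites. The MLP weights are randomly initialized here purely for illustration, and the spatial convolution is simplified to a 1 × 1 mixing of the two pooled maps rather than the 7 × 7 convolution used in the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(F, W1, W2):
    """CAM: a shared MLP over the max- and average-pooled channel
    descriptors, summed and passed through a sigmoid; M_c broadcasts
    over the spatial dimensions."""
    avg = F.mean(axis=(1, 2))                          # (C,)
    mx = F.max(axis=(1, 2))                            # (C,)
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0.0)       # MLP, ReLU hidden layer
    M_c = sigmoid(mlp(avg) + mlp(mx)).reshape(-1, 1, 1)
    return M_c * F                                     # F' = M_c (x) F

def spatial_attention(F, conv_weight):
    """SAM: concatenate channel-wise mean/max maps, mix them with a
    (here 1x1, 7x7 in the paper) convolution, sigmoid, and reweight."""
    desc = np.stack([F.mean(axis=0), F.max(axis=0)])   # (2, H, W)
    M_s = sigmoid(np.tensordot(conv_weight, desc, axes=([0], [0])))  # (H, W)
    return M_s * F                                     # F'' = M_s (x) F'

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                                # r: channel reduction
F = rng.standard_normal((C, H, W))
W1, W2 = rng.standard_normal((C // r, C)), rng.standard_normal((C, C // r))
out = spatial_attention(channel_attention(F, W1, W2), rng.standard_normal(2))
```

Because both attention maps lie in (0, 1), each module can only attenuate feature responses, never amplify them, which is what makes the sequential CAM-then-SAM composition stable.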

Loss function
The MBL function replaces the cross-entropy loss function in the decoder part. Cross-entropy assigns the same weight to all categories. When categories are unbalanced, the training process is dominated by the classes with more pixels and struggles to learn the features of the minority objects, thus reducing the effectiveness of the network. The MBL function is a variant of the cross-entropy function that sets a corresponding weight for each category, effectively solving the problem caused by the imbalance of object categories (Kampffmeyer et al. 2016). Therefore, we adopt the MBL function to supervise the training process of BDNet.
The median frequency balancing method (Eigen and Fergus 2015) is applied to calculate the weight of each class. Let P = {1, 2, ..., C} represent the set of C classes. The weights are presented in Equation (2):

w_i = median({f_1, f_2, ..., f_C}) / f_i (2)

where w_i denotes the weight of the ith class and f_i is the frequency of the ith class, i.e., the proportion of the number of pixels of the ith class to the total number of pixels. The loss is presented in Equation (3):

L = − Σ_{i∈P} w_i x_i log(s(x_i)) (3)

where L is the MBL function, x_i denotes the target label of the ith class, and s(x_i) is the softmax output of the ith class.
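Equations (2) and (3) can be sketched in numpy as follows. For the binary BOW task (C = 2) the 'others' class dominates, so its weight falls below 1 while the BOW weight rises above 1:

```python
import numpy as np

def median_frequency_weights(labels, num_classes=2):
    """Equation (2): w_i = median(f_1, ..., f_C) / f_i, where f_i is the
    pixel frequency of class i over the training labels."""
    freqs = np.array([(labels == c).mean() for c in range(num_classes)])
    return np.median(freqs) / freqs

def median_balancing_loss(probs, target, weights):
    """Equation (3): class-weighted cross-entropy over per-pixel softmax
    outputs. probs: (C, N) probabilities; target: (N,) class indices."""
    picked = probs[target, np.arange(target.shape[0])]  # s(x_i) of true class
    return float(-(weights[target] * np.log(picked + 1e-12)).mean())

labels = np.array([0] * 90 + [1] * 10)   # 90% 'others', 10% BOW pixels
w = median_frequency_weights(labels)     # the BOW class gets a larger weight
loss = median_balancing_loss(np.array([[0.8, 0.3], [0.2, 0.7]]),
                             np.array([0, 1]), w)
```

In training, the weights are computed once over the training labels and kept fixed, so each gradient step up-weights mistakes on the rare BOW pixels.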

Experiment and evaluation
In this section, we introduced the details of the experiments and common evaluation metrics. Several popular deep learning models, including PSPNet (H. Zhao et al. 2017), U-Net, FCN8s (Long et al. 2015), Deeplabv3 (L.-C. Chen et al. 2018), and LinkNet (Chaurasia and Culurciello 2017), were compared with BDNet on the RSBD dataset. Then the ablation experiments were performed.

Implementation details
The experiments were implemented in PyTorch. We set the learning rate to 10^−4, and the optimizer was Adam. The weight decay, momentum, and power were set to 2 × 10^−5, 0.9, and 0.99, respectively.
After training for 1000 epochs, the loss value converged gradually, so the number of iterations was set to 1000. The BOW dataset contained 1,645 images, which were divided at a ratio of 7:3 into a training set and a test set. The training set contains 1,155 images with a size of 256 × 256 pixels, covering two classes: BOW and others.

Evaluation metrics
In this work, we considered the extraction of BOW as a binary classification problem and classified the prediction results into BOW and others. The classification results were evaluated using a pixel-based confusion matrix. The category of BOW was called positive and the category of others was called negative.
A correct prediction of the classifier was recorded as true, and an erroneous prediction was recorded as false. Combining these terms yields four outcomes: true positive (TP), false positive (FP), false negative (FN), and true negative (TN). Four evaluation indexes were applied to assess the accuracy of the test results: intersection over union (IoU), mean IoU (MIoU), Accuracy, and F1-score (Shrestha and Vanneschi 2018; Xue et al. 2021).
IoU is formed by dividing the intersection of the predicted and actual values by their union, as shown in Equation (4):

IoU = TP / (TP + FP + FN) (4)

MIoU is the average of the IoU values over all categories, as shown in Equation (5):

MIoU = (IoU_BOW + IoU_others) / 2 (5)

where IoU_BOW represents the IoU value of the category BOW and IoU_others represents the IoU value of the category others.
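The four metrics listed above can be computed directly from the pixel-level confusion-matrix counts; a minimal sketch for the binary BOW-vs-others case:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute IoU (per class), MIoU, Accuracy, and F1-score for the binary
    BOW-vs-others task from pixel-level confusion-matrix counts."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.sum(pred & gt)                              # BOW predicted as BOW
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    tn = np.sum(~pred & ~gt)
    iou_bow = tp / (tp + fp + fn)                       # Equation (4)
    iou_others = tn / (tn + fn + fp)                    # IoU of 'others'
    miou = (iou_bow + iou_others) / 2                   # Equation (5)
    accuracy = (tp + tn) / (tp + tn + fp + fn)          # Equation (6)
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # Equation (7)
    return {"IoU_BOW": iou_bow, "MIoU": miou,
            "Accuracy": accuracy, "F1": f1}

gt = np.array([[1, 1], [0, 0]])       # toy ground truth (1 = BOW)
pred = np.array([[1, 0], [0, 0]])     # one BOW pixel missed
m = segmentation_metrics(pred, gt)
```

Because BOW pixels are rare, IoU_BOW and F1 are far more sensitive to missed detections than overall Accuracy, which is why all four metrics are reported.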
Accuracy indicates the proportion of correctly predicted pixels to the sum of all pixels, as shown in Equation (6):

Accuracy = (TP + TN) / (TP + TN + FP + FN) (6)

Precision describes the percentage of the detected positive samples that are truly positive, Precision = TP / (TP + FP). Recall describes the percentage of the actually positive samples that are detected, Recall = TP / (TP + FN). With Precision and Recall, we can calculate the F1-score, which is the harmonic mean of these two values, as shown in Equation (7):

F1 = 2 × Precision × Recall / (Precision + Recall) (7)

Comparison with advanced methods

We adopted several state-of-the-art segmentation networks to comprehensively assess the effect of BDNet on the RSBD dataset, including PSPNet, U-Net, FCN8s, Deeplabv3, and LinkNet. As shown in Table 4, the IoU of the category BOW was lower than the IoU of the category others, which may be because the BOW pixels in the images were much fewer than the others pixels. This led to imbalanced learning in these networks.
As shown in Figure 8, some visual segmentation results are presented, which show more intuitively that BDNet can better extract the BOW. PSPNet had the worst performance in extracting BOW. FCN8s and LinkNet easily misclassified others as BOW in the fifth and seventh columns. U-Net and Deeplabv3 struggled to recognize BOW in narrow river channels, as seen in the fourth and sixth columns. In contrast, BDNet achieved segmentation results closest to the ground truth, demonstrating better detail segmentation and effectively solving the class imbalance problem. Overall, BDNet outperformed the other segmentation networks on the RSBD dataset.

Ablation experiments
To evaluate and analyze the effect of the augmented attention module and the MBL function, ablation experiments were performed. The baseline model adopted U-Net; CAM and SAM were simultaneously appended to U-Net, with the cross-entropy loss function applied. Furthermore, the MBL function was added to U-Net to assess its effect. The effectiveness of each module was verified through a series of experiments. Table 5 shows the quantitative results of the ablation experiments.
In Table 5, "Baseline" denotes U-Net, "CAM" and "SAM" denote the two attention modules simultaneously added to the encoder part, and "Loss" means that the MBL function is used in U-Net. Experimental results showed that the IoU of the category BOW, the MIoU, and the F1-score of BDNet improved by 3.43%, 1.65%, and 2.65%, respectively, over adding CAM and SAM to the baseline network. Meanwhile, the IoU of the category BOW, the MIoU, and the F1-score of BDNet improved by 1.94%, 1.06%, and 1.49%, respectively, over adding the MBL function to the baseline network. The qualitative results are shown in Figure 9, which intuitively indicates the effectiveness of adding the augmented attention module and the MBL function to the baseline network. As shown in Figure 9, the baseline U-Net performed poorly on the BOW features of narrow rivers.

Conclusion
In this work, we built a remote sensing dataset for BOW detection, named the RSBD dataset. It comprises GF-2 remote sensing data with a high spatial resolution of 1 m and covers representative polluted rivers in Yantai, China. The dataset is the first of its kind for detecting BOW in Yantai, China and contains 1,645 images, of which 1,155 are in the training set and 490 are in the test set. Then, a novel network based on U-Net, referred to as BDNet, was designed to identify BOW in GF-2 remote sensing images, incorporating an augmented attention module to emphasize BOW feature information. We selected several of the most common semantic segmentation methods to evaluate the effectiveness of BDNet on the RSBD dataset. The experimental results indicated that the segmentation accuracy of BDNet surpassed the other existing networks, with better performance in segmentation details.
In practice, the dataset is still insufficient to cover all practical situations. In the next step, we plan to add other signal sources, such as thermal infrared remote sensing images, to optimize and enrich our dataset. A better BOW detection method will then be proposed and its feasibility verified.
Figure 1 marks all the rivers and BOW in the Laishan District.Figure 2 marks all the rivers and BOW in the Muping District.

Figure 1 .
Figure 1. Geographic distribution of black and odorous water sampling points in Laishan District, Yantai City, China.

Figure 2 .
Figure 2. Geographic distribution of black and odorous water sampling points in Muping District, Yantai City, China.
Remote sensing satellites are affected by external and internal factors during the imaging process, resulting in a certain gap between the obtained remote sensing images and the real objects. External factors include blurred remote sensing images caused by atmospheric interference, and uneven images and artifacts caused by radiation scattering and illumination differences. Internal factors include image distortion and position deviation caused by satellite attitude, orbit deviation, or the sensor system. To reduce these gaps, it is necessary to preprocess the obtained remote sensing images. GF-2 remote sensing image preprocessing has four steps: radiometric calibration, atmospheric correction, orthorectification, and fusion of the PAN and MS images.

Figure 3 .
Figure 3. Preprocessing operations are performed on the MS and PAN images produced by the Gaofen-2 satellite, respectively. The information in the example image is shown in the lower left corner.

Figure 4 .
Figure 4. Some typical black and odorous water images and their corresponding annotation images. The first and third rows show the Gaofen-2 remote sensing images, and the second and fourth rows show the corresponding label images.

Figure 5 .
Figure 5. The architecture of BDNet. It mainly contains two branches: the encoder part in the red box and the decoder part in the blue box. The 'Augmented Attention Module' contains the channel attention module and the spatial attention module; the operation is repeated four times to obtain the feature maps A1-A4. The median balancing loss function is introduced in the decoder part.

Figure 6 .
Figure 6. Diagram of the channel attention module.

Figure 7 .
Figure 7. Diagram of the spatial attention module.
Introducing the augmented attention module resulted in more accurate detection of BOW. The baseline U-Net with the MBL function still led to misclassification problems, while our BDNet overcame this challenge. The augmented attention module improved the model's ability to focus on important BOW features. Furthermore, the integration of the MBL function further contributed to the improved accuracy of BOW detection. Overall, the experimental results validated the effectiveness of the augmented attention module and the MBL function in improving the accuracy of BOW detection.

Figure 9 .
Figure 9. The visualization results of the ablation experiment. (a) The original Gaofen-2 images. (b) The corresponding ground truth. (c) The results of the baseline model. (d) The results of adding both the channel attention module and the spatial attention module to the baseline model. (e) The results of using the median balancing loss function in the baseline model. (f) The results of BDNet.

Table 1 .
Detailed information on the distribution of rivers.

Table 3 .
Image information of Gaofen-2 satellite used in this research.

Table 4
BDNet performed the best among the five models. BDNet integrated the spatial and channel features of the images and addressed the imbalance between BOW features and other features. BDNet could distinguish BOW from the other category more accurately and had the smallest misclassification rate.

Table 4 .
Quantitative results on the RSBD dataset, comparing our BDNet with common deep learning networks.

Table 5 .
Ablation experiments on the channel attention module, the spatial attention module, and the median balancing loss function. Bold formatting represents the best results of the network on the assessment metrics.