Improving the quality of training samples

Selecting good training samples for machine learning classification of satellite images is critical to achieving accurate results. Experience with machine learning methods has shown that the number and quality of training samples are crucial factors in obtaining accurate results [29]. Large and accurate datasets are preferable, regardless of the algorithm used, while noisy training samples can negatively impact classification performance [30]. It is therefore beneficial to apply pre-processing methods that improve the quality of samples and eliminate those that may have been incorrectly labeled or possess low discriminatory power.

It is necessary to distinguish between wrongly labeled samples and differences resulting from the natural variability of class signatures. When training data covers a large geographic region, the variability of vegetation phenology leads to different patterns being assigned to the same label. A related issue is the limitation of crisp boundaries for describing the natural world. Class definitions use idealized descriptions (e.g., “a savanna woodland has tree cover of 50% to 90% ranging from 8 to 15 m in height”). In practice, the boundaries between classes are fuzzy and sometimes overlap, making it hard to distinguish between them. To improve sample quality, sits provides methods for evaluating the training data.

Given a set of training samples, experts should first perform cross-validation of the training set to assess its inherent prediction error. The results indicate whether the data is internally consistent. Since cross-validation is not a predictor of actual model performance, this chapter provides additional tools for improving the quality of training sets. More detailed information is available in the chapter “Validation and Accuracy Assessment”.

Datasets used in this chapter

The examples of this chapter use two datasets:

  1. cerrado_2classes: a set of time series for the Cerrado region of Brazil, the second largest biome in South America, with an area of more than 2 million km^2. The data contains 746 samples divided into 2 classes (Cerrado and Pasture). Each time series covers 12 months (23 data points) from the MOD13Q1 product and has 2 bands (EVI and NDVI).

  2. samples_cerrado_mod13q1: a set of time series from the Cerrado region of Brazil. The data ranges from 2000 to 2017 and includes 50,160 samples divided into 12 classes (Dense_Woodland, Dunes, Fallow_Cotton, Millet_Cotton, Pasture, Rocky_Savanna, Savanna, Savanna_Parkland, Silviculture, Soy_Corn, Soy_Cotton, and Soy_Fallow). Each time series covers 12 months (23 data points) from the MOD13Q1 product and has 4 bands (EVI, NDVI, MIR, and NIR). We use only the NDVI and EVI bands for faster processing.

library(sits)
library(sitsdata)
# Take only the NDVI and EVI bands
samples_cerrado_mod13q1_2bands <- sits_select(
  data = samples_cerrado_mod13q1,
  bands = c("NDVI", "EVI")
)

# Show the summary of the samples
summary(samples_cerrado_mod13q1_2bands)
#> # A tibble: 12 × 3
#>    label            count    prop
#>    <chr>            <int>   <dbl>
#>  1 Dense_Woodland    9966 0.199  
#>  2 Dunes              550 0.0110 
#>  3 Fallow_Cotton      630 0.0126 
#>  4 Millet_Cotton      316 0.00630
#>  5 Pasture           7206 0.144  
#>  6 Rocky_Savanna     8005 0.160  
#>  7 Savanna           9172 0.183  
#>  8 Savanna_Parkland  2699 0.0538 
#>  9 Silviculture       423 0.00843
#> 10 Soy_Corn          4971 0.0991 
#> 11 Soy_Cotton        4124 0.0822 
#> 12 Soy_Fallow        2098 0.0418

Cross-validation of training sets

Cross-validation is a technique to estimate the inherent prediction error of a model [31]. Since cross-validation uses only the training samples, its results are not accuracy measures unless the samples have been carefully collected to represent the diversity of possible occurrences of classes in the study area [32]. In practice, when working in large areas, it is hard to obtain random stratified samples which cover the different variations in land classes associated with the ecosystems of the study area. Thus, cross-validation should be taken as a measure of model performance on the training data and not an estimate of overall map accuracy.

Cross-validation uses part of the available samples to fit the classification model and a different part to test it. The k-fold validation method splits the data into \(k\) partitions of approximately equal size and proceeds by fitting and testing the model \(k\) times. At each step, we take one distinct partition for testing and use the remaining \(k-1\) partitions to train the model; we then calculate the model's prediction error when classifying the test partition. A simple average over the \(k\) steps gives an estimate of the expected prediction error. The recommended choices of \(k\) are \(5\) or \(10\) [31].
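The procedure can be sketched in a few lines of base R. The fit_fun and predict_fun arguments below are hypothetical placeholders for a model constructor and its prediction function; this is an illustration of the k-fold logic, not the sits implementation, which is provided by sits_kfold_validate() as described next.

# Sketch of k-fold cross-validation, assuming a data frame with a
# `label` column and hypothetical fit_fun/predict_fun helpers
kfold_error <- function(data, k = 5, fit_fun, predict_fun) {
  # Randomly assign each row to one of k folds
  folds <- sample(rep(seq_len(k), length.out = nrow(data)))
  errors <- vapply(seq_len(k), function(i) {
    train <- data[folds != i, ]
    test <- data[folds == i, ]
    model <- fit_fun(train)
    pred <- predict_fun(model, test)
    # Prediction error on the held-out partition
    mean(pred != test$label)
  }, numeric(1))
  # Average over the k folds to estimate the expected prediction error
  mean(errors)
}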

sits_kfold_validate() supports k-fold validation in sits. The result is the confusion matrix and the accuracy statistics (overall and by class). In the examples below, we use parallel processing to speed up the computation. The parameters of sits_kfold_validate() are:

  1. samples: training samples organized as a time series tibble;
  2. folds: number of folds, or how many times to split the data (default = 5);
  3. ml_method: ML/DL method to be used for the validation (default = random forest);
  4. multicores: number of cores to be used for parallel processing (default = 2).

Below, we show an example of cross-validation on the two-band subset of the samples_cerrado_mod13q1 dataset.

rfor_validate <- sits_kfold_validate(
  samples = samples_cerrado_mod13q1_2bands,
  folds = 5,
  ml_method = sits_rfor(),
  multicores = 5
)
rfor_validate
#> Confusion Matrix and Statistics
#> 
#>                   Reference
#> Prediction         Pasture Dense_Woodland Savanna_Parkland Rocky_Savanna Savanna Dunes Soy_Corn
#>   Pasture             6612             27                4            10     117     0       36
#>   Dense_Woodland       496           9674                0           615     147     0        2
#>   Savanna_Parkland       2              0             2645            45      12     0        1
#>   Rocky_Savanna          5             67               25          7305       7     0        0
#>   Savanna               62            192               25            30    8889     0        9
#>   Dunes                  0              0                0             0       0   550        0
#>   Soy_Corn               7              0                0             0       0     0     4846
#>   Soy_Cotton             3              0                0             0       0     0       46
#>   Soy_Fallow            13              0                0             0       0     0       29
#>   Fallow_Cotton          6              0                0             0       0     0        2
#>   Silviculture           0              6                0             0       0     0        0
#>   Millet_Cotton          0              0                0             0       0     0        0
#>                   Reference
#> Prediction         Soy_Cotton Soy_Fallow Fallow_Cotton Silviculture Millet_Cotton
#>   Pasture                  10         30            37            2             1
#>   Dense_Woodland            1          1             0           98             0
#>   Savanna_Parkland          0          1             0            0             0
#>   Rocky_Savanna             0          0             0            0             0
#>   Savanna                   0          1             0           11             0
#>   Dunes                     0          0             0            0             0
#>   Soy_Corn                 63        344             6            0             2
#>   Soy_Cotton             4042          0            24            0            17
#>   Soy_Fallow                0       1713             1            0             0
#>   Fallow_Cotton             1          8           557            0            23
#>   Silviculture              0          0             0          312             0
#>   Millet_Cotton             7          0             5            0           273
#> 
#> Overall Statistics
#>                           
#>  Accuracy : 0.945         
#>    95% CI : (0.943, 0.947)
#>                           
#>     Kappa : 0.936         
#> 
#> Statistics by Class:
#> 
#>                           Class: Pasture Class: Dense_Woodland Class: Savanna_Parkland
#> Prod Acc (Sensitivity)             0.918                 0.971                   0.980
#> Specificity                        0.994                 0.966                   0.999
#> User Acc (Pos Pred Value)          0.960                 0.877                   0.977
#> Neg Pred Value                     0.986                 0.993                   0.999
#> F1 score                           0.938                 0.921                   0.979
#>                           Class: Rocky_Savanna Class: Savanna Class: Dunes Class: Soy_Corn
#> Prod Acc (Sensitivity)                   0.913          0.969            1           0.975
#> Specificity                              0.998          0.992            1           0.991
#> User Acc (Pos Pred Value)                0.986          0.964            1           0.920
#> Neg Pred Value                           0.984          0.993            1           0.997
#> F1 score                                 0.948          0.967            1           0.947
#>                           Class: Soy_Cotton Class: Soy_Fallow Class: Fallow_Cotton Class: Silviculture
#> Prod Acc (Sensitivity)                0.980             0.816                0.884               0.738
#> Specificity                           0.998             0.999                0.999               1.000
#> User Acc (Pos Pred Value)             0.978             0.976                0.933               0.981
#> Neg Pred Value                        0.998             0.992                0.999               0.998
#> F1 score                              0.979             0.889                0.908               0.842
#>                           Class: Millet_Cotton
#> Prod Acc (Sensitivity)                   0.864
#> Specificity                              1.000
#> User Acc (Pos Pred Value)                0.958
#> Neg Pred Value                           0.999
#> F1 score                                 0.908

The results show a good validation, reaching an overall accuracy of 94.5%. However, this accuracy does not guarantee a good classification result. It only shows whether the training data is internally consistent. In what follows, we present additional methods for improving sample quality.

Hierarchical clustering for sample quality control

The package provides two clustering methods to assess sample quality: Agglomerative Hierarchical Clustering (AHC) and Self-organizing Maps (SOM). These methods have different computational complexities. AHC has a computational complexity of \(\mathcal{O}(n^2)\), given the number of time series \(n\), whereas SOM complexity is linear. For large data, AHC requires substantial memory and running time; in these cases, SOM is recommended. This section describes how to run AHC in sits. The SOM-based technique is presented in the next section.

AHC computes the dissimilarity between any two elements of a dataset. Depending on the distance function and linkage criterion, the algorithm decides which two clusters are merged at each iteration. This approach is helpful for exploring samples due to its visualization power and ease of use [33]. In sits, AHC is implemented using sits_cluster_dendro().

# Take a set of patterns for 2 classes
# Create a dendrogram, plot, and get the optimal cluster based on ARI index
clusters <- sits_cluster_dendro(
  samples = cerrado_2classes,
  bands = c("NDVI", "EVI"),
  dist_method = "dtw_basic",
  linkage = "ward.D2"
)

Figure 38: Example of hierarchical clustering for a two class set of time series (Source: Authors).

The sits_cluster_dendro() function has one mandatory parameter (samples), the samples to be evaluated. Optional parameters include bands, dist_method, and linkage. The dist_method parameter specifies how to calculate the distance between two time series. We recommend a metric that uses dynamic time warping (DTW) [34], as DTW is a reliable method for measuring differences between satellite image time series [35]. The options available in sits are based on those provided by the dtwclust package, which include dtw_basic, dtw_lb, and dtw2. Please check ?dtwclust::tsclust for more information on DTW distances.
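Before clustering, it can be instructive to compute the DTW distance between two individual samples. The sketch below assumes the dtwclust package is installed; it uses sits_time_series() to extract the series of a single sample.

# Compare two sample time series with the DTW distance
library(dtwclust)
ts1 <- sits_time_series(cerrado_2classes[1, ])$NDVI
ts2 <- sits_time_series(cerrado_2classes[2, ])$NDVI
# DTW distance between the two series (smaller means more similar)
dtw_basic(ts1, ts2)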

The linkage parameter defines the distance metric between clusters. The recommended linkage criteria are complete or ward.D2. Complete linkage prioritizes within-cluster dissimilarities, producing compact clusters, but its results are sensitive to outliers. As an alternative, Ward proposed using the sum-of-squares error to minimize data variance [36]; his method is available as the ward.D2 option of the linkage parameter. To cut the dendrogram, the sits_cluster_dendro() function computes the adjusted Rand index (ARI) [37], returning the height where the cut of the dendrogram maximizes the index. In the example, the ARI indicates that there are six clusters. The result of sits_cluster_dendro() is a time series tibble with one additional column called “cluster”. The function sits_cluster_frequency() provides information on the composition of each cluster.

# Show clusters samples frequency
sits_cluster_frequency(clusters)
#>          
#>             1   2   3   4   5   6 Total
#>   Cerrado 203  13  23  80   1  80   400
#>   Pasture   2 176  28   0 140   0   346
#>   Total   205 189  51  80 141  80   746

The cluster frequency table shows that each cluster has a predominance of either Cerrado or Pasture labels, except for cluster 3, which has a mix of samples from both labels. Such confusion may have resulted from incorrect labeling, the inadequacy of the selected bands and spatial resolution, or natural confusion due to the variability of the land classes. To remove cluster 3, use dplyr::filter(). The remaining clusters still contain mixed labels, possibly resulting from outliers. In this case, sits_cluster_clean() removes the outliers, leaving only the most frequent label. After cleaning, the resulting set of samples is likely to improve the classification results.

# Remove cluster 3 from the samples
clusters_new <- dplyr::filter(clusters, cluster != 3)
# Clear clusters, leaving only the majority label
clean <- sits_cluster_clean(clusters_new)
# Show clusters samples frequency
sits_cluster_frequency(clean)
#>          
#>             1   2   4   5   6 Total
#>   Cerrado 203   0  80   0  80   363
#>   Pasture   0 176   0 140   0   316
#>   Total   203 176  80 140  80   679

Using SOM for sample quality control

sits provides a clustering technique based on self-organizing maps (SOM) as an alternative to hierarchical clustering for quality control of training samples. SOM is a dimensionality reduction technique [38], where high-dimensional data is mapped into a two-dimensional map, keeping the topological relations between data patterns. As shown in Figure 39, the SOM 2D map is composed of units called neurons. Each neuron has a weight vector, with the same dimension as the training samples. At the start, the neuron weights are assigned small random values; the map is then trained by competitive learning. The algorithm computes the distances of each member of the training set to all neurons and finds the neuron closest to the input, called the best matching unit.


Figure 39: SOM 2D map creation (Source: Santos et al. (2021). Reproduction under fair use doctrine).

The input data for quality assessment is a set of training samples, which are high-dimensional; for example, a time series with 25 instances of 4 spectral bands has 100 dimensions. When projecting a high-dimensional dataset into a 2D SOM map, the units of the map (called neurons) compete for each sample. Each time series will be mapped to one of the neurons. Since the number of neurons is smaller than the number of time series, each neuron will be associated with many time series. The resulting 2D map will be a set of clusters. Given that SOM preserves the topological structure of neighborhoods in multiple dimensions, clusters that contain training samples with a given label will usually be neighbors in 2D space. The neighbors of each neuron of a SOM map provide information on intraclass and interclass variability, which is used to detect noisy samples. The methodology of using SOM for sample quality assessment is discussed in detail in the reference paper [39].
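The competitive learning step can be illustrated with a short sketch. The function below is not part of sits or kohonen; it merely mirrors the description above, computing the distance from one sample to every neuron's weight vector and returning the index of the closest neuron (the best matching unit).

# Illustrative sketch: find the best matching unit for one sample
# sample_vec: numeric vector; weights: matrix with one row per neuron
best_matching_unit <- function(sample_vec, weights) {
  dists <- apply(weights, 1, function(w) sum((sample_vec - w)^2))
  which.min(dists)
}
# Toy example: 4 neurons in a 100-dimensional space
set.seed(42)
weights <- matrix(runif(4 * 100), nrow = 4)
best_matching_unit(runif(100), weights)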


Figure 40: Using SOM for class noise reduction (Source: Santos et al. (2021). Reproduction under fair use doctrine).

Creating the SOM map

To perform the SOM-based quality assessment, the first step is to run sits_som_map(), which uses the kohonen R package to compute a SOM grid [40], controlled by five parameters. The grid size is given by grid_xdim and grid_ydim. The starting learning rate is alpha, which decreases during the iterations. To measure the separation between samples, use distance (either “sumofsquares” or “euclidean”). The number of iterations is set by rlen. For more details, please consult ?kohonen::supersom.

# Clustering time series using SOM
som_cluster <- sits_som_map(samples_cerrado_mod13q1_2bands,
  grid_xdim = 15,
  grid_ydim = 15,
  alpha = 1.0,
  distance = "euclidean",
  rlen = 20
)
# Plot the SOM map
plot(som_cluster)

Figure 41: SOM map for the Cerrado samples (Source: Authors).

The output of sits_som_map() is a list with three elements: (a) data, the original set of time series with two additional columns for each time series: id_sample (the original id of each sample) and id_neuron (the id of the neuron to which it belongs); (b) labelled_neurons, a tibble with information on the neurons. For each neuron, it gives the prior and posterior probabilities of all labels which occur in the samples assigned to it; and (c) the SOM grid. To plot the SOM grid, use plot(). The neurons are labelled using majority voting.
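Assuming the element names listed above, the neuron-level probabilities can be inspected directly:

# Inspect the label probabilities per neuron
head(som_cluster$labelled_neurons)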

The SOM grid shows that most classes are associated with neurons close to each other, although there are exceptions. Some Pasture neurons are far from the main cluster because the transition between open savanna and pasture areas is not always well defined and depends on climate and latitude. Also, the neurons associated with Soy_Fallow are dispersed in the map, indicating possible problems in distinguishing this class from the other agricultural classes. The SOM map can be used to remove outliers, as shown below.

Measuring confusion between labels using SOM

The second step in SOM-based quality assessment is understanding the confusion between labels. The function sits_som_evaluate_cluster() groups neurons by their majority label and produces a tibble. Neurons are grouped into clusters, and there will be as many clusters as there are labels. The results show the percentage of samples of each label in each cluster. Ideally, all samples in each cluster would have the same label. In practice, clusters contain samples with different labels. This information helps measure the confusion between samples.

# Produce a tibble with a summary of the mixed labels
som_eval <- sits_som_evaluate_cluster(som_cluster)
# Show the result
som_eval
#> # A tibble: 79 × 4
#>    id_cluster cluster        class          mixture_percentage
#>         <int> <chr>          <chr>                       <dbl>
#>  1          1 Dense_Woodland Dense_Woodland           81.0    
#>  2          1 Dense_Woodland Millet_Cotton             0.00871
#>  3          1 Dense_Woodland Pasture                   6.14   
#>  4          1 Dense_Woodland Rocky_Savanna             7.16   
#>  5          1 Dense_Woodland Savanna                   4.03   
#>  6          1 Dense_Woodland Silviculture              1.63   
#>  7          1 Dense_Woodland Soy_Corn                  0.0261 
#>  8          1 Dense_Woodland Soy_Cotton                0.0261 
#>  9          1 Dense_Woodland Soy_Fallow                0.00871
#> 10          2 Dunes          Dunes                   100      
#> # ℹ 69 more rows

Many labels are associated with clusters that contain samples with other labels. Such confusion arises because sample labeling is subjective and can be biased. In many cases, interpreters use high-resolution data to identify samples, while the images to be classified are captured by satellites with lower resolution. In our case study, a MOD13Q1 image has pixels with 250 m resolution. As such, the correspondence between labeled locations in high-resolution images and mid- to low-resolution images is not direct.

The confusion by sample label can be visualized in a bar plot using plot(), as shown below. The bar plot shows some confusion between the labels associated with the natural vegetation typical of the Brazilian Cerrado (Savanna, Savanna_Parkland, Rocky_Savanna). This mixture is due to the large variability of the natural vegetation of the Cerrado biome, which makes it difficult to draw sharp boundaries between classes. Some confusion is also visible between the agricultural classes. Millet_Cotton is a particularly difficult class, since many of its samples are confused with Soy_Cotton and Fallow_Cotton.

# Plot the confusion between clusters
plot(som_eval)

Figure 42: Confusion between classes as measured by SOM (Source: Authors).

Detecting noisy samples using SOM

The third step in the quality assessment uses the discrete probability distribution associated with each neuron, which is included in the labelled_neurons tibble produced by sits_som_map(). This approach associates probabilities with frequency of occurrence. More homogeneous neurons (those where one label has a high frequency) are assumed to contain good quality samples. Heterogeneous neurons (those where two or more labels have significant frequencies) are likely to contain noisy samples. The algorithm computes two values for each sample:

  • prior probability: the probability that the label assigned to the sample is correct, considering the frequency of samples in the same neuron. For example, if a neuron has 20 samples, of which 15 are labeled as Pasture and 5 as Forest, all samples labeled Forest are assigned a prior probability of 25%. This indicates that Forest samples in this neuron may not be of good quality (see the short computation after this list).

  • posterior probability: the probability that the label assigned to the sample is correct, considering the neighboring neurons. Take the case of the above-mentioned neuron whose samples labeled Pasture have a prior probability of 75%. What happens if all the neighboring neurons have Forest as a majority label? To answer this question, we use Bayesian inference to estimate if these samples are noisy based on the surrounding neurons [41].
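For the hypothetical neuron in the example above, the prior probabilities are simply the label frequencies within the neuron:

# Label frequencies for the example neuron (20 samples)
neuron_labels <- c(rep("Pasture", 15), rep("Forest", 5))
prop.table(table(neuron_labels))
#> neuron_labels
#>  Forest Pasture
#>    0.25    0.75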

To identify noisy samples, we take the result of the sits_som_map() function as the first argument to the function sits_som_clean_samples(). This function finds out which samples are noisy, which are clean, and which need to be further examined by the user. It requires the prior_threshold and posterior_threshold parameters according to the following rules:

  • If the prior probability of a sample is less than prior_threshold, the sample is assumed to be noisy and tagged as “remove”;
  • If the prior probability is greater or equal to prior_threshold and the posterior probability calculated by Bayesian inference is greater or equal to posterior_threshold, the sample is assumed not to be noisy and thus is tagged as “clean”;
  • If the prior probability is greater or equal to prior_threshold and the posterior probability is less than posterior_threshold, the sample is part of the majority label of those assigned to its neuron, but its label is not consistent with most of its neighbors. This is an anomalous condition, and the sample is tagged as “analyze”. Users are encouraged to inspect such samples to find out whether they are in fact noisy or not. The three rules are sketched as a decision function below.
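The decision function below makes the rules explicit. This is an illustrative sketch of the logic, not the sits internals:

# Sketch of the tagging rules (illustrative only)
tag_sample <- function(prior, posterior,
                       prior_threshold = 0.6,
                       posterior_threshold = 0.6) {
  if (prior < prior_threshold) {
    "remove"
  } else if (posterior >= posterior_threshold) {
    "clean"
  } else {
    "analyze"
  }
}
tag_sample(0.5, 0.9) # "remove": low prior probability
tag_sample(0.8, 0.7) # "clean": high prior and posterior
tag_sample(0.8, 0.4) # "analyze": high prior, low posterior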

The default value for both prior_threshold and posterior_threshold is 60%. The sits_som_clean_samples() function has an additional parameter (keep), which indicates which samples should be kept in the set based on their prior and posterior probabilities. The default for keep is c("clean", "analyze"). As a result of the cleaning, about 10,500 samples (roughly 20% of the original set) have been considered noisy and thus removed, as the summary below shows.

new_samples <- sits_som_clean_samples(
  som_map = som_cluster,
  prior_threshold = 0.6,
  posterior_threshold = 0.6,
  keep = c("clean", "analyze")
)
# Print the new sample distribution
summary(new_samples)
#> # A tibble: 11 × 3
#>    label            count    prop
#>    <chr>            <int>   <dbl>
#>  1 Dense_Woodland    8214 0.207  
#>  2 Dunes              550 0.0139 
#>  3 Fallow_Cotton      345 0.00870
#>  4 Pasture           5353 0.135  
#>  5 Rocky_Savanna     6289 0.159  
#>  6 Savanna           7901 0.199  
#>  7 Savanna_Parkland  2106 0.0531 
#>  8 Silviculture        85 0.00214
#>  9 Soy_Corn          4191 0.106  
#> 10 Soy_Cotton        3489 0.0880 
#> 11 Soy_Fallow        1139 0.0287

All samples of Millet_Cotton, the class with the highest confusion with other classes, have been removed. Most samples of Silviculture (planted forests) have also been removed, since they were confused with natural forests and woodlands in the SOM map. Further analysis includes calculating the SOM map and confusion matrix for the new set, as shown in the following example.

# Evaluate the mixture in the SOM clusters of new samples
new_cluster <- sits_som_map(
  data = new_samples,
  grid_xdim = 15,
  grid_ydim = 15,
  alpha = 1.0,
  distance = "euclidean"
)
new_cluster_mixture <- sits_som_evaluate_cluster(new_cluster)
# Plot the mixture information.
plot(new_cluster_mixture)

Figure 43: Cluster confusion plot for samples cleaned by SOM (Source: Authors).

As expected, the new confusion map shows a significant improvement over the previous one. This result should be interpreted carefully, since it may be due to different effects. The most direct interpretation is that Millet_Cotton and Silviculture cannot be easily separated from the other classes, given the current attributes (a time series of NDVI and EVI indices from MODIS images). In such situations, users should consider increasing the number of samples of the less represented classes, including more MODIS bands, or working with higher-resolution satellites. The results of the SOM method should be interpreted based on the users’ understanding of the ecosystems and agricultural practices of the study region.

A further comparison between the original and cleaned samples is to run a 5-fold validation on both sample sets using sits_kfold_validate() and a random forest model. The SOM procedure improves the validation results from 94.5% on the original dataset to 99.1% on the cleaned one. This improvement should not be interpreted as providing a better fit for the final map accuracy. A 5-fold validation procedure only measures how well the machine learning model fits the samples; it is not an accuracy assessment of the classification results. The result only indicates that the training set after the SOM sample removal procedure is more internally consistent than the original one. For more details on accuracy measures, please see the chapter “Validation and Accuracy Assessment”.

# Run a k-fold validation
assess_orig <- sits_kfold_validate(
  samples = samples_cerrado_mod13q1_2bands,
  folds = 5,
  ml_method = sits_rfor()
)
# Print summary
summary(assess_orig)
#> Overall Statistics                          
#>  Accuracy : 0.945         
#>    95% CI : (0.943, 0.947)
#>     Kappa : 0.936
assess_new <- sits_kfold_validate(
  samples = new_samples,
  folds = 5,
  ml_method = sits_rfor()
)
# Print summary
summary(assess_new)
#> Overall Statistics                         
#>  Accuracy : 0.991        
#>    95% CI : (0.99, 0.992)
#>     Kappa : 0.989

The SOM-based analysis discards samples that can be confused with samples of other classes. After removing noisy samples or uncertain classes, the dataset obtains a better validation score, since there is less confusion between classes. Users should analyze the results with care. Not all discarded samples are low-quality ones. Confusion between samples of different classes can result from inconsistent labeling or from the inability of satellite data to distinguish between the chosen classes. When many samples are discarded, as in the current example, it is advisable to revise the whole classification schema. The aim of selecting training data should always be to match the reality on the ground with the power of remote sensing data to identify differences. No analysis procedure can replace actual user experience and knowledge of the study region.

Reducing sample imbalance

Many training samples for Earth observation data analysis are imbalanced: the distribution of samples associated with each label is uneven. One example is the Cerrado dataset used in this chapter. The three most frequent labels (Dense_Woodland, Savanna, and Pasture) include 53% of all samples, while the three least frequent labels (Millet_Cotton, Silviculture, and Dunes) comprise only 2.6% of the dataset. Sample imbalance is an undesirable property of a training set, since machine learning algorithms tend to be more accurate for classes with many samples. Instances belonging to the minority group are misclassified more often than those belonging to the majority group. Thus, reducing sample imbalance can positively affect classification accuracy [42].
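These shares can be verified from the summary of the full dataset, assuming summary() returns the tibble printed earlier in this chapter:

# Verify the shares of the most and least frequent labels
props <- summary(samples_cerrado_mod13q1_2bands)
# Share of the three most frequent labels (about 0.53)
sum(props$prop[props$label %in% c("Dense_Woodland", "Savanna", "Pasture")])
# Share of the three least frequent labels (about 0.026)
sum(props$prop[props$label %in% c("Millet_Cotton", "Silviculture", "Dunes")])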

The function sits_reduce_imbalance() deals with training set imbalance: it increases the number of samples of the least frequent labels and reduces the number of samples of the most frequent ones. Oversampling requires generating synthetic samples. The package uses the SMOTE method, which estimates new samples by considering the cluster formed by the nearest neighbors of each minority label. SMOTE takes two samples from this cluster and produces a new one by randomly interpolating between them [43].
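The interpolation at the heart of SMOTE can be sketched in a few lines. The helper below is illustrative, not the internal sits code: given two minority samples as numeric vectors, it returns a synthetic point along the segment between them.

# Illustrative SMOTE-style interpolation between two minority samples
smote_point <- function(x, y) {
  gap <- runif(1) # random position along the segment from x to y
  x + gap * (y - x)
}
# Example with two toy series of 23 points each
set.seed(1)
s1 <- runif(23, min = 0.3, max = 0.8)
s2 <- runif(23, min = 0.3, max = 0.8)
new_sample <- smote_point(s1, s2)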

To perform undersampling, sits_reduce_imbalance() builds a SOM map for each majority label based on the required number of samples to be selected. Each dimension of the SOM is set to ceiling(sqrt(new_number_samples/4)) to allow a reasonable number of neurons to group similar samples. After calculating the SOM map, the algorithm extracts four samples per neuron to generate a reduced set of samples that approximates the variation of the original one.
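As a worked example of this rule (with new_number_samples standing for the number of samples to be selected for a class):

# Grid dimension rule for SOM-based undersampling
som_dim <- function(new_number_samples) {
  ceiling(sqrt(new_number_samples / 4))
}
som_dim(1500) # 20, i.e., a 20 x 20 grid with 400 neurons
# Extracting four samples per neuron yields up to 1600 samples, which is
# why some classes in the summary below have slightly more than 1500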

The sits_reduce_imbalance() algorithm has two parameters: n_samples_over and n_samples_under. The first parameter indicates the minimum number of samples per class; all classes with fewer samples than this value are oversampled. The second parameter controls the maximum number of samples per class; all classes with more samples than this value are undersampled. The following example uses sits_reduce_imbalance() with the Cerrado samples. We generate a balanced dataset where all classes have a minimum of 1000 and a maximum of 1500 samples. We use sits_som_evaluate_cluster() to estimate the confusion between classes of the balanced dataset.

# Reducing imbalances in the Cerrado dataset
balanced_samples <- sits_reduce_imbalance(
  samples = samples_cerrado_mod13q1_2bands,
  n_samples_over = 1000,
  n_samples_under = 1500,
  multicores = 4
)
# Print the balanced samples
# Some classes have more than 1500 samples due to the SOM map
# Each label has between 6% and 10% of the full set
summary(balanced_samples)
#> # A tibble: 12 × 3
#>    label            count   prop
#>    <chr>            <int>  <dbl>
#>  1 Dense_Woodland    1600 0.0968
#>  2 Dunes             1000 0.0605
#>  3 Fallow_Cotton     1000 0.0605
#>  4 Millet_Cotton     1000 0.0605
#>  5 Pasture           1592 0.0963
#>  6 Rocky_Savanna     1488 0.0900
#>  7 Savanna           1600 0.0968
#>  8 Savanna_Parkland  1584 0.0958
#>  9 Silviculture      1000 0.0605
#> 10 Soy_Corn          1592 0.0963
#> 11 Soy_Cotton        1552 0.0939
#> 12 Soy_Fallow        1520 0.0920
# Clustering time series using SOM
som_cluster_bal <- sits_som_map(
  data = balanced_samples,
  grid_xdim = 10,
  grid_ydim = 10,
  alpha = 1.0,
  distance = "euclidean",
  rlen = 20
)
# Produce a tibble with a summary of the mixed labels
som_eval <- sits_som_evaluate_cluster(som_cluster_bal)
# Show the result
plot(som_eval)

Figure 44: Confusion by cluster for the balanced dataset (Source: Authors).

As shown in Figure 44, the balanced dataset shows less confusion per label than the unbalanced one. In this case, many classes that were confused with others in the original confusion map are now better represented. Reducing sample imbalance should be tried as an alternative to reducing the number of samples of the classes using SOM. In general, users should balance their training data for better performance.

Conclusion

The quality of training data is critical to improving the accuracy of maps resulting from machine learning classification methods. To address this challenge, the sits package provides three methods for improving training samples. For large datasets, we recommend using both imbalance-reducing and SOM-based algorithms. The SOM-based method identifies potential mislabeled samples and outliers that require further investigation. The results demonstrate a positive impact on the overall classification accuracy.

The complexity and diversity of our planet defy simple label names with hard boundaries. Due to representational and data handling issues, all classification systems have a limited number of categories, which inevitably fail to adequately describe the nuances of the planet’s landscapes. All representation systems are thus limited and application-dependent. As stated by Janowicz [44]: “geographical concepts are situated and context-dependent and can be described from different, equally valid, points of view; thus, ontological commitments are arbitrary to a large extent”.

The availability of big data and satellite image time series is a further challenge. In principle, image time series can capture more subtle changes for land classification. Experts must conceive classification systems and training data collections by understanding how time series information relates to actual land change. Methods for quality analysis, such as those presented in this Chapter, cannot replace user understanding and informed choices.