| sits_TempCNN {sits} | R Documentation |
Use a TempCNN algorithm to classify data. The model has two stages: a 1D CNN followed by a multi-layer perceptron. Users can define the depth of the 1D convolutional network, as well as the number of perceptron layers.
This function is based on the paper by Charlotte Pelletier referenced below and on the code available on GitHub (https://github.com/charlotte-pel/temporalCNN). If you use this method, please cite the original TempCNN paper.
sits_TempCNN(
  samples = NULL,
  cnn_layers = c(64, 64, 64),
  cnn_kernels = c(5, 5, 5),
  cnn_activation = "relu",
  cnn_L2_rate = 1e-06,
  cnn_dropout_rates = c(0.5, 0.5, 0.5),
  dense_layer_nodes = 256,
  dense_layer_activation = "relu",
  dense_layer_dropout_rate = 0.5,
  optimizer = keras::optimizer_adam(learning_rate = 0.001),
  epochs = 150,
  batch_size = 128,
  validation_split = 0.2,
  verbose = 0
)
samples: Time series with the training samples.

cnn_layers: Number of 1D convolutional filters per layer.

cnn_kernels: Size of the 1D convolutional kernels.

cnn_activation: Activation function for the 1D convolution. Valid values: 'relu', 'elu', 'selu', 'sigmoid'.

cnn_L2_rate: L2 regularization rate for the 1D convolution.

cnn_dropout_rates: Dropout rates for the 1D convolutional filters.

dense_layer_nodes: Number of nodes in the dense layer.

dense_layer_activation: Activation function for the dense layer. Valid values: 'relu', 'elu', 'selu', 'sigmoid'.

dense_layer_dropout_rate: Dropout rate (0, 1) for the dense layer.

optimizer: Function with a pointer to the optimizer function (default is optimizer_adam()). Options: optimizer_adadelta(), optimizer_adagrad(), optimizer_adam(), optimizer_adamax(), optimizer_nadam(), optimizer_rmsprop(), optimizer_sgd().

epochs: Number of iterations to train the model.

batch_size: Number of samples per gradient update.

validation_split: Number between 0 and 1. Fraction of the training data to be used as validation data. The model sets apart this fraction of the training data, does not train on it, and evaluates the loss and any model metrics on it at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling.

verbose: Verbosity mode (0 = silent, 1 = progress bar, 2 = one line per epoch).
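As a sketch of how these arguments interact, the following hypothetical call configures a deeper network. The key constraint (per the defaults above) is that cnn_layers, cnn_kernels, and cnn_dropout_rates are parallel vectors, one entry per convolutional block; the specific values below are illustrative, not recommended settings.

# Sketch only: assumes the sits package and a working Keras backend
# are installed, and uses the built-in samples_modis_4bands dataset.
library(sits)

# A hypothetical four-block TempCNN: the three cnn_* vectors must
# all have the same length, one entry per convolutional block.
deep_tempcnn <- sits_TempCNN(
  cnn_layers        = c(64, 64, 128, 128),   # filters per 1D conv layer
  cnn_kernels       = c(5, 5, 3, 3),         # kernel size per layer
  cnn_dropout_rates = c(0.4, 0.4, 0.5, 0.5), # dropout per layer
  dense_layer_nodes = 512,
  epochs            = 100
)

# Train the model on a time-series tibble of labeled samples
model <- sits_train(samples_modis_4bands, deep_tempcnn)

Note that sits_TempCNN() returns an untrained model specification; training happens only when it is passed to sits_train() together with the samples.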
A fitted model to be passed to sits_classify().
Gilberto Camara, gilberto.camara@inpe.br
Alexandre Ywata de Carvalho, alexandre.ywata@ipea.gov.br
Rolf Simoes, rolf.simoes@inpe.br
Charlotte Pelletier, Geoffrey Webb and François Petitjean, "Temporal Convolutional Neural Network for the Classification of Satellite Image Time Series", Remote Sensing, 11(5), 523, 2019. DOI: 10.3390/rs11050523.
## Not run:
# Retrieve the set of samples for the Mato Grosso (provided by EMBRAPA)
# Build a machine learning model based on deep learning
tc_model <- sits_train(samples_modis_4bands, sits_TempCNN(epochs = 75))
# Plot the model
plot(tc_model)
# Select a point and classify it with the trained model
point <- sits_select(point_mt_6bands, bands = c("NDVI", "EVI", "NIR", "MIR"))
class <- sits_classify(point, tc_model)
plot(class, bands = c("NDVI", "EVI"))
## End(Not run)