| sits_ResNet {sits} | R Documentation |
Use a ResNet architecture for classifying image time series. The ResNet (deep residual network) was proposed by researchers at Microsoft Research for 2D image classification. ResNet addresses the degradation of accuracy that occurs when networks become very deep; the idea is to replace a deep network with a combination of shallow ones connected by residual (identity) shortcuts. In the review by Fawaz et al. (2019), ResNet was considered the best method for time series classification on the UCR datasets. Please refer to the paper for more details.
The SITS implementation of ResNet is based on the work of Hassan Fawaz and collaborators, and is also inspired by the paper of Wang et al. (see below). Fawaz provides reference code at https://github.com/hfawaz/dl-4-tsc. If you use this function, please cite the references.
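The residual idea can be shown with a toy numeric sketch in plain R (an illustration only, not the sits/keras implementation): a block transforms its input and then adds the unchanged input back before the activation, so when the transformation is small the block approximates the identity mapping.

```r
# Toy illustration of a residual block (not the sits/keras implementation).
relu <- function(x) pmax(x, 0)

residual_block <- function(x, f) {
  # Identity shortcut: add the input back to the transformed output.
  relu(f(x) + x)
}

x <- c(-1, 0.5, 2)
transform <- function(x) 0.1 * x  # stand-in for the three conv layers
residual_block(x, transform)
```

Because the shortcut carries the input through unchanged, gradients can flow past each block, which is what makes very deep stacks trainable.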
sits_ResNet(
  samples = NULL,
  blocks = c(64, 128, 128),
  kernels = c(8, 5, 3),
  activation = "relu",
  optimizer = keras::optimizer_adam(learning_rate = 0.001),
  epochs = 300,
  batch_size = 64,
  validation_split = 0.2,
  verbose = 0
)
samples |
Time series with the training samples. |
blocks |
Number of 1D convolutional filters for each block of three layers. |
kernels |
Size of the 1D convolutional kernels for each layer of each block. |
activation |
Activation function for 1D convolution. Valid values: 'relu', 'elu', 'selu', 'sigmoid'. |
optimizer |
Function with a pointer to the optimizer function (default is optimizer_adam()). Options: optimizer_adadelta(), optimizer_adagrad(), optimizer_adam(), optimizer_adamax(), optimizer_nadam(), optimizer_rmsprop(), optimizer_sgd(). |
epochs |
Number of epochs (complete passes over the training set) used to train the model. |
batch_size |
Number of samples per gradient update. |
validation_split |
Number between 0 and 1. Fraction of training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. |
verbose |
Verbosity mode (0 = silent, 1 = progress bar, 2 = one line per epoch). |
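To make the interplay of `blocks` and `kernels` concrete, the sketch below expands the defaults into a per-layer table (`resnet_layer_spec` is a hypothetical helper for illustration, not part of sits): each entry of `blocks` defines one block of three 1D convolutions, and the j-th convolution in every block uses kernel size `kernels[j]`.

```r
# Hypothetical helper (not part of sits): expand `blocks` and `kernels`
# into one row per convolutional layer.
resnet_layer_spec <- function(blocks = c(64, 128, 128), kernels = c(8, 5, 3)) {
  data.frame(
    block   = rep(seq_along(blocks), each = length(kernels)),
    filters = rep(blocks, each = length(kernels)),
    kernel  = rep(kernels, times = length(blocks))
  )
}

spec <- resnet_layer_spec()
nrow(spec)  # 9 convolutional layers: 3 blocks x 3 layers each
```

With the defaults, block 1 has three convolutions of 64 filters with kernel sizes 8, 5, and 3; blocks 2 and 3 repeat the pattern with 128 filters.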
A fitted model to be passed to sits_classify().
Gilberto Camara, gilberto.camara@inpe.br
Alexandre Ywata de Carvalho, alexandre.ywata@ipea.gov.br
Rolf Simoes, rolf.simoes@inpe.br
Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre-Alain Muller, "Deep learning for time series classification: a review", Data Mining and Knowledge Discovery, 33(4): 917–963, 2019.
Zhiguang Wang, Weizhong Yan, and Tim Oates, "Time series classification from scratch with deep neural networks: A strong baseline", 2017 international joint conference on neural networks (IJCNN).
## Not run:
# Retrieve the set of samples for the Mato Grosso (provided by EMBRAPA)
# Build a machine learning model based on deep learning
rn_model <- sits_train(samples_modis_4bands, sits_ResNet(epochs = 75))
# Plot the model
plot(rn_model)
# Get a point and classify it with the trained model
point <- sits_select(point_mt_6bands,
bands = c("NDVI", "EVI", "NIR", "MIR")
)
class <- sits_classify(point, rn_model)
plot(class, bands = c("NDVI", "EVI"))
## End(Not run)