| GloVe {rsparse} | R Documentation |
GloVe matrix factorization model.
The model can be trained via fully asynchronous and parallel
AdaGrad with the $fit_transform() method.
GloVe is an R6Class generator object.
components: represents the context embeddings
shuffle: logical = FALSE by default. Defines whether to shuffle before each SGD iteration.
Shuffling is generally a good idea for stochastic gradient descent, but
from my experience it does not improve convergence in this particular case.
For usage details see Methods, Arguments and Examples sections.
glove = GloVe$new(rank, x_max, learning_rate = 0.15,
alpha = 0.75, lambda = 0.0, shuffle = FALSE)
glove$fit_transform(x, n_iter = 10L, convergence_tol = -1,
n_threads = getOption("rsparse_omp_threads", 1L), ...)
glove$components
$new(rank, x_max, learning_rate = 0.15,
alpha = 0.75, lambda = 0, shuffle = FALSE): constructor for the GloVe model. For a description of the arguments see the Arguments section.
$fit_transform(x, n_iter = 10L, convergence_tol = -1,
n_threads = getOption("rsparse_omp_threads", 1L), ...): fit the GloVe model to an input matrix x
glove: a GloVe object
x: an input term co-occurrence matrix, preferably in dgTMatrix format
n_iter: integer, the number of SGD iterations
rank: desired dimension of the latent vectors
x_max: integer, the maximum number of co-occurrences to use in the weighting function.
See the GloVe paper for details: http://nlp.stanford.edu/pubs/glove.pdf
learning_rate: numeric, the learning rate for SGD. I do not recommend
modifying this parameter, since AdaGrad will quickly adjust it to an optimal value
convergence_tol: numeric = -1, defines the early-stopping strategy. Fitting stops
when one of the two following conditions is satisfied: (a) all
iterations have been used, or (b) cost_previous_iter / cost_current_iter - 1 <
convergence_tol. By default all iterations are performed.
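The stopping test in condition (b) can be illustrated with hypothetical cost values (the numbers below are made up for the sketch, not produced by the package):

```r
# Early-stopping check from condition (b), with illustrative costs
cost_previous_iter <- 105.0
cost_current_iter  <- 100.0
convergence_tol    <- 0.01

stop_now <- (cost_previous_iter / cost_current_iter - 1) < convergence_tol
stop_now  # FALSE: relative improvement is 0.05, still above the tolerance
```

With the default convergence_tol = -1 this condition can never hold, since the relative improvement is non-negative for a decreasing cost, so all n_iter iterations run.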
alpha: numeric = 0.75, the alpha in the weighting function: f(x) = 1 if x >
x_max; (x/x_max)^alpha otherwise
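The weighting function can be written as a small standalone R function (a sketch for illustration, not the package's internal implementation):

```r
# GloVe co-occurrence weighting: caps the influence of very frequent
# pairs at 1, and down-weights rare pairs by (x/x_max)^alpha
glove_weight <- function(x, x_max = 10, alpha = 0.75) {
  ifelse(x > x_max, 1, (x / x_max)^alpha)
}

glove_weight(100, x_max = 10)  # 1 (capped: 100 > x_max)
glove_weight(5, x_max = 10)    # (5/10)^0.75, roughly 0.595
```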
lambda: numeric = 0.0, the regularization parameter
init: list(w_i = NULL, b_i = NULL, w_j = NULL, b_j = NULL),
initialization for the embeddings (w_i, w_j) and biases (b_i, b_j).
w_i and w_j are numeric matrices with #rows = rank and #columns = the expected
number of rows/columns in the input matrix; b_i and b_j are numeric vectors
with length equal to the expected number of rows/columns in the input matrix.
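Per the shapes described above, such an init list could be constructed as follows (a sketch; the rank, n_terms, and random-normal initialization are illustrative choices, not package defaults):

```r
# Custom initialization for a rank-50 model on a square co-occurrence
# matrix with n_terms rows/columns (values here are illustrative)
rank    <- 50
n_terms <- 1000

init <- list(
  # embeddings: rank x n_terms matrices
  w_i = matrix(rnorm(rank * n_terms, sd = 0.01), nrow = rank, ncol = n_terms),
  w_j = matrix(rnorm(rank * n_terms, sd = 0.01), nrow = rank, ncol = n_terms),
  # biases: length-n_terms vectors
  b_i = rep(0, n_terms),
  b_j = rep(0, n_terms)
)
```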
http://nlp.stanford.edu/projects/glove/
library(text2vec)  # itoken(), create_vocabulary(), create_tcm(), ...
library(rsparse)
temp = tempfile()
download.file('http://mattmahoney.net/dc/text8.zip', temp)
text8 = readLines(unz(temp, "text8"))
it = itoken(text8)
vocabulary = create_vocabulary(it)
vocabulary = prune_vocabulary(vocabulary, term_count_min = 5)
v_vect = vocab_vectorizer(vocabulary)
tcm = create_tcm(it, v_vect, skip_grams_window = 5L)
glove_model = GloVe$new(rank = 50, x_max = 10, learning_rate = .25)
# fit model and get word vectors
word_vectors_main = glove_model$fit_transform(tcm, n_iter = 10)
word_vectors_context = glove_model$components
word_vectors = word_vectors_main + t(word_vectors_context)
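The combined vectors are typically queried by cosine similarity. The sketch below implements a nearest-neighbour lookup in base R on a toy matrix standing in for word_vectors (cosine_nn is an illustrative helper, not part of rsparse):

```r
# Nearest neighbours of a word by cosine similarity (base R sketch)
cosine_nn <- function(emb, query, top_n = 5) {
  q <- emb[query, ]
  # cosine similarity of every row against the query row
  sims <- as.vector(emb %*% q) / (sqrt(rowSums(emb^2)) * sqrt(sum(q^2)))
  names(sims) <- rownames(emb)
  head(sort(sims, decreasing = TRUE), top_n)
}

# toy stand-in for the word_vectors matrix produced above
set.seed(1)
toy <- matrix(rnorm(4 * 3), nrow = 4,
              dimnames = list(c("a", "b", "c", "d"), NULL))
cosine_nn(toy, "a", top_n = 2)  # "a" itself ranks first, similarity 1
```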