| multinomialFit {rpf} | R Documentation |
For degrees of freedom, we use the number of observed statistics (incorrect) instead of the number of possible response patterns (correct) (see Bock, Gibbons, & Muraki, 1988, p. 265). This is not a huge problem because this test becomes poorly calibrated when the multinomial table is sparse. For more accurate p-values, you can conduct a Monte-Carlo simulation study (see examples).
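To see why sparseness arises so quickly, note that the number of possible response patterns grows exponentially in the number of items. A rough back-of-the-envelope sketch (not part of the multinomialFit API; the item and sample counts match the Examples below):

```r
# With 10 dichotomous items there are 2^10 possible response patterns,
# so a sample of 1000 cannot come close to filling the multinomial table.
numItems <- 10
numOutcomes <- 2
possiblePatterns <- numOutcomes^numItems   # 1024 patterns
sampleSize <- 1000
sampleSize / possiblePatterns              # < 1 observation per cell on average
```

With polytomous items or more items, the table becomes sparser still, which is why the Monte-Carlo approach in the Examples is recommended for accurate p-values.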
multinomialFit(grp, independenceGrp, ..., method = "lr", log = TRUE, .twotier = TRUE)
grp |
a list with the spec, param, mean, and cov describing the group |
independenceGrp |
a list with the spec, param, mean, and cov describing the independence group |
... |
Not used. Forces remaining arguments to be specified by name. |
method |
"lr" (default, likelihood ratio statistic) or "pearson" (Pearson statistic)
log |
whether to report the p-value in log units
.twotier |
whether to use the two-tier optimization (default TRUE) |
Rows with missing data are ignored.
The full information test is described in Bartholomew & Tzamourani (1999, Section 3).
For CFI and TLI, you must provide an independence model.
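This page does not show how to build an independence group. One common construction fixes all slopes at zero, so the items are mutually independent of the latent trait. A hedged sketch, assuming `grp` from the Examples below and that the first row of the rpf.grm parameter matrix holds the slope (an assumption about the parameter layout):

```r
# Sketch only: derive an independence group from grp by zeroing the slopes.
# Assumption: row 1 of each rpf.grm parameter column is the slope.
indGrp <- grp
indGrp$param[1,] <- 0
indGrp$uniqueFree <- sum(indGrp$param != 0)
multinomialFit(grp, indGrp)   # supplying independenceGrp enables CFI/TLI
```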
Bartholomew, D. J., & Tzamourani, P. (1999). The goodness-of-fit of latent trait models in attitude measurement. Sociological Methods and Research, 27(4), 525-546.
Bock, R. D., Gibbons, R., & Muraki, E. (1988). Full-information item factor analysis. Applied Psychological Measurement, 12(3), 261-280.
# Create an example IFA group
grp <- list(spec=list())
grp$spec[1:10] <- rpf.grm()
grp$param <- sapply(grp$spec, rpf.rparam)
colnames(grp$param) <- paste("i", 1:10, sep="")
grp$mean <- 0
grp$cov <- diag(1)
grp$uniqueFree <- sum(grp$param != 0)
grp$data <- rpf.sample(1000, grp=grp)
# Monte-Carlo simulation study
mcReps <- 3 # increase this to 10,000 or so
stat <- rep(NA, mcReps)
for (rx in 1:mcReps) {
  t1 <- grp
  t1$data <- rpf.sample(grp=grp)
  stat[rx] <- multinomialFit(t1)$statistic
}
sum(stat >= multinomialFit(grp)$statistic)/mcReps # Monte-Carlo p-value: proportion of simulated statistics at least as large as the observed one