CRAN Package Check Results for Package mix

Last updated on 2024-03-28 23:01:50 CET.

Flavor                             Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang  1.0-11       6.11   29.16   35.27  OK
r-devel-linux-x86_64-debian-gcc    1.0-11       4.58   22.96   27.54  OK
r-devel-linux-x86_64-fedora-clang  1.0-11                      45.62  OK
r-devel-linux-x86_64-fedora-gcc    1.0-11                      43.54  OK
r-devel-windows-x86_64             1.0-11       8.00  209.00  217.00  ERROR
r-patched-linux-x86_64             1.0-11       6.28   28.88   35.16  OK
r-release-linux-x86_64             1.0-11       5.09   29.26   34.35  OK
r-release-macos-arm64              1.0-11                      19.00  OK
r-release-macos-x86_64             1.0-11                      27.00  OK
r-release-windows-x86_64           1.0-11       8.00   48.00   56.00  OK
r-oldrel-macos-arm64               1.0-11                      21.00  OK
r-oldrel-windows-x86_64            1.0-11      15.00   55.00   70.00  OK

Check Details

Version: 1.0-11
Check: tests
Result: ERROR
  Running 'mix.R' [165s]
  Running the tests in 'tests/mix.R' failed.
  Complete output:
    > library(mix)
    > data(stlouis)
    > x <- stlouis
    > # Perform preliminary manipulations on x. The categorical
    > # variables need to be coded as consecutive positive integers
    > # beginning with 1.
    > s <- prelim.mix(x,3)
    > # look at missingness patterns
    > print(s$r)
       G D1 D2 R1 V1 R2 V2
    12 1  1  1  1  1  1  1
    1  1  0  1  1  1  1  1
    8  1  1  0  1  1  1  1
    6  1  0  0  1  1  1  1
    1  1  1  1  0  1  1  1
    1  1  0  1  0  1  1  1
    1  1  1  0  0  1  1  1
    3  1  0  0  0  1  1  1
    3  1  1  1  1  0  1  1
    1  1  0  0  1  0  1  1
    1  1  1  1  0  0  1  1
    6  1  0  1  0  0  1  1
    2  1  0  0  0  0  1  1
    4  1  1  1  1  1  0  1
    1  1  0  1  1  0  0  1
    1  1  0  0  0  0  0  1
    3  1  1  1  1  0  1  0
    1  1  1  1  0  0  1  0
    2  1  0  1  0  0  1  0
    1  1  0  0  0  0  1  0
    1  1  1  0  1  1  0  0
    1  1  0  0  1  1  0  0
    4  1  1  1  1  0  0  0
    2  1  1  0  1  0  0  0
    1  1  0  0  1  0  0  0
    1  1  0  1  0  0  0  0
    > # Try EM for general location model without restrictions. This
    > # algorithm converges after 168 iterations.
    > thetahat1 <- em.mix(s)
    Steps of EM:
    1...2...3...4...5...6...7...8...9...10...11...12...13...14...15...16...17...18...19...20...21...22...23...24...25...26...27...28...29...30...31...32...33...34...35...36...37...38...39...40...41...42...43...44...45...46...47...48...49...50...51...52...53...54...55...56...57...58...59...60...61...62...63...64...65...66...67...68...69...70...71...72...73...74...75...76...77...78...79...80...81...82...83...84...85...86...87...88...89...90...91...92...93...94...95...96...97...98...99...100...101...102...103...104...105...106...107...108...109...110...111...112...113...114...115...116...117...118...119...120...121...122...123...124...125...126...127...128...129...130...131...132...133...134...135...136...137...138...139...140...141...142...143...144...145...146...147...148...149...150...151...152...153...154...155...156...157...158...159...160...161...162...163...164...165...166...167...168...169...170...171...172...173...174...175...176...177...178...179...180...181...
    > # look at the parameter estimates and loglikelihood
    > print(getparam.mix(s,thetahat1))
    $pi
    , , D2=1

              D1=1       D1=2
    G=1 0.14040543 0.09654434
    G=2 0.01627643 0.04617068
    G=3 0.03103847 0.01452370

    , , D2=2

             D1=1       D1=2
    G=1 0.1358792 0.01847542
    G=2 0.1076896 0.17768936
    G=3 0.1119740 0.10333338

    $mu
            [,1]     [,2]      [,3]     [,4]      [,5]      [,6]     [,7]     [,8]
    R1 109.77629 77.22294 115.80413 110.7962  93.52564  56.09284 123.7264 104.1379
    V1 130.76154 65.92582 137.70299 134.1615 113.81725  58.14789 161.0442 135.9854
    R2  99.71291 89.54078  82.91913 103.9516 126.02480  88.08945 118.0722 109.4170
    V2 116.75389 76.68685  96.22231 132.3583 142.64726 105.19174 132.6509 109.1568
           [,9]    [,10]     [,11]    [,12]
    R1 105.6808 119.6342 106.96369 107.4552
    V1 127.0207 141.4109 103.57375 107.6212
    R2 100.7178 136.9676  97.03059 106.9518
    V2 128.0930 181.0136 102.42986 104.9008

    $sigma
             R1       V1       R2       V2
    R1 169.5822 220.0024  97.0918 245.9642
    V1 220.0024 443.2755 152.9049 396.1698
    R2  97.0918 152.9049 113.9333 191.8200
    V2 245.9642 396.1698 191.8200 528.3272

    > print(getparam.mix(s,thetahat1,corr=T))
    $pi
    , , D2=1

              D1=1       D1=2
    G=1 0.14040543 0.09654434
    G=2 0.01627643 0.04617068
    G=3 0.03103847 0.01452370

    , , D2=2

             D1=1       D1=2
    G=1 0.1358792 0.01847542
    G=2 0.1076896 0.17768936
    G=3 0.1119740 0.10333338

    $mu
            [,1]     [,2]      [,3]     [,4]      [,5]      [,6]     [,7]     [,8]
    R1 109.77629 77.22294 115.80413 110.7962  93.52564  56.09284 123.7264 104.1379
    V1 130.76154 65.92582 137.70299 134.1615 113.81725  58.14789 161.0442 135.9854
    R2  99.71291 89.54078  82.91913 103.9516 126.02480  88.08945 118.0722 109.4170
    V2 116.75389 76.68685  96.22231 132.3583 142.64726 105.19174 132.6509 109.1568
           [,9]    [,10]     [,11]    [,12]
    R1 105.6808 119.6342 106.96369 107.4552
    V1 127.0207 141.4109 103.57375 107.6212
    R2 100.7178 136.9676  97.03059 106.9518
    V2 128.0930 181.0136 102.42986 104.9008

    $sdv
          R1       V1       R2       V2
    13.02237 21.05411 10.67396 22.98537

    $r
              R1        V1        R2        V2
    R1 1.0000000 0.8024177 0.6985010 0.8217321
    V1 0.8024177 1.0000000 0.6803922 0.8186402
    R2 0.6985010 0.6803922 1.0000000 0.7818385
    V2 0.8217321 0.8186402 0.7818385 1.0000000

    > print(loglik.mix(s,thetahat1))
    [1] -110.3436
    > # take 100 steps of data augmentation starting from thetahat1
    > rngseed(1234567)
    > newtheta <- da.mix(s,thetahat1,steps=100,showits=T)
    Steps of Data Augmentation:
    1...2...3...4...5...6...7...8...9...10...11...12...13...14...15...16...17...18...19...20...21...22...23...24...25...26...27...28...29...30...31...32...33...34...35...36...37...38...39...40...41...42...43...44...45...46...47...48...49...50...51...52...53...54...55...56...57...58...59...60...61...62...63...64...65...66...67...68...69...70...71...72...73...74...75...76...77...78...79...80...81...82...83...84...85...86...87...88...89...90...91...92...93...94...95...96...97...98...99...100...
    > # re-run em beginning from newtheta; should converge after 86
    > # iterations.
    > thetahat2 <- em.mix(s,newtheta)
    Steps of EM:
    1...2...3...4...5...6...7...8...9...10...11...12...13...14...15...16...17...18...19...20...21...22...23...24...25...26...27...28...29...30...31...32...33...34...35...36...37...38...39...40...41...42...43...44...45...46...47...48...49...50...51...
    > # Notice that the loglikelihood at thetahat2 is somewhat different
    > # than at thetahat1. Examination of thetahat1 and thetahat2 reveals
    > # that EM has converged to different values. The likelihood is
    > # multimodal.
    > print(loglik.mix(s,thetahat2))
    [1] -112.8721
    > print(getparam.mix(s,thetahat2))
    $pi
    , , D2=1

              D1=1       D1=2
    G=1 0.15208306 0.10080990
    G=2 0.01659304 0.04610470
    G=3 0.03121219 0.01453756

    , , D2=2

              D1=1       D1=2
    G=1 0.11979734 0.01861405
    G=2 0.08987228 0.19525607
    G=3 0.11155106 0.10356875

    $mu
           [,1]     [,2]      [,3]     [,4]      [,5]      [,6]     [,7]     [,8]
    R1 111.5689 77.39990 115.77582 108.7645 102.64743  56.10832 125.2010 104.2851
    V1 142.1018 66.18402 137.75406 128.9370  99.54955  58.20300 156.9325 140.2331
    R2 102.2716 89.51753  82.93291 102.3256  79.09548  88.12929 119.4024 108.9178
    V2 124.5163 77.06119  96.27258 125.9764  90.69851 105.27720 129.3375 110.5576
           [,9]     [,10]    [,11]    [,12]
    R1 102.0338  50.15794 103.2737 107.3209
    V1 126.3196 141.77485 105.7274 107.9395
    R2 100.6748 136.66741 106.8670 106.8964
    V2 128.4985 180.70583 112.0142 105.6503

    $sigma
              R1       V1        R2       V2
    R1 172.80452 212.2194  79.18561 239.1591
    V1 212.21935 446.6223 166.86453 401.7181
    R2  79.18561 166.8645 120.64414 219.6733
    V2 239.15914 401.7181 219.67333 570.4547

    > # Now try fitting a model with restrictions. We'll first fit the
    > # "null model" described on p. 130 of Schafer (1991), which
    > # fits the margins G and D1xD2 in the contingency table, and
    > # has a full D1*D2 interaction for each continuous variable
    > # but no effect for G.
    > margins <- c(1,0,2,3)
    > intercept <- rep(1,12)
    > d1 <- c(-1,-1,-1,1,1,1,-1,-1,-1,1,1,1)
    > d2 <- c(-1,-1,-1,-1,-1,-1,1,1,1,1,1,1)
    > design <- cbind(intercept,d1,d2,d1d2=d1*d2)
    > rm(intercept,d1,d2)
    > thetahat3 <- ecm.mix(s,margins,design)
    Steps of ECM:
    1...2...3...4...5...6...7...8...9...10...11...12...13...14...15...16...17...18...19...20...21...22...23...24...25...26...27...28...29...30...31...32...33...34...35...36...37...38...39...40...41...42...43...44...45...46...47...48...49...50...51...52...53...54...55...56...57...58...59...60...61...62...63...64...65...
    > print(loglik.mix(s,thetahat3))
    [1] -153.5587
    > # If we play around with starting values, we'll find that the
    > # likelihood under the "null model" is also multimodal.
    > # Now let's fit the "alternative model" on p. 131 of Schafer (1991).
    > margins <- c(1,2,0,2,3,0,1,3)
    > glin <- c(-1,0,1,-1,0,1,-1,0,1,-1,0,1)
    > design <- cbind(design,glin)
    > thetahat4 <- ecm.mix(s,margins,design)
    Steps of ECM:
    1...2...3...4...5...6...7...8...9...10...11...12...13...14...15...16...17...18...19...20...21...22...23...24...25...26...27...28...29...30...31...32...33...34...35...36...37...38...39...40...41...42...43...44...45...46...47...48...49...50...51...52...53...54...55...56...57...58...59...60...61...62...63...64...65...66...67...68...69...70...71...72...73...74...75...76...77...78...79...80...81...82...83...84...85...86...87...88...89...90...91...92...93...94...95...96...97...98...99...100...101...102...103...104...105...106...107...108...109...110...111...112...113...114...115...116...117...118...119...120...121...122...123...124...125...126...127...128...129...130...131...132...
    > print(loglik.mix(s,thetahat4))
    [1] -141.4389
    > # Now try some imputations. The following commands produce three
    > # multiple imputations under the alternative model. The imputations
    > # are proper if we can assume that the data augmentation procedure
    > # achieves stationarity by 100 steps.
    > rngseed(454545)
    > newtheta <- dabipf.mix(s,margins,design,thetahat4,steps=100,showits=T)
    Steps of Data Augmentation-Bayesian IPF:
    1...2...3...4...5...6...7...8...9...10...11...12...13...14...15...16...17...18...19...20...21...22...23...24...25...26...27...28...29...30...31...32...33...34...35...36...37...38...39...40...41...42...43...44...45...46...47...48...49...50...51...52...53...54...55...56...57...58...59...60...61...62...63...64...65...66...67...68...69...70...71...72...73...74...75...76...77...78...79...80...81...82...83...84...85...86...87...88...89...90...91...92...93...94...95...96...97...98...99...100...
    > imp1 <- imp.mix(s,newtheta,x)
    > newtheta <- dabipf.mix(s,margins,design,newtheta,steps=100,showits=T)
    Steps of Data Augmentation-Bayesian IPF:
    1...2...3...4...5...6...7...8...9...10...11...12...13...14...15...16...17...18...19...20...21...22...23...24...25...26...27...28...29...30...31...32...33...34...35...36...37...38...39...40...41...42...43...44...45...46...47...48...49...50...51...52...53...54...55...56...57...58...59...60...61...62...63...64...65...66...67...68...69...70...71...72...73...74...75...76...77...78...79...80...81...82...83...84...85...86...87...88...89...90...91...92...93...94...95...96...97...98...99...100...
    > imp2 <- imp.mix(s,newtheta,x)
    > newtheta <- dabipf.mix(s,margins,design,newtheta,steps=100,showits=T)
    Steps of Data Augmentation-Bayesian IPF:
    1...2...3...4...5...6...7...8...9...10...11...12...13...14...15...16...17...18...19...20...21...22...23...24...25...26...27...28...29...30...31...32...33...34...35...36...37...38...39...40...41...42...43...44...45...46...47...48...49...50...51...52...53...54...55...56...57...58...59...60...61...62...63...64...65...66...67...68...69...70...71...72...73...74...75...76...77...78...79...80...81...82...83...84...85...86...87...88...89...90...91...92...93...94...95...96...97...98...99...100...
    > imp3 <- imp.mix(s,newtheta,x)
    >
    > proc.time()
       user  system elapsed
       0.39    0.10    0.42
Flavor: r-devel-windows-x86_64