Introduction to RRRR

library(RRRR)

Introduction

The R package RRRR provides methods for estimating online Robust Reduced-Rank Regression.

If you use any code from the RRRR package in a publication, please use the following citation:

Yangzhuoran Yang and Ziping Zhao (2020). RRRR: Online Robust Reduced-Rank Regression Estimation. R package version 1.1.0. https://pkg.yangzhuoranyang.com/RRRR/.

This vignette illustrates how to estimate and update (online) robust reduced-rank regression models using the various methods contained in the package.

Formulation

The formulation of the reduced-rank regression is as follows: \[y = \mu + AB'x + Dz + innov,\] where for each realization, \(y\) is a \(P\)-vector of dependent variables, \(x\) is a \(Q\)-vector of explanatory variables whose coefficient matrix has reduced rank, \(z\) is an \(R\)-vector of explanatory variables with full-rank coefficient matrix \(D\), \(\mu\) is the constant term, and \(innov\) is the innovation.

The matrix resulting from \(AB'\) is a reduced-rank coefficient matrix with rank \(r\). The function estimates the parameters \(\mu\), \(A\), \(B\), \(D\), and \(\Sigma\), the covariance matrix of the innovation’s distribution.

Simulation

To simulate example data that can be used to estimate a reduced-rank regression, use the function RRR_sim.

data <- RRR_sim()
data
#> Simulated Data for Reduced-Rank Regression
#> ------------
#> Specifications:
#>    N    P    Q    R    r 
#> 1000    3    3    1    1 
#> 
#> Coefficients:
#>         mu        A        B        D   Sigma1   Sigma2   Sigma3
#> 1  0.10000  1.24991 -0.91243 -1.95318  1.00000  0.00000  0.00000
#> 2  0.10000 -0.64522  0.22024  1.12842  0.00000  1.00000  0.00000
#> 3  0.10000  0.85119 -0.32485  0.70834  0.00000  0.00000  1.00000

A number of parameters can be specified; see ?RRR_sim. The default arguments are set in such a way that the matrix resulting from \(AB'\) is a reduced-rank coefficient matrix with rank \(r\).
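We can check this reduced-rank property directly on the simulated coefficients. The sketch below assumes the `spec` element names shown by `str(data)` below; the product of the simulated \(A\) and \(B'\) is \(P \times Q\) but has rank \(r\).

```r
# Form the coefficient matrix C = A B' from the simulated specification
# and compute its numerical rank via the QR decomposition.
C <- data$spec$A %*% t(data$spec$B)
qr(C)$rank  # should equal data$spec$r
```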

str(data)
#> List of 4
#>  $ spec:List of 11
#>   ..$ P    : num 3
#>   ..$ N    : num 1000
#>   ..$ Q    : num 3
#>   ..$ R    : num 1
#>   ..$ A    : num [1:3, 1] 1.25 -0.645 0.851
#>   ..$ B    : num [1:3, 1] -0.912 0.22 -0.325
#>   ..$ mu   : num [1:3] 0.1 0.1 0.1
#>   ..$ D    : num [1:3, 1] -1.953 1.128 0.708
#>   ..$ r    : num 1
#>   ..$ Sigma: num [1:3, 1:3] 1 0 0 0 1 0 0 0 1
#>   ..$ innov: num [1:1000, 1:3] -0.773 -3.226 -0.254 -0.994 0.463 ...
#>  $ y   : num [1:1000, 1:3] 0.104 -2.57 0.981 -0.989 1.513 ...
#>  $ x   : num [1:1000, 1:3] -0.968 0.442 -0.41 0.507 -0.956 ...
#>  $ z   : num [1:1000] 0.571 -0.279 -0.217 -0.331 -0.228 ...
#>  - attr(*, "class")= chr [1:2] "RRR_data" "list"

The list returned by RRR_sim contains the input specifications and the data points \(y\), \(x\) and \(z\).
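Since the specification also stores the simulated innovations, we can reconstruct \(y\) from the formulation as a sanity check. This is a sketch that assumes RRR_sim generates the data exactly from \(y = \mu + AB'x + Dz + innov\) with the element names shown above.

```r
spec <- data$spec
# Rebuild y row by row: constant term + reduced-rank part + full-rank part + innovation
y_rebuilt <- matrix(spec$mu, nrow = spec$N, ncol = spec$P, byrow = TRUE) +
  data$x %*% spec$B %*% t(spec$A) +   # x B A' gives the A B' x term in matrix form
  cbind(data$z) %*% t(spec$D) +       # D z term; z is a plain vector, so cbind() it
  spec$innov
all.equal(data$y, y_rebuilt, check.attributes = FALSE)
```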

Reduced-Rank Regression using Gaussian MLE: RRR

The Gaussian maximum likelihood estimation method is described in Johansen, S. (1991). This method is not robust in the sense that it assumes a Gaussian distribution for the innovations, which does not account for the heavy-tailedness of the true distribution or for outliers.

res_gmle <- RRR(y=data$y, x=data$x, z = data$z)
res_gmle
#> Reduced-Rank Regression
#> ------------
#> Specifications:
#>    N    P    Q    R    r 
#> 1000    3    3    1    1 
#> 
#> Coefficients:
#>           mu          A          B          D     Sigma1     Sigma2     Sigma3
#> 1  0.2271781 -1.2632534  0.9409798 -1.8761543  2.5412669  0.0045757  0.0074556
#> 2  0.1116005  0.6235539 -0.2233690  1.1676535  0.0045757  2.5032066  0.1269371
#> 3  0.0932108 -0.8098394  0.2897232  0.7435337  0.0074556  0.1269371  2.5766221

The matrix \(z\) and the constant \(\mu\) term are optional.

res_gmle <- RRR(y=data$y, x=data$x, z=data$z, mu = FALSE)
res_gmle <- RRR(y=data$y, x=data$x, z=NULL, mu = TRUE)
res_gmle <- RRR(y=data$y, x=data$x, z=NULL, mu = FALSE)

Robust Reduced-Rank Regression using Cauchy distribution and Majorisation-Minimisation: RRRR

The majorisation-minimisation estimation method is partly described in Zhao, Z., & Palomar, D. P. (2017). This method is robust in the sense that it assumes a heavy-tailed Cauchy distribution for the innovations. As before, the matrix \(z\) and the constant term \(\mu\) are optional.

res_mm <- RRRR(y=data$y, x=data$x, z = data$z, 
               itr = 100, 
               earlystop = 1e-4)
res_mm
#> Robust Reduced-Rank Regression
#> ------
#> Majorisation-Minimisation
#> ------------
#> Specifications:
#>    N    P    Q    R    r 
#> 1000    3    3    1    1 
#> 
#> Coefficients:
#>           mu          A          B          D     Sigma1     Sigma2     Sigma3
#> 1  0.1587844 -0.6572206  1.8301050 -1.9307780  0.7433522 -0.0146134  0.0026712
#> 2  0.1061328  0.2932582 -0.4761372  1.1415869 -0.0146134  0.7926448  0.0065512
#> 3  0.0814476 -0.4302945  0.5777307  0.7557723  0.0026712  0.0065512  0.7738511

Additional arguments worth noticing are itr, which controls the maximum number of iterations, and earlystop, the criterion for stopping the algorithm early. The algorithm stops when the improvement in the objective value is smaller than earlystop \(\times\) the objective value from the last iteration.
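The stopping rule can be sketched in a few lines of R. This is an illustration of the criterion just described, not the package internals:

```r
# Continue iterating only while the decrease in the objective value is at
# least earlystop times the objective value from the last iteration.
keep_iterating <- function(obj_last, obj_new, earlystop = 1e-4) {
  (obj_last - obj_new) >= earlystop * obj_last
}

keep_iterating(100, 99)      # improvement of 1 > 0.01: keep going
keep_iterating(100, 99.999)  # improvement below threshold: stop
```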

This method is an iterative optimisation algorithm, so we can use the built-in plot.RRRR method to see the convergence plot of the algorithm.

plot(res_mm, aes_x = "iteration", xlog10 = TRUE)

The argument aes_x sets the x axis to either the number of iterations or the run time. The argument xlog10 indicates whether the x axis is log10-transformed.

Online Robust Reduced-Rank Regression: ORRRR

The description of the generic Stochastic Successive Upper-bound Minimisation method and the Sample Average Approximation can be found in Razaviyayn, M., Sanjabi, M., & Luo, Z. Q. (2016).

There are two major estimation methods supported:

- SMM: Stochastic Majorisation-Minimisation (the default)
- SAA: Sample Average Approximation

The algorithm is online in the sense that the data is continuously incorporated and the algorithm can update the parameters accordingly. As before the matrix \(z\) and the constant term \(\mu\) are optional.

At each iteration of SAA, a new realisation of the parameters is achieved by solving the minimisation problem of the sample average of the desired objective function using the data currently incorporated. This can be computationally expensive when the objective function is highly nonconvex. The SMM method overcomes this difficulty by replacing the objective function by a well-chosen majorising surrogate function which can be much easier to optimise.

SMM: Stochastic Majorisation-Minimisation

By default the function ORRRR uses SMM.

res_smm <- ORRRR(y=data$y, x=data$x, z=data$z, 
                 initial_size = 100, addon = 10)
#> Loading required namespace: lazybar
res_smm
#> Online Robust Reduced-Rank Regression
#> ------
#> Stochastic Majorisation-Minimisation
#> ------------
#> Specifications:
#>            N            P            R            r initial_size        addon 
#>         1000            3            1            1          100           10 
#> 
#> Coefficients:
#>           mu          A          B          D     Sigma1     Sigma2     Sigma3
#> 1  0.1580173 -0.6569760  1.8274067 -1.9309594  0.7458841 -0.0141483  0.0019549
#> 2  0.1033638  0.2921993 -0.4738060  1.1395438 -0.0141483  0.7966697  0.0056392
#> 3  0.0799705 -0.4308073  0.5769531  0.7552932  0.0019549  0.0056392  0.7762973

The simulated data set has 1000 observations. In the command above, the first iteration uses 100 realisations, and each subsequent iteration incorporates 10 more data points. Because of the increasing data size, the estimation becomes slower the longer the algorithm runs; the estimated time left shown in the progress bar is therefore not very accurate.
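The schedule above implies roughly how many updating iterations the algorithm performs. This is simple arithmetic based on the sizes just described:

```r
# 1000 observations in total, 100 used in the initial fit, 10 added per
# iteration: the remaining data supports this many updating iterations.
(1000 - 100) / 10
#> [1] 90
```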

The output from ORRRR can also be plotted using plot.RRRR.

plot(res_smm)

SAA: Sample Average Approximation

When using SAA, two sub-solvers are supported in each iteration: "optim" and "MM".

res_saa_optim <- ORRRR(y=data$y, x=data$x, z=data$z,
                       method = "SAA", SAAmethod = "optim")
res_saa_mm <- ORRRR(y=data$y, x=data$x, z=data$z,
                       method = "SAA", SAAmethod = "MM")

optim is a general-purpose solver, which means it will be quite slow for this specific problem, especially when the number of parameters is large. Embedding majorisation-minimisation into the subiterations of SAA is more of a heuristic, without solid theory backing its efficiency. To keep the running time of this vignette short, we do not show the estimated results here.

Truly online: update.RRRR

With the result from ORRRR, the user can update it with newly acquired data using the function update. Note that the result from RRRR can also be updated; in that case update simply takes the result from RRRR as the starting point of the online estimation.

newdata <- RRR_sim()
res2_smm <- update(res_smm, newy=newdata$y, newx=newdata$x, newz=newdata$z)
res2_smm
#> Online Robust Reduced-Rank Regression
#> ------
#> Stochastic Majorisation-Minimisation
#> ------------
#> Specifications:
#>            N            P            R            r initial_size        addon 
#>         2000            3            1            1         1010           10 
#> 
#> Coefficients:
#>          mu         A         B         D    Sigma1    Sigma2    Sigma3
#> 1  0.182109 -0.443200  2.098299 -0.764120  2.030435 -0.117180 -0.172408
#> 2  0.029384 -0.176949  0.526462  1.124432 -0.117180  1.938935 -0.686911
#> 3  0.115268 -0.042169  0.635905  0.452803 -0.172408 -0.686911  1.410744

Without other arguments specified, update will keep the original specification of the model. If applied to the output of RRRR, the defaults are the default arguments of ORRRR, i.e., method set to "SMM" and addon set to 10.

References

Johansen, Søren. 1991. “Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models.” Econometrica: Journal of the Econometric Society 59 (6): 1551.

Zhao, Ziping, and Daniel P. Palomar. 2017. “Robust Maximum Likelihood Estimation of Sparse Vector Error Correction Model.” In 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 913–17. IEEE.

Razaviyayn, Meisam, Maziar Sanjabi, and Zhi Quan Luo. 2016. “A Stochastic Successive Minimization Method for Nonsmooth Nonconvex Optimization with Applications to Transceiver Design in Wireless Communication Networks.” Mathematical Programming. A Publication of the Mathematical Programming Society 157 (2): 515–45.

License

This package is free and open source software, licensed under GPL-3.