Title: Bayesian Analysis of Seemingly Unrelated Regression Models
Description: Implementation of the direct Monte Carlo approach of Zellner and Ando (2010) <doi:10.1016/j.jeconom.2010.04.005> to sample from the posterior of Seemingly Unrelated Regression (SUR) models. In addition, a Gibbs sampler is implemented that allows the user to analyze SUR models using the power prior.
Authors: Ethan Alt
Maintainer: Ethan Alt <[email protected]>
License: GPL (>= 2)
Version: 0.1.2
Built: 2024-11-05 05:42:54 UTC
Source: https://github.com/ethan-alt/surbayes
C++ implementation to obtain a matrix of samples from the posterior predictive density
predict_surbayes_cpp(Mu, Sigmalist, n, J, nsims)
| Argument | Description |
| --- | --- |
| `Mu` | Matrix of means |
| `Sigmalist` | List of covariance matrices |
| `n` | Number of observations |
| `J` | Number of endpoints |
| `nsims` | Number of simulations (number of rows in `Mu`) |
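A minimal pure-R sketch of the assumed behavior follows; it is an illustration, not the package's C++ code. Each of the `nsims` draws reshapes one row of `Mu` into an n x J mean matrix and adds correlated Gaussian noise using the matching entry of `Sigmalist`.

```r
## Hypothetical pure-R analogue of predict_surbayes_cpp (an assumption,
## not the package's internal code).
predict_r <- function(Mu, Sigmalist, n, J, nsims) {
  vapply(seq_len(nsims), function(s) {
    Mus <- matrix(Mu[s, ], nrow = n, ncol = J)                     # per-sim means
    E   <- matrix(rnorm(n * J), ncol = J) %*% chol(Sigmalist[[s]]) # correlated noise
    Mus + E
  }, FUN.VALUE = matrix(0.0, n, J))                                # n x J x nsims array
}
```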
C++ implementation to obtain one sample from the posterior predictive density
predict_surbayes_helper(mu, Sigma, n, J)
| Argument | Description |
| --- | --- |
| `mu` | Vector of means |
| `Sigma` | Covariance matrix shared among all observations |
| `n` | Number of observations |
| `J` | Number of endpoints |
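Analogously, a hedged single-draw sketch of what this helper is assumed to compute:

```r
## Hypothetical pure-R analogue of predict_surbayes_helper (assumed behavior):
## one n x J response draw with row-wise means mu and shared covariance Sigma.
predict_helper_r <- function(mu, Sigma, n, J) {
  Mu <- matrix(mu, nrow = n, ncol = J)                  # reshape mean vector
  E  <- matrix(rnorm(n * J), ncol = J) %*% chol(Sigma)  # correlated errors
  Mu + E
}
```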
This function returns a list of new data sets by sampling from the posterior predictive density of Y | Y0, Xnew.
## S3 method for class 'surbayes'
predict(object, newdata, nsims = 1, ...)
| Argument | Description |
| --- | --- |
| `object` | Result from calling `sur_sample` |
| `newdata` | A `data.frame` of new data |
| `nsims` | Number of posterior draws to take. The default and minimum is 1; the maximum is the number of posterior draws stored in `object` |
| `...` | Further arguments passed to or from other methods |
An `n x J x nsims` array of predicted values.
```r
## Taken from bayesm package
if (nchar(Sys.getenv("LONG_TEST")) != 0) { M = 1000 } else { M = 10 }
set.seed(66)
## simulate data from SUR
beta1 = c(1, 2)
beta2 = c(1, -1, -2)
nobs = 100
nreg = 2
iota = c(rep(1, nobs))
X1 = cbind(iota, runif(nobs))
X2 = cbind(iota, runif(nobs), runif(nobs))
Sigma = matrix(c(0.5, 0.2, 0.2, 0.5), ncol = 2)
U = chol(Sigma)
E = matrix(rnorm(2 * nobs), ncol = 2) %*% U
y1 = X1 %*% beta1 + E[, 1]
y2 = X2 %*% beta2 + E[, 2]
X1 = X1[, -1]
X2 = X2[, -1]
data = data.frame(y1, y2, X1, X2)
names(data) = c(paste0('y', 1:2), paste0('x', 1:(ncol(data) - 2)))
## run DMC sampler
formula.list = list(y1 ~ x1, y2 ~ x2 + x3)
## Fit model
out = sur_sample(formula.list, data, M = M)
## Obtain predictions
pred = predict(out, data, nsims = 1)
```
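A hypothetical follow-up, assuming (per the Value section) that the result is an `n x J x nsims` array: averaging over the third margin gives posterior predictive means.

```r
## Summarize posterior predictive draws (hypothetical follow-up).
pred10 <- predict(out, data, nsims = min(10, M))  # several predictive draws
pred_mean <- apply(pred10, c(1, 2), mean)         # n x J matrix of means
head(pred_mean)
```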
This is a C++ implementation of sampling Sigma via Gibbs in the SUR model; the draw is from an inverse Wishart distribution.
sample_sigma(nu, V, p)
| Argument | Description |
| --- | --- |
| `nu` | Degrees of freedom |
| `V` | Scale matrix |
| `p` | Dimension of the covariance matrix |
A sampled covariance matrix.
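A minimal pure-R sketch of an inverse-Wishart draw under one common parameterization (an assumed analogue; `sample_sigma`'s exact convention may differ). It uses the identity that if Sigma ~ IW(nu, V), then Sigma^-1 ~ Wishart(nu, V^-1).

```r
## Hypothetical inverse-Wishart draw via base R's rWishart (an assumed
## analogue of sample_sigma; parameterization conventions vary).
sample_sigma_r <- function(nu, V) {
  W <- stats::rWishart(1, df = nu, Sigma = solve(V))[, , 1]
  solve(W)  # Sigma ~ Inverse-Wishart(nu, V)
}
```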
This function is a wrapper that performs either (1) direct Monte Carlo or (2) Gibbs sampling of the SUR model, depending on whether one data set (current only) or two data sets (current and historical) are specified.
sur_sample(formula.list, data, M, histdata = NULL, Sigma0 = NULL, a0 = 1, burnin = 0, thin = 1)
| Argument | Description |
| --- | --- |
| `formula.list` | A list of formulas, each element giving the formula for the corresponding endpoint |
| `data` | A `data.frame` containing the current data |
| `M` | Number of samples to be drawn |
| `histdata` | A `data.frame` of historical data; if supplied, the power prior Gibbs sampler is used |
| `Sigma0` | An optional initial value for the covariance matrix. Ignored if `histdata` is `NULL` |
| `a0` | A scalar between 0 and 1 giving the power prior parameter. Ignored if `histdata` is `NULL` |
| `burnin` | A non-negative integer giving the burn-in parameter. Ignored if `histdata` is `NULL` |
| `thin` | A positive integer giving the thin parameter. Ignored if `histdata` is `NULL` |
A list. The first element contains the posterior draws; the second element is a list of `J x J` covariance matrices.
```r
## Taken from bayesm package
if (nchar(Sys.getenv("LONG_TEST")) != 0) { M = 1000 } else { M = 10 }
set.seed(66)
## simulate data from SUR
beta1 = c(1, 2)
beta2 = c(1, -1, -2)
nobs = 100
nreg = 2
iota = c(rep(1, nobs))
X1 = cbind(iota, runif(nobs))
X2 = cbind(iota, runif(nobs), runif(nobs))
Sigma = matrix(c(0.5, 0.2, 0.2, 0.5), ncol = 2)
U = chol(Sigma)
E = matrix(rnorm(2 * nobs), ncol = 2) %*% U
y1 = X1 %*% beta1 + E[, 1]
y2 = X2 %*% beta2 + E[, 2]
X1 = X1[, -1]
X2 = X2[, -1]
data = data.frame(y1, y2, X1, X2)
names(data) = c(paste0('y', 1:2), paste0('x', 1:(ncol(data) - 2)))
## run DMC sampler
formula.list = list(y1 ~ x1, y2 ~ x2 + x3)
## Fit models
out_dmc = sur_sample(formula.list, data, M = M)          ## DMC used
out_powerprior = sur_sample(formula.list, data, M, data) ## Gibbs used
```
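A hypothetical way to inspect the result, assuming (per the Value section above) that the first list element is a matrix of posterior draws with one column per regression coefficient:

```r
## Posterior means of the regression coefficients (assumes out_dmc[[1]]
## is an M x p matrix of draws; an illustration, not documented package API).
colMeans(out_dmc[[1]])
```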
This function is called by `sur_sample_cov_cpp`. It samples the covariance matrix of a SUR model.
sur_sample_cov_helper_cpp(Y, Xlist, n, J, pj, sigma11, r1)
| Argument | Description |
| --- | --- |
| `Y` | A `matrix` of responses, one column per endpoint |
| `Xlist` | A `list` of design matrices, one per endpoint |
| `n` | Integer giving the number of observations |
| `J` | Integer giving the number of endpoints |
| `pj` | A vector giving the number of covariates per endpoint |
| `sigma11` | A scalar giving a draw for the (1,1) component of the covariance matrix |
| `r1` | A vector of residuals associated with the first endpoint |
C++ implementation of the Zellner and Ando (2010) direct Monte Carlo method for sampling from the posterior of a Bayesian SUR model
sur_sample_cpp(Y, Xlist, y, X, XtX, pj, M)
| Argument | Description |
| --- | --- |
| `Y` | Matrix of responses, one column per endpoint |
| `Xlist` | A `list` of design matrices, one per endpoint |
| `y` | Stacked (vectorized) response |
| `X` | Design matrix (block diagonal across endpoints) |
| `XtX` | The cross-product `t(X) %*% X` |
| `pj` | Vector giving the number of covariates per endpoint |
| `M` | An integer giving the number of desired samples |
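For orientation, the stacked quantities these arguments describe can be built as follows in the standard SUR setup (an illustration using the simulated `data` from the examples in this document; `Matrix::bdiag` from the Matrix package is assumed available, and this is not the package's internal code):

```r
## Stacked SUR design: block-diagonal X across endpoints, stacked response y.
library(Matrix)
X1 <- model.matrix(y1 ~ x1, data)
X2 <- model.matrix(y2 ~ x2 + x3, data)
X   <- as.matrix(bdiag(X1, X2))  # block-diagonal design matrix
y   <- c(data$y1, data$y2)       # stacked response vector
XtX <- crossprod(X)              # t(X) %*% X
```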
This function samples from the posterior of a SUR model using the direct Monte Carlo (DMC) method of Zellner and Ando (2010)
sur_sample_dmc(formula.list, data, M = 1000)
| Argument | Description |
| --- | --- |
| `formula.list` | A list of formulas, each element giving the formula for the corresponding endpoint |
| `data` | A `data.frame` containing the data |
| `M` | Number of samples to be drawn |
A list. The first element contains the posterior draws; the second element is a list of `J x J` covariance matrices. Other elements are helpful statistics about the SUR model to pass to other functions.
```r
## Taken from bayesm package
if (nchar(Sys.getenv("LONG_TEST")) != 0) { M = 1000 } else { M = 10 }
set.seed(66)
## simulate data from SUR
beta1 = c(1, 2)
beta2 = c(1, -1, -2)
nobs = 100
nreg = 2
iota = c(rep(1, nobs))
X1 = cbind(iota, runif(nobs))
X2 = cbind(iota, runif(nobs), runif(nobs))
Sigma = matrix(c(0.5, 0.2, 0.2, 0.5), ncol = 2)
U = chol(Sigma)
E = matrix(rnorm(2 * nobs), ncol = 2) %*% U
y1 = X1 %*% beta1 + E[, 1]
y2 = X2 %*% beta2 + E[, 2]
X1 = X1[, -1]
X2 = X2[, -1]
data = data.frame(y1, y2, X1, X2)
names(data) = c(paste0('y', 1:2), paste0('x', 1:(ncol(data) - 2)))
## specify formulas
formula.list = list(y1 ~ x1, y2 ~ x2 + x3)
## fit using historical data as current data set--never done in practice
out = sur_sample_powerprior(formula.list, data, histdata = data, M = M)
```
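For completeness, a hypothetical direct call to the DMC sampler on the simulated data above (the example as printed fits the power prior model instead):

```r
## Fit via direct Monte Carlo only; no historical data is involved.
out_dmc <- sur_sample_dmc(formula.list, data, M = M)
```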
This is a C++ implementation of Gibbs sampling for the SUR model with a power prior.
sur_sample_gibbs_cpp(Sigma, M, X, X0, XtX, X0tX0, Y, Y0, y, y0, a0, pvec, burnin, thin)
| Argument | Description |
| --- | --- |
| `Sigma` | Initial value for the covariance matrix |
| `M` | Number of samples |
| `X` | Design matrix for the current data |
| `X0` | Design matrix for the historical data |
| `XtX` | The cross-product `t(X) %*% X` |
| `X0tX0` | The cross-product `t(X0) %*% X0` |
| `Y` | Future response as a matrix (Y1, ..., YJ) |
| `Y0` | Historical response as a matrix (Y01, ..., Y0J) |
| `y` | Future response as a vector |
| `y0` | Historical response as a vector |
| `a0` | Power prior parameter |
| `pvec` | Vector giving the number of covariates per endpoint |
| `burnin` | Burn-in parameter |
| `thin` | Thin parameter |
A sampled covariance matrix.
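For reference, the cross-product arguments can be precomputed with `crossprod()` (a sketch with placeholder designs; not the package's internal code):

```r
## Hypothetical precomputation of the cross-product arguments.
X  <- matrix(rnorm(40), nrow = 10)  # placeholder current design (10 x 4)
X0 <- matrix(rnorm(40), nrow = 10)  # placeholder historical design (10 x 4)
XtX   <- crossprod(X)               # t(X) %*% X
X0tX0 <- crossprod(X0)              # t(X0) %*% X0
```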
This function uses Gibbs sampling to sample from the posterior density of a SUR model under the power prior, which raises the historical-data likelihood to the power `a0` (so `a0 = 0` discards the historical data and `a0 = 1` pools it fully with the current data).
sur_sample_powerprior(formula.list, data, histdata, M, Sigma0 = NULL, a0 = 1, burnin = 0, thin = 1)
| Argument | Description |
| --- | --- |
| `formula.list` | A list of formulas, each element giving the formula for the corresponding endpoint |
| `data` | A `data.frame` containing the current data |
| `histdata` | A `data.frame` containing the historical data |
| `Sigma0` | An optional `J x J` matrix giving the initial value for the covariance matrix |
| `a0` | A scalar between 0 and 1 giving the power prior parameter |
| `burnin` | A non-negative integer giving the burn-in parameter |
| `thin` | A positive integer giving the thin parameter |
A list. The first element contains the posterior draws; the second element is a list of `J x J` covariance matrices.
```r
## Taken from bayesm package
if (nchar(Sys.getenv("LONG_TEST")) != 0) { M = 1000 } else { M = 10 }
set.seed(66)
## simulate data from SUR
beta1 = c(1, 2)
beta2 = c(1, -1, -2)
nobs = 100
nreg = 2
iota = c(rep(1, nobs))
X1 = cbind(iota, runif(nobs))
X2 = cbind(iota, runif(nobs), runif(nobs))
Sigma = matrix(c(0.5, 0.2, 0.2, 0.5), ncol = 2)
U = chol(Sigma)
E = matrix(rnorm(2 * nobs), ncol = 2) %*% U
y1 = X1 %*% beta1 + E[, 1]
y2 = X2 %*% beta2 + E[, 2]
X1 = X1[, -1]
X2 = X2[, -1]
data = data.frame(y1, y2, X1, X2)
names(data) = c(paste0('y', 1:2), paste0('x', 1:(ncol(data) - 2)))
## specify formulas
formula.list = list(y1 ~ x1, y2 ~ x2 + x3)
## fit using historical data as current data set--never done in practice
out = sur_sample_powerprior(formula.list, data, histdata = data, M = M)
```
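A hypothetical variation that discounts the historical data, using only arguments from the usage line above:

```r
## Down-weight the historical likelihood with a0 = 0.5.
out_half <- sur_sample_powerprior(formula.list, data, histdata = data,
                                  M = M, a0 = 0.5)
```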