Thursday, 23 August 2012

INLA functions

I've had this idea for a while, and maybe I'm finally coming around to doing it. Recently I've been doing some work using INLA and, while I was at it, I wrote some very simple R functions to do little pieces of analysis that I thought were useful.

As is often the case, at first I just wrote them for the specifics of my problem $-$ something quick & dirty. But then I thought they might be helpful in more general situations, and so I started to code them more carefully to turn them into proper functions.

The basic setup considers a Bayesian hierarchical model 
$$ y \sim p(y \mid \theta) $$
$$ g(\theta) = X\beta + \alpha $$
where $\beta$ is a set of unstructured effects and $\alpha$ represents the structured effects. The function $g(\cdot)$ of course depends on the way in which the model for the data $y$ is specified. Typically, the coefficients $\beta$ are given independent minimally informative priors, while $\alpha \sim \mbox{Normal}(0,\tau)$, where $\tau=1/\sigma^2$ is the precision.
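Just to fix ideas, here is a minimal sketch of how this setup might translate into an R-INLA call. The data frame mydata and the variables y, x1, x2 and area are made up purely for illustration, and the Poisson family (so that $g(\cdot)$ is the log link) is just one possible choice.

library(INLA)

# Unstructured effects (beta) for the covariates x1, x2, plus a structured
# effect (alpha) indexed by 'area', modelled as an exchangeable Normal(0, tau)
formula <- y ~ x1 + x2 + f(area, model = "iid")

m <- inla(formula,
          family = "poisson",                             # g() is the log link
          data = mydata,
          control.fixed = list(mean = 0, prec = 0.001))   # vague priors on beta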

The first function simply computes some summaries for the structured effects in terms of their standard deviation (rather than the precision, which is the INLA default). I've used the R-INLA built-in function inla.rmarginal, which lets you sample from the posterior marginal distribution of one of the variables monitored by the procedure, effectively producing a sample from the distribution of $\sigma$. Then I simply replace the table of results reporting summaries for $\tau$ with one reporting summaries for $\sigma$. [In fact, R-INLA also lets you compute the mean and the standard deviation of $\sigma$ analytically, using inla.expectation and some easy algebra (see for example page 7 here).]
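Something along these lines (a rough sketch of the idea, not the actual function; it assumes the only hyperparameters in the fitted model are precisions, as in the model above):

sd.summary <- function(m, nsim = 10000) {
  # m is a fitted inla object; the posterior marginals of the precisions
  # are stored in m$marginals.hyperpar
  lapply(m$marginals.hyperpar, function(marg) {
    tau <- inla.rmarginal(nsim, marg)   # sample from the marginal of tau
    sigma <- 1 / sqrt(tau)              # move to the standard deviation scale
    c(mean = mean(sigma), sd = sd(sigma),
      quantile(sigma, c(0.025, 0.5, 0.975)))
  })
}

sd.summary(m)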

The second function computes a Bayesian p-value for the unstructured effects. In other words, the code computes the posterior probability that one of the coefficients in $\beta$ is positive (or negative, depending on its sign), again using inla.rmarginal and effectively inserting a Monte Carlo step inside INLA.
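Again, a hedged sketch of the idea rather than the actual code:

bayes.pval <- function(m, nsim = 10000) {
  # the posterior marginals of the fixed effects are in m$marginals.fixed
  sapply(m$marginals.fixed, function(marg) {
    beta <- inla.rmarginal(nsim, marg)  # the Monte Carlo step inside INLA
    # probability that the coefficient has the same sign as its posterior mean
    if (mean(beta) >= 0) mean(beta > 0) else mean(beta < 0)
  })
}

bayes.pval(m)

For a single coefficient the same quantity can also be obtained directly from the marginal, without any simulation, e.g. 1 - inla.pmarginal(0, m$marginals.fixed[["x1"]]) gives the posterior probability that the (made-up) coefficient of x1 is positive.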

I mentioned all this to Håvard Rue, INLA's dad (and incidentally, one of the nicest guys around!), and he got back to me with an interesting idea for sampling from the marginal posterior of the unstructured effects while accounting for possible correlation among them. I haven't had time to think about this yet, but it looks cool, so I'll try to work it into the current code.

I'll say more when all is ready to roll.
