## Thursday, 26 January 2017

### Three rooms left...

Last December, Kobi and his classmates did their Christmas play, which was a relatively close representation of the Nativity (well $-$ perhaps back then shepherds used to run around with their fingers up their noses, waving at their parents too...).

Anyway, one of the top acts of the whole thing was something like this, hence the title of the post.

But, more importantly, we're almost out of single rooms for our summer school in Florence later this year (although there are still double rooms available)! So book your place soon!

## Monday, 23 January 2017

### Face value

This is actually a not-so-recent paper, but I've only discovered it now and I think it's very interesting. The underlying issue is about trying to do "causal inference" from observational data $-$ perhaps one could see this in a simpler way by considering the idea of "balancing" observational data, to mimic as far as possible an experimental setting (and so be able to estimate "causal" effects). [There's a lot more on the philosophical aspects behind this problem, which I'm conveniently sweeping under the carpet, here...]

Anyway, one of the most popular ways of dealing with this issue of unbalanced background covariates (or, more generally, confounding) is to use propensity score matching. But, while I think that the idea is somewhat neat and clearly important, what has always bothered me (among other things) is that the resulting outcome model treats the estimate of the propensity score (PS) as "perfect" $-$ known with absolute precision $-$ even though the whole approach rests on the assumption that "the PS model is correct". But of course, there's no way of knowing for sure that the PS model is correct...
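To make the point concrete, here's a minimal sketch (in base R, with simulated data I've made up for illustration) of the usual two-step PS analysis $-$ here via inverse-probability weighting rather than matching, to keep the code self-contained. Note how step 2 simply plugs in the estimated PS as if it were known exactly, which is precisely the issue:

```r
set.seed(1)
n <- 500
x <- rnorm(n)                          # a confounder
t <- rbinom(n, 1, plogis(0.8 * x))     # treatment assignment depends on x
y <- 1 + 2 * t + 1.5 * x + rnorm(n)    # outcome; true treatment effect = 2

# Step 1: estimate the propensity score with a logistic regression
ps <- fitted(glm(t ~ x, family = binomial))

# Step 2: inverse-probability weighting, plugging in the PS point estimates
w <- ifelse(t == 1, 1 / ps, 1 / (1 - ps))
fit <- lm(y ~ t, weights = w)
coef(fit)["t"]   # close to the true effect, but the uncertainty in ps is ignored
```

The point estimate comes out fine, but any standard error computed from `fit` alone pretends `ps` was never estimated in the first place.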

So the idea of combining model selection with propagation of uncertainty through the outcome model is actually very interesting. I've only flipped through the paper, but I did have some very preliminary ideas of my own on this, so I really want to have a proper look!

## Friday, 13 January 2017

### New year resolution

Now that the Christmas break is just a distant memory (Marta would say that I am quite happy with that $-$ she thinks I'm like the Grinch around the Christmas holiday. And she is right), I've made a start on my new year's resolution of finally, properly packaging our two R packages that aren't on CRAN yet.

The first one is SWSamp (about which I've already talked here and here) and the second is survHE (which I have also already mentioned here and here).

I've got better at using GitHub and, for survHE, I've benefited from the help of Peter Konings, who has contributed bits of code and also given me tips (or "forced" me to look into better solutions) for managing and distributing the packages, even though they're not on the official R repository.

In the end, I've settled for what is (I think!) a good compromise $-$ I've created a local repository in which I've stored my packages. This in itself doesn't take care of all the dependencies, but it makes it easy (even for practitioners not too familiar with R) to install the packages, together with all the others on which they rely, with very simple commands, for example:
```r
install.packages("survHE",
  repos = c("http://www.statistica.it/gianluca/R",
            "https://cran.rstudio.org",
            "https://www.math.ntnu.no/inla/R/stable"),
  dependencies = TRUE
)
```
$-$ this way R uses three repositories (one for survHE, one for all the other dependencies stored on CRAN, and one for INLA, which is distributed through its own repository).
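If you find yourself installing or updating these packages often, one possible refinement (just a sketch of standard R start-up machinery, not something the packages require) is to make the three repositories the session default via your `~/.Rprofile`, so that a plain `install.packages("survHE", dependencies = TRUE)` picks them all up:

```r
# In ~/.Rprofile: set the default repositories for every R session.
# The URLs are the same three used in the install.packages() call above.
local({
  options(repos = c(
    gianluca = "http://www.statistica.it/gianluca/R",
    CRAN     = "https://cran.rstudio.org",
    INLA     = "https://www.math.ntnu.no/inla/R/stable"
  ))
})
```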

We've done some tests and all seems to be working OK, which is great. I've also set in motion a couple of plans for updates to both packages $-$ I'll post more on this soon! (Incidentally, this also opens the way for the development of two more interesting projects: Anthony's work on single-arm trials and Andrea's work on missing data in cost-effectiveness analysis. Again, I'll post more as we have some output to show!)