Thursday, 19 March 2015

Utility bills

Because I'm involved in many collaborative projects, some of which luckily involve LaTeX, and because I'm trying (sort-of succeeding) to spend as much time as possible outside the office (mostly failing) to work on the books, in the past few weeks I've found myself wanting some track-changes utility for the work I was sharing with my LaTeX-savvy colleagues. [Could this be a candidate for the longest opening sentence of a post, ever?]

I had a quick look online and found this very nice package $-$ it's probably well established, but I'd not encountered it before, so I was very pleased to discover it. 

It works quite smoothly and lets you annotate the original .tex file with changes, additions and notes. What's even nicer is that the compiled document carries some mark-up (eg a different colour for new text) but isn't very cluttered, so you can fairly easily read the current version together with the notes.

Speaking of LaTeX, I also found another couple of useful programmes: the first is a perl script that creates the BibTeX code for a given reference $-$ basically, you can copy and paste the full reference of a text of interest and the script will return the LaTeX code to paste into a .bib file. The second searches PubMed and retrieves the LaTeX code for the hits that match the search string.

Again, both are probably quite old and well established, but stumbling upon them felt quite serendipitous.

Friday, 6 March 2015

Banned!

This is not really news any more, but I still think it's an interesting story. 

Last week the journal Basic and Applied Social Psychology published an editorial setting out their views (or rather prescriptions) for how statistical analyses should be conducted in papers that seek publication with them.

The editorial starts by effectively banning the use of p-values and null hypothesis significance testing, which "is invalid, and thus authors would be not required to perform it". It then goes on to say that "Bayesian procedures are more interesting", but that they also suffer from issues with the "Laplacian assumption" (non-informative priors), and therefore the editors "reserve the right to make case-by-case judgments, and thus Bayesian procedures are neither required nor banned from BASP".

The conclusion of the editorial is that, basically, psychologists do not need to bother with any inferential procedure, "because the state of the art remains uncertain. However, BASP will require strong descriptive statistics, including effect sizes. We also encourage the presentation of frequency or distributional data when this is feasible".

This has caused quite a stir among many statisticians (and I think psychologists should join the protest!). Here's a series of responses by prominent statisticians. I personally think that at least part of the problem is the view of statistics as some sort of recipe book: if you have such and such data collection, then do a t-test; if you have such and such a design, then do an ANOVA; or perhaps, if you have this other kind of data, then use meta-analysis and throw in some priors $-$ I'm no real expert here, but I think that psychology as a field suffers particularly from this problem (perhaps for historical reasons?).

Most importantly, this reminds me of my first ISBA conference, back in 2006 (I think that's the last time it was held in the Valencia area). On the final night of the conference, some attendees prepared some entertainment and that year, together with a few (back then) young friends, we prepared a news broadcast $-$ we spent most of the last day of the conference doing this, rather than attending the talks, I'm half proud, half ashamed to confess.

Anyway, among the "serious" news we were reporting was a riot that had happened outside the conference hotel, where frequentists had come en masse to protest, waving placards reading "We value p-value!" (worryingly, we also reported that Alan Gelfand, then President of ISBA, had to be transferred to a secure location).

Thursday, 5 March 2015

Cannabis on trial

The other night, Channel 4 broadcast this programme. It's some sort of spin-off from the trial we're working on at UCL (Valerie Curran is the principal investigator $-$ the whole group is really good and all nice people to work with!). The idea of the TV programme was to have a bunch of celebrities try different forms of cannabis, to explore the hypothesis that it is the actual composition of cannabis that can make it harmful.

The point is that skunk is the younger and stronger version, which contains higher proportions of the "bad component" (THC), while hash is mainly made up of a milder component (CBD), which seems to have far less damaging effects $-$ in fact it can prove beneficial in some cases. A live blog detailing the programme is here.

I've mentioned this already (here, for example) and it's interesting from the stats point of view, as we're implementing an adaptive design for this trial. We're collecting information on a set of volunteers and we'll continuously monitor the results, updating the uncertainty about which of several doses of the compound we're testing (based on CBD) is the most effective, before proceeding to a head-to-head trial phase against placebo.
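Just to give a flavour of the kind of Bayesian monitoring involved, here's a minimal sketch in R, assuming a binary response, flat Beta priors and entirely made-up dose labels and counts $-$ it's purely illustrative and is not the actual trial algorithm or data.

```r
# Purely illustrative sketch: Bayesian monitoring of a few hypothetical doses
# with a binary response and flat Beta(1,1) priors. Not the actual trial design.
set.seed(42)

doses <- c("dose_1", "dose_2", "dose_3")   # hypothetical labels
n     <- c(20, 20, 20)                     # volunteers observed so far (made up)
y     <- c(8, 12, 15)                      # responders per dose (made up)

# Posterior for each response probability is Beta(1 + y, 1 + n - y)
post <- sapply(seq_along(doses), function(i) rbeta(10000, 1 + y[i], 1 + n[i] - y[i]))
colnames(post) <- doses

# Probability that each dose has the highest response probability, given the data so far
prob_best <- table(factor(doses[max.col(post)], levels = doses)) / nrow(post)
round(prob_best, 3)
```

The idea is that summaries like prob_best get updated as volunteers accrue, and a simple adaptive rule could drop doses that look unpromising before the head-to-head phase against placebo.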

Sunday, 1 March 2015

Non-trivial wedges

During February, I've been really bad at blogging $-$ I've only posted one entry, advertising our workshop at the RSS later this month. I have spent a lot of time working in collaboration with colleagues at UCL and the London School of Hygiene and Tropical Medicine to prepare a special issue of the journal Trials.

We've prepared 6 articles on the Stepped Wedge (SW) design. This is a relatively new design for clinical trials $-$ it's basically a variant of cluster RCTs, in which all clusters start the study in the control arm and then sequentially switch to the intervention arm, in a random order, until all the clusters are given the intervention. 

There are some obvious limitations to this design (first and foremost the fact that there may be a time effect over and above the intervention effect, which means that time needs to be controlled for, to avoid bias). But, as we show in our several articles, there may be some benefits in applying it $-$ I think we've been very careful in detailing them, as practitioners need to be fully aware of the drawbacks.

The paper I've been working on mostly is about sample size calculations for a SW trial. Some authors have presented analytical formulae to do these but, while they work in specific circumstances, there are several instances in which the features of the SW formulation (time effect, repeated measurements on the same individuals in the clusters, etc) are better handled through a simulation-based approach, which is what we describe in detail in our paper.
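The paper (and the package mentioned below) do this properly; just to sketch the general idea, here is a toy simulation-based power calculation for a cross-sectional SW design with a continuous outcome, written with lme4 and using entirely made-up values for the effect size, variance components and design dimensions $-$ it is not the code we actually use.

```r
# Rough sketch of simulation-based power for a stepped wedge design
# (cross-sectional, continuous outcome). All numbers are illustrative,
# not the values used in the paper; this is not the actual implementation.
library(lme4)

sim_sw_power <- function(n_sims = 500, I = 10, J = 5, K = 20,
                         mu = 0, theta = 0.3, time_eff = 0.1,
                         sigma_clu = 0.5, sigma_e = 1) {
  # I clusters, J+1 time points, K subjects per cluster-period;
  # one group of clusters switches to the intervention at each time point
  switch_time <- rep(1:J, length.out = I)
  signif <- logical(n_sims)
  for (s in seq_len(n_sims)) {
    df <- expand.grid(cluster = 1:I, time = 0:J, subj = 1:K)
    df$treat <- as.numeric(df$time >= switch_time[df$cluster])
    alpha <- rnorm(I, 0, sigma_clu)                      # cluster random effects
    df$y <- mu + theta * df$treat + time_eff * df$time +
            alpha[df$cluster] + rnorm(nrow(df), 0, sigma_e)
    fit <- lmer(y ~ treat + factor(time) + (1 | cluster), data = df)
    tval <- summary(fit)$coefficients["treat", "t value"]
    signif[s] <- abs(tval) > 1.96                        # crude Wald-type test
  }
  mean(signif)   # estimated power
}

# sim_sw_power(n_sims = 200)   # takes a little while to run
```

The point is simply that power is estimated as the proportion of simulated trials in which the intervention effect is detected, so features like time effects or within-person correlation can be built directly into the data-generating model.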

I'm also finalising an R package in which I'll collect the functions I've prepared to sort-of-automate the calculations, for a set of relatively general situations. I'm planning on naming the package SWSamp (Samp have won today, so I'm all up for it right now $-$ we'll see how they're doing when I get closer to finishing it, though...).

Sunday, 1 February 2015

GAS workshop

The General Application Section (GAS) of the Royal Statistical Society has asked us to reenact the session that Richard Nixon, Chris Jackson and I did at Bayes Pharma last year on Bayesian methods in health economic evaluation. In fact, we have a nice addition, as we asked Nicky Welton to contribute a talk on multi-parameter evidence synthesis.

I think it'll be an interesting event. The meeting/workshop will be held at the RSS HQ in London on Friday 27th March 2015, from 2pm to 5.15pm (there's a registration fee, I'm afraid, and I think it'll be on a first-come, first-served basis).

Friday, 30 January 2015

More than Word

I know this will sound childish and possibly snobbish. But for some reason (mostly because of several collaborative papers I'm working on at the moment with colleagues who do not use LaTeX), I have spent a good 90% of my working time in the last week or so on MS Word.

I abjured Windows a long time ago and, frankly, even for maths-free writing I still think that LaTeX is the best option; but, in honesty, I also think that Libre/OpenOffice are just not as good as their corresponding MS Office alternatives.

Most of the time, WINE is more than enough for my Windows sins $-$ and it has been of late. But I'm really looking forward to finishing these few Word-bound things and going back to Word-free docs!

Tuesday, 20 January 2015

A bunch of papers

The beginning of the new year has been particularly busy, as I'm working on several interesting projects. On the bright side, some of these are starting to bear fruit and, coincidentally, in the last few days we've had a few papers finalised (ie published, accepted for publication, or submitted to the arXiv in a fairly advanced state).

The first one has been published in Cost Effectiveness and Resource Allocation (the open access version is here). I've been involved in this paper with colleagues at UCL. The paper is an economic evaluation of an interesting and rather complex community trial conducted in Malawi, a country with particularly low life expectancy and high rates of HIV. In the paper, we did most of the economic analysis using BCEA.
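For anyone curious, the basic BCEA workflow is roughly along these lines $-$ a minimal sketch with placeholder simulations, not the Malawi trial data or the actual analysis code.

```r
# Minimal BCEA workflow sketch (placeholder data, not the actual analysis).
# e and c are matrices of simulated effectiveness and cost values,
# one column per intervention and one row per posterior simulation.
library(BCEA)

n_sim <- 1000
e <- cbind(control = rnorm(n_sim, 0.60, 0.05),  # made-up effectiveness values
           interv  = rnorm(n_sim, 0.65, 0.05))
c <- cbind(control = rnorm(n_sim, 1000, 100),   # made-up costs
           interv  = rnorm(n_sim, 1300, 120))

m <- bcea(e, c, ref = 2, interventions = colnames(e), Kmax = 30000)
summary(m)        # ICER and expected incremental benefit over a grid of thresholds
ceplane.plot(m)   # cost-effectiveness plane
ceac.plot(m)      # cost-effectiveness acceptability curve
```

In a real analysis, e and c would of course come from the posterior simulations of the health economic model, rather than being generated directly as they are here.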

The second one has also just been published in Pharmacoeconomics and is an "educational" piece that I co-wrote with several colleagues at UCL. I think this too was an interesting piece of work, in that we tried to focus on several statistical issues that are of concern in many economic evaluations $-$ the idea that health economics is in many ways inextricably related to statistics is of course one of my pièces de résistance (I guess I've shown off enough complicated words for one post...).

The third one is the RDD paper, which we had submitted ages ago to Statistics in Medicine. I had a brilliant experience with the Structural Zeros paper $-$ I submitted the first version in August and the paper was online by November. This time around, we had to struggle a lot more (apparently they couldn't find suitable reviewers, then the reviews arrived but took some time, then we responded to the comments $-$ long story short, it's been almost one year). Anyway, they finally seem to have accepted the paper (a similar version of which we had previously arXived); we need a couple more changes and we should be good to go (I hope I'm not jinxing it!).

Finally, the last one is part of the work of one of my PhD students (technically, I'm only second-supervising him). The paper develops a nice Bayesian non-parametric model to perform clustering and model selection simultaneously. We developed the model to handle a real clinical dataset, which records data on patients with lower urinary tract infection. I knew only a little about Bayesian NP before working on this, so it was a nice opportunity. William has done a very good job in sorting this out and we have also submitted the full paper to Statistics in Medicine (hopefully, we'll get a quick turnaround!).

Monday, 19 January 2015

DIA Joint Adaptive Design and Bayesian Statistics Conference

This is my first real contribution to the ISBA Section on Biostatistics and Pharmaceutical Statistics, in my new role as secretary. Our section has formally endorsed this very interesting conference $-$ the timeline is very short, as the conference starts on February 11th.

The conference has two very interesting tutorials (the first one on Bayesian Methods for Drug Safety Evaluation and Signal Detection, given by David Olhsen and Amy Xia; and the second one on Use of Historical Data in Clinical Trials, taught by Heinz Schmidli). There'll also be other interesting talks and discussion.

Within the Biostats/Pharma section, we'll try to organise (either directly, through endorsement, or in some other form) a few similar meetings (Bayes Pharma is of course another interesting one).