Thursday, 15 September 2016

The fix

This is a very interesting post by Martyn Plummer on the JAGS News blog, describing how apparently silly details may make a world of difference. Martyn says he's now fixed the issue (basically, it appears that JAGS was sensitive to the order in which the model was written, eg at compilation you could see a staggering difference $-$ 16 minutes vs 8 seconds $-$ depending on whether you defined the deterministic relationships between parameters first or last).
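To make the symptom concrete, here's a toy sketch of the kind of reordering involved (an illustrative example of my own, not Martyn's actual test case $-$ think of these as two alternative versions of the same model file). The two versions define exactly the same model, since declaration order has no semantic meaning in the BUGS language, but pre-fix their compilation times could apparently differ wildly:

```
## Version A: deterministic relationship defined first
model {
  tau <- pow(sigma, -2)     # deterministic node declared before its parent
  sigma ~ dunif(0, 10)
  mu ~ dnorm(0, 0.001)
  for (i in 1:N) {
    y[i] ~ dnorm(mu, tau)
  }
}

## Version B: deterministic relationship defined last
model {
  sigma ~ dunif(0, 10)
  mu ~ dnorm(0, 0.001)
  for (i in 1:N) {
    y[i] ~ dnorm(mu, tau)
  }
  tau <- pow(sigma, -2)     # deterministic node declared after everything else
}
```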

I'm writing this post mostly as a signpost for myself $-$ I guess you always encounter issues like this, which seem trivial, and the fix is so easy $-$ if only you had a bunch of little workers at your disposal all the time...

LGM 2016

Yesterday I went to beautiful Bath for The Fifth Workshop on Bayesian Inference for Latent Gaussian Models with Applications, to give a talk on our work using INLA-SPDE to compute the Expected Value of Partial Perfect Information.  

I couldn't stay for the whole three days, which is a shame because yesterday was really interesting. In the morning, Mike Betancourt (I'm not sure the page I'm linking here is his "official" one, as he's left UCL now) gave an excellent tutorial on Stan. I really enjoyed the morning playing around with the code $-$ in fact, I think we'll try and use it more and more (for example, I will try and integrate this into survHE).

Then in the afternoon there were two interesting sessions, with talks that obviously had the common thread of LGMs but were in fact quite diverse. I liked that too!  

Tuesday, 13 September 2016

Careful whisper

PREFACE: This post is only partially a grumpy man's emotional outburst: just hear me out on this one... 

ABSTRACT: A grumpy man vents about spam emails from random scientific (and sometimes pseudo-scientific) journals.


MAIN TEXT: First off, I should say that, luckily, my spam filter works pretty well, so I normally don't really get to see these messages (except for when I take 5 minutes to check what's ended up in the spam $-$ typically these are my 5 minutes of fun...). 

But, I find it super-hilarious to read the weird invitations to contribute papers to the most bizarre journals (that is bizarre with respect to my own field of expertise, of course $-$ they are often good journals, although I think that sometimes the weirdness goes hand in hand with their ridiculousness...). 

Anyway, I particularly really, really like when they start the email with something like this: 
Dear Dr Gianluca@my email address, [of course, to these people my email address is my full name]
We follow very carefully your research and we are impressed by your scientific production. We would be delighted if you could contribute a paper (possibly within the next 20 minutes) to the Journal of Something that has absolutely nothing to do with Statistics, or Health Economics, or anything you've ever done in your life since you were 4 and accidentally sat on your mum's little cactus and all the spiny stems pricked your bottom. [and that, sadly, is a true story...]
Now, that's what I call carefully following somebody's research!

CONCLUSIONS: Incidentally, one of my biggest regrets in life is to have never managed to wear my hair like George Michael. And that's something I carefully tried to do when I was 14.

Friday, 26 August 2016

Sad night

I've just heard the very sad news that Richard Nixon has passed away this morning. I can't say I knew Richard very well, but I thought he really was a lovely guy and I am very saddened.

I knew of him (among other things) through his work on covariate adjustment in health economic evaluations, which I think was part of his PhD at the MRC in Cambridge. I then got in contact with him more closely when I was thinking of organising the short course based on BMHE, since he and Chris were already doing something like that. I suggested we did the course together and he was very enthusiastic about it. In fact, when he was asked to teach a short course at the University of Alberta, he said the three of us should have a go, which we did. Then we taught the course at Bayes 2014 at UCL and at a one-day workshop organised by the RSS. He fell ill just before the last edition of the course.

Tonight I have a very vivid memory of the time we were in Edmonton, having dinner after the first night of the course, when I told him that for some reason Italians usually get really cross about chicken on pizza, and of how he used to tease me about that every time we met since, saying that he would love a pizza with chicken. And how we used to introduce ourselves to the audience $-$ and how sometimes people were too young to get the references. I'll miss you, Richard.

Wednesday, 17 August 2016

National lottery

Yesterday, many British newspapers covered the news of the new Dementia Atlas, released by the Department of Health.

As far as I can see, the atlas uses data from a variety of sources (including the Quality and Outcomes Framework, QOF, scheme, which collects information from general practices around the country, providing incentives to the doctors to record data on key indicators).

So far so good $-$ nothing wrong with that. In fact, it's a cool representation, with maps highlighting geographical variation across England and providing rates for several summary statistics, eg prevalence of dementia, level of diagnosis, etc. As usual, though, the media couldn't resist jumping on the news and making a meal of it, mostly by presenting it with grand headlines, which in many cases missed the point or, I think, bluntly misrepresented reality.

For example, the beloved Daily Mail and The Telegraph yell about a "Post-code lottery in care". Now, it may well be that the data reveal massive inequality in access to care and diagnosis across the country, which is a very good thing to expose in order to tackle it and then remove it, or at least limit it $-$ that's in the spirit of the NHS. But, although I think the website should have done a much better job at explaining the numbers reported, it appears that the information presented in the maps is about the raw rates! It's not quite clear, then, whether the background characteristics of each area (defined in terms of Clinical Commissioning Groups, CCGs) play a role in explaining away some of the differences in the actual rates for each of the measures reported in the table. 

So it may well be that we're playing Peter Griffin's lottery with people's health. Or there may be much more to it than that. But some media just don't care about which is which...

Friday, 15 July 2016

Finish line (nearly)

We are very close to the finish line $-$ that's being able to finally submit the BCEA book to the editor (Springer).

This has been a rather long journey, but I think the current version (I dread using the word "final" just yet...) is very good. We've managed to respond to all the reviewers' comments, which, to be fair, were rather helpful and so should have improved the book. 

Anna and Andrea have done very good work and I didn't even have to play the bad, control-freak guy to get them to prepare their bits quickly $-$ in fact, I think at several points I've been late in doing mine... 

Here's the table of contents (somewhat simplified to sections and sub-sections only):
  1. Bayesian analysis in health economics
    1. Introduction
    2. Bayesian inference and computation
    3. Basics of health economic evaluation
    4. Doing Bayesian analysis and health economic evaluation in R
  2. Case studies
    1. Introduction
    2. Preliminaries: computer configuration
    3. Vaccine
    4. Smoking cessation
  3. BCEA - an R package for Bayesian Cost-Effectiveness Analysis
    1. Introduction
    2. Economic analysis: the bcea function
    3. Basic health economic evaluation: the summary command
    4. Cost-effectiveness plane
    5. Expected Incremental Benefit
    6. Contour plots
    7. Health economic evaluation for multiple comparators and the efficiency frontier
  4. Probabilistic Sensitivity Analysis using BCEA
    1. Introduction
    2. Probabilistic sensitivity analysis for parameter uncertainty
    3. Value of information analysis
    4. PSA applied to model assumptions and structural uncertainty
  5. BCEAweb: a user-friendly web-app to use BCEA
    1. Introduction
    2. BCEAweb: a user-friendly web-app to use BCEA
Throughout the book we use a couple of examples of full Bayesian modelling and the relevant R code to run the analysis and then use BCEA to do the "final part" of the cost-effectiveness analysis.

We've tried to avoid unnecessary complications in terms of maths, but we do include explanations and formulae when necessary. It was difficult to strike a balance for the audience $-$ especially as it was complicated to define what the audience would be... I think we're aiming for statisticians who want to get to work in health economic evaluations and health economists who need to use more principled statistical methods and software (I couldn't resist in several points moving my tanks to invade Excelland and replace the local government with R officials...).

The final chapter also presents and discusses the use of graphical front-ends to R-based models (eg as in SAVI) $-$ we have a BCEA front-end too. I think these may be helpful, but they can't replace making people in the "industry" more familiar with full modelling, moving them away from spreadsheets and the like (these work when the models are simple $-$ but the models that are required are very often not that simple...).

We also present lots of work on the value of information (including our recent work), which is also linked to our short course. Maybe it's time to link BMHE and this to do a long course... (there's more on this to come!)

Wednesday, 6 July 2016

Bad medical journal

This is an interesting story, I think, and I have to say I've sort of taken inspiration for the title of this post from a talk that Stephen Senn gave a while back at UCL. He in turn had been referring to Ben Goldacre's book Bad Pharma $-$ the book argues that the pharmaceutical industry is often guilty of cherry-picking the evidence used to substantiate claims of clinical benefits for their products, while Stephen counters that this is rather a two-way street and that often respectable medical journals are just as bad.

So: a while back we did some work on a paper analysing an intervention to facilitate early diagnosis of dementia, and then submitted it to one of the leading medical journals. Unfortunately, the intervention didn't turn out to produce a massive difference, but we thought it would be interesting anyway. The paper went out for review $-$ in fact, it was assessed by 3 reviewers. As per the journal's policy, the reviewers have to state their names explicitly, so that the whole process becomes more transparent. 

Now, interestingly, the 3 reviewers' comments were as follows (I am copying their comments verbatim, but adding my own text in italics below):
  1. The authors report a well-designed and well-conducted cluster RCT which addresses a subject of high clinical importance.  The findings are conclusively negative in terms of their simple informational intervention effecting earlier diagnosis. This is however an important and clinically useful negative finding, in that it demonstrates that a very simple intervention is not effective in enhancing timely diagnosis. This means that a more multifaceted approach may be needed to address the clinical and policy priority for earlier ab d better diagnosis of dementia. The change in ascertainment of primary outcome to include an imputed MMSE score from ACE data was necessary in the light of unpredicted change in clinical practice due to the prospect of incurring a charge from the copyright holders.  The method used is scientifically reasonable and will not have obscured the results of the intervention. This is a good strong paper with an important negative finding that warrants publication.
  2. THE MANUSCRIPT: [title of the paper] is very important and of high standard.I cecommend [sic] the publication in [name of the journal] without restriction. 
  3. This is a cluster randomized controlled trial of a simple intervention meant to empower patients and their families to seek early assessment for memory difficulties from their primary care physicians. The intervention which included an information leaflet and personalized physician letter was laudably developed with input from patients and caregivers. The experimental design was quite rigorous and utilized an appropriate sample size calculation. One might argue about the choice of the primary outcome measure and whether the intervention as designed could reasonably have been hypothesized to have families bring their relative to EARLIER attention or not (as opposed to bringing them at all), but all outcomes were clearly described and reported. Unfortunately, the study was largely negative. Besides the potential reasons suggested by the authors, not enough attention is paid to primary care physician attitudes to the value, or lack thereof, of an early diagnosis of a memory disorder, when there is a widely perceived lack of effective therapies. Until effective therapies are available, or until physicians can be convinced of the reasons why an earlier diagnosis is advantageous, earlier GP referral to a memory service is unlikely to occur even in the context of a highly empowered patient population.

I think one could argue that reviewer number 3 (remember, in the spirit of the open and transparent policy of the journal, the reviewer is named) is perhaps less enthusiastic than the other two. But I would be very happy to receive this sort of review for any paper I ever submit to a journal. And, at this stage, you'd expect a request for changes to the manuscript (to be fair, reviewer 1 did have some minor comments requesting clarification in a couple of parts of the text), but full consideration for publication.

Well, that's not quite what happened, because the research editors overruled the reviewers and rejected the paper straightaway. Now, obviously, it is perfectly possible that the editors found flaws in the reviews (although I think you'd question the choice of reviewers, if all 3 turned out to be not up to the job in the view of the editors who had appointed them...). But I think that their comments were rather out of place $-$ for example, they mention an "8 year difference between Intervention and Control groups", while actually the difference in age was less than a year (they got confused with an 8 point difference in the % of males).

I don't want to sound petty and, as I said, the journal has every right to reject the paper $-$ after all, we did acknowledge some of the limitations (e.g. we had an important issue with missing data and we dealt with it using statistical methods $-$ interestingly, that wasn't a problem for the editors per se, although they said they thought we had "changed our primary outcome", while we simply imputed missing values for the primary outcome using, among other variables, a similar outcome variable, as mentioned by reviewer number 1). So I think it would have been OK to be told that, because of the issue with missing data, the results did not look strong enough. In fact, we did acknowledge the uncertainty in our estimation, but even after modelling missingness the results were negative, so I think that should have been less of an issue. 

Anyway, we did appeal (more to make a point than in hope of any real change) and yet again I was on the losing side $-$ I'm starting to think I should probably start supporting all the causes that I really do not believe in, so that I'll either win some of the future referenda to create as many city-states as there are cities in the world, or at least see them fail and take some credit for my personal jinx...

Tuesday, 5 July 2016

Fire and mouse alarm

Yesterday we finally had our workshop on infectious disease modelling in health economic evaluation. I think the day went very well and we had very interesting talks (as soon as I can, I will upload the slides on the website above, for all the speakers who can grant us permission to do so).

The morning was the most technical part (as intended), but I think everybody was able to follow the talks (which at times had lots of statistical details), because they all had some practical results and nice graphical displays. 

The "industry" session was also very interesting and quite varied with different perspectives and problems being highlighted. 

Then, just after the start of the final session (which had speakers from NICE and the JCVI, presenting the reimbursement agencies' perspective), all hell broke loose: first, the fire alarm went off for what turned out to be basically nothing (I was trying to listen to the conversation between the UCL people and the Fire Marshals as they tried to determine whether there really was a risk, and I think I understood that they were getting really annoyed at the alarm going off for what was clearly nothing). 

Then, as we resumed the session, we had another interesting surprise. Shortly after the beginning of the second talk, a little mouse (probably disturbed by all the fuss caused by the fire alarm) decided to start roaming through the lecture theatre. I thought I caught a glimpse of something moving suspiciously while I was listening to the talk, but made nothing of it $-$ until I saw other people looking away from the lectern, increasingly and slightly disgusted... Eventually, the mouse got bored (and possibly scared) of the people and disappeared into one of the air conditioning holes. But that's Central London for you...