Last Friday, I went to one of the health economics seminars organised at UCL; the format is that one of the people in the group suggests a paper (typically something they are working on), but instead of having them lead the discussion, one of the others takes responsibility for preparing a few slides to highlight what they think are the main points. The author/person who suggested the paper is usually in the room; they respond to the short presentation and then the discussion is opened to the group at large.
I've missed a couple since they started last summer, but the last two I've been to have been really interesting. Last time the main topic was the mapping of utility measures; in a nutshell, the idea is that there are some more or less standardised measures of "quality of life" (QoL) $-$ the most common probably being the EQ5D and the SF6D.
However, they are not always reported. For example, you may have a trial that you want to analyse in which data have been collected on a different scale (and I'm told that there are plenty); or, perhaps even more interestingly, as Rachael pointed out at the seminar, sometimes you're interested in a disease area that is not quite covered by the standard QoL measures, and therefore you want to derive some induced measure from what is actually observed.
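(Just to fix ideas $-$ and the notation here is entirely mine, purely for illustration $-$ my understanding is that the most basic form of mapping is a regression of the preference-based measure on the disease-specific score and some covariates, something like
$$ \mbox{EQ5D}_i = \beta_0 + \beta_1 S_i + \boldsymbol\beta_2^\top \boldsymbol x_i + \varepsilon_i, \qquad \varepsilon_i \sim \mbox{Normal}(0,\sigma^2), $$
where $S_i$ is the non-preference-based score for individual $i$; the fitted model is then used to predict the EQ5D in datasets where only $S_i$ has been collected.)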
In the paper that was discussed on Friday, the authors had used a Beta-Binomial regression and claimed that the results were more reasonable than those from standard linear regression $-$ which is probably sensible, given that these measures are far from symmetrical or "normally distributed" (in fact the EQ5D index is bounded above at 1 but can take negative values, for health states considered worse than death).
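(For instance $-$ and this is just a generic sketch of a bounded-outcome model in the same spirit, not necessarily the exact specification used in the paper $-$ one could rescale the index to the unit interval and use a Beta regression,
$$ y^*_i = \frac{y_i - y_{\min}}{1 - y_{\min}}, \qquad y^*_i \sim \mbox{Beta}\big(\mu_i \phi, (1-\mu_i)\phi\big), \qquad \mbox{logit}(\mu_i) = \boldsymbol x_i^\top \boldsymbol\beta, $$
which respects the upper bound at 1 and accommodates the skewness, unlike the Normal model.)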
I don't know much about mapping (so it is likely that what I'm about to say has been thoroughly investigated already $-$ although it didn't come up in the seminar, where people were much more clued up than I am), but it got me thinking that this is potentially a problem one could tackle using (Bayesian) hierarchical models.
The (very rough) way I see it is that there are effectively two compartments to this model: the first one (typically observed) consists of data on some non-standard QoL measure and possibly some relevant covariates; then one can think of a second compartment, which can be built separately to start with, in which the assumptions underlying the standard measure of QoL are spelt out (e.g. in terms of the impact of some potential covariates, or something along those lines).
The whole point, I guess, is to find a way of connecting these two compartments, for example by assuming (in a more or less confident way) that each of them is used to estimate some relevant parameter representing some form of QoL. These in turn have to be linked in some (theory-based, I should think) way $-$ a rough sketch is below. A Bayesian approach would allow for the exchange of information and "feed-back" between the two components, which would be potentially very helpful, for example if there were a subset of individuals on whom observations in both compartments were available.
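Just to fix ideas, here's a very rough sketch of the structure I have in mind (the notation and the distributional assumptions are mine, purely for illustration): say $y_i$ is the non-standard measure observed on individual $i$, $z_j$ is the standard measure (e.g. the EQ5D) observed on individual $j$, and $\theta$ and $\phi$ are the QoL parameters underlying the two compartments. Then
$$ \begin{aligned} \mbox{Compartment 1:} \quad & y_i \sim p(y_i \mid \theta, \boldsymbol x_i) \\ \mbox{Compartment 2:} \quad & z_j \sim p(z_j \mid \phi, \boldsymbol w_j) \\ \mbox{Link:} \quad & g(\phi) = \lambda_0 + \lambda_1\, h(\theta), \end{aligned} $$
with suitable priors on $(\theta, \lambda_0, \lambda_1)$ and where $g$ and $h$ encode whatever (theory-based) relationship connects the two forms of QoL. For individuals observed in both compartments, the joint posterior would propagate information in both directions through the link $-$ which is the "feed-back" I mentioned above.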
I'll try to learn more about this $-$ but I think this could be interesting...