When Hansel comes out to save the day, stopping Derek [Zoolander] from offing the prime minister of Micronesia, the evil Jacobim Mugatu exclaims "It's that damn Hansel – He's so hot right now!".
In quite a similar fashion, it seems as though in the last few weeks the Expected Value of Partial Perfect Information (EVPPI) has become so hot right now...
First there was Mohsen Sadatsafavi's paper (which I have mentioned here and whose method I've already implemented in BCEA). Then, the Bristol workshop (on which I reported here). And then, just last week, another quite interesting paper (by Mark Strong and Jeremy Oakley) has appeared in Medical Decision Making, discussing pretty much the same issue.
The problem is, in theory, relatively simple: given a bunch of parameters subject to uncertainty, we want to compute the value of reducing the uncertainty on one (or some) of them, eg by collecting additional information. In health economic evaluation this is particularly important, as the current decision may be (and in general is) affected by how well we "know" the value of the model parameters. Thus, the decision-maker may be better off not making a decision right now and deferring until someone (for example the pharmaceutical company trying to market some new drug) can come up with some extra information – incidentally, these concepts could be used to negotiate co-payments, as we argue here.
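To make the idea concrete, here is a minimal sketch (in Python, with entirely made-up numbers – the real analyses would of course be done within the full health economic model, eg in BCEA) of the simpler Expected Value of Perfect Information: the difference between the value of deciding with and without uncertainty in the parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 10_000

# Hypothetical toy model: the net benefit of each of two interventions
# depends on an uncertain parameter (the distributions are illustrative only).
nb_t1 = rng.normal(1000, 300, n_sim)  # simulated net benefit, intervention 1
nb_t2 = rng.normal(800, 500, n_sim)   # simulated net benefit, intervention 2
nb = np.column_stack([nb_t1, nb_t2])

# Value of deciding under current uncertainty:
# choose the intervention with the highest *expected* net benefit.
value_current = nb.mean(axis=0).max()

# Value with perfect information: in each simulation we would know the
# true parameter values, so we could pick the best intervention every time.
value_perfect = nb.max(axis=1).mean()

evpi = value_perfect - value_current
print(f"EVPI: {evpi:.2f}")
```

The EVPI is always non-negative: knowing the parameters can never make the decision worse, and its size suggests whether gathering further evidence could be worthwhile at all.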
From the computational point of view, the calculation of the EVPPI can be dealt with using 2-stage MCMC simulations, which can be computationally very demanding. Thus, people are researching clever methods to reduce this burden.
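The 2-stage logic can be sketched with a plain nested Monte Carlo scheme (again a toy Python example with made-up distributions, not the MCMC machinery one would use in a real Bayesian model): the outer loop simulates the parameter(s) of interest, while the inner loop averages over the remaining parameters before making the conditional decision.

```python
import numpy as np

rng = np.random.default_rng(1)
n_outer, n_inner = 500, 500

# Toy model (illustrative numbers): net benefit of intervention 1 is
# phi * psi, where phi is the parameter of interest; intervention 0
# has a known constant net benefit.
def net_benefits(phi, psi):
    nb1 = phi * psi
    nb0 = np.full_like(nb1, 500.0)
    return np.column_stack([nb0, nb1])

# Overall value of the decision under current uncertainty.
phi_all = rng.normal(25, 10, 20_000)
psi_all = rng.normal(30, 10, 20_000)
value_current = net_benefits(phi_all, psi_all).mean(axis=0).max()

# Stage 1 (outer): draw the parameter of interest phi.
# Stage 2 (inner): average over the remaining parameters psi,
# then choose the best intervention *conditional* on each phi.
cond_values = np.empty(n_outer)
for i in range(n_outer):
    phi = rng.normal(25, 10)
    psi = rng.normal(30, 10, n_inner)
    cond_values[i] = net_benefits(np.full(n_inner, phi), psi).mean(axis=0).max()

evppi = cond_values.mean() - value_current
print(f"EVPPI estimate: {evppi:.2f}")
```

The cost is evident even in this toy version: n_outer × n_inner model evaluations, which is exactly what the recent approximation methods try to avoid.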
I think these methods are all very interesting, especially because, unlike the 2-stage MCMC approach, they seem very general and thus can be applied to virtually any model, especially when it is run under a Bayesian framework (eg like I do in BMHE). I'll try to talk to Strong & Oakley and possibly implement their method in BCEA too!