Thursday, 4 February 2016

The value of being informed about a short course

Later this year, Anna, Mark, Nicky and I will organise and then teach on a 3-day workshop on Statistical Methods for the Value of Information Analysis

Not-quite-by-chance, this will happen exactly the week before the European Conference of the Society for Medical Decision Making $-$ we've been in talks with the organising committee and so our course is also advertised on their website, here.

We are finalising the schedule and will start advertising very shortly, but we already know that the course will have a first, introductory day in which we'll go through the basic concepts of Value of Information analysis $-$ something like the one Nicky did in Bristol a couple of years ago.

The second and third days, on the other hand, will be pretty much hands-on: we'll ask participants to bring their laptops so they can actually have a go at the non-parametric regression methods that Mark and we have developed. In fact, unlike the very unwise child who's cheering for Excel, we'll do all this using R-based tools, which we have tried to make as friendly as possible, though...

I think we'll allow for flexible registration, so people could come for just the first day, for the second and third days only, or for the full three days. The course is subsidised by UCL and the MRC Network of Hubs for Trials Methodology Research, so the registration fee will be very, very small (I think £10 for one day; £20 for two days; and £30 for the full short course). I'll post more when all the links on the registration pages are up and running.

Wednesday, 3 February 2016

Young folks

We (as in Significance, in partnership with the Young Statisticians Section of the Royal Statistical Society) have just launched the 2016 Young Statisticians Writing Competition



The competition is open to any young statistician, regardless of whether they are an RSS or ASA member. In past editions, we've had some very, very good pieces $-$ like this.

Monday, 25 January 2016

I will survive!

Here's a very long post, to make up for the recent silence on the blog... Lately, I've been working on a new project involving the use of survival analysis data and results, specifically for health economic evaluation (cue Cake's rendition below).


I have to say I'm not really a massive expert on survival analysis, in the sense that it's never been my main area of interest/research. But I think the particular case of cost-effectiveness modelling is actually very interesting $-$ the main point is that, unlike in a standard trial, where the observed data are typically used to estimate the median survival (usually across the different treatment arms), in health economic evaluations the target quantity is actually the mean survival time(s), because these are then usually used to extrapolate the (limited!) trial data into a full decision-analytic model, often covering a relatively large time horizon. Among many, many others, I think Chris et al make a very good case for this, here.

Anyway, one of the main implications of this is that practitioners are typically left with the task of fitting a (range of) parametric survival model(s) to their data. Nick Latimer, among others, has done excellent work in suggesting suitable guidelines. (In fact, both Chris and Nick came to talk at one of our workshops/seminars last summer.)

Over and above the necessary choice of models, I think there are other interesting issues/challenges for the health economic modeller:

  1. (Parametric) Survival models are often tricky because there are many different parameterisations, leading to different ways of presenting the results. This can be very confusing and, without extra care, lead to disastrous consequences (because the economic model extrapolates from the wrong survival curves!).
  2. Even when the parameterisation is taken care of, we are normally interested in characterising the full uncertainty in the joint distribution of the survival model parameters $-$ we need to do this to perform Probabilistic Sensitivity Analysis (PSA), so even in a non-Bayesian model this is a required output of the analysis. Pragmatically, this means computing a survival curve for a large number of combinations of parameter values and feeding each to the economic model to assess the impact of uncertainty on the final decision-making process.
  3. Much to my frustration (and, I realise, to the frustration of the people I keep nagging about this!), the economic models are (too) often built in Excel. This means that the survival analysis is done externally in a proper statistical package, and the results (usually in tabular form) are then copied over to the spreadsheet and used to construct the survival curves (eg via VBA macros).
I think the process is complex enough that, after talking to some health economist colleagues, I've started to work on an R package to try and standardise and simplify it as much as possible. In fact, this will be more of a wrapper for several other R packages, but I'm planning on including some nice (I would say that, wouldn't I?) features.

Firstly, the idea is to allow the user to fit survival models using MLE as well as a Bayesian approach. In the former case (which I will reluctantly set as the default $-$ for reasons I'll explain later), survHE, which is what I've called the package, will just remotely call flexsurv, which is a (very clever!) wrapper itself. flexsurv allows the user to get bootstrap simulations from the joint distribution of the relevant parameters, which are used for the PSA problem. Also (here's the reason I was referring to earlier), the range of models that it can fit is wide enough to cover those suggested by the NICE guidelines $-$ in fact even more, probably.
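Just to give a flavour of what's under the hood, here's a minimal sketch of the MLE route via flexsurv (this is not survHE itself, and the dataset and variable names are made up for illustration):

library(flexsurv)   # also attaches survival, for Surv()

set.seed(42)
dat <- data.frame(
  time  = rexp(200, rate = 0.1),                    # hypothetical survival times
  event = rbinom(200, 1, 0.7),                      # 1 = observed, 0 = censored
  arm   = factor(rep(c("control", "treated"), each = 100))
)

# Fit a Weibull model by maximum likelihood
fit.mle <- flexsurvreg(Surv(time, event) ~ arm, data = dat, dist = "weibull")

# Simulate 1000 draws from the (approximate) joint distribution of the
# parameters, one set per treatment arm - these are the inputs to the PSA
psa.sims <- normboot.flexsurvreg(
  fit.mle, B = 1000, newdata = data.frame(arm = c("control", "treated"))
)

# Survival curves over the model time horizon, to be fed to the economic model
surv.curves <- summary(fit.mle, t = seq(0, 60, by = 1), type = "survival")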

For the Bayesian analysis, I'm allowing the user to either use the (not-so-wide) range of in-built models in INLA, or to go fully MCMC and use OpenBUGS (which offers the same range as flexsurv). Of course, one of the crucial features of going fully Bayesian on this is that the posterior joint distribution of the model parameters can be fully characterised (using the new inla.posterior.sample function in INLA, or directly from the MCMC simulations in OpenBUGS) and so the uncertainty in the survival curves can be propagated directly in the economic model.
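Again as a rough sketch (not survHE itself), the INLA route could look something like the following $-$ the survival family name and the options below are my assumptions about a typical call:

library(INLA)

# same hypothetical dataset as in the flexsurv sketch above
set.seed(42)
dat <- data.frame(time = rexp(200, rate = 0.1), event = rbinom(200, 1, 0.7),
                  arm = factor(rep(c("control", "treated"), each = 100)))

# Weibull survival model; config=TRUE is needed for posterior sampling
fit.inla <- inla(inla.surv(time, event) ~ arm, family = "weibullsurv", data = dat,
                 control.compute = list(config = TRUE))

# Draws from the (approximate) joint posterior of all model parameters,
# which can then be mapped to survival curves for the PSA
post.sims <- inla.posterior.sample(n = 1000, result = fit.inla)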

In terms of running time, INLA is basically just as fast as the MLE, while MCMC is generally slower (and is known to possibly run into convergence problems for some parameterisations).

The typical call to survHE will be something like this

fit <- fit.models(formula, data, distr, method, ...) 
where 
  • formula can be specified using standard R notation, something like Surv(time,event)~as.factor(arm) for MLE analysis, or inla.surv(time,event)~as.factor(arm) for INLA. I think I've managed to make the function clever enough to recognise which formula should be used depending on what method of inference is specified, and also to figure out how to translate this into BUGS language.
  • data is (shockingly) the dataset to be used.
  • distr is a (vector of) string(s) indicating which parametric distribution(s) should be fitted to the data, something like distr=c("exponential","weibull"). Again, to make the modellers' lives easier, I've kind of made mine miserable and tried to be very clever in accounting for differences in terminology across the three packages/approaches I cater for.
  • method is a string specifying what kind of analysis should be done, taking values "mle", "inla" or "mcmc".
The other options are mainly to do with INLA & BUGS $-$ for example, in the latter case, the user can specify the number of simulations to be run. If the extra argument n.iter is set to 0, then survHE will not run the model, but simply prepare and save in the user's current directory the BUGS code associated with the assumptions encoded in the call, and create the data list, the vector of parameters to be monitored and a function creating the initial values in the R workspace. In this way, the user can fiddle with the template model.
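So, putting it together, a call might look something like the following $-$ this is only indicative (the argument names follow the description above and the dataset is hypothetical):

library(survHE)

dat <- data.frame(time = rexp(200, rate = 0.1), event = rbinom(200, 1, 0.7),
                  arm = rep(0:1, each = 100))

# MLE (via flexsurv), fitting two candidate distributions at once
fit <- fit.models(formula = Surv(time, event) ~ as.factor(arm), data = dat,
                  distr = c("exponential", "weibull"), method = "mle")

# Bayesian version via INLA (the formula is translated internally)
# fit.inla <- fit.models(formula = inla.surv(time, event) ~ as.factor(arm),
#                        data = dat, distr = "weibull", method = "inla")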

Alongside the main fit.models function, I have prepared (or am preparing) a set of other functions for plotting, printing and, most importantly, exporting the results (eg of the PSA), for instance to an Excel file. The idea is that, in this way, all the funny business of doing part of the survival analysis in a statistical package and then completing the computation of the survival curves for different values of the parameters in Excel can be avoided $-$ which seems to me a very valuable objective!

I'll post more on this when I'm closer to a beta-release $-$ I'm also trying to prepare a structured manual to accompany the package.

Monday, 4 January 2016

The guide

Before and over the Christmas break, Christina and I have done some more work on our bmeta package, which I've already mentioned in another post, here $-$ well, to be fair, Christina has done most of the work; I was being annoying, suggesting changes to the maths formatting and thinking about potential new plots or additions just one second after she'd finished coding up the previous batch...

Anyway, one of the things we felt was missing was a detailed guide to the package $-$ and so we wrote one. I guess the point is that there are quite a lot of options that the user is free to select/change when using bmeta, and many of these are not easy to describe in the standard R help file. Also, the package implements many more-or-less standard models, and so I thought it would be a good idea to actually write down what these are.

We've also included some text about how bmeta creates the relevant JAGS code that is used to do the analysis but can also be modified by the user $-$ that's effectively a set of templates for Bayesian meta-analysis and meta-regression.

I've put the development version (0.1.2) on the website with some (probably irrelevant for R-versed users) info on how to install it $-$ we'll do some more testing before we upload it on CRAN too.

Tuesday, 22 December 2015

Post mortem

This is again a guest post, mainly written by Roberto, which I only slightly edited (and if significantly so, I am making it clear by adding text in italics and in square brackets, like [this]). By the way, the pic on the left shows my favourite pathologist examining a post-mortem. 

A day after the bull-fight is over and done with (at least until the next election, which may come sooner than one would expect), we have to do some analysis as to what went right, and what failed, as regards our model. First let’s look at the results:


Right off the bat, we can say a few things: firstly, the model captures the correct order of the parties, and hence the results of this election, with impressive certainty. A study of our simulations suggests that if this election were run 8000 times, we would get the correct ranking of parties according to seats 90% of the time, with the remaining 10% having Ciudadanos ahead of Podemos, and the other parties still in their correct place. The model predicts the correct ranking of parties according to votes 100% of the time.

This is an interesting point because towards the end of the campaign it looked as if Ciudadanos could overtake Podemos, but it fell well short on election day. It is also interesting to see that the Popular Party performed better than expected, and one could think that, due to the ideological similarities, when it came to election day some Ciudadanos aficionados opted to vote for the PP, anticipating that they were better positioned to govern.

This explanation only holds in part, since we would have to understand why the same reasoning didn’t apply to Podemos voters. However, the key could lie in the under-performance of the "Other" parties, which, due to their regional anti-central-government nature and their mostly left-leaning ideology, may have been easy prey for the Podemos tide.

Whatever the true mechanism, this highlights the main problem with our model, which is that we didn't model the substitution effect among parties. For future reference, I believe these results point to two important variables which can forecast whether a party ends up "swallowing" votes from another: similarity of ideology and the probability of achieving significantly (in political terms) more seats than the other party.

When we look at the number of seats won, the model has a Root Mean Squared Error (RMSE) of just below 10 seats, suggesting that the true number of seats gained by a party lies, on average, about 10 seats away from our forecast. 10 seats represent just shy of 3% of the total seats, so, put in context, this is not a huge margin of error, and it is clearly low enough to allow us to make relevant and useful inference as regards the results, 5 days before the election. However, we could probably have improved on this, perhaps by trying to model regional races separately, which would have enabled us to identify the "finalists" of each race, perhaps allowing us to reduce the overall variability.

As regards vote shares, our RMSE is around 2%, suggesting our vote share estimate for each party is, on average, around 2% away from its actual result. This is better than the seats estimate, perhaps due to the larger number of polls at our disposal, as well as the absence of external sources of variability such as the electoral law.
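Just to make the error measure concrete, here's a minimal sketch of the computation in R (the numbers are placeholders, not our actual forecasts or the official results):

rmse <- function(pred, obs) sqrt(mean((pred - obs)^2))

# hypothetical seat forecasts vs hypothetical results
predicted.seats <- c(118, 95, 55, 50)
actual.seats    <- c(123, 90, 69, 40)
rmse(predicted.seats, actual.seats)      # on the scale of ~10 seats

# hypothetical vote-share forecasts vs hypothetical results
predicted.share <- c(0.27, 0.23, 0.18, 0.16)
actual.share    <- c(0.29, 0.22, 0.21, 0.14)
rmse(predicted.share, actual.share)      # on the scale of ~0.02, ie 2%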


When we plot our prediction intervals for seats against the actual results and we stretch the prediction interval slightly to include 3 standard deviations (hence including even rare outcomes under our model), it becomes evident that this was indeed somewhat of a special election. Almost all of the results are either at the border of our prediction, or have jumped past it just slightly, with the only exception being Ciudadanos. This suggests that, beyond the Ciudadanos problem, this election was an extreme case under our model, meaning that either our model could have been better, or we were just very unlucky. I tend to believe the former.

Our vote share results also hold some interesting clues. Although our point estimates here do better on average than for the seats, we pay the price of some over-confidence in our estimates. Our prediction intervals just aren't large enough. This could be for several reasons, including perhaps too high a precision parameter on our long-run estimates. Moreover, polls may have been consistently "wrong", leading our dynamic Bayesian framework to converge around values which, sadly, were untrue. We should look into mitigating this effect through a more thorough weighting of the polls.
[I think this is a very interesting point and similar to what we've seen with the UK general election a few months back, where the polls were consistently predicting a hung parliament and suggesting a positive outcome for the Labour Party, which (perhaps? I don't even know what to think about it anymore...) sadly never materialised. Maybe the question here is more subtle and we should try and investigate more formal ways of telling the model to take the polls with a healthy pinch of salt...]

Other forecasters attempted the same exercise, with similar accuracy to our own effort, but using different methodologies. Kiko Llaneras, over at elespanol.com, and Virgilio Gomez Rubio, at the University of Castilla-La Mancha, have produced interesting forecasts for the 2015 Election. They have the greater merit, compared to our own effort, of having avoided the aggregation of "Other" parties into a single entity, as well as having produced forecasts for each single provincial race.
[In our model, we did have a provincial element, in that the long-term component of the structural forecast did depend on variables defined at that geographical level. But we didn't formally model each individual province as a single race...]

I put our results and errors together with theirs in the following tables for comparison. For consistency, and to allow for proper comparison, I stick to our labels (including the "Other" aggregate). It should be noted that the other guys were more rigorous, testing themselves also on whether each seat in the "Other" category went to the party they forecast within that category. Hence their reviews of their own efforts may be harsher. That said, since none of the "Other" parties has a chance of winning the election, this rating strategy is fair enough for our purposes.


As is clear from this table, Virgilio had the smallest error, whilst all forecasts have broadly similar errors. Where Virgilio does better than both us and Kiko is in the PSOE forecast, which he hits almost on the dot, whilst we underestimate it. Furthermore, he’s more accurate on the "Others", as is Kiko, suggesting that producing provincial forecasts could help reduce error on that front. Finally, whilst our model falls short of forecasting the true swing in favour of the PP, it also has the smallest error for the two new parties, Ciudadanos and Podemos.
[I think this is interesting too and probably due to the formal inclusion of some information in the priors for these two parties, for which no or very limited historical forecast is available.]

Looking at the actual results, we can only speculate as to the future of Spain. None of the parties even came close to touching the magic number of 176 seats needed for an outright majority. However, some interesting scenarios may unfold: the centre-right (C+PP) only manages to put together 163 seats; the left, on the other hand, could end up being able to form a governing coalition. PSOE, IU and Podemos can pool their seats together to get to 161 and, if they manage to convince some of the "Other" left-leaning parties, they could get the 15 seats they need in order to govern.

However, this would certainly be an extremely fragile scenario, which would lead to serious contradictions within the coalition: how could Podemos forge a coalition with the PSOE given extremely serious disputes such as the Catalonian independence referendum? Rajoy's hope is that he'll be able to convince the PSOE to form a "Grand Coalition" for the benefit of the nation; however, this scenario, whilst being the one preferred by markets worldwide, is unlikely as the PSOE "smells blood" and knows it can get rid of Rajoy, if it holds out long enough.

In conclusion, our model provided a very good direction for the election and predicted its main and most important outcome: a hung parliament and consequent uncertainty. However, through a more thoughtful modelling of the polls, an effort to disaggregate "Others" into its constituent parties, and province-level forecasts, we could go a long way towards reducing our error.

Saturday, 19 December 2015

Political Forecasting Machine - The Spanish Edition

This is a (rather long, but I think equally interesting!) guest post by Roberto (he's introducing himself below). We had already done some work on similar models a while back, and he got so into this that he wanted to take on the new version of the Spanish Inquisition (aka the general election). I'll put my own comments on his original text below between square brackets (like [this]).


My name is Roberto Cerina, I am a Masters student under Gianluca's supervision here at UCL, and this is a guest post on work we've been doing together.

A year after "shaking-up" the status-quo of pollsters and political pundits with our fearless forecasting of the US 2014 Senate election, the Greatest Political Forecasting Machine the world has ever seen is back with a vengeance. This time, we take on the Spanish Armada. The juicy results come after the "Model" section, so feel free to jump right there if you can't contain your enthusiasm for this forecast. Apologies for all and any errors in this work, for which I take full responsibility for.
[Actually, I guess I should be taking more responsibility in those cases. But given he's volunteered, I'll let Roberto do this! ;-)]

Pablo Iglesias, leader of the anti-austerity party "Podemos"

Intro:
The Spanish election which will take place this coming Sunday (December 20th) is a whole different kind of challenge compared to our experience with the US. To start with, we are talking about a multi-party system, which has seen over 100 different political entities inhabiting the "Congreso de los Diputados" (the lower house of the Spanish Parliament, which is the focus of our forecast); furthermore, the Kingdom has only been a democracy since 1977 and morphed into its classical form of PP (Partido Popular, centre-right) vs PSOE (Partido Socialista Obrero Español, centre-left) only as late as 1989, giving us very few past elections on which to base our results. To complicate matters further, this 2015 election is hardly exchangeable with previous ones, involving two new parties, Ciudadanos (C) and Podemos (P), which are polling at around 20% each. Finally, strong regional identities lead to territorial political parties which do not fall into any specific left or right category and, whilst they perform well within their specific constituencies, are largely irrelevant on the national stage.

In order to tackle this beast, we use data from the Global Election Database, giving us access to constituency-level past election results; economic data from Quandl and the Spanish Instituto Nacional de Estadística; historical government approval ratings from the Centro de Investigaciones Sociologicas; and seat and vote share polls from the (very, very useful) tailor-made Wikipedia page.


The model:
We capture the long-term dynamics of the Spanish election through a Discrete Choice Model based on a Multinomial Logit Regression, estimated in a Bayesian fashion. We recognise this does not solve the "Independence of Irrelevant Alternatives" problem (a property automatically encoded in the multinomial-logit model, which implies that the entry of a new party in the race leaves the relationships between the other parties unchanged). We look forward to modifying this historical model, perhaps using a Multinomial Probit or Nested Logit, in the next iteration of our forecast.

The parties which we examine throughout are those that compete nationally, whilst the "Other" label captures regional entities and very low-performing national parties. The parties examined are: Partido Popular (PP); Partido Socialista Obrero Español (PSOE); Izquierda Unida (IU); and Others. This Long Run model does not include estimates for the new parties, C and P. The model is at the province level, and it is simple to aggregate the results to get a national estimate of the vote shares. Having a province-level model allows us to keep flexibility with respect to using provincial polls if they become available. The variables which we have used for this particular example are: province-level GDP; national government approval rating; and national incumbency of the party. The model is essentially a version of the famous "Time for Change" model developed by Alan Abramowitz, which is a good starting point for election dynamics, although it fails to be a good predictor for non-governing parties in multi-party races. We hope to cure this ill by introducing a party-province random effect to account for significant "party stronghold" effects.

The Discrete Choice Model is based on the idea that rational voters gain utility from voting for one party or another. We model their utility as the sum of the observed utility (based on the "Time for Change" economic vote model) and an unobserved component distributed according to a Gumbel distribution, hence encoding a Multinomial Logit model.
$$U_{ikt} = H_{ikt} + e_{ikt}, \mbox{ with } e_{ikt} \sim\mbox{Gumbel}(\mu,\beta). $$
The probability that an individual voter (or an aggregate of voters, in our case) will vote for party $i$ in province $k$ at time $t$ depends on whether party $i$ guarantees the individual more utility than the other parties:
$$ P_{ikt} = \mbox{Pr}(U_{ikt}>U_{jkt}) = \mbox{Pr}(H_{ikt} + e_{ikt} > H_{jkt}+e_{jkt}) = \mbox{Pr}(e_{jkt}<e_{ikt}+H_{ikt}-H_{jkt}), \quad \forall j\neq i.$$
After a short and painless derivation, it is possible to see that the probability of interest is a logit function of the observed utility:
$$\mbox{logit}(P_{ikt}) = H_{ikt}.$$
[I like the idea of a "painless" algebraic derivation $-$ I certainly didn't know any such thing when I was a student...]
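(To make the step explicit: under the Gumbel assumption the choice probabilities take the familiar multinomial-logit, or softmax, form
$$P_{ikt} = \frac{\exp(H_{ikt})}{\sum_{j} \exp(H_{jkt})},$$
so that the log-odds of choosing party $i$ over any other party $j$ is just the difference in observed utilities, $\log(P_{ikt}/P_{jkt}) = H_{ikt}-H_{jkt}$.)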

The historical forecast is a hierarchical function of national ($x$) and provincial ($z$) variables, and all the relevant coefficients are assigned vague priors:
$$H_{ikt}= \alpha_{ik} + \sum_a \beta_{aik}\, x_{akt} + \sum_b \zeta_{bik}\, z_{bkt}.$$
After deriving the long run vote share probabilities, we need to produce a forecast for 2015 which includes the new parties running. We want a national forecast, so we aggregate the province forecasts. Then we assign reasonably vague priors to the expected vote shares of C and P, and re-weight our previous estimates to account for the presence of these parties. Here we assume that C and P steal votes equally from all parties, something that is probably overly simplistic.
$$\mbox{P}_{C,t}, \mbox{P}_{P,t} \stackrel{iid}{\sim} \mbox{Uniform}(0.1,0.3) $$
We then re-weight the national long run estimates, after assigning C and P uniform priors between 10% and 30%, bounds determined by the understanding that a) a party winning 30% of the vote in this election would essentially win it, and neither party looks poised to come first; and b) both parties polled well above 10% as early as February, and hence there would have been no chance of either falling below this bracket.
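A minimal R sketch of this re-weighting step, under the stated assumption that C and P take votes proportionally from all the existing parties (the long-run shares below are just illustrative placeholders):

set.seed(1)
long.run <- c(PP = 0.38, PSOE = 0.33, IU = 0.08, Other = 0.21)   # hypothetical long-run shares

n.sims <- 5000
p.C <- runif(n.sims, 0.1, 0.3)    # prior draws for Ciudadanos
p.P <- runif(n.sims, 0.1, 0.3)    # prior draws for Podemos

# rescale the old parties' shares so that all six sum to 1 in each simulation
reweighted <- sapply(seq_len(n.sims), function(s)
  c(long.run * (1 - p.C[s] - p.P[s]), C = p.C[s], P = p.P[s]))
rowMeans(reweighted)              # average re-weighted vote shares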

We use a Dynamic Bayesian model to update the Long Run estimates (for each party) obtained above with the observed opinion polls. The multitude of polls available allows us to follow the campaign for 51 weeks, meaning every week since the beginning of 2015. We connect weeks together through a reverse random walk, pinned on election week at the Long Run estimates, which enables us to make inference over weeks where no polls have been released. Every week we pool all opinion polls released during the course of that week and feed them to the model, which then updates its estimates for the predicted vote shares of every party, for every week in the campaign, in a Bayesian fashion.

  • $\mbox{Y}_{il} \sim \mbox{Multinomial}(\mbox{N}_{l},\theta_{1l},..., \theta_{nl})$
  • $\theta_{il}=\mbox{logit}^{-1} (v_{il} + m_{i}),$
  • $v_{iL} \sim \mbox{Norm} \left( \mbox{logit}(\mbox{P}_{iT}), \frac{1}{\tau_{hist}}\right)$
  • $v_{il} \mid v_{i,l+1} \sim \mbox{Norm} \left( v_{i,l+1},\sigma^2_v\right)$, for $l=L-1,\ldots,1$
  • $m_{i} \sim \mbox{Norm} \left( 0, \frac{1}{\tau_{mi}}\right)$
In the equations above, we have $l=1,\ldots,L$ campaign weeks; $i=1,\ldots,n$ parties; $Y_{il}$ is the number of voters expressing a preference for party $i$ at week $l$ of the campaign; $N_l$ is the number of voters polled over week $l$; $P_{iT}$ is the predicted vote share for the election year at hand $T$, derived from the Long-Run model; $v_{il}$ is a party-week effect; and $m_i$ is a party effect capturing the remaining unobserved party-specific variability during the campaign. The precision parameter $\tau_{hist}$ represents the confidence we have in our long run model estimates as regards the election at hand, and it is to be calibrated through a sensitivity analysis.
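To give an idea of how the reverse random walk behaves, here is a small prior-predictive sketch in R for a single party (all the numbers are illustrative assumptions, not the fitted values): the walk is pinned at the long-run estimate in election week $L$ and run backwards, so weeks with no polls still get a (progressively vaguer) estimate.

set.seed(123)
L <- 51                            # campaign weeks
P.hist <- 0.28                     # hypothetical long-run vote share for party i
tau.hist <- 100                    # precision around the long-run estimate
sigma.v <- 0.05                    # week-to-week standard deviation

v <- numeric(L)
v[L] <- rnorm(1, qlogis(P.hist), sqrt(1 / tau.hist))        # pinned at election week
for (l in (L - 1):1) v[l] <- rnorm(1, v[l + 1], sigma.v)    # reverse random walk

m <- rnorm(1, 0, 0.02)             # party effect
theta <- plogis(v + m)             # implied weekly vote-share path
plot(theta, type = "l", xlab = "Campaign week", ylab = "Vote share")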

We get an initial seat projection by exploiting the historical correlation between votes and seats. We produce two different regressions, one for "Governing parties" and the other for "Non-governing parties", and use the former to estimate the seat shares for PP and PSOE, and the latter those for IU, Other, C and P. The estimates are then pooled together and re-weighted in order to make sure they add up to the full 350 seats in the Congress of Deputies.

The final seat projections come from another Dynamic Bayesian model, which makes use of the available seat projection polls (which are usually not as freely available) since the beginning of 2015. Again, we connect weeks together through a reverse random walk pinned on election week at the Seat Projections derived above (equations omitted as almost identical to the above Dynamic Bayesian Model). 



Results:
We first produce our long run estimates, and derive a measure to assess their reliability over time. Our Long-Run dynamics estimates for the national vote shares are as follows (to two decimal places):
How reliable are these predictions? A Root Mean Squared Error of 0.0529 (4dp) over past elections since 1989 suggests that the model misses the correct vote shares of the 5 parties by about 5.3% on average. This makes for a fairly reliable model which captures most of the historical dynamics behind the election. The point estimates of the model provide a mixed bag of results with respect to pointing to the correct winner of the election, although this is more a reflection of the competitiveness of Spanish politics (shown by overlapping prediction intervals for the governing parties) than a weakness of the model itself.
We then proceed to update these estimates with the available polls. The final vote share estimates are also displayed below. According to these estimates, the PP wins the largest percentage of votes, followed by the PSOE, Podemos and C. The Other parties and IU win predictable percentages. The model provides strong confidence in these estimates, with standard deviations lower than half a percentage point. These results are more or less in line with the long-run model, with the exception that the PSOE is underestimated according to the long-run dynamics. However, both history and polls conclude that Mariano Rajoy’s Popular Party is set to gain the largest vote share.
The Dynamic Bayesian model allows us to make inference as it relates to the campaign behaviour of voters, which is displayed in the following plots. For this post we only fit the model for election week, due to computational constraints, and the last polls embedded in the model are from December 16th (although the official polling deadline was December 14th, some "illegal polls" from credible papers have been published since). We should point out that the validity of inference made from this model, especially as it pertains to voter behaviour during the campaign, is proportional to the validity of the polls as a tool for monitoring the behaviour of voters. Some pollsters are better than others, and in future iterations of this model we will investigate polling firms further and use a weighted average of polls rather than a simple average.
The dotted line in the plots points to the historical estimate of the vote share. We can see how the PSOE over-performs the historical estimate throughout the campaign, whilst the PP dances around it, outperforms it from week 28 to week 47 of the campaign, and eventually converges to its historical estimate. The vote shares for these governing parties are rather stable throughout the campaign. A very different scenario unfolds when we look at the "young guns", C and P.
The two new parties start from opposite ends of the spectrum: C is a former regional, Catalonia-based anti-separatist party, which was largely unknown to the wider public at the beginning of this campaign. Podemos, on the other hand, came off the back of a successful European Election, which helped put it on the map, as well as strong international recognition as Spain’s anti-austerity answer to the perceived dictates of the European Union. Furthermore, the electoral victories of Syriza (the Greek "equivalent" of Podemos), and the prominence of the discussion on austerity and the EU, essentially allowed Podemos to monopolise the debate in the early stages of 2015. As the campaign went on, we can see how Podemos fell short of its initial goals, and lost as much as 15% of its popular vote share, before picking back up to a more reasonable 20% in the last few weeks of the campaign. C follows essentially the exact opposite path, increasing its share as Podemos’ falls, and dropping back to around 17% of the total vote share as Podemos pushed back in the last week. This suggests there is a high degree of substitutability between the parties, which is odd considering C is a centre-right party and Podemos is very far left.

More investigation of the reasons for this shared electorate is in order; however, a simple explanation could just be that they both represent something new in Spanish politics. If someone decided to vote for a "new" party, before the rise of C they could only vote for Podemos, perhaps even if their ideas were in contrast with those of the party, as a sign of protest. However, the rise of C allowed those disaffected with the current political system, but still of the same ideological spectrum as the current (right-wing) governing party, to find a new home. This is just speculation for now, but perhaps it could explain at least some of the exchangeability between the two parties. IU and the Other parties maintain a fairly invariant percentage of the vote, with the exception of the last few weeks for the Other parties. This could be a result of the parties included in this category being correlated with C’s or Podemos’ variability.

We then go ahead and try to provide Seat Projections for the Congress. The difficulty here is that, not having regional polling at our disposal, it is not possible for us to determine the result of every single race.
Furthermore, the relationship between national vote share and seats is not 1:1, since Spain uses the D'Hondt system of seat allocation, which is a complex mechanism meant to strike the right balance between representation and governability. However, this system has its weaknesses, ie it over-represents large parties, especially in rural areas, whilst penalising smaller ones. This simple conceptual difference is enough for us to justify the use of two different projections for governing and non-governing parties, as explained above. Hence, we pool the seats won (historically) by the parties into these two categories, do the same for the votes and then regress seats on votes for both groups. The results of these projections are then aggregated and re-weighted, providing us with the following estimates based solely on the correlation between votes and seats (a rough sketch of this votes-to-seats step follows below):

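As a rough sketch of this votes-to-seats step (not our actual code $-$ the historical figures and the 2015 vote-share forecasts below are placeholders):

# hypothetical historical data: national vote share, seats won, and whether the
# party belongs to the "governing" group (PP/PSOE) or not
hist.data <- data.frame(
  votes = c(0.44, 0.39, 0.38, 0.43, 0.09, 0.05, 0.07, 0.04),
  seats = c(183, 169, 164, 186, 21, 10, 17, 8),
  governing = c(TRUE, TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE)
)

fit.gov  <- lm(seats ~ votes, data = hist.data, subset = governing)
fit.rest <- lm(seats ~ votes, data = hist.data, subset = !governing)

# 2015 vote-share forecasts (placeholders): PP/PSOE use the "governing" line,
# the others the "non-governing" one
new.gov  <- data.frame(votes = c(PP = 0.27, PSOE = 0.22))
new.rest <- data.frame(votes = c(C = 0.18, P = 0.20, IU = 0.04, Other = 0.09))

proj <- c(predict(fit.gov, new.gov), predict(fit.rest, new.rest))
proj <- proj * 350 / sum(proj)     # re-weight so the seats add up to 350
round(proj)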
We then proceed to update these with seat projection polling: the results of this effort are below, along with the plots showing how the potential seat shares have evolved during the campaign. Finally, seats are projected on the actual parliament and compared to the parliament composition we were left with in 2011, from which we can make inference as to the total change in seats.
 
Variability in seats is generally very high, but we see many of the same patterns as for the vote shares during the campaign. PP and PSOE are relatively stable in their estimates and, especially in the last few weeks of the campaign, they don’t seem to vary much at all. The same can be said for IU, which is even more remarkably stable throughout, whilst the Other parties experience a slight U-shaped trend in the last few weeks of the campaign, but stay within their overall campaign average interval. C and Podemos are again remarkably variable, with Podemos setting itself up to be potentially the second largest party at the beginning of 2015, but then falling dramatically in conjunction with C’s rise. C’s momentum was very strong in the second half of the campaign, but flattened out in November and December, something that coincided with Podemos regaining strength.



In conclusion, we expect the congress post-2015 election to look extremely fragmented. No party will outright win a majority and it will be a power-play to see who will manage to form a government. Mariano Rajoy is likely to stay prime minister
[I guess it's: sorry, Spain...?]
but should he find himself incapable of forming a government by brokering deals with some of the other parties, we may see new and unpredictable scenarios unfolding. The powerful showing and entry into parliament of Ciudadanos and Podemos raise a question: are we seeing a gradual but definite systematic renewal of Spanish politics, or is this predicted strong performance by the new parties due to the exceptional circumstances we live in (austerity, unemployment, EU centralisation, etc)? We cannot answer that as of today, but we can be sure that the answer depends almost entirely on how much "change" these parties are going to be able to bring and, crucially, how quickly. They may find that even a politically adventurous electorate such as the Spaniards will have very little patience when it comes to broken promises.

[Just to add that another interesting forecast of the Spanish election has been done by Virgilio]

Saturday, 12 December 2015

ERCIM/CMStat 2015

Tomorrow I'll be at the 8th International Conference of the ERCIM WG on Computational and Methodological Statistics (CMStatistics 2015), which has again come to London. I have only been to this conference once, two years ago $-$ that time, too, it was held at Senate House, just around the block from the office.

I've been invited to talk in the session on Health Economics $-$ that's the first time such a session has been held at CMStats $-$ and I'll present our work on the Expected Value of Partial Information (I've mentioned this already here. My slides are here). 

The session looks good (details here $-$ search for code "EO254"). Interestingly, it seems like an Italian-Greek face-off (I guess we're somewhat in between, with Ioanna being a co-author). Anna is the odd one out as the sole non-Graeco-Roman...

(Speaking of, the picture above is incidentally the Temple of Concordia in Agrigento, where I was born $-$ well, not in the temple, obviously, just the town...)

The Master plan

Together with Jolene (who's really been the driving force behind this) and Marcos, I've been working in the past few months to try and set up a new MSc in Health Economic Evaluation and Decision Science at UCL.

The process has been relatively long and we've had to overcome a few bumps, but it would appear that we are being successful $-$ there are a couple more signatures to get through, plus all the business of advertising and setting up a couple of new modules, but these shouldn't be too terrible!

I think this is a very exciting prospect: the MSc will be made up of 8 modules and will comprise a joint core, in which students will have the choice of focusing on a higher-income-country or a global context (the latter will tend to emphasise the challenges of low- and middle-income countries). In addition, as the MSc title gives away, the students will be able to choose a "decision science" stream or an "economics and policy" stream. I think this is very nice and crucial, since we'll be able to provide interesting options and the possibility of selecting from a wide and diverse range of modules.

We'll be involved particularly with the "decision science" stream, for which my new module "Bayesian methods in health economics" will be core (mandatory), together with other modules that we currently provide (eg Medical Statistics). Again, I think this is really good as it's increasingly important (IMHO) to have modellers with very advanced statistical skills in health economic evaluation (or, more specifically, I should say cost-effectiveness/utility analysis).

If all goes to plan, we'll start with the first cohort of students in September 2016 $-$ that's really exciting!