Can we trust Newspoll?

Via William Bowe:

The Australian has reported the result with its characteristic bullish confidence, precisely attributing the Coalition’s 1.5% improvement as compared with the election result to the “passage of the government’s $158 billion income tax cuts, drought funding package and national security legislation”.

YouGov Galaxy principal David Briggs also seemed hesitant to concede that anything had been seriously amiss in a column shortly after the election, in which he argued the company’s seat polls and state-level results offered “crucial evidence” that should have alerted commentators to a tighter race than their consensus suggested.

While it’s true that none of the Newspoll and YouGov Galaxy seat polls published in the last week of the campaign showed Labor leading in a seat it didn’t actually win, the voting intention numbers were no less prone than the national polling to understating what proved to be the Coalition’s true level of support.

This was particularly notable in Queensland, where YouGov Galaxy polls recorded dead heats a few days out from polling day in the seats of Forde and Herbert, both of which the Coalition ended up winning by margins of between 8% and 9%.

The pollster’s statewide results from Queensland were never better for the Coalition than 51-49 during the campaign, and while the YouGov Galaxy exit poll landed closer at 53-47, it was still outside the margin of error relative to the 58.4-41.6 thumping that the state’s voters inflicted on Labor on the day.
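As a rough check on that claim, a simple-random-sampling margin of error can be computed directly. This is only a sketch: the sample size below is a hypothetical n = 1,000, since the actual exit-poll sample is not stated here.

```python
import math

def moe_95(p, n):
    """95% margin of error for a sampled proportion p with sample size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

n = 1000                      # hypothetical sample size (assumption, not from the source)
poll, actual = 0.53, 0.584    # exit poll vs actual Coalition 2PP in Queensland

margin = moe_95(actual, n)
print(round(margin * 100, 1))        # about 3.1 points
print(abs(poll - actual) > margin)   # True: the 5.4-point miss exceeds the margin
```

Even with a generous sample size, a gap of more than five points cannot be explained by sampling error alone, which is the point being made about the exit poll.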

The impression that methods that had worked well for the pollster in the past were no longer doing so was reinforced with the emergence of the campaign tracking polling it had conducted for Labor, which was only fractionally less favourable to Labor than what was appearing publicly.

More recent commentary from Briggs has been more candid, acknowledging the persistent exaggeration of Labor support and speculating that a “shy Tory effect” may have been at work, using a phrase that was frequently invoked in Britain after the Conservatives defied the polls in 1992 and 2015.

However, the latest poll comes with no insights as to what might have been done to correct for it, and how the methods that today credit the Coalition with a 53-47 lead might differ from those that had it trailing 51.5-48.5 on the eve of an election whose result was exactly the reverse.

In the past, YouGov Galaxy has felt able to justify the opaqueness of its methods on the grounds that its “track record speaks for itself”.

That justification will be finding far fewer takers today than it did before the great shock of May 18.

Mark the Ballot recently assessed “house effects”:

I have three anchored models for the period 2 July 2016 to 18 May 2019: the first is anchored to the 2016 election result (left anchored), the second to the 2019 election result (right anchored), and the third to both election results (left and right anchored). Let’s look at these models.


The first thing to note is that the median lines in the left-anchored and right-anchored models are very similar. It is pretty much the same line moved up or down by 1.4 percentage points. As we have discussed previously, this difference of 1.4 percentage points is effectively a drift in the collective polling house effects over the period from 2016 to 2019. The polls opened after the 2016 election with a collective 1.7 percentage point pro-Labor bias. This bias grew by a further 1.4 percentage points to reach 3.1 percentage points at the time of the 2019 election (the difference between the yellow line and the blue/green lines on the right-hand side of the last chart above).

The third model, the left-and-right anchored model, forces this drift to be reconciled within the model, even though the model explicitly assumes there is no such drift (i.e. that house effects are constant and unchanging). In handling this unspecified drift, the left-and-right anchored model places much of the adjustment close to the two anchor points at the left and right extremes of the chart. The shape of the middle of the chart is not dissimilar to the singly anchored charts.

While this is the output for the left-and-right anchored model, I would advise caution in assuming that the drift in polling house effects actually occurred in the period immediately after the 2016 election and immediately prior to the 2019 election. It is just that this is the best mathematical fit for a model that assumes there has been no drift. The actual drift could have happened slowly over the entire period, or quickly at the beginning, somewhere in the middle, or towards the end of the three year period.

My results for the left-and-right anchored model are not dissimilar to Jackman and Mansillo’s. The differences between our charts are largely a result of how I treat the day-to-day variance in voting intention (particularly following the polling discontinuity associated with the leadership transition from Turnbull to Morrison). I chose to specify this variance rather than model it as a hyper-prior, because: (a) we can observe higher volatility immediately following discontinuity events, and (b) the sparse polling in Australia, especially in the 2016-19 period, produces an under-estimate of this variance in this model.

All three models have a very similar result for the discontinuity event itself: an impact of just under three percentage points. Note: these charts are in vote shares, not percentage points.


And just to complete the analysis, let’s look at the house effects. I would urge caution with all of these house effects: they are an artefact of the best fit in models that do not allow for the 1.4 percentage point drift in collective house effects that occurred between 2016 and 2019.
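The identification problem behind those cautions can be illustrated with a toy simulation. Everything here is invented (the pollster names, biases, and sample counts); it is a sketch of the anchoring idea, not Mark the Ballot's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

truth = 48.5                              # toy "election result": Labor 2PP
biases = {"HouseA": 2.0, "HouseB": 4.0}   # hypothetical absolute pro-Labor biases

# Each house publishes noisy polls of the truth plus its own bias.
samples = {h: truth + b + rng.normal(0, 0.8, 30) for h, b in biases.items()}
means = {h: s.mean() for h, s in samples.items()}
grand = np.mean(list(means.values()))

# Unanchored: only *relative* house effects are identified -- deviations
# from the cross-house mean, which sum to zero by construction.
relative = {h: m - grand for h, m in means.items()}   # near -1.0 and +1.0

# Anchored to the election result: *absolute* effects become identifiable,
# including the collective bias that cancels out of the relative view.
absolute = {h: m - truth for h, m in means.items()}   # near +2.0 and +4.0
collective = grand - truth                            # near +3.0
print({h: round(v, 1) for h, v in relative.items()}, round(collective, 1))
```

A model anchored at two elections must additionally apportion any change in that collective bias across the intervening period, which is exactly the drift Mark the Ballot warns the left-and-right anchored model cannot locate in time.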

In short, forget Newspoll. Track YouGov.
