In the run-up to the 2015 UK general election there must have been in the order of 100 opinion polls. Few, if any, put the gap in the share of the popular vote at over 6%. Nearly all relied mostly on data gathered by sampling voters' opinions over the telephone or the internet.
This morning we know that nearly all national opinion polls conducted recently were way off the mark.
And whilst many will put the discrepancy down to voters lying (no-one admits to voting Tory, right?!) or changing their minds at the last minute (it's a lovely sunny spring day; there's nothing much wrong with the current government!), I put the blame firmly on the pollsters for perpetuating the myth that a hastily polled sample of a thousand or so voters can be corrected to truly reflect national opinion.
Today's relentless drive for "content" - online publications dragging out news where there is none, competing for website visitor numbers from a readership with the combined attention span of Katie Hopkins on junk - has driven the demand for poll after poll after poll.
Consequently, opinion polls were pulled together as cheaply as possible in order to meet the demand from publishers.
But there's one huge problem with cheaply produced surveys: they rely on the lowest-cost methods of gathering data, which today means the internet or the telephone.
And these methods simply cannot provide a representative picture of voting intent, because each one misses a large demographic that simply can't be reached that way.
Yes I have a telephone, but I no longer answer it if I don't recognise the number.
The rise in cold calling from marketeers has made it pretty much impossible to reach anyone over the telephone who doesn't want to spend half their life hearing how they could be owed thousands from mis-sold PPI, or get a new boiler for next to nothing thanks to some government scheme or other.
As for online surveys, well, they typically reach the clued-up generation: the active participants in an interactive medium, willing to lodge their closely held political beliefs with any computer program that cares to ask.
My point is that neither of the lower-cost sampling methods is particularly representative.
Nor do I believe the two most popular methods, telephone and internet surveys, complement each other particularly well: add a telephone poll to an internet poll and there is still a sizeable subset of voters whose views go unrepresented.
Any truly representative survey would need to employ an appropriate mix of polling methods, from doorstep questions to street surveys, together with telephone and internet polls.
This itself isn't controversial, but the opinion polling organisations counter this with their mythical Model.
"Oh, it doesn't matter that our sample isn't representative because we have the data to correct it for all the classes of people our survey didn't reach. We just plug the data into our Model and we are statistically accurate to within a percent, two at the most."
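To see what that claim amounts to, here is a minimal sketch of post-stratification weighting in Python - roughly the kind of correction the pollsters describe. Every demographic share and party figure below is invented purely for illustration; real models use far more cells and variables than three age bands.

```python
# A minimal sketch of post-stratification weighting. All figures are
# hypothetical, for illustration only.

# Hypothetical population shares by age band (sum to 1).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Hypothetical internet-poll sample that over-represents the young.
sample_counts = {"18-34": 500, "35-54": 300, "55+": 200}

# Hypothetical share of each group intending to vote for Party X.
party_x_share = {"18-34": 0.25, "35-54": 0.35, "55+": 0.45}

total = sum(sample_counts.values())

# Raw estimate: a straight average over the skewed sample.
raw = sum(sample_counts[g] / total * party_x_share[g] for g in sample_counts)

# Weight each group by population_share / sample_share so the sample's
# demographic mix matches the population's.
weights = {g: population_share[g] / (sample_counts[g] / total)
           for g in sample_counts}
weighted = sum(sample_counts[g] / total * weights[g] * party_x_share[g]
               for g in sample_counts)

print(f"raw estimate:      {raw:.3f}")       # 0.320
print(f"weighted estimate: {weighted:.3f}")  # 0.355
```

With these made-up numbers the correction moves the estimate by 3.5 points - which shows both why pollsters lean on the technique and how completely the result depends on the weights being right.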
But the model itself requires in-depth sampling and is consequently costly to build and maintain.
Costly, that is, in an era where publishers commission opinion polls on a shoestring, forcing polling organisations to focus most of their effort on turning around cheap polls rather than on maintaining the model.
We all know that political opinion shifts, but what I believe pollsters have built in their models is a weighting system that can't keep up with the drift and fluctuation in voting intentions.
They end up plugging today's non-representative data into yesterday's weighting model and selling the result as a true reflection of the public mood.
Models based on voter behaviour up to five years ago can never correct for a shift in opinion.
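The flaw is easy to demonstrate with a toy example - again, every figure here is invented. Suppose the pollster weights demographic groups by their turnout shares at the previous election, but turnout has since shifted towards older voters:

```python
# A toy illustration (all figures invented) of why stale weights mislead.

# Hypothetical share of each age band intending to vote for Party X.
party_x_share = {"18-34": 0.25, "35-54": 0.35, "55+": 0.45}

# Turnout shares the model was calibrated to, years ago.
old_turnout_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Actual turnout shares on the day: older voters turned out more.
new_turnout_share = {"18-34": 0.22, "35-54": 0.33, "55+": 0.45}

# Estimate produced by the stale model vs. the actual result.
stale_estimate = sum(old_turnout_share[g] * party_x_share[g]
                     for g in party_x_share)
true_result = sum(new_turnout_share[g] * party_x_share[g]
                  for g in party_x_share)

print(f"model says:  {stale_estimate:.1%}")  # 35.5%
print(f"ballot box:  {true_result:.1%}")     # 37.3%
```

Even with opinion within every group held perfectly fixed, yesterday's weights miss the result by nearly two points - and real shifts in opinion only compound the error.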
Maybe one year someone will commission a single truly representative survey instead of two dozen throw-away polls; only then will we be able to answer whether the weighting-model approach is flawed, or whether the electorate simply can't be relied on to answer a political question honestly?!