One of the lingering questions consuming researchers, statisticians, and pollsters is how just about everyone incorrectly predicted this election, in some cases by astonishingly large margins.
The most inaccurate of them all was Huffington Post, which on Tuesday afternoon announced that Hillary Clinton had a 98% chance of being elected president. While just about every pollster foresaw a Clinton win, HuffPost’s prognostication was the most wildly off-base. Instead of a Hillary Clinton lock to break that glass ceiling, the week ended with Donald Trump meeting President Obama at the White House yesterday.
In an extensive mea culpa, HuffPost’s Senior Polling Editor, Natalie Jackson, courageously took the bullet. In “Why HuffPost’s Presidential Forecast Didn’t See A Donald Trump Win Coming,” she cites a number of factors for her bad call.
She notes the main culprit was an over-reliance on polls to the exclusion of just about everything else. Aggregating many polls together – when they’re similarly flawed – doesn’t make for more accurate forecasting. But that’s not where it stopped.
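That aggregation problem can be sketched with a toy simulation. This is just an illustration of correlated error, not any pollster’s actual model; the margin and bias numbers here are invented for the sake of the example.

```python
import random

random.seed(42)

TRUE_MARGIN = -1.0   # hypothetical "true" Clinton-minus-Trump margin, in points
SHARED_BIAS = 3.0    # hypothetical systematic error shared by every poll
N_POLLS = 20

# Each poll reading = truth + the shared bias + that poll's own sampling noise.
polls = [TRUE_MARGIN + SHARED_BIAS + random.gauss(0, 2.5) for _ in range(N_POLLS)]
average = sum(polls) / len(polls)

# Averaging cancels the independent noise, but the shared bias survives:
# the aggregate converges on TRUE_MARGIN + SHARED_BIAS, not TRUE_MARGIN.
print(f"Poll average: {average:+.1f}  true margin: {TRUE_MARGIN:+.1f}")
```

Average twenty similarly flawed polls and the random noise washes out nicely, but the shared miss stays in the average at full strength. More polls, same wrong answer.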
Most pollsters don’t take into account external factors – the difficulty of a two-term president’s party winning four more years was a factor. And perhaps so were the biases of the pollsters themselves. Remember how wrong Mitt Romney’s pollsters were about their candidate’s chance of unseating Barack Obama in 2012? Sometimes polls can be tinted, tainted, and obscured by the hopes and desires of the pollsters themselves, the networks that discuss them, and the candidates’ teams who are wishing for an outcome.
If you’ve been involved in media research at the radio station level in your career, you know how this can happen. A room full of smart strategists, great programmers, savvy managers, and respected researchers reviewing 200 slides of data can still make bad calls due to wishful thinking. How many times have you heard someone say (or maybe you’ve said it yourself) that even though the numbers look bad, we have a good feeling about the format, a personality, a contest, or anything else? And so you end up acting against the cold, hard message the data is trying to deliver.
We see the numbers we want to see. We believe in the truth we want to believe in.
Another key factor is passion, and that raw emotion may explain Trump’s victory more than anything else. Pollsters mechanically score every likely voter’s choice of candidate the same way – it either goes in the Clinton column or the Trump column.
But as we learned in this campaign, Trump voters were far more zealous, fired up, and passionate. Far more passionate.
Think of them as Super P1s. Nielsen counts them all the same. But a P1 can be someone who listens to a station for just a handful of quarter-hours throughout the week. Or it can be someone who tunes in to a favorite station six hours a day. A P1 is a P1. Except when they’re not.
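The P1 arithmetic can be sketched in a few lines. This is a toy illustration, not Nielsen’s methodology; the listening figures for both stations are invented for the example.

```python
# Hypothetical weekly listening hours for each P1 at two stations.
station_a_p1s = [1, 2, 1, 2, 1]          # casual P1s: a few quarter-hours a week
station_b_p1s = [30, 42, 35, 28, 40]     # "Super P1s": hours of listening every day

def headcount(p1s):
    # A P1 is a P1: every listener counts exactly once.
    return len(p1s)

def weekly_hours(p1s):
    # Weighting by time spent listening exposes the passion gap.
    return sum(p1s)

print(headcount(station_a_p1s), headcount(station_b_p1s))        # 5 5
print(weekly_hours(station_a_p1s), weekly_hours(station_b_p1s))  # 7 175
```

By headcount, the two stations are tied at five P1s apiece. Weight those same listeners by time spent, and one station is twenty-five times the other.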
Trump voters were more motivated and activated than Clinton voters throughout the many months of this campaign. And so it is with radio stations and personalities. Some audiences are passive and even ambivalent. They listen out of habit, and can easily be tempted by a new station, a new format, or even a contest. But other fans are intensely loyal, passionate, and fervent. They listen longer, they tell their friends, they social share, they support the station, and they show up at events and promotions.
I missed the power of the Trump passion factor, too. In a blog post this past fall, “How Big Is Your Audience?” I suggested that Trump’s huge event turnouts were tantamount to him paying too much attention to the phones and uber P1s to the exclusion of the cume, or the larger pool of voters. But in fact, those massive, energized crowds were indicative of the higher passion levels the Clinton camp couldn’t muster. A strong “ground game,” millions of dollars more for TV and radio spots, and better social media outreach mean little if the candidate isn’t connecting with voters.
Ad Age’s Jack Neff refers to it as an “enthusiasm gap” that helps explain why just about every pollster and pundit missed the mark this week.
It’s the same X-factor buyers and marketers often miss in the radio business as well. When looking at two stations with a .5 average rating, what is the difference? Listeners are listeners. Voters are voters.
Until they’re not. One station could be a music machine that’s constantly played in the background. And the other can have great personalities the audience embraces, motivating them to post about the station socially, talk about it in the office, and show up for station events. Which one would YOU buy?
Passion matters.
It’s how elections are won…and lost. Especially this one.
Pollsters will spend months and maybe years re-examining their samples, their weighting, their algorithms, their averaging, and their formulas. But until they examine their own biases and the pure passion of voters, they’ll continue to get it wrong.
And so will we.
Clark Smidt says
In college psych, we learned that a sample of 1/2 of 1% of the population was needed to be statistically valid. A handful of robotic listening devices left in the same hands for extended periods of time is a significant concern. Didn’t metros have more diaries that went to different households with each survey? And Fred is spot on: Turbo P1 Active Emotional Support is a DNS (does not show). An amazing week. Use Radio. Pull Together. Connect with what’s really out there.
Fred Jacobs says
Thank you, Clark.
Bob Bellin says
I think it was simpler than that. Yes, the passion was there for Trump and not for Clinton, but there were a lot of hold-your-nose Trump votes that were mostly anti-Hillary and not at all fueled by passion.
I think two things happened – first, the polls all look for likely voters, and the rural white vote that came out for this election had not traditionally voted much – so the models were all flawed. And I don’t buy the bias argument as much as you do, because Trump’s own polling had him losing as of election day. Second, I don’t think there was enough time to fully factor the impact of the Comey letter on the voters. Polls lag by 3-4 days, and if it was possible to poll on Monday the 7th, I think many of them would have shown a swing to Trump.
Here’s another view of this – admittedly biased.
1) She DID get more votes, despite it all
2) The Comey letter cost her a net 3 points in the polling – assume that’s correct (and it may be low – it might have cost her more). If that letter doesn’t come out, she wins FL, PA, MI and the presidency. As I said, I don’t think the polling had enough time to capture the impact of that letter – hence they were all wrong.
This is probably a good thing – candidates will be in it ’til the end now, more than they have in the past.
Fred Jacobs says
And maybe we all won’t be so slavish to the vagaries of the polls (or the ratings). Thanks for a great comment, Bob.
Jim Morrison says
The social desirability bias was likely in play – some respondents will give a socially favorable answer and not reveal their true intent. The negative appeal of these candidates may have empowered this bias. To your point on passion: when depending on predictive analysis, make sure your data stack has a pulse.
Fred Jacobs says
Jim, thanks for the comment. I think the social desirability issue was a real one, too.
Dave Hamilton says
An obscure polling company, Trafalgar Group, picked up Trump’s winning margins in the battleground states. I was reading their data on election eve. They asked survey respondents: “Who are your neighbors voting for?” Using this question, Trafalgar noticed a 5-10% increase in Trump’s share as compared to the standard candidate preference response. This model/methodology picked up the shy Trump voter, many of whom were college-educated whites – a group he ended up carrying.
Fred Jacobs says
Sometimes you have to ask the question differently in order to get to the truth. And of course, not be afraid to look it in the eye. From one research guy to another, Dave, thanks for the comment.
Dan Carlisle says
Your thoughts on seeing what you want and ignoring cold data brought back a memory of a column I read in Billboard back in 1967 about the future of FM. I was on WABX then, and we could feel we were in the right place at the right time. The writer was a gentleman who was PD at a Drake-formatted station in Dallas. He wrote that all the big-time ratings tools spelled out certain doom for FM. It was wishful thinking, of course, and it ignored all the competing data like album sales and who was buying them, concert ticket sales, etc. Kind of reminds me of what happened two days ago.
Fred Jacobs says
Misreading the data – or reading it with an agenda in mind – is always dangerous. Thanks, Dan.
K.M. Richards says
Fred, you’re spot on with your thoughts and so are the commenters ahead of me.
I started thinking “how could the polls be wrong?” about the time Trump hit the point where he would win even if Clinton carried every remaining state. And then two days later, Nielsen tells us 8% of the PPMs deactivated themselves for some reason they haven’t identified?
Hell, for all I know the research that says the CPM to reach an over-55 audience is wrong as well and we should program for them, letting the advertisers and their agencies catch up to us as they discover all of their long-held beliefs are wrong.
I mean, it makes as much sense as Trump winning when the polls said Clinton was a shoo-in.
Now we just have to hope our music research methodology isn’t flawed like opinion polls are …
Fred Jacobs says
It takes a great deal of discipline and courage to look at numbers, and question their validity, especially in a scenario where you’re seeing contrary evidence or behavior. Thanks for the thoughts, K.M.
Robin Solis says
Great post and very astute comments!
Fred Jacobs says
Many thanks, Robin.
Bill Conway says
Don’t forget the geography in sampling. Trump carried lots of small towns. It is harder to get a good representative sample of dozens of small towns over a large geographic area. Easier to get your sample closer to the population centers where Clinton had more support in states like WI, PA and MI. We have also lost many small town newspaper polls because most small town newspapers are gone. Locally owned stations with local news staffs are mostly gone too. So we couldn’t get reports of the strong emotions from all those towns either. We might have seen how attitudes were similar from town to town across large geographic sections of a state.
Fred Jacobs says
Clearly, this is a case where small towns – the local factor – made a difference. And you’re right, Bill, that polls often woefully miss these areas. The fact that rural turnout was so strong overshadowed whatever pollsters were seeing in big markets. Thanks for the comment, Bill.