One of the leftover questions that is consuming researchers, statisticians, and pollsters is how just about everyone incorrectly predicted this election. In some cases, by astonishingly large margins.
The most inaccurate of them all was Huffington Post, which on Tuesday afternoon announced that Hillary Clinton had a 98% chance of being elected president. While just about every pollster foresaw a Clinton win, HuffPost’s prognostication was the most wildly off-base. Instead of Hillary Clinton finally breaking that glass ceiling, it was Donald Trump meeting President Obama at the White House yesterday.
In an extensive mea culpa, HuffPost's Senior Polling Editor, Natalie Jackson, courageously took the bullet. In “Why HuffPost’s Presidential Forecast Didn’t See A Donald Trump Win Coming,” she cites a number of factors for her bad call.
She notes the main culprit was an over-reliance on polls to the exclusion of just about everything else. Aggregating many polls together – when they’re similarly flawed – doesn’t make forecasting more accurate. But that’s not where it stopped.
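That statistical point can be sketched with a toy simulation (all numbers invented for illustration, not real polling data): when every poll shares the same systematic bias – say, a skewed turnout model – averaging more of them shrinks the random noise but leaves the shared bias fully intact.

```python
import random

random.seed(42)

TRUE_SUPPORT = 0.46  # hypothetical true vote share for a candidate
SHARED_BIAS = 0.03   # systematic error common to every poll (e.g., a bad turnout model)
NOISE = 0.02         # independent sampling error, different for each poll

def run_poll():
    # Every poll inherits the same shared bias plus its own random noise.
    return TRUE_SUPPORT + SHARED_BIAS + random.gauss(0, NOISE)

for n in (1, 10, 100, 1000):
    avg = sum(run_poll() for _ in range(n)) / n
    print(f"average of {n:4d} polls: {avg:.3f}  (truth: {TRUE_SUPPORT})")
```

No matter how many polls go into the average, it converges on 0.49 – the truth plus the shared bias – never on 0.46. Aggregation only cancels the errors that are independent from poll to poll.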
Most pollsters don’t take external factors into account – the difficulty of a two-term president’s party winning four more years was one. And perhaps so were the biases of the pollsters themselves. Remember how wrong Mitt Romney's pollsters were about their candidate’s chances of unseating Barack Obama in 2012? Sometimes polls can be tinted, tainted, and obscured by the hopes and desires of the pollsters themselves, the networks that discuss them, and the candidates' teams wishing for an outcome.
If you’ve been involved in media research at the radio station level in your career, you know how this can happen. A room full of smart strategists, great programmers, savvy managers, and respected researchers reviewing 200 slides of data can still make bad calls due to wishful thinking. How many times have you heard someone say (or maybe you've said it yourself) that even though the numbers look bad, we have a good feeling about the format, a personality, a contest, or anything else? And so you end up acting against the cold, hard message the data is trying to deliver.
We see the numbers we want to see. We believe in the truth we want to believe in.
Another key factor is passion, and that raw emotion may explain Trump’s victory more than anything else. Pollsters mechanically score every likely voter’s choice the same – it either goes in the Clinton column or the Trump column.
But as we learned in this campaign, Trump voters were far more zealous, fired up, and passionate. Far more passionate.
Think of them as Super P1s. Nielsen counts them all the same. But a P1 can be someone who listens to a station for just a handful of quarter-hours throughout the week. Or it can be someone who tunes in to a favorite station six hours a day. A P1 is a P1. Except when they’re not.
Trump voters were more motivated and activated than Clinton voters throughout the many months of this campaign. And so it is with radio stations and personalities. Some audiences are passive and even ambivalent. They listen out of habit, and can easily be tempted by a new station, a new format, or even a contest. But other fans are intensely loyal, passionate, and fervent. They listen longer, they tell their friends, they social share, they support the station, and they show up at events and promotions.
I missed the power of the Trump passion factor, too. In a blog post this past fall, “How Big Is Your Audience?” I suggested that Trump’s huge event turnouts were tantamount to him paying too much attention to the phones and uber P1s to the exclusion of the cume, or the larger pool of voters. But in fact, those massive, energized crowds were indicative of the higher passion levels the Clinton camp couldn’t muster. A strong “ground game,” millions of dollars more for TV and radio spots, and better social media outreach mean little if the candidate isn’t connecting with voters.
Ad Age’s Jack Neff refers to it as an “enthusiasm gap” that helps explain why just about every pollster and pundit missed the mark this week.
It’s the same X-factor buyers and marketers often miss in the radio business as well. When looking at two stations with a .5 average rating, what is the difference? Listeners are listeners. Voters are voters.
Until they’re not. One station could be a music machine that’s constantly played in the background. And the other can have great personalities the audience embraces, motivating them to post about the station socially, talk about it in the office, and show up for station events. Which one would YOU buy?
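The arithmetic behind that “same .5 rating” is easy to sketch (market size, cume, and listening hours below are invented for illustration): a small, devoted audience and a large, casual one can produce identical average-quarter-hour numbers.

```python
MARKET_POP = 200_000   # hypothetical metro population
BROADCAST_WEEK = 126   # survey week in hours: 6am-midnight x 7 days

def aqh_rating(cume, hours_per_listener):
    # Average-quarter-hour persons = total listening hours / hours in the week;
    # the rating expresses that as a percentage of the market population.
    aqh_persons = cume * hours_per_listener / BROADCAST_WEEK
    return 100 * aqh_persons / MARKET_POP

# Station A: small but passionate audience, heavy time spent listening
# Station B: big, casual, background audience
print(aqh_rating(cume=10_000, hours_per_listener=12.6))  # -> 0.5
print(aqh_rating(cume=60_000, hours_per_listener=2.1))   # -> 0.5
```

Both stations post a 0.5, yet one has 10,000 fans listening over 12 hours a week and the other has 60,000 people barely tuning in. The rating line on the buyer’s spreadsheet can’t tell them apart.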
It's how elections are won…and lost. Especially this one.
Pollsters will spend months and maybe years re-examining their samples, their weighting, their algorithms, their averaging, and their formulas. But until they examine their own biases and the pure passion of voters, they’ll continue to get it wrong.
And so will we.