Sometimes, we programmers get a feeling – a pang, an itch – and we often can't explain it. I realize that in our highly calibrated, data-heavy, algorithmically engineered world, feelings sound like they're from the Renaissance Era.
Now, let me remind you – I’m a research guy, and I love when a new study comes rolling in or when Jacobs Media conducts one. In fact, I think of myself as something of a “data whisperer.” I have the ability to look at research – often from multiple markets – and tie it together. When I’m on my game, I can connect those numerical dots and often see what the data is trying to tell us – beyond the spreadsheet.
But I also love talking to listeners – better put, listening to them, usually in focus groups and one-on-one interviews. These days, they’re on Zoom. And if you spend enough time hearing their media habits, loves, and hates, patterns and pictures form. And that’s where those non-statistical feelings come from.
I am often asked how I figured out that Classic Rock was an underplayed but passionately desirable music trend that was bubbling under in the early ’80s. The fact is, there was no data that told me it could be a successful, sustainable format that would endure for more than 35 years. I felt it.
It was a combination of all those focus groups, talking to listeners at station events, answering the request lines, talking to other programmers, and observing the pop culture springing up around me. At that time, films like The Big Chill were killing it at the box office with their nostalgia-laden soundtracks, while on the other hand, MTV and “Hot Hits” CHR stations were pounding all the big hits of the day. You could feel the fragmentation.
These days, I’m feeling something else. I have no data to back up my notion – it’s not even a theory. And I may ruffle some feathers in the radio research community, a club I’m a member of.
Simply put, I’m seeing and hearing change in the wind when it comes to how consumers are listening to the radio, along with the growing legions of other sources.
We researchers love to categorize and pigeonhole listeners. She’s a Country fan. He’s a Classic Rocker. And for as long as I’ve been in radio, the Holy Grail measurement has been one’s “favorite station.” We have come to believe it speaks volumes about a listener and their habits. Even better, it’s easily trackable. If you’re gaining or losing ground – even by a percentage point or two – you’ll usually learn it in the form of measuring the preferred station – the one consumers listen to most.
In recent decades, we’ve referred to that station as the P1 because that’s how the ratings services – first Arbitron, and now Nielsen – “cut the data.” It’s the place a listener spends the most time each week. Of course, that measure varies. Some are heavy listeners while others are not. You are considered a P1 whether you spend hours and hours every week listening to “The Eagle” or just a handful of quarter-hours tuned in – as long as it’s the station you listened to most during a given time period.
In 2020, is P1 a valid measure of how consumers behave – and as importantly, how they think?
What if more and more listeners really don’t have a favorite station? Maybe they still listen to the radio, but punch around rather than having a clear preference, splitting their time with different services and platforms. Or they, in fact, spend more time listening to a single station, but the joy, passion, and loyalty are lacking.
I have a feeling – yes, that’s all it is – that I’m describing a growing list of media consumers – people who really cannot comfortably give you a clear choice. Try it yourself. Ask people you don’t know (so they aren’t aware of where you work) to name their favorite radio station.
I’m betting you’ll encounter a lot of this:
“Well, I really don’t have a favorite.”
They may hem and haw and tell you they punch around a lot in an effort to avoid commercials and songs that are beaten to death. But I’m betting you’ll run into lots of folks who will not readily give you a preference.
So, what’s going on when those research surveys are taken? After all, if you have a 500-person sample, most if not all of them gave you their radio station choice.
Or did they?
When interviewers encounter indecision on the favorite station question, how many are instructed to ask, “Well, if you had to choose a radio station, which one would it be?”
That’s not exactly an endorsement.
And in web surveys, are respondents given this type of option, along with that long list of local market AM and FMs:
“I don’t have a favorite radio station at this time.”
How many would gladly take that option if it were offered?
And then there’s this…
Most radio research qualifies respondents by asking potential participants if they’re broadcast radio listeners. Maybe the threshold is 30 or 60 minutes a day, or a minimum number of days in a week. In any case, most research studies avoid those who have little-to-nothing to do with AM/FM radio. Their rationale? Why talk to defectors unlikely to come back?
So, what’s the net effect? After all, don’t we just want to know what radio listeners think of the local stations in the area?
Of course we do. But the radio qualifier question masks or diminishes the effects and impact of satellite radio, streaming audio platforms, podcasts, and even talking books on FM radio.
And even when non-broadcast outlets show up, competitively ranking in the P1 and/or cume columns right next to local radio stations, the results are typically met with shrugs. After all, we don’t really compete against SiriusXM or Spotify, now do we? And why should we waste precious time and money researching those who are long gone?
These practices leave radio broadcasters with a false positive. When you’re only studying a closed group of local broadcast stations, strategies, tactics, and solutions always fall in the context of radio. How can we win the Country image over the Wolf? How can we ensure we’re getting credit for being the traffic station? How do we make sure our 30-minute commercial-free rides are outdoing our radio competitors?
I’m not privy to research in other arenas, but I suspect network and local television stations are only too aware of how Netflix, Disney+, and Amazon Prime Video are ranking. Automakers likely track Tesla and other upstarts, even though they make electric vehicles rather than internal combustion cars and trucks. So, why wouldn’t radio broadcasters want their fingers on the pulse of where “ears” are moving?
You can’t compete against services that you don’t measure. If you don’t track listener satisfaction – or the lack of it – for “other” audio sources, how can radio broadcasters truly understand the changing audio marketplace?
Perhaps there’s a “have your cake and eat it, too” solution. How difficult and/or expensive would it be to gather demographic information at the front end of a survey (unconventional, I know), followed by a question like this:
“Of all the audio sources available to you – including AM/FM radio, streaming audio channels, satellite radio, and podcasts – which ONE do you listen to most often in a typical week?”
The research could even capture second and third choices. And if the respondent doesn’t listen to AM or FM radio, they’re terminated, but included in the totals. In this way, they are counted without disturbing the other key questions in the study that may be more radio-centric.
All this reminds me how Public Radio stations were systematically ignored in the pre-PPM days. Their ratings simply weren’t published in diary-measured markets – at least in the physical books. You could see them only when you ran rankers on a computer. The net result? Out of sight, out of mind.
Most commercial radio broadcasters had no clue how the Public Radio station was performing in their markets. And over time, they didn’t care. A look at today’s ratings – PPM and diary – in many markets reveals many Public Radio stations aren’t just competitive – they’re often dominant, occupying the top three ranks in key demographic groups.
These days, programmers are conditioned to ignore how SiriusXM, Spotify, podcasters, and other digital players are performing in their local markets. We know they’re out there, we know people are listening. But they don’t show up in Nielsen books, and we typically mute their appearance in our research studies.
At our own peril.
Tomorrow, another of my “feelings” about how the old radio ratings norms are changing, perhaps producing results we may not even realize are occurring. – FJ