I have held off on commenting, posting, or bloviating about what is already the most uttered word in radio in 2015 – Voltair. Until now.
Like talking politics at a dinner party, this topic – radio ratings on steroids – has many broadcasters tied up in knots. And for good reason.
Before all hell broke loose in the trades, conversations and speculation about Voltair were cautious, mysterious, and hush-hush. Throughout most of this past fall, it was very much a secret to many broadcasters. No one wanted to say or give away too much about a possible hole in the ratings fence.
But since the Telos presentation at the NAB this past spring in Las Vegas, Voltair has been outed. The attention these black boxes have garnered in the trades and at industry gatherings over the past few months has made it a topic that, while painful to many, is an unavoidable one. Voltair fascinates the industry for many good reasons, in much the same way people whisper about the black arts.
So Voltair is on everyone’s mind and generating a lot of press; let’s explore some of the main issues revolving around this big story and the technology behind it.
- We don’t know the true impact and full extent of Voltair on PPM ratings.
- We don’t know how many stations are using the Voltair technology, who they are, and in which markets they’re using it.
- We don’t know if Voltair is more advantageous to some formats, music types, or personalities, and harmful to others.
- We do know that if the radio industry and Nielsen don’t get to the truth about Voltair and address these and other issues, then the credibility and revenue foundation of the industry is at risk.
So, there’s a lot of noise, but there is also a lack of indisputable facts and a great deal of mystery still surrounding Voltair. And consequently, the industry needs to clarify the true facts about Voltair and its ability to juice up encoding. Quickly.
Those who have Voltair – and who are willing to talk about it – will mostly tell you they’ve seen a positive impact. But based on what? The ratings went up? As we know, lots of things make the ratings go up…or down.
How well does Voltair’s technology fill in the blanks in the encoding process, and how much of this listening improvement is due to the same factors that have moved the ratings since their inception? No matter how skilled the programmer, it has been something of a mystery since the dawn of ratings to determine the specific forces and conditions that drive them north or south.
From signal to execution to music flow to weather to competitive pressures to the sample frame to local market forces, the variables are many. Gathering ratings is an inexact science. In some ways, Voltair has overlaid itself on that confusion.
Now a new exposé in the well-respected FiveThirtyEight by Carl Bialik blows up this conversation and takes it outside of the radio industry trades and into the mainstream press. It suggests that the Smooth Jazz format was, in fact, a victim of faulty meter measurement.
Our company was clued into the Voltair technology last summer. As consultants, we weren’t just radio pundits having intellectual conversations about a mysterious new black box. We live and die with the ratings just like the radio people working inside stations. We are invested in the results, and much of the time, they determine our fate, too.
So over the fall and into the new year, I found myself engaged in veiled conversations with our clients in PPM markets. They often went something like this:
Me: I’m going to say a word to you that you’ll either immediately know what I’m talking about or we’re about to have an interesting conversation.
Client: OK, go ahead.
Me: Voltair.
Client: You mean the French philosopher?
Of course, other versions of this same conversation ended with the client laughing and saying, “We just ordered four boxes. Don’t tell anyone.”
Now the word is that 25-Seven Systems has sold 600 of these boxes that some say are like Viagra for PPM ratings. The buyers are broadcasters who understandably choose to keep their purchases to themselves – and for good reason. Part of the story behind Voltair and its impact on metered ratings is that no one knows – save for 25-Seven – who actually is running the box in their rack rooms.
Until Voltair came along, concerns about PPM centered on the efficacy and accuracy of metered ratings, specifically the device’s ability to recognize codes in different styles of music, voice, and audio in general. Critics have pointed to the inaccuracies of PPM, exacerbated by the assertion that the meter fails to capture all the audio a respondent is exposed to. The belief is that some types of music or even specific personalities encode better than others.
No methodology is able to achieve 100% accuracy. And clearly, many of the people whining the loudest about Nielsen and PPM have forgotten what it was like to live with diary measurement. In the UK, FiveThirtyEight reports that RAJAR – the Radio Joint Audience Research – rejected the PPM methodology in 2004, and elected to update the diary system by having respondents log their listening online.
Jerry Hill, RAJAR’s CEO, avers that “One of the prices you pay for granularity and data is that a lot of your listening ends up just disappearing.”
Perhaps, but if you’ve talked to anyone in Wichita or Pensacola lately, chances are they’ll tell you that the diary methodology – let’s just call it recalled listening – has its downsides, too. It’s not just “lost listening” that’s the problem – it’s usage that doesn’t always square with reality.
That’s because every ratings methodology is flawed in some way. It’s why they call them “estimates.”
Two true stories:
As the Research Director at WRIF in the late ‘70s, my charge was to extol the virtue of our Arbitron diary ratings when we had a good book. And to find “holes” and other unbelievable ratings stories when we had a lousy quarterly ratings performance. It was never hard to find absurd findings in the book like these:
“How can we have absolutely no Men 18-24 during middays on Sunday?”
Or “How is it possible that this AM daytimer became the #2 female 25-34 station in one book?”
Or “How can a Country station move from #17 to #1?”
I remember one year when W4 – which was Country at the time – had a major technical problem, forcing them off the air for several days. As a research guy, I couldn’t wait to see the impact of this catastrophic event on the ratings. During those days, Arbitron offered the AID system (Arbitron Information on Demand), allowing subscribers to request their desired demographics and time period – and then wait overnight for the results to upload for access the next morning.
When I saw in horror that W4 managed to post decent ratings during a period when they were off the air, it was a stark reminder that the diary methodology was far from a perfect reflection of listening reality. Many diary keepers simply wrote the station down even though it was not technically possible to listen.
I also remember the converse was often true. One year, WTWR (Tower 92) snared the rights to broadcast the soundtrack for the Detroit/Windsor “Freedom Festival” riverfront fireworks show. Their outspoken GM, Tony Salvadore, was convinced that with more than a million attendees and his station blaring through loudspeakers all over downtown Detroit, the station would have a 20-share. Alas, another flaw of the diary methodology surfaced, because people had to recall that it was, in fact, Tower 92 playing during “the rockets’ red glare.” Their ratings book that summer was nowhere near as spectacular as the fireworks themselves.
So when people question PPM, the sample size, meters attached to ceiling fans and strapped onto dogs, and other ratings horror stories, it’s really no different from the agony of sifting through diaries in Beltsville, Laurel, or Columbia, watching mom fill out seven diaries for the family, and the other inequities that occurred with regularity. Diaries have limitations, too.
Whether it’s exit polling or radio ratings, there is no perfect research. And PPM, like its predecessor methodologies, has its share of built-in imperfections. And in fact, most people in radio are willing to accept that reality.
But the intrinsic promise in any ratings contest is that every station has a fair chance to succeed – or fail. It is supposed to be a level playing field built upon a foundation of trust and credibility. How stations play the game (or game the system) has always been part of the arts, crafts, and voodoo of being a sharp programmer.
You could make the case that certain formats were more conducive to the diary methodology. Many Alternative programmers, managers, and owners long believed their core audience often failed to give them proper diary crediting.
But with PPM, there is that implicit belief that all formats, all announcers, and all sounds should have a fair chance to win or lose. The meter’s coding should be able to capture all dynamic ranges, from Delilah and Kenny G solos to Rush Limbaugh and Metallica.
That may not be the case, and that’s where Voltair has been allowed to shake up the system. Not all 600 stations that have purchased a box are experiencing higher PPM ratings. There are variables that go beyond settings and encoding in play here, too.
While 25-Seven proudly boasts that no one has asked for a refund, part of that may be due to the nature of radio owners and managers. Like gamblers in Vegas, they are always convinced that the next book, monthly, or weekly will be the one where they break through. Voltair ensures they have a chance to win, and that their fate will be dependent on their programming and marketing skills. Or at worst, the box allows them to keep up with their competitors who also invested in Voltair, making it a fairer fight.
There has been much disappointment with metered measurement almost from the beginning. The promise and hope of PPM were dashed by a number of factors – the cratering economy as well as the disruption from other media options that have mushroomed over the past several years. The planets have lined up – but sadly, almost all of them angry, difficult, and confusing. The timing of the PPM methodology and its higher price tag couldn’t have been worse.
But encoding problems that lead to decreases in persons using radio put the entire enterprise at risk. At a time when advertisers believe they’re seeing more accurate, accountable digital measurement, the broadcast radio industry deserves better.
So we’re in the middle of “The Year of Voltair” in radio, and it’s not been a good one as we reach halftime and begin the second half of the year. The final story and the true impact of this conundrum won’t be written in the rack room or on ratings spreadsheets. The true extent of the damage may be quietly whispered in ad agencies by buyers, planners, and account managers who are reading the vitriol, finger-pointing, and rampant speculation that is so common in radio whenever Arbitron, and now Nielsen, are discussed. The erosion of trust in the system is at the heart of the matter when it comes to radio and the way it is measured.
The real story behind Voltair is centered not on whether the box works, but on the credibility of the radio ratings themselves. The onus is on Nielsen to swiftly, accurately, and transparently study Telos’ technology – and the “Voltair Effect” – and make an honest determination of its impact and implications. The radio industry – from the Advisory Council to the MRC – would be best served by speaking in a calm but firm and unified voice to identify, solve, and address this problem, and then move on to radio’s real issues.
There is no bigger issue facing radio broadcasters than the credibility of its measurement. The radio industry needs and deserves answers and solutions to these questions…now.
“Doubt is an uncomfortable condition, but certainty is a ridiculous one.” – Voltaire
Jacobs Media has consistently walked the walk in the digital space, providing insights and guidance through its well-read national Techsurveys.
In 2008, jacapps was launched - a mobile apps company that has designed and built more than 1,000 apps for both the Apple and Android platforms. In 2013, the DASH Conference was created - a mashup of radio and automotive, designed to foster better understanding of the "connected car" and its impact.
Along with providing the creative and intellectual direction for the company, Fred consults many of Jacobs Media's commercial and public radio clients, in addition to media brands looking to thrive in the rapidly changing tech environment.