That principle applies to pre-election opinion polls too. Election canvassing is a two-way communication between candidates and voters. The media's job is to report on, contextualise and even give its opinions on this communication. But when the media starts telling the voter who's going to win how many seats, it starts interfering in the communication between candidates and voters. Given that these polls come with a claim of scientific methodology and empirical truth, they are not the same as saying, "I think this party will win." An opinion poll implies that the media has already asked the people who they will vote for, and reported it back to the people even before the people actually vote.
Opinion polls are so clearly a way of influencing voter behaviour that it is surprising they have been allowed for so long. They are particularly insidious in a multi-party democracy with a first-past-the-post electoral system. In such a system, there may be as many as 10-20 candidates in the fray in a constituency. With so many options, voters don't like to waste their votes. Typically, they want to know who is seriously in the fray and whose candidature can be ignored. Such election discourse is known as "hawa", literally the "wind" blowing in a party's favour. Through rallies, speeches, posters and the media, political parties try to create the perception that they are a serious option, and that you will not be wasting your vote if you choose them.
This is why we have "paid news" at election time, a phenomenon that has been declared a malpractice by the Election Commission. In recent years, parties and candidates have been willing to pay for advertisements that are published in the news columns, suggesting that they are in the fray and are likely to win. That is how important it is to create the illusion of victory.
Enter opinion polls. Hard data. Not your teashop chatter, not the speculation of the armchair political analyst, not the tall claims of partisans. Despite the scepticism opinion polls attract for being repeatedly wrong, these surveys suggest broad trends to voters, and thus influence the hawa.
Since there are no studies showing the impact of opinion polls on voter behaviour, one can only speculate about the results they bring. One could argue that if the opinion polls for the recent Delhi assembly election hadn't written off the Aam Aadmi Party, it could have won even more votes and seats than it did. But since the opinion polls largely suggested the Bharatiya Janata Party was coming to power with a clear majority, perhaps many voters thought they would be wasting their vote on the AAP.
Hawa for Hard Cash
You could ask: if opinion polls frequently get it wrong, how can they be accused of influencing voter behaviour? It turns out that they are only too happy to deliberately manipulate their data to do just that. Seven reporters of a news channel, News Express, went to 11 opinion poll companies, posing as officials of political parties, and asked if they would put out manipulated data to show their party doing better. Apart from CVoter, none of the other ten are big names in election forecasting. Two large firms, AC Nielsen and CSDS-Lokniti, refused to entertain the undercover reporters, saying they were booked for the election season.
In tapes that the channel released on February 25, CVoter managing director Yashwant Deshmukh is seen and heard saying that minor tweaking of results would be possible, though he couldn't completely invent false results. "Rafu" is possible, he says, not "paiband": we can darn the results, but not put a patch on them. The reference is to manipulating results a bit by increasing the margin of error from 3% to 5%. In many constituencies, however, the margin of victory can be less than 1%.
Deshmukh defended himself on Twitter, but did not deny or explain the "rafu" vs "paiband" statement. Though Deshmukh claims he refused the offer, the channel has not shown that part of his interaction with the reporters.
Let us just look at CVoter's performance. CVoter predicted a BJP win in the Delhi assembly elections in 2008: 39 seats for the BJP and 30 for the Congress. The result was 23 for the BJP and 43 for the Congress. In the 2009 general elections, it predicted 189 seats for the BJP-led National Democratic Alliance and 195 seats for the Congress-led United Progressive Alliance. The result was 159 and 262. In the previous general election, in 2004, it had predicted 276 seats for the NDA, but the actual result was 181. The UPA won 218 seats, as against CVoter's forecast of 173. This was its exit poll forecast, based on what voters said after having cast their votes. It gets worse: CVoter has predicted two different results for two different channels.
The India Today Group put out a statement that it was suspending CVoter's services, but the question is: why does a company that gets it wrong all the time get commissioned again and again? In any other industry, repeated failure would mean you go out of business. In election forecasting, your business only seems to grow. The only conclusion you can draw is that news channels don't really care about getting it right. Either they care only about the television ratings that broadcasting opinion polls bring, or worse, they are also in on the game of creating hawa for cash.
Even the best falter
This is not to single out CVoter. It is just an example of a commonplace practice. The issue here is not just mala fide manipulation. Even an independent, unbiased election survey company could get it more wrong than right. That is because there is no scientific way of computing how vote shares get translated into seats, unless surveys were held in every constituency. That is why these surveys are often less wrong about vote shares than about the number of seats a party could win. In an electoral system where a small difference in vote shares can mean a huge difference in seats, converting the former into the latter is mere guesswork. That is why CSDS-Lokniti stopped predicting seats long ago: it gives the vote shares to CNN-IBN, which gets another statistician to do the conversion.
Consider the surveys conducted by AC Nielsen in January and February. In one month, the NDA's vote share remained at 31% while the UPA's share went up by one percentage point. Despite this, the projected seats changed drastically: the NDA went up from 226 to 236, whereas the UPA fell from 101 to 92! Will AC Nielsen at least explain how that happened?
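This nonlinearity is easy to demonstrate with a toy simulation (a sketch with made-up parameters, not a model of any real survey or election): in a two-party first-past-the-post contest, a small shift in national vote share flips every constituency sitting near the 50% line, so seats swing far more sharply than votes.

```python
import random

def simulate_seats(national_share, n_seats=543, spread=0.08, seed=42):
    """Toy two-party first-past-the-post simulation.

    Each constituency gives party A its national vote share plus some
    local variation; party A wins a seat wherever its local share
    crosses 50%. All parameters here are illustrative, not real data.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_seats):
        local_share = national_share + rng.gauss(0, spread)
        if local_share > 0.5:
            wins += 1
    return wins

# A two-point swing in national vote share moves far more than
# two per cent of the seats:
for share in (0.49, 0.50, 0.51):
    print(f"vote share {share:.0%} -> {simulate_seats(share)} seats")
```

Running this shows dozens of seats changing hands on a two-point swing, which is why a survey that measures vote shares reasonably well can still be wildly wrong about seats.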
In the 2012 Uttar Pradesh Vidhan Sabha elections, CSDS-Lokniti predicted a 10 percentage point difference between the vote shares of the Mayawati-led Bahujan Samaj Party and the Mulayam Singh Yadav-led Samajwadi Party. It predicted that 34% of voters would vote for the Samajwadi Party and 24% for the BSP. It claimed that the BSP's Dalit voter was shifting to the SP – a bit like saying Muslims were shifting to the BJP. The actual difference was 3.2%. While CNN-IBN went to town claiming it got it right when it predicted an SP majority, what explains getting the underlying numbers so far off the mark?
Yogendra Yadav of CSDS-Lokniti defended that survey. In his new avatar as a politician of the Aam Aadmi Party, he told Scroll.in that psephology in India hasn't succeeded. There is no money in research and development, he said, and the media is too interested in forecasting seats. The real value of psephology is in "post-poll" surveys, he said: surveys conducted after the result is out, so that voters speak more freely about how they voted, and help us understand India's electoral decisions. Unfortunately, the media by that time has moved on to reporting about government formation, ministries, controversies and other breaking news.
Surveying the surveyors
This writer has met college students in Rajasthan who told him that they work for some of these reputed survey companies and simply fill up the survey forms themselves. It is too much hard work to go from house to house, in villages and remote areas, asking countless questions of sometimes reluctant voters. Since these college students have a good idea of the local hawa, they reflect it in the survey forms and predict vote shares close to the final tally. This is easier in two-party states where governments alternate between two parties. This is also partly why the BJP often looks better in these polls: the polls have an urban bias, and the BJP does better in cities than in rural areas.
These limitations are hard to see for the rest of us because the survey companies, and the media channels that hire them, rarely put out the raw data. The industry has no regulation. There is no Indian version of the British Polling Council, and nobody is in a hurry to establish one. The channels and the survey company that gets it least wrong claim victory. When they get it wildly wrong, they just shut up and forget about it. The Editors Guild of India, the Press Council and the self-regulatory associations of the news channels never seem to worry about this misleading of public opinion. Have you ever seen a news channel, print publication or a survey company apologise for getting it wrong? The only example is that of 'Outlook' magazine, which decided after the 2004 elections that it was not going to commission opinion polls at all.
Many questions remain: How much of a sample size is big enough for 1.2 billion people? Was this sample size representative enough? Did the Brahmin surveyor really go to the Dalit basti or the Muslim mohalla? Did the respondents really speak the truth? Who will ask these questions, and who will answer them? Besides, no one seems to ask why we need election forecasting at all. Would democracy be poorer without election forecasting?
It is all very well to blame the polling companies, but the media organisations creating the demand to which these mushrooming polling companies cater are equally to blame. They can't take credit when they get it right, pass the buck to the polling company when they get it wrong, and still hire the same polling company again for the next election.
It needs to be asked of our editors: Does election forecasting even serve a journalistic purpose? How do opinion polls further the exercise of estimating the truth? Why do editors need the aid of opinion polls to know political trends in an election season? What happened to having an ear to the ground, travelling around the country to listen to the people? It is not as if the media thought the NDA was returning to power in 2004 and the opinion polls effected a course correction. On the contrary, old-fashioned journalists who travel the ground to get a sense of the hawa feel belittled and silly when their impression is contradicted by surveys that claim to have asked the voter's choice in every corner of the state or country.
Do we try to estimate through any scientific or empirical means who is going to win a sports tournament? We wait for the tournament to play out, match by match. We bet on it, we hope and pray. We analyse the strengths, weaknesses and opportunities of each player. We look at the weather and the health of the star players. These are the methods we should follow during an election. We should leave the seat results to the Election Commission.
Regulation if not banning
It is unlikely that the media will allow the political class to ban election surveys. They will claim that freedom of speech and expression must be protected, even if it is actually television ratings that are being safeguarded. The television ratings must be good, because these surveys serve the viewer's desire to know the future. If the ratings were not good, election surveys would not have become a weekly affair, with news anchors telling us about the fortunes of parties rising and falling by percentage points as though they were speaking of the stock index.
In 2008, the Election Commission of India banned the broadcast of "exit polls" (conducted on polling day itself, of voters who have just cast their votes) between different phases of the same election. There were only token protests from the media. Nobody went to the Supreme Court crying censorship. This suggests that even the granddaddies of election forecasting tacitly agree that opinion polls affect voter behaviour. When the Election Commission recently suggested that pre-election opinion polls be prohibited, it wasn't saying so for the first time. It first considered the idea nine years ago. This time, when it asked political parties, 14 of them agreed. Only one, the BJP, wants the polls to stay. Most parties seem to agree that at least after the election dates are announced – usually six weeks before voting – no opinion polls should be published.
Even if election forecasting through opinion polls is not to be banned, it definitely needs to be regulated. Here are some suggestions to be considered:
Those predicting the number of seats should be forced to state which specific seats they think are going to which party.
Revealing detailed methodology should be mandatory, and not just in one corner of the polling company's website.
There should be a minimum ratio of sample size to population that should be adhered to.
There needs to be an independent body whose accreditation should be mandatory for any polling company to be hired by a media house.
The independent body should entertain complaints, publish annual reports showing how widely the companies got it right or wrong, perhaps give them ratings.
It should also investigate whether the survey is actually taking place on the ground.
Polling firms that get their results wrong by 30% after claiming an error margin of 3% in an election where 1% can make all the difference, should be blacklisted.
The excuse that psephology is new in India does not hold water anymore. It's been around since 1980, and there's only so long that a few number-crunchers should be allowed to use Indian democracy as a guinea pig.