Tuesday 30 June 2015

New report: Electoral integrity in Africa


Since its launch in 2012, the Electoral Integrity Project has studied electoral integrity around the world, examining questions such as why electoral integrity matters, why elections fail, and what can be done to address these problems.

A regional focus on Africa

EIP's research programme on Perceptions of Electoral Integrity (PEI) is an ongoing initiative that not only allows comparisons between countries but, over time, will also enable us to compare consecutive elections within countries and to identify regional trends. This report, which provides an in-depth analysis of recent elections in 28 African countries, is EIP's first to present findings on electoral integrity in a specific region. Africa is a continent of great diversity, yet its elections are under-studied in comparison with those of Europe or America.

The Hanns Seidel Foundation, a German non-profit organization promoting democracy, good governance and the rule of law across the African continent, commissioned the report, which was launched in Cape Town on 22 June 2015 by EIP's Ferran Martínez i Coma and Judge Johann Kriegler, a former judge of the South African Constitutional Court.

The Foundation has welcomed the report and the PEI index on which the findings are based. Noting that "it is currently the best rating tool available", the Foundation recognises that this is the first attempt to measure electoral integrity across the African continent, and hopes it will stimulate debate on the integrity of political contests across Africa.

During 2015, Zambia, Nigeria, Togo, Benin, Burundi and Burkina Faso, among others, have voted or are expected to do so. The integrity of these elections is crucial, not only for normative reasons but also for instrumental ones, such as countries' internal stability and citizens' satisfaction with their regimes. We are currently gathering data on those contests, and we hope this will be the first of many reports to come.

Purpose of the report

The purpose of this report is twofold: first, to present the African results of the Perceptions of Electoral Integrity expert surveys; and second, to analyse the important elements that shape the integrity of African elections. Much attention has been paid to polling day and the immediate administration of elections, but Ferran Martínez i Coma and Max Grömping show that many other elements of the electoral cycle are key to the integrity of elections.

Eight main findings:

  1. Threats to electoral integrity are more severe in Africa than in the rest of the world.
  2. The types of problems in Africa are similar to those found in the rest of the world. Put simply, there is no African electoral exceptionalism.
  3. The report highlights that elections can fail long before election day, so attention should be paid to electoral dynamics and institutional quality over the entire electoral cycle, not just election day.
  4. State resources for elections are important, but not decisive.
  5. Difficulties in regulating campaign finance extend across the continent.
  6. The vote count is consistently the highest rated part of the election cycle.
  7. Countries with good overall electoral integrity may still perform poorly in certain dimensions of the electoral cycle; conversely, low overall performers may excel in certain dimensions.
  8. Two country case studies of Malawi and Mozambique highlight that countries with similar levels of economic development can have vastly different outcomes of electoral integrity.

Download the report

Tuesday 9 June 2015

How seriously should we take the opinions of academics and experts when it comes to complicated issues like electoral integrity?

This blog post appeared on LSE's Democratic Audit UK on 9 June 2015.

By Ferran Martínez i Coma and Carolien van Ham

The result of the 2015 General Election came as a surprise to most people, particularly those in the academic and polling community. But what is the appropriate role for academics in an electoral setting, especially when it comes to complicated issues like the integrity of electoral contests? Ferran Martínez i Coma and Carolien van Ham seek to answer this question, and conclude that expert surveys are useful even when treating complex and multi-faceted issues, such as electoral integrity, and even when carried out in institutional settings as different as liberal democracies and electoral autocracies.

Senate House, University of London (Credit: Steve Cadman, CC BY SA 2.0)

For many years, social scientists have been using different databases that measure and classify complex, multidimensional and contested concepts such as democracy, freedom or corruption.

The utility of such data is evident not only for academics but also for the policy and advocacy communities. At a glance, such data summarise the state of democracy or corruption in a country and position it relative to others with a score or ranking. Yet such scores are not created in a vacuum. On the contrary, data on multi-dimensional and complex concepts such as democracy, freedom or corruption are normally generated by measuring multiple indicators and then aggregating them.

In order to measure complex concepts, we need to gather information about their different elements or components. To do so, researchers could rely on public opinion surveys or on information contained in the media and other secondary sources. However, some issues are so complex, or require so much specialist knowledge, that these sources are not up to the task. Consider, for example, the autonomy of the electoral management body. While the general public can probably form an overall assessment of its performance, and may know whether the body is formally dependent on the government, it is unlikely that the public knows the detailed implications when that autonomy is changed or violated.

An alternative approach is to measure complex and multidimensional concepts with expert surveys. There are good reasons for using them. First, experts are aware of the specificities of the matter at hand, since they have the knowledge and capacity to grasp the fine details. Second, experts may have access to information that citizens do not, potentially providing better data on covert practices such as corruption. Third, expert surveys cost considerably less than other polling alternatives. Finally, they have been widely used to study, to mention a few topics, corruption; democracy and its components; party and policy positioning; the power of prime ministers; evaluations of electoral systems; or policy constraints horizons.

However, expert surveys are not risk-free; there are several limitations. The first question has to do with the object of evaluation: do experts judge the same aspects of the concept under study? The second concerns the criteria that experts use when judging: do they rely on their expertise, or do they also inject personal views? Third, as expert surveys become more comprehensive and encompassing, drawing on ever more diverse experts around the world, it is fair to ask whether those experts share the same criteria when evaluating concepts or whether their judgments depend on the context in which the election takes place. Finally, in contrast to mass surveys, there is still no common methodology for constructing expert surveys, nor agreed technical standards and codes of good practice. This is highly relevant not only for research but also for policy-makers and practitioners using indices and rankings based on expert surveys.

Given the potential advantages and limitations of expert surveys, in our paper in the European Journal of Political Research (EJPR) we assess the validity of experts' judgments of the integrity of elections. We analyse three sources of bias that may arise in expert evaluations: the object, the experts and the context. These sources of bias apply to almost all expert surveys. First, the object of evaluation may be defined and perceived differently by different experts. Election integrity is a complex, multifaceted concept, and different experts may emphasise different aspects, ranging from media bias to election violence. Second, experts may differ, both in their level of expertise and in their degree of political neutrality. Third, contexts may differ; that is, expert evaluations may be context-bound, limiting the capacity of both concepts and data to 'travel'.

We test these three sources of bias and evaluate the validity of expert judgments using a new dataset on expert perceptions of election integrity, the Perceptions of Electoral Integrity (PEI) survey, which asks experts to evaluate 49 specific indicators of election integrity. These 49 variables measure 11 dimensions of electoral integrity over the electoral cycle. The survey encompasses the full electoral cycle, ranging from the pre-electoral period and the campaign to polling day and its aftermath, as outlined by the United Nations. The PEI data currently contain responses from over 800 experts on 66 parliamentary and presidential elections that took place in 2012 and 2013, covering countries as diverse as Angola, Kuwait, Malaysia and Norway.
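To make the aggregation concrete, the sketch below shows one simple way expert ratings could be rolled up from indicators to dimensions to an overall country score. It is an illustration only: the elections, indicator names and scores are invented, and it does not reproduce the project's actual weighting or imputation procedures.

import pandas as pd

# Toy long-format data: one row per expert rating of one indicator,
# with each indicator assigned to a dimension of the electoral cycle.
# All names and scores below are invented for illustration.
ratings = pd.DataFrame({
    "election":  ["Angola 2012"] * 4 + ["Norway 2013"] * 4,
    "expert_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "dimension": ["Electoral laws", "Vote count"] * 4,
    "indicator": ["laws_were_fair", "votes_counted_fairly"] * 4,
    "score":     [40, 55, 35, 60, 90, 95, 85, 92],  # rescaled to 0-100
})

# 1. Average the experts' scores for each indicator within an election.
indicator_means = (ratings
                   .groupby(["election", "dimension", "indicator"])["score"]
                   .mean())

# 2. Average the indicators within each dimension.
dimension_means = indicator_means.groupby(["election", "dimension"]).mean()

# 3. Average the dimensions into an overall integrity score per election.
overall_index = dimension_means.groupby("election").mean().round(1)

print(overall_index)

Averaging at each stage keeps every dimension equally weighted regardless of how many indicators it contains; other weighting choices are of course possible.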

An expert is defined as a political scientist (or social scientist in a related discipline) who has published on, or has otherwise demonstrated knowledge of, the electoral process in a particular country. By 'demonstrated knowledge' PEI means meeting one of the following criteria:

(1) membership of a relevant research group, professional network, or organised section of such a group;

(2) existing publications on electoral or other country-specific topics in books, academic journals, or conference papers;

(3) employment at a university or college as a researcher or professor.

For each election, the PEI survey identified and contacted around forty experts, seeking a balance between domestic and international experts. When the number of available domestic experts was limited, as was the case in some developing countries, PEI relied more on international experts.

Our research yields three main findings. First, considering the object of evaluation, we find that questions of a factual nature generate lower deviation in expert judgments than more evaluative questions. We also find evidence that questions that are more difficult to answer, either because the issues are technical or because the information might not be publicly available (e.g. voter registration, campaign finance), generate higher deviation in expert judgments.

Second, when we analyse the heterogeneity of the experts, we argue that they may differ both in their level of expertise and in their degree of neutrality. We find that having a high level of knowledge about the election (as indicated by the number of questions answered and by age) is not significant in predicting expert variance. However, having strong ideological preferences does appear to affect variance between experts. This result underscores the importance of selecting experts carefully and of taking their partisan background into account.

Third, we also study whether the context (the election experts assess and the country in which they live) affects their judgments. Among all the factors we include to capture context, almost none seems to affect the variation of expert judgments. The only element that seems to matter is ideological polarisation between experts, which increases the variability of their judgments.
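As a rough illustration of how such deviation can be measured, the sketch below (again with invented question names and scores) proxies expert disagreement by the standard deviation of ratings per question. In an analysis along the lines of the paper, this per-question deviation would then be related to question type, expert characteristics and context; the sketch stops at the descriptive step.

import pandas as pd

# Toy expert answers for one election: a factual item and a more
# evaluative item, each rated on a 1-5 scale. Names and scores are invented.
answers = pd.DataFrame({
    "question":      ["officials_impartial"] * 5 + ["media_balanced"] * 5,
    "question_type": ["factual"] * 5 + ["evaluative"] * 5,
    "score":         [4, 4, 5, 4, 4,
                      2, 5, 3, 1, 4],
})

# Disagreement is proxied by the standard deviation of scores per question:
# the larger the deviation, the less the experts agree.
disagreement = (answers
                .groupby(["question_type", "question"])["score"]
                .std()
                .rename("std_dev"))

print(disagreement)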

In conclusion, our overall results demonstrate that expert surveys are useful even when treating complex and multi-faceted issues, such as electoral integrity, and even when carried out in institutional settings as different as liberal democracies and electoral autocracies.

Our research has several implications, both for policy and for future research. First, our findings demonstrate the importance of testing the validity of expert surveys before using the data for substantive analyses, so that validity problems can be identified and dealt with. Second, our findings underscore the importance of selecting experts carefully and taking their partisan background into account when collecting expert survey data. Third, the widespread use by policy-makers and practitioners of indices based on expert survey data, such as indices of corruption and democracy, underscores the need to develop technical standards and codes of good practice for gathering data through expert surveys.


This post represents the views of the authors and not those of Democratic Audit or the LSE.


Dr Ferran Martínez i Coma is a Research Associate at the Electoral Integrity Project at the University of Sydney. Prior to this position, he was a Technical Adviser for elections for the General Direction of Interior Policy of the Ministry of Internal Affairs, in Madrid, Spain.

Carolien van Ham is a Lecturer in Politics at the University of New South Wales, Sydney, Australia.