Service Delivery Indicators (SDI) – a new Africa-wide initiative that collects actionable data on service delivery in schools and health facilities – has been launched by the World Bank in partnership with the African Economic Research Consortium and the African Development Bank. The SDI data allow decision makers to assess the quality and performance of education and health services and to track progress over time, and enable citizens to hold governments accountable for public spending. No other set of indicators is available for measuring service delivery performance and quality at frontline schools and health facilities from the citizens’ perspective.

What are the Service Delivery Indicators?

The Service Delivery Indicators are a set of 20 indicators that examine the effort and ability of staff and the availability of key inputs and resources that contribute to a functioning school or health facility. The specific indicators are as follows:

What providers know (Provider Ability)

  Education:
  • Minimum knowledge: test scores in English, mathematics, and pedagogy

  Health:
  • Diagnostic accuracy
  • Adherence to clinical guidelines
  • Management of maternal and neonatal complications

What providers do (Provider Effort)

  Education:
  • Absence from school
  • Absence from classroom
  • Time spent teaching

  Health:
  • Caseload
  • Absence from facility

What providers have to work with (Availability of Resources)

  Education:
  • Students per textbook
  • Equipment availability
  • Infrastructure availability

  Health:
  • Drug availability
  • Equipment availability
  • Infrastructure availability

What is the Service Delivery Indicators Initiative?

The Initiative is a unique ten-year program; the World Bank is the implementing partner for the first five years.

Data Collection

The SDI surveys are conducted at schools and health facilities. The surveys measure the performance and quality of education and health services using cutting-edge methodologies and a high degree of quality assurance. The surveys generate nationally representative indicators, disaggregated by rural and urban location and by public and private provider type.

Started in 2012, SDI tracks service delivery in education and health across Africa. Kenya was the first SDI country, following the pilot surveys in Tanzania and Senegal. Surveys were conducted in Mozambique, Nigeria, Togo and Uganda in 2013. More countries will follow in 2014. The findings are available to government officials, donors, civil society and other stakeholders to track service delivery quality and performance.

The two other components of the SDI initiative are (i) building stakeholders’ capacity for analysis and (ii) creative communication to disseminate the data and findings.

Capacity Building

The African Economic Research Consortium's (AERC) mandate is to strengthen national organizations in research and policy in more than 30 African countries. Under SDI, the AERC will implement training courses targeting mid-level and senior analysts and researchers. Further, the AERC will convene high-level policy seminars in each SDI country on relevant policy issues. Participants will include analysts, policymakers, parliamentarians, representatives from civil society and other key stakeholders.

Creative Communication

The SDI initiative will share and distribute the findings through a variety of methods. In addition to being posted on the World Bank website (www.worldbank.org/SDI) and the SDI website (www.SDIndicators.org), findings will be disseminated through the media and directly to government officials, donors, civil society, and the private sector.

The SDI surveys were pilot-tested in Tanzania and Senegal in 2010. Building on the pilots, the first SDI survey was conducted in Kenya in 2012. In 2013, SDI surveys were completed in Uganda, Nigeria, and Togo. Plans for 2014 are to conduct the SDI survey in six countries: Cote d’Ivoire, Democratic Republic of Congo, Mali, Mozambique, Niger, and South Sudan.

[Partner logos: African Economic Research Consortium, African Development Bank, Hewlett Foundation, World Bank]

A Steering Committee with representation from a broad array of stakeholders and supporters provides guidance for the execution of the initiative. It includes the following members:

  • RITVA REINIKKA, World Bank
  • MWANGI S. KIMENYI, The Brookings Institution (Africa Growth Initiative)
  • LEMMA SENBET, African Economic Research Consortium
  • RUTH LEVINE, Hewlett Foundation
  • JAKOB SVENSSON, Institute for International Economic Studies, Stockholm University
  • NATHALIE DELAPALME, Mo Ibrahim Foundation
  • ORY OKOLLOH, Omidyar Network Africa
  • LEONARD WANTCHEKON, Institute for Empirical Research in Political Economy, Princeton University
  • SHANTAYANAN DEVARAJAN, World Bank
  • MTHULI NCUBE, African Development Bank
  • AGNES SOUCAT, African Development Bank

The Technical Team is made up of professionals from education and health research institutions in Africa and elsewhere who are at the cutting edge of service delivery research. They focus in particular on the quality and technical integrity of the SDI surveys and analysis.

SDI Technical Panel

  • JAKOB SVENSSON, Institute for International Economic Studies, Stockholm University
  • DEON FILMER, World Bank
  • JAMES HABYARIMANA, Georgetown Public Policy Institute, Georgetown University
  • JISHNU DAS, World Bank
  • OTTAR MAESTAD, Chr. Michelsen Institute
  • TESSA BOLD, Goethe University

Team SDI

  • GAYLE MARTIN (Program Leader), World Bank
  • CHRISTOPHE ROCKMORE, World Bank
  • OBERT PIMHIDZAI, World Bank
  • WALY WANE, World Bank

Contents

  • Overview
  • Sources of data
  • Implementation and funding
  • Where has SDI been conducted?
  • Perspectives on/from SDI
  • Design/Methodology

Overview

What is the Service Delivery Indicator initiative?

The SDI initiative is a partnership of the World Bank, the African Economic Research Consortium, and the African Development Bank. The World Bank is the implementing partner for the first five years of this ten-year program and implements the SDI surveys jointly with host country governments. In addition to data collection, the two other components of the SDI initiative are (i) building stakeholders’ capacity for analysis and (ii) creative communication to disseminate the data and findings.

Why SDI?

Addressing the unfinished quality agenda

Access to schools and clinics has increased in the majority of African countries, but many children who leave school are unable to read or do basic arithmetic, and the quality of care in clinics remains uneven. Increased spending and expansion in access to education and health services have not been matched with equivalent improvements in human development outcomes, suggesting an unfinished quality agenda.

Quality is critically dependent on what service providers know and what they do

Based on findings from the World Bank’s 2004 World Development Report Making Services Work for Poor People, we know that the key characteristics of provider behavior are knowledge, skills and effort. While knowledge and skills are determined by levels of education and ability to perform in the classroom and health facility, effort is highly discretionary: determining how much time to spend with a patient or student is a judgment. This complicates relationships of accountability for education and health services.

Accountability for public resources

Developing country governments allocate roughly a third of their recurrent budgets to education and health. Demands for accountability and for the efficient use of public resources—from citizens and taxpayers in developed or developing countries alike—are gaining in prominence, partly because of the global economic situation.

You can’t hold service providers accountable for what you don’t measure

Without consistent and accurate information on the quality of services, it is difficult for citizens or politicians to assess how service providers are performing, to work towards corrective action, and ultimately to bring about improvements in service delivery.

The SDI surveys and data differ from other available studies in a few key ways:

  • The SDI surveys use robust, cutting-edge data collection methods.
  • The survey instrument is nimble, allowing for relatively rapid fieldwork and data analysis and making it more useful for decision-making and policy discussions.
  • The surveys focus on the links between expenditure and human development outcomes.
  • The indicators are standardized, allowing comparison between countries, across subnational boundaries, and over time.
  • The surveys are repeated every two years.


Sources of data

Doesn’t this information already exist?

Although several existing surveys and data collection activities capture aspects of education and health service delivery, the SDI data differ from them in several ways. SDI is designed to provide the link between investment and performance outcomes; it includes both education and health; it is more frequent (surveys are conducted every two years rather than on five- or ten-year cycles); and the SDI survey instrument is standardized so the data can be compared within countries as well as across them.

Finally, SDI does not rely on data that are self-reported by educators and health workers at the community level, which can be inconsistent.

SDI is committed to providing consistent and accurate information on providers’ knowledge and performance in order to assess the quality and accountability of service delivery.

How is SDI different from DHS and other surveys funded by donors?

SDI is more than just a survey; it has three distinct components: the service delivery indicators data, capacity building of local organizations, and communication of the results.

With its partner the African Economic Research Consortium, the SDI Initiative builds the capacity of local organizations in research and policy analysis. Through its communication component, SDI ensures the data and findings are available to all interested parties to inform future decisions in education and health service delivery.

SDI also differs from other surveys in that it is more frequent (conducted every two years); it covers both the education and health sectors; and its methodology is standardized so results can be used and compared across countries.

What are examples of Service Delivery Indicators?

The indicators are designed to measure both the efforts and abilities of health providers and educators as well as the education and health facilities’ resources.

There are established acceptable standards that schools and clinics must meet for service delivery, and the SDI survey monitors whether a country’s facilities and professional staff are regularly meeting those standards.

For example, in education, the indicators record how much time a teacher spends teaching in the classroom or what the student-to-teacher ratio is. In health, indicators capture a health provider’s adherence to clinical guidelines or the caseload per clinician. In terms of resources, the survey records whether the appropriate books and materials are available in schools, and medicine and equipment in health clinics.
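As a purely illustrative sketch of how such indicators are computed (the records and field names below are invented, not the actual SDI instrument), the two absence indicators could be derived from unannounced-visit records roughly as follows:

    # Hypothetical unannounced-visit records; field names are invented
    # for illustration and do not reflect the actual SDI questionnaire.
    teachers = [
        {"on_premises": True,  "in_classroom": True},
        {"on_premises": True,  "in_classroom": False},
        {"on_premises": False, "in_classroom": False},
    ]

    # Share of teachers not found at school during the unannounced visit.
    absence_from_school = sum(not t["on_premises"] for t in teachers) / len(teachers)

    # One plausible operationalization: teachers not in a classroom,
    # whether absent from school or present but not teaching.
    absence_from_classroom = sum(not t["in_classroom"] for t in teachers) / len(teachers)

    print(f"Absence from school:    {absence_from_school:.0%}")     # 33%
    print(f"Absence from classroom: {absence_from_classroom:.0%}")  # 67%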


Implementation and funding

Who is funding this project? Why is the World Bank funding this?

Funding is being provided by the World Bank. The World Bank saw this need and made the commitment to design and initially fund the SDI in six countries.

The World Bank was joined by the African Economic Research Consortium and the African Development Bank to implement the SDI. Contributions also have been made by several private foundations with in-kind technical leadership from universities and international organizations.

The World Bank has made a major financial commitment to SDI and plans to continue supporting it with technical assistance and grants. The World Bank saw a gap in frequent, frontline service delivery indicators and developed SDI to address it. With its partners, it has conducted the SDI survey in six African countries, and there are plans to expand to other countries.

The World Bank welcomes additional support from other donors and organizations, public and private, to allow SDI to expand beyond these initial countries.

The Service Delivery Indicators help decision makers in government and the private sector, as well as donors and their programs, track progress in performance and returns on investment over time. They also allow citizens to hold their governments accountable for public spending.

Who is implementing the project?

The World Bank has joined with the African Development Bank and the African Economic Research Consortium to develop and implement the Service Delivery Indicators. The World Bank is the initial implementing partner, with the AERC handling country-level implementation, which includes gathering the data and providing technical leadership to local entities on conducting policy-relevant economic analysis.

SDI has a Steering Committee, made up of representatives from international think tanks and international organizations, that provides advice and guidance.

SDI also has a Technical Panel of experts from leading education and health research institutions; it focuses on the quality and integrity of the SDI surveys and analysis.


Where has SDI been conducted?

The Service Delivery Indicators survey has been conducted in six countries: Uganda, Tanzania, Kenya, Senegal, Togo and Nigeria.

The Kenya Service Delivery Indicators will be launched on July 12, 2013, in Nairobi, with the government of Kenya’s Ministers of Health and Education in attendance, along with representatives from civil society, the World Bank, and other donor organizations.

Later in 2013, the SDI findings will be released in Nigeria, Tanzania, Senegal, and Uganda.


Perspectives on/from SDI

Could the Service Delivery Indicators be interpreted as a judgment on a country’s commitment to education and health?

SDI is an evidence-based tool that helps policymakers, implementers, health providers, and educators measure the performance of their education and health delivery services. It is not a final grade or a judgment on people or services, but a status report.

It reflects where services stand in a given time period and whether there has been progress or regression.

One of the values of SDI is its frequency. Because it is conducted every two years, it allows time for adjustments in the service delivery system and then captures the effects of those changes.

Many African countries spend roughly a third of their budgets on education and health. While access to public education and health facilities has increased in many African countries, the quality of these services remains uneven.

The quality of education and health care services depends heavily on the knowledge and performance of teachers and health providers.

Consistent and accurate information on the providers’ knowledge and performance is needed to assess the quality and accountability of these services.

How do you respond to education and health officials that see this as critical of the work performed by many dedicated professionals in their sectors?

Educators and health professionals deserve tremendous credit for their work in often less-than-ideal situations. In fact, SDI provides evidence for many of the issues raised by unions, civil society, and private individuals: the demands of more students and patients, combined with less available funding, are making it more difficult for professionals to do their jobs.

SDI shows where there is progress, where there is regression, and possible ways to address both. It is not a final grade; SDI is a long-term look at where changes can be made to improve quality and performance in service delivery.

Is it fair to only use quantitative research to measure the indicators?

The methodology for the service delivery indicators is quantitative; however, we take into consideration other issues when analyzing the data. For example, any political or social conditions or even natural disasters that may cause absenteeism or delays in delivery are documented and are incorporated into the overall analysis.


Design/Methodology

What are the analytical underpinnings of SDI?

The Service Delivery Indicators take as their starting point the literature on how to boost education and health outcomes in developing countries. While resources alone appear to have a limited impact on the quality of education and health in developing countries, it is possible that inputs are complementary to changes in incentives, so coupling improvements in both may have large and significant impacts.(i) The fact that budgets have not kept pace with enrollment, leading to large student-teacher ratios, overstretched physical infrastructure, an insufficient number of textbooks, and so on, is problematic.(ii) However, simply increasing the level of resources might not address the quality deficit in education and health without also taking providers’ incentives into account.(iii)

A production function for service delivery is a key theoretical underpinning of the service delivery indicators. Service delivery is thought of as a function of key inputs, service provider ability, and service provider effort. Service delivery outcomes are determined by the relationships of accountability between policymakers, service providers, and citizens. In turn, health and education outcomes result from the interaction between the various actors in this multi-step service delivery system, and depend on the characteristics and behavior of individuals and households.
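Schematically, this production function can be written as follows (an illustrative rendering; the notation is ours, not taken from the SDI documentation):

    quality(j) = f( inputs(j), ability(j), effort(j) )

where, for a facility j, inputs are the key resources available (textbooks, drugs, equipment, infrastructure), ability is what providers know, and effort is what providers do. The three indicator groups listed above map one-to-one onto these three arguments.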

How were the indicators chosen?

SDI proposes three types of indicators: (i) provider competence and knowledge; (ii) proxies for effort, broadly defined; and (iii) availability of key infrastructure and inputs.(iv) In addition, we wanted to select indicators that are (i) quantitative (to avoid problems of perception biases that limit both cross-country and longitudinal comparisons)(v); (ii) ordinal in nature (to allow within and cross-country comparisons); (iii) robust (in the sense that the methodology used to construct the indicators can be verified and replicated); (iv) actionable; and (v) cost effective.

How does SDI link with other surveys?

  • Education.  The Southern and Eastern African Consortium for Monitoring Educational Quality (SACMEQ) focuses primarily on education outcomes. The System Assessment and Benchmarking for Education Results (SABER) initiative of the World Bank focuses mainly on policy and institutional environment. The focus of the SDI on quality offers potential complementarities with these instruments by linking inputs, policy and institutional environment factors on the one hand, and education outcomes on the other.
  • Health. Service Availability Mappings (WHO) and Service Readiness Assessment Surveys (USAID/Macro International) are very comprehensive health facility surveys. We view these comprehensive data efforts as complementary to the SDI, which focuses on a subset of key health facility indicators but is a more nimble tool that can be repeated at lower cost and with greater frequency.(vi)
  • Health. Demographic and Health Surveys (DHS) (USAID/Macro International) are household surveys aimed at assessing health service access and utilization and health outcomes (such as the infant mortality rate and the under-five mortality rate). In this sense the SDI and DHS surveys are highly complementary: SDI goes beyond measuring utilization and assesses the functioning of the services that people have access to.

What is the level of representativeness of SDI surveys?

The sampling strategy aims to generate nationally representative data disaggregated by rural/urban location and provider type. To maximize the utility for the in-country dialogue, the stratification and/or selective oversampling of certain geographic locations is adapted to country-level needs.
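As a stylized illustration of such a design (the strata and facility counts below are invented, not from any SDI survey), a fixed facility sample might be allocated across strata in proportion to stratum size, with optional oversampling of strata of particular policy interest:

    # Hypothetical sampling frame: stratum -> number of facilities.
    frame = {
        ("rural", "public"): 1200,
        ("rural", "private"): 300,
        ("urban", "public"): 400,
        ("urban", "private"): 100,
    }

    def allocate(total_sample, frame, oversample=None):
        """Allocate total_sample across strata proportionally to stratum
        size, after weighting strata by optional oversampling factors."""
        factors = oversample or {}
        weights = {s: n * factors.get(s, 1.0) for s, n in frame.items()}
        total_weight = sum(weights.values())
        return {s: round(total_sample * w / total_weight) for s, w in weights.items()}

    # Give rural private facilities 50% more weight than their frame share.
    print(allocate(250, frame, oversample={("rural", "private"): 1.5}))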

Why are no qualitative data collected by SDI surveys?

SDI focuses on quantitative, facility-based data. That said, information is collected on many institutional factors that help one understand and interpret the results of the indicators. That is, SDI surveys do not collect data only for the five to six indicators per sector; they also collect contextual information needed to interpret the indicators correctly and to support more detailed analysis beyond the indicators.

Why does SDI have an exclusive supply-side focus?

SDI surveys focus on facility-based data, which is a gap in the data landscape. While there are standardized household surveys (e.g., DHS, LSMS) that give insight into education and health outcomes and their socio-economic determinants, that information is of limited help in determining what service delivery systems should be held accountable for. This is the gap that SDI aims to fill.

Is there a risk of laying the blame on providers for service delivery weaknesses?

The actions of service providers are viewed as a reflection of underlying systems’ weaknesses or strengths — rather than an indictment of individual service providers. From the consumer’s point of view, however, providers’ ability and effort are what they see, hence this focus in SDI.

What is SDI doing to build capacity to better use the data?

The SDI partners are committed to making anonymized raw SDI data available. But it cannot be assumed that everyone will automatically be able to access and use the data. One of the SDI partners, the African Economic Research Consortium (AERC), will implement two types of training workshops in each SDI country: one on basic analysis targeting young researchers, and a second, shorter and more compact module for advanced analysts and researchers. Mini-grants will be awarded to promising analyses to develop them into SDI Policy Briefs.

Do the SDI surveys not undermine the investment made in education and health management information systems?

The experience with management information systems (MIS) has been mixed. Take the example of health MIS (HMIS). There is a long history of investment in HMIS, but the WHO’s structured assessments of HMIS have continued to point to serious weaknesses that undermine the conclusions one is able to draw. For example, the completeness of HMIS submissions is a challenge, and facilities reporting in the range of 70% are performing reasonably well. But it is rarely the same 70% of facilities reporting each time (in which case corrections could be made in the analysis). It is also often assumed that this is a capacity problem, and investments in training and information systems have been made, yet the problem persists. It can be argued that not all types of data should be collected through management information systems in the first place, and that the type of quality data collected in SDI is best suited to collection by (independent) third parties.

What are the 15-20 SDI countries, how were they decided, and what are the minimum requirements to participate as an SDI country?

Not all of the SDI countries have been selected yet. The main requirements are country demand; a commitment to making the findings available within three months of data collection; and data transparency, i.e., making the data available for primary analysis (with the necessary ethical protections, such as anonymization). Finally, the survey should be implemented in both the education and health sectors.

Are private sector providers included in SDI surveys?

SDI covers both the public and private sectors. In the education sector, both non-profit private providers (mainly faith-based organizations, or FBOs) and low-cost for-profit providers have been included. In the health survey, only non-profit private providers (mainly FBOs) have been included. Some aspects of the survey instrument need to be adjusted for private providers, as the conclusions and policy implications may be quite different. For example, drug stock-outs at a private practitioner’s office may mean something quite different: in some countries drug dispensing is not allowed, or is limited to certain drugs. But the clinical competence modules remain relevant. The inclusion of private for-profit providers also depends on their magnitude as service providers.

Why not include services other than education and health?

The quality of services such as water and sanitation must typically be measured through a household survey. This differs from the facility-based approach of the SDI surveys. Surveys on water and sanitation are probably best administered independently of the SDI surveys.

How sure are we that the appropriate sample size is 200-300 units per sector?

The pilots in Tanzania and Senegal showed that the precision of the indicator estimates depends heavily on the efficiency of the stratification process. It also depends on how some of the variables are measured (whether as dichotomous or continuous variables). The precision of the estimates should be continuously monitored to determine whether the sample size in future surveys should be increased or decreased. However, the most critical determinant of sample size is the level of precision required in the estimates: changes of a few percentage points in the confidence intervals have large implications for sample size. Experience has shown that 200-300 facilities per sector are enough to meet these needs.
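To see why precision drives sample size so strongly, consider the standard sample-size formula for estimating a proportion (a generic statistical illustration, not SDI’s actual sampling code; the design-effect parameter is our assumption):

    import math

    def sample_size(margin_of_error, p=0.5, z=1.96, design_effect=1.0):
        """Facilities needed to estimate a proportion p to within the given
        margin of error at ~95% confidence (z = 1.96), inflated by a design
        effect to account for stratified or clustered sampling."""
        n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
        return math.ceil(n * design_effect)

    # Tightening the margin of error from +/-5 to +/-3 percentage points
    # roughly triples the required sample size:
    print(sample_size(0.05))  # 385
    print(sample_size(0.03))  # 1068

Efficient stratification effectively lowers the design effect, which is one way a sample of 200-300 facilities per sector can deliver adequate precision for the headline indicators.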

Why is there not a management or leadership indicator?

Management can be seen as a primary input, because the organization of inputs is crucial for efficient service production. If a good indicator of this aspect can be identified, it can be included among the indicators; so far, we have not identified one. Leadership is clearly an important factor in explaining the quality of services but, again, a simple and telling indicator is hard to identify. We suggest, however, that various proxy indicators be collected as part of the underlying data.


(i) See Hanushek, 2007.

(ii) As noted by Duflo, Dupas, and Kremer (2009).

(iii) For an overview, see Hanushek (2003). Case and Deaton (1999) show, using a natural experiment in South Africa, that increases in school resources (as measured by the student-teacher ratio) raises academic achievement among black students. Duflo (2001) finds that a school construction policy in Indonesia was effective in increasing the quantity of education. Banerjee et al (2000) find, using a randomized evaluation in India, that provision of additional teachers in nonformal education centers increases school participation of girls. However, a series of randomized evaluations in Kenya indicate that the only effect of textbooks on outcomes was among the better students (Glewwe and Kremer, 2006; Glewwe, Kremer and Moulin, 2002). More recent evidence from natural experiments and randomized evaluations also indicate some potential positive effect of school resources on outcomes, but not uniformly positive (Duflo 2001; Glewwe and Kremer 2006).

(iv) The suggested indicators for education and health are partly based on an initial list of 50 PETS and QSDS indicators devised as part of the project “Harmonization of Public Expenditure Tracking Surveys (PETS) and Quantitative Service Delivery Surveys (QSDS) at the World Bank” (Gauthier, 2008). That initial list, which covers a wide range of variables characterizing public expenditure and service delivery, was streamlined using this project’s criteria and conceptual framework.

(v) See for instance Olken (2009).

(vi) SDI surveys aim to address a key methodological challenge experienced by facility surveys, namely that they usually consist of interviews with the medical officer or head nurse. This method has been found to have sub-optimal validity in a recent review of health facility surveys (International Facility Assessment Network, presentation at American Association of Public Health Conference, 2011). The SDI surveys are based on primary collection of observational data that is verified by the enumerator. For example, in assessing access to electricity, SDI assesses whether the electricity is functioning and, in schools, whether the light quality that the electricity generates is adequate (using a light meter).


[Figure: SDI timeline, November 2013]