
Monitoring & Evaluation Course Overview

The document provides information on a course on Monitoring and Evaluation (M&E) offered by AMREF International University. The purpose of the course is to equip learners with knowledge and skills on M&E to design, implement, monitor and evaluate public health programs. By the end of the course, learners should be able to explain M&E concepts and principles, develop indicator plans and frameworks, apply evaluation designs, and design M&E systems for health programs. The document then discusses the historical context and evolution of M&E and provides details on key M&E concepts like monitoring, evaluation, supervision and review.


AMREF INTERNATIONAL UNIVERSITY (AMIU)

MPH COURSE
MAP 720: MONITORING & EVALUATION

Dr. Nzomo Mwita, PhD


Senior Monitoring, Evaluation & Learning (MEL)
Consultant

October/November 2020
Purpose of the Course

The purpose of this course is to equip the learner with knowledge and skills on monitoring and evaluation so that they can design, implement, monitor and evaluate public health programmes.
Expected Learning Outcomes

By the end of the course, learners should be able to:


 Explain the concepts and principles of M&E.
 Develop M&E indicator plans for health programs.
 Design M&E frameworks for health programs.
 Apply appropriate evaluation designs.
 Design and implement M&E systems for health programs.
Lecture 1 & 2: Historical Context, Concepts
and Principles of M&E
Historical Context of M&E

Q. When in history did M&E become an instrument of good governance and performance?

 Nowadays, governments, NGOs, CSOs, donor agencies and even the private sector are embracing M&E as a practice and discipline that can improve service delivery and learning.
 Historically, M&E can be traced to various events in the past.
Historical Context of M&E

 Societies in the past have had some form of performance-tracking system, and M&E has been on the development agenda of many institutions.
 Kusek and Rist (2004) say: “there is tremendous power in
measuring performance. The ancient Egyptians regularly
monitored their country’s outputs in grain and livestock
production more than 5,000 years ago. In this sense, M&E is
certainly not a new phenomenon. Modern governments, too,
have engaged in some form of traditional M&E over the
decades. They have sought to track over time their
expenditures, revenues, staffing levels, resources, etc”.
Historical Context of M&E

 From the days of the Ancient Egyptians, there has been a great deal of evolution in the philosophical orientation of M&E.
 In the 1960s, M&E practice underwent a substantial paradigm shift and became predominantly quantitative in focus. This dominance continued in the social sciences into the 1970s.
 In the decades that followed, M&E methodologies shifted from an emphasis on quantitative methods to more qualitative, participatory approaches and empowerment techniques.
Historical Context of M&E

 In the 1980s and 1990s, demand for M&E in contemporary governance systems increased because of the benefits associated with its practice. For instance, results-based M&E offered decision makers and other stakeholders benefits such as relevant information generated by good feedback-loop systems.
 Since the turn of the millennium, M&E has increasingly been digitalized, with emphasis on data science, results-based M&E, and performance monitoring.
Historical Context of M&E

 Contemporary M&E practices have their roots in the Results-based Management (RBM) approach, a management strategy centred on performance and the achievement of outputs, outcomes and impacts for a policy, programme or project.
 RBM employs tools such as strategic planning, results frameworks, monitoring and programme evaluation to improve organizational performance.
 M&E has enhanced the understanding and practice of RBM over the years.
Historical Context of M&E

 M&E can help bring about transparency, accountability and good governance in society. However, as a carrier and messenger of both good and bad news, M&E can face the challenge of passive implementation in organizations.
 Hence the need to institutionalize M&E in institutions to enhance transparency, accountability and good governance.
Understanding M and E
 Monitoring and evaluation (M and E) are two
complementary but distinct processes.
 Monitoring consists of tracking inputs, activities, outputs, outcomes, and other aspects of the project on a continuous basis during the implementation period, as an integral part of the project management function.
Understanding M and E
 Evaluation, on the other hand, is a periodic assessment of the objectives of the project.
 It assesses the extent to which the project has achieved its objectives.
 Projects are evaluated at discrete points in time (usually at the project's mid-point and end-point) along some key dimensions (i.e. relevance, efficiency, effectiveness, impact and sustainability).
 Evaluations may often seek an outside perspective from relevant experts.
 M&E helps to answer the "So what?" question.
Differences Between Monitoring and Evaluation

Frequency
  Monitoring: Continuous, occurs regularly
  Evaluation: Periodic, episodic

Function
  Monitoring: Tracking
  Evaluation: Assessment

Purpose
  Monitoring: Improve efficiency; provide information for re-programming to improve outcomes
  Evaluation: Improve effectiveness, impact and value for money; inform future programming, strategy and policy-making

Focus
  Monitoring: Inputs, outputs, processes, workplans (operational implementation)
  Evaluation: Effectiveness, relevance, impact, cost-effectiveness (population effects)

Methods
  Monitoring: Routine review of reports, registers, administrative databases, field observations
  Evaluation: Scientific, rigorous research design; complex and intensive

Cost
  Monitoring: Consistent, recurrent costs spread across the implementation period
  Evaluation: Episodic; often focused at the midpoint and end of the implementation period
Key M&E Activities in the Project Cycle
Types of Monitoring: What we need to monitor

1. Outcome monitoring:
A measure of changes which show whether the conditions
of the target group and its environment have changed in
a significant way as a result of the programme
intervention.

2. Physical progress monitoring:
Focuses on continuous review and surveillance of activities and results of a project/programme; in particular, overseeing planned versus actual performance, collecting relevant information, and rescheduling activities and resources where necessary.
Types of Monitoring: What we need to monitor

3. Technical monitoring:
Focuses on use of technology in relation to resources.

4. Financial monitoring:
Monitoring actual expenditure patterns against planned
budgets and implementation schedules.

5. Assumption monitoring:
Involves assessment of the conditions that must exist if the programme is to succeed but which are not under the direct control of the programme, e.g. collaboration with other agencies.
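As an illustration, the financial-monitoring idea of comparing actual expenditure against planned budgets can be sketched in a few lines of code. The budget line items and figures below are hypothetical examples, not taken from any real programme:

```python
# Minimal sketch of financial monitoring: planned budget versus actual spend.
# Line items and amounts are hypothetical illustrations.
planned = {"staff": 50_000, "supplies": 20_000, "transport": 10_000}
actual = {"staff": 48_000, "supplies": 26_000, "transport": 9_000}

def budget_variances(planned, actual):
    """Return the variance (actual - planned) and % deviation per line item."""
    report = {}
    for item, budget in planned.items():
        spent = actual.get(item, 0)
        report[item] = {
            "variance": spent - budget,
            "pct": round(100 * (spent - budget) / budget, 1),
        }
    return report

for item, v in budget_variances(planned, actual).items():
    flag = "overspent" if v["variance"] > 0 else "within budget"
    print(f"{item}: {v['variance']:+,} ({v['pct']:+.1f}%) - {flag}")
```

A report like this, produced each reporting period, turns raw expenditure records into the planned-versus-actual comparison that financial monitoring calls for.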
Inputs, process, outputs and outcomes

Outcomes: Short-term and medium-term effects of an intervention's outputs.

Outputs: Products and services which result from a development intervention.

Activities: Actions taken or work performed through inputs, such as funds, technical assistance and other types of resources mobilized to produce outputs.

Inputs: Financial, human, and material resources used for the intervention.
Monitoring of inputs

 Monitoring of inputs helps to ensure that:
➢ Work continues according to schedule.
➢ Personnel are available according to assignment.
➢ Resource consumption and costs are within planned limits.
➢ Required information is available.
Monitoring of process

 Monitoring of process helps to ensure that:
➢ The expected activities and tasks are performed in accordance with set norms and plans.
➢ People meet the set work standards.
Monitoring of outputs
 Monitoring of outputs helps to ensure that:
➢ Services are delivered as planned.
➢ Decisions are timely and appropriate.
➢ Records are reliable and reports are issued.
➢ Conflicts are resolved.
➢ Beneficiaries of the services are satisfied.
Review
 Review is an assessment, at one point in time, of the progress of a project.
 The basic purpose of a review is to take a closer look than is possible through routine monitoring.
 A review can be carried out to look at different aspects of a programme and can use a range of criteria to measure progress.
 Based on the review findings, appropriate decisions are taken about the direction the programme should take.
Supervision

 Supervision is a way of ensuring staff competence, efficiency and effectiveness through observation, discussion, support and guidance (SCF 1995; WHO 1992).
 Supervision concentrates on people and sets out to improve performance. It is justified mainly by the fact that it gives the supervisor the opportunity not only to provide guidance, advice and help, but also to learn.
Supervision

There are three main styles of supervision:

 Autocratic: "Do what I say!"
 Anarchic: "Do what you like!"
 Democratic: "Let us agree on what we are to do."


Supervision

Importance of Supervision

1. Objectives:
− make sure that objectives correspond to needs
− discuss, explain, justify, and obtain the commitment of workers to the objectives of the programme
− seek solutions to any conflict that arises between management, staff and users regarding the programme objectives.
Supervision

Importance of Supervision
2. Performance:
− observe how the tasks entrusted to different categories of workers are carried out, and under what conditions
− analyze the factors that result in satisfactory performance and the obstacles to it (knowledge and attitudes of workers, environment, resources).
Supervision

Importance of Supervision
3. Staff motivation:
− obtain a clear picture of workers' fundamental needs (especially the need to 'belong', the need for respect, and the need for a sense of achievement)
− discover shortcomings in staff skills in communication, problem-solving, and conflict resolution.
Supervision

Importance of Supervision
4. Staff competence:
− determine staff needs for information on the community, on programme goals, and on standards to be attained
− set up a programme of continuing education (if necessary).
Supervision

Importance of Supervision

5. Resources:
- identify particular needs for logistics or financial
support.
Evaluation
Evaluation is a systematic and objective assessment of
an ongoing or completed project, program, or policy,
including its design, implementation, and results.

The aim is to determine the relevance and fulfilment of objectives, development efficiency, effectiveness, impact, and sustainability.
Evaluation

 Evaluations can address three types of questions:

1. Descriptive questions: The evaluation seeks to determine what is taking place and describes processes, conditions, organizational relationships, and stakeholder views.

2. Normative questions: The evaluation compares what is taking place to what should be taking place; it assesses activities and whether or not targets are accomplished. Normative questions can apply to inputs, activities, and outputs.
Evaluation

3. Cause-and-effect questions: The evaluation examines outcomes and tries to assess what difference the intervention makes to those outcomes.
Types of evaluation

 Evaluation may be classified in terms of timing, agent and scope.

By Agent

 Internal or self-evaluation: conducted by those directly involved in the formulation, implementation and management of the project.
 External or independent evaluation: conducted by those who are NOT directly involved in the formulation, implementation and management of the project.
Types of evaluation
By Timing

 Mid-term evaluation: conducted at the mid-point of project implementation. Focuses on:
✓ Relevance
✓ Performance (efficiency, effectiveness, and timeliness)
✓ Initial lessons learnt
Types of evaluation
 Terminal evaluation: Conducted at the end of the
project implementation. Focuses on:
➢ Relevance
➢ Performance (efficiency, effectiveness, timeliness)
➢ Early signs of potential impact
➢ Sustainability of results
➢ Contribution to capacity development
➢ Recommendations for the second phase
Types of evaluation
 Ex-post evaluation: Conducted usually two years or
more after the completion of the project. Focuses
on:
✓ Relevance
✓ Performance (efficiency, effectiveness, timeliness)
✓ Success (impact, sustainability, contribution to
capacity development)
✓ Lessons learnt as basis for policy formulation and
programming.
Types of evaluation
By Scope
 Project evaluation: evaluation of a single project.
 Sectoral evaluation: cluster evaluation of projects in a sector.
 Thematic evaluation: cluster evaluation of projects addressing a particular theme.
 Policy evaluation: cluster evaluation of projects dealing with particular policy issues at the sectoral or thematic level.
Types of evaluation

 Process evaluation: a cluster evaluation of projects to assess the efficiency and effectiveness of a particular process or modality they have adopted.
Common Terms Used in M&E

 Efficiency: The amount of outputs created, and their quality, in relation to the resources invested. It reflects the relative effort needed to accomplish the objective, and is usually measured in terms of the cost incurred in the process.

 Effect: The more immediate, tangible and observable change, in relation to the initial situation and established objectives, which is felt to have been brought about as a direct result of project activities. There are direct and wider effects (Oakley P. et al. 1998).
Common Terms Used in M&E

 Effectiveness: The extent to which the planned outputs, expected effects and intended impacts are being, or have been, produced or achieved; that is, the degree to which a programme achieves its objectives (José Garcia-Nuez 1992).

 Impact: The long-term, largely indirect consequences or 'end products' of the programme for the intended beneficiaries and any other people. Impacts can be positive or negative.
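Two of the terms above, efficiency (cost per unit of output) and effectiveness (degree of objective achievement), lend themselves to simple calculations. A minimal sketch, using hypothetical figures for a training programme:

```python
# Minimal sketch of two common M&E calculations; all figures are hypothetical.
def cost_per_output(total_cost, outputs_delivered):
    """Efficiency expressed as cost incurred per unit of output."""
    return total_cost / outputs_delivered

def effectiveness_pct(achieved, target):
    """Effectiveness as the percentage of the planned target achieved."""
    return 100 * achieved / target

# A hypothetical programme spends 120,000 to train 400 health workers
# against a target of 500 trained workers.
print(cost_per_output(120_000, 400))  # 300.0 per trained worker
print(effectiveness_pct(400, 500))    # 80.0% of target achieved
```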
Common Terms Used in M&E

A project's impact can be understood as a series of outputs and effects (outcomes) which occur at different times and which cumulatively cause some noticeable and lasting change in the livelihoods of the people who have been involved (DFID 1997).
Common Terms Used in M&E

Relevance: The extent to which the programme is addressing or has addressed problems of high priority, mainly as viewed by stakeholders, particularly the programme's beneficiaries and any other people who might have been its beneficiaries.
Common Terms Used in M&E

Sustainability: The maintenance of positive changes induced by the programme after its phase-out. Oakley P. et al. (1998) describe sustainability as a 'withdrawal strategy' for a development programme.
Common Terms Used in M&E

Replicability: The feasibility of replicating the particular programme, or parts of it, in another context.
Results Chain
Logic Model

What is a Logic Model?

 A Logic Model is a visual diagram that illustrates how the program will work.
 Logic models can be used in project/program planning, implementation and evaluation.
Logic Model

 Logic models are useful:

1. To build understanding and clarity about the program.
2. To identify the sequencing of activities that should be implemented.
3. To serve as a basis for program evaluation.
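The components of a logic model can also be written down as a simple ordered structure before being drawn as a diagram. The sketch below uses a hypothetical immunization programme as its example:

```python
# Minimal sketch of a logic model for a hypothetical immunization programme.
# Dicts preserve insertion order, so the keys read left to right as the chain.
logic_model = {
    "inputs":     ["funding", "vaccines", "health workers"],
    "activities": ["community mobilization", "vaccination sessions"],
    "outputs":    ["sessions held", "children vaccinated"],
    "outcomes":   ["increased immunization coverage"],
    "impact":     ["reduced child morbidity and mortality"],
}

# Render the model as the left-to-right chain shown in visual diagrams.
print(" -> ".join(logic_model))
for component, items in logic_model.items():
    print(f"{component}: {', '.join(items)}")
```

Writing the model out this way makes the Activity below concrete: fill in each component for your own programme, then check that every output plausibly leads to the outcomes listed after it.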
Logic model- Process Components
Logic Model-Outcome Components
Sample of Logic Model
Activity

Formulate a Logic Model for your Program


Contact:
Dr. Nzomo Mwita, PhD
Senior Consultant-Monitoring, Evaluation & Learning (MEL)
E-mail: [email protected]
Phone: +254 721 440462
