Guide To Teaching and Learning in Higher Education
Module 11
Evaluation of Teaching and Learning in Higher Education
JUST BEFORE YOU BEGIN
36. One consequence, as the Tokyo declaration mentions, is that it is not possible to arrive at one set
of quality standards applicable to all countries, and against which institutions can be assessed.
Quality embraces all the main functions and activities of higher education: teaching and academic
programmes, research and scholarship, staffing, students, infrastructure, community services and
the academic environment. The Arab States declarations consider that ‘all higher education
systems and institutions should give a high priority to ensuring the quality of programmes,
teaching and outcomes. Structures, procedures and standards for quality assurance should be
developed at the regional and national levels commensurate with international guidelines while
providing for variety according to the specificities of each country, institution or programme’.
37. The Dakar declaration includes the idea that quality ‘entails the operationalization of the
envisaged outcomes (a clear definition of goals and objectives), of the inputs the institutions will
work with (thus a review of admissions criteria) and the processes and procedures for working
with the inputs (the way the management system coordinates structures, resources and the
institutional culture to obtain the required products)’. The Arab States Conference states that
‘quality mechanisms are implemented through continuous assessments and comparisons
between observed and intended processes and constant search for the sources of dysfunctions to
correct them’.
65. Recommendations addressed to each higher education institution were approved by the Tokyo
Conference and seem also implicit in the Havana and Dakar Conferences. The Tokyo statement
says: ‘Each higher education institution must define its mission in harmony with the overall goals of
the sector itself, translate this mission into observable indicators and allocate the required
resources’. In the same vein, the Beirut Conference states that the missions ‘should be translated
into well-defined objectives, with allocation of the required resources, and the establishment of
concrete mechanisms proper to ensure adequate monitoring and evaluation of progress and
achievements based on observable indicators’.
109. The Dakar Conference urges that each institution ‘create appropriate structures for evaluating
and controlling the quality of its curricula (including the performance of students) in keeping with
agreed guidelines’ and recommends that ‘each Member State establish a mechanism for
evaluating the quality of higher education institutions, building on existing practices in the region.
Such a body would be responsible for evaluating training, research and consultancy activities in
the light of institutional missions, national education programmes and the needs of changing
times. This should be a control rather than a punitive mechanism, and should use a
combination of external and internal evaluation strategies’.
110. The Tokyo Conference proposals are summarised through the affirmation that ‘each country of
the region should establish a mechanism for evaluating the quality of its higher education
institutions. Countries must introduce quality assurance methods at both institutional and
systemic levels. These may include academic accreditation, academic audits and institutional
evaluations, performance funding, review of disciplines and professional areas, qualification
frameworks and competency-based approaches to vocational education and training’.
113. The Dakar Conference proposes that institutions of higher education should be required to
establish minimum teaching-learning guidelines for each course model. It says that they should
be explicit about ‘entry and exit behaviours in terms of skills, values and attitudes, the teaching
and evaluation methods, all within a specific time frame’.
Introduction
Evaluation is a core part of the educational delivery process. The Dakar
Conference endorses this view, as can be seen in the Declaration quoted
above. Many practitioners in the higher education system are reasonably
skilled in the art (and science) of evaluating teaching and learning. Yet, many lack
essential skills of evaluation. This module is designed to provide learning experiences for the
“experts” and “novices” alike. Dison and De Groot (1999) have stressed the need for
compatibility between content, teaching methods and assessment procedures. It is
only when these are in tandem that learning can be appropriately assessed. The
outcome-based evaluation (OBE) model of student assessment in higher education
is now gaining wide acceptance.
Depending on the goals aimed at, evaluation may be conducted based on various
methods. The method of evaluation may be:
a. diagnostic
b. normative
c. criterion-based
Criterion-based evaluation, for example, measures learners’ achievements against predefined
performance levels. It seeks to determine the extent to which goals and objectives
targeted by a particular course have been reached.
Forms of Evaluation
a. Learners’ achievements
b. Courses
c. Teachers
d. Educational institutions
Activity 11.2
A higher education institution uses selection tests that are designed on the basis of the subject
content necessary for good performance in the first-year courses. Applicants for enrolment in
the first year will be listed on a merit basis after taking the tests. Only the first 20
applicants will be authorised to enrol.
Which method or methods of evaluation are used by the institution to select candidates for
enrolment in the first year?
Activity 11.3
A teacher has arranged the content of his annual courses in modules. He gives tests
at the end of each module. The teacher has in mind a double objective: (1) to compare all
learners’ performance levels and to classify them based on their achievements with each test;
(2) to provide each learner with adequate information to help him improve his performance.
What type of test does the teacher organise at the conclusion of each module?
11.2 Tools and Techniques for Evaluating Learning
a. Tests
b. Questionnaires
c. Observation Schedules
d. Interview Guides
The major tools of evaluation in higher institutions are tests. Tests can be of various
types. They can be classified on the following bases:
By Kind of item
• Choice items (true-false, multiple choice, matching)
• Completion items
• Short answer items
• Essay items
By degree of standardization
• Standardised tests
• Non standardised tests
By administrative conditions
• Individual tests
• Group tests
By emphasis on time
• Power tests
• Speed tests
By score-referencing scheme
• Norm-referencing
• Criterion-referencing
Other assessment instruments include:
• Readiness tests
• Projective techniques
• Structured tests
• Self-report questionnaires
• Interest inventories
• Vocational or career interests
Description of Tests
To plan a test, you prepare a two-way table, called a test blueprint. The names of
the major categories of a taxonomy head the table columns, while the row headings
indicate the major topics of the subject matter to be tested. In the body of the
table, the “cells”, formed by a combination of a particular taxonomy category and a
particular subject-matter topic, contain specific instructional objectives. Thus, the
blueprint serves as a double-entry classifying scheme for specific objectives. After
objectives are classified, the number of test items that will be used to test each
objective is recorded in the table. Thus, the test blueprint serves as a plan which
assures that all important objectives are included and that they receive the proper
emphasis on the test.
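To make the blueprint concrete, here is a minimal sketch in Python (an illustration added to this guide, not part of the original readings). The topics, taxonomy categories and item counts are invented; the point is only that the row and column totals let you check that every topic and every taxonomy level receives its intended emphasis.

```python
# Hypothetical test blueprint: rows are subject-matter topics, columns are
# taxonomy categories, and each cell holds the number of items planned.
blueprint = {
    "Measurement concepts": {"Knowledge": 4, "Comprehension": 3, "Application": 2},
    "Test construction":    {"Knowledge": 2, "Comprehension": 4, "Application": 5},
}

def items_per_topic(bp):
    """Row totals: number of items planned for each topic."""
    return {topic: sum(cells.values()) for topic, cells in bp.items()}

def items_per_category(bp):
    """Column totals: number of items planned for each taxonomy category."""
    totals = {}
    for cells in bp.values():
        for category, n in cells.items():
            totals[category] = totals.get(category, 0) + n
    return totals

print("Row totals:   ", items_per_topic(blueprint))
print("Column totals:", items_per_category(blueprint))
print("Planned test length:", sum(items_per_topic(blueprint).values()))
```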
Guidelines for Writing Essay Questions

♦ Define the behaviour the examinee is expected to exhibit or describe the process
to be exhibited before beginning to write the essay question.
♦ Ask questions that require the examinee to demonstrate the ability to use
essential knowledge and to do so in situations that are new or novel for the
examinee, rather than simply recalling information from a textbook or a
classroom.
♦ Ask questions that are relatively specific or focused, and which require relatively
brief responses.
♦ If a test includes several essay questions, be sure that they cover the appropriate
range of topics and complexity of behaviour called for in the test blueprint, but be
sure that the complexity of the questions is within the educational maturity level
of the examinees.
♦ Require all examinees to answer the same questions; don’t give optional
questions.
♦ Word questions so that all examinees will interpret the task the way you intend.
♦ Word questions so that all examinees know the limits of the tasks, their purposes,
and can answer them in the time allotted.
♦ Word questions so that experts can agree on the correctness of an examinee’s
response.
♦ Word questions calling for examinee opinion on controversial matters so that they
ask the examinee to give evidence to support the opinion, and evaluate the
examinee’s response in terms of the evidence presented rather than the opinion
or position taken.
♦ Word questions so the examinee can judge the approximate length of the answer
desired and knows the point value or weight each will be given.

Examples of essay questions that call for different kinds of thinking include the following:
1. COMPARING:
Describe the similarities and differences between…….
Compare the following two methods for………
2. RELATING CAUSE AND EFFECT:
What are major causes of ….?
What would be the most likely effects of …?
3. JUSTIFYING:
Which of the following alternatives would you favour and why?
Explain why you agree or disagree with the following statement:
4. SUMMARISING:
State the main points included in ….
Briefly summarise the contents of ….
5. GENERALISING:
Formulate several valid generalisations from the following data.
State a set of principles that can explain the following events.
6. INFERRING:
In light of the facts presented, what is most likely to happen, when…?
What deductions can you make from the statement of….?
7. CLASSIFYING:
Group the following items according to….
What do the following items have in common?
8. CREATING:
List as many ways as you can think of for….
Write a list of questions that should be answered before….
9. APPLYING:
Using the principle of…. as a guide, describe how you would solve the following
problem situation.
Describe a situation that illustrates the principle of…
10. ANALYSING:
Describe the reasoning errors in the following paragraph.
List and describe the main characteristics of…
Describe the relationship between the following parts of…
11. SYNTHESIZING:
Guidelines for Writing Completion and Short-Answer Items
2. Word each item in specific terms with clear meanings so that the intended
answer is the only one possible, and so that the answer is a single word, brief
phrase, or number.
3. Word each item so that the blank or answer space is toward the end of the
sentence.
4. Avoid copying statements verbatim from texts or classroom materials.
5. Omit important rather than trivial words.
6. Avoid “butchered” or “mutilated” sentences; use only one or two blanks in a
completion sentence.
7. Keep the blanks of equal length and arrange the items so the answers are placed
in a column at the right or left of the sentences.
8. State the precision, numerical units, or degree of specificity expected of the
answer.
9. Word the items to avoid irrelevant clues or specific determiners.
Guidelines for Writing Multiple-Choice Items: the Stem

TO DO
• If possible, write the stem as a direct question.
• If an incomplete sentence is used, be sure it implies a direct question.
• Make sure the alternatives come at the end (rather than in the middle) of the sentence.
• Control the wording so that vocabulary and sentence structure are at a relatively low
and non-technical level.
• In items testing definitions, place the word or item in the stem and use definitions or
descriptions as alternatives.

TO AVOID
• Avoid extraneous, superfluous, and non-functioning words and phrases that are mere
“window dressing.”
• Avoid (or use sparingly) negatively worded items.
• Avoid phrasing the item so that the personal opinion of the examinee is an option.
• Avoid textbook wording and “textbookish” or stereotyped phraseology.
Guidelines for Writing Multiple-Choice Items: the Alternatives

TO DO
1. In general, strive to create three to five functional alternatives.
2. All alternatives should be homogeneous and appropriate to the stem.
3. Put repeated words and phrases in the stem.
4. Use consistent and correct punctuation in relation to the stem.
5. Arrange alternatives in a list format rather than in tandem.
6. Arrange alternatives in a logical or meaningful order.
7. All distractors should be grammatically correct with the stem.

TO AVOID
1. Avoid overlapping alternatives.
2. Avoid making the alternatives a collection of true-false items.
3. Avoid using “not given”, “none of the above”, etc., as an alternative in best-answer
types of items (use only with the correct-answer variety).
4. Avoid using “all of the above”; limit its use to the correct-answer variety.
5. Avoid using verbal clues in the alternatives.
6. Avoid using technical terms, unknown words or names, and “silly” terms or names
as distractors.
7. Avoid making it harder to eliminate a …
The ultimate goal of any evaluation should be to collect relevant, valid, reliable and
economical information so that appropriate decisions can be made. Relevance,
validity, reliability and economy are currently the most expected
requirements or criteria for any evaluation test.
The relevance of data collected implies that the subject under evaluation
corresponds precisely and specifically to the objectives targeted by the evaluation. For
example, for end-of-curriculum examinations to be relevant, it is necessary to
differentiate between examinations meant to evaluate learners’ qualification for
promotion to an upper class and those meant to evaluate their readiness to move into
the job market and active life.
The validity of data collected implies that the evaluation has actually been
focused on the subject initially targeted for evaluation. For instance, for the sake of
validity, learners’ written and oral skills cannot be evaluated with the same tests.
The reliability of data collected implies that they are not determined by the
personal judgement and choices of the individual who collected them. For example, the double
grading of examination papers is meant to consolidate and further ascertain
reliability.
Data analysis follows the administration or taking of tests. Data collected at the end
of a quantitative evaluation or a qualitative evaluation may nearly always be
amenable to statistical processing.
• Evaluation for accountability – usually at the end of a specified period for the
purpose of passing judgement on the extent to which what is expected to be
achieved has been achieved. This is sometimes referred to as summative
evaluation. It is often followed by reward for success, e.g. promotion or award of
a certificate, or punishment for failure, e.g. repetition of a course, expulsion from
the institution or withholding of a certificate.
• Evaluation for improvement of services or performance – usually at various
stages in the process of learning or teaching for the purpose of identifying areas
of weakness and strength which might influence success or failure at the end of
the course. This is sometimes called Formative evaluation and is usually
followed by feedback to the learner or performer and the initiation of corrective
action to improve the chances of success at the end of the course. In this sense
formative evaluation may be said to be diagnostic.
SCOPE OF EVALUATION
• Written tests
• Oral tests.
• Practical tests in laboratories.
• Projects.
• Term papers.
• Theses.
STANDARDISATION OF SCORES
For the purpose of making scores from different tests and other measuring tools
comparable, the raw scores need to be converted to standard scores using the mean
and standard deviation, e.g. stanines, t-scores, z-scores. WAEC uses reversed
stanines for reporting scores in SSCE while JAMB uses modified Z-scores for
reporting UME scores.
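As an illustration of the conversion, the sketch below turns a set of invented raw scores into z-scores and T-scores using the mean and standard deviation. The stanine banding shown is the conventional nine-unit scale; it is not intended to reproduce WAEC's reversed stanines or JAMB's modified z-scores.

```python
import statistics

def standardise(raw_scores):
    """Convert raw scores to z-scores and T-scores (mean 50, SD 10)."""
    mean = statistics.mean(raw_scores)
    sd = statistics.pstdev(raw_scores)                  # population standard deviation
    z_scores = [(x - mean) / sd for x in raw_scores]
    t_scores = [50 + 10 * z for z in z_scores]
    return z_scores, t_scores

def stanine(z):
    """Map a z-score to the conventional 1-9 stanine scale."""
    cut_points = [-1.75, -1.25, -0.75, -0.25, 0.25, 0.75, 1.25, 1.75]
    return 1 + sum(z > c for c in cut_points)

raw = [42, 55, 61, 48, 70, 66, 53, 59]                  # hypothetical raw scores
z, t = standardise(raw)
for x, zi, ti in zip(raw, z, t):
    print(f"raw {x:2d}  z {zi:+.2f}  T {ti:5.1f}  stanine {stanine(zi)}")
```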
EVALUATION OF TEACHING
The most commonly obtained curve with traditional teaching, especially with large
classes, is the normal curve.
Bloom (1971) in his exposition of Mastery learning says:
“There is nothing sacred about the normal curve. It is the distribution most
appropriate to chance and random activity. Education is a purposeful activity, and
we seek to have the students learn what we have to teach. If we are effective in our
instruction, the distribution of achievement should be very different from the normal
curve. In fact, we may even insist that our educational efforts have been
unsuccessful to the extent that the distribution of achievement approximates the
normal distribution.”
If we accept this assertion, we may regard the three curve shapes in a hierarchical
order indicating effectiveness of instruction. Thus, a positive skew would indicate the
lowest level, the normal curve an average level and a negative skew the highest
level.
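One rough way of applying this reading of the three curve shapes is to compute the skewness of a mark distribution: a clearly positive coefficient corresponds to the lowest level of effectiveness, a value near zero to an average level and a clearly negative coefficient to the highest level. The sketch below uses invented marks, and the ±0.5 cut-off is only a common rule of thumb, not a figure taken from the text.

```python
import statistics

def skewness(scores):
    """Fisher-Pearson moment coefficient of skewness (population form)."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    return sum(((x - mean) / sd) ** 3 for x in scores) / len(scores)

def instruction_effectiveness(scores, threshold=0.5):
    """Read a mark distribution in the spirit of Bloom's argument."""
    g = skewness(scores)
    if g > threshold:
        return g, "positively skewed: most marks are low (least effective instruction)"
    if g < -threshold:
        return g, "negatively skewed: most marks are high (most effective instruction)"
    return g, "approximately normal: average effectiveness"

marks = [35, 42, 48, 50, 52, 55, 58, 61, 65, 72, 80, 88]   # hypothetical class marks
g, verdict = instruction_effectiveness(marks)
print(f"skewness = {g:+.2f} -> {verdict}")
```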
Excerpted from
Yoloye, E.A. (1998, September). Evaluation in Higher Education. Presented at the UNESCO
Workshop on Teaching and Learning in Higher Education, University of Ibadan, Nigeria.
Introduction
The aim of this presentation is to introduce the basic guidelines and pertinent issues
in evaluation practices in higher education, with special reference to Kenya. In the
area of teaching and learning, the UNESCO regional workshop held in Dakar,
Senegal, in March 1999 provided useful basic evaluation concepts, the various forms
of evaluation and the qualities of good evaluation tools. The forms of evaluation
identified by the Dakar workshop included: coursework, written examinations
(essays, multiple choice, etc.), oral and aural examinations, project work, laboratory
reports, class tests, direct observations (clinical education, teaching practice,
practicum), term papers and theses. The features and the qualities of a good
evaluation tool are as follows: validity, reliability, fairness, practicability, relevance
and economy.
This paper approaches evaluation in four ways. First, what is evaluation? Although
in institutions of higher education evaluation is a household word, it might be
helpful to consider just what it is. I define evaluation as a means of getting to
know the quality of learning and teaching in higher education. Second, what is the
institutional mission and philosophy? In what philosophical context does evaluation
take place? The mission of the institution guides the process of evaluation. In some
ways, the mission statements specify the measurable and qualitative attributes the
graduates should achieve in the course of their learning, which make the institution
unique. These attributes are reflected at the teaching/subject matter level in the form
of course major and minor objectives. In this context, the mission statement should
be known by all the teaching members of staff for operational translation and
interpretation. Third, of what internal use is evaluation? That is, what use is
evaluation to the institution and the student? The rationale for evaluation lies in the
internal management of teaching and learning in the institution. In a teaching and
learning situation, evaluation has accrued benefits to both the teacher and the
learner in that it enables:
• the teacher to identify what the learners know or do not know in order to facilitate
him/her to teach more effectively, and
• the learner to learn more effectively.
In practice, however, the use of evaluation in higher education is weakened by:
• teachers’ misconception of both the purpose of evaluation and the various forms
of evaluation;
• teachers’ setting of cognitively less demanding tasks that encourage students to
reproduce subject matter;
• teachers’ misuse of examinations to instil fear among students.
In addition, given the current large class sizes, lecturers have not been effective in:
Excerpted from:
Maritim, E. (1999, September). Evaluation in Higher Education. Presentation at the Regional Workshop
on Teaching and Learning in Higher Education, University of Witwatersrand, Johannesburg, South
Africa.
11.3 Evaluation of Teaching

EVALUATING TEACHING
As regards the evaluation of teaching, it can be done at at least two levels: that of
content (development and evaluation of curricula and syllabi), and that of methods. Given
the specific goals which the institutions should henceforth pursue, we have to ask
ourselves certain questions before preparing curricula. What links should they
(the institutions) maintain with those of developed countries? In other words, is any
development work done if one is content with teaching in the former the same
course contents as in the latter? The question of standards seems to us to be all the
more relevant as the neutrality and universality of scientific and technical knowledge
appear more and more pseudo-evident. Scientific and technical knowledge
has, as a matter of fact, a cultural impact in the sense that it more or less…
It is advisable to examine our students’ basic scientific knowledge and consider the
appropriate course content they need to make up for the inadequacies. If this
preliminary survey is not carried out, the education we are providing could end up as
a duplication of the knowledge already acquired by our students (formal knowledge,
empirical personal and social knowledge); besides, one might wonder if the scientific
and technical knowledge that we impart to them still addresses the issues that actually
arise from their relationship with their environment.
The problem connected with evaluating teaching methods is attributable less to the
lack or inadequacy of appropriate pedagogic techniques than to the negative
attitudes of certain teachers in this respect. Such evaluation inevitably has to be done
by observing the performance of teachers, because an objective assessment can be
made only on the basis of their conduct. With regard to the objectives of training,
qualification and assessment of lessons taught and their feedback, the evaluation
can induce the academic staff to improve their performance and thus enhance the
quality of teaching; it also helps to ascertain whether the objectives of a course or a
study programme have been attained or to discover the discrepancies between the
students’ expectations, the teacher’s intentions and the demands of the discipline.
All the same, with what tools can one make an objective assessment of a teacher’s
performance? There is in the first place the video technique which not only allows
for the observation of the teacher’s performance by a third person, but also
encourages self-observation. There is also the practice of course inspection by a
high-ranking professor or expert, analysis of syllabi by an administrative body and
course evaluation by students.
The assessment of teachers’ performance has often been resisted by some teachers,
and this deserves attention. The refusal can be explained as a resistance to
pedagogic innovation, a means of averting the risks of upsetting the “master” image
which the teacher enjoys and the established monopoly of learning power which the
master arrogates to himself in the classroom. However, this lack of conformity to the
pedagogic practice of evaluation seems to be linked to the cultural context, the
educational system, and the manner in which institutions and their teachers have been…
The inherited educational system actually allows for the assessment of teachers in
tertiary institutions only on the basis of their research works. Such evaluation has
the merit of trying to define certain objective criteria for assessment such as
publications and theses, and forcing young graduate assistants or assistant lecturers
to distinguish themselves in a discipline before occupying positions of responsibility.
On the other hand, it accounts for the preference given to scientific research to the
detriment of teaching. Most of the teachers who took part in the debate on scientific
research and teaching reached a consensus on the need for training through and for
research even though some of them wondered about the place of educational
research in the training institutions.
This explains the fact that, in the countries where teachers’ performance is
assessed, the practice is attributed to the search for the institutions’ internal
efficiency on account of the economic crisis and/or the phenomenon of student
unrest. The revival of educational practices as well as the evaluation of teachers
and teaching derives from this situation. That is why students now indulge in making
a quasi-systematic evaluation of their teachers. While it more or less provides
information on the quality of the pedagogic relationship (criteria for teaching and
research: characteristics of a course), it scarcely helps to evaluate the contents and
their level of assimilation – not least because the assessments are based on human
relations, the sole guarantee, in the eyes of the learners, for efficient teaching. This
experience now reveals that students’ assessments can measure the level of
conviviality and human warmth!
However, the assessment that students make of the instruction received is a source
of vital information for every lecturer who cares about improving his course. This
practice is fruitful because it is rare to find instances where no deductions for
improvement are made from the remarks given or from the analysis of students’ comments. A
policy on this type of evaluation can only be incorporated into a comprehensive effort
by an institution determined to provide quality education. This practice indeed goes
beyond the questionnaires filled by students to take into account all the other
pedagogic parameters. With all precautions taken (agreement on the validity of
questionnaires, objectiveness in data gathering), it is possible to ask the teachers…
Excerpted from:
Dia, O. (1998). Quality of Higher Education in Francophone Africa. In J. Shabani (Ed.). Higher
Education in Africa: Achievements, Challenges and Prospects. Dakar: UNESCO BREDA.
In the university sector, quality has been assured via professional accreditation
(where applicable) and through a peer-based system of external examination,
although in the latter case not uniformly so. A recent development is the
establishment by the Committee of University Principals (CUP) of a Quality
Promotion Unit. Overall, in the previous system quality assurance was erratic, the
use of external examiners inspired little confidence and quality was largely
determined by reputation.
Excerpted from:
Cloete, N. (1998). Quality of Higher Education in South Africa: Conceptions, Contestations and
Comments. In J. Shabani (Ed.). Higher Education in Africa: Achievements, Challenges and Prospects.
Dakar: UNESCO BREDA.
Evaluation of teaching, as we can glean from Readings 5.2 and 5.3, can be carried
out at the internal level and at the external level.
The internal or external evaluation of studies may be implemented by:
• each teacher concerned about giving information on the results of his or her own
work
• each institution wishing to better control the quality of its performance
• the local or national authority organising and funding educational systems
• international agencies, especially for comparison of the performances of
educational systems in various countries
• independent scholars and researchers.
Internal evaluation does not target the same objectives as external evaluation.
Ordinarily, internal evaluation seeks to measure and assess the pedagogical quality
and the costs of studies. Conversely, external evaluation focusses on the impact of
studies outside educational systems in connection with social, cultural, religious and
economic factors etc. Generally speaking, the internal evaluation of studies seeks to
measure and assess:
• the number of students to be moved to the next higher level of the course
• the number of students repeating the classes
• the number of students who drop out
• the number of students who successfully complete their studies
These percentages may be calculated by taking into account the students who
started classes in the same year. In this case, they are gross rates of internal
efficiency. The percentages may be established by taking into account all the
students attending the same classes but who did not start their studies at the same
time. In this case, we have net rates of internal efficiency.
Example:
In 1998-99, there were 205 students in the first year of medical school. Among these,
152 had enrolled for the first time in 1998-99 in the first year of medical studies. At
the end of the same year, 167 students had passed their exams, including 134 of the
152 students enrolled for the first time in 1998-99 and 33 among the 53 repeaters.
The gross rate of internal efficiency for promotion to the second year of studies in
1999-2000 is, in this case, equal to (134/152) × 100 = 88.16 %.
The net rate of internal efficiency for promotion to the second year in 1999-2000 is
here equal to (167/205) × 100 = 81.46 %.
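The two rates can also be computed with a short sketch; the function names are ours, but the figures are those of the medical-school example above.

```python
def gross_internal_efficiency(first_timers_passed, first_timers_enrolled):
    """Gross rate: passes among first-time enrollees / first-time enrollees."""
    return 100 * first_timers_passed / first_timers_enrolled

def net_internal_efficiency(total_passed, total_enrolled):
    """Net rate: all passes / all students in the class, repeaters included."""
    return 100 * total_passed / total_enrolled

# 205 students in the class: 152 first-timers (134 passed) and 53 repeaters (33 passed).
print(f"gross rate = {gross_internal_efficiency(134, 152):.2f} %")   # 88.16 %
print(f"net rate   = {net_internal_efficiency(167, 205):.2f} %")     # 81.46 %
```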
Generally speaking, the external evaluation of studies seeks to measure and assess:
• the percentage of degree holders (or fully trained students) who secure jobs
• the percentage of degree holders (or fully trained students) who have secured
jobs corresponding with their qualifications.
Examples:
At the end of the 1996-97 academic year, 37 students had completed their medical
doctorate dissertations. Of these, 29 found jobs but only 15 of the 29 got jobs in the
field of medicine in 1997-98.
The overall percentage of medical degree holders who secured jobs in 1997-98 is:
(29/37) × 100 = 78.38 %
The percentage of medical degree holders who found jobs in the health sector in
1997-98 is: (15/37) × 100 = 40.54 %
Activity 11.5
In 1991-92, there were 585 students in the second year of economics. Among these,
399 had enrolled for the first time in the second year of studies in 1991-92. By the
end of the year, 377 students among the 585 had qualified to enrol in the third year
of studies in 1992-93. They included 277 of the 399 students enrolled for the first
time in 1991-92.
1. Calculate the gross rate of internal efficiency for the second year of studies in
economics.
2. Calculate the net rate of internal efficiency for the same second year of studies in
economics.
Evaluation of teachers
There are several tools for the evaluation of teachers by learners. The tools cover
several areas, including:
• the academic background of teachers
• their professional skills
The choice of tools may vary depending on students’ ages or on the contacts and
connections which they have with the teachers to be evaluated.
Some of the most promising results of research in education concern how the content
of the evaluation of teachers by their students should differ according to the
ultimate purpose of the results of such evaluation. An evaluation conducted for
administrative purposes should not have the same content as a pedagogical
evaluation. It is generally admitted that students’ opinions are very useful in making
teachers aware of the strengths and weaknesses of their teaching, as well as of the
methods and strategies they use.
In discussing the place and the role of evaluation in teaching and learning in higher
education, the discussion groups identified several issues and observations. These
can be summarised as follows:
• Despite the cost implications, the external examiners system should be retained and
examiners’ roles expanded to include evaluation of the marking schemes, the type of
questions set, the curriculum and the teaching facilities, including laboratories and
textbooks.
• Moderation of examinations should be strengthened through the establishment of an
external moderation system and departmental moderation committees.
• The Commission for Higher Education (CHE) should strengthen quality control
and quality assurance of the academic departments through frequent visitation.
• As part of the course assignment, students should be required to carry out research
projects.
Recommendations
1. Expand external examiners’ role to include evaluation of teaching/learning facilities,
course textbooks and course syllabuses.
2. The Commission for Higher Education should carry out visitation and academic
audits of the academic departments for quality assurance.
3. Departments should recommend external examiners.
4. Use a variety of assessment tools to evaluate students.
5. Train lecturers in evaluation skills.
6. Provide in-service training for lecturers on educational theories and practices.
7. Minimise examination cheating.
References