Event Evaluation
Q1) Did you enjoy the event? If not, please state the reason.
Q4) What problems did you face during the event?
Q5) What could have been done to make this event better?
Q6) How do you rate the various services provided by us (please tick one option for each):
Hospitality: Excellent / Good / Average / Poor
Catering: Excellent / Good / Average / Poor
Transportation: Excellent / Good / Average / Poor
Management staff behaviour: Excellent / Good / Average / Poor
Management staff services: Excellent / Good / Average / Poor
Note: Your questionnaire should not have more than 10 questions; you don't want to irritate your guests. Ask only relevant questions and keep the questionnaire short, at around 5-6 questions. Of course, the type of questions you ask may change from event to event. And don't forget to include the following line in your feedback form: "Thank you for taking the time to complete this feedback form."
Evaluation is an activity that seeks to understand and measure the extent to which an event has succeeded in achieving its purpose. The purpose of an event will differ with the category and variation of the event; however, providing reach and interaction is a generic purpose that events satisfy.
Evaluation can be approached with two attitudes. The concept of evaluation stated above is a critical examination that digs out what went wrong. A more constructive focus for evaluation is to make recommendations about how an event might be improved to achieve its aims more effectively.
To carry out an evaluation and measurement exercise, it is essential that the predefined objectives of the event are properly understood. The brief should contain all the data to be communicated, since if an event has been organized without a clearly defined purpose, any evaluation is rather pointless.
Setting objectives for an event is easier said than done. It is even more difficult to set standards and to declare an event successful once it meets them. To give the problem some tangibility, the best approach is to begin by defining the target audience for whom the event is being organized. In the case of commercial events, the audience could be end users of the company's products. An event might be conceptualized to achieve different things for different audiences. Once the audience has been defined, the next step is to identify and put on paper what each audience is expected to think, feel and do after attending the event that it did not think, feel or do beforehand. This adds an element of tangibility to the evaluation and measurement proceedings.
The number of mega-events has increased dramatically in the past few years and the costs
of organizing events have also increased exponentially. The costs of production in major
events can be enormous and therefore, in the near future one can expect companies to
start asking questions about the effectiveness of their events to see whether their money is
being spent prudently.
Creativity is associated with the Greek word enthousiasmos, which literally translates as 'the god within'. Setting out to evaluate an effort that is considered to be the work of the gods themselves demands a certain amount of sensitivity during evaluation. Objective evaluation should also take into consideration the nature of the concept and the process of execution of the event in their entirety. However professional the evaluation, there is scope for error and misjudgment if sensitivity is not maintained, because it takes a creative and sensitive mind to spot wrong questions, or situations where asking questions might be the wrong method and observation might be more appropriate. One way of nurturing and encouraging this sensitivity is to place evaluation within the context of a team approach, all the way from conceptualization to execution of the event.
From experience, it is known that people involved in an event are more open-minded and less committed to any particular course of action before the event occurs. Another lesson is that, if things are shown to be wrong after a decision has been taken, the majority of people involved in the decision-making process may try to wash their hands of the fault. Adding sensitivity to the evaluation process is therefore very important.
The evaluation process thus involves three steps:
1. Establishing tangible objectives and building sensitivity into the evaluation
2. Measuring performance before, during and after the event
3. Correcting deviations from plans
The fundamental reason why event evaluation is carried out is to steer the event so as to ensure that the event objectives are achieved in full. And since deviations may occur during any stage of the event design phase, it is important that measurement is carried out at all possible stages.
Events can be evaluated against the critical success factors discussed below, from both the client's and the event organizer's viewpoints.
There are multiple criteria for evaluating the success of an event from the event organizer's point of view, over and above ensuring perfect reach and interaction for the client through on-time networking at the lowest cost. The client/event/target-audience fit should match the client's brand, product and company image and personality perfectly, keeping the target audience as the focal point; this is a very critical evaluation point. Ensuring the profitability of an event, achieving maximum profitability with minimum mark-ups, is another critical evaluation point. Since resources are also a major constraint for event organizers, resource management efficiency matters: the resources committed (financial, human, equipment and infrastructure) and the span of time for which they stay committed should be kept to a minimum. The number of staff and volunteers involved should be appropriate to offer quality service.
Logistics and efficiency of event execution, ensuring smooth proceedings without unnecessary delays and damages, is another critical success factor. Creating avenues for lead generation, and managing those leads properly during the event, is also critical: each completed event should generate more inquiries, and these should be responded to immediately. Opportunities to explain available synergies and to expand the services offered to the client, keeping strategic integration and diversification options open, are also important. Since an event is essentially a one-off affair and any last-moment problem can turn an exceptionally well-planned event into a disaster, all care needs to be taken during event execution. Yet another important critical success factor is the degree of localization or customization accommodated in the concept to suit the demographic and other variables of the various places where the event is to be carried out.
We have discussed earlier that the impact an event has on its target audience is equivalent to the measure of reach and interaction that occur during the event. Whereas reach is tangible, interaction is to some extent intangible and not always quantifiable. The immediate and long-term benefits that accrue from an event are important when evaluating it from the client's point of view. A cost-benefit analysis of the expected effectiveness of reach and interaction is a must as a pre-event activity, and a post-event stock-taking should confirm whether the event occurred as planned. This analysis should consider the actual cost of the event, including non-budgeted expenditure, as well as the actual benefits that accrued to the client from the event. The accrual of benefits can be judged by measuring the tangible parts of the objectives that have been achieved.
Measuring Reach
Reach is of two types: external reach and actual event reach. Since events require massive external publicity, the press, radio, television and other media are needed to ensure that the event is noticed and the benefit of reach is provided to the client. External reach can be measured using the circulation figures of newspapers and the audiences of promotions on television and radio. The DART and TRP ratings, which rate the popularity of the programmes around which the promotion is slotted, are a tangible though approximate method for measuring the external reach of a promotion campaign on television. Measurement of external reach should be tempered with the timing of the promotions, since the effectiveness of recall and the action initiated amongst the target audience depend heavily on this variable. For example, releasing ads and promos one month in advance should be considered more of an awareness exercise, propagating the event concept, time, date and venue to the audience. The entry criteria (free, invited or ticketed show) should be clearly mentioned at this stage. The measurement of the actual reach of an event is relatively simple. The capacity of the venue provides the upper limit for the actual reach, while ticket sales and numbers of invitees are direct measurement tools. Registering participants and asking them to fill in questionnaires are also common methods of measuring the actual reach of an event.
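As a rough illustration of how these two kinds of reach might be tallied, here is a minimal Python sketch. The media names, circulation figures, venue capacity and ticket numbers are invented for the example; real measurement would use actual circulation data, DART/TRP reports and ticketing or registration records.

    # Hypothetical figures for illustration only.
    external_reach = {
        "newspaper_circulation": 250_000,
        "radio_promo_listeners": 80_000,
        "tv_promo_viewers": 400_000,
    }
    # Summing circulation and viewership gives an upper-bound estimate; it ignores audience overlap.
    total_external_reach = sum(external_reach.values())

    venue_capacity = 3_000          # upper limit for actual reach
    tickets_sold = 2_650
    invitees_attended = 180
    actual_reach = min(tickets_sold + invitees_attended, venue_capacity)

    print(f"Estimated external reach (overlap ignored): {total_external_reach:,}")
    print(f"Actual event reach: {actual_reach:,} of a possible {venue_capacity:,}")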
Concept of event quality and measuring the quality of an event
Closely related to evaluating the effectiveness of an event is the concept of event quality. In essence, the quality of an event exists in the client's perspective and thus varies from client to client. Aiming for quality only by maintaining standards, preventing mistakes, never cutting corners and using top-quality infrastructure is looking at quality from a skewed angle.
Unless the target audience and the client perceive the quality of the job in the same way as the event organizers, the big picture of quality is not complete. It is therefore critical to match the client's expectations and experiences, down to the minutest details, to arrive at the perceived quality of the event. In matters of dispute, it is value to the client that finally matters.
For the client, quality of an event is a bundle of attributes. A few of these critical
attributes are quality and reliability of equipment used, aesthetic appeal, appropriate cost
and timely completion of the project.
Each client will care more about some attributes than others, so it is important to find out how clients would define quality event service. Competence in project management from conceptualization to execution, and the reliability and integrity shown in past events executed by the organizer, are very important quality criteria. Responsiveness to the client's requirements, i.e., empathy, mutual confidence and trust, is also used by clients to size up the quality of event organizers. In addition, an easy-to-work-with manner, personal involvement and the caring the event organizer exudes also help. Delivery of promises and deals should be ensured.
Every client expects the event to provide the ideal audience to associate with, impress and entice. Thus, the quality of an event can also be defined in terms of audience quality. Clients should focus on three major statistics that define audience quality (a small calculation sketch follows the list):
• Net buying influences: the ratio of the number of audience members who can recommend, specify or approve a purchase to the total population at the event.
• Total buying plans: the percentage of the audience planning to buy a product or service from the sponsor's stable within the 12 months following the show.
• Average audience interest: the percentage of the audience that shows an interest in the sponsor's products or services during the event itself and immediately after. This may be measured by keeping track of the number of visitors to the sponsor's stall or exhibit area during the event.
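To make these three statistics concrete, here is a minimal calculation sketch in Python. The audience figures and variable names are hypothetical, purely for illustration, and are not drawn from any real event.

    # Hypothetical audience figures for illustration only.
    total_audience = 1_200            # total population at the event
    can_influence_purchase = 420      # can recommend, specify or approve a purchase
    planning_to_buy = 180             # plan to buy from the sponsor within the next 12 months
    visited_sponsor_stall = 510       # showed interest during or immediately after the event

    net_buying_influences = can_influence_purchase / total_audience            # 0.35
    total_buying_plans = 100 * planning_to_buy / total_audience                # 15.0 %
    average_audience_interest = 100 * visited_sponsor_stall / total_audience   # 42.5 %

    print(f"Net buying influences:     {net_buying_influences:.2f}")
    print(f"Total buying plans:        {total_buying_plans:.1f}%")
    print(f"Average audience interest: {average_audience_interest:.1f}%")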
Was your event a success?
When writing this web page, I looked for some published research on this kind of
evaluation. I couldn't find much at all. Perhaps this kind of research seems too trivial to
take seriously. However, a lot of people put a lot of effort into these events; they are
genuinely interested in audience reactions, and how the event could be improved next
time.
Because not much seems to have been written on this, and because Audience Dialogue
has helped evaluate several hundred such events in the last few years, I thought it was
worthwhile to try to record some of the principles we've learned: how to do it, and how
not to do it.
Questionnaires filled in at the event are sometimes called "happy sheets," suggesting that the participants
give too favourable an opinion. The implication is that if they were asked the same
questions after the event, by a third party, opinions would be less favourable. Actually,
that's not what Audience Dialogue has found. If you ask audience members to rate the
event they've just attended on a scale between 0 and 10 - where 0 is the worst possible
rating, and 10 the best possible - the average answer seems to be around 7 out of 10, no
matter when the questions are asked or who asked them.
1. Always think of the event in its context. The study is never solely about the event as experienced by participants: that's just one part of the evaluation. Every event is done for some purpose, and those attending it usually don't know the full purpose. It's useful to think of events in terms of a program logic model, linking the inputs and activities of the event to its outputs and longer-term outcomes.
Considering the entire planning and effects of the event, you can see that a questionnaire
filled in on the spot produces only a small proportion of the information needed to
evaluate the event, and the program it forms part of. Consider all the people involved -
participants, those affected, and those who did not attend but were still affected in some
way. Even if an event is a flop at the box-office, it may still have important effects on
artistic life. Perhaps the spending on sets for a play helped to keep some precious skill
alive in the local area.
And even if this particular event wasn't a box-office success, and had no effects on
artistic life, it can still fulfil a broader purpose. For example, if a local drama group
produces an avant-garde play, this may help to attract the attention of distant funding
sources. (But if that's one of the purposes of such a production, the achievement of that
purpose shouldn't be left to chance: it should be sought as a planned outcome, in the
framework mentioned above.)
Broadening the context further still, consider benchmarking your event against others.
This is done by (i) gathering data in a standard format, then (ii) comparing the results for
your event with other results in the same format. One such format is the Transfer of
Training Evaluation Model (TOTEM) which can be used in a wide variety of educational
evaluation contexts. This can be found at the US Department of Energy's Knowledge
Transfer Website at www.t2ed.com, though benchmarking data doesn't seem to be
available there.
In general, I suggest that you try to find and use a standard evaluation scale, rather than
trying to develop your own. There are many pitfalls in developing a new scale, some of
which are not obvious till it's too late.
Another aspect of context is peer review. Other people and organizations that produce
events of this kind can be useful sources of evaluation - even if they are biased. Though
they'll all have different viewpoints, if a wide range of experts agree on a criticism, you'd
better take it seriously. For successful peer evaluations, you need to have 3 or more
people present, with experience in the same type of event. Get them to fill in a special
questionnaire (based on the same one that ordinary participants fill in, but with extra
questions), and see if they all agree about the strong points or weak points of your event.
If they do, you should take notice.
Even when you have all this information, you can still be left wondering. Perhaps you
asked participants to rate the event on a scale of 0 to 10, and the average rating was 7 out
of 10. Is that high or low? (In fact, it's a little below average - based on our results from
hundreds of surveys). Unless you have a context to place the results in, such figures will
be meaningless. This is an argument that lots of little evaluations are more useful than
one big one.
For long courses and events - more than about half a day - participants often forget
suggestions they thought of making. And if communication isn't working well in a course
that runs for a week, it's no help to discover that at the very end. Quick evaluation
sessions - using both written and spoken form - at the beginning or end of each day can
be very helpful.
Mix multiple-response questions (easy to answer, but not very informative) with open-ended questions (which yield more valuable responses, if people take the time to think about their answers and if the questions are fully relevant).
Keep the questionnaire to one page (A4 or letter size) if possible, but definitely to one sheet of paper. If both sides of the paper are printed, write PTO or OVER or MORE at the bottom right of both pages. Consider the nature and size of the surface that people will have to write on. For example, if they are sitting in theatre-type seats without table tops, will they rest the questionnaire on their knees to fill it in? In that case, maybe it should be printed on card, not on thin paper.
Open-ended questions should span a range of generality. For example, if you ask the very
general question "What other comments would you like to make about this seminar?"
nobody's comments are excluded, but many people will not have time to think of
comments. (Usually, at the end of an event, most people are in a rush to leave.) On the
other hand, if you ask only specific questions, such as "Which slides, if any, had writing
that was too small for you to read?" people who had problems you hadn't expected will
have nowhere to give an answer. The solution: use both types of open-ended question, the
specific as well as the general.
Ask behaviour questions as well as attitude questions. Questions such as "How would
you rate the quality of tonight's performance, on a scale of 0 to 10?" are about attitudes or
feelings. While these are perfectly valid, they don't necessarily relate to future behaviour
- which may interest you more. Perhaps what you really want to know is "If we put on
another play like this in a few months' time, how likely are you to attend?" A behavioural
intention question like that, though far from a perfect prediction, normally produces more
useful results than an attitude question.
Other useful behavioural intention questions are along the lines of "What changes will you make in your organization as a result of attending today's workshop?" A list of actions can then be presented, and respondents invited to tick those that apply. The interesting thing about this approach is that it can be (for some people) self-fulfilling: the act of making the choice on the questionnaire can actually cause them to carry out their intention.
Even more accurate than behavioural intention is behavioural reporting. For this to work, you could collect respondents' names and addresses on the questionnaire given out at the event, and ask their permission to be recontacted later. A month or two later you can recontact those respondents and ask what they have actually done as a result of attending the event. If the results are favourable - that is, if the respondents have done what they said they'd do - this can be a very powerful argument for seeking more funding.
Though the questionnaire's wording and layout are important, its environment is even more important. Ideally, you want everybody present to fill in their questionnaire, and you want honest answers from them.
Imagine you're a member of the audience at a seminar. What you heard and saw over the
last hour or two was quite interesting, but it's getting late now, and you have to go home
and cook dinner for the children. Everybody is asked to pick up a questionnaire on their
way out, fill it in, and put it in a box. You don't really feel like doing this (it seems hardly
worth the effort to simply record "It was OK") but the compere asked everybody to make
the effort and fill in the form. So you pick up a form off the heap as you leave. It's long -
about 20 questions - and they seem to be repetitive. Some look quite difficult to answer,
but obviously worthwhile. They want your comments or suggestions for "next time" - not
that you plan to come along "next time." Some of the wording is hard to understand.
For example, one question was "How adequately did the presentation meet your learning
expectations?" This was to be answered on a 0-10 scale. So, when you have figured out
exactly what this question is asking, what might a score of 10 out of 10 mean? "I
expected it to be perfect, and it was." Or (equally valid) "I expected it to be useless - and
it was." In practice, the question was so opaque that you didn't read it very carefully, and
just gave a general rating out of 10 based on what you thought you had learned from the
seminar.
So you decide to fill it in (giving that question 7 out of 10), but then you realize you don't
have a pen with you. Maybe you can borrow one. Also, you have nothing to rest the
paper on when you fill it in. People around you are putting their forms up to the wall, and
trying to write with ballpoints - which don't work well unless pointing down. The lights
are dim, and the questionnaire is printed on blue paper - very hard to read. Also, you can't
see the place where the presenter said the completed forms should be left. So you put the
questionnaire in your bag, and take it home. Maybe you'll fill it in later tonight, and mail
it back to the organizers tomorrow.
But when you get home, there is a minor crisis (perhaps the cat was sick) and you forget
to fill in the form. You put it away for later, then lose it. A week later, it surfaces in a
heap of paper. By then you've forgotten what you were going to write, there's no address
on the questionnaire for you to mail it back to, and by now it's probably too late anyway.
Still, you're reluctant to throw it out, so you move it into a heap of papers that you might
think about some day. Maybe six months later, you find it and finally throw it out.
That story (not so uncommon) shows why response rates for event evaluations are often so low. Organizers have been known to congratulate themselves for getting a 20% response rate, in the false belief that the average is only 3%. If 100 people attend a seminar, and only 20 forms are returned, what did the other 80 people think? Did they believe the seminar was so great that they had nothing to add? Did they think it was so terrible that they'd be embarrassed to hand in a form full of criticism? Or were they so underwhelmed that it made no impression on them at all? The organizers will never know.
That's why it's vital to get a high response rate. If you get at least two thirds of the
questionnaires back, the other third of the audience would need to have very different
opinions to make a large difference to the results. And the way to get that two-thirds
response is to remove the barriers that prevent people from completing and returning the
questionnaire. The steps needed can be grouped into five main headings:
1. Make the questionnaire easy to fill in.
- Keep the questionnaire short and relevant.
- Avoid questions that need a lot of thinking time (unless you distribute the questionnaires before the event begins).
- Also avoid questions that encourage an instant, thoughtless response.
3. Encourage response.
- The more strongly the presenter encourages people to fill in the forms, the better the response rate. However, the presenter should ask respondents to be critical, and should not collect the questionnaires in person.
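The two-thirds argument above can be checked with simple arithmetic. The small Python sketch below, using invented figures, shows how little the overall average rating can move even if every non-respondent held a noticeably worse opinion than those who returned their forms.

    # Hypothetical figures for illustration only.
    attendees = 100
    respondents = 67
    respondent_avg = 7.5       # average 0-10 rating among returned forms
    nonrespondent_avg = 5.0    # pessimistic assumption about the silent third

    overall_avg = (respondents * respondent_avg
                   + (attendees - respondents) * nonrespondent_avg) / attendees
    print(f"Overall average under the pessimistic assumption: {overall_avg:.2f}")
    # Even then, the average only falls from 7.5 to about 6.7.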
Triad discussions
Though multiple-choice questions on an evaluation questionnaire enable the comparison
of different events (e.g. in a series of events), they don't provide useful information for
improving an event. If all you know is that 73% of respondents disliked the event, how
can you use this information? You can't, and that is why you need to include open-ended
questions.
But open-ended questions have their problems too. Because they rely on respondents to
think of their own answers, you tend to get a lot of unique responses. This makes it hard
to summarize the results. If 3% of respondents commented favourably about some aspect
of the event, does that mean the other 97% disliked it? Or didn't they even notice it?
Another common problem with open-ended questions is that people write cryptic
comments. They know what they mean, but to the person processing the completed
questionnaires the answer is unclear or ambiguous. This is usually because the answer is
too short.
After thinking about these problems, I developed a solution: triad discussions, in which respondents discuss the open-ended questions in groups of three and write down an agreed answer for each, rather than answering individually.
If you have read about our consensus group technique, you'll recognize the method of
triads as a miniature consensus group.
Groups of four or more people can also do this. However, the larger the group, the longer
it takes - and there's usually not much time left for an evaluation at the end of an event.
The triad process can often be finished in five minutes, for an event questionnaire with up to about 6 open-ended questions.
If the event is some form of training, the Kirkpatrick model will apply. Donald
Kirkpatrick (in his book Evaluating Training Programs, published by Berrett-Koehler,
San Francisco, in 1994) described a 4-level model for evaluating the success of training...
1. Were the trainees pleased with the event? This is an aspect of customer
satisfaction, as commonly assessed in the kind of survey mentioned above.
2. How much did they learn? This can be assessed by educational tests, exams, etc.
3. How much did they change their behaviour? In the case of industrial training, this
can be assessed by supervisors, on-the-job performance measures, etc.
4. How much did that changed behaviour contribute towards the organization's
goals? (E.g. a training department would hope that its activities increased the
organization's operating efficiency).
With the Kirkpatrick model, success at each level depends largely on success at the
previous levels. If the trainees didn't like the course, they probably won't learn much. If
they don't learn much, they probably won't change their behaviour. And if they don't
change their behaviour, the organizational goals for the course probably won't be
achieved. Notice the word "probably" - there might be the odd exception, but it's much
harder to achieve a higher level of success if the lower levels haven't also been achieved.
The higher the level, the more difficult it is to be sure how much difference the course
made. Participant satisfaction is easily measured, but it's often not clear to what extent a
course might have increased a company's profit. For that reason, success at Kirkpatrick's
Level 4 is often judged too difficult to assess.
When we tried the Kirkpatrick model, we found that it omitted some important questions that Kirkpatrick perhaps took for granted...
• Was the event actually held, in the way that was planned? (For a body that's
funding an event far away, organized by others, this can't be taken for granted.)
• Did people actually attend, of the type and in the numbers planned?
• What indirect influences did the event have - other than on those who attended?
(For example, more benefits may come from personal networks formed at a
conference than from participants acting on the conference papers.)
When answers have been gathered for the above questions, interesting cost ratios can be worked out, such as how much it cost per person attending to achieve the goals of the event. If that figure seems unduly high, it's worth considering a different method of achieving the goals.
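As a minimal sketch of such a cost ratio (with invented figures and hypothetical variable names), dividing the total event cost by the number of attendees, or by the number of attendees who matched the target profile, gives per-head figures that can be compared against alternative ways of reaching the same people.

    # Hypothetical figures for illustration only.
    total_event_cost = 450_000.0       # actual cost, including non-budgeted expenditure
    attendees = 1_200
    target_profile_attendees = 800     # attendees of the type the event was planned for

    cost_per_attendee = total_event_cost / attendees                         # 375.00
    cost_per_target_attendee = total_event_cost / target_profile_attendees   # 562.50

    print(f"Cost per attendee:        {cost_per_attendee:.2f}")
    print(f"Cost per target attendee: {cost_per_target_attendee:.2f}")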
If you've read our page on program logic models you'll realize the direction this is
heading: there's nothing as useful as a logic model in evaluating the success of anything -
including a simple event.