Performance Measurement Design Impact
Allan Hansen
Copenhagen Business School
Solbjerg Plads 3
DK-2000 Frederiksberg
[Link]@[Link]
Abstract:
Knowledge of how and to what extent the design of the performance measurement system adds organizational value is complex. The complexity arises for several reasons. One reason is that the number of design options available is high. Another is that organizational value is a multi-faceted phenomenon, and a particular design may have a positive effect on one facet but a negative effect on another facet of organizational value creation. This paper contributes to research by providing an integrated framework that links a set of key design choices (with respect to the choice of performance measures and performance targets) with their potential effect on organisational value. In extant research, only limited attention has been directed towards juxtaposing the multiple choices involved in performance measurement system design and making the design choices comparable and relatable to their potential effect on organisational value. This paper addresses these issues and seeks to consolidate and advance knowledge of how different design choices in combination make up the performance measurement system design and affect organizational value creation.
Keywords: Performance management, choices of performance measures, target setting, social mechanisms,
organizational value.
1. Introduction
Performance measurement system design has been debated intensely by scholars for decades.
Research is full of illustrations of how performance measurement can be counter-productive and
lead to myopia, alienation, and manipulation (Ridgway, 1956; Blau, 1965; Johnson & Kaplan,
1987; Jensen, 2003). In contrast, research has also argued for the potential and hope of performance
measurement systems in terms of improving planning and communication and creating higher efficiency and learning effects (Jönsson & Grönlund, 1988; Eccles, 1990; Simons, 1995; Kaplan & Norton, 2004). The question of the organizational value of performance measurement system design is certainly a complex one, but research clearly illustrates that the design of the performance measurement system matters and entails significant costs and benefits for the individual
organization.
The aim of this paper is to review a set of design choices related to performance measurement
systems and provide a framework for analysing their organizational value. The design choices
addressed in this paper are generic and in combination they shape a quite comprehensive
framework that can be used for characterizing the design of a performance measurement system in
an individual organizational setting. Many of the design choices have already been addressed in research, and issues of the individual design choices and their effect on organizational value have been investigated. However, research typically tends to focus on a single design choice or a certain aspect of organizational value. The aim of this paper is to juxtapose the multiple design choices that make up performance measurement systems and to trace potential effects of the
choices to the various dimensions of organizational value. By doing so the paper seeks to provide a
more integrated view on performance measurement system design that directs attention towards
how a series of design choices in combination will determine the value of the performance
measurement system for the organisation.
The design choices discussed in this paper relate to two basic steps in designing any performance
measurement system: 1) choice of performance measures, and 2) target setting. These steps
represent two fundamental questions: On which scale is performance to be measured? What is the
target or standard by which good performance is distinguished from poor performance? The
analysis provided in the paper addresses a set of general design choices related to each of the steps
and outlines propositions for each of them in regard to their impact on organizational value.
The way in which the design choices’ effects on organisational value are analysed is based on four
criteria: distortion, risk, manipulation and measurement costs. The four criteria have roots in
organisational economics (Milgrom and Roberts, 1992; Roberts, 2003; Lazear and Gibbs, 2009) and
have previously been mobilised in discussions of performance measure properties (Bouwens & van
Lent, 2006; Gibbs et al. 2009; Moers 2006). The four criteria will in this paper be characterised as
four types of transaction costs that are related to using a performance measurement system as a
management tool for coordination and motivation in organisations (e.g. Milgrom & Roberts, 1992).
Thus, the paper’s perspective on measurement system design is that it is in principle not possible to
design the perfect and frictionless system. Performance measurement systems will always create
some distortion, risk, manipulation opportunities and measurement costs. Therefore, the overall
criterion for design of performance measurement systems is to reduce the transaction costs of the
system as much as possible, and the four types of costs become the multiple dimensions along which
the best design can be sought.
This paper contributes to research by providing an integrated view of the design choices across
different steps in performance evaluation (a set of general choices relates to choices of performance
measures as well as targets) and indicates how combinations of various design choices may affect
organisational value. In extant research, the discussion of design choices and their effect on
organizational value is often characterized by a focus on individual design choices or on some
dimensions of organisational value. This paper, however, tries to convey a broader perspective on
the discussion of design choices and thereby facilitate an analysis of the combinations of different
types of design choices. This can help in the formulation of new research questions, and for
practitioners, the broader perspective can be useful because it creates an awareness of the different
alternatives that the organisation is confronted with when it comes to performance measurement
system design.
The remainder of the paper is organised into seven sections. The next section outlines how design
choices and design criteria are conceptualised in this paper. Section three clarifies how
organisational value and performance measurement systems are linked in this paper. Section four
introduces and defines the four design criteria one by one. Section five goes through four design choices related to the choice of performance measures and outlines propositions of how they affect
organisational value. Section six discusses target setting and the impact that the four design choices
related to target setting may have on organisational value. Section seven summarises the findings
and provides an overview of the mechanisms related to the design choices that affect organisational
value. Section eight concludes the paper.
Even though a designer does not explicitly reflect on and consider all the options related to the
design of a performance measurement system, choices are made (consciously or not). And the
choices made give the performance measurement system a distinct character in the individual
organisation, which determines how valuable the system will be for the organisation.
The design choices discussed in this paper relate to two steps of any performance evaluation: 1)
choice of performance measure, and 2) target setting. The issues of choosing performance measures
are divided into matters related to the choice between a) one-dimensional and multi-
dimensional measures, b) individual and collective measures, c) objective and subjective
measures and d) absolute and relative measures. The issues of target setting are conceptualised in
terms of a) objective and subjective targets, b) absolute and relative targets, c) target setting and
actual performance and d) achievable and difficult targets.
Obviously, the design choices discussed in this paper are not all-inclusive. The list of choices does
not represent all issues that the manager faces with regard to the design of performance
measurement systems, but it represents the most common ones in practice and the ones typically
addressed in research. Furthermore, the design choices discussed in this paper have a fairly general
character and are not a reflection of the specific design choices that managers are quite often confronted
with: “Should we choose a measure of customer loyalty over a measure of service call rate in our
performance evaluation or include them both?” Or “should the target for the sales budget be 100 million
euro or 120 million euro?” Nevertheless, the design criteria introduced in this paper also apply to the more
specialised and concrete design choices and thus, the mechanisms discussed in this paper can be used for
assessing these alternatives as well.
Finally, it is worth noting that the alternatives represented by the design choices discussed in this paper
(e.g. objective versus subjective measures) are not always an either/or. Many of the alternatives discussed in
this paper may be used together. For example, both individual and collective performance measures can be
included in the same performance measurement system. In addition, several design choices within the
individual steps will be mixed. For example, the individual and collective measures can be both objective
and subjective measures. Furthermore, design choices related to the two different steps in the performance
evaluation process will be combined. For example, different design choices of performance measures will be
mixed with different design choices of target setting. Thus, a wide range of combinations of design choices
related to the two steps exists in companies. Which one is most valuable depends on the individual
organisational setting.
The point of this paper is that the effect of the eight design choices in terms of organisational value
can be related to all four criteria: distortion, risk, manipulation and measurement costs. Thus,
choosing a particular design when it comes to performance measures (going from one-dimensional to
multi-dimensional) or target setting (choosing subjective over objective target setting) does not only
affect the distortion of the system, but also risk, manipulation and the measurement cost. It is
therefore the impact that the individual choice has not only on one but on all of the design criteria that
is important for understanding how a particular design affects organisational value creation. Figure 1
below outlines this connection between the design choices and the design criteria.
Design choices
• Choice of measure: one-dimensional vs multi-dimensional; individual vs collective; objective vs subjective; absolute vs relative
• Target setting: objective vs subjective; target vs actual; absolute vs relative; high vs low achievability

Design criteria
• Distortion
• Risk
• Manipulation
• Measurement costs
Figure 1: Linking design choices and design criteria of performance measurement system
The idea in this paper is that listing the series of design choices can help in creating an awareness of the
design alternatives that exists when it comes to performance measurement systems. This is important for
formulating research questions and design analyses but also for practitioners who do not necessarily direct attention to all the design alternatives that may have a value-creating effect. Some may ‘black box’ the performance measurement system design too quickly and disregard the many different opportunities that actually exist to improve the value of the system or the many risks that exist in terms of losing
organisational value. In this respect, a framework that integrates a series of design choices and illustrates
their potential effect on organisational value can be used for guidance and navigation in the process of
analysing and designing the performance measurement systems.
This paper is not suggesting that the effects of the design choices on organizational value described
are general effects that represent social laws. Thus, it is impossible to state that for example going
from a one-dimensional to a multi-dimensional performance measurement system will always
reduce distortion or that going from an individual measure to a collective measure will always
increase risk. In contrast, the effects described in relation to the design choices are all considered to
be propositions. The assumption is that the value effects described and discussed in the paper are
frequently occurring causal patterns related to the individual design choices, but that their
occurrence and impact within the individual organisational setting are dependent upon the specific
conditions in the individual setting.
In this paper, these causal patterns are conceptualised as mechanisms in accordance with the large
stream of research on social mechanisms (cf. Merton 1968; Elster 1989; Hedstrom and Swedberg
1998; Stinchcombe 2005; Espeland & Sauder, 2007). This paper draws primarily on Jon Elster’s
definition of a social mechanism that is defined as ‘frequently occurring and easily recognisable
causal patterns that are triggered under generally unknown conditions or with indeterminate
consequences’ (Elster, 2007: 36). At least three aspects of this definition provide more insight into
how the effect of a design choice of a performance evaluation system can be understood.
First, Elster underlines in his definition that what characterises social mechanisms are ‘recognisable
causal patterns’. As will be described in the following sections, the literature is full of descriptions
of effects of a wide range of design choices and they stand as ‘easily recognisable causal patterns’
affecting organisational value. This means that they are well known effects of a given design
choice, but that the question of how and to what extent they occur in practice is tricky. In addition
to this, Elster emphasises that social mechanisms are ‘triggered under generally unknown
conditions’ and that their emergence is linked with ‘generally indeterminate consequences’. Thus,
the implication of his definition is that the conditions under which design choices affect value in a
certain way are to be found in the individual organisational setting and that the mobilisation and
combination of the value effects depend on specifics within the individual organisational setting
which are impossible to generalise.
Thus, a mechanism is like a causal switch that, if it is turned on, directs action in one way rather
than another. The challenging issue is to understand the conditions under which the switch is turned
on and analyses of mechanisms produce deeper causal knowledge of social relationships and their
dynamism (cf. Merton 1967, 1968; Elster 1989; Hedstrom and Swedberg 1998; Stinchcombe 2005).
The focus in this paper is, however, not to convey deep insight into the individual mechanisms, but
to produce an overview of a wide selection of mechanisms that relate to performance measurement
system design and have organisational value effect. This can be considered a first step of
understanding performance measurement system design. The next step will be to explore more
carefully the potential effects in specific empirical settings and illuminate to what extent
mechanisms will be activated.
In this paper, the coordination and motivation problems in organisations are used as the point of
departure for analysing the organisational value of a performance measurement system. Coordination
and motivation can be portrayed as problems that arise due to decentralisation and the division of
labour in organisations (Milgrom & Roberts, 1992). Decentralisation is often undertaken to exploit the
specific knowledge and increase the efficiency of decision-making (Jensen & Meckling, 1995), but
with specialisation and distribution of decision rights come the problems of coordination and
motivation. In this paper, the coordination problem is defined in parallel to Milgrom and Roberts as:
’to determine what things should be done, how they should be accomplished, and who should do what. At the organizational level, the problem is also to determine who makes decisions and with what information, and how to arrange communications systems to ensure that the needed information is available’ (Milgrom & Roberts 1992:126)

Correspondingly, the motivation problem is defined as:

’to ensure that the various individuals involved in these processes willingly do their parts in the whole undertaking, both reporting information accurately to allow the right plan to be devised and acting as they are supposed to act to carry out the plan’ (Milgrom & Roberts 1992:126).
The performance measurement system is often highlighted as a tool that adds organisational value
because it copes with these two problems (Jensen & Meckling, 1995). In other words, the benefits of
implementing a performance measurement system in an organisation are caused by the fact that it
communicates to the agents what tasks, activities and goals are important for the organisation and identifies low and high performers. Furthermore, by measuring the performance of the agent, his or
her efforts are monitored and can be rewarded, which conveys a way to cope with the motivation
problem and align the interests of the owner with the interests of the agent.
Unfortunately, realising these benefits also comes with a cost. Paradoxically, the attempt to resolve
coordination and motivation problems through performance measurement systems may produce new
coordination and motivation problems. These problems are to a wide extent a consequence of the
performance measurement system’s imperfection and they can be summarised by four types of
costs:
• Distortion
• Risk
• Manipulation
• Measurement costs
These costs are here characterised as the transaction costs of using performance measurement
systems for coordination and motivation. Subsequently, the question of the valuable design is then
converted into a question of how to minimise the transaction costs of the system design. In this
respect, the four types of transactions costs become four design criteria that can be used to assess
performance measurement system designs.
Analysing system design in this way, however, builds on a set of prerequisites. First, it presupposes that the benefits somehow exceed the costs of using performance measurement systems for motivation and coordination in the individual organisation. Otherwise, the organisation would be better off not using performance measurement. Alternatives to performance measurement can be mobilised in terms of, for example, culture or self-management. Analysing the value of these alternatives – in comparison with performance measurement – is, however, not within the scope of this paper.
Second, the four criteria applied in this paper are assumed to represent a fairly large share of the
issues that affect the organisational value of performance measurement system design. Other
criteria can of course also be mobilised. For example, psychological aspects like the value of the
feeling of identity and role clarity that performance measures also provide and cognitive aspects
like understandability of performance measurements can lead to other types of criteria.
Nevertheless, if other aspects are considered to be so effective that they significantly change the
cost and benefits of system design (compared to propositions that the present analysis comes up
with) they can of course always be added to the discussions. In this paper, the four criteria
representing four types of transaction costs are used because they as a whole are assumed to
generate a relatively comprehensive analysis when it comes to understanding measurement
designs’ value effect.
Finally, the implication of using a transaction cost economics perspective is also that performance
measures are considered never to be perfect. There is no such thing as a frictionless performance
measurement system. There are always costs involved in their use for resolving coordination and
motivation issues in organisations.
The first criterion – distortion – relates to the incompleteness of performance measurement systems when it comes to communicating what should be done in order to create value for the
company. The company’s value-creating tasks, activities and goals are not always completely
communicated through the performance measurement, which distorts information from the
principal to the agent. Thus, this criterion is then about the problem of coordination by
means of performance measures. The next two criteria – risk and manipulation – are related to
the company’s motivation problem. Risk is about how well the employee’s effort is reflected in
the performance measure. When risk factors out of the individual agent’s control on the
performance measured, the agent’s compensation for the effort is likely to be affected. This risk is
often assumed to be demotivating for the individual, unless the agent is compensated for this.
Manipulation is about the temptation that the agent is faced with regarding exploitation of the
asymmetric information relationship between agent and principal. Finally, measurement costs are
the costs related to the implementation and maintenance of the evaluation systems.1
4.1. Distortion – about the incomplete communication of value-creating tasks, activities and goals
Obviously, the communicative role of the performance measurement system also introduces the risk that it excludes tasks, activities and goals, measures tasks and goals incompletely or sets performance targets and weightings incorrectly. Precisely these kinds of incomplete specifications
result in distorted behaviour when the agent’s actions and decisions are coordinated through the
performance measurement system. Generally, the distortion problem can be divided into four parts: 1) partial value creation, 2) multi-tasking, 3) externalities, and 4) adaptation. All four matters affect the completeness by which the performance measurement system communicates the tasks, activities and goals that create value for the organisation.
4.1.1. Partial value creation – the value of the things being measured?
One thing that can distort a performance measurement system is if it measures and directs attention
towards tasks and activities that are not valuable for the organisation. There are numerous examples
of non-value-adding activities in organisations. Some are unavoidable and give the organisation a
licence to operate, for example tasks and activities that are regulated by law or various policies.
However, there are also examples of activities and tasks that reflect old habits, routines and
imitations that are not really valuable for the company and that the company would be better off
without. Identification of those raises basic questions of what ‘the true value’ for the organisation is
all about. These questions can be hard to answer and require reflections on the organisation’s costs
and benefits of the individual task or activity, or the strategic relevance of a goal.
Several management concepts and models have been developed over the years to facilitate the
analysis of valuable activities, tasks and goals for the organisation. One example is the value-stream
analysis in lean manufacturing, where the aim is to identify value-adding and non-value adding
activities in the organisation’s value stream (Womack & Jones, 1996). Another example is the
discussion of strategy maps or cause-effect relationships in the performance measurement literature
(e.g. Kaplan & Norton, 2001). In this part of the literature, the purpose is actually to map
causal relationships between the company’s activities and goals (Rucci et al. 1998; Epstein et
al. 2000; for example, see Kaplan and Norton 2004). Here, the individual’s, the team’s or the
division’s performance can be traced in a larger value chain or business model which explains how
and why the performance with regard to a task, activity or goal creates value for the organisation
overall.
Figures 2.a and 2.b below illustrate the issue of partial value creation of a selected task in an idealised form.
In this case, the task is defined as the creation of customer satisfaction. Figure 2.a depicts the
benefit function, b(x), and the cost function, c(x), of customer satisfaction. Benefit function
b(x) means the benefit that the organisation has at a given level of customer satisfaction (x). Customer
satisfaction creates benefit for the individual organisation for several reasons. For example, satisfied
customers result in resale. Furthermore, satisfied customers are also willing to pay a
higher price for the company’s products. The figure also shows that the marginal benefit of
customer satisfaction is declining (b(x) is a concave function). Thus, the additional benefit received
by the organisation by increasing satisfaction is declining. This means that an increase in customer satisfaction from 40% to 50% results in relatively more additional resale than an increase from 80% to 90%.
Cost function c(x) means the costs of the resource consumption for the production of customer
satisfaction at a given level. Here, we assume that the costs for production of customer satisfaction are
marginally increasing (c(x) is a convex function). This means that it requires relatively fewer resources
to increase satisfaction from 40% to 50% compared to an increase of satisfaction from 80% to
90%.
[Figure 2: b(x), c(x) and v(x) plotted against customer satisfaction (x); the 70% and 90% satisfaction levels are marked.]
Figure 2.a.: Benefit and cost curve for a dimension of nonfinancial performance (customer satisfaction)
Figure 2.b.: The value function for a dimension of nonfinancial performance (v(x) = b(x) - c(x))
Figure 2.b. illustrates the partial value function of the task. This function is derived from the benefit and cost functions in figure 2.a., and it is defined as the benefit of the task minus the
costs of the task at a given level of customer satisfaction. Figure 2.b. shows that customer
satisfaction creates value for the company as long as it does not exceed 90% and that
the maximum value creation is found at a customer satisfaction level of 70%. This means
that if the company chooses to increase customer satisfaction from a level of 70% to 80%, for
instance, the additional costs associated with reaching this level will exceed the additional benefit
associated with the change. Therefore, this is not worthwhile for the company. Thus, the company
creates the most value regarding this performance dimension by positioning itself at a
performance level of 70%.
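This logic can be sketched numerically. The functional forms and parameters below are hypothetical (the paper does not specify b(x) and c(x)), so the computed optimum is illustrative only; the point is simply that with a concave benefit and a convex cost, the value-maximising level of customer satisfaction lies below the maximum attainable level.

```python
import numpy as np

# Hypothetical, illustrative curves: a concave benefit of customer satisfaction
# and a convex cost of producing it (neither is taken from the paper).
def benefit(x):          # b(x): diminishing marginal benefit of satisfaction
    return 100 * np.sqrt(x)

def cost(x):             # c(x): increasing marginal cost of satisfaction
    return 110 * x ** 3

def value(x):            # v(x) = b(x) - c(x), the partial value function
    return benefit(x) - cost(x)

x = np.linspace(0.0, 1.0, 1001)    # satisfaction levels from 0% to 100%
v = value(x)

x_star = x[np.argmax(v)]           # value-maximising satisfaction level
print(f"value-maximising satisfaction: {x_star:.0%}")
print(f"partial value at 100% satisfaction: {value(1.0):.1f}")  # negative: maximal satisfaction destroys value
```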
Obviously, the entire discussion about the value creation of the individual performances as regards
their benefit generation and costs also implies that a performance might very well have a non-existent or even negative benefit function, or a cost function that far exceeds the benefit function in value. This means that the value creation of the performance would be negative and therefore not
desirable for the company.
Figures 3.a and 3.b below illustrate the main problem related to partial value creation: the connection
between the tasks that are the objects of the performance measure and the company’s value creation.
Figure 3.a. No partial value creation
Figure 3.b. Partial value creation
4.1.2. The multi-tasking problem – the performance measurement system’s representation of the agent’s complete task portfolio (multi-tasking)
The multi-tasking problem is related to the matter of whether the performance measurement system
includes all the tasks that the individual agent has to handle as part of his/her job. Most agents do not
just have one but several tasks to handle in their job function. This places demands on the
performance measurement system, and often, it can be difficult to identify and measure all tasks of the
individual agent’s job during a given period.
Figure 4 below illustrates the multi-tasking problem related to performance evaluation systems.
Figure 4.a. shows an incomplete performance measure, assuming that the agent has to handle two
tasks during the period in order to ensure the value creation of the company. The measure is
incomplete, as it only measures the performance for task 1, whereas the performance related to
task 2 is excluded. Figure 4.b. illustrates a complete performance measure, measuring the
performances of both tasks.
Figure 4.a.: Incomplete performance measurement in a multi-tasking situation
Figure 4.b.: Complete performance measurement in a multi-tasking situation
The problem with the incomplete measure is that the agent’s behaviour is distorted with regard to the
company’s value creation, as the agent will often only focus on the tasks/targets included in the
performance evaluation system. This distortion is costly for the organisation.
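The distortion can be illustrated with a small, stylised calculation. In the hypothetical example below an agent splits one unit of effort between two tasks that are equally valuable to the organisation; the functional forms and weights are assumptions made for illustration, not taken from the paper. When only task 1 is measured and rewarded, a payoff-maximising agent directs all effort to task 1 and organisational value suffers.

```python
import numpy as np

def measured_output(effort):
    return np.sqrt(effort)                    # diminishing returns to effort on each task

def org_value(e1, e2):
    return measured_output(e1) + measured_output(e2)   # the organisation needs both tasks

def agent_payoff(e1, e2, w1, w2):
    return w1 * measured_output(e1) + w2 * measured_output(e2)   # pay depends only on measured tasks

splits = np.linspace(0.0, 1.0, 101)           # share of effort on task 1; the rest goes to task 2

for label, (w1, w2) in [("incomplete measure (task 1 only)", (1.0, 0.0)),
                        ("complete measure (both tasks)",    (0.5, 0.5))]:
    e1 = max(splits, key=lambda s: agent_payoff(s, 1 - s, w1, w2))
    print(f"{label}: effort on task 1 = {e1:.0%}, organisational value = {org_value(e1, 1 - e1):.2f}")
```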
There are numerous examples of the dysfunctional consequences that this type of incomplete performance measure has had in organisations. For example, Blau studied these problems as early as the 1940s in his case study of an employment agency. Here, there was a significant reduction in
interviews with clients, although this task was valuable for the organisation, precisely because this
task was excluded by the performance measurement system (Blau 1965). Thus, the performance
measurement system was distorted. Blau writes about the records in the measurement system:
»to be sure, these records did not indicate all aspects of performance. Indeed, the decision to exclude
a factor also influenced operating practices. Since the number of counselling interviews held per
month was not included in the departmental report, interviewers rarely asked permission to give
one. These time-consuming interviews would only have interfered with making a good showing on
the record« (Blau, 1965, p. 41).
Later, research has illustrated how other control systems in organisations, such as social control, can
complement the performance measurement system in the incentive management and thereby reduce
the dysfunctional effect that incomplete performance measures would otherwise have in multi-tasking
situations (Brüggen and Moers 2007).
4.1.3. Externalities – the measured performance’s effect on the performances of other parties in the
organisation
Externalities arise when one agent’s measured performance affects the performance of other parties in the organisation. They complicate the assessment of the value of performance from an organisation-wide perspective and thereby also of what an appropriate target would be for the performance measure.
Figure 5 below illustrates two situations – one without externalities and one with. In figure
5a, the performances of agents 1 and 2 are independent of each other. In figure 5b, agent 1’s performances have external
effects on agent 2’s handling of tasks. So here, there are externalities associated with the
measurement of agent 1’s performance.
Figure 5.a.: No externality – the two agents’ performances are independent
Figure 5.b.: Externality – agent 1’s performance affects the performance of agent 2 (positive or negative externality)
The idea is to design a performance measurement system so that a negative external effect created
by an agent affects the performance evaluation and compensation of the agent in a negative
direction. Similarly, a positive external effect is designed to have a positive impact. This implies
that an agent pays a price for creating a negative external effect or receives a reward for a positive
external effect. The agent will internalise the external effect in his or her decision-making if the
agent seeks to maximise his or her payoff.
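A minimal sketch of this internalisation logic, with invented numbers (the suppliers, savings and quality figures are hypothetical, not from the paper): once the downstream quality loss is charged to the purchasing manager’s composite measure, the cheap but low-quality option stops looking attractive.

```python
# The purchasing manager chooses between two suppliers. Buying cheap saves money on the
# purchasing budget but imposes a negative externality on manufacturing quality.
options = {
    "cheap supplier":   {"purchase_savings": 100, "quality_loss_downstream": 140},
    "quality supplier": {"purchase_savings":  40, "quality_loss_downstream":   0},
}

def score(option, internalise_externality):
    o = options[option]
    s = o["purchase_savings"]
    if internalise_externality:               # composite measure also charges the downstream quality loss
        s -= o["quality_loss_downstream"]
    return s

for internalise in (False, True):
    best = max(options, key=lambda name: score(name, internalise))
    print(f"externality internalised = {internalise}: the manager prefers the {best}")
```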
4.1.4. Adaptation – the dynamics of the valuable tasks for the organisation
Finally, yet another problem is that it can be difficult to specify at the beginning of the period (ex
ante) which tasks it will be valuable to perform in the organisation during the period in which the
agents’ performance is measured. Even though there are no measurement problems with a given set of tasks that seem to be valuable for the organisation and which an agent performs, and the principal therefore includes them in the performance measurement system, it may be that these tasks lose their initial value over time due to, for example, changes in customer or supplier relations. This easily leads to an incomplete performance measurement system.
The two figures (6a and 6b) below illustrate the problem of adaptation which is the fourth factor
that may create distortion of the performance measurement system design.
Figure 6.a.: The valuable tasks in an agent’s job portfolio at t=1
Figure 6.b.: The valuable tasks in an agent’s job portfolio at t=2
4.2. Risk – about the performance measure’s reflection of the agent’s effort
Another much debated criterion in discussions about performance measures is risk. Risk refers to the risk that the agent runs that his or her real effort is not reflected in the performance measure.
From an economics perspective, the agent accepts an employment contract because the agent
believes that there is a match between what the agent delivers at the work place and the
compensation that he or she receives. Nevertheless, measures of the agent’s performance are and
will always be surrogates of the agent’s true effort, and the agent therefore always risks that the
measure does not manage to capture the real effort of the agent. There are several factors that create
this risk. Some of the typical risk factors are:
• External factors, such as fluctuations in the state of the market, competitor behaviour
uncontrollable for the agent etc.
• Decisions made by others in the company affecting the agent’s performance. For example if
the agent’s superior makes decisions that affect the agent’s performance (no decision rights)
• Random/biased performance measures
The higher the risk associated with the performance measure, the less the agent is willing to accept
this measure as the basis of his or her compensation for the work in the organisation, unless the agent
is compensated by a risk premium that reflects this risk (Milgrom & Roberts 1992). Therefore, the
principal is obviously more interested in a low-risk performance measure than in a high-risk
measure – all things being equal – because this is less costly for the organisation. This idea is also
reflected in the controllability principle, which is often referred to in management
control/management accounting literature (Solomons 1965; Merchant 1998). Here, the principle is
about striving for a design of performance measures that minimises the impact of factors out of the
agent’s control.
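A standard way of formalising this point in the agency literature (a stylisation, not a formula from this paper) is the mean-variance certainty equivalent, CE = E[pay] - 0.5 * r * Var(pay): the noisier the measure, the larger the risk premium the principal must pay a risk-averse agent. The numbers below are hypothetical.

```python
def certainty_equivalent(expected_pay, pay_variance, risk_aversion):
    # How much riskless pay the agent considers equivalent to the risky pay package.
    return expected_pay - 0.5 * risk_aversion * pay_variance

risk_aversion = 2.0        # hypothetical coefficient of (absolute) risk aversion
expected_pay = 100.0

for measure, variance in [("low-risk measure", 10.0), ("high-risk measure", 60.0)]:
    ce = certainty_equivalent(expected_pay, variance, risk_aversion)
    risk_premium = expected_pay - ce
    print(f"{measure}: certainty equivalent = {ce:.0f}, required risk premium = {risk_premium:.0f}")
```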
An additional point is that the risk is not always caused by an exogenous variable that is
uncontrollable to the agent, but rather an endogenous variable that the agent can control or at least
influence. Thus, there might be reasons for the principal to inflict some risk on the agent. This
hypothesis is actually supported by Prendergast (2000), who illustrates how high environmental
uncertainty sometimes correlates positively with more frequent use of pay-for-performance. These
observations can be explained by organisations distributing decision rights to agents and exposing
them to controllable risk. Subsequently, the agents are incentivised to reduce this type of risk
themselves. Furthermore, exposing the agent to controllable risk is not costly for the organisation, as
the agent can only demand a risk premium for the uncontrollable risk - assuming efficient labour
markets.
4.3. Manipulation – about the agent’s exploitation of asymmetric information
A third criterion that will be applied in this paper is manipulation. It is defined as behaviour where the
agent exploits the asymmetric information relationship between agent and principal for his or her own
gain. Asymmetric information reflects the fact that the agent often knows a lot more about the
performances that can be carried out or are actually being carried out than the principal. Therefore, the
agent is able to hide actions and information from the principal, which can be exploited by the agent.
Hidden action, which is related to the »moral hazard« problem, is the opportunistic behaviour that the agent can engage in during the period in which performance is measured. It includes hidden actions such as shirking or working on projects which serve the agent’s own interests instead of those of the company. For example, Jensen (2003) discusses how sales managers can move sales orders from the current budget period to the next budget period in order to increase their chances of bonus payments in the coming period.
Hidden information is related to the »adverse selection« problem. It represents the possibility of
selection in and manipulation of the information that the agent has when the agent communicates with
the principal in connection with for example the choice of performance measures or setting of targets
to be used in the performance measurement system – a possibility that can be used to promote the agent’s own interests instead of those of the company. For example, an agent who is included in the budget process (a bottom-up process) has an incentive to deliberately underestimate the sales budget in order to increase the chances of achieving the budget and getting the bonus (e.g. Jensen 2003).
4.4. Measurement costs – about the costs of implementing and maintaining the system
The final criterion for the design of the performance measures to be introduced here is the company’s
costs of implementing, carrying out and maintaining the performance evaluation system. Thus, the
measurement costs are also a transaction cost of using performance measures to coordinate and
motivate in companies. Measurement costs also explain why many initiatives regarding performance measurement systems that would clearly reduce the distortion or the risk of the system, for instance, are not incorporated: the measurement costs are simply larger than what can be gained through the reduction of distortion or risk costs.
The measurement costs are the system costs of the resources that develop and maintain the system,
but also the resources that are used by the agents that are subject to the measures. For example, an
employee satisfaction survey in an organisation requires resources with regard to the development of
the measure, the implementation of the measure (in the shape of the employees collecting and
analysing information) and the employees that are supposed to answer the questionnaire. If there
are circumstances that will change over time, it will be necessary to update the measures/questions,
which will create even more measurement costs.
Having introduced the four criteria for the design of performance evaluation systems, this and the following section will present a number of design choices and discuss their potential impact on
organisational value. There is a wide range of propositions related to the design choices, but more
insight into them creates a foundation for understanding the value of specific design choices in the
individual organisational setting. This section (5.) focuses on the choice of performance measures.
The next section (6.) presents design choices regarding the target setting.
This paper differentiates between four different types of choices regarding the company’s
performance measures:
• One-dimensional versus multi-dimensional measures
• Individual versus collective measures
• Objective versus subjective measures
• Absolute versus relative measures
The reason that precisely these four types of choices are included in the analysis is that they cover
directions in which performance measurement systems often develop. The starting point of the
discussion in this paper is a performance evaluation system that is characterised by being one-dimensional, individual, absolute and objective. This system will also be referred to as the initial system. Thus, what the analysis of this paper focuses on is the consequences of movements from
the initial system towards performance evaluation systems with more diversity (more dimensions),
collectivity, subjectivity or relativity (see figure 7).
One-dimensional (financial) → Multi-dimensional (financial + non-financial)
Individual → Collective
Objective → Subjective
Absolute → Relative
Figure 7: The four different types of movements analysed with respect to the choice of performance measures
The fact that the analysis focuses on the consequences of a movement, for example from a one-
dimensional to a multi-dimensional measurement, does not mean that the analysis cannot be used to
understand a change of a system in the opposite direction – i.e. from a multi-dimensional to a one-
dimensional system. Consequences of this type of movement will simply be »the other way around«
with regard to the conclusions. Thus, when movement from a one-dimensional system towards a
multi-dimensional system provides the possibility to reduce distortion, as will appear from the
analysis, movement from a multi-dimensional system towards a one-dimensional system will increase
the possibility of distortion.
Table 1.a. below summarizes the propositions related to the four types of design choices that will be
included in the discussions in this section. The four design choices are treated one by one in the
following. However, many of the design choices discussed will be used in combination in the organisation.
Nevertheless, as mentioned, the aim of this section is to analytically separate the
propositions that relate to moving in one or the other direction of the individual design
choices. This is done in order to better navigate between the wide range of effects that
relate to the many different types of choices when it comes to designing a performance
measurement system.
Distortion
• One-dimensional vs multi-dimensional: ↓ Multi-dimensional measures provide new possibilities for informing agents about value-creating tasks and for internalising externalities in the organisation. ↑ Multi-dimensional measures can increase distortion due to the principal’s insufficient knowledge of value creation and due to immeasurability.
• Individual vs collective: ↓ Aggregation creates an incentive for cooperation between the agents in the group; if this cooperation is an important part of the agent’s task portfolio, this will reduce distortion.
• Objective vs subjective: ↓ Subjective measures provide new possibilities for capturing the value creation and multi-tasking of the agent’s tasks and can also be used for internalisation of externalities. ↑ Subjective measures will increase distortion if there is a lack of common understanding.
• Absolute vs relative: ↓ Comparison provides the possibility of discovering relevant performance dimensions and capturing multi-tasking in the agent’s work (relevant in cases where there is uncertainty about the specification of absolute performance dimensions). ↑ Eliminates the incentive for cooperation between the agents (distortion, if cooperation is important).

Risk
• One-dimensional vs multi-dimensional: ↓ Adding multi-dimensionality can reduce risk if the added measures provide more information about the agent’s real effort (the informativeness principle). ↑ The added measures can be so uncontrollable and biased that they increase the total risk of the agent’s performance measurement system.
• Individual vs collective: ↑ Aggregation creates dependence on the performance of others. ↓ Aggregation also provides a guarantee against failing performance by the agent him- or herself.
• Objective vs subjective: ↓ Subjective measures can reduce risk because they may capture more information about the agent’s real effort (the informativeness principle). ↑ Subjective measures can increase the risk of the measurement if the subjective evaluation is biased, for example in the shape of favouritism, compression or lacking competence.
• Absolute vs relative: ↓ Relative measures reduce the general risk that the agent is exposed to, but not the individual/local risk. ↑ Relative measures will increase the risk for the agent if the principal’s subjective comparison and ranking is biased.

Manipulation
• One-dimensional vs multi-dimensional: ↑ In situations where the choice of multi-dimensional measures depends on local and specific knowledge, agents are given decision rights in connection with the choice of performance measures; the agent can exploit this for his own gain (adverse selection).
• Individual vs collective: ↑ Aggregated measures provide an incentive for free-riding (moral hazard). ↓ Aggregated measures reduce the possibility of individual adverse selection.
• Objective vs subjective: ↓ Subjective measures can reduce the agent’s opportunities for manipulation due to relatively more direct monitoring. ↑ If the principal is susceptible to influence, and the agent has an incentive to affect the principal’s subjective evaluation, there is a basis for influence activities from the agent.
• Absolute vs relative: ↓ Relative measures reduce the possibility of “adverse selection” regarding the choice of measures. ↑ If the principal (in his or her subjective comparison and ranking) is susceptible to influence, there is a possibility of influence activities from the agents.

Measurement costs
• One-dimensional vs multi-dimensional: ↑ The development of non-financial measures is often more resource demanding than the development of financial measures.
• Individual vs collective: ↓ Aggregated measures are often less expensive than individualised and disaggregated measures.
• Objective vs subjective: ↑ Subjective measures take time and require insight into the agent’s performance; they are often more expensive than objective measures.
• Absolute vs relative: (↑) It is costly to select comparable agents and define the dimensions to be used in relative performance measures (compared to absolute measures).

Table 1a: Four types of design choices related to the choice of performance measures and propositions of their effect on organisational value creation
5.1. Movement from one-dimensional to multi-dimensional performance measurements
An important choice when it comes to the choice of performance measures is the question of whether
the measures should be one- or multi-dimensional. This distinction is used in this paper to refer to
the question of whether the performance measurement system should include 1) only a single
dimension of the agent’s performance, typically the output, result or financial aspect or 2)
multiple dimensions of the job reflecting activities and results or nonfinancial and financial
aspects of the agent’s job. Multi-dimensional performance measures, in contrast to one-dimensional performance measures, seek to a greater extent to specify the content or the
qualities of the job. Multi-dimensional performance measures require that the principal
somehow knows what the agent’s job is about and is able to specify the valuable dimensions of
the job in terms of more than just a separable outcome. This implies that the asymmetric
information relationship between the agent and the principal that is often assumed in
organisational economics has been removed and that the principal can specify for the agent
which aspects of the job are particularly important from an organisational value perspective.
This issue has been a part of the design debate for many years, and it has been expressed in
various discussions of »composite measures« (Ridgway 1956), »integrated measures« (Eccles,
1992), »nonfinancial performance measures« (Ittner & Larcker, 1998; Meyer, 2002), »diversity«
(Moers 2005) and »combination-of-measures« (Merchant 2006). Generally, this multi-dimensionality provides a large number of possibilities for improving communication and
information to the agent about his or her job, but obviously, it also contains many pitfalls. This section
is reserved for outlining the mechanisms that are often used for explaining these positive and negative
effects of moving from a one-dimensional to a multi-dimensional performance measurement system.
5.1.1. Distortion
Multi-dimensional performance measures convey a way to deal with the multi-tasking problem.
Multi-dimensional performance measures can make visible the multiple tasks that an agent’s job often consists of. These tasks can for example be to ensure productivity as well as quality at the
production line. Thus, the combination of measures of costs (a financial measure) and quality (a
non-financial measure) provides a better possibility of reducing the distortion than a one-dimensional
system solely focused on productivity would create.
Furthermore, multi-dimensional measures also provide the possibility of making the external effects
or externalities (e.g. inter-departmental interdependencies) that one agent’s performance (e.g. the
purchasing manager) may have on another agent’s performance (the manufacturing manager) visible.
Whether these effects are positive or negative, multi-dimensional measures can be used to internalise them in the performance measurement system. For example, the purchasing manager’s performance
can be measured according to purchasing budget as well as the quality in the manufacturing
department. This can give the purchasing manager incentive to ensure a reasonable purchase price as
well as good quality of the purchased goods.
Proposition 1.1.a.: Multi-dimensional performance measures hold the potential to reduce distortion if the
multi-dimensional performance measurement system offers new opportunities to capture multi-tasking and
externality issues that are neglected in the one-dimensional performance measurement.
However, multi-dimensional measures can also increase distortion, for at least three reasons. Firstly, the dimensions included may be based on the principal’s insufficient knowledge of which activities and tasks are valuable for the agent. Secondly, the dimensions may be hard to measure, or no common understanding of them exists between the agent and the principal. Thirdly, the dimensions or tasks defined as valuable may lose their value over time – what is a valuable task today is not necessarily valuable tomorrow. Thus, moving from a one-dimensional to a multi-dimensional performance measure can also increase distortion:
Proposition 1.1.b.: Multi-dimensional performance measures will increase distortion if the multiple
dimensions included 1) are based on the principal’s insufficient knowledge of value creating activities and
tasks for the agent, 2) are hard to measure or no common understanding of them exists between the agent and
the principal, or 3) fluctuate significantly in terms of their organisational value within the period of time where
performance is measured.
5.1.2. Risk
Going from one-dimensional to multi-dimensional performance measures also holds the potential to
increase the precision with which the individual employee’s effort intensity is captured. For example, if the principal
measures the quality as well as the productivity of an employee’s work, this gives a better indication
of how much effort the employee has put into the job, rather than the principal simply measuring
productivity. In this way, the agent becomes more certain that his or her effort will actually be
rewarded, as the measures are simply less risky and more precise with regard to establishing what the
agent has put the effort into. Seen from this perspective, more dimensions will be worth more than a
few. This is also emphasised in the so-called informativeness principle (for instance, see Milgrom &
Roberts 1992: p.219).
Proposition 1.2.a.: Multi-dimensional performance measures will reduce the risk of the performance
measurement system when the measures added provide more information about the agent’s real effort.
However, adding more dimensions to the performance measurement system can also increase the risk
of the system under circumstances where the measure added contains so much risk that the total risk
of the system increases. For example, if an unreliable performance measure of customer satisfaction is
added to supplement the measure of the agent’s performance. The customer satisfaction measure can
be so unreliable (e.g. asking too few customers) that the agent and company would be better off
without the measure, because it increases the risk of the system too much.
Proposition 1.2.b.: Adding a measure of a new dimension of the agent’s performance can be so biased or
unreliable that it in fact increases the total risk of the performance measurement system and the agent (and
organisation) will be better off without the measure.
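Propositions 1.2.a and 1.2.b can be made concrete with a small numerical sketch. The noise levels are hypothetical and the equal weighting of the two measures is an assumption made for simplicity; the point is only that a reasonably reliable added measure lowers the noise the agent bears, while a very unreliable one raises it.

```python
def combined_noise(var_existing, var_added):
    # Variance of an equally weighted average of two independent measures of the same effort.
    return 0.25 * (var_existing + var_added)

var_productivity = 4.0     # noise in the existing one-dimensional measure (hypothetical)

for label, var_added in [("reliable customer-satisfaction measure", 2.0),
                         ("unreliable customer-satisfaction measure", 30.0)]:
    total = combined_noise(var_productivity, var_added)
    effect = "reduces" if total < var_productivity else "increases"
    print(f"adding the {label}: combined noise {total:.1f} vs {var_productivity:.1f} alone -> {effect} the agent's risk")
```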
5.1.3. Manipulation
Several authors point out that the manipulation possibilities of the many intangible and soft
dimensions of performances measured in a multi-dimensional system are significant (for example, see
Zimmerman 2006). Non-financial measures are often not as auditable as financial measures, which
can result in problems regarding their credibility. Furthermore, the principal often encounters the
problem of asymmetric information when selecting non-financial measures, as they concern
circumstances related to the tasks of the individual agents that the principal simply does not have any
knowledge of. Therefore, the principal is dependent upon the agent’s knowledge and input with
regard to the choice of performance measures. And this can be exploited by the agent. For example,
the agent might suggest performance measures that the agent knows that he or she can more easily
perform instead of measures that reflect the organisational value creation better.
5.1.4. Measurement costs
All things being equal, working with multi-dimensional measures will increase the measurement costs
of the performance evaluation system. Often, measures of customer satisfaction, quality and
employee satisfaction require special measurement systems (and many of these measures are based
on surveys). Thus, these systems often require additional resources in terms of implementation and
maintenance of the systems. And it is not just the administration of the system that is costly. Often,
the agents also have to spend time answering questions that form the basis of the measures. All this
leads to measurement costs often becoming significant when developing multi-dimensional measures.
This may not only be of significance as to whether the measures are incorporated, but also regarding
the quality and credibility associated with the measures.
Proposition 1.4.: Multi-dimensional performance measures are likely to increase the measurement costs
because the implementation and maintenance of multiple (often nonfinancial) dimensions in the performance
measurement system are more costly than implementing and maintaining a single (often financial) dimension.
5.2. Movement from individual to collective performance measurement
Another important choice in connection with the design of performance evaluation systems is the
choice between individual and collective performance measures. Here, there is focus on the
aggregation level of the performance measures. In this connection, the aggregation level of a
performance measure is defined as the question of how many »performing units« contribute to the results of the measure. Thus, it is also about the degree of collectivity. Figure 8 below shows
how an aggregated performance measure will sum up more performing units (in this case
sales reps) than a disaggregated performance measure. The measurement of the total sales of a
sales team is a more aggregated measure than the disaggregated measurement of the sales of the
individual sales reps. And the aggregation can continue. The sales of various sales teams can be
aggregated into division sales, and the sales of various divisions can be aggregated into group sales.
Figure 8: Aggregation of sales reps’ performance from individual level (disaggregated) to team level (aggregated)
This type of aggregation is not necessarily related to financial performance measures. Non-financial
performance measures can also be aggregated, for example with regard to customer satisfaction. In
this case, an aggregated measure will be to calculate the average customer satisfaction of the customer
portfolio of the entire sales team. The corresponding disaggregated measure will be the calculation of
the average customer satisfaction of the customer portfolio of the individual sales rep.
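A minimal sketch of this aggregation with invented numbers: the disaggregated measures are kept per sales rep, while the aggregated measures sum the reps’ sales and average their customers’ satisfaction at team level.

```python
# Hypothetical individual (disaggregated) measures per sales rep.
sales_by_rep = {"rep_a": 120, "rep_b": 95, "rep_c": 140}
satisfaction_by_rep = {"rep_a": 0.82, "rep_b": 0.74, "rep_c": 0.88}

# Aggregated (collective) team-level measures.
team_sales = sum(sales_by_rep.values())
team_satisfaction = sum(satisfaction_by_rep.values()) / len(satisfaction_by_rep)

print(f"team sales (aggregated financial measure): {team_sales}")
print(f"average team customer satisfaction (aggregated non-financial measure): {team_satisfaction:.2f}")
```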
In the following, the propositions of the effects of the movement from individual to collective
performance measures will be discussed with regard to the four design criteria.
5.2.1. Distortion
Aggregation of a performance measure does not affect the information value which the measure has
regarding the specification and communication of which dimensions of performance are valuable
or not (information about partial value creation). It is still the same performance dimension (e.g.
sales revenue) that is aggregated to a collective measure. However, aggregation of performance
measures can help solve the multi-tasking problem in situations where cooperation and contribution to
team spirit are important and hence reduce distortion in this way. For example, an agent in a production team may have two tasks: one is relatively straightforward – productivity – and the other is a bit more intangible – »contribution to team spirit and collegial support«. The »team spirit and support« task can be captured in the performance measurement by measuring productivity as an aggregate of the team’s total instead of trying to measure this dimension of the agent’s performance directly, which can often turn out to be very difficult. If the collective productivity measure is used, it in itself gives the agent an incentive to provide collegial support, as such support is likely to increase the group’s total output.
Proposition 2.1.: Aggregating a performance measure holds the potential to reduce distortion if the
agent’s cooperation with or mutual support of other agents also included in the aggregated performance
measure is decisive for organisational value creation.
5.2.2. Risk
As regards the motivational aspects, aggregation has a number of negative consequences. First of all,
aggregation will increase the risk of the measure in the sense that more agents/performing units are
put together, which means that the performance of the individual agent becomes dependent upon the
performances of other agents. This increases the uncertainty of the performance measure with regard
to the reflection of the individual agent’s actual effort and contribution.
Second, aggregation also reduces the »sensitivity« of the measure, which can have a negative effect on the individual agent's motivation. When the performance of the individual agent is joined with the performances of other agents in an aggregated measure (including n agents in total), the effect of an additional effort from the agent on the agent's own measure is diluted by a factor of 1/n. This implies that the marginal incentive can be reduced significantly when the measure is aggregated.
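A small worked example may help illustrate the 1/n dilution. The sketch below assumes, purely for illustration, that each agent is evaluated on the average performance of the group; under that assumption one extra unit of individual output moves the agent's own measure by only 1/n.

```python
# Illustrative sketch: marginal effect of one extra unit of output on the
# measure the agent is evaluated on, under an individual vs. a collective
# (group-average) performance measure. Numbers are hypothetical.
def marginal_effect(extra_output: float, n_agents: int) -> float:
    """Change in the group-average measure caused by one agent's extra output."""
    return extra_output / n_agents

extra_output = 1.0  # one extra unit of output from the agent

for n in (1, 5, 20):
    print(n, marginal_effect(extra_output, n))
# n = 1  -> 1.00  (individual measure: full effect)
# n = 5  -> 0.20  (team measure: effect diluted to 1/5)
# n = 20 -> 0.05  (division measure: effect diluted to 1/20)
```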
Proposition 2.2.a.: Moving from an individual performance measure to a collective performance measure
will increase the risk for the individual agent, as the individual agent’s performance will be assessed by
means of a performance measure where it is not only the individual agent’s but also other agents’
performance that counts.
On the other hand, a collective measure can also have a positive effect on the agent's risk in the sense that it provides the agent with a guarantee of performance during periods when the agent is prevented from performing (for example in case of illness). In this
case, the aggregated performance measure functions as a type of insurance.
Proposition 2.2.b.: Moving from an individual performance measure to a collective performance measure
will reduce the agent’s perceived risk if the aggregated performance measure is considered to be an
insurance by the agent.
5.2.3. Manipulation
Whether the opportunistic behaviour of the agents is increased as a consequence of aggregation of
a given performance measure is also equivocal. When moving to a collective measure, the impact
of the agent’s effort and decisions is reduced by 1/n (if n is the number of agents that are subject
to the collective measure). One of the consequences of this is that the agent under a collective performance measure has an incentive to free-ride, because the agent gets the whole benefit of free-riding but only bears 1/n of the costs caused by the lack of the agent's effort. This leads to
the following proposition:
Proposition 2.3.a.: Moving from an individual performance measure to a collective performance measure increases the agent's incentive to free-ride, because the costs of free-riding are shared with all agents in the group while the agent gets all the benefit him- or herself.
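The free-rider payoff in Proposition 2.3.a can be sketched numerically as below; the numbers are hypothetical and the sketch assumes that the effort cost saved by shirking accrues fully to the agent while the resulting output loss is spread over all n agents via a group-average measure.

```python
# Illustrative sketch of the free-rider logic in Proposition 2.3.a, assuming
# hypothetical numbers: shirking saves the agent a private effort cost, while
# the resulting output loss is spread over all n agents through a
# group-average measure.
def net_gain_from_shirking(effort_cost_saved: float,
                           output_loss: float,
                           n_agents: int) -> float:
    # Benefit of shirking accrues fully to the agent; only 1/n of the
    # output loss hits the agent's own (collective) performance measure.
    return effort_cost_saved - output_loss / n_agents

# With an individual measure (n = 1) shirking does not pay ...
print(net_gain_from_shirking(effort_cost_saved=3.0, output_loss=5.0, n_agents=1))   # -2.0
# ... but with a 10-agent collective measure it does.
print(net_gain_from_shirking(effort_cost_saved=3.0, output_loss=5.0, n_agents=10))  # 2.5
```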
In contrast to proposition 2.3.a., another proposition can be formulated that directs attention towards the individual's incentive to manipulate the performance measurement system. When the impact of the agent's decisions and actions is reduced, so is the impact of the agent's potential manipulative behaviour. For example, the effect of a salesperson transferring sales from the current period to the coming period in order to increase the possibility of extraordinary results in the following period is also diluted by 1/n. Thus, a reduced incentive to exert effort – not only in the form of work intensity but also in the form of manipulative behaviour – is a potential consequence of moving from an individual to a collective measure.
Proposition 2.3.b.: Moving from an individual performance measure to a collective performance measure reduces the agent's incentive to manipulate the system, because the gain from manipulating is shared with all the agents in the group while the agent bears all the costs alone.
Another condition that adds to the complexity of understanding the potential effects of moving from
individual to collective measures in organisations is that the group or team that the aggregated
measure is focused on can develop social control that either reduces free-riding and/or increases
manipulation of the system. For example, there may be a social norm in the group that clearly prohibits free-riding, and the norm is then both monitored and enforced by the group. However, there are also examples of the social control developing into a mechanism that increases manipulation of the performance measurement system. This is for example the case when a group puts social pressure on the individual member not to over-perform, because over-performance can result in the principal raising the target for the group's performance. In those cases, the group creates »quota restrictions« on the performances of the individual members (for example, see Roy, 1952).
To measure performance disaggregated at an individual level is often more costly than to measure
performance at an aggregated and collective level. The reason for this is that it is harder to retrieve
performance information from the individual level compared to the collective level. Of course, it all
depends on the conditions within the individual organisational setting and information system. For
example, if it is possible to attribute sales to each individual salesperson, the disaggregation can be
carried out quickly and easily, but in contrast, information about customer satisfaction at an individual
level can be harder to obtain.
Proposition 2.4. Moving from an individual to a collective measure often implies less
registration and fewer details, which will reduce the cost of the performance
measurement system.
5.3. Movement from objective to subjective performance measures
The third type of choice discussed in this section is the choice between objective and subjective performance measures. The significance of subjective measures is well-known in organisational economics and accounting research, and many point out that precisely subjective measures (or subjective evaluations) are core elements of an efficient performance measurement system design (e.g. Baker et al., 1994; Moers, 2005; Murphy & Oyer, 2003).
A subjective measure means that the evaluation of an agent is based on the principal’s individual and
subjective judgment of the agent’s performance according to given performance criteria (e.g. ‘the
employee’s contribution to team spirit’, ‘innovativeness’ etc.). An objective measure, in contrast,
means that principal’s measurement of an agent’s performance is based on an objective scale that is
verifiable by a third party. For example, it could be that a foreman measures the performance of his
32
or her production workers based on ‘assembly time’ and that the foreman is informed of the workers
assembly time by the production management system. In this case, it is unequivocal how the
assembly time should be measured, as the measurement points and the measuring procedure are
clearly specified. This means that a third party will probably get to the same result as the foreman
when measuring the assembly time of an employee.
As mentioned above, an example of a subjective measure could be the foreman’s assessment of the
production employee’s »contribution to team spirit«. Obviously, the foreman’s interpretation of the
concept team spirit is vital to the measure. Contribution to team spirit is simply a much more
complex and dynamic task and much more difficult to specify compared to assembly time, for
example. The employee's contribution to team spirit depends on his or her handling of a large number of staff-related and work-related situations that occur during the period and which are impossible to specify in advance. Therefore, it is not possible to specify, in an unequivocal way, a method for measuring this performance criterion that can be verified by a third party.
Nevertheless, there is a balance between 'interpretive flexibility' and specification of the subjective performance measure. The subjective measure obviously must be flexible and adaptable enough to include aspects of a task or performance dimension that turn out to be valuable for the organisation. However, specification of the measures is also important in order to provide some sort of common ground for performing along the performance dimension or to give the agents direction as to what is meant by it. For example, many organisations work systematically on defining their subjective criteria – like »contribution to team spirit« – by providing a range of examples of what the criterion could be about. Such examples play an important
role in creating a common understanding of the subjective performance measures in organisations,
and in research, common understanding is emphasized as a key issue in terms of understanding the
success of subjective performance measures (Friis & Hansen, 2013).
In the following, the significance of moving from objective to subjective measures is discussed with regard to the four design criteria to explore potential advantages and disadvantages of subjective performance measures.
5.3.1. Distortion
Subjective measures are often emphasised for their contribution to a more complete performance
measurement system (Baker et al. 1994; Lazear & Gibbs, 2009). Subjective measures can be used
to include aspects of the individual employee’s tasks or objectives that cannot be specified
objectively, but which are still essential to organisational value creation.
Obviously, the improved measuring capacity that subjective measures provide can be used for
handling the multi-tasking problem. For example, teachers that assess their pupils are often
confronted with different degrees of measurability of the tasks they have regarding the individual
pupil (see also Hannaway, 1992). First, it is important to improve the pupils’ skills in terms of
reading, writing, math etc. (developing their basic skills), but it is often also considered to be an
important objective at a school to develop the pupils’ social skills. To measure the social
competencies of pupils is, however, much more difficult than to measure their basic skills. The
teacher’s subjective measures of the social skills of the individual pupil are necessary as a
supplement to the various tests of the pupil’s basic skills to get a more complete assessment of the
individual pupil.
In addition, subjective measures can help internalise externalities in cases where they are difficult
to include through objective measures (Gibbs et al., 2004; Moers, 2005). Consider, for example, a situation where the interdependence between two teams in a new product development department is complex. Here, the manager may be able to subjectively assess the effects that the teams involved have on each other and thereby monitor the externalities, which in turn gives the parties an incentive to take them into account in their decisions.
Proposition 3.1.a. Subjective measures can reduce distortion of the performance measurement system, as
the subjective measures can measure task performances and externalities that cannot be measured by
objective measures.
When it comes to understanding the problems of subjective performance evaluation, much of the
literature focuses on the incentives to manipulate (see below), but there is also a coordination
problem related to subjective performance measures which is about lack of common understanding
of the subjective measure between the agent and the principal. For example, how does the agent understand what it means to be a team player, and how does the principal understand this criterion?
Lack of common understanding creates several distortion problems that are not necessarily caused
by conflicts of interest but by a lack of understanding of the other party’s expectations. Lack of
common understanding makes it difficult for honest people to know what to do and creates a
coordination problem (but it also makes it easier for opportunists to manipulate).
Proposition 3.1.b.: A subjective measure will increase distortion if there is a lack of common
understanding between the principal and the agent.
5.3.2. Risk
In situations where subjective measures enable the principal to capture more aspects of an agent’s
behaviour, it may, like multi-dimensional performance measures in contrast with one-dimensional
performance measures, reduce the risk of the agent’s performance measurement system according
to the informativeness principle (please see the discussion above under multi-dimensional
performance measures). For example, a sales manager may be able to monitor a sales rep's performance in terms of customer satisfaction for two key clients due to the manager's contact with the clients, and this may be a more informative measure of the sales rep's performance compared to an objective performance measure of customer satisfaction based on a survey.
Proposition 3.2.a: A subjective performance measure can reduce the risk of the performance
measurement system, because it may capture more information about the agent’s performance compared
to an objective performance measure.
In contrast, the subjective measures can also increase the risk of the performance measurement
system, if the subjective performance measure is biased. Several authors have discussed different
types of biases (Bol, 2008; Lazear & Gibbs, 2009; Prendergast & Topel 1993). One type of bias is
favouritism. This means that the principal carrying out the subjective evaluation favours some agents
rather than others. Another type of bias – leniency bias – manifests itself through the principal not
pointing out particularly bad or particularly good performances and making them visible, but drawing
all evaluations towards the middle instead. This tendency is caused by the principal's conflict avoidance and equality thinking. A third type of bias can occur when the principal does not have the competencies it takes to understand the agent's job, assess the subjective performance dimension, and identify good or bad performance. These different types of bias all result in risk for the agent with regard to the subjective performance measure.
Proposition 3.2.b. A subjective performance measure will increase the risk of the performance
measurement system, if it is biased by favouritism, leniency, and incompetence.
5.3.3. Manipulation
Manipulation discussed in this section relates to the costs that are incurred through manipulation from
the agent. It is often argued that subjective measures in contrast to objective measures make it harder
for the agent to manipulate, because the principal often monitors the agent directly in this case. The
required monitoring is likely to reduce the agent’s manipulation (see e.g. Bol, 2008).
Proposition 3.3.a. A subjective measure can reduce the agent’s opportunities for manipulation, because the
principal monitors the agent more directly in order to be capable of undertaking the subjective performance
evaluation.
In contrast, several scholars argue that when the measures become subjective, the principal’s
monitoring and assessment also become more susceptible to manipulation, as some principals can be
influenced by agents (Prendergast & Topel 1993). This type of manipulation is also called influence
activities in literature (for example, see Milgrom and Roberts 1988). In situations where the agent
experiences favouritism from the principal in the subjective evaluation, some agents will find it
worthwhile to »suck up« to the principal in an effort to achieve acceptance and appreciation. The goal is to influence the principal to make decisions that benefit the agent – decisions that do not necessarily increase organisational value.
Proposition 3.3.b. Subjective measures will increase manipulation, because the agent in some cases has
the ability to influence the principal’s monitoring and assessment of the agent’s performance.
Subjective performance evaluation is resource demanding in the sense that it is not an automatic
measuring instrument but relies on an evaluator – often the principal – who puts time and effort into
the performance evaluation. Thus, the costs are associated with the evaluator’s efforts in connection
with the measure and the time needed, which is often considerable. The amount of time obviously
depends on the evaluator’s knowledge of the agent’s job and interactions with the agent. But the time
consumption is often the critical resource in terms of the quality of the evaluation. However, the
evaluator/principal does not always have the incentive to spend enough time on the subjective
measure, as this activity is in fierce competition with other (and sometimes more visible) management
tasks.
Proposition 3.4. Subjective measures will increase the costs of the performance measurement systems
due to the time and effort the evaluator/principal needs to put into the subjective evaluation.
5.4. Movement from absolute to relative performance measures
The matter of absolute versus relative measures is about whether the dimensions used to measure the
performance are absolute and firmly anchored or whether the dimensions are relative and determined
by comparing the performances of various agents/teams/divisions with each other. A relative measure
is a quite different measure compared to what we usually understand by a performance measure.
Usually, we consider performance measures to be absolute. This means that there is knowledge or
experience – either embedded in an objective evaluation system or used by a principal in a subjective
evaluation – that is used to define the dimensions of the evaluation of a performance. This is not the
case for a relative measure. Here, the dimensions expressing the value creation are created through
comparison of various agents’ performance of their work. Thus, the dimension(s) for the relative
performance measure is(are) deducted on the basis of comparison of agents’ performance.
An example of a relative performance measurement in use is the practices that sometimes evolve
around bonus payments. Relative performance measures are used, for example, at a business school
when the dean asks his or her scientific personnel to send in proposals for why they deserve
bonuses for their job performance for the past year. The professors and lecturers know the strategy
of the business school and so does the dean. Only very broad categories of the types of activity that
make the individual professor or lecturer eligible for bonuses are indicated: teaching, research,
dissemination etc. Based on the professors’ and lecturers’ descriptions of the tasks they have
undertaken the past year (in order to contribute to the business school’s value creation within the
different categories), the dean compares and ranks the job performance of the individual employees.
In this case, the dean has given no specific indication of what type of job performance within the individual categories the school rewards – e.g., for the research category: publication or editorial roles in specific journals, initiation of research projects in specific subject areas, or international network building. Instead, the dean waits and sees, and on the basis of a comparison of the actual performances by the individual professors and lecturers, the dean ranks and assesses the employees' performances subjectively.
So, what is radically different in the relative measure is that no task performances that will be used to
assess the performance are specified ex ante. The performances are specified ex post by means of the
principal’s subjective comparison and ranking of the agents’ descriptions of their job performance.
Thus, in this connection, it is up to the individual employee to invent tasks and dimensions of the job – i.e. to find out how to do the job and define what creates value for the organisation. The idea is then that the individual agent will be ranked high for these inventions, if they are inventive enough compared to those of the other agents. This matter is decided by the principal through his or her subjective comparison of the job performance of the various agents.
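The mechanics of such an ex post comparison can be sketched as follows; the agent names and scores are hypothetical stand-ins for the dean's subjective assessment of the submitted performance descriptions, not part of the example in the text.

```python
# Illustrative sketch of a relative performance measure: no dimensions or
# targets are fixed ex ante; the principal simply scores the agents'
# submitted performance descriptions ex post and ranks them.
# The names and scores below are hypothetical.
subjective_scores = {
    "prof_x": 7.5,   # e.g. strong publications and a new research project
    "prof_y": 6.0,   # e.g. heavy teaching load, broad dissemination
    "prof_z": 8.2,   # e.g. editorial role and international network building
}

ranking = sorted(subjective_scores, key=subjective_scores.get, reverse=True)
for rank, agent in enumerate(ranking, start=1):
    print(rank, agent, subjective_scores[agent])
# 1 prof_z 8.2
# 2 prof_x 7.5
# 3 prof_y 6.0
```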
5.4.1. Distortion
Relative measures create a unique possibility of capturing value creation in situations where it is not
known in advance what the value creating tasks actually are. In other words, there is a lack of absolute
knowledge about the performance dimensions and tasks with regard to the individual jobs that
specifically create value for the organisation. This lack of knowledge can be caused by, for example, dynamics and complexity, which create a need for adaptation. The basis for defining the value-creating dimensions and tasks in this case is thus a comparison of the agents' task performances ex post. Obviously, there is no
guarantee that the comparison will result in the most value-creating dimensions and solutions, but all
other things being equal, a comparison among agents creates a better starting point than an attempt to
specify them ex ante – which under these circumstances will often be too rigid and irrelevant for the
individual agent’s job.
Proposition 4.1. Relative performance measures will reduce distortion in situations where the principal
does not have appropriate ex ante knowledge about the value creating performance dimensions and tasks
for a set of jobs, because the relative performance measures give the individual agents decision rights to
exploit their knowledge and undertake the tasks and perform along the dimensions that they find will create
the most value for the organisation.
5.4.2. Risks
The question of whether the agent will be exposed to more or less risk in connection with the relative measure is determined by whether the tasks and performance dimensions that are selected by the individual agents are more or less risky for the agent than the ones that would be outlined in an absolute performance measure. There are two reasons why the risk of the performance measurement system is reduced by a relative performance measure compared to an absolute one. The distinction between two types of risk – local and general risk (Lazear & Gibbs, 2009) – can help explain this. The local risk is the risk related only to the performances of the individual agent, whereas the
general risk is the risk that is associated with the performances of all agents and which is the same for
everyone. The general risk will be eliminated through the relative measure, precisely because
everyone is subject to this risk. For example, a general recession in the markets caused by a general
economic downturn will be eliminated as a risk factor for the individual agent when all agents in the
relative performance measure are subject to this risk factor. However, the local risk can still affect the
relative performance measure of the individual agent. For example, an agent may be exposed to
technological or supplier risks that the other agents are not. But, as the individual agent has the
decision rights to choose which tasks to perform, the agent will probably tend to choose task
performances with lower local risks to increase the probability that the agent will successfully
perform the task. This will, all other things being equal, reduce the local risk for the individual agent
as well.
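A small numeric sketch of the general-risk argument, using hypothetical performance figures: subtracting a common market shock from every agent's performance leaves the relative comparison unchanged, whereas an agent-specific (local) shock does not.

```python
# Illustrative sketch: a shock that hits all agents equally (general risk)
# does not change how the agents compare to one another, while an
# agent-specific shock (local risk) does. Figures are hypothetical.
performance = {"agent_a": 100.0, "agent_b": 90.0, "agent_c": 80.0}

def relative_to_peers(perf: dict) -> dict:
    """Each agent's performance relative to the peer-group average."""
    avg = sum(perf.values()) / len(perf)
    return {agent: round(value - avg, 1) for agent, value in perf.items()}

general_shock = -20.0  # e.g. a market-wide downturn hitting everyone equally
after_general = {a: v + general_shock for a, v in performance.items()}

local_shock = {"agent_a": -15.0, "agent_b": 0.0, "agent_c": 0.0}  # e.g. a supplier problem hitting agent_a only
after_local = {a: v + local_shock[a] for a, v in performance.items()}

print(relative_to_peers(performance))    # {'agent_a': 10.0, 'agent_b': 0.0, 'agent_c': -10.0}
print(relative_to_peers(after_general))  # unchanged: the general risk is eliminated
print(relative_to_peers(after_local))    # changed: local risk still affects the agent
```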
Proposition 4.2.a.: Relative measures will reduce the risks of the performance measurement system for the individual agent, because general risk factors are eliminated by comparison and the impact of the local risk factors can be reduced by means of the decision rights that the individual agent has to choose the tasks and/or performance dimensions with lower local risks compared to tasks with higher risks.
In contrast, if the principal’s comparison and ranking – which is subjective - suffers from bias and
incompetence,the relative performance measures will increase the risk for the agent – see also the
discussion above under subjective performance measures.
Proposition 4.2.b.: A relative performance measure will increase the risk of the performance measurement system if the principal's subjective comparison and ranking are affected by, for example, favouritism, leniency, or incompetence.
5.4.3. Manipulation
As described earlier in connection with the discussion about multi-dimensional measures, adverse selection can occur when choosing absolute performance measures, as the principal is dependent upon the agent's knowledge with regard to the choice of performance measures. The agent is, in effect, asked to define what creates value and to specify task performance, because the principal does not have the knowledge to do so. The agent can therefore choose task performances where the agent knows that he or she performs well instead of other performances that better capture what creates value for the company. This adverse selection problem is actually relevant
for absolute as well as relative measures, but it is reduced significantly in a relative performance
evaluation system. The reason for this is that in connection with this type of measure, the principal
will compare the performances of various agents and have a benchmark that the principal can use to
challenge the agents on their individual performances.
Proposition 4.3.a.: The relative performance measure will reduce manipulation, because the possibility of
“adverse selection” regarding the choice of measures is reduced.
On the other hand, the relative measure is obviously also a type of measure where the principal’s
individual subjective selection of task performances is vital to the measure. This subjectivity means
that the agents get a chance to influence the choices of the principal – through influence activities
(Prendergast & Topel 1993) – just as in the case of subjective measures in general (please see above).
Proposition 4.3.b.: The relative performance measure will increase manipulation if the principal is
susceptible to influence due to influence activities from the agents.
Proposition 4.4.: Relative performance measures increase the measurement costs of the performance
measurement system, because they require a selection of comparable agents, presentations and reporting of
task performances by the agents, and assessment and ranking of task performances by the principal.
6. Target setting
Another key design choice in connection with the development of performance measurement systems
is target setting. The performance target or standard is the level of performance by which high and
low performance can be identified. Thus, one thing is to choose a dimension according to which
performance should be measured (the choice of performance measure), another thing is to choose the
level of performance (target setting) that can be used to distinguish high from low performance. This paper discusses four design choices related to target setting: the movement from objective to subjective target setting, the movement from absolute to relative targets, the movement from target-based reporting to actual performance reporting, and the movement from highly achievable to more difficult targets.
Table 1.b. below summarizes the propositions related to the four types of design choices that will be
included in the discussions of target setting in this section. The four design choices are discussed one
by one in the following.
Subjective ex post corrections (movement from objective to subjective target setting)
- Distortion ↓: Subjective ex post corrections provide the possibility of correcting the performance level in order to reflect new priorities decisive for adaptation.
- Distortion ↑: Subjective ex post corrections will lead to increased distortion if they are biased and affected by lack of common understanding of new priorities and adaptation.
- Risk ↓: Subjective ex post corrections provide the possibility of correcting for uncontrollable risk.
- Risk ↑: Subjective ex post corrections increase risk if they are affected by bias and lack of common understanding of uncontrollable and controllable risk.
- Manipulation ↑: Subjective ex post corrections can give the agent the possibility of influencing the principal to correct for "anything" (establishment of a culture of excuses).
- Measurement costs ↑: The subjective corrections of the principal require time and insight into the agent's tasks.

Relative performance targets (movement from absolute to relative targets)
- Distortion ↑: Relative performance targets create competition between the agents and no incentive to cooperate. This distorts behaviour if cooperation is important.
- Distortion ↓: Relative performance targets can create knowledge sharing processes.
- Risk ↓: Relative performance targets eliminate the general risk that the agents are exposed to, but not the individual/local risk.
- Risk ↑: Relative performance targets increase the risk because the agent's performance evaluation depends on the peer group's performance, which is uncertain to the agent.
- Manipulation ↓: Relative performance targets remove the agents' possibility of under-estimating performance targets (adverse selection), as no targets are used.
- Manipulation ↑: Relative performance targets can increase manipulation because the competitive environment within the peer group can provide incentives for sabotage between the agents.
- Measurement costs (↑): Increased measurement costs for development of a basis for comparison. On the other hand, no costs for target setting.

Actual performance reporting (movement from target-based to actual performance reporting)
- Distortion ↓: A focus on actual performance rather than compliance with performance standards will reduce distortion when 1) adaptation is important, 2) the agent possesses the knowledge required to decide on the performance level that best serves the organisation's interest and 3) the agent is non-opportunistic.
- Risk ↓: An actual performance corrects completely for uncontrollable risk, as the performance is not compared to a target.
- Manipulation ↑: Actual performance reporting can increase manipulation if opportunistic agents have incentives to shirk, because their performance is not controlled against a performance standard.
- Manipulation ↓: Actual performance reporting will reduce manipulation because the incentive for adverse selection is removed.
- Measurement costs ↓: No costs for target setting.

More difficult targets (movement from highly achievable to more difficult targets)
- Distortion ↓: Moving from a less to a more difficult goal can create more commitment and direct more attention towards the goal and hence reduce distortion.
- Distortion ↑: If the goal commitment created by challenging the agent with a more difficult target implies that other tasks and goals important for organisational value creation are neglected by the agent, distortion will increase.
- Risk ↑: More difficult targets will lead to increased risk of the performance measurement system.
- Manipulation ↑: If achieving difficult targets is perceived by the employee to give opportunities for extraordinarily high rewards, this can provide incentives to manipulate.
- Measurement costs ↑: Setting a difficult and challenging target that motivates the individual employee is more demanding than setting a highly achievable target.
Table 1b: Four types of design choices related to the target setting and propositions of their effect on organisational value
creation
6.1. The movement from objective to subjective target setting
A key question in target setting practices in organisations is whether the principal will correct the
target to adjust for unexpected events and changes once the period is over. This is a practice that
is also referred to as subjective ex post correction (Merchant, 1989). There can be technological,
environmental or operational circumstances that result in it not being possible or expedient for the
agent to realise the target (or the opposite: that it became much easier to realise the initial plan).
Then, it would be appropriate or more fair to adjust the target downwards (or upwards) to better
reflect the actual conditions under which the agent has performed his or her tasks. Below is an
outline of the effects of this design option in relation to four criteria.
6.1.1. Distortion
Subjective ex post corrections can reduce distortion in performance measurement systems. From an organisational perspective, unexpected events and changes in markets, supplies, technology etc. can easily affect the value of performing along a dimension or undertaking a specific task. For example, if the prices from a supplier of a particular product suddenly change dramatically, it may be relevant for the organisation to adjust the sales rep's sales target for that particular product compared to other products in the product portfolio. It may also be that customer visits and services to one group of customers suddenly become much more important than those to another group that was initially planned to be first priority. Thus, there can be very good reasons to adjust the performance targets to
better reflect new priorities and changing conditions, which requires that resources are allocated in
new ways to ensure organisational value creation.
Proposition 1.1.a.: Subjective adjustment of performance targets can reduce distortion, because it can
adjust initially planned targets that do not reflect new priorities and allocations of resources decisive for
adaptation and organisational value creation.
In this discussion, the focus is on subjective corrections. But objective targets can also contain
objective ex post corrections. This does not make the measure less objective, as it is specified
objectively ex ante how the adjustment will be made ex post. For example, Roberts (2004) provides
an example of how the performance measures of sales divisions in an oil company are adjusted
objectively according to an index that reflects the development of oil prices, because financial
performances are obviously heavily affected by the development of the oil prices. In this case, the
target for the financial performance is adjusted ‘automatically’ according to the development of the
oil prices. In this case, the adjustment is just objective instead of subjective.
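A minimal sketch of such an objective, formula-based ex post adjustment is given below; the rule (scaling the target by the ratio of the realised to the planned price index) and the numbers are hypothetical illustrations, not Roberts' actual scheme.

```python
# Illustrative sketch of an objective ex post correction: the target is
# adjusted by a pre-specified formula based on a verifiable index, so the
# adjustment itself involves no subjective judgment. Numbers are hypothetical.
def adjust_target(base_target: float,
                  planned_index: float,
                  realised_index: float) -> float:
    """Scale the target by the realised-to-planned ratio of the price index."""
    return base_target * (realised_index / planned_index)

base_sales_target = 1_000_000.0   # target set ex ante under a planned oil-price index of 100
planned_index = 100.0
realised_index = 80.0             # oil prices turned out 20% lower than planned

print(adjust_target(base_sales_target, planned_index, realised_index))  # 800000.0
```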
Adding to the discussion of subjective ex post adjustment, it is worth noting that these types of
adjustments can be biased. And biased adjustment increases distortion of the performance target.
This is also the case if the subjective adjustments are incompetent and if no common understanding
between the principal and the agent exists with regard to how the organisation should adapt to
changes. Thus,
Proposition 1.1.b.: Subjective adjustment of performance targets is likely to increase distortion of the
performance measurement system if it is biased by favouritism, leniency, incompetence, or lack of common
understanding between the agent and the principal of what adaptation means.
6.1.2. Risk
The subjective ex post correction (as well as the objective correction) obviously also makes it
possible to reduce the agent’s risk related to the performance measurement system, as unforeseen
incidents can be eliminated. A key issue in this connection is whether the unforeseen incidents are
reflecting controllable or uncontrollable risk for the agent (Lazear & Gibbs 2009). In principle,
the agent should only be corrected for the uncontrollable risk, because exposing the
agent to controllable risk will provide the agent with an incentive to reduce this risk.
Proposition 1.2.a: Subjective adjustment of performance targets can reduce risk for the agent when the
principal adjusts for uncontrollable risk affecting the agent’s performance measure.
In contrast, if the principal’s subjective adjustments suffer from bias, incompetence and lack of
common understanding between the agent and the principal of what the controllable and
uncontrollable risk factors related to the agent’s job are, they will increase the risk for the agent.
Proposition 1.2.b.: Subjective adjustment of performance targets will increase the risk of the performance
measurement system if the principal’s adjustments are inflicted by favouritism, leniency, incompetence,
and lack of common understanding between the principal and agent of controllable and uncontrollable risk.
6.1.3. Manipulation
As soon as subjective evaluations from the principal are made possible, the same applies to the
agent’s possibility to influence the principal. Something that is difficult for the principal to evaluate
with regard to subjective ex post corrections is obviously whether the unforeseen incidents indicate
uncontrollable or controllable risk for the agent. When the principal makes subjective corrections,
there is a danger that the agents will attempt to influence the principal to lower the target more than
what is justified due to the uncontrollable risk. If this influence is successful, there is a danger that a
basis for an excuse culture is formed (Merchant, 1989). Excuses and corrections that are about other things than uncontrollable risk only serve the interests of the agent, not the interests of the company.
Proposition 1.3. Subjective adjustment of performance targets will increase manipulation of the
performance measurement system to the extent that it is possible for agents to influence the principal’s
adjustments so that the principal adjusts too much in the agent’s favour (adjustment of controllable risk
and creation of an excuse culture).
All other things being equal, the measurement costs increase when the principal has to spend time
correcting the targets compared to target setting without subjective ex post corrections. Correction
activities are simply costly, and the principal can spend a lot of time finding out which explanations are good and which are simply poor excuses for a performance that does not meet the target – or, vice versa, which performances indicate an actual extraordinary achievement and which merely reflect luck in cases where the principal experiences over-achievement from the agent. Obviously, the time and resources that the principal has to spend on correction
depend on the knowledge he or she has of the agent’s work. But all other things being equal, systems
with subjective ex post corrections will be more resource demanding than systems without.
Proposition 1.4. Subjective adjustments of performance targets will increase measurement costs, because
ex post adjustments require that the principal spends time and resources on finding out how controllable
and uncontrollable risks have affected performance.
6.2. Movement from absolute to relative targets
An alternative to setting an absolute target ex ante and measuring the agent’s performance against this
pre-set performance level is to use the performance levels of other agents ex post to assess the level of
the individual agent. This simply entails comparing the individual agent with others in order to
evaluate whether the agent has performed well or poorly. Please note that the discussion of
relativity undertaken in this section on relative targets is different from the one taken in the section
above on relative performance measures. In the present section, the focus is on understanding the
level of performance of a pre-specified performance dimension by means of comparison. In the
following, the potential effects of moving from an absolute target to a relative target are discussed
and the discussion will also be focusing on the effects on four transaction costs involved in
performance measurement: distortion, risk, manipulation and measurement costs.
6.2.1. Distortion
A relative performance target means that the evaluation of the performance of the individual agent (or
another performing unit – a team or division) depends on the performances of other agents (other
performing units). If the individual agent performs better than the others, the agent’s performance will
be evaluated as high. Conversely, if the agent performs at a level below that of the others, the
performance will be evaluated as low. Therefore, the agents compete with one another. There can be
advantages as well as disadvantages of this. It is an advantage, if the agents’ jobs are relatively
independent and the agents are motivated by the competition. This would then lead to higher
motivation and productivity. On the other hand, if the agents’ jobs are interdependent and they rely
on each other’s help and support in their everyday work, a relative performance target can destroy
otherwise value-creating cooperation between the agents and thereby distort the behaviour. Therefore,
it is essential to understand the dependence between the agents that are being compared before
relative targets are introduced into the performance evaluation system. If cooperation is important, the movement from absolute to relative target setting can result in increased distortion of the performance measurement system:
Proposition 2.1.a.: Relative targets can create competition between the agents included in the relative
performance evaluation and hence reduce the incentive to cooperate. This distorts behaviour, if
cooperation is important due to interdependent work processes among the agents.
Relative performance targets, however, may also have another effect which is much more positive
for the coordination of the agent’s work. Relative targets may also be used for sharing knowledge.
By measuring performance relatively, the high-performers and the low-performers can be
identified, and this may not necessarily lead to severe competition among the agents but may instead provide an opportunity for learning, where the high-performers' work processes are used as a benchmark and the low-performers seek to improve their work processes to become higher performers. This will increase productivity, but it would probably also lead to improved goal achievement, not only in terms of cost efficiency but also along other dimensions, and this, it can be argued, will then lead to higher organisational value creation.
Proposition 2.1.b.: Relative targets can create knowledge sharing among agents included in the relative
performance evaluation because low-performers may improve their work processes by learning from the
high-performers’ work processes
6.2.2. Risk
Being part of a relative performance evaluation is often considered to be risky by the agent, because
his or her performance evaluation does not only depend on his or her own performance but also on
other agents’ performance. The other agents and their performance are – so to speak – a new risk
factor that enters the scene when it comes to relative performance evaluation. If the peer group that
the individual agent is compared with performs much better than the individual agent, it will reduce
the agent’s performance measure (although the agent had worked really hard) and vice versa if the
peer group has performed very poorly, the agent will look very good in the performance measure.
Thus, the other agents’ performances represent yet another risk factor added when using relative
performance targets.
Proposition 2.2.a.: Relative performance targets increase the risk, because the agent's performance evaluation depends
on the peer group’s performance, which is uncertain to the agent.
Another aspect of risk when it comes to the effect that relative targets have on performance
measurement system design is the question of the impact of general risk versus local risk. Relative
target setting will eliminate general risks. The general risk is the risk that all agents in the relative
performance evaluation are exposed to. The reason for the elimination of this type of risk is that all
agents are influenced by the same factors, either negatively or positively. When the conditions are the
same for everyone, there will be no bias on the total relative performance evaluation. However, the
local risk, which consists of the factors influencing agents individually, will not be neutralised in the
same way as general risk. Therefore, it is very important to the success of relative target setting that
the agents that are compared »work under the same conditions« - meaning that the local risk is very
low. If not, the relative target setting will be highly risky for the individual agent and considered to
be unfair, as the agent perceives that he or she is being compared with incomparable others.
Proposition 2.2.b.: Relative targets eliminate the impact of the general risk that all agents in the relative
performance evaluation are exposed to.
6.2.3. Manipulation
Furthermore, relative performance targets eliminate the »adverse selection« problem in target setting,
where agents tend to under-estimate their forecast of performance targets. This problem is often
experienced in budget target processes where a bottom-up approach is often used (Jensen, 2003).
Relative performance targets eliminate adverse selection, because there is no need for a pre-set
target, as the basis of evaluation is created by a comparison of the individual agent’s performance
level with the performance levels of other agents rather than an absolute target, for example one
forecasted by the individual agent ex ante.
Proposition 2.3.a: Relative performance targets reduce the problem with adverse selection, as no target is
set ex-ante.
However, the relative target setting can in some cases create a whole other problem, as the agents in a
relative system do not only have incentive to compete with each other, but also to actually sabotage
each other’s work (Lazear & Gibbs, 2009). Sabotage is a distinctive manifestation of opportunistic
behaviour and clearly works against the interests of the company.
Proposition 2.3.b.: Relative performance targets can increase manipulation because the competitive environment within the peer group can provide incentives for sabotage.
Just as in the case of relative performance measures, the selection of agents for the peer group – the
other agents used in the comparison - is essential. The goal is to create a rather homogenous group of
agents. This process of defining the basis can be costly, but in return, there is no resource
consumption related to setting absolute targets. Understanding what a good performance is simply
comes from the comparison. Whether the relative design is less expensive than the absolute design
obviously depends on the knowledge that the principal has of the performance and how easy it will be
for the principal to set an absolute target. This resource consumption can then be compared to the
resource consumption of selecting comparable agents for the given performance measure.
Proposition 2.4.: Relative performance targets can create high measurement costs due to the resource consumption involved in creating and maintaining peer groups.
6.3. Movement from target-based reporting to actual performance reporting
Performance evaluation is typically based on reporting of the deviation of the actual performance from the performance standard. However, there are practices in which performance reporting is undertaken without a performance standard and where it is simply the actual performance that is reported. In these situations, there is no indication of an expected performance level or a performance standard upon which planning or performance evaluation can be based. Instead, the actual performance is reported ex post, because this type of information in itself can be used for coordination and motivation purposes in the organisation. For example, reporting of actual performance on activity measures can be quite powerful, because it creates learning opportunities for the individual employee (performance targets are sometimes quite rigid and less informative for the individual), and furthermore, compensating individuals based on actual performance – e.g. sales commissions and piece rates – can also be quite effective compared to lump-sum bonuses based on targets (e.g. Jensen, 2003).
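The contrast between target-based lump-sum bonuses and compensation based directly on actual performance can be sketched as below; the bonus rule, commission rate and threshold are hypothetical and only meant to illustrate the kinked versus linear payoff profiles discussed by Jensen (2003).

```python
# Illustrative sketch: a lump-sum bonus paid only if an ex ante target is met
# creates a kinked payoff around the target, whereas a commission on actual
# performance pays linearly and needs no target at all. Parameters are hypothetical.
def lump_sum_bonus(actual_sales: float, target: float, bonus: float = 10_000.0) -> float:
    return bonus if actual_sales >= target else 0.0

def commission(actual_sales: float, rate: float = 0.02) -> float:
    return rate * actual_sales

for sales in (400_000.0, 499_000.0, 500_000.0, 600_000.0):
    print(sales,
          lump_sum_bonus(sales, target=500_000.0),   # all-or-nothing around the target
          commission(sales))                         # smooth, target-free payoff
```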
6.3.1. Distortion
A performance target can become a source of distortion when changing conditions make it valuable for the agent to do something else than initially planned in order to best serve the organisation's interest. For example, a manager might be able to set a reasonable target for how many customer visits a sales rep should carry out for the company during a six-month period. This target, however, might turn out to be obsolete, because tasks other than visiting customers become more important during the period, seen from the organisation's perspective. In this kind of situation, it would be in the best interest of the organisation that the sales rep made fewer customer visits, and hence, the performance target is obsolete.
Thus, under conditions where there is a constant need for adaptation and the agent knows what is best for the organisation, performance targets lose their relevance as a coordination device and the
organisation will be better off reporting actual performance rather than performance deviations
from an obsolete performance standard.
Proposition 3.1.: A focus on actual performance rather than compliance with performance standards will reduce
distortion when 1) adaptation is important, 2) the agent possesses the knowledge required to decide on the
performance level that best serves the organisation’s interest and 3) the agent is non-opportunistic.
6.3.2. Risk
Reporting actual performance without a performance target is beneficial from a risk perspective,
because the agent is no longer held responsible for a deviance between actual performance and planned performance. The reason is that performance deviations create risk in themselves for the agent,
because the reasons for the deviation may be unclear to the principal. This implies that the
deviations become subject to discussions and debates in the organisation. The fact that the
performance deviations are debatable and reasons for them often opaque makes a performance
measurement system based on deviation more risky for the agent compared to a performance
measurement system based on actual performance reporting.
Proposition 3.2. Actual performance reporting reduces risk compared to performance deviance reporting,
because of the uncertainty related to the principal’s interpretation of the performance deviances.
6.3.3. Manipulation
The absence of performance deviance reporting of course implies that the principal loses a highly recognised mechanism for monitoring. Thus, the opportunistic agent does, in principle, have the opportunity to shirk (perform at a low level) without being detected, because the agent's performance is not compared with a performance standard (a historical or internal/external benchmark).
Proposition 3.3.a. Actual performance reporting can increase manipulation if opportunistic agents have
incentives to shirk, because their performance is not controlled against a performance standard.
On the other hand, a system based on actual performances has no problems with gaming with regard
to target setting - simply because performance targets are not used. This eliminates the »adverse
selection« problem which is often referred to in target setting processes where agents have incentive
to under-estimate the performance level of the coming period. For example, Jensen (2003) argues
that this effect is a core argument for switching from budget targets to linear compensation profiles
in compensation policies.
Proposition 3.3.b. Actual performance reporting will reduce manipulation, because the incentive for
adverse selection is removed.
All other things being equal, the measurement costs will be lower for performance reporting based on
actual performance instead of reporting based on a target and deviation. Target setting and deviance reporting are quite costly, and in addition to this, a lot of resources are often tied up in debating deviances and adjusting targets correctly, as described above.
Proposition 3.4.: Using performance reporting based on actual performance in contrast to performance targets and deviations reduces the measurement costs of the performance measurement system.
6.4. Movement from highly achievable to more difficult targets
The final design choice that this paper discusses is the extent to which the targets are achievable. Achievability (or goal difficulty) is an issue that has been discussed intensively in psychologically informed research. In this stream of research, difficulty is measured as the probability of success. In a
meta-analysis, Locke and Latham (1990) found a positive, linear function in that the highest or most
difficult goals produced the highest levels of effort and performance. Performance levelled off or
decreased only when the limits of ability were reached or when commitment to a highly difficult
goal lapsed. These findings imply that a probability of about 25-40% of reaching the target is the
most efficient design. Others argue that targets should be 'tight but achievable', which means that the targets are achieved in less than 50% of the cases (Atkinson, 1958). A large number of experimental studies support these propositions, and the circumstances under which evidence for them has been found in fact differ. In contrast, empirical studies of budgeting target practices show
a much higher probability of budget target achievement – close to 100% (Merchant and Manzoni,
1989). Researchers argue that budgeting is a negotiation process between two parties whose
incentives to a large extent are aligned (both are interested in achievable budget targets).
Nevertheless, the discussion in this section assumes that setting more difficult targets for the agent
can increase the motivation (effort intensity) of the agent. The aim of the discussion below is thus to
explore how the movement from a less difficult target to a more difficult target that motivates the
individual employee in a positive way will affect the transaction costs addressed in this paper: distortion, risk, manipulation and measurement costs. This discussion adds another set of perspectives on the value of moving from highly achievable targets to less achievable targets.
6.4.1. Distortion
Several psychological studies indicate that more challenging goals can create more goal
commitment and more goal-oriented behaviour (Locke and Latham, 1990). More difficult targets
are more stimulating for individuals with a relatively high self-efficacy and thus extra-ordinarily
high efforts are channeled towards these goals as behaviour becomes more goal-oriented. A
stronger goal commitment and goal orientation are positive effects when it comes to alignment of the owners' and the agent's interests in the organisation. This requires, however, that the targets and
goals are the right ones. But if this is the case, a more difficult target can reduce distortion when
more attention and commitment are directed towards the valuable performance dimensions and
tasks for the organisation:
Proposition 4.1.a.: Moving from a less to a more difficult target can create more goal commitment and
direct more attention towards the performance dimensions and tasks that create value for the organisation.
In contrast, the strong orientation towards the difficult and challenging tasks and performance
dimensions can also result in other tasks and performance dimensions which are also important for
organisational value creation being neglected. If the goal commitment leads to a multi-tasking
problem, this increased task difficulty will have a negative effect on organisational value creation.
Thus, the use of the task difficulty mechanism relies on whether the management is able to balance
the multiple tasks and performance dimensions which are often related to the agent’s job.
Proposition 4.1.b.: Increasing target difficulty can increase distortion if the goal commitment created by challenging
the agent results in the agent neglecting other tasks and performance dimensions decisive for
organisational value creation (the multi-tasking problem).
6.4.2. Risk
Target difficulty obviously also has an impact on the risk that the agent is exposed to via the
performance measurement system, because when target difficulty is increased, the probability of
achieving the target is reduced for the agent. This implies that a more difficult target is more risky
for the agent. The extra risk that the agent will be exposed to is, however, not risk that demotivates
the employee. On the contrary, in this case, the extra risk is motivating for the individual and creates
more goal orientation because the individual believes that the extra risk is challenging and
something that the employee can use to demonstrate his or her abilities to achieve an extra-ordinary
result. The agent expects this to be beneficial for him or her, for example in terms of reputation
building, promotion, permanent salary increases etc. Nevertheless, the risk of the performance
measurement system increases when goal difficulty increases (the probability of success decreases
when goal difficulty increases). The point is, however, that increased risk comes with an
opportunity for the individual employee to demonstrate extra-ordinary results – and hence it seems
to be accepted.
Proposition 4.2.: More difficult targets will lead to increased risk of the performance measurement system.
6.4.3. Manipulation
The combination of higher target difficulty and perceived opportunities for extraordinarily high
rewards (e.g. promotions) can turn out to be a dangerous cocktail, because it can give the employee
an incentive to manipulate (Lazear and Gibbs, 2009). Thus, if the challenging target creates high
goal commitment, the means the employee uses for reaching the target can turn out to be an issue.
Proposition 4.3.: If achieving difficult targets is perceived by the employee to give opportunities for
extraordinarily high rewards, this can provide incentives to manipulate.
Setting the difficult target correctly can be quite challenging for the supervisor, because the target
obviously must not be too difficult. Locke and Latham (1990) emphasise that experiments
documented that employees' performance levelled off or decreased only when the limits of the
employees' ability were reached. Identifying this target level, understanding the probability of
achievement and non-achievement, and challenging the employee in the right way require much
more information and analysis than 'just' setting an achievable target that the employee, for
example, has proven achievable in the past.
Proposition 4.4.: Setting a difficult and challenging target that motivates the individual employee
consumes more resources than setting a highly achievable target.
7. Performance measurement system design and the organisational value effects of a set of key
design choices
By means of two tables (table 1a and table 1b from sections 5 and 6 above), this section briefly
discusses the propositions about the design choices' organisational value effects that have been
reviewed in this paper. In addition, the possible combinations of the different design choices in the
performance measurement system are reviewed.
7.1. On the value-effect of performance measurement system design
Table 1a (please see above) outlines the propositions related to the choice of performance measures
and table 1b characterises propositions related to target setting. With regard to the choices of
performance measures, the focus has been on the effect of moving from one-dimensional to multi-
dimensional, from individual to collective, from objective to subjective and from absolute to relative
performance measures. In terms of target setting, the discussion has concentrated on the effects of
moving from objective to subjective target setting, from absolute to relative targets, from standards
to actual performance as the basis for the target, and from high achievability to low achievability.
The tables are not complete in their characterisation of the relationships between design choices and
criteria, since not all mechanisms at play in performance measurement system design have been
analysed in this paper. Nevertheless, the paper has discussed core mechanisms debated in the
literature, and the summary in the tables therefore provides some indication of the directions in
which a number of key design choices will affect organisational value. Thus, the tables show some of
the considerations that must be taken into account in the design and implementation of performance
measurement systems in organisations.
It is noteworthy that the tables only show the isolated effects of the various design choices on the
individual criteria. This means that the combination of multi-dimensional measures that reduces
distortion might not be the same combination as the one that most effectively reduces risk. Thus, it
is only the general principle of the mechanisms involved in multi-dimensional measures that is
characterised in the tables.
Furthermore, it is obvious that the organisational value effects of the individual design choices are not
unequivocal. Different aspects of the individual design choice will pull in different directions. This,
of course, also means that there are no simple solutions to the question of performance measurement
system design. The mechanisms mobilised by the design choices often point in different directions,
and they have to be considered carefully in relation to their individual and combined effects in the
individual organisational setting to better understand how the individual design choice will work.
A third point is that the tables can be read in two directions. When read vertically, the tables explain
how the various design choices have organisational value effects in terms of distortion, risk,
manipulation and measurement costs. When read horizontally, the tables provide suggestions as to
how the various types of transaction costs can be reduced by means of performance measurement
system design. For example, the tables illustrate that the risk in the performance measurement system
can be reduced through the choice of performance measures (by implementing multi-dimensionality,
collectivity and subjectivity) as well as through the determination of a procedure for target setting (by
introducing subjective ex post corrections, relative targets and actual performance evaluations).
The analysis of organisational value above has reviewed the potential effects of the design choices
one by one, but in practice the design choices are combined in multiple ways. The claim in this paper
is, however, that with the insight into the potential value effect of each type of design choice conveyed
in the review above, the opportunities for understanding the potential effects of a complex set of
combined choices are much improved. The effect of a combination of choices can, as a first
approximation, be understood as the sum of the effects of the individual choices.
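As a purely illustrative sketch of this additive logic, the per-choice effects summarised in tables 1a and 1b could be aggregated as follows. The design-choice names and the numeric scores below are hypothetical placeholders chosen for illustration, not values taken from the tables.

```python
# Illustrative sketch: approximate the combined effect of a set of design
# choices as the sum of their individual effects on the four criteria.
# The scores are hypothetical placeholders (-1 = the choice tends to reduce
# that cost, +1 = it tends to increase it, 0 = no clear direction).

CRITERIA = ("distortion", "risk", "manipulation", "measurement_costs")

# Hypothetical per-choice effect profiles, loosely in the spirit of tables 1a/1b.
EFFECTS = {
    "multi_dimensional_measures":
        {"distortion": -1, "risk": -1, "manipulation": 0, "measurement_costs": 1},
    "collective_measures":
        {"distortion": 0, "risk": -1, "manipulation": -1, "measurement_costs": 0},
    "subjective_ex_post_correction":
        {"distortion": -1, "risk": -1, "manipulation": 1, "measurement_costs": 1},
    "difficult_targets":
        {"distortion": 0, "risk": 1, "manipulation": 1, "measurement_costs": 1},
}


def combined_effect(choices):
    """Sum the per-criterion effects of the chosen design options."""
    total = {criterion: 0 for criterion in CRITERIA}
    for choice in choices:
        for criterion, score in EFFECTS[choice].items():
            total[criterion] += score
    return total


if __name__ == "__main__":
    design = ["multi_dimensional_measures", "subjective_ex_post_correction",
              "difficult_targets"]
    print(combined_effect(design))
    # {'distortion': -2, 'risk': -1, 'manipulation': 2, 'measurement_costs': 3}
```

The point of the sketch is only the aggregation logic; in practice the per-choice effects have to be assessed qualitatively for the individual organisational setting, and interaction effects between choices can make the simple sum a rough approximation at best.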
The three tables in the appendix summarise possible combinations of performance measures and
performance targets. Table 2 illustrates possible combinations of the different types of performance
measures. The table shows how, for example, an objective performance measure can also be
characterised as a one-dimensional or a multi-dimensional performance measure, as an individual
or a collective measure, and as an absolute measure. It is not possible to have an objective
performance measure that is also a relative performance measure, because a relative performance
measure is always also a subjective performance measure (please see the discussion above).
Table 3 (also in the appendix) outlines possible combinations of performance targets. Here it is
clear that, for example, relative performance targets can be based on actual performance and that a
subjective adjustment of the performance levels can also be added. In contrast, neither objective
target setting nor standards (as opposed to actual performance) is combinable with relativity.
Finally, table 4 (also in the appendix) reviews possible combinations of performance measures and
targets. For example, the table illustrates that an objective measure can be combined with all the
different types of target setting discussed in the paper, whereas a subjective measure is not
combinable with objective targets, absolute targets or targets based on actual performance.
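To illustrate how the combinability rules in tables 2, 3 and 4 could be operationalised as a simple consistency check on a proposed design, the following sketch encodes a handful of the incompatible pairs mentioned above. The rule set is deliberately incomplete and covers only combinations discussed explicitly in this section.

```python
# Illustrative sketch: screen a proposed design against a few of the
# incompatibility rules discussed above (tables 2, 3 and 4). The rule set
# is incomplete and only encodes pairs mentioned explicitly in the text.

INCOMPATIBLE = {
    frozenset({"objective_measure", "relative_measure"}),
    frozenset({"subjective_measure", "objective_target"}),
    frozenset({"subjective_measure", "absolute_target"}),
    frozenset({"subjective_measure", "actual_performance_target"}),
    frozenset({"relative_target", "objective_target"}),
    frozenset({"relative_target", "standard_target"}),
}


def conflicts(design):
    """Return the incompatible pairs that are both present in the design."""
    chosen = set(design)
    return [set(pair) for pair in INCOMPATIBLE if pair <= chosen]


if __name__ == "__main__":
    proposal = {"subjective_measure", "relative_target",
                "actual_performance_target"}
    print(conflicts(proposal))
    # one conflict: subjective_measure combined with actual_performance_target
```

Such a check only screens a proposed design for internal consistency; it says nothing about the organisational value effects of the design, which are the subject of the discussion in section 7.1 above.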
8. Conclusions
This paper illustrates how the organisational value of a number of design choices related to
performance measurement system design can be understood through four design criteria: distortion,
risk, manipulation and measurement costs. The four criteria are deduced from organisational
economics, and they basically represent the costs of motivating and coordinating employees'
decision making and actions by means of performance measures. Therefore, the design problem is
characterised as a question of reducing the costs of the system as much as possible. However, the
paper illustrates that tension often occurs between considerations regarding distortion, risk,
manipulation and measurement costs in the organisation when it comes to the choice of
performance measures as well as target setting. And it is precisely the combined effect of the various
types of transaction costs that becomes the key analysis point in the development and implementation
of successful performance measures to be used for companies' incentive management.
Obviously, there are many questions other than the ones focused on in this paper that have to be
analysed when the organisation's performance management system is designed. The aim of this paper
has been to outline a number of key design choices that are associated with the organisational value of
the performance measurement system. The design of the performance measurement system is critical
to the opportunities to coordinate and motivate through the organisation's incentive system.
References
Baker, G., Gibbons, R., & Murphy, K.J. 1994. Subjective performance measures in optimal incentive
contracts. The Quarterly Journal of Economics, 109, (4) 1125-1156
Blau, P.M. 1965. The dynamics of bureaucracy: A study of the interpersonal relations in two
government agencies Chicago, University of Chicago Press.
Brickley, J.A., Smith, C.W., & Zimmerman, J.L. 2004. Managerial economics and organizational
architecture Boston, McGraw-Hill.
Brüggen, A. & Moers, F. 2007. The role of financial incentives and social incentives in
multi-task settings. Journal of Management Accounting Research, 19, 25-50
Bouwens, J. & van Lent, L. 2006. Performance Measure Properties and the Effect of Incentive
Contracts, Journal of Management Accounting Research, 18, pp. 55-75.
Eccles, R.G. (1990) The performance measurement manifesto. Harvard Business Review, 69(1),
131-137.
Elster, J. 1989. Nuts and Bolts for the Social Sciences. New York: Cambridge University Press.
Elster, J. 2007. Explaining Social Behavior: More Nuts and Bolts for the Social Sciences. New
York: Cambridge University Press.
Epstein, M.J., Kumar, P., & Westbrook, R.A. 2000. The Drivers of Customer and Corporate
Profitability: Modeling, Measuring and Managing the Causal Relationship. Advances in Management
Accounting, 9, (1) 43-72.
Espeland, W.N. & Sauder, M. 2007. Rankings and Reactivity: How Public Measures Recreate
Social Worlds. American Journal of Sociology, 113, (1) 1-40
Ferreira, A. & Otley, D. (2009) The design and use of performance management systems: An
extended framework for analysis. Management Accounting Research, 20(4), 263-282.
Gibbs, M.J., Merchant, K.A., Van der Stede, W., & Vargus, M.E. 2004. Determinants and
effects of subjectivity in incentives. The Accounting Review, 79, (2) 409-436
Gibbs, M.J., Merchant, K.A., Van der Stede, W.A., & Vargus, M.E. 2009. Performance Measure
Properties and Incentive System Design. Industrial Relations: A Journal of Economy and Society,
48, (2) 237-264
Hannaway, J. 1992. Higher order skills, job design, and incentives: An analysis and proposal.
American Educational Research Journal, 29, (1) 3-21
Hansen, A. 2010. Nonfinancial performance measures, externalities and target setting: A comparative
case study of resolutions by planning, Management Accounting Research, 21, (1), 17-39
Hedström, P. and Swedberg, R. 2005. Social mechanisms - An analytical approach to social theory,
Cambridge, Cambridge University Press.
Jensen, M.C. 2003. Paying people to lie: The truth about the budgeting process. European Financial
Management, 9, (3) 379-406
Jensen, M.C. & Meckling, W.H. 1995. Specific and General Knowledge, and Organizational
Structure. Journal of Applied Corporate Finance, 8, (2) 4-18
Johnson, H.T. & Kaplan, R.S. (1987) Relevance lost: the rise and fall of management accounting.
Boston, MA, Harvard Business School Press.
Jönsson, S. & Grönlund, A. (1988) Life with a sub-contractor: new technology and management
accounting. Accounting, Organizations and Society, 13(5), 513-532.
Kaplan, R.S. & Norton, D.P. 1996. The Balanced Scorecard: Translating Strategy into Action
Boston, Massachusetts, Harvard Business School Press.
Kaplan, R.S. & Norton, D.P. 2001. Transforming the Balanced Scorecard from Performance
Measurement to Strategic Management: Part I. Accounting Horizons, 15, (1) 87-104
Kaplan, R.S. & Norton, D.P. 2004. Strategy maps: converting intangible assets into tangible
outcomes Boston, Mass., Harvard Business School Publishing.
Lazear, E.P. & Gibbs, M. 2009. Personnel economics in practice Danvers, John Wiley & Sons.
Locke, E. A. & Latham, G. P. (1990) A theory of goal setting and task performance, Englewood
Cliffs, NJ: Prentice Hall.
Malina, M.A., Nørreklit, H., & Selto, F.H. 2007. Relations among measures, climate of control, and
performance measurement models. Contemporary Accounting Research, 24, (3) 935-982
Merchant, K.A. 1989. Rewarding results – motivating profit center managers Boston, Harvard
Business School Press.
Merchant, K.A. 1998. Modern Management Control Systems – Text and Cases New Jersey,
Prentice Hall.
Merchant, K.A. 2006. Measuring general managers’ performances. Accounting, Auditing and
Accountability Journal, 19, (6) 893-917
Merchant, K.A. & Manzoni, J.F. (1989) On the achievability of budget targets in profit centers: A
field study. The Accounting Review, July, pp. 539-558.
Merton, R. K. 1968. Social Theory and Social Structure. New York: Free Press.
Milgrom, P. & Roberts, J. 1988. An economic approach to influence activities in organizations.
American Journal of Sociology, 94, (Suppl.) 154-179
Milgrom, P. & Roberts, J. 1992. Economics, Organization and Management Upper Saddle River,
New Jersey, Prentice-Hall.
Moers, F. 2005. Discretion and bias in performance evaluation: the impact of diversity and
subjectivity. Accounting, Organizations and Society, 30, 67-80
Moers, F. 2006. Performance Measure Properties and Delegation, The Accounting Review, 81, (4), p.
897-924.
Murphy, K.J. & Oyer, P. 2003. Discretion in Executive Incentive Contracts. SSRN
Prendergast, C. 2000. What trade-off of risk and incentives? American Economic Review, 90, (2)
421-425
Prendergast, C. & Topel, R. 1993. Discretion and bias in performance evaluation. European
Economic Review, 37, (2-3) 355-365
Roberts, J. 2004. The modern firm – organizational design for performance and growth Oxford,
Oxford University Press
Roy, D. 1952. Quota restriction and goldbricking in a machine shop. The American Journal of
Sociology, 57, (5) 427-442
Rucci, A.J., Kirn, S.P., & Quinn, R.T. 1998. The employee-customer-profit chain at Sears.
Harvard Business Review, 76, (1) 82-97
Simons, R. (1995) Levers of Control – How Managers Use Innovative Control Systems to Drive
Strategic Renewal, Boston, Mass.: Harvard Business School Press.
Stinchcombe, Arthur L. 2005. The Logic of Social Science Research. Chicago: University of
Chicago Press.
Womack, J. P. & Jones, D. T. 1996. Lean thinking - Banish waste and create wealth in your
corporation. New York: Simon & Schuster.
Zimmerman, J.L. 2006. Accounting for Decision Making and Control. New York, McGraw-Hill
Irwin.
Appendix

Table 2: Possible combinations of the different types of performance measures
(✔ = combinable, ÷ = not combinable, NA = not applicable)

                    One-dim.  Multi-dim.  Individual  Collective  Objective  Subjective  Absolute  Relative
One-dimensional     NA        NA          ✔           ✔           ✔          ✔           ✔         ÷
Multi-dimensional   NA        NA          ✔           ✔           ✔          ✔           ✔         ✔
Individual          ✔         ✔           NA          NA          ✔          ✔           ✔         ✔
Collective          ✔         ✔           NA          NA          ✔          ✔           ✔         ✔
Objective           ✔         ✔           ✔           ✔           NA         NA          ✔         ÷
Subjective          ✔         ✔           ✔           ✔           NA         NA          ✔         ✔
Absolute            ✔         ✔           ✔           ✔           ✔          ✔           NA        NA
Relative            ÷         ✔           ✔           ✔           ÷          ✔           NA        NA
Table 3: Possible combinations of the different types of performance targets
(✔ = combinable, ÷ = not combinable, NA = not applicable)

            Objective  Subjective  Absolute  Relative  Standard  Actual
Objective   NA         NA          ✔         ÷         ✔         ÷
Subjective  NA         NA          ✔         ✔         ✔         ÷
Absolute    ✔          ✔           NA        NA        ✔         ÷
Relative    ÷          ✔           NA        NA        ÷         ✔
Standard    ✔          ✔           ✔         ÷         NA        NA
Actual      ÷          ÷           ÷         ✔         NA        NA
Table 4: Possible combinations of performance measures (columns) and performance targets (rows)
(✔ = combinable, ÷ = not combinable)

                    One-dim.  Multi-dim.  Individual  Collective  Objective  Subjective  Absolute  Relative
Objective target    ✔         ✔           ✔           ✔           ✔          ÷           ✔         ÷
Subjective target   ✔         ✔           ✔           ✔           ✔          ✔           ✔         ✔
Absolute target     ✔         ✔           ✔           ✔           ✔          ÷           ✔         ÷
Relative target     ✔         ✔           ✔           ✔           ✔          ✔           ✔         ✔
Standard target     ✔         ✔           ✔           ✔           ✔          ÷           ✔         ÷
Actual target       ✔         ✔           ✔           ✔           ✔          ÷           ÷         ✔
Endnote
1. The assumptions about human behaviour applied in this paper when discussing the mechanisms explaining the organisational value
effect of performance measurement system design reflect those usually applied in organisational economics, e.g. contract theory,
transaction cost economics and agency theory (Brickley et al. 2004). Throughout the paper, the notions of the principal and the agent
are used, where the agent is defined as the employee who is hired by the principal to undertake a given job. The agent is the object of
the performance measurement and incentive system. The principal is the one who represents the owners of the organisation and who
designs the performance measurement system, coordinates and motivates the agent, and aligns the interests of the owners and the
agent. The relationship between the principal and the agent is characterised by assumptions such as asymmetric information and self-
interest maximisation. However, some of the mechanisms discussed in this paper gradually loosen the behavioural assumptions made
in organisational economics, for instance the assumption that the principal represents the owners' interests. Sometimes, the principal
can best be described as a pseudo-principal who pursues his or her own interests rather than the owners'. In other instances, the
principal is assumed to be quite knowledgeable about the agent's job and is therefore able to set up performance measures for specific
elements of the agent's job. This to some extent breaks with the assumption of asymmetric information between the principal and the
agent with regard to the agent's job.