E-Gov Usability: Enhancing Benchmarks
Abstract

Several E-Government website usability studies in the United States employ content analysis as their research methodology. They use either dichotomous measures or a generic scale to construct indexes to conduct comparative reviews. Building on those studies, this article suggests a content-analysis methodology utilizing Guttman-type scales wherever possible to refine usability assessments. This methodology advances E-Government performance through enhanced usability benchmarks to stimulate the organizational dynamics that drive performance improvement.

Available online 9 August 2008

☆ Earlier versions of this paper were presented at the 2007 American Society for Public Administration Conference in Washington, DC, and in the author's 2004 dissertation: E-Government: Website Usability of the Most Populous Counties.
E-mail address: dbaker@[Link].
1. Introduction to content-analysis based website usability studies in the United States

Several E-Government website usability studies in the United States employ content analysis as their research methodology. They use dichotomous measures to record the absence or presence of selected variables (Gant, Gant, & Johnson, 2002; Stowers, 2002; West, 2003a, 2003b, 2006). Constructed indexes rank website classes (i.e., cities) for comparative review. The Holzer and Kim (2004) international assessment raises the bar by introducing a scaling system for some variables (although the New York City website serves as the only U.S. website examined). Their four-point scale is generic. It measures the absence or presence of selected variables, the availability of downloadable items, and online governmental interaction capabilities.

Building on existing studies, this article suggests a content-analysis methodology utilizing Guttman-type scales wherever possible to refine usability scrutiny. This methodology advances E-Government performance by providing needed "how to" practitioner guidance (Heeks & Bailur, 2006) in enhancing usability benchmarks. Further, it partially responds to Bertot and Jaeger's (2006) call "to improve E-Government for users…" [through] "research into… 'best practice user-centered design.'"

The analysis unfolds in five sections. First, it punctuates the importance of E-Government usability as the U.S. endeavors to serve a growing digital majority. Second, it discusses usability dimensions and identifies typical respective variables. Third, the article reviews the theoretical framework for the proposed content-analysis methodology. Fourth, it explains the proposed methodology for benchmarks. And, fifth, the study comments on the limitations and contributions of the research. It concludes by arguing that more robust benchmarks cultivate the organizational dynamics that drive performance improvements.

2. Importance of E-Government website usability

A fading public service delivery paradigm dominates the "normal science" (Kuhn, 1996) springing from the field of public administration. That paradigm sees public service delivery in three modes: face-to-face, telephone, and postal mail service (Brown, 2003). Advances in information technology create a fourth mode of service delivery with E-Government through the internet.

E-Government supplies the means to transcend the obstacles of time and distance (Jaeger, 2003; Moon, 2002; Thomas & Strieb, 2003; West, 2004). While it involves more incremental than transformational change (West, 2005), E-Government precipitates a shift in public service delivery since the internet exists ubiquitously (Ho, 2002). With E-Government, the orientation is on "user satisfaction, control, and flexibility," external communication is both "formal and informal, direct… [with] fast feedback, [with] multiple channels," and service delivery is amenable to "user customization." Research suggests that frequent citizen use of E-Government improves government responsiveness evaluations, leading to more "process-based trust" (Tolbert & Mossberger, 2006). Exponential internet usage increases and private sector e-commerce pressure public agencies to serve constituencies electronically. Indeed, "E-Government truly has become a global phenomenon" (Jaeger, 2003).

The much-discussed "digital divide" describes the disparity between those with access to computers and the internet and those without. Beyond digital access, "website usability" refers to the relative ease with which a novice maneuvers around an actual website and "does something." The term refers to a qualitative appraisal of the relative friendliness of a website with ease of use as the standard of measurement. Regardless of one's acumen in negotiating public agency websites, democratic values underlying governmental operations require that E-Government aim for user-friendliness. If websites fail to perform readily from a usability standpoint, and instead block less-knowledgeable citizens from satisfactory contact with their government, the evolution of E-Government will be stymied.

Usability gains importance as public agencies turn to E-Government to facilitate citizen access and minimize costs (Brown & Brudney, 2004). The PEW Internet and American Life Project finds that internet penetration reaches 73% of American adults (Madden, 2006). However, earlier research indicates only 29% of those who contact government do so via E-Government (Horrigan, 2004). The Council for Excellence in Government (2003) states that many non-governmental users do not interact with government online because of difficulty in finding the website information they want. Despite its rapid evolvement (Holden, Norris, & Fletcher, 2003), E-Government fails to achieve its full potential unless the website usability barrier is recognized and bridged. Schultz (2001) urges that E-Government take a user service approach to close the gap: "The face the government presents to the public should look like an easily navigable array of services." Users must be able to master tasks quickly.

Pearrow (2000) notes three possibilities why people go to a website: (1) surfing (exploring websites in a random manner), (2) known item searching, and (3) task oriented interaction. E-Government usability studies concentrate analysis on the latter two. This contrasts with the notion of website "likability," or the degree to which a user favors a website (Spool, Scanlon, Schroeder, & Snyder, 1997). From Pearrow's (2000) perspective, usability attempts to ensure that regardless of how, when, or where users enter a website, they can use it. He addresses the notion of usability from a "usage centered design" vantage point and offers this definition:

Usability is the broad discipline of applying sound scientific observation, measurement, and design principles to the creation and maintenance of… [websites] to bring about the greatest ease of use, ease of learnability, amount of usefulness, and least amount of discomfort for… [those] who have to use the system.
3. E-Government website usability dimensions and variables

This section moves from the importance of E-Government website usability to a dimensional schema for organizing variable analysis and the identification of the range of variables. E-Government assessments specifically explore variables germane to government against a backdrop of general website usability heuristics (Nielsen, 1990; Nielsen & Tahir, 2002; Pearrow, 2000). For example, Stowers (2002) conceptually organizes these E-Government variables along six dimensions of website usability: (1) online services, (2) user-help, (3) navigation, (4) legitimacy, (5) information architecture, and (6) accessibility accommodations. After explaining the rationale for identifying the website usability studies referenced, the article reviews these dimensions and typical variables.

Many studies cover broad swaths of U.S. governmental agencies on the topic of E-Government website usability. Generally, these fall into three groups. First, some present website analyses concerning particular aspects regarding the state of E-Government. Second, some provide survey-based website usability reviews. Third, a much more narrowly-focused group analyzes E-Government usability through content analysis. The first two types of studies build knowledge about the developmental status of E-Government and survey responses to researchers, respectively. The third group of studies informs this research. They appraise "manifest content," content that is visible on the surface (Babbie, 2005). This methodology, to be discussed further in the next section, probes usability independent of interpretation of the website sponsors and their employees.

A review of six studies suggests variables for constructing website usability benchmarks. The selected studies have been chosen to illustrate the proposed methodology because they use content analysis with dichotomous measures or a generic scale to assess website usability. Further, they represent a range of public agency types (i.e., international, federal, state, and urban).

• Global E-Government, 2003 (West, 2003a): The survey explores 2166 government websites in 198 different nations, including the U.S. federal website.
• State of Federal Websites: The Pursuit of Excellence (Stowers, 2002): The research reviews 148 U.S. federal websites.
• State Web Portals: Delivering and Financing E-service (Gant et al., 2002): The inquiry evaluates all 50 U.S. state web portals.
• State and Federal E-Government in the United States, 2006 (West, 2006): The study covers 1564 state and federal sites (about 30 per state, and 48 federal sites).
• Urban E-Government, 2003 (West, 2003b): The population researched consists of 1,933 U.S. city government websites in large metropolitan areas.
• Digital Governance in Municipalities Worldwide: An Assessment of Municipal Web Sites throughout the World (Holzer & Kim, 2004): The review analyzes the 100 cities worldwide with the highest percentage of internet users (based on United Nations data), including New York City. This research is particularly noteworthy for its introduction of generic scaling to refine comparisons between cities.

Arguably, some may question the "representativeness" of the referenced, content-analysis based studies. However, the thrust of this research establishes a benchmarking usability methodology, rather than suggesting a fixed benchmarking standard. That means the number of studies and the specific agencies examined may be adjusted to discern usability performance nuances. Moreover, such flexibility offers opportunities to modify the variable universe as usability studies evolve and become more sophisticated. Likewise, different dimensional schemas (Stowers, 2002) may be employed in the proposed methodology in response to improved approaches for organizing usability variables.

With the foregoing in mind, the following reviews each of Stowers' dimensions and identifies typical variables. The variable range stems from a review of the six studies cited earlier. Association of variables by dimension permits imposition of a coding protocol for usability measurement.

Online services refer to those tasks that may be accomplished by electronically contacting an E-Government website 24 hours a day, 7 days per week, via the internet. The extent of online services acts as a critical dimension because it determines the relative website value to users. The conventional wisdom holds that if you cannot "do what you want or need to do" at a website, it has no utility for the user. Hence, a user will not stay long and repeat website visits may be unlikely. The following variables are typical of this dimension: (1) basic information about the jurisdiction, (2) documents, (3) communications with officials, (4) downloadable forms, (5) interactive forms, (6) interactive databases, (7) multi-media applications, (8) e-commerce applications, (9) facility location services, (10) mapping applications, (11) employment information, (12) publications, (13) statistics about the jurisdiction, (14) ordinance code, (15) community-based organization funding information, and (16) webcasts of public meetings (Gant et al., 2002; Holzer & Kim, 2004; Stowers, 2002; West, 2003a, 2003b, 2006).

User-help identifies mechanisms that facilitate satisfactory electronic contact and interaction. Failure to provide satisfactory website contact and interaction through user-help tools results in user frustration (Nielsen, 1990; Pearrow, 2000). User frustration impedes website use. User-help tools provide general instructional guidance about moving within a website. Typical variables for this dimension include the following: (1) site information, (2) e-mail for technical assistance, (3) responses to frequently asked questions, (4) site feedback mechanism, (5) "help" feature, (6) site index content, (7) site map, (8) "search" feature, (9) "tips," and (10) translations (Gant et al., 2002; Holzer & Kim, 2004; Stowers, 2002; West, 2003a, 2003b, 2006).

Navigation aids, not to be confused with user-help, allow the user to maneuver through the website readily to specific destinations. After all, the entire web represents a vast navigational system. "The basic user interaction is to click on hypertext links in order to move around a huge information space with hundreds of millions of pages" (Nielsen, 2000). User-friendly navigational aids tell users where they are, where they have been, and where they may go. While the emphasis is on the latter, the former provides comfort through user awareness and sense of control. These features differ from user-help tools in that navigation features specifically identify quick routes to services that users want most (Stowers, 2002). The following types of variables reside in this dimension: (1) service availability data, (2) "answers a to z," or some other logical query/response system, (3) jurisdiction calendar, (4) service links, (5) office/agency links, (6) Freedom of Information Act, or comparable state law, (7) "what's new," (8) press releases, (9) students/kids link, (10) contact information, (11) most visited site designations, (12) special initiatives information, and (13) welcome statement (Gant et al., 2002; Holzer & Kim, 2004; Stowers, 2002; West, 2003a, 2003b, 2006).

Legitimacy features reassure the user that a particular website is designed to conduct official government business (Stowers, 2002). E-Government users want credible evidence that electronic records generally and government websites specifically supply security, privacy, and legitimacy. Representative variables of this dimension include the following: (1) security policy, (2) privacy policy, (3) disclosure/disclaimer statements, (4) site contact information, and (5) update notification information (Gant et al., 2002; Holzer & Kim, 2004; Stowers, 2002; West, 2003a, 2003b, 2006).

Information architecture devices illustrate website information structure and organization, or how the information first appears to the user (Stowers, 2002). It identifies the footbridges arching over the knowledge gap of users about a website's organization and the services that it offers. Typical variables for this dimension include the following: (1) audience-focused, (2) organizational listing of units, (3) events, (4) use of metaphors, (5) identification of accountable officials, (6) identification of services offered, (7) topics/issues, (8) personalized and customized features, and (9) newspaper format (Gant et al., 2002; Holzer & Kim, 2004; Stowers, 2002; West, 2003a, 2003b, 2006).

Accessibility accommodations contain mechanisms that address requirements for the disabled. E-Government websites need to accommodate users equally by lowering any disability related hurdles (Jaeger, 2006). Usability variables found in this dimension include the following: (1) text telephone/telephone device for the deaf, (2) "Bobby" software created by Watchfire Incorporated to assess disabled-accessibility (Garson, 2006), (3) legal compliance, and (4) text version of site (to accommodate the visually impaired) (Gant et al., 2002; Holzer & Kim, 2004; Stowers, 2002; West, 2003a, 2003b, 2006).

In summary, E-Government website usability is operationalized as an assessment of the relative "ease with which a novice user interacts with a public agency website to accomplish the user's goal(s)" (Baker, 2006). Further, it embodies an aggregate measure of discrete variables of manifest content in a particular website. Fig. 1, for conceptual and data analysis purposes, organizes variables along Stowers' (2002) six dimensions to portray the aggregate influence on website usability.

Fig. 1. Website usability dimensions' impact on overall usability.
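For readers who want to operationalize the schema, the brief sketch below shows one way the six dimensions and a few of their typical variables could be organized as a coding structure before content analysis begins. The Python layout and abbreviated labels are illustrative assumptions; the variable universe itself comes from the six reviewed studies.

```python
# Illustrative only: six usability dimensions (Stowers, 2002) mapped to a few
# of the typical variables listed above. Labels are abbreviated placeholders.
USABILITY_DIMENSIONS = {
    "online_services": ["basic_information", "downloadable_forms", "interactive_forms"],
    "user_help": ["faq_responses", "site_map", "search_feature"],
    "navigation": ["service_links", "office_agency_links", "contact_information"],
    "legitimacy": ["security_policy", "privacy_policy", "disclaimer_statements"],
    "information_architecture": ["audience_focused", "services_offered", "topics_issues"],
    "accessibility": ["tty_tdd", "accessibility_compliance", "text_only_version"],
}

print(f"{len(USABILITY_DIMENSIONS)} dimensions, "
      f"{sum(len(v) for v in USABILITY_DIMENSIONS.values())} example variables")
```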
4. Theoretical framework for methodology

This section presents the theoretical framework for a content-analysis based methodology to construct a valid and reliable protocol for enhanced E-Government website usability measurement. Accordingly, it reviews content analysis, recommends triangulation to establish common variables, discusses benchmarking and benchmarks, and considers scale and index construction. Through content analysis, an additive index comprised of common dichotomous and scale variables is constructible. Such an index can be utilized to rank and to establish benchmarks to assess E-Government website usability among public agencies.

4.1. Content analysis

Content analysis refers to a method of organizing variables of a unit of analysis into operationalized attributes and measuring the results (Babbie, 2005; Hodder, 2000; McNabb, 2002; Ryan & Bernard, 2000). Usually, researchers apply content analysis to forms of human communication (Leedy & Ormrod, 2001) and use such analysis descriptively (McNabb, 2002). It proceeds in systematic steps to ferret out particular attributes of specific variables in line with research objectives. Selected variables require exacting operational definitions (Babbie, 2005). The researcher prepares an item dictionary to define the measured constructs (McNabb, 2002). Measurements of the occurrence of variable attributes in the content make tabulation and statistical analysis of the data possible using descriptive or inferential statistics. Content analysis relies on the coding of data classified in a conceptual framework driven by the objectives of the research questions. The coding keys off predetermined and strictly defined attributes (Leedy & Ormrod, 2001). Coding requires detailed description with inclusion and exclusion criteria. Abstract material may demand examples to distinguish boundaries (Ryan & Bernard, 2000).

Content analysis appraises "manifest content" in website usability assessments. Manifest content relates to that which is visible on the surface (Babbie, 2005). The researcher devises a coding, or rating, protocol and system of measurement. Variable tabulation frequencies are recorded and analyzed. Analysis and reporting strive to make sense out of patterns, trends, and themes reflected in the data.

Content analysis contributes two significant advantages for website usability assessment. First, it provides a structured methodology for quantifying the contents of a qualitative or interpretive analysis (McNabb, 2002). It does this "in a simple, clear, and easily repeatable format" (McNabb, 2002). Second, it is unobtrusive (Babbie, 2005). That means it has no effect on the unit of analysis while assessment transpires. The researcher studies attributes of variables in anonymity. This is ideal for website usability studies. The researcher approaches the unit of analysis (the website to be studied) much as an actual user would while pursuing website usability assessment.
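To illustrate the coding protocol described above, the sketch below shows one way an item dictionary with operational definitions and dichotomous manifest-content codes might be recorded and tabulated. The item names, definitions, and site codes are hypothetical examples, not data from the cited studies.

```python
from collections import Counter

# Hypothetical item dictionary: each coded item gets an exacting operational
# definition plus inclusion/exclusion notes (Babbie, 2005; McNabb, 2002).
ITEM_DICTIONARY = {
    "privacy_policy": "A page or statement describing how user data are handled; "
                      "exclude generic legal disclaimers without privacy language.",
    "site_map": "A single page listing the site's sections as links; "
                "exclude search-results pages.",
}

# Hypothetical manifest-content codes for three websites (1 = present, 0 = absent).
coded_observations = {
    "city_a": {"privacy_policy": 1, "site_map": 1},
    "city_b": {"privacy_policy": 0, "site_map": 1},
    "city_c": {"privacy_policy": 1, "site_map": 0},
}

# Tabulate how often each attribute is observed across the units of analysis.
frequencies = Counter()
for site, codes in coded_observations.items():
    for item, present in codes.items():
        frequencies[item] += present

for item in ITEM_DICTIONARY:
    print(f"{item}: present on {frequencies[item]} of {len(coded_observations)} sites")
```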
4.2. Triangulation

Individual governmental websites serve as the units of analysis in E-Government website usability research, usually by class type (i.e., cities). The measurement of website usability involves the aggregate contribution of discrete variables. However, E-Government website usability measurement continues in a developmental state. "There is no agreement on appropriate benchmarks or what constitutes an effective government website…" (West, 2004). Triangulation (Leedy & Ormrod, 2001) establishes commonality based on review of whatever universe of E-Government website usability studies a researcher considers. It reduces potential bias and improves validity (Yang & Melitski, 2007) in the determination of the commonality of usability variables. For example, suppose a researcher approached an analysis using previous studies involving website usability variables. Using triangulation, the researcher lists the variables in each study and then counts the number of studies in which each specific variable appears. Suppose that "x" variables appear in the majority (or some established threshold) of the "y" studies. The researcher may then qualify the "commonality" of a variable. Thus, triangulation establishes commonality based on reviewing whatever universe of E-Government website usability studies the research should consider.
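Because triangulation here amounts to tallying how many studies list each variable, it is straightforward to express as code. The sketch below uses hypothetical study inventories and an illustrative threshold; the worked example later in the article uses a threshold of four of six studies.

```python
from collections import Counter

# Hypothetical variable inventories drawn from several usability studies.
studies = {
    "study_1": {"downloadable_forms", "privacy_policy", "site_map"},
    "study_2": {"downloadable_forms", "privacy_policy", "search_feature"},
    "study_3": {"downloadable_forms", "site_map", "search_feature"},
    "study_4": {"downloadable_forms", "privacy_policy"},
}

THRESHOLD = 3  # a variable is "common" if it appears in at least this many studies

counts = Counter(var for inventory in studies.values() for var in inventory)
common_variables = sorted(v for v, n in counts.items() if n >= THRESHOLD)

print(counts)            # occurrences per variable across the studies
print(common_variables)  # e.g., ['downloadable_forms', 'privacy_policy']
```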
4.3. Benchmarking and benchmarks

Benchmarking describes a type of performance measurement that serves as a means for shaping organizational performance (Ammons, 1995; Griesemer, 1995; Hatry, 1999). It investigates the measurement of an organization's own performance in contrast to the "best-in-class performance" (Gore, 1997; Zhu, 2003). Benchmarking does not simply establish what is. It influences what will be. It focuses organizational energy and resources on the future direction of the organization. Spendolini (1992) defines benchmarking as "a continuous, systematic process for evaluating the products, services, and work processes of organizations that are recognized as representing best practices for the purpose of organization improvement." Done well, it methodically pinpoints and searches for improvement opportunities through outwardly focused data gathering processes. Public sector benchmarking targets the best services possible with the lowest cost and the least delay (Bullivant, 1994). Fitz-enz (1993) adds that benchmarking "stretches" organizations in the direction of better performers. In this sense, it is future oriented. It stimulates trying for the best-known performance targets.

"Benchmarks, in contrast to benchmarking, are measurements to gauge the performance of a function, operation, or business relative to others" (Bogan, 1994). Used as a noun rather than a verb, a "benchmark" defines a reference point from which measurement may be made, something that serves as a standard by which others may be evaluated. Fitz-enz (1993) cautions that benchmarks do not "provide answers, suggest priorities, or prescribe action." However, without benchmarks, managers are often tempted to ascribe changes in citizen service needs to history or "gut feelings" instead of objective evaluations of service outcomes and deficiencies. Benchmarks establish the underlying goal of continuous improvement (McNair & Leibfried, 1992). This involves identifying what is possible, supported by evidence that, indeed, some other agency has achieved it. A benchmark illustrates a best-in-class standard.

At the organizational level of analysis, the creation of benchmarks promotes a change climate in three ways (Codling, 1992). First, the disparity between present performance and best practice generates dissatisfaction with the status quo and motivation for change. Second, scrutinizing the benchmark facilitates figuring out what and how to change current performance. Third, observing benchmarks brings to the focal point an achievable picture of a realistic future. Change climate promotion cultivates increased service responsiveness, efficiency, and the civic engagement possibilities of a continuously improving public service capability.

Identifying website usability benchmarks gains importance by providing guideposts to public agencies in the rapidly evolving field of E-Government. In an information society with many instantaneous services (e.g., ATM cards, retail merchandise scanning, e-commerce, and online education), citizens are likely to grow impatient and intolerant of public agencies that drag their heels on improving website usability. Through comparison, public agencies can evaluate deficiencies and review benchmarks to help design the desired improvements. Establishing website benchmarks documents the strength of variables by usability dimensions and spotlights deficiencies for focused attention. Learning through the experiences of others "reduces both the time required to move up the learning curve and cost of improvement" (Keehley, Medlin, MacBride, & Longmire, 1997).
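Benchmark identification, in the noun sense used above, reduces to finding the best observed score per dimension and the gap a given agency must close. The sketch below illustrates this with hypothetical agency names and dimension scores; nothing in it is drawn from the cited studies.

```python
# Hypothetical dimension scores for three agencies in a study population.
dimension_scores = {
    "county_a": {"online_services": 18, "user_help": 12, "navigation": 9},
    "county_b": {"online_services": 22, "user_help": 10, "navigation": 11},
    "county_c": {"online_services": 15, "user_help": 14, "navigation": 8},
}

# A benchmark is the best-in-class score observed for each dimension.
dimensions = dimension_scores["county_a"].keys()
benchmarks = {d: max(scores[d] for scores in dimension_scores.values()) for d in dimensions}

# Gap analysis spotlights deficiencies for focused attention.
agency = "county_c"
gaps = {d: benchmarks[d] - dimension_scores[agency][d] for d in dimensions}

print(benchmarks)  # {'online_services': 22, 'user_help': 14, 'navigation': 11}
print(gaps)        # county_c trails the benchmark by 7, 0, and 3 points
```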
4.4. Scale and index construction

Scales and indexes function as ordinal measures, rank-ordering data (Babbie, 2005). As composite measures, they mirror data about multiple items. Scale construction requires assignment of scores to response patterns with a logical or empirical structure that reveals an array of indicators. This recognizes "that some items reflect a relatively weak degree of the variable while others reflect something stronger" (Babbie, 2005). It offers assurance of ordinality because scales tap into structures of intensity, or subtle variations, within a variable. In contrast, an index refers to a set of variables used to measure a more abstract concept (O'Sullivan, Rassel, & Berner, 2007). It accumulates scores assigned to individual data elements (Babbie, 2005).

Composite measures (scales and indexes) facilitate understanding complex concepts (Babbie, 2005). Three principal reasons explain this. First, looking at several aspects of a variable provides a richer, more full-bodied picture. Second, several perspectives enhance the variation range. This allows finer distinctions, especially those involving ordinal measures. Third, composite measures allow efficient data reduction whereby a numerical score may flag ordinal placement while retaining some specificity within indicators.

Care must be taken when selecting variables for a composite measure (O'Sullivan et al., 2007). Selecting the "right" variables entails choosing those that represent the focal point of interest and nothing more. Sufficient variables allow differentiation among important gradations of the dimension studied. Marginal information items should be excluded for efficiency. Variables must be linked theoretically or conceptually to what the research seeks to measure. Inclusion decisions should be supported with evidence or reasoning to establish content validity.

Index construction requires several steps. First, the concept or variable to be measured requires definition. Definitions should be consistent with past research unless a rationale exists for doing otherwise (O'Sullivan et al., 2007). Definitions have two dimensions: conceptual and operational. Conceptual definitions connect terms with other concepts. Essentially, they function as dictionary definitions. Operational definitions detail how to measure a concept or a variable, precisely. The correspondence between conceptual and operational definitions must be close. Second, items included in an index should manifest "face validity." Face validity relates to "that quality of an indicator that makes it seem a reasonable measure of some variable" (Babbie, 2005). Third, an index measure requires "unidimensionality." That means it must measure one and only one characteristic. Fourth, index measurement involves variance. Does each measure depict something different and add to the overall descriptiveness of the measure in the process? Fifth, the relationships among measures compel consideration. Do certain items gauge degrees of an aspect or are they measures of the same aspect? In the latter instance, are both measures needed?

After selecting index measures, attention turns to index scoring (Babbie, 2005; O'Sullivan et al., 2007). Critical issues here deal with the range of index scores and score assignment, or weighting. The score range calls for evaluation of gradation variance. Score assignment, or weighting, involves contemplating the score for each response. Does each item receive the same score or are there rationales for different weights? Babbie (2005) recommends that items receive equal scores unless definitive justifications exist for differential weighting. Normally, the overall value of an index is calculated using an equation inclusive of all the index items. The resulting number represents a composite of two or more numbers, with each depicting a variable in the index (O'Sullivan et al., 2007).
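As a minimal sketch of the additive scoring just described, the following combines hypothetical dichotomous (0/1) and scaled (0–4) item scores into an index, defaulting to the equal weighting Babbie (2005) recommends. The function name and the optional weights hook are illustrative assumptions, not the article's own formula.

```python
# Hypothetical item scores for one website: dichotomous items score 0/1,
# scaled items score 0-4 (see Table 2 for an example scale).
item_scores = {
    "basic_information": 1,    # dichotomous
    "interactive_forms": 0,    # dichotomous
    "downloadable_forms": 3,   # scaled 0-4
    "e_commerce": 2,           # scaled 0-4
}

def additive_index(scores, weights=None):
    """Sum item scores; items are weighted equally unless weights are supplied."""
    weights = weights or {item: 1 for item in scores}
    return sum(scores[item] * weights.get(item, 1) for item in scores)

print(additive_index(item_scores))  # 6 with equal weights
```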
Continuing with the prior illustrations, the six previously cited studies (Gant et al., 2002; Holzer & Kim, 2004; Stowers, 2002; West, 2003a, 2003b, 2006) identify a total of 87 variables (23 — online services, 16 — user-help, 23 — navigation, 10 — legitimacy, 11 — information architecture, and 4 — accessibility accommodations). Through triangulation, the research qualifies common variables as those identified in four or more of the six studies. This results in 37 common variables among the studies (11 — online services, 7 — user-help, 5 — navigation, 6 — legitimacy, 6 — information architecture, and 2 — accessibility accommodations). Let us suppose the researcher finds eleven common variables (as previously defined) that fit the operationalized dimension labeled "online services." The researcher designates six of these (basic information, interactive forms, interactive databases, multimedia applications, chat areas/message boards, and e-mail updates/listserv) for dichotomous treatment. For instance, if the website does not have "basic information," the website assessment receives "0" for that variable; if the website has "basic information," the website assessment receives "1" for the variable. Table 4 provides a sample index recognizing six online services for dichotomous measures. For the online services dimension, the five remaining common variables are listed for scale use (communication with officials, documents/publications, downloadable forms, e-commerce applications, and employment information). These are supported with sufficient research to enable scale construction with operationalized attributes. Each of these variables may receive scores from "0" to "4" depending on the strength of the variable attributes present. This review process aligns with the appropriate dimension.

Table 2
Example of scale for downloadable forms

Score  Measurement attributes
0      Absence of downloadable forms
1      One to three downloadable forms
2      Four to six downloadable forms
3      Seven to nine downloadable forms
4      More than nine downloadable forms

Usability dimensions and variables      Raw score  Weighted score

Online services
  Basic information                     1
  Interactive forms                     1
  Interactive databases                 1
  Multimedia applications               1
  Chat areas/message boards             1
  E-mail updates/listserv               1
  Communication with officials          4
  Documents/publications                4
  Downloadable forms                    4
  E-commerce applications               4
  Employment information                4
  Subtotals                             26         16.67

User-help
  About the site                        1
  E-mail us                             1
  Personal digital assistant/wireless   1
  Index                                 1
  Feedback                              4
  Foreign language                      4
  Search                                4
  Subtotals                             16         16.67

Navigation
  E-Government services                 1
  Link to contact information           1
  Chat areas/message boards             1
  Link to other agencies                4
  Volume of aids                        4
  Subtotals                             11         16.67
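A short sketch can tie the pieces of this example together: the Table 2 scale converts a count of downloadable forms into a 0–4 score, and a dimension's dichotomous and scaled items sum to a raw subtotal. The weighted-score column above shows 16.67 per dimension, which is consistent with allocating 100 points equally across the six dimensions; the normalization used below is an assumption for illustration, and all observation values are hypothetical.

```python
def downloadable_forms_score(count):
    """Score the downloadable-forms variable on the 0-4 scale of Table 2."""
    if count == 0:
        return 0
    if count <= 3:
        return 1
    if count <= 6:
        return 2
    if count <= 9:
        return 3
    return 4

# Hypothetical online-services observations for one website.
dichotomous_items = {"basic_information": 1, "interactive_forms": 1, "interactive_databases": 0}
scaled_items = {"downloadable_forms": downloadable_forms_score(count=7),  # -> 3
                "e_commerce_applications": 2}

raw_subtotal = sum(dichotomous_items.values()) + sum(scaled_items.values())

# Assumed normalization: each of six dimensions can contribute at most
# 100 / 6 = 16.67 points to the overall index, in proportion to its raw subtotal.
max_raw = len(dichotomous_items) * 1 + len(scaled_items) * 4
weighted_score = round((raw_subtotal / max_raw) * (100 / 6), 2)

print(raw_subtotal, weighted_score)  # 7 raw points; 10.61 of a possible 16.67
```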
More sophisticated benchmarks spur better-informed efforts toward improvement through content analysis. Two examples demonstrate the proposed methodology's impact on advancing website usability. First, establishing an overall benchmark more robustly identifies further detail for emulation by other public agencies. Such emulation and performance enhancement do not require any new breakthroughs in website usability. Modeling the usability achievement of the overall benchmark makes improvement reachable. Second, public agencies within a study population can benefit from comparative analysis with the more robustly documented benchmarks for each established website usability dimension. Overall usability performance improvement results from public agencies taking successful steps to achieve the usability benchmark dimensional scores. Once again, this performance requires no new breakthroughs in website usability.

In the methodological sphere, the research invites attention to the benefits of operationalizing website usability variables and respective attributes in some standardized form. While triangulation affords a sound research strategy for designating common variables, quicker progress may be made in advancing website usability. Principal researchers could catapult website usability measurement forward through the recommended standardization.

In conclusion, this article makes a case for more sophisticated benchmarking in E-Government website usability analyses. Rigorous benchmarking engages and further informs the potential for significant improvement in the evolution of E-Government studies and corresponding task performance. This promises to push public performance forward through better E-Government.
References

Ammons, D. N. (1995). Accountability for performance: Measurement and monitoring in local government. Washington, DC: International City/County Management Association.
Babbie, E. (2005). The basics of social research. Belmont, CA: Wadsworth Publishing.
Baker, D. L. (2006). Website usability of the most populous counties in the United States. Journal of E-Government, 3(3), 65−89.
Bertot, J. C., & Jaeger, P. T. (2006). User-centered e-government: Challenges and benefits for government web sites. Government Information Quarterly, 23(2), 163−168.
Bogan, C. E. (1994). Benchmarking for best practices: Winning through innovative adaptation. New York: McGraw Hill.
Brown, M. M. (2003). Digital government innovation. School of Government, University of North Carolina at Chapel Hill. Retrieved August 24, 2003, from [Link]edu/pubs/electronicversions/pdfs/[Link]
Brown, M. M., & Brudney, J. L. (2004). Achieving advanced electronic services: Opposing environmental constraints. Public Performance & Management Review, 28(1), 96−113.
Bullivant, J. R. N. (1994). Benchmarking for continuous improvement in the public sector. Harlow, United Kingdom: Longman Information and Reference.
Codling, S. (1995). Best practice benchmarking: A management guide. Amherst, MA: Human Resource Development Press.
Council for Excellence in Government. (2003). The new e-government equation: Ease, engagement, privacy, and protection. Retrieved July 17, 2007, from [Link] [Link]/admin/FormManager/filesuploading/[Link]
Fitz-enz, J. (1993). Benchmarking staff performance: How staff departments can enhance their value to customers. San Francisco: Jossey-Bass Publishers.
Gant, D. B., Gant, J. P., & Johnson, C. L. (2002). State web portals: Delivering and financing e-service. Arlington, VA: The PricewaterhouseCoopers Endowment for the Business of Government.
Garson, G. D. (2006). Public information technology and e-governance. Sudbury, MA: Jones and Bartlett Publishers.
Gore, A. (1997). Serving the American public: Best practices in customer-driven strategic planning. Washington, DC: Federal Benchmarking Consortium Report.
Griesemer, J. R. (1995). The power of performance measurement. In D. N. Ammons (Ed.), Accountability for performance: Measurement and monitoring in local government (pp. 157−168). Washington, DC: International City/County Management Association.
Hatry, H. P. (1999). Performance measurement: Getting results. Washington, DC: Urban Institute Press.
Heeks, R., & Bailur, S. (2007). Analyzing e-government research: Perspectives, philosophies, theories, methods, and practice. Government Information Quarterly, 24(2), 243−265.
Ho, A. Tat-Kei (2002). Reinventing local governments and the e-government initiative. Public Administration Review, 62(4), 434−444.
Hodder, I. (2000). The interpretation of documents and material culture. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 703–715). Thousand Oaks, CA: Sage Publications, Inc.
Holden, S. H., Norris, D. F., & Fletcher, P. D. (2003). Electronic government at the local level: Progress to date and future issues. Public Performance & Management Review, 26(4), 325−344.
Holzer, M., & Kim, S. T. (2004). Digital governance in municipalities worldwide: An assessment of municipal web sites throughout the world. Newark, NJ: National Center for Public Productivity.
Horrigan, J. B. (2004). How Americans get in touch with government. Washington, DC: PEW Internet & American Life Project. Retrieved June 26, 2007, from [Link] [Link]/pdfs/PIP_E-Gov_Report_0504.pdf
Jaeger, P. T. (2003). The endless wire: E-government as global phenomenon. Government Information Quarterly, 20(4), 323−331.
Jaeger, P. T. (2006). Assessing section 508 compliance on federal e-government web sites: A multi-method, user-centered evaluation of accessibility for persons with disabilities. Government Information Quarterly, 23(2), 169−190.
Kaylor, C., Deshazo, R., & Van Eck, D. (2001). Gauging e-government: A report on implementing services among American cities. Government Information Quarterly, 18(4), 293−307.
Keehley, P., Medlin, S., MacBride, S., & Longmire, L. (1997). Benchmarking for best practices in the public sector. San Francisco: Jossey-Bass Publishers.
Kuhn, T. S. (1996). The structure of scientific revolutions (3rd ed.). Chicago: The University of Chicago Press.
Leedy, P. D., & Ormrod, J. E. (2001). Practical research (7th ed.). Upper Saddle River, NJ: Merrill Prentice-Hall.
Madden, M. (2006). Internet penetration and impact. Washington, DC: PEW Internet & American Life Project. Retrieved June 26, 2007, from [Link]pdfs/PIP_Internet_Impact.pdf
McNabb, D. E. (2002). Research methods in public administration and non-profit management: Quantitative and qualitative approaches. Armonk, NY: M. E. Sharpe.
McNair, C. J., & Leibfried, K. H. L. (1994). Benchmarking: A tool for continuous improvement. Indianapolis, IN: John Wiley & Sons, Inc.
Moon, J. M. (2002). The evolution of e-government among municipalities: Rhetoric or reality? Public Administration Review, 62(4), 424−434.
Nielsen, J. (1990). Ten usability heuristics. [Link]. Retrieved October 4, 2003, from [Link]
Nielsen, J. (2000). Designing web usability: The practice of simplicity. Indianapolis, IN: New Riders Publishing.
Nielsen, J., & Tahir, M. (2002). Homepage usability: 50 websites deconstructed. Indianapolis, IN: New Riders Publishing.
O'Sullivan, E., Rassel, G. R., & Berner, M. (2007). Research methods for public administration (5th ed.). New York: Longman.
Pearrow, M. (2000). Web site usability handbook. Rockland, MA: Charles River Media, Inc.
Ryan, G. W., & Bernard, H. R. (2000). Data management and analysis methods. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 769–802). Thousand Oaks, CA: Sage Publications, Inc.
Schultz, J. A. (2001). Information technology in local government: A practical guide for managers. Washington, DC: International City/County Management Association.
Spendolini, M. J. (1992). The benchmarking book. New York: American Management Association.
Spool, J. M., Scanlon, T., Schroeder, W., & Snyder, C. (1997). Web site usability: A designer's guide. New York: Morgan Kaufman Press.
Stowers, G. N. L. (2002). The state of federal websites: The pursuit of excellence. Arlington, VA: The PricewaterhouseCoopers Endowment for the Business of Government.
Thomas, J. C., & Strieb, G. (2003). The new face of government: Citizen-initiated contacts in the era of e-government. Journal of Public Administration Research and Theory, 13(1), 83−101.
Tolbert, C. J., & Mossberger, K. (2006). The effects of e-government on trust and confidence in government. Public Administration Review, 66(3), 354−369.
West, D. M. (2003a). Global e-government, 2003. Providence, RI: Center for Public Policy, Brown University. Retrieved October 4, 2003, from [Link] [Link]
West, D. M. (2003b). Urban e-government, 2003. Providence, RI: Center for Public Policy, Brown University. Retrieved October 4, 2003, from [Link] [Link]
West, D. M. (2004). E-government and the transformation of service delivery and citizen attitudes. Public Administration Review, 64(1), 15−27.
West, D. M. (2005). Digital government: Technology and public sector performance. Princeton, NJ: Princeton University Press.
West, D. M. (2006). State and federal e-government in the United States, 2006. Providence, RI: Center for Public Policy, Brown University. Retrieved July 2, 2007, from http://[Link]/[Link]
Yang, K., & Melitski, J. (2007). Competing and complementary values in information technology strategic planning. Public Performance & Management Review, 30(3), 426−452.
Zhu, J. (2003). Quantitative models for performance evaluation and benchmarking. Norwell, MA: Kluwer Academic Publishers.

David L. Baker is an assistant professor of public administration, California State University, San Bernardino. He has published in the Journal of E-Government and The Public Manager while contributing entries to the Handbook of Public Administration and the Encyclopedia of Public Administration and Public Policy. His local government practitioner background brings extensive county government experience, including 12 years as county manager in two California counties.