
Republic of the Philippines

PRESIDENT RAMON MAGSAYSAY STATE UNIVERSITY


(Formerly Ramon Magsaysay Technological University)
Iba, Zambales, Philippines

GRADUATE SCHOOL
MAED-SOCIAL STUDIES
2nd TRIMESTER, A.Y. 2019-2020
__________________________________________________________________________________________
Topic: Relevant Research in Performance-Based Assessment (PBA)
Objective: Identify and present empirical/salient findings from research on PBA significant to teaching Social Studies.
Research Article: Performance-Based Assessment: Some New Thoughts on an Old Idea by Lai (2011)
Professor: Marie Fe D. De Guzman, Ed.D. (Professor V)
Presenter: Zendy A. Egmao
_____________________________________________________________________________________________

PERFORMANCE-BASED ASSESSMENT: SOME NEW THOUGHTS ON AN OLD IDEA BY LAI (2011)

INTRODUCTION

Our field has devised many terms to describe assessments in which examinees demonstrate some
type of performance or create some type of product (performance, performance-based, “authentic,”
constructed response, open-ended). Whatever you call them, performance-based assessments (PBAs)
have a long history in educational measurement with cycles of ups and downs. And once again, PBAs
are currently in vogue. Why? To address the federal government’s requirements for assessment systems
that represent “the full performance continuum,” the two consortia formed in response to Race to the
Top funding have both publicized assessment plans that involve a heavy dose of performance-based
tasks. Thus, PBAs are relevant to any discussion about the future of testing in America.

Performance assessments have become a major tool for educational reform and have captured
the attention of policymakers, practitioners, researchers, and stakeholders such as principals, teachers,
students, and parents. Performance assessments are defined as concrete and authentic tasks that require
students to do something with their knowledge and skills, such as give a demonstration or presentation,
or write a report. They focus on doing something, not merely knowing, and on process as well as
product.

ARGUMENTS IN FAVOR AND ARGUMENTS AGAINST THE USE OF PBAs

 Dawson R. Hancock
- Proponents claim these types of tests are more motivating to students.
- An important feature of performance assessment is that it involves the student deeply. When well conducted, it can help students reach higher levels of skill and ability development.
 Baron
- They provide a model for what teachers should be teaching and students should be learning.
- According to Baron, performance assessment is defined as a constructed response.

 Borko
- They serve as professional development opportunities for teachers involved in developing and scoring them.

 Messick
- They constitute complex, extended performances that allow for evaluation of both process
and product.
- Performance assessment could be more equitable than other forms of assessment because it can engage students in "authentic," contextualized performance closely related to important instructional goals.

 Frederiksen
- They are able to assess students’ knowledge and skills at deeper levels than traditional
assessment approaches and are better suited to measuring certain skill types, such as writing
and critical thinking.
- Evaluations of student achievement on such open-ended tasks usually rely on the professional judgment of the assessor, and some proponents view this subjectivity of scoring as the hallmark of performance assessment.

 Robert L. Linn
- They are more meaningful because they are closer to criterion performances, constituting
representations of “criterion activities valued in their own right”.
- Performance-based assessments focus on higher-order thinking and critical reasoning to perform tasks. To establish that performance-based assessments are valid testing methods, Linn suggests building on existing criteria and applying additional criteria to judge their validity.

 Stephen B. Dunbar
- PBAs are frequently scored by humans, a process that introduces a certain amount of rater error (see the agreement sketch after this list).
- In recent years, the education community has paid greater attention to how we test our students and evaluate the results. Instead of using traditional testing methods such as standardized multiple-choice tests, the focus is on assessing student performance of tasks. Complex, performance-based assessments, also referred to as authentic measurements, take various forms:

 Open-ended problems
 Essays
 Hands-on problems
 Computer simulations of real-world problems
 Portfolios of student work
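
To make the idea of rater error concrete, here is a minimal sketch (in Python, with invented two-rater data; nothing here comes from Lai's article) of Cohen's kappa, a standard chance-corrected index of how closely two raters agree when scoring the same performances:

    from collections import Counter

    def cohen_kappa(rater_a, rater_b):
        """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e),
        where p_o is observed agreement and p_e is the agreement expected
        by chance from each rater's own score distribution."""
        n = len(rater_a)
        # Observed proportion of exact agreement.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement from the two marginal score distributions.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
                  for c in set(rater_a) | set(rater_b))
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical rubric scores (1-4) from two raters on ten essays.
    scores_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
    scores_b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 2]
    print(f"kappa = {cohen_kappa(scores_a, scores_b):.2f}")  # ~0.58

A kappa well below 1.0, as here, is the quantitative face of Dunbar's point: even trained raters applying the same rubric disagree often enough to add error to students' scores.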
 RJ Shavelson
- Studies of PBAs have found that significant proportions of measurement error are attributable to task sampling (a variance-decomposition sketch follows this item).
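
The sketch below (again Python, with an invented score matrix; a minimal illustration in the spirit of generalizability analyses, not Shavelson's actual procedure) decomposes score variance for a fully crossed persons x tasks design, so the share of error due to task sampling can be seen directly:

    import numpy as np

    def variance_components(scores):
        """Person, task, and residual variance components for a fully
        crossed persons x tasks design (one score per cell), from the
        standard expected-mean-squares equations."""
        n_p, n_t = scores.shape
        grand = scores.mean()
        p_means, t_means = scores.mean(axis=1), scores.mean(axis=0)
        ms_p = n_t * ((p_means - grand) ** 2).sum() / (n_p - 1)
        ms_t = n_p * ((t_means - grand) ** 2).sum() / (n_t - 1)
        resid = scores - p_means[:, None] - t_means[None, :] + grand
        ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_t - 1))
        return {"person": max((ms_p - ms_res) / n_t, 0.0),  # true-score variance
                "task": max((ms_t - ms_res) / n_p, 0.0),    # task-sampling variance
                "residual": ms_res}                         # person-x-task + error

    # Hypothetical rubric scores: 5 students x 4 performance tasks.
    scores = np.array([[4, 2, 3, 4],
                       [3, 1, 2, 3],
                       [4, 3, 4, 4],
                       [2, 1, 1, 3],
                       [3, 2, 2, 4]], dtype=float)
    print(variance_components(scores))

When the task component is large relative to the person component, a student's score depends heavily on which tasks happened to be sampled, which is why studies in this tradition recommend many tasks (or cautious generalization) before treating a PBA score as dependable.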

 Suzanne Lane
- PBAs are used in a variety of contexts, including summative, high-stakes contexts, such as
certification and licensure, as well as employment and educational selection.

 Edward H. Haertel
- In high-stakes contexts, strict standardization of task development, administration, and scoring is critical for promoting comparability, reliability, and generalizability.
- Regardless of the value of performance assessments in the classroom, a measurement-driven reform strategy that relies on performance assessments to drive curriculum and instruction seems bound to fail.

 Linn, Betebenner & Wheeler (1998); Webb (1993)
- What makes a particular PBA useful for one context (e.g., classroom formative use) can make it less useful for another (e.g., high-stakes accountability).

 There are ways of making PBAs, even those used for classroom purposes, more reliable and comparable. For example, thoughtful reflection on the construct to be assessed (Messick, 1994), coupled with carefully crafted test specifications (Haertel & Linn, 1996), can go a long way in creating comparable tasks. Although the measurement field has traditionally avoided classroom assessment, certain groups have begun participating in collaborative initiatives to create curricula with psychometrically sound, embedded PBAs (Furtak et al., 2008).

Doing this well requires new assessment development models that incorporate close collaboration
between curriculum designers and assessment developers to ensure tight alignment and seamless
integration of assessment and instruction.

Finally, such embedded assessments will need to be piloted along several dimensions: to investigate
task and rubric performance, to examine the cognitive processes students use to complete the tasks, and
to collect student responses for anchoring performance scoring rubrics.
REFERENCES
Haertel, E. H. & Linn, R. L. (1996). Comparability. In G. W. Phillips (Ed.), Technical issues in large-scale
performance assessment (pp. 59-78). Washington, D.C.: U.S. Department of Education.

Hancock, D. R. (2007). Effects of performance assessment on the achievement and motivation of graduate students. Active Learning in Higher Education, 8(3), 219-231.

Lane, S. & Stone, C. A. (2006). Performance assessment. In R. L. Brennan (Ed.), Educational Measurement
(4th Ed.) (pp. 387-424). Westport, CT: Praeger.

Linn, R. L., Betebenner, D. W., & Wheeler, K. S. (1998). Problem choice by test takers: Implications for
comparability and construct validity. (CSE Tech. Rep. No. 482). Los Angeles: University of California,
National Center for Research on Evaluation, Standards, and Student Testing.

Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessments.
Educational Researcher, 23(2), 13-23.

Shavelson, R. J., Ruiz-Primo, M. A., & Wiley, E. W. (1999). Note on sources of sampling variability in science performance assessments. Journal of Educational Measurement, 36(1), 61-71.

Webb, N. M. (1993). Collaborative group versus individual assessment in mathematics: Group processes and
outcomes. (CSE Tech. Rep. No. 352). Los Angeles: University of California, Center for the Study of
Evaluation.

Wise, L. L. (2011, February). Picking up the pieces: Aggregating results from through-course assessments.
Paper presented at the Invitational Research Symposium on Through-Course Summative Assessment. Atlanta,
GA.
