So19 Lay Aff
1ac
I affirm the resolution, Resolved: In the United States, colleges and universities
ought not consider standardized tests in undergraduate admissions.
Framing
I value morality because “ought” implies a moral obligation
Finally, to recognize the operation of structural violence forces us to ask questions about how and why we tolerate it,
questions which often have painful answers for the privileged elite who unconsciously support it. A final question of this
section is how and why we allow ourselves to be so oblivious to structural violence. Susan Opotow offers
an intriguing set of answers, in her article Social Injustice. She argues that our normal perceptual cognitive
processes
divide people into in-groups and out-groups. Those outside our group lie outside our scope of justice.
Injustice that would be instantaneously confronted if it occurred to someone we love or know is barely noticed if it
occurs to strangers or those who are invisible or irrelevant. We do not seem to be able to open
our minds and our hearts to everyone, so we draw conceptual lines between those who are in
and out of our moral circle. Those who fall outside are morally excluded, and become either invisible,
or demeaned in some way so that we do not have to acknowledge the injustice they suffer. Moral exclusion
is a human failing, but Opotow argues convincingly that it is an outcome of everyday social cognition. To reduce its nefarious
effects, we must be vigilant in noticing and listening to oppressed, invisible, outsiders. Inclusionary thinking can be fostered by
relationships, communication, and appreciation of diversity. Like Opotow, all the authors in this section point out that
structural violence is not inevitable if we become aware of its operation and build
systematic ways to mitigate its effects. Learning about structural violence may be discouraging, overwhelming,
or maddening, but these papers encourage us to step beyond guilt and anger, and begin to think about how to reduce
structural violence. All the authors in this section note that the same structures (such as global communication and normal
social cognition) which feed structural violence, can also be used to empower citizens to reduce it. In the long run, reducing
structural violence by reclaiming neighborhoods, demanding social justice and living wages, providing prenatal care,
alleviating sexism, and celebrating local cultures, will be our most surefooted path to building lasting peace.
b. Subpoint B is bias—
Standardized tests are racially and socioeconomically biased
Goldfarb 14. [Zachary A Goldfarb is a deputy business editor at the Washington Post. He went
to the Woodrow Wilson School of Public and International Affairs at Princeton.] “These four
charts show how the SAT favors rich, educated families.” The Washington Post. March 5, 2014.
https://www.washingtonpost.com/news/wonk/wp/2014/03/05/these-four-charts-show-how-
the-sat-favors-the-rich-educated-families/ TG
The first chart shows that SAT scores are highly correlated with income. Students from families
earning more than $200,000 a year average a combined score of 1,714, while students from
families earning under $20,000 a year average a combined score of 1,326. The writing test has
the widest score gap, perhaps explaining why College Board officials are dropping the essay.
The second chart shows that students from educated families do better. A student with a
parent with a graduate degree, for example, on average scores 300 points higher on their SATs
compared to a student with a parent with only a high school degree. No doubt this is the same
dynamic reflected in the income graph, given that there are high returns to college education.
But it also dispels the notion that students in America have good opportunities to advance
regardless of the family they're born to.
The third chart shows that Asians and whites get much higher scores than other ethnic
groups. Asians top the test with an average score of 1,645, while African Americans record the
lowest score with an average of 1,278. It appears that the advantage of white students over
black and Hispanic students is roughly similar for the reading, math and writing test.
The fourth chart shows that taking the PSAT once or twice tends to lead to a higher
score. Students who don't take the PSAT, for instance, have an average score of 1,409, while
students who take it twice - once in their junior year and then once before that -- have an
average score of 1,612. This almost certainly reflects the fact that schools in wealthier
communities do a better job of preparing students for standardized testing, including by
offering PSATs.
c. Subpoint C is graduation—
Selective colleges are critical for minority graduation
Carnevale PhD et al 19 [Anthony P. Carnevale, research Professor and Director of the
Georgetown University Center on Education and the Workforce, PhD public finance economics
from Syracuse; Jeff Strohl, Director of Research, PhD economics from American; Martin Van Der
Werf, former reporter and editor at The Chronicle of Higher Education, award-winning reporter,
columnist and editor at The St. Louis Post-Dispatch and The Arizona Republic; Michael C. Quinn,
Research Analyst, MA public policy from Georgetown; Kathryn Peltier Campbell, Senior
Editor/Writer and Postsecondary Specialist, MA English from Virginia] “SAT-Only Admissions:
How Would It Change College Campuses?” Georgetown University Center on Education and the
Workforce RE
In addition, since Black and Latino students have lower median SAT scores than Whites, an
overreliance on the SAT puts Black and Latino students at a disadvantage in admissions, even
though the test results mean little about whether they will actually succeed in college. That’s
particularly unfortunate because Black and Latino students stand to benefit strongly from
attending selective colleges: a Black or Latino student with a score above 1000 on the SAT has an
81 percent chance of graduating at a selective college, but only a 46 percent chance of
graduating at an open-access college.10 Unfortunately, the overwhelming majority of Black and
Latino students attend open-access colleges, severely diminishing their opportunities to
graduate.
d. Subpoint D is inequality—
Poor education locks minority students into cycles of inequality
Carnevale PhD et al 18 [Anthony P. Carnevale, research Professor and Director of the
Georgetown University Center on Education and the Workforce, PhD public finance economics
from Syracuse; Jeff Strohl, Director of Research, PhD economics from American; Martin Van Der
Werf, former reporter and editor at The Chronicle of Higher Education, award-winning reporter,
columnist and editor at The St. Louis Post-Dispatch and The Arizona Republic; Michael C. Quinn,
Research Analyst, MA public policy from Georgetown; Dmitri Repnikov, MS Applied Econ from
Johns Hopkins] “Our Separate and Unequal Public Colleges” Georgetown University Center on
Education and the Workforce RE
Separate and unequal public college systems increase earnings disparities and hurt the careers
of Blacks and Latinos. In combination, all these fiscal, demographic, and educational forces have
resulted in racially separate and financially unequal public colleges.
The fact that we are devoting more public resources to the colleges where Whites are highly
concentrated while underfunding the open-access public colleges where minority students are
more likely to enroll is of great consequence. Since Whites are disproportionately attending the
colleges that produce the highest graduation rates, the result is a continued widening of the
already yawning gaps in college degrees among Whites, Blacks, and Latinos. In the United
States, 37 percent of Whites have a bachelor’s degree or higher, compared to 22 percent of
Blacks and 17 percent of Latinos (Figure 4).
These disparities in educational credentials carry over into the workforce. On average, Whites
earn $50,000 annually while Blacks earn $38,000 and Latinos earn $33,000.37 In other words,
for every dollar a White worker earns, a Black worker earns 76 cents, and a Latino worker earns
66 cents.38
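A quick arithmetic note of my own (not part of the card), showing that the cents-on-the-dollar figures are simply the ratios of the stated median earnings:

\[
\frac{\$38{,}000}{\$50{,}000} = 0.76 \qquad \frac{\$33{,}000}{\$50{,}000} = 0.66
\]

that is, roughly 76 cents and 66 cents for every dollar of median White earnings.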
All workers, no matter their race or ethnicity, see a huge earnings boost from completing a
bachelor’s degree compared to those who at most completed a high school education. While
Blacks and Latinos with bachelor’s degrees still earn less than Whites, their earnings gains are
greater, on a percentage basis, after they earn the degrees. In 2015, for Blacks, the median
earnings for a prime-age working adult were 67 percent higher for those with a bachelor’s
degree than for those who had only a high school diploma. The jump in earnings between a high
school diploma and a bachelor’s degree was even larger for Latinos: 78 percent. For Whites, a
bachelor’s degree resulted, on average, in a 59 percent boost in earnings (Figure 5).
Why are the results so poor? Are the underperforming test takers simply not ready for college?
The answer to these questions is the very reason many in the field of cognitive science believe
these exams shouldn’t matter in the first place – moment-in-time evaluations are fraught with
problems and don’t provide an accurate view of real knowledge or potential. We have the
technology to do better.
Think back to the exams you’ve taken in your life. What do you remember most? Is it the
material tested on the exam, or the anxiety you felt about how much of your future was riding
on one set of questions? That anxiety illustrates the underlying problem with the SAT and ACT
exams and an inherent unfairness that negatively impacts many students. Ultimately, so many
factors that affect exam scores, like cramming, anxiety, physical health, and luck, aren’t what we
really want to measure.
Perhaps most worryingly, many of these factors are more affected by who we are than what
we’ve learned. A whopping two thirds of high school students have experienced an
uncomfortable level of test anxiety at some point, with severe and chronic test anxiety affecting
up to one in four.
More generally, 32 percent of adolescents have suffered from an anxiety disorder – numbers
that have been rising in tandem with the prevalence of standardized testing. Research has also
shown a strong correlation between performance on exams and factors such as minority status
and family income.
Considering these inequities, it’s no surprise that large scale studies across thousands of
students find ACT or SAT submissions to be a poor predictor of college success. Those same
studies have also shown that high school grade point averages, measuring achievement over
time and multiple test opportunities, were more successful indicators of future performance
and success.
This is a good start, but even the entrenched use of GPAs has room for improvement. Despite
the positives of GPA – the fact that it’s a long-term, data-driven process measuring knowledge
with consistent data points across a student’s entire high school career – it’s also heavily
influenced by major exams.
What these current standards for knowledge assessment are missing is the large scale
application of cognitive science (how we learn), technology (artificial intelligence and machine
learning), and rich learner data sets that help adapt the learning experience to each individual.
This is the path to accurately assessing real knowledge and potential – a GPA 2.0, if you will.
Educators, admissions officers, and, most of all, students have so much to gain by moving to a
better model of assessing knowledge. Imagine if your coursework could predict exactly when
you were about to forget the types of chemical bonds you needed to master or recognize that
you hadn’t yet mastered Shakespeare’s typical literary devices and could then deliver that
information to you at exactly the right moment you needed to build long-term memory and
retention.
Or if the teacher was able to see a dashboard of how students were progressing towards
mastery of information and use that insight to decide when to intervene and assist the students
that really need help, as opposed to treating them all the same.
The combination of cognitive science and technology can do far more than simply assess
knowledge – it can help us learn more effectively in the first place. Cognitive scientists have
spent decades rigorously mapping out the most efficient techniques to build long-lasting
memories – deeper engagement, challenging self-testing, and optimally distributed reviews – as
well as identifying common approaches like cramming, mnemonic devices, and re-reading,
which lead to poor retention. Unfortunately, the latter approaches are extremely prevalent in
ACT and SAT testing, while tactics like rereading have no effect on recall.
Today, the ACT and SAT matter – a lot. But as learner tools and data continue to improve the
learning experience, they shouldn’t. The real test won’t be the ability to stay cool, complete test
forms, and outsmart exam day, it will be objectively tracked, long-term knowledge and
understanding.
Thus, I affirm
1AR
Extensions
Overview 1ar
Standardized testing disproportionately affects racial minorities, low-income students, and female
students
Overview 2ar
For more than 100 years, standardized testing has stood as the crux of a racist admissions
process, serving as a better predictor of wealth than anything else. Getting rid of standardized
testing allows a shift to a more holistic admissions process, increasing diversity.
Standardized test (StudyUSA): a test that is given to students in a very consistent manner
Governments create policies that are meant to affect millions of people. These
people will conflict on certain ideas, meaning that governments should use
empirics and net benefits to decide whether the policy is acceptable. The only
ethical system that makes sense for the state is utilitarianism.
Woller 97 [Gary, BYU Prof., “An Overview by Gary Woller”, A Forum on the Role of Environmental Ethics, June 1997, pg. 10]
Appeals to a priori moral principles, such as environmental preservation, also often fail to acknowledge that public
policies inevitably entail trade-offs among competing values. Thus since policymakers cannot justify inherent value conflicts to the public in any
philosophical sense, and since public policies inherently imply winners and losers, the policymakers' duty
to the public interest requires them to demonstrate that the redistributive effects and value trade-offs implied by their policies are
somehow to the overall advantage of society. At the same time, deontologically based ethical systems have severe practical limitations
as a basis for public policy. At best, a priori moral principles provide only general guidance to ethical dilemmas
in public affairs and do not themselves suggest appropriate public policies, and at worst, they
create a regimen of regulatory unreasonableness while failing to adequately address the
problem or actually making it worse. For example, a moral obligation to preserve the environment by no means implies the best way, or any
way for that matter, to do so, just as there is no a priori reason to believe that any policy that claims to preserve the environment will actually do so. Any number of policies
might work, and others, although seemingly consistent with the moral principle, will fail utterly. That deontological principles are an inadequate basis for environmental policy is
evident in the rather significant irony that most forms of deontologically based environmental laws and regulations tend to be implemented in a very utilitarian manner by
street-level enforcement officials. Moreover, ignoring the relevant costs and benefits of environmental policy and their attendant incentive structures can, as alluded to above,
actually work at cross purposes to environmental preservation. (There exists an extensive literature on this aspect of regulatory enforcement and the often perverse outcomes
of regulatory policy. See, for example, Ackerman, 1981; Bartrip and Fenn, 1983; Hawkins, 1983, 1984; Hawkins and Thomas, 1984.) Even the most die-hard
preservationist/deontologist would, I believe, be troubled by this outcome. The above points are perhaps best expressed by Richard Flathman, The number of values typically
involved in public policy decisions, the broad categories which must be employed and above all, the scope and complexity of the consequences to be anticipated militate against
reasoning so conclusively that they generate an imperative to institute a specific policy. It is seldom the case that only one policy will meet the criteria of the public interest
between policy alternatives and the problems they address, and the public must be reasonably assured that a policy will
actually do something about an existing problem; this requires the means-end language and methodology of
utilitarian ethics.
experiencing such value. Without potential consequences at the level of experience—happiness, suffering, joy,
despair, etc.—all talk of value is empty. Therefore, to say that an act is morally necessary , or evil, or blameless, is to make
(tacit) claims about its consequences in the lives of conscious creatures (whether actual or potential). I am
unaware of any interesting exception to this rule. Needless to say, [For example,] if one is worried about pleasing God or His angels, this
assumes that such invisible entities are conscious (in some sense) and cognizant of human behavior. It also generally assumes [and] that it is possible to suffer their [his] wrath or
enjoy their approval, either in this world or the world to come. Even within religion, therefore, consequences and conscious states
remain the foundation of all values.
The first chart shows that SAT scores are highly correlated with income. Students from families
earning more than $200,000 a year average a combined score of 1,714, while students from
families earning under $20,000 a year average a combined score of 1,326. The writing test has
the widest score gap, perhaps explaining why College Board officials are dropping the essay.
The second chart shows that students from educated families do better. A student with a
parent with a graduate degree, for example, on average scores 300 points higher on their SATs
compared to a student with a parent with only a high school degree. No doubt this is the same
dynamic reflected in the income graph, given that there are high returns to college education.
But it also dispels the notion that students in America have good opportunities to advance
regardless of the family they're born to.
The third chart shows that Asians and whites get much higher scores than other ethnic
groups. Asians top the test with an average score of 1,645, while African Americans record the
lowest score with an average of 1,278. It appears that the advantage of white students over
black and Hispanic students is roughly similar for the reading, math and writing test.
The fourth chart shows that taking the PSAT once or twice tends to lead to a higher
score. Students who don't take the PSAT, for instance, have an average score of 1,409, while
students who take it twice - once in their junior year and then once before that -- have an
average score of 1,612. This almost certainly reflects the fact that schools in wealthier
communities do a better job of preparing students for standardized testing, including by
offering PSATs.
The racial disparities in standardized tests are appalling – this is explicit evidence
from the College Board itself
Richard Reeves writes for the Brookings Institution in 2017 [Richard V. Reeves and Dimitrios
Halikias, *John C. and Nancy D. Whitehead Chair Senior Fellow - Economic Studies Director -
Future of the Middle Class Initiative Co-director - Center on Children and Families ** Research
Assistant - Center on Children and Families , 2-1-2017, "Race gaps in SAT scores highlight
inequality and hinder upward mobility," Brookings, https://www.brookings.edu/research/race-
gaps-in-sat-scores-highlight-inequality-and-hinder-upward-mobility/, accessed 8-15-2019]
LHSBC
If someone asks about the study, “In a perfectly equal distribution, the racial breakdown of scores at every point
in the distribution would mirror the composition of test-takers as a whole, i.e., 51 percent white, 21 percent
Latino, 14 percent black, and 14 percent Asian”
Taking the SAT is an American rite of passage. Along with the increasingly popular ACT, the SAT is critical in identifying student
readiness for college and as an important gateway to higher education. Yet despite efforts to equalize academic opportunity, large
racial gaps in SAT scores persist. THE GREAT SCORE DIVIDE The SAT provides a measure of
academic inequality at the end of secondary schooling. Moreover, insofar as SAT scores predict
student success in college, inequalities in the SAT score distribution reflect and reinforce racial
inequalities across generations. In this paper, we analyze racial differences in the math section of
the general SAT test, using publicly available College Board population data for all of the nearly
1.7 million college-bound seniors in 2015 who took the SAT. (We do not use the newest data released for the
class of 2016, because the SAT transitioned mid-year to a new test format, and data has so far only been released for students who
took the older test.) Our analysis uses both the College Board’s descriptive statistics for the entire test-taking class, as well
as percentile ranks by gender and race. (The College Board has separate categories for “Mexican or Mexican American” and “Other
Hispanic, Latino, or Latin American.” We have combined them under the term Latino.) The mean score on the math section of the
SAT for all test-takers is 511 out of 800; the average
scores for blacks (428) and Latinos (457) are significantly
below those of whites (534) and Asians (598). The scores of black and Latino students are
clustered towards the bottom of the distribution, while white scores are relatively normally
distributed, and Asians are clustered at the top: Race gaps on the SATs are especially
pronounced at the tails of the distribution . In a perfectly equal distribution, the racial breakdown of
scores at every point in the distribution would mirror the composition of test-takers as a whole, i.e., 51
percent white, 21 percent Latino, 14 percent black, and 14 percent Asian. But in fact, among top scorers
—those scoring between a 750 and 800—60 percent are Asian and 33 percent are white, compared to 5
percent Latino and 2 percent black. Meanwhile, among those scoring between 300 and 350, 37 percent
are Latino, 35 percent are black, 21 percent are white, and 6 percent are Asian The College Board’s publicly
available data provides data on racial composition at 50-point score intervals. We estimate that in the entire country last year at
most 2,200 black and 4,900 Latino test-takers scored above a 700. In comparison, roughly 48,000 whites and 52,800 Asians scored
that high. The same absolute disparity persists among the highest scorers: 16,000 whites and 29,570 Asians scored above a 750,
compared to only at most 1,000 blacks and 2,400 Latinos. (These estimates—which rely on conservative assumptions that maximize
the number of high-scoring black students, are consistent with an older estimate from a 2005 paper in the Journal of Blacks in Higher
Education, which found that only 244 black students scored above a 750 on the math section of the SAT.) A
STUBBORN
BLACK-WHITE GAP Disappointingly, the black-white achievement gap in SAT math scores has
remained virtually unchanged over the last fifteen years. Between 1996 and 2015, the average gap between
the mean black score and the mean white score has been .92 standard deviations. In 1996 it was .9 standard deviations and in 2015
it was .88 standard deviations. This
means that over the last fifteen years, roughly 64 percent of all test-
takers scored between the average black and average white score. These gaps have a significant
impact on life chances, and therefore on the transmission of inequality across generations. As
the economist Bhashkar Mazumder has documented, adolescent cognitive outcomes (in this case, measured by the
AFQT) statistically account for most of the race gap in intergenerational social mobility. COULD
THE GAP BE EVEN WIDER?
There are some limitations to the data which may mean that, if anything, the
race gap is being understated. The
ceiling on the SAT score may, for example, understate Asian achievement. If the exam was
redesigned to increase score variance (add harder and easier questions than it currently has),
the achievement gap across racial groups could be even more pronounced. In other words, if the
math section was scored between 0 and 1000, we might see more complete tails on both the
right and the left. More Asians score between 750 and 800 than score between 700 and 750, suggesting that many Asians
could be scoring above 800 if the test allowed them to. A standardized test with a wider range of scores, the LSAT, offers some
evidence on this front. An analysis of the 2013-2014 LSAT finds an average black score of 142 compared to an average white score of
153. This amounts to a black-white achievement gap of 1.06 standard deviations, even higher than that on the SAT. This is of course
a deeply imperfect comparison, as the underlying population of test-takers for the LSAT (those applying to law school) is very
different from that of the SAT. Nonetheless the LSAT distribution provides yet another example of the
striking academic achievement gaps across race: Another important qualification is that the SAT is no
longer the nationally dominant college-entrance exam. In recent years, the ACT has surpassed the
SAT in popularity. If the distributions of students taking the two exams are significantly different, focusing on one test alone
won’t give a complete picture of the racial achievement gap. A cursory look at the evidence, however, suggests that race gaps
on the 2016 ACT are comparable to those we observe for the SAT. In terms of composition, ACT test-
takers were 54 percent white, 16 percent Latino, 13 percent black, and 4 percent Asian. Except for the substantially reduced share of
Asian test-takers, this
is reasonably close to the SAT’s demographic breakdown. Moreover, racial
achievement gaps across the two tests were fairly similar. The black-white achievement gap
for the math section of the 2015 SAT was roughly .88 standard deviations. For the 2016 ACT it
was .87 standard deviations. Likewise, the Latino-white achievement gap for the math section of the 2015 SAT was
roughly .65 standard deviations; for the 2016 ACT it was .54
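A back-of-the-envelope check of my own on the card's standard-deviation framing, assuming a 2015 SAT math standard deviation of roughly 120 points (an assumption; the excerpt does not state the standard deviation):

\[
\frac{534 - 428}{120} = \frac{106}{120} \approx 0.88 \text{ standard deviations}
\]

which lines up with the .88 figure Reeves and Halikias report for the 2015 black-white gap.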
Results and Discussion The ANCOVA performed on the number of items correctly solved yielded a significant main effect of race, F(1, 35) = 10.04, p < .01, qualified by a significant Race X Test Description interaction, F(1, 35) = 8.07, p < .01. The mean SAT score
for Black participants was 603 and for White participants 655. The adjusted means are presented in Figure 2.
Planned contrasts on the adjusted scores revealed that, as predicted, Blacks in the diagnostic condition performed
significantly worse than Blacks in the nondiagnostic condition, t(35) = 2.38, p < .02, than Whites in the
diagnostic condition, t(35) = 3.75, p < .001, and than Whites in the nondiagnostic condition, t(35) = 2.34, p < .05. But the planned
contrasts of the Black diagnostic condition against the other conditions did not reach conventional significance, although its
contrasts with the Black nondiagnostic and White diagnostic conditions were marginally significant, with ps of .06 and .09
respectively. Blacks completed fewer items than Whites, F(1, 35) = 9.35, p < .01, and participants in the diagnostic conditions tended
to complete fewer items than those in the nondiagnostic conditions, F(1, 35) = 3.69, p < .07. The overall interaction did not reach
significance. But planned contrasts revealed that Black participants in the diagnostic condition finished fewer items (M = 12.38) than
Blacks in the nondiagnostic condition (M = 18.53), t(35) = 2.50, p < .02; than Whites in the diagnostic condition (M = 20.93), t(35) =
3.39, p < .01; and than Whites in the nondiagnostic condition (M = 21.45), t(35) = 3.60, p < .01.
These results establish the reliability of the diagnosticity-by-race interaction for test performance that was marginally significant in
Study 1. They also reveal another dimension of the effect of stereotype threat. Black
participants in the diagnostic
condition completed fewer test items than participants in the other conditions. Test
diagnosticity impaired the rate, as well as the accuracy of their work. This is precisely the
impairment caused by evaluative pressures such as evaluation apprehension, test anxiety, and
competitive pressure (e.g., Baumeister, 1984). But one might ask why this did not happen in the near-identical Study 1.
Several factors may be relevant. First, the most involved test items—reading comprehension items that took several steps to answer
—came first in the test. And second, the test lasted 25 min in the present experiment whereas it lasted 30 min in the first
experiment. Assuming, then, that stereotype
threat slowed the pace of Black participants in the
diagnostic conditions of both experiments, this 5-min difference in test period may have made it harder for these
participants in the present experiment to get past the early, involved items and onto the more quickly answered items at the end of
the test, a possibility that may also explain the generally lower scores in this experiment. This view is reinforced by the ANCOVA
(with SATs as a covariate) on the average time spent on each of the first five test items—the minimum number of items that all
participants in all conditions answered. A marginal effect of test presentation emerged, F(1, 35) = 3.52, p < .07, but planned
comparisons showed that Black participants in the diagnostic condition tended to be slower than
participants in the other conditions. On average they spent 94 s answering each of these items in contrast to 71 s for
Black participants in the nondiagnostic condition, t(35) = 2.39, p < .05; 73 s for Whites in the diagnostic condition, t(35) = 2.12, p <
.05, and 71 s for Whites in the nondiagnostic condition, t(35) = 2.37, p < .05. Like other forms of evaluative pressure, stereotype
threat causes an impairment of both accuracy and speed of performance. No differences were found on
any of the remaining measures, including self-reported effort, cognitive interference, or anxiety. These measures may have been
insensitive, or too delayed. Nonetheless, we lack an important kind of evidence. We have not shown that test diagnosticity causes in
Black participants a specific apprehension about fulfilling the negative group stereotype about their ability—the apprehension that
we argue disrupts their test performance. To examine this issue we conducted a third experiment.
b. Subpoint B is empirics—
Eliminating tests is feasible – when Hampshire College did so, it saw an
increase in diversity and first-generation college students
Hampshire's president Jonathan Lash (MA/JD) writes in 2015 [Jonathan, Director of
World Resources Institute, a DC-based environmental think tank, where he previously served as
president. Jonathan is a widely recognized environmental leader who chaired President Bill
Clinton's Council on Sustainable Development and was the State of Vermont’s Environmental
Secretary and Commissioner. He holds a law degree and master’s degree in education from
Catholic University of America and a bachelor’s from Harvard College. President, Hampshire
College], "Results of Removing Standardized Test Scores from College Admissions,"
hampshire.edu, https://www.hampshire.edu/news/2015/09/21/results-of-removing-
standardized-test-scores-from-college-admissions 9-21-2015 RE
In our admissions, we review an applicant’s whole academic and lived experience. We consider
an applicant’s ability to present themselves in essays and interviews, review their
recommendations from mentors, and assess factors such as their community engagement and
entrepreneurism. And yes, we look closely at high school academic records, though in an
unconventional manner. We look for an overarching narrative that shows motivation, discipline,
and the capacity for self-reflection. We look at grade point average (GPA) as a measure of
performance over a range of courses and time, distinct from a one-test-on-one-day SAT/ACT
score. A student’s consistent "A" grades may be coupled with evidence of curiosity and learning
across disciplines, as well as leadership in civic or social causes. Another student may have
overcome obstacles through determination, demonstrating promise of success in a demanding
program. Strong high school graduates demonstrate purpose, a passion for authenticity, and
commitment to positive change.
We’re seeing remarkable admissions results since disregarding standardized test scores:
Our yield, the percentage of students who accepted our invitation to enroll, rose in a single year
from 18% to 26%, an amazing turnaround
The quantity of applications went down but the quality went up, likely because we made it
harder to apply, asking for more essays; Our applicants collectively were more motivated,
mature, disciplined and consistent in their high school years than past applicants
Class diversity increased to 31% students of color, the most diverse in our history, up from 21%
two years ago
The percentage of students who are the first-generation from their family to attend college rose
from 12% to 18% in this year’s class.
Our “No SAT/ACT policy” has also changed us in ways deeper than data and demographics: Not
once did we sit in an Admissions committee meeting and "wish we had a test score." Without
the scores, every other detail of the student’s application became more vivid. Their academic
record over four years, letters of recommendation, essays, in-person interviews, and the
optional creative supplements gave us a more complete portrait than we had seen before.
Applicants gave more attention to their applications including the optional components, putting
us in a much better position to predict their likelihood of success here.
c. Subpoint C
College is essential for the path out of poverty
Emily Yount writes in the Washington Post, 12-15-2014 [she is a journalist
who worked for WaPo and the Texas Tribune], "How to beat the forces that keep low-income
students from graduating," Washington Post,
https://www.washingtonpost.com/sf/business/2014/12/15/the-college-trap-that-keeps-people-
poor/?utm_term=.fced93ec2bff
The path from poverty to the middle class has changed — now, it runs through higher education. In 1965,
a typical man whose education stopped after four years of high school earned a salary 15 percent higher than the median male
worker. By 2012, a high-school-only grad was earning 20 percent less than the median. The swing has been even more dramatic for
women who stopped their education after high school: They earned almost 40 percent more than the median female salary in 1965
and 24 percent less in 2012. College graduates, meanwhile, have widened their income advantage over high
school grads, as several recent studies demonstrate — including one from MIT economist David Autor, who found that the
annual income gap between a college-educated family and a high-school-educated one grew by
$28,000 over the past 35 years, after adjusting for inflation. Nine out of 10 children who grow up at the
bottom of the income ladder but then graduate from college move up to a higher economic
bracket as adults, according to the Pew Charitable Trusts. Less than half of kids without a degree make the
same leap.
Contention 2
Standardized tests are not effective at measuring school success – they can tell
us little more than the size of students' houses, and multiple-choice problems
can’t test conceptual understanding
Kohn in 2000 writes [Alfie Kohn (B.A. from Brown University in Providence, Rhode Island in 1979, having created his own
interdisciplinary course of study, and an M.A. in the social sciences from the University of Chicago in Illinois in 1980), “The case
against standardized testing: raising the scores, ruining the schools”, 2000, https://purpletod.co.za/docs/Standardized
%20Testing.pdf]
The results of these tests must tell us something. The main thing they tell us is how big the students' houses
are. Research has repeatedly found that the amount of poverty in the communities where
schools are located, along with other variables having nothing to do with what happens in
classrooms, accounts for the great majority of the difference in test scores from one area to the
next. To that extent, tests are simply not a valid measure of school effectiveness. (Indeed, one educator suggested that we
could save everyone a lot of time and money by eliminating standardized tests and just asking a
single question: "How much money does your mom make? ... OK, you're on the bottom.") Only someone
ignorant or dishonest would present a ranking of schools' test results as though it told us about the quality of teaching that went on
in those schools when, in fact, it primarily tells us about socioeconomic status and available resources. Of course, knowing what
really determines the scores makes it impossible to defend the practice of using them as the basis for high-stakes decisions. But
socioeconomic status isn't everything. Within a given school, or group of students of the same status, aren't there going to be
variations in the scores? Sure. And among people who smoke three packs of cigarettes a day, there are going to be variations in lung
cancer rates. But that doesn't change the fact that smoking is the factor most powerfully associated with lung cancer. Still, let's put
wealth aside and just focus on the content of the tests themselves. The
fact is that they usually don't assess the
skills and dispositions that matter most. They tend to be contrived exercises that measure how
much students have managed to cram into short-term memory . Even the exceptions—questions
that test the ability to reason—generally fail to offer students the opportunity "to carry out
extended analyses, [or] to solve open-ended problems, or to display command of complex
relationships, although these abilities are at the heart of higher order competence," as Lauren
Resnick, one of our leading cognitive scientists, put it. Part of the problem rests with an obvious truth whose implications we may
not have considered: These
tests care only about whether the student got the right answer. To point
this out is not to claim that there is no such thing as a right answer; it is to observe that right
answers don't necessarily signal understanding, and wrong answers don't necessarily signal the
absence of understanding. Most standardized tests ignore the process by which students arrive
at an answer, so a miss is as good as a mile and a minor calculation error is interchangeable with
a major failure of reasoning. The focus on right answers also means that most, if not all, of the items on the test were
chosen precisely because they have unambiguously correct solutions, with definite criteria for determining what those solutions are
and a clear technique for getting there. The
only thing wrong with these questions is that they bear no
resemblance to most problems that occupy people in the real world.
These tests only prepare students for the test – not the real world
LA Times 16 "We all SAT down for nothing: Why the SAT is useless for college admission" by LA Times HS Insider - February 23,
2016 (https://highschool.latimes.com/port-of-la-high-school/we-all-sat-down-for-nothing-why-the-sat-is-useless-for-college-admissions/)
– Aldo Andrade, Mika Verner, Ashley Anderson, Ashley Ardaiz, Jaelene Galaz, Austin Labador, Vania Patino, Darlene Radell,
Ximena Ruiz, Malia Street and Jesus Zamora – it's written by students
In fact, the SAT is not a measure of intelligence at all. Instead, it is a mere indication of how well
someone takes the SAT. That’s about it. To “study” for these tests, students often repeatedly take
practice tests over and over again, until they gradually get a higher score. To get better scores,
students memorize the “tricks” of the SAT test questions. Take the POLAHS SAT Prep class, for
example. In class, students are given a binder full of tips and shortcuts to memorize in order to find
answers more effectively. They spend time analyzing the unique format of the SAT and its test questions.
In the end, success on the SAT only measures how well a student can take the SAT itself, not how
well a student may do in college courses or how many college-related skills a student may have.
Although students at POLAHS greatly benefit from the SAT Prep class, it doesn’t discount the overall
ineffectiveness of the SAT test as a whole.
High school GPA is a better predictor of college success than the SAT – it reflects a
student's overall work ethic and commitment
Preston Cooper writes for Forbes, 6-11-2018, "What Predicts College
Completion? High School GPA Beats SAT Score," Forbes,
https://www.forbes.com/sites/prestoncooper2/2018/06/11/what-predicts-college-completion-
high-school-gpa-beats-sat-score/#466cabba4b09
I am a higher education analyst based in Washington, D.C. I formerly worked in higher education
research at the American Enterprise Institute and the Manhattan Institute. In addition to writing
for Forbes, my writing has appeared in the Wall Street Journal, the Washington Post, the Seattle
Times, U.S. News and World Report, the Washington Examiner, Fortune, RealClearPolicy, and
National Review. I hold a B.A. in economics from Swarthmore College.
One of the most pressing problems in American higher education is the high college dropout rate. Spending time in college without a
degree to show for it means students will lose opportunities to work or cultivate skills elsewhere. College dropouts are also far more
likely than graduates to default on their student loans. In many ways, dropping out of college is worse than not going to college at
all. Knowing which factors predict completion, and intervening accordingly, can save students and colleges a world of grief. That’s
where a new report by Matthew Chingos of the Urban Institute comes in. (The report was published through the American
Enterprise Institute, my employer, but I had no involvement with its production.) For obvious reasons, students who exhibit better
academic preparation in high school are more likely to complete college. But “academic preparation” can mean different things.
There are two primary ways to measure a student’s academic aptitude: scores on standardized tests such as the SAT, and grades in
high school coursework. The SAT and similar tests exist to account for differences in
how high schools grade students. Some teachers feel pressured to give students high marks despite middling academic
performance, a phenomenon known as grade inflation. Certain high schools may run more rigorous courses than others. As a result,
an A-average GPA at one high school might be equivalent to a B+ at another. As SAT scores are a more consistent indicator of
aptitude, one might expect them to better predict a student’s chances of graduating college than high school GPA. But Chingos’
research shows exactly the opposite. For instance, a student with a high SAT score (above 1100) but a
middling high school GPA (between 2.67 and 3.0) has an expected graduation rate of 39%. But
students with the opposite credentials—mediocre SAT scores but high GPAs—graduate from
college at a 62% rate. Put another way, the expected graduation rate of a student with a given
GPA doesn’t change very much depending on her SAT score. But the expected graduation rate of
a student with a given SAT score varies tremendously depending on her GPA . Given differences in
grading standards across high schools, GPA may not provide a consistent measure of a student’s ability in mathematics, reading, and
other subjects. But
GPA usually captures whether a student consistently attends class and completes
her assignments on time. Students need to cultivate these behaviors in order to succeed in
college, and such good habits can lead to success even for students of modest academic ability.
“Students could in theory do well on a test even if they do not have the motivation and
perseverance needed to achieve good grades ,” notes Chingos. “It seems likely that the kinds of
habits high school grades capture are more relevant for success in college than a score from a
single test.” To paraphrase various celebrities and motivational posters, most of life (and college) is just showing up. Granted,
colleges cannot rely solely on high school GPA as a proxy for likelihood of graduation, as there’s evidence that many high
schools do artificially lift their students’ grades. Simply pumping up grades doesn’t boost a student’s preparation
for college, so the SAT and other standardized tests are useful as a check on such grade inflation. But perhaps colleges looking to
identify students at risk of dropping out should pay more attention to high school marks.
Contention 3 – Anxiety
The SAT adds unnecessary stress to students' lives
Page 14 I am more than a number: The case against SAT scores in college admissions by Kelsey Page, a student at Stanford
University published in The Stanford Daily on December 2, 2014 (https://www.stanforddaily.com/2014/12/02/i-am-more-than-a-
number-the-case-against-sat-scores-in-college-admissions/)
Over two million students take the SAT test every year, making it the most widely-used college admissions
test in the country. One of the most pressure-packed tests a young adult can take, the SAT
brings back memories of stress and anxiety for many students . The American Psychological
Association’s annual Stress in America survey reported 31 percent of teens feeling overwhelmed and
another 30 percent feeling sad or depressed as a result of stress, pointing to school and school-related
activities as a key cause. With stress levels rivaling those of adults, students could really benefit from
eliminating some stress-inducers from their daily lives. Considering the SAT is proven to be a reflection of
socioeconomic status (SES) and a poor indicator of success in college, it is time that the test gets
removed from the college admissions process once and for all.
Not using standardized testing in admissions processes will take the stress off of
high school students around the country while making college fairer for
all students
Thus, I affirm
1AR
Extensions
Overview 1ar
Overview 2ar