

How the ACT and SAT exams are built to fail students

We have the technology to do better


A growing number of universities in the US are abandoning the use of standardized tests as a key factor in the admissions process. Last month, Creighton University announced that it won’t require applicants to submit ACT or SAT test results beginning in 2020, joining the likes of Arizona State University, DePaul University, Drake University, University of Arizona, and University of Chicago, among others.

Why are these respected institutions ditching these test scores, which have been a cornerstone of assessing college readiness for nearly 60 years? The answer is simple: they aren’t an accurate way to assess students’ knowledge and potential.

Despite the growing movement against treating ACT and SAT exams as the be-all and end-all benchmark of college readiness, more students than ever are taking them – roughly four million in 2018. The results haven’t been promising: more than half of SAT takers still aren’t considered ready for college-level courses, while recent ACT scores actually showed a drop in overall college readiness.

Why are the results so poor? Are the underperforming test takers simply not ready for college? The answer to these questions is the very reason many in the field of cognitive science believe these exams shouldn’t matter in the first place – moment-in-time evaluations are fraught with problems and don’t provide an accurate view of real knowledge or potential. We have the technology to do better.

Think back to the exams you’ve taken in your life. What do you remember most? Is it the material tested on the exam, or the anxiety you felt about how much of your future was riding on one set of questions? That anxiety illustrates the underlying problem with the SAT and ACT exams and an inherent unfairness that negatively impacts many students. Ultimately, many of the factors that affect exam scores, like cramming, anxiety, physical health, and luck, aren’t what we really want to measure.

Perhaps most worryingly, many of these factors are more affected by who we are than what we’ve learned. A whopping two-thirds of high school students have experienced an uncomfortable level of test anxiety at some point, with severe and chronic test anxiety affecting up to one in four.

More generally, 32 percent of adolescents have suffered from an anxiety disorder – numbers that have been rising in tandem with the prevalence of standardized testing. Research has also shown a strong correlation between performance on exams and factors such as minority status and family income.

Considering these inequities, it’s no surprise that large-scale studies across thousands of students find ACT and SAT scores to be poor predictors of college success. Those same studies have shown that high school grade point averages, which measure achievement over time and across multiple test opportunities, are better indicators of future performance and success.

This is a good start, but even the entrenched use of GPAs has room for improvement. Despite the positives of GPA – the fact that it’s a long-term, data-driven measure built from consistent data points across a student’s entire high school career – it’s still heavily influenced by major exams.

What these current standards for knowledge assessment are missing is the large-scale application of cognitive science (how we learn), technology (artificial intelligence and machine learning), and rich learner data sets that help adapt the learning experience to each individual. This is the path to accurately assessing real knowledge and potential – a GPA 2.0, if you will.

Educators, admissions officers, and, most of all, students have so much to gain by moving to a better model of assessing knowledge. Imagine if your coursework could predict exactly when you were about to forget the types of chemical bonds you needed to master, or recognize that you hadn’t yet grasped Shakespeare’s typical literary devices, and could then deliver that material at exactly the right moment to build long-term memory and retention.
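
To make that concrete, here’s a minimal sketch of how such a prediction could work, built on the classic exponential forgetting curve from cognitive science. The function names, the “stability” parameter, and the 80 percent threshold are illustrative assumptions, not any particular product’s model:

```python
import math

def recall_probability(days_since_review: float, stability: float) -> float:
    """Ebbinghaus-style forgetting curve: predicted recall decays
    exponentially with time, more slowly as memory 'stability' grows."""
    return math.exp(-days_since_review / stability)

def needs_review(days_since_review: float, stability: float,
                 threshold: float = 0.8) -> bool:
    """Flag an item for review once predicted recall dips below the threshold."""
    return recall_probability(days_since_review, stability) < threshold

# A student last reviewed ionic vs. covalent bonds five days ago, and the
# item's estimated stability is ten days: predicted recall is about 61%.
print(f"{recall_probability(5, 10):.0%}")  # -> 61%
print(needs_review(5, 10))                 # -> True
```

In a real system, stability would be estimated per student and per item from response data; the point is that the prompt to review becomes a prediction rather than a calendar entry.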

Or if teachers could see a dashboard of how students were progressing toward mastery and use that insight to decide when to intervene and assist the students who really need help, rather than treating them all the same.
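
Extending the same toy model, such a dashboard could be as simple as ranking students by predicted recall and flagging anyone below a mastery threshold. The roster, numbers, and threshold here are invented for illustration:

```python
import math

def recall_probability(days_since_review: float, stability: float) -> float:
    # Same exponential forgetting curve as in the sketch above.
    return math.exp(-days_since_review / stability)

MASTERY_THRESHOLD = 0.8

# Hypothetical per-student estimates: (days since last review, stability).
roster = {
    "Ada":    recall_probability(2, 14),
    "Ben":    recall_probability(9, 6),
    "Carmen": recall_probability(4, 10),
}

# Surface the students who most urgently need help first.
for name, p in sorted(roster.items(), key=lambda kv: kv[1]):
    status = "needs intervention" if p < MASTERY_THRESHOLD else "on track"
    print(f"{name}: {p:.0%} predicted recall ({status})")
```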

The combination of cognitive science and technology can do far more than simply assess knowledge – it can help us learn more effectively in the first place. Cognitive scientists have spent decades rigorously mapping out the most efficient techniques for building long-lasting memories – deeper engagement, challenging self-testing, and optimally distributed reviews – as well as identifying common approaches, like cramming, mnemonic devices, and re-reading, that lead to poor retention. Unfortunately, the latter approaches are extremely prevalent in ACT and SAT preparation, even though tactics like re-reading have little lasting effect on recall.
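
To give a sense of what “optimally distributed reviews” look like in practice, here’s a toy expanding-interval scheduler. The doubling factor and starting gap are illustrative assumptions; real systems fit the spacing function to learner data:

```python
def review_days(n_reviews: int, first_gap: float = 1.0, growth: float = 2.0) -> list:
    """Expanding-interval schedule: the gap between successive reviews
    grows each time, rather than bunching all practice into one cram."""
    day, gap, days = 0.0, first_gap, []
    for _ in range(n_reviews):
        day += gap
        days.append(day)
        gap *= growth
    return days

# Five reviews spread over a month instead of one night of cramming.
print(review_days(5))  # -> [1.0, 3.0, 7.0, 15.0, 31.0]
```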

Today, the ACT and SAT matter – a lot. But as learner tools and data continue to improve the learning experience, they shouldn’t. The real test won’t be the ability to stay cool, complete test forms, and outsmart exam day; it will be objectively tracked, long-term knowledge and understanding.

As we move to online, on-demand curriculum and assessment, the data on student performance, cognition, and ability to learn will only increase, and standalone tests will matter less. That would be a perfect score.
