Admit one: Better admissions metrics for university selection
14 Jun 2016

Relying on only one metric – be it the ATAR, an admissions test or other tool – reduces quality in admissions processes. Dr Daniel Edwards explains.
Using multiple tools for determining the appropriateness of applicants for study in a certain course is more effective than relying on only one. This is a key message from ACER to the Australian Government’s Higher Education Standards Panel that is currently undertaking a consultation about transparency in university admissions.
Using two or three well-developed metrics for assessing applicants can offer much more information to higher education providers in making decisions. This has been demonstrated with medical admissions where selection on an achievement metric like the Australian Tertiary Admission Rank (ATAR) or undergraduate Grade Point Average (GPA) alone is rendered almost impossible given the ‘perfect scores’ of many applicants. Implementing an aptitude test to differentiate between very high achievement levels, alongside an interview, is a method of admissions with proven benefits for these highly contested places.
While the medical admissions context is vastly different from many others within a university, the point remains that the practice of using multiple tools in an admissions process already happens and research suggests this constitutes best practice within Australian higher education.
Why using the ATAR as the single metric is problematic
While the ATAR has been shown to correlate with completion or achievement in university to a certain extent, its current use as a single metric for selection can be problematic.
When competition for university places was more intense due to caps on enrolments, the ATAR, or its earlier equivalents, was a useful tool for identifying and selecting relatively high-performing candidates. In the current setting, however, substantially more providers are attempting to differentiate applicants for selection into courses on ATARs that sit in the middle range.
Because the ATAR is an aggregate of unrelated variables, it provides no defensible way of comparing students in the middle range. For example, testing three students in three subjects, with each subject scored out of five, might produce results like this:
|         | Mathematics score | Psychology score | Theatre Studies score | Aggregate score |
|---------|-------------------|------------------|-----------------------|-----------------|
| Ahmed   | 0                 | 5                | 5                     | 10              |
| Bianca  | 5                 | 5                | 0                     | 10              |
| Corinne | 3                 | 3                | 4                     | 10              |
The aggregate score suggests that these students are identical, but in reality they are clearly different.
The ATAR was designed in a supply-driven context where students were competing for places: in that context universities were able to consider ATARs at the top of the scale where students did well at everything. In the current demand-driven context where universities are competing for students, the ATARs under consideration are well into a range where they enable little valid comparison between candidates.
It nevertheless remains perfectly feasible to match Ahmed, Bianca and Corinne with tertiary courses that play to their strengths, if we can get past the aggregate and instead match the components of the aggregate with appropriate courses. The secondary school sector already provides the data we need to do this. In a demand-driven system it does not make sense to mash all of that data into one ‘magic number’ that prevents access to the information it contains.
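To make the point concrete, the sketch below uses the three hypothetical students above, together with some invented course subject requirements, to show that the aggregate of 10 is identical for all three while a component-level match points each student to different courses. The course names and minimum scores are purely illustrative, not a real selection rule.

```python
# Illustrative sketch only: hypothetical subject scores and course requirements,
# not an actual admissions dataset or selection tool.

scores = {
    "Ahmed":   {"Mathematics": 0, "Psychology": 5, "Theatre Studies": 5},
    "Bianca":  {"Mathematics": 5, "Psychology": 5, "Theatre Studies": 0},
    "Corinne": {"Mathematics": 3, "Psychology": 3, "Theatre Studies": 4},
}

# Hypothetical minimum subject scores expected for each course.
course_requirements = {
    "Engineering":     {"Mathematics": 4},
    "Psychology":      {"Psychology": 4},
    "Performing Arts": {"Theatre Studies": 4},
    "Generalist Arts": {"Psychology": 2, "Theatre Studies": 2},
}

for name, subjects in scores.items():
    aggregate = sum(subjects.values())
    # Match on components: a course fits if every required subject score is met.
    matches = [
        course for course, reqs in course_requirements.items()
        if all(subjects.get(subj, 0) >= minimum for subj, minimum in reqs.items())
    ]
    print(f"{name}: aggregate={aggregate}, suitable courses={matches}")

# All three students share the same aggregate (10), yet the component-level
# match produces a different list of suitable courses for each of them.
```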
This problem with aggregate scoring is not confined to the ATAR; it frequently arises when institutions attempt to improve admissions by considering additional variables. Those variables might be valid and useful, but the way they are typically collapsed onto a single scale is not.
Using other metrics to promote transparency in admissions
If higher education providers are clear on the types of selection tools available, the situations in which each is best used, and the ways in which results can be ordered, sorted and filtered to compare the capacity of applicants, then the selections they make are more likely to be appropriate to the expectations and requirements of the course.
There is potential to collect more thorough information through the Australian Government Department of Education and Training’s Higher Education Information Management System (HEIMS), which would enable more specific information to be available about admissions. HEIMS is already an effective and efficient means for collecting important data but its use as a tool for monitoring trends in admissions could be vastly improved by adding supplementary variables.
As a starting point, a basic process for compiling information could operate using a hierarchy of variables that might begin with identifying whether the student was enrolled through a direct application to the institution or through a Tertiary Admissions Centre. It could then gather information about selection based on ATAR, and/or admissions test, and/or interview, and/or previous further education course. A further level of data could be collected relating to the admissions processes that also took into account non-academic variables, such as equity categories.
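As a rough illustration of what such a hierarchy of variables might look like as a data record, the sketch below defines a hypothetical admissions record with the three levels described above. The field names and categories are assumptions made for illustration, not actual HEIMS data elements.

```python
# Illustrative sketch of the kind of hierarchical admissions record described
# above. All field names and categories are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AdmissionRecord:
    # Level 1: application pathway
    application_route: str                  # e.g. "direct" or "tertiary_admissions_centre"
    # Level 2: academic selection tools actually used (any combination)
    atar: Optional[float] = None
    admissions_test: Optional[str] = None   # e.g. name of the aptitude test used
    interview: bool = False
    prior_further_education: Optional[str] = None
    # Level 3: non-academic variables considered in selection
    equity_categories: list[str] = field(default_factory=list)

record = AdmissionRecord(
    application_route="tertiary_admissions_centre",
    atar=81.4,
    interview=True,
    equity_categories=["low_SES"],
)
print(record)
```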
An essential element of the admissions process that is not always clear in current practice is the link between the academic expectations of a particular degree and the selection criteria used to choose applicants for that degree.
Carefully mapping the stipulated learning outcomes for each degree, and the institution's Graduate Learning Outcomes/Graduate Capabilities statements, against the information used to select candidates would allow institutions to begin identifying gaps in their admissions metrics more systematically. They could then seek appropriate ways to strengthen selection processes. While this could be a burdensome activity, if improvements in selection lead to lower attrition and higher completion, the benefits may outweigh the initial costs.
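As a simple illustration of how such a mapping could surface gaps, the sketch below compares a hypothetical set of degree learning outcomes with the outcomes each selection metric currently provides evidence about; any outcome not covered by a metric is flagged as a gap. All outcomes and mappings are invented for the example.

```python
# Illustrative sketch only: hypothetical learning outcomes and selection
# metrics, showing how a gap analysis between the two could work.

# Learning outcomes stipulated for a degree (hypothetical).
degree_outcomes = {
    "quantitative reasoning",
    "written communication",
    "teamwork",
    "discipline knowledge",
}

# Outcomes each selection metric currently provides evidence about
# (hypothetical mapping).
selection_metrics = {
    "ATAR": {"discipline knowledge", "quantitative reasoning", "written communication"},
    "aptitude test": {"quantitative reasoning"},
}

covered = set().union(*selection_metrics.values())
gaps = degree_outcomes - covered

print("Outcomes not evidenced by any selection metric:", gaps)
# -> {'teamwork'}: a prompt to add or adjust a selection tool (e.g. an interview).
```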
Further information:
This article draws on ACER’s submission to the Australian Government Department of Education and Training’s Higher Education Standards Panel consultation on transparency in higher education admissions processes.