This year the Australian film and TV industry’s night of nights on January 30, the AACTA Awards ceremony, concluded with a dramatic twist. Like the best surprise revelations, nobody saw it coming.
Announcing the Best Film category, Cate Blanchett opened the envelope and declared not one winner but two: a tie between Jennifer Kent’s psychological creepy-crawly The Babadook and Russell Crowe’s WWI melodrama The Water Diviner.
Both were the work of first-time directors, though in terms of critical and popular success they fared very differently.
Kent’s thriller made pittance at the box office but was one of the most critically acclaimed Australian films in years; Crowe’s handsome tear-jerker was more tepidly received by reviewers but became a commercial juggernaut.
Quickly following the initial shock came the inevitable question: in a preferential system with six nominees and 1733 eligible voters, how could the winner possibly be a draw?
As one Australian feature film director put it to me shortly after the event: “The chances of that happening would be millions and millions to one”.
At the time of the announcement AACTA chief executive Damian Trewhella told SMH writer Garry Maddox there was nothing engineered about the dead heat. He insisted “it was a mathematical tie” and “just a freakish outcome.”
Last Friday a follow-up story from Maddox published numbers from the vote leaked to the Sydney Morning Herald, revealing that The Babadook won the first count:
When the “weighted value” of votes for the six nominees was tallied, with six points for a first vote and one point for a sixth, the horror film had 855.5 votes to The Water Diviner‘s 838.5.
But there was an aberration: The Water Diviner was the first choice of the most voters by a clear margin but, in either a backlash against the film or its director-star, more voters also placed it last.
That led the academy to run the numbers other ways, including giving a higher weighting to first votes and a lower weighting to last votes. That system had The Water Diviner as the clear winner.
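The linear tally Maddox describes, six points for a first preference down to one for a sixth, is a standard Borda count. A minimal sketch of that tally (the film names and ballots below are invented; the real ballots were never published):

```python
from collections import defaultdict

def borda_tally(ballots, num_nominees=6):
    """Linear Borda count: a first preference earns num_nominees points,
    a last preference earns 1 point."""
    scores = defaultdict(int)
    for ballot in ballots:  # each ballot lists nominees best-first
        for rank, film in enumerate(ballot, start=1):
            scores[film] += num_nominees - rank + 1
    return dict(scores)

# Invented ballots for illustration only.
ballots = [
    ["Babadook", "Diviner", "C", "D", "E", "F"],
    ["Diviner", "C", "D", "E", "F", "Babadook"],
    ["Babadook", "C", "Diviner", "D", "E", "F"],
]
print(borda_tally(ballots))
```

Note that the leaked totals (855.5 to 838.5) contain half-points, so the real count evidently handled something, perhaps incomplete or tied ballots, that this sketch does not.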
Trewhella confirmed to Daily Review yesterday that the AACTA board decided on a tie. He maintains the result was a freakish outcome, though perhaps not in the same way he implied back in January.
According to Trewhella, every year the AFI considers both of the methodologies referred to above, which are different forms of preferential analysis: a Borda count (also known as ‘most preferred’) and a fractional Borda count (also known as ‘most excellent’).
He says the Borda count is “an inverse linear count that aims to find who the most preferred candidate is. Then there is a double-check test to see who the most excellent candidate is according to the majority. That’s called a fractional Borda count.
“Normally they are both the same. What we have seen is some very irregular or very strange voting patterns in the year just gone, which has led to distinct outcomes from each of the statistical analyses. Normally the count you first described is accurate because it encompasses the second as well and yields a singular result.”
When I ask which of the methodologies is the fairest, Trewhella says: “In artistic endeavours I don’t think it’s clear, hence the tie.”
He reinforces what he believes was a strange result. “What we haven’t seen before is this scenario where something can be voted highest preference first by a very large number of people yet by a very marginal amount on overall preferences run second.
“According to one of the statistical analyses The Babadook was marginally ahead, by 0.3 percent. According to another robust statistical analysis, the fractional Borda count, which also took all the preferences into account to determine what the majority thought was most excellent, there was for the first time a different outcome, which is where the issue was. How do you knock out either of those?”
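Trewhella doesn’t spell out the fractional formula, but one well-known fractional variant of the Borda count, the Dowdall system, weights a rank-r preference at 1/r, which rewards first preferences far more heavily than the linear count does. Assuming something like it, a toy electorate (the ballot counts below are invented for illustration) reproduces the pattern he describes: one film with the most first preferences and the most last places, another ranked consistently high, and a different winner under each count.

```python
from fractions import Fraction

def linear_borda(ballots, n):
    """Linear Borda count: a rank-r preference earns n - r + 1 points."""
    scores = {}
    for ranking, count in ballots:
        for r, film in enumerate(ranking, start=1):
            scores[film] = scores.get(film, 0) + count * (n - r + 1)
    return scores

def dowdall(ballots):
    """Dowdall ('fractional Borda') count: a rank-r preference earns 1/r points."""
    scores = {}
    for ranking, count in ballots:
        for r, film in enumerate(ranking, start=1):
            scores[film] = scores.get(film, Fraction(0)) + count * Fraction(1, r)
    return scores

# Invented electorate: "Diviner" has the most first preferences but also
# the most last places; "Babadook" is ranked consistently high.
ballots = [
    (["Diviner", "Babadook", "Other"], 10),
    (["Babadook", "Other", "Diviner"], 6),
    (["Other", "Babadook", "Diviner"], 5),
]
lin = linear_borda(ballots, 3)
frac = dowdall(ballots)
print(max(lin, key=lin.get))    # Babadook (48 points to Diviner's 41)
print(max(frac, key=frac.get))  # Diviner (41/3 points to Babadook's 27/2)
```

The two methods agree on most electorates; it takes exactly the lopsided love-it-or-hate-it voting described above to pull them apart.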
Trewhella concedes that the organisation could have handled or announced the surprise result differently.
“Perhaps, yeah, we should have gone into more detail on this at the time, but it just wouldn’t have necessarily been helpful for the films.
“If we’d been able to move this to a head-to-head run-off between the two, that would be a perfect world, if the process had permitted that. Maybe that is something that can happen in the future but for a range of reasons the process is constrained and doesn’t really permit that.”
One of the key questions is whether the Borda and fractional Borda counts have indeed been run over the data consistently every year. Trewhella insists they have.
Hanging over this issue – and potentially making it a sensitive one in the eyes of some onlookers – is a perception that AACTA is pro-mainstream. Since its inaugural ceremony in 2011, the highest performer at the box office has always won in the Best Film category. Red Dog beat Snowtown in 2011, The Sapphires won in 2012 and The Great Gatsby creamed the competition (including The Turning and Mystery Road) in 2013.
If we take AACTA’s word for it and accept that the ceremony does use both methodologies to analyse each count, another question is whether that is a good idea given – as this year’s debacle has demonstrated – the methodologies are capable of offering wildly different interpretations of the data.
Trewhella reiterates that up until now they have always delivered the same result. But that logic cuts both ways: if they have always delivered the same result, why has the organisation always used both?
Whatever the answer, Trewhella is right when he points out AACTA is far from the only film awards ceremony to have announced a tie. The Golden Globes, for example, produced a three-way tie in the Best Actress category in 1989, and there have been six ties in the history of the Academy Awards.
One was as recent as 2013, when Zero Dark Thirty and Skyfall tied for Best Sound Editing. Given the Oscars have more than three times as many eligible voters as the AACTAs, and reportedly a high voter engagement rate, if the odds of an AACTA tie for Best Film were millions and millions to one, the odds of a tie in any category at the Oscars would presumably be just as far-out.
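Whether “millions and millions to one” holds up can be sanity-checked with a toy model. The simulation below rests on a loud assumption: every one of the 1733 voters submits a uniformly random full ranking, which real voters certainly don’t. It estimates how often the top two linear-count scores land exactly equal.

```python
import random

def estimate_tie_rate(voters=1733, nominees=6, trials=500, seed=42):
    """Fraction of simulated elections in which the top two linear Borda
    scores are exactly equal, assuming uniformly random full rankings."""
    rng = random.Random(seed)
    order = list(range(nominees))
    ties = 0
    for _ in range(trials):
        scores = [0] * nominees
        for _ in range(voters):
            rng.shuffle(order)
            for position, nominee in enumerate(order):
                # position 0 = first preference: 6 points down to 1
                scores[nominee] += nominees - position
        top_two = sorted(scores)[-2:]
        ties += top_two[0] == top_two[1]
    return ties / trials

print(estimate_tie_rate())
```

The printed estimate is for an idealised electorate only; real preferences are heavily correlated, which could move the odds substantially either way. But if the figure comes out materially above zero, it suggests exact ties under integer Borda scoring need not be lottery-grade events.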
It is reasonable to assume most high-profile awards ceremonies apply different sets of data analysis to the same numbers, or that decisions are ultimately made not by voters per se but by small groups of people making judgement calls.
In other words: most or all of them do it, but rarely do they have their internal processes revealed. If there is a lesson to be learnt from the 2015 AACTA hullaballoo, perhaps it is simply to take every announcement from an awards ceremony with a grain of salt.