Researchers have published a damning attack on the use of quantitative measures to assess artistic quality, warning Arts Council England of the potential consequences of rolling out the Culture Counts system.


Metrics-based approaches to assessing cultural value “invite political manipulation and demand time, money and attention from cultural organisations without proven benefit,” according to a group of academics involved in a research project examining the value of culture.

In a new paper, published in the academic journal Cultural Trends, researchers deliver a damning assessment of the type of standardised system for measuring artistic quality that Arts Council England (ACE) is currently preparing to make compulsory for many of its National Portfolio Organisations.

Their paper, ‘Counting culture to death’, rejects the “widely held belief” that “a set of numbers can provide vindication, or at least insurance, in the constant struggle to justify public funding”. They conclude that attempts to quantify cultural value are not delivering on their promises, and bring “destructive” unintended consequences.

The paper states that using indicators and benchmarks to assess cultural activities, “which exhibit no obvious capacity for scalar measurement”, is a “political act”. The “ostensible neutrality” of this approach is, they say, “a trick of the light trying to launder responsibility for judgment in the competition for scarce resources”.

The research, conducted by Robert Phiddian, Julian Meyrick, Tully Barnett and Richard Maltby from Flinders University in Adelaide, was funded by the Australia Research Council as part of the ‘Laboratory Adelaide: the Value of Culture’ project, which is examining issues around the assessment of culture.

Culture Counts

The researchers focused on the use of Culture Counts, the metrics-based assessment system that originated in Western Australia and was adopted by the state government as part of its Public Value Measurement Framework. It is the same system that ACE used in its pilot Quality Metrics scheme in 2015/16, which involved 150 arts organisations across England.

The researchers warn that the effects of using Culture Counts in Australia “are worth considering in places pointed in the same direction, such as the UK”. Following trials in Australia, the state of Victoria has decided not to continue with the system, and the Australia Council, the national cultural agency, does not intend to implement it either.

The research findings add further fuel to controversy over the forthcoming roll-out of this scheme across England, participation in which will be a condition of funding for larger National Portfolio Organisations from 2018.

The scheme will see arts organisations ask audiences to rate their artistic work on a numeric scale in relation to a set of standardised metrics of quality, and triangulate this data with self and peer assessment.

Despite concerns expressed by many who took part in the pilot in England, a £2.7m contract to deliver the Quality Metrics scheme has been put out to tender. Although ACE has not yet committed to using the Culture Counts system, it is unclear how any other system would be able to meet the criteria for delivering it.

Misguided beliefs

The researchers describe Culture Counts as: “A sophisticated and entrepreneurialised version of the misguided belief that everything that matters in culture can be counted, and that doing so will secure the sector’s social and political base.” The metrics it uses are, they say, “essentially marketing analytics rather than a window on artistic value” and “there is simply no numerically valid way” of valuing one artform over another.

They continue: “…only a fool or a knave will claim [two different artforms] can be assessed comparatively without making use of a critically informed judgment.”

Attention is also drawn to the potential misuse of quantitative data generated in this way. They point out: “It is politically naïve to think that metrics such as Culture Counts can be quarantined for practitioners and policy officers who will use the numbers with care, as indicators of well-understood value-generating creative processes.” Rather, they will be “used as weapons, both of attack and self-defence. Artists competing for funds to realise a vision will not resist the temptation to trumpet ‘a 92 in the ACE metric’ any more than vice chancellors have refrained from boasting of their rankings in university league tables.”

ACE was due to announce the winning bidder for its Quality Metrics contract in April, although this has been delayed. A spokesperson told AP: “Due to pre-election sensitivities we have been unable to issue the decision letters.” These are now expected after 8 June.

Author(s): 
Liz Hill

Comments

To be fair, I'm not sure the ACE Quality Metrics are about "valuing one artform over another". Of course they're far from perfect, simplistic even, but they're a pretty good try at articulating what makes a cultural experience powerful and relevant to individuals or communities (rather than to the artists themselves). Of course it's easy to criticise the system's shortcomings. It's far harder to come up with a better alternative that is deliverable in practice. It would be good to see comparable research time devoted to this important question.

I agree with the authors that quantification naturally invites comparison against others, but then, who doesn't already make use of basic audience numbers, characteristics, investment/expenditure, dwell time and who knows what other 'key stats' in making their case? Even such relatively basic figures arguably face limited scrutiny, and it is not always clear exactly how they have been or will be used in decision making. Opacity in decision making is a further theme here, regardless of the measures chosen. So quantification is not the enemy per se, but it is fair to say there is no point quantifying if you don't want to compare.

Mountains of Likert-scale, agree/disagree, rate-0-to-10 questions already exist, and as far as I can tell there is little to genuinely differentiate many standard claims of "89% agree with X" from "75% agree with X" other than to say 'most people who filled in a survey felt they had a reasonably good time'. From this situation it is easy to argue that there need to be 'better measures of quality', but surely there has to be a more transparent and more sensitive way of doing it that doesn't immediately get the backs of everyone in the sector up. (That is, everyone interested in ACE funding. I assume artists outside this world will no doubt immediately stop producing any quality work without these metrics to aid them.)

I agree with you that the sentiment of "well, what's the alternative then?" ends up becoming my view, and I hope that artists and arts organisations have their voices heard and maintain the freedom to focus their limited resources and time on establishing methods and measures that are relevant to them. In some cases we could argue that arts organisations already suffer an undue burden of proof for the relatively small amounts of public money they get, while billions simply vanish in IT contracts, PFI deals and other such shoddy political shenanigans.

It might be a noble effort on the part of ACE to steer the overall conversation to 'quality' and 'excellence', but I think they are finding out this is a hell of a task.

“Counting culture to death” (Phiddian et al., Cultural Trends 26(2) 2017) derives from a well-known distrust of measurement in any form: a view that quantification is the enemy of judgement, and that it is used only for instrumental purposes, to serve special interests, or to further some political agenda. In the cultural arena, such views can be seen as anti-intellectual insofar as they stand in the way of serious efforts by scholars in the humanities and social sciences to understand what it is that we mean when we say that culture or art has value. This is a fundamental question that cannot be dismissed by asserting that all responses to art are subjective and individual and therefore cannot be measured. People are making absolute and comparative judgements about art all the time, and it is a legitimate exercise for researchers, art practitioners, policy makers and others to try to make some sense of how these judgements are made and what they imply.

The article is based on a fundamental misunderstanding of what an assessment approach such as Culture Counts is doing. Take, for example, a question that could be asked of any audience member in a theatre, visitor to an art gallery or reader of a novel: Did you enjoy that experience? Why? What sort of effect did it have on you? The answers, if a person can articulate them, are relevant to what a play, an exhibition, or a piece of literature sets out to do. It’s true that a person’s evaluations are individual and entirely subjective. But what is interesting from accumulating answers across a lot of people is whether or not there are patterns, consistencies in judgements that can tell us something about whether the art work has had some effect - positive or negative, expected or surprising - on the people who have experienced it.

I doubt whether ACE or any other entity would use these data in a mechanical way. Rather, like so much measurement, these sorts of quantitative representations of qualitative assessments are likely to be used simply as information that will help to illuminate decisions made on much wider grounds.