
Many voices, open minds, commitment to change

What should underpin our evaluation practice in the arts? Oliver Mantell reflects on the new Evaluation Principles from the Centre for Cultural Value. 

Oliver Mantell

Late last year, the Centre for Cultural Value released a set of Evaluation Principles that, together with Dr Beatriz Garcia, I helped to formulate.  

Much of the work of developing the principles lay in our many conversations with the working group of experts, who shared their encounters with evaluation: commissioning it, using it, or being themselves evaluated. They brought a range of perspectives and years of reflective experience, as well as the challenges and ideas they had been tussling with in their own work.

As facilitators, our role was to create the space for the rich conversations that emerged, rather than provide our own ‘expert perspective’. Then later, away from the video gallery and chat pane, I tried to find a simple synthesis that would retain the distinctiveness of those conversations.  

The result was a set of Evaluation Principles that were often my words but rarely my ideas. That doesn’t mean I disagreed with them: although I tried to ensure each was a statement meaningful enough that a reasonable person could disagree with it, in practice I was enthusiastic about the concepts and values being expressed.

The principles can be interpreted in different ways. Since the published version isn’t intended as my personal view, I thought it might be interesting to highlight some of my reflections on a handful of them.

1. Evaluation should be many-voiced 

The aspect of this principle that resonates most immediately is that many different types of people should contribute to evaluation and have their voices heard, especially those whose opinions are often ignored, undervalued or suppressed. This is, obviously, an important and central part of the principle, and one where more remains to be done.

I’m also intrigued by what this principle might mean for the format or content of evaluations. Do we sometimes produce reports that are too coherent? With too singular a viewpoint or too even and untroubled a tone? Do evaluations over-simplify and smooth out conflicts? Do we assume there is a ‘right’ understanding of ‘how it went’ and ‘what to do in future’, one that allows the evaluator an unquestioned, even Olympian, authority and detachment? Is there a risk of letting evaluators have not only the last word but the only word, too?

Perhaps ruffling the texture of evaluations – making sure the tensions, disagreements or differences of perspective are clearly present – would be one way of mitigating these challenges. Including a range of authentic voices, another. 

On a side note, I’m wary of any amendments to quotations from participants, whether ‘tidying up’ or ‘correcting mistakes’ [sic] (whose blushes are really being spared by such amendments, and why?). And never more so than when the corrected version aligns too seamlessly with the language and register of the evaluator, the organisation or the wider industry. Perhaps the quotations are being made to align with their interests, too.

2. Evaluation should be committed to learning and/or change 

This is one principle where the debates in the working group left a visible mark. The slash, scratch or pivot at the centre of that ‘and/or’ highlights those discussions. Is commitment to learning by definition commitment to change (because to learn is to change)? Is it enough to create change if you don’t learn in the process? Is it even possible?

What was agreed was what evaluation shouldn’t (just) be about. It’s not just about performing an administrative function. And although it is about accountability, it’s about accountability to the future, not only justification for past decisions. 

It should be about doing or making a difference. It should recognise its own agency and purpose and not affect detachment. Part of being appropriately humble is recognising that you are part of the world along with everyone else and can’t take an ‘outsider’ perspective. 

Ultimately, we have a perspective of our own and have to choose what we value. It is called ‘evaluation’, after all.

3. Evaluation should be open-minded 

So how do we square the tension between these last two points? We have to have a stance and use evaluation for something. But we also have to recognise that many other views and voices should be heard. What we want to do with evaluation may not be what others want.

We shouldn’t try to present a picture of the public and their views which is in fact a self-portrait. A giveaway that this could be happening is when the audience or public implied by an evaluation doesn’t resemble the range of people who actually exist (being too close to our own profile, preferences or politics, for example).

This, again, is where humility creeps in. We should be deeply sceptical of any claim to know people’s values, purposes or interests better than they do. We should also resist any instinct, or attempt, to dominate other viewpoints (without abandoning our own). 

We should recognise that most of the knowledge of what a particular cultural event or activity means, and what value it has, is held by participants and attenders. We also have to be ready to recognise forms of value and meaning other than those we might intend or anticipate, and to access them in different ways, too.

Doing this, of course, depends on the skills that the many excellent evaluators in the sector already have. Although we hope we have expressed them in useful and distinctive ways, perhaps reanimating some through fresh emphases, the central ideas in the principles aren’t new inventions, either.

Nonetheless, we hope they will be useful and will prompt further consideration of these ideas. These are just a few of my own reflections. You will no doubt have others. I am, of course, open-minded about learning from many other voices.

Oliver Mantell is Director of Evidence & Insight at The Audience Agency.
@OliverMantell | @audienceagents | @valuingculture

If you would like to take part in further discussions about these principles and what they could mean in practice, The Audience Agency and the Centre for Cultural Value are running a series of free online workshops in Autumn 2022.

Each of the three workshops will focus on a particular evaluation principle and will provide opportunities for creative and research practitioners to explore how we might embed the evaluation principles in practice, with speakers and case studies from across the arts and cultural sector. To be the first to receive event booking information, please sign up to our newsletter.

This article, sponsored and contributed by the Centre for Cultural Value, is part of a series supporting an evidence-based approach to examining the impacts of arts, culture and heritage on people and society.