
How effectively can arts organisations measure the audience, visitor or participant experience? Heather Maitland reveals the evidence.

I have found myself thinking a lot recently about how cultural organisations can measure the way that audiences, visitors and participants experience what they have to offer. It all started when I discovered that Scottish arts organisations had asked for a session on measuring public engagement at the Scottish Cultural Forum in February. As I said in my last article (AP166), what we tend to measure is what we think people ought to be doing and feeling. Several researchers have observed that our evaluation tends to involve profiling visitors, satisfaction surveys and anecdotal summaries of what we did, based on our own unstructured observations, with all our assumptions on display.1 But there are other ways.

Researchers in Australia looked at the impact of pre-performance talks on audiences’ enjoyment and their confidence in interpreting dance work. The only difference between audience members who had been to the pre-show talk and those who had not was that their interpretation of the piece more closely matched the intentions of the choreographer. They didn’t feel any more confident and rated their enjoyment in a similar way. So, pre-show talks are not an effective tool for audience development unless our goal is to ensure the audience thinks like us. What made the difference was the time spent after the performance thinking about and discussing what they had seen as part of the research methodology. The researchers suggest that post-performance discussions where the audience gets to do the talking would work much better.2

A US academic evaluating education activities in a museum discovered that, much to the surprise and annoyance of the curators, teachers were using an exhibition intended to explore design, structure and form in the 1920s and 30s as a way of delivering the social history curriculum. Does this really reduce the value of the educational experience, as the curators seemed to believe?3

If we are to find out how audiences, visitors and participants are really experiencing cultural activities, we need to bypass our own assumptions and frames of reference. We need open-ended research that is rigorous and reliable, not just anecdotal. There are many ways of collecting information in an imaginative way. The challenge is to be able to organise and interpret this mass of information then put it into context so we can draw robust conclusions on which to base decisions.

Barbara Soren4 helped a small gallery research whether providing longer text labels against each artwork increased visitors’ depth of engagement and gave them more confidence in looking at and responding to art. They decided on these indicators: that visitors, particularly those who were less confident, actually read the longer labels; that visitors felt they could easily read and understand the labels; that they looked back and forth between the label and the artwork; that they read them aloud to each other and discussed the information; and that visitors felt the labels had helped them access the art. They used a combination of methods to collect this information: visitor observation, semi-structured interviews after visitors had seen the exhibition, and a brief questionnaire asking for demographic information and frequency of gallery attendance.

With limited staff resources, the observation element had to be manageable. They thought carefully about what visitor behaviours would give them the answers they needed, and limited the observations to: counting the number of artworks a visitor stopped at; timing how long they stopped at four specific works; counting the number of long labels they read; seeing if they read the introductory text panel and how long they spent doing so; seeing whether they looked at the label or the artwork first and whether they looked back and forth between them; seeing how they talked and discussed with the other people they came with. Note that the gallery didn’t just rely on observation. It is important to use several different methods to avoid subjective interpretation: in this example, that involved interviewing visitors as well as interpreting their observed behaviour.
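To make such an observation schedule concrete, here is a minimal sketch in Python of how each watched visitor might be recorded and summarised. The field names, sample figures and the particular indicators aggregated are my own illustrative assumptions, not the gallery's actual coding sheet.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Observation:
    # Hypothetical record of one watched visitor; fields mirror the
    # behaviours described above but are named illustratively.
    works_stopped_at: int          # number of artworks the visitor stopped at
    long_labels_read: int          # number of long labels read
    read_intro_panel: bool         # did they read the introductory text panel?
    looked_back_and_forth: bool    # between label and artwork
    seconds_at_focus_works: dict = field(default_factory=dict)  # per focus work

def summarise(observations):
    """Aggregate a batch of observation sheets into simple indicators."""
    return {
        "mean_stops": mean(o.works_stopped_at for o in observations),
        "label_readers": sum(o.long_labels_read > 0 for o in observations),
        "back_and_forth": sum(o.looked_back_and_forth for o in observations),
    }

obs = [
    Observation(12, 8, True, True, {"work1": 40, "work2": 15}),
    Observation(5, 0, False, False, {"work1": 10}),
]
print(summarise(obs))  # {'mean_stops': 8.5, 'label_readers': 1, 'back_and_forth': 1}
```

The point of structuring the sheets this way is that the counting and averaging become mechanical, leaving the interpretive work (alongside the interviews) to the evaluator.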

Most of the time, someone has done the work of creating a framework already. One example of a widely used framework is a way of categorising how deeply museum and gallery visitors have engaged with the exhibits. It involves observing just four aspects of their behaviour: how long overall they spend in the whole exhibition; in what order they look at the exhibits; where they pause; and for how long. The observers just mark this information on a plan of the gallery, one for each visitor they watch. Analysing the sheets puts each visitor into one of four categories. ‘Ants’ make a long visit, looking at everything in the order that it is laid out, and stand close to each work of art. ‘Butterflies’ spend half as much time looking at the exhibition, choose what they want to look at and don’t always look at things in the ‘right’ order. ‘Fish’ make a quick, superficial visit, don’t pause much, and don’t go up close to the works on display. ‘Grasshoppers’ also make a short visit: they make some stops to look at a few works, but not in the ‘right’ order.5
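The typology above could be turned into a simple decision rule once the observation sheets are analysed. The sketch below is a hypothetical illustration: the numeric thresholds and parameter names are my assumptions, since the published framework describes the categories qualitatively rather than as fixed cut-offs.

```python
def classify_visitor(total_minutes, stops, followed_layout_order, stood_close):
    """Assign one of the four observed-behaviour categories.

    The cut-offs (30 minutes, 10 stops, 2 stops) are invented for
    illustration; a real study would calibrate them to its own data.
    """
    long_visit = total_minutes >= 30   # assumed cut-off for a 'long' visit
    many_stops = stops >= 10           # assumed cut-off for 'looking at everything'
    if long_visit and many_stops and followed_layout_order and stood_close:
        return "ant"          # long visit, everything in order, up close
    if long_visit:
        return "butterfly"    # selective, not always in the 'right' order
    if stops <= 2:
        return "fish"         # quick, superficial, few pauses
    return "grasshopper"      # short visit, a few stops, out of order

print(classify_visitor(45, 20, True, True))   # ant
print(classify_visitor(40, 6, False, False))  # butterfly
print(classify_visitor(8, 1, False, False))   # fish
print(classify_visitor(12, 4, False, False))  # grasshopper
```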

Shobana Jeyasingh Dance Company interviewed 200 people as they came out of the performances at five venues. They were asked two questions (‘what three or four words would you use to describe what you’ve just seen?’ and ‘how many dance performances have you seen in the past 12 months?’), and their responses were recorded on MiniDisc. This meant they only collected a manageable amount of information that was easily analysed on an Excel spreadsheet. Responses were grouped around key words and themes, exploring differences and similarities and working out what was not being said.
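The first step of that kind of analysis, tallying how often each descriptive word recurs across responses, is easy to automate. The sketch below is in the spirit of the company's spreadsheet exercise rather than a reconstruction of it: the sample responses are invented, and the thematic grouping and reading-between-the-lines still have to be done by a person.

```python
from collections import Counter
import re

# Invented example responses to 'what three or four words would you
# use to describe what you've just seen?'
responses = [
    "energetic, precise, hypnotic",
    "precise and mathematical",
    "energetic, strange, beautiful",
]

# Split each response into lowercase words and count recurrences,
# dropping connectives before counting candidate themes.
counts = Counter(
    word
    for response in responses
    for word in re.findall(r"[a-z]+", response.lower())
    if word not in {"and"}
)

print(counts.most_common(2))  # [('energetic', 2), ('precise', 2)]
```

The recurring words then become candidate themes; the evaluator's job is to group them, compare them across venues or attendance frequency, and notice which words never appear.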

Understanding engagement involves three basic things: observing what people say and do, putting ourselves through similar experiences and interacting with people.

So what stops us? The excuses we most often use are that we don’t have time, it’s too expensive and it’s too intrusive. But, as I hope I’ve shown, it’s possible to do simple, straightforward and manageable things to get inside the heads of our audiences, visitors and participants. It takes time, but it is a manageable commitment and it’s worth it. So what questions do you have about how your audiences engage with your work? Why not make a commitment now to find out the answers?

Heather Maitland is a consultant specialising in marketing and audience development
t: 01949 843161; e: hmaitland1@aol.com

1 e.g. Allard, Michael, ‘The Evaluation of Museum-School programmes: the case of historic sites’, Museums, Media, Message, ed. Eilean Hooper-Greenhill (Abingdon: Routledge, 1995), p235; Dufresne-Tassé, Colette, ‘Andragogy (Adult Education) in Museums: a critical analysis and new formulation’, Museums, Media, Message, ed. Eilean Hooper-Greenhill (Abingdon: Routledge, 1995), pp 245 – 260; Economou, Maria, ‘Evaluation Strategies in the Cultural Sector’, Museum and Society, 2 (2004), 30–46
2 Glass, Renee and Stevens, Catherine, ‘Making Sense of Contemporary Dance: an Australian investigation into audience interpretation and enjoyment levels’, available from http://marcs.uws.edu.au/people/stevens/pubs/Glass_ConcConns_eforum.pdf
3 Hein, George E., ‘Evaluating Teaching and Learning in Museums’, Museums, Media, Message, ed. Eilean Hooper-Greenhill (Abingdon: Routledge, 1995), p195
4 Soren, Barbara J., ‘Labels that Stimulate Exploration’ (1998) available from www.informalscience.org/evaluation/report_view.php?id=70
5 Verón, Eliseo and Levasseur, Martine, Ethnographie de l’Exposition (Paris: Bibliothèque Publique d’Information, Centre Georges Pompidou, 1983)
