
Evaluating un-ticketed events in ways that are meaningful and sustainable is a challenge Anne Torreggiani has faced on many occasions. She reflects on her experiences.

Photo: Meet Me Far From Gravity, a pop-up dance performance © Richard Thomson

In 2008 Audiences London pitched the idea of collaborative audience research to a network of outdoor arts promoters. While perhaps not everyone shares our unquenchable thirst for audience insight, the lukewarm reception to our suggestions was surprising. But more recently when we ran an event about understanding audiences for outdoor events, all 120 places were booked within a week. Clearly, something had changed: the outdoor arts sector has benefited from greater investment, a higher profile and the golden galvanising opportunity of 2012, and this momentum has created an urgent imperative for evidence of impact.

But gathering meaningful insight into public engagement, while essential to the sector’s health and far more do-able than many imagine, can be tricky. We recently published a series of advice sheets reflecting our experience of carrying out evaluation and audience research for multiple outdoor events and festivals. These are the top FAQs they address:

Can we afford to evaluate?

With resources tight for everyone, research and evaluation is frequently the first budget line to be squeezed. It is seen as a tick-box exercise rather than a valuable process of reflection, learning, adaptation and advocacy. By the time it has been completed, staff are either too exhausted to think about the next iteration of the event, or have moved on to different projects. We encourage organisations to apply the learning that is generated, but if you can’t identify a vital purpose for the insight, or don’t know what it will change, don’t waste resources gathering it.

Prioritising helps to find an affordable approach. Choosing the right methodology is a trade-off between resources, accuracy and scope, and there is no absolute formula. Un-ticketed events, especially outdoors, present specific problems with accuracy. Audiences are often free-ranging, and because they bypass any formal transaction with organisers, the manpower required to obtain a relevant sample can be prohibitively expensive. Capturing contact details for follow-up research can yield bigger samples, but these may be less representative: participation in a follow-up survey is subject to greater self-selection, so samples are likely to be skewed towards people who are unrepresentatively positive (or even negative) about the event, and reactions are less fresh than when collected ‘live’ on site. A good methodology gathers the combination of quantitative and qualitative data needed, stays within budget and fits with the aesthetics of the event. One effective approach is a combination of on-site interviews and a post-event e-survey using addresses collected on the day: the e-survey is a cheap and manageable way of filling gaps anticipated in the on-site data. This approach requires field-workers with a reasonable level of knowledge and skills, and we have developed a training programme for staff and volunteers.

When do we need professional advice?

Given that any evaluation worth doing should influence strategy and build advocacy, inaccurate conclusions are potentially lethal. If contracting the whole process out to experts is neither feasible nor desirable, it’s worth thinking about commissioning the critical elements. These might include research design (which methods and why, sample frames, what questions to ask and how), interviewer training, statistical analysis and interpretation of findings. On the DIY side, you can make useful economies by careful planning (specifying the purpose, outcomes and research brief), and communicating and acting on findings. At the operational end of the spectrum, consider doing your own data collection and data inputting to keep costs to a minimum.

What does success look like?

A major challenge for the outdoor arts sector is the lack of evidence of public impact. Without it, it is hard to show how the influence and value of the sector has developed, and difficult for individual organisations to set realistic goals. Other sectors have collaborated to streamline their approaches to evaluation to enable benchmarking and debate about performance and expectation. One of the aims of our advice sheets and accompanying toolkit is to establish a flexible but standardised approach to audience research which will enable the development of crucial benchmarks.

Can you afford not to?

This is my FAQ to the outdoor arts community. There seems to be a widely held belief that outdoor arts are “reaching new and more audiences and increasing engagement with the arts” (ACE New Landscapes, Outdoor arts development plan 2008–2011), and giving a unique boost to local economies. The need for hard evidence of these impacts is acknowledged, but no-one seems to question the assumption itself. Our work suggests that free, un-ticketed events do not necessarily reach more socially diverse audiences, nor those who are under-served by other cultural offers. Tickets and walls are not the only barriers to engagement, especially for people who have little sense of cultural entitlement. Excellence in community engagement and audience development is also required to reach different audiences. As outdoor arts enter a golden age, I’m hopeful that we’ll also see an enlightened approach to their evaluation, prompting a mature, informed debate about public engagement.

Audiences London’s symposium exploring these issues is on 24 January 2012: contact bookings@audienceslondon.org. The advice sheets and other useful resources are at www.audienceslondon.org
