Evaluation is held up as an important tool in the arts, yet sometimes arts organisations, nervous of being criticised, don't co-operate with evaluators. Cathie Daycock describes how evaluation can go wrong, and how it can be put right.

Nobody likes to be criticised, and no one wants to pay someone to criticise them. Yet as an external programme evaluator, you are paid to criticise the very people who are paying you. It's like biting the hand that feeds, and it can cause friction, complications and confused loyalties. You become the enemy in the ranks. You are low on the organisation's list of priorities, but above all you are viewed as a nuisance. Information is withheld from you, and situations are orchestrated to influence you and foster a positive perception of the programme.

To keep an evaluation balanced, you have to fight for control, and this struggle for power characterised the first evaluation my consultancy carried out. In the end it became more like a boxing match, with both sides fighting to win each round of the evaluation process.

Round one

We were naïve going into our first evaluation. We imagined that the organisation we were evaluating would share our desire to conduct thorough research concluding with a balanced report. How wrong we were! From the very beginning, Organisation X tried every conceivable trick to manipulate the process. The staff were reluctant to provide all the information we asked for, and placed obstacles in our path as we tried to navigate our way through. In many ways they took advantage of our naïvety. They had the upper hand from the start, as we had relinquished the majority of control to them. This had a detrimental effect on the first half of the evaluation, and left us fighting to claw our way back to a position of equal control.

Organisation X established its control from the beginning: all meetings and negotiations were conducted in its offices, giving its team a psychological advantage. Furthermore, the contract stipulated that the organisation had the ultimate say over the final draft of the evaluation report. This meant it decided which issues could be discussed and which opinions expressed. We feel this defeats the whole purpose of having an external evaluation, or indeed any evaluation at all. What is the point if the results do not reflect the realities of the programme being evaluated?

Round two

As we slowly discovered the obstacles being placed in our way by Organisation X, we realised that this was not going to be straightforward. We would have to fight, and employ tactics of our own, to ensure a balanced approach was maintained throughout. This was going to be tactical warfare: for each move they made, we would have to make a move to counteract it. Blow by blow, we would have to fight our corner.

However, at this point they still had the control. They had built up a strong position because our initial naïvety had led to our acquiescence over important strategic issues. They had final control over the distribution, and therefore the design, of the questionnaires, and they had not released any contact details to us. Furthermore, questionnaires were to be returned to them. We later suspected that a number of questionnaires had been held back because of the criticisms made within them.

The selection of the focus group highlighted the fact that power truly lay in their hands. We prepared a list of people to invite to the focus group, comprising a mixture of positive, negative and balanced responses to the questionnaire. However, we received an email informing us that an executive decision had been made to remove the potentially negative respondents from the selection list. Having relinquished power to them in the earlier stage of the evaluation, we had no procedure in place to counteract this blow to our balanced research.

Their position of power was further strengthened by their protective hold on contact details. They had the ultimate punch, as they were the ones to contact the focus group participants. Even at this point our views were not taken seriously: our suggestion to invite non-respondents to the questionnaire, in order to involve those with literacy difficulties, was simply ignored. We had also lost Round Two of the evaluation. And in many ways we were still in the dark over any instances of political infighting and office politics, despite their effect on the evaluation process and outcome.

Round three

Until this point we were losing the fight. We had relinquished control, and were struggling to retain any power at all. But Round Three heralded our comeback, facilitated by one major event: the project manager was made redundant, leaving a gap before a new project manager was appointed. This enabled us to regroup and instigate a new strategy. We obtained the contact details and began conducting telephone interviews, gathering the information we might have gained had those respondents been invited to the focus group. This allowed us, to some degree, to redress the earlier irregularities in the methodology.

To further strengthen our position, we began to elicit support from other sources. We were concerned that the new project manager would try to alter our finished report, having received an email about making "any necessary changes". Enlisting the evaluation manager of Organisation X's funders to proof-read our report became a tool to help us stand our ground. A chance encounter at a networking event also revealed a surprising ally in a member of staff from Organisation X, who advised us not to accept any changes to our report from the new project manager. Thus, in a surprising turn of events, we had managed to claw back the upper hand.

Knock out?

We might have overcome the main obstacles of our first evaluation, and we emerged more experienced, but the integrity of our evaluation took a few blows along the way. Our research could have been more rigorous, and our findings more balanced.

Our first evaluation was a steep learning curve. Above all, we learnt the necessity of establishing clear ground rules at the outset, setting out control and procedure for the evaluation process. This would eliminate the need for the dirty fighting that characterised our first evaluation.

We believe that an evaluation team can work together with the organisation being evaluated. After all, an arts evaluation team and an arts organisation presumably both want the same thing: to ensure cultural projects achieve positive outcomes for participants. Commissioning an evaluation does not mean the imminent dismissal of a project; it can and should provide well-considered recommendations to improve an already successful project. External evaluators should be treated as consultants rather than as the enemy. Universal good-practice guidelines need to be established for external evaluations, in order to achieve the best results in terms of research and future recommendations for the evaluated project.

Cathie Daycock is a freelance arts consultant.