New England Creative Economy Network: Focus on Project Evaluation

Program Director, Research & Creative Economy

On September 29, the New England Creative Economy Network gathered at NEFA for a highly interactive working session on creative economy project evaluation, with small group assignments followed by full-group process and discussion. Pacey Foster, Ph.D. (UMass Boston) and Richard Maloney, Ph.D. (BU) provided a framework and answered questions specific to the local projects, and participants also helped their peers brainstorm solutions and clarify action steps.

First, participants did a small group activity in which they introduced themselves and shared outputs and outcomes from their current creative economy projects. Each group then shared three of each with the whole room, so that the differences and considerations could be discussed broadly:

Outputs are tangible indicators of project activity (# of performances, applications, participants, grants given, members, meetings, events, phone calls)
Outcomes are long-term results (usually changes) that are related to the outputs measured. Specific examples shared:

  •  Increased peer network in community
  •  New skills in project management and collaboration
  •  New skills in grant writing
  •  More arts activities for the underserved (cultural access)
  •  Change in public perception (more positive comments)
  •  More collaborative space, less vacancy
  •  Sustainable collaborations

Clarifications and considerations – from the group discussion and the framework Pacey shared, called “The Art & Science of Evaluation”: 

  • What you measure always depends on the specific goals of the project (project activities are already tied to the desired outcomes, so you measure the outputs that are relevant to what you want to see.)
  • Outputs themselves don’t tell you anything unless they are connected to their related outcome or goal. For example, attendance at an event can be noted at 100 people (output), but if the goal is to reach a new audience (outcome), then you have to measure how many of those attendees were previously unknown to you. Only then do you know whether you’ve reached your goal.
  • Line up your desired outcomes and relevant outputs at the beginning, so that you set up your measurement early (logic model). Be sure to define your desired outcomes clearly, even if only internally.
  • Determine the type of evaluation that is appropriate for your program – formative (gathering learning along the way) or summative (assessing whether particular goals were met)
  • Plan for how you will deal with unexpected or undesired outcomes.
  • Pay attention and try not to make the same mistakes twice. Look at your data before the end of the project and incorporate necessary shifts in measurement.
  • Some organizations are not interested in being learning organizations (organizations with human systems in place that encourage learning from mistakes and improving on current practices), even though evaluation is so important and a “no-brainer.” Be aware that you may get resistance from staff or board, because admitting mistakes can feel threatening to an organization.
  • And what should we do when we find that we don’t meet our desired outcomes or intended goals? Shortfalls can be hard to reveal, because we are in a culture where we’re not allowed to admit failure, and as a sector we are often trying to prove our value. Ultimately it’s better to admit shortcomings and show a clear plan for improvement.
  • If you make inflated claims about project outcomes (especially economic impact), then people won’t take you seriously – about that claim or any other.

Pitfalls to avoid:

  • Don’t just measure what is easy, especially if it is not relevant to your desired outcomes.
  • Don’t assume that the thing you measure (output) caused an outcome when it may only have contributed to it.
  • Don’t wait until the end of your project to make an evaluation plan.

After a short break, the small groups reconvened to flesh out the current status or a challenge of one of their creative economy projects and apply the framework discussed.

Here are two that were then shared and triaged with the full group:

  •  The goal is to measure the success of alumni of an entrepreneurship program – specifically, how the skills they gained in the program affected their success later on.
  •  Challenges stated initially:
    •  Measuring by later income and job satisfaction is not substantive enough
    •  Program is co-curricular, so multiple departments involved
    •  There is little time to embark on this information gathering, and little support from the board
    •  The program is new, so it’s difficult to measure this for students who graduated before the program launched
    •  An issue with the sample – the students who take the class may already be over-achievers (self-selection)
    •  When and how often should the effects of the program be measured?
  •  Best practices:
    •  Ultimately the goal is to measure the actual skills attained from the program, not just the perception of those skills
    •  To measure a change in skills, pre- and post-surveys are really the best way to go: measure students’ understanding of entrepreneurship and entrepreneurial skills before the program, then conduct a follow-up survey after it. Ideally, in the future there would also be a comparison to a control group that does not go through the program but is similar in all other ways.
    •  Important to look to the literature and stay on top of other similar tracking scenarios (of entrepreneurship skills and activities, for example).
    •  Need to go back to the theory of change to determine the timing of asking for input – what do you really expect to see, and when?


  •  The goal of the program is to “change the perception of the downtown” – to make the town “cool,” which could mean community pride, tourism, or a safe and friendly arts community.
  •  Challenges:
    •  How do you measure perception? You are not measuring reality, but how people see “x” (a town, the US, a program, etc.).
    •  You can measure the decrease in vacancies in the downtown, but will that number really indicate a successful achievement of the outcome? What if the business is not a desired business?
    •  In some cases, perception changes because reality changes. Did perception change because of what this organization did, or because reality changed? (i.e., what we consider “cool” five years from now may not be what we consider “cool” today).
  •  Best practices:
    • Again, a pre- and post-comparison is ideal if you can manage it. Check assumptions about current perceptions by asking the subjects, “What are three words that come to mind when you think of this town?”
    • Stay alert to the information at hand. Though one of the empty storefronts downtown was initially filled with a seemingly unwanted store, paying attention to that store’s sales and the behavior of its neighbors led to a better understanding of what residents needed (brooms) and when (the beginning or end of the month, to clean when moving in or out!). This opens a whole other line of formative inquiry.
    • Partnerships can be both an output and an outcome, depending on the project and its goals. A good tip is to chart the intended outcomes and outputs to measure according to each stakeholder or partner. What does each stakeholder group care about? What would their intended outcomes be? Then work backwards from there.
    • With creative economy projects in particular, the desired outcomes are based on what is unique to the community/neighborhood. What would you like to see in the physical landscape that would tell you that the community has improved in some way? X number of locally owned businesses? X number of local coffee shops? A local grocery store? People walking around after 6:00pm?

Additional resources
Cultural Research Network
National Archive of Data on Arts and Culture (formerly CPANDA)
National Center for Arts Research
AFTA economic impact calculator

Photo: Pacey Foster, Ph.D. by Jeffrey Filiault