A Case and Call for Impact Evaluation – Even in Jewish Experiences

From Section:
Informal Education
Published:
July 23, 2015

Source: eJewish Philanthropy

Establishing what we set out to do in formal Jewish education settings is often complex, and evaluating it can be slippery as we try to develop measures for what seems highly personal. Adding the variable of informal Jewish settings, with their socio-emotional and other affective agendas, only compounds the problem. Still, in an increasingly demanding philanthropic marketplace, with board members, foundations, and supporters caring deeply about the impact of their giving, it is our responsibility to show the value of their investment. We need to move beyond our ‘feelings,’ anecdotal assessments, or purely numerical accounts of people in chairs. We need to be able to say with authority, integrity, and even some degree of empirical certainty that we are doing great work.

We also have a responsibility to ourselves and to those who participate in our experiences – in our case, students – to understand whether what we’re doing is working. We move so quickly during a program year, continually generating and implementing new ideas, often with no more than a few minutes to reflect. We collect intuitive and anecdotal evidence – we might leave a conversation with a student, for example, and feel that we nailed it. But comprehensively, scientifically, are we really making a difference? And what is it that we do that makes that difference?

Impact evaluation is not foreign to Jewish education. Philanthropic organizations – namely, the Jim Joseph Foundation, the Schusterman Foundation, the Covenant Foundation, and others – have been exploring the influence of their programs on participants (among other questions) for many years. Hillel as a movement is beginning to do this work with its Measuring Excellence project. Still rare, though, is an organization taking on the responsibility of asking these questions itself, and not just about one program but about all of its programs, all of its annual efforts. Driven by our own need for data and by our responsibility to supporters and participants, Berkeley Hillel took on such a project.

The survey and focus group protocols were based on Berkeley Hillel’s organizational theory of change, which unites everything that Berkeley Hillel tries to do – all of its strategic efforts and programs. Together with student registration records, these three methods constitute a comprehensive program of measurement and evaluation, allowing Berkeley Hillel to understand the totality of the student experience (through the survey), the texture of that experience (through the focus groups), and student participation trends (through registration). These methods reveal not only Hillel’s influence but also which programs and efforts create that influence.

What have we learned through this measurement project – not about Berkeley Hillel, but about the work of measurement itself?

Response rate means (almost) everything. We can have the most thorough survey and the strongest focus group questions, but without a tremendous effort to find Jewish and non-Jewish student respondents, we cannot analyze the data in deep and interesting ways. Our survey has had a decent response rate, and we have even had respondents with and without involvement in Hillel and prior Jewish experiences. Yet while we can do a great deal of analysis with the entire dataset, we often want to study small sub-populations (such as in-state students or participants in Alternative Breaks), and the response rate is too low to isolate samples of significant size in each of these areas. This prevents more complex learning about these populations.

This effort benefits from collaboration. It might seem like a staff-driven effort, with deep subject matter expertise from outside of Hillel and leadership from the staff whose work is most affected. But Hillel’s volunteer leaders and students have their own questions about Hillel that they want to pursue, along with intuition and experience about how to ask them. And the more perspectives involved in looking at the data, making connections, and asking questions, the richer the analysis can be. We also worked with subject matter experts on campus – professors with decades of sophisticated knowledge in studying psychological attitudes and in polling – and we consulted with the university’s expert in student data.

The work – not just of measurement, but the work itself – is complex. We have produced empirically based “answers” for supporters, and we have learned a great deal for ourselves about the success of Hillel’s work and about where Hillel can be more effective. At the same time, the focus groups make particularly evident the many twists and turns the college experience holds.

That is all the more reason why this project is important: it reveals how critical the college years are to individuals’ Jewish journeys, and thus why we need to truly measure the effectiveness of our work.

Read the entire article at eJewish Philanthropy.


Updated: Feb. 07, 2017
Keywords:
Evaluation | Hillel | Informal education