I was asked by a student who had the opportunity to work with a group of clients how to measure the success of the programme. Ah: the thorny problem of measurement. Success is easier said than measured because, as the old adage goes, "what we measure is what we get". If we measure how many people attend each session, we may focus on 'presenteeism', not engagement. If we measure how many attendees recommend the programme, the recommendations might be because people enjoyed themselves, not necessarily because they learned what we set out to teach. If all attendees passed an independent licensing test, that tells us they reached a standard, but not necessarily because of our programme. It is easy to see that any one measure tells only part of the story, whereas all three measures taken together build a clearer picture.
How do we do that? To begin with, we can seek out papers on other successful rehabilitation or community development programmes and investigate how they measured 'success' in their projects. In searching for papers, I would initially use keywords such as 'outcomes', 'measurement' and 'effectiveness', then be guided by the keywords of the published papers I found as my reading progressed. As we find papers, we explore the researchers' rationale for why they measured the things they measured, and the specifics of what they measured. We think about transferability.
As to 'what is measured', we do need to keep measurement as simple as we possibly can, so that our scarce resources - our programme funding - are not wasted on measurement itself (think Jarndyce and Jarndyce in Dickens's Bleak House, 1852). The less time we spend on actually measuring, and the more automatic we can make our data collection, the more time and resources we have for implementing and undertaking the actual project (Graeber, 2018).
Apps may also be our friend here: for tracking engagement, for encouraging impromptu tests, and - along with swipe cards - for tracking attendance. We should use technology wherever we can, so that all we have to do is analyse the data, not collect it.
We also need to be mindful that many of our participants may have a long way to travel to reach whatever 'success' looks like. We tend to under-estimate people's blocks and barriers, and just how much conditioning they may need to overcome before they reach anything approaching what we determine as 'success'. What may seem like a small shift to us may seem monumental to our participants.
Because of this issue of scale, we might be best to begin, at the outset, with a survey of each participant. If participants cannot read well, or are not yet digitally competent, this could be set up as a guided experience with someone well-briefed and trained to facilitate the initial data collection. The results of the initial survey could then be paired with a corresponding exit survey at the end of the programme.
Further, undertaking some final interviews with a selected sample of participants to conclude the evaluation cycle could also be helpful. A purposive selection (Braun & Clarke, 2017) of those whose surveys showed that they (a) did not fully engage, or (b) engaged a lot, could allow us to seek differences between the two groups. The results could help us refine the programme delivery for the next run-through, creating or modifying the elements needed for better engagement.
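For readers who like to see the mechanics, the pairing-and-selection idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration only - the field names (`sessions_attended`, `entry_score`, `exit_score`), the engagement threshold, and the sample data are all my own assumptions, not part of any real programme's dataset:

```python
# Hypothetical sketch: pair each participant's entry and exit survey scores,
# split participants into low- and high-engagement groups by sessions attended,
# and report the mean pre/post shift per group (to guide interview selection).

def split_by_engagement(records, threshold):
    """Partition participants by engagement; return each group with its mean score shift."""
    low, high = [], []
    for r in records:
        (high if r["sessions_attended"] >= threshold else low).append(r)

    def mean_shift(group):
        # Average (exit - entry) change; 0.0 if the group is empty.
        if not group:
            return 0.0
        return sum(r["exit_score"] - r["entry_score"] for r in group) / len(group)

    return {"low": (low, mean_shift(low)), "high": (high, mean_shift(high))}

# Invented sample data for illustration only.
participants = [
    {"name": "A", "sessions_attended": 3,  "entry_score": 2.0, "exit_score": 2.5},
    {"name": "B", "sessions_attended": 9,  "entry_score": 2.0, "exit_score": 4.0},
    {"name": "C", "sessions_attended": 10, "entry_score": 3.0, "exit_score": 4.5},
    {"name": "D", "sessions_attended": 2,  "entry_score": 1.5, "exit_score": 1.5},
]

result = split_by_engagement(participants, threshold=8)
```

A comparison like this would only flag candidates for the purposive interviews; the interviews themselves remain the place where the 'why' behind each group's shift is explored.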
However, we need to remember that engagement, or the ability to engage, may have more to do with 'follower readiness', as per Hersey and Blanchard's life cycle theory (Hersey & Blanchard, 1969), than with the design of our programme. We need to be prepared to adapt our delivery to take best advantage of our followers' 'readiness' - by evaluating "performance gaps and [the] underlying causes" of blocks and barriers, and by taking into account each follower's "ability and willingness" to do what they are being asked to do (Goldsmith & Lyons, 2011, p. 28).
This is what being an educator is all about, too.
Sam
References:
Braun, V., & Clarke, V. (2017). Successful Qualitative Research: A practical guide for beginners. SAGE Publications Ltd.
Dickens, C. (1852). Bleak House. George Routledge and Sons.
Goldsmith, M., & Lyons, L. S. (Eds.). (2011). Coaching for leadership: The practice of leadership coaching from the world's greatest coaches (2nd ed.). John Wiley & Sons.
Graeber, D. (2018). Bullshit Jobs: A theory. Simon & Schuster.
Hersey, P., & Blanchard, K. (1969). Life cycle theory of leadership. Training and Development Journal, 23(5), 26-34.
Find the key to unlock the door - First things first. Thank you Sam.
Good point: yes, teaching is finding the key, again and again!