Saturday, September 12, 2009

Assignment #1

THE YOUNG ADULT OFFENDER (YAO) PROGRAM AT SCI-PINE GROVE:
AN EVALUATION OF THE LINK BETWEEN THERAPEUTIC COMMUNITY PARTICIPATION AND SOCIAL COGNITIVE CHANGE AMONG OFFENDERS
Principal Investigator: Ariana Shahinfar, Ph.D.

Let me begin by encouraging anyone who reads this evaluation of a program evaluation to read the document I researched. It’s really, genuinely, interesting. I’m not kidding. My family read it and it started some interesting conversations. You can find it here.

I used Dr. Carter McNamara’s Basic Guide to Program Evaluation, as well as sections of Program Evaluation: An Introduction by David Royse, Bruce A. Thyer, and Deborah K. Padgett, as references in my investigation.

The program being evaluated is a therapeutic community program that is divided into graduated stages preceding an inmate’s release back into society. Dr. Ariana Shahinfar addresses a new indicator of the rehabilitative value of programming in the youth prison system by attempting to measure the changes in offenders’ social cognition as they progress through their programming.

I believe this was a form of outcomes-based evaluation. While this youth detention program is not a traditional charity answering to donors, one could see the taxpayers funding the program, along with the administrators interested in increasing the effectiveness of its programs, as the audience.

The major outcome set out for this evaluation is to answer the following question: does the current therapeutic program at SCI-Pine Grove create lasting behavioral change, or is the behavioral change seen simply an adaptive behavior within the program? The indicators that suggest changes are being made in social cognitive skills were set out in the measures section under the categories of Social Cognition, Community Thinking, and Personal Growth. Inmates take part in two structured interviews during which they are asked questions designed to measure attitudes, biases, and goals. One interview provides an intake measurement; the other measures whether growth has occurred over a specified time period. Measurements for this evaluation are strictly structured “tests” and the gathering of observable indicators of behavior (e.g., violent incident reports). Inmates are not asked to comment on their feelings toward the effectiveness of the program.

While the McNamara guide cites failing to include personal interviews as a pitfall to avoid, I think the absence of open-ended interviews in this case was highly appropriate. Even in a situation where collecting an inmate’s “story” was presented as wholly unconnected to the inmate’s assessment and reporting portfolio, I suspect many young offenders in a program such as this would find it difficult to be honest if they felt they were unaffected by a rehabilitation program. It might, however, be useful to attempt a post-release interview with former inmates once this biasing factor has expired.

Upon looking for the "target" goal of this evaluation, I briefly reconsidered its status as an evaluation. I panicked a little and wondered if this was simply research into social cognition rather than a requested program evaluation. After digging around the Pennsylvania Corrections site (and noticing that the first person mentioned in the report’s acknowledgements is part of a Research and Evaluation department), I was able to accept that the evaluation did not deal in “target” goals, but that it did explore the possibility and practicality of measuring a target set for a social cognition goal in a later evaluation.

It is the methodology of this evaluation that impressed me most. After reading the project background and the design description in the methods section, I turned over the paper and quickly scribbled down all of the pitfalls I could anticipate. When I resumed reading, I was very impressed that Shahinfar addressed every initial doubt I had. This highlighted to me the importance of stepping back at various stages, on your own or with a colleague, to “play devil’s advocate,” as McNamara suggests.

One of the goals of the project was to evaluate whether there was a correlation between increases in social cognitive skill and an inmate’s advancement through the prison’s system of promotion. Shahinfar finds almost no correlation, and what little temporal correlation she does find is suggested to be the natural result of time and maturation. I was disappointed that Shahinfar did not make specific reference to the possible value of the current promotion system; however, she does suggest that applying a similar study to an adult population might help in interpreting future results.

As a final thought, I would like to gain a better understanding of the differences between Program Evaluations and whatever might be mistaken for a Program Evaluation. I think sections of the project I reviewed could be better classified as a study, while other portions seem to clearly serve the role of an evaluative tool. Are there hybrids? Unholy Program Evaluation/Research Study Chimeras? I’ll be back poking around the material Jay has provided and generally skulking around the internet to find out. When I do, I’ll post it.

1 comment:

  1. Nice work Lesley

    Your commentary is full of reflection and healthy mistrust of the process of evaluation. You adequately explain the goals of the program and the evaluation approach that has been applied. The suggestions you make for improving this survey are relevant and would strengthen the evaluation. That being said, you do find a way to accept the outcomes of this evaluation and ask an even greater question about the identity or cross-pollination of the fields of evaluation and general research. I am interested to see if others are struck by the same dilemma.
