Saturday, September 19, 2009

Assignment #2

An Approach to Evaluating ECS Programming for Children with Severe Disabilities



Overview:

The ECS Programming for Children with Severe Disabilities document describes the programming offered by Alberta Education to children with severe/profound disabilities aged 30 months to 6 years. The programming is provided in both community and home-based settings and involves educators, therapists, assistants and families in a blended program. The programming is regulated and must meet strict standards governing initial diagnosis and application, programming oversight, and time requirements.


Assessment:
Assessment:
In my experience, the issues surrounding the effectiveness of a program often come down to how well a bureaucratic configuration fits the system it seeks to guide. If a professional bureaucracy is to be effective, its rules and regulations need to be seen as relevant, reasonable and acceptable by all parties involved. The quality and delivery of the organization’s professional development is also an important factor. Professional development is often treated as a process of accreditation rather than as a pool of resources. When examining a program’s professional development, we ask whether the stakeholders have the relevant information and support to make the best use of the resources available to them.

When I read the program description, most of my questions concerned how effectively the organizing system applies an iterative process for examining the relevance and effectiveness of its own policies. My questions were:


Timelines:
What is the expected timeline between a diagnosis and the child receiving the required additional assessment by qualified personnel, such as a Speech/Language Pathologist, Pediatrician, Chartered Psychologist or Child Psychiatrist?
Does this match the actual timeline?
Applying:
Is the process of qualifying for funding severely hindering the effectiveness of the program? If so, could suggestions on how to expedite it be gathered as part of the evaluation?
Is the process of re-applying each year onerous to the point of hindrance and, if so, what suggestions can stakeholders offer?
Is the process not thorough enough? Is there a proportion of children in the program who do not fit it?
Coordination:
How well coordinated are combined programs? Is there a cogent facilitation plan in place to allow communication between program providers in the case of a combined program?
Professional Development:
Are the teachers in charge of developing the Individual Program Plans (IPP) comfortable with their level of training and expertise to be able to create these plans?
Are the other stakeholders comfortable with the level of teacher training and expertise to be able to create these plans?
What resources are available to teachers for building and implementing IPPs?
Are the teachers aware of the resources available for the child, and for the building and implementing of IPPs?
What level of training and resources is being provided to home caregivers to allow the benefits of programming to extend beyond the home visits?
Regulation:
What constitutes the visit time measurement of 1.5 hours?
Is it 1.5 hours of direct contact with the child, or does that time include documenting the visits, briefing and/or debriefing with the caregiver?
Morale:
What attitudes and opinions do supervising teachers and programming providers have towards their level of responsibility?

Approach:

I would take a two-pronged approach. Firstly, I would engage the stakeholders with a questionnaire exploring the various groups’ answers to these questions. This would give me a reading of the overall institutional health of the organization.

Attitudes towards things like authority structures, enrollment management, remuneration and professional development are often measured using survey rating scales. I disagree with applying them in this case. Asking about degrees of agreement or dissent gives stakeholders no opportunity to contribute to building a better system, or to argue for why beloved systems should remain unchanged. This approach could be considered a participant-oriented model.

Secondly, I would invite stakeholders to begin the process of creating a logic model. The following image from The University of Wisconsin’s Program Development and Evaluation Center gives an overview of how a program is interpreted through a logic model. The ability of a logic model to provide a common language and clarified, commonly determined outcomes would be of benefit to this organization.
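In rough terms (my paraphrase of the Wisconsin chain, not the figure itself), the model reads:

Inputs (staff, funding, time, partners)
-> Outputs: Activities (what the program does) and Participation (who it reaches)
-> Outcomes: short-term (learning), medium-term (action), long-term (changed conditions)

with the program’s assumptions and external factors running underneath the whole chain.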




In closing, I would caution against attempting to quantify outcomes in an organization such as this. This form of feedback can be misleading, as the results depend as much on the children and their individual abilities and circumstances as they do on the program and its activities.







Wednesday, September 16, 2009

Step 1: my list of questions (focusing required)

-What is the expected timeline between a diagnosis and the child receiving the required additional assessment by qualified personnel, such as a Speech/Language Pathologist, Pediatrician, Chartered Psychologist or Child Psychiatrist?

-Does this match the actual timeline?

-Is the process of qualifying for funding severely hindering the effectiveness of the program? If so, could suggestions on how to expedite it be gathered as part of the evaluation?

-Is the process of re-applying each year onerous to the point of hindrance and, if so, what suggestions can stakeholders offer?

-Is the process not thorough enough? Is there a proportion of children in the program who do not fit it?

-How well coordinated are combined programs? Is there a cogent facilitation plan in place to allow communication between program providers in the case of a combined program?

-Are the teachers in charge of developing the Individual Program Plans (IPP) comfortable with their level of training and expertise to be able to create these plans?

-Are the other stakeholders comfortable with the level of teacher training and expertise to be able to create these plans?

-What resources are available to teachers for building and implementing IPPs?

-Are the teachers aware of the resources available for the child, and for the building and implementing of IPPs?

-What level of training and resources is being provided to home caregivers to allow the benefits of programming to extend beyond the home visits?

-What constitutes the visit time measurement of 1.5 hours?

-Is it 1.5 hours of direct contact with the child, or does that time include documenting the visits, briefing and/or debriefing with the caregiver?

-What attitudes and opinions do supervising teachers and programming providers have towards their level of responsibility?

Sunday, September 13, 2009

I'm obviously a little touchy

Looking at my assigned case study, I found a quote whose purpose was obviously to inspire.
Instead it irked me, and I found myself looking back at the case more critically.

"A child can see a painting, but it takes a teacher to unlock the beauty that is contained within it.”

Is it just me, or does that seem a bit dismissive?

Anyhow, I like to take clear note of whenever I find myself being biased. I'll try to put more weight on the notes I took on the first reading than on those from the second pass.

Social Science Research vs. Program Evaluation

I found this little ditty entitled Michael Scriven on the Differences Between Evaluation and Social Science Research

"Evaluation determines the merit, worth, or value of things" while "Social science research does not establish standards or values and then integrate them with factual results to reach evaluative conclusions"

Saturday, September 12, 2009

Assignment #1

THE YOUNG ADULT OFFENDER (YAO) PROGRAM AT SCI-PINE GROVE:
AN EVALUATION OF THE LINK BETWEEN THERAPEUTIC COMMUNITY PARTICIPATION AND SOCIAL COGNITIVE CHANGE AMONG OFFENDERS
Principal Investigator: Ariana Shahinfar, Ph.D.

Let me begin by encouraging anyone who reads this program evaluation evaluation to read the document I researched. It’s really, genuinely, interesting. I’m not kidding. My family read it and it started some interesting conversations. You can find it here.

I used Dr. Carter McNamara’s Basic Guide to Program Evaluation, as well as sections of Program Evaluation: An Introduction by David Royse, Bruce A. Thyer, and Deborah K. Padgett, as references in my investigation.

The program being evaluated is a therapeutic community program divided into graduated stages that precede an inmate’s release back into society. Dr. Ariana Shahinfar addresses a new indicator of the rehabilitative value of programming in the youth prison system by attempting to measure the changes in an offender’s social cognition as they progress through their programming.

I believe this was a form of outcomes-based evaluation. While this youth detention program is not a traditional charity answering to donors, one could see the taxpayers who fund the program, and the administrators interested in increasing its effectiveness, as the audience.

The major outcome set out for this evaluation is to answer the following question: does the current therapeutic program at SCI-Pine Grove create lasting behavioral change, or is the behavioral change simply adaptive behavior within the program? The indicators that suggest changes are being made in social cognitive skills were set out in the measures section under the categories of Social Cognition, Community Thinking and Personal Growth. Inmates take part in two structured interviews in which they are asked questions that measure attitudes, biases and goals. One interview is for intake measurement; the other measures whether growth has occurred over a specified time period. Measurements for this evaluation are strictly structured “tests” and the gathering of observable behavioral indicators (e.g., violent incident reports). Inmates are not asked to comment on their feelings towards the effectiveness of the program.
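Just to make the pre/post design concrete for myself, here is a rough sketch (in Python, with invented numbers and names, nothing taken from the actual report) of how change between the intake and follow-up interviews might be summarized:

from statistics import mean

# hypothetical intake and follow-up scores on one social-cognition scale,
# one pair per inmate (invented numbers, not data from the report)
intake    = [2.1, 2.4, 1.9, 2.8, 2.2]
follow_up = [2.6, 2.5, 2.4, 2.9, 2.3]

# change score for each inmate: follow-up minus intake
changes = [post - pre for pre, post in zip(intake, follow_up)]

print("mean change:", round(mean(changes), 2))
print("inmates who improved:", sum(c > 0 for c in changes), "of", len(changes))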

While the McNamara guide cites failing to include personal interviews as a pitfall to avoid, I think the absence of open-ended interviews in this case was highly appropriate. Even in a situation where collecting an inmate’s “story” was presented as wholly unconnected to the inmate’s assessment and reporting portfolio, I suspect many young offenders in a program such as this would find it difficult to be honest if they felt they were unaffected by a rehabilitation program. It might, however, be useful to attempt a post-release interview with former inmates once this biasing factor has expired.

Upon looking for the "target" goal of this evaluation, I briefly reconsidered its status as an evaluation. I panicked a little, and wondered if this was simply research into social cognition rather than a requested program evaluation. After digging around the Pennsylvania Department of Corrections site (and simply seeing that the first person mentioned in the report’s acknowledgements is part of a Research and Evaluation department), I was able to accept that the evaluation did not deal in “target” goals, but it did explore the possibility and practicality of measuring a target set for a social cognition goal in a later evaluation.

It is the methodology of this evaluation that impressed me most. After reading the project background and the design description in the methods section, I turned over the paper and quickly scribbled down all of the pitfalls I could anticipate. When I resumed reading, I was very impressed that Shahinfar addressed every initial doubt I had. This highlighted to me the importance of stepping back at various stages (and/or enlisting a colleague) to “play devil’s advocate”, as McNamara suggests.

One of the goals of the project was to evaluate whether there was a correlation between increases in social cognitive skill and an inmate’s advancement through the prison’s system of promotion. Shahinfar finds almost no correlation, and suggests that what little temporal correlation she does find is the natural result of time and maturation. I was disappointed that Shahinfar did not make specific reference to the possible value of the current promotion system; however, she does suggest that applying a similar study to an adult population might help in interpreting future results.
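For my own notes, the check being described amounts to correlating each inmate’s cognitive change score with the promotion stage they reached. A toy version of that calculation (invented numbers and variable names, not Shahinfar’s data) might look like:

from statistics import correlation  # available in Python 3.10+

# hypothetical change in social-cognition score vs. stage reached in the
# prison's promotion system (invented numbers, not from the report)
change_scores = [0.5, 0.1, 0.4, 0.0, 0.3, 0.2]
stage_reached = [3, 1, 2, 2, 4, 1]

r = correlation(change_scores, stage_reached)
print("Pearson r between cognitive change and promotion stage:", round(r, 2))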

As a final thought, I would like to gain a better understanding of the differences between Program Evaluations and whatever might be mistaken for a Program Evaluation. I think sections of the project I reviewed could be better classified as a study, while other portions seem to clearly serve the role of an evaluative tool. Are there hybrids? Unholy Program Evaluation/Research Study Chimeras? I’ll be back poking around the material Jay has provided and generally skulking around the internet to find out. When I do, I’ll post it.

Saturday, September 5, 2009

Process-based evaluation tweaked for my world

1. On what basis do staff, division, and/or the students decide that they belong at our school?
2. What is required of staff in order to deliver the programs and courses?
3. How are staff trained about how to deliver the programs or courses?
4. How do students come into the program (where are they screened)?
5. What is required of students?
6. How do staff select which programs or courses will be provided to students?
7. What is the general process that students go through in a program or course?
8. What do students consider to be strengths of the program?
9. What do staff consider to be strengths of the program?
10. What typical complaints are heard from staff and/or students?
11. What do staff and/or students recommend to improve the courses or programs?
12. On what basis do staff, division and/or the students decide that the product or services are no longer needed?