Saturday, September 19, 2009

Assignment #2

An Approach to Evaluating ECS Programming for Children with Severe Disabilities



Overview:

The ECS Programming for Children with Severe Disabilities document describes the programming offered by Alberta Education to children with severe/profound disabilities aged 30 months to 6 years. The programming is provided in both community and home-based settings and involves educators, therapists, assistants and families in a blended program. It is regulated and must meet strict standards concerning initial diagnosis and application, programming oversight, and time.


Assessment:
In my experience, the issues surrounding the effectiveness of a program often come down to how well a bureaucratic configuration fits the system it seeks to guide. If a professional bureaucracy is to be effective, its rules and regulations need to be understood as relevant, reasonable and acceptable to all parties involved. The quality and delivery of the organization’s professional development is also an important factor. Professional development is often seen as a process of accreditation rather than as a pool of resources. When examining a program’s professional development, we ask whether the stakeholders have the relevant information and support to make the best use of the resources available to them.

When I read the program description, the majority of my questions concerned the organizing system’s ability to apply an iterative process for examining the relevance and effectiveness of its policies. My questions were:


Timelines:
What is the expected timeline between a diagnosis, and the child receiving the required additional assessment by qualified personnel, such as a Speech/Language Pathologist, Pediatrician, Chartered Psychologist or Child Psychiatrist?
Does this match the actual timeline?
Applying:
Is the process of qualifying for the funding severely hindering the effectiveness of the program? If so, could suggestions on how to expedite the process be gathered as part of the evaluation?
Is the process of re-applying each year onerous to the point of hindrance and, if so, what suggestions can stakeholders offer?
Is the process not thorough enough? Are there children in the program who don’t fit it?
Coordination:
How well coordinated are combined programs? Is there a cogent facilitation plan in place to allow communication between program providers in the case of a combined program?
Professional Development:
Are the teachers in charge of developing the Individual Program Plans (IPP) comfortable with their level of training and expertise to be able to create these plans?
Are the other stakeholders comfortable that teachers have the training and expertise to create these plans?
What resources are available to teachers for building and implementing IPPs?
Are the teachers aware of the resources available for the child, and for building and implementing IPPs?
What level of training and resources is being provided to home caregivers so that the benefits of the programming extend beyond the home visits?
Regulation:
What constitutes the visit time measurement of 1.5 hours?
Is it 1.5 hours of direct contact with the child, or does that time include documenting the visit and briefing and/or debriefing with the caregiver?
Morale:
What attitudes and opinions do supervising teachers and programming providers have towards their level of responsibility?

Approach:

I would take a two-pronged approach. Firstly, I would engage the stakeholders with a questionnaire exploring each group’s answers to these questions. This would give me a reading of the overall institutional health of the organization.

Often, when measurements of attitudes towards things like authority structures, enrollment management, remuneration and professional development are attempted, they are made using survey rating scales. I disagree with applying them in this case. Asking about degrees of agreement or dissent doesn’t give stakeholders the opportunity to provide input into building a better system, or to argue why beloved systems should remain unchanged. This approach could be considered a participant-oriented model.

Secondly, I would invite stakeholders to begin the process of creating a logic model. The following image from the University of Wisconsin’s Program Development and Evaluation Center gives an overview of how a program is interpreted through a logic model. The ability of a logic model to provide a common language and clarified, commonly determined outcomes would be of benefit to this organization.

[Image: logic model overview, University of Wisconsin Program Development and Evaluation Center]
In closing, I would caution against attempting to quantify outcomes in an organization such as this. This form of feedback can be misleading, as the results depend as much on the children and their individual abilities and circumstances as on the program and its activities.







1 comment:

  1. I would expect nothing less from you Lesley. You have obviously sat back and pondered the need and the purpose for this evaluation. You explore a number of important and undefined issues related to the program and its evaluation. I think that the 'up-front' work on this or any evaluation would allow you to gain a thorough understanding of what needs to be evaluated. I also applaud your identification of the importance of the individual child in this process and how they are impacted by an evaluation.
