Friday, November 13, 2009

Was this a mistake?

I struggled with one of my questions, and the struggle continues...

I wanted to have a question that would draw out opinions and attitudes towards provincial curriculum documents, without the question biasing the response. Essentially I was fishing for "beefs".
I settled on the following question:

"What do you believe is the main goal of the course as presented by Provincial Curriculum documents?"


I expected responses that would either attempt to answer the question at face value or offer a few along the lines of "the goal of the ELA curriculum is to make students hate reading novels." I hoped to pull those quotes for use in Professional Development discussions as conversation starters.

I am getting several of the first kind of response, but in addition I am getting a healthy handful of "I don't understand the question."

Hmmmmm....
What clearer question could I have asked to achieve my goal?

Thursday, November 12, 2009

Survey-o-Rama

I created a survey to gather some baseline data surrounding the theme of "Relevance" and tested it with a few non-teaching staff. It did not go over well. It was too wordy, and it had several sneaky distractor phrases in it that I had not caught.
example:

"I feel I have been given the freedom to modify my course content to make it as relevant as I feel is possible."

The "I have been given" wording didn't allow for those who had simply "taken" the freedom. In this case, I was less interested in whether teachers felt "allowed" to do something than in whether they felt "free" to do it.

I originally tried to set the survey up on the U of S site, but abandoned it when I saw how much less user-friendly it was in comparison to the unfortunately named SurveyMonkey.com.

Here's a draft of the final question set I sent out to Staff:




I have been employed as an educator for:

0-9 Years
10-19 Years
20+ Years

Rate your level of agreement with the following statements:
Strongly Disagree
Disagree
Somewhat Disagree
Somewhat Agree
Agree
Strongly Agree

I consider myself a subject expert in at least one of the courses I am teaching this trimester.

I feel students at my school are receiving an education that addresses their basic needs.

I feel students at my school are receiving an education that addresses their academic needs.

I feel students at my school are receiving an education that addresses their social and emotional needs.

I have the freedom to modify my course content to make it as relevant as I feel is possible.

I have the freedom to modify my teaching methodologies to make the course as relevant as I feel is possible.


During the last month, I have heard a student make the following statement, or one with similar meaning:

0 times    1-3 times    4-8 times    9 or more times


This is stupid!

Why do we have to do this?

How much do I have to do?

Can’t you just give us the answers?

Am I going to pass?

This is interesting!

Can we (insert a learning activity the student is suggesting)?

This will help me (insert a future application for the skill the student is learning)

Don’t tell me the answer!

When answering the following open-ended questions, please indicate the general subject area at the beginning of your response. You may opt out of these questions if you have any concerns regarding anonymity.

What portion(s) of your course are relevant?


What portion(s) of your course lack relevance?

What do you believe is the main goal of the course as presented by Provincial Curriculum documents?

What barriers, if any, are there to presenting the course in a way that is most relevant to students?




I will show you all the on-line version of the survey in class. I would include the link here for you all to peek at, but I'm afraid of empty response numbers creeping into my data....
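Those empty responses are exactly the kind of thing that can be screened out before tallying. Here is a minimal sketch of how that screening and the tallying of the agreement-scale answers might look; the column name and the toy responses are invented for illustration, and a real SurveyMonkey export would use different field names.

```python
from collections import Counter

SCALE = ["Strongly Disagree", "Disagree", "Somewhat Disagree",
         "Somewhat Agree", "Agree", "Strongly Agree"]

def tally(rows, question):
    """Count each scale choice for one question, skipping blank answers."""
    counts = Counter(row[question] for row in rows
                     if row.get(question, "").strip())
    return {choice: counts.get(choice, 0) for choice in SCALE}

# Toy responses; the second is an "empty response" that gets excluded.
rows = [
    {"freedom_content": "Agree"},
    {"freedom_content": ""},
    {"freedom_content": "Strongly Agree"},
]
print(tally(rows, "freedom_content"))
```

Reporting a zero for every unused scale point (rather than omitting it) keeps the six-point scale visible even when nobody chose an option.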

Friday, October 23, 2009

Planning a Program Evaluation: Worksheet

Engage Stakeholders

Who should be involved?

Board administration
School-based administration
School-based student services
School-based Learning Coordinators / Learning Leaders
School-based teaching and educational assistant staff
Students

How might they be engaged?
Through participating in the design, implementation, and reporting of results.

Focus the Evaluation

What are you going to evaluate? Describe program (see logic model).

What is the purpose of the evaluation? To determine the effectiveness of the relevance portion of the collegiate renewal program.


Who will use the evaluation? How will they use it?

Who/users: How will they use the information?

Students and teachers: To inform their practice (what they are doing during the evaluation).

Teachers and school-based administration: To inform and guide future practice (what they will do post-evaluation).

School-based administration and board administration: To account for programs that are successful, so that they can be replicated, or unsuccessful, so that they can be addressed.


What questions will the evaluation seek to answer?

Does ____________ programming have an effect on student perception of their learning purpose and, if so, what is the effect and how was it made?
OR
Main Question:
Did students learn the relevance of schooling?

Sub Questions:
How aware were students of the relevance of schooling before it was addressed through collegiate renewal?

How often are teachers informing students of the relevance of what is being studied?

In what ways are teachers informing students of the relevance of what is being studied?

What support materials are provided and used by teachers and students to connect topics being studied and real-world application?

Is the attempt to make schooling more relevant successful?

Should increasing students’ awareness of relevance be a focus for 2010-2011?




What information do you need to answer the questions?

What I wish to know Indicators – How will I know it?

What are students’ understandings of their purpose in regard to their education?

 Quotes will include indicator terms and phrases such as “I need my grade 12 to get a job” (indicating ritualistic participation) or personal relevance terms such as “like”, “enjoy”, “interested”, “understand” and “help”

How often are teachers informing students of the relevance of what is being studied?

 Frequency charts, self reporting, interviews

In what ways are teachers informing students of the relevance of what is being studied?
 Frequency charts, self reporting, interviews

What support materials are provided and used by teachers and students to connect topics being studied and real-world application?
 Material and document review, observation

Is the attempt to make schooling more relevant successful?
 Students will be able to articulate how their engagement in school will affect their lives outside of school and into the future.
 Students will be able to articulate what skills they are being offered and how those skills are/will be useful in the future
 Students may be able to articulate how learning continues beyond the classroom and beyond their school years.
 Teachers will perceive a decreased need to “prod” or “cajole” students into engaging in a task or activity.
 Success/retention rates shown in number of credits earned year to year

Should increasing students’ awareness of relevance be a focus for 2010-2011?
 The success of moderate intervention during the school year
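The indicator terms listed above lend themselves to a simple first-pass coding of transcribed quotes. This is only a sketch of that idea; the term lists and the sample quotes are illustrative, not the actual coding scheme.

```python
import re

# Illustrative term lists drawn from the indicators above.
RELEVANCE_TERMS = {"like", "enjoy", "interested", "understand", "help"}
RITUAL_PHRASES = ["need my grade 12", "have to do this", "going to pass"]

def code_quote(quote):
    """Return (relevance term hits, ritual phrase hits) for one quote."""
    words = set(re.findall(r"[a-z']+", quote.lower()))
    relevance = words & RELEVANCE_TERMS
    ritual = [p for p in RITUAL_PHRASES if p in quote.lower()]
    return relevance, ritual

rel, rit = code_quote("I need my grade 12 to get a job")
```

A pass like this would only flag quotes for closer reading; the actual interpretation of each quote would still need to be done by hand.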

When is the evaluation needed? May 2010

What evaluation design will you use? Mixed Method Design

Collect the information
What sources of information will you use?
Existing information: Tell Them From Me data

People:
School-based administration
School-based student services
School-based Learning Coordinators / Learning Leaders (see note 1)
School-based teaching and educational assistant staff
Students

Pictorial records and observations: Observation logs from __________ and _________

What data collection method(s) will you use?

Survey
Interview
Observation
Document review
Testimonials
Journal, log, diary
Other (list): Credit review

When will you collect data for each method you’ve chosen?

Method Before program During program Immediately after Later
Survey X X
Interview X X
Observation (administration and school counselor to keep visitation logs) X X
Log X
Document review X
Credit Review X X

Instrumentation: What is needed to record the information?
Computer
Flip Camera
Digital Audio Recorder
Transcription software

Will a sample be used? No___________________

Analyze and Interpret

How will the data be analyzed?
Data analysis methods: video, textual, and numerical data analysis

Who is responsible: ___

How will the information be interpreted—by whom? The information will be interpreted in a report written by Lesley Walters and vetted by a selection of stakeholders

Use the Information

How will the evaluation be communicated and shared?

This has yet to be determined. While I can expect that the report will be shared with teaching, administration and para-professional staff, it has not been decided whether this information will be appropriate beyond that scope. I have been counseled to keep the project in a limited sphere until it is completed and has been viewed by administration.


Notes:

1) Learning Coordinator is a position formerly referred to as Department Head.
Learning Leader is a position created in 2007 to support the Collegiate Renewal Initiative.
2) Relevance
"Motivation is the reason for being engaged and relevance helps to provide that compelling reason. Learning is often engaging to the extent that students deem the learning as meaningful and interesting to self and world. “Contextualized learning involves being able to see the value and relevance of the skills [and understandings] being learned as opposed to learning that is abstract and divorced from real life (Feuerstein, 1980, Haywood, 1993). Learning is best placed in meaningful contexts that show its inherent utility and capitalizes on students’ interest” (McLean, 2003).

When will I ever use this in the “real world”? This is a perennial student question and thoughtful teachers consider relevance when viewing curriculum and considering instruction. The sentiment is reflected in our instructional planning question: “Why would students give their hearts and heads to this work?” For we know, learning is ultimately the choice of the learner. Still, we also understand that students must take responsibility “for generating interest, if they find the work boring, by finding ways to make it more challenging and worthwhile for themselves” (McLean, 2003). "
Taken from the Saskatoon Public Schools Collegiate Renewal website on October 15, 2009: http://schools.spsd.sk.ca/collegiaterenewal/

Saturday, September 19, 2009

Assignment #2

An Approach to Evaluating ECS Programming for Children with Severe Disabilities



Overview:

The ECS Programming for Children with Severe Disabilities document describes the programming offered by Alberta Education to children with severe/profound disabilities aged 30 months to 6 years. The programming is provided in both community and home-based settings and involves educators, therapists, assistants and families in a blended program. It is regulated and must meet strict standards involving initial diagnosis and application, programming oversight, and time.


Assessment:
In my experience, the issues surrounding the effectiveness of a program often come down to the fit of a bureaucratic configuration to the system it seeks to guide. If a professional bureaucracy is to be effective, rules and regulations need to be understood as relevant, reasonable and acceptable to all parties involved. The quality and delivery of the organization’s professional development is also an important factor. Professional development is often seen as a process of accreditation, rather than a pool of resources. When examining a program’s professional development, we ask whether the stakeholders have the relevant information and support to make the best use of the resources available to them.

When I read the program description, the majority of questions I had were related to the effectiveness of the organizing system to apply an iterative process of exploring the effectiveness and relevance of its policies. My questions were:


Timelines:
What is the expected timeline between a diagnosis, and the child receiving the required additional assessment by qualified personnel, such as a Speech/Language Pathologist, Pediatrician, Chartered Psychologist or Child Psychiatrist?
Does this match the actual time line?
Applying:
Is the process of qualifying for the funding severely hindering the effectiveness of the program? If so, could suggestions on how to expedite the process be gathered during the evaluation?
Is the process of re-applying each year onerous to the point of hindrance and, if so, what suggestions can be presented by stakeholders?
Is the process not thorough enough? Is there a proportion of children in the program who don’t fit it?
Coordination:
How well coordinated are combined programs? Is there a cogent facilitation plan in place to allow for communication between program providers in the case of a combined program?
Professional Development
Are the teachers in charge of developing the Individual Program Plans (IPP) comfortable with their level of training and expertise to be able to create these plans?
Are the other stakeholders comfortable with the level of teacher training and expertise to be able to create these plans?
What resources are available to teachers for building and implementing IPPs?
Are teachers aware of the resources available for the child, and for building and implementing IPPs?
What level of training and resource is being provided to home caregivers to allow programming benefit to extend beyond the home visits?
Regulation:
What constitutes the visit time measurement of 1.5 hours?
Is it 1.5 hours of direct contact with the child, or does that time include documenting the visits, briefing and/or debriefing with the caregiver?
Morale:
What attitudes and opinions do supervising teachers and programming providers have towards their level of responsibility?

Approach:

I would take a two-pronged approach. First, I would engage the stakeholders with a questionnaire that would explore the various groups’ answers to these questions. This would give me a reading of the overall institutional health of the organization.

Often, when measurements of attitudes towards things like authority structures, enrollment management, remuneration and professional development are attempted, they are done using survey rating scales. I disagree with applying them in this case. Asking about degrees of agreement or dissent doesn’t provide the opportunity for stakeholders to provide their input into building a better system, or to lend argument to why beloved systems should remain unchanged. This could be considered a participant-oriented model.

Secondly, I would invite stakeholders to begin the process of creating a logic model. The following image from the University of Wisconsin’s Program Development and Evaluation Center gives an overview of how a program is interpreted through a logic model. The ability of a logic model to provide a common language and clarified, commonly determined outcomes would be of benefit to this organization.




In closing, I would caution against attempting to quantify outcomes in an organization such as this. This form of feedback can be misleading, as the results depend as much on the children and their individual abilities and circumstances as they do on the program and its activities.







Wednesday, September 16, 2009

Step 1: my list of questions (focusing required)

-What is the expected timeline between a diagnosis, and the child receiving the required additional assessment by qualified personnel, such as a Speech/Language Pathologist, Pediatrician, Chartered Psychologist or Child Psychiatrist?

-Does this match the actual time line?

-Is the process of qualifying for the funding severely hindering the effectiveness of the program? If so, could suggestions on how to expedite the process be gathered during the evaluation?

-Is the process of re-applying each year onerous to the point of hindrance and, if so, what suggestions can be presented by stakeholders?

-Is the process not thorough enough? Is there a proportion of children in the program who don’t fit it?

-How well coordinated are combined programs? Is there a cogent facilitation plan in place to allow for communication between program providers in the case of a combined program?

-Are the teachers in charge of developing the Individual Program Plans (IPP) comfortable with their level of training and expertise to be able to create these plans?

-Are the other stakeholders comfortable with the level of teacher training and expertise to be able to create these plans?

-What resources are available to teachers for building and implementing IPPs?

-Are teachers aware of the resources available for the child, and for building and implementing IPPs?

-What level of training and resource is being provided to home caregivers to allow programming benefit to extend beyond the home visits?

-What constitutes the visit time measurement of 1.5 hours?

-Is it 1.5 hours of direct contact with the child, or does that time include documenting the visits, briefing and/or debriefing with the caregiver?

-What attitudes and opinions do supervising teachers and programming providers have towards their level of responsibility?

Sunday, September 13, 2009

I'm obviously a little touchy

Looking at my assigned case study, I found a quote whose purpose was obviously to inspire.
Instead it irked me, and I found myself looking back at the case more critically.

"A child can see a painting, but it takes a teacher to unlock the beauty that is contained within it.”

Is it just me, or does that seem a bit dismissive?

Anyhow, I like to take clear note of whenever I find myself being biased. I'll try to put more weight on the notes I took on the first reading, rather than the second pass.

Social Science Research vs. Program Evaluation

I found this little ditty entitled Michael Scriven on the Differences Between Evaluation and Social Science Research

"Evaluation determines the merit, worth, or value of things" while "Social science research does not establish standards or values and then integrate them with factual results to reach evaluative conclusions"

Saturday, September 12, 2009

Assignment #1

THE YOUNG ADULT OFFENDER (YAO) PROGRAM AT SCI-PINE GROVE:
AN EVALUATION OF THE LINK BETWEEN THERAPEUTIC COMMUNITY PARTICIPATION AND SOCIAL COGNITIVE CHANGE AMONG OFFENDERS
Principal Investigator: Ariana Shahinfar, Ph.D.

Let me begin by encouraging anyone who reads this evaluation of a program evaluation to read the document I researched. It’s really, genuinely, interesting. I’m not kidding. My family read it and it started some interesting conversations. You can find it here.

I used Dr. Carter McNamara’s Basic Guide to Program Evaluation, as well as sections of Program Evaluation: An Introduction by David Royse, Bruce A. Thyer, and Deborah K. Padgett, as references in my investigation.

The program being evaluated is a therapeutic community program that is divided into graduated stages preceding an inmate’s release back into society. Dr. Ariana Shahinfar addresses a new indicator of the rehabilitation value of programming in the youth prison system by attempting to measure the changes in an offender’s social cognition as they progress through their programming.

I believe this was a form of outcomes-based evaluation. While this youth detention program is not a traditional charity answering to donors, one could see the taxpayers funding the program, along with the administrators interested in increasing the effectiveness of the programs, as the audience.

The major outcome set out for this evaluation is to answer the following question: does the current therapeutic program at SCI-Pine Grove create lasting behavioral change, or is the behavioral change seen simply an adaptive behavior within the program? The indicators that suggest changes are being made in social cognitive skills were set out in the measures section under the categories of Social Cognition, Community Thinking and Personal Growth. Inmates are involved in two structured interviews during which they are asked questions to measure attitudes, biases and goals. One interview is for intake measurement; the other measures whether growth has occurred over a specified time period. Measurements for this evaluation are strictly structured “tests” and the gathering of observable indicators of behavior (e.g., violent incident reports). Inmates are not asked to comment on their feelings towards the effectiveness of the program.

While the McNamara guide cites failing to include personal interviews as a pitfall to avoid, I think the absence of open-ended interviews in this case was highly appropriate. Even in a situation where collecting an inmate’s “story” was presented as wholly unconnected to the inmate’s assessment and reporting portfolio, I suspect many young offenders in a program such as this would find it difficult to be honest if they felt they were unaffected by a rehabilitation program. It might, however, be useful to attempt a post-release interview with former inmates once this biasing factor has expired.

Upon looking for the "target" goal of this evaluation, I briefly reconsidered its status as an evaluation. I panicked a little, and wondered if this was simply research into social cognition, rather than a requested program evaluation. After digging around the Pennsylvania corrections site (and seeing that the first person mentioned in the report’s acknowledgements is part of a Research and Evaluation department), I was able to accept that the evaluation did not deal in “target” goals, but it did explore the possibility and practicality of being able to measure a target set for a social cognition goal in a later evaluation.

It is the methodology of this evaluation that impressed me most. After reading the project background and the design description in the methods section, I turned over the paper and quickly scribbled down all of the pitfalls I could anticipate. When I resumed reading, I was very impressed that Shahinfar addressed every initial doubt I had. This highlighted for me the importance of stepping back (and/or enlisting a colleague) at various stages to “play devil’s advocate”, as McNamara suggests.

One of the goals of the project was to evaluate whether there was a correlation between increases in social cognitive skill and an inmate’s advancement through the prison’s system of promotion. Shahinfar finds almost no correlation, and what little temporal correlation she does find is suggested to be the natural result of time and maturation. I was disappointed that Shahinfar did not make specific reference to the possible value of the current promotion system; however, she does suggest that applying a similar study to an adult population might help in interpreting future results.

As a final thought, I would like to gain a better understanding of the differences between Program Evaluations and whatever might be mistaken for a Program Evaluation. I think sections of the project I reviewed could be better classified as a study, while other portions seem to clearly serve the role of an evaluative tool. Are there hybrids? Unholy Program Evaluation/Research Study Chimeras? I’ll be back poking around the material Jay has provided and generally skulking around the internet to find out. When I do, I’ll post it.

Saturday, September 5, 2009

Process-based evaluation tweaked for my world

1. On what basis do staff, division, and/or the students decide that they belong at our school?
2. What is required of staff in order to deliver the programs and courses?
3. How are staff trained about how to deliver the programs or courses?
4. How do students come into the program (where are they screened)?
5. What is required of students?
6. How do staff select which programs or courses will be provided to students?
7. What is the general process that students go through in a program or course?
8. What do students consider to be strengths of the program?
9. What do staff consider to be strengths of the program?
10. What typical complaints are heard from staff and/or students?
11. What do staff and/or students recommend to improve the courses or programs?
12. On what basis do staff, division and/or the students decide that the product or services are no longer needed?

Sunday, August 30, 2009

Notes:




"When the cook tastes the soup, that’s formative; when the guests taste the soup, that’s summative."
- Robert Stake
I really like this quote as I often have to explain the difference between formative and summative assessments. It's simple and catchy.

I was also caught by the statement made by Michael Scriven in our class reading:
"we had (a) group trying to convince us that there was no such thing in the real world as summative evaluation; reality, they said, evaluation was essentially always formative. This may well have been true of their experiences, but not of a reality containing Consumer Reports and the results of jury trials and appeals." This is an argument I have also had with colleagues around assessment. What is the real value of your final mark, other than to obfuscate where you are on your learning path? I like the idea of programs being evaluated with a formative emphasis.