Archive for March, 2012

Report Writing Day Two

We had our second SKiP call this afternoon on the topic of writing annual accreditation reports for the ECADA process. Today, there were eight people on the call from South Carolina, North Carolina, Illinois, Arizona, Idaho, and Alaska.

We started off talking about how participants are getting used to using their rubrics and collecting data. It does feel a bit overwhelming at first! Many people on the call talked about the importance of thinking about this as an ongoing process. We want to have everything finished and prepared, yet the process and procedures need to be fluid enough so that we can make changes based on the assessment information we are gathering. I think that really is the key to it all – easier said than done, I know!

There were a couple of folks on the call who use online data collection and storage systems such as TaskStream and LiveText, which work very well for collecting, analyzing, and storing assessment data.

Others use a simple Excel spreadsheet. For example:

The coordinator provides the spreadsheet template to all instructors, and they enter the data and submit it back to her. The workbook has four pages: one with total grades for each assignment, one with attendance, a third with the breakdown of the key assessments, and a fourth with answers to some questions she asks about their class for that semester. She uses this for NAEYC and regional accreditation. The instructors have become familiar with submitting the data to her each semester as part of their routine, and the coordinator can then run reports and look at the averages and so forth.
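For anyone curious what that aggregation step could look like in code, here is a minimal sketch of combining the submitted workbooks and averaging the key assessment scores. The folder name, sheet name, and column headings are placeholders of my own invention, not the actual template.

```python
# A minimal sketch: combine instructor workbooks and average key
# assessment scores. The folder name, sheet name, and column headings
# ("key_assessment", "score") are hypothetical placeholders.
from pathlib import Path

import pandas as pd

frames = []
for workbook in Path("submitted_spreadsheets").glob("*.xlsx"):
    # Assumes each instructor's workbook has a "Key Assessments" sheet
    # with one row per student.
    sheet = pd.read_excel(workbook, sheet_name="Key Assessments")
    sheet["instructor"] = workbook.stem
    frames.append(sheet)

all_data = pd.concat(frames, ignore_index=True)

# Average score on each key assessment across all sections.
print(all_data.groupby("key_assessment")["score"].mean().round(1))
```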

I will share that I have explored using SurveyMonkey to collect assessment data from multiple instructors. I use the professional account to create surveys that correspond to our key assessment rubrics, and then instructors use a link to the survey to enter their data. Take a look at this DEMO survey, which illustrates our key assessment on documentation. It is just a demo, so feel free to interact with it to see what it is like to enter data this way. Below is an example of a chart I generated from the survey results one semester.
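If you would rather build a similar chart yourself from an exported response file, here is a rough sketch. The CSV name, the column name, and the rubric levels are assumptions for illustration, not the structure of an actual SurveyMonkey export.

```python
# A rough sketch of charting exported survey responses. The file name,
# column name, and rubric levels below are hypothetical.
import matplotlib.pyplot as plt
import pandas as pd

responses = pd.read_csv("documentation_assessment_export.csv")

# Count how many students were rated at each rubric level.
levels = ["Does Not Meet", "Emerging", "Meets", "Exceeds"]
counts = (
    responses["documentation_rating"]
    .value_counts()
    .reindex(levels, fill_value=0)
)

counts.plot(kind="bar", rot=0, title="Key Assessment: Documentation")
plt.ylabel("Number of students")
plt.tight_layout()
plt.show()
```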

People chatted about what they have found in their data so far. One person shared that once they could see what the data were telling them, they were able to add learning opportunities throughout the program to help students build the skills they need to be successful with the key assessments. For example, someone noticed that students’ planning skills were weak during the practicum semester. She decided to build in additional learning opportunities around planning earlier in the program to scaffold students through the planning process, and she has noticed an improvement.

One process that was shared: once the data are collected, a report is sent out to all faculty, who examine the results and then discuss what the results mean to them and how they will use the data to make changes.

Question:

One question that was asked had to do with faculty buy-in to the accreditation process, with a particular concern about adjunct faculty. I shared that we have held orientation sessions for our faculty where everyone was invited to attend and I led a workshop on how to use the key assessments. This was an important step when we were introducing an online data collection system, as many of our adjuncts were uncomfortable learning this new process.

I have also found that partnering with the adjuncts one-on-one has been an effective strategy. Our program is big enough that we decided it was best to develop a faculty partner system. Each full-time faculty member partners with a small group of adjuncts. The partnership is usually based on scheduling so it is convenient for partners to meet before or after their classes. This provides a good opportunity to build a learning community that includes full-time and part-time instructors. We also do a lot of outreach to adjuncts to ask their opinions about the rubrics: Do they make sense? Are they helpful? Are there pieces we should change? Do you see connections between what you are doing in class (learning opportunities) and what we are asking for in the key assessments?

Based on those conversations, we made a major change to one key assessment. Initially, we had an assessment that focused on activity planning. After discussing this with our adjuncts, who all work in the field, we learned that what is really needed is for ECE teachers to understand how to critique lesson plans so they can be good judges of whether or not the plans respond to developmental, cultural, linguistic, and ability diversity, for example. We changed the whole key assessment and now call it the “Lesson Plan Analysis” rubric. Students analyze lesson plans, and the instructors assess their analysis using the rubric.

Kathy Allen, VP of Collaborations and facilitator of these SKiP sessions, has shared more reflections below on how she is using the assessment report system. This was really generous and I’m grateful to be able to examine how she reports her data.

Here is the document: Examples of data we have collected over the years.

The message below is from Kathy:

The example report found in the link above is organized by standard and broken out by key element for each standard. In this report you can also see which key assessment addresses each key element. If a key element of a standard is addressed more than once across the key assessments, it will appear however many times it is assessed. For example, Key Elements 1a and 1b are both in the Lesson Plan Unit and the Child Case Study.

So this report shows us both how students are doing on the standards and how they are doing on each assessment itself. For example, students are performing better on key element 1b in the Child Case Study (89%) than they are on 1b in the Lesson Plan Unit (83%). If this were a significant difference, we would take a look at the Lesson Plan Unit and how we could give students more opportunities to learn and practice 1b: knowing and understanding the multiple influences on development and learning.

What jumps out at me when looking at this data:

Standard 3b: 78% – This is low, and it’s also only assessed one time across the five key assessments.

This means we need to discuss as a faculty what we are going to do to provide students with more learning opportunities to practice knowing about and using observation, documentation, and other appropriate assessment tools.

We are in the process of switching all our assessments over to the six standards, so when we revise the key assessments we will include at least one other opportunity to assess 3b, along with looking at our learning opportunities chart to see how we can provide more practice.

Also, as we look at revising the key assessments in our program, our goal is to have each key element and supportive skill assessed more than once across all assessments. You can see that that’s not the case right now, so it’s always a work in progress!
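To make the kind of cross-tabulation Kathy describes concrete, here is a small sketch of tallying average performance by key element and key assessment and flagging key elements that are assessed only once. The records are made-up illustrations (apart from the percentages quoted above), and the column names are my own, not the structure of her actual report.

```python
# A small sketch of the cross-tabulation described above: average score
# per key element per key assessment, plus a flag for key elements that
# appear in only one assessment. The records are illustrative only.
import pandas as pd

records = [
    {"key_element": "1a", "assessment": "Lesson Plan Unit", "pct": 86},
    {"key_element": "1a", "assessment": "Child Case Study", "pct": 88},
    {"key_element": "1b", "assessment": "Lesson Plan Unit", "pct": 83},
    {"key_element": "1b", "assessment": "Child Case Study", "pct": 89},
    {"key_element": "3b", "assessment": "Child Case Study", "pct": 78},
]
df = pd.DataFrame(records)

# How students are doing on each key element, by assessment.
print(df.pivot_table(index="key_element", columns="assessment", values="pct"))

# Key elements assessed in only one of the key assessments.
coverage = df.groupby("key_element")["assessment"].nunique()
print("Assessed only once:", list(coverage[coverage == 1].index))
```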

Comments? Questions?

Report Writing

I am going to try something new and blog during the SKiP call. I’m not sure if this level of multi-tasking is a good idea for me or not, but I’m willing to give it a try! Today we are meeting together to talk about the process of writing the annual accreditation reports for NAEYC/ECADA. I hope it will be a useful chat session. I have been thinking all morning about this time of year and how difficult it can be to manage all of the deadlines that seem to converge during the spring semester. I know that some folks write their annual reports in the fall and I’m sure that is a very busy time too. In my life, spring is when everything seems to be due – grant reports, budgets, schedules, etc. All projects that have been going on for the academic year must come to some kind of close and there is a saying on my campus that if it doesn’t get done in April, it won’t get done until the fall semester. Maybe that is why there is so much pressure – we need to complete things before we adjourn for the summer session.

Luckily, the annual accreditation report is not that bad… really. I have found that as long as we are collecting assessment data as we should, writing the report is fairly easy. The only trick is making sure that I have enough time to do a good job with the writing process.

Today, we are meeting and there are about seven people on the call representing several different states, including North Carolina, Illinois, Michigan, and Alaska. There is a wide range in the programs represented and their schedules for self-study and annual report writing. Some of us on the call have been writing annual reports for several years, and some folks are really just starting the self-study process. It is interesting to have a discussion across this range of experience with the ECADA process.

Kathy Allen is facilitating the call and she is anchoring the discussion to the Accreditation Handbook, which is located in the ICOHERE online community for ECADA participants. This is really helpful as it makes the report seem very “doable”.

There is a specific question about the timeline for transitioning to the revised standards. Kathy has directed folks to the timeline listed on the NAEYC website, “Transition to New NAEYC Standards”. Please take a look at this for your reference.

We are now discussing the question of collecting data and thinking about how to interpret different results coming from two different sections of a course. In this case, one section is offered online and the other is offered in a face-to-face format. One wonders whether the format contributes to the difference or whether it has more to do with inter-rater reliability, with instructors using the rubric differently. This is an opportunity for collaboration among faculty. What is the learning opportunity for students to be able to demonstrate this outcome? How much is this activity weighted? Will that weight influence students and their motivation to do their best work on that particular activity? Good questions to think about!

We are now starting to talk about various issues, such as when a student fails or drops a course – do the data from his/her work still “count”? In other words, are those data included in the annual report? Most folks on the call suggested that, given the way the data are collected, all data are used in the report. Does this skew the data in any way?

The final note we ended on was a reminder that the self-study and ongoing accreditation process is a strengths-based process. This is important for all of us to remember.

Kathy shared “Examples of data we have collected over the years”, which is a terrific document, as it gives us a good example of how one can report on key assessment data.

It was fun to hear various voices on the call and I think the discussion was helpful to all who participated. I look forward to tomorrow’s call!

I hope you can join us!


