We had our second SKiP call this afternoon on the topic of writing annual accreditation reports for the ECADA process. Today, there were eight people on the call from South Carolina, North Carolina, Illinois, Arizona, Idaho, and Alaska.
We started off talking about how participants are getting used to using their rubrics and collecting data. It does feel a bit overwhelming at first! Many people on the call talked about the importance of thinking about this as an ongoing process. We want to have everything finished and prepared, yet the process and procedures need to be fluid enough so that we can make changes based on the assessment information we are gathering. I think that really is the key to it all – easier said than done, I know!
There were a couple of folks on the call who use online data collection and storage systems, such as TaskStream and LiveText, which work very well for collecting, analyzing, and storing assessment data.
Others use a simple Excel spreadsheet. For example:
The coordinator provides the spreadsheet template to all instructors, and they enter their data and submit it back to her. The workbook has one page with total grades for each assignment, one page with attendance, a third page with the breakdown of the key assessments, and a fourth page with responses to some questions she asks about their class for that semester. She uses this for NAEYC and regional accreditation. The instructors have become familiar with submitting the data to her each semester; it is part of their routine. The coordinator can then run reports and look at the averages and so forth.
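For readers curious what the coordinator's averaging step looks like in practice, here is a minimal sketch using only Python's standard library. The column names and scores are invented for illustration; they are not from any actual program's data:

```python
import csv
import io
from statistics import mean

# Hypothetical CSV export of the "key assessments" sheet, one row per student.
# Instructor names, assessment names, and scores are all made up.
sample = """instructor,student,planning,observation,documentation
Smith,A,3,4,3
Smith,B,4,4,2
Jones,C,2,3,4
"""

rows = list(csv.DictReader(io.StringIO(sample)))
assessments = ["planning", "observation", "documentation"]

# Average score per key assessment across all submitted rows,
# mirroring the "run reports and look at the averages" step.
averages = {a: mean(int(r[a]) for r in rows) for a in assessments}
print(averages)
```

In a real program the `sample` string would be replaced by reading the submitted spreadsheet files, but the aggregation logic stays the same.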
I will share that I have explored using SurveyMonkey to collect assessment data from multiple instructors. I use the professional account to create surveys that correspond to our key assessment rubrics, and instructors then use a link to the survey to enter their data. Feel free to take a look at this DEMO survey, which illustrates our key assessment on documentation. It is just a demo, so you can interact with it to see what it is like to enter data this way. Below is an example of a chart I generated from the survey results one semester.
People chatted about what they have found in their data so far. One person shared that once they could see what the data were telling them, they were able to add learning opportunities throughout the program to help students build the skills they need to be successful on the key assessments. For example, she noticed that students' planning skills were weak during the practicum semester, so she built in additional learning opportunities around planning earlier in the program to scaffold students through the planning process, and she has since noticed an improvement.
One process that was shared: once the data are collected, a report is sent out to all faculty, who examine the results and then discuss what the data mean to them and how they will use them to make changes.
Question:
One question that was asked had to do with faculty buy-in to the accreditation process, with a particular concern about adjunct faculty. I shared that we have done orientation sessions for our faculty where we invited everyone to attend, and I did a workshop on how to use the key assessments. This was an important step when we were introducing an online data collection system, as many of our adjuncts were uncomfortable learning this new process.
I have also found that partnering with the adjuncts one-on-one has been an effective strategy. Our program is big enough that we decided it was best to develop a faculty partner system. Each full-time faculty member partners with a small group of adjuncts. The partnership is usually based on scheduling, so it is convenient for partners to meet together before or after their classes. This provides a good opportunity to build a learning community that includes full-time and part-time instructors. We also do a lot of outreach to adjuncts to ask their opinions about the rubrics: Do they make sense? Are they helpful? Are there pieces we should change? Do you see connections between what you are doing in class (learning opportunities) and what we are asking for in the key assessments?
Based on those conversations, we made a major change to one key assessment. Initially, we had an assessment that focused on activity planning. After discussing this with our adjuncts, who all work in the field, we learned that what is really needed is for ECE teachers to understand how to critique lesson plans, so they can be a good judge of whether the plans are responsive to developmental, cultural, linguistic, and ability diversity, for example. We changed our whole key assessment and now call it the "Lesson Plan Analysis" rubric. Students analyze lesson plans, and the instructors assess their analysis using the rubric.
Kathy Allen, VP of Collaborations and facilitator of these SKiP sessions, has shared more reflections below on how she is using the assessment report system. This was really generous and I’m grateful to be able to examine how she reports her data.
Here is the document: Examples of data we have collected over the years.
The message below is from Kathy:
The example report found in the link above is organized by standard and broken out by key element for each standard. In this report you can also see which key assessment addresses each key element. If a key element of a standard is addressed more than once in the key assessments, it will appear however many times it is assessed. For example, Key Elements 1a and 1b are both in the Lesson Plan Unit and the Child Case Study.
So this report shows us both how students are doing on the standards and how they are doing on each assessment itself. For example, students are performing better on key element 1b in the Child Case Study (89%) than on 1b in the Lesson Plan Unit (83%). If this were a significant difference, we would take a look at the Lesson Plan Unit and how we could give students more opportunities to learn and practice 1b: Knowing and understanding the multiple influences on development and learning.
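The comparison described above, the same key element scored in two different assessments, can be sketched as a small lookup. The two 1b percentages come from the example in this report; the data structure itself is just one hypothetical way to hold the numbers:

```python
# Percent of students meeting the target, keyed by (key_element, assessment).
# The 1b figures are from the report discussed above; the shape is illustrative.
results = {
    ("1b", "Child Case Study"): 89,
    ("1b", "Lesson Plan Unit"): 83,
}

def spread(results, key_element):
    """Return the gap between the best and worst showing of one key
    element across the assessments that address it, plus the scores."""
    scores = {a: pct for (ke, a), pct in results.items() if ke == key_element}
    return max(scores.values()) - min(scores.values()), scores

gap, by_assessment = spread(results, "1b")
print(gap, by_assessment)
```

A large gap points at the assessment (or the learning opportunities leading up to it) worth revisiting, which is exactly the judgment call described in the paragraph above.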
What jumps out at me when looking at this data:
Standard 3b: 78% – This is low, and it's also assessed only once across the five key assessments.
This means we need to discuss as a faculty what we are going to do to provide students more learning opportunities to practice knowing about and using observation, documentation, and other appropriate assessment tools.
We are in the process of switching all our assessments over to the six standards, so when we revise the key assessments we will include at least one more opportunity to assess 3b, along with looking at our learning opportunities chart to see how we can provide more practice.
Also, as we look at revising the key assessments in our program, our goal is to have each key element and supportive skill assessed more than once across all assessments. You can see that that's not the case right now, so it's always a work in progress!
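The coverage check behind that goal, does every key element appear in more than one assessment, is easy to sketch. The mapping below is hypothetical except for the details mentioned above (1a and 1b appear in both the Lesson Plan Unit and the Child Case Study, while 3b is assessed only once):

```python
from collections import Counter

# Hypothetical map from each key assessment to the key elements it addresses;
# a real program would list all five assessments and every key element.
coverage = {
    "Lesson Plan Unit": ["1a", "1b", "3b"],
    "Child Case Study": ["1a", "1b"],
}

# Count how many assessments address each key element.
counts = Counter(ke for elements in coverage.values() for ke in elements)

# Key elements assessed only once are candidates for another assessment point.
assessed_once = sorted(ke for ke, n in counts.items() if n == 1)
print(assessed_once)  # -> ['3b']
```

Re-running a check like this after each revision cycle shows at a glance whether the "more than once" goal has been reached.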
Comments? Questions?