Please join us this afternoon for the third SKiP call of the year.
Wednesday April 6, 2011
Below is my contribution to the conversation. I’m sharing my personal views based on my own rubric-designing experiences and my observations of others as they have gone through the process. For up-to-date information on the official ECADA self-study process, please contact NAEYC directly at firstname.lastname@example.org
What is it about the word “rubric” that creates such varied responses: shivers, forced smiles, sighs of frustration, groans of despair, giggles of amusement, and every other emotional reaction? I think I’ve done all of those at one point or another. It’s interesting, isn’t it?
It’s hard work to design a tool that is truly useful in assessing student learning. The good news is that rubrics really should be living things. We work hard to design them, use them to collect data, reflect on the results, consider whether the tool is actually assessing what we wanted to assess, revise the rubric accordingly, and start all over again. Maybe that’s where the groans come from. It’s almost easier to work really hard to get a rubric written and then forget about it! But that would be missing the boat entirely.
In terms of the ECADA process, the most important part of developing the five key assessments is creating tools that assess our students’ learning and application of all of the NAEYC standards, their key elements, and the supportive skills. In many ways, this is what makes it challenging. I think most of us have some experience designing a particular assignment and then creating some sort of guideline for grading it (a checklist, a rubric, a list of main ideas, etc.). Are the key assessments different?
I think we can draw on what we know to be best grading practices when we design key assessments, but I believe we have to look at a key assessment slightly differently than a graded assignment. The main difference is that in an assignment, I am looking at many different things that may or may not be related to the standards (did the student meet the deadline, is it typed and submitted correctly, does it include the sections outlined in the assignment description, and so on). With the key assessment, the main goal is to assess the student’s application of the standard, so it is more targeted.
If we start with the outcome in mind, designing the rubric becomes a little easier. What do you want to know? What would a successful student outcome (paper, presentation, group project, documentation panel, lesson plan, case study, etc.) look like? How would that product demonstrate the student’s application of the standard? That’s the key. When we look at a paper, how does the student have the chance to demonstrate his/her understanding and application of the standards within that product? When we look at a presentation, how does the student have the chance to demonstrate his/her understanding and application of the standards within that product? And so on.
For myself, when I first started working on our key assessments, I found it difficult to step away from grading the specific assignment and focus on assessing student learning of the standard. This took time. It helped once we had mapped out the learning opportunities and taken a good look at what the main assignments were across the program. When we looked at the learning opportunities chart, it became clear that we assign a lot of observations. That was a major theme for us when we reviewed what everyone was assigning across multiple sections of the ten core courses. We knew that observation was an important skill to us as a program, and we could see that observation was also represented in the NAEYC standards. That seemed a good place to develop a key assessment.
Once we knew we wanted to develop an observation rubric, we took a closer look at the standards, key elements, and supportive skills to see how they could be applied within an observation assignment. What were we already looking for? How did that relate to each element of the standard? Should we include measurement of supportive skills such as writing?
We created a table and started with a column for the things we wanted to see in our students’ observations (descriptive writing, objective language, etc.). We then compared that column against the standards: what related to what? Finally, we lined up the elements of the standard that most closely matched what we were already looking for in an observation.
After the SKiP call today, I will add more about the process we went through in designing the observation rubric. I’d be very glad to get your feedback on that process and on the current iteration of our rubric, which is in revision right now to incorporate the updated 2010 standards!
to be continued…