I’m a literacy consultant who works with seven districts.
How do I know if I’m being effective?
Doing a good job?
Doing what really works?
I have to start with the original . . . Clint Eastwood . . . same birth year as my dad who always kept me grounded!
A Short Story
I’ve been traveling a lot over the last two weeks. Over three thousand miles: a trip to Kentucky for an adorable grandson’s second birthday, then on to Florida with Mom and an aunt and an uncle (one of my mom’s younger brothers) for a nephew’s high school graduation, and then back to Kentucky for some more time with the kids.
Was the trip successful?
Four possible data points might be these:
- The number of miles driven successfully. That is important because it was my first out-of-state road trip with my new car, and then many miles driving a Ford F-150, which is about three times the size of my car.
What might constitute a success? No flashing red or blue lights and no major problems. The number of palindromes I noticed on my odometer, particularly the one as I traversed the Missouri River bridge in St. Louis.
What data would not point to a success? Uncle Leo might say it was the number of times I drove over a curb.
- The number of times my GPS and Aunt Shirley’s Google Maps agreed. Less successful might be our decisions about which to follow when there was a disagreement.
Success? Google Maps was definitely more up to date than GPS.
Not a Success? The “shortest” trip was NOT always the ideal route to take.
- The number of card games played.
Success? The variety from hand and foot to pepper.
Not a Success? The number of 9’s and 10’s I had in EVERY pepper hand!
- The variety of experiences and places we went.
Success? Wading in the Atlantic, time with so many precious relatives, driving to the top of Lookout Mountain in Georgia, the flea market, a little homemade wine, the food, the movies, and stories after stories.
Not a Success? Not driving back down Lookout Mountain (remember, not my vehicle!).
Do you notice a possible pattern?
Each data point seems to have more than one side!
If you had to sort these data points, could you find some summative as well as formative measures?
So back to the beginning . . .
I’m a literacy consultant who works with seven districts. How do I know if I’m being effective? Doing a good job? Doing what really works?
We collect a lot of data. We spend a lot of time with data. We spend a lot of time talking about data. But do we EVER really address these questions? Or does each question have multiple data points similar to those listed above? This post is the result of many miles of driving and a push from Elizabeth Moore at Two Writing Teachers when she wrote this post last week, “Literacy Coaches: How do you assess your impact?” Beth talks about using goals, student-centered data, survey data, and quantitative data in her post.
I have a ton of quantitative data to share. At our agency we have had team Wildly Important Goals (WIGs) for two years focusing on our K-3 readers and using screener data to determine the effectiveness of our goals. I also like to use them as a beginning point when I reflect on my own effectiveness, although they are only a small portion of my K-12 job.
Here’s my data for four different types of my work in buildings by each month.
- PDC = Professional Development in Core Literacy Instruction K-3
- OCC = Observation/Coaching in Core Literacy Instruction Implementation K-3
- PDI = Professional Development in Research-Based Interventions K-3
- OCI = Observation/Coaching in Research-Based Intervention Implementation K-3
The green boxes show that I met my goals, which are also outlined below:
- I met all four of my goals in December and in February.
- I met my monthly goals 21 times.
- I met my Observation/Coaching Intervention goal in December (after 5 months).
- I met my PD Core and Observation/Coaching goals in January (after 6 months).
- I met my total goal in January (after 6 months).
And to make me feel better . . .
- My annual total for PDI was 94% so it was close.
- Average percentage of goals met is 96.8%.
- The total number of interactions was well above the annual goal, just in a different distribution (146% of the goal).
I missed my monthly goal 19 times. (19/40)
I met either one or zero monthly goals in August, March, April, and May. (4 months/10)
There were zeros in four categories across the 10 months. (4/40)
I did not meet my PDI annual goal. (141/150)
Ugly: The hard reality of the data
August was not required for data collection but because it was almost a full month of work I decided to include the data.
I can offer excuses for the spring – horrific sudden death of my nephew and his wife in March and then my brother at the end of April, but the fact is that I only missed one PD session during either of those times – so excuses don’t change the data.
And if you would like to see the data in a larger format – Data Here
PART TWO – How did students do on the screener administered in the fall, winter, and spring?
Data is reported in terms of green boxes for buildings by grade level if 80% or more of the students met the benchmarks set by the state. (Red if fewer than 60% of the students met the benchmark criteria.) Districts can choose from several approved screeners, but the state of Iowa only pays for one.
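For readers who like to see a rule spelled out, the color coding above is just a pair of thresholds. Here is a minimal, hypothetical sketch; the function name and the sample percentages are mine, not the state’s, and the middle band (60% up to 80%) simply gets no flagged color:

```python
def benchmark_color(percent_meeting: float) -> str:
    """Classify a grade-level screener result by the reporting thresholds:
    green if 80% or more of students met benchmark, red if fewer than 60%."""
    if percent_meeting >= 80:
        return "green"
    if percent_meeting < 60:
        return "red"
    return "none"  # between 60% and 80%: neither green nor red

# Made-up grade-level percentages, purely for illustration
results = {"K": 82.0, "1st": 71.5, "2nd": 58.0, "3rd": 90.0}
colors = {grade: benchmark_color(p) for grade, p in results.items()}
print(colors)  # {'K': 'green', '1st': 'none', '2nd': 'red', '3rd': 'green'}
```

Seeing the thresholds this way also makes the question below sharper: a building sitting at 79% and one sitting at 61% look identical in this scheme, which is one reason a single color-coded snapshot should never be the only data source.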
- The total number of grades meeting benchmark by 80% or more by building increased from 7 in fall to 8 in winter with changing criteria.
- The number of grades meeting benchmark criteria by 80% or more (green) building increased for kindergarten from 2 in fall to 4 in winter.
- The number of first and third grades remained the same from fall to winter (3 for first, 1 for third).
- The number of grades below 60% benchmark criteria decreased from 8 in fall to 3 in winter.
- The number of grades below 60% benchmark criteria decreased from 8 in fall to 4 in the spring.
Grades 1 and 3 did not have any buildings meeting 80% benchmark criteria in the spring; kindergarten and second grade had 2 and 1, respectively.
The spring green (80% benchmark criteria) was the lowest of the three reporting periods.
The 8 grade levels by building meeting 80% benchmark criteria in the winter dropped to 3 for the spring.
The 3 grade levels by building below 60% benchmark criteria at winter increased to 4 in the spring.
What questions arise?
How does this data compare to state-wide Iowa totals?
Which specific buildings have multiple levels of green? or red?
What is working? What is not working?
Is more practice needed across the day (distributed practice)?
Are discrete skills transferring to reading passages?
What about fidelity of implementation? What does that data reveal?
Did we over-rely on our winter successes that did NOT appear to transfer to spring benchmarks?
Brave = sharing this data publicly.
It’s not all roses and sunshine. What works in one building doesn’t necessarily transfer to what works in another building.
Is all data equal?
- How many students made growth?
- How many students made significant growth?
- How many teachers changed instruction based on the data?
- How many teachers changed interventions based on the data?
- What if the summative data (Iowa Assessments) shows a different picture of these same students?
- How many students have reading goals for the summer?
- How many students love reading?
- How many students read at school by choice?
- How many students read at home by choice?
- How many students can name their favorite books?
- How many students can name their favorite authors?
- How many students can name their favorite illustrators?
- And how do the students REALLY feel about school?
What data is missing from this snapshot?
Another short story
I am in total grandmother heaven. He meets me at the door, takes my hand, leads me into the living room, and tells me what to do/play/where to sit. “Gramma play.” “Gramma here.” “Gramma ice cream.” “Gramma choo choo.” “Gramma dinosaur train.” I can’t even begin to count the number of times that I heard, “Where Gramma go?” during the last two weeks. I count that as a success. To disappear into another room and to be missed makes my heart melt!
Those are all data points that convince me that I’m doing a GREAT job as a grandma. Are they numbers? Are there specific criteria or cut points?
What data points match your school values and core instructional principles? When do you need to make sure that you are triangulating data and not over relying on any one source?
If I had only shown you fall and spring student screener data, you would not have seen the growth that doesn’t seem to have been sustained. That’s why my #OLW “BRAVE” is a part of this post. This is our third year with this process. Because the cut points for benchmarks change annually, we can’t compare each grade level year after year, but we can look at trend data to see whether grade levels of students continue to grow as they move up through the grades.
How are you reflecting on successes? The good? The bad? The ugly?
AND who are you reflecting with?
#DigiLit Sunday: Planning Process
Margaret Simon has invited us to blog about planning for the new school year today for DigiLit Sunday. You can read more posts here at Reflections on the Teche.
Planning has been on my mind lately and actually has been my blog topic the last two posts here and here.
Where to begin?
With my #OLW – JOYFUL!
What’s my end goal? (Backward Design)
Joyful Learning for all!
How will I achieve my end goal?
Careful appraisal of my current status,
A plan to integrate my learning from this summer,
Plan, plan, plan,
Short-term targets, and
Long-range goals!
Bricks = Reading, Writing, Speaking, Listening
Mortar = Mindset, “YET”, Brave, JOYFUL
Filling the inside with all that I know . . .
Determining Priorities Based on Data . . .
and then continuing to collaboratively increase my knowledge with my colleagues who blog, tweet and vox about literacy, learning, passion, joy, leadership and fun for students!
Sound simple? “The proof will be in the pudding . . .”
I will be doing monthly check-ins on my plan.
Approximately 200 days to fruition.
How will you know if your plan is working?
I’m borrowing this MLK quote from a PD session led by Justin (@jdolci). . .
March is finished. I have studied my #sol14 data. 31 days of writing charted. Goals reviewed. New goals considered.
Write for the weekly “slice”?
Time for a break.
Done with “Slicing” for a bit.
Time to get caught up with housework, laundry, cleaning . . .
What? Slicing Again?
It’s Tuesday. Slice of Life – regular schedule! Once a week!
I loved the routine of writing daily. I did worry about tasks left hanging while I “sliced” daily. Just how far behind did I get?
It doesn’t matter because I need to share this story with you. No, I have to share this story with you! I really want to share this story with YOU!
Yesterday was the seventh and final day of our standards-based grading sessions (K, 1, 2, 3, 4, 5, 6). It was a smaller group because sixth grade is middle school for many of our districts, but we still had sixth grade teachers from nine districts working collaboratively to deepen their understanding of the Iowa Core ELA Standards. Our purposes for the day were
Today, I can:
- Increase my knowledge of standards-based reporting
- Increase my skill at determining standards-based proficiency of a writing piece
- Locate quality sources for instruction and assessment for grade 6 ELA standards to increase student learning
- Begin to plan for communication processes for this continued work
New Learning # 1
Our new literacy specialist shared this with me, posted it on our working site and then shared it with the entire group:
This icon (comfy red chair) is then placed on your toolbar and is readily available to turn any “article” on the web into a “better” print version that can be enlarged or even shrunk to make it fit the “reader’s needs!”
New Learning # 2 (also shared by our new literacy specialist):
I love learning! Yesterday’s celebration: learning about readability, learning about pureview, learning from “participants”
It was a fantastic day because I was ALSO a learner! That’s the best part of professional development!
And thanks for feeling comfortable enough to share so everyone could learn, including me!
Orchestrating Writing Assessments
How do you know that your students are effective communicators?
Do you measure communication? Do you use writing assessments for that purpose? If so, what are those writing assessments? How do you know that your students have made growth in writing?
Those questions and their answers have been responsible for district-wide writing assessments for over ten years in a local district. Currently, narrative writing at third grade, and persuasive letters at eighth and tenth grades, are assessed with a Six Traits rubric.
The work in this district = 900+ student papers that are all read by at least two scorers: teachers, administrators, university students, community members, retired teachers, and AEA staff. Over three days, approximately 100 scorers (30-35 each day) are greeted by the superintendent of schools for a welcome that includes history, data and purpose for the assessment. Professional development includes increasing knowledge of effective writing instruction, the writing process and the Six Traits before the group begins to look at the rubric and anchor papers. Each day the scorers must calibrate because that unique group has never been convened before. “What qualities of the rubric did the NWREL scorers see?” dominates the conversations. Confidence in the use of the rubric and identifying the traits increases with practice and even a “happy dance” may occur as participants match the NWREL anchor scores. And then (drum roll, please) the scoring begins . . .
The goal: adjacent scores. What does that mean? If Joe scores a trait a 3 and Suzie scores a 4, that is adjacent. If that “adjacency” has occurred for all six traits, the scoring for that paper is over. But if Joe scores a 3 and Suzie scores a 5, the paper will be reread by a third reader for that trait (or traits). If the third reader does not agree exactly with Joe’s 3 or Suzie’s 5, and believes that trait is a 4, the three readers will conference with the paper and the rubric and discuss their thinking. Imagine, teachers and others, spending time talking about student writing because students are counting on feedback about their writing!
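For anyone who thinks in code, the “adjacent scores” rule above boils down to a one-line check per trait: two scores agree well enough if they differ by at most one point. This is just an illustrative sketch of that rule, not the district’s actual scoring system, and the function name is mine:

```python
def needs_third_reader(score_a: int, score_b: int) -> bool:
    """Two trait scores are 'adjacent' if they differ by at most 1.
    A gap of 2 or more sends that trait to a third reader."""
    return abs(score_a - score_b) > 1

# Joe scores a trait 3, Suzie scores 4: adjacent, scoring for that trait is done.
print(needs_third_reader(3, 4))  # False

# Joe scores 3, Suzie scores 5: not adjacent, a third reader rereads the trait.
print(needs_third_reader(3, 5))  # True
```

Note that the code only captures the first step; as described above, when the third reader lands between the two original scores (Joe’s 3, Suzie’s 5, third reader’s 4), the resolution is a human conference over the paper and the rubric, not another calculation.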
Wow! Annual scoring of student writing at three grades. Sound easy?
The support staff prep and post work for scoring writing from three grades of students is phenomenal. The “behind the scenes” orchestration involves year-round work! Winter scoring “work” begins in September when the packets with prompts, draft writing paper, and final copy writing paper are assembled for each classroom at grades 3, 8, and 10. Maintaining the anonymity of students and teachers involves the use of codes. Recruiting scorers begins. Later, students write and teachers return papers to central office, where packets of five papers are assembled with a quick scan of every page of student writing to remove any possible student identification. These packets are readied for the scorers. Reminders to scorers about plans in the face of adverse winter weather, ordering food and snacks for scoring days, and packing up all the materials are just a few of the tasks that precede the scoring.
During scoring days, basic work schedules for key support staff members are put on hold. Checking scorer registration, last-minute substitutes, and phone calls to absent scorers are just a few of the early morning tasks after the materials for the day are set out. Checking the details for the next day also encompasses some of the morning. With luck, there is some office time before lunch. But the entire afternoon is dedicated to routing scoring packets to readers, collecting and matching score sheets, recording final scores, noting third-reader or conference needs, and meeting the needs of scorers. Busy, busy days!
But the work is not yet done! After the scoring days, data entry (six scores for every student paper) becomes the next task. Scores are compiled for district, building, grade level (dept.) and teacher totals. Papers are returned to teachers for February parent-teacher conferences. Notes are made about the work and filed in preparation for the next round. And then the process of scheduling for the next year begins.
I play a small part in this work, as co-facilitator for the scoring days. I am always amazed by the enthusiasm and dedication of the scorers who are now on the main stage. They are conscientious about “getting the work done right” and are also eager to learn. Classroom teachers and administrators often find a gem to add to their instructional repertoire. Many first-time scorers are anxious about this responsibility. Other scorers have literally scored for at least ten years. It might be easy for them to become blasé about their task, but they remain committed to finding a common language to describe the qualities of writing that they see!
Congratulations on a scoring job well done!
Good luck with continued instruction!
Will there be changes in the future? Sure! With implementation of the Iowa Core, assessments will inevitably change. Will SBAC be used to assess writing? Will there be a different writing assessment? A planful decision will be made as more information becomes available!
Are you assessing writing? Do you have experience with district-wide writing conversations? What is/ was your role? I would love to hear about your experiences!