Do they? Do they not?
Yesterday’s post was a poem about drivers using their turn signals (here).
Data time. Do note that this was raw data collected while driving. Not verified. Perhaps not 100% accurate. Tally marks on the back of an envelope.
Who was the best/worst at using their turn signals when driving?
Other car drivers?
Me – who was complaining?
Your predictions before reading further . . .
And the envelope of data that has been kept on Funk and Wagnall’s front porch . . . hermetically sealed . . .
Truck Drivers Changing Lanes
Car Drivers Changing Lanes
Me – Driving and Changing Lanes
21 / 23 times = 91.3%
Who was the best on Friday at using turn signals during this two hour sample?
Not I, but the truck drivers. I would be remiss if I didn’t point out that theirs was a smaller sample. And I was surprised: based on recent drives when it seemed that no one used turn signals, I was expecting other car drivers to be at 2% or less. Are tally marks the best recording device? Perhaps not, but they can confirm or eliminate an initial hypothesis . . . and lead one to a different question or data collection method!
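For anyone who wants to move the envelope math into a spreadsheet or script, the tally-to-percentage step is just one division. A minimal sketch, using the one tally recorded in the post (21 signaled lane changes out of 23 observed):

```python
def signal_rate(signaled: int, total: int) -> float:
    """Percent of observed lane changes where a turn signal was used."""
    if total == 0:
        raise ValueError("no lane changes observed")
    return round(100 * signaled / total, 1)

# The tally from the envelope: 21 of 23 lane changes were signaled.
print(signal_rate(21, 23))  # 91.3
```

The same function works for any of the three driver groups once their tallies are filled in.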
What data are you collecting today?
How will you collect it?
How will you display it?
Thank you, Betsy, Beth, Deb, Kathleen, Lanny, Melanie, and Stacey for this daily forum each March. Check out the writers, readers and teachers here.
So the data is in; now what?
Progress Monitoring and Intervention requirements are set by the system.
But how to focus?
What do students REALLY need?
What questions will help the teachers move forward?
How can we organize the data to use it?
Here is my thinking.
We have all this data from the screener used three times a year.
Step One: What if I put student names into the boxes so I can “see” which students did and did not meet the benchmark criteria? I plan to also record each score after the name so I can see the students who just made the benchmark and those who maxed out that part, and, similarly, the students who just missed the benchmark and those who are farther from the targets.
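The Step One sort could also be done in a few lines of code: split the class around the benchmark, then order each group so the “just made it” and “just missed it” students sit next to the line. A sketch with invented names and scores (only the benchmark of 15 echoes the Winter Nonsense Words target above):

```python
# Step One as code: names with scores, sorted around a benchmark.
# All names and scores are invented for illustration.
benchmark = 15  # e.g., the Winter Nonsense Words target

scores = {"Ava": 15, "Ben": 31, "Cal": 14, "Dee": 22, "Eli": 3}

# Met the benchmark, lowest score first (the "just made it" students lead).
met = sorted((s, n) for n, s in scores.items() if s >= benchmark)
# Missed the benchmark, highest score first (the "just missed it" students lead).
missed = sorted(((s, n) for n, s in scores.items() if s < benchmark), reverse=True)

print("Met benchmark:", [f"{n} ({s})" for s, n in met])
print("Missed benchmark:", [f"{n} ({s})" for s, n in missed])
```

Reading the two lists from the top puts the borderline students, the ones most worth a second look, first.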
Correction to Chart Above – Nonsense Words – Fall = 9, Winter = 15, Spring = 20
Step Two: So what?
Should I use “Messy Sheets” to triangulate the data and look for patterns? You can learn about “messy sheets” in the preview of Clare and Tammy’s Assessment in Perspective available here or in my post here.
Because this was a screener, there is no additional information about student performance/miscues.
What if we begin by looking at just the Sight Words subtest?
(Thinking about the fact that sight words, AKA snap words or heart words, drain time and brain power when a student has to stop and attempt to sound out “said” on every page of the book.)
What if we provide some instruction and begin to look for patterns in response to instruction?
Which students are successful?
Which students are on target for the end of the year goals?
Does EVERYONE in the class need some work with sight words?
ONE way to sort this out might be to begin with the whole class.
Hmm . . . This adds more detail and now I am considering more than “red, green” and “does or does not meet the benchmark”.
But is this more helpful?
What do the students in the group scoring from 0-10 on sight words need?
Is it the same as those students in the 11-20 group?
Is there a difference in intensity for the interventions? Frequency? Total time? What will really get the students on a trajectory to close the gap?
How do ALL students get what they need in order to continue making progress?
Are there some commonalities that ALL students may need?
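One way to sketch the 0-10 / 11-20 grouping behind those questions, again with invented names and scores, is to band the whole class by Sight Words score and then ask whether the two bands need the same work:

```python
# Band a class by Sight Words score: 0-10 vs 11-20.
# Names and scores are invented for illustration.
sight_words = {"Ava": 4, "Ben": 18, "Cal": 11, "Dee": 7, "Eli": 20}

bands = {"0-10": [], "11-20": []}
for name, score in sorted(sight_words.items(), key=lambda kv: kv[1]):
    bands["0-10" if score <= 10 else "11-20"].append(f"{name} ({score})")

for band, students in bands.items():
    print(band, "->", ", ".join(students))
```

Seeing the two bands side by side makes the intensity question concrete: the same minutes per week may mean very different things for a student at 4 and a student at 18.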
How do you handle this dilemma – When your data just causes more questions?
Tuesday is the day to share a “Slice of Life” with Two Writing Teachers. Thank you, Anna, Betsy, Beth, Dana, Deb, Kathleen, Stacey, and Tara. Check out the writers, readers and teachers here.