I’m pretty sure that the steam rising from my poor computer is clearly visible on all coasts. It’s been rising for a while, but I was determined to focus more on narratives as I sliced this month.
But life interfered.
I applauded this tweet a week ago.
A reputable reading researcher.
I’ve talked about Dr. Nell Duke and research before.
She’s my “go to” when I need the details on research.
But then all this other gobbledygook comes up. Pseudo-journalists who, after 2.5 years of studying “the science of reading,” bless it as the ONLY way to teach reading and now have webinars on EdWeek, radio shows, and articles purporting to tell teachers how to teach reading.
How to debunk the malarkey?
Start with P. L. Thomas’s “The Big Lie About the ‘Science of Reading’” here.
It’s an amazing article that debunks the whole issue.
And if you need additional reading material, here’s a direct plea to the media, also by Thomas.
Here is where the journalist said she did not have to report both sides: link
Because these are the journalist’s sources:
http://pamelasnow.blogspot.com/…/an-open-letter-to… “These ‘authorities’ on teaching reading are 1) a pre-service teacher and 2) a teacher in his fourth year of teaching. The other link is an Australian professor’s blog about their pre-service program.”
Sources for the condition of reading in the U.S.
Consider the source.
Is the person even in the field of education? What are their credentials? What is the source of their data?
The future of our children literally depends on all teachers.
Thank you, Betsy, Beth, Deb, Kathleen, Kelsey, Lanny, Melanie, and Stacey for this daily March forum from Two Writing Teachers. Check out the writers, readers and teachers here.
Administrator Webinar: How to communicate the need for evidence-based practices, from the What Works Clearinghouse – Link
What a year!
What does the data say?
My Top 5 Most Viewed Blog Posts of all time are:
Data analysis is interesting. Four of the five posts were also in my top 5 all time last year. #2 this year is a new addition to the top 5; it leapfrogged three previous “all time” posts to get there.
With two posts from 2013 in the top 5, I continue to wonder if my OLD writing is more popular than my newer writing. Or does the popularity mean that these are topics/issues that present-day literacy teachers are STILL struggling with? Maybe these are topics that I need to review during the course of the year. They are definitely already on my March Slicer “To Write About” list.
My Top 8 Posts of 2018 (by number of readers), out of the 109 posts written that year, were:
8. #SOL18: Lit Essentials – Regie Routman’s Literacy Essentials with an entire section dealing with Equity!
7. #TCRWP: 3 Tips – Patterns of Power (Jeff Anderson), Mentor Texts with Simone Frazier and Heart Maps with Georgia Heard
6. #SOL18: Reading Research – Is all reading research equal?
5. Bloom’s and Thinking – Reconceptualizing Bloom’s Taxonomy
4. #SOL18: March 25 – Updated Reprise of #3 above, “Lexile Level is NOT Text Complexity” (2013)
3. #NCTE18: Digging Deeper #1 – Kass Minor, Colleen Cruz & Cornelius Minor
2. #SOL18: March 15 – Barriers to Learning, Allington’s Six T’s, Student Progress
1. #SOL18: March 11 – Increasing Writing Volume
And this: a Reading Research post from the end of October, a November post about NCTE, and a December post all made the “Most Read in 2018” list within 4–8 weeks of the end of the year. So interesting!
What patterns do you see?
Which topics did you find most compelling?
What work do you review annually or over even longer time frames?
Wrapping up Curious with a Focus on being Joyful for this first chance to CELEBRATE!
How do you make purchasing decisions about programs/materials? Vicki Vinton has a great post titled “What’s the Difference Between a Teacher and a Packaged Program?” in case you have not yet read it. (I am not proposing that teachers should be replaced by a packaged program.)
Do you ask questions when you are reviewing supplementary computer reading materials before a purchasing decision? Who do you ask? Friends? Colleagues? Twitter PLN? What do you ask?
I consider myself fortunate as I was trained with a protocol to evaluate the quality of research according to the definition of “scientifically-based reading research” that was used under No Child Left Behind. I believe it helps “weed out silly stuff” pretty quickly.
Do you use a protocol or checklist to guide your review process? Below are some guiding questions to think about as you read a research report summary (or dig into the actual research, which can be found in its entirety through the International Reading Association). What additional information will you want to collect and review?
- What was the age/grade level of students in the study?
- Is there a match between the students in the study and students in your classroom?
- Is there a match between the “purposes/goals” of the computer reading programs and the “purposes/goals” of reading in your district and in the Common Core?
- Is there high-quality evidence as a result of the research?
- Does the research list an effect size?
- Was there a control group in the study? Were students randomly assigned to groups?
- Is there evidence that the results were sustained over time (two or more years later)?
- What resources (time, staff, technology) are needed to implement the program? Are the resources cost-prohibitive?
- How much professional development is needed to initiate and sustain the program?
- Is fidelity of implementation described well enough in the research to be replicated?
- Is the effect size 0.40 or greater? (This is the hinge point John Hattie says should be the threshold for consideration; see the worked example after this list.)
- When considering resources and “struggling readers,” what effect size is needed in order for the reader to “close the gap” and reach grade-level goals?
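Since several of these questions hinge on effect sizes, here is a minimal worked example of Cohen’s d, the most common effect size statistic in reading research. The numbers are invented purely for illustration:

```latex
% Hypothetical data, invented for illustration:
% program group mean = 52, control group mean = 48, pooled SD = 10
d = \frac{\bar{X}_{\text{program}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}}
  = \frac{52 - 48}{10}
  = 0.40
```

A d of 0.40 means the average student in the program group scored 0.4 standard deviations above the average control student, roughly the 66th percentile of the control group’s distribution, and exactly Hattie’s “hinge point.”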
As you continue reading, please think about your reading process. What do you, as a reader, do to enhance your understanding? (Blog followers: “Are you close reading?”)
Marshall Memo 495 (7-23-13)
4. How Effective Are K-6 Supplementary Computer Reading Programs?
“Despite substantial investments in reading instruction over the past two decades, far too many U.S. students remain poor readers, which has profound implications for these children and for the nation,” say Alan Cheung (The Chinese University/Hong Kong) and Robert Slavin (Johns Hopkins University) in this Reading Research Quarterly article. “Learning to read is a complex task in which many things must go right for a student to become successful… Different students may be failing to learn to read adequately for different reasons. One student may recognize every letter and sound but be slow and uncertain in blending them into words. Another may be proficient in reading words but does not comprehend them or the sentences in which they appear. Yet another may lack vocabulary needed to comprehend texts.”
One-on-one tutoring is the most effective intervention for struggling readers, say Cheung and Slavin, but it’s expensive. What about software packages? “In theory, computers can adapt to the individual needs of struggling readers,” they say, “building on what they can do and filling in gaps” – plus, they’re motivating to students. This article reports on the efficacy of technology products on which there is solid research. The bottom line: effect sizes for almost all products are small (averaging .14) and almost all of them aren’t any better than non-computer approaches. Here are the specifics, from the most to the least effective programs:
– Lexia (Phonics-Based Reading and Strategies for Older Students): the mean effect size for Title I students is .67
– Captain’s Log (BrainTrain): median effect size .40
– RWT and LIPS for first graders at risk for dyslexia: overall effect size .32
– Fundamental Punctuation Practice, MicroRead, Spelling Program, and Word Attack program for fourth graders: effect size .30
– READ 180 for middle schools: weighted mean effect size of .24
– READ 180 for grade 4-6 students: overall effect size .21
– Jostens (an earlier version of Compass Learning): across three studies, the weighted mean effect size was .19.
– Alpine Skier, Tank Tactics, and Big Door Deal for fifth and sixth graders: median effect size .18
– Across 12 studies of supplemental Computer Assisted Instruction, the weighted mean effect size was .18.
– Thinking Reader: median effect size of .14 in vocabulary and .13 in comprehension
– Destination Reading: median effect size .12
– Computer Network Specialist for grades 2-5: effect sizes averaged .10
– Fast ForWord for grades 3-6: weighted mean effect size .06
– Failure Free Reading for third and fifth graders: combined effect size .05
– ReadAbout for fifth graders: weighted average effect size .04
– READ 180 for grade 4-6 students (in another district): overall effect size .03
– Destination Reading, Waterford, Headsprout, PLATO Focus, and Academy of Reading in first-grade classrooms: mean effect size .02 (the study didn’t break down individual programs)
– Leapfrog, READ 180, Academy of Reading, KnowledgeBox for fourth graders: effect size –.01 (no breakdown for individual programs)
“The most important practical implication of the review presented here is that there is a limited evidence base for the use of technology applications to enhance the reading performance of struggling readers in the elementary grades,” conclude Cheung and Slavin. “Within the existing literature, however, the largest effect sizes were found for small-group interventions that supplement first-grade instruction with phonetic activities integrating computer and non-computer activities and occupying substantial time each week.”
“Effects of Educational Technology Applications on Reading Outcomes for Struggling Readers: A Best-Evidence Synthesis” by Alan Cheung and Robert Slavin in Reading Research Quarterly, July/August/September 2013 (Vol. 48, #3, pp. 277-299).
Cheung can be reached at email@example.com.
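Several of the findings above are “weighted mean effect sizes” pooled across multiple studies of the same program. For readers curious what that pooling actually does, here is a minimal sketch in Python that weights each study’s effect size by its sample size. The studies and numbers are invented for illustration; this shows the general idea behind a weighted mean, not Cheung and Slavin’s exact synthesis procedure.

```python
# Minimal sketch: pooling several studies' effect sizes into one
# sample-size-weighted mean. All numbers below are invented for
# illustration; they are NOT data from Cheung & Slavin (2013).

def weighted_mean_effect_size(studies):
    """Each study is an (effect_size, n_students) pair."""
    total_n = sum(n for _, n in studies)
    return sum(d * n for d, n in studies) / total_n

# Three hypothetical studies of the same program:
studies = [
    (0.25, 120),  # d = 0.25, 120 students
    (0.10, 300),  # d = 0.10, 300 students
    (0.30, 80),   # d = 0.30, 80 students
]

print(round(weighted_mean_effect_size(studies), 2))  # prints 0.17
```

The weighting is the point: the large 300-student study pulls the pooled estimate down to 0.17, well below the simple average of 0.22. That is why one small study with a big effect size should not outweigh several large studies with small ones.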
What surprised you about the research? How will this information impact your work?
Did you do any close reading? How did you know?
Thanks for your response!