Readers, Close Readers, Friends, Followers,
For my post honoring a full year of blogging, it is my pleasure to announce a Twitter Chat for Falling in Love with Close Reading: Lessons for Analyzing Texts – and Life, to be held on Monday, November 11, 2013, from 6 to 7 pm EST (some of us work on Veterans Day 😦 ).
The authors @ichrislehman and @teachkate will be joining us for that chat!
Our hashtag will be #FILWCloseReading.
What can you do during the next two weeks to prepare for the chat?
- Read the book: Falling in Love with Close Reading: Lessons for Analyzing Texts – and Life.
- Don't have the book? Read a sample from the book, available here at Heinemann.
- Continue learning!
We will be talking about a “ritual” for teaching close reading that grows out of “loving the author’s craft,” not a “must-do, lock-step procedure” that spans days of instruction for a two-page story!
Questions for the chat can be found here http://goo.gl/2HXOwi
Christopher Lehman and Kate Roberts have written a masterful text, Falling in Love with Close Reading: Lessons for Analyzing Texts – and Life. My interest in their book was heightened by the seven-week blog-a-thon that led up to the publication date. My hope was that the book would enable me to really dig in, with teachers and students, to make close reading simple, easy, and understandable. Remember that Chris defined close reading in his blog post as:
“Close reading is when a reader independently stops at moments in a text (or media or life) to reread and observe the choices an author has made. He or she reflects on those observations to reach for new understandings that can color the way the rest of the book is read (or song heard or life lived) and thought about.” Sept. 2, 2013
Now that I have finished reading the book, I stand at a crossroads. It is not going to be simple, easy, and understandable just yet!
Where do I begin? Do I need to go back and check the level of understanding with text evidence? What about word choice? Structure? Point of View? Across Texts?
My plan this morning: Begin at the beginning and go back to CCR Reading Anchor Standard 1 because it is complicated and tricky!
- Read closely to determine what the text says explicitly and to make logical inferences from it; cite specific textual evidence when writing or speaking to support conclusions drawn from the text.
This will mean reviewing both teacher and student actions, talk, and thinking in order to determine when, where, and why students are independently using evidence to increase understanding of texts and lives. Then, based on student data, the plan will develop. (For me personally, I am going to be using the lessons in the book to study, think, and reflect on text evidence, word choice, structure, point of view, and across texts. I will be working on this all year! I need to increase my own skills and knowledge at a variety of grade levels and in content areas beyond ELA!)
What have you read closely lately? Check out the link below if you were not following the blog-a-thon or if you have not yet decided to study close reading.
Ultimate Goal for Close Reading = Close Reading Your Life – Kate Roberts at #tcrwp Summer Institute
Is it possible? Or is increased conversation (and/or writing) a wonderful, unexpected result of close reading?
“Close reading is when a reader independently stops at moments in a text (or media or life) to reread and observe the choices an author has made. He or she reflects on those observations to reach for new understandings that can color the way the rest of the book is read (or song heard or life lived) and thought about.” (Lehman, Sept. 2, 2013)
As week six of the “Close Reading Blog-a-thon” winds down, I spent some time re-reading some of the earlier posts. What was I searching for? Was it deeper knowledge about specific blog content or was it the search for new understanding?
Patterns became evident as I found a variety of texts and life situations that included books, chapters, articles, paragraphs, pictures, artwork, maps, schedules, interview results, and community signs. Within these, point of view is readily apparent in what bloggers choose to include (thank you, Kate) and the structures they use (thank you, Chris). My initial rereading goal was to study the myriad informational text styles and structures present in these excellent posts.
But the biggest aha for me was that close reading conversations ensued and both conversation and writing increased! I commented on posts as I nodded my head in agreement while reading. I mentally composed posts on my drive to work. I found myself writing posts to respond to Kate, Chris, and of course Vicki Vinton. Ideas that had been “mulling around in my brain” seemed to crystallize and flow from my keyboard. And I even had internal conversations with myself during close reading!
In my personal journey over the last eight months to understand “close reading” and CCR Anchor Standard #1, it has been a combination of reading, writing, and conversations that has increased my understanding. The conversations in my head as well as those in person, or comments on blogs, and even as whole posts to respond to blogs have been helpful to me. Conversations have extended my learning and deepened my understanding.
Were the conversations necessary as part of close reading, or were they what happened after the close reading? This question made my head spin as I compared it to the inevitable “Which comes first, the chicken or the egg?” Or does it really even matter in the bigger scheme of life?
Are conversations a “Critical Component” or an “End Result” of close reading? What do you think? ALL conversations welcome!
How do you make purchasing decisions about programs/materials? Vicki Vinton has a great post titled “What’s the Difference Between a Teacher and a Packaged Program?” in case you have not yet read it. (I am not proposing that teachers should be replaced by a packaged program.)
Do you ask questions when you are reviewing supplementary computer reading materials before a purchasing decision? Who do you ask? Friends? Colleagues? Twitter PLN? What do you ask?
I consider myself fortunate: I was trained in a protocol for evaluating the quality of research against the definition of “scientifically-based reading research” used under No Child Left Behind. I believe it helps “weed out silly stuff” pretty quickly.
Do you use a protocol or checklist to guide your review process? I have provided some guiding questions to think about as you read the research report summary (or dig into the actual research) that can also be found in its entirety through the International Reading Association. What additional information will you want to collect and review?
- What was the age/grade level of students in the study?
- Is there a match between the students in the study and students in your classroom?
- Is there a match between the “purposes/goals” of the computer reading programs and the “purposes/goals” of reading in your district and in the Common Core?
- Is there high-quality evidence as a result of the research?
- Does the research list an effect size?
- Was there a control group in the study? Were students randomly assigned to groups?
- Is there evidence that the results were sustained over time? (Two years or more later)
- What resources (time, staff, technology) are needed to implement the program? Are the resources cost-prohibitive?
- How much professional development is needed to initiate and sustain the program?
- Is fidelity of implementation described well enough in the research to be replicated?
- Is the effect size .40 or greater? (.40 is the “hinge point” John Hattie says should be considered)
- When considering resources and “struggling readers,” what effect size is needed for the reader to “close the gap” and reach grade-level goals?
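For readers new to the term: an effect size is usually Cohen’s d, the difference between the treatment-group mean and the control-group mean, divided by the pooled standard deviation. Here is a minimal sketch of that arithmetic; the group statistics below are made-up numbers for illustration, not from any study cited here.

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d: the standardized difference between a
    treatment group's mean and a control group's mean."""
    # Pooled standard deviation across the two groups
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Hypothetical reading-test scores: treatment gained 4 points
# over control, with a pooled standard deviation of 10
d = cohens_d(mean_t=52.0, sd_t=10.0, n_t=100,
             mean_c=48.0, sd_c=10.0, n_c=100)
print(round(d, 2))  # prints 0.4 -- right at the .40 hinge point
```

In other words, a .40 effect size means the average treatment student outscored the average control student by four-tenths of a standard deviation.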
As you continue reading, please think about your reading process. What do you, as a reader, do to enhance your understanding? (Blog followers: “Are you close reading?”)
Marshall Memo 495 (7-23-13)
4. How Effective Are K-6 Supplementary Computer Reading Programs?
“Despite substantial investments in reading instruction over the past two decades, far too many U.S. students remain poor readers, which has profound implications for these children and for the nation,” say Alan Cheung (The Chinese University/Hong Kong) and Robert Slavin (Johns Hopkins University) in this Reading Research Quarterly article. “Learning to read is a complex task in which many things must go right for a student to become successful… Different students may be failing to learn to read adequately for different reasons. One student may recognize every letter and sound but be slow and uncertain in blending them into words. Another may be proficient in reading words but does not comprehend them or the sentences in which they appear. Yet another may lack vocabulary needed to comprehend texts.”
One-on-one tutoring is the most effective intervention for struggling readers, say Cheung and Slavin, but it’s expensive. What about software packages? “In theory, computers can adapt to the individual needs of struggling readers,” they say, “building on what they can do and filling in gaps” – plus, they’re motivating to students. This article reports on the efficacy of technology products on which there is solid research. The bottom line: effect sizes for almost all products are small (averaging .14) and almost all of them aren’t any better than non-computer approaches. Here are the specifics, from the most to the least effective programs:
– Lexia (Phonics-Based Reading and Strategies for Older Students): the mean effect size for Title I students is .67
– Captain’s Log (BrainTrain): median effect size .40
– RWT and LIPS for first graders at risk for dyslexia: overall effect size .32
– Fundamental Punctuation Practice, MicroRead, Spelling Program, and Word Attack program for fourth graders: effect size .30
– READ 180 for middle schools: weighted mean effect size of .24
– READ 180 for grade 4-6 students: overall effect size .21
– Jostens (an earlier version of Compass Learning): across three studies, the weighted mean effect size was .19.
– Alpine Skier, Tank Tactics, and Big Door Deal for fifth and sixth graders: median effect size .18
– Across 12 studies of supplemental Computer Assisted Instruction, the weighted mean effect size was .18.
– Thinking Reader: median effect size of .14 in vocabulary and .13 in comprehension
– Destination Reading: median effect size .12
– Computer Network Specialist for grades 2-5: effect sizes averaged .10
– Fast ForWord for grades 3-6: weighted mean effect size .06
– Failure Free Reading for third and fifth graders: combined effect size .05
– ReadAbout for fifth graders: weighted average effect size .04
– READ 180 for grade 4-6 students (in another district): overall effect size .03
– Destination Reading, Waterford, Headsprout, PLATO Focus, and Academy of Reading in first-grade classrooms: mean effect size .02 (the study didn’t break down individual programs)
– Leapfrog, READ 180, Academy of Reading, KnowledgeBox for fourth graders: effect size –.01 (no breakdown for individual programs)
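Several of the figures above are labeled “weighted mean effect size,” which combines multiple studies of the same program by weighting each study’s effect size by its sample size. A small sketch of that combination; the study sizes and effects below are invented for illustration, not taken from the Cheung and Slavin article.

```python
def weighted_mean_effect_size(studies):
    """Combine studies as (sample_size, effect_size) pairs,
    weighting each effect size by its study's sample size."""
    total_n = sum(n for n, _ in studies)
    return sum(n * d for n, d in studies) / total_n

# Three hypothetical studies of one program
studies = [(200, 0.10), (100, 0.30), (100, 0.22)]
print(round(weighted_mean_effect_size(studies), 2))  # prints 0.18
```

The design choice matters: a large study with a small effect pulls the combined estimate down, so one impressive small study cannot carry a program on its own.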
“The most important practical implication of the review presented here is that there is a limited evidence base for the use of technology applications to enhance the reading performance of struggling readers in the elementary grades,” conclude Cheung and Slavin. “Within the existing literature, however, the largest effect sizes were found for small-group interventions that supplement first-grade instruction with phonetic activities integrating computer and non-computer activities and occupying substantial time each week.”
“Effects of Educational Technology Applications on Reading Outcomes for Struggling Readers: A Best-Evidence Synthesis” by Alan Cheung and Robert Slavin in Reading Research Quarterly, July/August/September 2013 (Vol. 48, #3, pp. 277-299).
Cheung can be reached at firstname.lastname@example.org.
What surprised you about the research? How will this information impact your work?
Did you do any close reading? How did you know?
Thanks for your response!