Monday, January 30, 2017

Assessment and Data Professional Learning Unit

Oftentimes, professional learning sessions exist as discrete events envisioned as turning points in the path of instruction: the assumption is that the session will be transformative, and that educators will have the time and wherewithal to follow through of their own accord.  This year we have decided to approach the topic of Assessment and Data usage from a perspective that is slightly more realistic.  Rather than attempting to craft a single, all-powerful, omnibus professional learning session, we have decided to create a series of professional learning experiences that all aim toward improving the capacity of our peers to be strategic in the use of data AND in the all-too-often ignored laying of the necessary groundwork to do so.

What follows are the agendas and presentations that were used to anchor this unit of professional learning.

Saturday, February 13, 2016

What if multiple choice questions allowed teachers to SEE their students' thinking instead of just the purported result of that thinking?

An interesting thought.

A couple of disclaimers:


  1. It is late at night/early in the morning, and I am a bit fogged because of some cold symptoms... the chance of this post containing non sequiturs and general rambling is high
  2. I do not have an answer to this question, so the following post is more of a thought experiment. If YOU have the answer, share it by commenting on this post.
Here is what made me think about this question in the first place.  I like the Mastery Connect tool, but I know that many are still warming up to it.  Part of my role this year is to check in with teachers about their use of assessment data to drive their instructional choices, and in doing so I get to hear some interesting feedback.  

During one conversation a math teacher bemoaned (rightfully so) the seeming dependence on multiple choice questions as the method of assessment.  Yes, there are other types of questions, as you all know by now, but when it comes down to a quick turnaround a solid multiple choice question is hard to beat.  What I believe this teacher was responding to was the way in which a multiple choice question's response is so easily divorced from the thinking that ostensibly led to it.  Was it a guess? Was it a happy accident?  If wrong, why?

At present (so far as I know) Mastery Connect does not allow you to capture students' work alongside their multiple choice responses.  You may be thinking, "so what? You can always collect their 'scrap' paper or test booklet/sheet, and review that for a more robust picture of their current state of mastery."  You are certainly right, but what if you did not have to, or more to the point... did not have time to?

Ok, I am going to jump to an idea I have been playing with in my head for a little bit.  It is important that you do not shuffle the story you just read off to your long- or short-term memory, but that you keep it in your active working memory so that you can see the connection (that I hope I am able to make).

I like small groups, and I think most do.  As a teacher you can really be efficient with your instruction, targeting precisely what is needed and by whom.  It seems to be a win-win.  Lately, I have been playing with mid-class checks for understanding: little mini-quizzes that might seem to have a more likely home at the end of class in the form of an exit ticket; a mid-ticket, if you will.

They are uniformly short, so that teachers can quickly score and sort them into groups based on the results.  Doing this requires very precise questioning, so that you can be sure that when a student misses questions 1-3 they need the type of intervention being offered by small group A, whereas students missing questions 4 or 5 need what the other small group is offering.

This works well when there are multiple teachers in the room, and it allows misconceptions to be cleared up right away so that students do not lose too much ground struggling before the difficulty is noticed.

What if I did not have a co-teacher?  Sad, yes, but not the end of the mid-ticket, for I can use Mastery Connect to deliver the assessment digitally.  That, in turn, allows students to see their level of mastery immediately upon submission.  If I set some criteria for two or three small-group stations in the room (e.g., re-teaching, partnered, and independent) students can self-sort on the basis of their own data, as in the sketch below.
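
To make that concrete, here is a minimal sketch of the self-sorting logic. The cutoff scores and station names are hypothetical, and none of this is a Mastery Connect feature; it just makes explicit the rule students would apply to their own results:

```python
# A minimal sketch of the self-sorting rule (hypothetical cutoffs and
# station names; this is not a Mastery Connect feature).
def assign_station(correct: int, total: int = 5) -> str:
    """Map a mid-ticket score to a small-group station."""
    pct = correct / total
    if pct < 0.6:
        return "re-teaching"  # teacher-led small group
    if pct < 0.8:
        return "partnered"    # practice with a peer
    return "independent"      # extension work

for score in range(6):
    print(f"{score}/5 -> {assign_station(score)}")
```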

Now for the acrobatics.  Put the two stories together: that of the teacher wanting to see student thinking, and that of the teacher trying to create data-based groupings on the fly.

What if multiple choice questions allowed teachers to SEE their students' thinking instead of just the purported result of that thinking?

Suppose you created a short assessment wherein each multiple choice question surfaced a student's knowledge of the process. The question is not about the final answer, but about a step that would need to be taken in pursuit of that answer.  In short, their thinking. Here I am envisioning the exploded views brought to us by the world of technical drawing.

Think of a multi-step problem.  Each step can be probed in the form of its own multiple choice question, and this can be done in a few ways.
For example:
  1. Take one problem and explode its steps into the appropriate number of questions and:
    • ask students to choose the response that best shows what the proper step/work looks like
    • ask students to identify the statement that best describes the step necessary to solve the problem 

Directions: Your answers to the following questions will demonstrate your knowledge. Each one represents the completion of a step in the process needed to solve the equation below:

4x - 12 = 12

Question #1:
After completing the FIRST step in solving the equation you will be left with which of the following:
a.) 4x + 12 = 0
b.) 4x = 24
c.) x = 6
d.) -12 = 12 - 4x

Question #2:
After completing the SECOND step in solving the equation you will be left with which of the following:
a.) 4x = 24
b.) 0
c.) x = 6
d.) 4 = 24

Question #3:
After completing the THIRD step in solving the equation you will be left with which of the following:
a.) 4x = 24
b.) x = 6
c.) 24/x = 4
d.) x = 3

OR


Directions: Your answers to the following questions will demonstrate your knowledge. Each question will ask what is needed next to solve the equation below:

4x - 12 = 12

Question #1:
What is the FIRST step that you should take?
a.) Isolate the term 4x by adding 12 to each side of the equation.
b.) Divide each side by x to remove the variable from the left side of the equation.
c.) Subtract 12 from each side so that you have zero on the right side.
d.) Multiply both sides of the equation by 4 to get rid of the variable.

Question #2: 
What is the SECOND step that you should take?
a.) Multiply both sides by 4
b.) Divide each side by 4
c.) Divide by zero
d.) There is no next step

Question #3:
Finally, what is the correct answer?
a.) x = -12
b.) x = -6
c.) x = 6
d.) x = 24
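
If you wanted to generate this kind of item systematically, the intermediate states fall right out of the solving process. Here is a minimal sketch (my own illustration, with a hypothetical explode_steps helper; nothing here is a Mastery Connect feature) that derives each state of a one-variable linear equation, so that each state can seed one step-level question:

```python
# A minimal sketch (not a Mastery Connect feature): walk a linear equation
# a*x + b = c through its solution, recording each intermediate state so
# each one can seed a step-level multiple-choice question.
from fractions import Fraction

def explode_steps(a: int, b: int, c: int) -> list[str]:
    """Return the intermediate states of solving a*x + b = c."""
    states = []
    rhs = Fraction(c - b)           # step 1: move the constant term
    states.append(f"{a}x = {rhs}")
    x = rhs / a                     # step 2: divide by the coefficient
    states.append(f"x = {x}")
    return states

# 4x - 12 = 12 is a = 4, b = -12, c = 12
print(explode_steps(4, -12, 12))    # ['4x = 24', 'x = 6']
```

The correct answer to each step question is just one of these states; the distractors are the states you get by taking a plausible wrong step.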

I am certain that there are many more ways to ask students to demonstrate their knowledge of processes and, correspondingly, their thinking.  A math teacher could differentiate the same assessment by making it visual, actually providing students with the "notebook" page that SHOWS a problem in various states of completion, and thereby eliminate the classic barrier that literacy often presents to demonstrating math knowledge.

This approach of taking an exploded view of a mental process is immediately applicable in science curricula, wherein students must follow procedures or conduct observations and experiments in a specific sequence, but how does it translate to the more nebulous world of the humanities?

Again, I am not sure that I have an answer, but I think that in classrooms all over the country ELA teachers are instructing students to analyze text in specific ways, and the same is true for writing. After all, much of teaching ELA is about slowing down the mental processes our adult brains go through as we analyze text and modeling that for students in the form of discrete steps.  Therefore, analyzing the development of Jean Louise Finch is as much a product of systematic thinking as the graphing of linear equations.

There is simply no way that this is the first time something of this kind has been written, and it is even more unlikely that I am the first to have thought about it, so please help push my thinking further and add your comments to this post.

One last disclaimer:

3. Somehow I have been listening to Conway Twitty the entire time I was writing this... especially odd, since I have only three songs of his mighty catalogue. Doubly odd that I only recognized this here at the end. I don't know what happened.


What's that Darlin'
How am I doin'
Guess I'm doin' alright
Except I can't sleep
And I've blogged all night 'til dawn.

Friday, February 5, 2016

8th Grade Winter 2016 ELA Simulation Item Analysis

The 8th grade team, as is true for many of you as well, sat down to undertake some item analysis of our Winter 2016 ELA Simulation Exam [Day One]. What follows is a reflection on the process that we went through.

Before the meeting the team ensured that all student answer documents had been scanned into Mastery Connect so that we would be working with the most complete data set possible for the 8th grade team.  Our simulated exam targeted many Common Core Standards under the umbrella of Reading Literature and Informational Texts.  In order to create a manageable data set the team narrowed its focus to the first 23 questions, which were based on 4 passages (3 nonfiction, 1 fiction).

Our expressed purpose in conducting the item analysis was to surface any trends or patterns in our students' reading skills that would allow us to strategically address learning gaps or build on places of relative strength.

Our meeting began by orienting ourselves to the data in a low-inference way, wherein team members (with the Mastery Connect-generated report in front of them) shared initial observations and noticings.
A few of those initial observations were:
  • There seems to be a big difference between literature and informational texts (see questions 1, 3, and 20)
  • Questions 6 and 17
  • Question 5 has an interesting pattern

We transitioned from our low-inference observations to a more critical and analytical mode, aided by the "Multiple Choice Item Deconstruction Chart". We followed the protocol and soon discovered that students struggled, universally, on questions tied to RL/RI 8.4. These questions are about defining unfamiliar words using context; something that we surfaced as an issue in September after our Baseline exam. Confused, because we had triaged that particular issue early (and often), we dug deeper.

Looking at the particular items in question, another interesting pattern came to light. Questions that simply asked students to define unfamiliar words were answered correctly at a fairly high rate. However, also bundled into the 8.4 standards is a focus on figurative language. Soon we realized that the questions students were struggling with (in all three classrooms) were those that asked them to engage with figurative language, and in some cases to use figurative language to identify unfamiliar words (a double whammy).
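
The pattern only became visible once we looked at percent correct per item and then grouped items by what they actually demand. A minimal sketch of that arithmetic (the scores and standard tags below are invented for illustration; this is not Mastery Connect's export format):

```python
# A minimal sketch of the item-analysis arithmetic. Scores and tags are
# invented for illustration, not our actual data.
from collections import defaultdict

# Each item: (question number, standard tag, 0/1 scores across students).
items = [
    (7,  "RI.8.4 vocab",      [1, 1, 0, 1, 1, 1, 0, 1]),
    (12, "RI.8.4 figurative", [0, 1, 0, 0, 1, 0, 0, 0]),
    (18, "RL.8.4 figurative", [0, 0, 1, 0, 0, 1, 0, 0]),
]

by_tag = defaultdict(list)
for number, tag, scores in items:
    pct = 100 * sum(scores) / len(scores)
    by_tag[tag].append(pct)
    print(f"Q{number} ({tag}): {pct:.0f}% correct")

# Averaging by tag makes the vocabulary/figurative split visible.
for tag, pcts in by_tag.items():
    print(f"{tag}: {sum(pcts) / len(pcts):.0f}% average")
```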

Given that this pattern extended across the whole grade, we as a team developed a response that took advantage of a common grade structure that holds one period a week in reserve for independent reading. During this class we decided not only to do a bit of item analysis with our students, but also to address how to analyze figurative language. Furthermore, we identified several key lessons in the upcoming unit wherein we can check in on the progress of students' skill in analyzing figurative language. We plan to include several short responses to Laura Hillenbrand's use of figurative language, organize at least one accountable talk about how figurative language is used to develop the characters in the narrative, and ask students to generate their own figurative descriptions of what they are reading. In this way we will be addressing this instructional need in a myriad of ways and modalities.

Our final response to this discovery was to manipulate the standard itself. Instead of having RL.8.4 include unfamiliar words, figurative and connotative language, etc., we are going to create sub-standards to allow for a more granular collection of data. We have done this in the past with success, and we look forward to the ways in which more discrete data will enable more targeted instruction in the weeks to come.

Wednesday, December 2, 2015

Customizing the Common Core: Making Mastery Connect ...Connect


Read the following Common Core Standard:

W.8.1
Write arguments to support claims with clear reasons and relevant evidence.
  a. Introduce claim(s), acknowledge and distinguish the claim(s) from alternate or opposing claims, and organize the reasons and evidence logically.
  b. Support claim(s) with logical reasoning and relevant evidence, using accurate, credible sources and demonstrating an understanding of the topic or text.
  c. Use words, phrases, and clauses to create cohesion and clarify the relationships among claim(s), counterclaims, reasons, and evidence.
  d. Establish and maintain a formal style.
  e. Provide a concluding statement or section that follows from and supports the argument presented.
Look familiar? I am sure that the ELA teachers can just about quote it verbatim by now.  This is ONE standard, but look at the components.  It is one thing when a standard has multiple components that seem to build on one another, but when I look at W.8.1 I see a standard that is more of a salad bowl than a melting pot.  The sub-standards seem so discrete to me that they could almost be standards altogether.  Which is exactly what my students' performance shows, for they may be selecting high-quality evidence, generating middling reasons, and failing entirely to include a counterclaim.  How can I give them one score to indicate mastery of this standard when their performance is so varied? Do I err on the side of their strengths, or do I tie their rating to their areas of need?  I am no longer interested in choosing between one or the other.  I want actionable data for each piece of the puzzle.

Mastery Connect can be used to truly report students' progress on even the most minute sub-standard, even if the Common Core Standard does not yet exist.  Can't find the standard(s) you want, or can't generate meaningful data around specific skills that your students struggle to master?  Make them up!

1.) Go to your curriculum map, locate a particular unit, and find the offending standard within that unit.


2.) Open the "Lines" menu and select "Add Sub Standard"


3.) Maybe you'll get lucky and there will be suggestions or pre-existing sub-standards, but I just went for the DIY option right away


4.) For the sake of organization and consistency, you are best off observing the same naming convention already established by the CCSS creators.  I simply appended my new sub-standard with a letter corresponding to the order in which it is listed in the description of the "parent" standard.  Then I borrowed the original CCSS language for the "Short Description".
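
To illustrate (these names are hypothetical, just following the lettering convention described above), the first two custom sub-standards for W.8.1 might be entered as:

```
W.8.1.a  Short Description: "Introduce claim(s), acknowledge and distinguish
         the claim(s) from alternate or opposing claims..."
W.8.1.b  Short Description: "Support claim(s) with logical reasoning and
         relevant evidence..."
```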


5.) Create!

6.) Once you have created your new "custom" sub-standard it will show up in the curriculum map nested under its "parent" standard.  Now you are free to create assessments that specifically target that sub-standard, or to break down any assessment of the parent standard into its constituent parts to get a clearer sense of what your students can actually do.

This more nuanced view of the data allows for instant, targeted, small-group instruction that addresses precisely what students need.

Setting Rubrics Up For Success


We have lived with the Common Core State Standards for quite some time now.  In fact, at WSC we have about a year or two more experience than many of the schools in New York City, for we saw the way the winds were blowing and tacked accordingly.

Whether you love them, hate them, or fall somewhere in between, you must admit that they have provided a common language that has sped our increase in academic rigor.  Still, they are a bit imperfect.  They do not always fit in the way you want, or need, them to.

Recently I have been looking at many of our 8th grade rubrics and performing small, outpatient surgeries on them.  I have had a lot of difficulty being confident in the rubric-based scores/ratings I have been giving students for their work.  In the 8th grade we have been using the New York State Argument Writing rubric as the basis for most of the feedback given to students about their mastery of skills, but there are places where criteria are co-mingled.


Placing multiple standards/criteria in a single row frustrates your ability to be precise in your rating and in your feedback.  Yes, it is true that you can work around it, but why?  Split them up! Doing so will not take up any more space, and you can help your students tease out their level of mastery in a much more straightforward way.

Using Mastery Connect to Make Data-Driven Programming Decisions for PLMs


Introduction:
As the first cycle of PLM comes to a close it is necessary, if we are being faithful to the original vision, to use data about students' levels of mastery to drive our programming decisions.  In the past someone would have to look at multiple spreadsheets that contained data related to students' performance on a variety of, at times, idiosyncratic assessments.  I say idiosyncratic because even if a team started out with the same intent, and maybe an agreement on the standards being addressed, there were often revisions that crucially altered the nature of the assessment from classroom to classroom.

Today, as I sit down with the hope of coaxing to the surface the oft-elusive snapshot of students' mastery, I have a tool that makes visible our renewed commitment to common assessment.  Mastery Connect allows a teacher/programmer to compare apples to apples, and to know how good those apples are.  Because the CCSS are written in such a way that RI.8.2 is simply the older, more sophisticated sibling of RI.7.2, we can safely bundle the two together and target the heart of the skill with the same instructional technique (RI.7.2, for those keeping score, is: Determine two or more central ideas in a text and analyze their development over the course of the text; provide an objective summary of the text.).

Method:
With a few clicks you can generate a grade-wide look at student mastery (see 'Teacher Reports Overview' for a step-by-step guide).
What these reports show is the state of your grade’s mastery on whichever standards comprise your selected assessment.

Once you have surfaced the grade-wide needs it is time to drill down to determine the needs of the individual students within the larger grade-wide pattern.  I made a few false starts when trying to facilitate this level of analysis, but with the help of our friendly neighborhood ed-tech superheroes at Teaching Matters I was able to try a method that seems to have some staying power.

I created a whole-grade tracker for the seventh and eighth grades.  These aren't the type of instructional trackers that you use in your classroom, wherein you add assessments and score them; rather, think of them as a reporting warehouse.  These trackers provide me with the most up-to-date look at standards mastery on a student level.


This tracker will update with every addition of new data, including data entered as a result of assessments administered during PLM (indicated by the "dog-ear" in the upper left corner of the cell).

Results
I identified three areas of need shared by both the 7th and 8th grades and was able to generate a list of students who needed support with those skills.  Once I had done that I needed to prioritize the needs of the students.  As is often the case, students were weak in multiple areas, so I consulted the Mastery Connect tool to see what percentage of assessment questions (tied to each specific standard) they were able to answer correctly.  The lower the percentage correct, the higher the priority.  This data was cross-referenced with data that we received as a result of ELA state test analysis.  The sketch below shows the prioritization rule in miniature.
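
For what it's worth, the placement logic reduces to a simple greedy sort. A minimal sketch with invented scores and group capacities (none of this is Mastery Connect's API; it is just the rule made explicit):

```python
# A minimal sketch of the placement rule (invented scores and capacities;
# not Mastery Connect's API). Each student's standards are ranked by
# percent correct, ascending, so the lowest score is the highest need;
# if that group is full, the student falls to their next-highest need.
students = {
    "Student A": {"RI.8.2": 45, "RL.8.4": 60, "W.8.1": 80},
    "Student B": {"RL.8.4": 30, "RI.8.2": 55},
    "Student C": {"W.8.1": 50, "RL.8.4": 40},
}
capacity = {"RI.8.2": 1, "RL.8.4": 1, "W.8.1": 2}

placements = {}
for name, scores in students.items():
    for standard, pct in sorted(scores.items(), key=lambda kv: kv[1]):
        if capacity.get(standard, 0) > 0:
            capacity[standard] -= 1
            placements[name] = standard
            break

print(placements)
# {'Student A': 'RI.8.2', 'Student B': 'RL.8.4', 'Student C': 'W.8.1'}
```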

Conclusions
Students, in the end, were placed in a PLM group matching either their first- or second-highest area of need.  The Mastery Connect tool was leveraged in such a way that the most recent assessment data informed every step of the process.  Now, PLM represents, to borrow an economics prefix, the macro application of Mastery Connect.  The same process, in miniature, is highly doable for your individual class.  This is especially true when you consider that the assessment calendars have been set ahead of time and teachers are able to see if there are standards that will repeat from unit to unit (or assessment to assessment).  These power standards that show up often are central to building a strong understanding of a student's progress toward mastery, and if Mastery Connect can accelerate our instructional interventions, then it is certainly worth exploring.

Sunday, October 25, 2015

Feedback As a Lever

One of the things that we hope to solidify this year is the practice of providing students with clear and actionable feedback.  The importance of this cannot be overstated, for the more I think about it, the more I find that feedback is central to all of our present initiatives.

We are presently focused on:
  • Student Centered Learning
  • Student Self-Reflection
  • Goal-Setting as a Means To Improved Mastery
  • Release of Responsibility and the Fostering of Independence
  • Data-Driven Instruction
Each rests on a foundation of students knowing where they are.  We work extensively with rubrics in class, and do a lot to help students understand how rubrics are a tool for understanding their performance and fodder for their goal-setting efforts.

Because of its centrality to our school's mission, I did some reading of professional texts this summer.  Here are a few things to consider:

"John Hattie and Helen Timperley (2007) explained that its [feedbacks] purpose is 'to reduce discrepancies between current understandings and performance and a goal' (p. 86)" (Marzano 2009).

Researcher Valerie Shute said feedback is "information communicated to the learner that is intended to modify his or her thinking or behavior for the purpose of improving learning (p. 154)" (Marzano 2009).

"Hattie and Timperley (2007) synthesized the most current and comprehensive research in feedback and summarized findings from twelve previous meta-analyses, incorporating 196 studies and 6,972 ESs** (Effect Size[s]).  They calculated an overall average of 0.79 for feedback (translating to a 29 percentile point gain).  As shown by Hattie (2009), this is twice the average ES of typical educational innovations" (Marzano 2009).

** for an explanation of Effect Size see http://www.uccs.edu/lbecker/effect-size.html

Here is a quick graphic that clarifies the jargon above.  The higher the ES, the more impactful the practice.
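
In the meantime, the arithmetic behind "29 percentile points" is worth seeing. A short sketch of the standard conversion (an ES is expressed in standard deviation units, so the percentile gain comes from the normal distribution):

```python
# Translate an effect size (ES, in standard deviation units) into a
# percentile-point gain: how far above the control-group median does a
# student land after moving up ES standard deviations?
from statistics import NormalDist

es = 0.79  # Hattie & Timperley's average ES for feedback
gain = (NormalDist().cdf(es) - 0.5) * 100
print(f"{gain:.0f} percentile points")  # ~29, matching the figure above
```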



"They (Hattie and Timperley)  argued that feedback regarding the task, the process, and self-regulation is often effective, whereas feedback regarding the self (often delivered as praise) typically does not enhance learning and achievement.

Learning can be enhanced to the degree that students share the challenging goals of learning, adopt self-assessment and evaluation strategies, and develop error detection procedures and heightened self-efficacy to tackle more challenging tasks leading to mastery and understanding of lessons. (Hattie and Timperley, 2007)" (Marzano 2009).

One of the things that I have incorporated into all of my written and verbal feedback is one or two quick things that a student could do next time to improve their performance.  We all know the importance of chunking goals, and it is crucial that we temper students' expectations of going from a low level of performance to an extremely high one overnight.  Just as with SMARTe goals in Base Camp, we have to show them how realistic baby steps are the surest way to guarantee success, and then offer them a couple of baby steps to take right away.