
MOTIVATIONS FOR CRITERION-BASED GRADING

The idea of criterion-based grading first took hold for me when a former principal asked me in an interview exactly what an 84% told me about a student.  I considered the question for a few seconds before, to my surprise, I had to respond, “Nothing.”  I realized in that moment that an 84% in the category of tests or quizzes told me nothing about the student’s strengths or challenges.  As a report of how the student could achieve more, it was essentially meaningless: beyond saying “do better” if the student or his family had higher expectations, or that he “did enough” if they were resigned to him being in the class and “getting by,” it offered nothing.  I reflected that assigning a quantitative score seems to impart a degree of validity, or rigor, because numbers often give us that feeling, but beneath the veneer of the number resides an array of inherently subjective decisions that are invisible to the student in the assessment process, such as:


  • How many points should a test/quiz/project be worth?

  • How many points should each problem on said assignment be worth? 

    • Points often ambiguously communicate the value a teacher assigns to a topic, or they may simply reflect that more “steps” are necessary to reach the right answer.  These are two different things, but students sometimes understand them to be synonymous.

  • How many points did a student earn out of the points there were available?

    • Teachers seldom give 0 points when a student makes no correct steps on a problem worth, say, 5 points, because they understand how costly a 0/5 can be when points are tallied at the end.  If the test is out of 50 points, a student who earns 40 points receives an 80% (B-), as opposed to 45 points, which is a 90% (A-).  So a teacher might give a 0 only if the student leaves the entire problem blank; if he shows any mathematical thinking at all, regardless of how irrelevant or incorrect, the teacher assigns a floor of 3 points, since 3 out of 5 is 60% for the problem, which is essentially a failing grade anyway.  


When a score of points earned out of points available is presented to the student at the end, that score is the only feedback the student receives, and it is difficult to see what he can understand about himself or his learning based on it.


Perhaps more important is where education is heading.  Increasingly, educational researchers are recording and responding to the demands of an ever-changing society and asking what economic and cultural value our high school graduates bring to the challenges and problems that lie ahead.  What exactly will our young students be tasked with doing to make our world a better place in the future?  What skills will they need?  How can we as educators support them in developing these skills?  My personal response is to explicitly name these skills, define them, and then centralize them as the pillars of my courses.  As documented by Scott McLeod and Dean Shareski in their book, Different Schools for a Different World, there are a number of fascinating pedagogical approaches being applied across schools and classrooms, and it would be worth our while to explore and consider their utility in our learning community.  But these approaches must be complemented by an assessment format that puts front and center the skills we have defined as being so critical to our students’ lives.  Our assessment philosophy should make clear the learning skills we as a community value, describe in detail the behaviors that constitute them, agree on the evidence we would need to observe to know they are happening and at what level, and finally construct tasks that will challenge our students to develop these skills and give us as educators opportunities to offer support, feedback, and instruction.  


The skills I have identified as central to learning in my classes are Knowledge and Understanding, Inquiry and Investigation, Communication, and Learner Skills (otherwise known as Executive Functioning Skills).  I intend for these to be the grade categories, and I have invited a conversation with my students about their relative weights in calculating the final grade.  Traditionally, I have noted, Haverford places emphasis on Knowledge and Understanding in the form of content assessment on tests and quizzes, so in this first year I might suggest we begin more in alignment with traditional expectations; but as we become more adept at learning tasks that evoke inquiry, communication, and executive functioning skills from our students, our assigned weights may evolve from year to year.  


In this first year, the types of assessments aren’t really under debate.  Tests and quizzes will still be assigned under the category of Knowledge and Understanding, projects under Inquiry and Investigation, and homework under Learner Skills.  For this first year, I even propose to grade tests and quizzes in the way I openly critiqued in the opening paragraph.  


The significant departure this first year is to create assignment categories that explicitly communicate the skills we hope to assess with each of these types of assessments.  Tests and quizzes assess the knowledge and understanding of content. Projects assess students’ inquiry and investigation skills. Homework assesses students’ learner skills in their ability to organize their time by practicing and studying the content.  


The second difference of note is that a single assessment can fall under multiple categories.  So a Chapter Test may be assessed for the knowledge and understanding of the content on display, but also the communication of this knowledge and understanding, a practice that is already employed by many teachers but not with explicit tools other than quick comments in the margins.  Projects can be assessed for inquiry and investigation, but for communication as well.  


Finally, although assessments in the category of Knowledge and Understanding will be scored traditionally with points for this first year (though there are other pedagogically sound methods of feedback that we can explore later), the other three categories will be assessed using rubrics that students themselves can have a hand in writing and that will be distributed in full before they are put into use.  This gives students a chance to develop a deep understanding of what these skills look like and to get meaningful feedback on where they are excelling and where they should concentrate their development.  In short, students will be able to achieve a greater sense of themselves as thinkers, learners, and individuals.  


Two last pieces of anecdotal evidence have arisen in my time at Haverford.  I once had a student who consistently answered correctly on 92% of the points available to him on an assessment.  The trouble was that he rarely wrote much about his mathematical thinking process.  To be clear, I have no suspicion of academic dishonesty; indeed, most of the teaching community recognized him for his incredible mental processing, and whenever I would ask him how he got an answer, all he could produce was a vague “it just sort of came to me.”  His ability to produce answers like this was well documented in an FEP.  The fact that he did not communicate his thinking well was a problem, though, because I had no means of using his problem solving to help him earn the remaining 8% of points he was still getting wrong.  He would attest that I implored him to communicate more of his work, but without formal language for what I expected, and therefore no way to document his level of communication skill, it simply remained a request that he would struggle to respond to, and his 92% remained.  It was also a problem because communication is perhaps the most important skill of our current and future workplace.  If technology has advanced any aspect of our working lives, it is our technical ability to share ideas and provide feedback to produce superior products with an increasingly geographically, culturally, and technically diverse set of colleagues.  If we don’t support our students in developing their communication skills, we are putting them at a critical disadvantage.  With a separate Communication rubric, I could point my student to the individual skills that constitute good communication; we could identify where he possesses strengths and where to direct our attention for improvement, and then gather the resources to help him improve.  


My second piece of anecdotal evidence comes from a project I assigned to a group of upper-level math students.  Since I had no other place to put it, I lumped it into the tests category because I felt it was a large and valuable enough task to belong in a more highly weighted grade category.  The problem was that, understandably, putting it in the tests category led my students to treat it as a test of their knowledge and understanding instead of recognizing its true purpose: investigating a fascinating and complex real-world phenomenon.  They were tasked with choosing their own thesis of investigation, and in doing so one of two things occurred: either they chose a thesis so underwhelmingly simple that they were guaranteed to get the math right with very little exercise of their abilities, which resulted in a very boring learning experience for everyone, or they really challenged themselves by choosing a fascinating aspect of the phenomenon to explore but then struggled with the math, and we all felt constrained as to whether I could help them or whether they could use powerful computational tools, since that would essentially mean they weren’t doing the math on their own.  Only one or two groups got the balance right, which resulted in a powerful learning experience.  The others either produced, by their own admission, an overly reductive investigation, or got lost in the math and felt frustrated at being unable to reach a conclusion.  The solution, I believe, is to assess projects under the Inquiry and Investigation category, not Knowledge and Understanding, where what it means to inquire and investigate is clearly detailed.  My intention for projects is for my students to take risks and lean into their curiosity without fear of retribution when they get something wrong, knowing that I will support their mathematical understanding unconditionally (unlike on a test or quiz, when it is their knowledge and understanding that I intend to assess).  


To conclude, employing criterion-based grading is a new and exciting inflection point for me at The Haverford School.  I believe that it will give my students and me the tools and vocabulary to talk constructively about their learning and to more accurately identify strengths and areas for targeted development.  As always, I will be first and foremost responsive to my students’ learning needs, and already I get the sense that they are intrigued and excited about this new way of understanding their growth as learners. 
