Preparing Final Grades?

Towards Fair and Meaningful Grading Practices for the Differentiated Standards-Based Classroom

Public school teachers today are charged with designing standards-based learning experiences that will be effective with increasingly diverse groups of students. To fulfill that goal, they have begun to use a wide variety of assignments, learning activities, culminating projects, and assessments with the students in their classes. But teachers working to improve their practice by differentiating their lesson plans sometimes worry, “How can I be sure my grades are fair and accurate if my students engage in different learning activities and express what they know in different ways?”

This pedagogy requires newer, more flexible grading and reporting practices than we have used in the past: practices that help us better recognize and support all kinds of learners. Well-regarded educational leaders believe that grading practices alone could play an important role in reducing disengagement and school failure. Here are the big ideas.

Four elements of instruction for diverse learners, when considered together, help lead us logically to grading and reporting practices that are fair and meaningful.  These elements can be summarized as follows:

1. Standards are the “what” of teaching. They are the set of criteria describing what a student will know or be able to do when a course is completed.

2. Differentiation is the “how” of teaching.  Differentiation refers to the design of instruction to meet a range of learning needs within a diverse group of students as they progress towards mastering course standards.  Differentiated instruction offers multiple paths for students to take in information, apply knowledge, and express proficiency with content standards.  It makes learning accessible to all students, who vary in their prior knowledge/skills, learning profiles, linguistic/cultural backgrounds, abilities/disabilities, and individual interests.  It makes use of ongoing formative assessment to guide instruction.  Similar to the “universal design” of physical space in architecture and city planning, differentiated instruction is a universal design for learning.

3. Accommodation is a “term of art.”  It has a specific meaning in the context of education that makes it a tool for implementing the programmatic and non-discrimination laws that pertain to educating students with disabilities in public schools.  Accommodations are adaptations to instruction that  DO NOT  alter or lower the essential standards of a course. Accommodations are offered to students with disabilities in order to guarantee access to learning and to non-biased assessment.

Extra time for reading is an example of an accommodation for a student who has dyslexia, unless reading speed is the specific target of your assessment. For example, if the purpose of an assessment is to measure content knowledge, a timed administration of an assessment may not deliver valid results for a student who has dyslexia and can only read slowly with concentrated effort on the decoding process.  Chances are that a timed assessment would actually measure the student’s reading speed rather than his content knowledge.  Providing extra reading time is therefore necessary to create a non-biased assessment that produces valid results. In providing this accommodation, you do not alter the essential course standards, which in this case are about content knowledge and not about reading speed.

Providing accommodations to a student with a specific learning disability is similar to providing eyeglasses to a student who has poor vision.  Disabilities may not be apparent to the casual observer and are usually permanent characteristics that the individual learns to live with by adapting the environment.

Students who meet course standards with accommodations should receive standard grades, report cards, transcripts and diplomas, without notation that accommodations were provided. Indicators regarding accommodations are not appropriate for grade reports because the essential standards of the course were not altered for the individual.  Changes were made only to the format of instruction, to make it accessible and effective, or to the format of assessment, to make it non-biased and valid.  Accommodations are really just good instructional and assessment practices.

How do accommodations relate to differentiation? Any public school student who has been formally diagnosed with a disability has an Individualized Education Program (IEP) or a Section 504 Plan that specifies what accommodations will be provided to him in school.  However, many students have cognitive processing strengths and weaknesses that are not severe enough to be considered disabilities or to warrant special education services. This is a well-accepted understanding documented in the literature of education, psychology, and brain research.  In addition, language proficiency and other variations make one-size-fits-all instruction and assessment less effective and accurate.  A learning environment that offers flexibility and multiple options for learning and assessment better addresses multiple learning profiles and increases the validity of assessments and proficiency reports.  Accommodations, because they do not lower the essential standards of a course, are not unlike the variations that normally occur between different teachers.  Accommodations are also similar to the variations that can be proactively built into the lesson plans for a single class of learners by using differentiated instructional design. Both differentiation and accommodations make learning more universally accessible and therefore increase student success rates.

4. Modification is another “term of art” that serves an important function in implementing education and disability law.  Modifications are adaptations made to classroom instruction that  DO  lower the course standards. Modifications are used for students with severe disabilities who may not be able to achieve general education course standards because of the impact of their disability. Students with severe disabilities can participate in classroom instruction when it is modified to a lower level of difficulty so that it becomes accessible to them. Grades for students receiving modified instruction may be based on alternate standards via their IEP, which specifies individualized learning goals.  By specifying that a child is included in the regular classroom with “modifications,” we are able to provide access to the general education program for students with severe disabilities without diluting the meaning of our standards and grades. If grades are based on modified (lowered) standards and instruction, grade reports and transcripts DO include a notation that the grade is a non-standard grade and therefore is not equivalent to standards-based course grades.

Grading and Reporting

The purpose of grading is worthy of philosophical debate, but a simple statement about the purpose of summative, final grades will serve the specific focus of this article. The purpose of summative grading is to communicate meaningfully about what a student knows or can do after completing a course. The goal is to provide information that is accurate and useful for planning a program of study, for documenting achievement of prerequisites, or for documenting qualification for any work that requires a specific standard of knowledge and performance.

In standards-based education the purpose of summative grading is to report what a student knows and can do in relation to the course standards. In other words, standards-based education calls for grading based on mastery of specified criteria; it is criterion-based grading rather than normative grading.

In normative grading systems grades must be distributed over a curve and students compete for the limited number of high grades that are allowed.  The resulting grades do not provide information about what a student knows or can do, nor even what the teacher presented. They tell you a student’s relative standing amongst a group of learners.

Criterion-based grading does not need to reflect a curve, nor does it involve a competition between students, with winners and losers.  Criterion-based grading does not involve a comparison between students, only a comparison of each individual student to the standards. Criterion-based grading practices provide more meaningful information about what a student knows or can do, and therefore are more useful for planning a course of study, establishing prerequisite achievement levels, and documenting qualifications.

Common Misunderstandings that Affect Grading

A theoretical framework for grading is suggested by the definitions of standards, differentiation, accommodations and modifications.  How does this model look when we put it into practice?  The following issues come up frequently among educators. Here are some misunderstandings to avoid:

Classroom Curriculum vs. Course Standards

Do not confuse your curriculum with the course standards.  They are NOT one and the same.  All students need to show competence with the same course standards, but not with the same materials, activities, or products of instruction that an individual teacher has chosen for use in the classroom. If one student is not thriving with a given assignment, you can adapt it or provide an alternate assignment.  There is no particular reason why any student must do the same exact work as other students.  Different work and different assessments can demonstrate competency with the same standards. Offering a variety of assignments and assessments that address the course standards is more effective than one-size-fits-all classroom assignments or assessments.  Rubrics can also be used to evaluate individual samples of proficiency.

Tests vs. Reality

Do not treat tests as if they are sacred!  Tests are only as valuable as they are valid. Validity refers to the likelihood that an assessment accurately measures what it is intended to measure.  Tests have no value if they produce inaccurate results about what a student knows or can do.  This can happen whenever there are confounding factors. A classroom test may be an aid to discovering the reality about what a student knows or can do, but it does not necessarily function that way.  Sometimes a test actually measures the impact of a student’s disability rather than the skill or knowledge you intend to measure. In that case, toss out the test data.

Remember also that assessments of proficiency are an attempt to ascertain a student’s usual and true mastery. They are NOT like a game of sport in which only one opportunity “counts.” For example, if you know that a student’s habitual performance in the classroom is significantly better than a single test performance, throw out the test data when you compute your grades. As an assessment professional you are obliged to interpret your assessment data. Do not use invalid test scores when you compute final grades.

Fair vs. Same

Learning is not a competition between students.  A given learning activity or assignment may be effective for one student and ineffective with another.  Moreover, a given assessment instrument may be valid for one student but invalid for another.  Fairness is not using the same exact assessment for every student.  Fairness requires that you use assessment data that is valid for the individual you are grading. Fairness means giving credit for what is true about each individual.

Proficiency vs. Behavior

Avoid using grades for discipline.  The use of grades for discipline can cause inaccuracies in reporting proficiency, it can limit future learning opportunities, and it can cause disengagement from school.

Do not allow scores unrelated to proficiency to depress final grades below a student’s true level of proficiency. This can happen when grades are used as discipline for missing homework or when late penalties are imposed on otherwise quality work.  Final letter or number grades in a standards-based system must be valid measures of proficiency with regard to the course standards.

Some teachers say they grade homework to build good work habits and study skills.  However, it is difficult to objectively measure effort and study skills unless you know your student very well. One student’s ten-minute effort is another student’s hour-long effort.  The number of practice problems necessary for one student to solidify a skill may be double or half that for another.  The study habits that are beneficial for one student may not be the ones that are most beneficial for another.  Students each have to balance different sets of demands depending on their personal skill sets and schedules.

Grades given for effort or study skills are often really given for compliance with one-size-fits-all assignments that may not match an individual’s needs.  But according to Reeves (2008) and Guskey (2000), no studies show that using grades to punish students for missing work will prompt greater effort or help students learn good study habits.  In fact, low grades cause students to withdraw from learning.  Study habits for older students require the development of metacognitive regulation and intrinsic control.  Those are gained when teachers offer student-centered learning options, provide feedback with ongoing formative assessment, and foster student ownership of learning.

Do not grade missing work.  Missing work is something different from work that is of failing quality. Faced with students who do not turn in assignments, teachers sometimes feel they need to do something to increase participation, but there are at least three problems with assigning a failure for missing work: 1) it is not a valid measure of proficiency, 2) if averaged into final grades it distorts true proficiency, and 3) using grades as punishment does not work for increasing student participation, engagement, or learning.  Besides these shortcomings, assigning failures for missing work makes it difficult and unlikely that a student can recover from a setback when learning how to learn.

Assigning failing grades for missing work is an easy way out for schools.  If something is not working, it is simpler to attribute the problem to the student and keep on doing what we’re doing.  In a way, it helps us to avoid facing the crisis of disengagement.

A better practice is to grade only real work, assign an “I” for incomplete work, investigate the reason for a student’s missing or late work, and attempt to hold the student accountable.  What type of assignment is missing – a practice assignment or a project? Does the assignment provide a meaningful learning experience for the individual student? Is a student’s learning disability, learning profile, or another special need interfering with completing the assignment? Is it too easy or too difficult? Does the student understand the assignment?  Does he know where to begin?  Does he have a plan for how he can get it done?  Is the student over-scheduled?  Is the student disengaged because of repeated negative school experiences? What does the student think is the problem?  What are the student’s goals? Does your formative assessment system give enough feedback to the student about how his effort contributes to reaching his goals?  What does the student say he needs?  Can the instruction be adapted to better meet the student’s needs? What else can be changed?

If you can be sure the student has been assigned work that is meaningful for his individual needs and at his instructional level, that he has enough time, knows how to start, has a plan for getting it done, and understands how his efforts will help him reach his goals, you may have solved the problem.  Once the work is turned in, assign full and accurate credit for proficiency. Do not distort proficiency with late penalties.  There is no standard that requires all students to become proficient on a specific day.

Of course, if a student does not complete enough work to evaluate his proficiency by the end of the semester, he cannot be given credit for the course.  Some districts have begun to replace traditional grading and reporting systems with new systems that assign Incompletes for courses that are attempted but not completed in a semester.  This is based on the theory that time and pacing must be allowed to vary in order to address the needs of some students.

Zero vs. 50%

Reeves (2004) and Wormeli (2006) have written extensively about the misuse of zeros. On a 100-point scale, using zeros distorts true proficiency in final grades. On a 4-point scale, it may be acceptable to use zero as a failing score.  However, these authors argue that on a 100-point scale, a “50” should be entered into your grade book if the student’s raw score is anything below 50.  Why?  The distance between the steps of the grading scale must be equal to prevent distortion when the scores are used in computations for final grades.  If the failing grade, F, has a larger range (60 points, as compared to 10 points each for A, B, C, and D), a single zero can have a disproportionate impact on the final number.  A student who receives a zero may subsequently have to demonstrate true proficiency at the high end of the grading scale several times in order to end up with a low passing final grade. Besides the distortion to proficiency, if it is too hard to recover from a mistake, a student may give up.  Using “50” for failing leads to summative grades that more accurately reflect true proficiency and encourage students to recover from mistakes on the learning path.
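A quick worked example shows the distortion. (The scores here are hypothetical, chosen only to illustrate the arithmetic on a traditional 90/80/70/60 scale.) Suppose a student earns 90, 90, 90, and 90 on four assessments and a zero on a fifth. The average is (90 + 90 + 90 + 90 + 0) ÷ 5 = 72, a low C, even though the student has demonstrated A-level proficiency four separate times. Entering a 50 instead yields (90 + 90 + 90 + 90 + 50) ÷ 5 = 82, which still records the failure but does not bury the four strong performances or put a reasonable final grade out of reach.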

Conclusion:

This doesn’t solve all our problems.  If we don’t have a good way to report personal progress, we lack important information.  This is particularly true for students at each end of the performance spectrum, those who repeatedly score either low or high with regard to uniform standards. A low performer may be making good, steady personal progress, and a high performer may be making negligible improvement.  Personal progress information can be more helpful in fostering the success of individual students.

Secondly, today’s overcrowded classrooms, limited resources and large high schools make knowing kids well and using flexible grading practices a logistical challenge for many teachers.

However, greater clarity about final grading practices that are fair and meaningful for differentiated, standards-based assessment is a place to start. If we can begin to give appropriate credit for the different ways and rates at which students learn and express what they know and can do, we may reduce disengagement and school failure rates. Dropout statistics suggest we can’t afford to alienate our students by measuring success in narrow and rigid terms that don’t match our rich diversity of intelligence, skills, abilities, learning rates, backgrounds, and personal interests. We can’t afford to assert that a student who doesn’t complete a one-size-fits-all assignment according to a one-size-fits-all timeline is a failure.  We can’t afford to under-report proficiency and create unnecessary barriers to future learning opportunities for our students. We need to create universal designs for learning that allow us to appreciate each student’s strengths and empower each unique student to win at learning.

Works Cited

Conner, Jerusha, Denise Pope, and Mollie Galloway. “Success with Less Stress.” Educational Leadership 67.4 (2010): 54-58. http://www.ascd.org/publications/educational-leadership/dec09/vol67/num04/Success-with-Less-Stress.aspx

Darling-Hammond, Linda, and Olivia Ifill-Lynch. “If They’d Only Do Their Work!” Educational Leadership 63.5 (2006): 8-13.

Guskey, Thomas R. “Grading Policies That Work Against Standards…and How To Fix Them.” NASSP Bulletin 84.620 (2000).

Jung, Lee Ann, and Thomas R. Guskey. “Grading Exceptional Learners.” Educational Leadership 67.5 (2010): 31-35. http://www.ascd.org/publications/educational-leadership/feb10/vol67/num05/Grading-Exceptional-Learners.aspx

Popham, W. James. “Report Cards, Test Gaps, and Item Types.” Educational Leadership 65.2 (2007): 87-88. http://www.ascd.org/publications/educational-leadership/oct07/vol65/num02/Report-Cards,-Test-Gaps,-and-Item-Types.aspx

“Principles and Indicators for Student Assessment Systems | FairTest.” The National Center for Fair & Open Testing | FairTest. 28 Aug. 2007. Web. 10 June 2010. http://fairtest.org/principles-and-indicators-student-assessment-syste

Reeves, Douglas B. “Effective Grading Practices.” Educational Leadership 65.5 (2008): 85-87. Web. http://www.ascd.org/publications/educational-leadership/feb08/vol65/num05/Effective-Grading-Practices.aspx

Reeves, Douglas B. “The Case Against Zero.” Phi Delta Kappan 86.4 (2004): 324.

Stange, Alan. “Grading Practices for 2009-2010: The Big Ideas of Our Assessment Practice.” Weblog post. PrairieSouth Staff Sites. Oct. 2009. Web. 10 June 2010. http://staff.prairiesouth.ca/sites/stangea/2009/10/16/grading-practices-for-2009-2010/

Vatterott, Cathy. “Homework Myths.” Web. 1 June 2010. http://www.homeworklady.com/index.php?option=com_docman&task=cat_view&gid=13&Itemid=34

Vatterott, Cathy. “What Is Effective Homework?” ASCD Express 3.7 (2007). Web. http://www.ascd.org/ascd_express/vol3/3-07_vatterott.aspx

Wormeli, Rick. “Accountability: Teaching Through Assessment and Feedback, Not Grading.” American Secondary Education 34.3 (2006).

Wormeli, Rick. Fair Isn’t Always Equal: Assessing & Grading in the Differentiated Classroom. Portland, ME: Stenhouse, 2006.

Creative Commons License

Preparing Final Grades? Towards Fair and Meaningful Grading Practices for the Differentiated Standards-Based Classroom by Denise Herrenbruck is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.  You are free to copy and distribute this work as long as you attribute the work to the author.  You may not use this work for commercial purposes.  You may not alter, transform, or build upon this work. Permissions beyond the scope of this license may be available at http://headinthecloudsfeetontheground.com/contact-2/.
