Ch. 8 notes

** CRITERIA AND VALIDITY **

__ Chapter outline __
 * The Need for Criteria
 * Key Component
 * From Criteria to Rubric
 * Two types of rubrics:
 * 1) A holistic rubric provides an overall impression of a student’s work, yielding a single score or rating for a product or performance.
 * 2) An analytic rubric divides the work into distinct traits or dimensions and judges each one separately.
 * Rubrics to Assess Understanding
 * Backward Design from Criteria and Rubrics
 * The Facets and Criteria
 * Designing and Refining Rubrics Based on Student Work
 * The Challenge of Validity
 * Backward Design to the Rescue
 * Validity affects rubric design.
 * Because the kinds of open-ended prompts and performance tasks needed to assess for understanding do not have a single, correct answer or solution process, evaluation of student work is based on judgment guided by criteria.
 * Many teachers make the mistake of relying on criteria that are merely easy to see as opposed to central to the performance and its purpose.
 * A rubric is a criterion-based scoring guide consisting of a fixed measurement scale (4 points, 6 points, or whatever is appropriate) and descriptions of the characteristics for each score point.
 * Rubrics describe degrees of quality, proficiency, or understanding along a continuum.
 * If the assessment response needs only a yes/no determination, a checklist is used instead of a rubric.
 * __ Rubrics answer the questions: __
 * 1) By what criteria should performance be judged and discriminated?
 * 2) Where should we look and what should we look for to judge performance success?
 * 3) How should the different levels of quality, proficiency, or understanding be described and distinguished from one another?
 * Rubrics focus on describing degrees of understanding, the trait being scored.
 * An explicit goal in Stage 1 implies the criteria needed in Stage 2, even before a particular task is designed. For example, such a goal might imply criteria like the following:
 * 1) Clearly stated position or opinion
 * 2) Supporting details provided
 * 3) Appropriate sources cited (as needed)
 * Since we have argued that understanding is revealed via six facets, these prove useful in identifying criteria and constructing rubrics to assess the degree of understanding.
 * Figure 8.2 on p. 177
 * Figure 8.3 provides a general framework for making helpful distinctions and sound judgments.
 * __ Designing rubrics from student work: __
 * 1) Gather samples of student performance that illustrate the desired understanding or proficiency.
 * 2) Sort the student work into different “stacks” and write down the reasons.
 * 3) Cluster the reasons into traits or important dimensions of performance.
 * 4) Write a definition of each trait.
 * 5) Select samples of student performance that illustrate each score point on each trait.
 * 6) Continuously refine.
 * The third question in thinking like an assessor asks us to be careful that we evoke the most appropriate evidence, namely evidence of the desired results of Stage 1.
 * Validity refers to the meaning we can and cannot properly make of specific evidence, including traditional test-related evidence.
 * Consider the challenge in any conventional classroom: a focus on understanding makes the issue of validity challenging in any assessment.
 * Recall the horizontal version of the Template (figure 7.2 p. 149) and see how it asks us to look at the logical links between Stage 1 and Stage 2.
 * Notice in Figure 8.4 how backward design, using two of the six facets, helps us to better “think like an assessor”.
 * Validity issues arise in rubrics, not just tasks.
 * We have to make sure that we employ the right criteria for judging understanding (or any other target), not just what is easy to count or score.
 * In assessing for understanding we must especially beware of confusing mere correctness or skill in performance (e.g., writing, PowerPoint, graphic representations) with degree of understanding.
 * A common problem in assessment is that many scorers presume greater understanding in the student who knows all the facts or communicates with elegance versus the student who makes mistakes or communicates poorly.
 * Two questions asked earlier also help us self-assess the validity of criteria and rubrics. Given the criteria you are proposing and the rubrics being drafted from them, consider:
 * 1) Could the proposed criteria be met but the performer still not demonstrate deep understanding?
 * 2) Could the proposed criteria not be met but the performer nonetheless still show understanding?
 * If your answer to either question is yes, then the proposed criteria and rubric are not yet ready to provide valid inferences.

** RELIABILITY: OUR CONFIDENCE IN THE PATTERN **
 * We need not only a valid inference but also a trustworthy one.
 * We need to be confident that a result reflects a pattern.
 * Reliable assessments reveal a credible pattern, a clear trend.
 * Note that whether various judges agree with one another is a different problem, usually termed “inter-rater reliability”.