Physics Education Research Conference 2011 Invited Talks
Assessment Lessons from K-12 Education Research: Knowledge Representation, Learning, and Motivation
Lorrie A. Shepard, University of Colorado at Boulder
For 30 years, research on the effects of high-stakes testing in K-12 schools has documented the negative effects of teaching to the test. Most obvious is the reduction or elimination of time spent on science and social studies instruction, especially in high-poverty schools. Less obvious is the harm to student learning in reading and mathematics when instruction is limited to repetitive drill on worksheets that closely resemble test formats. The lack of generalized, flexible understanding of underlying principles in K-12 tested subjects is similar to Mazur's experience with plug-and-chug versus conceptual test questions. The PER community is well aware of the importance of more complete representation of learning goals as a remedy to this problem. Equally important, however, are the assessment "processes," especially feedback and grading, that can either promote or deter students' engagement and willingness to take responsibility for their own learning. In this talk, I summarize learning and motivation research that has particular bearing on effective classroom assessment practices, certainly in K-12 classrooms, but also in university courses.
Complex interactions between formative assessment, technology, and classroom practices
Edward Price, California State University, San Marcos
Interactive engagement (IE) methods provide instructors with evidence of student thinking that can guide instructional decisions across a range of timescales: facilitating an activity, determining the flow of activities, or modifying the curriculum. Thus, from the instructor's perspective, IE activities can function as formative assessments. As a practical matter, the ability to utilize this potential depends on how the activities are implemented. This talk will describe different tools for small group problem solving, including whiteboards, Tablet PCs, digital cameras, and photosharing websites. These tools provide the instructor with varying levels of access to student work during and after class, and therefore provide a range of support for formative assessment. Furthermore, they differ in physical size, ease of use, and the roles for students and instructor. These differences lead to complex, often surprising interactions with classroom practices.
Download Edward Price's Invited Presentation
Research and development of enhanced assessment tools for chemistry education
Thomas A. Holme, Iowa State University
The ACS Exams Institute has been producing norm-referenced exams in chemistry for over 75 years. Over the past decade, demands for assessment within chemistry education have increased, and the need has grown to consider additional ways to analyze data or develop assessment tools. This talk will note several examples of research and development related to chemistry exams and their use in classroom settings. Topics include item-order effects, criterion referencing of exam items, and differential item functioning.
Download Thomas A. Holme's Invited Presentation
Defining and Assessing Competence in Science: Lessons Learned the Hard Way
James W. Pellegrino, University of Illinois at Chicago
What do we want students to know and be able to do in disciplines such as Physics, Chemistry or Biology? How do we determine whether students are attaining our objectives? How can we use this information to improve student outcomes? Questions about defining and assessing competence are at the heart of the science education enterprise and they continue to challenge educators across K-16+. This presentation will provide concrete examples of how best to frame and address these issues using examples drawn from my work on the redesign of AP science courses and exams, my participation in developing the NRC Conceptual Framework for new Science Education Standards, and ongoing research on the validity and utility of instruments such as STEM concept inventories. A major point of the presentation is that principled assessment design should be an essential and driving part of the process of designing powerful and effective science learning environments.
Download James W. Pellegrino's Invited Presentation
Student Engagement in Disciplinary Assessment
Co-presenter: Janet E. Coffey, University of Maryland
Co-author: David Hammer
Assessment in classrooms is often viewed as the responsibility of teachers. They typically serve as the ones who judge the quality of work and advise students about the steps necessary to make progress toward conceptual gains. By and large, then, assessment is something done by teachers to students. When students are involved in assessment, it is often as recipients and users of feedback from teachers. There is an analog to this in science. Assessment encompasses the process of peer review, when community members make judgments about the quality of work and provide feedback in reviews, and when funders decide whether to support a research program. However, in science, assessment also operates on a different level: assessment of ideas is intimately connected to doing science. In this talk, we examine the relationship between assessment and learning in science and in schools. We argue for engaging students in disciplinary assessment activities and for better coordinating the different purposes and roles of assessment.
Standards-based grading with voice: Listening for students' understanding
Andy Rundquist, Hamline University
Standards-based grading is gaining popularity at the high school level, including in physics courses. The basic notion is to give students an upfront list of objectives that they need to master. Students can reassess often, and their final grade is determined solely by their last reassessment on each standard. It is the instructor's job to help students find ways of demonstrating their mastery. I implemented this in a junior-level mechanics course, where the small enrollment allowed me to introduce a novel twist: all assessments had to include the student's voice. This meant that students turned in pencasts, screencasts, and in-person assessments. Several days were also set aside for collaborative oral assessments, where students offered up honest advice and scores were mutually determined. In this talk, I'll share my experience trying out this pedagogical experiment and try to convey how it has improved my own understanding of my students' understanding.
Download Andy Rundquist's Invited Presentation
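The grading rule in Rundquist's abstract, in which only the most recent reassessment on each standard counts, can be sketched in a few lines. This is a hypothetical illustration, not code from the talk; the names (`Attempt`, `final_grade`) and the 0-4 mastery scale are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Attempt:
    standard: str   # learning objective, e.g. "oscillations"
    score: float    # assumed 0-4 mastery scale
    day: int        # when the (re)assessment happened

def final_grade(attempts):
    """Average of each standard's most recent score."""
    latest = {}
    # Process attempts in chronological order so later reassessments
    # overwrite earlier scores on the same standard.
    for a in sorted(attempts, key=lambda a: a.day):
        latest[a.standard] = a.score
    return sum(latest.values()) / len(latest)

attempts = [
    Attempt("energy", 2.0, day=1),
    Attempt("energy", 4.0, day=9),        # reassessment replaces the 2.0
    Attempt("oscillations", 3.0, day=5),
]
print(final_grade(attempts))  # 3.5
```

The key design point is that earlier scores carry no weight at all, which is what gives students an incentive to reassess until they can demonstrate mastery.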