Category Archives: Assessment

Student Evaluation of Teaching – The Next 100 Years

Mari K. Hopper, PhD
Sam Houston State University

Student evaluation of teaching (SET) has been utilized and studied for over 100 years. Originally, SET was designed by faculty to gather information from students in order to improve personal teaching methods (Remmers and Guthrie, 1927). Over time, SET became increasingly common: 29% of institutions of higher education employed this resource in 1973, 68% in 1983, and 86% in 1993 (Seldin, 1993), rising to 94.2% by 2010.

Today, SET is employed almost universally and has become a routine task for both faculty and students. While deployment of this instrument has increased, its impact on faculty has declined. A study published in 2002 indicated that only 2-10% of instructors reported major teaching changes based on SET (Nasser & Fresko, 2002). At the same time, SET results have become increasingly important in making impactful faculty decisions, including promotion and tenure, merit pay, and awards. Miller and Seldin reported that 99.3% of deans use SET in evaluating their faculty (Miller & Seldin, 2014).

The literature offers a rich discussion of issues related to SET, including bias, validity, reliability, and accuracy. Although these discussions raise concerns about the current use of SET, institutions continue to rely on it for multiple purposes. As a consequence, it has become increasingly important that students offer feedback that is informative, actionable, and professional. It would also be helpful to raise student awareness of the scope, implications, and potential impact of SET results.

To that end, I offer the following suggestions for helping students become motivated and effective evaluators of faculty:

  • Inform students of changes made based on evaluations from last semester/year
  • Share information concerning potential bias (age, primary language, perception of grading leniency, etc.)
  • Inform students of the full use of SET, including departmental and campus-wide uses (administrative decisions, awards, P & T, etc.)
  • Establish a standard of faculty performance for each rating on the Likert scale (in some cases a 3 may be the more desirable indicator)
  • Discuss professionalism and the development of professional identity. Ask students to write only what they would share in a face-to-face conversation.
  • Ask students to exercise caution and discrimination – avoid discussing factors out of faculty control (class size, time offered, required exams, classroom setting, etc.)
  • If indicating a faculty behavior is unsatisfactory – offer specific reasons
  • When writing that a faculty member displays positive attributes, be sure to include comments about factual items, not just perceptions and personal feelings
  • Give students examples of USEFUL and NOT USEFUL feedback
  • Distinguish between ‘anonymous’ and ‘blinded’ based on your school’s policy

Although technology has made the administration of SET nearly invisible to faculty, it is perhaps time for faculty to re-connect with the original purpose. It is also appropriate for faculty to be involved in the process of developing SET instruments, and screening questions posed to their students. Additionally, it is our responsibility to help students develop proficiency in offering effective evaluation. Faculty have the opportunity, and perhaps a responsibility, to determine the usefulness and impact of SET for the next 100 years.

Please share your ideas about how we might return to the original purpose of SET – to inform our teaching. I would also encourage you to share instructions you give your students just prior to administering SET. 

Mari K. Hopper, PhD, is currently the Associate Dean for Biomedical Sciences at Sam Houston State University's proposed College of Osteopathic Medicine. She received her Ph.D. in Physiology from Kansas State University. She was trained as a physiologist with special interest in maximum capabilities of the cardiorespiratory and muscular systems. Throughout her academic career she has found immense gratification in working with students in the classroom, the research laboratory, and in community service positions. Dr. Hopper has consistently used the scholarly approach in her teaching, and earned tenure and multiple awards as a result of her contributions in the area of scholarship of teaching and learning. She has focused on curriculum development and creating curricular materials that challenge adult learners while engaging students to evaluate, synthesize, and apply difficult concepts. At SHSU she will lead the development of the basic science curriculum for the first two years of medical school. Dr. Hopper is very active in professional organizations and currently serves as the Chapter Advisory Council Chair for the American Physiological Society, a member of the HAPS Conference Site Selection Committee, and Past-President of the Indiana Physiological Society. Dr. Hopper has four grown children and a husband David who is a research scientist.

Fostering an Inclusive Classroom: A Practical Guide

Ah, the summer season has begun! I love this time of year, yes for the sun and the beach and baseball games and long, lazy summer reading, but also because it gets me thinking about new beginnings. I’ve always operated on a school-year calendar mindset, so if you’re like me, you’re probably reflecting on the successes and shortcomings of the past year, preparing for the upcoming fall semester, or maybe even launching into a new summer semester now. As campuses become more diverse, fostering an inclusive learning environment becomes increasingly important, yet the prospect of how to do so can be daunting. So where to start?

First, recognize that there is not just one way to create an inclusive classroom. Often, the most effective tactics you use may be discipline-, regional-, campus-, or classroom-specific. Inclusive teaching is a student-oriented mindset, a way of thinking that challenges you to maximize opportunities for all students to connect with you, the course material, and each other.

Second, being proactive before a semester begins can save you a lot of time, headaches, and conflict down the road. Set aside some dedicated time to critically evaluate your course structure, curriculum, assignments, and language choices before ever interacting with your students. Consider which voices, perspectives, and examples are prominent in your class materials, and ask yourself which ones are missing and why. Try to diversify the mode of content representation (lectures, videos, readings, discussions, hands-on activities, etc.) and/or assessments types (verbal vs. diagrammed, written vs. spoken, group vs. individual, online vs. in-class, etc.). Recognize the limits of your own culture-bound assumptions, and, if possible, ask for feedback from a colleague whose background differs from your own.

Third, know that you don’t have to change everything all at once. If you are developing an entirely new course/preparation, you’ll have less time to commit to these endeavors than you might for a course you’ve taught a few times already. Recognize that incremental steps in the right direction are better than completely overwhelming yourself and your students to the point of ineffectiveness. (Trust me, I’ve tried and it isn’t pretty!)

Below, I have included some practical ways to make a classroom more inclusive, but this list is far from comprehensive. As always, feedback is much appreciated!

Part 1: Course Structure and Student Feedback

These strategies require the largest time commitment to design and implement, but they are well worth the effort.

  • Provide opportunities for collaborative learning in the classroom. Active learning activities can better engage diverse students, and this promotes inclusivity by allowing students from diverse backgrounds to interact with one another. Furthermore, heterogeneous groups are usually better problem-solvers than homogeneous ones.
  • Implement a variety of learning activity types in order to reach different kinds of learners. Use poll questions, case studies, think-pair-share, jigsaws, hands-on activities, oral and written assignments, etc.
  • Select texts/readings whose language is gender-neutral or stereotype-free, and if you run across a problem after the fact, point out the text’s shortcomings in class and give students the opportunity to discuss it.
  • Promote a growth mindset. The language you use in the classroom can have a surprising impact on student success, even when you try to be encouraging. How many of us have said to our students before a test, “You all are so smart. I know you can do this!”? It sounds innocent enough, but this language conveys that “being smart” determines success rather than hard work. Students with this fixed mindset are more likely to give up when confronted with a challenge because they don’t think they are smart/good/talented enough to succeed. Therefore, when we encourage our students before an assessment or give them feedback afterwards, we must always address their effort and their work, rather than assigning attributes (positive or negative) to them as people.
  • Convey the same level of confidence in the abilities of all your students. Set high expectations that you believe all students can achieve, emphasizing the importance of hard work and effort. Perhaps the biggest challenge is maintaining high expectations for every student, even those who have performed poorly in the past. However, assuming a student just can’t cut it based on one low exam grade may be as damaging as assuming a student isn’t fit due to their race, gender, background, etc.
  • Be evenhanded in praising your students. Don’t go overboard, as excessive praise can signal to students that you didn’t expect success from them.

Part 2: Combating Implicit Bias

Every one of us harbors biases, including implicit biases that form outside of our conscious awareness. In some cases, our implicit biases may even run counter to our conscious values. This matters in the classroom because implicit bias can trigger self-fulfilling prophecies by changing stereotyped groups’ behaviors to conform to stereotypes, even when the stereotype was initially untrue. Attempting to suppress our biases is likely to be counterproductive, so we must employ other strategies to ensure fairness to all our students.

  • Become aware of your own biases, by assessing them with tools like the Harvard Implicit Association Test (https://implicit.harvard.edu/implicit/takeatest.html) or by self-reflection. Ask yourself: Do I interact with men and women in ways that create double standards? Do I assume that members of one group will need extra help in the classroom – or alternatively, that they will outperform others? Do I undervalue comments made by individuals with a different accent than my own?
  • Learn about cultures different than your own. Read authors with diverse backgrounds. Express a genuine interest in other cultural traditions. Exposure to different groups increases your empathy towards them.
  • Take extra care to evaluate students on an individual basis rather than by social categorization / group membership. Issues related to group identity may be especially salient on college campuses because this is often the first time students affirm their identity and/or join single-identity organizations / groups.
  • Recognize the complexity of diversity. No person has just one identity. We all belong to multiple groups, and differences within groups may be as great as those across groups.
  • Promote interactions in the classroom between different social groups. Even if you choose to let students form their own groups in class, mix it up with jigsaw activities, for example.
  • Use counter-stereotypic examples in your lectures, case studies, and exams.
  • Employ fair grading practices, such as clearly-defined rubrics, anonymous grading, grading question by question instead of student by student, and activities with some group points and some individual points.

Part 3: Day-to-Day Classroom Culture

These suggestions fall under the “biggest bang for your buck” category. They don’t require much time to implement, but they can go a long way to making your students feel more welcome in your classroom.

  • Use diverse images, names, examples, analogies, perspectives, and cultural references in your teaching. Keep this in mind when you choose pictures/cartoons for your lectures, prepare in-class or take-home activities, and write quiz/test questions. Ask yourself if the examples you are using are only familiar or relevant to someone with your background. If so, challenge yourself to make it accessible to a wider audience.
  • Pay attention to your terminology and be willing to adjust based on new information. This may be country-, region-, or campus-specific, and it may change over time (e.g. “minority” vs. “historically underrepresented”). When in doubt, be more specific rather than less (e.g. “Korean” instead of “Asian”; “Navajo” instead of “Native American”).
  • Use inclusive and non-gendered language whenever possible (e.g. “significant other/partner” instead of “boyfriend/husband,” “chairperson” instead of “chairman,” “parenting” instead of “mothering”).
  • Make a concerted effort to learn your students’ names AND pronunciations. Even if it takes you a few tries, it is a meaningful way to show your students you care about them as individuals.
  • Highlight the important historical and current contributions to your field made by scientists belonging to underrepresented groups.
  • Limit barriers to learning. You will likely have a list of your own, but here are a few I’ve compiled:
    • Provide lecture materials before class so that students can take notes on them during class.
    • Use a microphone to make sure all students can hear you clearly.
    • Consider using Dyslexie font on your slides to make it easier for dyslexic students to read them.
    • Speak slowly and limit your use of contractions so that non-native-English speakers can understand you more easily.
    • Write bullet points on the board that remain there for the whole class period, including the main points for that lecture, important dates coming up, and key assignments.
    • Be sensitive to students whose first language is not English and don’t punish them unnecessarily for misusing idioms.

As a final parting message, always try to be mindful of your students’ needs, but know that you don’t have everything figured out at the outset. Make time to reevaluate your approach, class materials, and activities to see where improvements can be made. Challenge yourself to continually improve and hone better practices. Listen to your students, and be mindful with the feedback you ask them to give you in mid-semester and/or course evaluations.

For more information, I recommend the following resources:

  1. Davis, BG. “Diversity and Inclusion in the Classroom.” Tools for Teaching (2nd Ed). San Francisco: Jossey-Bass, A Wiley Imprint. p 57 – 71. Print.
  2. Eredics, Nicole. “16 Inclusive Education Blogs You Need to Know About!” The Inclusive Class, 2016 July 27. http://www.theinclusiveclass.com/2016/07/16-inclusive-education-blogs-you-need.html
  3. Handelsman J, Miller S, Pfund C. “Diversity.” Scientific Teaching. New York: W. H. Freeman and Company, 2007. p 65 – 82. Print.
  4. “Instructional Strategies: Inclusive Teaching and Learning.” The University of Texas at Austin Faculty Innovation Center. https://facultyinnovate.utexas.edu/inclusive

Laura Weise Cross is an Assistant Professor of Biology at Millersville University, beginning in the fall of 2019, where she will be teaching courses in Introductory Biology, Anatomy & Physiology, and Nutrition. Laura received a B.S. in Biochemistry from the University of Texas and a Ph.D. in Molecular and Cellular Pathology from the University of North Carolina. She recently completed her post-doctoral training in the Department of Cell Biology & Physiology at the University of New Mexico, where she studied the molecular mechanisms of hypoxia-induced pulmonary hypertension. Laura’s research is especially focused on how hypoxia leads to structural remodeling of the pulmonary vessel wall, which is characterized by excessive vascular smooth muscle cell proliferation and migration. She looks forward to engaging undergraduate students in these projects in her new research lab.

How to motivate students to come prepared for class?

The flipped classroom is a teaching method in which the first exposure to the subject occurs in an individual learning space and time, and the application of content is practiced in an interactive, guided group space. Freeing up class time by shifting traditional lecture outside of class allows the instructor more time for student-centered activities and formative assessments, which benefit students. The flipped teaching model has been shown to benefit students as it allows self-pacing, encourages students to become independent learners, and helps them remain engaged in the classroom. In addition, students can access content anytime and from anywhere. Furthermore, collaborative learning and peer tutoring can be integrated into the freed-up class time with this student-centered approach. Consistent with these benefits, the flipped teaching method has been shown to improve student performance compared to traditional lecture-based teaching. In contrast to the flipped classroom, the traditional didactic lecture is a passive type of delivery in which students may be hesitant to ask questions and may omit key points while trying to write or type notes.

There are two key components in the flipped teaching model: pre-class preparation by students and in-class student-centered activities. Both steps involve formative assessments to hold students accountable. The pre-class assessment mainly encourages students to complete their assignments so that they are better prepared for the in-class application of knowledge. In-class activities involve application of knowledge in a collaborative space with the guidance of the instructor. Although the flipped teaching method is highly structured, some students still come to class unprepared.

Retrieval practice is yet another powerful learning tool in which learners are expected to recall information after being exposed to the content. Recalling information from memory strengthens retention and makes forgetting less likely. Retrieval of information strengthens skills through long-term, meaningful learning. Repeated retrieval through exercises involving inquiry of information has been shown to improve learning.

The use of a retrieval strategy in pre-class assessments is expected to increase the chance of students completing their pre-class assignment, which is often a challenge. Students who attend a flipped class without any exposure to the pre-class assignment will find their in-class performance drastically affected. In my flipped classroom, a quiz consisting of lower-level Bloom’s taxonomy questions is given on the pre-class assignment; students are not expected to utilize any resources or notes but to answer the questions from their own knowledge. Once this exercise is completed, a review of the quiz and the active learning portion of the class occurs. I use a modified team-based learning activity in which the groups begin answering higher-order application questions. Again, no resources are accessible during this activity, to promote preparation beforehand. Since it is a group activity, if one student is not prepared, other students may fill this gap. The group typically engages every student, and there is a rich conversation about the topic being discussed in class. The classroom becomes a perfect place for collaborative learning and peer tutoring. For rapid feedback to the students, the groups’ answers to the application questions are discussed with the instructor prior to the end of the class session.

Student preparation has improved since I incorporated the flipped teaching model along with retrieval exercises into my teaching, but there are always some students who are not motivated to come prepared to class. Some students may face other constraints that we will not be able to fix, but we will continue to search for and develop newer strategies to help these students maximize their learning.

Dr. Gopalan received her PhD in Physiology from the University of Glasgow, Scotland. After completing two years of postdoctoral training at Michigan State University, she began her teaching endeavor at Maryville University where she taught Advanced Physiology and Pathophysiology courses in the Physical Therapy and Occupational Therapy programs as well as the two-semester sequence of Human Anatomy and Physiology (A&P) courses to Nursing students. She later joined St. Louis Community College where she continued to teach A&P courses. Dr. Gopalan also taught at St. Louis College of Pharmacy prior to her current faculty position at Southern Illinois University Edwardsville where she teaches Advanced Human Physiology and Pathophysiology for the doctoral degrees in the Nurse Anesthetist and Nurse Practitioner programs. Besides teaching, she has an active research agenda in teaching as well as in the endocrine physiology field she was trained in.
Questioning How I Question

For some, “assessment” is a dirty word, conjuring visions of rubrics, accreditation reports, and piles of data.  Readers of this blog hopefully do not have this vantage point, thanks in part to some great previous posts on this topic and an overall understanding of how assessment is a critical component of best practices in teaching and learning.  Yet, even as a new(ish) faculty member who values assessment, I still struggle with trying to best determine whether my students are learning and to employ effective and efficient (who has time to spare?!) assessment strategies.  Thus, when a professional development opportunity was offered on campus to do a book read of “Fast and Effective Assessment: How to Reduce Your Workload and Improve Student Learning” by Glen Pearsall, I quickly said “Yes! Send me my copy!”


Prior to the first meeting of my reading group, I dutifully did my homework of reading the first chapter (much like our students often do, the night before…).  Somewhat to my surprise, the book doesn’t start by discussing creating formal assessments or how to effectively grade and provide feedback.  Rather, as Pearsall points out, “a lot of the work associated with correction is actually generated long before students put pen to paper. The way you set up and run a learning activity can have a profound effect on how much correction you have to do at the end of it.” The foundation of assessment, according to Pearsall, is questioning technique.


Using questions to promote learning is not a new concept and most, even non-educators, are somewhat familiar with the Socratic Method.  While the simplified version of the Socratic Method is thought of as using pointed questions to elicit greater understanding, more formally, this technique encourages the student to acknowledge their own fallacies and then realize true knowledge through logical deduction[1],[2].  Compared to the conversations of Socrates and Plato 2+ millennia ago, modern classrooms not only include this dialectic discourse but also other instructional methods such as didactic, inquiry, and discovery-based learning (or some version of these strategies that bears a synonymous name).  My classroom is no different — I ask questions all class long, to begin a session (which students answer in writing to prime them into thinking about the material they experienced in preparation for class), to work through material I am presenting (in order to encourage engagement), and in self-directed class activities (both on worksheets and as I roam the room).  However, it was not until reading Pearsall’s first chapter that I stopped to question my questions and reflect on how they contribute to my overall assessment strategy.


Considering my questioning technique in the context of assessment was a bit of a reversal in thinking.  Rather than asking questions only to facilitate learning (wouldn’t Socrates be proud!), I could consider my questions as providing important feedback on whether students were learning (AKA…Assessment!).  Accordingly, the most effective and efficient questions would be ones that gather more feedback in less time.  Although Pearsall focuses more on the K-12 classroom, I think many of his suggestions[3] apply to my undergraduate physiology classes too.  Below is a brief summary of some strategies for improving questioning technique, organized by fundamental questions:


How do I get more students to participate?

  • We can “warm up” cold calling to encourage participation through activities like think-pair-share, question relays, scaffolding answers, and framing speculation.
  • It is important to give students sufficient thinking time through fostering longer wait and pause times. Pre-cueing and using placeholder or reflective statements can help with this.

How do I elicit evidentiary reasoning from students?

  • “What makes you say that?” and “Why is _____ correct?” encourages students to articulate their reasoning.
  • Checking with others and providing “second drafts” to responses emphasizes the importance of justifying a response.

How do I sequence questions?

  • The right question doesn’t necessarily lead to better learning if it’s asked at the wrong time.
  • Questions should be scaffolded so that depth and complexity develop (i.e. detail, category, elaboration, evidence).

How do I best respond to student responses?

  • Pivoting, re-voicing, and cueing students can help unpack incorrect and incomplete answers as well as build and explore correct ones.

How do I deal with interruptions?

  • Celebrating good practices, establishing rules for discussion, making it safe to answer and addressing domineering students can facilitate productive questioning sessions.


After reviewing these strategies, I’ve realized a few things.  First, I was already utilizing some of these techniques, perhaps unconsciously, or as a testament to the many effective educators I’ve learned from over the years.  Second, I fall victim to some questioning pitfalls, such as not providing enough cueing information and, more often than I would like, leaving students to try their hand at mind-reading what I’m trying to ask.  Third, the benefits of better questioning are real.  Although only anecdotal and over a small sampling period, I have observed that by reframing certain questions, I am better able to determine whether students have learned and to identify what they may be missing.  As I work to clean up my assessment strategies, I will continue to question my questions, and encourage my colleagues to do the same.


1Stoddard, H.A. and O’Dell, D.A. Would Socrates Have Actually Used the Socratic Method for Clinical Teaching? J Gen Intern Med 31(9):1092–6. 2016.

2Oyler, D.R. and Romanelli, F. The Fact of Ignorance: Revisiting the Socratic Method as a Tool for Teaching Critical Thinking. Am J of Pharm Ed; 78 (7) Article 144. 2014.

3A free preview of the first chapter of Pearsall’s book is available here.

Anne Crecelius (@DaytonDrC) is an Assistant Professor in the Department of Health and Sport Science at the University of Dayton where she won the Faculty Award in Teaching in 2018.  She teaches Human Physiology, Introduction to Health Professions, and Research in Sport and Health Science. She returned to her undergraduate alma mater to join the faculty after completing her M.S. and Ph.D. studying Cardiovascular Physiology at Colorado State University.  Her research interest is in the integrative control of muscle blood flow.  She is a member of the American Physiological Society (APS), serving on the Teaching Section Steering Committee and will chair the Communications Committee beginning in 2019.  In 2018, she was awarded the ADInstruments Macknight Early Career Innovative Educator Award.
Likely or unlikely to be true? I like to have students hypothesize

Throughout my science education career, if I was asked what I do, I responded “I write standardized tests.” Let me assure you, this doesn’t win you too many fans outside of science education assessment circles. But in my opinion, there is nothing better to help one develop an understanding and intuition about how students learn than interviewing hundreds of students, listening to their thinking as they reason through questions.


When I listen to students think aloud as they answer questions, I learn a lot about what they know and about their exam-taking processes too. For example, while interviewing a student on a multiple true-false format physiology question, the student answered all the true-false statements then said “Wait, let me go back. There’s always some exception I might be missing.” For this student, physiology always broke the rules and the exams they typically took tried to test whether they knew the exceptions. Although my intention for the question was to have the students apply general conceptual knowledge, the student, like most others I interviewed, was instead spending a lot of time making sure they had recalled all the right information. Eventually, moments like this led to a simple change in question format that created an interesting shift in the way questions elicited thinking from faculty and students alike.


The interview mentioned above occurred during the process of writing a programmatic physiology assessment, Phys-MAPS.2 The goal of this assessment and the others in the suite of Bio-MAPS assessments was to build tools that could measure student learning across biology majors. Our working team3 and I chose to build all the assessments using a multiple true-false format, where for each question, a short scenario is described, followed by several (often 4-6) statements about the scenario that students identify as either true or false. We chose this format for its high utility in assessing how students can hold both correct and incorrect ideas about a topic simultaneously,4 which is highly pertinent to learning across a major. In addition, the multiple true-false format has the benefit of facilitating easy and quick grading for a large number of students while still allowing for a rich understanding of student thinking comparable to essay assessments.5

Example of Modified Multiple True-False Design (from a question similar to but not on the Phys-MAPS)

However, as I was creating the physiology-specific assessment and Dr. Mindi Summers was creating the ecology-evolution-specific assessment, we ran into challenges when writing statements that needed to be absolutely “true” or “false.” Sometimes we had to write overly complex scenarios for the questions because too many constraints were needed for a “true” or “false” answer. In addition, discipline experts were refusing to ever say something was “true” or “false” (especially, but not solely, the evolutionary biologists). Thus, many of our statements had to be re-written as something that was “likely to be true” or “unlikely to be true”, making the statements bulky and long.


Dr. Summers was the first to bring up in our working group meeting the idea of modifying the true-false format. She suggested changing the prompt. What initially read “Based on this information and your knowledge of biology, evaluate each statement as true or false,” became “Based on this information and your knowledge of biology, evaluate each statement as likely or unlikely to be true.” I was instantly sold. I thought back to the student who had spent so much extra time trying to search her brain for the exceptions to the general rules. Surely, this was going to help!
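To make the format concrete, here is a minimal sketch in Python of how a modified multiple true-false item might be represented and scored, with each statement graded independently. The scenario, statements, answer keys, and field names below are hypothetical illustrations, not actual Phys-MAPS content.

```python
# Hypothetical multiple true-false item using the modified "likely/unlikely to be true" prompt.
# The scenario, statements, and keys are illustrative only, not actual Phys-MAPS content.
item = {
    "scenario": "A patient's arterial PO2 falls during an acute asthma attack.",
    "prompt": ("Based on this information and your knowledge of biology, "
               "evaluate each statement as likely or unlikely to be true."),
    "statements": [
        ("Ventilation of some alveoli is reduced.", "likely"),
        ("Hemoglobin saturation rises above resting levels.", "unlikely"),
        ("Peripheral chemoreceptor firing increases.", "likely"),
    ],
}

def score_item(responses, item):
    """Grade each statement independently; the item score is the fraction answered correctly."""
    keys = [key for _, key in item["statements"]]
    correct = sum(resp == key for resp, key in zip(responses, keys))
    return correct / len(keys)

print(score_item(["likely", "likely", "likely"], item))  # 2 of 3 statements correct -> ~0.67
```

Because each statement is scored on its own, the format keeps the quick, objective grading of multiple choice while still revealing which specific ideas a student holds correctly or incorrectly.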


It did. For starters, the discipline experts we were consulting were much more inclined to agree the answers were scientifically accurate. And for good reason! We science experts do not often work in the absolutes of “true” and “false”. In fact, I’m pretty sure a whole field of math was created for exactly this reason. I also saw a difference in how students responded to the new language. In my interviews, I noticed students took considerably less time on the assessment and I never again heard a student stop to try to remember all the exceptions they might know. Better yet, I started hearing language that reflected students were applying knowledge rather than trying to remember facts. For example, in the previous true-false format, I often heard “Oh, I just learned this,” and then I would watch the student close their eyes and agonize trying to remember a piece of information, when all the information they needed to answer the question was right in front of them. With the new “likely or unlikely to be true” format, I was hearing more “well that’s generally true, so I think it would work here too.” It appeared that students had shifted to a more conceptual rather than factual mindset.


But what really convinced me that we were on to something worthwhile was the awareness of some students of what they were truly being asked to do. “Wait, so basically what you want me to do is hypothesize whether this would be true [in this new scenario] based on what I already know?” YES!!! (I do my inner happy dance every time.)


We educators hear the message from a million places that we should teach science as we do science. I maintain that this should count towards how we assess science knowledge and skills too, asking students to apply their knowledge in new contexts where there is no known answer. But when science explores the unknown, how do you ask about the unknown and still have a right answer to grade? (Easily, on a scantron, that is.) As scientists, we use our knowledge to make predictions all the time, not thinking that our hypotheses will absolutely be true, but that they are the most likely outcome given what we already know. Why not show our students how much we value that skill by asking them to do the same?


1 Answer: Likely to be true.

2 More information about the Phys-MAPS and all of the Bio-MAPS programmatic assessments can be found on: http://cperl.lassp.cornell.edu/bio-maps

3 The Bio-MAPS working group includes: Drs. Michelle Smith, Jennifer Knight, Alison Crowe, Sara Brownell, Brian Couch, Mindi Summers, Scott Freeman, Christian Wright and myself.

4 Couch, B. A., Hubbard, J. K., and Brassil, C. E. (2018). Multiple–true–false questions reveal the limits of the multiple–choice format for detecting students with incomplete understandings. BioScience 68, 455–463.

5 Hubbard, J. K., Potts, M. A., and Couch, B. A. (2017). How question types reveal student thinking: An experimental comparison of multiple-true-false and free-response formats. CBE Life Sci. Educ.

Dr. Katharine (Kate) Semsar finally found a job that uses all her diverse training across ecology, physiology, genetics, behavioral biology, neuroscience, science education, and community building. Kate is the Assistant Director of STEM Programming for the Miramontes Arts & Sciences Program (MASP), an academic community for underrepresented students in the College of Arts & Sciences at the University of Colorado Boulder.

She received her PhD from North Carolina State University and continued her training at University of Pennsylvania. She then became a science education specialist with the Science Education Initiative in the Integrative Physiology department at the University of Colorado Boulder, studying how students learn and collaborating with faculty to incorporate fundamental principles of learning in their courses. She continued her science education research with the Bio-MAPS team before finally landing in her dream career, teaching and mentoring students in MASP. Despite the career shift, she still loves watching people’s reactions when she tells them she used to write standardized assessments.

Thinking Critically About Critical Thinking


A few mornings ago, I was listening to a television commercial as I got ready for work.  “What is critical thinking worth?” said a very important announcer.  “A whole lot” I thought to myself.

But what exactly is critical thinking?  A Google search brings up a dictionary definition.  Critical thinking is “the objective analysis and evaluation of an issue to form a judgement.”  The example sentence accompanying this definition is “professors often find it difficult to encourage critical thinking among their students.” WOW, took the words right out of my mouth!

Have any of you had the following conversation? “Dr. A, I studied and studied for this exam and I still got a bad grade.  I know the material, I just can’t take your tests!”  The student in question has worked hard. He or she has read the course notes over and over, an activity that has perhaps been rewarded with success in the past.  Unfortunately re-reading notes and textbooks over and over is the most common and least successful strategy for studying (4).

In my opinion, as someone who has been teaching physiology for over 20 years, physiology is not a discipline that can be memorized.  Instead, it is a way of thinking and a discipline that has to be understood.

Over the years, my teaching colleague Sue Keirstead and I found ourselves during office hours trying repeatedly to explain to students what we meant by thinking critically about physiology.  We asked the same probing questions and drew the same diagrams over and over.  We had the opportunity to formalize our approach in a workbook called Cells to Systems Physiology: Critical Thinking Exercises in Physiology (2).  We took the tough concepts students brought to office hours and crafted questions to help the students work their way through these concepts.

Students who perform well in our courses make use of the workbook and report in student evaluations that they find the exercises helpful. But we still have students who struggle with the critical thinking exercises and the course exams.  According to comments in student evaluations, students who struggled with the exercises found the questions too open ended.  Furthermore, many of the answers cannot be pulled directly from the textbook, or at least not in the format students expect the answer to be in, and they report finding this frustrating.  For example, the text may discuss renal reabsorption and renal secretion in general, and then the critical thinking exercise asks the student to synthesize all the processes occurring in the proximal tubule.  The information is the same but the organization is different.  It turns out this is a difficult process for our students to work through.

We use our critical thinking exercise as a type of formative assessment, a low stakes assignment that evaluates the learning process as it is occurring.  We also use multiple choice exams as summative assessments, high stakes assessments that evaluate learning after it has occurred.  We use this format because our physiology course enrollment averages about 300 students and multiple choice exams are the most efficient way to assess the class.  We allow students to keep the exam questions and we provide a key a couple of days after the exam is given.

When a student comes to see me after having “blown” an exam, I typically ask him or her to go through the exam, question by question.  I encourage them to try to identify how they were thinking when they worked through the question.  This can be a very useful diagnostic.  Ambrose and colleagues have formalized this process as a handout called an exam wrapper (1).  Hopefully, by analyzing their exam performance, the student may discover a pattern of errors that they can address before the next exam.  Consider some of the following scenarios:

Zach discovers that he was so worried about running out of time that he did not read the questions carefully.  Some of the questions reminded him of questions from the online quizzes.  He did know the material but he wasn’t clear on what the question was asking.

This is a testing issue. Zach, of course, should slow down.  He should underline key words in the question stem or draw a diagram to make sure he is clear on what the question is asking.

Sarah discovers that she didn’t know the material as well as she thought she did, a problem that is called the illusion of knowing (3). Sarah needs to re-evaluate the way she is studying.  If Sarah is cramming right before the exam, she should spread out her studying along with her other subjects, a strategy called interleaving (3).  If she is repeatedly reading her notes, she should put her notes away, get out a blank piece of paper and write down what she remembers to get a gauge of her knowledge, a process called retrieval (3).  If she is using flash cards for vocabulary, she should write out learning objectives in her own words, a process called elaboration (3).

Terry looks over the exam and says, “I don’t know what I was thinking.  I saw something about troponin and I picked it.  This really frustrates me. I study and study and don’t get the grade I want.  I come to lecture and do all the exercises. I don’t know what else to do.” It is a challenge to help this student.  She is not engaging in any metacognition and I don’t claim to have any magic answers to help this student.  I still want to try to help her.

I feel very strongly that students need to reflect on what they are learning in class, on what they read in their texts, and on the activities performed in lab (3).  I have been working on a project in one of my physiology courses in which I have students take quizzes and exams as a group and discuss the answers collaboratively.  Then I have them write about what they were thinking as they approached the question individually and what they discussed in their group.  I am hoping to learn some things about how students develop critical thinking skills.  I hope I can share what I learn in a future blog posting.

  1. Ambrose SA, Bridges MW, DiPietro M, Lovett M, Norman MK. How Learning Works: Seven Research-Based Principles for Smart Teaching. San Francisco, CA: Jossey-Bass, 2010.
  2. Anderson LC, Keirstead SA. Cells to Systems: Critical Thinking Exercises in Physiology (3rd ed). Dubuque, IA: Kendall Hunt Press, 2011.
  3. Brown PC, Roediger HL, McDaniel MA. Make it Stick: The Science of Successful Learning. Cambridge, MA: The Belknap Press of Harvard University Press, 2014.
  4. Callender AA, McDaniel MA. The limited benefits of rereading educational text. Contemporary Educational Psychology 34:30-41, 2009.


Lisa Carney Anderson, PhD is an Assistant Professor in the Department of Integrative Biology and Physiology at the University of Minnesota. She completed training in muscle physiology at the University of Minnesota. She collaborates with colleagues in the School of Nursing on clinical research projects such as the perioperative care of patients with Parkinson’s disease and assessment of patients with spasticity. She directs a large undergraduate physiology course for pre-allied health students.  She also teaches nurse anesthesia students, dental students and medical students.  She is the 2012 recipient of the Didactic Instructor of the Year Award from the American Association of Nurse Anesthetists.  She is a co-author of a physiology workbook called Cells to Systems: Critical Thinking Exercises in Physiology, Kendall Hunt Press. Dr. Anderson’s teaching interests include teaching with technology, encouraging active learning and assessment of student reflection.
The Surprising Advantages of Retrieval Practice

Retrieval practice,  retrieval __________,    _________ practice,  testing effect……wuh?!?!

Retrieval practice simply means to actively recall information following exposure (e.g., studying). Because tests are a particularly common and effective means by which to prompt the retrieval of specific pieces of information, the learning benefits of retrieval practice are also known as the testing effect. That is, effective tests can do more than simply assess learning; they can strengthen learning by prompting retrieval. It is important to clarify that the key to the testing effect is the retrieval and not the test per se. Therefore, the testing effect pertains to not only traditional assessments like tests and quizzes, but also to free recall. So, silently answering questions in your mind (e.g., self-testing) is an example of testing that promotes learning.

A landmark study by Roediger and Karpicke (2006a)

Figure 1. Repeated testing led to better long-term recall when compared to repeated studying. Roediger and Karpicke, 2006a.

Although the testing effect has been described by studies that date back more than a century, researchers and articles often cite a study by Roediger and Karpicke (2006a) as the source of renewed interest in the strategy and effect. In that study, the investigators asked three groups of undergraduates to read passages that were about 250 words long. One group of students learned the passages by studying (i.e., reading) them four times (SSSS group). A second group learned the passages by studying them three times and then completing a test in which they were prompted to retrieve information from the passages (SSST group). The last group studied the passages just one time and then performed the retrieval test three times (STTT group). All three groups were given a total of 20 minutes to learn each passage, following which their retention was assessed via free recall either 5 minutes or 1 week later. As you can see in Figure 1, there was a modest advantage with the SSSS strategy, as well as a modest disadvantage with the STTT strategy, immediately after learning the passages. However, the exact opposite pattern was observed one week later, as the STTT group’s recall scores were about 5% and 21% higher than those of the SSST and SSSS groups, respectively. The results of this study demonstrated that testing/retrieval practice can be a powerful means of improving long-term memory. These advantages to long-term recall have subsequently been confirmed by many different researchers and investigations (see Roediger and Butler 2011; Roediger and Karpicke, 2006b for review).

Retrieval practice and the ability to make inferences; it isn’t just about simple recall

Figure 2. Retrieval practice resulted in higher scores on verbatim and inferential questions. Derived from Karpicke and Blunt, 2011.

One might be concerned that retrieval practice is just a form of drill and practice that merely teaches people to produce a fixed response to a specific cue. Karpicke and Blunt (2011) addressed this concern by comparing the effects of retrieval practice and concept mapping on meaningful learning, which includes the ability to draw conclusions and create new ideas. The investigators chose concept mapping for this comparison because it is known to promote elaborative (i.e., complex) learning. In one experiment, one group of students learned a science text by repeatedly reading (i.e., studying) it, another group studied the text and then used it to create a concept map, and a third group studied and then recalled the text two times. The total amount of time the concept mapping and retrieval practice groups were given to learn the text was standardized. The students returned the following week and completed a short-answer test that included both questions that could be answered verbatim from the text and questions that required inferences. As is displayed in Figure 2, the retrieval practice strategy resulted in superior scores not just on the verbatim questions, but also on the inference questions. That is, the advantages of retrieval practice extended beyond simple recall to meaningful learning. These findings are supported by numerous other investigations (see Karpicke and Aue, 2015 for review), including a subsequent study by the same authors (Blunt and Karpicke, 2014).

Okay, so retrieval practice has been shown to enhance recall and meaningful learning, but does it work with the types of information that are relevant to APS members?

Figure 3. The testing strategy resulted in superior performance on both sections of the six month assessment. Derived from Larsen, Butler and Roediger, 2009.

Yes…numerous studies support this claim. One notable example was a study by Larsen, Butler and Roediger (2009) in which two groups of medical residents first attended lectures on the treatments of both status epilepticus and myasthenia gravis. Immediately after the lectures, and then again about two and four weeks later, the residents studied (i.e., read) a review sheet pertaining to the treatment of one of those diseases and completed a retrieval test, with feedback, on the other treatment. Roughly six months after the lectures, the residents completed a final assessment that covered the treatment of both diseases. As you can see in Figure 3, the testing strategy resulted in scores that were about 11% and 17% higher than those associated with the studying strategy on the status epilepticus and myasthenia gravis sections, respectively. It is also worth noting that the overall effect size pertaining to those differences was large (Cohen’s d = 0.91). The same group of researchers went on to publish similar findings with groups of first-year medical students (Larsen et al, 2013). In that follow-up study, a testing-based strategy produced superior recall and greater transfer of learning of four clinical neurology topics six months after the students first encountered them.
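For readers less familiar with effect sizes, Cohen’s d expresses the difference between two group means in units of their pooled standard deviation; by common convention, values near 0.2, 0.5, and 0.8 are considered small, medium, and large. A minimal statement of the standard definition (the general formula, not the specific computation reported by Larsen and colleagues):

$$ d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\mathrm{pooled}}}, \qquad s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}} $$

By that convention, a d of 0.91 means the testing condition outscored the studying condition by nearly a full pooled standard deviation.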

Our lab has also recently published numerous studies with relevant materials, and we observed several advantages with retrieval practice compared to more commonly-used reading and note-taking learning strategies. For example, we found that retrieval-based strategies resulted in superior recall of exercise physiology (Linderholm, Dobson and Yarbrough, 2016) and anatomy and physiology course information (Dobson and Linderholm, 2015a; Dobson and Linderholm, 2015b), including information that consisted of concepts and terminology that were previously unfamiliar to the students (Dobson, Linderholm and Yarbrough, 2015). We have also observed advantages to independent student learning that resulted in higher scores on course exams (Dobson and Linderholm, 2015a), as well as to the ability to synthesize themes from multiple sources (Linderholm, Dobson and Yarbrough, 2016), which is a skill that requires higher orders of cognition.

Just give me the take home messages.

  • Dozens of studies have demonstrated that retrieval practice can promote superior recall and meaningful learning when compared to more commonly-used strategies like reading (Karpicke and Aue, 2015; Roediger and Butler, 2011; Roediger and Karpicke, 2006b).
  • Although some studies have provided evidence that essay and short answer (SA) questions can lead to a greater testing effect than multiple choice (MC) questions (Roediger and Karpicke, 2006b; Butler and Roediger, 2007), a recent study by Smith and Karpicke (2014) indicated that MC and SA questions are equally effective.
  • Multiple repetitions of retrieval practice promote more learning than a single retrieval event (Roediger and Butler, 2011; Roediger and Karpicke, 2006b).
  • The benefits of retrieval practice are enhanced if learners receive feedback after they retrieve (Roediger and Butler, 2011; Roediger and Karpicke, 2006b).

Great, but how do you apply retrieval practice in the classroom?

  • Summative assessments. Tests prompt retrieval, so one way to incorporate more retrieval practice into your classes is to have your students complete both more exams and more cumulative exams.
  • Formative assessments. There are numerous reasons to use low-stakes assessments like quizzes instead of tests. Quizzes may be just as effective at prompting retrieval, and they provide valuable feedback about performance to both instructors and students, while typically eliciting less anxiety and encouraging less cheating. Suggested applications include starting class meetings with a short quiz that prompts students to retrieve information that will be developed during the lecture and/or ending class meetings with a short quiz that gets students to retrieve the important take-home messages of the lecture.
  • In-class retrieval assignments. A great way to break up the monotony of lectures is to have students complete retrieval assignments during class meetings. For example, have individuals or groups of students retrieve information and then present it to the rest of the class.
  • Encourage students to use retrieval practice outside of class. One of the greatest benefits of retrieval practice is that it is easy to use; all one needs to do is recall information from memory. I encourage my students to use retrieval practice by first presenting to them some of the evidence of its effectiveness (described above), and then by suggesting some methods they may use to employ the strategy (e.g., take turns quizzing or teaching fellow students, quiz oneself, or simply freely recall portions of the information). Again, it is important to emphasize that multiple retrieval events are more beneficial, and that each or most of those should include feedback. For example, have students study, then retrieve, then study again to receive feedback, and so on.

 References

  1. Dobson JL, Linderholm T, Yarbrough MB. Self-testing produces superior recall of both familiar and unfamiliar muscle information. Advances in Physiology Education 39: 309-314, 2015
  2. Dobson JL and Linderholm T, The effect of selected “desirable difficulties” on the ability to recall anatomy information. Anatomical Sciences Education 8: 395-403, 2015.
  3. Dobson JL, Linderholm T. Self-testing promotes superior retention of anatomy and physiology information. Advances in Health Sciences Education 20: 149-161, 2015.
  4. Butler AC, Roediger HL. Testing improves long-term retention in a simulated classroom setting. European Journal of Cognitive Psychology 19: 514-527, 2007.
  5. Blunt JR, Karpicke JD. Learning with retrieval-based concept mapping. Journal of Educational Psychology 106: 849, 2014.
  6. Dobson JL, Perez J, Linderholm T. Distributed retrieval practice promotes superior recall of anatomy information. Anatomical Sciences Education DOI: 10.1002/ase.1668, 2016.
  7. Karpicke JD, Aue, WR. The testing effect is alive and well with complex materials. Educational Psychology Review 27: 317-326, 2015.
  8. Karpicke JD, Blunt JR. Retrieval practice produces more learning than elaborative studying with concept mapping. Science 331: 772-775, 2011.
  9. Larsen DP, Butler AC, Roediger HL. Repeated testing improves long-term retention relative to repeated study: A randomized controlled trial. Medical Education 43: 1174-1181, 2009.
  10. Larsen DP, Butler AC, Lawson AL, Roediger HL. The importance of seeing the patient: Test-enhanced learning with standardized patients and written tests improves clinical application of knowledge. Advances in Health Sciences Education 18: 409-25, 2013.
  11. Linderholm T, Dobson JL, Yarbrough MB. The benefit of self-testing and interleaving for synthesizing concepts across multiple physiology texts. Advances in Physiology Education 40: 329-34, 2016.
  12. Roediger HL, Butler AC. The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences 15: 20-27, 2011.
  13. Roediger HL, Karpicke JD. Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science 17: 249-255, 2006.
  14. Roediger HL, Karpicke JD. The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science 1: 181-210, 2006.
  15. Smith MA, Karpicke JD. Retrieval practice with short-answer, multiple-choice, and hybrid tests. Memory 22: 784-802, 2014.
John Dobson is an Associate Professor in the School of Health and Kinesiology at Georgia Southern University. John received his M.S. and Ph.D. in Exercise Physiology at Auburn University. Although most of his research has focused on the application of learning strategies that were developed by cognitive scientists, he has also recently published articles on peripheral neuropathy and concussion-induced cardiovascular dysfunction. He teaches undergraduate and graduate Anatomy and Physiology, Structural Kinesiology, Exercise Physiology, and Cardiovascular Pathophysiology courses. He has been an active member of the American Physiological Society since 2009, and he received the Teaching Section’s New Investigator Award in 2010 and Research Recognition Award in 2011.
Critical thinking or traditional teaching for Health Professions?

“Education is not the learning of facts but the training of the mind to think” – Albert Einstein

A few years ago I moved from a research laboratory to the classroom. Until then, I had been accustomed to examining ideas and trying to find solutions by experimenting and challenging the current knowledge in a given area. In the classroom setting, however, the students seemed to want only to learn facts, with no room for alternative explanations or challenges. This is not the way a clinician should be trained, I thought, and I started looking in textbooks, teaching seminars, and workshops for alternative teaching methods. I quickly learned that teaching critical thinking skills is the preferred method in higher education for developing highly qualified professionals.

Why critical thinking? Critical thinking is one of the most important attributes we expect of students in postsecondary education, especially of highly qualified professionals in Health Care, where critical thinking provides the tools to solve unconventional problems that may arise. I teach Pathophysiology in Optometry and, as in other health professions, not all clinical cases are identical; therefore, applying and adapting the accumulated body of knowledge to different scenarios is crucial for developing clinical skills. Because critical thinking is considered essential for patient care, it is fostered in many health sciences educational programs and integrated into the Health Professions Standards for Accreditation.

But what is critical thinking? It is accepted that critical thinking is a process that encompasses conceptualization, application, analysis, synthesis, evaluation, and reflection. What we expect from a critical thinker is to:

  • Formulate clear and precise vital questions and problems;
  • Gather, assess, and interpret relevant information;
  • Reach relevant well-reasoned conclusions and solutions;
  • Think open-mindedly, recognizing their own assumptions;
  • Communicate effectively with others on solutions to complex problems.

However, some educators emphasize the reasoning process, while others focus on the outcomes of critical thinking. Thus, one of the biggest obstacles to teaching critical thinking properly is the lack of a clear definition, as observed by Allen et al. (1) when teaching clinical critical thinking skills. Faculty first need to define what they consider critical thinking to be before they attempt to teach it or evaluate student learning outcomes. Keep in mind, however, that not all students will be good at critical thinking, and not all teachers are able to teach critical thinking skills.

The experts in the field have classically agreed that critical thinking includes not only cognitive skills but also an affective disposition (2). I consider that it mostly relies on the use of known facts in a way that enables analysis of and reflection on conventional and unconventional cases in the future. I have recently experimented with reflection on pathophysiological concepts, and I have come to realize that reflection is an integral part of the health professions. We cannot convey just pieces of information based on accumulated experience; we have to reflect on it. Some studies have demonstrated that reflective thinking predicts achievement to a greater extent than habitual action. However, those may not be the key elements of critical thinking that you choose to focus on.

How do we achieve critical thinking in higher education and the Health Professions? Once we have defined what critical thinking means to us, it must be present at all times when designing a course, from learning objectives to assignments. We cannot expect to contribute to the development of critical thinking skills if the course is not designed to support it. According to the Delphi study conducted by the American Philosophical Association (3), the essential elements of lessons designed to promote critical thinking are the following:

  1. “Ill-structured problems,” which do not have a single right answer; they are based on reflective judgment and leave conclusions open to future information.
  2. “Criteria for assessment of thinking” include clarity, accuracy, precision, relevance, depth, breadth, logic, significance, and fairness (Paul & Elder, 2001).
  3. “Students’ meaningful and valid assessment of their own thinking,” as they are held accountable for it.
  4. “Improving the outcomes of thinking” such as in writing, speaking, reading, listening, and creating.

There are a variety of examples that serve as models for judging whether a course contains critical thinking elements and for designing its learning objectives. They can, however, be summarized in the statement that “thinking is driven by questions”. We need to ask questions that generate further questions to develop the thinking process (4). By posing questions with thought-stopping answers we are not building a foundation for critical thinking. We can examine a subject simply by asking students to generate a list of questions that they have regarding the subject provided, including questions generated by their first set of questions. Questions should be deep, to foster dealing with complexity and to challenge assumptions, points of view, and sources of information. Such thought-stimulating questions should include questions of purpose, of information, of interpretation, of assumption, of implication, of point of view, of accuracy and precision, of consistency, of logic, etc.

However, how many of you just get the question, “Is this going to be on the test?” Students often do not want to think; they want everything to be thought out for them already, and teachers may not be the best at generating thoughtful questions.

As an inexperienced research educator trying to survive in this new environment, I fought against the urge to help the students become critical thinkers, and I provided answers rather than promoting questions. I thought I just wanted to give traditional lectures. Unconsciously, however, I was including critical thinking during lectures by using clicker questions and asking about scenarios with more than one possible answer. Students were not very happy, but the fact that those questions were not graded and were instead used as interactive tools minimized the resistance to them. The most competitive students would try to answer them correctly and generate additional questions, while the most traditional students would just answer, no questions asked. I implemented this method in all my courses, and I started to give critical thinking assignments. The students had to address a topic, and to promote critical thinking, a series of questions was included as a guide in the rubric. The answers were not easily found in textbooks, and the assignments generated plenty of additional questions. As always, it did not work for every student, and only a portion of the class probably benefited, but all students were exposed to it. Another critical thinking component was the presentation of a research article. Students had a limited time to present a portion of the article, which required analysis, summary, and reflection. This is still a work in progress, and I keep inserting additional elements as I see the need.

How does critical thinking impact student performance, and how do we assess it?

Despite the push for critical thinking in the Health Professions, there is no agreement on whether critical thinking positively impacts student performance. Curriculum design is typically focused on content rather than critical thinking, which makes it difficult to evaluate the learning outcomes (5). In addition, the type of assessment used to evaluate critical thinking may not reflect these outcomes.

There is a growing trend toward measuring learning outcomes, and some tests are used to assess critical thinking, such as the Critical thinking Assessment Test (CAT), which assesses the evaluation of information, creative thinking, learning and problem solving, and communication. However, the key elements in the assessment of student thinking are purpose, question at issue, assumptions, inferences, implications, points of view, concepts, and evidence (6). Thus, without a clear understanding of this process, and despite the available tests, proper assessment becomes rather challenging.

Another issue that arises when evaluating students’ critical thinking performance is that they are often very resistant to this unconventional model of learning; the absence of clear positive results may be due to short exposure to this learning approach, in addition to inappropriate assessment tools. Whether or not there is a long-term beneficial effect of critical thinking on clinical reasoning skills remains to be elucidated.

I tried to implement critical thinking in alignment with my view of Physiology. Since I taught several courses to the same cohort of students within the curriculum, I decided to try different teaching techniques, assessments, and approaches at different times during the curriculum. This was ideal because I could do it without a large time commitment and without compromising large sections of the curriculum. After evaluating the benefits, implementation, and assessment of critical thinking, however, I came to the conclusion that we sacrifice contact hours of traditional lecture content for a deeper analysis of a limited portion of the subject matter. The board exams in the health professions are mostly based on traditional teaching rather than critical thinking. Thus, I decided to implement critical thinking only partly in my courses to avoid a negative impact on board certification, but to include it nonetheless because I still believe it is vital for students’ clinical skills.

 

References

  1. Allen GD, Rubenfeld MG, Scheffer BK. Reliability of assessment of critical thinking. J Prof Nurs. 2004 Jan-Feb;20(1):15-22.
  2. Facione PA. Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction: Research findings and recommendations [Internet]. Newark: American Philosophical Association; 1990[cited 2016 Dec 27]. Available from: https://eric.ed.gov/?id=ED315423
  3. Facione NC, Facione PA. Critical thinking assessment in nursing education programs: An aggregate data analysis. Millbrae: California Academic Press; 1997[cited 2016 Dec 27].
  4. Paul WH, Elder L. Critical thinking handbook: Basic theory and instructional structures. 2nd ed. Dillon Beach: Foundation for Critical Thinking; 2000 [cited 2016 Dec 27].
  5. Not sure which one
  6. Facione PA. Critical thinking what it is and why it counts. San Jose: California Academic Press; 2011 [cited 2016 Dec 27]. Available from: https://blogs.city.ac.uk/cturkoglu/files/2015/03/Critical-Thinking-Articles-w6xywo.pdf


Lourdes Alarcon Fortepiani is an Associate Professor at the Rosenberg School of Optometry (RSO) at the University of the Incarnate Word in San Antonio, Texas. Lourdes received her M.D. and Ph.D. in Physiology at the University of Murcia, Spain. She is a renal physiologist by training who has worked on hypertension, sexual dimorphism, and aging. Following her postdoctoral fellowship, she joined RSO and has been teaching Physiology, Immunology, and Pathology, amongst other courses. Her main professional interest is medical science education. She has been active in outreach programs, including PhUn Week activities for APS, career day, and summer research activities, where she enjoys reaching K-12 students and unraveling different aspects of science. Her recent area of interest includes improving student critical thinking.

 

Good Teaching: What’s Your Perspective?

Are you a good teacher? 

What qualities surround “good teachers”?

What do good teachers do to deliver a good class?

The end of the semester is a great time to critically reflect on your teaching.

For some, critical reflection on teaching is prompted by the results of student course evaluations. For others, reflection occurs as part of updating a teaching philosophy or portfolio. Still others reflect critically on their teaching out of a genuine interest in becoming better teachers. Critical reflection is important in the context of being a “good teacher.”

Critical reflection on teaching is an opportunity to be curious about your “good teaching.”  If you are curious about your approach to teaching I encourage you to ponder and critically reflect on one aspect of teaching – perspective.

Teaching perspectives, not to be confused with teaching approaches or styles, are an important aspect of the beliefs you hold about teaching and learning. Your teaching perspectives underlie the values and assumptions you hold in your approach to teaching.

How do I get started?

Start by taking the Teaching Perspectives Inventory (TPI). The TPI is a free online assessment of the way you conceptualize teaching; it looks into your related actions, intentions, and beliefs about learning, teaching, and knowledge. The TPI will help you examine your views within and across five perspectives: Transmission, Apprenticeship, Developmental, Nurturing, and Social Reform.

What is your dominant perspective?

The TPI is not new. It has been around for over 15 years and is the work of Pratt and Collins from the University of British Columbia (Pratt & Collins, 2001; Pratt, 2001). Though the TPI has been around for a while, it is worth bringing up once more. Whether you are a new or experienced teacher, the TPI is a useful instrument for critical reflection on teaching, especially now during your semester break! Don’t delay. Take the free TPI to help you clarify your views on teaching, and be curious.

 

Resources

Teaching Perspectives Inventory – http://www.teachingperspectives.com

How to interpret a teaching perspective profile – https://youtu.be/9GN7nN6YnXg

Pratt, D. D., & Collins, J. B. (2001). Teaching Perspectives Inventory. Retrieved December 1, 2016, from www.teachingperspectives.com/tpi/

Pratt, D. D., & Collins, J. B. (2001). Development and use of the Teaching Perspectives Inventory (TPI). American Educational Research Association.


Jessica M. Ibarra is an Assistant Professor of Applied Biomedical Sciences in the School of Osteopathic Medicine at the University of the Incarnate Word. She is currently teaching in the Master of Biomedical Sciences Program and helping with curriculum development in preparation for the inaugural class of osteopathic medicine in July 2017. As a scientist, she studied inflammatory factors involved in chronic diseases such as heart failure, arthritis, and diabetes. When Dr. Ibarra is not conducting research or teaching, she is mentoring students, involved in community service, and engaged in science outreach. She is an active member of the American Physiological Society and helps promote physiology education and science outreach at the national level. She is currently a member of the Porter Physiology and Minority Affairs Committee; a past fellow of the Life Science Teaching Resource Community Vision & Change Scholars Program and Physiology Education Community of Practice; and Secretary of the History of Physiology Interest Group.

 

More detail = More complex = Less clear

The question that I’m going to tip-toe around could be expressed thus:

“More detail does not clarity make. Discuss.”

I’m not going to write an essay, but I am going to offer a few different perspectives on the question in the hope that you realise that there might be a problem hiding a little further down the path we’re all walking. In doing so, I’m going to scratch an itch that I’ve had for a while now. I have entertained a rather ill-defined worry for some time, and this post provides an opportunity to try to pull my concerns into focus and articulate them as best I can.

One of the first things I remember reading that muddied the water for me was ‘Making Learning Whole’ by David Perkins (Perkins, 2009).  He argues that in education we have tended to break down something complex and teach it in parts with the expectation that having mastered the parts our students would have learned how to do the complex thing – playing baseball, in his example.  The problem is that baseball as a game is engaging but when broken down into little bits of theory and skill it becomes dull – a drudge.  So, do we teach science as the whole game of structured inquiry, or do we break it down into smaller chunks that are not always well connected (think lecture and practical)?  That was worry number one.

Let me broaden this out. I see a direct link between the risks of breaking down a complex intellectual challenge into smaller activities that don’t appear to have intrinsic value and ‘painting-by-numbers’ – as a process, it might create something that resembles art, but the producer is not working as an artist. If you indulge me a little, I’ll offer an example from education: learning outcomes. In his 2012 article, ‘The Unhappiness Principle’, in the UK’s Times Higher Education magazine, Frank Furedi argues that learning outcomes distort the education process in a number of ways. He worries that learning outcomes provide a structure that learners would otherwise construct for themselves, and that the adopted construct is rarely as robust as a fully-owned one. He also worries that learning outcomes by their nature attempt to reduce a complex system to a series of statements that are both simple and precise. Their seeming simplicity of expression gives students no insight into the true nature of the problems to be tackled. I don’t imagine that Socrates would have set out learning outcomes for his students.

I see similar issues in the specification of the assessment process: the detailed mark scheme. Sue Bloxham and colleagues recently published the findings of a study of the use of marking schemes, entitling it ‘Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria’. The article is scholarly and it contains some uncomfortable truths for those who feel it should be possible to make the grading of assessments ‘transparent’. In their recommendations they say, ‘The real challenge emerging from this paper is that, even with more effective community processes, assessment decisions are so complex, intuitive and tacit that variability is inevitable. Short of turning our assessment methods into standardised tests, we have to live with a large element of unreliability and a recognition that grading is judgement and not measurement’ [my emphasis] (Bloxham et al., 2016).

The idea that outcomes can be assured by instructions that are sufficiently detailed (complex) is flawed, but it appears to have been adopted outside education as much as within. The political historian Niall Ferguson makes this point well in one of his BBC Reith Lectures of 2012. In relation to the Dodd-Frank Act, he says, ‘Today, it seems to me, the balance of opinion favours complexity over simplicity; rules over discretion; codes of compliance over individual and corporate responsibility. I believe this approach is based on a flawed understanding of how financial markets work. It puts me in mind of the great Viennese satirist Karl Kraus’s famous quip about psychoanalysis, that it was “the disease of which it purported to be the cure”. I believe excessively complex regulation is the disease of which it purports to be the cure.’ – Niall Ferguson: The Darwinian Economy (BBC Reith Lecture, 2012).

One of the problems is that detail looks so helpful. It’s hard to imagine how too much detail could be bad. Yet there are examples where increasing detail led to adverse and unintended outcomes. I have two, one from university management and another from education and training. A colleague recently retold a story of a Dean who was shocked that, should a situation arise in an examination room, staff would themselves often decide on an effective course of action. It turned out that the Dean had thought it more proper for the staff to be poring through university regulations. He was also shocked to discover that the regulations did not contain solutions to all possible problems. The example from education and training can be found in an article by Barry Schwartz, published in 2011. The article, called ‘Practical wisdom and organizations’, describes what happened when the training of wildland firefighters was augmented from just four ‘survival guidelines’ to a mental manual of very nearly 50 items. He writes, ‘…teaching the firefighters these detailed lists was a factor in decreasing the survival rates. The original short list was a general guide. The firefighters could easily remember it, but they knew it needed to be interpreted, modified, and embellished based on circumstance. And they knew that experience would teach them how to do the modifying and embellishing. As a result, they were open to being taught by experience. The very shortness of the list gave the firefighters tacit permission—even encouragement—to improvise in the face of unexpected events. Weick found that the longer the checklists for the wildland firefighters became, the more improvisation was shut down.’ (Schwartz, 2011). Detail in the wrong place or at the wrong level flatters to deceive.

By writing this piece I hoped to pull together my own thoughts and, speaking personally, it worked. I now have a much clearer view of what concerns me about how we’ve been pushing education, but that clarity has made my worries all the more acute. Nevertheless, in order to end on a positive note, I’ve tried to think of some positive movements. I have always found John Dewey’s writing on education and reasoning to be full of promise (Findlay, 1910). Active learning, authentic inquiry, mastery learning and peer learning seem to me to be close cousins and a sound approach for growing a real capacity to conceive of science as a way of looking to understand the unknown (Freeman et al., 2014); they seem to me to have Dewey’s unspoken blessing. I also think that Dewey would approve of Edgar Morin and his Seven Complex Lessons in Education for the Future (Morin, 2002). There is a video of Morin explaining some aspects of the seven complex lessons that I would recommend.

I’m off to share an hour with a glass of whisky in a dark room.

References

Bloxham S, den-Outer B, Hudson J & Price M. (2016). Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assess Eval High Edu 41, 466-481.

Findlay JJ, ed. (1910). Educational Essays By John Dewey. Blackie & Sons, London.

Freeman S, Eddy SL, McDonough M, Smith MK, Okoroafor N, Jordt H & Wenderoth MP. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences 111, 8410-8415.

Morin E. (2002). Seven complex lessons in education for the future. UNESCO, Paris.

Perkins DN. (2009). Making learning whole: How seven principles of teaching can transform education. Jossey-Bass, San Francisco, CA.

Schwartz B. (2011). Practical wisdom and organizations. Research in Organizational Behavior 31, 3-23.


Phil Langton is a senior teaching fellow in the School of Physiology, Pharmacology and Neuroscience, University of Bristol, UK.  A biologist turned physiologist, he worked with Kent Sanders in Reno (NV) and then with Nick Standen in Leicester (UK) before moving to Bristol in 1995.  Phil has been teaching GI physiology for vets, nerve and muscle physiology for medics and cardiovascular physiology for physiologists. He also runs a series of units in the second and third (final) years that are focused on the development of soft (but not easy) skills.  He has been interested for years in the development of new approaches to old problems in education and is currently chasing his tail around trying to work out how fewer staff can mentor and educate more students.