Category Archives: Assessment

Critical thinking or traditional teaching for Health Professions?

“Education is not the learning of facts but the training of the mind to think.” – Albert Einstein

A few years ago I moved from a research laboratory to the classroom. Until then, I had been accustomed to examining ideas and trying to find solutions by experimenting and challenging the current knowledge in certain areas. However, in the classroom setting, the students seemed to want only to learn facts, with no room for alternative explanations or challenges. This is not the way a clinician should be trained, I thought, and I started looking in textbooks, teaching seminars, and workshops for alternative teaching methods. I quickly learned that teaching critical thinking skills is the preferred method in higher education for developing highly qualified professionals.

Why critical thinking? Critical thinking is one of the most important attributes we expect from students in postsecondary education, especially from highly qualified professionals in health care, where critical thinking provides the tools to solve the unconventional problems that may arise. I teach pathophysiology in optometry, and as in other health professions, not all clinical cases are identical; the ability to apply and adapt the accumulated body of knowledge to different scenarios is therefore crucial to developing clinical skills. Because critical thinking is considered essential for patient care, it is fostered in many health sciences educational programs and integrated into the Health Professions Standards for Accreditation.

But what is critical thinking? It is accepted that critical thinking is a process that encompasses conceptualization, application, analysis, synthesis, evaluation, and reflection. What we expect from a critical thinker is to:

  • Formulate clear and precise vital questions and problems;
  • Gather, assess, and interpret relevant information;
  • Reach relevant well-reasoned conclusions and solutions;
  • Think open-mindedly, recognizing their own assumptions;
  • Communicate effectively with others on solutions to complex problems.

However, some educators emphasize the reasoning process, while others focus on the outcomes of critical thinking. Thus, one of the biggest obstacles to the proper teaching of critical thinking is the lack of a clear definition, as observed by Allen et al. (1) when teaching clinical critical thinking skills. Faculty need to define what they consider critical thinking to be before they attempt to teach it or evaluate student learning outcomes. But keep in mind that not all students will be good at critical thinking, and not all teachers are able to teach students critical thinking skills.

The experts in the field have classically agreed that critical thinking includes not only cognitive skills but also an affective disposition (2). I consider that it relies mostly on the use of known facts in a way that enables analysis of, and reflection on, both conventional and unconventional cases in the future. I have recently experimented with reflection on pathophysiological concepts, and I have come to realize that reflection is an integral part of the health professions. We cannot convey just pieces of information based on accumulated experience; we have to reflect on it. Some studies have demonstrated that reflective thinking positively predicted achievement to a greater extent than habitual action did. However, those may not be the key elements of critical thinking that you choose to focus on.

How do we achieve critical thinking in higher education and the health professions? Once we have defined what critical thinking means to us, it must be present at all times when designing a course, from learning objectives to assignments. We cannot expect to contribute to the development of critical thinking skills if the course is not designed to support them. According to the Delphi study conducted by the American Philosophical Association (3), the essential elements of lessons designed to promote critical thinking are the following:

  1. “Ill-structured problems” are those that don’t have a single right answer; they are based on reflective judgment and leave conclusions open to future information.
  2. “Criteria for assessment of thinking” include clarity, accuracy, precision, relevance, depth, breadth, logic, significance, and fairness (Paul & Elder, 2001).
  3. “Students’ meaningful and valid assessment of their own thinking”, as they are held accountable for it.
  4. “Improving the outcomes of thinking” such as in writing, speaking, reading, listening, and creating.

A variety of examples can serve as models for checking whether a course contains critical thinking elements and for helping to design its learning objectives. They can, however, be summarized in the statement that “thinking is driven by questions”. We need to ask questions that generate further questions to develop the thinking process (4). By posing questions with thought-stopping answers, we are not building a foundation for critical thinking. We can examine a subject simply by asking students to generate a list of questions they have about it, including questions generated by their first set of questions. Questions should be deep, fostering engagement with complexity and challenging assumptions, points of view, and sources of information. These thought-stimulating types of questions should include questions of purpose, of information, of interpretation, of assumption, of implication, of point of view, of accuracy and precision, of consistency, of logic, etc.

However, how many of you just get the question, “Is this going to be on the test?” Students do not want to think. They want everything already thought out for them, and teachers may not be the best at generating thoughtful questions.

As an inexperienced research educator trying to survive in this new environment, I fought the urge to help the students become critical thinkers, and provided answers rather than promoting questions. I thought I just wanted to give traditional lectures. However, I was unconsciously including critical thinking during lectures by using clicker questions and asking about scenarios with more than one possible answer. Students were not very happy, but the fact that those questions were not graded, and were instead used as interactive tools, minimized the resistance to them. The most competitive students would try to answer them correctly and generate additional questions, while the most traditional students would just answer, no questions asked. I implemented this method in all my courses, and I started to give critical thinking assignments. The students would have to address a topic, and to promote critical thinking, a series of guiding questions was included in the rubric. The answers were not easily found in textbooks, and the assignments generated plenty of additional questions. As always, this did not work for every student, and only a portion of the class probably benefited, but all students were exposed to it. Another critical thinking component was the presentation of a research article. Students had a limited time to present a portion of the article, thus requiring analysis, summary, and reflection. This is still a work in progress, and I keep inserting additional elements as I see the need.

How does critical thinking impact student performance? The role of assessment

Despite the push for critical thinking in the health professions, there is no agreement on whether critical thinking positively impacts student performance. Curriculum design is focused on content rather than critical thinking, which makes it difficult to evaluate the learning outcomes (5). In addition, the type of assessment used for the evaluation of critical thinking may not reflect these outcomes.

There is a growing trend toward measuring learning outcomes, and some tests are used to assess critical thinking, such as the Critical thinking Assessment Test (CAT), which evaluates information, creative thinking, learning and problem solving, and communication. However, the key elements in the assessment of student thinking are purpose, question at issue, assumptions, inferences, implications, points of view, concepts, and evidence (6). Thus, without a clear understanding of this process, and despite the available tests, proper assessment becomes rather challenging.

Another issue that arises when evaluating students’ critical thinking performance is that they are very resistant to this unconventional model of learning; possibly the absence of clear positive results is due to the short exposure to this learning approach, in addition to inappropriate assessment tools. Whether or not there is a long-term beneficial effect of critical thinking on clinical reasoning skills remains to be elucidated.

I tried to implement critical thinking in alignment with my view of physiology. Since I taught several courses to the same cohort of students within the curriculum, I decided to try different teaching techniques, assessments, and approaches at different times during the curriculum. This was ideal because I could do it without a large time commitment and without compromising large sections of the curriculum. After evaluating the benefits, proper implementation, and assessment of critical thinking, I came to the conclusion that we sacrifice contact hours of traditional lecture content for a deeper analysis of a limited section of the subject matter. The board exams in the health professions, however, are mostly based on traditional teaching rather than critical thinking. Thus, I decided to implement critical thinking only partly in my courses, to avoid a negative impact on board certification, but to include it nonetheless, as I still believe it is vital for students’ clinical skills.

 

References

  1. Allen GD, Rubenfeld MG, Scheffer BK. Reliability of assessment of critical thinking. J Prof Nurs. 2004 Jan-Feb;20(1):15-22.
  2. Facione PA. Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction: Research findings and recommendations [Internet]. Newark: American Philosophical Association; 1990 [cited 2016 Dec 27]. Available from: https://eric.ed.gov/?id=ED315423
  3. Facione NC, Facione PA. Critical thinking assessment in nursing education programs: An aggregate data analysis. Millbrae: California Academic Press; 1997 [cited 2016 Dec 27].
  4. Paul R, Elder L. Critical thinking handbook: Basic theory and instructional structures. 2nd ed. Dillon Beach: Foundation for Critical Thinking; 2000 [cited 2016 Dec 27].
  5. Not sure which one
  6. Facione PA. Critical thinking: What it is and why it counts. San Jose: California Academic Press; 2011 [cited 2016 Dec 27]. Available from: https://blogs.city.ac.uk/cturkoglu/files/2015/03/Critical-Thinking-Articles-w6xywo.pdf


Lourdes Alarcon Fortepiani is an Associate Professor at Rosenberg School of Optometry (RSO) at the University of the Incarnate Word in San Antonio, Texas. Lourdes received her M.D. and Ph.D. in Physiology at the University of Murcia, Spain. She is a renal physiologist by training who has worked on hypertension, sexual dimorphism, and aging. Following her postdoctoral fellowship, she joined RSO and has been teaching Physiology, Immunology, and Pathology, amongst other courses. Her main professional interest is medical science education. She has been active in outreach programs including PhUn week activities for APS, career day, and summer research activities, where she enjoys reaching K-12 students and unraveling different aspects of science. Her recent area of interest includes improving student critical thinking.

 

Good Teaching: What’s Your Perspective?

Are you a good teacher? 

What qualities surround “good teachers”?

What do good teachers do to deliver a good class?

The end of the semester is a great time to critically reflect on your teaching.

For some, critical reflection on teaching is prompted by the results of student course evaluations. For others, reflection occurs as part of updating their teaching philosophy or portfolio. Still others engage in critical reflection out of a genuine interest in becoming a better teacher. Critical reflection is important in the context of being a “good teacher.”

Critical reflection on teaching is an opportunity to be curious about your “good teaching.” If you are curious about your approach to teaching, I encourage you to ponder and critically reflect on one aspect of teaching – perspective.

Teaching perspectives, not to be confused with teaching approaches or styles, are an important aspect of the beliefs you hold about teaching and learning. Your teaching perspectives underlie the values and assumptions you hold in your approach to teaching.

How do I get started?

Start by taking the Teaching Perspectives Inventory (TPI). The TPI is a free online assessment of the way you conceptualize teaching; it looks into your related actions, intentions, and beliefs about learning, teaching, and knowledge. The TPI will help you examine your views within each of five perspectives: Transmission, Apprenticeship, Developmental, Nurturing, and Social Reform.

What is your dominant perspective?

The TPI is not new. It has been around for over 15 years and is the work of Pratt and Collins from the University of British Columbia (Pratt & Collins, 2001; Pratt, 2001). Though the TPI has been around for a while, it is worth bringing up once more. Whether you are a new or experienced teacher, the TPI is a useful instrument for critical reflection on teaching, especially now during your semester break! Don’t delay. Take the free TPI to help you clarify your views on teaching, and be curious.

 

Resources

Teaching Perspectives Inventory – http://www.teachingperspectives.com

How to interpret a teaching perspective profile – https://youtu.be/9GN7nN6YnXg

Pratt, D. D., & Collins, J. B. (2001). Teaching Perspectives Inventory. Retrieved December 1, 2016, from www.teachingperspectives.com/tpi/

Pratt, D. D., & Collins, J. B. (2001). Development and Use of The Teaching Perspectives Inventory (TPI). American Education Research Association.


Jessica M. Ibarra is an Assistant Professor of Applied Biomedical Sciences in the School of Osteopathic Medicine at the University of the Incarnate Word. She is currently teaching in the Master of Biomedical Sciences Program and helping with curriculum development in preparation for the inaugural class of osteopathic medicine in July 2017. As a scientist, she studied inflammatory factors involved in chronic diseases such as heart failure, arthritis, and diabetes. When Dr. Ibarra is not conducting research or teaching, she is mentoring students, involved in community service, and engaged in science outreach. She is an active member of the American Physiological Society and helps promote physiology education and science outreach at the national level. She is currently a member of the Porter Physiology and Minority Affairs Committee; a past fellow of the Life Science Teaching Resource Community Vision & Change Scholars Program and Physiology Education Community of Practice; and Secretary of the History of Physiology Interest Group.

 

More detail = More complex = Less clear

The question that I’m going to tip-toe around could be expressed thus:

“More detail does not clarity make. Discuss.”

I’m not going to write an essay, but I am going to offer a few different perspectives on the question in the hope that you realise that there might be a problem hiding a little further down the path we’re all walking. In doing so, I’m going to scratch an itch that I’ve had for a while now. I have entertained a rather ill-defined worry for some time, and this post provides an opportunity to try to pull my concerns into focus and articulate them as best I can.

One of the first things I remember reading that muddied the water for me was ‘Making Learning Whole’ by David Perkins (Perkins, 2009). He argues that in education we have tended to break down something complex and teach it in parts, with the expectation that, having mastered the parts, our students would have learned how to do the complex thing – playing baseball, in his example. The problem is that baseball as a game is engaging, but when broken down into little bits of theory and skill it becomes dull – a drudge. So, do we teach science as the whole game of structured inquiry, or do we break it down into smaller chunks that are not always well connected (think lecture and practical)? That was worry number one.

Let me broaden this out. I see a direct link between the risks of breaking down a complex intellectual challenge into smaller activities that don’t appear to have intrinsic value and ‘painting-by-numbers’ – as a process, it might create something that resembles art, but the producer is not working as an artist. If you indulge me a little, I’ll offer an example from education: learning outcomes. In his 2012 article, ‘The Unhappiness Principle’, in the UK’s Times Higher Education magazine, Frank Furedi argues that learning outcomes distort the education process in a number of ways. He worries that learning outcomes provide a structure that learners would otherwise construct for themselves, and the adopted construct is rarely as robust as a fully-owned one. He also worries that learning outcomes by their nature attempt to reduce a complex system to a series of statements that are both simple and precise. Their seeming simplicity of expression gives students no insight into the true nature of the problems to be tackled. I don’t imagine that Socrates would have set out learning outcomes for his students.

I see similar issues in the specification of the assessment process: the detailed mark scheme. Sue Bloxham and colleagues recently published the findings of a study of the use of marking schemes, entitled ‘Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria’. The article is scholarly, and it contains some uncomfortable truths for those who feel it should be possible to make the grading of assessments ‘transparent’. In their recommendations they say, ‘The real challenge emerging from this paper is that, even with more effective community processes, assessment decisions are so complex, intuitive and tacit that variability is inevitable. Short of turning our assessment methods into standardised tests, we have to live with a large element of unreliability and a recognition that grading is judgement and not measurement’ [my emphasis] (Bloxham et al., 2016).

The idea that outcomes can be assured by instructions that are sufficiently detailed (complex) is flawed, but it appears to have been adopted outside education as much as within. The political historian Niall Ferguson makes this point well in one of his BBC Reith Lectures of 2012. In relation to the Dodd-Frank Act, he says, ‘Today, it seems to me, the balance of opinion favours complexity over simplicity; rules over discretion; codes of compliance over individual and corporate responsibility. I believe this approach is based on a flawed understanding of how financial markets work. It puts me in mind of the great Viennese satirist Karl Kraus’s famous quip about psychoanalysis, that it was “the disease of which it purported to be the cure”. I believe excessively complex regulation is the disease of which it purports to be the cure.’ Niall Ferguson: The Darwinian Economy (BBC Reith Lecture, 2012).

One of the problems is that detail looks so helpful. It’s hard to imagine how too much detail could be bad. Yet there are examples where increasing detail led to adverse and unintended outcomes. I have two, one from university management and another from education and training. A colleague recently retold a story of a Dean who was shocked that, should a situation arise in an examination room, staff would themselves often decide on an effective course of action. It turned out that the Dean had thought it more proper for the staff to be poring over university regulations. He was also shocked to discover that the regulations did not contain solutions to all possible problems. The example from education and training can be found in an article by Barry Schwartz, published in 2011. The article, called ‘Practical wisdom and organizations’, describes what happened when the training of wildland firefighters was augmented from just four ‘survival guidelines’ to a mental manual of very nearly 50 items. He writes, ‘…teaching the firefighters these detailed lists was a factor in decreasing the survival rates. The original short list was a general guide. The firefighters could easily remember it, but they knew it needed to be interpreted, modified, and embellished based on circumstance. And they knew that experience would teach them how to do the modifying and embellishing. As a result, they were open to being taught by experience. The very shortness of the list gave the firefighters tacit permission—even encouragement—to improvise in the face of unexpected events. Weick found that the longer the checklists for the wildland firefighters became, the more improvisation was shut down.’ (Schwartz, 2011). Detail in the wrong place or at the wrong level flatters to deceive.

By writing this piece I hoped to pull together my own thoughts and, speaking personally, it worked. I now have a much clearer view of what concerns me about how we’ve been pushing education, but that clarity has made my worries all the more acute. Nevertheless, in order to round off on a positive note, I’ve tried to think of some positive movements. I have always found John Dewey’s writing on education and reasoning to be full of promise (Findlay, 1910). Active learning, authentic inquiry, mastery learning, and peer learning seem to me to be close cousins, a sound approach for growing a real capacity to conceive of science as a way of looking to understand the unknown (Freeman et al., 2014), and to carry Dewey’s unspoken blessing. I also think that Dewey would approve of Edgar Morin and his Seven complex lessons in education for the future (Morin, 2002). There is a video of Morin explaining some aspects of the seven complex lessons that I would recommend.

I’m off to share an hour with a glass of whisky in a dark room.

References

Bloxham S, den-Outer B, Hudson J & Price M. (2016). Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assess Eval High Edu 41, 466-481.

Findlay JJ, ed. (1910). Educational Essays By John Dewey. Blackie & Sons, London.

Freeman S, Eddy SL, McDonough M, Smith MK, Okoroafor N, Jordt H & Wenderoth MP. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences 111, 8410-8415.

Morin E. (2002). Seven complex lessons in education for the future. UNESCO.

Perkins DN. (2009). Making Learning Whole: How Seven Principles of Teaching Can Transform Education. Jossey-Bass, San Francisco, CA.

Schwartz B. (2011). Practical wisdom and organizations. Research in Organizational Behavior 31, 3-23.


Phil Langton is a senior teaching fellow in the School of Physiology, Pharmacology and Neuroscience, University of Bristol, UK.  A biologist turned physiologist, he worked with Kent Sanders in Reno (NV) and then with Nick Standen in Leicester (UK) before moving to Bristol in 1995.  Phil has been teaching GI physiology for vets, nerve and muscle physiology for medics and cardiovascular physiology for physiologists. He also runs a series of units in the second and third (final) years that are focused on the development of soft (but not easy) skills.  He has been interested for years in the development of new approaches to old problems in education and is currently chasing his tail around trying to work out how fewer staff can mentor and educate more students.

 

Grading student lab reports (while keeping your sanity)

I love teaching undergraduate labs and watching students grow as scientists. However, I’m not at all excited by the prospect of grading student writing. There are three strategies I wish I had known about before giving my first lab report assignment.

  • Full rubrics should be written for each writing assignment before the term starts.
  • Students need practice and feedback. This can be achieved with short, low-stakes writing assignments, peer review, and scaffolded assignments, which require minimal grading on my part.
  • The biggest sanity and time saver of all was telling students that I am not their editor or proofreader.

Each of those is probably worthy of its own blog post, so this is a brief overview of strategies I’ve adopted to save my sanity while grading lab reports (and other student writing assignments).

 

1) Full rubrics

A lab report is usually a long, high-stakes assignment that is worth a substantial portion of the final grade. A full rubric is invaluable for streamlining the grading process and communicating expectations to students. A full rubric is not just a checklist of the presence or absence of criteria needed to complete the assignment. Instead, for each criterion there is a detailed description of different levels of mastery or quality. Rubrics can be used to give formative or summative feedback, analytical or holistic assessments, or a combination. Another advantage of rubrics is that they help standardize grading across multiple sections of a course that are taught by graduate teaching assistants.

A good rubric is very time-consuming to create, but it has potential to save you many hours when it comes time to assess student writing. [This is especially true if you use an online course management system that has a built-in grading tool (e.g. Canvas Speed Grader). You can link your rubric to the assignment, and give comments and numerical scores for each criterion on the rubric. The tool will add the scores and put them directly into the online grade book. Hooray!]

Here is an example of a summative grading rubric for the methods section of a lab report.

Contains sufficient detail for the audience to validate the experiments
  • Excellent: Contains clear descriptions of all necessary steps for the reader to be able to validate the experiments without having to contact the author for more explanations.
  • Average: Descriptions of the experimental methods are provided, but some minor steps are missing, so the reader will not be able to validate the experiments without further assistance.
  • Inadequate: Descriptions of the experimental methods are provided, but one or more key steps are missing, so the reader will not be able to validate the experiments without major further assistance.
  • No Effort: Descriptions of methods are so poor that the reader cannot grasp what experiments were done, OR no description is included at all.

Includes brief description of how data were analyzed (equations, statistics, etc.)
  • Excellent: A clear description of how data were analyzed, including all relevant steps and calculations.
  • Average: A description of how data were analyzed, but missing some steps or calculations.
  • Inadequate: A poor description of how data were analyzed, missing substantial steps or calculations.
  • No Effort: Reader is unable to understand how data were analyzed, OR no description is given at all.
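If you grade electronically, this structure also maps naturally onto code. Here is a minimal sketch in Python (purely illustrative; the criterion names, levels, and point values are my own inventions, not part of any LMS API) of how a full rubric can be stored as a data structure and used to total a score for each submission, much as a built-in grading tool does when a rubric is linked to an assignment:

```python
# Illustrative sketch only: a rubric as a plain data structure.
# Criterion names, levels, and point values are invented for this example.
RUBRIC = {
    "Sufficient detail to validate the experiments": {
        "Excellent": 4, "Average": 3, "Inadequate": 1, "No Effort": 0,
    },
    "Description of how data were analyzed": {
        "Excellent": 4, "Average": 3, "Inadequate": 1, "No Effort": 0,
    },
}

def score_submission(ratings: dict) -> int:
    """Sum the points for the level selected on each criterion,
    mimicking what an LMS grading tool does automatically."""
    return sum(RUBRIC[criterion][level] for criterion, level in ratings.items())

# Example: grading one student's methods section.
ratings = {
    "Sufficient detail to validate the experiments": "Average",
    "Description of how data were analyzed": "Excellent",
}
print(score_submission(ratings))  # 7
```

The point is simply that once each level carries an explicit description and score, grading a submission becomes selection rather than composition, which is where the time savings come from.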

 Resources to help you get started on your own rubrics

 

2) Practice and feedback

Students sometimes tell me that they are “not very good at writing”. My reply is that writing is a skill, and as a skill it requires practice, practice, practice. To this end, I use a mix of short, low-stakes writing assignments and scaffolding.

Low-stakes writing assignments are short, informal assignments designed to help students reflect on what they have been learning or doing, but they don’t require much grading effort from the instructor. It’s important to give students the rationale for the assignment and to present it as being just as important as larger assignments, even though it’s worth fewer points. One popular example is a “minute paper”: a brief in-class written response to an instructor-posed question. Some sample prompts that align with writing a lab report:

  • What was the most surprising result from your experiment?
  • In your opinion, what would be a good follow-up experiment to yours?
  • What relationship did you see between ____ and ____?
  • Would you agree or disagree with this statement __________?
  • List the keywords, phrases and databases that you are going to use to search for references for your lab report.

Examples of other low-stakes, minimal-grading assignments are timed “free-writes” (write everything that comes to mind about the topic from memory for 5-10 minutes), journals (separate from lab notebooks), outlines, and concept maps.

Scaffolding refers to taking a larger assignment and breaking it into smaller parts. I have my students write their lab reports in stages over five weeks. At each stage they receive formative feedback from me and/or go through peer review. At each stage they are also required to explain how they incorporated feedback from the previous stage. By breaking a large assignment into stages, I can provide more detailed feedback, so their final lab report is more polished and easier to read.

Resources to help you with low-stakes assignments and scaffolding:

 

3) You are not the editor or the proofreader

Fixing spelling, punctuation, and grammar is the student’s responsibility, not yours. Yes, students need to know when they have made technical errors, but flagging them shouldn’t consume all of your time. One strategy is to simply make an X or other mark at the end of each line that contains an error. It is then the student’s job to analyze their writing and find the error. Another is to edit one paragraph and then instruct the student to look for similar errors throughout their writing.

Focus your time on making meaningful comments about content, especially on early drafts. Some of the most helpful comments are actually questions. For example, rather than tell a student to delete a sentence, ask the student how that sentence helps their argument. It is easy to overwhelm students with too many comments, so prioritize which comments to give. Don’t forget to give students positive feedback about the strengths of their writing! We tend to focus too much on the weaknesses.

Finally, plan ahead for how much time you realistically have for grading, and how much time you’ll need to grade each submission. Set a timer to keep yourself on track. If you find that one submission is taking too long, set it aside and take a break.

Resources to help you respond effectively to student writing

 


Nancy Aguilar-Roca is an assistant teaching professor at the University of California, Irvine in the Department of Ecology and Evolutionary Biology. She studied respiratory and cardiovascular physiology of air-breathing fishes for her PhD at Scripps Institution of Oceanography and did a postdoc in evolutionary genomics of E. coli at UCI. She currently runs the high-enrollment upper division human physiology labs and is in the process of revamping the course with flipped lab protocols and more inquiry based activity (instead of “cookbook”). She also teaches freshman level ecology and evolutionary biology and is interested in using online ecology databases for creating inquiry-based computer activities for this large lecture course. Her other courses include Comparative Vertebrate Anatomy, Marine Biology, Physiology of Extreme Environments and non-majors physiology. At the graduate level, she co-organizes a seminar series for graduate students  and postdocs who are interested in learning evidence-based teaching techniques.  She was recently appointed Director of the Undergraduate Exercise Sciences Major and welcomes any advice about developing curriculum for this major.

Teaching Toolbox: Tips and Techniques for Assessing What Students Know

What has to shift to change your perspective? Thomas Kuhn coined the term paradigm shift and argued that science doesn’t progress by a linear method of gathering new knowledge; rather, a shift takes place when an anomaly subverts the normal practice, ideas, and theories of science. Students learn through interaction with the surrounding environment, mediated by prior knowledge from new and previous interactions with family, friends, teachers, and other sociocultural experiences (Falk & Adelman, 2003). Deep understanding of concepts depends on the interaction of prior experience with new information. As Kuhn stated in his 1962 book The Structure of Scientific Revolutions, “The challenge is not to uncover the unknown, but to obtain the known.”

In order to assess what students know, you need to find out what they already know. An assessment can only provide useful information if it is measuring what it is intended to measure. In the medical field, assessments are used all the time; for example, an MRI is a useful diagnostic tool for determining the extent of tissue damage, but it is not necessarily useful for establishing the overall health status of an individual. Assessing what a student knows with a multiple choice test may likewise not be useful in establishing an overall picture of what knowledge a student possesses or how that knowledge is applied, especially if the items are not measuring what they are supposed to. Construct validity provides evidence that a test is measuring the construct it is intended to measure. How to measure construct validity is beyond the scope of this article; for more information, see the classic work by Messick (1995). Setting aside the psychometrics involved in item or assessment construction, I’ll provide some quick tips and techniques I have found useful in my teaching practice. What can you do to separate real learning with deep understanding from good test-taking skills or reading ability? How can you assess what students know simply and effectively?

Instruction in a classroom environment needs to be connected with assessment, rather than treating instruction and assessment as separate activities. Understanding student thinking can be done with formative assessment, which benefits students by identifying strengths and weaknesses and gives instructors immediate feedback on where students are struggling so that issues can be addressed right away. By providing students with context in the form of a learning goal at the start of a class, the clear objective of the lesson allows them to begin making connections between what they already know and new information. When designing or preparing for a class, ask yourself:

  1. What do I assume they already know?
  2. What questions can I ask that will help me confirm my assumptions?
  3. What are the most common misconceptions related to the topic?

Tips for checking students’ background knowledge

  • On a whiteboard or in a presentation, begin with one to three open-ended questions and/or multiple choice questions. Ask students to respond in two to three sentences, or to circle a response. It’s important to let them know that the questions are not being graded; rather, you are looking for thoughtful answers that will help guide instructional decisions. Share the results at the start of the next class, or use a free tool like Plickers for instant feedback.
  • Short quizzes or a survey with Qualtrics, Google Forms, or Doodle Poll can be administered via Blackboard prior to class. Explain that you will track who responded, but not what each individual student responded. Share the results and their impact on course design with students.
  • Group work. Using an image, graph, or some type of problem regarding upcoming course content, have students come up with a list of observations or questions about the material. Have them synthesize comments on large sheets of paper or sticky notes, then review the themes with the class.

Formative assessment is used to measure and provide feedback on a daily or weekly basis. In addition to learning goals communicated to students at the beginning of each class and warm-up activities to stimulate thinking about a concept, formative assessment can include comments on assignments, projects, or problem sets, and asking questions that target essential understanding rather than a general “Are there any questions?” at the end of a lesson. To add closure and summarize the class with the learning goal in mind, provide index cards or ask students to take out a piece of paper and write, in a couple of sentences, the most important points of the lesson and/or what they found most confusing so that it can be addressed in the next class. Formative assessments provide tangible evidence of what your students know and how they are thinking, and they provide insight and feedback to students for improving their own learning.

Summative assessment includes quizzes, tests and projects that are graded and used to measure student performance. Creating a well-designed summative assessment involves asking good questions and using rubrics. In designing an assessment that will accurately measure what students know, consider:

  1. What do you want your students to know or be able to do? This can also be used in each lesson as a guiding objective.
  2. Identify where you will address the outcomes in the curriculum.
  3. Measure what they know with your summative assessment.
  4. Based on the measurement, what changes can be made in the course to improve student performance?

Good questions

  • Measure what you intend for them to measure.
  • Allow students to demonstrate what they know.
  • Discriminate between students who learned what you intended versus those who did not.
  • Examine what a student can do with what they learned versus what they simply remember.
  • Revisit learning goals articulated at the beginning of a topic, unit or course.
  • Use a variety of questions such as multiple choice, short answer and essay questions.

Rubrics

  • Used for oral presentations, projects, or papers.
  • Evaluate teamwork.
  • Facilitate peer review.
  • Provide self-assessment to improve learning and performance.
  • Motivate students to improve their work.

Online rubric resources for educators include Rubistar, Online Instruction Rubric, and Value Rubrics.

Students do not enter your classroom as blank slates. Assessing and determining what students know targets gaps in knowledge. By incorporating a short activity or question at the start and end of a class, you can check on potential and actual misconceptions so that you may target instruction for deep understanding. Checking prior knowledge provides awareness of the diversity of your students and their experiences, further informing the design and improvement of instruction for active, meaningful learning. Creating a bridge between prior knowledge and new material gives students a framework for a paradigm shift in learning, and makes it very clear, for them and for you, what they have learned by the end of a lesson or the end of a course.

 

References

Falk JH, Adelman LM. Investigating the Impact of Prior Knowledge and Interest on Aquarium Visitor Learning. Journal of Research in Science Teaching. 2003;40(2):163-176.

Kuhn TS. The Structure of Scientific Revolutions. 4th ed. Chicago: The University of Chicago Press; 1962.

Messick S. Standards of validity and the validity of standards in performance assessment. Educational Measurement: Issues and Practice. 1995;14(4):5-8.

 


Jennifer (Jen) Gatz graduated from Ithaca College in 1993 with a BSc in Exercise Science and began working as a clinical exercise physiologist in cardiac and pulmonary rehabilitation. Jen received her MS in Exercise Physiology from Adelphi University in 1999, founded the multisport endurance training company, Jayasports, in 2000, and expanded her practice to include corporate health and wellness for Brookhaven National Laboratory, through 2012. Along the way, Jen took her clinical teaching practice and coaching experience and returned to school to complete a Master of Arts in Teaching Biology with NYS teaching certification from Stony Brook University in 2004. A veteran science teacher for 12 years now at Patchogue-Medford High School in Medford, NY, Jen is currently teaching AP Biology and Independent Science Research. A lifelong learner, Jen returned to Stony Brook University in 2011 and is an advanced PhD candidate in Science Education anticipating the defense of her dissertation in the fall of 2016. Her dissertation research is a melding of a love of physiology and science education focused on understanding connections among cognitive processes, executive functioning, and the relationship to physical fitness, informal science education, and environmental factors that determine attitudes towards and performance in science. In 2015, Jen was a recipient of a Howard Hughes Medical Institute Graduate Research Fellowship.

Summative Assessment – Does the End Justify the Means?

I recently heard two students in academic difficulty recount painfully similar stories about how their own studies had come off the rails following the attempted suicide of younger siblings, who were themselves college undergraduates. What are the chances of hearing two such stories in one day? Well, according to Emory University’s statistics, there are 1,000 suicides per year among college students, and as many as 1 in 10 students have made a plan for suicide at some point (1).

I do not pretend to understand such shocking statistics. Known stressors for college students include interpersonal factors such as new social environments and relationships; personal factors like poor sleeping and eating habits and financial problems; and academic workload and poor grades (2). There are many things here I cannot help with directly, but on the academic side it does make me reflect on what I can do as a professor to help.

I am a physiologist, not a counselor or a psychiatrist. However, I can begin by learning what counselling services my university has (they are excellent, as it turns out – and I bet yours are too), and I can do a better job of guiding distressed students to seek their help; if the need arises, I can ask students straight out if they have suicidal thoughts, and I can dial 911 if necessary. But another thought occurs to me: at certain points I become the focal point for student stress, and that happens each time I choose to set a high-stakes exam.

It is an old axiom that assessment drives student learning, but with such power comes great responsibility! The stress incurred by students through testing (especially when graded) must come with some tangible educational benefit. In other words, I must weigh the costs and benefits of setting up a particular assessment, and especially of how much summative testing to include in a block. After all, we know that the rate of forgetting is significant, even after the mega-high-stakes United States Medical Licensing Examinations (3).

One strange observation I have made over time is that students and faculty often align in wanting more testing: students want to lessen the burden of information per test, and faculty want more complete sampling of the material. I have struggled at three different institutions to reduce the summative testing load and to replace some tests with formative testing instead. Each time, student score distributions at the end of a course were not affected, whereas student stress levels seemed lower and the classroom was a more relaxed and enjoyable place.

Is all testing bad, or can assessment be a win-win where positive educational impacts outweigh the negatives? Progressive testing methods such as project-based assessment and collaborative assessment align with 21st-century goals of graduating students with competencies in critical thinking, communication skills, technology literacy, etc., perhaps without the same level of stress that cramming for knowledge-based tests produces. Recent studies have convincingly shown that frequent zero-stakes testing, used as a means to rehearse content, produces major learning gains in what has been coined the “testing effect” (4). Commercial adaptive learning platforms are also available, in which the technology helps students to continually self-assess toward achievement of mastery (5).

As a faculty member I can help to address student burnout and stress by carefully considering my choices of summative assessment and maximizing testing for learning. I believe we need to be intentional about teaching students how to learn by addressing learning preferences, motivation and self-regulated learning habits. The dismaying statistics I started with suggest universities should also provide more learning opportunities on wellness, nutrition, resiliency, lifestyle management, financial planning, etc., as part of all our programs. I realize there are many other factors to think about and hope some discussion will follow to explore these gaps.

Resources

  1. Emory Cares 4 U. Suicide Statistics http://www.emorycaresforyou.emory.edu/resources/suicidestatistics.html Accessed 4/22/16
  2. Ross SE, Niebling BC, Heckert TM. Sources of stress among college students. College Student Journal 33 p312-318, 1999
  3. Ling Y, Swanson DB, Holtzman K, Deniz Bucak S. Retention of basic science information by senior medical students. Academic Medicine 83(10 Suppl):S82–S85, 2008
  4. Karpicke JD, Roediger HL 3rd. The critical importance of retrieval for learning. Science 319:966–968, 2008
  5. Flashcards. Memory Aids. An automatic study plan for every lecture. https://www.osmosis.org/ Accessed 4/22/16


J.D. (Jon) Kibble graduated from the University of Manchester in 1994 with his BSc and PhD in physiology. In his first faculty position at the University of Sheffield Medical School, Dr. Kibble started a research laboratory to investigate the molecular physiology of renal tubular ion transport. His passion for teaching was ignited at this time as he began to teach medical physiology and anatomy. Next he became a Course Director for Medical Physiology at St. George’s University in the West Indies and later at The Memorial University of Newfoundland in Canada. The experience of teaching over 4,000 medical students in different parts of the world established his academic base as a medical physiology teacher.
Jon moved to the United States in 2008 to join the founding faculty of the University of Central Florida, College of Medicine. In 2010 he was appointed as Assistant Dean for Medical Education and is responsible for overseeing the development of basic science content throughout the curriculum. His scholarly work includes publication of learning resources in the form of a textbook on medical physiology, flashcards and electronic resources for adaptive learning. His primary research interest relates to the efficacy of formative assessment and understanding student engagement in self-assessment.
Jon became a Fellow of UK Higher Education Academy in 2007, is deputy editor of the journal Advances in Physiology Education, currently chairs the American Physiological Society’s Teaching Section and is a member of the International Union of Physiological Society’s Education Committee. He was the recipient of the Alpha Omega Alpha Robert J Glaser Distinguished Educator Award, 2015.

Faculty Peer Partnerships for Teaching

Do your student evaluations of teaching sound like mine?

  • The instructor is clear and interesting, except when confusing and boring.
  • The pace of the class is too fast, except when it’s too slow.
  • Exams are fair, except when they’re too hard.
  • This instructor is __insert amusing but inappropriate comment about personal appearance or personality

Do you worry, like me, about what a promotion and tenure committee will think about your teaching based on these comments? One day I shared my concerns with an administrator and suggested that faculty might be better suited to evaluating each other. A couple of weeks later, the administrator let me know that they had formed a committee to develop a peer mentoring for teaching program for our department and that, by the way, I was the chair of this new committee. And thus began a quest to find ways for faculty to help each other.

I found that most faculty agreed with the AAAS Vision and Change report (AAAS, 2011) that we should incorporate more active teaching into undergraduate STEM courses. Unfortunately, the majority of faculty were not trained in evidence-based teaching, and one-time workshops have not been very effective in helping faculty make lasting changes in their teaching (Henderson and Dancy, 2009). As with learning any new skill, regular feedback is essential, but the primary sources of feedback on teaching are often student evaluations, which are problematic (Nasser and Fresko, 2002) and inadequate for professional development. Many campuses have faculty observe each other to write summative evaluations for promotion and tenure, but what most of us want is formative assessment to help us improve.

To address these issues, more institutions are developing faculty peer mentoring programs. Peer mentoring programs have several features in common (Gormally et al., 2014).

  1. Faculty observers should make multiple classroom visits, because one-time visits don’t provide feedback about whether adjustments made during a course have been effective. Have you ever visited a colleague’s class more than once?
  2. Before a classroom visit, the observer and observee should meet to discuss goals and expectations, and they should meet as soon as possible after class to review the feedback. Ideally they should also switch roles. Feedback can go both ways!
  3. In the same way that we use rubrics to give feedback to students, we should use rubrics to give feedback about teaching. Teaching rubrics vary wildly in length and detail, making this one of the most difficult parts of peer review. We settled on the UNC Peer-observation Form. I highly recommend Gormally et al. (2014) as a resource for finding rubrics. Do you have a favorite teaching rubric?
  4. Regardless of the rubric used, faculty should be trained in how to use it. Fortunately, this need not be an onerous, time-consuming task. Training can be as simple as having faculty get together to watch a few short videos of other people teaching, and then discussing how they rate the videos based on the rubric.
  5. A peer mentoring program should be voluntary and details of class visits should not be part of promotion and tenure files. However, if documentation is needed for a dossier, a summary letter should suffice.

Two difficult issues that we are still grappling with are time commitments and incentives. Brownell and Tanner (2012) cited lack of time and lack of incentives as barriers to changing teaching, but suggested that these barriers can be overcome by making pedagogy a component of one’s “professional identity”. Anecdotally, based on my conversations with peers, collaborations and partnerships are natural components of successful science, and we should approach teaching the same way. So, rather than a mentoring relationship, we hope to create a culture of teaching collaborations and partnerships that will encourage faculty to continue to refine their pedagogy.

What strategies is your institution using to encourage faculty to continue developing their teaching skills?

 

Resources

AAAS (2011) Vision and Change in Undergraduate Biology: A Call to Action, Washington DC.

Gormally, C., Evans, M. and P. Brickman (2014) Feedback about teaching in higher ed: neglected opportunities to promote change. CBE – Life Sci Ed. 13: 187-199.

Henderson, C. and M.H. Dancy (2009) Impact of physics education research on the teaching of introductory quantitative physics in the United States. Phys Rev Spec Top – Phys Ed Res 5: 020107

Nasser, F. and B. Fresko (2002) Faculty views of student evaluation of college teaching. Assess Eval in Higher Ed. 27: 187-198.

 

Nancy Aguilar-Roca is an assistant teaching professor at the University of California, Irvine in the Department of Ecology and Evolutionary Biology.