Faculty Peer Partnerships for Teaching

Do your student evaluations of teaching sound like mine?

  • The instructor is clear and interesting, except when confusing and boring.
  • The pace of the class is too fast, except when it’s too slow.
  • Exams are fair, except when they’re too hard.
  • This instructor is ___ (insert amusing but inappropriate comment about personal appearance or personality).

Do you worry, like me, about what a promotion and tenure committee will think about your teaching based on these comments? One day I shared my concerns with an administrator and suggested that faculty might be better suited to evaluating each other. A couple of weeks later, the administrator let me know that they had formed a committee to develop a peer mentoring program for teaching in our department and that, by the way, I was the chair of this new committee. And thus began a quest to find ways for faculty to help each other.

I found that most faculty agreed with the AAAS Vision and Change report (AAAS, 2011) that we should incorporate more active teaching into undergraduate STEM courses. Unfortunately, most faculty were never trained in evidence-based teaching, and one-time workshops have not been very effective at helping faculty make lasting changes in their teaching (Henderson and Dancy, 2009). As with learning any new skill, regular feedback is essential, but the primary source of feedback on teaching is often student evaluations, which are problematic (Nasser and Fresko, 2002) and inadequate for professional development. Many campuses have faculty observe each other to write summative evaluations for promotion and tenure, but what most of us want is formative assessment to help us improve.

To address these issues, more institutions are developing faculty peer mentoring programs. Effective peer mentoring programs share several features (Gormally et al., 2014).

  1. Faculty observers should make multiple classroom visits, because one-time visits don’t provide feedback about whether adjustments made during a course have been effective. Have you ever visited a colleague’s class more than once?
  2. Before a classroom visit, the observer and observee should meet to discuss goals and expectations, and they should meet as soon as possible after class to review the feedback. Ideally they should also switch roles. Feedback can go both ways!
  3. In the same way that we use rubrics to give feedback to students, we should use rubrics to give feedback about teaching. Teaching rubrics vary widely in length and detail, making choosing one among the most difficult parts of peer review. We settled on the UNC Peer-observation Form. I highly recommend Gormally et al. (2014) as a resource for finding rubrics. Do you have a favorite teaching rubric?
  4. Regardless of the rubric used, faculty should be trained in how to use it. Fortunately, this need not be an onerous, time-consuming task. Training can be as simple as having faculty get together to watch a few short videos of other people teaching and then discuss how they would rate the videos based on the rubric.
  5. A peer mentoring program should be voluntary, and details of class visits should not be part of promotion and tenure files. If documentation is needed for a dossier, a summary letter should suffice.

Two difficult issues that we are still grappling with are time commitments and incentives. Brownell and Tanner (2012) cited lack of time and lack of incentives as barriers to changing teaching, but argued that these barriers can be overcome by making pedagogy a component of one’s “professional identity.” Anecdotally, my conversations with peers suggest that collaborations and partnerships are natural components of successful science, and that we should approach teaching the same way. So, rather than a mentoring relationship, we hope to create a culture of teaching collaborations and partnerships that encourages faculty to keep refining their pedagogy.

What strategies is your institution using to encourage faculty to continue developing their teaching skills?



AAAS (2011) Vision and Change in Undergraduate Biology: A Call to Action, Washington DC.

Brownell, S.E. and K.D. Tanner (2012) Barriers to faculty pedagogical change: lack of training, time, incentives, and…tensions with professional identity? CBE – Life Sci Ed. 11: 339-346.

Gormally, C., Evans, M. and P. Brickman (2014) Feedback about teaching in higher ed: neglected opportunities to promote change. CBE – Life Sci Ed. 13: 187-199.

Henderson, C. and M.H. Dancy (2009) Impact of physics education research on the teaching of introductory quantitative physics in the United States. Phys Rev Spec Top – Phys Ed Res 5: 020107

Nasser, F. and B. Fresko (2002) Faculty views of student evaluation of college teaching. Assess Eval in Higher Ed. 27: 187-198.


Nancy Aguilar-Roca is an assistant teaching professor at the University of California, Irvine in the Department of Ecology and Evolutionary Biology. She studied respiratory and cardiovascular physiology of air-breathing fishes for her PhD at Scripps Institution of Oceanography and did a postdoc in evolutionary genomics of E. coli at UCI. She currently runs the high-enrollment upper-division human physiology labs and is in the process of revamping the course with flipped lab protocols and more inquiry-based activities (instead of “cookbook”). She also teaches freshman-level ecology and evolutionary biology and is interested in using online ecology databases to create inquiry-based computer activities for this large lecture course. Her other courses include Comparative Vertebrate Anatomy, Marine Biology, Physiology of Extreme Environments, and non-majors physiology. At the graduate level, she co-organizes a seminar series for graduate students and postdocs who are interested in learning evidence-based teaching techniques. She was recently appointed Director of the Undergraduate Exercise Sciences Major and welcomes any advice about developing curriculum for this major.

Posted in Assessment by Miranda Byse