Monthly Archives: September 2015

Technology in the Classroom: A Double-Edged Sword?

As I sat down at the end of the summer to write this blog post, I was in the midst of revising syllabi and planning out my fall semester. For me, this tends to be a very reflective time. What worked last year? Or more importantly, what didn’t work and needs to be revised? Which activities did the students like? Which ones did I like? What new case studies, problem sets, or online models should I add?

Over the last few years, I have been incorporating more computer simulations, online demonstrations, and website resources into my physiology courses. I often send emails to students reminding them to bring a laptop or tablet to class because we will be using an online Nernst-Goldman simulator, creating cell-signaling animations in PowerPoint, and so on. I receive positive feedback from my students about these interactive exercises, and I am always on the lookout for new ones.

And it appears that I am not the only one. Each new issue of Advances in Physiology Education features an article on a new technology aid – interactive iPad apps for acid-base physiology, increasing physiology interest through Facebook, or the effectiveness of online quizzes. These technological advances allow us to provide additional self-assessment tools to our students and give them instantaneous feedback. Models and simulations help engage visual and experiential learners. Perhaps most importantly, these tech tools attempt to clarify hard-to-explain or challenging physiological concepts through interactive interfaces and dynamic models.

However, I worry that technology in the classroom may be a double-edged sword. At the same time that I have been embracing and encouraging these technology tools in class, I have noticed some disturbing trends in improper technology use during class. No teacher is immune from the angst of a ringing or vibrating cell phone during a lecture. Under-the-desk texting has become ubiquitous. Several years ago, I team-taught a course with a colleague. I sat at the back of the classroom during her lectures and vice versa. Over half the students in the course “took notes” on their laptops during lecture. I use the term “took notes” loosely because my back-row observations indicated that these students were spending a considerable amount of the lecture time updating their Facebook status, looking at PowerPoint slides for other classes (e.g., studying for an upcoming O-Chem test), or online shopping.

This trend of multitasking and web surfing during class has been noted across the country and at all levels of higher education, and it has driven many professors to include penalty clauses in their syllabi or to ban laptops altogether. Moreover, recent studies suggest that taking notes on a computer is not as effective as taking them with traditional pen and paper. Students who type their notes tend to do less processing of the material and simply transcribe the lecture verbatim.

So what’s the answer? Accept technology, warts and all; banish it from the lecture hall altogether; or seek some middle ground? To be perfectly honest, I’m not quite sure. But I would love to hear your opinions and experiences…

 

Sarah Blythe is an assistant professor of Biology at Washington and Lee University in Lexington, VA. She received her PhD in Neuroscience from Northwestern University. She teaches anatomy and physiology, vertebrate endocrinology, neurophysiology, and nutrition courses. Her research interests focus on understanding the effects of diet-induced obesity on the brain and the reproductive system. She is a strong advocate for undergraduate research experience both in and out of the classroom. She was recently awarded a Jeffress Trust Interdisciplinary Research grant along with two of her W&L colleagues, which allowed the team to fund three summer research fellowships for undergraduates.

 

Statistical Strategies to Compare Groups

A blog about statistics. How great is this?! If it’s a blog, it has to be short. My wife, however, would say that even a blog about statistics is still going to be way too long.

In physiology education, we usually want to compare the impact of something—a new instructional paradigm, say—between different groups: for example, a group that gets a traditional approach and a group that gets a new approach. Depending on the number of groups we want to compare, there are different ways to design the experiment and to analyze the data.

Two Samples: to Pair or Not to Pair?

Suppose you want to see if formative assessments over an entire semester impact learning. Clearly, your students can either have formative assessments or not. So you randomly assign your 12 students to be in one group or the other. You teach your course, give the 6 students formative assessments, and then grade your 65-point final. The question is, did formative assessments (given to the students in Group 1) impact their grade on the final? These are the grades:

          Group 1   Group 2
            47        40
            48        56
            63        65
            64        33
            62        65
            50        51
Mean        55.7      51.7

These groups are independent of each other: the observations in one group are unrelated to the observations in the other group. So we want an unpaired 2-sample test. One option is a 2-sample t test. Here, the grades in the 2 groups are similar (P = 0.54): in this fictitious experiment, formative assessments did not impact grades.
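
If you want to check this comparison on a computer, a minimal sketch in Python might look like the following (I am using SciPy here simply as one convenient option; the grades are typed in from the table above, and ttest_ind carries out the unpaired 2-sample t test):

    # Unpaired (independent) 2-sample t test on the fictitious grades above.
    from scipy import stats

    group1 = [47, 48, 63, 64, 62, 50]   # formative assessments
    group2 = [40, 56, 65, 33, 65, 51]   # no formative assessments

    t, p = stats.ttest_ind(group1, group2)  # pooled-variance t test by default
    print(f"t = {t:.2f}, P = {p:.2f}")      # prints P = 0.54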

What happens if the observations in one group are related to the observations in the other group? This could happen if you gave formative assessments to each student (Treatment 1) for half of your course and then gave an exam. During the other half of your course, each student got no formative assessments (Treatment 2). For each student you randomly assign the order of the treatments so that half get Treatment 1 first, the other half get Treatment 2 first.

In this situation each subject acts as her own control—this makes the comparison of the treatments more precise—and we want a paired 2-sample test. These are the data:

Subject   Treatment 1   Treatment 2   Difference
   1           49            58             9
   2           47            55             8
   3           52            39           –13
   4           39            19           –20
   5           59            58            –1
   6           44            46             2
                                 Mean      –2.5

Here, the grades after each treatment are similar (P = 0.62): in this fictitious experiment, formative assessments did not impact grades.
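
The paired analysis is just as easy to sketch; ttest_rel (again from SciPy, as one option) works directly on the two treatment columns from the table above and tests whether the mean within-subject difference is zero:

    # Paired 2-sample t test: each subject received both treatments.
    from scipy import stats

    treatment1 = [49, 47, 52, 39, 59, 44]  # with formative assessments
    treatment2 = [58, 55, 39, 19, 58, 46]  # without formative assessments

    t, p = stats.ttest_rel(treatment1, treatment2)
    print(f"t = {t:.2f}, P = {p:.2f}")      # prints P = 0.62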

When You Have Three or More Samples

Let’s pretend we want to think about the amount of fat donuts absorb when they are cooked. These numbers represent the amount of fat absorbed when 6 batches of donuts are cooked in 4 kinds of fat.

        Fat type 1   Fat type 2   Fat type 3   Fat type 4
            64           78           75           55
            72           91           93           66
            68           97           78           49
            77           82           71           64
            56           85           63           70
            95           77           76           68
Mean        72           85           76           62

If you are watching your diet, the lower the number, the better. There is good news and bad news about this example. The good news is that there are 24 donuts in a single batch. The bad news is that 100 has been subtracted from the actual amounts in order to simplify the numbers.

The first question: why not just use a 2-sample (unpaired) test to compare the amount of fat absorbed? There are two answers. First, if we compare just 2 groups at a time, we fail to use information about the variation within each of the two remaining groups. Second, if we compare just 2 groups at a time, we can make a total of 6 comparisons (1–2, 1–3, 1–4, 2–3, 2–4, 3–4). And if we do that, the chance that we find at least one of the 6 comparisons to be statistically meaningful when all 4 groups are actually equivalent is about 1 in 4 (26%). The more comparisons we make, the greater the chance that we find a comparison to be statistically meaningful simply because we are making more comparisons.
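
Where does the 26% come from? If each of the 6 comparisons is made at the usual 0.05 level, and we treat the comparisons as roughly independent, the chance of at least one spurious “statistically meaningful” result is about 1 − 0.95^6:

    # Chance of at least one spurious "significant" result in 6 comparisons,
    # each made at the 0.05 level (treating the comparisons as independent).
    alpha, k = 0.05, 6
    print(round(1 - (1 - alpha) ** k, 2))   # 0.26, roughly 1 in 4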

What’s the solution? Use a procedure that initially compares all 4 groups at the same time. One option is analysis of variance. In analysis of variance, if the variation between groups is sufficiently larger than the variation within groups, that result would be unusual if the group means were truly equal. Here, by analysis of variance, the amount of fat absorbed differs among the 4 fat types (P = 0.007). You can then use other techniques to identify just which groups differ.
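
For the curious, the overall comparison can be sketched the same way; SciPy’s f_oneway (one option among many) runs the one-way analysis of variance on the four columns of the donut table:

    # One-way analysis of variance on the donut data above.
    from scipy import stats

    fat1 = [64, 72, 68, 77, 56, 95]
    fat2 = [78, 91, 97, 82, 85, 77]
    fat3 = [75, 93, 78, 71, 63, 76]
    fat4 = [55, 66, 49, 64, 70, 68]

    f, p = stats.f_oneway(fat1, fat2, fat3, fat4)  # compares all 4 groups at once
    print(f"F = {f:.2f}, P = {p:.3f}")             # prints P = 0.007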

The Big Picture

No matter how many groups you want to compare, the idea is the same: you want to design the experiment to account for—as best you can—extraneous sources of variation (like individual differences) that can impact the thing you want to measure, and you want to use all the information you collected when you compare the groups.

References

  1. Curran-Everett D. Multiple comparisons: philosophies and illustrations. Am J Physiol Regul Integr Comp Physiol 279: R1–R8, 2000.
  2. Curran-Everett D. Explorations in statistics: hypothesis tests and P. Adv Physiol Educ 33: 81–86, 2009.
  3. Curran-Everett D. Explorations in statistics: permutation methods. Adv Physiol Educ 36: 181–187, 2012.
  4. Snedecor GW, Cochran WG. Statistical Methods (7th edition). Ames, IA: Iowa State Univ. Press, 1980, p 83–106, 215–237.

Doug Everett (Curran-Everett for publications) graduated from Cornell University (BA, animal behavior), Duke University (MS, physical therapy) and the State University of New York at Buffalo (PhD, physiology). He is now Professor and Head of the Division of Biostatistics and Bioinformatics at National Jewish Health in Denver, CO. In 2011, Doug was accredited as a Professional Statistician by the American Statistical Association; he considers this quite an accomplishment for a basic cardiorespiratory physiologist. Doug has written invited reviews on statistics for the Journal of Applied Physiology and the American Journal of Physiology; with Dale Benos he has written guidelines for reporting statistics; and he has written educational papers on statistics for Advances in Physiology Education. Doug and his wife Char Sorensen officiate for USA Swimming and US Paralympic Swimming. After 32 years in 6th-grade classrooms, Char is now on her Forever Summer schedule: she retired in May 2009.