Category Archives: Education Research

Education Research: A Beginner’s Journey

Why does it seem so hard to do education research? I have never been afraid to take on something new – what is stopping me?  These thoughts were burning in my mind as I sat around in a circle with educators at the 2016 Experimental Biology (EB) meeting. During this session, we discussed how we move education research forward and form productive collaborations. Here are my takeaways from the meeting:

EDUCATION RESOURCES

Here are some tips to get started on education research that I learned from the “experts”.

1. Attend poster sessions on teaching at national conferences such as Experimental Biology.

2. Get familiar with published education research and design.

3. Attend the 2016 APS Institute of Teaching and Learning.

4. Reach out to seasoned education researchers who share similar interests in teaching methodologies.

5. Get engaged in an education research network such as the APS Teaching Section – Active Learning Group.

“Doubt is not below knowledge, but above it.”
– Alain Rene Le Sage

As seasoned research experts discussed education research in what sounded like a foreign tongue, I began to doubt my ability to become an education researcher. However, the group quickly learned that we had a vast array of experience in the room, from aspiring new education researchers to seasoned experts. Thus, the sages in the room shared some valuable resources and tips for those of us just starting out (see sidebar).

“We are all in the gutter, but some of us are looking at the stars.”
– Oscar Wilde

You may already have all the data you need to publish a research study. In my mind, education research had to involve an intervention with treatment and control groups. However, it can also be approached like a retrospective chart review. To proceed, you should consult with your local Institutional Review Board to see whether you will need informed consent to use existing data or whether the study qualifies for exemption.

“Setting out is one thing: you also must know where you are going and what you can do when you get there.”
– Madeleine Sophie Barat

It became clear at our meeting that the way forward was collaboration and mentorship. A powerful approach that emerged is taking a research idea and implementing it across a number of institutions in a collaborative research project. By doing this, we would have a network of individuals to discuss optimal research design and implementation strategies and increase statistical power for the study.

At the end of my week at EB, I reflected on my experiences and realized that education researchers are a unique group, in that we are all passionate about the development of others. Collaborating with individuals who seek the best in each other will lead to great friendships and good research.

If you are interested in joining the APS Teaching Section “Active Learning Group”, please contact Lynn Cialdella-Kam.

Resources:

Suggested Readings:

Alexander, Patricia A, Diane L Schallert, and Victoria C Hare. 1991. “Coming to terms: How researchers in learning and literacy talk about knowledge.”  Review of educational research 61 (3):315-343.

Matyas, M. L., and D. U. Silverthorn. 2015. “Harnessing the power of an online teaching community: connect, share, and collaborate.”  Adv Physiol Educ 39 (4):272-7. doi: 10.1152/advan.00093.2015.

McMillan, James H, and Sally Schumacher. 2014. Research in education: Evidence-based inquiry. Pearson Higher Ed.

Postlethwaite, T Neville. 2005. “Educational research: some basic concepts and terminology.”  Quantitative research methods in educational planning:1-5.

Savenye, Wilhelmina C, and Rhonda S Robinson. “Qualitative research issues and methods: An introduction for educational technologists.”

Schunk, Dale H, Judith R Meece, and Paul R Pintrich. 2012. Motivation in education: Theory, research, and applications. Pearson Higher Ed.


Lynn Cialdella Kam joined CWRU as an Assistant Professor in Nutrition in 2013. At CWRU, she is engaged in undergraduate and graduate teaching, advising, and research. Her research has focused on health complications associated with energy imbalances (i.e. obesity, disordered eating, and intense exercise training). Specifically, she is in interested in understanding how alterations in dietary intake (i.e., amount, timing, and frequency of intake) and exercise training (i.e., intensity and duration) can affect the health consequences of energy imbalance such as inflammation, oxidative stress, insulin resistance, alterations in macronutrient metabolism, and menstrual dysfunction. She received her PhD in Nutrition from Oregon State University, her Masters in Exercise Physiology from The University of Texas at Austin, and her Masters in Business Administration from The University of Chicago Booth School of Business. She completed her postdoctoral research in sports nutrition at Appalachian State University and is a licensed and registered dietitian nutritionist (RDN).

Statistical Strategies to Compare Groups

A blog about statistics. How great is this?! If it’s a blog, it has to be short. My wife, however, would say that even a blog about statistics is still going to be way too long.

In physiology education, we usually want to compare the impact of something—a new instructional paradigm, say—between different groups: for example, a group that gets a traditional approach and a group that gets a new approach. Depending on the number of groups we want to compare, there are different ways to design the experiment and to analyze the data.

Two Samples: to Pair or Not to Pair?

Suppose you want to see if formative assessments over an entire semester impact learning. Clearly, your students can either have formative assessments or not. So you randomly assign your 12 students to be in one group or the other. You teach your course, give the 6 students formative assessments, and then grade your 65-point final. The question is, did formative assessments (given to the students in Group 1) impact their grade on the final? These are the grades:

Group 1    Group 2
   47         40
   48         56
   63         65
   64         33
   62         65
   50         51
Mean 55.7   Mean 51.7

These groups are independent of each other: the observations in one group are unrelated to the observations in the other group. So we want an unpaired 2-sample test. One option is a 2-sample t test. Here, the grades in the 2 groups are similar (P = 0.54): in this fictitious experiment, formative assessments did not impact grades.
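As a quick check, the unpaired test on these fictitious grades can be run in a few lines (a minimal sketch, assuming Python with SciPy is available):

```python
from scipy import stats

# Final-exam grades from the fictitious experiment
group1 = [47, 48, 63, 64, 62, 50]  # got formative assessments
group2 = [40, 56, 65, 33, 65, 51]  # no formative assessments

# Unpaired (independent-samples) 2-sample t test
t, p = stats.ttest_ind(group1, group2)
print(f"t = {t:.2f}, P = {p:.2f}")  # P ≈ 0.54
```

By default, `ttest_ind` assumes equal variances in the two groups, which matches the classic 2-sample t test described here.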

What happens if the observations in one group are related to the observations in the other group? This could happen if you gave formative assessments to each student (Treatment 1) for half of your course and then gave an exam. During the other half of your course, each student got no formative assessments (Treatment 2). For each student you randomly assign the order of the treatments so that half get Treatment 1 first, the other half get Treatment 2 first.

In this situation each subject acts as her own control—this makes the comparison of the treatments more precise—and we want a paired 2-sample test. These are the data:

Subject   Treatment 1   Treatment 2   Difference
   1           49            58             9
   2           47            55             8
   3           52            39           –13
   4           39            19           –20
   5           59            58            –1
   6           44            46             2
                                  Mean    –2.5

Here, the grades after each treatment are similar (P = 0.62): in this fictitious experiment, formative assessments did not impact grades.
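The paired version of the test works on the within-student differences. A minimal sketch with SciPy, using the data above:

```python
from scipy import stats

# Exam grades for each student under the two treatments
treatment1 = [49, 47, 52, 39, 59, 44]  # with formative assessments
treatment2 = [58, 55, 39, 19, 58, 46]  # without formative assessments

# Paired 2-sample t test: each student acts as her own control
t, p = stats.ttest_rel(treatment1, treatment2)
print(f"t = {t:.2f}, P = {p:.2f}")  # P ≈ 0.62
```

`ttest_rel` tests whether the mean of the paired differences is zero, which is exactly the paired 2-sample test described here.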

When You Have Three or More Samples

Let’s pretend we want to think about the amount of fat donuts absorb when they are cooked. These numbers represent the amount of fat absorbed when 6 batches of donuts are cooked in 4 kinds of fat.

Fat Type      1     2     3     4
             64    78    75    55
             72    91    93    66
             68    97    78    49
             77    82    71    64
             56    85    63    70
             95    77    76    68
Mean         72    85    76    62

If you are watching your diet, the lower the number, the better. There is good news and bad news about this example. The good news is there are 24 donuts in a single batch. The bad news is 100 has been subtracted from the actual amount in order to simplify the numbers.

The first question: why not just use a 2-sample (unpaired) test to compare the amounts of fat absorbed? There are two answers. First, if we compare just 2 groups at a time, we fail to use the information about variation within the two remaining groups. Second, if we compare just 2 groups at a time, we can make a total of 6 comparisons (1–2, 1–3, 1–4, 2–3, 2–4, 3–4). And if we do that, the chance that we find at least one of the 6 comparisons to be statistically meaningful, when all 4 groups are actually equivalent, is about 1 in 4 (26%). The more comparisons we make, the greater the chance that we find a comparison to be statistically meaningful simply because we are making more comparisons.
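The 26% figure follows from testing each comparison at the usual significance level of 0.05 and treating the 6 comparisons as independent (an approximation, since the comparisons share groups):

```python
# Chance of at least one "statistically meaningful" result among
# 6 comparisons when all groups are actually equivalent
alpha = 0.05           # significance level for each comparison
n_comparisons = 6
family_wise = 1 - (1 - alpha) ** n_comparisons
print(round(family_wise, 2))  # 0.26 — about 1 in 4
```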

What’s the solution? Use a procedure that initially compares all 4 groups at the same time. One option is analysis of variance. In analysis of variance, if the variation between groups is sufficiently larger than the variation within groups, that result would be unusual if the group means were truly equal. Here, by analysis of variance, the amount of fat absorbed differs among the 4 fat types (P = 0.007). You can then use other techniques to identify just which groups differ.
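A one-way analysis of variance on the donut data can be sketched with SciPy’s `f_oneway` (again assuming SciPy is available):

```python
from scipy import stats

# Fat absorbed (minus 100) by 6 batches of donuts in each of 4 fats
fat1 = [64, 72, 68, 77, 56, 95]
fat2 = [78, 91, 97, 82, 85, 77]
fat3 = [75, 93, 78, 71, 63, 76]
fat4 = [55, 66, 49, 64, 70, 68]

# One-way analysis of variance comparing all 4 groups at once
f, p = stats.f_oneway(fat1, fat2, fat3, fat4)
print(f"F = {f:.2f}, P = {p:.3f}")  # P ≈ 0.007
```

A post-hoc procedure such as Tukey’s honestly significant difference test can then identify which fat types differ from one another.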

The Big Picture

No matter how many groups you want to compare, the idea is the same: you want to design the experiment to account for—as best you can—extraneous sources of variation (like individual differences) that can impact the thing you want to measure, and you want to use all the information you collected when you compare the groups.

References

  1. Curran-Everett D. Multiple comparisons: philosophies and illustrations. Am J Physiol Regul Integr Comp Physiol 279: R1–R8, 2000.
  2. Curran-Everett D. Explorations in statistics: hypothesis tests and P. Adv Physiol Educ 33: 81–86, 2009.
  3. Curran-Everett D. Explorations in statistics: permutation methods. Adv Physiol Educ 36: 181–187, 2012.
  4. Snedecor GW, Cochran WG. Statistical Methods (7th edition). Ames, IA: Iowa State Univ. Press, 1980, p 83–106, 215–237.


Doug Everett (Curran-Everett for publications) graduated from Cornell University (BA, animal behavior), Duke University (MS, physical therapy) and the State University of New York at Buffalo (PhD, physiology). He is now Professor and Head of the Division of Biostatistics and Bioinformatics at National Jewish Health in Denver, CO. In 2011, Doug was accredited as a Professional Statistician by the American Statistical Association; he considers this quite an accomplishment for a basic cardiorespiratory physiologist. Doug has written invited reviews on statistics for the Journal of Applied Physiology and the American Journal of Physiology; with Dale Benos he has written guidelines for reporting statistics; and he has written educational papers on statistics for Advances in Physiology Education. Doug and his wife Char Sorensen officiate for USA Swimming and US Paralympic Swimming. After 32 years in 6th-grade classrooms, Char is now on her Forever Summer schedule: she retired in May 2009.

 

Building Critical Thinking through Technical Writing: Are We Taking the Right Approach?

I religiously read (ahem…meaning I quickly skim the RSS feeds) Faculty Focus for tips, tricks, the latest education research trends, and general teaching strategies to help me overcome my classroom anxieties. Over the last year or so, these blogs and articles have helped me with many ideas and issues in the courses that I teach. Recently, one particular blog resonated with me: “How Assignment Design Shapes Student Learning” (Weimer, 2015). It spoke about how specific assignments guide students to think and perform in specific ways, and how that influences their overall learning. You may be thinking: well, of course. But my current overall education research question is: “How does writing lab reports contribute to a student’s understanding of the scientific method?” This makes me wonder whether, in working to help students build critical thinking skills around the scientific method, we may be going about lab report writing assignments in the wrong way.

When I first started teaching undergraduate General Biology and Anatomy and Physiology courses three years ago, I was dismayed at what was the norm for student lab report writing. The first thing I asked myself was, “Was my science writing that bad when I was an undergrad?” My answer: a resounding yes!

In Spring 2013, I set out to develop a series of assessments that help students practice technical writing skills, along with clear rubrics to guide them in developing this skill. I have collected data on students’ technical writing skills with the goal of correlating these new skills with student understanding and use of the scientific method.

The science technical writing assignments that I build into my lab courses are meant to help students, through low-stakes practice, reflect on the labs performed and the implication(s) of the data obtained. I use specific “chunked” assignments in which students write components of a lab report. For example, the first lab write-up may have students write their testable hypothesis and the methods used in the lab. For the second lab, students write their testable hypothesis and results. These assignments continue until students have had a chance to practice all of the components of a lab report before writing a complete one. Our data show that students who complete the chunked practice assignments do significantly better on the final lab report assignment (Hannah & Lisi, 2015).

Now, let’s mentally jump back to the blog “How Assignment Design Shapes Student Learning” and the corresponding article “Private Journals versus Public Blogs: The Impact of Peer Readership on Low-stakes Reflective Writing” (Foster, 2015). The data from Foster’s research illustrate that students have inherently different styles of writing depending on the target audience. Specifically, students with open writing assignments (blogging to their peers), where they have to respond to peers and defend their information, are more mentally adventurous than when they write journal assignments for only the professor or teaching assistants to read.

My technical writing data suggest that students’ science technical writing improves with practice and regular, prompt feedback. But are they only practicing the “form” and the rules that I set up in the assignments, or are they truly working through the material and using the scientific method to develop their critical thinking skills? In the end, I want to help people explore science so that they can apply and evaluate scientific information to determine its impact on their daily lives. Does the traditional lab report accurately reflect a student’s ability to work through data? I would love to hear your comments if you have any thoughts or suggestions on how I might investigate students’ critical thinking skills using the blog format for science lab reports.

 

References

Foster, D. (2015). Private Journals versus Public Blogs: The Impact of Peer Readership on Low-stakes Reflective Writing. Teaching Sociology, 43(2), 104–114. http://doi.org/10.1177/0092055X14568204

Hannah, R., & Lisi, M. (2015). Technical Writing for Introductory Science Courses – Proficiency Building for Majors and Non-majors. 2015 Experimental Biology Meeting Abstracts, Abstract #678.25. Accessed June 10, 2015.

Weimer, M. (2015, August). How Assignment Design Shapes Student Learning. Retrieved June 10, 2015, from http://www.facultyfocus.com/articles/teaching-professor-blog/how-assignment-design-shapes-student-learning/

 

Rachel Hannah is a new Assistant Professor of Biological Sciences at University of Alaska, Anchorage. Previously, she was an Assistant Professor in the Math and Sciences Department at the University of Maine at Presque Isle. Helping people become scientifically literate citizens has become her major career focus as a science educator. As a classroom and outreach educator, Rachel works to help people explore science so they can apply and evaluate scientific information to determine its impact on one’s daily life. She is trained as a Neurophysiologist and her graduate degree is in Anatomy and Neurobiology from the University of Vermont College of Medicine. Recently Rachel’s research interests have migrated to science education and how students build critical thinking skills.