Incorporating Technology to Provide Students with “Baby Step” Skill Development Practice

By Sara Folta, Associate Professor, Friedman School of Nutrition

It was an ongoing issue that bothered me – despite doing well throughout the rest of the semester, students were not effectively tying together what they had learned on their final assignments. There were always the elite few who did a beautiful job – enough to put my concerns to rest for another year. I didn't understand why so many had missed the mark (at least to some degree), but the fact that some hit it was an indication, I hoped, that it was a student problem and not an instructor one.

Then I did the CELT Faculty Fellowship in 2015 and came to realize that I was perhaps making a classic teaching mistake in the final assignment for my course, "Theories of Behavior Change and their Application in Nutrition and Public Health Interventions." The course involves learning a new theory each week, which students apply to a behavioral challenge of their choosing, such as a clinical case study or a public health intervention. At the end, students write a final paper in which they choose among all the theories from the semester and, if they have chosen more than one (which they usually do), describe how they have integrated the theories into an overall treatment plan or intervention strategy. The problem was that I was asking them, after a semester of thinking fairly discretely about each theory, to choose and integrate without providing any real direction or practice in developing these skills.

This realization came early enough in the semester that I was able to add an activity on choosing and integrating theory during one of the class sessions. The activity seemed to go reasonably well, but I felt that more could be done. This past Fall, I was fortunate to have a very technology-minded TA, and I had already been considering how to incorporate more technology for learning when I saw the announcement for the Instructional Technology Exploration Program run by Education Technology (Tufts Technology Services). I proposed a project that used a diagramming tool – draw.io – for an in-class exercise to give students practice in combining multiple behavioral theories. The activity was similar to the one I had introduced in Fall 2015, in which examples of visual frameworks were provided, but the technology gave students a way to think through how to link theoretical constructs and to practice creating their own integrated visual frameworks. I hypothesized that draw.io would thereby improve students' ability to integrate and apply multiple theories.

 

Did It Work?

To see whether the technology helped, I compared the average grades on the class final papers from three years: 2014, when there was no activity at all; 2015, when the activity was done more briefly and without the use of technology; and 2016, when the activity made use of draw.io.

The percent of papers with a visual framework of theory integration increased over the three years, from a single paper (3.8%) in 2014 to more than one-third of papers (36.7%) in 2016 (see Table). However, there was no appreciable difference in the grade average across the three years.


Table. Multi-year comparison

                                   | 2014         | 2015                | 2016
                                   | No activity  | In-class activity,  | In-class activity,
                                   |              | no technology       | with technology
% papers with a graphic framework  | 3.8% (1/26)  | 25.0% (6/24)¹       | 36.7% (11/30)²
Grade average (15 points total)    | 14.2 (94.7%) | 14.3 (95.3%)        | 14.0 (93.1%)

¹ Four final papers excluded: all four appropriately chose a single theory (none used a visual framework).

² Three final papers excluded: one was granted an extension; two appropriately chose a single theory (one of the two included a visual framework).


Focusing on 2016, I then compared the grade averages of papers with and without a visual framework. I found that the average was 14.4 out of 15 (96.0%) for papers that included one, and 13.3 (88.7%) for those that did not – a statistically significant difference (p=0.02). This of course has potential for bias – perhaps I had been more impressed by papers that included one! That cannot be ruled out. It should also be noted that the class did very well as a whole, whether or not a paper included a visual framework.
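For readers curious about the mechanics of the comparison above, a standard way to test a difference in group means is a two-sample t-test. The sketch below uses scipy.stats.ttest_ind with hypothetical grade lists (the actual per-paper grades are not reproduced here); the group sizes, 11 with a framework and 19 without, match the 2016 counts in the Table, but the individual scores are made up for illustration only.

```python
# Hypothetical per-paper grades (out of 15); real 2016 grades are not shown here.
from statistics import mean
from scipy.stats import ttest_ind

with_framework = [14.5, 14.0, 15.0, 14.5, 14.0, 14.5,
                  14.5, 14.0, 14.5, 14.5, 14.0]            # n = 11
without_framework = [13.5, 13.0, 14.0, 12.5, 13.5, 13.0, 14.0,
                     13.0, 13.5, 12.5, 13.5, 13.0, 14.0, 13.5,
                     13.0, 13.5, 13.0, 13.0, 13.5]         # n = 19

# Two-sample t-test comparing the group means
t_stat, p_value = ttest_ind(with_framework, without_framework)

print(f"mean with framework:    {mean(with_framework):.1f}")
print(f"mean without framework: {mean(without_framework):.1f}")
print(f"p-value: {p_value:.4f}")
```

With a small class, a nonparametric alternative (e.g., a Mann–Whitney U test) would also be a reasonable choice, since grade distributions this compressed may not be normal.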

 

Take-Aways

I think there are several lessons to take away from this. Overall, despite my past uneasiness, students have actually done reasonably well on the final assignment across the years (although I have to admit, I also gave them a lot of credit in past years for trying – it's hard to do a true comparison). The overall goal was to provide an opportunity to practice the skills needed to be effective on the final assignment, and the question was whether technology would be useful in providing this practice. Based on this experience, the answer appears to be "yes": use of a visual framework increased from 2015, when the activity was done without technology, and inclusion of one was associated with a higher grade on the final paper. It should be noted, however, that despite the activity and the use of technology, fewer than half of students in 2016 included a visual framework. More practice may be helpful.

I hope to further explore this question and related ones (e.g., even if some students are not "visual thinkers" and may do well without providing a visual framework, is the activity still useful?). It is worth investing more time in this exploration, especially since the skills of choosing among theories, integrating them, and applying an integrated framework to a clinical or public health issue are ones students are likely to use in their future careers.
