ANS 109-TAT-year-five-lessons


These things keep getting longer, and taking me longer to post. (It’s November!?) I’ll see what I can do about returning to high-level, not mercilessly edited reflections next year…

20.109, Laboratory Fundamentals in Biological Engineering, Fall 2011

During the fall semester, I play a supporting role in 20.109, running one of the two laboratory sections. I give pre-lab lectures and do some grading, but do curriculum design only in the spring. Because fall has historically been my more relaxed term, it provides a great opportunity to do one-on-one work with students and also to experiment.

Students were eager for regular (weekly) office hours, so I tried this approach instead of holding office hours only right before major assignments are due. Although the near-due-date OH were definitely better attended, it was good to have some other structured opportunities to meet with students, since in-lab time is less private and in short supply on certain days. I had a great experience with one student in particular working on her writing. Apparently my many written pages and verbal explanations of writing tips boil down to a simple mantra that she developed and now uses for all writing assignments: “go step-by-step; be concise.” :)

20.109, Laboratory Fundamentals in Biological Engineering, Spring 2012

In many ways, I’m more proud than ever of the design of 20.109, both the theory and how it plays out in practice. Yet I didn’t carefully heed my own advice from last year, namely to create a less stressful environment, so that remains my key goal for next year.

Overall structure and scientific/experimental content

Several students really “got” the semester-long trajectory from more defined, somewhat contrived problems to more independent, open-ended investigations. Many also appreciated the chance to hone their writing chops on a fairly straightforward project before writing up said open-ended work. In this way, students were able to partly decouple technical and writing expertise, focusing first on developing their writing and only later their technical skills. The optimal extent of this decoupling is something to consider further.

With respect to content, the Protein Engineering module led by Alan Jasanoff (here Module 2, or M2) continues to set the bar for the 20.109 experience. Two crucial elements that contribute to its success are its substantive design element and the feeling that a project has really been taken from start to finish. A few minor tweaks improved the spring 2012 version of M2 further. Most importantly, students were offered more than one reference mutation to run in parallel with their own novel mutation. This change allowed for more coherent narratives in the final reports and yet another element of choice. On the technical side, the assay buffer was changed to improve data reliability, and the timing of the protein gel was changed (moved to Day 7) to reduce time pressure on Day 6. Areas for further improvement are better integrating lab and lecture content, emphasizing the big picture throughout, and pacing homework (including cutting some?) to avoid overload.

Cell Engineering/Module 3 also has a solid design element, though it continues to be less technically robust than I would like. Although other obligations prevent a major overhaul for the coming semester, this module is ripe for one the following summer. The group who added TGF to their chondrocytes got excellent results, but this might be an expensive solution to apply across the board.

RNA Engineering/Module 1 was revised this year to offer students somewhat more choice in shaping the experiment. However, only one group chose to investigate a parameter besides the two default ones that we offered; for this one group, their sense of ownership did seem to enhance the M1 experience.

Overall, folks had trouble grasping the purpose of M1 and saw it as artificial, even though I think we made a better case than in the past for its substantive connection to “real” research. On the one hand, many people appreciated that M1 placed a special focus on communication along with relatively simple technical content. Students could thus learn general communication principles here and apply them during the rest of the semester as the complexity of the science ratchets up. However, in some sense the experiment is a bad practice case for learning to write a research article, because much of the Results we ask for would be “data not shown” (at best) in a real publication, and the Discussion could be similarly spare. Again, I think we responded to this student sentiment better than in the past, explaining that publications regularly establish credibility by summarizing things that are already understood, and then taking that theory/mechanistic analysis further.

Any revision of M1 should enhance the design element. However, it’s worth noting that students who came in with little or no lab experience thought that Module 1 made for a great lab orientation. We don’t want to abandon this population by introducing complex, open-ended data interpretation too quickly.

Improving writing instruction

I implemented several changes in writing instruction this semester – not the best science perhaps (changing several parameters at once!), but effective by several metrics.

With respect to the Writing Across the Curriculum (WAC) faculty, folks with a more technical (specifically biology) background were recruited in hopes of reducing the disconnect perceived by students between comments from writing and technical faculty. To further buttress students’ trust, WAC faculty had a more sustained relationship with them than usual. During the first module, writing faculty commented on interim homework assignments such as a single figure or a draft introduction section (as technical faculty do), rather than only responding to a complete report draft. In the past, with only one week to revise the draft report and knowing that their grade ultimately comes from the technical faculty, many students didn’t make much use of writing faculty comments. In contrast, this semester the sustained relationship not only built trust, but also reduced time pressure for implementing changes related to writing. In fact, although receiving comments on the complete draft report was optional, every student asked to do so. Due to resource limitations, WAC was hardly involved in the class during the second module. During the end-of-semester discussion, several students said they would have liked to receive even more feedback from WAC (namely during M2) – a previously unheard-of comment!

As a second experiment, WAC was supplemented with a professional writing tutor, with two aims: to increase one-on-one time with students, and to focus on fundamental, big picture writing skills that span the writing process (outlining, drafting, revision). He aimed to teach “how to make sense of and revise according to feedback” versus “how to improve your grade on this lab report.” He was also meant to provide a more informal, completely opt-in resource. Students who visited the tutor seemed to truly internalize good habits that will serve them well in other classes, such as having a pre-writing/outlining process, writing with the audience in mind (how much detail is needed?), and reducing confusing verbiage. Students with both stronger and weaker foundational skills were helped, though students who already had the technical content down benefited in greater numbers. For students who struggled with both writing and technical issues, a very useful decoupling occurred: as the primary technical instructor, I was freed up to work exclusively on content and analytical skills when I knew that a student was also meeting with the writing tutor regularly.

On a personal note, it was gratifying to have one student state that she finally internalized a piece of writing advice because I clearly told her why it’s important: Oftentimes, if I feel the convention has merit I’m much more likely to remember to respect it.

Improving reflection assignment

I’ve tried a few different types of reflective assignments, and this semester I finally hit on a winning approach. One purpose of reflection is to help students internalize what they have learned, while another is to give me a glimpse of contributions and learning less tangible than what the major assessments capture, and that might otherwise not be evaluated in final grades.

In 2009 and 2010, students simply filled out a participation rubric (Did I ask questions? Etc.) once per module, and were encouraged to write a few sentences of summative reflection but seldom did. The exercise took little time but had even less worth. In 2011, students wrote an open-ended reflection once per module. The only guidance given was “You might comment explicitly on your participation (such as asking/answering questions during lecture and lab) as well as on your learning experience (such as ways that you are developing as a scientist or writer).” Of the ten students who replied to a query about the utility of these reflections (on end-of-semester evaluations), half found them useless. In contrast, in 2012 only 1 of 7 students who answered the question replied negatively, with a soft “not really.” What changed?

In spring 2012 students wrote four narrowly defined reflections, primarily focused on communication skills. The first was a self-evaluation of a journal club presentation. The next two were linked: one was submitted after revision of the first laboratory report, and the next after writing a draft of the second report. Students were asked to describe 2-3 key lessons they learned from the revision process during the first module, and to revisit these during the second module in order to assess their own success along with what areas needed further growth. The final reflection could be written in one of a few categories, including meeting with a writing or technical instructor. A representative end-of-semester response: “I think the reflections/self-assessments helped me reflect on how I learn, and what changes I need to make to continue to be successful.” I seem to be closer to my goal of getting students to take charge of their own learning, particularly in transferrable skills such as communication and analysis.

A few more notes about transferrable skills

Besides improved writing success and engagement (two-thirds of students saw at least one writing instructor or tutor at least once), end-of-semester evaluations revealed positive impressions of gains in other big picture, transferrable skills for a large cohort of students. In particular, students learned to read papers, design experiments, and think critically/analytically. Just a few representative comments:

  • After 20.109, reading papers goes much faster. I now know how to skim them effectively too.
  • I feel more confident [reading papers], but also believe that this is something that could be given greater emphasis in the class.
  • I feel much better prepared on my ability to design and carry out an experiment.
  • I think 20.109 has helped me think about what are appropriate controls for experiments. Before the class, I was always unsure of what controls were appropriate, and the differences between positive and negative controls.
  • This class has greatly changed my perspective in terms of experimental design. In this sense the final research proposal was a great culminating assignment for the class, even though I did find this particular assignment very challenging.

Revisions to major assignment scheduling and group work

Inspired by some work my colleague Natalie did during the fall semester (in the module taught by Prof. Angela Belcher), I encouraged students to work in larger “super-groups” of 2-3 lab pairs during the third module. Although multi-group coordination has always been an option, this time I emphasized and incentivized it: super-groups were allowed to write a single report (rather than one per lab pair), and students were told that pooling samples could provide more robust results. Compared to previous years, the report narratives were more cohesive and the data analyzed in more depth, on average. It’s not clear whether this was due to working in larger groups (more peer feedback/revision by necessity), to changes in writing instruction, to both, or to other factors entirely.

Because of oddities in the 2012 calendar, students had much more time than usual between the last day of Module 2 and the submission of their draft reports. These drafts were noticeably stronger than in the past, particularly with respect to clarity. Again, it is hard to say whether the increased time, the enhanced writing instruction, or both (or neither) contributed. Intriguingly, the final versions were of similar quality to previous years’, suggesting that students reached the same plateau in scientific understanding while more quickly improving their communication of that understanding (higher draft grades). This result is consistent with both changes acting in concert: better writing instruction and more time to implement it.

The two findings above, along with increasing enrollment in the major, put some interesting notions in my head. First, that we start with group work and communication – so students can learn from each other’s successes and mistakes – and then move to individual assignments. That is, M1 would be group work and M2 would be individual. For reasons of convenience/class structure, M3 would remain group work as well. Second, only the M1 report would be submitted as a draft and a final version, while the M2 report would be submitted without opportunity to revise. If the first report is written in pairs, the faculty members associated with M1 and M2 would actually have equal grading loads: one has to look at twice as many reports, but only once, and with limited time pressure; the other looks at half as many reports, but twice, and must work quickly on the draft to give feedback. Students would be given more time for the second report than for the first, and expected to do internal rather than formal revision, further contributing to the trajectory of independence built into the class.
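
As a quick sanity check on that symmetry (a purely illustrative calculation – the enrollment of 24 is made up, not a real class size): 24/2 = 12 paired M1 reports × 2 readings each (draft + final) = 24 readings, versus 24 individual M2 reports × 1 reading each = 24 readings.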

Primary area for improvement: need to decrease stress

The comments below mostly speak for themselves:

  • We are always rushing from one thing to another during lab hours
  • Some of the [homework assignments] are far too long. I know that all this work has to get done but I spend so much time on this class and its so many times a week. It just tires me out.
  • Sometimes I feel that my work isn’t top quality or that I don’t have enough time to meet with the instructors before a paper is due.
  • The excess of assignments and quizzes seemed overly daunting to me rather than encouraging to stay up-to-date with the material.

Students feel overwhelmed and frustrated that they can’t put forward their best efforts. While a slightly different perspective is shown in the comments below, clearly some recalibration of classroom environment and workload must be done.

  • This class has brought a lot of stress to my life, but I’ve also improved the most from this class this semester
  • Although the class demands a lot of work, I’ve benefited [sic] and improved in multiple ways, including my work ethic, time management skills and my scientific writing. Central to this improvement was the feedback I received.

I should note that my overall evaluations have been lower these past two years than the two prior years, although responses to questions such as “I understand/can apply concepts” and “I learned a lot” are mostly unchanged. Moreover, my personal scores are always higher in fall than in spring, as are most general scores, including oral communication, which unlike other aspects of the curriculum is taught identically in both semesters. So I believe the lower evaluations reflect an atmosphere issue, not a teaching issue per se.

Need to increase big picture emphasis

Students want more big picture emphasis throughout 20.109, as revealed by both mid-term and end-of-semester anonymous written evaluations. Specifically, they want to be reminded early and often of the overall goal of the experiment, so they don’t get lost in the day-by-day goals without making connections. Both the “early” (a Mod 1 issue) and the “often” (a Mod 2 issue) are clearly important. Students also want broader context and content in the formal lectures. For example, one student asked to hear more about “protein engineering in general” in module 2. Although I know for a fact this topic is covered in the first two lectures, it’s worth revisiting later in the module when students can better understand the relevance of and innovation demonstrated by the examples. We can really learn a lesson here from Prof. Bevin Engelward’s work in the fall semester. She is a master at keeping both the overall goal and relevant contextual topics at the forefront, without losing important details.

Most implementable ideas from student feedback (mid-term anonymous evals, end-of-term discussion, and some individual emails)

Communication

The only low-key practice students get in oral communication before doing their individual journal club talks is a group discussion about a current research article. Each pair is assigned one figure of the paper to discuss, but this happens at the last minute (in order to encourage students to have read the whole paper!) and presentation is very informal. A great student suggestion was that this group discussion be made more formal, with each team getting their figure assignment in advance and preparing a single slide. Our oral communication instructor could then attend this presentation and give feedback. Similarly, some students requested writing instructor “study halls” rather than lectures. A substantial population of students did benefit from the lectures, though, and again I wouldn’t want to leave these students at a disadvantage just to avoid boring our more experienced students.

As for the journal club itself, an idea was raised to provide supplemental readings, such as a review article on each topic. I did this once in S09 and could return to it. It seemed to have a small effect, if any, but certainly doesn’t hurt.

Along the lines of practice, students suggested getting mock data early in the semester and having to turn that into a results+discussion section. I’m torn, because this mock work would take time away from novel work. On the other hand, if it were very limited in scope, such an exercise might be useful without being a distraction.

Whatever the mechanism, students generally were eager to get additional guidance on writing discussion sections, particularly with respect to the kind of content expected. Here, model examples will be useful. I did send out a model student report this semester, and it was much appreciated, but it came somewhat late (a few days before their first report was due) and was in a slightly different format as well. In concert with student models, I should introduce more literature models than I do, taking the time to dissect what type of information is in the discussion, paragraph by paragraph. We do this with the M1 journal article, but more examples would be better. (Here they tell the implications of the most surprising result; here they provide alternate explanations besides their preferred model; etc.)

Students thought they might benefit from hearing directly from the professor grading the report about his/her expectations (before they write it), in addition to getting the perspective of the writing and technical instructors. I think it could indeed be very useful for both the class and me to formally hear what each professor values most. Certainly we discuss these matters, but we could work more to be on the same page with respect to what we emphasize. Some variety in perspective is, of course, both good and realistic for students to encounter.

One excellent student observation that I’ll share with my future classes is that the interim homework assignments (a figure here, another there) can’t just be pasted together to make a results section! There is transitional threading that needs to be done, not every piece of useful information is a figure, and the opening context must be set.

Changes to assessment

Some of the most fascinating comments were about quizzes. A very honest student said that she could “memorize” the lecture and pre-lab information so as to do well on quizzes, but had to start from scratch putting together the concepts when it came time to do her report. Other students felt uneasy about the quizzes, offering suggestions such as having longer quizzes less often or writing a summary of the previous lab instead of being asked questions about it. It seems possible that quizzes could be one place to reiterate the big picture. Say, the first question could always be “how did last time fit into the entire scheme of the experiment?” and the rest could be analytical.

Somewhat relatedly, there was a desire on the part of students to have more comprehension checks that are decoupled from writing, whether in the homework assignments or in the class as a whole. Again, I’m torn. We pride ourselves on (relative) authenticity in 20.109, and try to avoid purely “made-up” problems. However, my colleague Natalie has experimented with a lab exam of sorts in the fall semester, and I must admit it is nice to directly see students’ ability to analyze data without all the confounding factors of disparate experimental success and general complexity. Really this comes down to what we purport to teach in 20.109, and whether we should shift the balance of evaluation toward comprehension checks alone and lessen the emphasis on communication (while still maintaining the minimum+ standards for a “communications-intensive” class). Did I mention that I’m torn? With respect to authenticity, rarely does one get to show competence without communicating it in a self-directed, creation-rich (rather than contrived) way.

Additionally, there were a few small/technical points about wiki organization and resource availability that I will attempt to implement but not exhaustively document here.

TA Training, Summer 2012

Building on our successful program from last year, we made a few tweaks and additions that were primarily well received. And so it will be next time, one hopes!

Things that were done well with minor suggestions for improvement:

The practice teaching session was again the most frequently mentioned positive aspect of the training: nothing beats the practice of preparing and giving an actual presentation, and even filming helps crank the tension properly. Several trainees asked for more practice, whether having a longer practice session (currently 6-7 min) or giving a revised talk the day after receiving feedback. These changes may be difficult to implement without getting yet more faculty involved. On the other hand, complaints that the sample problems could be improved (relevance, etc.) are addressable, I think. We did shorten them this year, but didn’t broaden the topic base.

Gratifyingly, the second most appreciated component (as measured by the written feedback forms) was my session on teaching in diverse classrooms. I achieved my goal of making the session both more casual and more interactive, and received comments – under the “[what in the training was] particularly helpful?” heading – such as “bias discussion… concept I have never thought about before” and “[diversity] talk… particularly ways that we might try to make a more positive learning environment.” A few trainees did suggest increasing the interactivity and number of case studies further (re: language barriers, learning styles, etc.), and a couple could not relate well to the material or found it too narrow.

Many trainees once again appreciated having a structured opportunity to interact with a former, class-matched TA. However, several commented that the lunch session was too long! We overcompensated from a previous year, I guess, but can easily shorten the lunch again. A more worrisome comment spurred me to wonder if we should have a very brief training for the mentors themselves, to be sure we are all on the same page. So far, recommendations from faculty and/or great student evaluations seem to be working well, though. One useful suggestion was to have the former TAs share their evaluation comments, if they are willing.

Finally, a substantial number of students enjoyed the opportunity to see a sample teaching session with a veteran and superstar teacher in the department, Prof. Engelward. Moreover, Bevin expanded upon our usual microteaching demonstration format by explicitly dissecting her own teaching strategies afterward. This approach led to comments – again under “[what was] particularly helpful?” – such as “[Bevin]… walking us through the thoughts of teaching.” I also noticed in the microteaching session I led on the second day that the trainees had really internalized her ideas about providing big picture context.

Things that were missing or can otherwise be substantially improved:

While people generally appreciated the Day 1 content, several requested cutting down any purely lecture material even further, and primarily teaching those same concepts interactively. In a somewhat different but related vein, two people wanted to make the training shorter, even cramming the practice teaching all into the first day using multiple rooms. I think this would not provide sufficient preparation time.

The chalkboard exercise was a bit of a failure, but shows promise. (Much like the position of my diversity talk last year.) A couple of folks put it under “particularly helpful” while a couple of others put it under “could be made more useful.” Specifically, trainees would have liked some feedback on their practice boards – from the faculty, and/or being given more time to cross-compare across groups. These ideas are obvious additions, and I wish I had made time for them! I wanted to tell a number of groups, “Hey, great that you did this thing visually, but you could have also approached this concept visually,” but held back because I didn’t think I would have time to give feedback to all. I did sense that this group exercise – in which trainees had to create something together – improved bonding over only having the reflection exercise.

In line with the comment about enjoying the discussion that ensued after Bevin’s practice teaching, a few trainees seemed hungry for more explicit instruction about pedagogic theory in general and in particular about how to lead a recitation. I’ve primarily been relying on the link to Arthur Mattuck’s handbook (and the SoE-wide training) to accomplish this part of our proto-TAs’ learning, but maybe that’s misguided.

Two great related suggestions that should be easy to implement are (1) encouraging the trainees to have an expectations-setting meeting with their professors before attending formal TA training and (2) providing a formalized way for TAs/Profs to have this discussion. While (1) would surely give the trainees some fodder to bring to the Q&A, I wonder if (2) is perhaps more understandable after the training. Two other comments became grouped together in my mind with this idea of formal expectations-setting: tell us something about the UG curriculum (what do they know?) and tell us something about our class (what do we need to know?).

While one TA-to-be appreciated getting advice about helping “struggling” students, three others were concerned that they didn’t know enough about helping “difficult,” “distressed,” or “ill [esp. with mental and physical components]” students. It may be worth talking to S^3 and the Student Disability Office for resources. I also remember seeing a handbook geared toward faculty re: the distress issue in MIT Medical.

All in all, an invigorating experience to see our graduate students take their teaching responsibilities so seriously.

20.309 apprenticeship, Summer 2012

This summer I wanted to do something a bit different from my usual activities. So I tried to work my way through some of the 20.309 content, 20.309 being the other laboratory piece in our major, focused on instrumentation and measurement. Part of my reasoning was similar to my choosing to TA 20.110 – use or lose my quantitative/modeling/etc. chops! – and part had to do with trying something very unfamiliar, both in theoretical content (optics? electronics? not since freshman physics!) and approach (make stuff visible to the naked eye and manipulable with bare hands?).

It was an extremely humbling and valuable experience to be a learner again in a field that was (essentially) new to me. I realized that no matter how clear and appropriate someone may think their (written or verbal) explanations are, they may leave particular gaps that I need filled in, or frames that I need shifted, to “get” it. Personally, I’m a very bottom-up learner. I like to get into the nitty-gritty details and grow them up into the big picture, then reflect on and compare my synthesis with others’ big picture views. No doubt this bias affects how I teach my own students, and I should compensate for it. In contrast, some other folks want to hear the big picture first, and only dig into necessary and sufficient details (rather than delving into each derivation, say). Each approach has strengths and weaknesses.