The IBL Blog focuses on promoting the use of inquiry-based learning methods in college mathematics classrooms. Learn more about IBL at The Academy of Inquiry Based Learning
Thursday, May 31, 2012
IBL Instructor Perspectives: Jackie Jensen Vallin, Slippery Rock University
Jackie Jensen Vallin is a faculty member at Slippery Rock University in PA. Jackie has been the co-organizer for the Legacy of R. L. Moore conference, and has been actively involved in mentoring new IBL instructors, the MAA, and Project NExT.
1. How long have you been teaching and what was your teaching style before you started using IBL?
Counting my teaching experiences in graduate school, I have been teaching for 15 years. In the early days, including when I finished graduate school and had to write my first “Teaching Statement,” I was much more of a lecture-based instructor. I even said in that statement “I am more of a sage on the stage than a guide on the side.” This was probably a little misleading since I really had a very interactive style with my students – knowing them all by name and calling on them to further the conversation in class. However, it was pretty unusual to do anything in my classroom except lecture and a few group-work style worksheets.
2. How did you learn about IBL and when did you begin using it in your classes?
The spring of my first year of "real teaching" (in a faculty position), I attended the Legacy of R. L. Moore Conference in Austin, TX. There were a lot of people talking about the Moore Method (and, appropriately enough, refusing to define it). But I met some great educators and was intrigued by the idea of putting the responsibility for learning more firmly on my students by guiding them to the answers, without pretending that my lecturing at them would be enough to get them to understand.
I began simply after that – introducing more worksheets into my classes, doing more group work, and implementing presentation days during calculus. It wasn’t until a couple of years later that I taught a class in which the students were really responsible for presenting almost all of the material. And even then I didn’t do that in all of my classes, but only in upper level courses (intro to proofs and abstract algebra, in particular). It was another couple of years before I integrated IBL into all of my classes, and now I teach everything (from Math as a Liberal Art to Math for Future Teachers to Intro to Proofs) with as many student-centered activities as I can.
3. What was one of your best IBL experiences?
My best IBL experiences actually happen in lower-level courses, where students take responsibility for their own learning. This means that students who thought they were bad at math frequently get to master their fear and the material, and convince themselves that they are not "bad at math." This happened this semester in my Financial Mathematics (for non-majors) course: students were shown a couple of examples of how to do problems, then completed problems on their own, in groups, sharing answers at the end of the period. I walked around the room, checking work, answering questions (by asking more questions), and making sure everyone was on task. Since this course is based entirely on applying formulas to word-problem scenarios, the only way for students to learn to read word problems is to make them *do* word problems. And they are succeeding! What a great semester!
4. What advice do you have for new IBL instructors?
Start in a way that you feel comfortable – if that means turning presentations over to students one day per week and lecturing the other days, then do that. Don't let anyone else tell you how *you* have to do IBL. Find your own method. And ask lots of people for ideas, for suggestions about notes or activities, or for help if you feel stuck. I have gotten great pieces of advice from many different people – I steal the parts that work for me and discard the pieces that don't, and in that way I have found my own method of IBL. Everyone is willing to help and talk about teaching – we all care a great deal about our students, or we wouldn't spend so much time developing good course notes!
Note: This is great advice -- "find a level that is comfortable for you." If you want to be "scripted" at the beginning that is also fine, and lets you concentrate on the day-to-day teaching aspects. Most importantly, if you need help, let me know!
Wednesday, May 23, 2012
John Dewey + Moneyball = A Key Insight to Change
A quote from John Dewey
"I may have exaggerated somewhat in order to make plain the typical points of the old education: its passivity of attitude, its mechanical massing of children, its uniformity of curriculum and method. It may be summed up by stating that the centre of gravity is outside the child. It is in the teacher, the textbook, anywhere and everywhere you please except in the immediate instincts and activities of the child himself. On that basis there is not much to be said about the life of the child. A good deal might be said about the studying of the child, but the school is not the place where the child lives. Now the change which is coming into our education is the shifting of the centre of gravity. It is a change, a revolution, not unlike that introduced by Copernicus when the astronomical centre shifted from the earth to the sun. In this case the child becomes the sun about which the appliances of education revolve; he is the centre about which they are organized." http://bit.ly/JvR3cG

This passage is from "The School and Society," originally published more than 100 years ago (in 1899). It is, sadly, still relevant today. Are students at the center of instruction in math classes? Mostly no. Math teachers predominantly lecture at students, and the great change that Dewey saw has not yet come about, though many teachers have made the shift. A question one can ask is, "Why have things not changed significantly in all these years?" Sure, the books have color, and we have technology beyond our grandparents' wildest dreams. But when you look beyond mere surface beauty, you can see that the heart of it is still the teacher telling and the students following.
One major issue behind the lack of change is data. More specifically an issue that persists is in assessing teaching and learning, and more pointedly how this data might change our fundamental beliefs (or axioms) about teaching and learning. What we assess and how we assess it determines our evaluation of student ability and achievement. Herein lies one of our fundamental issues. Lack of good assessments can lead us to continue doing what we have been doing.
To get some insights, let's look outside of education to provide a backdrop for analyzing our own system. One of the unique aspects of baseball is the wealth of statistical information that has been available for generations upon generations of players. It's one of the reasons why baseball is such a wonderfully interesting sport to be a fan of.
Earned Run Average (ERA) is one of the traditional measures of a pitcher's ability. A lower ERA is considered better, since the pitcher gives up fewer earned runs per nine innings. The problem with ERA is that it is a noisy and flawed measure of pitching effectiveness. It depends on factors not under the pitcher's control, such as the quality of the defense behind him and the effects of different stadiums on balls batted into play. If ERA is weighted too heavily as a measure of ability, some pitchers are overvalued and some are undervalued in terms of their contributions to team wins. Voros McCraken conducted some groundbreaking analysis, first establishing this problem and then developing "defense independent" methods to measure pitchers. This story, among others, is chronicled in "Moneyball" by Michael Lewis. But Voros didn't expect baseball teams to rejoice when learning about his findings. He knew better.
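For readers who don't follow baseball, the statistic itself is simple arithmetic: earned runs allowed, scaled to a nine-inning game. A minimal sketch (the function name is my own):

```python
def era(earned_runs, innings_pitched):
    """Earned Run Average: earned runs allowed per nine innings pitched."""
    return 9 * earned_runs / innings_pitched

# A pitcher who allows 70 earned runs over 200 innings:
print(round(era(70, 200), 2))  # 3.15
```

Note that nothing in the formula separates the pitcher's contribution from the defense's, which is exactly the flaw McCraken identified.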
"The problem with major league baseball... is that it is a self-populating institution. Knowledge is institutionalized. The people involved with baseball who aren't players are ex-players... They aren't equipped to evaluate their own systems. They don't have mechanisms to let in the good and get rid of the bad." (Voros McCraken)

This is a striking insight! Voros essentially identifies why baseball resisted modern statistical methods that could help. Baseball is not set up as an institution to evaluate how it evaluates players. Baseball people normally did not have the knowledge, ability, or willingness to entertain ideas developed by people like Voros, who is a baseball outsider.
What is the implication for us in the teaching profession? It should be stated that major league baseball and education are not very similar as institutions. That said, we have some similarities, and we can draw conclusions about our shortcomings from baseball's own struggles. Indeed, teaching is also a self-populating institution. Students who do well in the current system are the ones who end up becoming teachers or professors. Some future math teachers state things like, "The reason why I like math is because there's always one right answer, and there's a simple, straightforward structure to all problems." They are good at memorizing rote skills, the rote skills appear on tests, they get good grades, they are labeled as good in math (which may or may not be true), and then they model themselves after their favorite teacher. Thus the cycle perpetuates.
Colleges and universities have in their mission the goal of seeking truth and knowledge. When it comes to teaching, however, discussions among faculty often are about style, "what my students like...," and about delivery of information. The focus is usually not on learning and what students are doing. A major point is that we do not use the scientific method to evaluate teaching, just as major league baseball didn't use any scientific methods to validate their player valuation systems.
Consequently we have several metric problems. Are the usual metrics, like skills-based tests and student evaluations, the right ones? Clearly the answer is no. Let's consider the typical calculus sequence, with its thousand-plus-page texts. In the typical chapter on optimization, the authors usually highlight in a colored box the steps for finding relative extrema. What this tells many (but not all) students is that they should memorize the recipe and regurgitate it on an exam. That's how one can get a good grade, after all. These students will not walk away with a conceptual understanding of the subject, and will probably forget what they have memorized once the term is over. In short, their education is unintentionally of lower quality than what we want. Mathematics is reduced to applying recipes that many students do not understand, or even care to understand.
If you don't believe this can happen, here's data from physics: a talk by Professor Eric Mazur of Harvard University, presenting at the University of Waterloo. (It's an hour long, but worth it!) At Harvard, 40% of the students in freshman physics who did well on the procedures had an inadequate understanding of the basic concepts.
The result of traditional assessments is that many students who are given good grades have major gaps in their understanding of basic concepts. Students think they know the material, but maintain an Aristotelian understanding of physics rather than a Newtonian one. Their education amounts to very little, even at a sublime place like Harvard.
Now let's consider traditional teaching assessments (i.e., student evaluations), using data from physics based on the Force Concept Inventory (FCI). All of the red data points are from traditional instructors who lecture. Represented in these data points are teachers who are rated highly and teachers who are rated poorly on student evaluations -- the red dots contain some star teachers as well as the teachers on the "oh bummer" list. And they all do about the same on the FCI, within statistical error! Students gain on average about 23% of what is possible in the pre-post test design. Student evaluations are like ERA: a noisy, flawed metric. Actually, student evaluations are worse than ERA. ERA has some value in aggregate (whole-team ERA), and outlier pitchers tend to have outlier ERAs. In contrast, the highly rated, award-winning instructors are doing no better than Dr. Boring or Professor Snoozer.
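The "fraction of what is possible" figure is the normalized gain used in the physics education literature (Hake's average gain): the actual pre-to-post improvement divided by the maximum possible improvement. A sketch, assuming class-average scores on a 0–100 scale (the numbers below are hypothetical, chosen to illustrate a 23% gain):

```python
def normalized_gain(pre, post, max_score=100):
    """Normalized gain: fraction of the possible improvement actually realized."""
    return (post - pre) / (max_score - pre)

# A class averaging 40 on the pretest and 53.8 on the posttest
# realizes about 23% of the possible gain:
print(round(normalized_gain(40, 53.8), 2))  # 0.23
```

The appeal of this metric is that it controls for where students start, so classes with weak and strong incoming preparation can be compared on the same scale.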
The green data points represent faculty who use Interactive Engagement in their classrooms. One of the main trends is that there is very little overlap between the reds and the greens. The average green gain is double that of traditional instruction. Thus a better way to measure whether an instructor is effective is to know what skills and practices he or she uses in the classroom. While crude and incomplete, this at least tells you whether the instructor is on the red or the green distribution. But these qualities and practices are not usually assessed or measured in teaching evaluations, so there is not sufficient data or incentive for the system to embrace change. We keep on doing the usual, while traditional assessments tell us things are okay. And the results stay in the "red zone" above. Education has a bunch of Voros McCrakens, so there is hope. Baseball has changed, and I believe education will continue to change for the better.
What about math? The Calculus Concept Inventory has been rolled out and studies are underway, so there is hope that we will embrace new assessments that tell us what is going on. Preliminary results suggest outcomes similar to the FCI: Interactive Engagement and traditional instruction produce different distributions. I look forward to seeing the published results. Moreover, a growing body of evidence from research in undergraduate education suggests that students in traditional courses are not learning what we want them to learn. (More on this in a future post.)
The MAA's Calculus Study indicates that 80% of college calculus courses are taught in sections of 40 students or fewer. Additionally, very few institutions have large lectures for upper-level courses. Ample opportunities exist for IBL methods to be deployed in courses across the nation.
What can an individual instructor do personally? Looking at data can be demoralizing at times, but one should be optimistic. In particular, one can turn assessments into valuable tools that guide students and instructors in the right direction.
Assessment is more than grading work so that you can assign course grades. Assessments should be utilized in ways that provide students with regular feedback (formative), give instructors information about their students (formative), and evaluate demonstrated achievement (summative). Assessments should provide incentives for the qualities we actually value, including creativity, clarity, exploration, problem-solving ability, and communication.
Ideas for what to assess:
- Student presentations and/or small group work
- Reading or journal assignments
- Math portfolios
- Exams
- Homework
The items above are not revolutionary; what matters is what we put into them. Exams can be rote-skill based, or they can test for conceptual understanding, application of ideas, and problem solving. Homework can be made more interesting.
Student presentations and/or small groups are a wonderful way to assess understanding. When students present their proofs or solutions, it is often a rich experience and a rich source of information. You see it all in IBL classes: great ideas, small ideas, half-baked ideas, insightful questions, victories and defeats. It's a slice of real life, and it's really great. It's very easy to detect where students are, and then take action.
Reading assignments can be used to offload (i.e. flip a class) basics to homework, leaving time in class for the harder tasks, where inquiry is useful. Portfolios are like a CV, and can be used to demonstrate what a student has been able to prove on his or her own. Additionally portfolios can be used to create a record of the theorems proved by the class. (I'll write more about portfolios in future posts.)
In IBL courses, one has continuous formative assessment. Instructors are always analyzing whether students understand an idea or not, by giving students meaningful tasks and then working with them to overcome learning challenges. If students are stuck, then there is another question or problem that can be posed, and then students are off on another mathematical adventure. Students are continuously engaged, are monitored, are self-monitoring, get feedback, and so on. This rich, integrated assessment system of actual learning is a core advantage in IBL teaching.
Returning to Dewey... If we continue to teach and assess teaching in traditional ways, we will not gather the data and information necessary to support change, for individuals and systemwide. Gathering good data about our students' thinking, which is also a core part of effective teaching, is a key to the way out. Engage your students, collect good data, share it, publish it.
Upward and onward!
"It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so." - Mark Twain
Monday, May 14, 2012
Reflective Writing at the End of Year
The end of the academic year is a busy, busy, busy time of year. Commencement, grading, fatigue, summer plans, submitting finals grades,... so much going on.
My recommendation is to stop for 30 minutes. Get a cup of tea or coffee. Grab a sheet of paper and a pen, and write about your year in teaching. Write down what your teaching successes were and what you want to improve next year. The hard-won knowledge and experience from the academic year, from all the grading and class time, can slip away with time. But if you spend just a few minutes writing down your thoughts, it becomes much easier to recall what happened. This makes it much more likely that you will see big gains the next time.
As they say, luck is when preparation meets opportunity. In teaching, part of preparation is having a framework that effectively guides the next round of preparation.
Take notes, then take a break.
Congrats to all new graduates this month and next!
Wednesday, May 2, 2012
The Role of Inspiration
One of the most frequent things I hear from my colleagues is "I like lecture. I love listening to lucid presentations." I do too. In fact, great lectures are wonderful experiences, and many instructors want to inspire students just as they have been inspired. This is a noble sentiment, and it shows that instructors care deeply about their students. In this post I discuss the role of inspiration and lectures, and their limitations.
When we attend performances by musicians, it is an aesthetic experience. We are living in the moment. If we are lucky enough to attend a great performance, we can be swept away by the beauty and passion of the event. Art, music, and great scholarship are indeed high points of society.
Yo-Yo Ma plays "The Swan"
Herein lies the complexity of teaching and some of the pitfalls of the profession. We can mistake beauty for effectiveness, and this is very easy to do. Just as the Sirens lured sailors with their enchanting songs, beauty, lucidity, and inspiration can lure a teacher into losing sight of the point of school and of effective teaching.
I make my point by analogy. Going to see a musician like Yo-Yo Ma perform is an inspirational experience. It is a necessary but not sufficient condition for studying to become a musician. If you want to be a musician, you have to play. Play lots. Play often, and reflect.
Similarly, learning mathematics requires one to see the subject as meaningful. But one also has to do the hard work to build the mind. Listening to a lucid, inspirational math talk can be enlightening. BUT lectures do not build the capacity to do mathematics in nearly all listeners, especially novices. Mathematicians already have the capacity to think mathematically -- we can learn what we need from a lecture much of the time. Students who are not yet advanced in mathematical thinking cannot take away the same lessons from the very same lecture.
If you want your novice cellist to learn to play, you can't just show a video of Yo-Yo Ma and say, "There you go. Now go home and practice hard." It's not that simple. Telling isn't teaching. Likewise, really understanding calculus, deciphering nested quantified statements in theoretical math courses, and learning to build a differential equation to model a physical situation require more than just following a recipe prepared by the professional mathematician (i.e., the equivalent of Yo-Yo Ma).
Students need a supportive environment, well-matched problems, time to be stuck, and opportunities to figure things out for themselves. It is this long, arduous, and rewarding process that unlocks potential. Good musicians know this. They don't just listen to someone else play. They also practice with intent with teachers, with collaborators, and with new music to keep their minds and hearts fresh. They experiment. They learn new skills, they interpret pieces in their own way. They do.
To put it simply, lectures and IBL methods should be used for different purposes. Lectures can be used to inspire. IBL methods should be used to build students' abilities and capacities to do.
The myth that "Teaching is Art" is one that lets us rationalize "aesthetics = effectiveness." I am not criticizing aesthetics, by the way. Beautiful ideas are why we are here. But there is a difference between showing students something beautiful and helping students become young mathematicians. Horses for courses, as they say across the pond. Or use the right tool for the job over here in the U.S.
What's the right mix of lecture and IBL? We don't know exactly. Data suggest that a small percentage of the time should be lecture, and that as instructors talk more, students report lower learning gains. Many of the most effective and experienced IBL instructors use IBL daily and intersperse mini-lectures as needed or at opportune times. Examples are (a) when the students have completed a body of work, (b) to showcase for students how certain ideas or techniques can be further used, (c) to summarize big ideas, (d) for enculturation, (e) to expose students to things there is not sufficient time for, and (f) to summarize student strategies and proof techniques from the week.
"Genius is 1% inspiration and 99% perspiration." -- Thomas Edison