
Learning about Learning

Principal Investigator: 
Project Overview
Background & Purpose: 

This grant is an NSF CAREER award to Dr. Heffernan that began in 2005. The project uses a web-based computer tutoring program as a tool for exploring powerful scientific questions about learning.

The grant is divided into five somewhat overlapping research thrusts. The first research thrust is designing cognitive models. Dr. Heffernan and his students use methodologies such as difficulty factors analysis and learning factors analysis to create well-fitting models that can predict student learning and transfer. In other words, they are trying to discover how students learn, within models that transfer from one school to another and from one student to another.
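
For reference, a standard formulation used in learning factors analysis is the additive factors model; this is the textbook form, not necessarily the exact model fit in this project. It predicts the probability that student i answers item j correctly from the student's proficiency, the easiness of each skill the item requires, and a per-skill learning rate multiplied by the student's prior practice on that skill:

    $P(Y_{ij}=1) = \sigma\big(\theta_i + \sum_k q_{jk}(\beta_k + \gamma_k T_{ik})\big)$

where $\theta_i$ is student $i$'s proficiency, $q_{jk}$ indicates whether item $j$ requires skill $k$, $\beta_k$ is the easiness of skill $k$, $\gamma_k$ is its learning rate, and $T_{ik}$ counts student $i$'s prior opportunities to practice skill $k$. A well-fitting model of this kind supports both prediction of learning and transfer across students and schools.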

In the second research thrust, inferring what students know and are learning, Dr. Heffernan and his students study novel model-learning and inference mechanisms in order to predict when learning and transfer occur. This effort entails combining psychometric methods with intelligent tutoring systems.

In the third thrust, optimizing learning, he focuses on discovering which pedagogical strategies used by teachers lead to better learning by students. Through iterative refinement of his models, he will devise mechanisms that select problems that maximize the ratio of expected test-score gain to the expected time needed for completion. This thrust sounds complicated, but it boils down to a simple idea: getting the most and best learning for the least time, effort, and money on the part of students, teachers, and schools.
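
As a minimal sketch of this selection rule (the expected_gain and expected_time estimators below are hypothetical placeholders, not the project's actual implementation), the idea can be expressed in a few lines of Python:

    def pick_next_problem(candidate_problems, student_model):
        """Choose the problem with the best expected test-score gain per minute of work."""
        def utility(problem):
            gain = student_model.expected_gain(problem)   # predicted test-score improvement
            time = student_model.expected_time(problem)   # predicted minutes to complete
            return gain / max(time, 1e-6)                 # guard against a zero-time estimate
        return max(candidate_problems, key=utility)

In practice the gain and time estimates would come from the refined cognitive and learning models described in the first two thrusts.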

In the fourth thrust, informing educators, the PI will study how best to present information from the finer-grained model to teachers. The types and availability of reports for teachers are key to helping teachers use data effectively to improve their teaching. This is not a skill easily mastered.

In the final thrust, allowing user adaptation, Dr. Heffernan and his students have developed a method for K-12 teachers to design the pedagogy used in an intelligent tutoring system. This is emerging as a powerful tool for teachers: they can write questions specific to their own tests and classrooms, and they can adapt the system for their own uses by changing existing ASSISTment questions or creating new problems for their students. The system has also given teachers the ability to show student work anonymously and let the entire class comment on it.

On the ASSISTment.org website, teachers can find not only nuts-and-bolts information about using the system but also best-practices videos of lessons, lesson plans, and advice from other teachers.

Setting: 

We have been working with schools primarily in central Massachusetts as well as a handful of schools across the country.

Research Design: 

The research design for this project is longitudinal and is intended to generate causal evidence through experimental methods and statistical modeling. The project collects original data through computer-based tutoring. The ASSISTment System is a web-based tutoring program for middle school mathematics; the word “ASSISTment” blends tutoring “assistance” with “assessment” reporting to teachers. The system gives teachers fine-grained reports on the roughly 90 skills it tracks per grade level. More generally, the system is a useful tool for advancing the US Department of Education’s goal of turning education research into an evidence-based science, because it can support large-scale studies of what constitutes effective educational practice.

Longitudinal data were analyzed with a generalized linear mixed-effects model. For the binary data collected, we used a number of different techniques depending on the question being asked and the exact data set, including mixed-effects and fixed-effects logistic regression models. We have shared our database with other researchers and have active collaborations with many of them.
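
As a rough, self-contained sketch of the kinds of models mentioned above (the column names and synthetic data are hypothetical stand-ins, not the project's actual schema), a mixed-effects model for longitudinal scores and a fixed-effects logistic regression for binary correctness can be fit in Python with statsmodels:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for real response data (hypothetical columns).
    rng = np.random.default_rng(0)
    n = 400
    df = pd.DataFrame({
        "student_id": rng.integers(0, 40, n),   # grouping factor for random effects
        "month": rng.integers(0, 8, n),         # time within the school year
        "condition": rng.integers(0, 2, n),     # experimental condition
        "pretest": rng.normal(50, 10, n),
    })
    df["score"] = 40 + 2 * df["month"] + 3 * df["condition"] + rng.normal(0, 5, n)
    df["correct"] = (rng.random(n) < 0.5 + 0.1 * df["condition"]).astype(int)

    # Longitudinal scores: linear mixed-effects model with a random intercept per student.
    mixed = smf.mixedlm("score ~ month + condition", data=df, groups=df["student_id"]).fit()
    print(mixed.summary())

    # Binary item-level correctness: fixed-effects logistic regression.
    logit = smf.logit("correct ~ condition + pretest", data=df).fit()
    print(logit.summary())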

Findings: 

The project has made significant progress in its major research activities (please see the description of publications and products for references to published works). As stated in the grant proposal, the work is organized around five main thrusts, which we review in turn.

1) The first thrust is Designing cognitive models.
One of the attractions of intelligent tutoring is being able to make accurate predictions and draw reliable conclusions about student performance from the data collected (vanLehn, 2006). This year we have worked to refine our models, looking in particular at motivation and homework.

Study 1A – Razzaq, L., Heffernan, N.T., Shrestha, P., Wei, X., Maharjan, A., Heffernan, C. (submitted)
Current research on e-learning systems has found both interactive tutored problem solving and presenting worked examples to be effective in helping students learn math. However, which presentation method is more effective is still debated in the cognitive science, e-learning, and HCI communities, and there is no widely accepted answer. This study compares the relative effectiveness of the two strategies. We presented both strategies to groups of students in local middle schools, and the results showed significant learning in both conditions. In addition, our results favor the tutored-problem-solving condition, which showed significantly higher learning. We propose that the level of interactivity plays a role in which strategies are more effective. An answer to the question of which tutoring strategy is more effective could help determine the best way to present information to students, thus improving e-learning system design.

Study 1B – Feng, M., Heffernan, N., Heffernan, C. & Mani, M. (submitted)
Can we have our cake and eat it, too? That is, can we have a good overall prediction of a high-stakes test while at the same time being able to tell teachers about fine-grained knowledge components? In this paper we present some encouraging results from our attempt to provide a fine-grained model for a United States state test. In step 1, a fine-grained skill model was developed by having content specialists review the state test items to identify their required skills. In step 2, we performed statistical analyses of the model based on data collected over two school years of usage of an online tutoring system, the ASSISTment System. We show that our fine-grained model improves prediction compared with coarser-grained models and with an IRT-based unidimensional model. That said, we do not know a great deal about the validity of each individual knowledge construct; we only report that, in total, the finer-grained model better predicts state test scores, and we do not know which knowledge components are doing a great job and which may be less valid than others.

Study 1C – Razzaq, L. & Heffernan, N.T. (submitted)
Intelligent tutoring systems often rely on interactive tutored problem solving to help students learn math, which requires students to work through problems step by step while the system provides help and feedback. This approach has been shown to be effective in improving student performance in numerous studies. However, tutored problem solving may not be the most effective approach for all students. In a previous study, we found that tutored problem solving was more effective than less interactive approaches, such as simply presenting a worked-out solution, for students who were not proficient in math. More proficient students benefited more from seeing solutions than from going through all of the steps. However, our previous study controlled for the number of problems done, and since tutored problem solving takes significantly more time than other approaches, it suffered from a “time on task” confound. We wanted to determine whether tutored problem solving was worth the extra time it took or whether students would benefit more from practicing on more problems in the same amount of time. This study compares tutored problem solving to presenting solutions while controlling for time. We found that, when time is controlled, more proficient students clearly benefit more from seeing solutions than from tutored problem solving, while less proficient students benefit slightly more from tutored problem solving.

2) The second thrust is Inferring what students know and are learning.
How do we know what students are learning? We have tagged each of the questions with skills drawn from our cognitive model. Much of our work on this thrust this year has involved refining our understanding of student learning within an ASSISTment session and of how to provide feedback on homework.

Study 2A – Mendicino, M., Razzaq, L., and Heffernan, N. T. (accepted)
We compared classroom problem solving with ASSISTment System problem solving to help understand the role of feedback in learning. Students participated in a counterbalanced design: some received classroom instruction in problem solving alone, followed by a session of extra-credit homework problem solving with the ASSISTment System, which offers immediate feedback; the other students received the ASSISTment System homework first, followed by the classroom instruction. We gave each student tests along the way to assess learning and found evidence that using the web-based ITS to practice problem solving was much better than classroom problem solving, with an effect size of 0.5.
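
For readers unfamiliar with the measure, the effect sizes reported here are standardized mean differences. Assuming the usual Cohen's-d-style estimator (the exact variant is not stated in this report), the computation is

    $d = \dfrac{\bar{x}_{\text{ITS}} - \bar{x}_{\text{classroom}}}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\dfrac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}}$

so an effect size of 0.5 means the ITS condition's mean gain was about half a pooled standard deviation above the classroom condition's.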

Study 2B – Feng, M., Heffernan, N. T., Beck, J., & Koedinger, K. (2008)
We have shown that students learn from the ASSISTment System in a variety of ways, yet so far we had only shown that learning while the students were also learning from their classroom teachers. We wanted to test whether the ASSISTment System itself was the cause of some of the learning, so we tested whether students learned from the System within a single day. Our results suggested that when students went through scaffolding questions early in a session, they scored better on similar skills later in the same session.

Study 2C – Feng. M., Heffernan, N.T. & Koedinger, K. (in submission)
In this paper, we review the findings that the ASSISTment System is a good tool for continuously assessing student learning. Since the ASSISTment System tutors while it assesses, the criticism has been made that we are creating a moving target of learning. In fact, this paper showed that because the system teaches while it assesses, it does a better job of assessing (if you hold the number of items done constant, rather than the time). One might also be concerned that using the ASSISTment System may take longer than taking a paper practice test. However, unlike paper tests, the ASSISTment System contributes to instruction (Razzaq et al., 2005; Feng, Heffernan, Beck & Koedinger, 2008). While every minute spent on a paper test takes away a minute of instruction, every minute on the ASSISTment System contributes to instruction. We end with a tantalizing question: are we likely to see states move from a test that happens once a year to an assessment tracking system that offers continuous assessment (Computing Research Association, 2005) every few weeks? While more research is warranted, our results suggest that perhaps the answer should be yes.

Study 2D – Feng, M., Beck, J., Heffernan, N. & Koedinger, K. (2008)
We developed a new method of using student test scores from multiple years to determine whether a student model is as good as a standardized test at estimating student math proficiency. The results showed that models that take in data on 8th-grade students over the course of the year can predict students' 10th-grade MCAS scores as well as the 8th-grade MCAS can. This work was presented at the First International Conference on Educational Data Mining last summer in Montreal, Canada.

3) The third thrust is Optimizing learning.
One of the goals of the ASSISTment System is to have a positive effect on student learning. It is important not only to show that the System is effective in helping students learn but also to continually refine the System so that every question, hint, and scaffolding question produces as much learning as possible. Over the last year, we have worked to improve and refine the questions, and the methods of presenting them, so that students learn as much as possible while using the ASSISTment System.

Study 3A – Mendicino, M., Razzaq, L., & Heffernan, N. T. (in press)
We compared two groups of fifth-grade students: one group used the traditional model of paper-and-pencil homework, while the other used the ASSISTment System to complete their nightly homework. Students who used paper-and-pencil homework received feedback on their work the next day, immediately before the posttest. The results were clear: students using the web-based intelligent tutor learned more. With an effect size of 0.61 standard deviations, we argue that the costs associated with running this program are well worth the expense, especially in schools with one-to-one computer programs already in place.

We recognize that the immediacy of the feedback on the homework may have a confounding effect on the results. In addition, the students in the paper-and-pencil homework group may have had the advantage of receiving feedback immediately prior to the posttest. Nevertheless, the effect in the opposite direction was very compelling.

Study 3B – Feng, M., Heffernan, N.T., and Beck, J. (2009)
Although we believe that each and every question in the ASSISTment System is valuable, we wanted to discover whether some questions, and their associated scaffolding, were better at teaching students than others. Many researchers, including us, use randomized controlled trials to determine the effectiveness of particular questions. In this paper, we explore a less expensive way of determining which questions are key drivers of student learning. We analyzed 60,000 performance records across 181 items from over 2,000 8th-grade students. As expected, some questions were better at producing student learning than others.

Study 3C – Pardos, Z. A. & Heffernan, N. T. (submitted)
In this study we evaluated learning content and a novel method for measuring how much learning each ASSISTment question produces relative to the other ASSISTment questions in a problem set. A simulation was run to validate the method, which was then applied to real tutor data, and the results are presented.

Study 3D – Pardos, Z. A., Heffernan, N. T., Anderson, B. & Heffernan, L. C. (submitted)
Students come into the classroom, and to any intelligent tutor, with different skills. Given each student's skill level, do different models better predict how they will perform on standardized tests? In this study we modeled student knowledge at different levels of skill generality and evaluated the different models on MCAS test prediction and on prediction of performance within the tutor.

4) The fourth thrust, Informing educators, is about quickly informing teachers and parents about student work.
Most schools give standardized tests once or twice a year, leaving teachers with a long wait between administering the test and getting results back. Yet research has shown that for teachers to make meaningful use of data collected on students to improve their teaching, they need immediate feedback. We continue to improve our reports to teachers, using their feedback to modify and enhance the reporting.

This year we have worked to train teachers to use the reports effectively to inform their teaching in meaningful ways. We began offering a course (MME 562) at WPI, as part of the Masters in Mathematics Education, called Using Advanced Educational Technology to support Data-Driven Decision Making. This class uses coaching, journaling, videotaping, and projects to train teachers to use the data they collect. Teachers create content, watch each other teach, and build collaborative teaching communities.

5) The final thrust is Allowing user adaptation.
We want to ensure that our Quick Builder is easy enough to use that teachers and graduate students with little training can customize the ASSISTment System. The Quick Builder gives teachers the opportunity to create quizzes, ask specific questions, and survey students right during a lecture or lesson. We had a number of teachers with us all summer doing research. Their projects included creating content, building the website, checking the assumptions behind questions, and refining the skill tagging on questions. Having actual teachers work with us gives us a reality check and helps ensure we are still meeting the needs of students, teachers, and parents.

Many teachers are adapting the ASSISTment System to help them outside traditional review or assessment. For example, one teacher we have been working with at Montechusett Regional Technical High School uses the System in her classroom to create quizzes “on the fly” to assess her students' knowledge. The System collects the data and can grade it rapidly, allowing her to discuss the results with the students in the very same lecture. She can also use it to record attendance and participation, saving her time.

Study 5A – Razzaq, L., Parvarczki, J., Almeida, S.F., Vartak, M., Feng, M., Heffernan, N.T. and Koedinger, K. (Submitted)
In Razzaq et al. (2009), we describe new adaptations to our builder that make creating content faster and more cost-effective. The builder, as described above, enables people with little or no programming experience to create, edit, and deploy content.

Publications & Presentations: 

These papers, and others from previous years, are available online at:
http://teacherwiki.assistment.org/Publications

Feng, M., Heffernan, N.T., & Koedinger, K.R. (in press). Addressing the assessment challenge in an Online System that tutors as it assesses. To appear in User Modeling and User-Adapted Interaction: The Journal of Personalization Research (UMUAI journal).
 

Feng, M., Heffernan, N., Heffernan, C. & Mani, M. (accepted). Using mixed-effects modeling to analyze different grain-sized skill models. Accepted by the IEEE Transactions on Learning Technologies Special Issue on Real-World Applications of Intelligent Tutoring Systems.

Patvarczki, J., Mani, M. and Heffernan, N. (Submitted) "Performance Driven Database Design for Scalable Web Applications", Advances in Databases and Information Systems, 2009

Razzaq, L., Heffernan, N.T. (Submitted) To Tutor or Not to Tutor: That is the Question. Submitted to the Annual Conference on Artificial Intelligence in Education 2009.

Razzaq, L., Heffernan, N.T., Shrestha, P., Wei, X., Maharjan, A., Heffernan, C. (Submitted) Are Worked Examples an Effective Feedback Mechanism During Problem Solving? Submitted to the Annual Meeting of the Cognitive Science Society, 2009.

Razzaq, L., Parvarczki, J., Almeida, S.F., Vartak, M., Feng, M., Heffernan, N.T. and Koedinger, K. (Submitted). The ASSISTment builder: Supporting the Life-cycle of ITS Content Creation. Submitted to the IEEE Transactions on Learning Technologies Special Issue on Real-World Applications of Intelligent Tutoring Systems.

2009

Mendicino, M., Razzaq, L. & Heffernan, N. T. (2009) Comparison of Traditional Homework with Computer Supported Homework: Improving Learning from Homework Using Intelligent Tutoring Systems. Journal of Research on Technology in Education (JRTE). Published by the International Society For Technology in Education (ISTE).

Patvarczki, J., Mani, M., and Heffernan, N. (2009) "Performance Driven Database Design for Scalable Web Applications", New England Database Day, 2009, MIT, Boston.

Other Products: 

We will be generating new cyberlearning techniques with the ASSISTment System. For example, students who used the ASSISTment System to complete their homework learned more than counterparts who completed their homework with paper and pencil (Mendicino, Razzaq, and Heffernan, 2009).

