Attendance and outcomes in a large, collaborative learning, performance assessment course
Presented at the Annual Meeting of the American Educational Research Association (AERA)
Presenter: Mark Urban-Lurain
Michigan State University, East Lansing, MI 48824
The "conventional wisdom" is that students' college grades are related to class attendance: students who attend class more frequently obtain better grades. Several studies of large, lecture-based courses have examined the relationship between class attendance and final course grades.
Since fall semester, 1997, we have been teaching an introductory computer science course for non-computer science students. The course enrolls 1800 students per semester. For more information about the course, see (Urban-Lurain & Weinshank, 1999a, 1999b, 2000) or visit the course Web site at www.cse.msu.edu/~cse101.
Unlike conventional courses based on lectures, homework, quizzes, and exams, this course has no lectures; it is entirely lab-based.
Students work in collaborative groups that are generated randomly each class day from those present. Therefore, over the semester, students work with other students of different ability levels and play different roles in the zone of proximal development (Vygotsky, 1978). On days when students are more capable than their partners, they must communicate their knowledge clearly. Conversely, when students are the less capable partners, they must reflect on their own knowledge and get help from their partners.
Grades are based solely upon criterion-referenced, modified mastery-model performance assessments we call Bridge Tasks (BTs). Students must pass each BT in turn before being allowed to take the next BT. The highest BT passed at the end of the semester determines the course grade. We take attendance only to generate the daily groups: attendance, class work and homework are not included in the calculation of student grades.
What factors predict student outcomes in this type of course? We had three hypotheses:
H1: Students who attend classes more frequently pass a higher number of BTs by the end of the course than do students who do not attend classes as frequently. The class exercises are designed to help students develop the rich, conceptual understanding of the material needed to pass the BTs, so students who attend more frequently should perform better.
H2: Student incoming experience with computers should not predict the highest BT passed. If it does, this would mean we are measuring incoming knowledge but not instructional effect; i.e., the BTs would not be measuring what students learn in the course.
H3: Year-in-school should not be related to the number of BTs passed. Because there is no prerequisite for the course, year-in-school should have no effect on grade.
There have been several studies examining the impact of attendance on course grades. Street (1975) investigated outcomes in a large, introductory business course. He found a .72 correlation between attendance and course grade and reported that each day of absence cost students two points on their final grade. Gunn (1993) found a .66 correlation between attendance and final grade in a first-year psychology class. Van Blerkom (1992) also looked at attendance and outcomes in a large psychology class with 17 sections. He found that the correlation varied across sections from .29 to .73, with a mean of .55. He also reported that overall class attendance declined as the semester progressed. In a follow-up study of both introductory psychology and educational psychology classes, he found a correlation of .46 between class attendance and final grades (Van Blerkom, 1996). Other studies of students in psychology courses (Buckalew & Daly, 1986; Jones, 1984) found correlations of about .3 between attendance and course grades.
Because these are correlational studies, the direction of the relationship is not clear. Does coming to class "cause" improved grades, or do students who earn better grades attend class more frequently? Some authors have attempted to study the factors that are related to attendance. Gussett (1976) examined the hypothesis that students were absent more frequently from Friday afternoon classes but found only a moderate (p < .10) relationship between attendance and grades in those classes. Buckalew and Daly (1986) thought that students' preferred seating in the classroom (the front vs. the rear) might predict their grades, but found no relationship.
There have been few studies of the relationship between attendance, grades, and students' reasons for missing classes. Van Blerkom (1992) found that students' most frequent reasons for missing class were the pressures of other courses, becoming discouraged, or believing that attending class would have little effect on their grade. He offers a social cognitive view of self-regulated learning and self-efficacy theory as a framework for understanding student absences. He recommends possible ways of improving students' self-efficacy, such as dropping poor exams and informing students about the relationship between attendance and course grades. In a follow-up study (Van Blerkom, 1996), he had students complete a questionnaire about academic perseverance and self-efficacy. Their responses were correlated with both attendance and performance in these classes. However, correlations between academic perseverance, self-efficacy, class attendance, and course grades were all fairly low, ranging from .12 to .20. Galichon and Friedman (1985) surveyed students to determine their reasons for cutting class. They reported that students who cut class felt that education was of little importance to their future careers, had a high need to socialize, had less interest in school organizations, and were more likely to consume alcohol, marijuana, and other drugs than students who did not cut class. However, they found only a -.11 correlation between overall GPA and attendance.
Hovell, Williams and Semb (1979) looked at the effect of giving in-class exams or quizzes on attendance. They found that attendance at classes in which students took quizzes or exams was approximately 90 percent. Attendance was less than 55 percent for the class meetings in which no exams or quizzes were given. In courses with more frequent quizzes, non-test days had even lower attendance than non-test days in courses with fewer in-class exams. They also discovered that, in all courses, class attendance declined as the semester progressed.
Budig (1995) reported that Vincennes University (VU) found the relationship between attendance and student performance strong enough to institute a program of mailing postcards to students who miss class. While students disliked receiving the postcards, VU found that the program improved attendance and student success, with fewer students receiving D and F grades.
Jones (1984) investigated four causal models of the relationship between absences and grades.
1. A motivational model that proposes that absences and grades are related to student motivation.
2. An ability model where absences and grades are related to student academic abilities.
3. An attendance model where absences and grades are related directly, with absences causing lower grades.
4. An achievement model that claims absences and grades are related directly, with low grades causing more frequent absences, possibly because the students are discouraged.
Jones used partial correlations to control for each of these factors and found no support for the first two models. The strongest evidence was for the attendance model. The evidence for the achievement model, while significant, was much weaker than that for the attendance model. Jones concluded that "it is possible that absences and grades interact to trap some students in a self-perpetuating spiral of declining achievement." (p. 136)
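Jones's approach rests on the partial correlation: the correlation between two variables after a third is statistically held constant. A minimal sketch of the technique on synthetic data (the variable names, coefficients, and seed are ours for illustration; they are not Jones's data):

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two 1-D arrays."""
    return float(np.corrcoef(a, b)[0, 1])

def partial_corr(x, y, z):
    """Correlation of x and y with z partialled out."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

rng = np.random.default_rng(0)
n = 500
# Hypothetical confound: in the motivational model, motivation drives
# both absences and grades, producing a spurious raw correlation.
motivation = rng.normal(size=n)
absences = -0.5 * motivation + rng.normal(size=n)
grades = 0.5 * motivation + rng.normal(size=n)

r_raw = pearson(absences, grades)          # negative: more absences, lower grades
r_partial = partial_corr(absences, grades, motivation)
print(round(r_raw, 3), round(r_partial, 3))
```

In this constructed case, partialling out motivation shrinks the absence-grade correlation toward zero, which is the pattern Jones would have seen had the motivational model been supported; for his actual data, the attendance model survived the controls instead.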
Thus, the literature seems to establish a correlation between attendance and course grades. However, these courses were generally standard, large, lecture-based courses with homework, quizzes, and multiple-choice exams. We wanted to know whether this relationship holds for a lab-based course in which students' grades are determined by modified mastery-model performance assessments that focus on students' problem solving rather than memorization and recall of facts.

Data sources
We maintain a database about each student, including the following.
Incoming computing experience: We survey students on the first day of class about their experience in seven areas (Web, email, word processing, spreadsheets, etc.), rated from none to over one year. The total score for each student can range from 0 to 70.
Year-in-school: First, second, third or fourth year.
Total number of BT attempts: There is a maximum of 12 opportunities to pass five successive BTs.
Number of attempts on each BT: Students must pass each BT before being allowed to take the next BT. We record the number of times each student attempts each of the BTs.
Highest BT passed: The last BT passed by each student by the end of the semester.
Attendance for each class: We record attendance to generate each day's collaborative groups, but students receive no "points" or other extrinsic reward for merely attending.
We analyzed data from the 3899 students for whom we had measures on all variables, from the inception of the course in fall semester, 1997 through spring semester, 1999. There were no significant differences by semester, so all data were pooled for this analysis.

Methods and Results
To test H1 (students who attend class more frequently pass a higher number of BTs), we performed an ANOVA on the percentage of classes attended by the highest BT passed. It was significant (F = 247.343, df = 5, 3893, p < .001), showing that attendance rates differ among students who pass different numbers of BTs. The means are plotted in Figure 1.
Figure 1: Relationship between attendance and highest BT passed
Since the relationship is nearly linear, we fitted a regression equation with highest BT passed as the dependent variable and percentage of classes attended as the independent variable. It yielded an R of .487, accounting for 23.7% of the variance (p < .001). This indicates a strong relationship between percentage of classes attended and the highest BT passed.
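With a single predictor, the regression R equals the Pearson correlation and R-squared is the proportion of variance explained, which is how an R of .487 yields the 23.7% figure (.487 squared is about .237). A short sketch on synthetic data (the variable names and effect sizes are ours, not the course data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
attendance = rng.uniform(0, 100, size=n)   # percent of classes attended
# Hypothetical outcome with a linear attendance effect plus noise.
highest_bt = 0.03 * attendance + rng.normal(scale=1.0, size=n)

# Least-squares fit of highest_bt on attendance.
slope, intercept = np.polyfit(attendance, highest_bt, 1)
predicted = slope * attendance + intercept

# R^2 = 1 - SS_residual / SS_total ...
ss_res = np.sum((highest_bt - predicted) ** 2)
ss_tot = np.sum((highest_bt - highest_bt.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

# ... which, with one predictor, equals the squared Pearson correlation.
r = np.corrcoef(attendance, highest_bt)[0, 1]
print(round(r_squared, 4), round(r**2, 4))
```

The two printed values agree, which is the identity relied on in the text when moving between the correlation (.487) and the variance accounted for (23.7%).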
One explanation of this result could be that students who come to class "get the answers" that they need to pass the BTs. We therefore looked at the relationship between attendance on the days preceding particular BTs and the number of times students attempted each BT. We examined the correlation coefficients for attendance before each BT's instructional class days. See Table 1.
|Attendance before BT||Correlation with the number of attempts to pass that BT|

Table 1: Relationship between particular class attendance and number of BT attempts.
For example, the correlation between attendance preceding the 1.0 BT and the number of attempts at the 1.0 BT is -.103. Contrast the magnitude of these correlations with the correlation between overall attendance and the highest BT passed (.487). We therefore conclude that it is not attendance on particular class days but rather overall attendance that predicts success on the BTs.
If overall attendance is the key factor, what characteristics of the classroom experience foster improved performance? Because the majority of classroom time is spent in collaborative workgroups, we ask students to evaluate their group learning experiences on the end-of-semester student evaluations. Students rate the statement "I learned a lot in the group exercises in class" on a scale from "Strongly Agree" to "Strongly Disagree." The results are shown in Figure 2.
Figure 2: Relationship between attendance and attitude towards group exercises
The ANOVA on mean attendance across response categories is significant (F = 35.899, df = 4, 4320, p < .001). The more frequently students attend class, the more highly they rate their group experiences.
To test H2 (the impact of previous experience and year-in-school on highest BT passed), we added the variables incoming computing experience and year-in-school to the regression equation. This equation has an R of .518, accounting for 26.8% of the variance (p < .001). In this equation, incoming computing experience accounts for only 0.82% of the variance, with a correlation of .091. This indicates that performance on the BTs is not a function of incoming knowledge. See Figure 3.

Figure 3: Relationship between incoming computing experience and final grade
To test H3 (year-in-school should not be related to the number of BTs passed), we added year-in-school to the regression equation. Doing so accounts for 1.46% of the variance, with a correlation of .121. To clarify this relationship, we looked at the correlation between year-in-school and percentage of classes attended (-.109, p < .01). An ANOVA of mean attendance by year-in-school shows that attendance steadily declines as year-in-school increases (p < .001). While year-in-school contributes very little to student performance, upper-division students do somewhat worse than lower-division students, presumably because they cut class more frequently. See Figure 4.
Figure 4: Attendance vs. Year in school (class standing)
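The hierarchical step used for H2 and H3, adding predictors to a regression and reading off the extra variance they explain, amounts to comparing the R-squared of nested models. A sketch on synthetic data (variable names, ranges, and coefficients are ours for illustration, not the course data):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of a least-squares fit of y on the columns of X (plus an intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(2)
n = 2000
attendance = rng.uniform(0, 100, n)          # percent of classes attended
experience = rng.uniform(0, 70, n)           # incoming-experience score, 0 to 70
year = rng.integers(1, 5, n).astype(float)   # year-in-school, 1 to 4

# Outcome driven mostly by attendance, only weakly by the other two predictors.
outcome = 0.04 * attendance + 0.005 * experience - 0.05 * year + rng.normal(size=n)

r2_base = r_squared(attendance.reshape(-1, 1), outcome)
r2_full = r_squared(np.column_stack([attendance, experience, year]), outcome)

# The increment is the extra variance the added predictors account for.
increment = r2_full - r2_base
print(round(r2_base, 3), round(increment, 3))
```

As in the paper's results, the added predictors raise R-squared only marginally when the outcome is dominated by attendance, which is the pattern behind the 0.82% and 1.46% increments reported above.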
Research into Practice
At the beginning of the semester, we show students the above graphs and stress the importance of class attendance. At the middle of the semester, we repeat this and show students the correlation between midterm grade and final course grade for the preceding semester. We stress the importance of attending all classes and taking every available BT, whether in-class or during the out-of-class makeup periods listed on the course calendar.
Based on the trends we had observed, we began in spring semester, 1999 to send out E-mail from our database to each student who missed specific classes. This is a database-driven version of Budig's (1995) attendance postcards. These "nag-o-grams" remind students of the correlation between attendance and grades and are targeted specifically to current course content, e.g., "We note that you missed the first day of spreadsheets. You will build on that work in the next three class sessions, so we urge you to review the materials on the Web, go to the Help Room and study your textbook very carefully." Each of these messages also urges the student to come to instructor office hours. As one measure of the impact of the messages, we compared the number of office hour visits in fall, 1998 (before we began sending these reminders) with the corresponding number for fall, 1999. See Figure 5. We found a 32% increase in the number of office hour visits. Furthermore, students who came to office hours in fall, 1999, were more focused on specific instructional questions than those during fall, 1998. Many of the 1998 students were so far behind by the time they came to office hours that their problems were irremediable. This is reflected in the average grades of students who came to office hours. In fall, 1998, the average final grade of students who came to office hours was 2.6, while the average course grade for all students was 2.8. In fall, 1999, the average final grade of students who came to office hours was 3.0, the same as the average course grade for all students that semester.
Figure 5: Average number of office visits Fall 98 and Fall 99
Conclusions: Educational or scientific importance of the study
The strongest predictor of student performance is overall class attendance. Irrespective of year-in-school or prior computing experience, the more that students come to class, the better their performance. As we show above, these results are not simply the result of acquiring particular "facts" for a particular BT. Rather, students who have higher overall attendance perform better, regardless of which particular days they may have attended class. Our conclusion is that the overall course environment (lab-based classrooms, collaborative learning groups, and criterion-referenced, modified mastery-model assessments) fosters student learning. Our conjecture is that having different partners each day, alternating roles from more knowledgeable to less knowledgeable peer, helps students reflect on their learning and problem solving. We believe that, as students practice the concepts with a wider range of peers, their problem-solving and transfer abilities become stronger. Unpacking these various effects will require further studies.
As faculty consider replacing traditional lecture classes and exams with collaborative classroom activities and performance assessments, it is important to understand the factors that impact student performance. While there is a body of research demonstrating the relationship between attendance and outcomes in traditional courses, there have not been studies of the effect of attendance on outcomes in other course structures and assessment environments. Because ours is a large, introductory course that serves students from colleges throughout the university, these findings should be of interest to faculty who are considering implementing collaborative learning and/or performance assessments in a wide variety of subjects.

References
Buckalew, L. W., & Daly, J. D. (1986). Relationship of initial class attendance and seating location to academic performance in psychology classes. Bulletin of the Psychonomic Society, 24(1), 63-64.
Budig, J. E. (1995). Postcards for student success. West Lafayette, IN: ERIC ED 381208.
Galichon, J. P., & Friedman, H. (1985). Cutting college classes: An investigation. College Student Journal, 19, 357-360.
Gunn, K. P. (1993). A correlation between attendance and grades in a first-year psychology class. Canadian Psychology, 34(2), 201-202.
Gussett, J. C. (1976). Effect of Friday afternoon classes on grades and attendance. Psychological Reports, 39, 1035-1038.
Hovell, M. F., Williams, R. L., & Semb, G. (1979). Analysis of undergraduates' attendance at class meetings with and without grade-related contingencies: a contrast effect. Journal of Educational Research, 73, 50-53.
Jones, C. H. (1984). Interaction of absences and grades in a college course. The Journal of Psychology, 116, 133-136.
Street, D. R. (1975). Non-compulsory attendance: Can state supported universities afford this luxury? Journal of College Student Personnel(March), 124-127.
Urban-Lurain, M., & Weinshank, D. J. (1999a, March). "I Do and I Understand:" Mastery model learning for a large non-major course. Paper presented at the Special Interest Group on Computer Science Education, New Orleans, LA.
Urban-Lurain, M., & Weinshank, D. J. (1999b). Mastering computing technology: A new approach for non-computer science majors. http://aral.cse.msu.edu/Publications/AERA99/MasteringComputing.html.
Urban-Lurain, M., & Weinshank, D. J. (2000). Computing concepts and competencies. In D. Brown (Ed.), Interactive learning: Vignettes from America's most wired campuses (pp. 73-74). Bolton, MA: Anker Publishing Company.
Van Blerkom, M. L. (1992). Class attendance in undergraduate courses. The Journal of Psychology, 126(5), 487-494.
Van Blerkom, M. L. (1996). Academic perseverance, class attendance, and performance in the college classroom. ERIC ED 407618.
Vygotsky, L. (1978). Internalization of higher psychological functions, Mind in Society: The development of higher psychological processes (pp. 52-57 and 79-91). Cambridge, MA: Harvard University Press.