Anujit Chakraborty
UC Davis
Organised by
Indian Statistical Institute (ISI), Delhi Centre
Abstract:
Massive open online courses (MOOCs) pose a great challenge for grading answer scripts with high accuracy. Peer grading is often viewed as a scalable solution to this challenge, but it depends largely on the altruism of the peer graders. In this paper, we introduce a mechanism, TRUPEQA, that (a) uses a small, constant number of instructor-graded answer scripts to quantitatively measure the accuracies of the peer graders and correct the scores accordingly, (b) penalizes underperforming graders, and (c) vastly reduces the total cost of arriving at the true grades. Our human-subject experiments show that our mechanism improves grading quality over the mechanisms currently used in standard MOOCs.
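For readers curious how probe-based calibration can work in principle, the sketch below is a minimal illustration, not the TRUPEQA mechanism itself (whose details are in the paper): each grader's systematic bias is estimated from the instructor-graded probe scripts and subtracted from their scores before aggregation. All function and variable names here are hypothetical.

```python
import numpy as np

def calibrate_and_correct(peer_scores, probe_ids, instructor_scores):
    """Correct each grader's scores using instructor-graded probe scripts.

    peer_scores: dict mapping grader -> {script_id: score}
    probe_ids: ids of scripts that the instructor also graded
    instructor_scores: dict {script_id: instructor score} for the probes
    """
    corrected = {}
    for grader, scores in peer_scores.items():
        # Estimate this grader's systematic bias from the probe scripts.
        errors = [scores[i] - instructor_scores[i] for i in probe_ids if i in scores]
        bias = np.mean(errors) if errors else 0.0
        # Subtract the estimated bias from all of the grader's scores.
        corrected[grader] = {i: s - bias for i, s in scores.items()}
    return corrected

def aggregate(corrected):
    """Average the bias-corrected peer scores for each script."""
    totals, counts = {}, {}
    for scores in corrected.values():
        for i, s in scores.items():
            totals[i] = totals.get(i, 0.0) + s
            counts[i] = counts.get(i, 0) + 1
    return {i: totals[i] / counts[i] for i in totals}
```

In this toy version, a grader who consistently over-scores the probe scripts by two points would have two points subtracted from all of their scores before averaging; the actual mechanism additionally penalizes underperforming graders, as described in the abstract.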
Date: July 15, 2019
Time: 03:30 P.M.
Venue:
Seminar 2
Indian Statistical Institute Delhi Centre,
7, S. J. S. Sansanwal Marg,
New Delhi-110016 (INDIA)