
Lifting the lid on online grading tool Crowdmark

Published: 6 September 2019

Crowdmark – an online grading tool developed especially to handle large classes – has been attracting a growing following across North America, with members of McGill’s own Faculty of Science among its most ardent enthusiasts. Kira Smith, reporter-at-large for the OSE, went undercover to find out more.

I was in the depths of the McLennan Library Building – room MS-12. It was late in the term, late in the day, and I was poised in front of the printer. I’d been listening to it rattle on for at least five hours as I scanned the winter term’s final exams. Around hour three the room began to smell pungent: it was the printer’s cry for help. I realized then that I struggled to empathize with the machine – I had become a Crowdmark convert.

Few have borne witness to the elusive and oft-clandestine activities surrounding collaborative online grading software, but for the past six months I’ve been chronicling my deep dive into this world in the hopes of unravelling the mystery surrounding Crowdmark. Little did I know that I would become a Crowdmark advocate and ally in the process.

My first encounter with Crowdmark was in September 2018, at the inaugural meeting of “Instructors Who Use Crowdmark” – the culmination of several scintillating, but hitherto off-the-record, conversations across the university – hosted by Teaching and Learning Services (TLS). By word of mouth and through the online Crowdmark database, TLS had assembled a passionate group of instructors. Among them, three unabashed devotees from the Faculty of Science offered to share their strategies, insights and thoughts on Crowdmark with colleagues. I was tasked with recording the meeting.

By the fluorescent light of Redpath Visualization Room A, Adam Finkelstein opened the meeting in a reverent tone with words that somehow resembled a call to the creator. As Adam recounted, Crowdmark was developed by James Colliander, a mathematics professor at the University of Toronto who found himself grading thousands of exams by hand. Dr. Colliander created Crowdmark to meet the need to provide meaningful feedback to students efficiently, and to enable consistency and transparency in grading. Carrying on the legacy, Laura Pavelka (Department of Chemistry), Ken Ragan (Department of Physics) and Nik Provatas (Department of Physics) reflected on how Crowdmark has helped them achieve these goals.

Each instructor shared the teaching context in which they use Crowdmark. Laura has been using the tool for several years: she first tried it out in a summer course of 200 students, then used it for fall courses, and finally went “all out”, using Crowdmark for the midterm and final exams in a fall class of 700 students. Ken and Nik use Crowdmark in a similar way for their midterm and final exams, but have also used it for labs and assignments. All three instructors noted that they have teaching assistants who support them.

Laura outlined the process for administering Crowdmark exams. First, she creates the exam with a formatted cover page, which allows Crowdmark to match each exam to the student name entered into the system beforehand. On rare occasions – Laura mentioned a figure of 10 out of a total of 700 exam papers – instructors have to manually match the exam to the corresponding student. After students write the exam, the papers are transported to TLS, where staff scan and upload them to the Crowdmark website. Laura described the process as “seamless,” saying the only difficult part was physically carrying the exams over to TLS.

Ken shed light on the differences between administering an exam and an assignment through Crowdmark. Once the instructor creates and releases an assignment, Crowdmark sends students a link asking them to create an account and upload their completed copy. Ken suggested that instructors create a student account in addition to their instructor account, so they can perform a trial run, releasing the assignment to themselves before doing so with students.

From there, the grading process is the same. Instructors rely on a team of graders (e.g. teaching assistants) to support them. Laura typically holds a meeting with all her graders, during which they discuss the exam, the rubric (or marking scheme) and the marking schedule. They then go their separate ways and begin grading – working simultaneously and in whatever location suits them, a primary advantage of Crowdmark. Ken remarked that in the past, teaching assistants struggled to shuffle all the paper around; Crowdmark circumvents this problem, resulting in “time better spent”.

Graders work on one question independently or in pairs, and either a head TA or the instructor acts as the lead grader, ensuring consistency across all graders. In addition, the ability to create a unique comment “template” for each question, accessible to all graders, ensures that all students receive high-quality feedback. As Ken mentioned, these comments can include solutions and be linked to a specific mark.

After all the marks have been entered, Crowdmark generates an Excel file with the results, which is then uploaded to myCourses. When students receive their grade, they are also sent an image of their exam. Ken said he had at one point been concerned about the number of students who might want to follow up after the exam, but with the additional feedback Crowdmark allows for, he in fact saw no increase in the number of students coming to see him. As Laura and Ken put it, grading takes the same amount of time as it did without Crowdmark, but the quality of the grading and feedback is much higher.
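For readers curious about that last export-and-upload step, the short Python sketch below illustrates the kind of light reshaping a results spreadsheet sometimes needs before a gradebook import. It is purely illustrative: the file names, column headings (“Email”, “Total”) and the username convention are assumptions, not Crowdmark’s actual export schema or myCourses’ actual import format, and it assumes the Excel export has first been saved as CSV.

```python
import csv

def reshape_results(crowdmark_csv: str, gradebook_csv: str) -> None:
    """Reshape a (hypothetical) grading-results export into a
    two-column gradebook import file. All column names are assumed."""
    with open(crowdmark_csv, newline="") as src, \
         open(gradebook_csv, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst)
        writer.writerow(["Username", "Final Exam Grade"])
        for row in reader:
            # Derive an LMS username from the student's email address.
            username = row["Email"].split("@")[0]
            writer.writerow([username, row["Total"]])

reshape_results("crowdmark_results.csv", "gradebook_upload.csv")
```

In practice, TLS or the learning management system’s own import tools may handle this mapping automatically; the sketch simply makes the data flow concrete.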

Encouraged by the depth of the conversation and the praise for Crowdmark that I had heard at this meeting, I sought the perspectives of TAs who had graded exams with the system. Stefanie Perrier (Head Crowdmark TA and neuroscience PhD student) and Sarah Sanderson (STEM Teaching Fellow and biology MSc candidate) worked with Andrew Hendry to grade BIO 111 exams. Both Stefanie and Sarah had used Crowdmark as administrators and found it efficient and user-friendly. While Stefanie had little direct contact with the students, Sarah found that the students with whom she interacted rarely mentioned Crowdmark by name. Those who did give her feedback on it said they found it unusual not to be writing a multiple-choice exam, the assessment method to which they were most accustomed. As Marcy Slapcoff, Director of McGill’s Office of Science Education, would later remark, it is important for faculty to prepare students for this new method of assessment, perhaps through the use of practice questions.

Stefanie and Sarah reiterated comments made by the instructors at the September meeting: both appreciated the ability to give rich, detailed feedback in the form of comment templates. They enjoyed working remotely, noting that graders could send the instructor a screenshot of any part of an exam they had questions about, making communication more efficient. Sarah also appreciated being able to monitor how long each grader was taking to mark each question, which allowed her to ensure that expectations of graders were realistic. She added that sending students an image of their exam increased transparency, which was valuable for their learning.

It was clear from all of my conversations with folks who had used Crowdmark that there was fervent enthusiasm for it. However, it wasn’t until I was speaking with colleagues at TLS that I realized that I, too, had been bitten by the Crowdmark bug. Crowdmark is a phenomenal tool, and I urge any curious instructor grappling with a large class to visit the Teaching and Learning Services website and discover for themselves the difference Crowdmark can make to their grading workflow.
