Marking and moderation

Rigorous marking and moderation processes keep assessments valid, by aligning grades with the intended learning outcomes, and fair, by applying standards consistently across markers and cohorts.

Key points to consider:

  • The ‘right’ approach to marking should reflect the assessment method being used.
  • Marking criteria need to ensure assessments measure the intended learning outcomes, while moderation promotes consistency in understanding and applying academic standards.
  • Assessment guidance and criteria must be communicated clearly and consistently across all channels to enable an effective feedback process.

There are three approaches to marking summative work at LSE: double-blind marking, sighted double marking, and moderated single marking. All three involve a second examiner. The default position is double-blind marking, but this is not always possible for some assessment methods (such as class participation).

Double-blind marking

The first marker marks all assessed work. The second marker also marks all work, without seeing the first marker’s grades/comments. Where marks differ, the two markers discuss and agree the final mark (usually meeting in person, but potentially by email/phone/other).

Pros

  • Reduces ‘random distortions’ which might occur in an individual’s marking, increasing reliability.
  • Markers can have slightly different perspectives, leading to a more rounded evaluation.
  • Students may understand this to be a fair and reliable form of marking. 

Cons

  • Agreeing marks may mainly be done by ‘splitting the difference’, which draws marks towards the middle of the available range.
  • Writing comments and grading with an eye to justification may lead to ‘defensive’ grading (and thus lower grades).
  • High overall marking loads for each marker.

Sighted double marking

The first marker marks all assessed work. The second marker reads all work, with sight of the first marker’s grades/comments, and evaluates whether those grades/comments are appropriate. If the second marker proposes any changes, the markers discuss and agree the final mark, or the second marker amends the marks.

Pros

  • Less time-intensive than double-blind marking, while retaining the potential for students to benefit from different perspectives.
  • Can be a good way to introduce new markers into an established process.
  • Works as a check on the markers as well as the marking. 

Cons

  • For handwritten exams, sighted marking can require a long turnaround time, as the second marker must wait for all work and marks/comments from the first marker.
  • Second marker may be influenced by the marks/comments of the first.
  • Pre-existing hierarchies of expertise or experience may impact the process.

Moderated single marking

The first marker marks all assessed work. The second marker considers (with sight of the first marker’s marks/comments) a selection of pieces of work, similar to that sent to an external examiner. A typical sample might include all borderline cases, all work graded ‘fail’, all first-class work, and 10% of all other pieces.
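As a minimal sketch (not LSE policy), the Python below shows one way such a moderation sample could be assembled from the rule just described. The mark thresholds, the ‘borderline’ window and the candidate-number keys are assumptions made for the example.

```python
# Minimal sketch only: assembling a moderation sample under the rule above
# (all borderline cases, all fails, all first-class work, plus roughly 10%
# of the remaining scripts). Thresholds, the "borderline" window and the
# candidate-number keys are illustrative assumptions, not prescribed values.
import random

FIRST = 70                      # assumed first-class threshold
FAIL = 40                       # assumed pass mark
BOUNDARIES = (40, 50, 60, 70)   # assumed classification boundaries
BORDERLINE_WIDTH = 2            # assumed "near a boundary" window


def is_borderline(mark: int) -> bool:
    return any(abs(mark - b) <= BORDERLINE_WIDTH for b in BOUNDARIES)


def moderation_sample(marks: dict[str, int], seed: int = 0) -> set[str]:
    """Return the candidate numbers selected for the second marker."""
    always = {c for c, m in marks.items()
              if m < FAIL or m >= FIRST or is_borderline(m)}
    rest = [c for c in marks if c not in always]
    random.Random(seed).shuffle(rest)
    extra = set(rest[:max(1, round(len(rest) * 0.10))])  # ~10% of the rest
    return always | extra


# Example: candidate number -> first marker's mark
marks = {"101": 38, "102": 55, "103": 64, "104": 74,
         "105": 61, "106": 56, "107": 45, "108": 66}
print(sorted(moderation_sample(marks)))
```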

Pros

  • On a larger course, moderated single marking can be significantly quicker than double marking.
  • A representative sample of work can allow markers to clarify the specific qualities that establish a grade; in this way, the approach need not be less reliable than double marking.
  • The scrutiny of borderline pieces of work can have a positive impact on students’ programme outcomes and degree classifications.

Cons

  • Some pieces of work only receive one academic’s judgement.
  • The sampled pieces of work may not adequately represent the first marker’s marking, and thus may not allow an evaluation of their overall judgement.
  • If the second marker substantially challenges the first marker’s judgement, it may be necessary for another academic to re-do the first marker’s marking.

In-class assessment (presentations, group work etc.) can be marked by any of these methods. For moderated single marking, the seminar or class tutor would be the first marker. The course leader would be the moderator, attending a sample of the work/activity (e.g. 2-3 presentations in a seminar group).

Marking criteria

Marking criteria consist of a set of descriptive (not evaluative) statements that explicitly communicate to students what knowledge and skills will be assessed. Each assessment criterion should be accompanied by a set of pre-defined statements outlining different standards of achievement (1st, 2:1, etc.).  

Defining clear and transparent marking criteria 

Marking criteria or standards are agreed in advance through discussion between academic colleagues; this can help clarify the scope of potential valid answers. Students should also be involved in reviewing new criteria, to check whether they are clear and meaningful to their peers.

Calibration meetings and moderation processes can help to develop and embed this shared understanding, as well as ensuring markers feel prepared and supported in their role. 

Simple teaching activities such as peer assessment (against the criteria), class discussions (about marking criteria), the provision of exemplars, and even mock assessments can help students better understand how their work will be graded.

Effective use of marking criteria for marking and feedback 

Marking student work requires markers to internalise marking criteria and interpret them against the assessment task. In practice, it involves one or two re-readings (depending on length, format, layout) and an iterative process of deciding marks. 

Marking criteria should be used to inform both evaluative judgements on students’ work and the construction of feedback: feedback comments should be set against marking criteria and learning outcomes. For more guidance on effective feedback see the ‘Effective Feedback’ page of our Toolkit.

Supporting marking teams

Working in teams to design, administer and mark assessment is widespread in universities. This collaborative approach can reduce the workload for individual colleagues, yet when the work produced by a large cohort of students is marked by many markers, marking variation is likely to occur, i.e. a single piece of work may be marked differently by different markers. The quality of marking can be further affected by frequent changes of markers. Research shows the following protocols can counter marking variation.

Marking calibration before the marking window

This involves all markers discussing their judgements of a particular piece of assessed work in order to reach a consensus. This method may require significant effort from all teaching staff. The complexity of the assessment task and the characteristics of the markers should be considered when deciding whether to adopt this process.

4 steps of marking calibration

Marking guidance to support individual markers during the marking window 

During the marking window, it is important to remind all colleagues of the processes and of any shared resources available to support them; with seasonal markers, the course convener should schedule check-ins where required. University teachers have found the following shared resources useful:

  • FAQs on the marking criteria.
  • A personal marking log file, which encourages markers to be aware of their own biases and keep them in check.
  • A Teams channel for quick communication between markers. 
  • Exemplar essays or model answers.
  • A spreadsheet with common phrases for feedback.
  • Guidelines on feedback (e.g. how much to write, what form it should take, etc.), and/or a couple of ‘good’ examples of feedback from a past cohort.

The table below shows one example of a judgement process when marking a large number of submissions. 

Judgement process when marking a large number of submissions

Step 1: pre-marking
  • Become familiar with the assessment brief, marking guidance and marking criteria.
  • Pre-sort submissions into sets according to a first scan and perceptions of quality.

Step 2: during marking
  • Refer to the marking criteria.
  • Check against other students’ work.
  • Mark individually and construct the feedback.
  • Mark in sets and compare performance within each set.

Step 3: post-marking
  • Carry out final checks and comparisons, such as:
    • Check that the feedback (e.g. tone, depth and criteria) matches the mark.
    • Compare the earliest and latest marked work to confirm grading standards remained stable.
    • Review borderline cases (e.g. grades near classification thresholds) for fairness.
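
As a minimal sketch of the post-marking checks, assuming marks are recorded in the order they were marked, the Python below compares the earliest and latest marked batches and lists marks that sit near a classification boundary. The boundary values, the ‘borderline’ window and the drift tolerance are illustrative assumptions, not prescribed figures.

```python
# Minimal sketch of two post-marking checks. Illustrative assumptions only:
# classification boundaries, "borderline" window and drift tolerance are
# made up for the example, and marks are assumed to be listed in the order
# they were marked.
from statistics import mean

BOUNDARIES = (40, 50, 60, 70)   # assumed classification thresholds
BORDERLINE_WIDTH = 2            # assumed "near a boundary" window
DRIFT_TOLERANCE = 3             # assumed acceptable early/late gap in means


def drift(marks_in_order: list[int], batch: int = 10) -> float:
    """Mean of the latest-marked batch minus mean of the earliest batch."""
    return mean(marks_in_order[-batch:]) - mean(marks_in_order[:batch])


def borderline(marks: list[int]) -> list[int]:
    """Marks close to a classification boundary, for a fairness re-read."""
    return [m for m in marks
            if any(abs(m - b) <= BORDERLINE_WIDTH for b in BOUNDARIES)]


marks = [62, 58, 71, 49, 66, 54, 68, 38, 60, 72, 57, 64, 51, 69, 61, 73]
if abs(drift(marks, batch=5)) > DRIFT_TOLERANCE:
    print("Grading standards may have drifted; re-check a sample.")
print("Borderline marks to review:", borderline(marks))
```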