Staff FAQs on the use of GenAI in Teaching, Learning and Assessment

Your students are using GenAI. Many are using it in their learning and to support assessment, and some may be using it inappropriately. This changes how we need to think about teaching and assessment. 2025/26 is a transition year in which we are aiming to achieve two goals: maintaining the rigour and value of LSE degrees and supporting staff and students to use GenAI appropriately.

The following FAQs will help you to make informed choices about:

  • Assessment design - how to design GenAI-resilient assessments, and where and how to integrate GenAI appropriately and safely into your assessments
  • What to communicate to your students
  • Whether and how you might use GenAI in your teaching and marking
  • What to do if you suspect inappropriate use of GenAI.

What is the School's position on GenAI in assessment this year?

The School’s three-position framework remains the same as before (no authorised use*, limited authorised use, or full authorised use). Your department will have confirmed its position for 2025/26. The key change is that ‘observed assessment’ is now required across all programmes. This means including in-person exams, oral assessments, in-class components or appropriate formative assessments to verify that students are achieving learning outcomes at programme level. For additional help, see the FAQ below on observed assessment options.

In general, you are encouraged to:

  • Use LSE-approved GenAI tools (Claude, Copilot) to assist your teaching and marking

  • Design assessments that integrate GenAI appropriately

  • Teach students about effective and ethical GenAI use

Further information: School Position on Generative AI 2025/26

*Unless your Department or course convenor indicates otherwise, the use of GenAI tools for grammar and spell-checking is not included in the full prohibition under Position 1.  

What position should I adopt in my course or programme?

All departments will have adopted an overarching position; however, this decision may be to delegate the choice to programme directors or course convenors. Check what this is with your departmental AI lead, Deputy Head of Department for Teaching/Education or Department Manager.

If the decision is yours, consider Table 1 below which offers a brief comparison of the three positions and consider the questions that follow.

GenAI positions

|                          | Position 1: No authorised use | Position 2: Limited authorised use                      | Position 3: Full authorised use              |
|--------------------------|-------------------------------|---------------------------------------------------------|----------------------------------------------|
| What students can do     | No GenAI for assessment*      | GenAI used predominantly to assist learning development | GenAI may be used, but use must be documented |
| Examples                 | Grammar checking only         | Brainstorming, literature searches                      | All tasks including drafting                 |
| GenAI-generated text     | Not permitted                 | Not permitted                                           | Permitted with disclosure                    |
| Best for courses where...| Independent skills are core   | Balancing GenAI literacy with fundamentals              | GenAI use reflects professional practice     |

Table 1 - understanding the three positions

*Unless your Department or course convenor indicates otherwise, the use of GenAI tools for grammar and spell-checking is not included in the full prohibition under Position 1.

If the position has not yet been agreed by your department, the following questions may help you decide which position to adopt:

  1. What skills and knowledge are you assessing?

  2. Can you tell if students achieved the learning outcomes with your current assessment methods?

  3. How might your students need to work with GenAI professionally, and what foundational skills might you help them develop?

  4. Can you clearly explain boundaries to students? (see FAQ below on communicating your approach to students)

If you are a programme director, you will need to engage in discussion with course convenors to agree a whole programme approach which addresses the requirement for observed assessment (See FAQ below on observed assessment).

Need further guidance? Contact your departmental GenAI lead or Eden Centre departmental adviser.

How can I best communicate my position to students?

Students say that they are confused because they are taking courses with a range of positions on the use of GenAI within the same programme, and they struggle to keep track of what is permitted for particular courses.

Without clear, consistent communication, students may assume all courses have the same rules, struggle to find the information about your position when they need it, or commit accidental misconduct through confusion rather than intent.

The most effective policy is to make your position impossible to miss, so here are some places you can share it:

  • Course Moodle page

  • Course outline/syllabus

  • In class - provide an explanation with examples and Q&A

  • On every formative and summative assessment brief, providing specific guidance when students are actually doing the work.

To help staff, the Eden Centre has published effective communication guidance with templates that draw on departmental good practice from 2024/25. Using these will save you time and provide you with ideas.

Resource: Good practice guide: communicating with students about Generative AI in formative and summative assessments 

What counts as 'observed assessment'?

Observed assessment is a recognised assessment method which reliably confirms that individual students have completed the work themselves and demonstrated the intended learning outcomes. This applies to both formative and summative assessment.

The School has adopted this policy because some assessment methods, such as unsupervised take-home coursework, cannot reliably verify learning: the use of GenAI cannot be dependably detected. Observed assessment provides that verification.

LSE requires some elements of observed assessment across a programme. Current sector practice varies from 25% to 60%. Check with your programme director – you may already meet the requirement.

Methods that count as ‘observed’ include:

Linked formative and summative work

  • Students submit preparatory stages (e.g. annotated outlines, research proposals, draft sections) allowing you to track their development and verify that summative submissions authentically represent their learning journey.

In-person

  • Closed-book invigilated exams

  • Oral assessment (presentations, vivas, interactive Q&A)

  • In-class work (quizzes, problem-sets, simulations and short-answer tests).

Technology-enhanced:

  • Edit-tracking platforms: a digital assessment tool called Cadmus is being piloted at LSE. It records the writing process with analytics, making it harder for students to cut and paste large chunks of text.

  • Moodle’s quiz settings: these can be configured to reduce the opportunities for GenAI misuse. For example: set time limits to reduce the likelihood of students looking up answers using GenAI; release correct answers only once the quiz has closed, which prevents students from sharing GenAI-generated answers with classmates who haven’t yet taken it; randomise questions from a question bank so that each student gets a unique version of the quiz; show one question at a time to make it harder to collate all the answers at once; and restrict the number of attempts, which makes it harder to test answers using GenAI and re-enter them.

  • Video presentations: using Zoom or Echo360, students record themselves explaining their work, analysing a case, or solving a problem on camera. Examples may include explaining dissertation findings, talking through a coding solution, or applying theory to a new scenario. You may also pose a random question as part of the assessment that requires students to respond, or ask them to produce a short 1-2 minute video explaining what they have written. These aspects can support verification of authenticity.

Contact your learning technologist departmental adviser for further advice and support.

Hybrid:

  • Written work followed by oral defence

  • Staged assessments that combine written and oral components

  • Formative activities directly linked to summative grades.

To help you choose what might work best for you and your students, consider what you are really assessing and how this relates to the learning outcomes. Your choice may depend on practical constraints such as cohort size or teaching resources.

If you require further guidance, contact your departmental AI lead or Eden Centre departmental adviser.

Resource: Observed Assessment Policy and Implementation

How do I redesign assessment to work in our GenAI-enabled environment?

With GenAI widely available, many traditional assessment formats can no longer reliably verify student learning. As detecting the use of GenAI is unreliable, better assessment design is one way forward.

Redesigning your assessment may include the following:

  1. Make learning visible through processes that track student development over time. For example, you could link formative assessment (outlines, drafts, bibliographies) to summative assessment. This can provide your students with checkpoints that might also require some form of documentation, such as a reflective journal.

  2. Design tasks that GenAI can't easily complete by requiring the specific use of course-specific materials and knowledge, or ask students to apply their knowledge to novel scenarios that focus on higher-order thinking (analysis, synthesis, evaluation).

  3. Build in verification by combining written work with a short oral defence.

These are good assessment practices – they verify learning whether GenAI is used or not.

For further information visit the LSE Assessment and Feedback Toolkit including:

  • Assessment and Feedback Principles: Actionable principles that guide coherent, programme-level assessment design, balancing consistency with flexibility and addressing challenges like GenAI automation. 
  • Programme and Course-Level Design: Guidance on mapping formative vs. summative assessments, ensuring clear links between learning outcomes and assessment choices. 
  • Assessment methods: A collection of diverse formats, from traditional essays and exams to portfolios, group work, presentations, and alternative formats, to meet varied student needs. 
  • Inclusive assessment: Designed for all students, reducing need for individual adjustments by ensuring accessibility, authenticity, fairness, scaffolded support, and culturally sensitive content. 
  • Technology integration: Guidance on leveraging tools such as Moodle, Turnitin, Gradescope, Digi-exam for streamlined submission, marking, peer feedback, and enhanced digital literacies. 

The Eden Centre has designed CARA – a GenAI tool built in Claude to help staff through the process of designing assessments. Contact Dr Yang Yang via y.yang170@lse.ac.uk if you wish to use this tool.

What do I need to know about oral assessment?

Oral assessment is used in undergraduate and postgraduate programmes in many countries. The UK HE sector has tended to rely heavily on written work to demonstrate learning outcomes. Whilst there are clear benefits of demonstrating evidence of learning through writing, this form can be vulnerable to GenAI.

LSE is therefore encouraging the careful introduction of oral assessment through its Observed Assessment Policy and Implementation. The School offers support to staff through practical guidance and an active community of practice where colleagues can regularly share their experiences and resources and learn from each other. If you would like to join this group, please contact Dr Alex Standen via a.m.standen@lse.ac.uk.

The following points offer some initial guidance around the process of designing an oral assessment:

  • Start by aligning your approach with course learning outcomes and being clear about what you're assessing through your marking criteria (which should be published in advance so students know what to expect).

  • Typical oral assessments last 10-20 minutes per student and can be conducted with individuals or groups. Student preparation is crucial to reduce anxiety and ensure fair assessment. Sample questions or scenarios can illustrate the approach. Providing formative or mock assessments allows students to practise the format in a safe, low-risk environment. Indeed, GenAI can play a useful role here in building confidence – you could encourage your students to practise independently, either with their peers or by using the oral mode in the Claude phone app.

  • It is crucial to address inclusivity and accessibility early in your assessment planning. During the assessment, create a welcoming environment and allow students time to settle in before starting. Focus your assessment on students’ knowledge and understanding rather than presentation skills or language fluency, which is especially important for students with English as an additional language.

  • Record the assessment (audio or video) for quality assurance purposes.

  • For grading purposes, use detailed marking criteria and complete your marking during or immediately after the assessment. Having two assessors present achieves double marking efficiently, though single marking is permitted for components that represent 15% or less of the course mark.

Resource: Assessment and Feedback Toolkit – Oral assessments  

The Eden Centre has designed CARA – an AI tool built in Claude to help staff through the process of designing assessments, including a specific tool trained on our oral assessment guidance. Contact Dr Yang Yang via y.yang170@lse.ac.uk if you wish to use this tool.

How do I safeguard dissertations in a GenAI-enabled environment?

Dissertations represent students' deepest engagement with their subject, typically counting for a quarter of a student’s overall programme and representing months of research and writing. This makes them high-stakes assessments that are particularly vulnerable to cognitive offloading to GenAI. However, there are safeguards you can introduce to verify the authenticity of an individual’s work without altering the grading structure.

For example, introducing oral components provides direct verification of student understanding. You could consider mini-viva examinations where students defend their work, or require research proposal presentations and progress presentations at key stages. The resulting conversations reveal whether students genuinely understand their research.

Progressive milestones not only make the research journey visible and motivating, but also encourage students to see their assessment as a process rather than just the end product. Requiring formative submissions at key stages such as annotated outlines, comprehensive literature reviews, or chapter drafts that receive supervisor feedback creates a documented trail of development that emphasises the evolution of ideas. You may also ask students to keep research journals or reflective logs documenting their thinking process, or require primary source evidence such as interview transcripts or survey data. Bibliographies that include personal reflections on source relevance also help to verify genuine engagement with the literature.

This kind of enhanced supervision also strengthens your relationship with your students.

 Resource: Assessment and Feedback toolkit - Dissertations

Can I interview students about their work?

Yes, you can use interviews to verify student understanding and maintain academic standards. LSE regulations permit two distinct types of interview, each serving a different purpose.

Random interviews involve selecting up to 10% of students at random to discuss their submitted work. These interviews are about quality assurance across your cohort rather than suspicion of individual misconduct. They help you verify that learning standards are being met and also serve as a deterrent, as students know there's a chance they'll be asked to explain their work.

Selected interviews are used when you suspect a specific student has used GenAI in an unauthorised way. These follow standard LSE academic misconduct procedures and give students an opportunity to demonstrate authorship and understanding of their work. Unlike random interviews, these are triggered by specific concerns about a particular submission.

Both approaches are permitted under LSE regulations and help maintain academic standards while deterring misconduct. The key is transparency: you must communicate your intention to use interviews in your course outline from the very start of teaching, so students know from day one that they may be asked to discuss their work.

More information: Academic Discipline and Academic Misconduct

Can I use GenAI in my teaching?

Yes, you can use GenAI in your teaching, provided you follow appropriate safeguards around data protection and transparency. This academic year, LSE provides two approved tools for staff and students: Claude (Anthropic) and Microsoft Copilot. Both offer Commercial Data Protection, meaning your prompts, responses and any uploaded files are confidential and cannot be used to train the GenAI models. This gives you and your students the highest level of data protection available. The School encourages you to use these tools to enhance your teaching.

Some colleagues are developing custom learning assistants for their students using these platforms. You can also use these tools to develop course materials, generate examples, explore how GenAI might be used in your discipline and to enhance feedback and the quality of marking (see FAQ below on marking and feedback).

If you choose to use platforms other than Claude or Copilot, please be aware that free versions typically don't offer the same data protection guarantees. Your own and your students’ prompts and work may be used for training, and privacy protections vary significantly. There are also significant differences in capability between free and premium versions of most LLMs, including access to more advanced models, additional tools and higher usage limits. If you have only used the free offerings, we encourage you to experiment with the premium version of Claude to see how it compares. For example, try using 'Research' mode, which is not currently available on free accounts.

Important note: Using GenAI in your teaching doesn't change your department/programme or course position on student use of GenAI. You still need to communicate clearly to students what they are permitted to do (see FAQ above on communicating your position).

For further information, visit Copilot at LSE or Claude for Education at LSE

Resource: Supporting LSE educators to use GenAI

How can I use GenAI in my teaching?

There are many practical ways to introduce students to GenAI meaningfully in your classroom, moving beyond simply allowing or prohibiting its use to actively teaching about it.

You could start by building shared understanding with your students. Consider co-creating ground rules on using GenAI together, including how to cite GenAI appropriately if its use is permitted in your assessments. This collaborative approach helps students understand the reasoning behind your position and gives them ownership of the guidelines.

Model critical use through in-class activities. Demonstrate both effective and problematic uses of GenAI in your discipline, pointing out where tools produce hallucinations, errors, or misleading outputs. Encourage students to ask questions about methodology, attribution, citations, authorship, and ethics as you work through examples together. You might also use GenAI collectively to build understanding of a topic, explicitly noting the strengths and weaknesses of GenAI-generated content compared to traditional scholarly sources. Promoting discussion, modelling tool use, and creating shared understandings all help reduce student anxiety while enhancing their GenAI fluency. Students learn to use these tools critically rather than accepting their outputs without evaluation.

Create custom learning assistants for your course. Using Claude Projects or Copilot, you can develop teaching assistants that have access to the sources and literature used in your course. These custom tools can support student learning by following your instructions about how to behave while remaining grounded in your course materials. If you are interested in creating a course related chatbot, please contact Steven Williams, Eden’s GenAI and education specialist via s.williams10@lse.ac.uk.

Resource: Practice and perspectives from LSE

How do I teach students about ethical GenAI use?

Students need to understand both the practical limitations and broader ethical implications of GenAI tools to use them responsibly in their academic and professional lives.

GenAI-generated content frequently contains errors, biases, or completely fabricated citations and references (hallucinations). Students who over-rely on these tools risk undermining their own deeper learning. They need to understand that while GenAI can support their work, they remain fully responsible for accuracy and must put in genuine intellectual effort. The learning and the work still needs to be theirs.

Beyond concerns about accuracy, GenAI also raises significant ethical issues that include substantial environmental impact through increased energy and water consumption, problematic labour practices in training GenAI models, widespread use of copyrighted materials without permission, perpetuation of societal biases and stereotypes, and growing inequalities between those who can afford premium GenAI tools and those who cannot.

In class, you can open up discussions about these issues and model responsible GenAI use yourself. Show students how GenAI can support rather than replace their thinking, demonstrate the importance of verification and fact-checking, and model effective prompting to help students use tools efficiently. Discuss the specific limitations and applications of GenAI in your discipline and explore how GenAI is currently being used in professional contexts in your field, including any relevant professional or ethical standards that apply.

Resource: Supporting your students

Can I use GenAI to mark, grade and give feedback on students' work?

In collaboration with King's College London and the University of Southampton, LSE has developed a staff policy containing guidance and scenarios that illustrate the necessary safeguards to assist you in navigating this emerging practice.

There are many ways that GenAI can help you with feedback and enhancing quality assurance processes. For example, converting handwritten notes to typed feedback to improve accessibility and legibility; analysing feedback patterns across a cohort to reveal common student challenges and inform your teaching; and checking feedback clarity to ensure students understand your guidance.

What remains your responsibility:

  • You must read and assess all student work yourself. You make all marking decisions independently - GenAI supports your process but never replaces your judgment.
  • You maintain oversight at every stage and remain transparent with students about GenAI use.
  • You avoid uploading identifiable student work without explicit consent.

GenAI cannot determine marks, replace second markers or moderators, or make final assessment decisions. These remain entirely with academic staff.

The Use of GenAI-assisted marking and feedback at LSE policy includes seven practical scenarios: scaling feedback for large cohorts, supporting staff accessibility needs, digitising handwritten annotations, calibrating feedback tone with grades awarded, identifying teaching improvements from feedback patterns, checking whether feedback makes sense to students, and supporting moderation calibration.

These examples show GenAI enhancing marking practice within appropriate boundaries.

Resource: Use of GenAI-assisted marking and feedback at LSE

What if I suspect a student has used GenAI inappropriately?

At present, GenAI detection tools are unreliable and should not be used. Unlike plagiarism detection software, tools claiming to detect GenAI-generated content produce too many false positives and false negatives to be trustworthy in academic decision-making. Using them risks unfairly penalising students whose genuine work might trigger flawed algorithms.

However, other indicators may raise questions about authorship: changes in a student's writing style, sudden changes in levels of performance, unusual language structures, or inconsistencies between formative and summative work. These observations, while not definitive proof, can form the basis for further investigation.

If you suspect unauthorised GenAI use, LSE's existing rules and processes for assessment misconduct cover GenAI misuse, just as they cover plagiarism or other forms of academic dishonesty. You can find comprehensive staff guidance on how to handle allegations of assessment misconduct at the link below, including when and how to raise concerns, what evidence to gather, and how to conduct investigative conversations with students.

The preventive approach remains most effective. Rather than trying to detect GenAI use after submission, design assessments that verify learning directly through observed methods, progressive checkpoints, or formats that are more resilient to GenAI misuse or that make unauthorised GenAI use more obvious (see the FAQs above on observed assessment and on redesigning assessment), and engage in discussion with your students about the effective and appropriate use of GenAI.

Resource: Staff Guidance for dealing with allegations of assessment misconduct

How do I deal with marking if students have used GenAI?

How you handle student use of GenAI in assessment depends on the position you have adopted and communicated to students.

If you've adopted Position 1 (No Authorised Use), the boundaries are clear. Undocumented GenAI use constitutes potential academic misconduct and should be handled through proper channels. However, grammar checking is generally acceptable unless your department has specified otherwise. When you encounter work that raises concerns, mark it on its academic merit first, then flag any integrity concerns separately through the standard misconduct procedures.

For Position 2 (Limited Authorised Use), you need to check how students have documented their GenAI use against the instructions given. If students have documented their GenAI use properly but appear to have relied on it too heavily, this is an academic quality issue rather than misconduct. Mark the work accordingly, recognising that over-reliance on GenAI typically results in weaker demonstration of original thinking and lower marks.

With Position 3 (Full Authorised Use), students have significant freedom but must still document their GenAI use. When GenAI is used appropriately with proper documentation, mark the work normally on its academic merits. If the work shows little original thought or critical engagement despite technical proficiency, mark it accordingly – this is an academic quality issue. However, if students fail to document their GenAI use at all, this missing documentation may still constitute misconduct as it violates the transparency requirements you've established.

LSE’s key principles remain constant across all positions: mark the work on its academic merit first, assessing how well it demonstrates the learning outcomes and meets the assessment criteria. Address any integrity concerns separately through proper channels. This separation ensures fair academic assessment while maintaining appropriate processes for potential misconduct.

Getting support and staying updated

Where can I get help implementing these policies?

Assessment and pedagogy support:

Technical and administrative:

Policy and regulations:

Will LSE's policies change?

Yes. 2025/26 is explicitly a transition year. You can expect the School to continue to review and, as appropriate, refine policy and guidance throughout the year.

If you have feedback or questions, please email eden@lse.ac.uk or speak with your departmental AI lead. For the latest version and supporting materials, visit the Eden Centre website.