
Embracing Artificial Intelligence in my Law and Technology Assignment

You have probably heard of academic staff discouraging the use of generative artificial intelligence (AI) tools, such as ChatGPT, in your assignments. You may also have heard discussions in the media about markers attempting to identify assignments produced using artificial intelligence, a task that has contributed to prolonged and challenging marking periods.

But what if your assignments specifically included AI use? This is what happened during the Law and Technology module’s level three LLB Law assessment. This article examines how university assessments are evolving and the advantages and limitations of explicitly incorporating generative AI into them.

Assignments at universities have always changed. With the invention of the internet, assessments shifted dramatically from handwritten theses and hours spent searching libraries in person to typed, digitally submitted assignments and online research. This change raised serious questions about plagiarism and how to identify it, which were addressed by plagiarism checkers within tools such as Turnitin. Today, university assignments face comparable issues. With the growing use of generative AI tools, universities and academic staff are increasingly concerned that students can artificially generate their assignments without plagiarism tools detecting their use. As a result, marking has become an arduous task: feedback takes longer to be returned to students, and greater numbers of students are being flagged for academic offences, even when they are using appropriate learning tools, such as Grammarly, to refine their writing.

For these reasons, academics are confident that the way students are assessed must change again; the question is how. Some staff advocate making all assessments in-person exams, which would eliminate generative AI from students' assignments since they would not have access to it at the time of writing. However, this might not be feasible for students in a post-Covid era of remote learning, as many, including myself, have not sat an in-person exam since finishing their GCSEs. Other academic staff promote assessment via oral presentations, where, even if students generate their content using AI tools, they are also assessed on their delivery and public speaking ability. While this might boost students' confidence and communication skills, the reality is that such assessments take a long time to conduct and might not suit every student's capabilities.

AI generated image: A modern university classroom with students sitting at desks using laptops. A professor stands at the front, pointing to a large screen displaying AI tools like ChatGPT. The classroom has a mix of students attentively listening, taking notes, and interacting with the professor. The atmosphere is collaborative and technologically advanced, highlighting the incorporation of AI in education.

Innovatively, therefore, some academics are turning to the incorporation of generative AI into written university assignments and coursework. During the third year of my law degree, I chose to enrol in the ‘Law and Technology’ module, which included this kind of assignment in addition to a conventional essay-style assessment. The second part of our assessed coursework asked us to use ChatGPT to generate a 100-word essay with OSCOLA referencing, using the prompt “Critically discuss the challenges between Law and Technology in the UK legal system”. We were then required to write 500 words critically analysing the generated output, examining whether the AI tool had created an objectively good or bad argument in terms of its structure, use of sources, language, and engagement with case law and legislation. The work was not graded on the output itself, which we did not need to include in our responses, but rather on our engagement with and reflection on that output.

Personally, I found this assignment refreshing and engaging, having previously completed mostly essay and scenario-style assignments. The reflection encouraged me to adapt my critical analysis skills, sharpening my ability to carefully consider the outputs of generative AI tools in an academic environment. Because I found the generated essay largely poor, the exercise prompted me to think more critically about my own future use of generative AI, and it developed information literacy skills that will be necessary for a career where AI is often used in the workplace. For staff, this type of assignment may also be welcomed, both to curtail concerns about artificially generated assignments and to assess students on their ability to package their opinions and critical analysis in a different format.

AI generated image: A split-screen image contrasting two assessment scenarios. On the left side, a traditional setting with a professor marking a handwritten essay with a red pen. On the right side, a student is sitting at a desk with a laptop, critically analyzing an AI-generated essay. The student is highlighting and making notes on the screen. The image illustrates the evolution of academic assessments from traditional methods to the inclusion of generative AI tools.

Nevertheless, this form of assignment has some limitations. Firstly, the change in evaluation style requires academic staff to judge the quality of students' work differently from usual, possibly deviating from the Conceptual Equivalents Scale, and without guidance this could initially result in some inconsistency in awarded grades. Secondly, the scope of this type of coursework could be limited. As a single assignment alongside a traditional essay, staff can still assess students in every area, including their ability to engage with academic literature; if this assignment were issued on its own, that element of academic evaluation would be lost. And because the results of this reflection-style evaluation can be vague and similar across submissions, it might be difficult to repeat without students producing the same answers again and again.

Although universities must tread cautiously, given the limitations of generative AI assignments, incorporating this technology into university assessments, such as the Law and Technology reflection piece, offers a novel approach to teaching and assessing critical thinking. By embracing these tools within academia, I think institutions can better prepare students for a future where AI is integral, while addressing the ongoing challenges of academic integrity. Have you heard of any academic staff advocating for the use of AI in academia? If so, please reach out to the AI Hub (AI-Hub@qub.ac.uk) to share your own experiences!