[Image: a woman interviewing a robot, seated in armchairs]

A Socratic Debate with ChatGPT: Reflecting on the Use of Gen AI in Research

By Dr Fabiana Fausto McCracken, Researcher Development Consultant at the Thomas J. Moran Graduate School

As a researcher, I have always been interested in computational tools that support data analysis, such as corpus tools for identifying textual patterns in literary discourse. Since the release of ChatGPT in November 2022, I have therefore been keen to explore how Generative Artificial Intelligence tools can streamline and enhance the research process. The university has since launched the Artificial Intelligence Hub, which provides helpful advice on the responsible use of AI by staff and students, in line with the Russell Group guidelines and QAA directives.

I often refer postgraduate students to the QUB Student Guide on the Use of Generative AI as part of my role as a Researcher Development Consultant in the Graduate School, and support them in addressing common queries around the ethics and implications of using AI in their studies. Rather than simply describing students’ main concerns and my advice in this blog post, I have used ChatGPT as a Socratic friend (UNESCO, 2023), asking it to pose challenging questions about my practice and help me reflect on the quality of the advice given. A summary of my virtual debate with ChatGPT is presented below:

Fabiana

I am a Researcher Development Consultant at a UK university and support postgraduate students to develop academic and research skills in order to succeed in their studies. The postgraduate students that I support have increasingly asked questions regarding the use of generative artificial intelligence in academic writing and data analysis. I often refer them to the university’s own guidelines on the responsible use of AI and academic integrity regulations. Please ask me a question to help me reflect on my current approach.

ChatGPT

Given the growing interest and inquiries from postgraduate students about the use of generative AI in academic writing and data analysis, how might you adapt your current approach to not only provide guidance on the university’s policies but also proactively equip students with the critical skills and ethical considerations necessary to responsibly integrate AI tools into their research processes?

Fabiana

Depending on the nature of the student’s query, I might encourage the student to seek guidance from the Research Governance Team, for example on the use of AI tools in data analysis. If the student is planning to use artificial intelligence in the analytical stage of their research, it is important to include this decision in the ethical approval process and demonstrate how participant confidentiality and data protection regulations will be followed. This includes ensuring that the content of interviews will not be stored in the AI tool’s data repository or used to train the language model. Have I answered your question?

ChatGPT

Yes, you’ve provided a thorough and thoughtful answer that highlights your approach to guiding students on the ethical use of AI in research. You emphasize the importance of involving the Research Governance Team and integrating AI-related decisions into the ethical approval process, which is crucial for maintaining academic integrity and compliance with data protection regulations.

However, considering your role in supporting postgraduate students, here are a few additional reflective questions:

  1. How do you ensure that students are not only aware of the guidelines but also fully understand the potential risks and implications of using AI tools in their research, particularly concerning confidentiality and data security?
  2. In what ways could you further enhance your support for students to critically evaluate the outputs generated by AI tools, ensuring that they can distinguish between AI-generated content and their own intellectual contributions?
  3. How might you encourage students to document their use of AI tools transparently in their research, including how they addressed any ethical concerns?

Reflecting on these aspects might help you refine your approach even further to meet the evolving needs of postgraduate students in relation to AI use.

Fabiana

Thank you for sharing these reflective questions with me. I will address your second question, on the critical assessment of AI-generated outputs. For example, I cover the AI-powered Autocode function of NVivo in my Data Discovery workshops and explain to students that it can be used to reduce the time spent on more mechanical tasks, such as identifying speakers’ responses in an interview transcript. The Autocode feature of NVivo automatically identifies structural elements, such as speakers’ names, or trends and patterns in qualitative data, via comparisons with a language model. I do, however, recommend a cautious approach to using the Autocode function to identify themes or carry out a sentiment analysis of interview content. AI-powered thematic analysis should not replace the researcher’s own insight and interpretation of their data; when used ethically and responsibly, however, it may serve triangulation purposes or help fine-tune manually generated codes. Did I answer your second question?
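(An aside for readers of this post: the structural side of autocoding, grouping each utterance in a transcript under its speaker, can be sketched in a few lines of Python. This is an illustrative sketch of the general technique only, not NVivo’s implementation; the autocode_speakers function name and the "Name: utterance" transcript convention are assumptions made for the example.)

```python
import re
from collections import defaultdict

def autocode_speakers(transcript: str) -> dict[str, list[str]]:
    """Group each turn of an interview transcript under its speaker.

    Assumes lines follow the common "Name: utterance" convention;
    lines without a speaker label continue the previous turn.
    """
    turns = defaultdict(list)
    current = None
    for line in transcript.splitlines():
        match = re.match(r"^([A-Z][\w .'-]*):\s*(.*)$", line)
        if match:
            # New turn: record the speaker and their utterance.
            current = match.group(1)
            turns[current].append(match.group(2))
        elif current and line.strip():
            # Continuation of the previous speaker's turn.
            turns[current].append(line.strip())
    return dict(turns)

sample = """Interviewer: How do you use AI in your research?
Participant: Mostly for transcription.
It saves a lot of time.
Interviewer: Thank you."""

coded = autocode_speakers(sample)
```

The point of the sketch is that this kind of structural coding is purely mechanical, which is exactly why it is a safe task to delegate to a tool, whereas identifying themes in the resulting turns is interpretative work that remains with the researcher.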

ChatGPT

Yes, you’ve provided a well-considered answer to the second question, demonstrating a balanced approach to the use of AI in research. By highlighting the benefits of AI-powered tools like NVivo’s Autocode function in reducing mechanical tasks, while cautioning against over-reliance on AI for critical interpretative work, you are reinforcing the importance of the researcher’s role in data analysis.

Your emphasis on ethical and responsible use, particularly in the context of thematic analysis and sentiment analysis, is crucial. You’ve also offered a practical strategy for integrating AI-generated insights as a supplementary tool rather than a replacement for the researcher’s expertise.

To build on this, you might also consider:

  1. Encouraging reflective practice: Asking students to document and reflect on their decision-making process when using AI tools. How did they decide when to rely on AI versus their own judgment?
  2. Providing case studies or examples: Sharing examples where AI-generated outputs either enhanced or misled analysis could help students develop a more nuanced understanding.
  3. Offering follow-up discussions: Creating opportunities for students to discuss their experiences with AI tools in data analysis, allowing for peer learning and deeper reflection.

Overall, your approach aligns well with promoting both critical thinking and ethical research practices.

Fabiana

Thank you for your suggestions. I particularly liked the second one, in which you suggest identifying examples of the use of AI in analysis. In my Data Discovery workshops, I usually share examples of the use of NVivo in the analysis of various types of data, such as interviews, focus groups, and classroom observations, but I will also identify published articles that either discuss the use of the Autocode function specifically, or the use of AI-powered tools for thematic analysis more generally. Thank you for sharing such insightful questions and suggestions with me!

ChatGPT

You’re very welcome! I’m glad the suggestions resonated with you, especially the idea of incorporating specific examples and case studies. By sharing published articles and real-world applications, you’ll be able to provide your students with concrete examples of both the opportunities and challenges of using AI in data analysis. This approach will undoubtedly enhance their understanding and encourage them to engage more critically with these tools.

If you ever need further ideas or support in developing your workshops or resources, feel free to reach out. Best of luck with your continued work in supporting postgraduate students on their academic journeys!

Reference list: