Queen’s Guidance on the Responsible Use of AI in Research

Queen’s is committed to supporting the responsible, ethical, and innovative use of Artificial Intelligence (AI) tools in research. AI technologies hold transformative potential to enhance research productivity and address complex challenges. Equally, we must ensure their use aligns with our principles of research integrity, ethics, and accountability. This guidance outlines essential principles and practical advice to help researchers integrate AI responsibly into their research activities.

This guidance applies to all staff and postgraduate researchers involved in research at Queen’s, covering AI use across the research lifecycle.

For broader guidance on generative AI, including teaching and general staff guidance, please see the Queen’s Staff Guidance on Generative AI, and explore the resources available at the Queen’s AI Hub and the Library. Please also be aware that some Schools may issue additional guidance to reflect discipline-specific contexts/needs.

For detailed ethical guidelines, researchers should consult the Queen’s webpages on Research Integrity, ensure compliance with the University’s Data Protection policies, and consider relevant Export Control requirements where the use of AI may result in the export of data or technology outside the UK.

What is AI in Research?

AI broadly refers to computer systems capable of performing tasks that normally require human intelligence. Within research contexts, AI is increasingly used to automate tasks, process and interpret data, and generate new insights.

Two primary types of AI are relevant for researchers:

Predictive AI (Machine Learning)

Uses algorithms trained on historical data to forecast trends, classify data, and identify patterns. This includes comparative models (e.g. ranking algorithms, similarity networks) which support evaluation, prioritisation, and selection tasks in both STEM and humanities research.

Generative AI

Creates new content—e.g. text, images, audio, or code—by learning from existing data patterns. Generative AI tools include those built on large language models (LLMs), such as Microsoft Copilot, Gemini, and ChatGPT.

Scope of this Guidance

This guidance specifically addresses the responsible use of commonly available AI tools in research activities – particularly general-purpose generative AI platforms such as large language models (LLMs). It does not cover the process of developing new AI technologies or systems. If you are developing or creating AI as a primary research goal, you should refer directly to the Queen’s Code of Conduct and Integrity in Research.

This guidance does not seek to address well-established machine learning techniques that are routinely used for data analysis within disciplines. Researchers using such techniques should continue to follow standard ethics, data protection, and research integrity protocols and ensure compliance with export control requirements pertinent to their research area.

This guidance applies to:

  • All Queen’s researchers (staff, postgraduate research students, visiting researchers, and contractors) conducting research under the auspices of the University.
  • All types of research activities across the research lifecycle including planning, funding proposal development, data collection and analysis, publication and dissemination, peer review and evaluation, and research management.

This guidance may not apply to:

  • Those developing or creating AI as a primary research goal.
  • General administrative use of AI outside of research.
  • Undergraduate or taught postgraduate student assessments or coursework (for this, see the separate Queen’s Guidance on Generative AI in Assessment).

Navigating this Guidance

Guidance is structured across key areas relevant to the entire research lifecycle. Each section below provides recommendations and resources to help you integrate AI responsibly and effectively in your research:

Queen’s encourages the responsible and ethical integration of AI into research activities as a support tool, respecting disciplinary norms, authorship integrity, and institutional and sectoral expectations. Our approach is guided by our institutional values, the relevant sector-wide regulations and best practices, and the following five core principles:

1. Innovation and Exploration

Researchers are encouraged to explore the potential of AI tools to aid in the process of undertaking research and disseminating research findings. Responsible experimentation is welcomed, provided it is underpinned by critical thinking and clear accountability.

2. Integrity and Transparency

All use of AI in research must be clearly documented and acknowledged, especially where it constitutes material use in the research process/output. Researchers are expected to validate AI-generated content and uphold the standards of academic honesty, avoiding misrepresentation or plagiarism.

3. Accountability and Reproducibility

Researchers remain personally responsible for the accuracy and rigour of all research outputs, including those assisted by AI. While reproducibility may be limited by the nature of some AI tools, researchers should take reasonable steps (such as recording prompts and tool versions) to support transparent and traceable research practices.

4. Inclusivity and Sustainability

AI tools must be used in ways that minimise harm and promote equity. Researchers should actively consider the risks of bias or exclusion, and seek to reduce the environmental impact of AI usage in line with Queen’s Sustainability Policies.

5. Alignment with External Standards

Researchers should follow evolving guidance from UKRI, UKCORI, the Russell Group, Jisc, and other relevant bodies. Engagement with sector-wide discussions is encouraged to support good practice and continuous improvement.

When using AI tools in research, you must adhere to ethical, legal, and compliance responsibilities, ensuring that institutional and individual obligations for safeguarding participants, maintaining trust, and upholding research integrity are met.

Ethical Approval and Governance

Researchers using AI in projects involving human participants, personal data, or sensitive information must explicitly outline AI usage in their ethics applications. Ethics applications must include clear details about how AI will be used in data collection, analysis, or management, and how participants’ data privacy will be protected. AI tools should not be used to complete or generate ethics documentation, as researchers must personally demonstrate understanding and oversight of the ethical implications.

Researchers should consult their Faculty/School’s Research Ethics Committee (REC) if uncertain about ethical considerations associated with their AI use. For further guidance, refer to Queen’s Code of Conduct on Research Integrity and the Policy on the Ethical Approval of Research.

Data Protection, Privacy, and Confidentiality

Using AI tools raises important considerations regarding data protection and privacy. Researchers must comply with UK GDPR and Queen’s Data Protection Policy. Personal, sensitive, or confidential data should not be uploaded to external or public AI tools unless specifically approved. Data protection risks must be assessed via a Data Protection Impact Assessment (DPIA) when processing personal or sensitive data using AI tools.

When handling data, careful anonymisation or pseudonymisation is required to prevent accidental identification of individuals, especially when integrating datasets with AI tools. Researchers must carefully review the privacy policies and terms of use for any external AI tools, ensuring these align with GDPR requirements and University guidelines. For guidance on data management and sharing, consult the Queen’s Research Data Management Policy.
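To make the anonymisation step concrete, the following is a minimal sketch (not an institutionally mandated workflow) of pseudonymising direct identifiers with a keyed hash before records are shared with an external tool; the file names, column names, and secret key are hypothetical.

```python
# Minimal sketch: replace direct identifiers with stable, non-reversible
# tokens before data leaves the institution. File/column names and the
# secret key are illustrative only; follow your approved DPIA and DMP.
import csv
import hashlib
import hmac

SECRET_KEY = b"keep-this-key-out-of-code-in-practice"  # hypothetical placeholder

def pseudonymise(value: str) -> str:
    """Map an identifier to a short keyed-hash token (same input, same token)."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:12]

with open("participants.csv", newline="", encoding="utf-8") as src, \
     open("participants_pseudonymised.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # Tokenise direct identifiers; free-text fields still need human
        # review, since names can appear inside the text itself.
        row["name"] = pseudonymise(row["name"])
        row["email"] = pseudonymise(row["email"])
        writer.writerow(row)
```

A keyed hash is used rather than a plain hash so tokens cannot be reversed by guessing inputs; note that pseudonymised data can still be personal data under UK GDPR if re-identification remains possible.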

Intellectual Property (IP) and Copyright

Researchers must carefully consider IP implications when using AI. Depending on the specific AI tool and its terms & conditions, uploading research materials or data into a tool can inadvertently transfer rights to the AI provider or make the data publicly accessible. Researchers must ensure they have explicit permissions before inputting third-party copyrighted content into AI systems.

For patentable research or commercially sensitive content, researchers must avoid entering this into external or public AI platforms without a clear understanding of the platform’s terms of use, data handling practices, and the potential risks involved to IP and copyright. If uncertain, seek advice from R&E’s Commercial Development and IP Team to ensure compliance and protect IP.

Transparency, Accountability, and Authorship

Researchers are responsible for transparently acknowledging AI usage in their research outputs, clearly explaining AI’s role and contributions. While reproducibility may be limited by the nature of some tools, researchers should keep clear records of how outputs were tested, validated, and accepted (including prompts or tool versions where appropriate) to support transparency and traceability.

AI systems cannot be credited as authors because they cannot take responsibility or be held accountable. Researchers must comply with Committee on Publication Ethics (COPE) guidance and individual publisher guidelines. Always clearly disclose any substantial AI use in manuscripts, presentations, and reports.

Compliance with Funding Body Requirements

Researchers must comply with external funders’ AI-related guidelines (e.g., UKRI guidelines). AI use in grant applications should be acknowledged, and AI-generated content must be carefully validated to ensure accuracy and integrity. Researchers should exercise caution when using AI tools during the development of funding applications. Particular care should be taken when handling confidential, sensitive, or unpublished material. If in doubt, researchers are advised to seek guidance from R&E’s Commercial Development and IP Team to ensure compliance with relevant confidentiality, intellectual property, or data protection requirements.

Sustainability and Environmental Impact

AI tools have substantial computational demands, resulting in significant carbon emissions. Researchers are encouraged to be mindful of sustainability and resource efficiency, choosing AI tools and methods that align with the University’s commitment to environmental responsibility outlined in our Sustainability Policies.

Queen’s supports a flexible, principles-led approach to selecting AI tools for research. Given the fast-evolving nature of AI technologies, this guidance does not maintain a list of “approved” tools. Researchers are encouraged to use tools that align with the University’s core values around responsibility, data protection, security, sustainability, and ethical research conduct.

Where researchers are unsure, they should:

  • Prioritise tools that are transparent about how they handle data, including whether user inputs are retained or used for training models – check the terms & conditions of use before trialling tools.
  • Avoid tools that pose risks to personal data, confidential material, or intellectual property, e.g. where terms of use are unclear or where outputs may be publicly stored or redistributed, or where data may be stored or accessed outside the UK in sensitive subject areas (in accordance with Export Control).
  • Use institutional platforms/licensed tools (e.g. Microsoft Copilot) wherever possible, particularly when working with sensitive or unpublished research data.

Data Security and Privacy Considerations

Remember that researchers remain individually responsible for ensuring that:

  • Any data input into AI tools complies with data protection legislation (e.g., UK GDPR).
  • Tools are not used to process sensitive or identifiable research data unless adequate safeguards are confirmed.
  • Research integrity and confidentiality are upheld throughout the lifecycle of tool use, from hypothesis generation to publication.
Effective Use

To maximise the benefits of AI, researchers may wish to develop effective prompting and query skills. The structure of instructions and any accompanying data will shape the output, though outputs may still vary over time as AI models evolve. While full reproducibility may not always be possible, maintaining clear records of prompts, tools used, and key parameters can support transparency. Crucially, effective use also depends on robust validation: AI-generated outputs must be critically assessed and thoroughly checked for accuracy, relevance, and bias before being used in research. Researchers should apply a consistent, documented approach to verifying and contextualising AI outputs.

Always validate AI-generated outputs thoroughly. AI models can produce inaccurate or misleading information (“hallucinations”), so rigorous fact-checking and critical evaluation are essential.
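One lightweight way to keep the records of prompts, tools, and validation described above is to log each interaction as it happens. The sketch below is illustrative only (the file name and field names are not prescribed by this guidance) and uses just the Python standard library:

```python
# Minimal sketch: append one JSON record per AI interaction to a log file,
# so prompts, tool versions, and validation notes can be retained and, where
# appropriate, shared as supplementary material. Field names are illustrative.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_use_log.jsonl")

def log_ai_use(tool: str, version: str, prompt: str, validation_note: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "version": version,
        "prompt": prompt,
        "validation": validation_note,  # how the output was checked and accepted
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

log_ai_use(
    tool="Microsoft Copilot",
    version="2025-09",  # record whatever version information the tool exposes
    prompt="Summarise the attached literature notes on soil microbiomes.",
    validation_note="Checked all cited papers exist; corrected two summaries.",
)
```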

Mitigating Bias

AI systems can unintentionally reproduce or amplify biases present in training data. Researchers must actively assess outputs for bias, particularly when AI-generated outcomes influence decision-making or interpretations. It is important to critically reflect on potential societal biases or inequalities inherent in AI outputs and take proactive steps to mitigate these issues.

Researchers should ensure inclusive practices by verifying that AI tools do not disadvantage particular groups or individuals. When disseminating results, clearly discuss any limitations or biases discovered through AI analysis.

Sustainability

AI tools typically require significant computing resources, which can have high energy consumption. Researchers are encouraged to adopt sustainable practices, choosing tools and methods that minimise environmental footprint in line with Queen’s commitment to sustainability. This includes evaluating the necessity of intensive computational use and opting for more sustainable alternatives where possible.

Avoiding Over-reliance

While AI tools offer substantial benefits, researchers must avoid becoming overly reliant on them, particularly in areas requiring critical thinking, creativity, and nuanced human judgement. AI should complement, rather than replace, essential research skills and processes. Researchers should remain actively engaged with their research and critically evaluate all AI-supported activities.

Peer Review and Evaluation

AI tools should not be used for peer review, research evaluation (such as REF submissions), or reviewing funding applications due to confidentiality, IP concerns, Export Control requirements and accuracy risks. Researchers involved in these processes should provide critical, independent assessments based on their own expertise and judgement, adhering strictly to relevant guidelines provided by funding bodies and publishers. This expectation is aligned with Queen’s Responsible Research Assessment policy, which sets out the University’s principles for responsible and fair evaluation of research.

This section outlines where you can currently find training and support, and how to request additional guidance.

Current Training and Learning Opportunities

A growing suite of resources is available via the Queen’s AI Hub, including introductory guidance on the responsible use of AI in research.

R&E currently offers an online workshop, “AI Tools for Research Writers”, which introduces researchers to trusted AI platforms and explains how they might be used to support literature reviews, writing, and summarisation tasks.

In addition, researchers are encouraged to explore the following external learning opportunities:

  • Jisc: Artificial Intelligence and Ethics – A free online course covering the key ethical implications of AI in education and research. View course

Requesting Support or Additional Training

If you have a specific training need or would like to request a session for your domain or School, please get in touch with us at ResearchFutures@qub.ac.uk. We are especially keen to hear from researchers interested in:

  • Exploring or piloting new AI platforms
  • Responsible AI use in interdisciplinary research
  • Sharing case studies or experiences using AI in research to help build a community of practice and support peer learning across disciplines
Frequently Asked Questions

Do I need to acknowledge the use of AI in my publications and funding proposals?

Yes, and you must follow specific publisher or funder guidelines. In general, any substantial use of AI tools in your research—including literature reviews, data analysis, writing, or proofreading—must be transparently acknowledged in publications and funding proposals. Include details of the AI tool used and how it was used, and verify outputs carefully to ensure accuracy and integrity.

Can I use AI tools to help develop grant applications?

You may use AI to support the development of grant applications, provided its use is transparently acknowledged and outputs are carefully validated. Refer explicitly to guidelines provided by your funding body—for example, UKRI guidelines—and ensure all AI-generated content is thoroughly checked and critically evaluated before submission.

Can an AI tool be credited as an author or co-author?

No. AI tools cannot be credited as authors or co-authors. Authorship implies accountability and responsibility, which are attributes only human researchers possess. Clearly acknowledge any AI use in your methods or acknowledgements section, and refer to guidelines from your target publisher.

Can I enter personal, sensitive, or confidential data into AI tools?

No. Personal, sensitive, or confidential data should generally not be entered into external or publicly available AI tools, as doing so may breach UK GDPR and Queen’s Data Protection Policy. Personal data can be either direct (e.g. name, email) or indirect (e.g. IP addresses, device identifiers, precise location data, or combinations of demographic details that could identify individuals). Researchers should take care to recognise when data may be identifiable, even if it does not appear obviously personal. If processing personal data with an AI tool is essential, a Data Protection Impact Assessment (DPIA) must be completed and explicit approval obtained from your School Ethics Committee and/or Queen’s IT Services.
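As a rough illustration of spotting identifiable data before text is sent to an external tool, the sketch below flags some obvious direct identifiers. It is hypothetical and deliberately incomplete, so it supplements rather than replaces careful human review and, where needed, a DPIA.

```python
# Minimal sketch: flag obvious identifiers (emails, UK-style phone numbers,
# IPv4 addresses) in text before it is sent to an external AI tool.
# Regexes are illustrative and deliberately broad; they do NOT guarantee
# the text is free of personal data.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "uk_phone": re.compile(r"\b(?:\+44|0)(?:\s?\d){9,10}\b"),
}

def screen_for_identifiers(text: str) -> dict[str, list[str]]:
    """Return matches per pattern; an empty dict means nothing obvious was found."""
    hits = {name: pat.findall(text) for name, pat in PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

sample = "Contact Dr X at x.researcher@example.ac.uk or 028 9024 5133."
print(screen_for_identifiers(sample))  # flags the email address and phone number
```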

Can I use AI to help produce lay summaries or policy briefs?

Yes, you can use AI to help structure or simplify complex academic content for lay summaries or policy briefs. However, researchers remain fully responsible for the accuracy and clarity of the final output. Where AI tools are used to adapt tone or translate concepts, outputs should be reviewed carefully to avoid loss of nuance or inadvertent misrepresentation.

Can I use AI to generate synthetic data?

AI tools can be used to generate synthetic data in situations where real data cannot be shared (e.g. due to privacy constraints), but this must be carefully documented. Researchers must assess the quality, bias, and limitations of the synthetic data, and declare its use transparently in all outputs. Ethical approval may be required, depending on how the data is generated or used.
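For illustration, the sketch below uses a simple statistical baseline (not a generative AI model) to show the kind of provenance and limitation record this answer calls for; the dataset, seed, and field names are fabricated placeholders.

```python
# Minimal sketch: generate synthetic records by sampling from per-column
# distributions estimated from real data. This preserves marginals only,
# not correlations between columns; document that limitation explicitly.
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed so generation is reproducible

# Placeholder "real" data; in practice this would be the protected dataset.
real_ages = np.array([34, 45, 29, 52, 41, 38, 60, 27])
real_scores = np.array([0.61, 0.72, 0.55, 0.80, 0.68, 0.64, 0.88, 0.51])

n_synthetic = 100
synthetic_ages = rng.normal(real_ages.mean(), real_ages.std(ddof=1), n_synthetic)
synthetic_scores = rng.normal(real_scores.mean(), real_scores.std(ddof=1), n_synthetic)

# Record provenance alongside the data, as the guidance asks.
provenance = {
    "method": "independent normal sampling per column (marginals only)",
    "source": "study dataset vX (not shareable)",
    "seed": 42,
    "limitations": "inter-column correlations not preserved; tails approximate",
}
print(provenance)
```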

Do I need to describe AI use in my Data Management Plan (DMP)?

Yes. If AI tools are used to generate, process, or analyse data, their use should be described in your Data Management Plan. This includes naming the tool, outlining the purpose of its use, any data handling involved, and any potential data protection or reproducibility considerations.

Most funders do not yet have dedicated sections for AI, so relevant details should be included where most appropriate (typically under data description, tools, ethics, or legal considerations, depending on the DMP template).

Can I use AI to help prepare an ethics application?

You may use AI to help with the language or formatting of research documentation supporting an ethics application, but you must not rely on AI to generate content that reflects ethical reasoning or consent processes. Researchers must retain full understanding and ownership of the ethical issues and ensure documents are appropriate, accurate, and personalised for their study. Substantial use of AI in preparing ethics documentation should be declared.

What should I do if I want to use a new or external AI tool?

You must first contact Queen’s IT Services to request evaluation and approval of any new or external AI tool. IT Services will review the tool for data protection compliance, privacy standards, licensing, and security. Submit your request via the IT Service Desk portal or by email to itservicedesk@qub.ac.uk.

Can I use AI tools for peer review or research evaluation?

No. AI should not be used for peer review, research evaluations (such as REF submissions), or reviewing funding applications, as these tasks require critical human judgement and confidentiality. Using AI risks breaching confidentiality and intellectual property rights, and introduces the possibility of biased or inaccurate assessments. This is also consistent with our Responsible Research Assessment policy, which sets out expectations for fair, expert-led, and transparent evaluation processes.

Can AI tools help non-native English speakers with academic writing?

Yes, AI tools can support non-native English speakers by helping to improve clarity, grammar, and structure in written texts. However, the researcher remains responsible for verifying accuracy, nuance, and tone. Any substantial AI assistance should be transparently acknowledged when submitting publications or theses. For further guidance, refer to Guidelines on Authorship and Publication. Also be aware of overarching considerations such as data protection, IP, and copyright – see the relevant sections of this guidance.

Also see the FAQ below for advice on using AI tools for translation.

Can I use AI tools for translation or transcription?

Yes, but caution is essential. When using AI for translating research materials or auto-transcribing interviews, researchers must ensure the tool is GDPR-compliant and secure. For participant data, obtain explicit consent for AI-based processing, and avoid tools that retain or train on uploaded content. Always review and correct AI outputs for accuracy and document the process in your Data Management Plan. See also: Queen’s Code of Conduct and Integrity in Research.

How should I keep records of my AI use?

Keep a clear record of prompts, outputs, and revisions when using AI tools, to support transparency and reproducibility. Where feasible, use built-in session histories or exportable chat transcripts from the AI tool itself, as these preserve sequencing and nuance; alternatively, clearly labelled Excel spreadsheets or Word documents may be used. Retain these records and include them as supplementary material where appropriate.

Should I consider the environmental impact of my AI use?

Yes. AI tools often require significant computing resources and energy consumption. Researchers are encouraged to minimise environmental impact by choosing more efficient AI tools, reducing unnecessary computational usage, and aligning practices with the University’s policies on Sustainability.

Where can I find training and support?

Training and resources, including workshops and guidance documents, are available via the Queen’s AI Hub and the Library. If you have specific training needs or would like tailored support, please contact the Research Futures team directly at ResearchFutures@qub.ac.uk.

Supporting Resources

We will develop a set of optional, non-prescriptive resources to help researchers apply this guidance in ways suited to their individual context. These tools are intended to support good practice, and this section will be updated as resources become available.

Version Control

  • Approved by: Research & Enterprise Directorate
  • Version: 2.0
  • Approval Date: 15/09/2025
  • Next Review Date: March 2026
  • Responsible Officer: Programme Manager (Research Futures)
  • Acknowledgement: This guidance was developed with editorial support from ChatGPT (OpenAI GPT-4.5, 2025) for tasks such as formatting, proofreading, and suggesting alternative phrasing. Human authorship shaped all stages of development, including drafting, sector benchmarking, and final revisions.

Feedback

We welcome feedback from all researchers to ensure this guidance remains relevant and supportive. To provide feedback, seek clarification, or to discuss further please contact ResearchFutures@qub.ac.uk.