Responsible Use of AI
Introduction
Incorporating AI into higher education is more than a technological endeavour – it involves cultivating ethically aware professionals. By establishing an environment that explores AI’s potential with due consideration for ethics, privacy, and inclusivity, we can prepare students for success in an AI-augmented world. Responsible use of AI is not a one-size-fits-all solution, but a dynamic and evolving practice that challenges us to continually question, evaluate, and adapt what we do so that AI aligns with our values. By staying informed about ethical considerations, understanding the societal implications, and acknowledging the challenges, we can collectively navigate the transformative AI landscape while safeguarding our shared human values.
To support responsible AI integration in education, the Russell Group of universities has introduced a set of principles to ensure that students and staff develop AI literacy, enabling them to leverage technological advances effectively in their academic work. These principles build on the foundation of ethical AI integration, emphasising the responsible use of generative AI tools such as ChatGPT. Endorsed by the Vice-Chancellors of the 24 Russell Group universities, they highlight the importance of fostering AI literacy, training staff to guide students in using AI tools, incorporating ethical AI use into teaching, upholding academic integrity, and sharing best practice across institutions. This effort coincides with the UK Government’s ongoing assessment of generative AI in education.
Throughout the 2023-24 academic year, it is important to remember that responsible AI use is not solely a technological issue; it is about shaping a future where innovation aligns with our human values.
Purpose and Delivery of Education
As Artificial Intelligence (AI) continues to evolve, it is essential to consider the ethical and social implications that come with its adoption. When we talk about ethical and social considerations in AI, we are referring to the moral and societal impacts of using AI technologies. As AI systems make decisions and predictions, they can sometimes produce outcomes that raise concerns about bias, fairness, transparency, privacy, security, employability, and the shaping of future societies.
By addressing bias, promoting transparency, safeguarding privacy and security, adapting curricula for employability, and envisioning the future of education, we can harness AI’s potential to enhance learning experiences while upholding the values that underpin education as a cornerstone of society. Through collaborative efforts and thoughtful implementation, AI can become a valuable tool at Queen’s that empowers educators and students alike.
One of the main challenges in AI is bias. Bias can be introduced into AI systems through the data they are trained on: if the historical data used for training contains biases, an AI system can unintentionally perpetuate them in its decisions, leading to unfair treatment of certain groups of people. It is crucial to address bias actively, by using diverse and representative datasets and by developing algorithms that are designed to treat all individuals fairly, regardless of their background.
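To make this concrete, the minimal sketch below (in Python, with an entirely invented approvals dataset – none of the groups, numbers, or decisions come from any real system) shows how a model trained on skewed historical decisions simply reproduces the skew:

```python
# Illustrative only: a toy model trained on biased historical
# decisions carries that bias forward. All data is simulated.
import random

random.seed(0)

# Simulated historical records: (years_experience, group, approved).
# Historically, group "B" applicants were approved half as often as
# equally experienced group "A" applicants.
history = []
for _ in range(1000):
    exp = random.randint(0, 10)
    group = random.choice(["A", "B"])
    approval_rate = exp / 10
    if group == "B":
        approval_rate *= 0.5      # the historical bias in the data
    history.append((exp, group, random.random() < approval_rate))

# A naive "model": approve if most similar past applicants were approved.
# Because past outcomes were skewed against group B, so is the model.
def predict(exp, group):
    similar = [ok for e, g, ok in history if e == exp and g == group]
    return sum(similar) > len(similar) / 2

for g in ["A", "B"]:
    approvals = sum(predict(e, g) for e in range(11))
    print(f"group {g}: approved at {approvals} of 11 experience levels")
```

Because approvals for group B were suppressed in the historical record, the learned behaviour suppresses them too; curating diverse, representative training data is what breaks this loop.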
AI systems often work as “black boxes”: their inner workings can be complex and difficult to understand. This lack of transparency is problematic, especially when AI is used in critical applications such as healthcare or legal decisions. Ensuring transparency involves creating methods to interpret how an AI system arrives at its conclusions, so that users can understand the reasoning behind AI-generated decisions. Transparent AI empowers educators and students to trust and use AI tools more effectively.
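As a flavour of what such interpretation can look like, the sketch below (assuming scikit-learn is available; the student records and feature names are invented for illustration) inspects which inputs a simple model actually relies on:

```python
# A hedged sketch of one basic transparency technique: inspecting
# which input features a simple model depends on. Requires
# scikit-learn; the data here is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Invented features per student: [hours_studied, attendance_pct, prior_grade]
X = [
    [2, 60, 45], [8, 95, 80], [5, 70, 60], [1, 50, 40],
    [9, 90, 85], [4, 65, 55], [7, 85, 75], [3, 55, 50],
]
y = [0, 1, 1, 0, 1, 0, 1, 0]  # 1 = passed, 0 = failed

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# feature_importances_ reports how much each input contributed to the
# tree's decisions -- one simple window into an otherwise opaque model.
for name, weight in zip(
    ["hours_studied", "attendance_pct", "prior_grade"],
    model.feature_importances_,
):
    print(f"{name}: {weight:.2f}")
```

Even this crude view lets a user ask why a prediction came out the way it did, which is the first step towards warranted trust.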
AI systems rely on vast amounts of data to function effectively, and that data can contain sensitive information about individuals. Protecting people’s privacy and securing their data is essential. Striking a balance between using data to improve AI and safeguarding personal information requires robust data anonymisation techniques and secure storage practices.
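As one small example of what anonymisation can involve, the sketch below strips some obvious identifiers from text before it leaves your control. The patterns (including the eight-digit student ID) are hypothetical and far from exhaustive, so this should not be mistaken for a complete anonymisation pipeline:

```python
# A minimal sketch of redacting obvious identifiers from text before
# it is stored or sent to an external service. The patterns are
# illustrative only; real anonymisation needs far more than this.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{11}\b"), "[PHONE]"),               # 11-digit phone numbers
    (re.compile(r"\b\d{8}\b"), "[STUDENT_ID]"),           # hypothetical 8-digit student IDs
]

def redact(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

sample = "Query from jane.doe@example.com (student 40213344): ..."
print(redact(sample))
# -> "Query from [EMAIL] (student [STUDENT_ID]): ..."
```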
When using an AI tool, you need to make an ethical judgement about the data or information you put into the system. Information submitted to an AI tool may be retained by the provider and used to train the underlying model, meaning it can inform the outputs the tool generates for other users. Consider whether you have permission to submit the information in the first place. For example, if you submit a piece of art that is an individual’s personal intellectual property, you put that work at risk of being reused or adapted without the owner’s permission.
The integration of AI in education influences the skills students need for the job market. While AI can automate routine tasks, it also raises the value of critical thinking, problem-solving, and creativity. Higher education institutions need to equip students with skills that AI cannot easily replicate, and to invest in reskilling and upskilling programmes so that individuals can adapt to a rapidly changing job landscape.
The adoption of AI in education contributes to shaping the future society by influencing how students learn and engage with information. It is essential that we consider the long-term impacts of AI on education systems. By promoting discussions on the ethical use of AI, we can ensure that education evolves in a way that aligns with societal values and needs.