
UK universities, including prestigious institutions like Oxford, Cambridge, Bristol, and Durham, have developed guiding principles to address the growing use of generative artificial intelligence in education.
All 24 universities within the Russell Group have actively reviewed and updated their academic conduct policies and guidance with the help of AI and education experts. The Guardian wrote that by adhering to these principles, universities embrace the potential of AI “while simultaneously protecting academic rigour and integrity in higher education.”
The Russell Group shared the five principles:
- Universities will support students and staff to become AI-literate.
- Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience.
- Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access.
- Universities will ensure academic rigour and integrity is upheld.
- Universities will work collaboratively to share best practice as the technology and its application in education evolves.
The guidance suggests that instead of prohibiting software like ChatGPT that can generate text, students should learn how to use AI ethically and responsibly in their academic work, as well as be aware of the potential issues of plagiarism, bias, and inaccuracy in AI outputs.
Teachers will also need training to support students, many of whom already rely on ChatGPT for their assignments. New methods of evaluating students will likely emerge to prevent cheating.
“All staff who support student learning should be empowered to design teaching sessions, materials and assessments that incorporate the creative use of generative AI tools where appropriate,” the statement said.
According to Prof Andrew Brass, the head of the School of Health Sciences at the University of Manchester, educators should prepare students to effectively navigate generative AI.
Prof Brass emphasized the importance of collaborative efforts with students to co-create guidelines and ensure their active engagement with AI technology. He also stressed the need for transparent communication, stating that clear explanations are essential when implementing restrictions.
Can Regulations Affect AI Use in Universities?
The use of AI in universities presents ethical, legal, and social concerns that require appropriate regulations. Ensuring data privacy and security, preventing bias and discrimination, and promoting responsible AI practices among students and faculty are all crucial.
For example, the European Union’s General Data Protection Regulation (GDPR) has implications for using AI in universities. The GDPR requires that personal data be processed transparently and securely, which can be challenging when using AI systems.
However, critics argue that proposed EU AI regulations undermine Europe’s competitiveness and fail to address potential AI challenges. They urge the EU to reconsider its approach and embrace AI for innovation.