At Moodle, we believe AI will be an important part of all educational technology in the future. We also recognize that AI can pose serious risks to education, especially when introduced without careful thought.
We are committed to a human-centered approach to AI that maximizes the safety, efficiency, and accessibility of learning for everyone, regardless of their location or financial situation.
Guided by this vision, and aligned with our responsibilities, ethics, and values, we have crafted robust AI principles that govern our approach to AI across everything Moodle does.
As users, we should always know when AI is being used. This includes making our best efforts to label AI-generated content clearly and visibly, so that humans remain in the loop and can catch any problems early in the process.
Institutions, organizations, and learners must always have the choice of which AI capabilities they want to enable and use. Our famous modular architecture gives you maximum control over the tools you integrate into your own Moodle-based environment.
We have long pledged to uphold the privacy and security of data across all our products and services (in addition to the natural advantages open source provides), and this commitment extends to any AI capabilities we create in Moodle. We will also carry out thorough due diligence on any third-party AI models we may recommend for use with Moodle.
We actively support (and are excited by!) the use of AI tools to create an inclusive and accessible environment that fosters positive learning experiences for all, without discrimination.
In line with our core values, we are strong advocates for the ethical use of AI globally. One example is our pledge to the EdSAFE Alliance Global, but we are also directly committed to supporting ethical choices for all of our partners, institutions, teachers, and learners.
We have already conducted training for Moodle team members and the broader Moodle community on best practices in using AI, and we will continue to do so.
We are committed to continuously tracking external regulatory and standards work, such as the EU AI Act, the OECD AI Principles, and the NIST AI Risk Management Framework, to help us adapt to the evolving AI regulatory and social landscape.