If you’ve been to an EdTech conference lately, you already know: you can’t grab a lukewarm coffee without bumping into a conversation about AI. These conversations are everywhere, and for good reason. But at a recent higher education conference, one exchange cut through the noise in a way I haven’t been able to stop thinking about.
An attendee rushing off to a session asked me a question I wasn’t expecting: was there any way to run an LMS outside the web browser entirely?
She wasn’t asking because she wanted to go back to paper. She was asking because she’s afraid: afraid that the web browser has become a door that AI agents can walk right through, undetected.
In the swarm of the conference, I never got a chance to fully respond. But if we had more time, here’s what I would have told her:
The idea that agents are invisible in your LMS isn’t actually true. Not with Moodle LMS, anyway.
What’s changed — and what’s possible
Generative AI isn’t new. Most people have already used tools that synthesise content, draft text, or explain concepts. What’s changed is that AI can now act.
AI agents can move through systems, complete tasks, and follow multi-step instructions. In a learning environment, that means they can do many of the things we’ve relied on as signals of learner participation — submitting work, completing activities, progressing through a course. And for a long time, the assumption was that this agentic activity was simply undetectable.
That assumption is wrong.
Joseph Thibault, founder of Cursive, a Moodle Certified Integration, has been building tools in the Moodle ecosystem for years, starting with writing analytics and academic integrity, and more recently turning his attention to the question of agents. His finding is direct: agentic activity leaves traces.
The key is looking beyond what standard LMS logs capture. Agents and humans interact with a platform very differently. The output might look the same. The behaviour underneath usually doesn’t — and that difference is only visible if your platform is built to look for it.
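To make that concrete, here is a deliberately simplified sketch, in TypeScript, of one behavioural signal of this kind: the rhythm of typing. Everything in it (the event shape, the thresholds, the heuristic itself) is invented for illustration; it is not Cursive’s implementation.

```typescript
// Toy heuristic for illustration only, not Cursive's method.
// Humans type with irregular inter-key intervals; scripted agents tend to
// paste whole answers or emit keystrokes at machine-regular speed.

type KeyEvent = { key: string; time: number }; // time in milliseconds

function interKeyIntervals(events: KeyEvent[]): number[] {
  const gaps: number[] = [];
  for (let i = 1; i < events.length; i++) {
    gaps.push(events[i].time - events[i - 1].time);
  }
  return gaps;
}

// Human typing is "bursty", so the spread of its gaps relative to their
// mean is usually much larger than that of uniform, scripted input.
function looksScripted(events: KeyEvent[]): boolean {
  const gaps = interKeyIntervals(events);
  if (gaps.length < 10) return false; // too little data to judge
  const mean = gaps.reduce((sum, g) => sum + g, 0) / gaps.length;
  const variance =
    gaps.reduce((sum, g) => sum + (g - mean) ** 2, 0) / gaps.length;
  const coefficientOfVariation = Math.sqrt(variance) / mean;
  return mean < 15 || coefficientOfVariation < 0.2; // invented thresholds
}
```

A real system would weigh many signals like this together; no single heuristic would be reliable on its own.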
What Moodle’s open architecture makes possible
This is where Moodle’s design philosophy becomes practically important.
Moodle LMS is built to be extended. Our open framework for AI in Moodle solutions gives institutions full control: choice of provider, educator-level permissions, data sovereignty, and the freedom to innovate without vendor lock-in. This openness, made possible by our AI Subsystem, is what allows the community to respond to emerging challenges quickly, in ways that fit each institution’s own context.
It’s a philosophy we’ve written about before — and one that shapes every decision we make about AI in Moodle platforms. Agent detection is just the latest example of what that openness makes possible.
Detecting AI agents in Moodle LMS
Cursive’s Agent Detection Lite plugin, available now in the Moodle plugins directory, is a direct example of this responsiveness in action. Built to Moodle’s standards and integrated with the Privacy API, it keeps all data local to your Moodle site. It works by expanding the session data your platform captures across five distinct detection layers: writing behaviour, site interaction patterns, browser fingerprinting, injection monitoring, and server-side request analysis. Together these capture thousands of signals per session, surfacing not just what was done, but how.
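As a rough mental model, and not the plugin’s actual code, here is what collecting signals across several of those layers on the client and batching them back to the server could look like. The endpoint path, event names, and flush interval are all assumptions made for illustration.

```typescript
// Illustrative sketch only: the endpoint, event names, and batching below
// are invented, not Agent Detection Lite's API.

type Signal = { layer: string; name: string; time: number };

const buffer: Signal[] = [];

function record(layer: string, name: string): void {
  buffer.push({ layer, name, time: Date.now() });
}

// One cheap listener per detection layer.
document.addEventListener("keydown", () => record("writing", "keydown"));
document.addEventListener("mousemove", () => record("interaction", "mousemove"));
record("fingerprint", `ua:${navigator.userAgent}`);

// Injection monitoring: watch for scripts added to the page after load.
new MutationObserver((mutations) => {
  for (const mutation of mutations) {
    mutation.addedNodes.forEach((node) => {
      if (node.nodeName === "SCRIPT") record("injection", "script-added");
    });
  }
}).observe(document.documentElement, { childList: true, subtree: true });

// Periodically flush the buffer to the LMS, where the fifth layer
// (server-side request analysis) can correlate it with HTTP-level patterns.
setInterval(() => {
  if (buffer.length === 0) return;
  navigator.sendBeacon(
    "/local/hypothetical_collector.php", // invented endpoint
    JSON.stringify(buffer.splice(0))
  );
}, 5000);
```

The shape is the point: inexpensive listeners on the client, with the heavier correlation work happening on the server, which is consistent with the light footprint described below.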
The system is designed to be lightweight. Despite the volume of signals it collects, Cursive reports that the overall server load is less than that of a typical quiz — so detection doesn’t come at the cost of performance or learner experience.
[Video: Agent Detection Lite in action]
Administrators can use it to identify where agent activity may be concentrated across their Moodle site — and make more informed decisions about assessment design, proctoring, and policy as a result.
The bigger question underneath
Detection matters. But it’s not the end of the conversation.
I asked Marie what agent detection in an LMS really tells us, and her reframing stuck with me: if an agent can complete a task, the task itself needs a closer look.
That points to something important. What’s often missing isn’t correctness; it’s evidence of the learning process: how someone arrived at an answer, how their thinking developed, where they revised or struggled. It’s a point Joe makes as well.
Moodle platforms are well-placed to support that work. Live, synchronous learning. Collaborative and portfolio-based activities. Writing tools that capture the process behind a submission, not just the final output. These are the approaches that make authentic learning visible — and make it much harder to replicate without actually doing it.
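As one small illustration of that last idea, a writing tool can make process visible simply by keeping timestamped snapshots of a draft as it evolves. The sketch below assumes a hypothetical textarea and in-memory storage; it illustrates the concept, not any particular product.

```typescript
// Illustration only: capture how a draft develops, not just its final state.
// The element id and 30-second interval are invented for this example.

type Snapshot = { time: number; text: string };
const history: Snapshot[] = [];

const draft = document.querySelector<HTMLTextAreaElement>("#assignment-draft");
if (draft) {
  setInterval(() => {
    const text = draft.value;
    const last = history[history.length - 1];
    if (!last || last.text !== text) {
      history.push({ time: Date.now(), text }); // only record real changes
    }
  }, 30_000);
}

// A reviewer can later replay the submission's development; even something
// as simple as history.map(s => s.text.length) shows how the draft grew.
```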
Moving forward
Moments like this tend to create pressure to act quickly, and sometimes to lock things down. I understood that impulse when my worried conference attendee asked about running an LMS outside the browser entirely. But rather than closing things off, the answer to a fast-moving challenge is a platform that can move with you.
For most teams, the next step isn’t to overhaul everything at once. It’s to start building a clearer picture of what’s already happening — experimenting with tools like agent detection to understand patterns, reviewing key assessments and asking what they’re really measuring, and having more open conversations with instructors and learners about where, when, and how AI is being used.
You’re never locked into a single approach with Moodle solutions. You can test new tools, adapt your practices, and respond to what you’re seeing — without waiting for one fixed solution to arrive. And in a moment where so much feels uncertain, that ability to learn, adjust, and move forward deliberately is what makes progress possible.