Source: The Age
March 9, 2026 — 3:30pm
A year 10 girl spends five hours in the library on Saturday researching her history essay. A classmate scrolls all weekend, then spends 10 minutes Sunday night prompting a popular essay-writing AI bot. The first student gets a B. The second student’s AI-generated paper is superior. She gets an A.
This isn’t a dystopian warning. It’s a scenario already playing out across thousands of Australian schools. Teachers might be suspicious, but without proof they’re required to grant the benefit of the doubt.
Last week, at the SMH Schools Summit, Paul Martin, the chief executive of NESA, was asked about AI. As the leader of the organisation that sets educational standards for NSW schools, he seemed remarkably relaxed.
“We’re only about three or four years into the AI revolution. I think it’s probably reasonable for us to be cautious, wait and watch and take advice rather than leaping into some sort of change of process too early,” he said.
With due respect, Mr Martin, we’re already late.
Since ChatGPT arrived in 2022, students have been merrily loading assessments into AI models, adding a few “authentic” imperfections, and hitting submit. Cursory efforts to block AI websites on school wi-fi are useless against a generation of digital natives with smartphones and hotspots.
If an assessment can feasibly be gamed by AI, you can assume it's being gamed. This isn't because kids are nefarious; they're just following the incentives. Our system is built around grades, and AI produces better grades with less effort.
The institutional response has been a mix of denial and watching briefs. Of course, ultimately, using a chatbot to do your homework is like using a forklift to do your bench press. But it’s beyond foolish to expect teenagers to accept worse outcomes indefinitely based on a long-term moral principle.
Time will tell if AI will be, as Google CEO Sundar Pichai has claimed, more “profound than electricity or fire.” But at this point it’s conservative to predict it will fundamentally alter society. In so doing, it’s going to throw up myriad wicked policy problems. But here’s the key point: education is not one of them.
Solving the AI crisis in our schools is as simple as pulling our heads out of the sand. The principle we can introduce right now is this: welcome AI into all aspects of learning; restrict it from all forms of assessment.
Currently, we’re doing the opposite. By treating AI as a forbidden cheat tool, we guarantee students use it like one – badly, secretly, and passively.
Students are prompting AI with “do my three-minute presentation on the Middle Ages” and then standing up and reading the output as their oral presentation assessment. This is a guaranteed path to learned helplessness.
What if we did the opposite? What if we accept that AI is part of the world and a potentially powerful tool for learning? Instead of teaching kids to use it like a sneaky drug, bring it into the open. Let them use it to interrogate topics, quiz themselves, and explore complex ideas. It can be the private tutor that closes the gap between wealthy students and those who can’t afford extra help.
But the flip side is non-negotiable: we must AI-proof every point of assessment. Fortunately this requires no innovation – just a return to methods as old as education itself.
In-class essays with a biro. Oral exams with unpredictable human prompts. And if a screen is really required, airlock the device from the internet. We must accept that take-home assessments and unsupervised laptop work now have a value of zero.
What we can’t do is spend years building an “AI curriculum” by committee because it will be obsolete before it’s published. The only rational response to the moment is to trust classroom experts: teachers. Empower them to use their judgment and experiment with AI in the open. Mistakes will be made, but they are nothing compared to the cost of “business as usual” denialism.
At the McKell Institute, we develop policy ideas that usually require some political bravery. This one requires next to none. Who would be against making education AI cheat-proof?
Much of the public’s mistrust of government stems from how slow it appears to react to change. This is a rare case where there is no noisy special interest campaign blocking progress. We should use “COVID speed” here. It doesn’t require a roundtable. One meeting of education ministers should suffice. Agree that AI is for learning. Agree that assessment must be AI-proof. Then move.
Every semester wasted is another cohort of students learning to cheat or learning to lose.
Ed Cavanough is the chief executive of the McKell Institute.