The emerging discourse on artificial intelligence in schooling is now anchored by a growing evidence base. UNESCO’s Guidance for Generative AI in Education and Research (2023) frames AI adoption as a “human‑centred” endeavour: it warns that unchecked deployment could erode equity and agency, yet it also sketches short‑, medium‑, and long‑term policy actions for ministries and institutions (UNESCO). The report’s release marked a pivotal shift: within six months, more than forty countries had cited it in official road‑maps or ministerial communiqués, a signal that AI in education is no longer a fringe topic but a mainstream policy priority (OECD).
Despite high‑level interest, hard data show that institutions are still scrambling to translate aspirations into rules. A 2025 UNESCO survey of 450 universities and schools found that fewer than one in ten had an institutional policy on generative AI, leaving most educators without formal guidance (cielo24). Parallel student surveys echo the vacuum: the UK HEPI/Kortext study reports that 92 percent of undergraduates now use AI tools, up from 66 percent in 2024, with 64 percent inserting AI‑generated text directly into assessed work (HEPI). Teacher sentiment is catching up but remains uneven; an EdWeek Research Center poll of 990 U.S. educators in February 2025 found that a majority had not adopted AI, with 33 percent citing the absence of district policy as the chief barrier (Education Week).
On the pro‑innovation side, the U.S. Department of Education’s report Artificial Intelligence and the Future of Teaching and Learning (2023) argues that AI can personalise instruction, free teachers’ time, and expand assistive technologies, provided transparency, data privacy, and human oversight are built in from the start (U.S. Department of Education). The report explicitly links successful adoption to economic competitiveness, a logic also advanced in the World Economic Forum’s Future of Jobs Report 2025, which forecasts 170 million new jobs this decade but notes that 40 percent of employers expect head‑count reductions where AI can automate tasks (World Economic Forum).
Sceptics, meanwhile, have seen their concerns codified in law. The European Union’s AI Act, in force since August 2024, imposes human‑oversight, transparency, and risk‑assessment duties on “high‑risk” educational AI systems; its first obligations, on AI literacy and prohibited practices, became binding in February 2025 (Digital Strategy; European Parliament). OECD working papers add that without deliberate safeguards, AI can widen social divides by embedding bias and amplifying digital exclusion (OECD).
Economic and political incentives nonetheless push decision‑makers toward rapid action. The 2024 EDUCAUSE AI Landscape Study, surveying more than 900 higher‑education technology leaders, finds that policy gaps, not enthusiasm, are now the principal obstacle; 71 percent of respondents ranked “drafting institution‑wide AI guidelines” as their top priority for 2025 budgeting cycles (EDUCAUSE). Nationally, ministries from Spain to Sweden issued AI classroom handbooks in 2024, while U.S. states such as California and Oregon published K‑12 generative‑AI guidance, illustrating how peer competition accelerates rule‑making (OECD).
Yet policymaking rarely unfolds in a straight line. OECD analysts remind us that successful AI governance must juggle quality, equity, teacher workload, and data‑protection aims, objectives that often point in different directions and require iterative adjustment rather than one‑off decrees (OECD). The zig‑zag pattern is visible in universities where early bans on ChatGPT (late 2022) gave way to cautious pilots (2023) and, finally, structured “AI literacy” modules (2024‑25); EDUCAUSE’s 2024 Horizon Report captures this shift from prohibition to pedagogy across fifty‑eight global campuses (EDUCAUSE Library).
Taken together, the evidence confirms that AI is already reshaping education systems. Proponents marshal economic forecasts and labour‑market projections to argue for swift adoption; sceptics invoke ethics and human development to demand guard‑rails. Both positions now have empirical grounding. The policy task, therefore, is not to decide whether AI belongs in education, a debate the data suggest is effectively settled, but to craft governance that secures AI’s benefits while preserving the human purposes of teaching and learning.