TL;DR
South Africa’s Communications Minister Solly Malatsi withdrew the country’s draft national AI policy after News24 discovered that at least six of its 67 academic citations were AI-generated hallucinations: fabricated articles attributed to real journals. The policy had been approved by Cabinet in March and published for public comment. Malatsi called it an “unacceptable lapse” and promised consequence management. The scandal leaves South Africa without an AI governance framework and raises questions about institutional capacity to regulate the technology.
South Africa’s Department of Communications and Digital Technologies spent months drafting a national artificial intelligence policy. It proposed a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson, a National AI Safety Institute, and an AI Insurance Superfund. It outlined five pillars of AI governance: skills capacity, responsible governance, ethical and inclusive AI, cultural preservation, and human-centred deployment. It adopted a risk-based approach modelled on the EU AI Act. Cabinet approved the draft on 25 March. The Government Gazette published it on 10 April for public comment.

And then News24, the South African news outlet, checked the bibliography and discovered that at least six of the document’s 67 academic citations did not exist. The journals were real. The articles were not. The authors credited with foundational research on AI governance had never written the papers attributed to them. Editors at the South African Journal of Philosophy, AI & Society, and the Journal of Ethics and Social Philosophy independently confirmed to News24 that the cited articles had never been published in their pages.

The most plausible explanation, according to Communications Minister Solly Malatsi, is that the drafters used a generative AI tool and published the output without verifying a single reference. A government policy designed to govern artificial intelligence was undermined by the artificial intelligence it failed to govern.
The withdrawal
Malatsi announced the withdrawal on 27 April, calling the fictitious citations an “unacceptable lapse” that “compromised the integrity and credibility of the draft policy.” He said consequence management would follow for those responsible for drafting and quality assurance. “This failure is not a mere technical issue,” the minister said. The parliamentary portfolio committee chair offered a more concise assessment, suggesting the department “skip using ChatGPT this time” when redrafting. The document will be revised before being reissued for public comment, but no timeline has been given. South Africa is now without a formal AI governance framework at a time when governments worldwide are grappling with how to regulate AI, and the country’s credibility as a serious participant in that conversation has taken a blow that will outlast the policy revision.
The scandal is not simply that fake citations appeared in a government document. It is that they appeared in a government document about artificial intelligence, written by the department responsible for the country’s digital technology strategy, during the exact period when the world’s most consequential AI governance debates are being fought in Brussels, Washington, and Beijing. The EU AI Act, the most ambitious regulatory framework for artificial intelligence, is grappling with delayed standards and an implementation timeline that has been pushed back to 2027 for high-risk systems. The United States has no federal AI legislation and is watching states legislate independently while the White House attempts to preempt their efforts. China has enacted AI regulations but applies them selectively. Into this landscape, South Africa offered a policy that could not survive a bibliography check.
The pattern
South Africa’s hallucinated citations are an extreme case of a problem that is quietly spreading across institutions that use generative AI for research and drafting. A study published in Nature found that 2.6 per cent of academic papers published in 2025 contained at least one potentially hallucinated citation, up from 0.3 per cent in 2024. If that rate holds across the roughly seven million scholarly publications from 2025, well over 100,000 papers contain invalid references. GPTZero, a Canadian detection startup, analysed more than 4,000 research papers accepted at NeurIPS 2025, one of the world’s premier AI conferences, and found over 100 hallucinated citations across at least 53 papers. In a separate multi-model study, only 26.5 per cent of AI-generated bibliographic references were entirely correct. The problem is structural: large language models generate citations through probabilistic token prediction rather than information retrieval. They do not look up papers. They predict what a citation should look like based on the patterns in their training data, and when the prediction is confident enough, they produce a reference that reads as authoritative but points to nothing.
The South African case is distinctive not because the technology hallucinated, which is a well-documented and inherent limitation of generative AI, but because the hallucinations were published in an official government policy document that passed through Cabinet approval without anyone verifying the references. The drafting process included civil servants, subject matter consultations, and ministerial review. Dumisani Sondlo, the department’s AI policy lead, had previously described the policy development as “an act of acknowledging that we don’t know enough.” That acknowledgment did not extend to recognising that the tool used to help draft the policy was itself unreliable. The six fake citations that News24 identified are the ones that were caught. Whether the remaining references among the document’s 67 are genuine has not been publicly confirmed. The entire bibliography is now under suspicion, and by extension, so is the analytical foundation on which the policy’s proposals were built.
The implications
The immediate consequence is that South Africa’s AI governance timeline has been reset. The draft policy, which was intended to position the country as a leader in responsible AI adoption on the African continent, will need to be redrafted, reconsulted, and resubmitted. The institutional credibility damage extends beyond the policy itself. If the department responsible for governing AI cannot verify whether the sources in its own policy document are real, the question becomes whether it has the capacity to evaluate the AI systems it proposes to regulate. The policy envisioned a multi-regulator model in which AI governance and human oversight would be embedded within existing supervisory frameworks rather than centralised under a single authority. That model requires each participating regulator to have sufficient technical understanding to assess AI systems in their sector. The hallucination scandal does not inspire confidence that the coordinating department meets that threshold.
The broader lesson is not that governments should avoid using AI in policy development. It is that the failure mode of AI is not dramatic. It does not crash. It does not display an error message. It produces fluent, formatted, confident text that looks exactly like the output of a competent researcher. The fake citations in South Africa’s AI policy were not obviously wrong. They were plausible. They cited real journals. They attributed work to real people. They followed the formatting conventions of academic references. The only way to catch them was to check whether each one actually existed, a task that requires exactly the kind of methodical human verification that AI is supposed to make unnecessary.

Growing public distrust of AI is not irrational. It is a response to a technology that is simultaneously powerful enough to draft a national policy and unreliable enough to fabricate the evidence that policy rests on. South Africa’s embarrassment is singular, but the underlying failure, using AI without the capacity to verify its output, is not. It is happening in universities, law firms, newsrooms, and government departments around the world. South Africa is simply the first government to publish the receipts. The challenges of implementing AI regulation are real, but they begin with a prerequisite that South Africa’s department did not meet: understanding what the technology does before trying to write the rules for it.
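The bibliography check that caught the fake references is also straightforward to automate, at least as a first pass. The sketch below, in Python, queries the public Crossref REST API (api.crossref.org) for each claimed title and flags citations with no close match; it is an illustration under assumptions, not a verdict machine, since Crossref does not index every journal and fuzzy title matching can misfire in both directions. A low score means "a human should check this by hand", which is precisely the step the drafters skipped.

```python
"""Sketch: flag citations whose titles cannot be found via Crossref.

Assumes the public Crossref works endpoint (api.crossref.org/works),
which accepts a `query.bibliographic` parameter and returns JSON with
matched titles. A miss flags a citation for human review; it does not
prove fabrication, because Crossref's coverage is incomplete.
"""
import json
import urllib.parse
import urllib.request
from difflib import SequenceMatcher


def title_similarity(claimed: str, found: str) -> float:
    """Rough string similarity between the claimed title and a candidate."""
    return SequenceMatcher(None, claimed.lower(), found.lower()).ratio()


def citation_found(title: str, threshold: float = 0.85) -> bool:
    """Query Crossref for the claimed title and compare its best match.

    Returns True if a sufficiently close title exists in the index.
    False means "suspect, verify manually", not "definitely fake".
    """
    query = urllib.parse.urlencode({"query.bibliographic": title, "rows": 1})
    url = f"https://api.crossref.org/works?{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        items = json.load(resp)["message"]["items"]
    if not items:
        return False
    best = (items[0].get("title") or [""])[0]
    return title_similarity(title, best) >= threshold
```

Run over all 67 references, such a script would have surfaced the suspect entries in minutes; the human work is then confined to the handful of low-scoring citations rather than the whole bibliography.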