The South African Department of Home Affairs (DHA) has suspended two senior officials after discovering AI-generated hallucinations in the reference list of its revised white paper on citizenship, immigration and refugee protection. The suspensions, announced on Thursday, target the Chief Director of the citizenship and immigration unit and the director involved in drafting the document; the latter's suspension takes effect at the start of the following week.
What happened
The DHA found that the reference list attached to the revised white paper contained fictitious citations — a phenomenon known as AI hallucination, in which large language models produce plausible-sounding but fabricated output rather than grounded, verifiable information. The department stated that these references "were generated and attached to the document after the fact, as they are not cited in the body of the text."
The incident is not isolated. A week earlier, the Department of Communications and Digital Technologies (DCDT) withdrew its own draft National AI Policy after discovering similar AI-generated fictitious sources and citations. Minister Solly Malatsi stated that "the most plausible explanation is that AI-generated citations were included without proper verification."
Department response
The DHA acknowledged the "embarrassment caused" and has taken several steps:
- Precautionary suspensions of the two officials
- Appointment of two independent law firms — one to manage the disciplinary process, the other to review all policy documents produced by the department dating back to 30 November 2022
- The review date was chosen because it marks the public release of ChatGPT, the chatbot that brought large language models into widespread use
- Plans to design and implement AI checks and declarations as part of internal approval processes moving forward
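One automated check of the kind the department describes — flagging reference-list entries that never appear in the body text, the exact anomaly the DHA reported — can be sketched as follows. This is an illustrative Python heuristic, not any tool the department has named; the function name and surname-matching rule are assumptions.

```python
import re

def find_orphan_references(body_text, reference_list):
    """Flag reference-list entries whose first author is never cited in the body.

    Crude heuristic (assumed, for illustration): take the leading surname
    from each entry and check whether it appears anywhere in the body text.
    Entries that never appear are "orphans" — attached after the fact.
    """
    orphans = []
    body_lower = body_text.lower()
    for entry in reference_list:
        # Assume entries start with "Surname, Initial." (a common style).
        match = re.match(r"([A-Z][\w'-]+)", entry)
        surname = match.group(1) if match else entry.split()[0]
        if surname.lower() not in body_lower:
            orphans.append(entry)
    return orphans

body = "As Smith (2021) argues, migration policy requires careful review."
refs = [
    "Smith, J. (2021). Migration and the state.",
    "Nkosi, T. (2020). Citizenship in transition.",  # never cited in body
]
print(find_orphan_references(body, refs))  # → only the Nkosi entry
```

A real pipeline would go further — verifying that each cited work actually exists in a bibliographic database — but even this cheap cross-reference would have surfaced the references the DHA found "not cited in the body of the text."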
The department has withdrawn the reference list from the white paper but maintains that the revised policy itself "continues to accurately reflect the government's position" and is not materially affected by the fabricated citations, since the document was produced through cross-departmental collaboration and public consultation.
Broader implications
The DHA case highlights the risks of unvetted AI use in critical public services. The department acknowledged that AI is "a transformative but disruptive technology" and stated that institutions must "adapt to keep up." However, the incident raises practical questions about verification processes for AI-generated content in government documents, particularly in high-stakes areas like immigration and citizenship policy.
Bottom line
Two South African government departments have now been caught with AI-hallucinated citations in official policy documents within a week. The DHA's response — suspensions, independent reviews, and planned AI checks — suggests that government agencies are beginning to recognize the need for human oversight of AI-generated content, but the incident underscores how easily unverified AI outputs can slip into formal policy work.