
Two Home Affairs officials suspended after AI 'hallucinations' found

"Government accountability is strained by AI-generated errors, as two Home Affairs officials are suspended following the discovery of 'hallucinations' in a high-stakes immigration processing system, highlighting the risks of unvetted AI-driven decision-making in critical public services."

The South African Department of Home Affairs (DHA) has suspended two senior officials after discovering AI-generated hallucinations in the reference list of its revised white paper on citizenship, immigration and refugee protection. The suspensions, announced on Thursday, target the Chief Director of the citizenship and immigration unit and the director involved in drafting the document; the latter's suspension takes effect at the start of the following week.

What happened

The DHA found that the reference list attached to the revised white paper contained fictitious citations — a phenomenon known as AI hallucination, in which large language models generate plausible-sounding but fabricated output rather than grounded fact. The department stated that these references "were generated and attached to the document after the fact, as they are not cited in the body of the text."

The incident is not isolated. A week earlier, the Department of Communications and Digital Technologies (DCDT) withdrew its own draft National AI Policy after discovering similar AI-generated fictitious sources and citations. Minister Solly Malatsi stated that "the most plausible explanation is that AI-generated citations were included without proper verification."

Department response

The DHA acknowledged the "embarrassment caused" and has taken several steps:

  • Precautionary suspensions of the two officials
  • Appointment of two independent law firms — one to manage the disciplinary process, the other to review all policy documents produced by the department dating back to 30 November 2022
  • The review date was chosen because it marks the public release of ChatGPT, the chatbot that brought large language models into widespread use
  • Plans to design and implement AI checks and declarations as part of internal approval processes moving forward
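The department has not specified what its planned AI checks will look like. As a minimal illustration of a first-pass check, a script could flag reference-list entries whose lead author never appears in the body text — the very tell-tale the DHA cited when it found the fabricated citations. The function name and surname heuristic below are hypothetical, not the department's actual process:

```python
# Hypothetical first-pass citation check: flag reference-list entries
# that are never mentioned in the body of the document.
import re

def uncited_references(body: str, reference_list: list[str]) -> list[str]:
    """Return reference-list entries whose lead author's surname never
    appears in the body text — a cheap flag for citations that were
    'attached to the document after the fact'."""
    flagged = []
    for ref in reference_list:
        # Treat the first word of the entry as the lead author's surname.
        surname = re.split(r"[,\s]", ref.strip(), maxsplit=1)[0]
        if surname and surname.lower() not in body.lower():
            flagged.append(ref)
    return flagged
```

A check like this only catches references that are entirely absent from the body; verifying that a citation actually exists would require looking each entry up against an external source such as a DOI registry or library catalogue.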

The department has withdrawn the reference list from the white paper but maintains that the revised policy itself "continues to accurately reflect the government's position" and is not materially affected by the hallucinations, as the document was produced through cross-departmental collaboration and public consultation.

Broader implications

The DHA case highlights the risks of unvetted AI use in critical public services. The department acknowledged that AI is "a transformative but disruptive technology" and stated that institutions must "adapt to keep up." However, the incident raises practical questions about verification processes for AI-generated content in government documents, particularly in high-stakes areas like immigration and citizenship policy.

Bottom line

Two South African government departments have now been caught with AI-hallucinated citations in official policy documents within a week. The DHA's response — suspensions, independent reviews, and planned AI checks — suggests that government agencies are beginning to recognize the need for human oversight of AI-generated content, but the incident underscores how easily unverified AI outputs can slip into formal policy work.

Similar Articles


Coding 1 min

Fragnesia Made Public as Latest Linux Local Privilege Escalation Vulnerability

A newly disclosed local privilege escalation vulnerability in the Linux kernel, dubbed Fragnesia, exposes a critical flaw in the ext4 file system's handling of extended attributes. The vulnerability, assigned CVE-2023-41692, allows attackers to bypass access controls and execute arbitrary code with elevated privileges. Fragnesia affects kernels as far back as version 4.15.

Coding 1 min

Open Source Resistance: keep OSS alive on company time

As companies increasingly adopt "open-source everything" policies, a grassroots movement is emerging to ensure that employees can contribute to open-source projects on company time without sacrificing their intellectual property or compromising sensitive data. This pushback is centered around the concept of "open-source-compatible" enterprise software licenses, which would allow developers to contribute to OSS projects without risking corporate liability. The movement's advocates argue that such licenses are essential for preserving the integrity of open-source ecosystems.

Coding 2 min

The limits of Rust, or why you should probably not follow Amazon and Cloudflare

Rust's promise of memory safety is being put to the test as Amazon and Cloudflare's high-profile migrations to the language reveal a disturbing trend: the more complex the system, the more it exposes the limitations of Rust's borrow checker. In particular, the language's inability to express cyclic references without workarounds such as reference counting or unsafe code is causing headaches for developers. As a result, some are questioning whether Rust is truly ready for prime time.

Coding 1 min

The AI Backlash Could Get Ugly

As the AI industry's carbon footprint and data storage needs continue to balloon, a growing coalition of environmental activists and community organizers is linking the expansion of data centers to rising rates of political violence and displacement, sparking a contentious debate over the true costs of AI's accelerating growth. The movement's focus on data center siting and energy consumption has already led to high-profile protests and municipal ordinances restricting new facility development.

Coding 2 min

The US is winning the AI race where it matters most: commercialization

As the global AI landscape shifts towards practical applications, the US is gaining a decisive edge in commercializing cutting-edge technologies, with a surge in AI-powered product deployments and a growing ecosystem of specialized startups and venture capital firms. This momentum is driven by the increasing adoption of cloud-based infrastructure, particularly Amazon Web Services and Google Cloud Platform, which provide scalable resources for AI model training and deployment.

Coding 1 min

Software Developers Say AI Is Rotting Their Brains

As AI-driven development tools increasingly rely on opaque, black-box models, software engineers are reporting a surge in cognitive dissonance, with many citing the inability to understand or debug complex neural networks as a major contributor to mental fatigue and decreased job satisfaction. This phenomenon is particularly pronounced in the use of large language models, which often employ transformer architectures and billions of parameters. The resulting "explainability gap" threatens to undermine the productivity gains promised by AI-assisted coding.