
Deloitte Australia has admitted that a $440,000 report it produced for the federal government was generated with the help of artificial intelligence, complete with fabricated academic references and even a made-up Federal Court quote. The firm now says it will issue a partial refund, the corporate equivalent of shrugging and handing back loose change after being caught red-handed.
The report, commissioned by the Department of Employment and Workplace Relations, was supposed to provide serious analysis of welfare compliance. Instead, it turned out to be what many describe as an AI-generated hallucination dressed up as consultancy. The revised version quietly uploaded by Deloitte removed the fake sources and confirmed that Azure OpenAI GPT-4o was used in drafting the document.
Deloitte insists that its “core findings” remain valid — a statement that would be funny if taxpayers weren’t paying for it. Critics, including lawmakers and academics, have branded the episode a symptom of corporate laziness, where multimillion-dollar firms now outsource thinking to chatbots and still collect premium fees.
But the scandal runs deeper than Deloitte’s sloppy reliance on AI. It exposes a broader pattern of government departments throwing taxpayer money at consulting giants for what increasingly looks like digital snake oil. Despite repeated audit warnings about wasteful consultancy spending, the cycle continues — inflated invoices, unverified reports, and political silence.
So far, no official has explained why a government awash in economic warnings continues paying elite firms for work that could have been produced from a single prompt. Accountability, like the report's citations, appears to be missing in action.
According to online reports, Telegram had declined to cooperate through standard legal requests, prompting prosecutors to seek, and obtain, a more aggressive remedy. The order allegedly requires that any data retrieved be stored within the jurisdiction of the U.S. trial court. No broader or repeated hacking is permitted under the same warrant; any future intrusion would require separate court approval.
Telegram has historically maintained a strong commitment to encryption and user privacy, making cooperation with many law enforcement demands controversial. In recent years, however, the platform has disclosed that it complied with legal requests for users' IP addresses or phone numbers in more than 2,000 cases arising from U.S. inquiries. This shift coincides with mounting regulatory and legal pressure on Telegram following controversies over the platform's use to disseminate illicit content.
The court’s decision to allow direct access to Telegram’s systems raises urgent questions about the balance between combating severe crimes and preserving digital rights. Legal experts caution that remote access to cloud infrastructure—especially for foreign service providers—could set precedents with implications for cross-border privacy, surveillance regulation, and platform liability.
At the time of writing, Telegram has not officially commented on the specific court order. The company's public transparency efforts continue to center on quantified disclosures of how often it complies with lawful requests, but the details of this remote access authorization remain sealed.
As the case proceeds, observers will closely monitor how the U.S. judiciary frames the limits of government hacking powers in the context of encrypted messaging services, as well as Telegram’s potential responses or policy adjustments.