Navigating the risks of public LLMs: Why government agencies need safe, targeted AI solutions for their investigations

The Comtrac Team
Feb 19, 2025
6 min read
Government agencies are increasingly exploring the potential of artificial intelligence (AI) to enhance efficiency and meet the growing demands of their constituents. Among these tools, generative AI platforms such as ChatGPT, Claude and, more recently, DeepSeek have attracted significant attention.
While safe, targeted AI can deliver real efficiencies in investigative processes, public large language models (LLMs), which are free but lack critical safeguards, introduce serious risks. Public sector organisations must weigh these risks carefully and ensure staff are not entering sensitive data into open LLMs.
Data privacy concerns
Government agencies often deal with highly sensitive information, ranging from personal identifiers to confidential case details. Publicly available AI systems, particularly those hosted by third-party providers, can introduce vulnerabilities when this data is input for processing. Without robust controls, there is a real risk that sensitive data could be inadvertently exposed, misused or accessed by unauthorised entities.
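Agencies that do permit limited use of external tools sometimes pair policy with technical guardrails that stop obvious identifiers from leaving the organisation in the first place. As a minimal, hypothetical sketch (the patterns and placeholder labels below are our own illustrations, not a production-grade filter), a pre-submission redaction step might look like this:

```python
import re

# Illustrative patterns only; a real deployment would need far more
# robust detection (names, addresses, case numbers, and so on).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),  # AU numbers
    "TFN": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),           # tax file numbers
}

def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders
    before any text is sent to an external AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    note = "Contact J. Smith on 0412 345 678 or j.smith@example.gov.au."
    print(redact(note))
    # Contact J. Smith on [PHONE REDACTED] or [EMAIL REDACTED].
```

Even a simple filter like this only reduces, rather than eliminates, the risk; the safest control remains keeping sensitive material out of public LLMs entirely.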
Hallucinations and integrity risks
Generative AI systems can produce responses that appear accurate but are factually incorrect or entirely fabricated, a phenomenon known as hallucination. Many of these platforms also use submitted data for model training by default; ChatGPT, for example, incorporates user conversations into training unless the user changes the relevant settings.
For government agencies, where precision and integrity in decision-making and documentation are critical, these limitations pose significant risks. Inaccurate or misleading outputs could lead to reputational damage, legal consequences, and a loss of public trust.
By contrast, targeted AI services such as those within the Comtrac system use prompts specifically designed to minimise hallucinations, producing more reliable and accurate outputs.
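Comtrac's prompt designs are proprietary, but the general technique of "grounding" is well established and worth illustrating. The sketch below is a generic example of the pattern, not Comtrac's implementation: the model is constrained to vetted source material and instructed to admit gaps rather than guess.

```python
# A generic illustration of grounded prompt design. The idea: restrict
# the model to supplied source excerpts and require it to say so when
# the sources are insufficient, which narrows the room for hallucination.
GROUNDED_TEMPLATE = """You are assisting with an official report.
Use ONLY the source material between the markers below.
If the sources do not contain the answer, reply exactly:
"Insufficient information in the provided sources."
Do not add names, dates, or facts that are not in the sources.

=== SOURCES ===
{sources}
=== END SOURCES ===

Task: {task}
"""

def build_prompt(sources: list[str], task: str) -> str:
    """Assemble a grounded prompt from numbered, vetted source excerpts."""
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return GROUNDED_TEMPLATE.format(sources=numbered, task=task)

print(build_prompt(
    sources=["Statement taken 14/01/2025 at the scene...",
             "Incident log entry 2025-0113-0042..."],
    task="Summarise the sequence of events with a citation [n] per claim.",
))
```

Requiring a citation back to a numbered source for each claim also makes outputs auditable, which matters when documents may end up before a court.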
Data sovereignty challenges
Many publicly available AI platforms operate on global infrastructure, which may mean data is stored or processed outside Australia. This creates significant data sovereignty issues, as agencies lose control over where and how their data is handled. Such practices may not align with stringent local data protection laws, and the potential for sensitive information to leave Australian jurisdiction poses serious compliance and operational risks.
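One simple, illustrative control is an allow-list check that blocks requests to any AI endpoint that is not verifiably hosted onshore. The endpoint names below are placeholders we invented for the sketch, not real services:

```python
from urllib.parse import urlparse

# Hypothetical guardrail: before any request leaves the agency, verify
# that the configured AI endpoint is on an approved, Australian-hosted
# allow-list. Hostnames here are placeholders for illustration only.
APPROVED_AU_ENDPOINTS = {
    "llm.agency-internal.gov.au",
    "ai.au-east.example-cloud.com",
}

def assert_sovereign(endpoint_url: str) -> None:
    """Refuse to proceed if the endpoint is not on the approved list."""
    host = urlparse(endpoint_url).hostname
    if host not in APPROVED_AU_ENDPOINTS:
        raise PermissionError(
            f"Endpoint {host!r} is not approved for sensitive data; "
            "data may be processed outside Australian jurisdiction."
        )

assert_sovereign("https://llm.agency-internal.gov.au/v1/chat")  # passes
# assert_sovereign("https://api.example.com/v1/chat")  # raises PermissionError
```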
Case study: misuse of ChatGPT in child protection
A recent report by the Office of the Victorian Information Commissioner (OVIC) revealed a concerning incident where a child protection caseworker used ChatGPT to draft a report. The report contained sensitive and identifying information about vulnerable children and families, which was entered into ChatGPT. OVIC’s investigation found that the use of ChatGPT breached privacy obligations and raised serious concerns about data security and accuracy.
The case highlights the inherent risks of using generic AI tools in sensitive governmental processes. While the caseworker’s intention was to save time, the use of an unvetted AI system jeopardised the confidentiality of highly sensitive information. This incident has led to Victoria’s Department of Families, Fairness and Housing banning the use of generative AI tools like ChatGPT for drafting sensitive documents, emphasising the need for strict policies and targeted AI solutions.
The case for targeted AI in government
Despite these risks, the potential of AI for government to transform operations cannot be ignored. By focusing on targeted AI solutions specifically designed to meet the unique needs of agencies, governments can harness the benefits of AI while mitigating its inherent risks.
Addressing administrative burden
Law enforcement and government agencies are often weighed down by substantial administrative responsibilities. This burden not only drains resources but also contributes to workforce attrition—a challenge acutely felt by police forces across Australia. Targeted AI offers a real opportunity to lighten this load by automating repetitive tasks, streamlining workflows, and enabling staff to focus on high-value activities.
Custom-built AI services tailored to agency needs
Targeted AI solutions, trained on data and expertise specific to an agency's domain, provide a secure and reliable alternative. In investigations, they can assist law enforcement in preparing court-ready documents, such as protection orders, with precision and consistency. These tools draw on the knowledge of seasoned investigators, ensuring outputs align with legal requirements and operational standards.
Improving employee well-being and retention
By alleviating the administrative burden, targeted AI can reduce stress and improve job satisfaction among agency staff. This is a critical factor in addressing attrition, particularly in demanding roles like policing, social services and child protection. Enhanced efficiency and support from AI can create a more sustainable work environment, helping agencies retain skilled personnel and maintain operational continuity.
Protecting the vulnerable
Most importantly, targeted AI in investigations empowers government agencies to better fulfil their core mission of protecting vulnerable populations by streamlining processes and enhancing decision-making. These tools free up resources to focus on frontline services and proactive interventions.
While the promise of AI to enhance efficiency and reduce administrative burden is undeniable, the recent case of misuse in child protection highlights the critical need for caution. Data privacy, sovereignty challenges, and the potential for inaccurate outputs must be carefully managed, especially when dealing with sensitive information.
However, targeted AI solutions, specifically tailored to the unique needs of government agencies, offer a way forward. AI services, such as those available within the Comtrac platform, are designed with security, accuracy, and compliance in mind. They can provide tangible benefits by improving workflows, supporting staff, and ultimately helping agencies protect vulnerable populations.