
DeepSeek's rise: Navigating the AI data and security challenge in Australia


DeepSeek, a Chinese artificial intelligence startup, took the market by storm, amassing 3 million downloads since its public launch on 20 January. However, it also raises concerns over the possibility of data collection and misuse, and Australia's Industry and Science Minister, Ed Husic, has since urged caution over DeepSeek. Its emergence raises complex cybersecurity issues related to data privacy, ethics, national security and data leakage, all of which will need to be addressed as AI technology advances and becomes more widely used.

Businesses and governments in Australia are grappling with policies and regulations to ensure GenAI and LLMs can be used safely. Last year, the Australian government proposed introducing ten mandatory guardrails for AI developers and deployers. While the guardrails are still under consultation, they could affect businesses later this year.

In the meantime, security teams are expected to set their own guardrails, and organisations are anxious. GenAI was singled out as the fastest-growing concern in Proofpoint's 2024 Voice of the CISO Report, and in Australia, 40% of CISOs viewed ChatGPT and other GenAI tools as the top technology risk to their organisation.

The proliferating engagement between our people and GenAI, LLMs and other advanced applications requires a flexible, human-centric and highly tailored cybersecurity response. Most users chasing productivity gains from GenAI tools unfortunately treat business and cyber risks as an afterthought. This is forcing organisations to move from legacy, content-centric DLP (Data Loss Prevention) products to behaviour- and human-centric platforms that can protect them from careless, malicious and compromised users across every communication channel: email, endpoint and cloud.


AI in the workplace: what are the risks?

Data protection

Perhaps unsurprisingly, the most significant concern is data protection and privacy – particularly regarding LLMs. Tools like DeepSeek and ChatGPT require data input from a user. This could be a question or prompt, which is then processed as raw data. The LLM analyses this data using natural language processing (NLP) and semantic analysis before delivering a suitable output.

As users are free to copy text into the DeepSeek prompt box, and prompt splitters allow for even larger inputs, the potential for data loss, misuse and exposure is enormous. For example, an employee composing an email for an internal or external recipient might paste the draft into a GenAI tool to improve its structure, grammar and tone before sending. If that email contains sensitive information such as PII or company financials, the exposure and risk are vast. And given that students use these tools and approaches daily, the future workforce will only accelerate these risks for both the private and public sectors.
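To make the exposure concrete, the sketch below shows in broad strokes how a simple pre-prompt check might scan text before it is pasted into a GenAI tool. The patterns, function name and blocking behaviour are illustrative assumptions rather than a description of any particular DLP product; real platforms rely on far richer detection than regular expressions.

```python
import re

# Illustrative patterns only; a production DLP engine would use far richer
# detection (named-entity recognition, exact data matching, document fingerprints).
PII_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "tax_file_number": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the categories of sensitive data detected in text bound for a GenAI prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

# Hypothetical draft email an employee is about to paste into a chatbot for "polishing".
draft_email = "Hi Sam, please charge the client's card 4111 1111 1111 1111 and reply to cfo@example.com"
findings = scan_prompt(draft_email)
if findings:
    print("Blocked paste into GenAI tool:", ", ".join(findings))
```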

AI systems can unintentionally expose sensitive information, from overfitting and inadequate data sanitisation to unauthorised integration into personal devices.

Perth's South Metropolitan Health Service banned doctors from using ChatGPT after discovering staff had been using the software to write medical notes that were then uploaded to patient record systems. While there was no breach of confidential information on this occasion, the risks are significant.


Governance and regulation

Many Australian organisations are unaware of who is using GenAI technologies and to what extent, which makes it impossible to monitor data inputs or implement policies governing their use.
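As a rough illustration of what that visibility could look like, the following sketch counts requests to a handful of known GenAI domains per user from a proxy log export. The domain list, the CSV format with 'user' and 'host' columns, and the file name are assumptions made for the example, not a standard log schema.

```python
import csv
from collections import Counter

# Domains treated as GenAI services in this sketch; extend to match your environment.
GENAI_DOMAINS = {"chat.openai.com", "chatgpt.com", "chat.deepseek.com", "gemini.google.com"}

def genai_usage(proxy_log_path: str) -> Counter:
    """Count requests to known GenAI domains per user from a CSV proxy log
    with 'user' and 'host' columns (a hypothetical export format)."""
    usage = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"] in GENAI_DOMAINS:
                usage[row["user"]] += 1
    return usage

# Summarise the heaviest GenAI users so policies and training can be targeted.
for user, count in genai_usage("proxy_log.csv").most_common(10):
    print(f"{user}: {count} GenAI requests")
```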

As well as increasing exposure to data loss, this leaves organisations unsure whether they comply with AI regulations and data protection laws in relevant jurisdictions. In October last year, the OAIC issued new guidance to clarify the application of Australia's Privacy Act 1988 to users and developers of AI products.

Data inputs are not the only cause of concern. GenAI output is just as potentially fraught. When employees use LLMs and other AI tools to generate code or content, it is challenging to ensure the output is free from plagiarism, vulnerabilities and inaccuracies, leaving organisations potentially exposed to security breaches and to issues with patents and registered IP.


Threat vectors

Unfortunately, the enormous potential of AI to increase efficiency is not for the exclusive use of well-meaning employees. NLP and LLM models also allow threat actors to train their attacks on vast datasets, such as social media feeds and chat logs, for hyper-personalisation and even more convincing lures.

Such tools also help cyber criminals avoid common giveaways such as mistranslations or spelling and grammatical errors. That many platforms are freely available also removes traditional barriers such as capital or skill level, opening up the ability to launch a sophisticated cyber-attack to anyone with basic computer knowledge and malicious intent. 

Developments in this area are so significant that 51% of Australian CISOs surveyed in Proofpoint's 2024 Voice of the CISO Report called out GenAI as posing a risk to their organisation.

In October 2024, OpenAI acknowledged that it had disrupted over 20 "operations and deceptive networks from around the world," following Proofpoint's report of the first signs of such activity. This marked the first official confirmation that mainstream AI tools could be used to enhance offensive cyber operations.


A people problem needs a people solution

Like most significant technological advancements, AI is a double-edged sword. Just as it brings new risks and assists threat actors with their campaigns, so does it enhance our ability to bolster our defences and thwart such attacks.

While new technologies may alter the data loss landscape, our people remain very much at its heart. Addressing the risk starts with the who, the why and the how.

The more you understand which users are interacting with AI, their reason for doing so, and how they use the technology, the easier it becomes to mitigate potential risks. Without context, you cannot determine intent and build evidence around incidents and offending users. You should have irrefutable evidence whenever you need to confront a malicious insider. One Proofpoint customer in Australia who experienced an employee taking sensitive data elaborated: "If we found someone leaking company data, we can't afford to be wrong. Visibility is in the best interest of every single user in the company. We need to walk through what exactly a user was doing." 

Organisations must take a blended, multi-layered approach combining threat intelligence, behavioural AI, detection engineering and semantic AI to block, detect and respond to more advanced threats and data loss incidents. This holistic, human-centric approach surfaces behaviours that traditional, content-centric systems typically miss, resulting in higher efficacy, greater operational efficiency and far fewer false positives.
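As a loose illustration of the idea, rather than of any vendor's actual method, the sketch below combines a few behavioural signals into a single per-user risk score that analysts could use to prioritise investigation. The signal names and weights are assumptions made purely for the example.

```python
from dataclasses import dataclass

@dataclass
class UserActivity:
    """A handful of illustrative behavioural signals for one user over a time window."""
    pasted_sensitive_text_into_genai: int   # DLP hits on GenAI prompt submissions
    bulk_file_moves_to_personal_cloud: int  # endpoint telemetry
    flagged_phishing_interactions: int      # email / threat-intelligence signals
    off_hours_activity_ratio: float         # share of activity outside work hours, 0.0 - 1.0

# Illustrative weights only; a real platform would learn these from behavioural baselines.
WEIGHTS = {
    "pasted_sensitive_text_into_genai": 5.0,
    "bulk_file_moves_to_personal_cloud": 4.0,
    "flagged_phishing_interactions": 3.0,
    "off_hours_activity_ratio": 2.0,
}

def risk_score(activity: UserActivity) -> float:
    """Combine weighted signals into one score used to rank users for investigation."""
    return sum(weight * float(getattr(activity, name)) for name, weight in WEIGHTS.items())

score = risk_score(UserActivity(2, 1, 0, 0.6))
print(f"Risk score: {score:.1f}")  # higher scores surface first for analysts
```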

Finally, while tools matter, they mean little without user awareness and behavioural change. You must address AI risks in your DLP (Data Loss Prevention) policies, set clear parameters for its use and deliver ongoing security training targeted to your employees' vulnerabilities, roles and competencies. After all, even the world's most technologically advanced cyber risks are no match for security-savvy staff.
