
NCSC advice on ChatGPT

Large language models (LLMs) and AI chatbots have captured the world's interest, ignited by the release of ChatGPT in late 2022 and the ease of querying it offers. ChatGPT is now one of the fastest-growing consumer applications ever, writes the UK's National Cyber Security Centre (NCSC).

Undoubtedly, the NCSC concludes in a blog post, there are risks involved in the unfettered use of public LLMs. Individuals and organisations should take great care with the data they choose to submit in prompts. Organisations should ensure that those who want to experiment with LLMs are able to, but in a way that doesn't place organisational data at risk, the NCSC advises.
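The NCSC doesn't prescribe a specific control here, but one way to let staff experiment without exposing organisational data is to pass prompts through a redaction step before they leave the business. The following is a minimal Python sketch of that idea; the patterns, placeholder labels and the redact_prompt helper are all hypothetical illustrations, and a real deployment would rely on far more robust detection, such as a vetted data-loss-prevention tool.

import re

# Illustrative patterns only; a real organisation would tune these to the
# kinds of sensitive data it actually handles.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a labelled
    placeholder before the prompt is submitted to a public LLM."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

raw = "Summarise this ticket from jane.doe@example.com (token sk-abcdef1234567890XYZa)."
print(redact_prompt(raw))
# Summarise this ticket from [REDACTED-EMAIL] (token [REDACTED-API_KEY]).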

On phishing emails, the NCSC points out that because LLMs excel at replicating writing styles on demand, there is a risk of criminals using them to write convincing phishing emails, including emails in multiple languages. This may aid attackers who have strong technical capabilities but lack linguistic skills, by helping them craft convincing phishing emails (or conduct social engineering) in the native language of their targets.

For the blog post in full, visit the NCSC website.

Comment

"If you are sending information to an external service provider for processing, then you must assume that it can store and distribute that information as part of its architecture. In the absence of a clear, explicit statement from the third-party processor guaranteeing that the information will not leak, you must assume it is no longer private," says Wicus Ross, Senior Security Researcher at Orange Cyberdefense. "While AI-powered chatbots are trained and further refined by their developers, it isn't out of the question for staff to access the data that's being input into them. And, considering that humans are often the weakest element of a business' security posture, this opens the information up to a range of threats, even if the risk is accidental.

"Whenever sharing data with a third party, businesses need to be aware that its security is now out of their hands, because it is difficult to apply their own policies, procedures and controls to data placed in an external environment. This even goes for SaaS platforms such as Slack and Microsoft Teams, which do have clear data and processing boundaries, but those boundaries can be blurred when the platforms are augmented with third-party add-ons.

"Therefore, businesses need to use the NCSC's advisory to highlight the risk that these tools pose to any sensitive information that is input into them. Staff need to be made aware that they are not necessarily just talking to an AI: their conversations could be accessed by humans, either those with malicious intent or those who simply aren't aware of cybersecurity best practices and therefore put the data at risk. It will be almost impossible to stop staff from using these tools completely, but education and awareness are the key to making their use as harmless as possible."
