What risk does AI pose to your charity?
7 August 2024
AI is becoming more prevalent in the tools and software organisations use, and AI chatbots have become something of a trending topic.
Whilst AI presents opportunities to increase productivity for individuals and organisations, regulation and governance are still at an early stage, and it poses a number of risks. The same technology is unfortunately being used to exploit vulnerabilities and to carry out cyber attacks, both new and old.
Data and unauthorised AI use
One issue is unauthorised AI use within charities and businesses, as highlighted in CSO’s article “Unauthorised AI is eating your company data, thanks to your employees”. Many employees, in their pursuit of efficiency, are turning to publicly available AI tools without proper due diligence over what data they are entering or exposing.
AI poisoning
A newer concern is the threat of AI system poisoning, as discussed in the CSO article “AI poisoning is a growing threat — is your security regime ready?”. This emerging risk involves malicious actors manipulating the data used to train AI models, potentially leading to biased or harmful outputs. As organisations increasingly use AI for decision-making and automation, the integrity of these systems becomes crucial.
Exacerbated by longstanding cyber risks
These AI-specific challenges are exacerbated by longstanding cybersecurity issues such as poor password management, lack of employee training, and failure to keep software updated. Gaps in these basic security measures can provide entry points for attackers looking to exploit systems and accounts.
It’s more important than ever for leaders to look at AI and cyber risk together, as this is very much a governance issue and not just an IT one.