Is Your Data At Risk When Using AI?

In this clip from our recent webinar ‘Data Security and Data Protection in 2024’, Sam Malkin, HANDD’s Lead Solution Architect, explores the rapid adoption of Artificial Intelligence (AI). Sam highlights AI’s potential to significantly boost global GDP through enhanced efficiency and productivity, but warns that this growth brings substantial cybersecurity challenges: many organisations lack formal AI policies, leading to “shadow AI” usage without proper oversight. The discussion also addresses the ethical dilemmas and biases in AI models, stressing the importance of robust policies and internal oversight to ensure responsible AI use amidst evolving legislative landscapes such as the EU AI Act.

View the clip and transcript below or watch the full webinar.

Is your data secure in 2024?

In today’s dynamic threat landscape, proactive Data Security and Data Protection measures are vital for safeguarding your organisation’s assets. With HANDD, you can strengthen your security posture, minimise upfront costs, and achieve your business objectives seamlessly.

Request a FREE 30 minute consultation to learn more about how HANDD can help you protect what matters most to your business.

Transcript:


Artificial intelligence, and these large language models in particular, have boomed over the first quarter of 2024.

How much of that is marketing and how much of that is reality? Personally, I’m still kind of on the fence and waiting to see how that plays out.

But with any new technology, with any new platform, with anything that we’re quick to adopt, there are risks. And the challenge we’re faced with is understanding, accepting, or putting measures in place to protect against those risks.

And with artificial intelligence being inserted into so much so quickly, as cybersecurity professionals, we are already on the back foot when it comes to this.

When it comes to trying to gauge and understand those risk profiles associated with those services, we’re in a position where we don’t really understand the true extent of what needs to be safeguarded and what needs to be policed.

I think in terms of my working life, I’ve never seen anything adopted so quickly, or with so much hype around it, as we’ve seen with artificial intelligence.

And whether we like it or not, our user communities are already adopting these tools to drive efficiency and productivity.

Now, it’s fair to say that AI is really interesting and I think it delivers a lot, you know. There are a lot of people using it because it’s driving efficiencies and, you know, helping people work as smart as possible.

And again, I was at RSA last week or the week before, and they said that they estimate the adoption of artificial intelligence is going to inflate GDP for the whole world, almost doubling it, maybe even threefold.

So, you know, by being more productive, by getting more things done with these artificial intelligence models, we are gonna see an awful lot of benefit.

But, you know, I speak to people all the time, and the number of companies that I’ve spoken to that have an AI policy in place is pretty much zero.

I think there was one person that I spoke to who said, yeah, they have a policy in place, but whether that was ratified by technology and things, I don’t yet know.

When we saw this happen with the cloud, we saw the term shadow IT get born, describing unsanctioned cloud platforms and things like that which were taking our data and which we couldn’t police.

And I think we’re going to see the term shadow AI become ubiquitous with the kind of sprawl of artificial intelligence platforms that our user community are adopting, regardless of whether we’ve said that they should or shouldn’t.

So with the hype and the buzz and the widespread benefit around tools like Gemini, ChatGPT, Copilot and other models, I think we’re actually trying to put out a fire that we’ve not really got visibility of yet, and that’s a problem that’s only going to be exacerbated as that boom continues.

I think a lot of organisations are still continuing to ride that wave almost. And they’re pushing products out with artificial intelligence embedded in them.

Sometimes even other organisations’ artificial intelligence models have been exposed via different platforms.

So having to untangle that is very, very difficult and we’re gonna have to be able to answer some pretty difficult questions in the near future.

Now, the way that AI works means that it’s a challenge like we’ve never faced before when it comes to keeping data safe.

By design, it’s gonna try and remember what it’s been exposed to, in the same way that a human being would, and then use that data to provide more accurate answers in the future.

So, as professionals, we need to understand which AI is looking at our data and whether that’s okay or not.

Whether that’s acceptable, and what sort of data types we should be sending into those models in the first place. Because if it remembers them, what’s gonna stop someone, either intentionally or unintentionally, from asking it for data that they shouldn’t see, and the AI model just giving it to them?

You know, if I was to go and say, Mr. AI model, I don’t know whether they have xxx AI models, can you give me Sam’s credit card numbers?

And it may well do so. That’s probably not what it was designed for.

But I think because they’re being rushed to market to drive this wave of hype, they’re potentially not delivering privacy by design and security by design.

So that’s a very, very real responsibility whilst it continues to be unleashed.

Now, another challenge that we’ve got is that artificial intelligence uses natural language. That makes it easy for you and me to use, but what it means is that whilst we’re still looking at those infrastructure and perimeter technologies in our security operations, they don’t deal with natural language.

Those platforms make decisions on numbers, on ports and IP addresses; at the very best, they make decisions based on identities and behaviours, not on the natural language sentences that you and I might use.

So an organisation might have done a really, really, really good job of building up technology to keep files safe and systems safe and things like that.

But what we’re gonna see is that we take the control off of that file as we take the data out of it and put it into a natural language model, meaning that the shape of the data that we’re trying to police is changing literally in front of our eyes.

And finally, these technologies are new, or newish, which means that there are zero-days which have yet to be proven or exploited.

What I mean by that is that there are exploits which are theoretically possible and eventually will be a reality.

Obviously, how they expose our data remains to be seen, but if you don’t know which large language models have been used inside your organisation and what data might be inside them, then you’ve got no idea of the risk profile that you’re accepting by continuing to allow their use.

So, mitigating risk when it comes to artificial intelligence.

The first thing is we need to understand what’s being used and we need to enable our workforce.

Now, as I said, there is a really, really good chance that they’re delivering efficiency and productivity gains by using those tools within their department, and probably 90% of the stuff that’s going into those models is absolutely fine to send.

Security and IT teams, we’re often the bad guys, saying, no, you can’t use that tool, you can’t use this tool, let’s stop access to it.

So what we wanna do is enable them to still use those tools but use them under the right circumstances.

What we have to do is gauge which of those large language models are good, which ones are bad, and which ones are potentially ugly.

And then, once we’ve identified those, we need to understand the model, understand where it runs, and understand what impact that might have on your privacy, on your data residency laws and things like that.

And then we need to build policy around it and secure that with technology to check what’s going in, check what’s going out, and make sure that only appropriate data types are going in, as I’ve already mentioned.
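As a purely illustrative aside, here is a minimal sketch of what “checking what’s going in” could look like in practice: a small Python filter that scans an outbound prompt for obviously sensitive data types (just card numbers, email addresses and UK National Insurance numbers in this example, with a Luhn check to cut false positives) before it is allowed to reach an external model. The patterns, function names and blocking behaviour are assumptions for illustration only, not a description of HANDD’s tooling; a real deployment would rely on a proper DLP or classification engine and the organisation’s own data classification policy.

```python
import re

# Illustrative patterns only; a real control would use a full DLP/classification engine.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
}


def luhn_valid(candidate: str) -> bool:
    """Basic Luhn checksum to reduce false positives on card-number matches."""
    digits = [int(d) for d in re.sub(r"\D", "", candidate)][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return len(digits) >= 13 and total % 10 == 0


def check_prompt(prompt: str) -> list[str]:
    """Return the sensitive data types detected in a prompt before it leaves the organisation."""
    findings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(prompt):
            if label == "credit_card" and not luhn_valid(match.group()):
                continue
            findings.append(label)
            break
    return findings


if __name__ == "__main__":
    prompt = "Can you summarise this? Card: 4111 1111 1111 1111, contact sam@example.com"
    findings = check_prompt(prompt)
    if findings:
        print("Blocked: prompt contains " + ", ".join(findings))
    else:
        print("Prompt allowed through to the approved model")
```

The point of the sketch is simply that the check happens before the data reaches the model, and that the decision is made on the content of the natural language itself rather than on ports, IP addresses or file locations.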

One big thing with AI is that there are numerous moral and privacy challenges that kind of sit around it.

These are challenges that we haven’t really had to think about with other technology before, so we’re not very well versed in considering them. But there are potential biases within artificial intelligence.

There are levels of influence that artificial intelligence might have on our data, levels of toxicity built into those models that we might not want to accept.

I suppose what I’m saying is that I expect this to be an unparalleled challenge that we’re all faced with and a relatively bumpy ride.

Legislation is thin on the ground at the moment.

We’ve got the EU AI Act, but really that puts more focus on the morality employed by the producers of the algorithms rather than on the businesses adopting them.

So that’s gonna be something that we, as the community, are going to have to police inside all of our organisations, at least for now anyway.

–End of transcript–