The Security Interviews: Jason Nurse, University of Kent
Jason Nurse, reader in cyber security at the University of Kent, firmly believes that blame for cyber weaknesses must shift towards how systems are built rather than pointing the finger at users, stating: "Five or 10 years ago, security experts were saying the user is the weakest link, that users are stupid. Thankfully, I don't really hear that any more – or, if I hear it, people call it out, pointing out that if the system was built better, the user wouldn't have to find a workaround. That's important to understand."
Nurse conducts research on a wide range of cyber security issues that affect organisations and governments, but much of his work is focused on something that is often overlooked in conversations, reports and research papers around cyber security – psychology.
For all the analysis of cyber attacks and incidents, coverage still tends to focus on CVEs and cyber criminal gangs rather than the people who are affected when ransomware forces hospital appointments to be cancelled or a cyber attack results in empty shelves in supermarkets. Nurse is keen to voice a different perspective on what matters in cyber security.
"For years, people have focused on security as technology, making the technology better, making it more advanced. I completely agree that technology plays a key role, but it's also key to focus on technology as it relates to humans and people," he says. "How we engage with technology, how we're able to explore people's behaviour, how we're able to understand how people are constantly being socially engineered and exploited – and then, of course, what do we do about that?"
Alongside his work in academia, Nurse is also director of science and research at CybSafe, a cyber security company that aims to help reduce cyber risk in the workplace by measuring and influencing security behaviours. CybSafe partnered with the US National Cybersecurity Alliance to produce its annual Cybersecurity attitudes and behaviours report, surveying thousands of people to examine how behaviours and attitudes shape security risk.
Among the key findings in the report: 44% of people believe that staying safe online is intimidating, while only 48% of respondents said they had completed cyber security training at work in the past year. The most common reasons people gave for not completing training were "I already know enough" (23%) and "too busy" (22%). Could it be that there is something wrong with cyber security training?
Cyber security vs users
During his presentation at Infosecurity Europe 2025, Nurse polled the audience of cyber security professionals on their preferred method of cyber security training. The response was overwhelmingly in favour of games and gamification.
Nurse then revealed the results of the survey, which proved a shock to the audience: of the training methods on offer, games and gamification was by far the least popular method of cyber security training among users, with only 11% of people stating that this was their preferred way of learning about cyber security.
Meanwhile, the most popular way users say they want to receive cyber security training is via video or written content – the option the cyber security professionals in the audience were least likely to vote for. If the people issuing the training aren't catering to user needs, it's no wonder that users don't absorb cyber security awareness training.
"It's a complicated topic," says Nurse. "There's this disparity between what we think is best and then what the users think is best. I might think I know what's best, but that might not be what's best for you. Are users' perceptions of what works for them correct?"
Much of corporate cyber security training remains based around warnings about things that might go wrong, such as phishing scams or payment fraud. It doesn't help that many organisations still treat a mistake as something the employee should be punished or mocked for – especially when phishing tests are actively trying to deceive staff. "People just want to get on with their jobs," adds Nurse.
Returning to the idea of cyber security professionals placing blame on users for incidents, Nurse is keen to stress that this isn't the right perspective, especially when so much of the internet and internet-connected technology and applications were built with security as an afterthought or something that's bolted on afterwards – if it's bolted on at all.
"When the internet was first built, security wasn't a priority – it was added afterwards. And we still see that with certain new technologies: security is added on after. But we're getting better at that with secure-by-design and similar concepts, which are making a difference in how technology is being built," he says.
However, what also needs to be taken into account is the idea of making something too secure. If users find an ultra-secure product too difficult to use, they'll look for other ways to get around it – and as demonstrated by shadow IT, when employees use their personal cloud accounts rather than the authorised enterprise accounts, this can bring its own risks.
"We can do more to ensure systems are built with users in mind. There needs to be a balance between security, functionality and usability – these three elements are really important," says Nurse. "If something is too secure, it risks being unusable, which means you'll have the issue of workarounds. If something is usable but not secure, of course it's going to be exploited, so there needs to be a balance."
Nurse suggests that the answer could be to involve users in the development cycle, testing the new application or product to ensure that it really is built with them in mind, while also ensuring security, functionality and usability are well balanced.
"That's important because having users involved ensures that you have that touchpoint. If you involve them throughout, you can try to make sure that what's built matches the user's needs and hits the requirements around security and privacy," he adds.
User safety and responsible usage in the age of AI
Several new technologies have emerged over the past two decades that have all made the mistake of not thinking about user security and safety from the start. Think of social media, smartphones and the internet of things (IoT), all of which emerged only for security issues to be considered once they were already in the wild. This cycle is still ongoing, and is now arguably moving faster than ever before.
"We've seen it time and time again, and now it's happening with AI," says Nurse. "And AI is moving so quickly, it's really blowing people away with the impact it has and the impact it will have, and it exposes us to potentially increased risk."
It's that speed of adoption that makes managing the risks around AI difficult. Whether through approved enterprise solutions or employees using their personal ChatGPT accounts, AI is in the workplace and in wider society.
But many users aren't thinking about the potential security and privacy risks around it. People are entering sensitive business information into AI tools to help them with their work – yes, it helps with efficiency, but given the black-box nature of so many AI models, this could put businesses at risk of breaches or worse.
For Nurse, it comes back to the human level: ensuring that people understand what AI is, how it works and the potential risks around it – and encouraging responsible usage.
"People are just using AI like any other tool without properly thinking about the implications, or whether they should be using it in the way they do," he says. "It's an area that we really need to focus on – the risk in the workplace, the risk in the personal space – and there's lots to be unpacked around what's safe AI use and what's ethical AI use."
However, he's also keen to stress that the burden of managing risk shouldn't be left to users. Far from it – the AI companies must take responsibility as well, by placing appropriate guardrails and safety measures on their products.
"Guardrails is a really interesting topic," says Nurse. "Some AI models have better guardrails than others. If you ask an AI to create a phishing email, some won't do that, some will create it for you, and some will create it if you circumvent the guardrails around asking the question."
For Nurse, whether it's around AI or around cyber security, the important thing is that those responsible for building and then securing technology and software think about the people who are using it. Because without understanding not only how people use technology, but also their behaviour and attitudes towards it, it's going to be difficult to keep people safe and secure.
"We need to spend time and effort on understanding behaviours and better appreciating that behaviour is the outcome of a complex network of variables that interact with each other. Behaviour can be informed by people's attitudes, culture, opportunities, motivations and social norms – there are so many things that can inform behaviour that we need to understand, especially when it comes to cyber security," says Nurse. "Understanding these fundamentals is important for how we approach security."