- Consulting firm Accenture surveyed bank leaders on the impact of generative AI on cybersecurity.
- Eighty percent of respondents said AI lets hackers move faster than banks can keep up.
- An Accenture security expert explains why banks are hampered and what is at stake.
Generative AI could be one of the most promising technological advances on Wall Street. It could also be one of the most threatening.
Bank executives feel they cannot protect themselves against what cybercriminals can do with generative AI, according to new Accenture data based on a survey of 600 banking cybersecurity executives. Eighty percent of respondents think generative AI is enabling hackers faster than their banks can respond.
Cybersecurity is a key element of customer trust, Valerie Abend of Accenture's financial services practice told Business Insider.
"Banks that really understand how important customer trust is, treat it as their most precious asset, and put cybersecurity at the heart of enabling it, those will be your winners," she added.
Banks have touted generative AI's ability to make their workers more productive and efficient. The technology is being used to do everything from helping software developers write code to letting analysts summarize thousands of documents into research reports. But Wall Street workers aren't the only ones using the technology to their advantage.
Armed with generative AI, bad actors can ingest more data than ever before and use the technology's ability to mimic humans to carry out more sophisticated and realistic scams.
These attacks, which target customers, bank employees, and their technology vendors, can have far-reaching consequences. Once criminals gain access, they can make fraudulent purchases, wire money, and drain accounts. They can also burrow deeper into corporate technology stacks, steal data, and install malware.
Bank executives are not blind to what's at stake. JPMorgan has said it spends more than $600 million a year on cybersecurity, while Bank of America's cyber spending has topped $1 billion a year. Some key technology executives, such as Goldman Sachs' Stephanie Cohen and Bridgewater CTO Igor Tsyganskiy, have even left the financial industry to fight AI-driven cyber threats more directly at technology companies.
But despite the millions of dollars banks spend to shore up their defenses, many IT executives believe generative AI is advancing too fast to keep up with. Only about a third of survey respondents (36%) said they felt they had a solid understanding of the rapidly evolving cybersecurity landscape.
To be sure, banks are using AI to detect vulnerabilities, deliver more robust threat-intelligence reports, and try to get ahead of attacks by analyzing more real-time data, Abend said. They also use AI to identify so-called toxic combinations, such as employees who have access to both approve and execute transactions, including wire requests. But these efforts, and the speed at which they can be deployed, are significantly hampered by the strict regulations banks must follow, Abend said.
Abend, who spent years working at regulators including the Office of the Comptroller of the Currency and the US Treasury Department, said that to use AI, banks must demonstrate they can maintain the controls and governance needed to stay within their risk appetite. They must think through how they adopt AI, which large language models they use, how third parties provide those models, how they protect the data that feeds the models, and who has access to the models' output.
Cybercriminals are taking advantage of new models, such as DeepSeek, to write malicious code and identify weaknesses, such as flaws in the cloud security of a given IP address, Abend said. Established generative AI providers, such as ChatGPT and Google Cloud, have blocked such activity, but newer models remain susceptible.
Third-party vendor risk
There are fintechs and startups developing AI-powered tools to help banks thwart cyberattacks. Alloy, which works with M&T Bank and Navy Federal Credit Union, released a new product this week to detect attacks, flag suspicious spikes in application volume, and reduce manual reviews during attacks.
But banks' own vendors and technology suppliers could provide another opening for the bad actors targeting banks. More than 70% of banks' breaches originate in their vendor supply chain, Abend said. Cybercriminals use generative AI models to comb through data to figure out which companies partner with banks and then exploit that vulnerability. Technology providers are not held to the same regulatory standards as banks, and banks' third-party risk management is often manual and based on limited, outdated data, Abend said.
"The reality is that while you can outsource a capability as a bank, you are not outsourcing the risk," Abend said. "Customer trust fundamentally depends on the bank protecting those customers' data and their financial information across the end-to-end supply chain."
Accenture research has found that maintaining customer trust helps banks achieve 1.5 times higher customer retention and 2.3 times higher revenue growth.
"This is not a back-office problem. Bank leaders really need to stop treating it as a compliance problem," she said.