The financial services industry could be increasingly vulnerable to cyber-enabled fraud perpetrated by threat actors leveraging artificial intelligence tools, according to a Treasury Department report released Wednesday that examines AI-specific cyber risks to the critical infrastructure sector.

The report, led by Treasury’s Office of Cybersecurity and Critical Infrastructure Protection to fulfill a requirement in President Joe Biden’s AI executive order, delivers no cyber-related mandates to the financial services sector, nor does it recommend or argue against the use of AI in the industry’s work. But the report, based in part on interviews with representatives from 42 financial services and tech-related companies, provides warnings to the industry at large about AI’s potential to worsen fraud while also sharing best practices and AI use cases for cyber and fraud prevention.

“Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden administration is committed to working with financial institutions to utilize emerging technologies while safeguarding against threats to operational resiliency and financial stability,” Under Secretary for Domestic Finance Nellie Liang said in a statement. “Treasury’s AI report builds on our successful public-private partnership for secure cloud adoption and lays out a clear vision for how financial institutions can safely map out their business lines and disrupt rapidly evolving AI-driven fraud.”

The fear of an uptick in cyber-enabled fraud is fueled by the increased accessibility of emerging AI tools, the report notes, which could give threat actors an “advantage by outpacing and outnumbering their AI targets,” at least initially.

To combat that advantage, the report pushes financial institutions to “expand and strengthen their risk management and cybersecurity practices to account for AI systems’ advanced and novel capabilities, consider greater integration of AI solutions into their cybersecurity practices, and enhance collaboration, particularly threat information sharing.”

Managing AI-related cyber risks should follow the same best practices used to protect IT systems, the report said. Several of the participating financial institutions told the report’s authors that their current practices match elements of the National Institute of Standards and Technology’s AI Risk Management Framework, though “many also noted that it is challenging to establish practical and enterprise-wide policies and controls for emerging technologies like Generative AI.”

Other financial sector report participants said they were developing AI-specific risk management frameworks in-house, many of them guided by the principles laid out in NIST’s AI RMF as well as the Organisation for Economic Co-operation and Development’s AI principles and the Open Worldwide Application Security Project’s AI security and privacy guide.

But the experimentation with and development of financial firms’ in-house AI systems and frameworks underscores “a widening capability gap” between the biggest and smallest companies in the sector.

“One firm has stated that it has approximately 400 employees working on fraud-prevention AI systems, and AI service providers noted being approached with thousands of use cases by larger firms,” the report said. “Smaller firms report that they do not have the IT resources or expertise to develop their own AI models; therefore, these firms solely rely on third-party or core service providers for such capabilities.”

Many financial institution participants said they believed AI adoption was important because of the technology’s potential to “significantly improve the quality and cost efficiencies of their cybersecurity and anti-fraud management functions.” Among the ways in which cyber threat actors can utilize AI, the report specifically called out social engineering, malware and code generation, vulnerability discovery and disinformation. Cyberthreats to AI systems include data poisoning, data leakage, evasion and model extraction.

The automation currently used by financial institutions for “time-consuming and labor-intensive anti-fraud and cybersecurity-related tasks” will likely be enhanced by generative AI “by capturing and processing broader and deeper data sets and utilizing more sophisticated analytics.” Technologies of that kind, the report added, can also enable financial firms to take on “more proactive cybersecurity and fraud-prevention postures.”

Going forward, financial services sector participants said it would be helpful to have “a common lexicon” for AI tools so that firms, third parties and regulators are speaking the same language in their discussions. Report participants also said their firms would “benefit from the development of best practices concerning the mapping of data supply chains and data standards.”

The Treasury Department said it would work with the financial sector, as well as NIST, the Cybersecurity and Infrastructure Security Agency and the National Telecommunications and Information Administration to further discuss potential recommendations tied to those asks.

In the coming months, Treasury officials will collaborate with industry, other agencies, international partners and federal and state financial sector regulators on critical initiatives tied to AI-related challenges in the sector.
