2023 ForgeRock Breach Report underscores the need for AI-powered identity
We are excited to announce the release of our fifth annual ForgeRock Identity Breach Report. Our goal each year is to discover what's trending: how enterprises are being breached, how many records are being exposed, and how attackers are getting past security controls that cost companies roughly $88 billion a year.
As in previous years' reports, we have published our key findings, including the industries most vulnerable to attack, the rising costs, and the leading cause of breaches (unauthorized access wins that dubious honor once again!). But this year's report revealed one trend we have not seen before: a significant drop in the number of breached records. In fact, the number was the lowest we've seen in five years. So can we infer that the bad guys are giving up, taking their keyboards and going home? If only!
An obvious challenge of storing so much sensitive personal information in digital form is that this data is highly valuable to criminals. And it appears that over the past year they've been going for quality over quantity: the number of records containing protected health information (PHI) rose by 160%. Why? Because a single healthcare record in the U.S. now fetches $675 on the black market. Such records can be used for a wide range of fraud, from filing false insurance claims to obtaining prescription drugs illegally. No wonder healthcare is the most highly targeted sector, accounting for 36% of all breaches.
An emerging threat: they're using AI to devise new attacks (and we can use AI to stop them)
This year, we've all been hearing a lot about generative AI, which produces various types of content, including text, imagery, audio, and synthetic data. Expect to hear a lot more about how generative AI makes it easier to impersonate others and perpetrate fraud.
Researchers recently discovered underground hacking communities using ChatGPT to generate malware, create encryption suitable for ransomware, and devise other fraudulent schemes. The new AI wave is contributing to a huge increase in voice and video deepfakes.
As criminals find novel ways to use AI, we can use AI to thwart their efforts. Decisioning AI, which fuels our Autonomous Access product, is essential for elevating protection against AI-based threats in a granular and responsive way.
AI that specializes in risk decisioning can take in a range of signals about who is trying to do what, and then determine what they can or cannot do next. Decisioning AI can also prevent attempts to gain unauthorized access by incorporating multiple contextual signals into the decision process, such as login location, IP network reputation, and the distance between login attempts and registered MFA devices.
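To make the idea concrete, here is a deliberately simplified sketch of how contextual signals like the ones above might feed a decision. Every name, weight, and threshold here is invented for illustration; a real decisioning AI product learns these patterns from behavioral data rather than hard-coding rules.

```python
# Toy risk-decisioning sketch (hypothetical, not a real product API).
# Combines three contextual signals into an allow / step-up / deny decision.
from dataclasses import dataclass

@dataclass
class LoginContext:
    location_is_usual: bool    # is the login coming from a location this user normally uses?
    ip_reputation: float       # 0.0 (clean network) .. 1.0 (known-bad network)
    km_from_mfa_device: float  # distance between the login attempt and the registered MFA device

def decide(ctx: LoginContext) -> str:
    """Return 'allow', 'step_up' (require additional MFA), or 'deny'."""
    risk = 0.0
    if not ctx.location_is_usual:
        risk += 0.3                      # unfamiliar location adds risk
    risk += 0.5 * ctx.ip_reputation      # bad network reputation adds risk
    if ctx.km_from_mfa_device > 500:
        risk += 0.3                      # login far from the user's MFA device
    if risk >= 0.7:
        return "deny"
    if risk >= 0.3:
        return "step_up"
    return "allow"
```

A familiar location on a clean network yields `"allow"`, a single anomaly triggers a step-up challenge, and several anomalies together are denied outright, which is the granular, responsive behavior the text describes.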
Organizations with AI-powered identity and access management (IAM) can detect unexpected activity, stopping intruders in real time as they try to authenticate. They can also automate the process of eliminating over-provisioned access that enables attackers to use one compromised account to move laterally to higher-value targets.
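The cleanup of over-provisioned access mentioned above boils down to a simple comparison: which entitlements does an account hold that it never actually uses? The sketch below illustrates that core idea with an invented data model; it is not how any particular IAM product implements it.

```python
# Hypothetical sketch: flag entitlements that were granted but never
# exercised, so they can be reviewed and revoked before an attacker
# abuses them to move laterally to higher-value targets.
def over_provisioned(granted: dict[str, set[str]],
                     used: dict[str, set[str]]) -> dict[str, set[str]]:
    """Map each account to the entitlements it holds but has not used."""
    return {
        account: unused
        for account, entitlements in granted.items()
        if (unused := entitlements - used.get(account, set()))
    }

# Example: alice holds admin rights she has never exercised.
granted = {"alice": {"billing:read", "admin:write"}, "bob": {"billing:read"}}
used = {"alice": {"billing:read"}, "bob": {"billing:read"}}
flagged = over_provisioned(granted, used)  # {"alice": {"admin:write"}}
```

In practice the "used" side would come from access logs over a review window, and flagged entitlements would go through a governance workflow rather than being revoked automatically.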
Third-party attacks: do your partners' security practices measure up to yours?
The 2023 report shows how a tactic that emerged in last year's report has become routine: breaching high-value organizations through their third-party partners and vendors. These attacks increased 136% from the year before and accounted for more than half of all breaches this time around. Attackers know that hospitals, for example, face strict regulations for protecting patient data, but hospitals' suppliers may operate under far less stringent requirements.
In one breach alone, an accounts payable vendor supporting hundreds of healthcare organizations was the victim of a ransomware attack, which allowed attackers to access systems and documents containing patient-related data. The breach affected more than 657 healthcare organizations and almost two million people.
Weaknesses in the integrations between third-party suppliers and the organizations that rely on them, such as poor access controls, vulnerable API integrations, or a lack of MFA for employee accounts, give attackers an opening. Without strong identity security and governance, API security, and a least-privileged access model, an attacker can breach one workforce user's account and move laterally, not just across a vendor's systems but also across its partners' systems, to find and exploit valuable data.
Our report shows that ransomware and unauthorized access were the leading attack vectors in third-party service provider breaches.
High-value data needs high-quality protection at the identity perimeter
The underlying theme of this year's report is that it takes only one compromised credential to pave the way for unauthorized access and the exposure of sensitive data, including customer data. This is not news to most of us, but it's worth reasserting now, in light of so many attacks on partner ecosystems.
This trend means that every identity must be secured — not just your workforce users and contractors, but every identity in your partners' organizations as well. Implementing single sign-on (SSO), passwordless multi-factor authentication (MFA), and effective identity governance practices is vital for preventing unauthorized access. And this year we're seeing AI become essential both for stopping breaches and for handling an onslaught of threats that are themselves AI-generated.
Download the 2023 Identity Breach Report for all the data and to learn about ways you can protect your customers and your organization from breaches.