How could artificial intelligence help in avoiding data breaches?
Security systems powered by artificial intelligence learn from historical activities, incidents, and breaches to build their own models autonomously, without constant human supervision.
As organizations grow, workforces become global, diverse, and distributed, and enterprises adopt new cloud and on-prem systems and deploy intelligent devices, the old model of static policies based on a fixed set of contexts (e.g., for access management: time, geo-location, device OS) starts breaking down. Policies grow in number, the context does not account for users' past history, and it becomes arduous to protect against future attack vectors.
This is where AI-powered security starts truly showing value. These systems are intelligent, in that they make decisions on their own, and insightful, in that they can look at data both broadly and deeply. They constantly learn and evolve by leveraging new data, so they are easy to maintain and proactive in nature. This area has evolved by leaps and bounds in the past few years and is critical to the detection and prevention of attacks and breaches. Some of the use cases are outlined below.
- AI with ML has been applied very effectively to sift through gargantuan amounts of data and establish identity profiles, which are then used to detect not just anomalous but also malicious behavior. Based on this, administrators can deploy "adaptive" authentication policies, or just-in-time privileges/rights, to de-risk access-related attacks that permanent or longer-lasting policies are vulnerable to.
- AI is all about the quality of data, its comprehensiveness, and the data science that drives how well it is analyzed (also known as the model). Quality refers to how well cleaned, prepared, and wrangled the data is for downstream consumption. Comprehensiveness refers to the various contexts and sources from which the tool collects data. For example, when a user accesses an app, they use an endpoint device (such as a mobile phone), from a location, traverse a network comprising firewalls, get authenticated, assume a role, and then perform some activity. A good IAM tool is able to gather information from all of these contexts (device, location, time, network, directory services, role-based access, etc.) and then "learns" about access patterns over a period of time. The learnings are then applied through adaptive/proactive policies to critical resources. This approach goes a long way in avoiding data breaches.
- Evolving from being prescriptive (providing broad recommendations on how to mitigate cyberthreats) to being directive (providing definite steps and automating them)
- Counter to its basic tenet of being completely autonomous, AI is evolving from siloed, in many cases unsupervised, learning toward a hybrid approach that combines human intelligence and inputs (supervised) with unsupervised learning. This results in more robust policies, which in turn means fewer false positives!
- AI is being used to orchestrate the configuration of adjacent and impacted systems to reduce the propagation and impact of breaches.
- Automated notifications and mitigation steps (e.g., blocking access or reducing to least privilege). Robotic Process Automation (RPA) also brings efficiencies to this area.
- Leveraging AI for role engineering and identity governance use cases, including automated implementation of separation of duties and risk-aware access workflow management.
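To make the identity-profiling idea in the first bullet concrete, here is a minimal sketch (all names and thresholds are hypothetical, not any vendor's implementation) of building a per-user profile from historical logins and mapping its anomaly score to an adaptive authentication decision:

```python
from statistics import mean, stdev

def build_profile(history):
    """Learn a simple identity profile from historical login events.

    Each event is a dict with 'hour', 'device', and 'country'.
    """
    return {
        "hour_mean": mean(e["hour"] for e in history),
        "hour_stdev": stdev(e["hour"] for e in history),
        "devices": {e["device"] for e in history},
        "countries": {e["country"] for e in history},
    }

def anomaly_score(profile, event):
    """Score 0..3: one point for each context signal that deviates."""
    score = 0
    if abs(event["hour"] - profile["hour_mean"]) > 2 * profile["hour_stdev"]:
        score += 1  # login at an unusual time of day
    if event["device"] not in profile["devices"]:
        score += 1  # previously unseen device
    if event["country"] not in profile["countries"]:
        score += 1  # previously unseen location
    return score

def adaptive_policy(score):
    """Map the anomaly score to an adaptive authentication decision."""
    if score == 0:
        return "allow"
    if score == 1:
        return "step-up-mfa"
    return "deny-and-alert"
```

A login that matches the learned profile passes silently, a single deviation triggers step-up MFA, and multiple deviations are blocked and flagged, which is the "adaptive" behavior described above.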
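The comprehensiveness point (gathering signals from device, location, network, authentication, and role contexts) can be sketched as a weighted risk score; the weights here are invented for illustration, whereas a real IAM tool would learn them from observed access patterns:

```python
# Hypothetical per-context risk weights; a real system would learn these.
CONTEXT_WEIGHTS = {
    "unmanaged_device": 0.3,
    "unfamiliar_location": 0.25,
    "untrusted_network": 0.2,
    "weak_authentication": 0.15,
    "privileged_role": 0.1,
}

def risk_score(signals):
    """Combine boolean risk signals from each context into a 0..1 score."""
    return sum(w for name, w in CONTEXT_WEIGHTS.items() if signals.get(name))

def access_decision(signals, threshold=0.5):
    """Grant access outright only when the combined risk stays low."""
    return "challenge" if risk_score(signals) >= threshold else "grant"
```

The point of the sketch is that no single context decides the outcome; it is the combination across contexts that drives the adaptive policy.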
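The hybrid supervised/unsupervised bullet can be illustrated with a small feedback loop (again a sketch under assumed names): an unsupervised detector raises candidate anomalies, and analyst labels are folded back in so patterns confirmed as benign stop alerting, reducing false positives:

```python
def detect(events, is_anomalous, benign_patterns):
    """Yield alerts for anomalous events not already labeled benign by analysts."""
    for event in events:
        signature = (event["user"], event["action"])
        if is_anomalous(event) and signature not in benign_patterns:
            yield event

def record_feedback(benign_patterns, event):
    """Supervised input: an analyst marks this event pattern as a false positive."""
    benign_patterns.add((event["user"], event["action"]))
```

The unsupervised side stays autonomous, while the human-supplied labels steadily tighten the policy, which is the "hybrid" combination described above.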
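The automated notification and mitigation bullet amounts to mapping alert severity to response steps; a toy version of such a playbook (the action strings are purely illustrative) might look like:

```python
def mitigate(alert):
    """Return the automated mitigation steps for an alert (hypothetical policy)."""
    steps = [f"notify security team about {alert['user']}"]
    if alert["severity"] == "high":
        steps.append(f"block access for {alert['user']}")  # hard stop on high risk
    elif alert["severity"] == "medium":
        steps.append(f"reduce {alert['user']} to least-privilege role")
    return steps
```

In practice an RPA or SOAR layer would execute these steps against the actual IAM and network systems rather than returning strings.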
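Finally, the separation-of-duties use case in the governance bullet reduces to checking role assignments against "toxic" combinations that one identity must never hold together; the combinations below are invented examples:

```python
# Hypothetical toxic combinations: roles one identity must not hold together.
SEPARATION_OF_DUTIES = [
    {"create-vendor", "approve-payment"},
    {"request-access", "approve-access"},
]

def sod_violations(assigned_roles):
    """Return every toxic combination fully present in a user's role set."""
    roles = set(assigned_roles)
    return [combo for combo in SEPARATION_OF_DUTIES if combo <= roles]
```

An AI-assisted governance tool would mine these combinations from entitlement and incident data instead of maintaining the list by hand, and could enforce the check automatically during access request workflows.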
This post originally appeared in a Quora Q&A session hosted in January 2020. Our CPO Archit Lohokare was asked to discuss the state of cybersecurity, Zero Trust, artificial intelligence and machine learning, and working in the security field, among other things. Stay tuned as we share more of his answers on our blog!