The Bots are Coming, So How do We Control Them?
By Mike Kiser, Sr. Security Strategist, SailPoint
Want to know what the weather will be like today or the score of last night’s game? If asking your personal virtual assistant is your first port of call, you’re not alone. The explosion of virtual assistants in home environments underscores the reality that software applications are weaving themselves into the fabric of not only our personal lives but also our working lives.
Virtual assistants and other “bots” are experiencing a wave of popularity within today’s enterprises. From customer service chatbots to order fulfillment to making travel bookings for employees, the ability of bots to speed up and simplify such internal processes has many organisations looking to adopt the technology sooner rather than later.
However, as with many technologies, bots present both powerful opportunities and significant challenges when it comes to identity governance.
Bots vs. identity
The wave of bot adoption provides an opportunity for identity to become more intuitive and pervasive within a business. Bots can facilitate interaction between business users and the identity infrastructure, in the form of chatbots or other human-like request processors, allowing users to obtain reporting and analytics more rapidly. For instance, checking in on the progress of a given certification campaign can be done with very little human effort.
Bots may also be more involved in the actual process of governance itself, for example through bot-facilitated access requests.
Bots as identities
Given such a large wave of adoption, the potential for bots to be used without appropriate identity governance is significant. Automation programs that create bots ad hoc, for example, could present a real problem for businesses that fail to ask the right questions and monitor those programs from the start. In the face of rushed early adoptions, ensuring that identity standards are met is crucial if businesses want to stay ahead of the curve sustainably.
While many new bot-based initiatives look promising, one way to ensure success is to use governance models that have already been proven in production. Most often, this means treating bots in the same manner as contractor identities: a dedicated bot repository is established, just as it would be for a contractor. As bots are created, modified, or eliminated, this repository must be updated and that information brought into the identity governance solution. It also means applying time-based access and policy to keep tight restrictions on what bots can do within the environment.
Bots need to be controlled in the same way other identities are: their actions should be confined strictly within set boundaries. Analytics can also be deployed to verify that they have not been repurposed and are still fulfilling their expected function. While one of the greatest benefits of bots is that they reduce human effort, human oversight remains key to good governance, so every bot must (again, like a contractor) have a real-world person who is ultimately responsible for its governance.
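The contractor-style model described above can be sketched in a few lines of code. This is an illustrative sketch only: the `BotIdentity` class, its field names, and the example entitlements are assumptions for demonstration, not the API of any real identity governance product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class BotIdentity:
    """A bot registered like a contractor identity (hypothetical model)."""
    name: str
    owner: str  # real-world person ultimately responsible for the bot
    entitlements: set = field(default_factory=set)
    # Time-based access: rights lapse after 90 days unless recertified.
    expires_at: datetime = field(
        default_factory=lambda: datetime.utcnow() + timedelta(days=90)
    )

    def is_active(self) -> bool:
        # Expired bots lose all access until recertified by their owner.
        return datetime.utcnow() < self.expires_at

    def can_perform(self, action: str) -> bool:
        # Confine the bot's actions strictly within its granted boundaries.
        return self.is_active() and action in self.entitlements

# Example: a travel-booking bot with a named accountable owner
bot = BotIdentity(
    name="travel-bot-01",
    owner="jane.doe@example.com",
    entitlements={"book_travel", "read_calendar"},
)

print(bot.can_perform("book_travel"))      # within its boundary
print(bot.can_perform("approve_expense"))  # outside its boundary: denied
```

In a production deployment, records like these would live in the dedicated bot repository and be synchronised into the governance solution as bots are created, modified, or eliminated.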
The rapid rise of bots throughout organisations gives identity programs a chance to be enhanced, while also introducing a new class of identities to govern. By being proactive, asking the right questions, and using proven governance models, organisations can retain governance and oversight while openly welcoming the rapid adoption of this new technology.