Many of the early, successful RPA deployments we read about were implemented by large, IT-disciplined organizations accustomed to evaluating and implementing new technologies in an organized fashion. But those days are rapidly coming to an end. Thanks to the pain endured, expense paid, and lessons learned by those intrepid early adopters, RPA is now going mainstream. This means we’re starting to see significant deployments in smaller organizations that, unfortunately, don’t have the resources to establish COEs or the bandwidth to dedicate IT professionals to managing the RPA production environment. Sure, IT or external consultants still build most of the important automations in these organizations; however, in smaller enterprises, managing the production environment after all the RPA professionals have left the building is often dumped into the “capable” hands of the end-user groups. But are their hands really capable? That depends on a number of factors, not the least of which is the quality of the production management tooling left behind.
If you speak with any of the major RPA providers, you’ll hear lots of talk about performance metrics and analytics (number of bots in production, average automation execution times, job retries, dollars saved per hour, etc.). No doubt, all important information. Smaller organizations, however, haven’t yet gotten to the fun part of fine-tuning and scaling their RPA programs. They’re just now rolling out their first projects and grappling with how to manage the production environment.
I believe the only way SMBs can effectively manage their RPA production environments is if the person(s) performing the Production Manager role understands the following with regard to a given automation’s execution:
· What did the bot actually do?
· How has data involved in the automation changed over the course of its execution?
· What was the bot supposed to do?
· What to do when things go wrong?
This being the case, it is critically important that the production management tooling left behind not only provide this information but also do so in a way users can understand. Simply put, the tooling must “speak the language of the user”.
So how do we provide tooling that speaks the language of the user in a world where most error messages are more programmer-oriented and look like the following: “Invalid operation. An internal error occurred: 0x80070490”? What the heck?
What Did The Bot Actually Do?
For starters, the best way to help the user understand what a bot actually did is to do just that: show the user exactly what the bot did. And the best way to do this is via a video replay of the automation. Chances are, when the user conveyed the steps that went into the project specification document, he/she did so graphically, whether via a video or an annotated narrative; the best way to convey a process is graphically. That being the case, doesn’t it make sense to convey what a bot did in production the same way, via a video replay? This is especially important for unattended bots, since they generally do not run on a user’s desktop. If you are employing automation video replay, keep the following two points in mind:
1. During the automation’s execution, there may be information on the screen that is either not germane to the transaction or of a sensitive nature and should not be captured. This is especially applicable when recording attended bot automations, because the user may have other, unrelated applications open (like a document) that should not be memorialized in a video. If you intend to record and replay transactions, make sure the recordings are secured and access-controlled properly, and that the user has the ability to blur out sensitive information. Also, make sure the folks in compliance are informed that these transactions are being recorded; they will most likely have an opinion regarding which automations get recorded, and how.
2. Video recordings consume lots of storage, so you may want to employ recording strategies that lighten the storage load. For example, save only recordings of transactions that result in an error or exception. This will significantly reduce storage requirements.
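The error-only retention strategy above can be sketched as follows. This is a minimal illustration, not any vendor’s implementation: the class name, file handling, and “archive on failure” flow are all assumptions made for the example.

```python
import os
import tempfile

class ReplayRecorder:
    """Hypothetical sketch: buffer each run's video in a temp file,
    then keep it only if the automation ended in an error."""

    def __init__(self, archive_dir):
        self.archive_dir = archive_dir
        self.temp_path = None
        self.run_id = None

    def start(self, run_id):
        # A real tool would start screen capture here; we just create
        # the temporary file the capture would write into.
        fd, self.temp_path = tempfile.mkstemp(suffix=".mp4")
        os.close(fd)
        self.run_id = run_id

    def finish(self, succeeded):
        if succeeded:
            # Happy path: discard the recording to save storage.
            os.remove(self.temp_path)
            return None
        # Failure: move the recording into long-term, access-controlled storage.
        dest = os.path.join(self.archive_dir, f"{self.run_id}.mp4")
        os.replace(self.temp_path, dest)
        return dest
```

Recording everything but persisting only failures gives the Production Manager a replay for exactly the transactions he/she will be asked to diagnose.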
How Has Data Involved In The Automation Changed Over The Course Of Its Execution?
Understanding how a given step in the process impacts a given piece of data is critically important when trying to diagnose problems. The best way to do this is to include a step-by-step data log that is synced with the video replay and displays the status of all transactional data as it changes. This allows the user to see how data changed over the course of an automation and pinpoint the exact step at which a problem arose.
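One way to structure such a synced log is to tie each data change to both a step number and a video timestamp, so the viewer can jump straight to the moment a value changed. The following is a hedged sketch; the field names and classes are illustrative assumptions, not a real product’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class LogEntry:
    step: int              # automation step number
    video_offset_s: float  # seconds into the replay video
    field_name: str        # name of the transactional data field
    old_value: str
    new_value: str

@dataclass
class DataLog:
    entries: list = field(default_factory=list)

    def record(self, step, video_offset_s, field_name, old_value, new_value):
        self.entries.append(
            LogEntry(step, video_offset_s, field_name, old_value, new_value))

    def changes_to(self, field_name):
        """Every step at which a given field actually changed --
        useful for pinpointing where a bad value was introduced."""
        return [e for e in self.entries
                if e.field_name == field_name and e.old_value != e.new_value]
```

For example, `log.changes_to("po_number")` would return only the steps where the PO number changed, each carrying the video offset needed to seek the replay to that moment.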
What Was The Bot Supposed To Do?
When an automation is rolled out into production, the people who used to perform the process manually are still around and maintain the institutional knowledge regarding how the process is supposed to work. However, over time, people move on and that institutional knowledge moves with them. Therefore, creating a detailed specification document and linking it to the video replay is an important part of the equation. While seeing what a bot did and understanding how the data has changed is valuable, unless it is linked to a specification that details what the bot is supposed to be doing at each step, an important part of the story is missing. Linking the specification to the video replay and data log becomes more important the further away you get from launch.
What To Do When Things Go Wrong?
The goal of providing all the RPA production management tooling described above is to help users quickly understand how an error occurred or why an automated transaction was rejected. However, that doesn’t necessarily mean the user has the wherewithal to solve the problem. That is why these production management tools need to be integrated into an issue tracking system that links (if desired) all participants in the support network. By doing so, support personnel can easily review the same automation artifacts the user sees, and no time is wasted scheduling sessions on bot machines or relying on the user to reproduce the problem. By linking everyone to the same bot machines and artifacts, issues can be resolved in minutes rather than days. In fact, it is often the case that the support network can identify and fix a problem before the user discovers it. That’s great customer service.
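The integration described above amounts to bundling the failed run’s artifacts into the ticket itself, so support opens the same replay, data log, and spec section the user sees. A minimal sketch, assuming a generic tracker; the field names and URLs are hypothetical, not any particular tracker’s API.

```python
from dataclasses import dataclass

@dataclass
class AutomationIssue:
    """Links a failed run to the artifacts a support engineer needs."""
    run_id: str
    failed_step: int
    replay_url: str     # video replay of the run
    data_log_url: str   # synced step-by-step data log
    spec_section: str   # matching part of the specification document

def build_ticket_body(issue):
    """Render the ticket text so every participant starts from
    the same 360-degree view of the transaction."""
    return (
        f"Run {issue.run_id} failed at step {issue.failed_step}.\n"
        f"Replay: {issue.replay_url}\n"
        f"Data log: {issue.data_log_url}\n"
        f"Specification: {issue.spec_section}\n"
    )
```

Because the artifact links travel with the ticket, nobody has to reproduce the problem or schedule access to a bot machine before diagnosis can begin.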
The bottom line is that identifying worthwhile use cases and building automations is usually the easy part, because it’s fun, the possibilities are seemingly endless, and enthusiasm is high. Once the automation moves to production, however, the less glamorous work begins, and that is where the risk of failure is highest. To minimize this risk, you need to do the following:
· Clearly define and assign the role of Production Manager(s).
· Arm the Production Manager with tooling that allows him/her to identify and respond to issues quickly (see Exhibit 1 below for an example console).
· Ensure those tools speak the language of the user and not the programmer.
· Connect all support resources to the tooling so everyone shares the same 360-degree view of the transaction.
Exhibit 1 – Ratchet-X Replay Viewer
This exhibit depicts the Ratchet-X Automation Replay Viewer playing back an automation that is adding an invoice into QuickBooks. This replay is at the point where the automation is searching for a matching purchase order against which the invoice will be created. Pane 1 contains the replay window where the actual automation video is being replayed. Pane 2 shows the synced data log and is describing how the automation has just searched for and found the PO. Pane 3 shows the synced automation specification document which explains precisely what the automation will do when the PO is found. All panes at this point are synced to Step 7 in the automation.