Robotic Process Automation (RPA) is an emerging technology that uses advanced scripting to learn a repetitive task by watching people perform it, and then automates that task directly through the user interface. According to McKinsey and Co., RPA and other automation technologies are projected to have a $6.7 trillion economic impact by 2025. The promised benefits of automating repetitive tasks, enhancing data-entry accuracy, delivering consistency at any scale, and reducing compliance risk through strict adherence to process and audit logs make it an obvious choice.
With the rapid adoption of RPA technology across the enterprise, there is a strong case for TBM offices to put RPA to work as well. Once a TBM initiative moves from being a strategic program to business as usual, TBM offices face cost pressures: they must deliver more (as expectations have risen) with fewer resources than before. TBM offices can use RPA to alleviate these pressures by automating mundane data-related tasks, freeing the team to focus on strategic, transformational analyses.
Data is the lifeblood of TBM, and its quality is critical to the TBM office's success. Anecdotally, about 75 percent of the time spent by TBM office resources goes to data-wrangling issues. If automation boosts their productivity, TBM analysts can focus on higher value-added tasks such as insight-hunting and strategic analysis. Anecdotally, organizations in the billion-dollar IT-spend club average 50 or more data sources feeding their cost allocations. The problem grows more severe when international business brings Transfer Pricing, Tax, and other regulatory constraints. With that many data sources, the labor required to keep pace with the time-sensitive TBM process of monthly book-closing rises significantly. RPA brings a deflationary force to that labor.
Intelligent automation releases capital and talent to focus on other priorities of the TBM program, but it is not a silver bullet. To maximize the ROI of automation, the process to be automated should pass a few simple criteria:
1. The process runs frequently.
- RPA takes resources (labor and non-labor) to implement and must be maintained just like any other application. Frequency and duration of execution need to be included when deciding to automate. For instance, if the average RPA takes 200 hours to implement and saves one eight-hour FTE day a month, the RPA will not be time-positive for roughly two years.
2. The problem is expected to be ongoing.
- If the process is apt to change, is expected to undergo revision soon, or has an uncertain future, then automation is not prudent.
3. The process is repetitive, with little expert intervention needed.
- RPA does not use Artificial Intelligence to make business decisions. If the outcome of a scenario cannot be determined by a rules-based decision tree, then the process is not ready for automation. RPA can be combined with AI, but that is beyond the scope of this discussion.
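The break-even arithmetic behind the first criterion can be sketched in a few lines; the figures below are the illustrative ones from above, not benchmarks:

```python
def payback_months(build_hours: float, hours_saved_per_month: float) -> float:
    """Months until the cumulative hours saved repay the build investment."""
    return build_hours / hours_saved_per_month

# Illustrative figures: a 200-hour build that saves one 8-hour FTE day per month
months = payback_months(build_hours=200, hours_saved_per_month=8)
print(months / 12)  # years until the RPA is time-positive
```

Ongoing maintenance hours would lengthen the payback further; a fuller model would subtract support effort from the monthly savings.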
In the context of TBM data, the following considerations apply as well:
- Are you considering RPA when an API may suffice? RPA may appear more user-friendly, but an API can be more cost-effective and robust.
- Will the source system UI change much? Changes in the User Interface (UI) of the source system(s) can break an RPA, driving up support costs. Consider how frequently the UI of the system to be interfaced changes.
- Is the data source stable? TBM offices deal with many fast-changing data sources, especially at the outset of a new TBM program. Changing datasets can cause RPA executions to fail, and this risk should be factored into the decision.
TBM offices can leverage RPA to boost productivity and scale their efforts in the following three scenarios:
For data ingestion, the primary modes of data sourcing are:
- Data files, which reside in shared drives or a secured File Transfer Protocol (FTP)
- Data scraped from front-end User Interface (UI) systems
- Data files sent by email in a pre-defined format to a pre-determined mailbox
- Data procured through email interactions
Each of these ingestion modes differs in its candidate strength for the use of RPA.
The first two modes merit elaboration, as they are ultimately the target state for a mature data ingestion process.
As a best practice, the authors recommend that the TBM office shape the data ingestion process so that, apart from APIs, only the first mode of ingestion remains.
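A minimal sketch of the recommended first mode, sweeping a shared drive for data files and staging them for the TBM load, might look like the following; the folder layout and `*.csv` pattern are assumptions for illustration:

```python
import shutil
from pathlib import Path

def stage_new_files(shared_drive: Path, staging: Path, pattern: str = "*.csv") -> list[Path]:
    """Copy data files from the shared drive into the TBM staging folder,
    skipping files that have already been staged."""
    staging.mkdir(parents=True, exist_ok=True)
    staged = []
    for src in sorted(shared_drive.glob(pattern)):
        dest = staging / src.name
        if not dest.exists():  # idempotent: re-runs do not re-copy files
            shutil.copy2(src, dest)
            staged.append(dest)
    return staged
```

Because already-staged files are skipped, the sweep can run on a schedule without duplicating loads.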
Once data is in the TBM system, data issues often surface because this is the only place where disparate sets of data can be reconciled against each other. There is a strong business case for using RPA to help maintain clean data once it is in the TBM system. RPA can take over the repetitive task of identifying, contacting, and tracking the individual users who input bad data into a feeder system, sending thousands of customized messages in minutes to hours, a task that would normally take a dedicated FTE.
Problems in this space tend to fall into three categories:
- Source data issues that become apparent once disparate data sets are brought together in the system and the correct answer can be found through logical deduction. For instance, in a list of middleware connections that also includes the names and keys of interfaced systems, if some of the keys are missing, it may be possible to look them up in the appropriate system using the name.
- Source data issues that can only be corrected through expert intervention. For example, a mismatch in Network Attached Storage (NAS) volume names between a list of database backups and a list of storage volumes.
- Errors that arise from individuals entering incorrect information into a feeder system. For example, miscoded time entry data.
Only the final category fits the criteria for a good RPA candidate: the problem occurs frequently, dealing with it is repetitive, and it is expected to be ongoing. If you can isolate and report on the errors in your TBM system, you have a strong candidate for automation.
The best way to correct these kinds of errors would be to contact each individual, discuss their problem, and guide them to entering the correct data, but such an effort could be extraordinarily time-consuming in large organizations. RPA offers the next best solution: it can customize each request for the specific audience and issue.
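As a rough sketch of that customization step, the bot can group an exported error report by user and fill a message template; the field names, sample rows, and template here are hypothetical:

```python
from collections import defaultdict

# Hypothetical error-report rows exported from the TBM system
errors = [
    {"user": "alice@example.com", "field": "cost center", "record": "TE-1042"},
    {"user": "alice@example.com", "field": "project code", "record": "TE-1077"},
    {"user": "bob@example.com", "field": "cost center", "record": "TE-2210"},
]

TEMPLATE = (
    "Hi, the following time entries need correction:\n"
    "{issues}\n"
    "Please update them in the feeder system by end of week."
)

def build_messages(rows):
    """Aggregate all issues affecting a user into one customized message."""
    by_user = defaultdict(list)
    for row in rows:
        by_user[row["user"]].append(f"- {row['record']}: missing {row['field']}")
    return {user: TEMPLATE.format(issues="\n".join(lines))
            for user, lines in by_user.items()}

messages = build_messages(errors)  # one message per user, however many issues they have
```

This follows the single-contact-per-user design; a once-per-issue design would simply skip the aggregation step.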
Tips on Building an RPA Data Cleansing Solution in Your TBM System
- Build your error report specifically for use with the RPA. A standard user report may not serve well in the long run if your users' needs for how the information is displayed diverge from your RPA's configured format. To avoid change management issues in the future, it's best to give the RPA its own report.
- Carefully consider what you want your communications to look like. Will the RPA contact people once per issue or aggregate all issues affecting a user into a single contact? Will it aggregate issues across different reports or datasets? What other stakeholders should be informed of the issue?
- For each issue-and-stakeholder combination identified, you will likely need a custom communication.
- Involving your stakeholders in shaping the communications they will receive helps make the messages more salient to the audience, thereby increasing the response rate. Think of yourself as a direct-mail marketer: your job with these communications is to get people to respond.
- Design your RPA’s use of the templates in a way that enables you to change the format of the template at any time.
- Consider whether you want the RPA to keep score. Data on who was contacted, who responded, and who has the most issues is useful for giving leadership visibility and for finding areas where your feeder-system processes may be breaking down.
Just as with the prior case for data cleansing, in which RPA is used to contact individuals with custom notifications and instructions for fixing their data, RPA can also be used to enhance customer engagement with the final reporting out of the system.
TBM systems have deep reporting capabilities, but using the reports requires users to log into the system. Moreover, once a user logs in, they have to find the specific information they need by locating the report and then selecting the appropriate slicers, filters, and columns for their purpose.
RPA has the ability to enter the TBM system directly and interact with the various filters, slicers, and column pickers to generate the report. RPA can then export the now-customized report and send it to the requesting individual.
The key to making this work is building a table that lists, for each report to be used this way, the attributes each user needs modified. Any standard table that both a user and the RPA can access will suffice; Excel tables and SharePoint lists will be easiest for most organizations.
The table should be structured so that the report to be accessed, the individual to receive it, and each filter or column selection each have their own column: essentially a table of filter values. The table acts both as a registry of report recipients and as instructions for the bot.
Starting from the initiation signal, the bot would access and read the table, open the report, apply the filters, export the report, and then email the requestor.
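A sketch of that registry-and-loop design is below. The column names are hypothetical, and the export and send steps are stand-in callables, since the real implementation depends on your TBM system's UI or API:

```python
import csv
import io

# Hypothetical registry: one row per delivery, one column per filter value
REGISTRY_CSV = """\
report,recipient,business_unit,fiscal_month
Cost Transparency,alice@example.com,Retail,2024-06
Cost Transparency,bob@example.com,Wholesale,2024-06
"""

def read_registry(text: str) -> list[dict]:
    """Parse the registry into per-recipient instructions for the bot."""
    return list(csv.DictReader(io.StringIO(text)))

def run_deliveries(rows, export_report, send_email):
    """For each row: apply the filters, export the report, email the requestor."""
    for row in rows:
        # Every non-report, non-recipient column is treated as a filter value
        filters = {k: v for k, v in row.items() if k not in ("report", "recipient")}
        attachment = export_report(row["report"], filters)  # stand-in for the UI/API steps
        send_email(to=row["recipient"], attachment=attachment)
```

Because the registry is a plain table, report owners can add or change recipients without touching the bot itself.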
Considerations for developing an RPA
- Develop a separate service account to ensure internal controls compliance. Treat the RPA bot as a digital headcount with its own Staff ID.
- Provision it with the requisite access to all the data sources and to your TBM system.
- Provide a complete process map so the RPA developers can design the automation of getting data from source systems and loading it into your TBM system, along with resolution paths for exceptions.
- Decide how the RPA will be initiated. Will it be timed off a date? Initiated through e-mail, IM, or text? Will it be always on and waiting to be initiated, or only available at certain times?
- Any time your RPA will be pulling data from the TBM system to e-mail people (scenarios 2 and 3), build a dummy version of the report for testing. You will need to test the RPA thoroughly throughout its development, and you will need complete control of the testing dataset and of the people the RPA contacts. What you need is a report that is formatted identically to the live-data report but can be controlled directly by the dev team. The easiest way to do this is to export the live report, delete the data, build the dataset you need in Excel, upload that dataset as a table, and then build a report on top of the table.
- If your RPA is expected to aggregate, manipulate, or use the data it is pulling, build a data dictionary for your reports; this will help your RPA developers immensely.
- List out your failure modes. Which fields constitute a complete record, without which the system would be unable to put a row in the error report? In which fields is missing data acceptable? In which fields would missing data cause an exception?
- For each error, decide what you want the RPA to do in the event of a mistake. Halt completely? Fail silently? Notify someone? How should the bot record and communicate the errors and the actions taken?
- For both success and failure cases (especially failure), provide the custom communication for the RPA to send out by email.
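The failure-mode decisions above can be made explicit with a small dispatch table, so every anticipated error has a declared action; the error names and actions here are illustrative:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("tbm-rpa")

# Illustrative policy: what the bot should do for each anticipated failure mode
FAILURE_POLICY = {
    "missing_required_field": "halt",      # incomplete record: stop the run
    "missing_optional_field": "continue",  # tolerable gap: log and move on
    "source_unreachable": "notify",        # escalate to a human
}

def handle_failure(error_kind: str, detail: str) -> str:
    """Look up the configured action, record it, and tell the caller what to do."""
    action = FAILURE_POLICY.get(error_kind, "halt")  # unknown errors halt by default
    log.info("error=%s action=%s detail=%s", error_kind, action, detail)
    return action
```

Keeping the policy in data rather than buried in branching logic makes the bot's behavior easy to review with auditors and stakeholders.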
With the appropriate combination of people, process, and technology, you can lay a scalable foundation for pace-setting TBM operations. RPA brings a powerful deflationary force for scaling your data governance process.
Nate Bender was a TBMA at Exelon, the largest electrical utility in the US by count of customers served, where he designed and implemented Exelon's cost transparency and consumption-based billing models. He is about to start a new role as Founder for the Distributed Energy Resources, an internal entrepreneurship role dedicated to looking for 10x solutions to customer problems. Prior to Exelon, Bender founded a consulting firm specializing in the design, development, and outsourcing of consumer products. He has a Global MBA from Johns Hopkins University and a BA in East Asian Languages from the University of Maryland.
Manik Patil is about to start in his new role of Modernization Evangelist at American International Group (AIG)*. Prior to that, he was a Global Senior Director at AIG, a Fortune 100 Global Company, where he led Technology Business Management efforts for over $2B Tech Spend. He collaborated with top leaders in Product Management, Operations, and Finance to develop strategy, set priorities, and drive strategic initiatives. Manik combines his Business Operations expertise with a deep understanding of strategy, AI/ML-driven analytics, and governance to help CxOs plan and manage business transformation. He holds a Master's degree in Management from Carnegie Mellon University and is a principal member of the TBM Council. You can follow him on Twitter.
*Note: The views and opinions expressed in this article are those of the authors and do not represent those of their employers.