There is much attention on applications that leverage artificial intelligence (AI), machine learning (ML), and other software to take action without having to be told what to do. While there is a good degree of hype and science fiction, there are also more and more situations where this software is applied practically to add business value. Many such possibilities exist for machine learning applications within the technology business management (TBM) space.
Today, computing power is abundant, easily available, and inexpensive. As a result, computers run many business processes, generating tremendous amounts of detailed data. This data, if analyzed and used properly, can help a business serve its customers better, identify new revenue streams, optimize existing processes, and gain competitive advantage.
However, there are significant challenges in getting business value from this data.
- The sheer volume and detail make it impossible for a human – or group of humans – to effectively analyze and glean meaningful insights. Manual analysis can’t produce results fast enough, and by the time the human analysis is complete, the information is stale and not useful.
- Human brains can only analyze detail to a certain level. Patterns that represent potentially useful information often go unnoticed, leading to missed business opportunities.
- The data itself isn’t perfectly aligned or organized to lend itself to easy analysis. Given that the data comes from different systems, there is not always an obvious way to link it together. In addition, as much of this data is the result of human entry, fields can be input incorrectly or left blank, making it more difficult to draw value from it.
So, while there is gold waiting to be mined from your data, the effort to extract it often outweighs its perceived value.
Leveraging data to improve business value has always been a fundamental part of TBM. Within this framework, valuable information comes from combining IT financial and operational data to understand the cost of delivering technology services and the opportunities to optimize that delivery. The data involved is often incomplete and comes from different sources, but by applying a standard model and taxonomy, and a technology to cleanse and enrich it, data becomes valuable and provides insights that help optimize investments.
TBM + ML
Your TBM practice does not require that data be complete or completely aligned to be useful. In fact, TBM solutions provide a means to model and calculate metrics and KPIs by working with your best available data and applying standard, best-practice methods of calculation. In addition, they're flexible enough to accommodate new and/or better data as it becomes available.
But TBM is still only as good as the combination of these standards and the human brain. As AI solutions become more prevalent and practical, TBM can take advantage of this technology to greatly advance its capabilities. Machine learning, in particular, has a very direct application for TBM.
Machine learning applications use algorithms to make predictions about data, so that the outputs become more relevant and accurate over time. Here's how it works: the algorithm determines outputs from a set of inputs by mapping or labeling the data. Either a programmer provides the labels (supervised machine learning) or the algorithm determines them itself (unsupervised). The outputs can be classifications, estimations, or calculations. The key is that the algorithm uses probability to make its determinations: it does its mapping based on statistical methods and refines those methods as the sample set of data grows.
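To make the supervised case concrete, here is a toy sketch (not any vendor's actual algorithm; the labels and records are invented): a labeler that counts which tokens appear under which label, scores new records by those frequencies, and naturally gets sharper as more labeled examples arrive.

```python
from collections import Counter, defaultdict

class TokenLabeler:
    """Toy supervised learner: maps records to labels using token
    frequencies, and improves as more labeled examples are added."""

    def __init__(self):
        self.token_counts = defaultdict(Counter)  # label -> token frequencies

    def train(self, text, label):
        for token in text.lower().split():
            self.token_counts[label][token] += 1

    def predict(self, text):
        # Score each label by how often its training data used these tokens.
        scores = {}
        for label, counts in self.token_counts.items():
            total = sum(counts.values()) or 1
            scores[label] = sum(counts[t] for t in text.lower().split()) / total
        return max(scores, key=scores.get)

labeler = TokenLabeler()
labeler.train("aws ec2 m5.large compute hours", "Compute")
labeler.train("s3 standard storage gb-month", "Storage")
print(labeler.predict("ec2 t3.micro compute"))  # -> Compute
```

A real implementation would use proper probabilistic models (e.g., naive Bayes or a trained classifier), but the shape is the same: statistical scoring that is refined as the sample set grows.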
One of the main challenges in executing TBM is that the initial data is disparate and often incomplete, yet as the data grows, it can be refined over time. This aligns perfectly with the way ML works. Statistical methods provide the best output based on the available input and are continuously refined as the set of data grows and diversifies.
So how can machine learning be applied to your TBM practice? Here are four ways Apptio is looking at leveraging ML to improve efficiencies and shorten your time to value:
- Data enrichment
- Data classification
- Cost model improvement
- Insights hunting
Data enrichment
As organizations look to adopt TBM, one of their biggest concerns is the value of their data. In my experience, while there is always room for improvement, there is often more value than most people believe. In many cases, much can be inferred from partial records or from different names for the same entity or object. Often an organization will assign or hire someone to clean up the data: an effort of manual data reconnoitering and resolution. The objective of that effort is to use partial information to fill in gaps, completing partial values and filling in missing fields. That same effort will often use a reference source to expand on the data provided.
Example #1: spotting trends in cloud data
Billing and consumption data from one or more cloud providers can contain tens of thousands of lines per month. It may not be a big task to see which products were used and by whom. What is challenging is uncovering how those products are being used, and being able to spot trends in usage that indicate potential cloud sprawl or other uncontrolled and costly consumption. In addition, it’s difficult to relate that cloud data to other reference and operational data to understand the applications, services, or business units supported by that cloud product. Machine learning can be employed to analyze this mountain of data to predict runaway costs or identify places to optimize.
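As a minimal sketch of the trend-spotting idea (all service names and dollar figures below are hypothetical), one simple signal is average month-over-month growth in spend per service; anything growing past a chosen threshold gets flagged for review. A real ML approach would use richer models, but the intent is the same.

```python
def monthly_growth(spend):
    """Average month-over-month growth rate across a spend series."""
    changes = [(b - a) / a for a, b in zip(spend, spend[1:])]
    return sum(changes) / len(changes)

# Hypothetical monthly spend per service, in dollars.
spend_by_service = {
    "analytics-pipeline": [1000, 1500, 2300, 3400],  # ~50%/month: sprawl?
    "billing-api":        [800, 820, 790, 810],      # flat: fine
}

THRESHOLD = 0.20  # flag anything averaging >20% growth per month
flagged = [svc for svc, s in spend_by_service.items()
           if monthly_growth(s) > THRESHOLD]
print(flagged)  # -> ['analytics-pipeline']
```

In practice this check would run across tens of thousands of billing lines and be combined with the reference data described above, so a flagged trend can be traced back to the application or business unit driving it.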
An ML algorithm can work faster and more completely than its human counterpart. It can not only fill in the gaps but also run the reference lookups. The completed data and additional reference information can then be used to weight cost allocations or better understand the total cost of operating that component. In addition, it could identify situations such as components that are end-of-life or require special support. In this way, the basic data provided in a CMDB can instantly be enriched without the need for lengthy, error-prone manual analysis and research.
Data classification
One of the keys to successfully building an IT cost model in TBM is the ability to classify records from the various data sources, financial or operational. The classification process often demands that thousands of different field and record types be examined and classified according to the standard TBM taxonomy. Initial data mapping can be manually intensive, and regular data updates usually yield new or changed data that has not been mapped and must be examined and assigned its place.
Example #2: refining classifications
Machine learning can take data inputs, examine them, and instantly classify records according to the taxonomy through predictive analysis. Some data sources, like financial extracts or fixed-asset lists, are notoriously difficult because they often contain only numeric codes or partial, basic text descriptions. Machine learning can perform the initial classification and then refine it over time as additional data is gathered, with only minor human intervention. This would reduce the initial configuration effort of a TBM solution and greatly reduce the ongoing data maintenance caused by new or changed data. TBM analysts could then focus on making decisions and recommendations rather than cleaning and reclassifying data.
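A toy sketch of that refinement loop (the GL lines, codes, and category names are invented for illustration): terse financial descriptions are classified by token votes, unknown records start out unmapped, and a single human correction is absorbed so that similar records classify correctly from then on.

```python
from collections import Counter, defaultdict

# Hypothetical GL lines: terse codes and partial descriptions, the kind of
# financial-extract input the article calls notoriously difficult.
training = [
    ("6100 SW MAINT ORACLE", "Software"),
    ("6110 SW LIC MSFT",     "Software"),
    ("7200 TELECOM CIRCUIT", "Network"),
]

token_votes = defaultdict(Counter)  # token -> label votes
for desc, label in training:
    for tok in desc.lower().split():
        token_votes[tok][label] += 1

def classify(desc):
    votes = Counter()
    for tok in desc.lower().split():
        votes.update(token_votes[tok])
    return votes.most_common(1)[0][0] if votes else "Unmapped"

print(classify("6120 SW MAINT SAP"))  # "sw"/"maint" vote -> Software
print(classify("9999 UNKNOWN ITEM"))  # no votes -> Unmapped

# Refinement: a human maps the unknown record once; the model keeps it,
# so future data loads classify it automatically.
for tok in "9999 unknown item".split():
    token_votes[tok]["Facilities"] += 1
print(classify("9999 UNKNOWN ITEM"))  # now Facilities
```

The point of the sketch is the workflow, not the algorithm: initial classification happens instantly, and each correction feeds back into the model instead of being repeated at every data refresh.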
Cost model improvement
After an IT cost model is first built, that model is refined over time by additional data and expansion of the scope of the cost metrics being calculated. Operational and reference datasets are added to help understand and relate how applications and services consume various infrastructure and other services. This work relies on the ability to find and leverage patterns in multiple datasets to relate that data together, and then determine the best strategy with which to allocate costs from layer to layer and object to object in the model.
Machine learning algorithms are purpose-built for these tasks. Unsupervised machine learning algorithms can discover patterns and relationships across multiple datasets. They can also construct new and more granular allocation strategies, unfettered by an administrative interface that a human would need to manage. In addition, tapping into a community of TBM practitioners expands the available data and experience to drive even greater potential for learning.
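One way to picture unsupervised discovery here: clustering servers by utilization profile to suggest allocation tiers, with no labels provided up front. Below is a toy one-dimensional k-means (all utilization numbers are hypothetical), just to show the shape of the technique.

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Tiny 1-D k-means: group numbers around k centroids."""
    random.seed(seed)
    centroids = random.sample(values, k)
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

# Hypothetical average CPU utilization per server: two natural usage tiers
# emerge without anyone labeling them in advance.
cpu_util = [3, 5, 4, 70, 82, 75, 6, 78]
for group in kmeans_1d(cpu_util, k=2):
    print(sorted(group))
```

Production systems would cluster on many dimensions at once (CPU, storage, tickets, spend) with more robust algorithms, but the idea carries over: the groupings come out of the data itself and can seed finer-grained allocation strategies.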
Insights hunting
One of the biggest benefits of TBM is the ability to offer valuable insights through the combination and modeling of IT finance, operational, and consumption data. Identifying these insights requires an ability to recognize trends and patterns in data, to pull data together in different ways, to calculate probabilistic outcomes, and to project future possibilities.
Machine learning can unlock the tremendous insights stored in the terabytes of disparate financial, reference, and operational data available, accelerating your ability to realize value and make a business impact.
The promise of TBM + ML integration
There is no doubt that the renewed interest in ML and its practical applications is timely for those who practice TBM. By tapping into the data of the entire community, machine learning can not only help an organization in its present state but also apply the community's learnings to guide that organization toward a desired future state.
Imagine insights that present themselves based on leading indicators and expected future action, instead of always looking at historical data to fix things. Now we're starting to stretch ourselves into science fiction!
Bill Balnave is Regional Vice President of Solutions Consulting at Apptio and an avid insights hunter.