Conversational Insights Private Preview
Apptio’s Conversational Insights feature unifies financial, operational, and organizational data so you can ask, not search, and get value faster.
As a private preview tester, you’ll get early access to this feature and play a key role in shaping its development. You’ll have the opportunity to explore capabilities, share your insights, and help influence how we enhance the experience for all users.
Safe & Secure
Security, privacy, and compliance are at the core of our design, and we are committed to maintaining the confidentiality and security of our customers’ data. We do not use any of your data to train any models, nor does any of your data go to any third-party models.
Data usage guidelines
IBM Apptio and IBM Cloudability are adding features that use generative AI. Customers will have the ability to opt-in or opt-out of generative AI features. These features use foundation models, also known as large language models. The available foundation models are trained on a broad set of data and can be used for a variety of tasks including summarization, classification, content generation, and extraction.
No company data is stored or used to train our AI models, and we do not use customer information to retrain our LLMs; client privacy is our priority. Unless separately authorized by customers through a voluntary feedback submission, IBM will not use customer content or model outputs to train or retrain the foundation models. Content provided by customers and content generated by the foundation models are not visible to the model creators. Model creators may be IBM or third-party companies including, but not limited to, Anthropic, Meta, and Mistral.
Data Privacy & Training
No. Per Apptio’s AI usage guidelines, no company data is stored or used to train AI models. We are committed to maintaining the confidentiality and security of customer data, and we do not use any customer information to train or retrain our LLMs.
AI development is governed by the IBM guardrails published on IBM’s AI ethics page (AI ethics – IBM).
AWS Bedrock hosts the LLM. When we use the model within Bedrock, requests (and any related data) do not leave AWS.
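As a minimal sketch of what an in-AWS model invocation looks like, the request body below follows the shape of Bedrock’s Anthropic Messages API; the model ID and prompt are assumptions for illustration, and the actual call (shown only in a comment) would be made with an AWS SDK client from within the AWS environment.

```python
import json

# Hedged sketch: the shape of a request body for an Anthropic model hosted
# in AWS Bedrock. The model ID below is an assumption; in production this
# body would be sent via boto3's bedrock-runtime client, e.g.:
#   client.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
# so the request and its data stay inside AWS.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # illustrative

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Summarize last month's cloud spend."}
    ],
}

print(json.dumps(body, indent=2))
```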
No. The API key belongs to the individual user who connects to the APIs, so that user’s permissions are maintained whether they query directly or through an LLM via our API. Row-level security (RLS) and user/role-based security are preserved through the API connection.
Data Flow & Architecture
- LLM is invoked by Apptio code through our agent
- Apptio’s agent makes the API calls and sends the result to the LLM (hosted on AWS Bedrock)
- Data and processed analysis are sent back to the user in the chat window
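The three steps above can be sketched as a simple loop. All function names here (`call_llm`, `call_apptio_api`) are placeholders, since the real agent and API interfaces are not public; the point is that the agent, not the LLM, executes the API calls.

```python
# Hypothetical sketch of the agent flow; function names are placeholders.

def call_llm(prompt: str) -> str:
    """Stub for a request to the LLM hosted in AWS Bedrock."""
    return f"LLM response to: {prompt[:40]}"

def call_apptio_api(endpoint: str) -> dict:
    """Stub for a secure Apptio API call made with the user's credentials."""
    return {"endpoint": endpoint, "rows": []}

def answer_question(question: str) -> str:
    # 1. The LLM is invoked by Apptio code through the agent.
    plan = call_llm(f"Plan API calls for: {question}")
    # 2. The agent makes the API calls and collects the results.
    data = [call_apptio_api(e) for e in ["costs", "budgets"]]
    # 3. Data and processed analysis go back to the user's chat window.
    return call_llm(f"Answer {question} using {data}")

print(answer_question("What drove last month's spend increase?"))
```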
High-Level Architecture:
- Client Layer: Embedded within Apptio
- MCP Server Layer: MCP Servers deployed in an Apptio data center make the calls to the API tools to get data from Apptio
- Data Sources: Apptio Costing & Billing Data Source/Cost Model
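In the Model Context Protocol (MCP), servers expose data-fetching capabilities as tools declared with a name, a description, and a JSON Schema for their inputs. The tool below is a hypothetical example of how the MCP Server layer might describe one Apptio data tool; the tool name and fields are assumptions, not Apptio’s actual interface.

```python
import json

# Hedged sketch of an MCP tool declaration. "get_cost_data" and its
# input fields are hypothetical; MCP tools are declared with a name,
# description, and JSON Schema for their inputs.
tool = {
    "name": "get_cost_data",
    "description": "Fetch cost model data from Apptio Costing & Billing",
    "inputSchema": {
        "type": "object",
        "properties": {
            "table": {"type": "string"},
            "period": {"type": "string"},
        },
        "required": ["table"],
    },
}

print(json.dumps(tool, indent=2))
```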
Security & Access Controls
- End user authenticates through Frontdoor via the user’s existing browser session
- Secure API calls are made to the respective Apptio data sources using the user’s credentials from their Apptio session through Frontdoor
- Data returned through the same secure pathway
- All user permissions and row-level security maintained throughout the flow
The API connection uses the individual user’s credentials and maintains their exact permission set. If a user cannot access specific cost centers, projects, or data in the TBM Studio UI, they cannot access that same data through the MCP Server API calls.
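The guarantee above can be illustrated with a toy row-level security filter (not Apptio’s implementation): because every query runs under the calling user’s permission set, rows the user cannot see in TBM Studio never reach the model.

```python
# Illustrative sketch only: row-level security applied before any data
# reaches the LLM. Users, cost centers, and rows are made-up examples.
ROWS = [
    {"cost_center": "CC-100", "spend": 42_000},
    {"cost_center": "CC-200", "spend": 13_500},
]

USER_PERMISSIONS = {
    "alice": {"CC-100"},            # alice cannot see CC-200 in the UI...
    "bob": {"CC-100", "CC-200"},
}

def query_costs(user: str) -> list[dict]:
    allowed = USER_PERMISSIONS.get(user, set())
    # ...so the API returns only the rows her permission set allows,
    # regardless of whether the query came from the UI or from the LLM.
    return [r for r in ROWS if r["cost_center"] in allowed]

print(query_costs("alice"))
```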
Compliance & Regulatory
Conversational AI inherits Apptio’s existing compliance posture. Data processing occurs within the same security framework as direct TBM Studio access. Customer data remains subject to the same data processing agreements and privacy controls already in place.
See the IBM Trust Center for more information.
Apptio maintains SOC2 Type II and other certifications. The MCP Server operates within this certified environment and does not change the underlying compliance posture of the TBM Studio platform.
Accuracy & Reliability
Conversational AI writes out its reasoning process in the UI chat. Using that information, you can trace through the model to validate the data. You can also ask Conversational AI what assumptions it made and why it drew particular insights.
Conversational AI does not extrapolate from data or documents found on the web. It only returns actual data from your TBM Studio instance and cost model. Because responses are based on real data from your system, the risk of hallucination is significantly reduced. However, users should always validate the AI’s interpretations and analysis of the data.
Write Permissions & Agentic Capabilities
Conversational AI is currently read-only. All data access inherits permissions from the user’s credentials, ensuring that users can only access data they are already authorized to view through the standard TBM Studio interface.
Conversational AI is provided with various tools, which it uses in an agentic manner, while keeping the user in the loop, to answer the user’s questions. It discovers tables, metrics, and columns that are relevant to the user’s query and issues multiple queries asynchronously to retrieve data needed for analysis.
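The asynchronous query pattern described above can be sketched with `asyncio`. The query names and the `run_query` stub are assumptions; the real tool interfaces are not public.

```python
import asyncio

# Hedged sketch of issuing multiple discovered queries concurrently,
# rather than one at a time. run_query is a stub for a real API query.
async def run_query(name: str) -> dict:
    await asyncio.sleep(0)  # stands in for network latency
    return {"query": name, "rows": []}

async def gather_data(queries: list[str]) -> list[dict]:
    # asyncio.gather runs all queries concurrently and preserves order.
    return await asyncio.gather(*(run_query(q) for q in queries))

results = asyncio.run(gather_data(["cost_by_app", "budget_variance"]))
print([r["query"] for r in results])
```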
Additional Enterprise Concerns
Conversational AI operates within your existing data governance policies since it uses your current API permissions and access controls. It can be incorporated into your AI governance framework as a sanctioned tool for TBM data analysis.