Socrates is often credited with the advice: “If what you want to say is neither true, nor good or kind, nor useful or necessary, please don’t say anything at all.” In other words, the decision-making that precedes speech is of paramount importance: not just what to say, but how to say it, and whether to speak at all in the first place. The words that follow are mere execution of the decision!
The quality of our lives depends directly on the quality of the decisions we make. Businesses and AI are no different, and whilst large language models like ChatGPT have certainly captured lightning in a bottle, a bigger and more valuable avatar of AI remains untamed: Decision Intelligence.
Let us set AI aside for a moment and ask what most businesses care about. Invariably it is the top line (revenue, growth), the bottom line (operational expenses, efficiency), and how to improve both sustainably, i.e., through customer experience and compliance. Beneath this layer sit domain-specific objectives: Sales would like to increase conversion rates, Operations would want to improve customer retention. Obviously, the real lever that controls these business outcomes is the day-to-day decisions that the domains and the business make. These decisions come with at least two dependencies: firstly, they cannot be made in isolation from other processes within and across domains, and secondly, there is a limited budget with which to act on them, i.e., there are business constraints.
What if AI could place itself at the heart of this supremely valuable framework and model the business outcomes directly, churning out the optimal decisions, within the constraints, that maximise business outcome value? This is what gets us to Decision Intelligence.
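To make this concrete, here is a minimal sketch of what "optimal decisions within constraints" can mean in practice. The actions, costs, and values below are entirely hypothetical; the point is simply that we search for the set of customer-level actions that maximises expected outcome value without exceeding a budget.

```python
from itertools import product

# Hypothetical customer-level actions, each with an expected outcome
# value (e.g. uplift in retention value) and a cost to execute.
actions = {
    "discount_offer":   {"value": 120.0, "cost": 40.0},
    "loyalty_upgrade":  {"value": 90.0,  "cost": 25.0},
    "priority_support": {"value": 60.0,  "cost": 30.0},
}
budget = 60.0  # the business constraint

def best_decision(actions, budget):
    """Brute-force search over all on/off combinations of actions,
    keeping the highest-value combination whose cost fits the budget."""
    best_set, best_value = frozenset(), 0.0
    for flags in product([False, True], repeat=len(actions)):
        chosen = [name for name, on in zip(actions, flags) if on]
        cost = sum(actions[name]["cost"] for name in chosen)
        value = sum(actions[name]["value"] for name in chosen)
        if cost <= budget and value > best_value:
            best_set, best_value = frozenset(chosen), value
    return best_set, best_value

chosen, value = best_decision(actions, budget)
```

At real scale this brute force is replaced by proper optimisation (integer programming, for instance), but the framing is the same: decisions as the variables, business outcomes as the objective, and budget as the constraint.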
Gartner defines Decision Intelligence as a practical domain framing a wide range of decision-making techniques, bringing multiple traditional and advanced disciplines together to design, model, align, execute, monitor, and tune decision models and processes.
This is a paradigm shift from the data-first approach followed by industry leaders so far. The hypothesis was that investing in big data solutions and making enterprise-wide data available to everyone in the organisation, via catchy dashboards, would miraculously improve the quality of decision-making across the organisation. Research shows that a majority (around 60%) of such data investments are wasted. Even the more sophisticated modelling-led initiatives convert to substantial business value for only about 1% of all AI models built!
The decisions-first approach turns this value proposition on its head by solving directly for the customer-level business decisions that optimise business outcomes. With Decision Intelligence, business value projections can be modelled on historical data, with confidence intervals, even before the AI-led decision model is implemented. This provides a basis for prioritising the AI initiatives that could create the most positive business impact, and through an iterative testing journey, the elusive business value becomes more and more self-evident and ubiquitous.
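As an illustration of projecting value with confidence intervals from historical data, the sketch below uses a percentile bootstrap over per-customer uplift. The uplift numbers are made up for the example; only the Python standard library is assumed.

```python
import random

random.seed(42)
# Stand-in for historical per-customer uplift observed under a
# candidate decision policy (e.g. extra value per customer, in GBP).
historical_uplift = [random.gauss(5.0, 2.0) for _ in range(500)]

def bootstrap_ci(data, n_resamples=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean of `data`."""
    means = sorted(
        sum(random.choices(data, k=len(data))) / len(data)
        for _ in range(n_resamples)
    )
    lo = means[int(n_resamples * alpha / 2)]
    hi = means[int(n_resamples * (1 - alpha / 2)) - 1]
    return lo, hi

low, high = bootstrap_ci(historical_uplift)
# (low, high) bounds the projected average uplift per customer at 95%
# confidence, before any model is deployed.
```

An interval like this, multiplied out over the customer base, is exactly the kind of pre-implementation value projection that lets a business rank candidate AI initiatives by expected impact.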
No wonder Gartner estimated that by the end of 2023 more than 33% of large organisations would have analysts practising decision intelligence, including decision modelling. But this is just the beginning: a McKinsey study points out that Decision Intelligence will create 63% of all business value from AI by 2030, more than any other form of AI, e.g., Generative AI.
The question then arises: why are startups lining up to build generative AI and not so much Decision Intelligence? The answer lies partly in the fact that building a large pre-trained foundational language model is more achievable (though not easy) than building foundational decision intelligence models. Looking at the human analogy, people make different decisions given the same set of external prompts all the time, whereas expecting a similar answer from a conversational AI model is perhaps much more palatable. The reality is that before we even think of foundational decision models across the board, there is a huge opportunity to build these models by domain. Personal finance and health, for example, constitute two supremely important areas of life that consume everyone's attention, and there is definitely an opportunity to build foundational decision models within these domains.
Whilst conceptualising Decision Intelligence, the other dimension we need to grapple with is whether we need fully automated decisioning on every occasion. Obviously, as in life, these use cases are not cut equally across the industry. For some, due to a lack of data or trust, a light-touch AI-guided approach is appropriate, whereas for others 100% decision automation and optimisation is the smartest choice. This is a very interesting paradigm, as it has the potential to strike a balance between human and artificial intelligence, and perhaps to address future alignment challenges in AI applications.
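One simple way to realise this spectrum, sketched below with a hypothetical confidence threshold, is a human-in-the-loop gate: the model's decision is executed automatically only when its confidence is high enough, and otherwise is surfaced as a recommendation for a domain expert to review.

```python
AUTOMATION_THRESHOLD = 0.9  # hypothetical cut-off, tuned per use case

def route_decision(decision: str, confidence: float,
                   threshold: float = AUTOMATION_THRESHOLD):
    """Automate high-confidence decisions; route the rest to a human."""
    if confidence >= threshold:
        return ("automated", decision)
    return ("human_review", decision)

# High confidence is executed automatically; low confidence becomes
# an AI-guided recommendation for an expert.
routed = [route_decision("approve", 0.97),
          route_decision("decline", 0.55)]
```

Moving the threshold towards 0 approaches full automation; moving it towards 1 approaches a purely advisory, AI-guided mode, so the same mechanism covers both ends of the spectrum.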
From the perspective of the industry, this also means the data scientist's job changing from making machines learn (machine learning) to learning with machines (something we at Zinia like to call 'Reciprocal Learning'). Surely there is still scope for data scientists to dig into the mathematical details of algorithms, but the business value and decision intelligence game will be played at a more holistic level. How do they design the AI test cases? Which KPIs do they target so that business value is conclusively delivered by the AI test case? Once model outcomes are available, how do they trade off optimality against the risk associated with different outcomes? What degree of machine-led recommendation do they allow for which use cases? How do they design the operational tests that deploy these recommendations, and how do they learn from the results of these tests? Do they iterate over the degree of reliance on machine recommendations versus domain experts' recommendations until they find the right balance of business value and associated risk? And not to forget the necessity of trust in the outcomes and the explainability of each AI step.
Handling bias and explainability is another big emerging theme that will require a data scientist's focus. Both factors cut across data, models, decisions, and outcomes. Explainability, for example, is not just about why a decision was made, but also, once the decision is made and the outcome registered, about how we explain the entire business outcome chain. The collected dataset could be biased, so the AI built on it will need to ensure it does not carry (and amplify) those biases.
Finally, continuous post-analysis to understand where the risks and opportunities lie will need a data scientist's attention. What is the feedback from outcomes into improving data, modelling, and decision-making? This is how the data scientist's role will metamorphose, playing a pivotal role in businesses transitioning to 'Human-centred Decision Intelligence': an approach that manages AI risks more ethically and efficiently with automation, for businesses and human society.
Author: Aashutosh Mishra 15/03/2023