Uncertainty Quantification: What it is and why AI needs it
14 Apr 2026

Most AI systems give you a single answer.
A forecast. A prediction. A recommended action.
But in high-stakes environments – designing energy systems, managing infrastructure, assessing environmental risk – a single answer is not enough. You also need to know how much you should trust it.
Uncertainty Quantification (UQ) answers that question.
UQ adds a critical layer to AI systems by measuring the confidence behind their predictions. Instead of returning a single value, uncertainty-aware models reveal where the evidence is strong and expose where it is weak – where the model is being cautious, or stretching beyond what it truly knows.
For organisations deploying AI in complex or safety-critical environments, that distinction matters. Knowing what a model predicts is useful, but knowing when to trust it – and when not to – is what makes AI deployable.
Why AI systems need Uncertainty Quantification
Most machine learning pipelines are designed to optimise predictive accuracy. Models are trained to minimise error across historical datasets so that they can generate the most likely outcome for new data.
While this approach often produces accurate predictions, it does not necessarily produce well-calibrated confidence. AI systems can appear highly certain even when they are operating in unfamiliar conditions or when the data supporting their prediction is limited. This can lead to over-confidence – one of the most widely recognised risks in machine learning systems.
For example, what happens when a forecasting model trained on historical energy demand is deployed during unprecedented weather conditions? When a healthcare model encounters patients whose characteristics differ from those in its training dataset? Or when a financial model tries to interpret market behaviour during events it has never seen before?
Without a mechanism to quantify uncertainty, the model may still produce a confident prediction even when its evidence is weak.
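To make this concrete, here is a minimal sketch – with made-up data, not from any real deployment – of how a standard classifier can report near-certain probabilities on an input far from anything it was trained on:

```python
# Illustrative only: a plain classifier reports near-certain
# probabilities even on inputs it has never seen anything like.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two well-separated training clusters.
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression().fit(X, y)

# A point nowhere near the training data still gets an
# almost-certain class probability.
x_far = np.array([[40.0, 40.0]])
print(clf.predict_proba(x_far))  # approx [[0. 1.]]
```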
UQ tackles this problem by attaching confidence estimates to predictions. Instead of returning a single value, models provide a range of plausible outcomes or a probability that the prediction is correct. This allows decision-makers to distinguish between predictions supported by strong evidence and those that require caution.
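One simple way to produce such a range – illustrative only, using a bootstrap ensemble rather than any particular production method – is to train several models on resampled data and report their spread:

```python
# A minimal sketch of returning a range instead of a single value,
# using a bootstrap ensemble. Data and model choices are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, (200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 200)

# Each ensemble member sees a different bootstrap resample of the data.
models = []
for _ in range(50):
    idx = rng.integers(0, len(X), len(X))
    models.append(DecisionTreeRegressor(max_depth=4).fit(X[idx], y[idx]))

x_new = np.array([[3.0]])
preds = np.array([m.predict(x_new)[0] for m in models])

# Report a central estimate plus a plausible range, not a single number.
print(f"prediction: {preds.mean():.2f} ± {2 * preds.std():.2f}")
```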
Two kinds of uncertainty
Not all uncertainty is the same. It has different sources – each one requiring a different response.
The first is aleatoric uncertainty, which reflects inherent variability in the world. Sensors are noisy, physical systems fluctuate, demand varies. This type of uncertainty cannot be removed because it’s part of the system itself. It can, however, be modelled and incorporated into predictions so that systems remain robust despite variability.
The second is epistemic uncertainty, which arises from a lack of knowledge. It occurs when models are trained on limited data, when important variables are missing, or when the model encounters conditions outside its experience. Unlike aleatoric uncertainty, epistemic uncertainty can often be reduced by collecting more data or improving the model.
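As a rough illustration, an ensemble of probabilistic models lets you approximate both at once: averaging each member's noise estimate gives the aleatoric part, while disagreement between members signals the epistemic part. The numbers below are invented:

```python
# A sketch of the standard law-of-total-variance decomposition
# used with ensembles. All values here are made up for illustration.
import numpy as np

def decompose(member_means, member_variances):
    """Split predictive uncertainty at a single input point:
    aleatoric = average of each member's predicted noise variance,
    epistemic = variance of the members' mean predictions."""
    aleatoric = member_variances.mean()  # irreducible noise
    epistemic = member_means.var()       # model disagreement
    return aleatoric, epistemic

# Five members agree closely, but all report noisy observations:
means = np.array([2.1, 2.0, 2.2, 1.9, 2.0])
variances = np.array([0.25, 0.30, 0.28, 0.27, 0.26])
print(decompose(means, variances))  # high aleatoric, low epistemic
```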
Understanding the difference between these two types of uncertainty helps organisations determine whether they should design systems that tolerate variability or invest in learning more about the system.
How uncertainty is measured
There are several approaches used to quantify uncertainty in machine learning systems. Among the most widely used are Bayesian inference, ensemble modelling, and Gaussian processes.
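A Gaussian process makes the idea tangible: it returns a standard deviation alongside every prediction, and that standard deviation grows as you move away from the training data. The kernel and data below are illustrative choices, not a recommendation:

```python
# A minimal Gaussian process example: predictive uncertainty
# grows away from the training data. Illustrative setup only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
y_train = np.sin(X_train[:, 0])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X_train, y_train)

# Query one point near the training data and one far from it:
# the predictive standard deviation grows with distance.
for x in ([[2.5]], [[9.0]]):
    mean, std = gp.predict(np.array(x), return_std=True)
    print(f"x={x[0][0]}: prediction {mean[0]:.2f}, std {std[0]:.2f}")
```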
At digiLab, our work is grounded in Bayesian statistics and probabilistic machine learning – approaches that place uncertainty at the centre of the modelling process rather than treating it as an afterthought.
Bayesian statistics provide a principled way to incorporate prior knowledge into machine learning models. In many scientific and engineering domains, experts already have a strong understanding of how systems behave. Bayesian inference allows that knowledge to be encoded into the model and updated as new data becomes available. This produces predictions that reflect both existing understanding and observed evidence while also quantifying how confident the model should be.
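A toy conjugate-Gaussian example shows the mechanics. An expert prior is combined with a handful of noisy observations, and the posterior mean is a precision-weighted blend of the two – with a variance that says how confident to be. All numbers here are invented:

```python
# A minimal sketch of a Bayesian update with a conjugate Gaussian
# model (known noise variance). All numbers are illustrative.
import numpy as np

# Expert prior: the quantity is around 50, give or take 10.
prior_mean, prior_var = 50.0, 10.0**2
noise_var = 5.0**2  # assumed measurement noise

observations = np.array([58.0, 61.0, 57.0])

# Conjugate update: posterior precision is the sum of precisions,
# and the posterior mean is a precision-weighted average of prior
# belief and observed data.
n = len(observations)
post_var = 1.0 / (1.0 / prior_var + n / noise_var)
post_mean = post_var * (prior_mean / prior_var + observations.sum() / noise_var)

print(f"posterior: {post_mean:.1f} ± {post_var**0.5:.1f}")
# More data shrinks the variance; a stronger prior pulls the mean back.
```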
This approach has several practical advantages. Because the model starts with structured prior knowledge, it can learn efficiently and operate safely even when data is limited. It also makes the modelling process more transparent; by separating prior assumptions from observed data, engineers can see how strongly a prediction is supported by evidence and where the model is relying on prior structure because data is sparse.
Rather than hiding uncertainty, these models expose it.
Why uncertainty matters in practice
The value of UQ becomes clear when applied to real-world systems.
Fusion energy with UKAEA: fast surrogates, quantified risk
- Fusion R&D involves extreme physics and massively expensive simulation. In collaboration with the UK Atomic Energy Authority (UKAEA), we developed uncertainty-aware surrogate models capable of accelerating certain turbulence simulations by around 100,000 times, saving hundreds of thousands of CPU hours. These models do more than produce fast predictions – they also quantify where those predictions are reliable and where the model is extrapolating beyond available data. This allows engineers to explore design options rapidly while still understanding where further simulation is needed.
Water networks with Yorkshire Water: observing the unobserved
- Infrastructure networks can only be monitored at a limited number of sensor locations, yet operators still need visibility across the entire system. In partnership with Yorkshire Water, we developed SenSiteUQ, a system that determines where sensors should be placed across sewer and drainage networks. Uncertainty modelling identifies where each new sensor would provide the most information about the wider system. This improves monitoring of flood risk and pollution by reducing blind spots, while ensuring that large-scale sensor deployment remains efficient (a generic sketch of the idea follows below).
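To give a flavour of the underlying idea – this is a generic sketch, not the SenSiteUQ algorithm itself – one simple strategy is to greedily place each new sensor wherever a probabilistic model of the network is currently most uncertain:

```python
# Generic uncertainty-driven sensor placement (not SenSiteUQ):
# greedily place each sensor where a GP is least certain.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
candidates = rng.uniform(0, 100, (200, 1))  # candidate sensor sites

def true_signal(x):
    """Stand-in for the unobserved quantity across the network."""
    return np.sin(x / 10.0)

placed_X, placed_y = [], []
for _ in range(5):
    if placed_X:
        # Fit a GP to the sensors placed so far and find where it is
        # least certain about the rest of the network.
        gp = GaussianProcessRegressor(kernel=RBF(10.0)).fit(
            np.array(placed_X), np.array(placed_y))
        _, std = gp.predict(candidates, return_std=True)
    else:
        std = np.ones(len(candidates))  # no data yet: all sites tie
    best = candidates[np.argmax(std)]   # most informative next site
    placed_X.append(best)
    placed_y.append(true_signal(best[0]))
    print(f"placed sensor at {best[0]:.1f}")
```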
Similar principles apply in healthcare and finance
- In clinical settings, uncertainty estimates can help doctors understand how confident an AI system is in a diagnosis or risk assessment.
- In financial modelling, uncertainty-aware forecasts help analysts evaluate risk distributions rather than relying on single-point predictions.
In each case, the benefit is the same: better decisions made with a clearer understanding of the evidence behind them.
The key takeaway
The central idea behind UQ is simple: AI systems should not only produce predictions, but also communicate how much those predictions can be trusted.
When uncertainty is made explicit, decision-makers gain visibility into where models are supported by strong evidence and where caution is required. That visibility enables safer automation, more strategic data collection, and better risk-aware decision-making.
As AI moves deeper into complex, safety-critical and regulated environments, this capability will become increasingly important. The difference between experimental AI and deployable AI is not just accuracy – it is measurable confidence.
In this series, we’ll explore how uncertainty quantification enables safer, more reliable AI across industries where the stakes are highest.