
Sruthi Kolukuluri

Content Manager




When Forecasting Isn’t Enough: Why We Invest in Causal Time-Series Models

March 19, 2026 · 5 min read

Forecasting sits at the core of every pricing and revenue management system. If you can predict demand accurately, you can optimize prices. If you can anticipate the load, you can allocate capacity more efficiently. Over the past decade, machine learning has significantly improved these predictions. Models are more expressive, more data-hungry, and more powerful than ever.

And yet, in practice, we kept encountering the same issue: a model could look excellent on standard accuracy metrics and still behave poorly when used to simulate price changes.

This gap between what is usually considered accurate and what is actually accurate for decision-making applications is what led our team to work on causal forecasting for time-series systems.

The Hidden Bias in Pricing Data

The root of the problem is neither a lack of data nor the size of the model. It is structural.

In most industries, prices are not set randomly. They are set deliberately. When demand is expected to be high, operators increase prices. When demand is expected to be weak, prices are lowered to stimulate sales. That means the historical data reflects a decision process layered on top of demand dynamics.

As a result, high prices frequently appear alongside high demand. Not because high prices cause high demand, but because both were driven by underlying market conditions.

A standard forecasting model trained to minimize prediction error will absorb that pattern. It will learn that price and demand move together. From a purely predictive standpoint, this is expected behavior when the data is biased.

The problem emerges when we ask a different question: what if we change only the price?
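This confounding effect is easy to reproduce. The toy simulation below (illustrative only, with invented coefficients) generates data where an unobserved market condition drives both price and demand, and the true causal effect of price on demand is negative. A naive regression of demand on price nevertheless recovers a positive slope:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Unobserved market condition drives BOTH price and demand.
market = rng.normal(size=n)                      # e.g. seasonality, events
price = 10 + 2.0 * market + rng.normal(scale=0.5, size=n)

# True causal effect of price on demand is NEGATIVE (-1.5),
# but strong market conditions lift demand by +4.0 per unit.
demand = 50 - 1.5 * price + 4.0 * market + rng.normal(scale=1.0, size=n)

# A naive regression of demand on price absorbs the confounding:
slope = np.polyfit(price, demand, 1)[0]
print(f"naive price coefficient: {slope:.2f}")   # positive, far from -1.5
```

The fitted coefficient is positive even though raising the price, all else equal, lowers demand. A model that uses this coefficient to simulate a price change would predict the wrong direction of response.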

Causal Learning: A Way To Answer “What If?”

Forecasting models are trained to answer the following question: given how the world has typically behaved according to historical data, what outcome should we expect?

But pricing teams intervene in the world. They need to know how demand responds when they adjust prices. That requires estimating price elasticity — the causal effect of price on demand.

A model that fits historical data perfectly may still misestimate elasticity if the historical pricing strategy was correlated with demand expectations. In other words, the model may be excellent at predicting observed outcomes but unreliable at predicting the unobserved ones.

This distinction is subtle but critical. Once a model is used to guide pricing decisions, its ability to estimate the impact of changes matters more than its ability to reproduce past patterns.

Rethinking the Training Objective

Our research starts from a simple premise: if a model is meant to support decisions, it should be trained in a way that isolates the effect of those decisions.

Instead of training a single network to directly predict demand from all available features, we decompose the learning problem. We explicitly separate baseline demand dynamics from treatment effects. Conceptually, we distinguish between what is happening because of context — seasonality, route characteristics, time-to-departure — and what is happening because of the price itself.

To do this rigorously, we build on a statistical framework known as orthogonal learning. Without going into the mathematical details, the key idea is to model how decisions were made in historical data, and use that information to reduce bias in the estimation of the price effect. By accounting for the way prices were historically chosen, we can better isolate their true impact.
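A minimal sketch of the orthogonalization idea, using the same synthetic data as above: fit one nuisance model for demand given context and one for price given context, then regress the demand residuals on the price residuals. Here simple linear fits stand in for the deep time-series networks used in production; the variable names and coefficients are illustrative, not Wiremind's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Confounded data: context drives both price and demand (true effect -1.5).
context = rng.normal(size=n)
price = 10 + 2.0 * context + rng.normal(scale=0.5, size=n)
demand = 50 - 1.5 * price + 4.0 * context + rng.normal(size=n)

# Stage 1: nuisance models predicting price and demand from context.
price_hat = np.poly1d(np.polyfit(context, price, 1))(context)
demand_hat = np.poly1d(np.polyfit(context, demand, 1))(context)

# Stage 2: regress demand residuals on price residuals. What remains of
# price after removing its context-driven part is "as good as random",
# so this slope is a de-biased estimate of the price effect.
theta = np.polyfit(price - price_hat, demand - demand_hat, 1)[0]
print(f"orthogonalized price effect: {theta:.2f}")  # close to -1.5
```

The residual-on-residual regression recovers the true negative effect that the naive fit missed, which is exactly the bias reduction the orthogonal learning framework formalizes.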

We then adapt this framework to the deep learning time-series architectures used in production environments. This is not a theoretical exercise; it is designed to work with the kind of models and data structures used in real pricing systems.

Evaluating Causal Performance in the Real World

Training a model differently is only meaningful if it can be evaluated differently.

The difficulty with causal estimation is that we never observe the true counterfactual. When we set a specific price, we observe demand at that price, but we do not observe what demand would have been under a different one.

To address this, we use a methodology inspired by regression discontinuity designs from econometrics. When a price changes sharply while surrounding conditions remain similar, the resulting shift in demand provides a local estimate of the price effect. By systematically identifying such moments in historical data, we construct a benchmark for treatment effects.
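One way to sketch this idea in code: scan a price series for sharp jumps surrounded by otherwise stable prices, and compare mean demand just after the jump with mean demand just before it. The function name, thresholds, and window size below are illustrative assumptions, not the paper's actual benchmark procedure.

```python
import numpy as np

def local_price_effects(prices, demand, jump_threshold=2.0, window=3):
    """Find sharp price changes and estimate the local demand response.

    For each index where the price jumps by at least `jump_threshold`
    while neighbouring prices are stable, compare mean demand in short
    windows before and after the jump. Returns a list of
    (index, price_change, demand_change) tuples.
    """
    effects = []
    for t in range(window, len(prices) - window):
        dp = prices[t] - prices[t - 1]
        before_stable = np.ptp(prices[t - window:t]) < 0.5
        after_stable = np.ptp(prices[t:t + window]) < 0.5
        if abs(dp) >= jump_threshold and before_stable and after_stable:
            dd = demand[t:t + window].mean() - demand[t - window:t].mean()
            effects.append((t, dp, dd))
    return effects

# Synthetic check: one sharp jump from 10 to 14, true effect -1.5 per unit.
prices = np.array([10.0] * 10 + [14.0] * 10)
demand = 50 - 1.5 * prices
(t, dp, dd), = local_price_effects(prices, demand)
print(t, dp, dd / dp)   # jump at index 10, local effect estimate -1.5
```

Each detected jump yields a local estimate of the price effect; aggregating many such moments across historical data produces a benchmark against which model-estimated treatment effects can be scored.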

We then compare models based on how accurately they recover these estimated effects.

The findings are consistent across datasets. Traditional forecasting models achieve lower prediction error on standard metrics. However, models trained with our causal framework are significantly more accurate at estimating the impact of price changes.

In practical terms, that means they are better aligned with the objective of pricing systems: understanding how demand responds to price interventions.

Why This Matters for Revenue Management

In pricing, small systematic errors compound. If elasticity is consistently overestimated or underestimated, pricing strategies drift over time. Decisions appear rational in isolation but lead to suboptimal performance when deployed at scale.

Focusing solely on predictive accuracy as it is usually defined can mask this problem. A model may look strong on standard error metrics like RMSE (Root Mean Square Error) while embedding biased treatment effects. That bias becomes visible only when decisions start to change.

By incorporating causal training and evaluation, we reduce the risk of building systems that optimize with the wrong information.

From Forecasting to Decision Intelligence

This research reflects a broader shift in how machine learning is used in operational contexts. Early applications focused on prediction. Modern systems are increasingly responsible for recommending and simulating decisions.

When models influence actions, the criteria used to train and evaluate them must evolve accordingly.

Rather than merely forecasting observational trends, causal inference aligns the model’s objective with the decision-making reality of pricing systems. For organizations building automated or semi-automated revenue management tools, this alignment is essential.

What’s Next? The Future of Causal Forecasting at Wiremind

At Wiremind, we are always looking toward the next frontier in causal forecasting.

While our current models already deliver outstanding improvements for our clients, our ML Research team is exploring new ways to capture market realities.

A key focus is sequential treatment effects. By applying diffusion generative models, we are moving beyond day-to-day price elasticity. Instead, we are modelling how historical pricing sequences shape current demand, allowing us to map how elasticity evolves over time and bringing our algorithms even closer to real-world pricing dynamics.

Read the full research paper here.
