Undersmoothing Causal Estimators With Generative Trees

Average causal effects are averages of (heterogeneous) individual treatment effects (ITEs) taken over the entire target population. The estimation of average causal effects has been studied in depth, but averages are insufficient for more individualised decision-making, where ITEs are more appropriate. However, estimating ITEs for every population member is challenging, particularly when estimation must be based on observational data rather than data from randomised experiments. One potential problem with observational data arises when there are large differences between the sample distributions of the input features of the treated and control units. This problem is known as covariate shift. It can lead to model misspecification, the harmful effects of which can be severe for ITE estimation because point estimation is highly sensitive to regions of the common support of the input space in which the number of treated or control units is very small. Moreover, common solutions are often based on reweighting schemes involving propensity scores, which were originally designed for average effects and not ITEs. In this paper, we propose Debiasing Generative Trees, a novel data augmentation method based on generative trees that debiases and undersmooths causal estimators trained on the augmented data. It encourages higher modelling complexity that reduces misspecification and improves estimation of ITEs. We show empirically that our proposed approach yields models of higher complexity and more accurate predictions of ITEs, and is competitive with traditional methods for estimating average treatment effects. Our results confirm that reweighting methods can struggle with ITE estimation and that the choice of model class can significantly impact prediction performance.


I. INTRODUCTION
In the absence of data from randomised experiments, analysts must use observational data to make inferences about the causal effects of interventions or treatments, that is, what would happen if they intervened to change the treatment status of individual units in a population. The estimation of average causal effects, the average effect of the treatment aggregated across every unit in a population, has been studied in considerable depth. However, there is now growing interest in estimating heterogeneous treatment effects for individuals characterised by a possibly large number of input variables or covariates. If there is substantial heterogeneity across units, such systems can unlock the analysis of targeted interventions, for instance, in the form of personalised healthcare based on covariates that describe patients' symptoms and health histories.
The use of observational data creates challenges for the estimation of heterogeneous causal effects. First, the analyst must make assumptions, for example, that treatment selection is strongly ignorable given the available covariates. We take ignorability to hold throughout, and focus on the second problem, namely, that nonrandom treatment selection can lead to observed data in which the distributions of covariates among the treated and untreated units are very different. In practice, this can make it difficult for conventional learners to learn the true relationship between the treatment effect and covariates across the entire support of the covariates, and so result in poor performance when tested on other data sets.
More generally, this issue is known as 'covariate shift', which in this setting means the learning target P(Y|X) remains unchanged, while the marginal distributions of the covariate inputs P(X) for the treated and untreated can be very different. Most existing methods attempt to transform the observational distribution by sample reweighting schemes usually based on propensity scores [4], [8], [19], [28], [29] (but not exclusively, see e.g. domain adaptation methods). However, reweighting seeks to standardise the observed support of X for the treated and untreated groups, and so generally performs well for estimating treatment effects averaged across the common support of X, but less so for estimating conditional average treatment effects at points outside the observed support; in other words, as pointed out by [32], reweighting does not address the problem of model misspecification, which can be detrimental when it comes to estimating individualised treatment effects [33].
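To make the reweighting idea concrete, the following toy simulation (our own illustration, not taken from any of the cited works) estimates an average effect with inverse propensity weighting; the propensity score is assumed known here, whereas in practice it must be estimated, e.g. via logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observational data: treatment assignment depends on the covariate x,
# so a naive difference in means is confounded. The true ATE is 2.
n = 20_000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-x))                 # propensity score P(T=1 | X=x)
t = rng.binomial(1, p)
y = 1.0 + 2.0 * t + x + rng.normal(scale=0.1, size=n)

# Naive estimate is biased upward because treated units tend to have larger x.
naive = y[t == 1].mean() - y[t == 0].mean()

# Inverse propensity weighting recovers the average effect.
ipw = np.mean(t * y / p) - np.mean((1 - t) * y / (1 - p))
```

With this setup the naive estimate overshoots the true effect of 2, while the IPW estimate is close to it; note, however, that the weights only rebalance the observed support, which is exactly why such schemes help average effects more than individualised ones.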
A promising alternative to these classical approaches is undersmoothing, where the model is allowed to fit the data very closely to capture P(X) in the two groups, and in doing so potentially produce more accurate individualised predictions. Encouraged by suggestions elsewhere, namely [9, footnote 3] and [23], in this paper we develop a novel approach to causal effect estimation that improves accuracy by undersmoothing the observed data.
Specifically, we propose to undersmooth using fast and straightforward generative trees [10] to augment the existing data, and in doing so facilitate more robust learning of downstream estimators of key causal parameters. The trees are used to 'discretise' the input space into subpopulations of similar units (subclassification); the distributions of these groups are then modelled separately via mixtures of Gaussians, from which we sample equally to reduce data imbalances.
Data augmentation has proven effective in multiple scenarios, for instance, image transformations in computer vision [26], or oversampling minority classes in imbalanced classification problems [7], [15]. In our case, the method we propose could be seen as oversampling underrepresented data regions instead of just classes.
Generative models have also been investigated in the causal inference literature [3], [22], though mostly for benchmarking purposes, where new synthetic data sets are created that closely resemble real data but with access to true effects. This work, on the other hand, goes beyond data modelling and focuses on targeted data augmentation instead.
Arguably the closest work to ours that combines data augmentation and generative models within the causal inference setting is [5]. Despite a similar high-level approach, that is, training downstream causal estimators on augmented data, we believe our frameworks differ substantially upon further examination. More precisely, [5] incorporates neural-network-based generative models to specifically generate counterfactuals and focuses on conditions where the treatment is continuous. In this work, our proposed method: a) is based on simple and widely used decision trees, b) does not specifically generate counterfactuals, but oversamples heterogeneous data regions (more general), and c) works with classic discrete treatments.
In terms of this paper's contributions, we show empirically that the choice of model class can have a substantial effect on an estimator's final performance, and that standard reweighting methods can struggle with individual treatment effect estimation. Given our experiments, we also provide evidence that our proposed method increases data complexity, which leads to statistically significant improvements in individual treatment effect estimation while keeping the average effect predictions competitive. Our experimental setup incorporates a wide breadth of non-neural standard causal inference methods and data sets. We specifically focus on non-neural solutions as they are more commonly used by practitioners. The code accompanying this paper is available online 1 .
The rest of the document is structured as follows. First, we revisit fundamental concepts that should aid understanding of the technical part of the paper. Next, we formally discuss the problem of model misspecification, followed by a thorough description of our proposed method. We then present our experimental setup and the obtained results. The next section provides further discussion of the results, their implications and the considered limitations of the method. The final section concludes the paper.

II. PRELIMINARIES
This section gives a brief overview of the essential background deemed relevant to this work. For a more extensive review, we refer the reader to classic texts on causal analysis [25], [27], and recent surveys on causal inference [14], [34].
Given two random variables T and Y, investigating the effects of interventions can be described as measuring how the outcome Y differs across different inputs T. Real-world systems usually contain other background covariates, denoted as X, which have to be accounted for in the analysis as well. To formally approach the task, we take Rubin's Potential Outcomes [30] perspective, which is particularly convenient for outcome estimation when the full causal graph is unknown.

1 https://github.com/misoc-mml/undersmoothing-data-augmentation
We start by defining the potential outcomes Y_t^(i), that is, the outcome observed when individual i receives treatment t = 0, 1. Given this, the Individual Treatment Effect (ITE) can be written as:

ITE_i = Y_1^(i) − Y_0^(i). (1)

Thus, to compute such a value for individual i, we need access to both potential outcomes, Y_1^(i) and Y_0^(i), but only one, called the factual, is observed: the other potential outcome, called the counterfactual, cannot be observed. The fact that we only observe factuals but also need the counterfactuals to properly compute causal effects is known as the fundamental problem of causal inference: ITEs are not identified by the observed data.
However, parameters such as the Average Treatment Effect (ATE) and the Conditional Average Treatment Effect (CATE) are identified:

ATE = E[Y_1 − Y_0], (2)
CATE(x) = E[Y_1 − Y_0 | X = x], (3)

where E[.] denotes mathematical expectation. The ATE is essentially the average ITE for the entire population; the CATE is the average ITE for everyone in the subpopulation characterised by X = x. The ATE is not meaningful if there is substantial heterogeneity of the ITEs between subpopulations. In such circumstances, the CATE is more informative about ITEs as it allows the effect to be conditioned on the subpopulation of interest. The ITE can be thought of as a special case of the CATE where individual i is the only member of the subpopulation. While ITE_i cannot be identified, the CATE for the subpopulation X = x which includes individual i will be a better estimate of it than the ATE (under the reasonable assumption that between-subpopulation variation in ITEs is greater than that within subpopulations).

Although the aforementioned treatment effects usually cannot be calculated directly, successful methods have been developed that attempt to approximate these quantities. Perhaps the simplest and most naive approach is regression adjustment, where a regressor, or one regressor per treatment value, is used to estimate the potential outcomes. More advanced methods often incorporate propensity scores, where the estimator takes into account the probability of treatment assignment for each individual. For instance, Inverse Propensity Weighting [29] adjusts sample importances, further extended to the more efficient and stable Doubly Robust method [12], [28]. Double Machine Learning [8], on the other hand, improves existing statistical estimators using base learners. Furthermore, the recent surge in machine learning has also delivered powerful procedures, often pushing state-of-the-art results [17], [21], [31], [35]. In the realm of ensembles, there is Causal Forest [4], which specifically targets CATE estimation. Another interesting perspective on the problem is given through metalearners [19], [24], where out-of-the-box estimators are used in various combinations and strategies to collectively approximate causal effects.
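To make these quantities concrete, the following toy computation (purely illustrative numbers of our own) evaluates the ITE, ATE and CATE from an oracle table of potential outcomes; in real data only one of the two outcomes per individual would be observed.

```python
import numpy as np

# Hypothetical oracle table of potential outcomes for six individuals,
# grouped by a binary covariate x. In practice only one outcome per row
# (the factual) is ever observed.
x  = np.array([0, 0, 0, 1, 1, 1])
y0 = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
y1 = np.array([2.0, 3.0, 4.0, 4.0, 5.0, 6.0])

ite = y1 - y0               # per-individual effects: [1, 1, 1, 3, 3, 3]
ate = ite.mean()            # average over the whole population: 2.0
cate0 = ite[x == 0].mean()  # subpopulation x = 0: 1.0
cate1 = ite[x == 1].mean()  # subpopulation x = 1: 3.0
```

Here the ATE of 2.0 describes no individual well: everyone's effect is either 1 or 3, which is exactly the heterogeneity the CATE exposes.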
These are the most common methods that employ the usual assumptions, that is, SUTVA and strong ignorability, though there are many procedures that attempt to relax some of the assumptions as well. Here, we limit our discussion to this standard set of assumptions as it is relevant to this work. For a broader overview of available causal inference methods, as well as formal definitions of the assumptions, consult recent reviews on the topic [14], [34].

III. MODEL MISSPECIFICATION
The choice of model class occurs at some point in any learning task. Such a decision is made based on the available data, usually the training part of it, while the environment of the actual application can be different, a scenario often mimicked via a separate test set. The discrepancies between those two data sets are known as the covariate shift problem. Within causal inference, this manifests as differences between observational and interventional distributions, ultimately making effect estimation extremely difficult. More formally, given input covariates x, treatment t, and outcome y, the conditional distribution P(y|x, t) remains unchanged across the entire data set, whereas the marginal distributions P(x, t) differ between observational and interventional data. This is where model misspecification occurs, as the model class is selected based on the available observations only, which does not generalise well to the interventions predicted later.
Let us consider a simple example as presented in Figure 1. It consists of a single input feature x, an output variable y (both continuous), and a binary treatment t. For convenience, let us denote this data set as D. Note the effect is clearly heterogeneous as it differs between D(x < 0.5) and D(x > 0.5). Furthermore, the two data regions closer to the top of the figure, that is, D(x < 0.5, t = 1) and D(x > 0.5, t = 0), are in the minority with respect to the rest of the data. Many learners will likely treat these scarce data points as outliers, resulting in lower variance than needed to provide accurate estimates. Thus, naively fitting the data will lead to biased estimates, an example of which is depicted in the figure as Biased T and Biased C. However, what we aim for is an unbiased estimator that captures the data closely while still generalising well, a scenario showcased by Unbiased T and Unbiased C in the figure.
For ITE estimation, fitting the data closely is especially important. Although in the case of average effect estimation the difference between biased and unbiased estimators can be negligible, the individualised case usually exacerbates the issue. For instance, in the presented example, the difference in ATE error is 0.44, but it grows to 0.77 in ITE error.
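The asymmetry between ATE and ITE error can be reproduced with a small simulation in the spirit of Figure 1 (our own synthetic setup, not the paper's exact data): a constant-per-group fit averages away the minority regions, inflating the PEHE far more than the ATE error, while a per-group model allowed to split on x recovers the heterogeneous effect.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 4_000
x = rng.uniform(0, 1, size=n)

# Heterogeneous effect: +1 for x < 0.5, -1 for x > 0.5; the control outcome
# is pure noise. Treatment is rare where x < 0.5 and common where x > 0.5,
# creating the minority regions discussed above.
t = rng.binomial(1, np.where(x < 0.5, 0.1, 0.9))
true_ite = np.where(x < 0.5, 1.0, -1.0)
y = t * true_ite + rng.normal(scale=0.05, size=n)

# Oversmoothed fit: a single constant per treatment group.
ite_smooth = np.full(n, y[t == 1].mean() - y[t == 0].mean())

# Flexible fit: one split per group is enough to capture both regions.
reg_t = DecisionTreeRegressor(max_depth=1).fit(x[t == 1].reshape(-1, 1), y[t == 1])
reg_c = DecisionTreeRegressor(max_depth=1).fit(x[t == 0].reshape(-1, 1), y[t == 0])
ite_flex = reg_t.predict(x.reshape(-1, 1)) - reg_c.predict(x.reshape(-1, 1))

def pehe(est):
    return np.sqrt(np.mean((est - true_ite) ** 2))

def ate_err(est):
    return abs(est.mean() - true_ite.mean())
```

With this setup the oversmoothed model's PEHE (around 1.3) dwarfs its ATE error (around 0.8), while the flexible fit drives both close to the noise floor, mirroring the point that oversmoothing hurts individualised estimates disproportionately.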
In this work, instead of altering sample importances, as many existing methods do, we aim to augment the provided data so that underrepresented data regions are no longer dominated by the rest of the samples. Estimators then no longer treat those data points as outliers and fit them more closely, ultimately resulting in less biased solutions and more accurate ITE estimates. The following section describes our proposed method in detail.

IV. DEBIASING GENERATIVE TREES
As described in the previous section, model misspecification can be caused by underrepresented or missing data regions. Reweighting partially addresses this problem, but struggles with ITE estimation, not to mention that propensity score approximators are subject to misspecification too. To avoid these pitfalls, we tackle misspecification through undersmoothing by augmenting the original data with new data points that carry useful information and help the final estimators achieve better ITE predictions. As the injected samples are expected to be informative to the learners, the overall data complexity increases as a consequence. Moreover, because this is a data augmentation procedure, it is estimator agnostic, that is, it can be used with any existing estimation method. It is also worth pointing out that simply modelling and oversampling the entire joint distribution would not work, as the learnt joint would include any existing data imbalances. In other words, underrepresented data regions would remain in the minority, leaving the problem at hand unaddressed. This observation led us to the conclusion that there is a need to identify smaller data regions, or clusters, and model their distributions separately instead, giving us control over which areas to sample from and with what ratios. To achieve this, we incorporate recently proposed Generative Trees [10], which retain all the benefits of standard decision trees, such as simplicity, speed and transparency. They can also be easily extended to ensembles of trees, often improving performance significantly. In practice, a standard decision tree regressor is used to learn the data. Once the tree is constructed, the samples can be assigned to tree leaves according to the learnt decision paths, forming the distinct subpopulations that we are after. The distributions of these clusters are then separately modelled through Gaussian Mixture Models (GMMs). Similarly to decision trees, we again prioritise simplicity and ease of use here, which is certainly the case with GMMs. The next step is to sample equally from the modelled distributions, that is, to draw the same number of new samples per GMM. In this way, we reduce data imbalances. A merge of the new and original data is then provided to a downstream estimator, resulting in a less biased final estimator. Through experimentation, we find that splitting the original data at the beginning of the process into treated and control units and learning a separate tree for each group helps achieve a better overall effect. A step-by-step description of the proposed procedure is presented in Algorithm 1.

Algorithm 1 Debiasing Generative Trees
Input: data set X, downstream estimator E, samples per cluster N_G
1: Split X into treated units X_T and control units X_C.
2: Initialise the set of generated samples X_G.
3: Train a decision tree regressor on X_T.
4: Assign the samples of X_T to the tree leaves according to the learnt decision paths.
5: Form a subpopulation S_i per leaf.
6: for each subpopulation S_i do
7:   Model S_i with Gaussian Mixture Models. Obtain G_i.
8:   Draw N_G samples from G_i. Store them in X_G.
9: end for
10: Repeat steps 3-9 for X_C.
11: Merge X and X_G into a single data set X_M.
12: Train estimator E on X_M. Get debiased estimator E_D.
13: return debiased estimator E_D
As ensembles of trees almost always improve over single ones, we incorporate Extremely Randomised Trees for an additional performance gain. The procedure remains the same at a high level, differing only in randomly selecting inner trees at sampling time. Overall, we call this approach Debiasing Generative Trees (DeGeTs) as a general framework, with DeGe Decision Trees (DeGeDTs) and DeGe Forests (DeGeFs) as realisations with Decision Trees and Extremely Randomised Trees respectively.
There are a few important parameters to take care of when using the method. Firstly, the depth of the trees controls the granularity of the identified subpopulations. Trees that are too deep yield small clusters whose distributions may be modelled less accurately, whereas trees that are too shallow bring the modelling closer to the entire joint distribution, which may not solve the problem of interest at all. The other tunable knob is the number of new data samples to generate, where more data usually equates to a stronger effect, but also higher noise levels, which must be controlled to avoid destroying meaningful information in the original data. Finally, the number of components in the GMMs is worth considering, as more complex distributions may require more components.
All of these parameters can be found through cross-validation, using a downstream estimator's performance as a feedback signal as to which parameters work best; this can also be tailored to a specific estimator of choice. The number of GMM components can alternatively be optimised through the Bayesian Information Criterion (BIC) score. In order to make the method as general and easy to use as possible, we instead provide a set of reasonable defaults that we find work well across different data sets and settings: max_depth = log_2(N_f) − 1, where N_f denotes the number of input features; n_samples = 0.5 × size(training data); n_components ∈ [1, 5], picking the value with the lowest BIC score.
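The defaults can be written down directly, and the BIC-based component choice is a short loop over scikit-learn's GaussianMixture (the helper names below are ours):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def default_params(n_features, n_train):
    # Paper defaults: max_depth = log2(N_f) - 1 (floored, at least 1),
    # n_samples = 0.5 * size of the training data.
    max_depth = max(1, int(np.log2(n_features)) - 1)
    n_samples = n_train // 2
    return max_depth, n_samples

def fit_gmm_bic(cluster, k_max=5, seed=0):
    """Fit GMMs with 1..k_max components and keep the one with lowest BIC."""
    ks = range(1, min(k_max, len(cluster)) + 1)
    fits = [GaussianMixture(n_components=k, random_state=seed).fit(cluster)
            for k in ks]
    return min(fits, key=lambda g: g.bic(cluster))
```

For example, a data set with 16 features would get max_depth = 3, and a well-separated two-cluster subpopulation would typically be assigned two components by the BIC criterion.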
In addition, we note that the DeGeTs framework goes beyond the specific choice of Generative Trees and GMMs. The data splitting part can, in fact, be performed by other methods, such as clustering, and the GMMs can be substituted by any other generative model.

V. EXPERIMENTS
We follow recent literature (e.g. [17], [31], [35]) in terms of the incorporated data sets and evaluation metrics. We start by defining the latter, as different data sets use different sets of metrics. The source code that allows for full replication of the presented experiments is available online 2 and is based on the CATE benchmark 3 .
There are a few aspects we aim to investigate. Firstly, how the established reweighting methods perform in individual treatment effect estimation. Secondly, how the choice of model class impacts estimation accuracy (misspecification). Thirdly, how our proposed method affects the performance of the base learners, and how it compares to other methods. Finally, we also study how our method influences the number of rules in pruned decision trees as an indirect measure of data complexity.
Although we do perform a hyperparameter search to some extent in order to obtain reasonable results, it is not our goal to achieve the best results possible; hence the parameters used here are likely suboptimal and could be improved upon with a more extensive search. The main reason is that the setups presented as part of this work are intended to be as general as possible. This is why our analysis specifically focuses on the relative difference in performance between settings rather than comparing them to absolute state-of-the-art results.

A. Evaluation Metrics
The metrics used here focus on quantifying the errors made by the predictions. Thus, the metrics are denoted as ε_X, the amount of error made with respect to prediction type X (lower is better). In terms of treatment outcomes, Y_t^(i) and ŷ_t^(i) denote the true and predicted outcomes respectively for treatment t and individual i. Thus, following the definition of the ITE (Eq. (1)), the difference Y_1^(i) − Y_0^(i) gives a true effect, whereas ŷ_1^(i) − ŷ_0^(i) a predicted one. Following this, we can define the Precision in Estimation of Heterogeneous Effect (PEHE), which is the root mean squared error between predicted and true effects:

ε_PEHE = sqrt( (1/N) Σ_{i=1}^{N} ( (ŷ_1^(i) − ŷ_0^(i)) − (Y_1^(i) − Y_0^(i)) )² ).

Following the definition of the ATE (Eq. (2)), we measure the error on the ATE as the absolute difference between the predicted and true average effects:

ε_ATE = | (1/N) Σ_{i=1}^{N} (ŷ_1^(i) − ŷ_0^(i)) − (1/N) Σ_{i=1}^{N} (Y_1^(i) − Y_0^(i)) |.

Given a set of treated subjects T that are part of a sample E coming from an experimental study, and a control group C, define the true Average Treatment effect on the Treated (ATT) as:

ATT = (1/|T|) Σ_{i∈T} Y^(i) − (1/|C ∩ E|) Σ_{i∈C∩E} Y^(i).

The error on the ATT is then defined as the absolute difference between the true and predicted ATT:

ε_ATT = | ATT − (1/|T|) Σ_{i∈T} (ŷ_1^(i) − ŷ_0^(i)) |.

Define the policy risk as:

R_pol = 1 − ( E[Y_1 | π(x) = 1] · P(π = 1) + E[Y_0 | π(x) = 0] · P(π = 0) ),

where E[.] denotes mathematical expectation and the policy π is π(x) = 1 if ŷ_1 − ŷ_0 > 0; π(x) = 0 otherwise.
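These metrics amount to a few lines of NumPy (our paraphrase, not the benchmark's reference implementation; the vector arguments hold per-individual true and predicted potential outcomes):

```python
import numpy as np

def pehe(y1, y0, y1_hat, y0_hat):
    # Root mean squared error between predicted and true individual effects.
    return np.sqrt(np.mean(((y1_hat - y0_hat) - (y1 - y0)) ** 2))

def eps_ate(y1, y0, y1_hat, y0_hat):
    # Absolute difference between predicted and true average effects.
    return np.abs(np.mean(y1_hat - y0_hat) - np.mean(y1 - y0))

def policy_risk(y1, y0, y1_hat, y0_hat):
    # pi(x) = 1 if the predicted effect is positive, else 0. The risk is one
    # minus the expected outcome obtained when following that policy
    # (outcomes are assumed scaled to [0, 1], as in JOBS-style evaluation).
    pi = (y1_hat - y0_hat) > 0
    value = np.where(pi, y1, y0).mean()
    return 1.0 - value
```

Note that E[Y_1 | π = 1]·P(π = 1) + E[Y_0 | π = 0]·P(π = 0) collapses to the single mean over np.where(pi, y1, y0), which is why the policy-risk function is one line.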

B. Data
We incorporate a set of well-established causal inference benchmark data sets that are briefly described in the following paragraphs and summarised in Table I.
IHDP. Introduced by [16], based on the Infant Health and Development Program (IHDP) clinical trial [6]. The experiment measured various aspects of premature infants and their mothers, and how receiving specialised childcare affected the infants' cognitive test scores later on. We use a semi-synthetic version of this data set, where the outcomes are simulated through the NPCI package 4 (setting 'A') based on real pre-treatment covariates. Moreover, the treatment groups are made imbalanced by removing a subset of the treated individuals. We report errors on estimated PEHE and ATE averaged over 1,000 realisations, splitting the data with 90/10 training/test ratios.
JOBS. This data set, proposed by [1], is a combination of the experiment done by [20] as part of the National Supported Work Program (NSWP) and observational data from the Panel Study of Income Dynamics (PSID) [11]. Overall, the data capture people's basic characteristics, whether they received job training from the NSWP (treatment), and their subsequent employment status (outcome). We report the errors on ATT and policy risk for this data set.
NEWS. Introduced by [17], this data set consists of news articles in the form of word counts with respect to a predefined vocabulary. The treatment is represented as the device type (mobile or desktop) used to view the article, whereas the simulated outcome is defined as the user's experience. Similarly to IHDP, we report PEHE and ATE errors for this data set, averaging over 50 realisations with 90/10 training/test ratio splits.
TWINS. The data set comes from official records of twin births in the US between 1989 and 1991 [2]. The data are preprocessed to include only twins of the same sex where each weighs less than 2,000 grams. The treatment is represented as being the heavier of the twins, whereas the outcome is mortality within the first year of life. As both factual and counterfactual outcomes are known from the official records, that is, the mortality of both twins, one of the twins is intentionally hidden to simulate an observational setting. Here, we incorporate the approach taken by [21], where new binary features are created and flipped at random (with probability 0.33) in order to hide confounding information. We report ε_ATE and ε_PEHE for this data set, averaged over 10 iterations with 80/20 training/test ratio splits.
Debiasing Generative Trees. Our proposed method. We include the stronger-performing DeGeF variant.
The general approach throughout all conducted experiments was to train a method on the training set and evaluate it against the appropriate metrics on the test set. Five base learners were trained and evaluated in this way: l1, l2, Simple Trees, Boosted Trees and Kernel Ridge. DML and the Meta-Learners were combined with different base learners, as they need them to solve intermediate regression and classification tasks internally. This resulted in 3 × 5 = 15 combinations of distinct estimators. Similarly, DeGeF was combined with the same 5 base learners to investigate how they react to our data augmentation method. Causal Forest and a dummy regressor were treated as standalone methods. Overall, we obtained 27 distinct estimators per data set. For Simple and Boosted Trees, we defaulted to ETs and CatBoost respectively. For NEWS, due to its high dimensionality, we switched to the computationally less expensive Decision Trees and LightGBM instead.
As our DeGeF method is a data augmentation approach, it affects only the training set that is later used by the base learners. It does not change the test set in any way, as the test portion is used specifically for evaluation purposes, that is, to test how methods generalise to unseen data examples. More specifically, DeGeF injects new data samples into the existing training set, and that augmented training set is then provided to the base learners.
Most of our experimental runs were performed on a Linux-based machine with 12 CPUs and 60 GB of RAM. More demanding settings, such as NEWS combined with tree-based methods, were delegated to one with 96 CPUs and 500 GB of RAM, though such a powerful machine is not required to complete those runs.
Tables II to V present the main results, where we specifically focus on: a) the metrics relevant to a given data set, and b) changes in performance relative to a particular base learner. The latter is calculated as ((r_a − r_b)/r_b) × 100%, where r_a and r_b denote the results of the advanced method and the base learner respectively. The reason for analysing these relative changes rather than absolute values is that in this study we are specifically interested in how more complex approaches (including ours) affect the performance of the base learners, even if they do not reach state-of-the-art results. For example, if the relative change for xl-et reads '−20', this estimator decreased the error by 20% compared to the plain et learner on that particular metric. Changes greater than zero denote an increase in errors (lower is better). Furthermore, Table VI shows the number of rules obtained from a pruned Decision Tree when trained on the original data and on data augmented by degef.
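The relative change used throughout the tables is, concretely (helper name ours):

```python
def relative_change(r_advanced, r_base):
    # ((r_a - r_b) / r_b) * 100: negative values mean the advanced method
    # reduced the error relative to its base learner; positive values mean
    # the error grew.
    return (r_advanced - r_base) / r_base * 100.0
```

For instance, an advanced method with error 0.8 against a base learner's 1.0 reads as a change of −20, i.e. a 20% error reduction.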
All presented numbers (excluding relative percentages) denote means and 95% confidence intervals.

VI. DISCUSSION
In terms of the IHDP data set (Table II), the classic methods (dml, tl, and xl) strongly improve in ATE, but can also be unstable, as is the case with dml, specifically dml-cb and dml-lgbm. Against PEHE, the situation is much worse, as those methods perform significantly worse than the base learners, not to mention catastrophic setbacks in the worst cases (deltas above 200%). Note that not a single traditional method improves in PEHE (all deltas positive). Our degef, on the other hand, often improves in both ATE and PEHE (see the negative deltas). Even in the worst cases, with l1 and l2, degef remains very stable and does not destroy the predictions as happened with the other approaches. Thus, our method clearly offers the best improvements in PEHE and competitive predictions in ATE while providing a good amount of stability.

In the JOBS data set (Table III), classic methods again achieve strong improvements in average effect estimation (ATT) in their best cases, though they can be substantially worse as well (e.g. dml-et). In policy predictions, an equivalent of ITE, traditional techniques are even less likely to provide improvements, except for the X-Learner. degef can also worsen the quality of ATT predictions, as shown with degef-l1, though it does not get as bad as dml-et. However, even in that worst example, the policy predictions are not destroyed. The best cases of degef, on the other hand, achieve strong improvements in policy. Similarly to IHDP, degef provides solid improvements in ITE predictions (policy) while staying on par with traditional methods in ATT, obtaining reasonable improvements and keeping its worst cases better than the worst cases of the other methods, proving its stability once again.

The TWINS data set (Table IV) proved to be very difficult for all considered methods when it comes to PEHE, though they did not worsen the predictions either. Some good improvements in ATE can be observed, but also noticeable decreases in performance in the worst cases (combinations with dt). Our method behaves similarly to the classic ones, offering occasional gains and keeping the decreases within reasonable bounds. The stability of degef is especially noticeable in PEHE, as its worst decrease (degef-dt) is still better than those of the other methods.

The last data set, NEWS (Table V), showed that the traditional approaches can provide some improvements in PEHE as well, at least in their best efforts, though performance decreases are also noticeable in the worst ones. They also offer quite stable improvements in ATE, except for the extremely poor dml-dt. The X-Learner performs particularly well across both metrics (most deltas negative). Our proposed method offers reasonable gains in ATE as well, while keeping performance decreases at bay even in its worst efforts. Even though degef provides little improvement in PEHE, it does not destroy individualised predictions either. Overall, this data set showcases the superior stability of degef particularly well, making it a preferable choice if small but safe performance gains are desirable over potentially higher but riskier improvements.

In general terms, the results show that performance can vary substantially depending on the model class, even within the same advanced method (dml, xl, degef). For instance, DML proved to work particularly well with L1 and L2 as base learners, whereas the X-Learner often outperforms the T-Learner, adding more stability to the results as well. Our proposed technique usually offers significant improvements in ITE predictions in the best cases, often better than traditional methods, while keeping the predictions stable even in the worst examples. Classic methods are clearly strong in ATE estimates, but can struggle in individualised predictions. Overall, these methods (dml, xl) proved to be less stable than ours, with worst cases that can perform quite poorly, especially dml. This makes degef a safer choice on average when considering various estimators, even more so when achieving the best possible performance is not a priority.
We also investigate the number of rules in pruned Decision Trees as a proxy for data complexity. As presented in Table VI, degef significantly increases the number of rules across all data sets, translating to an increase in data complexity. This shows that the undersmoothing effect we aim for has been achieved. In addition, we observe that the modest data complexity increases in IHDP and JOBS correlate with strong degef gains in ITE estimation on those two data sets, whereas the much bigger difference in TWINS (from 9.6 to 59.1) correlated with considerably lower prediction performance gains (Table IV).
Combining all the results, we observe that degef: a) improves effect predictions (Tables II - V), and b) increases data complexity (Table VI). We interpret more accurate effect prediction (as per a)) as a sign of better model generalisation, and we equate better generalisation with reduced model misspecification. Furthermore, b) evidences the undersmoothing effect. Together, this is our indirect evidence that our method addresses misspecification via undersmoothing: downstream estimators improve effect predictions when trained on augmented data. In terms of theoretical guarantees, we rely on [33], which provides a thorough formal analysis of undersmoothing.
Regarding possible limitations of our method, we assume the data sets we work with have relatively low noise levels. In noisy environments, the inner GMMs would likely pick up much of the noise, so sampling from them would produce even noisier data. The result would be the opposite of what we aim for: rather than increasing data complexity with new informative samples, we would introduce bias in the form of noise. Our method would therefore likely worsen the performance of base learners in such environments. Furthermore, extremely high-dimensional data sets may cause computational issues due to the increasing depth of the inner trees, which is partly why setting a reasonable depth limit is important.

VII. CONCLUSIONS
Treatment effect estimation tasks are often subject to covariate shift, exhibited as discrepancies between observational and interventional distributions. This leads to model misspecification, which we tackle directly in this work by introducing a novel data augmentation method based on generative trees. The method provides an undersmoothing effect and helps downstream estimators achieve better robustness, ultimately leading to less biased estimators. Through our experiments, we show that the choice of model class matters and that traditional methods can struggle with individualised effect estimation. Our proposed approach is competitive with existing reweighing procedures on average effect tasks while offering significantly better performance improvements on individual effect problems. It also exhibits better stability in terms of the gains it provides, rendering it a safer option overall.
As for future directions, it would be interesting to investigate replacing generative trees with neural networks to handle extremely high-dimensional problems. Another direction would be to instantiate the DeGeTs framework with alternative methods, such as standard clustering or generative neural networks. Lastly, extending our approach to noisy data sets would likely broaden its applicability to real-world problems.

Fig. 1. An example highlighting the model misspecification issue. T and C denote Treated and Control, respectively. The ITE error is almost twice the ATE error.

Algorithm 1 Debiasing Generative Trees
Input: X - data set, E - estimator
Parameter: N - number of generated samples
Output: E_D - debiased estimator
1: Let X_G = ∅.
2: Split X into treated and control units (X_T and X_C).
3: Train a Decision Tree regressor on X_T.
4: Map X_T to tree leaves. Obtain subpopulations S.
5: Let N_G = N/(2 × len(S)).
6: for S_i in S do
7:

TABLE I
A SUMMARY OF INCORPORATED DATA SETS. T/C DENOTE THE NUMBERS OF TREATED AND CONTROL SAMPLES, RESPECTIVELY.

TABLE V
NEWS RESULTS. ESTIMATORS MARKED WITH 'X': NO RESULTS DUE TO UNREASONABLY EXCESSIVE TRAINING TIME.

TABLE VI
NUMBER OF RULES IN A PRUNED DECISION TREE WITH AND WITHOUT degef AUGMENTATION.