# EconML: A Python Package for ML-Based Heterogeneous Treatment Effects Estimation

EconML is a Python package for estimating heterogeneous treatment effects from observational data via machine learning. This package was designed and built as part of the ALICE project at Microsoft Research with the goal of combining state-of-the-art machine learning techniques with econometrics to bring automation to complex causal inference problems. The promise of EconML:
- Implement recent techniques in the literature at the intersection of econometrics and machine learning
- Maintain flexibility in modeling the effect heterogeneity (via techniques such as random forests, boosting, lasso and neural nets), while preserving the causal interpretation of the learned model and often offering valid confidence intervals
- Use a unified API
- Build on standard Python packages for Machine Learning and Data Analysis
One of the biggest promises of machine learning is to automate decision making in a multitude of domains. At the core of many data-driven personalized decision scenarios is the estimation of heterogeneous treatment effects: what is the causal effect of an intervention on an outcome of interest for a sample with a particular set of features? In a nutshell, this toolkit is designed to measure the causal effect of some treatment variable(s) T on an outcome variable Y, controlling for a set of features X, W, and to determine how that effect varies as a function of X. The methods implemented are applicable even with observational (non-experimental or historical) datasets. For the estimation results to have a causal interpretation, some methods assume no unobserved confounders (i.e. there is no unobserved variable not included in X, W that simultaneously has an effect on both T and Y), while others assume access to an instrument Z (i.e. an observed variable Z that has an effect on the treatment T but no direct effect on the outcome Y). Most methods provide confidence intervals and inference results.
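As a minimal sketch of this setup (the data-generating process and variable names below are invented purely for illustration), estimating how the effect of T on Y varies with X might look like:

```Python
# A hypothetical simulated example illustrating the roles of Y, T, X, and W.
import numpy as np
from econml.dml import LinearDML

rng = np.random.default_rng(123)
n = 2000
X = rng.normal(size=(n, 1))        # features that drive effect heterogeneity
W = rng.normal(size=(n, 3))        # potential confounders to control for
T = W[:, 0] + rng.normal(size=n)   # treatment depends on a confounder
Y = (1 + 2 * X[:, 0]) * T + W[:, 0] + rng.normal(size=n)  # effect of T varies with X

est = LinearDML()
est.fit(Y, T, X=X, W=W)
X_test = np.linspace(-2, 2, 5).reshape(-1, 1)
print(est.effect(X_test))  # heterogeneous effect estimates, roughly 1 + 2*X
```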
For detailed information about the package, consult the documentation at https://www.pywhy.org/EconML/.
For information on use cases and background material on causal inference and heterogeneous treatment effects, see our webpage at https://www.microsoft.com/en-us/research/project/econml/
If you'd like to contribute to this project, see the Help Wanted section below.
## News

July 10, 2025: Release v0.16.0, see release notes here
Previous releases (click to expand)
July 3, 2024: Release v0.15.1, see release notes here
February 12, 2024: Release v0.15.0, see release notes here
November 11, 2023: Release v0.15.0b1, see release notes here
May 19, 2023: Release v0.14.1, see release notes here
November 16, 2022: Release v0.14.0, see release notes here
June 17, 2022: Release v0.13.1, see release notes here
January 31, 2022: Release v0.13.0, see release notes here
August 13, 2021: Release v0.12.0, see release notes here
August 5, 2021: Release v0.12.0b6, see release notes here
August 3, 2021: Release v0.12.0b5, see release notes here
July 9, 2021: Release v0.12.0b4, see release notes here
June 25, 2021: Release v0.12.0b3, see release notes here
June 18, 2021: Release v0.12.0b2, see release notes here
June 7, 2021: Release v0.12.0b1, see release notes here
May 18, 2021: Release v0.11.1, see release notes here
May 8, 2021: Release v0.11.0, see release notes here
March 22, 2021: Release v0.10.0, see release notes here
March 11, 2021: Release v0.9.2, see release notes here
March 3, 2021: Release v0.9.1, see release notes here
February 20, 2021: Release v0.9.0, see release notes here
January 20, 2021: Release v0.9.0b1, see release notes here
November 20, 2020: Release v0.8.1, see release notes here
November 18, 2020: Release v0.8.0, see release notes here
September 4, 2020: Release v0.8.0b1, see release notes here
March 6, 2020: Release v0.7.0, see release notes here
February 18, 2020: Release v0.7.0b1, see release notes here
January 10, 2020: Release v0.6.1, see release notes here
December 6, 2019: Release v0.6, see release notes here
November 21, 2019: Release v0.5, see release notes here
June 3, 2019: Release v0.4, see release notes here
May 3, 2019: Release v0.3, see release notes here
April 10, 2019: Release v0.2, see release notes here
March 6, 2019: Release v0.1; we welcome you to try it out and provide feedback
## Installation

Install the latest release from PyPI:

```
pip install econml
```

To install from source, see the For Developers section below.

## Usage Examples

### Estimation Methods
Double Machine Learning (aka RLearner) (click to expand)
- Linear final stage
```Python
from econml.dml import LinearDML
from sklearn.linear_model import LassoCV
from econml.inference import BootstrapInference

est = LinearDML(model_y=LassoCV(), model_t=LassoCV())
### Estimate with OLS confidence intervals
est.fit(Y, T, X=X, W=W)  # W -> high-dimensional confounders, X -> features
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05)  # OLS confidence intervals

### Estimate with bootstrap confidence intervals
est.fit(Y, T, X=X, W=W, inference='bootstrap')  # with default bootstrap parameters
est.fit(Y, T, X=X, W=W, inference=BootstrapInference(n_bootstrap_samples=100))  # or customized
lb, ub = est.effect_interval(X_test, alpha=0.05)  # Bootstrap confidence intervals
```
- Sparse linear final stage
```Python
from econml.dml import SparseLinearDML
from sklearn.linear_model import LassoCV

est = SparseLinearDML(model_y=LassoCV(), model_t=LassoCV())
est.fit(Y, T, X=X, W=W)  # X -> high dimensional features
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05)  # Confidence intervals via debiased lasso
```
- Generic Machine Learning last stage
```Python
from econml.dml import NonParamDML
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

est = NonParamDML(model_y=RandomForestRegressor(),
                  model_t=RandomForestClassifier(),
                  model_final=RandomForestRegressor(),
                  discrete_treatment=True)
est.fit(Y, T, X=X, W=W)
treatment_effects = est.effect(X_test)
```
Dynamic Double Machine Learning (click to expand)
```Python
from econml.panel.dml import DynamicDML
from sklearn.linear_model import LassoCV

# Use defaults
est = DynamicDML()
# Or specify hyperparameters
est = DynamicDML(model_y=LassoCV(cv=3), model_t=LassoCV(cv=3), cv=3)
est.fit(Y, T, X=X, W=None, groups=groups, inference="auto")
# Effects
treatment_effects = est.effect(X_test)
# Confidence intervals
lb, ub = est.effect_interval(X_test, alpha=0.05)
```
Causal Forests (click to expand)
```Python
from econml.dml import CausalForestDML
from sklearn.linear_model import LassoCV

# Use defaults
est = CausalForestDML()
# Or specify hyperparameters
est = CausalForestDML(criterion='het', n_estimators=500,
                      min_samples_leaf=10, max_depth=10,
                      max_samples=0.5, discrete_treatment=False,
                      model_t=LassoCV(), model_y=LassoCV())
est.fit(Y, T, X=X, W=W)
treatment_effects = est.effect(X_test)
# Confidence intervals via Bootstrap-of-Little-Bags for forests
lb, ub = est.effect_interval(X_test, alpha=0.05)
```
Orthogonal Random Forests (click to expand)
```Python
from econml.orf import DMLOrthoForest, DROrthoForest
from econml.sklearn_extensions.linear_model import WeightedLasso, WeightedLassoCV

# Use defaults
est = DMLOrthoForest()
est = DROrthoForest()
# Or specify hyperparameters
est = DMLOrthoForest(n_trees=500, min_leaf_size=10, max_depth=10,
                     subsample_ratio=0.7, lambda_reg=0.01,
                     discrete_treatment=False,
                     model_T=WeightedLasso(alpha=0.01), model_Y=WeightedLasso(alpha=0.01),
                     model_T_final=WeightedLassoCV(cv=3), model_Y_final=WeightedLassoCV(cv=3))
est.fit(Y, T, X=X, W=W)
treatment_effects = est.effect(X_test)
# Confidence intervals via Bootstrap-of-Little-Bags for forests
lb, ub = est.effect_interval(X_test, alpha=0.05)
```
Meta-Learners (click to expand)
- XLearner
```Python
import numpy as np
from econml.metalearners import XLearner
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

est = XLearner(models=GradientBoostingRegressor(),
               propensity_model=GradientBoostingClassifier(),
               cate_models=GradientBoostingRegressor())
est.fit(Y, T, X=np.hstack([X, W]))
treatment_effects = est.effect(np.hstack([X_test, W_test]))

# Fit with bootstrap confidence interval construction enabled
est.fit(Y, T, X=np.hstack([X, W]), inference='bootstrap')
treatment_effects = est.effect(np.hstack([X_test, W_test]))
lb, ub = est.effect_interval(np.hstack([X_test, W_test]), alpha=0.05)  # Bootstrap CIs
```
- SLearner
```Python
import numpy as np
from econml.metalearners import SLearner
from sklearn.ensemble import GradientBoostingRegressor

est = SLearner(overall_model=GradientBoostingRegressor())
est.fit(Y, T, X=np.hstack([X, W]))
treatment_effects = est.effect(np.hstack([X_test, W_test]))
```
- TLearner
```Python
import numpy as np
from econml.metalearners import TLearner
from sklearn.ensemble import GradientBoostingRegressor

est = TLearner(models=GradientBoostingRegressor())
est.fit(Y, T, X=np.hstack([X, W]))
treatment_effects = est.effect(np.hstack([X_test, W_test]))
```
Doubly Robust Learners (click to expand)
- Linear final stage
```Python
from econml.dr import LinearDRLearner
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier

est = LinearDRLearner(model_propensity=GradientBoostingClassifier(),
                      model_regression=GradientBoostingRegressor())
est.fit(Y, T, X=X, W=W)
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05)
```
- Sparse linear final stage
```Python
from econml.dr import SparseLinearDRLearner
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier

est = SparseLinearDRLearner(model_propensity=GradientBoostingClassifier(),
                            model_regression=GradientBoostingRegressor())
est.fit(Y, T, X=X, W=W)
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05)
```
- Nonparametric final stage
```Python
from econml.dr import ForestDRLearner
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier

est = ForestDRLearner(model_propensity=GradientBoostingClassifier(),
                      model_regression=GradientBoostingRegressor())
est.fit(Y, T, X=X, W=W)
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05)
```
Double Machine Learning with Instrumental Variables (click to expand)
- Orthogonal instrumental variable learner
```Python
from econml.iv.dml import OrthoIV

est = OrthoIV(projection=False,
              discrete_treatment=True,
              discrete_instrument=True)
est.fit(Y, T, Z=Z, X=X, W=W)
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05)  # OLS confidence intervals
```
- Nonparametric double machine learning with instrumental variable
```Python
from econml.iv.dml import NonParamDMLIV
from sklearn.ensemble import RandomForestRegressor

est = NonParamDMLIV(discrete_treatment=True,
                    discrete_instrument=True,
                    model_final=RandomForestRegressor())
est.fit(Y, T, Z=Z, X=X, W=W)  # no analytical confidence interval available
treatment_effects = est.effect(X_test)
```
Doubly Robust Machine Learning with Instrumental Variables (click to expand)
- Linear final stage
```Python
from econml.iv.dr import LinearDRIV

est = LinearDRIV(discrete_instrument=True, discrete_treatment=True)
est.fit(Y, T, Z=Z, X=X, W=W)
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05)  # OLS confidence intervals
```
- Sparse linear final stage
```Python
from econml.iv.dr import SparseLinearDRIV

est = SparseLinearDRIV(discrete_instrument=True, discrete_treatment=True)
est.fit(Y, T, Z=Z, X=X, W=W)
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05)  # Debiased lasso confidence intervals
```
- Nonparametric final stage
```Python
from econml.iv.dr import ForestDRIV

est = ForestDRIV(discrete_instrument=True, discrete_treatment=True)
est.fit(Y, T, Z=Z, X=X, W=W)
treatment_effects = est.effect(X_test)
# Confidence intervals via Bootstrap-of-Little-Bags for forests
lb, ub = est.effect_interval(X_test, alpha=0.05)
```
- Linear intent-to-treat (discrete instrument, discrete treatment)
```Python
from econml.iv.dr import LinearIntentToTreatDRIV
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier

est = LinearIntentToTreatDRIV(model_y_xw=GradientBoostingRegressor(),
                              model_t_xwz=GradientBoostingClassifier(),
                              flexible_model_effect=GradientBoostingRegressor())
est.fit(Y, T, Z=Z, X=X, W=W)
treatment_effects = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.05)  # OLS confidence intervals
```
See the References section for more details.

### Interpretability
Tree Interpreter of the CATE model (click to expand)
```Python
import matplotlib.pyplot as plt
from econml.cate_interpreter import SingleTreeCateInterpreter

intrp = SingleTreeCateInterpreter(include_model_uncertainty=True, max_depth=2, min_samples_leaf=10)
# We interpret the CATE model's behavior based on the features used for heterogeneity
intrp.interpret(est, X)
# Plot the tree
plt.figure(figsize=(25, 5))
intrp.plot(feature_names=['A', 'B', 'C', 'D'], fontsize=12)
plt.show()
```
Policy Interpreter of the CATE model (click to expand)
```Python
import matplotlib.pyplot as plt
from econml.cate_interpreter import SingleTreePolicyInterpreter

# We find a tree-based treatment policy based on the CATE model
intrp = SingleTreePolicyInterpreter(risk_level=0.05, max_depth=2,
                                    min_samples_leaf=1, min_impurity_decrease=.001)
intrp.interpret(est, X, sample_treatment_costs=0.2)
# Plot the tree
plt.figure(figsize=(25, 5))
intrp.plot(feature_names=['A', 'B', 'C', 'D'], fontsize=12)
plt.show()
```
SHAP values for the CATE model (click to expand)
```Python
import shap
from econml.dml import CausalForestDML

est = CausalForestDML()
est.fit(Y, T, X=X, W=W)
shap_values = est.shap_values(X)
shap.summary_plot(shap_values['Y0']['T0'])
```
### Causal Model Selection and Cross-Validation

Causal model selection with the `RScorer` (click to expand)
```Python
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from econml.dml import DML, LinearDML, NonParamDML
from econml.dr import DRLearner
from econml.metalearners import XLearner, SLearner, DomainAdaptationLearner
from econml.score import RScorer

# split data into train and validation sets
X_train, X_val, T_train, T_val, Y_train, Y_val = train_test_split(X, T, Y, test_size=.4)

# define list of CATE estimators to select among
reg = lambda: RandomForestRegressor(min_samples_leaf=20)
clf = lambda: RandomForestClassifier(min_samples_leaf=20)
models = [('ldml', LinearDML(model_y=reg(), model_t=clf(),
                             discrete_treatment=True, cv=3)),
          ('xlearner', XLearner(models=reg(), cate_models=reg(), propensity_model=clf())),
          ('dalearner', DomainAdaptationLearner(models=reg(), final_models=reg(), propensity_model=clf())),
          ('slearner', SLearner(overall_model=reg())),
          ('drlearner', DRLearner(model_propensity=clf(), model_regression=reg(),
                                  model_final=reg(), cv=3)),
          ('rlearner', NonParamDML(model_y=reg(), model_t=clf(), model_final=reg(),
                                   discrete_treatment=True, cv=3)),
          ('dml3dlasso', DML(model_y=reg(), model_t=clf(),
                             model_final=LassoCV(cv=3, fit_intercept=False),
                             discrete_treatment=True,
                             featurizer=PolynomialFeatures(degree=3), cv=3))]

# fit CATE models on the training data
models = [(name, mdl.fit(Y_train, T_train, X=X_train)) for name, mdl in models]

# score CATE models on the validation data
scorer = RScorer(model_y=reg(), model_t=clf(),
                 discrete_treatment=True, cv=3, mc_iters=2, mc_agg='median')
scorer.fit(Y_val, T_val, X=X_val)
rscore = [scorer.score(mdl) for _, mdl in models]
# select the best model
mdl, _ = scorer.best_model([mdl for _, mdl in models])
# create a weighted ensemble model based on score performance
mdl, _ = scorer.ensemble([mdl for _, mdl in models])
```
First Stage Model Selection (click to expand)
EconML's cross-fitting estimators provide built-in functionality for first-stage model selection. This support can work with existing sklearn model selection classes such as LassoCV or GridSearchCV, or you can pass a list of models to choose the best from among them when cross-fitting.
```Python
from econml.dml import LinearDML
from sklearn import clone
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV
from sklearn.model_selection import GridSearchCV

cv_model = GridSearchCV(
    estimator=RandomForestRegressor(),
    param_grid={
        "max_depth": [3, None],
        "n_estimators": (10, 30, 50, 100, 200),
        "max_features": (2, 4, 6),
    },
    cv=5,
)

est = LinearDML(model_y=cv_model,  # use sklearn's grid search to select the best Y model
                model_t=[RandomForestRegressor(), LassoCV()])  # use built-in model selection to choose between forest and linear models for the T model
```
### Inference

Whenever inference is enabled, one can get a more structured InferenceResults object with more elaborate inference information, such as p-values and z-statistics. When the CATE model is linear and parametric, a summary() method is also enabled. For instance:
```Python
from econml.dml import LinearDML

# Use defaults
est = LinearDML()
est.fit(Y, T, X=X, W=W)

# Get the effect inference summary, which includes the standard error, z test score, p value, and confidence interval given each sample X[i]
est.effect_inference(X_test).summary_frame(alpha=0.05, value=0, decimals=3)
# Get the population summary for the entire sample X
est.effect_inference(X_test).population_summary(alpha=0.1, value=0, decimals=3, tol=0.001)
# Get the parameter inference summary for the final model
est.summary()
```
Example Output (click to expand)
```Python
# Get the effect inference summary, which includes the standard error, z test score, p value, and confidence interval given each sample X[i]
est.effect_inference(X_test).summary_frame(alpha=0.05, value=0, decimals=3)
```

```Python
# Get the population summary for the entire sample X
est.effect_inference(X_test).population_summary(alpha=0.1, value=0, decimals=3, tol=0.001)
```

```Python
# Get the parameter inference summary for the final model
est.summary()
```
### Policy Learning

You can also perform direct policy learning from observational data, using the doubly robust method for offline policy learning. These methods directly predict a recommended treatment, without internally fitting an explicit model of the conditional average treatment effect.
Doubly Robust Policy Learning (click to expand)
```Python
import matplotlib.pyplot as plt
from econml.policy import DRPolicyTree, DRPolicyForest
from sklearn.ensemble import RandomForestRegressor

# fit a single binary decision tree policy
policy = DRPolicyTree(max_depth=1, min_impurity_decrease=0.01, honest=True)
policy.fit(y, T, X=X, W=W)
# predict the recommended treatment
recommended_T = policy.predict(X)
# plot the binary decision tree
plt.figure(figsize=(10, 5))
policy.plot()
# get feature importances
importances = policy.feature_importances_

# fit a binary decision forest
policy = DRPolicyForest(max_depth=1, min_impurity_decrease=0.01, honest=True)
policy.fit(y, T, X=X, W=W)
# predict the recommended treatment
recommended_T = policy.predict(X)
# plot the first tree in the ensemble
plt.figure(figsize=(10, 5))
policy.plot(0)
# get feature importances
importances = policy.feature_importances_
```
To see more complex examples, go to the notebooks section of the repository. For a more detailed description of the treatment effect estimation algorithms, see the EconML documentation.
## For Developers

You can get started by cloning this repository. We use setuptools for building and distributing our package. We rely on some recent features of setuptools, so make sure to upgrade to a recent version with `pip install setuptools --upgrade`. Then from your local copy of the repository you can run `pip install -e .` to get started (but depending on what you're doing you might want to install with extras instead, like `pip install -e .[plt]` if you want to use matplotlib integration, or `pip install -e .[all]` to include all extras).
We use the pre-commit framework to enforce code style and run checks before every commit. To install the pre-commit hooks, make sure you have pre-commit installed (`pip install pre-commit`) and then run `pre-commit install` in the root of the repository. This will install the hooks and run them automatically before every commit. If you want to run the hooks manually, you can run `pre-commit run --all-files`.
If you're looking to contribute to the project, we have a number of issues tagged with the up for grabs and help wanted labels. "Up for grabs" issues are ones that we think people without a lot of experience in our codebase may be able to help with, while "Help wanted" issues are valuable improvements to the library that our team currently does not have time to prioritize and for which we would greatly appreciate community-initiated PRs, though these might be more involved.
### Running the tests

This project uses pytest to run tests for continuous integration. It is also possible to use pytest to run tests locally, but this isn't recommended because it will take an extremely long time, and some tests are specific to certain environments or scenarios that have additional dependencies. However, if you'd like to do this anyway, to run all tests locally after installing the package you can use `pip install pytest pytest-xdist pytest-cov coverage[toml]` (as well as `pip install jupyter jupyter-client nbconvert nbformat seaborn xgboost tqdm` for the dependencies to run all of our notebooks as tests) followed by `python -m pytest`.

Because running all tests can be very time-consuming, we recommend running only the relevant subset of tests when developing locally. The easiest way to do this is to rely on pytest's compatibility with unittest, so you can just run `python -m unittest econml.tests.test_module` to run all tests in a given module, or `python -m unittest econml.tests.test_module.TestClass` to run all tests in a given class. You can also run `python -m unittest econml.tests.test_module.TestClass.test_method` to run a single test method.
### Generating the documentation

This project's documentation is generated via Sphinx. Note that we use graphviz's dot application to produce some of the images in our documentation, so you should make sure that dot is installed and in your path.

To generate a local copy of the documentation from a clone of this repository, just run `python setup.py build_sphinx -W -E -a`, which will build the documentation and place it under the build/sphinx/html path.

The reStructuredText files that make up the documentation are stored in the docs directory; module documentation is automatically generated by the Sphinx build process.
### Release process

We use GitHub Actions to build and publish the package and documentation. To create a new release, an admin should perform the following steps:
- Update the version number in `econml/_version.py` and add a mention of the new version in the news section of this file, then commit the changes.
- Manually run the publish_package.yml workflow to build and publish the package to PyPI.
- Manually run the publish_docs.yml workflow to build and publish the documentation.
- Under https://github.com/py-why/EconML/releases, create a new release with a corresponding tag, and update the release notes.
## Blogs and Publications

May 2021: Be Careful When Interpreting Predictive Models in Search of Causal Insights

June 2019: Treatment Effects with Instruments paper

2017: DeepIV paper
## Citation

If you use EconML in your research, please cite us as follows:
Keith Battocchi, Eleanor Dillon, Maggie Hei, Greg Lewis, Paul Oka, Miruna Oprescu, Vasilis Syrgkanis. EconML: A Python Package for ML-Based Heterogeneous Treatment Effects Estimation. https://github.com/py-why/EconML, 2019. Version 0.x.
BibTeX:
```
@misc{econml,
  author={Keith Battocchi, Eleanor Dillon, Maggie Hei, Greg Lewis, Paul Oka, Miruna Oprescu, Vasilis Syrgkanis},
  title={{EconML}: {A Python Package for ML-Based Heterogeneous Treatment Effects Estimation}},
  howpublished={https://github.com/py-why/EconML},
  note={Version 0.x},
  year={2019}
}
```

## Contributing and Feedback

This project welcomes contributions and suggestions. We use the DCO bot to enforce a Developer Certificate of Origin, which requires users to sign off on their commits. This is a simple way to certify that you wrote or otherwise have the right to submit the code you are contributing to the project. Git provides a `-s` command line option to include this automatically when you commit via `git commit`.
If you forget to sign one of your commits, the DCO bot will provide specific instructions along with the failed check; alternatively, you can use `git commit --amend -s` to add the sign-off to your last commit if you forgot it, or `git rebase --signoff` to sign all of the commits in the branch, after which you can force push the changes to your branch with `git push --force-with-lease`.
## Code of Conduct

This project has adopted the PyWhy Code of Conduct.

## Community

EconML is part of PyWhy, an organization with a mission to build an open-source ecosystem for causal machine learning.

PyWhy also has a Discord, which serves as a space for like-minded causal machine learning researchers and practitioners of all experience levels to come together to ask and answer questions, discuss new features, and share ideas.

We invite you to join us at regular office hours and community calls in the Discord.
## References

Athey, Susan, and Stefan Wager. Policy learning with observational data. Econometrica 89.1, 133-161, 2021.

X. Nie, S. Wager. Quasi-Oracle Estimation of Heterogeneous Treatment Effects. Biometrika 108.2, 299-319, 2021.

V. Syrgkanis, V. Lei, M. Oprescu, M. Hei, K. Battocchi, G. Lewis. Machine Learning Estimation of Heterogeneous Treatment Effects with Instruments. Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS), 2019. (Spotlight Presentation)

D. Foster, V. Syrgkanis. Orthogonal Statistical Learning. Proceedings of the 32nd Annual Conference on Learning Theory (COLT), 2019. (Best Paper Award)

M. Oprescu, V. Syrgkanis and Z. S. Wu. Orthogonal Random Forest for Causal Inference. Proceedings of the 36th International Conference on Machine Learning (ICML), 2019.

S. Künzel, J. Sekhon, J. Bickel and B. Yu. Metalearners for estimating heterogeneous treatment effects using machine learning. Proceedings of the National Academy of Sciences, 116(10), 4156-4165, 2019.

S. Athey, J. Tibshirani, S. Wager. Generalized random forests. Annals of Statistics, 47(2), 1148-1178, 2019.

V. Chernozhukov, D. Nekipelov, V. Semenova, V. Syrgkanis. Plug-in Regularized Estimation of High-Dimensional Parameters in Nonlinear Semiparametric Models. arXiv preprint arXiv:1806.04823, 2018.

S. Wager, S. Athey. Estimation and Inference of Heterogeneous Treatment Effects using Random Forests. Journal of the American Statistical Association, 113:523, 1228-1242, 2018.

Jason Hartford, Greg Lewis, Kevin Leyton-Brown, and Matt Taddy. Deep IV: A flexible approach for counterfactual prediction. Proceedings of the 34th International Conference on Machine Learning (ICML'17), 2017.

V. Chernozhukov, D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, and W. Newey. Double Machine Learning for Treatment and Causal Parameters. arXiv preprint arXiv:1608.00060, 2016.

M. Dudik, D. Erhan, J. Langford, and L. Li. Doubly robust policy evaluation and optimization. Statistical Science, 29(4), 485-511, 2014.