Awesome Uncertainty in Deep Learning

Uncertainty Quantification in Deep Learning

Literature survey

Basic background for uncertainty estimation

  • B. Efron and R. Tibshirani. “Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy.” Statistical science, 1986. [Link]
  • R. Barber, E. J. Candes, A. Ramdas, and R. J. Tibshirani. “Predictive inference with the jackknife+.” arXiv, 2019. [Link]
  • B. Efron. “Jackknife‐after‐bootstrap standard errors and influence functions.” Journal of the Royal Statistical Society: Series B (Methodological), 1992. [Link]
  • J. Robins and A. Van Der Vaart. “Adaptive nonparametric confidence sets.” The Annals of Statistics, 2006. [Link]
  • V. Vovk et al. “Cross-conformal predictive distributions.” JMLR, 2018. [Link]
  • M. H. Quenouille. “Approximate tests of correlation in time-series.” Journal of the Royal Statistical Society, 1949. [Link]
  • M. H. Quenouille. “Notes on bias in estimation.” Biometrika, 1956. [Link]
  • J. Tukey. “Bias and confidence in not quite large samples.” Ann. Math. Statist., 1958.
  • R. G. Miller. “The jackknife–a review.” Biometrika, 1974. [Link]
  • B. Efron. “Bootstrap methods: Another look at the jackknife.” Ann. Statist., 1979. [Link]
  • R. A. Stine. “Bootstrap prediction intervals for regression.” Journal of the American Statistical Association, 1985. [Link]
  • R. F. Barber, E. J. Candes, A. Ramdas, and R. J. Tibshirani. “Conformal prediction under covariate shift.” arXiv preprint arXiv:1904.06019, 2019. [Link]
  • R. F. Barber, E. J. Candes, A. Ramdas, and R. J. Tibshirani. “The limits of distribution-free conditional predictive inference.” arXiv preprint arXiv:1903.04684, 2019b. [Link]
  • J. Lei, M. G’Sell, A. Rinaldo, R. J. Tibshirani, and L. Wasserman. “Distribution-free predictive inference for regression.” Journal of the American Statistical Association, 2018. [Link]
  • R. Giordano, M. I. Jordan, and T. Broderick. “A Higher-Order Swiss Army Infinitesimal Jackknife.” arXiv, 2019. [Link]
  • P. W. Koh, K. Ang, H. H. K. Teo, and P. Liang. “On the Accuracy of Influence Functions for Measuring Group Effects.” arXiv, 2019. [Link]
  • D. H. Wolpert. “Stacked generalization.” Neural networks, 1992. [Link]
  • R. D. Cook and S. Weisberg. “Residuals and influence in regression.” New York: Chapman and Hall, 1982. [Link]
  • R. Giordano, W. Stephenson, R. Liu, M. I. Jordan, and T. Broderick. “A Swiss Army Infinitesimal Jackknife.” arXiv preprint arXiv:1806.00550, 2018. [Link]
  • P. W. Koh and P. Liang. “Understanding black-box predictions via influence functions.” ICML, 2017. [Link]
  • S. Wager and S. Athey. “Estimation and inference of heterogeneous treatment effects using random forests.” Journal of the American Statistical Association, 2018. [Link]
  • J. F. Lawless and M. Fredette. “Frequentist prediction intervals and predictive distributions.” Biometrika, 2005. [Link]
  • F. R. Hampel, E. M. Ronchetti, P. J. Rousseeuw, and W. A. Stahel. “Robust statistics: the approach based on influence functions.” John Wiley and Sons, 2011. [Link]
  • P. J. Huber and E. M. Ronchetti. “Robust Statistics.” John Wiley and Sons, 1981.
  • Y. Romano, R. F. Barber, C. Sabatti, and E. J. Candès. “With Malice Towards None: Assessing Uncertainty via Equalized Coverage.” arXiv, 2019. [Link]
  • H. R. Kunsch. “The Jackknife and the Bootstrap for General Stationary Observations.” The Annals of Statistics, 1989. [Link]
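The jackknife+ of Barber et al. (cited above) admits a compact implementation: refit with each training point held out in turn, collect the leave-one-out residuals, and combine them with the leave-one-out predictions at the test point. The sketch below is illustrative, not the paper's reference code: it uses a one-dimensional `np.polyfit` line as the base learner and plain empirical quantiles rather than the paper's finite-sample-corrected ones.

```python
import numpy as np

def jackknife_plus_interval(x, y, x_test, alpha=0.1):
    """Jackknife+ prediction interval (Barber et al., 2019) around a
    1-D least-squares fit; a minimal sketch with empirical quantiles."""
    n = len(x)
    lo_parts, hi_parts = [], []
    for i in range(n):
        mask = np.arange(n) != i
        # Leave-one-out fit: a straight line through the remaining points.
        slope, intercept = np.polyfit(x[mask], y[mask], deg=1)
        r_i = abs(y[i] - (slope * x[i] + intercept))   # leave-one-out residual
        pred_test = slope * x_test + intercept         # leave-one-out prediction
        lo_parts.append(pred_test - r_i)
        hi_parts.append(pred_test + r_i)
    # Lower/upper empirical quantiles of the residual-shifted predictions.
    return np.quantile(lo_parts, alpha), np.quantile(hi_parts, 1 - alpha)

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2 * x + rng.normal(0, 1, 50)
lo, hi = jackknife_plus_interval(x, y, x_test=5.0)
```

Because every leave-one-out model enters the interval, the resulting band also reflects the fit's sensitivity to individual training points, which is what distinguishes jackknife+ from the naive jackknife.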

Predictive uncertainty for general machine learning models

  • S. Wager, T. Hastie, and B. Efron. “Confidence intervals for random forests: The jackknife and the infinitesimal jackknife.” The Journal of Machine Learning Research, 2014. [Link]
  • L. Mentch and G. Hooker. “Quantifying uncertainty in random forests via confidence intervals and hypothesis tests.” The Journal of Machine Learning Research, 2016. [Link]
  • J. Platt. “Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods.” Advances in large margin classifiers, 1999. [Link]
  • A. Abadie, S. Athey, and G. Imbens. “Sampling-based vs. design-based uncertainty in regression analysis.” arXiv preprint arXiv:1706.01778, 2017. [Link]
  • T. Duan, A. Avati, D. Y. Ding, S. Basu, A. Y. Ng, and A. Schuler. “NGBoost: Natural Gradient Boosting for Probabilistic Prediction.” arXiv preprint, 2019. [Link]
  • V. Franc and D. Prusa. “On Discriminative Learning of Prediction Uncertainty.” ICML, 2019. [Link]
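Platt scaling (Platt, cited above) turns raw classifier scores into calibrated probabilities by fitting a two-parameter sigmoid to held-out scores. A minimal numpy sketch, using the increasing-sigmoid sign convention and plain gradient descent on the log loss (both choices are mine, not the paper's exact Newton-style setup):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def platt_scale(scores, labels, lr=0.1, steps=5000):
    """Fit p = sigmoid(a * s + b) to binary labels by gradient
    descent on the log loss; an illustrative sketch of Platt scaling."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = sigmoid(a * scores + b)
        g = p - labels                   # gradient of the log loss w.r.t. the logit
        a -= lr * np.mean(g * scores)
        b -= lr * np.mean(g)
    return a, b

# Toy scores: positives centered at +1, negatives at -1.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(1, 1, 200), rng.normal(-1, 1, 200)])
labels = np.concatenate([np.ones(200), np.zeros(200)])
a, b = platt_scale(scores, labels)
```

The fitted `a` and `b` are then applied to new scores via `sigmoid(a * s + b)`; fitting them on the training scores themselves, rather than a held-out set, is known to produce overconfident probabilities.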

Predictive uncertainty for deep learning

  • J. A. Leonard, M. A. Kramer, and L. H. Ungar. “A neural network architecture that computes its own reliability.” Computers & chemical engineering, 1992. [Link]
  • C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. “Weight uncertainty in neural networks.” ICML, 2015. [Link]
  • B. Lakshminarayanan, A. Pritzel, and C. Blundell. “Simple and scalable predictive uncertainty estimation using deep ensembles.” NeurIPS, 2017. [Link]
  • Y. Gal and Z. Ghahramani. “Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning.” ICML, 2016. [Link]
  • V. Kuleshov, N. Fenner, and S. Ermon. “Accurate Uncertainties for Deep Learning Using Calibrated Regression.” ICML, 2018. [Link]
  • J. Hernández-Lobato and R. Adams. “Probabilistic backpropagation for scalable learning of bayesian neural networks.” ICML, 2015. [Link]
  • S. Liang, Y. Li, and R. Srikant. “Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks.” ICLR, 2018. [Link]
  • K. Lee, H. Lee, K. Lee, and J. Shin. “Training Confidence-calibrated classifiers for detecting out-of-distribution samples.” ICLR, 2018. [Link]
  • P. Schulam and S. Saria. “Can You Trust This Prediction? Auditing Pointwise Reliability After Learning.” AISTATS, 2019. [Link]
  • A. Malinin and M. Gales. “Predictive uncertainty estimation via prior networks.” NeurIPS, 2018. [Link]
  • D. Hendrycks, M. Mazeika, and T. G. Dietterich. “Deep anomaly detection with outlier exposure.” arXiv preprint arXiv:1812.04606, 2018. [Link]
  • D. Madras, J. Atwood, and A. D’Amour. “Detecting Extrapolation with Influence Functions.” ICML Workshop on Uncertainty and Robustness in Deep Learning, 2019. [Link]
  • M. Sensoy, L. Kaplan, and M. Kandemir. “Evidential deep learning to quantify classification uncertainty.” NeurIPS, 2018. [Link]
  • W. Maddox, T. Garipov, P. Izmailov, D. Vetrov, and A. G. Wilson. “A simple baseline for bayesian uncertainty in deep learning.” arXiv preprint arXiv:1902.02476, 2019. [Link]
  • Y. Ovadia, et al. “Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift.” arXiv preprint arXiv:1906.02530, 2019. [Link]
  • D. Hendrycks, et al. “Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty.” arXiv preprint arXiv:1906.12340, 2019. [Link]
  • A. Kumar, P. Liang, and T. Ma. “Verified Uncertainty Calibration.” arXiv preprint, 2019. [Link]
  • I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. “Deep Exploration via Bootstrapped DQN.” NeurIPS, 2016. [Link]
  • I. Osband. “Risk versus Uncertainty in Deep Learning: Bayes, Bootstrap and the Dangers of Dropout.” NeurIPS Workshop, 2016. [Link]
  • J. Postels et al. “Sampling-free Epistemic Uncertainty Estimation Using Approximated Variance Propagation.” ICCV, 2019. [Link]
  • A. Kendall and Y. Gal. “What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?” NeurIPS, 2017. [Link]
  • N. Tagasovska and D. Lopez-Paz. “Single-Model Uncertainties for Deep Learning.” NeurIPS, 2019. [Link]
  • A. Der Kiureghian and O. Ditlevsen. “Aleatory or Epistemic? Does it Matter?” Structural Safety, 2009. [Link]
  • D. Hafner, D. Tran, A. Irpan, T. Lillicrap, and J. Davidson. “Reliable uncertainty estimates in deep neural networks using noise contrastive priors.” arXiv, 2018. [Link]
  • S. Depeweg, J. M. Hernández-Lobato, F. Doshi-Velez, and S. Udluft. “Decomposition of uncertainty in Bayesian deep learning for efficient and risk-sensitive learning.” ICML, 2018. [Link]
  • L. Smith and Y. Gal. “Understanding Measures of Uncertainty for Adversarial Example Detection.” UAI, 2018. [Link]
  • L. Zhu and N. Laptev. “Deep and Confident Prediction for Time series at Uber.” IEEE International Conference on Data Mining Workshops, 2017. [Link]

Predictive uncertainty in sequential models

  • R. Wen, K. Torkkola, B. Narayanaswamy, and D. Madeka. “A Multi-horizon Quantile Recurrent Forecaster.” arXiv, 2017.
  • D. T. Mirikitani and N. Nikolaev. “Recursive bayesian recurrent neural networks for time-series modeling.” IEEE Transactions on Neural Networks, 2009. [Link]
  • M. Fortunato, C. Blundell, and O. Vinyals. “Bayesian Recurrent Neural Networks.” arXiv, 2019. [Link]
  • P. L. McDermott and C. K. Wikle. “Bayesian Recurrent Neural Network Models for Forecasting and Quantifying Uncertainty in Spatial-Temporal Data.” Entropy, 2019. [Link]
  • Y. Gal and Z. Ghahramani. “A theoretically grounded application of dropout in recurrent neural networks.” NeurIPS, 2016. [Link]

Computer Vision

  • MonoLoco: Monocular 3D Pedestrian Localization and Uncertainty Estimation [Link]
  • Gaussian YOLOv3: An Accurate and Fast Object Detector Using Localization Uncertainty for Autonomous Driving [Link]
  • Universal Adversarial Perturbation via Prior Driven Uncertainty Approximation [Link]
  • Calibration Wizard: A Guidance System for Camera Calibration Based on Modelling Geometric and Corner Uncertainty [Link]
  • Human uncertainty makes classification more robust [Link]
  • Guided Curriculum Model Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation [Link]
  • Uncertainty-aware Audiovisual Activity Recognition using Deep Bayesian Variational Inference [Link]
  • Robust Person Re-identification by Modelling Feature Uncertainty [Link]
  • Uncertainty Modeling of Contextual-Connections between Tracklets for Unconstrained Video-based Face Recognition [Link]

Unclassified

  • Correlated Uncertainty for Learning Dense Correspondences from Noisy Labels [Link]
  • Modeling Uncertainty by Learning a Hierarchy of Deep Neural Connections [Link]
  • Propagating Uncertainty in Reinforcement Learning via Wasserstein Barycenters [Link]
  • Uncertainty-based Continual Learning with Adaptive Regularization [Link]
  • Successor Uncertainties: Exploration and Uncertainty in Temporal Difference Learning [Link]
  • Accurate Uncertainty Estimation and Decomposition in Ensemble Learning [Link]
  • CXPlain: Causal Explanations for Model Interpretation under Uncertainty [Link]
  • Adaptive Temporal-Difference Learning for Policy Evaluation with Per-State Uncertainty Estimates [Link]
  • Uncertainty on Asynchronous Time Event Prediction [Link]
  • Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness [Link]
  • Bayesian Layers: A Module for Neural Network Uncertainty [Link]
  • Adversarial Dropout for Supervised and Semi-supervised Learning [Link]
  • On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks [Link]

Implemented baselines

  • Bayesian neural networks (Pyro).
  • Monte Carlo dropout (PyTorch).
  • Bayes by backprop (PyTorch).
  • Probabilistic backprop.
  • Naive Jackknife, Jackknife-minmax, and Jackknife+ (PyTorch).
  • Cross conformal and split conformal learning (PyTorch).
  • Deep ensembles (TensorFlow).
  • Resampling uncertainty estimation (PyTorch).
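As a flavor of the Monte Carlo dropout baseline (Gal & Ghahramani, cited above): keep dropout active at prediction time, average many stochastic forward passes for the predictive mean, and use the spread across passes as an epistemic-uncertainty proxy. The tiny two-layer numpy network below is an illustrative stand-in for the PyTorch implementation; its weights and inputs are random placeholders.

```python
import numpy as np

def mc_dropout_predict(x, w1, w2, p=0.5, samples=100, rng=None):
    """Monte Carlo dropout for a two-layer ReLU network with fixed
    weights w1, w2; a numpy sketch, not the PyTorch baseline itself."""
    rng = rng or np.random.default_rng(0)
    preds = []
    for _ in range(samples):
        h = np.maximum(x @ w1, 0.0)
        # Dropout stays on at prediction time; inverted-dropout rescaling.
        mask = rng.random(h.shape) > p
        h = h * mask / (1.0 - p)
        preds.append(h @ w2)
    preds = np.asarray(preds)
    # Predictive mean and sample std as an epistemic-uncertainty proxy.
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(1)
w1 = rng.normal(size=(3, 16))
w2 = rng.normal(size=(16, 1))
x = rng.normal(size=(5, 3))
mean, std = mc_dropout_predict(x, w1, w2)
```

In a PyTorch model the same effect is usually obtained by leaving the dropout modules in training mode during inference and stacking repeated forward passes.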