publications

publications by category in reverse chronological order. generated by jekyll-scholar.

2023

  1. PhD Thesis
    Advancing Deep Active Learning & Data Subset Selection: Unifying Principles with Information-Theory Intuitions
    Kirsch, Andreas
    2023
  2. TMLR
    Stochastic Batch Acquisition: A Simple Baseline for Deep Active Learning
    Kirsch, Andreas, Farquhar, Sebastian, Atighehchian, Parmida, Jesson, Andrew, Branchaud-Charron, Frédéric, and Gal, Yarin
    Transactions on Machine Learning Research 2023
  3. TMLR
    Black-Box Batch Active Learning for Regression
    Kirsch, Andreas
    Transactions on Machine Learning Research 2023
  4. CVPR 2023
    Highlight
    Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty
    Mukhoti, Jishnu, Kirsch, Andreas, van Amersfoort, Joost, Torr, Philip H.S., and Gal, Yarin
    Conference on Computer Vision and Pattern Recognition 2023
  5. AISTATS 2023
    Prediction-Oriented Bayesian Active Learning
    Bickford Smith*, Freddie, Kirsch*, Andreas, Farquhar, Sebastian, Gal, Yarin, Foster, Adam, and Rainforth, Tom
    26th International Conference on Artificial Intelligence and Statistics 2023
  6. Preprint
    Speeding Up BatchBALD: A k-BALD Family of Approximations for Active Learning
    Kirsch, Andreas
    arXiv 2023
  7. TMLR
    Repro. Cert.
    Does ‘Deep Learning on a Data Diet’ Reproduce? Overall Yes, but GraNd at Initialization Does Not
    Kirsch, Andreas
    Transactions on Machine Learning Research (Reproducibility Certification) 2023

2022

  1. TMLR
    A Note on “Assessing Generalization of SGD via Disagreement”
    Kirsch, Andreas, and Gal, Yarin
    Transactions on Machine Learning Research 2022
  2. TMLR
    Unifying Approaches in Active Learning and Active Sampling via Fisher Information and Information-Theoretic Quantities
    Kirsch, Andreas, and Gal, Yarin
    Transactions on Machine Learning Research 2022
  3. Pre-training Workshop
    ICML 2022
    Plex: Towards Reliability Using Pretrained Large Model Extensions
    Tran, Dustin, Liu, Jeremiah, Dusenberry, Michael W, Phan, Du, Collier, Mark, Ren, Jie, Han, Kehang, Wang, Zi, Mariet, Zelda, Hu, Huiyi, and others
    ICML Pre-training Workshop 2022
  4. ICML 2022
    Prioritized Training on Points that are Learnable, Worth Learning, and not yet Learnt
    Mindermann*, Sören, Brauner*, Jan M, Razzak*, Muhammed T, Sharma*, Mrinank, Kirsch, Andreas, Xu, Winnie, Höltgen, Benedikt, Gomez, Aidan N, Morisot, Adrien, Farquhar, Sebastian, and Gal, Yarin
    In Proceedings of the 39th International Conference on Machine Learning 2022
  5. UpML 2022
    Marginal and Joint Cross-Entropies & Predictives for Online Bayesian Inference, Active Learning, and Active Sampling
    Kirsch, Andreas, Kossen, Jannik, and Gal, Yarin
    UpML 2022 – Updatable Machine Learning, Workshop @ ICML 2022

2021

  1. NeurIPS 2021
    Causal-BALD: Deep Bayesian Active Learning of Outcomes to Infer Treatment-Effects from Observational Data
    Jesson, Andrew, Tigas, Panagiotis, van Amersfoort, Joost, Kirsch, Andreas, Shalit, Uri, and Gal, Yarin
    In Advances in Neural Information Processing Systems 2021
  2. UDL 2021
    On Pitfalls in OoD Detection: Entropy Considered Harmful
    Kirsch, Andreas, Mukhoti, Jishnu, van Amersfoort, Joost, Torr, Philip H.S., and Gal, Yarin
    In Uncertainty & Robustness in Deep Learning at Int. Conf. on Machine Learning (ICML Workshop) 2021
  3. UDL 2021
    Deterministic Neural Networks with Inductive Biases Capture Epistemic and Aleatoric Uncertainty
    Mukhoti, Jishnu, Kirsch, Andreas, van Amersfoort, Joost, Torr, Philip H.S., and Gal, Yarin
    In Uncertainty & Robustness in Deep Learning at Int. Conf. on Machine Learning (ICML Workshop) 2021
  4. SubSetML 2021
    Active Learning under Pool Set Distribution Shift and Noisy Data
    Kirsch, Andreas, Rainforth, Tom, and Gal, Yarin
    In SubSetML: Subset Selection in Machine Learning: From Theory to Practice (ICML Workshop) 2021
  5. SubSetML 2021
    A Simple Baseline for Batch Active Learning with Stochastic Acquisition Functions
    Kirsch, Andreas, Farquhar, Sebastian, and Gal, Yarin
    In SubSetML: Subset Selection in Machine Learning: From Theory to Practice (ICML Workshop) 2021
  6. SubSetML 2021
    A Practical & Unified Notation for Information-Theoretic Quantities in ML
    Kirsch, Andreas, and Gal, Yarin
    In SubSetML: Subset Selection in Machine Learning: From Theory to Practice (ICML Workshop) 2021
  7. SubSetML 2021
    Prioritized Training on Points that are Learnable, Worth Learning, and not yet Learned
    Mindermann, Sören, Razzak, Muhammed, Xu, Winnie, Kirsch, Andreas, Sharma, Mrinank, Morisot, Adrien, Gomez, Aidan N., Farquhar, Sebastian, Brauner, Jan, and Gal, Yarin
    In SubSetML: Subset Selection in Machine Learning: From Theory to Practice (ICML Workshop) 2021
  8. NACI 2021
    Causal-BALD: Deep Bayesian Active Learning of Outcomes to Infer Treatment-Effects from Observational Data
    Jesson, Andrew, Tigas, Panagiotis, van Amersfoort, Joost, Kirsch, Andreas, Shalit, Uri, and Gal, Yarin
    In The Neglected Assumptions In Causal Inference (ICML Workshop) 2021

2020

  1. UDL 2020
    Scalable Training with Information Bottleneck Objectives
    Kirsch, Andreas, Lyle, Clare, and Gal, Yarin
    In Uncertainty & Robustness in Deep Learning at Int. Conf. on Machine Learning (ICML Workshop) 2020
  2. UDL 2020
    Learning CIFAR-10 with a Simple Entropy Estimator Using Information Bottleneck Objectives
    Kirsch, Andreas, Lyle, Clare, and Gal, Yarin
    In Uncertainty & Robustness in Deep Learning at Int. Conf. on Machine Learning (ICML Workshop) 2020
  3. Preprint
    Unpacking Information Bottlenecks: Unifying Information-Theoretic Objectives in Deep Learning
    Kirsch, Andreas, Lyle, Clare, and Gal, Yarin
    arXiv Preprint 2020

2019

  1. NeurIPS 2019
    BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning
    Kirsch*, Andreas, van Amersfoort*, Joost, and Gal, Yarin
    In Advances in Neural Information Processing Systems 2019