Papers
This is what we have been up to recently.
Predict the Next Word: `Humans exhibit uncertainty in this task and language models _____`
Evgenia Ilia and Wilker Aziz. In EACL, 2024.
[arXiv]
Interpreting Predictive Probabilities: Model Confidence or Human Label Variation?
Joris Baan, Raquel Fernández, Barbara Plank, Wilker Aziz. In EACL, 2024.
[arXiv]
The Effect of Generalisation on the Inadequacy of the Mode
Bryan Eikema. In UncertaiNLP at EACL, 2024.
Uncertainty in Natural Language Generation: From Theory to Applications
Joris Baan, Nico Daheim, Evgenia Ilia, Dennis Ulmer, Haau-Sing Li, Raquel Fernández, Barbara Plank, Rico Sennrich, Chrysoula Zerva, Wilker Aziz. Pre-print. 2023.
[arXiv]
What Comes Next? Evaluating Uncertainty in Neural Text Generators Against Human Production Variability
Mario Giulianelli, Joris Baan, Wilker Aziz, Raquel Fernández, Barbara Plank. In EMNLP, 2023.
[arXiv]
VISION DIFFMASK: Faithful Interpretation of Vision Transformers with Differentiable Patch Masking
Angelos Nalmpantis, Apostolos Panagiotopoulos, John Gkountouras, Konstantinos Papakostas, Wilker Aziz. In XAI4CV at CVPR, 2023.
[arXiv] [poster] [code]
Sampling-Based Minimum Bayes Risk Decoding for Neural Machine Translation
Bryan Eikema and Wilker Aziz. In EMNLP, 2022.
[arXiv] [code]
Stop Measuring Calibration When Humans Disagree
Joris Baan, Wilker Aziz, Barbara Plank, and Raquel Fernández. In EMNLP, 2022.
[arXiv]
Sparse Communication via Mixed Distributions
António Farinhas, Wilker Aziz, Vlad Niculae, and André F. T. Martins. In ICLR, 2022.
[arXiv] [code]
Statistical Model Criticism of Variational Auto-Encoders
Claartje Barkhof and Wilker Aziz. Pre-print. 2022.
[arXiv]
Editing Factual Knowledge in Language Models
Nicola De Cao, Wilker Aziz, and Ivan Titov. In EMNLP, 2021.
[arXiv] [code]
Highly Parallel Autoregressive Entity Linking with Discriminative Correction
Nicola De Cao, Wilker Aziz, and Ivan Titov. In EMNLP, 2021.
[arXiv] [code]
Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation
Bryan Eikema and Wilker Aziz. In COLING, 2020. Best paper
[arXiv] [code]
Efficient Marginalization of Discrete and Structured Latent Variables via Sparsity
Gonçalo M. Correia, Vlad Niculae, Wilker Aziz, and André F. T. Martins. In NeurIPS, 2020. Spotlight
[arXiv] [code]
How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking
Nicola De Cao, Michael Schlichtkrull, Wilker Aziz, and Ivan Titov. In EMNLP, 2020.
[arXiv] [code]
The Power Spherical Distribution
Nicola De Cao and Wilker Aziz. In ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models, 2020.
[arXiv] [slides] [code]
Effective Estimation of Deep Generative Language Models
Tom Pelsmaeker and Wilker Aziz. In ACL, 2020.
[arXiv] [slides] [talk] [code]
A Latent Morphology Model for Open-Vocabulary Neural Machine Translation
Duygu Ataman, Wilker Aziz, and Alexandra Birch. In ICLR, 2020. Spotlight
[arXiv]
Auto-Encoding Variational Neural Machine Translation
Bryan Eikema and Wilker Aziz. In RepL4NLP, 2019.
[arXiv] [code] [demo]
Interpretable Neural Predictions with Differentiable Binary Variables
Jasmijn Bastings, Wilker Aziz and Ivan Titov. In ACL, 2019.
[arXiv] [code]
Latent Variable Model for Multi-modal Translation
Iacer Calixto, Miguel Rios and Wilker Aziz. In ACL, 2019.
[arXiv]
Block Neural Autoregressive Flow
Nicola De Cao, Wilker Aziz and Ivan Titov. In UAI, 2019. Spotlight
[arXiv] [code]
Question Answering by Reasoning Across Documents with Graph Convolutional Networks
Nicola De Cao, Wilker Aziz and Ivan Titov. In NAACL, 2019.
[arXiv]
And this is some of the good stuff we were up to prior to Probabll.
A Stochastic Decoder for Neural Machine Translation
Philip Schulz, Wilker Aziz and Trevor Cohn. In Proceedings of ACL, 2018.
[arXiv] [appendix] [bibtex] [code]
Deep Generative Model for Joint Alignment and Word Representation
Miguel Rios, Wilker Aziz, Khalil Sima'an. In Proceedings of NAACL-HLT, 2018.
[arXiv] [bibtex] [slides] [code]
Modeling Latent Sentence Structure in Neural Machine Translation
Jasmijn Bastings, Wilker Aziz, Ivan Titov, Khalil Sima'an. Extended abstract at ACL's NMT workshop, 2018.
[arXiv]
Graph Convolutional Encoders for Syntax-aware Neural Machine Translation
Jasmijn Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, Khalil Sima'an. In Proceedings of EMNLP, 2017.
[arXiv] [bibtex] [slides]
Fast Collocation-Based Bayesian HMM Word Alignment
Philip Schulz and Wilker Aziz. In Proceedings of COLING, 2016.
[bibtex] [code]
Examining the Relationship between Preordering and Word Order Freedom in Machine Translation
Joachim Daiber, Miloš Stanojević, Wilker Aziz and Khalil Sima'an. In Proceedings of WMT, 2016.
[bibtex] [slides] [code]
Word alignment without NULL words
Philip Schulz, Wilker Aziz and Khalil Sima'an. In Proceedings of ACL, 2016.
[bibtex] [poster] [code]