About me
Hi, I’m Gabriel Ilharco. I work on large multimodal models at xAI. Previously, I received my Ph.D. from the University of Washington, where I was fortunate to be advised by Ali Farhadi and Hannaneh Hajishirzi. Prior to UW, I was an AI Resident at Google Research.
My research interests broadly include natural language processing and computer vision. I am particularly excited about large-scale multimodal models, transfer learning, distributional robustness, and data-centric machine learning. You can learn more about my work from my recent publications.
News
- 2024.02: I’ve graduated and started a new job at xAI!
- 2023.06: I’ve been selected as a 2023 JP Morgan Chase PhD Fellow!
- 2023.04: We are releasing a new benchmark for designing multimodal datasets, check out DataComp!
- 2023.01: Our work Editing Models with Task Arithmetic was accepted at ICLR 2023!
- 2022.12: Check out our new work, Editing Models with Task Arithmetic!
- 2022.12: New paper on debugging vision models, Adaptive Testing of Computer Vision Models.
- 2022.11: I’m at NeurIPS, talking about PAINT and giving a talk at the INTERPOLATE workshop. Come say hi!
- 2022.08: Our work Patching open-vocabulary models by interpolating weights was accepted to NeurIPS 2022!
- 2022.07: Check out our new paper on patching models!
- 2022.06: Our work on robust fine-tuning was a Best Paper finalist at CVPR 2022!
- 2022.06: I’ll be presenting Robust fine-tuning of zero-shot models at CVPR 2022, come say hi!
- 2022.05: I’m very thankful to be recognized as an outstanding reviewer at CVPR 2022
- 2022.04: Our open-source repository for training CLIP models has reached 1000 stars!
- 2022.03: Model soups set a new state-of-the-art on ImageNet
- 2022.03: What makes zero-shot CLIP models robust? Find out here
- 2022.03: Check out our work using CLIP for zero-shot object navigation.
- 2021.10: I’m excited to be starting an internship with Jacob Eisenstein at Google Research
- 2021.09: Check out our new work Robust fine-tuning of zero-shot models!
- 2021.08: I’m very thankful to be recognized as an outstanding reviewer at ACL 2021
- 2021.06: Our paper exploring the relation between visual and text representations has been accepted to NAACL
- 2021.04: Our MultiModalQA paper on complex QA over text, tables and images is out
- 2021.03: Our paper on contrastive representation learning is out
- 2020.11: We’ll be presenting a tutorial on High Performance NLP at EMNLP 2020
- 2020.06: I’m very excited to be starting an internship with Peter Anderson and Ashish Vaswani at Google Research
- 2020.05: Check out our new preprint exploring similarities between vision and language representations
- 2020.04: Our work Evaluating NLP models via contrast sets is out
- 2020.02: Check out our new paper exploring the dynamics of fine-tuning in NLP
- 2020.01: Our paper Toward ML-Centric Cloud Platforms made the cover of the Communications of the ACM
- 2019.12: Don’t miss our spotlight presentation on SDTW at ViGIL, NeurIPS 2019.
- 2019.11: Our CoNLL 2019 paper was awarded Honorable Mention for Best Paper in Research Inspired by Human Language Learning!
- 2019.09: I’m officially joining UW as a PhD student!
- 2019.09: Our paper Large-scale representation learning from visually grounded untranscribed speech was accepted to CoNLL 2019
- 2019.08: I’ll be at KDD 2019 presenting a hands-on tutorial on Deep Learning for Natural Language Processing using Tensorflow
- 2019.07: Don’t miss our oral presentation about instruction fidelity in VLN at ACL 2019!
- 2019.07: Our paper on transferable representation learning for VLN has been accepted to ICCV 2019
Research opportunities
- I am excited about mentoring ambitious undergraduate and master’s students. If you are a student at UW, please reach out!
- I especially encourage students from underrepresented groups to reach out.