Exploring the unknown, together.

Cohere For AI is a non-profit research lab that seeks to solve complex machine learning problems. We support fundamental research that explores the unknown, and are focused on creating more points of entry into machine learning research.

Curiosity-driven collaboration

We are committed to making meaningful progress in machine learning research through open collaboration. We believe that technology is powerful, and empowering different perspectives ensures responsible innovation.


Fundamental research lab

We contribute to progress in machine learning through fundamental research. We see contributions to traditional conferences and publications in journals as an important part of our work, but also support efforts that go “beyond the research paper” and encourage scientific communication through different mediums.

See all our job opportunities

Scholars Program

Our Scholars Program provides the opportunity to work alongside some of the best research and engineering expertise in the world — exploring the unknown, together. We have created an open, supportive environment that provides an alternative point of entry into machine learning research.

If you are an aspiring NLP researcher looking for an opportunity to develop your research skills, your journey starts here.

Learn more

Our community

We're not just the usual suspects. Our community is a space where researchers, engineers, linguists, social scientists, and lifelong learners connect and collaborate with each other. We come together from all over the world, and we welcome you whether you are a mentor, dropout, just getting started, PhD, master's, undergraduate, unaffiliated, industry, academic, or not really sure. We are excited to support community-driven research and to be shaped by our members' interests.

Join our community

Seminar series

There are a lot of seminar series focused on research. Ours focuses on people.

We bring together leading researchers and rising stars in the field of machine learning to discuss their research learning journeys. Research is inherently a human endeavor, and this discussion series provides insights from beginning to breakthrough. To stay up to date on upcoming talks, sign up to our mailing list.

Sign up to our mailing list

History of For AI

In 2017, a team of friends, classmates, and engineers started a distributed research collaboration, with a focus on creating a medium for early-career AI enthusiasts to engage with experienced researchers – they called it “for.ai.” Two of those co-founding members, Aidan Gomez and Ivan Zhang, later went on to co-found Cohere, and many of the original members went on to do exciting things (pursuing PhDs, working at industry and academic labs).

At the time, For AI was one of the first community-driven research groups to support independent researchers around the world. Today, Cohere is proud to reintroduce For AI as Cohere For AI, a dedicated research lab and community for exploring the unknown, together.

Who we are

Cohere For AI is a registered non-profit, and core to our mission statement is contributing to knowledge in the public domain. We collaborate with researchers from private and public institutions, as well as independent researchers unaffiliated with an institution.

We are committed to open-sourcing code from our programs and to promoting good stewardship of open-source scientific practices.

Spotlight papers

Metadata Archaeology: Unearthing Data Subsets by Leveraging Training Dynamics

Providing a unified and efficient framework for Metadata Archaeology – uncovering and inferring metadata of examples in a dataset.

Authors: Shoaib Ahmed Siddiqui, Nitarshan Rajkumar, Tegan Maharaj, David Krueger, Sara Hooker

Read the paper

Efficient Methods for Natural Language Processing: A Survey

Synthesizing methods and findings in NLP efficiencies, guiding new researchers in the field, and inspiring the development of new methods.

Authors: Marcos Treviso, Tianchu Ji, Ji-Ung Lee, Betty van Aken, Qingqing Cao, Manuel R. Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Pedro H. Martins, André F. T. Martins, Peter Milder, Colin Raffel, Edwin Simpson, Noam Slonim, Niranjan Balasubramanian, Leon Derczynski, Roy Schwartz

Read the paper

Intriguing Properties of Compression on Multilingual Models

Exploring compression as a way to improve model robustness for low-resource languages.

Authors: Kelechi Ogueji, Orevaoghene Ahia, Gbemileke Onilude, Sebastian Gehrmann, Sara Hooker, Julia Kreutzer

Read the paper

Large Language Models are not Zero-Shot Communicators

Investigating the implicature gap in Large Language Models.

Authors: Laura Ruis, Akbir Khan, Stella Biderman, Sara Hooker, Tim Rocktäschel, Edward Grefenstette

Read the paper

More Papers

Work by Cohere For AI and Technical Staff at Cohere
  • BigScience: A Case Study in the Social Construction of a Multilingual Large Language Model

    Authors: Christopher Akiki, Giada Pistilli, Margot Mieskes, Matthias Gallé, Thomas Wolf, Suzana Ilic, Yacine Jernite

    Read the paper
  • MTEB: Massive Text Embedding Benchmark

    Authors: Niklas Muennighoff, Nouamane Tazi, Loïc Magne, Nils Reimers

    Read the paper
  • Improving Policy Learning via Language Dynamics Distillation

    Authors: Victor Zhong, Jesse Mu, Luke Zettlemoyer, Edward Grefenstette, Tim Rocktäschel

    Read the paper
  • Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt

    Authors: Sören Mindermann, Jan Brauner, Muhammed Razzak, Mrinank Sharma, Andreas Kirsch, Winnie Xu, Benedikt Höltgen, Aidan N. Gomez, Adrien Morisot, Sebastian Farquhar, Yarin Gal

    Read the paper
  • Studying the Impact of Magnitude Pruning on Contrastive Learning Methods

    Authors: Francesco Corti, Rahim Entezari, Sara Hooker, Davide Bacciu, Olga Saukh

    Read the paper
  • Robust Distillation for Worst-class Performance

    Authors: Serena Wang, Harikrishna Narasimhan, Yichen Zhou, Sara Hooker, Michal Lukasik, Aditya Krishna Menon

    Read the paper
  • Lifting the Veil on Hyper-parameters for Value-based Deep Reinforcement Learning

    Authors: João G.M. Araújo, Johan S. Obando-Ceron, Pablo Samuel Castro

    Read the paper
  • αNAS: Neural Architecture Search using Property Guided Synthesis

    Authors: Charles Jin, Phitchaya Mangpo Phothilimthana, Sudip Roy

    Read the paper
  • Scalable Training of Language Models using PAX pjit and TPUv4

    Authors: Joanna Yoo, Kuba Perlin, Siddhartha Rao Kamalakara, João G.M. Araújo

    Read the paper
  • Mitigating Harm in Language Models with Conditional-Likelihood Filtration

    Authors: Helen Ngo, Cooper Raterink, João G.M. Araújo, Ivan Zhang, Carol Chen, Adrien Morisot, Nicholas Frosst

    Read the paper
  • No News is Good News: A Critique of the One Billion Word Benchmark

    Authors: Helen Ngo, João G.M. Araújo, Jeffrey Hui, Nicholas Frosst

    Read the paper
  • Exploring Low Rank Training of Deep Neural Networks

    Authors: Siddhartha Rao Kamalakara, Acyr Locatelli, Bharat Venkitesh, Jimmy Ba, Yarin Gal, Aidan N. Gomez

    Read the paper
  • Predicting Twitter Engagement With Deep Language Models

    Authors: Maksim N. Volkovs, Zhaoyue Cheng, Mathieu Ravaut, Hojin Yang, Kevin Shen, Jinpeng Zhou, Anson Wong, Saba Zuberi, Ivan Zhang, Nick Frosst, Helen Ngo, Carol Chen, Bharat Venkitesh, Stephen Gou, Aidan N. Gomez

    Read the paper
  • Interlocking Backpropagation: Improving depthwise model-parallelism

    Authors: Aidan N. Gomez, Oscar Key, Kuba Perlin, Stephen Gou, Nick Frosst, Jeff Dean, Yarin Gal

    Read the paper

Sparking great conversations, collaborations, and community

Videos

Frequently Asked Questions

  • Do you charge for your educational programs or community membership?

    Cohere For AI is a registered non-profit. We do not charge for participation in any of our programs, and we are committed to supporting educational outreach, including the compute resources and infrastructure needed to participate in machine learning research.

  • Are you hiring for research positions or interns?

    Our full list of open positions is available here.

Stay updated
