Curiosity-driven collaboration
We are committed to making meaningful progress in machine learning research through open collaboration. We believe that technology is powerful, and that empowering different perspectives fosters responsible innovation.
Fundamental research lab
We contribute to progress in machine learning through fundamental research. We see contributions to traditional conferences and publications in journals as an important part of our work, but also support efforts that go “beyond the research paper” and encourage scientific communication through different mediums.
Scholars Program
Our Scholars Program provides the opportunity to work alongside some of the best research and engineering expertise in the world — exploring the unknown, together. We have created an open, supportive environment that provides an alternative point of entry into machine learning research.
Aya: An Open Science Initiative
Aya is a global project that aims to build a multilingual language model, via instruction tuning, that harnesses the collective wisdom and contributions of people from all over the world. The goal is to make language model development more accessible and collaborative, and to address the under-representation of certain languages in natural language processing research. Aya is open to anyone who is passionate about advancing the field of natural language processing and is committed to promoting open science. Learn more about the project in this blog post.
Join the Aya Discord server and start contributing in your language today.
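For readers new to the technique, here is a minimal sketch in Python of what a multilingual instruction-tuning example might look like. The field names, prompts, and formatting below are illustrative assumptions for explanation only, not Aya's actual data schema.

# Illustrative sketch of instruction-tuning data; field names and
# examples are hypothetical, not Aya's actual schema.

instruction_examples = [
    {
        "language": "Swahili",
        "instruction": "Fupisha aya ifuatayo.",  # "Summarize the following paragraph."
        "input": "...",       # source text supplied by a contributor
        "completion": "...",  # human-written target response
    },
    {
        "language": "Hindi",
        "instruction": "इस वाक्य का अंग्रेज़ी में अनुवाद करें।",  # "Translate this sentence into English."
        "input": "...",
        "completion": "...",
    },
]

def to_training_text(example: dict) -> str:
    """Format one example as a single prompt/response string, the
    general form a pretrained model is fine-tuned on during
    instruction tuning."""
    return f"{example['instruction']}\n{example['input']}\n{example['completion']}"

for ex in instruction_examples:
    print(to_training_text(ex))

The key idea is that collecting such instruction-completion pairs across many languages lets a single model be fine-tuned to follow instructions in all of them.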
Our Open Science Community
We’re not just another research group. We are an open science community conducting top-tier ML research while creating more points of entry into the field.
Our research community is a space where researchers, engineers, linguists, social scientists, and lifelong learners connect and collaborate. We come together from over 100 countries around the world and support both large- and small-scale research collaborations.
Events
We bring together leading researchers and rising stars in the field of machine learning to discuss their research journeys and showcase their technical achievements. Research is inherently a human endeavor, and our event series provides insights from beginning to breakthrough.
To stay up to date on upcoming talks, sign up for our mailing list.
More Papers
Work by Cohere For AI and Technical Staff at Cohere
Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting
Read the paper
Authors:
Miles Turpin, Julian Michael, Ethan Perez, Samuel R. Bowman
FAIR-Ensemble: When Fairness Naturally Emerges From Deep Ensembling
Read the paper
Authors:
Wei-Yin Ko, Daniel D’souza, Karina Nguyen, Randall Balestriero, Sara Hooker
Associative Memory Augmented Asynchronous Spatiotemporal Representation Learning for Event-based Perception
Read the paper
Authors:
Uday Kamal, Saurabh Dash, Saibal Mukhopadhyay
PASHA: Efficient HPO and NAS with Progressive Resource Allocation
Read the paper
Authors:
Andrej Bohdal, Lukas Balles, Martin Wistuba, Beyza Ermis, Cedric Archambeau, Giovanni Zappella
BigScience: A Case Study in the Social Construction of a Multilingual Large Language Model
Read the paper
Authors:
Christopher Akiki, Giada Pistilli, Margot Mieskes, Matthias Gallé, Thomas Wolf, Suzana Ilic, Yacine Jernite
MTEB: Massive Text Embedding Benchmark
Read the paper
Authors:
Niklas Muennighoff, Nouamane Tazi, Loïc Magne, Nils Reimers
Improving Policy Learning via Language Dynamics Distillation
Read the paper
Authors:
Victor Zhong, Jesse Mu, Luke Zettlemoyer, Edward Grefenstette, Tim Rocktäschel
Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt
arxiv.org
Authors:
Sören Mindermann, Jan Brauner, Muhammed Razzak, Mrinank Sharma, Andreas Kirsch, Winnie Xu, Benedikt Höltgen, Aidan N. Gomez, Adrien Morisot, Sebastian Farquhar, Yarin Gal
Studying the Impact of Magnitude Pruning on Contrastive Learning Methods
Read the paper
Authors:
Francesco Corti, Rahim Entezari, Sara Hooker, Davide Bacciu, Olga Saukh
Robust Distillation for Worst-class Performance
Read the paper
Authors:
Serena Wang, Harikrishna Narasimhan, Yichen Zhou, Sara Hooker, Michal Lukasik, Aditya Krishna Menon
Lifting the Veil on Hyper-parameters for Value-based Deep Reinforcement Learning
Read the paper
Authors:
João G.M. Araújo, Johan S. Obando-Ceron, Pablo Samuel Castro
αNAS: Neural Architecture Search using Property Guided Synthesis
Read the paper
Authors:
Charles Jin, Phitchaya Mangpo Phothilimthana, Sudip Roy
Scalable Training of Language Models using JAX pjit and TPUv4
Read the paper
Authors:
Joanna Yoo, Kuba Perlin, Siddhartha Rao Kamalakara, João G.M. Araújo
Mitigating Harm in Language Models with Conditional-Likelihood Filtration
Read the paper
Authors:
Helen Ngo, Cooper Raterink, João G.M. Araújo, Ivan Zhang, Carol Chen, Adrien Morisot, Nicholas Frosst
No News is Good News: A Critique of the One Billion Word Benchmark
Read the paper
Authors:
Helen Ngo, João G.M. Araújo, Jeffrey Hui, Nicholas Frosst
Exploring Low Rank Training of Deep Neural Networks
Read the paper
Authors:
Siddhartha Rao Kamalakara, Acyr Locatelli, Bharat Venkitesh, Jimmy Ba, Yarin Gal, Aidan N. Gomez
Predicting Twitter Engagement With Deep Language Models
Read the paper
Authors:
Maksim N Volkovs, Zhaoyue Cheng, Mathieu Ravaut, Hojin Yang, Kevin Shen, Jinpeng Zhou, Anson Wong, Saba Zuberi, Ivan Zhang, Nick Frosst, Helen Ngo, Carol Chen, Bharat Venkitesh, Stephen Gou, Aidan N. Gomez
Interlocking Backpropagation: Improving depthwise model-parallelism
jmlr.org
Authors:
Aidan N. Gomez, Oscar Key, Kuba Perlin, Stephen Gou, Nick Frosst, Jeff Dean, Yarin Gal
Videos
Sparking great conversations, collaborations, and community
Who we are
Cohere For AI is a registered non-profit, and central to our mission is contributing knowledge to the public domain. We collaborate with researchers from private and public institutions, as well as independent researchers unaffiliated with any institution.
We are committed to open-sourcing code from our programs and to promoting good stewardship of open-source scientific practices.
Our Team
History of For AI
In 2017, a team of friends, classmates, and engineers started a distributed research collaboration, with a focus on creating a medium for early-career AI enthusiasts to engage with experienced researchers – they called it “for.ai.” Two of those co-founding members, Aidan Gomez and Ivan Zhang, later went on to co-found Cohere, and many of the original members went on to do exciting things (pursuing PhDs, working at industry and academic labs).
At the time, For AI was one of the first community-driven research groups to support independent researchers around the world. Today, Cohere is proud to reintroduce For AI as Cohere For AI, a dedicated research lab and community for exploring the unknown, together.
Frequently Asked Questions
Do we charge for our educational programs or community membership?
Cohere For AI is a registered non-profit. We do not charge for participation in any of our programs, and we are committed to supporting educational outreach, including the compute resources and infrastructure needed to participate in machine learning research.
Are you hiring for research positions or internships?
Our full list of open positions is available here.