Curiosity-driven collaboration
We are committed to making meaningful progress in machine learning research through open collaboration. We believe that technology is powerful, and that bringing diverse perspectives into its development leads to more responsible innovation.
Fundamental research lab
We contribute to progress in machine learning through fundamental research. We see contributions to traditional conferences and publications in journals as an important part of our work, but also support efforts that go “beyond the research paper” and encourage scientific communication through different mediums.
Scholars Program
The Scholars Program provides the opportunity to work alongside some of the best researchers and engineers in the world — exploring the unknown, together. It will serve as an open, supportive environment that provides an alternative point of entry into NLP research.
Accepted applicants will join a dedicated team of passionate researchers and industry experts from January 2024 to August 2024 and will be paired with a project proposal, allowing them to grow as researchers. Participation is full-time and paid. As part of the program, Scholars will have access to a large-scale experimental framework and world-class research experts, and will help advance our commitment to supporting responsible, fundamental research on machine learning topics while prioritizing good stewardship of open source scientific practices.
Aya: An Open Science Initiative
Aya is a global project to build a multilingual language model via instruction tuning, harnessing the collective wisdom and contributions of people from all over the world. The goal is to make language model development more accessible and collaborative and to address the under-representation of certain languages in natural language processing research. Aya is open to anyone who is passionate about advancing the field of natural language processing and is committed to promoting open science. Learn more about the project in this blog post.
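For readers unfamiliar with instruction tuning, here is a minimal Python sketch of what a single multilingual training example might look like; the field names and formatting are hypothetical illustrations, not Aya's actual data schema:

```python
# Hypothetical shape of one instruction-tuning example; Aya's real
# schema may differ. Instruction tuning fine-tunes a pretrained model
# on (instruction, completion) pairs so it learns to follow instructions
# rather than merely continue raw text.
example = {
    "language": "sw",  # ISO 639-1 code (Swahili)
    "instruction": "Tafsiri sentensi ifuatayo kwa Kiingereza: 'Habari za asubuhi.'",
    "completion": "Good morning.",
}

# During fine-tuning, the model is trained to produce the completion
# when shown the instruction as its prompt.
prompt = example["instruction"]
target = example["completion"]
```

Collecting such pairs across many languages is what makes contributions from speakers around the world valuable to the project.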
Join the Aya Discord server and start contributing in your language today.
Our Open Science Community
We’re not just another research group. We are an open science community conducting top-tier ML research while creating more points of entry into the field.
Our research community is a space where researchers, engineers, linguists, social scientists, and lifelong learners connect and collaborate with each other. We come together from over 100 countries around the world and support large and small scale research collaborations.
Research Grant Program
Cohere For AI research grants are designed to support academic partners who are conducting research with the goal of releasing a peer-reviewed scientific artifact. Our program provides academic partners, developers, researchers, and other members of our community with subsidized access to the Cohere API. We are interested in supporting requests for API access that enable data-for-good applications of large language models (LLMs) and/or the responsible use of LLMs. Learn more about the goals of this program on our blog.
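For context, a grant-funded project would typically access the API through the Cohere Python SDK. Here is a minimal sketch; the API key and prompt are placeholders:

```python
# Minimal sketch of calling the Cohere API via the Python SDK
# (pip install cohere). The key and model name are placeholders.
import cohere

co = cohere.Client("YOUR_API_KEY")  # key provided through the grant program

response = co.generate(
    model="command",
    prompt="Summarize why open science matters in one sentence.",
    max_tokens=50,
)
print(response.generations[0].text)
```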
Events
We bring together leading researchers and rising stars in the field of machine learning to discuss their research journeys and showcase their technical achievements. Research is inherently a human endeavor, and our event series provides insights from beginning to breakthrough.
To stay up to date on upcoming talks, sign up for our mailing list.
More Papers
Work by Cohere For AI and Technical Staff at Cohere
Adversarial Nibbler: A Data-Centric Challenge for Improving the Safety of Text-to-Image Models
Read the paper. Authors:
Alicia Parrish, Hannah Rose Kirk, Jessica Quaye, Charvi Rastogi, Max Bartolo, Oana Inel, Juan Ciro, Rafael Mosquera, Addison Howard, Will Cukierski, D. Sculley, Vijay Janapa Reddi, Lora Aroyo.
The Presidio Recommendations on Responsible Generative AI - World Economic Forum
Read the recommendations. Authors:
Sara Hooker, and over 100 other thought leaders.
Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting
Read the paper. Authors:
Miles Turpin, Julian Michael, Ethan Perez, Samuel R. Bowman
FAIR-Ensemble: When Fairness Naturally Emerges From Deep Ensembling
Read the paper. Authors:
Wei-Yin Ko, Daniel D’souza, Karina Nguyen, Randall Balestriero, Sara Hooker
Associative Memory Augmented Asynchronous Spatiotemporal Representation Learning for Event-based Perception
Read the paper. Authors:
Uday Kamal, Saurabh Dash, Saibal Mukhopadhyay
PASHA: Efficient HPO and NAS with Progressive Resource Allocation
Read the paper. Authors:
Andrej Bohdal, Lukas Balles, Martin Wistuba, Beyza Ermis, Cedric Archambeau, Giovanni Zappella
BigScience: A Case Study in the Social Construction of a Multilingual Large Language Model
Read the paper. Authors:
Christopher Akiki, Giada Pistilli, Margot Mieskes, Matthias Gallé, Thomas Wolf, Suzana Ilic, Yacine Jernite
MTEB: Massive Text Embedding Benchmark
Read the paper. Authors:
Niklas Muennighoff, Nouamane Tazi, Loïc Magne, Nils Reimers
Improving Policy Learning via Language Dynamics Distillation
Read the paper. Authors:
Victor Zhong, Jesse Mu, Luke Zettlemoyer, Edward Grefenstette, Tim Rocktäschel
Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt
Read the paper (arxiv.org). Authors:
Sören Mindermann, Jan Brauner, Muhammed Razzak, Mrinank Sharma, Andreas Kirsch, Winnie Xu, Benedikt Höltgen, Aidan N. Gomez, Adrien Morisot, Sebastian Farquhar, Yarin Gal
Studying the Impact of Magnitude Pruning on Contrastive Learning Methods
Read the paper. Authors:
Francesco Corti, Rahim Entezari, Sara Hooker, Davide Bacciu, Olga Saukh
Robust Distillation for Worst-class Performance
Read the paper. Authors:
Serena Wang, Harikrishna Narasimhan, Yichen Zhou, Sara Hooker, Michal Lukasik, Aditya Krishna Menon
Lifting the Veil on Hyper-parameters for Value-based Deep Reinforcement Learning
Read the paper. Authors:
João G.M. Araújo, Johan S. Obando-Ceron, Pablo Samuel Castro
αNAS: Neural Architecture Search using Property Guided Synthesis
Read the paper. Authors:
Charles Jin, Phitchaya Mangpo Phothilimthana, Sudip Roy
Scalable Training of Language Models using PAX pjit and TPUv4
Read the paper. Authors:
Joanna Yoo, Kuba Perlin, Siddhartha Rao Kamalakara, João G.M. Araújo
Mitigating Harm in Language Models with Conditional-Likelihood Filtration
Read the paper. Authors:
Helen Ngo, Cooper Raterink, João G.M. Araújo, Ivan Zhang, Carol Chen, Adrien Morisot, Nicholas Frosst
No News is Good News: A Critique of the One Billion Word Benchmark
Read the paper. Authors:
Helen Ngo, João G.M. Araújo, Jeffrey Hui, Nicholas Frosst
Exploring Low Rank Training of Deep Neural Networks
Read the paper. Authors:
Siddhartha Rao Kamalakara, Acyr Locatelli, Bharat Venkitesh, Jimmy Ba, Yarin Gal, Aidan N. Gomez
Predicting Twitter Engagement With Deep Language Models
Read the paper. Authors:
Maksim N Volkovs, Zhaoyue Cheng, Mathieu Ravaut, Hojin Yang, Kevin Shen, Jinpeng Zhou, Anson Wong, Saba Zuberi, Ivan Zhang, Nick Frosst, Helen Ngo, Carol Chen, Bharat Venkitesh, Stephen Gou, Aidan N. Gomez
Interlocking Backpropagation: Improving depthwise model-parallelism
Read the paper (jmlr.org). Authors:
Aidan N. Gomez, Oscar Key, Kuba Perlin, Stephen Gou, Nick Frosst, Jeff Dean, Yarin Gal
Sparking great conversations, collaborations, and community
Videos
Who we are
Cohere For AI is a registered non-profit, and core to our mission statement is contributing to knowledge in the public domain. We collaborate with researchers from private and public institutions, as well as independent researchers unaffiliated with an institution.
We are committed to open-sourcing code from our programs and to promoting good stewardship of open source scientific practices.
Our Team
History of For AI
In 2017, a team of friends, classmates, and engineers started a distributed research collaboration, with a focus on creating a medium for early-career AI enthusiasts to engage with experienced researchers – they called it “for.ai.” Two of those co-founding members, Aidan Gomez and Ivan Zhang, later went on to co-found Cohere, and many of the original members went on to do exciting things (pursuing PhDs, working at industry and academic labs).
At the time, For AI was one of the first community-driven research groups to support independent researchers around the world. Today, Cohere is proud to reintroduce For AI as Cohere For AI, a dedicated research lab and community for exploring the unknown, together.
Frequently Asked Questions
Do we charge for our educational programs or community membership?
Cohere For AI is a registered non-profit. We do not charge for participation in any of our programs, and we are committed to supporting educational outreach, including the compute resources and infrastructure needed to participate in machine learning research.
Are you hiring for research positions or interns?
Our full list of open positions is available here.