AI2 is creating a large science-optimized language model

PaLM 2. GPT-4. The list of text-generating AI models grows by the day.

Most of these models are walled off behind APIs, so researchers cannot see exactly how they work. But open source efforts are increasingly producing AI that rivals commercial systems in sophistication.

The Allen Institute for AI Research (AI2) plans to release the Open Language Model in 2024. The model, OLMo for short, is being developed in collaboration with AMD, the Large Unified Modern Infrastructure (LUMI) consortium, Surge AI, and MosaicML.

“The research and technology communities need access to open language models to advance this science,” AI2 senior director of NLP research Hanna Hajishirzi said in an email interview. “With OLMo, we are building a competitive language model to close the gap between public and private research capabilities and knowledge.”

Why develop another open language model when BLOOM, Meta’s LLaMA, and others already exist? Hajishirzi believes that while the open source releases to date have been valuable and even groundbreaking, they have missed the mark in various ways.

AI2 views OLMo as a platform rather than a model, allowing the research community to use or improve its components. Hajishirzi says AI2 will release a public demo, the training data set, and an API for OLMo, with the process documented and only “very limited” exceptions under “suitable” licensing.

“We’re building OLMo to give AI researchers more access to language models,” Hajishirzi said. “We believe the broad availability of all aspects of OLMo will allow the research community to improve what we are creating. We want to build the world’s best open language model together.”

According to Noah Smith, senior director of NLP research at AI2, OLMo will be optimized for science, tailored to processing textbooks and academic papers rather than, say, code. There have been other attempts at this, notably Meta’s infamous Galactica model. But Hajishirzi thinks AI2’s academic work and research tools like Semantic Scholar will make OLMo “uniquely suited” for scientific and academic applications.

“We believe OLMo has the potential to be something really special in the field,” Smith said. “AI2’s unique ability to act as third-party experts allows us to work with both our own world-class expertise and the industry’s brightest minds. Our rigorous, documented approach will set the stage for building the next generation of safe, effective AI technologies.”

Certainly a nice thought. But what about the ethical and legal challenges of training, and releasing, generative AI? Questions around content owners’ rights, among other issues, are still being litigated.

The OLMo team will work with AI2’s legal department and unspecified outside experts at model-building “checkpoints” to address privacy and intellectual property rights concerns.

“Through an open and transparent dialogue about the model and its intended use, we can better understand how to mitigate bias, toxicity, and shine a light on outstanding research questions within the community, ultimately resulting in one of the strongest models available,” Smith said.

What about the risk of misuse? Toxic, biased models in the hands of bad actors can be used to spread misinformation or generate malicious code.

Hajishirzi said the aim is to “maximize the scientific benefits while reducing the risk of harmful use.” OLMo has an ethics review committee with internal and external advisors (AI2 wouldn’t say who) that will advise on policy during model creation.

We’ll see how much of a difference that makes. For now, many of the model’s technical specs remain unknown. (AI2 did reveal that it will have 70 billion parameters, the parts of the model learned from historical training data.) Training is slated to take place on Finland’s LUMI supercomputer, the fastest in Europe as of January.

AI2 is inviting collaborators to help develop and critique the model; those interested can contact the OLMo project’s organizers.

 
