OpenLanguageModel
Check out our repository to see it in action!
OpenLanguageModel (OLM) is a modular, transparent framework for building, training, and experimenting with transformer‑based language models.
OLM is designed to make sandboxing ideas and prototyping new architectures easy, while still exposing the full complexity required for serious research and large‑scale training. It deliberately avoids black‑box abstractions: every major component is explicit, inspectable, and replaceable.
At the same time, OLM does not force you to work at the lowest level. You can start training quickly, then progressively peel back layers as you explore, modify, or reimplement parts of the system. In other words, OLM lets you choose how much of the machinery you want to engage with.
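The "explicit, inspectable, replaceable" philosophy can be pictured as components passed in as plain constructor arguments, so swapping one out never requires touching the rest of the model. The sketch below is purely illustrative: the class names (`Attention`, `AveragingAttention`, `Block`) are hypothetical, not OLM's actual API.

```python
# Hypothetical sketch of a replaceable-component design; none of these
# names come from OLM itself.

class Attention:
    """Baseline attention stub: returns the sequence unchanged."""
    def __call__(self, xs):
        return list(xs)


class AveragingAttention(Attention):
    """Drop-in replacement: mixes each position with the sequence mean."""
    def __call__(self, xs):
        mean = sum(xs) / len(xs)
        return [0.5 * x + 0.5 * mean for x in xs]


class Block:
    """A transformer-style block whose attention component is explicit
    and replaceable rather than hidden behind an abstraction."""

    def __init__(self, attention):
        self.attention = attention  # any object implementing __call__

    def forward(self, xs):
        # Residual connection around the (pluggable) attention component.
        return [x + a for x, a in zip(xs, self.attention(xs))]


# Swapping the component requires no changes to Block itself.
baseline = Block(Attention())
variant = Block(AveragingAttention())
print(baseline.forward([1.0, 3.0]))  # -> [2.0, 6.0]
print(variant.forward([1.0, 3.0]))   # -> [2.5, 5.5]
```

Because the component boundary is an ordinary constructor argument, replacing a piece of the model is a one-line change at the call site rather than a framework-level override.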
Made by:
Vardhaman Kalloli •
Keshava Prasad •
Tavish Mankash