OpenMOSS (SII)
The OpenMOSS Team is a research group under the Shanghai Innovation Institute (SII), working in close collaboration with Fudan University and MOSI Intelligence. Led by Prof. Xipeng Qiu, the team conducts cutting-edge research on large language models (LLMs), advancing the frontiers of model architecture, evaluation, and application with a strong commitment to open, collaborative, and impactful AI innovation.
We warmly welcome researchers, students, and collaborators who share our vision to join us in pushing the boundaries of LLM technology. For inquiries or collaboration opportunities, please contact us at openmoss@sii.edu.cn.
🌐 Website: https://openmoss.github.io/ or http://openmoss.sii.edu.cn/
💻 GitHub: https://github.com/OpenMOSS
- SII is dedicated to fostering innovation in education and research in the field of artificial intelligence.
Pinned
- Language-Model-SAEs (Public): A performant framework for training, analyzing, and visualizing Sparse Autoencoders (SAEs) and their frontier variants.
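For readers new to this line of work, here is a minimal, generic PyTorch sketch of the sparse-autoencoder idea the framework is built around: an overcomplete encoder with a ReLU nonlinearity, a linear decoder, and a reconstruction-plus-sparsity objective. This is an illustrative toy only, not the Language-Model-SAEs API; the class name, dimensions, and penalty weight below are assumptions made for the example.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Generic sparse autoencoder over model activations (illustrative sketch only)."""

    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_sae)  # expand to an overcomplete feature basis
        self.decoder = nn.Linear(d_sae, d_model)  # map sparse features back to activation space

    def forward(self, activations: torch.Tensor):
        # ReLU keeps only a sparse set of positive feature activations.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction

# Toy training step: reconstruct activations while penalizing feature density.
sae = SparseAutoencoder(d_model=768, d_sae=768 * 8)
acts = torch.randn(64, 768)  # stand-in for a batch of residual-stream activations
features, recon = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * features.abs().mean()
loss.backward()
```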
Repositories
- MOSS-TTS (Public): The MOSS-TTS Family is an open-source speech and sound generation model family from MOSI.AI and the OpenMOSS team. It is designed for high fidelity, high expressiveness, and complex real-world scenarios, covering stable long-form speech, multi-speaker dialogue, voice/character design, environmental sound effects, and real-time streaming TTS.
- Language-Model-SAEs (Public): A performant framework for training, analyzing, and visualizing Sparse Autoencoders (SAEs) and their frontier variants.
- TransformerLens (Public, forked from TransformerLensOrg/TransformerLens): A library for mechanistic interpretability of GPT-style language models.
- MOSS-TTSD (Public): A spoken dialogue generation model designed for expressive multi-speaker synthesis. It features long-context modeling, flexible speaker control, and multilingual support, while enabling zero-shot voice cloning from short audio references.
- FRoM-W1 (Public): [ArXiv 26] FRoM-W1: Towards General Humanoid Whole-Body Control with Language Instructions.
- MOSS-Audio-Tokenizer (Public): A Causal Transformer-based audio tokenizer built on the CAT architecture. Trained on 3M hours of diverse audio, it supports streaming and variable bitrates, delivering SOTA reconstruction and strong performance in generation and understanding, serving as a unified interface for next-generation native audio language models.
- MOSS-Speech (Public): A true speech-to-speech large language model without text guidance.