MindSpore + 🤗Huggingface: Run any Transformers/Diffusers model on MindSpore with seamless compatibility and acceleration.
MindNLP stands for **MindSpore + Natural Language Processing**, representing seamless compatibility with the HuggingFace ecosystem. MindNLP enables you to leverage the best of both worlds: the rich HuggingFace model ecosystem and MindSpore's powerful acceleration capabilities.
MindNLP provides seamless compatibility with the HuggingFace ecosystem, enabling you to run any Transformers/Diffusers models on MindSpore across all hardware platforms (GPU/Ascend/CPU) without code modifications.
You can directly use native HuggingFace libraries (transformers, diffusers, etc.) with MindSpore acceleration:
For HuggingFace Transformers:
```python
import mindspore
import mindnlp
from transformers import pipeline

chat = [
    {"role": "system", "content": "You are a sassy, wise-cracking robot as imagined by Hollywood circa 1986."},
    {"role": "user", "content": "Hey, can you tell me any fun things to do in New York?"}
]

pipeline = pipeline(task="text-generation", model="Qwen/Qwen3-8B", ms_dtype=mindspore.bfloat16, device_map="auto")
response = pipeline(chat, max_new_tokens=512)
print(response[0]["generated_text"][-1]["content"])
```
For HuggingFace Diffusers:
```python
import mindspore
import mindnlp
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    ms_dtype=mindspore.float16,
    device_map="cuda"
)
pipeline("An image of a squirrel in Picasso style").images[0]
```
You can also use MindNLP's native interface for better integration:
```python
from mindnlp.transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello world!", return_tensors="ms")
outputs = model(**inputs)
```
Note: Due to differences in autograd and parallel execution mechanisms, any training or distributed execution code must utilize the interfaces provided by MindNLP.
MindNLP leverages MindSpore's powerful capabilities to deliver exceptional performance and unique features:
MindNLP provides mindtorch (accessible via `mindnlp.core`) for PyTorch-compatible interfaces, enabling seamless migration from PyTorch code while benefiting from MindSpore's acceleration on Ascend hardware:
```python
import mindnlp  # automatically enables the proxy for torch APIs
import torch
from torch import nn

# All torch.xx APIs are automatically mapped to mindnlp.core.xx (via mindtorch)
net = nn.Linear(10, 5)
x = torch.randn(3, 10)
out = net(x)
print(out.shape)  # core.Size([3, 5])
```
MindNLP extends MindSpore with several advanced features for better model development:
- **Dispatch Mechanism**: Operators are automatically dispatched to the appropriate backend based on `Tensor.device`, enabling seamless multi-device execution.
- **Meta Device Support**: Perform shape inference and memory planning without actual computation, significantly speeding up model development and debugging.
- **NumPy as CPU Backend**: Use NumPy as the CPU backend, providing better compatibility and performance on CPU devices.
- **Heterogeneous Data Movement**: Enhanced `Tensor.to()` for efficient data movement across different devices (CPU/GPU/Ascend).
These features enable better support for model serialization, heterogeneous computing, and complex deployment scenarios.
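The dispatch idea above can be illustrated with a minimal, framework-agnostic sketch. Note that the `DispatchTensor` class and the kernel registry here are hypothetical, purely for illustration; they are not MindNLP APIs. The sketch shows how an operator can be routed to a backend (here, NumPy standing in as the CPU backend) based on a device tag:

```python
import numpy as np

# Hypothetical operator registry: (op_name, device) -> implementation.
_KERNELS = {}

def register(op, device):
    def deco(fn):
        _KERNELS[(op, device)] = fn
        return fn
    return deco

@register("add", "cpu")
def _add_cpu(a, b):
    # NumPy serves as the CPU backend in this sketch.
    return np.add(a, b)

class DispatchTensor:
    """Toy tensor that routes operators based on its device tag."""
    def __init__(self, data, device="cpu"):
        self.data = np.asarray(data)
        self.device = device

    def add(self, other):
        # Look up the kernel registered for this op on this device.
        kernel = _KERNELS[("add", self.device)]
        return DispatchTensor(kernel(self.data, other.data), self.device)

x = DispatchTensor([1.0, 2.0])
y = DispatchTensor([3.0, 4.0])
z = x.add(y)
print(z.data)  # [4. 6.]
```

A real dispatcher additionally handles cross-device inputs and falls back or raises when no kernel is registered, but the core lookup-by-(op, device) pattern is the same.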
You can install the official release of MindNLP, which is published on PyPI:

```shell
pip install mindnlp
```
You can download the MindNLP daily wheel from here.
To install MindNLP from source, please run:
```shell
pip install git+https://github.com/mindspore-lab/mindnlp.git
# or
git clone https://github.com/mindspore-lab/mindnlp.git
cd mindnlp
bash scripts/build_and_reinstall.sh
```
| MindNLP version | MindSpore version | Supported Python version |
|---|---|---|
| master | daily build | >=3.7.5, <=3.9 |
| 0.1.1 | >=1.8.1, <=2.0.0 | >=3.7.5, <=3.9 |
| 0.2.x | >=2.1.0 | >=3.8, <=3.9 |
| 0.3.x | >=2.1.0, <=2.3.1 | >=3.8, <=3.9 |
| 0.4.x | >=2.2.x, <=2.5.0 | >=3.9, <=3.11 |
| 0.5.x | >=2.5.0, <=2.7.0 | >=3.10, <=3.11 |
| 0.6.x | >=2.7.1 | >=3.10, <=3.11 |
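The constraints in the table can be verified programmatically before installing. This is a minimal illustrative sketch (the `compatible` helper is hypothetical, not part of MindNLP), using the 0.5.x row as an example:

```python
def parse(v):
    # "2.5.0" -> (2, 5, 0); tuples compare element-wise.
    return tuple(int(p) for p in v.split("."))

# MindSpore constraints for MindNLP 0.5.x, taken from the table above.
MS_MIN, MS_MAX = parse("2.5.0"), parse("2.7.0")

def compatible(mindspore_version):
    """Return True if the MindSpore version satisfies the 0.5.x constraints."""
    return MS_MIN <= parse(mindspore_version) <= MS_MAX

print(compatible("2.6.0"))  # True
print(compatible("2.4.1"))  # False
```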
Since the list of supported models is too long to include here, please check here for the full list.
This project is released under the Apache 2.0 license.
The dynamic version is still under development; if you find any issue or have an idea for a new feature, please don't hesitate to contact us via GitHub Issues.
MindSpore NLP SIG (Natural Language Processing Special Interest Group) is the main development team of the MindNLP framework. It aims to collaborate with developers from both industry and academia who are interested in research, application development, and the practical implementation of natural language processing. Our goal is to create the best NLP framework based on the domestic framework MindSpore. Additionally, we regularly hold NLP technology sharing sessions and offline events. Interested developers can join our SIG group using the QR code below.
MindSpore is an open source project that welcomes any contribution and feedback. We hope that MindNLP can serve the growing research community by providing a flexible and standardized toolkit to re-implement existing methods and develop new NLP methods.
If you find this project useful in your research, please consider citing:
```bibtex
@misc{mindnlp2022,
    title={{MindNLP}: Easy-to-use and high-performance NLP and LLM framework based on MindSpore},
    author={MindNLP Contributors},
    howpublished={\url{https://github.com/mindspore-lab/mindnlp}},
    year={2022}
}
```