Update README.md

README.md
## Acknowledgments

These quantized models are based on the original work by **Qwen** and the **NVIDIA** development team.

Special thanks to:

- The [Nvidia](https://huggingface.co/nvidia) team for developing and releasing the [OpenReasoning-Nemotron-7B](https://huggingface.co/nvidia/OpenReasoning-Nemotron-7B) model.