We believe the future of AI is open. That's why we're sharing our latest mod
**Explore relevant open-source tools**:

- [**vLLM**](https://github.com/vllm-project/vllm) – Serve large language models efficiently across GPUs and environments.
- [**LLM Compressor**](https://github.com/vllm-project/llm-compressor) – Compress and optimize your own models with SOTA quantization and sparsity techniques.
- [**Speculators**](https://github.com/vllm-project/speculators) – Build, evaluate, and store speculative decoding algorithms for LLM inference in vLLM.
- [**InstructLab**](https://github.com/instructlab) – Fine-tune open models with your data using scalable, community-backed workflows.
- [**GuideLLM**](https://github.com/neuralmagic/guidellm) – Benchmark, evaluate, and guide your deployments with structured performance and latency insights.