Update README.md
README.md
CHANGED
@@ -124,6 +124,7 @@ Refer to the [original model card](https://huggingface.co/prithivMLmods/QwQ-LCoT
---

Model details:

The QwQ-LCoT2-7B-Instruct is a fine-tuned language model designed for advanced reasoning and instruction-following tasks. It leverages the Qwen2.5-7B base model and has been fine-tuned on the chain

@@ -133,20 +134,10 @@ logical reasoning, detailed explanations, and multi-step
problem-solving, making it ideal for applications such as instruction-following, text generation, and complex reasoning tasks.

Quickstart with Transformers

The following code snippet uses apply_chat_template to show how to load the tokenizer and model and how to generate content.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/QwQ-LCoT2-7B-Instruct"

@@ -180,23 +171,11 @@ generated_ids = [
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
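
The hunks above only show fragments of the snippet. For reference, a minimal end-to-end sketch of the same apply_chat_template workflow is given below; the system message, prompt text, and max_new_tokens value are illustrative assumptions and are not taken from the original card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/QwQ-LCoT2-7B-Instruct"

# Load the model and tokenizer (device_map="auto" requires the accelerate package).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example prompt (assumed for illustration).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain step by step why the sum of two odd numbers is even."},
]

# Build the chat-formatted prompt and tokenize it.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens from the output before decoding.
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
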
Intended Use

The QwQ-LCoT2-7B-Instruct model is designed for advanced reasoning and instruction-following tasks, with specific applications including:

Instruction Following: Providing detailed and step-by-step guidance for a wide range of user queries.
Logical Reasoning: Solving problems requiring multi-step thought processes, such as math problems or complex logic-based scenarios (a short prompt sketch follows this list).
Text Generation: Crafting coherent, contextually relevant, and well-structured text in response to prompts.

@@ -205,17 +184,8 @@ that require chain-of-thought (CoT) reasoning, making it ideal for
education, tutoring, and technical support.
Knowledge Enhancement: Leveraging reasoning datasets to offer deeper insights and explanations for a wide variety of topics.
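
As a hedged illustration of the logical-reasoning use case above (not part of the original card), the model and tokenizer loaded in the quickstart sketch can be reused with a multi-step reasoning prompt; the question text here is an assumed example.

```python
# Reuses `model` and `tokenizer` from the quickstart sketch above.
# The prompt below is a hypothetical example of a multi-step reasoning query.
messages = [
    {"role": "system", "content": "You are a helpful assistant that reasons step by step."},
    {"role": "user", "content": "A train travels 90 km in 45 minutes. What is its average speed in km/h? Show your reasoning."},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate and decode only the newly produced tokens.
output_ids = model.generate(**inputs, max_new_tokens=256)
answer = tokenizer.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```
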
Limitations

Data Bias: As the model is fine-tuned on specific datasets, its outputs may reflect inherent biases from the training data.
Context Limitation: Performance may degrade for tasks requiring knowledge or reasoning that significantly exceeds the