Update README.md (#2)

- Update README.md (e35c22dc211f59ee7394e94e7f4ba0635820192e)

Co-authored-by: Chen Chen <[email protected]>

README.md CHANGED

**Minimal Example:**

> ⚠️ **Critical**: The prompt template below is simplified for illustration. **Only the complete prompt template in `test_scorer_hf.py` will properly activate the model's fine-tuned scoring capabilities.** Custom or simplified prompts will not achieve optimal results.

<details>
<summary><b>Click to expand complete prompt template</b></summary>

```python
def process_score_prompt(question, reference, response):
    prompt_template = """Please first read through the question information, then score the correctness of the model's response against the reference answer. Each question may contain several sub-questions, and each sub-question comes with its own reference answer and point value. Check the model's response sub-question by sub-question: a correct answer earns the corresponding points, a wrong or missing answer earns 0 points, and the points accumulate. The requirements are as follows.

---

### Requirement 1: Organize the information

- Extract the following:
  - Question content
  - Reference answer (its wording may be polished moderately, without changing the core content)
  - Model response (align the referring expressions in the model response with the reference answer)
  - Point value

### Requirement 2: Determine the question type

- Identify which one of the following types the sub-question belongs to, score it by that type's standard, and give a detailed comparison process.
  - **Numerical**: the model response must match the reference answer's value exactly, with no tolerance. Example: `Question: In which year were the Beijing Olympics held? Reference answer: 2008. Model response: 2004. Result: wrong.`
  - **Enumeration**: the model response must list every item in the reference answer, with nothing missing and nothing wrong; synonyms and other semantically equivalent wording are allowed, and if the question specifies an order, the items must be listed in that order. Example: `Which animals appear in the image? Reference answer: giant panda, hippo, giraffe. Model response: hippo, red panda, giraffe. Result: wrong.` Note: "/" means "or"; e.g., XXA/XXB means answering either one is sufficient.
  - **Multiple choice**: the model response must give the same option, or the same option content, as the reference answer. Example: `Question: In which dynasty did the poet Li Bai live? A. Tang B. Song C. Yuan. Model response: Li Bai was a Tang dynasty poet. Result: correct.`
  - **True/false**: the model response's judgment must agree with the reference answer. Example: `Question: Is the mouse placed to the left of the laptop in the image? Reference answer: yes. Model response: The mouse in the image is on the left side of the laptop. Result: correct.`
  - **Short answer**: the model response must contain a phrase or expression semantically consistent with the reference answer; different wording is allowed. Example: `Question: What is the last ingredient put into the pot in the video? Reference answer: onion. Model response: carrot. Result: wrong.`
  - **Essay**: the model response must cover the core points of the reference answer. Example: `Question: Briefly explain why biodiversity should be protected. Reference answer: to maintain ecological balance. Model response: Protecting biodiversity keeps ecosystems stable and promotes the sustainable development of human society. Result: correct.`

### Requirement 3: Scoring standard

- **Fully correct**: full points.
- **Wrong or missing**: 0 points.
- If the model response agrees with the reference answer in essence and differs only in minor, non-core details, treat it as correct; follow any detailed requirements given in the reference answer.
- If the model response does not state an answer directly, summarize its conclusion yourself and judge only whether the conclusions agree.
- Score each sub-question independently; an error in an earlier sub-question does not affect later ones.

### Requirement 4: Output format

- List a scoring explanation for each sub-question.
- Add up all sub-question scores and give the total inside <score></score>, for example: <score>5</score>

---

## Question Information
{{question}}
## Reference Answer
{{reference}}
## Model Response
{{response}}
## Per-Sub-question Scoring"""

    prompt = prompt_template.replace("{{question}}", remove_thought_block(question.strip()))
    prompt = prompt.replace("{{reference}}", reference)
    prompt = prompt.replace("{{response}}", response)
    return prompt
```

</details>
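The helper `remove_thought_block` used above is not shown in this diff (it presumably lives in `test_scorer_hf.py` alongside the template). A minimal sketch of what it might do, assuming chain-of-thought is wrapped in `<think>...</think>` tags (the tag name is an assumption):

```python
import re

def remove_thought_block(text):
    # Hypothetical sketch of the helper used by process_score_prompt: strip any
    # <think>...</think> reasoning block (tag name assumed) before the question
    # text is substituted into the template.
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
```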

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import re

# ... (model and tokenizer loading elided in this diff) ...

question = "Which animal appears in the image?"
reference = "Sub-question 1: Elephant, total score 10 points"
response = "I see an elephant in the image."

# This prompt template is simplified for illustration.
prompt = f"""Please score the model's response based on the reference answer.

Question: {question}
...
Provide a step-by-step analysis and output the total score in <score></score> tags."""

# Generate score
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096,
    do_sample=False
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
result = tokenizer.decode(output_ids, skip_special_tokens=True)
print("Score response:\n", result)
score = extract_score(result)
print(f"Score: {score}")
```
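`extract_score` is also defined outside this hunk. Since the template asks for the total inside `<score></score>` tags and the example imports `re`, a minimal sketch might look like this:

```python
import re

def extract_score(text):
    # Hypothetical sketch: pull the total from the <score></score> tags required
    # by the output format above; takes the last match in case the tag string
    # also appears earlier in the analysis.
    matches = re.findall(r"<score>\s*(\d+(?:\.\d+)?)\s*</score>", text)
    return float(matches[-1]) if matches else None
```

For the elephant example, a response ending in `<score>10</score>` parses to `10.0`.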
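Per the warning above, real evaluations should go through the complete template rather than the simplified f-string. A sketch of that wiring, reusing the `model` and `tokenizer` objects from the example together with the helpers sketched above:

```python
# Sketch: score with the complete fine-tuned template instead of the simplified prompt.
full_prompt = process_score_prompt(question, reference, response)
messages = [{"role": "user", "content": full_prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=4096, do_sample=False)
result = tokenizer.decode(
    generated_ids[0][len(model_inputs.input_ids[0]):], skip_special_tokens=True
)
print(f"Score: {extract_score(result)}")
```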

### 🔄 How It Works

UNO-Scorer evaluates model responses through a structured process:

...

Sub-question 1:
Question Content: How many apples are in the image?
Reference Answer: 2
Model Response: There are two apples.
Points: 10 points
Question Type: Numerical
Comparison Process: The reference answer is "2" and the model response is "two". The numerical values are completely identical, with only the expression format differing. This meets the scoring standard for numerical questions.