Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models
Paper: arXiv:2403.02178
The model is trained with Masked Thought Fine-Tuning (MFT), a simple variant of standard supervised fine-tuning (SFT) that randomly masks a portion of the tokens in the reasoning steps. See our code and paper for details.
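As a rough illustration of the idea, the following is a minimal sketch (not the paper's actual implementation) of masking reasoning-step tokens before computing the loss; `MASK_ID`, `mask_thought`, and the boundary argument `answer_start` are hypothetical names introduced here:

```python
import random

MASK_ID = 0  # hypothetical id for a [mask] token; a real run would use the tokenizer's


def mask_thought(input_ids, answer_start, mask_ratio=0.4, seed=0):
    """Randomly replace a fraction of reasoning-step tokens with MASK_ID.

    Tokens before `answer_start` belong to the chain-of-thought and may be
    masked in the input; the labels keep the original tokens, so the loss is
    still computed against the unmasked sequence.
    """
    rng = random.Random(seed)
    masked = list(input_ids)
    for i in range(answer_start):
        if rng.random() < mask_ratio:
            masked[i] = MASK_ID
    labels = list(input_ids)  # training targets stay unmasked
    return masked, labels


ids = [11, 12, 13, 14, 15, 16]
masked, labels = mask_thought(ids, answer_start=4, mask_ratio=0.5, seed=1)
```

Only the input side is perturbed; because the labels are unchanged, the model must still predict the original reasoning tokens, which is the intuition behind masking partial reasoning steps.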
We evaluate the released checkpoints with the scripts provided in our code:
| Model | GSM8K (accuracy, %) |
|---|---|
| adalaw/Llama2-7B-GSM8K-SFT | 42.8 |
| adalaw/Llama2-7B-GSM8K-MFT | 47.3 |