# 📚 FinePDFs-Edu

350B+ highly educational tokens from PDFs 📄
## What is it?

The 📚 FinePDFs-Edu dataset consists of 350B+ tokens of educational PDF documents, filtered from the 📄 FinePDFs dataset and covering 69 languages.
FinePDFs-Edu was created with a recipe inspired by FineWeb-Edu: we developed an educational quality classifier for each of the 69 languages in the dataset, using annotations generated by Qwen3-235B-A22B-Instruct-2507, and then used these classifiers to retain only the most educational documents. FinePDFs-Edu outperforms FinePDFs on popular benchmarks and shows the power of classifiers trained on synthetic data.
The Dataset curation section details the process for creating the dataset. And while FinePDFs-Edu might seem an order of magnitude smaller than FineWeb-Edu, unlike its web ancestor, this dataset is globally deduplicated!
## What is being released?

Along with the dataset, which covers all filtered CommonCrawl dumps from CC-MAIN-2013-20 to CC-MAIN-2025-08, we also release:
- The educational classifiers used for the filtering (one per language)
- The dataset with educational (and 3 other) labels annotated by Qwen3-235B-A22B-Instruct-2507 for English.
- The dataset with educational labels annotated by Qwen3-235B-A22B-Instruct-2507 for the 69 languages beyond English.
- The code for training the classifiers and running inference.
## How to download and use 📄 FinePDFs-Edu

See the tables above for the subset corresponding to the language you want to download.
We currently do not provide smaller sample versions, but by setting `limit` or using `streaming=True` you can easily fetch a sample of the data. If there is interest from the community, we might upload smaller sampled versions later on.
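For example, here is a minimal sketch of fetching a handful of documents via streaming (the `eng_Latn` subset name follows the naming pattern used in the examples below, and the `text` column is assumed to match FinePDFs' schema):

```python
from datasets import load_dataset

# stream the subset and take a small sample without downloading everything
fw = load_dataset("HuggingFaceFW/finepdfs-edu", name="eng_Latn", split="train", streaming=True)
for doc in fw.take(5):
    print(doc["text"][:200])
```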
### Using 🏭 datatrove

```python
from datatrove.pipeline.readers import ParquetReader

# limit determines how many documents will be streamed (remove for all)
# this will fetch the Portuguese filtered data
data_reader = ParquetReader("hf://datasets/HuggingFaceFW/finepdfs-edu/data/por_Latn/train", limit=1000)
for document in data_reader():
    # do something with document
    print(document)

###############################
# OR for a processing pipeline:
###############################

from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import ParquetReader
from datatrove.pipeline.filters import LambdaFilter
from datatrove.pipeline.writers import JsonlWriter

pipeline_exec = LocalPipelineExecutor(
    pipeline=[
        ParquetReader("hf://datasets/HuggingFaceFW/finepdfs-edu/data/por_Latn/train", limit=1000),
        LambdaFilter(lambda doc: "hugging" in doc.text),
        JsonlWriter("some-output-path"),
    ],
    tasks=10,
)
pipeline_exec.run()
```
### Using huggingface_hub

```python
from huggingface_hub import snapshot_download

folder = snapshot_download(
    "HuggingFaceFW/finepdfs-edu",
    repo_type="dataset",
    local_dir="./finepdfs-edu/",
    # download the Czech filtered data
    allow_patterns=["data/ces_Latn/train/*"],
)
```
For faster downloads, make sure to install the transfer extra with `pip install huggingface_hub[hf_transfer]` and set the environment variable `HF_HUB_ENABLE_HF_TRANSFER=1`.
### Using datasets

```python
from datasets import load_dataset

# get Croatian data
fw = load_dataset("HuggingFaceFW/finepdfs-edu", name="hrv_Latn", split="train", streaming=True)
```
Similar to the original FinePDFs, this dataset contains a high number of language-switching samples; if this is not desired, we recommend filtering them out, as sketched below.
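As a rough illustration only, a 🏭 datatrove `LambdaFilter` could drop mixed-language documents based on a per-document language-confidence value; the `language_score` metadata key and the 0.8 cutoff are placeholders for this sketch, not the dataset's documented schema:

```python
from datatrove.pipeline.filters import LambdaFilter

# keep only documents whose dominant-language confidence is high;
# "language_score" and the 0.8 threshold are hypothetical placeholders
keep_monolingual = LambdaFilter(lambda doc: doc.metadata.get("language_score", 0.0) >= 0.8)
```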
## Dataset curation

We used the same approach as for FineWeb-Edu, with minimal adjustments to the prompt. To scale to languages beyond English, we decided to train a separate classifier for each one.
### Educational Scoring
We used Qwen3-235B-A22B-Instruct-2507 to score approximately 300,000 FinePDFs samples for educational quality on a 0–5 scale. The final prompt used for scoring is available here.
After experimenting with several prompt variants, we found that the FineWeb-Edu prompt yielded the most consistent and reliable results. As in FineWeb-Edu, we observed that highly technical or graduate-level content did not correlate well with the benchmarks we track. However, unlike in FineWeb-Edu, the overall average score was noticeably lower: if we had used a fixed threshold of score = 3, only about 2% of samples would have been retained.
To address this, we instead selected the top 10% of samples based on their education score.
| Threshold | Drop Rate |
|---|---|
| 1 | 0.3028 |
| 2 | 0.9451 |
| 3 | 0.9802 |
| 4 | 0.9906 |
| 5 | 0.9987 |
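Concretely, the per-language retention threshold is the 90th percentile of that language's classifier scores, rather than a fixed score cutoff. A minimal sketch of this selection in plain NumPy (the `scores` array stands in for one language's classifier outputs):

```python
import numpy as np

# placeholder classifier scores for one language
scores = np.random.default_rng(0).normal(loc=1.5, scale=0.8, size=100_000)

# keep the top 10%: threshold at the 90th percentile of the scores
threshold = np.quantile(scores, 0.90)
keep_mask = scores >= threshold
print(f"threshold={threshold:.3f}, kept={keep_mask.mean():.1%}")
```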
We also replaced the teacher model to improve multilingual coverage and take advantage of the better inference efficiency offered by Mixture-of-Experts (MoE) architectures. To identify a suitable model, we aimed for one that was most "Claude-like", i.e., whose scoring behavior most closely matched Claude Sonnet-4. We compared models using mean squared error (MSE) on a 10k-sample development set and found that Qwen3-235B-A22B-Instruct-2507 was both highly Claude-like and highly efficient, processing up to 14 chunks/sec on a single H100 GPU.
| Model | MSE (vs. Sonnet-4) |
|---|---|
| Qwen/Qwen3-235B-A22B-Instruct-2507 | 0.398 |
| Qwen/Qwen3-235B-A22B-Thinking-2507 | 0.812 |
| Qwen/Qwen3-30B-A3B-Instruct-2507 | 0.364 |
| Qwen/Qwen3-30B-A3B-Thinking-2507 | 0.925 |
| google/gemma-3-27b-it | 2.727 |
| meta-llama/Llama-3.3-70B-Instruct | 0.553 |
| meta-llama/Llama-4-Maverick-17B-128E-Instruct | 0.707 |
| meta-llama/Llama-4-Scout-17B-16E-Instruct | 1.177 |
| mistralai/Magistral-Small-2507 | 0.717 |
| zai-org/GLM-4.5-Air-FP8 | 0.510 |
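The comparison metric itself is simply the mean squared error between the two models' score vectors over the shared development set; a one-function sketch (array names are illustrative):

```python
import numpy as np

def mse(candidate: np.ndarray, sonnet4: np.ndarray) -> float:
    """Mean squared error between a candidate annotator's scores and Sonnet-4's."""
    return float(np.mean((candidate - sonnet4) ** 2))
```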
For long documents, we score the first 2,048 tokens from the top of the document. If the document exceeds 10,000 characters, we also score the last 2,048 tokens and compute the final score as `max(top_score, bottom_score)`.
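A minimal sketch of this scheme, where `score_chunk` stands in for a call to the annotator model and the tokenizer details are assumptions:

```python
def score_document(text: str, tokenizer, score_chunk) -> float:
    # score the first 2,048 tokens from the top of the document
    ids = tokenizer(text)["input_ids"]
    top_score = score_chunk(tokenizer.decode(ids[:2048]))
    # long documents: also score the last 2,048 tokens and keep the max
    if len(text) > 10_000:
        bottom_score = score_chunk(tokenizer.decode(ids[-2048:]))
        return max(top_score, bottom_score)
    return top_score
```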
### Classifier Training

We fine-tuned a BERT-like regression model using these annotations, based on answerdotai/ModernBERT-large for English and jhu-clsp/mmBERT-base for the other languages. Both models achieved the best F1 performance among the options we evaluated while supporting FlashAttention 2 (FA2), which allowed us to label over 220 samples per second on an H100 GPU.
For each model, we unfroze both the classifier head and the last four transformer layers. To address severe class imbalance, we rebalanced the training data.
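A minimal sketch of this partial unfreezing with 🤗 transformers; matching parameters by name is an assumption based on ModernBERT's module layout and may need adjusting for mmBERT:

```python
from transformers import AutoModelForSequenceClassification

# regression setup: a single output label
model = AutoModelForSequenceClassification.from_pretrained(
    "answerdotai/ModernBERT-large", num_labels=1
)

# freeze everything, then unfreeze the classifier head and the last 4 encoder layers
for param in model.parameters():
    param.requires_grad = False
n_layers = model.config.num_hidden_layers
for name, param in model.named_parameters():
    if "classifier" in name or "head" in name or any(
        f"layers.{i}." in name for i in range(n_layers - 4, n_layers)
    ):
        param.requires_grad = True
```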
The resulting classifiers are available at:
`https://huggingface.co/HuggingFaceFW/finepdfs_edu_classifier_{lang}`
### Filtering and results

We then built 📚 FinePDFs-Edu by filtering out, for each language, the 90% of samples with the lowest edu score. Our ablations demonstrated that this refined dataset surpasses 📄 FinePDFs and all other open web datasets, with remarkable improvements on educational benchmarks such as MMLU and ARC. You will find all the ablation models and datasets in this collection.
## Considerations for Using the Data
See: FinePDFs.
## Additional Information

### Licensing Information
The dataset is released under the Open Data Commons Attribution License (ODC-By) v1.0 license. The use of this dataset is also subject to CommonCrawl's Terms of Use.
### Citation Information

```bibtex
@misc{kydlicek2025finepdfs,
    title={FinePDFs},
    author={Hynek Kydl{\'\i}{\v{c}}ek and Guilherme Penedo and Leandro von Werra},
    year={2025},
    publisher={Hugging Face},
    journal={Hugging Face repository},
    howpublished={\url{https://huggingface.co/datasets/HuggingFaceFW/finepdfs_edu}}
}
```