Unable to access the dataset data

#2
by mirth - opened

Hi.
I'm trying to get at the text data. I tried the following:

from datasets import load_dataset

load_dataset('jhu-clsp/mmBERT-decay-data')

It starts downloading the data but eventually fails with:

requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/resolve-cache/datasets/jhu-clsp/mmBERT-decay-data/a7a5b96d68409e36d80331f041651a6419e63911/train%2Ffineweb2-sampled-decay-v2%2Fbul_Cyrl_train-sampled%2Fbatch_0971-tokenized-chunked-8192-512-32-backfill-nodups%2Fstats.json

The approach from the readme also doesn't work:

from streaming import StreamingDataset

dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/mmbert-decay',
    local='/tmp/mmbert-decay-data',
    shuffle=True
)

leads to:

ValueError: Unsupported remote path: https://huggingface.co/datasets/jhu-clsp/mmbert-decay

After changing it to:

dataset = StreamingDataset(
    remote='hf://huggingface.co/jhu-clsp/mmbert-decay',
    local='/tmp/mmbert-decay-data',
    shuffle=True,
    batch_size=2,
)

for sample in dataset:
    text = sample['text']
    print(text)

It shows the following error:

Entry Not Found for url: https://huggingface.co/datasets/jhu-clsp/mmBERT-decay-data/resolve/main/train/fineweb2-sampled-decay-v2/swe_Latn-tokenized-chunked-8192-512-32-backfill-nodups-sampled/000_00032-batch_0003-tokenized-chunked-8192-512-32-backfill-nodups/shard.00001.mds.zstd.

Packages:

datasets==4.2.0
mosaicml-streaming==0.13.0
Center for Language and Speech Processing @ JHU org

Hi @mirth , thanks for raising this!

The first issue is known: there is too much data to download in one process like that (especially with HF rate limiting). Instead, it's recommended to use HF's download CLI through the HF Hub. Apologies if that's not in the readme, I'll check and update it!

The iterative streaming from HF should work for a few samples before the rate limiting kicks in though, so I do wonder if that's a version issue or something else.

Are you trying to download it for training? If so, try the hf hub download command (https://huggingface.co/docs/huggingface_hub/v1.0.0.rc5/en/package_reference/file_download#huggingface_hub.snapshot_download) and I'll look into the HF streaming for those who just want to look at a few samples.
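A minimal sketch of that route, assuming you only want a subset (the allow_patterns glob below is a guess based on the directory names in the errors above; drop it to fetch everything):

from huggingface_hub import snapshot_download

# Download only the Swedish FineWeb2 subset as an illustration;
# omit allow_patterns to download the full (very large) dataset.
snapshot_download(
    repo_id='jhu-clsp/mmBERT-decay-data',
    repo_type='dataset',
    local_dir='/tmp/mmbert-decay-data',
    allow_patterns=['train/fineweb2-sampled-decay-v2/swe_Latn*'],
)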

Thanks!
huggingface_hub.snapshot_download seems to be working, but it's huge!

Basically I'm interested in what the raw text looks like. More specifically, does a single sample contain some sort of paragraph splits?

Center for Language and Speech Processing @ JHU org

Yes haha, it is quite big!

Thanks for flagging the streaming though, it looks like I uploaded the decompressed version, which breaks the standard streaming API. I fixed it with this script: https://github.com/JHU-CLSP/mmBERT/blob/main/data/online_streaming.py

Sorry again about that and thanks for pointing it out!

But to answer your question, it depends on the text: the data comes from many different sources (see the table in the paper). If I had to guess, I'd say nearly all of it contains multiple paragraphs, especially since samples here run up to 8192 tokens.
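Once the fix is up, a quick empirical check would be a sketch like this (the hf:// remote form and the 'text' field name are assumptions carried over from the snippets above):

from streaming import StreamingDataset

dataset = StreamingDataset(
    remote='hf://huggingface.co/jhu-clsp/mmbert-decay',  # remote form from the snippet above
    local='/tmp/mmbert-decay-data',
    shuffle=True,
    batch_size=1,
)

# Peek at a few samples and count blank-line paragraph breaks.
for i, sample in enumerate(dataset):
    text = sample['text']  # assumes samples expose a 'text' field
    breaks = text.count('\n\n')
    print(f"sample {i}: {len(text)} chars, {breaks} paragraph breaks")
    if i >= 4:
        break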
