# FAMA Training Data: YouTube-Commons

This is the README for the [YouTube-Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons) part of
the [FAMA training data](https://huggingface.co/datasets/FBK-MT/fama-data).
Refer to the [main FAMA data card](https://huggingface.co/datasets/FBK-MT/fama-data) for general information and the data format.
## Prerequisites

Install the following dependencies:

- **sox**: Audio conversion utility
- [**yt-dlp**](https://github.com/yt-dlp/yt-dlp): YouTube video downloader
```bash
# Install sox (example for Ubuntu/Debian)
sudo apt-get install sox

# Install yt-dlp; more detailed instructions at: https://github.com/yt-dlp/yt-dlp/wiki/Installation
pip install -U "yt-dlp[default]"
```
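To verify that both tools are available on your `PATH` before proceeding, a quick check is:

```bash
# Each command should print a version string
sox --version
yt-dlp --version
```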
## Files Included

- `SplitAudioUsingSileroLog.pl` - Perl script that processes audio using the Silero logs
- `train_youtubecommons-en.ids` - List of English file IDs to download
- `yt-commons-en.silero.json.gz` - Compressed Silero log for English
- `train_youtubecommons-it.ids` - List of Italian file IDs to download
- `yt-commons-it.silero.json.gz` - Compressed Silero log for Italian

⚠️ **Time markers in the final TSV files stored in this repository refer to the *reduced* audio files, not the original YouTube audio files.**
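The Silero logs are gzip-compressed JSON. To take a quick look at their content without fully decompressing them, you can run, for example:

```bash
# Peek at the beginning of the English VAD log
zcat yt-commons-en.silero.json.gz | head -c 400
```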
## Instructions

Follow the steps below to generate the audio segments starting from the logs available in this folder.
If you also want to replicate the logs themselves, follow the optional log-generation step.
### Download Audio Files

Download the audio files listed in `${VIDEO_IDS}` into the folder `${DOWNLOAD_DIR}` using yt-dlp by running:

```bash
for id in $(cat ${VIDEO_IDS}) ; do
  download_output=${DOWNLOAD_DIR}/${id}.wav
  if ! test -f ${download_output} ; then
    echo "Saving audio track in ${download_output}"
    # ${YT_DLP_PATH} is the directory containing the yt-dlp executable;
    # drop the prefix if yt-dlp is already on your PATH (e.g., after pip install)
    ${YT_DLP_PATH}/yt-dlp \
      -r 500K \
      --cookies-from-browser firefox \
      --extract-audio \
      --audio-format wav \
      --postprocessor-args "-ar 16000 -ac 1" \
      -o ${download_output} \
      "https://www.youtube.com/watch?v=${id}"
  else
    echo "Skipping... ${download_output} already saved"
  fi
done
```
Where `${VIDEO_IDS}` is `train_youtubecommons-en.ids` for English, and `train_youtubecommons-it.ids` for Italian.

**Note**: Some videos may no longer be available on YouTube, so only a subset of the original dataset may be retrievable.
If an original audio file is missing due to a download failure, it is automatically skipped during processing.
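To see which videos could not be retrieved, one simple check (a sketch reusing the `${VIDEO_IDS}` and `${DOWNLOAD_DIR}` variables from the loop above; the `missing.ids` filename is arbitrary) is:

```bash
# List the IDs whose audio track was not downloaded
for id in $(cat ${VIDEO_IDS}) ; do
  test -f ${DOWNLOAD_DIR}/${id}.wav || echo ${id}
done > missing.ids
wc -l missing.ids
```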
### (Optional) Generation of the Voice Activity Detection Logs

To reproduce the logs, install [**Silero**](https://github.com/snakers4/silero-vad),
which is used to remove non-speech phenomena (silence, noise, music):

```bash
pip install silero-vad
```
Once installed, run the script `speech_only.py` provided in this folder:

```bash
python ./speech_only.py \
  --folder ${DOWNLOAD_FOLDER} \
  --sfx ${WAV_SUFFIX} \
  --out_folder ${OUT_FOLDER} \
  --out_file ${OUT_JSON_FILE}
```

The script processes the audio files in `${DOWNLOAD_FOLDER}` with suffix `${WAV_SUFFIX}` and stores the VAD-processed audio files
(in wav format at 16 kHz) in `${OUT_FOLDER}`; the associated segmentation file (in JSON format) is written to `${OUT_JSON_FILE}`.
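For illustration, a possible invocation with concrete values (the directory and file names here are arbitrary examples, not prescribed by the script; check `speech_only.py` for the exact suffix format it expects):

```bash
# Illustrative values only; --folder, --sfx, --out_folder, --out_file are the script's arguments
python ./speech_only.py \
  --folder downloads \
  --sfx .wav \
  --out_folder vad_audio \
  --out_file vad_segments.json
```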
### Segmentation based on Voice Activity Detection Logs

Segment the audio downloaded in `${DOWNLOAD_DIR}` using the log file `${LOG_FILE}` and store the segments in `${AUDIO_SEGMENT_DIR}` by running:

```bash
perl ./SplitAudioUsingSileroLog.pl ${LOG_FILE} ${DOWNLOAD_DIR} ${AUDIO_SEGMENT_DIR}
```

Where `${LOG_FILE}` is `yt-commons-en.silero.json.gz` for English, and `yt-commons-it.silero.json.gz` for Italian.
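For example, to segment the English data (the output directory name is illustrative):

```bash
# Segment the downloaded English audio according to the provided Silero log
mkdir -p segments
perl ./SplitAudioUsingSileroLog.pl yt-commons-en.silero.json.gz ${DOWNLOAD_DIR} segments
```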
### Audio Segmentation using SHAS

To split the reduced audio into segments of controlled duration, download and install SHAS following
[the official README](https://github.com/mt-upc/SHAS?tab=readme-ov-file#usage) in the `${SHAS_ROOT}` folder.
To generate the final segmentation, run:

```bash
python ./segment-ytc.py --not_strict \
  -wavs ${PATH_TO_WAVS} \
  -ckpt ${CHECKPOINT_PATH} \
  -yaml ${OUTPUT_YAML}
```

Where `${PATH_TO_WAVS}` is the path to the wav files obtained from Silero and stored in `${AUDIO_SEGMENT_DIR}`,
`${CHECKPOINT_PATH}` is the path to the
[SHAS Multilingual model](https://drive.google.com/u/0/uc?export=download&confirm=x9hB&id=1GzwhzbHBFtwDmQPKoDOdAfESvWBrv_wB), and
`${OUTPUT_YAML}` is the path where the final audio segmentation, saved as YAML files and used for training, is stored.
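The checkpoint can be fetched from Google Drive with any method you prefer; one option (an assumption, not part of the official SHAS instructions) is [gdown](https://github.com/wkentaro/gdown):

```bash
pip install gdown
# The output filename is arbitrary; point ${CHECKPOINT_PATH} at it afterwards
gdown "https://drive.google.com/uc?id=1GzwhzbHBFtwDmQPKoDOdAfESvWBrv_wB" -O shas_multilingual.pt
```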
### Transcription and Translation

Transcription follows the same processing used for the [MOSEL dataset](https://huggingface.co/datasets/FBK-MT/mosel).
Translation follows the same processing as the other ASR datasets,
described in the [FAMA data card](https://huggingface.co/datasets/FBK-MT/fama-data).
## License and Citation

Please refer to the [main FAMA data card](https://huggingface.co/datasets/FBK-MT/fama-data) for licensing and citation information.