FAMA Training Data: YouTube-Commons
This is the README for the YouTube-Commons part of the FAMA training data. Refer to the main FAMA data card for generic information and data format.
Prerequisites
Install the following dependencies:
- sox: Audio conversion utility
- yt-dlp: YouTube video downloader
# Install sox (example for Ubuntu/Debian)
sudo apt-get install sox
# Install yt-dlp, more detailed instructions at: https://github.com/yt-dlp/yt-dlp/wiki/Installation
pip install -U "yt-dlp[default]"
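To check that both tools are correctly installed and on your PATH, you can print their versions:
sox --version
yt-dlp --version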
Files Included
- SplitAudioUsingSileroLog.pl - Perl script that splits the audio using the Silero logs
- train_youtubecommons-en.ids - List of English file IDs to download
- yt-commons-en.silero.json.gz - Compressed Silero log for English
- train_youtubecommons-it.ids - List of Italian file IDs to download
- yt-commons-it.silero.json.gz - Compressed Silero log for Italian
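The Silero logs are gzip-compressed JSON files. If you want to inspect them before running the scripts below, standard shell tools are enough (this is only an inspection sketch; the exact JSON layout is whatever speech_only.py writes):
# Peek at the beginning of the compressed English Silero log
zcat yt-commons-en.silero.json.gz | head -c 500
# Count how many English video IDs are listed for download
wc -l train_youtubecommons-en.ids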
⚠️ Time markers in the final tsv files stored in this repository refer to the reduced audio files, not the original YouTube audio files.
Instructions
Follow the steps below to generate the audio segments starting from the logs available in this folder. If you are interested in replicating the logs, also follow the optional step for generating the logs.
Download Audio Files
Download the audio files listed in ${VIDEO_IDS} into the folder ${DOWNLOAD_DIR} using yt-dlp by running:
for id in `cat ${VIDEO_IDS}` ; do
  download_output=${DOWNLOAD_DIR}/${id}.wav
  if ! test -f ${download_output} ; then
    echo "Saving audio track in ${download_output}"
    ${YT_DLP_PATH}/yt-dlp \
      -r 500K \
      --cookies-from-browser firefox \
      --extract-audio \
      --audio-format wav \
      --postprocessor-args "-ar 16000 -ac 1" \
      -o ${download_output} \
      "https://www.youtube.com/watch?v=${id}"
  else
    echo "Skipping... ${download_output} already saved"
  fi
done
Where ${VIDEO_IDS} is train_youtubecommons-en.ids for English and train_youtubecommons-it.ids for Italian, and ${YT_DLP_PATH} is the folder containing the yt-dlp executable.
Note: Some videos may no longer be available on YouTube, resulting in a subset of the original dataset. If an original audio file is missing due to download failure, it will be automatically skipped during processing.
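As a concrete example, the loop above can be run for English with the following settings; the directory names are placeholders of your choice, only ${VIDEO_IDS} is fixed by this repository:
VIDEO_IDS=train_youtubecommons-en.ids
DOWNLOAD_DIR=./downloads-en                 # any writable folder
YT_DLP_PATH=$(dirname "$(which yt-dlp)")    # folder containing the yt-dlp executable
mkdir -p ${DOWNLOAD_DIR}
# ...then run the download loop shown above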
(Optional) Generation of the Voice Activity Detection Logs
To reproduce the logs, install Silero VAD, which is used to remove non-speech phenomena (silence, noise, music):
pip install silero-vad
Once installed, run the script speech_only.py included in this folder:
python ./speech_only.py \
  --folder ${DOWNLOAD_FOLDER} \
  --sfx ${WAV_SUFFIX} \
  --out_folder ${OUT_FOLDER} \
  --out_file ${OUT_JSON_FILE}
The script processes the audio files in ${DOWNLOAD_FOLDER} with suffix ${WAV_SUFFIX}, stores the VAD-processed audio files (in wav format at 16 kHz) in ${OUT_FOLDER}, and writes the associated segmentation file (in JSON format) to ${OUT_JSON_FILE}.
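For example, assuming the English audio was downloaded to ./downloads-en (all paths here are placeholders, and the exact value expected for the suffix, e.g. wav vs .wav, depends on the script implementation), the invocation could look like:
python ./speech_only.py \
  --folder ./downloads-en \
  --sfx wav \
  --out_folder ./vad-en \
  --out_file ./yt-commons-en.silero.json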
Segmentation based on Voice Activity Detection Logs
Segment the audio downloaded in ${DOWNLOAD_DIR} using the logs ${LOG_FILE} and store the resulting segments in ${AUDIO_SEGMENT_DIR} by running:
perl ./SplitAudioUsingSileroLog.pl ${LOG_FILE} ${DOWNLOAD_DIR} ${AUDIO_SEGMENT_DIR}
Where ${LOG_FILE} is yt-commons-en.silero.json.gz for English, and yt-commons-it.silero.json.gz for Italian.
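For instance, to segment the English audio downloaded above (the output folder name is only illustrative):
perl ./SplitAudioUsingSileroLog.pl yt-commons-en.silero.json.gz ./downloads-en ./segments-en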
Audio Segmentation using SHAS
To split the reduced audio into segments of controlled duration, download and install SHAS into the ${SHAS_ROOT} folder, following the official README.
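A minimal installation sketch, assuming SHAS is fetched from its public GitHub repository (the environment setup itself is described in the SHAS README):
SHAS_ROOT=./SHAS
git clone https://github.com/mt-upc/SHAS.git ${SHAS_ROOT}
# then install its requirements as described in ${SHAS_ROOT}/README.md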
To generate the final segmentation, run:
python ./segment-ytc.py --not_strict \
  -wavs ${PATH_TO_WAVS} \
  -ckpt ${CHECKPOINT_PATH} \
  -yaml ${OUTPUT_YAML}
Where ${PATH_TO_WAVS} is the path to the wav files obtained from Silero and stored in ${AUDIO_SEGMENT_DIR}, ${CHECKPOINT_PATH} is the path to the SHAS multilingual model, and ${OUTPUT_YAML} is the path where the final audio segmentation used for training is stored as yaml files.
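Putting the pieces together for English, with placeholder paths for the checkpoint and the output yaml (the actual checkpoint file name depends on the SHAS model you download):
python ./segment-ytc.py --not_strict \
  -wavs ./segments-en \
  -ckpt ./shas_multilingual_checkpoint.pt \
  -yaml ./segments-en.yaml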
Transcription and Translation
Transcription is done following the same processing used for the MOSEL dataset. Translation is done following the same processing as for the other ASR datasets, described in the FAMA data card.
License and Citation
Please refer to the main FAMA data card for licensing information and citation.