Categories: emacsconf

RSS - Atom - Subscribe via email

Converting our VTT files to TTML

| emacsconf, geek

I wanted to convert our VTT files to TTML files so that we might be able to use them for training lachesis for transcript segmentation. I downloaded the VTT files from EmacsConf 2021 to a directory and copied the edited captions from the EmacsConf 2022 backstage area (using head -1 ${FILE} | grep -q "captioned" to distinguish them from the automatic ones). I installed the ttconv Python package. Then I used the following command to convert the VTT files to TTML:

for FILE in *.vtt; do
    BASE=$(basename -s .vtt "$FILE")
    ffmpeg -y -i "$FILE" "$BASE.srt"
    tt convert -i "$BASE.srt" -o "$BASE.ttml"
done
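The head -1 trick mentioned above boils down to a small filter loop. Here's a sketch of the idea; it assumes the edited files start with a first line containing "captioned" and that an edited/ directory is the destination:

```shell
# Copy only the human-edited captions: edited VTT files are assumed
# to carry "captioned" somewhere in their first line.
mkdir -p edited
for FILE in *.vtt; do
    if head -1 "$FILE" | grep -q "captioned"; then
        cp "$FILE" edited/
    fi
done
```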

I haven't gotten around to installing whatever I need in order to get lachesis to work under Python 2.7, since it hasn't been updated for Python 3. It'll probably be a low-priority project anyway, as EmacsConf is fast approaching. Anyway, I thought I'd stash this in my blog somewhere in case I need to make TTML files again!

View or add comments

Re-encoding the EmacsConf videos with FFmpeg and GNU Parallel

| geek, linux, emacsconf

It turns out that using -crf 56 compressed the EmacsConf videos a little too aggressively, losing too much information in the video. We wanted to re-encode everything, maybe going back to the default value of -crf 32. My laptop would have taken a long time to do all of those videos. Fortunately, one of the other volunteers shared a VM on a machine with 12 cores, and I had access to a few other systems. It was a good opportunity to learn how to use GNU Parallel to send jobs to different machines and retrieve the results.

First, I updated the compression script:

ffmpeg -y -i "$FILE"  -pixel_format yuv420p -vf $VIDEO_FILTER -colorspace 1 -color_primaries 1 -color_trc 1 -c:v libvpx-vp9 -b:v 0 -crf $Q -aq-mode 2 -tile-columns 0 -tile-rows 0 -frame-parallel 0 -cpu-used 8 -auto-alt-ref 1 -lag-in-frames 25 -g 240 -pass 1 -f webm -an -threads 8 /dev/null &&
if [[ $FILE =~ "webm" ]]; then
    ffmpeg -y -i "$FILE" $*  -pixel_format yuv420p -vf $VIDEO_FILTER -colorspace 1 -color_primaries 1 -color_trc 1 -c:v libvpx-vp9 -b:v 0 -crf $Q -tile-columns 2 -tile-rows 2 -frame-parallel 0 -cpu-used -5 -auto-alt-ref 1 -lag-in-frames 25 -pass 2 -g 240 -ac 2 -threads 8 -c:a copy "${FILE%.*}--compressed$SUFFIX.webm"
else
    ffmpeg -y -i "$FILE" $*  -pixel_format yuv420p -vf $VIDEO_FILTER -colorspace 1 -color_primaries 1 -color_trc 1 -c:v libvpx-vp9 -b:v 0 -crf $Q -tile-columns 2 -tile-rows 2 -frame-parallel 0 -cpu-used -5 -auto-alt-ref 1 -lag-in-frames 25 -pass 2 -g 240 -ac 2 -threads 8 -c:a libvorbis "${FILE%.*}--compressed$SUFFIX.webm"
fi
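The fragment above leans on a few variables that get set up elsewhere in the script. Here's a hypothetical sketch of what they might look like (the filter value is an assumption, not the original), along with how the output filename is built:

```shell
Q=32                          # target CRF value passed to the script
FILE="talk.webm"              # input video
SUFFIX=$Q                     # tacked onto the output name
VIDEO_FILTER="scale=-2:720"   # assumed value; the original filter isn't shown
# ${FILE%.*} strips the extension, so the second pass writes:
echo "${FILE%.*}--compressed$SUFFIX.webm"
# → talk--compressed32.webm
```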

I made an originals.txt file with all the original filenames, one per line.

I set up a ~/.parallel/emacsconf profile with something like this so that I could use three computers and my laptop, sending one job each and displaying progress:

--sshlogin computer1 --sshlogin computer2 --sshlogin computer3 --sshlogin : -j 1 --progress --verbose --joblog parallel.log
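GNU Parallel looks up a profile named by -J in ~/.parallel/, so setting this up is just writing the options to that file:

```shell
# Create the profile that -J emacsconf will pick up.
mkdir -p ~/.parallel
cat > ~/.parallel/emacsconf <<'EOF'
--sshlogin computer1 --sshlogin computer2 --sshlogin computer3 --sshlogin : -j 1 --progress --verbose --joblog parallel.log
EOF
```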

I already had SSH key-based authentication set up so that I could connect to the three remote computers.

Then I spread the jobs over four computers with the following command, where compress-video.sh stands in for the two-pass compression script above:

cat originals.txt | parallel -J emacsconf \
                             --transferfile {} \
                             --return '{=$_ =~ s/\..*?$/--compressed32.webm/=}' \
                             --cleanup \
                             --basefile compress-video.sh \
                             bash compress-video.sh 32 {}

It copied each file over to the computer it was assigned to, processed the file, and then copied the file back.

It was also helpful to occasionally do echo 'killall -9 ffmpeg' | parallel -J emacsconf -j 1 --onall if I cancelled a run.

It still took a long time, but less than it would have if any one computer had to crunch through everything on its own.

This was much better than my previous way of doing things, which involved copying the files over, running ffmpeg commands, copying the files back, and getting somewhat confused about which directory I was in and which file I assigned where and what to do about incompletely-encoded files.

I sometimes ran into problems with incompletely-encoded files because I'd cancelled the FFmpeg process. Even though ffprobe reported the expected duration, the files were missing a large chunk of video at the end. I added a compile-media-verify-video-frames function to compile-media.el so that I could get the last few seconds of frames, compare them against the duration, and report an error if there was a big gap.

Then I changed emacsconf-publish.el to use the new filenames, and I regenerated all the pages. For EmacsConf 2020, I used some Emacs Lisp to update the files. I'm not particularly fond of wrangling video files (lots of waiting, high chance of error), but I'm glad I got the computers to work together.

View or add comments

Adding little nudges to help on the EmacsConf wiki

| emacs, emacsconf

A number of people helped capture the talks for EmacsConf 2021, which was fantastic because we were able to stream all of the first day's talks with open captions and most of the second day's talks too. Right now, in fact, there are only two talks left that haven't been captioned. After the conference, a couple of other people volunteered to help out as well. Whee!

I want to figure out a good way to help people work on the things that they're interested in without necessarily burdening them with too much work, too little work, too much coordination, or not enough coordination. Before the conference, one of the perks we had offered was that captioners got early access to the videos. I had a password-protected directory on a web server and an index that I made using Emacs Lisp to display the talks that still needed to be captioned. People e-mailed me to call dibs on the talk they wanted to caption, and that was how we avoided duplicating work. Now that all the videos are public, of course, people can just go to the regular wiki.

The other thing to think about is that in addition to captioning the two remaining talks (not essential, but it would be nice), there are also different levels of things that we can do. It would be nice to have chapter markers for some of the longer Q&A sessions. It would be fantastic to cross-reference those with the questions and answers so that people can jump to the section they're interested in. It'd be incredible if somebody actually wrote down the answers. And it'd be even more awesome if people actually captioned the Q&A sessions as well, which were in many cases much longer than the actual sessions. So this is a fair bit of work, but people can probably pick a level that matches their interest and time available.

I'm not entirely sure how to coordinate this especially since I've got limited computer time. So my goal is to have something where volunteers can basically just wander around looking for talks that they're interested in and see ways to help out, or see a list of things that could use some work. So for example, while they're browsing the maintainers talk, they might say, "Oh, this one needs some chapter markers. I want to help with that. How do I do that? How do I get started?" And then they go down that path. On the other hand, you might have somebody sitting down saying, "I've got an hour and I want to go help out. What can I do?"

I don't want to keep data in many different places. I wonder if I can use the wiki for a lot of this coordination. Now that the videos are public, I've started tagging the pages that need extra help, like long Q&A sessions that need chapter markers.

With a little bit more work, I think people will be able to follow the instructions from there, especially if they've done this kind of captioning before, or email us to ask for help and then we can get them started.

I also thought about using Etherpad to do that kind of coordination where people would put their name next to a thing to reserve it, but then that's one more step. I don't know. At the moment, editing the wiki is a bit of an involved process. Worst-case scenario (best-case, actually, if we have lots of people wanting to help? =) ), people can call dibs by emailing us, and one of the organizers will add a little note in the volunteer attribute. It's probably a good start, so we'll see where we can take it.

View or add comments

EmacsConf backstage: picking timestamps from a waveform

| emacs, emacsconf

We wanted to trim the Q&A session recordings so that people don't have to listen to the transition from the main presentation or the long silence until we got around to stopping the recording.

The MPV video player didn't have a waveform view, so I couldn't just jump to the parts with sound. Audacity could show waveforms, but it didn't have an easy way to copy the timestamp. I didn't want to bother with heavyweight video-editing applications on my Lenovo X220. So the obvious answer is, of course, to make a text editor do the job. Yay Emacs!


Figure 1: Select timestamps using a waveform

It's very experimental and I don't know if it'll work for anyone else. If you want to use it, you will also need mpv.el, the MPV media player, and the ffmpeg command-line tool. Here's my workflow:

  • M-x waveform-show to select the file.
  • left-click on the waveform to copy the timestamp and start playing from there
  • right-click to sample from that spot
  • left and right to adjust the position, shift-left and shift-right to take smaller steps
  • SPC to copy the current MPV position
  • j to jump to a timestamp (hh:mm:ss or seconds)
  • > to speed up, < to slow down

I finally figured out how to use SVG to embed the waveform generated by FFmpeg and animate the current MPV playback position. Whee! There's lots of room for improvement, but it's a pretty fun start.

If you're curious, you can find the code at . Let me know if it actually works for you!

View or add comments