drawvg is very useful. Before drawvg, I was always fine using the stable FFmpeg releases such as 8.0.1, but when I saw drawvg added to the master branch[1], I didn't want to wait for the next stable release and immediately rebuilt ffmpeg from master to start using it.
My main use case is modifying YouTube videos of tech tutorials where the speaker overlays a video of themselves in a corner of the video. drawvg is used to black out that area of the video. I'm sure some viewers like having a visible talking head shown on the same screen as the code, but I find the constant motion of someone's lips moving and eyes blinking in my peripheral vision extremely distracting. Our vision is very tuned into paying attention to faces, so the brain is constantly fighting that urge in order to concentrate on the code. (A low-tech solution is to just put a yellow sticky note on the monitor to cover up the speaker, but that means you can't easily resize/move the window playing the video ... so ffmpeg to the rescue.)
If the overlay were a rectangle, you could use the older drawbox filter and wouldn't need drawvg. However, some content creators use circles, and that's where drawvg works better. Instead of creating a separate .vgs file, I just use the inline syntax like this:
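The commenter's actual command wasn't preserved in this thread; the shape of such an invocation is roughly this (coordinates and the drawvg script body are deliberately left as placeholders, since the filter's exact script commands should be checked against the FFmpeg documentation rather than guessed here):

```shell
# Illustrative sketch only: the drawvg script body is a placeholder,
# not real drawvg syntax. The video stream is re-encoded; audio is
# passed through untouched.
ffmpeg -i input.mp4 \
  -vf "drawvg=<script drawing a filled black circle at x,y with radius r>" \
  -c:a copy output.mp4
```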
That puts a black filled circle in the bottom-right corner of a 4K vid to cover up the speaker. Different vids from different creators will require different x, y, and radius coordinates.
[1] https://git.ffmpeg.org/gitweb/ffmpeg.git/commit/016d767c8e9d...
Oh, I was not expecting an entire DSL for this. This looks really useful.
This could be a great and easy way to add annotations to technical instruction videos in a way that is self-documenting and programmable.
One of the strangest discoveries of my life was that vector graphics is a solved problem, and the solution is turtle graphics that I was taught in primary school.
Well, turtle graphics isn't implemented in drawvg, but it should be easy to implement.
I still have a copy of Hobby Electronics from 1982 that my dad bought when I was about 9 or so and in primary school, with the article on building Hebot II.
I got various bits and pieces together and experimented with doing things like driving Big Trak gearboxes (remember? J Bull Electrical used to advertise them in every magazine, along with all sorts of fascinating old shite) with an interface plugged into my ZX Spectrum, but I never actually built one.
Funnily enough I was thinking about that the other day, and how sad it is that schools like my son's primary school just have very locked-down iPads for the children to use instead of the BBC Micros we grew up with (I'm guessing you're closer to my age than primary-school age; those things were in schools well into the early 2000s. Bombproof.) that could be endlessly tinkered with.
Anyway the guy next door does a lot of 3D printing and it's never been easier to draw PCBs and get them made or even etch them at home (it's the drilling bit I hate). So maybe now EBv2.0 is five, it's time to dig out that issue of HE and start transcribing stuff into Kicad and Blender :-)
That's quite cool. I guess ffmpeg would kind of technically be a replacement for avisynth at this point.
Still, I find the syntax it uses horrible:
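The original example didn't survive in this thread, but from the description that follows it was a command of roughly this shape (a reconstruction, not the commenter's exact invocation; the filter here is an arbitrary stand-in):

```shell
# Reconstructed from the description below: seek to 12 seconds, take a
# 3-second clip, apply some video filter, encode with libvpx-vp9.
# Requires an ffmpeg build and a local in.mp4.
ffmpeg -ss 12 -t 3 -i in.mp4 -vf "hue=s=0" -c:v libvpx-vp9 out.webm
```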
I understand that most of this comes from simplicity of use from the shell, so if you take this point of view, the above makes a lot of sense.

My poor, feeble brain, though, has a hard time deducing all of this. Yes, I can kind of know what it does to some extent ... start at 12 seconds, right? during 3 seconds ... apply the specified filter in the specified format, use libvpx-vp9 as the video codec ... but the above example is somewhat simple. There are total monsters in actual use when it comes to the filter subsystem in ffmpeg. Avisynth was fairly easy on my brain; ffmpeg is not, and nobody among the ffmpeg dev team seems to think that complicated uses are an issue. I even wrote a small ruby script that expands shortcut options like the above into the corresponding long names, simply because the long names are a bit easier to remember. Even that fails when it comes to the complex filters used.
It's a shame because ffmpeg is otherwise really great.
Maybe it helps to view the ffmpeg command line as a DSL whose ugliness is caused by CLI constraints / conventions.
For what it's worth, LLMs are a great tool for both composing and understanding ffmpeg commands.
And if you want something more verbose / easier to read, you can use something like https://github.com/kkroening/ffmpeg-python (with LLMs) as well.
It looks like that project is dead; some googling turned up this one, which seems active and popular:
https://github.com/pyav-org/pyav