Experimental algorithmic art in C
This Monday I passed my midterm oral test in Information Theory with a top grade! Hooray! I’ve been quite happy since Monday, and my brain has been thinking a lot, non-stop.
I got a GL-iNet AR150 from Amazon for cheap, and I started tinkering with it as soon as it arrived in order to make a PirateBox out of it. I fucked up the default OpenWRT firmware in such a way that I couldn’t roll back to the default version, so I had to install the latest firmware release from the GL-iNet website. Too bad I stalled there. I’m gonna pick the project up again soon, I hope.
But before that, I just had to try doing something related to what I’ve been thinking about since Monday evening, when I was riding the bus back home from university.
That evening I was thinking of a kinda silly art project: generating random pictures of fixed size and then trying to classify them through an (already trained) neural network.
While I still think that project is doable (just a little out of my reach, for the moment), I ended up settling on something quite a bit simpler but no less entertaining.
It’s very close to various already existing programs and concepts that are probably far more advanced and better than anything I could possibly ever achieve, but that didn’t stop me anyway.
What do you mean by “algorithmic art”?
I’m pretty sure that nowadays we are all more familiar with the concept of Algorithmic art due to the rise of Machine Learning and Neural Networks.
My interpretation of algorithmic art is something way more primitive, though, and has its roots in the Demoscene.
While I’m no demoscener, those who know me may already know about my experience with Pico-8; everyone else will have to wait for a dedicated blog post later on. Anyway, along with many other amazing users, I’ve been trying to learn, reproduce and discover some demoscene-y animations and patterns
with the classic approach of producing them with compact mathematical formulas that fit in small amounts of memory.
Now, that is the beauty of having limited resources to play with.
So I tried working in a similar way to produce images in C.
“Why C?”, any reasonably sane individual may ask.
I can’t hide the fact that I like C.
Its low-levelness and its ability to play hackish, dirty tricks with byte-level data are things I really like,
and its tendency to produce small, efficient executables was another perk I considered for this project.
There are a few reasons why this choice may backfire and keep me from taking things further (such as real-time generation of pictures), partly because of my intention to stay as far as possible from libraries (like SDL) and keep everything low-level.
Luckily, I’ve found out about the .ppm file format.
I’ll spare you the details, but it’s probably the easiest uncompressed image format to write and/or parse, since it has a minimal header and
a very user-friendly data structure. Instant adoption!
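Just to show how little there is to it, here is a minimal sketch of a P6 .ppm writer (a sketch under my own assumptions, not necessarily my actual code):
// Minimal sketch of a P6 .ppm writer: a tiny text header followed by
// raw RGB data, one byte per channel, row by row.
#include <stdio.h>

int write_ppm(const char *path, const unsigned char *pixels, int width, int height)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    // Header: magic number, dimensions, maximum color value
    fprintf(f, "P6\n%d %d\n255\n", width, height);
    // Then just the raw pixel data: 3 bytes (R, G, B) per pixel
    fwrite(pixels, 3, (size_t)width * height, f);
    fclose(f);
    return 0;
}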
I still don’t get it. What does your program do?
My program generates frames of an animation.
Each pixel in each frame assumes an RGB color value (1 byte per color).
Each color value of that pixel is calculated by a user-defined formula, which can depend on:
- the “coordinates” x, y;
- the “frame time” t;
- whatever C allows you to do: math functions, time functions, random functions, data read from files, data obtained from the internet, etc.
Basically, you can create any pattern! (as long as you are willing to shrink it into a single byte per color).
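Conceptually, each frame boils down to a nested loop like the sketch below (a simplified illustration with placeholder formulas, not my exact code; SIZEX and SIZEY are the image dimensions):
// Conceptual sketch of building one frame: every pixel gets its three
// color bytes from a formula of the coordinates x, y and the frame time t.
#define SIZEX 256
#define SIZEY 256

void generate_frame(unsigned char frame[SIZEY][SIZEX][3], int t)
{
    for (int y = 0; y < SIZEY; y++) {
        for (int x = 0; x < SIZEX; x++) {
            unsigned char color[3];
            // --- user-defined formulas go here (placeholders below) ---
            color[0] = x ^ y;
            color[1] = (x + y + t) & 0xFF;
            color[2] = color[0] ^ color[1];
            frame[y][x][0] = color[0];
            frame[y][x][1] = color[1];
            frame[y][x][2] = color[2];
        }
    }
}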
For example,
// Red: R = x - t
color[0] = x - t;
// Green: G = y + t
color[1] = y + t;
// Blue: B = R XOR G
color[2] = color[0] ^ color[1];
yields this result:
Thanks to AlsD for the suggestion of using WebMs instead of GIFs. We’re in the future already!
The fun thing about this whole business is that, similarly to bytebeat, a lot of fun results can be obtained just by using (combinations of) bitwise operators: especially XOR is great, since its truth table is 50% true and 50% false, and it tends to yield square-like patterns. Obviously, experiments with much more complicated formulas can be made. For example, the following code:
color[0] = (int)((x+y)-(double)(t/2)) | (int)((double)((0xFF)*fabs(sin((double)(-x+t)*2*PI/(SIZEX+SINEDURATION)))));
color[1] = (0xFF-(x^t)-y+(t%127))+(2*t) | (int)((double)((0xFF/(t+1)%16)*fabs(cos((double)(x+y+t)*2*PI/(SIZEX+SIZEY+SINEDURATION)))));
color[2] = (char)(color[0]*fabs(sin((double)(t*2*PI/(SINEDURATION))))) ^ (char) ((int)(0xFF*sin((double)x*2*PI/(SIZEX))) & (int)((double)((y^x)*fabs(cos((double)(x+y-t)*2*PI/(SIZEX+SIZEY+SINEDURATION))))));
Even if it’s not in an optimal form, this example shows how much type casting turns out to be necessary when mixing math functions into these formulas. The result is a strange pattern which makes me think of ever-changing, colorful city buildings:
I’ll propose another example, but I have to admit that I’m not a strong pre-visualizer of these patterns, and I don’t possess the necessary level of knowledge and experience around the whole thing, so I tend to proceed a lot by trial and error rather than accurately “studying” pattern formulas.
More advanced users may possess the ability of “byte-level pattern visualization” (I just made that up), so they may preview in their mind what is going to happen with the code they write. God bless them. I curse them, for I’m very envious. Oof.
Anyway:
color[0] = (char)(255*fabs(sin((double)y*2*PI/SIZEY)+cos((double)(x+t)*2*PI/(SIZEX))))%255;
color[1] = (char)(255*fabs(sin((double)x*2*PI/SIZEX)+cos((double)(y+t)*2*PI/(SIZEY))))%255;
color[2] = color[1]^color[0];
This one, I called “Gravitational orbs”, since it reminds me of… well, it’s just roundish orbs, right?
How do you create the videos?
As I said, the program generates the frames, but getting to the video is a completely different business.
I didn’t even think about rendering video in C. Even if a “simple” video format existed, I would never want to deal with it in C. It would be scary.
I settled for a much more convenient solution: since I have the frames, I’ll just use ffmpeg to make a video out of them.
There’s nothing ffmpeg can’t handle.
So I came up with a CLI command which generates an mp4 video starting from the indexed frames and shuts up about it:
ffmpeg -framerate 60 -pattern_type sequence -i %01d.ppm -y out.mp4 &> /dev/null
Now, at first the framerate parameter didn’t seem to be applied properly: the catch is that -framerate (like -pattern_type) is an input option, so it has to come before -i, as in the command above; otherwise it doesn’t apply to the input images and won’t do what you expect.
Gifs? No wait… there’s something better!
Creating the gifs, from here, is also very simple. I chose to create them from the video, but I could’ve done it from the frames as well. A bit of quality is lost this way, but I don’t really care, since I’m compressing the whole thing anyway. I used some filtering parameters found online, but I didn’t want to waste a lot of time seeking optimal ones. It’s just for showing off online, after all.
echo "Creating palette..."
ffmpeg -i out.mp4 -vf "fps=30,scale=160:-1:flags=lanczos,palettegen=stats_mode=diff" -y 'palette.png' &> /dev/null
echo "Creating gif..."
ffmpeg -i out.mp4 -i palette.png -lavfi "fps=30,scale=160:-1:flags=lanczos,paletteuse=dither=bayer:bayer_scale=2:diff_mode=rectangle" -y out.gif &> /dev/null
Please notice how I send every output to /dev/null. This is because I’ve put everything in a small script you can find at the end of this page.
EDIT: Disregard the GIFs, just use webm instead! This command should generate relatively decent quality webms with a small filesize.
ffmpeg -i out.mp4 -y -c:v libvpx -b:v 500k out.webm &> /dev/null
Parallelizing
Another thing I noticed, however, is that creating the frames turns out to be painfully slow, especially when the calculations involve a lot of math functions, such as sin(), cos(), etc. This was the perfect test ground for an application I found months ago and never had the pleasure to try: GNU parallel.
GNU Parallel is a useful shell tool for running stuff in parallel. Since I (luckily?) didn’t have to follow my uni’s advanced course on OSs,
which is mandatory only for the Computer Engineering Master students, I do not know how to define parallel threads in C.
Ok, at least I do know about forks, but that’s not it, right?
Parallel comes in handy because it deals with all that stuff. I just need my highly repetitive tasks to run as quickly as possible, using my dual-core CPU (on my x220).
I just had to add command-line arguments to the C program so it can be told from the outside which frame to generate.
Now my program needs to run as:
./main CURRENT_FRAME TOTAL_FRAMES
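To give an idea, the argument handling can be as simple as this sketch (again, an illustration rather than my exact code):
// Sketch: the first argument becomes the frame time t, the second is the
// total number of frames (handy e.g. for making formulas loop cleanly).
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s CURRENT_FRAME TOTAL_FRAMES\n", argv[0]);
        return 1;
    }
    int t = atoi(argv[1]);
    int total_frames = atoi(argv[2]);
    (void)total_frames;

    // ... generate frame t, then write it out as "<t>.ppm" ...
    char name[64];
    snprintf(name, sizeof name, "%d.ppm", t);
    // write_ppm(name, pixels, SIZEX, SIZEY);
    return 0;
}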
Parametrizing the frame time variable this way, I was able to use the following command to generate as many frames as I like:
parallel ./main {} "$1" :::: <(seq 0 "$1")
“$1” is, in my bash script, the user-input number of total frames. Just substitute it with any positive integer value to run it standalone.
The “seq” command outputs a sequence of values from 0 to $1, which is fed into parallel to spawn one task for each frame to generate.
Everything’s easy, but I struggled a bit with how parallel deals with the seq command.
Luckily, the official documentation alone was enough to figure it out.
Now, in my case using parallel halved the time needed to render 256 frames. In your case I don’t know, it heavily depends on what CPU you have, I guess, but you may expect a significant improvement. Also, since I noticed how slow my computer was when I first tried rendering the frames, it helped me discover that CPU frequency scaling was disabled in my TLP settings. Huge find!
Just give me the script already!
Ok, ok!
Damn, chill out!
I’m making a github repository, okay? You happy?
Please try it out. If you have fun with it and happen to find some interesting formula, let me know!
If you think it’s completely wrong, buggy, stupid, let me know that as well.
At least that would mean someone’s reading this, lol.