Pinned toot

So you're aware: most of my art is now going on at

This account will remain active, but follow both if you don't want to miss anything!

A Stylish-Cycle-GAN that uses the input image in the augmented instance norm. The fun of Cycle-GAN with Style.

For some reason I'm doing flowers -> buildings...
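A minimal sketch of the idea as I'd read it: instance-normalize a feature channel, then re-scale/shift it with statistics taken from the input image itself (AdaIN-style conditioning). The function names and the exact conditioning scheme are my assumptions, not the actual model.

```python
import math

def instance_norm(channel, eps=1e-5):
    # Normalize one flattened feature-map channel to zero mean, unit variance.
    mean = sum(channel) / len(channel)
    var = sum((x - mean) ** 2 for x in channel) / len(channel)
    return [(x - mean) / math.sqrt(var + eps) for x in channel]

def image_conditioned_norm(channel, source_pixels, eps=1e-5):
    # Hypothetical "augmented" step: scale/shift the normalized channel
    # using the mean and std of the input image's pixels.
    s_mean = sum(source_pixels) / len(source_pixels)
    s_var = sum((x - s_mean) ** 2 for x in source_pixels) / len(source_pixels)
    s_std = math.sqrt(s_var + eps)
    return [s_std * x + s_mean for x in instance_norm(channel, eps)]
```

In a real model the scale/shift would come from learned features of the input image rather than raw pixel statistics; this just shows the plumbing.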

love too give 1 billion votes, to 1 (one) boomer in Wyoming,,

mm yess, the ecletoral collage, which serves to protect those tread upon rural peoples,,

having a nice pair of headphones on my desk adjacent to my laptop and opting to use the laptop speakers for music is just really emblematic of my life

Liking one of the posts but not replying because I realize my research isn't anywhere near publication yet, but I *will* be jumping into the discourse soon...

Sweating and looking for that discourse on ML and physics from months ago because I feel I finally have something to contribute to the conversation

Adverb boosted

Thinking about my animated particle systems while drawing today's #inkyDays, inspired by chemistry and physics.

#generative #art #ink #drawing

newest drawing at bottom of post: InkyDays 05/29/19 -

Adverb boosted

Honestly, the dream is to find this but for glasses...

Found a single scalar in the latent vector that works pretty well for masc/fem appearance

Next step is to find my own face in the latent space and alter it
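The editing step above can be sketched as sweeping one coordinate of the latent vector and feeding each copy to the generator. The function name and vector sizes here are illustrative, not from the actual code.

```python
def sweep_latent_scalar(z, index, values):
    # Copies of latent vector z with one coordinate swept through `values`.
    # Feeding each copy to the generator animates whatever attribute that
    # scalar controls (masc/fem appearance in this case).
    frames = []
    for v in values:
        edited = list(z)
        edited[index] = v
        frames.append(edited)
    return frames
```

The original vector is left untouched, so the same starting point can be reused for other edits.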

(Extremely divorced 45-year-old man voice) The National's new album is practically out if you don't mind a video with it.

Not that I dislike the film portion - I'm just leaving it open in a different tab to hear the soundtrack.

This is made by holding the latent vector constant and varying only the first layer of noise.

Surprisingly, the subsequent noise layers do very little aside from micro-details. This may be because I didn't regularize the noise by switching it every now and again.
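The animation setup described above can be sketched like this: the latent code and the later noise layers are frozen, and only the first noise layer is resampled per frame. Names and shapes are hypothetical; a real generator would consume these tensors.

```python
import random

def animation_inputs(latent, noise_sizes, n_frames, rng=None):
    # Hold the latent code fixed; resample only the first noise layer per
    # frame, freezing the later layers.
    rng = rng or random.Random(0)
    frozen = [[rng.gauss(0, 1) for _ in range(n)] for n in noise_sizes[1:]]
    frames = []
    for _ in range(n_frames):
        first = [rng.gauss(0, 1) for _ in range(noise_sizes[0])]
        frames.append((latent, [first] + [list(layer) for layer in frozen]))
    return frames
```

Each frame gets identical inputs except for the first noise layer, so any change between frames is attributable to it.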

I've written what is essentially a StyleGAN-Light, I think.

My model uses just over 8 million trainable weights compared to ~23 million in the generator alone for Nvidia's StyleGAN.

This is trained on a Kaggle kernel for less than 10 hours on the CelebA dataset at 256x256 resolution.
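The 8M-vs-23M comparison above comes from summing the sizes of the trainable weight tensors. A toy version of that bookkeeping for a chain of dense layers (the widths here are made up for illustration):

```python
def dense_params(layer_widths):
    # Count weights + biases for a chain of fully connected layers:
    # each layer has fan_in * fan_out weights plus fan_out biases.
    total = 0
    for fan_in, fan_out in zip(layer_widths, layer_widths[1:]):
        total += fan_in * fan_out + fan_out
    return total
```

Conv layers count the same way, with the kernel spatial size multiplied in; frameworks expose this directly (e.g. summing parameter tensor sizes).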

Refactor Camp

Mastodon instance for attendees of Refactor Camp, and members of various online/offline groups that have grown out of it. Related local groups with varying levels of activity exist in the Bay Area, New York, Chicago, and Austin.

Kinda/sorta sponsored by the Ribbonfarm Blogamatic Universe.

If you already know a few people in this neck of the woods, try and pick a handle they'll recognize when you sign up. Please note that the registration confirmation email may end up in your spam folder, so check there. It should come from administrator Zach Faddis.