it’s interesting how many op-eds were written about how children born from the late ’90s onward were digital natives who would go on to become extremely versatile with tech, when the reality is that tech becoming more consumer-oriented nipped a lot of kids’ incentive to explore beyond the services offered to them. not knowing how to torrent things is only the tip of the iceberg, and tech illiteracy is only going to keep climbing as the cultural shift from computers to phones becomes more pronounced in the coming years. I used to joke that people in the late aughts saw laptops as, like, $700 facebook machines, but the modern comparison is that people see laptops as $1200 subscription-service-for-media-they-don’t-own machines.
no smart appliances in this house. absolute fucking moron appliances only. my toaster is there to make bread hot, not to tweet what time I ate breakfast or whatever the fuck
don’t need my goddamn microwave to snitch to the nsa
if i am somehow forced to own a smart appliance (likely due to lack of availability), i will figure out how to take the computer out and make it dumb
It’s the neural net equivalent of shouting “enhance!” at a computer in a movie – the resulting photo is MUCH higher resolution than the original.
Could this be a privacy concern? Could someone use an algorithm like this to identify someone who’s been blurred out? Fortunately, no. The neural net can’t recover detail that doesn’t exist – all it can do is invent detail.
This becomes more obvious when you downscale a photo, give it to the neural net, and compare its upscaled version to the original.
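If you want to try this at home, the experiment is only a few lines. Here’s a minimal sketch using Pillow; `neural_upscale` and the filenames are my own placeholders (PULSE ships with its own scripts), and the stand-in just does blurry bicubic enlargement so the script runs on its own:

```python
from PIL import Image

def neural_upscale(img):
    # Hypothetical stand-in for the face-upscaling model you're testing.
    # Plain bicubic enlargement gives a blurry baseline; a net like PULSE
    # would hand back a sharp (invented) face instead.
    return img.resize((1024, 1024), Image.BICUBIC)

original = Image.open("face.jpg").convert("RGB")

# Step 1: throw away detail by shrinking to a tiny thumbnail.
low_res = original.resize((16, 16), Image.BICUBIC)

# Step 2: hand the thumbnail to the upscaler.
upscaled = neural_upscale(low_res)

# Step 3: save all three for a side-by-side eyeball comparison.
original.save("1_original.png")
low_res.resize(original.size, Image.NEAREST).save("2_low_res.png")
upscaled.save("3_upscaled.png")
```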
As it turns out, there are lots of different faces that can be downscaled into that single low-res image, and the neural net’s goal is just to find one of them. Here it has found a match – why are you not satisfied?
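To make that many-to-one point concrete, here’s a small numpy illustration of my own: two high-res images that differ everywhere, yet downscale to exactly the same thumbnail, because downscaling keeps only each block’s average.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_average(img, k):
    """Downscale by averaging each k x k block (a simple area downsampler)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

# Start from one tiny "low-res" image...
low_res = rng.random((4, 4))

# ...and build two different high-res images on top of it by adding
# different zero-mean detail inside each 8x8 block.
base = np.kron(low_res, np.ones((8, 8)))   # 32x32, blocky
detail_a = rng.normal(size=base.shape)
detail_b = rng.normal(size=base.shape)
for d in (detail_a, detail_b):
    # Subtract each block's mean so the added detail averages out to zero.
    d -= np.kron(block_average(d, 8), np.ones((8, 8)))

img_a = base + detail_a
img_b = base + detail_b

print(np.abs(img_a - img_b).max())                      # clearly nonzero
print(np.abs(block_average(img_a, 8) - low_res).max())  # ~0 (float error)
print(np.abs(block_average(img_b, 8) - low_res).max())  # ~0 (float error)
```

The net’s job is to pick one image out of that enormous preimage, and nothing in the thumbnail says which one is the real face.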
And it’s very sensitive to the exact position of the face, as I found out in this horrifying moment below. I verified that yes, if you downscale the upscaled image on the right, you’ll get something that looks very much like the picture in the center. Stand way back from the screen and blur your eyes (basically, make your own eyes produce a lower-resolution image) and the three images below will look more and more alike. So technically the neural net did an accurate job at its task.
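That “technically accurate” claim is easy to check numerically: downscale the net’s output and measure how far it lands from the low-res input. A sketch, with `neural_upscale` once again a hypothetical stand-in for the real model:

```python
import numpy as np
from PIL import Image

def neural_upscale(img):
    # Hypothetical stand-in for the actual face-upscaling model.
    return img.resize((1024, 1024), Image.BICUBIC)

low_res = Image.open("face.jpg").convert("L").resize((16, 16), Image.BICUBIC)

# Run the upscaler, then downscale its output right back to 16x16.
round_trip = neural_upscale(low_res).resize((16, 16), Image.BICUBIC)

diff = np.abs(np.asarray(round_trip, float) - np.asarray(low_res, float))
# A small number means the net solved *its* task (matching the thumbnail),
# even if the face it invented looks nothing like the person in the photo.
print("mean abs pixel difference:", diff.mean())
```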
A tighter crop improves the image somewhat. Somewhat.
The neural net reconstructs what it’s been rewarded to see, and since it’s been trained to produce human faces, that’s what it will reconstruct. So if I were to feed it an image of a plush giraffe, for example…
Given a pixellated image of anything, it’ll invent a human face to go with it, like some kind of dystopian computer system that sees a suspect’s image everywhere. (Building an algorithm that upscales low-res images to match faces in a police database would be both a horrifying misuse of this technology and not out of character with how law enforcement currently manipulates photos to generate matches.)
However, speaking of what the neural net’s been rewarded to see – shortly after this particular neural net was released, Twitter user chicken3gg posted this reconstruction:
Biased AIs are a well-documented phenomenon. When its task is to copy human behavior, AI will copy everything it sees, not knowing what parts it would be better not to copy. Or it can learn a skewed version of reality from its training data. Or its task might be set up in a way that rewards – or at the least doesn’t penalize – a biased outcome. Or the very existence of the task itself (like predicting “criminality”) might be the product of bias.
In this case, the AI might have been inadvertently rewarded for reconstructing white faces if its training data (Flickr-Faces-HQ) had a large enough skew toward white faces. Or, as the authors of the PULSE paper pointed out (in response to the conversation around bias), the standard benchmark that AI researchers use for comparing their accuracy at upscaling faces is based on the CelebA HQ dataset, which is 90% white. So even if an AI did a terrible job at upscaling other faces but an excellent job at upscaling white faces, it could still technically qualify as state-of-the-art. This is definitely a problem.
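The arithmetic behind that loophole is worth spelling out. With made-up numbers (mine, purely illustrative):

```python
# Illustrative numbers only: how a 90%-white benchmark can hide failure.
share = {"white": 0.90, "other": 0.10}   # roughly CelebA HQ's skew
score = {"white": 0.95, "other": 0.40}   # hypothetical per-group accuracy

overall = sum(share[g] * score[g] for g in share)
print(overall)  # 0.895: near the top of the leaderboard anyway
```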
A related problem is the huge lack of diversity in the field of artificial intelligence. Even an academic project with art as its main application should not have gone all the way to publication before someone noticed that it was hugely biased. Several factors are contributing to the lack of diversity in AI, including anti-Black bias. The repercussions of this striking example of bias, and of the conversations it has sparked, are still being strongly felt in a field that’s long overdue for a reckoning.
Bonus material this week: an ongoing experiment that’s making me question not only what madlibs are, but what even are sentences. Enter your email here for a preview.