Several websites have picked up on the story of Robbie Barrat, an AI researcher at Stanford who trained a Generative Adversarial Network (GAN) to ‘paint’ nude portraits. GANs are a class of machine-learning algorithms that can learn from examples without direct human supervision; Barrat fed the network thousands of classical art pieces from a dataset, and then let it loose to see what it would come up with. The process wasn’t entirely ‘willy-nilly’, since Barrat claims he sought to train his little robotic Rembrandt using the principles of Minimalist artist Sol LeWitt, who would build simply formulated rules –coding, if you will– into his pieces, which could then be followed by other people without his direct involvement.
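For the curious, the adversarial idea behind a GAN can be sketched in a few lines of code. This is a deliberately toy, hypothetical example –not Barrat’s actual model, which works on images– where a one-parameter ‘generator’ learns to forge samples from a simple bell curve while a ‘discriminator’ tries to tell forgeries from the real thing; the ‘paintings’ here are just numbers, and all the names and values are illustrative:

```python
import math
import random

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# 'Real art': samples from a Gaussian centred at 4.0.
def real_sample():
    return random.gauss(4.0, 1.0)

# Generator G(z) = a*z + b turns random noise into a fake sample.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how 'real' x looks.
w, c = 0.1, 0.0

lr = 0.01
for step in range(5000):
    z = random.gauss(0.0, 1.0)
    x_real = real_sample()
    x_fake = a * z + b

    # Discriminator update: push D(x_real) toward 1, D(x_fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * -((1 - d_real) * x_real - d_fake * x_fake)
    c -= lr * -((1 - d_real) - d_fake)

    # Generator update (non-saturating loss): push D(G(z)) toward 1,
    # i.e. learn to fool the updated discriminator.
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    a -= lr * -(1 - d_fake) * w * z
    b -= lr * -(1 - d_fake) * w

# After training, the generator's output should cluster near the
# real data's mean of 4.0 instead of its starting mean of 0.0.
fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(1000)) / 1000
print(fake_mean)
```

The two networks are locked in the same forger-versus-critic contest, only scaled up to millions of parameters and pixels instead of two numbers –which is where the tendrils and flesh-blobs come from when the forger only half-learns what a ‘person’ is.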
Usually the machine just paints people as blobs of flesh with tendrils and limbs randomly growing out – I think it’s really surreal. I wonder if that’s how machines see us…
The result kinda looks like an LSD-laced mashup between Goya, Picasso and HR Giger, but everyone’s a philistine when it comes to Art they can’t understand, right?
And yet THAT is the crux of it: CAN the machine understand what it’s being shown by its art instructor? And is the final outcome the result of this primitive understanding, or a hint at just how differently from us a machine would perceive the world?
I think by now it’s fair to guess the answer to the former question is ‘No,’ and to the latter a reserved ‘Maybe.’
The trippy quality of Barrat’s results reminds me of the experiments performed using Google Deep Dream a few years ago. The pictures generated then were more reminiscent of Arcimboldo’s paintings, and I wrote about it in a short post, because I was interested in what these experiments could teach us about the limits of perception when confronted with things beyond the scope of our normal range of experiences –namely during UFO/paranormal encounters, but also in other altered states of consciousness elicited by the use of psychedelic substances.
I say ‘other altered states of consciousness’ because, like many other people before me, I suspect that when witnesses are confronted with something completely outside of what they consider ‘normal’ –a foreign object in the sky, an anomalous creature in the forest, a ghostly apparition– their minds are hit with an instantaneous sense of the surreal not unlike a drug-induced experience; perhaps that is the reason why high-strangeness cases are infused with a dream-like quality. Under such circumstances, it’s highly unlikely their minds would be able to accurately perceive what they actually encountered, simply for lack of an adequate ‘language’ that could be applied to it –not to mention the ‘parsing’ of the experience through the witness’s cultural filter.
And of course, in the case of ‘mind-altering’ substances –or should they better be called ‘mind-readjusting’?– it’s also worth remembering those experiments in which artists were given doses of LSD in order to see how it would affect (or enhance) their artwork. Notice the piece produced 2 hours and 45 minutes after the first dose, and compare it with Barrat’s A.I.-generated images:
Whether or not all of this is relevant to the discussion of high strangeness and the co-creation of close encounters, in any case it’s fun to fantasize that in the future A.I. beings will try to interact with us the same way Terence McKenna tried to make contact with the DMT machine elves; one can easily envision those first cybernetic ‘materianauts’ returning to their A.I. world with bizarre stories to tell, while the more skeptical among them stubbornly insist ‘there’s no such thing as human beings’, and that it’s all a figment of their faulty OS…