In an earlier post, I brought up the point that enabling a machine to read as humans do is a very difficult, if not impossible, task. After reading the blog post about machine-imagined artworks, I found that point reiterated. The author states that art (including literature) ‘captures complex, uniquely human judgments which occupy a space outside of simple visual perception’.

Rather than dwell on this well-trodden point, the post points in a new direction: instead of shoving humanities material through an algorithm, let machines describe new artwork to us. The author actually implemented this, and the result is quite intriguing. Browsing through the output, I found that some descriptions felt oddly real, while others were quite absurd. One of the more realistic ones reads as follows:

The work is made of Lineblock prints, Software, and Coffee. It displays the qualities of chance, blur, and photographic negative. It talks about hope, gluttony, and contemplation, whilst embracing speed, good and evil, and peace. Note the symbolic of rosemary of memory, apostrophe, and poppy of death.
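
The post itself doesn’t show the code behind these descriptions, but the output reads like a fixed sentence template filled with terms sampled from curated vocabularies. Here is a minimal sketch of that idea; the template and the word lists below are hypothetical stand-ins, not the author’s actual data or method:

```python
import random

# Hypothetical vocabularies -- invented for illustration, not taken from the post.
MEDIA = ["Lineblock prints", "Software", "Coffee", "Oil paint", "Graphite"]
QUALITIES = ["chance", "blur", "photographic negative", "texture", "repetition"]
THEMES = ["hope", "gluttony", "contemplation", "speed", "good and evil", "peace"]
SYMBOLS = ["rosemary of memory", "apostrophe", "poppy of death", "dove of peace"]


def describe_artwork(rng: random.Random) -> str:
    """Fill a fixed sentence template with randomly sampled terms."""
    media = rng.sample(MEDIA, 3)
    qualities = rng.sample(QUALITIES, 3)
    themes = rng.sample(THEMES, 6)
    symbols = rng.sample(SYMBOLS, 3)
    return (
        f"The work is made of {media[0]}, {media[1]}, and {media[2]}. "
        f"It displays the qualities of {qualities[0]}, {qualities[1]}, and {qualities[2]}. "
        f"It talks about {themes[0]}, {themes[1]}, and {themes[2]}, "
        f"whilst embracing {themes[3]}, {themes[4]}, and {themes[5]}. "
        f"Note the symbolism of {symbols[0]}, {symbols[1]}, and {symbols[2]}."
    )


if __name__ == "__main__":
    print(describe_artwork(random.Random(42)))
```

Even this toy version makes the point below clearer: the machine only recombines human-chosen terms; every interesting judgment lives in the vocabularies and the template.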

Generative artwork with fundamentally human ideas behind it. (Iridem, Sergio Maltagliati, CC BY-SA 3.0)

The ‘Software’ component of the piece was a little off-putting, but I figure software can play a big part in more modern art pieces. Near the end of his post, Drass writes that he imagines automating the entire process of generating artwork. This, I would still assert, is impossible for a machine to do. The creation of art is still very much a ‘human’ thing. Even the generated description above is far too broad to work from; there are too many stylistic decisions to make before the piece would be complete (as he put it, human judgments). Would it be neat? Absolutely. But even if someone programmed the computer’s decision-making process, could one even say that the result is machine-generated artwork?