Jeffrey Choy

We need to think bigger about AI and Art

Updated: Aug 3

Whether you’re in the ‘AI art is art’ camp or the other, this is a tool that will forever change the creative industry, for better or worse, and it has arrived. If you’re fortunate enough to not know anything about this technology: a number of machine learning / AI research labs have created AI systems that allow computers to generate images. These tools were mostly experimental and nothing close to being able to create what humans can create - until now. It’s something the digital art field has paid close attention to, and it is going to change art in ways we could never expect.


Examples of creations I facilitated - from left: ‘blue abstract composition Bauhaus painting by Mark Rothko’, ‘blue fluffy alien in red clothes’, ‘tear gas smoke sculpture by Umberto Boccioni’, ‘modernist castle designed by Mies Van Der Rohe in misty English forest’ and ‘5 years old crayon drawing of a fire giraffe’


From abstract art, digital painting, complex sculpture and architectural visualisation to a five-year-old's hand drawing, whatever you ask for, the AI makes it, or at least tries its best to. This might be less scary to any artists reading this when you consider that machine learning has already been deployed to help creatives in their work process. Adobe rolled out some simpler AI tools, such as the object selection tool and image upscaling, a few years ago; on the other side of the world, the mega-corporation Taobao, China's equivalent of Amazon, has long had a graphic design tool that generates product images and advertisements in different sizes for sellers' products.


But nothing came quite as close to what OpenAI’s DALL-E 2 and Midjourney have been able to achieve. Generative art will change how we approach art creation. It'll change the kind of skill set needed for a creative job - it's going to be less about your ability to think about composition, colour theory and tool use, and more about your ability to associate ideas, to describe visual ideas in words, and to input them into the computer in the right way to generate the image you want. Does that cheapen the value of someone who puts their hands into researching, planning, sketching and developing their artworks, versus an AI that has learned from studying millions of existing images, figured out what we humans like to look at, and can replicate it in mere minutes? Did the invention of Photoshop make oil painting obsolete?


I think the answer to that is Yes and No - maybe fewer people paint, but was that really so bad? When I was in design school in 2015, a lot of people didn't draw, something that would have been unthinkable in a design degree in 1995; but there we were, clicking our degree to completion. It doesn’t take any physical painting away; rather, more people are given a chance to be creative, and even to excel when they do it right, and people who love painting by hand can still do it. Perhaps these AI generative tools* are the same way, simply allowing more people to express themselves in more interesting ways, which will in turn change the creative landscape. * (They don’t really have an official name, and I’m hesitant to call them ‘AI art generators’.)


It's not simply a lower barrier of entry into the field, though - you might need to find creative ways around words in order to generate the right picture. Complex, detailed fantasy paintings that used to take months can now be done within minutes. Take The Birrin Project by Alex Ries, a digital painting and world-building project that designed an entire alien world, from its biology to its culture, architecture and technology. Now a capable writer, without the ability to draw, will be able to build a project like this, or like one of my personal favourite digital painting projects, ‘All Tomorrows’ by C. M. Kosemen, in a fraction of the time it originally took those creators.
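For the curious, here is roughly what ‘describing a visual idea in words and inputting it into the computer’ looks like when done through code rather than a chat interface. This is a minimal sketch assuming the OpenAI Python client (v1.x) and an API key in the environment; the prompt reuses one of the examples from earlier, and the image size and count are illustrative rather than prescriptive (Midjourney, by contrast, is driven through Discord commands rather than an API).

```python
# Minimal text-to-image sketch, assuming the OpenAI Python client (v1.x).
# The prompt and parameters are illustrative only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.images.generate(
    prompt="modernist castle designed by Mies Van Der Rohe in misty English forest",
    n=1,                # one image per request
    size="1024x1024",   # output resolution
)

# The API returns a temporary URL pointing to the generated image
print(response.data[0].url)
```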


A test set of 22 tarot cards, each with a uniquely designed visual, created in 5 hours

It does, however, take away a certain agency and direction from the artist. You can create artwork without specifying the colour or even the form of your subject, simply allowing the AI to take the reins while you only provide the inspiration and the mood you want to achieve. It's not perfect though, as you will quickly notice; it cannot, at the moment at least, understand nuanced interactions within human culture, and everything presented in the resulting image is based on learning from what people have created. As you might imagine, people are not perfect.


We will have to teach an AI more nuanced contexts like race - that it is not okay to give every Asian man a Vietnamese farmer's hat, the same way face detection tech that has a harder time recognising darker-skinned faces needs to be adjusted to understand what a darker-coloured face looks like. The nature of the tool is Eurocentric, since it is based on what the general digital visual art scene is like; and since most people who draw Asian faces draw them in Asian clothing, it's easier for the AI to generate the two together than not. When you generate a character without specifying gender, you're most likely to get a female character, because that's what the majority of digital artists paint.

I asked for 'old Asian man in red blindfold', and all I can see is that scene from Father Ted

An example of unintentional AI racism can be seen in MIT Technology Review’s series on AI colonialism, where AI surveillance tools in South Africa, built on observing people’s behaviours and faces, are re-entrenching racial hierarchies and fuelling a digital apartheid. Because face recognition tools are mostly deployed in wealthier areas, the training data determined what counts as ‘normal’ behaviour there, resulting in the machine flagging poorer people's clothing and skin colour in wealthy areas as 'abnormal' and generating unnecessary reports that targeted them.


The same kind of problem could potentially appear in generative AI - characters who are royal or in power are usually depicted as white, while nomadic and roguish types are more likely to be people of colour. The tool itself doesn't understand fairness; it is mostly a reflection of what was fed into its dataset, of what we teach the AI to learn. These tools will unfortunately reinforce biases and stereotypes if not monitored and corrected carefully. And who should have the power and responsibility to strike the balance between that correction and freedom of speech or artistic freedom? If this is not done right, it will shape the coming generations of creative people.


The style of digital art will evolve very differently - it used to change with general shifts in culture and social focus, but now it will change with algorithms, much like how the Netflix, YouTube and TikTok algorithms change the content being produced by the general public. More people are able to make visuals for whatever project they're working on; this is the invention of the printing press for digital visual media, and it has only just started. The challenge we're now facing is rethinking the definition of creativity and being mindful of the secretive nature of these products - where do they get their training data from? How does the AI label objects and associations?


Allowing it to run without supervision will in turn create a visual language in our society that might encourage bigotry and biases, and limit the true creativity that inspires progress through diversity. Perhaps instead of arguing 'is AI art art?’ (if you think lack of effort implies lack of expression, do think about how Expressionism was received when it was popularised), we need to start thinking about what this tool really means for our societies and cultures.


P.S. I'm currently working on a tear gas-related social and cultural art programme with my company Hidden Keileon, and these American Gothic-inspired tear gas 'photography' visuals are some of the nicest things that came out of Midjourney that I 'made' - consider them a thank you for sitting through the writing. Hope you enjoyed it.


Proofread by Anne Verheij