AI application / interactive artwork
First presented at Hyvinkää Art Museum / June 2019
Download ADA to your Android device here: https://play.google.com/store/apps/details?id=com.brainsonart.ada
ADA is an artificial intelligence that wants to learn about art and aesthetics. Show ADA something beautiful, interesting or surprising by taking pictures with the application. ADA tells you what it thinks of the picture, and you can try to influence its taste by justifying your own verdict with keywords. ADA's artistic taste is constantly evolving: the more it sees, the more enlightened its reviews become. ADA encourages users to engage in aesthetic dialogue with an AI, raising questions about the differences between human and machine perception.
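As one way to picture this feedback loop, the sketch below shows how user keywords could nudge a taste model over time. The class, the per-descriptor weights and the learning rate are all illustrative assumptions for this sketch, not ADA's actual mechanism.

```python
# Hypothetical sketch of a keyword-driven feedback loop (not ADA's actual
# implementation): user feedback nudges per-feature taste weights, so repeated
# justifications gradually shift which image features the model finds valuable.
from collections import defaultdict

class TasteModel:
    def __init__(self, learning_rate: float = 0.1):
        self.weights = defaultdict(float)  # aesthetic weight per descriptor name
        self.lr = learning_rate

    def score(self, descriptors: dict[str, float]) -> float:
        """Weighted sum of image descriptors = the model's current verdict."""
        return sum(self.weights[name] * value for name, value in descriptors.items())

    def feedback(self, descriptors: dict[str, float], liked: bool):
        """User keywords justify a verdict; shift weights toward that judgment."""
        sign = 1.0 if liked else -1.0
        for name, value in descriptors.items():
            self.weights[name] += self.lr * sign * value

taste = TasteModel()
taste.feedback({"symmetry": 0.8, "colorfulness": 0.3}, liked=True)
print(taste.score({"symmetry": 0.9, "colorfulness": 0.1}))  # now slightly positive
```

The more pictures and justifications the model sees, the further its weights drift from zero, which is one simple reading of a taste that "evolves with experience".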
For the user, ADA is essentially an Android application. In the application, ADA prompts you to show it images using your phone's camera. The image processing is based on a variety of machine vision and machine learning techniques that extract image descriptors, which ADA then ascribes aesthetic value to. ADA tries to describe what features it sees in the picture and what it thinks of them.
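The descriptor extraction could, for instance, resemble the sketch below, built from generic machine vision statistics (brightness, contrast, Hasler-Süsstrunk colorfulness). The descriptor names and formulas are illustrative assumptions; ADA's real feature set is not documented here.

```python
# A minimal sketch of descriptor extraction from a camera image, using generic
# image statistics. Illustrative only; not ADA's actual feature pipeline.
import numpy as np
from PIL import Image

def extract_descriptors(path: str) -> dict[str, float]:
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    gray = rgb.mean(axis=2)
    # Hasler-Suesstrunk colorfulness: spread of the opponent colour channels.
    rg = rgb[..., 0] - rgb[..., 1]
    yb = 0.5 * (rgb[..., 0] + rgb[..., 1]) - rgb[..., 2]
    colorfulness = np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())
    return {
        "brightness": float(gray.mean()),
        "contrast": float(gray.std()),
        "colorfulness": float(colorfulness),
    }

# descriptors = extract_descriptors("photo.jpg")
# verdict = taste.score(descriptors)  # using the TasteModel sketched above
```

Descriptors like these would then be handed to the taste model, which turns them into the verdicts and comments the user sees.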
Machine learning algorithms have gained a foothold as agents producing aesthetic assessments. They influence our everyday aesthetic choices: what music we listen to, what TV series we follow, and so on. But how does an artificial intelligence learn aesthetics? In the artistic project ADA, we propose ways to interpret aesthetics to a machine and reflect on how it could experience art. The project explores the potential and boundaries of art and aesthetics in a post-human era defined by the growing influence of non-human agents and intelligent systems in all walks of life. It is not self-evident what kind of aesthetics these new systems learn or produce.
The project is realized by the interdisciplinary Brains on Art collective. Brains on Art draws its theoretical framework from artistic research, aesthetics, machine learning and cognitive science, and examines how aesthetic experience can be translated into a form a machine can understand. Is it possible for a machine to emulate aesthetic experiences, and how can the machine's way of seeing be opened up to the public?