Giuseppe Lo Schiavo is an award-winning visual artist based between London and Milan whose current research aims to create a bridge between art and science. Working with AI and machine learning, virtual reality, infrared systems, and microorganisms in the lab, the artist often focuses on opposing elements: creation and destruction, past and future, analog and digital, real and virtual.
“Onirica” comes from the Greek word “ὄνειρος” (óneiros) that means “dreamlike.” What was your thought process behind the concept of your new NFT drop?
I describe Onirica as a trip inside the mind. I was inspired by recent developments in a scientific field called functional neuroimaging, where scientists have discovered ways to reconstruct an image a person sees or dreams by analysing that person’s brain activity. These reconstructions are produced from brain signals recorded by devices such as fMRI (functional magnetic resonance imaging) and EEG (electroencephalography), or by implanted electrodes such as those developed by Neuralink, the company founded by Elon Musk.
Working with scientists is not new for my practice, and for this specific project I had the chance to meet with Professor Yukiyasu Kamitani from Kyoto University who shared his views regarding developments with neuroimaging.

We are still at a very primitive stage of this technology, but we can already reconstruct visual imagery from a brain scan without any external stimulation, even while the person is sleeping. The quality of the reconstructed visual content will improve with new algorithms and new brain measurement methods. We are now trying to use implanted electrodes in neurosurgical patients to give them real-time feedback of reconstructed images online.
Yukiyasu Kamitani
Can you explain how you use AI to create the music and the lyrics of the video?
I used an artificial intelligence image-to-text model to create audio captions of the video scenes.
The AI struggles to identify complex and artistic compositions, but using the generated caption—even if it’s not accurate—as the audio description of the scene introduces a new layer of creativity to the artwork and underlines the limitations of the current stage of the technology.
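The workflow he describes can be sketched in a few lines of Python. This is a hypothetical illustration, not the artist’s actual code: `caption_model` stands in for any image-to-text model (for example, a captioning model served through an API or a local library), and the function and variable names are invented for clarity.

```python
def caption_scenes(frames, caption_model):
    """Return one text caption per video frame.

    `caption_model` is any callable that maps a frame to a caption string.
    The raw caption is kept even when it misreads the scene: as described
    above, the model's inaccuracies become part of the artwork.
    """
    captions = []
    for frame in frames:
        text = caption_model(frame)
        captions.append(text)
    return captions
```

The captions produced this way would then be converted to audio (for instance with a text-to-speech tool) to narrate each scene, error and all.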
For the creation of the music, I have experimented with Magenta Studio, an open-source software developed by Google that uses cutting-edge machine learning techniques for music generation. In my opinion, the future of music composition might reside there.
On your website, you say that you aim to create a bridge between the…
Read More: editorial.superrare.com