Despite the chaos and carnage of three nights of live punk at the Institute of Contemporary Arts, punters would be hard pressed to miss the three pogo-dancing robots in their midst.
The machines, created by a collaboration of artists and scientists, are designed to fall in love with punk music and show their appreciation through dance.
The robot punks take pride of place in the mosh pit at a series of gigs called Neurotic.
Standing 2m tall, padded and dressed in leather, they are no ordinary concert-goers.
Professor McOwan, from Queen Mary University and one of the creators of the robots, said they were built because of his fascination with human-computer interaction.
"I'm a computational neuroscientist and my interest is in trying to build mathematical and computational models for the way the brain processes sensory information, such as visual or auditory information.
"I work out how human beings do that, build a computer model to test how it works and then hopefully, if it works well, you understand more about humans but also you have software for use in robotic systems.
"The idea is to look at the information processing strategies that have taken billions of years to develop through evolution, steal them and put them into computers."
The robots use neural networks, a collection of computer processors that function in a similar way to a simple animal brain.
Neural networks are popular in the field of artificial intelligence because of their ability to recognise patterns from the sensory input of external sources, much like a human brain.
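At its simplest, that kind of pattern recognition can be done by a single artificial neuron. The sketch below trains a toy perceptron to separate two classes of "frequency band energy" profiles; the band values, genre labels and threshold behaviour are invented for illustration and are not the project's actual data or network.

```python
# A toy perceptron -- a single artificial neuron -- trained with the classic
# perceptron learning rule to separate two classes of input vectors.
# The 4-band "energy profiles" and genre labels below are hypothetical.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])  # one weight per frequency band
    b = 0.0                      # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred       # nudge weights only when the guess is wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(x, w, b):
    return "punk" if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else "not punk"

# Hypothetical profiles: punk = energy in the mid/high bands (label 1),
# classical = energy concentrated in the low bands (label 0).
punk      = [[0.2, 0.9, 0.8, 0.7], [0.3, 0.8, 0.9, 0.6]]
classical = [[0.9, 0.4, 0.2, 0.1], [0.8, 0.5, 0.3, 0.2]]
w, b = train_perceptron(punk + classical, [1, 1, 0, 0])

print(classify([0.25, 0.85, 0.85, 0.65], w, b))  # a punk-like profile
```

Real neural networks stack many such neurons in layers, but the principle is the same: repeated exposure to labelled examples gradually tunes the weights until the patterns are recognised.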
The robots have been trained to like punk, explained Professor McOwan.
"The robot brain, for want of a better word, was played lots of punk, reggae, disco and classical and over a period of time the robot has learned to recognise and appreciate the patterns of sound in punk music," he said.
The neural network understands the music in a similar way to a human brain, breaking down the sound into a series of frequency bands.
Programmer Jons Jones Morris said: "Breaking down the sound produces a map of the audio over time which is turned into an image. That image is submitted to one of the neural networks."