Zhizhi Nini work in the field of audiovisual live performance, combining game controllers, sensors, and DIY sensor-based instruments to control music and live generative visuals, researching new ways of communicating on stage.
A sensor-based setup gives the performers the ability to send signals to the system and manipulate sound and visuals in real time, creating a dialogue with each other. The performance is based on communication through technology and communication with technology.
Using sensors and game controllers, two performers create a live dialogue and shared acoustic and visual environments. The sound is produced in Ableton Live, extended with custom-programmed Max for Live patches. All data from the sensors and game controllers, together with sound-analysis data, is sent over the OSC protocol to TouchDesigner, where the visuals are generated live. This network-based OSC communication between the performers lets them control the sonic and visual parts of the performance in real time, keeping the two closely tied to each other.
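As a rough sketch of this kind of OSC routing, the following Python snippet (assuming the python-osc library; the host address, port, and OSC address paths are illustrative placeholders, not the project's actual configuration) shows how controller and sound-analysis values could be forwarded to TouchDesigner:

    # Minimal OSC sender sketch, assuming the python-osc library.
    # Host, port, and address paths are hypothetical placeholders.
    from pythonosc.udp_client import SimpleUDPClient

    TD_HOST, TD_PORT = "192.168.0.10", 7000  # machine running TouchDesigner
    client = SimpleUDPClient(TD_HOST, TD_PORT)

    def send_frame(stick_x: float, stick_y: float, amplitude: float) -> None:
        # Each value travels as its own OSC message; the address path
        # identifies which control or analysis stage it comes from.
        client.send_message("/controller/x", stick_x)
        client.send_message("/controller/y", stick_y)
        client.send_message("/audio/amplitude", amplitude)

    send_frame(0.5, -0.25, 0.8)

On the receiving side, an OSC In CHOP in TouchDesigner listening on the same port exposes these address paths as channels that can drive the generative visuals.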
For the performance, Zhizhi Nini decided to use identical controller setups, creating a form of equality between the two performers in the control domain. Visually, this setup and positioning on stage also provide the project with a sophisticated and holistic visual identity.
With their performance, Zhizhi Nini aim to find new, eclectic ways of communicating with the audience: combining diverse audio and visual elements, playing sarcastically with the topic of algorithms and more complex systems, and questioning whether the performers are controlled by the content they produce or whether they create a new narrative with their audio-visual composition.