Towards a (per)sonal topography of grand piano and electronics
How can I develop a grand piano with live electronics through iterated development loops within the cognitive-technological environment of instrument, music, performance and my poetics?
The instrument I am developing, a grand piano with electronic augmentations, is adapted to cater to my poetics. This adaptation of the instrument will change the way I compose. The change in composition will change the music. The change in music will change my performances. The change in performative needs will change the instrument, because it needs to do different things. This change in the instrument will show me other poetic perspectives and change my ideas. The change of ideas demands another music and another instrument, because the instrument should cater to my poetics. And so it goes… These are the development loops I am talking about.
I have made an augmented grand piano using various music technologies. I call the instrument the HyPer(sonal) Piano, a name derived from the suspected interagency between the extended instrument (HyPer), the personal (my poetics) and the sonal result (music and sound). I use old analogue guitar pedals and my own computer programming side by side, processing the original piano sound. I also take control signals from the piano keys to drive different sound processes. The sound output of the instrument decides the colors, patterns and density on a 1x3 meter LED light carpet attached to the grand piano. I sing, yet the sound of my voice is heavily processed, and that processing is decided by what I am playing on the keys. All sound sources and control signal sources are interconnected, allowing for complex and sometimes incomprehensible situations in the instrument's mechanisms.
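To illustrate the kind of routing described above (key signals driving LED color and voice processing), here is a minimal sketch in Python. The mapping functions, parameter names and scaling are all hypothetical assumptions for illustration; the actual routing inside the HyPer(sonal) Piano is not specified in this text.

```python
import colorsys

def key_to_led(note: int, velocity: int) -> tuple:
    """Hypothetical mapping: a MIDI note (0-127) picks an LED hue by pitch
    class, and key velocity scales the brightness. Returns an RGB triple."""
    hue = (note % 12) / 12.0          # pitch class -> position on the color wheel
    brightness = velocity / 127.0     # louder key strike -> brighter LED
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, brightness)
    return int(r * 255), int(g * 255), int(b * 255)

def keys_to_voice_fx(notes_held: list) -> dict:
    """Hypothetical mapping: the keys currently held down set the voice
    processing. More keys -> deeper effect; wider chord -> wider pitch shift."""
    if not notes_held:
        return {"wet_mix": 0.0, "pitch_shift_range": 0.0}
    density = min(len(notes_held) / 10.0, 1.0)          # cap at full wet
    spread = max(notes_held) - min(notes_held)           # chord width in semitones
    return {"wet_mix": density, "pitch_shift_range": spread / 12.0}
```

For example, striking middle C at full velocity yields a fully saturated red, while holding a C major triad sets a light effect depth and roughly a half-octave pitch-shift range. In a real setup these values would be sent on to the LED controller and the audio engine.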