Along with writing an article on my project etherSound, I am also reworking the control program and the csound orchestra that synthesizes the incoming messages. The sound synthesis for the first version, presented and performed in August 2003, was deliberately kept simple for a number of reasons:
- etherSound was (and still is) an experiment and I wanted the triggered sounds to be recognizable and plain.
- the acoustics of the performance space were very lively, to say the least.
- severe limitations on processing power allowed me only a few voices per message. I am using formant synthesis through the fof unit generator in csound, which can be quite demanding in real-time use.
For the second performance in May 2004, the performance space and the processing power were less limited. Instead I ran into an annoying bug that forced me to use the original version for that performance as well.
In December I am doing a recording of a performance of the messages sent to etherSound during the May 2004 concert, with improvisations by myself and drummer/percussionist Peter Nilsson. Since I am limited neither by processing power (real time is not necessary) nor by acoustics, I intend to develop and extend both the timbres and the text-to-sound translation. In the computer part I have two distinct instruments, each with its own distinct expression. What I am trying now is to map them to the letters in each message according to the table. Rhythm and timbre are derived from other parameters, and octave placement in voice A is derived from the number of occurrences of each letter.
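As a rough illustration of this kind of text-to-sound mapping, here is a minimal Python sketch. The actual mapping table is not reproduced here, so the split between the two instruments (vowels to voice A, consonants to voice B) and the octave formula are hypothetical stand-ins, not the mapping used in etherSound; the point is only the shape of the translation: each letter is assigned to a voice, and for voice A an octave is derived from how many times that letter occurs in the message.

```python
from collections import Counter

# Hypothetical split between the two instruments: the real mapping table
# is not shown in the text, so for illustration vowels go to voice A and
# consonants to voice B.
VOICE_A_LETTERS = set("aeiouy")

def map_message(message: str):
    """Assign each letter to a voice and, for voice A, derive an octave
    from the number of occurrences of that letter in the message."""
    letters = [c for c in message.lower() if c.isalpha()]
    counts = Counter(letters)
    events = []
    for c in letters:
        voice = "A" if c in VOICE_A_LETTERS else "B"
        # Octave placement (voice A only): more frequent letters sit in
        # higher octaves, wrapped into an arbitrary playable range 4-8.
        octave = 4 + (counts[c] - 1) % 5 if voice == "A" else None
        events.append((c, voice, octave))
    return events

events = map_message("etherSound")
```

Each resulting (letter, voice, octave) event could then be turned into a csound score line for the corresponding instrument.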