Multipresence and signal processing (tentative title)
Prof. Michael Cohen, University of Aizu
A user in a virtual or augmented environment, including audition of spatial audio, can fork presence to enjoy a sensation of
distribution. Applications include professional conferencing, utility navigation, and recreational chatting, separately or together,
which provide opportunities for multiplicity of virtual location, with ego-, endo-, or exocentric perspective, modeled with,
delegated by, and controlled through figurative avatars or abstracted icons.
A multipresent user interface can be extended for narrowcasting, using autofocus to disambiguate apparent paradoxes of conflicted
sensation. Proof-of-concept multipresent applications featuring 2- and 3-D interfaces will be demonstrated, including realtime
modulation and spatialized audition of recorded tracks. Soundscape scenes are compiled by cascaded effects, effectively compositing
layers of audition. Currently, source and sink enabling and disabling are simple switches, but more sophisticated operations would
soften the dichotomous state transitions and also cascade filters. For example, a mute need not be a simple "off," but could instead
be a -20 dB attenuation combined with a simultaneous low-pass filter (LPF).
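As an illustration, such a softened mute can be sketched as a cascade of effects: a fixed gain stage followed by a one-pole low-pass filter. This is a minimal sketch, not the demonstrated implementation; the -20 dB attenuation comes from the text above, while the 1 kHz cutoff, the sample rate, and the function name `soft_mute` are assumptions for the example.

```python
import math

def soft_mute(samples, sample_rate, atten_db=-20.0, cutoff_hz=1000.0):
    """Softened mute: attenuate by atten_db, then apply a one-pole LPF.

    A sketch of a non-dichotomous mute as a cascade of two effects.
    The -20 dB default is from the abstract; the 1 kHz cutoff is an
    assumed illustrative parameter.
    """
    gain = 10.0 ** (atten_db / 20.0)  # convert dB to a linear gain factor
    # One-pole low-pass coefficient derived from the cutoff frequency
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (gain * x - y)  # cascade: gain stage, then smoothing
        out.append(y)
    return out
```

Because each stage is a simple per-sample operation, further effects (spatialization, equalization) could be chained in the same way, compositing layers of audition rather than toggling a source hard on or off.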