This research project seeks to develop the next generation of human-controlled electronic music instruments. Human interfaces to musical instruments are about to be revolutionized by the appearance of inexpensive, handheld high-resolution multitouch displays and other sensors (such as those of the iPad and iPhone, Android devices, and Kinect). These offer many possibilities for new paradigms of controlling sound generation, with the potential to give humans previously unheard-of power in real-time control of musical expression.
However, translating the raw multitouch input stream into meaningful music control events is nontrivial. Current software is extremely primitive: either too hard to use, or too limited in expressive power. A well-designed human interface should maximize both the musician's expressive power and the instrument's learnability and playability.
Overcoming these obstacles will require combining AI and pattern recognition techniques to analyze the incoming multitouch input stream and adaptively translate it into meaningful music control events.
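As a purely illustrative sketch of the translation problem, the fragment below maps a single touch event to a MIDI-style note-on message; the types (`TouchEvent`, `NoteOn`), the pitch range, and the position-to-pitch mapping are all assumptions, and a real system would replace the fixed mapping with an adaptive, learned one.

```python
# Hypothetical sketch: translating one raw multitouch event into a
# music control event. All names and mappings here are illustrative.
from dataclasses import dataclass

@dataclass
class TouchEvent:
    x: float         # normalized horizontal position, 0..1
    y: float         # normalized vertical position, 0..1
    pressure: float  # normalized contact pressure, 0..1

@dataclass
class NoteOn:
    pitch: int       # MIDI note number
    velocity: int    # MIDI velocity, 1..127

def touch_to_note(ev: TouchEvent, low: int = 48, high: int = 72) -> NoteOn:
    """Map horizontal position to pitch and pressure to velocity."""
    pitch = low + round(ev.x * (high - low))
    velocity = max(1, min(127, round(ev.pressure * 127)))
    return NoteOn(pitch, velocity)

print(touch_to_note(TouchEvent(x=0.5, y=0.2, pressure=0.8)))
```

Even this trivial mapping already raises the project's core questions: which touch features matter, how they should be scaled, and how the mapping should adapt to the individual player.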
The applicant will help (a) design the human interfaces described above, (b) develop a smartphone/tablet/sensor platform for the interface, and (c) design and experimentally test various AI techniques for analyzing and translating the stream of multitouch events. In doing so, the applicant will:
(1) Understand the strengths and weaknesses of various AI and pattern recognition techniques for analyzing and translating streams of data
(2) Understand what makes user interfaces (especially for musical instrument controllers) ergonomic or not
(3) Understand how the computer's knowledge of musical structure can improve its ability to interpret incoming human control events
(4) Develop excellent software engineering skills, especially with multitouch devices
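Objective (3) above can be made concrete with a small, hedged sketch: if the instrument knows the current scale, it can interpret a noisy pitch estimate from a touch position by snapping it to the nearest scale tone. The choice of C major and the function name `snap_to_scale` are illustrative assumptions, not part of the project design.

```python
# Illustrative sketch of objective (3): using knowledge of musical
# structure (here, an assumed C-major scale) to interpret noisy input.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C-major scale

def snap_to_scale(pitch: float, scale=C_MAJOR) -> int:
    """Return the scale tone (MIDI note number) nearest a raw pitch estimate."""
    candidates = [p for p in range(128) if p % 12 in scale]
    return min(candidates, key=lambda p: abs(p - pitch))

print(snap_to_scale(60.8))  # -> 60: C# (61) is out of scale, C (60) is nearest
```

Richer models of musical structure (key, meter, phrase context) would play the same role as the scale set here, constraining the interpretation of ambiguous control events.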