Project Listen/JListen

Program and Data Auralization (Sonification) project

[The project is once again active! Yeah!]

Principal Investigator: Aditya P. Mathur
Latest update: Feb 6, 2012

Project History (Last Revision August 25, 2011 (pdf))

Current Graduate Students:  None.
Current Undergraduate Students: Shawn Hsu and Jiangnan Shangguan [Starts Spring 2012]

Download
JListen 1.0
JListen 1.2
JListen User manual [pdf]

Introduction

As the importance of multimedia grows, we envisage increasing use of sound as an output medium. Examples include virtual reality systems, simulations, video games, education for visually-handicapped computer users, and data analysis systems. In most of these applications sound is emitted during the execution of an application when an event occurs or during an activity. Adding sound to such an application requires (a) identifying the locations in the code that are the centers of such events or activities and (b) adding suitable code responsible for emitting sound. The effectiveness of sound as a medium in an application depends, among other factors, on how well (a) and (b) are performed.

The Listen/JListen systems have been designed to help perform these tasks in a friendly environment on a PC, Mac, or workstation. The current version of Listen can be used for auralizing C programs on Sun workstations. We have used Listen to conduct experiments in understanding program behavior, testing and debugging, classroom teaching, and the development of software for the blind.

The JListen Project: The JListen project grew out of the Listen project. The Listen Specification Language (LSL) has been adapted to Java. LSL/Java allows the specification of the aspects of a Java program that are to be auralized. A specification written in LSL/Java is processed by the LSL/Java parser and then input to the JListen parser. JListen takes as input a Java program P that is to be auralized and the processed LSL/Java specifications. It then generates an instrumented version of P. The instrumented P is compiled using a traditional Java compiler. During its execution P is connected to a Media Manager. Execution of the specified aspects of P causes messages to be sent to the Media Manager, which in turn sends appropriate commands to an audio system to generate sounds. The Media Manager allows run-time control of sounds.
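
As a rough illustration of the workflow described above, the fragment below sketches what an instrumented version of P might look like: calls inserted into P send event messages to the Media Manager, which maps each event to a sound. The class name MediaManagerClient, the method playEvent, and the host/port are hypothetical placeholders, not the actual interface or code generated by JListen.

    // Hypothetical sketch only; names and the message format are illustrative,
    // not the actual code emitted by JListen's instrumenter.
    import java.io.PrintWriter;
    import java.net.Socket;

    class MediaManagerClient {
        private final PrintWriter out;

        MediaManagerClient(String host, int port) throws Exception {
            // The instrumented program connects to the Media Manager at start-up.
            Socket socket = new Socket(host, port);
            out = new PrintWriter(socket.getOutputStream(), true);
        }

        // Send a named auralization event; the Media Manager turns it into sound.
        void playEvent(String event) {
            out.println(event);
        }
    }

    public class InstrumentedP {
        public static void main(String[] args) throws Exception {
            MediaManagerClient mm = new MediaManagerClient("localhost", 7000);
            for (int i = 0; i < 10; i++) {
                mm.playEvent("loop.iteration");   // call inserted by the instrumenter
                // ... original loop body of P ...
            }
            mm.playEvent("program.exit");         // call inserted by the instrumenter
        }
    }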

R. Jagadish Prasath and M. C. Gopinath, both graduate students at BITS, Pilani, completed two excellent MS theses on the use of auralization in testing for security. Scroll down this page to download copies of their theses. Please cite their theses appropriately if your work is a follow-on to theirs. Thanks.

JListen is once again undergoing significant changes. Shawn is working on adding dynamic and static data sonification to JListen. Static sonification, as performed by NASA's xSonify, requires the data to be sonified to be available before sonification begins. Dynamic sonification is performed on data while it is being generated. In the case of JListen we assume that the data is generated inside a Java program; the data so generated is sonified. We expect dynamic sonification to be useful for rapidly identifying trends in large-scale simulations in a variety of areas such as fluid dynamics, neural simulations, and weather simulations.
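
A minimal sketch of dynamic data sonification is shown below, assuming the data is produced inside a Java program and mapped to pitch the moment each value is computed. It uses the standard javax.sound.midi API directly and is only meant to convey the idea; it is not the JListen implementation.

    // Minimal dynamic-sonification sketch using the standard Java MIDI API.
    // Illustrative only; not part of JListen.
    import javax.sound.midi.MidiChannel;
    import javax.sound.midi.MidiSystem;
    import javax.sound.midi.Synthesizer;

    public class DynamicSonificationSketch {
        public static void main(String[] args) throws Exception {
            Synthesizer synth = MidiSystem.getSynthesizer();
            synth.open();
            MidiChannel channel = synth.getChannels()[0];

            // Stand-in for a running simulation: each value is sonified as soon
            // as it is generated, rather than after the whole run completes.
            for (int step = 0; step < 50; step++) {
                double value = Math.sin(step / 5.0);           // data being generated
                int pitch = 60 + (int) Math.round(value * 12); // map value to a MIDI note
                channel.noteOn(pitch, 80);
                Thread.sleep(100);                             // let the note sound briefly
                channel.noteOff(pitch);
            }
            synth.close();
        }
    }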

Audio Samples [Click the sample to listen]

C Programs sonified using Listen/C [.au files] [Sonification by: David Boardman]

The above audio files were generated using Listen 1.0 and MIDI equipment. We used Roland's Sound Canvas SC-55 for synthetic sounds.

Java programs sonified using JListen 2.x [.mp3 files] [Sonification by: Jiangnan Shangguan, Feb 1, 2012]

The above audio files were generated using JListen 2.x, under development by Shawn Hsu and Jiangnan Shangguan. It is best to listen to the above samples with the code and the mapping in front of you.

Publications

Code

Kindly send email to apm@purdue.edu when you download any code from this site. Thank you.

Contributors

Similar Projects