Representing Data with Sound (sonification)

Written by: Ranti Junus

Primary Source: Digital Scholarship Collaborative Sandbox

They say a picture is worth a thousand words, and for data it generally is: a good visualization conveys information that might otherwise take several paragraphs to explain. Yet this technique privileges users who have sight. What techniques can we use for users who rely on screen readers to access information?

I recently discovered[1] a project by University of Minnesota undergraduate Daniel Crawford and Geography professor Scott St. George that uses music composition to represent climate change data.

It’s really cool listening to the combination of sounds from four different instruments, each representing climate change in a different geographic area.

Intrigued by this project, I went to YouTube and did a search on “data sonification.” There are quite a number of results, so I will only point to two of them:

Great Lakes Data Sonification by Chris Symons, an undergraduate student from Michigan State University (OK, he’s an MSU student. We gotta pitch his project!)

The sound of the Higgs boson: two plots from the Higgs discovery seminar at the European Organization for Nuclear Research (CERN), transformed into music by Piotr Traczyk, a CERN physicist and a metal dude to boot.

Sonification itself is not a new thing. It has long been used to represent things like a patient’s heartbeat in the hospital room or the static of a radiation monitoring device (the higher the pitch, the higher the contamination level). The projects mentioned above, however, take this approach to another level: instead of just using a flat tone, the creators mapped each data point to a specific tone.
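To make the idea concrete, here is a minimal sketch (my own illustration, not code from any of the projects above) of how data points might be mapped to specific tones. It scales each value onto the notes of a two-octave C-major pentatonic scale, expressed as MIDI note numbers, so larger values come out as higher notes:

```python
# A hypothetical data-to-tone mapping: each value is scaled onto
# the notes of a C-major pentatonic scale (as MIDI note numbers),
# so bigger values sound higher.

# Two octaves of C-major pentatonic, starting at middle C (MIDI 60).
SCALE = [60, 62, 64, 67, 69, 72, 74, 76, 79, 81]

def sonify(values):
    """Map each data point to a note in SCALE by linear scaling."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1  # avoid divide-by-zero for constant data
    notes = []
    for v in values:
        idx = round((v - lo) / span * (len(SCALE) - 1))
        notes.append(SCALE[idx])
    return notes

# Example: a made-up series of yearly temperature anomalies.
print(sonify([0.1, 0.3, 0.2, 0.6, 0.9]))  # → [60, 64, 62, 74, 81]
```

Feeding the resulting note numbers to any MIDI synthesizer would turn the data series into a melody; using a musical scale rather than raw frequencies is one common way to keep the result listenable.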

In the many search results I also found examples of data sonification made specifically for blind users.

iSonic: Interactive Data Sonification for Blind Users, developed by Haixia Zhao at the Human-Computer Interaction Lab at the University of Maryland

The main principle of this sonification is similar to the others, with a slightly more complex approach. Creating sound from the population dataset itself is fairly straightforward. Conveying additional information, however, such as the name of a state or the age group of its population, requires reading extra fields from the dataset. The program also responds to user interaction, for example when the user moves from one state to another or touches a “no-land” area. Watch the video to see how the software works.

At this point, I don’t know whether there are established best practices for data sonification. In the examples above, the conversion of each data point into a sound follows a simple convention: lower number = lower pitch, higher number = higher pitch. Chris Symons notes in his video that deciding how to represent the data is not a trivial matter; it essentially boils down to “[w]hat is the message that you’re trying to convey.” I think that’s a pretty darn good starting point.
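That lower-number-equals-lower-pitch convention can be written down in a few lines. This is a hypothetical illustration (the frequency range and function name are my own choices, not drawn from any of the projects above), mapping each value linearly onto a two-octave range from 220 Hz to 880 Hz:

```python
# A hypothetical illustration of the "lower number = lower pitch"
# convention: rescale each data value linearly into an audible
# frequency range (220 Hz to 880 Hz here, two octaves from A3).

def value_to_freq(value, vmin, vmax, fmin=220.0, fmax=880.0):
    """Linearly rescale a data value into a frequency in hertz."""
    frac = (value - vmin) / (vmax - vmin)
    return fmin + frac * (fmax - fmin)

data = [2, 5, 9]
freqs = [value_to_freq(v, min(data), max(data)) for v in data]
print([round(f, 1) for f in freqs])  # → [220.0, 502.9, 880.0]
```

The real design decision, as Symons suggests, is everything around this line of arithmetic: which variable to map, over what range, at what tempo, and with what timbre, all in service of the message you’re trying to convey.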

Ranti Junus

[1] Thanks to the blog post by William Denton, a Web Librarian at York University in Toronto, Canada.

Ranti Junus
Ranti Junus is the Systems Librarian for Electronic Resources, supporting the design and organization of access to library materials as well as technical and access issues related to electronic services and resources, including purchased databases, the online catalog, and other digital resources. She is also responsible for assessing the library’s web presence and electronic resources for accessibility issues, serves as the library liaison for the MSU Museum Studies program, and is a subject librarian for the Library & Information Science collections. She is interested in usability & accessibility (especially for persons with disabilities), issues in technology & society, open source systems, digital asset management, linked data & the semantic web, and digital humanities. In her spare time, she listens to prog-rock, blues, jazz, and classical music.