
Engineers Are Turning Data into Sound


Dr. Genevieve Williams. Source: University of Exeter

Composers, programmers and engineers are combining art and science to create a better understanding of data through sonification.

One example is a two-minute recording that portrays, in music, an image of the Mars sunrise captured by the Opportunity rover on its 5,000th Martian day on the red planet.

Dr. Genevieve Williams, of the University of Exeter, and her colleague, Dr. Domenico Vicinanza, director of the Sound and Game Engineering research group at Anglia Ruskin University, used data sonification techniques to compose the musical representation.

The process involved scanning the image of the sunrise pixel by pixel. The two then assigned each pixel a specific pitch and melody based on the brightness, color and terrain elevation of the Martian landscape.

The music this technique created builds and changes as it moves from the dark sky on one side of the image to the bright sun at the center, then returns to quieter tones as the dark background reemerges on the other side of the image.
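To make the mapping concrete, here is a minimal sketch of that kind of pixel-to-pitch scan in Python, assuming a grayscale image loaded as a NumPy array. Random values stand in for the actual Mars photograph, and the 220-880 Hz range is an illustrative choice, not the researchers' actual mapping:

import numpy as np

rate = 44100                                     # audio sample rate in Hz
image = np.random.rand(100, 200)                 # placeholder image, brightness in [0, 1]
brightness = image.mean(axis=0)                  # one value per column, scanned left to right

tones = []
for b in brightness:
    freq = 220 + b * (880 - 220)                 # brighter pixels -> higher pitch
    t = np.arange(int(rate * 0.05)) / rate       # 50 ms tone per column
    tones.append(0.5 * np.sin(2 * np.pi * freq * t))
melody = np.concatenate(tones)                   # dark sky yields low tones; the bright sun peaks in pitch

Played back, such a waveform rises toward the middle of the image and falls again, just as the composition does.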

The Mars sunrise recording premiered at the SC18 supercomputing conference in Dallas on November 13, 2018. The debut used both conventional speakers and vibrational transducers to create, in effect, a first-person experience of the Martian sunrise by allowing the audience to hear the sound and feel the vibration.

"Image sonification is a really flexible technique to explore science and it can be used in several domains, from studying certain characteristics of planet surfaces and atmospheres, to analyzing weather changes or detecting volcanic eruptions," said Dr. Vicinanza. In health science, for example, it can provide scientists with new methods to analyze the occurrence of certain shapes and colors, which is particularly useful in image diagnostics.

Different Part of the Brain

Margaret Schedel. Source: Stony Brook University

In a 2016 interview on the radio program Science Friday, Margaret Schedel, associate professor of music and director of the Consortium for Digital Arts, Culture and Technology at Stony Brook University, said that listening to data enhances our experience of science by using "a different part of our brain."

Schedel collaborated on a sonification project at Brookhaven National Laboratory's Center for Functional Nanomaterials where, she explained, scientists shoot X-rays at tiny particles. The X-rays are detected as they scatter off the material, and sonification allowed the scientists to hear, for example, the pattern of that scattering.

Listening also allowed researchers to identify when a misalignment occurred, because the sound changed significantly. Sonifying the equipment in real time, Schedel said, would allow the scientists to hear when a specific process is going well and when it is not.

Our ears are good at picking up regular patterns that we identify as pitch, and computers offer a way to add knowledge to data that is turned into sound, Schedel explained, adding, "Sonification is entering [scientists'] vocabulary now, especially in this era of big data. They are just scrambling to find something to help them understand [it all]."

Sound Diagnosis

Lauren Oakes and Nik Sawe set data to music to illustrate changes in the tree population of Alaska's Alexander Archipelago. Brian Foo developed Data-Driven DJ, a playlist built from various datasets that includes: Two Trains, a sonification of income inequality on the New York City subway; Air Play, music created from Beijing air quality data; and Distance From Home, a translation of global refugee movement into song.

In a 2017 study, German researchers applied sound to electrocardiogram (ECG) data, the information used to diagnose cardiac pathologies, and found that, after 10 minutes of training, sonification allowed observers to detect "clinically relevant pathological patterns." While the study was limited in scope, it opened the possibility of further exploration into the use of sonification as an aid in diagnosis and treatment. The research suggests, for example, that combining auditory and visual results is an efficient way to detect abnormal signals.

Translating Information Into Sound

Robert Alexander. Source: University of Michigan

Sonification as a tool to clarify or communicate data is not new. Galileo Galilei (1564-1642) is believed to have used sonification to demonstrate his law of falling bodies. Galileo built a wooden apparatus consisting of an inclined plane, a pendulum, and bells placed at increasing distances along the length of the plane. A ball released at the top would roll down and ring the bells, one bell for every complete oscillation of the pendulum. The bells rang at even time intervals even though the gaps between them grew, showing that the ball covered more ground in each successive interval: distance traveled grows with the square of elapsed time, the signature of uniform acceleration under gravity.
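A few lines of arithmetic show why the bell spacing encodes the law. A minimal sketch, assuming an illustrative constant acceleration along the plane and equal pendulum periods between rings (the specific numbers are hypothetical):

# Under constant acceleration a, distance from the start is d = a*t^2/2,
# so bells rung at equal time intervals T must sit at distances in the
# ratio 1 : 4 : 9 : 16, with ever-wider gaps between neighbors.
a = 2.0   # acceleration along the plane, m/s^2 (illustrative)
T = 0.5   # pendulum period between rings, seconds (illustrative)

positions = [0.5 * a * (n * T) ** 2 for n in range(1, 5)]
gaps = [positions[0]] + [positions[i] - positions[i - 1] for i in range(1, len(positions))]

print(positions)  # [0.25, 1.0, 2.25, 4.0]   -> ratio 1:4:9:16
print(gaps)       # [0.25, 0.75, 1.25, 1.75] -> widening gaps, yet equal ring times

Hearing equal intervals from unequally spaced bells is the squared-time relationship made audible.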

Composer and researcher Robert Alexander, from the University of Michigan's Solar Heliospheric Research Group, also participated in the Science Friday interview and offered this definition of sonification: "Sonification is the translation of information into sound for the purposes of then conveying some new knowledge... [it is] any way in which we can use our ears to more fundamentally understand the world around us."

He gave as examples parameter mapping, taking the brightness of a star and mapping that to a melody (similar to the Mars sunrise piece), and audification, which he described as, "the direct translation of data samples to audio samples." Audification is, he said, essentially like "pushing 'play' on the data plot," listening to it directly and "hearing some of the richness and some of the nuances in these data that we stream down from things like satellites. It gives you a really fundamental understanding that you might not receive from just looking at it with your eyes."
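Alexander's distinction can be made concrete with a short sketch. Below, a hypothetical one-dimensional data series is audified by normalizing it and writing it directly out as a waveform at an audible sample rate; the random samples stand in for a real stream such as satellite telemetry, and the filename and rate are arbitrary choices:

import numpy as np
import wave

samples = np.random.randn(44100 * 2)         # placeholder for a real data series
samples = samples / np.max(np.abs(samples))  # normalize to [-1, 1]
pcm = (samples * 32767).astype(np.int16)     # convert to 16-bit PCM

with wave.open("audified.wav", "wb") as f:
    f.setnchannels(1)      # mono
    f.setsampwidth(2)      # 2 bytes per sample = 16-bit audio
    f.setframerate(44100)  # playback rate; raising it shifts the data up in pitch
    f.writeframes(pcm.tobytes())

Unlike the parameter mapping used for the Mars piece, nothing here is composed: the data samples themselves become the audio samples, "pushing 'play' on the data plot."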

Alexander shared his recordings of the solar wind, the stream of charged particles constantly flowing outward from the sun that creates phenomena like the Northern Lights. Listening to the sound as well as looking at the data collected enhances scientists' ability to forecast space weather.

Hear the complete interview with Robert Alexander and Margaret Schedel along with samples of their work here. Learn more here about the work of Brian Foo, Domenico Vicinanza and Genevieve Williams, and Lauren Oakes and Nik Sawe.

This article first appeared on https://www.globalspec.com.
