A comment about the "sound of silence" blog got me thinking, and anyone who knows me will realise that me thinking is a dangerous thing… In general, we are all obsessed by our dominant sense - there is quite a bit of research to suggest that we all have one. In my case, I think my hearing is probably my strongest sense and, although I rely on it every single day, it's my hearing in combination with other senses which enables me to get about and more. As well as hearing distances, my feet pick up changes in texture which I remember as clues to where I am.

Hearing and all its characteristics are incredibly important to me but, like sight, very few of us have perfect hearing, and even those of us who might have it can have days where it doesn't work so well for us. I took pain relief yesterday for a neuralgia condition and, today, I do not have perfect pitch. I'm sure that the concert we go to in a couple of weeks will leave our ears ringing and less effective.

It's incredibly dangerous to say that you know what losing a sense is like because you have tried on a set of simulation spectacles or undertaken some hearing tests. It's only the experience of sensory loss that can really make you understand some of its difficulties, but simulations can give you some appreciation.

A few years ago, I visited a university project in Japan where they were attempting to develop a new form of artificial ear. This development promised to be much more than a cochlear implant; the scientists wanted to create an ear with potential connectivity to the brain. Part of this experiment was to simulate the audio spectrum, and we were invited to observe what had been achieved so far.

The first audio example was human speech speeded up to the point that it could only just be detected by "normal" ears; they used a kind of time-pitch modulation so that the pitch of the speech increases without changing the pauses between words, syllables and so on. I could hear the voice but not what was being said; they wanted to prove that pitch is an important element of what we hear.

The second set of experiments showed the importance of a broad audio spectrum and the implications of losing upper or lower frequency hearing. One example compressed the sound into a very few hertz and, although the fidelity of the sound had been maintained, what was being said, or even the nature of the sound, could not be determined by a "normal" ear.

The third set of experiments showed the importance of a processor, or brain. If you listen to a politics debate where people talk over each other, it soon becomes difficult to hear what is being said as the voices are equally competitive. To make this work for you, be sure not to watch the screen, as those of you who can see lip read without even knowing it. If you hear that debate in real life, your processor of a brain will focus on the person you want to hear, often driven by your bias for a particular political view, or a will to hear a particular strand of argument above anything else. It's this processing element which microphones can't deal with and most experimental ears I have seen fail to manage.

Remembering all of this got me thinking about audio alerts for silent vehicles and the importance of creating a sound which can be detected by the broadest cross section of people. I've already exhausted all of my knowledge on this subject in this blog, and those with a lot more understanding of the characteristics of hearing loss will need to interact with the process to determine suitable sounds for such vehicles.
Thanks to people for the timely reminder to think about such things, and let's hope that the UN committee gets it right for all of us.