Invented by Glen A. Norris, Philip Scott Lyren, Eight Khz LLC
The market for this technology is driven by the increasing demand for immersive experiences. Consumers are looking for ways to enhance their entertainment experiences, and binaural sound behind an image displayed on an electronic device is one way to achieve this. This technology is particularly popular in the gaming and virtual reality industries, where it is used to create a more realistic and immersive experience for users.
The market for providing binaural sound behind an image displayed using an electronic device is also being driven by the increasing availability of high-quality audio equipment. As more people invest in high-quality headphones and speakers, the demand for high-quality audio experiences is also increasing. This has led to a growing number of companies developing binaural sound technology that can be used with electronic devices.
One of the key advantages of binaural sound behind an image displayed using an electronic device is that it can be used in a wide range of applications. For example, it can be used in gaming, virtual reality, and even in education and training. This versatility makes it an attractive option for companies looking to develop new products and services.
Another advantage of binaural sound behind an image displayed using an electronic device is that it can be used to create a more personalized experience for users. By capturing sound from different directions, the technology can create a unique sound experience that is tailored to the individual user. This can help to increase engagement and create a more memorable experience for users.
In conclusion, the market for providing binaural sound behind an image displayed using an electronic device is growing rapidly. This technology is being driven by the increasing demand for immersive experiences, the availability of high-quality audio equipment, and its versatility in a wide range of applications. As more companies develop new products and services that incorporate this technology, we can expect to see even more growth in this market in the coming years.
The Eight Khz LLC invention works as follows:
An electronic device displays an image with sound. A processor converts the sound from the image into binaural sound for the user. The binaural sound uses a sound localization point (SLP): a coordinate location behind the image, while the image is at a near-field distance from the user's head.
Background for Providing binaural sound behind an image displayed using an electronic device
Three-dimensional (3D) sound localization gives people a wealth of new technological avenues to communicate not only with one another but also with electronic devices, programs, and processes.
As technology advances, there will be challenges in how sound localization fits into the modern age. Example embodiments provide solutions to some of these problems and aid technological advances in 3D sound localization methods and apparatus.
Methods and apparatus improve the user experience in situations where a listener localizes electronically generated binaural sound. The sound is processed or convolved to a location behind or near the source of the sound, so the listener perceives the sound as coming from the source.
Other examples of embodiments are discussed in this article.
Example embodiments include apparatus and methods that enhance the user experience during a phone call or any other type of electronic communication.
One problem with electronically generated binaural sound, or three-dimensional (3D) sound rendering, is that listeners can have difficulty determining the location from which the sound originates. Binaural sound that originates less than one meter from the listener is considered 'near-field', and it may be harder for a listener to determine the location or direction of near-field binaural sound. Listeners may be more successful at externally localizing binaural sound that originates at least one meter away (considered 'far-field').
Head-related transfer functions (HRTFs) describe how sound waves change as they interact with the head, torso, and pinnae. In the far-field range, HRTFs change little with distance; in the near-field range, however, they can differ significantly. These variations depend on the frequency of the sound (e.g., sound below 500 Hz), and interaural level differences (ILDs) are also frequency-dependent. Modeling near-field virtual auditory space with HRTFs can therefore be difficult.
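The role of an HRTF, or its time-domain counterpart the head-related impulse response (HRIR), can be sketched in code. The following is a minimal illustrative example, not taken from the patent: a mono signal is convolved with a made-up left/right HRIR pair, producing the two ear signals of a binaural rendering.

```python
# Illustrative sketch only: the HRIR values below are invented placeholders,
# not measured data. Real HRIRs are hundreds of samples long and measured
# per direction.

def convolve(signal, ir):
    """Direct-form FIR convolution of a signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def render_binaural(mono, hrir_left, hrir_right):
    """Produce a (left, right) channel pair from a mono signal."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Toy example: a source to the listener's right arrives at the right ear
# sooner and louder (interaural time and level differences).
hrir_left = [0.0, 0.0, 0.4]   # delayed and attenuated at the far ear
hrir_right = [0.8, 0.1]       # earlier and louder at the near ear
left, right = render_binaural([1.0, 0.0, 0.0, 0.0], hrir_left, hrir_right)
```

The differing delay and amplitude between the two output channels are exactly the interaural cues the surrounding text describes.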
These issues are magnified when electronic devices attempt to convolve sound so that the listener hears the binaural sound in the near-field range. The listener may perceive the sound as coming from a different location than intended, or may not be able to locate the source of the sound at all. Additionally, convolving sound with near-field HRTFs can be computationally complex and processor-intensive.
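One reason near-field rendering differs from far-field rendering can be illustrated numerically. The sketch below assumes a crude model with the ears 9 cm from the head center and a simple 1/r level falloff, ignoring frequency-dependent head shadowing; it is an illustration of the principle, not a rendering algorithm from the patent. It shows that the interaural level difference for a lateral source is much larger in the near field, because the two ear distances differ by a larger fraction of the source distance.

```python
import math

HEAD_RADIUS = 0.09  # meters, an assumed approximate half-head width

def ear_gains(source_x, source_y):
    """1/r gains at the two ears for a source at (x, y), with the
    left ear at (-HEAD_RADIUS, 0) and the right ear at (+HEAD_RADIUS, 0)."""
    left = 1.0 / math.hypot(source_x + HEAD_RADIUS, source_y)
    right = 1.0 / math.hypot(source_x - HEAD_RADIUS, source_y)
    return left, right

def ild_db(source_x, source_y):
    """Interaural level difference in decibels (right ear minus left ear)."""
    left, right = ear_gains(source_x, source_y)
    return 20 * math.log10(right / left)

# A source directly to the listener's right: the level difference is large
# at 0.3 m (near field) but small at 3 m (far field).
near = ild_db(0.3, 0.0)  # roughly 5 dB with these assumptions
far = ild_db(3.0, 0.0)   # roughly 0.5 dB with these assumptions
```

This distance dependence is why far-field HRTFs can be treated as roughly distance-independent while near-field HRTFs cannot.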
Electronically generated binaural sound can also be problematic when the location from which the sound originates is not the same as the location of an image. A computer or electronic system may process a stream of sound, such as a voice, from a telephone call, video game, computer software application, or social VR (virtual or augmented reality) software program. The listener then hears the stream of sound as if it comes from a specific point in space, where an image is located within the listener's physical or virtual environment. If the listener cannot localize the processed or convolved sound at the image's location, the user experience suffers. This can happen, for instance, if the image is not visible or not located near the listener, or if the image is obscured by something physical or virtual. The source of the sound may be visible to the listener (e.g., a real object or virtual object), or there may be no visual cues or images at the source. In some cases, the location at which the listener perceives the origin of the sound is not consistent with the image or intended location. Example embodiments solve these problems to enhance the user experience with binaural sound in virtual or physical space.
Consider an example where a listener wears a head-mounted display or another wearable electronic device. The source of the sound is an image of the caller, visible during a phone call. The caller's voice, however, is not heard as originating from the image on the display; it comes from another location. If the voice of the caller does not originate at the image, the listener does not have a convincing user experience.
Binaural sound and 3D audio are more realistic when the location from which the sound originates matches or aligns with an object or image. If the sound does not appear to come from the visible source, the realism and user experience are significantly reduced.
The problem of aligning binaural sound with a visible sound source is magnified when the source is within the near field of the listener. Even if the sound is convolved with near-field HRTFs so that it appears to come from the visible source, the listener might not localize the convolved sound accurately, because near-field sound externalizes less reliably than sound convolved with far-field HRTFs.
These problems can also arise when the source is not visible but the listener knows its location. A listener may see and talk with someone, then glance away temporarily in another direction. At that moment the listener does not see the other person but knows the person's exact location, and expects the voice of the other person to come from that location.
Consider convolving binaural sound to a visible location (e.g., convolving sound to the coordinates of an avatar, virtual sound source (VSS), or image). Sometimes it is difficult, undesirable, or impossible to convolve the sound to the visually perceived location. The electronic device that convolves the sound may not be able to select HRTFs whose coordinates match the coordinates of the sound source as presented to the listener: such HRTFs might not exist, and selecting or generating them can be time-consuming or have other disadvantages. In these instances, where should the sound be localized, and which HRTFs, if any, should be chosen?
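One common workaround when no measured HRTF pair exists at the exact source coordinates is to fall back on the nearest measured direction. The snippet below is an illustrative sketch of that nearest-neighbor selection; the coordinate grid and distance metric are assumptions for the example, not details taken from the patent.

```python
import math

def angular_distance(a, b):
    """Rough distance between two (azimuth, elevation) points in degrees,
    wrapping azimuth around 360."""
    d_az = min(abs(a[0] - b[0]), 360 - abs(a[0] - b[0]))
    d_el = abs(a[1] - b[1])
    return math.hypot(d_az, d_el)

def nearest_hrtf(target, measured):
    """Return the measured coordinate closest to the target direction."""
    return min(measured, key=lambda m: angular_distance(target, m))

# A hypothetical sparse measurement grid of (azimuth, elevation) pairs.
grid = [(0, 0), (30, 0), (60, 0), (330, 0), (0, 30)]

print(nearest_hrtf((25, 5), grid))   # → (30, 0)
print(nearest_hrtf((340, 0), grid))  # → (330, 0)
```

A system could then convolve the sound with the HRIR pair measured at the returned direction, accepting a small angular error in exchange for avoiding HRTF synthesis.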
Example embodiments resolve these problems and enhance the user experience for a listener externally localizing binaural sound.
The electronic device that displays the source of the sound may not include speakers, or it may contain speakers but not provide the sound through them.
Instead, the listener might hear the sound from a portable or handheld electronic device, such as headphones, earphones, earbuds, or a device that uses bone conduction to provide sound. These can be wired or wireless devices, such as wireless headphones or earphones, electronic glasses, a head-mounted display (HMD) or optical head-mounted display (OHMD) with earphones or earbuds, or a smartphone paired with wireless earphones or earbuds.
Electronic devices in example embodiments do not have to emit sound to a user in a traditional manner. The electronic device can be a smartphone, computer, or television whose speakers are muted, decoupled, or turned off. Even so, localizing sound at such a device can be very useful and convenient: binaural sound, for example, localizes at an image on or through an electronic device that is not providing sound in a conventional way.
Though sound may not emanate from the speakers of an electronic device in a traditional way, some embodiments process sound so that it appears to emanate or originate from a particular source or location (e.g., at the electronic device, behind it, in front of it, or to one side). The listener perceives the binaural sound as emanating from, or localized at, the sound source even though no sound travels through the air from that location. Instead, the listener hears the convolved binaural sound through earphones, earbuds, headphones, or another electronic device.
One or more processors, such as a digital signal processor (DSP), process or convolve the sound to a location other than the location of the source. A DSP, for example, processes the sound to localize it to a location that is not the same as the sound source's coordinate location, yet the listener still perceives the sound as coming from the source. The listener hears the sound originate at a sound localization point (SLP) at or near the sound source. The HRTF pair with which the sound is convolved influences the SLP the listener experiences, but does not determine it; the SLP can also be influenced by a visual image that the listener associates with the auditory event. For example, a dog-barking sound is convolved to a point three meters from a listener and stored in a file. The sound file is played to a blindfolded listener, who externalizes the binaural sound and estimates that a dog is about three meters away; the SLP of the barking sound for this listener is at three meters. A listener with no blindfold hears the same sound file at the same convolved distance, but also sees an image of a barking dog two meters away. This listener associates the image with the barking sound and localizes the sound to the image (at two meters). The SLP for the sighted listener is at two meters, while the SLP for the blindfolded listener is at three meters. Thus, a sound convolved with a single pair of HRTFs can externalize at different SLPs, depending on the situation and the listener.
These are just a few examples of the methods and apparatus that solve problems related to binaural sound. When the sound source is within a reasonable distance of the listener, example embodiments can improve realism and the user experience. Example embodiments apply to both visible and non-visible sound sources.