Invented by Dennis Fink, Crestron Electronics Inc
A hybrid network architecture utilizes a combination of centralized and distributed processing. This means that some processing tasks are handled by a centralized server, while others are handled by individual nodes distributed throughout the network. This approach allows for greater flexibility in audio processing applications, as processing tasks can be distributed across the network based on the specific needs of the application.
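To make the idea concrete, here is a minimal Python sketch of how a hybrid system might decide where a given processing task runs; the Task class, the load values, and the placement rule are illustrative assumptions rather than any vendor's actual scheduling logic.

```python
# Minimal sketch of hybrid task placement: latency-critical tasks stay on the
# local node, heavier non-real-time tasks go to the central server. All names
# here (Task, place_task, the load figures) are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    latency_critical: bool   # must run close to the audio I/O
    cpu_cost: float          # rough relative cost estimate

def place_task(task: Task, node_load: float, server_load: float) -> str:
    """Decide where a processing task should run in a hybrid network."""
    if task.latency_critical:
        return "node"                      # keep real-time work at the edge
    # otherwise balance by current load
    return "server" if server_load <= node_load else "node"

if __name__ == "__main__":
    tasks = [
        Task("input gain", latency_critical=True, cpu_cost=0.1),
        Task("room EQ analysis", latency_critical=False, cpu_cost=3.0),
        Task("echo cancellation", latency_critical=True, cpu_cost=1.5),
    ]
    for t in tasks:
        print(t.name, "->", place_task(t, node_load=0.4, server_load=0.7))
```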
One of the key benefits of a hybrid network architecture is scalability. As the number of nodes in the network increases, the processing power of the system also increases. This allows for greater processing capabilities without the need for expensive hardware upgrades.
Another benefit of a hybrid network architecture is flexibility. With a centralized processing approach, all processing tasks are handled by a single server. This can lead to bottlenecks and performance issues if the server becomes overloaded. With a hybrid approach, processing tasks can be distributed across the network, allowing for greater flexibility in managing processing loads.
The market for audio DSPs utilizing a hybrid network architecture is growing rapidly. This technology is being used in a wide range of applications, including live sound reinforcement, recording studios, and broadcast facilities. In these applications, the ability to distribute processing tasks across the network is critical for achieving high-quality audio processing.
One of the key players in the market for audio DSPs utilizing a hybrid network architecture is QSC. The company’s Q-SYS platform is a fully integrated audio, video, and control solution that utilizes a hybrid network architecture. This platform is designed to provide maximum flexibility and scalability in audio processing applications.
In conclusion, the market for audio DSPs utilizing a hybrid network architecture is growing rapidly. This technology offers a number of benefits, including scalability and flexibility, making it ideal for a wide range of audio processing applications. As the demand for high-quality audio processing continues to grow, we can expect to see continued growth in this market in the years to come.
The Crestron Electronics Inc. invention works as follows.
A system and method for processing digital audio signals using audio processing software on one or more electronic devices within a computer system. The system includes a digitizer that digitizes a received audio signal and a processor that performs a plurality of audio processing operations on the digitized signal. Each of these audio processing operations has at least one programmable parameter. Further, the audio processing objects are organized into a channel strip, which processes the digitized audio for a specific received audio signal.
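As a rough illustration of the channel-strip arrangement described in this summary, the following Python sketch models audio objects with programmable parameters chained inside a channel strip; the class names and parameters are hypothetical, not taken from the patent itself.

```python
# Hedged sketch of the channel-strip idea: each processing object exposes at
# least one programmable parameter, and a channel strip chains the objects
# for one received audio signal. Names and defaults are illustrative only.
class AudioObject:
    def __init__(self, name, **params):
        self.name = name
        self.params = params          # programmable parameters

    def set_param(self, key, value):
        self.params[key] = value

    def process(self, samples):
        # placeholder pass-through; a real object would apply DSP here
        return samples

class ChannelStrip:
    """Processes the digitized audio for one specific input signal."""
    def __init__(self, objects):
        self.objects = objects

    def process(self, samples):
        for obj in self.objects:
            samples = obj.process(samples)
        return samples

strip = ChannelStrip([
    AudioObject("gain", gain_db=0.0),
    AudioObject("parametric_eq", freq_hz=1000.0, q=0.7, boost_db=0.0),
    AudioObject("gate", threshold_db=-60.0),
])
strip.objects[0].set_param("gain_db", -3.0)   # adjust a programmable parameter
```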
Background for Audio digital signal processor utilizing a hybrid network architecture
Technical Field
The embodiments discussed herein pertain generally to digital signal processing, and more specifically to systems and methods for audio digital signal processors (DSPs) that include a hybrid network architecture to handle graphical user interfaces and other DSP functionality.
Background Art
FIG. 1 illustrates a simplified block diagram of a digital signal processor (DSP) system 100. DSP system 100 comprises analog-to-digital converter (ADC) 102, digital processor 104, digital-to-analog converter (DAC) 106, amplifier 108, and speaker 110. With the exception of the configuration and programming of digital processor 104, each of these components (ADC 102, DAC 106, amplifier 108, and speaker 110) is well known to those skilled in the art. DSP system 100 processes audio input signals, usually on two or more input channels, and then outputs the desired effect. DSP 104 is a specialized microprocessor, programmable gate array, or other processor-based circuitry that has been optimized for digital signal processing. DSPs 104 operate mostly in the digital realm, but they can also receive analog audio signals, digitize them, and then output either an analog or a digital signal. DSPs 104 allow users to create filters (band-pass, high-pass, low-pass, and notch filters), equalizers that attenuate and amplify different frequencies, mixers, matrices, and other components.
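The following Python sketch is a generic illustration of that signal path: digitized samples from the ADC are run through a simple low-pass filter and a gain stage, standing in for the kinds of objects a DSP 104 can implement. The filter is a textbook one-pole design, not anything specific to the patent.

```python
# Illustrative sketch of the FIG. 1 signal path: digitized samples are
# processed (one-pole low-pass filter plus gain) before going back to a DAC.
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate_hz):
    """Very simple low-pass filter of the kind a DSP object might implement."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate_hz)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def apply_gain(samples, gain_db):
    g = 10.0 ** (gain_db / 20.0)
    return [g * x for x in samples]

# "ADC": pretend these are digitized input samples
digitized = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
filtered = one_pole_lowpass(digitized, cutoff_hz=2000, sample_rate_hz=48000)
processed = apply_gain(filtered, gain_db=-6.0)
# "DAC": processed would now be converted back to analog audio
```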
Presently available DSPs 104 are often difficult to create and difficult to use. All of the DSP functions mentioned above are most commonly implemented through programming. In some cases field-programmable gate arrays (FPGAs) can be used to simplify the task, while other implementations use circuits that incorporate microprocessors, and different programming languages are used to program different objects and functions. Application-specific integrated circuits (ASICs), processors designed for a particular use, are also becoming more popular. Advanced user interfaces have made DSPs 104 easier to use and have simplified the programming of objects and functions, so current DSPs 104 are significantly more user-friendly than their predecessors; modern DSPs 104 include advanced user interfaces and system architectures.
There are currently two types of commercially available DSP architectures: open and fixed. Computer architecture is a specialized subset of computer engineering, and the architecture of DSPs 104 is in turn a subset of that. Computer architecture describes the functionality, organization, and implementation of computer systems (or, in this case, of DSP system 100). Computer architecture is, as with all architecture, the art of determining the needs of the users and designing to meet those needs within the constraints of economic and technological resources.
Both types of DSP architecture are in use, but each has drawbacks.
A fixed architecture DSP, although simple to use, is limited in its versatility. FIG. 2 shows an interconnection block diagram for fixed architecture DSP 104a. Dropdown menus can be used to modify the parameters of any object 202 of DSP 104a, but the signal flow between the different objects is fixed, meaning the user cannot change the order or arrangement of the processing blocks. Block 202 refers to a DSP object, such as a filter, amplifier, delay, echo, or equalizer. Every object can have one or several functions; an equalizer object, for example, can contain a graphic equalizer and a parametric equalizer. A major characteristic of fixed architecture DSP 104a is that function parameters can be modified and saved without recompiling the DSP software processing code; the program only needs to be saved. If desired, the reprogrammed DSP program may be copied and used in other applications.
In FIG. 2, DSP 104a consists of a first block 202a (gain), a second block 202b (parametric equalizer, PEQ), and a third block 202c (gate, GTE). FIG. 2 shows that the input digital signals are received and processed by gain block 202a, the output of gain block 202a is fed to PEQ block 202b, and the output of PEQ block 202b is then fed to GTE block 202c. Gain block 202a has the parameter gain, which can range from a negative gain (i.e., attenuation) to a positive gain (e.g., from about −20 up to about +20 of gain).
As those skilled in the art will know, signal flow in fixed architecture DSP 104a is usually characterized by constant latency, meaning the delay through different functional implementations will generally be approximately the same. The 'program' does not need to be compiled; the user can implement it directly in the UI. However, the fixed DSP 104a architecture requires that users manually set the DSP parameters for each signal path and input. In the fixed DSP 104a architecture, blocks 202 are pre-arranged in a prescribed way along the digital audio flow path, so there is little or no freedom in how blocks 202 are arranged or in the path the digital audio signals follow. The parameters of each block 202 can be specified by the user (cutoff frequencies for LPFs, gain, etc.). What is lost in flexibility, however, is gained in productivity and workflow: fixed DSP 104a has known processing times and delays. ClearOne, Polycom, and Extron are just a few of the manufacturers that sell fixed architecture DSPs.
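A small sketch of the fixed-architecture behavior may help: the processing order is hard-coded, and the user can only change stored parameter values, which take effect without any recompilation. All names and values below are assumed for illustration.

```python
# Sketch of fixed-architecture behavior: the processing order
# (gain -> PEQ -> gate) is hard-wired; only parameters can change,
# and no recompile step is needed. Names/values are illustrative.
FIXED_CHAIN = ("gain", "peq", "gate")      # order cannot be changed by the user

parameters = {
    "gain": {"gain_db": 0.0},              # adjustable, e.g. about -20 to +20
    "peq":  {"freq_hz": 1000.0, "q": 1.0, "boost_db": 0.0},
    "gate": {"threshold_db": -50.0},
}

def set_parameter(block, key, value):
    """Editing a dropdown value in the UI simply updates stored parameters."""
    parameters[block][key] = value         # saved; no recompile step needed

set_parameter("gain", "gain_db", -12.0)
set_parameter("peq", "boost_db", 3.0)
# The signal always flows through FIXED_CHAIN in the same order.
```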
The second type of DSP architecture currently in use is the open architecture DSP 104b, as shown in FIGS. 3A, 3B, and 3C. An open architecture is defined by a simple, unstructured grid in the UI, which allows the user to drag and drop interconnections between objects 202 (as shown in FIG. 2, first block 202a for gain, second block 202b for PEQ, and third block 202c for GTE), and additional objects such as filters and other equalizers (EQs) can also be added in the open architecture. Interconnects are the digital audio signal paths that run between blocks 202, as those skilled in the art will know. As those skilled in the art will further understand, the entire DSP program must be recompiled and loaded into the DSP hardware each time a path is altered. There is a great deal of creative freedom, but productivity and workflow suffer.
FIG. 3A is an illustration of an unprogrammed and not-yet-compiled open architecture DSP 104b. An open architecture allows the user to control a larger portion of the signal flow. This architecture is more flexible, but it can also be more difficult to use. Latency can vary between functional implementations, so latency compensation should be included. Compilation occurs in the design phase and, as those skilled in the art will know, is typically one of the final steps in the DSP design process. An advanced technician or audio engineer is required to implement an open architecture, and integration of third-party control systems can also be challenging. The UI is often referred to as a 'rat's nest' schematic view of the DSP process. FIG. 3A shows the interconnects that can be manipulated by the audio engineer according to his or her needs (programmed examples of open architecture DSPs are discussed with reference to FIGS. 3B and 3C). BSS, Symetrix, and BiAmp are just a few examples of DSP manufacturers that sell open architecture DSPs.
FIG. 3B illustrates an open architecture DSP 104b programmed to resemble the fixed architecture DSP of FIG. 2. The difference, however, is that the blocks are not interconnected when the open DSP 104b UI is first accessed by a user; it begins as essentially a blank slate. FIG. 3C also illustrates an open architecture, but in this case the interconnections are made in a significantly different way, and speaker 110 will produce a different sound. In FIG. 3C, the user connects the input digital signals first to PEQ block 202b, in which some frequency bands are reduced and others remain unaffected. The signals are then directed to GTE block 202c and finally to gain block 202a. The sound produced by speaker 110 from open DSP 104b(2) of FIG. 3C can, and generally will, be significantly different from the sound produced by speaker 110 from open DSP 104b(1) of FIG. 3B.
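The contrast with the open architecture can be sketched in a few lines: the same blocks can be wired in any order, but every rewiring forces a recompile-and-load step. The compile_and_load function below is a hypothetical stand-in for that step, not any manufacturer's actual tool.

```python
# Sketch contrasting the open architecture: the user wires blocks in any
# order, but every change to the wiring requires a recompile/reload step.
def compile_and_load(chain):
    """Stand-in for the recompile-and-upload step open architectures require."""
    print("Recompiling DSP program for chain:", " -> ".join(chain))

# FIG. 3B-style wiring (mimics the fixed chain of FIG. 2)
chain_3b = ["gain", "peq", "gate"]
compile_and_load(chain_3b)

# FIG. 3C-style wiring: same blocks, different order, different resulting sound
chain_3c = ["peq", "gate", "gain"]
compile_and_load(chain_3c)   # any rewiring forces another compile
```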
Those skilled in the art will appreciate that there are many controls for each DSP object in both fixed and open architecture DSP systems (e.g., gain object 202a, PEQ object 202b, and GTE object 202c). To use these objects with another device, such as a touch screen interface device (TS), the user must open the DSP tool and click on the object they wish to import. Next, the control properties screen will open. Each file must then be copied and pasted into the software (SW) module used to design/implement the device. FIG. 4 shows a file transfer process for different objects in digital signal processing systems.
In step 402, the user opens a UI for an existing DSP device; then, in step 404, the user selects the first of N objects to be programmed. For example, the user can choose to program the second object 202b (PEQ). Step 406 allows the user to adjust all of the parameters for the selected object, and in step 408 the file is saved and closed. Unseen by the user, the UI program then creates two files: an object type file and a parameter file. Steps 404-410 can then be repeated for each object, as the user desires. The files are saved and transferred to another device in order to access the DSP-programmed settings for the objects. Each object and parameter file must be copied and pasted into the new device in order to import them.
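The two-file export described above can be sketched as follows; the file extensions, JSON format, and function names are assumptions made for illustration, since the background does not specify the actual formats.

```python
# Hedged sketch of the two-file export: for each object the UI writes an
# object-type file and a parameter file, which must then be copied into the
# target device's software module by hand. Formats here are assumptions.
import json

def export_object(obj_name, obj_type, params, out_dir="."):
    """Write the object-type file and the parameter file for one DSP object."""
    with open(f"{out_dir}/{obj_name}.type", "w") as f:
        f.write(obj_type + "\n")
    with open(f"{out_dir}/{obj_name}.params.json", "w") as f:
        json.dump(params, f, indent=2)

# Steps 404/406/408 for object 202b (PEQ): select, adjust, save
export_object("peq_202b", "parametric_eq",
              {"freq_hz": 1000.0, "q": 0.7, "boost_db": 2.0})
# Repeating this for every object, then copying each pair of files into the
# touch-screen device's SW module, is the tedious, error-prone part.
```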
As those skilled in the art will appreciate, this process can be tedious and error-prone, and many errors can occur during the file transfer process.
As those skilled in the art will know, the DSP matrix is often too large to be viewed or manipulated on a user interface, so users tend to think of the DSP matrix in sections.
Current solutions expand and contract trees in order to concentrate on the relevant sections, similar to expanding and collapsing file and directory structures. This approach is limited in that it only works in one dimension.
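The one-dimensional limitation can be illustrated with a small sketch: because a routing matrix is two-dimensional (inputs by outputs), expanding a single tree branch still hides crosspoints that connect different sections. The section and channel names below are invented for the example.

```python
# Sketch of why a one-dimensional tree view is limiting for a DSP matrix:
# a routing matrix is inherently two-dimensional (inputs x outputs), so
# collapsing it along a single axis, like a file tree, hides crosspoints
# that span sections. Section names below are made up for illustration.
inputs  = {"Lobby": ["mic 1", "mic 2"], "Ballroom": ["mic 3", "wireless 4"]}
outputs = {"Lobby": ["ceiling L", "ceiling R"], "Ballroom": ["mains L", "mains R"]}

def visible_crosspoints(expanded_in, expanded_out):
    """Only crosspoints whose input AND output sections are expanded are visible."""
    ins  = [i for sec in expanded_in  for i in inputs[sec]]
    outs = [o for sec in expanded_out for o in outputs[sec]]
    return [(i, o) for i in ins for o in outs]

# Expanding only the "Lobby" branches hides every Lobby-to-Ballroom route:
print(visible_crosspoints(["Lobby"], ["Lobby"]))
```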
Those skilled in the art will further appreciate that the inability to equalize a space using a single DSP means that a spectrum analyzer, a pink noise generator, and other devices must all be brought to each area in order to equalize it.
Accordingly, there is a need for a set of DSP tools that is easier to program, use, and modify than those previously available, and that provides additional features attractive to many professionals in the art.
It is an object of the embodiments to substantially solve at least some of the problems and disadvantages discussed above, and to provide at least the advantages described below.