BACKGROUND

The present invention relates generally to an audio mixing system comprising a plurality of cascade-connected mixing apparatus, and more particularly to an improved method for controlling the individual mixing apparatus in the mixing system.
Audio mixers are apparatus which perform mixing processing, such as mixing of audio signals of a plurality of channels and impartment of effects to the audio signals. In recent years, digital mixers have been in widespread use, which convert analog audio signals, input via input devices such as microphones, into digital signals and then perform mixing processing on the converted digital signals. In each of these digital mixers, a human operator (or user) of the mixer sets values of mixing processing parameters via an operation section (or console section) that is provided with a multiplicity of operators operable to manipulate various parameters to be used in mixing processing. The current settings (set values) of the various mixing processing parameters are stored in a storage area called “current memory”. A DSP array (i.e., signal processing section) carries out the mixing processing on the basis of the various parameter settings held in the current memory.
The conventionally-known digital mixers can collectively reproduce settings of given mixing parameters by storing in advance, as a scene, the current settings of the parameters, held in the current memory, in a scene memory and then recalling the stored scene from the scene memory to the current memory. Such a function is commonly called “scene store/recall” function, and scene data of a plurality of scenes can be stored in the scene memory in the conventionally-known digital mixers.
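The store/recall behavior described above can be sketched as a pair of copy operations between the current memory and the scene memory. The following is a minimal illustrative sketch, not an actual mixer implementation; all class, method and parameter names are assumptions introduced for illustration.

```python
# Hypothetical sketch of the "scene store/recall" function: the current
# parameter settings (current memory) can be snapshotted as a numbered
# scene and later recalled, collectively restoring all settings at once.
class SceneMemory:
    def __init__(self):
        self.current = {}   # current memory: parameter name -> value
        self.scenes = {}    # scene memory: scene number -> snapshot

    def store(self, scene_no):
        # Copy the current settings into the scene memory as one scene.
        self.scenes[scene_no] = dict(self.current)

    def recall(self, scene_no):
        # Copy a stored scene back into the current memory.
        self.current = dict(self.scenes[scene_no])

mixer = SceneMemory()
mixer.current = {"ch1/fader": -10.0, "ch1/eq_hi": 2.5}
mixer.store(1)                      # snapshot the settings as scene 1
mixer.current["ch1/fader"] = 0.0    # settings drift during operation
mixer.recall(1)                     # scene 1 is collectively reproduced
print(mixer.current["ch1/fader"])   # -10.0
```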
In event venues, such as a music festival where a plurality of human performers exhibit performances (music performances etc.) in turn on the stage, it has been known to achieve a smooth progression of performances on the stage by providing two sets of performance platforms, which performers mount, and mixers which mix music performances executed on the performance platforms, and alternately using the provided two sets. FIG. 19 shows an example of a conventionally-known PA system including two performance platforms, “platformA” 400a and “platformB” 400b. In the illustrated example of FIG. 19, a mixer (“mixA”) 410 is provided in correspondence with “platformA” 400a, and mixing of a music performance on “platformA” is performed by the mixer 410. A mixer (“mixB”) 411 is provided in correspondence with “platformB” 400b, and mixing of a music performance on “platformB” is performed by the mixer 411.
Output signals of “mixA” 410 and “mixB” 411 are supplied to an output switch device (“SW”) 412, which selectively outputs either the output signals of “mixA” 410 or the output signals of “mixB” 411 to an amplifier 500 so that audio signals corresponding to the selection by the output switch device 412 are audibly generated or sounded through a speaker 600. During the course of actual execution or exhibition, on the stage, of a particular performance assigned to “platformA” 400a, for example, the system of FIG. 19 permits preparations (such as mixing processing, sound check and the like) for a succeeding performance assigned to the other platform (“platformB”) 400b while allowing audio signals of the performance of “platformA” 400a (i.e., output signals of “mixA” 410) to be sounded via the speaker 600.
Generally, in an event venue and the like, the mixers (“mixA” and “mixB”) 410 and 411 are installed in a mixing booth provided in an audience seating area, as shown in FIG. 19. This is for the purpose of allowing a user (human operator) of the mixers to perform desired mixing operation while aurally checking or confirming balance between audio signals audibly reproduced or sounded through the speaker 600 to the audience. As well known, a plurality of channel strips for processing audio signals on a channel-by-channel basis are provided on the operation panel (console section) of the mixer. The greater the processing capability (i.e., number of channels) of the mixer for use in a concert venue or the like, the greater would become the physical size of the body, including the console section, of the mixer. Consequently, the conventionally-known PA system illustrated in FIG. 19 would present the inconvenience that much of the space in the audience seating area is occupied with the two mixers 410 and 411.
Further, in the conventionally-known PA system, thick and heavy audio cables 413 called “multi cables”, each comprising a bundle of a plurality of cables, are installed between acoustic equipment on the stage-side performance platforms 400a and 400b and the audience-seat-side mixers 410 and 411. Further, a stereo audio cable 414 for delivering stereo signals is installed between the output switch device 412 and the amplifier 500. Namely, a plurality of the audio cables 413 and the stereo audio cable 414 have to be installed or run over a long distance between the stage-side positions and the audience-seat-side positions. Particularly, in the conventionally-known PA system, the necessary wiring work is very complicated and cumbersome because the multi cables 413 are thick and heavy and hence very difficult to handle, and it is necessary to branch audio signals of a plurality of channels, channel by channel, via a connection device (i.e., connector box) disposed near the mixers and couple the audio signals from the connector box to individual input sections of a plurality of channels of the mixers. Further, because the multi cables are relatively expensive, the conventionally-known PA system presents the inconvenience of high wiring cost.
Further, in the conventionally-known PA system, desired mixing operation is performed separately on each of the mixers 410 and 411. It would be convenient if the mixing operation could be performed on the mixers 410 and 411 alternately via the console section of one of the mixers. Among the conventionally-known techniques for controlling mixing operation on a plurality of mixers via the console section of one of the mixers is one disclosed, for example, in Japanese Patent Application Laid-open Publication No. 2005-277649 (hereinafter referred to as Patent Literature 1), which is arranged to not only expand the number of input channels of a plurality of cascaded mixers by interconnecting respective buses but also allow settings of some parameters (e.g., scene recall instruction) to be interlocked or interlinked between the mixers. However, with the technique disclosed in Patent Literature 1, what can be controlled in an interlocked manner is limited to only some parameters (e.g., scene recall instruction), and it is impossible to control channel-specific mixing processing parameters of a given one of the mixers via the console section of another of the mixers.
Further, from Japanese Patent Application Laid-open Publication No. HEI-7-122944 (hereinafter referred to as Patent Literature 2), for example, there has been known a function for recalling parameter settings of a scene, stored in a scene memory, to the console section of a mixer while retaining a state of mixing currently performed by an internal DSP array of the mixer (i.e., stored contents of a current memory in the mixer), and then allowing the console section to confirm or edit the individual parameter settings.
If the technique disclosed in Patent Literature 2 is applied to the system of FIG. 19, it will be possible to perform control on audio signals, currently sounded through the speaker of a mixer, on the basis of a state of mixing being executed by an internal DSP array of the mixer and simultaneously recall, to the console section of the mixer, mixing processing parameter settings for a next or succeeding performance, prepared in another mixer, to then adjust the recalled settings. However, with the technique disclosed in Patent Literature 2, even if the console section adjusts the mixing processing parameter settings for the succeeding performance, the adjusted results cannot be reflected in the control by the internal DSP array of the other mixer because the adjusted results cannot be returned to the other mixer, and sounds corresponding to the adjusted results cannot be aurally checked or confirmed in the other mixer or in the one mixer. Thus, even with the technique disclosed in Patent Literature 2, preparations (mixing operation, sound check, etc.) for the succeeding performance in the other mixer cannot be made through operation on the console section of the one mixer.
SUMMARY OF THE INVENTION

In view of the foregoing, it is an object of the present invention to allow mixing operation of a plurality of mixing apparatus to be performed efficiently. More specifically, it is an object of the present invention to provide an improved mixing system which allows mixing operation of two mixing apparatus to be efficiently performed alternately in an event venue and the like.
In order to accomplish the above-mentioned object, the present invention provides an improved mixing system including a plurality of cascaded mixing apparatus, which comprises: a main mixing apparatus including a main operation section for receiving operation by a user; a first mixing apparatus to which are inputted audio signals from a first input source; a second mixing apparatus to which are inputted audio signals from a second input source; an auxiliary operation section for receiving operation by the user different from the operation received via said main operation section; a main output section that outputs an audio signal to a sound system; an auxiliary output section that outputs a confirming audio signal; a mode selection section that selects either one of a first control mode for causing the signal of said first input source to be outputted through said main output section and a second control mode for causing the signal of said second input source to be outputted through said main output section; a first control section that, in said first control mode, controls mixing processing of said first mixing apparatus for mixing the audio signals, inputted from the first input source, in response to operation received via said main operation section, to thereby cause a result of the controlled mixing processing of said first mixing apparatus to be outputted through said main output section and controls mixing processing of said second mixing apparatus for mixing the audio signals, inputted from the second input source, in response to operation received via said auxiliary operation section, to thereby cause a result of the controlled mixing processing of said second mixing apparatus to be outputted through said auxiliary output section; and a second control section that, in said second control mode, controls the mixing processing of said second mixing apparatus for mixing the audio signals, inputted from the second input source, in response to operation received via said main operation section, to thereby cause a result of the controlled mixing processing of said second mixing apparatus to be outputted through said main output section and controls the mixing processing of said first mixing apparatus for mixing the audio signals, inputted from the first input source, in response to operation received via said auxiliary operation section, to thereby cause a result of the controlled mixing processing of said first mixing apparatus to be outputted through said auxiliary output section.
According to the mixing system of the present invention, in the first control mode, the mixing processing of the first mixing apparatus is controlled in response to the operation received via the main operation section so that the result of the thus-controlled mixing processing of the first mixing apparatus can be outputted through the main output section, and the mixing processing of the second mixing apparatus is controlled in response to the operation received via the auxiliary operation section so that the result of the mixing processing of the thus-controlled second mixing apparatus can be outputted through the auxiliary output section. In the second control mode, on the other hand, the mixing processing of the second mixing apparatus is controlled in response to the operation received via the main operation section so that the result of the thus-controlled mixing processing of the second mixing apparatus can be outputted through the main output section, and the mixing processing of the first mixing apparatus can be controlled in response to the operation received via the auxiliary operation section so that the result of the thus-controlled mixing processing of the first mixing apparatus can be outputted through the auxiliary output section.
Thus, in an event venue or the like, where switching is made per performance between two mixing apparatus to allow the two mixing apparatus to be used alternately, audio signals for a current performance are input to either one of the first and second mixing apparatus and mixing control is performed on the input audio signals for the current performance, in response to operation received via the main operation section, so that the result of the thus-controlled mixing processing is outputted through the main output section for sounding through a main speaker, during which time audio signals for a next or succeeding performance are input to the other of the first and second mixing apparatus and mixing control is performed on the input audio signals for the succeeding performance, in response to operation received via the auxiliary operation section, so that the result of the thus-controlled mixing processing can be outputted through the auxiliary output section for aural check or confirmation via a headphone set or the like. Because switching can be readily made between the first and second control modes in accordance with the input destination (first or second mixing apparatus) of the audio signals for the current performance, two different mixing processing can be performed efficiently using the main operation section of the main mixing apparatus.
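The mode selection described above amounts to swapping which mixing apparatus is driven by the main operation section (and routed to the main output) and which is driven by the auxiliary operation section (and routed to the auxiliary output). The following is a minimal illustrative sketch of that swap; the mode and apparatus names are assumptions for illustration, not the claimed implementation.

```python
# Hedged sketch of the two control modes: a single function that returns
# which apparatus the main and auxiliary operation sections control.
def route(mode):
    """Return (apparatus driven by the main operation section and routed
    to the main output, apparatus driven by the auxiliary operation
    section and routed to the auxiliary output)."""
    if mode == "first":
        return ("first_mixing_apparatus", "second_mixing_apparatus")
    if mode == "second":
        return ("second_mixing_apparatus", "first_mixing_apparatus")
    raise ValueError(f"unknown control mode: {mode}")

main_target, aux_target = route("first")
# main_target's mix is sounded through the main speaker;
# aux_target's mix is checked via a headphone set or the like.
```

Switching modes per performance is then just a call with the other mode name; no re-cabling is needed.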
The present invention may be constructed and implemented not only as the apparatus invention as discussed above but also as a method invention. Also, the present invention may be arranged and implemented as a software program for execution by a processor such as a computer or DSP, as well as a storage medium storing such a software program. Further, the processor used in the present invention may comprise a dedicated processor with dedicated logic built in hardware, not to mention a computer or other general-purpose type processor capable of running a desired software program.
The following will describe embodiments of the present invention, but it should be appreciated that the present invention is not limited to the described embodiments and various modifications of the invention are possible without departing from the basic principles. The scope of the present invention is therefore to be determined solely by the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS

For better understanding of the objects and other features of the present invention, its preferred embodiments will be described hereinbelow in greater detail with reference to the accompanying drawings, in which:
FIG. 1 is a block diagram showing example electric hardware setups of a digital audio mixer and mixer engine constituting a mixing system according to an embodiment of the present invention;
FIG. 2 is a block diagram schematically showing an example construction of a PA system including the embodiment of the mixing system;
FIG. 3 is a block diagram showing an example algorithm construction of a representative one of mixing apparatus constituting the embodiment of the mixing system;
FIG. 4 is a block diagram showing an example audio signal processing construction to be used when the embodiment of the mixing system should operate in a “normal mode”;
FIGS. 5A and 5B are block diagrams showing examples of audio signal processing constructions to be used when the embodiment of the mixing system should operate in a “festival mode”, of which FIG. 5A shows an example audio signal processing construction to be used in a “festival A mode” while FIG. 5B shows an example audio signal processing construction to be used in a “festival B mode”;
FIG. 6 is a view showing an example construction of a console section of a mixer included in the embodiment of the mixing system;
FIGS. 7A-7D are diagrams explanatory of assignment, to channel strips, of objects of control when the embodiment of the mixing system should operate in the “festival mode”, of which FIG. 7A shows assignment, to monaural channel strips, of objects of control in local control, FIG. 7B shows assignment, to monaural channel strips, of objects of control in remote control, FIG. 7C shows assignment, to stereo output channel strips, of objects of control, and FIG. 7D shows assignment of objects of remote control by a PC;
FIG. 8 is a diagram explanatory of constructions of current memories provided in individual mixing apparatus included in the embodiment of the mixing system and parameter editing performed in current memories in the “normal mode”;
FIG. 9 is a flow chart showing an example operational sequence of a cascade-connection detection event process performed by the mixer in the embodiment when a new cascade-connection detection event has been detected;
FIG. 10 is a flow chart showing an example operational sequence of a mode change event process performed by the mixer in the embodiment;
FIG. 11 is a flow chart showing an example operational sequence of an operator operation event process performed by the mixer in the embodiment;
FIG. 12 is a flow chart showing an example operational sequence of a parameter value change result reception event process performed by the mixer in the embodiment;
FIGS. 13A and 13B are views explanatory of examples of parameter editing processes based on remote control when the embodiment of the mixing system should operate in the festival mode, of which FIG. 13A shows a parameter editing process in “mode A” while FIG. 13B shows a parameter editing process in “mode B”;
FIG. 14 is a flow chart showing an example operational sequence of a local-ON event process to be performed when an object of control by the mixer is to be switched from “remote” to “local”;
FIG. 15 is a flow chart showing an example operational sequence of a remote-ON event process to be performed when the object of control by the mixer is to be switched to “remote”;
FIGS. 16A and 16B are diagrams explanatory of control for interlocking a scene store/recall function in the embodiment of the mixing system, of which FIG. 16A shows such control in the “normal mode” while FIG. 16B shows such control in the “festival A mode”;
FIG. 17 is a flow chart showing an example operational sequence of a scene store event process performed by the mixer in the embodiment;
FIG. 18 is a flow chart showing an example operational sequence of a scene recall event process performed by the mixer in the embodiment; and
FIG. 19 is a block diagram showing a construction of a conventionally-known PA system.
DETAILED DESCRIPTION

With reference to the accompanying drawings, a detailed description will be given about a mixing system according to an embodiment of the present invention. Of a plurality of mixing apparatus constituting the embodiment of the mixing system, the mixing apparatus having a console section (i.e., operation panel or operation section) will hereinafter be referred to as “digital audio mixer” or “mixer”, while each of the other mixing apparatus having no console section will hereinafter be referred to as “mixer engine” or “engine”.
FIG. 1 is a block diagram showing example electric hardware setups of the digital audio mixer and mixer engine constituting the mixing system of the present invention. The instant embodiment of the mixing system comprises at least one mixer 100 and at least one mixer engine 200 which are cascade-connected with each other.
As shown in FIG. 1, the mixer 100 includes a CPU 1, a flash memory 2, a RAM 3, a signal processing (DSP) section 4, a waveform input/output interface (waveform I/O) 5, a cascade interface (cascade I/O) 6, a display 7, an operator member unit 8, electric faders 9, and an other interface section 10; these components 1-10 are interconnected via a bus 1B. A microcomputer, comprising the CPU 1, flash memory 2 and RAM 3, executes a control program, stored in the flash memory 2 or RAM 3, to control all operations of the mixer 100. The RAM 3 includes a current memory area for storing the current settings of various parameters for mixing processing.
The signal processing section 4 comprises a DSP array for performing digital signal processing on audio signals. The waveform I/O 5 includes an analog input port, an analog output port and digital input/output ports, and each analog audio signal input via the waveform I/O 5 is converted into a digital audio signal and then supplied to the DSP array 4. The DSP array 4 performs signal processing on the supplied digital audio signal on the basis of an instruction given from the CPU 1, and the digital audio signal generated as a result of the signal processing by the DSP array 4 is then converted into analog representation and output via the waveform I/O 5. The DSP array 4 also communicates digital audio signals with digital acoustic equipment connected thereto via the waveform I/O 5. Further, a monitor (e.g., headphone set) 11 for a user or human operator of the mixer 100 outputs monitoring audio signals supplied from the waveform I/O 5.
The display 7, operator member unit 8 and electric faders 9 are user interfaces that constitute the console section (indicated at 60 in FIG. 4) operable by the user or human operator of the mixer 100, and these user interfaces 7-9 are provided on the upper surface of the console section 60 of the mixer 100.
The electric faders 9 are operator members operable to continuously vary values of parameters allocated thereto in accordance with operating positions of corresponding vertically-slidable knobs. The electric faders 9 are provided on, and in one-to-one corresponding relation to, a plurality of channel strips (see FIG. 6) on the console section 60. Each of the electric faders 9 has a motor built therein for automatically driving the knob to vary the operating position of the knob; namely, the motor can be driven as necessary under the control of the CPU 1 to automatically vary the knob position of the electric fader 9. By the operating position of the knob, the current value of the parameter allocated to the electric fader 9 can be visually indicated to the user. The display 7, which is in the form of a liquid crystal display (LCD) panel and/or the like, displays various information to the user under the control of the CPU 1. Further, the operator member unit 8 includes a multiplicity of operator members operable to, for example, set various parameters, switch among various operation modes and instruct activation of various functions.
The mixer 100 is cascade-connected (cascaded) with another mixing apparatus (mixer or mixer engine) via the cascade I/O 6. In the instant embodiment, a general-purpose LAN cable 12, such as a CAT5 cable, may be used to cascade the mixing apparatus. Between the cascaded mixing apparatus, audio signals and remote control signals of a plurality of channels can be delivered bi-directionally by use of a communication protocol, such as the EtherSound (registered trademark) or CobraNet (registered trademark) protocol, capable of communicating audio signals and remote control signals of a plurality of channels via one LAN cable. In the instant embodiment, it is assumed that the EtherSound (registered trademark) protocol is used as the communication protocol. With the EtherSound protocol, bi-directional data communication can be performed per Ethernet frame that comprises a packet containing audio signals of 64 channels (e.g., 32 channels for upstream communication and 32 channels for downstream communication). The aforementioned remote control signals include signals instructing changes in values or settings of various parameters related to mixing processing to be performed by the other mixing apparatus cascaded with the mixer 100, information indicative of the changed results, etc. Namely, the mixer 100 can transmit and receive, to and from the cascaded other mixing apparatus, control signals including ones instructing changes of various parameter values or settings pertaining to the mixing processing, information indicative of the changed results, etc.
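The channel budget described above (64 audio channels per frame, e.g. split 32 upstream / 32 downstream, carried alongside remote control signals) can be sketched as follows. The packing is purely illustrative and is not the actual EtherSound frame format; all names are assumptions.

```python
# Hedged sketch of the per-frame channel budget: one frame carries up to
# 64 audio channels plus remote control messages over a single LAN cable.
UPSTREAM_CHANNELS = list(range(0, 32))     # e.g., stage toward mixing booth
DOWNSTREAM_CHANNELS = list(range(32, 64))  # e.g., mixing booth toward stage

def build_frame(audio_samples, control_messages):
    """audio_samples: mapping of channel index (0-63) -> sample block;
    control_messages: parameter-change instructions and changed-result
    replies riding in the same frame as the audio."""
    if not all(0 <= ch < 64 for ch in audio_samples):
        raise ValueError("channel index out of the 64-channel budget")
    return {"audio": dict(audio_samples), "control": list(control_messages)}

frame = build_frame({0: b"\x00" * 4, 32: b"\x00" * 4},
                    [{"type": "set_parameter", "name": "ch1/fader"}])
```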
The other interface section 10 may include various interfaces for connection with other equipment, such as a personal computer (PC), external MIDI equipment, a recorder, USB memories, etc. A PC containing an application program for controlling the mixer 100 can be connected to the other interface section 10, so as to control the mixer 100 from the PC.
The mixer engine 200 is similar in signal-processing-related hardware setup to the aforementioned mixer 100 but different from the mixer 100 in that it has no console section for the user to perform mixing operation. Namely, the mixer engine 200 includes: a microcomputer comprising a CPU 13, a flash memory 14 and a RAM 15; a DSP array 16 for performing mixing processing; a waveform I/O 17 for inputting and outputting audio signals; and a cascade I/O 18 for connection with other equipment including the mixer 100. The above-mentioned components 13-18 are interconnected via a bus 13B. Further, a monitor 22 for a user or human operator of the mixer engine 200 outputs monitoring audio signals supplied from the waveform I/O 17. A display 19 and operator member unit 20, shown in FIG. 1 as components of the mixer engine 200, are in the form of extremely simple LED lamps, switches, etc., which do not constitute a console section of a mixer.
The engine 200 is cascaded with other mixing apparatus, including the mixer 100, via the LAN cable 12 connected to the cascade I/O 18. In the engine 200, remote control signals transmitted from the mixer 100 are received via the cascade I/O 18, the DSP array 16 performs mixing-processing-related control, such as changes in values of various parameters, on the basis of the received control signals, and the results of the mixing-processing-related control, such as changes in values of various parameters, can be returned to the mixer 100 via the cascade I/O 18.
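The round trip just described, where the mixer sends a parameter-change instruction, the engine applies it, and the changed result is returned, can be sketched as a simple request/reply exchange. Message shapes and names below are assumptions for illustration only.

```python
# Hedged sketch of the remote-control round trip over the cascade link:
# the engine applies a requested parameter change and reports the result
# back so the mixer's view of the engine's state stays consistent.
def engine_handle(message, engine_params):
    """Engine side: apply a parameter-change instruction and build the
    changed-result reply that is returned to the mixer."""
    if message["type"] == "set_parameter":
        engine_params[message["name"]] = message["value"]
        return {"type": "change_result",
                "name": message["name"],
                "value": engine_params[message["name"]]}
    raise ValueError(f"unknown message type: {message['type']}")

# Mixer side: send an instruction, then update its display/faders from
# the returned result rather than from the locally requested value.
engine_params = {}
reply = engine_handle(
    {"type": "set_parameter", "name": "ch1/fader", "value": -6.0},
    engine_params)
print(reply["value"])  # -6.0
```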
Furthermore, a PC 300 containing an application program for controlling the mixer engine 200 can be connected to the engine 200 via an other I/O section 21. The other I/O section 21 may include, for example, a serial port such as RS-232C, and/or one or more other interfaces compliant with any of the conventionally-known communication standards, such as USB, IEEE 1394 and Ethernet. As conventionally known, the PC 300 can execute the application program for controlling the mixer engine 200, generate the above-mentioned remote control signals in response to operation of a user interface of the PC 300 and supply the control signals to the engine 200 to control the engine 200. In this case, the PC 300 and the engine 200 together can operate as an independent mixer, even if they are not cascaded. The engine 200 is controlled by the PC 300 via an operation screen on the display of the PC 300. The operation screen, which emulates a construction of the mixer console section shown in FIG. 6, includes a plurality of channel strips, and parameter-setting GUI components, such as a fader operator, a CUE instructing button, etc., provided for each of the channel strips.
FIG. 2 schematically shows an example construction of a PA system including the instant embodiment of the mixing system. The PA system shown in FIG. 2 is assumed to be one that is built in a music festival venue or the like where performances (such as music performances) by a plurality of human performers are exhibited. In the illustrated example of FIG. 2, a mixer (“dmix”) 100, an engine (“meA”) 200a and an engine (“meB”) 200b are cascaded with one another via a general-purpose LAN cable (e.g., CAT5 cable) 12. Between the cascaded mixing apparatus, audio signals and remote control signals of a plurality of channels can be delivered bi-directionally by use of the EtherSound (registered trademark) protocol.
In FIG. 2, reference numerals 400a and 400b represent two performance platforms (i.e., “platformA” and “platformB”) each provided for mounting thereon a set of human performers, such as human music performers. The engine (“meA”) 200a is disposed near the performance platform (“platformA”) 400a and connected via an audio cable with acoustic equipment (first input source) provided on the performance platform 400a. Similarly, the engine (“meB”) 200b is disposed near the performance platform (“platformB”) 400b and connected via an audio cable with acoustic equipment (second input source) provided on the performance platform 400b. Further, a sound system including an amplifier 500 and stereo speakers 600 is connected to the engine (“meB”) 200b, and audio signals output via an audio signal output path (waveform I/O 17) of the engine 200b are amplified as necessary by the amplifier 500 and then audibly generated or sounded from the speakers 600 toward the audience. Further, PCs 300a and 300b may be connected to the engines 200a and 200b to control the engines 200a and 200b from the PCs 300a and 300b. Let it be assumed that the PCs 300a and 300b are located, for example, on the left and right wings of the stage near the engines 200a and 200b.
As shown in FIG. 2, the mixer (“dmix”) 100 is disposed in a mixing booth installed at a suitable position, such as a rear position of the audience seating area, in a music festival venue or the like. In the mixing booth, the user of the mixer 100 can perform mixing operation while aurally checking or confirming balance between audio signals sounded from the sound system toward the audience seating area. The engines (“meA” and “meB”) 200a and 200b are located on the sides of the stage near the respective performance platforms 400a and 400b. Here, the mixer 100, engine 200a and engine 200b are interconnected via the single LAN cable 12. In this type of music festival venue or the like, it has been conventional to use thick and heavy audio cables, called “multi cables”, as the cables interconnecting the equipment located on the stage and the mixing apparatus located in the mixing booth. Thus, heretofore, one or more multi cables have had to be run over a long distance between the stage-side positions and the mixing booth in the audience seating area, and such wiring work is very complicated and cumbersome and tends to require high cost. In the mixing system shown in FIG. 2, on the other hand, it is sufficient that only one general-purpose LAN cable 12 be run for cascade connection among the audience-seat-side mixer 100, stage-side engine 200a and stage-side engine 200b. Thus, the necessary wiring work for the mixing system of FIG. 2 can be dramatically simplified as compared to that in the conventional counterparts. Further, because the LAN cable is very inexpensive as compared to the multi cable, the necessary wiring cost can be extremely lowered.
The following lines describe how the two performance platforms 400a and 400b are used in an event, such as a music festival, where a plurality of performances (such as music performances) are exhibited in succession on the stage. One of the two performance platforms 400a and 400b (e.g., “platformB”) is moved to the middle of the stage so that a given performance is exhibited on the performance platform (“platformB”) 400b on the stage, during which time preparations for a succeeding performance are made on the other performance platform (“platformA”) 400a kept standby on one of the wings of the stage. Namely, while the current performance is being exhibited on the stage, the engine 200a is used to perform mixing setting, sound check, etc. for the succeeding performance assigned to “platformA” 400a. Then, upon completion of the current performance, “platformB” 400b having so far been in the middle of the stage is moved back to the other wing of the stage, and “platformA” 400a having so far been kept standby on the one wing of the stage is moved to the middle of the stage. After that, a given performance is exhibited on “platformA” 400a, during which time preparations for another succeeding performance are made on “platformB” 400b now kept standby on the wing of the stage. In this way, the two performance platforms 400a and 400b are used alternately, so that performances (e.g., music performances) can be executed on the stage in succession smoothly in the event, such as a music festival.
In the instant embodiment of the mixing system, desired mixing operation related to performances on “platformA” 400a and desired mixing operation related to performances on “platformB” 400b can be remote-controlled alternately via the console section 60 of the single mixer 100, in accordance with desired usage of the mixing system in the event. Namely, the mixer (“dmix”) 100 is equipped with a special operation mode (hereinafter referred to as “festival mode”) for performing the aforementioned remote control.
In the “festival mode”, audio signals for a performance to be exhibited on the stage are input to one of the engines (200a or 200b), mixing processing on the input audio signals in the one engine is remote-controlled via the console section 60 of the mixer 100, and the results of the mixing processing are sounded through the sound system (speakers 600). Also, in the “festival mode”, audio signals for a succeeding performance are input to the other engine (200b or 200a), mixing processing on the input audio signals in the other engine is remote-controlled via the PC (300b or 300a), and the results of the mixing processing can be monitored by the human operator of the PC via the monitor, such as a headphone set. Namely, in the “festival mode”, the console section 60 of the mixer 100 functions as a “main operation section” for controlling the mixing processing on the audio signals for the performance to be exhibited on the stage, while the PC (300a or 300b) functions as an “auxiliary operation section” for controlling the mixing processing on the audio signals for the succeeding performance. Furthermore, an output path via which the audio signals for the performance to be exhibited on the stage are output to the sound system in the festival mode will hereinafter be referred to as “main output path” or “main output”, while an output path via which the audio signals for the succeeding performance are output to the operator's monitor (11 or 22 in FIG. 1) in the festival mode will hereinafter be referred to as “auxiliary output path” or “auxiliary output”.
In addition to the “festival mode”, the mixer (“dmix”) 100 also has an operation mode in which corresponding buses of “dmix” 100, “meA” 200a and “meB” 200b, cascaded with one another in an ordinary manner, are interconnected to expand the number of input channels; such an operation mode will hereinafter be referred to as “normal mode”.
FIG. 3 shows an example signal processing algorithm construction of a representative one of the mixing apparatus in the instant embodiment of the mixing system. In the illustrated example, it is assumed that the individual mixing apparatus (mixer 100 and engines 200a and 200b) in the mixing system are identical to one another in signal processing algorithm (i.e., in terms of the number of input channels, number of buses, number of output channels, number of effects, and the like).
In FIG. 3, an audio signal input section 30 includes audio input ports of a plurality of channels that receive analog and digital audio signals of a plurality of channels from external acoustic equipment connected to the individual audio input ports. The received analog audio signals are converted in the audio signal input section 30 to digital audio signals. Input patch section 31 allocates each of the input signals to any one of a plurality of input channels 32 provided at the next stage. In this specification, connecting input ports to input channels or connecting output channels to output ports is referred to as “patch”. Further, data indicative of a patch setting between an input/output port and an input/output channel will be referred to as “patch data”, and such patch data is stored in a suitable memory, such as the flash memory or RAM.
Each of the mixing apparatus (mixer 100 and engines 200a and 200b) includes the plurality of input channels 32. In the instant embodiment, it is assumed that each of the mixing apparatus (mixer 100 and engines 200a and 200b) includes 48 input channels 32 (assigned channel numbers “CH1”-“CH48”). Each of the plurality of input channels 32 controls characteristics (sound volume level setting, parameter settings of various effectors, etc.) of the input digital audio signal, on the basis of parameter settings specific to the input channel.
Each of the plurality of input channels 32 is connected to each of a predetermined plurality of buses 33. Each of the buses 33 is assigned a unique bus number, and a signal of each of the input channels 32 can be output to a desired one of the buses 33 by designating the unique bus number of the desired bus 33. The plurality of buses 33 include a plurality of mixing buses (in this example, 24 monaural mixing buses and a pair of left and right stereo mixing buses), and two types of CUE buses (main CUE bus and auxiliary CUE bus). Each of the mixing buses is a bus for mixing the input audio signals at a mixing ratio corresponding to signal output levels of the individual input channels. Each of the CUE buses is a bus for outputting the audio signal of a user-designated channel directly to a monitoring output; the main CUE bus is a bus for outputting the audio signals of the main output in the festival mode directly to the monitoring output of the mixer 100, while the auxiliary CUE bus is a bus for outputting the audio signals of the auxiliary output in the festival mode to the monitoring output of the engine 200a or 200b.
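The bus complement just described (24 monaural mixing buses, one stereo pair, and two CUE buses) can be enumerated as follows. This is an illustrative sketch only; the function and bus names are hypothetical and do not appear in the specification.

```python
def build_bus_list():
    """Return (bus_name, bus_kind) pairs modeling the bus set of one
    mixing apparatus: 24 monaural mixing buses, a stereo pair, and the
    main and auxiliary CUE buses."""
    buses = [(f"B{n}", "mono_mix") for n in range(1, 25)]      # 24 monaural mixing buses
    buses += [("ST_L", "stereo_mix"), ("ST_R", "stereo_mix")]  # left/right stereo mixing buses
    buses += [("CUE_MAIN", "cue"), ("CUE_AUX", "cue")]         # two types of CUE buses
    return buses
```

Each input channel can then address a destination bus by its name (bus number), matching the designation scheme described above.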
In each of the plurality of output channels 34, control is performed on characteristics (sound volume level setting, parameter settings of various effectors) of the audio signal supplied thereto. The plurality of output channels 34 are provided in one-to-one corresponding relation to the plurality of buses 33. Namely, the plurality of output channels 34 include 24 monaural output channels and a pair of left and right stereo output channels, and each of the output channels 34 is supplied, via a later-described cascade control section 40, with the audio signal output from a corresponding one of the mixing buses 33. Output patch section 35 allocates, on the basis of output patch data, the output signal of each of the output channels 34 to any one of a plurality of analog or digital output ports provided in an audio output section 36. In this way, audio signals having been subjected to user-desired mixing processing can be output through the audio output section 36.
Monitoring circuit 37 is a circuit for outputting confirming (monitoring) signals to a monitoring output section 38. Normally (i.e., when the CUE is OFF), the monitoring circuit 37 outputs the audio signals from the audio output section 36 to the monitoring output section 38. When the user designates the audio signal of a particular channel as an object of CUE (i.e., when the CUE is ON), the monitoring circuit 37 outputs the audio signal of the particular channel (i.e., CUE signal) to the monitoring output section 38. In FIG. 3, a flow of the CUE signal is indicated by dotted lines. The user can set CUE ON or CUE OFF for each of the plurality of input channels 32 and output channels 34. The audio signal of the channel, for which CUE ON has been instructed, is output to the CUE bus of the plurality of buses 33, and the audio signal of the CUE bus is supplied to the monitoring circuit 37 via the later-described cascade control section 40 and then ultimately output via the monitoring output section 38. Note that the user can select, for each of the channels, either a pre-fader signal having not yet been subjected to sound volume adjustment by the sound volume fader or a post-fader signal having been subjected to sound volume adjustment by the sound volume fader, as a CUE signal to be sent from the input channel 32 or output channel 34 to the CUE bus.
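The selection performed by the monitoring circuit 37 can be sketched as a small function: with no CUE-ON channel the normal output is monitored, and with CUE ON the designated channels' pre- or post-fader signals are routed to the monitor instead. This is a hedged illustration; the function name, dictionary layout and field names are assumptions, not part of the specification.

```python
def monitoring_source(cue_on_channels, main_output, channel_signals):
    """Pick the signal fed to the monitoring output section 38.

    cue_on_channels : set of channel ids currently set to CUE ON
    main_output     : sample from the audio output section (CUE-OFF case)
    channel_signals : dict id -> {"pre": sample, "post": sample,
                                  "cue_point": "pre" or "post"}
    """
    if not cue_on_channels:
        return main_output  # CUE OFF: monitor the normal output signals
    # CUE ON: mix the designated channels' CUE signals, each taken either
    # pre-fader or post-fader according to the per-channel selection
    return sum(channel_signals[ch][channel_signals[ch]["cue_point"]]
               for ch in cue_on_channels)
```

The per-channel `cue_point` field mirrors the user's pre-fader/post-fader choice described in the text.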
In FIG. 3, the cascade control sections 40, indicated by a dash-dot line block, are provided in corresponding relation to the plurality of buses 33; in the figure, only a representative one of the cascade control sections 40, which corresponds to one of the buses 33, is shown for clarity of illustration.
In the cascade control section 40, a signal path 50 outputs an audio signal, input from a mixing apparatus (mixer or engine) that precedes the mixing apparatus in question (i.e., the mixing apparatus to which the cascade control section 40 belongs) in the cascade-connected apparatus group (hereinafter referred to as “preceding-cascade-stage mixing apparatus”), to a mixing apparatus that succeeds the mixing apparatus in question in the cascade-connected apparatus group (hereinafter referred to as “succeeding-cascade-stage mixing apparatus”). Further, a signal path 51 outputs or returns an audio signal, input from the succeeding-cascade-stage mixing apparatus, to the preceding-cascade-stage mixing apparatus. In this specification, each audio signal communicated between the mixing apparatus via the cascade connection (i.e., audio signals flowing over the signal path 50 or 51) will hereinafter be referred to as a “cascade signal”.
Adder section 41 adds together a cascade signal transmitted from the preceding-cascade-stage mixing apparatus and an audio signal output from the bus 33 of the mixing apparatus in question. More specifically, output signals from the corresponding buses of the cascaded mixing apparatus are added by the adder section 41. For example, audio signals output from the mixing bus of bus number B1 of the mixer 100, from the mixing bus of bus number B1 of the engine 200a and from the mixing bus of bus number B1 of the engine 200b are added together by the adder section 41. In this way, the corresponding buses 33 of the cascaded mixing apparatus are interconnected.
Switch section 42 is a switch for switching between ON and OFF of audio signal input from the mixing bus 33 of the mixing apparatus in question to the adder section 41. When the switch section 42 is in the OFF state, the output signal from the bus 33 is not added with the cascade signal of the signal path 50; namely, the bus 33 is not connected with the corresponding buses 33 of the other mixing apparatus cascade-connected with the mixing apparatus in question. Delay section 43, preceding the switch section 42, is provided for compensating for a delay resulting from the cascade connection when the cascade signal and the output signal from the bus 33 are to be added by the adder section 41.
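The combined behavior of the adder section 41, switch section 42 and delay section 43 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the delay section is modeled as a fixed sample offset applied to the local bus signal, and all names are hypothetical.

```python
def cascade_add(cascade_in, local_bus_out, switch_on, delay_samples=0):
    """Combine the upstream cascade signal with this apparatus's bus output.

    cascade_in    : samples arriving over signal path 50
    local_bus_out : samples output from the local bus 33
    switch_on     : state of switch section 42
    delay_samples : delay-compensation offset (delay section 43)
    """
    if not switch_on:
        # switch 42 OFF: the local bus is not joined to the cascade
        return list(cascade_in)
    # delay-compensate the local signal, then add sample by sample (adder 41)
    delayed = [0.0] * delay_samples + list(local_bus_out)
    return [c + l for c, l in zip(cascade_in, delayed)]
```

With the switch OFF the cascade signal passes through unchanged, which matches the disconnection behavior described above.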
Switch section 44 is a switch that is turned on to interconnect the signal paths 50 and 51 if the mixing apparatus in question (i.e., the mixing apparatus to which the section 44 belongs) is at the last stage of the cascade connection (i.e., located at a predetermined position to function as a cascade master). Note that the functions of the adder section 41 and switch section 44 are conventionally known in the field of the ordinary cascade connection between mixing apparatus.
Selector section 45 selects, as the cascade signal to be output from the mixing apparatus in question to the preceding-cascade-stage mixing apparatus, either the cascade signal output from the bus 33 of the mixing apparatus in question or the cascade signal flowing over the signal path 51 (i.e., the cascade signal output from the succeeding-cascade-stage mixing apparatus). Further, a selector section 46 selects, as the audio signal to be supplied to the plurality of output channels 34 or monitoring circuit 37, the audio signal output from the bus 33 of the mixing apparatus in question, the cascade signal flowing over the signal path 50 (i.e., the cascade signal output from the preceding-cascade-stage mixing apparatus, with which the bus-output audio signal of the mixing apparatus in question has not yet been added) or the cascade signal flowing over the signal path 51 (i.e., the cascade signal output from the succeeding-cascade-stage mixing apparatus).
Delay section 47, provided at a stage succeeding the selector section 46, compensates for a delay resulting from the cascade connection among the mixing apparatus when the audio signal is to be output to the audio signal output path.
With the cascade control sections 40 arranged in the aforementioned manner, destinations of the audio signals (including the cascade signals) of the buses 33 of the individual mixing apparatus can be controlled independently among the buses 33, by switching the settings of the switch and selector sections 42, 45 and 46. Namely, by switching the settings of any of the switch and selector sections 42, 45 and 46 depending on the operation mode (“normal mode” or “festival mode”), the instant embodiment can achieve a plurality of different signal path connections corresponding to the user-designated operation mode. Variations of the signal path connection corresponding to the user-designated operation mode will be described later with reference to FIGS. 4 and 5.
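One plausible reading of these per-mode settings for the mixing-bus cascade control sections can be summarized in a configuration table. This is a speculative sketch consistent with the mode descriptions in the surrounding text, not literal values from the patent; the mode names, keys and `"path51"`/`"own_bus"` labels are all assumptions.

```python
# Per-mode settings for switch section 42 (does the local mixing bus feed
# the cascade adder?) and selector section 46 (source feeding the output
# channels: the cascaded result on path 51, or the local bus alone).
MIXING_BUS_CASCADE_SETTINGS = {
    "normal": {  # all three apparatus interconnected (FIG. 4)
        "meA":  {"sw42": True,  "sel46": "path51"},
        "meB":  {"sw42": True,  "sel46": "path51"},
        "dmix": {"sw42": True,  "sel46": "path51"},
    },
    "festival_B": {  # only meB and dmix interconnected; meA stands alone
        "meA":  {"sw42": False, "sel46": "own_bus"},
        "meB":  {"sw42": True,  "sel46": "path51"},
        "dmix": {"sw42": True,  "sel46": "path51"},
    },
    "festival_A": {  # meA and dmix interconnected; meB's bus is detached
        "meA":  {"sw42": True,  "sel46": "path51"},
        "meB":  {"sw42": False, "sel46": "path51"},
        "dmix": {"sw42": True,  "sel46": "path51"},
    },
}
```

A mode change would then amount to looking up this table and applying the settings to each apparatus's cascade control sections.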
FIG. 4 is a block diagram showing an example audio signal processing construction when the instant embodiment of the mixing system should operate in the “normal mode”. In the illustrated example of FIG. 4, the mixer (“dmix”) 100 is connected with the engines (“meA” and “meB”) 200a and 200b and located at a predetermined position to function as a cascade master, so that it receives output signals (cascade signals) from the respective buses 33 of the engines 200a and 200b. As illustrated in FIG. 4, output signals from the plurality of buses 33 of “meA” 200a are input, via the signal path 50, to “meB” 200b and added, via the adder sections 41 of “meB” 200b, with output signals of the corresponding buses 33 of “meB” 200b. Output signals from the adder sections 41 of “meB” 200b are input, via the signal path 50, to “dmix” 100 and added, via the adder sections 41 of “dmix” 100, with output signals of the corresponding buses 33 of “dmix” 100. By mixing the output signals from the corresponding buses 33 of the mixing apparatus (i.e., “dmix”, “meA” and “meB”) in the aforementioned manner, the corresponding buses are, in effect, interconnected. Ultimate outputs from the cascaded buses 33 can be supplied, via the signal path 51, to the output channels of the individual mixing apparatus. Thus, when the mixing system operates in the “normal mode”, the buses 33 of the cascaded mixing apparatus are interconnected, so that the number of input channels handled by a single mixing apparatus can be increased. The foregoing functions in the “normal mode” are similar to the functions of the conventionally-known cascade connection.
As will be later detailed, when the instant embodiment of the mixing system operates in the “normal mode”, the console section 60 of “dmix” 100 can be used to perform not only mixing control on each of the channels of the mixer 100 but also mixing control on each of the channels of the individual engines (“meA” and “meB”). In this specification, the mixing control on “dmix” 100 by the console section 60 of “dmix” 100 will hereinafter be referred to as “local control” or “local”, while the mixing control on the engines (“meA” and “meB”) by the console section 60 of “dmix” 100 will hereinafter be referred to as “remote control” or “remote”.
FIGS. 5A and 5B are block diagrams showing example constructions for audio signal processing when the instant embodiment of the mixing system should operate in the “festival mode”. More specifically, FIG. 5A shows an example audio signal processing construction to be used in a sub-mode of the festival mode where audio signals for a performance to be exhibited on the stage are input to the engine (“meA”) 200a (hereinafter “mode A” or “festival A mode”), while FIG. 5B shows an example audio signal processing construction in a sub-mode of the festival mode where audio signals for a performance to be exhibited on the stage are input to the engine (“meB”) 200b (hereinafter “mode B” or “festival B mode”). In the embodiment of the mixing system, the sound system (speakers 600) is connected to “meB”, as noted above; namely, the plurality of output channels 34 of “meB” 200b are used as the “main output path” of the mixing system.
First, the following lines describe the signal processing construction in “mode B” shown in FIG. 5B. In “mode B”, audio signals for a performance to be exhibited on the stage are input to the plurality of input channels 32 of “meB” 200b. Thus, in this case, the audio signals input to “meB” 200b have to be supplied to the main output path, i.e. the plurality of output channels 34 of “meB” 200b. For this purpose, the mixing buses 52 of “meB” 200b and “dmix” 100 are interconnected, and the respective output channels 34 of “meB” 200b and “dmix” 100 are connected with the output of the interconnected mixing buses 52 of “meB” 200b and “dmix” 100, as shown in FIG. 5B. In this way, audio signals obtained by mixing output signals from the respective mixing buses 52 of “meB” 200b and “dmix” 100 (typically, only audio signals input to “meB” 200b) are sounded through the speakers 600.
Further, the main CUE buses 53 of “meB” 200b and “dmix” 100 are cascade-connected with each other, and the respective input channels 32 and output channels 34 of “meB” 200b and “dmix” 100 are connected to the interconnected main CUE buses 53 of “meB” 200b and “dmix” 100 as inputs to the buses 53. The monitoring output section 38a of “dmix” 100 is connected to the interconnected main CUE buses 53 as an output destination of the buses 53. The user can use a headphone set (HP) 61, connected to the monitoring output section 38a of “dmix” 100, to monitor audio signals output from the interconnected main CUE buses 53 (i.e., main output audio signals).
Meanwhile, audio signals for a succeeding performance are input to the plurality of input channels 32 of “meA” 200a. Output signals from the individual mixing buses 52 of “meA” 200a are supplied to the output channels 34 of “meA” 200a. Auxiliary CUE buses 54 of “meA” 200a and “meB” 200b are cascade-connected with each other, and the input channels 32 and output channels 34 of “meA” 200a are connected to the interconnected auxiliary CUE buses 54 as inputs to the buses 54. Monitoring output sections 38b of “meA” 200a and “meB” 200b are connected to the interconnected auxiliary CUE buses 54 as output destinations of the buses 54. In the illustrated example, the user can use a headphone set (HP) 62, connected to the monitoring output section 38b of “meB” 200b, to monitor audio signals output from the interconnected auxiliary CUE buses 54 (i.e., auxiliary output audio signals).
Namely, the main feature of “mode B” is that, for the cascade control sections 40 corresponding to the mixing buses 52, cascade setting is performed to interconnect only “meB” 200b and “dmix” 100.
The following lines describe the signal processing construction in “mode A” shown in FIG. 5A. In “mode A”, audio signals for a performance to be exhibited on the stage are input to the plurality of input channels 32 of “meA” 200a. Thus, in this case, the audio signals input to “meA” 200a have to be supplied to the main output path, i.e. the plurality of output channels 34 of “meB” 200b. For this purpose, the mixing buses 52 of “meA” 200a and “dmix” 100 are interconnected, and the respective output channels 34 of “meB” 200b and “dmix” 100 are connected to the output of the interconnected mixing buses 52 as output destinations of the buses 52, as shown in FIG. 5A. In this way, audio signals obtained by mixing output signals from the respective mixing buses 52 of “meA” 200a and “dmix” 100 (typically, only audio signals input to “meA” 200a) are sounded through the speakers 600.
Further, the main CUE buses 53 of “meA” 200a, “meB” 200b and “dmix” 100 are cascade-connected with one another, and the respective input channels 32 of “meA” 200a and “dmix” 100 and output channels 34 of “meB” 200b and “dmix” 100 are connected to the interconnected main CUE buses 53 as inputs to the buses 53. The monitoring output section 38a of “dmix” 100 is connected to the interconnected main CUE buses 53 as an output destination of the buses 53. The user can use the headphone set (HP) 61, connected to the monitoring output section 38a of “dmix” 100, to monitor audio signals output from the interconnected main CUE buses 53 (i.e., main output audio signals).
Meanwhile, audio signals for a succeeding performance are input to the plurality of input channels 32 of “meB” 200b. Output signals from the individual mixing buses 52 of “meB” 200b are supplied to the output channels 34 of “meA” 200a through the cascade connection. The auxiliary CUE buses 54 of “meA” 200a and “meB” 200b are cascade-connected with each other, and the input channels 32 of “meB” 200b and output channels 34 of “meA” 200a are connected to the interconnected auxiliary CUE buses 54 as inputs to the buses 54. The monitoring output sections 38b of “meA” 200a and “meB” 200b are connected to the interconnected auxiliary CUE buses 54 as output destinations of the buses 54. In the illustrated example, the user can use the headphone set (HP) 62, connected to the monitoring output section 38b of “meB” 200b, to monitor audio signals output from the interconnected auxiliary CUE buses 54 (i.e., auxiliary output audio signals).
Namely, the main feature of “mode A” is that, for the cascade control sections 40 corresponding to the mixing buses 52, cascade setting is performed to interconnect “meA” 200a and “dmix” 100 so that outputs of the interconnected “meA” 200a and “dmix” 100 are output from “meB” 200b and “dmix” 100. Namely, the switch sections 42 in “meA” 200a and “dmix” 100 are set to ON, while the switch section 42 in “meB” 200b is set to OFF. Further, cascade signals flowing over the signal path 51 are selected as output signals of the selector sections 46 of “meB” 200b and “dmix” 100, and the selector section 45 of “meB” 200b is set to cascade-output output signals of the mixing buses 52 of the mixing apparatus in question to “meA” 200a.
In the “festival mode” of the instant embodiment of the mixing system of the invention, control can be performed on the channels, to which are supplied audio signals for a performance currently exhibited on the stage, in response to operation, by the user, on the console section 60, while control can be performed on the channels, to which are supplied audio signals for a succeeding performance, in response to operation, by the user, on the PC (auxiliary console section) 300. In “mode B” shown in FIG. 5B, the object of remote control based on operation, by the user, on the console section 60 of “dmix” 100 is the input channels 32 and output channels 34 of “meB” 200b, and the object of remote control based on operation, by the user, on the PC 300 is the input channels 32 and output channels 34 of “meA” 200a. Further, in “mode A” shown in FIG. 5A, the object of remote control based on operation, by the user, on the console section 60 of “dmix” 100 is the input channels 32 of “meA” 200a and output channels 34 of “meB” 200b, and the object of remote control based on operation, by the user, on the PC 300 is the input channels 32 of “meB” 200b and output channels 34 of “meA” 200a.
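The remote-control mapping just stated can be tabulated directly. The table below restates the assignments from the preceding paragraph; the dictionary layout and the labels `"console"`/`"pc"` are illustrative names only.

```python
# Which apparatus's input and output channels each operation surface
# (console section 60 vs. PC 300) controls in each festival sub-mode.
CONTROL_TARGETS = {
    # (sub-mode, operation surface): apparatus whose channels are controlled
    ("B", "console"): {"inputs": "meB", "outputs": "meB"},
    ("B", "pc"):      {"inputs": "meA", "outputs": "meA"},
    ("A", "console"): {"inputs": "meA", "outputs": "meB"},
    ("A", "pc"):      {"inputs": "meB", "outputs": "meA"},
}
```

Note the asymmetry in mode A: because the main output path is fixed to the output channels of “meB”, the console controls “meA” inputs but “meB” outputs, and the PC controls the complementary pair.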
The mixing operation in the festival mode is carried out in the following manner. While a performance pertaining to one of the two performance platforms (e.g., “platformB” 400b) is being exhibited or executed on the stage, the mixing system is set in “mode B”, so that characteristics of audio signals for the currently-executed performance are controlled by the mixing processing on the individual input channels 32 and output channels 34 of “meB” 200b being controlled via the console section 60 of “dmix” 100. Further, in response to CUE instructing operation of a particular channel performed via the console section 60 of “dmix” 100, signals of the particular channel, designated from among the input channels 32 and output channels 34 of “meB” 200b, can be monitored through the monitoring output section 38a of “dmix” 100. On the other hand, characteristics of audio signals for a succeeding performance pertaining to the other performance platform (e.g., “platformA”) being kept standby on one of the wings of the stage are controlled by the mixing processing on the individual input channels 32 and output channels 34 of “meA” 200a being controlled via the PC (auxiliary operation section) 300. Further, in response to CUE instructing operation of a particular channel designated on the PC 300, signals of the particular channel, designated from among the input channels 32 and output channels 34 of “meA” 200a, can be monitored through the monitoring output section 38b of “meB” 200b.
While a performance pertaining to the other performance platform (e.g., “platformA” 400a) is being exhibited on the stage, the mixing system is switched to “mode A”, so that characteristics of audio signals for the currently-executed performance are controlled by the mixing processing on the individual input channels 32 of “meA” 200a and output channels 34 of “meB” 200b being controlled via the console section 60 of “dmix” 100. Further, in response to CUE instructing operation of a particular channel designated on the console section 60 of “dmix” 100, signals of the particular channel, designated from among the input channels 32 of “meA” 200a and output channels 34 of “meB” 200b, can be monitored through the monitoring output section 38a of “dmix” 100. On the other hand, characteristics of audio signals for a succeeding performance pertaining to the performance platform (“platformB”) being kept standby on the other wing of the stage are controlled by the mixing processing on the individual input channels 32 of “meB” 200b and output channels 34 of “meA” 200a being controlled via the PC (auxiliary operation section) 300. Further, in response to CUE instructing operation of a particular channel performed via the PC 300, signals of the particular channel, designated from among the input channels 32 of “meB” 200b and output channels 34 of “meA” 200a, can be monitored through the monitoring output section 38b of “meB” 200b.
By switching between “mode A” and “mode B”, the mixing operation for a performance pertaining to “platformA” and the mixing operation for a performance pertaining to “platformB” can be remote-controlled alternately by the console section 60 of the single mixer (“dmix”) 100. As a result, in an event, such as a festival, the instant embodiment of the mixing system permits efficient mixing operation in a case where two sets of performance platforms are provided and used alternately (i.e., where, while a performance of “platformA” is being executed, preparations for a succeeding performance are made).
FIG. 6 is a schematic outer appearance view showing principal sections of the console section of the mixer (“dmix”) 100. On the console section 60 of the mixer 100, there are provided the display 7 (see FIG. 1), a plurality of monaural channel strips 70, stereo (ST) output channel strips 71, mode change switches 72, 73 and 74, object-of-control change switches 75, 76 and 77, layer change switches 78, 79 and 80, etc.
The monaural channel strips 70 are modules for performing mixing operation on the monaural channels, such as the input channels 32 or output channels 34, and the stereo output channel strips 71 are modules for performing mixing operation on stereo output channels included in the output channels 34. The console section 60 of “dmix” 100 includes, for example, 24 monaural channel strips 70 and two (i.e., left and right) stereo output channel strips 71. Each of the monaural channel strips 70 and stereo output channel strips 71 includes: the electric fader 9 (see FIG. 1) for adjusting a sound volume; a CUE switch 81 for giving a CUE (CUE-ON) instruction to send out an audio signal of the channel; a selection switch 82 for developing in detail parameters of the channel; an ON/OFF (mute) switch 83 of the channel; and a knob operator 84 for adjusting an allocated parameter (e.g., send level to a mixing bus, gain, panning, or the like). For each of the channel strips 70 and 71, the user can make various parameter settings related to mixing processing on the channel assigned to the channel strip. Channel assignment to the channel strips 70 and 71 will be later described in detail.
Each of the mode change switches 72-74, object-of-control change switches 75-77 and layer change switches 78-80 has a light emitting element, such as an LED, incorporated therein. By illuminating each switch for which a corresponding function or parameter is ON, the instant embodiment can display a currently-selected operation mode, object of control or layer. In the illustrated example of FIG. 6, it is assumed that “festival mode A”, “Remo1” and “Layer 1” are currently selected, and each switch being illuminated is indicated in halftone. Further, each of the switches 81, 82 and 83 of the channel strips 70 and 71 has a light emitting element, such as an LED, incorporated therein; each switch for which a corresponding function or parameter is ON is illuminated. Further, a plurality of light emitting elements, such as LEDs, are disposed around each of the knob operators 84, so that the current setting of the knob operator 84 can be displayed by illumination of the light emitting elements.
The console section 60 of “dmix” 100 includes a headphone terminal 85, and a sound-volume adjusting operator member 86 for the headphone terminal 85. The headphone terminal 85 corresponds to the operator's monitor 11 of FIG. 1 or the monitoring output section 38 of FIG. 3. Further, the user can call any of various display screens to the display 7 to set any of various parameters using GUI components on the called display screen. The various display screens include a display screen of the input patch or output patch, a screen for controlling principal parameters via a plurality of channel strip images, and a screen for developing in detail the parameters of a particular channel to set detailed parameters.
The console section 60 of “dmix” 100 also includes, as a module for controlling the “scene store/recall” function, a scene number display section 87, a number increment (UP) switch 88 and decrement (DOWN) switch 89, a store switch 90 for instructing storage of a scene, and a recall switch 91 for instructing recall of a scene.
The mode change switches 72-74 are each operable to change the mode of the mixing processing, and consist of the switch 72 for selecting “mode A” of the festival mode (i.e., “festival A mode”), the switch 73 for selecting “mode B” of the festival mode (i.e., “festival B mode”) and the switch 74 for selecting the normal mode. With these mode change switches 72-74, the user can select a suitable operation mode corresponding to a desired form of usage of the mixing system. When the number of input channels of the mixer or engine is to be increased through the normal cascade connection, the normal mode is selected (i.e., the “normal” switch 74 is turned on and illuminated). Further, when the mixing system is used in the situation shown in FIG. 2 (in a music festival or the like), the festival mode is selected (i.e., the “A” or “B” switch 72 or 73 is turned on and illuminated). In the festival mode, switching can be made between “mode A” and “mode B” in accordance with the mixing apparatus to which audio signals of a performance to be exhibited on the stage are input (“meA” or “meB”).
The object-of-control change switches 75-77 are each provided for changing the object of control to be controlled via the console section 60 of the mixer 100. When the “Local” switch 75 has been operated (so that “Local” is illuminated), local control is performed on the stored contents (for controlling the DSP array 4) of the current memory of the mixer 100 in response to operation performed via the console section 60. Further, when the “Remo1” switch 76 or “Remo2” switch 77 has been operated (so that “Remo1” or “Remo2” is illuminated), the stored contents (for controlling the DSP array 16) of the current memory of another mixing apparatus (engine 200a or 200b of FIG. 2), connected to the mixer 100, are controlled in response to operation performed via the console section 60.
The layer change switches 78-80 are each provided for changing the channels to be assigned to the 24 monaural channel strips 70. When the “master1” switch 78 has been operated (so that “master1” is illuminated), a layer of 24 monaural output channels of channel numbers 1-24 (corresponding to the plurality of output channels 34 of FIG. 3) of any one of the mixing apparatus is assigned to the channel strips 70. Further, when the “layer1” switch 79 has been operated (so that “layer1” is illuminated), a layer of 24 input channels of channel numbers 1-24 (corresponding to the plurality of input channels 32 of FIG. 3) of any one of the mixing apparatus is assigned to the channel strips 70. Furthermore, when the “layer2” switch 80 has been operated (so that “layer2” is illuminated), a layer of 24 input channels of channel numbers 25-48 (corresponding to the plurality of input channels 32 of FIG. 3) of any one of the mixing apparatus is assigned to the channel strips 70.
Thus, with “dmix” 100 in the instant embodiment, a particular object of control by the console section 60 (including the monaural channel strips 70 and ST output channel strips 71) can be designated by a combination of settings of the mode change switches 72-74, object-of-control change switches 75-77 and layer change switches 78-80.
The following lines describe a specific example manner in which channels to be controlled via the monaural channel strips 70 are assigned to the channel strips 70. It is assumed here that, when the mixing system is in the normal mode, the mixer (“dmix”) 100 becomes the object of control in response to operation of the “Local” switch 75, “meB” 200b becomes the object of control in response to operation of the “Remo1” switch 76, and “meA” 200a becomes the object of control in response to operation of the “Remo2” switch 77. Then, for the object of control selected via the object-of-control change switches 75-77, a group of channels belonging to a layer selected via the layer change switches 78-80 is assigned to the monaural channel strips 70. Further, for the object of control to be controlled by any one of the ST output channel strips 71, the assignment depends on the selection by any one of the object-of-control change switches 75-77. In an alternative, “meA” 200a and “meB” 200b may be assigned to the “Remo1” switch 76 and “Remo2” switch 77, respectively, and the correspondence between the “Remo1” switch 76 and “Remo2” switch 77 and the engines may be set by the user.
Further, when the mixing system is in the normal mode (with the “normal” switch 74 illuminated), the DSP array 16 of “meB” becomes the object of control in response to operation of the “Remo1” switch 76, and the “Remo1” switch 76 is illuminated. The DSP array 16 of “meA” becomes the object of control in response to operation of the “Remo2” switch 77, and the “Remo2” switch 77 is illuminated. Further, the DSP array 4 of the mixer 100 becomes the object of control in response to operation of the “Local” switch 75, and the “Local” switch 75 is illuminated.
In the festival mode, local control is performed on the DSP array 4 of the mixer 100 in response to selection of the “Local” switch 75 in each of “mode A” and “mode B”, so that the channels of “dmix” 100 belonging to a layer selected through operation of any one of the layer change switches 78-80 are assigned to the monaural channel strips 70.
Further, in the festival mode, the object of control by the monaural channel strips 70 is determined, in correspondence with “mode A” or “mode B”, in response to selection of the “Remo1” switch 76 as shown in FIG. 7B. Namely, in “mode A”, the monaural output channels “CH1”-“CH24” of “meB” 200b are allocated to “Master1”, the input channels “CH1”-“CH24” of “meA” 200a are allocated to “Layer1”, and the input channels “CH25”-“CH48” of “meA” 200a are allocated to “Layer2”. Namely, in “mode A” of the festival mode, the input channels 32 of “meA” 200a are allocated to “Layer1” and “Layer2” while the monaural output channels 34 of “meB” 200b are allocated to “Master1”, and thus, in the illustrated example of FIG. 5A, the remote control signal line of the console section 60 of “dmix” 100 is connected to both of “meA” 200a and “meB” 200b as indicated by a double-headed arrow. Further, in “mode A” (with the “A” switch 72 illuminated), once the “Remo1” switch 76 or “Remo2” switch 77 is operated with “master1” selected (i.e., with the “master1” switch 78 illuminated), the DSP array 16 of “meB” 200b becomes the object of control, so that the “Remo1” switch 76 corresponding to the object of control is illuminated. Furthermore, in “mode A”, once the “Remo1” switch 76 or “Remo2” switch 77 is operated with “Layer1” or “Layer2” selected (i.e., with the “Layer1” or “Layer2” switch 79 or 80 illuminated), the DSP array 16 of “meA” 200a becomes the object of control, so that the “Remo2” switch 77 corresponding to the object of control is illuminated. Once the “Local” switch 75 is operated, the DSP array 4 of the mixer 100 becomes the object of control irrespective of the layer-selected state, so that the “Local” switch 75 is illuminated. Namely, in “mode A” of the festival mode, the illumination is automatically switched between the “Remo1” switch 76 and the “Remo2” switch 77 depending on whether the object of control is “meB” 200b or “meA” 200a in accordance with the currently-selected layer.
In “mode B” of the festival mode, on the other hand, the monaural output channels “CH1”-“CH24” of “meB” 200b are allocated to “Master1”. The input channels “CH1”-“CH24” of “meB” 200b are allocated to “Layer1”, and the input channels “CH25”-“CH48” of “meB” 200b are allocated to “Layer2”. Namely, in “mode B”, the input channels 32 of “meB” 200b are allocated to “Layer1” and “Layer2” while the output channels of “meB” 200b are allocated to “Master1”, and thus, in the illustrated example of FIG. 5B, the remote control signal line of the console section 60 of “dmix” 100 is connected to “meB” 200b as indicated by a single-headed arrow. Further, in “mode B” (with the “B” switch 73 illuminated), once the “Remo1” switch 76 or “Remo2” switch 77 is operated, the DSP array 16 of “meB” 200b becomes the object of control, so that the “Remo1” switch 76 corresponding to the object of control is illuminated. Furthermore, in “mode B”, once the “Local” switch 75 is operated, the DSP array 4 of the mixer 100 becomes the object of control irrespective of the layer-selected state, so that the “Local” switch 75 is illuminated. Namely, in “mode B” of the festival mode, “meB” 200b becomes the object of control irrespective of which of the “Remo1” switch 76 and “Remo2” switch 77 is operated.
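The layer-to-engine mapping of FIG. 7B described above can be summarized as a lookup table. The following Python sketch is a non-limiting illustration prepared for this description; the function name `assign_strips`, the table name and the tuple layout are assumptions that do not appear in the embodiment itself.

```python
# Illustrative summary (an assumption, not the patented implementation) of the
# festival-mode assignment of channels to the 24 monaural channel strips 70.
FESTIVAL_ASSIGNMENT = {
    # (mode, layer): (object of control, channel kind, channel numbers, lit switch)
    ("A", "master1"): ("meB", "output", range(1, 25),  "Remo1"),
    ("A", "layer1"):  ("meA", "input",  range(1, 25),  "Remo2"),
    ("A", "layer2"):  ("meA", "input",  range(25, 49), "Remo2"),
    ("B", "master1"): ("meB", "output", range(1, 25),  "Remo1"),
    ("B", "layer1"):  ("meB", "input",  range(1, 25),  "Remo1"),
    ("B", "layer2"):  ("meB", "input",  range(25, 49), "Remo1"),
}

def assign_strips(mode, layer, local_selected=False):
    """Return (object of control, channel kind, channels, switch to illuminate)."""
    if local_selected:
        # "Local" overrides the layer-dependent mapping: "dmix" itself is controlled.
        kind = "output" if layer == "master1" else "input"
        return ("dmix", kind, range(1, 25), "Local")
    return FESTIVAL_ASSIGNMENT[(mode, layer)]
```

Note how the table makes visible the behavior described above: in “mode A” the lit switch alternates between “Remo1” and “Remo2” depending on the selected layer, whereas in “mode B” every remote selection resolves to “meB”.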
In the aforementioned manner, the instant embodiment allows the user to confirm, through the illumination states of the switches 75-77, the DSP array of which mixing apparatus is currently the object of control, although the object of control by the monaural channel strips 70 may switch among the mixing apparatus in accordance with selection of an operation mode and layer.
Further, in the festival mode, the ST output channels of “dmix” 100 are assigned to the two ST output channel strips 71 in response to selection of the “Local” switch 75, as shown in FIG. 7C. Furthermore, in each of “mode A” and “mode B”, the ST output channels of “meB” 200b, used as the main outputs, are assigned to the two ST output channel strips 71.
Furthermore, when the festival mode is selected, the object of control by the application program stored in the PC 300, connected to “meA” 200a or “meB” 200b (see FIG. 5A and FIG. 5B), also switches in response to mode selection between “mode A” and “mode B”. Namely, in “mode A”, the object of control by the PC 300 is the input channels CH1-CH48 of “meB” 200b and the monaural output channels CH1-CH24 and ST output channels of “meA” 200a, while, in “mode B”, the object of control by the PC 300 is the input channels CH1-CH48 of “meA” 200a and the monaural output channels CH1-CH24 and ST output channels of “meB” 200b (see FIG. 7D).
FIG. 8 is a diagram explanatory of constructions of the current memories provided in the mixer 100 and engines 200a and 200b, as well as parameter editing performed in the current memories in the normal mode. As shown in FIG. 8, the RAM 3 of the mixer (“dmix”) 100 (see FIG. 1) includes: a local current memory (“Local”) 101 for storing the current settings of various parameters for the mixing processing in “dmix” 100; a remote current memory (“Bin′” and “Bout′”) 102 for storing the current settings of various parameters for remote-controlling “meB” cascade-connected with “dmix” 100; and a remote current memory (“Ain′” and “Aout′”) 103 for storing the current settings of various parameters for remote-controlling “meA” cascade-connected with “dmix” 100. The parameters stored in the local current memory 101 are used both in control of the mixing processing (control of the DSP array 4) of “dmix” 100 and in display control performed when the current values or settings of the mixing processing parameters of “dmix” 100 have been read out to the console section 60 of “dmix” 100. Further, the parameters stored in the remote current memories 102 and 103 are used in remote control of the corresponding engines, i.e. in display control performed when the current values or settings of the mixing processing parameters of the corresponding engines have been read out to the console section 60 of “dmix” 100.
Further, a local current memory (“Bin” and “Bout”) 201 for storing the current settings of various parameters for mixing control of “meB” 200b is provided in the RAM 15 of the engine (“meB”) 200b (see FIG. 1), and a local current memory (“Ain” and “Aout”) 202 for storing the current settings of various parameters for mixing control of “meA” 200a is provided in the RAM 15 of the engine (“meA”) 200a. The parameters stored in each of the local current memories 201 and 202 are used in control of the mixing processing (control of the DSP array 16) of the corresponding engine.
For each of the remote current memories 102 and 103 and local current memories 201 and 202 shown in FIG. 8, current memory sections (Ain, Bin, Ain′, Bin′) for storing parameters related to the input channels and current memory sections (Aout, Bout, Aout′, Bout′) for storing parameters related to the output channels are depicted separately. This is for the purpose of clarifying that the input channels and output channels of “meA” 200a and “meB” 200b are separately selected and remote-controlled by the console section 60.
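The memory organization of FIG. 8 can be pictured with a small data structure. The sketch below is purely illustrative; the class and field names (`CurrentMemory`, `remote_meA`, etc.) are assumptions chosen to mirror the labels 101, 102 and 103 in the figure.

```python
# An assumed, minimal model of the current memories of FIG. 8: one local
# current memory (101) plus one remote current memory per cascaded engine
# (102 for "meB", 103 for "meA"), each split into input- and output-channel
# sections as depicted in the figure.
from dataclasses import dataclass, field

@dataclass
class CurrentMemory:
    inputs: dict = field(default_factory=dict)   # "Ain"/"Bin" section: input-channel parameters
    outputs: dict = field(default_factory=dict)  # "Aout"/"Bout" section: output-channel parameters

@dataclass
class MixerMemories:
    local: CurrentMemory = field(default_factory=CurrentMemory)       # 101: drives DSP array 4
    remote_meB: CurrentMemory = field(default_factory=CurrentMemory)  # 102: mirror of "Bin"/"Bout"
    remote_meA: CurrentMemory = field(default_factory=CurrentMemory)  # 103: mirror of "Ain"/"Aout"
```

The separate `inputs`/`outputs` fields correspond to the separately depicted current memory sections, reflecting that input and output channels are selected and remote-controlled independently.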
FIG. 9 is a flow chart showing an example operational sequence of a cascade-connection detection event process performed by the mixer (“dmix”) 100 when a new cascade-connection detection event has been detected. Let it be assumed that “dmix” 100 constantly checks states of connection, to its cascade I/O 6 (see FIG. 1), of other mixing apparatus. Upon detection of new cascade connection, “dmix” 100 performs, for each of the buses (i.e., buses 33 in FIG. 3), cascade setting of the cascade control section 40, i.e. setting of the switch section 43 and selector sections 45 and 46, at step S1. In this way, a signal path is established for performing communication of audio signals with the mixing apparatus newly cascade-connected with “dmix” 100. Let it be assumed here that the mixing system operates in the normal mode at a cascade-connection initialization stage. Namely, at step S1, the cascade setting in the normal mode is performed.
At next step S2, a determination is made as to whether the mixing apparatus newly cascaded with “dmix” 100 is a mixer engine. If a mixing apparatus other than a mixer engine (i.e., a mixer having a console section) has been cascaded as determined at step S2, better operability will be achieved by the newly-cascaded mixer being controlled via its own console section rather than being remote-controlled via the console section of the mixer (“dmix”) 100 through the cascade connection. Thus, in the instant embodiment, operations at and after step S3 are carried out only when a mixer engine has been cascaded with the mixer 100 (YES determination at step S2), to thereby allow the engine to be remote-controlled by the mixer 100. If a mixing apparatus other than a mixer engine has been cascaded with the mixer 100 (NO determination at step S2), the cascade-connection detection event process is brought to an end without the newly-cascaded mixer being handled as the object of remote control. However, a mixing apparatus other than a mixer engine may of course be handled as the object of remote control, in which case the determination operation at step S2 may be dispensed with. In an alternative, the user may make a setting as to whether or not a mixing apparatus other than a mixer engine should be handled as the object of remote control.
At step S3, a remote current memory for, or corresponding to, the newly-cascaded engine is created in the RAM 3 of “dmix” 100, e.g. by securing in the RAM 3 a storage region to be used as such a remote current memory. In this manner, the remote current memory 102 of “meB” 200b and the remote current memory 103 of “meA” 200a can be created in “dmix” 100. At step S4, data of all parameter settings stored in the current memory of the newly-cascaded engine are received from the newly-cascaded engine, and the received data are written into the remote current memory created in the mixer 100 for the newly-cascaded engine. In this manner, the stored contents of the remote current memory 102 or 103 for the newly-cascaded engine in the mixer 100 can be made to agree with the stored contents of the local current memory 201 or 202 of the newly-cascaded engine, so that the remote control, by “dmix” 100, of the newly-cascaded engine becomes effective. After that, as long as the remote control is performed, any change made to the local current memory 201 or 202 is transmitted to the remote current memory 102 or 103 so that the same change can be made to the stored contents of the remote current memory 102 or 103; thus, control can be performed such that the stored contents of the two (i.e., local and remote) current memories constantly agree with each other.
At step S5, the “normal mode” selection switch 74 is illuminated; this is because the normal mode is set as an initial mode in the instant embodiment as noted earlier. Let it also be assumed here that “dmix” 100 transmits a cascade setting instruction to the cascaded engine to cause the cascade control section 40 of each of the buses of the engine to perform cascade setting of the normal mode.
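The cascade-connection detection sequence of FIG. 9 (steps S1-S4) can be sketched as follows. This is a hedged illustration only: the `Mixer` and `Engine` classes, their attributes and the boolean return value are stand-ins invented for this sketch, not elements of the embodiment.

```python
# Illustrative sketch (all names assumed) of FIG. 9: on detecting a newly
# cascaded device, the mixer sets up the audio signal path, and if the device
# is a mixer engine, creates a remote current memory and copies the engine's
# local current memory into it so that remote control starts from a matching state.
class Engine:
    def __init__(self, name, is_engine=True):
        self.name = name
        self.is_engine = is_engine                # False would model a console-equipped mixer
        self.local_current = {"CH1/fader": 0.0}   # stand-in for local current memory 201/202

class Mixer:
    def __init__(self):
        self.remote_current = {}                  # stand-in for remote current memories 102/103

    def set_normal_mode_cascade(self):
        pass                                      # step S1: bus switch/selector setup, omitted here

    def on_cascade_detected(self, device):
        self.set_normal_mode_cascade()            # step S1: establish the audio signal path
        if not device.is_engine:                  # step S2: only engines are remote-controlled
            return False
        # Steps S3-S4: create the remote current memory and synchronize it
        # with the engine's local current memory.
        self.remote_current[device.name] = dict(device.local_current)
        return True
```

After this synchronization, any change on either side is propagated so the two copies stay in agreement, as described above.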
FIG. 10 is a flow chart showing an example operational sequence of a mode change process performed by the mixer (“dmix”) 100 when a mode change has been instructed by operation of any one of the mode change switches 72-74. Once a mode change is instructed by operation of any one of the mode change switches 72-74, the mixer 100 transmits a cascade setting change instruction, corresponding to the instructed mode, to all engines cascade-connected with the mixer 100, at step S6. Then, at step S7, cascade setting is performed on the cascade control section 40 per bus 33 of “dmix” 100 in accordance with the instructed mode. In each of the cascade-connected engines too, cascade setting is performed on the cascade control section 40 per bus of the engine on the basis of the received cascade setting change instruction. In this manner, a signal path is established in the mixing system in accordance with the user-selected mode (see FIGS. 4, 5A and 5B).
FIG. 11 is a flow chart showing an example operational sequence of an operator operation event process performed by the mixer (“dmix”) 100 in response to generation of an operation event of any one of the operator members provided on the console section 60 of “dmix” 100. Here, the “operation event” means operation of any one of the operator members for changing the value of a parameter related to the mixing processing, such as operation of any one of the electric faders 9 and knob operators 84 or parameter setting operation via any one of the GUI components of the display 7. Upon detection of an operation event of any one of the operator members on the console section 60 of “dmix” 100, “dmix” 100 determines what is the current object of control (by checking selection states of the object-of-control change switches 75-77) at step S8 of FIG. 11.
If the current object of control is “Local” (YES determination at step S8), once mixing operation (control operation of “Local” in FIG. 8) is performed on the console section 60, the value of a parameter, corresponding to the mixing operation, of the parameters currently stored in the local current memory 101 is updated at step S9, so that the signal processing by the DSP array 4 will be controlled on the basis of the updated stored contents of the local current memory 101. Further, at step S10, the corresponding parameter value displayed on the console section is updated on the basis of the parameter value updated at step S9 above. The parameter display updating at step S10 includes illumination control of the illuminating elements disposed around the corresponding knob operator member 84, updating of the corresponding parameter indication (e.g., a visual indication of a value indicated within a numerical value box, an operating position of the corresponding GUI component and the like) on the screen of the display 7, electric control of the operating position of the corresponding fader operator, etc.
If the current object of control is “Remote” (NO determination at step S8), the engine to be controlled is identified at step S11. Then, at step S12, a remote control signal instructing a value change of the parameter corresponding to the mixing operation on the console section 60 (i.e., parameter-value-change instructing signal or parameter-value-change instruction) is transmitted to the identified cascade-destination engine via the cascade connection. Namely, the parameter-value-change instructing signal includes information that designates the cascade-destination engine to be controlled, so that, on the basis of the information designating the cascade-destination engine, the engine in question can receive, via the cascade connection, the parameter-value-change instructing signal transmitted thereto.
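The branch of FIG. 11 just described (local edit versus remote instruction) can be sketched in a few lines. The function name, message format and `send` callback below are assumptions made for illustration; they do not come from the embodiment.

```python
# Illustrative dispatch (names assumed) of the FIG. 11 operator-event process:
# a "Local" edit updates the mixer's own current memory, while a remote edit
# is forwarded to the identified engine as a parameter-value-change instruction.
def on_operator_event(object_of_control, param, value, local_current, send):
    if object_of_control == "Local":
        # Steps S8-S10: update local current memory 101; DSP array 4 and the
        # console display follow this memory.
        local_current[param] = value
        return ("display_updated", param, value)
    # Steps S11-S12: identify the engine and transmit the change instruction
    # over the cascade connection (modeled here by the send callback).
    engine = object_of_control
    send(engine, {"param": param, "value": value})
    return ("instruction_sent", engine, param)
```

Note that in the remote branch nothing is written locally yet; the mirror copy is only updated when the engine returns its “parameter value change result”, as described next.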
In FIG. 8, there is shown an example of the parameter editing process based on remote control in the normal mode, where any of parameter settings related to the input channels of “meA” has been changed via the console section of “dmix” 100. More specifically, FIG. 8 shows the example where, in the normal mode, “Remo2” has been selected as the object of control (i.e., the “Remo2” switch 77 has been illuminated) and “layer1” or “layer2” has been selected as the layer (i.e., the “layer1” switch 79 or “layer2” switch 80 has been selected). Once any one of the operator members of the monaural channel strips 70 is operated on the console section 60 of “dmix” 100 in this state, a parameter setting related to the input channel of “meA” 200a is changed (i.e., control operation of “Ain”), and then a parameter-value-change instructing signal corresponding to the control operation of “Ain” is transmitted to “meA” 200a via the cascade connection. On the basis of the parameter-value-change instructing signal received, “meA” 200a updates the value of the corresponding parameter in the local current memory 202 (i.e., one of the parameters contained in the “Ain” current memory section). Such updating of the local current memory 202 is reflected in the signal processing by the DSP array 16 of the engine (“meA”) 200a. After completion of the updating of the local current memory 202, “meA” 200a transmits the updated value of the parameter, i.e. the “parameter value change result”, to “dmix” 100.
FIG. 12 is a flow chart showing an example operational sequence of a parameter value change result reception event process performed by the mixer (“dmix”) 100 when the “parameter value change result” has been received from the engine cascaded with the mixer 100. On the basis of the received parameter value change result, “dmix” 100 updates the value of the corresponding parameter in the remote current memory 103 of “meA” 200a (i.e., one of the parameters contained in the “Ain′” current memory section), at step S13. Then, at step S14, a visual indication of the parameter value is updated on the console section 60 of “dmix” 100. Similarly to the one explained above in relation to step S10, the parameter value indication updating at step S14 includes illumination control of the illuminating elements disposed around the corresponding knob operator member 84, updating of the corresponding parameter indication on the screen of the display 7 (e.g., updating of a visual indication of the value indicated within the corresponding numerical value box, the operating position of the corresponding operator member image, GUI component and the like), electric control of the operating position of the corresponding fader operator, etc. Through the operations of FIG. 12, the “parameter value change result” in the engine cascaded with “dmix” 100 can be reflected in the console section of “dmix” 100.
Similarly, in a case where an engine (“meA” 200a or “meB” 200b) has been controlled via the PC 300, the stored contents of the local current memory 201 or 202 are updated, so that a “parameter value change result” based on the updating is transmitted to “dmix” 100. Thus, “dmix” 100 performs the aforementioned process of FIG. 12 on the basis of the “parameter value change result” received from the engine 200a or 200b. In this case, however, depending on the local/remote setting or layer setting in the console section 60, i.e. if the engine in question or the layer thereof is not currently selected on the console section 60, updating of a visual indication, on the console section 60, corresponding to the parameter value change result (step S14 of FIG. 12) is not effected at this time, although the corresponding value in the remote current memory is updated (step S13 of FIG. 12).
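The round trip described in relation to FIGS. 8 and 12 can be condensed into two helper functions. Both function names and the message dictionaries are illustrative assumptions; the point of the sketch is only that the engine's local current memory is authoritative and the mixer's remote copy is updated from the returned result.

```python
# Sketch (names assumed) of the remote-edit round trip: the engine applies a
# parameter-value-change instruction to its local current memory (201/202) and
# returns a "parameter value change result"; the mixer mirrors that result into
# the matching remote current memory (102/103) and, if the channel is currently
# shown, refreshes the console display (step S14).
def engine_apply(local_current, instruction):
    local_current[instruction["param"]] = instruction["value"]  # drives DSP array 16
    return {"param": instruction["param"],
            "value": local_current[instruction["param"]]}       # the change result

def mixer_on_change_result(remote_current, result, selected_on_console=True):
    remote_current[result["param"]] = result["value"]           # step S13: mirror update
    return selected_on_console                                  # step S14 performed only if shown
```

The `selected_on_console` flag models the PC 300 case above: the remote memory is always updated, but the display refresh is skipped when the affected engine or layer is not currently selected.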
Next, with reference to FIGS. 13A and 13B, a description will be given about examples of the parameter editing process based on remote control in the festival mode. FIG. 13A shows an example of the parameter editing process based on remote control in “mode A” of the festival mode, while FIG. 13B shows another example of the parameter editing process based on remote control in “mode B” of the festival mode. Whereas the parameter editing process based on remote control in the festival mode is basically similar to the parameter editing process in the normal mode explained above in relation to FIGS. 8 and 12, the parameter editing process in the festival mode is characterized by its way of setting the object of remote control.
In “mode A”, as shown in FIG. 13A, once any one of the operator members of the monaural channel strips 70 on the console section 60 of “dmix” 100 is operated when “layer1” or “layer2” is selected as the object of control by the console section 60 of “dmix” 100 (i.e., the “layer1” or “layer2” switch 79 or 80 and the “Remo2” switch 77 are illuminated), a parameter setting related to the input channel of “meA” 200a is changed (control operation of “Ain”). Then, a signal instructing a parameter value change corresponding to the “Ain” control operation is transmitted to “meA” 200a via the cascade connection (step S12 of FIG. 11). On the basis of the parameter-value-change instructing signal received, “meA” 200a updates the value of the corresponding parameter in the local current memory 202 (i.e., one of the parameters contained in the “Ain” current memory section). Such updating of the local current memory 202 is reflected in the signal processing by the DSP array 16 of “meA” 200a. “meA” 200a transmits the updated value of the parameter, i.e. the “parameter value change result”, to “dmix” 100. On the basis of the parameter value change result received, “dmix” 100 updates the value of the corresponding parameter in the remote current memory (“Ain′”) 103 of “meA” 200a (step S13 of FIG. 12). Then, on the basis of the updating, a visual indication of the parameter value is updated on the console section 60 of “dmix” 100 (step S14 of FIG. 12).
Further, once any one of the operator members of the monaural channel strips 70 on the console section 60 of “dmix” 100 is operated when “master1” is selected as the object of control by the console section 60 of “dmix” 100 (i.e., the “master1” switch 78 and the “Remo1” switch 76 are illuminated) in the example of FIG. 13A, a parameter setting related to the output channel of “meB” 200b is changed (control operation of “Bout”). Then, a signal instructing a parameter value change corresponding to the “Bout” control operation is transmitted to “meB” 200b via the cascade connection. On the basis of the parameter-value-change instructing signal received, “meB” 200b updates the value of the corresponding parameter in the local current memory 201 (i.e., one of the parameters contained in the “Bout” current memory section). Such updating of the local current memory 201 is reflected in the signal processing by the DSP array 16 of “meB” 200b. “meB” 200b transmits the updated value of the parameter, i.e. the “parameter value change result”, to “dmix” 100. On the basis of the parameter value change result received, “dmix” 100 updates the value of the corresponding parameter in the remote current memory (“Bout′”) 102 of “meB” 200b. Also, on the basis of the updating, a visual indication of the parameter value is updated on the console section 60 of “dmix” 100.
In “mode B”, as shown in FIG. 13B, once any one of the operator members of the monaural channel strips 70 on the console section of “dmix” 100 is operated when “layer1” or “layer2” is selected as the object of control by the console section 60 of “dmix” 100 (i.e., the “layer1” or “layer2” switch 79 or 80 and the “Remo1” switch 76 are illuminated), a parameter setting related to the input channel of “meB” 200b is changed (control operation of “Bin”). Then, a signal instructing a parameter value change corresponding to the “Bin” control operation is transmitted to “meB” 200b via the cascade connection. On the basis of the parameter-value-change instructing signal received, “meB” 200b updates the value of the corresponding parameter in the local current memory 201 (i.e., one of the parameters contained in the “Bin” current memory section). Then, “meB” 200b transmits the updated value of the parameter, i.e. the “parameter value change result”, to “dmix” 100. On the basis of the parameter value change result received, “dmix” 100 updates the value of the corresponding parameter in the remote current memory (“Bin′”) 102 of “meB” 200b. Then, on the basis of the updating, a visual indication of the parameter value is updated on the console section 60 of “dmix” 100. Similar operations are carried out in response to control operation of “Bout”; namely, if control operation of “Bout” has been performed when “master1” is selected (i.e., the “master1” switch 78 and the “Remo1” switch 76 are illuminated), the value of the corresponding parameter in the local current memory 201 (i.e., one of the parameters contained in the “Bout” current memory section) is changed in response to a parameter value change instruction given via the console section of “dmix” 100, and the parameter value change result is returned to “dmix” 100 so that it is reflected on the display on the console section of “dmix” 100.
In FIG. 13A, illustration of the remote current memory sections “Bin′” and “Aout′” corresponding to the input channels of “meB” and the output channels of “meA”, which are not the object of remote control by “dmix” 100 in “mode A”, is omitted for clarity. In “mode A”, as noted above, the mixing processing on the input channels of “meB” and the output channels of “meA” (i.e., mixing processing on audio signals related to a succeeding performance) can be controlled from the PC (i.e., auxiliary operation section) 300 (see FIG. 5A etc.). Further, in FIG. 13B, illustration of the remote current memory sections “Ain′” and “Aout′” corresponding to the input channels and output channels of “meA”, which are not the object of remote control by “dmix” 100 in “mode B”, is omitted for clarity. In “mode B”, the mixing processing on the input channels and output channels of “meA” can be controlled from the PC 300 (see FIG. 5B etc.).
In the instant embodiment of the mixing system, as set forth above in relation to FIGS. 8, 11, 12, 13A and 13B, once operation is performed on the console section 60 of the mixer (“dmix”) 100 when remote control is designated as the object of control (through operation of the “Remo1” switch 76 or “Remo2” switch 77), a control signal (change instructing signal) is transmitted to one of the engines (“meA” 200a or “meB” 200b) that is the object of control so that a parameter value in the local current memory 201 or 202 of the engine (“meA” 200a or “meB” 200b) is changed, and then the parameter value change result, indicative of the result of the parameter value change in the local current memory 201 or 202, is transmitted to “dmix” 100. In this way, the result of the parameter value change made in the engine (“meA” 200a or “meB” 200b), which is the object of control, can be reflected in the console section of “dmix” 100; here, the reflection in “dmix” 100 includes updating of the visual indication of the corresponding parameter on the screen of the display 7 of the console section, updating of the display pertaining to the corresponding operator member (e.g., illumination of LEDs), control of the operating position of the corresponding electric fader 9, etc.
With reference to FIGS. 14 and 15, the following lines describe an object-of-control change process responsive to operation of any one of the object-of-control change switches 75-77. When the object of control has been changed from “Remo1” or “Remo2” to “Local”, the mixer (“dmix”) 100 updates the display on the console section 60 and performs electric control on the operating position of the electric fader 9 of each of the channel strips 70 and 71 in accordance with the stored contents of the local current memory 101 (step S15 of FIG. 14). When the object of control has been changed from “Local” to “Remo1” or “Remo2”, “dmix” 100 identifies the remote current memory 102 or 103 storing parameters to be read out to the console section of “dmix” 100, at step S16 of FIG. 15. Then, at step S17, “dmix” 100 updates the display on the console section and performs electric control on the operating position of the electric fader 9 of each of the channel strips 70 and 71 in accordance with the stored contents of the remote current memory 102 or 103.
Thus, when the mixer (“dmix”) 100 has changed the object of control, the instant embodiment of the mixing system allows the current parameter settings of a mixing apparatus, which becomes a new object of control, to be reflected in the console section 60 of “dmix” 100. Further, by providing the three current memories, i.e. the local current memory 101, the remote current memory 102 of “meB” 200b and the remote current memory 103 of “meA” 200a, and by switching among the three current memories 101-103, display updating and switching operations responsive to the object-of-control change can be performed promptly.
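The object-of-control change of FIGS. 14 and 15 amounts to selecting which of the three current memories feeds the console. The sketch below is an illustrative assumption (the dictionary keys and function name are invented for this description) showing why keeping all three memories resident makes the switch fast: the refresh is a lookup and copy, with no communication with the engine required.

```python
# Illustrative sketch (names assumed) of FIGS. 14/15: on an object-of-control
# change, the console display and motor-fader positions are refreshed from
# whichever of the three current memories (101, 102, 103) matches the new
# selection.
def on_object_change(new_object, memories):
    source = {
        "Local": "local",        # step S15: local current memory 101
        "Remo1": "remote_meB",   # steps S16-S17: remote current memory 102
        "Remo2": "remote_meA",   # steps S16-S17: remote current memory 103
    }[new_object]
    # The returned snapshot stands in for the display/fader refresh of steps S15/S17.
    return dict(memories[source])
```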
Lastly, a description will be given about control for interlinking (interlocking), between mixing apparatus, of a scene store/recall function (i.e., scene store/recall interlocking function) performed in the instant embodiment of the mixing system. The “scene store/recall function” is a function for collectively reproducing settings of given mixing parameters by storing the current settings of parameters, retained in the current memory, into the scene memory as a set of scene data of a scene and reading out (recalling) the stored scene data from the scene memory to the current memory, as noted earlier.
FIGS. 16A and 16B are diagrams explanatory of constructions of the scene memories and scene recall processes; more specifically, FIG. 16A is explanatory of the scene recall process in the normal mode, while FIG. 16B is explanatory of the scene recall process in the festival mode. As shown in FIGS. 16A and 16B, the scene memories 110, 210 and 211 are provided in the respective flash memories 12 and 14 of “dmix” 100, “meB” 200b and “meA” 200a. Each of the scene memories 110, 210 and 211 has stored therein a plurality of sets of scene data, representative of a plurality of scenes (six scenes in each of the illustrated examples), of the corresponding mixing apparatus. The plurality of sets of scene data stored in each of the scene memories 110, 210 and 211 are assigned respective scene numbers “1”-“6” and managed with these scene numbers. Further, in the figures, the scene data related to the input channel group are each indicated with a suffix “i” (e.g., “S4i”), and the scene data related to the output channel group are each indicated with a suffix “o” (e.g., “S4o”). This is because, in some cases, only scene data related to the input channel group or only scene data related to the output channel group should be recalled in the festival mode, as will be later detailed. Therefore, in the instant embodiment, the scene data related to the input channel group and the scene data related to the output channel group are managed separately even in a single scene. Further, the reason why the scene memories 210 and 211 are provided in “meB” 200b and “meA” 200a, which have no console section, is to allow these engines to be used even when the engines are not cascade-connected with the mixer 100.
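The scene memory organization of FIGS. 16A and 16B, with input- and output-channel scene data kept separate within each scene, can be modeled as follows. The dictionary layout and function names are assumptions for illustration only.

```python
# Minimal model (all names assumed) of a scene memory of FIGS. 16A/16B: each
# scene number holds its input-channel data ("S1i"..."S6i") and output-channel
# data ("S1o"..."S6o") separately, so that only one half can be recalled when
# the festival mode requires it.
def make_scene_memory(num_scenes=6):
    return {n: {"i": None, "o": None} for n in range(1, num_scenes + 1)}

def store_scene(memory, number, current_in, current_out):
    # Snapshot the current memory's input and output sections into the scene.
    memory[number] = {"i": dict(current_in), "o": dict(current_out)}

def recall_scene(memory, number, part="both"):
    scene = memory[number]
    if part == "input":
        return dict(scene["i"])     # recall only the input-channel group
    if part == "output":
        return dict(scene["o"])     # recall only the output-channel group
    return {**scene["i"], **scene["o"]}
```

The separate `"i"`/`"o"` halves mirror the suffix convention in the figures and make the partial recall described later a simple selection rather than a filtering pass over mixed data.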
Further, the reason why “dmix” 100 includes only the remote current memories 103 and 102 but no remote scene memory is that: 1) the scene memory is great in size, and, even if a remote scene memory were provided in “dmix” 100, the only advantage, a not-so-significant one, would be that displays could be made promptly in “dmix” 100 at the time of scene recall, with no change in the scene recall speed; and 2) if a remote scene memory of a great size were provided, a longer time would be required for the synchronizing operation (step S5) at the beginning of cascade connection.
With reference to the construction of the console section shown in FIG. 6, the following lines describe an operational sequence in which the user instructs storage or recall of a scene. First, once the user of “dmix” 100 designates a desired scene number using the number increment (UP) switch 88 and/or decrement (DOWN) switch 89, the designated scene number is displayed blinkingly on the scene number display section 87. Then, by operating the scene store switch 90, the user can instruct storing of the current settings of the individual mixing apparatus of the mixing system as a set of scene data of the designated scene number. Further, by operating the scene recall switch 91, the user can recall the scene data of the designated scene number to the individual mixing apparatus (“dmix”, “meA” and “meB”) of the mixing system.
Next, with reference to a flow chart of FIG. 17, a description will be given about an example operational sequence of a process performed by the mixer (“dmix”) 100 in response to a scene data store instruction given by the user. Once a scene store instruction event is generated in response to the user operating the scene store switch 90, “dmix” 100 identifies cascade (delivery)-destination mixing apparatus to which the scene store instruction is to be transmitted (i.e., cascade destinations of the scene store instruction) and identifies content of the scene store in the cascade-destination mixing apparatus, at step S18. Here, the “destinations of the scene store instruction” are mixing apparatus (“meA” 200a and “meB” 200b) where the scene store operation should be performed in an interlocked manner. Further, the “content of the scene store” is information indicating whether the scene to be stored in the cascade destinations is the stored contents of the current memory related to only the input channel group, the stored contents of the current memory related to only the output channel group, or the stored contents of the current memories related to both of the input and output channel groups.
At step S19, “dmix” 100 transmits a scene store content instruction to the identified cascade-destination apparatus so as to cause the cascade-destination apparatus to store the content of the scene store with the user-designated scene number. Further, at step S20, “dmix” 100 stores in the scene memory 110 the current stored contents of the local current memory 101 as scene data of the user-designated scene number.
The cascade-destination mixing apparatus (“meA” 200a and “meB” 200b) receive the scene store content instruction transmitted from “dmix” 100 at step S19 above, and then, in response to the received scene store content instruction, the destination mixing apparatus store, in their respective scene memories 210 and 211, part (corresponding to only the input or output channel group) or the whole of the current stored contents of the respective local current memories 201 and 202. In this way, the current stored contents of the respective local current memories can be stored in “dmix” 100, “meA” 200a and “meB” 200b as scene data of the same scene number. Namely, the scene store operation can be interlinked or interlocked among “dmix” 100, “meA” 200a and “meB” 200b.
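For illustration only, the interlocked store sequence of steps S18-S20 — the mixer transmitting a store content instruction to each cascade destination and then storing its own current memory under the same scene number — can be sketched as follows; the class and method names are hypothetical:

```python
class Engine:
    """Cascade-destination mixing apparatus ("meA"/"meB") with its own scene memory."""

    def __init__(self, name):
        self.name = name
        self.current = {"input": {}, "output": {}}
        self.scene_memory = {}

    def on_store_instruction(self, number, content):
        # Store only the instructed channel groups of the local current memory
        # as scene data of the designated number.
        self.scene_memory[number] = {g: dict(self.current[g]) for g in content}


class Mixer:
    """Cascade-master mixer ("dmix") provided with the console section."""

    def __init__(self, engines):
        self.engines = engines
        self.current = {"input": {}, "output": {}}
        self.scene_memory = {}

    def store_scene(self, number, content=("input", "output")):
        # Corresponds to S19: transmit the store content instruction to the
        # identified cascade destinations.
        for engine in self.engines:
            engine.on_store_instruction(number, content)
        # Corresponds to S20: store the local current memory contents as scene
        # data of the same user-designated scene number.
        self.scene_memory[number] = {g: dict(self.current[g])
                                     for g in ("input", "output")}
```

After `store_scene(4)`, every apparatus holds scene data under the same number — the interlocked store described above.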
Next, with reference to a flow chart of FIG. 18 as well as FIGS. 16A and 16B, a description will be given about an example operational sequence of a process performed by the mixer (“dmix”) 100 in response to a scene data recall instruction given by the user. Once a scene recall instruction event is generated in response to the user operating the scene recall switch 91, “dmix” 100 identifies cascade-destination mixing apparatus to which the scene recall instruction is to be transmitted (i.e., destinations of the scene recall instruction) and identifies content of the scene recall in the cascade-destination mixing apparatus, at step S21. Here, the “content of the scene recall” is information indicating whether the scene to be recalled in the cascade destinations is of scene data related to only the input channel group, scene data related to only the output channel group, or scene data related to both of the input and output channel groups.
At step S22, “dmix” 100 transmits a scene recall content instruction to the identified cascade-destination apparatus so as to cause the cascade-destination apparatus to recall the scene data of the user-designated scene number in accordance with the instructed content of the scene recall. FIGS. 16A and 16B show a case where the scene data set of scene number “4” has been instructed to be recalled (i.e., a “scene 4 recall instruction” has been given). At step S23, “dmix” 100 performs an operation for reading out scene data of the user-designated scene number from the scene memory 110 and then writing the read-out scene data into the local current memory 101. The stored contents of the local current memory 101, having been changed or updated with the read-out scene data, are reflected in the control of signal processing by the DSP array 4 and in the control of the display when the stored contents of the local current memory 101 have been read out to the console section 60 of “dmix” 100.
The cascade-destination mixing apparatus (“meA” 200a and “meB” 200b), as shown in FIGS. 16A and 16B, receive the “scene 4 recall instruction”, read out the scene data of the designated scene number (“4” in the illustrated example) from the respective scene memories 211 and 210 on the basis of the received “scene 4 recall instruction”, and write the read-out scene data into the respective local current memories 202 and 201. The stored contents of the local current memories 202 and 201, having been updated with the read-out scene data, are reflected in the control of signal processing by the respective DSP arrays 16.
In the normal mode, as shown in FIG. 16A, the “scene 4 recall instruction” to “meA” 200a and “meB” 200b includes a content instruction instructing the scene data S4i and S4o related to both the input channel group and the output channel group. Thus, in “meA” 200a and “meB” 200b, the scene data S4i and S4o are recalled from the scene memories 211 and 210 to the respective local current memories 202 and 201.
In the festival mode, as shown in FIG. 16B, the “scene 4 recall instruction” to “meB” 200b includes a content instruction instructing only the scene data S4o related to the output channel group, and the “scene 4 recall instruction” to “meA” 200a includes a content instruction instructing only the scene data S4i related to the input channel group. Thus, in the festival mode, “meB” 200b reads out and recalls the scene data S4o from the scene memory 210 to the local current memory 201 (current memory section “Bo”), while “meA” 200a reads out and recalls the scene data S4i from the scene memory 211 to the local current memory 202 (current memory section “Ai”).
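For illustration only, the mode-dependent choice of recall content in the two preceding paragraphs can be sketched as a small dispatch function; the mode and engine names mirror the description, while the function itself is a hypothetical assumption:

```python
def recall_content(mode, engine_name):
    """Which channel groups a scene recall instruction carries for each engine.

    In the normal mode (FIG. 16A) both groups are instructed everywhere;
    in the festival mode (FIG. 16B) "meA" is instructed only the input-group
    data (S4i) and "meB" only the output-group data (S4o).
    """
    if mode == "normal":
        return ("input", "output")
    if mode == "festival":
        return ("input",) if engine_name == "meA" else ("output",)
    raise ValueError(f"unknown mode: {mode}")
```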
Once the stored contents of the local current memory 202 or 201 are updated by the scene recall, each of “meA” 200a and “meB” 200b returns the updated results of the individual parameter values (“recall results”) to “dmix” 100. In the normal mode shown in FIG. 16A, the entire stored contents (the whole of one scene) of the local current memories 202 and 201 are returned, as the “recall results”, from “meA” 200a and “meB” 200b to “dmix” 100. In the festival mode shown in FIG. 16B, the stored contents (only the part of one scene related to the output channel group) of the local current memory 201 (current memory section “Bo”) are returned, as the “recall results”, from “meB” 200b to “dmix” 100, while the stored contents (only the part of one scene related to the input channel group) of the local current memory 202 (current memory section “Ai”) are returned, as the “recall results”, from “meA” 200a to “dmix” 100.
Referring back to FIG. 18, “dmix” 100 receives the “recall results” (i.e., updated parameter settings) at step S24, and then updates, at step S25, the corresponding parameter values in the remote current memories 102 and 103 on the basis of the received “recall results” (updated parameter settings). More specifically, in the normal mode shown in FIG. 16A, the stored contents of the remote current memory section B′ of the memory 102 corresponding to “meB” 200b are updated on the basis of the recall results from “meB” 200b, while the stored contents of the remote current memory section A′ corresponding to “meA” 200a are updated on the basis of the recall results from “meA” 200a. In the festival mode shown in FIG. 16B, on the other hand, the stored contents of the output-channel-related remote current memory section Bo′ of the current memory 102 corresponding to “meB” 200b are updated on the basis of the recall results from “meB” 200b, while the stored contents of the input-channel-related remote current memory section Ai′ of the current memory 103 corresponding to “meA” 200a are updated on the basis of the recall results from “meA” 200a.
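For illustration only, the merging of returned recall results into the remote current memories at steps S24-S25 can be sketched as follows, under the assumption that each memory section is represented as a plain dictionary; the function name is hypothetical:

```python
def apply_recall_results(remote_current, recall_results):
    """Update only the memory sections present in the returned recall results.

    In the normal mode the results cover a whole scene (sections A'/B');
    in the festival mode only partial sections (Ai'/Bo') arrive, so the
    sections not covered by the results keep their previous values.
    """
    for section, params in recall_results.items():
        remote_current.setdefault(section, {}).update(params)
    return remote_current
```

A usage sketch: in the festival mode, results from “meA” covering only section Ai′ leave the other sections of the remote current memory untouched.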
Then, at step S26, “dmix” 100 performs display updating control on the console section 60 and electric control of the operating positions of the electric faders 9 of the individual channel strips 70 and 71 on the basis of the stored contents of whichever of the local current memory 101 and remote current memories 102 and 103 corresponds to the current object of control by the console section 60.
Thus, the instant embodiment of the mixing system allows the recall results of the cascade-destination mixing apparatus (“meA” 200a and “meB” 200b), i.e., mixing apparatus other than “dmix” 100 in the system, to be reflected in the console section 60 of “dmix” 100 (screen display, parameter setting display, operating positions of the operator members, etc.), by causing the cascade-destination mixing apparatus to perform the scene recall in response to the scene recall instruction given from “dmix” 100 and to return the scene recall results to “dmix” 100.
The scene recall interlocking control has been explained above, with reference to FIGS. 16A and 16B, on the assumption that a “scene recall link parameter”, which sets whether or not the cascade-destination engines (“meA” 200a and “meB” 200b) should perform a scene recall in interlocked relation with a scene recall of the mixer (“dmix”) 100, is set ON in each of the engines. Namely, an engine where the “scene recall link parameter” is set OFF is not interlocked with a scene recall instructed via the mixer (“dmix”) 100. Let it also be assumed here that, when a scene recall has been performed independently in an engine where the “scene recall link parameter” is set OFF, the results of updating, by the scene recall, of the stored contents of the current memory (i.e., recall results) are returned to “dmix” 100, and “dmix” 100 then updates the remote current memory of that engine on the basis of the returned recall results.
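For illustration only, the gating effect of the “scene recall link parameter” can be sketched as a simple filter over the cascade destinations; the function name and dictionary shape are hypothetical:

```python
def engines_to_interlock(link_parameters):
    """Return the names of engines whose scene recall link parameter is ON.

    An engine with the parameter OFF is excluded from interlocked recalls
    instructed via the mixer, although its own independent recalls still
    report their results back so the remote current memory stays in sync.
    """
    return [name for name, link_on in link_parameters.items() if link_on]
```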
In the case where the PC 300 is connected to the other I/O sections 21 of the engines 200a and 200b or to the other I/O 10 of the mixer 100 so that the engines 200a and 200b or mixer 100 can be remote-controlled from the PC 300, operations similar to those explained above in relation to FIGS. 8, 13, 16A and 16B are performed. In such a case, the PC 300 includes two remote current memories for remote-controlling the current memory 101 of the mixer 100 and for remote-controlling the local current memories 201 and 202 of the engines 200b and 200a.
In the case where the stored contents of the current memory 101, 201 or 202 of the mixer 100 or engine 200b or 200a are updated in response to operation on the console section 60 of the mixer 100, the “parameter value change result” is transmitted to the PC 300 as well, so that the corresponding remote current memory within the PC 300 is also updated.
When operation (e.g., control operation of “Ain”) has been performed on an operation screen of the PC 300, a parameter value change instruction corresponding to the operation is transmitted, for example, to the engine 200a via the other I/O 21 or 10 and the cascade connection, so that the corresponding parameter stored in the current memory of the engine 200a is updated. Further, the “parameter value change result” is transmitted to the PC 300 and mixer 100, and the PC 300 and mixer 100, having received the “parameter value change result”, update the stored contents of the corresponding current memories provided therein.
In the normal mode, the PC 300 can set, as its object of remote control, all of the current memories of the mixer 100 and engines 200b and 200a, while, in the festival mode, the PC 300 can set, as its object of remote control, only those limited parts of the current memories which are not the object of control by the console section of the mixer 100. Namely, in “mode A” of the festival mode, the PC 300 can set, as its object of remote control, the current memory section Bin of the engine 200b and the current memory section Aout of the engine 200a, while, in “mode B” of the festival mode, the PC 300 can set, as its object of remote control, the current memory sections Ain and Aout of the engine 200a.
According to the instant embodiment of the mixing system of the invention, as set forth above, the mixing processing of the mixer engines (“meA” and “meB”) 200a and 200b, cascade-connected with the mixer (“dmix”) 100, is remote-controlled from the console section 60 of the mixer 100, and the result of the control is reflected in the console section 60 of the mixer 100; thus, the result of the control can be confirmed via the console section of the mixer 100. When the object of control has been switched or changed, the current stored contents of the current memory of the mixing apparatus selected as the new object of control (e.g., local current memory 101 or remote current memories 102 and 103) can be reflected in the console section 60 of the mixer 100. Also, when set values (settings) of parameters stored in any of the mixer engines 200a and 200b, cascade-connected with the mixer (“dmix”) 100, have been updated by the scene recall control, the updated results (namely, current parameter settings) can be reflected in the console section 60 of the mixer 100. Thus, the instant embodiment of the mixing system can achieve a superior advantageous benefit that, while the current parameter settings (stored contents of the current memory) of the mixing processing of one engine (first mixing apparatus), selected as the object of remote control, are being reflected in the console section 60, the mixing processing of another engine (second mixing apparatus) can be remote-controlled.
Further, the user can use the channel strips 70 and 71, provided on the console section 60 of the mixer 100, to adjust channel-specific mixing processing parameters of the other mixing apparatus (“meA” 200a and “meB” 200b) in generally the same manner as when adjusting mixing processing parameters of the mixer 100. Thus, the instant embodiment of the mixing system can achieve another superior advantageous benefit that all of the mixing processing in the mixing system can be controlled through unified operation.
Further, in the festival mode, there can be achieved an advantageous benefit that, while audio signals for a current performance input to one of the engines (i.e., “meA” 200a or “meB” 200b) are being subjected to mixing control, in response to operation via the console section 60 of the mixer 100, and output to the main output path (sounded through the main speaker), audio signals for a succeeding performance can be input to the other engine (i.e., “meB” 200b or “meA” 200a), subjected to mixing processing and output to the auxiliary output path (monitored or confirmed through the headphone set). Furthermore, by switching between “mode A” and “mode B” in accordance with the destination (“meA” 200a or “meB” 200b) of the audio signals for the current performance, the instant embodiment allows two different mixing processes to be performed efficiently by use of a single mixer.
The embodiment of the mixing system has been described above as comprising one mixer 100 provided with the console section 60 and engines 200a and 200b with no console section, and as constructed in such a manner that the engines 200a and 200b with no console section are remote-controlled from the single mixer 100 with the console section 60. Alternatively, the object of remote control may be a mixer provided with a console section rather than a mixer engine. Further, the number of mixing apparatus constituting the mixing system may be other than three.
Further, in the embodiment of the mixing system, as described above in relation to FIG. 3, the mixer (“dmix”) 100 and engines (“meA” and “meB”) 200a and 200b are substantially identical to one another in signal processing construction for mixing processing (such as the number of input channels, the number of mixing buses, the number of output channels, the number of effects, etc.). However, the present invention is not so limited; for example, the number of input channels provided in each of the engines may be greater or smaller than that provided in the mixer, and similarly for the number of output channels. Further, in a case where the number of mixing buses provided in a given mixing apparatus is greater than that provided in another mixing apparatus (i.e., the number of mixing buses is not equal between the mixing apparatus), it is sufficient that the ultimate output of each extra mixing bus of the given mixing apparatus, which has no counterpart in the other mixing apparatus, be output only from an output channel of the given mixing apparatus, without the extra mixing bus being cascade-connected with any mixing bus of the other mixing apparatus.
Furthermore, whereas it has been described above in relation to FIG. 4 that, in the normal mode, ultimate outputs of the mutually-connected mixing buses can be output from any of the mixing apparatus (“dmix” 100, “meA” 200a and “meB” 200b), the ultimate outputs need not necessarily be coupled to the output channels of all of the mixing apparatus, and it is sufficient if the ultimate outputs can be output from any one of the mixing apparatus (e.g., “meB” 200b connected to the sound system).
Furthermore, it has been described above that, in executing the cascade connection in the normal mode shown in FIG. 4, the mixer 100 provided with the console section 60 is located at a predetermined position for a “cascade master”; however, the instant embodiment may be carried out with no problem even if the mixer 100 is located at a position for a “cascade slave”. The difference between the cascade master and the cascade slave is merely that the cascade master transmits a cascade signal while the cascade slave receives the cascade signal; thus, even where the mixer 100 is located at a position of a “cascade slave”, the mixing processing of another mixing apparatus can be remote-controlled via the console section 60 of the mixer 100. Note that, in a case where no engine 200 is cascade-connected with the mixer 100, the mixer 100 can operate independently so as to control the mixing processing by its own signal processing section 4 in response to operation on the console section 60.
Furthermore, whereas the embodiment of the mixing system has been described above in relation to the case where the auxiliary operation section in the festival mode is implemented by the PC 300, the auxiliary operation section may be implemented by a device other than a PC; for example, it may be implemented by a suitable user interface, such as a PDA or a small-size, dedicated remote control panel. Moreover, the auxiliary operation section (e.g., PC 300) and the engine 200 may be interconnected by wireless connection (e.g., a wireless LAN or wireless USB) rather than by wired connection. In such a case, as long as a radio wave of the necessary intensity can reach the auxiliary operation section and engine 200, a wireless connection I/O need not be positioned near the auxiliary operation section (e.g., PC 300); for example, the mixer 100 may be provided with a wireless connection I/O.
Furthermore, whereas FIGS. 5A and 5B show example constructions where the headphone set 62 is connected to the monitoring output section 38b of “meB” 200b to monitor signals of the auxiliary CUE buses 54, the present invention is not so limited, and the headphone set 62 may be connected to the monitoring output section of “meA” 200a to monitor signals of the auxiliary CUE buses 54. Further, the auxiliary output in the festival mode may be provided in the auxiliary operation section (e.g., PC 300) rather than in the mixer engine. In such a case, control signals and audio signals may be sent together over the connection line between the auxiliary operation section (PC 300) and the engine 200 so that the audio signals can be output from the audio output section of the auxiliary operation section (PC 300). For example, in the case where a USB is employed as the connection line between the auxiliary operation section (PC 300) and the engine 200, audio signals of the auxiliary output can be delivered to the auxiliary operation section (PC 300) via the connection line. Furthermore, in the case where the connection line between the auxiliary operation section (PC 300) and the engine 200 comprises an Ethernet device, a well-known audio signal delivery technique, such as VOIP (Voice over Internet Protocol), may be employed.
Furthermore, whereas the examples of FIGS. 5A and 5B are arranged such that the output path of the mixer engine (“meB”) functions as the main output (the sound system being coupled to “meB”), any of the signal output paths of the cascade-connected mixer and mixer engines may function as the main output. Thus, the main signal output path in each of the festival “A” mode and festival “B” mode (i.e., which of the output channels of the mixer or the output channels of the mixer engines the ultimate bus output signals should be supplied to) is not limited to that employed in the above-described embodiment; the main signal output path in each of the festival “A” mode and festival “B” mode may be set as desired by the user.
Furthermore, whereas the console section 60 of “dmix” 100 shown in FIG. 6 has been described above in relation to the case where the mode change switches 72-74, object-of-control change switches 75-77 and layer change switches 78-80 are mechanical switches provided on the console section 60, these switches 72-80 may be virtual switches in the form of GUI components (images of switches) operable via the screen of the display 7.
This application is based on, and claims priority to, JP PA 2007-061761 filed on 12 Mar. 2007. The disclosure of the priority application, in its entirety, including the drawings, claims, and the specification thereof, is incorporated herein by reference.