Summary of the Invention
In view of this, it is necessary to provide a data processing solution for video calls, so as to solve problems in existing video telephony such as low user stickiness and poor user experience.
To achieve the above object, the inventors provide a data processing apparatus for video calls. The apparatus includes multiple terminals and a server, the terminals including a first terminal and a second terminal. The first terminal includes a first video acquisition unit, a first audio acquisition unit, a first audio effect unit, a first display synthesis unit, a first encoding unit, and a first communication unit; the first audio effect unit includes a first audio recognition subunit, a first electronic-sound effect subunit, and a first audio speed-change subunit.
The first video acquisition unit acquires first video data in real time, and the first audio acquisition unit acquires first audio data in real time.

The first audio recognition subunit recognizes the first audio data as first text data; the first electronic-sound effect subunit superimposes an electronic-sound effect on the first audio data; and the first audio speed-change subunit superimposes a pitch-preserving speed-change effect on the first audio data.

The first display synthesis unit synthesizes the first text data with the first video data to obtain first synthesized data.

The first encoding unit encodes and compresses the first audio data according to a preset audio format to obtain first encoded audio data; encodes and compresses the first synthesized data according to a preset video format to obtain first encoded video data; and packs the first encoded audio data and the first encoded video data according to a preset transfer format to obtain first encoded multimedia data.

The first communication unit sends the first encoded multimedia data to the second terminal.
Further, the first terminal also includes a first display unit. The second terminal includes a second video acquisition unit, a second audio acquisition unit, a second audio effect unit, a second display synthesis unit, a second encoding unit, a second communication unit, and a second display unit; the second audio effect unit includes a second audio recognition subunit, a second electronic-sound effect subunit, and a second audio speed-change subunit.

The second video acquisition unit acquires second video data in real time, and the second audio acquisition unit acquires second audio data in real time.

The second audio recognition subunit recognizes the second audio data as second text data; the second electronic-sound effect subunit superimposes an electronic-sound effect on the second audio data; and the second audio speed-change subunit superimposes a pitch-preserving speed-change effect on the second audio data.

The second display synthesis unit synthesizes the second text data with the second video data to obtain second synthesized data.

The second encoding unit encodes and compresses the second audio data according to the preset audio format to obtain second encoded audio data; encodes and compresses the second synthesized data according to the preset video format to obtain second encoded video data; and packs the second encoded audio data and the second encoded video data according to the preset transfer format to obtain second encoded multimedia data.

The second communication unit sends the second encoded multimedia data to the first terminal.

The first display unit displays the second encoded multimedia data, and the second display unit displays the first encoded multimedia data.
Further, the first terminal also includes a first text effect rendering unit and a first video effect rendering unit. The first text effect rendering unit receives a first rendering instruction and performs effect rendering on the first text data. The first video effect rendering unit receives a second rendering instruction and performs effect rendering on the first video data. The first display synthesis unit synthesizes the rendered first text data with the rendered video data to obtain the first synthesized data.
Further, the server includes a storage unit for storing text effect rendering templates, and the first terminal includes a first text template acquisition unit.

The first text template acquisition unit receives a text template acquisition instruction and obtains the corresponding text effect rendering template from the server.

The first text effect rendering unit performs effect rendering on the first text data according to the acquired text effect rendering template.
Further, the storage unit of the server also stores video effect rendering templates, and the first terminal includes a first video rendering template acquisition unit.

The first video rendering template acquisition unit receives a video template acquisition instruction and obtains the corresponding video effect rendering template from the server.

The first video effect rendering unit performs effect rendering on the first video data according to the acquired video effect rendering template.
The inventors also provide a data processing method for video calls. The method is applied to the data processing apparatus for video calls described above; the apparatus includes multiple terminals and a server, and the terminals include a first terminal and a second terminal. The first terminal includes a first video acquisition unit, a first audio acquisition unit, a first audio effect unit, a first display synthesis unit, a first encoding unit, and a first communication unit. The method includes the following steps:

the first video acquisition unit acquires first video data in real time, and the first audio acquisition unit acquires first audio data in real time;

the first audio effect unit includes a first audio recognition subunit, a first electronic-sound effect subunit, and a first audio speed-change subunit; the first audio recognition subunit recognizes the first audio data as first text data, the first electronic-sound effect subunit superimposes an electronic-sound effect on the first audio data, and the first audio speed-change subunit superimposes a pitch-preserving speed-change effect on the first audio data;

the first display synthesis unit synthesizes the first text data with the first video data to obtain first synthesized data;

the first encoding unit encodes and compresses the first audio data according to a preset audio format to obtain first encoded audio data, encodes and compresses the first synthesized data according to a preset video format to obtain first encoded video data, and packs the first encoded audio data and the first encoded video data according to a preset transfer format to obtain first encoded multimedia data;

the first communication unit sends the first encoded multimedia data to the second terminal.
Further, the first terminal also includes a first display unit; the second terminal includes a second video acquisition unit, a second audio acquisition unit, a second audio effect unit, a second display synthesis unit, a second encoding unit, a second communication unit, and a second display unit. The method includes the following steps:

the second video acquisition unit acquires second video data in real time, and the second audio acquisition unit acquires second audio data in real time;

the second audio effect unit includes a second audio recognition subunit, a second electronic-sound effect subunit, and a second audio speed-change subunit; the second audio recognition subunit recognizes the second audio data as second text data, the second electronic-sound effect subunit superimposes an electronic-sound effect on the second audio data, and the second audio speed-change subunit superimposes a pitch-preserving speed-change effect on the second audio data;

the second display synthesis unit synthesizes the second text data with the second video data to obtain second synthesized data;

the second encoding unit encodes and compresses the second audio data according to the preset audio format to obtain second encoded audio data, encodes and compresses the second synthesized data according to the preset video format to obtain second encoded video data, and packs the second encoded audio data and the second encoded video data according to the preset transfer format to obtain second encoded multimedia data;

the second communication unit sends the second encoded multimedia data to the first terminal;

the first display unit displays the second encoded multimedia data, and the second display unit displays the first encoded multimedia data.
Further, the first terminal also includes a first text effect rendering unit and a first video effect rendering unit. The method includes the following steps:

the first text effect rendering unit receives a first rendering instruction and performs effect rendering on the first text data;

the first video effect rendering unit receives a second rendering instruction and performs effect rendering on the first video data;

the first display synthesis unit synthesizes the rendered first text data with the rendered video data to obtain the first synthesized data.
Further, the server includes a storage unit for storing text effect rendering templates, and the first terminal includes a first text template acquisition unit. The method includes the following steps:

the first text template acquisition unit receives a text template acquisition instruction and obtains the corresponding text effect rendering template from the server;

the first text effect rendering unit performs effect rendering on the first text data according to the acquired text effect rendering template.
Further, the storage unit of the server also stores video effect rendering templates, and the first terminal includes a first video rendering template acquisition unit. The method includes the following steps:

the first video rendering template acquisition unit receives a video template acquisition instruction and obtains the corresponding video effect rendering template from the server;

the first video effect rendering unit performs effect rendering on the first video data according to the acquired video effect rendering template.
With the data processing method and apparatus for video calls in the above technical solution, the method is applied to the data processing apparatus for video calls; the apparatus includes multiple terminals and a server, and the terminals include a first terminal and a second terminal. The first terminal includes a first video acquisition unit, a first audio acquisition unit, a first audio effect unit, a first display synthesis unit, a first encoding unit, and a first communication unit. The method includes: the first video acquisition unit acquires first video data in real time, and the first audio acquisition unit acquires first audio data in real time; the first audio effect unit, which includes a first audio recognition subunit, a first electronic-sound effect subunit, and a first audio speed-change subunit, recognizes the first audio data as text data and superimposes electronic-sound and pitch-preserving speed-change effects on it; the first display synthesis unit synthesizes the first text data with the first video data to obtain first synthesized data; the first encoding unit encodes and compresses the first audio data according to a preset audio format, encodes and compresses the first synthesized data according to a preset video format, and packs the compressed audio and video data according to a preset transfer format to obtain first encoded multimedia data; and the first communication unit sends the first encoded multimedia data to the second terminal. In this way, during a video call, electronic-sound and speed-change effects can be added to the audio data, and the call voice can be recognized as text information, superimposed with the video data, and presented on the call terminal, effectively enhancing the user experience.
Embodiments
To describe in detail the technical content, structural features, objects, and effects of the technical solution, a detailed explanation is given below in combination with specific embodiments and the accompanying drawings.
Referring to Fig. 1, which is a schematic diagram of the data processing apparatus for video calls according to an embodiment of the present invention. The apparatus includes multiple terminals and a server 103; the terminals include a first terminal 101 and a second terminal 102. The first terminal 101 includes a first video acquisition unit 111, a first audio acquisition unit 121, a first audio effect unit 131, a first display synthesis unit 141, a first communication unit 151, and a first encoding unit 211.
The first video acquisition unit 111 acquires first video data in real time, and the first audio acquisition unit 121 acquires first audio data in real time. The terminal is a smart mobile device, such as a mobile phone or tablet, and users can conduct video calls between terminals. The video acquisition unit is an electronic component with a video capture function, such as a camera; the first audio acquisition unit is an electronic component with an audio capture function, such as a microphone. In this embodiment, the first terminal and the second terminal are two terminal devices conducting a video call; the first video data is the video image information of the first user captured by the camera of the first terminal, and the first audio data is the audio data of the first user captured by the microphone of the first terminal.
The first audio effect unit 131 includes a first audio recognition subunit 133, a first electronic-sound effect subunit 134, and a first audio speed-change subunit 135. The first audio recognition subunit 133 recognizes the first audio data as first text data; the first electronic-sound effect subunit 134 superimposes an electronic-sound effect on the first audio data; and the first audio speed-change subunit 135 superimposes a pitch-preserving speed-change effect on the first audio data.
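The pitch-preserving speed-change effect can be sketched with a classic overlap-add (OLA) time stretch: overlapping windows are read from the input at one hop spacing and written to the output at another, so playback speed changes without shifting pitch. This is only an illustrative sketch under assumed parameters; the patent does not specify the speed-change algorithm, and `frame`, `hop`, and the triangular window are choices made here for clarity.

```python
# Illustrative overlap-add (OLA) time stretch: change playback speed
# without changing pitch by re-spacing overlapping analysis windows.
# NOT the apparatus's actual implementation - a minimal sketch.

def time_stretch(samples, speed, frame=256, hop=64):
    """Stretch `samples` by 1/speed (speed=2.0 -> twice as fast)."""
    if speed <= 0:
        raise ValueError("speed must be positive")
    out = [0.0] * (int(len(samples) / speed) + frame)
    norm = [0.0] * len(out)
    # Triangular window used to crossfade the overlapping frames.
    win = [1.0 - abs(2.0 * i / (frame - 1) - 1.0) for i in range(frame)]
    t = 0  # synthesis (output) position
    while True:
        a = int(t * speed)  # analysis (input) position advances at `speed`
        if a + frame > len(samples):
            break
        for i in range(frame):
            out[t + i] += samples[a + i] * win[i]
            norm[t + i] += win[i]
        t += hop
    # Normalize by the accumulated window weight to undo the overlap gain.
    return [o / n if n > 1e-9 else 0.0 for o, n in zip(out[:t], norm[:t])]
```

A speed of 2.0 roughly halves the duration, and 0.5 roughly doubles it, while each output window still plays back at the original sample rate, preserving pitch.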
The audio recognition subunit can parse the audio information of the audio data through a speech recognition algorithm and then recognize the audio information as corresponding text data, i.e., the first text data.
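The recognition subunit's role can be sketched as a small buffering component with a pluggable speech-recognition backend. The patent names no particular engine, so `recognize` below is a hypothetical callback standing in for whatever ASR algorithm a real build would wire in; the class and method names are assumptions made here for illustration.

```python
# Sketch of the audio recognition subunit: it accumulates captured
# audio chunks and emits text data via a pluggable recognition backend.
# `recognize` is a hypothetical callback, not a named library API.

class AudioRecognitionSubunit:
    def __init__(self, recognize):
        self.recognize = recognize   # callable: audio bytes -> text
        self._buffer = bytearray()

    def feed(self, chunk):
        """Accumulate captured audio until an utterance boundary."""
        self._buffer.extend(chunk)

    def flush(self):
        """Recognize the buffered audio as text data and reset the buffer."""
        text = self.recognize(bytes(self._buffer))
        self._buffer.clear()
        return text
```

Keeping the backend behind a callback mirrors the document's separation of the subunit from the recognition algorithm itself.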
The first display synthesis unit 141 synthesizes the first text data with the first video data to obtain the first synthesized data. In this embodiment, the first text data can be superimposed on the first video data in the form of subtitles, for example placed at the bottom of the video picture as picture captions, with an appropriate font size chosen according to the proportions of the video picture. In other embodiments, the first text data can also be superimposed at other positions of the video picture, which can be set according to the user's preference.
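The bottom-caption placement just described can be sketched as a small layout calculation: scale the font with the picture height and center the text near the bottom. The ratios and the rough glyph-width estimate are illustrative assumptions, not values from the source.

```python
# Minimal sketch of bottom-centered subtitle placement: font size scales
# with the frame, text sits near the bottom. Ratios are assumptions.

def caption_box(frame_w, frame_h, text, margin_ratio=0.05, size_ratio=0.06):
    """Return (x, y, font_px) for a bottom-centered subtitle."""
    font_px = max(10, int(frame_h * size_ratio))   # scale with picture height
    text_w = int(len(text) * font_px * 0.6)        # rough glyph-width estimate
    x = max(0, (frame_w - text_w) // 2)            # horizontally centered
    y = frame_h - font_px - int(frame_h * margin_ratio)  # near the bottom
    return x, y, font_px
```

Other placements (the "other positions" the text mentions) would simply substitute a different `x, y` rule per the user's setting.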
The first encoding unit 211 encodes and compresses the first audio data according to a preset audio format to obtain the first encoded audio data; encodes and compresses the first synthesized data according to a preset video format to obtain the first encoded video data; and packs the first encoded audio data and the first encoded video data according to a preset transfer format to obtain the first encoded multimedia data.
The preset audio format includes, but is not limited to, PCM, MP3, AAC, AC3, and DTS. The preset video format includes, but is not limited to, DIVX, XVID, MPEG4, H264, H265, VP8, and VP9. The preset transfer format includes, but is not limited to, M3U8, TS, MPEG2TS, FLV, MOV, and MP4.
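The "pack according to a preset transfer format" step amounts to multiplexing the encoded audio and video packets into one stream. The real containers listed (TS, FLV, MOV, MP4, ...) are far more involved; the toy tag-length framing below only illustrates the interleaving idea and is not any of those formats.

```python
# Toy multiplexer illustrating the packing step: interleave encoded
# audio ("A") and video ("V") packets under simple tag-length framing.
# Real transfer formats (TS, FLV, MP4, ...) are much more complex.
import struct

def pack_multimedia(audio_packets, video_packets):
    """Interleave packets as [1-byte type][4-byte big-endian length][payload]."""
    out = bytearray()
    for a, v in zip(audio_packets, video_packets):
        out += b"A" + struct.pack(">I", len(a)) + a
        out += b"V" + struct.pack(">I", len(v)) + v
    return bytes(out)

def unpack_multimedia(blob):
    """Recover the (type, payload) records from a packed stream."""
    records, pos = [], 0
    while pos < len(blob):
        kind = chr(blob[pos])
        (size,) = struct.unpack_from(">I", blob, pos + 1)
        records.append((kind, blob[pos + 5 : pos + 5 + size]))
        pos += 5 + size
    return records
```

The receiving terminal's "parse before display" step corresponds to `unpack_multimedia`, which recovers the audio and video elementary streams for decoding.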
The first communication unit 151 sends the first encoded multimedia data to the second terminal. The first terminal and the second terminal can communicate in a wired or wireless manner. After the second terminal receives the first encoded multimedia data, its display unit can display not only the image information of the first user but also the text data corresponding to the first user's audio data; that is, electronic-sound and speed-change effects are added to the audio data, and the speech is recognized as text and displayed superimposed on the video picture, so that the voice is converted more intuitively during the video call, effectively improving the user experience.
In some embodiments, the first terminal 101 also includes a first display unit 161; the second terminal 102 includes a second video acquisition unit 112, a second audio acquisition unit 122, a second audio effect unit 132, a second display synthesis unit 142, a second encoding unit 212, a second communication unit 152, and a second display unit 162.

The second video acquisition unit 112 acquires second video data in real time, and the second audio acquisition unit 122 acquires second audio data in real time. The second audio effect unit 132 includes a second audio recognition subunit 136, a second electronic-sound effect subunit 137, and a second audio speed-change subunit 138. The second audio recognition subunit 136 recognizes the second audio data as second text data; the second electronic-sound effect subunit 137 superimposes an electronic-sound effect on the second audio data; and the second audio speed-change subunit 138 superimposes a pitch-preserving speed-change effect on the second audio data.
The second display synthesis unit 142 synthesizes the second text data with the second video data to obtain the second synthesized data; the second communication unit 152 sends the second encoded multimedia data to the first terminal. The first display unit 161 displays the second encoded multimedia data, and the second display unit 162 displays the first encoded multimedia data. Preferably, the first terminal parses the received second encoded multimedia data before displaying it, and the second terminal parses the received first encoded multimedia data before displaying it. Of course, the first encoded multimedia data can also be parsed and displayed on the first terminal itself, so that the video pictures of both users can be seen simultaneously on the display unit of the first terminal; the same applies to the second terminal, thereby effectively improving the interactivity of the video call.

The manner in which the second terminal recognizes the captured second user's audio data as text data is similar to that of the first terminal and is not repeated here. During a video call, taking the perspective of the first user using the first terminal as an example, the first synthesized data and the second synthesized data can be displayed in two windows on the screen respectively, and the size and position of the two windows can be adjusted according to the user's preference.
In some embodiments, the first terminal 101 also includes a first text effect rendering unit 191 and a first video effect rendering unit 201. The first text effect rendering unit 191 receives a first rendering instruction and performs effect rendering on the first text data; the first video effect rendering unit 201 receives a second rendering instruction and performs effect rendering on the first video data; the first display synthesis unit 141 synthesizes the rendered first text data with the rendered video data to obtain the first synthesized data. The first rendering instruction and the second rendering instruction can be triggered by the user clicking a button on the screen.
Preferably, the server 103 includes a storage unit 113 for storing text effect rendering templates, and the first terminal includes a first text template acquisition unit 171. The first text template acquisition unit 171 receives a text template acquisition instruction and obtains the corresponding text effect rendering template from the server. The first text effect rendering unit 191 performs effect rendering on the first text data according to the acquired text effect rendering template. The text rendering template includes multiple text effect configuration items, including text color, glyph shape, font size, and presentation style (such as gradual appearance).
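A text effect rendering template like the one just described can be sketched as a plain bundle of configuration items that the terminal fetches from the server and applies to its text data. The field names and defaults below are assumptions for illustration; the source only lists the kinds of items (color, glyph, size, presentation style).

```python
# Sketch of a text effect rendering template: a bundle of configuration
# items applied to text data before compositing. Field names are assumed.
from dataclasses import dataclass, asdict

@dataclass
class TextEffectTemplate:
    color: str = "#FFFFFF"
    font: str = "sans-serif"
    size_px: int = 32
    style: str = "none"   # e.g. "fade-in" for a gradual presentation

def render_text(text, template):
    """Attach the template's configuration to the text for compositing."""
    spec = asdict(template)
    spec["text"] = text
    return spec
```

Storing templates server-side, as the document does, lets new styles be added without updating the terminals: the acquisition unit simply fetches a new template object.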
In some embodiments, the storage unit of the server also stores video effect rendering templates, and the first terminal includes a first video rendering template acquisition unit 181. The first video rendering template acquisition unit 181 receives a video template acquisition instruction and obtains the corresponding video effect rendering template from the server. The first video effect rendering unit 201 performs effect rendering on the first video data according to the acquired video effect rendering template. The video rendering template includes multiple video effect configuration items, including scene style settings, background settings, brightness adjustment, and the like.
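One of the listed configuration items, brightness adjustment, can be sketched as a per-pixel gain with clamping. This is illustrative only; a real renderer would typically operate on GPU textures or YUV planes rather than Python lists.

```python
# Brightness adjustment sketch: per-pixel gain with clamping to 0-255.
# Illustrative only; real renderers work on GPU textures or YUV planes.

def adjust_brightness(frame, gain):
    """frame: rows of 0-255 pixel values; gain: 1.0 leaves it unchanged."""
    return [[min(255, max(0, int(p * gain))) for p in row] for row in frame]
```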
In some embodiments, the second terminal also includes a second text template acquisition unit 172, a second video rendering template acquisition unit 182, a second text effect rendering unit 192, and a second video effect rendering unit 202. The second text effect rendering unit 192 performs effect rendering on the second text data according to the text rendering template that the second text template acquisition unit 172 obtains from the server; the second video effect rendering unit 202 performs effect rendering on the second video data according to the video rendering template that the second video rendering template acquisition unit 182 obtains from the server. In practical applications, when the first terminal and the second terminal are in a call, the first user can perform effect rendering on the first synthesized data through the first terminal, and can also perform effect rendering on the second synthesized data transmitted by the second terminal, thereby effectively improving the user experience.
Referring to Fig. 2, which is a flowchart of the data processing method for video calls according to an embodiment of the present invention. The method is applied to the data processing apparatus for video calls; the apparatus includes multiple terminals and a server, and the terminals include a first terminal and a second terminal. The first terminal includes a first video acquisition unit, a first audio acquisition unit, a first audio effect unit, a first display synthesis unit, a first encoding unit, and a first communication unit. The method includes the following steps.
First, in step S201, the first video acquisition unit acquires first video data in real time, and the first audio acquisition unit acquires first audio data in real time.

Then, in step S202, the first audio recognition subunit in the first audio effect unit recognizes the first audio data as first text data. The first audio recognition subunit can parse the audio information of the audio data through a speech recognition algorithm and then recognize the audio information as the corresponding text data, i.e., the first text data.
Then, in step S203, the first display synthesis unit synthesizes the first text data with the first video data to obtain the first synthesized data. In this embodiment, the first text data can be superimposed on the first video data in the form of subtitles, for example placed at the bottom of the video picture as picture captions, with an appropriate font size chosen according to the proportions of the video picture. In other embodiments, the first text data can also be superimposed at other positions of the video picture, which can be set according to the user's preference.
Then, in step S204, the first encoding unit encodes and compresses the first audio data according to the preset audio format to obtain the first encoded audio data, and encodes and compresses the first synthesized data according to the preset video format to obtain the first encoded video data.

Then, in step S205, the first encoding unit packs the first encoded audio data and the first encoded video data according to the preset transfer format to obtain the first encoded multimedia data.
Then, in step S206, the first communication unit sends the first encoded multimedia data to the second terminal. The first terminal and the second terminal can communicate in a wired or wireless manner. After the second terminal receives the first encoded multimedia data, its display unit can display not only the image information of the first user but also the text data corresponding to the first user's audio data; that is, electronic-sound and speed-change effects are added to the audio data, and the speech is recognized as text and displayed superimposed on the video picture, so that the voice is converted more intuitively during the video call, effectively improving the user experience.
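The S201-S206 flow can be summarized end-to-end with placeholder stages. Every stage below is a stand-in: `recognize` and `send` are hypothetical callbacks, and the "codecs" are no-op placeholders, since the actual recognition engine, encoders, and transfer format are implementation choices the source leaves open.

```python
# End-to-end sketch of the first terminal's S201-S206 flow, with
# placeholder stages. The "codecs" here are deliberate no-ops.

def first_terminal_pipeline(video_frames, audio_samples, recognize, send):
    text = recognize(audio_samples)                          # S202: speech -> text
    synthesized = [(frame, text) for frame in video_frames]  # S203: overlay text
    audio_coded = bytes(audio_samples)                       # S204: audio "encode"
    video_coded = repr(synthesized).encode()                 # S204: video "encode"
    packet = b"AUD" + audio_coded + b"VID" + video_coded     # S205: pack
    send(packet)                                             # S206: transmit
    return packet
```

Swapping each placeholder for a real component (an ASR engine, AAC/H264 encoders, a TS or FLV muxer, a socket) yields the described terminal.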
Referring to Fig. 3, which is a flowchart of the data processing method for video calls according to another embodiment of the present invention. The first terminal also includes a first display unit; the second terminal includes a second video acquisition unit, a second audio acquisition unit, a second audio effect unit, a second display synthesis unit, a second encoding unit, a second communication unit, and a second display unit. The method includes: first, in step S301, the second video acquisition unit acquires second video data in real time, and the second audio acquisition unit acquires second audio data in real time; then, in step S302, the second audio recognition subunit in the second audio effect unit recognizes the second audio data as second text data; then, in step S303, the second display synthesis unit synthesizes the second text data with the second video data to obtain the second synthesized data; then, in step S304, the second encoding unit encodes and compresses the second audio data according to the preset audio format to obtain the second encoded audio data, and encodes and compresses the second synthesized data according to the preset video format to obtain the second encoded video data; then, in step S305, the second encoding unit packs the second encoded audio data and the second encoded video data according to the preset transfer format to obtain the second encoded multimedia data; then, in step S306, the second communication unit sends the second encoded multimedia data to the first terminal; and finally, in step S307, the first display unit displays the second encoded multimedia data, and the second display unit displays the first encoded multimedia data.
As shown in Fig. 4, in some embodiments the first terminal also includes a first text effect rendering unit and a first video effect rendering unit. The method includes: first, in step S401, the first text effect rendering unit receives a first rendering instruction and performs effect rendering on the first text data; then, in step S402, the first video effect rendering unit receives a second rendering instruction and performs effect rendering on the first video data; then, in step S403, the first display synthesis unit synthesizes the rendered first text data with the rendered video data to obtain the first synthesized data.
In some embodiments, the server includes a storage unit for storing text effect rendering templates, and the first terminal includes a first text template acquisition unit. The method includes: the first text template acquisition unit receives a text template acquisition instruction and obtains the corresponding text effect rendering template from the server; the first text effect rendering unit performs effect rendering on the first text data according to the acquired text effect rendering template.

In some embodiments, the storage unit of the server also stores video effect rendering templates, and the first terminal includes a first video rendering template acquisition unit. The method includes: the first video rendering template acquisition unit receives a video template acquisition instruction and obtains the corresponding video effect rendering template from the server; the first video effect rendering unit performs effect rendering on the first video data according to the acquired video effect rendering template.
With the data processing method and apparatus for video calls in the above technical solution, the method is applied to the data processing apparatus for video calls; the apparatus includes multiple terminals and a server, and the terminals include a first terminal and a second terminal. The first terminal includes a first video acquisition unit, a first audio acquisition unit, a first audio effect unit, a first display synthesis unit, and a first communication unit. The method includes: the first video acquisition unit acquires first video data in real time, and the first audio acquisition unit acquires first audio data in real time; the first audio effect unit, which includes a first audio recognition subunit, a first electronic-sound effect subunit, and a first audio speed-change subunit, recognizes the first audio data as first text data and superimposes electronic-sound and pitch-preserving speed-change effects on it; the first display synthesis unit synthesizes the first text data with the first video data to obtain the first synthesized data; and the first communication unit sends the resulting encoded multimedia data to the second terminal.

In this way, during a video call, electronic-sound and speed-change effects can be added to the audio data; the call voice can be recognized as text information, superimposed with the video data, and presented on the call terminal, effectively enhancing the user experience.
It should be noted that, herein, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between those entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or terminal device that includes the element. In addition, herein, "greater than", "less than", "exceeding", and the like are understood as excluding the stated number, while "above", "below", "within", and the like are understood as including the stated number.
Those skilled in the art should understand that the above embodiments may be provided as a method, a device, or a computer program product, and may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. All or part of the steps in the methods of the above embodiments may be completed by a program instructing related hardware; the program may be stored in a storage medium readable by a computer device and used to perform all or part of the steps described in the above methods. The computer device includes, but is not limited to: a personal computer, a server, a general-purpose computer, a special-purpose computer, a network device, an embedded device, a programmable device, an intelligent mobile terminal, a smart home device, a wearable smart device, a vehicle-mounted smart device, and the like. The storage medium includes, but is not limited to: RAM, ROM, magnetic disk, magnetic tape, optical disc, flash memory, USB flash drive, removable hard disk, memory card, memory stick, web-server storage, network cloud storage, and the like.
The above embodiments are described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a computer device to produce a machine, so that the instructions executed by the processor of the computer device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-device-readable memory capable of directing a computer device to work in a particular manner, so that the instructions stored in that memory produce an article of manufacture including an instruction device, the instruction device realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer device, so that a series of operational steps are performed on the computer device to produce computer-implemented processing; the instructions executed on the computer device thereby provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the above embodiments have been described, those skilled in the art, once aware of the basic inventive concept, can make other changes and modifications to these embodiments. Therefore, the foregoing is merely a description of embodiments of the present invention and does not thereby limit the scope of patent protection of the present invention; any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect use in other related technical fields, is likewise included within the scope of patent protection of the present invention.