CN107277592A - Multimedia data playing method, device and storage medium based on embedded platform

Multimedia data playing method, device and storage medium based on embedded platform

Info

Publication number
CN107277592A
CN107277592A
Authority
CN
China
Prior art keywords
medium data
video
hardware
duration
medium
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710620382.1A
Other languages
Chinese (zh)
Other versions
CN107277592B (en)
Inventor
周立辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vidaa Netherlands International Holdings BV
Original Assignee
Qingdao Hisense Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Electronics Co Ltd
Priority to CN201710620382.1A
Publication of CN107277592A
Application granted
Publication of CN107277592B
Status: Active
Anticipated expiration

Abstract

The invention discloses a multimedia data playing method, device, and storage medium based on an embedded platform, and belongs to the field of multimedia technology. The method includes: when at least two pieces of multimedia data need to be played simultaneously, determining the processing duration with which a hardware resource processes each of the at least two pieces of multimedia data; and performing, according to the processing durations respectively corresponding to the at least two pieces of multimedia data, time-division processing on the at least two pieces of multimedia data with the hardware resource, so that the at least two pieces of multimedia data are played simultaneously. By time-division processing at least two pieces of multimedia data, the invention solves the problem that an embedded platform can play only one piece of multimedia data at any given moment, and thus provides better service to the user.

Description

Multimedia data playing method, device and storage medium based on embedded platform
Technical field
The present invention relates to the field of multimedia technology, and in particular to a multimedia data playing method, device, and storage medium based on an embedded platform.
Background art
An embedded platform is a special-purpose computer system platform that is fully embedded inside the device it controls, is application-centered, is based on computer technology, has tailorable software and hardware, and meets the application's strict requirements on functionality, reliability, cost, size, and power consumption. With the popularization of embedded platforms, more and more devices rely on them to implement a wide variety of services, so playing multimedia data on an embedded platform has become a natural requirement; a smart TV based on an embedded platform, for example, can play multimedia data.
In the related art, playing multimedia data on an embedded platform requires hardware resources such as a parser, an audio/video splitter, a decoder, and a renderer to process the data. Specifically, the parser first parses the transport protocol of the multimedia data; the audio/video splitter then parses the encapsulation (container) format of the protocol-parsed data to obtain the raw audio and video packets, i.e., it separates the audio from the video. The decoder then decodes the raw audio and video packets to obtain audio data and video data, and the renderer renders and outputs the decoded audio and video data so that they are presented to the user.
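These four stages form a serial pipeline. The following C++ sketch only restates that related-art processing order; the class and function names (TransportParser, AvSplitter, HardwareDecoder, Renderer, play_once) are hypothetical illustrations, not interfaces defined by this disclosure.
```cpp
#include <cstdint>
#include <vector>

using Bytes = std::vector<uint8_t>;

struct AvPackets { Bytes audio; Bytes video; };   // raw packets after demuxing
struct AvFrames  { Bytes pcm;   Bytes picture; }; // decoded audio and video

// Hypothetical stage interfaces mirroring the related-art hardware pipeline;
// real implementations would drive the corresponding hardware blocks.
struct TransportParser {
    Bytes parse(const Bytes& stream) { return stream; }               // strip transport protocol
};
struct AvSplitter {
    AvPackets split(const Bytes& c) { return {c, c}; }                // parse container, separate A/V
};
struct HardwareDecoder {
    AvFrames decode(const AvPackets& p) { return {p.audio, p.video}; } // decode raw packets
};
struct Renderer {
    void output(const AvFrames&) { /* present picture and sound */ }
};

// One pass of the pipeline for a single piece of multimedia data.
void play_once(TransportParser& p, AvSplitter& s, HardwareDecoder& d, Renderer& r,
               const Bytes& stream) {
    Bytes     container = p.parse(stream);    // 1. parse transport protocol
    AvPackets packets   = s.split(container); // 2. separate audio and video
    AvFrames  frames    = d.decode(packets);  // 3. decode
    r.output(frames);                         // 4. render and output
}
```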
Because the resolution of multimedia data keeps rising — most content reaches 1080p or more, and much of it reaches 4K — and because of the functional limitations of the embedded platform, high-resolution multimedia data usually has to be processed by hardware resources; in particular, a hardware decoder is needed to decode high-resolution multimedia data. Since the hardware decoder can decode only one piece of multimedia data at a time, only one piece of multimedia data can be played at any given moment, which cannot satisfy the user's need to play several pieces of multimedia data simultaneously.
Summary of the invention
In order to solve the problem in the related art that only one audio/video stream can be played at a time, embodiments of the present invention provide a multimedia data playing method, device, and storage medium based on an embedded platform. The technical solutions are as follows:
In a first aspect, a multimedia data playing method based on an embedded platform is provided. The method includes:
when at least two pieces of multimedia data need to be played simultaneously, determining the processing duration with which a hardware resource processes each of the at least two pieces of multimedia data;
performing, according to the processing durations respectively corresponding to the at least two pieces of multimedia data, time-division processing on the at least two pieces of multimedia data with the hardware resource, so as to play the at least two pieces of multimedia data simultaneously.
Optionally, the hardware resource includes a hardware parser, an audio/video hardware splitter, a hardware decoder, and a hardware renderer;
determining the processing duration with which the hardware resource processes each of the at least two pieces of multimedia data includes:
for the hardware parser, determining that the parsing duration corresponding to each of the at least two pieces of multimedia data is a first specified duration;
for the audio/video hardware splitter, determining that the audio/video separation duration corresponding to each of the at least two pieces of multimedia data is a second specified duration;
for the hardware decoder, determining the decoding duration corresponding to each of the at least two pieces of multimedia data based on the coding format and resolution of each of them;
for the hardware renderer, determining that the rendering duration corresponding to each of the at least two pieces of multimedia data is a third specified duration.
Optionally, performing time-division processing on the at least two pieces of multimedia data with the hardware resource includes:
when the at least two pieces of multimedia data need to be parsed, performing time-division parsing on the at least two pieces of multimedia data with the hardware parser according to the transport protocol corresponding to each of them and the first specified duration;
when audio/video separation needs to be performed on the at least two pieces of multimedia data, performing time-division audio/video separation on the at least two pieces of multimedia data with the audio/video hardware splitter according to the encapsulation format corresponding to each of them and the second specified duration;
when the at least two pieces of multimedia data need to be decoded, performing time-division decoding on the at least two pieces of multimedia data with the hardware decoder according to the coding format, resolution, and decoding duration corresponding to each of them;
when the at least two pieces of multimedia data need to be rendered, performing time-division rendering on the at least two pieces of multimedia data with the hardware renderer according to the rendering parameters corresponding to each of them and the third specified duration.
Optionally, before determining the processing duration with which the hardware resource processes each of the at least two pieces of multimedia data, the method further includes:
when each of the at least two pieces of multimedia data is a high-resolution video, performing the step of determining the processing duration with which the hardware resource processes each of the at least two pieces of multimedia data, where a high-resolution video is a video whose resolution is greater than or equal to a preset resolution;
when each of the at least two pieces of multimedia data is a low-resolution video or an audio, processing the at least two pieces of multimedia data with at least two software resources so as to play the at least two pieces of multimedia data simultaneously, where a low-resolution video is a video whose resolution is lower than the preset resolution.
Optionally, the method further includes:
when the at least two pieces of multimedia data include a high-resolution video together with a low-resolution video or an audio, determining the number of high-resolution videos among the at least two pieces of multimedia data;
when that number is less than a predetermined number, processing the at least two pieces of multimedia data with the hardware resource in the same way as when all of the at least two pieces of multimedia data are high-resolution videos;
when that number is greater than or equal to the predetermined number, processing the high-resolution videos with the hardware resource in the same way as when all of the at least two pieces of multimedia data are high-resolution videos, and processing the low-resolution video or audio with a software resource.
In a second aspect, a multimedia data playing device based on an embedded platform is provided. The device includes:
a first determining module, configured to determine, when at least two pieces of multimedia data need to be played simultaneously, the processing duration with which a hardware resource processes each of the at least two pieces of multimedia data;
a first processing module, configured to perform, according to the processing durations respectively corresponding to the at least two pieces of multimedia data, time-division processing on the at least two pieces of multimedia data with the hardware resource, so as to play the at least two pieces of multimedia data simultaneously.
Optionally, the hardware resource includes a hardware parser, an audio/video hardware splitter, a hardware decoder, and a hardware renderer;
the first determining module includes:
a first determining submodule, configured to determine, for the hardware parser, that the parsing duration corresponding to each of the at least two pieces of multimedia data is a first specified duration;
a second determining submodule, configured to determine, for the audio/video hardware splitter, that the audio/video separation duration corresponding to each of the at least two pieces of multimedia data is a second specified duration;
a third determining submodule, configured to determine, for the hardware decoder, the decoding duration corresponding to each of the at least two pieces of multimedia data based on the coding format and resolution of each of them;
a fourth determining submodule, configured to determine, for the hardware renderer, that the rendering duration corresponding to each of the at least two pieces of multimedia data is a third specified duration.
Optionally, the first processing module includes:
a parsing submodule, configured to perform, when the at least two pieces of multimedia data need to be parsed, time-division parsing on the at least two pieces of multimedia data with the hardware parser according to the transport protocol corresponding to each of them and the first specified duration;
an audio/video separation submodule, configured to perform, when audio/video separation needs to be performed on the at least two pieces of multimedia data, time-division audio/video separation on the at least two pieces of multimedia data with the audio/video hardware splitter according to the encapsulation format corresponding to each of them and the second specified duration;
a decoding submodule, configured to perform, when the at least two pieces of multimedia data need to be decoded, time-division decoding on the at least two pieces of multimedia data with the hardware decoder according to the coding format, resolution, and decoding duration corresponding to each of them;
a rendering submodule, configured to perform, when the at least two pieces of multimedia data need to be rendered, time-division rendering on the at least two pieces of multimedia data with the hardware renderer according to the rendering parameters corresponding to each of them and the third specified duration.
Optionally, the device further includes:
a triggering module, configured to trigger, when each of the at least two pieces of multimedia data is a high-resolution video, the first determining module to determine the processing duration with which the hardware resource processes each of the at least two pieces of multimedia data, where a high-resolution video is a video whose resolution is greater than or equal to a preset resolution;
a second processing module, configured to process, when each of the at least two pieces of multimedia data is a low-resolution video or an audio, the at least two pieces of multimedia data with at least two software resources so as to play the at least two pieces of multimedia data simultaneously, where a low-resolution video is a video whose resolution is lower than the preset resolution.
Optionally, the device further includes:
a second determining module, configured to determine, when the at least two pieces of multimedia data include a high-resolution video together with a low-resolution video or an audio, the number of high-resolution videos among the at least two pieces of multimedia data;
a third processing module, configured to process, when that number is less than a predetermined number, the at least two pieces of multimedia data with the hardware resource in the same way as when all of the at least two pieces of multimedia data are high-resolution videos;
a fourth processing module, configured to process, when that number is greater than or equal to the predetermined number, the high-resolution videos with the hardware resource in the same way as when all of the at least two pieces of multimedia data are high-resolution videos, and to process the low-resolution video or audio with a software resource.
In a third aspect, a multimedia data playing device based on an embedded platform is provided. The device includes:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the steps of any one of the methods described in the first aspect above.
In a fourth aspect, a computer-readable storage medium is provided. Instructions are stored on the computer-readable storage medium, and when executed by a processor, the instructions implement the steps of any one of the methods described in the first aspect above.
The technical solutions provided by the embodiments of the present invention bring the following beneficial effects: the method provided by this embodiment determines the processing durations for time-division processing of at least two pieces of multimedia data and then performs time-division processing on the at least two pieces of multimedia data with the hardware resource, so that at least two pieces of multimedia data can be played at the same moment on an embedded platform. This makes effective use of the hardware resource, solves the problem that an embedded platform can play only one piece of multimedia data at a time, satisfies the user's needs, and provides better service to the user.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a structural block diagram of multimedia data playing based on an embedded platform according to an embodiment of the present invention;
Fig. 2 is a flowchart of a multimedia data playing method based on an embedded platform according to an embodiment of the present invention;
Fig. 3 is a flowchart of a multimedia data playing method based on an embedded platform according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a multimedia data playing device based on an embedded platform according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a mobile embedded device according to an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
For ease of understanding, before the embodiments of the present invention are explained in detail, the application scenarios and system architecture involved in the embodiments are introduced first.
First, the application scenarios involved in the embodiments of the present invention are introduced.
At present, more and more devices use embedded platforms; for ease of description, a device that uses an embedded platform is referred to as an embedded device. While a user is using an embedded device, it often happens that at least two pieces of multimedia data need to be played at the same time. Several situations in which this happens are illustrated below.
First, picture-in-picture in a streaming application: while a video is played in one picture, another video is played in another picture inside that picture.
Second, several web pages are opened at the same time and each web page plays an audio or a video.
Third, a web page is opened and operated while it is playing a video, and a button sound is played when the focus is moved during the operation of the web page.
Fourth, a user interface with audio is operated while a video is being played.
Fifth, one application is started and plays audio, and then another application is started and plays video.
Of course, the embodiments of the present invention are not limited to the above application scenarios; in practice they can also be applied in other scenarios, which are not enumerated here.
Finally, the system architecture involved in the embodiments of the present invention is introduced.
Fig. 1 is an architecture diagram of a multimedia data playing system based on an embedded platform according to an embodiment of the present invention. The system architecture can be deployed on an embedded device, and the embedded device may be a smart TV. As shown in Fig. 1, the system may include a resource management module 101, a parsing layer 102, an audio/video separation layer 103, a decoding layer 104, a rendering layer 105, and at least two players 106. The resource management module 101 can communicate with the parsing layer 102, the audio/video separation layer 103, the decoding layer 104, and the rendering layer 105; the parsing layer 102 can communicate with the audio/video separation layer 103; the audio/video separation layer 103 can communicate with the decoding layer 104; the decoding layer 104 can communicate with the rendering layer 105; and each player 106 can communicate with the resource management module 101, the parsing layer 102, the audio/video separation layer 103, the decoding layer 104, and the rendering layer 105. The parsing layer 102 may include one hardware parser and may also include several software parsers. The audio/video separation layer 103 may include one audio/video hardware splitter and may also include several audio/video software splitters. The decoding layer 104 may include one hardware decoder and may also include several software decoders. The rendering layer 105 may include one hardware renderer and may also include several software renderers.
When at least two pieces of multimedia data need to be played simultaneously, a player 106 can send a resource application request to the resource management module 101, and the resource management module 101 can schedule the corresponding resource from the corresponding resource layer according to the resource the player 106 currently applies for. For example, when a player 106 currently needs to apply for a decoder, the resource management module 101 can schedule a corresponding decoder from the decoding layer 104. However, because at least two pieces of multimedia data are currently being played simultaneously, and assuming that the resource scheduled by the resource management module 101 for the players 106 is a hardware resource, then in order to play multiple audio/video streams simultaneously with a single hardware resource, the processing duration corresponding to each of the at least two pieces of multimedia data can first be determined, and time-division processing can then be performed on the at least two pieces of multimedia data according to those processing durations, thereby playing the at least two pieces of multimedia data simultaneously.
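The sketch below illustrates this request-and-schedule flow in C++. The identifiers (ResourceManager, Player, LayerKind, acquire) are hypothetical and only show one possible shape of the module interaction described above; the disclosure itself does not prescribe an API.
```cpp
#include <optional>

// The four resource layers shown in Fig. 1.
enum class LayerKind { Parse, AvSplit, Decode, Render };

struct Resource {
    LayerKind layer;
    bool is_hardware;   // one hardware unit per layer, plus optional software units
};

// Hypothetical resource management module (101): hands out the single hardware
// unit of a layer, or a software unit when hardware is not needed.
class ResourceManager {
public:
    std::optional<Resource> acquire(LayerKind layer, bool want_hardware) {
        if (want_hardware) {
            // The hardware unit is not handed out exclusively: the players will
            // later use it in time slices, as described in steps 302-303.
            return Resource{layer, true};
        }
        return Resource{layer, false};  // software parser/splitter/decoder/renderer
    }
};

// Hypothetical player (106): requests a decoder before playback starts.
class Player {
public:
    explicit Player(ResourceManager& rm) : rm_(rm) {}

    bool prepare(bool high_resolution) {
        // High-resolution video needs the hardware decoder; low-resolution
        // video or audio can use a software decoder instead.
        auto dec = rm_.acquire(LayerKind::Decode, /*want_hardware=*/high_resolution);
        return dec.has_value();
    }

private:
    ResourceManager& rm_;
};
```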
After introducing the application scenarios and the system architecture involved in the embodiments of the present invention, the embodiments are explained in detail below. Fig. 2 is a flowchart of a multimedia data playing method based on an embedded platform according to an embodiment of the present invention. The method is applied to an embedded device. Referring to Fig. 2, the method includes the following steps:
Step 201: when at least two pieces of multimedia data need to be played simultaneously, determine the processing duration with which a hardware resource processes each of the at least two pieces of multimedia data.
Step 202: according to the processing durations respectively corresponding to the at least two pieces of multimedia data, perform time-division processing on the at least two pieces of multimedia data with the hardware resource, so as to play the at least two pieces of multimedia data simultaneously.
In summary, the method provided by this embodiment determines the processing durations for time-division processing of at least two pieces of multimedia data and then performs time-division processing on the at least two pieces of multimedia data with the hardware resource, so that at least two pieces of multimedia data can be played at the same moment on an embedded platform. This makes effective use of the hardware resource, solves the problem that an embedded platform can play only one piece of multimedia data at a time, satisfies the user's needs, and provides better service to the user.
Fig. 3 is a flowchart of a multimedia data playing method based on an embedded platform according to an embodiment of the present invention. The method is applied to an embedded device, on which the system architecture shown in Fig. 1 above can be deployed. The embodiment shown in Fig. 2 above is described in detail below with reference to Fig. 1. Referring to Fig. 3, the method includes the following steps:
Step 301: when at least two pieces of multimedia data need to be played simultaneously, judge whether each of the at least two pieces of multimedia data is a high-resolution video, where a high-resolution video is a video whose resolution is greater than or equal to a preset resolution.
Because of the functional limitations of the embedded platform, the hardware resource can usually play only one piece of multimedia data at a time; in particular, only one high-resolution video can be played through the hardware resource at a time. Therefore, when at least two pieces of multimedia data need to be played simultaneously, it is necessary to judge whether each of the at least two pieces of multimedia data is a high-resolution video.
When a video is played, its resolution is usually carried in the video data, so the carried resolution can be compared directly with the preset resolution. If each of the at least two pieces of multimedia data is a video and its resolution is greater than or equal to the preset resolution, it can be determined that each of the at least two pieces of multimedia data is a high-resolution video.
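A minimal sketch of this check follows; the names (MediaInfo, is_high_resolution, PRESET_WIDTH, PRESET_HEIGHT) and the particular preset values are assumptions chosen only for illustration, since the disclosure leaves the preset resolution to be set from the capability of the hardware resource and experience.
```cpp
struct MediaInfo {
    bool is_video;   // false for pure audio
    int  width;      // resolution carried in the video data
    int  height;
};

// Hypothetical preset resolution; the real value would be chosen from the
// hardware resource's capability (the boundary between "high" and "low").
constexpr int PRESET_WIDTH  = 1920;
constexpr int PRESET_HEIGHT = 1080;

// A piece of multimedia data counts as a high-resolution video when it is a
// video and its resolution is greater than or equal to the preset resolution.
bool is_high_resolution(const MediaInfo& m) {
    return m.is_video && m.width >= PRESET_WIDTH && m.height >= PRESET_HEIGHT;
}
```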
It should be noted that the preset resolution is a resolution set in advance, according to the capability of the hardware resource and experience, to distinguish high-resolution videos from low-resolution videos.
Optionally, when the embedded device includes the resource management module 101, parsing layer 102, audio/video separation layer 103, decoding layer 104, rendering layer 105, and at least two players 106 shown in Fig. 1 above, step 301 may be implemented as follows: when the at least two players 106 both need to play multimedia data, the at least two players can judge, based on the parameters carried in the at least two pieces of multimedia data, whether each of the at least two pieces of multimedia data is a high-resolution video. Of course, the at least two players 106 may also send the parameters carried in the at least two pieces of multimedia data to the resource management module 101, and the resource management module 101 then judges whether each of the at least two pieces of multimedia data is a high-resolution video.
Step 302: when each of the at least two pieces of multimedia data is a high-resolution video, determine the processing duration with which the hardware resource processes each of the at least two pieces of multimedia data.
Because the processing capability of a software resource is limited while that of a hardware resource is stronger, and because processing high-resolution video requires the stronger resource, it can be determined that the at least two pieces of multimedia data need to be processed by the hardware resource when each of them is a high-resolution video. However, an embedded device normally has only one set of hardware resources, and one set of hardware resources can process only one piece of multimedia data at a time. Therefore, the processing duration with which the hardware resource processes each of the at least two pieces of multimedia data can be determined first, and time-division processing can then be performed on the at least two pieces of multimedia data in the subsequent step according to the determined durations.
Playing one piece of multimedia data usually requires parsing its transport protocol, decapsulating the encapsulation format of the parsed data to separate the audio/video data, decoding the separated audio/video data, and then rendering the decoded data to obtain the video picture, thereby playing the multimedia data. The hardware resource therefore usually includes a hardware parser, an audio/video hardware splitter, a hardware decoder, and a hardware renderer. In this case, determining the processing duration with which the hardware resource processes each of the at least two pieces of multimedia data may be implemented as follows: for the hardware parser, determine that the parsing duration corresponding to each of the at least two pieces of multimedia data is a first specified duration; for the audio/video hardware splitter, determine that the audio/video separation duration corresponding to each of the at least two pieces of multimedia data is a second specified duration; for the hardware decoder, determine the decoding duration corresponding to each of the at least two pieces of multimedia data based on the coding format and resolution of each of them; for the hardware renderer, determine that the rendering duration corresponding to each of the at least two pieces of multimedia data is a third specified duration.
That is, fixed parsing, audio/video separation, and rendering durations are set for the hardware parser, the audio/video hardware splitter, and the hardware renderer, whereas for the hardware decoder the decoding duration corresponding to each of the at least two pieces of multimedia data can be determined dynamically from the coding format and resolution of each. Of course, in practice the parsing duration, audio/video separation duration, and rendering duration may also be set dynamically for the hardware parser, the audio/video hardware splitter, and the hardware renderer based on the transport protocol, encapsulation format, and rendering parameters of the multimedia data, while a fixed decoding duration is set for the hardware decoder.
When the parsing duration, audio/video separation duration, and rendering duration are set dynamically for the hardware parser, the audio/video hardware splitter, and the hardware renderer, the corresponding parsing duration can be obtained, based on the transport protocols of the at least two pieces of multimedia data, from a stored correspondence between transport protocols and parsing durations. Similarly, the corresponding audio/video separation duration can be obtained, based on the encapsulation formats of the at least two pieces of multimedia data, from a stored correspondence between encapsulation formats and audio/video separation durations; and the corresponding rendering duration can be obtained, based on the rendering parameters of the at least two pieces of multimedia data, from a stored correspondence between rendering parameters and rendering durations.
For the hardware decoder, when the decoding duration is set dynamically — that is, when the decoding duration corresponding to each of the at least two pieces of multimedia data is determined from its coding format and resolution — the corresponding decoding duration can be obtained, based on the coding format and resolution of each of the at least two pieces of multimedia data, from a stored correspondence among coding formats, resolutions, and decoding durations.
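A minimal sketch of that duration assignment follows, assuming the stored correspondences are simple lookup tables; the table contents, duration values, and names (StageDurations, decode_duration_ms) are purely illustrative assumptions, not values given by the disclosure.
```cpp
#include <map>
#include <string>
#include <utility>

// Fixed (first/second/third specified) durations for parsing, A/V separation,
// and rendering, in milliseconds. The concrete numbers are placeholders.
struct StageDurations {
    int parse_ms  = 2;   // first specified duration
    int split_ms  = 2;   // second specified duration
    int render_ms = 4;   // third specified duration
};

// Stored correspondence among coding format, resolution, and decoding duration.
// Keys and values here are illustrative only.
int decode_duration_ms(const std::string& codec, int width, int height) {
    static const std::map<std::pair<std::string, int>, int> table = {
        {{"h264", 1080}, 8},
        {{"h264", 2160}, 16},
        {{"hevc", 1080}, 6},
        {{"hevc", 2160}, 12},
    };
    auto it = table.find({codec, height});
    if (it != table.end()) return it->second;
    (void)width;          // a fuller table could also key on width
    return 10;            // fallback duration for formats not in the table
}
```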
It should be noted that different pieces of multimedia data may require different processing durations when processed by the hardware resource. Therefore, if fixed parsing, audio/video separation, decoding, and rendering durations are set for the at least two pieces of multimedia data, smooth playback of the at least two pieces of multimedia data may not be guaranteed, whereas if the parsing duration, audio/video separation duration, decoding duration, and rendering duration are set dynamically, they can be configured according to the processing duration each piece of multimedia data actually requires, which ensures that the at least two pieces of multimedia data can be played smoothly.
Optionally, when the embedded device includes the resource management module 101, parsing layer 102, audio/video separation layer 103, decoding layer 104, rendering layer 105, and at least two players 106 shown in Fig. 1 above, step 302 may be implemented as follows: when the at least two players 106 both need to play multimedia data, the resource management module 101 can set the parsing duration, audio/video separation duration, and rendering duration required for the at least two pieces of multimedia data to the first specified duration, the second specified duration, and the third specified duration respectively, and determine the decoding duration corresponding to each of the at least two pieces of multimedia data from its coding format and resolution. Of course, the at least two players 106 may also set the parsing duration, audio/video separation duration, and rendering duration dynamically from the transport protocol, encapsulation format, and rendering parameters carried in the at least two pieces of multimedia data, and set a fixed decoding duration for the hardware decoder.
After the resource management module 101 has determined the parsing duration, audio/video separation duration, decoding duration, and rendering duration corresponding to the at least two pieces of multimedia data, it can send the parsing duration to the parsing layer 102, the audio/video separation duration to the audio/video separation layer 103, the decoding duration to the decoding layer 104, and the rendering duration to the rendering layer 105.
Step 303: according to the processing durations respectively corresponding to the at least two pieces of multimedia data, perform time-division processing on the at least two pieces of multimedia data with the hardware resource, so as to play the at least two pieces of multimedia data simultaneously.
Specifically, when the at least two pieces of multimedia data need to be parsed, time-division parsing is performed on them with the hardware parser according to the transport protocol corresponding to each of them and the first specified duration. When audio/video separation needs to be performed on the at least two pieces of multimedia data, time-division audio/video separation is performed on them with the audio/video hardware splitter according to the encapsulation format corresponding to each of them and the second specified duration. When the at least two pieces of multimedia data need to be decoded, time-division decoding is performed on them with the hardware decoder according to the coding format, resolution, and decoding duration corresponding to each of them. When the at least two pieces of multimedia data need to be rendered, time-division rendering is performed on them with the hardware renderer according to the rendering parameters corresponding to each of them and the third specified duration.
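The core of the time-division processing is a round-robin over the streams, in which each stream occupies the shared hardware unit for its own duration. The sketch below shows one such loop for the decoding stage; the Stream struct, the hw_decode_slice call, and the loop shape are assumptions for illustration only, since the disclosure does not fix how the slices are driven.
```cpp
#include <chrono>
#include <thread>
#include <vector>

struct Stream {
    int decode_ms;            // decoding duration determined in step 302
    bool has_pending_data;    // whether packets are waiting to be decoded
};

// Hypothetical call that lets the single hardware decoder work on one stream
// for at most `budget_ms` milliseconds before control returns.
void hw_decode_slice(Stream& s, int budget_ms) {
    std::this_thread::sleep_for(std::chrono::milliseconds(budget_ms)); // stand-in for real decoding
    s.has_pending_data = true;  // in a real pipeline this reflects the demuxer output
}

// Time-division decoding: each of the at least two streams gets the hardware
// decoder for its own decoding duration, in turn, so that all of them advance.
void decode_round(std::vector<Stream>& streams) {
    for (auto& s : streams) {
        if (s.has_pending_data) {
            hw_decode_slice(s, s.decode_ms);
        }
    }
}
```
The parsing, audio/video separation, and rendering stages would run analogous loops with the first, second, and third specified durations as their per-stream budgets.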
As described in step 301 above, high-resolution video usually needs to be processed by the hardware resource. However, the at least two pieces of multimedia data to be played simultaneously are not necessarily all high-resolution videos; they may all be low-resolution videos or audio, or some may be high-resolution videos while the rest are low-resolution videos or audio. In these cases, in order to play more multimedia data at the same moment, the hardware resource and software resources can be combined to process the multimedia data. Specifically, when each of the at least two pieces of multimedia data is a low-resolution video or an audio, the at least two pieces of multimedia data can be processed by at least two software resources so as to play them simultaneously, where a low-resolution video is a video whose resolution is lower than the preset resolution. When the at least two pieces of multimedia data include a high-resolution video together with a low-resolution video or an audio, the number of high-resolution videos among the at least two pieces of multimedia data can be determined. When that number is less than a predetermined number, the at least two pieces of multimedia data are processed by the hardware resource in the same way as when all of them are high-resolution videos. When that number is greater than or equal to the predetermined number, the high-resolution videos are processed by the hardware resource in the same way as when all of the at least two pieces of multimedia data are high-resolution videos, and the low-resolution video or audio is processed by a software resource.
An embedded device can be provided with one set of hardware resources and several sets of software resources, and the processing capability of a software resource is relatively weak — it can handle only low-resolution video and audio — so one software resource is usually used to process one low-resolution video or audio. When the determined number is greater than or equal to the predetermined number, the high-resolution videos are processed by the hardware resource and the low-resolution video or audio is processed by a software resource; this makes effective use of both the hardware resource and the software resources in the embedded device and allows more multimedia data to be played smoothly at the same moment.
The predetermined number is determined from the processing capability of the hardware resource and the preset resolution, i.e., for a given high-resolution video format, it is the maximum number of such high-resolution videos that the hardware resource can time-division process simultaneously without affecting playback fluency.
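The sketch below captures this hardware/software dispatch decision; the names (Media, Plan, dispatch) and the way the result is represented are illustrative assumptions, and PREDETERMINED_MAX stands in for the number derived from the hardware capability and the preset resolution.
```cpp
#include <vector>

struct Media {
    bool high_resolution;   // result of the step-301 check
};

struct Plan {
    std::vector<int> on_hardware;   // indices processed by the hardware resource (time-division)
    std::vector<int> on_software;   // indices processed by software resources
};

// Maximum number of high-resolution videos the hardware resource can
// time-division process smoothly; placeholder value for illustration.
constexpr int PREDETERMINED_MAX = 2;

Plan dispatch(const std::vector<Media>& items) {
    Plan plan;
    int high_count = 0;
    for (const auto& m : items) high_count += m.high_resolution ? 1 : 0;

    for (int i = 0; i < static_cast<int>(items.size()); ++i) {
        if (high_count == 0) {
            plan.on_software.push_back(i);               // all low-res/audio: software resources only
        } else if (high_count < PREDETERMINED_MAX) {
            plan.on_hardware.push_back(i);               // few high-res streams: hardware handles all
        } else {
            (items[i].high_resolution ? plan.on_hardware // many high-res streams: split the work
                                      : plan.on_software).push_back(i);
        }
    }
    return plan;
}
```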
Optionally, when the embedded device includes the resource management module 101, parsing layer 102, audio/video separation layer 103, decoding layer 104, rendering layer 105, and at least two players 106 shown in Fig. 1 above, step 303 may be implemented as follows: when the at least two players 106 both need to play multimedia data, the at least two players can determine, based on the number of high-resolution videos among the at least two pieces of multimedia data to be played, whether the at least two pieces of multimedia data are processed by the hardware resource or by software resources. When it is determined that the at least two pieces of multimedia data are processed by the hardware resource, the corresponding hardware resource is applied for from the resource management module, and time-division processing is then performed according to the parsing duration, audio/video separation duration, decoding duration, and rendering duration determined in step 302. When it is determined that the multimedia data is processed by software resources, the corresponding software resources are applied for from the resource management module before processing.
In summary, the method provided by this embodiment determines the processing durations for time-division processing of at least two pieces of multimedia data and then performs time-division processing on the at least two pieces of multimedia data with the hardware resource. In this way, the hardware resource and the software resources in the embedded device are used effectively, more multimedia data can be played smoothly at the same moment, and the problem that an embedded platform can play only one piece of multimedia data at a time is solved, satisfying the user's needs and providing better service to the user.
Fig. 4 shows a multimedia data playing device based on an embedded platform according to an embodiment of the present invention. The device is deployed in an embedded device. Referring to Fig. 4, the device includes:
a first determining module 401, configured to determine, when at least two pieces of multimedia data need to be played simultaneously, the processing duration with which a hardware resource processes each of the at least two pieces of multimedia data;
a first processing module 402, configured to perform, according to the processing durations respectively corresponding to the at least two pieces of multimedia data, time-division processing on the at least two pieces of multimedia data with the hardware resource, so as to play the at least two pieces of multimedia data simultaneously.
Optionally, the hardware resource includes a hardware parser, an audio/video hardware splitter, a hardware decoder, and a hardware renderer;
the first determining module 401 includes:
a first determining submodule, configured to determine, for the hardware parser, that the parsing duration corresponding to each of the at least two pieces of multimedia data is a first specified duration;
a second determining submodule, configured to determine, for the audio/video hardware splitter, that the audio/video separation duration corresponding to each of the at least two pieces of multimedia data is a second specified duration;
a third determining submodule, configured to determine, for the hardware decoder, the decoding duration corresponding to each of the at least two pieces of multimedia data based on the coding format and resolution of each of them;
a fourth determining submodule, configured to determine, for the hardware renderer, that the rendering duration corresponding to each of the at least two pieces of multimedia data is a third specified duration.
Optionally, the first processing module 402 includes:
a parsing submodule, configured to perform, when the at least two pieces of multimedia data need to be parsed, time-division parsing on the at least two pieces of multimedia data with the hardware parser according to the transport protocol corresponding to each of them and the first specified duration;
an audio/video separation submodule, configured to perform, when audio/video separation needs to be performed on the at least two pieces of multimedia data, time-division audio/video separation on the at least two pieces of multimedia data with the audio/video hardware splitter according to the encapsulation format corresponding to each of them and the second specified duration;
a decoding submodule, configured to perform, when the at least two pieces of multimedia data need to be decoded, time-division decoding on the at least two pieces of multimedia data with the hardware decoder according to the coding format, resolution, and decoding duration corresponding to each of them;
a rendering submodule, configured to perform, when the at least two pieces of multimedia data need to be rendered, time-division rendering on the at least two pieces of multimedia data with the hardware renderer according to the rendering parameters corresponding to each of them and the third specified duration.
Optionally, the device further includes:
a triggering module, configured to trigger, when each of the at least two pieces of multimedia data is a high-resolution video, the first determining module 401 to determine the processing duration with which the hardware resource processes each of the at least two pieces of multimedia data, where a high-resolution video is a video whose resolution is greater than or equal to a preset resolution;
a second processing module, configured to process, when each of the at least two pieces of multimedia data is a low-resolution video or an audio, the at least two pieces of multimedia data with at least two software resources so as to play the at least two pieces of multimedia data simultaneously, where a low-resolution video is a video whose resolution is lower than the preset resolution.
Optionally, the device further includes:
a second determining module, configured to determine, when the at least two pieces of multimedia data include a high-resolution video together with a low-resolution video or an audio, the number of high-resolution videos among the at least two pieces of multimedia data;
a third processing module, configured to process, when that number is less than a predetermined number, the at least two pieces of multimedia data with the hardware resource in the same way as when all of the at least two pieces of multimedia data are high-resolution videos;
a fourth processing module, configured to process, when that number is greater than or equal to the predetermined number, the high-resolution videos with the hardware resource in the same way as when all of the at least two pieces of multimedia data are high-resolution videos, and to process the low-resolution video or audio with a software resource.
In summary, when playing multimedia data, the device provided by this embodiment determines the processing durations for time-division processing of at least two pieces of multimedia data and then performs time-division processing on the at least two pieces of multimedia data with the hardware resource. In this way, the hardware resource and the software resources in the embedded device are used effectively, more multimedia data can be played smoothly at the same moment, and the problem that an embedded platform can play only one piece of multimedia data at a time is solved, satisfying the user's needs and providing better service to the user.
It should be noted that, when the multimedia data playing device based on an embedded platform provided by the above embodiment plays multimedia data, the division into the functional modules described above is only used as an example; in practice, the above functions may be assigned to different functional modules as needed, i.e., the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the device embodiment provided above and the method embodiment of the multimedia data playing method based on an embedded platform belong to the same concept; for the specific implementation process of the device, refer to the method embodiment, which is not repeated here.
Fig. 5 is a schematic structural diagram of an embedded device according to an embodiment of the present invention. The embedded device 500 may be a mobile terminal or a smart TV. Referring to Fig. 5, the embedded device includes a communication unit 510, a memory 520 that includes one or more computer-readable storage media, an input unit 530, a display unit 540, a sensor 550, an audio circuit 560, a WIFI (Wireless Fidelity) module 570, a processor 550 that includes one or more processing cores, a power supply 590, and other components. Those skilled in the art will understand that the embedded device structure shown in Fig. 5 does not limit the mobile embedded device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components. In detail:
The communication unit 510 can be used to receive and send signals during information transmission and reception or during a call, and may be an RF (Radio Frequency) circuit, a router, a modem, or another network communication device. In particular, when the communication unit 510 is an RF circuit, it receives downlink information from a base station and delivers it to one or more processors 550 for processing, and sends uplink data to the base station. An RF circuit used as the communication unit generally includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the communication unit 510 can also communicate with networks and other devices through wireless communication. The wireless communication can use any communication standard or protocol, including but not limited to GSM, GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA, LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and so on. The memory 520 can be used to store software programs and modules, and the processor 550 executes various functional applications and data processing by running the software programs and modules stored in the memory 520. The memory 520 may mainly include a program storage area and a data storage area, where the program storage area can store an operating system, an application program required by at least one function (such as an audio playing function, an image playing function, etc.), and the like; the data storage area can store data created according to the use of the embedded device 500 (such as audio data, a phone book, etc.), and the like. In addition, the memory 520 may include a high-speed random access memory and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device. Accordingly, the memory 520 may also include a memory controller to provide the processor 550 and the input unit 530 with access to the memory 520.
The input unit 530 can be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control. Preferably, the input unit 530 may include a touch-sensitive surface 531 and other input devices 532. The touch-sensitive surface 531, also called a touch display screen or touchpad, collects touch operations by the user on or near it (such as operations on or near the touch-sensitive surface 531 performed by the user with a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connection device according to a preset program. Optionally, the touch-sensitive surface 531 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 550, and can receive and execute commands sent by the processor 550. In addition, the touch-sensitive surface 531 may be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface 531, the input unit 530 may also include other input devices 532. Preferably, the other input devices 532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, a switch key, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 540 can be used to display information input by the user or information provided to the user and the various graphical user interfaces of the embedded device 500; these graphical user interfaces may be composed of graphics, text, icons, video, and any combination thereof. The display unit 540 may include a display panel 541, which may optionally be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 531 may cover the display panel 541; when the touch-sensitive surface 531 detects a touch operation on or near it, it transmits the operation to the processor 550 to determine the type of the touch event, and the processor 550 then provides a corresponding visual output on the display panel 541 according to the type of the touch event. Although in Fig. 5 the touch-sensitive surface 531 and the display panel 541 are shown as two independent components implementing the input and output functions, in some embodiments the touch-sensitive surface 531 and the display panel 541 may be integrated to implement the input and output functions.
The embedded device 500 may also include at least one sensor 550, such as a light sensor, a motion sensor, and other sensors. The light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor can adjust the brightness of the display panel 541 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 541 and/or the backlight when the embedded device 500 is moved close to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the device's posture (such as switching between landscape and portrait, related games, magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer or tapping). Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured in the embedded device 500, and are not described here.
The audio circuit 560, a loudspeaker 561, and a microphone 562 can provide an audio interface between the user and the embedded device 500. The audio circuit 560 can transmit the electrical signal converted from received audio data to the loudspeaker 561, which converts it into a sound signal for output; on the other hand, the microphone 562 converts a collected sound signal into an electrical signal, which is received by the audio circuit 560 and converted into audio data; the audio data is then output to the processor 550 for processing and sent via the communication unit 510 to, for example, another embedded device, or output to the memory 520 for further processing. The audio circuit 560 may also include an earphone jack to provide communication between a peripheral earphone and the embedded device 500.
In order to implement wireless communication, a wireless communication unit 570 may be configured on the embedded device, and the wireless communication unit 570 may be a WIFI module. WIFI is a short-range wireless transmission technology; through the wireless communication unit 570, the embedded device 500 can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although Fig. 5 shows the wireless communication unit 570, it can be understood that it is not an essential part of the embedded device 500 and can be omitted as needed without changing the essence of the invention.
The processor 550 is the control center of the embedded device 500; it connects all parts of the whole device through various interfaces and lines, and performs the various functions of the embedded device 500 and processes data by running or executing the software programs and/or modules stored in the memory 520 and calling the data stored in the memory 520, thereby monitoring the device as a whole. Optionally, the processor 550 may include one or more processing cores; preferably, the processor 550 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 550.
The embedded device 500 also includes a power supply 590 (such as a battery) that supplies power to the various components. Preferably, the power supply can be logically connected to the processor 550 through a power management system, so that functions such as charging, discharging, and power-consumption management are handled through the power management system. The power supply 590 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other arbitrary components.
Although not shown, the embedded device 500 may also include a camera, a Bluetooth module, and the like, which are not described here.
In this embodiment, the embedded device also includes one or more programs, which are stored in the memory and configured to be executed by the one or more processors; the one or more programs contain instructions for performing the multimedia data playing method based on an embedded platform described above with reference to Figs. 2-3 and provided by the embodiments of the present invention.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, for example a memory including instructions, is also provided; the instructions can be executed by the processor of the embedded device to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In such a non-transitory computer-readable storage medium, when the instructions in the storage medium are executed by the processor of the embedded device, the embedded device is enabled to perform the multimedia data playing method based on an embedded platform described above with reference to Fig. 2 and Fig. 3.
A person of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be implemented by hardware, or by a program that instructs the relevant hardware and is stored in a computer-readable storage medium; the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.

Claims (12)

CN201710620382.1A, priority date 2017-07-26, filing date 2017-07-26: Multimedia data playing method and device based on embedded platform and storage medium. Status: Active. Granted publication: CN107277592B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201710620382.1A (CN107277592B) | 2017-07-26 | 2017-07-26 | Multimedia data playing method and device based on embedded platform and storage medium

Publications (2)

Publication Number | Publication Date
CN107277592A | 2017-10-20
CN107277592B (en) | 2020-07-17

Family

ID=60079631

Family Applications (1)

Application Number | Status | Publication
CN201710620382.1A | Active | CN107277592B (en)

Country Status (1)

Country | Link
CN (1) | CN107277592B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103124374A (en)* | 2011-11-17 | 2013-05-29 | 株式会社艾库塞尔 | Method for moving image reproduction processing and mobile information terminal using the method
CN103260017A (en)* | 2013-05-30 | 2013-08-21 | 华为技术有限公司 | Video processing method, video processing device and video processing system
CN105187765A (en)* | 2015-07-21 | 2015-12-23 | 中国科学院西安光学精密机械研究所 | Embedded multifunctional video interface module
CN105516666A (en)* | 2015-12-14 | 2016-04-20 | 深圳市云视互联有限公司 | Audio/video acquisition device, audio/video acquisition auxiliary system and audio/video monitoring system
CN105657492A (en)* | 2015-12-29 | 2016-06-08 | 广州视源电子科技股份有限公司 | Television signal processing method and device and television playing control system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110740383A (en)* | 2018-07-20 | 2020-01-31 | 宏碁股份有限公司 | Multimedia file management method, terminal device, service device and file management system
CN110740383B (en)* | 2018-07-20 | 2021-10-08 | 宏碁股份有限公司 | Multimedia file management method, terminal device, service device and file management system
CN113286140A (en)* | 2021-05-11 | 2021-08-20 | 北京飞讯数码科技有限公司 | Video coding and decoding test method, device and storage medium
CN113286140B (en)* | 2021-05-11 | 2022-09-02 | 北京飞讯数码科技有限公司 | Video coding and decoding test method, device and storage medium

Also Published As

Publication number | Publication date
CN107277592B (en) | 2020-07-17

Similar Documents

Publication | Title
CN107634962B (en) | Network bandwidth management method and related products
CN104519404B (en) | Method and device for playing graphic interchange format files
CN105183296B (en) | Interactive interface display method and device
CN106973321B (en) | Method and device for determining video stutter
CN107438200A (en) | Method and apparatus for displaying gifts in a live broadcast room
CN103400592A (en) | Recording method, playing method, device, terminal and system
CN103905885A (en) | Video live broadcast method and device
CN106412662A (en) | Timestamp distribution method and device
CN104036536A (en) | Method and apparatus for generating stop-motion animation
WO2021228131A1 (en) | Information transmission method and apparatus, and electronic device
CN104157007A (en) | Video processing method and apparatus
CN104793991B (en) | Audio output device switching method and device
CN104602135A (en) | Method and device for controlling full-screen playback
CN110248233A (en) | Audio and video playing method, apparatus, device and storage medium
CN110180181A (en) | Screenshot method and device for highlight video moments, and computer-readable storage medium
CN104967864B (en) | Method and device for merging videos
CN103491421B (en) | Content display method, device and smart television
CN104240710B (en) | Information transmission method, system and terminal device
CN107396193A (en) | Method and device for playing video
CN106371797A (en) | Method and device for configuring sound effects
CN107277592A (en) | Multimedia data playing method, device and storage medium based on embedded platform
CN106375182A (en) | Voice communication method and device based on instant messaging application
CN108966162A (en) | Data communication method, communication processing device, terminal and readable storage medium
CN108399057A (en) | Message display method, terminal and computer-readable storage medium
CN108495302A (en) | SIM card initialization acceleration method, mobile terminal and readable storage medium

Legal Events

PB01 | Publication
SE01 | Entry into force of request for substantive examination
CB02 | Change of applicant information
  Address after: 266555 Qingdao Economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218
  Applicant after: Hisense Visual Technology Co., Ltd.
  Address before: 266555 Qingdao Economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218
  Applicant before: QINGDAO HISENSE ELECTRONICS Co., Ltd.
GR01 | Patent grant
TR01 | Transfer of patent right
  Effective date of registration: 2022-10-14
  Address after: 83 Intekte Street, Devon, Netherlands
  Patentee after: VIDAA (Netherlands) International Holdings Ltd.
  Address before: 266555, No. 218, Bay Road, Qingdao Economic and Technological Development Zone, Shandong
  Patentee before: Hisense Visual Technology Co., Ltd.

