CN101431652A - Data adapting device, data adapting method - Google Patents

Data adapting device, data adapting method

Info

Publication number
CN101431652A
Authority
CN
China
Prior art keywords
segmentation
user
content
priority
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2008101660380A
Other languages
Chinese (zh)
Inventor
江村恒一
宗续敏彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Holdings Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd
Publication of CN101431652A
Status: Pending

Links

Images

Landscapes

Abstract

Translated from Chinese

The present invention relates to a content adapting device and a content adapting method. The device has metadata (d101) that describes, for data (d102) composed of a plurality of segments, the segment viewpoint given to each segment, and a user preference (d103) in which the user's tastes are described as user viewpoints; by comparing the segment viewpoints against the user viewpoints, it extracts segments from the data, and thereby displays the data adapted to the user's tastes.

Figure 200810166038

Description

Content-adaptive device and content-adaptive method
This application is a divisional of the patent application with application number 00818804.1, international filing date November 29, 2000, entitled "Data adapting device, data adapting method, medium, and program".
Technical field
The present invention relates to a data adapting device and a data adapting method that adapt data to a user's preferences. In particular, it relates to a device that adapts data containing video, audio, and documents distributed over broadcast media such as digital broadcasting, and over communication media such as the Internet.
Background art
In recent years, the digitization of broadcasting has been advancing rapidly, as has its convergence with communications. In the broadcasting field, satellite digital broadcasting has already begun, and terrestrial broadcasting is also expected to be digitized in the future. Digitizing broadcast content makes it possible to provide data broadcasting in addition to conventional video and audio broadcasting. In the United States, products realizing the new concept of storing television broadcasts on a hard disk and viewing them later ("TV anytime"), such as TiVo (www.tivo.com) and Replay TV (www.replay.com), have gone on sale. In the communications field, on-demand distribution of digital content over the Internet has begun, starting with music, and Internet broadcasting stations that deliver video are also increasing. Furthermore, mobile terminals can now access Internet content through broadband access networks connected to the Internet.
Given this explosive growth of information, it is expected that users will be able to access enormous information sources containing video, audio, documents, and so on easily from a variety of terminals. Unless the convergence of digitized broadcasting and communications is exploited so that metadata describing the content is used to selectively transmit and reproduce only the information that matches the user's preferences, users will find it difficult to handle such enormous information sources.
"Adapting Multimedia Internet Content For Universal Access" (IEEE Transactions on Multimedia, March 1999, pp. 104-114) proposes distributing a single content item to various terminals such as television receivers and portable terminals, accessing rich content over networks from various terminals such as an HHC (Hand Held Computer), a PDA (Personal Digital Assistant), or a Smart Phone, and methods of converting (adapting) Internet content to suit the display capabilities of the terminal (for example, changing the display size and the number of display colors).
A conventional data adapting device is described below. Fig. 68 is a block diagram of a conventional data adapting device. The conventional data adapting device comprises a data adaptive unit 100, a policy engine 101, a content analysis unit 102, a content selection unit 103, a content operation unit 104, and a terminal 105. Internet content d1001 is input to the content analysis unit 102. In addition, the terminal 105 stores a terminal preference d1002 representing the capability information of the terminal 105.
Fig. 69 shows the details of the terminal preference d1002. The terminal preference d1002 includes the display capabilities of the terminal 105: the number of screen colors x and the screen size a × b.
The operation of the data adaptive unit configured as described above is explained below with reference to Fig. 70. Fig. 70 is a flowchart explaining the operation of the data adaptive unit 100.
The policy engine 101 uses the terminal preference d1002 obtained from the terminal 105 to identify the display screen size and the number of display colors as the terminal capabilities (p1201). The policy engine 101 then controls the content selection unit 103 and the content operation unit 104 according to the information identified in p1201 and its policy.
Meanwhile, the content analysis unit 102 analyzes the physical data type of the Internet content d1001 requested by the user (p1202). Then, the content selection unit 103 judges whether the Internet content d1001 can be displayed with the display capabilities of the terminal 105 (p1203), and when display is possible, selects from the Internet content d1001 only the content that the terminal is capable of displaying (p1204).
Next, the content operation unit 104 converts the displayable content selected in p1204 into a form that can actually be displayed, again according to the terminal capabilities. Specifically, the content selection unit 103 judges whether the content is larger than the display screen size (p1205), and content larger than the display screen size is reduced by the content operation unit 104 (p1206). When the content is not larger than the display screen, the content operation unit 104 judges whether it is smaller than the display capability (p1207), and smaller content is enlarged by the content operation unit 104 (p1208).
The content operation unit 104 then judges whether the display colors of the terminal 105 are color (p1209). When the terminal's display colors are not color, the content operation unit 104 judges whether they are grayscale (p1210); if they are grayscale, the number of colors of the content is converted to grayscale (p1211), and if they are neither color nor grayscale, binarization processing is performed (p1212).
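The size and color decisions of steps p1205 through p1212 can be sketched as follows. This is a minimal illustration of the conventional flow, not an implementation from the patent; the function name, field names, and the string labels for the color operations are assumptions.

```python
def adapt_to_terminal(content_w, content_h, screen_w, screen_h, terminal_colors):
    """Sketch of the conventional flow p1205-p1212: scale the content to the
    screen, then match the terminal's color capability."""
    # p1205/p1206: content larger than the display screen is reduced
    if content_w > screen_w or content_h > screen_h:
        scale = min(screen_w / content_w, screen_h / content_h)
    # p1207/p1208: content smaller than the display capability is enlarged
    elif content_w < screen_w and content_h < screen_h:
        scale = min(screen_w / content_w, screen_h / content_h)
    else:
        scale = 1.0

    # p1209-p1212: color terminal -> keep colors; grayscale terminal ->
    # convert to grayscale; anything else -> binarize
    if terminal_colors == "color":
        color_op = "none"
    elif terminal_colors == "grayscale":
        color_op = "to_grayscale"
    else:
        color_op = "binarize"
    return scale, color_op
```

For example, 640×480 content on a 320×240 color terminal yields a scale factor of 0.5 with no color conversion.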
However, in the above conventional technique, content is selected and operated on from the content analysis result according to a general policy, that is, a policy suited to a specific terminal. The problem is therefore that data can only be adapted to a certain number of predetermined terminal patterns.
In addition, the conventional technique cannot select and operate on content in a way that matches the user's preferences; for example, it cannot select and operate on an image by reducing it or displaying only part of it. Therefore, even when content selection and operation are performed on a predetermined particular terminal, the conventional technique has the problem that the data cannot be adapted into a form matching the preferences of the user using that terminal.
Summary of the invention
The first object of the present invention is to solve the above conventional problems by selecting segments of the data according to the user's preference information, thereby adapting the data of the content into a form that matches the user's preferences.
The second object of the present invention is, when content information is distributed over a network, to adapt the data scalably according to the state of the network, in the form the user wishes to receive.
To achieve the first object, the present invention captures the user's preference information as a user preference, selects segments of the data according to the user's preferences, and converts the picture resolution based on the priority of the segments and the terminal capability, so that each content item is adapted into the form the user wishes to receive. In this way, the data can be adapted in a way that focuses on the user's preferences.
To achieve the second object, the present invention, when content is distributed over a network, obtains the bandwidth information of the network as a network preference, and adjusts the amount of data according to the priority of the segments and the bandwidth, thereby adapting the data scalably according to the state of the network, in the form the user wishes to receive. In this way, the user can have the data adapted scalably according to the state of the network, in the form the user wishes to receive.
The present invention provides a data adapting device comprising: a content data acquisition unit that acquires metadata and the actual data of the content, the metadata including descriptions of a plurality of segments each constituting a time section of the content, a segment viewpoint and a segment priority being added as attributes of each segment, the segment viewpoint being a keyword representing the content of the segment and the segment priority representing the degree of importance based on that keyword; a user preference acquisition unit that acquires user preference information describing a user viewpoint and, as an attribute of the user viewpoint, a user priority, the user viewpoint being a keyword related to the user's preferences; and a user adaptation unit that extracts the user viewpoint and the user priority from the user preference information, compares the extracted user viewpoint with the segment viewpoints included in the metadata, extracts the plurality of segments whose segment viewpoints match, and, from the extracted segments, extracts those segments whose segment priority is satisfied by the user priority.
The present invention also provides a data adapting method comprising: a content data acquisition step of acquiring metadata and the actual data of the content, the metadata including descriptions of a plurality of segments each constituting a time section of the content, a segment viewpoint and a segment priority being added as attributes of each segment, the segment viewpoint being a keyword representing the content of the segment and the segment priority representing the degree of importance based on that keyword; a user preference acquisition step of acquiring user preference information describing a user viewpoint and, as an attribute of the user viewpoint, a user priority, the user viewpoint being a keyword related to the user's preferences; and a user adaptation step of extracting the user viewpoint and the user priority from the user preference information, comparing the extracted user viewpoint with the segment viewpoints included in the metadata, extracting the plurality of segments whose segment viewpoints match, and, from the extracted segments, extracting those segments whose segment priority is satisfied by the user priority.
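As a rough sketch of the claimed matching-and-filtering steps, assuming a simple in-memory representation. The data structures are illustrative, and the direction of the priority test ("the user priority satisfies the segment priority", here taken as user priority >= segment priority) is an assumption, since the claim does not spell out the comparison:

```python
def adapt(segments, user_viewpoints):
    """Sketch of the claimed steps: match segment viewpoints against user
    viewpoints, then filter the matches by priority.
    segments: list of (segment_id, segment_viewpoint, segment_priority)
    user_viewpoints: dict mapping user viewpoint (keyword) -> user priority
    """
    # Extract the segments whose segment viewpoint matches a user viewpoint
    matched = [s for s in segments if s[1] in user_viewpoints]
    # From those, keep the segments whose segment priority is satisfied by
    # the user priority (assumed: user priority >= segment priority)
    return [s for s in matched if user_viewpoints[s[1]] >= s[2]]
```

With segments tagged "Soccer" at priorities 3 and 5 and a user viewpoint "Soccer" at priority 4, only the priority-3 segment survives the filter.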
Description of drawings
Fig. 1 is a block diagram of a terminal provided with the data adapting device of Embodiments 1 and 3 of the present invention.
Fig. 2 shows the details of the user preference of Embodiment 1.
Fig. 3 shows the description definition of the user preference of Embodiment 1.
Fig. 4 shows a description example of the user preference of Embodiment 1.
Fig. 5 is a diagram explaining the operation of the segment selection unit of Embodiment 1.
Fig. 6 is a flowchart of the user adaptive control unit of Embodiment 1.
Fig. 7 is a flowchart of the terminal adaptive control unit of Embodiment 1.
Fig. 8 is a block diagram of a system provided with the data adaptive unit of Embodiments 2 and 4 of the present invention.
Fig. 9 is a flowchart of the network adaptive control unit of Embodiment 2.
Fig. 10 shows the details of a second example of the user preference.
Fig. 11 shows a second description definition of the user preference.
Fig. 12 shows a second description example of the user preference.
Fig. 13 is a diagram explaining a second operation of the segment selection unit.
Fig. 14 is a second flowchart of the user adaptive control unit.
Fig. 15 is a block diagram of another example of a system provided with the data adaptive unit of an embodiment of the present invention.
Fig. 16 shows another form of the segments of an embodiment of the present invention.
Fig. 17 shows a segment description example of an embodiment of the present invention.
Fig. 18 shows the data structure of Embodiment 5 of the present invention.
Fig. 19 shows a description example of the data of Embodiment 5.
Fig. 20 shows another example of the data structure of Embodiment 5.
Fig. 21 shows a description example of the other data structure of Embodiment 5.
Fig. 22 shows the data structure of Embodiment 6.
Fig. 23 shows a data description example of Embodiment 6.
Fig. 24 shows the data structure of Embodiment 7.
Fig. 25 shows a data description example of Embodiment 7.
Fig. 26 shows another example of the data structure of Embodiment 7.
Fig. 27 shows a description example of the other data structure of Embodiment 7.
Fig. 28 shows the data structure of Embodiment 8.
Fig. 29 shows a data description example of Embodiment 8.
Fig. 30 shows another example of the data structure of Embodiment 8.
Fig. 31 shows another data description example of Embodiment 8.
Fig. 32 is the first diagram of the description definition of the user preference of Embodiment 9.
Fig. 33 is the second diagram of the description definition of the user preference of Embodiment 9.
Fig. 34 shows a description example of the user preference of Embodiment 9.
Fig. 35 is the first diagram of the definition of the segment description of Embodiment 9.
Fig. 36 is the second diagram of the definition of the segment description of Embodiment 9.
Fig. 37 is the third diagram of the definition of the segment description of Embodiment 9.
Fig. 38 shows the first segment description example of Embodiment 9.
Fig. 39 shows the second segment description example of Embodiment 9.
Fig. 40 shows the third segment description example of Embodiment 9.
Fig. 41 shows the fourth segment description example of Embodiment 9.
Fig. 42 shows the fifth segment description example of Embodiment 9.
Fig. 43 shows the sixth segment description example of Embodiment 9.
Fig. 44 shows the seventh segment description example of Embodiment 9.
Fig. 45 shows the eighth segment description example of Embodiment 9.
Fig. 46 shows the ninth segment description example of Embodiment 9.
Fig. 47 is the first diagram of the description definition of the user preference of Embodiment 10.
Fig. 48 is the second diagram of the description definition of the user preference of Embodiment 10.
Fig. 49 is the third diagram of the description definition of the user preference of Embodiment 10.
Fig. 50 is the fourth diagram of the description definition of the user preference of Embodiment 10.
Fig. 51 is the fifth diagram of the description definition of the user preference of Embodiment 10.
Fig. 52 shows a description example of the user preference of Embodiment 10.
Fig. 53 is the first diagram of the definition of the segment description of Embodiment 10.
Fig. 54 is the second diagram of the definition of the segment description of Embodiment 10.
Fig. 55 is the third diagram of the definition of the segment description of Embodiment 10.
Fig. 56 is the fourth diagram of the definition of the segment description of Embodiment 10.
Fig. 57 is the first diagram of a segment description example of Embodiment 10.
Fig. 58 is the second diagram of a segment description example of Embodiment 10.
Fig. 59 is the third diagram of a segment description example of Embodiment 10.
Fig. 60 is the first diagram of another example of the segment description of Embodiment 10.
Fig. 61 is the second diagram of another example of the segment description of Embodiment 10.
Fig. 62 is the first diagram of the definition of the digest description of Embodiment 11.
Fig. 63 is the second diagram of the definition of the digest description of Embodiment 11.
Fig. 64 is the third diagram of the definition of the digest description of Embodiment 11.
Fig. 65 is the fourth diagram of the definition of the digest description of Embodiment 11.
Fig. 66 shows a description example of the viewpoint list of an embodiment.
Fig. 67 shows a description example of the digest of Embodiment 11.
Fig. 68 is a block diagram of a conventional data adapting device.
Fig. 69 is a detailed diagram of the description of a conventional terminal preference.
Fig. 70 is a flowchart of a conventional data adaptive unit.
Embodiments
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiment 1
A terminal provided with the data adapting device of Embodiment 1 of the present invention is described below. Fig. 1 is a block diagram of a terminal provided with the data adapting device of Embodiment 1. As shown in Fig. 1, the terminal 10 comprises: a content data medium 11 storing data d102, consisting of content made up of a plurality of segments, and metadata d101, which is information explaining the data d102; a user preference storage unit 19 storing user preference d103, information concerning the user's preferences; a terminal preference storage unit 20 storing terminal preference d104, the capability information of the terminal; and a user adaptive unit 12 that adapts the data d102 to the user's preferences using the metadata d101, the user preference d103, and the terminal preference d104.
The user adaptive unit 12 is provided with means for acquiring the various data. As these acquiring means, there are provided: a content data acquisition unit 21, consisting of a metadata acquisition unit 21a that acquires the metadata d101 and a data acquisition unit 21b that acquires the data d102; a user preference acquisition unit 22 that acquires the user preference d103; and a terminal preference acquisition unit 23 that acquires the terminal preference d104.
The user adaptive unit 12 is also provided with means for selecting prescribed segments from the acquired data d102 according to the user's preferences, so as to adapt the data to those preferences. As the means for making the selected segments match the user's preferences, there are provided a user adaptive control unit 14 that, using the acquired metadata d101 and user preference d103, generates information for selecting the prescribed segments from the acquired data d102, and a segment selection unit 16 that selects and extracts the prescribed segments from the acquired data d102 according to the information generated by the user adaptive control unit 14.
In addition, the user adaptive unit 12 is provided with means for converting the data d102 according to the capability of the units that process it, such as the display unit 18. As this converting means, there are provided a terminal adaptive control unit 15 that generates, from the metadata d101 and the terminal preference d104, information for changing the spatial resolution and color resolution of the data d102, and a resolution conversion unit 17 that performs spatial resolution conversion and color resolution conversion of the data d102 according to the information generated by the terminal adaptive control unit 15.
The user preference d103 is described below. Fig. 2 shows the details of the user preference d103. As shown in Fig. 2, the user preference d103 consists of a plurality of user preference descriptions 3a to 3n. Each of the user preference descriptions 3a to 3n stores a content identifier 34 identifying the content corresponding to that description, duration information 35 used to display the content for a desired length of time, and groups of keywords (user viewpoints) 36a, 36b used to extract desired segments from the content, together with the priorities 37a, 37b of those keywords.
The definition used to describe a user preference d103 containing the user preference descriptions 3a to 3n is explained in detail below. Fig. 3 shows the description definition of the user preference d103.
As shown in Fig. 3, the user preference description definition 40 can be written using the DTD (Document Type Definition) of XML (Extensible Markup Language).
In the user preference description definition 40, as shown at 41 in the figure, a user preference is defined as having one or more contents (content). As shown at 42, each content is defined as having one or more keywords (user viewpoints) 36 (Keyword). As shown at 43, a content is defined as having, as attributes, a content identifier 34 (content ID), a duration 35 (Duration), and a screen size 46 (Screen Size), which is size information representing the data. As shown at 47, the keywords 36 (Keyword) are defined to be described as text data. As shown at 48, each keyword 36 is defined as having a priority 37 (Priority) as an attribute.
A user preference generated using the description definition of the user preference shown in Fig. 3 is described below. Fig. 4 shows an example of a user preference.
In Fig. 4, reference numeral 50 denotes an example of an actual user preference described in XML using the user preference description definition 40.
The user preference 50 has two contents 51 and 52. Content 51 contains a content ID 53a (123456789 in this example), and content 52 contains a content ID 53b (123456788 in this example). In addition, content 51 contains a display duration 54 (smpte=00:05:00:00 in this example) as a content attribute.
Content 51 also contains keywords (user viewpoints) 55a to 55c. Nakata is written in keyword 55a, Soccer in keyword 55b, and Japan in keyword 55c. Furthermore, as attributes of keywords 55a to 55c, priority 5 is written for keyword 55a, priority 4 for keyword 55b, and priority 2 for keyword 55c.
Content 52, on the other hand, contains a display scale 56 (pixel=320×240 in this example) as a content attribute. Content 52 also contains keywords 55d to 55f. Headline is written in keyword 55d, Stock in keyword 55e, and Sports in keyword 55f. As attributes of keywords 55d to 55f, priority 5 is written for keyword 55d, priority 5 for keyword 55e, and priority 3 for keyword 55f.
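The user preference 50 of Fig. 4 might be written roughly as follows; since the figure itself is not reproduced here, the exact element and attribute names (UserPreference, Content, ContentID, Keyword, and so on) are assumptions for illustration. The snippet parses the XML with Python's standard library to show how the keywords and priorities can be read back out:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML for user preference 50 (element/attribute names assumed)
USER_PREFERENCE = """
<UserPreference>
  <Content ContentID="123456789" Duration="smpte=00:05:00:00">
    <Keyword Priority="5">Nakata</Keyword>
    <Keyword Priority="4">Soccer</Keyword>
    <Keyword Priority="2">Japan</Keyword>
  </Content>
  <Content ContentID="123456788" ScreenSize="pixel=320x240">
    <Keyword Priority="5">Headline</Keyword>
    <Keyword Priority="5">Stock</Keyword>
    <Keyword Priority="3">Sports</Keyword>
  </Content>
</UserPreference>
"""

def load_preferences(xml_text):
    """Return {content_id: [(keyword, priority), ...]} from the XML above."""
    root = ET.fromstring(xml_text)
    prefs = {}
    for content in root.findall("Content"):
        cid = content.get("ContentID")
        prefs[cid] = [(kw.text, int(kw.get("Priority")))
                      for kw in content.findall("Keyword")]
    return prefs
```

Parsing this preference yields, for content 123456789, the keyword list [("Nakata", 5), ("Soccer", 4), ("Japan", 2)].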
The structure of the data d102 is described below with reference to Fig. 5.
The data d102 is made up of segments 61, each constituting a certain time range of the data d102. Segment 61 consists of a first sub-segment 62 and a second sub-segment 63. Sub-segment 62 consists of a first sub-segment 64 and a second sub-segment 65, and sub-segment 63 consists of a first sub-segment 66 and a second sub-segment 67. In this way, the data d102 has a hierarchical structure of segments and sub-segments.
Keywords (segment viewpoints) 621 are added to sub-segment 62, and keywords 631 to sub-segment 63. Likewise, keywords (segment viewpoints) 641, 651, 661, and 671 are added to sub-segments 64, 65, 66, and 67, respectively.
The keywords 621, 631, 641, 651, 661, and 671 are described in the metadata d101. Each of the keywords 621, 631, 641, 651, 661, and 671 is given a plurality of keywords (KWD) representing the content of the corresponding segment 62 to 67.
In addition, each keyword (KWD) is assigned a segment priority. In the example in the figure, keyword 621 consists of the pair of KWD#2 and the segment priority 3 ranking attached to KWD#2, and the pair of KWD#3 and the segment priority 3 attached to KWD#3.
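The hierarchy of Fig. 5, with (KWD, segment priority) pairs attached to each sub-segment, could be modeled as a simple tree. The following is only a sketch; the durations and the particular priority values assigned below are illustrative assumptions, not values from the figure:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """A segment or sub-segment; keywords maps KWD -> segment priority."""
    name: str
    duration: float = 0.0                      # seconds, illustrative
    keywords: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

def leaves(seg):
    """Yield the leaf sub-segments of a segment tree."""
    if not seg.children:
        yield seg
    else:
        for child in seg.children:
            yield from leaves(child)

# Sketch of Fig. 5: segment 61 with sub-segments 62/63 and leaves 64-67
seg64 = Segment("64", 30, {"KWD#2": 2})
seg65 = Segment("65", 40, {"KWD#2": 1})
seg66 = Segment("66", 20, {"KWD#1": 3, "KWD#2": 2})
seg67 = Segment("67", 50, {"KWD#2": 1})
seg62 = Segment("62", keywords={"KWD#2": 3, "KWD#3": 3}, children=[seg64, seg65])
seg63 = Segment("63", keywords={"KWD#1": 2, "KWD#2": 1}, children=[seg66, seg67])
seg61 = Segment("61", children=[seg62, seg63])
```

Walking the tree with `leaves(seg61)` visits sub-segments 64 through 67 in order, which is the set the selection processing below iterates over.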
The operations of the segment selection unit 16 and the user adaptive control unit 14 are described below with reference to Fig. 5 and Fig. 6. Fig. 5 is a diagram explaining the operation of the segment selection unit 16, and Fig. 6 is a flowchart explaining the operation of the user adaptive control unit 14.
The content data acquisition unit 21 in the user adaptive unit 12 reads and acquires the metadata d101 and the data d102 from the medium 11. The user adaptive control unit 14 then analyzes the acquired metadata d101, extracts the structure of the data and, for each segment of that structure, at least one segment viewpoint and the segment priority attached to it, and identifies the read data (content) d102 (P701).
Next, the user adaptive control unit 14 selects from the user preference d103 the user preference description corresponding to the identified content (for example, user preference description 31) (P702). Furthermore, the user adaptive control unit 14 arranges the user viewpoints (keywords) contained in the user preference description 31 in order of priority according to their attributes (P703). It then selects the user viewpoint assigned the highest priority (P704).
The user adaptive control unit 14 then sends to the segment selection unit 16, starting from the user viewpoint with the highest priority (the first user viewpoint), information for selecting from the data d102 the segments that match that user viewpoint.
Based on the information sent from the user adaptive control unit 14, the segment selection unit 16 compares the first user viewpoint with the segment viewpoints, and judges whether each segment has been assigned a segment viewpoint that matches the first user viewpoint (P705).
The operation then moves on to selecting the segments assigned a segment viewpoint that matches the first user viewpoint.
However, if all segments with a segment viewpoint matching the first user viewpoint were simply selected, the selected segments might exceed the constraints described in the user preference d103: the duration information 54, the spatial range information 56, or both. Therefore, in this embodiment, all segments assigned a segment viewpoint matching the first user viewpoint are first virtually selected, and then, from the virtually selected segments, only as many segments are actually selected as do not exceed the constraints of the duration information 54, the spatial range information 56, or both.
Specifically, the at first segmentation selectedcell 16 virtual selections segmentation (P706) consistent with User Perspective.Then, segmentation selectedcell 16 judges that to data d101 contained whole segmentations carry out virtual selection and whether finish (P707).If do not finish as yet, then extract next segmentation with metadata d101, carry out the processing of P705, P706.Like this, can carry out virtual selection to the contained whole segmentations of data d102.
Then, at P707,, then the virtual segmentation of selecting is arranged (P708) according to the segmentation priority order of giving segmentation incase 12 pairs of whole segmented virtual of judgment data adaptive unit are selected to finish.This is because when the true selection of carrying out segmentation described below, begins to carry out the cause of the true selection of segmentation from the high segmentation of segmentation priority.
First, the data adaptation unit 12 actually selects the segment with the highest segment priority (P709). Then the data adaptation unit 12 judges whether actual selection of all virtually selected segments has been completed (P710). If, at P710, the data adaptation unit 12 judges that actual selection of all virtually selected segments has not been completed, it judges whether the data length of the actually selected segments exceeds the restriction conditions of time length, spatial range, or both (P711). The data adaptation unit 12 then repeats the operations from P709 to P711 within the range that does not exceed these restriction conditions.
Further, at P710, once the data adaptation unit 12 judges that actual selection of all virtually selected segments has been completed, it judges whether all user viewpoints (keywords) have been processed (P715). Then, at P715, when the data adaptation unit 12 judges that processing of all user viewpoints has been completed, processing ends. On the other hand, when at P715 it judges that processing of all user viewpoints has not been completed, it moves on to processing the next user viewpoint.
Here, it is meaningless to process and select again segments that have already been selected in the above processing. Therefore, the data adaptation unit 12 first extracts the unselected segments and restricts the following processing to the unselected segments (P714). The data adaptation unit 12 then selects the next user viewpoint (P713) and carries out the processing from P705 to P711.
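The virtual-then-actual selection loop of P704 to P715 can be sketched as follows. This is a minimal illustration only: the function name and the dictionary fields (`keywords`, `priority`, `duration`) are invented for the example, not taken from the patent, and the restriction condition is reduced to a single total-duration limit.

```python
def select_segments(segments, user_keywords, max_duration):
    """Two-phase selection sketch: for each user keyword (in user-priority
    order) virtually select matching segments, then actually select them
    by segment priority within the time-length restriction."""
    selected, remaining = [], list(segments)
    for kw in user_keywords:            # P704/P713: next user viewpoint
        # P705-P707: virtual selection of matching, not-yet-chosen segments
        virtual = [s for s in remaining if kw in s["keywords"]]
        # P708: arrange by segment priority, highest first
        virtual.sort(key=lambda s: s["priority"], reverse=True)
        # P709-P711: actual selection within the duration restriction
        for seg in virtual:
            total = sum(s["duration"] for s in selected)
            if total + seg["duration"] <= max_duration:
                selected.append(seg)
                remaining.remove(seg)   # P714: exclude from later passes
    return selected
```

Under these assumptions, a segment matching a lower-priority keyword is admitted only with whatever duration budget the higher-priority keywords left over.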
Next, the segment selection operation is described in more detail with Fig. 5. First, the data adaptation unit 12 searches for the sub-segments having KWD#1, the user viewpoint (user keyword) with the highest priority among those contained in the user preference (user preference description) 3. Keyword 631 of sub-segment 63 and keyword 661 of sub-segment 66 each contain KWD#1. Then the data adaptation unit 12 applies the time-length restriction to sub-segment 63 and sub-segment 66, and selects from sub-segment 63 and sub-segment 66 in order of strictness of the time-length restriction. Specifically, sub-segment 66, whose time length is relatively short and for which the time-length restriction is therefore not strict, is selected first as the highest-priority output data d102a. Then sub-segment 63 is further selected as a segment constituting output data 102b.
Similarly, the data adaptation unit 12 looks in the user viewpoints contained in user preference 3 for the sub-segments having KWD#2, the viewpoint with the second-highest priority. KWD#2 is contained in all the sub-segments constituting segment 61. It is therefore necessary to decide here which sub-segment takes precedence.
Specifically, the data adaptation unit 12 compares the segment priorities of the segment viewpoints attached to each sub-segment. In the example of Fig. 5, sub-segment 63, sub-segment 62, sub-segment 65, and sub-segment 66 have the same priority, followed in priority by sub-segment 64 and sub-segment 67. Also, since sub-segment 67 has already been selected, it is omitted from the following processing.
Therefore, after having selected output data d102b, the data adaptation unit 12, if the time-length restriction allows, selects sub-segment 65, whose segment priority is higher than that of sub-segment 64, together with sub-segment 63, as the sub-segments constituting output data 102c. The remaining sub-segment 64 contains KWD#2 and KWD#3, and its segment priority is lower than that of sub-segment 65. Therefore, when the time-length restriction is loosest, sub-segment 64 is selected together with sub-segment 63 and sub-segment 65 as the sub-segments constituting output stream 102d.
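The ordering used above — higher segment priority first, and, among sub-segments of equal priority, shorter ones first so that progressively looser time-length restrictions admit progressively more sub-segments — can be sketched as a sort key. The field names are illustrative, not from the patent:

```python
def rank_subsegments(subsegs):
    """Order candidate sub-segments for actual selection: higher segment
    priority first; among equal priorities, shorter duration first, so
    loosening the time-length restriction extends the selection."""
    return sorted(subsegs, key=lambda s: (-s["priority"], s["duration"]))
```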
In addition, the user adaptation control unit 14 updates, according to the priority of the keywords attached to the selected segments, the priority of the user viewpoints contained in the user preference d103 that match those keywords.
In this way, segments adapted to the user preference d103 are selected from the data d102, and the data d102 can be made to suit the user preference d103.
Next, the data adaptation unit 12 moves on to the operation of converting the segments selected as described above according to the terminal capability information, namely the terminal preference d104. The operation of the terminal adaptation control unit 15 is described below with Fig. 7. Fig. 7 is a processing flowchart explaining the operation of the terminal adaptation control unit 15.
First, the terminal adaptation control unit 15 obtains the terminal preference d104 recorded in the terminal 10, and identifies the screen display size and the displayable number of colors of the display unit 18 as the terminal capability (P801). Then the terminal adaptation control unit 15 judges the segment priority that the segment selected by the segment selection unit 16 has as its own attribute (P802). Then the terminal adaptation control unit 15 judges whether the selected segment is larger than the display capability of the display unit 18 of the terminal 10 (P803).
Then, at P803, when the terminal adaptation control unit 15 judges that the selected segment is larger than the display capability of the display unit 18 of the terminal 10, the terminal adaptation control unit 15 adds the segment priority to the terminal preference d104 and decides whether to shrink the whole selected segment, display a part of the selected segment, not display a part of the selected segment, or convert a part of the selected segment before displaying it (P804).
On the other hand, at P803, when the terminal adaptation control unit 15 judges that the selected segment is not larger than the display capability of the display unit 18 of the terminal 10, the terminal adaptation control unit 15 judges whether the data of the selected segment is smaller than the screen display size of the display unit 18 (P805). Then, at P805, when the terminal adaptation control unit 15 concludes that the selected segment data is smaller than the screen display size of the display unit 18, the terminal adaptation control unit 15 adds the segment priority to the terminal preference d104 and decides whether to enlarge the whole selected segment, enlarge a part of the selected segment, not display a part of the selected segment, or convert a part of the selected segment before displaying it (P806).
Next, the terminal adaptation control unit 15 identifies the displayable number of colors of the terminal from the terminal preference d104 and judges, according to that number of colors, whether the segment that has undergone the processing from P801 to P806 can be displayed in color on the display unit 18 (P807). Then, at P807, once the terminal adaptation control unit 15 judges that the segment that has undergone the above processing can be displayed in color on the display unit 18, processing ends.
On the other hand, at P807, once the terminal adaptation control unit 15 judges that the segment that has undergone the above processing cannot be displayed in color on the display unit 18, it judges whether the segment can be displayed on the display unit 18 after grayscale conversion (P808). Then, once the terminal adaptation control unit 15 judges that the segment can be displayed on the display unit 18 after grayscale conversion, it decides to apply grayscale conversion to that segment (P809). Further, once the terminal adaptation control unit 15 judges that the segment cannot be displayed on the display unit 18 even after grayscale conversion, it decides to apply binarization to that segment (P810).
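The branch structure of P803 to P810 can be sketched as a pair of decisions, one spatial and one for color depth. The thresholds — in particular treating 256 displayable colors as sufficient for grayscale — and all names here are illustrative assumptions, not values stated in the patent:

```python
def terminal_adapt(seg_size, seg_colors, screen_size, screen_colors):
    """Sketch of P803-P810: pick a spatial action (shrink/enlarge/keep)
    and a color action (color/grayscale/binarize) for a selected segment
    given the terminal's screen size and displayable number of colors."""
    w, h = seg_size
    sw, sh = screen_size
    if w > sw or h > sh:             # P803/P804: segment exceeds the screen
        spatial = "shrink"
    elif w < sw and h < sh:          # P805/P806: segment smaller than screen
        spatial = "enlarge"
    else:
        spatial = "keep"
    if screen_colors >= seg_colors:  # P807: color display is possible
        color = "color"
    elif screen_colors >= 256:       # P808/P809: grayscale display possible
        color = "grayscale"
    else:                            # P810: fall back to binarization
        color = "binarize"
    return spatial, color
```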
Then, the resolution conversion unit 17 performs resolution conversion on the actually selected segments so as to fit the decision content of the terminal adaptation control unit 15. The display unit 18 then displays the segments converted by the resolution conversion unit 17.
As described above, according to Embodiment 1, at least one segment viewpoint attached to each segment of the metadata d101 and the segment priority within each segment viewpoint are extracted, and the segments having a segment viewpoint matching a user viewpoint contained in the user preference d103 can be selected according to the segment priority. In this way, the data d102 can be uniquely adapted to the preference of the user.
In addition, according to Embodiment 1, by performing temporal and spatial resolution conversion of the segments according to the terminal capability information contained in the terminal preference, the data can be uniquely adapted to the capability of the terminal.
In addition, according to Embodiment 1, since a user priority is assigned to each user viewpoint, the priority among user viewpoints can be changed simply by changing the user priorities. The user viewpoints can therefore be edited easily.
Embodiment 2
The data adaptation device of Embodiment 2 of the present invention is described below with Fig. 8. Fig. 8 is a block diagram showing the data adaptation device of Embodiment 2.
In Embodiment 2, the user adaptation control unit of Embodiment 1 is placed on a server, and the server is connected to a terminal via a network. However, merely connecting the server to the terminal over a network does not allow the user adaptation control unit to adapt the data scalably in the form in which the user wants to receive it and according to the state of the network. To solve this problem, in Embodiment 2, when the user adaptation unit distributes content information to the terminal via the network, it obtains information on the available bandwidth of the network as a network preference, and adjusts the data amount according to the segment priority and the available bandwidth. In this way, Embodiment 2 can adapt the data scalably in the form in which the user wants to receive it and according to the state of the network.
Embodiment 2 is described in detail below. First, the configuration of the data adaptation unit 22 of Embodiment 2 is described with Fig. 8. Fig. 8 is a block diagram of a system provided with the data adaptation unit 22 of Embodiment 2.
As shown in Fig. 8, a server 20 and a terminal 2B are connected via a network 2A. The terminal 2B is composed of a user preference storage unit 30 that stores information on the preferences of the user, namely the user preference d103; a terminal preference storage unit 31 that stores the capability information of the terminal 2B, namely the terminal preference d104; a network preference storage unit 32 that stores the bandwidth information of the network 2A, namely the network preference d205; and a storage medium 2C that stores the data sent from the server 20.
In addition, the server 20 is composed of a content data medium 21 that stores content composed of a plurality of segments, namely the data d102, together with the information describing d102, namely the metadata d101; and a user adaptation unit 22 that uses the metadata d101, the user preference d103, the terminal preference d104, and the network preference d205 to adapt the data d102 to the preference of the user.
The user adaptation unit 22 is provided with means for obtaining various data. As the means for obtaining various data there are provided: a content data obtaining unit 33 composed of a metadata obtaining unit 33a that obtains the metadata d101 and a data obtaining unit 33b that obtains the data d102; a user preference obtaining unit 34 that obtains the user preference d103 via the network 2A; a terminal preference obtaining unit 35 that obtains the terminal preference d104 via the network 2A; and a network preference obtaining unit 36 that obtains the network preference d205 via the network 2A.
In addition, the user adaptation unit 22 is provided with means for selecting predetermined segments from the obtained data d102 so as to conform to the preference of the user. As the means for making the selected segments conform to the preference of the user, there are provided a user adaptation control unit 24 that uses the obtained metadata d101 and user preference data d103 to generate the information used to select the predetermined segments from the obtained data d102, and a segment selection unit 26 that selects and extracts the predetermined segments from the obtained data d102 according to the information generated by the user adaptation control unit 24.
In addition, the user adaptation unit 22 is provided with means for converting the data d102 according to the capability of the unit that processes the data d102, namely the terminal 2B. As the means for converting the data d102 according to the terminal capability, there are provided: a terminal adaptation control unit 25 that generates, from the metadata d101 and the terminal preference d104, the information used to convert the spatial resolution and color resolution of the data d102; a resolution conversion unit 27 that uses the information generated by the terminal adaptation control unit 25 to perform the spatial resolution conversion and color resolution conversion of d102; a network adaptation control unit 28 that uses the metadata d101 and the network preference d205 to generate the information for adjusting the amount of the data d102 sent to the network 2A; and a data amount adjustment unit 29 that uses the information generated by the network adaptation control unit 28 to adjust the amount of the data d102 sent to the network 2A.
The operation of the segment selection unit 26 and the operation of the user adaptation control unit 24 are described below with Fig. 6.
The content data obtaining unit 33 of the user adaptation unit 22 reads and obtains the metadata d101 and the data d102 from the medium 21. Then the user adaptation control unit 24 analyzes the obtained metadata d101, extracts at least one segment viewpoint attached to the structure of the data and to each segment of that structure together with the priority of each segment viewpoint, and identifies the read data (content) d102 (P701). Then the user adaptation control unit 24 selects from the user preference d103 the user preference description corresponding to the identified content (for example, user preference description 31) (P702). Further, the user adaptation control unit 24 arranges the user viewpoints (keywords) contained in the user preference description 31 in order of the priority attribute of each user viewpoint (P703). It then selects the user viewpoint that has been given the highest priority (P704).
Then, starting from the user viewpoint with the highest user priority (the first user viewpoint), the user adaptation control unit 24 successively sends to the segment selection unit 26 information instructing it to select from the data d102 the segments that match that user viewpoint.
Then, the segment selection unit 26, according to the information sent from the user adaptation control unit 24, compares the first user viewpoint with the segment viewpoints of the segments, and judges whether there is a segment that has been given a segment viewpoint matching the first user viewpoint (P705).
The operation then moves on to selection of the segments that have been given a segment viewpoint matching the first user viewpoint.
First, the segment selection unit 26 virtually selects a segment that matches the user viewpoint (P706). Then the segment selection unit 26 judges whether virtual selection has been completed for all segments contained in the metadata d101 (P707). If it has not been completed, the next segment is extracted using the metadata and the processing of P705 and P706 is carried out. In this way, virtual selection can be performed on all segments contained in the data d102.
Then, at P707, once the data adaptation unit 22 judges that virtual selection of all segments has been completed, it arranges the virtually selected segments in order of the segment priority assigned to each segment (P708). This is because, in the actual selection of segments described below, actual selection starts from the segment with the highest segment priority.
Then, first, the data adaptation unit 22 actually selects the segment with the highest segment priority (P709). Then the data adaptation unit 22 judges whether actual selection of all virtually selected segments has been completed (P710). Then, once the data adaptation unit 22 judges at P710 that actual selection of all virtually selected segments has not been completed, it judges whether the data of the actually selected segments exceeds the restriction conditions of time length, spatial range, or both (P711). The data adaptation unit 22 then repeats the operations of P709 to P711 within the range that does not exceed these restriction conditions.
In addition, at P710, once the data adaptation unit 22 judges that actual selection of all virtually selected segments has been completed, it judges whether all user viewpoints (keywords) have been processed (P715). Then, at P715, when the data adaptation unit 22 judges that processing of all user viewpoints has been completed, processing ends. On the other hand, at P715, when the data adaptation unit 22 judges that processing of all user viewpoints has not been completed, it moves on to processing the next user viewpoint.
Here, it is meaningless to process and select again segments that have already been selected in the above processing. Therefore, first, the data adaptation unit 22 extracts the unselected segments and restricts the following processing to the unselected segments (P714). It then selects the next user viewpoint (P713) and carries out the processing of P705 to P711.
In this way, segments adapted to the user preference d103 are selected from the data d102, and the data d102 can be adapted to the user preference d103.
Next, the data adaptation unit 22 moves on to the operation of converting the segments selected as described above according to the terminal capability information, namely the terminal preference d104. The operation of the terminal adaptation control unit 25 is described below with Fig. 7.
First, the terminal adaptation control unit 25 obtains, from the terminal preference obtaining unit 35, the terminal preference d104 recorded in the terminal 2B, and identifies the screen display size and the displayable number of colors of the display unit 18 as the terminal capability (P801). Then the terminal adaptation control unit 25 judges the segment priority that the segment selected by the segment selection unit 26 has as its own attribute (P802). Then the terminal adaptation control unit 25 judges whether the selected segment is larger than the display capability of the terminal 2B (P803).
Then, at P803, when the terminal adaptation control unit 25 judges that the selected segment is larger than the display capability of the terminal 2B, the terminal adaptation control unit 25 adds the segment priority to the terminal preference d104 and decides whether to shrink the selected segment, display a part of the selected segment, not display a part of the selected segment, or convert and display a part of the selected segment (P804).
On the other hand, at P803, when the terminal adaptation control unit 25 judges that the selected segment is not larger than the display capability of the terminal 2B, the terminal adaptation control unit 25 judges whether the data of the selected segment is smaller than the screen display size of the terminal 2B (P805). Then, at P805, when the terminal adaptation control unit 25 judges that the data of the selected segment is smaller than the screen display size of the terminal 2B, the terminal adaptation control unit 25 adds the segment priority to the terminal preference d104 and decides whether to enlarge the whole selected segment, enlarge a part of the selected segment, not display a part of the selected segment, or convert and display a part of the selected segment (P806).
Next, the terminal adaptation control unit 25 identifies the displayable number of colors of the terminal 2B from the terminal preference d104 and judges, according to that number of colors, whether the segment that has undergone the processing of P801 to P806 can be displayed in color on the display unit (P807). Then, at P807, once the terminal adaptation control unit 25 judges that the segment that has undergone the above processing can be displayed in color on the terminal 2B, processing ends.
On the other hand, at P807, once the terminal adaptation control unit 25 judges that the segment that has undergone the above processing cannot be displayed in color on the terminal 2B, it judges whether that segment can be displayed on the terminal 2B after grayscale conversion (P808). Then, once the terminal adaptation control unit 25 judges that the segment can be displayed on the display unit 18 after grayscale conversion, it decides to apply grayscale conversion to that segment (P809). Further, once the terminal adaptation control unit 25 judges that the segment cannot be displayed on the terminal 2B even after grayscale conversion, it decides to apply binarization to that segment (P810).
Then, the resolution conversion unit 27 performs resolution conversion on the actually selected segments so as to fit the decision content of the terminal adaptation control unit 25. The data that has undergone resolution conversion is then sent to the data amount adjustment unit 29.
The operation of the network adaptation control unit 28 is described below with Fig. 9. Fig. 9 is a processing flowchart of the network adaptation control unit 28 of Embodiment 2.
The network adaptation control unit 28 obtains, via the network preference obtaining unit 36 and the network 2A, the network preference d205 sent from the terminal 2B. Then the network adaptation control unit 28 identifies the available bandwidth of the network 2A (P901). Then the network adaptation control unit 28 judges the segment priority that the segment selected by the segment selection unit 26 has as its own attribute (P902). Then the network adaptation control unit 28 judges whether the converted data is larger than the available bandwidth (P903).
At P903, when the network adaptation control unit 28 judges that the converted data is larger than the available bandwidth, the network adaptation control unit 28 decides whether to compress all the data, transmit a part of the data, not transmit all the data, or convert a part of the data before transmitting it (P904). Here, converting data before transmission refers to format conversion such as converting MPEG1 data into MPEG4 data, or converting AAC data into MP3 data.
On the other hand, at P903, when the network adaptation control unit 28 judges that the converted data is not larger than the available bandwidth, the network adaptation control unit 28 judges whether the data of the converted segment is smaller than the available bandwidth (P905). At P905, when the network adaptation control unit 28 judges that the converted data is smaller than the available bandwidth, the network adaptation control unit 28 decides whether to additionally transmit untransmitted data, not transmit it, or transmit after converting, or after restoring already-converted data (P906).
Then, the data amount adjustment unit 29 adjusts the data amount so that the decision content of the network adaptation control unit 28 is applied to the actually selected segments, and sends them to the terminal 2B. The terminal 2B records the received segments on the medium 2C.
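The bandwidth comparison of P903 to P906 can be sketched as a single three-way decision. The action names are illustrative placeholders for the choices listed above (compress, partial transmission, additional transmission, and so on), not terms from the patent:

```python
def network_adapt(data_rate, bandwidth):
    """Sketch of P903-P906: compare the converted segment's data rate with
    the available bandwidth from the network preference and pick a coarse
    transmission policy."""
    if data_rate > bandwidth:    # P904: reduce the amount to be sent
        return "compress_or_partial_transmit"
    if data_rate < bandwidth:    # P905/P906: spare bandwidth can carry extras
        return "transmit_with_optional_additions"
    return "transmit_as_is"      # exactly fits the available bandwidth
```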
As described above, according to Embodiment 2, in addition to the effects of Embodiment 1, the data amount of the selected segments can be adjusted according to the available bandwidth contained in the network preference d205, so that the data can be adapted scalably in the form in which the user wants to receive it and according to the state of the network.
Embodiment 3
The data adaptation device of Embodiment 3 of the present invention is described below. The configuration of the data adaptation device of Embodiment 3 is the same as that of the data adaptation device of Embodiment 1, and its description is therefore omitted.
The data adaptation device of Embodiment 3 differs from that of Embodiment 1 in the configuration of the user preference d103. The user preference d103 of Embodiment 3 is described below. Fig. 10 is a second detailed drawing of the user preference d103.
As shown in Fig. 10, the user preference d103 is composed of a plurality of user preference descriptions 131-133. Each of the user preference descriptions 131-133 stores a content identifier 134 that identifies the content corresponding to that user preference description, time length information 135 used for displaying the content, and a plurality of keywords 136a, 136b used for extracting predetermined segments from the content. Thus, in the user preference of Embodiment 3, no priority is stored for the keywords 136a, 136b. In this respect, the user preference of Embodiment 3 differs from the user preference of Embodiment 1.
The definition description of the user preference d103 contained in the user preference descriptions 131-133 is described in detail below. Fig. 11 shows the definition description of the user preference d103.
As shown in Fig. 11, the definition 140 of the user preference description is written as a Document Type Definition (DTD) of the Extensible Markup Language (XML).
The definition 140 of the user preference description defines that there is at least one content (Content) in the user preference, as shown at 141 in the figure. Further, as shown at 142 in the figure, it defines that each content has at least one keyword 136 (Keyword). Further, as shown at 143 in the figure, the content identifier 134 (ContentID), the time length 135 (Duration), and the size information of the video data, namely the screen size 144 (ScreenSize), are defined as attributes of the content. Further, as shown at 148 in the figure, the keyword 136 (Keyword) is defined to be written as text data.
As can also be seen from the definition 140 of the user preference description of Fig. 11, no priority is stored for the keywords 136 in the user preference of Embodiment 3.
A user preference generated with the definition description of the user preference shown in Fig. 11 is described below. Fig. 12 shows an example of the user preference.
In Fig. 12, 150 is an example of an actual user preference description written in XML using the user preference description definition 140.
The user preference 150 shown in Fig. 12 has two contents 151 and 152. Content ID 153a (123456789 in this example) is written in content 151, and content ID 153b (123456788 in this example) is written in content 152. In addition, the display time length 154 (smpte=00:05:00:00 in this example) is written in content 151, and keywords 155a-155c are also written in content 151 as attributes of the content. NaKata is written in keyword 155a, Soccer in keyword 155b, and Japan in keyword 155c.
On the other hand, the display scale 156 (pixel=320x240 in this example) is written in content 152 as an attribute of the content. In addition, keywords 155d-155f are written in content 152. Headline is written in keyword 155d, stock in keyword 155e, and sports in keyword 155f.
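A user preference like the one described for Fig. 12 can be read with a standard XML parser. The element and attribute names below (UserPreference, Content, ContentID, Duration, Keyword) follow the DTD sketched for Fig. 11, but the exact XML shape is an assumption for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical serialization of content 151 of Fig. 12.
SAMPLE = """<UserPreference>
  <Content ContentID="123456789" Duration="smpte=00:05:00:00">
    <Keyword>NaKata</Keyword>
    <Keyword>Soccer</Keyword>
    <Keyword>Japan</Keyword>
  </Content>
</UserPreference>"""

def read_keywords(xml_text):
    """Return {content_id: [keywords in document order]}; in Embodiment 3
    the document order itself stands in for the missing keyword priority."""
    root = ET.fromstring(xml_text)
    return {c.get("ContentID"): [k.text for k in c.findall("Keyword")]
            for c in root.findall("Content")}
```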
The configuration of the data d102 of Embodiment 3 is described below with Fig. 13. Fig. 13 is an operation explanatory drawing explaining the operation of the segment selection unit 16. In Fig. 13, the element given the label 131 is a user preference description. The configuration other than user preference description 131 is the same as in Embodiment 1.
Next, the operation of the segment selection unit 16 of Embodiment 3 and the operation of the user adaptation control unit 14 are described with Fig. 13 and Fig. 14. Fig. 13 is an operation explanatory drawing explaining the operation of the segment selection unit 16. Fig. 14 is a processing flowchart explaining the operation of the user adaptation control unit 14.
The content obtaining unit 21 of the data adaptation unit 12 reads and obtains the metadata d101 and the data d102 from the medium. Then the user adaptation control unit 14 analyzes the obtained metadata d101, extracts at least one segment viewpoint attached to the structure of the data and to each segment of that structure together with the priority of each segment viewpoint, and identifies the read data (content) d102 (P1701). Then the user adaptation control unit 14 selects from the user preference d103 the user preference description corresponding to the identified content (for example, user preference description 131) (P1702). Further, the user adaptation control unit 14 arranges the user viewpoints (keywords) contained in the user preference description 131 in the order in which they are written. In the example of Fig. 12, the keywords are arranged in the order written from the top line, that is, in the order keyword 155a, keyword 155b, keyword 155c. The user viewpoint written at the top is then selected first (P1704). This is done because no user priority is given in the user preference d103 of Embodiment 3, so the keywords must be arranged in the order in which they are written in order to give them a definite priority corresponding to a user priority. In Embodiment 3, as a substitute for this priority, the priority of a keyword is made to correspond to its order of description.
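The order-as-priority substitution of Embodiment 3 can be sketched in a few lines. The numeric scheme (larger number means higher priority) is an illustrative choice; the patent only requires that earlier description order rank higher:

```python
def order_as_priority(keywords):
    """Embodiment-3 substitute for explicit user priorities: a keyword
    written earlier in the user preference description gets a higher
    priority value than one written later."""
    n = len(keywords)
    return {kw: n - i for i, kw in enumerate(keywords)}
```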
Then, starting from the user viewpoint with the highest priority (the first user viewpoint), the user adaptation control unit 14 successively sends to the segment selection unit 16 information instructing it to select from the data d102 the segments that match that user viewpoint.
Then, the segment selection unit 16, according to the information sent from the user adaptation control unit 14, compares the first user viewpoint with the segment viewpoints of the segments, and judges whether there is a segment that has been given a segment viewpoint matching the first user viewpoint (P1705).
The operation then moves on to selection of the segments that have been given a segment viewpoint matching the first user viewpoint.
First, the segment selection unit 16 virtually selects a segment that matches the user viewpoint (P1706). Then the segment selection unit 16 judges whether virtual selection of all segments contained in the data d102 has been completed (P1707). If it has not been completed, the next segment is extracted using the metadata d101 and the processing of P1705 and P1706 is carried out. In this way, virtual selection can be performed on all segments contained in the data d102.
Then, at P1707, the judgement of dataadaptive unit 12 usefulness finishes to the virtual selection of all segmentations, and just the segmentation priority that the segmentation of virtual selection is endowed with segmentation is that preface is arranged (P1708).This is because during the true selection of the segmentation that will narrate afterwards, the cause that begins really to select from the high segmentation of segmentation priority.
Then, at first, the highest segmentation (P1709) of dataadaptive unit 12 true selection segmentation priority.Then, dataadaptive unit 12 judges that whether the true selection of all segmentations of virtual selection finishes (P1710).Then, at P1710, the true selection that all of the virtual selection of judgement of dataadaptive unit 12 usefulness get segmentation does not finish, with regard to long time length, spatial dimension or this both restrictive condition (P1711) of whether exceeding of data of judging the true segmentation of selecting.Then, dataadaptive unit 12 carries out the action from P1709 to P1711 repeatedly in the scope that does not exceed these restrictive conditions.
In addition, at P1710, in a single day dataadaptive unit 12 judges the true selection of all segmentations of virtual selection finishes, and just judges whether all User Perspectives (keyword) have been done processing (P1715).Then, at P1715, judge under the situation that all User Perspective processing have been finished end process in data adaptive unit 12.On the other hand, at P1715, when 12 judgements of data adaptive unit do not finish all User Perspective processing, transfer to processing to next User Perspective.
At this, to the segmentation that in above-mentioned processing, has chosen, handle once more, inferior degree selects just nonsensical.Therefore, at first, dataadaptive unit 12 extracts unselected segmentation, limits unselected segmentation and carries out following processing (P1714).Then, following User Perspective (P1713) is selected in dataadaptive unit 12, carries out beginning processing to P1711 from P1705.
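The two-phase selection above can be illustrated with a short sketch. This is a hypothetical illustration only (the function name, data layout, and field names are assumed, not taken from the patent): segments matching the current user viewpoint are first collected ("virtual selection"), sorted by segment priority, and then really selected in priority order while the duration restriction permits.

```python
def select_segments(segments, user_keyword, max_duration):
    # Virtual selection: keep segments whose segment viewpoint (keyword)
    # matches the user viewpoint currently being processed (P1705-P1707).
    candidates = [s for s in segments if user_keyword in s["keywords"]]

    # Sort by segment priority, highest first (P1708).
    candidates.sort(key=lambda s: s["priority"], reverse=True)

    # Real selection in priority order, within the duration restriction
    # (P1709-P1711).
    selected, total = [], 0
    for seg in candidates:
        if total + seg["duration"] > max_duration:
            break
        selected.append(seg)
        total += seg["duration"]
    return selected
```

For example, with two segments carrying KWD#1 and a tight duration budget, only the higher-priority one survives the real selection.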
The segment selection operation will now be described in detail using Figure 13. First, data adaptation unit 12 looks for the sub-segments that have KWD#1, the user viewpoint (user keyword) with the highest priority, described in the top row of the user preference (user preference description) 131. Keyword 631 of sub-segment 63 and keyword 661 of sub-segment 66 each contain KWD#1. Next, data adaptation unit 12 applies the duration restriction to sub-segment 63 and sub-segment 66, and selects from them in order of the strictness of the duration restriction. Specifically, under the strictest duration restriction, sub-segment 66, which is short in duration, is selected first as output data d102a. Then sub-segment 63 is additionally selected as a segment making up output data d102b.
Similarly, data adaptation unit 12 looks for the sub-segments that have KWD#2, the user viewpoint with the second highest priority, among the user viewpoints contained in user preference d103. KWD#2 is included in all of the sub-segments making up segment 61. Therefore, it is necessary to determine here which sub-segment takes precedence.
Specifically, data adaptation unit 12 compares the segment priorities attached to the segment viewpoints of the sub-segments. In the example of Figure 13, sub-segment 63, sub-segment 62, sub-segment 65, and sub-segment 66 have the same priority, and sub-segment 64 and sub-segment 67 rank immediately after them. Since sub-segment 67 has already been selected, it is omitted from the following processing.
Therefore, after selecting output data d102b, when the duration restriction allows, data adaptation unit 12 selects sub-segment 65, whose segment priority is higher than that of sub-segment 64, in addition to sub-segment 63, as the sub-segments making up output data d102c. The remaining sub-segment 64 contains KWD#2 and KWD#3, but has a lower segment priority than sub-segment 65; therefore, when the duration restriction is loosest, sub-segment 64 is selected in addition to sub-segment 63 and sub-segment 65 as the sub-segments making up output stream d102d.
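The way progressively looser duration restrictions yield the nested output streams d102a through d102d can be sketched as follows. The durations used here are assumed for illustration; only the nesting behavior reflects the description above: the same priority-ordered list of sub-segments is cut at ever larger duration budgets.

```python
def outputs_for_budgets(ranked_segments, budgets):
    # ranked_segments is already ordered by segment priority; each
    # budget produces one output stream by filling up to that budget.
    streams = []
    for budget in budgets:
        chosen, total = [], 0
        for seg in ranked_segments:
            if total + seg["duration"] <= budget:
                chosen.append(seg["id"])
                total += seg["duration"]
        streams.append(chosen)
    return streams
```

Each successive stream contains all sub-segments of the stricter ones, mirroring d102a ⊂ d102b ⊂ d102c ⊂ d102d in Figure 13.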
In addition, user adaptation control unit 14 updates the priority of each user viewpoint contained in user preference d103 that matches a keyword, according to the priority attached to that keyword in the selected segments.
In this way, by selecting from data d102 the segments that suit user preference d103, data d102 can be adapted to user preference d103.
Next, as in Embodiment 1, terminal adaptation control unit 15 performs resolution conversion on the selected segments sent from segment selection unit 16, according to the flow shown in Figure 7.
As described above, with Embodiment 3, even when no user priority is given in user preference d103, the order in which the user viewpoints are described can be treated as the user priority. In this way, even without user priorities in user preference d103, data adaptation can be carried out uniquely according to the user's preferences, as in Embodiment 1.
Moreover, since Embodiment 3 adopts a structure without user priorities, the data structure of user preference d103 is simplified.
Embodiment 4
The data adaptation device in Embodiment 4 of the present invention is described below.
Embodiment 4 covers the case where the user adaptation control unit of Embodiment 3 is provided in a server, and the server and a terminal are connected via a network.
The system including the data adaptation device of Embodiment 4 has the same configuration as the system including the data adaptation device of Embodiment 2. Accordingly, Fig. 8 is a system block diagram of the system including the data adaptation device of Embodiment 4.
The user preference d103 of Embodiment 4 is identical to that of Embodiment 3. Accordingly, Figure 10 shows the user preference d103 of Embodiment 4, and Figure 11 shows the definition of the user preference of Embodiment 4.
The operation of segment selection unit 26 of Embodiment 4 is also identical to that of the segment selection unit of Embodiment 3. Accordingly, Figure 13 is an explanatory diagram of the operation of segment selection unit 26. In addition, Figure 14 is a process flowchart explaining the operation of user adaptation control unit 24 of Embodiment 4.
The terminal adaptation control unit 25 of Embodiment 4 performs the same operation as the terminal adaptation control units of Embodiments 1 to 3. Accordingly, Fig. 7 is a process flowchart explaining the operation of terminal adaptation control unit 25 of Embodiment 4. In addition, network adaptation control unit 28 of Embodiment 4 performs the same operation as network adaptation control unit 28 of Embodiment 2. Accordingly, Fig. 9 is a process flowchart explaining the operation of network adaptation control unit 28 of Embodiment 4.
The system including the data adaptation device of Embodiment 4 is described below using these drawings.
The content acquisition unit 33 of data adaptation unit 22 reads and acquires metadata d101 and data d102 from medium 21. User adaptation control unit 24 then analyzes the acquired metadata d101, extracts the structure of the data, at least one segment viewpoint attached to each segment of the structure, and the priority of each segment viewpoint, and identifies the data (content) d102 that has been read (P1701). Next, user adaptation control unit 24 selects from the user preference d103 the user preference description corresponding to the identified content (for example, user preference description 131) (P1702). Furthermore, user adaptation control unit 24 arranges the user viewpoints (keywords) contained in user preference description 131 in the order in which they are described. In the example of Figure 12, the keywords are arranged in the order of description, that is, starting from the top row, in the order keyword 155a, keyword 155b, keyword 155c. The user viewpoint described first, that is, highest, is then selected (P1704). The keywords are arranged in the order of description because no user priority is given in the user preference d103 of Embodiment 4, so a priority corresponding to a user priority must be assigned. As a substitute for the user priority, the priority of a keyword is set by its order of description.
Then, user adaptation control unit 24 sends to segment selection unit 26, for each user viewpoint in descending order of priority (starting with the first user viewpoint), an instruction to select from data d102 the segments matching that user viewpoint.
Next, based on the information sent from user adaptation control unit 24, segment selection unit 26 compares the first user viewpoint with the segment viewpoints of the segments, and judges whether a segment has been given a segment viewpoint matching the first user viewpoint (P1705).
It then moves on to the operation of selecting the segments that have been given a segment viewpoint matching the first user viewpoint.
First, segment selection unit 26 virtually selects the segments that match the user viewpoint (P1706). Then, segment selection unit 26 judges whether virtual selection has finished for all segments contained in metadata d101 (P1707); if not, the next segment is extracted using metadata d101 and the processing of P1705 and P1706 is performed. In this way, virtual selection can be carried out for all segments contained in data d102.
Next, when data adaptation unit 22 judges at P1707 that virtual selection has finished for all segments, it sorts the virtually selected segments in order of the segment priority given to each segment (P1708). This is because, in the real selection of segments described later, real selection starts from the segment with the highest segment priority.
Then, data adaptation unit 22 first really selects the segment with the highest segment priority (P1709). Next, data adaptation unit 22 judges whether real selection has finished for all of the virtually selected segments (P1710). When data adaptation unit 22 judges at P1710 that real selection has not finished for all of the virtually selected segments, it judges whether the data of the really selected segments exceeds a restriction condition on duration, spatial range, or both (P1711). Data adaptation unit 22 then repeats the operations from P1709 to P1711 within the range that does not exceed these restriction conditions.
When data adaptation unit 22 judges at P1710 that real selection has finished for all of the virtually selected segments, it judges whether all user viewpoints (keywords) have been processed (P1715). When data adaptation unit 22 judges at P1715 that processing has finished for all user viewpoints, it ends the processing. On the other hand, when data adaptation unit 22 judges at P1715 that processing has not finished for all user viewpoints, it moves on to the processing of the next user viewpoint.
Here, it is meaningless to process again, and select a second time, a segment that has already been selected in the above processing. Therefore, data adaptation unit 22 first extracts the unselected segments and restricts the following processing to those unselected segments (P1714). Then, data adaptation unit 22 selects the next user viewpoint (P1713) and performs the processing from P1705 to P1711.
The segment selection operation will now be described in detail, again using Figure 13. First, data adaptation unit 22 looks for the sub-segments that have KWD#1, the user viewpoint (user keyword) with the highest priority, described in the top row of the user preference (user preference description) 131. Keyword 631 of sub-segment 63 and keyword 661 of sub-segment 66 each contain KWD#1. Next, data adaptation unit 22 applies the duration restriction to sub-segment 63 and sub-segment 66, and selects from them in order of the strictness of the duration restriction.
Specifically, under the strictest duration restriction, sub-segment 66, which is short in duration, is selected first as output data d102a. Then sub-segment 63 is additionally selected as a segment making up output data d102b. Similarly, data adaptation unit 22 looks for the sub-segments that have KWD#2, the user viewpoint with the second highest priority, among the user viewpoints contained in user preference d103. KWD#2 is contained in all of the sub-segments making up segment 61. Therefore, it is necessary to determine here which sub-segment takes precedence.
Specifically, data adaptation unit 22 compares the segment priorities attached to the segment viewpoints of the sub-segments. In the example of Figure 13, sub-segment 63, sub-segment 62, sub-segment 65, and sub-segment 66 have the same priority, and sub-segment 64 and sub-segment 67 rank immediately after them. Since sub-segment 67 has already been selected, it is omitted from the following processing.
Therefore, after selecting output data d102b, when the duration restriction allows, data adaptation unit 22 selects sub-segment 65, whose segment priority is higher than that of sub-segment 64, in addition to sub-segment 63, as the sub-segments making up output data d102c. The remaining sub-segment 64 contains KWD#2 and KWD#3, but has a lower segment priority than sub-segment 65; therefore, when the duration restriction is loosest, sub-segment 64 is selected in addition to sub-segment 63 and sub-segment 65 as the sub-segments making up output stream d102d.
In addition, user adaptation control unit 24 updates the priority of each user viewpoint contained in user preference d103 that matches a keyword, according to the priority attached to that keyword in the selected segments.
In this way, by selecting from data d102 the segments that suit user preference d103, data d102 can be adapted to user preference d103.
Next, as in Embodiment 1, terminal adaptation control unit 25 performs resolution conversion on the selected segments sent from segment selection unit 26, according to the flow of Fig. 7.
The operation of network adaptation control unit 28 is described below using Fig. 9.
Fig. 9 is a process flowchart of network adaptation control unit 28 of Embodiment 4.
Network adaptation control unit 28 acquires, via network preference acquisition unit 36, the network preference d205 sent from terminal 2B over network 2A. Network adaptation control unit 28 then identifies the passband of network 2A (P901). Next, network adaptation control unit 28 judges the segment priority that each segment selected by segment selection unit 26 holds as its own attribute (P902). Then, network adaptation control unit 28 judges whether the converted data is larger than the passband (P903).
When network adaptation control unit 28 judges at P903 that the converted data is larger than the passband, it decides whether to reduce the data as a whole, transmit a part of the data, not transmit the data at all, or convert a part of the data before transmission (P904).
On the other hand, when network adaptation control unit 28 judges at P903 that the converted data is not larger than the passband, it judges whether the data of the converted segment is smaller than the passband (P905). When network adaptation control unit 28 judges at P905 that the converted data is smaller than the passband, it decides whether to additionally transmit untransmitted data, not transmit, or transmit after performing conversion or restoring the converted data (P906).
Then, data amount adjustment unit 29 adjusts the data amount of the really selected segments so that the content is adapted in accordance with the decision of network adaptation control unit 28, and transmits them to terminal 2B. Terminal 2B records the received segments on medium 2C.
As described above, with Embodiment 4, the structure of the data, at least one viewpoint attached to each segment of the structure, and the priority of each viewpoint are extracted from the metadata; segments are selected according to the priority of the viewpoint that matches a user viewpoint contained in the user preference; the temporal and spatial resolution of the selected segments is converted according to the terminal capability information contained in the terminal preference; and the data amount of the selected segments is adjusted according to the passband contained in the network preference. Data adaptation can thus be carried out scalably, in the form the user wishes to receive and in accordance with the state of the network.
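The passband comparison of P903 to P906 amounts to a three-way branch. The following is a minimal sketch; the function and policy names are illustrative only and not from the patent, which leaves the concrete adjustment choice to the network adaptation control unit.

```python
def choose_adjustment(data_size, passband):
    # P903/P904: converted data exceeds the passband, so the data must
    # be shrunk, partially sent, skipped, or partially converted.
    if data_size > passband:
        return "reduce"
    # P905/P906: spare capacity - e.g. additionally send data that was
    # previously left untransmitted, or restore earlier conversion.
    if data_size < passband:
        return "append"
    return "send_as_is"
```

A usage example: a 1200-unit segment over a 1000-unit passband falls into the P904 branch.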
Moreover, in Embodiments 3 and 4, the order of description is used as the priority of the user viewpoints; however, this order may be reversed, or another description method may be adopted for expressing the priority of each user viewpoint, such as one in which the viewpoint located at the center of the description order has the highest priority.
In Embodiments 2 and 4, as shown in Fig. 8, the description assumed a form in which the control units, namely user adaptation control unit 24, terminal adaptation control unit 25, and network adaptation control unit 28, are provided on the server 20 side; however, as shown in Figure 15, user adaptation control unit 24, terminal adaptation control unit 25, and network adaptation control unit 28 may also be provided in terminal 2B.
In this form, metadata d101 is sent from server 20 to terminal 2B; terminal 2B uses user adaptation control unit 24 to adapt data d102 to the user, and the result is sent to server 20. Server 20 then selects segments from data d102 according to the information sent from user adaptation control unit 24, and sends the selected segments to terminal 2B. Furthermore, terminal adaptation control unit 25 of terminal 2B sends a signal controlling resolution conversion unit 27 of server 20, and network adaptation control unit 28 of terminal 2B sends a signal controlling data amount adjustment unit 29 of server 20. Resolution conversion unit 27 and data amount adjustment unit 29 operate according to the control signals sent from terminal 2B, process the data of the segments selected by segment selection unit 26, and send the processed segments to medium 2C of terminal 2B.
In the above embodiments, the description assumed a form in which one segment viewpoint is assigned to each segment or sub-segment. However, when there are a plurality of segments or sub-segments given the same segment viewpoint, a form may also be adopted in which the segment viewpoint is attached to one of the segments or sub-segments, and the other segments or sub-segments are given link information to that segment viewpoint.
In this way, the segment viewpoint of one segment or sub-segment can be expressed by a link to the segment viewpoint of another segment or sub-segment.
This form is described in detail below using Figure 16 and Figure 17. Figure 16 shows another form of segment according to an embodiment of the present invention. Figure 17 shows a description example of such segments.
As can be seen from Figure 16, segment 1601 is a segment with the structure already described in Embodiments 1 to 4. Segment 1601 is given a segment viewpoint and a segment priority.
Segment 1602 and segment 1603 are given the same segment viewpoint. Segment 1603 is given link information to segment 1602, and is thereby linked to segment 1602.
Segment 1602 and segment 1603 are described in detail below using Figure 17.
Segment 1602 describes an ID number (id) 1701 of the segment viewpoint (keyword). In this example, ID number 1701 is 1. Segment 1602 also describes a segment priority (P) 1702 for the keyword; in this example the segment priority is 5. In addition, segment 1602 describes "Team A" as the keyword.
Segment 1603, on the other hand, describes link information, namely a reference number (idref) 1703 to another segment viewpoint (keyword). In this example, reference number 1703 is 1; segment 1603 thus refers to the keyword whose ID number is 1.
Consequently, the keyword of segment 1603 is "Team A". Segment 1603 also describes a segment priority (P) 1704 for the keyword; in this example the segment priority is 2.
By linking keywords between segments in this way, the keyword need not be described in every segment. Linking keywords between segments also makes the relationships between segments explicit.
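The id/idref resolution of Figure 17 can be sketched as follows. The dictionary layout and field names are assumed for illustration: segment 1603 carries only a reference to keyword ID 1, which segment 1602 defines as "Team A".

```python
# Keyword definitions reachable by ID (segment 1602 defines ID 1).
keywords_by_id = {1: "Team A"}

def resolve_keyword(segment):
    # A linked segment (e.g. 1603) holds an idref instead of the text.
    if "keyword_idref" in segment:
        return keywords_by_id[segment["keyword_idref"]]
    # Otherwise the keyword is stated inline (e.g. 1602).
    return segment["keyword"]

segment_1602 = {"keyword": "Team A", "keyword_id": 1, "priority": 5}
segment_1603 = {"keyword_idref": 1, "priority": 2}
```

Both segments resolve to the same keyword while only one stores the text, which is the space saving the linking achieves.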
Embodiment 5
The data adaptation device of Embodiment 5 of the present invention is described below. The system configuration is the same as in Embodiments 1 to 4.
Embodiment 5 relates to the segment viewpoints attached to segments or sub-segments: a table (viewpoint table) gathering the segment viewpoints that appear is described as a child element of "content", the highest-level element of the context description data.
The data structure of Embodiment 5 is described below using Figure 18 and Figure 19. Figure 18 is a data structure diagram of Embodiment 5. Figure 19 shows a description example of the data of Embodiment 5.
As can be seen from Figure 18, the content has a viewpoint table 1801 as a child element. Viewpoint table 1801 stores a plurality of segment viewpoints 1802a to 1802n contained in the content.
The content also includes a plurality of segments 1803 to 1805. Segment 1803 is given link information to viewpoint table 1801 and is linked to segment viewpoint 1802a of viewpoint table 1801. Segment 1804 is given link information to viewpoint table 1801 and is linked to segment viewpoint 1802b of viewpoint table 1801. Segment 1805 is given link information to viewpoint table 1801 and is linked to segment viewpoint 1802c of viewpoint table 1801. In addition, segments 1803 to 1805 are each given a segment priority for their respective segment viewpoints.
The data structure of Embodiment 5 is described in detail below using Figure 19.
Viewpoint table 1801 describes segment viewpoints 1802a to 1802c. Segment viewpoint 1802a describes an ID number (id) 1901a. In this example, ID 1901a is 1, and segment viewpoint 1802a is expressed by the text "A".
Segment viewpoint 1802b describes an ID number (id) 1901b. In this example, ID 1901b is 2, and segment viewpoint 1802b is expressed by the text "B". Segment viewpoint 1802c describes an ID number (id) 1901c. In this example, ID 1901c is 3, and segment viewpoint 1802c is expressed by the text "C".
Segments 1803 to 1805, on the other hand, each describe link information to segment viewpoints 1802a to 1802c of viewpoint table 1801, namely reference numbers (idref) 1903a to 1903c.
In this example, reference number 1903a of segment 1803 is 1; segment 1803 thus refers to segment viewpoint 1802a, whose ID number is 1, so the keyword of segment 1803 is "A". Segment 1803 also describes a segment priority (P) 1904a for the keyword; in this example the segment priority is 2. Reference number 1903b of segment 1804 is 2; segment 1804 thus refers to segment viewpoint 1802b, whose ID number is 2, so the keyword of segment 1804 is "B".
Segment 1804 also describes a segment priority (P) 1904b for the keyword; in this example the segment priority is 3. Reference number 1903c of segment 1805 is 3; segment 1805 thus refers to segment viewpoint 1802c, whose ID number is 3, so the keyword of segment 1805 is "C". Segment 1805 also describes a segment priority (P) 1904c for the keyword; in this example the segment priority is 4.
By adopting such a structure, it becomes easy to present a list of the segment viewpoints to the user in advance. This lets the user know, before entering a preferred segment viewpoint, which segment viewpoints appear there. In addition, the user's segment viewpoint can be entered by a selection operation on the viewpoint table.
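Presenting the viewpoint list ahead of input is then a simple read of the table. The following sketch assumes a dictionary layout (not from the patent) mirroring Figures 18 and 19, where segments reference table entries by idref.

```python
content = {
    "viewpoint_table": [
        {"id": 1, "text": "A"},
        {"id": 2, "text": "B"},
        {"id": 3, "text": "C"},
    ],
    "segments": [
        {"idref": 1, "priority": 2},   # segment 1803
        {"idref": 2, "priority": 3},   # segment 1804
        {"idref": 3, "priority": 4},   # segment 1805
    ],
}

def available_viewpoints(content):
    # The table alone is enough to show the user every selectable
    # viewpoint before any preference is entered.
    return [vp["text"] for vp in content["viewpoint_table"]]
```

The user interface can offer this list for selection instead of free-text keyword entry.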
Furthermore, a structure may also be adopted in which each segment or sub-segment is given a pair consisting of a link to the corresponding viewpoint in this viewpoint table and a score.
Also, the viewpoint table need not belong to the content; it may be a structure held by a particular segment or sub-segment, or it may be described separately.
It is also possible not to express everything with links to the viewpoint table, but to mix pairs of a viewpoint-table link and a score with pairs of a viewpoint and a score.
Figure 20 is a schematic diagram of a data structure in which viewpoint-and-score pairs are mixed in. As can be seen from Figure 20, segments 1803 and 1804, which have link information to viewpoint table 1801, coexist in the content with segment 2001, which has no link information to viewpoint table 1801.
Figure 21 is a description example of the data structure shown in Figure 20. As can be understood from this figure, the viewpoint table need not describe all the segment viewpoints appearing in the context description data; those described in the table are referred to by links. In addition, segment 2001 contains no link information to viewpoint table 1801.
Embodiment 6
The data adaptation device of Embodiment 6 of the present invention is described below.
Embodiment 6 prepares a viewpoint table listing the segment viewpoints that appear in the context description data, and presents it to the user before input.
The data structure of Embodiment 6 is described below using Figure 22.
The content of Embodiment 6 is composed of a viewpoint table 2101 and a plurality of segments 2103 to 2105. Viewpoint table 2101 includes a plurality of segment viewpoints 2102a to 2102n.
In addition, segment 2103 includes segment viewpoint 2106 and segment priority 2107, segment 2104 includes segment viewpoint 2108 and segment priority 2109, and segment 2105 includes segment viewpoint 2110 and segment priority 2122. Unlike the segments of Embodiment 5, segments 2103 to 2105 contain no link information to viewpoint table 2101. That is, segments 2103 to 2105 and viewpoint table 2101 form independent structures.
Figure 23 shows a description example of the data of Embodiment 6. As can be seen from Figure 23, viewpoint table 2101 stores the viewpoints 2106, 2108, and 2110 contained in segments 2103 to 2105. Specifically, viewpoints 2301 to 2303 are stored corresponding to viewpoints 2106, 2108, and 2110.
Viewpoint table 2101 can thus be displayed before the user enters a user viewpoint.
By means of this, before entering a preferred user viewpoint, the user can know which viewpoints appear there. There is also the effect that the user's user viewpoint can be entered by a selection operation based on this viewpoint table.
Embodiment 7
The data adaptation device of Embodiment 7 of the present invention is described below.
In Embodiment 7, when the context description data is expressed in a computer-processable form, it is divided into a part describing the structure and a part describing, as attributes, the viewpoints and their scores.
Embodiment 7 is described below using Figures 24 and 25. Figure 24 is a data structure diagram of Embodiment 7. Figure 25 shows a description example of the data of Embodiment 7.
The upper half of Figure 24 describes structure part 2200a, which describes the structure, and the lower half describes attribute part 2200b, which describes the segment viewpoints and their priorities as attributes. A plurality of segments 2201a to 2201n are described in the content of structure part 2200a.
In this figure, the part describing the structure describes the simplest structure, but structures described in the other embodiments may also be adopted.
In attribute part 2200b, a plurality of segment viewpoints 2202a and 2202b are described. Under each of segment viewpoints 2202a and 2202b are gathered pairs of link information 2203a to 2203f to the target segments or sub-segments and priorities 2204a to 2204f for each of segment viewpoints 2202a and 2202b.
Then, utilize Figure 25 to describe the data structure of example 7 in detail.
As can be seen from Figure 25, putting down in writing a plurality ofsegmentation 2201a~2201c in thecontent.Segmentation 2201a is given ID number (id) 2301a of segmentation.Id2301a is 1 in thisembodiment.Segmentation 2201b is given ID number (id) 2301b of segmentation.Id2301b is 2 in thisembodiment.Segmentation 2201n is given ID number (id) 2301n of segmentation.Id2301n is n in this embodiment.
On the other hand, as can be seen from Figure 25,segmentation viewpoint 2202a is " TeamA ".A plurality ofsegmentation priority 2 204a~2204c have been put down in writing atsegmentation viewpoint 2202a again.In this embodiment, the priority ofsegmentation priority 2 204a is 3, and the priority ofsegmentation priority 2 204b is 2, and the priority ofsegmentation priority 2 204c is 5.
Tosegmentation priority 2 202a, givelink information 2203a~2203c respectivelyagain.Link information 2203a~2203c has put down in writing the reference number Idref thatID 2301a~2301c of referredfragment 2201a~2201c uses.In this embodiment, the idref ofsegmentation priority 2 204a is 1, and the idref ofsegmentation priority 2 204b is n, and the idref ofsegmentation priority 2 204c is 2.Therefore, the object fragments ofsegmentation priority 2 204a issegmentation 2201a, and the object fragments ofsegmentation priority 2 204b issegmentation 2201c, and the object fragments ofsegmentation priority 2 204c issegmentation 2201n.
Like this, just can be fromsegmentation priority 2 204a~2204c referred fragment 201a~2201c.
Likewise, as can be seen from Figure 25, segment viewpoint 2202b is "TeamB", and a plurality of segment priorities 2204d–2204f are described under it. In this example, the priority of segment priority 2204d is 4, that of segment priority 2204e is 5, and that of segment priority 2204f is 2.
Segment priorities 2204d–2204f are likewise given link information 2203d–2203f, respectively, which describes the reference numbers idref referring to the IDs of segments 2201a–2201n. In this example, the idref of segment priority 2204d is 2, that of segment priority 2204e is 3, and that of segment priority 2204f is 1. Accordingly, the target segment of segment priority 2204d is segment 2201b, that of segment priority 2204e is segment 2201c, and that of segment priority 2204f is segment 2201a.
In this way, segments 2201a–2201n can be referred to from segment priorities 2204a–2204f.
Next, segment selection processing for data having this structure is described. First, segment selection unit 16, 26 selects, for each of the segment viewpoints 2202a, 2202b described in attribute part 2200b, the segment priorities 2204a–2204f to be processed. Because the selected priorities 2204a–2204f are given link information 2203a–2203f, respectively, the segments 2201a–2201n that are their targets can be selected. In this way, by specifying a viewpoint, segment selection unit 16, 26 can select the target segments 2201a–2201n for each segment viewpoint 2202a, 2202b.
Note that the same effect is obtained even when the link information 2203a–2203f and the priorities 2204a–2204f are not described in the same file but are described separately.
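As an illustration of the selection just described, the following is a minimal sketch, not taken from the patent itself: the attribute part holds, per segment viewpoint, (segment priority, idref) pairs, and each idref is resolved to a segment in the structure part. All names and values are illustrative assumptions.

```python
# Structure part: segment ids -> segments (illustrative values).
structure_part = {1: "segment 2201a", 2: "segment 2201b",
                  3: "segment 2201c", "n": "segment 2201n"}

# Attribute part: per viewpoint, (segment priority, idref) pairs.
attribute_part = {
    "TeamA": [(3, 1), (2, "n"), (5, 2)],
    "TeamB": [(4, 2), (5, 3), (2, 1)],
}

def select_segments(viewpoint, threshold):
    """Pick the priorities of one viewpoint, follow their link
    information, and return the target segments, highest priority first."""
    entries = [e for e in attribute_part.get(viewpoint, []) if e[0] >= threshold]
    entries.sort(key=lambda e: e[0], reverse=True)
    return [structure_part[idref] for _, idref in entries]

print(select_segments("TeamA", 3))  # -> ['segment 2201b', 'segment 2201a']
```

Because the priorities carry the links, the structure part never has to be scanned; only the chosen viewpoint's entries are touched.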
A form in which segments and segment viewpoints are connected by bidirectional links is also conceivable.
This form is described below with Figures 26 and 27. Figure 26 shows the data structure of this form, and Figure 27 shows a description of its data.
As in the form above, the data of this form consists of a structure part 2400a and an attribute part 2400b. A plurality of segments 2401a–2401n are described in the content of structure part 2400a, and a plurality of segment viewpoints 2402a, 2402b are described in attribute part 2400b. Each segment viewpoint 2402a, 2402b gathers link information 2403a–2403f to the segments or subsegments that are its targets, together with the priorities 2404a–2404f of each segment viewpoint 2402a, 2402b.
In this form, moreover, because structure part 2400a and the attribute part are bidirectionally linked, link information to segment priorities 2404a–2404f is also described in segments 2401a–2401n.
The data structure of this form is described in detail below with Figure 27.
As can be seen from Figure 27, a plurality of segments 2401a–2401c are described in the content. Segment 2401a is given the segment ID number (id) 2501a; in this example, id 2501a is 1. Segment 2401b is given the segment ID number (id) 2501b; in this example, id 2501b is 2. Segment 2401c is given the segment ID number (id) 2501c; in this example, id 2501c is 3.
On the other hand, a plurality of segment priorities 2404a–2404c are described under segment viewpoint 2402a. In this example, the priority of segment priority 2404a is 3, that of segment priority 2404b is 2, and that of segment priority 2404c is 5.
Segment priorities 2404a–2404c are given link information 2403a–2403c, respectively. Link information 2403a–2403c describes the reference numbers idref referring to the IDs 2501a–2501c of segments 2401a–2401c. In this example, the idref of segment priority 2404a is 1, that of segment priority 2404b is 2, and that of segment priority 2404c is n.
Accordingly, the target segment of segment priority 2404a is segment 2401a, that of segment priority 2404b is segment 2401b, and that of segment priority 2404c is segment 2401n.
Likewise, a plurality of segment priorities 2404d–2404f are described under segment viewpoint 2402b. In this example, the priority of segment priority 2404d is 4, that of segment priority 2404e is 5, and that of segment priority 2404f is 2.
Segment priorities 2404d–2404f are given link information 2403d–2403f, respectively, which describes the reference numbers idref referring to the IDs 2501a–2501c of segments 2401a–2401c. In this example, the idref of segment priority 2404d is 2, that of segment priority 2404e is 3, and that of segment priority 2404f is 1.
Accordingly, the target segment of segment priority 2404d is segment 2401b, that of segment priority 2404e is segment 2401c, and that of segment priority 2404f is segment 2401a.
In addition, segment priorities 2404a–2404c are given priority ID numbers (id) 2503a–2503c, respectively. In this example, priority ID 2503a is p101, priority ID 2503b is p102, and priority ID 2503c is p103.
Likewise, segment priorities 2404d–2404f are given priority ID numbers (id) 2503d–2503f, respectively. In this example, priority ID 2503d is p201, priority ID 2503e is p202, and priority ID 2503f is p203.
On the other hand, segments 2401a–2401n are given link information to segment priorities 2404a–2404f, namely reference numbers (idrefs) 2502a–2502e referring to priority IDs 2503a–2503f. In this example, idrefs 2502a is p101, idrefs 2502b is p203, idrefs 2502c is p102, idrefs 2502d is p201, and idrefs 2502e is p202.
Accordingly, the segment priorities whose target is segment 2401a are segment priorities 2404a and 2404f, those whose target is segment 2401b are segment priorities 2404b and 2404d, and that whose target is segment 2401c is segment priority 2404e.
In this way, segments 2401a–2401n and segment priorities 2404a–2404f can be referred to from both sides. Thereby, segment selection unit 16, 26 can process the data either by the methods described in Examples 1 to 6 or by the method described in Example 7.
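The bidirectional form can be sketched as follows. The identifier scheme (p101 and so on) mirrors the style of the figures, but the concrete values are illustrative assumptions, not the figures' data.

```python
# Each priority entry carries an idref to its segment, and each segment
# carries idrefs back to its priority entries, so traversal can start
# from either side.
priorities = {
    "p101": {"viewpoint": "TeamA", "value": 3, "idref": "s1"},
    "p102": {"viewpoint": "TeamA", "value": 2, "idref": "s2"},
    "p201": {"viewpoint": "TeamB", "value": 4, "idref": "s2"},
}

segments = {
    "s1": {"name": "segment A", "idrefs": ["p101"]},
    "s2": {"name": "segment B", "idrefs": ["p102", "p201"]},
}

def segment_of(priority_id):
    # direction 1: from a priority entry to its target segment
    return segments[priorities[priority_id]["idref"]]

def viewpoints_of(segment_id):
    # direction 2: from a segment back to every viewpoint/priority referring to it
    return [(priorities[p]["viewpoint"], priorities[p]["value"])
            for p in segments[segment_id]["idrefs"]]

print(segment_of("p201")["name"])  # -> segment B
print(viewpoints_of("s2"))         # -> [('TeamA', 2), ('TeamB', 4)]
```

The two lookup directions correspond to the two processing styles the text mentions: viewpoint-first selection (Examples 1 to 6) and attribute-part-first selection (Example 7).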
Example 8
The data adaptive device of the invention process form 8 below is described.
Example 8 is the same with example 7, and the expression that context is recorded and narrated data is divided into the part of recording and narrating structure and records and narrates attribute is that the part of viewpoint and score thereof shows.But, under the situation of example 8, be not with to the link of segmentation and with the formal representation of the group of segmentation priority, but each segmentation viewpoint is recorded and narrated according to the order of segmentation priority height.
Below utilize Figure 28,29 explanation examples 8.Figure 28 represents the data configuration of example 8.Figure 29 represents the record of example 8 data.
As shown in figure 28, the data of example 8 are made of thestructure division 2600a and theattribute section 2600b that record and narrate structure.Record and narrate a plurality ofsegmentation 2601a~2601n in the content of structure division 2600a.Record and narrate a plurality ofsegmentation viewpoint 2602a, 2602b among the attribute section 2600b.Eachsegmentation viewpoint 2602a, 2602b records and narrates thelink information 2603a~2603f as the segmentation or the secondary segmentation of object.
The data structure of Example 8 is described in detail below with Figure 29.
As can be seen from Figure 29, a plurality of segments 2601a–2601n are described in the content. Segments 2601a–2601n are given segment ID numbers (id) 2701a–2701n, respectively. In this example, id 2701a is 1, id 2701b is 2, and id 2701n is n.
On the other hand, as can be seen from Figure 29, segment viewpoint 2602a is "TeamA". Segment viewpoint 2602a is given link information 2603a–2603c. Link information 2603a–2603c describes the reference numbers idref 2702a–2702c that refer to the IDs 2701a–2701n of segments 2601a–2601n. In this example, idref 2702a is 1, idref 2702b is n, and idref 2702c is 2.
Because the description order of link information 2603a–2603c is interpreted as segment priority, in this example the segment priority of link information 2603a is the highest and that of link information 2603c is the lowest.
Accordingly, segment 2601a has the highest segment priority and segment 2601b the lowest.
Likewise, as can be seen from Figure 29, segment viewpoint 2602b is "TeamB" and is given link information 2603d–2603f, which describes the reference numbers idref 2702d–2702f referring to the IDs 2701a–2701n of segments 2601a–2601n. In this example, idref 2702d is 2, idref 2702e is 3, and idref 2702f is 1.
Here too the description order of link information 2603d–2603f serves as segment priority, so in this example the segment priority of link information 2603d is the highest and that of link information 2603f is the lowest.
Accordingly, segment 2601b has the highest segment priority and segment 2601a the lowest.
In this way, even though no segment priority is described under segment viewpoints 2602a, 2602b, segments 2601a–2601n can be referred to and a priority determined for each of segment viewpoints 2602a, 2602b.
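Example 8's order-implied priority can be sketched as follows, under the illustrative assumption that each viewpoint simply lists its idrefs in descending priority:

```python
# Per viewpoint, idrefs in description order; the order itself is the
# priority (first = highest). Segment ids are illustrative.
viewpoint_links = {"TeamA": [1, "n", 2], "TeamB": [2, 3, 1]}

def ranked(viewpoint):
    """Map each referenced segment id to its rank; rank 1 is the highest priority."""
    return {idref: rank
            for rank, idref in enumerate(viewpoint_links[viewpoint], start=1)}

print(ranked("TeamA"))  # -> {1: 1, 'n': 2, 2: 3}
```

No explicit priority values are stored at all; re-deriving them from position keeps the attribute part compact, at the cost of not being able to express ties without the extension described next.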
In Figure 28, each segment (or subsegment) is given a distinct segment priority, but as shown in Figure 30, the same segment priority can also be described for different segments.
In the example shown in Figure 30, the link information described under segment viewpoint 2802 of attribute part 2800 is arranged so that segment priority decreases from left to right, while entries stacked vertically share the same segment priority.
Specifically, link information 2803a is described stacked vertically with link information 2803b, and link information 2803a is described to the left of link information 2803c. Link information 2803a links to segment 2601b, link information 2803c links to segment 2601a, and link information 2803b links to segment 2601n. That is, segments 2601b and 2601n have the same segment priority, and segment 2601a has a lower segment priority than segments 2601b and 2601n.
The data structure of Figure 30 is described in detail here with Figure 31.
As can be seen from Figure 31, a plurality of segments 2601a–2601n are described in the content. Segments 2601a–2601n are given segment ID numbers (id) 2901a–2901n, respectively. In this example, id 2901a is 1, id 2901b is 2, and id 2901n is n.
On the other hand, as can be seen from Figure 31, segment viewpoint 2802 is "TeamA" and is given link information (idref) 2803a–2803c. In this example, idref 2803a is 2, idref 2803b is n, and idref 2803c is 1.
Because the description order and arrangement of link information 2803a–2803c are interpreted as segment priority, in this example the segment priorities of link information 2803a and 2803b, which are stacked vertically, are the highest, and the segment priority of link information 2803c is the lowest.
Accordingly, segments 2601b and 2601n share the highest segment priority, and segment 2601a has the lowest.
In this way, segments having the same segment priority can be described even though no segment priority is described under the segment viewpoint.
Note that segment selection unit 16, 26 selects segments by the method described in Example 7.
Example 9
Examples 1 to 8 of the present invention identify content by means of a content identifier. However, content can also be identified without a content identifier, for example according to keywords or the co-occurrence probability of keywords, or by some other method.
Moreover, a user preference need not be prepared for each content item: user preferences may be prepared for several preference categories, such as time-dependent preferences, group preferences, and ordinary preferences, or prepared as defaults (initial settings) independent of the user's tastes.
Example 9 is an example in which content is identified without a content identifier. The user preference of Example 9 is described below with Figures 32 and 33, which show the description definition of the user preference of Example 9.
In the figures, UserPreference denotes the user preference, PreferredSummaryTheme denotes a keyword, SummaryDuration denotes duration information used for displaying the content, and preferenceValue denotes the priority given to a keyword.
As can be seen from the figures, UserPreference 3201 is defined to contain, as elements, UserIdentifier 3202, an identifier for identifying the user, and UsagePreference 3203, the user's taste information. UsagePreference 3203 is defined to occur zero times or once.
The definition of UserIdentifier 3202 is as follows. As shown at 3204 in the figure, UserIdentifier 3202 is defined to contain no elements. UserIdentifier 3202 is defined to have, as attributes, protected 3204, information indicating whether rewriting is permitted, and userName 3205, the user name. protected 3204 takes one of "true" (cannot be rewritten), "false" (can be rewritten), and "user" (rewritable depending on the user), and its initial setting is "true". userName 3205 is defined as CDATA, that is, text information, and is set to "anonymous" as its initial setting.
The definition of UsagePreference 3203 is as follows. As shown at 3206 in the figure, UsagePreference 3203 is defined as an element containing zero or more BrowsingPreferences 3207, which carry condition information such as time and weather. UsagePreference 3203 is also defined to have, as an attribute, allowAutomaticUpdate 3208, information indicating whether automatic rewriting by an administrator or the like is permitted. allowAutomaticUpdate 3208 takes one of "true" (may be rewritten), "false" (may not be rewritten), and "user" (rewritable depending on the user), and its initial setting is "true".
The definition of BrowsingPreferences 3207 is as follows. As shown at 3209 in the figure, BrowsingPreferences 3207 is defined to have zero or more SummaryPreferences 3210 as elements. BrowsingPreferences 3207 is also defined to have, as attributes, protected 3211, information indicating whether rewriting is permitted, and preferenceValue 3212, the priority of BrowsingPreferences 3207. protected 3211 takes one of "true" (cannot be rewritten), "false" (can be rewritten), and "user" (rewritable depending on the user), and its initial setting is "true". preferenceValue 3212 is defined as CDATA, that is, text information, and is set to 100 as its initial setting.
Because BrowsingPreferences 3207 thus has preferenceValue 3212, priority-based processing is possible even at BrowsingPreferences 3207, an upper layer of UserPreference 3201.
The definition of SummaryPreferences 3210 is as follows. As shown at 3301 in the figure, SummaryPreferences 3210 is defined to have, as elements, zero or more keywords, namely PreferredSummaryTheme 3302, and zero or one piece of display duration information, namely SummaryDuration 3303.
SummaryPreferences 3210 is also defined to have the priority preferenceValue 3304 as an attribute, whose initial setting is 100.
Because SummaryPreferences 3210 thus has the priority preferenceValue 3304 as an attribute, priority-based processing is possible even at SummaryPreferences 3210, the layer above the keywords PreferredSummaryTheme 3302.
The definition of PreferredSummaryTheme 3302 is as follows. As shown at 3305 in the figure, PreferredSummaryTheme 3302 is defined to have text data as an element. It is also defined to have, as optional attributes, the language information xml:lang 3306 and the priority information preferenceValue 3307, whose initial setting is 100.
Because the keyword PreferredSummaryTheme 3302 thus holds the priority information preferenceValue 3307 as an attribute, priority-based processing of PreferredSummaryTheme 3302 is possible.
In addition, as shown at 3308 in the figure, SummaryDuration 3303 is defined to contain text data as an element.
In this way, the user preference of Example 9 permits processing not per content item but at each upper layer of PreferredSummaryTheme 3302, namely BrowsingPreferences 3207 and SummaryPreferences 3210, and processing with a priority attached is possible at each of those upper layers.
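One plausible way to exploit the layered preferenceValue attributes (an assumption for illustration, not a procedure mandated by the text) is to weight each keyword by the values on its path, so a priority set at any layer affects everything below it:

```python
# Hypothetical hierarchy mirroring BrowsingPreferences ->
# SummaryPreferences -> PreferredSummaryTheme, each level carrying a
# preferenceValue (100 treated as neutral).
browsing = {
    "preferenceValue": 100,
    "summaries": [{
        "preferenceValue": 100,
        "themes": [("Nakata", 500), ("Soccer", 400)],
    }],
}

def effective_values(bp):
    """Combine each theme's value with its ancestors' preferenceValues."""
    out = {}
    for sp in bp["summaries"]:
        for theme, v in sp["themes"]:
            out[theme] = v * (bp["preferenceValue"] / 100) * (sp["preferenceValue"] / 100)
    return out

print(effective_values(browsing))  # -> {'Nakata': 500.0, 'Soccer': 400.0}
```

Halving a SummaryPreferences' preferenceValue to 50 would, under this scheme, halve the effective priority of all of its keywords at once.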
A concrete description of the user preference of Example 9 is explained below with Figure 34, which shows a description example of the user preference of Example 9. The description example of Figure 34 is the user preference description example shown in Figure 4 made to correspond to Example 9.
As shown at 3401 in the figure, UserIdentifier describes "true" as protected and "Bob" as userName. As shown at 3402, UsagePreferences describes "false" as allowAutomaticUpdate. As shown at 3403, BrowsingPreferences describes "true" as protected.
In addition, two SummaryPreferences 3404a, 3404b are described in UserPreference 3400.
As shown at 3405a in the figure, SummaryPreferences 3404a describes the information "Nakata" as a PreferredSummaryTheme, whose preferenceValue is 500. As shown at 3406a, SummaryPreferences 3404a describes "Soccer" as a PreferredSummaryTheme, whose preferenceValue is 400. As shown at 3407a, SummaryPreferences 3404a describes "Japan" as a PreferredSummaryTheme, whose preferenceValue is 200. Also, as shown at 3408a, SummaryDuration describes "PT5M", that is, 5 minutes.
As shown at 3405b in the figure, SummaryPreferences 3404b describes "Headline" as a PreferredSummaryTheme, whose preferenceValue is 500. As shown at 3406b, SummaryPreferences 3404b describes "Stock" as a PreferredSummaryTheme, whose preferenceValue is 500. As shown at 3407b, SummaryPreferences 3404b describes "Sports" as a PreferredSummaryTheme, whose preferenceValue is 300. As shown at 3408b, SummaryDuration describes "PT3M", that is, 3 minutes.
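A reconstruction of a Figure 34-style record as XML, parsed with the standard library, is sketched below; the element and attribute names follow the definitions quoted above, but the exact serialization is an assumption.

```python
import xml.etree.ElementTree as ET

doc = """
<UserPreference>
  <UserIdentifier protected="true" userName="Bob"/>
  <UsagePreferences allowAutomaticUpdate="false">
    <BrowsingPreferences protected="true">
      <SummaryPreferences>
        <PreferredSummaryTheme preferenceValue="500">Nakata</PreferredSummaryTheme>
        <PreferredSummaryTheme preferenceValue="400">Soccer</PreferredSummaryTheme>
        <PreferredSummaryTheme preferenceValue="200">Japan</PreferredSummaryTheme>
        <SummaryDuration>PT5M</SummaryDuration>
      </SummaryPreferences>
    </BrowsingPreferences>
  </UsagePreferences>
</UserPreference>
"""

root = ET.fromstring(doc)
# Collect every keyword with its priority, in document order.
themes = [(t.text, int(t.get("preferenceValue")))
          for t in root.iter("PreferredSummaryTheme")]
print(themes)  # -> [('Nakata', 500), ('Soccer', 400), ('Japan', 200)]
print(root.find(".//SummaryDuration").text)  # -> PT5M
```

The second SummaryPreferences block of the figure would be serialized the same way as a sibling element.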
As described above, contents corresponding to Figure 4 can also be described according to the user preference definition of Example 9.
The segment description of Example 9 is described next. First, the definition of the segment of Example 9 is explained with Figures 35 to 37, which show the definition of the segment description of Example 9.
In this example, the definition of AudioVisualSegment, a segment of audio and video, is described as the segment.
As shown at 3501 in the figure, AudioVisualSegment 3502 is defined to contain, as elements, zero or more PointOfView 3503 and MediaTime 3504. PointOfView 3503 is an element containing a viewpoint on the segment, and MediaTime 3504 is an element containing the segment's duration and the like.
As shown at 3505 in the figure, AudioVisualSegment 3502 is defined to have an identifier ID as an attribute. AudioVisualSegment 3502 is also defined to have, as optional attributes, href, reference information in the form of a URL, shown at 3506, and idref, reference information in the form of an ID, shown at 3507.
The definition of PointOfView 3503 is as follows. As shown at 3508 in the figure, PointOfView 3503 is defined to have, as elements, zero or one SupplementalInfo 3509 and one or more Value 3510. SupplementalInfo 3509 is an annotation representing the contents of PointOfView 3503, and Value 3510 is the priority information of PointOfView 3503.
As shown at 3511 in the figure, PointOfView 3503 is defined to optionally have the identifier information id as an attribute. As shown at 3512, PointOfView 3503 is further defined to have the segment viewpoint, viewPoint, as an attribute, and viewPoint is defined to be described as text information.
The definition of Value 3510 is as follows. As shown at 3513 in the figure, Value 3510 is defined to contain text information as an element. As shown at 3514, Value 3510 is defined to have the identifier id as an attribute, and this id is optional.
Because PointOfView 3503 and Value 3510 thus have ids, and AudioVisualSegment 3502 has an idref that refers to those ids, AudioVisualSegment 3502 can refer to PointOfView 3503 and Value 3510 through the idref. That is, AudioVisualSegment 3502 can be linked to PointOfView 3503 and Value 3510 through the idref.
The definition of SupplementalInfo 3509 is as follows. As shown at 3601 in the figure, SupplementalInfo 3509 is defined to have at least one FreeTextAnnotation 3602 or StructuredAnnotation 3603 as an element.
As shown at 3604 in the figure, FreeTextAnnotation 3602 is defined to have text data as an element, and, as shown at 3605, to have language information as an attribute.
The definition of StructuredAnnotation 3603 is as follows. As shown at 3606 in the figure, StructuredAnnotation 3603 is defined to have, as elements, zero or one Who 3607, information representing who did something; zero or one WhatObject 3608, information representing what it was done to; zero or one WhatAction 3609, information representing what was done; zero or one Where 3610, information representing where it was done; zero or one When 3611, information representing when it was done; and zero or one Why 3612, information representing why it was done.
As shown at 3613 to 3618 in the figure, Who 3607, WhatObject 3608, WhatAction 3609, Where 3610, When 3611, and Why 3612 are each defined to contain text information as an element.
As shown at 3619 in the figure, StructuredAnnotation 3603 is defined to have the identifier information id as an attribute, and, as shown at 3620, to have language information as an attribute.
In this way, what kind of information AudioVisualSegment 3502 carries can easily be identified from StructuredAnnotation 3603.
The definition of MediaTime 3504 is as follows. As shown at 3701 in the figure, MediaTime 3504 is defined to have MediaTimePoint 3702 and MediaDuration 3703 as elements. MediaTimePoint 3702 is the start-time information of AudioVisualSegment 3502, and MediaDuration 3703 is the duration information of AudioVisualSegment 3502.
As shown at 3704 in the figure, MediaTimePoint 3702 is defined to have text information as an element, and, as shown at 3705, MediaDuration 3703 is likewise defined to have text information as an element.
AudioVisualSegment 3502 is defined as described above.
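A sketch of one AudioVisualSegment as defined above follows; the element nesting matches the quoted definitions, while the concrete values (seg1, TeamA, the time strings) are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

seg = ET.fromstring("""
<AudioVisualSegment id="seg1">
  <PointOfView viewPoint="TeamA">
    <Value id="p101">5</Value>
  </PointOfView>
  <MediaTime>
    <MediaTimePoint>T00:00:00</MediaTimePoint>
    <MediaDuration>PT5M</MediaDuration>
  </MediaTime>
</AudioVisualSegment>
""")

pov = seg.find("PointOfView")
print(pov.get("viewPoint"))                      # -> TeamA
print(int(pov.find("Value").text))               # -> 5
print(seg.find("MediaTime/MediaDuration").text)  # -> PT5M
```

Selection for a user viewpoint then amounts to matching the viewPoint attribute and comparing the Value element's number against a threshold.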
Description examples of AudioVisualSegment 3502 defined as above are explained below.
Figure 38 shows the segment description example of Figure 17 made to correspond to Example 9.
As can be seen from Figure 38, segment 1602 describes the ID number (id) 1701 of the segment viewpoint (keyword), as in the description example of Figure 17. Segment 1602 also describes the segment priority (P) 1702 for the keyword, and describes "TeamA" as the keyword.
As in the description example of Figure 17, segment 1603 describes a reference number (idref) 1703 as link information to another segment viewpoint (keyword). Accordingly, the keyword of segment 1603 is "TeamA". Segment 1603 also describes the segment priority (P) 1704 for the keyword.
In this way, segments can be linked to one another in Example 9 as well.
The segment description example of Figure 19 made to correspond to Example 9 is explained below with Figure 39.
As can be seen from Figure 39, viewpoint table 1801 describes segment viewpoints 1802a to 1802c. Segment viewpoint 1802a describes an ID number (id) 1901a and is represented by the text "A".
Likewise, segment viewpoint 1802b describes an ID number (id) 1901b and is represented by the text "B", and segment viewpoint 1802c describes an ID number (id) 1901c and is represented by the text "C".
On the other hand, segments 1803 to 1805 each describe link information to segment viewpoints 1802a to 1802c of viewpoint table 1801, namely reference numbers (idref) 1903a to 1903c.
Also, segment 1803 describes the segment priority (P) 1904a for the keyword, segment 1804 describes the segment priority (P) 1904b for the keyword, and segment 1805 describes the segment priority (P) 1904c for the keyword.
In this way, even with Example 9, the list of segment viewpoints can easily be presented to the user in advance.
The segment description example of Figure 21 made to correspond to Example 9 is explained below with Figure 40.
As can be seen from Figure 40, viewpoint table 1801 need not describe all segment viewpoints appearing in the context description data, only those referenced by links. In addition, segment 2001 contains no link information to viewpoint table 1801.
In this way, the description of Figure 40 makes the description of Figure 21 correspond to Example 9.
The segment description example of Figure 23 made to correspond to Example 9 is explained below with Figure 41.
As can be seen from Figure 41, viewpoint table 2101 stores the viewpoints 2106, 2108, 2110 contained in segments 2103 to 2105. Specifically, it stores viewpoints 2301 to 2303 corresponding to viewpoints 2106, 2108, 2110.
Viewpoint table 2101 is then displayed before the user inputs a user viewpoint. By doing so, the description shown in Figure 41 makes the description of Figure 23 correspond to Example 9.
The segment description example of Figure 25 made to correspond to Example 9 is explained below with Figure 42.
As can be seen from Figure 42, a plurality of segments 2201a–2201c are described in the content. Segment 2201a is given the segment ID number (id) 2301a, segment 2201b is given the segment ID number (id) 2301b, and segment 2201c is given the segment ID number (id) 2301c.
On the other hand, segment viewpoint 2202a is "TeamA", and a plurality of segment priorities 2204a–2204c are described under segment viewpoint 2202a.
The segment priorities are given link information 2203a–2203c, respectively. Link information 2203a–2203c describes the reference numbers idref that refer to the IDs 2301a–2301c of segments 2201a–2201c.
In this way, segments 2201a–2201c can be referred to according to segment priorities 2204a–2204c.
Likewise, segment viewpoint 2202b is "TeamB", and a plurality of segment priorities 2204d–2204f are described under segment viewpoint 2202b.
Segment priorities 2204d–2204f are given link information 2203d–2203f, respectively, which describes the reference numbers idref referring to the IDs 2301a–2301c of segments 2201a–2201c.
In this way, segments 2201a–2201n can be referred to according to segment priorities 2204a–2204f.
Thus the description of Figure 42 makes the description of Figure 25 correspond to Example 9.
Utilize Figure 43,44 below, illustrate to make its segmentation shown in Figure 27 record and narrate the corresponding record of example and example 9.
As can be seen from the figure, a plurality of segments 2401a–2401c are described in the content. Segment 2401a is given a segment ID number (id) 2501a, segment 2401b is given a segment ID number (id) 2501b, and segment 2401c is given a segment ID number (id) 2501c.
On the other hand, a plurality of segment priorities 2404a–2404c are described under segment viewpoint 2402a.
Further, segment viewpoint 2402a is given link information 2403a–2403c, respectively. Link information 2403a–2403c describes the reference numbers idref used to refer to the IDs 2501a–2501c of segments 2401a–2401c.
Therefore, the target segment of segment priority 2404a is segment 2401a, the target segment of segment priority 2404b is segment 2401b, and the target segment of segment priority 2404c is segment 2401c.
On the other hand, a plurality of segment priorities 2404d–2404f are described under segment viewpoint 2402b. Segment viewpoint 2402b is likewise given link information 2403d–2403f, respectively. Link information 2403d–2403f describes the reference numbers idref used to refer to the IDs 2501a–2501c of segments 2401a–2401c.
Therefore, the target segment of segment priority 2404d is segment 2401b, the target segment of segment priority 2404e is segment 2401c, and the target segment of segment priority 2404f is segment 2401a.
Further, the segment priorities under segment viewpoint 2402a are given priority ID numbers (id) 2503a–2503c, respectively, and the segment priorities under segment viewpoint 2402b are given priority ID numbers (id) 2503d–2503f, respectively.
On the other hand, segments 2401a–2401c are given link information for segment priorities 2404a–2404f, namely priority reference numbers (idrefs) 2502a–2502e that refer to the priority IDs 2503a–2503f.
Therefore, the segment priorities targeting segment 2401a are segment priorities 2404a and 2404f, the segment priorities targeting segment 2401b are segment priorities 2404b and 2404d, and the segment priority targeting segment 2401c is segment priority 2404e.
In this way, Example 9 also makes it possible to refer to segments 2401a–2401c and segment priorities 2404a–2404f from both directions.
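This two-way referencing can be sketched as follows. The sketch is hypothetical: ids such as "p1" and "seg1" are invented for illustration and do not come from Figure 44, but the structure mirrors the text — each priority carries an idref to its target segment, and each segment carries idrefs back to the priorities that target it.

```python
# Hypothetical sketch of the Figure 44 cross-references.  Segment
# priorities reference segments via idref, and segments reference
# priorities back via idrefs, so either side can serve as the entry
# point.  All ids here are illustrative.
PRIORITIES = {  # priority id -> (segment viewpoint, segment idref)
    "p1": ("TeamA", "seg1"),
    "p2": ("TeamA", "seg2"),
    "p3": ("TeamB", "seg2"),
    "p4": ("TeamB", "seg1"),
}
SEGMENTS = {    # segment id -> idrefs of the priorities that target it
    "seg1": ["p1", "p4"],
    "seg2": ["p2", "p3"],
}

def priorities_of(segment_id):
    """Segment -> its viewpoints, following the segment's idrefs."""
    return [PRIORITIES[p][0] for p in SEGMENTS[segment_id]]

def segment_of(priority_id):
    """Priority -> its target segment, following the priority's idref."""
    return PRIORITIES[priority_id][1]

print(priorities_of("seg1"))  # ['TeamA', 'TeamB']
print(segment_of("p3"))       # seg2
```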
Next, a description that makes the segment description example shown in Figure 29 correspond to Example 9 is explained using Figure 45.
As can be seen from the figure, a plurality of segments 2601a–2601c are described in the content. Segments 2601a–2601c are given segment ID numbers (id) 2701a–2701c, respectively.
On the other hand, as can be seen from the figure, segment viewpoint 2602a is "TeamA". Segment viewpoint 2602a is given link information 2603a–2603c, respectively. Link information 2603a–2603c describes the reference numbers idref 2702a–2702c used to refer to the IDs 2701a–2701c of segments 2601a–2601c.
In addition, because the description order of link information 2603a–2603c is interpreted as the segment priority, in this example the segment priority of link information 2603a is the highest and the segment priority of link information 2603c is the lowest.
Therefore, the segment priority of segment 2601a is the highest, and the segment priority of segment 2601b is the lowest.
On the other hand, as can be seen from the figure, segment viewpoint 2602b is "TeamB". Segment viewpoint 2602b is given link information 2603d–2603f, respectively. Link information 2603d–2603f describes the reference numbers idref 2702d–2702f used to refer to the IDs 2701a–2701c of segments 2601a–2601c.
In addition, because the description order of link information 2603d–2603f is interpreted as the segment priority, in this example the segment priority of link information 2603d is the highest and the segment priority of link information 2603f is the lowest.
Therefore, the segment priority of segment 2601b is the highest, and the segment priority of segment 2601a is the lowest.
In this way, with the description of Figure 45, the description of Figure 29 can be made to correspond to Example 9.
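The convention just described — description order itself encoding the segment priority — can be sketched as follows. This is a hypothetical illustration; the ids are invented, not taken from Figure 45.

```python
# Hypothetical sketch of the Figure 45 convention: under each segment
# viewpoint, link information is an ordered list of segment idrefs, and
# the description order encodes the segment priority (first = highest).
VIEWPOINTS = {
    "TeamA": ["seg1", "seg3", "seg2"],   # idrefs in priority order
    "TeamB": ["seg2", "seg3", "seg1"],
}

def rank_of(viewpoint, segment_id):
    """1 = highest priority under the given viewpoint."""
    return VIEWPOINTS[viewpoint].index(segment_id) + 1

print(rank_of("TeamA", "seg1"))  # 1 (highest)
print(rank_of("TeamB", "seg1"))  # 3 (lowest)
```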
Next, a description that makes the segment description example shown in Figure 31 correspond to Example 9 is explained using Figure 46.
As can be seen from the figure, a plurality of segments 2601a–2601c are described in the content. Segments 2601a–2601c are given segment ID numbers (id) 2901a–2901c, respectively.
On the other hand, as can be seen from the figure, segment viewpoint 2802 is "TeamA". Segment viewpoint 2802 is given link information (idref) 2803a–2803c, respectively.
In addition, because the description order of link information 2803a–2803c is interpreted as the segment priority, in this example the segment priority of link information 2803a is the highest and the segment priority of link information 2803c is the lowest.
Therefore, the segment priority of segment 2601a is the highest, and the segment priority of segment 2601b is the lowest.
In this way, with the description of Figure 46, the description of Figure 31 can be made to correspond to Example 9.
Example 10
In Examples 1 to 9, the user preference is described in the form of XML-DTD; however, the user preference may also be described using RDF, XML-Schema, or other XML-based languages, using languages other than XML, or using descriptions other than a description language.
In Example 10, the user preference is described in the form of XML-Schema. The definition of the user preference of Example 10 is explained below using Figures 47 to 51, which show the description definition of the user preference of Example 10. Figures 47 to 51 correspond to the user preference description definition of Example 9.
As can be seen from the figure, UserPreference 3201 is defined to contain, as elements of the user profile, UserIdentifier 3202, an identifier used to identify the user, and UsagePreference 3203, the user preference information. UsagePreference 3203 is defined to occur zero times or once.
The definition of UsagePreference 3203 is explained next. As shown at 4803 in the figure, UsagePreference 3203 is defined to contain zero or more BrowsingPreferences 3207. UsagePreference 3203 is also defined to have allowAutomaticUpdate 3208 as an attribute. Further, as shown at 4801 in the figure, UsagePreference 3203 is defined to have zero or more FilteringAndSearchPreferences 4802 as elements.
The definition of BrowsingPreferences 3207 is explained next. As shown at 4900 in the figure, BrowsingPreferences 3207 is defined to have zero or more SummaryPreferences 3210 as elements. BrowsingPreferences 3207 is also defined to have protected 3211 and preferenceValue 3212 as attributes. Further, BrowsingPreferences 3207 is defined to contain zero or more PreferenceConditionType 4902 as elements.
The definition of SummaryPreferences 3210 is explained next. As shown at 5002 in the figure, SummaryPreferences 3210 is defined as an element of the type SummaryPreferencesType, which extends summaryComponentType so as to have a list with zero or more entries, as shown at 5003 in the figure. SummaryPreferences 3210 is also defined to have, as elements, zero or more keywords, namely PreferredSummaryTheme 3302, and zero or one piece of display duration information, namely SummaryDuration 3303.
SummaryPreferences 3210 is further defined to have a priority, namely preferenceValue 3304, as an attribute, and preferenceValue 3304 is defined with an initial value of 100.
In addition, PreferredSummaryTheme 3302 is defined as an element of TextualType, that is, it has text data and language information. PreferredSummaryTheme 3302 is also defined to have priority information, namely preferenceValue 3307, as an attribute.
Further, as shown at 5101–5110 in the figure, SummaryPreferences 3210 is defined to have as elements: MinSummaryDuration, indicating the minimum duration; MaxSummaryDuration, indicating the maximum duration; NumOfKeyframes, indicating the number of display frames; MinNumOfKeyframes, indicating the minimum number of display frames; MaxNumOfKeyframes, indicating the maximum number of display frames; NumOfChars, indicating the number of display characters per frame; MinNumOfChars, indicating the minimum number of display characters per frame; and MaxNumOfChars, indicating the maximum number of display characters per frame.
In addition, as shown at 5110 in the figure, these elements are defined to have a priority as an attribute.
In this way, with Figures 47 to 51, the user preference can be described in XML-Schema.
Next, a concrete description example of the user preference of Example 10 is explained using Figure 52. Figure 52 shows a description example of the user preference of Example 10.
As shown at 5201 in the figure, UserIdentifier is described with protected set to "true" and the user name described as "Mike". As shown at 5202 in the figure, UsagePreferences is described with allowAutomaticUpdate set to "false". As shown at 5203 in the figure, BrowsingPreferences is described with protected set to "true".
Further, two SummaryPreferences 5204 are described in UserPreference 5200.
As shown at 5205 in the figure, information called "Free-kick" is described as a PreferredSummaryTheme in SummaryPreferences 5204. As shown at 5206 in the figure, information called "Goals" is also described as a PreferredSummaryTheme in SummaryPreferences 5204. Further, as shown at 5208 in the figure, "PT5M", that is, information indicating 5 minutes, is described in SummaryDuration.
In this way, the user preference can be described in XML-Schema.
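As an illustration of how such a description might be consumed, the following sketch parses a preference fragment modeled on the Figure 52 example using Python's standard library. The element names follow the text; the namespace handling and schema validation of a real description are omitted, and the XML fragment itself is an assumed simplification, not a normative instance.

```python
# Hypothetical sketch: read the preferred summary themes and the
# SummaryDuration (an ISO 8601 duration such as "PT5M") out of a user
# preference description shaped like the Figure 52 example.
import re
import xml.etree.ElementTree as ET

USER_PREFERENCE = """
<UserPreference>
  <UserIdentifier protected="true">Mike</UserIdentifier>
  <UsagePreference allowAutomaticUpdate="false">
    <BrowsingPreferences protected="true">
      <SummaryPreferences>
        <PreferredSummaryTheme>Free-kick</PreferredSummaryTheme>
        <PreferredSummaryTheme>Goals</PreferredSummaryTheme>
        <SummaryDuration>PT5M</SummaryDuration>
      </SummaryPreferences>
    </BrowsingPreferences>
  </UsagePreference>
</UserPreference>
"""

def parse_duration(value):
    """Convert an ISO 8601 time duration such as 'PT5M' to seconds."""
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", value)
    hours, minutes, seconds = (int(g) if g else 0 for g in m.groups())
    return hours * 3600 + minutes * 60 + seconds

root = ET.fromstring(USER_PREFERENCE)
themes = [t.text for t in root.iter("PreferredSummaryTheme")]
duration = parse_duration(root.find(".//SummaryDuration").text)
print(themes, duration)   # ['Free-kick', 'Goals'] 300
```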
In addition, in Example 10 the description of segments is also made in XML-Schema. The segment description of Example 10 is explained below. First, the segment definition of Example 10 is explained using Figures 53 to 56, which show the definition of the segment description of Example 10.
The segment definition of Example 10 is based on the abstract segment shown in Figures 53 and 54, namely SegmentDS. AudioVisualSegmentDS, AudioSegmentDS, VideoSegmentDS, StillRegionDS, and MovingRegionDS each inherit SegmentDS to form concrete description definitions. AudioVisualSegmentDS is explained below, but first SegmentDS is explained.
As shown at 5300 in the figure, Segment is declared as an abstract type.
As shown at 5301 in the figure, Segment is defined to have, as an element, media information such as the coding format, namely MediaInformation. As shown at 5302, Segment is defined to have, as an element, link information to the entity of the segment, namely MediaLocator. As shown at 5303, Segment is defined to have, as an element, creator information such as copyright information, namely CreationInformation. As shown at 5304, Segment is defined to have, as an element, usage information such as usage methods and usage restrictions, namely UsageInformation. As shown at 5305, Segment is defined to have, as an element, annotations of the segment and the like, namely TextAnnotation. As shown at 5307, Segment is defined to have, as an element, the viewpoint (keyword) of the segment, namely PointOfView. Further, as shown at 5308, Segment is defined to have, as an element, SegmentDecomposition, which is used to specify sub-segments.
Also, as shown at 5311 in the figure, Segment is defined to have the identifier id as an attribute. As shown at 5312, Segment is defined to have link information to a URL, namely href, as an attribute. As shown at 5313, Segment is defined to have link information to an id, namely idref, as an attribute.
In this way, Segment has PointOfView as keyword information, the identifier id, and reference information to the identifiers of other segments, namely idref.
Next, AudioVisualSegment, which inherits the abstract segment described above, is explained using Figure 55.
As shown at 5501 in the figure, AudioVisualSegment is described as an extension of the abstract Segment described above.
As shown at 5502, AudioVisualSegment is defined to have, as an extended element, MediaTime, which indicates the start time.
In addition, as shown at 5504, AudioVisualSegment is defined to have, as an attribute, reference information to other segments, namely idref.
In this way, AudioVisualSegment has the reference information idref, as in Example 9.
Next, the definition of PointOfView is explained using Figure 56. As shown at 5600 in the figure, the type of PointOfView is declared.
Then, as shown at 5601, PointOfView is defined to have SupplementalInfo as an element. As shown at 5602, PointOfView is defined to have Value as an element.
The type of Value is PrimitiveImportanceType, shown at 5603 in the figure. As shown at 5604, PrimitiveImportanceType is defined to have, as an attribute, reference information to a segment id.
Further, as shown at 5606, PointOfView is described to have, as an attribute, viewPoint, in which a textual description is given.
In this way, PointOfView has an id and an idref, as in Example 9.
Next, description examples of segments based on the above definitions are explained using Figures 57 to 59, which show the segment description example of Example 10.
The description example shown in Figures 57 to 59 describes PointOfView within the segment structure.
In this example, as shown at 5700 in the figure, there is a segment with the id "FootBall Game". The segment with the id "FootBall Game" is composed of two sub-segments, shown at 5701a and 5701b in the figure. The sub-segment shown at 5701a contains sub-segment 5702a with the id "seg1", sub-segment 5702b with the id "seg2", and sub-segment 5702c with the id "seg20".
Sub-segment 5702a has, as shown at 5703a in the figure, a viewPoint "TeamA" whose priority, that is, Value, is 0.3 and, as shown at 5703b, a viewPoint "TeamB" whose Value is 0.7. Sub-segment 5702a also has MediaTime information, shown at 5704a.
Sub-segment 5702b has, as shown at 5703c, a viewPoint "TeamA" whose Value is 0.5. Sub-segment 5702b also has MediaTime information, shown at 5704b.
Sub-segment 5702c has, as shown at 5703d, a viewPoint "TeamA" whose Value is 0.8 and, as shown at 5703e, a viewPoint "TeamB" whose Value is 0.2. Sub-segment 5702c also has MediaTime information, shown at 5704c.
On the other hand, the sub-segment shown at 5701b contains sub-segment 5702d with the id "2seg1", sub-segment 5702e with the id "2seg2", and sub-segment 5702f with the id "2seg20". Sub-segment 5702d has, as shown at 5703f, a viewPoint "TeamA" whose Value is 0.3, and also has MediaTime information, shown at 5704d.
Sub-segment 5702e has, as shown at 5703g, a viewPoint "TeamA" whose Value is 0.5, and also has MediaTime information, shown at 5704e.
Sub-segment 5702f has, as shown at 5703h, a viewPoint "TeamA" whose Value is 0.8, and also has MediaTime information, shown at 5704f.
In this way, in the description example of Figures 57 to 59, PointOfView is described within the segment structure.
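The kind of selection this description supports can be sketched as follows. The nested structure of Figures 57 to 59 is modeled here with plain dictionaries, an illustrative simplification (the two intermediate sub-segments are flattened, and the dictionary layout is invented, not the patent's data format).

```python
# Hypothetical sketch of filtering the Figure 57-59 structure: segments
# are nested dicts, each carrying a list of (viewPoint, Value) pairs.
SEGMENTS = {
    "id": "FootBall Game",
    "children": [
        {"id": "seg1",   "views": [("TeamA", 0.3), ("TeamB", 0.7)], "children": []},
        {"id": "seg2",   "views": [("TeamA", 0.5)], "children": []},
        {"id": "seg20",  "views": [("TeamA", 0.8), ("TeamB", 0.2)], "children": []},
        {"id": "2seg1",  "views": [("TeamA", 0.3)], "children": []},
        {"id": "2seg2",  "views": [("TeamA", 0.5)], "children": []},
        {"id": "2seg20", "views": [("TeamA", 0.8)], "children": []},
    ],
}

def select(segment, viewpoint, threshold):
    """Depth-first walk collecting (id, Value) for matching viewpoints."""
    hits = [(segment["id"], v) for vp, v in segment.get("views", [])
            if vp == viewpoint and v >= threshold]
    for child in segment.get("children", []):
        hits.extend(select(child, viewpoint, threshold))
    return hits

print(select(SEGMENTS, "TeamA", 0.5))
# [('seg2', 0.5), ('seg20', 0.8), ('2seg2', 0.5), ('2seg20', 0.8)]
```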
Next, a description example in which the segment structure and the PointOfView structure are separated is explained using Figures 60 and 61, which show another segment description example of Example 10.
In this example, as shown at 6000 in the figure, there are a segment structure with the id "FootBall Game" and PointOfView structures 6005 and 6008. The segment structure with the id "FootBall Game" is composed of one sub-segment, shown at 6001 in the figure. The sub-segment shown at 6001 contains sub-segment 6002a with the id "seg1", sub-segment 6002b with the id "seg2", and sub-segment 6002c with the id "seg20".
Sub-segment 6002a has MediaTime information, shown at 6004a in the figure. Likewise, sub-segment 6002b has MediaTime information, shown at 6004b, and sub-segment 6002c has MediaTime information, shown at 6004c.
Sub-segment 6001 also has MediaTime information, shown at 6004d in the figure.
On the other hand, PointOfView structure 6005 has the PointOfView "TeamA" and segment reference information 6006a–6006c. Segment reference information 6006a has 0.3 as its Value and "seg1" as its segment reference idref. Segment reference information 6006b has 0.5 as its Value and "seg2" as its idref. Segment reference information 6006c has 0.8 as its Value and "seg20" as its idref.
In addition, PointOfView structure 6008 has the PointOfView "TeamB" and segment reference information 6009a and 6009b. Segment reference information 6009a has 0.7 as its Value and "seg1" as its idref. Segment reference information 6009b has 0.2 as its Value and "seg20" as its idref.
In this way, in Example 10 as well, because PointOfView has information referring to segment ids, the segment structure and the PointOfView structure can be separated.
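Resolving such a separated description back onto the segments can be sketched as a dictionary join over the idrefs, as below. The MediaTime strings are invented placeholders; only the Values and idrefs follow the Figure 60–61 example.

```python
# Hypothetical sketch of resolving the separated description of Figures
# 60-61: PointOfView entries reference segments by idref, so per-segment
# priorities for a chosen viewpoint come from a simple join.
SEGMENTS = {  # segment id -> MediaTime (placeholder strings)
    "seg1": "00:00-00:10", "seg2": "00:10-00:20", "seg20": "00:20-00:30",
}

POINTS_OF_VIEW = {  # viewPoint -> [(segment idref, Value), ...]
    "TeamA": [("seg1", 0.3), ("seg2", 0.5), ("seg20", 0.8)],
    "TeamB": [("seg1", 0.7), ("seg20", 0.2)],
}

def priorities_for(viewpoint):
    """Join the PointOfView structure back onto the segment structure."""
    return {idref: value for idref, value in POINTS_OF_VIEW.get(viewpoint, [])
            if idref in SEGMENTS}   # ignore dangling idrefs

print(priorities_for("TeamB"))  # {'seg1': 0.7, 'seg20': 0.2}
```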
Example 11
In the examples described above, a viewpoint (keyword) is given to each segment, and segments are extracted using that viewpoint. A plurality of segments extracted in this way are then grouped to make a summary.
In Example 11, a viewpoint (keyword) is given to this summary itself, so that summaries can also be extracted using a viewpoint.
The summary of Example 11 is explained below using Figures 62 to 65. First, the definition of the summary (described in the figures as HierarchicalSummary) is explained using Figures 62 and 63.
As shown at 6200 in the figure, the type of HierarchicalSummary is declared. As shown at 6201, HierarchicalSummary is defined to have, among its elements, SummaryThemeList, the viewpoint (keyword) list of the HierarchicalSummary. Further, as shown at 6202, HierarchicalSummary is defined to have, as an element, HighlightLevel, information indicating what kind of segments the HierarchicalSummary is composed of.
Further, as shown at 6203, HierarchicalSummary is defined to have summaryComponentList among its attributes. As shown at 6204, summaryComponentList inherits summaryComponentListType, which expresses what constitutes the HierarchicalSummary. Specifically, summaryComponentListType takes the form of a list containing keyFrames, expressing a frame structure; keyVideoClips, expressing a video clip structure; keyAudioClips, expressing an audio clip structure; keyThemes, expressing a theme structure; and unconstrained, expressing the absence of a restrictive condition.
Next, the definition of SummaryThemeList, the viewpoint (keyword) list of HierarchicalSummary, is explained using Figure 64.
As shown at 6400 in the figure, the type of SummaryThemeList is declared.
Then, as shown at 6401, SummaryThemeList is defined to have, as an element, SummaryTheme, a viewpoint of the HierarchicalSummary. SummaryTheme is further defined to be textual information and also to have language information.
In addition, as shown at 6402, SummaryTheme is defined to have the identifier id as an attribute. Further, as shown at 6403, SummaryTheme is defined to have parentId, reference information to another SummaryTheme at a higher level.
In this way, SummaryThemes can be expressed hierarchically.
Next, the definition of HighlightLevel, the information indicating what kind of segment structure the HierarchicalSummary has, is explained using Figure 65.
As shown at 6500 in the figure, HighlightLevel is declared. As shown at 6501, HighlightLevel has, as an element, HighlightSegment, the segment information contained in the HierarchicalSummary.
Further, as shown at 6502, HighlightLevel has HighlightLevel itself as an element. It is therefore possible to have a HighlightLevel at the level below a HighlightLevel, so that HighlightLevel forms a recursive structure.
In addition, as shown at 6503–6506, HighlightLevel has, as attributes, the name information name of the HighlightLevel, the priority information level of the HighlightLevel, the duration information duration of the HighlightLevel, and themeIds, reference information indicating which SummaryTheme is referred to.
In this way, HighlightLevel is linked to SummaryTheme.
Next, a concrete example of SummaryTheme made according to the definition of HierarchicalSummary is explained using Figure 66.
SummaryThemeList 6600 contains SummaryThemes 6601a–6601f.
SummaryTheme 6601a is expressed in English, as shown at 6602a in the figure. SummaryTheme 6601a has the id "item0", shown at 6603a, and its content, shown at 6604a, is "baseball".
SummaryTheme 6601b is expressed in English, as shown at 6602b. SummaryTheme 6601b has the id "item01", shown at 6603b, and its content, shown at 6604b, is "home run". SummaryTheme 6601b also has the parentId "item0", shown at 6605b. That is, SummaryTheme 6601b is a SummaryTheme located at the level below SummaryTheme 6601a.
SummaryTheme 6601c is expressed in English, as shown at 6602c. SummaryTheme 6601c has the id "item1", shown at 6603c, and its content, shown at 6604c, is "basketball". SummaryTheme 6601c also has the parentId "item0".
SummaryTheme 6601d is expressed in English, as shown at 6602d. SummaryTheme 6601d has the id "item11", shown at 6603d, and its content, shown at 6604d, is "three pointer". SummaryTheme 6601d also has the parentId "item1", shown at 6605d. That is, SummaryTheme 6601d is a SummaryTheme located at the level below SummaryTheme 6601c.
SummaryTheme 6601e is expressed in English, as shown at 6602e. SummaryTheme 6601e has the id "item12", shown at 6603e, and its content, shown at 6604e, is "slamdunk". SummaryTheme 6601e also has the parentId "item1", shown at 6605e. That is, SummaryTheme 6601e is a SummaryTheme located at the level below SummaryTheme 6601c.
SummaryTheme 6601f is expressed in English, as shown at 6602f. SummaryTheme 6601f has the id "item2", shown at 6603f, and its content, shown at 6604f, is "soccer".
In this way, because a SummaryTheme has a parentId, it can be linked to other SummaryThemes, and SummaryThemes can be linked hierarchically.
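Rebuilding the hierarchy from such a flat list can be sketched as follows, using the ids, contents, and parentIds of the Figure 66 example (the tuple layout is an illustrative assumption, not the patent's data format).

```python
# Hypothetical sketch of the Figure 66 hierarchy: each SummaryTheme
# carries (id, content, parentId); the flat list is grouped into an
# id -> children mapping, with None meaning "top level".
THEMES = [
    ("item0",  "baseball",      None),
    ("item01", "home run",      "item0"),
    ("item1",  "basketball",    "item0"),
    ("item11", "three pointer", "item1"),
    ("item12", "slamdunk",      "item1"),
    ("item2",  "soccer",        None),
]

def build_tree(themes):
    """Group theme ids under their parentId (None = top level)."""
    children = {}
    for theme_id, _content, parent_id in themes:
        children.setdefault(parent_id, []).append(theme_id)
    return children

tree = build_tree(THEMES)
print(tree[None])      # ['item0', 'item2']  - top-level themes
print(tree["item1"])   # ['item11', 'item12']
```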
Next, a description example of HierarchicalSummary using SummaryTheme is explained using Figure 67.
As shown at 6700 in the figure, a HierarchicalSummary with the name "KeyThemesSummary001" is declared.
The HierarchicalSummary is composed of SummaryThemeList 6701, HighlightLevel 6702a named "Summary001", and HighlightLevel 6702b named "Summary002".
SummaryThemeList 6701 has SummaryTheme 6703a, "slam dunk", which has the id "E0". SummaryThemeList 6701 also has SummaryTheme 6703b, "3-point shots", which has the id "E1".
On the other hand, HighlightLevel 6702a adopts the segment structure shown at 6704a in the figure, and has the themeIds "E0", shown at 6705a. That is, HighlightLevel 6702a is linked to SummaryTheme 6703a, which has the id "E0". HighlightLevel 6702a therefore has the SummaryTheme "slam dunk".
In addition, HighlightLevel 6702b adopts the segment structure shown at 6704b, and has the themeIds "E1", shown at 6705b. That is, HighlightLevel 6702b is linked to SummaryTheme 6703b, which has the id "E1". HighlightLevel 6702b therefore has the SummaryTheme "3-point shots".
In this way, a HierarchicalSummary can be given SummaryThemes.
Moreover, by doing so, HighlightLevel and SummaryTheme can be linked, so HighlightLevel and SummaryTheme can be composed and described separately.
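This linkage makes a summary retrievable by keyword, which can be sketched as follows. The segment lists inside each HighlightLevel are invented placeholders; the theme ids, theme texts, and summary names follow the Figure 67 example.

```python
# Hypothetical sketch of the Figure 67 linkage: each HighlightLevel
# references a SummaryTheme by themeIds, so a summary can be looked up
# by its theme keyword.
SUMMARY_THEMES = {"E0": "slam dunk", "E1": "3-point shots"}

HIGHLIGHT_LEVELS = [
    {"name": "Summary001", "themeIds": "E0", "segments": ["segA", "segB"]},
    {"name": "Summary002", "themeIds": "E1", "segments": ["segC", "segD"]},
]

def summaries_for(keyword):
    """Return names of HighlightLevels whose SummaryTheme matches keyword."""
    ids = {tid for tid, theme in SUMMARY_THEMES.items() if theme == keyword}
    return [hl["name"] for hl in HIGHLIGHT_LEVELS if hl["themeIds"] in ids]

print(summaries_for("slam dunk"))  # ['Summary001']
```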
In Examples 1 to 11, the user preference may be any of the user's usage history, purchase history, event history, a template prepared in advance, or the like.
Also, in Examples 1 to 11, the methods of describing the content identifier, duration, keywords, priorities, and the relation among keywords in the user preference are not limited to those of the present examples.
In addition, in Examples 1 to 11, the terminal preference need not be a concrete terminal kind or a terminal capability value; it may be an ID of a terminal type (for example, for PC use or for portable terminals) or the like. The terminal preference may also be determined uniquely by the terminal, and may be updated in accordance with version upgrades of firmware or software.
Also, in Examples 1 to 11, the data may be data other than audio, video, or documents.
In addition, in Examples 1 to 11, the user adaptation control unit need not update the user preference. Also, a plurality of different user preference descriptions may exist for the same content.
In addition, in Examples 1 to 10, there may be any number of levels, one or more, of segments and sub-segments.
In addition, in Example 2, the user preference, terminal preference, and network preference need not be at the terminal side; they may be at the server side or at an entirely separate management site. The network preference may be determined uniquely according to the contracted service of the terminal, or determined uniquely or dynamically according to the communication state between the terminal and the server.
Moreover, the operations of the respective parts may be realized by programming all or part of the operations of each part of the invention described above, storing the program on a recording medium readable by a computer, and executing the program with the computer.
Alternatively, the functions of each part of the invention described above may be realized with dedicated hardware capable of performing those functions.
It is also possible to adopt the form of a computer program product containing a program for causing a computer to execute all or part of the operations of each part of the invention described above.
This specification is based on Japanese Patent Application No. 11-344476 filed on December 3, 1999 and Japanese Patent Application No. 2000-066531 filed on March 10, 2000, the entire contents of which are incorporated herein.
Industrial applicability
As described above, according to the present invention, first, by holding the user's preference information as a user preference for each content item, selecting segments of the data according to the user preference, and performing resolution conversion according to the segment priorities and the terminal capability, each content item can be adapted into data in the form in which the user wants to see it.
Second, when distributing content over a network, by obtaining the passband information of the network as a network preference and adjusting the data amount according to the segment priorities and the passband, the data can be adapted scalably, in the form in which the user wants to receive it and according to the state of the network.
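This second effect can be sketched as a greedy selection under a bit budget, as below. The priorities and segment sizes are invented for illustration; a real implementation would take them from the segment priority metadata and the measured passband.

```python
# Hypothetical sketch: segments carry a priority and a size in bits;
# keep segments in descending priority order while the running total
# still fits what the network passband can deliver in the allowed time.
SEGMENTS = [
    ("seg1", 0.9, 4_000_000),   # (id, priority, size in bits) - illustrative
    ("seg2", 0.5, 6_000_000),
    ("seg3", 0.7, 2_000_000),
]

def adapt_to_passband(segments, passband_bps, seconds):
    """Greedy selection by descending priority under a bit budget."""
    budget = passband_bps * seconds
    selected, used = [], 0
    for seg_id, _prio, size in sorted(segments, key=lambda s: -s[1]):
        if used + size <= budget:
            selected.append(seg_id)
            used += size
    return selected

print(adapt_to_passband(SEGMENTS, 1_000_000, 7))  # ['seg1', 'seg3']
```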

Claims (11)

1. A content adaptation device, comprising:
a content data acquisition unit that acquires metadata and actual data of content, the metadata being composed of a data structure part and an attribute part corresponding to a plurality of segments constituting a certain time period of the content, the data structure part describing segments, and the attribute part describing segment viewpoints and link information, each segment viewpoint being a keyword indicating the substance of the content corresponding to a segment, and the link information being used to link to the segments targeted by each segment viewpoint;
a user preference acquisition unit that acquires user preference description information describing user viewpoints, which are keywords relating to the user's preferences regarding the content; and
a user adaptation unit that adapts the content to the user's preferences based on the metadata and the user preference description information.
2. The content adaptation device according to claim 1, wherein the metadata describes the segments hierarchically.
3. The content adaptation device according to claim 1 or 2, wherein the metadata has, added as an attribute, a segment priority indicating a degree of importance based on the segment viewpoint.
4. The content adaptation device according to claim 3, wherein the metadata has, added as its attributes, sets of the segment viewpoint and a plurality of the segment priorities.
5. The content adaptation device according to claim 1, wherein the user preference description information is information in which a user priority is added to the user viewpoint as its attribute.
6. The content adaptation device according to claim 5, wherein the user preference description information is information to which sets of the user viewpoint and a plurality of user priorities are added as its attributes.
7. The content adaptation device according to any one of claims 1 to 6, wherein the content is audio and/or video.
8. A content adaptation method, comprising the steps of:
acquiring metadata and actual data of content, the metadata being composed of a data structure part and an attribute part corresponding to a plurality of segments constituting a certain time period of the content, the data structure part describing segments, and the attribute part describing segment viewpoints and link information, each segment viewpoint being a keyword indicating the substance of the content corresponding to a segment, and the link information being used to link to the segments targeted by each segment viewpoint;
acquiring user preference description information describing user viewpoints, which are keywords relating to the user's preferences regarding the content; and
adapting the content to the user's preferences based on the metadata and the user preference description information.
9. A content adaptation device, comprising:
a content data acquisition unit that acquires metadata and actual data of content, the metadata having, added as an attribute to each of a plurality of segment descriptions constituting a certain time period of the content, link information linking to a representative frame (Audio Visual) of that segment description;
a user preference acquisition unit that acquires user preference information describing display frame count information (NumOfKeyframes) relating to the user's preference for the content; and
a user adaptation unit that adapts the content to the user preference information by analyzing and comparing the metadata and the user preference information.
10. The content adaptation device according to claim 1, wherein the display frame count information of the user preference information is minimum display frame count information (MinNumOfKeyframes) indicating a minimum number of display frames.
11. The content adaptation device according to claim 1, wherein the display frame count information of the user preference information is maximum display frame count information (MaxNumOfKeyframes) indicating a maximum number of display frames.
CNA2008101660380A | Priority date: 1999-12-03 | Filing date: 2000-11-29 | Data adapting device, data adapting method | Pending | CN101431652A (en)

Applications Claiming Priority (3)

Application Number | Priority Date | Filing Date | Title
JP1999344476 | 1999-12-03
JP34447699 | 1999-12-03
JP200066531 | 2000-03-10

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
CNB008188041A | Division | CN100437528C (en) | 1999-12-03 | 2000-11-29 | Data adaptive device, data adaptive method, and computer readable medium

Publications (1)

Publication Number | Publication Date
CN101431652A | 2009-05-13

Family

ID=40646780

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CNA2008101660380A | Pending | CN101431652A (en) | 1999-12-03 | 2000-11-29 | Data adapting device, data adapting method

Country Status (1)

Country | Link
CN (1) | CN101431652A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113297824A (en) * | 2021-05-11 | 2021-08-24 | 北京字跳网络技术有限公司 | Text display method and device, electronic equipment and storage medium
CN115166186A (en) * | 2022-08-08 | 2022-10-11 | 广东长天思源环保科技股份有限公司 | Online automatic monitoring system for water quality of water inlet of sewage treatment enterprise


Similar Documents

Publication | Publication Date | Title
CN100437528C (en) | Data adaptive device, data adaptive method, and computer readable medium
US12028577B2 (en) | Apparatus, systems and methods for generating an emotional-based content recommendation list
KR101006335B1 (en) | Information processing apparatus and information processing method and recording medium
JP4363806B2 (en) | Audiovisual program management system and audiovisual program management method
CN100377046C (en) | Intelligent default selection in an on-screen keyboard
US9615138B2 (en) | Systems and methods for acquiring, categorizing and delivering media in interactive media guidance applications
US9170738B2 (en) | Managing and editing stored media assets
US8296808B2 (en) | Metadata from image recognition
CN102591912B (en) | System and method for acquiring, sorting and delivering media in an interactive media guidance application
EP1189151A2 (en) | Agent interface device
US8875186B2 (en) | Apparatus and method of providing a recommended broadcast program
US20010024565A1 (en) | Television receiver
JP4932447B2 (en) | User terminal, control program therefor, content guidance system and control method
JP2005539307A (en) | Adapting media system interest profiles
US20050066350A1 (en) | Creating agents to be used for recommending media content
WO2004057819A1 (en) | A residential gateway system having a handheld controller with a display for displaying video signals
US10348426B2 (en) | Apparatus, systems and methods for identifying particular media content event of interest that is being received in a stream of media content
WO2011118249A1 (en) | Content recommendation server, content display terminal, and content recommendation system
CN101431652A (en) | Data adapting device, data adapting method
TWI474201B (en) | Construction system scene fragment, method and recording medium
JP3023359B1 (en) | Transmission device, reception device, transmission / reception device, transmission bubble reception method and transmission / reception method
CN104754427B (en) | For determining the method and system of media classification
US20080168094A1 (en) | Data Relay Device, Digital Content Reproduction Device, Data Relay Method, Digital Content Reproduction Method, Program, And Computer-Readable Recording Medium
JP2003153119A (en) | Television receiver, television program output method, computer program and recording medium recorded with computer program
CN114531612A (en) | Family informatization system

Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C02 | Deemed withdrawal of patent application after publication (patent law 2001)
WD01 | Invention patent application deemed withdrawn after publication

Application publication date: 2009-05-13

