CN111131900A - Multimedia interaction system and multimedia interaction method - Google Patents

Multimedia interaction system and multimedia interaction method
Info

Publication number
CN111131900A
CN111131900A
Authority
CN
China
Prior art keywords
display
navigation video
navigation
route
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811339373.6A
Other languages
Chinese (zh)
Inventor
蔡贵鈐
郑育镕
廖宪正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute for Information Industry
Original Assignee
Institute for Information Industry
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute for Information Industry
Publication of CN111131900A
Legal status: Pending

Abstract


This disclosure provides a multimedia interaction system and a multimedia interaction method. The multimedia interaction system includes a first display, a second display, and a server. The server is communicatively connected to the first display and the second display. The server is configured to receive a first real-time playing time of a first navigation movie being played back on the first display; obtain, according to the first real-time playing time, a non-occluded area associated with the first navigation movie; obtain a second navigation movie corresponding to the non-occluded area; and, if the second display is playing the second navigation movie, transmit interaction data to the first display and the second display. The disclosure can quickly confirm whether users are in a virtual space in which they can interact and communicate, reducing unnecessary computation on the server and lowering the bandwidth consumed in transmitting data.


Description

Multimedia interaction system and multimedia interaction method
Technical Field
The present disclosure relates to a multimedia system and method, and more particularly, to a multimedia interaction system and method.
Background
In spatial navigation systems, a navigable virtual space is typically built by splicing together a plurality of movies. Because this approach links movies without building a three-dimensional model, virtual coordinates and virtual lines of sight cannot be used directly to compute whether users at different virtual positions can communicate and interact. For example, such a virtual space system cannot determine whether the space between users is blocked by a shelter, or whether the interaction distance is too long.
Because an existing virtual space system cannot judge from movie content alone whether users can interact with each other, a movie-based spatial navigation system that does not use a three-dimensional image model faces the technical problem of computing the multi-user interaction range: quickly confirming whether users are in a virtual space in which they can communicate with each other.
Disclosure of Invention
This summary is provided to give the reader a basic understanding of the disclosure in simplified form. It is not an extensive overview of the disclosure and is intended neither to identify key or critical elements of the embodiments nor to delineate their scope.
According to an embodiment of the present disclosure, a multimedia interaction system includes a first display, a second display, and a server. The server is communicatively connected to the first display and the second display. The server is configured to receive a first real-time playing time of a first navigation movie played back on the first display; obtain a non-occluded area associated with the first navigation movie according to the first real-time playing time; obtain a second navigation movie corresponding to the non-occluded area; and, if the second display is playing the second navigation movie, transmit interaction data to the first display and the second display.
According to an embodiment of the present disclosure, the multimedia interaction system includes a first object corresponding to the first display and a second object corresponding to the second display, wherein the server is configured to transmit the first object and the second object to the first display and the second display, respectively, wherein the first display plays the first navigation movie with the second object, and the second display plays the second navigation movie with the first object.
According to an embodiment of the present disclosure, when the first object performs a first operation and the first operation is transmitted to the second display through the server, the first operation of the first object is presented on the second display, and when the second object performs a second operation and the second operation is transmitted to the first display through the server, the second operation of the second object is presented on the first display.
According to an embodiment of the present disclosure, the server stores a paragraph data lookup table, and the paragraph data lookup table records the first navigation movie, the non-masked area corresponding to the first navigation movie, and the second navigation movie corresponding to the non-masked area.
According to an embodiment of the present disclosure, the server is further configured to query the paragraph data lookup table to obtain the non-occluded area when the first real-time playing time falls within a first period of the first navigation movie.
According to the multimedia interaction system of an embodiment of the present disclosure, the server is further configured to obtain a second real-time playing time when the second display is playing the second navigation movie; query the paragraph data lookup table to determine whether the second real-time playing time falls within a second period of the second navigation movie; and, when the second real-time playing time falls within the second period, calculate a distance between a first position in the first navigation movie and a second position in the second navigation movie, wherein the first position corresponds to the first real-time playing time and the second position corresponds to the second real-time playing time.
According to an embodiment of the present disclosure, the server is further configured to determine whether the distance is smaller than a visible length, and to transmit the interaction data to the first display and the second display if so, where the visible length is the sum of a first field of view in which the first display displays the interaction data and a second field of view in which the second display displays the interaction data.
According to an embodiment of the present disclosure, the server is further configured to store map data, the map data including a first route and a second route, wherein the first navigation movie is captured by a camera along the first route, and the second navigation movie is captured by the camera along the second route.
According to an embodiment of the present disclosure, the map data further includes a mask between a portion of the first route and a portion of the second route.
The multimedia interaction system according to an embodiment of the disclosure, wherein the server is further configured to obtain the first period of the first navigation movie outside the masking mark of the first route and the second period of the second navigation movie outside the masking mark of the second route; and to store, in association in the paragraph data lookup table, the identifier of the first navigation movie with the first period, the non-occluded area outside the masking mark, and the identifier of the second navigation movie with the second period.
According to another embodiment, a multimedia interaction method is disclosed, including receiving a first real-time playing time of a first navigation movie played back on a first display; obtaining a non-occluded area associated with the first navigation movie according to the first real-time playing time; obtaining a second navigation movie corresponding to the non-occluded area; and, if a second display is playing the second navigation movie, transmitting interaction data to the first display and the second display.
According to an embodiment of the present disclosure, the multimedia interaction method further includes transmitting the first object and the second object to the first display and the second display, respectively, playing the first navigation movie with the second object through the first display, and playing the second navigation movie with the first object through the second display.
The method of multimedia interaction according to an embodiment of the present disclosure further includes presenting the operation of the first object on the second display when the first object performs a first operation and the first operation is transmitted to the second display through a server, and presenting the operation of the second object on the first display when the second object performs a second operation and the second operation is transmitted to the first display through the server.
According to an embodiment of the present disclosure, a paragraph data lookup table records the first navigation movie, the non-occluded area corresponding to the first navigation movie, and the second navigation movie corresponding to the non-occluded area.
According to an embodiment of the present disclosure, the method further includes querying the paragraph data lookup table to obtain the non-occluded area when the first real-time playing time falls within a first period of the first navigation movie.
The multimedia interaction method according to an embodiment of the present disclosure further includes obtaining a second real-time playing time when the second display is playing the second navigation movie; querying the paragraph data lookup table to determine whether the second real-time playing time falls within a second period of the second navigation movie; and, when the second real-time playing time falls within the second period, calculating a distance between a first position in the first navigation movie and a second position in the second navigation movie, wherein the first position corresponds to the first real-time playing time and the second position corresponds to the second real-time playing time.
The multimedia interaction method according to an embodiment of the present disclosure further includes determining whether the distance is smaller than a visible length, and transmitting the interaction data to the first display and the second display if so, where the visible length is the sum of a first field of view in which the first display displays the interaction data and a second field of view in which the second display displays the interaction data.
The multimedia interaction method according to an embodiment of the present disclosure further includes obtaining map data, where the map data includes a first route and a second route, and the first navigation movie is captured by a camera along the first route, and the second navigation movie is captured by the camera along the second route.
According to an embodiment of the present disclosure, the map data further includes a mask between a portion of the first route and a portion of the second route.
The multimedia interaction method according to an embodiment of the present disclosure further includes obtaining the first period of the first navigation movie outside the masking mark of the first route and the second period of the second navigation movie outside the masking mark of the second route; and storing, in association in the paragraph data lookup table, the identifier of the first navigation movie with the first period, the non-occluded area outside the masking mark, and the identifier of the second navigation movie with the second period.
Drawings
The following detailed description, when read in conjunction with the appended drawings, will facilitate a better understanding of aspects of the disclosure. It should be noted that, in accordance with standard practice, the features of the drawings are not necessarily drawn to scale; in fact, the dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion.
FIG. 1 is a schematic diagram of a multimedia interaction system according to some embodiments of the present disclosure;
FIG. 2 is a schematic diagram of map data of a multimedia interactive system according to some embodiments of the present disclosure;
FIG. 3 is a flow chart illustrating steps of a method for multimedia interaction according to some embodiments of the present disclosure;
FIG. 4 is a schematic diagram illustrating the range of interaction between users of a multimedia interaction system according to some embodiments of the present disclosure;
FIG. 5 is a flowchart illustrating steps for creating a paragraph data lookup table according to some embodiments of the present disclosure.
Detailed Description
The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. Specific examples of components and arrangements are described below to simplify the present disclosure. Of course, these examples are merely illustrative and are not intended to be limiting. For example, forming a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features such that the first and second features may not be in direct contact. Additionally, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
Further, spatially relative terms, such as "under," "below," "lower," "above," "higher," and the like, may be used herein for ease of description to describe one element or feature's relationship to another element (or elements) or feature (or features) as illustrated in the figures. Spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
Referring to FIG. 1, a schematic diagram of a multimedia interaction system 100 according to some embodiments of the present disclosure is shown. As shown in FIG. 1, the multimedia interaction system 100 includes a server 110 and a plurality of displays 120a-120n. The displays 120a-120n may be virtual reality head-mounted devices, display screens, or the like, and are communicatively coupled to the server 110. In one embodiment, each of the displays 120a-120n is connected to a local host device (not shown), and the host device communicates with the server 110 via a wired or wireless connection to download data from, or upload data to, the server 110. The displays 120a-120n thus acquire data from the server 110 through the host device.
The displays 120a-120n may be disposed in the same or in different geographic locations. For example, they may be disposed in different rooms of the same building, so that a plurality of users can operate them.
The server 110 stores a plurality of navigation movies, and the displays 120a-120n can play navigation movies downloaded from the server 110. A navigation movie may be a pre-recorded movie of an actual scene, for example, recorded along different routes in the British Museum. The creation of navigation movies is described later.
Referring to FIG. 2, map data 200 of the multimedia interaction system 100 of FIG. 1 is shown according to some embodiments of the present disclosure. As shown in FIG. 2, the map data 200 includes a plurality of routes 221, 223, 225, and 227, a plurality of endpoints 211, 211d, 213, 215, 217, and 219, and occlusion labels 231, 233, and 235. A navigation movie is recorded in advance along each of the routes 221, 223, 225, and 227. For example, navigation movies recorded at the British Museum may include one taken along route 221 from the Main Entrance to the Great Court and on to Room 2, one taken along route 225 from the Great Court to the East Stairs, one taken along route 227 from the Great Court to the West Stairs, one taken along route 223 from the Great Court to the South Stairs, and so on.
A user can select which movie to watch. For example, a user who wants to know how to go from the Main Entrance to the Great Court can choose to watch the corresponding navigation movie. Notably, the user can watch through any one of the displays 120a-120n in the multimedia interaction system 100 of FIG. 1. Multiple displays can simultaneously play the same or different navigation movies, so that multiple users can watch online at the same time. For example, a first user 241 views a selected navigation movie through the display 120a, a second user 243 through the display 120b, a third user through the display 120n, and so on.
The map data 200 has occlusion labels 231, 233, and 235. The occlusion label 231 indicates that there is an obstacle, wall, or the like between route 221 and route 223. The occlusion label 233 indicates an obstacle or wall between route 225 and route 227, and the occlusion label 235 indicates an obstacle or wall between route 227 and route 223.
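A minimal sketch of how such map data might be represented in code (the structure and names are illustrative assumptions, not specified by the patent; occlusion labels here mark whole route pairs, whereas the patent's labels cover only portions of routes):

```python
from dataclasses import dataclass, field

@dataclass
class Route:
    name: str
    # (endpoint label, playback time in seconds) pairs along the route
    endpoints: list

@dataclass
class MapData:
    routes: dict = field(default_factory=dict)
    # occlusion labels: unordered route pairs with a shelter between them (simplified)
    occlusions: set = field(default_factory=set)

    def occluded(self, a, b):
        return frozenset((a, b)) in self.occlusions

m = MapData()
# Endpoint times taken from the description: 211 at 0:00, 211a at 2:50, 211b at 7:50, ...
m.routes["route221"] = Route("route221",
    [("211", 0), ("211a", 170), ("211b", 470), ("211c", 600), ("211d", 770)])
m.routes["route223"] = Route("route223",
    [("213", 0), ("213a", 120), ("213b", 390), ("215", 570)])
m.occlusions.add(frozenset(("route221", "route223")))  # occlusion label 231

print(m.occluded("route221", "route223"))  # True
```

With this shape, occlusion label 233 and 235 would be added the same way for the route pairs (225, 227) and (227, 223).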
For example, in the first navigation movie, the user views the scene along route 221, passing endpoint 211, endpoint 211a, endpoint 211b, endpoint 211c, and endpoint 211d in sequence. Endpoints 211, 211a, 211b, 211c, and 211d correspond to 0 minutes 0 seconds, 2 minutes 50 seconds, 7 minutes 50 seconds, 10 minutes 0 seconds, and 12 minutes 50 seconds of the first navigation movie, respectively.
In the second navigation movie, the user views the scene along route 223 from endpoint 213 through endpoints 213a and 213b to endpoint 215. Endpoints 213, 213a, 213b, and 215 correspond to 0 minutes 0 seconds, 2 minutes 0 seconds, 6 minutes 30 seconds, and 9 minutes 30 seconds of the second navigation movie, respectively.
In the third navigation movie, the user views the scene along route 225 from endpoint 225a through endpoint 225b to endpoint 219. Endpoints 225a, 225b, and 219 correspond to 0 minutes 0 seconds, 3 minutes 30 seconds, and 8 minutes 40 seconds of the third navigation movie, respectively.
In one embodiment, when the first user 241 is watching the first navigation movie (route 221) and, at the same time, the second user 243 is watching the second navigation movie (route 223), the first user 241 may, under certain conditions, see the virtual figure (avatar) and the avatar's actions, and/or hear the voice, of the second user 243 in the first navigation movie, and vice versa. That is, when there is no shelter between route 221 and route 223, the first user 241 and the second user 243 can see each other's avatar in the navigation movie. For example, if the first navigation movie is played to 8 minutes 32 seconds (between endpoint 211b and endpoint 211c) and the second navigation movie is played to 0 minutes 50 seconds (between endpoint 213 and endpoint 213a), there is no shelter between route 221 and route 223 (such as the region R2 shown in FIG. 2), and the first user 241 and the second user 243 can see each other's avatar.
In one embodiment, if the first navigation movie is played to 5 minutes 15 seconds (between endpoint 211a and endpoint 211b) and the second navigation movie is played to 5 minutes 30 seconds (between endpoint 213a and endpoint 213b), the first user 241 and the second user 243 cannot see each other's avatar, see its actions, or hear each other's voice, because there is a shelter between route 221 and route 223 (such as the occlusion label 231 shown in FIG. 2).
Referring to FIG. 3, a flow chart of the steps of a multimedia interaction method according to some embodiments of the present disclosure is shown. The method can be applied to the multimedia interaction system 100 of FIG. 1. The following description takes the multimedia interaction system 100 of FIG. 1 and the map data 200 of FIG. 2 as examples, but the disclosure is not limited thereto. In step S310, the first user 241 views the first navigation movie using the display 120a. During viewing, the server 110 obtains a first real-time playing time (e.g., 8 minutes 32 seconds), i.e., the movie time of the first navigation movie currently displayed by the display 120a. In step S320, the server 110 queries the paragraph data lookup table (Table 1 below) for the non-occluded area, using the first real-time playing time to find the other scene positions visible from the scene position at that movie time. For example, the first navigation movie (Video 1) corresponds to non-occluded areas R1, R2, R3, and R4. The first real-time playing time of 8 minutes 32 seconds falls within the first period starting at 7 minutes 50 seconds and ending at 10 minutes 0 seconds, and this first period corresponds to the non-occluded area R2.
Table 1: Paragraph data lookup table (partial; entries reconstructed from the examples in this description)

First movie | First period | Non-occluded area | Second movie | Second period
Video 1 | 0:00-2:50 | R1 | Video 4 | 6:30-9:30
Video 1 | 7:50-10:00 | R2 | Video 4 | 0:00-2:00
In step S330, the server 110 obtains, from the paragraph data lookup table, the second navigation movie corresponding to the non-occluded area found in step S320. For example, when the non-occluded area R2 is obtained, the second navigation movie corresponding to R2, namely Video 4, is obtained at the same time.
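Steps S310-S330 amount to a lookup keyed on the movie identifier and playback time. A hedged sketch (the row layout, periods in seconds, and function name are illustrative assumptions; the patent does not specify a concrete encoding):

```python
# Each row: (movie, period_start_s, period_end_s, non_occluded_area,
#            peer_movie, peer_period_start_s, peer_period_end_s)
LOOKUP = [
    ("Video 1", 0,   170, "R1", "Video 4", 390, 570),   # 0:00-2:50 <-> 6:30-9:30
    ("Video 1", 470, 600, "R2", "Video 4", 0,   120),   # 7:50-10:00 <-> 0:00-2:00
]

def find_unmasked(movie, t):
    """Return (area, peer movie, peer period) for playback time t, or None."""
    for m, start, end, area, peer, ps, pe in LOOKUP:
        if m == movie and start <= t <= end:
            return area, peer, (ps, pe)
    return None

# A first real-time playing time of 8 min 32 s = 512 s falls in the 7:50-10:00 period
print(find_unmasked("Video 1", 512))  # ('R2', 'Video 4', (0, 120))
```

A time that falls inside no listed period (e.g. 5 min 15 s, behind occlusion label 231) simply returns None, which corresponds to step S352 below.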
In step S340, in one embodiment, the second user 243 uses the display 120b to watch the second navigation movie (e.g., Video 4). During viewing, the server 110 obtains a second real-time playing time (e.g., 2 minutes 50 seconds), i.e., the movie time of the second navigation movie currently displayed by the display 120b.
In step S350, the server 110 determines whether the second real-time playing time falls within the second period. If not, in step S352 the server 110 determines that there is currently no user with whom the first user 241 can interact.
If the second real-time playing time falls within the second period, in step S360 the server 110 calculates the distance between a first position corresponding to the first real-time playing time and a second position corresponding to the second real-time playing time. For example, the movie times of the first navigation movie at endpoints 211, 211a, 211b, and 211c are 0 minutes 0 seconds, 2 minutes 50 seconds, 7 minutes 50 seconds, and 10 minutes 0 seconds, respectively, so the movie length from endpoint 211 to endpoint 211c is 10 minutes 0 seconds. When the first real-time playing time is 8 minutes 32 seconds, the remaining time to 10 minutes 0 seconds is 1 minute 28 seconds (10:00 - 8:32). Given the virtual length L between endpoint 211 and endpoint 211c (or a scene distance measured in advance), the distance from the virtual position at the first real-time playing time to the virtual position of endpoint 211c can be calculated as L x (1 minute 28 seconds / 10 minutes 0 seconds), i.e., L x 88/600.
In one embodiment, the navigation movies are captured while the camera moves forward at a fixed speed, so the length L of a route can be calculated from the camera's moving speed and the movie time. In another embodiment, an inertial measurement unit (IMU) is disposed on the camera, and the length L of the route is obtained from the IMU's measurement data while the camera moves and records.
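Under the fixed-camera-speed assumption, the virtual position at a given playback time follows by linear interpolation along the route. A sketch (function name, and the virtual length L = 100, are illustrative assumptions):

```python
def position_along_route(t_s, t_start_s, t_end_s, length):
    """Distance travelled from the segment start, assuming constant camera speed."""
    frac = (t_s - t_start_s) / (t_end_s - t_start_s)
    return frac * length

# Endpoint 211 at 0:00 (0 s), endpoint 211c at 10:00 (600 s);
# first real-time playing time 8:32 (512 s); illustrative virtual length L = 100.
L = 100.0
d_done = position_along_route(512, 0, 600, L)
print(round(L - d_done, 2))  # 14.67  -> distance remaining to endpoint 211c
```

The remaining distance is L x 88/600, matching the time-proportion calculation in the text; with an IMU-measured L the same interpolation applies.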
In step S370, the server 110 determines whether the calculated distance is smaller than the visible length, where the visible length is the sum of a first field of view in which the display 120a displays the interaction data and a second field of view in which the display 120b displays the interaction data.
Referring to FIG. 4, a schematic diagram of the interaction range between users of the multimedia interaction system 100 of FIG. 1 is shown according to some embodiments of the present disclosure. When the first navigation movie is played to endpoint A, the display 120a may display the interaction data within a first field of view r1; when the second navigation movie is played to endpoint B, the display 120b may display the interaction data within a second field of view r2. The lengths of the first field of view r1 and the second field of view r2 can be set in advance. The server 110 calculates the distance d between endpoint A and endpoint B, and the visible length is the sum r1 + r2.
If the distance d is less than or equal to the visible length r1 + r2, the server 110 transmits the interaction data to the display 120a and the display 120b in step S380. The interaction data may be augmented-reality avatar objects displayed on the displays 120a and 120b. For example, the first user 241 may see the avatar and avatar actions, and/or hear the voice, of the second user 243 in the first navigation movie, and vice versa. The operation or motion information of the first user 241 can be obtained through a sensor (not shown), a microphone (not shown), or other means, so that the second user 243 can see the actions of the first user's 241 avatar (for example, waving a hand) through the display 120b, or hear the first user 241 through a speaker (not shown). Similarly, the operation or action information of the second user 243 is transmitted to the display 120a through the server 110. In this way, the first user 241 and the second user 243 can interact through the displays 120a and 120b.
If the distance d is greater than the visible length r1 + r2, the process returns to step S352, and the server 110 determines that there is currently no user with whom the first user 241 can interact.
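Steps S350-S380 thus amount to a simple gate on the server: check that the peer's playback time is inside the corresponding period, then compare the virtual distance against the combined fields of view. A hedged sketch (function name and all numeric values are illustrative):

```python
def can_interact(t2, period2, d_ab, r1, r2):
    """Relay interaction data only if the peer's playback time falls inside its
    period AND the virtual distance is within the visible length r1 + r2."""
    start, end = period2
    return (start <= t2 <= end) and d_ab <= r1 + r2

# Illustrative values: peer at 0:50 (50 s) inside period 0:00-2:00,
# distance 15 units, fields of view r1 = 10 and r2 = 8.
print(can_interact(50, (0, 120), 15.0, 10.0, 8.0))   # True: 15 <= 18
print(can_interact(50, (0, 120), 20.0, 10.0, 8.0))   # False: distance exceeds r1 + r2
print(can_interact(170, (0, 120), 15.0, 10.0, 8.0))  # False: outside the second period
```

Only when the gate passes does the server forward the avatar, motion, and voice data to both displays; otherwise it reports that no interaction partner exists (step S352).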
In some embodiments, the paragraph data lookup table may instead be queried by the displays 120a-120n, or by a host device connected to them, to perform the multimedia interaction method.
Referring now to FIG. 5, a flowchart of the steps for creating the paragraph data lookup table according to some embodiments of the present disclosure is shown. As shown in FIG. 5, in step S510, a camera captures the first navigation movie along a first route (e.g., route 221 of FIG. 2) and the second navigation movie along a second route (e.g., route 223 of FIG. 2). The camera may be a spherical camera, and a navigation movie may be a panoramic image with a 360-degree field of view. In step S520, the recorded navigation movies are transmitted to a computing device (not shown in FIG. 1), which may be any electronic device capable of image processing. In one embodiment, the computing device executes an image-matching technique on the start and end frames of each navigation movie to calculate the spatial relationships between the movies. For example, in Table 1, the image at the end of Video 1 is similar to the images at the start of Video 2, Video 3, and Video 4 at corresponding horizontal angles (e.g., the image captured by the spherical camera at 180 degrees), so the computing device records a correlation between the end point of Video 1 and the start points of Video 2, Video 3, and Video 4. In this way, the map data 200 of FIG. 2 can be established from the spatial correlations between movies and the length of each route.
Next, in step S530, the map data is transmitted to the computing device. In step S540, the computing device creates the occlusion labels of the map data. In one embodiment, a map maintainer manually marks the occluded areas; alternatively, the computing device executes an image-comparison algorithm to calculate the similarity between images, and when the similarity is lower than a threshold value, it judges that there is occlusion between the routes at that position. The image-similarity algorithm may be, but is not limited to, feature-point matching, a color histogram, mutual information, machine learning, or the like. In one embodiment, images of a navigation movie at different viewing angles are compared with images of other navigation movies at other viewing angles. For example, when the similarity between the image of the first navigation movie at a viewing angle of 0 degrees and the image of the second navigation movie at a viewing angle of 180 degrees is greater than the threshold value, a correspondence is determined to exist between the image viewing angles of the two movies; likewise when the image of the first navigation movie at 180 degrees matches the image of the second navigation movie at 220 degrees. When such a correspondence exists between the image viewing angles of two navigation movies, it can be determined that there is no shelter between their positions.
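One way to realize the image-comparison step is histogram intersection, one of the algorithm families the description names. A hedged sketch on toy grayscale pixel lists (bin count, threshold, and the toy frames are all illustrative assumptions; a real system would compare full panorama frames):

```python
def histogram(pixels, bins=8):
    """Count pixel intensities (0-255) into equal-width bins."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    return h

def similarity(a, b):
    """Normalized histogram intersection in [0, 1] for equal-length pixel lists."""
    ha, hb = histogram(a), histogram(b)
    inter = sum(min(x, y) for x, y in zip(ha, hb))
    return inter / max(1, len(a))

THRESHOLD = 0.7  # illustrative; below it, an occlusion label would be created
frame_a = [10, 12, 200, 205, 30, 33]   # toy "frame" from one movie's viewing angle
frame_b = [11, 13, 198, 210, 29, 35]   # toy "frame" from another movie's viewing angle
print(similarity(frame_a, frame_b) >= THRESHOLD)  # True: views correspond, no shelter
```

Feature-point matching or mutual information could be swapped in behind the same `similarity` interface; only the thresholding logic of step S540 matters here.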
In step S550, the computing device excludes portions of the relationships between the navigation videos according to the occlusion labels, obtaining the relationship between the first navigation video during a first period and the second navigation video during a second period. For example, suppose a relationship has been established between Video 1 and Video 4, Video 1 has an occlusion label between end points 211a and 211b of route 211, and Video 4 has an occlusion label between end points 213a and 213b of route 213. The computing device then records a correspondence between Video 1 from end point 211 to end point 211a and Video 4 from end point 213b to end point 215 (i.e., the unoccluded region R1). Similarly, the computing device records a correspondence between Video 1 from end point 211b to end point 211c and Video 4 from end point 213 to end point 213a (i.e., the unoccluded region R2).
In step S560, the identifier of the first navigation video, the first period, the unoccluded region, the identifier of the second navigation video, and the second period are recorded in the segment data lookup table. For example, the first row of the foregoing Table 1 records the unoccluded region R1, the identifier of Video 1, the first period, the identifier of Video 4, and the second period.
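The lookup table built in steps S550 and S560, and the kind of query the server performs on it, might be sketched as follows; the `SegmentEntry` fields and the `find_pairable` helper are hypothetical names chosen for illustration, not the disclosed schema.

```python
from dataclasses import dataclass

@dataclass
class SegmentEntry:
    region: str      # unoccluded region id, e.g. "R1"
    video_a: str     # identifier of the first navigation video
    period_a: tuple  # (start, end) playback times within video_a
    video_b: str     # identifier of the second navigation video
    period_b: tuple  # (start, end) playback times within video_b

def find_pairable(table, video_id, playback_time):
    """Return (other_video, its_period) pairs whose unoccluded region
    covers the given playback time of video_id."""
    matches = []
    for e in table:
        if e.video_a == video_id and e.period_a[0] <= playback_time <= e.period_a[1]:
            matches.append((e.video_b, e.period_b))
        elif e.video_b == video_id and e.period_b[0] <= playback_time <= e.period_b[1]:
            matches.append((e.video_a, e.period_a))
    return matches
```

A query like this lets the server discard, in one table scan, every viewer whose current playback position cannot possibly see the user, before any distance computation is attempted.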
In summary, the multimedia interaction system 100 and the multimedia interaction method of the present disclosure provide users with a navigation video and let them interact with other users who are also watching navigation videos of the same scene. To make the experience more immersive, the present disclosure combines image recognition and augmented reality so that, while watching a navigation video, a user can see users located at other positions in the scene and interact with them, for example by waving, asking where they are, or exchanging impressions. Users can thus become familiar with the scene more quickly, and the realism and enjoyment of watching the navigation video are increased. In addition, the present disclosure can quickly determine whether users are in a virtual space in which they can interact. By consulting the segment data lookup table, unnecessary information is filtered out; the virtual distance between users is calculated from their virtual positions, and if each user has set a virtual interaction range, interaction is provided to those whose ranges intersect. This reduces the amount of unnecessary data the server must process and lowers the bandwidth consumed in transmitting data.
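The distance filter summarized above (detailed in claims 6, 7, 16, and 17) might look like the following sketch; positions along a route are assumed to be derived from playback times, and all names are illustrative.

```python
def should_interact(pos_a, pos_b, view_a, view_b):
    """Transmit interaction data only when the users' virtual positions are
    closer than the sum of their fields of view (the 'visible length')."""
    distance = abs(pos_a - pos_b)
    visible_length = view_a + view_b
    return distance < visible_length
```

Only users who pass both the lookup-table check and this distance check receive interaction data, which is how the server avoids computing and transmitting state for users who cannot see each other.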
The foregoing outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that the present invention may be readily utilized as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure.

Claims (20)

1. A multimedia interaction system, comprising:
a first display and a second display; and
a server communicatively connected to the first display and the second display, the server being configured to:
receive a first real-time playback time at which the first display plays back a first navigation video;
obtain, according to the first real-time playback time, an unoccluded region associated with the first navigation video;
obtain a second navigation video corresponding to the unoccluded region; and
if the second display plays the second navigation video, transmit interaction data to the first display and the second display.

2. The multimedia interaction system of claim 1, wherein the interaction data comprises a first object corresponding to the first display and a second object corresponding to the second display, wherein the server transmits the first object and the second object to the first display and the second display respectively, wherein the first display plays the first navigation video with the second object, and the second display plays the second navigation video with the first object.

3. The multimedia interaction system of claim 2, wherein when the first object performs a first operation and the first operation is transmitted to the second display through the server, the first operation of the first object is presented on the second display; and when the second object performs a second operation and the second operation is transmitted to the first display through the server, the second operation of the second object is presented on the first display.

4. The multimedia interaction system of claim 1, wherein the server stores a segment data lookup table, the segment data lookup table recording the first navigation video, the unoccluded region corresponding to the first navigation video, and the second navigation video corresponding to the unoccluded region.

5. The multimedia interaction system of claim 4, wherein the server is further configured to query the segment data lookup table to obtain the unoccluded region when the first real-time playback time falls within a first period of the first navigation video.

6. The multimedia interaction system of claim 5, wherein the server is further configured to:
when the second display is playing the second navigation video, obtain a second real-time playback time;
query the segment data lookup table to determine whether the second real-time playback time falls within a second period of the second navigation video; and
when the second real-time playback time falls within the second period, calculate a distance between a first position in the first navigation video and a second position in the second navigation video, wherein the first position corresponds to the first real-time playback time and the second position corresponds to the second real-time playback time.

7. The multimedia interaction system of claim 6, wherein the server is further configured to determine whether the distance is less than a visible length and, if the distance is determined to be less than the visible length, transmit the interaction data to the first display and the second display, wherein the visible length is a sum of a first field of view in which the first display displays the interaction data and a second field of view in which the second display displays the interaction data.

8. The multimedia interaction system of claim 7, wherein the server is further configured to store map data comprising a first route and a second route, wherein the first navigation video is captured by a camera along the first route, and the second navigation video is captured by the camera along the second route.

9. The multimedia interaction system of claim 8, wherein the map data further comprises an occlusion label between a portion of the first route and a portion of the second route.

10. The multimedia interaction system of claim 9, wherein the server is further configured to:
obtain the first period of the first navigation video outside the occlusion label on the first route, and obtain the second period of the second navigation video outside the occlusion label on the second route; and
store, in association in the segment data lookup table, an identifier of the first navigation video and the first period, the unoccluded region outside the occlusion label, and an identifier of the second navigation video and the second period.

11. A multimedia interaction method, comprising:
receiving a first real-time playback time at which a first display plays back a first navigation video;
obtaining, according to the first real-time playback time, an unoccluded region associated with the first navigation video;
obtaining a second navigation video corresponding to the unoccluded region; and
if a second display plays the second navigation video, transmitting interaction data to the first display and the second display.

12. The multimedia interaction method of claim 11, wherein the interaction data comprises a first object corresponding to the first display and a second object corresponding to the second display, the method further comprising transmitting the first object and the second object to the first display and the second display respectively, playing the first navigation video with the second object on the first display, and playing the second navigation video with the first object on the second display.

13. The multimedia interaction method of claim 12, further comprising: when the first object performs a first operation and the first operation is transmitted to the second display through a server, presenting the first operation of the first object on the second display; and when the second object performs a second operation and the second operation is transmitted to the first display through the server, presenting the second operation of the second object on the first display.

14. The multimedia interaction method of claim 11, wherein a segment data lookup table records the first navigation video, the unoccluded region corresponding to the first navigation video, and the second navigation video corresponding to the unoccluded region.

15. The multimedia interaction method of claim 14, further comprising querying the segment data lookup table to obtain the unoccluded region when the first real-time playback time falls within a first period of the first navigation video.

16. The multimedia interaction method of claim 15, further comprising:
when the second display is playing the second navigation video, obtaining a second real-time playback time;
querying the segment data lookup table to determine whether the second real-time playback time falls within a second period of the second navigation video; and
when the second real-time playback time falls within the second period, calculating a distance between a first position in the first navigation video and a second position in the second navigation video, wherein the first position corresponds to the first real-time playback time and the second position corresponds to the second real-time playback time.

17. The multimedia interaction method of claim 16, further comprising determining whether the distance is less than a visible length and, if the distance is determined to be less than the visible length, transmitting the interaction data to the first display and the second display, wherein the visible length is a sum of a first field of view in which the first display displays the interaction data and a second field of view in which the second display displays the interaction data.

18. The multimedia interaction method of claim 17, further comprising obtaining map data comprising a first route and a second route, wherein the first navigation video is captured by a camera along the first route, and the second navigation video is captured by the camera along the second route.

19. The multimedia interaction method of claim 18, wherein the map data further comprises an occlusion label between a portion of the first route and a portion of the second route.

20. The multimedia interaction method of claim 19, further comprising:
obtaining the first period of the first navigation video outside the occlusion label on the first route, and obtaining the second period of the second navigation video outside the occlusion label on the second route; and
storing, in association in the segment data lookup table, an identifier of the first navigation video and the first period, the unoccluded region outside the occlusion label, and an identifier of the second navigation video and the second period.
CN201811339373.6A (priority date 2018-11-01, filing date 2018-11-12): Multimedia interaction system and multimedia interaction method. Status: pending; published as CN111131900A.

Applications Claiming Priority (2)

- TW107138840A (priority date 2018-11-01, filing date 2018-11-01): Multimedia interacting system and multimedia interacting method
- TW107138840 (priority date 2018-11-01)

Publications (1)

- CN111131900A, published 2020-05-08

Family

- ID=69023779

Family Applications (1)

- CN201811339373.6A: Multimedia interaction system and multimedia interaction method (pending; published as CN111131900A)

Country Status (3)

- US: US10628106B1
- CN: CN111131900A
- TW: TWI674799B

Citations (9)

(* Cited by examiner, † Cited by third party)

- US20050192025A1 * (2002-04-22 / 2005-09-01), Kaplan Richard D.: Method and apparatus for an interactive tour-guide system
- US6968973B2 * (2003-05-31 / 2005-11-29), Microsoft Corporation: System and process for viewing and navigating through an interactive video tour
- CN101127122A * (2007-09-13 / 2008-02-20), Fudan University: A content-adaptive progressive occlusion analysis target tracking algorithm
- CN101246546A * (2008-03-13 / 2008-08-20), Fudan University: A variable masking template matching algorithm for video object tracking
- CN103502982A * (2011-03-16 / 2014-01-08), Nokia: Method and apparatus for displaying interactive preview information in a location-based user interface
- TW201407458A * (2012-05-23 / 2014-02-16), Microsoft Corp: Utilizing a ribbon to access an application user interface
- TW201607305A * (2014-06-30 / 2016-02-16), Apple Inc.: Intelligent automated assistant for TV user interaction
- CN105894998A * (2014-11-30 / 2016-08-24), 黄石木信息科技有限公司: Making method of three-dimensional virtual scene tour guidance system
- US10080061B1 * (2009-12-18 / 2018-09-18), Joseph F. Kirley: Distributing audio signals for an audio/video presentation

Family Cites Families (8)

(* Cited by examiner, † Cited by third party)

- US6388688B1 * (1999-04-06 / 2002-05-14), Vergics Corporation: Graph-based visual navigation through spatial environments
- US20080109841A1 * (2006-10-23 / 2008-05-08), Ashley Heather: Product information display and product linking
- TW200901710A * (2007-06-22 / 2009-01-01), Shiang Suo Co Ltd: Navigation service system
- ES2350514T3 * (2008-04-07 / 2011-01-24), NTT Docomo, Inc.: Message system with emotion recognition and message storage server for the same
- US20110119597A1 * (2009-05-09 / 2011-05-19), Vivu, Inc.: Method and apparatus for capability-based multimedia interactions
- KR20120070650A * (2010-12-22 / 2012-07-02), Samsung Electronics Co., Ltd.: Method for playing and providing a video based on cloud computing
- TWM450794U * (2012-08-06 / 2013-04-11), zheng-dao Lin: Integrated tourists service system
- TWI535278B * (2013-12-19 / 2016-05-21), Compal Electronics, Inc.: Method and system for playing video


Also Published As

- US10628106B1 (2020-04-21)
- TWI674799B (2019-10-11)
- US20200142657A1 (2020-05-07)
- TW202019187A (2020-05-16)


Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination
- WD01: Invention patent application deemed withdrawn after publication (application publication date: 2020-05-08)
