This Summary is provided to give the reader a basic understanding of the disclosure in simplified form. It is not an extensive overview of the disclosure, and it is intended neither to identify key or critical elements of the embodiments nor to delineate the scope of the embodiments.
According to an embodiment of the present disclosure, a multimedia interaction system includes a first display, a second display, and a server. The server is communicatively connected to the first display and the second display. The server is configured to receive a first real-time playing time of a first navigation movie played by the first display; obtain an unmasked area associated with the first navigation movie according to the first real-time playing time; obtain a second navigation movie corresponding to the unmasked area; and, if the second display is playing the second navigation movie, transmit interaction data to the first display and the second display.
According to an embodiment of the present disclosure, the multimedia interaction system includes a first object corresponding to the first display and a second object corresponding to the second display, wherein the server is configured to transmit the first object and the second object to the first display and the second display, respectively, wherein the first display plays the first navigation movie together with the second object, and the second display plays the second navigation movie together with the first object.
According to an embodiment of the present disclosure, when the first object performs a first operation and the first operation is transmitted to the second display through the server, the first operation of the first object is presented on the second display, and when the second object performs a second operation and the second operation is transmitted to the first display through the server, the second operation of the second object is presented on the first display.
According to an embodiment of the present disclosure, the server stores a paragraph data lookup table, and the paragraph data lookup table records the first navigation movie, the unmasked area corresponding to the first navigation movie, and the second navigation movie corresponding to the unmasked area.
According to an embodiment of the present disclosure, the server is further configured to query the paragraph data lookup table to obtain the unmasked area when the first real-time playing time falls within a first period of the first navigation movie.
According to an embodiment of the present disclosure, the server is further configured to obtain a second real-time playing time when the second display is playing the second navigation movie; query the paragraph data lookup table to determine whether the second real-time playing time falls within a second period of the second navigation movie; and calculate a distance between a first position in the first navigation movie and a second position in the second navigation movie when the second real-time playing time falls within the second period, wherein the first position corresponds to the first real-time playing time, and the second position corresponds to the second real-time playing time.
According to an embodiment of the present disclosure, the server is further configured to determine whether the distance is smaller than a visible length, and transmit the interaction data to the first display and the second display if the distance is smaller than the visible length, where the visible length is the sum of a first field of view of the interaction data displayed by the first display and a second field of view of the interaction data displayed by the second display.
According to an embodiment of the present disclosure, the server is further configured to store map data, the map data including a first route and a second route, wherein the first navigation movie is captured by a camera along the first route, and the second navigation movie is captured by the camera along the second route.
According to an embodiment of the present disclosure, the map data further includes a masking mark between a portion of the first route and a portion of the second route.
According to an embodiment of the present disclosure, the server is further configured to obtain the first period of the first navigation movie outside the masking mark of the first route and the second period of the second navigation movie outside the masking mark of the second route; and store an identifier of the first navigation movie together with the first period, the unmasked area outside the masking mark, and an identifier of the second navigation movie together with the second period in the paragraph data lookup table in an associated manner.
According to another embodiment of the present disclosure, a multimedia interaction method includes receiving a first real-time playing time of a first navigation movie played by a first display; obtaining an unmasked area associated with the first navigation movie according to the first real-time playing time; obtaining a second navigation movie corresponding to the unmasked area; and, if a second display is playing the second navigation movie, transmitting interaction data to the first display and the second display.
According to an embodiment of the present disclosure, the multimedia interaction method further includes transmitting a first object and a second object to the first display and the second display, respectively, playing the first navigation movie together with the second object through the first display, and playing the second navigation movie together with the first object through the second display.
The multimedia interaction method according to an embodiment of the present disclosure further includes presenting the first operation of the first object on the second display when the first object performs a first operation and the first operation is transmitted to the second display through a server, and presenting the second operation of the second object on the first display when the second object performs a second operation and the second operation is transmitted to the first display through the server.
According to an embodiment of the present disclosure, a paragraph data lookup table records the first navigation movie, the unmasked area corresponding to the first navigation movie, and the second navigation movie corresponding to the unmasked area.
According to an embodiment of the present disclosure, the method further includes querying the paragraph data lookup table to obtain the unmasked area when the first real-time playing time falls within a first period of the first navigation movie.
The multimedia interaction method according to an embodiment of the present disclosure further includes obtaining a second real-time playing time when the second display is playing the second navigation movie; querying the paragraph data lookup table to determine whether the second real-time playing time falls within a second period of the second navigation movie; and calculating a distance between a first position in the first navigation movie and a second position in the second navigation movie when the second real-time playing time falls within the second period, wherein the first position corresponds to the first real-time playing time, and the second position corresponds to the second real-time playing time.
The multimedia interaction method according to an embodiment of the present disclosure further includes determining whether the distance is smaller than a visible length, and transmitting the interaction data to the first display and the second display if the distance is smaller than the visible length, where the visible length is a sum of a first field of view of the interaction data displayed by the first display and a second field of view of the interaction data displayed by the second display.
The multimedia interaction method according to an embodiment of the present disclosure further includes obtaining map data, where the map data includes a first route and a second route, and the first navigation movie is captured by a camera along the first route, and the second navigation movie is captured by the camera along the second route.
According to an embodiment of the present disclosure, the map data further includes a masking mark between a portion of the first route and a portion of the second route.
The multimedia interaction method according to an embodiment of the present disclosure further includes obtaining the first period of the first navigation movie outside the masking mark of the first route and the second period of the second navigation movie outside the masking mark of the second route; and storing an identifier of the first navigation movie together with the first period, the unmasked area outside the masking mark, and an identifier of the second navigation movie together with the second period in the paragraph data lookup table in an associated manner.
Detailed Description
The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. Specific examples of components and arrangements are described below to simplify the present disclosure. Of course, these examples are merely illustrative and are not intended to be limiting. For example, forming a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features such that the first and second features may not be in direct contact. Additionally, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
Further, spatially relative terms, such as "under," "below," "lower," "above," "higher," and the like, may be used herein for ease of description to describe one element or feature's relationship to another element (or elements) or feature (or features) as illustrated in the figures. Spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
Referring to FIG. 1, a schematic diagram of a multimedia interaction system 100 according to some embodiments of the present disclosure is shown. As shown in FIG. 1, the multimedia interaction system 100 includes a server 110 and a plurality of displays 120a-120n. The displays 120a-120n may be virtual reality head-mounted devices, display screens, or the like, and are communicatively coupled to the server 110. In one embodiment, each of the displays 120a-120n is connected to a local host device (not shown), and the host device communicates with the server 110 via wired or wireless transmission, so as to download data from the server 110 or upload data to the server 110. The displays 120a-120n thus obtain data of the server 110 through their host devices.
The displays 120a-120n are disposed in the same or different geographic locations. For example, the displays 120a-120n may be disposed in different rooms of the same building, so that a plurality of users can operate the displays 120a-120n.
The server 110 stores a plurality of navigation movies, and the displays 120a-120n may play navigation movies downloaded from the server 110. A navigation movie may be a pre-recorded movie of an actual scene, for example, recorded along different routes in the British Museum. The creation of a navigation movie will be described later.
Referring to FIG. 2, map data 200 of the multimedia interaction system 100 of FIG. 1 is shown according to some embodiments of the present disclosure. As shown in FIG. 2, the map data 200 includes a plurality of routes 221, 223, 225, 227, a plurality of endpoints 211, 211a-211d, 213, 215, 217, 219, and masking marks 231, 233, 235. A navigation movie is recorded in advance along each of the routes 221, 223, 225, 227. For example, navigation movies recorded at the British Museum may include a movie taken along route 221 from the main entrance to the Great Court and then to Room 2, a movie taken along route 225 from the Great Court to the East stairs, a movie taken along route 227 from the Great Court to the West stairs, a movie taken along route 223 from the Great Court to the South stairs, and the like.
A user can select which movie to watch. For example, a user who wants to know how to go from the main entrance to the Great Court can choose to watch the corresponding navigation movie. It is worth mentioning that a user can select any one of the displays 120a-120n in the multimedia interaction system 100 of FIG. 1 to watch a movie. The multimedia interaction system 100 allows multiple displays to play the same or different navigation movies simultaneously, so that multiple users can watch movies online at the same time. For example, a first user 241 views a selected navigation movie through the display 120a, a second user 243 views a selected navigation movie through the display 120b, a third user views a selected navigation movie through the display 120n, and so on.
The map data 200 has masking marks 231, 233, and 235. The masking mark 231 indicates that there is an obstacle, a wall, or the like between route 221 and route 223. The masking mark 233 indicates that there is an obstacle, a wall, or the like between route 225 and route 227. The masking mark 235 indicates that there is an obstacle, a wall, or the like between route 227 and route 223.
For example, a user who selects the first navigation movie views the scenes along endpoint 211, endpoint 211a, endpoint 211b, endpoint 211c, and endpoint 211d in sequence on route 221. Endpoints 211, 211a, 211b, 211c, and 211d correspond to 0 min 0 s, 2 min 50 s, 7 min 50 s, 10 min 0 s, and 12 min 50 s of the first navigation movie, respectively.
In the second navigation movie, the user views the scenes along endpoint 213 and endpoint 213a, then endpoint 213b, to endpoint 215 on route 223. Endpoints 213, 213a, 213b, and 215 correspond to 0 min 0 s, 2 min 0 s, 6 min 30 s, and 9 min 30 s of the second navigation movie, respectively.
In the third navigation movie, the user views the scenes along endpoint 225a and endpoint 225b to endpoint 219 on route 225. Endpoints 225a, 225b, and 219 correspond to 0 min 0 s, 3 min 30 s, and 8 min 40 s of the third navigation movie, respectively.
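Although the disclosure does not prescribe any particular data layout, the following minimal sketch (in Python; all names and values are illustrative) shows one way the routes, timestamped endpoints, and masking marks of the map data 200 might be represented:

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str          # e.g., "211a"
    movie_time_s: int  # playback time at this endpoint, in seconds

@dataclass
class MaskingMark:
    route_a: str  # e.g., "221"
    route_b: str  # e.g., "223"

# Route 221 of FIG. 2, using the endpoint times listed above.
route_221 = [
    Endpoint("211", 0),      # 0 min 0 s
    Endpoint("211a", 170),   # 2 min 50 s
    Endpoint("211b", 470),   # 7 min 50 s
    Endpoint("211c", 600),   # 10 min 0 s
    Endpoint("211d", 770),   # 12 min 50 s
]

# Masking marks: obstacles or walls between portions of two routes.
masking_marks = [
    MaskingMark("221", "223"),  # masking mark 231
    MaskingMark("225", "227"),  # masking mark 233
    MaskingMark("227", "223"),  # masking mark 235
]
```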
In one embodiment, when the first user 241 is watching the first navigation movie (route 221) and, at the same time, the second user 243 is watching the second navigation movie (route 223), the first user 241 may, under certain conditions, see the virtual figure (avatar) of the second user 243 and the figure's motion, and/or hear the user's voice, in the first navigation movie, and vice versa. That is, when there is no mask between route 221 and route 223, the first user 241 and the second user 243 can see each other's virtual figures in the navigation movies. For example, if the first user 241 watches the first navigation movie and the second user 243 watches the second navigation movie, and the first navigation movie is played to 8 min 32 s (between endpoint 211b and endpoint 211c) while the second navigation movie is played to 0 min 50 s (between endpoint 213 and endpoint 213a), there is no mask between route 221 and route 223 (such as the region R2 shown in FIG. 2), so the first user 241 and the second user 243 can see each other's virtual figures.
In one embodiment, if the first user 241 watches the first navigation movie and the second user 243 watches the second navigation movie, and the first navigation movie is played to 5 min 15 s (between endpoint 211a and endpoint 211b) while the second navigation movie is played to 5 min 30 s (between endpoint 213a and endpoint 213b), the first user 241 and the second user 243 cannot see each other's virtual figures or figure motions and/or hear each other's voices, because there is a mask (such as the masking mark 231 shown in FIG. 2) between route 221 and route 223.
Referring to FIG. 3, a flow chart of a multimedia interaction method according to some embodiments of the present disclosure is shown. The multimedia interaction method of the present disclosure can be applied to the multimedia interaction system 100 of FIG. 1. The following description takes the multimedia interaction system 100 of FIG. 1 and the map data 200 of FIG. 2 as examples, but the disclosure is not limited thereto. In step S310, the first user 241 views the first navigation movie using the display 120a. During viewing, the server 110 obtains a first real-time playing time (e.g., 8 min 32 s) of the display 120a, i.e., the movie time of the first navigation movie that the display 120a is currently displaying. In step S320, the server 110 queries the paragraph data lookup table, shown in Table 1 below, for the unmasked area. The server 110 uses the first real-time playing time to look up which other scene positions are visible from the scene position in the movie at that time. For example, the first navigation movie (Video 1) corresponds to unmasked areas R1, R2, R3, and R4. When the first real-time playing time is 8 min 32 s, it falls within the first period starting at 7 min 50 s and ending at 10 min 0 s, and this first period corresponds to the unmasked area R2.
Table 1: Paragraph data lookup table (excerpt)

  Unmasked area | First movie identifier | First period  | Second movie identifier | Second period
  R1            | Video 1                | 0:00 - 2:50   | Video 4                 | 6:30 - 9:30
  R2            | Video 1                | 7:50 - 10:00  | Video 4                 | 0:00 - 2:00
In step S330, the server 110 obtains, from the paragraph data lookup table, the second navigation movie corresponding to the obtained unmasked area. For example, having obtained the unmasked area R2, the server 110 can also obtain the movie corresponding to the unmasked area R2, namely the second navigation movie (Video 4).
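A minimal sketch of the lookup in steps S320 and S330 follows (Python; the rows mirror the Table 1 excerpt above, and the function and type names are illustrative rather than prescribed by the disclosure):

```python
from typing import NamedTuple, Optional, Tuple

class Row(NamedTuple):
    area: str                    # unmasked area, e.g., "R2"
    movie_a: str                 # first movie identifier
    period_a: Tuple[int, int]    # first period (start_s, end_s)
    movie_b: str                 # second movie identifier
    period_b: Tuple[int, int]    # second period (start_s, end_s)

# Excerpt of Table 1 (times in seconds).
PARAGRAPH_TABLE = [
    Row("R1", "Video 1", (0, 170), "Video 4", (390, 570)),
    Row("R2", "Video 1", (470, 600), "Video 4", (0, 120)),
]

def lookup_unmasked_area(movie: str, t: int) -> Optional[Row]:
    """Steps S320/S330: find the row whose first period contains the
    first real-time playing time t of the movie being watched."""
    for row in PARAGRAPH_TABLE:
        if row.movie_a == movie and row.period_a[0] <= t <= row.period_a[1]:
            return row
    return None

row = lookup_unmasked_area("Video 1", 8 * 60 + 32)  # 8 min 32 s
# row.area == "R2"; the corresponding second movie is row.movie_b == "Video 4"
```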
In step S340, in one embodiment, the second user 243 uses the display 120b to watch the second navigation movie (e.g., Video 4). During viewing, the server 110 obtains a second real-time playing time (e.g., 2 min 50 s) of the display 120b, i.e., the movie time of the second navigation movie that the display 120b is currently displaying.
In step S350, the server 110 determines whether the second real-time playing time falls within the second period. If it does not, in step S352, the server 110 determines that there is currently no user with whom the first user 241 can interact.
If the second real-time playing time falls within the second period, in step S360, the server 110 calculates a distance between a first position corresponding to the first real-time playing time and a second position corresponding to the second real-time playing time. For example, the movie times of the first navigation movie at endpoints 211, 211a, 211b, and 211c are 0 min 0 s, 2 min 50 s, 7 min 50 s, and 10 min 0 s, respectively, so the movie length from endpoint 211 to endpoint 211c is 10 min 0 s. When the first real-time playing time is 8 min 32 s, the remaining time from that moment to 10 min 0 s is 1 min 28 s (i.e., 10:00 - 8:32). Given the virtual length L (or the scene distance measured in advance) between endpoint 211 and endpoint 211c, the distance from the virtual position at the first real-time playing time to the virtual position of endpoint 211c can be calculated proportionally, i.e., as L x (1 min 28 s / 10 min 0 s).
In one embodiment, the navigation movies are captured while the camera moves forward at a fixed speed. The length L of a route can therefore be calculated from the camera's moving speed and the movie time. In another embodiment, an inertial measurement unit (IMU) is disposed on the camera, and the length L of the route can be obtained from the IMU's measurement data while the camera moves and records.
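Under the fixed-moving-speed embodiment above, the step S360 calculation can be sketched as follows (Python; the length value is illustrative):

```python
def position_on_segment(t: int, t_start: int, t_end: int, length_l: float) -> float:
    """Map a real-time playing time t (seconds) to a virtual position,
    i.e., the distance from the segment's start endpoint, assuming the
    camera moved at a fixed speed so that position is proportional to time."""
    return length_l * (t - t_start) / (t_end - t_start)

# Endpoint 211 to endpoint 211c: movie times 0:00 to 10:00, virtual length L.
L = 120.0  # illustrative length (measured in advance, or speed x movie time)
p1 = position_on_segment(8 * 60 + 32, 0, 600, L)
remaining = L - p1  # == L * 88 / 600, i.e., L x (1 min 28 s / 10 min 0 s)
```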
In step S370, the server 110 determines whether the calculated distance is smaller than the visible length. The visible length is the sum of a first field of view of the interaction data displayed by the display 120a and a second field of view of the interaction data displayed by the display 120b.
Referring to FIG. 4, a schematic diagram of the interaction range between users of the multimedia interaction system 100 of FIG. 1 is shown according to some embodiments of the present disclosure. When the first navigation movie is played to endpoint A, the display 120a can display the interaction data within a first field of view r1. When the second navigation movie is played to endpoint B, the display 120b can display the interaction data within a second field of view r2. The lengths of the first field of view r1 and the second field of view r2 can be set in advance. The server 110 calculates the distance between endpoint A and endpoint B, denoted AB. The visible length is the sum of the first field of view r1 and the second field of view r2.
If the distance AB is less than or equal to the visible length r1 + r2, the server 110 transmits the interaction data to the display 120a and the display 120b in step S380. The interaction data may be augmented reality virtual figure objects displayed on the displays 120a and 120b. For example, the first user 241 may see the virtual figure and figure motion of the second user 243, and/or hear the voice of the second user 243, in the first navigation movie, and vice versa. The operation or motion information of the first user 241 can be obtained through a sensor (not shown), a microphone (not shown), or other means, so that the second user 243 can see the motion of the virtual figure of the first user 241 (for example, waving a hand) through the display 120b, or hear the voice of the first user 241 through a speaker (not shown). Similarly, the operation or motion information of the second user 243 is transmitted to the display 120a through the server 110. Thus, the first user 241 and the second user 243 can interact through the displays 120a and 120b.
If the distance AB is greater than the visible length r1 + r2, the process returns to step S352, and the server 110 determines that there is currently no user with whom the first user 241 can interact.
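A sketch of the decision in steps S370, S380, and S352 (Python; the `send` callback stands in for whatever transport the server 110 uses, which the disclosure does not specify):

```python
from typing import Callable, Sequence

def can_interact(distance_ab: float, r1: float, r2: float) -> bool:
    """Step S370: the users can interact only when the virtual distance
    between their positions does not exceed the visible length r1 + r2."""
    return distance_ab <= r1 + r2

def decide_interaction(distance_ab: float, r1: float, r2: float,
                       send: Callable[[Sequence[str]], None]) -> None:
    if can_interact(distance_ab, r1, r2):
        # Step S380: transmit the interaction data (avatar, motion, voice)
        # to both displays.
        send(["display_120a", "display_120b"])
    # Otherwise fall through to step S352: no interactable user for now.

# Usage with a stand-in transport:
decide_interaction(distance_ab=3.0, r1=2.0, r2=2.0,
                   send=lambda targets: print("send to", targets))
```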
In some embodiments, the paragraph data lookup table may be queried by the displays 120a-120n, or by a host device connected to the displays 120a-120n, to perform the multimedia interaction method.
Referring now to FIG. 5, a flow chart of steps for creating the paragraph data lookup table according to some embodiments of the present disclosure is shown. As shown in FIG. 5, in step S510, the camera takes a first navigation movie along a first route (e.g., route 221 of FIG. 2) and a second navigation movie along a second route (e.g., route 223 of FIG. 2). The camera may be a spherical (ball) camera, and a navigation movie may be a panoramic video with a 360-degree surround field of view. In step S520, the recorded navigation movies are transmitted to a computing device (not shown in FIG. 1). The computing device may be an electronic device with image processing capability. In one embodiment, the computing device executes an image matching technique on the start and end frames of each navigation movie to calculate the spatial relationships between the navigation movies. For example, referring to Table 1, the image at the end of movie Video 1 is similar to the horizontal-angle images at the beginning of movies Video 2, Video 3, and Video 4 (e.g., the image captured by the ball camera at 180 degrees). The computing device therefore records that there is a correlation between the end point of movie Video 1 and the start points of movies Video 2, Video 3, and Video 4. In this way, the map data 200 of FIG. 2 can be established from the spatial correlations between movies and the length of each route.
Next, in step S530, the map data is transmitted to the computing device. In step S540, the computing device creates the masking marks of the map data. In one embodiment, a map maintainer manually marks the masked areas; alternatively, the computing device executes an image comparison algorithm to calculate the similarity between images, and when the similarity is lower than a threshold, it determines that there is a mask between the paths at a specific position. The image similarity algorithm may be, but is not limited to, feature point matching, a color histogram algorithm, mutual information, a machine learning algorithm, or the like. In one embodiment, images of a navigation movie at different viewing angles are compared with images of other navigation movies at other viewing angles. For example, when the similarity parameter between the image of the first navigation movie at a viewing angle of 0 degrees and the image of the second navigation movie at a viewing angle of 180 degrees is greater than the threshold, it is determined that a correspondence exists between the image viewing angles of the two navigation movies. Likewise, when the similarity parameter between the image of the first navigation movie at a viewing angle of 180 degrees and the image of the second navigation movie at a viewing angle of 220 degrees is greater than the threshold, a correspondence is determined to exist. Accordingly, when a correspondence exists between the image viewing angles of the navigation movies, it can be determined that there is no mask between the corresponding positions of the navigation movies.
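A sketch of the step S540 viewing-angle comparison (Python; the histogram-based `similarity` function is just one of the algorithms named above, and the threshold is illustrative):

```python
import numpy as np

def similarity(frame_a: np.ndarray, frame_b: np.ndarray, bins: int = 32) -> float:
    """Illustrative color-histogram similarity in [0, 1]; any of the other
    algorithms named above could be substituted here."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 255), density=True)
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 255), density=True)
    return float(np.minimum(ha, hb).sum() / np.maximum(ha, hb).sum())

def angles_correspond(frame_movie1_0deg: np.ndarray,
                      frame_movie2_180deg: np.ndarray,
                      threshold: float = 0.8) -> bool:
    """Step S540: if the similarity between the two viewing-angle images
    exceeds the threshold, a correspondence exists (no mask between the
    positions); below the threshold, a masking mark is created."""
    return similarity(frame_movie1_0deg, frame_movie2_180deg) > threshold
```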
In step S550, the computing device excludes relationships between navigation movies according to the masking marks, and obtains the relationship between the first navigation movie in the first period and the second navigation movie in the second period. For example, when the relationship between movie Video 1 and movie Video 4 is established, and movie Video 1 has a masking mark between endpoint 211a and endpoint 211b of route 221 while movie Video 4 has a masking mark between endpoint 213a and endpoint 213b of route 223, the computing device records that a correspondence exists between movie Video 1 from endpoint 211 to endpoint 211a and movie Video 4 from endpoint 213b to endpoint 215 (i.e., the unmasked area R1). Similarly, the computing device records that a correspondence exists between movie Video 1 from endpoint 211b to endpoint 211c and movie Video 4 from endpoint 213 to endpoint 213a (i.e., the unmasked area R2).
In step S560, the identifier of the first navigation movie, the first period, the unmasked area, the identifier of the second navigation movie, and the second period are recorded in the paragraph data lookup table. For example, the first row of Table 1 records the unmasked area R1, the identifier of movie Video 1 with its first period, and the identifier of movie Video 4 with its second period.
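A sketch of steps S550 and S560 (Python, reusing the hypothetical `Row` type from the earlier lookup sketch; the segment boundaries follow the example above):

```python
# Unmasked segments of each route after excluding the masking marks
# (movie identifier, period start/end in seconds), per the example above.
video1_segments = [("Video 1", 0, 170), ("Video 1", 470, 600)]  # 211-211a, 211b-211c
video4_segments = [("Video 4", 390, 570), ("Video 4", 0, 120)]  # 213b-215, 213-213a

# Step S560: store each correspondence as one row of the lookup table.
table = []
for i, ((m1, s1, e1), (m2, s2, e2)) in enumerate(
        zip(video1_segments, video4_segments), start=1):
    table.append(Row(f"R{i}", m1, (s1, e1), m2, (s2, e2)))
# table[0] is the R1 row and table[1] is the R2 row, matching Table 1.
```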
In summary, the multimedia interaction system 100 and the multimedia interaction method of the present disclosure can provide users with navigation movies and let them interact with other users who are also watching navigation movies. A user watches a navigation movie to learn about the real scene in it. To give the user a stronger sense of presence, the present disclosure combines image recognition and augmented reality so that, while watching a navigation movie, the user can see users located at other scene positions and interact with them, for example, by waving to ask where they are or by exchanging impressions. The user can thus understand the scene in the movie more quickly, and the realism and enjoyment of watching the navigation movie are increased. In addition, the present disclosure can quickly determine whether users are in a virtual space in which they can communicate interactively. By comparison against the paragraph data lookup table, irrelevant information can be filtered out, and the virtual distance between users can be calculated from their virtual positions; if the users have each set a virtual interaction range, interactive communication is provided to users whose ranges intersect. This reduces the computation the server must perform and the bandwidth consumed in transmitting data.
The foregoing outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure.