Disclosure of Invention
The disclosure provides a video ranking method and device in a search scene, an electronic device, and a storage medium, which are used to at least solve the problem in the related art that videos cannot be ranked and displayed. The technical solution of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a video ranking method in a search scene, including:
acquiring a target video set; wherein the target videos in the target video set include at least one first target video and at least one second target video;
acquiring a first ranking feature corresponding to the first target video;
weighting the first ranking features according to weight vectors in a preset feature weight set to obtain second ranking features corresponding to the second target video;
ranking the target videos in the target video set according to the first ranking feature and the second ranking feature to obtain a ranking result;
and displaying the target videos in the target video set according to the ranking result.
In an exemplary embodiment, each of the first target videos corresponds to one first video feature, and each of the second target videos corresponds to one second video feature;
the feature weight set is acquired in the following manner:
determining a weight vector of the second target video relative to the first target video according to the association relationship between the first video feature and the second video feature, so as to obtain the feature weight set.
In an exemplary embodiment, the determining a weight vector of the second target video relative to the first target video according to the association relationship between the first video feature and the second video feature to obtain the feature weight set includes:
determining the similarity between each second video feature and each first video feature to obtain a similarity set;
and normalizing the similarities in the similarity set and determining a weight vector of the second target video relative to the first target video to obtain the feature weight set.
In an exemplary embodiment, the normalizing the similarity in the similarity set to determine a weight vector of the second target video relative to the first target video to obtain the feature weight set includes:
normalizing the similarity in the similarity set to determine the generalization weight of the second target video relative to the first target video to obtain a generalization weight set;
and normalizing the generalization weights in the generalization weight set to determine a weight vector of the second target video relative to the first target video, so as to obtain the feature weight set.
In an exemplary embodiment, the manner of obtaining the first video feature or the second video feature includes:
inputting the first target video into a preset Embedding model to obtain the first video feature;
or, inputting the second target video into the Embedding model to obtain the second video feature.
In an exemplary embodiment, the obtaining manner of the target video set includes:
acquiring a target search term;
and acquiring videos related to the target search term to obtain the target video set.
In an exemplary embodiment, the ranking the target videos in the target video set according to the first ranking feature and the second ranking feature to obtain a ranking result includes:
inputting the first ranking feature and the second ranking feature into a preset video ranking model, ranking a first target video corresponding to the first ranking feature and a second target video corresponding to the second ranking feature, and outputting a ranking result.
According to a second aspect of the embodiments of the present disclosure, there is provided a video ranking device in a search scene, including:
a video set acquisition unit configured to acquire a target video set; wherein the target videos in the target video set include at least one first target video and at least one second target video;
a first ranking feature acquisition unit configured to acquire a first ranking feature corresponding to the first target video;
a second ranking feature determination unit configured to weight the first ranking features according to weight vectors in a preset feature weight set to obtain second ranking features corresponding to the second target video;
a video ranking unit configured to rank the target videos in the target video set according to the first ranking feature and the second ranking feature to obtain a ranking result;
and a video display unit configured to display the target videos in the target video set according to the ranking result.
In an exemplary embodiment, each of the first target videos corresponds to one first video feature, and each of the second target videos corresponds to one second video feature;
the second ranking feature determination unit is further configured to:
determine a weight vector of the second target video relative to the first target video according to the association relationship between the first video feature and the second video feature, so as to obtain the feature weight set.
In an exemplary embodiment, the second ranking feature determination unit is further configured to:
determine the similarity between each second video feature and each first video feature to obtain a similarity set;
and normalize the similarities in the similarity set and determine a weight vector of the second target video relative to the first target video to obtain the feature weight set.
In an exemplary embodiment, the second ranking feature determination unit is further configured to:
normalize the similarities in the similarity set to determine generalization weights of the second target video relative to the first target video, so as to obtain a generalization weight set;
and normalize the generalization weights in the generalization weight set to determine a weight vector of the second target video relative to the first target video, so as to obtain the feature weight set.
In an exemplary embodiment, the second ranking feature determination unit is further configured to:
input the first target video into a preset Embedding model to obtain the first video feature;
or, input the second target video into the Embedding model to obtain the second video feature.
In an exemplary embodiment, the video set acquisition unit is further configured to:
acquire a target search term;
and acquire videos related to the target search term to obtain the target video set.
In an exemplary embodiment, the video ranking unit is further configured to:
input the first ranking feature and the second ranking feature into a preset video ranking model, rank a first target video corresponding to the first ranking feature and a second target video corresponding to the second ranking feature, and output a ranking result.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video ranking method in a search scene according to any one of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium, where instructions, when executed by a processor of an electronic device, enable the electronic device to perform the video ranking method in a search scene according to any one of the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product, the program product including a computer program stored in a readable storage medium, from which at least one processor of a device reads and executes the computer program, so that the device performs the video ranking method in a search scene according to any one of the embodiments of the first aspect.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects:
The first ranking features corresponding to the first target videos are weighted according to the weight vectors in the preset feature weight set to obtain the second ranking feature corresponding to the second target video. Therefore, when the second target video has no corresponding ranking feature, or its ranking feature is incomplete or unavailable, the second ranking feature corresponding to the second target video can be obtained from the first ranking features corresponding to the first target videos. The target videos in the target video set are ranked according to the first ranking features and the second ranking feature to obtain a ranking result, and the target videos in the target video set are displayed according to the ranking result. Therefore, even when some videos in the target video set have no corresponding ranking feature, or their ranking features are incomplete or unavailable, the second ranking features corresponding to the second target videos can still be obtained from the first ranking features corresponding to the first target videos, and the target videos in the target video set can then be ranked and displayed according to the first ranking features and the second ranking features.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a video ranking method in a search scene according to an exemplary embodiment. The method is described here as being applied to an electronic device by way of example; it can be understood that the method may also be applied to a server, or to a system including a terminal and a server, in which case it is implemented through interaction between the terminal and the server. In this embodiment, the method includes the following steps:
in step S100, a target video set is acquired; wherein the target videos in the target video set include at least one first target video and at least one second target video.
In step S200, a first ranking feature corresponding to the first target video is acquired.
In step S300, the first ranking features are weighted according to the weight vector in the preset feature weight set, so as to obtain second ranking features corresponding to the second target video.
In step S400, the target videos in the target video set are ranked according to the first ranking feature and the second ranking feature, so as to obtain a ranking result.
In step S500, the target videos in the target video set are displayed according to the ranking result.
The target video set is a set formed by the videos that need to be ranked and displayed. The feature weight set is a set formed by the weight vectors of the second target video relative to the first target videos, and the first ranking features can be mapped to the second ranking feature corresponding to the second target video through the weight vectors in the feature weight set.
Specifically, a target video set formed by the videos to be ranked and displayed is acquired, and the target videos in the target video set include two types of videos, namely at least one first target video and at least one second target video. Then, the first ranking feature corresponding to each first target video is acquired, the first ranking features are weighted according to the weight vectors in the preset feature weight set to obtain a ranking feature reconstructed from the first ranking features and the weight vectors, and this reconstructed ranking feature is determined as the second ranking feature corresponding to the second target video; that is, the ranking features of the first target videos are used to represent the ranking feature of the second target video. Therefore, when the second target video has no corresponding ranking feature, or its ranking feature is incomplete or unavailable, the second ranking feature corresponding to the second target video can be obtained from the first ranking features corresponding to the first target videos. The target videos in the target video set are then ranked according to the first ranking features and the second ranking feature to obtain a ranking result, and the target videos in the target video set are displayed according to the ranking result.
Illustratively, suppose a target video set contains 10 target videos to be ranked and displayed, where 8 of them (first target videos) have a corresponding play count, and 2 of them (second target videos) have no play count or their play count data is unavailable. If the 10 target videos need to be ranked by play count (the ranking feature), they cannot be ranked and displayed directly, because some of them lack play count data. In this case, two weight vectors may be used to weight the play counts of the 8 target videos that have play counts, so as to obtain two reconstructed play count values, and these two reconstructed values are used to represent the play counts of the 2 target videos without play counts. All 10 target videos in the target video set then have corresponding play count data and can be ranked and displayed by play count, which solves the problem that the target videos cannot be ranked and displayed.
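As a minimal sketch of this example (all play counts and weights below are made-up placeholders, and the weight vectors are simply assumed to be already normalized; this is one possible reading of the scheme, not the disclosure's exact computation):

```python
# 8 first target videos have play counts; 2 second target videos do not.
# Each missing play count is reconstructed as a weighted sum of the known ones,
# using one (hypothetical) weight vector per second target video.
import numpy as np

known_play_counts = np.array([120, 450, 80, 900, 300, 60, 700, 250], dtype=float)

# One weight vector per second target video; each row sums to 1.
weight_vectors = np.array([
    [0.05, 0.20, 0.02, 0.30, 0.10, 0.03, 0.25, 0.05],
    [0.10, 0.05, 0.15, 0.05, 0.25, 0.20, 0.05, 0.15],
])

reconstructed = weight_vectors @ known_play_counts   # two reconstructed play counts
all_play_counts = np.concatenate([known_play_counts, reconstructed])

# All 10 target videos now have a play count and can be ranked together.
ranking = np.argsort(-all_play_counts)               # indices, highest play count first
print(reconstructed, ranking)
```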
According to the above video ranking method in a search scene, the first ranking features corresponding to the first target videos are weighted according to the weight vectors in the preset feature weight set, and the second ranking features corresponding to the second target videos are obtained. Therefore, when a second target video has no corresponding ranking feature, or its ranking feature is incomplete or unavailable, the second ranking feature corresponding to that second target video can be obtained from the first ranking features corresponding to the first target videos. The target videos in the target video set are ranked according to the first ranking features and the second ranking features to obtain a ranking result, and the target videos in the target video set are displayed according to the ranking result. In this way, even when some videos in the target video set have no corresponding ranking feature, or their ranking features are incomplete or unavailable, the target videos in the target video set can still be ranked and displayed according to the first ranking features and the second ranking features.
In an exemplary embodiment, for one way of obtaining the feature weight set:
determining a weight vector of the second target video relative to the first target video according to the association relationship between the first video feature and the second video feature, so as to obtain the feature weight set.
Specifically, the weight vector in the feature weight set is used to determine the second ranking feature from the first ranking features, so the weight vector embodies a mapping relationship from the first ranking features to the second ranking feature, and this weight vector is determined according to the association relationship between the first video feature and the second video feature.
Optionally, fig. 2 is a flowchart of a method of obtaining the feature weight set according to an exemplary embodiment, which specifically includes the following steps:
in step S310, a similarity between each second video feature and each first video feature is determined, so as to obtain a similarity set.
In step S320, the similarity in the similarity set is normalized, and a weight vector of the second target video relative to the first target video is determined, so as to obtain a feature weight set.
Specifically, according to the second video features and the first video features, the similarity between each second video feature and each first video feature is determined, the similarity in the similarity set is normalized, and the normalized similarity is determined as a weight vector of the second target video relative to the first target video to obtain a feature weight set.
Exemplarily, when the first video features and the second video features are represented by Embedding corresponding to the target video, the similarity between each second video feature and each first video feature is obtained as shown in formula (1):
similarity_i = cosine(embedding_i, embedding_p)    (1)
wherein similarity_i represents the similarity between the second video feature and the i-th first video feature, embedding_i is the Embedding representation of the i-th first video feature, and embedding_p is the Embedding representation of the second video feature.
Specifically, the target video set is denoted P, a second target video is denoted p, the set of second target videos is P', a first target video is denoted i, and the set of first target videos is {i ∈ (P - P')}. similarity_set is the similarity set formed by the similarities similarity_i.
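A minimal sketch of formula (1) (the embedding dimensionality and values below are placeholders; the only assumption is that each video already has a dense embedding vector):

```python
# Cosine similarity between the embedding of one second target video p and the
# embeddings of the first target videos i, per formula (1).
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

rng = np.random.default_rng(0)
embedding_p = rng.normal(size=64)          # second target video (missing ranking feature)
embeddings_i = rng.normal(size=(8, 64))    # first target videos (with ranking features)

similarity_set = np.array([cosine(e_i, embedding_p) for e_i in embeddings_i])
print(similarity_set)                      # one similarity per first target video
```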
The similarities in the similarity set are normalized, and the normalized similarities are determined as the weight vector of the second target video relative to the first target videos, so as to obtain the feature weight set.
In the above exemplary embodiment, the similarity between each second video feature and each first video feature is determined to obtain a similarity set, the similarities in the similarity set are normalized, and the weight vector of the second target video relative to the first target video is determined to obtain the feature weight set. In this way, a weight vector that establishes a mapping relationship between the first video features and the second video feature can be determined from the first video features and the second video feature, so that, once the first ranking features of the first target videos are obtained, the second ranking feature corresponding to the second target video can be obtained from the first ranking features, and the problems that the second target video has no corresponding ranking feature, or that its ranking feature is incomplete or unavailable, can be solved.
In an exemplary embodiment, as shown in fig. 3, which is a flowchart of an implementable manner of step S320 according to an exemplary embodiment, the manner specifically includes:
in step S321, the similarity in the similarity set is normalized, and a generalization weight of the second target video with respect to the first target video is determined, so as to obtain a generalization weight set.
In step S322, normalization processing is performed on the generalized weights in the generalized weight set, and a weight vector of the second target video relative to the first target video is determined, so as to obtain a feature weight set.
Specifically, the similarity in the similarity set is normalized to obtain the similarity after the normalization, and the similarity after the normalization can reflect the weight proportion of the second target video relative to the first target video.
Exemplarily, the similarities are normalized by min-max normalization to obtain the generalization weight weight_i of the second target video p relative to each first target video i, as shown in formula (2):
weight_i = (similarity_i - min(similarity_set)) / (max(similarity_set) - min(similarity_set))    (2)
Then, in order to ensure that the features of the second target video p and the first target videos i are of the same order of magnitude, the generalization weights need to be normalized again, and the renormalized generalization weights are determined as the weight vector of the second target video relative to the first target videos, so as to obtain the feature weight set. The weight vector is obtained as shown in formula (3):
softmax_i = exp(weight_i) / Σ_{j∈(P-P')} exp(weight_j)    (3)
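A minimal sketch of formulas (2) and (3) as reconstructed above (the softmax form of formula (3) is an interpretation suggested by the symbol softmax_i used later in formula (4); `similarity_set` is assumed to come from formula (1)):

```python
# Formula (2): min-max normalization of the similarities -> generalization weights.
# Formula (3): renormalization (softmax) of the generalization weights -> weight vector.
import numpy as np

similarity_set = np.array([0.91, 0.40, 0.75, 0.10, 0.62, 0.33, 0.88, 0.55])

weights = (similarity_set - similarity_set.min()) / (
    similarity_set.max() - similarity_set.min() + 1e-8)      # generalization weights in [0, 1]

softmax_weights = np.exp(weights) / np.exp(weights).sum()    # weight vector, sums to 1
print(softmax_weights)
```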
optionally, inputting the first target video into a preset Embedding model to obtain a first video characteristic; or, inputting the second target video into the Embedding model to obtain the second video characteristic.
Specifically, when the first video feature and the second video feature are the embed representations corresponding to the target video, a video Embedding system is needed to obtain the embed representations of the first target video and the second target video, and the Embedding system can be the intermediate feature of the visual model or the wide trained based on the behavior data&deep model. The unification can be expressed as that, for a video i, there is one embeddingiCorresponding to it.
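The disclosure leaves the Embedding system open; the following is a minimal sketch only, assuming a hypothetical per-frame encoder `frame_encoder` and simple mean pooling, neither of which is specified in the disclosure:

```python
# Hypothetical video Embedding system: encode frames with some feature extractor
# and mean-pool them into a single L2-normalized embedding per video.
import numpy as np

def video_embedding(frames, frame_encoder, dim=64):
    """Return one embedding vector for a video, given its decoded frames."""
    if not frames:                                          # no frames: fall back to zeros
        return np.zeros(dim, dtype=np.float32)
    feats = np.stack([frame_encoder(f) for f in frames])    # (num_frames, dim)
    emb = feats.mean(axis=0)                                # temporal mean pooling
    return emb / (np.linalg.norm(emb) + 1e-8)               # L2-normalize for cosine similarity

# Toy usage with a stand-in encoder (not a real visual model).
rng = np.random.default_rng(1)
fake_frames = [rng.random((32, 32, 3)) for _ in range(5)]
toy_encoder = lambda frame: np.resize(frame.mean(axis=(0, 1)), 64)
print(video_embedding(fake_frames, toy_encoder).shape)      # (64,)
```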
In the above exemplary embodiment, the similarities in the similarity set are normalized to determine the generalization weights of the second target video relative to the first target videos, so as to obtain a generalization weight set, and the generalization weights in the generalization weight set are normalized to determine the weight vector of the second target video relative to the first target videos, so as to obtain the feature weight set. In this way, a weight vector that establishes a mapping relationship between the first video features and the second video feature can be determined from the first video features and the second video feature, and renormalizing the generalization weights ensures that the first ranking features and the second ranking feature are of the same order of magnitude. Once the first ranking features of the first target videos are obtained, the second ranking feature corresponding to the second target video can be obtained from the first ranking features, and the problems that the second target video has no corresponding ranking feature, or that its ranking feature is incomplete or unavailable, can be solved.
In an exemplary embodiment, one way to obtain a target video set is:
acquiring a target search term; and acquiring videos related to the target search term to obtain a target video set.
Specifically, in a video search scene, videos related to a target search term can be recalled according to the target search term, and the videos are videos which need to be ranked and displayed to form a target video set.
The above exemplary embodiment provides one manner of obtaining the target video set, which defines the range of videos to be subsequently ranked and displayed. It should be understood that this is only one way of obtaining the target video set and is not intended to be limiting. Illustratively, the target video set may also be determined by a user ID, a video distribution region, a video distribution time, and the like.
In an exemplary embodiment, one possible implementation of step S400 includes:
the first ranking features corresponding to the first target videos are weighted according to the weight vectors, so as to obtain the second ranking feature corresponding to the second target video, as shown in formula (4):
feature_p = Σ_{i∈(P-P')} softmax_i · feature_i    (4)
wherein feature_p is the second ranking feature corresponding to the second target video, feature_i is the first ranking feature corresponding to the i-th first target video, and softmax_i represents the weight vector.
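A minimal sketch of formula (4), continuing the earlier placeholders (`ranking_features_i` stands in for the first ranking features, e.g. play counts; `softmax_weights` is the weight vector from formula (3)):

```python
# Formula (4): the second ranking feature is the weighted sum of the first
# ranking features of the first target videos, weighted by the weight vector.
import numpy as np

ranking_features_i = np.array([120.0, 450.0, 80.0, 900.0, 300.0, 60.0, 700.0, 250.0])
softmax_weights = np.array([0.16, 0.11, 0.14, 0.09, 0.12, 0.10, 0.15, 0.13])  # sums to 1

ranking_feature_p = float(softmax_weights @ ranking_features_i)   # feature_p
print(ranking_feature_p)
```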
After the second ranking feature corresponding to the second target video is obtained, the first ranking features and the second ranking feature are input into a preset video ranking model, the first target videos corresponding to the first ranking features and the second target video corresponding to the second ranking feature are ranked, and a ranking result is output.
The preset video ranking model may be, for example, a scoring model that scores the first ranking features and the second ranking feature and ranks the target videos according to the scores, so as to obtain the ranking result of the target videos.
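A minimal sketch of such a scoring model (a simple linear scorer is assumed here purely for illustration; the disclosure does not specify the form of the video ranking model):

```python
# Toy scoring model: score each target video from its ranking feature with a linear
# function, then rank all videos (first and second target videos together) by score.
import numpy as np

def rank_videos(video_ids, ranking_features, weight=1.0, bias=0.0):
    scores = weight * np.asarray(ranking_features, dtype=float) + bias
    order = np.argsort(-scores)                    # highest score first
    return [video_ids[i] for i in order], scores[order]

ids = ["v1", "v2", "v3", "v4"]
features = [450.0, 120.0, 311.7, 700.0]            # v3's feature is a reconstructed one
print(rank_videos(ids, features))                  # (['v4', 'v1', 'v3', 'v2'], ...)
```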
In the above exemplary embodiment, the first ranking features and the second ranking feature are input into the preset video ranking model, the first target videos corresponding to the first ranking features and the second target video corresponding to the second ranking feature are ranked, and a ranking result is output. The target videos in the target video set are then ranked and displayed according to the ranking result, which solves the problem that the target videos cannot be ranked and displayed.
It should be understood that although the steps in the flowcharts of figs. 1-3 are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in figs. 1-3 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 4 is a block diagram illustrating a video ranking apparatus in a search scene according to an example embodiment. Referring to fig. 4, the apparatus includes a video set acquisition unit 401, a first ranking feature acquisition unit 402, a second ranking feature determination unit 403, a video ranking unit 404, and a video display unit 405:
a video set acquisition unit 401 configured to acquire a target video set; wherein the target videos in the target video set include at least one first target video and at least one second target video;
a first ranking feature acquisition unit 402 configured to acquire a first ranking feature corresponding to a first target video;
a second ranking feature determination unit 403 configured to weight the first ranking features according to the weight vectors in the preset feature weight set, so as to obtain second ranking features corresponding to the second target video;
a video ranking unit 404 configured to rank the target videos in the target video set according to the first ranking feature and the second ranking feature, so as to obtain a ranking result;
and a video display unit 405 configured to display the target videos in the target video set according to the ranking result.
In an exemplary embodiment, each first target video corresponds to one first video feature, and each second target video corresponds to one second video feature; the second ranking feature determination unit 403 is further configured to: determine a weight vector of the second target video relative to the first target video according to the association relationship between the first video feature and the second video feature, so as to obtain a feature weight set.
In an exemplary embodiment, the second ranking feature determination unit 403 is further configured to: determine the similarity between each second video feature and each first video feature to obtain a similarity set; and normalize the similarities in the similarity set and determine a weight vector of the second target video relative to the first target video to obtain the feature weight set.
In an exemplary embodiment, the second ranking feature determination unit 403 is further configured to: normalize the similarities in the similarity set to determine generalization weights of the second target video relative to the first target video, so as to obtain a generalization weight set; and normalize the generalization weights in the generalization weight set to determine a weight vector of the second target video relative to the first target video, so as to obtain the feature weight set.
In an exemplary embodiment, the second ranking feature determination unit 403 is further configured to: input the first target video into a preset Embedding model to obtain a first video feature; or, input the second target video into the Embedding model to obtain a second video feature.
In an exemplary embodiment, the video set acquisition unit 401 is further configured to: acquire a target search term; and acquire videos related to the target search term to obtain a target video set.
In an exemplary embodiment, the video ranking unit 404 is further configured to: input the first ranking feature and the second ranking feature into a preset video ranking model, rank a first target video corresponding to the first ranking feature and a second target video corresponding to the second ranking feature, and output a ranking result.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 5 is a block diagram illustrating an electronic device 500 for video ranking in a search scene according to an example embodiment. For example, the device 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, an exercise device, a personal digital assistant, and so forth.
Referring to fig. 5, the device 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interaction between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation at the device 500. Examples of such data include instructions for any application or method operating on the device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or non-volatile storage devices, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 506 provides power to the various components of the device 500. The power component 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 500.
The multimedia component 508 includes a screen that provides an output interface between the device 500 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 500 is in an operating mode, such as a shooting mode or a video mode. Each of the front-facing camera and the rear-facing camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a microphone (MIC) configured to receive external audio signals when the device 500 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may further be stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 514 includes one or more sensors for providing various aspects of status assessment for the device 500. For example, the sensor assembly 514 may detect an open/closed state of the device 500 and the relative positioning of components, such as the display and keypad of the device 500; the sensor assembly 514 may also detect a change in the position of the device 500 or of a component of the device 500, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500, and a change in the temperature of the device 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the device 500 and other devices. The device 500 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a near field communication (NFC) module to facilitate short-range communications.
In an exemplary embodiment, the device 500 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components, for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 504 including instructions executable by the processor 520 of the device 500 to perform the above-described method, is also provided. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided. The program product includes a computer program stored in a readable storage medium, from which at least one processor 520 of the device reads and executes the computer program, causing the device to perform the above-described method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.