Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention.
It should be further noted that, for the convenience of description, only some but not all of the relevant aspects of the present invention are shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1 is a flowchart of an information pushing method according to an embodiment of the present invention. The present embodiment is applicable to pushing personalized push information to a user. The method may be executed by an information pushing apparatus, which may be implemented in software and/or hardware and is typically integrated in a wearable device (such as smart glasses or a smart helmet). As shown in Fig. 1, the method includes the following operations:
S110, collecting user behavior data, generating a corresponding interest tag, and uploading the interest tag to a server; the interest tag is used for instructing the server to update the interest preference value corresponding to the user.
The user behavior data may be various types of data involved when the user uses the wearable device, such as web browsing data, application data, data on items in a virtual scene (or an augmented reality scene), data input by the user through text or voice, data on user interest determined through eye tracking, and the like. The web browsing data may be commodity shopping information, text data (news, novels), movie data, and the like from web pages, and the application data may come from applications such as Word, QQ, or video player software. The interest tag may be a tag generated according to the type of the user behavior data and may reflect the interest preference of the user. For example, "women's clothing" may be used as an interest tag when a user browses various women's clothing items in a web page. Further, the interest tag may also reflect the interest preference value of the user, where the interest preference value identifies the points of interest the user favors and those the user avoids. For example, if, while browsing women's clothing, the user focuses on commuter-style items and skips skirt-style items, a tag such as "women's clothing: commuter - yes; skirt - no" may be used as the interest tag. It should be noted that the interest tag only needs to be capable of identifying the interest and focus of the user reflected by the user behavior data, and the embodiment of the present invention does not limit the form of the interest tag.
In the embodiment of the invention, the user may wear a smart head-mounted device to browse various kinds of information. The smart head-mounted device includes, but is not limited to, Virtual Reality (VR) glasses, Augmented Reality (AR) glasses, and other VR head-mounted displays. Different users have different interests and concerns, so users have different points of emphasis when browsing information through the smart head-mounted device. In order to push information that meets the attention requirements of different users, behavior data of a user may be collected while the user browses information and uploaded to the server. The server automatically generates an interest tag matched with the user according to the received behavior data, thereby intelligently identifying the behavior data of the user, and accordingly generates and stores the interest preference value corresponding to the user.
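For illustration only, the following sketch shows how the flow of S110 might look on the device side: raw behavior data is mapped to an interest tag and uploaded so that the server can update the user's interest preference value. The field names, the classify_behavior helper, and the server endpoint are assumptions introduced for this example and are not prescribed by the embodiment.

```python
# Sketch of S110: collect behavior data, derive an interest tag, and upload it
# so the server can update the user's interest preference value.
# The endpoint URL, payload fields, and classify_behavior() are hypothetical.
import json
import urllib.request
from dataclasses import dataclass, asdict


@dataclass
class InterestTag:
    user_id: str
    category: str            # e.g. "women's clothing"
    preference: dict         # e.g. {"commuter": True, "skirt": False}


def classify_behavior(behavior_data: dict) -> InterestTag:
    """Map raw browsing/input data to an interest tag (placeholder logic)."""
    return InterestTag(
        user_id=behavior_data["user_id"],
        category=behavior_data.get("category", "unknown"),
        preference=behavior_data.get("preference", {}),
    )


def upload_interest_tag(tag: InterestTag, server_url: str) -> None:
    """POST the tag; the server updates the user's interest preference value."""
    payload = json.dumps(asdict(tag)).encode("utf-8")
    request = urllib.request.Request(
        server_url, data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    urllib.request.urlopen(request)
```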
S120, sending an information acquisition request to the server; the information acquisition request is used for instructing the server to acquire information corresponding to the interest tag of the user and push it.
The information acquisition request is a request sent by the smart head-mounted device to the server to acquire push information for the current user. Optionally, the information acquisition request may be sent to the server when no additional application program has been started after the smart head-mounted device is powered on, when it is detected that no other information window exists on the display screen of the smart head-mounted device, when a leisure time specified by the user arrives, or in real time while the user behavior data is being collected, which is not limited in the embodiment of the present invention.
In the embodiment of the present invention, when the user wears the smart head-mounted device, the device may send an information acquisition request to the server. After receiving the information acquisition request, the server queries the stored interest preference value of the current user, searches for corresponding push information, and feeds the found push information back to the smart head-mounted device.
Correspondingly, after the smart head-mounted device receives the push information sent by the server, the push information may be displayed on the display interface of the display lens. The layout of the push information and its position in the display lens may be designed as needed in view of the other windows shown in the display lens, which is not limited in the embodiment of the present invention.
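Along the same lines, a minimal sketch of S120 on the device side is given below, assuming a hypothetical HTTP endpoint and a placeholder rendering call for the display lens:

```python
# Sketch of S120: request push information matching the user's interest
# preference value and show it in the display interface.
# The endpoint and render_in_display_interface() are hypothetical placeholders.
import json
import urllib.request


def render_in_display_interface(item: dict) -> None:
    """Placeholder for drawing one push item on the headset's display lens."""
    print("push:", item.get("title", item))


def request_push_information(user_id: str, server_url: str) -> list:
    """Send the information acquisition request and return the push items."""
    with urllib.request.urlopen(f"{server_url}?user_id={user_id}") as response:
        return json.loads(response.read().decode("utf-8"))


def show_push_information(items: list) -> None:
    # Position and window layout would be chosen in view of the other windows
    # currently shown in the display lens, as described above.
    for item in items:
        render_in_display_interface(item)
```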
In an optional embodiment of the present invention, after sending the information acquisition request to the server, the method may further include: obtaining operation feedback executed by the user on the push information from the server, and providing the operation feedback to the server, where the operation feedback is used for instructing the server to update the interest preference value corresponding to the user accordingly.
The operation feedback may be a marking operation performed by the user on the currently displayed push information, and the server may update the interest tag or the interest preference value of the user according to the operation feedback. For example, the operation feedback may be an operation performed by the user on the push information, such as following, unfollowing, bookmarking, removing a bookmark, marking as liked, or marking as disliked.
In the embodiment of the invention, in order to push information that meets the user's requirements more accurately, the smart head-mounted device may interact with the user in real time to acquire the personalized requirements of the user. Specifically, the user reads the push information in the display lens interface; when the user wants to keep following the currently displayed push information, or no longer wants to follow the information type it matches, the user may mark the currently displayed push information as needed. The marking is fed back to the server, so that the interest tags stored by the server and the interest preference value corresponding to the user more accurately reflect the information requirements of the user.
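A minimal server-side sketch of how such operation feedback could adjust the stored interest preference value is given below; the feedback names and the +1/-1 adjustments are illustrative assumptions rather than values fixed by the embodiment.

```python
# Sketch: updating a user's interest preference value from operation feedback.
# The feedback names and the +1/-1 adjustments are illustrative assumptions.
FEEDBACK_DELTA = {
    "follow": +1, "bookmark": +1, "like": +1,
    "unfollow": -1, "remove_bookmark": -1, "dislike": -1,
}


def apply_feedback(preferences: dict, tag: str, feedback: str) -> dict:
    """Raise or lower the preference value stored for one interest tag."""
    preferences[tag] = preferences.get(tag, 0) + FEEDBACK_DELTA.get(feedback, 0)
    return preferences


# Example: apply_feedback({"women's clothing": 2}, "women's clothing", "dislike")
# -> {"women's clothing": 1}
```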
In the embodiment of the invention, user behavior data is collected, a corresponding interest tag is generated and uploaded to the server, an information acquisition request corresponding to the user is sent to the server, and the push information corresponding to the interest tag of the user returned by the server is received and displayed. This solves the problem that push information in existing smart head-mounted devices lacks personalization, and pushes personalized information to the smart head-mounted device according to the browsing habits and points of interest of different users, thereby meeting the personalized requirements of different users for push information.
Example two
Fig. 2a is a flowchart of an information pushing method provided by the second embodiment of the present invention. Fig. 2b and Fig. 2c are implementation flowcharts of the operation of acquiring a gazing area of a user in a display interface provided by the second embodiment, and Fig. 2d and Fig. 2e are implementation flowcharts of the operation of acquiring a target display area matched with the gazing area provided by the second embodiment. In this embodiment, the user behavior data is specifically user browsing data and/or user input data, and a specific implementation of collecting the user behavior data and generating a corresponding interest tag for upload to the server is provided. Accordingly, as shown in Fig. 2a, the method of the present embodiment may include:
S210, acquiring a gazing area of the user in a display interface.
The gazing area refers to the region of the display interface at which the user's eyes are directed, and may be a rectangle, a circle, a square, or any other shape, which is not limited in the embodiment of the present invention. It should be noted that the gazing area is smaller than the display interface and is covered by it.
In the embodiment of the present invention, in order to automatically generate a corresponding interest tag from collected user behavior data, the user behavior data needs to be collected first. Optionally, the user behavior data may include user browsing data and/or user input data. The user browsing data may be historical browsing data generated while the user uses the smart head-mounted device and reflects the user's historical concerns; the user input data may be data input by the user in real time while using the smart head-mounted device and reflects the user's current focus. If the behavior data is user input data, retrieval can be performed directly according to that input, and the retrieved information is fed back to the smart head-mounted device. If the behavior data is user browsing data, the gazing area of the user in the display interface may be acquired first.
In an optional embodiment of the present invention, acquiring the gaze area of the user in the display interface may include: acquiring an eye image of the user; extracting eye feature information according to the eye image; determining the gaze region based on the eye feature information.
In the embodiment of the invention, when the gazing area of the user in the display interface is acquired, an eye image of the user may be obtained, eye feature information may be extracted from the eye image, and the gazing area of the user in the display interface may be determined based on the extracted eye feature information.
Specifically, acquiring the gazing area of the user in the display interface may include: acquiring the gazing area of the user in the display interface by a pupil and corneal reflection spot center positioning method or by a pupil center positioning method.
Correspondingly, when the pupil and corneal reflection spot center positioning method is adopted to acquire the gazing area of the user in the display interface, as shown in fig. 2b, S210 may specifically include:
S211a, acquiring, through an image acquisition device, at least two eye images with light spots produced when a light source illuminates the eyeballs of the user.
S212a, determining the user's gazing area on the display interface according to the at least two eye images with the light spots.
The pupil and corneal reflection spot center positioning method, i.e., the pupil-corneal reflection method, is used to determine the gazing information of the user. Its working principle can be summarized simply as: acquire an eye image, then estimate the gaze information from the eye image. The hardware requirements of the pupil-corneal reflection method have two aspects: (1) a light source: one or more infrared light sources, since infrared light does not affect the vision of the eye; if a plurality of light sources is used, they may be arranged in a predetermined pattern, such as a triangle or a straight line. (2) An image acquisition device: for example an infrared camera device, an infrared image sensor, a camera, or a video camera. The pupil-corneal reflection method can be divided into two main steps: (1) acquiring an eye image: the light source illuminates the eye, the image acquisition device photographs the eye and thereby captures the reflection point of the light source on the cornea, i.e., the light spot (also called the Purkinje spot), yielding an eye image with a light spot. (2) Gaze information (i.e., line of sight/gaze point) estimation: as the eyeball rotates, the relative position of the pupil center and the light spot changes, and the several captured eye images with light spots reflect this positional change; the line of sight or gaze point is estimated from this positional change. After the gaze point of the user is obtained, the gazing area of the user on the display interface can be formed from the gaze point, for example by centering a rectangle, circle, or other shape of a set size on the gaze point.
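Purely as an illustration of the estimation step described above, the sketch below maps the glint-to-pupil vector to display coordinates with a simple calibration-derived linear mapping and places a fixed-size rectangle around the resulting gaze point; the calibration form and the rectangle size are simplifying assumptions, and practical pupil-corneal reflection systems use more elaborate models.

```python
# Simplified illustration of pupil-corneal reflection (PCCR) gaze estimation:
# the glint-to-pupil vector is mapped to display coordinates by a linear,
# calibration-derived mapping, and a fixed-size gazing area is built around
# the resulting gaze point. The coefficients and sizes are made-up assumptions.
from typing import Tuple


def estimate_gaze_point(pupil_center: Tuple[float, float],
                        glint_center: Tuple[float, float],
                        calib: Tuple[float, float, float, float]
                        ) -> Tuple[float, float]:
    """Map the pupil-minus-glint vector (dx, dy) to display coordinates."""
    ax, bx, ay, by = calib                  # obtained from a calibration step
    dx = pupil_center[0] - glint_center[0]
    dy = pupil_center[1] - glint_center[1]
    return ax * dx + bx, ay * dy + by


def gazing_area(point: Tuple[float, float],
                width: float = 200.0, height: float = 120.0
                ) -> Tuple[float, float, float, float]:
    """Return a rectangle (x, y, w, h) centered on the estimated gaze point."""
    return point[0] - width / 2, point[1] - height / 2, width, height
```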
Correspondingly, when the pupil center positioning method is used to acquire the gazing area of the user in the display interface, as shown in fig. 2c, S210 may specifically include:
S211b, acquiring, in advance, a reference eye image captured when the camera faces the eyeball of the user, and taking the pupil center position in the reference eye image as the reference pupil center position of the user.
The reference eye image is an eye picture captured when the camera in the smart head-mounted device faces the eyeball of the user; the pupil center of the user's eyeball in the reference eye image is usually located at the center of the image.
In the embodiment of the invention, before the gazing area of the user in the display interface is acquired, a reference eye image may be acquired as a basis, and the pupil center position in the reference eye image is taken as the reference pupil center position of the user, from which the current gazing area of the user on the display interface can be determined.
S212b, acquiring a current eye image captured by the camera when the eyeball of the user is at its current position.
S213b, taking the pupil center position in the current eye image as the current pupil center position.
Specifically, when determining the current gazing area of the user's eyeball on the display interface, the eye image of the user at the current position may be captured by the camera, the pupil center position in that image may be obtained, and this pupil center position is taken as the current pupil center position for comparison with the reference pupil center position, so that the gazing area can be determined.
It should be noted that the camera in the embodiment of the present invention may be a visible light camera, an infrared thermal imaging camera, or another type of camera. In addition, determining the pupil center position from an eye image captured by a camera is a relatively mature prior-art technique, and various pupil center positioning methods exist in the prior art, which are not described in detail in the embodiments of the present invention.
S214b, determining the gazing area of the user on the display interface according to the current pupil center position and the reference pupil center position of the user.
Correspondingly, after the current pupil center position is obtained, it can be compared with the reference pupil center position to determine the relative distance and direction between the two. The position of the current pupil center on the display interface is then determined from the reference pupil center position together with that relative distance and direction. Finally, the gazing area is determined according to its preset shape and size, with the position of the current pupil center on the display interface as its center. In the embodiment of the invention, the gazing area can be determined accurately and quickly using the pupil center position.
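As a sketch of S211b-S214b, assuming that the pupil offset maps to display coordinates with a fixed gain and that the reference pupil center corresponds to the center of the display interface (both assumptions made only for this example):

```python
# Sketch of the pupil-center positioning method (S211b-S214b): the offset of the
# current pupil center from the reference pupil center is scaled to display
# coordinates and a fixed-size gazing area is placed around the resulting point.
# The gain factors and display dimensions are illustrative assumptions.
from typing import Tuple


def gaze_area_from_pupil_offset(
        reference_center: Tuple[float, float],  # pupil center in the reference eye image
        current_center: Tuple[float, float],    # pupil center in the current eye image
        display_size: Tuple[int, int] = (1920, 1080),
        gain: Tuple[float, float] = (25.0, 25.0),
        area_size: Tuple[int, int] = (200, 120)
        ) -> Tuple[float, float, int, int]:
    # Relative distance and direction of the current pupil center.
    dx = current_center[0] - reference_center[0]
    dy = current_center[1] - reference_center[1]
    # The reference pupil center is assumed to correspond to the display center.
    cx = display_size[0] / 2 + gain[0] * dx
    cy = display_size[1] / 2 + gain[1] * dy
    # Gazing area of a preset shape and size centered on that position.
    return (cx - area_size[0] / 2, cy - area_size[1] / 2,
            area_size[0], area_size[1])
```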
S220, acquiring a target display area matched with the gazing area, generating a corresponding interest tag according to the type of information displayed in the target display area, and uploading the interest tag to the server.
The target display area is a part of the display area in the display interface. The display interface may be divided into a plurality of functional areas according to information type. Consider first a display interface showing two-dimensional display data, i.e., data displayed uniformly in the plane corresponding to the display interface, similar to data browsed by a user on a mobile phone, tablet, or computer. For example, such a display interface may be divided into three regions corresponding to news, games, and entertainment; the smart head-mounted device can automatically detect and divide these regions using a conventional image recognition algorithm. When the display area where the news is located matches the gazing area, that display area is the target display area. In addition, the display interface may also display multi-dimensional display data (e.g., three-dimensional or four-dimensional data), i.e., data displayed in a multi-dimensional manner in the space (virtual or real) corresponding to the display interface. Illustratively, the display interface displays a shopping-type virtual scene divided into three areas corresponding to different spatial regions such as clothing, food, and home goods; the smart head-mounted device can automatically detect and divide each spatial region in the display interface of the virtual scene using an image scene recognition algorithm or the like. When the spatial region where the clothing is located matches the gazing area, that spatial region is the target display area.
In the embodiment of the invention, the points of interest of the user can be determined automatically by acquiring the target display area matched with the gazing area and generating a corresponding interest tag according to the type of information displayed in the target display area.
Correspondingly, when the user behavior data is two-dimensional display data, as shown in fig. 2d, S220 may specifically include:
S221a, acquiring the alternative display area with the highest overlap ratio with the gazing area, where one alternative display area corresponds to one information type.
The alternative display area is the display area, among those divided by information type, that has the highest overlap ratio with the gazing area. The overlap ratio is calculated with respect to the gazing area, i.e., it is the percentage of the gazing area occupied by the overlapping region. An alternative display area displays only one type of information; for example, it may display news information, or, more narrowly, entertainment news within the news information.
In the embodiment of the present invention, the gazing area on the display interface may overlap with a plurality of the divided display areas. In order to accurately determine the two-dimensional display information viewed by the user, the display area with the highest overlap ratio with the gazing area may be taken as the alternative display area. For this reason the gazing area should not be too large; an overly large gazing area would overlap with too many of the divided display areas.
S222a, if the overlap ratio corresponding to the alternative display area is determined to exceed the overlap threshold, obtaining the gazing duration for which the user continuously gazes at the gazing area.
The overlap threshold may be 80% or 90%, and may be specifically set according to an actual requirement, which is not limited in this embodiment of the present invention.
In the embodiment of the present invention, after the alternative display area is determined, the ratio of the overlapping region of the alternative display area and the gazing area to the gazing area (the overlap ratio corresponding to the alternative display area) may be calculated and compared with a set overlap threshold. When the overlap ratio corresponding to the alternative display area exceeds the set overlap threshold, the gazing duration for which the user continuously gazes at the gazing area is further detected.
S223a, if the gazing duration is determined to exceed the duration threshold, determining the alternative display area as the target display area.
The duration threshold may be 5 seconds, 1 minute, or 10 minutes, and may be adaptively designed according to the type of information in the alternative display area, which is not limited in the embodiment of the present invention.
Further, when it is determined that the overlap ratio corresponding to the alternative display area exceeds the set overlap threshold and the gazing duration for which the user continuously gazes at the gazing area exceeds the duration threshold, the alternative display area is determined as the target display area matched with the gazing area. Applying both constraints, overlap ratio above the overlap threshold and gazing duration above the duration threshold, prevents a display area that the user is not really interested in from being taken as the target display area when the user browses the two-dimensional display data quickly, which ensures the accuracy and validity of the target display area and, in turn, of the generated interest tag.
It should be noted that the two-dimensional display data may be of various types, such as web browsing data or application data. Because web browsing data carries tag information, the smart head-mounted device can analyze the data of the currently displayed web page in the background and extract keywords that characterize the page as interest tags. Of course, besides web browsing data, other application data may also serve as user behavior data, and the smart head-mounted device can likewise generate a corresponding interest tag from it. For example, when the user uses the QQ application and the keyword "lipstick" appears frequently in the current chat window, the smart head-mounted device takes "lipstick" as an interest tag of the user and uploads it to the server. When the user opens a document with the Word application, the text information in the document can also be used as behavior data of the user.
S224a, generating the corresponding interest tag according to the information type displayed in the target display area and uploading the interest tag to the server.
Of course, in the embodiment of the present invention, the interest tag may also be generated with reference to other browsing habits of the user. For example, the at least two alternative display areas with the highest overlap ratio with the gazing area are obtained first, and the alternative display area gazed at most often, according to the number of times the user repeatedly gazes at each alternative display area, is taken as the target display area. When the number of gazes is the same for every alternative display area, the target display area can be determined according to the order in which the user gazed at them.
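A sketch of the selection logic of S221a-S223a is given below: the candidate display area with the highest overlap ratio (measured as a fraction of the gazing area) becomes the target display area only when both the overlap threshold and the duration threshold are exceeded. The threshold values and data layout are only the examples mentioned above, not requirements.

```python
# Sketch of S221a-S223a for two-dimensional display data: pick the alternative
# display area with the highest overlap ratio relative to the gazing area and
# accept it as the target only if both thresholds are exceeded.
from typing import Dict, Optional, Tuple

Rect = Tuple[float, float, float, float]        # (x, y, width, height)


def overlap_ratio(gaze: Rect, area: Rect) -> float:
    """Overlapping area divided by the area of the gazing region."""
    ox = max(0.0, min(gaze[0] + gaze[2], area[0] + area[2]) - max(gaze[0], area[0]))
    oy = max(0.0, min(gaze[1] + gaze[3], area[1] + area[3]) - max(gaze[1], area[1]))
    return (ox * oy) / (gaze[2] * gaze[3])


def select_target_area(gaze: Rect,
                       display_areas: Dict[str, Rect],    # info type -> region
                       gaze_duration: float,
                       overlap_threshold: float = 0.8,    # e.g. 80%
                       duration_threshold: float = 5.0    # e.g. 5 seconds
                       ) -> Optional[str]:
    """Return the information type of the target display area, if any."""
    info_type, ratio = max(
        ((name, overlap_ratio(gaze, rect)) for name, rect in display_areas.items()),
        key=lambda item: item[1])
    if ratio > overlap_threshold and gaze_duration > duration_threshold:
        return info_type                        # basis for the interest tag
    return None
```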
Correspondingly, when the user behavior data is multidimensional display data, as shown in fig. 2e, S220 may specifically include:
S221b, acquiring a plurality of alternative display areas whose overlap ratio with the gazing area meets a matching condition, where one alternative display area corresponds to one information type.
The multidimensional display data is mostly applied to virtual scenes or augmented reality scenes and the like. When the user behavior data is multidimensional display data, the alternative display area may also display only one type of information, for example, the alternative display area is a space area corresponding to a clothing class, or the alternative display area is a space area corresponding to a billboard in a bus stop. The matching condition may be a condition for determining the number of the alternative display regions, such as that the overlap ratio of the alternative display region to the gaze region exceeds 50%, or that the first 3 display regions having the highest overlap ratio to the gaze region are taken as the alternative display regions.
In the embodiment of the invention, the gazing area on the display interface may overlap with a plurality of the divided display areas. In order to accurately determine the multi-dimensional display information browsed by the user, the several display areas with the highest overlap ratio with the gazing area may be taken as the alternative display areas. For this reason the gazing area should not be too large; an overly large gazing area would overlap with too many of the divided display areas.
S222b, if the overlapping ratio corresponding to the alternative display areas is determined to exceed the overlapping threshold, obtaining the distance between the target information in each alternative display area and the user, and displaying each alternative display area according to the distance.
The overlap threshold may be 80% or 90%, and may be specifically set according to an actual requirement, which is not limited in this embodiment of the present invention. The target information may be the identified information in each alternative display area, such as various brands of apparel in the space area corresponding to the apparel class, or food at different positions in the space area corresponding to the food class, and so on.
In the embodiment of the present invention, after a plurality of alternative display areas corresponding to the multi-dimensional display data are determined, the ratio of the overlapping region of each alternative display area and the gazing area to the gazing area (the overlap ratio corresponding to that alternative display area) may be calculated. When the overlap ratios corresponding to the alternative display areas are determined to exceed the set overlap threshold, the distance between the target information in each alternative display area and the user is further acquired. The alternative display areas can then be displayed in order of that distance, for example with each piece of target information and its alternative display area shown in turn from nearest to farthest.
S223b, obtaining the gazing duration of the user continuously gazing the target information, and if the gazing duration is determined to exceed a duration threshold, determining the alternative display area corresponding to the target information as the target display area.
The duration threshold may be 5 seconds, 1 minute, or 10 minutes, and may be adaptively designed according to the type of the target information in the alternative display area, which is not limited in the embodiment of the present invention.
Further, while the alternative display areas are displayed according to the distance between each piece of target information and the user, the gazing duration for which the user continuously gazes at each piece of target information can be obtained. When the gazing duration for a certain piece of target information exceeds the duration threshold, the alternative display area corresponding to that target information can be determined as the target display area matched with the gazing area. Determining the target display area from the three factors of overlap ratio, distance, and gazing duration prevents a display area that the user is not really interested in from being taken as the target display area when the user browses the multi-dimensional display data quickly, which ensures the accuracy and validity of the target display area and, in turn, of the generated interest tag.
S224b, generating the corresponding interest tag according to the information type displayed in the target display area and uploading the interest tag to the server.
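Analogously, a sketch of S221b-S223b for multi-dimensional display data: candidates whose overlap ratio exceeds the overlap threshold are considered in order of the distance of their target information from the user, and the first one whose target information is gazed at longer than the duration threshold becomes the target display area. The candidate structure and thresholds are illustrative assumptions.

```python
# Sketch of S221b-S223b for multi-dimensional display data: order the candidate
# areas by the distance of their target information from the user and pick the
# one whose target information is gazed at longer than the duration threshold.
# The candidate dictionary layout and threshold values are assumptions.
from typing import List, Optional


def select_target_area_3d(candidates: List[dict],
                          overlap_threshold: float = 0.5,
                          duration_threshold: float = 5.0) -> Optional[dict]:
    # Keep only candidates whose overlap ratio exceeds the overlap threshold.
    kept = [c for c in candidates if c["overlap_ratio"] > overlap_threshold]
    # Consider (and display) the candidates from nearest to farthest.
    for candidate in sorted(kept, key=lambda c: c["distance_to_user"]):
        if candidate["gaze_duration"] > duration_threshold:
            return candidate                     # matched target display area
    return None


# Usage with hypothetical data:
# select_target_area_3d([
#     {"info_type": "clothing", "overlap_ratio": 0.7,
#      "distance_to_user": 1.2, "gaze_duration": 6.5},
#     {"info_type": "food", "overlap_ratio": 0.6,
#      "distance_to_user": 2.5, "gaze_duration": 1.0},
# ])
```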
S230, sending an information acquisition request corresponding to the user to the server, where the information acquisition request is used for instructing the server to acquire push information corresponding to the interest preference value of the user and feed it back.
S240, receiving the push information sent by the server and displaying the push information in a display interface.
Example three
Fig. 3 is a schematic view of an information push apparatus according to a third embodiment of the present invention, where the present embodiment is applicable to a situation of pushing personalized push information to a user.
As shown in fig. 3, the apparatus includes: a tag uploading module 310 and a request sending module 320, wherein:
the tag uploading module 310 is configured to collect user behavior data, generate a corresponding interest tag, and upload the interest tag to a server; the interest tag is used for instructing the server to update an interest preference value corresponding to the user;
the request sending module 320 is configured to send an information acquisition request to the server; the information acquisition request is used for instructing the server to acquire information corresponding to the interest tag of the user and push it.
In the embodiment of the invention, user behavior data is collected, a corresponding interest tag is generated and uploaded to the server, an information acquisition request corresponding to the user is sent to the server, and the push information corresponding to the interest tag of the user returned by the server is received and displayed. This solves the problem that push information in existing smart head-mounted devices lacks personalization, and pushes personalized information to the smart head-mounted device according to the browsing habits and points of interest of different users, thereby meeting the personalized requirements of different users for push information.
Optionally, the user behavior data includes: user browsing data and/or user input data.
Optionally, the tag uploading module 310 includes:
a gazing area acquiring unit, configured to acquire a gazing area of the user in a display interface; and
a tag uploading unit, configured to acquire a target display area matched with the gazing area, and to generate the corresponding interest tag according to the type of information displayed in the target display area and upload it to the server.
Optionally, the gazing area acquiring unit is specifically configured to:
acquiring an eye image of the user; extracting eye feature information according to the eye image; determining the gaze region based on the eye feature information.
Optionally, the user behavior data is two-dimensional display data; the tag uploading unit is specifically configured to:
acquire the alternative display area with the highest overlap ratio with the gazing area, where one alternative display area corresponds to one information type;
if the overlap ratio corresponding to the alternative display area is determined to exceed an overlap threshold, obtain the gazing duration for which the user continuously gazes at the gazing area; and
if the gazing duration is determined to exceed a duration threshold, determine the alternative display area as the target display area.
Optionally, the user behavior data is multi-dimensional display data; the tag uploading unit is specifically configured to, when acquiring the target display area matched with the gazing area:
acquire a plurality of alternative display areas whose overlap ratio with the gazing area meets a matching condition, where one alternative display area corresponds to one information type;
if the overlap ratios corresponding to the alternative display areas are determined to exceed the overlap threshold, acquire the distance between the target information in each alternative display area and the user, and display each alternative display area according to the distance;
acquire the gazing duration for which the user continuously gazes at the target information; and
if the gazing duration is determined to exceed the duration threshold, determine the alternative display area corresponding to the target information as the target display area.
Optionally, the apparatus further includes a feedback providing module, configured to obtain operation feedback executed by the user on the push information from the server and provide the operation feedback to the server, where the operation feedback is used to instruct the server to update the interest preference value corresponding to the user accordingly.
The information pushing apparatus can execute the information pushing method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described in this embodiment, reference may be made to the information pushing method provided by any embodiment of the present invention.
Example four
Fig. 4 is a schematic structural diagram of a wearable device according to a fourth embodiment of the present invention. Fig. 4 shows a block diagram of a wearable device 412 suitable for implementing an embodiment of the invention. The wearable device 412 shown in fig. 4 is only an example and should not impose any limitation on the function and scope of use of the embodiments of the present invention.
As shown in fig. 4, the wearable device 412 is in the form of a general purpose computing device. The components of the wearable device 412 may include, but are not limited to: one or more processors 416, a storage device 428, and a bus 418 that couples the various system components including the storage device 428 and the processors 416.
Bus 418 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
The wearable device 412 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by the wearable device 412 and includes both volatile and nonvolatile media, removable and non-removable media.
Storage 428 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 430 and/or cache memory 432. The wearable device 412 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 434 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in fig. 4, commonly referred to as a "hard drive"). Although not shown in fig. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact Disc Read-Only Memory (CD-ROM), a Digital Video Disc (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 418 by one or more data media interfaces. Storage 428 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program 436 having a set (at least one) of program modules 426 may be stored, for example, in storage 428. Such program modules 426 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may comprise an implementation of a network environment. Program modules 426 generally perform the functions and/or methodologies of the embodiments of the invention described herein.
Wearable device 412 may also communicate with one or more external devices 414 (e.g., a keyboard, a pointing device, a camera, a display 424, etc.), with one or more devices that enable a user to interact with the wearable device 412, and/or with any devices (e.g., a network card, a modem, etc.) that enable the wearable device 412 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 422. Also, the wearable device 412 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) through the network adapter 420. As shown, network adapter 420 communicates with the other modules of wearable device 412 over bus 418. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the wearable device 412, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, disk array (RAID) systems, tape drives, and data backup storage systems, to name a few.
The processor 416 executes various functional applications and data processing by running programs stored in the storage device 428, for example implementing the information pushing method provided by the above-described embodiments of the present invention.
That is, when executing the program, the processing unit implements: collecting user behavior data, generating a corresponding interest tag, and uploading the interest tag to a server, the interest tag being used for instructing the server to update an interest preference value corresponding to the user; and sending an information acquisition request to the server, the information acquisition request being used for instructing the server to acquire information corresponding to the interest tag of the user and push it.
Example five
The fifth embodiment of the present invention further provides a computer storage medium storing a computer program, where the computer program, when executed by a computer processor, performs the information pushing method according to any one of the above embodiments of the present invention: collecting user behavior data, generating a corresponding interest tag, and uploading the interest tag to a server, the interest tag being used for instructing the server to update an interest preference value corresponding to the user; and sending an information acquisition request to the server, the information acquisition request being used for instructing the server to acquire information corresponding to the interest tag of the user and push it.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.