Disclosure of Invention
The disclosure provides various resource recommendation methods, devices, computer equipment and storage media so as to improve the accuracy of a resource recommendation process. The technical scheme of the present disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided a resource recommendation method, including:
acquiring global recommendation characteristics and personalized recommendation characteristics of a target user account, wherein the global recommendation characteristics are used for representing behavior related information and resource information of the target user account on recommended resources, and the personalized recommendation characteristics are used for representing resource preference information of the target user account determined based on the recommended resources;
acquiring bias information according to the global recommendation feature and the personalized recommendation feature, wherein the bias information is used for representing a deviation between the global recommendation feature and the personalized recommendation feature;
processing the global recommendation characteristic according to the bias information to obtain at least one recommendation probability, wherein the recommendation probability is used for representing the preference degree of the target user account on a multimedia resource;
and recommending resources to the target user account based on the at least one recommendation probability.
In one possible implementation manner, the obtaining bias information according to the global recommendation feature and the personalized recommendation feature includes:
performing splicing processing on the global recommended feature and the personalized recommended feature to obtain a target spliced feature;
and calling a bias network to activate the target splicing characteristic to obtain the bias information.
In one possible implementation manner, the invoking the bias network to activate the target stitching feature to obtain the bias information includes:
inputting the target splicing characteristics into at least one activation sub-network in the bias network respectively, wherein one activation sub-network comprises at least one activation layer;
activating the target splicing characteristic through at least one activation layer in any activation sub-network in the bias network to obtain a bias item of any activation sub-network;
and repeatedly executing the operation of acquiring the bias items, and acquiring at least one bias item output by the at least one activation sub-network as the bias information.
In one possible implementation, the training process of the bias network includes:
acquiring global recommendation characteristics, personalized recommendation characteristics and resource recommendation results of at least one sample user account, wherein the resource recommendation results are used for indicating whether the sample user account clicks recommended resources;
performing iterative training on an initial recommendation model based on global recommendation features, personalized recommendation features and resource recommendation results of the at least one sample user account, wherein the initial recommendation model comprises an initial bias network and an initial main body network;
and responding to the condition of stopping training, acquiring a recommendation model, wherein the recommendation model comprises a main body network and a bias network, the main body network is used for carrying out weighting processing on the input characteristics to acquire recommendation probability, and the bias network is used for carrying out bias processing on the input characteristics to acquire bias information.
In one possible implementation, when back-propagating gradients during training of the recommendation model, the initial bias network propagates gradients only to the personalized recommendation features of the at least one sample user account.
In one possible implementation manner, the bias information includes at least one bias item, and the processing the global recommendation feature according to the bias information to obtain at least one recommendation probability includes:
inputting the at least one bias term and the global recommendation feature into a subject network, the subject network comprising at least one hidden layer, wherein the at least one bias term corresponds to the at least one hidden layer one to one respectively;
and respectively carrying out bias processing on the input features of the at least one hidden layer according to the at least one bias item, calling the at least one hidden layer to carry out weighting processing on the biased features, and outputting the at least one recommended probability.
In one possible implementation manner, the biasing the input features of the at least one hidden layer according to the at least one bias term, and invoking the at least one hidden layer to perform weighting processing on the biased features includes:
performing element-wise multiplication on the output features of the previous hidden layer and the bias term of the corresponding level to obtain the biased features of any hidden layer;
and invoking any hidden layer to perform weighting processing on the biased features to obtain output features of any hidden layer, and inputting the output features of any hidden layer to the next hidden layer.
In one possible implementation manner, the obtaining the global recommendation feature and the personalized recommendation feature of the target user account includes:
acquiring global initial characteristics and personalized initial characteristics of the target user account, wherein the global initial characteristics comprise user information of the target user account, resource information of recommended resources and behavior information of the target user account on the recommended resources, and the personalized initial characteristics comprise user identification of the target user account, resource identification of the recommended resources and publisher identification of the recommended resources;
and respectively carrying out embedding processing on the global initial feature and the personalized initial feature to obtain the global recommended feature and the personalized recommended feature.
According to a second aspect of the embodiments of the present disclosure, there is provided a resource recommendation apparatus, including:
a first acquisition unit configured to perform acquisition of a global recommendation feature and a personalized recommendation feature of a target user account, wherein the global recommendation feature is used for representing behavior related information and resource information of the target user account on recommended resources, and the personalized recommendation feature is used for representing resource preference information of the target user account determined based on the recommended resources;
a second acquisition unit configured to perform acquisition of bias information according to the global recommendation feature and the personalized recommendation feature, wherein the bias information is used for representing a deviation between the global recommendation feature and the personalized recommendation feature;
the processing unit is configured to execute processing on the global recommendation characteristics according to the bias information to obtain at least one recommendation probability, wherein the recommendation probability is used for representing the preference degree of the target user account on a multimedia resource;
and the recommending unit is configured to execute resource recommendation on the target user account based on the at least one recommending probability.
In one possible embodiment, the second acquisition unit includes:
a splicing subunit configured to perform splicing processing on the global recommendation feature and the personalized recommendation feature to obtain a target splicing feature;
and an activation subunit configured to invoke a bias network to perform activation processing on the target splicing feature to obtain the bias information.
In one possible implementation, the activation subunit is configured to perform:
inputting the target splicing characteristics into at least one activation sub-network in the bias network respectively, wherein one activation sub-network comprises at least one activation layer;
activating the target splicing characteristic through at least one activation layer in any activation sub-network in the bias network to obtain a bias item of any activation sub-network;
and repeatedly executing the operation of acquiring the bias items, and acquiring at least one bias item output by the at least one activated sub-network as the bias information.
In one possible implementation, the training process of the bias network includes:
acquiring global recommendation characteristics, personalized recommendation characteristics and resource recommendation results of at least one sample user account, wherein the resource recommendation results are used for indicating whether the sample user account clicks recommended resources;
performing iterative training on an initial recommendation model based on the global recommendation features, personalized recommendation features and resource recommendation results of the at least one sample user account, wherein the initial recommendation model comprises an initial bias network and an initial main body network;
and responding to the condition of stopping training, acquiring a recommendation model, wherein the recommendation model comprises a main body network and a bias network, the main body network is used for carrying out weighting processing on the input characteristics to acquire recommendation probability, and the bias network is used for carrying out bias processing on the input characteristics to acquire bias information.
In one possible implementation, when back-propagating gradients during training of the recommendation model, the initial bias network propagates gradients only to the personalized recommendation features of the at least one sample user account.
In one possible implementation, the bias information includes at least one bias term, and the processing unit includes:
an input subunit configured to perform inputting the at least one bias term and the global recommendation feature into a subject network, the subject network comprising at least one hidden layer, wherein the at least one bias term corresponds one-to-one with the at least one hidden layer, respectively;
and a processing subunit configured to perform bias processing on the input features of the at least one hidden layer according to the at least one bias term, invoke the at least one hidden layer to perform weighting processing on the biased features, and output the at least one recommendation probability.
In one possible implementation, the processing subunit is configured to perform:
performing element-wise multiplication on the output features of the previous hidden layer and the bias term of the corresponding level to obtain the biased features of any hidden layer;
and invoking any hidden layer to perform weighting processing on the biased features to obtain output features of any hidden layer, and inputting the output features of any hidden layer to the next hidden layer.
In one possible implementation, the first acquisition unit is configured to perform:
acquiring global initial characteristics and personalized initial characteristics of the target user account, wherein the global initial characteristics comprise user information of the target user account, resource information of recommended resources and behavior information of the target user account on the recommended resources, and the personalized initial characteristics comprise user identification of the target user account, resource identification of the recommended resources and publisher identification of the recommended resources;
and respectively carrying out embedding processing on the global initial feature and the personalized initial feature to obtain the global recommended feature and the personalized recommended feature.
According to a third aspect of embodiments of the present disclosure, there is provided a computer device comprising:
one or more processors;
one or more memories for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the resource recommendation method of any one of the above-mentioned first aspect and possible implementation manners of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium, wherein at least one instruction in the storage medium, when executed by one or more processors of a computer device, enables the computer device to perform the resource recommendation method of any one of the above-mentioned first aspect and possible implementations of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising one or more instructions executable by one or more processors of a computer device to enable the computer device to perform the resource recommendation method of any one of the above-described first aspect and possible implementations of the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
by acquiring the global recommendation characteristic and the personalized recommendation characteristic of each target user account, bias information acquired based on the two different recommendation characteristics can be introduced in the process of processing the global recommendation characteristic, so that deviations caused by individual differences in that process can be corrected, a recommendation probability that is closer to the resource preference of the user corresponding to the target user account is obtained, and resource recommendation based on this more accurate recommendation probability has higher accuracy.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The user information referred to in the present disclosure may be information authorized by the user or sufficiently authorized by each party.
Fig. 1 is a schematic diagram of an implementation environment of a resource recommendation method according to an exemplary embodiment. Referring to Fig. 1, the implementation environment may include at least one terminal 101 and a server 102, which are described in detail below:
the at least one terminal 101 is used for browsing multimedia resources. An application program may be installed on each of the at least one terminal 101, and the application program may be any client capable of providing a multimedia resource browsing service; a user may browse multimedia resources by starting the application program. The application program may be at least one of a shopping application, a take-away application, a travel application, a game application, or a social application, and the multimedia resources may include at least one of a video resource, an audio resource, a picture resource, a text resource, or a web page resource.
The server 102 is a computer device for providing the multimedia resource recommendation service to the at least one terminal 101. The server 102 may be at least one of a single server, multiple servers, a cloud computing platform, or a virtualization center. Alternatively, the server 102 may undertake primary computing work and the at least one terminal 101 may undertake secondary computing work; alternatively, the server 102 may undertake secondary computing work and the at least one terminal 101 may undertake primary computing work; alternatively, the server 102 and the at least one terminal 101 may perform collaborative computing using a distributed computing architecture.
In some embodiments, the server 102 may collect the behavior logs of the platform user on the application program, select at least one sample user account from the total platform users in a random sampling manner, and obtain global recommendation features, personalized recommendation features and resource recommendation results of the at least one sample user account according to the behavior logs of the at least one sample user account, so as to perform offline training on the initial recommendation model, and obtain the recommendation model.
Based on the above, the server 102 can predict the preference degree of a target user account (which may be any user account) for at least one multimedia resource, that is, predict at least one recommendation probability, so as to recommend resources to the target user account according to the at least one recommendation probability. Optionally, when recommending, the server 102 may push, to the terminal where the target user account is located, multimedia resources whose recommendation probability is greater than a probability threshold. Optionally, the server 102 may sort the at least one multimedia resource in descending order of recommendation probability and push the multimedia resources ranked in the top N positions to the terminal of the user corresponding to the target user account, where N is an integer greater than or equal to 1. For different terminals, the server 102 determines different multimedia resources based on the recommendation model, and these multimedia resources better conform to the resource preferences of the users corresponding to the terminals. The server 102 transmits the multimedia resources determined based on the recommendation model to the corresponding terminal so that the terminal can display the recommended multimedia resources in the application program.
In an exemplary scenario, taking the multimedia resource as a video resource (hereinafter referred to as a "video") as an example, a short video application may be installed on the terminal of the user corresponding to the target user account, and the server 102 provides a short video service platform to the terminal through the short video application. The user may browse short videos or upload short videos through the short video application. The server 102 may invoke the recommendation model to determine short videos to be recommended from a video library according to the global recommendation features and personalized recommendation features of the target user account; the short videos to be recommended are determined specifically for the target user account, and since the personalized recommendation features are introduced to add a personalized bias in addition to the global recommendation features of the target user account, short videos to be recommended that better conform to the user's preferences can be predicted. Optionally, the user's click behaviors on the recommended short videos may be collected and used as training data for a new round of iterative updating of the recommendation model.
Note that the device type of any one of the at least one terminal 101 may include: at least one of a smart phone, a tablet computer, an electronic book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, or a desktop computer. For example, any one of the terminals may be a smart phone or another hand-held portable electronic device. The following embodiments are illustrated with the terminal being a smart phone.
Those skilled in the art will recognize that the number of terminals may be greater or smaller. For example, there may be only one terminal, or tens or hundreds of terminals, or more. The embodiments of the present disclosure do not limit the number of terminals or the device types.
Fig. 2 is a flowchart illustrating a resource recommendation method according to an exemplary embodiment, and referring to fig. 2, the resource recommendation method is applied to a computer device, and a description will be given below taking the computer device as a server.
In step 201, the server obtains global recommendation characteristics and personalized recommendation characteristics of the target user account, where the global recommendation characteristics are used to represent behavior related information of the target user account on recommended resources and resource information, and the personalized recommendation characteristics are used to represent resource preference information of the target user account determined based on the recommended resources.
In step 202, the server obtains bias information according to the global recommendation feature and the personalized recommendation feature, where the bias information is used to represent a deviation between the global recommendation feature and the personalized recommendation feature.
In step 203, the server processes the global recommendation feature according to the bias information to obtain at least one recommendation probability, where the recommendation probability is used to represent a preference degree of the target user account for a multimedia resource.
In step 204, the server recommends resources for the target user account based on the at least one recommendation probability.
According to the method provided by the embodiments of the present disclosure, by acquiring the global recommendation characteristic and the personalized recommendation characteristic of each target user account, bias information obtained based on the two different recommendation characteristics can be introduced in the process of processing the global recommendation characteristic, so that deviations caused by individual differences in that process can be corrected, a recommendation probability that is closer to the resource preference of the user corresponding to the target user account is obtained, and resource recommendation based on this more accurate recommendation probability has higher accuracy.
In one possible implementation, obtaining the bias information according to the global recommendation feature and the personalized recommendation feature includes:
performing splicing processing on the global recommended feature and the personalized recommended feature to obtain a target spliced feature;
and calling a bias network to activate the target splicing characteristic to obtain the bias information.
In one possible implementation manner, invoking a bias network to activate the target stitching feature, and obtaining the bias information includes:
Inputting the target splicing characteristics into at least one activation sub-network in the bias network respectively, wherein one activation sub-network comprises at least one activation layer;
activating the target splicing characteristic through at least one activation layer in any activation sub-network in the bias network to obtain a bias item of any activation sub-network;
and repeatedly executing the operation of acquiring the bias items, and acquiring at least one bias item output by the at least one activated sub-network as the bias information.
In one possible implementation, the training process of the bias network includes:
acquiring global recommendation characteristics, personalized recommendation characteristics and resource recommendation results of at least one sample user account, wherein the resource recommendation results are used for indicating whether the sample user account clicks a recommended resource;
based on the global recommendation feature, the personalized recommendation feature and the resource recommendation result of the at least one sample user account, performing iterative training on an initial recommendation model, wherein the initial recommendation model comprises an initial bias network and an initial main body network;
in response to meeting the stop training condition, a recommendation model is obtained, the recommendation model including a subject network for weighting the input features to obtain recommendation probabilities and a bias network for biasing the input features to obtain bias information.
In one possible implementation, when back-propagating gradients during training of the recommendation model, the initial bias network propagates gradients only to the personalized recommendation features of the at least one sample user account.
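As a minimal sketch of how this gradient restriction might be realized (a PyTorch-style implementation is assumed; the module and variable names such as bias_net and body_net are illustrative and not taken from the disclosure), the global recommendation feature can be detached before it enters the bias network, so that back-propagation from the bias network reaches only the personalized recommendation feature:

```python
import torch

# Hypothetical training step illustrating the gradient restriction described above.
# global_emb / personal_emb: global and personalized recommendation features (tensors);
# bias_net / body_net: stand-ins for the (initial) bias network and main body network.
def train_step(global_emb, personal_emb, labels, bias_net, body_net, optimizer, loss_fn):
    # Detach the global feature before it enters the bias network, so gradients
    # from the bias network flow only into the personalized recommendation feature.
    gate_input = torch.cat([global_emb.detach(), personal_emb], dim=-1)
    bias_terms = bias_net(gate_input)           # one bias term per hidden layer
    logits = body_net(global_emb, bias_terms)   # bias processing + weighting processing
    loss = loss_fn(logits, labels)              # e.g., cross-entropy on click labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```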
In one possible implementation, the bias information includes at least one bias term, and processing the global recommendation feature according to the bias information to obtain at least one recommendation probability includes:
inputting the at least one bias term and the global recommendation feature into a subject network, the subject network comprising at least one hidden layer, wherein the at least one bias term corresponds to the at least one hidden layer one by one respectively;
and respectively carrying out bias processing on the input features of the at least one hidden layer according to the at least one bias item, calling the at least one hidden layer to carry out weighting processing on the biased features, and outputting the at least one recommended probability.
In one possible implementation manner, the biasing processing is performed on the input features of the at least one hidden layer according to the at least one biasing item, and the calling the at least one hidden layer to perform weighting processing on the biased features includes:
performing element-wise multiplication on the output features of the previous hidden layer and the bias term of the corresponding level to obtain the biased features of any hidden layer;
and invoking any hidden layer to perform weighting processing on the biased features to obtain output features of any hidden layer, and inputting the output features of any hidden layer to the next hidden layer.
In one possible implementation, obtaining the global recommendation feature and the personalized recommendation feature for the target user account includes:
acquiring global initial characteristics and personalized initial characteristics of the target user account, wherein the global initial characteristics comprise user information of the target user account, resource information of recommended resources and behavior information of the target user account on the recommended resources, and the personalized initial characteristics comprise user identification of the target user account, resource identification of the recommended resources and publisher identification of the recommended resources;
and respectively carrying out embedding processing on the global initial feature and the personalized initial feature to obtain the global recommended feature and the personalized recommended feature.
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
Fig. 3 is an interactive flowchart of a resource recommendation method according to an exemplary embodiment, which is applied to a computer device, and is described below by taking the computer device as a server, as shown in fig. 3, and this embodiment includes the following steps.
In step 301, the server obtains global initial features and personalized initial features of the target user account.
The target user account is any user account registered with the server, and the user corresponding to the target user account is the one or more users who use the target user account. The user can start an application program on the terminal, log in to the target user account, and then browse, praise (like), or pay attention to (follow) each multimedia resource in the application program.
The user corresponding to the target user account may be a "new user" of the application program, that is, a user who registers a user account in the application program for the first time, or may be an "old user" of the application program, that is, a user who has already registered a user account in the application program. For an old user, the target user account may be in a logged-in state or a logged-out state. The embodiments of the present disclosure do not specifically limit the user type of the user corresponding to the target user account.
The global initial feature comprises user information of the target user account, resource information of the recommended resources and behavior information of the target user account on the recommended resources.
Optionally, the user information of the target user account may include the target user account itself and information such as the nickname, gender, age, occupation, or geographic location of the corresponding user, where all such information is fully authorized by that user.
Optionally, the resource information of the recommended resource refers to information related to multimedia resources that have been recommended in the process of performing historical recommendation for the target user account, where the multimedia resources may include at least one of a video resource, an audio resource, a picture resource, a text resource, or a web page resource. Optionally, the resource information may include at least one of a name, a cover, a summary, a content tag, or publisher information of the recommended resource, where the publisher information refers to user information of the user who uploaded the recommended multimedia resource to the platform and includes information fully authorized by the publisher, such as an account number, a nickname, a gender, an age, an occupation, or a geographic location of the publisher.
Optionally, the behavior information may include at least one of click behavior information, consumption behavior information, praise behavior information, collection behavior information, or attention behavior information of the target user account on the recommended resources. The click behavior information refers to whether the user corresponding to the target user account clicks the recommended resource; the consumption behavior information refers to whether the user purchases a commodity recommended by a multimedia resource such as an advertisement; the praise behavior information refers to whether the user praises (likes) the recommended resource; the collection behavior information refers to whether the user collects (bookmarks) the recommended resource; and the attention behavior information refers to whether the user pays attention to (follows) the publisher of the recommended video after playing it. In an example, the click behavior information, the consumption behavior information, the praise behavior information, the collection behavior information, and the attention behavior information may each be represented by a binary value, where true indicates that the user corresponding to the target user account has performed the related behavior and false indicates that the user has not performed the related behavior. In addition, for behaviors that may occur a plurality of times, such as the user corresponding to the target user account playing a recommended video after clicking it, the number of times may also be counted as behavior information.
The personalized initial feature comprises a user identification of the target user account, a resource identification of the recommended resource, and a publisher identification of the recommended resource. Optionally, the server may allocate identification (ID) information to each user account and each multimedia resource, which can be used to uniquely identify the user corresponding to a certain user account or a certain multimedia resource. For example, the identification information of a user may be the user account itself, a common device identifier used to log in to the user account, or a user identification code allocated to each user, and the identification information of a resource may be a resource identification code allocated to each multimedia resource. In one example, the user identification of the target user account is the target user account itself and may be denoted as "uid (user id)", the resource identification of the recommended resource is the resource identification code of the recommended resource and may be denoted as "pid", and the publisher identification of the recommended resource is the user account of the user who published the recommended resource (i.e., the publisher) and may be denoted as "aid (author id)"; optionally, the personalized initial feature may be stored in the form of a triplet {uid, pid, aid}.
In the above process, the server may collect the user information submitted by the user when registering the target user account (and fully authorized by the user) and the behavior log recorded during the historical recommendation process on the application program of the terminal, so as to obtain the user information of the target user account and the resource information of the recommended resources. The server then extracts the behavior information of the target user account on the recommended resources from the behavior log, extracts the user identification of the target user account from the user information, and extracts the resource identification and the publisher identification of the recommended resources from the resource information. The server further performs one-hot encoding on the user information, the resource information, and the behavior information to obtain the global initial feature of the target user account, and obtains the triplet formed by the user identification, the resource identification, and the publisher identification as the personalized initial feature of the target user account.
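As a toy illustration of this step (plain Python with made-up vocabularies and field values; none of the names below come from the disclosure), the one-hot global initial feature and the {uid, pid, aid} triplet could be assembled as follows:

```python
# Illustrative only: one-hot encode a few categorical fields for the global
# initial feature and assemble the personalized initial feature as a triplet.
def one_hot(index, size):
    vec = [0] * size
    vec[index] = 1
    return vec

# Hypothetical vocabularies for two user-information fields.
gender_vocab = {"female": 0, "male": 1, "unknown": 2}
age_bucket_vocab = {"<18": 0, "18-30": 1, "30-50": 2, ">50": 3}

user_info = {"gender": "female", "age_bucket": "18-30"}
behavior_info = {"clicked": 1}  # binary behavior value taken from the behavior log

global_initial_feature = (
    one_hot(gender_vocab[user_info["gender"]], len(gender_vocab))
    + one_hot(age_bucket_vocab[user_info["age_bucket"]], len(age_bucket_vocab))
    + [behavior_info["clicked"]]
)

# Personalized initial feature: identifiers only.
personalized_initial_feature = {"uid": 10001, "pid": 20002, "aid": 30003}
```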
In one possible implementation manner, in addition to using one-hot encoding to represent the global initial feature and using a triplet to represent the personalized initial feature, the server may also use word vectors to represent the global initial feature and the personalized initial feature; the embodiments of the present disclosure do not specifically limit the expression forms of the global initial feature and the personalized initial feature.
In an exemplary scenario, taking the multimedia resource as a video as an example, the server collects the user information of the target user account and the behavior log of the historical recommendation process, and obtains the global initial feature and the personalized initial feature of the target user account according to the user information and the behavior log. The global initial feature of the target user account may include the user information of the target user account, the video information of the recommended video, and the behavior information of the target user account on the recommended video. For example, information fully authorized by the user, such as the target user account and the nickname, gender, age, occupation, or geographic location of the corresponding user, is obtained as the user information; at least one of the name (i.e., title), cover image, text abstract (i.e., brief introduction), highlight moment (i.e., key frame), content tag, or publisher account (e.g., an uploader account, an anchor account, etc.) of the recommended video is obtained as the video information; and at least one of the click behavior information, praise behavior information, collection behavior information, or attention behavior information of the target user account on the recommended video is obtained as the behavior information. Optionally, if the recommended video is an advertisement, the consumption behavior information of the target user account on the recommended advertisement is also obtained as the behavior information. The personalized initial feature of the target user account may include the user identification uid of the target user account, the resource identification pid of the recommended resource, and the publisher identification aid of the recommended resource.
In step 302, the server performs embedding processing on the global initial feature and the personalized initial feature, so as to obtain a global recommendation feature and a personalized recommendation feature of the target user account.
The global recommendation feature is used for representing behavior related information of the target user account on recommended resources and resource information, and the personalized recommendation feature is used for representing resource preference information of the target user account determined based on the recommended resources.
In some embodiments, a recommendation model may be stored on the server, where the recommendation model may include a body network and a bias network, which are described in detail in steps 304 and 305, respectively, and not described in detail herein. In addition, the recommendation model may further include a first embedded layer and a second embedded layer, the first embedded layer may be used as a pre-layer or an input layer of the main network, the second embedded layer may be used as a pre-layer or an input layer of the bias network, wherein the first embedded layer is used for preprocessing global initial features to obtain global recommendation features, and the second embedded layer is used for preprocessing personalized initial features to obtain personalized recommendation features.
In the step 302, the server may input the global initial feature into a first embedding layer of the recommendation model, perform embedding (embedding) processing on the global initial feature through the first embedding layer, convert the global initial feature in the form of one-hot encoding into a global embedding vector in an embedding space, and acquire the global embedding vector as a global recommendation feature; inputting the personalized initial feature into a second embedding layer of the recommendation model, embedding the personalized initial feature through the second embedding layer, converting the personalized initial feature in a triplet form into a personalized embedded vector in an embedding space, and acquiring the personalized embedded vector as the personalized recommendation feature.
Optionally, the first embedded layer and the second embedded layer may be the same embedded layer or may be different embedded layers, and in this embodiment of the present disclosure, description is given by taking the first embedded layer and the second embedded layer as different embedded layers, where the global initial feature is usually in a form of single thermal coding, and the personalized initial feature is in a form of triples, so that different embedded layers are trained for initial features of different data types, and prediction accuracy of the whole recommendation model can be improved.
In the above process, the server performs embedding processing on the global initial feature and the personalized initial feature, so that sparse features can be converted into dense embedded vectors in the embedding space, where a sparse feature is a feature in which the number of non-zero elements is far smaller than the feature length, and a dense embedded vector is a feature whose length is much smaller and in which most elements are non-zero; by compressing the features in this way, the expression capability of the global recommended feature and the personalized recommended feature can be improved.
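A rough sketch of the two embedding layers (PyTorch assumed; vocabulary sizes and embedding dimensions are placeholders, not values from the disclosure) could look like this:

```python
import torch
import torch.nn as nn

class SparseEmbedding(nn.Module):
    """Maps sparse id features (one field = one vocabulary) to dense vectors."""
    def __init__(self, vocab_sizes, emb_dim):
        super().__init__()
        self.tables = nn.ModuleList(nn.Embedding(v, emb_dim) for v in vocab_sizes)

    def forward(self, ids):  # ids: LongTensor of shape (batch, num_fields)
        embs = [table(ids[:, i]) for i, table in enumerate(self.tables)]
        return torch.cat(embs, dim=-1)  # concatenated dense embedding

# First embedding layer for the m global fields, second for {uid, pid, aid}.
first_embedding = SparseEmbedding(vocab_sizes=[3, 4, 2], emb_dim=16)             # placeholder sizes
second_embedding = SparseEmbedding(vocab_sizes=[100000, 500000, 80000], emb_dim=16)
```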
In the steps 301 to 302, the server is equivalent to acquiring the global recommended feature and the personalized recommended feature of the target user account, and in some embodiments, the server may also call the feature extraction network to extract the global recommended feature and the personalized recommended feature after acquiring the global initial feature and the personalized initial feature, or may train two different feature extraction networks for the global initial feature and the personalized initial feature, or train a general feature extraction network for the global initial feature and the personalized initial feature, so that different feature extraction networks can be flexibly configured according to service requirements, and accuracy of the feature extraction process is improved.
In one possible implementation manner, if the global initial feature and the personalized initial feature are represented by adopting a word vector form, the server can also use the word vector submodel to extract the global recommended feature and the personalized recommended feature, and the recommended model does not need to include a first embedded layer and a second embedded layer, and only one word vector submodel is directly adopted as a front layer of the main network and the bias network, so that the architecture of the recommended model is simplified, and the flow of feature extraction is simplified. According to different word vector languages, the word vector submodel can be a Chinese word vector submodel or a foreign word vector submodel, and the type of the word vector submodel is not specifically limited in the embodiment of the present disclosure.
In step 303, the server performs a stitching process on the global recommended feature and the personalized recommended feature, to obtain a target stitching feature.
In the above process, the server may set the hyperparameters of the first embedded layer and the second embedded layer to ensure that the feature sizes (width and height) of the global recommended feature output by the first embedded layer and the personalized recommended feature output by the second embedded layer are identical, while the feature lengths (i.e., the numbers of channels) of the two features may be the same or different. On the basis that the feature sizes of the global recommended feature and the personalized recommended feature are consistent, the two features can be directly connected along the length dimension to obtain the target splicing feature; in other words, splicing (concat) processing between the global recommended feature and the personalized recommended feature is realized.
In one example, assuming that the global recommended feature is a 32×32×256-dimensional embedded vector and the personalized recommended feature is a 32×32×16-dimensional embedded vector, both feature sizes are 32×32, so the two recommended features can be directly connected along the length and spliced into a 32×32×(256+16) = 32×32×272-dimensional target splicing feature.
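In code, the splicing step reduces to a single concatenation along the feature dimension (a PyTorch sketch using flat, batched embeddings rather than the 32×32 example above; the shapes are illustrative):

```python
import torch

global_feat = torch.randn(8, 256)    # batch of 8 global recommendation features
personal_feat = torch.randn(8, 16)   # batch of 8 personalized recommendation features
target_splice = torch.cat([global_feat, personal_feat], dim=-1)  # shape (8, 272)
```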
Step 303 describes only one possible implementation manner in which the server fuses the global recommended feature and the personalized recommended feature; direct splicing reduces the calculation amount of the feature fusion process and accelerates the processing speed of resource recommendation. Alternatively, the server may perform feature fusion by element-wise addition, element-wise multiplication, bilinear fusion, or the like, which can achieve a deeper fusion effect.
In step 304, the server invokes the bias network to activate the target splice feature to obtain bias information, where the bias information includes at least one bias term.
Wherein the bias information is used to represent a deviation between the global recommendation feature and the personalized recommendation feature.
Optionally, the recommendation model as referred to in step 302 above includes a principal network for weighting the input features to obtain recommendation probabilities and a bias network for biasing the input features to obtain bias information.
Optionally, the main network may include at least one hidden layer therein, and the bias network may include at least one active sub-network therein, where the number of the at least one active sub-network is greater than or equal to the number of the at least one hidden layer. Wherein one active sub-network of the at least one active sub-network is used for outputting one bias term, and one bias term is used for adding personalized bias information for one hidden layer corresponding to the bias term, and all bias terms output by all active sub-networks form the bias information.
In some embodiments, the server may input the target splice feature into at least one active sub-network of the bias network, respectively, one active sub-network including at least one active layer; activating the target splicing characteristic through at least one activation layer in any activation sub-network in the bias network to obtain a bias item of any activation sub-network; and repeatedly executing the operation of acquiring the bias items, and acquiring at least one bias item output by the at least one activated sub-network as the bias information.
In the foregoing process, for each active subnetwork, the server may input the target splicing feature to the first active layer, perform nonlinear transformation on the target splicing feature by an active function in the first active layer, use the output of the first active layer as the input of the second active layer, and so on, use the output of the last active layer as the bias term of the final output of the active subnetwork, where each active layer corresponds to an active function, and the active function may be a sigmoid function, a ReLU function, or a tanh function. It can be seen that, since the input target splicing feature is usually a feature vector, the bias term of the final activation sub-network output may also be a feature vector, and of course, the feature vector may also be normalized to a numerical value by a softmax function, and the data type of the bias term is not specifically limited in the embodiments of the present disclosure.
In one example, the bias network may include 4 activation sub-networks (Gate Neural Networks, Gate NN), and each activation sub-network includes 2 activation layers, where the first activation layer includes a ReLU function and the second activation layer includes a 2 × sigmoid function. The second activation layer can restrict each element in the bias term output by each activation sub-network to the value range [0, 2], so as to normalize the values of the elements in the bias term output by each activation sub-network. Optionally, the default value of each element may be 1; when every element in all bias terms takes this default value, each bias term introduced by the bias network is 1, and the whole recommendation model is then equivalent to the main body network alone.
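A minimal sketch of one such activation sub-network, assuming the two-layer Gate NN design just described (ReLU followed by 2 × sigmoid; the layer widths are placeholders):

```python
import torch
import torch.nn as nn

class GateNN(nn.Module):
    """One activation sub-network: outputs one bias term for one hidden layer."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)   # first activation layer (ReLU)
        self.fc2 = nn.Linear(hidden_dim, out_dim)  # second activation layer (2 * sigmoid)

    def forward(self, target_splice):
        h = torch.relu(self.fc1(target_splice))
        # Constrain every element of the bias term to [0, 2]; 1 acts as the neutral default.
        return 2.0 * torch.sigmoid(self.fc2(h))
```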
In the above process, the target splicing features are respectively activated through at least one activating sub-network in the bias network, and at least one bias item is output as bias information, so that a personalized bias item can be respectively configured for each hidden layer in the main network.
In the above steps 303-304, the server is equivalent to obtaining the bias information according to the global recommendation feature and the personalized recommendation feature. In this implementation manner, at least one bias term can be output through at least one active sub-network, so that respective bias terms can be added in a targeted manner to at least one hidden layer in the main network, and bias terms with different values (equivalent to a correction factor) can be introduced into different hidden layers, thereby achieving a more flexible and accurate correction effect.
In some embodiments, the server may train a single independent bias network to obtain the bias information, for example, the bias network includes at least one active layer, the bias item output by the last active layer is used as general bias information, and the bias information is used as a correction factor shared by all hidden layers in the main network, so that the architecture of the bias network may be simplified, the calculation amount of the bias network for obtaining the bias information is reduced, and the method for obtaining the bias information is not limited in the embodiments of the present disclosure.
In step 305, the server inputs the at least one bias term and the global recommendation feature into a subject network, the subject network including at least one hidden layer, wherein the at least one bias term corresponds to the at least one hidden layer one-to-one, respectively.
In the above process, for the whole principal network, the server takes the at least one bias term and the global recommended feature as inputs of the principal network. For the at least one hidden layer in the principal network, the inputs of the first hidden layer are the global recommended feature and the first bias term, the inputs of the second hidden layer are the output of the first hidden layer and the second bias term, and so on, so that the inputs of the last hidden layer are the output of the second-to-last hidden layer and the last bias term. Since the bias terms are in one-to-one correspondence with the hidden layers, and the bias terms are also in one-to-one correspondence with the activation sub-networks, each activation sub-network can be regarded as calculating the bias term corresponding to one hidden layer.
It should be noted that the principal network may be any artificial neural network, such as a DNN (Deep Neural Network), a CNN (Convolutional Neural Network), a DeepFM (Deep Factorization Machine), a DCN (Deep & Cross Network), or the like, and the embodiments of the present disclosure do not specifically limit the type of the principal network.
In step 306, the server performs bias processing on the input features of the at least one hidden layer according to the at least one bias term, invokes the at least one hidden layer to perform weighting processing on the biased features, and outputs at least one recommendation probability.
The recommendation probability is used to indicate the preference degree of the target user account for a multimedia resource, that is, one recommendation probability indicates the probability that the server recommends one multimedia resource to the target user account. For example, the recommendation probability may be an estimated CTR (Click-Through Rate), an estimated LTR (Like-Through Rate), an estimated WTR (Watch-Through Rate), an estimated CVR (Conversion Rate), or the like, and different types of recommendation probabilities may be selected according to different prediction targets.
In the above process, for any hidden layer in the at least one hidden layer, the server may perform element-wise multiplication (element-wise product) on the output feature of the previous hidden layer and the bias term of the corresponding level, to obtain the bias feature of the any hidden layer; and calling any hidden layer to carry out weighting treatment on the biased characteristics to obtain output characteristics of any hidden layer, and inputting the output characteristics of any hidden layer to the next hidden layer.
In the above process, the server may multiply the global recommended feature by the first bias term to obtain the biased feature of the first hidden layer (hereinafter, simply the "first biased feature"), input the first biased feature into the first hidden layer, and perform weighting processing on the first biased feature through the first hidden layer to obtain the output feature of the first hidden layer. The server then multiplies the output feature of the first hidden layer by the second bias term to obtain the second biased feature, inputs the second biased feature into the second hidden layer, and performs weighting processing on the second biased feature through the second hidden layer to obtain the output feature of the second hidden layer, and so on: before the output feature of each hidden layer is input into the next hidden layer, it is first biased by the bias term of the corresponding level, and the biased feature is then input into the next hidden layer. Finally, the output feature of the last hidden layer is normalized by a softmax function to obtain the at least one recommendation probability.
In the above steps 305-306, the server processes the global recommendation feature to obtain at least one recommendation probability according to the bias information. In the above process, the bias term is taken as an example to describe the feature vector, and at this time, the process of performing bias processing on the input feature (i.e. one feature vector) of each hidden layer by using each bias term is equivalent to the process of multiplying two different feature vectors by element, that is, multiplying the elements at the corresponding positions in the two feature vectors to obtain the values of the elements at the corresponding positions in the biased feature, so that correction factors with different values can be introduced for the elements at different positions in the input feature, thereby achieving a more targeted correction effect.
In some embodiments, if the dimensions (or lengths) of the two feature vectors are different, a dimension-raising or dimension-reducing operation may be performed on either feature vector through a 1×1 convolution layer to ensure that the dimensions of the two feature vectors are consistent; or, if the sizes (widths and heights) of the two feature vectors are different, the size of either feature vector may be transformed through a padding operation to ensure that the sizes of the two feature vectors are consistent, where the padding value may be a default value of 1.
In some embodiments, since the bias term may not be a feature vector, but a certain value, if the bias term is a certain value, the value may be directly multiplied by all elements in the input feature, which is equivalent to enhancing or weakening the whole input feature, so as to achieve the effect of bias processing, and the biased feature is input into the next hidden layer, so that the calculation amount of the bias processing process may be reduced.
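Putting steps 305 and 306 together, the following sketch (PyTorch assumed; layer sizes are placeholders and the class name is illustrative) biases the input of each hidden layer by element-wise multiplication before the layer's weighting processing:

```python
import torch
import torch.nn as nn

class BodyNetwork(nn.Module):
    """Illustrative main body network: each hidden layer weights an input that was
    first element-wise multiplied by the bias term of the corresponding level."""
    def __init__(self, in_dim, hidden_dims):
        super().__init__()
        dims = [in_dim] + list(hidden_dims)
        self.hidden = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(len(hidden_dims))
        )

    def forward(self, global_feat, bias_terms):
        out = global_feat
        for layer, bias in zip(self.hidden, bias_terms):
            biased = out * bias              # element-wise bias processing
            out = torch.relu(layer(biased))  # weighting processing by the hidden layer
        return torch.softmax(out, dim=-1)    # normalize into recommendation probabilities
```

Note that each bias term must match the input width of its hidden layer, which is one reason the disclosure keeps bias terms and hidden layers in one-to-one correspondence.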
In step 307, the server recommends resources for the target user account based on the at least one recommendation probability.
In some embodiments, the server may screen out, from the multimedia resource library, multimedia resources whose recommendation probability is greater than a probability threshold, and push these multimedia resources to the terminal of the user corresponding to the target user account. If there are many multimedia resources whose recommendation probability is greater than the probability threshold, the server may randomly extract a certain number of them to push, and periodically update the multimedia resources pushed to the terminal of the user corresponding to the target user account according to a de-duplication principle.
In some embodiments, the server may also sort the at least one multimedia resource in descending order of recommendation probability and push the top N multimedia resources to the terminal of the user corresponding to the target user account, where N is an integer greater than or equal to 1.
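As a minimal illustration of the two pushing strategies described above (threshold filtering and top-N selection), the following plain-Python sketch uses hypothetical resource identifiers, a hypothetical probability threshold, and a hypothetical N.

```python
# Illustrative values only; ids, threshold and N are assumptions.
probabilities = {"res_a": 0.91, "res_b": 0.40, "res_c": 0.75, "res_d": 0.62}
threshold, n = 0.6, 2

# strategy 1: keep every resource whose recommendation probability exceeds the threshold
over_threshold = [rid for rid, p in probabilities.items() if p > threshold]

# strategy 2: sort in descending order of recommendation probability and keep the top N
top_n = sorted(probabilities, key=probabilities.get, reverse=True)[:n]

print(over_threshold)  # ['res_a', 'res_c', 'res_d']
print(top_n)           # ['res_a', 'res_c']
```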
In some embodiments, after recommending resources to the target user account, the server may also collect behavior logs of the user on the recommended resources at the terminal and feed these logs, as new training data, into a new round of offline training, so that the recommendation model is gradually optimized over time and achieves a better prediction effect.
According to the method provided by the embodiment of the present disclosure, the global recommendation feature and the personalized recommendation feature of each target user account are obtained, and bias information derived from these two different recommendation features is introduced into the processing of the global recommendation feature. The deviation caused by individual differences in this processing can thus be corrected, a recommendation probability that better reflects the resource preference of the user corresponding to the target user account is obtained, and resource recommendation performed based on the more accurate recommendation probability has higher accuracy.
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
Fig. 4 is a schematic diagram of a recommendation model provided in an embodiment of the present disclosure. Referring to fig. 4, the recommendation model 400 may be referred to as PPNet (Parameter Personalized Net), and includes sparse features 401, an embedding layer 402, and multiple neural network layers 403 (Neural Layer).
The sparse features 401 comprise global initial features on the left side and personalized initial features on the right side. The global initial features comprise m domain features (field 1-field m), corresponding to the m feature categories included in the global initial features; the personalized initial features comprise a user identifier (uid field), a resource identifier (pid field), and a publisher identifier (aid field).
It should be noted that the uid, pid, and aid are generally suitable for personalized recommendation of audio/video resources; for other types of multimedia resources, other personalized initial features may be configured. For example, for advertisement resources, the aid does not need to be considered, whereas the commodity category and the spokesperson of the advertisement do; in an advertisement recommendation scenario, the personalized initial features may therefore drop the aid and add a commodity category identifier and a spokesperson identifier.
The embedding layer 402 is configured to perform embedding processing on the sparse features 401. Optionally, the m domain features (field 1-field m) are embedded to obtain m embedding vectors (emb 1-emb m), which are used as the global recommendation feature; optionally, the features of the three domains uid, pid, and aid are embedded to obtain three embedding vectors, i.e. uid emb, pid emb, and aid emb, which are used as the personalized recommendation feature.
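A minimal PyTorch sketch of such an embedding lookup is given below for illustration; the vocabulary sizes, the embedding dimension, the value of m, and the sparse ids are all illustrative assumptions and do not limit the embodiment.

```python
# Sparse ids from the m global fields and from the uid/pid/aid fields are
# looked up as dense embedding vectors. All sizes below are assumptions.
import torch
import torch.nn as nn

m, emb_dim = 4, 8
global_tables = nn.ModuleList(nn.Embedding(1000, emb_dim) for _ in range(m))
uid_table = nn.Embedding(10000, emb_dim)
pid_table = nn.Embedding(10000, emb_dim)
aid_table = nn.Embedding(10000, emb_dim)

field_ids = torch.randint(0, 1000, (m,))       # one sparse id per global field
global_emb = torch.cat([t(field_ids[i]) for i, t in enumerate(global_tables)])  # emb 1..emb m
personal_emb = torch.cat([uid_table(torch.tensor(7)),
                          pid_table(torch.tensor(42)),
                          aid_table(torch.tensor(3))])                           # uid/pid/aid emb
print(global_emb.shape, personal_emb.shape)     # (m*emb_dim,), (3*emb_dim,)
```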
In the multiple neural network layers 403, the left part is the main body network 4031, illustrated here as a DNN network with 4 hidden layers; the right part is the bias network 4032, which includes 4 Gate NNs (i.e. activation sub-networks), the number of Gate NNs being consistent with the number of hidden layers in the main body network. Each Gate NN is a 2-layer neural network, where the activation function of the first activation layer is a ReLU function and the activation function of the second activation layer is 2 × sigmoid, so as to constrain the elements of each bias term to the value range [0, 2], with a default value of 1.
The global recommendation feature (emb 1-emb m) on the left side of the embedding layer is spliced with the personalized recommendation feature (uid emb, pid emb, and aid emb) on the right side to obtain the target splicing feature. The target splicing feature is used as the input of the 4 Gate NNs, which output 4 bias terms (i.e. the bias information); the 4 bias terms are respectively used to perform bias processing on the input features of the 4 hidden layers in the DNN network, the specific bias operation being element-wise multiplication, so that a personalized bias for the target user account is achieved.
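The wiring described above can be sketched in PyTorch as follows. The Gate NN structure (ReLU, then 2 × sigmoid so that every bias element lies in [0, 2]) and the element-wise bias follow the description above; the layer widths, the Gate NN hidden width, and the single sigmoid output are illustrative assumptions, and the gradient-flow restriction used during training (described below) is omitted here.

```python
# A minimal PPNet-style sketch; widths and output head are assumptions.
import torch
import torch.nn as nn

class GateNN(nn.Module):
    def __init__(self, concat_dim, out_dim, hidden=32):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(concat_dim, hidden), nn.Linear(hidden, out_dim)

    def forward(self, concat_feature):
        # first activation layer: ReLU; second activation layer: 2 * sigmoid
        return 2.0 * torch.sigmoid(self.fc2(torch.relu(self.fc1(concat_feature))))

class PPNetSketch(nn.Module):
    def __init__(self, global_dim, personal_dim, hidden_dims=(64, 64, 64, 64), n_out=1):
        super().__init__()
        concat_dim = global_dim + personal_dim
        dims = (global_dim,) + hidden_dims
        self.hidden = nn.ModuleList(nn.Linear(dims[i], dims[i + 1]) for i in range(len(hidden_dims)))
        self.gates = nn.ModuleList(GateNN(concat_dim, dims[i]) for i in range(len(hidden_dims)))
        self.out = nn.Linear(hidden_dims[-1], n_out)

    def forward(self, global_emb, personal_emb):
        concat = torch.cat([global_emb, personal_emb], dim=-1)   # target splicing feature
        h = global_emb
        for layer, gate in zip(self.hidden, self.gates):
            h = torch.relu(layer(h * gate(concat)))              # bias processing, then weighting
        return torch.sigmoid(self.out(h))                        # e.g. a predicted CTR

model = PPNetSketch(global_dim=32, personal_dim=24)
prob = model(torch.randn(32), torch.randn(24))
print(prob)
```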
In some embodiments, the training process of the recommendation model may include: acquiring global recommendation characteristics, personalized recommendation characteristics and resource recommendation results of at least one sample user account, wherein the resource recommendation results are used for indicating whether the sample user account clicks a recommended resource; based on the global recommendation feature, the personalized recommendation feature and the resource recommendation result of the at least one sample user account, performing iterative training on an initial recommendation model, wherein the initial recommendation model comprises an initial bias network and an initial main body network; in response to meeting the stop training condition, a recommendation model is obtained, the recommendation model including a subject network and a bias network.
If the resource recommendation result indicates that the user clicked the recommended resource, the behavior of the sample user account on the recommended resource can be used as a positive sample; otherwise, if the result indicates that the user did not click the recommended resource, the behavior of the sample user account on the recommended resource can be used as a negative sample. For positive samples, the number of plays and the duration of each play by the user can also be counted to measure the user's degree of preference for the recommended resource.
In the training process of the recommendation model, if the stop-training condition is not met, the server adjusts the parameters of the initial main body network and back-propagates gradients to the global recommendation features of the at least one sample user account, and adjusts the parameters of the initial bias network and back-propagates gradients to the personalized recommendation features of the at least one sample user account. In other words, although the input of the initial bias network (the target splicing feature) includes both the global recommendation features and the personalized recommendation features, the initial bias network passes gradients only to the personalized recommendation features of the at least one sample user account when back-propagating, and the global recommendation features of the at least one sample user account do not receive the back-propagated gradients of the initial bias network. This reduces the influence of the initial bias network on the convergence of each embedding vector in the existing global recommendation features.
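A minimal PyTorch sketch of this gradient-flow restriction is given below, using detach() as one way to cut the bias network's back-propagated gradients at the global recommendation feature; the shapes, the stand-in Gate NN, and the toy loss are illustrative assumptions.

```python
# Only the personalized embedding receives gradients through the bias-network input.
import torch

global_emb = torch.randn(32, requires_grad=True)     # global recommendation feature
personal_emb = torch.randn(24, requires_grad=True)   # personalized recommendation feature

# input of the bias network: global part is detached, personalized part is not
gate_input = torch.cat([global_emb.detach(), personal_emb])
bias_term = 2.0 * torch.sigmoid(torch.randn(32, 56) @ gate_input)   # stand-in Gate NN

# the main body network path still uses the (non-detached) global embedding
loss = ((global_emb * bias_term) ** 2).sum()
loss.backward()

print(personal_emb.grad is not None)   # True: bias-network gradients reach it
print(global_emb.grad is not None)     # True, but only via the main body path,
                                       # not through the detached bias-network input
```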
If the stop-training condition is met, training stops, and the initial main body network and the initial bias network from the last iteration are taken as the main body network and the bias network of the final recommendation model, respectively. Optionally, the stop-training condition may be that the number of iterations exceeds a target number, or that the loss function value is smaller than a loss threshold, where the target number may be any number greater than or equal to 1 and the loss threshold may be any number greater than or equal to 0; the content of the stop-training condition is not specifically limited in the embodiments of the present disclosure.
Experimental comparison shows that adding a personalized bias term to the input features of each hidden layer in the main body network through the Gate NNs in the bias network improves the accuracy of each predicted recommendation probability, so that the target prediction capability of the recommendation model is significantly improved. PPNet supports personalization of the DNN network parameters through the Gate NNs, thereby improving the prediction capability for a given target (such as CTR, LTR, WTR, or VCR), and can be applied to any prediction scenario based on a DNN model, such as personalized recommendation, advertisement recommendation, and DNN-based reinforcement learning.
Specifically, taking CTR as the predicted target as an example, PPNet may be applied to a CTR model for precisely ranking video resources. In the related art, AUC (Area Under the ROC Curve) is generally adopted as the evaluation index of the ranking process, but it has certain limitations: AUC measures the global ranking capability over all samples, while a real resource recommendation scenario pays more attention to the ranking capability among the different multimedia resources that different users may prefer. In this embodiment, WUAUC (Weighted User AUC, an AUC index weighted over the user dimension) is therefore adopted as the index for measuring the accuracy of the model, which more accurately reflects the ranking capability in the real online environment.
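A minimal sketch of such a weighted user-level AUC is given below; weighting each user's AUC by that user's sample count is an assumption, since the exact weighting scheme is not fixed here, and users whose samples contain only one class of label are skipped.

```python
# AUC computed per user, then averaged with per-user weights (assumed: sample counts).
import numpy as np
from sklearn.metrics import roc_auc_score

def wuauc(user_ids, labels, scores):
    user_ids, labels, scores = map(np.asarray, (user_ids, labels, scores))
    aucs, weights = [], []
    for uid in np.unique(user_ids):
        mask = user_ids == uid
        y, s = labels[mask], scores[mask]
        if y.min() == y.max():        # skip users with only positives or only negatives
            continue
        aucs.append(roc_auc_score(y, s))
        weights.append(mask.sum())
    return float(np.average(aucs, weights=weights))

print(wuauc(user_ids=[1, 1, 1, 2, 2, 2],
            labels=[1, 0, 1, 0, 1, 0],
            scores=[0.9, 0.2, 0.7, 0.4, 0.8, 0.3]))
```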
After PPNet is applied, offline evaluation shows that the WUAUC of unregistered users improves by 1.0pp (percentage point) and the WUAUC of logged-in users improves by 0.48pp, with the largest single-model improvement achieved on unregistered users.
Because unregistered users usually have fewer behaviors, their preference patterns differ considerably from those of logged-in users and the individual differences among users are large; PPNet can distinguish unregistered users from logged-in users well in the DNN network parameters.
After PPNet is applied to the CTR model for precisely ranking video resources, obvious online benefits are obtained when resource recommendation is performed based on PPNet: the next-day retention rate of unregistered devices (also called brand-new devices) improves by 0.239pp, the seven-day retention rate improves by 0.199pp, and the CTR of the first-screen resources of brand-new devices improves by 0.593pp; retention and experience indexes are significantly improved, and a large single-model improvement in the retention rate of brand-new devices is achieved. The next-day retention rate is the ratio of the number of users newly added on a given day who still log in on the 1st day after being added to the total number of users newly added on that day, and the seven-day retention rate is the ratio of the number of such users who still log in on the 7th day after being added to the total number of users newly added on that day.
In a multi-task training scenario, PPNet can also improve the prediction effect for multiple targets. Experimental comparison shows that PPNet improves the offline WUAUC of the long-play and effective-play targets by up to 0.2pp. Further, after PPNet is applied to a CTR model for precisely ranking same-city videos, the WUAUC rises by 0.2pp, and the benefit can be clearly observed in an online AB experiment. An online AB experiment means that, over the same time span, target crowds with similar composition are randomly assigned to an application that performs resource recommendation with PPNet and an application that performs resource recommendation without PPNet, and their feedback information is collected for benefit analysis.
Effective play refers to a playing behavior whose playing duration is longer than a first duration threshold; a user may click on a video resource by mistake (for example, through an accidental touch), and the playing in that case belongs to invalid play. Long play refers to a playing behavior whose playing duration is longer than a second duration threshold, and indicates that the user prefers the currently played video resource and watches it for a long time. Both the first duration threshold and the second duration threshold are values greater than or equal to 0, and the first duration threshold is generally smaller than the second duration threshold.
In some embodiments, PPNet may consume more CPU resources than the previous CTR model and increase request latency, so the server may also migrate the model training and prediction architecture to GPUs, for example by replacing the core module of the training and prediction framework with a TensorFlow core and accelerating computation online with GPUs. Compared with the CPU version, after GPU-accelerated computation the throughput of the online prediction service is 3 times the original, and the request latency is reduced by 65.5% (by 38 ms), which significantly improves the processing performance of the server.
FIG. 5 is a block diagram illustrating the logical structure of a resource recommendation device, according to an example embodiment. Referring to fig. 5, the apparatus includes a first acquisition unit 501, a second acquisition unit 502, a processing unit 503, and a recommendation unit 504.
A first obtaining unit 501 configured to perform obtaining global recommendation characteristics and personalized recommendation characteristics of a target user account, where the global recommendation characteristics are used to represent behavior related information of the target user account on recommended resources and resource information, and the personalized recommendation characteristics are used to represent resource preference information of the target user account determined based on the recommended resources;
a second obtaining unit 502 configured to perform obtaining bias information according to the global recommendation feature and the personalized recommendation feature, the bias information being used to represent a deviation between the global recommendation feature and the personalized recommendation feature;
a processing unit 503, configured to perform processing on the global recommendation feature according to the bias information, so as to obtain at least one recommendation probability, where the recommendation probability is used to represent a preference degree of the target user account for a multimedia resource;
and a recommending unit 504 configured to perform resource recommendation on the target user account based on the at least one recommendation probability.
According to the method provided by the embodiment of the present disclosure, the global recommendation feature and the personalized recommendation feature of each target user account are obtained, and bias information derived from these two different recommendation features is introduced into the processing of the global recommendation feature. The deviation caused by individual differences in this processing can thus be corrected, a recommendation probability that better reflects the resource preference of the user corresponding to the target user account is obtained, and resource recommendation performed based on the more accurate recommendation probability has higher accuracy.
In one possible implementation, based on the apparatus composition of fig. 5, the second acquisition unit 502 includes:
the splicing subunit is configured to perform splicing processing on the global recommended feature and the personalized recommended feature to obtain a target splicing feature;
and the activating subunit is configured to execute activating processing on the target splicing characteristic by calling the biasing network to obtain the biasing information.
In one possible implementation, the activation subunit is configured to perform:
inputting the target splicing characteristics into at least one activation sub-network in the bias network respectively, wherein one activation sub-network comprises at least one activation layer;
activating the target splicing characteristic through at least one activation layer in any activation sub-network in the bias network to obtain a bias item of any activation sub-network;
and repeatedly executing the operation of acquiring the bias items, and acquiring at least one bias item output by the at least one activated sub-network as the bias information.
In one possible implementation, the training process of the bias network includes:
acquiring global recommendation characteristics, personalized recommendation characteristics and resource recommendation results of at least one sample user account, wherein the resource recommendation results are used for indicating whether the sample user account clicks a recommended resource;
Based on the global recommendation feature, the personalized recommendation feature and the resource recommendation result of the at least one sample user account, performing iterative training on an initial recommendation model, wherein the initial recommendation model comprises an initial bias network and an initial main body network;
in response to meeting the stop training condition, a recommendation model is obtained, the recommendation model including a subject network for weighting the input features to obtain recommendation probabilities and a bias network for biasing the input features to obtain bias information.
In one possible implementation, the initial bias network is passed only to the personalized recommendation features of the at least one sample user account while back-propagating gradients during training of the recommendation model.
In one possible implementation, the bias information includes at least one bias term; based on the apparatus composition of fig. 5, the processing unit 503 includes:
an input subunit configured to perform inputting the at least one bias term and the global recommendation feature into a subject network, the subject network comprising at least one hidden layer, wherein the at least one bias term corresponds one-to-one with the at least one hidden layer, respectively;
And the processing subunit is configured to perform bias processing on the input features of the at least one hidden layer according to the at least one bias term, call the at least one hidden layer to perform weighting processing on the biased features, and output the at least one recommendation probability.
In one possible implementation, the processing subunit is configured to perform:
performing element multiplication on the output characteristics of the previous hidden layer and the bias items of the corresponding layers to obtain biased characteristics of any hidden layer;
and calling any hidden layer to carry out weighting treatment on the biased characteristics to obtain output characteristics of any hidden layer, and inputting the output characteristics of any hidden layer to the next hidden layer.
In one possible implementation, the first acquisition unit 501 is configured to perform:
acquiring global initial characteristics and personalized initial characteristics of the target user account, wherein the global initial characteristics comprise user information of the target user account, resource information of recommended resources and behavior information of the target user account on the recommended resources, and the personalized initial characteristics comprise user identification of the target user account, resource identification of the recommended resources and publisher identification of the recommended resources;
And respectively carrying out embedding processing on the global initial feature and the personalized initial feature to obtain the global recommended feature and the personalized recommended feature.
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
The specific manner in which the respective units perform the operations in the apparatus of the above embodiments has been described in detail in the embodiments related to the resource recommendation method, and will not be described in detail herein.
Fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure, where the computer device 600 may have a relatively large difference due to different configurations or performances, and may include one or more processors (Central Processing Units, CPU) 601 and one or more memories 602, where at least one program code is stored in the memories 602, and the at least one program code is loaded and executed by the processors 601 to implement the resource recommendation method provided in the foregoing embodiments. Of course, the computer device 600 may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described herein.
In an exemplary embodiment, a storage medium is also provided, e.g. a memory, comprising at least one instruction executable by a processor in the terminal to perform the resource recommendation method of the above embodiment. Alternatively, the above-described storage medium may be a non-transitory computer-readable storage medium, which may include, for example, a ROM (Read-Only Memory), a RAM (Random-Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, including one or more instructions executable by a processor of a terminal to perform the resource recommendation method provided in the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any adaptations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.