Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a convention should be interpreted in the sense in which one of skill in the art would generally understand the convention (e.g., "a system having at least one of A, B and C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
Before the embodiments of the present disclosure are disclosed in detail, key technical terms related to the embodiments of the present disclosure are described one by one, as follows:
Large language model: a model with a large-scale parameter count and powerful language processing capability. It can deeply analyze input text, capture semantic, grammatical, and language-usage information, and generate natural, accurate, and coherent text replies. These models are typically based on neural network architectures and are trained on extensive data to continually improve performance. The parameter count of a large language model is typically very large, often billions or even trillions of parameters; in embodiments of the present disclosure, however, the chosen large model has a smaller parameter count, typically on the order of tens of millions, due to the hardware limitations of the front-end equipment.
Token: at the most basic level, a token can be considered a component of a text string and is the basic unit of text analysis and understanding. For example, in English, words, punctuation marks, and even spaces can all be treated as distinct tokens. From a linguistic perspective, a token represents the smallest semantic unit in a language; at this level, each token carries a specific meaning or grammatical function, which is critical to understanding and analyzing text. When a computer processes text, splitting the text into tokens allows the language data to be processed and analyzed more efficiently. This process involves challenges such as determining word boundaries, identifying symbols, and handling special characters.
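As a minimal, hedged illustration of the above (a simple word-and-punctuation splitter, not the learned subword tokenizer a production model would use), text can be split into tokens as follows:

```python
import re

def simple_tokenize(text: str) -> list[str]:
    # Keep runs of word characters as one token and each punctuation
    # mark as its own token; spaces are discarded here for brevity.
    return re.findall(r"\w+|[^\w\s]", text)

print(simple_tokenize("How do I transfer money?"))
# ['How', 'do', 'I', 'transfer', 'money', '?']
```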
Currently, existing front-end artificial intelligence assistants are capable of intelligent interaction with users through techniques such as speech recognition and natural language processing. The user can communicate with the artificial intelligence assistant directly by voice or text, without navigating tedious key-press menus or waiting for a human customer service agent.
However, some drawbacks remain in the prior art solutions, as follows:
On the one hand, it is difficult for an artificial intelligence assistant built around traditional speech recognition and natural language processing technologies to meet the growing demands of users. A front-end intelligent customer service assistant based on a natural language processing model with stronger understanding and reasoning capabilities has become an urgent need, so the front-end equipment of a modern bank needs to be equipped with, or upgrade, a built-in intelligent assistant to cope with this challenge.
On the other hand, the scheme of deploying a traditional artificial intelligence model on a back-end server cannot guarantee the reliability of the artificial intelligence assistant service; in particular, when remote banking outlets suffer from unstable network environments, the availability of the artificial intelligence assistant service in the prior art is low.
In order to solve the technical problems existing in the prior art, an embodiment of the present disclosure provides a question-answering method for business handling, where the method includes: in a first round of question and answer, acquiring description information of a current business to be handled; based on the description information, obtaining business handling information corresponding to the description information through a preset large language model; in a round of question and answer other than the first, acquiring description information of the current business to be handled and historical keywords, where the historical keywords are extracted from historical description information; and, based on the description information and the historical keywords, obtaining business handling information corresponding to the description information through the preset large language model, where the preset large language model is arranged in local front-end equipment.
In the embodiments of the present disclosure, a large language model for service question answering is arranged in the front-end equipment. During actual service question answering, the large model arranged at the front end can directly output answers related to business handling, so that the service is unaffected when the network is unstable or server computing resources are strained, and service reliability is ensured. In addition, during question answering, historical keywords carrying the conversational context can be supplied alongside the current service description information, which improves the accuracy of queries to the large model and the reliability of its answers during business handling.
In summary, the embodiments of the present disclosure may improve user satisfaction and business handling efficiency through accurate and rapid service question answering, while reducing operating costs and server computing resource usage.
Fig. 1 schematically illustrates an application scenario diagram of a question-answering method for business handling according to an embodiment of the present disclosure.
As shown in fig. 1, an application scenario 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, and social platform software (by way of example only).
The terminal devices 101, 102, 103 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptops, desktop computers, and the like. Among these, the terminal devices 101, 102, 103 include the terminal devices of banking outlets that interact directly with people.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (e.g., the web page, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the question-answering method for business handling provided by the embodiments of the present disclosure may generally be executed by the terminal devices 101, 102, 103. Accordingly, the question-answering apparatus for business handling provided by the embodiments of the present disclosure may generally be provided in the terminal devices 101, 102, 103.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The question-answering method for business handling according to the disclosed embodiments will be described in detail below with reference to Figs. 2 to 5, based on the scenario described in Fig. 1.
Fig. 2 schematically illustrates a flow chart of a question-answering method for business handling according to an embodiment of the present disclosure.
As shown in fig. 2, the question-answering method for business handling of this embodiment includes operations S210 to S240, which may be executed by the terminal devices 101, 102, 103.
In a typical scenario, the user arrives at a banking outlet and handles business by interacting with terminal devices 101, 102, 103 in the outlet. In order to accurately find the business to be handled, or to obtain handling guidance for it, the user conducts one or more rounds of question and answer with the artificial intelligence assistant in the front-end equipment to confirm whether the information pushed by the assistant relates to the business the user intends to handle. The business handled by users is diverse and includes, for example, account inquiries, transaction flow explanations, and loan application guidance.
In operation S210, at the first round of question answering, description information of the current business to be handled is acquired.
In a typical scenario, when a user wakes up a terminal device 101, 102, 103 (i.e., the front-end device described later) in a banking outlet, the user presents a description of the intended business by voice or text, and the terminal device 101, 102, 103 collects this description information about the current business to be handled during the first round of question and answer.
It will be appreciated that the description information may take a direct form, i.e., directly telling the artificial intelligence assistant what business needs to be handled; for example, if the user tells the assistant that a transfer is needed, the machine can directly give the transfer service entrance and the corresponding prompt information. The description information may also take an indirect form, i.e., describing the current situation so that the artificial intelligence assistant recommends what business needs to be handled; for example, if the user tells the assistant that a bank card has been lost, the machine needs to analyze the situation and recommend the business to be handled to the user.
According to an embodiment of the present disclosure, acquiring the description information of the business to be handled in the first round of question and answer, or acquiring the description information and the historical keywords of the current business to be handled in a round other than the first, includes: directly acquiring the description information in text form; or indirectly acquiring the description information in text form, where the indirect acquisition includes: acquiring the description information in voice form; and converting the description information in voice form into description information in text form.
The above description information comes from multiple sources, depending on the interaction means adopted by the user; specifically, it may exist in voice form or in text form. Because the information must subsequently be input into the large language model, all collected description information is converted into text form.
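A minimal sketch of this normalization step follows. It assumes the front-end equipment exposes some on-device speech-to-text engine; the `SpeechRecognizer` interface and its `transcribe` method are hypothetical stand-ins, not an API named in this disclosure.

```python
from typing import Optional, Protocol

class SpeechRecognizer(Protocol):
    # Hypothetical interface for whatever on-device ASR engine is available.
    def transcribe(self, audio: bytes) -> str: ...

def acquire_description_text(user_input, input_mode: str,
                             asr: Optional[SpeechRecognizer] = None) -> str:
    # Normalize the user's input to text before it reaches the language model.
    if input_mode == "text":
        return str(user_input).strip()
    if input_mode == "voice":
        if asr is None:
            raise ValueError("voice input requires a speech recognizer")
        return asr.transcribe(user_input).strip()
    raise ValueError(f"unsupported input mode: {input_mode!r}")
```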
In operation S220, based on the description information, service handling information corresponding to the description information is obtained through a preset large language model.
In a typical scenario, the core of the artificial intelligence assistant mentioned above is the preset large language model, which is disposed directly in the terminal devices 101, 102, 103 rather than in the remote server 105, so that the preset large language model can be deployed directly on the front-end device and run directly on the computing resources of the front-end device. Considering that the front-end equipment has limited computing power, when selecting among existing non-commercial large language models on the market, a model with a small parameter count (on the order of tens of millions rather than billions, as noted above) is considered for training and use.
The description information is converted into data in the format accepted by the preset, trained large language model, and the trained large language model outputs the business handling information corresponding to the description information, where the business handling information includes one or more of: a target business handling entrance, a target business handling instruction, and a target business handling guide.
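To make the shape of this output concrete, a hedged sketch of a container for the business handling information follows; the field names are illustrative, as the disclosure only requires that one or more of these pieces of information be returned.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BusinessHandlingInfo:
    entrance: Optional[str] = None     # target business handling entrance
    instruction: Optional[str] = None  # target business handling instruction
    guide: Optional[str] = None        # target business handling guide
```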
According to an embodiment of the present disclosure, after acquiring the description information of the current business to be handled, or after acquiring the description information and the historical keywords of the current business to be handled, the method further includes: matching the description information against preset keyword entries; and, when a preset keyword entry is successfully matched, marking the keyword entry as a historical keyword and recording its number of occurrences.
The collected description information is usually a sentence formed of multiple words. After the description information is collected, it can be analyzed to extract key content related to business handling, using methods such as word frequency statistics, semantic analysis, or topic mining. In the embodiments of the present disclosure, considering the limited computing power of the front-end equipment and the characteristics of this assisted-business-handling scenario (for example, the description information consists of short sentences, which is unfavorable for word frequency statistics, and the descriptions essentially revolve around the business, producing strong context association), an easy-to-implement keyword matching scheme is adopted. Key content extraction can be implemented with a preset dictionary containing business keywords (the dictionary can be maintained and iterated continually): business-related words in the description information are matched against the keywords, and on a successful match the number of occurrences is recorded and counted, where the number of occurrences is counted over the current business handling process, for example, starting from the first round of question and answer.
For example, the preset keyword entries in the dictionary include terms highly related to the business, such as "find", "query", "loan", "transfer", and "transfer money"; in actual use, the keywords in the description information are extracted by matching against these terms.
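A minimal sketch of this matching-and-counting step, assuming a small illustrative dictionary and simple substring matching (a deployed system would maintain a much richer, continually iterated dictionary):

```python
from collections import Counter

# Illustrative preset dictionary of business keywords.
KEYWORD_DICTIONARY = {"find", "query", "loan", "transfer", "transfer money"}

def match_keywords(description: str, counts: Counter) -> list[str]:
    # Match preset keyword entries against the description; on success,
    # mark the entry as a historical keyword and increment its occurrence
    # count for the current business handling session.
    matched = [kw for kw in KEYWORD_DICTIONARY if kw in description.lower()]
    counts.update(matched)
    return matched

session_counts: Counter = Counter()  # reset when a new session begins
match_keywords("How do I transfer money to another account?", session_counts)
```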
In operation S230, in a round of question and answer other than the first, the description information of the current business to be handled and the historical keywords are acquired, where the historical keywords are extracted from historical description information.
In a typical scenario, when the user has not obtained effectively recommended business handling information from the artificial intelligence assistant in the first round of question and answer (or in a later round), the user asks the assistant again. The description information of the current round then forms one input to the subsequent large language model, and the historical keywords stored during the current business handling session are acquired as another input.
According to an embodiment of the present disclosure, in a round of question and answer other than the first, acquiring the description information and the historical keywords of the current business to be handled includes: for the historical keywords, acquiring those whose number of occurrences reaches a preset threshold and those that occurred in the previous round of question and answer.
Specifically, P keywords (P being a non-negative integer) with high occurrence counts (i.e., counts reaching the preset threshold) and Q keywords (Q being a non-negative integer) that occurred in the previous round of question and answer are obtained.
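A hedged sketch of this selection, reusing the session counter above; the threshold value and the deduplication choice are assumptions, as the disclosure fixes neither:

```python
from collections import Counter

def select_history_keywords(counts: Counter,
                            previous_round: list[str],
                            threshold: int = 2) -> list[str]:
    # P keywords whose occurrence counts reach the preset threshold.
    frequent = [kw for kw, n in counts.items() if n >= threshold]
    # Q keywords from the immediately preceding round of question and answer.
    recent = [kw for kw in previous_round if kw not in frequent]
    return frequent + recent  # the M = P + Q historical keywords
```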
In operation S240, based on the description information and the historical keywords, business handling information corresponding to the description information is obtained through the preset large language model, where the preset large language model is arranged in the local front-end equipment. The historical keywords comprise M historical keywords, where M is a positive integer.
The description information and the historical keywords are converted into data in the format accepted by the preset, trained large language model, and the trained large language model outputs the business handling information corresponding to the description information.
Fig. 3 schematically illustrates a flowchart of a business transaction information output method according to an embodiment of the present disclosure.
As shown in fig. 3, the business handling information output method of this embodiment includes operations S310 to S320, where operations S310 to S320 may at least partially implement operation S220 or S240.
In operation S310, the description information is converted into a token sequence, or the description information and the historical keywords are converted into a token sequence.
Converting text information into a token sequence mainly comprises the steps of word segmentation, mark addition, numerical conversion, serialization, and position-encoding addition. The conversion process is specifically as follows:
Fig. 4 schematically illustrates a flowchart of a method of word sequence conversion according to an embodiment of the present disclosure.
As shown in fig. 4, the token sequence conversion method of this embodiment includes operations S410 to S440, where operations S410 to S440 may at least partially implement the above operation S310.
In operation S410, the description information is segmented to obtain N description entries, where N is a positive integer.
The text of the description information is segmented by a tokenizer to obtain N description entries. For example, if the description information is "How do I transact a loan?", it may be segmented into the description entries ["loan", "how", "transact", "?"].
According to an embodiment of the disclosure, after converting the N description entries and the M historical keywords into the N first tokens and the M second tokens according to the preset entry mapping relationship, and before forming the token sequence based on the N first tokens and the M second tokens, the method further includes: splicing the N first tokens and the M second tokens, and adding a distinguishing mark between the N first tokens and the M second tokens to complete the distinction.
Specifically, the description information and the historical keywords are spliced, and the tokens of the description information and the tokens of the historical keywords are distinguished by a special distinguishing mark.
In operation S420, according to a preset entry mapping relationship, the N description entries and the M historical keywords are converted into N first tokens and M second tokens, respectively.
The preset entry mapping relationship comprises a plurality of mapping pairs, where the entry-to-token mapping recorded in each pair exists in the form of a numerical ID.
Following the example above, the description entries ["loan", "how", "transact", "?"] are converted into the numerical form [ID1, ID2, ID3, ID4], yielding the first tokens; similarly, the historical keywords are converted in the same way to form the second tokens.
In operation S430, a token sequence is formed based on the N first tokens and the M second tokens.
Specifically, the N first tokens and the M second tokens are fused and serialized into a format directly usable by the large language model, forming the token sequence.
In operation S440, position encodings are added to the first tokens and the second tokens in the token sequence.
Position encoding is performed so that the subsequent large model can learn the sequential relationships between the tokens.
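The following sketch ties operations S410-S440 together under stated assumptions: the separator and unknown-token IDs are invented for illustration, and position information is attached as a bare index, whereas a real model would add learned or sinusoidal position embeddings.

```python
SEP_ID = 0  # distinguishing mark between description and keyword tokens
UNK_ID = 1  # fallback for entries missing from the mapping

def build_token_sequence(description_entries: list[str],
                         history_keywords: list[str],
                         vocab: dict[str, int]) -> list[tuple[int, int]]:
    # S420: map the N description entries and M historical keywords to
    # numerical IDs (first and second tokens) via the preset mapping.
    first = [vocab.get(e, UNK_ID) for e in description_entries]
    second = [vocab.get(k, UNK_ID) for k in history_keywords]
    # Splice the two groups with a distinguishing mark between them.
    ids = first + [SEP_ID] + second
    # S430/S440: serialize and attach an explicit position to each token.
    return [(pos, tok_id) for pos, tok_id in enumerate(ids)]
```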
In operation S320, the token sequence is used as the input of the preset large language model to obtain the business handling information.
The formed token sequence is taken as the input of the large language model, which outputs the business handling information.
In the embodiments of the present disclosure, a large language model for service question answering is arranged in the front-end equipment. During actual service question answering, the large model arranged at the front end can directly output answers related to business handling, so that the service is unaffected when the network is unstable or server computing resources are strained, and service reliability is ensured. In addition, during question answering, historical keywords carrying the conversational context can be supplied alongside the current service description information, which improves the accuracy of queries to the large model and the reliability of its answers during business handling.
In summary, the embodiments of the present disclosure may improve user satisfaction and business handling efficiency through accurate and rapid service question answering, while reducing operating costs and server computing resource usage.
Fig. 5 schematically illustrates a flow chart of a model training method according to an embodiment of the present disclosure.
As shown in fig. 5, the model training method of this embodiment includes operations S510 to S520, where operation S520 includes operations S521 to S523.
In operation S510, a question-answer training set including input training data and output result training data and a pre-trained large language model are acquired.
The question-answer training set comprises input training data and output-result training data in one-to-one correspondence, simulating the questions and answers of an actual business handling process. The input training data also follows the inputs used by the model in the question-answering method for business handling described above: for a first round of question and answer, the input training data comprises training data of description information; otherwise, it comprises training data of description information and training data of historical keywords. The output-result training data, used as output labels, is training data of business handling information.
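For concreteness, a hedged sketch of one training example follows; the field names and serialization are illustrative, not a format specified by the disclosure:

```python
# One simulated question-answer pair from the training set.
training_example = {
    # Input training data: description information, plus historical
    # keywords when the simulated round is not the first.
    "description": "How do I transact a loan?",
    "history_keywords": ["loan"],  # empty list for a first-round example
    # Output-result training data, used as the output label.
    "business_handling_info": "Loan application guide: ...",
}
```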
The pre-trained large language model is a model that has already been pre-trained; in the embodiments of the present disclosure, this model is fine-tuned on the question-answer training set. This allows a model pre-trained on a large dataset to be further trained on a smaller, task-specific dataset to meet the needs of the scenario.
In operation S520, the training operation is repeatedly performed until a preset cutoff condition is reached.
The training operation mainly comprises forward propagation, loss calculation, backward propagation, and parameter updating. The preset cutoff conditions include: training for a preset number of iterations, the training data used reaching a certain scale, the model quality evaluation reaching a certain level, and the like. The training operation is specifically as follows:
In operation S521, a business handling information prediction result is output based on the input training data and the current pre-trained large language model.
Specifically, the model calculates according to the input data, outputs a prediction result, and completes forward propagation.
In operation S522, a loss value is calculated based on the output-result training data and the business handling information prediction result.
Specifically, the loss value is calculated from the output-result training data and the business handling information prediction result through a predetermined loss function (such as cross-entropy loss), completing the loss calculation.
In operation S523, parameters of the current pre-trained large language model are updated based on the loss value.
Specifically, backpropagation is performed using the loss value to compute the gradients of the model parameters, and an optimizer updates the parameters of the pre-trained large language model according to those gradients, completing the backpropagation and parameter update.
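A minimal PyTorch-style sketch of operations S520 to S523 under stated assumptions: the model returns token logits, the data loader yields (inputs, labels) ID tensors, and the optimizer and hyperparameters are illustrative choices, not values fixed by the disclosure.

```python
import torch
from torch import nn

def fine_tune(model: nn.Module, loader, epochs: int = 3, lr: float = 1e-4):
    loss_fn = nn.CrossEntropyLoss()           # S522: predetermined loss function
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):                   # cutoff: preset iteration count
        for inputs, labels in loader:
            logits = model(inputs)            # S521: forward propagation
            loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                           labels.reshape(-1))  # S522: loss calculation
            optimizer.zero_grad()
            loss.backward()                   # S523: backpropagation
            optimizer.step()                  # S523: parameter update
```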
Based on the above question-answering method for business handling, the present disclosure also provides a question-answering apparatus for business handling. The apparatus will be described in detail below in connection with fig. 6.
Fig. 6 schematically shows a block diagram of a question-answering apparatus for business handling according to an embodiment of the present disclosure.
As shown in fig. 6, the question-answering apparatus 600 for business handling of this embodiment includes a question acquisition module 610 and a question-answering module 620.
The question acquisition module 610 is configured to acquire description information of a current business to be handled in the first round of question and answer. In an embodiment, the question acquisition module 610 may be configured to perform operation S210 described above, which is not repeated here.
The question-answering module 620 is configured to obtain, based on the description information, business handling information corresponding to the description information through a preset large language model. In an embodiment, the question-answering module 620 may be configured to perform operation S220 described above, which is not repeated here.
The question acquisition module 610 is further configured to acquire, in a round of question and answer other than the first, description information of the current business to be handled and historical keywords, where the historical keywords are extracted from historical description information. In an embodiment, the question acquisition module 610 may be configured to perform operation S230 described above, which is not repeated here.
The question-answering module 620 is further configured to obtain, based on the description information and the historical keywords, business handling information corresponding to the description information through the preset large language model, where the preset large language model is arranged in the local front-end equipment. In an embodiment, the question-answering module 620 may be configured to perform operation S240 described above, which is not repeated here.
In the embodiments of the present disclosure, a large language model for service question answering is arranged in the front-end equipment. During actual service question answering, the large model arranged at the front end can directly output answers related to business handling, so that the service is unaffected when the network is unstable or server computing resources are strained, and service reliability is ensured. In addition, during question answering, historical keywords carrying the conversational context can be supplied alongside the current service description information, which improves the accuracy of queries to the large model and the reliability of its answers during business handling.
In summary, the embodiments of the present disclosure may improve user satisfaction and business handling efficiency through accurate and rapid service question answering, while reducing operating costs and server computing resource usage.
According to an embodiment of the disclosure, the question acquisition module is specifically configured to directly acquire the description information in text form, or to indirectly acquire the description information in text form, where the indirect acquisition includes: acquiring the description information in voice form; and converting the description information in voice form into description information in text form.
According to an embodiment of the present disclosure, the apparatus further comprises a keyword recording module configured to match the description information against preset keyword entries and, when a preset keyword entry is successfully matched, mark the keyword entry as a historical keyword and record its number of occurrences.
According to an embodiment of the disclosure, the question acquisition module is further configured to acquire, from the historical keywords, those whose number of occurrences reaches a preset threshold and those that occurred in the previous round of question and answer.
According to an embodiment of the present disclosure, the question-answering module includes: a token sequence conversion unit configured to convert the description information, or the description information and the historical keywords, into a token sequence; and a large model input unit configured to take the token sequence as the input of the preset large language model to obtain the business handling information.
According to an embodiment of the disclosure, the historical keywords include M historical keywords, M being a positive integer, and the token sequence conversion unit includes a word segmentation subunit, a token mapping subunit, a token sequence forming subunit, and a position-encoding addition subunit. The word segmentation subunit is configured to segment the description information to obtain N description entries, where N is a positive integer. The token mapping subunit is configured to convert the N description entries and the M historical keywords into N first tokens and M second tokens according to the preset entry mapping relationship. The token sequence forming subunit is configured to form the token sequence based on the N first tokens and the M second tokens. The position-encoding addition subunit is configured to add position encodings to the first tokens and the second tokens in the token sequence.
According to an embodiment of the disclosure, the token sequence forming subunit is specifically further configured to splice the N first tokens and the M second tokens, adding a distinguishing mark between the N first tokens and the M second tokens to complete the distinction.
According to an embodiment of the present disclosure, the training method of the preset large language model includes: acquiring a question-answer training set and a pre-trained large language model, wherein the question-answer training set comprises input training data and output result training data; and repeatedly executing training operation until reaching a preset cut-off condition, wherein the training operation comprises the following steps: outputting a business handling information prediction result based on the input training data and the current pre-trained large language model; calculating a loss value based on the output result training data and the business handling information prediction result; and updating parameters of the current pre-trained large language model based on the loss value.
Any number of the question acquisition module 610 and the question-answering module 620 may be combined into one module for implementation, or any one of them may be split into multiple modules, according to embodiments of the present disclosure. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the question acquisition module 610 and the question-answering module 620 may be implemented at least in part as hardware circuitry, such as a field-programmable gate array (FPGA), a programmable logic array (PLA), a system-on-chip, a system-on-substrate, a system-in-package, or an application-specific integrated circuit (ASIC); by hardware or firmware in any other reasonable manner of integrating or packaging circuitry; or by any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, at least one of the question acquisition module 610 and the question-answering module 620 may be at least partially implemented as a computer program module that, when executed, performs the corresponding functions.
Fig. 7 schematically illustrates a block diagram of an electronic device adapted to implement the question-answering method for business handling according to an embodiment of the present disclosure.
As shown in fig. 7, an electronic device 700 according to an embodiment of the present disclosure includes a processor 701 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. The processor 701 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 701 may also include on-board memory for caching purposes. The processor 701 may comprise a single processing unit or a plurality of processing units for performing different actions of the method flows according to embodiments of the disclosure.
In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are stored. The processor 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. The processor 701 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 702 and/or the RAM 703. Note that the program may be stored in one or more memories other than the ROM 702 and the RAM 703. The processor 701 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 700 may further include an input/output (I/O) interface 705, which is also connected to the bus 704. The electronic device 700 may also include one or more of the following components connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode-ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read therefrom is installed into the storage section 708 as needed.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 702 and/or RAM 703 and/or one or more memories other than ROM 702 and RAM 703 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the methods shown in the flowcharts. The program code, when executed in a computer system, causes the computer system to implement the question-answering method for business handling provided by embodiments of the present disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 701. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted or distributed in the form of a signal over a network medium, downloaded and installed via the communication section 709, and/or installed from the removable medium 711. The computer program may include program code that may be transmitted using any appropriate medium, including but not limited to wireless or wired media, or any suitable combination of the foregoing.
According to embodiments of the present disclosure, program code for carrying out the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, the "C" language, or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure may be combined in a variety of ways, even if such combinations are not explicitly recited in the present disclosure. In particular, the features recited in the various embodiments and/or claims of the present disclosure may be combined in various ways without departing from the spirit and teachings of the present disclosure, and all such combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. These examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.