Disclosure of Invention
To solve at least some of the technical problems in the prior art, an object of the present invention is to provide an image recognition system and method based on a spiking neural network.
The technical solution adopted by the invention is as follows:
an image recognition system based on a spiking neural network, comprising:
an upper computer, configured to control the operating state of the image recognition system, compute the image recognition accuracy of the system, and display the final recognition result;
an off-chip memory, configured to store image pixels, the labels of the calibration (labeled) dataset, and the image recognition results of the image recognition system;
a master control module, configured to control and manage the image recognition system;
an input coding module, configured to perform frequency (rate) coding according to the pixel values of the input image, a higher pixel value corresponding to a higher spike firing frequency;
an LIF neuron module, configured to model biological brain neurons using conductance-based LIF neurons and to update the membrane potentials of the conductance-based LIF neurons;
a spike timing information processing module, configured to update the timing information of the spike trains at each timestamp for unsupervised learning of the spiking neural network;
a TSTDP module, configured to compute the change in synaptic weight from the firing-time difference between the pre-synaptic neuron and the post-synaptic neuron according to the TSTDP method, so as to complete unsupervised learning of the spiking neural network;
and a full-voting module, configured to decode the spike trains output by the spiking neural network.
Further, the spiking neural network comprises an input layer, hidden layers, an output layer and a full-voting classification layer;
the neurons of the input layer correspond to the pixels of the input two-dimensional image; in the input layer, each pixel is converted into a spike train through frequency coding, and the input spikes are transmitted to the hidden layer; the corresponding region of the input layer serves as the receptive field of a hidden-layer neuron;
the connections between a hidden-layer neuron and its receptive field have independent synaptic weights, and hidden-layer neurons sharing the same receptive field compete with one another;
the output layer consists of the neurons of the last hidden layer and is fully connected to each classification neuron;
the full-voting classification layer consists of classification neurons, and the spike trains generated by the network are decoded by the full-voting output decoding mechanism to obtain the image recognition result of the network.
Further, the upper computer is configured to control the spiking neural network to read image pixels from the off-chip memory according to flag signals sent by the master control module; and
to read the operating state of the image recognition system through handshake signals and, after the image recognition system has finished processing one input image, to control the system to read the pixels of the next image from the off-chip memory, thereby updating the input image; after the image recognition system has completed its task, the master control module sends a task-completion flag signal, the upper computer reads the final results of the image recognition system, compares the recognition results with the true classes of the images to be recognized, and computes the image recognition accuracy of the system.
Further, the master control module schedules the activities of the image recognition system across the three stages of training, calibration and recognition; operating in a time-driven manner, it sends flag signals to the upper computer for information interaction according to a timestamp counter, sends state-update signals to the LIF neuron module and the TSTDP module, and time-multiplexes the neurons according to the network parameter configuration.
Further, the LIF neuron module updates the membrane potential of the conductance-based LIF neurons using the following formula:
where V is the neuron membrane potential; E_reset is the neuron resting potential; g_e is the excitatory synaptic conductance; E_exc is the excitatory synaptic voltage; g_i is the inhibitory synaptic conductance; E_inh is the inhibitory synaptic voltage; τ is the time constant; and dt is the time step.
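The formula referenced above appears only as a figure in the original and is not reproduced in the text. A standard conductance-based LIF membrane-potential equation consistent with the variables defined above (a reconstruction under that assumption, not a verbatim copy of the patent's formula) is:

```latex
\tau \frac{dV}{dt} = \left(E_{\mathrm{reset}} - V\right)
                   + g_e \left(E_{\mathrm{exc}} - V\right)
                   + g_i \left(E_{\mathrm{inh}} - V\right)
```

Discretizing with the Euler method and time step dt, which is presumably the form referred to later as formula (2), gives:

```latex
V(t+dt) = V(t) + \frac{dt}{\tau}\Big[\left(E_{\mathrm{reset}} - V(t)\right)
        + g_e(t)\left(E_{\mathrm{exc}} - V(t)\right)
        + g_i(t)\left(E_{\mathrm{inh}} - V(t)\right)\Big]
```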
Further, the TSTDP module implements unsupervised training using the following formulas:
w → w + A+ · pre · post2, if t = t_post (LTP)
w → w − A− · post1, if t = t_pre (LTD)
where w is the synaptic weight; pre is the trace of pre-synaptic spikes; and post1 and post2 are two different traces of post-synaptic spikes. When the TSTDP module detects that a pre-synaptic neuron fires a spike, i.e., at time t_pre, the LTD operation is triggered; when the TSTDP module detects that a post-synaptic neuron fires a spike, i.e., at time t_post, the LTP operation is triggered.
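The dynamics of the traces pre, post1 and post2 are not specified in the text. In typical triplet-STDP formulations (an assumption here; the time constants τ_pre, τ_1 and τ_2 are not given by the source), each trace decays exponentially and is incremented by 1 when the corresponding neuron fires:

```latex
\tau_{\mathrm{pre}} \frac{d\,\mathrm{pre}}{dt} = -\,\mathrm{pre}, \qquad
\tau_{1} \frac{d\,\mathrm{post1}}{dt} = -\,\mathrm{post1}, \qquad
\tau_{2} \frac{d\,\mathrm{post2}}{dt} = -\,\mathrm{post2}
```

with pre ← pre + 1 at each pre-synaptic spike and post1 ← post1 + 1, post2 ← post2 + 1 at each post-synaptic spike; in the LTP update, post2 is read at its value just before the increment.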
Another technical solution adopted by the invention is as follows:
An image recognition method based on a spiking neural network, comprising the following steps:
a training stage: training the synaptic weights between the input layer and the hidden layer, between hidden layers, and between the hidden layer and the output layer of the spiking neural network;
a calibration stage: training the synaptic weights between the output layer and the full-voting classification layer of the spiking neural network;
a recognition stage: the output layer votes for the full-voting classification layer through the full-voting mechanism to perform image recognition.
Further, the training stage comprises the following steps:
the master control module sets the image recognition system to training mode;
the upper computer receives the enable signal sent by the master control module and sends an input image from the off-chip memory to the image recognition system;
the input coding module converts the image pixels into spike firing frequencies and feeds them to the spike timing information processing module;
upon receiving a spike from a pre-synaptic neuron, the TSTDP module reads the post-synaptic spike timing information and updates the synaptic weight according to the TSTDP algorithm;
the LIF neuron module adjusts the membrane voltage according to the received spikes; if the membrane voltage reaches the threshold, the neuron fires a spike to the next layer and the membrane voltage is reset to the resting potential;
the above steps are repeated until the timestamp counter in the master control module reaches the preset value.
Further, the calibration stage comprises the following steps:
the master control module sets the image recognition system to calibration mode;
the upper computer receives the enable signal sent by the master control module and sends an input image and its label from the off-chip memory to the image recognition system;
the input coding module converts the image pixels into spike firing frequencies and feeds them to the spike timing information processing module;
the LIF neuron module adjusts the membrane voltage according to the received spikes; if the membrane voltage reaches the threshold, the neuron fires a spike to the next layer and the membrane voltage is reset to the resting potential;
the full-voting module reads the label of the input image and connects the classification neuron corresponding to the label with the output-layer neurons;
the synaptic weights between the classification neurons and the output-layer neurons are set to 0 before the calibration stage; each time an output-layer neuron fires a spike, the weight between that output-layer neuron and the classification neuron corresponding to the class of the input image is incremented by 1;
the above steps are repeated until the timestamp counter in the master control module reaches the preset value; the weights between the classification neurons and the output neurons are then normalized and nonlinearly transformed so that the weight values lie between 0 and 1.
Further, the recognition stage comprises the following steps:
the master control module sets the image recognition system to recognition mode;
the upper computer receives the enable signal sent by the master control module and sends an input image to be recognized from the off-chip memory to the image recognition system;
the input coding module converts the image pixels into spike firing frequencies and feeds them to the spike timing information processing module;
the LIF neuron module adjusts the membrane voltage according to the received spikes; if the membrane voltage reaches the threshold, the neuron fires a spike to the next layer and the membrane voltage is reset to the resting potential;
the full-voting module fully connects the output neurons to the classification neurons; spikes fired by the output neurons are sent to all classification neurons, and the full-voting module records and compares the number of spikes received by each classification neuron; the image class corresponding to the classification neuron that receives the most spikes is the recognition result; after the recognition result is written to the off-chip memory, the spike counts of the classification neurons are reset;
the above steps are repeated until the timestamp counter in the master control module reaches the preset value.
The invention has the following beneficial effects: it provides a full-voting output decoding mechanism that reduces information loss, effectively solving the problem of output-decoding information loss that the traditional voting mechanism causes in a spiking neural network.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
In the description of the present invention, it should be understood that the orientation or positional relationship referred to in the description of the orientation, such as the upper, lower, front, rear, left, right, etc., is based on the orientation or positional relationship shown in the drawings, and is only for convenience of description and simplification of description, and does not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality of" means two or more; "greater than", "less than", "exceeding" and the like are understood as excluding the stated number, while "above", "below", "within" and the like are understood as including the stated number. If "first" and "second" are used, they are only for distinguishing technical features and are not to be understood as indicating or implying relative importance, implicitly indicating the number of the indicated technical features, or implicitly indicating the precedence of the indicated technical features.
In the description of the present invention, unless otherwise explicitly limited, terms such as arrangement, installation, connection and the like should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present invention in combination with the specific contents of the technical solutions.
The image recognition hardware system of the on-chip self-learning spiking neural network is implemented on a digital circuit system that flexibly configures the network parameters and fully exploits parallelism, providing a solution for brain-inspired computing applications.
To achieve the above object, the present embodiment introduces a full-voting output decoding mechanism into the spiking neural network. The full-voting classification layer consists of classification neurons whose number equals the number of classes to be recognized; the neurons of the last hidden layer serve as output neurons and are fully connected to each classification neuron. The weights of the full-voting classification layer default to 0 during the network training stage, i.e., the full-voting classification layer does not participate in the training stage of the spiking neural network. In the calibration stage, the weights of the full-voting classification layer are updated according to the responses of the output neurons to the training samples, yielding the correspondence between the output neurons and the different classification neurons. In the recognition stage, each output neuron votes for all classification neurons rather than for only one of them. The vote count of a classification neuron reflects the response of the spiking neural network to the input image; the higher the vote count, the more significant the network's response to the class represented by that classification neuron.
The technical solution of on-chip self-learning for the spiking neural network image recognition hardware system based on the full-voting output decoding mechanism comprises three stages: a training stage, a calibration stage and a recognition stage. In the training stage, the spiking neural network model performs unsupervised learning on chip using the biologically plausible TSTDP method; after training converges, the synaptic weights between input neurons and hidden-layer neurons, between hidden-layer neurons, and between hidden-layer neurons and output neurons are obtained. In the calibration stage, a small labeled dataset is used to indicate the class to which each output neuron belongs; the synaptic weights between the output neurons and the classification neurons of the corresponding classes are adjusted according to the responses of the output neurons to the input images, and are normalized after calibration to obtain the synaptic weights between the output neurons and the classification neurons. In the recognition stage, the input image is recognized through the feed-forward operation of the spiking neural network; the output neurons vote for all classification neurons, and the class represented by the classification neuron with the highest vote count is the image recognition result of the spiking neural network.
The image recognition hardware system of the on-chip self-learning spiking neural network based on the full-voting output decoding mechanism comprises an upper computer, an off-chip memory, a master control module, an input coding module, an LIF neuron module, a spike timing information processing module, a TSTDP module and a full-voting module. The upper computer is responsible for information interaction with the spiking neural network hardware system: it controls the hardware system's reads and writes to the off-chip memory, enables the hardware system and updates the input image, and finally displays the image recognition results after the hardware system completes its task. The off-chip memory stores the image pixels and waits for read commands from the upper computer; during the recognition stage, the hardware system also writes the image recognition results into the off-chip memory. The master control module is responsible for controlling and managing the hardware system; it schedules the system's activities across the training, calibration and recognition stages, operates in a time-driven manner, sends flag bits to the upper computer for information interaction according to a timestamp counter, sends state-update signals to the LIF neuron module and the TSTDP module, and time-multiplexes the neurons according to the network parameter configuration. The input coding module performs frequency coding according to the pixel values of the input image, a higher pixel value corresponding to a higher spike firing frequency. The LIF neuron module models biological brain neurons using conductance-based LIF neurons and comprises a current accumulation module, a current delay module, a spike generation module, a refractory period module and a threshold adaptation module. The spike timing information processing module updates the timing information of the spike trains for unsupervised learning of the spiking neural network. The TSTDP module computes the change in synaptic weight from the firing-time difference between pre-synaptic and post-synaptic neurons according to the TSTDP method, completing unsupervised learning of the spiking neural network. The full-voting module decodes the spike trains output by the spiking neural network and connects the neurons of the last hidden layer with the classification neurons, thereby reducing output-decoding information loss and enhancing the fault tolerance of the spiking neural network's image recognition.
The above technical solution is described in detail below with reference to fig. 1 to 4.
This embodiment provides an image recognition hardware system for an on-chip self-learning spiking neural network based on a full-voting output decoding mechanism. The design idea is to apply the full-voting output decoding mechanism to the spiking neural network, construct a technical solution for unsupervised learning of the spiking neural network based on this mechanism, perform low-overhead, high-precision hardware fitting of the complex spiking neural network model, build an on-chip self-learning spiking neural network model on a digital circuit platform, and interact with an upper computer, thereby realizing an image recognition hardware system for a spiking neural network with flexibly configurable network parameters.
As shown in fig. 1, the present embodiment adopts the full-voting scheme as the output decoding mechanism of the spiking neural network, where the spiking neural network model comprises an input layer, hidden layers, an output layer and a full-voting classification layer. The neurons of the input layer correspond to the pixels of the input two-dimensional image; in the input layer, each pixel is converted into a spike train through frequency coding, and the input spikes are transmitted to the hidden layer. The corresponding region of the input layer serves as the receptive field of a hidden-layer neuron; the connections between a hidden-layer neuron and its receptive field have independent synaptic weights; hidden-layer neurons sharing the same receptive field compete with one another, and once one hidden-layer neuron fires a spike, the activity of the other hidden-layer neurons in the same receptive field is inhibited. In this way, the hidden-layer neurons form new receptive fields connected to the neurons of the next hidden layer, expanding the scale of the spiking neural network layer by layer. The output layer consists of the neurons of the last hidden layer and is fully connected to each classification neuron. The full-voting classification layer consists of classification neurons, and the spike trains generated by the network are decoded by the full-voting output decoding mechanism to obtain the image recognition result of the network.
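A rough software sketch of this topology is given below as a behavioural model only, not as the hardware implementation; the image size, receptive-field size, stride and number of competing neurons per field are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 28x28 input image, 3x3 receptive fields with stride 1,
# N_PER_FIELD hidden neurons competing within each receptive field.
IMG, FIELD, STRIDE, N_PER_FIELD = 28, 3, 1, 4
n_fields = ((IMG - FIELD) // STRIDE + 1) ** 2

# Each receptive field has its own independent synaptic weights to its
# competing hidden neurons.
hidden_weights = rng.random((n_fields, N_PER_FIELD, FIELD * FIELD))

def hidden_field_step(field_idx, field_spikes, membrane, threshold=1.0):
    """One timestamp for the hidden neurons sharing one receptive field.

    field_spikes: (FIELD*FIELD,) 0/1 input spikes from the receptive field.
    membrane:     (N_PER_FIELD,) membrane potentials of the competing neurons.
    Returns updated membrane potentials and the 0/1 output spikes.
    """
    membrane = membrane + hidden_weights[field_idx] @ field_spikes
    out_spikes = np.zeros(N_PER_FIELD, dtype=np.uint8)
    winner = int(np.argmax(membrane))
    if membrane[winner] >= threshold:
        out_spikes[winner] = 1      # the first neuron to fire wins...
        membrane[:] = 0.0           # ...and inhibits the others in its field
    return membrane, out_spikes
```

The output spikes of all hidden fields form the receptive fields of the next hidden layer, and the neurons of the last hidden layer feed the full-voting classification layer as output neurons.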
As shown in fig. 2, the image recognition hardware system of the on-chip self-learning spiking neural network based on the full-voting output decoding mechanism according to this embodiment comprises the following modules: an upper computer, an off-chip memory, a master control module, an input coding module, an LIF neuron module, a spike timing information processing module, a TSTDP module and a full-voting module.
The upper computer is responsible for information interaction with the spiking neural network hardware system and controls the hardware system to read image pixels from the off-chip memory according to the flag signal sent by the master control module of the hardware system. It also reads the operating state of the hardware system through handshake signals; after the hardware system has finished processing one input image, the upper computer controls the spiking neural network hardware system to read the pixels of the next image from the off-chip memory, thereby updating the input image. Finally, after the hardware system has completed its task, the master control module sends a task-completion flag signal; the upper computer then reads the final image recognition results of the hardware system, compares the recognition results with the true classes of the images to be recognized, computes the image recognition accuracy of the system, and displays the final results on the upper computer's interface.
The off-chip memory is responsible for storing the image pixels, the labels of the calibration dataset, and the image recognition results of the hardware system. The image pixels and the calibration dataset labels are stored before the hardware system starts operating; the hardware system then reads them in sequence as the upper computer issues read commands for the different stages. During the recognition stage, the hardware system writes the image recognition results into the off-chip memory and waits for the upper computer's read command.
The master control module is responsible for controlling and managing the spiking neural network hardware system. It schedules the system's activities across the training, calibration and recognition stages; operating in a time-driven manner, it sends flag signals to the upper computer for information interaction according to a timestamp counter, sends state-update signals to the LIF neuron module and the TSTDP module, and time-multiplexes the neurons according to the network parameter configuration.
The input coding module performs frequency coding according to the pixel values of the input image: a higher pixel value corresponds to a higher spike firing frequency.
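A minimal software sketch of this frequency (rate) coding is shown below, assuming 8-bit pixel values and Poisson-style spike generation; the exact pixel-to-frequency scaling and the timestamp duration are not specified in the source, so max_rate and dt_ms are assumptions:

```python
import numpy as np

def rate_encode(pixels, n_timesteps=350, max_rate=63.75, dt_ms=0.125, seed=0):
    """Convert pixel values in [0, 255] into 0/1 spike trains.

    A higher pixel value gives a higher spike firing frequency.
    max_rate is the assumed firing rate (Hz) for a pixel value of 255,
    and dt_ms is the assumed duration of one timestamp in milliseconds.
    """
    rng = np.random.default_rng(seed)
    rates_hz = np.asarray(pixels, dtype=float) / 255.0 * max_rate
    p_spike = rates_hz * dt_ms / 1000.0   # firing probability per timestamp
    shape = (n_timesteps,) + rates_hz.shape
    return (rng.random(shape) < p_spike).astype(np.uint8)

# Example: encode one 28x28 image of random pixels into spike trains.
spike_trains = rate_encode(np.random.randint(0, 256, size=(28, 28)))
```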
The LIF neuron module models biological brain neurons using conductance-based LIF neurons and updates their membrane potential according to formula (1).
The formula is discretized using the Euler method, giving formula (2).
V is the neuron membrane potential; E_reset is the neuron resting potential, taken as −65 mV; g_e is the excitatory synaptic conductance; E_exc is the excitatory synaptic voltage, taken as 0 mV; g_i is the inhibitory synaptic conductance; E_inh is the inhibitory synaptic voltage, taken as −100 mV; τ is the time constant, taken as 128 ms; dt is the time step, taken as 0.125 ms.
As shown in fig. 3, the LIF module comprises a current accumulation module, a current delay module, a spike generation module and a refractory period module. The current accumulation module receives the spikes transmitted by the pre-synaptic neurons, combines each spike with the conductance parameter corresponding to the type of the pre-synaptic neuron to convert it into an excitatory or inhibitory current, accumulates one by one the effect of all pre-synaptic spikes on the current of the post-synaptic neuron, and updates the conductance parameters at each timestamp. The current delay module combines the current from the current accumulation module with the membrane potential of the post-synaptic neuron to obtain the updated membrane potential of the neuron. The spike generation module compares the membrane potential of the neuron with the threshold voltage: if the membrane potential is below the threshold, the neuron does not fire; if the membrane potential exceeds the threshold, the neuron fires a spike. The refractory period module sets the state of the neuron to refractory mode after the neuron fires a spike; during the refractory period the neuron does not respond to spikes fired by pre-synaptic neurons.
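A minimal behavioural sketch of one conductance-based LIF neuron update, using the parameter values given above, is shown below; the threshold voltage, the conductance time constants and the refractory length are assumptions, since the source does not state them, and the sign of E_INH is taken as negative so that inhibitory input lowers the membrane potential:

```python
import numpy as np

# Values from the description; the remaining constants are assumptions.
E_RESET, E_EXC, E_INH = -65.0, 0.0, -100.0   # mV
TAU_MS, DT_MS = 128.0, 0.125                 # membrane time constant, time step
V_THRESH = -52.0                             # mV, assumed firing threshold
TAU_GE_MS = TAU_GI_MS = 1.0                  # assumed conductance time constants
REFRAC_STEPS = 40                            # assumed refractory period (5 ms)

def lif_step(v, g_e, g_i, refrac, exc_in, inh_in):
    """One timestamp of a conductance-based LIF neuron (Euler discretization)."""
    if refrac > 0:
        # Refractory period: the neuron ignores pre-synaptic spikes.
        return v, g_e, g_i, refrac - 1, 0
    # Current accumulation: incoming spikes raise the synaptic conductances.
    g_e = g_e * np.exp(-DT_MS / TAU_GE_MS) + exc_in
    g_i = g_i * np.exp(-DT_MS / TAU_GI_MS) + inh_in
    # Membrane-potential update (Euler method).
    v += (DT_MS / TAU_MS) * ((E_RESET - v) + g_e * (E_EXC - v) + g_i * (E_INH - v))
    # Spike generation: fire and reset when the threshold is reached.
    if v >= V_THRESH:
        return E_RESET, g_e, g_i, REFRAC_STEPS, 1
    return v, g_e, g_i, 0, 0
```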
The spike timing information processing module is responsible for updating the timing information of the spike trains at each timestamp and is used for unsupervised learning of the spiking neural network.
The TSTDP module computes the change in synaptic weight from the firing-time difference between the pre-synaptic neuron and the post-synaptic neuron according to the TSTDP method, completing unsupervised learning of the spiking neural network. The TSTDP unsupervised training algorithm is implemented according to formulas (3) and (4).
w → w + A+ · pre · post2, if t = t_post (LTP)    Formula (3)
w → w − A− · post1, if t = t_pre (LTD)    Formula (4)
where w is the synaptic weight; pre is the trace of pre-synaptic spikes; post1 and post2 are two different traces of post-synaptic spikes; A+ = 2^−7 = 0.0078125 and A− = 2^−13 ≈ 0.000122. When the TSTDP module detects that a pre-synaptic neuron fires a spike, i.e., at time t_pre, the LTD operation is triggered; when the TSTDP module detects that a post-synaptic neuron fires a spike, i.e., at time t_post, the LTP operation is triggered.
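A minimal software sketch of this event-driven TSTDP update follows; the weight update itself follows formulas (3) and (4), while the trace time constants and the clipping of w to [0, 1] are assumptions not stated in the source:

```python
import numpy as np

A_PLUS, A_MINUS = 2 ** -7, 2 ** -13        # learning rates from the description
TAU_PRE, TAU_1, TAU_2 = 20.0, 20.0, 40.0   # assumed trace time constants (ms)
DT_MS = 0.125

class TstdpSynapse:
    """One synapse with a pre-synaptic trace and two post-synaptic traces."""

    def __init__(self, w=0.5):
        self.w = w
        self.pre = self.post1 = self.post2 = 0.0

    def step(self, pre_spike, post_spike):
        # All traces decay exponentially at every timestamp.
        self.pre *= np.exp(-DT_MS / TAU_PRE)
        self.post1 *= np.exp(-DT_MS / TAU_1)
        self.post2 *= np.exp(-DT_MS / TAU_2)
        if pre_spike:    # t = t_pre: LTD, formula (4)
            self.w = max(0.0, self.w - A_MINUS * self.post1)
            self.pre += 1.0
        if post_spike:   # t = t_post: LTP, formula (3); post2 read pre-increment
            self.w = min(1.0, self.w + A_PLUS * self.pre * self.post2)
            self.post1 += 1.0
            self.post2 += 1.0
```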
The full-voting module is responsible for decoding the spike trains output by the spiking neural network; it connects the neurons of the last hidden layer with the classification neurons, thereby reducing output-decoding information loss and enhancing the fault tolerance of the spiking neural network's image recognition. In the recognition stage, the full-voting module writes the hardware system's recognition result for each image to be recognized into the off-chip memory.
As shown in fig. 4, the workflow of the spiking neural network image recognition hardware system based on the full-voting output decoding mechanism proposed in this embodiment comprises three stages: the training stage, the calibration stage and the recognition stage.
In the training stage, the master control module sets the spiking neural network image recognition hardware system to training mode. The upper computer receives the enable signal sent by the master control module and sends an input image from the off-chip memory to the hardware system. The input coding module converts the image pixels into spike firing frequencies and feeds them to the spiking neural network processing module. Upon receiving a spike from a pre-synaptic neuron, the TSTDP module reads the post-synaptic spike timing information module and updates the synaptic weight according to the TSTDP algorithm. Meanwhile, the LIF neuron module adjusts the membrane voltage according to the received spikes; if the membrane voltage reaches the threshold, the neuron fires a spike to the next layer and the membrane voltage is reset to the resting potential. At that moment, the TSTDP module reads the pre-synaptic spike timing information module and updates the synaptic weight according to the TSTDP algorithm. These steps are repeated until the timestamp counter in the master control module reaches the preset value, whereupon an image-update signal is sent to the upper computer and the next input image is loaded into the hardware system from the off-chip memory. Training mode continues until the image counter in the master control module reaches the total number of training samples, after which the spiking neural network hardware system enters the calibration stage.
In the calibration stage, the master control module sets the spiking neural network image recognition hardware system to calibration mode. The upper computer receives the enable signal sent by the master control module and sends an input image and its label from the off-chip memory to the hardware system. The input coding module converts the image pixels into spike firing frequencies and feeds them to the spiking neural network processing module. The LIF neuron module adjusts the membrane voltage according to the received spikes; if the membrane voltage reaches the threshold, the neuron fires a spike to the next layer and the membrane voltage is reset to the resting potential. The full-voting module meanwhile reads the label of the input image and connects the classification neuron corresponding to the label with the output-layer neurons. The synaptic weights between the classification neurons and the output-layer neurons are set to 0 before the calibration stage. During calibration, each time an output-layer neuron fires a spike, the weight between that output-layer neuron and the classification neuron corresponding to the class of the input image is incremented by 1. The synaptic weights between the classification neurons and the output-layer neurons thus grow with the number of spikes fired by the output-layer neurons, realizing the training of the full-voting classification layer's synaptic weights. These steps are repeated until the timestamp counter in the master control module reaches the preset value, whereupon an image-update signal is sent to the upper computer and the next input image is loaded into the hardware system from the off-chip memory. Calibration mode continues until the image counter in the master control module reaches the total number of calibration samples; the full-voting module then normalizes the synaptic weights between the output-layer neurons and the classification-layer neurons. After normalization is completed, the spiking neural network hardware system enters the recognition stage.
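A behavioural sketch of this calibration procedure is given below; the exact normalization and the nonlinear conversion into the range 0 to 1 are not specified in the source, so a per-output-neuron normalization followed by a square-root compression is used purely as an illustrative assumption:

```python
import numpy as np

N_OUTPUT, N_CLASSES = 100, 10                    # illustrative sizes
vote_weights = np.zeros((N_OUTPUT, N_CLASSES))   # set to 0 before calibration

def calibration_step(output_spikes, label):
    """For every output neuron that fired at this timestamp, add 1 to its
    weight toward the classification neuron of the input image's class."""
    vote_weights[output_spikes == 1, label] += 1.0

def finish_calibration():
    """Normalize the accumulated counts and compress them into [0, 1]."""
    totals = vote_weights.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1.0                    # avoid division by zero
    vote_weights[:] = np.sqrt(vote_weights / totals)  # assumed nonlinearity
```

calibration_step would be called at every timestamp of every calibration image, and finish_calibration once after the last calibration image has been processed.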
In the recognition stage, the master control module sets the spiking neural network image recognition hardware system to recognition mode. The upper computer receives the enable signal sent by the master control module and sends an image to be recognized from the off-chip memory to the hardware system. The input coding module converts the image pixels into spike firing frequencies and feeds them to the spiking neural network processing module. The LIF neuron module adjusts the membrane voltage according to the received spikes; if the membrane voltage reaches the threshold, the neuron fires a spike to the next layer and the membrane voltage is reset to the resting potential. The full-voting module fully connects the output neurons to the classification neurons, so spikes fired by the output neurons are sent to all classification neurons, and the full-voting module records the number of spikes received by each classification neuron. These steps are repeated until the timestamp counter in the master control module reaches the preset value, at which point the full-voting module tallies the spikes received by each classification neuron. The class corresponding to the classification neuron with the largest spike count is the recognition result for the image. The full-voting module writes the recognition result to the off-chip memory, sends an image-update signal to the upper computer, and loads the next image to be recognized into the hardware system from the off-chip memory. The recognition stage continues until the image counter in the master control module reaches the total number of test samples; the master control module then sends an enable signal to the upper computer, which reads the recognition results of the whole test set from the hardware system, compares them with the true classes of the test samples, computes the recognition accuracy of the spiking neural network hardware system, and prints the results on the screen.
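A corresponding sketch of the full-voting recognition step is shown below. It assumes the calibrated vote_weights produced above and weights each output neuron's spikes by them when tallying votes; whether the hardware weights the received spikes by the calibrated synaptic weights or simply counts them is not stated explicitly, so this weighted form is an interpretation:

```python
import numpy as np

def recognize(output_spike_counts, vote_weights):
    """Full-voting output decoding for one image.

    output_spike_counts: (N_OUTPUT,) spikes fired by each output neuron.
    vote_weights:        (N_OUTPUT, N_CLASSES) calibrated voting weights.
    Every output neuron votes for all classification neurons; the class whose
    classification neuron collects the most votes is the recognition result.
    """
    votes = output_spike_counts @ vote_weights   # votes received per class
    return int(np.argmax(votes)), votes

# After the result is written to off-chip storage, the spike counters are
# reset before the next image, e.g. output_spike_counts[:] = 0.
```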
In summary, compared with the prior art, this embodiment has the following beneficial effects: 1. it provides a full-voting output decoding mechanism, solving the problem of output-decoding information loss that the traditional voting mechanism causes in a spiking neural network; 2. it provides a technical solution for unsupervised learning of a spiking neural network based on the full-voting output decoding mechanism; 3. it provides a technical solution for on-chip self-learning in a spiking neural network image recognition hardware system, achieving on-chip training of the spiking neural network and image recognition.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the described functions and/or features may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in a separate physical device or software module. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer, given the nature, function, and internal relationship of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is defined by the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the foregoing description of the specification, reference to the description of "one embodiment/example," "another embodiment/example," or "certain embodiments/examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.