Disclosure of Invention
In view of this, embodiments of the present disclosure provide an image stylization generation method, an image stylization generation apparatus, and an electronic device, which at least partially solve the problems in the prior art.
In a first aspect, an embodiment of the present disclosure provides an image stylization generation method, including:
obtaining a plurality of images containing a target object displayed on an interactive interface, wherein the target object forms a first graphic region in the images;
determining whether the target object is in a static state based on a variation trend of the first graphic region in time series;
in response to determining that the target object is in a static state, selecting a set of image processing parameters from a plurality of sets of image processing parameters stored in a preset lightweight model to form first image processing parameters;
and converting the image to be displayed in the current interactive interface into a first stylized image corresponding to the target object in real time within a first time period by using the first image processing parameter and the lightweight model.
According to a specific implementation manner of the embodiment of the present disclosure, after the image to be displayed in the current interactive interface is converted into the first stylized image corresponding to the target object in real time within the first time period, the method further includes:
displaying a transition image of the target object in real-time in the interactive interface for a second time period after the first time period.
According to a specific implementation manner of the embodiment of the present disclosure, after the displaying the transition image of the target object in the interactive interface in real time, the method further includes:
displaying a native image of the target object in real time in the interactive interface within a third time period after the second time period, wherein the native image is an image which is not subjected to stylization processing.
According to a specific implementation manner of the embodiment of the present disclosure, the displaying the transition image of the target object in the interactive interface in real time includes:
acquiring n stylized images displayed in the second time period and n native images corresponding to the n stylized images, wherein the native images are images that have not undergone stylization processing;
setting a first transparency (n-i)/n for the ith stylized image in the n stylized images, and setting a second transparency i/n for the ith native image in the n native images;
and displaying the stylized image with the first transparency and the native image with the second transparency in an overlapping mode.
According to a specific implementation manner of the embodiment of the present disclosure, after displaying the native image of the target object in the interactive interface in real time, the method further includes:
selecting one group of image processing parameters from a plurality of groups of image processing parameters stored in a preset lightweight model in a fourth time period after the third time period to form second image processing parameters;
and converting the image to be displayed in the current interactive interface into a second stylized image corresponding to the target object in real time within a fourth time period based on the second image processing parameter.
According to a specific implementation manner of the embodiment of the present disclosure, the acquiring a plurality of images including a target object displayed on an interactive interface includes:
collecting video content in the interactive interface to obtain a video file containing a plurality of video frames;
and selecting a plurality of video frames from the video file to form a plurality of images containing the target object.
According to a specific implementation manner of the embodiment of the present disclosure, the selecting a plurality of video frames from the video file to form a plurality of images including the target object includes:
carrying out target object detection on video frames in the video file to obtain an image sequence containing a target object;
judging whether a first graphic region in a current video frame is the same as the first graphic region in a previous video frame in the image sequence;
and deleting the current video frame from the image sequence in response to the first graphic region in the current video frame being the same as the first graphic region in the previous video frame.
According to a specific implementation manner of the embodiment of the present disclosure, after the multiple images including the target object displayed on the interactive interface are acquired, the method further includes:
selecting a plurality of structuring elements with different orientations;
performing detail matching on the plurality of images by using each of the plurality of structuring elements to obtain a filtered image;
determining the gray-level distribution of the filtered image to obtain the number of pixels present at each of a plurality of gray levels in the filtered image;
weighting the gray levels according to their pixel counts, wherein the weighted average gray value is used as a threshold;
carrying out binarization processing on the filtered image based on the threshold value;
and taking the image after the binarization processing as an edge image of the target object.
According to a specific implementation manner of the embodiment of the present disclosure, the converting, in real time, an image to be displayed in a current interactive interface into a stylized image corresponding to the target object by using the image processing parameter and the lightweight model includes:
selecting a plurality of convolutional layers and a pooling layer from the lightweight model, wherein the pooling layer adopts average pooling;
setting feature representations of the image to be displayed and the stylized image on the convolutional layers and the pooling layer;
constructing a minimization loss function based on the feature representation;
generating a stylized image corresponding to the target object based on the minimization loss function.
In a second aspect, an embodiment of the present disclosure provides an image stylization generating apparatus, including:
an acquisition module, configured to acquire a plurality of images containing a target object displayed on an interactive interface, wherein the target object forms a first graphic region in the images;
a determining module, configured to determine whether the target object is in a static state based on a variation trend of the first graphic region in a time series;
a selection module, configured to select, in response to determining that the target object is in a static state, a set of image processing parameters from a plurality of sets of image processing parameters stored in a preset lightweight model to form first image processing parameters;
and a conversion module, configured to convert, in real time within a first time period, an image to be displayed in the current interactive interface into a first stylized image corresponding to the target object by using the first image processing parameters and the lightweight model.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image stylization generation method of any one of the preceding first aspects or any implementation manner of the first aspect.
In a fourth aspect, the disclosed embodiments also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the image stylization generation method in the foregoing first aspect or any implementation manner of the first aspect.
In a fifth aspect, the disclosed embodiments also provide a computer program product, the computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the image stylization generation method in the foregoing first aspect or any implementation manner of the first aspect.
The image stylization generation scheme in the embodiments of the disclosure comprises: obtaining a plurality of images containing a target object displayed on an interactive interface, wherein the target object forms a first graphic region in the images; determining whether the target object is in a static state based on the variation trend of the first graphic region in a time series; in response to determining that the target object is in a static state, selecting one set of image processing parameters from a plurality of sets of image processing parameters stored in a preset lightweight model to form first image processing parameters; and converting, in real time within a first time period, the image to be displayed in the current interactive interface into a first stylized image corresponding to the target object by using the first image processing parameters and the lightweight model. Through this scheme, a stylization effect can be applied at random while reducing the computational load on the electronic device, improving the user experience.
Detailed Description
The embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure of the present disclosure. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be carried into practice or applied to various other specific embodiments, and various modifications and changes may be made in the details within the description and the drawings without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides an image stylization generation method. The image stylization generation method provided by the embodiment may be executed by a computing device, which may be implemented as software or as a combination of software and hardware, and may be integrally provided in a server, a terminal device, or the like.
Referring to fig. 1, an image stylization generation method provided in an embodiment of the present disclosure includes the following steps:
S101, a plurality of images containing a target object displayed on an interactive interface are obtained, and the target object forms a first graphic region in the images.
The scheme of the embodiments of the disclosure can be applied to an electronic device with data processing capability, including its hardware and the software installed on it. The electronic device may also host various applications, such as image processing applications, video playback applications, and social applications.
The interactive interface is a window running in an application, and an image or video containing the target object is displayed on it. The target object is a specific object defined in the present disclosure; it has a certain shape, and by changing that shape, different shape-based commands can be formed. For example, the target object may be a human body, which forms different postures with its limbs and can thereby constitute different posture commands. Alternatively, the target object may be various gestures, and different gesture instructions may be expressed through gestures such as a thumbs-up.
The target object occupies a certain position and area in the interactive interface; correspondingly, its projection on the interactive interface forms a first graphic region, which can be displayed in the plurality of images formed in the interactive interface.
The electronic device may obtain, remotely or locally, through a wired or wireless connection, a plurality of images (a target image sequence) captured of the target object and played on the interactive interface. The interactive interface may be an interface for displaying images captured of the target object; for example, it may be the image-capture interface of one of the applications installed on the execution subject. The target object may be a person being photographed; for example, it may be a user taking a selfie with the execution subject. The plurality of images may also be an image sequence used for moving object detection. In general, the plurality of images may include all or part of the image sequence in which the target object was photographed, including the image currently displayed on the interactive interface. In some cases, the plurality of images may comprise a preset number of images that include the image currently displayed on the interactive interface.
S102, determining whether the target object is in a static state based on the variation trend of the first graphic region in the time series.
The moving object detection can be performed on the plurality of images, and the action information corresponding to each image in the plurality of images is determined. Since the plurality of images usually include certain time information (for example, image capturing time or image forming time) during formation, time on the plurality of images can be extracted to form a time series. Based on the time sequence, the images may be arranged in order according to the chronological order, so that the action information (e.g., action instructions) included on the images is determined based on the dimension of time.
The action information represents the motion states that the target object assumes in sequence over the time series; a state may be a moving state or a static state. For an image among the plurality of images, its action state may be determined from the moving distance, relative to an earlier image (either the adjacent image or one a preset number of frames earlier), of the region composed of pixels that move on the target interface. The moving distance may be, for example, the maximum of the moving distances of the pixels in that region, or the average of those distances. If the moving distance is greater than or equal to a preset distance threshold, the action state corresponding to the image is determined to be a moving state. Alternatively, a moving speed may be determined from the moving distance and the playback-time difference between the image and the earlier image; if the speed is greater than or equal to a preset speed threshold, the action state corresponding to the image is determined to be a moving state.
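For illustration only, the following is a minimal Python sketch of such a static-state check, using dense optical flow as the per-pixel displacement field; the function name, the flow parameters, and the thresholds are hypothetical examples, not values taken from the disclosure:

```python
import cv2
import numpy as np

def is_static(prev_gray, curr_gray, time_delta,
              dist_threshold=5.0, speed_threshold=30.0):
    """Return True if the target object is judged static between two frames.
    Hypothetical helper; thresholds are illustrative."""
    # Dense optical flow approximates the per-pixel moving distance.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitudes = np.linalg.norm(flow, axis=2)
    moving = magnitudes > 0.5                 # pixels treated as "moving"
    if not moving.any():
        return True
    distance = magnitudes[moving].max()       # or .mean(), as the text allows
    speed = distance / max(time_delta, 1e-6)  # displacement over playback time
    return distance < dist_threshold and speed < speed_threshold
```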
Generally, the shape command represented by the first graphic region formed by the target object in the static state is the action command the user actually intends, whereas the shape formed by the target object while in motion is usually an intermediate, transient shape on the way to that command.
Specifically, the determination may be made based on a change in state of the plurality of images in time series. When the state of the target object on the plurality of images is detected to be converted from the motion state to the static state, the graphic instruction represented by the first graphic area in the static state is resolved into an operation instruction of the target object. The operation instruction can be expressed in various ways, and the form of the operation instruction can include but is not limited to at least one of the following: numbers, words, symbols, level signals, etc.
S103, in response to determining that the target object is in a static state, one set of image processing parameters is selected from multiple sets of image processing parameters stored in a preset lightweight model to form first image processing parameters.
A lightweight model is built into the electronic device and is used to stylize images the device receives. To reduce resource consumption on electronic devices such as mobile phones, the model must be able to stylize an input image effectively while occupying few resources; the present disclosure therefore designs a dedicated lightweight model. Referring to fig. 2, the lightweight model is designed as a neural network model comprising convolutional layers, pooling layers, and sampling layers. To improve the computational efficiency of the neural network and reduce the computational complexity on the electronic device, no fully connected layer is provided in the scheme of the disclosure.
The convolutional layers are characterized mainly by the convolution kernel size and the number of input feature maps. Each convolutional layer can contain multiple feature maps of the same size; weights are shared within a layer, and the kernel sizes within each layer are consistent. A convolutional layer performs convolution on the input image and extracts its layout features.
A sampling layer can be connected after the convolutional feature extraction layer; it computes local averages of the input image and performs secondary feature extraction. Connecting sampling layers with convolutional layers helps ensure that the neural network model is robust to the input image.
To speed up training of the neural network model, a pooling layer is placed after the convolutional layer. The pooling layer processes the convolutional output with average pooling, which improves the gradient flow of the neural network and yields more expressive results.
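As a hedged illustration, a minimal PyTorch sketch of such a fully convolutional lightweight model is given below; the layer counts, channel widths, and kernel sizes are assumptions, not values from the disclosure:

```python
import torch
import torch.nn as nn

class LightweightStylizer(nn.Module):
    """Sketch: convolution + average pooling + sampling (upsampling) layer,
    with no fully connected layer, as described above. Sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # weight-shared conv
            nn.ReLU(inplace=True),
            nn.AvgPool2d(2),                              # average pooling
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2, mode='nearest'),  # sampling layer
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # x: (batch, 3, H, W) input image; output has the same spatial size.
        return self.decode(self.encode(x))
```

Omitting the fully connected layer keeps the parameter count small and lets the network accept inputs of arbitrary resolution.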
The lightweight model contains different parameters, and different artistic styles can be produced on the same lightweight model by setting these parameters. Specifically, when the target object is determined to be in a static state, a set of image processing parameters may be selected, randomly or in a specified manner, from the plurality of sets of image processing parameters stored in the preset lightweight model to form the first image processing parameters.
And S104, converting the image to be displayed in the current interactive interface into a first stylized image corresponding to the target object in real time within a first time period by using the first image processing parameter and the lightweight model.
After the first image processing parameters are acquired, the stylization type can be set in the lightweight model based on them, so that the image to be displayed is converted in real time into the first stylized image corresponding to the target object in the current interactive interface. The image to be displayed may be one or more images selected by the user in the current interactive interface, or one or more video frames of a video to be displayed. Because the first image processing parameters are generated at random, the style of the first stylized image is also random, so one of several stylization effects can be presented at random, improving the user experience.
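Purely as a sketch of the random selection described above (the parameter store and file names are hypothetical):

```python
import random
import torch

# Hypothetical store: one saved parameter set (state dict) per artistic style.
PARAM_SETS = {"sketch": "sketch.pth", "oil": "oil.pth", "ink": "ink.pth"}

def stylize_frame(model, frame_tensor):
    """Pick one stored parameter set at random (the 'first image processing
    parameters') and run the lightweight model with it."""
    style = random.choice(list(PARAM_SETS))
    model.load_state_dict(torch.load(PARAM_SETS[style]))
    model.eval()
    with torch.no_grad():
        # frame_tensor: (3, H, W) image to be displayed, normalized to [0, 1]
        return model(frame_tensor.unsqueeze(0)).squeeze(0)
```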
In addition to the first stylized image, a second stylized image different from the first may be generated: after a preset time period, second image processing parameters are randomly generated in a preset manner, and a second stylized image is produced from them.
As one case, to improve the user experience, a transition image of the target object is displayed in real time in the interactive interface for a second time period after the first time period. The transition image provides a smooth transition between the stylized image and the native image.
After the transition image has been displayed for the second time period, a native image of the target object is displayed in real time in the interactive interface for a third time period. By switching among the stylized image, the transition image, and the native image in this way, the user's stylization experience can be further improved.
Referring to fig. 3, according to a specific implementation manner of the embodiment of the present disclosure, displaying a transition image of the target object in real time in the interactive interface may include the following steps:
S301, acquiring n stylized images displayed in the second time period and n native images corresponding to the n stylized images, wherein the native images are images that have not undergone stylization processing.
S302, setting a first transparency (n-i)/n for the ith stylized image in the n stylized images, and setting a second transparency i/n for the ith native image in the n native images. Wherein i and n are natural numbers, and i is less than or equal to n.
And S303, overlapping and displaying the stylized image with the first transparency and the native image with the second transparency.
Through steps S301 to S303, the image displayed on the interactive interface can transition smoothly between the stylized image and the native image.
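A minimal NumPy sketch of this crossfade follows, treating the stated transparency values as blending weights (an interpretation, since the disclosure does not fix the compositing formula); at i = 0 the composite is fully stylized, and at i = n fully native:

```python
import numpy as np

def transition_frames(stylized, native):
    """Overlay the i-th stylized frame (weight (n-i)/n) with the i-th native
    frame (weight i/n), per steps S301-S303."""
    n = len(stylized)
    out = []
    for i in range(1, n + 1):
        w_style = (n - i) / n      # first transparency, applied to stylized
        w_native = i / n           # second transparency, applied to native
        blended = (w_style * stylized[i - 1].astype(np.float32)
                   + w_native * native[i - 1].astype(np.float32))
        out.append(blended.astype(np.uint8))
    return out
```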
As an optional implementation manner, in the process of acquiring a plurality of images including a target object displayed on an interactive interface, when content on the interactive interface is video content, the video content in the interactive interface may be collected, so as to obtain a video file including a plurality of video frames. And selecting one or more video frames from the video file based on actual needs to form a plurality of images containing the target object.
In order to reduce the consumption of resources of the electronic device in the process of selecting a plurality of images, according to an optional implementation manner of the embodiment of the present disclosure, target object detection may be performed on video frames in the video file to obtain an image sequence including a target object, and no processing is performed on image frames not including the target object, so that resources of the electronic device are saved.
For an image sequence containing the target object, in order to further reduce the resource consumption of the electronic device, it may be determined whether the first graphic region in the current video frame is the same as the first graphic region in the previous video frame; if so, the current video frame is deleted from the image sequence. In this way, the electronic device's resources can be further conserved.
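For illustration, a simple sketch of this duplicate-frame pruning (comparing regions by exact pixel equality is an assumption; a tolerance could equally be used):

```python
import numpy as np

def drop_duplicate_frames(frames, regions):
    """Delete a frame whose first graphic region equals the previous kept
    frame's region, to save downstream processing."""
    kept_frames, last_region = [], None
    for frame, region in zip(frames, regions):
        if last_region is not None and np.array_equal(region, last_region):
            continue                      # same region as before: drop frame
        kept_frames.append(frame)
        last_region = region
    return kept_frames
```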
In order to facilitate the target object identification on the acquired multiple images, referring to fig. 4, according to a specific implementation manner of the embodiment of the present disclosure, after acquiring the multiple images including the target object displayed on the interactive interface, the method further includes:
S401, selecting a plurality of structuring elements with different orientations.
The target object can be detected with an edge detection operator. If the operator uses only one structuring element, the output image contains only one type of geometric information, which is unfavorable for preserving image details. To ensure the accuracy of image detection, an edge detection operator containing multiple structuring elements is therefore selected.
S402, performing detail matching on the plurality of images by using each of the plurality of structuring elements to obtain a filtered image.
By using multiple structuring elements with different orientations, each element serving as a scale against which image details are matched, the various details of the image can be adequately preserved while noise of different types and sizes is filtered out.
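One plausible realization is sketched below, under the assumptions that the structuring elements are line segments at four orientations and that the per-element results are averaged (the disclosure fixes neither choice):

```python
import cv2
import numpy as np

def oriented_elements(size=5):
    """Line-shaped structuring elements at roughly 0, 45, 90, 135 degrees."""
    h = np.zeros((size, size), np.uint8); h[size // 2, :] = 1
    v = np.zeros((size, size), np.uint8); v[:, size // 2] = 1
    d1 = np.eye(size, dtype=np.uint8)
    d2 = np.fliplr(d1).copy()
    return [h, v, d1, d2]

def multi_element_filter(gray):
    """Morphological open-close filtering with each oriented element,
    averaged so that details matched at any orientation survive."""
    results = []
    for element in oriented_elements():
        opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, element)
        results.append(cv2.morphologyEx(opened, cv2.MORPH_CLOSE, element)
                       .astype(np.float32))
    return np.mean(results, axis=0).astype(np.uint8)
```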
S403, determining the gray-level distribution of the filtered image to obtain the number of pixels present at each of a plurality of gray levels in the filtered image.
After filtering, to further reduce the amount of calculation, the filtered image may be converted to a grayscale image; by dividing the grayscale image into a plurality of gray levels, the number of pixels at each gray level can be counted.
S404, weighting the gray levels according to their pixel counts, and taking the weighted average gray value as a threshold.
For example, gray levels with many pixels are given large weights and gray levels with few pixels are given small weights; the weighted average of the gray values is then computed and used as the threshold for binarizing the grayscale image.
S405, performing binarization processing on the filtering image based on the threshold value.
Based on the threshold, the filtered image may be binarized; for example, pixels above the threshold are set to 1 and pixels below it are set to 0.
And S406, taking the image after the binarization processing as an edge image of the target object.
The edge image of the target object is obtained by assigning colors to the binarized data; for example, pixels with a binarized value of 1 are rendered black, and pixels with a value of 0 are rendered white.
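Taken together, steps S403 to S406 might look like the following NumPy sketch (the weighting scheme, in which each gray level is weighted by its own pixel count, is one reading of S404):

```python
import numpy as np

def edge_image_from_filtered(gray):
    """Weighted-average threshold (S403-S404), binarization (S405), and
    color assignment (S406) on a uint8 grayscale filtered image."""
    hist = np.bincount(gray.ravel(), minlength=256)   # pixels per gray level
    levels = np.arange(256)
    # Gray levels with many pixels receive proportionally larger weight.
    threshold = (levels * hist).sum() / max(hist.sum(), 1)
    binary = (gray > threshold).astype(np.uint8)      # 1 above, 0 below
    # Value 1 -> black (0), value 0 -> white (255), per the text above.
    return np.where(binary == 1, 0, 255).astype(np.uint8)
```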
Through steps S401 to S406, the accuracy of target object detection is improved while the consumption of the electronic device's system resources is kept low.
Before the first image processing parameters are determined, a mapping table may be predefined. Based on this table, the scaling factor and translation factor corresponding to an operation instruction can be looked up; different combinations of scaling and translation factors produce stylization effects of different styles. To this end, a condition input layer containing the scaling factor and the translation factor may be provided in the lightweight model. After specific image processing parameters are obtained, the scaling and translation factors corresponding to the operation instruction are used as inputs, and all condition input layers in the lightweight model are configured accordingly, so that the model can be configured simply and efficiently. Condition input layers may be placed in one or more convolutional, pooling, or sampling layers as needed. The parameters of all configured condition input layers serve as the image processing parameters of the lightweight model, yielding stylized models of different types.
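Such a condition input layer resembles conditional instance normalization from the style-transfer literature; a hedged PyTorch sketch of a per-style scaling and translation layer follows (layer placement and dimensions are assumptions, not details from the disclosure):

```python
import torch
import torch.nn as nn

class ConditionInputLayer(nn.Module):
    """One scaling factor and one translation factor per style and channel,
    looked up by style index (a stand-in for the mapping table)."""
    def __init__(self, channels, num_styles):
        super().__init__()
        self.scale = nn.Embedding(num_styles, channels)  # scaling factors
        self.shift = nn.Embedding(num_styles, channels)  # translation factors

    def forward(self, x, style_id):
        # x: (batch, channels, H, W); style_id: LongTensor of shape (batch,)
        s = self.scale(style_id).unsqueeze(-1).unsqueeze(-1)
        t = self.shift(style_id).unsqueeze(-1).unsqueeze(-1)
        return x * s + t
```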
According to an optional implementation manner of the embodiment of the present disclosure, the generating a stylized image corresponding to the target object based on the plurality of convolutional layers and pooling layers may include steps S501 to S503:
S501, setting feature representations of the image to be displayed and the stylized image on the convolutional layers and the pooling layer.
Both the image to be displayed and the stylized image in the training sample are sampled at the convolutional layers and the pooling layer of the lightweight network; after sampling, the data of each layer form the feature representations of the two images at those layers. For example, for the i-th layer of the lightweight model, the feature representations of the image to be displayed and the stylized image at that layer may be denoted Pi and Fi, respectively.
S502, constructing a minimization loss function based on the feature representations.
Based on Pi and Fi, a squared-error loss function can be defined over these two feature representations and used as the minimization loss function L, which at the i-th layer can be expressed as:
L_i = (1/2) * Σ_{j,k} (F_i[j,k] - P_i[j,k])^2
wherein j and k index the entries (feature map and position, respectively) of the feature representations at the i-th layer.
S503, generating a stylized image corresponding to the target object based on the minimized loss function.
By minimizing the value of the loss function L, a stylized image corresponding to the target object can be obtained.
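As a small numerical illustration of the per-layer loss above (array shapes are assumptions):

```python
import numpy as np

def layer_loss(F_i, P_i):
    """Squared-error loss between the stylized image's features F_i and the
    image-to-display's features P_i at layer i."""
    return 0.5 * np.sum((np.asarray(F_i) - np.asarray(P_i)) ** 2)

# In practice, the per-layer losses from the selected convolutional and
# pooling layers would be summed, and the stylized image updated by
# gradient descent until the total loss stops decreasing.
```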
The accuracy of the generated stylized image is improved by means of the feature representation and the minimization function.
Corresponding to the above method embodiment, referring to fig. 5, the present disclosure also provides an image stylization generating apparatus 50, including:
An obtaining module 501, configured to obtain multiple images displayed on an interactive interface, where the multiple images contain a target object and the target object forms a first graphic region in the images.
A determining module 502, configured to determine whether the target object is in a static state based on the variation trend of the first graphic region in the time series.
A selecting module 503, configured to select, in response to determining that the target object is in a static state, a set of image processing parameters from a plurality of sets of image processing parameters stored in a preset lightweight model, so as to form first image processing parameters.
A conversion module 504, configured to convert, in real time within a first time period, an image to be displayed in the current interactive interface into a first stylized image corresponding to the target object by using the first image processing parameters and the lightweight model.
The apparatus shown in fig. 5 may correspondingly execute the contents in the foregoing method embodiment, and details of parts not described in detail in this embodiment refer to the contents described in the foregoing method embodiment, which are not repeated herein.
Referring to fig. 6, an embodiment of the present disclosure also provides an electronic device 60, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of stylized generation of images of the method embodiments described above.
The disclosed embodiments also provide a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the foregoing method embodiments.
The disclosed embodiments also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the image stylization generation method in the aforementioned method embodiments.
Referring now to FIG. 6, a block diagram of an electronic device 60 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 60 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 60 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication device 609 may allow the electronic device 60 to communicate with other devices wirelessly or by wire to exchange data. While the figures illustrate an electronic device 60 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 609, or may be installed from the storage device 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present disclosure should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.