CN114219868A - Skin care scheme recommendation method and system - Google Patents

Skin care scheme recommendation method and system

Info

Publication number
CN114219868A
Authority
CN
China
Prior art keywords
skin
facial
user
image
verification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111325303.7A
Other languages
Chinese (zh)
Inventor
曹飞东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Source Technology Development Co ltd
Original Assignee
Beijing Source Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Source Technology Development Co ltd
Priority to CN202111325303.7A
Publication of CN114219868A
Legal status: Pending (current)

Abstract

The invention provides a skin care scheme recommendation method and system, wherein the method comprises the following steps: performing living body verification according to a first face image and a second face image of a user and judging whether the user is the same person during the verification; if the living body verification is passed and the user during the verification is the same person, performing color cast correction and brightness correction on the first and second face images to obtain a facial skin image of the user; performing skin state analysis on the facial skin image to obtain the facial skin state of the user; and generating a corresponding skin care scheme according to the facial skin state. The invention avoids the adverse effects of illumination and incomplete skin data capture and provides a relatively accurate skin measurement result for the user, so that a more reliable skin care scheme can be recommended and the user experience improved.

Description

Skin care scheme recommendation method and system
Technical Field
The invention relates to the technical field of image processing, in particular to a skin care scheme recommendation method and system.
Background
Face recognition is a biometric technology for identity recognition based on the facial feature information of a person. A series of related technologies, also commonly called portrait recognition or facial recognition, capture an image or video stream containing a face with a camera or video camera, automatically detect and track the face in the image, and then perform recognition on the detected face.
At present, face recognition technology is mature, but a person's skin quality and skin problems cannot be detected from the face in an image.
Disclosure of Invention
The invention aims to provide a skin care scheme recommendation method and system, solving the problem that skin detection currently requires a dedicated skin detector and a visit to a professional skin service organization, which wastes time and labor. More importantly, it solves the technical problem that users place little trust in skin care schemes recommended by staff, because staff may recommend schemes subjectively out of personal economic interest.
The technical scheme provided by the invention is as follows:
the invention provides a skin care scheme recommendation method, which comprises the following steps:
performing living body verification according to the first face image and the second face image of the user and judging whether the user is the same person during the verification;
if the living body verification is passed and the user in the verification period is the same person, performing color cast correction and brightness correction according to the first and second face images to obtain a face skin image of the user;
performing skin state analysis according to the facial skin image to obtain the facial skin state of the user;
and generating a corresponding skin care scheme according to the facial skin state.
Further, the performing of the living body verification and the determining of whether the user is the same person during the verification according to the first face image and the second face image of the user includes:
acquiring face orientation information of a user after triggering living body verification and face recognition processes;
if the face orientation information does not accord with a preset viewing range, an adjusting instruction is sent;
if the face orientation information accords with a preset viewing range, shooting to obtain the first face image and the second face image;
respectively carrying out face detection extraction on the first face image and the second face image to obtain a corresponding first key feature point set and a corresponding second key feature point set;
judging whether the living body characteristics are met and whether the user is the same person or not according to the first key characteristic point set and the second key characteristic point set;
and if the similarity of the first key feature point set and the second key feature point set reaches a preset threshold and both the first key feature point set and the second key feature point set accord with the living body features, the living body verification is passed and the user is the same person during the verification.
Further, the step of performing color cast correction and brightness correction according to the first and second facial images to obtain a facial skin image of the user comprises:
acquiring RGB values of the first and second face images, and calculating to obtain a white balance parameter according to the RGB values;
and correcting each pixel in the first and second facial images according to the white balance parameters to finish color cast correction and brightness correction to obtain the facial skin image.
Further, the skin state analysis according to the facial skin image to obtain the facial skin state of the user includes the steps of:
performing color space conversion processing and gray level processing on the facial skin image;
analyzing the processed facial skin image to obtain a corresponding image characteristic value;
analyzing the facial skin state of the user skin according to the image characteristic value;
the image characteristic values comprise RGB values, color chroma values, texture contrast ratios and gray average values; the facial skin condition includes skin age, skin type, skin attribute score.
Further, the step of generating a corresponding skin care scheme according to the facial skin state includes the steps of:
analyzing, according to the skin age, the skin type and the skin attribute score, to obtain candidate skin care schemes meeting the requirements of the facial skin state of the user, and screening out the candidate scheme that accords with preset parameters of the user as the final skin care scheme;
wherein the skin care scheme comprises a mask efficacy attribute and a care period; the preset parameters comprise user living habit data and user preference data.
The present invention also provides a skin care regimen recommendation system comprising:
the judging module is used for performing living body verification and judging whether the user is the same person during verification according to the first face image and the second face image of the user;
the image processing module is used for carrying out color cast correction and brightness correction according to the first and second face images to obtain a facial skin image of the user if the living body verification is passed and the user in the verification period is the same person;
the analysis module is used for carrying out skin state analysis according to the facial skin image to obtain the facial skin state of the user;
and the generating module is used for generating a corresponding skin care scheme according to the facial skin state.
Further, the judging module includes:
the system comprises an acquisition unit, a verification unit and a face recognition unit, wherein the acquisition unit is used for acquiring face orientation information of a user after triggering living body verification and face recognition processes;
the control unit is used for sending an adjusting instruction if the face direction information does not accord with a preset viewing range; if the face orientation information accords with a preset viewing range, shooting to obtain the first face image and the second face image;
the extraction unit is used for respectively carrying out face detection extraction on the first face image and the second face image to obtain a corresponding first key feature point set and a corresponding second key feature point set;
the judging unit is used for judging whether the living body characteristics are met or not and whether the user is the same person or not according to the first key characteristic point set and the second key characteristic point set;
and the determining unit is used for determining that the in-vivo verification passes and the user in the verification period is the same person if the similarity of the first key feature point set and the second key feature point set reaches a preset threshold and both the first key feature point set and the second key feature point set accord with the in-vivo feature.
Further, the image processing module includes:
the calculating unit is used for acquiring RGB values of the first and second face images and calculating white balance parameters according to the RGB values;
and the processing unit is used for correcting each pixel in the first and second facial images according to the white balance parameters, and finishing color cast correction and brightness correction to obtain the facial skin image.
Further, the analysis module comprises:
the image processing unit is used for carrying out color space conversion processing and gray level processing on the facial skin image;
the analysis unit is used for analyzing and obtaining a corresponding image characteristic value according to the processed facial skin image; analyzing the facial skin state of the user skin according to the image characteristic value;
the image characteristic values comprise RGB values, color chroma values, texture contrast ratios and gray average values; the facial skin condition includes skin age, skin type, skin attribute score.
Further, the generating module includes:
the analysis generating unit is used for analyzing and obtaining a skin care scheme meeting the requirements of the facial skin state of the user according to the skin age, the skin type and the skin attribute score;
wherein the skin care regimen comprises a mask efficacy profile, a care period; the preset parameters comprise user living habit data and user preference data.
By the skin care scheme recommendation method and system provided by the invention, the adverse effects of illumination and incomplete skin data capture can be avoided and a relatively accurate skin measurement result provided for the user, so that a more reliable skin care scheme is recommended and the user experience is improved.
Drawings
The above features, technical features, advantages and modes of realisation of a skin care regimen recommendation method and system will be further described in the following detailed description of preferred embodiments in a clearly understandable manner, in conjunction with the accompanying drawings.
FIG. 1 is a flow chart of one embodiment of a method of skin care regimen recommendation of the present invention;
fig. 2 is a schematic view of a page of operation of a software product of a skin care regimen recommendation method of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure as a product. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or only labeled. In this document, "one" means not only "only one" but also a case of "more than one".
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
One embodiment of the present invention, as shown in fig. 1, is a skin care regimen recommendation method comprising:
s100, according to the first face image and the second face image of the user, performing living body verification and judging whether the user is the same person during verification;
specifically, when the first facial image is a front image of the face of the user, the second facial image is a side image of the face of the user. Of course, the first facial image may also be the facial image of the user captured at the first time, and the second facial image may be the facial image of the user captured at the second time. The first time is different from the second time, that is, the first face image and the second face image are images acquired at different times or different angles.
The first face image and the second face image may be obtained by capturing a real-time video stream using one camera, and the different video frames captured from the video stream at adjacent time points are the first face image and the second face image. Of course, the first and second face images may be acquired by cameras with different shooting angles at the same time, for example, left and right cameras (a dual-camera mode) respectively capture the first and second face images acquired by the user at the same time.
Since the human face features are easily imitated by photos, videos, 3D structures and the like, in order to prevent false human face features from being verified by the human face recognition system, the present invention needs to add a human face living body recognition function, i.e. to determine whether the submitted human face features come from a live real person. The method can avoid fraud of users by means of photos, display screen videos, silica gel masks, three-dimensional 3D portraits and the like through a living body detection technology. The living body detection technology mainly judges whether a human face appearing in front of a machine is real or fake, wherein the human face presented by means of other media can be defined as a false human face, and the false human face comprises a printed photo, a display screen video, a silica gel mask, a three-dimensional 3D portrait and the like.
Therefore, after the first face image and the second face image of the user are acquired, face living body recognition can be performed using either cooperative living body detection or non-cooperative living body detection (silent living body detection). Cooperative living body detection requires the user to complete designated actions according to prompts before the living body check is performed: an action set is predefined (including blinking, closing the eyes, opening the mouth, shaking the head and the like), one or more actions are randomly selected from the set at each check, and the user must complete the selected actions within a specified time. Non-cooperative living body detection (silent living body detection) is the opposite: it judges whether the subject is a real living body without requiring a series of actions such as blinking or opening the mouth. It typically captures images with an infrared camera together with an RGB camera, or with an ordinary camera, and performs the silent detection on the facial images so captured, for example based on PCA (principal component analysis) and SURF (speeded-up robust features), or based on a Fourier spectrogram and video frames intercepted from the real-time stream. No action instruction matching is needed, so the living body verification is carried out directly without the user noticing; the operation is simple and convenient and the user experience is better. The present embodiment does not limit the specific implementation of the living body detection.
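As an illustration of the single-camera acquisition path described above, the sketch below (Python with OpenCV; the camera index and sampling interval are illustrative assumptions, not values given in the application) grabs two video frames at adjacent time points to serve as the first and second face images:

```python
import time
import cv2  # OpenCV; assumed available

def capture_two_frames(camera_index: int = 0, interval_s: float = 0.5):
    """Grab two frames from a real-time video stream at adjacent time points.

    The two frames play the role of the 'first face image' and 'second face
    image' described in the method; camera index and interval are
    illustrative choices, not values fixed by the application."""
    cap = cv2.VideoCapture(camera_index)
    if not cap.isOpened():
        raise RuntimeError("camera could not be opened")
    ok1, first_face_image = cap.read()
    time.sleep(interval_s)          # wait until the adjacent time point
    ok2, second_face_image = cap.read()
    cap.release()
    if not (ok1 and ok2):
        raise RuntimeError("failed to capture both frames")
    return first_face_image, second_face_image
```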
S200, if the living body verification is passed and the user in the verification period is the same person, carrying out color cast correction and brightness correction according to the first and second face images to obtain a face skin image of the user;
specifically, color cast refers to the fact that a normal color shows an incorrect color, such as red, green, etc., when displayed. The color cast phenomenon of the image can be caused by the modes of snap-shot photos, incorrect development, beautiful pictures and the like. Due to the problem of the light angle, a picture taken in a scene such as a backlight shot may have color cast and uneven brightness. Therefore, after the face live body recognition, i.e., live body verification, and whether or not the user during verification is the same person are performed in the above manner, if the live body verification is passed and the user during verification is the same person, i.e., the submitted face features are from the same real user (generally, the same person), the face skin image of the user is obtained by performing the color cast correction and the brightness correction based on the first face image and the second face image acquired by photographing.
S300, performing skin state analysis according to the facial skin image to obtain the facial skin state of the user;
s400, generating a corresponding skin care scheme according to the facial skin state.
Specifically, a user may use a mobile terminal such as a mobile phone or a tablet to capture a first facial image and a second facial image of the user (hereinafter, for convenience of description, simply referred to as a user facial image), and the mobile terminal performs living body verification, face detection and recognition, skin state analysis, and skin care plan generation according to the user facial image.
Alternatively, a mobile terminal such as a mobile phone or a tablet may be used to capture the user facial images; after the mobile terminal performs living body verification and face detection on the images and recognizes that they show the same person, it sends the user facial images to a mask machine or a server, and the mask machine or server performs the skin state analysis and generates the skin care scheme.
Alternatively, the user facial images can be shot and acquired by a mask machine equipped with a camera; after the images pass the living body verification and face detection confirms the same person, the mask machine analyzes the skin state from the user facial images and generates the skin care scheme.
Alternatively, a mobile terminal such as a mobile phone or a tablet may capture the user facial images and send them to a mask machine or a server, which then performs the living body verification, face detection and recognition, skin state analysis and skin care scheme generation. In short, whichever entity executes each step, the overall skin care scheme recommendation falls within the protection scope of the invention, and the variants are not enumerated one by one here.
Whether the mobile terminal, the mask machine or the server performs the living body verification, in the step of analyzing the skin state and generating the corresponding skin care scheme the mask machine or server can, once the skin measurement is completed, recommend fruits and vegetables and ultramicro peptide powder by period according to the user's skin condition and attributes. Typically, one month (four weeks, 28 days) is one skin care period; one fruit or vegetable is recommended every day and one ultramicro peptide powder every week. In general the recommended type of ultramicro peptide powder differs from week to week within a period, while the fruits and vegetables recommended on some days within a period may be the same or different. After a skin care period is completed and the user's skin has improved, the skin measurement system re-plans the next care scheme according to the current skin type. In this way the invention, in combination with ultramicro peptide powder and fruits and vegetables, continuously derives a new skin care scheme step by step according to the improvement of the skin, and continuously makes targeted, personalized skin care recommendations.
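A minimal sketch of how such a 28-day care period could be assembled is shown below (Python; the catalogues and the random selection rule are illustrative assumptions — the real recommendations would come from the background database of raw-material efficacy described later):

```python
import random

# Illustrative catalogues; actual recommendations would come from the
# background data on raw-material efficacy, not from these lists.
PEPTIDE_POWDERS = ["powder_A", "powder_B", "powder_C", "powder_D"]
FRUITS_AND_VEGETABLES = ["tomato", "cucumber", "blueberry", "carrot", "kiwi"]

def build_care_cycle(days: int = 28) -> list:
    """Build one care period: one fruit/vegetable per day, one peptide
    powder per week, with the powder differing from week to week."""
    weeks = days // 7
    weekly_powder = random.sample(PEPTIDE_POWDERS, k=weeks)  # distinct per week
    plan = []
    for day in range(days):
        plan.append({
            "day": day + 1,
            "fruit_or_vegetable": random.choice(FRUITS_AND_VEGETABLES),
            "peptide_powder": weekly_powder[day // 7],
        })
    return plan
```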
Due to factors such as the built-in skin-smoothing and beautification of mobile terminals (mobile phones, tablets, computers) or of the mask machine during photographing, or unstable illumination, the captured first and second face images may show color cast or uneven brightness. If the first and second face images were used directly for skin state analysis, the accuracy of the skin measurement result would be affected. Therefore, color cast correction and brightness correction are first applied to the first and second face images to obtain the facial skin image of the user, and the corrected facial skin image is then used for skin state analysis. In this way a relatively reliable and accurate skin measurement result, i.e. the facial skin state, is obtained, the influence of illumination and incomplete skin data capture is overcome, and a skin care scheme suited to the user is generated from the facial skin state, greatly improving the reliability of the generated and recommended skin care scheme.
In one embodiment of the present invention, a method for skin care regimen recommendation comprises:
s110, acquiring face orientation information of a user after triggering living body verification and face recognition processes;
specifically, the mobile terminal or the mask machine is provided with a living body verification and face recognition process triggering function. For example, an entity function key is arranged on the mobile terminal or the mask machine, and the mobile terminal or the mask machine is triggered to start the living body verification and face recognition process by pressing or touching the entity function key. Of course, a virtual function key may also be displayed on the interactive touch screen of the mobile terminal or the mask machine, for example, a "skin measurement" control is displayed, and by pressing or touching the virtual function key ("skin measurement" control), the mobile terminal or the mask machine is triggered to start the living body verification and face recognition process. The mobile terminal or the mask machine can also acquire a voice signal of a user, perform voice recognition on the voice signal to obtain a keyword, and if the keyword is matched with a preset triggering keyword (for example, turning on "skin measurement", or starting "skin measurement"), the mobile terminal or the mask machine is triggered to turn on a living body verification and face recognition process.
Once the mobile terminal or the mask machine starts the living body verification and face recognition process, it frames the scene through its built-in or attached camera and obtains the image coordinates of the user's face within the camera viewfinder. Since the optical center position of the camera is known, the conversion between the image coordinate system and the world coordinate system can be applied to the image coordinates and the optical center position, so that the facial direction and position of the user, and hence the face orientation information, are obtained by calculation.
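The application does not prescribe a particular face detector for this orientation check; the sketch below (Python with OpenCV) uses the stock Haar-cascade detector and treats the preset viewing range as a centred window, both of which are illustrative assumptions:

```python
import cv2

# Haar cascade face detector shipped with OpenCV; an illustrative choice.
_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_in_viewing_range(frame, margin: float = 0.2) -> bool:
    """Return True if a detected face centre lies inside the preset viewing
    range, modelled here as a centred window shrunk by `margin` per side."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False                      # no face found: an adjustment instruction is needed
    x, y, w, h = faces[0]
    cx, cy = x + w / 2, y + h / 2         # face centre in image coordinates
    H, W = frame.shape[:2]
    return (margin * W <= cx <= (1 - margin) * W and
            margin * H <= cy <= (1 - margin) * H)
```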
S120, if the face orientation information does not accord with a preset viewing range, an adjusting instruction is sent;
s130, if the face orientation information accords with a preset viewing range, shooting to obtain a first face image and a second face image;
specifically, after the mobile terminal or the mask machine acquires the facial orientation information of the user in the above manner, whether the facial orientation information conforms to the preset viewing range of the camera is judged. And if the image accords with the condition of directly controlling the camera to shoot, acquiring a first face image and a second face image. Of course, if the face orientation information does not accord with the preset viewing range, the mobile terminal or the mask machine can generate and send an adjusting instruction so as to prompt the user to adjust the face orientation and the face position of the user by himself, and the camera is controlled to shoot to acquire the first face image and the second face image until the face orientation information accords with the preset viewing range. Of course, if the facial azimuth information does not accord with the preset viewing range, the mobile terminal or the mask machine can generate and send an adjusting instruction to the camera, control the camera to adjust the shooting angle so as to enable the facial azimuth information of the user to accord with the preset viewing range, and then control the camera to shoot to obtain the first facial image and the second facial image.
S140, respectively carrying out face detection and extraction on the first face image and the second face image to obtain a corresponding first key feature point set and a second key feature point set;
s150, judging whether the living body characteristics are met and whether the user is the same person or not according to the first key characteristic point set and the second key characteristic point set;
s160, if the similarity of the first key feature point set and the second key feature point set reaches a preset threshold and both the first key feature point set and the second key feature point set accord with the living body feature, the living body verification is passed and the user in the verification period is the same person;
specifically, key facial feature points include eyebrows, eyes, mouth, nose, ears, and facial contours. After the first face image and the second face image are obtained in the above manner, feature extraction is performed on the first face image by using an existing face feature point detection and extraction algorithm (for example, an ASM algorithm, that is, face detection and positioning are performed by using opencv, and face feature points are extracted through a trained face feature point recognition model) to obtain a first key feature point set, and feature extraction is performed on the second face image to obtain a second key feature point set. The first key feature point set and the second key feature point set comprise at least two of the face key feature points, and at least two of the face key feature points included in the first key feature point set are the same as at least two of the face key feature points included in the second key feature point set. For example, if the first set of key feature points includes eyebrows, left eye, right eye, and nose, then the second set of key feature points must include any two or more of eyebrows, left eye, right eye, and nose.
According to the first key feature point set and the second key feature point set, first depth information corresponding to the first set and second depth information corresponding to the second set can be obtained respectively. Depth information here means the three-dimensional coordinates of the key feature points in a three-dimensional coordinate system. A real (living) human face is not planar, whereas a face forged with a photograph or a video is planar; therefore, if the three-dimensional coordinates of the several key feature points in the first key feature point set (or the second key feature point set) differ from one another, the face corresponding to the first (or second) face image acquired by the camera at that moment can be considered a living face. When choosing the three-dimensional coordinate system, the direction from the camera toward the user may be taken as the positive z-axis, and the positive x- and y-axes determined according to the right-hand rule.
Moreover, some malicious actors may attempt to pass the living body verification using a 3D face model or a silica gel mask. Therefore, according to the first and second key feature point sets, first gray distribution information corresponding to the first set and second gray distribution information corresponding to the second set can also be obtained respectively. Gray distribution information here means the average gray value of a key feature point in the imaged picture: the gray distribution of a real (living) face in an image shows clear variation, whereas the gray distribution of a 3D face model or a silica gel mask is relatively uniform. Accordingly, the pixels corresponding to the current key feature point in the first (or second) key feature point set are obtained, the gray value of each pixel is read, the gray mean is computed from those pixels, and the mean is compared with a preset gray threshold; if the computed gray mean exceeds the threshold, the face corresponding to the first (or second) face image acquired by the camera at that moment is considered not to be a living face.
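A minimal sketch of this gray-mean check is given below (Python with OpenCV/NumPy; the patch size and the gray threshold are illustrative assumptions, since the application gives no concrete values):

```python
import cv2
import numpy as np

def keypoint_gray_means(face_image, key_points, patch: int = 5) -> np.ndarray:
    """Average gray value of a small patch around each key feature point."""
    gray = cv2.cvtColor(face_image, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    means = []
    for x, y in key_points:
        x0, x1 = max(0, int(x) - patch), min(w, int(x) + patch + 1)
        y0, y1 = max(0, int(y) - patch), min(h, int(y) + patch + 1)
        means.append(float(gray[y0:y1, x0:x1].mean()))
    return np.array(means)

def suspected_non_living(face_image, key_points, gray_threshold: float = 200.0) -> bool:
    """Flag the image as a suspected non-living face when the gray mean
    around any key feature point exceeds the preset threshold
    (illustrative decision rule)."""
    return bool((keypoint_gray_means(face_image, key_points) > gray_threshold).any())
```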
In addition, to judge whether the user is the same person during the living body verification, the distances associated with the facial key feature points in the first key feature point set and in the second key feature point set can be obtained respectively. A distance here means the distance between one facial key feature point and another in the same coordinate system; for the same face, the distance between a given pair of key feature points should be the same in both images. Therefore, after the first and second key feature point sets corresponding to the first and second face images are obtained, the distances between facial key feature points can be calculated from the pixel coordinates of the points on the imaged pictures. Two facial key feature points are then selected from the first key feature point set and the corresponding first distance is calculated; the same two facial key feature points are selected from the second key feature point set and the corresponding second distance is calculated; and it is judged whether the difference between the first and second distances is within a preset distance threshold. If the similarity reaches the preset threshold, the user is the same person during the living body verification.
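The pairwise-distance comparison can be sketched as follows (Python with NumPy; the choice of landmark pair and the tolerance are illustrative assumptions):

```python
import numpy as np

def same_person_by_distance(first_set: np.ndarray,
                            second_set: np.ndarray,
                            idx_a: int = 36, idx_b: int = 45,  # e.g. outer eye corners in a 68-point set
                            tolerance: float = 5.0) -> bool:
    """Compare the distance between the same pair of key feature points in
    the two images; if the difference stays within the preset tolerance,
    the two images are taken to show the same person."""
    d_first = np.linalg.norm(first_set[idx_a] - first_set[idx_b])
    d_second = np.linalg.norm(second_set[idx_a] - second_set[idx_b])
    return abs(d_first - d_second) <= tolerance
```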
Of course, the first key feature point set and the second key feature point set may also be directly input into a face recognition model obtained through pre-training, and the similarity between the faces of the first face image corresponding to the first key feature point set and the second face image corresponding to the second key feature point set is output through the face recognition model. If the similarity reaches a preset threshold, the user is the same person during the living body verification.
S200, if the living body verification is passed and the user in the verification period is the same person, carrying out color cast correction and brightness correction according to the first and second face images to obtain a face skin image of the user;
s300, performing skin state analysis according to the facial skin image to obtain the facial skin state of the user;
s400, generating a corresponding skin care scheme according to the facial skin state.
Some users may try to run a tentative skin test on a poster or a printed picture with the skin-measurement software; if a skin measurement result were still generated and a corresponding care scheme displayed, the user's trust in the platform's skin-measurement function would be directly affected. Therefore, this embodiment uses a deep learning algorithm to verify that the collected images come from a real experience, in two ways: double-angle living body detection and silent living body detection, which together amount to roughly a two-second video taken of the user. Double-angle living body detection takes a frontal selfie and a side-face selfie of the user and judges, by way of 3D modeling and reconstruction, whether the user is a real person. After the living body check, a face comparison step can be carried out. The basic principle is as follows. First, face detection and recognition are performed on the picture, which amounts to locating the face in the image and marking some basic key points on it, such as the eyes and eyebrows. The facial key points are then aligned, which provides data preprocessing for the subsequent face recognition algorithm and improves the accuracy of the whole pipeline. Preferably, the face region is cropped out, which avoids interference from surrounding objects. The cropped user face image containing only the face is fed into a deep learning network model, which outputs a corresponding feature point vector. From the feature vectors, the distance between corresponding facial key features of the first and second face images, i.e. of the two faces, is calculated; if the distance is smaller than a preset distance threshold, the faces in the two images are judged to be the same person, and if it is larger, they are judged to be different persons. It should be noted that different preset distance thresholds correspond to different false-recognition rates.
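As a sketch of the embedding-and-threshold comparison described above, the example below uses the off-the-shelf face_recognition library purely as an illustration; the application does not name a specific network, and the 0.6 threshold is that library's conventional default rather than a value from the application:

```python
import numpy as np
import face_recognition  # off-the-shelf library, used here only as an illustration

def same_person_by_embedding(first_face_rgb, second_face_rgb,
                             distance_threshold: float = 0.6) -> bool:
    """Encode both faces with a pretrained deep network and compare the
    embedding distance against a preset threshold. Inputs are RGB arrays,
    e.g. loaded with face_recognition.load_image_file()."""
    enc1 = face_recognition.face_encodings(first_face_rgb)
    enc2 = face_recognition.face_encodings(second_face_rgb)
    if not enc1 or not enc2:
        return False                      # a face was not found in one of the images
    distance = np.linalg.norm(enc1[0] - enc2[0])
    return distance < distance_threshold
```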
In this method, the facial image feature data extracted by deep learning for face recognition is also applied to living body detection: the feature data is processed through a channel-domain attention mechanism and the facial image is classified as a living or non-living image. The user does not need to cooperate by performing specified actions, which improves convenience and efficiency as well as the accuracy of living body detection; at the same time, the biometric features in the facial image feature data are enhanced, improving the accuracy of face recognition. This helps raise the precision of face living body recognition, achieves high-precision face living body recognition, and offers stable operation, extremely low power consumption, high precision and good user experience.
In one embodiment of the present invention, a method for skin care regimen recommendation comprises:
s100, according to the first face image and the second face image of the user, performing living body verification and judging whether the user is the same person during verification;
s210, if the living body verification is passed and the user in the verification period is the same person, acquiring RGB values of the first and second face images, and calculating to obtain a white balance parameter according to the RGB values;
s220, correcting each pixel in the first and second facial images according to the white balance parameters, and finishing color cast correction and brightness correction to obtain the facial skin image;
s300, performing skin state analysis according to the facial skin image to obtain the facial skin state of the user;
s400, generating a corresponding skin care scheme according to the facial skin state.
Specifically, white balance processing restores the image to the state it would have under white-light illumination, so that detection can be carried out with algorithms designed for normal illumination and face detection and recognition reach correct judgments. Most skin-measurement software is affected by the strength of the ambient light, which varies with environment, time and shooting angle, and this in turn affects the attribute scores and the final skin state. The present method largely removes this problem through an image standardization scheme applied before skin detection: before skin detection is performed on the facial photograph, color cast correction is applied to the image by a white balance algorithm, and/or brightness correction is applied by a brightness standardization algorithm. Compared with other skin-measurement platforms, where insufficient or excessive brightness causes large differences between skin measurement results taken at the same time, this is a substantial improvement.
The method does not depend on calibration data of the camera that took the picture: it performs color cast correction on the whole image and can effectively perform blind correction of color shading in images from unknown sources, giving it wider applicability. This AI-based intelligent skin measurement provides a relatively accurate skin measurement result for the user and avoids drawbacks such as the influence of illumination, incomplete skin data capture and low scheme matching, so that the product usage scheme formed in combination with the mask machine is more reliable.
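The application does not name a specific white balance algorithm; a minimal gray-world sketch (Python with NumPy) that follows the described steps — derive white balance parameters from the image's RGB values, then correct every pixel — is:

```python
import numpy as np

def gray_world_white_balance(image_rgb: np.ndarray) -> np.ndarray:
    """Gray-world white balance: derive per-channel gain parameters from the
    image's mean RGB values, then correct every pixel with those gains.
    (The gray-world rule is one common choice; the application only requires
    'calculating a white balance parameter according to the RGB values'.)"""
    img = image_rgb.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)   # mean R, G, B of the image
    gray_mean = channel_means.mean()                  # target neutral level
    gains = gray_mean / channel_means                 # white balance parameters
    corrected = np.clip(img * gains, 0, 255)          # correct each pixel
    return corrected.astype(np.uint8)

# facial_skin_image = gray_world_white_balance(first_face_image_rgb)
```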
In one embodiment of the present invention, a method for skin care regimen recommendation comprises:
s100, according to the first face image and the second face image of the user, performing living body verification and judging whether the user is the same person during verification;
s200, if the living body verification is passed and the user in the verification period is the same person, carrying out color cast correction and brightness correction according to the first and second face images to obtain a face skin image of the user;
s310, color space conversion processing and gray level processing are carried out on the facial skin image;
s320, analyzing the processed facial skin image to obtain a corresponding image characteristic value;
s330, analyzing the facial skin state of the user skin according to the image characteristic value;
the image characteristic values comprise RGB values, color chroma values, texture contrast ratios and gray average values; the facial skin state comprises skin age, skin type and skin attribute value;
s400, generating a corresponding skin care scheme according to the facial skin state.
Specifically, skin type refers to the particular attributes and characteristics arising from the diversity of human skin. The facial skin image is subjected to color space conversion and gray level processing, the processed image is analyzed to obtain the corresponding image characteristic values, and the facial skin state of the user's skin is then obtained by analysis from these characteristic values. Combined with the relevant history of skin state monitoring, the invention helps the user choose suitable skin care products, solves the difficulty users have in choosing by themselves, and at the same time improves the suitability and accuracy of skin care product selection, thereby improving the user experience.
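A sketch of how the named image characteristic values might be computed is given below (Python with OpenCV and scikit-image); interpreting "color chroma value" as the mean HSV saturation and "texture contrast" as GLCM contrast is an assumption, since the application does not define these quantities:

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def image_feature_values(facial_skin_image_bgr: np.ndarray) -> dict:
    """Compute the image characteristic values named in the embodiment:
    mean RGB values, a chroma value, a texture contrast and the gray mean."""
    rgb = cv2.cvtColor(facial_skin_image_bgr, cv2.COLOR_BGR2RGB)
    hsv = cv2.cvtColor(facial_skin_image_bgr, cv2.COLOR_BGR2HSV)    # color space conversion
    gray = cv2.cvtColor(facial_skin_image_bgr, cv2.COLOR_BGR2GRAY)  # gray level processing

    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return {
        "rgb_mean": rgb.reshape(-1, 3).mean(axis=0).tolist(),
        "chroma": float(hsv[..., 1].mean()),
        "texture_contrast": float(graycoprops(glcm, "contrast")[0, 0]),
        "gray_mean": float(gray.mean()),
    }
```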
In one embodiment of the present invention, a method for skin care regimen recommendation comprises:
s100, according to the first face image and the second face image of the user, performing living body verification and judging whether the user is the same person during verification;
s200, if the living body verification is passed and the user in the verification period is the same person, carrying out color cast correction and brightness correction according to the first and second face images to obtain a face skin image of the user;
s300, performing skin state analysis according to the facial skin image to obtain the facial skin state of the user;
S410, analyzing according to the skin age, the skin type and the skin attribute score to obtain candidate skin care schemes meeting the requirements of the facial skin state of the user;
s420, screening out candidate skin care schemes which accord with the preset parameters of the user as final skin care schemes;
wherein the skin care regimen comprises a mask efficacy profile, a care period; the preset parameters comprise user living habit data and user preference data.
Specifically, a person's skin condition is complex and varies from person to person. At present there is no dedicated image recognition means capable of detecting and recognizing a user's facial skin state; instead, skin care suggestions are offered subjectively to the user based on the experience of staff in the skin care or medical-beauty field, and in many cases, because of personal economic interest, the staff do not recommend a suitable skin care scheme intelligently and objectively, so the user places little trust in the scheme they recommend.
As shown in fig. 2, the skin state analysis, i.e. the current skin state, covers moisture, pores, blackheads, roughness, wrinkles, texture, brown spots, skin texture, skin color and the like. Because every person's skin state, attributes and sensitivity are different, the accuracy of the skin measurement result is particularly important for the mask machine care scheme built upon it. The intelligent skin measurement function of the invention lays the groundwork for precise care with the mask machine, and the corresponding mask efficacy attributes and care cycle scheme recommendations are provided according to the skin detection results.
According to the invention, the facial skin measurement result is combined with the skin-beautifying equipment to make care scheme recommendations, mask selection recommendations and use frequency recommendations (a customized 28-day care combination). On the first skin detection, each attribute corresponding to the skin measurement result is scored; the lowest score marks the skin's weakest attribute (for example, a low acne score indicates that the user's acne problem is serious), and the corresponding recommendation is made according to the care attributes of raw materials such as fruits and vegetables, plant herbs and essential oils recorded in the background data, achieving precise skin care. Combined with the relevant history of skin state monitoring, the invention helps the user choose suitable skin care products, solves the difficulty users have in choosing by themselves, and at the same time improves the suitability and accuracy of skin care product selection, thereby improving the user experience.
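A minimal sketch of the generation-and-screening logic (Python; the attribute-to-efficacy table and the preference rule are illustrative assumptions, not data from the application):

```python
# Illustrative mapping from a weak skin attribute to a mask efficacy attribute.
EFFICACY_BY_ATTRIBUTE = {
    "moisture": "hydrating",
    "acne": "anti-acne",
    "wrinkles": "anti-aging",
    "brown_spots": "brightening",
}

def recommend_scheme(attribute_scores: dict, preset_parameters: dict) -> dict:
    """Pick the lowest-scoring skin attribute, propose the matching mask
    efficacy and a 28-day care period, then screen against the user's
    preset parameters (living habits / preferences)."""
    weakest = min(attribute_scores, key=attribute_scores.get)
    candidate = {
        "mask_efficacy": EFFICACY_BY_ATTRIBUTE.get(weakest, "basic care"),
        "care_period_days": 28,
    }
    # Screening step: e.g. honour a user preference for fragrance-free masks.
    if preset_parameters.get("fragrance_free"):
        candidate["mask_efficacy"] += " (fragrance-free)"
    return candidate

# scheme = recommend_scheme({"moisture": 62, "acne": 40, "wrinkles": 75},
#                           {"fragrance_free": True})
```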
In one embodiment of the present invention, a skin care regimen recommendation system comprises:
the judging module is used for performing living body verification and judging whether the user is the same person during verification according to the first face image and the second face image of the user;
the image processing module is used for carrying out color cast correction and brightness correction according to the first and second face images to obtain a facial skin image of the user if the living body verification is passed and the user in the verification period is the same person;
the analysis module is used for carrying out skin state analysis according to the facial skin image to obtain the facial skin state of the user;
and the generating module is used for generating a corresponding skin care scheme according to the facial skin state.
Specifically, this embodiment is a system embodiment corresponding to the above method embodiment, and specific effects refer to the above method embodiment, which is not described in detail herein.
Based on the foregoing embodiment, the determining module includes:
the acquisition unit is used for acquiring face orientation information of a user after the living body verification and face recognition process is triggered;
the control unit is used for sending an adjusting instruction if the face direction information does not accord with a preset viewing range; if the face orientation information accords with a preset viewing range, shooting to obtain the first face image and the second face image;
the extraction unit is used for respectively carrying out face detection extraction on the first face image and the second face image to obtain a corresponding first key feature point set and a corresponding second key feature point set;
the judging unit is used for judging whether the living body characteristics are met or not and whether the user is the same person or not according to the first key characteristic point set and the second key characteristic point set;
and the determining unit is used for determining that the in-vivo verification passes and the user in the verification period is the same person if the similarity of the first key feature point set and the second key feature point set reaches a preset threshold and both the first key feature point set and the second key feature point set accord with the in-vivo feature.
Specifically, this embodiment is a system embodiment corresponding to the above method embodiment, and specific effects refer to the above method embodiment, which is not described in detail herein.
Based on the foregoing embodiments, the image processing module includes:
the calculating unit is used for acquiring RGB values of the first and second face images and calculating white balance parameters according to the RGB values;
and the processing unit is used for correcting each pixel in the first and second facial images according to the white balance parameters, and finishing color cast correction and brightness correction to obtain the facial skin image.
Specifically, this embodiment is a system embodiment corresponding to the above method embodiment, and specific effects refer to the above method embodiment, which is not described in detail herein.
Based on the foregoing embodiments, the analysis module includes:
the image processing unit is used for carrying out color space conversion processing and gray level processing on the facial skin image;
the analysis unit is used for analyzing and obtaining a corresponding image characteristic value according to the processed facial skin image; analyzing the facial skin state of the user skin according to the image characteristic value;
the image characteristic values comprise RGB values, color chroma values, texture contrast ratios and gray average values; the facial skin condition includes skin age, skin type, skin attribute score.
Specifically, this embodiment is a system embodiment corresponding to the above method embodiment, and specific effects refer to the above method embodiment, which is not described in detail herein.
Based on the foregoing embodiments, the generating module includes:
the analysis generating unit is used for analyzing and obtaining a skin care scheme meeting the requirements of the facial skin state of the user according to the skin age, the skin type and the skin attribute score;
wherein the skin care regimen comprises a mask efficacy profile, a care period; the preset parameters comprise user living habit data and user preference data.
Specifically, this embodiment is a system embodiment corresponding to the above method embodiment, and specific effects refer to the above method embodiment, which is not described in detail herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of program modules is illustrated, and in practical applications, the above-described distribution of functions may be performed by different program modules, that is, the internal structure of the apparatus may be divided into different program units or modules to perform all or part of the above-described functions. Each program module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one processing unit, and the integrated unit may be implemented in a form of hardware, or may be implemented in a form of software program unit. In addition, the specific names of the program modules are only used for distinguishing the program modules from one another, and are not used for limiting the protection scope of the application.
In one embodiment of the invention, a terminal device comprises a processor and a memory, wherein the memory is used for storing a computer program; and the processor is used for executing the computer program stored on the memory to realize the skin care scheme recommendation method in the corresponding method embodiment.
The terminal device can be a desktop computer, a notebook, a palmtop computer, a tablet computer, a mobile phone, a human-machine interaction screen or similar equipment. The terminal device may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that the foregoing is merely an example of a terminal device and does not limit it; more or fewer components than those shown may be included, some components may be combined, or different components may be used. For example, the terminal device may also include input/output interfaces, display devices, network access devices, communication buses and communication interfaces, with the processor, the memory, the input/output interface and the communication interface communicating with one another through the communication bus. The memory stores a computer program, and the processor executes the computer program stored in the memory to implement the skin care scheme recommendation method in the corresponding method embodiment.
The Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory may be an internal storage unit of the terminal device, such as: hard disk or memory of the terminal device. The memory may also be an external storage device of the terminal device, such as: the terminal equipment is provided with a plug-in hard disk, an intelligent memory Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) and the like. Further, the memory may also include both an internal storage unit and an external storage device of the terminal device. The memory is used for storing the computer program and other programs and data required by the terminal device. The memory may also be used to temporarily store data that has been output or is to be output.
A communication bus is a circuit that connects the described elements and enables transmission between the elements. For example, the processor receives commands from other elements through the communication bus, decrypts the received commands, and performs calculations or data processing according to the decrypted commands. The memory may include program modules such as a kernel (kernel), middleware (middleware), an Application Programming Interface (API), and applications. The program modules may be comprised of software, firmware or hardware, or at least two of the same. The input/output interface forwards commands or data entered by a user via the input/output interface (e.g., sensor, keyboard, touch screen). The communication interface connects the terminal equipment with other network equipment, user equipment and a network. For example, the communication interface may be connected to a network by wire or wirelessly to connect to external other network devices or user devices. The wireless communication may include at least one of: wireless fidelity (WiFi), Bluetooth (BT), Near Field Communication (NFC), Global Positioning Satellite (GPS) and cellular communications, among others. The wired communication may include at least one of: universal Serial Bus (USB), high-definition multimedia interface (HDMI), asynchronous transfer standard interface (RS-232), and the like. The network may be a telecommunications network and a communications network. The communication network may be a computer network, the internet of things, a telephone network. The terminal device may be connected to the network via a communication interface, and a protocol used by the terminal device to communicate with other network devices may be supported by at least one of an application, an Application Programming Interface (API), middleware, a kernel, and a communication interface.
In an embodiment of the present invention, a storage medium stores at least one instruction, and the instruction is loaded and executed by a processor to implement the operations performed by the embodiments of the skin care plan recommendation method. For example, the storage medium may be a read-only memory (ROM), a Random Access Memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
The steps may be implemented in program code executable by a computing device, so that they can be executed by the computing device; alternatively, they may be implemented separately as individual integrated circuit modules, or a plurality of the modules or steps may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or recited in detail in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units may be stored in a storage medium if they are implemented in the form of software functional units and sold or used as independent products. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be completed by instructing relevant hardware through a computer program, which may be stored in a storage medium; when the computer program is executed by a processor, the steps of the method embodiments can be implemented. The computer program may be in source code form, object code form, an executable file, some intermediate form, or the like. The storage medium may include: any entity or device capable of carrying the computer program, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content of the storage medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in certain jurisdictions, in accordance with legislation and patent practice, computer-readable storage media do not include electrical carrier signals and telecommunication signals.
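As an illustration of how such a computer program could organize the flow, the Python sketch below arranges the four parts of the described system (judging, image processing, analysis and generation) as plain software functional units. It is only a structural sketch: every class, method and returned value here is a hypothetical placeholder, not the claimed implementation.

```python
# Illustrative structural sketch only: hypothetical names, placeholder logic.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SkinState:
    skin_age: int
    skin_type: str
    attribute_score: float


class SkinCarePipeline:
    def verify(self, first_image, second_image) -> bool:
        """Judging module: living body check and same-person check (placeholder)."""
        return first_image is not None and second_image is not None

    def correct(self, first_image, second_image):
        """Image processing module: color cast and brightness correction (placeholder)."""
        return first_image  # a corrected facial skin image would be returned here

    def analyze(self, skin_image) -> SkinState:
        """Analysis module: derive the facial skin state (placeholder values)."""
        return SkinState(skin_age=30, skin_type="neutral", attribute_score=0.8)

    def recommend(self, state: SkinState) -> dict:
        """Generation module: map the skin state to a plan (placeholder rule)."""
        return {"mask_efficacy": "hydrating", "care_cycle_days": 28, "skin_type": state.skin_type}

    def run(self, first_image, second_image) -> Optional[dict]:
        """End-to-end flow: verification, correction, analysis, recommendation."""
        if not self.verify(first_image, second_image):
            return None
        skin_image = self.correct(first_image, second_image)
        return self.recommend(self.analyze(skin_image))
```

Each placeholder method would be replaced by the corresponding embodiment logic, namely living body verification, white balance correction, skin state analysis and plan generation.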
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of the steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or multiple stages, which are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other steps, or with at least a portion of the sub-steps or stages of other steps.
It should be noted that the above embodiments can be freely combined as necessary. The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

Translated from Chinese
1. A skin care plan recommendation method, characterized by comprising the steps of:
performing living body verification according to a first facial image and a second facial image of a user, and judging whether the user during the verification is the same person;
if the living body verification is passed and the user during the verification is the same person, performing color cast correction and brightness correction according to the first and second facial images to obtain a facial skin image of the user;
performing skin state analysis according to the facial skin image to obtain a facial skin state of the user;
generating a corresponding skin care plan according to the facial skin state.

2. The skin care plan recommendation method according to claim 1, characterized in that performing living body verification according to the first facial image and the second facial image of the user and judging whether the user during the verification is the same person comprises the steps of:
acquiring facial orientation information of the user after a living body verification and face recognition process is triggered;
issuing an adjustment instruction if the facial orientation information does not conform to a preset framing range;
if the facial orientation information conforms to the preset framing range, photographing to obtain the first facial image and the second facial image;
performing face detection and extraction on the first facial image and the second facial image respectively to obtain a corresponding first key feature point set and second key feature point set;
judging, according to the first key feature point set and the second key feature point set, whether living body characteristics are met and whether the user is the same person;
if the similarity between the first key feature point set and the second key feature point set reaches a preset threshold and both sets conform to living body characteristics, determining that the living body verification is passed and that the user during the verification is the same person.

3. The skin care plan recommendation method according to claim 1, characterized in that performing color cast correction and brightness correction according to the first and second facial images to obtain the facial skin image of the user comprises the steps of:
acquiring RGB values of the first and second facial images, and calculating a white balance parameter according to the RGB values;
correcting each pixel in the first and second facial images according to the white balance parameter, and completing the color cast correction and brightness correction to obtain the facial skin image.

4. The skin care plan recommendation method according to any one of claims 1-3, characterized in that performing skin state analysis according to the facial skin image to obtain the facial skin state of the user comprises the steps of:
performing color space conversion processing and grayscale processing on the facial skin image;
analyzing the processed facial skin image to obtain corresponding image feature values;
analyzing, according to the image feature values, the facial skin state of the user's skin;
wherein the image feature values include RGB values, color chroma values, texture contrast and gray-level average; the facial skin state includes skin age, skin texture, skin type and skin attribute scores.

5. The skin care plan recommendation method according to claim 4, characterized in that generating the corresponding skin care plan according to the facial skin state comprises the step of:
analyzing, according to the skin age, skin texture, skin type and skin attribute scores, to obtain a skin care plan that meets the needs of the facial skin state of the user;
wherein the skin care plan includes mask efficacy attributes and a care cycle; the preset parameters include user living habit data and user preference data.

6. A skin care plan recommendation system, characterized by comprising:
a judging module, configured to perform living body verification according to a first facial image and a second facial image of a user and to judge whether the user during the verification is the same person;
an image processing module, configured to perform, if the living body verification is passed and the user during the verification is the same person, color cast correction and brightness correction according to the first and second facial images to obtain a facial skin image of the user;
an analysis module, configured to perform skin state analysis according to the facial skin image to obtain a facial skin state of the user;
a generating module, configured to generate a corresponding skin care plan according to the facial skin state.

7. The skin care plan recommendation system according to claim 6, characterized in that the judging module comprises:
an acquiring unit, configured to acquire facial orientation information of the user after a living body verification and face recognition process is triggered;
a control unit, configured to issue an adjustment instruction if the facial orientation information does not conform to a preset framing range, and to photograph and obtain the first facial image and the second facial image if the facial orientation information conforms to the preset framing range;
an extraction unit, configured to perform face detection and extraction on the first facial image and the second facial image respectively to obtain a corresponding first key feature point set and second key feature point set;
a judging unit, configured to judge, according to the first key feature point set and the second key feature point set, whether living body characteristics are met and whether the user is the same person;
a determining unit, configured to determine that the living body verification is passed and that the user during the verification is the same person if the similarity between the first key feature point set and the second key feature point set reaches a preset threshold and both sets conform to living body characteristics.

8. The skin care plan recommendation system according to claim 6, characterized in that the image processing module comprises:
a calculation unit, configured to acquire RGB values of the first and second facial images and to calculate a white balance parameter according to the RGB values;
a processing unit, configured to correct each pixel in the first and second facial images according to the white balance parameter and to complete the color cast correction and brightness correction to obtain the facial skin image.

9. The skin care plan recommendation system according to any one of claims 6-8, characterized in that the analysis module comprises:
an image processing unit, configured to perform color space conversion processing and grayscale processing on the facial skin image;
an analysis unit, configured to analyze the processed facial skin image to obtain corresponding image feature values, and to analyze, according to the image feature values, the facial skin state of the user's skin;
wherein the image feature values include RGB values, color chroma values, texture contrast and gray-level average; the facial skin state includes skin age, skin texture, skin type and skin attribute scores.

10. The skin care plan recommendation system according to claim 9, characterized in that the generating module comprises:
an analysis and generation unit, configured to analyze, according to the skin age, skin texture, skin type and skin attribute scores, to obtain a skin care plan that meets the needs of the facial skin state of the user;
wherein the skin care plan includes mask efficacy attributes and a care cycle; the preset parameters include user living habit data and user preference data.
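To make the correction and analysis steps recited in claims 3 and 4 concrete, the following Python sketch shows one common way to derive a white balance parameter from the mean RGB values (a gray-world assumption) and two simple image feature values (gray-level average and a texture-contrast proxy). The patent does not fix these formulas; the gray-world gains, the luminance coefficients and the standard-deviation contrast measure are illustrative assumptions, and all function names are hypothetical.

```python
# Illustrative sketch of claims 3-4 under a gray-world assumption; not the claimed formulas.
import numpy as np


def white_balance_params(image: np.ndarray) -> np.ndarray:
    """Per-channel gains computed from the mean RGB values (gray-world assumption)."""
    means = image.reshape(-1, 3).mean(axis=0)      # mean R, G, B of the facial image
    return means.mean() / np.maximum(means, 1e-6)  # one gain per channel


def apply_correction(image: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Correct every pixel with the white balance gains (color cast and brightness)."""
    corrected = image.astype(np.float64) * gains
    return np.clip(corrected, 0, 255).astype(np.uint8)


def image_features(image: np.ndarray) -> dict:
    """Grayscale conversion plus simple feature values that a skin analysis could use."""
    gray = image[..., 0] * 0.299 + image[..., 1] * 0.587 + image[..., 2] * 0.114
    return {
        "mean_rgb": image.reshape(-1, 3).mean(axis=0).tolist(),
        "gray_mean": float(gray.mean()),
        # standard deviation of gray levels as a crude texture-contrast proxy
        "texture_contrast": float(gray.std()),
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in for a facial image
    skin_image = apply_correction(face, white_balance_params(face))
    print(image_features(skin_image))
```

A production implementation could substitute any white balance estimator and any texture descriptor that yields the feature values listed in claim 4.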
CN202111325303.7A | 2021-11-10 | 2021-11-10 | Skin care scheme recommendation method and system | Pending | CN114219868A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111325303.7A, CN114219868A (en) | 2021-11-10 | 2021-11-10 | Skin care scheme recommendation method and system

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111325303.7A, CN114219868A (en) | 2021-11-10 | 2021-11-10 | Skin care scheme recommendation method and system

Publications (1)

Publication Number | Publication Date
CN114219868A (en) | 2022-03-22

Family

ID=80696827

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111325303.7A, CN114219868A (en), Pending | Skin care scheme recommendation method and system | 2021-11-10 | 2021-11-10

Country Status (1)

Country | Link
CN (1) | CN114219868A (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Title
CN104732200A (en)* | 2015-01-28 | 2015-06-24 | Skin type and skin problem recognition method
CN106264523A (en)* | 2015-06-03 | 2017-01-04 | Skin protection suggesting method based on skin and environment measuring and system
CN108875468A (en)* | 2017-06-12 | 2018-11-23 | Biopsy method, In vivo detection system and storage medium
CN108269175A (en)* | 2018-01-29 | 2018-07-10 | A kind of facial skin care product of combination user custom recommend method
CN109730637A (en)* | 2018-12-29 | 2019-05-10 | A system and method for quantitative analysis of facial images
CN110381303A (en)* | 2019-05-31 | 2019-10-25 | Portrait automatic exposure white balance correction method and system based on skin color statistics
CN113377020A (en)* | 2021-05-10 | 2021-09-10 | Device control method, device and storage medium
CN113610844A (en)* | 2021-08-31 | 2021-11-05 | Intelligent skin care method, device, equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Li Zhongming, et al.: "Influence of changes in the optical parameters of skin tissue on diffuse reflectance, absorption ratio and fluence rate", Journal of Xianning University, 31 December 2010 (2010-12-31), pages 1-5 *
Wang Yueyang: "Face liveness detection based on multispectral imaging", China Master's Theses Full-text Database, Information Science and Technology Series, 15 June 2014 (2014-06-15), pages 138-1039 *
Du Yilin: "New Developments and Applications of Intelligent Security", 31 May 2018, Wuhan: Huazhong University of Science and Technology Press, page 51 *
Chen Rong, et al.: "Optical model of skin", Acta Laser Biology Sinica, 31 December 2005 (2005-12-31), pages 1-4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN115049426A (en)* | 2022-06-07 | 2022-09-13 | 蓝橙(天津)生物科技有限公司 | Artificial intelligence algorithm model for personalized skin care recommendation
CN118279888A (en)* | 2024-05-29 | 2024-07-02 | 广州诗妃生物科技有限公司 | Blackhead removal targeting control method and blackhead removal targeting control system
CN118279888B (en)* | 2024-05-29 | 2024-09-17 | 广州诗妃生物科技有限公司 | Blackhead removal targeting control method and blackhead removal targeting control system

Similar Documents

Publication | Publication Date | Title
TWI751161B (en) Terminal equipment, smart phone, authentication method and system based on face recognition
CN107730444B (en) Image processing method, image processing device, readable storage medium and computer equipment
CN109583285B (en) Object recognition method
CN107818305B (en) Image processing method, apparatus, electronic device, and computer-readable storage medium
CN106897658B (en) Method and device for identifying living body of human face
CN107945135B (en) Image processing method, device, storage medium and electronic device
EP3241151B1 (en) An image face processing method and apparatus
CN105072327B (en) A kind of method and apparatus of the portrait processing of anti-eye closing
CN107862663A (en) Image processing method, device, readable storage medium and computer equipment
CN107909057A (en) Image processing method, device, electronic device, and computer-readable storage medium
CN107886484A (en) Beautifying method, device, computer readable storage medium and electronic device
CN107742274A (en) Image processing method, device, computer-readable storage medium, and electronic device
CN108022206A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
JP6822482B2 (en) Line-of-sight estimation device, line-of-sight estimation method, and program recording medium
CN108810406B (en) Portrait light effect processing method, device, terminal and computer-readable storage medium
CN107862274A (en) Beautifying method, device, electronic device and computer-readable storage medium
CN107862653A (en) Method for displaying image, device, storage medium and electronic equipment
CN107993209A (en) Image processing method, device, computer-readable recording medium and electronic equipment
CN108022207A (en) Image processing method, device, storage medium and electronic equipment
CN107911625A (en) Light measuring method, light measuring device, readable storage medium and computer equipment
CN109859857A (en) Mask method, device and the computer readable storage medium of identity information
CN111222380B (en) Living body detection method and device and recognition model training method thereof
CN107909058A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
WO2016172923A1 (en) Video detection method, video detection system, and computer program product
US11315360B2 (en) Live facial recognition system and method

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
