CN109255674B - Trial makeup data processing system and method - Google Patents


Info

Publication number
CN109255674B
Authority
CN
China
Prior art keywords
controller
user
cosmetics
image
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810897344.5A
Other languages
Chinese (zh)
Other versions
CN109255674A (en)
Inventor
贾润芝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Youngzone Shanghai Intelligence Technology Co ltd
Original Assignee
Youngzone Shanghai Intelligence Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Youngzone Shanghai Intelligence Technology Co ltd
Priority to CN201810897344.5A (patent CN109255674B/en)
Publication of CN109255674A/en
Application granted
Publication of CN109255674B/en
Legal status: Active
Anticipated expiration

Abstract

The invention discloses a trial makeup data processing system and method. When the controller senses, through a cosmetic off-cabinet sensing device, that a cosmetic has left the storage rack, it displays the makeup effect of that cosmetic on a touch intelligent mirror and assigns a score. If the score exceeds a preset score, the cosmetic's type is determined to be a type suitable for the user. No manual operation is needed, which saves human resources and improves the efficiency of determining which types of cosmetics suit a user.

Description

Trial makeup data processing system and method
Technical Field
The invention relates to the technical field of computers, in particular to a makeup trial data processing system and a makeup trial data processing method.
Background
In practical application, different users are suited to different types of cosmetics, and staff must judge which types suit a user according to the user's skin color, skin type, facial features, and the like.
However, determining the cosmetics suitable for a user in the prior art requires manual judgment, which is time-consuming and inefficient.
Disclosure of Invention
The embodiments of the invention aim to provide a makeup trial data processing system and method that solve the inefficiency of prior-art methods for determining the types of cosmetics suitable for a user.
To achieve this purpose, the technical scheme of the embodiments of the invention is as follows:
the embodiment of the invention provides a makeup trial data processing system, which comprises a controller, a goods shelf, a touch intelligent mirror, a camera and at least one cosmetic off-cabinet sensing device, wherein:
the cosmetic off-cabinet sensing device is arranged on the storage rack and used for placing at least one cosmetic;
the camera is arranged on the touch intelligent mirror;
the touch intelligent mirror, the camera and the at least one cosmetic off-cabinet sensing device are respectively and electrically connected with the controller;
the controller is configured to determine, through each cosmetic off-cabinet sensing device, whether the cosmetics placed on that device have left the storage rack, and to determine a target face part and a target color corresponding to the cosmetics;
the controller is further used for acquiring a first face image of a user currently positioned in front of the intelligent touch mirror in real time through the camera;
the controller is further configured to overlay the target color on the target face part in the currently acquired first face image to obtain a made-up image;
the controller is further configured to display the currently obtained made-up image through the touch intelligent mirror;
the controller is further configured to determine the score of the currently obtained made-up image through a preset scoring model;
if the score is not less than the preset score, the controller is further configured to determine the target type of the cosmetics;
the controller is further configured to obtain a user portrait corresponding to the currently obtained first face image, where the user portrait includes at least one of information of an age, a gender, a race, a skin quality, a skin color, a face shape, facial features, favorite colors, and a purchasing habit of the user;
the controller is further configured to establish a first correspondence between the user representation and the target type, and determine the target type as a type of cosmetic appropriate for the user.
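The scoring-and-recommendation flow described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the scoring model, the preset score of 80, and the simple dictionary used for the first correspondence are all stand-ins, since the patent does not specify them.

```python
PRESET_SCORE = 80  # assumed threshold; the patent leaves the value open

def score_model(made_up_image):
    # Placeholder for the preset scoring model: here, simply the mean
    # intensity of the made-up image (a list of 0-255 pixel values).
    return sum(made_up_image) / len(made_up_image)

def process_trial(made_up_image, user_portrait, target_type, correspondences):
    """If the made-up image scores not less than the preset score, record
    the first correspondence (user portrait -> target type) and return the
    target type as a cosmetic type suitable for the user."""
    score = score_model(made_up_image)
    if score >= PRESET_SCORE:  # "not less than the preset score"
        correspondences[user_portrait] = target_type
        return target_type
    return None

correspondences = {}
suitable = process_trial([90, 85, 95], "portrait-A", "lipstick", correspondences)
```

Here the trial image scores 90, so the target type "lipstick" is recorded against the user portrait and returned as a suitable type.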
Further, the user representation is information encrypted by an encryption algorithm.
Further, the at least one cosmetic off-cabinet sensing device is at least one photoelectric sensor and/or at least one gravity sensor.
Further, the system further comprises: panorama camera, proximity sensor, light intensity sensor, sound intensity sensor, wherein:
the panoramic camera, the proximity sensor, the light intensity sensor and the sound intensity sensor are respectively arranged on the goods shelf and are respectively electrically connected with the controller;
the controller is further configured to acquire a first image and the current time through the panoramic camera at a preset time interval, and to determine the number of first users in the first image through a preset people-counting model;
the controller is further used for acquiring first light intensity through the light intensity sensor according to the preset time interval;
the controller is further configured to obtain a first sound intensity through the sound intensity sensor according to the preset time interval;
the controller is further used for judging whether a user is in front of the goods shelf or not through the proximity sensor according to the preset time interval;
the controller is further configured to establish and store a second corresponding relationship among the current time, the number of the first users, the first light intensity, and the first sound intensity;
the controller is further configured to determine a second correspondence comprising a maximum number of first users.
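The second-correspondence bookkeeping described above can be sketched as follows. The record fields and storage format are illustrative assumptions; the patent only requires that time, first-user count, light intensity, and sound intensity be stored together and that the record with the maximum user count be identified.

```python
records = []

def log_interval(current_time, user_count, light_intensity, sound_intensity):
    # Establish and store one second correspondence per preset interval.
    records.append({
        "time": current_time,
        "users": user_count,       # from the preset people-counting model
        "light": light_intensity,  # from the light intensity sensor
        "sound": sound_intensity,  # from the sound intensity sensor
    })

def busiest_record():
    # The second correspondence containing the maximum number of first users.
    return max(records, key=lambda r: r["users"])

log_interval("10:00", 3, 420, 55)
log_interval("10:05", 7, 460, 62)
log_interval("10:10", 5, 430, 58)
peak = busiest_record()
```

With these sample readings, the record at "10:05" (7 users) is selected, which is exactly the "at what time, light, and sound the customer flow is largest" query the patent describes.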
Further, the controller is specifically configured to:
acquire the preset correspondence between face images and user portraits, and obtain the user portrait corresponding to the first face image according to the first face image.
The embodiment of the invention also provides a makeup trial data processing method, applied to the system of any one of the above implementations, and the method comprises the following steps:
the controller determines, through each cosmetic off-cabinet sensing device, whether the cosmetics placed on that device have left the storage rack, and determines the target face part and target color corresponding to the cosmetics;
the controller acquires a first face image of a user currently positioned in front of the intelligent touch mirror in real time through the camera;
the controller overlays the target color on the target face part in the currently acquired first face image to obtain a made-up image;
the controller displays the currently obtained made-up image through the touch intelligent mirror;
the controller determines the score of the currently obtained image after makeup through a preset scoring model;
if the score is not less than the preset score, the controller determines the target type of the cosmetics;
the controller acquires a user portrait corresponding to a first face image which is acquired currently, wherein the user portrait comprises at least one of information of age, gender, race, skin color, facial form, facial features, favorite color and purchasing habit of the user;
the controller establishes a first correspondence between the user representation and the target type, and determines the target type as a type of cosmetic appropriate for the user.
Further, the user representation is information encrypted by an encryption algorithm.
Further, the at least one cosmetic off-cabinet sensing device is at least one photoelectric sensor and/or at least one gravity sensor.
Further, the method further comprises:
the controller acquires a first image and the current time through the panoramic camera at a preset time interval, and determines the number of first users in the first image through a preset people-counting model;
the controller acquires first light intensity through the light intensity sensor according to the preset time interval;
the controller acquires a first sound intensity through the sound intensity sensor according to the preset time interval;
the controller judges whether a user is in front of the goods shelf or not through the proximity sensor according to the preset time interval;
the controller establishes and stores a second correspondence among the current time, the number of first users, the first light intensity, and the first sound intensity;
the controller determines a second corresponding relationship including the maximum first user number.
Further, the controller obtains a user portrait corresponding to a currently obtained first face image, and the controller specifically includes:
acquire the preset correspondence between face images and user portraits, and obtain the user portrait corresponding to the first face image according to the first face image.
When the similarity between the first face image and any face image in the preset correspondence between face images and user portraits exceeds a preset threshold, the user portrait corresponding to that face image is determined to be the user portrait corresponding to the first face image.
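The similarity-based portrait lookup just described can be sketched as follows. This is a hypothetical simplification: a real system would compute similarity with a face-embedding model, while here cosine similarity over small feature vectors stands in for it, and the 0.9 threshold is an assumed value.

```python
import math

PRESET_THRESHOLD = 0.9  # assumed similarity threshold

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def lookup_portrait(first_face, preset_correspondence):
    """preset_correspondence maps preset face feature vectors (tuples) to
    user portraits; return the portrait of the first preset face whose
    similarity to first_face exceeds the preset threshold."""
    for face, portrait in preset_correspondence.items():
        if cosine_similarity(first_face, face) > PRESET_THRESHOLD:
            return portrait
    return None

presets = {(1.0, 0.0, 0.5): "portrait-A", (0.0, 1.0, 0.2): "portrait-B"}
match = lookup_portrait([0.9, 0.1, 0.45], presets)
```

The query vector is close to the first preset face (similarity about 0.99, above the threshold), so "portrait-A" is returned.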
The embodiment of the invention has the following advantages:
in the embodiments of the invention, when the controller senses through a cosmetic off-cabinet sensing device that a cosmetic has left the storage rack, it displays the makeup effect of that cosmetic through the touch intelligent mirror and assigns a score. If the score is higher than the preset score, the cosmetic's type is determined to be a type suitable for the user. No manual operation is needed, which saves human resources and improves the efficiency of determining which types of cosmetics suit a user.
Drawings
Fig. 1 is a schematic structural diagram of a makeup test data processing system according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a makeup trial data processing method according to an embodiment of the present invention.
Detailed Description
The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Example 1
Embodiment 1 of the present invention provides a makeup trial data processing system, a schematic structural diagram of which can be seen in fig. 1. The system includes a controller 101, a shelf 102, a touch intelligent mirror 109, a camera 103, and at least one cosmetic off-cabinet sensing device 104, wherein:
the at least one cosmetic off-cabinet sensing device 104 is disposed on the shelf 102 and holds at least one cosmetic;
the camera 103 is arranged on the touch intelligent mirror 109;
the touch intelligent mirror 109, the camera 103, and the at least one cosmetic off-cabinet sensing device 104 are each electrically connected with the controller 101;
the controller 101 is configured to determine, through each cosmetic off-cabinet sensing device 104, whether the cosmetics placed on that device have left the shelf, and to determine a target face part and a target color corresponding to the cosmetics;
the controller 101 is further configured to acquire, in real time through the camera 103, a first face image of the user currently located in front of the touch intelligent mirror;
the controller 101 is further configured to overlay the target color on the target face part in the currently acquired first face image to obtain a made-up image;
the controller 101 is further configured to display the currently obtained made-up image through the touch intelligent mirror 109;
the controller 101 is further configured to determine a score for the currently obtained made-up image through a preset scoring model;
if the score is not less than the preset score, the controller 101 is further configured to determine a target type of the cosmetic;
the controller 101 is further configured to obtain a user portrait corresponding to the currently acquired first face image, where the user portrait includes at least one of the user's age, gender, race, skin quality, skin color, face shape, facial features, favorite colors, and purchasing habits;
the controller 101 is further configured to establish a first correspondence between the user portrait and the target type, and to determine the target type as a type of cosmetic suitable for the user.
The controller 101 may be any type of device, such as a chip, a central processing unit, a mobile phone, a tablet computer, or a personal computer, as long as it can perform the functions above. The preset time interval may be any duration and can be set according to actual conditions.
The types of cosmetics may be classified according to the function and/or the site of action, such as concealer or whitener, or lipstick or blush.
In an implementation scenario, the controller 101 is specifically configured to:
acquire the preset correspondence between face images and user portraits, and obtain the user portrait corresponding to the first face image according to the first face image.
In one embodiment, the user portrait may be encrypted by an encryption algorithm, which makes the information tamper-resistant and secure.
In an implementation scenario, identification information for each cosmetic off-cabinet sensing device 104 may be set in advance, together with a correspondence between each device's identification information and the cosmetic's type. The controller 101 may then determine the target face part, target color, and target type of the cosmetic according to the identification information of whichever off-cabinet sensing device 104 sensed that its cosmetic left the cabinet.
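The pre-set identification-information mapping described above can be sketched as a simple lookup table. The sensor IDs and table contents are illustrative assumptions; the patent only requires that each device's identification information resolve to the cosmetic's target face part, target color, and target type.

```python
# Hypothetical mapping from off-cabinet sensing device ID to cosmetic info.
SENSOR_TABLE = {
    "sensor-01": {"part": "lips",   "color": (200, 40, 60),   "type": "lipstick"},
    "sensor-02": {"part": "cheeks", "color": (240, 150, 150), "type": "blush"},
}

def resolve_cosmetic(sensor_id):
    """Given the ID of the sensing device that reported an off-shelf event,
    return the corresponding target face part, color, and cosmetic type
    (or None for an unknown device)."""
    return SENSOR_TABLE.get(sensor_id)

info = resolve_cosmetic("sensor-01")
```

When "sensor-01" reports an off-shelf event, the controller learns it should try a lipstick color on the lips of the captured face image.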
In one implementation scenario, the at least one cosmetic off-cabinet sensing device 104 may be at least one photoelectric sensor (not shown) and/or at least one gravity sensor (not shown).
If the controller determines through a photoelectric sensor whether the cosmetics placed on the off-cabinet sensing device 104 have left the shelf 102, then specifically the controller 101 determines whether the photoelectric sensor senses an increase in light intensity; if so, it determines that the cosmetics placed on that sensor have left the shelf 102.
If the controller determines this through a gravity sensor, then specifically the controller 101 determines whether the gravity sensor senses a decrease in weight; if so, it determines that the cosmetics placed on that sensor have left the shelf 102.
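The two off-shelf detection rules above can be sketched as follows: a photoelectric sensor reports removal when its measured light intensity rises (the product no longer blocks the light), and a gravity sensor reports removal when the sensed weight drops. The thresholds are assumed values, not from the patent.

```python
LIGHT_DELTA = 50   # assumed minimum light increase (arbitrary units)
WEIGHT_DELTA = 20  # assumed minimum weight decrease (grams)

def photoelectric_off_shelf(baseline_light, current_light):
    # Light intensity increased -> the cosmetic left the shelf.
    return current_light - baseline_light >= LIGHT_DELTA

def gravity_off_shelf(baseline_weight, current_weight):
    # Sensed weight decreased -> the cosmetic left the shelf.
    return baseline_weight - current_weight >= WEIGHT_DELTA

left_by_light = photoelectric_off_shelf(100, 180)   # light rose by 80
left_by_weight = gravity_off_shelf(350, 320)        # weight fell by 30 g
```

Both sample readings exceed their deltas, so both detectors report an off-shelf event; small fluctuations below the deltas are ignored.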
It should be noted that if the system includes at least one gravity sensor, the system may also include at least one basket (not shown), the at least one basket being disposed on the shelf 102 with the at least one gravity sensor disposed under it. One gravity sensor may be arranged under a basket, or at least two may be.
In the embodiments of the invention, when the controller senses through a cosmetic off-cabinet sensing device that a cosmetic has left the storage rack, it displays the makeup effect of that cosmetic through the touch intelligent mirror 109 and assigns a score. If the score is higher than the preset score, the cosmetic's type is determined to be a type suitable for the user. No manual operation is needed, which saves human resources and improves the efficiency of determining which types of cosmetics suit a user.
In one implementation scenario, the system may further include: a panoramic camera 105, a proximity sensor 106, a light intensity sensor 107, and a sound intensity sensor 108, wherein:
the panoramic camera 105, the proximity sensor 106, the light intensity sensor 107, and the sound intensity sensor 108 are each arranged on the shelf 102 and electrically connected with the controller 101;
the controller 101 is further configured to acquire a first image and the current time through the panoramic camera 105 at a preset time interval, and to determine the number of first users in the first image through a preset people-counting model;
the controller 101 is further configured to obtain a first light intensity through the light intensity sensor 107 at the preset time interval;
the controller 101 is further configured to obtain a first sound intensity through the sound intensity sensor 108 at the preset time interval;
the controller 101 is further configured to determine whether a user is in front of the shelf 102 through the proximity sensor 106 at the preset time interval;
the controller 101 is further configured to establish and store a second correspondence among the current time, the number of first users, the first light intensity, and the first sound intensity;
the controller 101 is further configured to determine the second correspondence containing the maximum number of first users.
By means of the system, it can be determined at what time, under what light intensity, and at what sound intensity the customer flow is largest.
In an implementation scenario, after the made-up image is displayed through the touch intelligent mirror 109, the controller 101 may further obtain at least one piece of color number information corresponding to the cosmetic, where each piece of color number information includes a color number name and a color. It displays the at least one piece of color number information on the touch intelligent mirror 109, receives and responds to a user's selection instruction by selecting the corresponding color number information, obtains the user's image in real time through the camera 103, overlays the color of the selected color number information onto the target face part in the currently obtained user image to obtain a made-up image, and displays that image through the touch intelligent mirror 109. Alternatively, the system further includes a microphone (not shown) electrically connected with the controller 101; then
the controller 101 is further configured to: after the currently obtained made-up image is displayed through the touch intelligent mirror 109, obtain at least one piece of color number information corresponding to the cosmetic and display it on the touch intelligent mirror 109; receive the user's trial-makeup voice information through the microphone, where that information includes a color number name; in response, select the color corresponding to that color number name; obtain the user's image in real time through the camera 103; overlay the selected color onto the target face part in the currently obtained user image to obtain a made-up image; and display the currently obtained made-up image through the touch intelligent mirror 109.
The user can double-click or long-press one of the pieces of color number information displayed on the touch intelligent mirror 109 to trigger the mirror to generate a selection instruction and send it to the controller 101, which executes the subsequent operations after receiving it. The controller 101 may recognize the user's trial-makeup voice information through a speech recognition engine (not shown) connected to the controller 101, whose response time is less than 0.4 s, with an accuracy rate above 97% and a denoising rate above 61.5%.
In addition, the user can also control the makeup trial by voice, and thecontroller 101 can collect the makeup trial voice information of the user by the microphone, determine the color number information that the user wants to use according to the information, and then execute the subsequent operation.
The user does not need to perform any click operation, and can perform makeup trial only by controlling the system through voice, so that the makeup trial efficiency and the convenience are improved.
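The color-number selection described above can be sketched as follows. Speech recognition itself is out of scope here, so a recognized text phrase stands in for the microphone input; the color-number table and names are illustrative assumptions.

```python
# Hypothetical color-number information for one cosmetic:
# color number name -> RGB color.
COLOR_NUMBERS = {
    "coral 01": (250, 110, 90),
    "berry 02": (150, 30, 70),
}

def select_color(voice_text):
    """Return (name, color) for the first color number whose name appears
    in the recognized trial-makeup voice text, or None if nothing matches."""
    text = voice_text.lower()
    for name, color in COLOR_NUMBERS.items():
        if name in text:
            return name, color
    return None

choice = select_color("please try Berry 02 on my lips")
```

The phrase contains the color number name "berry 02", so that entry is selected; the returned color would then be overlaid on the target face part exactly as in the touch-selection path.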
Example 2
Embodiment 2 of the present invention provides a makeup trial data processing method, applied to the system in any of the above implementations. A flow diagram of the method can be seen in fig. 2, and the method includes the following steps:
step 201, the controller determines, through each cosmetic off-cabinet sensing device, whether the cosmetics placed on that device have left the shelf, and determines the target face part and target color corresponding to the cosmetics.
And step 202, the controller acquires a first face image of a user currently positioned in front of the intelligent touch mirror in real time through the camera.
And step 203, the controller overlays the target color on the target face part in the currently acquired first face image to obtain a made-up image.
And step 204, the controller displays the currently obtained made-up image through the touch intelligent mirror.
And step 205, the controller determines the score of the currently obtained image after makeup through a preset scoring model.
In step 206, if the score is not less than the preset score, the controller determines the target type of the cosmetic.
And step 207, the controller acquires a user portrait corresponding to the currently acquired first face image, wherein the user portrait comprises at least one of information of age, gender, race, skin color, facial form, facial features, favorite color and purchasing habit of the user.
In step 208, the controller establishes a first correspondence between the user portrait and the target type and determines the target type as a type of cosmetic suitable for the user.
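Step 203's color overlay can be sketched under stated assumptions: the face image is a nested list of RGB pixels, the target face part is given as a rectangle, and the target color is alpha-blended over that region. The patent does not specify the blending method; simple alpha compositing with an assumed strength is used here.

```python
ALPHA = 0.5  # assumed overlay strength

def overlay_color(image, region, color, alpha=ALPHA):
    """Blend `color` over the rectangular `region` (top, left, bottom,
    right; exclusive bounds) of `image`, returning the made-up image
    without modifying the original."""
    top, left, bottom, right = region
    out = [row[:] for row in image]  # shallow copy; pixels are tuples
    for y in range(top, bottom):
        for x in range(left, right):
            r, g, b = out[y][x]
            cr, cg, cb = color
            out[y][x] = (
                round(r * (1 - alpha) + cr * alpha),
                round(g * (1 - alpha) + cg * alpha),
                round(b * (1 - alpha) + cb * alpha),
            )
    return out

face = [[(100, 100, 100)] * 2 for _ in range(2)]      # tiny 2x2 "face"
made_up = overlay_color(face, (0, 0, 1, 1), (200, 0, 0))  # tint one pixel
```

Only the pixel inside the target region is tinted toward the target color; the rest of the image is unchanged, matching the "overlay on the target face part" behavior.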
Further, the user representation is information encrypted by an encryption algorithm.
Furthermore, the at least one cosmetic off-cabinet sensing device is at least one photoelectric sensor and/or at least one gravity sensor.
Further, the method further comprises:
the controller acquires a first image and the current time through the panoramic camera at a preset time interval, and determines the number of first users in the first image through a preset people-counting model;
the controller acquires first light intensity through the light intensity sensor according to a preset time interval;
the controller acquires first sound intensity through the sound intensity sensor according to a preset time interval;
the controller judges whether a user is in front of the goods shelf or not through the proximity sensor according to a preset time interval;
the controller establishes and stores a second correspondence among the current time, the number of first users, the first light intensity, and the first sound intensity;
and the controller determines a second corresponding relation comprising the maximum first user number.
Further, the controller obtaining the user portrait corresponding to the currently acquired first face image specifically includes:
acquiring the preset correspondence between face images and user portraits, and obtaining the user portrait corresponding to the first face image according to the first face image.
The technical features of embodiments 1 and 2 may be freely combined; the invention is not limited in this respect.
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to one skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.

Claims (8)

CN201810897344.5A (priority and filing date 2018-08-08): Trial makeup data processing system and method, Active, CN109255674B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201810897344.5A | 2018-08-08 | 2018-08-08 | Trial makeup data processing system and method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201810897344.5A | 2018-08-08 | 2018-08-08 | Trial makeup data processing system and method

Publications (2)

Publication Number | Publication Date
CN109255674A (en) | 2019-01-22
CN109255674B (en) | 2022-03-04

Family

ID=65050088

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201810897344.5A | CN109255674B (en), Active | 2018-08-08 | 2018-08-08

Country Status (1)

Country | Link
CN | CN109255674B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112819767A * | 2021-01-26 | 2021-05-18 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Image processing method, apparatus, device, storage medium, and program product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101371272A * | 2006-01-17 | 2009-02-18 | Shiseido Co., Ltd. | Makeup simulation system, makeup simulation device, makeup simulation method, and makeup simulation program
CN103093357A * | 2012-12-07 | 2013-05-08 | Jiangsu Lemaidao Network Technology Co., Ltd. | Cosmetic makeup trying system of online shopping
CN106942878A * | 2017-03-17 | 2017-07-14 | Hefei Longtuteng Information Technology Co., Ltd. | Partial enlargement make up system, apparatus and method
CN108053365A * | 2017-12-29 | 2018-05-18 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for generating information
CN108198052A * | 2018-03-02 | 2018-06-22 | Beijing Jingdong Shangke Information Technology Co., Ltd. | User's free choice of goods recognition method, device and intelligent shelf system


Also Published As

Publication number | Publication date
CN109255674A (en) | 2019-01-22

Similar Documents

Publication | Title
US10559102B2 | Makeup simulation assistance apparatus, makeup simulation assistance method, and non-transitory computer-readable recording medium storing makeup simulation assistance program
CN109242765B | Face image processing method and device and storage medium
CN108229415 | Information recommendation method and device, electronic equipment and computer-readable storage medium
CN104598869A | Intelligent advertisement pushing method based on human face recognition device
CN105204351B | Control method and device of air conditioning unit
CN108920490 | Assist implementation method, device, electronic equipment and storage medium of makeup
CN111488057B | Page content processing method and electronic equipment
CN107948506A | Image processing method and device and electronic equipment
CN108712603B | An image processing method and mobile terminal
CN105378657A | Apparatus and associated methods
CN108021308A | Image processing method, device and terminal
CN105243358A | Face detection-based shopping recommendation system and method
CN110443769A | Image processing method, image processing device and terminal equipment
CN107909011B | Face recognition method and related products
WO2019105411A1 | Information recommending method, intelligent mirror, and computer readable storage medium
CN107832784A | A method of image beautification and a mobile terminal
CN104463782B | Image processing method, device and electronic equipment
CN110415062A | Information processing method and device based on clothing try-on
CN103886284A | Character attribute information identification method and device and electronic device
CN107948503A | A photographic method, camera arrangement and mobile terminal
CN109255674B | Trial makeup data processing system and method
CN110246206B | Eyebrow penciling assisting method, device and system
CN108681398A | Visual interactive method and system based on visual human
CN108646918A | Visual interactive method and system based on visual human
CN115481284A | Cosmetic method and device based on cosmetic box, storage medium and electronic device

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
