CN113468932B - Intelligent mirror and makeup teaching method - Google Patents

Intelligent mirror and makeup teaching method

Info

Publication number
CN113468932B
Authority
CN
China
Prior art keywords
makeup
user
face
tool
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010349883.2A
Other languages
Chinese (zh)
Other versions
CN113468932A (en)
Inventor
孙锦
赵启东
黄利
李广琴
刘晓潇
杨斌
杨雪洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Co Ltd
Original Assignee
Hisense Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Co Ltd
Priority to CN202010349883.2A
Publication of CN113468932A
Application granted
Publication of CN113468932B
Legal status: Active (current)
Anticipated expiration


Abstract


The present application provides a smart mirror and a makeup teaching method. The smart mirror includes a display screen for displaying images or videos; a camera for collecting the user's face image; and a processor configured to play a makeup teaching video on the display screen in response to a user instruction, detect the user's current makeup part in response to the user changing makeup parts, and play the makeup teaching video clip corresponding to the current makeup part on the display screen. Because the teaching video can be switched in real time to the clip for the current makeup part, the whole process requires no operation of the mirror surface, which reduces mirror contamination and improves the user's makeup experience.

Description

Intelligent mirror and cosmetic teaching method
Technical Field
The application relates to the technical field of intelligent household equipment, in particular to an intelligent mirror and a cosmetic teaching method.
Background
With the development of science and technology, smart mirrors have gradually entered people's daily lives. However, when a user watches a makeup teaching video on a smart mirror while making up, the user must control the video to watch the teaching for different facial parts, which requires operating the mirror surface by hand. Since the user may be holding cosmetics or have stained hands, operating the mirror surface at that moment is inconvenient, contaminates the mirror surface, degrades imaging, and gives the user a poor makeup experience.
In summary, there is a need for a smart mirror and a makeup teaching method that eliminate the need to operate the mirror surface during the whole makeup process, reduce mirror contamination, and improve the user's makeup experience.
Disclosure of Invention
The application provides a smart mirror and a makeup teaching method that eliminate the need to operate the mirror surface during the whole makeup process, reduce mirror contamination, and improve the user's makeup experience.
In a first aspect, in an exemplary embodiment of the present application, there is provided a smart mirror, including:
the display screen is used for displaying images or videos;
the camera is used for collecting face images of the user;
a processor configured to:
responding to the instruction of the user, and playing a makeup teaching video on the display screen;
and responding to the user changing makeup parts, detecting the current makeup part of the user, and playing the makeup teaching video clip corresponding to the current makeup part on the display screen.
In some exemplary embodiments, the processor is configured to:
Acquiring a face image of the user acquired by the camera, wherein the face image comprises a face and a cosmetic tool;
performing target detection on the face image of the user, and determining the type and the position of the makeup tool of the user;
Determining whether a current makeup location of the user is identified according to the type and location of the makeup tool of the user;
If yes, responding to the current makeup part of the user, and playing a makeup teaching video clip corresponding to the current makeup part on the display screen;
Otherwise, carrying out key point recognition on the face image of the user, recognizing the key point positions of all parts of the face of the user, determining the attribution relation between the key point positions of all parts of the face and the positions of the makeup tool, determining the current makeup part of the user according to the attribution relation between the key point positions of all parts of the face and the positions of the makeup tool, and playing the makeup teaching video clip corresponding to the current makeup part on the display screen in response to the current makeup part of the user.
In some exemplary embodiments, the processor is configured to:
Performing target detection on the face image of the user, and detecting the type and the position of the makeup tool of the user;
determining a center point of the position of the make-up tool according to the position of the make-up tool;
Comparing the central point of the position of the makeup tool with the key point positions of all the parts of the human face, and determining whether the central point of the position of the makeup tool is positioned in the key point surrounding area of any one of the parts of the human face;
If yes, determining the corresponding face part as the current makeup part of the user. Otherwise, comparing the key points of each face part with the center point of the position of the makeup tool, determining the key point with the minimum distance from that center point, and determining whether the distance between that key point and the center point of the position of the makeup tool satisfies a set threshold; if yes, determining the face part corresponding to that key point as the current makeup part of the user, and if not, determining the user's whole face as the current makeup part of the user.
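The attribution logic described above can be illustrated with a short sketch. The code below is a minimal illustration only, assuming the target detection model returns the makeup tool's bounding box as (x1, y1, x2, y2), the key point model returns per-part (x, y) arrays, and a pixel threshold of 40; the part names, helper signature, and threshold value are illustrative assumptions and are not specified in the patent.

```python
import numpy as np
import cv2

def locate_makeup_part(tool_box, part_keypoints, dist_threshold=40.0):
    """Map a detected makeup tool to a facial part.

    tool_box:       (x1, y1, x2, y2) bounding box of the makeup tool.
    part_keypoints: dict mapping part name -> (N, 2) array of keypoints.
    dist_threshold: max pixel distance for the nearest-keypoint fallback
                    (illustrative value, not taken from the patent).
    """
    x1, y1, x2, y2 = tool_box
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

    # 1) Is the tool center inside the region enclosed by any part's keypoints?
    for part, pts in part_keypoints.items():
        hull = cv2.convexHull(np.asarray(pts, dtype=np.float32))
        if cv2.pointPolygonTest(hull, center, measureDist=False) >= 0:
            return part

    # 2) Otherwise, find the single keypoint closest to the tool center.
    best_part, best_dist = None, float("inf")
    for part, pts in part_keypoints.items():
        d = np.min(np.linalg.norm(np.asarray(pts, dtype=float) - center, axis=1))
        if d < best_dist:
            best_part, best_dist = part, d

    # 3) Accept the nearest part only if it is close enough; otherwise fall back
    #    to treating the whole face as the current makeup part.
    return best_part if best_dist <= dist_threshold else "face"
```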
In some exemplary embodiments, the processor is configured to:
inputting the face image of the user into a face key point detection model for detection and identification, and identifying the key point positions of each part of the face;
And determining the attribution relation between the key point positions of all parts of the face and the positions of the makeup tool according to the key point positions of all parts of the face and the positions of the makeup tool.
In some exemplary embodiments, the processor is configured to:
acquiring a first training sample, wherein the first training sample comprises a face image;
Labeling the face image, and determining the key point positions of each part of the face in the face image;
generating a labeling file corresponding to each part of the human face according to the key point positions of each part of the human face in the human face image, wherein the labeling file corresponding to each part of the human face comprises the key point positions of each part of the human face;
training the convolutional neural network by using the face image and the labeling files corresponding to the parts of the face to obtain the face key point detection model.
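As a rough, non-authoritative illustration of this training step, the sketch below trains a small convolutional network to regress key point coordinates from face crops. The patent only states that a convolutional neural network is trained on face images and per-part key point annotation files; the architecture, the assumed 68-point layout, and the dataset interface are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

NUM_KEYPOINTS = 68  # assumed total; the patent fixes a point count per part, not the total

class KeypointNet(nn.Module):
    """Toy CNN regressing (x, y) for each face key point from a face crop."""
    def __init__(self, num_kp=NUM_KEYPOINTS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, num_kp * 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_keypoint_model(dataset, epochs=10, lr=1e-3):
    """dataset yields (image_tensor, keypoints) pairs built from the face images
    and their annotation files, with key points normalised to [0, 1]."""
    model = KeypointNet()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    for _ in range(epochs):
        for images, keypoints in loader:
            pred = model(images).view(-1, NUM_KEYPOINTS, 2)
            loss = loss_fn(pred, keypoints)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```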
In some exemplary embodiments, the processor is configured to:
Inputting the face image of the user into a target detection model for recognition, and recognizing the type and the position of the makeup tool of the user;
and determining the current makeup part of the user according to the type and the position of the makeup tool of the user.
In some exemplary embodiments, the processor is configured to:
acquiring a second training sample, wherein the second training sample comprises a face image;
Labeling the face image, and determining the type and the position of a cosmetic tool in the face image;
Generating a annotation file corresponding to the makeup tool according to the type and the position of the makeup tool in the face image, wherein the annotation file corresponding to the makeup tool comprises the type and the position of the makeup tool;
Training the convolutional neural network by using the facial image and the annotation file corresponding to the makeup tool to obtain the target detection model.
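A hedged sketch of this step is shown below; it fine-tunes an off-the-shelf detector (torchvision's Faster R-CNN) on images annotated with makeup-tool classes and bounding boxes. The patent does not name a specific detector or tool classes, so both are assumptions made only for illustration.

```python
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Illustrative tool classes; index 0 is background by torchvision convention.
TOOL_CLASSES = ["__background__", "eyebrow_pencil", "eyeliner", "lipstick", "puff", "beauty_egg"]

def build_tool_detector():
    """Adapt a pretrained detector to localise and classify makeup tools."""
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(TOOL_CLASSES))
    return model

def train_step(model, images, targets, optimizer):
    """targets mirror the annotation files: each entry holds 'boxes' (x1, y1, x2, y2)
    and 'labels' (tool class indices) for one face image."""
    model.train()
    loss_dict = model(images, targets)   # in train mode the detector returns a dict of losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```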
In some exemplary embodiments, the smart mirror further comprises a memory;
The processor is configured to:
Acquiring various makeup teaching videos;
Detecting and identifying each makeup teaching video, identifying each makeup part of a face in the makeup teaching video, and segmenting the makeup teaching video according to each makeup part of the face;
and storing the cut cosmetic teaching video in the memory.
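One possible way to realize this segmentation and storage, assuming the per-part time ranges have already been obtained by running the recognition over the teaching video and that ffmpeg is available on the device, is sketched below; the file layout, naming, and index file are illustrative assumptions only.

```python
import json
import subprocess
from pathlib import Path

def cut_by_makeup_part(video_path, segments, out_dir):
    """segments: list of (part_name, start_sec, end_sec) produced by running the
    tool/key point recognition over the teaching video. Writes one clip per part
    plus an index file the mirror can use to jump to a part instantly."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    index = {}
    for part, start, end in segments:
        clip_path = out_dir / f"{part}.mp4"
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(video_path),
             "-ss", str(start), "-to", str(end), "-c", "copy", str(clip_path)],
            check=True,
        )
        index[part] = str(clip_path)
    (out_dir / "index.json").write_text(json.dumps(index, ensure_ascii=False, indent=2))
    return index
```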
In a second aspect, in an exemplary embodiment of the present application, there is provided a cosmetic teaching method including:
responding to the instruction of a user, and playing a makeup teaching video on a display screen;
and responding to the user changing makeup parts, detecting the current makeup part of the user, and playing the makeup teaching video clip corresponding to the current makeup part on the display screen.
According to the above technical scheme, the user learns to make up by following the makeup teaching video played on the display screen in response to the user's instruction. When the user changes the makeup part, the current makeup part of the user is detected in real time and the makeup teaching video is switched in real time to the makeup teaching video clip corresponding to the current makeup part. The user can continuously follow the makeup teaching video and complete the makeup without any operation, which is convenient and fast; the whole process requires no operation of the mirror surface, reduces mirror contamination, and improves the user's makeup experience.
In some exemplary embodiments, the method further comprises:
Acquiring a face image of the user acquired by the camera, wherein the face image comprises a face and a cosmetic tool;
performing target detection on the face image of the user, and determining the type and the position of the makeup tool of the user;
Determining whether a current makeup location of the user is identified according to the type and location of the makeup tool of the user;
If yes, responding to the current makeup part of the user, and playing a makeup teaching video clip corresponding to the current makeup part on the display screen;
Otherwise, carrying out key point recognition on the face image of the user, recognizing the key point positions of all parts of the face of the user, determining the attribution relation between the key point positions of all parts of the face and the positions of the makeup tool, determining the current makeup part of the user according to the attribution relation between the key point positions of all parts of the face and the positions of the makeup tool, and playing the makeup teaching video clip corresponding to the current makeup part on the display screen in response to the current makeup part of the user.
According to the above technical scheme, target detection is performed on the user's face image to determine the type and position of the user's makeup tool, and whether the current makeup part of the user can be identified is judged from that type and position. If the current makeup part can be identified, the makeup teaching video clip corresponding to the current makeup part is played on the display screen for the user to learn from. If it cannot, because the makeup tools for different makeup parts may look similar or even identical and cannot be effectively distinguished, so that the makeup part cannot be judged directly from the makeup tool, key point recognition is performed on the user's face image, the key point positions of each face part are identified, the attribution relation between the key point positions of each face part and the position of the makeup tool is determined, and the current makeup part of the user is determined according to that attribution relation.
In some exemplary embodiments, the method further comprises:
Performing target detection on the face image of the user, and detecting the type and the position of the makeup tool of the user;
determining a center point of the position of the make-up tool according to the position of the make-up tool;
Comparing the central point of the position of the makeup tool with the key point positions of all the parts of the human face, and determining whether the central point of the position of the makeup tool is positioned in the key point surrounding area of any one of the parts of the human face;
If yes, determining the corresponding face part as the current makeup part of the user. Otherwise, comparing the key points of each face part with the center point of the position of the makeup tool, determining the key point with the minimum distance from that center point, and determining whether the distance between that key point and the center point of the position of the makeup tool satisfies a set threshold; if yes, determining the face part corresponding to that key point as the current makeup part of the user, and if not, determining the user's whole face as the current makeup part of the user.
According to the above technical scheme, the center point of the position of the makeup tool is compared with the key point positions of each face part. When the center point of the position of the makeup tool lies within the area enclosed by the key points of any face part, that face part is determined as the current makeup part of the user; otherwise, the key points of each face part are compared with the center point of the position of the makeup tool to determine the current makeup part of the user. In this way, the smart mirror can accurately switch the makeup teaching video clips in real time without the user noticing and without any manual operation of the smart mirror, thereby improving the user's makeup experience.
In some exemplary embodiments, the method further comprises:
inputting the face image of the user into a face key point detection model for detection and identification, and identifying the key point positions of each part of the face;
And determining the attribution relation between the key point positions of all parts of the face and the positions of the makeup tool according to the key point positions of all parts of the face and the positions of the makeup tool.
In some exemplary embodiments, the method further comprises:
acquiring a first training sample, wherein the first training sample comprises a face image;
Labeling the face image, and determining the key point positions of each part of the face in the face image;
generating a labeling file corresponding to each part of the human face according to the key point positions of each part of the human face in the human face image, wherein the labeling file corresponding to each part of the human face comprises the key point positions of each part of the human face;
training the convolutional neural network by using the face image and the labeling files corresponding to the parts of the face to obtain the face key point detection model.
According to the technical scheme, the face image and the labeling file corresponding to each part of the face are trained on the convolutional neural network to obtain the face key point detection model, so that when the current makeup part of the user cannot be identified according to the type and the position of the makeup tool, the key point positions of each part of the face of the user are quickly and accurately identified through the face key point detection model, and the current makeup part of the user is identified.
In some exemplary embodiments, the method further comprises:
Inputting the face image of the user into a target detection model for recognition, and recognizing the type and the position of the makeup tool of the user;
and determining the current makeup part of the user according to the type and the position of the makeup tool of the user.
According to the technical scheme, the type and the position of the makeup tool of the user can be quickly and accurately identified by inputting the face image of the user into the target detection model for identification, and the current makeup part of the user can be determined according to the type and the position of the makeup tool of the user.
In some exemplary embodiments, the method further comprises:
acquiring a second training sample, wherein the second training sample comprises a face image;
Labeling the face image, and determining the type and the position of a cosmetic tool in the face image;
Generating a annotation file corresponding to the makeup tool according to the type and the position of the makeup tool in the face image, wherein the annotation file corresponding to the makeup tool comprises the type and the position of the makeup tool;
Training the convolutional neural network by using the facial image and the annotation file corresponding to the makeup tool to obtain the target detection model.
According to the technical scheme, the convolutional neural network is trained through the labeling files corresponding to the face images and the makeup tools, so that the target detection model is obtained, and the type and the position of the makeup tools of the user can be detected in real time by the intelligent mirror when the user learns to make up by contrasting the makeup teaching video.
In some exemplary embodiments, the method further comprises:
Acquiring various makeup teaching videos;
Detecting and identifying each makeup teaching video, identifying each makeup part of a face in the makeup teaching video, and segmenting the makeup teaching video according to each makeup part of the face;
And storing the cut cosmetic teaching video in a memory.
According to the technical scheme, the makeup teaching video is segmented according to each makeup part of the face, and after the segmented makeup teaching video is stored in the memory of the intelligent mirror in advance, when a user learns to make up by contrasting the makeup teaching video, the user responds to the transformation action of the makeup part of the user, and the makeup teaching video is switched to the makeup teaching video segment corresponding to the current makeup part in real time.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block diagram of a hardware configuration of a smart mirror according to some embodiments of the present application;
FIG. 2 is a block diagram of a software configuration of a smart mirror according to some embodiments of the present application;
FIG. 3 is a schematic diagram of a user interface of a smart mirror according to some embodiments of the present application;
FIG. 4 is a flow chart of a method for teaching makeup according to some embodiments of the present application;
Fig. 5 is a schematic flow chart for identifying each makeup part of a face in a makeup teaching video according to some embodiments of the present application;
FIG. 6 is a schematic flow chart of identifying a makeup part of a face of a user according to some embodiments of the present application;
FIG. 7 is a schematic flow chart of determining the attribution relationship between the positions of key points of each part of a face and the positions of cosmetic tools according to some embodiments of the present application;
fig. 8 is a schematic diagram of a page for switching a makeup teaching video by an intelligent mirror according to some embodiments of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 is a block diagram schematically illustrating a hardware configuration of a smart mirror according to an embodiment of the present application. As shown in fig. 1, smart mirror 100 includes a processor 110, a detector 120, a communication interface 130, a display screen 140, an illumination lamp 150, an audio output interface 160, a memory 170, and a power supply 180.
The processor 110 includes a CPU processor 111, a RAM112, a ROM113, a graphics processor 114, a communication interface 115, a video processor 116, an audio processor 117, and a communication bus. Wherein the RAM112 and the ROM113 are connected with the CPU processor 111, the graphic processor 114, the communication interface 115, the video processor 116, and the audio processor 117 through a communication bus, and the communication interface 115 may include a first interface 115-1 to an nth interface 115-n. These interfaces may also be network interfaces that are connected to external devices via a network.
The ROM 113 stores instructions for various system start-ups. When a power-on signal is received and the smart mirror 100 starts up, the CPU processor 111 executes the system start instruction in the ROM and copies the operating system stored in the memory 170 into the RAM 112 so that the operating system starts running. After the operating system is started, the CPU processor 111 copies the various applications in the memory 170 into the RAM 112 and then starts running them.
The graphics processor 114 generates various graphic objects, such as icons, operation menus, and graphics for displaying user input instructions. It includes an arithmetic unit, which receives the various interactive instructions input by the user, performs the corresponding operations, and displays the various objects according to their display attributes, and a renderer, which generates the various objects based on the arithmetic unit's output; the rendered result is displayed on the display screen 140.
CPU processor 111 is operative to execute operating system and application program instructions stored in memory 170. And executing various application programs, data and contents according to various interactive instructions received from the outside, so as to finally display and play various audio and video contents.
In some exemplary embodiments, the CPU processor 111 may include a plurality of processors. The plurality of processors may include one main processor and one or more sub-processors. The main processor performs some operations of the smart mirror 100 in a pre-power-up mode and/or displays images in normal mode. The one or more sub-processors perform operations in standby mode and the like.
The video processor 116 is configured to receive an external video signal, perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to a standard codec protocol of an input signal, and obtain a signal that is directly displayed or played on the display screen 140.
The video processor 116, by way of example, includes a demultiplexing module, a video decoding module, an image compositing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module is used for demultiplexing the input audio/video data stream, such as the input MPEG-2, and demultiplexes the input audio/video data stream into video signals, audio signals and the like.
And the video decoding module is used for processing the demultiplexed video signals, including decoding, scaling and the like.
The image synthesis module, such as an image synthesizer, superimposes and mixes the GUI signal input by the user or generated by the graphics generator onto the scaled video image to generate an image signal for display.
The frame rate conversion module converts the input video frame rate, for example converting a 60Hz frame rate into a 120Hz or 240Hz frame rate, commonly by means of frame interpolation.
The display formatting module converts the frame-rate-converted video into a video output signal that conforms to the display format, for example an RGB data signal.
An audio processor 117 for receiving an external audio signal, performing decompression and decoding, and noise reduction, digital-to-analog conversion, and amplification processing according to a standard codec protocol of an input signal, to obtain a sound signal that can be played in a speaker.
In other exemplary embodiments, the video processor 116 may include one or more chip components. The audio processor 117 may also comprise one or more chips. And in other exemplary embodiments, the video processor 116 and the audio processor 117 may be separate chips or may be integrated together with the processor 110 in one or more chips.
The detector 120 is a component that the smart mirror 100 uses to collect signals from the external environment or to interact with the outside. The detector 120 includes an image collector 121, such as a camera or video camera, which may be used to collect external environmental scenes, as well as to collect attributes of the user or face images of the user.
In other exemplary embodiments, the detector 120 may further include a sound collector 122, such as a microphone, that may be used to receive the user's sound, including a voice signal of a control command of the user controlling the smart mirror 100, or collect ambient sound for identifying the type of ambient scene.
In other exemplary embodiments, the detector 120 may further include a weather collector 123, such as an air temperature detector, for collecting current weather temperature, or collecting weather climate attribute data such as current season.
A communication interface 130, a component for communicating with an external device or external server according to various communication protocol types. For example, the communication interface 130 may be a Wifi module 131, a bluetooth module 132, a wired ethernet module 133, a usb134, or other network communication protocol modules or a near field communication protocol module.
The smart mirror 100 may establish control signal and data signal transmission and reception with an external control device or a content providing device through the communication interface 130.
The display 140 is configured to receive the image signal input from the video processor 116 and display video content, images, and a menu manipulation interface. The display 140 includes a display component for presenting images and a drive component for driving the display of images. The video content may come from video content stored in the memory 170 or from video content input through the communication interface 130. In addition, the display screen 140 displays a user manipulation UI interface generated in the smart mirror 100 and used to control the smart mirror 100, and the graphical user interface (GUI) displayed on the display screen 140 may also be used by the user to input operation commands.
The illumination lamp 150 is used for providing light supplement for a user when learning makeup using the smart mirror 100.
The audio output interface 160 is configured to receive an audio signal output by the audio processor 117, where the audio output interface 160 may include a speaker 161 (such as a loudspeaker) carried by the audio output interface 160, or an external audio output terminal 162 that outputs the audio signal to a generating device of an external device, such as an external audio interface, an earphone interface, and so on.
A memory 170 for storing various operating programs, data and applications for driving and controlling the smart mirror 100. The memory 170 may store various control signal instructions input by a user. Including storing various software modules for driving the smart mirror 100. Such as various software modules stored in the memory 170, including a base module, a detection module, a display control module, a communication module, etc.
The base module is used for signal communication between various hardware in the smart mirror 100 and sending processing and control signals to the upper layer module, the detection module is used for collecting various information from various detectors or user input interfaces and performing digital-to-analog conversion and analysis management, and the display control module is used for controlling the display screen 140 to display image content and can be used for playing multimedia image content, UI interface and other information. And the communication module is used for carrying out control and data communication with external equipment.
Meanwhile, the memory 170 may also be used to store received external data and user data, images in various user interfaces, visual effect maps, and the like.
In addition, the memory 170 is specifically configured to store an operating program for driving the processor 110 in the smart mirror 100, and store various application programs built in the smart mirror 100, various application programs downloaded by a user from an external device, various graphical user interfaces related to the application, various objects related to the graphical user interfaces, user data information, and various internal data supporting the application. The memory 170 is also used to store system software such as OS kernel, middleware and applications, and to store drivers and related data such as the display screen 140, the communication interface 130, and the input/output interface of the detector 120, or to store other user data.
A power supply 180 for providing power support for the starting and operation of the various elements in the smart mirror 100. May be in the form of a battery and associated control circuitry. Under the operation of a user, the power input by the external power supply provides power supply support for the smart mirror 100. The power supply 180 may include a built-in power circuit installed inside the smart mirror 100, or may be an external power source installed on the smart mirror 100, and a power interface for providing an external power source in the smart mirror 100.
Fig. 2 is a block diagram schematically illustrating a software configuration of a smart mirror according to an embodiment of the present application. As shown in fig. 2, may include an operating system 171, an interface layout manager 172, an event delivery system 173, and application programs 174.
The operating system 171, for example an Android operating system, includes operating software for handling various basic system services and performing hardware-related tasks, and acts as an intermediary for data processing between applications and hardware components. In some embodiments, a portion of the operating system kernel may contain a series of software to manage the hardware resources of the smart mirror 100 and to serve other programs or software code.
In other embodiments, portions of the operating system kernel may contain one or more device drivers, which may be a set of software code in the operating system that helps operate or control the smart mirror-associated devices or hardware. The driver may contain code to operate video, audio and/or other multimedia components. Examples include display, camera, flash, and WiFi.
The accessibility module 1711 is configured to access or modify an application program, so as to implement accessibility of the application program and operability of display content thereof.
A communication module 1712 for connecting with other peripheral devices via related communication interfaces and communication networks.
The user interface module 1713 is configured to provide an object for displaying a user interface, so that each application program can access the object, and can implement operability of a user. Such as the front-end interactive interface of a smart mirror.
Control applications 1714 for controllable process management, including runtime applications, and the like.
The event delivery system 173 may be implemented within the operating system 171, within the application 174, or, in some embodiments, partly in both. It is configured to monitor various user-input events and, based on the recognition results of the various events or sub-events, to execute one or more sets of predefined operation handlers.
The event recognition module 1732 is configured to input various event definitions for various user input interfaces, recognize various events or sub-events, and transmit the events or sub-events to the processor 110 for executing one or more corresponding sets of processing programs. For example, the processor 110 processes the corresponding event or sub-event according to the logic program and the core algorithm stored in the smart mirror 100, and presents the processed result on the display screen 140.
Where an event or sub-event refers to an input detected by one or more detectors in smart mirror 100. Such as various sub-events entered by the user's voice or various sub-events entered by manipulation of the display.
The interface layout manager 172 directly or indirectly receives events or sub-events from the event delivery system 173 that are intercepted by each user input, and is configured to update the layout of the user interface, including, but not limited to, the location of each control or sub-control in the interface, and the size or location of the container, the level, etc. of various execution operations associated with the interface layout.
Fig. 3 schematically illustrates a user interface of a smart mirror according to an embodiment of the present application. As shown in fig. 3, the user interface includes a plurality of view display areas. For example, when the user clicks on a makeup teaching video that he or she wants to learn, the smart mirror automatically divides the screen into an upper area and a lower area, that is, a first view display area 301 and a second view display area 302, where the first view display area 301 is used by the user as a mirror for making up and the second view display area 302 is used to play the makeup teaching video being learned. The display screen (i.e. the mirror surface) of the smart mirror can be in a full-screen mode, for browsing a corresponding cosmetics mall; in a full-mirror mode, for looking into the mirror, making up, or other purposes; or in a half-mirror mode, for learning makeup or other purposes while watching a makeup teaching video. One or more different items are arranged in each view display area, and a selector for indicating that any item is selected is also arranged in the user interface.
It should be noted that the boundaries between the multiple view display areas may be visible or invisible. For example, different view display areas may be distinguished by different background colors, or marked by visible indicators such as boundary lines. There may also be no visible or invisible boundary at all, in which case the related views within a region of the screen, which share the same changing properties of size and/or arrangement, are regarded as belonging to the same view partition.
In the smart mirror provided by some embodiments of the present application, the processor may be configured to play a makeup teaching video on the display screen in response to an instruction of the user, detect the current makeup part of the user in response to the user changing makeup parts, and play a makeup teaching video clip corresponding to the current makeup part on the display screen. For example, if a user wants to learn a dating makeup look, the corresponding dating makeup teaching video can be selected through an operation instruction triggered on the display screen or a voice instruction issued to the voice device, and the makeup teaching video is played on the display screen 140 of the smart mirror. When the smart mirror recognizes that the user has changed the makeup part, it immediately switches the makeup teaching video to the makeup teaching video clip corresponding to the current makeup part for the user to watch, learn from, and follow while making up.
Optionally, the processor is further configured to: acquire a face image of the user collected by the camera, where the face image contains a face and a makeup tool; perform target detection on the face image of the user and determine the type and position of the user's makeup tool; determine whether the current makeup part of the user can be identified according to the type and position of the user's makeup tool; if so, play the makeup teaching video clip corresponding to the current makeup part on the display screen in response to the current makeup part of the user; otherwise, perform key point recognition on the face image of the user, identify the key point positions of each part of the user's face, determine the attribution relation between the key point positions of each face part and the position of the makeup tool, determine the current makeup part of the user according to that attribution relation, and play the makeup teaching video clip corresponding to the current makeup part on the display screen in response to the current makeup part of the user. For example, when the user is following a dating makeup teaching video, the current makeup part of the user is identified according to the type and position of the user's makeup tool, and the makeup teaching video clip corresponding to the current makeup part is then played on the display screen. The makeup tools for different makeup parts may, however, look similar or even identical, for example an eyebrow pencil used when drawing eyebrows and an eyeliner pencil used when drawing eyeliner; in that case they cannot be effectively distinguished and the makeup part cannot be judged directly from the makeup tool. Key point recognition is then performed on the face image of the user, the key point positions of each face part are identified, the attribution relation between the key point positions of each face part and the position of the makeup tool is determined, the current makeup part of the user is determined according to that attribution relation, and the makeup teaching video clip corresponding to the current makeup part is played on the display screen in response to the current makeup part of the user.
Optionally, the processor is further configured to: perform target detection on the face image of the user and detect the type and position of the user's makeup tool; determine the center point of the position of the makeup tool according to the position of the makeup tool; compare the center point of the position of the makeup tool with the key point positions of each face part and determine whether the center point lies within the area enclosed by the key points of any face part; if so, determine that face part as the current makeup part of the user; otherwise, compare the key points of each face part with the center point of the position of the makeup tool, determine the key point with the minimum distance from that center point, and determine whether that distance satisfies a set threshold; if it does, determine the face part corresponding to that key point as the current makeup part of the user, and if it does not, determine the user's whole face as the current makeup part of the user.
Optionally, the processor is further configured to input a face image of the user to the face key point detection model for detection and identification, identify key point positions of each part of the face, and determine attribution relations between the key point positions of each part of the face and the positions of the makeup tool according to the key point positions of each part of the face and the positions of the makeup tool.
Optionally, the processor is further configured to obtain a first training sample, wherein the first training sample comprises a face image, label the face image, determine the key point positions of all parts of the face in the face image, generate labeling files corresponding to all parts of the face according to the key point positions of all parts of the face in the face image, the labeling files corresponding to all parts of the face comprise the key point positions of all parts of the face, and train the convolutional neural network with the face image and the labeling files corresponding to all parts of the face to obtain the face key point detection model.
Optionally, the processor is further configured to input the face image of the user to the target detection model for recognition, recognize the type and position of the cosmetic tool of the user, and determine the current cosmetic location of the user according to the type and position of the cosmetic tool of the user.
Optionally, the processor is further configured to obtain a second training sample, wherein the second training sample comprises a face image, label the face image, determine the type and the position of a makeup tool in the face image, generate a label file corresponding to the makeup tool according to the type and the position of the makeup tool in the face image, the label file corresponding to the makeup tool comprises the type and the position of the makeup tool, and train the convolutional neural network by the face image and the label file corresponding to the makeup tool to obtain the target detection model.
Optionally, the processor is further configured to acquire various makeup teaching videos, detect and identify each makeup teaching video, identify each makeup part of a face in the makeup teaching video, segment the makeup teaching video according to each makeup part of the face, and store the segmented makeup teaching videos in the memory.
Fig. 4 shows a schematic flow chart of a cosmetic teaching method. This process may be performed by smart mirror 100.
As shown in fig. 4, the process includes:
In step 401, in response to a user's instruction, a cosmetic teaching video is played on a display screen.
In the embodiment of the application, the makeup teaching video is played on the display screen in response to the instruction of the user. According to the makeup teaching video which the user wants to learn, the makeup teaching video is played in the corresponding display screen area on the intelligent mirror so that the user can learn to make up. Types of cosmetic tutorial videos may include, among others, dating, shopping, attending weddings or interviews, etc.
Makeup parts of the face need to be recognized both in the makeup teaching videos, which may be provided by a makeup content provider or obtained from the network, and in the face of the user detected in real time; only the user's face must be recognized in real time. Therefore, before the makeup teaching video is played on the display screen, each makeup part of the face in the makeup teaching video needs to be identified, and the makeup teaching video is segmented according to each makeup part of the face and stored in the memory of the smart mirror in advance. Fig. 5 is a schematic flow chart for identifying each makeup part of a face in a makeup teaching video. This process may be performed by smart mirror 100.
As shown in fig. 5, the process includes:
step 501, a face image in a cosmetic teaching video is obtained.
In the embodiment of the application, the face image in the makeup teaching video may refer to any face image in the makeup teaching video, and the face image may include a face and a makeup tool. The makeup teaching video may be various makeup teaching videos provided by a makeup content provider or may be acquired from a network.
Step 502, inputting the face image into the target detection model for processing.
In the embodiment of the application, the face image is input into the target detection model for processing to obtain the type and position of the makeup tool in the face image. Specifically, the target detection model is built on a convolutional neural network. A large number of face images containing makeup tools are first collected from the Internet and from real scenes, the makeup tools in the face images are annotated, and corresponding annotation files containing the annotated coordinates and types of the makeup tools are generated. The face images and the corresponding annotation files are used as training data for the convolutional neural network, which learns the positions and features of the makeup tools in the face images through a series of computations on the input images and annotation information. After training is completed, the target detection model is generated; when a new face image containing a makeup tool is input into the target detection model, the model automatically outputs the position and type of the makeup tool in that image.
In step 503, the type and position of the cosmetic tool in the face image are obtained.
Step 504 determines whether a current cosmetic location in the cosmetic teaching video is identified.
In the embodiment of the present application, it is determined whether the current makeup location in the makeup teaching video is identified, if yes, step 508 is performed, and if not, step 505 is performed.
Step 505, the face image is input into a face key point detection model for processing.
In the embodiment of the application, after it is determined that the current makeup part in the makeup teaching video cannot be identified, the face image is input into a face key point detection model for processing to obtain the key point positions of each face part. Specifically, because the makeup tools for different makeup parts may look similar or even identical, for example an eyebrow pencil used when drawing eyebrows and an eyeliner pencil used when drawing eyeliner, they cannot be effectively distinguished and the makeup part cannot be judged directly from the makeup tool; the face image during makeup is then detected with the face key point detection model. Each key point has a fixed type, such as eye, mouth, or eyebrow, and contains position coordinate information. The face key point detection model is built on a convolutional neural network. A large number of face images are collected from the network and from real scenes; the points of each face part on the images, such as eyes, nose, mouth, and eyebrows, are annotated, with a fixed number of points per part, and corresponding annotation files containing the key point coordinate information are generated. The annotated face images and annotation files are input into the convolutional neural network as training data, and the network learns the positional features of the key points in the face images through a series of computations. After training is completed, the face key point detection model is generated; when a new face image is input, the model automatically outputs the key point positions of each part of the face.
Step 506, obtaining the key point positions of each part of the face.
Step 507, determining the attribution relation between the key point positions of each part of the face and the positions of the makeup tools.
In the embodiment of the application, the attribution relation between the key point positions of all parts of the human face and the positions of the makeup tools is determined according to the key point positions of all parts of the human face and the positions of the makeup tools, and the current makeup part in the makeup teaching video is determined according to the attribution relation between the key point positions of all parts of the human face and the positions of the makeup tools.
Step 508, determining the current makeup location in the makeup teaching video.
In the embodiment of the application, the current makeup part in the makeup teaching video can be determined by utilizing target detection or face key point detection of the face image.
It should be noted that, according to the process, the recognition of each makeup part of the face can be completed, the makeup teaching video is segmented according to each makeup part of the face, and then the segmented makeup teaching video is pre-stored in the memory of the smart mirror.
Step 402, in response to the user changing makeup parts, detecting the current makeup part of the user and playing the makeup teaching video clip corresponding to the current makeup part on the display screen.
In the embodiment of the application, when the intelligent mirror recognizes that the user changes the makeup part, the makeup teaching video is immediately switched to the makeup teaching video segment corresponding to the current makeup part in real time.
After the makeup teaching video is segmented according to each makeup part of the face, and the segmented makeup teaching video is stored in a memory of the intelligent mirror in advance, when a user learns to make up by contrasting the makeup teaching video, the current makeup part of the user is detected in response to the transformation action of the makeup part of the user, and a makeup teaching video segment corresponding to the current makeup part is played on a display screen. Fig. 6 is a schematic flow chart for identifying a makeup part of a face of a user. This process may be performed by smart mirror 100.
As shown in fig. 6, the flow includes:
Step 601, acquiring a face image of a user.
In an embodiment of the present application, the face image of the user may include a face and a cosmetic tool. The face image is acquired by a camera of the intelligent mirror when a user learns to make up by contrast with the make-up teaching video. When the intelligent mirror detects the face of the user, the intelligent mirror acquires video stream resources, converts the video stream resources into image resources according to frames, and then inputs the image resources into a target detection model for detection and identification.
Step 602, inputting a face image of a user into a target detection model for detection.
In the embodiment of the application, the face image of the user is input into the target detection model for detection, so that the type and the position of the makeup tool of the user can be identified. For example, for the identification of certain cosmetic sites, the identification of the cosmetic site may be performed directly by identifying the cosmetic tool based on the specificity of the cosmetic tool. For example, when makeup is applied to the face using a make-up egg or a puff, large-scale concealer and foundation are generally applied to the face, and when lipstick is applied, a coloring operation is generally applied to the lips. Therefore, the target detection model can be utilized to detect and identify makeup tools such as makeup eggs, powder puffs and lipsticks, when the relevant makeup tools are detected in the video picture for 1s, the current makeup part of a user can be judged, and the makeup teaching video can immediately jump to the video segment corresponding to the makeup part of the user so that the user can watch the makeup teaching video of the makeup part and then learn to make up with the makeup teaching video.
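The following sketch shows one way such a detection-and-switch loop could look. It assumes a detect() wrapper around the target detection model, a mapping from tool type to makeup part, and a player object with a play_clip() method; these names are hypothetical. The 1 s stability window follows the example in the text; everything else is an illustrative assumption.

```python
import time
import cv2

def follow_user_makeup(detect, part_for_tool, player, camera_index=0, hold_seconds=1.0):
    """Watch the camera stream; when the same makeup tool has been seen continuously
    for `hold_seconds`, switch the teaching video to that tool's makeup part."""
    cap = cv2.VideoCapture(camera_index)
    current_part, candidate, since = None, None, None
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        tool = detect(frame)               # e.g. "lipstick", or None if nothing is found
        if tool is None or tool != candidate:
            candidate, since = tool, time.time()
            continue
        # Same tool seen again: check whether it has been stable long enough.
        if time.time() - since >= hold_seconds:
            part = part_for_tool.get(tool, "face")
            if part != current_part:
                player.play_clip(part)     # jump to the clip for this makeup part
                current_part = part
    cap.release()
```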
Step 603, obtaining the type and position of the cosmetic tool in the face image.
Step 604 determines whether the current cosmetic location of the user is identified.
In an embodiment of the present application, it is determined whether a current cosmetic location of a user is identified according to a type and a location of a cosmetic tool of the user. If yes, go to step 608, if no, go to step 605.
Step 605, inputting the face image of the user into a face key point detection model for detection.
In the embodiment of the application, when the current makeup part of the user cannot be identified, the face image of the user is input into the face key point detection model for detection, and the key point positions of all parts of the face are identified.
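For illustration, assuming a detector that returns landmarks in the common 68-point layout (the patent does not fix a particular keypoint scheme), the keypoints could be grouped by face part as follows.

```python
import numpy as np

# Index ranges follow the widely used 68-point landmark convention; this is an
# assumption, since the patent does not mandate a specific keypoint model.
PART_KEYPOINT_RANGES = {
    "jaw": range(0, 17),
    "eyebrows": range(17, 27),
    "nose": range(27, 36),
    "eyes": range(36, 48),
    "lips": range(48, 68),
}

def group_keypoints_by_part(landmarks):
    """landmarks: (68, 2) array from some face keypoint detection model."""
    pts = np.asarray(landmarks, dtype=np.float32)
    return {part: pts[list(idx)] for part, idx in PART_KEYPOINT_RANGES.items()}
```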
Step 606, obtaining the key point positions of each part of the face.
Step 607, determining the attribution relationship between the key point positions of the parts of the face and the positions of the makeup tools.
In the embodiment of the application, because the makeup tools used for different makeup parts can look similar or even identical, the makeup part cannot always be determined from the tool alone. In that case, key point recognition needs to be performed on the user's face image to identify the key point positions of each part of the user's face, and the attribution relationship between those key point positions and the position of the makeup tool is then determined. Fig. 7 is a schematic flow chart for judging the attribution relation between the key point positions of the face parts and the position of the cosmetic tool. This process may be performed by smart mirror 100.
As shown in fig. 7, the flow includes:
Step 701, obtaining a center point of a position of a cosmetic tool.
In the embodiment of the application, the position of the makeup tool is analyzed, and the center point of the position of the makeup tool is calculated.
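Assuming the detection model reports the tool position as an axis-aligned bounding box (a common but not stated convention), the center point is simply the box midpoint, as in this minimal sketch:

```python
def tool_center(bbox):
    """bbox: (x_min, y_min, x_max, y_max) of the detected makeup tool."""
    x_min, y_min, x_max, y_max = bbox
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
```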
Step 702, obtaining a key point surrounding area of each part of the face.
In the embodiment of the application, the positions of the key points of each part of the face are analyzed, and the surrounding areas of the key points of each part of the face are determined.
In step 703, it is determined whether the center point of the position of the cosmetic tool is located within the key point surrounding area of any one of the face parts.
In the embodiment of the application, the positions of the central point of the position of the makeup tool and the key points of all the parts of the human face are compared, and whether the central point of the position of the makeup tool is positioned in the key point surrounding area of any one of the parts of the human face is determined. If yes, go to step 704, if no, go to step 705.
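One plausible reading of the "key point surrounding area" is the convex hull of a part's keypoints; under that assumption, the containment test of step 703 could be sketched as below, reusing group_keypoints_by_part and tool_center from the earlier sketches.

```python
import cv2
import numpy as np

def part_containing_point(part_keypoints, center):
    """part_keypoints: {part_name: (N, 2) array}; center: (x, y) of the tool.
    Returns the part whose keypoint hull contains the center, else None."""
    for part, pts in part_keypoints.items():
        hull = cv2.convexHull(np.asarray(pts, dtype=np.float32))
        # pointPolygonTest >= 0: the point lies inside or on the hull boundary
        if cv2.pointPolygonTest(hull, tuple(map(float, center)), False) >= 0:
            return part
    return None
```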
Step 704, determining the corresponding face part in each part of the face as the current cosmetic part of the user.
In the embodiment of the application, if the center point of the position of the makeup tool is determined to be located in the key point surrounding area of any face part in all parts of the face, the face part is determined to be the current makeup part of the user.
Step 705, calculating the key point with the minimum distance to the center point of the position of the cosmetic tool.
In the embodiment of the application, if the center point of the position of the makeup tool is determined not to be located in the key point surrounding area of any face part in the face parts, the key point with the minimum distance from the center point of the position of the makeup tool is calculated.
Step 706, determining whether the distance between the key point with the minimum distance and the center point of the position of the cosmetic tool satisfies a set threshold.
In the embodiment of the application, it is determined whether the distance between the key point with the smallest distance and the center point of the position of the cosmetic tool meets the set threshold. If yes, go to step 707; if no, go to step 708. The set threshold may be chosen empirically.
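Steps 705 to 708 might be combined into one fallback routine as below; max_distance_px stands in for the empirically set threshold and is an arbitrary example value.

```python
import numpy as np

def nearest_keypoint_part(part_keypoints, center, max_distance_px=40.0):
    """Find the face part whose keypoint lies closest to the tool center;
    fall back to the whole face when nothing is within the threshold."""
    c = np.asarray(center, dtype=np.float32)
    best_part, best_dist = None, float("inf")
    for part, pts in part_keypoints.items():
        d = float(np.min(np.linalg.norm(np.asarray(pts, dtype=np.float32) - c, axis=1)))
        if d < best_dist:
            best_part, best_dist = part, d
    return best_part if best_dist <= max_distance_px else "face"
```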
Step 707, determining the face position corresponding to the key point with the smallest distance as the current makeup position of the user.
In the embodiment of the application, when the distance between the key point with the minimum distance and the central point of the position of the cosmetic tool is determined to meet the set threshold, the face part corresponding to the key point with the minimum distance is determined to be the current cosmetic part of the user.
Step 708, determining the face of the user as the current cosmetic location of the user.
In the embodiment of the application, when the distance between the key point with the minimum distance and the central point of the position of the cosmetic tool is determined to not meet the set threshold value, the face of the user is determined to be the current cosmetic position of the user.
Meanwhile, while the face key points are being detected, the target detection model is used to detect and identify the makeup tool in the face image, including the head of a makeup pen, the head of a makeup brush and the like, yielding the position information and category information of the tool. After the position information of the face key points and of the makeup tool has been obtained, the two pieces of position information are analysed within the same face image. First, the center point of the position of the makeup tool is calculated and compared with the face key point positions. If this center point falls within the key point surrounding area of a certain face part, that part is judged to be the user's current makeup part. If it does not, the face key point nearest to the center point is calculated; if the distance between the two points is within the set threshold, the face part corresponding to that key point is judged to be the current makeup part, and if the distance exceeds the set threshold, the face as a whole is judged to be the current makeup part.
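Putting the pieces together, the attribution logic of Fig. 7 can be condensed into the single hedged sketch below, which reuses the helpers from the earlier sketches; it is one illustrative reading of steps 701-708, not the claimed implementation.

```python
def current_makeup_part(tool_bbox, landmarks):
    """Steps 701-708 in one pass: hull containment first, then the
    nearest-keypoint fallback, then 'face' as the default."""
    center = tool_center(tool_bbox)
    parts = group_keypoints_by_part(landmarks)
    part = part_containing_point(parts, center)
    return part if part is not None else nearest_keypoint_part(parts, center)
```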
At step 608, a current cosmetic location of the user is determined.
In the embodiment of the application, the current makeup part of the user can be determined according to target detection or face key point detection of the face image of the user.
Step 609, playing the makeup teaching video clip corresponding to the current makeup part.
In the embodiment of the application, the current makeup part of the user is detected in response to the transformation action of the makeup part of the user, and the makeup teaching video clip corresponding to the current makeup part is played on the display screen. For example, when the smart mirror recognizes that the user changes the makeup location, the makeup teaching video is immediately switched to the makeup teaching video clip corresponding to the current makeup location in real time, and a page in which the smart mirror switches the makeup teaching video may be as shown in fig. 8.
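Finally, the switch to the pre-stored clip could be as simple as the lookup below; the clip file names and the play_fn callback are assumptions used only to make the sketch concrete.

```python
# Hypothetical mapping from recognized makeup part to the pre-segmented clip.
PART_TO_CLIP = {
    "base": "tutorial_base.mp4",
    "eyes": "tutorial_eyes.mp4",
    "lips": "tutorial_lips.mp4",
    "face": "tutorial_full.mp4",
}

class TutorialSwitcher:
    """Plays the clip for the detected part and switches only on a change."""

    def __init__(self, play_fn):
        self.play_fn = play_fn          # e.g. sends the file to the display screen
        self.playing_part = None

    def on_part_detected(self, part):
        if part and part != self.playing_part:
            self.playing_part = part
            self.play_fn(PART_TO_CLIP.get(part, PART_TO_CLIP["face"]))
```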
According to the above technical scheme, the application provides a smart mirror and a makeup teaching method. In response to the user's instruction, the makeup teaching video can be played on the display screen for the user to learn from. When the user changes the makeup part, the current makeup part can be detected in real time and the makeup teaching video switched in real time to the video segment corresponding to that part, so the user can keep following the tutorial and complete the makeup without any operation. This is convenient and quick, the mirror surface never needs to be touched, mirror pollution is reduced, and the user's makeup experience is improved.
Since the communication terminal and the computer storage medium in the embodiments of the present application may be applied to the above-mentioned processing method, the technical effects that can be obtained by the communication terminal and the computer storage medium may also refer to the above-mentioned method embodiments, and the embodiments of the present application are not described herein again.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (9)

Responding to the transformation action of the makeup part of the user, detecting the current makeup part of the user, and playing a makeup teaching video clip corresponding to the current makeup part on the display screen; the processor is configured to acquire the face image of the user acquired by the camera, wherein the face image comprises a face and a cosmetic tool; perform target detection on the face image of the user and determine the position of the makeup tool of the user; perform key point recognition on the face image of the user and identify the key points of all parts of the face of the user; determine the center point of the position of the makeup tool according to the position of the makeup tool; compare the center point of the position of the makeup tool with the key points of all parts of the face, and determine whether the center point of the position of the makeup tool is located in a key point surrounding area of any face part among all parts of the face; if yes, determine the corresponding face part among all parts of the face as the current makeup part of the user; otherwise, compare the key points of all parts of the face with the center point of the position of the makeup tool, and determine the key point with the minimum distance from the center point of the position of the makeup tool; determine whether the distance between the key point with the minimum distance and the center point of the position of the makeup tool meets a set threshold; if yes, determine the face part corresponding to the key point with the minimum distance as the current makeup part of the user; otherwise, determine the face of the user as the current makeup part of the user.
Detecting a current cosmetic location of the user in response to a transformation action of the cosmetic location of the user, and playing a makeup teaching video clip corresponding to the current makeup part on the display screen; wherein the detecting of the current makeup location of the user in response to the transformation action of the makeup location of the user includes: acquiring a face image of the user, the face image comprising a face and a makeup tool; performing target detection on the face image of the user to determine the position of the makeup tool of the user; performing key point recognition on the face image of the user, and recognizing the key point positions of all parts of the face of the user; determining a center point of the position of the makeup tool according to the position of the makeup tool; comparing the center point of the position of the makeup tool with the key point positions of all parts of the face, and determining whether the center point of the position of the makeup tool is located in the key point surrounding area of any one of the face parts; if so, determining the corresponding face part among the face parts as the current makeup part of the user; otherwise, comparing the key points of each part of the face with the center point of the position of the makeup tool, and determining the key point with the minimum distance from the center point of the position of the makeup tool; determining whether the distance between that key point and the center point of the position of the makeup tool meets a set threshold; if the set threshold is met, determining the face part corresponding to the key point with the minimum distance as the current makeup part of the user; and if the set threshold is not met, determining the face of the user as the current makeup part of the user.
CN202010349883.2A | 2020-04-28 | 2020-04-28 | Intelligent mirror and makeup teaching method | Active | CN113468932B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010349883.2A CN113468932B (en) | 2020-04-28 | 2020-04-28 | Intelligent mirror and makeup teaching method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010349883.2A CN113468932B (en) | 2020-04-28 | 2020-04-28 | Intelligent mirror and makeup teaching method

Publications (2)

Publication Number | Publication Date
CN113468932A (en) | 2021-10-01
CN113468932B (en) | 2025-01-24

Family

ID=77865883

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010349883.2A | Active | CN113468932B (en)

Country Status (1)

Country | Link
CN (1) | CN113468932B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN116466602A (en)* | 2022-01-11 | 2023-07-21 | 芜湖美的厨卫电器制造有限公司 | A dressing table
CN117831104B (en)* | 2023-12-30 | 2024-05-24 | 佛山瀚镜智能科技有限公司 | Intelligent mirror cabinet and control method thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107153805A (en)* | 2016-03-02 | 2017-09-12 | 北京美到家科技有限公司 | Customize makeups servicing unit and method
CN108606453A (en)* | 2018-04-19 | 2018-10-02 | 郑蒂 | A kind of intelligent cosmetic mirror

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105488472B (en)* | 2015-11-30 | 2019-04-09 | 华南理工大学 | A digital makeup method based on sample templates
CN107341435A (en)* | 2016-08-19 | 2017-11-10 | 北京市商汤科技开发有限公司 | Processing method, device and the terminal device of video image
CN107220960B (en)* | 2017-05-27 | 2021-01-05 | 无限极(中国)有限公司 | Make-up trial method, system and equipment
CN107969058A (en)* | 2017-12-29 | 2018-04-27 | 上海斐讯数据通信技术有限公司 | A kind of intelligence dressing table and control method
CN108920490A (en)* | 2018-05-14 | 2018-11-30 | 京东方科技集团股份有限公司 | Assist implementation method, device, electronic equipment and the storage medium of makeup
CN109671142B (en)* | 2018-11-23 | 2023-08-04 | 南京图玩智能科技有限公司 | Intelligent cosmetic method and intelligent cosmetic mirror

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107153805A (en)* | 2016-03-02 | 2017-09-12 | 北京美到家科技有限公司 | Customize makeups servicing unit and method
CN108606453A (en)* | 2018-04-19 | 2018-10-02 | 郑蒂 | A kind of intelligent cosmetic mirror

Also Published As

Publication number | Publication date
CN113468932A (en) | 2021-10-01

Similar Documents

Publication | Publication Date | Title
US11381756B2 (en)DIY effects image modification
CN110868635B (en)Video processing method and device, electronic equipment and storage medium
US20210303855A1 (en)Augmented reality item collections
WO2021194755A1 (en)Combining first user interface content into second user interface
CN112579826A (en)Video display and processing method, device, system, equipment and medium
JP7421010B2 (en) Information display method, device and storage medium
CN111949782B (en)Information recommendation method and service equipment
CN113641442A (en) Interactive method, electronic device and storage medium
KR20170137491A (en)Electronic apparatus and operating method thereof
US11640700B2 (en)Methods and systems for rendering virtual objects in user-defined spatial boundary in extended reality environment
CN113468932B (en) Intelligent mirror and makeup teaching method
KR20170039379A (en)Electronic device and method for controlling the electronic device thereof
CN112560540A (en)Beautiful makeup putting-on recommendation method and device
CN113824982A (en)Live broadcast method and device, computer equipment and storage medium
CN116962563B (en) Interaction method, device and medium
WO2025036409A1 (en)Media content processing method and device, and storage medium and program product
WO2024148963A1 (en)Makeup assisting method, and electronic device
CN113468372B (en)Intelligent mirror and video recommendation method
CN115017522A (en) Permission recommendation method and electronic device
CN114143580A (en)Display device and handle control pattern display method
RU2817182C1 (en)Information display method, device and data medium
CN119991870A (en) Picture book generation method and display device
CN120321443A (en) Image processing method and display device
CN119718121A (en) Media data generation method, device, equipment, medium and product
CN118295620A (en)Display equipment, server and digital human interaction method

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
