Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an eye protection display method and a learning machine with an eye protection mode. The method and the learning machine adaptively adjust the current display state by acquiring attribute information about any one of the current display content, the viewer and the external environment. The adaptive adjustment not only covers different display adjustment modes, namely deciding whether to terminate the display operation, adjusting the display brightness and adjusting the display duration, but also allows targeted adjustment of the display mode for the viewing action of the current viewer and for the external environment. A single display adjustment mode applied uniformly to all viewers is thereby avoided, and the learning machine can apply an appropriate eye protection mode to different viewers. In addition, the method and the learning machine need no additional external device such as a distance sensor: the relevant information about the viewer and the current environment is obtained directly through the camera built into the learning machine and processed by an adaptive algorithm to determine a display mode suitable for the current viewer, which effectively reduces the cost of the learning machine and improves its operating efficiency.
The invention provides an eye protection display method which is characterized by comprising the following steps:
step (1), acquiring attribute information about any one of the display content, a viewer and the external environment, and determining evaluation information about the current display state according to the attribute information;
step (2), judging the validity of any one of the current display content, the identity of the viewer and the viewing action of the viewer according to the evaluation information;
step (3), adjusting the current display state according to the judgment result about the legality of any one of the current display content, the viewer identity and the viewer viewing action;
further, in the step (1), the acquiring of attribute information about any one of the display content, the viewer and the external environment, and the determining of evaluation information about the current display state according to the attribute information specifically include,
step (101), acquiring current display text and/or image information as attribute information of the display content, or acquiring facial images of different angles of a viewer as attribute information of the viewer, or acquiring brightness data of a face area and a non-face area of the viewer as attribute information of the external environment;
step (102), when the attribute information is the display text and/or image information, extracting symbol pixel characteristics of the display text and/or image information, and calculating the extracted symbol pixel characteristics through a first preset evaluation model to obtain the evaluation information;
step (103), when the attribute information is the facial images of different angles of the viewer, extracting face-related features of the facial images, and calculating the extracted face-related features through a second preset evaluation model to obtain the evaluation information;
step (104), when the attribute information is the brightness data of the face region and the non-face region of the viewer, extracting brightness-related features of the brightness data, and calculating the extracted brightness-related features through a third preset evaluation model to obtain the evaluation information;
further, in the step (102), the calculating the extracted symbol pixel characteristics by a first preset evaluation model to obtain the evaluation information specifically includes,
matching the symbol pixel characteristics with preset display content characteristics through the first preset evaluation model to obtain matching degree between the symbol pixel characteristics and the preset display content characteristics as the evaluation information;
or,
in the step (103), the calculating of the extracted face-related features by the second preset evaluation model to obtain the evaluation information specifically includes,
converting the face-related feature into a face recognition feature or a face-display face distance value through the second preset evaluation model to serve as the evaluation information;
or,
in the step (104), the calculating the extracted brightness-related feature by using a third preset evaluation model to obtain the evaluation information specifically includes,
converting the brightness-related features into brightness difference values of a face region and a non-face region of a viewer through the third preset evaluation model to serve as the evaluation information;
further, in the step (2), the judging the validity of any one of the currently displayed content, the viewer identity and the viewer viewing action according to the evaluation information specifically includes,
when the evaluation information is the matching degree between the symbol pixel characteristics and the preset display content characteristics, if the matching degree meets a preset matching degree range, determining that the current display content is legal, otherwise, determining that the current display content is not legal;
or,
when the evaluation information is the face recognition feature, if the face recognition feature is matched with a preset face feature library, determining that the identity of the viewer is legal, otherwise, determining that the identity of the viewer is not legal;
or,
when the evaluation information is the face-display face distance value, if the face-display face distance value meets a preset distance range, determining that the viewing action of the viewer is legal, otherwise, determining that the viewing action of the viewer is not legal;
or,
when the evaluation information is the brightness difference value between the face area and the non-face area of the viewer, if the brightness difference value between the face area and the non-face area of the viewer meets a preset brightness difference range, determining that the watching action of the viewer is legal, otherwise, determining that the watching action of the viewer is not legal;
further, in the step (3), the adjusting the current display state according to the judgment result on the validity of any one of the current display content, the viewer identity, and the viewer viewing action specifically includes,
when the current display content is determined to be legal, the current display operation is kept running normally, and when the current display content is determined not to be legal, all current display operations are terminated;
or,
when the identity of the viewer is determined to be legal, setting a preset display time length for the current viewer, terminating all current display operations under the condition that the actual viewing time length of the current viewer exceeds the preset display time length, and directly terminating all current display operations when the identity of the viewer is determined not to be legal;
or,
and when the watching action of the viewer is determined to be legal, adjusting the display brightness in real time, and when the watching action of the viewer is determined not to be legal, directly terminating all current display operations.
The invention also provides a learning machine with an eye protection mode, which is characterized in that:
the learning machine with the eye protection mode comprises an attribute information acquisition module, an evaluation information determination module, a legality judgment module and a display state adjustment module; wherein,
the attribute information acquisition module is used for acquiring attribute information about any one of the display content of the learning machine, a corresponding viewer and the external environment;
the evaluation information determining module is used for determining evaluation information about the current display state of the learning machine according to the attribute information;
the legality judging module is used for judging the legality of any one of the currently displayed content, the identity of the viewer and the viewing action of the viewer of the learning machine according to the evaluation information;
the display state adjusting module is used for adjusting the current display state of the learning machine according to the judgment result about the legality of any one of the current display content, the identity of the viewer and the viewing action of the viewer;
further, the attribute information acquisition module comprises a display content acquisition submodule, a face image acquisition submodule and a brightness data acquisition submodule; wherein,
the display content acquisition submodule is used for acquiring current display text and/or image information as attribute information of the display content;
the facial image acquisition sub-module is used for acquiring facial images of different angles of a viewer as attribute information of the viewer;
the brightness data acquisition submodule is used for acquiring brightness data of a face area and a non-face area of a viewer respectively as attribute information of the external environment;
further, the evaluation information determination module comprises a first evaluation information determination submodule, a second evaluation information determination submodule and a third evaluation information determination submodule; wherein,
the first evaluation information determining submodule is used for matching the symbol pixel characteristics corresponding to the display text and/or image information with preset display content characteristics through a first preset evaluation model so as to obtain the matching degree between the symbol pixel characteristics and the preset display content characteristics as the evaluation information;
the second evaluation information determination submodule is used for converting face related features corresponding to face images of different angles of a viewer into face recognition features or face-display face distance values through a second preset evaluation model to serve as the evaluation information;
the third evaluation information determination submodule is used for converting brightness related characteristics corresponding to brightness data of a face area and a non-face area of a viewer into brightness difference values of the face area and the non-face area of the viewer through a third preset evaluation model to serve as the evaluation information;
further, the validity judging module comprises a first validity judging submodule, a second validity judging submodule, a third validity judging submodule and a fourth validity judging submodule; wherein,
the first validity judging submodule is used for judging whether the currently displayed content of the learning machine has validity or not according to the matching degree between the symbol pixel characteristics and the preset display content characteristics;
the second legality judging submodule is used for judging whether the identity of the viewer corresponding to the learning machine is legal or not according to the face recognition feature;
the third legality judging submodule is used for judging whether the viewing action of the viewer corresponding to the learning machine is legal or not according to whether the face-display face distance value meets a preset distance range;
the fourth legality judging submodule is used for judging whether the watching action of the viewer corresponding to the learning machine is legal or not according to the brightness difference value of the face area and the non-face area of the viewer;
further, the display state adjusting module comprises a display operation adjusting submodule and a display brightness adjusting submodule; wherein,
the display operation adjusting submodule is used for directly terminating all current display operations of the learning machine when the current display content is determined not to have legality, or the identity of a viewer is determined not to have legality, or the viewing action of the viewer is determined not to have legality;
and the display brightness adjusting submodule is used for adjusting the display brightness of the learning machine when the watching action of the viewer is determined to be legal.
Compared with the prior art, the eye protection display method and the learning machine with the eye protection mode adaptively adjust the current display state by acquiring attribute information about any one of the current display content, the viewer and the external environment. The adaptive adjustment not only covers different display adjustment modes, namely deciding whether to terminate the display operation, adjusting the display brightness and adjusting the display duration, but also allows targeted adjustment of the display mode for the viewing action of the current viewer and for the external environment. A single display adjustment mode applied uniformly to all viewers is thereby avoided, and the learning machine can apply an appropriate eye protection mode to different viewers. In addition, the method and the learning machine need no additional external device such as a distance sensor: the relevant information about the viewer and the current environment is obtained directly through the camera built into the learning machine and processed by an adaptive algorithm to determine a display mode suitable for the current viewer, which effectively reduces the cost of the learning machine and improves its operating efficiency.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of an eye protection display method according to an embodiment of the present invention. The eye protection display method comprises the following steps:
Step (1): acquiring attribute information about any one of the display content, the viewer and the external environment, and determining evaluation information about the current display state according to the attribute information.
Preferably, in the step (1), the acquiring of attribute information about any one of the display content, the viewer and the external environment, and the determining of evaluation information about the current display state according to the attribute information specifically include,
step (101), acquiring current display text and/or image information as attribute information of the display content, or acquiring face images of different angles of a viewer as attribute information of the viewer, or acquiring brightness data of a face area and a non-face area of the viewer as attribute information of the external environment;
step (102), when the attribute information is the display text and/or image information, extracting symbol pixel characteristics of the display text and/or image information, and calculating the extracted symbol pixel characteristics through a first preset evaluation model to obtain the evaluation information;
step (103), when the attribute information is the face images of different angles of the viewer, extracting face-related features of the face images, and calculating the extracted face-related features through a second preset evaluation model to obtain the evaluation information;
and (104) when the attribute information is the brightness data of the face area and the non-face area of the viewer, extracting brightness-related features of the brightness data, and calculating the extracted brightness-related features through a third preset evaluation model to obtain the evaluation information.
Preferably, in the step (102), the calculating the extracted symbol pixel feature by using a first preset evaluation model to obtain the evaluation information specifically includes,
and matching the symbol pixel characteristics with preset display content characteristics through the first preset evaluation model to obtain the matching degree between the symbol pixel characteristics and the preset display content characteristics as the evaluation information.
Preferably, in the step (103), the calculating of the extracted face-related features by the second preset evaluation model to obtain the evaluation information specifically includes,
converting the face-related feature into a face recognition feature or a face-display face distance value as the evaluation information through the second preset evaluation model.
Preferably, in the step (104), the performing, by a third preset evaluation model, a calculation process on the extracted brightness-related feature to obtain the evaluation information specifically includes,
and converting the brightness related characteristics into brightness difference values of the face area and the non-face area of the viewer through the third preset evaluation model to serve as the evaluation information.
Step (2): judging the validity of any one of the current display content, the identity of the viewer and the viewing action of the viewer according to the evaluation information.
Preferably, in the step (2), the judging the validity of any one of the currently displayed content, the viewer identity and the viewer viewing action based on the evaluation information specifically includes,
when the evaluation information is the matching degree between the symbol pixel features and the preset display content features, if the matching degree meets a preset matching degree range, it is determined that the current display content is legal; otherwise, it is determined that the current display content is not legal. Specifically, the preset display content features may include the corresponding information features of the text content, image content or video content in a database of material that the viewer is allowed to learn and watch on the learning machine. This ensures that the viewing behaviour of the current viewer is regarded as legal only when the content currently watched on the learning machine is content specified in the database, thereby preventing the viewer from watching content outside the database.
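By way of illustration only, one possible realisation of the matching degree is a cosine similarity between a feature vector of the currently displayed content and each feature vector of the allowed-content database; the patent does not prescribe the first preset evaluation model, so the feature representation, function names and the example matching degree range below are assumptions:

```python
import numpy as np

def matching_degree(frame_features: np.ndarray, allowed_features: np.ndarray) -> float:
    """Best cosine similarity between the symbol/pixel feature vector of the
    currently displayed content and a library of allowed-content features.

    frame_features  : (d,) feature vector of the current display content
    allowed_features: (n, d) feature vectors of content the viewer may watch
    """
    # Normalise to unit length so the dot product is a cosine similarity.
    f = frame_features / (np.linalg.norm(frame_features) + 1e-12)
    lib = allowed_features / (np.linalg.norm(allowed_features, axis=1, keepdims=True) + 1e-12)
    return float(np.max(lib @ f))

def content_is_legal(frame_features, allowed_features, match_range=(0.8, 1.0)) -> bool:
    # The preset matching degree range (0.8, 1.0) is an assumed placeholder.
    degree = matching_degree(frame_features, allowed_features)
    return match_range[0] <= degree <= match_range[1]
```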
Preferably, in the step (2), the judging the validity of any one of the currently displayed content, the viewer identity and the viewer viewing action based on the evaluation information specifically includes,
when the evaluation information is the face recognition feature, if the face recognition feature matches a preset face feature library, it is determined that the identity of the viewer is legal; otherwise, it is determined that the identity of the viewer is not legal. Specifically, the preset face feature library registers and stores in advance the face feature information of a plurality of different specified legal users. By matching the face recognition feature against the preset face feature library, the learning machine enters the corresponding display operation state only when the current viewer is one of the specified legal users; otherwise the learning machine remains in a closed state, so that users other than the specified legal users cannot operate it.
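A minimal sketch of this identity check is given below, assuming the face recognition feature is an embedding vector and that matching against the preset face feature library is done by cosine similarity with an assumed threshold; none of these choices are specified by the patent:

```python
import numpy as np

def viewer_identity_is_legal(face_feature: np.ndarray,
                             registered_features: np.ndarray,
                             threshold: float = 0.6) -> bool:
    """Match a face recognition feature against a preset face feature library.

    face_feature       : (d,) embedding of the current viewer's face
    registered_features: (n, d) embeddings registered for the specified legal users
    threshold          : assumed cosine-similarity cut-off for a match
    """
    f = face_feature / (np.linalg.norm(face_feature) + 1e-12)
    lib = registered_features / (np.linalg.norm(registered_features, axis=1, keepdims=True) + 1e-12)
    return bool(np.max(lib @ f) >= threshold)
```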
Preferably, in the step (2), the judging the validity of any one of the currently displayed content, the viewer identity and the viewer viewing action based on the evaluation information specifically includes,
when the evaluation information is the face-display face distance value, if the face-display face distance value meets a preset distance range, it is determined that the viewing action of the viewer is legal; otherwise, it is determined that the viewing action of the viewer is not legal. Specifically, a face image of the viewer is obtained through a front camera of the learning machine. The smaller the distance between the viewer and the screen of the learning machine, the larger the proportion of the face image occupied by the face region of the viewer, and the larger the distance, the smaller that proportion. According to this principle, the face-display face distance value can be obtained by calculating the proportion of the face image occupied by the face region of the viewer, where the area of the viewer's face bounding box is W×H, with W and H respectively the width and the height of the face bounding box, and the whole image area of the face image is w×h, with w and h respectively the width and the height of the face image.
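The following sketch implements this proportion (W×H)/(w×h) as a proxy for the face-display face distance; the threshold used for the preset distance range is an assumed placeholder value:

```python
def face_display_distance_proxy(face_w: int, face_h: int,
                                img_w: int, img_h: int) -> float:
    """Proportion of the captured image occupied by the face bounding box.

    face_w, face_h: width W and height H of the viewer's face bounding box
    img_w,  img_h : width w and height h of the whole captured face image
    The larger the ratio, the closer the viewer is to the screen.
    """
    return (face_w * face_h) / (img_w * img_h)

def viewing_distance_is_legal(face_w, face_h, img_w, img_h,
                              max_ratio: float = 0.25) -> bool:
    # max_ratio stands in for the preset distance range; its value is assumed.
    return face_display_distance_proxy(face_w, face_h, img_w, img_h) <= max_ratio
```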
Preferably, in the step (2), the judging the validity of any one of the currently displayed content, the viewer identity and the viewer viewing action based on the evaluation information specifically includes,
when the evaluation information is the brightness difference value between the face region and the non-face region of the viewer, if the brightness difference value meets a preset brightness difference range, it is determined that the viewing action of the viewer is legal; otherwise, it is determined that the viewing action of the viewer is not legal. Specifically, a face image of the viewer is obtained through the front camera of the learning machine. The brightness a of the viewer's face region in the face image differs from the brightness b of the non-face region under different ambient illumination: under daytime light or the light of an illumination source, the brightness of the face region is substantially the same as that of the non-face region, whereas in a relatively dark environment the brightness of the face region differs noticeably from that of the non-face region. By calculating the brightness difference value between the face region and the non-face region of the viewer, it can therefore be accurately judged whether the current viewer is viewing under normal external environment brightness. The brightness a of the viewer's face region is calculated as a = L1/(W×H), where L1 is the fitted brightness distribution value of the face region and W and H are respectively the width and the height of the viewer's face bounding box; the brightness b of the non-face region is calculated as b = L2/(w×h - W×H), where L2 is the fitted brightness distribution value of the non-face region and w and h are respectively the width and the height of the face image.
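A minimal sketch of this computation is shown below; it interprets the fitted brightness distribution values L1 and L2 as the summed grayscale brightness of the face region and of the rest of the image, which is an assumption, as is the example brightness difference threshold:

```python
import numpy as np

def region_brightness(gray: np.ndarray, box):
    """Mean brightness of the face region (a) and the non-face region (b).

    gray: (h, w) grayscale face image captured by the front camera
    box : (x, y, W, H) face bounding box inside that image
    Computes a = L1 / (W*H) and b = L2 / (w*h - W*H), where L1 and L2 are
    taken here as the summed brightness of the face region and of the rest.
    """
    x, y, W, H = box
    h, w = gray.shape
    L_total = float(gray.sum())
    L1 = float(gray[y:y + H, x:x + W].sum())
    a = L1 / (W * H)
    b = (L_total - L1) / (w * h - W * H)
    return a, b

def viewing_brightness_is_legal(a: float, b: float, max_diff: float = 40.0) -> bool:
    # max_diff stands in for the preset brightness difference range (value assumed).
    return (a - b) <= max_diff
```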
Step (3): adjusting the current display state according to the judgment result about the legality of any one of the current display content, the identity of the viewer and the viewing action of the viewer.
Preferably, in the step (3), the adjusting the current display state according to the judgment result about the validity of any one of the current display content, the viewer identity and the viewer viewing action specifically includes,
when the current display content is determined to be legal, the current display operation is kept running normally, and when the current display content is determined not to be legal, all current display operations are terminated.
Preferably, in the step (3), the adjusting the current display state according to the judgment result about the validity of any one of the current display content, the viewer identity and the viewer viewing action specifically includes,
when the identity of the viewer is determined to be legal, a preset display duration is set for the current viewer, and all current display operations are terminated once the actual viewing duration of the current viewer exceeds the preset display duration; when the identity of the viewer is determined not to be legal, all current display operations are directly terminated.
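As an illustrative sketch only, the duration check could be kept with a timer started once the viewer's identity is confirmed as legal; the 30-minute preset display duration below is an assumed placeholder:

```python
import time

class ViewingSession:
    """Flag termination once a legal viewer exceeds the preset display duration."""

    def __init__(self, preset_display_seconds: float = 30 * 60):
        # 30 minutes is an assumed placeholder for the preset display duration.
        self.limit = preset_display_seconds
        self.start = time.monotonic()

    def should_terminate(self) -> bool:
        return (time.monotonic() - self.start) > self.limit
```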
Preferably, in the step (3), the adjusting the current display state according to the judgment result about the validity of any one of the current display content, the viewer identity and the viewer viewing action specifically includes,
when the viewing action of the viewer is determined to be legal, the display brightness is adjusted in real time, and when the viewing action of the viewer is determined not to be legal, all current display operations are directly terminated. Specifically, the ratio (W×H)/(w×h) of the area of the viewer's face bounding box to the whole image area of the face image is calculated; if the ratio (W×H)/(w×h) is greater than a preset ratio threshold M, all current display operations are directly terminated, and if the ratio (W×H)/(w×h) is less than or equal to the preset ratio threshold M, the display brightness is adjusted in real time. In addition, the brightness difference value a - b between the brightness a of the viewer's face region and the brightness b of the non-face region in the face image can be calculated; if the brightness difference value a - b is greater than a preset brightness threshold N, all current display operations are directly terminated, and if the brightness difference value a - b is less than or equal to the preset brightness threshold N, the display brightness is adjusted in real time.
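The decision logic can be summarised in the following sketch, which combines the two checks described above; the threshold values for M and N are assumed placeholders, and the returned action labels are illustrative names only:

```python
def adjust_display(face_ratio: float, brightness_diff: float,
                   M: float = 0.25, N: float = 40.0) -> str:
    """Decide the display adjustment from the two viewing-action checks.

    face_ratio     : (W*H)/(w*h), proportion of the image occupied by the face box
    brightness_diff: a - b, face-region brightness minus non-face-region brightness
    M, N           : preset ratio and brightness thresholds (values assumed)
    """
    if face_ratio > M or brightness_diff > N:
        return "terminate_all_display"      # viewer too close, or environment too dark
    return "adjust_brightness_in_real_time"
```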
Fig. 2 is a schematic structural diagram of a learning machine with an eye protection mode according to an embodiment of the present invention. The learning machine with the eye protection mode comprises an attribute information acquisition module, an evaluation information determination module, a legality judgment module and a display state adjustment module; wherein,
the attribute information acquisition module is used for acquiring attribute information about any one of the display content of the learning machine, the corresponding viewer and the external environment;
the evaluation information determining module is used for determining evaluation information about the current display state of the learning machine according to the attribute information;
the legality judging module is used for judging the legality of any one of the currently displayed content, the identity of the viewer and the viewing action of the viewer of the learning machine according to the evaluation information;
the display state adjusting module is used for adjusting the current display state of the learning machine according to the judgment result about the legality of any one of the current display content, the identity of the viewer and the viewing action of the viewer.
Preferably, the attribute information acquisition module comprises a display content acquisition submodule, a face image acquisition submodule and a brightness data acquisition submodule;
preferably, the display content obtaining sub-module is configured to obtain current display text and/or image information as attribute information of the display content;
preferably, the facial image acquisition sub-module is configured to acquire facial images of different angles with respect to a viewer as the attribute information of the viewer;
preferably, the luminance data acquisition sub-module is configured to acquire luminance data on each of a face region and a non-face region of the viewer as the attribute information of the external environment;
preferably, the evaluation information determination module includes a first evaluation information determination sub-module, a second evaluation information determination sub-module, and a third evaluation information determination sub-module;
preferably, the first evaluation information determining sub-module is configured to perform matching processing on the symbol pixel feature corresponding to the display text and/or image information and a preset display content feature through a first preset evaluation model, so as to obtain a matching degree between the symbol pixel feature and the preset display content feature as the evaluation information;
preferably, the second evaluation information determination submodule is configured to convert, as the evaluation information, face-related features corresponding to the face images at different angles with respect to the viewer into face recognition features or face-display face distance values by a second preset evaluation model;
preferably, the third evaluation information determination submodule is configured to convert, as the evaluation information, the luminance-related feature corresponding to the luminance data of each of the viewer face region and the non-face region into a luminance difference value between the viewer face region and the non-face region by using a third preset evaluation model;
preferably, the validity judging module comprises a first validity judging submodule, a second validity judging submodule, a third validity judging submodule and a fourth validity judging submodule;
preferably, the first validity judging sub-module is configured to judge whether the currently displayed content of the learning machine is valid according to a matching degree between the symbol pixel feature and the preset display content feature;
preferably, the second validity judging submodule is configured to judge whether the identity of the viewer corresponding to the learning machine is valid according to the face recognition feature;
preferably, the third validity judging submodule is configured to judge whether the viewing action of the viewer corresponding to the learning machine is valid according to that the face-display face distance value satisfies a preset distance range;
preferably, the fourth validity judging submodule is configured to judge whether the viewing action of the viewer corresponding to the learning machine is valid according to the brightness difference value between the face region and the non-face region of the viewer;
preferably, the display state adjusting module comprises a display operation adjusting submodule and a display brightness adjusting submodule;
preferably, the display operation adjustment submodule is used for directly terminating all current display operations of the learning machine when the current display content is determined not to be legal, or the identity of the viewer is determined not to be legal, or the viewing action of the viewer is determined not to be legal;
preferably, the display brightness adjusting sub-module is used for adjusting the display brightness of the learning machine when the viewing action of the viewer is determined to be legal.
It can be seen from the above embodiments that the eye protection display method and the learning machine with the eye protection mode adaptively adjust the current display state by acquiring attribute information about any one of the current display content, the viewer and the external environment. The adaptive adjustment not only covers different display adjustment modes, namely deciding whether to terminate the display operation, adjusting the display brightness and adjusting the display duration, but also allows targeted adjustment of the display mode for the viewing action of the current viewer and for the external environment. A single display adjustment mode applied uniformly to all viewers is thereby avoided, and the learning machine can apply an appropriate eye protection mode to different viewers. In addition, the method and the learning machine need no additional external device such as a distance sensor: the relevant information about the viewer and the current environment is obtained directly through the camera built into the learning machine and processed by an adaptive algorithm to determine a display mode suitable for the current viewer, which effectively reduces the cost of the learning machine and improves its operating efficiency.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.