CN110490140A - Screen display state discrimination method, device, computer equipment and storage medium - Google Patents

Screen display state discrimination method, device, computer equipment and storage medium

Info

Publication number
CN110490140A
Authority
CN
China
Prior art keywords
screen
image
human body
working region
display state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910773812.2A
Other languages
Chinese (zh)
Inventor
周康明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd
Priority to CN201910773812.2A
Publication of CN110490140A
Legal status: Pending (current)

Abstract

This application relates to a screen display state discrimination method, device, computer equipment and storage medium. The method includes: obtaining a working region image; detecting whether a human body feature is present in the working region image; when no human body feature is detected, extracting a screen region image from the working region image; and discriminating the screen display state according to the pixel information of the screen region image. The method improves the efficiency of identifying the screen display state.

Description

Screen display state discrimination method, device, computer equipment and storage medium
Technical field
This application relates to the field of computer technology, and more particularly to a screen display state discrimination method, device, computer equipment and storage medium.
Background technique
With the development of information technology, electronic office work has become increasingly common and plays an ever more important role in daily life. Large amounts of data are stored in electronic equipment, much of it confidential, so ensuring the security of the data in electronic equipment is an important task in electronic office work.
To ensure the security of the data in electronic equipment, the equipment generally needs to be locked after a staff member leaves the workstation, so as to prevent the leakage of the information stored on it. In the conventional approach, whether the screen has been locked after a staff member leaves the workstation is typically checked manually. A manual check cannot monitor the display state of the electronic screen in real time, which reduces the efficiency of monitoring the display state of the electronic screen.
Summary of the invention
In view of the above technical problems, it is necessary to provide a screen display state discrimination method, device, computer equipment and storage medium that can improve the efficiency of discriminating the screen display state.
A screen display state discrimination method, the method comprising:
obtaining a working region image;
detecting whether a human body feature is present in the working region image;
when the human body feature is not detected, extracting a screen region image from the working region image; and
discriminating the screen display state according to pixel information of the screen region image.
In one embodiment, the method further comprises:
when the human body feature is detected, obtaining a personnel human body feature associated with the working region image;
calculating a matching degree between the human body feature and the personnel human body feature; and
when the matching degree is less than a preset matching degree threshold, extracting the screen region image from the working region image, and discriminating the screen display state according to the pixel information of the screen region image.
In one embodiment, after discriminating the screen display state according to the pixel information of the screen region image, the method comprises:
when the screen display state is determined to be the bright-screen state, changing the screen display state to a screen protection state.
In one embodiment, discriminating the screen display state according to the pixel information of the screen region image comprises:
obtaining a total pixel count of the screen region image;
converting the screen region image into a binary image, and calculating an effective pixel count of the binary image;
calculating a ratio of the effective pixel count to the total pixel count; and
when the ratio is greater than a preset pixel threshold, determining that the screen display state is the bright-screen state, and when the ratio is not greater than the preset pixel threshold, determining that the screen display state is the screen protection state.
In one embodiment, the method further comprises:
obtaining a video stream, and extracting a plurality of working region images from the video stream at a preset frequency; and
when the human body feature is not detected in a plurality of consecutive working region images within a preset duration, and the screen display state is determined to be the bright-screen state, changing the screen display state to the screen protection state.
In one embodiment, detecting whether the human body feature is present in the working region image comprises:
inputting the working region image into a human body posture machine learning model;
obtaining key point coordinates according to the human body posture machine learning model;
connecting the key point coordinates to obtain a contour map; and
when the similarity between the contour map and a human body contour map exceeds a preset similarity threshold, determining that the human body feature is present in the working region image, and when the similarity is not greater than the preset similarity threshold, determining that the human body feature is not present in the working region image.
In one embodiment, extracting the screen region image from the working region image comprises:
inputting the working region image into a target detection machine learning model to obtain location information and matching probabilities of matching regions under a plurality of scale matching modes;
obtaining the location information of the matching region with the largest matching probability; and
extracting the screen region image corresponding to the location information from the working region image.
A screen display state discrimination device, the device comprising:
an image acquisition module, configured to obtain a working region image;
a human body feature detection module, configured to detect whether a human body feature is present in the working region image;
a screen image extraction module, configured to extract a screen region image from the working region image when the human body feature is not detected; and
a first display state discrimination module, configured to discriminate the screen display state according to pixel information of the screen region image.
Computer equipment, comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the above method when executing the computer program.
A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the above method.
According to the above screen display state discrimination method, device, computer equipment and storage medium, a working region image is obtained and human body feature detection is first performed on it; when no human body feature is present, a screen display state detection instruction is triggered, and the screen display state is discriminated from the pixel information of the screen. Automatic detection and discrimination of the screen display state is thereby realized, which improves the efficiency of identifying the screen display state.
Detailed description of the invention
Fig. 1 is an application scenario diagram of the screen display state discrimination method in one embodiment;
Fig. 2 is a schematic flowchart of the screen display state discrimination method in one embodiment;
Fig. 3 is a schematic flowchart of the screen display state discrimination method in another embodiment;
Fig. 4 is a structural block diagram of the screen display state discrimination device in one embodiment;
Fig. 5 is an internal structure diagram of the computer equipment in one embodiment.
Specific embodiment
In order to make the objects, technical solutions and advantages of the application clearer, the application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the application and are not intended to limit it.
The screen display state discrimination method provided by the application can be applied in the application environment shown in Fig. 1. The server 104 communicates with the terminal 102 through a network. The server 104 obtains a captured working region image and detects whether a human body feature is present in the working region image; when the human body feature is not detected, it extracts a screen region image from the working region image and discriminates the screen display state according to the pixel information of the screen region image. When the server 104 determines that the screen display state is the bright-screen state, it generates a screen protection instruction and sends it to the terminal 102, and the terminal 102 changes its screen display state to the screen protection state according to the screen protection instruction.
The terminal 102 may be, but is not limited to, a personal computer, a laptop, a smartphone, a tablet computer or a portable wearable device, and the server 104 may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in Fig. 2, a screen display state discrimination method is provided. The method can be applied to a terminal or to a server; the following description takes its application to the server 104 as an example. The method specifically includes the following steps:
Step S210: obtain a working region image.
The working region is the area in which a staff member works. The working region image is a digital image of the working region obtained by an imaging device. Specifically, the working region image may be an image captured in real time, or an image frame extracted from a video stream captured in real time.
Step S220: detect whether a human body feature is present in the working region image.
A human body feature is a feature that characterizes the human figure and can be used to identify a human body. For example, the human body feature may be a skeleton feature or a hand feature of the human body; in another embodiment, it may also be a facial feature. The server obtains the working region image and performs feature extraction on it, which may specifically be done with a feature extraction algorithm: feature points are first extracted from the working region image with the feature extraction algorithm, and a feature recognition algorithm then judges whether the extracted feature points contain a human body feature. As long as the extracted feature points include points that can characterize a human body, the working region image is judged to contain a human body feature, and it is further judged that a staff member is present in the working region.
In another embodiment, a human body feature detection algorithm directly detects human body features in the working region image. When a human body feature is detected, the working region image is judged to contain a staff member; when no human body feature is detected, the working region image is judged not to contain a human body feature.
In one embodiment, the working region image contains at least one workstation; in general, one workstation corresponds to one computer used for work and one staff member. When the working region image contains a single workstation, whether the working region corresponding to that workstation contains a human body feature is detected. When the working region image contains multiple workstations, whether the working region corresponding to each workstation contains a human body feature is detected separately, and the workstations where no human body feature is detected are taken as target workstations.
Step S230: when the human body feature is not detected, extract a screen region image from the working region image.
The screen region may be the image region corresponding to the display of the computer equipment and can be used to show information. Specifically, when the display is in a working state, the screen region is in the bright-screen state and can show information; when the display is in a non-working state, the screen region is in a protection state, in which the displayed information is protected from leakage, improving information security. A screen in the protection state is one whose information cannot be viewed directly, for example a screen that is turned off or locked, or a screen covered by a shield.
Specifically, when the feature points recognized in the working region image do not contain a human body feature, or when no human body feature is detected in the working region image, the working region image contains no staff member, and a screen region image is extracted from the working region image. When the working region image contains multiple workstations, the screen region image corresponding to each target workstation is extracted.
Step S240: discriminate the screen display state according to the pixel information of the screen region image.
A pixel is the basic unit of an image, and the pixel information contains the luminance and chrominance values of the image. When no staff member is present in the working region, the screen region image is extracted from the working region image, and the display state of the screen region image is discriminated using its pixel information. In one embodiment, the display state of the screen can be discriminated from the luminance values in the pixel information of the screen region image: specifically, when the luminance value is greater than a preset threshold, the screen display state is determined to be the bright-screen state; otherwise, it is determined to be the protection state.
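A minimal sketch of the luminance-based check described above, written in Python with OpenCV; the threshold value of 40, the use of the mean grayscale value as "the luminance value", and the function name is_bright_by_luminance are illustrative assumptions rather than values given by the application.

```python
import cv2
import numpy as np

def is_bright_by_luminance(screen_region_bgr, luminance_threshold=40):
    """Judge the screen display state from the luminance in the pixel information:
    the screen region is taken to be in the bright-screen state when its mean
    grayscale luminance exceeds a preset threshold (40 is an assumed value)."""
    gray = cv2.cvtColor(screen_region_bgr, cv2.COLOR_BGR2GRAY)
    return float(np.mean(gray)) > luminance_threshold
```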
In the above screen display state discrimination method, human body feature detection is first performed on the obtained working region image; when no human body feature is present, a screen display state detection instruction is triggered, and the screen display state is discriminated from the pixel information of the screen. Automatic detection and discrimination of the screen display state is thereby realized, which improves the efficiency of identifying the screen display state.
In one embodiment, referring to Fig. 3, which is a schematic flowchart of the screen display state discrimination method in another embodiment, the method includes:
Step S210: obtain a working region image.
Step S220: detect whether a human body feature is present.
Step S222: when the human body feature is detected, obtain a personnel human body feature associated with the working region image.
The personnel human body feature is the human body feature of the staff member associated with the working region image. In general, one workstation corresponds to one staff member; in other embodiments, one workstation may also correspond to multiple staff members. The staff members associated with a workstation are the personnel authorized to interact with the display device at that workstation.
In one embodiment, the staff member corresponding to each workstation is registered in advance: the human body feature of the staff member is collected as the personnel human body feature, each workstation is bound to its personnel human body feature, and the binding information is stored in a database. Specifically, the server extracts workstation information from the working region image, looks up the associated personnel human body feature in the database according to the workstation information, and calls a matching algorithm to match the found personnel human body feature against the detected human body feature. The workstation information may be a workstation number, for example 001, which is used to look up the associated personnel human body feature in the database.
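A minimal sketch of the enrolment binding and lookup just described, assuming an in-memory dict in place of the database and treating the stored feature as an opaque object; the names enroll and lookup_personnel_feature are hypothetical.

```python
# Workstation number -> enrolled personnel human body feature. A plain dict stands
# in for the database mentioned above; in practice this would be a persistent store.
personnel_features = {}

def enroll(workstation_no, feature):
    """Bind a workstation number (e.g. "001") to its enrolled personnel feature."""
    personnel_features[workstation_no] = feature

def lookup_personnel_feature(workstation_no):
    """Return the personnel human body feature bound to a workstation, or None."""
    return personnel_features.get(workstation_no)
```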
Step S224: calculate the matching degree between the human body feature and the personnel human body feature.
Specifically, the server calls a matching algorithm to match the human body feature detected in the working region image against the personnel human body feature of the authorized personnel and obtain a matching degree. The matching may be performed on the collected hand features, skeleton features or facial features, which is not limited here.
Step S226: compare the matching degree with a preset matching degree threshold; when the matching degree is less than the preset matching degree threshold, perform step S230 of extracting the screen region image from the working region image.
Specifically, the human body feature and the personnel human body feature are input into the matching algorithm, and the matching algorithm outputs the matching degree of the two. The matching algorithm may be the scale-invariant feature transform (SIFT) algorithm or the speeded up robust features (SURF) algorithm, among others. When the matching degree obtained from the matching algorithm is greater than the preset matching degree threshold, the detected human body feature is determined to belong to an authorized staff member; when the matching degree is not greater than the preset matching degree threshold, the detected human body feature is determined to belong to an unauthorized person, and the screen region image is then extracted from the working region image.
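A minimal sketch of a SIFT-based matching degree, assuming OpenCV. The application does not specify how the matching degree is derived from the feature matches, so the share of keypoints that survive Lowe's ratio test is used here as an assumed score in [0, 1], and the 0.75 ratio is likewise an assumption.

```python
import cv2

def matching_degree(detected_crop_bgr, enrolled_crop_bgr, ratio=0.75):
    """Match the detected human body feature region against the enrolled personnel
    feature region with SIFT and return the fraction of good matches."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(cv2.cvtColor(detected_crop_bgr, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = sift.detectAndCompute(cv2.cvtColor(enrolled_crop_bgr, cv2.COLOR_BGR2GRAY), None)
    if des1 is None or des2 is None or len(kp1) == 0:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = sum(1 for pair in matcher.knnMatch(des1, des2, k=2)
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)
    return good / len(kp1)
```

If the returned value is not greater than the preset matching degree threshold, step S230 of extracting the screen region image is triggered as described above.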
Step S240: discriminate the screen display state according to the pixel information of the screen region image.
For this step, reference may be made to the description of discriminating the screen display state according to the pixel information of the screen region image in the above embodiment, and details are not repeated here.
In this embodiment, after a human body feature is detected in the working region image, the detected feature is further discriminated: the detected human body feature is matched against the personnel human body feature of the authorized personnel, and whether the person in the working region is authorized is judged from the matching result. When the person is not authorized, the operation of extracting the screen region image is triggered and the display state of the screen region is discriminated, which further improves the security of the information.
In one embodiment, after discriminating the screen display state according to the pixel information of the screen region image, the method comprises: when the screen display state is determined to be the bright-screen state, changing the screen display state to the screen protection state.
The display state of the screen is the state in which the screen shows information. In one embodiment, the display state of the screen can be divided into a bright-screen state and a screen protection state. In the bright-screen state the screen has luminance information and can show information; in the screen protection state the displayed content is protected, preventing the information on the screen from being stolen by unauthorized personnel. Specifically, the screen protection state may include a screen-off state, and a state in which the screen is covered by an external shield so that it is protected.
In one embodiment, a screen in the locked state still has luminance information and is discriminated as being in the bright-screen state. In other embodiments, the screen display information shown in the locked state is stored in a lock-screen information database in advance; when the server discriminates that the screen display state is the bright-screen state, it obtains the screen display information shown in the bright-screen state, and when that screen display information is matched in the lock-screen information database, the screen display state is determined to be the protection state. The screen display information may be, for example, a screensaver image or text carrying a lock-screen identifier.
In one embodiment, when no human body feature is detected in the working region image and the screen region image is discriminated to be in the bright-screen state, the screen display state is changed to the screen protection state. In another embodiment, when the human body feature detected in the working region corresponds to an unauthorized person, it is judged that the person at the workstation is unauthorized; when the screen display state is further discriminated to be the bright-screen state, the screen display state is changed to the screen protection state, so that the information shown on the screen is protected from leakage, further improving information security.
In this embodiment, when it is determined that the working region image contains no staff member, or contains an unauthorized person, while the screen is in the bright-screen state, the screen display state is changed to the screen protection state in time, so that the information is protected from being stolen.
In one embodiment, discriminating the screen display state according to the pixel information of the screen region image comprises: obtaining the total pixel count of the screen region image; converting the screen region image into a binary image and calculating the effective pixel count of the binary image; calculating the ratio of the effective pixel count to the total pixel count; when the ratio is greater than a preset pixel threshold, determining that the screen display state is the bright-screen state, and when the ratio is not greater than the preset pixel threshold, determining that the screen display state is the screen protection state.
A pixel is the smallest image unit, and an image is composed of pixels. Obtaining the effective information in an image is the process of obtaining the pixel information of the image. The server obtains the total pixel count of the screen region image; in one embodiment, the total pixel count is the product of the number of pixels along the length of the screen region image and the number of pixels along its width.
A binary image is an image after binarization, in which the luminance value of each pixel is either 0 (black) or 255 (white), so that the whole image appears only in black and white. Specifically, the obtained screen region image is first converted into a grayscale image, and the grayscale image is then binarized to obtain the binary image corresponding to the screen region image.
In one embodiment, the threshold for binarizing the grayscale image may be set between 60 and 130, for example to 80: luminance values greater than 80 in the grayscale image are set to 255, and values less than 80 are set to 0. In other embodiments the choice of threshold is not limited and may be set according to the specific situation.
An effective pixel in the binary image is a pixel carrying luminance information. Specifically, the pixels whose luminance value is 255 in the binary image are extracted as effective pixels, the proportion of effective pixels among all pixels is calculated, and the luminance information in the current screen region image is assessed according to this ratio.
In this embodiment, the screen region image is converted into a binary image, so that judging the screen display state in the screen region image is transformed into computing the proportion of effective pixels among all pixels of the binary image; the display state of the current screen region image is determined according to this proportion, which improves the efficiency of identifying the display state of the screen region image.
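A minimal sketch of this binarization-and-ratio check, assuming OpenCV; the binarization threshold of 80 follows the 60 to 130 range given above, while the pixel ratio threshold of 0.05 and the function name are illustrative assumptions.

```python
import cv2

def is_bright_screen(screen_region_bgr, binarize_threshold=80, pixel_ratio_threshold=0.05):
    """Discriminate the screen display state from the pixel information of the
    screen region image: convert to grayscale, binarize, count the effective
    pixels (value 255) and compare their share of the total pixel count with a
    preset pixel threshold."""
    gray = cv2.cvtColor(screen_region_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, binarize_threshold, 255, cv2.THRESH_BINARY)
    total_pixels = binary.shape[0] * binary.shape[1]     # length times width
    effective_pixels = int(cv2.countNonZero(binary))     # pixels with value 255
    return effective_pixels / total_pixels > pixel_ratio_threshold   # True: bright-screen state
```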
In one embodiment, the method further comprises: obtaining a video stream and extracting a plurality of working region images from the video stream at a preset frequency; and, when no human body feature is detected in a plurality of consecutive working region images within a preset duration and the screen display state is discriminated to be the bright-screen state, changing the screen display state to the screen protection state.
A video stream is transmitted video data. In one embodiment, the video stream may be the video of the working region monitored in real time. The video stream contains multiple image frames, and the server extracts multiple image frames from the video stream at a preset sampling frequency as working region images. For example, the server extracts image frames from the surveillance video at a frequency of 60 frames per second and performs real-time human body feature detection on each extracted frame, detecting whether a human body feature is present in each frame; when no human body feature is detected in a frame, the screen region image in that frame is extracted, and the screen display state is further discriminated from the screen region image. When no human body feature is detected in a plurality of consecutive working region images within a preset duration, and the screen display state in all of those consecutive working region images is determined to be the bright-screen state, the display state of the screen is changed to the screen protection state. The preset duration should be set so that the screen is placed in the protection state promptly after a staff member leaves the workstation, before any information can leak. The preset duration can be set dynamically according to the nature of the work and the confidentiality of the activity, and may be, for example, 1 minute or 5 minutes.
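A minimal sketch of sampling working region images from a video stream at a preset frequency, assuming OpenCV; the stream source, the sampling rate and the fallback frame rate of 25 fps are illustrative assumptions.

```python
import cv2

def sample_working_region_images(stream_source, sample_fps=1.0):
    """Yield one frame roughly every 1/sample_fps seconds of video from a stream
    (a camera index, file path or RTSP URL), to be used as working region images."""
    cap = cv2.VideoCapture(stream_source)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 25.0        # streams may report 0
    step = max(int(round(native_fps / sample_fps)), 1)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            yield frame
        index += 1
    cap.release()
```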
In one embodiment, the preset duration is greater than the sampling interval at which image frames are extracted from the video stream, so that at least one working region image is obtained within the preset duration. In another embodiment, at least two working region images are obtained within the preset duration, to guard against a misjudgment in a single working region image detection, for example judging a working region that does contain a staff member as containing none and changing the screen display state to the screen protection state, which would affect the staff member's work efficiency.
In this embodiment, working region images are obtained in real time from the surveillance video at a certain frequency, and human body feature detection and screen display state discrimination are performed on the obtained working region images, so that the surveillance video is processed in real time. When no staff member is present in the working region for a continuous period of time while the screen display state is the bright-screen state, the screen display state is changed to the screen protection state, protecting the information shown on the screen.
Moreover, the display state of the screen is changed only after no staff member has been present in the working region for some time while the screen remains bright. This prevents the display state from being changed when a staff member briefly leaves the workstation in a situation that would not cause information leakage, which would inconvenience the staff member.
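A minimal sketch of this debouncing logic: the screen protection instruction is issued only after every sampled frame in an unbroken run covering the preset duration shows no human body feature and a bright screen. The default duration of 60 seconds and the minimum of two frames are illustrative assumptions.

```python
import time

class ScreenLockDecider:
    """Track per-frame observations and decide when a bright, unattended screen
    should be changed to the screen protection state."""

    def __init__(self, preset_duration_s=60.0, min_frames=2):
        self.preset_duration_s = preset_duration_s
        self.min_frames = min_frames        # guard against a single misdetection
        self.run_start = None               # start of the current "absent and bright" run
        self.frames_in_run = 0

    def update(self, human_present, screen_bright, now=None):
        """Record one sampled frame; return True when the protection state should be set."""
        now = time.time() if now is None else now
        if human_present or not screen_bright:
            # run broken: someone is at the workstation or the screen is already protected
            self.run_start = None
            self.frames_in_run = 0
            return False
        if self.run_start is None:
            self.run_start = now
        self.frames_in_run += 1
        return (now - self.run_start >= self.preset_duration_s
                and self.frames_in_run >= self.min_frames)
```

In use, the server would call update() once per sampled frame with the results of the human body feature detection and the bright-screen check, and send the screen protection instruction to the terminal when it returns True.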
In one embodiment, detecting whether a human body feature is present in the working region image comprises: inputting the working region image into a human body posture machine learning model; obtaining key point coordinates according to the human body posture machine learning model; connecting the key point coordinates to obtain a contour map; when the similarity between the contour map and a human body contour map exceeds a preset similarity threshold, determining that a human body feature is present in the working region image, and when the similarity is not greater than the preset similarity threshold, determining that no human body feature is present in the working region image.
The goal of posture estimation with a human body posture machine learning model is to depict the shape of the human body in an image or video. Human body posture machine learning models include the DensePose model, the OpenPose model, the Realtime Multi-Person Pose Estimation model, the AlphaPose model, the Human Body Pose Estimation model and the DeepPose model, among others.
The key point coordinates in the working region image are obtained by the human body posture learning model, where the key point coordinates may be the coordinates of the key points that make up the human body posture.
In one embodiment, the key point coordinates include skeleton joint point coordinates and hand joint point coordinates. The skeleton joint point coordinates (bpoints) include 25 major joint points of the human skeleton, such as the nose, neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, mid-hip, right hip, right knee, right ankle, left hip, left knee, left ankle, right eye, left eye, right ear, left ear, left big toe, left small toe, left heel, right big toe, right small toe and right heel. The hand joint point coordinates (hpoints) include 42 major joint points of both hands, 21 for each hand. Taking the right hand as an example, they include the palm root, the palm center, the thumb root, thumb middle and thumb tip, and, for each of the index finger, middle finger, ring finger and little finger, the finger root, finger base, finger middle and finger tip.
Specifically, the server obtains a video stream, extracts image frames from the video stream at a preset frequency, extracts an original working region image from each image frame, and normalizes the original working region image to obtain the working region image. The normalization processes the original working region image into an image size suitable for the human body posture machine learning model. In one embodiment, the size of the training set images of the human body posture learning model is obtained, and the original working region image is resized to the size of the training set images to obtain the working region image.
The working region image is input into the OpenPose human body posture machine learning model, the key point coordinates are obtained according to the human body posture machine learning model, and the key point coordinates are connected to obtain a contour map. When the similarity between the contour map and the human body contour map exceeds the preset similarity threshold, it is determined that a human body feature is present in the working region image; when the similarity is not greater than the preset similarity threshold, it is determined that no human body feature is present in the working region image. The preset similarity threshold may be, for example, greater than 80%, which is not limited here.
In this embodiment, the key point coordinates are automatically recognized by the human body posture learning model, and whether they belong to a human body feature is discriminated from the recognized key point coordinates, which improves the efficiency and accuracy of human body feature recognition.
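A minimal sketch of the decision step on the output of a BODY_25-style pose model such as OpenPose. Instead of rendering a contour map and comparing it with a reference human contour map, the fraction of confidently detected joints is used here as a stand-in similarity score; the thresholds and the input layout are assumptions.

```python
import numpy as np

# BODY_25 joint names in OpenPose output order, for readability only.
BODY_25 = [
    "nose", "neck", "r_shoulder", "r_elbow", "r_wrist", "l_shoulder", "l_elbow",
    "l_wrist", "mid_hip", "r_hip", "r_knee", "r_ankle", "l_hip", "l_knee",
    "l_ankle", "r_eye", "l_eye", "r_ear", "l_ear", "l_big_toe", "l_small_toe",
    "l_heel", "r_big_toe", "r_small_toe", "r_heel",
]

def human_feature_present(keypoints, confidence_threshold=0.3, similarity_threshold=0.8):
    """Decide whether a human body feature is present from pose-model output.

    `keypoints` is assumed to be an (N, 25, 3) array of (x, y, confidence) rows,
    one row of 25 joints per detected person. A person counts as a human body
    feature when at least `similarity_threshold` of the joints are detected with
    confidence at or above `confidence_threshold`."""
    keypoints = np.asarray(keypoints, dtype=float)
    if keypoints.size == 0:
        return False
    for person in keypoints.reshape(-1, len(BODY_25), 3):
        detected_fraction = float((person[:, 2] >= confidence_threshold).mean())
        if detected_fraction >= similarity_threshold:
            return True
    return False
```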
In one embodiment, extracting the screen region image from the working region image comprises: inputting the working region image into a target detection machine learning model to obtain the location information and matching probabilities of matching regions under a plurality of scale matching modes; obtaining the location information of the matching region with the largest matching probability; and extracting the screen region image corresponding to the location information from the working region image.
In order to extract the screen region image from the working region image more accurately, the working region image is first input into the target detection model, and images of the working region image at multiple scales are obtained according to the target detection model. The degree of blur of the multi-scale images increases with the scale, simulating how scenery forms on the retina from near to far; the images contain both global information and local details, so that more comprehensive information can be extracted.
By considering multiple scales when the size of the screen region image is unknown, the best scale for obtaining the screen region image is found, which improves the efficiency and accuracy of extracting the screen region image from the working region image.
From the obtained multi-scale images, the location information of the region containing the object to be matched in each scale image and the matching probability of that region are obtained. Specifically, the location information and matching probability of the screen region in each scale image are obtained, where the location information may be expressed as coordinates. The server extracts the image corresponding to the location information with the largest matching probability as the screen region; specifically, the screen region image is extracted from the working region image according to the coordinates corresponding to that location information. The target detection machine learning model may be a convolutional neural network model or a YOLO model, among others, where the YOLO models include the YOLOv1, YOLOv2 and YOLOv3 models.
In one embodiment, the screen region image is obtained with the YOLOv3 model. The working region image is first normalized to an image size suitable for the YOLOv3 model, and the normalized working region image is input into the YOLOv3 model. The YOLOv3 model extracts images of the working region image at multiple scales and obtains the location information and matching probabilities of the matching regions under the plurality of scale matching modes; the location information of the matching region with the largest matching probability is obtained, and the screen region image corresponding to that location information is extracted from the working region image.
In this embodiment, the screen region image is automatically recognized and extracted by the target detection machine learning model, realizing automatic processing and detection of the data and improving the efficiency and accuracy of the image processing.
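A minimal sketch of YOLOv3-based screen region extraction with OpenCV's DNN module. The file paths are assumptions, and a COCO-pretrained network with its "tvmonitor" class is used as a stand-in for a detector trained on the screen class; the 416x416 resize corresponds to the normalization step mentioned above.

```python
import cv2

# Assumed files: a Darknet YOLOv3 config/weights pair and the matching class-name list.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
with open("coco.names") as f:
    class_names = [line.strip() for line in f]
screen_class_id = class_names.index("tvmonitor")   # stand-in for a dedicated screen class

def extract_screen_region(work_region_bgr, conf_threshold=0.3):
    """Return the sub-image of the matching region with the largest matching
    probability for the screen class, or None if nothing exceeds the threshold."""
    h, w = work_region_bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(work_region_bgr, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)        # normalization step
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())    # one output per scale
    best_prob, best_box = conf_threshold, None
    for output in outputs:
        for detection in output:
            prob = float(detection[4]) * float(detection[5 + screen_class_id])
            if prob > best_prob:
                cx, cy = detection[0] * w, detection[1] * h
                bw, bh = detection[2] * w, detection[3] * h
                best_prob = prob
                best_box = (int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))
    if best_box is None:
        return None
    x, y, bw, bh = best_box
    return work_region_bgr[max(y, 0):y + bh, max(x, 0):x + bw]
```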
It should be understood that, although the steps in the flowcharts of Figs. 2 and 3 are shown in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2 and 3 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is also not necessarily sequential, and they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 4, a screen display state discrimination device is provided, comprising an image acquisition module 410, a human body feature detection module 420, a screen image extraction module 430 and a first display state discrimination module 440.
The image acquisition module 410 is configured to obtain a working region image.
The human body feature detection module 420 is configured to detect whether a human body feature is present in the working region image.
The screen image extraction module 430 is configured to extract a screen region image from the working region image when the human body feature is not detected.
The first display state discrimination module 440 is configured to discriminate the screen display state according to the pixel information of the screen region image.
In one embodiment, the device further comprises:
a personnel feature acquisition module, configured to obtain a personnel human body feature associated with the working region image when the human body feature is detected;
a matching degree calculation module, configured to calculate the matching degree between the human body feature and the personnel human body feature; and
a second display state discrimination module, configured to extract the screen region image from the working region image when the matching degree is less than the preset matching degree threshold, and to discriminate the screen display state according to the pixel information of the screen region image.
In one embodiment, the device further comprises:
a display state changing module, configured to change the screen display state to the screen protection state when the screen display state is determined to be the bright-screen state.
In one embodiment, the first display state discrimination module 440 comprises:
a total pixel count acquisition unit, configured to obtain the total pixel count of the screen region image;
an effective pixel count calculation unit, configured to convert the screen region image into a binary image and calculate the effective pixel count of the binary image;
a ratio calculation unit, configured to calculate the ratio of the effective pixel count to the total pixel count; and
a first display state discrimination unit, configured to determine that the screen display state is the bright-screen state when the ratio is greater than the preset pixel threshold, and to determine that the screen display state is the screen protection state when the ratio is not greater than the preset pixel threshold.
In one embodiment, the device further comprises:
a multi-image acquisition module, configured to obtain a video stream and extract a plurality of working region images from the video stream at the preset frequency; and
a third display state discrimination module, configured to change the screen display state to the screen protection state when the human body feature is not detected in a plurality of consecutive working region images within the preset duration and the screen display state is determined to be the bright-screen state.
In one embodiment, the human body feature detection module 420 comprises:
a first input unit, configured to input the working region image into the human body posture machine learning model;
a coordinate acquisition unit, configured to obtain key point coordinates according to the human body posture machine learning model;
a contour map acquisition unit, configured to connect the key point coordinates to obtain a contour map; and
a second display state discrimination unit, configured to determine that a human body feature is present in the working region image when the similarity between the contour map and the human body contour map exceeds the preset similarity threshold, and to determine that no human body feature is present in the working region image when the similarity is not greater than the preset similarity threshold.
In one embodiment, the screen image extraction module 430 comprises:
a second input unit, configured to input the working region image into the target detection machine learning model to obtain the location information and matching probabilities of matching regions under the plurality of scale matching modes;
a position acquisition unit, configured to obtain the location information of the matching region with the largest matching probability; and
a screen region image acquisition unit, configured to extract the screen region image corresponding to the location information from the working region image.
For the specific limitations of the screen display state discrimination device, reference may be made to the limitations of the screen display state discrimination method above, which are not repeated here. The modules in the above screen display state discrimination device may be implemented in whole or in part by software, hardware or a combination thereof. The above modules may be embedded in, or independent of, a processor of the computer equipment in the form of hardware, or stored in a memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, computer equipment is provided, which may be a server and whose internal structure may be as shown in Fig. 5. The computer equipment includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer equipment provides computing and control capability. The memory of the computer equipment includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer equipment is used to store screen display state discrimination data. The network interface of the computer equipment is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a screen display state discrimination method.
Those skilled in the art will understand that the structure shown in Fig. 5 is only a block diagram of part of the structure relevant to the solution of the application and does not constitute a limitation on the computer equipment to which the solution of the application is applied; a specific computer equipment may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, computer equipment is provided, including a memory and a processor, the memory storing a computer program. When executing the computer program, the processor performs the following steps: obtaining a working region image; detecting whether a human body feature is present in the working region image; when the human body feature is not detected, extracting a screen region image from the working region image; and discriminating the screen display state according to the pixel information of the screen region image.
In one embodiment, the processor, when executing the computer program, further performs the following steps: when the human body feature is detected, obtaining a personnel human body feature associated with the working region image; calculating the matching degree between the human body feature and the personnel human body feature; and when the matching degree is less than the preset matching degree threshold, extracting the screen region image from the working region image and discriminating the screen display state according to the pixel information of the screen region image.
In one embodiment, after the step of discriminating the screen display state according to the pixel information of the screen region image, the processor, when executing the computer program, further performs: when the screen display state is determined to be the bright-screen state, changing the screen display state to the screen protection state.
In one embodiment, when performing the step of discriminating the screen display state according to the pixel information of the screen region image, the processor, when executing the computer program, further performs: obtaining the total pixel count of the screen region image; converting the screen region image into a binary image and calculating the effective pixel count of the binary image; calculating the ratio of the effective pixel count to the total pixel count; when the ratio is greater than the preset pixel threshold, determining that the screen display state is the bright-screen state, and when the ratio is not greater than the preset pixel threshold, determining that the screen display state is the screen protection state.
In one embodiment, the processor, when executing the computer program, further performs: obtaining a video stream and extracting a plurality of working region images from the video stream at the preset frequency; and, when the human body feature is not detected in a plurality of consecutive working region images within the preset duration and the screen display state is determined to be the bright-screen state, changing the screen display state to the screen protection state.
In one embodiment, when performing the step of detecting whether a human body feature is present in the working region image, the processor, when executing the computer program, further performs: inputting the working region image into the human body posture machine learning model; obtaining key point coordinates according to the human body posture machine learning model; connecting the key point coordinates to obtain a contour map; when the similarity between the contour map and the human body contour map exceeds the preset similarity threshold, determining that a human body feature is present in the working region image, and when the similarity is not greater than the preset similarity threshold, determining that no human body feature is present in the working region image.
In one embodiment, when performing the step of extracting the screen region image from the working region image, the processor, when executing the computer program, further performs: inputting the working region image into the target detection machine learning model to obtain the location information and matching probabilities of matching regions under the plurality of scale matching modes; obtaining the location information of the matching region with the largest matching probability; and extracting the screen region image corresponding to the location information from the working region image.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. The computer program, when executed by a processor, performs the following steps: obtaining a working region image; detecting whether a human body feature is present in the working region image; when the human body feature is not detected, extracting a screen region image from the working region image; and discriminating the screen display state according to the pixel information of the screen region image.
In one embodiment, the computer program, when executed by the processor, further performs: when the human body feature is detected, obtaining a personnel human body feature associated with the working region image; calculating the matching degree between the human body feature and the personnel human body feature; and when the matching degree is less than the preset matching degree threshold, extracting the screen region image from the working region image and discriminating the screen display state according to the pixel information of the screen region image.
In one embodiment, after the step of discriminating the screen display state according to the pixel information of the screen region image, the computer program, when executed by the processor, further performs: when the screen display state is determined to be the bright-screen state, changing the screen display state to the screen protection state.
In one embodiment, when performing the step of discriminating the screen display state according to the pixel information of the screen region image, the computer program, when executed by the processor, further performs: obtaining the total pixel count of the screen region image; converting the screen region image into a binary image and calculating the effective pixel count of the binary image; calculating the ratio of the effective pixel count to the total pixel count; when the ratio is greater than the preset pixel threshold, determining that the screen display state is the bright-screen state, and when the ratio is not greater than the preset pixel threshold, determining that the screen display state is the screen protection state.
In one embodiment, the computer program, when executed by the processor, further performs: obtaining a video stream and extracting a plurality of working region images from the video stream at the preset frequency; and, when the human body feature is not detected in a plurality of consecutive working region images within the preset duration and the screen display state is determined to be the bright-screen state, changing the screen display state to the screen protection state.
In one embodiment, when performing the step of detecting whether a human body feature is present in the working region image, the computer program, when executed by the processor, further performs: inputting the working region image into the human body posture machine learning model; obtaining key point coordinates according to the human body posture machine learning model; connecting the key point coordinates to obtain a contour map; when the similarity between the contour map and the human body contour map exceeds the preset similarity threshold, determining that a human body feature is present in the working region image, and when the similarity is not greater than the preset similarity threshold, determining that no human body feature is present in the working region image.
In one embodiment, when performing the step of extracting the screen region image from the working region image, the computer program, when executed by the processor, further performs: inputting the working region image into the target detection machine learning model to obtain the location information and matching probabilities of matching regions under the plurality of scale matching modes; obtaining the location information of the matching region with the largest matching probability; and extracting the screen region image corresponding to the location information from the working region image.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium; when executed, the computer program may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the application, all of which fall within the scope of protection of the application. Therefore, the scope of protection of this patent application shall be subject to the appended claims.

Claims (10)

CN201910773812.2A | 2019-08-21 | 2019-08-21 | Screen display state discrimination method, device, computer equipment and storage medium | Pending | CN110490140A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910773812.2A (CN110490140A, en) | 2019-08-21 | 2019-08-21 | Screen display state discrimination method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910773812.2A (CN110490140A, en) | 2019-08-21 | 2019-08-21 | Screen display state discrimination method, device, computer equipment and storage medium

Publications (1)

Publication Number | Publication Date
CN110490140A (en) | 2019-11-22

Family

ID=68552499

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910773812.2A (pending, CN110490140A, en) | Screen display state discrimination method, device, computer equipment and storage medium | 2019-08-21 | 2019-08-21

Country Status (1)

Country | Link
CN (1) | CN110490140A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108881621A (en)* | 2018-05-30 | 2018-11-23 | 上海与德科技有限公司 | A kind of screen locking method, device, terminal and storage medium
CN109147659A (en)* | 2018-08-23 | 2019-01-04 | 西安蜂语信息科技有限公司 | Control method for screen display, device, equipment and storage medium
CN109521875A (en)* | 2018-10-31 | 2019-03-26 | 联想(北京)有限公司 | A kind of screen control method, electronic equipment and computer readable storage medium
CN110046600A (en)* | 2019-04-24 | 2019-07-23 | 北京京东尚科信息技术有限公司 | Method and apparatus for human testing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
雷李辉: "基于图像识别的计算机自动锁屏系统的研究" (Research on a computer automatic screen-locking system based on image recognition), 《保密科学技术》 (Secrecy Science and Technology)*

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112069043A (en)* | 2020-08-04 | 2020-12-11 | 北京捷通华声科技股份有限公司 | A terminal equipment state detection method, model generation method and device
CN116522417A (en)* | 2023-07-04 | 2023-08-01 | 广州思涵信息科技有限公司 | Security detection method, device, equipment and storage medium for display equipment
CN116522417B (en)* | 2023-07-04 | 2023-09-19 | 广州思涵信息科技有限公司 | Security detection method, device, equipment and storage medium for display equipment

Similar Documents

Publication | Title
CN110502986B (en) | Method, device, computer equipment and storage medium for identifying positions of persons in image
Lin et al. | MSAFF-Net: Multiscale attention feature fusion networks for single image dehazing and beyond
CN105893920B (en) | Face living body detection method and device
CN110991231B (en) | Living body detection method and device, server and face recognition equipment
CN111429476B (en) | Method and device for determining action track of target person
JP7151875B2 (en) | Image processing device, image processing method, and program
CN107844742B (en) | Facial image glasses minimizing technology, device and storage medium
EP3869448A1 (en) | Iris authentication device, iris authentication method, and recording medium
CN111552984A (en) | Encryption method, device, device and storage medium for displaying information
CN113435353A (en) | Multi-mode-based in-vivo detection method and device, electronic equipment and storage medium
CN108228742B (en) | Face duplicate checking method and device, electronic equipment, medium and program
CN113869253A (en) | Liveness detection method, training method, device, electronic device and medium
CN110490140A (en) | Screen display state discrimination method, device, computer equipment and storage medium
CN112348112A (en) | Training method and device for image recognition model and terminal equipment
CN113239233A (en) | Personalized image recommendation method, device, equipment and storage medium
Thamaraimanalan et al. | Multi biometric authentication using SVM and ANN classifiers
Dhruva et al. | Novel algorithm for image processing based hand gesture recognition and its application in security
CN111881740A (en) | Face recognition method, face recognition device, electronic equipment and medium
CN109697421A (en) | Evaluation method, device, computer equipment and storage medium based on micro-expression
CN111274602B (en) | Image characteristic information replacement method, device, equipment and medium
CN113569676A (en) | Image processing method, image processing device, electronic equipment and storage medium
Kennedy et al. | Implementation of an embedded Masked Face Recognition System using huskylens system-on-chip module
JP2023025914A (en) | Face authentication device, face authentication method, and computer program
Zhang et al. | Face spoofing detection based on 3D lighting environment analysis of image pair
WO2020003400A1 (en) | Face collation system

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication (application publication date: 20191122)
