CN107247920A - Interaction control method, device and computer-readable recording medium - Google Patents

Interaction control method, device and computer-readable recording medium
Download PDF

Info

Publication number
CN107247920A
CN107247920A, CN201710317463A
Authority
CN
China
Prior art keywords
visitor
face
image
identification result
facial image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710317463.4A
Other languages
Chinese (zh)
Inventor
陈志博 (Chen Zhibo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201710317463.4A
Publication of CN107247920A
Legal status: Pending (current)

Abstract

The present invention relates to an interaction control method, a device, and a computer-readable storage medium. The method includes: acquiring an image frame; according to the image frame, obtaining visitor interaction content associated with a visitor identity recognition result obtained by performing face recognition on the image frame; obtaining an approach-visitor movement instruction according to the image frame, where the approach-visitor movement instruction is generated according to a visitor spatial state, and the visitor spatial state is determined according to the amount of difference between a preset face template and the face image in the image frame corresponding to the visitor identity recognition result; and moving according to the approach-visitor movement instruction and outputting the visitor interaction content. The scheme provided by this application greatly improves the efficiency of interacting with visitors.

Description

Interaction control method, device and computer-readable recording medium
Technical field
The present invention relates to the field of computer technology, and in particular to an interaction control method, a device, and a computer-readable storage medium.
Background technology
With the progress of society and the development of science and technology, interactions between people have become increasingly frequent, and the scenarios that require interaction have also multiplied. In a traditional visitor reception scenario, a staff member usually needs to register the visiting person's information and then interact with the visitor manually.
However, this traditional mode of interaction, in which visitor information is registered manually before the visitor is attended to, makes interaction with visitors inefficient: it consumes substantial manpower and material resources and imposes a heavy workload.
Summary of the invention
In view of the inefficiency of interacting with visitors caused by the traditional visitor interaction mode, it is necessary to provide an interaction control method, a device, and a computer-readable storage medium.
An interaction control method, the method including:
acquiring an image frame;
according to the image frame, obtaining visitor interaction content associated with a visitor identity recognition result obtained by performing face recognition on the image frame;
obtaining an approach-visitor movement instruction according to the image frame, where the approach-visitor movement instruction is generated according to a visitor spatial state, and the visitor spatial state is determined according to the amount of difference between a preset face template and the face image in the image frame corresponding to the visitor identity recognition result;
moving according to the approach-visitor movement instruction, and outputting the visitor interaction content.
An interaction control device, the device including:
an acquisition module, configured to acquire an image frame;
a recognition result acquisition module, configured to obtain, according to the image frame, visitor interaction content associated with a visitor identity recognition result obtained by performing face recognition on the image frame;
an instruction acquisition module, configured to obtain an approach-visitor movement instruction according to the image frame, where the approach-visitor movement instruction is generated according to a visitor spatial state, and the visitor spatial state is determined according to the amount of difference between a preset face template and the face image in the image frame corresponding to the visitor identity recognition result;
an output module, configured to move according to the approach-visitor movement instruction and output the visitor interaction content.
In one embodiment, the recognition result acquisition module is further configured to extract face feature data of the face image included in the image frame; query, according to the face feature data, a visitor image that matches the face image; obtain the visitor identity recognition result according to the visitor image; and obtain the visitor interaction content associated with the visitor identity recognition result.
In one embodiment, the recognition result acquisition module is further configured to determine the proportion of the image frame occupied by the face image included in the image frame and extract face feature data of face images whose proportion exceeds a preset proportion; and/or determine the definition (sharpness) of the face image included in the image frame and extract face feature data of face images whose definition exceeds a definition threshold.
In one embodiment, the recognition result acquisition module is further configured to input the image frame into a neural network model; obtain a feature map, output by the neural network model, corresponding to the image frame; determine, according to the feature map, the face feature data of the face image included in the image frame; compare the face feature data with the face feature data corresponding to each visitor image in a visitor image library; and select the visitor image whose corresponding face feature data has the highest similarity to the determined face feature data as the visitor image matching the face image.
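The gallery-matching step described above — compare extracted face feature data against the features of each visitor image and keep the highest-similarity match — can be sketched as follows. This is a minimal illustration assuming features are plain vectors and using cosine similarity; the patent does not specify the feature dimensionality or similarity measure, and the names (`match_visitor`, the toy gallery values) are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_visitor(query_features, visitor_gallery):
    """Return the visitor id whose stored features are most similar
    to the query features, together with the similarity score."""
    best_id, best_score = None, -1.0
    for visitor_id, features in visitor_gallery.items():
        score = cosine_similarity(query_features, features)
        if score > best_score:
            best_id, best_score = visitor_id, score
    return best_id, best_score

# Toy gallery of pre-extracted feature vectors (hypothetical values).
gallery = {
    "alice": [0.9, 0.1, 0.2],
    "bob": [0.1, 0.8, 0.3],
}
print(match_visitor([0.85, 0.15, 0.25], gallery))  # closest to "alice"
```

In practice the gallery features would come from the same neural network that embeds the query face, so that distances in feature space are meaningful.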
In one embodiment, the recognition result acquisition module is further configured to determine a corresponding visitor attribute according to the visitor identity recognition result; look up a visitor interaction content template corresponding to the visitor attribute; and combine the visitor identity recognition result with the visitor interaction content template to obtain the visitor interaction content.
In one embodiment, the instruction acquisition module is further configured to extract obstacle feature data of the environment image included in the image frame; generate an obstacle distribution map according to the obstacle feature data; plan a movement path in the obstacle distribution map according to the visitor spatial state; and generate the approach-visitor movement instruction according to the movement path.
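The obstacle-distribution-map and path-planning step can be illustrated with a breadth-first search over an occupancy grid. This is only a sketch under the assumption that the distribution map is a 2-D grid of free/blocked cells; the patent does not commit to a particular planning algorithm.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over an obstacle distribution map.
    grid: 2-D list, 0 = free cell, 1 = obstacle.
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # visitor unreachable

# 0 = free, 1 = obstacle; the device starts at (0, 0), the visitor is at (2, 2).
obstacle_map = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(plan_path(obstacle_map, (0, 0), (2, 2)))
```

The resulting cell sequence would then be translated into the approach-visitor movement instruction in the device's own instruction protocol.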
A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, cause the processor to perform the steps of the interaction control method.
A computer device including a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the interaction control method.
With the above interaction control method, device, computer-readable storage medium, and computer device, once an image frame has been captured, the visitor interaction content associated with the visitor identity recognition result obtained by performing face recognition on the image frame can be obtained automatically, along with the approach-visitor movement instruction generated from the visitor spatial state, which is determined by the amount of difference between the preset face template and the face image in the image frame corresponding to the visitor identity recognition result. The device can thus automatically adjust its position according to the approach-visitor movement instruction and complete the interaction with the visitor, avoiding the tedious steps of manual operation and greatly improving the efficiency of interacting with visitors.
An interaction control method, the method including:
receiving an image frame sent by a visitor interaction device;
performing face recognition on the image frame to obtain a visitor identity recognition result;
obtaining visitor interaction content associated with the visitor identity recognition result;
determining a visitor spatial state according to the amount of difference between a preset face template and the face image in the image frame corresponding to the visitor identity recognition result;
generating, according to the visitor spatial state, an approach-visitor movement instruction applicable to the visitor interaction device;
sending the visitor interaction content and the approach-visitor movement instruction to the visitor interaction device, so that the visitor interaction device executes the approach-visitor movement instruction and outputs the visitor interaction content.
In one embodiment, performing face recognition on the image frame to obtain the visitor identity recognition result includes:
extracting the face feature data of the face image included in the image frame;
querying, according to the face feature data, a visitor image that matches the face image;
obtaining the visitor identity recognition result according to the visitor image.
In one embodiment, extracting the face feature data of the face image included in the image frame includes:
determining the proportion of the image frame occupied by the face image included in the image frame, and extracting face feature data of face images whose proportion exceeds a preset proportion; and/or
determining the definition of the face image included in the image frame, and extracting face feature data of face images whose definition exceeds a definition threshold.
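The proportion-and-definition filtering described above might look like the following sketch; the field names and threshold values are illustrative, not taken from the patent.

```python
def select_faces(faces, min_area_ratio=0.05, min_sharpness=0.4):
    """Keep only face detections large and sharp enough to be worth
    running feature extraction on. Each face is a dict carrying the
    area ratio it occupies in the frame and a sharpness score in
    [0, 1] (both field names are illustrative)."""
    return [
        f for f in faces
        if f["area_ratio"] > min_area_ratio and f["sharpness"] > min_sharpness
    ]

detections = [
    {"id": 1, "area_ratio": 0.12, "sharpness": 0.8},  # close, in focus
    {"id": 2, "area_ratio": 0.01, "sharpness": 0.9},  # too far away
    {"id": 3, "area_ratio": 0.20, "sharpness": 0.1},  # motion-blurred
]
print([f["id"] for f in select_faces(detections)])  # → [1]
```

Filtering before feature extraction saves the cost of embedding faces that are too small or too blurred to match reliably.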
In one embodiment, extracting the face feature data of the face image included in the image frame includes:
inputting the image frame into a neural network model;
obtaining a feature map, output by the neural network model, corresponding to the image frame;
determining, according to the feature map, the face feature data of the face image included in the image frame.
Querying, according to the face feature data, the visitor image that matches the face image includes:
comparing the face feature data with the face feature data corresponding to each visitor image in a visitor image library;
selecting the visitor image whose corresponding face feature data has the highest similarity to the determined face feature data as the visitor image matching the face image.
In one embodiment, obtaining the visitor interaction content associated with the visitor identity recognition result includes:
determining a corresponding visitor attribute according to the visitor identity recognition result;
looking up a visitor interaction content template corresponding to the visitor attribute;
combining the visitor identity recognition result with the visitor interaction content template to obtain the visitor interaction content.
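The attribute/template combination step can be sketched as below. The attribute table, template strings, and `{name}` placeholder scheme are all assumptions made for illustration; the patent leaves the template format open.

```python
# Hypothetical attribute table and greeting templates keyed by attribute.
VISITOR_ATTRIBUTES = {"alice": "vip", "bob": "regular"}
TEMPLATES = {
    "vip": "Welcome back, {name}! Your host has been notified.",
    "regular": "Hello, {name}. Please register at the front desk.",
}

def build_interaction_content(recognition_result):
    """Combine the identity recognition result with the template that
    matches the visitor's attribute."""
    name = recognition_result["name"]
    attribute = VISITOR_ATTRIBUTES.get(name, "regular")
    return TEMPLATES[attribute].format(name=name)

print(build_interaction_content({"name": "alice"}))
```

The same content could equally be a picture, audio clip, or video selected per attribute; text keeps the sketch short.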
An interaction control device, the device including:
a receiving module, configured to receive the image frame sent by the visitor interaction device;
an identification module, configured to perform face recognition on the image frame to obtain the visitor identity recognition result;
an acquisition module, configured to obtain the visitor interaction content associated with the visitor identity recognition result;
a determining module, configured to determine the visitor spatial state according to the amount of difference between the preset face template and the face image in the image frame corresponding to the visitor identity recognition result;
an instruction generation module, configured to generate, according to the visitor spatial state, the approach-visitor movement instruction applicable to the visitor interaction device;
a sending module, configured to send the visitor interaction content and the approach-visitor movement instruction to the visitor interaction device, so that the visitor interaction device executes the approach-visitor movement instruction and outputs the visitor interaction content.
In one embodiment, the identification module is further configured to extract the face feature data of the face image included in the image frame; query, according to the face feature data, the visitor image that matches the face image; and obtain the visitor identity recognition result according to the visitor image.
In one embodiment, the identification module is further configured to determine the proportion of the image frame occupied by the face image included in the image frame and extract face feature data of face images whose proportion exceeds a preset proportion; and/or determine the definition of the face image included in the image frame and extract face feature data of face images whose definition exceeds a definition threshold.
In one embodiment, the identification module is further configured to input the image frame into a neural network model; obtain the feature map, output by the neural network model, corresponding to the image frame; determine, according to the feature map, the face feature data of the face image included in the image frame; compare the face feature data with the face feature data corresponding to each visitor image in the visitor image library; and select the visitor image whose corresponding face feature data has the highest similarity to the determined face feature data as the visitor image matching the face image.
In one embodiment, the acquisition module determines a corresponding visitor attribute according to the visitor identity recognition result; looks up the visitor interaction content template corresponding to the visitor attribute; and combines the visitor identity recognition result with the visitor interaction content template to obtain the visitor interaction content.
In one embodiment, the device further includes:
an interaction module, configured to obtain an interaction instruction initiated according to the visitor identity recognition result; determine a corresponding interaction object identifier according to the interaction instruction; and establish a communication connection between the visitor interaction device and the device corresponding to the interaction object identifier.
A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, cause the processor to perform the steps of the interaction control method.
A computer device including a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the interaction control method.
With the above interaction control method, device, computer-readable storage medium, and computer device, after the image frame sent by the visitor interaction device is received, face recognition is performed on the image frame to obtain the visitor identity recognition result, from which the visitor interaction content for interacting with the visitor can be obtained. Then, according to the amount of difference between the preset face template and the face image in the image frame corresponding to the visitor identity recognition result, the visitor spatial state can be determined and an approach-visitor movement instruction that moves the visitor interaction device toward the visitor can be generated. By sending the visitor interaction content and the approach-visitor movement instruction to the visitor interaction device, the device can automatically adjust its position according to the approach-visitor movement instruction and complete the interaction with the visitor, avoiding the tedious steps of manual operation and greatly improving the efficiency of interacting with visitors. Moreover, after sending an image frame, the visitor interaction device only needs the received approach-visitor movement instruction and visitor interaction content to complete the interaction with the visitor automatically, which greatly reduces the integration difficulty and maintenance cost of the visitor interaction device.
Brief description of the drawings
Fig. 1 is the application environment diagram of the interaction control method in one embodiment;
Fig. 2 is the internal structure diagram of the electronic equipment used to implement the interaction control method in one embodiment;
Fig. 3 is the internal structure diagram of the server used to implement the interaction control method in one embodiment;
Fig. 4 is a flow diagram of the interaction control method in one embodiment;
Fig. 5 is a flow diagram of the interaction control method in another embodiment;
Fig. 6 is the equipment architecture diagram for implementing the interaction control method in one embodiment;
Fig. 7 is the timing diagram of the interaction control method in one embodiment;
Fig. 8 is the structural block diagram of the interaction control device in one embodiment;
Fig. 9 is the structural block diagram of the interaction control device in another embodiment;
Fig. 10 is the structural block diagram of the interaction control device in a further embodiment;
Fig. 11 is the structural block diagram of the interaction control device in yet another embodiment.
Detailed description of the embodiments
To make the purpose, technical scheme, and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the drawings and embodiments. It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
Fig. 1 is the application environment diagram of the interaction control method in one embodiment. Referring to Fig. 1, the interaction control method is applied in an interaction control system. The interaction control system includes a visitor interaction device 110 and a server 120, the visitor interaction device 110 being connected with the server 120 through a network. The visitor interaction device 110 is a device that is movable and has an interactive function; it can specifically be a robot or the like. The server 120 can specifically be an independent physical server or a physical server cluster. The interaction control method can be applied to the visitor interaction device 110 or to the server 120.
Fig. 2 is the internal structure schematic diagram of the electronic equipment in one embodiment. As shown in Fig. 2, the electronic equipment includes a processor, a non-volatile storage medium, an internal memory, and a network interface connected through a system bus, as well as a camera, a loudspeaker, and a display screen. The non-volatile storage medium of the electronic equipment stores an operating system and can also store computer-readable instructions which, when executed by the processor, cause the processor to implement an interaction control method. The processor provides computing and control capability and supports the operation of the whole electronic equipment. Computer-readable instructions can also be stored in the internal memory; when executed by the processor, they cause the processor to perform an interaction control method. The network interface is used for network communication with the server, such as sending image frames to the server and receiving the visitor identity recognition result returned by the server. The display screen of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, etc. The electronic equipment can be the visitor interaction device 110. Those skilled in the art will understand that the structure shown in Fig. 2 is only a block diagram of the part of the structure related to the present scheme and does not limit the electronic equipment to which the present scheme is applied; a specific electronic device can include more or fewer parts than shown in the figure, combine some parts, or have a different arrangement of parts.
Fig. 3 is the internal structure schematic diagram of the server in one embodiment. As shown in Fig. 3, the server includes a processor, a non-volatile storage medium, an internal memory, and a network interface connected through a system bus. The non-volatile storage medium of the server stores an operating system and a database, and can also store computer-readable instructions which, when executed by the processor, cause the processor to implement an interaction control method. The processor of the server provides computing and control capability and supports the operation of the whole server. Computer-readable instructions can also be stored in the internal memory; when executed by the processor, they cause the processor to perform an interaction control method. The network interface of the server is used to communicate with external electronic equipment through a network connection, such as receiving image frames sent by the electronic equipment and returning the visitor identity recognition result to the electronic equipment. The server can be implemented as an independent server or as a server cluster composed of multiple servers. Those skilled in the art will understand that the structure shown in Fig. 3 is only a block diagram of the part of the structure related to the present scheme and does not limit the server to which the present scheme is applied; a specific server can include more or fewer parts than shown in the figure, combine some parts, or have a different arrangement of parts.
As shown in Fig. 4, an interaction control method is provided in one embodiment. This embodiment is mainly illustrated by applying the method to the server 120 in Fig. 1 above. Referring to Fig. 4, the interaction control method specifically includes the following steps:
S402: receive the image frame sent by the visitor interaction device.
The visitor interaction device is a device that is movable and has an interactive function. It can move through a drive mechanism configured on itself or be moved by an external auxiliary device. The visitor interaction device can provide a picture-based interactive function through a display screen and a sound-based interactive function through a loudspeaker.
In one embodiment, the visitor interaction device can collect image frames through a camera under the camera's current field of view and send the collected image frames to the server according to the real-time transport protocol (RTP), and the server receives the image frames sent by the visitor interaction device. The field of view of the camera can change with the posture and position of the visitor interaction device.
In one embodiment, the visitor interaction device can collect image frames at a fixed or dynamic frame rate and send the collected image frames to the server, and the server receives them. A fixed or dynamic frame rate allows the image frames to form a continuous dynamic picture when played at that frame rate, so that the server can track a specific object in the continuous dynamic picture.
In one embodiment, the visitor interaction device can call the camera to start a camera-scanning mode and scan the target object under the current field of view in real time. The visitor interaction device can detect whether a face image exists in the picture under the current field of view; if so, it generates image frames in real time at a certain frame rate and sends the generated image frames to the server, and the server receives the image frames sent by the visitor interaction device.
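The scan-and-send behaviour described in these embodiments can be sketched as a loop that samples the camera at a frame rate and forwards only frames in which the detector sees a face. Real camera capture and RTP transmission are out of scope here; the `detections` mapping is a stand-in for a real face detector, and all names are illustrative.

```python
def frames_to_send(detections, frame_rate, duration_s):
    """Simulate the scan-and-send loop: the device samples the camera at
    `frame_rate` frames per second and forwards a frame to the server only
    when the (stubbed) face detector fires. `detections` maps frame index
    to True/False in place of a real detector."""
    sent = []
    total_frames = frame_rate * duration_s
    for frame_index in range(total_frames):
        if detections.get(frame_index, False):
            sent.append(frame_index)  # in a real device: transmit over RTP
    return sent

# A face appears in frames 3-5 of a 2-second scan at 5 fps.
detector_output = {3: True, 4: True, 5: True}
print(frames_to_send(detector_output, frame_rate=5, duration_s=2))  # → [3, 4, 5]
```

Gating transmission on detection keeps network traffic proportional to the time a visitor is actually in view rather than to the raw frame rate.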
In one embodiment, the camera can be a camera built into the visitor interaction device, or an external camera associated with the visitor interaction device.
S404: perform face recognition on the image frame to obtain the visitor identity recognition result.
The visitor identity recognition result is data reflecting the visitor's identity, such as the visitor's name, social status, or job information. Specifically, after receiving the image frame sent by the visitor interaction device, the server can first detect whether a face image exists in the image frame and, if so, identify the face image included in the image frame.
In one embodiment, the server can input the received image frame into one classifier, or into multiple classifiers at once, to detect whether a face image exists in the image frame. The classifier used to detect whether an image frame contains a face image is trained with face images and non-face images as training data.
Further, the server can also move a rectangular window across the image frame in a preset direction and with a preset step length, so as to perform window scanning, extracting the face image in each scanned window. After extracting face image units, the server can filter out face image units whose overlapping region exceeds a preset overlap threshold and obtain the face image from the face image units retained after filtering.
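The rectangular-window scan and the overlap filtering can be sketched as follows, measuring overlap as intersection-over-union. The window size, step, and overlap threshold are illustrative choices; the patent only states that units whose overlapping region exceeds a preset threshold are filtered out.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def scan_windows(frame_w, frame_h, win, step):
    """Slide a square window of side `win` across the frame with stride `step`."""
    return [(x, y, win, win)
            for y in range(0, frame_h - win + 1, step)
            for x in range(0, frame_w - win + 1, step)]

def filter_overlaps(boxes, threshold=0.5):
    """Drop boxes whose overlap with an already-kept box exceeds the threshold."""
    kept = []
    for box in boxes:
        if all(iou(box, k) <= threshold for k in kept):
            kept.append(box)
    return kept

windows = scan_windows(8, 8, win=4, step=2)
print(len(windows))  # 3 positions per axis -> 9 candidate windows
print(len(filter_overlaps([(0, 0, 4, 4), (1, 0, 4, 4), (5, 5, 4, 4)])))  # 2 kept
```

In a production detector the kept boxes would additionally carry classifier scores, with higher-scoring boxes suppressing overlapping lower-scoring ones (non-maximum suppression).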
In one embodiment, a visitor image library is provided on the server, and the visitor image library includes a number of visitor images. After receiving the image frame sent by the visitor interaction device, the server can compare the face image in the received image frame with the visitor images included in the visitor image library and detect whether the received image frame matches any visitor image. When the received image frame matches a visitor image, the server can judge the face image included in the image frame and the visitor image to be images of the same person and obtain the visitor identity information corresponding to that visitor image as the visitor identity recognition result.
The visitor image can be a real face image reflecting the corresponding visitor. It can be an image chosen by the visitor from the visitor's uploaded personal data or historically published picture information, or a picture selected automatically by system analysis, used as the corresponding visitor image.
In one embodiment, to detect whether the received image frame matches a visitor image, the server can specifically calculate the similarity between the received image frame and the visitor image. The server can first extract the respective features of the received image frame and the visitor image and then calculate the difference between the two features: the larger the difference between the features, the lower the similarity; the smaller the difference, the higher the similarity. When calculating the similarity between the received image frame and the visitor image, the server can use an acceleration algorithm suitable for an image processor (GPU) to improve the computation speed.
In one embodiment, after receiving the image frame sent by the visitor interaction device, the server can extract the image data in the image frame and detect whether the image data includes face feature data. If the server detects that the image data includes face feature data, it judges that the image frame includes a face image. The server can further extract the face feature data from the image data and then compare the extracted face feature data with the face feature data corresponding to each visitor image in the visitor image library to obtain the visitor identity recognition result.
S406: obtain the visitor interaction content associated with the visitor identity recognition result.
Visitor interaction content is the content used to interact with the visitor. It can include at least one of text, pictures, audio, or video, and it can be uniformly set content, content related to the visitor's identity, or content related to a visitor attribute.
The visitor identity recognition result is associated with the visitor interaction content and is used to mark it; the associated visitor interaction content can be located through the visitor identity recognition result. One visitor identity recognition result can be associated with one or more pieces of visitor interaction content, and multiple visitor identity recognition results can be associated with one piece of visitor interaction content.
In one embodiment, the server can set visitor interaction content in advance, associate the visitor interaction content with a visitor identifier, and then store the set visitor interaction content in a database or file, from which it is read when needed. After recognition yields the visitor identity recognition result, the server can pull the visitor interaction content associated with the visitor identifier corresponding to the visitor identity recognition result. The visitor interaction content can be directly outputtable content or a visitor interaction content template awaiting completion.
In one embodiment, visitor data is stored on the server. The visitor data can be visitor personal information uploaded by the visitor or an administrator, or visitor internet data crawled by the server from the internet — for example, microblog messages the visitor has posted and comment information the visitor has produced when commenting.
Further, the server can specifically perform semantic analysis or word-frequency analysis on the visitor data to find keywords that can represent the visitor, so as to form visitor labels from the keywords. Specifically, the keywords can be clustered and each resulting class used as a visitor label, or the keywords can be compared with classified visitor labels so as to map the keywords onto visitor labels. A visitor label is a portrait of the visitor and a mark peculiar to the visitor; the visitor portrait sketches the real features of the service target group and is a synthetic prototype of the real visitor.
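A minimal word-frequency analysis of the kind described — count words across a visitor's posted texts and keep the most frequent as labels — could look like this; the stop-word list and `top_n` cut-off are assumptions, and real label formation would add the clustering or label-mapping step the text mentions.

```python
from collections import Counter

STOP_WORDS = {"the", "a", "and", "to", "of", "i", "at", "like"}

def visitor_labels(documents, top_n=2):
    """Word-frequency analysis over a visitor's posted texts: count
    non-stop-words across all documents and keep the most frequent
    ones as the visitor's labels."""
    counts = Counter(
        word
        for doc in documents
        for word in doc.lower().split()
        if word not in STOP_WORDS
    )
    return [word for word, _ in counts.most_common(top_n)]

posts = [
    "great tennis match at the club",
    "tennis practice and photography",
    "photography tips i like tennis",
]
print(visitor_labels(posts))  # "tennis" appears most often
```

Semantic analysis (word embeddings, topic models) would generalize this beyond exact word matches, but the frequency count already illustrates how labels fall out of the visitor's own data.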
Further, the server can set visitor interaction content corresponding to each visitor label. After recognition yields the visitor identity recognition result, the server can pull the visitor interaction content associated with the visitor label corresponding to the visitor identity recognition result.
S408, determining a visitor spatial state according to a difference measure between the face image corresponding to the visitor identity recognition result in the image frame and a preset face template.
Here, the preset face template is a reference face image set in advance. The preset face template may be an image captured with the face of a reference natural person placed at the picture center under the camera view of the visitor interaction device. The visitor spatial state is the state of the visitor in three-dimensional space, including physical position and posture, etc.
Specifically, after identifying the face image included in the image frame, the server compares the recognized face image with the preset face template, and calculates the difference measure between the face image corresponding to the visitor identity recognition result in the image frame and the preset face template. When calculating this difference measure, the server may first extract respective features of the face image and the preset face template, and then calculate the difference measure between the two features. After calculating the difference measure between the face image corresponding to the visitor identity recognition result in the image frame and the preset face template, the server derives the visitor spatial state from the difference measure.
S410, generating, according to the visitor spatial state, a move-toward-visitor instruction applicable to the visitor interaction device.
Here, the move-toward-visitor instruction applicable to the visitor interaction device is used to drive the visitor interaction device to move toward the visitor. The move-toward-visitor instruction is an instruction obtained by conversion and directly executable by the adapted device object.
Specifically, after determining the visitor spatial state corresponding to the face image whose identity has been recognized in the image frame, the server can determine, according to the visitor spatial state, the movement path along which the visitor interaction device moves toward the visitor, and determine the travel data for the visitor interaction device's next action. The travel data may include travel direction and travel speed, etc. The server may then convert the determined travel data, by compiling according to the instruction protocol adapted to the visitor interaction device, into the move-toward-visitor instruction applicable to the visitor interaction device.
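A minimal sketch of deriving travel data and compiling it into a device instruction might look as follows. The JSON-over-bytes encoding, the speed cap, and the coordinate convention are assumptions for illustration only, since the actual instruction protocol is specific to each visitor interaction device:

```python
import json
import math

def travel_data(visitor_x, visitor_y):
    """Derive travel direction (degrees) and a capped travel speed from
    the visitor's planar position relative to the device."""
    direction = math.degrees(math.atan2(visitor_y, visitor_x))
    distance = math.hypot(visitor_x, visitor_y)
    speed = min(0.5, 0.2 * distance)  # m/s, capped for safety (assumed)
    return {"direction_deg": round(direction, 1),
            "speed_mps": round(speed, 2)}

def compile_move_instruction(data):
    """Encode the travel data using an assumed JSON-over-serial protocol;
    a real device would define its own binary or text instruction format."""
    return json.dumps({"cmd": "MOVE", **data}).encode("utf-8")

instr = compile_move_instruction(travel_data(1.0, 1.0))
```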
In one embodiment, the server may receive a sequence of image frames continuously sent by the visitor interaction device; after identity recognition of the face images in the frames is completed, it tracks the successfully recognized face image across the image frame sequence, determines a corresponding visitor spatial state for each image frame in the sequence, adjusts the movement path accordingly based on the visitor spatial state, and adjusts the move-toward-visitor instruction accordingly based on the movement path.
S412, sending the visitor interaction content and the move-toward-visitor instruction to the visitor interaction device, so that the visitor interaction device executes the move-toward-visitor instruction and outputs the visitor interaction content.
Specifically, the server may send the acquired visitor interaction content and the generated move-toward-visitor instruction to the visitor interaction device. After receiving the visitor interaction content and the move-toward-visitor instruction, the visitor interaction device moves toward the visitor according to the move-toward-visitor instruction, and outputs the visitor interaction content.
In one embodiment, the visitor interaction device may call a display screen to output the visitor interaction content, for example text, pictures, or video. The visitor interaction device may also obtain style data corresponding to the visitor interaction content, so as to display the visitor interaction content on the display screen according to the style data. The visitor interaction device may also call a loudspeaker to output the visitor interaction content, for example audio.
In the above interaction control method, after the image frame sent by the visitor interaction device is received, face recognition is performed on the image frame to obtain the visitor identity recognition result, whereupon visitor interaction content for interacting with the visitor can be obtained. Then, according to the difference measure between the face image corresponding to the visitor identity recognition result in the image frame and the preset face template, the visitor spatial state can be determined, and a move-toward-visitor instruction that causes the visitor interaction device to move toward the visitor is generated. The visitor interaction content and the move-toward-visitor instruction are sent to the visitor interaction device, which can automatically adjust its position according to the move-toward-visitor instruction and complete the interaction with the visitor, avoiding the tedious steps of manual operation and greatly improving the efficiency of interacting with the visitor. Moreover, after sending the image frame, the visitor interaction device can automatically complete the interaction with the visitor according to the received move-toward-visitor instruction and visitor interaction content, which significantly reduces the integration difficulty and maintenance cost of the visitor interaction device.
In one embodiment, step S404 includes: extracting face feature data of the face image included in the image frame; querying for a visitor image matching the face image according to the face feature data; and obtaining the visitor identity recognition result according to the visitor image.
Specifically, a visitor image library is stored on the server. The visitor image library includes a number of visitor images, each frame of visitor image being stored with a corresponding guest identifier. A guest identifier uniquely identifies one visitor.
In one embodiment, in the interaction control method, the step of extracting face feature data of the face image included in the image frame includes: inputting the image frame into a neural network model; obtaining a feature map output by the neural network model and corresponding to the image frame; and determining, according to the feature map, the face feature data of the face image included in the image frame.
The server may input the visitor images included in the visitor image library one by one into the neural network model, obtain the feature maps output by the neural network model, extract the face feature data corresponding to each visitor image from the feature maps, and store the extracted face feature data in a database or file in correspondence with the visitor images. When identity recognition is needed, the server can input the image frame sent by the visitor interaction device into the neural network model, extract face feature data according to the output feature map, and then read the face feature data corresponding to the visitor images from the database or file for comparison.
Here, the neural network model is a complex network model formed by interconnecting multiple layers. The neural network model may include multiple feature transformation layers, each feature transformation layer having corresponding nonlinear transformation operators, of which there may be several per layer. In each feature transformation layer, a nonlinear transformation operator performs a nonlinear transformation on the input image, and the operation result is obtained as a feature map (Feature Map). The neural network model is a model obtained by learning and training with images including face images as training data.
The server may obtain the feature map output by the last network layer. A feature map is composed of the responses obtained by processing the input image with the nonlinear transformation operators. According to this feature map, the server can determine the face feature data corresponding to the input image. The face feature data may be one or more pieces of characteristic information reflecting a person's gender, face contour, hairstyle, glasses, nose, mouth, the distances between facial organs, and the like.
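One plausible reduction of such a last-layer feature map to fixed-length face feature data is global average pooling followed by L2 normalisation, so that later similarity comparisons are scale-free. This is a generic sketch under that assumption, not the specific network of the embodiment:

```python
import numpy as np

def face_features_from_map(feature_map):
    """Reduce a C x H x W feature map (the last layer's response maps)
    to a fixed-length face feature vector: average each channel's
    responses, then L2-normalise the resulting vector."""
    vec = feature_map.mean(axis=(1, 2))  # one value per channel
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

# Stand-in for a real network's output feature map.
fmap = np.random.default_rng(0).random((128, 7, 7))
feat = face_features_from_map(fmap)
```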
In one embodiment, extracting the face feature data of the face image included in the image frame includes: determining the proportion of the image frame occupied by the face image included in the image frame, and extracting face feature data of face images whose proportion exceeds a preset proportion; and/or determining the sharpness of the face image included in the image frame, and extracting face feature data of face images whose sharpness exceeds a sharpness threshold.
Specifically, the server may identify the number of pixels included in each face image in the image frame, detect the proportion of that number among the pixels included in the whole image frame to obtain the face image's proportion of the image frame, and then compare that proportion with the preset proportion. The server may determine face images whose proportion exceeds the preset proportion to be qualified face images, and extract the face feature data of those face images.
The server may also detect whether the sharpness of each face image in the image frame exceeds a preset sharpness threshold. Here, sharpness reflects the clarity of each fine texture and boundary of the face image in the respective image frame. The server can convert the selected image frame into a grayscale image, detect the rate of grayscale change of the grayscale image, and determine sharpness from that rate: where the grayscale changes faster, sharpness is higher; where the grayscale changes more slowly, sharpness is lower.
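The proportion and sharpness checks can be sketched as follows; the concrete thresholds and the mean-gradient-magnitude sharpness measure are illustrative assumptions standing in for the "rate of grayscale change" described above:

```python
import numpy as np

def face_ratio(face_box, frame_shape):
    """Fraction of frame pixels covered by the face bounding box
    (x0, y0, x1, y1); frame_shape is (height, width)."""
    x0, y0, x1, y1 = face_box
    return ((x1 - x0) * (y1 - y0)) / (frame_shape[0] * frame_shape[1])

def sharpness(gray):
    """Mean gradient magnitude of a grayscale crop: faster grey-level
    change yields a larger value, i.e. a sharper image."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def qualifies(face_box, gray_crop, frame_shape,
              min_ratio=0.05, min_sharpness=5.0):  # assumed thresholds
    return (face_ratio(face_box, frame_shape) > min_ratio
            and sharpness(gray_crop) > min_sharpness)
```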
In one embodiment, the server may also extract feature points of the face image in the image frame, and extract face feature data of face images whose number of extracted feature points exceeds a preset feature point quantity threshold. Here, a feature point is a point at which the face image in the image frame has distinct characteristics and which can effectively reflect the essential features of the image; such feature points have the capability of marking facial features, for example the feature points corresponding to facial organs. The feature point quantity threshold can be set as needed.
In the above embodiment, when it is detected that the proportion of the image frame occupied by the face image exceeds the preset proportion and/or the sharpness exceeds the preset sharpness threshold, the face feature data of the face image is extracted, thereby guaranteeing the quality of the extracted face feature data.
In one embodiment, the step of querying for a visitor image matching the face image according to the face feature data includes: comparing the face feature data with the face feature data corresponding to each visitor image in the visitor image library; and selecting the visitor image in the visitor image library whose corresponding face feature data has the highest similarity to the determined face feature data, as the visitor image matching the face image.
When comparing the face feature data corresponding to the image frame sent by the visitor interaction device with the face feature data corresponding to each visitor image in the visitor image library, the server may calculate the difference between the two sets of face feature data: the larger the difference between the face feature data, the lower the similarity; the smaller the difference, the higher the similarity. The similarity may use cosine similarity, or the Hamming distance between the perceptual hash values of the images.
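Both similarity measures mentioned above can be sketched in a few lines. The 8x8 average hash is a toy stand-in for a real perceptual hash, and the library contents are hypothetical:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def average_hash(gray8x8):
    """Toy perceptual hash: threshold an 8x8 grayscale thumbnail
    at its mean, yielding a 64-bit fingerprint."""
    return (gray8x8 > gray8x8.mean()).astype(np.uint8).ravel()

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes (smaller = more similar)."""
    return int(np.count_nonzero(h1 != h2))

def best_match(query_feat, library):
    """Return the guest identifier whose stored feature data has the
    highest cosine similarity with the query features."""
    return max(library, key=lambda gid: cosine_similarity(query_feat, library[gid]))
```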
In this embodiment, the face feature data of the face image is extracted through the neural network model, and the face image in the image frame sent by the visitor interaction device is then matched to a visitor image through the similarity between the corresponding face feature data, making the visitor identity recognition result more accurate.
After querying for the visitor image matching the face image according to the face feature data, the server can obtain the guest identifier that has a corresponding relationship with that visitor image, and use the guest identifier as the visitor identity recognition result corresponding to the face image in the image frame sent by the visitor interaction device.
In the above embodiment, identity recognition is performed on the basis of face feature data: the face features extracted from the received image frame are matched against the visitor images reflecting the visitors' real faces to complete identity recognition, which guarantees the accuracy of visitor identity recognition.
In one embodiment, step S406 includes: determining a corresponding visitor attribute according to the visitor identity recognition result; looking up a visitor interaction content template corresponding to the visitor attribute; and combining the visitor identity recognition result with the visitor interaction content template to obtain the visitor interaction content.
Here, a visitor attribute is data reflecting a characteristic of the visitor, for example visitor gender or visitor social status, etc. A visitor interaction content template is a preset template used for generating visitor interaction content. For example, the visitor interaction content template corresponding to text-type visitor interaction content may be a text pattern, etc.
Specifically, the server may set visitor interaction content templates in advance. A visitor interaction content template may be a unified template, such as "XXX, welcome!"; it may also be a personalized visitor interaction content template set in correspondence with a visitor attribute. A visitor interaction content template may be a text template, an audio template, a video template, etc.
After obtaining the visitor identity recognition result, the server obtains the corresponding visitor data according to the visitor identity recognition result, extracts the visitor attribute from the visitor data, looks up the visitor interaction content template that has a corresponding relationship with the visitor attribute, and adds the visitor identity recognition result into the found visitor interaction content template to generate the visitor interaction content.
For example, the server recognizes that the visitor data corresponding to the face image in the image frame is: name "Abc", gender "female"; the visitor interaction content template related to the visitor attribute is "Mrs. X, welcome!"; the visitor interaction content is then "Mrs. A, welcome!".
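A sketch of this template-filling step, with a hypothetical attribute-to-template table in which `{name}` marks the slot that the identity recognition result fills in:

```python
# Hypothetical attribute-to-template table; keys are visitor-attribute
# tuples, with the empty tuple as the unified fallback template.
TEMPLATES = {
    ("female",): "Mrs. {name}, welcome!",
    ("male",):   "Mr. {name}, welcome!",
    ():          "{name}, welcome!",
}

def interaction_content(name, gender=None):
    """Combine the recognized name with the template matching the
    visitor attribute to produce the visitor interaction content."""
    key = (gender,) if gender in ("female", "male") else ()
    return TEMPLATES[key].format(name=name)

greeting = interaction_content("Abc", gender="female")
```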
In this embodiment, personalized visitor interaction content is dynamically generated according to the visitor attribute, making the interaction content richer and its presentation more varied.
In one embodiment, in the interaction control method, the visitor spatial state includes a visitor depth distance and a visitor orientation. Step S408 includes: determining the visitor depth distance according to the difference measure between the face contour size of the face image corresponding to the visitor identity recognition result in the image frame and the face contour size of the preset face template; and determining the visitor orientation according to the difference measure between the face deflection angle of the face image corresponding to the visitor identity recognition result in the image frame and the face deflection angle of the preset face template.
Specifically, the preset face template is a reference face image set in advance. The preset face template may be an image captured with the face of a reference natural person placed at the picture center under the camera view of the visitor interaction device. The visitor depth distance is the distance of the visitor's position from the camera. The visitor orientation reflects the bearing of the visitor relative to the camera, and may specifically be the visitor's face orientation.
The server may detect in advance the face contour size and the face deflection angle in the preset face template, as well as the distance of the reference person from the camera when the preset face template was captured, and store the obtained face contour size, face deflection angle, and distance.
After completing identity recognition of the face image included in the image frame, the server can detect the face contour size of that face image, compare it with the face contour size of the preset face template, and calculate the visitor depth distance, i.e. the distance of the natural person corresponding to the face image from the camera, based on an optical imaging computation model.
The server can also detect the face deflection angle of the face image, compare it with the face deflection angle of the preset face template, determine the difference measure of the face image relative to the preset face template, and determine from the difference measure the deflection angle of the face image relative to the preset face template, thereby obtaining the visitor orientation, i.e. the bearing of the natural person corresponding to the face image relative to the camera.
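Under a pinhole-camera assumption, the optical imaging computation reduces to similar triangles: apparent face size scales inversely with distance. The calibration numbers below are placeholders for the stored template measurements:

```python
def visitor_depth(face_width_px, template_width_px, template_depth_m):
    """Similar-triangles depth estimate: the observed face contour width
    shrinks in proportion as the visitor moves away, so
    depth = template depth * (template width / observed width)."""
    return template_depth_m * template_width_px / face_width_px

def visitor_orientation(face_yaw_deg, template_yaw_deg=0.0):
    """Visitor orientation as the deflection of the observed face
    relative to the (frontal) template's deflection angle."""
    return face_yaw_deg - template_yaw_deg

# Face appears half as wide as in the template captured at 1 m -> 2 m away.
depth = visitor_depth(face_width_px=60, template_width_px=120,
                      template_depth_m=1.0)
```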
In this embodiment, taking the natural-person spatial state reflected by the preset face template as the standard, the face image included in the image frame is compared with the preset face template to obtain an accurate visitor spatial state, so as to realize planning of the visitor interaction device's move-toward-visitor action according to the visitor spatial state.
In one embodiment, step S410 includes: extracting obstacle feature data of the environment image included in the image frame; generating an obstacle distribution map according to the obstacle feature data; planning a movement path in the obstacle distribution map according to the visitor spatial state; and generating, according to the movement path, the move-toward-visitor instruction applicable to the visitor interaction device.
Here, the representation of the obstacle distribution map may be a grid representation, a geometric information representation, a topological representation, etc. In one embodiment, the server may discretize and digitize the obstacle information of the environment image in the image frame to generate a digital grid map. Specifically, the server may use a rectangular window that moves within the image frame according to a preset direction and preset step size, thereby performing a window scan; during scanning, the obstacle information in the currently scanned window image is extracted and digitized, obtaining obstacle quantization parameters in one-to-one correspondence with the window images and forming the digital grid map. Each window image is a grid cell, and each grid cell corresponds to one obstacle quantization parameter. The obstacle quantization parameter reflects the confidence that the grid cell contains an obstacle; a higher value means the grid cell is more likely to contain an obstacle.
After generating the obstacle distribution map, the server may determine the end position, and the orientation of the visitor interaction device at the end position, according to the visitor spatial state. The server then selects from the obstacle distribution map a movement path reaching the end position. Specifically, according to the obstacle quantization parameters corresponding to the grid cells included in the digital grid map, the server may select contiguous grid cells whose corresponding obstacle quantization parameters are small, forming the movement path.
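Selecting contiguous low-obstacle grid cells can be sketched as a breadth-first search over cells whose obstacle quantization parameter is below a threshold; the 0.5 threshold and 4-connectivity are assumptions, and a real planner might weight cells by their parameter instead:

```python
from collections import deque

def plan_path(grid, start, goal, threshold=0.5):
    """Shortest 4-connected path through grid cells whose obstacle
    quantization parameter is below the threshold (likely free space).
    Returns a list of (row, col) cells, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path, node = [], goal
            while node is not None:          # walk back to the start
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] < threshold and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # no obstacle-free path exists

grid = [[0.1, 0.9, 0.1],   # 0.9 cells: high confidence of an obstacle
        [0.1, 0.9, 0.1],
        [0.1, 0.1, 0.1]]
path = plan_path(grid, (0, 0), (0, 2))
```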
The server may then determine, according to the movement path, the travel data for the visitor interaction device's next action. The travel data may include travel direction and travel speed, etc. The server may then convert the determined travel data, by compiling according to the instruction protocol adapted to the visitor interaction device, into the move-toward-visitor instruction applicable to the visitor interaction device.
In one embodiment, the server may receive a sequence of image frames continuously sent by the visitor interaction device; after identity recognition of the face images included in the frames is completed, it tracks the successfully recognized face image across the image frame sequence, determines a corresponding visitor spatial state for each image frame in the sequence, and adjusts the movement path accordingly based on the visitor spatial state.
In this embodiment, path planning is performed through the generated obstacle distribution map, which guarantees the accuracy of path planning and obstacle avoidance as the visitor interaction device moves toward the visitor.
In one embodiment, after step S412, the interaction control method further includes: obtaining an interaction instruction initiated according to the visitor identity recognition result; determining a corresponding interaction object identifier according to the interaction instruction; and establishing a communication connection between the visitor interaction device and the device corresponding to the interaction object identifier.
Here, the interaction instruction is an instruction used to trigger an interaction. The interaction object identifier uniquely identifies one interaction object. The interaction object may specifically be a third-party user, or a third-party service device, etc.
Specifically, visitor images corresponding to visitors, together with visiting items, may be stored on the server. After the face image included in the received image frame is successfully recognized, the server can query for the visiting item corresponding to the visitor identity recognition result, and trigger the interaction instruction once the visiting item is found. A visiting item is, for example, holding a voice call with a third-party user, or transmitting data with a third-party service device.
For example, after performing identity recognition on the visitor and confirming that the visitor is a pre-registered visiting person, the server may determine the interaction object the visitor needs to access. The server can establish a call link between the visitor interaction device and the terminal corresponding to the interaction object, so that the visitor can interact with the visited interaction object directly through the visitor interaction device. The server may also send the contact information of the visited interaction object to the visitor interaction device, to be shown to the user by the visitor interaction device.
In one embodiment, the visitor interaction device further includes a sound collection device. After the server establishes the communication connection between the visitor interaction device and the device corresponding to the interaction object identifier, the visitor interaction device can call the sound collection device to collect visitor audio data, and send the collected audio data to the server for response.
In this embodiment, third-party services can be provided for the visitor according to the interaction instruction automatically initiated from the visitor identity recognition result, improving the practicality and service coverage of the visitor interaction device.
As shown in Fig. 5, in one embodiment, an interaction control method is provided. This embodiment is mainly illustrated by applying the method to the electronic device in Fig. 2 above. Referring to Fig. 5, the interaction control method specifically includes the following steps:
S502, collecting an image frame.
In one embodiment, the electronic device can collect image frames under the camera's current field of view through the camera and obtain the collected image frames. Here, the camera's field of view may change with changes in the posture and position of the visitor interaction device.
In one embodiment, the electronic device may specifically collect image frames at a fixed or dynamic frame rate and obtain the collected image frames. Here, the fixed or dynamic frame rate enables the image frames to form a continuous dynamic picture when played at that frame rate, so that the electronic device can track a specific object in the continuous dynamic picture.
In one embodiment, the electronic device can call the camera to start a camera scanning mode and scan the target object under the current field of view in real time; the electronic device can detect whether a face image exists in the picture under the current field of view, and if so, generate image frames in real time at a certain frame rate and obtain the generated image frames.
S504, obtaining, according to the image frame, visitor interaction content associated with the visitor identity recognition result obtained by performing face recognition on the image frame.
In one embodiment, the electronic device may input the collected image frame into one classifier, or into multiple classifiers at once, to detect whether a face image exists in the image frame. Here, the classifier for detecting whether a face image exists in an image frame is a classifier obtained by training with face images and non-face images as training data.
Further, the electronic device may also use a rectangular window that moves within the image frame according to a preset direction and preset step size, thereby performing a window scan, and extract during scanning the face image in the currently scanned window image. After extracting face image units, the electronic device can filter out face image units whose overlapping region exceeds a preset overlap threshold, and obtain the face image from the face image units retained after filtering.
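The overlap filtering of face image units can be sketched as follows; measuring overlap as intersection area over the smaller box's area, and the 0.5 threshold, are illustrative choices:

```python
def overlap_ratio(a, b):
    """Intersection area of two (x0, y0, x1, y1) boxes divided by the
    smaller box's area."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    iw = max(0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    smaller = min((ax1 - ax0) * (ay1 - ay0), (bx1 - bx0) * (by1 - by0))
    return inter / smaller if smaller else 0.0

def filter_overlapping(boxes, max_overlap=0.5):
    """Keep a face image unit only if it does not overlap any already
    retained unit by more than the threshold."""
    kept = []
    for box in boxes:
        if all(overlap_ratio(box, k) <= max_overlap for k in kept):
            kept.append(box)
    return kept

kept = filter_overlapping([(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)])
```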
In one embodiment, a visitor image library is provided on the electronic device, and the visitor image library includes a number of visitor images. After obtaining the collected image frame, the electronic device can compare the face image included in the collected image frame with the visitor images included in the visitor image library, and detect whether the collected image frame matches a visitor image. When the collected image frame matches a visitor image, the electronic device can determine that the face image included in the image frame and the visitor image are images of the same person, and obtain the visitor identity information corresponding to that visitor image as the visitor identity recognition result.
In one embodiment, to detect whether the collected image frame matches a visitor image, the electronic device may specifically calculate the similarity between the collected image frame and the visitor image. The electronic device can first extract respective features of the collected image frame and the visitor image, and then calculate the difference between the two features: the larger the difference between the features, the lower the similarity; the smaller the difference, the higher the similarity.
In one embodiment, after obtaining the collected image frame, the electronic device can extract the image data included in the image frame and detect whether the image data includes face feature data. If the electronic device detects that the image data includes face feature data, it determines that the image frame includes a face image. The electronic device can further extract the face feature data from the image data, then compare the extracted face feature data with the face feature data corresponding to each visitor image in the visitor image library to obtain the visitor identity recognition result.
In one embodiment, after recognition yields the visitor identity recognition result, the electronic device can pull the visitor interaction content associated with the guest identifier corresponding to the visitor identity recognition result. The visitor interaction content may be directly outputtable visitor interaction content, or a visitor interaction content template to be filled in.
In one embodiment, visitor data is stored on the electronic device. The visitor data may be visitor personal information uploaded by the visitor or an administrator, or visitor internet data crawled by the electronic device from the internet. Further, the electronic device may specifically perform semantic analysis or word-frequency analysis on the visitor data to find keywords that can represent the visitor, and form visitor labels from those keywords. The electronic device may set visitor interaction content corresponding to visitor labels. After recognition yields the visitor identity recognition result, it can pull the visitor interaction content associated with the visitor label corresponding to the visitor identity recognition result.
S506, obtains according to picture frame and tends to visitor's move;Tend to visitor's move according to visitor's spatialityGeneration, facial image of visitor's spatiality according to corresponding to visitor's identification result in picture frame and default face template itBetween measures of dispersion determine.
In one embodiment, electronic equipment will have been recognized after the facial image that picture frame includes is identifiedInto facial image be compared with default face template, calculate the face figure corresponding to visitor's identification result in picture framePicture and the measures of dispersion between default face template.People of the electronic equipment in picture frame is calculated corresponding to visitor's identification resultDuring measures of dispersion between face image and default face template, facial image and the respective feature of default face template can be first extracted,So as to calculate the measures of dispersion between two features.Electronic equipment is in calculating obtains picture frame corresponding to visitor's identification resultAfter measures of dispersion between facial image and default face template, visitor's spatiality is obtained further according to the measures of dispersion.
In one embodiment, electronic equipment is it is determined that identify the visitor corresponding to the facial image of identity in picture frameIt after spatiality, can determine that visitor's interactive device is moved to the mobile route of visitor according to visitor's spatiality, and determine to visitThe traveling data of objective interactive device next step action.The traveling data may include direct of travel and gait of march etc..Electronic equipmentThe traveling data of determination can be converted into trend visitor's movement further according to the compiling of instruction agreement being adapted with visitor's interactive deviceInstruction.
In one embodiment, electronic equipment can continuous acquisition picture frame, and in the facial image included to picture frameComplete after identification, track identification successfully facial image, and to each figure in picture frame sequence in picture frame sequenceAs frame determines corresponding visitor's spatiality, mobile route is accordingly adjusted according to visitor's spatiality, further according to mobile routeCorresponding adjustment tends to visitor's move.
S508: move according to the move-toward-visitor instruction, and output the visitor interaction content.
Specifically, after obtaining the visitor interaction content and the move-toward-visitor instruction, the electronic device moves toward the visitor according to the move-toward-visitor instruction and outputs the visitor interaction content.
With the above interaction control method, once an image frame is acquired, the visitor interaction content associated with the visitor identity recognition result obtained by performing face recognition on the image frame can be obtained automatically, together with the move-toward-visitor instruction generated from the visitor spatial state, which is determined by the measure of difference between the face image corresponding to the visitor identity recognition result in the image frame and a preset face template. The device can thus adjust its position locally and automatically according to the move-toward-visitor instruction to complete the interaction with the visitor, avoiding tedious manual operation and greatly improving the efficiency of interacting with visitors.
In one embodiment, the step of obtaining, according to the image frame, the visitor interaction content associated with the visitor identity recognition result obtained by performing face recognition on the image frame includes: extracting face feature data of the face image contained in the image frame; querying, according to the face feature data, a visitor image that matches the face image; obtaining the visitor identity recognition result according to the visitor image; and obtaining the visitor interaction content associated with the visitor identity recognition result.
Specifically, a visitor image library is stored on the electronic device. The visitor image library contains a number of visitor images, and a visitor identifier is stored in correspondence with each visitor image. A visitor identifier uniquely identifies one visitor.
In one embodiment, the step of extracting the face feature data of the face image contained in the image frame includes: inputting the image frame into a neural network model; obtaining a feature map, corresponding to the image frame, output by the neural network model; and determining the face feature data of the face image contained in the image frame according to the feature map.
The electronic device may input the visitor images in the visitor image library into the neural network model one by one, obtain the feature maps output by the neural network model, extract the face feature data corresponding to each visitor image from its feature map, and store the extracted face feature data in a database or file in correspondence with the visitor image. The electronic device may then input an acquired image frame into the neural network model, extract face feature data from the output feature map, and read the face feature data corresponding to the visitor images from the database or file for comparison.
The electronic device may obtain the feature map output by the last network layer. A feature map consists of the responses obtained by applying nonlinear transformation operators to the input image. The electronic device can determine the face feature data corresponding to the input image from this feature map.
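As a toy illustration of how a feature map arises from nonlinear transformation operators applied to the input image, the sketch below hand-rolls one convolution-plus-ReLU layer in NumPy and pools each response map into a scalar feature. A real deployment would use a trained deep network rather than these fixed kernels; nothing here is taken from the disclosure itself:

```python
import numpy as np

def feature_map(image, kernel):
    """Valid 2-D convolution followed by a ReLU nonlinearity."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # nonlinear response map

def face_features(image, kernels):
    """Global-average-pool each response map into one feature value."""
    return np.array([feature_map(image, k).mean() for k in kernels])
```

The resulting feature vector plays the role of the face feature data compared against the visitor image library.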
In one embodiment, extracting the face feature data of the face image contained in the image frame includes: determining the proportion of the image frame occupied by the face image, and extracting the face feature data of face images whose proportion exceeds a preset proportion; and/or determining the sharpness of the face image contained in the image frame, and extracting the face feature data of face images whose sharpness exceeds a sharpness threshold.
Specifically, the electronic device may identify the number of pixels contained in each face image in the image frame, detect the proportion of that number among all pixels contained in the image frame to obtain the proportion of the image frame occupied by the face image, and then compare this proportion with a preset proportion. The electronic device may judge a face image whose proportion exceeds the preset proportion to be a qualified face image and extract its face feature data.
The electronic device may also detect whether the sharpness of each face image in the image frame exceeds a preset sharpness threshold. The sharpness reflects how clearly the fine details and boundaries of the face image appear in the image frame. The electronic device may convert the selected image frame into a grayscale image, detect the rate of gray-level change of the grayscale image, and determine the sharpness from that rate: the faster the gray level changes at a location, the higher the sharpness; the slower it changes, the lower the sharpness.
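The two quality checks described above, the proportion of the frame occupied by the face and the sharpness measured from the rate of gray-level change, can be sketched as follows. The thresholds `min_ratio` and `min_sharpness`, and the bounding-box face representation, are illustrative placeholders rather than values from this disclosure:

```python
import numpy as np

def face_area_ratio(face_box, frame_shape):
    """Fraction of frame pixels covered by the face bounding box (x, y, w, h)."""
    x, y, w, h = face_box
    return (w * h) / (frame_shape[0] * frame_shape[1])

def sharpness(gray):
    """Mean magnitude of horizontal and vertical gray-level differences."""
    gx = np.abs(np.diff(gray.astype(float), axis=1)).mean()
    gy = np.abs(np.diff(gray.astype(float), axis=0)).mean()
    return gx + gy

def is_usable(face_box, gray_frame, min_ratio=0.05, min_sharpness=4.0):
    """Keep a face only when it is large enough and sharp enough."""
    return (face_area_ratio(face_box, gray_frame.shape) > min_ratio
            and sharpness(gray_frame) > min_sharpness)
```

Only faces passing `is_usable` would then be forwarded to feature extraction.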
In one embodiment, the electronic device may also extract feature points of the face image in the image frame and extract the face feature data of face images whose number of feature points exceeds a preset feature-point count threshold. A feature point is a point of the face image in the image frame that has a distinctive characteristic and effectively reflects an essential feature of the image, and thus has the ability to mark face characteristics, for example the points corresponding to facial organs. The feature-point count threshold may be set as required.
In the above embodiment, the face feature data of a face image are extracted only when the proportion of the image frame occupied by the face image exceeds the preset proportion and/or its sharpness exceeds the preset sharpness threshold, thereby guaranteeing the quality of the extracted face feature data.
In one embodiment, the step of querying, according to the face feature data, a visitor image that matches the face image includes: comparing the face feature data with the face feature data corresponding to each visitor image in the visitor image library; and selecting, as the visitor image matching the face image, the visitor image in the visitor image library whose corresponding face feature data have the highest similarity to the determined face feature data.
When comparing the face feature data corresponding to the acquired image frame with the face feature data corresponding to each visitor image in the visitor image library, the electronic device may calculate the difference between the two sets of face feature data: the larger the difference, the lower the similarity; the smaller the difference, the higher the similarity. The similarity may use cosine similarity, or the Hamming distance between the perceptual hash values of the images.
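A minimal sketch of the library lookup using cosine similarity, one of the similarity measures named above; the gallery layout (a dict from visitor identifier to feature vector) is an assumption made for illustration:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_visitor(query_feat, gallery):
    """Return the (visitor_id, similarity) pair with the highest similarity.

    gallery: dict mapping visitor identifier -> stored feature vector.
    """
    best_id, best_sim = None, -1.0
    for vid, feat in gallery.items():
        sim = cosine_similarity(query_feat, feat)
        if sim > best_sim:
            best_id, best_sim = vid, sim
    return best_id, best_sim
```

In practice a minimum-similarity threshold would also be applied so that unknown faces are rejected rather than matched to the nearest enrolled visitor.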
In the present embodiment, the face feature data of the face image are extracted by a neural network model, and the face image in the image frame sent by the visitor interaction device is matched to a visitor image by the similarity between the corresponding face feature data, making the visitor identity recognition result more accurate.
After querying the visitor image that matches the face image according to the face feature data, the electronic device may obtain the visitor identifier that corresponds to that visitor image, and use the visitor identifier as the visitor identity recognition result corresponding to the face image in the image frame sent by the visitor interaction device.
In the above embodiment, visitor identity recognition is performed on the basis of face feature data: identity recognition is completed by matching the face features extracted from the acquired image frame against visitor images that reflect the visitors' real faces, which guarantees the accuracy of visitor identity recognition.
In one embodiment, the step of obtaining the visitor interaction content associated with the visitor identity recognition result includes: determining a corresponding visitor attribute according to the visitor identity recognition result; looking up a visitor interaction content template corresponding to the visitor attribute; and combining the visitor identity recognition result with the visitor interaction content template to obtain the visitor interaction content.
Specifically, after obtaining the visitor identity recognition result, the electronic device obtains the corresponding visitor profile according to the visitor identity recognition result, extracts the visitor attribute from the visitor profile, looks up the visitor interaction content template that corresponds to the visitor attribute, and adds the visitor identity recognition result into the found visitor interaction content template to generate the visitor interaction content.
In the present embodiment, personalized visitor interaction content is generated dynamically according to the visitor attribute, making the interaction with the visitor richer and the presentation of the interaction content more varied.
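Combining the recognition result with an attribute-specific template can be as simple as the sketch below; the attribute names and template strings are invented examples, not content from this disclosure:

```python
# Hypothetical attribute -> template table.
TEMPLATES = {
    "vip":     "Welcome back, {name}! Your meeting room is ready.",
    "regular": "Hello {name}, nice to see you again.",
    "default": "Hello {name}, welcome.",
}

def interaction_content(identity):
    """Fill the template matching the visitor attribute with the visitor's name.

    identity: dict with at least "name"; "attribute" is optional and
    falls back to the default template when absent or unknown.
    """
    template = TEMPLATES.get(identity.get("attribute", "default"),
                             TEMPLATES["default"])
    return template.format(name=identity["name"])
```

The filled string would then be spoken or displayed by the visitor interaction device.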
In one embodiment, the visitor spatial state includes a visitor depth distance and a visitor orientation, determined as follows: the visitor depth distance is determined from the measure of difference between the face contour size of the face image corresponding to the visitor identity recognition result in the image frame and the face contour size of the preset face template; the visitor orientation is determined from the measure of difference between the face deflection angle of the face image corresponding to the visitor identity recognition result in the image frame and the face deflection angle of the preset face template.
Specifically, after recognizing the face image contained in the image frame, the electronic device may detect the face contour size of the face image, compare it with the face contour size of the preset face template, and calculate the visitor depth distance, i.e. the distance of the natural person corresponding to the face image from the camera, based on an optical imaging model. The electronic device may also detect the face deflection angle of the face image, compare it with the face deflection angle of the preset face template to determine the measure of difference of the face image relative to the preset face template, and determine the visitor orientation, i.e. the bearing of the natural person corresponding to the face image relative to the camera, from that measure of difference.
In the present embodiment, the spatial state of a natural person reflected by the preset face template is taken as the standard, and the face image contained in the image frame is compared with the preset face template to obtain an accurate visitor spatial state, so that the move-toward-visitor action of the visitor interaction device can be planned according to the visitor spatial state.
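Under a pinhole-camera model, the apparent face width in pixels scales inversely with distance, so the depth follows from the ratio of the template's contour size to the observed one; likewise, the orientation is the deflection-angle difference relative to the template, assumed here to be frontal. A minimal sketch, where `template_distance_m` is assumed to be the distance at which the preset face template was captured:

```python
def visitor_depth(template_width_px, template_distance_m, observed_width_px):
    """Pinhole model: apparent face width scales inversely with distance."""
    return template_distance_m * template_width_px / observed_width_px

def visitor_orientation(observed_yaw_deg, template_yaw_deg=0.0):
    """Yaw of the visitor relative to the (assumed frontal) preset template."""
    return observed_yaw_deg - template_yaw_deg
```

For example, a face whose contour appears half as wide as the template's is twice as far from the camera as the template's capture distance.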
In one embodiment, the move-toward-visitor instruction may be obtained by the following steps: extracting obstacle feature data of the environment image contained in the image frame; generating an obstacle distribution map according to the obstacle feature data; planning a movement path in the obstacle distribution map according to the visitor spatial state; and generating the move-toward-visitor instruction according to the movement path.
In one embodiment, the electronic device may discretize and digitize the obstacle information of the environment image in the image frame into a digital raster map. After generating the obstacle distribution map, the electronic device may determine, according to the visitor spatial state, an end position and the orientation of the visitor interaction device at the end position, and then select from the obstacle distribution map a movement path that reaches the end position.
According to the movement path, the electronic device may then determine travel data for its next action. The travel data may include a travel direction, a travel speed, and the like. The electronic device may then compile the determined travel data, according to an instruction protocol adapted to itself, into the move-toward-visitor instruction.
In one embodiment, the electronic device may acquire image frames continuously and, after recognizing the face images contained in the image frames, track the successfully recognized face image across the image frame sequence, determine a corresponding visitor spatial state for each image frame in the sequence, and adjust the movement path accordingly based on the visitor spatial state.
In the present embodiment, path planning is performed with the generated obstacle distribution map, guaranteeing the accuracy of the path planning and obstacle avoidance of the visitor interaction device as it moves toward the visitor.
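Path planning over a digital raster map can be sketched with a breadth-first search on a 4-connected occupancy grid (1 = obstacle, 0 = free). A practical system would likely use A* or a costmap-based planner, so this is only an illustration of the planning step under that assumption:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid, or None if unreachable.

    grid: list of rows, 1 = obstacle, 0 = free.
    start, goal: (row, col) tuples in free cells.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:     # walk back through predecessors
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None
```

Each step along the returned path would then be converted into travel data (direction and speed) for the next action.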
In one embodiment, after step S508 the interaction control method further includes: obtaining an interaction instruction initiated according to the visitor identity recognition result; determining a corresponding interaction object identifier according to the interaction instruction; and establishing a communication connection with the device corresponding to the interaction object identifier.
Here, the interaction instruction is an instruction used to trigger an interaction. An interaction object identifier uniquely identifies one interaction object. The interaction object may specifically be a third-party user or a third-party service device, among others.
Specifically, visitor images and visit items corresponding to visitors may be stored in the electronic device. After the face image contained in the acquired image frame has been successfully recognized, the electronic device may query the visit item corresponding to the visitor identity recognition result and, once a visit item is found, trigger the interaction instruction. A visit item is, for example, a voice call with a third-party user or a data transfer with a third-party service device.
In the present embodiment, an interaction instruction can be initiated automatically according to the visitor identity recognition result to provide third-party services for the visitor, improving the practicality and service coverage of the visitor interaction device.
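Triggering the interaction instruction from a stored visit item amounts to a lookup keyed by the recognized visitor; the schedule entries and field names below are invented for illustration only:

```python
# Hypothetical visit-item table keyed by visitor identifier.
VISIT_SCHEDULE = {
    "visitor_001": {"type": "voice_call",    "target": "host_1024"},
    "visitor_002": {"type": "data_transfer", "target": "service_7"},
}

def trigger_interaction(visitor_id):
    """Look up the visitor's scheduled item and return the interaction
    to initiate, or None when nothing is scheduled for this visitor."""
    item = VISIT_SCHEDULE.get(visitor_id)
    if item is None:
        return None
    return (item["type"], item["target"])
```

The returned pair would then be used to establish the communication connection with the corresponding third-party user or service device.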
Fig. 6 shows a device architecture diagram for implementing the interaction control method in one embodiment. Referring to Fig. 6, the visitor interaction device includes a camera for acquiring image frames, a display screen or loudspeaker for presenting interaction content, a wireless management module for exchanging data with a server, and an action module for controlling the movement of the visitor interaction device. The server includes a wireless management module for exchanging data with the visitor interaction device or with third-party devices, a working module for performing visitor identity recognition, a database module for storing visitor images, an assembly module for generating visitor interaction content related to visitor attributes, a network management module for managing data, and a third-party interaction module for establishing communication connections with third-party devices.
In the above embodiment, the processing of the acquired image frames may be performed on the visitor interaction device or on the server.
Fig. 7 shows a sequence diagram of the interaction control method in one embodiment. Referring to Fig. 7, the visitor interaction device acquires image frames from the real-world scene of the visitor's visit and sends the acquired image frames to the server. After receiving an image frame, the server inputs it into a neural network model and obtains the feature map, corresponding to the image frame, output by the neural network model. The server may then determine the proportion of the image frame occupied by the face image contained in it and extract from the feature map the face feature data of face images whose proportion exceeds a preset proportion; and/or determine the sharpness of the face image contained in the image frame and extract from the feature map the face feature data of face images whose sharpness exceeds a sharpness threshold.
After extracting the face feature data, the server may compare them with the face feature data corresponding to each visitor image in the visitor image library, select the visitor image in the visitor image library whose corresponding face feature data have the highest similarity to the determined face feature data as the visitor image matching the face image, and obtain the visitor identity recognition result according to that visitor image. The server may then determine the corresponding visitor attribute according to the visitor identity recognition result, look up the visitor interaction content template corresponding to the visitor attribute, combine the visitor identity recognition result with the visitor interaction content template to obtain the visitor interaction content, and send the visitor interaction content to the visitor interaction device.
The server may also determine the visitor depth distance from the measure of difference between the face contour size of the face image corresponding to the visitor identity recognition result in the image frame and the face contour size of the preset face template, and determine the visitor orientation from the measure of difference between the face deflection angle of the face image corresponding to the visitor identity recognition result in the image frame and the face deflection angle of the preset face template. The server then extracts the obstacle feature data of the environment image contained in the image frame, generates an obstacle distribution map according to the obstacle feature data, plans a movement path in the obstacle distribution map according to the visitor spatial state, generates from the movement path a move-toward-visitor instruction adapted to the visitor interaction device, and sends the move-toward-visitor instruction to the visitor interaction device.
After obtaining the visitor identity recognition result by performing face recognition on the image frame, the server may also obtain an interaction instruction initiated according to the visitor identity recognition result, determine the corresponding interaction object identifier according to the interaction instruction, and establish a communication connection between the visitor interaction device and the device corresponding to the interaction object identifier.
The above data processing performed on the server may also be performed on the visitor interaction device.
As shown in Fig. 8, in one embodiment an interaction control apparatus 800 is provided, including: a receiving module 801, a recognition module 802, an obtaining module 803, a determining module 804, an instruction generating module 805 and a sending module 806.
The receiving module 801 is configured to receive image frames sent by the visitor interaction device.
The recognition module 802 is configured to perform face recognition on the image frame to obtain a visitor identity recognition result.
The obtaining module 803 is configured to obtain the visitor interaction content associated with the visitor identity recognition result.
The determining module 804 is configured to determine the visitor spatial state according to the measure of difference between the face image corresponding to the visitor identity recognition result in the image frame and a preset face template.
The instruction generating module 805 is configured to generate, according to the visitor spatial state, a move-toward-visitor instruction adapted to the visitor interaction device.
The sending module 806 is configured to send the visitor interaction content and the move-toward-visitor instruction to the visitor interaction device, so that the visitor interaction device executes the move-toward-visitor instruction and outputs the visitor interaction content.
With the above interaction control apparatus 800, after an image frame sent by the visitor interaction device is received, face recognition is performed on the image frame to obtain the visitor identity recognition result, whereupon the visitor interaction content for interacting with the visitor can be obtained. From the measure of difference between the face image corresponding to the visitor identity recognition result in the image frame and a preset face template, the visitor spatial state can be determined, and a move-toward-visitor instruction that causes the visitor interaction device to move toward the visitor is generated. The visitor interaction content and the move-toward-visitor instruction are sent to the visitor interaction device, which can then automatically adjust its position according to the move-toward-visitor instruction and complete the interaction with the visitor, avoiding tedious manual operation and greatly improving the efficiency of interacting with visitors. Moreover, after sending the image frame, the visitor interaction device can automatically complete the interaction with the visitor according to the received move-toward-visitor instruction and visitor interaction content, which greatly reduces the integration difficulty and maintenance cost of the visitor interaction device.
In one embodiment, the recognition module 802 is further configured to extract the face feature data of the face image contained in the image frame; query, according to the face feature data, a visitor image that matches the face image; and obtain the visitor identity recognition result according to the visitor image.
In the present embodiment, visitor identity recognition is performed on the basis of face feature data: identity recognition is completed by matching the face features extracted from the received image frame against visitor images that reflect the visitors' real faces, which guarantees the accuracy of visitor identity recognition.
In one embodiment, the recognition module 802 is further configured to determine the proportion of the image frame occupied by the face image contained in it and extract the face feature data of face images whose proportion exceeds a preset proportion; and/or determine the sharpness of the face image contained in the image frame and extract the face feature data of face images whose sharpness exceeds a sharpness threshold.
In the present embodiment, the face feature data of a face image are extracted only when the proportion of the image frame occupied by the face image exceeds the preset proportion and/or its sharpness exceeds the preset sharpness threshold, thereby guaranteeing the quality of the extracted face feature data.
In one embodiment, the recognition module 802 is further configured to input the image frame into a neural network model; obtain the feature map, corresponding to the image frame, output by the neural network model; determine the face feature data of the face image contained in the image frame according to the feature map; compare the face feature data with the face feature data corresponding to each visitor image in the visitor image library; and select, as the visitor image matching the face image, the visitor image in the visitor image library whose corresponding face feature data have the highest similarity to the determined face feature data.
In the present embodiment, the face feature data of the face image are extracted by a neural network model, and the face image in the image frame sent by the visitor interaction device is matched to a visitor image by the similarity between the corresponding face feature data, making the visitor identity recognition result more accurate.
In one embodiment, the obtaining module 803 determines a corresponding visitor attribute according to the visitor identity recognition result; looks up a visitor interaction content template corresponding to the visitor attribute; and combines the visitor identity recognition result with the visitor interaction content template to obtain the visitor interaction content.
In the present embodiment, personalized visitor interaction content is generated dynamically according to the visitor attribute, making the interaction with the visitor richer and the presentation of the interaction content more varied.
In one embodiment, the visitor spatial state includes a visitor depth distance and a visitor orientation. The determining module 804 is further configured to determine the visitor depth distance from the measure of difference between the face contour size of the face image corresponding to the visitor identity recognition result in the image frame and the face contour size of the preset face template, and to determine the visitor orientation from the measure of difference between the face deflection angle of the face image corresponding to the visitor identity recognition result in the image frame and the face deflection angle of the preset face template.
In the present embodiment, the spatial state of a natural person reflected by the preset face template is taken as the standard, and the face image contained in the image frame is compared with the preset face template to obtain an accurate visitor spatial state, so that the move-toward-visitor action of the visitor interaction device can be planned according to the visitor spatial state.
In one embodiment, the instruction generating module 805 is further configured to extract the obstacle feature data of the environment image contained in the image frame; generate an obstacle distribution map according to the obstacle feature data; plan a movement path in the obstacle distribution map according to the visitor spatial state; and generate, according to the movement path, a move-toward-visitor instruction adapted to the visitor interaction device.
In the present embodiment, path planning is performed with the generated obstacle distribution map, guaranteeing the accuracy of the path planning and obstacle avoidance of the visitor interaction device as it moves toward the visitor.
As shown in Fig. 9, in one embodiment the interaction control apparatus 800 further includes an interaction module 807.
The interaction module 807 is configured to obtain an interaction instruction initiated according to the visitor identity recognition result; determine a corresponding interaction object identifier according to the interaction instruction; and establish a communication connection between the visitor interaction device and the device corresponding to the interaction object identifier.
In the present embodiment, an interaction instruction can be initiated automatically according to the visitor identity recognition result to provide third-party services for the visitor, improving the practicality and service coverage of the visitor interaction device.
As shown in Fig. 10, in one embodiment an interaction control apparatus 1000 is provided, including: an acquisition module 1001, a recognition result obtaining module 1002, an instruction obtaining module 1003 and an output module 1004.
The acquisition module 1001 is configured to acquire image frames.
The recognition result obtaining module 1002 is configured to obtain, according to the image frame, the visitor interaction content associated with the visitor identity recognition result obtained by performing face recognition on the image frame.
The instruction obtaining module 1003 is configured to obtain, according to the image frame, a move-toward-visitor instruction; the move-toward-visitor instruction is generated according to the visitor spatial state, which is determined by the measure of difference between the face image corresponding to the visitor identity recognition result in the image frame and a preset face template.
The output module 1004 is configured to move according to the move-toward-visitor instruction and output the visitor interaction content.
With the above interaction control apparatus 1000, once an image frame is acquired, the visitor interaction content associated with the visitor identity recognition result obtained by performing face recognition on the image frame can be obtained automatically, together with the move-toward-visitor instruction generated from the visitor spatial state, which is determined by the measure of difference between the face image corresponding to the visitor identity recognition result in the image frame and a preset face template. The apparatus can thus adjust its position locally and automatically according to the move-toward-visitor instruction to complete the interaction with the visitor, avoiding tedious manual operation and greatly improving the efficiency of interacting with visitors.
In one embodiment, the recognition result obtaining module 1002 is further configured to extract the face feature data of the face image contained in the image frame; query, according to the face feature data, a visitor image that matches the face image; obtain the visitor identity recognition result according to the visitor image; and obtain the visitor interaction content associated with the visitor identity recognition result.
In the present embodiment, visitor identity recognition is performed on the basis of face feature data: identity recognition is completed by matching the face features extracted from the acquired image frame against visitor images that reflect the visitors' real faces, which guarantees the accuracy of visitor identity recognition.
In one embodiment, the recognition result obtaining module 1002 is further configured to determine the proportion of the image frame occupied by the face image contained in it and extract the face feature data of face images whose proportion exceeds a preset proportion; and/or determine the sharpness of the face image contained in the image frame and extract the face feature data of face images whose sharpness exceeds a sharpness threshold.
In the present embodiment, the face feature data of a face image are extracted only when the proportion of the image frame occupied by the face image exceeds the preset proportion and/or its sharpness exceeds the preset sharpness threshold, thereby guaranteeing the quality of the extracted face feature data.
In one embodiment, the recognition result obtaining module 1002 is further configured to input the image frame into a neural network model; obtain the feature map, corresponding to the image frame, output by the neural network model; determine the face feature data of the face image contained in the image frame according to the feature map; compare the face feature data with the face feature data corresponding to each visitor image in the visitor image library; and select, as the visitor image matching the face image, the visitor image in the visitor image library whose corresponding face feature data have the highest similarity to the determined face feature data.
In the present embodiment, the face feature data of the face image are extracted by a neural network model, and the face image in the image frame sent by the visitor interaction device is matched to a visitor image by the similarity between the corresponding face feature data, making the visitor identity recognition result more accurate.
In one embodiment, the recognition result acquisition module 1002 is further configured to determine a corresponding visitor attribute according to the visitor identity recognition result; look up a visitor interaction content template corresponding to the visitor attribute; and combine the visitor identity recognition result with the visitor interaction content template to obtain the visitor interaction content.
In this embodiment, personalized visitor interaction content is dynamically generated according to the visitor attribute, so that the content of the interaction with the visitor is richer and is presented in more varied ways.
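The template lookup and filling step might look like the following sketch; the attribute names and template strings are invented for illustration:

```python
# Illustrative attribute-keyed templates; real content would come from configuration.
TEMPLATES = {
    "vip":     "Welcome back, {name}! Your meeting room is ready.",
    "regular": "Hello {name}, please sign in at the front desk.",
}

def build_interaction_content(identity):
    """Fill the attribute-specific template with the recognized identity."""
    template = TEMPLATES.get(identity["attribute"], "Hello, welcome!")
    return template.format(name=identity["name"])

print(build_interaction_content({"name": "Alice", "attribute": "vip"}))
```

Because the template is selected by attribute and filled from the recognition result, the same pipeline yields different greetings for different visitor classes without any manual scripting.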
In one embodiment, the visitor spatial state includes a visitor depth distance and a visitor orientation. The instruction acquisition module 1003 is further configured to determine the visitor depth distance according to the difference between the facial contour size of the facial image in the image frame corresponding to the visitor identity recognition result and the facial contour size of a preset face template; and to determine the visitor orientation according to the difference between the face deflection angle of the facial image in the image frame corresponding to the visitor identity recognition result and the face deflection angle of the preset face template.
In this embodiment, with the natural-person spatial state reflected by the preset face template as a reference, the facial image included in the image frame is compared with the preset face template to obtain an accurate visitor spatial state, so that the approach-visitor motion of the visitor interactive device can be planned according to the visitor spatial state.
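Under a simple pinhole-camera assumption (apparent face width scales inversely with depth), the two comparisons against the template can be sketched as follows; the calibration constants are illustrative:

```python
REF_FACE_WIDTH_PX = 160.0   # template facial-contour width at the reference depth
REF_DEPTH_M = 1.0           # depth at which the face template was captured
REF_YAW_DEG = 0.0           # template face looks straight at the camera

def visitor_spatial_state(face_width_px, face_yaw_deg):
    """Depth from the size difference to the template (pinhole model),
    orientation from the yaw-angle difference to the template."""
    depth = REF_DEPTH_M * REF_FACE_WIDTH_PX / face_width_px
    orientation = face_yaw_deg - REF_YAW_DEG
    return depth, orientation

# A face appearing at half the template width is roughly twice as far away.
depth, yaw = visitor_spatial_state(face_width_px=80.0, face_yaw_deg=15.0)
print(round(depth, 1), yaw)
```

A production system would calibrate the reference constants per camera and smooth the estimates over several frames, since single-frame contour sizes are noisy.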
In one embodiment, the instruction acquisition module 1003 is further configured to extract obstacle feature data from the environment image included in the image frame; generate an obstacle distribution map according to the obstacle feature data; plan a movement path in the obstacle distribution map according to the visitor spatial state; and generate, according to the movement path, an approach-visitor movement instruction applicable to the visitor interactive device.
In this embodiment, path planning is performed using the generated obstacle distribution map, which ensures accurate path planning and obstacle avoidance while the visitor interactive device moves toward the visitor.
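One way to realize the map-based planning is a breadth-first search over an occupancy grid; the grid layout and the start and goal cells below are illustrative (the goal would come from the visitor spatial state):

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path over an occupancy grid (1 = obstacle).
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # walk the predecessor chain back
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

obstacle_map = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
path = plan_path(obstacle_map, (0, 0), (0, 2))  # route around the wall
print(len(path))
```

The resulting cell sequence would then be translated into the low-level movement instructions the visitor interactive device actually executes.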
As shown in FIG. 11, in one embodiment, the interaction control device 1000 further includes an interactive module 1005.
The interactive module 1005 is configured to obtain an interactive instruction initiated according to the visitor identity recognition result; determine a corresponding interactive object identifier according to the interactive instruction; and establish a communication connection with the device corresponding to the interactive object identifier.
In this embodiment, third-party services can be provided for the visitor through the interactive instruction automatically initiated according to the visitor identity recognition result, improving the practicality and service coverage of the visitor interactive device.
In one embodiment, a computer-readable storage medium stores computer-readable instructions which, when executed by a processor, implement the following steps:
receiving an image frame sent by a visitor interactive device;
performing face recognition on the image frame to obtain a visitor identity recognition result;
obtaining visitor interaction content associated with the visitor identity recognition result;
determining a visitor spatial state according to the difference between the facial image in the image frame corresponding to the visitor identity recognition result and a preset face template;
generating, according to the visitor spatial state, an approach-visitor movement instruction applicable to the visitor interactive device; and
sending the visitor interaction content and the approach-visitor movement instruction to the visitor interactive device, so that the visitor interactive device executes the approach-visitor movement instruction and outputs the visitor interaction content.
When the computer-readable instructions stored on the above computer-readable storage medium are executed, after the image frame sent by the visitor interactive device is received, face recognition is performed on the image frame to obtain the visitor identity recognition result, whereupon the visitor interaction content for interacting with the visitor can be obtained. The visitor spatial state can then be determined according to the difference between the facial image in the image frame corresponding to the visitor identity recognition result and the preset face template, and the approach-visitor movement instruction that causes the visitor interactive device to move toward the visitor is generated. By sending the visitor interaction content together with the approach-visitor movement instruction to the visitor interactive device, the visitor interactive device can automatically adjust its position according to the approach-visitor movement instruction and complete the interaction with the visitor, avoiding tedious manual operation and greatly improving the efficiency of interacting with the visitor. Moreover, after sending an image frame, the visitor interactive device can automatically complete the interaction with the visitor according to the received approach-visitor movement instruction and visitor interaction content, which significantly reduces the integration difficulty and maintenance cost of the visitor interactive device.
In one embodiment, performing face recognition on the image frame to obtain the visitor identity recognition result includes: extracting facial feature data of a facial image included in the image frame; querying, according to the facial feature data, a visitor image matching the facial image; and obtaining the visitor identity recognition result according to the visitor image.
In one embodiment, extracting the facial feature data of the facial image included in the image frame includes: determining the proportion of the image frame occupied by the facial image included in the image frame, and extracting the facial feature data of a facial image whose proportion exceeds a preset proportion; and/or determining the clarity of the facial image included in the image frame, and extracting the facial feature data of a facial image whose clarity exceeds a clarity threshold.
In one embodiment, extracting the facial feature data of the facial image included in the image frame includes: inputting the image frame into a neural network model; obtaining a feature map output by the neural network model corresponding to the image frame; and determining the facial feature data of the facial image included in the image frame according to the feature map. Querying, according to the facial feature data, the visitor image matching the facial image includes: comparing the facial feature data with the facial feature data corresponding to each visitor image in a visitor image library; and selecting the visitor image in the visitor image library whose corresponding facial feature data have the highest similarity with the determined facial feature data, as the visitor image matching the facial image.
In one embodiment, obtaining the visitor interaction content associated with the visitor identity recognition result includes: determining a corresponding visitor attribute according to the visitor identity recognition result; looking up a visitor interaction content template corresponding to the visitor attribute; and combining the visitor identity recognition result with the visitor interaction content template to obtain the visitor interaction content.
In one embodiment, the visitor spatial state includes a visitor depth distance and a visitor orientation. Determining the visitor spatial state according to the difference between the facial image in the image frame corresponding to the visitor identity recognition result and the preset face template includes: determining the visitor depth distance according to the difference between the facial contour size of the facial image in the image frame corresponding to the visitor identity recognition result and the facial contour size of the preset face template; and determining the visitor orientation according to the difference between the face deflection angle of the facial image in the image frame corresponding to the visitor identity recognition result and the face deflection angle of the preset face template.
In one embodiment, generating, according to the visitor spatial state, the approach-visitor movement instruction applicable to the visitor interactive device includes: extracting obstacle feature data from the environment image included in the image frame; generating an obstacle distribution map according to the obstacle feature data; planning a movement path in the obstacle distribution map according to the visitor spatial state; and generating, according to the movement path, the approach-visitor movement instruction applicable to the visitor interactive device.
In one embodiment, the computer-readable instructions further cause the processor, after sending the visitor interaction content and the approach-visitor movement instruction to the visitor interactive device so that the visitor interactive device executes the approach-visitor movement instruction and outputs the visitor interaction content, to perform the following steps: obtaining an interactive instruction initiated according to the visitor identity recognition result; determining a corresponding interactive object identifier according to the interactive instruction; and establishing a communication connection between the visitor interactive device and the device corresponding to the interactive object identifier.
In one embodiment, a computer-readable storage medium stores computer-readable instructions which, when executed by a processor, implement the following steps:
acquiring an image frame;
obtaining, according to the image frame, visitor interaction content associated with a visitor identity recognition result obtained by performing face recognition on the image frame;
obtaining an approach-visitor movement instruction according to the image frame, the approach-visitor movement instruction being generated according to a visitor spatial state, and the visitor spatial state being determined according to the difference between the facial image in the image frame corresponding to the visitor identity recognition result and a preset face template; and
moving according to the approach-visitor movement instruction, and outputting the visitor interaction content.
When the computer-readable instructions stored on the above computer-readable storage medium are executed, after an image frame is acquired, the visitor interaction content associated with the visitor identity recognition result obtained by performing face recognition on the image frame can be obtained automatically, together with the approach-visitor movement instruction generated from the visitor spatial state determined by the difference between the facial image in the image frame corresponding to the visitor identity recognition result and the preset face template. The interaction with the visitor can thus be completed locally and automatically by adjusting the position according to the approach-visitor movement instruction, avoiding tedious manual operation and greatly improving the efficiency of interacting with the visitor.
In one embodiment, obtaining, according to the image frame, the visitor interaction content associated with the visitor identity recognition result obtained by performing face recognition on the image frame includes: extracting facial feature data of a facial image included in the image frame; querying, according to the facial feature data, a visitor image matching the facial image; obtaining the visitor identity recognition result according to the visitor image; and obtaining the visitor interaction content associated with the visitor identity recognition result.
In one embodiment, extracting the facial feature data of the facial image included in the image frame includes: determining the proportion of the image frame occupied by the facial image included in the image frame, and extracting the facial feature data of a facial image whose proportion exceeds a preset proportion; and/or determining the clarity of the facial image included in the image frame, and extracting the facial feature data of a facial image whose clarity exceeds a clarity threshold.
In one embodiment, extracting the facial feature data of the facial image included in the image frame includes: inputting the image frame into a neural network model; obtaining a feature map output by the neural network model corresponding to the image frame; and determining the facial feature data of the facial image included in the image frame according to the feature map. Querying, according to the facial feature data, the visitor image matching the facial image includes: comparing the facial feature data with the facial feature data corresponding to each visitor image in a visitor image library; and selecting the visitor image in the visitor image library whose corresponding facial feature data have the highest similarity with the determined facial feature data, as the visitor image matching the facial image.
In one embodiment, obtaining the visitor interaction content associated with the visitor identity recognition result includes: determining a corresponding visitor attribute according to the visitor identity recognition result; looking up a visitor interaction content template corresponding to the visitor attribute; and combining the visitor identity recognition result with the visitor interaction content template to obtain the visitor interaction content.
In one embodiment, the visitor spatial state includes a visitor depth distance and a visitor orientation. The computer-readable instructions further cause the processor to perform the following steps: determining the visitor depth distance according to the difference between the facial contour size of the facial image in the image frame corresponding to the visitor identity recognition result and the facial contour size of the preset face template; and determining the visitor orientation according to the difference between the face deflection angle of the facial image in the image frame corresponding to the visitor identity recognition result and the face deflection angle of the preset face template.
In one embodiment, the computer-readable instructions further cause the processor to perform the following steps: extracting obstacle feature data from the environment image included in the image frame; generating an obstacle distribution map according to the obstacle feature data; planning a movement path in the obstacle distribution map according to the visitor spatial state; and generating the approach-visitor movement instruction according to the movement path.
In one embodiment, the computer-readable instructions further cause the processor, after moving according to the approach-visitor movement instruction and outputting the visitor interaction content, to perform the following steps: obtaining an interactive instruction initiated according to the visitor identity recognition result; determining a corresponding interactive object identifier according to the interactive instruction; and establishing a communication connection with the device corresponding to the interactive object identifier.
A computer device includes a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the following steps:
receiving an image frame sent by a visitor interactive device;
performing face recognition on the image frame to obtain a visitor identity recognition result;
obtaining visitor interaction content associated with the visitor identity recognition result;
determining a visitor spatial state according to the difference between the facial image in the image frame corresponding to the visitor identity recognition result and a preset face template;
generating, according to the visitor spatial state, an approach-visitor movement instruction applicable to the visitor interactive device; and
sending the visitor interaction content and the approach-visitor movement instruction to the visitor interactive device, so that the visitor interactive device executes the approach-visitor movement instruction and outputs the visitor interaction content.
After receiving the image frame sent by the visitor interactive device, the above computer device performs face recognition on the image frame to obtain the visitor identity recognition result, whereupon the visitor interaction content for interacting with the visitor can be obtained. The visitor spatial state can then be determined according to the difference between the facial image in the image frame corresponding to the visitor identity recognition result and the preset face template, and the approach-visitor movement instruction that causes the visitor interactive device to move toward the visitor is generated. By sending the visitor interaction content together with the approach-visitor movement instruction to the visitor interactive device, the visitor interactive device can automatically adjust its position according to the approach-visitor movement instruction and complete the interaction with the visitor, avoiding tedious manual operation and greatly improving the efficiency of interacting with the visitor. Moreover, after sending an image frame, the visitor interactive device can automatically complete the interaction with the visitor according to the received approach-visitor movement instruction and visitor interaction content, which significantly reduces the integration difficulty and maintenance cost of the visitor interactive device.
In one embodiment, performing face recognition on the image frame to obtain the visitor identity recognition result includes: extracting facial feature data of a facial image included in the image frame; querying, according to the facial feature data, a visitor image matching the facial image; and obtaining the visitor identity recognition result according to the visitor image.
In one embodiment, extracting the facial feature data of the facial image included in the image frame includes: determining the proportion of the image frame occupied by the facial image included in the image frame, and extracting the facial feature data of a facial image whose proportion exceeds a preset proportion; and/or determining the clarity of the facial image included in the image frame, and extracting the facial feature data of a facial image whose clarity exceeds a clarity threshold.
In one embodiment, extracting the facial feature data of the facial image included in the image frame includes: inputting the image frame into a neural network model; obtaining a feature map output by the neural network model corresponding to the image frame; and determining the facial feature data of the facial image included in the image frame according to the feature map. Querying, according to the facial feature data, the visitor image matching the facial image includes: comparing the facial feature data with the facial feature data corresponding to each visitor image in a visitor image library; and selecting the visitor image in the visitor image library whose corresponding facial feature data have the highest similarity with the determined facial feature data, as the visitor image matching the facial image.
In one embodiment, obtaining the visitor interaction content associated with the visitor identity recognition result includes: determining a corresponding visitor attribute according to the visitor identity recognition result; looking up a visitor interaction content template corresponding to the visitor attribute; and combining the visitor identity recognition result with the visitor interaction content template to obtain the visitor interaction content.
In one embodiment, the visitor spatial state includes a visitor depth distance and a visitor orientation. Determining the visitor spatial state according to the difference between the facial image in the image frame corresponding to the visitor identity recognition result and the preset face template includes: determining the visitor depth distance according to the difference between the facial contour size of the facial image in the image frame corresponding to the visitor identity recognition result and the facial contour size of the preset face template; and determining the visitor orientation according to the difference between the face deflection angle of the facial image in the image frame corresponding to the visitor identity recognition result and the face deflection angle of the preset face template.
In one embodiment, generating, according to the visitor spatial state, the approach-visitor movement instruction applicable to the visitor interactive device includes: extracting obstacle feature data from the environment image included in the image frame; generating an obstacle distribution map according to the obstacle feature data; planning a movement path in the obstacle distribution map according to the visitor spatial state; and generating, according to the movement path, the approach-visitor movement instruction applicable to the visitor interactive device.
In one embodiment, the computer-readable instructions further cause the processor, after sending the visitor interaction content and the approach-visitor movement instruction to the visitor interactive device so that the visitor interactive device executes the approach-visitor movement instruction and outputs the visitor interaction content, to perform the following steps: obtaining an interactive instruction initiated according to the visitor identity recognition result; determining a corresponding interactive object identifier according to the interactive instruction; and establishing a communication connection between the visitor interactive device and the device corresponding to the interactive object identifier.
A computer device includes a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the following steps:
acquiring an image frame;
obtaining, according to the image frame, visitor interaction content associated with a visitor identity recognition result obtained by performing face recognition on the image frame;
obtaining an approach-visitor movement instruction according to the image frame, the approach-visitor movement instruction being generated according to a visitor spatial state, and the visitor spatial state being determined according to the difference between the facial image in the image frame corresponding to the visitor identity recognition result and a preset face template; and
moving according to the approach-visitor movement instruction, and outputting the visitor interaction content.
After acquiring an image frame, the above computer device can automatically obtain the visitor interaction content associated with the visitor identity recognition result obtained by performing face recognition on the image frame, together with the approach-visitor movement instruction generated from the visitor spatial state determined by the difference between the facial image in the image frame corresponding to the visitor identity recognition result and the preset face template. The interaction with the visitor can thus be completed locally and automatically by adjusting the position according to the approach-visitor movement instruction, avoiding tedious manual operation and greatly improving the efficiency of interacting with the visitor.
In one embodiment, obtaining, according to the image frame, the visitor interaction content associated with the visitor identity recognition result obtained by performing face recognition on the image frame includes: extracting facial feature data of a facial image included in the image frame; querying, according to the facial feature data, a visitor image matching the facial image; obtaining the visitor identity recognition result according to the visitor image; and obtaining the visitor interaction content associated with the visitor identity recognition result.
In one embodiment, extracting the facial feature data of the facial image included in the image frame includes: determining the proportion of the image frame occupied by the facial image included in the image frame, and extracting the facial feature data of a facial image whose proportion exceeds a preset proportion; and/or determining the clarity of the facial image included in the image frame, and extracting the facial feature data of a facial image whose clarity exceeds a clarity threshold.
In one embodiment, extracting the facial feature data of the facial image included in the image frame includes: inputting the image frame into a neural network model; obtaining a feature map output by the neural network model corresponding to the image frame; and determining the facial feature data of the facial image included in the image frame according to the feature map. Querying, according to the facial feature data, the visitor image matching the facial image includes: comparing the facial feature data with the facial feature data corresponding to each visitor image in a visitor image library; and selecting the visitor image in the visitor image library whose corresponding facial feature data have the highest similarity with the determined facial feature data, as the visitor image matching the facial image.
In one embodiment, obtaining the visitor interaction content associated with the visitor identity recognition result includes: determining a corresponding visitor attribute according to the visitor identity recognition result; looking up a visitor interaction content template corresponding to the visitor attribute; and combining the visitor identity recognition result with the visitor interaction content template to obtain the visitor interaction content.
In one embodiment, the visitor spatial state includes a visitor depth distance and a visitor orientation. The computer-readable instructions further cause the processor to perform the following steps: determining the visitor depth distance according to the difference between the facial contour size of the facial image in the image frame corresponding to the visitor identity recognition result and the facial contour size of the preset face template; and determining the visitor orientation according to the difference between the face deflection angle of the facial image in the image frame corresponding to the visitor identity recognition result and the face deflection angle of the preset face template.
In one embodiment, the computer-readable instructions further cause the processor to perform the following steps: extracting obstacle feature data from the environment image included in the image frame; generating an obstacle distribution map according to the obstacle feature data; planning a movement path in the obstacle distribution map according to the visitor spatial state; and generating the approach-visitor movement instruction according to the movement path.
In one embodiment, the computer-readable instructions further cause the processor, after moving according to the approach-visitor movement instruction and outputting the visitor interaction content, to perform the following steps: obtaining an interactive instruction initiated according to the visitor identity recognition result; determining a corresponding interactive object identifier according to the interactive instruction; and establishing a communication connection with the device corresponding to the interactive object identifier.
A person of ordinary skill in the art will appreciate that all or part of the flows in the above method embodiments can be implemented by a computer program instructing the relevant hardware. The program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the flows of the embodiments of each of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or the like.
The technical features of the above embodiments can be combined arbitrarily. For conciseness of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent claims of the present invention. It should be pointed out that a person of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent should be determined by the appended claims.

Claims (19)

CN201710317463.4A | 2017-05-05 | 2017-05-05 | Interaction control method, device and computer-readable recording medium | Pending | CN107247920A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201710317463.4A (CN107247920A) | 2017-05-05 | 2017-05-05 | Interaction control method, device and computer-readable recording medium

Publications (1)

Publication Number | Publication Date
CN107247920A (en) | 2017-10-13

Family

ID=60017326

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201710317463.4A (Pending, published as CN107247920A) | Interaction control method, device and computer-readable recording medium | 2017-05-05 | 2017-05-05

Country Status (1)

Country | Link
CN (1) | CN107247920A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108764044A (en) * | 2018-04-25 | 2018-11-06 | Vivo Mobile Communication Co., Ltd. | A light supplement method, device and mobile terminal
CN108875529A (en) * | 2018-01-11 | 2018-11-23 | Beijing Megvii Technology Co., Ltd. | Face spatial positioning method, device, system and computer storage medium
CN109968365A (en) * | 2017-12-28 | 2019-07-05 | Shenyang Siasun Robot & Automation Co., Ltd. | Robot control system, control method of the control system, and robot
CN110070016A (en) * | 2019-04-12 | 2019-07-30 | Beijing Orion Star Technology Co., Ltd. | Robot control method, device and storage medium
CN110955879A (en) * | 2019-11-29 | 2020-04-03 | Tencent Technology (Shenzhen) Co., Ltd. | Device control method, device, computer device and storage medium
CN111339996A (en) * | 2020-03-20 | 2020-06-26 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method, device and equipment for detecting static obstacle and storage medium
CN111491004A (en) * | 2019-11-28 | 2020-08-04 | Zhao Lixia | Information updating method based on cloud storage
CN113696197A (en) * | 2021-08-27 | 2021-11-26 | Beijing SoundAI Technology Co., Ltd. | Visitor reception method, robot and computer-readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101086604A (en) * | 2006-06-09 | 2007-12-12 | Sony Corporation | Imaging apparatus, control method of imaging apparatus, and computer program
US20120316676A1 (en) * | 2011-06-10 | 2012-12-13 | Microsoft Corporation | Interactive robot initialization
US20130342652A1 (en) * | 2012-06-22 | 2013-12-26 | Microsoft Corporation | Tracking and following people with a mobile robotic device
CN103499334A (en) * | 2013-09-05 | 2014-01-08 | Xiaomi Inc. | Method, apparatus and electronic instrument for distance measurement
CN103576686A (en) * | 2013-11-21 | 2014-02-12 | University of Science and Technology of China | Automatic guide and obstacle avoidance method for robot
CN105116994A (en) * | 2015-07-07 | 2015-12-02 | Baidu Online Network Technology (Beijing) Co., Ltd. | Intelligent robot tracking method and tracking device based on artificial intelligence
CN105701447A (en) * | 2015-12-30 | 2016-06-22 | Shanghai Zhizhen Intelligent Network Technology Co., Ltd. | Guest-greeting robot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Gabriel Hermosilla et al.: "Face Recognition using Thermal Infrared Images for Human-Robot Interaction Applications: A Comparative Study" *
Fu Xiaoling et al.: "Visitor Management System Based on Certificate Recognition Technology", Microcomputer Information *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109968365A (en) * | 2017-12-28 | 2019-07-05 | Shenyang Siasun Robot & Automation Co., Ltd. | Robot control system, control method, and robot
CN108875529A (en) * | 2018-01-11 | 2018-11-23 | Beijing Megvii Technology Co., Ltd. | Face spatial positioning method, device, system and computer storage medium
CN108764044A (en) * | 2018-04-25 | 2018-11-06 | Vivo Mobile Communication Co., Ltd. | Light supplement method, device and mobile terminal
CN110070016A (en) * | 2019-04-12 | 2019-07-30 | Beijing Orion Star Technology Co., Ltd. | Robot control method, device and storage medium
CN111491004A (en) * | 2019-11-28 | 2020-08-04 | Zhao Lixia | Information updating method based on cloud storage
CN110955879A (en) * | 2019-11-29 | 2020-04-03 | Tencent Technology (Shenzhen) Co., Ltd. | Device control method, device, computer device and storage medium
CN111339996A (en) * | 2020-03-20 | 2020-06-26 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method, device and equipment for detecting static obstacle and storage medium
CN111339996B (en) * | 2020-03-20 | 2023-05-09 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method, device, equipment and storage medium for detecting static obstacle
CN113696197A (en) * | 2021-08-27 | 2021-11-26 | Beijing SoundAI Technology Co., Ltd. | Visitor reception method, robot and computer-readable storage medium

Similar Documents

Publication | Publication Date | Title
CN107247920A (en) | Interaction control method, device and computer-readable recording medium
US11917288B2 (en) | Image processing method and apparatus
CA3019224C (en) | Information display method, device, and system
CN105426850B (en) | Associated information pushing device and method based on face recognition
CN107341442B (en) | Motion control method, device, computer equipment and service robot
CN111491187B (en) | Video recommendation method, device, equipment and storage medium
US20190333478A1 (en) | Adaptive fiducials for image match recognition and tracking
CN103140862B (en) | User interface system and operation method thereof
CN110163806A (en) | Image processing method, device and storage medium
CN109325450A (en) | Image processing method, image processing device, storage medium and electronic equipment
CN110163076A (en) | Image processing method and related apparatus
CN106874826A (en) | Face key point tracking method and device
WO2017005014A1 (en) | Method and device for searching matched commodities
JP2023511243A (en) | Image processing method and apparatus, electronic device, and recording medium
KR20180054407A (en) | Apparatus for recognizing user emotion and method thereof, and robot system using the same
CN106303599A (en) | Information processing method, system and server
CN107018330A (en) | Real-time photographing guidance method and device
CN110991325A (en) | Model training method, image recognition method and related device
CN110135304A (en) | Human body pose recognition method and device
CN112907569A (en) | Head image area segmentation method and device, electronic equipment and storage medium
CN114662076A (en) | Business information determination method, system, apparatus, device, medium and program product
Yan et al. | Human-object interaction recognition using multitask neural network
CN109034059A (en) | Silent face liveness detection method, device, storage medium and processor
CN116631026A (en) | Image recognition method, model training method and device
CN117935278A (en) | Method and device for generating interest point data, related equipment and storage medium

Legal Events

Date | Code | Title | Description

PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication | Application publication date: 2017-10-13
