CN108537094A - Image processing method, device and system - Google Patents

Image processing method, device and system

Info

Publication number
CN108537094A
Authority
CN
China
Prior art keywords
information
mobile device
identified
movement
panoramic image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710123786.XA
Other languages
Chinese (zh)
Other versions
CN108537094B (en)
Inventor
廖可
于海华
王炜
杨林举
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd
Priority to CN201710123786.XA
Publication of CN108537094A
Application granted
Publication of CN108537094B
Status: Active
Anticipated expiration

Abstract

An image processing method, device and system are provided. The method includes the following steps: within a synchronization time window, receiving the inertial navigation data of a mobile device carried by a first moving body and multiple panoramic image frames captured by a panoramic camera; obtaining, based on the inertial navigation data of the mobile device carried by the first moving body, first information related to the motion track of the mobile device, the first information including the movement amount and turning information of the motion track of the mobile device within the synchronization time window; obtaining, based on the panoramic image frames, second information related to the motion tracks of one or more moving bodies to be identified detected in the panoramic image frames, the second information including the movement amount and turning information of the motion tracks of the one or more moving bodies to be identified within the synchronization time window; and comparing the similarity of the first information and the second information to match a moving body to be identified in the panoramic image frames with the first moving body carrying the mobile device.

Description

Image processing method, device and system
Technical field
This disclosure relates to the field of image processing, and more specifically to an image processing method, device and system for identifying the identity of a moving body.
Background art
Video monitoring is currently widely used. For example, a surveillance camera monitors the activity of people in a specific region, and a specific person is identified by means such as face recognition. However, the accuracy of results identified in this way depends on the accuracy of the face recognition algorithm. Moreover, the orientation, lighting, and color of a moving person's face may vary greatly, and the person may turn away from the camera or leave the camera's field of view and re-enter it. This can make it difficult to accurately identify a specific person, or cause the same person to be identified as different people.
Summary of the invention
According to an aspect of the present invention, an image processing method is provided, including the following steps: within a synchronization time window, receiving the inertial navigation data of a mobile device carried by a first moving body and multiple panoramic image frames captured by a panoramic camera; obtaining, based on the inertial navigation data of the mobile device carried by the first moving body, first information related to the motion track of the mobile device, the first information including the movement amount and turning information of the motion track of the mobile device within the synchronization time window; obtaining, based on the panoramic image frames, second information related to the motion tracks of one or more moving bodies to be identified detected in the panoramic image frames, the second information including the movement amount and turning information of the motion tracks of the one or more moving bodies to be identified within the synchronization time window; and comparing the similarity of the first information and the second information to match a moving body to be identified in the panoramic image frames with the first moving body carrying the mobile device.
According to another aspect of the present invention, a mobile device is provided, including: a data collector configured to collect inertial navigation data from the mobile device; and a communicator configured to send the collected inertial navigation data to an image processing apparatus, wherein the image processing apparatus: receives, within a synchronization time window, the inertial navigation data of the mobile device carried by a first moving body and multiple panoramic image frames captured by a panoramic camera; obtains, based on the inertial navigation data of the mobile device carried by the first moving body, first information related to the motion track of the mobile device, the first information including the movement amount and turning information of the motion track of the mobile device within the synchronization time window; obtains, based on the panoramic image frames, second information related to the motion tracks of one or more moving bodies to be identified detected in the panoramic image frames, the second information including the movement amount and turning information of the motion tracks of the one or more moving bodies to be identified within the synchronization time window; and compares the similarity of the first information and the second information to match a moving body to be identified in the panoramic image frames with the first moving body carrying the mobile device.
According to another aspect of the present invention, an image processing apparatus is provided, including: a receiver configured to receive, within one or more synchronization time windows, the inertial navigation data of a mobile device carried by a first moving body and multiple panoramic image frames captured by a panoramic camera; a first calculator configured to obtain, based on the inertial navigation data of the mobile device carried by the first moving body, first information related to the motion track of the mobile device, the first information including the movement amount and turning information of the motion track of the mobile device within one synchronization time window; a second calculator configured to obtain, based on the panoramic image frames, second information related to the motion tracks of one or more moving bodies to be identified detected in the panoramic image frames, the second information including the movement amount and turning information of the motion tracks of the one or more moving bodies to be identified within the synchronization time window; and a comparator configured to compare the similarity of the first information and the second information to match a moving body to be identified in the panoramic image frames with the first moving body carrying the mobile device.
According to another aspect of the present invention, an image processing system is provided, including: one or more processors; and one or more memories storing one or more computer-executable programs which, when executed by the one or more processors, carry out the following steps: within one or more synchronization time windows, receiving the inertial navigation data of a mobile device carried by a first moving body and multiple panoramic image frames captured by a panoramic camera; obtaining, based on the inertial navigation data of the mobile device carried by the first moving body, first information related to the motion track of the mobile device, the first information including the movement amount and turning information of the motion track of the mobile device within one synchronization time window; obtaining, based on the panoramic image frames, second information related to the motion tracks of one or more moving bodies to be identified detected in the panoramic image frames, the second information including the movement amount and turning information of the motion tracks of the one or more moving bodies to be identified within the synchronization time window; and comparing the similarity of the first information and the second information to match a moving body to be identified in the panoramic image frames with the first moving body carrying the mobile device.
Description of the drawings
Figure 1A shows an example scene diagram of applying an image processing scheme according to an embodiment of the present invention. Figure 1B shows an example of an image frame captured by a panoramic camera.
Fig. 2 shows an exemplary hardware block diagram for applying the image processing schemes according to the embodiments of the present invention.
Figs. 3A-3D show example flowcharts of image processing methods according to multiple embodiments of the present invention.
Fig. 4 shows a detailed flowchart of an example of an image processing method according to an embodiment of the present invention.
Fig. 5 shows a flowchart of the synchronization process of an example of an image processing method according to an embodiment of the present invention.
Fig. 6 shows a schematic diagram of the synchronization time window of an example of an image processing method according to an embodiment of the present invention.
Fig. 7 shows a flowchart of the step of obtaining a motion track based on a panoramic video stream including panoramic image frames, in an example of an image processing method according to an embodiment of the present invention.
Fig. 8 shows a schematic diagram of an example of the information related to the motion track (including step length and turning) obtained in the step shown in Fig. 7.
Fig. 9 shows a schematic diagram of calculating the actual distance between a moving body in a panoramic image frame and the region center in the step shown in Fig. 7.
Figure 10 shows a schematic diagram of calculating the movement amount from the distances of Fig. 9.
Figure 11 shows a schematic diagram of judging the turning of a moving body in the step shown in Fig. 7.
Figure 12 shows a flowchart of the step of obtaining a motion track based on inertial navigation data, in an example of an image processing method according to an embodiment of the present invention.
Figures 13A and 13B show schematic diagrams of judging turning based on inertial navigation data in the step shown in Figure 12.
Figure 14 shows a flowchart of filtering impossible motion tracks and identifying moving bodies in an image processing method according to an embodiment of the present invention.
Figure 15 shows a schematic diagram of the comparison time window in an image processing method according to an embodiment of the present invention.
Figure 16 shows the block diagram of an image processing apparatus according to an embodiment of the present invention.
Figure 17 shows the block diagram of a mobile device according to an embodiment of the present invention.
Figure 18 shows the block diagram of an example image processing system suitable for implementing embodiments of the present invention.
Detailed description of the embodiments
Reference will now be made in detail to specific embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Although the invention will be described in conjunction with specific embodiments, it will be understood that this is not intended to limit the invention to the described embodiments. On the contrary, it is intended to cover the changes, modifications, and equivalents included within the spirit and scope of the invention as defined by the appended claims. It should be noted that the method steps described herein may be implemented by any functional block or functional arrangement, and any functional block or functional arrangement may be implemented as a physical entity, a logical entity, or a combination of the two.
In order to enable those skilled in the art to better understand the present invention, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Note that the example introduced next is only a specific example, and is not intended to limit the embodiments of the invention to the specific shapes, hardware, connection relations, steps, numerical values, conditions, data, orders, etc. shown and described. By reading this specification, those skilled in the art can use the concept of the present invention to construct further embodiments not mentioned in this specification.
Figure 1A shows an example scene diagram of applying an image processing scheme according to an embodiment of the present invention. Figure 1B shows an example of an image frame captured by a panoramic camera.
The embodiments of the present invention mainly use the panoramic video stream (a sequence of panoramic image frames) captured by a panoramic camera deployed at the center of a region (such as the semicircle-like area in Fig. 1A; e.g. Theta1 and Theta2 in Fig. 1A, serving as area sensors), together with the inertial navigation data sensed by the mobile devices carried by moving bodies such as human users (e.g. user mobile device clients 1, 2, 3 in Fig. 1A, serving as user sensors), to authenticate the identity of a moving body. The visual range of the panoramic camera can, for example, cover its corresponding region; in this region, there may be multiple moving bodies carrying mobile devices, and there may also be other moving bodies (interference factors). The moving body is illustrated with the example of a specific user, but in fact the moving body is not limited to a person, and may be another kind of moving body, such as an animal, a machine, etc. The mobile device carried by each user may store identity information identifying that user. The mobile devices and the panoramic camera can communicate over a wireless or wired network and send data to a server. The server can receive the panoramic image frames from the panoramic camera and the inertial navigation data from the mobile devices in a wireless or wired manner, process and compare the similarity of the tracks derived from the panoramic video stream and from the inertial navigation data using the algorithms according to the embodiments of the present invention, and bind the most similar tracks to perform user identity authentication, obtaining an identity authentication result.
Fig. 2 shows an exemplary hardware block diagram for applying the image processing schemes according to the embodiments of the present invention.
The hardware block diagram shown in Fig. 2 includes: 1) a data server U1 and device clients U2; 2) in the data server, an algorithm module U11 for processing the panoramic video stream and the inertial navigation data, a data center module U12 for data management, and a communication module U13 for wireless or wired network communication, sending and receiving data; 3) two different kinds of device clients, a user sensor U22 and an area sensor U21: the user sensor U22 can be a mobile device carried by a user, and the area sensor U21 can be a panoramic camera arranged on the ceiling. Each device client has two functional modules: a data collection module (U221, U211) and a communication module (U222, U212). The data collection modules U221 and U211 are used for data sampling and preprocessing, and the communication modules U222 and U212 are used for data sending and receiving.
Figs. 3A-3D show example flowcharts of an image processing method 30 according to multiple embodiments of the present invention.
The image processing method 30 shown in Fig. 3A includes: step S31, receiving, within one or more synchronization time windows, the inertial navigation data of a mobile device carried by a first moving body and multiple panoramic image frames captured by a panoramic camera; step S32, obtaining, based on the inertial navigation data of the mobile device carried by the first moving body, first information related to the motion track of the mobile device, the first information including the movement amount and turning information of the motion track of the mobile device within one synchronization time window; step S33, obtaining, based on the panoramic image frames, second information related to the motion tracks of one or more moving bodies to be identified detected in the panoramic image frames, the second information including the movement amount and turning information of the motion tracks of the one or more moving bodies to be identified within the synchronization time window; step S34, comparing the similarity of the first information and the second information to match a moving body to be identified in the panoramic image frames with the first moving body carrying the mobile device.
Here, in one embodiment, the synchronization time window can be the time in which the moving body, for example a person, takes one step, so that the motion track of the person can be judged following the rhythm of the person's steps.
In this way, the tracks of the multiple moving bodies to be identified detected in the panoramic video stream can be processed and compared with the track of the first moving body carrying the mobile device that generates the inertial navigation data; the most similar tracks are then bound to perform identity authentication of the moving body (e.g. a user), obtaining an identity authentication result. For example, if, among the multiple motion tracks of the moving bodies to be identified detected in the panoramic video stream, one motion track has the greatest similarity with the track of a certain moving body carrying the mobile device that generates the inertial navigation data, it may be considered that this moving body to be identified is exactly the first moving body carrying the mobile device.
Note that a panoramic image frame mentioned in this specification is not limited to the whole panoramic image frame, but may include at least a part of a panoramic image frame, such as the whole frame (a 360-degree view) or a part of it (e.g. a 180-degree view, a 90-degree view, etc.). For example, although the panoramic camera captures 360-degree panoramic image frames, one may be concerned only with the moving bodies in a certain region (viewing angle); therefore only that region (viewing angle) of the panoramic image frame may be extracted for the image processing mentioned in this disclosure, which can save costs such as computation and storage.
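As a minimal illustration of extracting only a region of interest from a full panoramic image frame, the following sketch crops a horizontal viewing-angle range out of a frame. The equirectangular projection, the frame dimensions, and the function name are assumptions for illustration; the patent does not specify the panoramic image format.

```python
import numpy as np

def crop_panorama_view(frame: np.ndarray, center_deg: float, fov_deg: float) -> np.ndarray:
    """Extract the column range of an equirectangular panorama frame that
    corresponds to a horizontal viewing angle of fov_deg degrees centered
    on center_deg (0-360), wrapping around the 0/360 seam if needed."""
    h, w = frame.shape[:2]
    cols_per_deg = w / 360.0
    start = int((center_deg - fov_deg / 2.0) % 360.0 * cols_per_deg)
    n_cols = round(fov_deg * cols_per_deg)
    # Taking column indices modulo the width handles views crossing the seam
    idx = np.arange(start, start + n_cols) % w
    return frame.take(idx, axis=1)

frame = np.zeros((960, 1920, 3), dtype=np.uint8)  # a full 360-degree frame
view = crop_panorama_view(frame, center_deg=350.0, fov_deg=90.0)
print(view.shape)  # (960, 480, 3)
```

Only the cropped quarter of the columns would then be passed to moving-body detection, which is the computation/storage saving the paragraph above describes.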
In one embodiment, as shown in Fig. 3B, step S32 of obtaining first information related to the motion track of the mobile device based on the inertial navigation data of the mobile device carried by the first moving body may include: step S321, obtaining pedestrian dead reckoning (PDR) data based on the inertial navigation data, representing the gait, step length, and turning information of the motion track of the mobile device.
In this way, the gait, step length, and turning information of the motion track of the mobile device can be obtained as the first information related to the motion track of the mobile device, i.e. as the motion track information of the first moving body.
In one embodiment, as shown in Fig. 3B, step S321 of analyzing the behavior of the first moving body based on the inertial navigation data to obtain pedestrian dead reckoning (PDR) data representing the gait, step length, and turning information of the motion track of the mobile device includes: step S3211, analyzing the gait of the first moving body based on the inertial navigation data; step S3212, based on the accelerometer values and gyroscope values in the inertial navigation data, an accelerometer threshold, a gyroscope threshold, and the gait of the first moving body, filtering out a part of the impossible accelerometer and gyroscope values to obtain candidate accelerometer values and candidate gyroscope values; step S3213, within one synchronization time window, obtaining the step length based on the candidate accelerometer values and the initial speed in the synchronization time window; step S3214, filtering out impossible step lengths based on a step-length threshold to obtain a candidate movement amount; step S3215, obtaining rotation vector values as turning information based on the candidate gyroscope values.
In this way, the gait, step length, and turning information of the mobile device are obtained concretely.
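The patent gives no concrete formulas for steps S3211-S3215. As a hedged sketch of how gait, step length, and turning might be derived from raw inertial data, the following uses simple peak detection on the acceleration magnitude for gait, the well-known Weinberg approximation for step length, and integration of the vertical-axis gyroscope rate for turning. The thresholds, the minimum peak gap, and the Weinberg constant k are illustrative assumptions, not values from the patent.

```python
import math

def detect_steps(accel_mag, threshold=10.5, min_gap=10):
    """Local maxima of the acceleration magnitude (m/s^2) above a
    threshold, separated by at least min_gap samples, count as steps."""
    steps, last = [], -min_gap
    for i in range(1, len(accel_mag) - 1):
        if (accel_mag[i] > threshold
                and accel_mag[i] >= accel_mag[i - 1]
                and accel_mag[i] >= accel_mag[i + 1]
                and i - last >= min_gap):
            steps.append(i)
            last = i
    return steps

def weinberg_step_length(accel_window, k=0.5):
    """Weinberg estimate: step length ~ k * (a_max - a_min)^(1/4)."""
    return k * (max(accel_window) - min(accel_window)) ** 0.25

def heading_change(gyro_z, dt):
    """Integrate the vertical-axis gyroscope rate (rad/s) over one step."""
    return sum(g * dt for g in gyro_z)

# Synthetic magnitude trace with two peaks, i.e. two steps
steps = detect_steps([9.8] * 5 + [12.0] + [9.8] * 20 + [12.0] + [9.8] * 5)
print(len(steps))  # 2
```

Each detected step then serves as one synchronization time window, with its step length as the movement amount and its integrated heading change as the turning information.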
In one embodiment, as shown in Fig. 3C, when there are multiple moving bodies to be identified, step S33 of obtaining second information related to the motion tracks of the moving bodies to be identified in the panoramic image frames based on the panoramic image frames may include: step S331, obtaining all coordinate positions of the multiple detected moving bodies to be identified in the multiple panoramic image frames within one synchronization time window; step S332, connecting all coordinate positions of the multiple moving bodies to be identified in the two panoramic image frames at the beginning and at the end of the synchronization time window to obtain all possible tracks; step S333, filtering impossible tracks from all the possible tracks based on a displacement threshold of a moving body to be identified within a predetermined time length, to obtain candidate tracks.
Here, within one synchronization time window (e.g. one step), there may be multiple panoramic image frames; the first panoramic image frame at the beginning and the last panoramic image frame at the end are taken for coordinate position detection and track calculation, so as to reduce computation. Of course, other pairs of panoramic image frames can also be used. In general, if the system has high real-time requirements and the synchronization time window is small (e.g. within one step), frames can be skipped appropriately, even processing only the first and last frames in each of multiple consecutive synchronization time windows; if the system has strong processing power, high precision is required, or the synchronization time window is large (e.g. multiple steps), multiple panoramic image frames can be taken at intervals within one synchronization time window for the calculation. In this disclosure, however, one step is usually taken as the synchronization time window, and one step generally lasts about 1 second; within a 1-second synchronization time window, the movement amount of a person is small, so the calculation can be performed directly with the two panoramic image frames at the beginning and the end of the synchronization time window.
Here, face recognition is not necessary; it is only necessary to detect the positions of the moving bodies to be identified, obtaining one or more positions of the one or more moving bodies to be identified in each panoramic image frame, connecting these positions across the multiple panoramic image frames in time order, and exhaustively enumerating all possible motion tracks of the one or more moving bodies to be identified. Since the panoramic image frames are captured at a known time interval, the movement amount of each possible motion track over the time span of one step can be compared with the step length of a typical person (as the displacement threshold within a predetermined time length); tracks whose movement amount over the time span of one step exceeds this displacement threshold are filtered out, leaving candidate tracks that better conform to the rules of human walking.
Of course, the displacement threshold of one step and the time span of one step are only schematically illustrated here; this scheme is not limited thereto, and other kinds of displacement thresholds and time lengths are possible, such as the displacement threshold of a running person within a predetermined time length, the displacement threshold of a certain animal's movement within a predetermined time length, etc.
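The enumerate-then-filter step described above can be sketched as follows: every detected position at the start of a synchronization time window is tentatively connected with every detected position at its end, and pairs whose displacement exceeds the per-step threshold are discarded. The 1.0 m threshold and the coordinate layout are illustrative assumptions.

```python
from itertools import product
import math

def candidate_tracks(start_positions, end_positions, max_step_m=1.0):
    """Connect every position detected at the beginning of a
    synchronization time window with every position detected at its end,
    then drop pairs whose displacement exceeds the step threshold."""
    tracks = []
    for p0, p1 in product(start_positions, end_positions):
        d = math.dist(p0, p1)  # ground-plane positions in meters
        if d <= max_step_m:
            tracks.append((p0, p1, d))
    return tracks

starts = [(0.0, 0.0), (3.0, 0.0)]
ends = [(0.6, 0.1), (3.1, 0.5), (8.0, 8.0)]
print(len(candidate_tracks(starts, ends)))  # 2: implausible jumps removed
```

Of the six possible start-end connections, only the two within one step length survive, which is exactly the reduction from "all possible tracks" to "candidate tracks" in step S333.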
In one embodiment, as shown in Fig. 3C, step S33 of obtaining second information related to the motion tracks of the moving bodies to be identified in the panoramic image frames based on the panoramic image frames may also include: step S334, obtaining, based on the panoramic image frames, the actual distances of the multiple moving bodies to be identified from the center of the panoramic image frame; step S335, obtaining, based on the candidate tracks, the rotation angles of the multiple moving bodies to be identified relative to the panoramic camera; step S336, obtaining, based on the distances and the rotation angles, the candidate movement amounts of the multiple moving bodies to be identified.
Here, since a panoramic image frame is analogous to a photo taken by a fisheye camera, the distance between the two positions of a moving body in two panoramic image frames is not the distance actually moved. A characteristic of fisheye imaging is that the closer to the center, the smaller the distortion and the better the image matches reality; the farther from the center, the greater the distortion and the worse the image matches reality (for example, the black arc on the left side of Fig. 1B is in fact an office partition edge that should be a straight line, but the image distorts it into a curve). Therefore, in order to convert the distance represented in the panoramic image frames into the distance actually moved, the candidate movement amount between the two positions of a moving body to be identified in two panoramic image frames can be calculated by the principles of triangle geometry, using the respective distances of the two positions from the center of the panoramic image frame (which approximately represents the installation position of the panoramic camera) and the rotation angle of the moving body to be identified relative to the panoramic camera, and used as the actual movement amount.
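The triangle-geometry conversion described above can be sketched with the law of cosines: given the two real distances of the moving body from the camera center and its rotation angle about the camera between the two frames, the actual displacement is the third side of the triangle. The function name and the use of meters and degrees are illustrative assumptions.

```python
import math

def actual_movement(r1, r2, rotation_deg):
    """Law of cosines: r1 and r2 are the real-world distances of the
    moving body from the camera in the two frames, rotation_deg its
    rotation angle about the camera between them; the return value is
    the actual displacement (the third side of the triangle)."""
    theta = math.radians(rotation_deg)
    return math.sqrt(r1 * r1 + r2 * r2 - 2.0 * r1 * r2 * math.cos(theta))

# A body 4 m from the camera that rotates 90 degrees at constant radius
print(round(actual_movement(4.0, 4.0, 90.0), 3))  # 5.657, the chord length
```

This is why steps S334-S336 need both the center distances and the rotation angle: neither alone determines the movement amount.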
In one embodiment, as shown in Fig. 3C, step S33 of obtaining second information related to the motion tracks of the moving bodies to be identified in the panoramic image frames based on the panoramic image frames may also include: step S337, judging the turning information of a moving body to be identified based on the difference between its actual distances from the center of the panoramic image frame in the two panoramic image frames at the beginning and at the end of one synchronization time window; step S338, if that difference is zero, judging the turning information of the moving body to be identified within the synchronization time window based on the difference between its position coordinates in the two panoramic image frames at the beginning and at the end of the synchronization time window.
This is a simple and efficient way of judging the turning information of a moving body. With one or two simple comparisons, the turning information can be obtained, so as to learn whether the moving body is moving clockwise or counterclockwise, moving to the left or to the right, etc.
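The two-stage comparison of steps S337/S338 might be sketched as below: the radial distance difference is checked first, and only if it is (near) zero does the judgment fall back to the position coordinates. The direction labels and the sign convention (larger angular coordinate meaning counterclockwise) are assumptions for illustration.

```python
def judge_turning(r_start, r_end, x_start, x_end, eps=1e-6):
    """Step S337: a change in the distance from the frame center means
    the body moved toward or away from the camera. Step S338: if the
    radius is unchanged, the angular (x) coordinate decides clockwise
    vs. counterclockwise motion around the camera."""
    if abs(r_end - r_start) > eps:
        return "outward" if r_end > r_start else "inward"
    return "counterclockwise" if x_end > x_start else "clockwise"

print(judge_turning(2.0, 3.5, 0.0, 0.0))   # radius grew: "outward"
print(judge_turning(2.0, 2.0, 10.0, 4.0))  # radius fixed: "clockwise"
```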
In one embodiment, as shown in Fig. 3D, step S34 of comparing the similarity of the first information and the second information to match a moving body to be identified in the panoramic image frames with the first moving body carrying the mobile device may include: step S341, determining a comparison time window, the length of which is a multiple of the length of the synchronization time window; step S342, within the comparison time window, comparing the movement amount in the first information with all the candidate movement amounts in the second information to obtain a distance similarity; step S343, within the comparison time window, comparing the turning information in the first information with the turning information in the second information to obtain a turning similarity; step S344, judging, based on the distance similarity and the turning similarity, which of the one or more moving bodies to be identified matches the first moving body carrying the mobile device.
In this way, by comparing the similarities of the movement amounts and of the turning information respectively, it can be judged which of the moving bodies to be identified matches the first moving body carrying the mobile device, so that information such as the identity information identifying the first moving body stored in the mobile device can be used to identify the moving body to be identified captured by the panoramic camera. The ways of comparing similarity may include minimum variance, classification, and clustering algorithms, such as K-nearest neighbors (KNN) and K-means, which are not elaborated here.
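Of the comparison methods mentioned, the minimum-variance approach is the simplest to sketch: over one comparison time window, each candidate track from the panoramic video is scored by the mean squared difference of its per-window (movement amount, turning) pairs against the PDR track, and the lowest-cost candidate wins. The data layout and the equal weighting of distance and turning terms are illustrative assumptions.

```python
def match_moving_body(pdr_track, cam_tracks):
    """pdr_track and every candidate in cam_tracks are sequences of
    (movement_amount, turning) pairs, one pair per synchronization time
    window inside one comparison time window. The candidate with the
    smallest mean squared difference matches the device carrier."""
    best_id, best_cost = None, float("inf")
    for body_id, cam_track in cam_tracks.items():
        cost = sum((m1 - m2) ** 2 + (t1 - t2) ** 2
                   for (m1, t1), (m2, t2) in zip(pdr_track, cam_track))
        cost /= len(pdr_track)
        if cost < best_cost:
            best_id, best_cost = body_id, cost
    return best_id, best_cost

pdr = [(0.7, 0.1), (0.6, -0.2), (0.8, 0.0)]
candidates = {
    "body_A": [(0.2, 0.9), (0.3, 1.0), (0.2, 0.8)],
    "body_B": [(0.68, 0.12), (0.63, -0.18), (0.79, 0.05)],
}
print(match_moving_body(pdr, candidates)[0])  # body_B
```

The identity information stored in the mobile device would then be bound to the winning detected body, completing the authentication.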
In this way, the tracks of the multiple moving bodies to be identified detected in the panoramic video stream can be processed and compared with the track of the first moving body carrying the mobile device that generates the inertial navigation data, and the most similar tracks are bound to perform identity authentication of the moving body (e.g. a user), obtaining an identity authentication result.
Below, with reference to Figs. 4-15, the details of the specific process of an example of an image processing method according to an embodiment of the invention are illustrated, in which a user serves as an example of the moving body, and the time span of one step taken by the user serves as an example of the synchronization time window. Of course, this is only an example and not a limitation, and the invention is not limited thereto.
Fig. 4 shows a detailed flowchart of an example of an image processing method according to an embodiment of the present invention.
The workflow of this example of the image processing method includes: 1) step S1, the image processing system (hereinafter referred to as the system) performs platform initialization; step S11, it obtains the panoramic video stream including panoramic image frames captured by the panoramic camera, and step S12, it obtains the inertial navigation data of the inertial navigation sensor from the carried first mobile device; 2) step S211, when the system receives the binding trigger condition, it starts motion detection to detect moving people from the panoramic video stream, and step S212, it estimates track information from the coordinate points (positions) of the moving people; 3) step S221, simultaneously or in parallel, the system starts PDR analysis on the inertial navigation data to analyze the behavior of the people, and step S222, calculates for each step (as an example of one synchronization time window) the step length (as an example of the movement amount) and the turning information; 4) step S3, within one comparison time window (equal to a multiple of the synchronization time window), the system compares the track information coming from the panoramic video stream and from the mobile device, and filters off impossible tracks; 5) step S4, based on the similarity of the tracks, the identity information of a person stored in the mobile device is bound to the detected person with high track similarity, completing the identity authentication process.
The multiple people of unidentified identity detected in the panoramic video stream can thus be processed and compared with the track of the carried mobile device that generates the inertial navigation data; the most similar (highest-similarity) tracks are bound to perform person identity authentication, obtaining an identity authentication result.
In addition, binding trigger condition is a kind of signal opened and synchronized with identification process as mentioned herein, can be based onWireless signal approaches to judge, such as is approached by low-power consumption bluetooth (Bluetooth Low Energy, BLE) to judge,It is handled, is approached by PDR analyses to judge close to judgement by video.In this way, opening basis based on binding trigger conditionThe mechanism of the image procossing of the embodiment of the present invention, image processing system is not necessarily to synchronize within the entire period, and can reducePower consumption.
Fig. 5 shows a flowchart of the process of synchronizing using the binding trigger condition in an example of the image processing method according to an embodiment of the present invention.
The synchronization process includes: 1) prerequisites: step S11, the panoramic camera obtains the panoramic video stream comprising panoramic image frames, and step S12, the inertial data from the inertial navigation sensor of the carried mobile device is obtained; 2) step S221-a, when the binding trigger condition arrives (e.g. a user approaches, judged by wireless proximity, video proximity, etc.), the inertial data is processed: the step count is obtained by PDR analysis, and then the step length and the turn within each step; and step S221-b, the synchronization time window is set to one step or a few steps; 3) step S211, persons are identified within the synchronization time window, and step S212, persons are tracked within the synchronization time window; 4) simultaneously or in parallel, step S222, persons are tracked based on the PDR data.
Obtaining the step count from the inertial data, and further the step length and the turn within a step, is well known to those skilled in the art and is not described in detail here.
In this way, the image processing mechanism according to an embodiment of the present invention is started based on the binding trigger condition, so the image processing system does not need to synchronize over the entire time period, and power consumption can be reduced.
Fig. 6 shows a schematic diagram of the synchronization time window in an example of the image processing method according to an embodiment of the present invention.
Here, one step taken by a person is used as the example of the synchronization time window. Thus, as shown in Fig. 6, between one "new step" and the next "new step", i.e. within one synchronization time window, 3 panoramic image frames of the panoramic video stream and inertial data i2-i6 may have been collected.
The length of this one-step synchronization time window can be obtained in various ways. For example, PDR analysis of the inertial data can detect steps and yield the duration of one step as the synchronization time window; or the duration of one step can be obtained from the data of a step-count sensor; or a predetermined time length for a typical person's step can be preset and used as the synchronization time window.
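As a rough illustration of the first of these options, obtaining the window length by step detection on the inertial data itself, the sketch below finds accelerometer-magnitude peaks and pairs consecutive peaks into one-step windows. The function name, threshold and minimum peak gap are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: derive synchronization-time-window boundaries from
# accelerometer magnitudes, one window per detected step.
def detect_step_windows(acc_mag, timestamps, threshold=10.5, min_gap=0.3):
    """Return (start, end) time pairs, one per step.

    acc_mag    -- acceleration magnitudes (m/s^2), one per sample
    timestamps -- sample times (s), same length as acc_mag
    threshold  -- magnitude above which a sample may count as a step peak
    min_gap    -- minimum time between two step peaks (s)
    """
    peaks = []
    for i in range(1, len(acc_mag) - 1):
        is_peak = (acc_mag[i] > threshold
                   and acc_mag[i] >= acc_mag[i - 1]
                   and acc_mag[i] >= acc_mag[i + 1])
        if is_peak and (not peaks or timestamps[i] - peaks[-1] >= min_gap):
            peaks.append(timestamps[i])
    # Two consecutive peaks bound one step, i.e. one synchronization window.
    return list(zip(peaks, peaks[1:]))
```

Panoramic image frames and inertial samples whose timestamps fall between two boundaries would then be grouped into the same synchronization time window.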
Here, the synchronization time window is used to synchronize the trajectory processing of the panoramic image frames of the panoramic video stream with the trajectory processing of the inertial data, so that the start and end time points of the track-similarity comparison are identical. The similarity of the tracks can thus be compared more accurately, and whether two tracks are identical or similar can be judged more accurately.
Fig. 7 shows a flowchart of the step of obtaining a motion track based on the panoramic video stream comprising panoramic image frames, in an example of the image processing method according to an embodiment of the present invention.
The flow of obtaining tracks from the panoramic video stream includes: 1) step S212-a, take the position coordinate points of the persons detected in the panoramic image frames within one synchronization time window and connect them in chronological order, to obtain all possible path tracks of all persons; 2) step S212-b, within each synchronization time window, assuming a person moves by one step, obtain the start and end points of each of the person's movement paths in each step, and eliminate paths longer than the step-length threshold of a typical person, to reduce the computation; 3) step S212-c, calculate the actual distance between each detected person and the region center (usually the point where the installation position of the panoramic camera projects onto the ground; a specific computation uses, for example, the distance between the camera installation point and the projection point, i.e. the mounting height of the panoramic camera); 4) step S212-d, based on the result of step S212-c, calculate the track-related information, including the actual step length (an example of the movement amount) and the turn.
Fig. 8 shows a schematic diagram of an example of the motion-track-related information (including step length and turn) obtained in the steps shown in Fig. 7.
Here, the motion-track-related information obtained from the trajectory processing of the panoramic video stream is produced by the following steps, as shown in Fig. 8:
c-1) in each synchronization time window, collect all position coordinate points of all detected persons;
c-2) based on these position coordinate points, obtain all possible tracks; the number of tracks is given by formula (1):
Num_Path = ∏_i length(pt_i)    (1)
where pt_i denotes all position coordinate points of all persons detected in the i-th synchronization time window (so length(pt_i) is the number of detections in that window).
c-3) within each synchronization time window, using the first and last points as the track path, filter out duplicate tracks to obtain the possible tracks; for example, the left side of Fig. 8 shows all possible tracks from the two points 2 of the 2nd synchronization time window ([pt21.x, pt21.y] and [pt22.x, pt22.y]) through the single point 3 of the 3rd synchronization time window ([pt3.x, pt3.y]) to the three points 4 of the 4th synchronization time window ([pt41.x, pt41.y], [pt42.x, pt42.y], [pt43.x, pt43.y]); each row listed at the lower side of Fig. 8 represents one possible track;
c-4) across different synchronization time windows, combine the possible tracks obtained in step c-3 and filter out tracks with large jumps (i.e. tracks whose path length exceeds the one-step step-length threshold of a typical person), to obtain the alternative tracks;
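Steps c-2) to c-4) (enumerate one detection per window, then drop tracks containing an impossible jump) can be sketched as follows. Before filtering, the enumeration yields exactly the product of the per-window detection counts, matching formula (1). The coordinates and the threshold value here are hypothetical.

```python
# Hypothetical sketch of steps c-2) to c-4): enumerate one detection per
# synchronization window, then drop tracks containing an impossible jump.
from itertools import product
from math import hypot

def candidate_tracks(points_per_window, max_step=0.8):
    """points_per_window -- list of windows, each a list of (x, y)
    detections in metres; max_step is the one-step step-length threshold
    of a typical person (m)."""
    tracks = []
    for path in product(*points_per_window):   # one point per window
        ok = all(hypot(b[0] - a[0], b[1] - a[1]) <= max_step
                 for a, b in zip(path, path[1:]))
        if ok:
            tracks.append(path)
    return tracks
```

With windows containing 1, 2 and 1 detections, `product` enumerates 1 × 2 × 1 = 2 candidate tracks, and the jump filter then removes any track with a segment longer than `max_step`.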
c-5) based on the characteristics of the panoramic image, calculate the actual distance of a person in the region from the region center (the position of the panoramic camera), as shown in formula (2) (the printed formula is not reproduced in this text), where H is the height of the panoramic camera above the ground, h is the height of the person (with multiple people a mean value, e.g. 1.7 m, can be estimated), p is the pixel size of the image sensor, and f is the focal length of the fisheye lens of the panoramic camera;
c-6) based on the result of c-5, calculate the person's step length and turn within the synchronization time window.
Fig. 9 show the moving body and regional center calculated in step as shown in Figure 7 in panoramic image frame it is practical away fromFrom schematic diagram.
See Fig. 9, the intersection point of the optical axis and ground that are usually located at the panoramic camera at ceiling can be used as in regionThe heart, and assume that the Pixel Dimensions of the people detected are p, the actual height of people is h, the position coordinates point of the people detected and regionThe pixel distance at center is r, and the pixels tall of the position coordinates point is n, pixel wide m.
In this way, the coordinate position of people and the actual range d of regional center can be calculated by following formula.
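Since the printed formula (2) is not reproduced in the text, the sketch below assumes an equidistant fisheye projection (image radius times pixel pitch equals f·θ) and recovers the ground distance as d = (H - h)·tan(θ). Both this reconstruction and the default values are assumptions, not necessarily the patent's exact formula.

```python
from math import tan

def distance_from_center(r_px, H=3.0, h=1.7, p=3.45e-6, f=1.4e-3):
    """Actual ground distance d between a detected person and the region
    center (the camera's projection onto the ground).

    r_px -- pixel distance from the image center to the detection
    H    -- camera height above the ground (m)
    h    -- assumed person height (m), e.g. the 1.7 m mean
    p    -- pixel size of the image sensor (m)
    f    -- focal length of the fisheye lens (m)
    """
    theta = p * r_px / f              # equidistant model: r * p = f * theta
    return (H - h) * tan(theta)
```

A detection at the image center gives d = 0, and d grows with the pixel radius until the incidence angle approaches 90 degrees.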
Fig. 10 shows a schematic diagram of calculating the actual movement amount of a person from the distances calculated according to Fig. 9.
The actual movement amount of a person referred to here is the step length within a certain synchronization time window; it can be calculated by formula (3), as shown in Fig. 10:
d1² + d2² - 2·d1·d2·cos α1 = SL²    (3)
Here, d1 is the actual distance between the person detected in the earlier panoramic image frame and the region center, d2 is the actual distance between the person detected in the later panoramic image frame and the region center, SL denotes the person's actual movement amount from the earlier frame to the later frame, and α1 denotes the rotation angle of the detected person about the panoramic camera from the earlier frame to the later frame. x1, y1 and x2, y2 are the coordinates on the panoramic image frame of the person detected in the two frames within the same synchronization time window. For a panoramic image frame, distances and movement amounts are distorted, but angles, especially angles about the region center, correspond to reality; therefore the angle α1 on the panoramic image frame is used as the angle in practice.
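Formula (3) is the law of cosines in the triangle formed by the region center and the two detected positions, with the central angle α1 read from the image; a direct sketch:

```python
from math import sqrt, cos, radians

def step_length_from_frames(d1, d2, alpha_deg):
    """Movement SL between two panoramic frames via the law of cosines.

    d1, d2    -- actual distances from the region center in the earlier
                 and later frame (m)
    alpha_deg -- rotation angle about the camera between the two
                 detections (degrees)
    """
    a = radians(alpha_deg)
    return sqrt(d1 * d1 + d2 * d2 - 2.0 * d1 * d2 * cos(a))
```

For a right-angle case (d1 = 3 m, d2 = 4 m, α1 = 90°) this reduces to the Pythagorean theorem and yields SL = 5 m.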
Fig. 11 shows a schematic diagram of judging the turn of a moving body in the steps shown in Fig. 7.
The turn refers to the rotation direction within a certain synchronization time window; it can be obtained by the following steps:
c-6-1) obtain the historical rotation direction;
c-6-2) within the current synchronization time window, compare the detected person's actual distances from the region center (i.e. compare the distance from the region center of the first detected coordinate point in the window with that of the last detected coordinate point in the window), and judge the possible rotation manner based on the distance comparison result;
c-6-3) within the current synchronization time window, compare the position coordinates of the detected person, and judge the possible rotation manner based on the coordinate comparison;
c-6-4) based on the two preceding judgment results, obtain the turn of each track.
Specifically, the detailed comparison process is described with reference to Fig. 11.
In the turn calculation, the known quantities are: the coordinate points (xi, yi) of the person detected in each panoramic image frame, and the actual distance di between the detected person and the region center, where i is the index of the panoramic image frame.
The following steps are carried out:
1. By comparing the di between panoramic image frames, first judge whether the person is moving away from or towards the center:
For example, if d2 is greater than d1, the person's actual distance from the center in the second panoramic image frame is greater than that in the first panoramic image frame, so the person can be considered to be moving away from the center.
Then the person's possible movement range is the range indicated by the thin line segments at the lower left of Fig. 11.
2. By comparing the xi and yi, judge the person's possible movement range more specifically:
For example, if x2 is greater than x1 and y1 is greater than y2, the person's possible movement range is the range indicated by the solid line segments at the lower part of Fig. 11.
3. Judge the possible turn from a historical value (x0, y0) (i.e. the coordinates of the person detected in the previous synchronization time window) or a future value (x3, y3) (e.g. the coordinates of the person detected in the next synchronization time window).
For example, if x0 < x1 and y0 > y1, or x3 < x2 and y3 < y2, it can be judged that the person is rotating clockwise, as indicated by the arrow in Fig. 11.
4. From the xi and yi, calculate the amplitude (the specific angle value) of the person's rotation. This amplitude or angle value is calculated from the coordinates on the image. The distortion introduced by the fisheye image causes some error, but since the approximate range of the rotation has already been determined qualitatively by the first two comparisons, and a person's movement range within a short time is limited, the steering angle on the image can be used to roughly estimate the actual steering angle.
First, form the equation of the line through (x0, y0) and (x1, y1), formula (5) (not reproduced here; in two-point form it is (y′ - y1)(x1 - x0) = (x′ - x1)(y1 - y0)).
Next, form the equation of the circle centered at (x1, y1) and passing through (x2, y2), formula (6):
(x′ - x1)² + (y′ - y1)² = (x2 - x1)² + (y2 - y1)²    (6)
Then, solve the two equations together, adding the orientation judgment above, to obtain (x′, y′).
Finally, solve the trigonometric relation among (x′, y′), (x2, y2) and (x1, y1) to obtain the steering angle α, formula (7) (not reproduced here; since the two sides from (x1, y1) have equal length, cos α = [(x′ - x1)(x2 - x1) + (y′ - y1)(y2 - y1)] / [(x2 - x1)² + (y2 - y1)²]).
Note that in systems where high precision is not required and fast response is needed, the calculation of step 4 need not be carried out if the rough judgments of the first three steps meet the requirements; alternatively, step 4 can directly estimate a steering angle for verification, to achieve the live effect of fast response.
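Under the assumption stated above, that angles on the image approximate real angles, the direction judgment of steps 1-3 and the amplitude estimate of step 4 can be collapsed into a cross-product-and-atan2 sketch on image coordinates. Mathematical axes (y up) are assumed; with image axes (y down) the two direction labels swap. This is a rough illustration, not the patent's exact procedure.

```python
from math import atan2, degrees

def turn_at(p0, p1, p2):
    """Turn direction and unsigned angle at p1 along the path p0 -> p1 -> p2.

    Returns ('ccw' | 'cw' | 'straight', angle_deg). The cross product of
    the two segment vectors gives the direction, atan2 the amplitude.
    """
    v1 = (p1[0] - p0[0], p1[1] - p0[1])
    v2 = (p2[0] - p1[0], p2[1] - p1[1])
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    angle = abs(degrees(atan2(cross, dot)))
    if abs(cross) < 1e-12:
        return 'straight', angle
    return ('ccw' if cross > 0 else 'cw'), angle
```

Here (p0, p1, p2) would be the historical, current-start and current-end coordinate points of one synchronization time window.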
Fig. 12 shows a flowchart of the step of obtaining a motion track based on inertial data in an example of the image processing method according to an embodiment of the present invention.
The trajectory-processing flow based on inertial data includes: 1) step S221-a, obtain the inertial data: accelerometer, gyroscope and magnetic-sensor readings; 2) step S221-b, based on the inertial data, the PDR analysis process obtains further information: step count, step length and turn; 3) step S222-a, based on the user behavior, acceleration values and turn values have an inverse functional relationship, and based on this relationship unreasonable inertial data are filtered out; 4) within each step time window, step S222-b obtains the step length and step S222-c obtains the turn.
The PDR data are obtained by PDR analysis of the inertial data of the mobile device. The inertial data include the result data of the accelerometer, the gyroscope and the magnetic sensor. The PDR analysis includes gait detection, step-length analysis and turn analysis.
The movement amount is the person's step length within a certain synchronization time window; it is obtained by the following steps, as shown in Fig. 12:
d-1) based on the basic inertial data, analyze the user behavior and obtain the PDR data;
d-2) based on the relationship between the acceleration values, the gyroscope values and the user behavior, filter out a part of the acceleration values and gyroscope values (as rotation vectors);
d-3) within this synchronization time window, obtain the step length SL according to formula (8), and judge the feasibility of the step length according to the user behavior;
where a denotes the acceleration value, v_0 denotes the initial velocity within the synchronization time window, and t denotes the time length of the synchronization time window (the printed formula (8) is not reproduced here). The initial velocity value can be obtained, for example, by one of three methods: the first is an empirical default, since a person's walking speed is usually 1-1.5 m/s; the second is to retain the final velocity of the previous step (or previous synchronization time window) and use it as the initial velocity of the next step (or next synchronization time window); the last is to treat each step of a person as a brief stop, so that the initial velocity is always 0. The first can be used here; it is more accurate, with smaller error.
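Since the printed formula (8) is not reproduced, a natural kinematic reading of the quantities defined above (a, v_0, t) is SL = v_0·t + a·t²/2, used in the sketch below together with a plausibility check on the result. Both this reading and the feasibility bounds are assumptions.

```python
def pdr_step_length(a, t, v0=1.2, lo=0.3, hi=1.0):
    """Step length within one synchronization window, or None if the
    value is infeasible for a walking person.

    a  -- mean forward acceleration over the window (m/s^2)
    t  -- window length (s)
    v0 -- assumed initial speed; a walk is typically 1-1.5 m/s
    lo, hi -- illustrative feasibility bounds on a single step (m)
    """
    sl = v0 * t + 0.5 * a * t * t     # assumed reading of formula (8)
    return sl if lo <= sl <= hi else None
```

A moderate acceleration over half a second yields a plausible step, while an extreme value is rejected as not matching walking behavior.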
Figs. 13A and 13B show schematic diagrams of judging the turn based on inertial data in the steps shown in Fig. 12.
The turn within a certain synchronization time window usually refers to a rotation vector; it is obtained by the following steps, as shown in Fig. 13:
d-1) based on the basic inertial data, analyze the user behavior and obtain the PDR data;
d-2) based on the relationship between the acceleration values, the gyroscope values and the user behavior, filter out a part of the acceleration values and gyroscope values (rotation vectors);
d-3) within this synchronization time window, obtain the rotation vector value, as shown in Fig. 13A.
Specifically, in the step of obtaining the turn, the rotation vector (rv) can be obtained from the gyroscope values in the inertial data and used as the direction information; this is the arc-tangent value of the angle, ranging from -1 to 1. If the inertial data output gyroscope values, a similar steering vector can be generated by a traditional PDR algorithm calling the relevant software module. A specific solution method solves the inertial data via the quaternion equations, formulas (9) and (10) (not reproduced here),
where θ denotes the rotation angle about the y axis (relative to the initial direction specified at system initialization), Δθ denotes the rotation angle between two successive inertial data samples, (x, y, z, w) denotes the quaternion of the corresponding rotation, and φ and ψ denote the rotation angles about the x axis and the y axis respectively (as shown in Fig. 13B).
In this way, the step length and the direction information related to the motion track have been obtained from the inertial data.
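Formulas (9) and (10) are not reproduced in the text; one common convention for extracting the rotation angle θ about the y axis from a unit rotation-vector quaternion (x, y, z, w) is sketched below. The axis ordering depends on the sensor frame and is an assumption here, not necessarily the patent's.

```python
from math import atan2

def yaw_about_y(x, y, z, w):
    """Rotation angle (radians) about the y axis of a unit quaternion
    (x, y, z, w), under one common Tait-Bryan convention."""
    return atan2(2.0 * (w * y + x * z), 1.0 - 2.0 * (y * y + z * z))
```

For a pure rotation of angle θ about y, the quaternion is (0, sin(θ/2), 0, cos(θ/2)) and the function recovers θ.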
Note that although some specific formulas appear in the above examples, these formulas are only examples; those skilled in the art can construct other formulas according to the principles of the present application, and they are not enumerated here one by one.
Fig. 14 shows a flowchart of filtering impossible motion tracks and identifying the moving body in the image processing method according to an embodiment of the present invention. Fig. 15 shows a schematic diagram of the comparison time window in the image processing method according to an embodiment of the present invention.
The process of performing identification and identity binding by comparing motion tracks includes the following steps, as shown in Fig. 14:
e-1) step S411, in each synchronization time window, obtain the movement amount (here, the step length) and direction information from the detection results derived from the panoramic video stream;
e-2) step S421, in each synchronization time window, obtain the movement amount (here, the step length) and direction information derived from the inertial data and the PDR analysis;
e-3) determine the comparison time window, whose length, as shown in Fig. 15, is a multiple of the synchronization time window;
e-4-1) step S412, within the comparison time window, compare the movement-amount information derived from the panoramic video stream and from the inertial data to obtain a similarity, as shown in formula (11); this is defined as filter A:
∑ Sim(dp_i, dt_i)    (11)
where dp_i denotes the movement amount obtained from the inertial data and dt_i denotes the movement amount obtained from the panoramic video stream. Filter A filters out dissimilar movement amounts, e.g. pairs whose variance is large. The similarity sim computed above is usually a value less than 1, and the closer to 1 the more similar; the filtering can therefore directly remove the smallest sim values, or set a threshold range and eliminate values below the threshold.
e-4-2) step S422, within the comparison time window, compare the direction information derived from the panoramic video stream and from the inertial data to obtain a similarity, as shown in formula (12); this is defined as filter B:
∑ Sim(tp′_i, tt_i)    (12)
where tp′_i denotes the direction information obtained from the inertial data and tt_i denotes the direction information obtained from the panoramic video stream. Filter B filters out dissimilar direction information, e.g. pairs whose variance is large. As above, the similarity sim is usually a value less than 1, and the closer to 1 the more similar; the filtering can therefore directly remove the smallest sim values, or set a threshold range and eliminate values below the threshold.
In this way, in step S43, filters A and B jointly filter out impossible tracks.
e-4-3) step S44, filter out impossible tracks by combining the historical information of past comparison time windows; this is defined as filter C:
combining historical windows extends the synchronization time window, i.e. lengthens the comparison time window used for comparing tracks, for example from comparing within 1 s to comparing within 3 s, so the compared track segments also grow. Filter C then filters by comparing movement amounts and turns, in the manner of filters A and B.
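The window lengthening of filter C can be sketched as a simple merge of per-step movement amounts into comparison windows several synchronization windows long, for example three 1 s windows into one 3 s window, before re-running the filter-A/B comparison. Summation as the merge rule is an illustrative assumption.

```python
def extend_windows(step_values, factor=3):
    """Merge per-step values into comparison windows of `factor` steps
    (filter C helper); step lengths are summed per merged window."""
    return [sum(step_values[i:i + factor])
            for i in range(0, len(step_values), factor)]
```

The merged sequences from both sources would then be fed back into the same similarity comparison used by filters A and B.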
e-5) finally, step S45, after all impossible tracks have been filtered out by filters A, B and C, the most probable pair of tracks and the corresponding detected person are obtained, and the person detected in the panoramic video stream is identified as having the identity of the person corresponding to the inertial data of the mobile device.
Of course, the filters A, B and C mentioned above are only examples; in fact any way of filtering out impossible tracks is feasible and is included in the scope of the present application.
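A minimal end-to-end matching sketch combining filters A and B might look as follows. The per-window similarity function, the threshold and all names are illustrative assumptions, not the patent's exact definitions.

```python
def match_person(imu_track, video_tracks, min_sim=0.5):
    """Index of the detected track best matching the IMU-derived one,
    or None if no candidate passes the threshold.

    imu_track, video_tracks[k] -- lists of (step_length, turn_deg) pairs,
    one pair per synchronization window.
    """
    def sim(a, b):                       # toy similarity: closer to 1 = better
        return 1.0 / (1.0 + abs(a - b))

    best, best_score = None, -1.0
    perfect = 2.0 * len(imu_track)       # filters A + B, sim == 1 everywhere
    for k, track in enumerate(video_tracks):
        score_a = sum(sim(si, sv) for (si, _), (sv, _) in zip(imu_track, track))
        score_b = sum(sim(ti, tv) for (_, ti), (_, tv) in zip(imu_track, track))
        score = score_a + score_b
        if score >= min_sim * perfect and score > best_score:
            best, best_score = k, score
    return best
```

The returned index would then be bound to the identity information stored in the mobile device, completing the authentication step S4.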
Fig. 16 shows a block diagram of an image processing apparatus 1600 according to an embodiment of the present invention.
The image processing apparatus 1600 includes: a receiver 1601 configured to receive, within one or more synchronization time windows, the inertial data of the mobile device carried by the first moving body and multiple panoramic image frames shot by the panoramic camera; a first calculator 1602 configured to obtain, based on the inertial data of the mobile device carried by the first moving body, first information related to the motion track of the mobile device, the first information including the movement amount and direction information of the motion track of the mobile device within the one synchronization time window; a second calculator 1603 configured to obtain, based on the panoramic image frames, second information related to the motion tracks of one or more to-be-identified moving bodies detected in the panoramic image frames, the second information including the movement amounts and direction information of the motion tracks of the one or more to-be-identified moving bodies within the one synchronization time window; and a comparator 1604 configured to compare the similarity of the first information and the second information to match a to-be-identified moving body in the panoramic image frames with the first moving body carrying the mobile device.
In one embodiment, the first calculator 1602 can be configured to obtain, based on the inertial data, pedestrian dead reckoning (PDR) data indicating the gait, step length and direction information of the motion track of the mobile device.
In one embodiment, the first calculator 1602 can be configured to: analyze the gait of the first moving body based on the inertial data; based on the acceleration values and gyroscope values in the inertial data, an acceleration-value threshold, a gyroscope-value threshold and the gait of the first moving body, filter out a part of the impossible acceleration values and gyroscope values to obtain alternative acceleration values and alternative gyroscope values; within one synchronization time window, obtain the step length based on the alternative acceleration values and the velocity at the beginning of the synchronization time window; based on a step-length threshold, filter out impossible step lengths to obtain the alternative movement amount; and obtain the rotation vector value as the direction information based on the alternative gyroscope values.
In one embodiment, the second calculator 1603 can be configured to: within the one synchronization time window, obtain all coordinate positions of the multiple to-be-identified moving bodies detected in the multiple panoramic image frames; connect all the coordinate positions of the multiple to-be-identified moving bodies in the two panoramic image frames at the beginning and the end of a step within the one synchronization time window to obtain all possible tracks; and, based on a displacement threshold for a to-be-identified moving body within a predetermined time length, filter impossible tracks from all the possible tracks to obtain the alternative tracks.
In one embodiment, the second calculator 1603 can be configured such that the step of obtaining, based on the panoramic image frames, the second information related to the motion tracks of the to-be-identified moving bodies in the panoramic image frames further includes: obtaining, based on the panoramic image, the actual distances of the multiple to-be-identified moving bodies from the center of the panoramic image frame; obtaining, based on the alternative tracks, the rotation angles of the multiple to-be-identified moving bodies relative to the panoramic camera; and obtaining the alternative movement amounts of the multiple to-be-identified moving bodies based on the distances and the rotation angles.
In one embodiment, the second calculator 1603 can be further configured to: judge the direction information of a to-be-identified moving body based on the size difference between its actual distances from the center of the panoramic image frame in the two panoramic image frames at the beginning and the end of the one synchronization time window; and, if the size difference is zero, judge the direction information of the to-be-identified moving body within the synchronization time window based on the size difference between its position coordinates in the two panoramic image frames at the beginning and the end of the synchronization time window.
In one embodiment, the comparator 1604 can be configured to: determine a comparison time window whose length is a multiple of the length of the synchronization time window; within the comparison time window, compare the movement amount in the first information with all the alternative movement amounts in the second information to obtain a distance similarity; within the comparison time window, compare the direction information in the first information with the direction information in the second information to obtain a turn similarity; and, based on the distance similarity and the turn similarity, judge which of the one or more to-be-identified moving bodies matches the first moving body carrying the mobile device.
In this way, the tracks of the multiple to-be-identified moving bodies detected in the panoramic video stream can be compared with the track of the first moving body carrying the mobile device that generates the inertial data, identity authentication and binding of the moving body (e.g. a user) can be performed on the most similar track, and an identity authentication result can be obtained. For example, if among the motion tracks of the multiple to-be-identified moving bodies detected in the panoramic video stream there is one whose similarity with the track of the moving body carrying the mobile device generating the inertial data is the largest, that to-be-identified moving body can be considered to be the first moving body carrying the mobile device.
Fig. 17 shows a block diagram of a mobile device according to an embodiment of the present invention.
The mobile device 1700 shown in Fig. 17 includes: a data collector 1701 configured to collect the inertial data from the mobile device; and a communicator 1702 configured to send the collected inertial data to the image processing apparatus 1600, wherein the image processing apparatus 1600 receives, within one synchronization time window, the inertial data of the mobile device carried by the first moving body and multiple panoramic image frames shot by the panoramic camera; obtains, based on the inertial data of the mobile device carried by the first moving body, first information related to the motion track of the mobile device, the first information including the movement amount and direction information of the motion track of the mobile device within the one synchronization time window; obtains, based on the panoramic image frames, second information related to the motion tracks of one or more to-be-identified moving bodies detected in the panoramic image frames, the second information including the movement amounts and direction information of the motion tracks of the one or more to-be-identified moving bodies within the one synchronization time window; and compares the similarity of the first information and the second information to match a to-be-identified moving body in the panoramic image frames with the first moving body carrying the mobile device.
Here, the image processing apparatus 1600 can also have the structure described above with reference to Fig. 16, which is not repeated here.
In this way, the tracks of the multiple to-be-identified moving bodies detected in the panoramic video stream can be compared with the track of the first moving body carrying the mobile device that generates the inertial data, identity authentication and binding of the moving body (e.g. a user) can be performed on the most similar track, and an identity authentication result can be obtained. For example, if among the motion tracks of the multiple to-be-identified moving bodies detected in the panoramic video stream there is one whose similarity with the track of the moving body carrying the mobile device generating the inertial data is the largest, that to-be-identified moving body can be considered to be the first moving body carrying the mobile device.
Fig. 18 shows a block diagram of an example image processing system suitable for implementing embodiments of the present invention.
The image processing system may include a processor (H1), and a memory (H2) coupled to the processor (H1) and storing computer-executable instructions which, when executed by the processor, carry out the steps of the methods of the embodiments described herein.
The processor (H1) can include, but is not limited to, for example one or more processors or microprocessors.
The memory (H2) can include, but is not limited to, for example random access memory (RAM), read-only memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, hard disks, floppy disks, solid-state disks, removable disks, CD-ROMs, DVD-ROMs, Blu-ray discs, etc.
In addition, the image processing system can also include a data bus (H3), an input/output (I/O) bus (H4), a display (H5) and input/output devices (H6) (e.g. keyboard, mouse, speaker, etc.).
The processor (H1) can communicate with external devices (H5, H6, etc.) through the I/O bus (H4) via wired or wireless networks (not shown).
The memory (H2) can also store at least one computer-executable instruction for carrying out, when executed by the processor (H1), each function and/or the steps of the methods in the embodiments described in this technology.
Of course, the above specific embodiments are only examples and not restrictive; those skilled in the art can, according to the concept of the present invention, merge and combine some steps and apparatuses from the embodiments described separately above to realize the effect of the present invention. Embodiments obtained by such merging and combination are also included in the present invention, and such mergers and combinations are not described one by one here.
Note that the advantages, merits, effects, etc. mentioned in this disclosure are only examples and not limitations; these advantages, merits, effects, etc. must not be considered indispensable to every embodiment of the present invention. In addition, the specific details disclosed above are only for the purpose of illustration and ease of understanding, and are not restrictive; the above details do not limit the present invention to being realized with those specific details.
The block diagrams of devices, apparatuses, equipment and systems involved in this disclosure are only illustrative examples and are not intended to require or imply that connection, arrangement and configuration must be carried out in the manner shown in the blocks. As those skilled in the art will appreciate, these devices, apparatuses, equipment and systems can be connected, arranged and configured in any manner. Words such as "include", "comprise" and "have" are open-ended words meaning "including but not limited to", and can be used interchangeably therewith. The words "or" and "and" used here refer to the word "and/or" and can be used interchangeably therewith, unless the context clearly indicates otherwise. The word "such as" used here refers to the phrase "such as but not limited to" and can be used interchangeably therewith.
The step flowcharts and the above method descriptions in this disclosure are only illustrative examples and are not intended to require or imply that the steps of each embodiment must be carried out in the order given. As those skilled in the art will appreciate, the steps in the above embodiments can be carried out in any order. Words such as "thereafter", "then" and "next" are not intended to limit the order of the steps; these words are only used to guide the reader through the description of the methods. In addition, any reference to a singular element, for example using the articles "a", "an" or "the", is not to be interpreted as limiting the element to the singular.
In addition, the steps and devices in the embodiments herein are not limited to being carried out in any one particular embodiment; in fact, relevant partial steps and partial devices from the embodiments herein may be combined, in accordance with the concept of the present invention, to conceive new embodiments, and these new embodiments are also intended to be included within the scope of the present invention.
Each operation of the methods described above may be carried out by any appropriate means capable of performing the corresponding function. Such means may include various hardware and/or software components and/or modules, including but not limited to hardware circuits, application-specific integrated circuits (ASICs), or processors.
The illustrated logical blocks, modules, and circuits may be implemented or performed with a general-purpose processor designed to perform the functions described herein, a digital signal processor (DSP), an ASIC, a field-programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with this disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of tangible storage medium. Some examples of usable storage media include random access memory (RAM), read-only memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, and so on. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. A software module may be a single instruction or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media.
The methods disclosed herein comprise one or more actions for achieving the described method. The methods and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of actions is specified, the order and/or use of specific actions may be modified without departing from the scope of the claims.
The functions described above may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions on a tangible computer-readable medium. A storage medium may be any available tangible medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. As used herein, disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Thus, a computer program product may perform the operations presented herein. For example, such a computer program product may be a computer-readable tangible medium having instructions tangibly stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. The computer program product may include packaging material.
Software or instructions may also be transmitted over a transmission medium. For example, software may be transmitted from a website, server, or other remote source using a transmission medium such as coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, or microwave.
Further, modules and/or other appropriate means for performing the methods and techniques described herein may be downloaded and/or otherwise obtained by a user terminal and/or base station as appropriate. For example, such a device may be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, the various methods described herein may be provided via storage means (e.g., RAM, ROM, or a physical storage medium such as a CD or floppy disk), such that a user terminal and/or base station can obtain the various methods upon coupling the storage means to the device or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device may be utilized.
Other examples and implementations are within the scope and spirit of the disclosure and the appended claims. For example, due to the nature of software, the functions described above may be implemented using software executed by a processor, hardware, firmware, hardwiring, or any combination of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, "or" as used in a list of items prefaced by "at least one of" indicates a disjunctive list, such that, for example, a list of "at least one of A, B, or C" means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
Various changes, substitutions, and alterations to the techniques described herein may be made without departing from the technology taught as defined by the appended claims. Moreover, the scope of the claims of the present disclosure is not limited to the specific aspects of the processes, machines, manufacture, compositions of matter, means, methods, and actions described above. Processes, machines, manufacture, compositions of matter, means, methods, or actions, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or actions.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the invention to the forms disclosed herein. While a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, alterations, additions, and sub-combinations thereof.

Claims (10)

Wherein the image processing apparatus receives, within a synchronization time window, inertial navigation data of a mobile device carried by a first moving body and a plurality of panoramic image frames captured by a panoramic camera; obtains, based on the inertial navigation data of the mobile device carried by the first moving body, first information related to the movement track of the mobile device, the first information including the movement amount and direction information of the movement track of the mobile device within the synchronization time window; obtains, based on the panoramic image frames, second information related to the movement tracks of one or more moving bodies to be identified detected in the panoramic image frames, the second information including the movement amount and direction information of the movement tracks of the one or more moving bodies to be identified within the synchronization time window; and compares the similarity of the first information and the second information in order to match a moving body to be identified in the panoramic image frames with the first moving body carrying the mobile device.
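The matching step recited in the claim — reducing each track within the synchronization time window to a movement amount and a direction, then comparing the similarity of the two sets of features — can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the function names, the feature reduction (net displacement per window), and the equal weighting of amount and direction scores are all assumptions.

```python
import math

def trajectory_features(points):
    """Reduce a track (list of (x, y) positions sampled within one
    synchronization time window) to (movement amount, direction)."""
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def similarity(first_info, second_info):
    """Compare two (amount, direction) feature pairs; higher is more similar."""
    a1, d1 = first_info
    a2, d2 = second_info
    # Angular difference wrapped into [0, pi].
    dd = abs((d1 - d2 + math.pi) % (2 * math.pi) - math.pi)
    amount_score = 1.0 - abs(a1 - a2) / max(a1, a2, 1e-9)
    direction_score = 1.0 - dd / math.pi
    return 0.5 * amount_score + 0.5 * direction_score

def match_moving_body(device_track, candidate_tracks):
    """Return the index of the detected moving body (from the panoramic
    frames) whose trajectory best matches the device's inertial trajectory."""
    first_info = trajectory_features(device_track)
    scores = [similarity(first_info, trajectory_features(t))
              for t in candidate_tracks]
    return max(range(len(scores)), key=scores.__getitem__)
```

Because only relative motion within the window is compared, the device track (from inertial data) and the candidate tracks (from image detection) need not share a coordinate origin, which is consistent with the claim comparing movement amount and direction rather than absolute positions.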
CN201710123786.XA | 2017-03-03 | 2017-03-03 | Image processing method, device and system | Active | CN108537094B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201710123786.XA (CN108537094B) | 2017-03-03 | 2017-03-03 | Image processing method, device and system

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201710123786.XA (CN108537094B) | 2017-03-03 | 2017-03-03 | Image processing method, device and system

Publications (2)

Publication Number | Publication Date
CN108537094A | 2018-09-14
CN108537094B | 2022-11-22

Family

ID=63488428

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201710123786.XA | Active | CN108537094B (en)

Country Status (1)

Country | Link
CN (1) | CN108537094B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110547803A (en)* | 2019-07-27 | 2019-12-10 | 华南理工大学 | Pedestrian height estimation method suitable for overlooking shooting of fisheye camera
CN111950520A (en)* | 2020-08-27 | 2020-11-17 | 重庆紫光华山智安科技有限公司 | Image recognition method and device, electronic equipment and storage medium
CN112699706A (en)* | 2019-10-22 | 2021-04-23 | 广州弘度信息科技有限公司 | Fall detection method, system and storage medium
CN112734938A (en)* | 2021-01-12 | 2021-04-30 | 北京爱笔科技有限公司 | Pedestrian position prediction method, device, computer equipment and storage medium
CN112849648A (en)* | 2020-12-31 | 2021-05-28 | 重庆国际复合材料股份有限公司 | Intelligent tray identification system and method
WO2023282835A1 (en)* | 2021-07-08 | 2023-01-12 | Spiideo AB | A data processing method, system and computer program product in video production of a live event

Citations (15)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103179378A (en)* | 2011-12-26 | 2013-06-26 | 天津市亚安科技股份有限公司 | Video monitoring device with privacy sheltering function and privacy sheltering method
CN103994765A (en)* | 2014-02-27 | 2014-08-20 | 北京工业大学 | Positioning method of inertial sensor
US20150085111A1 (en)* | 2013-09-25 | 2015-03-26 | Symbol Technologies, Inc. | Identification using video analytics together with inertial sensor data
CN104501814A (en)* | 2014-12-12 | 2015-04-08 | 浙江大学 | Attitude and position estimation method based on vision and inertia information
CN104515521A (en)* | 2013-09-26 | 2015-04-15 | 株式会社巨晶片 | Pedestrian observation system, recording medium, and estimation of direction of travel
CN104969030A (en)* | 2013-02-04 | 2015-10-07 | 株式会社理光 | Inertial device, method, and program
CN105074381A (en)* | 2013-01-21 | 2015-11-18 | 可信定位股份有限公司 | Method and apparatus for determining misalignment between equipment and pedestrians
CN105556243A (en)* | 2013-08-06 | 2016-05-04 | 高通股份有限公司 | Method and apparatus for position estimation using trajectory
CN105550670A (en)* | 2016-01-27 | 2016-05-04 | 兰州理工大学 | Target object dynamic tracking and measurement positioning method
JP2016129309A (en)* | 2015-01-09 | 2016-07-14 | 富士通株式会社 | Object linking method, device and program
US20160259979A1 (en)* | 2012-03-22 | 2016-09-08 | Bounce Imaging, Inc. | Remote surveillance sensor apparatus
CN105987694A (en)* | 2015-02-09 | 2016-10-05 | 株式会社理光 | Method and apparatus for identifying user of mobile equipment
CN106257909A (en)* | 2015-06-16 | 2016-12-28 | LG电子株式会社 | Mobile terminal and control method thereof
US20160379074A1 (en)* | 2015-06-25 | 2016-12-29 | Appropolis Inc. | System and a method for tracking mobile objects using cameras and tag devices
US20170010124A1 (en)* | 2015-02-10 | 2017-01-12 | Mobileye Vision Technologies Ltd. | Systems and methods for uploading recommended trajectories

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103179378A (en)* | 2011-12-26 | 2013-06-26 | 天津市亚安科技股份有限公司 | Video monitoring device with privacy sheltering function and privacy sheltering method
US20160259979A1 (en)* | 2012-03-22 | 2016-09-08 | Bounce Imaging, Inc. | Remote surveillance sensor apparatus
CN105074381A (en)* | 2013-01-21 | 2015-11-18 | 可信定位股份有限公司 | Method and apparatus for determining misalignment between equipment and pedestrians
CN104969030A (en)* | 2013-02-04 | 2015-10-07 | 株式会社理光 | Inertial device, method, and program
CN105556243A (en)* | 2013-08-06 | 2016-05-04 | 高通股份有限公司 | Method and apparatus for position estimation using trajectory
US20150085111A1 (en)* | 2013-09-25 | 2015-03-26 | Symbol Technologies, Inc. | Identification using video analytics together with inertial sensor data
CN104515521A (en)* | 2013-09-26 | 2015-04-15 | 株式会社巨晶片 | Pedestrian observation system, recording medium, and estimation of direction of travel
CN103994765A (en)* | 2014-02-27 | 2014-08-20 | 北京工业大学 | Positioning method of inertial sensor
CN104501814A (en)* | 2014-12-12 | 2015-04-08 | 浙江大学 | Attitude and position estimation method based on vision and inertia information
JP2016129309A (en)* | 2015-01-09 | 2016-07-14 | 富士通株式会社 | Object linking method, device and program
CN105987694A (en)* | 2015-02-09 | 2016-10-05 | 株式会社理光 | Method and apparatus for identifying user of mobile equipment
US20170010124A1 (en)* | 2015-02-10 | 2017-01-12 | Mobileye Vision Technologies Ltd. | Systems and methods for uploading recommended trajectories
CN106257909A (en)* | 2015-06-16 | 2016-12-28 | LG电子株式会社 | Mobile terminal and control method thereof
US20160379074A1 (en)* | 2015-06-25 | 2016-12-29 | Appropolis Inc. | System and a method for tracking mobile objects using cameras and tag devices
CN105550670A (en)* | 2016-01-27 | 2016-05-04 | 兰州理工大学 | Target object dynamic tracking and measurement positioning method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Michael Peter et al.: "Refinement of coarse indoor models using position traces and a low-cost range camera", International Conference on Indoor Positioning and Indoor Navigation *
毕朝国 (Bi Chaoguo) et al.: "A moving-subject trajectory capture mechanism based on multi-sensor data fusion", Computer Science (《计算机科学》) *
熊林欣 (Xiong Linxin): "Research on target tracking methods based on multi-source data fusion", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110547803A (en)* | 2019-07-27 | 2019-12-10 | 华南理工大学 | Pedestrian height estimation method suitable for overlooking shooting of fisheye camera
CN110547803B (en)* | 2019-07-27 | 2021-12-21 | 华南理工大学 | Pedestrian height estimation method suitable for overlooking shooting of fisheye camera
CN112699706A (en)* | 2019-10-22 | 2021-04-23 | 广州弘度信息科技有限公司 | Fall detection method, system and storage medium
CN111950520A (en)* | 2020-08-27 | 2020-11-17 | 重庆紫光华山智安科技有限公司 | Image recognition method and device, electronic equipment and storage medium
CN111950520B (en)* | 2020-08-27 | 2022-12-02 | 重庆紫光华山智安科技有限公司 | Image recognition method and device, electronic equipment and storage medium
CN112849648A (en)* | 2020-12-31 | 2021-05-28 | 重庆国际复合材料股份有限公司 | Intelligent tray identification system and method
CN112734938A (en)* | 2021-01-12 | 2021-04-30 | 北京爱笔科技有限公司 | Pedestrian position prediction method, device, computer equipment and storage medium
WO2023282835A1 (en)* | 2021-07-08 | 2023-01-12 | Spiideo AB | A data processing method, system and computer program product in video production of a live event

Also Published As

Publication number | Publication date
CN108537094B (en) | 2022-11-22

Similar Documents

Publication | Publication Date | Title
CN108537094A (en) | Image processing method, device and system
JP7726357B2 (en) | Information processing device, information processing method, information processing program, and information processing system
EP3872689B1 (en) | Liveness detection method and device, electronic apparatus, storage medium and related system using the liveness detection method
US10043064B2 (en) | Method and apparatus of detecting object using event-based sensor
CN105955308B (en) | Aircraft control method and device
US9317762B2 (en) | Face recognition using depth based tracking
JP5771413B2 (en) | Posture estimation apparatus, posture estimation system, and posture estimation method
JP5631086B2 (en) | Information processing apparatus, control method therefor, and program
CN109784130B (en) | Pedestrian re-identification method, device and equipment thereof
CN103945134B (en) | Photo shooting and inspection method, and terminal therefor
CN104573617B (en) | Camera shooting control method
WO2012101962A1 (en) | State-of-posture estimation device and state-of-posture estimation method
JP2019121019A (en) | Information processing device, three-dimensional position estimation method, computer program, and storage medium
CN107368769 (en) | Human face in-vivo detection method, device and electronic equipment
TW201832182A (en) | Movement learning device, skill discrimination device, and skill discrimination system
JP2017174259A (en) | Moving body counting device and program
JP2022526468A (en) | Systems and methods for adaptively constructing a 3D face model based on two or more inputs of a 2D face image
JP2020106970A (en) | Human detection device and human detection method
Sun et al. | When we first met: Visual-inertial person localization for co-robot rendezvous
Athilakshmi et al. | Enhancing real-time human tracking using YOLONAS-DeepSort fusion models
Zhang et al. | Robust multi-view multi-camera face detection inside smart rooms using spatio-temporal dynamic programming
Jurado et al. | Inertial and imaging sensor fusion for image-aided navigation with affine distortion prediction
KR20130058172 (en) | System and the method thereof for sensing the face of intruder
CN117292435A (en) | Action recognition method and device and computer equipment
Schauerte et al. | Multi-modal and multi-camera attention in smart environments

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
