CN109886208A - Method, apparatus, computer equipment and the storage medium of object detection - Google Patents

Object detection method, apparatus, computer device, and storage medium

Info

Publication number
CN109886208A
CN109886208A
Authority
CN
China
Prior art keywords
point
anchor point
characteristic
detectable substance
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910137428.3A
Other languages
Chinese (zh)
Other versions
CN109886208B (en)
Inventor
杨帆 (Yang Fan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910137428.3A
Publication of CN109886208A
Application granted
Publication of CN109886208B
Status: Active
Anticipated expiration

Abstract

The present disclosure relates to an object detection method, apparatus, computer device, and storage medium, belonging to the technical field of computer vision. The method includes: determining a feature map of a target image; determining multiple feature points in the feature map; for each feature point, determining multiple reference points, and determining at least one anchor centered on each reference point; and, based on the determined anchors, performing object detection on the feature map to obtain the position information and category of each object contained in the target image. With the present disclosure, missed detections of objects can be reduced when detecting dense small objects.

Description

Object detection method, apparatus, computer device, and storage medium
Technical field
The present disclosure relates to the technical field of computer vision, and in particular to an object detection method, apparatus, computer device, and storage medium.
Background
Object detection is a key problem in the field of computer vision. The goal of object detection is first to determine whether a picture to be examined contains an object to be detected; if it does, the position and category of that object must also be determined.
The object detection method in the related art is as follows: first, multiple anchors are determined, centered on the feature points of a feature map; then each anchor is examined, and when an object is present within an anchor, the position information and category of that object are output.
One anchor can identify only one object. When the anchors corresponding to a single feature point contain multiple objects, those anchors all share the same center point, so the detection regions they are responsible for overlap heavily. When such anchors are examined, only the same object may be detected, causing the remaining objects to be missed.
Summary of the invention
The present disclosure provides an object detection method, apparatus, computer device, and storage medium, which can solve the technical problem that objects are often missed when the existing object detection approach is applied to the detection of dense small objects.
According to a first aspect of the embodiments of the present disclosure, an object detection method is provided, including:
determining a feature map of a target image;
determining multiple feature points in the feature map;
for each feature point, determining multiple reference points, and determining at least one anchor centered on each reference point; and
performing object detection on the feature map based on the determined anchors, to obtain the position information and category of each object contained in the target image.
Optionally, determining multiple reference points for each feature point and determining at least one anchor centered on each reference point includes:
for each feature point, determining at least one initial anchor; and
determining multiple reference points within each initial anchor, and determining at least one anchor centered on each reference point.
Optionally, determining multiple reference points within each initial anchor and determining at least one anchor centered on each reference point includes:
determining multiple uniformly distributed reference points within each initial anchor, and, based on the reference points within each initial anchor, dividing each initial anchor into multiple anchors, where the center point of each divided anchor is a reference point.
Optionally, determining multiple reference points for each feature point and determining at least one anchor centered on each reference point includes:
for each feature point, determining multiple reference points based on preset position information of the reference points relative to the feature point, and determining at least one anchor centered on each reference point.
Optionally, determining the feature map of the target image includes:
determining feature maps of the target image at multiple different scales.
Optionally, after performing object detection on the feature map based on the determined anchors and obtaining the position information and category of each object contained in the target image, the method further includes:
displaying the target image, and adding a label to each object in the target image based on the position information and category of that object.
Optionally, performing object detection on the feature map based on the determined anchors, to obtain the position information and category of each object contained in the target image, includes:
inputting the feature map region contained in each determined anchor into detection models corresponding to different object categories, to obtain a detection result for each anchor under each detection model; and
determining the position information and category of each object contained in the target image based on the detection results of each anchor under the different detection models.
According to a second aspect of the embodiments of the present disclosure, an object detection apparatus is provided, including:
a determination unit, configured to determine a feature map of a target image, determine multiple feature points in the feature map, and, for each feature point, determine multiple reference points and determine at least one anchor centered on each reference point; and
a detection unit, configured to perform object detection on the feature map based on the determined anchors, to obtain the position information and category of each object contained in the target image.
Optionally, the determination unit is configured to:
for each feature point, determine at least one initial anchor; and
determine multiple reference points within each initial anchor, and determine at least one anchor centered on each reference point.
Optionally, the determination unit is configured to:
determine multiple uniformly distributed reference points within each initial anchor, and, based on the reference points within each initial anchor, divide each initial anchor into multiple anchors, where the center point of each divided anchor is a reference point.
Optionally, the determination unit is configured to:
for each feature point, determine multiple reference points based on preset position information of the reference points relative to the feature point, and determine at least one anchor centered on each reference point.
Optionally, the determination unit is configured to:
determine feature maps of the target image at multiple different scales.
Optionally, the apparatus further includes:
a marking unit, configured to display the target image and add a label to each object in the target image based on the position information and category of that object.
Optionally, the detection unit is configured to:
input the feature map region contained in each determined anchor into detection models corresponding to different object categories, to obtain a detection result for each anchor under each detection model; and
determine the position information and category of each object contained in the target image based on the detection results of each anchor under the different detection models.
According to a third aspect of the embodiments of the present disclosure, a computer device is provided, including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
execute the method described in the first aspect of the embodiments of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided; when the instructions in the storage medium are executed by a processor of a computer device, the computer device is caused to execute the method described in the first aspect of the embodiments of the present disclosure.
According to a fifth aspect of the embodiments of the present disclosure, an application program is provided, including one or more instructions that can be executed by a processor of a server to complete the method described in the first aspect of the embodiments of the present disclosure.
The technical solutions provided by the embodiments of the present disclosure can include the following beneficial effects:
In the embodiments of the present disclosure, multiple reference points are first determined based on each feature point, and then at least one anchor is generated centered on each reference point, so that each feature point corresponds to multiple anchors that are not concentric.
Compared with the technical solution in the related art, since each feature point corresponds to multiple anchors with different centers, anchors at different positions are responsible for object detection in their respective regions, and the detection regions they are responsible for overlap less. As a result, when the method provided by the embodiments of the present disclosure is applied to the detection of dense small objects, fewer objects are missed.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings herein are incorporated into the specification and form a part of this specification; they show embodiments consistent with the invention and, together with the specification, serve to explain the principles of the invention.
Fig. 1 is a flowchart of an object detection method according to an exemplary embodiment.
Fig. 2 is a block diagram of an object detection apparatus according to an exemplary embodiment.
Fig. 3 is a structural block diagram of a terminal according to an exemplary embodiment.
Fig. 4 is a structural block diagram of a computer device according to an exemplary embodiment.
Fig. 5 is a feature map of a target image according to an exemplary embodiment.
Fig. 6 is a feature map containing anchors according to an exemplary embodiment.
Fig. 7 is a feature map containing anchors according to an exemplary embodiment.
Detailed description of the embodiments
Exemplary embodiments are described in detail here, and examples are illustrated in the accompanying drawings. In the following description, when the accompanying drawings are referred to, unless otherwise indicated, the same numbers in different drawings indicate the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; rather, they are merely examples of apparatuses and methods, described in detail in the appended claims, that are consistent with some aspects of the invention.
The embodiments of the present disclosure provide an object detection method that can be implemented by a computer device. The computer device may be a mobile terminal such as a mobile phone, tablet computer, notebook computer, or monitoring device; it may also be a fixed terminal such as a desktop computer, or a server.
The method provided by the embodiments of the present disclosure can be applied in scenes where object detection is performed on images, for example in intelligent traffic systems, intelligent monitoring systems, military target detection, and medically navigated surgery. Moreover, the method is particularly suitable for scenes in which images containing many small objects are detected and recognized, such as face detection in large group photos, dense crowd (head) detection in public places, and density estimation of fish shoals.
Fig. 1 is a flowchart of an object detection method according to an exemplary embodiment. As shown in Fig. 1, the method is used in a computer device and includes the following steps.
In step 101, the feature map of the target image is determined.
The target image refers to the image on which object detection is to be performed.
In implementation, before the feature map of the target image is determined, the target image needs to be obtained. The target image can be obtained through real-time acquisition; this approach is mainly used in computer devices such as monitoring devices, where the monitoring device collects the monitoring video in real time and continuously takes image frames from the monitoring video as target images. The target image can also be obtained by extracting image files or video data stored in advance in the computer device.
After the target image is obtained, it can be input into a neural network model to generate the feature map. The neural network model in this embodiment can be a CNN (Convolutional Neural Network) model, for example a VGG (Visual Geometry Group) model. Further, in order to reduce the amount of computation, the target image can first be scaled, and the scaled target image is then input into the neural network model for object detection.
The neural network model contains multiple levels of convolutional layers. After the target image is input into the neural network model, the model performs convolution processing on the target image level by level, and the feature maps of the convolutional layers at each level can be obtained in turn. Among the feature maps of the convolutional layers at each level, one is chosen and determined as the feature map of the target image.
In the case where image frames are continuously obtained from a monitoring video as target images, each time a target image is obtained it is input into the neural network model, so that the feature map corresponding to each target image is obtained.
Optionally, in order to make the result of object detection more accurate, feature maps of the target image at different scales can be used for object detection, so that feature maps at different scales are each responsible for detecting objects of different sizes. The corresponding processing can be as follows: determining feature maps of the target image at multiple different scales.
In implementation, the neural network model contains multiple levels of convolutional layers. After the target image is input into the neural network model, convolution processing is performed on the target image level by level, and the feature maps of the convolutional layers at each level can be obtained in turn. The feature maps corresponding to earlier convolutional layers have a larger scale and are suitable for detecting smaller objects; the feature maps corresponding to later convolutional layers have a smaller scale and are suitable for detecting larger objects.
From the feature maps corresponding to the convolutional layers at each level, feature maps at multiple different scales are chosen and determined as the feature maps of the target image, so that the feature maps at different scales are each responsible for detecting objects of different sizes, thereby improving the accuracy of object detection.
A specific operating method can be as follows: first input the target image into a VGG16 neural network model, and then, using the SSD (Single Shot MultiBox Detector) framework, extract the feature maps of the three layers conv3_3, conv4_3, and conv5_3 as the feature maps of the target image, so as to improve the accuracy of object detection.
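As a rough illustration of why these three layers give feature maps at different scales, the following sketch computes their spatial sizes under the standard VGG16 layout (a 2×2 max-pooling after each block, so conv3_3, conv4_3, and conv5_3 sit at strides 4, 8, and 16). The strides and the 320×320 input size are assumptions based on the common VGG16 architecture, not figures stated in this patent.

```python
def vgg16_feature_map_sizes(image_size):
    # conv3_3 sits after 2 poolings (stride 4), conv4_3 after 3 (stride 8),
    # conv5_3 after 4 (stride 16) in the standard VGG16 layout.
    strides = {"conv3_3": 4, "conv4_3": 8, "conv5_3": 16}
    return {name: image_size // s for name, s in strides.items()}

print(vgg16_feature_map_sizes(320))
# -> {'conv3_3': 80, 'conv4_3': 40, 'conv5_3': 20}
```

The larger 80×80 map gives finer spatial resolution for small objects, while the 20×20 map covers larger receptive fields, matching the role of earlier versus later layers described above.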
In step 102, multiple feature points in the feature map are determined.
In step 103, for each feature point, multiple reference points are determined, and at least one anchor is determined centered on each reference point.
An anchor may also be referred to as a candidate box or anchor box.
In implementation, each feature point can correspond to n reference points, and each reference point can have m anchors. Assuming the number of feature points is p, a total of p × n × m anchors are determined in the feature map, and these anchors divide the feature map into p × n × m feature map regions.
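The p × n × m count above can be sketched directly; the 40 × 40 feature map in the example is an assumed size, not one given in the patent.

```python
def total_anchors(p, n, m):
    """p feature points, n reference points per feature point, m anchors per
    reference point: the feature map is divided into p * n * m regions."""
    return p * n * m

# e.g. an assumed 40x40 feature map (p = 1600), 4 reference points per
# feature point, 1 anchor per reference point
print(total_anchors(40 * 40, 4, 1))  # -> 6400
```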
In the embodiments of the present disclosure, multiple reference points are first determined based on each feature point, and then at least one anchor is generated centered on each reference point, so that each feature point corresponds to multiple anchors that are not concentric.
Compared with the technical solution in the related art, since each feature point corresponds to multiple anchors with different centers, anchors at different positions are responsible for object detection in their respective regions, and the detection regions they are responsible for overlap less. As a result, when the method provided by the embodiments of the present disclosure is applied to the detection of dense small objects, fewer objects are missed.
Optionally, this way of dividing anchors can proceed by first generating initial anchors and then generating the anchors from them. The corresponding processing can be as follows: for each feature point, at least one initial anchor is determined; multiple reference points are determined within each initial anchor, and at least one anchor is determined centered on each reference point.
In implementation, for each feature point, at least one initial anchor center point is first determined, and then at least one initial anchor is generated centered on each initial anchor center point. When generating an initial anchor, its scale information and ratio information also need to be designed, where the scale information characterizes the area of the initial anchor and the ratio information characterizes its aspect ratio (taking the size of the initial anchor in the horizontal direction as the length and its size in the vertical direction as the width). Multiple initial anchors can be generated based on the initial anchor center points and the scale and ratio information. For example, the center of a feature point can be determined as the center point of an initial anchor, with the area of the initial anchor set to 1 and the aspect ratio to 1:1, as shown in Fig. 6.
After the initial anchors are generated, multiple reference points need to be chosen within each initial anchor. When choosing the reference points, the coordinates of each reference point can be determined by taking one of the four vertices of the initial anchor, or its center point, as the origin, with the horizontal direction as the x-axis and the vertical direction as the y-axis.
After the reference points are determined, at least one anchor is determined centered on each of them, with the area and aspect ratio of the anchors designed as well (taking the size of an anchor in the horizontal direction as its length and its size in the vertical direction as its width).
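A minimal sketch of turning a center point plus an area and aspect ratio into a box, as is needed for both initial anchors and anchors above. The corner-coordinate convention (x1, y1, x2, y2) is my assumption; the patent does not fix a box representation.

```python
import math

def make_anchor(cx, cy, area, aspect):
    # aspect = length / width (horizontal / vertical size), so
    # length = sqrt(area * aspect) and width = sqrt(area / aspect),
    # giving length * width = area and length / width = aspect.
    length = math.sqrt(area * aspect)
    width = math.sqrt(area / aspect)
    return (cx - length / 2, cy - width / 2, cx + length / 2, cy + width / 2)

# the Fig. 6 example: a unit-area, 1:1 initial anchor centered on a feature point
print(make_anchor(0.5, 0.5, 1.0, 1.0))  # -> (0.0, 0.0, 1.0, 1.0)
```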
Optionally, each initial anchor can be uniformly divided into several anchors. The corresponding processing can be as follows: multiple uniformly distributed reference points are determined within each initial anchor, and, based on the reference points within each initial anchor, each initial anchor is divided into multiple anchors, with the center point of each divided anchor being a reference point.
In implementation, after the initial anchors are generated, several reference points are uniformly determined within each initial anchor, and then one anchor is determined centered on each of these reference points. Assuming the number of reference points determined within an initial anchor is k, the initial anchor is divided into k anchors of identical shape, and the area of each anchor is 1/k of the area of the initial anchor.
For example, as shown in Fig. 6, the center of each feature point is determined as an initial anchor center point, and one initial anchor is generated centered on each such point. The scale of the initial anchor is 1 and its aspect ratio is 1; that is, the initial anchor corresponding to each feature point is a square box with an area of 1. Within the initial anchor, four reference points are chosen uniformly. Taking the upper-left corner of the initial anchor as the origin, the horizontal direction as the x-axis (positive to the right) and the vertical direction as the y-axis (positive downward), the coordinates of the four reference points are (0.25, 0.25), (0.25, 0.75), (0.75, 0.25), and (0.75, 0.75). Centered on these four reference points, the initial anchor is divided into 4 equal-sized square anchors, each with an area of 0.25.
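The worked example above can be sketched as follows, using image-style coordinates (y growing downward) to match the description; a 2-per-side split reproduces the four reference points and four area-0.25 sub-anchors.

```python
def split_initial_anchor(x0, y0, size, k_per_side=2):
    """Divide a square initial anchor (top-left (x0, y0), side `size`) into
    k_per_side**2 equal square sub-anchors; each sub-anchor is centered on a
    uniformly placed reference point, as in the Fig. 6 example."""
    sub = size / k_per_side
    anchors = []
    for i in range(k_per_side):
        for j in range(k_per_side):
            cx = x0 + (j + 0.5) * sub  # reference point = sub-anchor center
            cy = y0 + (i + 0.5) * sub
            anchors.append(((cx, cy), (cx - sub / 2, cy - sub / 2,
                                       cx + sub / 2, cy + sub / 2)))
    return anchors

for center, box in split_initial_anchor(0.0, 0.0, 1.0):
    print(center, box)
# the four centers are (0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)
```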
Optionally, multiple reference points can be determined directly, and then at least one anchor is determined centered on each of them. The corresponding processing can be as follows: for each feature point, multiple reference points are determined based on preset position information of the reference points relative to the feature point, and at least one anchor is determined centered on each reference point.
In implementation, the position information of the reference points relative to the feature point can be preset first, and multiple reference points are then determined based on this position information.
The coordinates of the reference points can be determined by taking the center of each feature point as the origin, the horizontal direction as the x-axis (positive to the right) and the vertical direction as the y-axis (positive downward). For example, as shown in Fig. 7, if the coordinates of the reference points are determined to be (0.25, -0.25), (-0.25, 0.25), (-0.25, -0.25), and (0.25, 0.25), then the reference points surround their corresponding feature point, at a distance of 0.25 from the feature point in the horizontal direction and 0.25 in the vertical direction.
After the reference points are determined, the area information and ratio information of the anchors are designed. For example, the area of an anchor can be preset to 0.25 and the aspect ratio to 1:1, in which case one reference point corresponds to one anchor, as shown in Fig. 7.
Anchors with multiple different areas and different aspect ratios can also be preset to increase the number of anchors. For example, if the anchor areas are designed to be 1 and 2, and the aspect ratios to be 1:2 and 2:1, then one reference point corresponds to four anchors: an anchor with area 1 and aspect ratio 1:2, an anchor with area 1 and aspect ratio 2:1, an anchor with area 2 and aspect ratio 1:2, and an anchor with area 2 and aspect ratio 2:1.
In step 104, based on the determined anchors, object detection is performed on the feature map to obtain the position information and category of each object contained in the target image.
In implementation, the determined anchors divide the feature map into multiple different feature map regions, and the number of feature map regions is the same as the number of determined anchors.
The feature map region contained in each anchor is detected in turn, and one detection result is obtained for each feature map region. Each detection result includes the position information and category of the objects contained in that feature map region. All the detection results are then integrated and processed, finally obtaining the position information and category of each object contained in the target image.
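The patent leaves the integration and de-duplication of per-region results abstract. One common choice is greedy per-category suppression of overlapping boxes; the sketch below is that choice under my own assumptions (score-sorted results, an IoU threshold of 0.5), not necessarily the processing the authors intend.

```python
def iou(a, b):
    # a, b: (x1, y1, x2, y2); intersection-over-union of two boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def merge_detections(results, thresh=0.5):
    """Keep one box per object: results are visited by descending score, and
    a box is dropped if it overlaps an already-kept box of the same category
    by more than `thresh`."""
    kept = []
    for box, category, score in sorted(results, key=lambda r: -r[2]):
        if all(c != category or iou(box, b) < thresh for b, c, _ in kept):
            kept.append((box, category, score))
    return kept

dets = [((0, 0, 10, 10), "face", 0.9),
        ((1, 1, 11, 11), "face", 0.8),   # near-duplicate of the first
        ((50, 50, 60, 60), "face", 0.7)]
print(len(merge_detections(dets)))  # -> 2
```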
Optionally, the feature map region contained in each determined anchor can be detected with different detection models. The corresponding processing can be as follows: the feature map region contained in each determined anchor is input into the detection models corresponding to different object categories, obtaining a detection result for each anchor under each detection model; based on the detection results of each anchor under the different detection models, the position information and category of each object contained in the target image are determined.
Detection models of different categories are responsible for detecting objects of different categories; a detection model can be a classifier.
In implementation, the feature map regions of all determined anchors are input in turn into the detection models of the different categories. Each detection model examines the feature map region contained in each anchor and obtains one detection result per feature map region; the detection result contains the position information of any object belonging to the category handled by that detection model, and if the feature map region contains no object of that category, the position information is empty. Each category of detection model then performs de-duplication on the position information in the detection results over all feature map regions.
Finally, according to the detection results obtained by all the classification models, the position information and category of each object contained in the target image are obtained.
Optionally, after the position information and category of each object contained in the target image are determined, the positions and categories of the detected objects can be marked in the target image. The corresponding processing can be as follows: the target image is displayed, and a label is added to each object in the target image based on the position information and category of that object.
In implementation, in some scenes where the target image is displayed, the objects in the displayed target image can be marked. The labels can mark both the position and the category of each object in the target image; when category labels are not needed, for example when labeling faces in a large group photo, only the positions of the objects may be marked.
The position mark can take the form of outlining the object with a rectangular box in the target image. The category label can take the form of text beside the rectangular box indicating the category to which the object belongs.
Taking the marking of criminal suspects in an intelligent monitoring scene as an example: after the above object detection processing is performed on each image frame of the monitoring video, when a criminal suspect is detected in an image frame, the suspect is outlined with a rectangular box in that frame, and the processed image frame is then displayed.
Fig. 2 is a block diagram of an object-detection apparatus according to an exemplary embodiment. Referring to Fig. 2, the apparatus includes a determination unit 201 and a detection unit 202.
The determination unit 201 is configured to determine a feature map of a target image, determine multiple feature points in the feature map, determine multiple reference points for each feature point, and determine at least one anchor centered on each reference point.
The detection unit 202 is configured to perform object detection on the feature map based on the determined anchors, obtaining the location information and object category of each detectable object contained in the target image.
Optionally, the determination unit 201 is configured to:
determine, for each feature point, at least one initial anchor;
determine multiple reference points in each initial anchor, and determine at least one anchor centered on each reference point.
Optionally, the determination unit 201 is configured to:
determine multiple uniformly distributed reference points in each initial anchor and, based on the reference points in each initial anchor, divide each initial anchor into multiple anchors, the center point of each divided anchor being one of the reference points.
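The subdivision of an initial anchor into smaller anchors centered on uniformly distributed reference points can be sketched as follows. The grid layout and the function name are illustrative assumptions; the patent does not fix a particular number or arrangement of reference points.

```python
def split_anchor(anchor, grid=2):
    """Split one initial anchor (x1, y1, x2, y2) into grid*grid smaller
    anchors. Each sub-anchor is centered on one uniformly distributed
    reference point inside the initial anchor, so its center point is
    that reference point."""
    x1, y1, x2, y2 = anchor
    w = (x2 - x1) / grid
    h = (y2 - y1) / grid
    sub = []
    for i in range(grid):
        for j in range(grid):
            sx1 = x1 + j * w
            sy1 = y1 + i * h
            sub.append((sx1, sy1, sx1 + w, sy1 + h))
    return sub
```

Using more, smaller anchors per initial anchor is what lets densely packed small objects each fall into their own anchor instead of being merged and missed.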
Optionally, the determination unit 201 is configured to:
for each feature point, determine multiple reference points based on preset location information of the reference points relative to the feature point, and determine at least one anchor centered on each reference point.
Optionally, the determination unit 201 is configured to:
determine feature maps of the target image at multiple different scales.
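As a rough illustration of feature maps at multiple scales, the following sketch builds a pyramid of maps by 2x2 average pooling. This is a stand-in for multi-scale feature extraction in general (where small objects are detected on fine maps and large objects on coarse ones), not the patent's specific network.

```python
def multiscale_maps(feature_map, levels=3):
    """Build a pyramid of feature maps from a 2D grid by repeated 2x2
    average pooling; each level halves the spatial resolution."""
    maps = [feature_map]
    for _ in range(levels - 1):
        prev = maps[-1]
        h, w = len(prev), len(prev[0])
        pooled = [
            [(prev[2 * i][2 * j] + prev[2 * i][2 * j + 1] +
              prev[2 * i + 1][2 * j] + prev[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(w // 2)]
            for i in range(h // 2)
        ]
        maps.append(pooled)
    return maps
```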
Optionally, the apparatus further includes:
a marking unit 203, configured to display the target image and, based on the location information and object category of each detectable object, add a mark to each detectable object in the target image.
Optionally, the detection unit 202 is configured to:
input the feature-map region contained in each determined anchor into the detection models corresponding to the different object categories, obtaining each anchor's detection result from each detection model;
determine, based on each anchor's detection results from the different detection models, the location information and object category of each detectable object contained in the target image.
As for the apparatus in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method and will not be elaborated here.
Fig. 3 is a structural block diagram of a terminal according to an exemplary embodiment. The terminal 300 may be a portable mobile terminal, such as a smartphone or a tablet computer. The terminal 300 may also be called a user device, a portable terminal, or by other names.
In general, the terminal 300 includes a processor 301 and a memory 302.
The processor 301 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 301 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor: the main processor handles data in the awake state and is also called a CPU (Central Processing Unit); the coprocessor is a low-power processor that handles data in the standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be shown on the display screen. In some embodiments, the processor 301 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 302 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 302 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 302 stores at least one instruction, which is executed by the processor 301 to implement the object-detection method provided herein.
In some embodiments, the terminal 300 optionally further includes a peripheral-device interface 303 and at least one peripheral device. Specifically, the peripheral devices include at least one of: a radio-frequency circuit 304, a touch display screen 305, a camera 306, an audio circuit 307, a positioning component 308, and a power supply 309.
The peripheral-device interface 303 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 301 and the memory 302. In some embodiments, the processor 301, the memory 302, and the peripheral-device interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302, and the peripheral-device interface 303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio-frequency circuit 304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The RF circuit 304 communicates with communication networks and other communication devices through electromagnetic signals. The RF circuit 304 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the RF circuit 304 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The RF circuit 304 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the RF circuit 304 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The touch display screen 305 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. The touch display screen 305 also has the ability to acquire touch signals on or above its surface; such a touch signal may be input to the processor 301 as a control signal for processing. The touch display screen 305 is used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one touch display screen 305, arranged on the front panel of the terminal 300; in other embodiments, there may be at least two touch display screens 305, arranged on different surfaces of the terminal 300 or in a folding design; in still other embodiments, the touch display screen 305 may be a flexible display screen, arranged on a curved or folding surface of the terminal 300. The touch display screen 305 may even be set to a non-rectangular irregular shape, that is, a shaped screen. The touch display screen 305 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 306 is used to capture images or video. Optionally, the camera assembly 306 includes a front camera and a rear camera. Generally, the front camera is used for video calls or selfies, and the rear camera is used for taking photos or videos. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, and a wide-angle camera, so that the main and depth-of-field cameras can be fused to achieve a background-blurring function, and the main and wide-angle cameras can be fused to achieve panoramic and VR (Virtual Reality) shooting functions. In some embodiments, the camera assembly 306 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 307 is used to provide an audio interface between the user and the terminal 300. The audio circuit 307 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 301 for processing or to the RF circuit 304 for voice communication. For stereo capture or noise reduction, there may be multiple microphones, arranged at different parts of the terminal 300. The microphone may also be an array microphone or an omnidirectional microphone. The speaker is used to convert electrical signals from the processor 301 or the RF circuit 304 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans, for purposes such as ranging. In some embodiments, the audio circuit 307 may also include a headphone jack.
The positioning component 308 is used to locate the current geographic position of the terminal 300 to implement navigation or LBS (Location Based Service). The positioning component 308 may be a positioning component based on the GPS (Global Positioning System) of the United States, China's BeiDou system, or Russia's Galileo system.
The power supply 309 is used to supply power to the components in the terminal 300. The power supply 309 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 309 includes a rechargeable battery, the battery may be a wired or wireless rechargeable battery: a wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast-charging technology.
In some embodiments, the terminal 300 further includes one or more sensors 310, including but not limited to: an acceleration sensor 311, a gyroscope sensor 312, a pressure sensor 313, a fingerprint sensor 314, an optical sensor 315, and a proximity sensor 316.
The acceleration sensor 311 can detect the magnitude of acceleration on the three axes of the coordinate system established by the terminal 300. For example, the acceleration sensor 311 can be used to detect the components of gravitational acceleration on the three axes. The processor 301 may, according to the gravitational-acceleration signal collected by the acceleration sensor 311, control the touch display screen 305 to display the user interface in landscape or portrait view. The acceleration sensor 311 may also be used to collect motion data for games or users.
The gyroscope sensor 312 can detect the body direction and rotation angle of the terminal 300, and may cooperate with the acceleration sensor 311 to collect the user's 3D actions on the terminal 300. Based on the data collected by the gyroscope sensor 312, the processor 301 can implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 313 may be arranged on the side frame of the terminal 300 and/or the lower layer of the touch display screen 305. When the pressure sensor 313 is arranged on the side frame of the terminal 300, it can detect the user's grip signal on the terminal 300, and left/right-hand recognition or shortcut operations can be performed according to the grip signal. When the pressure sensor 313 is arranged on the lower layer of the touch display screen 305, operability controls on the UI can be controlled according to the user's pressure operations on the touch display screen 305. The operability controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 314 is used to collect the user's fingerprint and identify the user's identity according to the collected fingerprint. When the identified identity is trusted, the processor 301 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and so on. The fingerprint sensor 314 may be arranged on the front, back, or side of the terminal 300. When a physical button or manufacturer logo is provided on the terminal 300, the fingerprint sensor 314 may be integrated with the physical button or manufacturer logo.
The optical sensor 315 is used to collect the ambient light intensity. In one embodiment, the processor 301 may control the display brightness of the touch display screen 305 according to the ambient light intensity collected by the optical sensor 315: when the ambient light intensity is high, the display brightness of the touch display screen 305 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 305 is turned down. In another embodiment, the processor 301 may also dynamically adjust the shooting parameters of the camera assembly 306 according to the ambient light intensity collected by the optical sensor 315.
The proximity sensor 316, also called a distance sensor, is usually arranged on the front of the terminal 300. The proximity sensor 316 is used to collect the distance between the user and the front of the terminal 300. In one embodiment, when the proximity sensor 316 detects that the distance between the user and the front of the terminal 300 gradually decreases, the processor 301 controls the touch display screen 305 to switch from the screen-on state to the screen-off state; when the proximity sensor 316 detects that the distance gradually increases, the processor 301 controls the touch display screen 305 to switch from the screen-off state to the screen-on state.
Those skilled in the art will understand that the structure shown in Fig. 3 does not constitute a limitation on the terminal 300, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement.
Fig. 4 is a schematic structural diagram of a computer device according to an exemplary embodiment; the computer device may be the server in the above embodiments. The computer device 400 may vary greatly due to differences in configuration or performance, and may include one or more processors (central processing units, CPU) 401 and one or more memories 402, where the memory 402 stores at least one instruction, which is loaded and executed by the processor 401 to implement the above object-detection method.
In an embodiment of the present disclosure, a non-transitory computer-readable storage medium is also provided; when the instructions in the storage medium are executed by the processor of a computer device, the computer device is enabled to perform the above object-detection method.
In an embodiment of the present disclosure, an application program is also provided, including one or more instructions that can be executed by the processor of a server to perform the above object-detection method.
Other embodiments of the present invention will readily occur to those skilled in the art after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention that follow its general principles and include common knowledge or conventional techniques in the art not disclosed herein. The specification and examples are to be considered exemplary only, with the true scope and spirit of the invention indicated by the following claims.
It should be understood that the present invention is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the invention is limited only by the appended claims.

Claims (10)

CN201910137428.3A | Priority date 2019-02-25 | Filing date 2019-02-25 | Object detection method and device, computer equipment and storage medium | Active | Granted as CN109886208B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910137428.3A / CN109886208B (en) | 2019-02-25 | 2019-02-25 | Object detection method and device, computer equipment and storage medium


Publications (2)

Publication Number | Publication Date
CN109886208A | 2019-06-14
CN109886208B | 2020-12-18

Family

ID=66929163

Family Applications (1)

Application Number | Status | Granted Publication
CN201910137428.3A | Active | CN109886208B (en)

Country Status (1)

Country | Link
CN (1) | CN109886208B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106529527A (en) * | 2016-09-23 | 2017-03-22 | 北京市商汤科技开发有限公司 | Object detection method and device, data processing device, and electronic equipment
CN107316001A (en) * | 2017-05-31 | 2017-11-03 | 天津大学 | Detection method for small, dense traffic signs in automatic driving scenes
CN108304808A (en) * | 2018-02-06 | 2018-07-20 | 广东顺德西安交通大学研究院 | Surveillance-video object detection method based on spatio-temporal information and deep networks
CN108681718A (en) * | 2018-05-20 | 2018-10-19 | 北京工业大学 | Accurate detection and recognition method for low-altitude unmanned aerial vehicle targets

Non-Patent Citations (3)

Title
LIU W et al.: "SSD: Single shot multibox detector", European Conference on Computer Vision *
WENG Xin: "Research on the setting of region proposal boxes in the object-detection network SSD", China Master's Theses Full-text Database, Information Science and Technology *
CHEN Kang: "Research on object-detection algorithms for car-driving scenes based on deep convolutional neural networks", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (9)

Publication number | Priority date | Publication date | Assignee | Title
CN110866478A (en) * | 2019-11-06 | 2020-03-06 | 支付宝(杭州)信息技术有限公司 | Method, apparatus, and device for object recognition in an image
CN110866478B (en) * | 2019-11-06 | 2022-04-29 | 支付宝(杭州)信息技术有限公司 | Method, apparatus, and device for object recognition in an image
CN115035407A (en) * | 2019-11-06 | 2022-09-09 | 支付宝(杭州)信息技术有限公司 | Method, apparatus, and device for object recognition in an image
CN111476306A (en) * | 2020-04-10 | 2020-07-31 | 腾讯科技(深圳)有限公司 | Artificial-intelligence-based object detection method, apparatus, device, and storage medium
CN111476306B (en) * | 2020-04-10 | 2023-07-28 | 腾讯科技(深圳)有限公司 | Artificial-intelligence-based object detection method, apparatus, device, and storage medium
CN112199987A (en) * | 2020-08-26 | 2021-01-08 | 北京贝思科技术有限公司 | Multi-algorithm combined configuration strategy method for a single area, image processing apparatus, and electronic device
CN113076955A (en) * | 2021-04-14 | 2021-07-06 | 上海云从企业发展有限公司 | Object detection method, system, computer device, and machine-readable medium
CN114596706A (en) * | 2022-03-15 | 2022-06-07 | 阿波罗智联(北京)科技有限公司 | Detection method and apparatus for a roadside perception system, electronic device, and roadside device
CN114596706B (en) * | 2022-03-15 | 2024-05-03 | 阿波罗智联(北京)科技有限公司 | Detection method and apparatus for a roadside perception system, electronic device, and roadside device

Also Published As

Publication number | Publication date
CN109886208B (en) | 2020-12-18


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
