Embodiment
Hereinafter, preferred embodiments of the present invention are described in detail with reference to the accompanying drawings. Note that, in the description and the drawings, substantially identical steps and elements are denoted by the same reference numerals, and repeated descriptions of these steps and elements are omitted.
In the following embodiments of the present invention, concrete forms of the image capture device include, but are not limited to, smart phones, personal computers, personal digital assistants, portable computers, tablet computers, multimedia players, and the like. According to one example of the present invention, the image capture device may be a handheld electronic device. Preferably, according to another example of the present invention, the image capture device may be a wearable electronic device. Furthermore, according to another example of the present invention, the image capture device may comprise a lens component, a capture unit arranged corresponding to the lens component, and a display unit.
Fig. 1 depicts a flowchart of a method 100 of image processing according to an embodiment of the present invention. Below, the method of image processing according to the embodiment of the present invention is described with reference to Fig. 1. The method 100 of image processing can be applied to the image capture device described above.
As shown in Fig. 1, in step S101, a first scene is captured, and the captured real-time image is displayed. As mentioned above, according to one example of the present invention, the image capture device may comprise a lens component, a capture unit arranged corresponding to the lens component, and a display unit. In this case, in step S101, when the lens component is arranged in the viewing area of the user, the first scene that the user views through the lens component is captured by the capture unit to obtain the real-time image. For example, the lens component may be arranged above or below the capture unit. Thus, when the lens component is arranged in the viewing area of the user, the capture unit can perform image capture along a direction similar to the viewing direction of the user, thereby obtaining a real-time captured image of what the user views through the lens component.
In step S102, it is determined whether an operating body has appeared in the real-time image. According to one example of the present invention, the operating body may be a finger of the user. Alternatively, the operating body may also be a preset operating pen or the like.
When it is determined that an operating body has appeared in the real-time image, in step S103, a captured object corresponding to the position of the operating body in the real-time image is taken as a target object. According to one example of the present invention, in step S103, the captured objects in the real-time image may first be identified, and a first position of each captured object in the real-time image may be obtained. Then, among the identified captured objects, a first captured object whose first position corresponds to the position of the operating body is determined as the target object.
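The selection just described can be sketched in a few lines of code. This is a hedged illustration only: the object identifiers, the `(x, y, w, h)` bounding-box format, and the point-in-box containment test are assumptions for illustration, not details given by the specification.

```python
# Illustrative sketch of step S103: choose the first captured object whose
# first position corresponds to the position of the operating body.
# Boxes are assumed to be (x, y, width, height) in image coordinates.

def select_target_object(objects, pointer):
    """Return the object whose bounding box contains the pointer position."""
    px, py = pointer
    for obj in objects:
        x, y, w, h = obj["box"]
        if x <= px < x + w and y <= py < y + h:
            return obj
    return None

objects = [
    {"id": 221, "box": (10, 10, 30, 40)},
    {"id": 222, "box": (60, 20, 25, 35)},
]
target = select_target_object(objects, (70, 30))  # fingertip at (70, 30)
```

With these assumed boxes, the pointer at (70, 30) falls inside the box of object 222, which is therefore taken as the target object.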
Optionally, after the captured object whose first position corresponds to the position of the operating body has been determined, in step S103, a first distance between the object in the first scene corresponding to the first captured object and the image capture device may also be determined, and second distances between the objects in the first scene corresponding to the captured objects other than the first captured object and the image capture device may be determined. Then, among the objects of the first scene, target objects whose second distance differs from the first distance by no more than a preset distance difference are determined; finally, the captured objects corresponding to these target objects are also taken as target objects. In examples according to the present invention, the objects in the first scene may be inanimate objects, such as buildings, trees, and the like. In addition, the objects in the first scene may also be living objects, such as people, animals, and the like.
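The optional distance-based grouping above can be sketched as follows. The distance values and the threshold below are illustrative assumptions; the specification only requires that objects whose distance differs from the first object's distance by no more than the preset difference also be treated as target objects.

```python
# Illustrative sketch of the optional distance grouping in step S103.
# distances: mapping object id -> distance to the image capture device
# (assumed here to be in metres; the specification does not fix a unit).

def group_targets_by_distance(distances, first_id, max_diff):
    """Return ids of all objects within max_diff of the first object's distance."""
    d1 = distances[first_id]
    return sorted(
        obj_id for obj_id, d2 in distances.items()
        if abs(d2 - d1) <= max_diff
    )

distances = {221: 12.0, 222: 5.0, 223: 5.4, 224: 4.8}
targets = group_targets_by_distance(distances, first_id=222, max_diff=1.0)
```

With these assumed values, objects 223 and 224 lie within the preset difference of object 222 and are grouped with it, while object 221 is excluded, mirroring the example of Fig. 2c.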
Figs. 2a to 2c show explanatory diagrams of an illustrative case of determining the target object in the real-time image according to one example of the present invention. The example shown in Figs. 2a to 2c shows the captured real-time image 200 of the first scene. As shown in Fig. 2a, when it is determined that an operating body 210 has appeared in the real-time image 200, the captured objects 221, 222, 223 and 224 in the real-time image are first identified, and the first positions of the captured objects 221, 222, 223 and 224 in the real-time image are obtained. For example, contour extraction may be performed on the real-time image 200 to obtain the contour of each captured object, thereby determining each captured object and its position.
Then, as shown by the dotted-line frame in Fig. 2b, in the real-time image 200, the captured object 222 whose first position corresponds to the position of the operating body 210 is determined as the target object. Furthermore, as shown by the dotted-line frames in Fig. 2c, a first distance between the object in the first scene corresponding to the first captured object 222 and the image capture device is determined, and second distances between the objects in the first scene corresponding to the captured objects 221, 223 and 224 other than the first captured object 222 and the image capture device are determined. Then, among the objects of the first scene (that is, the objects corresponding to the captured objects 221, 222, 223 and 224), the target objects whose second distance differs from the first distance by no more than the preset distance difference are determined; finally, the captured objects 223 and 224 corresponding to these target objects are also taken as target objects. Thus, the user can specify multiple target objects in the real-time image at once, without needing to specify the target objects one by one, which facilitates the user's operation.
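The contour extraction mentioned for Fig. 2a can be sketched with a minimal connected-component pass over a binary mask, each region's bounding box standing in for an object's first position. A real implementation would typically use a computer-vision library; the tiny mask below is an illustrative assumption.

```python
# Illustrative sketch: find captured objects and their positions by labelling
# 4-connected foreground regions of a binary mask and returning bounding boxes
# as (x0, y0, x1, y1) in image coordinates.

def bounding_boxes(mask):
    """Return one bounding box per 4-connected foreground region."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                stack, x0, y0, x1, y1 = [(sy, sx)], sx, sy, sx, sy
                seen[sy][sx] = True
                while stack:  # flood fill one region
                    y, x = stack.pop()
                    x0, y0 = min(x0, x), min(y0, y)
                    x1, y1 = max(x1, x), max(y1, y)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((x0, y0, x1, y1))
    return boxes

mask = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
]
boxes = bounding_boxes(mask)  # two regions found
```

With this assumed mask, two objects are recovered, at boxes (0, 0, 1, 1) and (3, 1, 4, 2).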
Returning to Fig. 1, in step S104, a target area of the target object in the real-time image is determined. Then, in step S105, image processing is performed on the target area. Fig. 3a shows an explanatory diagram of an illustrative case according to one example of the present invention, in which, after the target objects are determined as shown in Fig. 2 and the target areas of the target objects in the real-time image 200 are determined, image processing is performed on the target areas. As shown in Fig. 3a, according to one example of the present invention, in step S105, the transparency of the determined target areas 311, 312 and 313 is adjusted, so that blurring processing can be performed on the target areas 311, 312 and 313, thereby blurring the target objects 222, 223 and 224.
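The blurring of step S105 can be sketched as a filter applied only inside the target area. The 3×3 mean filter and the tiny grayscale image below are illustrative assumptions; the specification itself only requires that the target area be blurred while the rest of the real-time image is left unchanged.

```python
# Illustrative sketch of step S105 (blurring variant): apply a 3x3 box blur
# to pixels inside the target area [x0, x1] x [y0, y1] only.

def blur_region(img, x0, y0, x1, y1):
    """Replace each pixel in the target area with its 3x3 neighbourhood mean."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # pixels outside the area are untouched
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

img = [[0, 0, 0, 0],
       [0, 90, 90, 0],   # bright pixels: the assumed target object
       [0, 0, 0, 0]]
blurred = blur_region(img, 1, 1, 2, 1)  # blur only the target area
```

After the call, the bright target pixels are averaged down toward their surroundings while all other pixels keep their original values.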
Fig. 3b shows an explanatory diagram of an illustrative case according to another example of the present invention, in which, after the target objects are determined as shown in Fig. 2 and the target areas of the target objects in the real-time image 200 are determined, image processing is performed on the target areas. As shown in Fig. 3b, according to this example of the present invention, in step S105, second images 321, 322 and 323 related to the real-time image 200 may be obtained, and the second images 321, 322 and 323 may be filled into the target areas 311, 312 and 313 to cover the target objects.
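The filling variant of step S105 can be sketched as pasting a second image over the target area. How the second image is obtained is left open by the text; here it is an assumed patch of background-coloured pixels, an illustration rather than the specified method.

```python
# Illustrative sketch of step S105 (filling variant): paste a second image
# (`patch`) into the target area of the real-time image so that the target
# object is covered.

def fill_region(img, patch, x0, y0):
    """Paste `patch` into `img` with its top-left corner at (x0, y0)."""
    out = [row[:] for row in img]
    for dy, patch_row in enumerate(patch):
        for dx, value in enumerate(patch_row):
            out[y0 + dy][x0 + dx] = value
    return out

img = [[7, 7, 7],
       [7, 99, 99],   # 99: pixels of the unwanted target object
       [7, 99, 99]]
patch = [[7, 7],
         [7, 7]]      # assumed background-like "second image"
covered = fill_region(img, patch, 1, 1)
```

After the call, no pixel of the target object remains visible; the target area shows the second image instead.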
In the method for the image procossing provided at above-mentioned the present embodiment, by in the realtime graphic that gathers at image capture device, image procossing is carried out to the target area determined according to the position of operating body, effectively can shield the object that user does not wish to pay close attention to, thus minimizing user does not wish that the object paid close attention to is to the interference of shown content, and user does not need to exchange shooting angle to hide its object not wishing to pay close attention to.
Below, an image capture device according to an embodiment of the present invention is described with reference to Fig. 4. Fig. 4 is an exemplary structural block diagram of the image capture device 400 according to the embodiment of the present invention. As shown in Fig. 4, the image capture device 400 of the present embodiment comprises a capture unit 410, a display unit 420, an operating body recognition unit 430, an object determination unit 440, an area determination unit 450 and an image processing unit 460. The respective modules of the image capture device 400 perform the respective steps/functions of the method 100 of image processing in Fig. 1 above; therefore, for brevity of description, they are not described in detail again.
For example, the capture unit 410 can capture the first scene, and the display unit 420 can display the captured real-time image. According to one example of the present invention, the image capture device 400 may also comprise a lens component, and the capture unit 410 and the display unit 420 may be arranged corresponding to the lens component. In this case, when the lens component is arranged in the viewing area of the user, the first scene that the user views through the lens component is captured by the capture unit 410 to obtain the real-time image. For example, the lens component may be arranged above or below the capture unit. Thus, when the lens component is arranged in the viewing area of the user, the capture unit 410 can perform image capture along a direction similar to the viewing direction of the user, thereby obtaining a real-time captured image of what the user views through the lens component.
The operating body recognition unit 430 can determine whether an operating body has appeared in the real-time image. According to one example of the present invention, the operating body may be a finger of the user. Alternatively, the operating body may also be a preset operating pen or the like.
When it is determined that an operating body has appeared in the real-time image, the object determination unit 440 can take a captured object corresponding to the position of the operating body in the real-time image as a target object. According to one example of the present invention, the object determination unit 440 may comprise an object recognition module and an object determination module. The object recognition module can identify the captured objects in the real-time image and obtain a first position of each captured object in the real-time image. Then the object determination module determines, among the identified captured objects, a first captured object whose first position corresponds to the position of the operating body as the target object.
Optionally, the object determination unit 440 may also comprise a distance determination module and an object determination module. After the captured object whose first position corresponds to the position of the operating body has been determined, the distance determination module can determine a first distance between the object in the first scene corresponding to the first captured object and the image capture device, and determine second distances between the objects in the first scene corresponding to the captured objects other than the first captured object and the image capture device. Then the object determination module determines, among the objects of the first scene, target objects whose second distance differs from the first distance by no more than a preset distance difference. Finally, the object determination module can also take the captured objects corresponding to these target objects as target objects. In examples according to the present invention, the objects in the first scene may be inanimate objects, such as buildings, trees, and the like. In addition, the objects in the first scene may also be living objects, such as people, animals, and the like.
The area determination unit 450 can determine the target area of the target object in the real-time image. Then the image processing unit 460 can perform image processing on the target area. According to one example of the present invention, the image processing unit 460 can perform blurring processing on the target area to blur the target object. Alternatively, according to another example of the present invention, the image processing unit can obtain a second image related to the real-time image and fill the second image into the target area to cover the target object. Then, the display unit can display the image processed by the image processing unit 460.
In the image capture device provided by the above embodiment, by performing image processing on the target area determined according to the position of the operating body in the captured real-time image, objects that the user does not wish to pay attention to can be effectively shielded, thereby reducing the interference of such objects with the displayed content, and the user does not need to change the shooting angle to avoid the objects he or she does not wish to pay attention to.
In addition, as mentioned above, preferably, according to one example of the present invention, the image capture device may be a wearable electronic device; for example, the image capture device is a glasses-type image capture device. Fig. 5 shows an explanatory diagram of an illustrative case in which the image capture device 400 shown in Fig. 4 is a glasses-type image capture device. For simplicity, the parts of the glasses-type image capture device 500 that are similar to those of the image capture device 400 are not described again with reference to Fig. 5.
As shown in Fig. 5, the image capture device 500 may also comprise a frame module 510, the lens component 520 and a fixing unit. The lens component 520 is arranged in the frame module 510. The fixing unit of the image capture device 500 comprises a first support arm and a second support arm. As shown in Fig. 5, the first support arm comprises a first connection part 531 (shown as the shaded part in Fig. 5) and a first holding part 532. The first connection part 531 connects the frame module 510 and the first holding part 532. The second support arm comprises a second connection part 541 (shown as the shaded part in Fig. 5) and a second holding part 542. The second connection part 541 connects the frame module 510 and the second holding part 542. In addition, a third holding part (not shown) may be arranged on the frame module 510. In particular, the third holding part may be arranged at a position of the frame module 510 between the two lenses. By means of the first holding part, the second holding part and the third holding part, the head-mounted electronic device is held on the head of the user. Specifically, the first holding part and the second holding part can be used to support the first support arm and the second support arm on the ears of the user, and the third holding part can be used to support the frame module 510 on the bridge of the user's nose.
In the present embodiment, the capture unit (not shown) of the image capture device 500 may be arranged corresponding to the lens component 520, so that the image captured by the capture unit is substantially consistent with the scene that the user sees. For example, the capture unit may be arranged on the frame module 510 between the two lens components. Alternatively, the capture unit of the image capture device 500 may also be arranged on the frame module 510 corresponding to one of the lens components. In addition, the capture unit of the image capture device 500 may also comprise two capture modules, arranged on the frame module 510 corresponding to the two lenses respectively; the capture unit can process the images captured by the two capture modules so as to combine them, so that the processed image is closer to the scene that the user actually sees.
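One very simple way such a combination might work is a per-pixel average of the two capture modules' images, approximating a viewpoint between the two lenses. This is purely an assumed placeholder for illustration; a real device would use proper stereo rectification and fusion, which the specification does not detail.

```python
# Illustrative sketch only: combine two equally sized grayscale images from
# the two assumed capture modules by per-pixel averaging.

def combine_views(left, right):
    """Per-pixel average of two equally sized grayscale images."""
    return [[(a + b) // 2 for a, b in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)]

left = [[10, 20], [30, 40]]
right = [[30, 40], [50, 60]]
merged = combine_views(left, right)  # → [[20, 30], [40, 50]]
```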
Fig. 6 shows a structural block diagram of the display unit 600 in the image capture device 500. As shown in Fig. 6, the display unit 600 may comprise a first display module 610, a first optical system 620, a first light guide member 630 and a second light guide member 640. Fig. 7 shows an explanatory diagram of an illustrative case of the display unit 600 shown in Fig. 6.
The first display module 610 may be arranged in the frame module 510 and connected with a first data transmission line. The first display module 610 can display a first image according to a first video signal transmitted over the first data transmission line of the image capture device 500. The first data transmission line may be arranged in the fixing unit and the frame module. The display content can be sent to the display unit through the first data transmission line, and the display unit can present it to the user according to the display content. In addition, although a data line is taken as an example in the present embodiment, the present invention is not limited thereto; for example, according to another example of the present invention, the display content may also be sent to the display unit by wireless transmission. Furthermore, according to one example of the present invention, the first display module 610 may be a display module with a small-sized miniature display screen.
The first optical system 620 may also be arranged in the frame module 510. The first optical system 620 can receive the light emitted from the first display module and perform optical path conversion on that light to form a first magnified virtual image. That is, the first optical system 620 has positive refractive power. Thus the user can clearly view the first image, and the size of the image the user views is not limited by the size of the display unit.
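The magnified virtual image can be illustrated with the thin-lens relation. The focal length and distances below are assumed values for illustration only; the specification gives no optical parameters.

```latex
% Thin-lens sketch (assumed numbers): a display placed inside the focal
% length of a positive-power system yields a magnified, upright virtual image.
\frac{1}{d_i} = \frac{1}{f} - \frac{1}{d_o}
\quad\Rightarrow\quad
d_i = \left(\frac{1}{20\,\mathrm{mm}} - \frac{1}{15\,\mathrm{mm}}\right)^{-1}
    = -60\,\mathrm{mm},
\qquad
m = -\frac{d_i}{d_o} = \frac{60\,\mathrm{mm}}{15\,\mathrm{mm}} = 4.
```

With these assumed values (focal length f = 20 mm, display at d_o = 15 mm inside the focal length), the miniature display appears as an upright virtual image four times its physical size, which is why the image viewed by the user is not limited by the size of the display module itself.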
For example, the optical system may comprise a convex lens. Alternatively, in order to reduce aberration, avoid interference with imaging caused by dispersion and the like, and bring the user a better visual experience, the optical system may also be composed of a lens assembly comprising multiple lenses, including convex lenses and concave lenses. In addition, according to one example of the present invention, the first display module 610 and the first optical system 620 may be arranged correspondingly along the optical axis of the first optical system. Alternatively, according to another example of the present invention, the display unit may also comprise a fifth light guide member, in order to transmit the light emitted from the first display module 610 to the first optical system 620.
As shown in Fig. 7, the light emitted from the first display module 610 is received by the first optical system 620, and after the optical path conversion is performed on the light emitted from the first display module 610, the light passing through the first optical system can be transmitted to the second light guide member 640 by the first light guide member 630. The second light guide member 640 may be arranged in the lens component 520, and the second light guide member can receive the light transmitted by the first light guide member 630 and reflect it to the eyes of the user wearing the head-mounted electronic device.
Returning to Fig. 5, the lens component 520 satisfies a first predetermined transmittance in the direction from the inner side to the outer side, so that the user can view the surrounding environment while viewing the first magnified virtual image. For example, when an image generation unit generates a first image relating to the target object, the display unit displays the generated first image, so that while the user sees the target object in the first scene through the lens, the user also sees the first image, displayed by the display unit, superimposed on the target object.
Those of ordinary skill in the art will recognize that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two, and that software modules may be placed in a computer storage medium of any form. In order to clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described generally in terms of function in the above description. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered as going beyond the scope of the present invention.
It should be understood by those skilled in the art that various modifications, combinations, partial combinations and substitutions may be made to the present invention depending on design requirements and other factors, insofar as they are within the scope of the appended claims and their equivalents.