CROSS REFERENCE TO RELATED APPLICATION This application is based upon and claims the benefit of priority from Japanese Patent Applications Nos. 2005-037675 and 2005-038424, both filed on Feb. 15, 2005, the entire contents of which are incorporated herein by reference.
BACKGROUND OF THE INVENTION 1. Field of the Invention
The present invention relates to an electronic camera which captures an image of a subject, and particularly relates to an electronic camera which can detect a characteristic portion of a subject, for example, a face.
2. Description of the Related Art
There has been proposed a camera which, in order to focus on a face of a subject when the subject is a person, detects the face portion of the person in a field captured by the camera and autofocuses on the detected face area.
However, depending on the shooting situation or shooting environment, problems may occur in which the face is not detectable or detection accuracy is low, and these problems may confuse a user.
For example, Japanese Unexamined Patent Application Publication No. 2001-215403 discloses an electronic camera having a face recognition function and performing focusing control according to the eyes of a subject.
However, the electronic camera in Japanese Unexamined Patent Application Publication No. 2001-215403 has difficulty focusing on the eyes of the subject when the subject closes his or her eyes or wears glasses, and its focusing operation is low in stability. Therefore, there is still room for improvement.
SUMMARY OF THE INVENTION The present invention is made to solve any one of the above problems in the related art. An object of the present invention is to provide an electronic camera which prevents a user from being confused. Another object of the present invention is to provide an electronic camera which can stably focus on a person as a subject by recognizing his/her face.
Hereinafter, the present invention will be described.
An electronic camera according to a first aspect of the present invention includes a face detecting section, a setting section, and a controlling section. The face detecting section detects a face of a subject. The setting section sets a scene shooting mode to adjust a shooting condition to an optimum shooting condition in accordance with each pre-assumed shooting scene. The controlling section controls the face detection of the face detecting section only when the setting section has set a scene shooting mode for shooting a scene including a person.
According to the electronic camera of the above first aspect, it is desirable that the scene shooting mode for shooting the scene including a person be a portrait shooting mode.
According to the electronic camera of the above first aspect, it is desirable that the controlling section not allow the face detecting section to perform the face detection when the scene shooting mode is a portrait shooting mode for shooting a night landscape.
According to the electronic camera of the above first aspect, it is desirable that the controlling section control a shooting lens to focus on a face area detected by the face detecting section.
According to the electronic camera of the above first aspect, it is desirable that it further include a function setting section which sets a function for each scene shooting mode, and that the scene shooting mode for shooting the scene including a person be provided with a setting item regarding the face detection.
According to the electronic camera of the above first aspect, it is desirable that it stop a digital zoom function, which electronically increases the magnification, during the face detection by the face detecting section.
According to the electronic camera of the above first aspect, it is desirable that it stop a closeup shooting function of shifting a shooting lens for closeup shooting during the face detection by the face detecting section.
According to the electronic camera of the above first aspect, it is desirable that it further include a display section which displays a subject image obtained before shooting, and that during the face detection by the face detecting section, the amount of shooting information displayed on the display section be reduced compared with when the face detection is not performed.
An electronic camera according to a second aspect of the present invention includes an image pickup device, a face recognizing section, a focus area specifying section, and a focusing section. The image pickup device photoelectrically converts a subject image obtained by an optical shooting system to generate an image signal of an image shooting plane. The face recognizing section detects a face area in the image shooting plane according to the image signal. The focus area specifying section sets, as a specified focus area, a focus area including a contour of the face area among a group of focus areas arranged in the image shooting plane. The focusing section calculates a focus evaluation value of the subject image according to the image signal corresponding to the specified focus area and detects, as a focusing position, a position of the optical shooting system at which the focus evaluation value is maximum.
According to the electronic camera of the above second aspect, it is desirable that the focus area specifying section set, as the specified focus area, a part of plural focus areas including the contour of the face area. In this case, it is particularly desirable that the focus area specifying section set, as the specified focus area, a focus area overlapping with the contour of the face area at an upper side or a lateral side thereof.
According to the electronic camera of the above second aspect, it is desirable that the focus area specifying section change the specified focus area to a focus area located below the face area, when the focusing position is not detected in the focus area including the contour of the face area.
According to the electronic camera of the above second aspect, the face recognizing section detects a direction of the face based on a positional relation of face parts in the face area. It is desirable that the focus area specifying section change a position of the focus area to be the specified focus area, according to the direction of the face.
According to the electronic camera of the above second aspect, it is desirable that it further include an attitude detecting section which detects a shooting attitude of the electronic camera, and that the focus area specifying section change a position of the focus area to be the specified focus area, according to the shooting attitude.
According to the electronic camera of the above second aspect, it is desirable that it further include an electronic viewfinder. The electronic viewfinder displays a viewfinder image of the image shooting plane according to the image signal, and displays an indication of focusing failure associated with a face area of the viewfinder image when the focusing position is not detected in the specified focus area.
BRIEF DESCRIPTION OF THE DRAWINGS The nature, principle, and utility of the invention will become more apparent from the following detailed description when read in conjunction with the accompanying drawings, in which like parts are designated by identical reference numbers:
FIG. 1 is an external view of an electronic camera 1 according to a first embodiment;
FIG. 2 is a block diagram showing functions of the electronic camera 1 according to the first embodiment;
FIG. 3 is views showing a mode select dial 105 for selecting a shooting scene mode and a menu screen corresponding to the selected shooting scene mode;
FIG. 4 is views showing the mode select dial 105 for selecting a shooting scene mode and a menu screen corresponding to the selected shooting scene mode;
FIG. 5 is a flowchart showing control performed by a CPU 111;
FIG. 6 is a flowchart showing control performed by the CPU 111 in a face recognizing AF mode;
FIG. 7 is a flowchart showing face detection AF control performed by the CPU 111;
FIG. 8 is a flowchart showing AF control in a detected area performed by the CPU 111;
FIG. 9 is a flowchart showing AF control, in which a central area is weighted, performed by the CPU 111;
FIG. 10 is a view showing a display example of an image plane displayed on a monitor 103;
FIG. 11 is a view showing a display example of the image plane displayed on the monitor 103;
FIG. 12 is a view showing a display example of the image plane displayed on the monitor 103;
FIG. 13 is a view showing a display example of the image plane displayed on the monitor 103;
FIG. 14 is a view showing a display example of the image plane displayed on the monitor 103;
FIG. 15 is a block diagram showing an overview of an electronic camera of a second embodiment;
FIG. 16 is a flowchart showing a shooting operation in the second embodiment;
FIG. 17 is a view showing the position of a specified focus area in the second embodiment;
FIG. 18 is a view showing a viewfinder image at the time of face recognition in the second embodiment;
FIG. 19 is a block diagram showing an overview of an electronic camera of a third embodiment;
FIG. 20 is a flowchart showing a shooting operation in a fourth embodiment; and
FIG. 21 is a view showing the position of a specified focus area in the fourth embodiment.
DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, a description is given of embodiments of the invention with reference to the accompanying drawings.
Configuration of First Embodiment First, the configuration of an electronic camera 1 according to a first embodiment of the present invention will be described.
FIG. 1 is an external view of the electronic camera 1 according to the first embodiment. In FIG. 1, the electronic camera 1 includes a release button 101, a cruciform key 102, a monitor 103, a decision button 104, a mode select dial 105, a zoom button 106, a menu button 107, a play button 108, a closeup shooting button 109, and an optical viewfinder 110.
The release button 101 is a button capable of detecting two-stage operations: a half-press stage and a full-press stage. The release button 101 is manipulated by a user when the user instructs the start of shooting. The cruciform key 102 is manipulated by the user to move a cursor or the like on the monitor 103. The decision button 104 is a button manipulated by the user when the user selects and decides an item with the cruciform key 102 or the like. The decision button 104 is manipulated by the user also when the user switches on/off states of the monitor 103.
The mode select dial 105 is a dial which enables the user to change a camera function such as a shooting scene selection by turning it. The zoom button 106 is a button manipulated by the user when the user optically and electronically scales up or down an image recorded at the time of shooting. The zoom button 106 is manipulated by the user also when, at the time of replay of image data, the user electronically scales up or down a replayed image. The menu button 107 is a button manipulated by the user when the user wants to display a menu screen. The play button 108 is a button manipulated by the user when the user replays image data recorded in a memory. The closeup shooting button 109 is a button manipulated by the user when the user shoots a close subject such as a plant at close range. The optical viewfinder 110 is for the user to optically check the field.
FIG. 2 is a block diagram showing functions of the electronic camera 1 according to the first embodiment. In FIG. 2, the electronic camera 1 is composed of a CPU 111, a face detecting section 112, a built-in memory 113, a manipulation circuit 114, a display circuit 115, the monitor 103, an image processing circuit 116, an image pickup device 117, a zoom lens 118, a zoom driver 119, a focus lens 120, a focus driver 121, an aperture 122, an aperture driver 123, and a memory card slot 124. Needless to say, there are other circuits which realize functions of the electronic camera, but they have little relation to the first embodiment, so a description thereof is omitted.
The CPU 111 is a circuit which processes programs to realize various functions executed in the electronic camera 1. The CPU 111 executes the programs stored in a memory in the CPU 111, that is, the built-in memory 113, and controls various circuits in the electronic camera 1. The face detecting section 112 extracts a characteristic portion of image data picked up by the image pickup device 117 and detects a face area, face size, and so on of a subject. In FIG. 2, a function block called the face detecting section 112 is described for explanation, but in the first embodiment, a face detection function is realized in software by a face detection program executed by the CPU 111. Of course, it is possible to realize the face detecting section 112 by a hardware circuit.
The built-in memory 113 is a memory to store image data, a control program, and so on. For example, a nonvolatile semiconductor memory is used as the built-in memory 113. The built-in memory 113 stores the face detection program which is executed to detect the face area of the subject. Further, the built-in memory 113 can store face information such as the face position and face size obtained by face detection. The manipulation circuit 114 detects manipulations of manipulation buttons such as the release button 101, the cruciform key 102, and the decision button 104 provided in the electronic camera 1 and transfers them to the CPU 111. Further, the manipulation circuit 114 detects a half-press manipulation and a full-press manipulation of the release button 101. The display circuit 115 is a circuit to generate image plane data displayed on the monitor 103. The monitor 103 is a liquid crystal display provided on a rear surface of the electronic camera 1. This monitor 103 displays the image plane data generated by the display circuit 115.
A shooting lens is an optical lens to focus a subject image onto a light-receiving plane of the image pickup device 117. This shooting lens is composed of the zoom lens 118, the focus lens 120, and so on. Out of the lenses composing the shooting lens, the zoom lens 118 is a lens to realize scale-up and scale-down of the optical image focused on the image pickup device 117. This zoom lens 118 is moved by a motor. The zoom driver 119 is a circuit to drive the motor by a command of the CPU 111 and move the zoom lens 118 to a predetermined position. Out of the lenses composing the shooting lens, the focus lens 120 is a lens to adjust focus. This focus lens 120 is moved by a motor. The focus driver 121 is a circuit to drive the motor by a command of the CPU 111 and move the focus lens 120 to a predetermined position.
The aperture 122 adjusts the amount of light of the field incident on the image pickup device 117. The aperture driver 123 is a circuit to drive a motor by a command of the CPU 111 and control open and closed states of the aperture 122. The image pickup device 117 is a device to convert the optical image inputted through the shooting lens into electric image signals. For example, the image pickup device 117 is composed of a CCD or the like. The image processing circuit 116 analog-to-digital converts the electric signals outputted from the image pickup device 117 to generate digital signals. Further, the image processing circuit 116 performs interpolation processing or the like on the digitally converted signals to generate image data. A memory card is inserted into the memory card slot 124. The memory card slot 124 writes data such as image data into the memory card or deletes data from the memory card.
Next, an operation of the electronic camera 1 according to the first embodiment will be described. First, a setting manipulation to use the face detection function in the electronic camera 1 will be described. When the main subject is a person, the user of the camera wants to focus on a face portion obtained by face detection. In the electronic camera 1, a result of face detection is used as one of the options for deciding an area brought into autofocus (AF).
FIG. 3 and FIG. 4 are views each showing the mode select dial 105 to select a shooting scene mode and a menu screen corresponding to the selected shooting scene mode.
FIG. 3A shows the mode select dial 105 in a state where a portrait shooting mode is selected. FIG. 3B shows a menu screen displayed on the monitor 103 when the menu button 107 is manipulated in the portrait shooting mode. The user selects a function icon by manipulating the cruciform key 102 on the menu screen and manipulates the decision button 104. Thus, the electronic camera 1 performs the selected function. An icon 201 in FIG. 3B is an icon selected when a face detection AF function is performed. The operation of face detection AF will be described later.
FIG. 4A shows the mode select dial 105 in a state where a night landscape portrait shooting mode is selected. FIG. 4B shows a menu screen displayed on the monitor 103 when the menu button 107 is manipulated in the night landscape portrait shooting mode. As just shown, in the electronic camera 1 of the first embodiment, the menu screen of the night landscape portrait mode is not provided with the icon 201 which is selectable from the menu screen of the portrait mode.
A person is sometimes shot in the night landscape portrait mode. Therefore, it would be effective to detect a face even in the night landscape portrait mode and autofocus on the detected face area. However, detecting the face of a subject requires some degree of brightness of the face portion, and in a night landscape portrait scene the brightness is expected to be insufficient for face detection. Hence, in order to prevent the user from being confused when the face detection is not feasible, the face detection AF is made unsettable in the night landscape portrait mode. Similarly, when a sports shooting mode is selected via the mode select dial 105, it is difficult to detect the face of a moving person. Therefore, the sports shooting mode is not provided with the icon to select the face detection AF either. This avoids in advance confusing the user when the camera cannot recognize the face.
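For illustration only, the scene-mode gating described above can be sketched as follows. This is a minimal sketch assuming hypothetical mode names and a menu-builder function; none of these identifiers come from the embodiment.

```python
# Hypothetical sketch of gating the face detection AF menu item by scene mode.
# Mode names and the menu structure are illustrative assumptions.
FACE_AF_CAPABLE_MODES = {"portrait"}            # enough face brightness expected
FACE_AF_BLOCKED_MODES = {"night_portrait",      # faces likely too dark to detect
                         "sports"}              # faces likely blurred by motion

def build_af_menu(scene_mode: str) -> list[str]:
    """Return the AF options offered on the menu screen for a scene mode."""
    options = ["normal_af"]
    if scene_mode in FACE_AF_CAPABLE_MODES:
        options.append("face_detection_af")     # the icon 201 entry appears only here
    return options

print(build_af_menu("portrait"))        # ['normal_af', 'face_detection_af']
print(build_af_menu("night_portrait"))  # ['normal_af']
```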
When the face detection AF of the icon 201 on the menu screen shown in FIG. 3B is selected, the electronic camera 1 performs control so as to focus on the closest face portion among the detected faces. When the face detection AF is selected, the electronic camera 1 is automatically switched to a constant AF mode. In the constant AF mode, irrespective of the manipulation of the release button 101, the electronic camera 1 repeats focusing by AF. Then, when the release button 101 is half pressed, the electronic camera 1 performs control to make an AF lock.
Subsequently, a face detection operation in the electronic camera 1 according to the first embodiment will be described.
In the electronic camera 1, the face detection is performed using a moving image (a through image) picked up by the image pickup device 117. This through image is displayed on the monitor 103 so that the user can check the subject image to be shot. Moreover, during the face detection, the electronic camera 1 performs control in such a manner as to increase the brightness of the through image as compared with when the face detection is not performed. Increasing the brightness of the through image makes it easier to detect the face.
Further, the electronic camera 1 repeatedly performs the face detection using the through image until the release button 101 is half pressed. When the face is not detected at this time, the face detection is assisted by performing central-area AF and multi-area AF to bring the subject into focus.
Furthermore, the electronic camera 1 repeatedly stores face detection information such as the face position and face size as a result of the face detection in the built-in memory 113, overwriting it with the latest result, until the release button 101 is half pressed. This makes it possible to read and use the preceding detection result stored in the built-in memory 113 even if the face is not detected at the half-press manipulation. Besides, when the face detection information cannot be obtained at the half-press manipulation, the electronic camera 1 forcibly performs AF using another area such as a central area.
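As a rough illustration of this repeat-and-cache behavior, the following sketch assumes a hypothetical camera interface (capture_through_image, detect_faces, and so on); it is not the embodiment's actual firmware.

```python
# Illustrative loop: detect faces on through images until half-press, caching
# the latest detection so it can stand in when detection fails at half-press.
def face_detection_loop(camera):
    cached_faces = None
    while not camera.release_half_pressed():
        frame = camera.capture_through_image()
        faces = camera.detect_faces(frame)        # position and size per face
        if faces:
            cached_faces = faces                  # overwrite with latest result
        else:
            camera.assist_with_central_and_multi_area_af()
    # at half-press: prefer a fresh detection, else fall back to the cache
    faces = camera.detect_faces(camera.capture_through_image())
    return faces or cached_faces                  # None means use the central area
```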
Hereinafter, shooting control by the face detection will be specifically described using a flowchart.
FIG. 5 is a flowchart showing control performed by the CPU 111. The flow shown in FIG. 5 is executed in a still image shooting mode, and starts when the manipulation of the menu button 107 is detected.
First, in step S01, the CPU 111 detects whether a scene shooting mode in which it is assumed that a person is shot is selected. In the electronic camera 1 of the first embodiment, it is detected whether the portrait mode or the night landscape portrait mode is selected by the mode select dial 105. If one of them is selected, the CPU 111 goes to step S02. On the other hand, if neither is selected, the CPU 111 goes to step S07. In step S02, the CPU 111 determines whether the selected shooting scene is the night landscape portrait mode. If it is the night landscape portrait mode, the CPU 111 goes to step S07. On the other hand, if it is not the night landscape portrait mode, the CPU 111 goes to step S03.
In step S03, the menu screen having the face detection AF as an option, which is shown in FIG. 3B, is displayed on the monitor 103. Subsequently, in step S04, the CPU 111 detects whether the decision button 104 is manipulated. If the manipulation of the decision button 104 is detected, the CPU 111 goes to step S05. On the other hand, if the manipulation thereof is not detected, the CPU 111 continues the detection. In step S05, the CPU 111 determines whether the face detection AF is selected. If the face detection AF is selected, the CPU 111 goes to step S06. On the other hand, if any option other than the face detection AF is selected, the CPU 111 goes to step S09. In step S06, the CPU 111 controls the electronic camera 1 in a face recognition AF mode. The face recognition AF mode will be described using a flowchart in FIG. 6.
On the other hand, in step S07, the menu screen shown in FIG. 4B is displayed on the monitor 103. Subsequently, in step S08, the CPU 111 detects whether the decision button 104 is manipulated. If the manipulation thereof is detected, the CPU 111 goes to step S09. On the other hand, if the manipulation thereof is not detected, the CPU 111 continues the detection. Then, in step S09, the CPU 111 controls the electronic camera 1 in a normal AF mode.
Through the above control, the face detection is prevented from being performed in the night landscape portrait mode, in which it is probable that the face cannot be detected, even though it is a scene shooting mode for shooting a person. This prevents the user from being confused.
Next, the face recognition AF mode will be described. FIG. 6 is the flowchart showing control performed by the CPU 111 in the face recognition AF mode. This flow is executed in step S06 in FIG. 5.
First, in step S51, the CPU 111 makes a setting not to perform an electronic zoom function. By disabling the electronic zoom function, the face detection can be continued using the through image. Then, in step S52, the CPU 111 makes a setting not to perform a closeup shooting function. This is because in closeup shooting only part of the face is shot, and a part of the face is not sufficient to detect the face. Subsequently, in step S53, the CPU 111 prohibits the monitor 103 from turning off. Unless the monitor 103 is in an on state, the user cannot check the result of face detection, and may take a shot even when the face detection of the electronic camera 1 is erroneous. Thereafter, in step S54, the CPU 111 switches the display of the monitor 103 to a simple display. In the simple display, the monitor 103 displays a reduced number of items of shooting information, such as a memory remaining capacity indication, superimposed on the image data. A display example of shooting information in the face detection AF mode is shown in FIG. 11, and a display example of shooting information in the normal AF mode is shown in FIG. 14. Switching to the simple display as shown in FIG. 11 makes it as easy as possible for the user to check the face detection frame, because in the simple display it is unlikely that the frame displayed when the face is detected overlaps with the display of the shooting information. Moreover, even while the display setting to display the shooting information is active, the CPU 111 stops the shooting information display. When the shooting information display is not set, the CPU 111 of course continues the non-display of the shooting information.
In step S55, the CPU 111 measures the brightness of the field using the image captured by the image pickup device 117. In step S56, the CPU 111 adjusts the brightness based on the measured field brightness. Then, the CPU 111 displays the through image on the monitor 103. In step S57, the CPU 111 performs face detection AF control.
The above control can reduce malfunctions in the face detection AF mode and user dissatisfaction.
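The mode-entry settings of steps S51 through S56 can be summarized in a short sketch. The settings object and its attribute names below are illustrative assumptions, not the embodiment's actual interface.

```python
# A minimal sketch of entering the face recognition AF mode (cf. S51-S56),
# assuming a hypothetical camera-settings object.
def enter_face_recognition_af_mode(camera):
    camera.electronic_zoom_enabled = False   # S51: keep the full frame for detection
    camera.closeup_mode_enabled = False      # S52: a partial face cannot be detected
    camera.monitor_auto_off = False          # S53: the user must see detection results
    camera.display_mode = "simple"           # S54: fewer overlays hiding the face frame
    brightness = camera.measure_field_brightness()                # S55
    camera.set_through_image_brightness(brightness, boost=True)   # S56: brighter through image
```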
Next, the face detection AF control will be described. FIG. 7 is a flowchart showing the face detection AF control performed by the CPU 111. This flow is executed in step S57 in FIG. 6.
First, in step S101, the CPU 111 determines whether the face is detected. If the face is detected, the CPU 111 goes to step S102. On the other hand, if the face is not detected, the CPU 111 goes to step S106. In step S102, if a face character shown in FIG. 10 is displayed while being superimposed on the subject image on the monitor 103, the CPU 111 deletes the display of the face character. Then, in step S103, the CPU 111 sets an area depending on the position and size of the detected face, and as shown in FIG. 11, displays frames while superimposing them on the subject image on the monitor 103. If the number of detected faces is plural, a frame is displayed on each face. In step S104, the CPU 111 performs AF in the set area. In step S105, the CPU 111 temporarily stores detected face information in the built-in memory 113. By storing the detected face information here, the CPU 111 can decide an AF area by using the face information stored in the built-in memory 113 when the face is not detected in the half-press manipulation in which the AF area is finally decided. The face information stored in the built-in memory 113 is overwritten with new face information in the next face detection.
On the other hand, in step S106, the CPU 111 displays the face character shown in FIG. 10 while superimposing it on the subject image on the monitor 103. The display of the face character indicates to the user that the face detection AF mode is being performed, and informs the user, by the size of the face character, of the face size ideal for the face detection control. Incidentally, it is not necessary to display the face character constantly, and it is only required to display it every several seconds. Then, in step S107, the CPU 111 performs AF in an AF area in which a central area is weighted. There is a possibility that although the person is within the field, the face is too blurred to be detected; hence, the face detection is assisted by focusing on the subject image in the central portion, where the main subject is most likely to be. In step S108, the CPU 111 determines again whether the face is detected. If the face is not detected, the CPU 111 goes to step S109. On the other hand, if the face is detected, the CPU 111 goes to step S102. In step S109, the CPU 111 performs multi-area AF. Similarly to the AF in the central area in step S107, this allows a face of a person outside the central portion to be detected.
Next, in step S110, the CPU 111 determines again whether the face is detected. If the face is not detected, the CPU 111 goes to step S111. On the other hand, if the face is detected, the CPU 111 goes to step S102. In step S111, the CPU 111 detects whether the release button 101 is half pressed. If the half-press manipulation is detected, the CPU 111 goes to step S113. On the other hand, if the half-press manipulation is not detected, the CPU 111 goes to step S112. In step S112, the CPU 111 deletes the face information stored in the built-in memory 113 and returns to step S101.
In step S113, if the face character is displayed while being superimposed on the subject image on the monitor 103, the CPU 111 deletes the display of the face character. Then, in step S114, the CPU 111 determines whether the face is detected in order to specify the face which is regarded as a final AF area. If the face is detected, the CPU 111 goes to step S115. On the other hand, if the face is not detected, the CPU 111 goes to step S116. In step S115, the CPU 111 sets an area set according to the position and size of the detected face as the final AF area, and performs AF control. This AF control in the area detected by the face detection will be described later using FIG. 8. In step S116, the CPU 111 detects whether the face information stored in step S105 is in the memory. If the face information is there, the CPU 111 goes to step S117. On the other hand, if it is not, the CPU 111 goes to step S118.
In step S117, the CPU 111 sets an area based on the stored face information as the AF area and performs AF control. The AF control in the area based on the stored face detection will be described later using FIG. 8. In this way, if the face is not detected at the half-press manipulation, the face area detected immediately before, with little time difference, is used as the AF area. This makes it possible to focus on the face portion almost without fail. In particular, this can respond to a one-press manipulation in which the release button 101 is fully pressed with one press. In step S118, the CPU 111 performs AF control with the central area as the AF area. The AF control in the central area will be described later using FIG. 9. Consequently, when the face is not detected, the central area, which is likely to include the main subject, is automatically used as the AF area. This increases the possibility that the main subject comes into focus. Moreover, it becomes unnecessary for the user to return to the menu screen and reset the AF area, which prevents a photo opportunity from being missed.
In step S119, the CPU 111 detects whether the release button 101 is fully pressed. If the release button 101 is fully pressed, the CPU 111 goes to step S120. On the other hand, if it is not fully pressed, the CPU 111 goes to step S121. In step S120, the CPU 111 performs shooting and recording processing. In step S121, the CPU 111 detects whether the release button 101 is half pressed. If the release button 101 is half pressed, the CPU 111 returns to step S119. On the other hand, if it is not half pressed, the CPU 111 returns to step S101.
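The AF-area decision at half-press (steps S114 through S118) reduces to a three-way fallback. The sketch below assumes hypothetical face records with size and area fields; it only illustrates the fallback order from the flowchart.

```python
# Sketch of the final AF-area decision at half-press (cf. S114-S118).
def largest_face(faces):
    # the largest detected face is treated as the closest one
    return max(faces, key=lambda face: face["size"])

def decide_final_af_area(detected_faces, stored_faces, central_area):
    if detected_faces:                          # S115: freshly detected face
        return largest_face(detected_faces)["area"]
    if stored_faces:                            # S117: face cached just before
        return largest_face(stored_faces)["area"]
    return central_area                         # S118: central-weighted fallback
```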
Next, the AF control in the detected area where the face has been detected will be described. FIG. 8 is a flowchart showing the AF control in the detected area performed by the CPU 111. This flow is executed in step S115 and step S117 in FIG. 7.
First, in step S201, the CPU 111 displays a face detected area frame, set corresponding to the position and size of the detected face, while superimposing it on the through image as shown in FIG. 12. When plural faces are detected, frames are displayed on the respective faces. Then, the frame of the largest or closest face is switched from a white frame (a thin-line frame in FIG. 11) to a red frame (a thick-line frame in FIG. 12). Subsequently, in step S202, the CPU 111 sets the area displayed by the red frame in FIG. 12 as the AF area and performs AF control. Thereafter, in step S203, the CPU 111 detects whether focus is achieved. If focus is achieved, the CPU 111 goes to step S204. On the other hand, if focus is not achieved, the CPU 111 goes to step S205. In step S204, the CPU 111 switches the frame shown by the thick-line frame in FIG. 12 from the red frame to a blue frame. This makes it possible to inform the user that focus is achieved. Moreover, as shown in FIG. 13, frames not set as the AF area may be deleted. Meanwhile, in step S205, the CPU 111 displays the frame shown by the thick-line frame in FIG. 12, which remains the red frame, in a blinking state. This makes it possible to inform the user that focus is not achieved.
Next, the AF control in step S118 in FIG. 7 will be described. FIG. 9 is a flowchart showing the AF control, in which the central area is weighted, performed by the CPU 111. This flow is executed in step S118 in FIG. 7.
First, in step S301, the CPU 111 performs the AF control with the central area being weighted. Then, in step S302, the CPU 111 detects whether focus is achieved. If focus is achieved, the CPU 111 goes to step S303. On the other hand, if focus is not achieved, the CPU 111 goes to step S304. In step S303, the CPU 111 displays an icon 301 shown in FIG. 14 in an on-state while superimposing it on the subject image on the monitor 103. This makes it possible to inform the user that focus is achieved in an area other than the face detected area. On the other hand, in step S304, the CPU 111 displays the icon 301 shown in FIG. 14 in a blinking state while superimposing it on the subject image on the monitor 103. This makes it possible to inform the user that focus is not achieved in the area other than the face detected area.
Description of Second Embodiment FIG. 15 is a block diagram showing an overview of an electronic camera of a second embodiment. The electronic camera of the second embodiment includes a shooting lens 11, lens driving mechanisms 12, an image pickup device 13, an analog signal processing section 14, an A/D conversion section 15, an image processing section 16, a compression/decompression section 17, a memory 18, a card I/F 19, a monitor I/F 20 and a liquid crystal display 21, a manipulation section 22, a CPU 23, and a bus 24. Incidentally, the image processing section 16, the compression/decompression section 17, the memory 18, the card I/F 19, the monitor I/F 20, and the CPU 23 are connected to one another via the bus 24.
The shooting lens 11 is composed of a group of plural lenses including a focusing lens for adjusting the focusing position. The position of this shooting lens 11 in an optical axis direction is adjusted by the lens driving mechanisms 12.
The image pickup device 13 is placed on the image space side of the shooting lens 11. Photodetectors which photoelectrically convert the subject image to generate analog image signals are two-dimensionally arranged on a light-receiving plane (a plane facing the shooting lens 11) of the image pickup device 13. An output of the image pickup device 13 is connected to the analog signal processing section 14.
Further, even when the shutter is not released, the image pickup device 13 exposes the subject at predetermined intervals and outputs analog image signals (through image signals) by thinned-out reading or the like. These through image signals are used for the AF calculation, the AE calculation, and face recognition by the CPU 23, generation of a viewfinder moving image by the image processing section 16, and so on. Incidentally, the image pickup device 13 of the second embodiment may adopt either a sequential charge transfer method (for example, a CCD) or an XY address method (for example, a CMOS).
The analog signal processing section 14 is composed of a CDS circuit which performs correlated double sampling, a gain circuit which amplifies the outputs of the analog image signals, a clamp circuit which clamps the waveform of an input signal at a fixed voltage level, and so on. The A/D conversion section 15 converts the analog image signals outputted from the analog signal processing section 14 into digital image signals. An output of the A/D conversion section 15 is connected to the image processing section 16 and the CPU 23, respectively.
The image processing section 16 performs image processing (defective pixel correction, gamma correction, interpolation, color conversion, edge enhancement, and so on) on the digital image signals when the shutter is released to generate shooting image data. Further, the image processing section 16 sequentially generates viewfinder images based on the digital image signals (through image signals) when the shutter is not released.
Furthermore, the image processing section 16 combines a rectangular AF frame showing a face area as an AF target with the viewfinder image for display, based on face recognition information described later (see FIG. 18). Besides, the image processing section 16 gives an indication of focusing failure to the viewfinder image using the above AF frame, based on focusing failure information described later. Examples of this indication of focusing failure are displaying the AF frame in a blinking state, changing the color of the AF frame from that in a normal state, and so on. Incidentally, when the AF calculation is made twice and each ends in focusing failure as described later, the image processing section 16 gives different indications of focusing failure for the first and second times.
The compression/decompression section 17 performs processing of compressing the shooting image data after image processing in a JPEG format and processing of decompressing and reconstructing the image data compressed in the JPEG format. The memory 18 is composed of an SDRAM or the like and has a capacity capable of recording image data corresponding to plural frames. Image data before and after the image processing by the image processing section 16 is temporarily stored in this memory 18.
A connector to connect storage media 25 is formed in the card I/F 19. The storage media 25 are composed of a publicly known semiconductor memory and the like, and the above shooting image data is finally stored in the storage media 25. Incidentally, the shooting image data generated in the second embodiment conforms to the Exif (Exchangeable image file format for digital still cameras) standard, and a main body of the shooting image data and supplementary information (shooting information and so on) on the shooting image data are stored in association with each other.
The liquid crystal display 21 is connected to the monitor I/F 20. The liquid crystal display 21 is mainly placed at a rear portion of the electronic camera. The viewfinder images sequentially outputted from the image processing section 16 are displayed as moving images on the liquid crystal display 21 during shooting. A replay image plane of the shooting image data, a setting image plane to change various kinds of settings of the electronic camera, and so on are also displayed on the liquid crystal display 21.
The manipulation section 22 includes an input button to perform switching between various kinds of modes (such as a shooting mode and a replay mode) of the electronic camera and input settings, a release button, and so on.
The CPU 23 controls the operation of each section of the electronic camera according to a sequence program stored in a ROM, not shown. For example, the CPU 23 performs an AE calculation, a calculation of a white balance gain, and so on based on the through image signals. The CPU 23 generates the supplementary information on the shooting image data based on the Exif standard when the shutter is released. Especially in the second embodiment, the CPU 23 has the following functions.
First, the CPU 23 performs publicly known face recognition processing on the through image signals to detect a face area of a person within the image shooting plane. Then, the CPU 23 generates face recognition information indicating the position of the face area within the image shooting plane. In the second embodiment, the CPU 23 also detects a vertical direction of the face based on the positional relationship among face parts (eyes, a nose, a mouth, and so on) at the time of face recognition.
Incidentally, as examples of the face recognition processing, Japanese Unexamined Patent Application Publication No. 8-63597 discloses (1) a method of extracting a contour of a flesh-colored area based on color and detecting a face by the degree of matching with a face contour template prepared in advance, (2) a method of finding an eye candidate area and detecting a face by the degree of matching with an eye template, and (3) a method of finding a feature quantity defined from a two-dimensional Fourier transform result of a face candidate area found by a face contour template and a two-dimensional Fourier transform result of a face template image including eyes, a nose, a mouth, and so on prepared in advance, and detecting a face by subjecting the feature quantity to threshold processing.
Secondly, the CPU 23 performs a contrast detection system AF calculation based on through image signals of a specified focus area located in the image shooting plane. Here, the CPU 23 selects the specified focus area from among plural focus areas (a group of focus areas) arranged regularly within the image shooting plane, based on the face recognition information. In the second embodiment, all of the focus areas located within a rectangular area which surrounds the contour of the face area compose the specified focus area. In the second embodiment, the specified focus area is set to match the range of the above AF frame. Incidentally, the range of the specified focus area rarely perfectly matches the face area, whereby a surrounding portion adjacent to the face area is included in the specified focus area, which causes a high contrast in a contour portion of the face area (see FIG. 17).
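This selection can be pictured as a simple rectangle-intersection test over the focus-area grid. The sketch below assumes rectangles given as (x0, y0, x1, y1) tuples; the grid layout and the function names are illustrative assumptions.

```python
# Illustrative selection of the specified focus area: every focus area whose
# rectangle intersects the bounding rectangle of the recognized face contour.
def rects_intersect(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def specified_focus_areas(focus_area_grid, face_rect):
    """focus_area_grid: list of (x0, y0, x1, y1) rectangles in the image plane."""
    return [area for area in focus_area_grid if rects_intersect(area, face_rect)]
```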
Moreover, in the second embodiment, when the focusing position is not detected by the first AF calculation (in the specified focus area including the face area), the CPU 23 changes the specified focus area to the focus area located under the face area (a focus area which is likely to include the body of the subject). Note that this change of the specified focus area is set with reference to the vertical direction of the face detected by the CPU 23 using the face parts.
Here, the contrast detection system AF calculation is performed based on the principle that there is a correlation between the degree of blur and the contrast of the image, and the contrast of the image becomes maximum when focus is achieved. More specifically, the CPU 23 first extracts a high-frequency component in a predetermined band by a band-pass filter from the through image signals corresponding to the specified focus area. The CPU 23 then generates a focus evaluation value regarding the subject image in the specified focus area by integrating an absolute value of the high-frequency component. This focus evaluation value is maximum when the contrast is maximum at the focusing position.
Thereafter, the CPU 23 moves the focusing lens in a predetermined direction and compares the focus evaluation values before and after the movement. If the focus evaluation value after the movement is larger, the contrast is regarded as trending higher, and the CPU 23 moves the focusing lens further in the same direction and performs the same calculation. On the other hand, if the focus evaluation value after the movement is smaller, the contrast is trending lower, and the CPU 23 moves the focusing lens in the opposite direction and performs the same calculation. By repeating the above processing, the CPU 23 searches for a peak of the focus evaluation value (a focusing position). The above operation is generally called a hill-climbing operation. Incidentally, if the focusing position is not detected in the specified focus area, the CPU 23 outputs focusing failure information to the image processing section 16.
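The focus evaluation value and the hill-climbing search can be sketched as follows. A second-difference high-pass filter stands in for the band-pass filter, and the camera/lens interface is a hypothetical assumption.

```python
# Sketch of contrast detection AF: integrate the absolute high-frequency
# content of the specified focus area, then hill-climb to its peak.
import numpy as np

def focus_evaluation_value(pixels: np.ndarray) -> float:
    # crude band-pass substitute: horizontal second difference of luminance
    highpass = pixels[:, 2:] - 2.0 * pixels[:, 1:-1] + pixels[:, :-2]
    return float(np.abs(highpass).sum())

def hill_climb(camera, step: int, max_moves: int = 200):
    """Drive the focusing lens until the focus evaluation value peaks."""
    previous = focus_evaluation_value(camera.read_specified_area())
    for _ in range(max_moves):
        camera.move_focusing_lens(step)
        current = focus_evaluation_value(camera.read_specified_area())
        if current < previous:        # passed the peak: reverse and halve the step
            step = -step // 2
            if step == 0:
                return camera.focusing_lens_position()   # focusing position found
        previous = current
    return None   # no peak found: corresponds to outputting focusing failure info
```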
The shooting operation in the second embodiment will be described below with reference to a flowchart in FIG. 16.
Step S1101: The CPU 23 causes the image pickup device 13 to generate through image signals at predetermined intervals. The image processing section 16 generates a viewfinder image based on the through image signals. The CPU 23 displays the viewfinder image as a moving image on the liquid crystal display 21. Accordingly, the user can frame the subject by the viewfinder image displayed on the liquid crystal display 21.
Step S1102: The CPU 23 determines whether the release button is half pressed. If the release button is half pressed (YES side), the CPU 23 goes to S1103. On the other hand, if no force is applied to the release button (NO side), the CPU 23 stands by until the release button is half pressed.
Step S1103: The CPU 23 detects the face area of the subject within the image shooting plane based on the through image signals. Then, the CPU 23 generates the face recognition information when there is a face area within the image shooting plane.
Step S1104: The CPU 23 determines whether the face area is detected in S1103. If the face area is detected (YES side), the CPU 23 goes to S1105. On the other hand, if the face area is not detected (NO side), the CPU 23 goes to S1108.
Step S1105: The CPU 23 sets the focus areas within a rectangular area which surrounds the contour of the face area to the specified focus area (see FIG. 17). Then, the CPU 23 performs the AF calculation by the hill-climbing operation based on the through image signals of the specified focus area. Incidentally, in the AF calculation in S1105, the image processing section 16 combines and displays the AF frame with the face area in the viewfinder image (see FIG. 18).
Step S1106: The CPU 23 determines whether the focusing position is detected in the specified focus area (S1105). If the focusing position is detected (YES side), the CPU 23 goes to S1109. On the other hand, if the focusing position is not detected (NO side), the CPU 23 generates the focusing failure information and goes to S1107.
Step S1107: In this case, the CPU 23 changes the specified focus area to the focus area located under the face area. Then, the CPU 23 performs the AF calculation again in the specified focus area after the change, and thereafter goes to S1109. When the focusing position is not detected by this second AF calculation either, the CPU 23 generates the focusing failure information. Incidentally, in the AF calculation in S1107, the image processing section 16 gives the indication of focusing failure by the AF frame of the viewfinder image based on the first or second focusing failure information.
Step S1108: Meanwhile, in this case, there is no person within the image shooting plane, or the face of the person as the subject is not detected. Therefore, the CPU 23 selects the focus area in the normal operation and performs the AF calculation.
Step S1109: Then, when the user fully presses the release button, the CPU 23 shoots the subject and generates the shooting image data. Incidentally, by using a MakerNote tag of the Exif standard when the shooting image data is generated, the CPU 23 records supplementary information, such as the presence or absence of face recognition and the position of the specified focus area used for the AF calculation, in the shooting image data.
Step S1110: The CPU 23 determines whether there is a shooting ending instruction inputted by the user. If there is the shooting ending instruction (YES side), the CPU 23 stops the generation of the through image signals and so on, and ends the shooting. On the other hand, if there is no shooting ending instruction (NO side), the CPU 23 returns to S1102 and repeats the series of operations. The above is the description of the shooting operation of the second embodiment.
Next, effects of the above second embodiment will be described.
(1) In the second embodiment, the CPU 23 performs the AF calculation in the specified focus area including the contour of the face area, whereby the person in the image shooting plane can easily be brought into focus. Especially in the specified focus area, a high contrast occurs in the contour portion of the face area. Accordingly, compared with when the focus is detected only in a portion with a low contrast within the face area, a search for a contrast peak becomes easier in the second embodiment. Namely, focusing accuracy on the face of the subject increases. Moreover, the detection of the contour of the face area is relatively easy, which reduces the possibility that the focusing accuracy is influenced by the expression of the face of the subject.
(2) In the second embodiment, when the focusing position is not detected by the first AF calculation with the face area as the specified focus area, the CPU 23 performs the second AF calculation in the specified focus area where the body of the person is located (S1107). Accordingly, even if focus cannot be achieved in the face area, the person as the subject can be brought into focus with a high probability. The CPU 23 estimates the position of the body from the direction of the face and sets the second specified focus area. Accordingly, in the second AF calculation, stable focusing accuracy can be ensured regardless of the shooting attitude of the electronic camera, such as the normal position or a vertical position.
(3) In the second embodiment, during the AF calculation, the AF frame is combined with the face area in the viewfinder image for display (S1105). Hence, the user can easily keep track of the person as an AF target from the viewfinder image on the liquid crystal display 21. Further, in the second AF calculation, the viewfinder image is displayed with the indication of focusing failure using the AF frame (S1107). The first and second focusing failure indications are different, so that the user can relatively easily judge from the display state of the AF frame whether the person is brought into focus.
(4) In the second embodiment, the shooting image data contains the supplementary information such as the presence or absence of face recognition and the position of the specified focus area used for the AF calculation. Accordingly, by referring to the supplementary information of the shooting image data with a viewer such as a personal computer, the user can know, after the fact, the situation at the time the shooting was made.
Description of Third Embodiment FIG. 19 is a block diagram showing an overview of an electronic camera of a third embodiment. In the description of the following embodiments, the same numerals and symbols are used to designate components common to the second embodiment, and a description thereof will be omitted.
The third embodiment is a modified example of the second embodiment, and its configuration differs from that of the second embodiment in that an attitude sensor 26 is connected to the CPU 23. The attitude sensor 26 detects a normal-position shooting attitude in which the electronic camera is held in the normal position, an upper right vertical position shooting attitude in which the right side of the electronic camera is located at an upper position, an upper left vertical position shooting attitude in which the left side of the electronic camera is located at an upper position, and an inverted position shooting attitude in which the electronic camera is inverted. When the focusing position is not detected in the specified focus area including the face area, the CPU 23 changes the specified focus area to the focus area located under the face area based on an output of the attitude sensor 26.
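The mapping from the attitude sensor output to the image-plane direction that is "below the face" can be pictured as a small lookup table. The attitude labels, the coordinate convention (x to the right, y downward in the image plane), and the direction signs are all illustrative assumptions.

```python
# Hedged sketch: place the fallback focus area over the subject's body by
# offsetting from the face toward the attitude-dependent "down" direction.
# Actual signs depend on the sensor and coordinate conventions.
ATTITUDE_TO_BODY_DIRECTION = {
    "normal":   (0, +1),    # body toward the bottom edge of the frame
    "right_up": (-1, 0),    # camera rotated: body toward one side edge
    "left_up":  (+1, 0),    # camera rotated the other way: opposite side edge
    "inverted": (0, -1),    # upside-down camera: body toward the top edge
}

def body_focus_center(face_center, attitude, offset):
    dx, dy = ATTITUDE_TO_BODY_DIRECTION[attitude]
    return (face_center[0] + dx * offset, face_center[1] + dy * offset)
```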
In the third embodiment, almost the same effects as in the second embodiment can be obtained. Furthermore, since the position of the specified focus area is changed by the output of the attitude sensor 26, it is possible to reduce the calculation load of the CPU 23 regarding the detection of the vertical direction of the face.
Description of Fourth Embodiment FIG. 20 is a flowchart showing a shooting operation in a fourth embodiment. Here, the steps of the fourth embodiment except S1205 correspond to the respective steps of the second embodiment, and a duplicate description will be omitted. Further, a block diagram of an electronic camera in the fourth embodiment is common to the second embodiment or the third embodiment, and it is not shown.
Step S1205: The CPU 23 sets, out of the focus areas overlapping the contour of the face area, only part of the focus areas to the specified focus area based on the face recognition information (S1203) (see FIG. 21). Then, the CPU 23 performs the AF calculation by the hill-climbing operation based on the through image signals of the specified focus area.
Since the contour portion of the face area is set to the specified focus area also in S1205, the focusing accuracy on the face of the subject can be increased similarly to the second embodiment. However, in the focus area overlapping the lower contour of the face area, there is a high possibility that the contrast is lowered by the flesh-colored portion of the neck. Therefore, it is desirable that in S1205 the CPU 23 set the focus area overlapping the upper contour or the side contour of the face area to the specified focus area. Incidentally, when the upper or side focus area of the face area is used, the specified focus area is selected based on the vertical direction of the face detected by the CPU 23 or the output of the attitude sensor 26.
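One way to picture this restriction is to keep only the contour areas away from the lower (neck-side) contour. The sketch below assumes rectangles as (x0, y0, x1, y1) with y increasing downward and the camera in the normal position; both are assumptions for illustration.

```python
# Illustrative restriction to part of the contour focus areas (cf. S1205):
# drop areas overlapping the lower face contour, where the neck lowers contrast.
def upper_and_side_contour_areas(contour_areas, face_rect):
    _, y0, _, y1 = face_rect
    lower_band_start = y1 - (y1 - y0) * 0.25   # bottom quarter of the face rect
    return [area for area in contour_areas
            if (area[1] + area[3]) / 2 < lower_band_start]   # area center above it
```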
In the fourth embodiment, the specified focus area is smaller than that in the second embodiment, so that the calculation amount in the AF calculation is also reduced. Accordingly, the fourth embodiment makes it possible to simplify the circuit configuration of the CPU 23 and further speed up the AF calculation.
Supplementary Description of Embodiments When there is a focusing failure in the specified focus area including the face area in the above second embodiment, the second AF calculation may be performed, for example, in the focus area at the center of the image shooting plane, regardless of the result of the face recognition. Further, in the fourth embodiment, when there is a focusing failure in the first specified focus area, the second AF calculation may be performed in another focus area of the face area. Furthermore, in the second embodiment, the focusing failure indication may be given only when there is a focusing failure in the second AF calculation.
The invention is not limited to the above embodiments and various modifications may be made without departing from the spirit and scope of the invention. Any improvement may be made in part or all of the components.