CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation application of U.S. application Ser. No. 12/909,904, filed in the United States on Oct. 22, 2010, which is related to and claims the priority benefit of Korean Patent Application No. 10-2009-109686, filed on Nov. 13, 2009 in the Korean Intellectual Property Office, the disclosures of which are incorporated herein by reference.
BACKGROUND
1. Field of the General Inventive Concept
The present general inventive concept relates to a photographing apparatus and a method of providing a photographed video, and more particularly, to a photographing apparatus to photograph or record high quality videos and a method of providing a photographed video.
2. Description of the Related Art
With the development of technologies related to a photographing apparatus, the quality of videos which are photographed by photographing apparatuses has greatly improved. Recently developed photographing apparatuses can photograph video in full high definition (HD) resolution.
When a photographing apparatus photographs a video in high resolution, the photographing apparatus obtains a high quality image, which is an advantage. However, the higher the resolution of an image, the larger the size of the photographed video data that represents the image will be, and the photographing apparatus may require a large capacity storage medium to store the high resolution video data.
In addition, the large capacity of a high resolution video may be a problem when providing a broadcasting service in real-time. This is because the photographed video has to be transmitted in real-time for a real-time broadcasting service, but a communications network may not guarantee a communication speed sufficient to transmit a full HD image in real-time.
As described above, it is difficult for the photographing apparatus to photograph a video in high resolution and to transmit the photographed video in real-time. Therefore, there is a need for a method to broadcast a high resolution video photographed by a photographing apparatus in real-time.
SUMMARY
The present general inventive concept provides a photographing apparatus to convert a photographed or recorded video into two types of formats, to store the converted videos, and to transmit the videos wirelessly in real-time, and a method of providing a photographed video.
Additional features and utilities of the present general inventive concept will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the general inventive concept.
Embodiments of the present general inventive concept may be achieved by providing a photographing apparatus, including an image sensor to photograph an object and to output first video data, a scaler to adjust the size of the first video data, and to output second video data, a video encoder to encode the first video data and the second video data, and to output a first video stream and a second video stream, a first multiplexer (MUX) to multiplex the first video stream and an audio stream, and to convert the multiplexed stream into a video file of a first format, a second multiplexer (MUX) to multiplex one of the first video stream and the second video stream and the audio stream, and to convert the multiplexed stream into a video file of a second format, and a communication unit to transmit the video file of the second format wirelessly.
The photographing apparatus may further include a selection unit to select one of the first video stream and the second video stream according to a manipulation of a user, and to output the selected video stream.
The photographing apparatus may further include a storage unit to store at least one of the video file of the first format and the video file of the second format.
The first MUX may output the video file of the first format to the storage unit, the second MUX may multiplex the second video stream with the audio stream, convert the multiplexed stream into a video file of the second format, and output the video file of the second format to the communication unit, and the communication unit may transmit the video file of the second format wirelessly to the outside.
The video file of the first format may be a video file having the same definition as the first video data photographed by the image sensor.
The video file of the first format may be a video file having full high definition (HD).
The video file of the second format may be a video file having capacity which is the same as or less than the capacity capable of being transmitted by the communication unit wirelessly.
The photographing apparatus may further include a control unit to control the second MUX to perform at least one of storing the video file of the second format in the storage unit and transmitting the video file of the second format wirelessly through the communication unit according to a manipulation of a user.
The scaler may adjust the size of the first video data by performing at least one of adjusting resolution of the first video data and extracting a specific portion from the first video data.
Embodiments of the present general inventive concept may also be achieved by providing a method of providing a photographed video, including photographing an object and generating first video data, adjusting the size of the first video data and generating second video data, encoding the first video data and the second video data to be converted into a first video stream and a second video stream, multiplexing the first video stream and an audio stream, and converting the multiplexed stream into a video file of a first format, multiplexing one of the first video stream and the second video stream and the audio stream, and converting the multiplexed stream into a video file of a second format, and transmitting the video file of the second format wirelessly.
The converting the multiplexed stream into a video file of a second format may select one of the first video stream and the second video stream according to a manipulation of a user, multiplex the selected video stream and the audio stream, and convert the multiplexed stream into a video file of a second format.
The method may further include storing at least one of the video file of the first format and the video file of the second format.
The storing may store the video file of the first format, the converting the multiplexed stream into the video file of the second format may multiplex the second video stream with the audio stream, and convert the multiplexed stream into a video file of the second format, and the transmitting may transmit the video file of the second format wirelessly to the outside.
The video file of the first format may be a video file having the same definition as that of the first video data photographed by an image sensor.
The video file of the first format may be a video file having full high definition (HD).
The video file of the second format may be a video file having capacity which is the same as or less than the capacity capable of being transmitted by the communication unit wirelessly.
The method may further include controlling to perform at least one of storing the video file of the second format and transmitting the video file of the second format wirelessly according to a manipulation of a user.
The generating the second video data may adjust the size of the first video data by performing at least one of adjusting resolution of the first video data and extracting a specific portion from the first video data, and generate the second video data.
Embodiments of the present general inventive concept may also be achieved by providing a photographing apparatus including an image sensor unit to capture moving images, convert the moving images into an electrical signal and output the electrical signal as video data, a scaler unit to reduce the size of the video data, and a communication unit to wirelessly transmit the reduced size video data in real time.
A video encoder unit may receive the video data and the reduced size video data and create a plurality of video streams to correspond to the video data and reduced size video data, and a multiplexer unit may convert one of the plurality of video streams into a video file to be transmitted by the communication unit.
A selection unit may select one of the plurality of video streams to be converted by the multiplexer, and a control unit may control the output of the selection unit according to a pre-programmed option or a manipulation of a user.
The video file may include a moov box and an mdat box.
The video encoder unit may further include an encoder, a first buffer to temporarily store the video data output from the image sensor, and a second buffer to temporarily store the reduced video data output from the scaler unit, wherein the first and second buffers may alternately input the video data and reduced video data to the encoder.
Embodiments of the present general inventive concept may also be achieved by providing a method of providing a photographed video including converting moving images into an electrical signal and outputting the electrical signal as video data, reducing the size of the video data, and wirelessly transmitting the reduced size video data in real time.
The method may further include receiving the reduced size video data in an encoder unit and encoding the reduced size video data into a video stream.
The method may further include temporarily storing the video data in a first buffer of the encoder unit, and temporarily storing the reduced video data in a second buffer of the encoder unit, wherein the first and second buffers alternately input the video data and reduced video data to an encoder within the encoder unit.
The method may further include multiplexing the video stream and an audio stream and converting the multiplexed stream into a video file including a track having data and timing information to correspond to different portions of the video file.
Embodiments of the present general inventive concept may also be achieved by a method of providing a photographed video, the method including converting a photographed video into video data having two types of video data formats, temporarily storing the converted video data, and transmitting the video data wirelessly in real time.
The transmitted video data may be reduced size video data.
The reduced size video data may be reduced by reducing the resolution of the video data or selecting a portion of the video data.
The method may further include receiving the video data and reduced size video data in an encoder and parallel encoding the video data and the reduced size video data into a plurality of video streams to be transmitted wirelessly in real time.
BRIEF DESCRIPTION OF THE DRAWINGS
These and/or other features and utilities of the present general inventive concept will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, of which:
FIG. 1 is a block diagram illustrating a photographing apparatus according to an exemplary embodiment of the present general inventive concept;
FIG. 2 is a block diagram illustrating a video encoder according to an exemplary embodiment of the present general inventive concept;
FIG. 3 is a flowchart illustrating a method of providing a photographed video according to an exemplary embodiment of the present general inventive concept;
FIG. 4 is a view illustrating a video which is photographed by a photographing apparatus and broadcast in real-time, according to an exemplary embodiment of the present general inventive concept;
FIG. 5 is a view illustrating an original photographed video and a video having resolution converted by a scaler according to an exemplary embodiment of the present general inventive concept;
FIG. 6 is a view illustrating an original photographed video and a portion extracted from the original photographed video by a scaler according to an exemplary embodiment of the present general inventive concept;
FIGS. 7A and 7B are views illustrating the structure of a video file of the MPEG-4 (MP4) format according to an exemplary embodiment of the present general inventive concept;
FIGS. 8A to 8C are views illustrating the structure of a video file to be broadcast in real-time according to an exemplary embodiment of the present general inventive concept.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Reference will now be made in detail to the embodiments of the present general inventive concept, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. The embodiments are described below in order to explain the present general inventive concept by referring to the figures.
In the present general inventive concept described herein, the terms “photograph” and “photographing” may be used interchangeably with terms such as “record” and “recording” to mean that not simply that a still image may be photographed, but that a moving video may be photographed to record or film short, long or continuous streaming segments of moving video images. Thus, when the specification describes photographing an object, a video, or moving images, these terms also mean that a video, not simply a still image photograph, may be recorded, stored, and/or transmitted in the various embodiments described herein. When discussing a “photographing apparatus” in the present general inventive concept, the term is meant to include an apparatus that may take traditional digital photographs and also include a video camera and related apparatuses that may capture moving video images.
FIG. 1 is a block diagram illustrating a photographing apparatus 100 according to an exemplary embodiment of the present general inventive concept. Referring to FIG. 1, the photographing apparatus 100 may include a microphone 110, an audio encoder 115, an image sensor 120, a scaler 123, a video encoder 125, a selection unit 130, a first multiplexer (MUX) 140, a second multiplexer (MUX) 150, a storage unit 160, a communication unit 170, a control unit 180, and a manipulation unit 190.
The microphone 110 may receive voice and other audio sounds, convert the voice or other sounds into audio data (A0), and output the audio data (A0) to the audio encoder 115. That is, the microphone 110 may convert an acoustic signal into the audio data (A0), which is an electrical signal.
The audio encoder 115 compresses the input audio data (A0), and may generate a compressed audio stream (A1). The audio encoder 115 may compress the audio data using encoding techniques such as advanced audio coding (AAC), audio coding 3 (AC3), and so on.
The image sensor 120 photographs an object, and outputs first video data (V0) corresponding to the photographed video. That is, the image sensor 120 converts incoming light into the first video data (V0), which can be an electrical signal. The image sensor 120 may be implemented using a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS).
The scaler 123 may adjust the size of the first video data (V0), generate second video data (V1) having a different or smaller size than the first video data (V0), and output the second video data (V1). The scaler 123 may adjust the size of the first video data (V0) by performing at least one of adjusting the resolution of the first video data (V0) and extracting one or a plurality of specific portions from the first video data (V0) to produce the second video data (V1).
Specifically, the scaler 123 may generate the second video data (V1), of which the image or file size is smaller than that of the first video data (V0), by adjusting the resolution of the first video data (V0) from a high resolution to a lower resolution. For instance, the scaler 123 may generate the second video data (V1) by adjusting the resolution of the first video data (V0) from full high definition (HD) to standard definition (SD).
In addition, the scaler 123 may generate the second video data (V1), of which the file size is smaller than that of the first video data (V0), by extracting at least one portion from the first video data (V0). For example, the scaler 123 may extract a portion selected by a user from the first video data (V0), and generate the second video data (V1) using the extracted portion. Also, the extracted portion may be a pre-programmed portion to capture a desired section of the video being recorded.
Therefore, the scaler 123 may adjust the resolution of the first video data (V0) or extract a specific portion from the first video data (V0) according to a pre-programmed selection or a selection of a user to generate or output the second video data (V1) of which the size is adjusted.
The scaler 123 may generate the second video data (V1) by reducing the size of the first video data (V0) so that the communication unit 170 may transmit the second video data (V1) wirelessly or by wired connection in real-time. As the first video data (V0), which is an original video photographed by the photographing apparatus 100, may be a high quality video, there is a possibility that it may be difficult or impossible to transmit the first video data (V0) over the bandwidth of, or accessible by, the communication unit 170. Therefore, the scaler 123 may reduce the size of the first video data (V0), which is the original video data representing the original captured or recorded video, to generate the second video data (V1), thereby enabling the photographing apparatus 100 to broadcast the photographed video over the bandwidth of, or accessible by, the communication unit 170.
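The two size-reduction paths described above, lowering the resolution and extracting a portion, can be sketched in a few lines. This is an illustrative sketch and not the patent's implementation: frames are modeled as plain lists of pixel rows, and the function names `downscale` and `crop` are hypothetical.

```python
# Illustrative sketch of the scaler 123's two size-reduction paths.
# A frame is a list of rows; real scalers operate on hardware buffers.

def downscale(frame, out_w, out_h):
    """Reduce resolution by nearest-neighbor sampling (e.g. full HD -> SD)."""
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]

def crop(frame, x, y, w, h):
    """Extract a specific portion of the frame, e.g. a user-selected region."""
    return [row[x:x + w] for row in frame[y:y + h]]

# A 4x4 frame reduced to 2x2, and a 2x2 portion extracted from it.
frame = [[10 * r + c for c in range(4)] for r in range(4)]
small = downscale(frame, 2, 2)  # smaller "V1" via lower resolution
part = crop(frame, 1, 1, 2, 2)  # smaller "V1" via portion extraction
```

Either path yields second video data small enough for the communication unit's available bandwidth.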
The video encoder 125 may encode the first video data (V0) and the second video data (V1), and output a first video stream (V2) and a second video stream (V3), respectively. Specifically, the video encoder 125 may encode the first video data (V0) and the second video data (V1) in parallel to provide greater output options for the photographing apparatus 100. For the parallel processing, the video encoder 125 may be implemented using buffers such as RAM memory to temporarily store the video data and video streams. The structure of the video encoder 125 will be explained in detail with reference to FIG. 2.
FIG. 2 is a block diagram illustrating the video encoder 125 according to an exemplary embodiment of the present general inventive concept. Referring to FIG. 2, the video encoder 125 may include an encoder 200, a first buffer 210, a second buffer 220, a third buffer 230, and a fourth buffer 240.
The first buffer 210 may temporarily store the first video data (V0) output from the image sensor 120, and the second buffer 220 may temporarily store the second video data (V1) output from the scaler 123. The first and second buffers 210 and 220 may alternately input the first and second video data (V0) and (V1) to the encoder 200 by a predetermined unit, for example, a frame unit.
The encoder 200 may encode the video data temporarily stored in the first and second buffers 210 and 220, and temporarily store the encoded video data in the third and fourth buffers 230 and 240. Specifically, the encoder 200 may encode the first video data (V0) input into the first buffer 210, and temporarily store the encoded first video stream (V2) in the third buffer 230. The encoder 200 may encode the second video data (V1) input into the second buffer 220, and temporarily store the encoded second video stream (V3) in the fourth buffer 240.
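The alternating, frame-by-frame feed from the first and second buffers into the encoder 200 might look like the following sketch. The `Encoder` class, its `encode()` method, and the string frame labels are illustrative stand-ins for real compression hardware, not the patent's design.

```python
# Sketch of the double-buffered, alternating feed to the encoder 200:
# V0 frames and V1 frames are pulled from their buffers in turn, and the
# encoded results accumulate in separate output buffers (V2 and V3).
from collections import deque

class Encoder:
    def __init__(self):
        self.v2 = deque()  # third buffer: encoded first video stream
        self.v3 = deque()  # fourth buffer: encoded second video stream

    def encode(self, frame, out):
        out.append(("enc", frame))  # stand-in for real compression

first_buffer = deque(["V0-f0", "V0-f1"])   # from the image sensor 120
second_buffer = deque(["V1-f0", "V1-f1"])  # from the scaler 123

enc = Encoder()
while first_buffer or second_buffer:
    if first_buffer:
        enc.encode(first_buffer.popleft(), enc.v2)   # a V0 frame -> V2
    if second_buffer:
        enc.encode(second_buffer.popleft(), enc.v3)  # a V1 frame -> V3
```

Interleaving by frame lets one encoder serve both streams, which is one way the parallel encoding described above could be realized.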
As illustrated in FIGS. 1 and 2, the third buffer 230 of the video encoder 125 may output the first video stream (V2) to the first MUX 140 and to the selection unit 130. The fourth buffer 240 of the video encoder 125 may output the second video stream (V3) to the selection unit 130.
Referring again to FIG. 1, the selection unit 130 selects one of the first video stream (V2) and the second video stream (V3) according to a pre-programmed selection or a manipulation of a user, and outputs the selected video stream (V2) or (V3) to the second MUX 150. The selection unit 130 may be controlled by the control unit 180, and may be implemented as a switch unit to switch one of the first video stream (V2) and the second video stream (V3) and to output the video stream (V2) or (V3) to the second MUX 150.
If the bandwidth of the communication unit 170, or the bandwidth accessible by the communication unit 170, is sufficient to transmit the first video stream (V2) wirelessly in real-time, a user may manipulate the selection unit 130 to select the first video stream (V2). The control unit 180 may also be programmed to make this determination automatically, and transmit the first video stream (V2). Usually, as the first video stream (V2) is in high quality and may require a large bandwidth for transmission, it may be difficult or impossible to transmit the first video stream (V2) wirelessly in real-time. When these difficulties arise, the control unit 180 may control the selection unit 130 to select the second video stream (V3), of which the size has been scaled to be smaller.
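The decision described above can be summarized as a small selection rule. This is an assumed sketch of the logic, not something claimed by the patent; the `select_stream` function and the bitrate figures are hypothetical.

```python
# Hypothetical sketch of the control unit 180 driving the selection unit
# 130: a manual user choice wins; otherwise pick the full-quality stream
# V2 only when the available bandwidth can carry it in real time.

def select_stream(available_kbps, v2_bitrate_kbps, user_choice=None):
    if user_choice in ("V2", "V3"):  # manipulation of a user takes priority
        return user_choice
    return "V2" if available_kbps >= v2_bitrate_kbps else "V3"

# e.g. a full HD stream near 8000 kbps over a 2000 kbps uplink falls
# back to the scaled stream V3.
choice = select_stream(2000, 8000)
```

The same rule covers both cases in the paragraph above: sufficient bandwidth yields V2, and the usual constrained case yields V3.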
The first MUX 140 may multiplex the first video stream (V2) and the audio stream (A1), and convert the multiplexed stream into a video file of a first format. The first MUX 140 outputs the video file of the first format and stores the video file in the storage unit 160. Herein, the first format may represent a format suitable to store a video file. Since the video file of the first format is an original photographed video, the first MUX 140 converts the first video stream (V2) into the first format suitable to store a file. For example, the first format may be a format such as MP4, audio video interleave (AVI), and so on. The structure of a file of the MP4 format is illustrated in FIGS. 7A and 7B, and will be explained later.
The video file of the first format may be a video file having the same resolution as that of the first video data photographed by the image sensor 120. For example, the video file of the first format may be a video file having full HD quality.
The second MUX 150 may multiplex the audio stream (A1) and the video stream (V2) or (V3) selected by the selection unit 130, and convert the multiplexed stream into a video file of a second format. Since the video file of the second format may be transmitted wirelessly in real-time, the second MUX 150 may convert the video stream input by the selection unit 130 into the second format, which is suitable to transmit a file wirelessly in real-time. The file structure of a format suitable for wireless transmission in real-time is illustrated in FIGS. 8A to 8C, and will be explained later. The video format illustrated in FIGS. 7A and 7B may also be used for wireless transmission of the video files as described herein. As described above, since the video file of the second format is for wireless transmission, its capacity may be the same as or less than the capacity that can be transmitted wirelessly by the communication unit 170 in real-time.
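For reference, a container file in the MP4 family (see FIGS. 7A and 7B, and the moov and mdat boxes referenced in this disclosure) is organized as a sequence of boxes, each headed by a 4-byte big-endian size covering the whole box and a 4-byte type code. The following is a hedged sketch of walking those top-level boxes; the synthetic bytes below are illustrative and do not form a playable file.

```python
# Sketch of reading the top-level box layout of an MP4-style file,
# such as one produced by a multiplexer like the first MUX 140.
import struct

def top_level_boxes(data):
    """Return (type, size) pairs for each top-level box in `data`."""
    boxes, pos = [], 0
    while pos + 8 <= len(data):
        size, btype = struct.unpack_from(">I4s", data, pos)
        boxes.append((btype.decode("ascii"), size))
        pos += size  # size covers the 8-byte header plus the payload
    return boxes

# Synthetic file: a 16-byte "moov" box followed by a 12-byte "mdat" box.
sample = (struct.pack(">I4s", 16, b"moov") + b"\x00" * 8 +
          struct.pack(">I4s", 12, b"mdat") + b"\x00" * 4)
layout = top_level_boxes(sample)  # [('moov', 16), ('mdat', 12)]
```

A streaming-oriented second format would typically arrange or fragment these boxes so that playback can begin before the whole file arrives.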
The storage unit 160 may store programs to execute various operations of the photographing apparatus 100. The storage unit 160 may also store the video file of the first format and the video file of the second format. The storage unit 160 may be implemented using a hard disc drive, a non-volatile memory, an external memory such as a flash drive, and other storage mediums as are known in the art.
The communication unit 170 may be connected to an intranet network, an internet network or an external device for communication. The communication unit 170 may transmit a video file of the second format in a wireless manner. For example, the communication unit 170 may transmit the video file of the second format to a server which transmits a video wirelessly in real-time via an intranet or internet network. Therefore, clients connected to a broadcast server may view the video photographed or recorded by the photographing apparatus 100 in real-time. The communication unit 170 may also be implemented using a wireless local area network (LAN).
The manipulation unit 190 can receive a manipulation from a user, and transmit the manipulation to the control unit 180. Specifically, the manipulation unit 190 may be buttons or a touchscreen provided on the photographing apparatus 100.
The control unit 180 may control overall operations of the photographing apparatus 100. In more detail, the control unit 180 may control the second MUX 150 to perform at least one of storing the video file of the second format in the storage unit 160 and transmitting the video file of the second format wirelessly through the communication unit 170 according to the manipulation of the user or instructions pre-stored in the storage unit 160. The control unit 180 may control the second MUX 150 to output the video file of the second format to the storage unit 160, thereby storing the video file of the second format in the storage unit 160. The control unit 180 may also control the second MUX 150 to output the video file of the second format to the communication unit 170 in order to transmit the video file of the second format wirelessly in real-time.
The control unit 180 may also control the scaler 123 to adjust the size of the first video data (V0) by performing at least one of adjusting the resolution of the first video data (V0) and extracting specific portions from the first video data (V0) according to the manipulation of the user, or the capabilities of the communication unit 170.
The control unit 180 may control the selection unit 130 to select one of the first video stream (V2) and the second video stream (V3), and to output the selected video stream (V2) or (V3) to the second MUX 150 according to the manipulation of the user or a program stored in the storage unit 160 or the control unit 180. Thus, users on the internet can view the video captured by the user in real-time.
The photographing apparatus 100 may store the photographed video having high quality in the storage unit 160, as well as transmit the video wirelessly in real-time externally to the photographing apparatus 100. Therefore, a user may transmit and broadcast a photographed or recorded video in real-time via a network while photographing an object using the photographing apparatus 100.
Hereinbelow, a method of providing a photographed video will be explained in detail with reference to FIG. 3. FIG. 3 is a flowchart illustrating a method of providing a photographed video according to an exemplary embodiment of the present general inventive concept.
The photographing apparatus 100 photographs or records an object through the image sensor 120, and generates and outputs the first video data (V0) in operation S310. That is, the image sensor 120 converts light into the first video data (V0), which may be an electrical signal.
The photographing apparatus 100 may adjust the size of the first video data (V0), and generate and output the second video data (V1) with the adjusted size in operation S320. The photographing apparatus 100 may perform at least one of adjusting the resolution of the first video data (V0) and extracting at least one specific portion from the first video data (V0) to adjust the size of the first video data (V0).
Specifically, the photographing apparatus 100 may generate the second video data (V1), of which the file size is smaller than that of the first video data (V0), by adjusting the resolution of the first video data (V0) from a high resolution to a lower resolution. For example, the photographing apparatus 100 may adjust the resolution of the first video data (V0) from full high definition (HD) to standard definition (SD), and generate the second video data (V1).
The photographing apparatus 100 may also generate the second video data (V1), of which the file size is smaller than that of the first video data (V0), by extracting at least one portion from the first video data (V0). For example, the photographing apparatus 100 may extract a portion which is pre-programmed to be extracted or which a user selects from the first video data (V0), and generate the second video data (V1) using the extracted portion.
Therefore, the photographing apparatus 100 may adjust the resolution of the first video data (V0) or may extract at least one specific portion from the first video data (V0) according to a pre-programmed selection or a selection of a user to generate or output the second video data (V1) of which the size has been adjusted.
Thus, the photographing apparatus 100 can reduce the size of the first video data (V0) to have a capacity which is the same as or less than the capacity capable of being transmitted wirelessly in real-time by the communication unit 170, and generate the second video data (V1). Since the first video data (V0), which is an original video photographed by the photographing apparatus 100, is a high quality video, there is a possibility that it may be difficult or impossible to transmit the first video data (V0) over the available bandwidth of the communication unit 170. Therefore, the photographing apparatus 100 may reduce the size of the first video data (V0), which is the original video, to generate the second video data (V1), thereby enabling the photographing apparatus 100 to transmit the photographed video over the bandwidth accessible by the communication unit 170.
The photographing apparatus 100 encodes the first video data (V0) and the second video data (V1), and outputs the first video stream (V2) and the second video stream (V3), respectively, in operation S330. Specifically, the photographing apparatus 100 encodes the first video data (V0) and the second video data (V1) in parallel.
The photographing apparatus 100 multiplexes the first video stream (V2) and the audio stream (A1), and converts the multiplexed stream into a video file of a first format in operation S340. Herein, the first format represents a format suitable to store a video file. Since the video file of the first format is an original photographed video, the first multiplexer 140 of the photographing apparatus 100 converts the first video stream (V2) into the first format suitable to store a file. For example, the first format may be a format such as MP4, audio video interleave (AVI), and so on. The structure of a file of the MP4 format is illustrated in FIGS. 7A and 7B, and thus this will be explained later.
The photographing apparatus 100 may select one of the first video stream (V2) and the second video stream (V3) according to a program or a manipulation of a user in operation S350. If the bandwidth of the communication unit 170 is sufficient to transmit the first video stream (V2) wirelessly in real-time, a user may manipulate the photographing apparatus 100 to select the first video stream (V2). Usually, as the first video stream (V2) is a high quality video, it may be impossible to transmit the first video stream (V2) wirelessly in real-time. In such a case, the photographing apparatus 100 may select the second video stream (V3), of which the size has been scaled to be smaller.
The photographing apparatus 100 may multiplex the audio stream (A1) and the video stream selected from the first and second video streams (V2) and (V3), and convert the multiplexed stream into a video file of a second format in operation S360. Since the video file of the second format will be transmitted wirelessly in real-time, the photographing apparatus 100 may convert the selected video stream into the second format, which is suitable to transmit a file wirelessly in real-time. The file structure of a format suitable for wireless transmission in real-time is illustrated in FIGS. 8A to 8C, and thus this will be explained later. As described above, since the video file of the second format is for wireless transmission, the video file may have a capacity which is the same as or less than the capacity capable of being transmitted wirelessly by the communication unit 170 in real-time.
The photographing apparatus 100 stores the video file of the first format and the video file of the second format in operation S370.
The photographing apparatus 100 may transmit the video file of the second format wirelessly through a communication network such as an intranet or the Internet. For example, the photographing apparatus 100 may transmit the video file of the second format through the communication unit 170 to a server which broadcasts a video in real-time. Clients, customers, social-networking sites, business networking applications, etc. connected to the server may view the video recorded or photographed by the photographing apparatus 100 in real-time.
Through the above process, the photographing apparatus 100 may store the photographed video having high quality in the storage unit 160, as well as broadcast the video in real-time external to the photographing apparatus 100 to be viewed by external users. Therefore, a user may record, transmit and broadcast a photographed video in real-time over a network while photographing an object using the photographing apparatus 100.
FIG. 4 is a view illustrating a video which is photographed by the photographing apparatus 100 and broadcast in real-time, according to an exemplary embodiment of the present general inventive concept.
Referring to FIG. 4, the photographing apparatus 100 may photograph moving images such as live objects or a scene 410, and temporarily store a photographed video of the objects or scene 410 having full HD quality in the storage unit 160 as a full HD quality video. The photographing apparatus 100 may scale the photographed video having full HD quality to have SD quality. The photographing apparatus 100 may wirelessly transmit the scaled video having SD quality to an external device 420 such as a television, computer monitor, handheld device, or other display as is known in the art. As illustrated in FIG. 4, the video photographed by the photographing apparatus 100 may be displayed on the camera itself or on an external display 420 in SD quality. Also, upon transmission, the video having SD quality may be embedded with a code to allow the SD quality video to be expanded to HD at a receiving end of the transmission.
The photographing apparatus 100 may store an original photographed video having high quality, as well as broadcast the photographed video converted to have low quality suitable for wireless transmission external to the photographing apparatus 100.
Hereinbelow, a method of scaling a high quality video will be explained with reference to FIGS. 5 and 6.
FIG. 5 is a view illustrating an original photographed video and a video having resolution converted by the scaler 123 according to an exemplary embodiment of the present general inventive concept. FIG. 5 illustrates that first video data 500, which is an original photographed video having high quality, may be scaled to second video data 510 having lower quality through resolution conversion.
As illustrated in FIGS. 1 and 5, the scaler 123 of the photographing apparatus 100 scales the first video data 500 having 1920×1080 resolution to the second video data 510 having 720×480 resolution, and generates the second video data 510 of which the size has been reduced.
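The resolution conversion performed by the scaler can be sketched as a simple index mapping. The sketch below is illustrative only: a hardware scaler such as the scaler 123 would typically apply filtering rather than the nearest-neighbor mapping shown here, and the frame representation is an assumption.

```python
# Illustrative nearest-neighbor downscale (e.g. 1920x1080 -> 720x480),
# operating on a frame represented as a list of rows of pixel values.
# Only the index mapping is shown; real scalers filter the image.

def downscale(frame, src_w, src_h, dst_w, dst_h):
    return [
        [frame[y * src_h // dst_h][x * src_w // dst_w]
         for x in range(dst_w)]
        for y in range(dst_h)
    ]

# Tiny example: a 4x2 "frame" reduced to 2x1.
frame = [[0, 1, 2, 3],
         [4, 5, 6, 7]]
print(downscale(frame, 4, 2, 2, 1))  # [[0, 2]]
```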
FIGS. 1 and 6 illustrate an original photographed video and a portion 605 which the scaler 123 may extract from the original photographed video according to an exemplary embodiment of the present general inventive concept. FIG. 6 illustrates that the portion 605 of first video data 600, which is an original photographed video, may be extracted, thereby being scaled to second video data 610.
As illustrated in FIGS. 1 and 6, the scaler 123 of the photographing apparatus 100 extracts the portion 605 from the first video data 600 having a specific size, scales the portion 605 to the second video data 610, and generates the second video data 610 of which the size has been reduced. The portion 605 may be selected by the manipulation of a user, or may be a predetermined portion. The extracted portion 605 is not limited to the upper left-hand corner portion of the original video 600 illustrated in FIG. 6, but may be any portion or portions programmed into the controller 180 or selected by a user through the manipulation unit 190.
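The extraction step of FIG. 6 can be illustrated as a rectangular crop applied before any size reduction. This is a hypothetical sketch: the coordinates, function name, and frame representation are invented for the example; in the apparatus the portion may be user-selected via the manipulation unit 190 or preset in the controller 180.

```python
# Illustrative crop of a rectangular portion from an original frame,
# as in the extraction of portion 605 from first video data 600.
# The frame is a list of rows; coordinates are hypothetical.

def extract_portion(frame, x0, y0, width, height):
    """Return the sub-frame of the given size starting at (x0, y0)."""
    return [row[x0:x0 + width] for row in frame[y0:y0 + height]]

# 6x4 synthetic frame where pixel value encodes (row, column).
frame = [[c + 10 * r for c in range(6)] for r in range(4)]
print(extract_portion(frame, 2, 1, 3, 2))  # [[12, 13, 14], [22, 23, 24]]
```

The cropped portion would then be passed through the same downscaling path as a full frame to produce the second video data 610.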
Hereinbelow, the first format and the second format generated by the first MUX 140 and the second MUX 150 will be explained in detail with reference to FIGS. 7A to 8C.
FIGS. 7A and 7B are views illustrating the structure of a video file of the MPEG-4 (MP4) format that may be stored in the storage unit 160 or transmitted by the communication unit 170 according to an exemplary embodiment of the present general inventive concept. FIG. 7A illustrates the structure of the ISO file system 710 of the MP4 format. The Moov box 720 of the ISO file system illustrated in FIG. 7A may include a number of different boxes and hierarchies, and may provide for basic functionality of the MPEG-4 file to be stored or transmitted by the communication unit 170 over an intranet or the Internet, received by external devices, decoded, and displayed on an HD or SD display device. Basic functionality included in the Moov box may include the dimensions of a video file, or the duration of audio that is multiplexed with the captured video to create a video file.
As illustrated in FIG. 7A, the Moov box 720 may include a plurality of boxes, including a data track box 725 for a video stream and a data track box 727 for an audio stream, which may be encoded in the video encoder 125 and the audio encoder 115 before being multiplexed in the second multiplexer unit 150, as illustrated in FIG. 1. The ISO file illustrated in FIG. 7A may also include an "mdat" box 730 where the actual raw media data of the ISO file may be stored while being transmitted over a network or stored in the photographing apparatus 100. The boxes, also known as atoms, illustrated in FIG. 7A represent quantities of data that may make up an MPEG-4 file to be stored or transmitted.
An MPEG-4 video file may include a number of audio and video streams as described above. Each stream may be stored in a separate track in the file. A track in a file may represent a timed series of media, such as successive video frames. Various tracks may include the atoms, or boxes, illustrated in FIG. 7B.
FIG. 7B illustrates the structure of a Moov box 740, or atom, in the ISO file of the MP4 format. The video file of the MP4 format having such a file system may be a video file format suitable for storing a file. The Moov box may include a movie header "mvhd" which may give basic information about the content of the video file to be stored or transmitted wirelessly, such as date of creation and duration. The track box "trak" stores metadata for each track that may be used later by a user. The file may include additional boxes such as the track header "tkhd", a track reference box "tref", and a media box "mdia" to contain media declarations. The mdia box may include a media header "mdhd", a handler box "hdlr", and media information "minf". The file to be stored or transmitted may further break down in either the audio or video track of the Moov box to include a video media header "vmhd", data information "dinf" with a data reference "dref", and a sample table "stbl" that includes sample timing, location information, and structural data. The sample table box "stbl" is broken down further into a sample description "stsd" box, a time-to-sample "stts" box, a sample-to-chunk "stsc" box, and a chunk offset "stco" box.
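The hierarchical box layout described above can be made concrete with a minimal parser. The sketch below is an illustration, not the apparatus's actual file handling: it walks only the top-level boxes of an ISO base media file, each of which begins with a 4-byte big-endian size followed by a 4-byte type code, and ignores 64-bit sizes and nesting.

```python
# Minimal top-level box walker for an ISO/MP4 file: each box starts
# with a 4-byte big-endian size and a 4-byte type ("ftyp", "moov",
# "mdat", ...). Real files also use 64-bit sizes (size == 1) and
# nested child boxes; this sketch handles neither.
import struct

def list_boxes(data: bytes):
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size < 8:
            break  # malformed; avoid an infinite loop
        boxes.append((box_type.decode("ascii"), size))
        offset += size
    return boxes

# Synthetic two-box file: an empty "moov" and an "mdat" with 4 payload bytes.
sample = (struct.pack(">I4s", 8, b"moov") +
          struct.pack(">I4s", 12, b"mdat") + b"\x00" * 4)
print(list_boxes(sample))  # [('moov', 8), ('mdat', 12)]
```

A full reader would recurse into container boxes such as "moov", "trak", and "mdia" to reach the leaf boxes ("mvhd", "tkhd", "stbl", and so on) described above.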
The files including the combined audio and video streams may be transmitted over an intranet or the Internet using protocols such as the Real-time Transport Protocol (RTP), controlled by the controller 180.
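As one illustration of such a transport, the fixed 12-byte RTP header defined in RFC 3550 can be sketched as follows. This is a hypothetical example, not the apparatus's implementation; the payload type and other field values are invented for the example.

```python
# Sketch of the fixed 12-byte RTP header (RFC 3550) as it might be
# built before sending a media packet over the network. Field values
# (payload type, sequence number, timestamp, SSRC) are illustrative.
import struct

def rtp_header(payload_type: int, seq: int, timestamp: int, ssrc: int) -> bytes:
    version = 2                         # RTP version is always 2
    first_byte = version << 6           # no padding, no extension, no CSRC
    second_byte = payload_type & 0x7F   # marker bit cleared
    return struct.pack(">BBHII", first_byte, second_byte, seq, timestamp, ssrc)

hdr = rtp_header(payload_type=96, seq=1, timestamp=0, ssrc=0x1234)
print(len(hdr), hdr[0] >> 6)  # 12 2
```

The media payload (for example, a fragment of an encoded video frame) would follow the header in each UDP datagram.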
FIGS. 8A to 8C are views illustrating the structure of a video file to be broadcast in real-time according to an exemplary embodiment of the present general inventive concept.
FIG. 8A illustrates the operation of the video encoder 125, the audio encoder 115, a file format MUX (that is, the second MUX 150), and the communication unit 170 when real-time broadcasting starts according to an exemplary embodiment of the present general inventive concept. As illustrated in FIG. 8A, when real-time broadcasting starts, video information data and audio information data are first encoded in the video encoder unit 125 and the audio encoder unit 115 and output to the multiplexer 150. Input data transfer takes place between the encoders and the multiplexer 150. The multiplexer 150 combines the audio and video streams output from the encoders 125 and 115 into a video file, which is then transferred as output data to the communication unit 170. The multiplexer 150 may convert the audio and video streams into a video file including audio and video track information, such as the video track 725 and audio track 727 of the Moov box 720 illustrated in FIG. 7A. Necessary data and timing information is recorded in the headers and other boxes of the audio and video tracks illustrated in FIG. 7B, to be transmitted from the multiplexer 150 to the communication unit 170, which transmits the audio and video files wirelessly external to the photographing apparatus 100.
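The interleaving performed by a multiplexer such as the second MUX 150 can be sketched as a merge of timestamped samples. This is a hedged illustration: the (timestamp, payload) representation, the frame durations, and the tie-breaking rule are assumptions made for the example, not details from the disclosure.

```python
# Illustrative multiplexing: encoded audio and video samples, each
# tagged with a presentation timestamp in milliseconds, are merged
# into a single timestamp-ordered stream before transmission.

def multiplex(video_samples, audio_samples):
    """Merge two lists of (timestamp, payload) pairs into one stream
    ordered by timestamp, tagging each entry with its track name."""
    tagged = ([(ts, "video", p) for ts, p in video_samples] +
              [(ts, "audio", p) for ts, p in audio_samples])
    return sorted(tagged)  # on equal timestamps, "audio" sorts first

video = [(0, "v0"), (40, "v1")]              # 25 fps -> 40 ms per frame
audio = [(0, "a0"), (20, "a1"), (40, "a2")]  # 20 ms audio frames
print([track for _, track, _ in multiplex(video, audio)])
# ['audio', 'video', 'audio', 'audio', 'video']
```

A file-format multiplexer would additionally wrap the merged stream in container structures (such as the track boxes of FIG. 7B) rather than emitting bare samples.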
FIG. 8B illustrates the data structure of a file system suitable for wireless transmission in real-time, and FIG. 8C illustrates the structure of Message_T illustrated in FIG. 8B.
The video file of the second format may be implemented in the structures of FIGS. 8B and 8C. The video file of the second format may employ any format which allows a file to be wirelessly transmitted in real-time.
In this exemplary embodiment of the present general inventive concept, the photographing apparatus may be any device which photographs motion pictures, and may be a camcorder, a video camera, a digital camera, a cellular telephone, an MP3 player, and so on, which may record moving images.
According to various exemplary embodiments of the present general inventive concept, a photographing apparatus which converts a photographed video into two types of formats, stores the video, and transmits the video wirelessly in real-time, and a method of providing a photographed video thereof, are provided. Therefore, a user can store a high quality video photographed or recorded using the photographing apparatus, as well as broadcast the photographed video in real-time.
Although a few embodiments of the present general inventive concept have been illustrated and described, it will be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the general inventive concept, the scope of which is defined in the appended claims and their equivalents.