
Method and device for encoding/decoding images

Info

Publication number
US11765356B2
Authority
US
United States
Prior art keywords
current block
transform
block
intra
residual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US17/828,745
Other versions
US20220295066A1 (en)
Inventor
Hui Yong KIM
Sung Chang LIM
Jin Ho Lee
Jin Soo Choi
Jin Woong Kim
Gwang Hoon Park
Kyung Yong Kim
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electronics and Telecommunications Research Institute ETRI
Kyung Hee University
Original Assignee
Electronics and Telecommunications Research Institute ETRI
Kyung Hee University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electronics and Telecommunications Research Institute ETRI and Kyung Hee University
Priority to US17/828,745
Publication of US20220295066A1
Application granted
Publication of US11765356B2
Status: Active
Anticipated expiration


Abstract

A method and a device for encoding/decoding images are disclosed. The method for encoding images comprises the steps of: deriving a scan type of a residual signal for a current block according to whether or not the current block is a transform skip block; and applying the scan type to the residual signal for the current block, wherein the transform skip block is a block to which transform for the current block is not applied and is specified on the basis of information indicating whether or not transform for the current block is to be applied.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a Continuation Application of U.S. patent application Ser. No. 16/999,252, filed on Aug. 21, 2020, which is a Continuation Application of U.S. patent application Ser. No. 16/413,870 filed on May 16, 2019, which is a Continuation Application of U.S. patent application Ser. No. 15/630,831 filed on Jun. 22, 2017, now U.S. Pat. No. 10,341,661 issued on Jul. 2, 2019, which is a Continuation Application of U.S. patent application Ser. No. 15/255,035 filed on Sep. 1, 2016, now U.S. Pat. No. 9,723,311 issued on Aug. 1, 2017, which is a Continuation Application of U.S. patent application Ser. No. 14/406,438 having a 371(c) date of Dec. 8, 2014, now U.S. Pat. No. 9,497,465 issued on Nov. 15, 2016, which is a U.S. national stage application of International Application No. PCT/KR2013/005616 filed on Jun. 25, 2013, which claims the benefit of Korean Patent Applications Nos. 10-2012-0071446 and 10-2013-0073067, filed on Jun. 29, 2012 and Jun. 25, 2013, respectively, in the Korean Intellectual Property Office, the entire disclosures of which are incorporated herein by reference for all purposes.
TECHNICAL FIELD
The present invention relates to the encoding and decoding of an image and, more particularly, to a method of scanning residual signals.
BACKGROUND ART
Broadcast services with High Definition (HD) resolution (1280×1024 or 1920×1080) have been extended nationwide and globally. Accordingly, many users are accustomed to video having high resolution and high picture quality, and a lot of institutes are giving impetus to the development of the next-generation image device. Furthermore, as there is a growing interest in Ultra High Definition (UHD), having resolution 4 times higher than that of HDTV, along with HDTV, moving image standardization organizations have recognized the need for compression technology for an image having higher resolution and higher picture quality. Furthermore, there is an urgent need for a new standard which can maintain the same picture quality and also have many advantages in terms of frequency band or storage through higher compression efficiency than that of H.264/AVC, which is now used in HDTV, mobile phones, and Blu-ray players.
Today, the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG) are jointly standardizing High Efficiency Video Coding (HEVC), the next-generation video codec, and are aiming to encode an image, including a UHD image, with compression efficiency twice that of H.264/AVC. This can provide an image with a lower frequency band and higher picture quality than a current image, not only for HD and UHD images but also in 3D broadcasting and mobile communication networks.
DISCLOSURE
Technical Problem
The present invention provides a method and apparatus for encoding and decoding an image, which are capable of improving encoding and decoding efficiency.
The present invention provides a method and apparatus for scanning residual signals, which are capable of improving encoding and decoding efficiency.
Technical Solution
In accordance with an aspect of the present invention, there is provided an image decoding method. The method includes deriving a scan type for the residual signals of a current block depending on whether or not the current block is a transform skip block and applying the scan type to the residual signals of the current block, wherein the transform skip block is the current block to which transform has not been applied and is specified based on information indicating whether or not to apply the transform to the current block.
The deriving of the scan type for the residual signals of the current block may include deriving any one of a vertical scan, a horizontal scan, and an up-right scan as the scan type for the residual signals if the current block is a transform skip block.
The deriving of the scan type for the residual signals of the current block may include setting a scan type, derived based on an intra-prediction mode of the current block, as the scan type for the residual signals if the current block is a transform skip block.
A horizontal scan may be set again as the scan type for the residual signals if the scan type derived based on the intra-prediction mode of the current block is a vertical scan.
A vertical scan may be set again as the scan type for the residual signals if the scan type derived based on the intra-prediction mode of the current block is a horizontal scan.
An up-right scan may be set again as the scan type for the residual signals if the scan type derived based on the intra-prediction mode of the current block is not a vertical scan or a horizontal scan.
The deriving of the scan type for the residual signals of the current block may include deriving any one of a vertical scan, a horizontal scan, and an up-right scan as the scan type for the residual signals of the current block if the current block is a transform skip block and a size of the current block is a specific size or lower.
The specific size may be a 4×4 size.
The deriving of the scan type for the residual signals of the current block may include setting a scan type, derived based on an intra-prediction mode of the current block, again if the current block is a transform skip block and a size of the current block is a specific size or lower.
A horizontal scan may be set again as the scan type for the residual signals if the scan type derived based on the intra-prediction mode of the current block is a vertical scan.
A vertical scan may be set again as the scan type for the residual signals if the scan type derived based on the intra-prediction mode of the current block is a horizontal scan.
An up-right scan may be set again as the scan type for the residual signals if the scan type derived based on the intra-prediction mode of the current block is not a vertical scan or a horizontal scan.
The specific size may be a 4×4 size.
The deriving of the scan type for the residual signals of the current block may include deriving the scan type for the residual signals of the current block based on an intra-prediction mode of the current block if the current block is not a transform skip block.
In accordance with another aspect of the present invention, there is provided an image decoding apparatus. The apparatus includes a scan type deriving module for deriving a scan type for residual signals of a current block depending on whether or not the current block is a transform skip block and a scanning module for applying the scan type to the residual signals of the current block, wherein the transform skip block is the current block to which transform has not been applied and is specified based on information indicating whether or not to apply the transform to the current block.
In accordance with yet another aspect of the present invention, there is provided an image encoding method. The method includes deriving a scan type for residual signals of a current block depending on whether or not the current block is a transform skip block and applying the scan type to the residual signals of the current block, wherein the transform skip block is the current block to which transform has not been applied and is specified based on information indicating whether or not to apply the transform to the current block.
In accordance with further yet another aspect of the present invention, there is provided an image encoding apparatus. The apparatus includes a scan type deriving module for deriving a scan type for residual signals of a current block depending on whether or not the current block is a transform skip block and a scanning module for applying the scan type to the residual signals of the current block, wherein the transform skip block is the current block to which transform has not been applied and is specified based on information indicating whether or not to apply the transform to the current block.
Advantageous Effects
Since a transform process is not performed on a block to which a transform skip algorithm has been applied, a block on which an existing transform process has been performed and the transform skip block have different transform coefficient characteristics. Accordingly, encoding and decoding efficiency for residual signals can be improved by providing a method and apparatus for deriving a scan type, which are suitable for the characteristics of a transform skip block, not a transform coefficient scan method applied to a block on which an existing transform process has been performed.
DESCRIPTION OF DRAWINGS
FIG.1 is a block diagram showing the construction of an image encoding apparatus to which an embodiment of the present invention is applied;
FIG.2 is a block diagram showing the construction of an image decoding apparatus to which an embodiment of the present invention is applied;
FIG.3 is a diagram schematically showing the partition structure of an image when encoding the image;
FIG.4 is a diagram showing the forms of a PU that may be included in a CU;
FIG.5 is a diagram showing the forms of a TU that may be included in a CU;
FIG.6 is a diagram showing an example of intra-prediction modes;
FIG.7 is a diagram showing an example of an up-right scan method for transform coefficients;
FIG.8 is a flowchart illustrating an embodiment of a method for determining a scan type according to an intra-prediction mode;
FIG.9 is a flowchart illustrating an example of a method of selecting a frequency transform method for residual signals (or residual image);
FIG.10 is a flowchart illustrating a method of deriving a scan type for residual signals (or transform coefficients) in accordance with an embodiment of the present invention;
FIG.11 is a flowchart illustrating a method of deriving a scan type for residual signals (or transform coefficients) in accordance with another embodiment of the present invention;
FIG.12 is a diagram showing examples of scan types to which the present invention can be applied;
FIG.13 is a flowchart illustrating a method of deriving a scan type for residual signals (or transform coefficients) in accordance with an embodiment of the present invention;
FIG.14 is a flowchart illustrating a method of deriving a scan type for residual signals (or transform coefficients) in accordance with another embodiment of the present invention;
FIG.15 is a diagram showing an example of a difference in the resolution between a luma block and a chroma block;
FIG.16 is a diagram showing another example of a difference in the resolution between a luma block and a chroma block;
FIG.17 is a schematic block diagram of an encoding apparatus in accordance with an embodiment of the present invention; and
FIG.18 is a schematic block diagram of a decoding apparatus in accordance with an embodiment of the present invention.
MODE FOR INVENTION
Hereinafter, some exemplary embodiments of the present invention are described in detail with reference to the accompanying drawings. Furthermore, in describing the embodiments of this specification, a detailed description of the known functions and constitutions will be omitted if it is deemed to make the gist of the present invention unnecessarily vague.
In this specification, when it is said that one element is connected or coupled with the other element, it may mean that the one element may be directly connected or coupled with the other element or a third element may be connected or coupled between the two elements. Furthermore, in this specification, when it is said that a specific element is included, it may mean that elements other than the specific element are not excluded and that additional elements may be included in the embodiments of the present invention or the scope of the technical spirit of the present invention.
Terms, such as the first and the second, may be used to describe various elements, but the elements are not restricted by the terms. The terms are used to only distinguish one element from the other element. For example, a first element may be named a second element without departing from the scope of the present invention. Likewise, a second element may be named a first element.
Furthermore, element units described in the embodiments of the present invention are independently shown to indicate difference and characteristic functions, and it does not mean that each of the element units is formed of a piece of separate hardware or a piece of software. That is, the element units are arranged and included, for convenience of description, and at least two of the element units may form one element unit or one element may be divided into a plurality of element units and the plurality of divided element units may perform functions. An embodiment into which the elements are integrated or embodiments from which some elements are separated are also included in the scope of the present invention, unless they depart from the essence of the present invention.
Furthermore, in the present invention, some elements are not essential elements for performing essential functions, but may be optional elements for improving only performance. The present invention may be implemented using only essential elements for implementing the essence of the present invention other than elements used to improve only performance, and a structure including only essential elements other than optional elements used to improve only performance is included in the scope of the present invention.
FIG.1 is a block diagram showing the construction of an image encoding apparatus to which an embodiment of the present invention is applied.
Referring to FIG. 1, the image encoding apparatus 100 includes a motion estimation module 111, a motion compensation module 112, an intra-prediction module 120, a switch 115, a subtractor 125, a transform module 130, a quantization module 140, an entropy encoding module 150, an inverse quantization module 160, an inverse transform module 170, an adder 175, a filter module 180, and a reference picture buffer 190.
The image encoding apparatus 100 can perform encoding on an input image in intra-mode or inter-mode and output a bit stream. In the case of intra-mode, the switch 115 can switch to intra-mode. In the case of inter-mode, the switch 115 can switch to inter-mode. Intra-prediction means intra-frame prediction, and inter-prediction means inter-frame prediction. The image encoding apparatus 100 can generate a prediction block for the input block of the input image and then encode a difference between the input block and the prediction block. Here, the input image can mean the original picture.
In the case of intra-mode, the intra-prediction module 120 can generate the prediction block by performing spatial prediction using a value of the pixel of an already encoded block neighboring a current block.
In the case of inter-mode, the motion estimation module 111 can obtain a motion vector by searching a reference picture, stored in the reference picture buffer 190, for a region that is best matched with the input block in a motion prediction process. The motion compensation module 112 can generate the prediction block by performing motion compensation using the motion vector and the reference picture stored in the reference picture buffer 190. Here, the motion vector is a two-dimensional (2-D) vector used in inter-prediction, and the motion vector can indicate an offset between a picture to be now encoded/decoded and a reference picture.
The subtractor 125 can generate a residual block based on the difference between the input block and the generated prediction block.
The transform module 130 can perform transform on the residual block and output transform coefficients for the transformed block. Furthermore, the quantization module 140 can output a quantized coefficient by quantizing the received transform coefficient according to a quantization parameter.
The entropy encoding module 150 can perform entropy encoding on a symbol according to a probability distribution based on values calculated by the quantization module 140, an encoding parameter value calculated in an encoding process, etc. and output a bit stream of the entropy-coded symbols. If entropy encoding is applied, the size of a bit stream for a symbol to be encoded can be reduced because the symbol is represented by allocating a small number of bits to a symbol having a high incidence and a large number of bits to a symbol having a low incidence. Accordingly, the compression performance of image encoding can be improved through entropy encoding. The entropy encoding module 150 can use such encoding methods as exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC) for the entropy encoding.
The image encoding apparatus 100 according to the embodiment of FIG. 1 performs inter-prediction encoding, that is, inter-frame prediction encoding, and thus a picture that has been coded needs to be decoded and stored in order to be used as a reference picture. Accordingly, a quantized coefficient is dequantized by the dequantization module 160 and inversely transformed by the inverse transform module 170. The dequantized and inversely transformed coefficient is added to the prediction block through the adder 175, thereby generating a reconstructed block.
The reconstructed block passes through the filter module 180. The filter module 180 can apply one or more of a deblocking filter, a Sample Adaptive Offset (SAO), and an Adaptive Loop Filter (ALF) to the reconstructed block or the reconstructed picture. The filter module 180 may also be called an adaptive in-loop filter. The deblocking filter can remove block distortion generated at the boundary of blocks. The SAO can add a proper offset value to a pixel value in order to compensate for a coding error. The ALF can perform filtering based on a value obtained by comparing a reconstructed picture with the original picture. The reconstructed block that has passed through the filter module 180 can be stored in the reference picture buffer 190.
FIG.2 is a block diagram showing the construction of an image decoding apparatus to which an embodiment of the present invention is applied.
Referring to FIG. 2, the image decoding apparatus 200 includes an entropy decoding module 210, a dequantization module 220, an inverse transform module 230, an intra-prediction module 240, a motion compensation module 250, a filter module 260, and a reference picture buffer 270.
The image decoding apparatus 200 can receive a bit stream outputted from an encoder, perform decoding on the bit stream in intra-mode or inter-mode, and output a reconstructed image, that is, a restored image. In the case of intra-mode, a switch can switch to intra-mode. In the case of inter-mode, the switch can switch to inter-mode.
The image decoding apparatus 200 can obtain a reconstructed residual block from the received bit stream, generate a prediction block, and generate a reconstructed block, that is, a restored block, by adding the reconstructed residual block to the prediction block.
The entropy decoding module 210 can generate symbols, including a symbol having a quantized coefficient form, by performing entropy decoding on the received bit stream according to a probability distribution.
If an entropy decoding method is applied, the size of a bit stream for each symbol can be reduced because the symbol is represented by allocating a small number of bits to a symbol having a high incidence and a large number of bits to a symbol having a low incidence.
The quantized coefficient is dequantized by the dequantization module 220 and is inversely transformed by the inverse transform module 230. As a result of the dequantization/inverse transform of the quantized coefficient, a reconstructed residual block can be generated.
In the case of intra-mode, the intra-prediction module 240 can generate the prediction block by performing spatial prediction using a value of the pixel of an already encoded block neighboring a current block. In the case of inter-mode, the motion compensation module 250 can generate the prediction block by performing motion compensation using a motion vector and a reference picture stored in the reference picture buffer 270.
The residual block and the prediction block are added together by an adder 255. The added block passes through the filter module 260. The filter module 260 can apply at least one of a deblocking filter, an SAO, and an ALF to the reconstructed block or the reconstructed picture. The filter module 260 outputs a reconstructed image, that is, a restored image. The reconstructed image can be stored in the reference picture buffer 270 and can be used for inter-frame prediction.
FIG.3 is a diagram schematically showing the partition structure of an image when encoding the image.
In High Efficiency Video Coding (HEVC), encoding is performed in coding units in order to efficiently partition an image.
Referring to FIG. 3, in HEVC, an image 300 is sequentially partitioned into Largest Coding Units (hereinafter referred to as LCUs), and a partition structure is determined based on the LCUs. The partition structure means a distribution of Coding Units (hereinafter referred to as CUs) for efficiently encoding an image within the LCU 310. This distribution can be determined based on whether or not one CU will be partitioned into four CUs, each reduced by half from the one CU in width and height. Likewise, each partitioned CU can be recursively partitioned into four CUs, each reduced by half from the partitioned CU in width and height.
Here, the partition of a CU can be recursively performed up to a predetermined depth. Information about the depth is information indicative of the size of a CU, and information about the depth of each CU is stored. For example, the depth of an LCU can be 0, and the depth of the Smallest Coding Unit (SCU) can be a predetermined maximum depth. Here, the LCU is a CU having a maximum CU size as described above, and the SCU is a CU having a minimum CU size.
Whenever partition is performed from the LCU 310 by half in width and height, the depth of a CU is increased by 1. A CU on which partition has not been performed has a 2N×2N size for each depth, and a CU on which partition is performed is partitioned from a CU having a 2N×2N size into four CUs each having an N×N size. The size of N is reduced by half whenever the depth is increased by 1.
Referring to FIG. 3, the size of the LCU having a minimum depth of 0 can be 64×64 pixels, and the size of the SCU having a maximum depth of 3 can be 8×8 pixels. Here, the LCU having 64×64 pixels can be represented by a depth of 0, a CU having 32×32 pixels can be represented by a depth of 1, a CU having 16×16 pixels can be represented by a depth of 2, and the SCU having 8×8 pixels can be represented by a depth of 3.
Furthermore, information about whether or not a specific CU will be partitioned can be represented through partition information of 1 bit for each CU. The partition information can be included in all CUs other than the SCU. For example, if a CU is not partitioned, partition information of 0 can be stored. If a CU is partitioned, partition information of 1 can be stored.
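For illustration only, the following Python sketch mirrors the depth-to-size relation and the 1-bit partition information described above, assuming a 64×64 LCU and a maximum depth of 3; the helper names are hypothetical and not part of the HEVC syntax.

# Sketch of the LCU quad-tree partition described above.
# Assumes a 64x64 LCU and a maximum depth of 3 (8x8 SCU); helper names are illustrative.

LCU_SIZE = 64
MAX_DEPTH = 3

def cu_size(depth):
    """Each extra depth level halves the CU width/height (2Nx2N -> four NxN)."""
    return LCU_SIZE >> depth

def partition(x, y, depth, split_flag):
    """Recursively partition a CU; split_flag(x, y, depth) models the 1-bit
    partition information carried by every CU except the SCU."""
    size = cu_size(depth)
    if depth == MAX_DEPTH or not split_flag(x, y, depth):
        return [(x, y, size)]          # leaf CU
    half = size >> 1
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += partition(x + dx, y + dy, depth + 1, split_flag)
    return leaves

# Example: split only the LCU itself, keeping four 32x32 leaves.
print(partition(0, 0, 0, lambda x, y, d: d == 0))
# [(0, 0, 32), (32, 0, 32), (0, 32, 32), (32, 32, 32)]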
Meanwhile, a CU partitioned from the LCU can include a Prediction Unit (PU) (or Prediction Block (PB)), that is, a basic unit for prediction, and a Transform Unit (TU) (or Transform Block (TB)), that is, a basic unit for transform.
FIG.4 is a diagram showing the forms of a PU that may be included in a CU.
A CU that is no longer partitioned, from among CUs partitioned from the LCU, is partitioned into one or more PUs. This behavior itself is also called partition. A Prediction Unit (hereinafter referred to as a PU) is a basic unit on which prediction is performed, and it is encoded in any one of skip mode, inter-mode, and intra-mode. The PU can be partitioned in various forms depending on each mode.
Referring to FIG. 4, in the case of skip mode, a 2N×2N mode 410 having the same size as a CU can be supported without partition within the CU.
In the case of inter-mode, 8 partitioned forms, for example, the 2N×2N mode 410, a 2N×N mode 415, an N×2N mode 420, an N×N mode 425, a 2N×nU mode 430, a 2N×nD mode 435, an nL×2N mode 440, and an nR×2N mode 445, can be supported within a CU.
In the case of intra-mode, the 2N×2N mode 410 and the N×N mode 425 can be supported within a CU.
FIG.5 is a diagram showing the forms of a TU that may be included in a CU.
A Transform Unit (hereinafter referred to as a TU) is a basic unit used for spatial transform and a quantization/dequantization (scaling) process within a CU. A TU can have a rectangular or square form. A CU that is no longer partitioned, from among CUs partitioned from the LCU, can be partitioned into one or more TUs.
Here, the partition structure of the TU can be a quad-tree structure. For example, as shown in FIG. 5, one CU 510 can be partitioned once or more depending on a quad-tree structure, so that the CU 510 is formed of TUs having various sizes.
Meanwhile, in HEVC, as in H.264/AVC, intra-frame prediction (hereinafter called intra-prediction) encoding is performed. Here, prediction encoding is performed using neighboring blocks located near a current block depending on an intra-prediction mode (or prediction directivity) of the current block. In H.264/AVC, encoding is performed using a prediction mode having 9 directivities. In contrast, in HEVC, encoding is performed using a total of 36 prediction modes including 33 directional prediction modes and 3 non-directional prediction modes.
FIG. 6 is a diagram showing an example of intra-prediction modes. Different mode numbers can be assigned to the respective intra-prediction modes.
Referring to FIG. 6, a total of 36 intra-prediction modes are present. The total of 36 intra-prediction modes can include 33 directional modes and 3 non-directional modes, depending on the direction in which the reference pixels used to estimate a pixel value of a current block are located and/or the prediction method.
The 3 non-directional modes include a planar (Intra_Planar) mode, a DC (Intra_DC) mode, and an LM mode (Intra_FromLuma) in which a chroma signal is derived from a restored luma signal. In intra-prediction, all the 3 non-directional modes may be used or some of them may be used. For example, only the planar mode and the DC mode may be used, and the LM mode may not be used.
Encoding for the 36 intra-prediction modes, such as those shown in FIG. 6, can be applied to a luma signal and a chroma signal. For example, in the case of a luma signal, modes other than the LM mode of the 36 intra-prediction modes can be encoded. In the case of a chroma signal, an intra-prediction mode can be encoded using three methods as in Table 1.
Table 1 is an example of an encoding method for an intra-prediction mode of a chroma signal.
TABLE 1
Chroma intra-prediction    Intra-prediction mode for luma signal
coding mode                0     26    10    1     X (0 <= X < 35)    Coding method
0                          34    0     0     0     0 (Planar)         EM
1                          26    34    26    26    26 (Vert.)         EM
2                          10    10    34    10    10 (Hor.)          EM
3                          1     1     1     34    1 (DC)             EM
4                          LM    LM    LM    LM    LM                 LM
5                          0     26    10    1     X                  DM
The three methods of encoding an intra-prediction mode of a chroma signal are described with reference to Table 1. A first method is to use a Derived Mode (DM) in which an intra-prediction mode of a luma signal is applied to an intra-prediction mode of a chroma signal without change. A second method is to use a coding mode (Explicit Mode (EM)) in which an actual intra-prediction mode is signaled. An intra-prediction mode of a chroma signal which is encoded in the EM mode includes a planar mode (Planar), a DC mode (DC), a horizontal mode (Hor.), a vertical mode (Ver.), and the mode at the eighth place after vertical (i.e., Ver+8, the No. 34 mode). A third method is to use an LM mode in which a chroma signal is predicted from a restored luma signal. The most efficient of the three encoding methods can be selected.
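As a rough, non-normative sketch of the Table 1 mapping, the Python below derives a chroma intra-prediction mode from the chroma coding mode and the luma mode; the function and constant names are illustrative, and the collision handling simply follows the table rows above.

# Sketch of the Table 1 mapping: derive the chroma intra-prediction mode from the
# chroma coding mode (0..5) and the luma intra-prediction mode. "LM" marks the
# mode predicted from the reconstructed luma signal; names are illustrative.

EM_CANDIDATES = [0, 26, 10, 1]     # Planar, Vertical, Horizontal, DC

def chroma_intra_pred_mode(chroma_coding_mode, luma_mode):
    if chroma_coding_mode == 4:            # LM mode
        return "LM"
    if chroma_coding_mode == 5:            # DM: reuse the luma mode unchanged
        return luma_mode
    candidate = EM_CANDIDATES[chroma_coding_mode]
    # When the explicit candidate collides with the luma mode, Table 1 substitutes
    # mode 34 so that DM and EM never signal the same prediction twice.
    return 34 if candidate == luma_mode else candidate

assert chroma_intra_pred_mode(1, 26) == 34   # Vertical collides with luma mode 26
assert chroma_intra_pred_mode(2, 10) == 34   # Horizontal collides with luma mode 10
assert chroma_intra_pred_mode(5, 7) == 7     # DM copies the luma mode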
A prediction image for a signal obtained by performing prediction using the above-described intra-prediction mode can have a residual value with respect to the original image. A residual image having the residual value between the prediction image and the original image can be subject to frequency domain transform and quantization and then to entropy coding.
In order to improve entropy coding efficiency, the coefficients of the quantized image having a 2-D form can be arrayed in a 1-D form. When arraying the quantization coefficients, a zigzag scan method is used in an existing image encoding method, such as H.264/AVC, whereas an up-right scan method is basically used in HEVC.
Furthermore, frequency domain transform can include integer transform, Discrete Cosine Transform (DCT), Discrete Sine Transform (DST), and intra-prediction mode-dependent DCT/DST.
FIG.7 is a diagram showing an example of an up-right scan method for transform coefficients.
When encoding quantization coefficients for a block having a specific size, the block can be partitioned into 4×4 size subblocks and encoded.
Referring to FIG. 7, a 16×16 size block can be partitioned into sixteen 4×4 size subblocks and encoded. In a decoding process, whether or not a transform coefficient is present in each subblock can be checked based on flag information parsed from a bit stream. For example, the flag can be significant_coeff_group_flag (sigGrpFlag). If a value of significant_coeff_group_flag is 1, it means that at least one quantized transform coefficient is present in the corresponding 4×4 subblock. In contrast, if a value of significant_coeff_group_flag is 0, it means that no quantized transform coefficient is present in the corresponding 4×4 subblock. An up-right scan type has basically been used as the scan type (or scan order) for the 4×4 subblocks shown in FIG. 7 and as the scan type for significant_coeff_group_flag.
Although an up-right scan method has been illustrated as being applied in FIG. 7, scan methods for quantization coefficients include up-right, horizontal, and vertical scans. For example, the up-right scan method can be basically used in inter-prediction, and the up-right, horizontal, and vertical scan methods can be selectively used in intra-prediction.
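The following Python sketch illustrates, under simplifying assumptions, how a decoder could walk the 4×4 subblocks of a 16×16 block in up-right order and parse coefficients only where significant_coeff_group_flag is 1; the order helper and the parsing callback are hypothetical placeholders.

# Sketch: walk the 4x4 sub-blocks of a 16x16 transform block in up-right (diagonal)
# order and parse coefficients only for sub-blocks whose significant_coeff_group_flag
# is 1.

def diag_subblock_order(blocks_per_side):
    """Up-right diagonal order over a blocks_per_side x blocks_per_side grid of sub-blocks."""
    order = []
    for s in range(2 * blocks_per_side - 1):          # anti-diagonals x + y = s
        for y in range(blocks_per_side - 1, -1, -1):  # bottom-left towards top-right
            x = s - y
            if 0 <= x < blocks_per_side:
                order.append((x, y))
    return order

def decode_subblocks(sig_grp_flags, parse_coeffs):
    """sig_grp_flags[(x, y)] mirrors significant_coeff_group_flag for each 4x4 sub-block."""
    for (x, y) in diag_subblock_order(4):             # 16x16 block -> 4x4 grid of sub-blocks
        if sig_grp_flags.get((x, y), 0):
            parse_coeffs(x, y)                        # at least one non-zero quantized coefficient
        # a flag of 0 means the sub-block holds no quantized coefficient, so nothing is parsed

flags = {(0, 0): 1, (3, 3): 1}
decode_subblocks(flags, lambda x, y: print("parse sub-block", x, y))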
A scan type in intra-prediction can be differently selected depending on an intra-prediction mode, which can be applied to both a luma signal and a chroma signal. Table 2 below shows an example of a method of determining a scan type according to an intra-prediction mode.
TABLE 2
                        log2TrafoSize − 2
IntraPredModeValue      0    1    2    3
0                       0    0    0    0
1                       0    0    0    0
2-5                     0    0    0    0
6-14                    2    2    0    0
15-21                   0    0    0    0
22-30                   1    1    0    0
31-35                   0    0    0    0
In Table 2, IntraPredModeValue means an intra-prediction mode. Here, in the case of a luma signal, IntraPredModeValue corresponds to a value of IntraPredMode. In the case of a chroma signal, IntraPredModeValue corresponds to a value of IntraPredModeC. log2TrafoSize means that the size of a current transform block is indicated as a logarithm. For example, when a value of IntraPredModeValue is 1, it means the DC mode (DC; Intra_DC). When a value of log2TrafoSize − 2 is 1, it means an 8×8 size block.
Furthermore, in Table 2, the numbers 0, 1, and 2 determined by the intra-prediction mode 'IntraPredModeValue' and the current transform block size 'log2TrafoSize' indicate scan types. For example, an up-right scan type can be indicated by 0, a horizontal scan type can be indicated by 1, and a vertical scan type can be indicated by 2.
FIG.8 is a flowchart illustrating an embodiment of a method for determining a scan type according to an intra-prediction mode.
The method of FIG. 8 can be performed in the encoding apparatus of FIG. 1 or the decoding apparatus of FIG. 2. Although the method of FIG. 8 is illustrated as being performed in the encoding apparatus for convenience of description, it can also be equally applied to the decoding apparatus.
In FIG. 8, IntraPredMode means an intra-prediction mode for a luma signal, and IntraPredModeC means an intra-prediction mode for a chroma signal. IntraPredMode(C) may mean an intra-prediction mode for a luma signal or a chroma signal depending on the component of the signal.
Referring to FIG. 8, if a current block is not in an intra-prediction mode at step S800, the encoding apparatus determines that an up-right scan is used as the scan for residual signals at step S860.
If the current block is in an intra-prediction mode and IntraPredModeC for a chroma signal is the LM mode at step S810, the encoding apparatus determines that an up-right scan is used as the scan for residual signals at step S860.
If the current block is in an intra-prediction mode and IntraPredModeC for the chroma signal is not the LM mode at step S810, the encoding apparatus determines the scan type for residual signals depending on IntraPredMode(C) of the current block.
If the mode value of IntraPredMode(C) of the current block is 6 or more and 14 or less at step S820, the encoding apparatus determines that a vertical scan is used as the scan for residual signals at step S840.
If the mode value of IntraPredMode(C) is not 6 or more and 14 or less and the mode value of IntraPredMode(C) of the current block is 22 or more and 30 or less at step S830, the encoding apparatus determines that a horizontal scan is used as the scan for residual signals at step S850.
If not, that is, if the mode value of IntraPredMode(C) is neither 6 or more and 14 or less nor 22 or more and 30 or less, the encoding apparatus determines that an up-right scan is used as the scan for residual signals at step S860.
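The decision flow of FIG. 8 can be summarized by the following Python sketch; it abstracts away the block-size condition of Table 2 (the mode-dependent scans apply to 4×4 and 8×8 blocks) and uses illustrative constant names.

# Sketch of the FIG. 8 decision flow: pick a residual scan from the intra-prediction
# mode (mode ranges as in Table 2 for 4x4/8x8 blocks). Constants are illustrative.

UP_RIGHT, HORIZONTAL, VERTICAL = 0, 1, 2

def scan_for_intra_block(is_intra, intra_pred_mode, is_chroma_lm=False):
    if not is_intra or is_chroma_lm:
        return UP_RIGHT                       # steps S800/S810 -> S860
    if 6 <= intra_pred_mode <= 14:            # directions around horizontal (mode 10)
        return VERTICAL                       # step S840
    if 22 <= intra_pred_mode <= 30:           # directions around vertical (mode 26)
        return HORIZONTAL                     # step S850
    return UP_RIGHT                           # step S860

assert scan_for_intra_block(True, 10) == VERTICAL
assert scan_for_intra_block(True, 26) == HORIZONTAL
assert scan_for_intra_block(True, 0) == UP_RIGHT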
Meanwhile, as described above, a residual value (or residual signal or residual) between the original image and the prediction image is subject to frequency domain transform and quantization and then entropy coding. Here, in order to improve encoding efficiency attributable to the frequency domain transform, integer transform, DCT, DST, and intra-prediction mode-dependent DCT/DST are selectively applied depending on the size of a block.
Furthermore, in order to improve encoding efficiency, a transform skip algorithm can be applied to screen content, such as a document image or a presentation image of PowerPoint. If the transform skip algorithm is applied, a residual value (or residual signal or residual) between the original image and a prediction image is directly quantized and then subject to entropy coding without a frequency transform process. Accordingly, a frequency transform process is not performed on a block to which the transform skip algorithm has been applied.
FIG.9 is a flowchart illustrating an example of a method of selecting a frequency transform method for residual signals (or residual image).
The method of FIG. 9 can be performed in the encoding apparatus of FIG. 1 or the decoding apparatus of FIG. 2. Although the method of FIG. 9 is illustrated as being performed in the encoding apparatus in the embodiment of FIG. 9 for convenience of description, it can also be equally applied to the decoding apparatus.
Referring toFIG.9, if a current block has been encoded in an intra-prediction mode and is not a block of a luma signal at step S900, the encoding apparatus uses integer transform or DCT as a frequency transform method for the residual images of the luma and chroma signals of the current block at step S990.
If the current block has been encoded in an intra-prediction mode and is a block of a luma signal at step S900, the encoding apparatus obtains IntraPredMode for the luma signal of the current block at step S910.
The encoding apparatus checks whether or not the current block is a block of a 4×4 size (iWidth==4) at step S920.
If, as a result of the check, the current block is not a block of a 4×4 size (iWidth==4), the encoding apparatus uses integer transform or DCT as a frequency transform method for the residual images of the luma and chroma signals of the current block at step S990.
If, as a result of the check, the current block is a block of a 4×4 size (iWidth==4), the encoding apparatus checks an intra-prediction mode of the current block.
If, as a result of the check, a mode value of the intra-prediction mode of the current block is 2 or more and 10 or less at step S930, the encoding apparatus uses DST in the horizontal direction and DCT in the vertical direction as the frequency transform method for the luma signal of the current block at step S960. DCT can be used as the frequency transform method for the chroma signal of the current block in both the horizontal and vertical directions.
If, as a result of the check at step S930, the mode value of the intra-prediction mode of the current block is 0 or 11 or more and 25 or less at step S940, the encoding apparatus uses DST in both the horizontal and vertical directions as a frequency transform method for the luma signal of the current block at step S970. DCT can be used as the frequency transform method for the chroma signal of the current block in both the horizontal and vertical directions.
If, as a result of the check at step S940, the mode value of the intra-prediction mode of the current block is 26 or more and 34 or less at step S950, the encoding apparatus uses DCT in a horizontal direction and DST in a vertical direction as a frequency transform method for the luma signal of the current block at step S980. DCT can be used as a frequency transform method for the chroma signal of the current block in both the horizontal and vertical directions.
If, as a result of the check at step S950, the mode value of the intra-prediction mode of the current block is not 26 or more and 34 or less, the encoding apparatus uses DCT in both the horizontal and vertical directions as the frequency transform method for the residual images of the luma and chroma signals of the current block at step S990.
InFIG.9, ‘iWidth’ is an indicator indicative of the size of a transform block, and a value of iWidth according to the size of each transform block can be assigned as follows.
For example, if the size of a transform block is 64×64, a value of iWidth can be 64. If the size of a transform block is 32×32, a value of iWidth can be 32. If the size of a transform block is 16×16, a value of iWidth can be 16. If the size of a transform block is 8×8, a value of iWidth can be 8. If the size of a transform block is 4×4, a value of iWidth can be 4. If the size of a transform block is 2×2, a value of iWidth can be 2.
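As a hedged summary of the FIG. 9 selection, the Python sketch below returns a (horizontal, vertical) transform pair for an intra luma block from its prediction mode; the function name and string labels are illustrative, and chroma, as well as any block that is not 4×4, falls back to DCT as described above.

# Sketch of the FIG. 9 selection: for a 4x4 intra luma block, choose DST or DCT per
# direction from the intra-prediction mode. Return value is (horizontal, vertical).

def luma_4x4_transform(intra_pred_mode, block_width):
    if block_width != 4:
        return ("DCT", "DCT")                 # only 4x4 intra luma uses mode-dependent DST
    if 2 <= intra_pred_mode <= 10:
        return ("DST", "DCT")                 # step S960
    if intra_pred_mode == 0 or 11 <= intra_pred_mode <= 25:
        return ("DST", "DST")                 # step S970
    if 26 <= intra_pred_mode <= 34:
        return ("DCT", "DST")                 # step S980
    return ("DCT", "DCT")                     # remaining case (DC mode 1), step S990

assert luma_4x4_transform(7, 4) == ("DST", "DCT")
assert luma_4x4_transform(0, 4) == ("DST", "DST")
assert luma_4x4_transform(30, 4) == ("DCT", "DST")
assert luma_4x4_transform(1, 4) == ("DCT", "DCT")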
In relation to the contents ofFIG.9, a transformation process for scaled transform coefficients is as follows.
In this case, input is as follows.
    • The width of a current transform block; nW
    • The height of the current transform block; nH
    • An array of transform coefficients having an element dij; (nW×nH) array d
    • Information indicating whether or not the transform skip algorithm has been applied to the current transform block
    • An index for the luma signal and the chroma signal of the current transform block; cIdx
If cIdx is 0, it means a luma signal. If cIdx is 1 or cIdx is 2, it means a chroma signal. Furthermore, if cIdx is 1, it means Cb in a chroma signal. If cIdx is 2, it means Cr in a chroma signal.
    • A quantization parameter; qP
In this case, output is as follows.
    • An array for a residual block obtained by performing inverse transform on the scaled transform coefficients; (nW×nH) array r
If the coding mode 'PredMode' for a current block is the intra-prediction mode, a value of Log2(nW*nH) is 4, and a value of cIdx is 0, the parameters 'horizTrType' and 'vertTrType' are obtained through Table 3 below depending on the intra-prediction mode of the luma signal. If not, the parameters 'horizTrType' and 'vertTrType' are set to 0.
TABLE 3
IntraPredMode    0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17
vertTrType       1  0  0  0  0  0  0  0  0  0  0  1  1  1  1  1  1  1
horizTrType      1  0  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1

IntraPredMode    18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34
vertTrType       1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
horizTrType      1  1  1  1  1  1  1  1  0  0  0  0  0  0  0  0  0
A residual signal for the current block is obtained according to the following sequence.
First, if the transform skip algorithm for the current block has been applied, the following process is performed.
    • 1. If cIdx is 0, shift = 13 − BitDepthY. If not, shift = 13 − BitDepthC.
    • 2. An array rij (i = 0 . . . (nW)−1, j = 0 . . . (nH)−1) for the residual block is set as follows.
If the shift is greater than 0, rij = (dij + (1 << (shift − 1))) >> shift. If not, rij = dij << (−shift).
If the transform skip algorithm for the current block has not been applied, the following process is performed.
An inverse transform process is performed on scaled transform coefficients using values of the parameters ‘horizTrType’ and ‘vertTrType’. First, the size (nW, nH) of the current block, an array ‘(nW×nH) array d’ for the scaled transform coefficients, and the parameter ‘horizTrType’ are received, and an array ‘(nW×nH) array e’ is outputted by performing 1-D inverse transform horizontally.
Next, the array '(nW×nH) array e' is received, and the array '(nW×nH) array g' is derived as in Equation 1.
gij=Clip3(−32768,32767,(eij+64)>>7)  [Equation 1]
Next, the size (nW, nH) of the current block, the array '(nW×nH) array g', and the parameter 'vertTrType' are received, and 1-D inverse transform is performed vertically.
Next, an array '(nW×nH) array r' for the residual block is set as in Equation 2 below depending on cIdx.
rij=(fij+(1<<(shift−1)))>>shift  [Equation 2]
In Equation 2, shift = 20 − BitDepthY when cIdx is 0. If not, shift = 20 − BitDepthC. BitDepth means the number of bits (e.g., 8 bits) of a sample of the current image.
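For the transform skip path described above, the following Python sketch applies the shift-based scaling directly to the coefficient array with no inverse transform; it assumes 8-bit samples and plain nested lists, and the function name is illustrative.

# Sketch of the transform-skip path: scaled coefficients are shifted straight into
# the residual array, with no inverse transform. BitDepth defaults to 8 bits as in
# the text; array handling is simplified to lists of lists.

def transform_skip_residual(d, bit_depth=8):
    """d is the (nW x nH) array of scaled transform coefficients d_ij."""
    shift = 13 - bit_depth                    # 13 - BitDepthY (or BitDepthC for chroma)
    if shift > 0:
        rnd = 1 << (shift - 1)
        return [[(dij + rnd) >> shift for dij in row] for row in d]
    return [[dij << (-shift) for dij in row] for row in d]

# 8-bit samples give shift = 5, so each coefficient is rounded and divided by 32.
print(transform_skip_residual([[64, 31], [-64, 0]]))   # [[2, 1], [-2, 0]]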
Meanwhile, as described above, a frequency transform process is not performed on a block to which a transform skip algorithm has been applied (hereinafter referred to as a transform skip block). Accordingly, a block on which an existing frequency transform process has been performed and a transform skip block have different transform coefficient characteristics. That is, encoding efficiency can be reduced if a transform coefficient scan method applied to a block on which an existing frequency transform process has been performed is applied to a transform skip block. Accordingly, the present invention provides a coefficient scan method which can be applied to a transform skip block.
[Embodiment 1] Method and Apparatus for Unifying a Scan Type for all Transform Skip Blocks
FIG.10 is a flowchart illustrating a method of deriving a scan type for residual signals (or transform coefficients) in accordance with an embodiment of the present invention.
The method of FIG. 10 can be performed in the encoding apparatus of FIG. 1 or the decoding apparatus of FIG. 2. Although the method of FIG. 10 is illustrated as being performed in the encoding apparatus in the embodiment of FIG. 10 for convenience of description, it can also be equally applied to the decoding apparatus.
Referring to FIG. 10, a scan type for the residual signals (or transform coefficients) of a current block can be determined depending on whether or not a transform skip algorithm has been applied to the current block.
If, as a result of the determination at step S1000, it is determined that the current block containing the residual signals (or transform coefficients) is a transform skip block, the encoding apparatus determines a horizontal scan as the scan type for the residual signals of the current block at step S1010.
If, as a result of the determination at step S1000, it is determined that the current block containing the residual signals (or transform coefficients) is not a transform skip block, the encoding apparatus determines the scan type for the residual signals of the current block based on an intra-prediction mode of the current block at step S1020. For example, any one of up-right, horizontal, and vertical scans can be derived as the scan type for the residual signals based on the intra-prediction mode of the current block. In this case, for example, the method of FIG. 8 can be used.
In the embodiment of FIG. 10, a horizontal scan has been illustrated as the scan type for the residual signals of the current block when the current block is a transform skip block. However, this is only an example, and the present invention is not limited to it. For example, if a current block is a transform skip block, an up-right scan or a vertical scan may be determined as the scan type for the residual signals of the current block.
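A minimal sketch of this embodiment, assuming the horizontal scan of FIG. 10 is the unified choice (up-right or vertical would be substituted the same way); the function name is illustrative.

# Sketch of the FIG. 10 embodiment: every transform skip block shares one fixed scan,
# while other blocks keep the intra-mode-derived scan of FIG. 8.

UP_RIGHT, HORIZONTAL, VERTICAL = 0, 1, 2

def derive_scan_type(is_transform_skip, intra_mode_scan):
    """intra_mode_scan is the scan already derived from the intra-prediction mode."""
    if is_transform_skip:
        return HORIZONTAL          # step S1010: one unified scan for every skip block
    return intra_mode_scan         # step S1020: fall back to the FIG. 8 derivation

The point of this design is that the skip-block branch ignores the intra-prediction mode entirely, since the coefficient statistics of a skip block do not follow the transformed-block pattern that motivated the mode-dependent scans.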
FIG.11 is a flowchart illustrating a method of deriving a scan type for residual signals (or transform coefficients) in accordance with another embodiment of the present invention.
The method of FIG. 11 can be performed in the encoding apparatus of FIG. 1 or the decoding apparatus of FIG. 2. Although the method of FIG. 11 is illustrated as being performed in the encoding apparatus in the embodiment of FIG. 11 for convenience of description, it can also be equally applied to the decoding apparatus.
Referring to FIG. 11, the encoding apparatus parses information indicating whether or not residual signals (or transform coefficients) are present in a current block at step S1100.
For example, the information indicating whether or not residual signals are present in a current block can be 'cbf (coded block flag)'. If the residual signals are present in the current block, that is, if one or more transform coefficients other than 0 are included in the current block, a value of cbf can be 1. If the residual signals are not present in the current block, a value of cbf can be 0.
If the information indicating whether or not residual signals are present in a current block indicates that the residual signals are present in the current block, for example, when a value of cbf is 1 at step S1105, the next process is performed. If the information indicates that the residual signals are not present in the current block, for example, when a value of cbf is 0 at step S1105, the process of deriving a scan type shown in FIG. 11 is terminated at step S1110.
If the information indicating whether or not residual signals are present in a current block indicates that the residual signals are present in the current block (e.g., cbf == 1), the encoding apparatus parses information indicating a residual value in the step of quantizing the current block at step S1115. For example, the information indicating a residual value in the step of quantizing the current block can be the parameter 'cu_qp_delta'.
The information (i.e., cu_qp_delta) indicating a residual value in the step of quantizing the current block is not related to the deriving of a scan type for the residual signals of the current block. Accordingly, step S1115 may be omitted, and the next step S1120 may be performed.
The encoding apparatus sets information about the size of the current block at step S1120.
For example, the information about the size of the current block can be set using the parameter 'log2TrafoSize'. The parameter 'log2TrafoSize' can be a value obtained by right-shifting the sum of 'log2TrafoWidth', indicating the width of the current block, and 'log2TrafoHeight', indicating the height of the current block, by 1. Here, the parameter 'log2TrafoSize' means the size of a TU block for a luma signal.
If any one of the width 'log2TrafoWidth' and the height 'log2TrafoHeight' of the current block is 1 (i.e., the width or height of the current block has a size of 2) at step S1125, the encoding apparatus sets both the width 'log2TrafoWidth' and the height 'log2TrafoHeight' of the current block to 2 at step S1130. That is, the width and height of the current block are set to a size of 4.
If the transform skip algorithm is enabled for a current picture including the current block (i.e., transform_skip_enabled_flag == 1), the mode is not a mode in which transform and quantization are bypassed (i.e., !cu_transquant_bypass_flag), the coding mode of the current block has been coded in an intra-prediction mode (i.e., PredMode == MODE_INTRA), and both the width 'log2TrafoWidth' and the height 'log2TrafoHeight' of the current block are 2 at step S1135, the encoding apparatus parses information indicating whether or not to apply transform to the current block, for example, transform_skip_flag, at step S1140.
If the coding mode of the current block has been coded in an intra-prediction mode (i.e., PredMode == MODE_INTRA) and the information indicating whether or not to apply transform to the current block indicates that transform is applied to the current block, for example, if a value of transform_skip_flag is 0 (i.e., if the current block is not a transform skip block), the encoding apparatus can determine a scan type for the residual signals of the current block based on an intra-prediction mode of the current block, as described above with reference to FIG. 8, at steps S1150 to S1160.
For example, if a value of cIdx, that is, an indicator indicating the color component of the current block, is 0 at step S1150, that is, if the current block is a luma signal, a scan type for the residual signals of the current block can be determined based on IntraPredMode for the luma signal of the current block at step S1155. If a value of cIdx of the current block is not 0 at step S1150, that is, if the current block is a chroma signal, a scan type for the residual signals of the current block can be determined based on IntraPredModeC for the chroma signal of the current block at step S1160.
Here, scanIdx can be an index value indicative of a scan type for the residual signals of the current block. For example, if a value of scanIdx is 0, it can indicate an up-right scan. If a value of scanIdx is 1, it can indicate a horizontal scan. If a value of scanIdx is 2, it can indicate a vertical scan. ScanType can be a table indicating a scan type that is determined by the intra-prediction modes of Table 2 and the size of the current block. IntraPredMode means an intra-prediction mode for a luma signal, and IntraPredModeC means an intra-prediction mode for a chroma signal.
If the coding mode of the current block has been coded in an intra-prediction mode (i.e., PredMode == MODE_INTRA) and the information indicating whether or not to apply transform to the current block indicates that transform is not applied to the current block at step S1145, for example, if a value of transform_skip_flag is 1 (i.e., the current block is a transform skip block), the encoding apparatus determines any one of up-right, horizontal, and vertical scans as the scan type for the residual signals of the current block at step S1165. For example, a value of scanIdx can be set to 0, and an up-right scan can be determined as the scan type for the residual signals of the current block.
In the present embodiment, an up-right scan has been determined as the scan type for the residual signals of a current block when the current block is a transform skip block. However, this is only an example, and the present invention is not limited to it. For example, if a current block is a transform skip block, a horizontal scan (scanIdx = 1) or a vertical scan (scanIdx = 2) may be set as the scan type for the residual signals of the current block.
The encoding apparatus parses coefficients for the current block using the determined scan type at step S1170.
FIG.12 is a diagram showing examples of scan types to which the present invention can be applied.
FIG. 12(a) shows an example in which a diagonal scan (or up-right scan) is applied to a 4×4 size block. Residual signals (or transform coefficients) within the 4×4 size block can be scanned in the order shown in FIG. 12(a).
The diagonal scan type of FIG. 12(a) is only an example, and the present invention is not limited thereto. For example, the residual signals may be scanned using a diagonal scan type in which the 4×4 size block of FIG. 12(a) has been rotated 180 degrees to the right.
FIG. 12(b) shows an example in which a vertical scan is applied to a 4×4 size block. Residual signals (or transform coefficients) within the 4×4 size block can be scanned in the order shown in FIG. 12(b).
The vertical scan type of FIG. 12(b) is only an example, and the present invention is not limited thereto. For example, the residual signals may be scanned using a vertical scan type in which the 4×4 size block of FIG. 12(b) has been rotated 180 degrees to the right.
FIG. 12(c) shows an example in which a horizontal scan is applied to a 4×4 size block. Residual signals (or transform coefficients) within the 4×4 size block can be scanned in the order shown in FIG. 12(c).
The horizontal scan type of FIG. 12(c) is only an example, and the present invention is not limited thereto. For example, the residual signals may be scanned using a horizontal scan type in which the 4×4 size block of FIG. 12(c) has been rotated 180 degrees to the right.
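The three scan orders referenced by FIG. 12 can be sketched as follows for a 4×4 block; each function returns (x, y) positions in scan order, and the exact orientation (including the rotated variants mentioned above) is illustrative rather than normative.

# Sketch generating the three 4x4 scan orders of FIG. 12.

def diagonal_scan_4x4():
    order = []
    for s in range(7):                        # anti-diagonals x + y = 0 .. 6
        for y in range(3, -1, -1):
            x = s - y
            if 0 <= x < 4:
                order.append((x, y))
    return order

def horizontal_scan_4x4():
    return [(x, y) for y in range(4) for x in range(4)]    # row by row

def vertical_scan_4x4():
    return [(x, y) for x in range(4) for y in range(4)]    # column by column

print(diagonal_scan_4x4()[:6])   # [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]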
Tables 4 and 5 can be obtained by incorporating the examples of FIGS. 10 and 11 into a coding syntax for a Transform Unit (TU) and residual signals.
Table 4 shows a TU coding syntax in accordance with an embodiment of the present invention.
TABLE 4
Descriptor
transform_unit( x0L, y0L, x0C, y0C, log2TrafoWidth, log2TrafoHeight, trafoDepth, blkIdx )
{
 if( cbf_luma[ x0L ][ y0L ][ trafoDepth ] | | cbf_cb[ x0C ][ y0C ][ trafoDepth ] | |
  cbf_cr[ x0C ][ y0C ][ trafoDepth ] ) {
  if( (diff_cu_qp_delta_depth > 0 ) && !IsCuQpDeltaCoded ) {
   cu_qp_delta    ae(v)
   IsCuQpDeltaCoded = 1
  }
  log2TrafoSize = ( ( log2TrafoWidth + log2TrafoHeight ) >> 1 )
  if( cbf_luma[ x0L ][ y0L ][ trafoDepth ] )
   residual_coding( x0L, y0L, log2TrafoWidth, log2TrafoHeight, 0 )
  if( log2TrafoSize > 2 ) {
   if( cbf_cb [ x0C ][ y0C ][ trafoDepth ] )
    residual_coding( x0C, y0C, log2TrafoWidth − 1, log2TrafoHeight − 1, 1 )
   if( cbf_cr[ x0C ][ y0C ][ trafoDepth ] )
    residual_coding( x0C, y0C, log2TrafoWidth − 1, log2TrafoHeight − 1, 2 )
  } else if( blkIdx = = 3 ) {
   if( cbf_cb[ x0C ][ y0C ][ trafoDepth ] )
    residual_coding( x0C, y0C, log2TrafoWidth, log2TrafoHeight, 1 )
   if( cbf_cr[ x0C ][ y0C ][ trafoDepth ] )
    residual_coding( x0C, y0C, log2TrafoWidth, log2TrafoHeight, 2 )
  }
 }
}
Referring to Table 4, transform_unit indicates a bit stream for the coefficients of one TU block. Here, whether or not to parse coding information (i.e., residual coding) about residual signals for the TU block is determined based on information (cbf_luma) indicating whether or not the residual signals are present in a luma signal and information (cbf_cb, cbf_cr) indicating whether or not the residual signals are present in a chroma signal.
Table 5 shows a residual signal coding syntax in accordance with an embodiment of the present invention.
TABLE 5
Descriptor
residual_coding( x0, y0, log2TrafoWidth, log2TrafoHeight, cIdx ) {
 if( log2TrafoWidth = = 1 | | log2TrafoHeight = = 1 ) {
  log2TrafoWidth = 2
  log2TrafoHeight = 2
 }
 if( transform_skip_enabled_flag && !cu_transquant_bypass_flag &&
  (PredMode = = MODE_INTRA) &&
  ( log2TrafoWidth = = 2) && (log2TrafoHeight = = 2) )
   transform_skip_flag[ x0 ][ y0 ][ cIdx ]    ae(v)
  if( PredMode = = MODE_INTRA && !
transform_skip_flag[ x0 ][ y0 ][ cIdx ] ) {
   if( cIdx = = 0 )
    scanIdx = ScanType[ log2TrafoSize − 2 ][ IntraPredMode ]
   else
    scanIdx = ScanType[ log2TrafoSize − 2 ][ IntraPredModeC ]
  } else
   scanIdx = 0
...
Referring to Table 5, residual_coding means a bit stream for the coefficients of one TU block. Here, the one TU block can be a luma signal or a chroma signal.
log2TrafoWidth refers to the width of a current block, and log2TrafoHeight refers to the height of the current block. log2TrafoSize is the value obtained by right-shifting the sum of log2TrafoWidth and log2TrafoHeight by 1 and indicates the TU block size for a luma signal.
PredMode refers to a coding mode for a current block. PredMode is intra in the case of intra-frame coding and is inter in the case of inter-frame coding.
scanIdx can be an index indicating a scan type for the luma signal of a current TU block. For example, when a value of scanIdx is 0, it can indicate an up-right scan. When a value of scanIdx is 1, it can indicate a horizontal scan. When a value of scanIdx is 2, it can indicate a vertical scan.
ScanType can be a table indicating a scan type that is determined by an intra-prediction mode of Table 2 and the size of a current block. Here, “ScanType=DIAG” or “Up-right” is one example.
IntraPredMode refers to an intra-prediction mode for a luma signal, and IntraPredModeC refers to an intra-prediction mode for a chroma signal.
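The scanIdx derivation that Table 5 expresses in syntax form can be written as a small C sketch. The array ScanType[][] stands in for the Table 2 lookup (indexed by a block-size class and an intra-prediction mode); its contents are not reproduced here and are assumed to be provided elsewhere.

    #define NUM_INTRA_MODES 35   /* assumption: number of intra-prediction modes */

    /* Assumed Table 2 lookup: [log2TrafoSize - 2][intra mode] -> 0, 1 or 2. */
    extern const unsigned char ScanType[4][NUM_INTRA_MODES];

    int derive_scan_idx(int pred_mode_is_intra, int transform_skip_flag,
                        int log2_trafo_size, int c_idx,
                        int intra_pred_mode, int intra_pred_mode_c)
    {
        if (pred_mode_is_intra && !transform_skip_flag) {
            int mode = (c_idx == 0) ? intra_pred_mode : intra_pred_mode_c;
            return ScanType[log2_trafo_size - 2][mode];   /* up-right, horizontal or vertical */
        }
        return 0;                                         /* 0 = up-right scan */
    }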
In the embodiments of FIGS.10 and 11, a method of unifying the scan type for all transform skip blocks has been described. In other words, the same scan type has been applied to every transform skip block. The following embodiment of the present invention describes a method of re-setting the scan type in the case of a transform skip block.
[Embodiment 2] Method and Apparatus for Deriving a Scan Type for a Transform Skip Block
FIG.13 is a flowchart illustrating a method of deriving a scan type for residual signals (or transform coefficients) in accordance with an embodiment of the present invention.
The method of FIG.13 can be performed in the encoding apparatus of FIG.1 or the decoding apparatus of FIG.2. Although the method of FIG.13 is illustrated as being performed in the encoding apparatus in the embodiment of FIG.13 for convenience of description, the method of FIG.13 can also be equally applied to the decoding apparatus.
Referring to FIG.13, the encoding apparatus determines a scan type for the residual signals of a current block based on an intra-prediction mode of the current block at step S1300.
For example, any one of up-right, horizontal, and vertical scans can be derived as a scan type for the residual signals of the current block based on an intra-prediction mode of the current block. In this case, for example, the method ofFIG.8 can be used.
If the current block (i.e., its residual signals or transform coefficients) is a transform skip block at step S1310, the encoding apparatus re-sets the scan type for the residual signals of the current block at steps S1330 to S1370. If the current block is not a transform skip block at step S1310, the process of deriving a scan type shown in FIG.13 is terminated at step S1320. In that case, the scan type determined at step S1300 is used as the scan type for the residual signals of the current block.
If the residual signals of the current block correspond to a transform skip block and the scan type determined based on the intra-prediction mode of the current block is a vertical scan at step S1330, the encoding apparatus re-sets the scan type for the residual signals of the current block to a horizontal scan at step S1350.
If the residual signals of the current block correspond to a transform skip block and the scan type determined based on the intra-prediction mode of the current block is a horizontal scan at step S1340, the encoding apparatus re-sets the scan type for the residual signals of the current block to a vertical scan at step S1360.
If the residual signals of the current block correspond to a transform skip block and the scan type determined based on the intra-prediction mode of the current block is neither a vertical nor a horizontal scan at step S1340, the encoding apparatus re-sets the scan type for the residual signals of the current block to an up-right scan at step S1370.
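The re-setting performed at steps S1330 to S1370 amounts to a simple swap, which the following C sketch makes explicit (the same rule appears later in Table 7 as the ternary expression scanIdx = (scanIdx == 1) ? 2 : (scanIdx == 2) ? 1 : 0). The scanIdx values follow the convention used above: 0 = up-right, 1 = horizontal, 2 = vertical.

    /* Embodiment 2 (sketch): for a transform skip block, swap vertical and
     * horizontal scans and fall back to the up-right scan otherwise. */
    int remap_scan_for_transform_skip(int transform_skip_flag, int scan_idx)
    {
        if (!transform_skip_flag)
            return scan_idx;              /* keep the mode-derived scan type */
        if (scan_idx == 2) return 1;      /* vertical   -> horizontal        */
        if (scan_idx == 1) return 2;      /* horizontal -> vertical          */
        return 0;                         /* anything else -> up-right       */
    }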
The method of re-setting the scan type depending on whether or not the residual signals of a current block correspond to a transform skip block in the embodiment of FIG.13 can be applied in various ways. For example, the scan type of a luma signal which is derived using the embodiment of FIG.13 can be equally applied to a chroma signal. That is, the scan type of the luma signal becomes the same as that of the chroma signal. In contrast, the embodiment of FIG.13 can be applied to each of a luma signal and a chroma signal separately. In another embodiment, a scan type for a current block may be determined based on the scan types of neighboring blocks. In yet another embodiment, in the case of a transform skip block, a scan type other than the existing scan types (e.g., vertical, horizontal, and up-right) may be used.
FIG.14 is a flowchart illustrating a method of deriving a scan type for residual signals (or transform coefficients) in accordance with another embodiment of the present invention.
The method of FIG.14 can be performed in the encoding apparatus of FIG.1 or the decoding apparatus of FIG.2. Although the method of FIG.14 is illustrated as being performed in the encoding apparatus in the embodiment of FIG.14 for convenience of description, the method of FIG.14 can also be equally applied to the decoding apparatus.
Referring to FIG.14, the encoding apparatus parses information indicating whether or not residual signals (or transform coefficients) are present in a current block at step S1400.
For example, the information indicating whether or not residual signals are present in a current block can be ‘cbf’. If residual signals are present in the current block, that is, if one or more transform coefficients other than 0 are included in the current block, a value of ‘cbf’ can be 1. If a residual signal is not present in the current block, a value of ‘cbf’ can be 0.
If the information indicating whether or not residual signals are present in the current block indicates that the residual signals are present in the current block, for example, when a value of cbf is 1 at step S1405, a next process is performed. If the information indicating whether or not residual signals are present in the current block indicates that a residual signal is not present in the current block, for example, when a value of cbf is 0 at step S1405, a next process for the current block is terminated at step S1410.
If the information indicating whether or not residual signals are present in the current block indicates that the residual signals are present in the current block, for example, when a value of cbf is 1, the encoding apparatus parses information indicating a residual value for the quantization step of the current block at step S1415. For example, this information can be the parameter ‘cu_qp_delta’.
The information (i.e., cu_qp_delta) indicating the residual value for the quantization step of the current block is not related to the derivation of a scan type for the residual signals of the current block. Accordingly, step S1415 may be omitted, and the next step S1420 may be performed.
The encoding apparatus sets information about the size of the current block at step S1420.
For example, the information about the size of the current block can be set using the parameter ‘log2TrafoSize’. The parameter ‘log2TrafoSize’ can be the value obtained by right-shifting the sum of ‘log2TrafoWidth’, indicating the width of the current block, and ‘log2TrafoHeight’, indicating the height of the current block, by 1. Here, the parameter ‘log2TrafoSize’ means the size of a TU block for a luma signal.
If either log2TrafoWidth or log2TrafoHeight of the current block is 1 (i.e., the width or height of the current block is 2) at step S1425, the encoding apparatus sets both log2TrafoWidth and log2TrafoHeight of the current block to 2 at step S1430. That is, the width and height of the current block are set to 4.
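A small C sketch of steps S1420 to S1430 is given below. It assumes, as in Tables 4 and 5, that log2TrafoSize is computed from the signalled width and height exponents before 2-sample dimensions are promoted to 4.

    /* Size handling for residual coding (sketch).
     * Example: an 8x4 block has log2TrafoWidth = 3 and log2TrafoHeight = 2,
     * so log2TrafoSize = (3 + 2) >> 1 = 2. */
    void set_block_size_info(int *log2_trafo_width, int *log2_trafo_height,
                             int *log2_trafo_size)
    {
        *log2_trafo_size = (*log2_trafo_width + *log2_trafo_height) >> 1;  /* S1420 */

        if (*log2_trafo_width == 1 || *log2_trafo_height == 1) {           /* S1425 */
            *log2_trafo_width  = 2;                                        /* S1430 */
            *log2_trafo_height = 2;                                        /* width and height become 4 */
        }
    }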
If a transform skip algorithm is generally enabled for the current picture including the current block (i.e., transform_skip_enabled_flag==1), the current block is not coded in a mode in which transform and quantization are bypassed (i.e., !cu_transquant_bypass_flag), the coding mode of the current block is an intra-prediction mode (i.e., PredMode==MODE_INTRA), and both log2TrafoWidth and log2TrafoHeight of the current block are 2 at step S1435, the encoding apparatus parses information indicating whether or not to apply transform to the current block, for example, transform_skip_flag, at step S1440.
If the coding mode of the current block has been coded in an intra-prediction mode (i.e., PredMode==MODE_INTRA) at step S1445, the encoding apparatus can determine a scan type for the residual signals of the current block based on an intra-prediction mode of the current block as described above with reference to FIG.8 at steps S1450 to S1460.
For example, if a value of cIdx, that is, an indicator indicating the color component of the current block, is 0 at step S1450, that is, if the current block is a luma signal, the encoding apparatus can determine a scan type for the residual signals of the current block based on IntraPredMode for the luma signal of the current block at step S1455. If a value of cIdx of the current block is not 0 at step S1450, that is, if the current block is a chroma signal, the encoding apparatus can determine a scan type for the residual signals of the current block based on IntraPredModeC for the chroma signal of the current block at step S1460.
Here, scanIdx can be an index value indicating a scan type for the residual signals of the current block. For example, if a value of scanIdx is 0, it can indicate an up-right scan. If a value of scanIdx is 1, it can indicate a horizontal scan. If a value of scanIdx is 2, it can indicate a vertical scan. ScanType can be a table indicating a scan type determined by an intra-prediction mode of Table 2 and the size of the current block. IntraPredMode refers to an intra-prediction mode for a luma signal, and IntraPredModeC refers to an intra-prediction mode for a chroma signal.
If the coding mode of the current block has not been coded in an intra-prediction mode at step S1445, the encoding apparatus determines any one of up-right, horizontal, and vertical scans as the scan type for the residual signals of the current block at step S1465. For example, a value of scanIdx can be set to 0, and an up-right scan can be determined as the scan type for the residual signals of the current block.
The encoding apparatus re-sets the determined scan type depending on whether or not the current block is a transform skip block at step S1470.
For example, the determined scan type can be re-set using the method of FIG.13. If the current block is a transform skip block (i.e., the parsed transform_skip_flag is 1), the encoding apparatus can re-set the scan type to a horizontal scan if the determined scan type is a vertical scan and re-set it to a vertical scan if the determined scan type is a horizontal scan.
The encoding apparatus parses the coefficients of the current block using the re-set scan type at step S1475.
Tables 6 and 7 can be obtained by incorporating the examples of FIGS.13 and 14 into a coding syntax for a Transform Unit (TU) and residual signals.
Table 6 shows a TU coding syntax in accordance with an embodiment of the present invention.
TABLE 6
Descriptor
transform_unit( x0L, y0L, x0C, y0C, log2TrafoWidth, log2TrafoHeight, trafoDepth, blkIdx ) {
 if( cbf_luma[ x0L ][ y0L ][ trafoDepth ] | | cbf_cb[ x0C ][ y0C ][ trafoDepth ] | |
  cbf_cr[ x0C ][ y0C ][ trafoDepth ] ) {
  if( (diff_cu_qp_delta_depth > 0 ) && !IsCuQpDeltaCoded ) {
    cu_qp_delta    ae(v)
   IsCuQpDeltaCoded = 1
  }
  log2TrafoSize = ( ( log2TrafoWidth + log2TrafoHeight ) >> 1 )
  if( PredMode = = MODE_INTRA ) {
   scanIdx = ScanType[ log2TrafoSize − 2 ][ IntraPredMode ]
   scanIdxC = ScanType[ log2TrafoSize − 2 ][ IntraPredModeC ]
  } else {
   scanIdx = 0
   scanIdxC = 0
  }
  if( cbf_luma[ x0L ][ y0L ][ trafoDepth ] )
   residual_coding( x0L, y0L, log2TrafoWidth, log2TrafoHeight, scanIdx, 0 )
  if( log2TrafoSize > 2 ) {
   if( cbf_cb[ x0C ][ y0C ][ trafoDepth ] )
    residual_coding( x0C, y0C, log2TrafoWidth − 1, log2TrafoHeight − 1, scanIdxC, 1 )
   if( cbf_cr[ x0C ][ y0C ][ trafoDepth ] )
    residual_coding( x0C, y0C, log2TrafoWidth − 1, log2TrafoHeight − 1, scanIdxC, 2 )
  } else if( blkIdx = = 3 ) {
   if( cbf_cb[ x0C ][ y0C ][ trafoDepth ] )
    residual_coding( x0C, y0C, log2TrafoWidth, log2TrafoHeight, scanIdxC, 1 )
   if( cbf_cr[ x0C ][ y0C ][ trafoDepth ] )
    residual_coding( x0C, y0C, log2TrafoWidth, log2TrafoHeight, scanIdxC, 2 )
  }
 }
}
Referring to Table 6, transform_unit indicates a bit stream for the coefficients of one TU block. Here, whether or not to parse coding information (residual coding) about residual signals for the TU block is determined based on information (cbf_luma) indicating whether or not the residual signals are present in a luma signal and information (cbf_cb, cbf_cr) indicating whether or not the residual signals are present in a chroma signal.
Table 7 shows a residual signal coding syntax in accordance with an embodiment of the present invention.
TABLE 7
Descriptor
residual_coding( x0, y0, log2TrafoWidth, log2TrafoHeight, scanIdx, cIdx ) {
 if( log2TrafoWidth = = 1 | | log2TrafoHeight = = 1 ) {
  log2TrafoWidth = 2
  log2TrafoHeight = 2
 }
 if( transform_skip_enabled_flag && !cu_transquant_bypass_flag &&
  (PredMode = = MODE_INTRA) &&
  ( log2TrafoWidth = = 2) && (log2TrafoHeight = = 2) )
   transform_skip_flag[ x0 ][ y0 ][ cIdx ]    ae(v)
  if( transform_skip_flag[ x0 ][ y0 ][ cIdx ] ) {
   scanIdx = (scanIdx = = 1) ? 2 : (scanIdx = = 2) ? 1 : 0
...
Referring to Table 7, residual_coding means a bit stream for the coefficients of one TU block. Here, the one TU block can be a luma signal or a chroma signal.
log2TrafoWidth refers to the width of a current block, and log2TrafoHeight refers to the height of the current block. log2TrafoSize is the value obtained by right-shifting the sum of log2TrafoWidth and log2TrafoHeight by 1 and refers to the size of a TU block for a luma signal.
PredMode refers to a coding mode for the current block. PredMode is intra in the case of intra-frame coding and is inter in the case of inter-frame coding.
scanIdx can be an index indicative of a scan type for the luma signal of a current TU block. For example, if a value of scanIdx is 0, it can indicate an up-right scan. If a value of scanIdx is 1, it can indicate a horizontal scan. If a value of scanIdx is 2, it can indicate a vertical scan.
ScanType can be a table indicating a scan type that is determined by an intra-prediction mode of Table 2 and the size of a current block. Here, “ScanType=DIAG” or “Up-right” is only one example.
IntraPredMode refers to an intra-prediction mode for a luma signal, and IntraPredModeC refers to an intra-prediction mode for a chroma signal.
Meanwhile, the above-described embodiments can have different ranges of application depending on the size of a block, the depth of a CU, or the depth of a TU. A parameter (e.g., information about the size or depth of a block) that determines the range of application may be set by an encoder and a decoder so that the parameter has a predetermined value or may be set to have a predetermined value according to a profile or level. When an encoder writes a parameter value into a bit stream, a decoder may obtain the value from the bit stream and use the value.
If the range of application differs depending on the depth of a CU, the following three methods can be applied to the above-described embodiments as illustrated in Table 8: method A is applied only to depths equal to or greater than a specific depth, method B only to depths equal to or less than a specific depth, and method C only to the specific depth.
Table 8 shows an example of methods of determining a range in which the methods of the present invention are applied depending on the depth of a CU (or TU). In Table 8, ‘O’ means that a corresponding method is applied to a corresponding depth of a CU (or TU), and ‘X’ means that a corresponding method is not applied to a corresponding depth of a CU (or TU).
TABLE 8
Depth of CU (or TU) indicating range of application    Method A    Method B    Method C
0                                                      X           O           X
1                                                      X           O           X
2                                                      O           O           O
3                                                      O           X           X
4                                                      O           X           X
Referring to Table 8, if the depth of a CU (or TU) is 2, methods A, B, and C can all be applied to the embodiments of the present invention.
If the embodiments of the present invention are not to be applied at any depth of a CU (or TU), this may be indicated using a specific indicator (e.g., a flag) or may be represented by signaling, as the depth value indicating the range of application, a value that is 1 greater than the maximum depth of a CU.
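One way to read the range-of-application rule of Table 8 is the following C sketch. The method names A, B, and C follow the text above; signalled_depth is the depth value written to or parsed from the bit stream, and a value one larger than the maximum CU depth is treated as "not applied at any depth", as described above. The exact signalling is an assumption of this sketch.

    enum RangeMethod { METHOD_A, METHOD_B, METHOD_C };

    /* Returns nonzero when the proposed scan derivation applies at cu_depth. */
    int applies_at_depth(enum RangeMethod method, int signalled_depth,
                         int cu_depth, int max_cu_depth)
    {
        if (signalled_depth > max_cu_depth)      /* "max depth + 1" means not applied */
            return 0;
        switch (method) {
        case METHOD_A: return cu_depth >= signalled_depth;   /* specific depth or higher */
        case METHOD_B: return cu_depth <= signalled_depth;   /* specific depth or lower  */
        case METHOD_C: return cu_depth == signalled_depth;   /* only the specific depth  */
        }
        return 0;
    }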
Furthermore, a method of determining a range in which the methods of the present invention are applied depending on the depth of a CU (or TU) can be applied to each of the embodiment 1 (FIGS.10 and 11) of the present invention and the embodiment 2 (FIGS.13 and 14) of the present invention, or to a combination of the embodiments 1 and 2.
Furthermore, a method of determining a range in which the methods of the present invention are applied depending on the depth of a CU (or TU) can be applied to a case where a luma signal and a chroma signal have different resolutions. A method of determining a range in which a frequency transform method (or scan type) is applied when a luma signal and a chroma signal have different resolutions is described below with reference to FIGS.15 and 16.
FIG.15 is a diagram showing an example of a difference in the resolution between a luma block and a chroma block.
Referring to FIG.15, assuming that a chroma signal has ¼ the size of a luma signal (e.g., the luma signal has a 416×240 size and the chroma signal has a 208×120 size), a luma block 1510 having an 8×8 size corresponds to a chroma block 1520 having a 4×4 size.
In this case, the 8×8 size luma block 1510 can include four luma blocks each having a 4×4 size, and each of the 4×4 size luma blocks can have its own intra-prediction mode. In contrast, the 4×4 size chroma block 1520 may not be partitioned into 2×2 size chroma blocks. The 4×4 size chroma block 1520 can have one intra-prediction mode.
Here, if the 4×4 size chroma block 1520 has been coded in an LM mode ‘Intra_FromLuma’ or in a DM mode (i.e., a mode in which an intra-prediction mode of a luma signal is used as the intra-prediction mode of a chroma signal without change), any one of the intra-prediction modes of the four 4×4 size luma blocks can be used as the intra-prediction mode of the 8×8 size luma block 1510 for deriving a frequency transform method (or scan type) for the residual signals of the 4×4 size chroma block 1520.
In order to selectively apply a frequency transform method (or scan type) to the residual signals of a chroma signal, one of the following methods 1 to 4 can be used to derive the intra-prediction mode.
    • 1. An intra-prediction mode of a block placed at the left top of a luma signal block can be used.
    • 2. An intra-prediction mode of a block placed at the left top, left bottom, or right bottom of a luma signal block can be used.
    • 3. An average or median (middle) value of the intra-prediction modes of the four luma signal blocks can be used.
    • 4. An average or median (middle) value of the intra-prediction modes of the four luma signal blocks of a current block and of the chroma signal blocks of blocks neighboring the current block can be used.
In addition to methods 1 to 4, an intra-prediction mode for a chroma signal can be derived in various other ways; a sketch of one possible reading of methods 1 to 3 follows.
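The following C sketch shows one possible reading of methods 1 to 3 above for picking a single luma intra-prediction mode when one chroma block covers four 4×4 luma blocks. The array ordering (top-left, top-right, bottom-left, bottom-right), the choice of the bottom-right block for method 2, and the use of the lower median for the "average or middle value" are assumptions of this illustration, not requirements of the text.

    #include <stdlib.h>

    static int compare_int(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    /* luma_mode[4]: intra modes of the four co-located 4x4 luma blocks,
     * ordered top-left, top-right, bottom-left, bottom-right (assumption). */
    int pick_luma_mode_for_chroma(const int luma_mode[4], int method)
    {
        int sorted[4];

        switch (method) {
        case 1:                              /* method 1: top-left luma block         */
            return luma_mode[0];
        case 2:                              /* method 2: e.g. the bottom-right block */
            return luma_mode[3];
        default:                             /* method 3: median of the four modes    */
            for (int i = 0; i < 4; i++) sorted[i] = luma_mode[i];
            qsort(sorted, 4, sizeof(int), compare_int);
            return sorted[1];                /* lower median                           */
        }
    }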
FIG.16 is a diagram showing another example of a difference in the resolution between a luma block and a chroma block.
Referring to FIG.16, a luma block 1610 having a 16×16 size can have one intra-prediction mode. In contrast, a chroma block 1620 having an 8×8 size can be partitioned into four chroma blocks each having a 4×4 size. Each of the 4×4 size chroma blocks can have an intra-prediction mode.
If the 8×8 size chroma block 1620 has been coded in an LM mode ‘Intra_FromLuma’ or the 8×8 size chroma block 1620 has been coded in a DM mode (i.e., a mode in which an intra-prediction mode of a luma signal is used as an intra-prediction mode of a chroma signal without change), an intra-prediction mode of the 16×16 size luma block 1610 can be used to derive a frequency transform method (or scan type) for the residual signals of the 8×8 size chroma block 1620. In another embodiment, an intra-prediction mode can be derived from blocks (i.e., luma blocks or chroma blocks) neighboring a current block in order to derive a frequency transform method (or scan type) for the residual signals of the 8×8 size chroma block 1620.
The frequency transform method or the scan type can be applied differently to a chroma block depending on the size of the corresponding luma block, applied differently to a luma signal image and a chroma signal image, or applied differently depending on whether a horizontal scan or a vertical scan is used.
Table 9 is an example schematically showing a combination of methods for determining a range of application depending on a block size, a chroma signal and a luma signal, and vertical and horizontal scans.
TABLE 9
LUMA BLOCK SIZE               CHROMA BLOCK SIZE                   LUMA APPLIED  CHROMA APPLIED  HOR. APPLIED  VER. APPLIED  METHODS
4 (4×4, 4×2, 2×4)             2 (2×2)                             O or X        O or X          O or X        O or X        A 1, 2, . . .
4 (4×4, 4×2, 2×4)             4 (4×4, 4×2, 2×4)                   O or X        O or X          O or X        O or X        B 1, 2, . . .
4 (4×4, 4×2, 2×4)             8 (8×8, 8×4, 4×8, 2×8, etc.)        O or X        O or X          O or X        O or X        C 1, 2, . . .
4 (4×4, 4×2, 2×4)             16 (16×16, 16×8, 4×16, 2×16, etc.)  O or X        O or X          O or X        O or X        D 1, 2, . . .
4 (4×4, 4×2, 2×4)             32 (32×32)                          O or X        O or X          O or X        O or X        E 1, 2, . . .
8 (8×8, 8×4, 2×8, etc.)       2 (2×2)                             O or X        O or X          O or X        O or X        F 1, 2, . . .
8 (8×8, 8×4, 2×8, etc.)       4 (4×4, 4×2, 2×4)                   O or X        O or X          O or X        O or X        G 1, 2, . . .
8 (8×8, 8×4, 2×8, etc.)       8 (8×8, 8×4, 4×8, 2×8, etc.)        O or X        O or X          O or X        O or X        H 1, 2, . . .
8 (8×8, 8×4, 2×8, etc.)       16 (16×16, 16×8, 4×16, 2×16, etc.)  O or X        O or X          O or X        O or X        I 1, 2, . . .
8 (8×8, 8×4, 2×8, etc.)       32 (32×32)                          O or X        O or X          O or X        O or X        J 1, 2, . . .
16 (16×16, 8×16, 4×16, etc.)  2 (2×2)                             O or X        O or X          O or X        O or X        K 1, 2, . . .
16 (16×16, 8×16, 4×16, etc.)  4 (4×4, 4×2, 2×4)                   O or X        O or X          O or X        O or X        L 1, 2, . . .
16 (16×16, 8×16, 4×16, etc.)  8 (8×8, 8×4, 4×8, 2×8, etc.)        O or X        O or X          O or X        O or X        M 1, 2, . . .
16 (16×16, 8×16, 4×16, etc.)  16 (16×16, 16×8, 4×16, 2×16, etc.)  O or X        O or X          O or X        O or X        N 1, 2, . . .
16 (16×16, 8×16, 4×16, etc.)  32 (32×32)                          O or X        O or X          O or X        O or X        O 1, 2, . . .
In the case of the method ‘G 1’ of the methods listed in Table 9, if the size of the luma block is 8 (8×8, 8×4, 2×8, etc.) and the size of the chroma block is 4 (4×4, 4×2, 2×4), the embodiment 1 (FIGS.10 and 11) of the present invention or the embodiment 2 (FIGS.13 and 14) of the present invention can be applied to a luma signal, a chroma signal, a horizontal signal, and a vertical signal.
In the case of the method ‘M 1’ of the methods listed in Table 9, if the size of the luma block is 16 (16×16, 8×16, 2×16, etc.) and the size of the chroma block is 4 (4×4, 4×2, 2×4), the embodiment 1 (FIGS.10 and 11) of the present invention or the embodiment 2 (FIGS.13 and 14) of the present invention can be applied to a luma signal, a chroma signal, and a horizontal signal, but may not be applied to a vertical signal.
FIG.17 is a schematic block diagram of an encoding apparatus in accordance with an embodiment of the present invention.
Referring to FIG.17, the encoding apparatus 1700 includes a scan type deriving module 1710 and a scanning module 1720.
The scan type deriving module 1710 derives a scan type for the residual signals of a current block depending on whether or not the current block is a transform skip block.
Here, the transform skip block is a block in which transform has not been applied to a current block and can be specified by information, for example, transform_skip_flag indicating whether or not to apply transform to the current block.
A detailed method of deriving a scan type for the residual signals of a current block depending on whether or not the current block is a transform skip block has been described in detail in connection with the embodiments of this specification.
The scanning module 1720 applies the scan type, derived by the scan type deriving module 1710, to the residual signals of the current block. For example, the residual signals of the current block can be scanned as in the scan types shown in FIG.12.
FIG.18 is a schematic block diagram of a decoding apparatus in accordance with an embodiment of the present invention.
Referring to FIG.18, the decoding apparatus 1800 includes a scan type deriving module 1810 and a scanning module 1820.
The scan type deriving module 1810 derives a scan type for the residual signals of a current block depending on whether or not the current block is a transform skip block.
Here, the transform skip block is a block in which transform has not been applied to a current block and can be specified by information, for example, transform_skip_flag indicating whether or not to apply transform to the current block.
A detailed method of deriving a scan type for the residual signals of a current block depending on whether or not the current block is a transform skip block has been described in detail in connection with the embodiments of this specification.
The scanning module 1820 applies the scan type, derived by the scan type deriving module 1810, to the residual signals of the current block. For example, the residual signals of the current block can be scanned as in the scan types shown in FIG.12.
In the above-described embodiments, although the methods have been described based on the flowcharts as a series of steps or blocks, the present invention is not limited to the sequence of the steps, and some of the steps may be performed in a different order from other steps or simultaneously with other steps. Furthermore, those skilled in the art will understand that the steps shown in the flowcharts are not exclusive, that additional steps may be included, and that one or more steps in the flowcharts may be deleted without affecting the scope of the present invention.
The above description is only an example of the technical spirit of the present invention, and those skilled in the art may change and modify the present invention in various ways without departing from the intrinsic characteristic of the present invention. Accordingly, the disclosed embodiments should not be construed as limiting the technical spirit of the present invention, but should be construed as illustrating the technical spirit of the present invention. The scope of the technical spirit of the present invention is not restricted by the embodiments, and the scope of the present invention should be interpreted based on the appended claims. Accordingly, the present invention should be construed as covering all modifications or variations induced from the meaning and scope of the appended claims and their equivalents.

Claims (3)

The invention claimed is:
1. A video decoding method comprising:
obtaining first information indicating whether a residual signal is present in a current block, the current block including one or more transform coefficient levels other than 0 when the first information indicates that the residual signal is present in the current block;
when the first information indicates that the residual signal is present in the current block, obtaining the residual signal of the current block;
obtaining second information indicating whether an inverse-transform is performed on the residual signal of the current block;
determining a transform type of the current block when the second information indicates that the inverse-transform is performed on the current block;
obtaining residual samples of the current block by performing the inverse-transform on the residual signal of the current block based on the determined transform type;
obtaining prediction samples of the current block based on an intra prediction mode of the current block;
reconstructing the current block based on the residual samples and the prediction samples; and
applying a filter to the reconstructed samples of the current block,
wherein the filter includes at least one of a deblocking filter or a SAO (Sample Adaptive Offset) filter,
wherein the transform type is determined to be a DCT (Discrete Cosine Transform) or a DST (Discrete Sine Transform),
wherein the transform type is determined independently of the intra prediction mode of the current block, and
wherein the transform type is determined based on a size of the current block.
2. A video encoding method comprising:
obtaining first information indicating whether a residual signal is present in a current block, the current block including one or more transform coefficient levels other than 0 when the first information indicates that the residual signal is present in the current block;
when the first information indicates that the residual signal is present in the current block, obtaining the residual signal of the current block;
obtaining second information indicating whether an inverse-transform is performed on the residual signal of the current block;
determining a transform type of the current block when the second information indicates that the inverse-transform is performed on the current block;
obtaining residual samples of the current block by performing the inverse-transform on the residual signal of the current block based on the determined transform type;
obtaining prediction samples of the current block based on an intra prediction mode of the current block;
reconstructing the current block based on the residual samples and the prediction samples; and
applying a filter to the reconstructed samples of the current block,
wherein the filter includes at least one of a deblocking filter or a SAO (Sample Adaptive Offset) filter,
wherein the transform type is determined to be a DCT (Discrete Cosine Transform) or a DST (Discrete Sine Transform),
wherein the transform type is determined independently of the intra prediction mode of the current block, and
wherein the transform type is determined based on a size of the current block.
3. A non-transitory recording medium storing a bitstream formed by a method of encoding a video, the method comprising:
obtaining first information indicating whether a residual signal is present in a current block, the current block including one or more transform coefficient levels other than 0 when the first information indicates that the residual signal is present in the current block;
when the first information indicates that the residual signal is present in the current block, obtaining the residual signal of the current block;
obtaining second information indicating whether an inverse-transform is performed on the residual signal of the current block;
determining a transform type of the current block when the second information indicates that the inverse-transform is performed on the current block;
obtaining residual samples of the current block by performing the inverse-transform on the residual signal of the current block based on the determined transform type;
obtaining prediction samples of the current block based on an intra prediction mode of the current block;
reconstructing the current block based on the residual samples and the prediction samples; and
applying a filter to the reconstructed samples of the current block,
wherein the filter includes at least one of a deblocking filter or a SAO (Sample Adaptive Offset) filter,
wherein the transform type is determined to be a DCT (Discrete Cosine Transform) or a DST (Discrete Sine Transform),
wherein the transform type is determined independently of the intra prediction mode of the current block, and
wherein the transform type is determined based on a size of the current block.
Families Citing this family (120)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
KR101503269B1 (en)*2010-04-052015-03-17삼성전자주식회사Method and apparatus for determining intra prediction mode of image coding unit, and method and apparatus for determining intra predion mode of image decoding unit
EP2869557B1 (en)2012-06-292023-08-09Electronics And Telecommunications Research InstituteMethod and device for encoding/decoding images
CN105684442B (en)*2013-07-232020-02-21英迪股份有限公司 Methods for encoding/decoding images
CN114189679B (en)*2016-04-262025-09-09杜比国际公司Image decoding method, image encoding method, and method of transmitting bit stream
CN109479138B (en)*2016-07-132023-11-03韩国电子通信研究院 Image encoding/decoding method and device
CN115052142B (en)*2016-08-012025-05-23韩国电子通信研究院Image encoding/decoding method
CN117201809A (en)*2016-08-012023-12-08韩国电子通信研究院Image encoding/decoding method and apparatus, and recording medium storing bit stream
US10368107B2 (en)*2016-08-152019-07-30Qualcomm IncorporatedIntra video coding using a decoupled tree structure
WO2018044089A1 (en)2016-08-312018-03-08주식회사 케이티Method and device for processing video signal
KR102424419B1 (en)*2016-08-312022-07-22주식회사 케이티Method and apparatus for processing a video signal
CN118660159A (en)*2016-10-042024-09-17Lx 半导体科技有限公司 Image encoding/decoding method and image data transmission method
KR102791557B1 (en)*2016-10-102025-04-08삼성전자주식회사method and apparatus for encoding/decoding image
US11394974B2 (en)2017-01-032022-07-19Lg Electronics Inc.Image processing method, and device for same
KR102450506B1 (en)*2017-07-312022-10-05한국전자통신연구원Method and apparatus for encoding/decoding image and recording medium for storing bitstream
CN115037932B (en)2017-10-182025-08-26英迪股份有限公司 Image encoding/decoding method and apparatus, and recording medium storing bit stream
WO2019112394A1 (en)2017-12-072019-06-13한국전자통신연구원Method and apparatus for encoding and decoding using selective information sharing between channels
KR20190088020A (en)*2018-01-172019-07-25인텔렉추얼디스커버리 주식회사Video coding method and apparatus using multiple transform
WO2019212230A1 (en)*2018-05-032019-11-07엘지전자 주식회사Method and apparatus for decoding image by using transform according to block size in image coding system
KR20210008105A (en)2018-05-302021-01-20디지털인사이트 주식회사 Video encoding/decoding method and apparatus
KR20250111383A (en)2018-06-062025-07-22엘지전자 주식회사Method for Performing Transform Index Coding Based on Intra Prediction Mode and Apparatus Therefor
WO2020054739A1 (en)*2018-09-112020-03-19パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカThree-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device
FR3086485A1 (en)*2018-09-212020-03-27Orange METHODS AND DEVICES FOR ENCODING AND DECODING A DATA FLOW REPRESENTATIVE OF AT LEAST ONE IMAGE.
US11303904B2 (en)*2018-09-282022-04-12Qualcomm IncorporatedRectangular block transform scaling
CN118301348A (en)2018-10-052024-07-05韩国电子通信研究院Image encoding/decoding method and apparatus, and recording medium storing bit stream
CA3113861A1 (en)*2018-10-122020-04-16Guangdong Oppo Mobile Telecommunications Corp., Ltd.Method for encoding/decoding image signal and device for same
IL314659A (en)2018-11-082024-09-01Guangdong Oppo Mobile Telecommunications Corp LtdImage signal encoding/decoding method and apparatus therefor
KR102462910B1 (en)2018-11-122022-11-04한국전자통신연구원Method and apparatus of quantization for weights of batch normalization layer
CA3085391C (en)2018-11-232023-10-24Lg Electronics Inc.Method for decoding image on basis of cclm prediction in image coding system, and device therefor
CN113170147B (en)2018-12-042023-04-18华为技术有限公司Video encoder, video decoder, and corresponding methods
CN118741099A (en)2018-12-122024-10-01数码士有限公司 Video signal processing method and device using current picture reference
CN111587575B (en)*2018-12-172022-09-16Lg电子株式会社 Method and device for determining scan order of transform coefficients based on high-frequency zeroing
KR102770887B1 (en)*2018-12-212025-02-24삼성전자주식회사Video encoding method and device, and video decoding method and device
WO2020139016A2 (en)*2018-12-272020-07-02엘지전자 주식회사Video decoding method and apparatus using residual rearrangement in video coding system
CN113330742B (en)*2018-12-282024-11-05韩国电子通信研究院 Video encoding/decoding method, device and recording medium storing bit stream
KR20230174294A (en)*2019-01-122023-12-27(주)휴맥스Video signal processing method and device using multiple transform kernels
MX2021008449A (en)2019-01-152021-11-03Rosedale Dynamics LlcImage coding method and device using transform skip flag.
WO2020159982A1 (en)*2019-01-282020-08-06Op Solutions, LlcShape adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions
CA3128424C (en)2019-02-012024-04-16Beijing Bytedance Network Technology Co., Ltd.Interactions between in-loop reshaping and inter coding tools
WO2020156535A1 (en)*2019-02-012020-08-06Beijing Bytedance Network Technology Co., Ltd.Interactions between in-loop reshaping and block differential pulse coded modulation
KR20210114386A (en)2019-02-082021-09-23주식회사 윌러스표준기술연구소 Video signal processing method and apparatus using quadratic transformation
CN118870034A (en)*2019-02-192024-10-29数码士有限公司 Video signal processing method and device based on intra-frame prediction
CN119277077A (en)2019-02-282025-01-07三星电子株式会社 A method and device for video encoding and decoding for predicting chrominance components
WO2020175965A1 (en)2019-02-282020-09-03주식회사 윌러스표준기술연구소Intra prediction-based video signal processing method and device
BR112020024331A2 (en)2019-03-032021-02-23Huawei Technologies Co., Ltd. decoder, and corresponding methods that are used for transform process
CN113545070B (en)*2019-03-082023-10-03北京字节跳动网络技术有限公司 Signaling notification of shaping information in video processing
WO2020184977A1 (en)*2019-03-112020-09-17한국전자통신연구원Intra block copy-based encoding/decoding method and device, and bitstream storage medium
KR20250060945A (en)*2019-03-122025-05-07텐센트 아메리카 엘엘씨Method and apparatus for color transform in vvc
CN119254953A (en)*2019-03-132025-01-03现代自动车株式会社 Image decoding device using differential coding
CN113574889B (en)2019-03-142024-01-12北京字节跳动网络技术有限公司 Signaling and syntax of loop shaping information
WO2020192614A1 (en)2019-03-232020-10-01Beijing Bytedance Network Technology Co., Ltd.Restrictions on adaptive-loop filtering parameter sets
US20220217405A1 (en)*2019-04-032022-07-07Lg Electronics Inc.Video or image coding for modifying reconstructed picture
CA3135966A1 (en)2019-04-122020-10-15Beijing Bytedance Network Technology Co., Ltd.Most probable mode list construction for matrix-based intra prediction
WO2020211807A1 (en)2019-04-162020-10-22Beijing Bytedance Network Technology Co., Ltd.Matrix derivation in intra coding mode
AU2020259567B2 (en)2019-04-182025-02-20Beijing Bytedance Network Technology Co., Ltd.Parameter derivation in cross component mode
CN113728636B (en)2019-04-232022-11-04北京字节跳动网络技术有限公司 Selective Use of Secondary Transforms in Codec Video
EP3942811A4 (en)2019-04-242022-06-15ByteDance Inc. RESTRICTIONS ON A QUANTIZED RESIDUAL DIFFERENTIAL PULSE CODE MODULATION REPRESENTATION OF ENCODED VIDEO ENCODER
CN113728631B (en)*2019-04-272024-04-02北京字节跳动网络技术有限公司 Intra-subblock segmentation and multi-transform selection
CN113728647B (en)2019-05-012023-09-05北京字节跳动网络技术有限公司 Context Coding with Matrix-Based Intra Prediction
KR102707777B1 (en)2019-05-012024-09-20바이트댄스 아이엔씨 Intra-coded video using quantized residual differential pulse code modulation coding
CN115955561A (en)2019-05-022023-04-11北京字节跳动网络技术有限公司 Intra Video Codec Using Multiple Reference Filters
KR20220002918A (en)2019-05-022022-01-07바이트댄스 아이엔씨 Signaling in Transform Skip Mode
EP4329309A3 (en)2019-05-102024-03-27Beijing Bytedance Network Technology Co., Ltd.Selection of secondary transform matrices for video processing
WO2020228718A1 (en)*2019-05-132020-11-19Beijing Bytedance Network Technology Co., Ltd.Interaction between transform skip mode and other coding tools
JP7265040B2 (en)2019-05-132023-04-25北京字節跳動網絡技術有限公司 Block dimension settings for conversion skip mode
WO2020228761A1 (en)*2019-05-142020-11-19Beijing Bytedance Network Technology Co., Ltd.Filter selection for intra video coding
CN113841402B (en)2019-05-192024-03-26字节跳动有限公司Transform design of large blocks in video coding and decoding
WO2020235959A1 (en)*2019-05-222020-11-26엘지전자 주식회사Method and device for decoding image by using bdpcm in image coding system
CN117354528A (en)*2019-05-222024-01-05北京字节跳动网络技术有限公司Using transform skip mode based on sub-blocks
WO2020239017A1 (en)2019-05-312020-12-03Beijing Bytedance Network Technology Co., Ltd.One-step downsampling process in matrix-based intra prediction
CN117768652A (en)2019-06-052024-03-26北京字节跳动网络技术有限公司Video processing method, apparatus, medium, and method of storing bit stream
WO2020244662A1 (en)2019-06-062020-12-10Beijing Bytedance Network Technology Co., Ltd.Simplified transform coding tools
CN113940076B (en)*2019-06-062025-01-03北京字节跳动网络技术有限公司 Apply implicit transformation selection
CN113994666B (en)2019-06-062025-01-03北京字节跳动网络技术有限公司 Implicitly selecting transform candidates
WO2020244656A1 (en)2019-06-072020-12-10Beijing Bytedance Network Technology Co., Ltd.Conditional signaling of reduced secondary transform in video bitstreams
JP7612615B2 (en)*2019-06-202025-01-14インターディジタル・シーイー・パテント・ホールディングス・ソシエテ・パ・アクシオンス・シンプリフィエ Lossless modes for versatile video coding
EP3989558A4 (en)*2019-06-242023-07-12Lg Electronics Inc. PICTURE ENCODING/DECODING METHOD AND APPARATUS WITH MAXIMUM TRANSFORMATION SIZE LIMITATION FOR A CHROMA COMPONENT ENCODING BLOCK AND BITSTREAM TRANSMISSION METHODS
EP4542995A3 (en)*2019-06-242025-07-09LG Electronics Inc.Image encoding/decoding method and apparatus using maximum transform size setting for chroma block, and method for transmitting bitstream
KR102831318B1 (en)2019-06-242025-07-07엘지전자 주식회사 Method for encoding/decoding video using maximum size limit of chroma transform block, device and method for transmitting bitstream
WO2021006697A1 (en)*2019-07-102021-01-14엘지전자 주식회사Image decoding method for residual coding and apparatus therefor
CN117640934A (en)*2019-07-122024-03-01Lg电子株式会社Image encoding method, image decoding method, and transmission method
EP3994887A4 (en)2019-08-032022-09-28Beijing Bytedance Network Technology Co., Ltd. SELECTION OF MATRICES FOR REDUCED SECONDARY TRANSFORMATION IN VIDEO CODING
CN118921457A (en)*2019-08-142024-11-08北京字节跳动网络技术有限公司Weighting factors for prediction sample filtering in intra mode
WO2021027925A1 (en)*2019-08-142021-02-18Beijing Bytedance Network Technology Co., Ltd.Position-dependent intra prediction sample filtering
CN114223208B (en)2019-08-172023-12-29北京字节跳动网络技术有限公司Context modeling for side information of reduced secondary transforms in video
AU2020334638B2 (en)2019-08-222023-11-09Lg Electronics Inc.Intra prediction device and method
EP4598023A2 (en)*2019-08-272025-08-06Hyundai Motor CompanyVideo encoding and decoding using differential coding
CN120321390A (en)*2019-08-292025-07-15Lg 电子株式会社 Device and method for image coding based on filtering
CN119743597A (en)*2019-09-092025-04-01北京字节跳动网络技术有限公司 Recursive partitioning of video codec blocks
CN118118685A (en)*2019-09-192024-05-31寰发股份有限公司Video encoding and decoding method and device
JP7323712B2 (en)*2019-09-212023-08-08北京字節跳動網絡技術有限公司 Precision transform and quantization for image and video coding
CN117579829A (en)2019-09-212024-02-20Lg电子株式会社 Image encoding/decoding devices and devices for sending data
WO2021058277A1 (en)*2019-09-232021-04-01Interdigital Vc Holdings France, SasVideo encoding and decoding using block area based quantization matrices
AU2020354148B2 (en)*2019-09-252024-02-15Lg Electronics Inc.Image encoding/decoding method and apparatus for signaling residual coding method used for encoding block to which BDPCM is applied, and method for transmitting bitstream
KR20220050966A (en)*2019-09-252022-04-25엘지전자 주식회사 Transformation-based video coding method and apparatus
KR102776241B1 (en)*2019-09-252025-03-04엘지전자 주식회사 Image encoding/decoding method, device and bitstream transmitting method for determining a segmentation mode based on a color format
US12022082B2 (en)2019-09-272024-06-25Sk Telecom Co., Ltd.Method for reconstructing residual blocks of chroma blocks, and video decoding apparatus
BR122023020425A2 (en)2019-10-042024-02-27Lg Electronics Inc DECODING/CODING APPARATUS FOR DECODING/ENCODING IMAGES AND APPARATUS FOR TRANSMITTING DATA TO AN IMAGE
EP4017010A4 (en)*2019-10-042022-11-16LG Electronics Inc. TRANSFORMATION-BASED IMAGE CODING METHOD AND APPARATUS THEREOF
JP7418561B2 (en)2019-10-042024-01-19エルジー エレクトロニクス インコーポレイティド Video coding method and device based on conversion
KR20220050203A (en)*2019-10-052022-04-22엘지전자 주식회사 Video or video coding based on advanced grammar elements related to transform skip and palette coding
WO2021066610A1 (en)*2019-10-052021-04-08엘지전자 주식회사Image or video coding on basis of transform skip- and palette coding-related data
MX2022004015A (en)2019-10-052022-05-02Lg Electronics IncImage or video coding based on signaling of transform skip- and palette coding-related information.
WO2021071287A1 (en)*2019-10-082021-04-15엘지전자 주식회사Transform-based image coding method and device for same
US12022094B2 (en)*2019-10-082024-06-25Lg Electronics Inc.Transform-based image coding method and device for same
MX2022004211A (en)*2019-10-082022-05-03Lg Electronics Inc IMAGE CODING METHOD BASED ON TRANSFORMATION, AND DEVICE FOR THE SAME.
CN119544988A (en)2019-10-282025-02-28Lg电子株式会社 Image encoding/decoding method and method for transmitting bit stream
EP4042689A4 (en)*2019-10-282023-06-07Beijing Bytedance Network Technology Co., Ltd.Syntax signaling and parsing based on colour component
CN114762340B (en)*2019-11-012024-09-10Lg电子株式会社Method and apparatus for converting compiled image
EP4042696A4 (en)*2019-11-012023-07-19Beijing Bytedance Network Technology Co., Ltd. DERIVATION OF A LINEAR PARAMETER IN A CROSS-COMPONENT VIDEO CODING
US11949873B2 (en)*2019-11-112024-04-02Lg Electronics Inc.Image coding method based on transform, and device therefor
CN114930845B (en)*2019-11-112025-02-25Lg电子株式会社 Transformation-based image coding method and device
CA3162583C (en)*2019-11-222024-02-13Lg Electronics Inc.Image encoding/decoding method and device using lossless color transform, and method for transmitting bitstream
US12368854B2 (en)*2019-12-292025-07-22Lg Electronics Inc.Transform-based image coding method and device for same
CN118945380A (en)2020-02-252024-11-12Lg电子株式会社 Image encoding/decoding equipment and equipment for sending data
WO2021190593A1 (en)2020-03-252021-09-30Beijing Bytedance Network Technology Co., Ltd.Coded video processing using enhanced secondary transform
US11405615B2 (en)*2020-03-262022-08-02Tencent America LLCJoint transform coding of multiple color components
CN119996710A (en)*2020-03-272025-05-13三星电子株式会社 Image decoding method and device
CN113473129B (en)*2020-03-302022-12-23杭州海康威视数字技术股份有限公司Encoding and decoding method and device
CN114556943B (en)*2020-04-032024-08-23Oppo广东移动通信有限公司Transformation method, encoder, decoder, and storage medium
US11770788B1 (en)*2022-06-032023-09-26Bloxtel Inc.Systems and methods for deployment of a decentralized electronic subscriber identity module

Citations (41)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US20030156648A1 (en) 2001-12-17 2003-08-21 Microsoft Corporation Sub-block transform coding of prediction residuals
KR20040027047A (en) 2002-09-27 2004-04-01 Samsung Electronics Co., Ltd. Encoding/decoding apparatus and method for image using predictive scanning
KR100744435B1 (en) 2006-02-21 2007-08-01 C&S Technology Co., Ltd. How to Omit DCT and Quantization for High-Speed Video Compression
US20080310512A1 (en) 2007-06-15 2008-12-18 Qualcomm Incorporated Separable directional transforms
JP2009027541A (en) 2007-07-20 2009-02-05 NTT Docomo, Inc. Image coding apparatus, method and program, and image decoding apparatus, method and program
US20090097548A1 (en) 2007-10-15 2009-04-16 Qualcomm Incorporated Enhancement layer coding for scalable video coding
US20100061454A1 (en) 2008-08-12 2010-03-11 LG Electronics Inc. Method of processing a video signal
US20100086029A1 (en) 2008-10-03 2010-04-08 Qualcomm Incorporated Video coding with large macroblocks
US20110243232A1 (en) * 2010-04-05 2011-10-06 Samsung Electronics Co., Ltd. Method and apparatus for encoding video by using dynamic-range transformation, and method and apparatus for decoding video by using dynamic-range transformation
CA2857849A1 (en) 2010-04-23 2011-10-27 Soo Mi Oh Apparatus and method for encoding a moving picture
CN102265618A (en) 2008-12-25 2011-11-30 Sharp Corporation Image decoding device and image encoding device
US20110317757A1 (en) 2010-06-25 2011-12-29 Qualcomm Incorporated Intra prediction mode signaling for finer spatial prediction directions
US20120008683A1 (en) 2010-07-09 2012-01-12 Qualcomm Incorporated Signaling selected directional transform for video coding
KR20120012383A (en) 2010-07-31 2012-02-09 Soo Mi Oh Predictive block generator
TW201216254A (en) 2010-09-13 2012-04-16 Qualcomm Inc. Coding and decoding a transient frame
CN102447896A (en) 2010-09-30 2012-05-09 Huawei Technologies Co., Ltd. Method, device and system for processing image residual block
TW201223286A (en) 2010-07-28 2012-06-01 Qualcomm Inc. Coding motion vectors in video coding
KR20120062307A (en) 2010-12-06 2012-06-14 SK Telecom Co., Ltd. Video encoding/decoding method and apparatus for noise component in spatial domain
TW201225678A (en) 2010-09-09 2012-06-16 Qualcomm Inc. Efficient coding of video parameters for weighted motion compensated prediction in video coding
WO2012087713A1 (en) 2010-12-22 2012-06-28 Qualcomm Incorporated Mode dependent scanning of coefficients of a block of video data
US20120170649A1 (en) 2010-12-29 2012-07-05 Qualcomm Incorporated Video coding using mapped transforms and scanning modes
US20120201300A1 (en) 2009-09-14 2012-08-09 SK Telecom Co., Ltd. Encoding/decoding method and device for high-resolution moving images
US20120301040A1 (en) 2010-02-02 2012-11-29 Alex Chungku Yie Image encoding/decoding method for rate-distortion optimization and apparatus for performing same
US20130003859A1 (en) 2011-06-30 2013-01-03 Qualcomm Incorporated Transition between run and level coding modes
US20130003824A1 (en) 2011-07-01 2013-01-03 Qualcomm Incorporated Applying non-square transforms to video data
US20130034153A1 (en) 2010-04-16 2013-02-07 SK Telecom Co., Ltd. Video encoding/decoding apparatus and method
US20130083857A1 (en) 2011-06-29 2013-04-04 Qualcomm Incorporated Multiple zone scanning order for video coding
US20130114730A1 (en) 2011-11-07 2013-05-09 Qualcomm Incorporated Coding significant coefficient information in transform skip mode
US20130114695A1 (en) 2011-11-07 2013-05-09 Qualcomm Incorporated Signaling quantization matrices for video coding
US8483285B2 (en) 2008-10-03 2013-07-09 Qualcomm Incorporated Video coding using transforms bigger than 4×4 and 8×8
US20130188740A1 (en) * 2010-09-13 2013-07-25 Industry-Academic Cooperation Foundation Hanbat National University Method and apparatus for entropy encoding/decoding
US20140098859A1 (en) 2011-04-01 2014-04-10 LG Electronics Inc. Entropy decoding method, and decoding apparatus using same
US9014262B2 (en) 2011-11-04 2015-04-21 Infobridge Pte. Ltd. Method of generating reconstructed block
US9100648B2 (en) 2009-06-07 2015-08-04 LG Electronics Inc. Method and apparatus for decoding a video signal
US20150249840A1 (en) 2009-06-07 2015-09-03 LG Electronics Inc. Method and apparatus for decoding a video signal
US20150326883A1 (en) 2012-09-28 2015-11-12 Canon Kabushiki Kaisha Method, apparatus and system for encoding and decoding the transform units of a coding unit
US9374582B2 (en) 2011-11-04 2016-06-21 Infobridge Pte. Ltd. Apparatus of decoding video data
US20160219290A1 (en) 2015-01-26 2016-07-28 Qualcomm Incorporated Enhanced multiple transforms for prediction residual
US9497465B2 (en) 2012-06-29 2016-11-15 Electronics and Telecommunications Research Institute Method and device for encoding/decoding images
US20170180731A1 (en) 2011-11-04 2017-06-22 Infobridge Pte. Ltd. Apparatus of encoding an image
US20200186806A1 (en) 2008-08-04 2020-06-11 Dolby Laboratories Licensing Corporation Predictive motion vector coding

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP2857849B2 (en) 1995-04-19 1999-02-17 Nissei Plastic Industrial Co., Ltd. Input device of injection molding machine
JP4153410B2 (en) * 2003-12-01 2008-09-24 Nippon Telegraph and Telephone Corporation Hierarchical encoding method and apparatus, hierarchical decoding method and apparatus, hierarchical encoding program and recording medium recording the program, hierarchical decoding program and recording medium recording the program
KR100813963B1 (en) * 2005-09-16 2008-03-14 Industry-Academic Cooperation Foundation, Sejong University Lossless encoding and decoding method for video
CN101292537B (en) * 2005-11-08 2010-10-20 Matsushita Electric Industrial Co., Ltd. Moving picture coding method, moving picture decoding method, and apparatuses of the same
KR100772873B1 (en) * 2006-01-12 2007-11-02 Samsung Electronics Co., Ltd. Video encoding method, video decoding method, video encoder, and video decoder, which use smoothing prediction
KR100927733B1 (en) * 2006-09-20 2009-11-18 Electronics and Telecommunications Research Institute An apparatus and method for encoding/decoding selectively using a transformer according to correlation of residual coefficients
KR101619972B1 (en) * 2008-10-02 2016-05-11 Electronics and Telecommunications Research Institute Apparatus and method for coding/decoding image selectively using discrete cosine/sine transform
KR101543301B1 (en) * 2008-10-15 2015-08-10 SK Telecom Co., Ltd. Video encoding/decoding apparatus and Hybrid Block Motion Compensation/Overlapped Block Motion Compensation method and apparatus
KR20110091000A (en) * 2008-11-07 2011-08-10 Mitsubishi Electric Corporation Picture coding device and picture decoding device
PT3567853T (en) * 2009-03-23 2023-12-19 NTT Docomo, Inc. Predictive image decoding device and a predictive image decoding method
JPWO2011024602A1 (en) * 2009-08-26 2013-01-24 Sharp Corporation Image encoding apparatus and image decoding apparatus
US8885711B2 (en) * 2009-12-17 2014-11-11 SK Telecom Co., Ltd. Image encoding/decoding method and device
KR101379188B1 (en) * 2010-05-17 2014-04-18 SK Telecom Co., Ltd. Video Coding and Decoding Method and Apparatus for Macroblock Including Intra and Inter Blocks
KR101483179B1 (en) * 2010-10-06 2015-01-19 SK Telecom Co., Ltd. Frequency Transform Block Coding Method and Apparatus and Image Encoding/Decoding Method and Apparatus Using Same
US9288496B2 (en) * 2010-12-03 2016-03-15 Qualcomm Incorporated Video coding using function-based scan order for transform coefficients
CN103782598A (en) * 2011-06-30 2014-05-07 Huawei Technologies Co., Ltd. Fast Encoding Method for Lossless Encoding

Patent Citations (50)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20030156648A1 (en) 2001-12-17 2003-08-21 Microsoft Corporation Sub-block transform coding of prediction residuals
US20200169749A1 (en) 2001-12-17 2020-05-28 Microsoft Technology Licensing, LLC Video coding/decoding with sub-block transform sizes and adaptive deblock filtering
KR20040027047A (en) 2002-09-27 2004-04-01 Samsung Electronics Co., Ltd. Encoding/decoding apparatus and method for image using predictive scanning
KR100744435B1 (en) 2006-02-21 2007-08-01 C&S Technology Co., Ltd. How to Omit DCT and Quantization for High-Speed Video Compression
US20080310512A1 (en) 2007-06-15 2008-12-18 Qualcomm Incorporated Separable directional transforms
JP2009027541A (en) 2007-07-20 2009-02-05 NTT Docomo, Inc. Image coding apparatus, method and program, and image decoding apparatus, method and program
US20090097548A1 (en) 2007-10-15 2009-04-16 Qualcomm Incorporated Enhancement layer coding for scalable video coding
US20200186806A1 (en) 2008-08-04 2020-06-11 Dolby Laboratories Licensing Corporation Predictive motion vector coding
US20100061454A1 (en) 2008-08-12 2010-03-11 LG Electronics Inc. Method of processing a video signal
US8483285B2 (en) 2008-10-03 2013-07-09 Qualcomm Incorporated Video coding using transforms bigger than 4×4 and 8×8
TW201028010A (en) 2008-10-03 2010-07-16 Qualcomm Inc. Video coding with large macroblocks
US20100086029A1 (en) 2008-10-03 2010-04-08 Qualcomm Incorporated Video coding with large macroblocks
US8792738B2 (en) 2008-12-25 2014-07-29 Sharp Kabushiki Kaisha Image decoding apparatus and image coding apparatus
CN102265618A (en) 2008-12-25 2011-11-30 Sharp Corporation Image decoding device and image encoding device
US20150249840A1 (en) 2009-06-07 2015-09-03 LG Electronics Inc. Method and apparatus for decoding a video signal
US9100648B2 (en) 2009-06-07 2015-08-04 LG Electronics Inc. Method and apparatus for decoding a video signal
US20120201300A1 (en) 2009-09-14 2012-08-09 SK Telecom Co., Ltd. Encoding/decoding method and device for high-resolution moving images
US20120301040A1 (en) 2010-02-02 2012-11-29 Alex Chungku Yie Image encoding/decoding method for rate-distortion optimization and apparatus for performing same
US20110243232A1 (en) * 2010-04-05 2011-10-06 Samsung Electronics Co., Ltd. Method and apparatus for encoding video by using dynamic-range transformation, and method and apparatus for decoding video by using dynamic-range transformation
US20130034153A1 (en) 2010-04-16 2013-02-07 SK Telecom Co., Ltd. Video encoding/decoding apparatus and method
CA2857849A1 (en) 2010-04-23 2011-10-27 Soo Mi Oh Apparatus and method for encoding a moving picture
US20110317757A1 (en) 2010-06-25 2011-12-29 Qualcomm Incorporated Intra prediction mode signaling for finer spatial prediction directions
US20170238015A1 (en) 2010-07-09 2017-08-17 Qualcomm Incorporated Signaling selected directional transform for video coding
US20120008683A1 (en) 2010-07-09 2012-01-12 Qualcomm Incorporated Signaling selected directional transform for video coding
US9661338B2 (en) 2010-07-09 2017-05-23 Qualcomm Incorporated Coding syntax elements for adaptive scans of transform coefficients for video coding
TW201223286A (en) 2010-07-28 2012-06-01 Qualcomm Inc. Coding motion vectors in video coding
KR20120012383A (en) 2010-07-31 2012-02-09 Soo Mi Oh Predictive block generator
TW201225678A (en) 2010-09-09 2012-06-16 Qualcomm Inc. Efficient coding of video parameters for weighted motion compensated prediction in video coding
TW201216254A (en) 2010-09-13 2012-04-16 Qualcomm Inc. Coding and decoding a transient frame
US20130188740A1 (en) * 2010-09-13 2013-07-25 Industry-Academic Cooperation Foundation Hanbat National University Method and apparatus for entropy encoding/decoding
CN102447896A (en) 2010-09-30 2012-05-09 Huawei Technologies Co., Ltd. Method, device and system for processing image residual block
KR20120062307A (en) 2010-12-06 2012-06-14 SK Telecom Co., Ltd. Video encoding/decoding method and apparatus for noise component in spatial domain
WO2012087713A1 (en) 2010-12-22 2012-06-28 Qualcomm Incorporated Mode dependent scanning of coefficients of a block of video data
US20120170649A1 (en) 2010-12-29 2012-07-05 Qualcomm Incorporated Video coding using mapped transforms and scanning modes
US20140098859A1 (en) 2011-04-01 2014-04-10 LG Electronics Inc. Entropy decoding method, and decoding apparatus using same
US20150245077A1 (en) 2011-04-01 2015-08-27 LG Electronics Inc. Entropy decoding method, and decoding apparatus using same
US20130083857A1 (en) 2011-06-29 2013-04-04 Qualcomm Incorporated Multiple zone scanning order for video coding
US20130003859A1 (en) 2011-06-30 2013-01-03 Qualcomm Incorporated Transition between run and level coding modes
US20130003824A1 (en) 2011-07-01 2013-01-03 Qualcomm Incorporated Applying non-square transforms to video data
US9014262B2 (en) 2011-11-04 2015-04-21 Infobridge Pte. Ltd. Method of generating reconstructed block
US9374582B2 (en) 2011-11-04 2016-06-21 Infobridge Pte. Ltd. Apparatus of decoding video data
US20170180731A1 (en) 2011-11-04 2017-06-22 Infobridge Pte. Ltd. Apparatus of encoding an image
US20130114695A1 (en) 2011-11-07 2013-05-09 Qualcomm Incorporated Signaling quantization matrices for video coding
US20130114730A1 (en) 2011-11-07 2013-05-09 Qualcomm Incorporated Coding significant coefficient information in transform skip mode
US20160373746A1 (en) 2012-06-29 2016-12-22 Electronics and Telecommunications Research Institute Method and device for encoding/decoding images
US9628799B2 (en) 2012-06-29 2017-04-18 Electronics and Telecommunications Research Institute Method and device for encoding/decoding images
US20160373783A1 (en) 2012-06-29 2016-12-22 Electronics and Telecommunications Research Institute Method and device for encoding/decoding images
US9497465B2 (en) 2012-06-29 2016-11-15 Electronics and Telecommunications Research Institute Method and device for encoding/decoding images
US20150326883A1 (en) 2012-09-28 2015-11-12 Canon Kabushiki Kaisha Method, apparatus and system for encoding and decoding the transform units of a coding unit
US20160219290A1 (en) 2015-01-26 2016-07-28 Qualcomm Incorporated Enhanced multiple transforms for prediction residual

Non-Patent Citations (16)

* Cited by examiner, † Cited by third party
Title
Bross, B., et al., Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 6th Meeting: Torino, IT, 2011.
Bross, Benjamin et al., "High efficiency video coding (HEVC) text specification draft 7", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 9th Meeting: Geneva, Switzerland, Apr. 27-May 7, 2012 (298 pages in English).
Casas, J.R., et al., "Residual Image Coding Using Mathematical Morphology." Dept. of Signal Theory and Communications, Univ. Politecnica de Catalunya. IEEE, 1994. (6 pages in English).
Chen, Chung-Ming, et al. "Window Architecture for Deblocking Filter in H.264/AVC." International Symposium on Signal Processing and Information Technology. 2006, IEEE. (7 pages in English).
Ichigaya, Atsuro et al., "Performance report of adaptive DCT/DST selection", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 4th Meeting: Daegu, Republic of Korea, Jan. 19-28, 2011 (5 pages in English).
International Search Report dated Oct. 11, 2013 in corresponding International Patent Application No. PCT/KR2013/005616 (5 pages, in Korean with English translation).
Korean Office Action dated Oct. 30, 2014 in corresponding Korean Patent Application No. 10-2014-0087607 (4 pages, in Korean).
Lan, Cuiling, et al., "Intra transform skipping" JCTVC-I0408, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 9th Meeting: Geneva, CH, April 27-May 7, 2012, (6 pages in English).
Marpe, Detlev, et al. "Context-Based Adaptive Binary Arithmetic Coding in the H. 264/AVC Video Compression Standard." IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, No. 7 (Jul. 2003): 620-636. (17 pages in English).
Mrak, Marta et al., "Transform skip mode", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 6th Meeting: Torino, Italy, Jul. 14-22, 2011 (9 pages in English).
Mrak, Marta, et al., "Transform skip mode." Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11 6th Meeting: Torino, IT, 2011, (6 pages).
Saxena, Ankur et al., "CE7: Mode-dependent DCT/DST for intra prediction in video coding", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 4th Meeting: Daegu, Republic of Korea, Jan. 20-28, 2011 (pp. 1-8).
Saxena, Ankur et al., "CE7: Mode-dependent DCT/DST without 4*4 full matrix multiplication for intra prediction", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 5th Meeting: Geneva, Switzerland, Mar. 16-23, 2011 (pp. 1-10).
Somasundaram, K. et al. "A Pattern-Based Residual Vector Quantization (PBRVQ) Algorithm For Compressing Images." Current Trends in Information Technology (CTIT), 2009 International Conference on the. IEEE, 2009. (7 pages in English).
Sun, Chi-Chia et al. "A Configurable IP Core for Inverse Quantized Discrete Cosine and Integer Transforms With Arbitrary Accuracy." Circuits and Systems (APCCAS), 2010 IEEE Asia Pacific Conference on. IEEE, 2010. (4 pages in English).
W. Zhu, "Non Transform Mode for Inter Coding," Document JCTVC-H0061, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 8th Meeting, San Jose, Calif., Feb. 1-10, 2012.

Also Published As

Publication number | Publication date
KR20200043335A (en)2020-04-27
KR20140093205A (en)2014-07-25
KR20140098036A (en)2014-08-07
TW201709740A (en)2017-03-01
US20160373783A1 (en)2016-12-22
US11399183B2 (en)2022-07-26
KR20200043332A (en)2020-04-27
US20200382785A1 (en)2020-12-03
US11399181B2 (en)2022-07-26
US12177443B2 (en)2024-12-24
US11399186B2 (en)2022-07-26
TW202527552A (en)2025-07-01
US20200382786A1 (en)2020-12-03
CN104488270A (en)2015-04-01
TW202527553A (en)2025-07-01
DK2869557T3 (en)2023-11-06
TWI737549B (en)2021-08-21
TWI876927B (en)2025-03-11
KR101527444B1 (en)2015-06-11
KR20210134568A (en)2021-11-10
US20190273924A1 (en)2019-09-05
TWI619379B (en)2018-03-21
CN108712651A (en)2018-10-26
CN108712652A (en)2018-10-26
US11399185B2 (en)2022-07-26
TW202112136A (en)2021-03-16
EP2869557A1 (en)2015-05-06
US20150172658A1 (en)2015-06-18
CN108712650A (en)2018-10-26
KR101809728B1 (en)2017-12-15
US11399182B2 (en)2022-07-26
TW202015414A (en)2020-04-16
TW201415894A (en)2014-04-16
EP4216547A1 (en)2023-07-26
KR20210036333A (en)2021-04-02
TW201820876A (en)2018-06-01
TWI621353B (en)2018-04-11
US20160373747A1 (en)2016-12-22
KR102635509B1 (en)2024-02-08
TWI619378B (en)2018-03-21
WO2014003423A1 (en)2014-01-03
US20230199186A1 (en)2023-06-22
TWI684355B (en)2020-02-01
US20170289548A1 (en)2017-10-05
KR102103004B1 (en)2020-04-21
US11770534B2 (en)2023-09-26
EP4216546A1 (en)2023-07-26
EP2869557A4 (en)2016-07-06
US11399184B2 (en)2022-07-26
KR102323263B1 (en)2021-11-08
CN104488270B (en)2018-05-18
TW202428029A (en)2024-07-01
US9641845B2 (en)2017-05-02
KR101809731B1 (en)2017-12-15
TWI715361B (en)2021-01-01
US9723311B2 (en)2017-08-01
KR20140004006A (en)2014-01-10
US20200382787A1 (en)2020-12-03
TW202215849A (en)2022-04-16
HUE063933T2 (en)2024-02-28
KR102635508B1 (en)2024-02-08
KR102635510B1 (en)2024-02-08
KR20170140145A (en)2017-12-20
KR20170044631A (en)2017-04-25
KR20200043333A (en)2020-04-27
US20200382788A1 (en)2020-12-03
KR20200043331A (en)2020-04-27
US20220295065A1 (en)2022-09-15
US20200382783A1 (en)2020-12-03
KR20140092799A (en)2014-07-24
US11595655B2 (en)2023-02-28
CN108632611A (en)2018-10-09
TWI751963B (en)2022-01-01
US9635363B2 (en)2017-04-25
KR101725818B1 (en)2017-04-11
KR101809730B1 (en)2017-12-15
US12010312B2 (en)2024-06-11
US20240291989A1 (en)2024-08-29
US10341661B2 (en)2019-07-02
KR20210134567A (en)2021-11-10
US20200382784A1 (en)2020-12-03
US20240283934A1 (en)2024-08-22
WO2014003423A4 (en)2014-02-27
US9628799B2 (en)2017-04-18
PL2869557T3 (en)2024-02-19
KR20240023073A (en)2024-02-20
US20210112249A1 (en)2021-04-15
ES2961654T3 (en)2024-03-13
TW201717640A (en)2017-05-16
TWI563836B (en)2016-12-21
KR20240018565A (en)2024-02-13
TWI596935B (en)2017-08-21
KR20210134569A (en)2021-11-10
TWI839662B (en)2024-04-21
US10827177B2 (en)2020-11-03
TW201709739A (en)2017-03-01
KR20200043334A (en)2020-04-27
TW201635794A (en)2016-10-01
US20220295066A1 (en)2022-09-15
KR20170044630A (en)2017-04-25
US20240195974A1 (en)2024-06-13
US20160373748A1 (en)2016-12-22
KR20240023072A (en)2024-02-20
KR101809729B1 (en)2017-12-15
EP2869557B1 (en)2023-08-09
US12355967B2 (en)2025-07-08
US9497465B2 (en)2016-11-15
FI2869557T3 (en)2023-11-02
TW202141981A (en)2021-11-01
CN108712649A (en)2018-10-26
US20160373746A1 (en)2016-12-22

Similar Documents

Publication | Publication Date | Title
US11765356B2 (en) Method and device for encoding/decoding images
US11595643B2 (en) Encoding method and decoding method, and device using same

Legal Events

Date | Code | Title | Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

