Article

Multiband and Lossless Compression of Hyperspectral Images

Dipartimento di Informatica, Università di Salerno, Fisciano (SA) 84084, Italy
*
Author to whom correspondence should be addressed.
Algorithms 2016, 9(1), 16; https://doi.org/10.3390/a9010016
Submission received: 28 December 2015 / Revised: 1 February 2016 / Accepted: 5 February 2016 / Published: 18 February 2016

Abstract

Hyperspectral images are widely used in several real-life applications. In this paper, we investigate the compression of hyperspectral images by considering different aspects, including the optimization of the computational complexity in order to allow implementations on limited hardware (i.e., hyperspectral sensors, etc.). We present an approach that relies on a three-dimensional predictive structure. Our predictive structure, 3D-MBLP, uses one or more previous bands as references to exploit the redundancy along the third dimension. The achieved results are comparable with, and often better than, other state-of-the-art lossless compression techniques for hyperspectral images.

    1. Introduction

    Hyperspectral imaging instruments collect information by exploring the electromagnetic spectrum of a specific geographical area. In contrast to the human eye and traditional camera sensors, which can only perceive visible light (i.e., wavelengths between 360 and 760 nanometers (nm)), spectral imaging techniques cover a significantly wider portion of the spectrum (i.e., including the ultraviolet and infrared frequencies). It is important to note that the spectrum is subdivided into different spectral bands. Therefore, hyperspectral images can be viewed as three-dimensional data (often referred to as datacubes).
    For instance, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [1] hyperspectral sensor (NASA Jet Propulsion Laboratory (JPL) [2]) covers the electromagnetic spectrum from 380 to 2500 nm. In particular, the spectrum is subdivided into 224 spectral bands.
    From the analysis of hyperspectral data, it is possible to identify and/or classify materials, objects, etc. Such capabilities are related to the fact that some objects and materials have a unique signature (a sort of fingerprint) in the electromagnetic spectrum; this fingerprint can therefore be used for identification purposes.
    Hyperspectral data are widely used in real-life applications including agriculture, mineralogy, physics, surveillance, etc. For instance, in geological applications the capabilities of hyperspectral remote sensing are exploited to identify various types of minerals or to search for minerals and oil.
    One of the most important parameters for evaluating the precision of a sensor is the spectral resolution, which is the width between two adjacent bands. For instance, for the AVIRIS hyperspectral images the spectral resolution is 10 nm. The spatial resolution is a relevant aspect too. Informally, the spatial resolution denotes how extensive the geographical area mapped by the sensor into a single pixel is. It can be difficult to recognize materials and/or objects from a pixel if too wide an area is mapped into it.
    Many hundreds of gigabytes can be produced every day by a single hyperspectral sensor. Therefore, it is necessary to compress these data, in order to transmit and to store them efficiently. Since such data are often used in delicate tasks and there are high costs involved in the acquisitions, lossless compression is generally required.
    This paper focuses on a novel technique for the lossless compression of hyperspectral images. The proposed algorithm is based on the predictive coding model and uses a configurable, multiband, three-dimensional predictive structure. Our predictor can be customized by selecting the number of previous bands used as references and the width of the prediction context. Through appropriate configurations of these parameters, the computational complexity and the memory usage can be tuned depending on the available hardware.
    Because of its high configurability, our algorithm is suitable for on-board implementations on hardware with limited capabilities, for example on an airplane or on a satellite.
    The experimental results we have achieved are comparable with, and often better than, other state-of-the-art approaches. Our scheme provides a good trade-off between computational complexity/memory usage and compression performance.
    The rest of the paper is organized as follows: Section 2 briefly reviews previous work on lossless and lossy compression of hyperspectral images, Section 3 outlines the proposed lossless compression approach, and Section 4 focuses on the description of the experimental results. Finally, Section 5 highlights our conclusions and future work directions.

    2. Related Works

    Lossless compression of hyperspectral images is generally based on the predictive coding model. Predictive-based approaches have several advantages: they use limited resources in terms of computational power and memory usage and achieve good compression performance. Often, these models are suitable for on-board implementations.
    Spectral-oriented Least SQuares (SLSQ) [3], Linear Predictor (LP) [3], Fast Lossless (FL) [4], CALIC-3D [5], M-CALIC [5], and RLS [6] are among the state-of-the-art predictive-based techniques.
    The Consultative Committee for Space Data Systems (CCSDS) has specified the CCSDS 123 standard, which outlines a method for the lossless compression of multispectral and hyperspectral image data and a format for storing the compressed data [7,8]. The main objective is to establish a Recommended Standard for multispectral and hyperspectral images and to specify the compressed data format. In the literature, many proposed approaches implement the recommendations of the CCSDS 123 standard for the lossless compression of hyperspectral images, as for instance the ones described in [9,10,11].
    Other approaches are designed for offline compression, since they use more sophisticated techniques and/or require the complete availability of the hyperspectral image. These approaches are not suitable for an on-board implementation but can achieve better compression performance. Mielikainen, in [12], proposed an approach for the compression of hyperspectral images through a Look-Up Table (LUT). LUT predicts each pixel by using the pixels in the current and in the previous band: it searches the previous band for the nearest neighbor having the same value as the pixel located at the same spatial coordinates as the current pixel. LUT achieves high compression performance, but it uses more resources in terms of memory and CPU usage.
    Other lossless techniques are based on dimensionality reduction through the principal component transform [13] or on clustered differential pulse code modulation [14]. An error-resilient lossless compression technique is proposed in [15].
    For the lossy compression of hyperspectral images, the compression algorithms are generally based on 3-D frequency transforms, for example the 3-D Discrete Wavelet Transform (3D-DWT) [16], the 3-D Discrete Cosine Transform (3D-DCT) [17], the Karhunen–Loève Transform (KLT) [18], etc. These approaches are easily scalable. On the other hand, they must keep the entire hyperspectral image in memory at the same time. Locally optimal Partitioned Vector Quantization (LPVQ) [19,20] applies a Partitioned Vector Quantization (PVQ) scheme independently to each pixel of the hyperspectral image.
    The variable sizes of the partitions are chosen adaptively and the indices are entropy coded. The codebook is included as part of the coded output. This technique can also be used in lossless mode, but the high costs required in terms of CPU and memory do not allow an on-board implementation.

    3. Lossless Multiband Compression for Hyperspectral Images (LMBHI)

    Hyperspectral images present two types of correlation:
    • inter-band correlation;
    • intra-band correlation.
    In particular, contiguous bands are strongly correlated (inter-band correlation) and neighboring pixels are generally correlated, since, for instance, two adjacent pixels map adjacent areas, possibly composed of the same material, etc. (intra-band correlation). Such characteristics are exploited by compression strategies in order to exploit the redundancy along the third dimension. The main aim of our approach, which we denote as Lossless MultiBand compression for Hyperspectral Images (LMBHI), is to exploit these correlations with a predictive coding model.
    In detail, for each pixel $X$ of the input hyperspectral image, LMBHI computes the prediction $\hat{X}$ of the current pixel by selecting an appropriate prediction context of $X$ (three-dimensional or bi-dimensional).
    All the pixels that belong to the first band are predicted by using a bi-dimensional predictive structure, the 2-D Linearized Median Predictor (2-D LMP) [21], which exploits only the intra-band correlation, since the first band has no reference bands. The other pixels are predicted by using a new three-dimensional predictive approach, which uses a prediction context composed of the neighboring pixels of $X$ and its reference pixels in the previous bands.
    Once the prediction step is computed, the prediction error $e$ (defined in Equation (1)) is modeled and coded.
    $$e = X - \hat{X} \quad (1)$$

    3.1. Review of the 2-D Linearized Median Predictor (2D-LMP)

    The 2-D Linearized Median Predictor (2D-LMP) [21] uses a prediction context composed of three neighboring pixels of $X$, namely $I_A$, $I_B$, and $I_C$, as shown in Figure 1. In particular, the predictive structure is derived from the well-established 2-D Median Predictor, which is used in JPEG-LS [22]. The 2-D Median Predictor has the predictive structure outlined in Equation (2).
    $$\hat{X} = \begin{cases} \max(I_A, I_B) & \text{if } I_C \le \min(I_A, I_B) \\ \min(I_A, I_B) & \text{if } I_C \ge \max(I_A, I_B) \\ I_A + I_B - I_C & \text{otherwise} \end{cases} \quad (2)$$
    Basically, the Median Predictor selects one of the three options above, depending on the context. By combining the three options, it is possible to obtain the predictive structure of 2D-LMP, defined as in Equation (3).
    $$\hat{X} = \frac{2 \cdot (I_A + I_B)}{3} - \frac{I_C}{3} \quad (3)$$
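    The two predictive structures can be illustrated with the short Python sketch below; the function names and the integer rounding of the 2D-LMP output are assumptions of this example, since the paper's proof-of-concept is written in Java and does not give implementation details.

```python
def median_predictor(i_a: int, i_b: int, i_c: int) -> int:
    """2-D Median Predictor (MED), as used in JPEG-LS (Equation (2))."""
    if i_c <= min(i_a, i_b):
        return max(i_a, i_b)
    if i_c >= max(i_a, i_b):
        return min(i_a, i_b)
    return i_a + i_b - i_c


def lmp_2d(i_a: int, i_b: int, i_c: int) -> int:
    """2-D Linearized Median Predictor (Equation (3)), combining the three MED options.

    The integer rounding is an assumption of this sketch; the paper does not
    specify how the division by 3 is handled.
    """
    return round((2 * (i_a + i_b) - i_c) / 3)


# Example with neighbors I_A = 102, I_B = 98, I_C = 95:
# MED returns max(102, 98) = 102 because I_C <= min(I_A, I_B);
# 2D-LMP returns round((2 * 200 - 95) / 3) = round(101.67) = 102.
print(median_predictor(102, 98, 95), lmp_2d(102, 98, 95))
```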

    3.2. 3-D MultiBand Linear Predictor (3D-MBLP)

    The 3-D Multiband Linear Predictor (3D-MBLP) builds its prediction context according to two parameters:
    • B: the number of previous bands that are considered for the prediction;
    • N: the number of samples, from the current and from each previous band, which are used for the creation of the prediction context.
    First of all, we define a bi-dimensional enumeration $E$, graphically represented in Figure 2. The main aim of such an enumeration is to permit the relative indexing of the pixels with respect to the pixel that is currently under analysis (which has index 0 in Figure 2).
    In order to define the prediction context of 3D-MBLP, we use the following notations:
    • $I_{i,j}$: indicates the $i$-th pixel of the $j$-th band, according to the enumeration $E$;
    • $I_{0,j}$: denotes the pixel of the $j$-th band that has the same spatial coordinates as $X$, according to the enumeration $E$.
    In the following, we suppose that the current band is the $k$-th band. In particular, by using our notation, $X$ can also be addressed as $I_{0,k}$. In detail, the 3D-MBLP predictor is based on the least squares optimization technique and the prediction is computed by means of Equation (4).
    $$\hat{X} = \sum_{i=1}^{B} \alpha_i \cdot I_{0, k-i} \quad (4)$$
    The coefficients $\alpha^0 = [\alpha_1, \ldots, \alpha_B]^T$ are chosen to minimize the energy of the prediction error, described by Equation (5).
    $$P = \sum_{i=1}^{N} \left(I_{i,k} - \hat{I}_{i,k}\right)^2 \quad (5)$$
    $P$ can be rewritten in matrix notation by means of the following equation:
    $$P = (C\alpha - X)^T \cdot (C\alpha - X)$$
    where
    $$C = \begin{bmatrix} I_{1,k-1} & \cdots & I_{1,k-B} \\ \vdots & \ddots & \vdots \\ I_{N,k-1} & \cdots & I_{N,k-B} \end{bmatrix}$$
    and $X = [I_{1,k}, \ldots, I_{N,k}]^T$.
    Subsequently, by taking the derivative of $P$ and setting it to zero, we obtain the optimal coefficients by means of Equation (6).
    $$(C^T C)\,\alpha^0 = (C^T X) \quad (6)$$
    Once the coefficients $\alpha^0$ that solve the linear system in Equation (6) are computed, the prediction $\hat{X}$ of the current pixel $X$ can be calculated.
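    As an illustration, the following Python sketch computes a 3D-MBLP prediction by solving the normal equations of Equation (6) with NumPy. The array layout, the function name, and the handling of a singular system (returning None so that the caller can fall back to a simpler predictor, see Section 3.4) are assumptions of this sketch, not details taken from the authors' Java implementation.

```python
import numpy as np


def predict_3d_mblp(C, X, refs):
    """3D-MBLP prediction via least squares (Equations (4)-(6)).

    C    : (N, B) array of context pixels, C[i, j] = I_{i+1, k-(j+1)}.
    X    : length-N array of the corresponding context pixels in the current
           band k, X[i] = I_{i+1, k}.
    refs : length-B array of the pixels co-located with the current pixel in
           the B previous bands, refs[j] = I_{0, k-(j+1)}.
    Returns the prediction X_hat (Equation (4)), or None when the normal
    equations have no unique solution and the caller should fall back to a
    low-complexity predictor such as 3D-DLMP.
    """
    C = np.asarray(C, dtype=np.float64)
    X = np.asarray(X, dtype=np.float64)
    A = C.T @ C                        # B x B matrix C^T C
    b = C.T @ X                        # right-hand side C^T X
    try:
        alpha = np.linalg.solve(A, b)  # optimal coefficients alpha^0
    except np.linalg.LinAlgError:
        return None
    return float(alpha @ np.asarray(refs, dtype=np.float64))


# Toy example with B = 2 previous bands and N = 4 context pixels
# (the values are illustrative only).
C = np.array([[100.0, 97.0], [102.0, 99.0], [98.0, 96.0], [101.0, 97.0]])
X = np.array([101.0, 103.0, 99.0, 102.0])
refs = np.array([100.0, 98.0])
print(predict_3d_mblp(C, X, refs))
```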

    3.3. Modeling and Coding of Prediction Errors

    A prediction error can assume positive or negative values. Similarly to [23], we use an invertible mapping function (shown in Equation (7)) in order to obtain only non-negative values. It is important to note that the mapping function does not alter the redundancy among the errors. For the coding of the mapped prediction errors we use the Arithmetic Coder (AC) scheme.
    $$M(e) = \begin{cases} 2 \cdot |e| & \text{if } e \ge 0 \\ 2 \cdot |e| - 1 & \text{otherwise} \end{cases} \quad (7)$$
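    The mapping of Equation (7) interleaves non-negative and negative errors onto the non-negative integers (0, -1, 1, -2, 2, ... become 0, 1, 2, 3, 4, ...), so the mapped symbols can be fed directly to the entropy coder. A minimal Python sketch of the mapping and of its inverse, as a decoder would use it, follows; the function names are assumptions of this example.

```python
def map_error(error: int) -> int:
    """Invertible mapping of Equation (7): 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * abs(error) if error >= 0 else 2 * abs(error) - 1


def unmap_error(value: int) -> int:
    """Inverse mapping, used by the decoder to recover the signed error."""
    return value // 2 if value % 2 == 0 else -(value + 1) // 2


assert [map_error(e) for e in (0, -1, 1, -2, 2)] == [0, 1, 2, 3, 4]
assert all(unmap_error(map_error(e)) == e for e in range(-10, 11))
```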

    3.4. Computational Complexity

    The main computational cost of our approach is due to the solution of the linear system in Equation (6), used to generate the optimal coefficients, which are needed for the computation of the predicted pixel. By using the normal equations method, the linear system can be solved with approximately $(N + B/3) \cdot B^2$ floating-point operations [24]. Figure 3 shows the trend of the computational complexity of our predictive model, in terms of the number of operations (Y-axis) required to solve the linear system, for configurations with different parameters (X-axis).
    If we use only the previous band as a reference (B = 1), only about 20 operations are needed to solve the system. Instead, roughly four or nine times more operations are required if we use two previous bands (B = 2) or three previous bands (B = 3), respectively. A linear system can have three kinds of solutions: no solution, one solution, or infinitely many solutions. In the first and third scenarios, the proposed predictive structure cannot perform the prediction. In these scenarios, it is desirable to use another low-complexity predictive structure; we use the 3-D Distances-based Linearized Median Predictor (3D-DLMP) [21].
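    The effect of the two parameters on the cost can be made concrete with a small helper that evaluates the flop estimate above; the helper name is an assumption of this sketch. The roughly 1x/4x/9x ratio between B = 1, 2, and 3 follows directly from the B^2 factor.

```python
def normal_equations_flops(n: int, b: int) -> float:
    """Approximate floating-point operations needed to solve Equation (6)
    with the normal-equations method: (N + B/3) * B^2 [24]."""
    return (n + b / 3) * b ** 2


for b in (1, 2, 3):
    print(f"B={b}, N=16: ~{normal_equations_flops(16, b):.0f} floating-point operations")
# Prints roughly 16, 67, and 153 operations: about 1x, 4x, and 9x the B = 1 cost.
```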

    4. Experimental Results

    We performed experiments on two datasets of AVIRIS hyperspectral images: the 1997 AVIRIS Dataset (Section 4.1) and the CCSDS Dataset (Section 4.2).
    In our experiments we also considered the PAQ8 algorithm (described in [25]) for the coding of the prediction errors. PAQ8 is a state-of-the-art lossless compression algorithm that belongs to the PAQ family of compression algorithms. It is important to note that the PAQ8 family is closely related to the well-established Prediction by Partial Matching (PPM) scheme [26]. In general, the PAQ8 algorithm achieves high compression performance, but it has significant computational complexity. Therefore, such a scheme is not fully adequate for on-board applications.
    The experiments were performed by using a non-optimized Java-based proof-of-concept of our approach, which takes a few minutes on a mid-range laptop (equipped with an Intel Core i5-4200M processor and 8 GB of RAM).

    4.1. 1997 AVIRIS Dataset

    Each AVIRIS hyperspectral image of the AVIRIS '97 dataset is subdivided into scenes (the number of scenes is reported in Table 1). It is important to note that each scene has 614 columns, 512 lines, and 224 spectral bands. In addition, each pixel is stored by using 16 bits.
    In Table 2 and Table 3, we report the results achieved by using B = 1, with N = 8 and N = 16, respectively. Subsequently, in Table 4 and Table 5, we report the results achieved by using B = 2. Finally, Table 6 and Table 7 report the results achieved by using B = 3, with the N parameter equal to 8 and to 16, respectively. All the results are reported in terms of Bits Per Sample (BPS). In each table we report the results achieved by using both the AC and the PAQ8 schemes for the coding of the prediction errors.
    In Table 8 and Table 9, the average results on all the tested hyperspectral images are reported. In detail, the first column indicates the N parameter, and the second to fourth columns report the average results for B = 1, B = 2, and B = 3, respectively.
    As can be observed from Figure 4 and Figure 5, which graphically represent the average results, the best results are achieved when the following parameters are used: N = 16 and B = 2 (Figure 4b and Figure 5b). The worst results are obtained with N = 8 and B = 3.

    Comparison with Other Approaches

    In order to compare the experimental results achieved by our approach with other methods, we consider the Compression Ratio (C.R.) as the measure of compression performance. In detail, Table 10 reports the results achieved with several parameter configurations on all the hyperspectral images of the dataset. More precisely, the results are reported in terms of C.R. and are compared with other state-of-the-art lossless compression schemes.
    From the experimental results, it can be observed that LMBHI achieves its best results by using two previous bands as references (i.e., when B = 2); in this configuration, LMBHI outperforms, on average, all the other state-of-the-art approaches.
    On the other hand, when only the previous band is used (i.e., when B = 1), LMBHI outperforms all the compared state-of-the-art techniques, with the exception of LPVQ. However, LPVQ is not suited for on-board implementation.
    In this latter case, our approach achieves better results than LPVQ on three of the five hyperspectral images (Moffett Field, Jasper Ridge, and Low Altitude), but LPVQ gains on Cuprite and especially on Lunar Lake. In addition, LUT obtains better results than our approach on two of the four compared hyperspectral images: Lunar Lake and Jasper Ridge.
    The high flexibility and adaptability of our approach make it a good candidate for on-board implementations. In fact, the coding parameters can be customized depending on the available hardware.

    4.2. CCSDS Dataset

    In this section we focus on the experimental results we have achieved on the CCSDS Dataset, which is composed of five calibrated and seven uncalibrated hyperspectral images. This dataset is provided by the Consultative Committee for Space Data Systems (CCSDS) Multispectral and Hyperspectral Data Compression working group [27].
    In Table 11, we briefly describe the dataset by reporting the number of scenes (second column) and the number of samples per line (third column) for the calibrated and the uncalibrated images (first column). The samples of the calibrated and the uncalibrated images are stored by using 16 bits (16-bit signed integers for the calibrated images and 16-bit unsigned integers for the uncalibrated images), except for the Hawaii and Maine images, in which the samples are stored by using 12 bits (unsigned) [27]. Each image is composed of 512 lines.
    In Table 12, we report our results in terms of bits per sample (BPS). The results refer to the calibrated hyperspectral images (first column), using several configurations of our approach (second to fourth columns). Analogously to Table 12, in Table 13 we report our experimental results for the uncalibrated images. In each table, we report the results obtained by using the AC and the PAQ8 schemes.
    The best results are achieved when the following configuration is used: B = 2 and N = 16.

    Comparison with Other Approaches

    We have compared our results on the CCSDS dataset with other state-of-the-art approaches. Table 14 reports the results achieved by considering several values of the B and N parameters on the calibrated hyperspectral images of the CCSDS dataset, by using the AC scheme as well as the PAQ8 scheme for the coding of the prediction errors.
    Table 15 and Table 16 report the comparison between the proposed approach and other approaches for the 16-bit uncalibrated and the 12-bit uncalibrated hyperspectral images of the CCSDS dataset.
    From Table 12 and Table 13, it is clear that the best results are achieved when N is equal to 16 and B is equal to 2.
    By looking at Table 14, it can be observed that our approach, when the configuration N = 16 and B = 2 is used, achieves results that are comparable to, but slightly worse than, FL and FL# [27]. Our approach outperforms all the other techniques when the PAQ8 scheme is used for the coding of the prediction errors with N = 16 and B = 2. However, in such a configuration the computational complexity of our approach is not suitable for on-board implementations.

    5. Conclusions and Future Works

    In this paper, we have investigated the lossless compression of hyperspectral images by introducing a multiband three-dimensional predictive structure, which we named 3D-MBLP.
    Because of its configurability, it is possible to implement the algorithm on different types of sensors, by using an appropriate configuration for each type of sensor. Moreover, the proposed approach can also be easily scaled for future-generation sensors, which will have better hardware capabilities. The experimental results we achieved are comparable with, and often outperform, other state-of-the-art lossless compression techniques.
    In future work, we will include a pre-processing stage before the compression of the hyperspectral image, which reorders the bands by considering their correlation. This could further improve the compression performance, as in [28,29].

    Acknowledgments

    The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper.

    Author Contributions

    All the authors worked together and contributed equally.

    Conflicts of Interest

    The authors declare no conflict of interest.

    References

    1. AVIRIS NASA Page. Available online: http://aviris.jpl.nasa.gov/ (accessed on 1 December 2015).
    2. Jet Propulsion Laboratory (JPL) Page. Available online: http://www.jpl.nasa.gov/ (accessed on 1 December 2015).
    3. Rizzo, F.; Carpentieri, B.; Motta, G.; Storer, J.A. Low-complexity lossless compression of hyperspectral imagery via linear prediction. IEEE Signal Process. Lett. 2005, 12, 138–141. [Google Scholar] [CrossRef]
    4. Klimesh, M. Low-complexity lossless compression of hyperspectral imagery via adaptive filtering. IPN Prog. Report 2005, 42, 1–10. [Google Scholar]
    5. Magli, E.; Olmo, G.; Quacchio, E. Optimized onboard lossless and near-lossless compression of hyperspectral data using CALIC. Geosci. Remote Sens. Lett. 2004, 1, 21–25. [Google Scholar] [CrossRef]
    6. Song, J.; Zhang, Z.; Chen, X. Lossless compression of hyperspectral imagery via RLS filter. Electron. Lett. 2013, 49, 992–994. [Google Scholar] [CrossRef]
    7. Shah, D.; Bera, K.; Sanjay, J. Software Implementation of CCSDS Recommended Hyperspectral Lossless Image Compression. Int. J. Image Gr. Signal Process. 2015, 4, 35–41. [Google Scholar] [CrossRef]
    8. Consultative Committee for Space Data Systems (CCSDS). Lossless Multispectral & Hyperspectral Image Compression. Available online: http://public.ccsds.org/publications/archive/123x0b1ec1.pdf (accessed on 5 February 2016).
    9. Sánchez, J.E.; Augé, E.; Santaló, J.; Blanes, I.; Serra-Sagristà, J.; Kiely, A.B. Review and implementation of the emerging CCSDS recommended standard for multispectral and hyperspectral lossless image coding. In Proceedings of the 2011 First International Conference on Data Compression, Communications and Processing (CCP), Palinuro, Italy, 21–24 June 2011; pp. 222–228.
    10. Keymeulen, D.; Aranki, N.; Hopson, B.; Kiely, A.; Klimesh, M.; Benkrid, K. GPU lossless hyperspectral data compression system for space applications. In Proceedings of the Aerospace Conference, Big Sky, MT, USA, 3–10 March 2012; pp. 1–9.
    11. Keymeulen, D.; Aranki, N.; Bakhshi, A.; Luong, H.; Sarture, C.; Dolman, D. Airborne demonstration of FPGA implementation of Fast Lossless hyperspectral data compression system. In Proceedings of the NASA/ESA Conference on Adaptive Hardware and Systems (AHS), Leicester, UK, 14–17 July 2014; pp. 278–284.
    12. Mielikainen, J. Lossless compression of hyperspectral images using lookup tables. IEEE Signal Process. Lett. 2006, 13, 157–160. [Google Scholar] [CrossRef]
    13. Pickering, M.; Ryan, M. Efficient spatial-spectral compression of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2001, 39, 1536–1539. [Google Scholar] [CrossRef]
    14. Wu, J.; Kong, W.; Mielikainen, J.; Huang, B. Lossless Compression of Hyperspectral Imagery via Clustered Differential Pulse Code Modulation with Removal of Local Spectral Outliers. IEEE Signal Process. Lett. 2015, 22, 2194–2198. [Google Scholar] [CrossRef]
    15. Abrando, A.; Barni, M.; Magli, E.; Nencini, F. Error-resilient and low-complexity on-board lossless compression of hyperspectral images by means of distributed source coding. IEEE Trans. Geosci. 2010, 48, 1892–1904. [Google Scholar] [CrossRef]
    16. Lim, S.; Sohn, K.; Lee, C. Compression for hyperspectral images using three dimensional wavelet transform. In Proceedings of the IGARSS, Sydney, Australia, 9–13 July 2001; pp. 109–111.
    17. Markman, D.; Malah, D. Hyperspectral image coding using 3D transforms. In Proceedings of the IEEE ICIP, Thessaloniki, Greece, 7–10 October 2001; pp. 114–117.
    18. Penna, B.; Tillo, T.; Magli, E.; Olmo, G. Transform coding techniques for lossy hyperspectral data compression. IEEE Trans. Geosci. 2007, 45, 1408–1421. [Google Scholar] [CrossRef]
    19. Carpentieri, B.; Storer, J.A.; Motta, G.; Rizzo, F. Compression of hyperspectral imagery. In Proceedings of the IEEE Data Compression Conference (DCC 03), Snowbird, UT, USA, 25–27 March 2003; pp. 317–324.
    20. Motta, G.; Rizzo, F.; Storer, J.A. Hyperspectral Data Compression; Springer Science: Berlin, Germany, 2006. [Google Scholar]
    21. Pizzolante, R.; Carpentieri, B. Lossless, low-complexity, compression of three-dimensional volumetric medical images via linear prediction. In Proceedings of the 18th International Conference on Digital Signal Processing (DSP), Fira, Greece, 1–3 July 2013; pp. 1–6.
    22. Carpentieri, B.; Weinberger, M.; Seroussi, G. Lossless compression of continuous tone images. Proc. IEEE 2000, 88, 1797–1809. [Google Scholar] [CrossRef]
    23. Motta, G.; Storer, J.A.; Carpentieri, B. Lossless image coding via adaptive linear prediction and classifications. Proc. IEEE 2000, 88, 1790–1796. [Google Scholar] [CrossRef]
    24. Golub, G.H.; van Loan, C.F. Matrix Computations; The Johns Hopkins University Press: Baltimore, MD, USA, 1996. [Google Scholar]
    25. Knoll, B.; de Freitas, N. A Machine Learning Perspective on Predictive Coding with PAQ8. In Proceedings of the Data Compression Conference (DCC), Snowbird, UT, USA, 10–12 April 2012; pp. 377–386.
    26. Salomon, D.; Motta, G. Handbook of Data Compression; Springer: Berlin, Germany, 2010. [Google Scholar]
    27. Kiely, A.B.; Klimesh, M. Exploiting calibration-induced artifacts in lossless compression of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2672–2678. [Google Scholar] [CrossRef]
    28. Carpentieri, B. Hyperspectral images: Compression, visualization and band ordering. Proc. IPCV 2011, 2, 1023–1029. [Google Scholar]
    29. Pizzolante, R.; Carpentieri, B. Visualization, band ordering and compression of hyperspectral images. Algorithms 2012, 5, 76–97. [Google Scholar] [CrossRef]
    Figure 1. The prediction context of the 2D-LMP predictive structure. The gray part is already coded and the white part is not coded yet.
    Figure 2. The enumeration E used for the relative indexing with respect to the current pixel, identified with index 0.
    Figure 3. The number of operations (Y-axis) required to solve the linear system of Equation (6), by using different parameters (X-axis).
    Figure 4. Graphical representation of the average results by using the AC scheme for the coding of the prediction errors. N = 8 (a) and N = 16 (b).
    Figure 5. Graphical representation of the average results by using the PAQ8 scheme for the coding of the prediction errors. N = 8 (a) and N = 16 (b).
    Table 1. Description of the dataset used.
    Images | Number of Scenes
    Lunar Lake | 3
    Moffett Field | 4
    Jasper Ridge | 6
    Cuprite | 5
    Low Altitude | 8
    Table 2. Achieved results by using the following parameters: N = 8, B = 1. (N.P. indicates that the scene is not present).
    Scenes | Lunar Lake (AC / PAQ8) | Moffett Field (AC / PAQ8) | Jasper Ridge (AC / PAQ8) | Cuprite (AC / PAQ8) | Low Altitude (AC / PAQ8)
    Scene 01 | 5.0560 / 4.9985 | 5.1463 / 5.0649 | 5.0602 / 4.9801 | 4.9699 / 4.9124 | 5.3784 / 5.3101
    Scene 02 | 5.0099 / 4.9532 | 5.1023 / 4.9918 | 5.0524 / 4.9707 | 5.0457 / 4.9916 | 5.4015 / 5.3320
    Scene 03 | 4.9963 / 4.9381 | 4.9672 / 4.8426 | 5.0987 / 5.0150 | 4.9930 / 4.9369 | 5.3066 / 5.2429
    Scene 04 | N.P. | 5.1888 / 5.0984 | 5.1160 / 5.0353 | 5.0380 / 4.9838 | 5.3341 / 5.2687
    Scene 05 | N.P. | N.P. | 5.0532 / 4.9746 | 5.0358 / 4.9821 | 5.3810 / 5.3151
    Scene 06 | N.P. | N.P. | 5.0525 / 4.9726 | N.P. | 5.3145 / 5.2536
    Scene 07 | N.P. | N.P. | N.P. | N.P. | 5.3110 / 5.2511
    Scene 08 | N.P. | N.P. | N.P. | N.P. | 5.3268 / 5.2712
    Average | 5.0207 / 4.9633 | 5.1012 / 4.9994 | 5.0722 / 4.9914 | 5.0165 / 4.9614 | 5.3442 / 5.2806
    Table 3. Achieved results by using the following parameters: N = 16, B = 1.
    Scenes | Lunar Lake (AC / PAQ8) | Moffett Field (AC / PAQ8) | Jasper Ridge (AC / PAQ8) | Cuprite (AC / PAQ8) | Low Altitude (AC / PAQ8)
    Scene 01 | 5.0212 / 4.9665 | 5.1407 / 5.0504 | 5.0547 / 4.9667 | 4.9334 / 4.8791 | 5.3658 / 5.2917
    Scene 02 | 4.9767 / 4.9228 | 5.1083 / 4.9796 | 5.0485 / 4.9576 | 5.0194 / 4.9663 | 5.3906 / 5.3151
    Scene 03 | 4.9645 / 4.9094 | 4.9654 / 4.8221 | 5.0952 / 5.0020 | 4.9579 / 4.9049 | 5.2892 / 5.2206
    Scene 04 | N.P. | 5.1843 / 5.0837 | 5.1158 / 5.0255 | 5.0064 / 4.9540 | 5.3158 / 5.2456
    Scene 05 | N.P. | N.P. | 5.0488 / 4.9622 | 5.0002 / 4.9491 | 5.3701 / 5.2981
    Scene 06 | N.P. | N.P. | 5.0463 / 4.9604 | N.P. | 5.2963 / 5.2321
    Scene 07 | N.P. | N.P. | N.P. | N.P. | 5.2936 / 5.2303
    Scene 08 | N.P. | N.P. | N.P. | N.P. | 5.3131 / 5.2544
    Average | 4.9875 / 4.9329 | 5.0997 / 4.9840 | 5.0682 / 4.9791 | 4.9835 / 4.9307 | 5.3293 / 5.2610
    Table 4. Achieved results by using the following parameters: N = 8, B = 2.
    Scenes | Lunar Lake (AC / PAQ8) | Moffett Field (AC / PAQ8) | Jasper Ridge (AC / PAQ8) | Cuprite (AC / PAQ8) | Low Altitude (AC / PAQ8)
    Scene 01 | 5.0141 / 4.9707 | 5.0934 / 5.0462 | 5.0140 / 4.9658 | 4.9364 / 4.8929 | 5.3295 / 5.2879
    Scene 02 | 4.9689 / 4.9259 | 5.0172 / 4.9525 | 5.0072 / 4.9583 | 5.0326 / 4.9918 | 5.3491 / 5.3065
    Scene 03 | 4.9558 / 4.9111 | 4.8890 / 4.8064 | 5.0501 / 5.0005 | 4.9625 / 4.9204 | 5.2648 / 5.2221
    Scene 04 | N.P. | 5.1254 / 5.0715 | 5.0654 / 5.0200 | 5.0105 / 4.9693 | 5.2881 / 5.2451
    Scene 05 | N.P. | N.P. | 5.0095 / 4.9624 | 5.0012 / 4.9608 | 5.3281 / 5.2866
    Scene 06 | N.P. | N.P. | 5.0162 / 4.9639 | N.P. | 5.2707 / 5.2297
    Scene 07 | N.P. | N.P. | N.P. | N.P. | 5.2732 / 5.2326
    Scene 08 | N.P. | N.P. | N.P. | N.P. | 5.2927 / 5.2552
    Average | 4.9796 / 4.9359 | 5.0313 / 4.9692 | 5.0271 / 4.9785 | 4.9886 / 4.9470 | 5.2995 / 5.2582
    Table 5. Achieved results by using the following parameters: N = 16, B = 2.
    Scenes | Lunar Lake (AC / PAQ8) | Moffett Field (AC / PAQ8) | Jasper Ridge (AC / PAQ8) | Cuprite (AC / PAQ8) | Low Altitude (AC / PAQ8)
    Scene 01 | 4.9175 / 4.8732 | 5.0206 / 4.9703 | 4.9397 / 4.8884 | 4.8387 / 4.7945 | 5.2482 / 5.2058
    Scene 02 | 4.8743 / 4.8303 | 4.9513 / 4.8806 | 4.9341 / 4.8817 | 4.9483 / 4.9061 | 5.2689 / 5.2254
    Scene 03 | 4.8632 / 4.8174 | 4.8160 / 4.7244 | 4.9755 / 4.9227 | 4.8679 / 4.8250 | 5.1799 / 5.1359
    Scene 04 | N.P. | 5.0496 / 4.9929 | 4.9940 / 4.9458 | 4.9194 / 4.8771 | 5.2017 / 5.1572
    Scene 05 | N.P. | N.P. | 4.9358 / 4.8859 | 4.9047 / 4.8638 | 5.2491 / 5.2066
    Scene 06 | N.P. | N.P. | 4.9418 / 4.8863 | N.P. | 5.1859 / 5.1444
    Scene 07 | N.P. | N.P. | N.P. | N.P. | 5.1883 / 5.1470
    Scene 08 | N.P. | N.P. | N.P. | N.P. | 5.2108 / 5.1728
    Average | 4.8850 / 4.8403 | 4.9594 / 4.8921 | 4.9535 / 4.9018 | 4.8958 / 4.8533 | 5.2166 / 5.1744
    Table 6. Achieved results by using the following parameters: N = 8, B = 3.
    Scenes | Lunar Lake (AC / PAQ8) | Moffett Field (AC / PAQ8) | Jasper Ridge (AC / PAQ8) | Cuprite (AC / PAQ8) | Low Altitude (AC / PAQ8)
    Scene 01 | 5.1060 / 5.0699 | 5.1815 / 5.1415 | 5.1037 / 5.0640 | 5.0324 / 4.9965 | 5.4238 / 5.3862
    Scene 02 | 5.0621 / 5.0265 | 5.0919 / 5.0362 | 5.0968 / 5.0563 | 5.1363 / 5.1018 | 5.4414 / 5.4026
    Scene 03 | 5.0479 / 5.0112 | 4.9649 / 4.8932 | 5.1374 / 5.0962 | 5.0618 / 5.0269 | 5.3610 / 5.3221
    Scene 04 | N.P. | 5.2085 / 5.1614 | 5.1523 / 5.1148 | 5.1084 / 5.0737 | 5.3831 / 5.3446
    Scene 05 | N.P. | N.P. | 5.0999 / 5.0611 | 5.0963 / 5.0631 | 5.4202 / 5.3825
    Scene 06 | N.P. | N.P. | 5.1138 / 5.0697 | N.P. | 5.3656 / 5.3285
    Scene 07 | N.P. | N.P. | N.P. | N.P. | 5.3707 / 5.3341
    Scene 08 | N.P. | N.P. | N.P. | N.P. | 5.3918 / 5.3581
    Average | 5.1117 / 5.0359 | 5.1173 / 5.0581 | 5.0870 / 5.0770 | 5.3947 / 5.0524 | 5.3947 / 5.3573
    Table 7. Achieved results by using the following parameters: N = 16, B = 3.
    Scenes | Lunar Lake (AC / PAQ8) | Moffett Field (AC / PAQ8) | Jasper Ridge (AC / PAQ8) | Cuprite (AC / PAQ8) | Low Altitude (AC / PAQ8)
    Scene 01 | 4.9243 / 4.8842 | 5.0206 / 4.9758 | 4.9413 / 4.8968 | 4.8494 / 4.8094 | 5.2546 / 5.2153
    Scene 02 | 4.8822 / 4.8422 | 4.9383 / 4.8754 | 4.9353 / 4.8898 | 4.9667 / 4.9282 | 5.2731 / 5.2327
    Scene 03 | 4.8704 / 4.8291 | 4.8052 / 4.7226 | 4.9739 / 4.9279 | 4.8823 / 4.8434 | 5.1887 / 5.1476
    Scene 04 | N.P. | 5.0445 / 4.9931 | 4.9919 / 4.9501 | 4.9320 / 4.8932 | 5.2093 / 5.1681
    Scene 05 | N.P. | N.P. | 4.9379 / 4.8944 | 4.9149 / 4.8776 | 5.2528 / 5.2133
    Scene 06 | N.P. | N.P. | 4.9510 / 4.9012 | N.P. | 5.1935 / 5.1545
    Scene 07 | N.P. | N.P. | N.P. | N.P. | 5.1980 / 5.1594
    Scene 08 | N.P. | N.P. | N.P. | N.P. | 5.2214 / 5.1854
    Average | 4.8923 / 4.8518 | 4.9522 / 4.8917 | 4.9552 / 4.9100 | 4.9091 / 4.8704 | 5.2239 / 5.1845
    Table 8. Average Results on the 1997 AVIRIS Images (AC).
    N | B = 1 | B = 2 | B = 3
    8 | 5.1110 | 5.0652 | 5.2211
    16 | 5.0936 | 4.9821 | 4.9865
    Table 9. Average Results on the 1997 AVIRIS Images (PAQ8).
    N | B = 1 | B = 2 | B = 3
    8 | 5.0392 | 5.0178 | 5.1161
    16 | 5.0175 | 4.9324 | 4.9417
    Table 10. Compression results, in terms of compression ratio (C.R.), achieved by LMBHI (by using various parameter configurations), compared to other lossless compression methods.
    Methods/Images | Lunar Lake | Moffett Field | Jasper Ridge | Cuprite | Low Altitude | Average
    3D-MBLP + PAQ8 (N = 16, B = 2) | 3.31 | 3.27 | 3.26 | 3.30 | 3.09 | 3.25
    3D-MBLP + PAQ8 (N = 8, B = 2) | 3.24 | 3.22 | 3.21 | 3.23 | 3.04 | 3.19
    3D-MBLP + PAQ8 (N = 8, B = 1) | 3.22 | 3.20 | 3.21 | 3.23 | 3.03 | 3.18
    3D-MBLP + AC (N = 16, B = 2) | 3.27 | 3.23 | 3.23 | 3.27 | 3.07 | 3.21
    3D-MBLP + AC (N = 8, B = 2) | 3.21 | 3.18 | 3.18 | 3.21 | 3.02 | 3.16
    3D-MBLP + AC (N = 8, B = 1) | 3.18 | 3.14 | 3.16 | 3.19 | 2.99 | 3.13
    LPVQ | 3.31 | 3.01 | 3.12 | 3.27 | 2.97 | 3.14
    SLSQ | 3.15 | 3.14 | 3.15 | 3.15 | 2.98 | 3.11
    JPEG-2000 | 2.98 | 2.99 | 2.96 | 2.98 | 2.82 | 2.95
    LP | 3.05 | 2.88 | 2.94 | 3.03 | 2.76 | 2.93
    JPEG-LS | 2.87 | 2.90 | 2.87 | 2.87 | 2.74 | 2.85
    Diff. JPEG2000 | 2.94 | 2.83 | 2.82 | 2.92 | 2.69 | 2.84
    Diff. JPEG-LS | 2.93 | 2.84 | 2.81 | 2.91 | 2.70 | 2.84
    M-CALIC | 3.19 | 3.27 | 3.06 | 3.14 | N.D. | N.D.
    CALIC-3D | 3.06 | 3.08 | 3.09 | 3.25 | N.D. | N.D.
    LUT | 3.44 | 3.23 | 3.40 | 3.17 | N.D. | N.D.
    Table 11. Description of the CCSDS dataset.
    Images | Num. of Scenes (Denotation of the Scenes) | Samples
    Calibrated
    Yellowstone | 5 scenes (0, 3, 10, 11, 18) | 677
    Uncalibrated
    Yellowstone | 5 scenes (0, 3, 10, 11, 18) | 680
    Hawaii | 1 scene (1) | 614
    Maine | 1 scene (10) | 680
    Table 12. Achieved results for the calibrated images of the CCSDS dataset.
    Images/Configurations | N = 8, B = 1 (AC / PAQ8) | N = 8, B = 2 (AC / PAQ8) | N = 16, B = 2 (AC / PAQ8)
    Yellowstone 0 | 4.1881 / 4.0898 | 4.0435 / 3.9917 | 3.9783 / 3.9198
    Yellowstone 3 | 4.0831 / 3.9658 | 3.9449 / 3.8701 | 3.8795 / 3.7972
    Yellowstone 10 | 3.5470 / 3.3974 | 3.4712 / 3.3488 | 3.3925 / 3.2592
    Yellowstone 11 | 3.7878 / 3.7128 | 3.7035 / 3.6554 | 3.6311 / 3.5751
    Yellowstone 18 | 4.1495 / 4.0350 | 3.9773 / 3.9154 | 3.9082 / 3.8403
    Average | 3.9511 / 3.8401 | 3.8280 / 3.7563 | 3.7579 / 3.6783
    Table 13. Achieved results for the uncalibrated images of the CCSDS dataset.
    Images/Configurations | N = 8, B = 1 (AC / PAQ8) | N = 8, B = 2 (AC / PAQ8) | N = 16, B = 2 (AC / PAQ8)
    Uncalibrated (16-bit)
    Yellowstone 0 | 6.7889 / 6.6034 | 6.4564 / 6.4012 | 6.4085 / 6.3411
    Yellowstone 3 | 6.6854 / 6.4961 | 6.3553 / 6.2757 | 6.3106 / 6.2183
    Yellowstone 10 | 6.0635 / 5.8620 | 5.8397 / 5.7096 | 5.7754 / 5.6309
    Yellowstone 11 | 6.3023 / 6.1701 | 6.0889 / 6.0418 | 6.0329 / 5.9723
    Yellowstone 18 | 6.7909 / 6.5840 | 6.4235 / 6.3504 | 6.3715 / 6.2865
    Average | 6.5262 / 6.3431 | 6.2328 / 6.1557 | 6.1798 / 6.0898
    Uncalibrated (12-bit)
    Hawaii | 2.9533 / 2.9041 | 2.8748 / 2.8434 | 2.7965 / 2.7616
    Maine | 3.0746 / 2.9835 | 2.9528 / 2.9030 | 2.9005 / 2.8413
    Average | 3.0140 / 2.9438 | 2.9138 / 2.8732 | 2.8485 / 2.8015
    Table 14. Comparison with other lossless compression methods (calibrated images). The results are reported in bits per sample (BPS).
    Methods/Scenes (Yellowstone Calibrated) | 0 | 3 | 10 | 11 | 18 | Average
    3D-MBLP + PAQ8 (N = 16, B = 2) | 3.92 | 3.80 | 3.26 | 3.58 | 3.84 | 3.68
    3D-MBLP + PAQ8 (N = 8, B = 2) | 3.99 | 3.87 | 3.35 | 3.66 | 3.92 | 3.76
    3D-MBLP + PAQ8 (N = 8, B = 1) | 4.09 | 3.97 | 3.40 | 3.71 | 4.04 | 3.84
    3D-MBLP + AC (N = 16, B = 2) | 3.98 | 3.88 | 3.39 | 3.63 | 3.91 | 3.76
    3D-MBLP + AC (N = 8, B = 2) | 4.04 | 3.95 | 3.47 | 3.70 | 3.98 | 3.83
    3D-MBLP + AC (N = 8, B = 1) | 4.18 | 4.08 | 3.55 | 3.79 | 4.15 | 3.95
    FL | 3.96 | 3.83 | 3.40 | 3.63 | 3.94 | 3.75
    FL# | 3.91 | 3.79 | 3.37 | 3.59 | 3.90 | 3.71
    LUT# | 4.82 | 4.62 | 3.96 | 4.34 | 4.84 | 4.52
    LAIS-LUT# | 4.48 | 4.31 | 3.71 | 4.02 | 4.48 | 4.20
    TSP-W1 | 3.94 | 3.81 | 3.37 | 3.60 | 3.92 | 3.73
    TSP-W2 | 3.99 | 3.86 | 3.42 | 3.67 | 3.97 | 3.78
    Table 15. Comparison with other lossless compression methods (16-bit uncalibrated images). The results are reported in bits per sample (BPS).
    Methods/Scenes (Yellowstone Uncalibrated) | 0 | 3 | 10 | 11 | 18 | Average
    3D-MBLP + PAQ8 (N = 16, B = 2) | 6.34 | 6.22 | 5.63 | 5.97 | 6.29 | 6.09
    3D-MBLP + PAQ8 (N = 8, B = 2) | 6.40 | 6.28 | 5.71 | 6.04 | 6.35 | 6.16
    3D-MBLP + PAQ8 (N = 8, B = 1) | 6.60 | 6.50 | 5.86 | 6.17 | 6.58 | 6.34
    3D-MBLP + AC (N = 16, B = 2) | 6.41 | 6.31 | 5.78 | 6.03 | 6.37 | 6.18
    3D-MBLP + AC (N = 8, B = 2) | 6.46 | 6.36 | 5.84 | 6.09 | 6.42 | 6.23
    3D-MBLP + AC (N = 8, B = 1) | 6.79 | 6.69 | 6.06 | 6.30 | 6.79 | 6.53
    FL | 6.23 | 6.10 | 5.65 | 5.86 | 6.32 | 6.03
    FL# | 6.20 | 6.07 | 5.60 | 5.81 | 6.26 | 5.99
    LUT# | 7.14 | 6.91 | 6.26 | 6.69 | 7.20 | 6.84
    LAIS-LUT# | 6.78 | 6.60 | 6.00 | 6.30 | 6.82 | 6.50
    TSP-W1 | 6.23 | 6.09 | 5.59 | 5.83 | 6.28 | 6.01
    TSP-W2 | 6.27 | 6.13 | 5.64 | 5.88 | 6.32 | 6.05
    Table 16. Comparison with other lossless compression methods (12-bit uncalibrated images). The results are reported in bits per sample (BPS).
    Methods/Images | Hawaii | Maine | Average
    3D-MBLP + PAQ8 (N = 16, B = 2) | 2.76 | 2.84 | 2.80
    3D-MBLP + PAQ8 (N = 8, B = 2) | 2.84 | 2.90 | 2.87
    3D-MBLP + PAQ8 (N = 8, B = 1) | 2.90 | 2.98 | 2.94
    3D-MBLP + AC (N = 16, B = 2) | 2.80 | 2.90 | 2.85
    3D-MBLP + AC (N = 8, B = 2) | 2.87 | 2.95 | 2.91
    3D-MBLP + AC (N = 8, B = 1) | 2.95 | 3.07 | 3.01
    FL | 2.64 | 2.72 | 2.68
    FL# | 2.58 | 2.68 | 2.63
    LUT# | 3.26 | 3.45 | 3.35
    LAIS-LUT# | 3.05 | 3.19 | 3.12
    TSP-W1 | 2.61 | 2.71 | 2.66
    TSP-W2 | 2.62 | 2.74 | 2.68

    © 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons by Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
