WO2014035144A1 - Display system with image display correction mechanism and method of operation thereof - Google Patents


Info

Publication number
WO2014035144A1
Authority
WO
WIPO (PCT)
Prior art keywords
test sample
known test
display
control unit
compensation model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2013/007734
Other languages
French (fr)
Inventor
Li Tao
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd
Priority claimed from KR1020130102110A (KR20140029252A)
Publication of WO2014035144A1

Abstract

A method of operation of a display system includes: sending a known test sample; retrieving the known test sample; comparing the known test sample and the retrieved known test sample; generating a compensation model based on the comparison for correcting the retrieved known test sample; and providing the compensation model for displaying a corrected display on a device.

Description

DISPLAY SYSTEM WITH IMAGE DISPLAY CORRECTION MECHANISM AND METHOD OF OPERATION THEREOF
An embodiment of the present invention relates generally to a display system, and more particularly to a system for image display.
Modern consumer and industrial electronics, especially devices such as graphical display systems, televisions, projectors, cellular phones, portable digital assistants, and combination devices, are providing increasing levels of functionality to support modern life including image display. Research and development in the existing technologies can take a myriad of different directions.
Image display decisions are made every day in many aspects of commerce. When consumers purchase products, image display can be a major influence on the purchasing decision. Modern commerce includes the ability to purchase over the Internet or through other virtual means, which makes accurate and precise image display even more important, since the displayed image stands in for the physical product. A purchaser who buys a product over the Internet, believing it matches the displayed image, would be disappointed to receive the product and only later realize that the image had not been displayed properly at the time of purchase.
The ability to reproduce precise and accurate image display in a display device is also important in the business of advertising, product packaging and product manufacturing. When such materials are being created, online virtual collaboration systems can be used to review and approve product prototypes of printed images. In these cases, it would be advantageous to use a system or method for reproducing precise and accurate image displays on display devices.
Web video is becoming a very important media type as the Internet becomes part of ordinary life and a major channel for obtaining information. Many types of information are available by watching web videos on a PC, PDA, smart phone, or digital media player. To keep web video clips readily accessible within the constraints of network bandwidth and storage space, they are generally compressed heavily, which results in degraded images and video.
Thus, a need still remains for an image display system with image display correction mechanism to display images. In view of the ever-increasing commercial competitive pressures, along with growing consumer expectations and the diminishing opportunities for meaningful product differentiation in the marketplace, it is increasingly critical that answers be found to these problems. Additionally, the need to reduce costs, improve efficiencies and performance, and meet competitive pressures adds an even greater urgency to the critical necessity for finding answers to these problems.
Solutions to these problems have been long sought but prior developments have not taught or suggested any solutions and, thus, solutions to these problems have long eluded those skilled in the art.
An embodiment of the present invention provides a method of operation of a display system including: sending a known test sample; retrieving the known test sample; comparing the known test sample and the retrieved known test sample; generating a compensation model based on the comparison for correcting the retrieved known test sample; and providing the compensation model for displaying a corrected display on a device.
An embodiment of the present invention provides a method of operation of a display system including: sending a known test sample having a gamut; retrieving the known test sample having a degraded gamut; comparing, with a control unit, the gamut of the known test sample and the degraded gamut of the retrieved known test sample; generating a compensation model based on the comparison for correcting the retrieved known test sample; and providing the compensation model for displaying a corrected display on a device.
An embodiment of the present invention provides a display system, including: a communication unit configured to send a known test sample and configured to retrieve the known test sample; a control unit configured to compare the known test sample and the retrieved known test sample, and configured to generate a compensation model based on the comparison for correcting the retrieved known test sample; and a storage unit configured to provide the compensation model for displaying a corrected display on a device.
Certain embodiments of the invention have other steps or elements in addition to or in place of those mentioned above. The steps or elements will become apparent to those skilled in the art from a reading of the following detailed description when taken with reference to the accompanying drawings.
FIG. 1 is a display system with image display correction mechanism in an embodiment of the present invention.
FIGs. 2a and 2b are examples of images on a display interface of the first device of FIG. 1.
FIG. 3 is an exemplary block diagram of the display system.
FIG. 4 is a control flow of the display system.
FIG. 5 is a flow chart of a method of operation of a display system in a further embodiment of the present invention.
The following embodiments are described in sufficient detail to enable those skilled in the art to make and use the invention. It is to be understood that other embodiments would be evident based on the present disclosure, and that system, process, or mechanical changes may be made without departing from the scope of the present invention.
In the following description, numerous specific details are given to provide a thorough understanding of the invention. However, it will be apparent that the invention may be practiced without these specific details. In order to avoid obscuring the present invention, some well-known circuits, system configurations, and process steps are not disclosed in detail.
The drawings showing embodiments of the system are semi-diagrammatic, and not to scale and, particularly, some of the dimensions are for the clarity of presentation and are shown exaggerated in the drawing figures. Similarly, although the views in the drawings for ease of description generally show similar orientations, this depiction in the figures is arbitrary for the most part. Generally, the invention can be operated in any orientation.
The term "module" referred to herein can include software, hardware, or a combination thereof in the present invention in accordance with the context in which the term is used. For example, the software can be machine code, firmware, embedded code, and application software. Also for example, the hardware can be circuitry, processor, computer, integrated circuit, integrated circuit cores, a pressure sensor, an inertial sensor, a microelectromechanical system (MEMS), passive devices, or a combination thereof.
Referring now to FIG. 1, therein is shown a display system 100 with image display correction mechanism in an embodiment of the present invention. The display system 100 includes a first device 102, such as a client or a server, connected to a second device 106, such as a client or server. The first device 102 can communicate with the second device 106 with a communication path 104, such as a wireless or wired network.
For example, the first device 102 can be any of a variety of display devices, such as a cellular phone, personal digital assistant, a notebook computer, a liquid crystal display (LCD) system, a light emitting diode (LED) system, or other multi-functional display or entertainment device. The first device 102 can couple, either directly or indirectly, to the communication path 104 to communicate with the second device 106 or can be a stand-alone device.
For illustrative purposes, the display system 100 is described with the first device 102 as a display device, although it is understood that the first device 102 can be different types of devices. For example, the first device 102 can also be a device for presenting images or a multi-media presentation. A multi-media presentation can be a presentation including sound, a sequence of streaming images or a video feed, or a combination thereof. As an example, the first device 102 can be a high definition television, a three dimensional television, a computer monitor, a personal digital assistant, a cellular phone, or a multi-media set.
The second device 106 can be any of a variety of centralized or decentralized computing devices, or video transmission devices. For example, the second device 106 can be a multimedia computer, a laptop computer, a desktop computer, a video game console, grid-computing resources, a virtualized computer resource, cloud computing resource, routers, switches, peer-to-peer distributed computing devices, a media playback device, a Digital Video Disk (DVD) player, a three-dimension enabled DVD player, a recording device, such as a camera or video camera, or a combination thereof. In another example, the second device 106 can be a signal receiver for receiving broadcast or live stream signals, such as a television receiver, a cable box, a satellite dish receiver, or a web enabled device.
The second device 106 can be centralized in a single room, distributed across different rooms, distributed across different geographical locations, or embedded within a telecommunications network. The second device 106 can couple with the communication path 104 to communicate with the first device 102.
For illustrative purposes, the display system 100 is described with the second device 106 as a computing device, although it is understood that the second device 106 can be different types of devices. Also for illustrative purposes, the display system 100 is shown with the second device 106 and the first device 102 as end points of the communication path 104, although it is understood that the display system 100 can have a different partition between the first device 102, the second device 106, and the communication path 104. For example, the first device 102, the second device 106, or a combination thereof can also function as part of the communication path 104.
The communication path 104 can span and represent a variety of networks. For example, the communication path 104 can include wireless communication, wired communication, optical, ultrasonic, or the combination thereof. Satellite communication, cellular communication, Bluetooth, Infrared Data Association standard (IrDA), wireless fidelity (WiFi), and worldwide interoperability for microwave access (WiMAX) are examples of wireless communication that can be included in the communication path 104. Ethernet, digital subscriber line (DSL), fiber to the home (FTTH), and plain old telephone service (POTS) are examples of wired communication that can be included in the communication path 104. Further, the communication path 104 can traverse a number of network topologies and distances. For example, the communication path 104 can include direct connection, personal area network (PAN), local area network (LAN), metropolitan area network (MAN), wide area network (WAN), or a combination thereof.
Referring now to FIGs. 2a and 2b, therein are shown examples of images on a first display 202 of the first device of FIG. 1. FIG. 2a depicts a known sample of graphics 204 on the first display 202 of the first device 102, and FIG. 2b depicts a degraded sample of the graphics 204, shown as graphics 214, on a second display 212 of the first device 102. The first display 202 can display graphics 204 that include images or video.
The graphics 204, including images or video, can preferably be formed of multiple pixels 206 having a color or chromaticity. Each pixel 206 or group of the pixels 206 can be measured or mapped using mathematical language, equations, or expressions, such as gamut mapping.
Similarly, the graphics 214, including images or video, can preferably be formed of multiple pixels 216 having a color or chromaticity. The color or chromaticity of the pixels 216 can change or degrade from the color or chromaticity of the pixels 206 through processes such as uploading, downloading, any transfer method, or a combination thereof.
Color of the graphics 204 can be measured or modeled for each pixel 206 or group of the pixels 206. The measured or modeled color of the graphics 204 can be characterized using a gamut mapping technique. A known test sample of the graphics 204 can be characterized by a gamut map for comparison with a gamut map of the graphics 214 that has been transferred, such as uploaded to or downloaded from a remote device such as a web server, including a web video server.
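The comparison of a known sample against its transferred copy can be sketched as follows. This is a deliberately simplified illustration, not the method disclosed in this application: a real implementation would map pixels into a device-independent space such as CIE XYZ or LUV, whereas this sketch summarizes each RGB channel with only minimum, maximum, and mean values as a crude stand-in for a gamut map.

```python
# Illustrative sketch: characterize a test sample's color distribution so it
# can be compared with a degraded copy. Pixels are (R, G, B) tuples in [0, 255].

def channel_stats(pixels):
    """Return (min, max, mean) per channel as a crude gamut descriptor."""
    stats = []
    for ch in range(3):
        values = [p[ch] for p in pixels]
        stats.append((min(values), max(values), sum(values) / len(values)))
    return stats

def compare_gamuts(original, degraded):
    """Per-channel difference of the descriptors (degraded minus original)."""
    return [tuple(d - o for o, d in zip(os, ds))
            for os, ds in zip(channel_stats(original), channel_stats(degraded))]

original = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
degraded = [(230, 10, 10), (10, 230, 10), (10, 10, 230), (235, 235, 235)]
diff = compare_gamuts(original, degraded)  # one (d_min, d_max, d_mean) per channel
```

A negative maximum difference, as in this toy data, indicates the bright end of the channel has faded in transfer.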
The visual differences in terms of brightness, color, and sharpness between the known test sample and the downloaded known test sample can be seen by ordinary viewers. Generally, the downloaded known test samples have degraded image quality, including decreased brightness, reduced brightness and color contrast, changed colors, and reduced sharpness.
Color fade and shift are among the common image quality problems associated with web video. To solve this problem, an image color correction or improvement technique can address several aspects related to color, including but not limited to chromaticity, luminance, hue, saturation, chroma, intensity, wavelength, purity, or a combination thereof.
For example, color quality of the image or video can be improved by inversely compensating the improper color changes, such as color fade or shift, introduced by transferring (such as uploading or downloading) the graphics 204, based on an appropriate degradation model, such as Color Degradation Model Metadata (CDMM) of the web video server from which the image(s) or video was sourced.
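As an illustration of inverse compensation (an assumption for exposition; the application does not specify the CDMM format here), suppose the degradation of each channel is modeled as a linear transform d = gain · c + offset. The compensation then inverts that transform and clamps the result to the displayable range:

```python
# Illustrative sketch only: invert a per-channel linear degradation model.
# The gain/offset parameterization and the clamping are assumptions, not
# values taken from the patent's Color Degradation Model Metadata.

def compensate(pixel, gains, offsets):
    """Invert d = gain * c + offset per channel, clamped to [0, 255]."""
    corrected = []
    for value, gain, offset in zip(pixel, gains, offsets):
        c = (value - offset) / gain
        corrected.append(max(0, min(255, round(c))))
    return tuple(corrected)

# A degraded pixel whose red channel faded (gain 0.9) and shifted (+5):
restored = compensate((230, 128, 64), gains=(0.9, 1.0, 1.0), offsets=(5, 0, 0))
```

Applying the same parameters to every frame from the same source is what allows one test-sample comparison to correct subsequent video.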
All these image quality degradations can be quantitatively detected and characterized. A preferred characterization and correction method is developed and provided in the display system of the present invention to deal with the problem of changed or degraded colors; it can also improve other aspects of image quality to some extent. For example, with corrected colors, the color contrast can be significantly improved. In addition, brightness contrast as well as image sharpness may be enhanced.
A Color Degradation Model Metadata can be provided by a compute server, the first device 102, or the second device 106 for specific sources, such as web servers, based on a comparison of the known test sample of the graphics 204 with a transferred sample of the graphics 204 from the specific source or web server. The compensation model, which can include the Color Degradation Model Metadata for the specific source or web server, can then be applied to other graphics, including images or video, received from that web server to improve their quality.
It has been discovered that the display system with image display correction mechanism provides consistently improved graphics, including images and video, at least based on application of a compensation model, including a degradation model such as Color Degradation Model Metadata, a correction model, or a combination thereof, applied to images and video from a specific source. The gamut map of the graphics 204 is compared to the degraded gamut map of the graphics 214, characterizing at least color fade and shift for the specific remote device.
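The per-source bookkeeping implied above can be sketched as follows: a compensation model learned once from the known test sample is stored under an identifier for the source (for example, a web server hostname) and reused for later images from that source. The class, the source identifiers, and the linear model parameters are all illustrative assumptions, not structures defined in this application.

```python
# Hypothetical sketch: one compensation model per source, learned once and
# reused. Model parameters here are assumed to be (gains, offsets) tuples
# for a per-channel linear degradation; the real CDMM may differ.

class CompensationStore:
    def __init__(self):
        self._models = {}          # source identifier -> model parameters

    def learn(self, source, model):
        self._models[source] = model

    def correct(self, source, pixel):
        model = self._models.get(source)
        if model is None:
            return pixel           # no model yet: pass the pixel through
        gains, offsets = model
        return tuple(max(0, min(255, round((v - o) / g)))
                     for v, g, o in zip(pixel, gains, offsets))

store = CompensationStore()
store.learn("video.example.com", ((0.9, 1.0, 1.0), (5, 0, 0)))
out = store.correct("video.example.com", (230, 128, 64))
```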
Referring now to FIG. 3, therein is shown an exemplary block diagram of the display system 100. The display system 100 can include the first device 102, the communication path 104, and the second device 106. The first device 102 can send information in a first device transmission 308 over the communication path 104 to the second device 106. The second device 106 can send information in a second device transmission 310 over the communication path 104 to the first device 102.
For illustrative purposes, the display system 100 is shown with the first device 102 as a client device, although it is understood that the display system 100 can have the first device 102 as a different type of device. For example, the first device 102 can be a server having a display interface and a display.
Also for illustrative purposes, the display system 100 is shown with the second device 106 as a server, although it is understood that the display system 100 can have the second device 106 as a different type of device. For example, the second device 106 can be a client device.
For brevity of description in this embodiment of the present invention, the first device 102 will be described as a client device and the second device 106 will be described as a server device. The present invention is not limited to this selection for the type of devices. The selection is an example of the present invention.
The first device 102 can include a first control unit 312, a first storage unit 314, a first communication unit 316, and a first user interface 318. The first control unit 312 can include a first control interface 322. The first control unit 312 can execute a first software 326 to provide the intelligence of the display system 100.
The first control unit 312 can be implemented in a number of different manners. For example, the first control unit 312 can be a processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), or a combination thereof. The first control interface 322 can be used for communication between the first control unit 312 and other functional units in the first device 102. The first control interface 322 can also be used for communication that is external to the first device 102.
The first control interface 322 can receive information from the other functional units or from external sources, or can transmit information to the other functional units or to external destinations. The external sources and the external destinations refer to sources and destinations external to the first device 102.
The first control interface 322 can be implemented in different ways and can include different implementations depending on which functional units or external units are being interfaced with the first control interface 322. For example, the first control interface 322 can be implemented with a pressure sensor, an inertial sensor, a microelectromechanical system (MEMS), optical circuitry, waveguides, wireless circuitry, wireline circuitry, or a combination thereof.
The first storage unit 314 can store the first software 326. The first storage unit 314 can also store the relevant information, such as data representing incoming images, data representing previously presented images, sound files, or a combination thereof.
The first storage unit 314 can be a volatile memory, a nonvolatile memory, an internal memory, an external memory, or a combination thereof. For example, the first storage unit 314 can be a nonvolatile storage such as non-volatile random access memory (NVRAM), Flash memory, disk storage, or a volatile storage such as static random access memory (SRAM).
The first storage unit 314 can include a first storage interface 324. The first storage interface 324 can be used for communication between the first storage unit 314 and other functional units in the first device 102. The first storage interface 324 can also be used for communication that is external to the first device 102.
The first storage interface 324 can receive information from the other functional units or from external sources, or can transmit information to the other functional units or to external destinations. The external sources and the external destinations refer to sources and destinations external to the first device 102.
The first storage interface 324 can include different implementations depending on which functional units or external units are being interfaced with the first storage unit 314. The first storage interface 324 can be implemented with technologies and techniques similar to the implementation of the first control interface 322.
The first communication unit 316 can enable external communication to and from the first device 102. For example, the first communication unit 316 can permit the first device 102 to communicate with the second device 106 of FIG. 1, an attachment, such as a peripheral device or a desktop computer, and the communication path 104.
The first communication unit 316 can also function as a communication hub allowing the first device 102 to function as part of the communication path 104, and is not limited to being an end point or terminal unit of the communication path 104. The first communication unit 316 can include active and passive components, such as microelectronics or an antenna, for interaction with the communication path 104.
The first communication unit 316 can include a first communication interface 328. The first communication interface 328 can be used for communication between the first communication unit 316 and other functional units in the first device 102. The first communication interface 328 can receive information from the other functional units or can transmit information to the other functional units.
The first communication interface 328 can include different implementations depending on which functional units are being interfaced with the first communication unit 316. The first communication interface 328 can be implemented with technologies and techniques similar to the implementation of the first control interface 322.
The first user interface 318 allows a user (not shown) to interface and interact with the first device 102. The first user interface 318 can include an input device and an output device. Examples of the input device of the first user interface 318 can include a keypad, a touchpad, soft-keys, a keyboard, a microphone, an infrared sensor for receiving remote signals, or any combination thereof to provide data and communication inputs.
The first user interface 318 can include a first display interface 330 as an output device. The first display interface 330 can output to the first display 202, such as a projector, a video screen, a speaker, or any combination thereof.
The first control unit 312 can operate the first user interface 318 to display information generated by the display system 100. The first control unit 312 can also execute the first software 326 for the other functions of the display system 100. The first control unit 312 can further execute the first software 326 for interaction with the communication path 104 via the first communication unit 316.
The second device 106 can be used for implementing the present invention in a multiple device embodiment with the first device 102. The second device 106 can provide the additional or higher performance processing power compared to the first device 102. The second device 106 can include a second control unit 334, a second communication unit 336, a second user interface 338, and a second storage unit 346.
The second user interface 338 allows a user (not shown) to interface and interact with the second device 106. The second user interface 338 can include an input device and an output device. Examples of the input device of the second user interface 338 can include a keypad, a touchpad, soft-keys, a keyboard, a microphone, or any combination thereof to provide data and communication inputs. Examples of the output device of the second user interface 338 can include a second display interface 340. The second display interface 340 can output to a second display 212 of FIG. 2, such as a projector, a video screen, a speaker, or any combination thereof.
The second control unit 334 can execute a second software 342 to provide the intelligence to the second device 106 of the display system 100. The second software 342 can operate in conjunction with the first software 326. The second control unit 334 can provide additional performance compared to the first control unit 312.
The second control unit 334 can operate the second user interface 338 to display information. The second control unit 334 can also execute the second software 342 for the other functions of the display system 100, including operating the second communication unit 336 to communicate with the first device 102 over the communication path 104.
The second control unit 334 can be implemented in a number of different manners. For example, the second control unit 334 can be a processor, an embedded processor, a microprocessor, hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), or a combination thereof.
The second control unit 334 can include a second control interface 344. The second control interface 344 can be used for communication between the second control unit 334 and other functional units in the second device 106. The second control interface 344 can also be used for communication that is external to the second device 106.
The second control interface 344 can receive information from the other functional units or from external sources, or can transmit information to the other functional units or to external destinations. The external sources and the external destinations refer to sources and destinations external to the second device 106.
The second control interface 344 can be implemented in different ways and can include different implementations depending on which functional units or external units are being interfaced with the second control interface 344. For example, the second control interface 344 can be implemented with a pressure sensor, an inertial sensor, a microelectromechanical system (MEMS), optical circuitry, waveguides, wireless circuitry, wireline circuitry, or a combination thereof.
A second storage unit 346 can store the second software 342. The second storage unit 346 can also store the information, such as data representing incoming images, data representing previously presented images, sound files, or a combination thereof. The second storage unit 346 can be sized to provide the additional storage capacity to supplement the first storage unit 314.
For illustrative purposes, the second storage unit 346 is shown as a single element, although it is understood that the second storage unit 346 can be a distribution of storage elements. Also for illustrative purposes, the display system 100 is shown with the second storage unit 346 as a single hierarchy storage system, although it is understood that the display system 100 can have the second storage unit 346 in a different configuration. For example, the second storage unit 346 can be formed with different storage technologies forming a memory hierarchal system including different levels of caching, main memory, rotating media, or off-line storage.
The second storage unit 346 can be a volatile memory, a nonvolatile memory, an internal memory, an external memory, or a combination thereof. For example, the second storage unit 346 can be a nonvolatile storage such as non-volatile random access memory (NVRAM), Flash memory, disk storage, or a volatile storage such as static random access memory (SRAM).
The second storage unit 346 can include a second storage interface 348. The second storage interface 348 can be used for communication between the second storage unit 346 and other functional units in the second device 106. The second storage interface 348 can also be used for communication that is external to the second device 106.
The second storage interface 348 can receive information from the other functional units or from external sources, or can transmit information to the other functional units or to external destinations. The external sources and the external destinations refer to sources and destinations external to the second device 106.
The second storage interface 348 can include different implementations depending on which functional units or external units are being interfaced with the second storage unit 346. The second storage interface 348 can be implemented with technologies and techniques similar to the implementation of the second control interface 344.
The second communication unit 336 can enable external communication to and from the second device 106. For example, the second communication unit 336 can permit the second device 106 to communicate with the first device 102 over the communication path 104.
The second communication unit 336 can also function as a communication hub allowing the second device 106 to function as part of the communication path 104, and is not limited to being an end point or terminal unit of the communication path 104. The second communication unit 336 can include active and passive components, such as microelectronics or an antenna, for interaction with the communication path 104.
The second communication unit 336 can include a second communication interface 350. The second communication interface 350 can be used for communication between the second communication unit 336 and other functional units in the second device 106. The second communication interface 350 can receive information from the other functional units or can transmit information to the other functional units.
The second communication interface 350 can include different implementations depending on which functional units are being interfaced with the second communication unit 336. The second communication interface 350 can be implemented with technologies and techniques similar to the implementation of the second control interface 344.
Thefirst communication unit 316 can couple with thecommunication path 104 to send information (e.g. a known test sample) to thesecond device 106 in thefirst device transmission 308. Thesecond device 106 can receive information in thesecond communication unit 336 from thefirst device transmission 308 of thecommunication path 104.
Thesecond communication unit 336 can couple with thecommunication path 104 to send information to thefirst device 102 in thesecond device transmission 310. Thefirst device 102 can receive (or retrieve) information in thefirst communication unit 316 from thesecond device transmission 310 of thecommunication path 104.
The display system 100 can be executed by the first control unit 312, the second control unit 334, or a combination thereof. For illustrative purposes, the second device 106 is shown with the partition having the second user interface 338, the second storage unit 346, the second control unit 334, and the second communication unit 336, although it is understood that the second device 106 can have a different partition. For example, the second software 342 can be partitioned differently such that some or all of its function can be included in the second control unit 334 and the second communication unit 336. Also, the second device 106 can include other functional units not shown in FIG. 3 for clarity.
The functional units in the first device 102 can work individually and independently of the other functional units. The first device 102 can work individually and independently from the second device 106 and the communication path 104.
The functional units in the second device 106 can work individually and independently of the other functional units. The second device 106 can work individually and independently from the first device 102 and the communication path 104.
For illustrative purposes, the display system 100 is described by operation of the first device 102 and the second device 106. It is understood that the first device 102 and the second device 106 can operate any of the modules and functions of the display system 100.
Referring now to FIG. 4, therein is shown a control flow of the display system 100 with the image display correction mechanism. In a manner similar to the description of FIGs. 2a and 2b, at least color fade and shift in images and video are corrected and improved. A known test sample is specifically created so that the colors appearing in the images uniformly cover the whole available gamut range. As a result, on the chromatic plane (LUV or XYZ based color spaces can be used), the data points (each point representing a color existing in the frame/image) are distributed rather uniformly across the whole gamut defined by the sRGB color space.
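By way of a non-limiting illustration, such a test sample can be sketched in Python; the function name and grid resolution below are illustrative assumptions rather than part of the embodiment. Sampling the RGB cube on a uniform grid produces colors whose chromaticities spread over the available gamut:

```python
import numpy as np

def make_test_sample(steps=16):
    # Sample the RGB cube on a uniform steps x steps x steps grid so that
    # the resulting chromaticities spread across the whole sRGB gamut.
    levels = np.linspace(0.0, 1.0, steps)
    r, g, b = np.meshgrid(levels, levels, levels, indexing="ij")
    rgb = np.stack([r.ravel(), g.ravel(), b.ravel()], axis=1)  # (steps**3, 3)
    # Pack the color list into a square image, padding the tail with black.
    side = int(np.ceil(np.sqrt(rgb.shape[0])))
    img = np.zeros((side * side, 3))
    img[:rgb.shape[0]] = rgb
    return img.reshape(side, side, 3)
```

With steps=16 the sample contains 4096 distinct colors, which is small enough to upload quickly yet dense enough to constrain a low-order polynomial gamut model.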
After an image or video is uploaded to a web video server and watched on a website, the color quality of the video image generally appears degraded to various extents. If the published video clip is downloaded and watched on a local PC (or PDA or smart phone, etc.), further color quality degradation may be observed. The color quality degradation can appear to human viewers as color shift or fade while on the chromatic plane it appears as movement of data points from their original positions. If the data points on the chromatic plane are considered as a grid or mesh, then the color degradation can result in a shifting and deformation of the grid, called gamut mapping, which can be characterized using various mathematic methods.
In an embodiment of a characterization and correction method for color degradation, the quantitative comparison between the known test sample and the downloaded known test sample is performed in the chromatic domain. First, the image pixels’ colors (chromaticity values) are computed and recorded on the chromatic plane as (data) points whose xy coordinates are chromaticity values. All the points from the known test sample can be considered to construct a mesh. Similarly, all the points from the downloaded known test sample can also construct a mesh.
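By way of a non-limiting illustration, the per-pixel chromaticity computation can be sketched as follows, assuming linear sRGB input and the standard D65 sRGB-to-XYZ matrix (the function name is an illustrative assumption):

```python
import numpy as np

# Standard linear-sRGB -> CIE XYZ matrix (D65 white point).
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

def chromaticity(rgb):
    # rgb: (N, 3) linear RGB pixels -> (N, 2) CIE xy chromaticity points.
    xyz = rgb @ SRGB_TO_XYZ.T
    total = xyz.sum(axis=1, keepdims=True)
    total[total == 0.0] = 1.0  # keep black pixels at (0, 0) instead of NaN
    return xyz[:, :2] / total
```

The two (N, 2) point sets produced this way for the known and the downloaded sample are the meshes compared in the remainder of the flow.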
At each pixel location, the known test sample and the downloaded known test sample generally have different colors that correspond to two different locations on the color plane. The color change between the two colors, e.g. data points, can be characterized as a simple linear translation or displacement. However, to characterize the color change between the colors of all pixels between the two samples, the mesh concept can be utilized and the color change can be interpreted as a two-dimensional (2D) spatial transformation problem that transforms one mesh to the other. In the field of image processing, the color change of all the colors within a color gamut is called gamut mapping.
To deal with the gamut mapping problem between the known test sample and the downloaded known test sample, a preferred mathematical tool or model is a third (3rd) or fourth (4th) order bivariate polynomial.
For the modeling purpose, a mathematical tool is needed to quantitatively describe the color change behavior. Since color gamut can be a 2D space and gamut mapping can be considered as a 2D spatial transformation, a bivariate third-order (3rd-order) polynomial transfer function is a preferable mathematical model. The bivariate 3rd-order polynomial transfer function is able to describe smooth 2D spatial transformations with good precision, including translation, rotation, scaling, warping, etc. The 3rd-order polynomial transfer function also has appropriate balance between fitting precision and complexity. In principle, second-order (2nd-order) and fourth-order (4th-order) polynomial functions can also be used to characterize the gamut mapping.
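By way of a non-limiting illustration, the ten monomial terms of such a bivariate 3rd-order polynomial can be expanded as a design-matrix helper (the term ordering and function name are illustrative assumptions):

```python
import numpy as np

def poly3_basis(points):
    # points: (N, 2) chromaticity values -> (N, 10) design matrix holding
    # the ten monomials of a bivariate 3rd-order polynomial.
    a1, a2 = points[:, 0], points[:, 1]
    return np.stack([np.ones_like(a1), a1, a2,
                     a1**2, a1 * a2, a2**2,
                     a1**3, a1**2 * a2, a1 * a2**2, a2**3], axis=1)
```

Ten monomials per output coordinate give the twenty coefficients of a full bivariate 3rd-order transfer function, enough to represent translation, rotation, scaling, and smooth warping.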
The color of the known test sample of graphics, including images or video on a target display interface, can be measured or modeled in a block 402 as a group of data points on the chromaticity plane, which usually cover an area of the plane. The area is the first (1st) display gamut.
A processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, a hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), or a combination thereof, such as the first control unit 312 of FIG. 3, the second control unit 334 of FIG. 3, or combination thereof, can measure or model the known test sample of graphics in the block 402.
An external communication device, a communication hub, an end point or terminal unit, transmitter, receiver, transceiver, or combination thereof, such as the first communication unit 316 of FIG. 3, the second communication unit 336 of FIG. 3, or combination thereof, can send, receive, or retrieve the known test sample of graphics in the block 402. A projector, a video screen, a speaker, or any combination thereof, such as the first display 202, the second display 212, or combination thereof, can provide a target display for the known test sample of graphics.
The color of the known test sample of the graphics including images or video received from a remote device on the target display interface can be measured or modeled in a block 404 as a group of data points on the chromaticity plane, which cover an area of the plane. The area is a second (2nd) display gamut.
A processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, a hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), or a combination thereof, such as the first control unit 312 of FIG. 3, the second control unit 334 of FIG. 3, or combination thereof, can measure or model the received or retrieved graphics of the known test sample from a remote device in the block 404. A projector, a video screen, a speaker, or any combination thereof, such as the first display 202, the second display 212, or combination thereof, can provide a target display for the received or retrieved graphics of the known test sample.
An external communication device, a communication hub, an end point or terminal unit, transmitter, receiver, transceiver, or combination thereof, such as the first communication unit 316 of FIG. 3, the second communication unit 336 of FIG. 3, or combination thereof, can send, receive, or retrieve the received or retrieved known test sample of graphics in the block 404.
The 1st and the 2nd gamut of the known test sample are compared in a block 406 to generate a model of gamut mapping (both forward and inverse) which can be stored in a model server such as a Color Degradation Model Metadata server in the block 408.
A processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, a hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), or a combination thereof, such as the first control unit 312 of FIG. 3, the second control unit 334 of FIG. 3, or combination thereof, can compare the known test sample and the retrieved known test sample in the block 406.
A volatile memory, a nonvolatile memory, an internal memory, an external memory, or a combination thereof, such as the first storage unit 314, the second storage unit 346, or combination thereof, can store the model of gamut mapping in a model server in the block 408.
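By way of a non-limiting illustration, the comparison in the block 406 can be sketched as a least-squares fit of the polynomial coefficients mapping the known test sample's chromaticity points onto the retrieved sample's points (function names are illustrative; the 3rd-order bivariate basis of the preferred embodiment is assumed):

```python
import numpy as np

def poly3_basis(points):
    # (N, 2) chromaticity points -> (N, 10) bivariate 3rd-order design matrix.
    a1, a2 = points[:, 0], points[:, 1]
    return np.stack([np.ones_like(a1), a1, a2,
                     a1**2, a1 * a2, a2**2,
                     a1**3, a1**2 * a2, a1 * a2**2, a2**3], axis=1)

def fit_gamut_map(src, dst):
    # Least-squares fit of the (10, 2) coefficient matrix T such that
    # poly3_basis(src) @ T approximates dst.  With src = known test sample
    # and dst = retrieved sample, T is the forward (degradation) model.
    T, *_ = np.linalg.lstsq(poly3_basis(src), dst, rcond=None)
    return T
```

The resulting (10, 2) matrix holds the twenty polynomial coefficients and is small enough to be stored as metadata on a model server.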
If the obtained forward gamut mapping model is applied to the original image or video before uploading, the result can be observed to be very similar to the downloaded video clip, confirming that the gamut mapping correctly describes the color change exhibited on the chromatic plane. Color correction can preferably be implemented in a target or user device as software or hardware with embedded algorithms. A compensation model for a specific source or web server, which can include a degradation model such as the gamut mapping based Color Degradation Model Metadata, can be applied to improve the quality of other graphics including images or video received from the specific web server in a block 410.
An external communication device, a communication hub, an end point or terminal unit, transmitter, receiver, transceiver, or combination thereof, such as the first communication unit 316 of FIG. 3, the second communication unit 336 of FIG. 3, or combination thereof, can send, receive, or retrieve the compensation model in the block 410.
Compensation or correction of an unwanted color change can be implemented by reversing the color change process to restore the color quality to the original condition, such as by performing an inverse 2D spatial transformation (inverse gamut mapping) on the degraded images or video. Solving analytically for the inverse of the polynomial transfer function is difficult, however, since it is a 3rd-order bivariate function.
Alternatively, the inverse transformation can be generated by reversing the roles of the original and the degraded image or video. Using the original image or video as input and the degraded image or video as target, the calculation produces a transfer function for transforming the original image or video into the degraded image or video. Given the degraded image or video as input and the original image or video as the output, the calculation yields a transfer function that transforms the degraded image or video into an image or video very similar to the original. Thus, inverse spatial transformation in target or user devices provides correction or improvement for degraded images or video. The transfer function describing the inverse 2D transformation is still a third-order bivariate polynomial function.
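By way of a non-limiting illustration, this role reversal can be sketched as follows; the small helpers are repeated so the sketch is self-contained, and the synthetic affine "degradation" is a made-up example used only to demonstrate the idea:

```python
import numpy as np

def poly3_basis(points):
    # (N, 2) chromaticity points -> (N, 10) bivariate 3rd-order design matrix.
    a1, a2 = points[:, 0], points[:, 1]
    return np.stack([np.ones_like(a1), a1, a2,
                     a1**2, a1 * a2, a2**2,
                     a1**3, a1**2 * a2, a1 * a2**2, a2**3], axis=1)

def fit_gamut_map(src, dst):
    # Least-squares fit of the (10, 2) coefficient matrix mapping src -> dst.
    T, *_ = np.linalg.lstsq(poly3_basis(src), dst, rcond=None)
    return T

rng = np.random.default_rng(0)
original = rng.uniform(0.1, 0.6, size=(500, 2))  # known test sample chromaticities
degraded = 0.9 * original + 0.03                 # synthetic color fade (illustration)

# Forward model would be fit_gamut_map(original, degraded).
# Inverse model: the very same fit with the roles swapped.
T_inv = fit_gamut_map(degraded, original)
corrected = poly3_basis(degraded) @ T_inv        # apply Equation (2)
```

Because the synthetic degradation here is affine, the fitted inverse reproduces the original chromaticities almost exactly; a real web-induced degradation would be recovered only approximately.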
An exemplary degradation model such as the Color Degradation Model Metadata is provided based on the comparison of color gamuts for specific sources or web servers such as web video servers. Further, an exemplary compensation model can include the exemplary degradation model, an exemplary correction model, or a combination thereof.
The gamut mapping algorithm for web video color correction can also be described in mathematical language, i.e., with mathematical equations or expressions.
Equations (1) and (2) define the gamut mapping relations between the target or original image and the remote or color degraded image, where A = [a1, a2] represents the chromaticity of a pixel in the original image while B = [b1, b2] is the chromaticity of a pixel in the degraded image. The transformation (or transfer function) T in Equation (1) describes the color degradation induced by the issues over the internet. According to the previous discussion, T also represents the forward 2D transformation on the chromatic plane which converts A to B. Conversely, T⁻¹ in Equation (2) represents the inverse transformation from B back to A.
    B = T(A)      (1)

    A = T⁻¹(B)      (2)
According to the spatial transformation based approach for gamut mapping, the degradation model is equivalent to the transformation that describes color change from the known test sample to the downloaded known test sample. The degradation model can also be represented by the transformation T in Equation (1).
T can be numerically determined with the offline process introduced previously. In the preferred embodiment, T is a nonlinear function based on the 3rd (or 4th) order bivariate polynomials. Accordingly, Equation (1) can be re-formulated as in Equation (3) using the matrix format.
    [b1, b2] = [1, a1, a2, a1^2, a1·a2, a2^2, a1^3, a1^2·a2, a1·a2^2, a2^3] × T      (3)

    where T is the 10 × 2 coefficient matrix with rows [t01, t02], [t11, t12], [t21, t22], [t31, t32], [t41, t42], [t51, t52], [t61, t62], [t71, t72], [t81, t82], and [t91, t92].
Equation (2) can be re-formulated in the same way as in Equation (3). The twenty polynomial coefficients (t01 … t91, t02 … t92) can be computed using the chromaticity data contained in the video clips (the original and the color degraded ones) and certain mathematical tools such as, e.g., Matlab® available from The MathWorks, Inc. of Natick, Massachusetts.
If the preferred mathematical method, the 3rd or 4th order bivariate polynomials, is used to define the degradation model, the polynomial coefficients need to be computed or obtained based on the chromatic data obtained from both samples. In addition, certain mathematical tools or methods, e.g. Matlab® or user developed computer programs, are also needed to compute the polynomial coefficients.
As for color correction, it is actually defined by Equation (2), the inverse gamut mapping process. When a web video clip is downloaded or streamed to a computer/digital player, the chromatic values [b1, b2] of each pixel will be processed through Equation (2) (or more specifically the inverse mapping version of Equation (3)) to obtain the original color values [a1, a2] (or the corrected/improved color that is close to [a1, a2]) for the correct or improved color representation.
For color correction or compensation, a degradation model might not be directly needed. An inverse version of the degradation model might be used for color correction purposes. A compensation model can preferably include the inverse version of the degradation model with or without the degradation model.
To correct or compensate color degradation of a test sample, an inverse transformation can be applied to a downloaded known test sample, as formulated in Equation (2). Equation (2) means that the known test sample (the sample with corrected colors) can be obtained by applying the inverse transformation to the downloaded known test sample (the color degraded sample). As with the degradation model, the polynomial coefficients can be computed for the inverse transformation if the preferred mathematical method is used. The only difference is that the input of the gamut mapping (A in Equation (1), B in Equation (2)) and its output (B in Equation (1), A in Equation (2)) are reversed between the two cases: the degradation model and its inverse version (the correction model).
For illustrative purposes, the display system 100 provides color correction or compensation. It is understood that the display system 100 may also provide correction, compensation, or improvement of other display elements or display parameters such as aspect ratio, scale, smoothness, resolution, frame rate, or a combination thereof.
It has been discovered that the display system 100 provides a compensation model, including a degradation model, a correction model, or combination thereof, for specific remote devices based on a known test sample, which enables correction and improvement of any images or videos from a specific remote device 106.
Further, it has been discovered that the gamut of the known test sample of the graphics 204 on the display interface 202 of the first device 102 can be compared to the gamut of the degraded test sample of the graphics 214 on the display interface 202 of the first device 102 to provide the correction and improvement for images or video from the specific remote device 106.
Yet further, it has been discovered that other downloaded samples have the same or similar degradation characteristics as the known test samples. Therefore, the display system 100 can have the proper correction mechanism for various downloaded samples of images or video.
The display system 100 has been described with module functions or order as an example. The display system 100 can partition the modules differently or order the modules differently. For example, the gamut of the graphics 204 of FIG. 2 on the first device 102 of FIG. 1 may be measured after the gamut of the graphics 214 of FIG. 2 on the second device 106 of FIG. 1. The second display 212 of FIG. 2 may not be integral to the second device 106.
The modules described in this application can be hardware implementation or hardware accelerators in the first control unit 312 of FIG. 3 or in the second control unit 334 of FIG. 3. The modules can also be hardware implementation or hardware accelerators within the first device 102 or the second device 106 but outside of the first control unit 312 or the second control unit 334, respectively.
The physical transformation from the compensation model, including a degradation model, a correction model, or a combination thereof, results in movement in the physical world, such as correction or improvement in displaying the graphics 204 or 214. Movement in the physical world results in changes to the images or video through color or chromaticity changes as perceived by the user's eyes.
Referring now to FIG. 5, therein is shown a flow chart of a method 500 of operation of a display system 100 in a further embodiment of the present invention. The method 500 includes: sending a known test sample in a block 502; retrieving the known test sample in a block 504; comparing the known test sample and the retrieved known test sample in a block 506; generating a compensation model based on the comparison for correcting the retrieved known test sample in a block 508; and providing the compensation model for displaying a corrected display on a device in a block 510.
The resulting method, process, apparatus, device, product, and/or system is straightforward, cost-effective, uncomplicated, highly versatile, accurate, sensitive, and effective, and can be implemented by adapting known components for ready, efficient, and economical manufacturing, application, and utilization. Another important aspect of the present invention is that it valuably supports and services the historical trend of reducing costs, simplifying systems, and increasing performance of information technology and consumer electronic products.
These and other valuable aspects of the present invention consequently further the state of the technology to at least the next level.
While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the scope of the included claims. All matters set forth herein or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense.

Claims (15)

  1. A method of operation of a display system comprising:
    sending a known test sample;
    retrieving the known test sample;
    comparing, with a control unit, the known test sample and the retrieved known test sample;
    generating a compensation model based on the comparison for correcting the retrieved known test sample; and
    providing the compensation model for displaying a corrected display on a device.
  2. The method as claimed in claim 1 wherein sending the known test sample includes sending the known test sample for storing the known test sample on a web server.
  3. The method as claimed in claim 1 wherein generating the compensation model includes generating a model based on spatial transformation.
  4. The method as claimed in claim 1 wherein generating the compensation model includes generating a model based on inverse spatial transformation.
  5. The method as claimed in claim 1 wherein providing the compensation model includes providing Color Degradation Model Metadata.
  6. A method of operation of a display system comprising:
    sending a known test sample having a gamut;
    retrieving the known test sample having a degraded gamut;
    comparing, with a control unit, the gamut of the known test sample and the degraded gamut of the retrieved known test sample;
    generating a compensation model based on the comparison for correcting the retrieved known test sample; and
    providing the compensation model for displaying a corrected display on a device.
  7. The method as claimed in claim 6 wherein sending the known test sample includes sending the known test sample having colors that uniformly cover an available gamut range.
  8. The method as claimed in claim 6 wherein generating the compensation model includes generating a model based on a two-dimensional spatial transformation.
  9. The method as claimed in claim 6 wherein generating the compensation model includes generating a model based on inverse spatial transformation with a reverse order of the known test sample and retrieved known test sample.
  10. The method as claimed in claim 6 wherein providing the compensation model includes providing Color Degradation Model Metadata for storage on a model server.
  11. A display system comprising:
    a communication unit configured to send a known test sample and configured to retrieve the known test sample;
    a control unit, coupled to the communication unit, configured to compare the known test sample and the retrieved known test sample, and configured to generate a compensation model based on the comparison for correcting the retrieved known test sample; and
    a storage unit, coupled to the control unit, configured to provide the compensation model for displaying a corrected display on a device.
  12. The system as claimed in claim 11 wherein the communication unit configured to send the known test sample includes the communication unit configured to send the known test sample for storing on a web server.
  13. The system as claimed in claim 11 wherein the control unit configured to generate the compensation model includes the control unit configured to generate a model based on spatial transformation.
  14. The system as claimed in claim 11 wherein the control unit configured to generate the compensation model includes the control unit configured to generate a model based on inverse spatial transformation.
  15. The system as claimed in claim 11 wherein the storage unit configured to provide the compensation model includes the storage unit configured to provide Color Degradation Model Metadata.
PCT/KR2013/007734 (priority 2012-08-28; filed 2013-08-28): Display system with image display correction mechanism and method of operation thereof. Status: Ceased. Published as WO2014035144A1 (en).

Applications Claiming Priority (4)

Application Number | Priority Date | Filing Date | Title
US201261694168P | 2012-08-28 | 2012-08-28 |
US61/694,168 | 2012-08-28 | |
KR1020130102110A (KR20140029252A) | 2012-08-28 | 2013-08-28 | Display system and the method thereof
KR10-2013-0102110 | | 2013-08-28 |

Publications (1)

Publication Number | Publication Date
WO2014035144A1 | 2014-03-06

Family ID: 50183885

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
PCT/KR2013/007734 (Ceased) | Display system with image display correction mechanism and method of operation thereof | 2012-08-28 | 2013-08-28

Country Status (1): WO, WO2014035144A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party

Publication Number | Priority Date | Publication Date | Assignee | Title
US11769466B2 | 2020-04-21 | 2023-09-26 | Douyin Vision Co., Ltd. | Image display method and apparatus, device, and storage medium
WO2024234309A1 * | 2023-05-16 | 2024-11-21 | 京东方科技集团股份有限公司 | Terminal, control method therefor, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party

Publication Number | Priority Date | Publication Date | Assignee | Title
US5874988A * | 1996-07-08 | 1999-02-23 | Da Vinci Systems, Inc. | System and methods for automated color correction
US6058208A * | 1994-10-26 | 2000-05-02 | Matsushita Electric Industrial Co., Ltd. | Color correction device and color correction method
US6313823B1 * | 1998-01-20 | 2001-11-06 | Apple Computer, Inc. | System and method for measuring the color output of a computer monitor
US20030149643A1 * | 2002-02-05 | 2003-08-07 | Akira Aoki | Method of selling a product over the internet
US7102648B1 * | 2000-04-11 | 2006-09-05 | Rah Color Technologies LLC | Methods and apparatus for calibrating a color display

Legal Events

Code | Title | Description
121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 13833072; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: PCT application non-entry in European phase | Ref document number: 13833072; Country of ref document: EP; Kind code of ref document: A1

