Ken Werner of Display Central has a post comparing the benefits of quantum dots to OLEDs in consumer TV applications. Since we're the authority on quantum dot displays here at Nanosys, Ken contacted us for an analysis. Here is the explanation our Ph.D.s gave Ken:
OLEDs use organometallic compounds to emit light. They typically have a central metal atom surrounded by organic ligands, and their decay issues are the same as with typical organic fluorophores. In the excited state these molecules are very reactive to H2O and O2, as well as other small molecules that may be around. Once they react they become different molecules and will no longer fluoresce or phosphoresce to give off light. The bluer the light emission, the higher the energy of the excited state, and the more reactive the excited molecule will be. So blue organic phosphors will have a much shorter lifetime than red phosphors. The burn-in problem seen in OLED displays, which can appear after just several weeks of operation with static content, is a manifestation of blue degrading earlier than green and red.
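The energy argument here is easy to quantify with Planck's relation, E = hc/λ: shorter wavelengths mean more energetic excited states. A quick sketch (the wavelengths are just representative values for red, green and blue emitters, not specific materials):

```python
# Photon energy E = h*c/wavelength: bluer (shorter-wavelength) emission
# corresponds to a higher-energy, and thus more reactive, excited state.
def photon_energy_ev(wavelength_nm):
    h = 6.626e-34   # Planck constant, J*s
    c = 2.998e8     # speed of light, m/s
    joules = h * c / (wavelength_nm * 1e-9)
    return joules / 1.602e-19  # convert joules to electron-volts

# Representative emission wavelengths
for name, nm in [("red", 630), ("green", 530), ("blue", 450)]:
    print(f"{name} ({nm} nm): {photon_energy_ev(nm):.2f} eV")
```

A blue emitter at 450 nm sits almost 0.8 eV above a red one at 630 nm, which is why blue organic emitters are the first to degrade.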
Conventional phosphors like YAG are doped materials. The YAG used in white LEDs is actually cerium-doped YAG: the cerium atom emits the yellow light and is surrounded by a vast amount of YAG. Quantum dots are similar in that a central crystalline semiconductor core is used to confine the holes and electrons of the exciton (analogous to the cerium in YAG), and in our material this is surrounded by a thick shell of a different, lattice-matched semiconductor material (analogous to the YAG). We call this a core-shell quantum dot structure. If the lifetime of our materials is less than that of conventional phosphors, it is typically because we have not made a perfectly lattice-matched shell, which may distort the core and cause defects at the core/shell interface that reduce the quantum yield.
The big difference here is that a perfectly made core-shell quantum dot does not have an intrinsic lifetime failure mechanism, whereas the organometallic compounds are intrinsically reactive to their environment, which makes them prone to shorter lifetimes especially at higher energies such as blue.
Display improvements were once again featured at yesterday’s Apple keynote event. The most obvious improvements may have been the larger display and thinner form factor but most interesting to dot-color are the color claims.
Just like the new iPad, Apple claims that the iPhone 5 can display “44% more color saturation.”
Apple SVP of Worldwide Marketing Phil Schiller talks color saturation at the iPhone 5 keynote
Let’s do some simple math to see how the iPhone 5 stacks up against older iPhones and last week’s color performance claim from Motorola.
So Motorola is still king of the fall 2012 smartphone color saturation, based solely on marketing claims. That said, I wouldn't be surprised if they updated their marketing to say that the Droid Razr Maxx HD offers 28% more color saturation than the iPhone 5 once it hits store shelves in a couple of weeks. I plan to measure all of the announced devices to verify these marketing claims, but for now, this is all we have to go on.
Apple also claimed to be able to match the sRGB standard used in TV and movies. With the addition of the iPhone 5, nearly all of Apple’s flagship products (with the exception of the MacBook Air) now meet this standard. This means content should look very consistent across all Apple devices and may open up the possibility for serious content creation apps in iOS.
It also means we're only just now catching up to an average CRT display from circa 1990, since the sRGB standard is based on the capabilities of the phosphor materials used in CRTs. Even so, the new displays cover only about 35% of the range of colors the human eye can see. There's still plenty of room for improvement in display color performance (as well as in content delivery standards, but that is a whole different post). Hopefully, if we keep up this kind of pace of display enhancements, next year we'll start to see a push beyond the limits of last century's color standards.
† We’re using the long outdated CIE 1931 color space and NTSC 1953 gamut standards here since this is clearly Apple’s reference when they claim 44% more saturation and sRGB coverage. 50% * 1.44 = 72% and 72% of NTSC 1953 gamut in the CIE 1931 color space is also called the sRGB color gamut.
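The arithmetic in that footnote is easy to sanity-check; a two-line sketch, assuming the older iPhones covered roughly 50% of NTSC 1953 in CIE 1931:

```python
# Apple's claim: the iPhone 5 shows "44% more color saturation".
baseline_gamut = 0.50   # assumed: older iPhone at ~50% of NTSC 1953
claimed_gain = 1.44     # "44% more color saturation"

iphone5_gamut = baseline_gamut * claimed_gain
print(f"iPhone 5: {iphone5_gamut:.0%} of NTSC 1953")  # 72%, i.e. the sRGB gamut
```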
‡ It is not clear which color space Motorola is referencing; we are assuming CIE 1931/NTSC 1953 for ease of comparison.
For many who are new to the world of display measurement, the prevalence of two distinct, but often-interchanged color spaces can be a source of confusion. Since my recent post about the color performance of Apple’s new iPad, a number of people have asked about this topic, so I thought it would be worth a closer look.
In the world of displays and color images, there exists a variety of separate standards for mapping color, CIE 1931 and CIE 1976 being the most popular among them. Despite its age, CIE 1931, named for the year of its adoption, remains a well-worn and familiar shorthand throughout the display industry. As a marketer of high color gamut display components, I can tell you from firsthand experience that CIE 1931 is the primary language of our customers. When a customer tells me that their current display “can do 72% of NTSC,” they implicitly mean 72% of NTSC 1953 color gamut as mapped against CIE 1931.
“…we strongly encourage people to abandon the use of the 1931 CIE color diagram for determining the color gamut… The 1976 CIE (u’,v’) color diagram should be used instead. Unfortunately, many continue to use the (x,y) chromaticity values and the 1931 diagram for gamut areas.”
So why are there two standards, and why are we trying to declare one of them obsolete? Let me explain.
What is a color space?
First, a little background on color spaces and how they work.
While there are a number of different types of color spaces, we are specifically interested in chromaticity diagrams, which measure only color quality, independent of other factors like luminance. A color space is a uniform representation of visible light: it maps all of the colors visible to the human eye onto an x-y grid and assigns them measurable values. This allows us to make uniform measurements and comparisons between colors and, when used to create color gamut standards, offers certainty that images look the same from display to display.
In 1931, the Commission internationale de l’éclairage or CIE (International Commission on Illumination in English) defined the most commonly used color space. Here’s a look at the anatomy of the CIE 1931 color space:
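For readers who want to work with these numbers directly, the (x, y) coordinates of the 1931 diagram are just a projection of the CIE XYZ tristimulus values that throws away luminance. A minimal Python sketch (the D65 white point values are standard reference data):

```python
def xyz_to_xy(X, Y, Z):
    """Project CIE XYZ tristimulus values onto the 2-D chromaticity
    diagram; absolute luminance drops out of the (x, y) coordinates."""
    total = X + Y + Z
    return X / total, Y / total

# D65 standard illuminant, normalized so Y = 1
x, y = xyz_to_xy(0.95047, 1.0, 1.08883)
print(round(x, 4), round(y, 4))  # (0.3127, 0.329), the familiar D65 white point
```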
What makes a good color space?
An effective color space should map with reasonable accuracy and consistency to the human perception of color. Content creators want to be sure that the color they see on their display is the same color you see on yours.
This is where the CIE 1931 standard falls apart. The work of David MacAdam in the 1940s showed that the variance in perceived color, when mapped in the CIE 1931 color space, is not uniform from color to color. In other words, if you show a group of people the same green and map what they see against the CIE 1931 color space, they will report seeing a wide range of different hues of green. Show the same group a blue image, however, and there will be much more agreement on what blue they are seeing. This unevenness creates problems when trying to make uniform measurements with CIE 1931.
The result of MacAdam's work is visualized by the MacAdam ellipses. Each ellipse represents the range of colors respondents reported seeing when shown a single color, which is the dot at the center of each ellipse:
A better standard
It was not until 1976 that the CIE settled on a significantly more uniform color space. If we reproduce MacAdam's work using the new standard, variations in perceived color are minimized, and the MacAdam ellipses mapped on a CIE 1976 diagram appear much more evenly sized and circular, as opposed to oblong. This makes color comparisons using CIE 1976 significantly more meaningful.
The difference of the CIE 1976 color space, particularly in blue and green, is immediately apparent. As an example, let's look at the color gamut measurements of the iPad 2 and the new iPad that we used in an earlier article. Both charts do a reasonably good job of conveying the new iPad's increased gamut coverage at all three primaries, but the 1976 chart captures the dramatic perceptual difference in blue (from aqua to deep blue) that you actually see when looking at the displays side by side:
The increased gamut of the new iPad is worth testing. Next time you find yourself in an Apple store, grab an iPad 2, hold it alongside a new iPad, Google up a color bar image and see the difference for yourself.
So, why do we still use CIE 1931 at all? The only real answer is that old habits die hard. The industry has relied on CIE 1931 since its inception, and change is coming slowly.
Fortunately, CIE 1931’s grip is loosening over time. The ICDM’s new measurement standard should eventually force all remaining stragglers to switch over to the more accurate 1976 standard. Until then, you can familiarize yourself with a decent color space conversion calculator, such as the handy converter we built just for this purpose:
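If you'd rather script the conversion than use a web calculator, the mapping from CIE 1931 (x, y) to CIE 1976 (u′, v′) is simple algebra; a minimal Python version of the standard formula:

```python
def xy_to_uv(x, y):
    """Convert CIE 1931 (x, y) chromaticity to CIE 1976 (u', v')."""
    d = -2 * x + 12 * y + 3
    return 4 * x / d, 9 * y / d

# The sRGB blue primary: the 1976 diagram stretches the blue region,
# so differences near blue become far more visible than in 1931
u, v = xy_to_uv(0.15, 0.06)
print(round(u, 4), round(v, 4))  # (0.1754, 0.1579)
```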
Since the debut of the iPad in 2010, tablets have become the ultimate content consumption device, but many still wonder if they'll ever be capable of replacing notebooks for portable content creation.
While tablets may never truly replace notebooks for all of our content creation needs, especially typing intensive ones, a new crop of apps for iOS and Android are certainly making a case for it.
(via Brian Taylor from CandyKiller: A little doodle made with the glorious new #Paper app)
Recent creative apps like Paper by fiftythree, Adobe's Photoshop Touch and Apple's iPhoto for iOS have just started to scratch the surface of the creative capabilities of powerful mobile devices. These apps show us that mobile creativity, when done right, can harness the unique properties of a touchscreen handheld device to offer new capabilities that a laptop cannot duplicate. Drawing with a stylus in Paper, for example, feels remarkably precise and expressive because of a neat gesture trick: the speed of your pen controls the thickness of the line. Similarly, in Photoshop Touch and iPhoto, editing your photos by actually putting your hands on them, while less precise than a keyboard and mouse, can be a revelation for broad stroke tasks like blending two images.
Tablets clearly have the processing power, the battery life and display resolution necessary to become serious creative tools, but there’s one thing missing: color. Creative professionals normally work on displays capable of showing a range of colors that is as much as 60% wider than even the latest “high color saturation” iPad. Artists need to see the content they are creating in the same vibrant colors they see in the real world. Improving the color performance on mobile devices will make tablets truly worthy of a place in any creative professional’s regular workflow.
Back to share more of our display measurement results from the new iPad. A side note before we jump in: this is a somewhat technical post. If you aren't familiar with the general workings of an LCD, this great live teardown by Bill Hammack is worth watching: http://youtu.be/jiejNAUwcQ8
There are two ways to improve the color gamut performance of an LCD: you can make the backlight better or the color filters better. In both approaches the goal is the same: to make the red, green and blue light as pure as possible. The LCD mixes these three primary colors to make all the other colors you see on screen; thus, the purer the individual primary colors are, the better all colors on screen are. Based on our measurements, it looks like Apple focused on the color filters for this new display. Let's take a closer look.
In the color spectrum chart below, you can see the result of some of the color filter changes that Apple made. Notice how the red peak (on the right, in the 600 nm range) has moved to a longer wavelength. This change in wavelength means reds on the new iPad will have a deeper hue, will be less orange and more distinctly red.
Another interesting thing to look at here is the blue peak at about 450 nanometers. In our last post, we noted that blue got the biggest boost with the new display. However, the blue peak did not change in wavelength or in shape, only amplitude (or brightness), which does not affect color. So what explains the dramatic improvement in blue seen on the new display?
The above spectrum isn't telling the whole story. It was measured from a white screen, in other words a screen with all three primary colors turned on. We see very different results when looking at a screen showing a blue image, where only the blue subpixel filters are open.
This chart shows us only the light that is allowed to pass through the blue color filters. We can see the same blue peaks that we know from the white spectrum, but there’s also some extra light getting through – notice the two small tails to the right of the blue peak? That’s green light from the backlight leaking through the blue filter.
This means that when the iPad display needs blue light to make an image, some of that green comes along with the blue whether you want it or not. You will notice that the green blip is smaller on the new iPad, meaning less green is leaking through and a purer blue is displayed. Take a look at the comparison shot here and you can see how just a hint of that green leakage is making the iPad 2’s blue (on left) appear slightly aqua by comparison.
Blue color filter comparison: iPad 2 on left, new iPad on right
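One way to put a number on this kind of leakage is to integrate the blue-subpixel spectrum and ask what fraction of its power falls outside the blue band. A toy sketch with a made-up Gaussian spectrum (illustrative shapes only, not actual iPad measurements):

```python
import math

# Model blue-subpixel output as a blue peak near 450 nm plus a small
# green leakage bump near 540 nm (illustrative, not measured data)
def blue_subpixel_spectrum(nm):
    blue = math.exp(-((nm - 450) / 15.0) ** 2)
    green_leak = 0.08 * math.exp(-((nm - 540) / 20.0) ** 2)
    return blue + green_leak

total = sum(blue_subpixel_spectrum(nm) for nm in range(400, 701))
leaked = sum(blue_subpixel_spectrum(nm) for nm in range(501, 701))
leak_fraction = leaked / total
print(f"green leakage: {leak_fraction:.1%} of blue-subpixel power")
```

Shrinking that leakage bump, as Apple appears to have done, pulls the displayed blue away from aqua and toward a purer blue primary.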
Leakage like this happens because it's very difficult to make a truly perfect color filter, and even harder to make one that is efficient enough for a mobile display. The reason is basic physics: a better color filter is narrower, allowing only the desired color through. However, the narrower you make the filter, the less light it lets through, and less light means the display has to be driven harder to maintain brightness. This directly affects battery life, partially explaining the new iPad's need for a larger battery. Based on our experience, we estimate that the color improvements alone probably cause the new display to consume about 20-30% more power than the iPad 2's screen.
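The narrower-filter-costs-light tradeoff can itself be sketched numerically. Assuming an idealized Gaussian filter over a flat backlight (a toy model; real filter and backlight spectra are more complicated):

```python
import math

def transmitted_fraction(fwhm_nm, center_nm=450):
    """Fraction of a flat backlight passed by a Gaussian color filter
    of the given full-width-half-max, summed over a 400-500 nm window."""
    sigma = fwhm_nm / 2.355  # convert FWHM to standard deviation
    total = 0.0
    for nm in range(400, 501):
        total += math.exp(-((nm - center_nm) ** 2) / (2 * sigma ** 2))
    return total / 101  # normalize by window width

wide, narrow = transmitted_fraction(60), transmitted_fraction(30)
print(f"halving the filter width cuts transmitted light to {narrow / wide:.0%}")
```

Roughly half the light is lost each time the passband is halved, which is exactly the brightness-versus-purity tradeoff the display designer faces.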
Perfecting the color performance of a display is a critical engineering challenge, and it's worth highlighting because it's one of those tiny details that Apple is so great at. Just this small improvement in light leakage from the iPad 2 to the new iPad accounts for a stunning amount of the improvement in color performance and, most importantly, it makes for a richer user experience.
Last Friday Apple released an updated version of one of their hottest products, called simply “the new iPad.” Central to the update is a brand new display featuring significantly more resolution and color saturation. Since the resolution bit has been covered to death by others and we’re interested in color here we thought we’d take a closer look at Apple’s color saturation claims.
Using the new iPad, particularly next to an “iPad 2,” the reds and greens are noticeably better, but the blues in particular are quite striking. It actually makes the blue on the iPad 2 seem more ‘aqua’ than pure blue. The color data bears this out. According to our measurements, Apple has significantly increased the saturation in all three primaries, most notably in blue:
The key color claim that Apple made on stage at the iPad announcement was that the new iPad has 44% more color saturation. What they mean by that of course depends on the context. There are a couple of different color measurement standards that Apple could be gauging the performance of the new iPad against such as CIE 1931 or CIE 1976.
An easy way to think about these standards is as something like the temperature scales we are all familiar with, Celsius and Fahrenheit: they are different ways of communicating the same information. Saying "it's 5 degrees warmer today" means something very different to users of each system, and it's much the same with color spaces, only we're measuring how the eye perceives color, not how warm it is outside.
We should also note that when people in the display industry talk about color saturation as a percentage, it is common practice to refer to a color gamut standard within a CIE color space. There are many color gamut standards in use today, including NTSC, sRGB, Adobe RGB 1998, DCI-P3 and Rec. 709. Each of these standards is a subset of a CIE color space. They are typically used by content creators to ensure the compatibility of their work from device to device. For example, if I create an image in Adobe RGB, I would like to display it on a screen that can show all of the colors in Adobe RGB, to make sure it accurately reproduces all the colors in my original shot.
Based on our measurements it looks like Apple is referring to the NTSC gamut within a color space. But which color space do they mean?
A 44% improvement within the CIE 1931 color space would give the new iPad the equivalent of the sRGB standard used by HDTV broadcasts, Blu-Ray and much of the web. Given the significance of achieving that standard, some thought Apple must have been trying to say “sRGB” without confusing consumers by describing the meaning of various color standards.
According to our data, this is not the case. The new iPad only manages about 26% more saturation than the iPad 2 when measured against the NTSC gamut in the CIE 1931 color space. However, the unit we measured showed a 48% increase in saturation when measured in the CIE 1976 color space, so that must be Apple's frame of reference.
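These percentages come from comparing gamut triangle areas. A minimal sketch using the shoelace formula with the NTSC 1953 primaries and a hypothetical sRGB-like display (the display primaries here are illustrative, not our actual iPad measurements):

```python
def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a gamut triangle in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# NTSC 1953 R, G, B primaries in CIE 1931 (x, y)
ntsc = [(0.67, 0.33), (0.21, 0.71), (0.14, 0.08)]
# Hypothetical display with sRGB-like primaries
display = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]

coverage = triangle_area(*display) / triangle_area(*ntsc)
print(f"gamut area: {coverage:.0%} of NTSC 1953")
```

Run the same ratio with (u′, v′) coordinates instead of (x, y) and the numbers shift, which is exactly why a "44% improvement" claim depends on the color space you measure in.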
Measurements and standards aside, the new display looks great. The improvement in color performance will greatly enhance the user experience and, as we discussed yesterday, shows what Apple is betting on for the functionality of future devices.
In our next post we will explain exactly how Apple achieved this improved color performance and look at ways they can improve the next generation.
Which brings us to an immovable object meeting an irresistible force. Apple doesn't make new devices which get worse battery life than the version they're replacing, but they also don't make new devices that are thicker and heavier. LTE networking — and, I strongly suspect, the retina display — consume more power than do the 3G networking and non-retina display of the iPad 2. A three-way tug-of-war: 4G/LTE networking, battery life, thinness/weight. Something had to give. Thinness and weight lost: the iPad 3 gets 4G/LTE, battery life remains unchanged, and to achieve both of these Apple included a physically bigger battery, which in turn results in a new iPad that is slightly thicker (0.6 mm) and heavier (roughly 0.1 pound/50 grams, depending on the model).
50 grams and six-tenths of a millimeter are minor compromises, but compromises they are, and they betray Apple’s priorities: better to make the iPad slightly thicker and heavier than have battery life suffer slightly.
This point can't be overstated. For Apple, the quality of the display, both in terms of resolution and color gamut, is so critical to the experience of using an iPad that they were willing to make some major tradeoffs. In this case they not only ended up with a slightly thicker, heavier device, they also used a significantly more expensive part. The end result is a stunning display that amplifies everything that was already great about the iPad 2, so it looks like a tradeoff worth making.
We took some color performance measurements of our new iPad this morning and we’ll be posting more details shortly.
If there is one thing we can take away from CES this year, it’s that displays with better color performance are on the horizon. Two of the largest attention getters at CES this year were new displays by Sony and LG. LG unveiled a 55″ OLED and Sony displayed a new “Crystal LED” technology. While both of these displays exhibited impressive performance, including a wider color gamut, the Sony TV was a prototype only, and the LG display is expected to be available later in the year at a hefty price.
As Hubert of Ubergizmo points out, these technologies offer great promise; however, cost will be their determining factor. OLED, which has been on the horizon for what seems like forever, still looks like it will not be available to the masses for quite a while, certainly not in large formats and not at a manageable price point for the consumer.
By contrast, QDEF offers an affordable, consumer-ready solution today. Display designers who are looking for the next new thing will find that they can have a screen with high brightness, deep color, high-DPI resolution and deep blacks in a display that's as big as they want using QDEF, with no increase in cost. This is because QDEF has been designed as a drop-in diffuser sheet replacement to leverage the billions of dollars of existing installed manufacturing capacity and two-plus decades of improvements to LCD performance. With QDEF, manufacturers can easily replace the diffuser sheet in their displays with a sheet of QDEF and gain over 100% of NTSC color performance.
I attended CES 2012 in Las Vegas earlier this month, where I spent most of the week showing off a pair of QDEF-hacked iPads. I also found some time to check out other high color performance display technology, and I'll have more on that in a later post. For now, here's a quick review of a couple of QDEF coverage highlights from CES:
First up is a video interview I did with Bill Wong from Electronic Design. It was great to see these guys again and do a bit of a deeper dive on the quantum dot nanotechnology that makes QDEF go:
I also spoke with Jaymi Heimbuch of Treehugger about QDEF's ability to improve the performance of LCD displays while using less energy and requiring far less capex than OLED:
The technology is as energy efficient as LED technology, which means it is way ahead of OLEDs right now which offer beautiful displays but not necessarily a constant energy savings. In other words, while the future of OLEDs may seem bright (and companies like Samsung are still pursuing OLED displays while others like Sony have dropped out of the race), the future of LEDs is already here and the technology from Nanosys can mean vast improvements without much effort.
If you ever doubted that video games are big business Activision’s recent sales record should be enough to convince you. On its way to reaching $1 billion in sales in just over two weeks with Call of Duty: Modern Warfare 3, Activision smashed every entertainment sales record.
Every entertainment sales record.
That means books, movies and video games. Over its lifetime the franchise has generated in the neighborhood of $6 billion in revenue, which puts it squarely into a Star Wars-level stratosphere as one of the most valuable entertainment properties ever.
What does this have to do with high gamut color display technology?
One of the potential hurdles to widespread adoption of high color gamut display technologies is a lack of content that’s optimized to take advantage of all those extra colors.
With Hollywood-sized blockbuster sales come Hollywood-sized budgets to create rich new universes for gamers to explore. The expanded creative palette that high color gamut technology offers game developers is a perfect fit. What color is the blood of a Martian supposed to be when it explodes, and why limit it to a range of colors typically seen on Earth?
Additionally, on the platform side, electronics manufacturers could take advantage of a push into high gamut displays to differentiate their entire hardware/software ecosystem. We already know that the current PlayStation™ hardware is capable of the xvColor high gamut standard. Pairing that with wide color games and a TV that can show it might prove a useful differentiator for any platform.
Videogames may just be the driving force that finally pushes high gamut displays into the mainstream.