As Samsung recently launched its new Galaxy Note7, complete with a curved active-matrix organic light-emitting diode (AMOLED) screen, it got me thinking about how far LED technology has come in a relatively short time and how important a role it now plays in modern life. While scientists discovered the phenomenon of electroluminescence (where a material gives off light when an electric current or field passes through it) more than one hundred years ago, it wasn’t until the 1960s that LEDs as we know them today started to appear.
However, 1960s LEDs were a far cry from the ones now used to light houses, streets, television screens, and mobile devices. In fact, it’s only really in the last ten years that LEDs have matured enough to underpin the wide-scale luminescent revolution now taking place. So how do LEDs work, and what’s the story of their rapid rise to prominence?
How LEDs work
LEDs work differently from incandescent or fluorescent lighting. Traditional incandescent lighting works by heating a material up to make it glow. Fluorescent lights use electrical current to make mercury vapor emit ultraviolet (UV) light, which in turn causes a layer of phosphor on the inside of the light tube to glow.
LEDs, on the other hand, contain a piece of semiconductor material made up of two distinct sides – one positive and one negative – known as a p-n junction. The positive side has “holes” in it that catch electrons as an electrical current is passed through the material. When the electrons fall into the holes, they release energy as photons of light. The color of the emitted light depends on the energy, and hence the wavelength, of those photons, which is set by the band gap of the semiconductor materials used. As a result, the development of new LED colors is closely tied to materials science and to the engineering of semiconductors that release ever-shorter wavelengths in a precise, controlled way.
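As a rough illustration of the band-gap-to-color relationship described above: the emitted wavelength is approximately λ ≈ hc/E, or about 1240 / E(eV) nanometres. A minimal sketch, using approximate textbook band-gap values rather than precise device data (note that blue LEDs in practice use indium gallium nitride alloys rather than pure GaN, whose gap falls in the near-UV):

```python
# Approximate emission wavelength from a semiconductor's band gap:
# lambda (nm) = h*c / E_gap, roughly 1240 / E_gap(eV).

def bandgap_to_wavelength_nm(e_gap_ev: float) -> float:
    """Convert a band-gap energy in eV to an emission wavelength in nm."""
    HC_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm
    return HC_EV_NM / e_gap_ev

# Illustrative band gaps (eV) for materials mentioned in the text.
materials = {
    "GaAs (near-infrared)": 1.42,
    "GaAsP (red)": 1.90,
    "GaP (green)": 2.26,
    "InGaN (blue; an indium-gallium nitride alloy)": 2.76,
}

for name, e_gap in materials.items():
    # e.g. GaAs -> ~873 nm (infrared), InGaN -> ~449 nm (blue)
    print(f"{name}: ~{bandgap_to_wavelength_nm(e_gap):.0f} nm")
```

The shorter the wavelength you want, the wider the band gap the material must have — which is exactly why blue LEDs, covered later in this piece, were so hard to build.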
LEDs started to appear in the 1960s. James Biard and Gary Pittman filed for a patent in 1961 (granted in 1966) for their diode on a gallium arsenide (GaAs) substrate, which emitted near-infrared light. In 1962, Nick Holonyak developed the first visible-spectrum LED, which emitted red light, using gallium arsenide phosphide (GaAsP).
Large-scale commercial production of LEDs started in the late 1960s. While these early generations weren’t particularly bright or efficient, their robustness and small size meant they became a popular solution for machinery indicator lights.
More colors, more lumens, and Haitz’s law
As semiconductor technology developed, it became possible to reduce the wavelength of the emitted light and hence create new colors of LEDs. M. George Craford led the development of nitrogen-doped gallium arsenide phosphide (GaAsP:N) semiconductors, which enabled the first yellow LEDs in 1971 and also significantly enhanced the brightness of red LEDs. The advent of aluminium gallium arsenide (AlGaAs) red LEDs in the 1980s boosted brightness by a further factor of ten, making them bright enough for use in automobile rear lights.
This rapid increase in the brightness of red LEDs was an early manifestation of Haitz’s law for LEDs, even though the law itself wasn’t publicly articulated until 2000. Similar to Moore’s law, it states that every decade, for a given wavelength (color), the amount of light an LED generates goes up by a factor of twenty, while the cost per lumen drops by a factor of ten. The law has held as a rule of thumb throughout the development of LEDs since the 1960s, and it continues to hold today.
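The compounding effect of Haitz's law is easy to underestimate. A quick sketch of the projection it implies, starting from a hypothetical 0.1-lumen package at $10 per lumen (the starting figures are invented for illustration, not historical data):

```python
# Haitz's law as stated above: per decade, light output per LED package
# rises ~20x while the cost per lumen falls ~10x.

def haitz_projection(flux_lm: float, cost_per_lm: float, decades: int):
    """Project LED flux and cost-per-lumen forward by whole decades."""
    return flux_lm * 20 ** decades, cost_per_lm / 10 ** decades

# Hypothetical starting point: a 0.1 lm package at $10 per lumen.
flux, cost = 0.1, 10.0
for d in range(4):
    f, c = haitz_projection(flux, cost, d)
    print(f"decade {d}: {f:g} lm per package, ${c:g} per lumen")
```

After just three decades, the projected package is 8,000 times brighter at a thousandth of the cost per lumen — which is why LEDs went from indicator lights to room lighting.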
As the development of red LEDs accelerated, green LEDs were also beginning to emerge. Early examples appeared in the mid-1970s, initially using gallium phosphide (GaP). As with their red predecessors, it took some time before truly bright green LEDs were available; Hewlett Packard debuted a high-brightness green LED in 1993, using aluminium indium gallium phosphide (AlInGaP).
As materials technology made it possible to reduce the wavelength of the emitted light further, Herbert Paul Maruska created the first known blue LED in 1972. This used gallium nitride (GaN) with magnesium on a sapphire substrate. However, the company he was working for shifted its priorities and Maruska wasn’t able to continue his work to increase the brightness of his blue LED.
It wasn’t until the 1990s that high-brightness blue LEDs appeared, and the many years of work to create them earned scientists Shuji Nakamura, Isamu Akasaki, and Hiroshi Amano the Nobel Prize in Physics in 2014.
White LEDs: The Holy Grail
The creation of high-brightness blue LEDs was a watershed moment because it made it possible to obtain white light from LEDs: a layer of yellow-emitting phosphor is added on top of a blue LED, converting part of the blue light to yellow via the Stokes shift, and the combination appears white. Like their red, green, and blue predecessors, early white LEDs were inefficient and expensive, but Haitz’s law held, and this changed quickly. White LEDs rapidly became a viable option for lighting everything from living rooms and streets to (most probably) the screen you are reading these words on.
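The Stokes shift mentioned above has an inherent cost: a yellow photon carries less energy than the blue photon that produced it, and the difference is lost as heat in the phosphor. Since photon energy is proportional to 1/λ, the lost fraction is easy to estimate. A back-of-the-envelope sketch, using typical (assumed, not measured) wavelengths for a white LED:

```python
# Energy lost to the Stokes shift when a phosphor absorbs one photon
# and re-emits a longer-wavelength one. Photon energy E is proportional
# to 1/lambda, so the fraction lost as heat is 1 - lambda_in/lambda_out.

def stokes_loss_fraction(absorbed_nm: float, emitted_nm: float) -> float:
    """Fraction of photon energy lost converting one wavelength to another."""
    return 1.0 - absorbed_nm / emitted_nm

# Typical white LED: ~450 nm blue pump, ~570 nm yellow phosphor emission.
loss = stokes_loss_fraction(450.0, 570.0)
print(f"~{loss:.0%} of each converted photon's energy is lost as heat")
```

Even with roughly a fifth of the converted energy lost this way, phosphor-converted white LEDs remain far more efficient than incandescent bulbs, which waste most of their input as heat.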
Where to next?
LED technology has evolved incredibly fast over the last few decades, going from the standby light on your television to the light behind the display itself. With Haitz’s law continuing to hold, expect to see better, brighter, and more affordable LEDs permeating every corner of our lives. So the next time you buy a smartphone, computer, TV, or light bulb, think back to the earlier versions you’ve bought, and marvel at what an incredible difference LEDs have made to the size and power requirements of artificial light in a remarkably short space of time.