Demystifying the Physically Unclonable Function (PUF)

By Kristopher Ardis

Executive Director

Maxim Integrated

February 01, 2019

PUF stands for physically unclonable function. From a technical standpoint, PUF exploits minute differences in silicon that appear from chip to chip to create a binary value.

The physically unclonable function (PUF) is not a new idea: security experts have been excited by the technology for years. However, no reliable and cost-effective integrated circuits (ICs) with integrated PUF technology were available until recently. Now the technology is available to any application developer.

Security experts say PUF is great, but why? And what does it really do? You may not be working on a design to secure the nuclear launch codes, so is there a place for PUF technology in your design?

What is PUF?

PUF stands for physically unclonable function (or features), but this definition may still leave you scratching your head. “Unclonable” sounds like a good word, but it’s kind of odd to think about a “function” being unclonable. 

From a technical standpoint, PUF exploits minute differences in silicon that appear from chip to chip (even two chips side-by-side on the same wafer) to create a binary value. Get enough PUF cells together and you can create arbitrary-length numbers with the statistical properties of randomly generated numbers. The chip-to-chip variances can be exploited in such a way that those arbitrary-length numbers are practically unique no matter how many chips are manufactured.

If that was a bit too technical, let’s tell the story a little bit differently: every piece of silicon manufactured has minor differences if you look down far enough into the physical design of the chip.  While the manufacturing processes of silicon are highly accurate, there are still variances in each circuit manufactured. PUF technology amplifies those minor differences to create a unique number on each chip. Think of it like our DNA: everyone’s DNA is a unique expression of that person (let’s leave out the idea of identical twins in this metaphor, since there isn’t really an equivalent in the silicon world). 
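
To make the idea a little more concrete, here is a toy software model (purely illustrative; this is not how a real PUF is built, and every name and number in it is made up): treat each PUF “cell” as an element whose value is set by a small, fixed, chip-specific bias plus a little read noise. Read enough cells and you get a bit string that repeats reliably on the same chip but differs from chip to chip.

```python
import random

class ToyPUF:
    """Conceptual model only: each 'cell' has a fixed, chip-specific bias
    (standing in for manufacturing variance) plus a little read noise."""

    def __init__(self, num_cells=256, seed=None):
        rng = random.Random(seed)        # the seed stands in for the chip's physical variance
        self.bias = [rng.uniform(-1.0, 1.0) for _ in range(num_cells)]

    def read_bits(self, noise=0.05):
        """Read the PUF: a cell returns 1 if its bias plus read noise is positive."""
        rng = random.Random()
        return [1 if b + rng.gauss(0, noise) > 0 else 0 for b in self.bias]

# Two "chips" built the same way still produce different, repeatable values.
chip_a, chip_b = ToyPUF(seed=1), ToyPUF(seed=2)
a1, a2, b1 = chip_a.read_bits(), chip_a.read_bits(), chip_b.read_bits()

same_chip = sum(x == y for x, y in zip(a1, a2)) / len(a1)
diff_chip = sum(x == y for x, y in zip(a1, b1)) / len(a1)
print(f"same chip, two reads: {same_chip:.0%} of bits agree")   # close to 100%
print(f"two different chips:  {diff_chip:.0%} of bits agree")   # close to 50%
```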

So in the end, PUF gives you a unique number. But is that really so special? Maxim, for example, has manufactured a line of products (originally from Dallas Semiconductor) for the 1-Wire bus that include a 64-bit number lasered or programmed into each device and guaranteed to be unique. We’ve been doing this for around 30 years. Why is PUF special?

The comparison of PUF to a unique ID may seem obvious at first, but this is not where the strength of PUF technology lies. A unique ID inherently wants to be publicly known: it is like an address, a way for someone else to find you. PUF technology is most valuable when it is used for secret keys—instead of being the address that you don’t mind lots of people knowing, PUF is like an insanely long password you have to enter to get into your house, ensuring that only the trusted people can access the house.

Why is Key Protection Important?

I’ll explain why PUF is a superior technology for key protection in a moment, but first let’s talk about WHY key protection is important. 

If you want to secure anything in the digital world, you’re going to need to implement some form of encryption to do it. With the right cryptographic tools, you can:

  • Implement confidentiality: Protect communications from point A to point B
  • Implement integrity: Detect whether messages received have been tampered with
  • Implement authentication: Prove that a device belongs to a particular group or network

Consider the example of confidentiality: this is what most people think about when they think about encryption. A message is encrypted to ‘scramble’ it, so that even someone who intercepts the communication cannot understand the message.

Figure 1: Encryption from point A to point B.
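
As a minimal sketch of that confidentiality scenario, the snippet below uses AES-GCM from the third-party Python cryptography package. The key is generated in software here purely for illustration; in a PUF-based part it would come from the PUF circuitry instead.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Shared secret key: generated in software here for illustration only.
key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                        # unique per message
message = b"meter reading: 42.7 kWh"
ciphertext = aesgcm.encrypt(nonce, message, None)   # point A: scramble the message

# Point B: only a holder of the same secret key can recover the plaintext.
assert aesgcm.decrypt(nonce, ciphertext, None) == message
```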

Integrity is concerned with determining whether a message has been tampered with. Here you could use a cryptographic algorithm called a hash, which takes an arbitrary-length data stream as input and outputs a fixed-length number. A hash should be infeasible to reverse…a hash run over an input message AND a secret key (a keyed hash), and verified by the recipient, can prove that the message was not tampered with by a third party.

Figure 2: Integrity proves that a message was not tampered with, much like the tamper seal on a bottle of medicine.
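
A minimal sketch of that keyed-hash idea, using Python’s standard hmac and hashlib modules. The key and message here are placeholders; in practice the key would be a protected secret, such as one derived from a PUF.

```python
import hmac, hashlib

secret_key = b"placeholder-shared-secret"      # illustrative only
message = b"open valve 3 at 06:00"

# Sender computes a keyed hash (HMAC) over the message.
tag = hmac.new(secret_key, message, hashlib.sha256).digest()

# Recipient recomputes the HMAC and compares in constant time.
def verify(msg: bytes, received_tag: bytes) -> bool:
    expected = hmac.new(secret_key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, received_tag)

print(verify(message, tag))                    # True: message intact
print(verify(b"open valve 3 at 05:00", tag))   # False: tampering detected
```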

Authentication techniques are used to prove that someone or something is part of your group and can be trusted. When a new device wants to join a group, the servers of that group must “challenge” it to see if it really belongs. A one-time random number (called a challenge or a nonce) is sent to the new device, and again cryptographic algorithms are executed on the random number and the secret key to produce an answer…if the answer checks out at the server, the new device is admitted to the group.

Figure 3: Much like a signature on a postcard, a signature on a digital message helps assure you that the message comes from the claimed sender.
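
Sketched in the same style, a simple challenge-response exchange: the server sends a random nonce, the device answers with an HMAC over that nonce using the shared secret key, and the server checks the answer. This is only a conceptual sketch; real protocols add device identifiers, counters, and often public-key signatures.

```python
import hmac, hashlib, secrets

shared_key = b"placeholder-device-secret"       # illustrative only

# Server side: issue a one-time random challenge (nonce).
challenge = secrets.token_bytes(16)

# Device side: prove knowledge of the key without ever revealing it.
response = hmac.new(shared_key, challenge, hashlib.sha256).digest()

# Server side: recompute the expected answer and compare in constant time.
expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
print("device admitted" if hmac.compare_digest(expected, response)
      else "device rejected")
```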

Note that in all three of these scenarios, the secret key plays a pivotal role: if an attacker knew the secret key, they could impersonate a valid device, create fraudulent messages or tamper with legitimate ones, or listen to sensitive communications at will. Depending on the value of the data being transferred or the value of joining a certain group, attackers may be willing to spend significant time and money to figure out your secret keys. The attackers could be anyone from criminals trying to take control of financial transaction equipment, to competitors trying to create devices that can operate within your infrastructure or reverse-engineer your work to create clones, to governments trying to snoop on communications or disrupt critical industrial equipment.

The secret keys are the most important asset in any security scheme, and attackers with considerable resources may be willing to invest heavily to extract them. In silicon, the secret keys have to *be* somewhere, typically in some kind of memory cell. Let’s consider how secret keys might be stored and the benefits and drawbacks of each approach.

How are Keys Stored Today?

In some systems, secret encryption keys are stored in external memories—non-volatile memories like NOR/NAND flashes or special external memory chips like battery-backed SRAMs. When the main system microcontroller or microprocessor needs to use that secret key, it must read it over a memory bus, where that key is transmitted in the clear. To ensure the protection of that key, some systems implement extensive and expensive physical security methods to make it difficult for an attacker to monitor those clear-text transmissions: hiding the memory bus in the middle layers of a PCB, implementing sensor circuitry around the sensitive area to detect attacks, and filling the empty space of the device with a plastic filler that is exceedingly difficult to remove. These kinds of solutions are expensive and can often be defeated by patient attackers. 

A better solution is to store secret keys in the same place they will be used. In embedded systems, this commonly means storing those keys in on-chip non-volatile memory: they are programmed into flash or EEPROM, or possibly even manufactured into a ROM. This allows the secret keys to remain on chip; however, there are still physical attacks that can access those keys: it is possible to decapsulate the plastic package around a piece of silicon and microprobe the memory buses.

Figure 4: Decapsulated chip.

Sometimes, ICs will implement a bit more physical security on the chip itself, perhaps by adding top layers to the silicon above the memories so they aren’t directly accessible by a microprobe. While this does make the attack a little more difficult, it is still possible to carefully remove those layers of silicon and then access the deeply embedded charges in those flash or EEPROM memories to extract secret key information.

The challenge with flash, EEPROM, and ROM technology is that when power is removed from the system, the secret key contents remain stored in those memories, and there is no power available to erase those memories in the event an attack is detected. A vast improvement over this technology is battery-backed SRAM, in particular when combined with tamper-detection sensors. In systems implementing this kind of technology, super-low-power sensors run off a small battery to detect various physical attacks, and erase the small battery-backed SRAM that stores the secret keys if an attack is detected. If an attacker removes the battery from the system to disable the sensors, this act also removes power from the SRAM and the secret key information is lost. While there are still some much more difficult attacks that can look at unpowered SRAM cells and try to determine a memory “imprint,” this is the preferred technology used in many government and financial applications today. However, in addition to being susceptible to the memory imprint inspection, there is one big drawback to this technology: the battery. It adds cost, size, and even environmental concerns. 

How Does PUF Mitigate Those Drawbacks?

A good PUF implementation addresses all of the concerns of conventional key storage:

  • Under normal operating conditions it is inherently non-volatile, so no battery or other permanent power source is needed. While the number read from any IC’s PUF circuitry should have good random characteristics (in that each bit cannot be used to predict the value of any other bit in the PUF bit sequence), the PUF in the IC will reliably produce the same result every time. 
  • PUF circuitry should be resistant to physical inspection. By amplifying the minute imperfections in the physical silicon itself, PUF circuits are inherently highly sensitive. Attempts to physically probe the PUF implementation will dramatically change the characteristics of that PUF circuit, and result in a different number being produced. 
  • The key from PUF can be generated only when required for a cryptographic operation and can be instantaneously erased thereafter (see the sketch just after this list).
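
In use, that last point typically looks something like the sketch below. read_puf_key() is a hypothetical function standing in for whatever mechanism a given IC provides; the point is simply that the key exists in working memory only for the duration of the operation and is wiped immediately afterward. (Python cannot truly guarantee erasure of memory, so treat this purely as an illustration of the usage pattern.)

```python
import hmac, hashlib

def read_puf_key() -> bytearray:
    """Hypothetical stand-in for a device-specific call that regenerates the
    key from the PUF circuitry; the placeholder bytes here are NOT a real key."""
    return bytearray(32)                        # 32 zero bytes, illustration only

def sign_message(message: bytes) -> bytes:
    key = read_puf_key()                        # key is generated only when needed
    try:
        return hmac.new(bytes(key), message, hashlib.sha256).digest()
    finally:
        for i in range(len(key)):               # wipe the working copy right away
            key[i] = 0

tag = sign_message(b"firmware block 7")
```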

This is a powerful combination: it provides the bill-of-materials (BOM) and environmental benefits of a non-volatile memory, with the security of a tamper-reactive SRAM. In other words, the secret is always there in the circuit, but you can never look at it. It is a bit like the observer effect in quantum physics: you can know the particle is there, but the mere act of observing it changes its behavior. 

The implementation of a good PUF technology isn’t enough on its own to assure key security: once that secret key is in use, the cryptographic implementation must also be resistant to side-channel attacks. But PUF does help ensure that key storage in the embedded device is not the weak point that attackers focus their efforts on.

If PUF is So Great, Why Isn’t Everyone Using It?

You may not see many PUF implementations in the market, so the natural question is: if it really solves all these problems, why not? The answer is that PUF technology is very difficult to implement well. In addition to security system expertise, you need analog circuit expertise to harness the minute variances in silicon AND do it reliably. Some PUF implementations plan for a certain amount of marginality in the analog circuit: they create a PUF field of, say, 256 bits, knowing that perhaps only 50 percent of those PUF features will produce reliable bits, and then mark which features are used on each production part. And because the technology relies on such minor variances, long-term quality can be a concern: will a PUF bit flip under the stresses of time, temperature, and other environmental factors? This mix of security expertise, analog expertise, and quality control makes implementing a good PUF technology a formidable challenge.
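
A rough illustration of that “mark which features are reliable” step, often called enrollment, reusing the toy model from earlier. The thresholds and cell counts are made up, and real implementations add error-correction codes and much more careful analog characterization.

```python
import random

class ToyPUF:
    """Same toy model as before: each cell has a fixed bias plus read noise."""
    def __init__(self, num_cells=256, seed=None):
        rng = random.Random(seed)
        self.bias = [rng.uniform(-1.0, 1.0) for _ in range(num_cells)]
    def read_bits(self, noise=0.05):
        rng = random.Random()
        return [1 if b + rng.gauss(0, noise) > 0 else 0 for b in self.bias]

def enroll(puf, reads=50):
    """Enrollment sketch: keep only cells that read the same bit on every read.
    The resulting mask is stored as non-secret helper data on each part."""
    samples = [puf.read_bits() for _ in range(reads)]
    return [all(s[i] == samples[0][i] for s in samples)
            for i in range(len(samples[0]))]

def regenerate_key_bits(puf, stable_mask):
    """Later key regeneration: read the PUF and keep only the marked cells."""
    return [b for b, ok in zip(puf.read_bits(), stable_mask) if ok]

chip = ToyPUF(seed=7)
mask = enroll(chip)
k1, k2 = regenerate_key_bits(chip, mask), regenerate_key_bits(chip, mask)
print(sum(mask), "of 256 cells marked as reliable")
print("reads match" if k1 == k2 else "residual bit flip (real parts add error correction)")
```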

Is PUF Right for Me?

Hopefully the value of PUF technology for key storage is clearer now, but you may still wonder if the technology is right for you: while Maxim’s ChipDNA products, based on PUF technology, are cost-effective, there is still some price difference between products with ChipDNA PUF technology and products based on other key storage technologies. While each design and each situation is different and calls for different solutions, PUF technology is the best key storage technology available. 

The technology used by attackers does not stop advancing: side-channel analysis used to be limited to academic research, and now there are open-source tools to execute a differential power-analysis attack. Microprobes used to be accessible only to big semiconductor manufacturers, but now they are common in universities. The technology used in attacks becomes more advanced, less expensive, and more accessible over time, just like any other technology we are used to…conventional security technology today will be more susceptible to attack tomorrow. As the most advanced key storage technology available today, PUF can give your applications a longer lifetime before they become vulnerable to these threats. 

In the end, the decision is a cost versus threat tradeoff…for a small cost, PUF technology can help protect your devices from threats for the foreseeable future. 

I manage Maxim's Microcontroller and Software Algorithm business lines. My team is responsible for the definition, promotion, marketing, support, and revenue of those product lines. Our microcontroller lines include high end security products for financial transaction devices and low power micros for IoT, wearable, and medical products.
