by Ryan Oliver, Lake Shore Cryotronics, firstname.lastname@example.org
Although helium was first liquefied by H.K. Onnes in 1908, it wasn’t until the early 1950s that liquid helium became more readily available, following the development and commercialization of the Collins Helium Cryostat. This allowed large-scale production of liquid helium, improving the accessibility of cryogenic research.
This article explores the evolution of the techniques used to measure and control temperature since this time, with a focus on cryogenic temperatures as a tool, rather than techniques used by those working on defining temperature scales. There were many more advancements made over the years than can be addressed in this article, so key advances have been selected.
At the beginning of this time period, platinum resistance temperature detectors (RTDs) were the most common sensing element due to their use in defining the international temperature scales ITS-27 and IPTS-48. These platinum resistors were far different from the industrial-style sensors used in today’s systems. Most often, they were capsule or long-stem thermometers fabricated from high-purity platinum and inserted into special sheaths chosen for the temperature range and environment. Though much larger and more expensive than today’s industrial platinum devices, they were, by some accounts, capable of measuring down to around 2 K with good stability and repeatability. Various companies still make these types of standard platinum resistance thermometers, and calibrations are available from a few national labs, including NIST in the USA and PTB in Germany.
Before a universal standard for platinum sensors began to emerge in the early 1980s (now known as the DIN/IEC 60751 standard), differences in platinum purities meant that each device behaved differently and required initial calibration. This required intercomparison with other, more tedious measurement techniques such as vapor pressure thermometry.

To measure the resistance of these sensors, special potentiometer thermometer bridges were available from the 1940s on, but they were expensive and normally limited to metrology labs. More commonly, once the platinum sensor was calibrated, the resistance of these devices was measured with do-it-yourself systems built from separate current sources and voltmeters. Great attention was paid to the formulas used to convert resistance readings to equivalent temperature values, as automated systems to handle this conversion were not yet available. Over the years, these formulas ranged in complexity from the rather simple Callendar-Van Dusen equation (still in use today) to the complex multi-order polynomials used for the IPTS-68 and the currently accepted ITS-90 scales.
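The Callendar-Van Dusen equation mentioned above can be sketched in a few lines. This is a minimal illustration using the standard DIN/IEC 60751 coefficients for a 100 Ω (Pt100) element; the equation is conventionally written in degrees Celsius.

```python
# Callendar-Van Dusen equation for a platinum RTD, a minimal sketch
# using the standard DIN/IEC 60751 coefficients for a Pt100 element.

R0 = 100.0       # resistance at 0 degC (Pt100)
A = 3.9083e-3    # DIN/IEC 60751 coefficients
B = -5.775e-7
C = -4.183e-12   # the C term applies only below 0 degC

def cvd_resistance(t_c: float) -> float:
    """Resistance (ohms) of a DIN/IEC 60751 platinum element at t_c degrees Celsius."""
    if t_c >= 0.0:
        return R0 * (1.0 + A * t_c + B * t_c**2)
    return R0 * (1.0 + A * t_c + B * t_c**2 + C * (t_c - 100.0) * t_c**3)

# A Pt100 reads 100 ohms at 0 degC and about 138.51 ohms at 100 degC.
```

In practice the inverse problem (resistance to temperature) is what an instrument solves; above 0 °C the quadratic can be inverted in closed form, while below 0 °C the quartic is typically solved numerically or by table lookup.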
Of course, in this time period multiple refrigeration methods existed for attaining temperatures well below the limit of platinum’s usefulness. Germanium semiconductor RTDs (originally available from Scientific Instruments, CryoCal, and Honeywell beginning in the late 1960s and now available from Lake Shore Cryotronics) became popular, providing highly stable and repeatable measurements for temperatures well below 1 K. Off-the-shelf carbon resistors from Allen-Bradley and Speer had also been found useful for this temperature range but were plagued with problems relating to temperature response shifts caused by thermal cycling and helium absorption. This made these devices somewhat inconvenient to use, but they were still one of the better options available through the 1960s and 1970s. Carbon-Glass™ resistors, developed at Corning and commercialized for cryogenic thermometry by Lake Shore, provided a viable alternative by offering a more stable sensor with a wide temperature range and low magnetic field offsets. The discovery that commercially available ruthenium dioxide and bismuth ruthenate film resistors (now commonly referred to as RuOx) were also suitable for this range added yet another viable RTD option.
These negative temperature coefficient sensors (germanium, carbon, carbon-glass, and RuOx RTDs) all provided extremely high resolution at temperatures below the useful range of platinum. However, they required more complex instrumentation with selectable current excitation levels to avoid self-heating at lower temperatures and loss of resolution at higher temperatures. Many instruments were capable of this, including DC potentiometers (like the Leeds and Northrup K-3), oscillators paired with lock-in amplifiers, and even AC resistance bridges for high-resolution, low-power cryogenic measurements.
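The need for selectable excitation follows directly from the I²R dissipation in the sensor. A short sketch makes the point, using illustrative (not measured) NTC resistance values and an assumed 0.1 µW self-heating budget: as resistance climbs steeply at low temperature, the allowable excitation current shrinks by orders of magnitude.

```python
from math import sqrt

def max_excitation_current(resistance_ohm: float, power_budget_w: float) -> float:
    """Largest excitation current keeping I^2 * R dissipation within the budget."""
    return sqrt(power_budget_w / resistance_ohm)

# Illustrative NTC resistances at three temperatures, 0.1 uW self-heating budget:
power_budget = 1e-7
for temp_k, r_ohm in [(300.0, 100.0), (4.2, 10_000.0), (0.3, 1_000_000.0)]:
    i_max = max_excitation_current(r_ohm, power_budget)
    print(f"{temp_k:>6.1f} K: R = {r_ohm:>9.0f} ohm, I_max = {i_max * 1e6:.2f} uA")
```

A single fixed excitation cannot serve this whole range: a current safe at 0.3 K would leave almost no signal at 300 K, hence the multi-range current sources in the instruments described above.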
Luckily, many of the measurement solutions for RTDs were easily adapted for a new entrant to the cryogenic sensor market in the late 1960s: the p-n junction diode. Although they made use of a different temperature dependence mechanism (forward bias voltage rather than ohmic resistance), they still required an excitation current, and a resulting voltage was measured to determine temperature. Diodes had the advantage of simpler instrumentation. Only one current excitation level was required for the entire useful temperature range, and the voltages generated by these diodes were generally orders of magnitude higher than those of RTDs. Coupled with the fact that the first commercial diodes to be released (such as Lake Shore TG-100 GaAs diodes) exhibited a very wide temperature range (2 to 300 K), these sensors became quite popular and allowed Lake Shore to expand into other sensor opportunities.

Over the years, many other diode and transistor configurations have been investigated. The industry has largely settled on GaAlAs diodes (currently only available from Lake Shore) and silicon diodes (available from many suppliers), with silicon enjoying an overwhelmingly large share of the cryogenic temperature sensor market.
Strontium titanate glass capacitor sensors were developed and commercialized shortly after the first diode sensors and showed great promise as a highly stable sensor for use in magnetic fields. Unfortunately, these capacitors were not able to take advantage of existing measurement equipment used for RTDs or diodes. Instead, a popular solution at the time was a precision capacitance bridge assembly (such as the General Radio [now IET Labs, Inc.] Type 1620).
In addition to requiring different instrumentation, capacitance measurements couldn’t take advantage of the 4-lead measurement technique used with RTDs to negate the effect of lead resistance. Lead capacitance could not easily be removed from measurements, making it particularly difficult to determine the capacitance of the sensor itself in isolation from the connecting wires. In time, it was also found that these capacitance sensors suffered from drift issues related to thermal cycling and high voltages, making them useful in very limited situations.
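The 4-lead advantage mentioned above is easy to quantify. In a sketch with illustrative values (a 100 Ω sensor, 5 Ω of cryostat wiring per lead, 1 mA excitation), a 2-wire measurement folds both lead resistances into the reading, while the separate sense leads of a 4-wire measurement carry essentially no current and so drop essentially no voltage:

```python
def two_wire_reading(r_sensor: float, r_lead: float, i_exc: float) -> float:
    """Voltage at the source terminals: both leads carry the excitation current."""
    return i_exc * (r_sensor + 2.0 * r_lead)

def four_wire_reading(r_sensor: float, r_lead: float, i_exc: float) -> float:
    """Separate sense leads carry (ideally) no current; only the sensor drops voltage."""
    return i_exc * r_sensor

# Illustrative values: 100-ohm sensor, 5 ohms per lead, 1 mA excitation.
i = 1e-3
v2 = two_wire_reading(100.0, 5.0, i)   # reads as 110 ohms: a 10% error
v4 = four_wire_reading(100.0, 5.0, i)  # reads as 100 ohms: the sensor alone
```

No equivalent trick exists for a capacitor, since the stray capacitance of the wiring appears in parallel with the sensor rather than in series with a known excitation current.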
Capacitor sensor instrumentation options have improved somewhat over the years, with capacitance bridges still being available, as well as a simplified, less precise capacitance meter available as an option in select Lake Shore temperature controllers. Unfortunately, no amount of instrumentation improvement can solve the inherent problems of the capacitance sensor itself. An alternative sensor with high sensitivity and no magnetic field-induced errors at very low temperatures has yet to be found and remains one of the industry’s great opportunities.

The closest alternative solution to date came during the late 1990s with the development of the thin-film zirconium oxynitride RTD known as Cernox® (manufactured by Lake Shore). The ability to use more common resistance measurement techniques, coupled with the vastly reduced magnetoresistance in comparison to carbon-glass and the wide useful temperature range, resulted in Cernox eventually becoming the preferred choice for applications down to 100 mK.
With so many different sensor types being developed over the years, it is interesting to see the transition from do-it-yourself temperature control setups to all-in-one commercial offerings. By merging current sources and voltage measurement with output heater power, the cryogenic temperature controller quickly became indispensable for cryogenic research.

Purely analog devices were the first to be released, usually focusing on a specific sensor type, allowing optimized measurement performance. On-instrument temperature conversion was yet to be developed, so temperature comparisons were simply made in sensor units. These comparisons to a setpoint drove a heater output to create the first generation of closed-loop control required for an instrument to be classified as a temperature controller. An example of this was the Lake Shore TGC-100, designed specifically for use with GaAs diodes and having just the gain portion of the PID control common in today’s controllers.
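What "just the gain portion" of PID implies can be shown with a toy simulation. This is a sketch under assumed values (a first-order thermal stage with 1 J/K heat capacity, 1 W/K loss to a 4 K bath, and a gain of 10 W/K), not a model of any particular instrument: gain-only control settles, but always short of the setpoint, the steady-state offset that the integral term of later PID controllers removes.

```python
def simulate_p_control(setpoint_k: float, t_env_k: float, kp: float,
                       loss_w_per_k: float, heat_cap_j_per_k: float,
                       dt_s: float = 0.01, steps: int = 20_000) -> float:
    """Euler simulation of a first-order thermal stage under gain-only control."""
    t = t_env_k
    for _ in range(steps):
        heater_w = max(0.0, kp * (setpoint_k - t))  # a heater cannot cool
        dt_dt = (heater_w - loss_w_per_k * (t - t_env_k)) / heat_cap_j_per_k
        t += dt_dt * dt_s
    return t

# Assumed numbers: 20 K setpoint, 4 K bath, Kp = 10 W/K, 1 W/K loss, 1 J/K stage.
final_t = simulate_p_control(20.0, 4.0, 10.0, 1.0, 1.0)
# Settles near (Kp*setpoint + loss*T_env) / (Kp + loss) = 204/11, about 18.5 K,
# short of the 20 K setpoint: the characteristic proportional-only droop.
```

Raising the gain shrinks the offset but risks oscillation, which is exactly the tradeoff the later integral and derivative terms were added to resolve.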
As time went on, the proliferation of PCs, microcontrollers, and digital displays saw instruments continue to grow in capability and usefulness. The addition of internal temperature conversion using calibration tables was perhaps the most significant. Initially these were quite limited, such as the Lake Shore DRC-80C, which supported curves with only 28 points loaded permanently onto a PROM (Programmable Read-Only Memory), requiring the user to remove the instrument top cover to install the PROM for a new sensor.
By interpolating between those points, the temperature in kelvin could be reported and controlled directly by the instrument. This was a giant step forward in usability, but it resulted in an objectively less accurate measurement than what was possible with the previous generation of manual analog instruments. The previous-generation instruments were capable of millikelvin-level accuracy, while this new generation listed accuracies of 100 mK when used with a calibrated sensor.
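The table lookup these instruments performed can be sketched in a few lines. The curve below is hypothetical and abbreviated (real instrument curves held tens to hundreds of breakpoints, as discussed below); the shape loosely follows a silicon-diode-style response, where voltage rises as temperature falls.

```python
from bisect import bisect_left

# Hypothetical abbreviated calibration curve: (sensor reading in V, temperature in K).
CURVE = [
    (0.50, 300.0),
    (0.60, 250.0),
    (0.80, 150.0),
    (1.00, 30.0),
    (1.60, 4.2),
]

def temperature_from_reading(v: float) -> float:
    """Linear interpolation between the two calibration points bracketing v."""
    readings = [r for r, _ in CURVE]
    i = bisect_left(readings, v)
    if i == 0:
        return CURVE[0][1]       # below the table: clamp to the first point
    if i == len(CURVE):
        return CURVE[-1][1]      # above the table: clamp to the last point
    (r0, t0), (r1, t1) = CURVE[i - 1], CURVE[i]
    return t0 + (t1 - t0) * (v - r0) / (r1 - r0)
```

With only a handful of points, the straight-line segments deviate noticeably from the sensor's true curve between breakpoints, which is why both denser tables and better interpolation methods followed.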
The convenience of this feature proved to outweigh the downsides, and development continued to further improve these numbers. It wasn’t long before controllers added support for temperature conversion tables with as many as 200 reprogrammable sensor calibration points in a single curve. Improvements were even made to the interpolation method in some instruments, such as the cubic-spline interpolation used in the Cryo-con 34. Improvements like these have almost completely removed the interpolation error that was present in earlier models.
Ease of integration into larger-scale computer-controlled systems was also added over time, with a host of connectivity options being added to controllers, such as GPIB, RS-232, USB, and Ethernet. Taking this to the extreme was the Scientific Instruments PC 2001, released in the late 1990s. It replaced most of the front interface with a connected PC and was controlled through a GUI.
More recently, manufacturers have begun focusing more heavily on the user experience, a trend being driven in many industries following the success of companies like Apple. An example of this shift can be seen in the Stanford Research Systems CTC100 that makes use of a large touchscreen, just one of many cases of temperature controller manufacturers striving to make their instruments easier to use.
Beyond ease-of-use considerations, keeping pace with the advances and needs of cryostat manufacturers has also been a top priority for controller manufacturers. Parameters such as the number of simultaneously supported sensors and available heater power have continued to grow as cryostats increase in complexity and size. Researchers and cryostat manufacturers have a great range of sensor choices available to them, with the dominant sensors now being Cernox, RuOx, and platinum RTDs; silicon diodes; and thermocouples. Modern instruments must have the flexibility to support all of these sensor types if their manufacturers wish to remain competitive. Given this need for flexibility, most systems today can be satisfied with a single temperature controller with four flexible sensor inputs and at least 100 W of heater power (such as the Lake Shore Model 336). These instruments have come a long way since the early days of cryogenic temperature controllers, greatly reducing the amount of time and effort required to conduct cryogenic research.
Looking forward a little, there still appears to be some appetite in the market for general purpose controllers with additional heater power capabilities. Beyond this, sensor measurement performance has largely improved to the point where additional performance would not be noticed by most users. Future instruments are likely to see improvements in usability and convenience, thereby requiring less time and attention from the user to perform their desired tasks. Some may lament the fact that this usually involves increasing instrument automation, requiring less and less knowledge from the users on how cryogenic measurements and controls are performed. However, at the end of the day, modern research projects are complex enough without adding difficulties associated with attaining a stable and reliable experiment space temperature. Any improvement to ease of use will benefit the field of cryogenic research.

This, of course, assumes that some sort of breakthrough in sensor research doesn’t result in a game changer that requires a redesign of existing controller input stages. The development of a low-cost primary thermometer could easily do this, with various technologies showing promise in recent years, though none have come to fruition. This is where close ties between instrumentation manufacturers and the research community will continue to be vitally important for groundbreaking cryogenic temperature sensor development. ■