
January 15, 2018

Design Hacks IoT Style - Energy Efficiency Advances IoT Technology

About 13% of all energy produced in the US today is used to heat, cool, and ventilate buildings. Heating, ventilation, and air conditioning (HVAC) is the largest consumer of energy in commercial buildings, accounting for 37% of all energy used in that sector. Energy is often wasted heating, cooling, and over-ventilating unoccupied or lightly occupied spaces, because existing building automation and control systems lack the accurate, reliable occupancy information they would need to substantially reduce HVAC energy use.

How familiar does that sound? There is a good chance it describes your situation right now. If you’re interested in learning more about IoT sensor technology and how it can be used to improve your energy-efficiency efforts, we invite you to check out the rest of this blog.

Energy-efficient sensor nodes are critical to the development of the industrial internet of things (IoT). IoT devices are often expected to perform for years on end on a single battery charge, which calls for an implementation that is as energy efficient as possible. That demands a holistic approach to energy optimization, one that reaches from the system level down to process and circuit-design choices.

DESIGN

This is the problem engineering teams face. When optimizing the energy consumption of an IoT sensor node, they must understand how their design decisions interact with one another. There are often hidden intricacies in a design that lead to energy consumption being much higher than anyone expected.

Here is an example: conventional wisdom points to the power consumption of a radio frequency (RF) transmitter as a major contributor to total energy consumption. But even though the receiver may consume much less instantaneous power, system-level decisions that call for the device to listen for intermittent updates from a server can leave the receiver active for far longer periods: tens of seconds per hour versus tens of milliseconds for the transmitter.
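To put rough numbers on that trade-off, here is a minimal sketch in C. The power and timing figures are illustrative assumptions, not measurements from any particular radio.

```c
#include <stdio.h>

/* Illustrative comparison of transmitter vs. receiver energy per hour.
 * All power and timing figures below are assumed for the sake of example. */
int main(void)
{
    const double tx_power_mw = 60.0;   /* assumed transmit power draw       */
    const double tx_time_s   = 0.030;  /* ~tens of milliseconds per hour    */
    const double rx_power_mw = 15.0;   /* assumed receive power draw        */
    const double rx_time_s   = 30.0;   /* ~tens of seconds per hour         */

    double tx_energy_mj = tx_power_mw * tx_time_s;  /* mW * s = mJ */
    double rx_energy_mj = rx_power_mw * rx_time_s;

    printf("TX energy per hour: %.1f mJ\n", tx_energy_mj);   /* ~1.8 mJ */
    printf("RX energy per hour: %.1f mJ\n", rx_energy_mj);   /* ~450 mJ */
    return 0;
}
```

With these assumed numbers, the receiver consumes hundreds of times more energy per hour than the transmitter, despite its lower instantaneous power.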

Because of the long operational life of a typical IoT sensor node, the energy used, even when subsystems are sleeping, can be responsible for a heavy drain on the battery.

Despite the complex interaction between application design and implementation, there are some high-level choices that are expected to lead toward an optimal solution. One of these is the use of integration.

INTEGRATION

Although it is entirely possible to use 2D-IC and 3D-IC multi-chip packaging to assemble a compact IoT sensor node from off-the-shelf components, integration into a single custom integrated circuit (IC) provides not just significant benefits in cost and size but also reductions in power consumption. On traditional PCB-based implementations, communicating with off-chip memories, analogue, and RF circuits often requires Input/Output (I/O) drivers with significant current draw. A single system-on-chip (SoC) makes it possible to remove such power-hungry circuits.
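As a rough illustration of why removing off-chip I/O matters, the sketch below compares the switching energy of an external PCB trace with that of an on-chip node. The capacitance and voltage values are assumptions chosen only to show the order-of-magnitude gap.

```c
#include <stdio.h>

/* Energy drawn from the supply to charge and discharge a capacitive load
 * once is roughly C * V^2. The values below are illustrative assumptions. */
int main(void)
{
    const double offchip_c_f = 10e-12;  /* ~10 pF PCB trace + pad (assumed) */
    const double offchip_v   = 3.3;     /* typical I/O rail                 */
    const double onchip_c_f  = 20e-15;  /* ~20 fF on-chip wire (assumed)    */
    const double onchip_v    = 1.0;     /* typical core rail                */

    double e_off = offchip_c_f * offchip_v * offchip_v;  /* ~109 pJ */
    double e_on  = onchip_c_f  * onchip_v  * onchip_v;   /* ~20 fJ  */

    printf("Off-chip toggle: %.1f pJ\n", e_off * 1e12);
    printf("On-chip toggle:  %.1f fJ\n", e_on * 1e15);
    printf("Ratio: ~%.0fx\n", e_off / e_on);   /* thousands of times larger */
    return 0;
}
```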

Another fundamental consideration for designing energy-efficient IoT sensor nodes is an understanding of the “duty cycle”: the cycle of operation of a device that operates intermittently rather than continuously. Its impact on lifetime energy consumption is just as important as the power consumption of individual elements. Minimizing the power consumption of individual blocks is not enough to guarantee that a remote or inaccessible sensor can operate on a single battery charge for a decade or more. In this situation, every microjoule the node draws from its battery is important. A decade on a typical battery implies an average power budget of only a few microwatts, but that does not mean the system can never consume more than a few microwatts at any point in its lifetime. Such a system would not be able to take measurements and communicate them wirelessly in any practical way.
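For a sense of scale, here is a back-of-the-envelope budget. The battery capacity and voltage are assumptions chosen only for illustration; the point is that the average budget is a few microamps and microwatts, even though individual bursts such as a radio transmission can momentarily draw thousands of times more.

```c
#include <stdio.h>

/* Average current and power budget for ten years on a small battery.
 * Battery capacity and voltage are illustrative assumptions. */
int main(void)
{
    const double capacity_mah = 225.0;                   /* assumed coin-cell capacity */
    const double voltage_v    = 3.0;
    const double hours        = 10.0 * 365.25 * 24.0;    /* ten years in hours */

    double avg_current_ua = capacity_mah * 1000.0 / hours;   /* ~2.6 uA */
    double avg_power_uw   = avg_current_ua * voltage_v;      /* ~7.7 uW */

    printf("Average current budget: %.2f uA\n", avg_current_ua);
    printf("Average power budget:   %.2f uW\n", avg_power_uw);
    return 0;
}
```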

The use of duty-cycle planning makes it possible for the system to perform tasks that take significant amounts of power for short periods, trading those bursts against savings made while much of the system is quiescent. For example, the radio frequency (RF) subsystem of a wireless sensor node only needs to be powered when it is active. This is likely to be one of the most power-hungry parts of the overall design because of the need to supply enough transmitter power to ensure packets of data are delivered reliably; however, the power consumed by the transmitter portion of the RF subsystem is relatively easy to control. Once a packet has been delivered, the transmitter can be shut down. Significant power can still be drawn by subsystems such as the RF receiver that remain active after the transmitter has completed sending.
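The arithmetic behind duty-cycle planning is simple: the average power is the active power weighted by the fraction of time spent active, plus the sleep power for the rest. Below is a minimal sketch, with assumed figures for a radio subsystem.

```c
#include <stdio.h>

/* Average power under duty cycling: P_avg = D * P_active + (1 - D) * P_sleep.
 * The active power, sleep power, and duty cycle are assumed values. */
int main(void)
{
    const double p_active_uw = 30000.0;   /* 30 mW while the radio is on (assumed) */
    const double p_sleep_uw  = 2.0;       /* 2 uW while quiescent (assumed)        */
    const double duty        = 0.001;     /* active 0.1% of the time               */

    double p_avg_uw = duty * p_active_uw + (1.0 - duty) * p_sleep_uw;
    printf("Average power: %.1f uW\n", p_avg_uw);   /* ~32 uW */
    return 0;
}
```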

The RF receiver often needs to remain active because of timing uncertainty, and this type of uncertainty has a major influence on overall energy consumption. Whereas the transmitter has predictable requirements (it only needs to be activated when data is ready to send), the receiver needs to be active for much longer. It must wait for acknowledgments from nodes to which it is sending data, and it must also activate periodically to listen for unsolicited messages. As a result, the overall energy consumption of the RF receiver often exceeds that of the transmitter over the lifetime of the sensor, even though its instantaneous power demand is lower. An efficient design will exploit power-saving techniques such as keeping much of the circuitry in a low-activity state until an RF signal is sensed. Another option is to reduce the amount of time per minute the receiver is active, at the cost of the sensor node’s responsiveness to external commands.
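One way to trade responsiveness for energy is to open the receiver only for short, scheduled listen windows. The sketch below is a simplified firmware loop; radio_rx_enable(), radio_packet_pending(), and the other calls are hypothetical APIs standing in for whatever radio driver a real design would use.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical radio and low-power APIs used only for illustration. */
void radio_rx_enable(void);
void radio_rx_disable(void);
bool radio_packet_pending(void);
void handle_downlink(void);
void deep_sleep_seconds(uint32_t s);

/* Open a short listen window once per minute instead of listening continuously.
 * A longer sleep interval saves energy but delays response to external commands. */
void rx_duty_cycle_loop(void)
{
    const uint32_t sleep_interval_s = 60;   /* assumed listen period */

    for (;;) {
        deep_sleep_seconds(sleep_interval_s);   /* receiver fully off */

        radio_rx_enable();                      /* brief listen window */
        if (radio_packet_pending()) {
            handle_downlink();                  /* process unsolicited message */
        }
        radio_rx_disable();
    }
}
```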

The microprocessor core and its memory subsystem might appear to be essential to all operations, but they too need careful duty-cycle management, because they can demand very high levels of power. The problem for many designs is that software running on the processor is responsible for core tasks such as fetching data from sensors and passing messages to the RF subsystem, which appears to mandate that the processor be fully active whenever sensor inputs need processing.

In many cases, the work performed by the software is very simple: it checks data values to see whether they have crossed a limit that might signal a problem or, more simply, looks for increased activity that needs closer inspection. Waking the processor to handle all of that data is wasteful; the work can easily be off-loaded to custom hardware or a programmable state machine. These circuits consume far less power and can run independently of the processor, so the processor and the memory array can be powered down.
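As an illustration of that off-loading, the sketch below configures a hypothetical analogue window comparator to watch a sensor channel and wake the processor only when a threshold is crossed. The peripheral, register names, base address, and thresholds are invented for the example; a real part’s comparator or sensor-controller block would differ.

```c
#include <stdint.h>

/* Hypothetical window-comparator peripheral; names and fields are invented
 * for illustration and do not correspond to any specific device. */
typedef struct {
    volatile uint32_t CTRL;
    volatile uint32_t LOW_THRESHOLD;
    volatile uint32_t HIGH_THRESHOLD;
} wcomp_t;

#define WCOMP          ((wcomp_t *)0x40002000u)  /* assumed base address */
#define WCOMP_ENABLE   (1u << 0)
#define WCOMP_WAKE_IRQ (1u << 1)

void enter_deep_sleep(void);   /* hypothetical low-power entry point */

/* Let the comparator monitor the sensor while the CPU and SRAM sleep.
 * The CPU only wakes when the reading leaves the "normal" window. */
void arm_threshold_watch(void)
{
    WCOMP->LOW_THRESHOLD  = 1200;   /* assumed ADC counts */
    WCOMP->HIGH_THRESHOLD = 2800;
    WCOMP->CTRL = WCOMP_ENABLE | WCOMP_WAKE_IRQ;

    enter_deep_sleep();             /* processor and memory powered down */
}
```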

CURRENT LEAKAGE

The power drawn during lengthy periods of sleep can be surprising, even when most of the device is powered down. Energy lost through current leakage in subsystems that must remain powered can add up to a heavy overhead over the lifetime of the system, because the time the system spends sleeping can be orders of magnitude longer than the time it spends active. Controlling this calls for design techniques that limit leakage in subsystems such as real-time clocks and interrupt controllers to the nanoamp level. In some applications it seems reasonable to disable interrupts for external events and keep only the real-time clock running; however, that design requires the system to wake at regular intervals to check its inputs, which may incur unwanted energy consumption if there is no change to record. If the long-term energy usage of an interrupt controller is low enough, keeping it active to respond to events as they happen may be more sensible.
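Whether to keep an interrupt controller alive or to wake periodically and poll comes down to arithmetic like the sketch below. Every figure in it is an assumption; the point is only that the answer depends on the leakage and wake-up costs of the particular silicon.

```c
#include <stdio.h>

/* Compare two sleep strategies over one hour:
 *  (a) keep a nanoamp-level interrupt controller powered, or
 *  (b) power everything down and wake periodically to poll inputs.
 * All currents and durations are illustrative assumptions. */
int main(void)
{
    const double irq_ctrl_na     = 50.0;     /* always-on leakage (assumed)     */
    const double wake_current_ua = 500.0;    /* current during a polling wake   */
    const double wake_time_ms    = 2.0;      /* duration of each polling wake   */
    const double wakes_per_hour  = 3600.0;   /* poll once per second            */

    double always_on_uc = irq_ctrl_na * 1e-3 * 3600.0;        /* ~180 uC/hour  */
    double polling_uc   = wake_current_ua * (wake_time_ms * 1e-3)
                          * wakes_per_hour;                    /* ~3600 uC/hour */

    printf("Always-on interrupt controller: %.0f uC per hour\n", always_on_uc);
    printf("Periodic polling:               %.0f uC per hour\n", polling_uc);
    return 0;
}
```

With these assumed numbers the always-on interrupt controller wins comfortably; with a shorter or lower-power wake-up, the periodic approach could come out ahead instead.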

When the processor and memory subsystem are powered down, a key decision is how to manage temporary data. One option is to use specialized retention registers; another is to keep the data in memory cells at the cost of some leakage power. A third is to put important data, such as calibration values, into non-volatile memory (NVM), a type of memory that holds its contents even when power is removed. This allows values to be restored quickly on restart while letting the leakage-prone SRAM arrays (static RAM, a type of memory chip that is faster and requires less power than dynamic memory) and registers be powered down fully until then. But NVM choices are not always clear-cut.
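Here is a minimal sketch of that third option, using hypothetical nvm_write()/nvm_read() helpers standing in for whatever NVM or EEPROM driver a real chip provides: calibration values are saved before the SRAM and registers are powered down, then restored on the next wake.

```c
#include <stdint.h>

/* Hypothetical NVM driver; a real design would use the vendor's API. */
int nvm_write(uint32_t offset, const void *data, uint32_t len);
int nvm_read(uint32_t offset, void *data, uint32_t len);

#define CALIB_NVM_OFFSET 0x0100u   /* assumed location in NVM */

typedef struct {
    int32_t adc_offset;            /* example calibration values */
    int32_t adc_gain;
} calib_t;

static calib_t calib;              /* lives in leakage-prone SRAM */

/* Save calibration before fully powering down SRAM and registers. */
void save_state_before_power_down(void)
{
    nvm_write(CALIB_NVM_OFFSET, &calib, sizeof calib);
}

/* Restore calibration quickly after the next wake-up. */
void restore_state_after_wake(void)
{
    nvm_read(CALIB_NVM_OFFSET, &calib, sizeof calib);
}
```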

Processes that are optimized for low leakage and that support high-density NVM options may not deliver the performance required to support efficient RF circuitry on-chip. The energy needed for I/O drivers that transfer data to an off-chip RF transceiver may offset the power savings and security advantages gained from implementing NVM on-chip. An analysis of the application’s requirements will determine which choice is better for a custom system-on-chip (SoC) solution.

For the portions of the design that will be active for much of the device’s lifetime, attention to detail is essential. Seemingly small details, such as the decision to multiplex inputs into an analogue-to-digital converter (ADC), help determine the best architecture for those circuits. A sigma-delta ADC may initially appear to offer a good trade-off between accuracy, energy efficiency, and silicon area, but it is not well suited to multiplexing. A successive-approximation (SAR) architecture often offers superior performance for industrial sensor signals, and advances in SAR design have pushed the energy per bit of conversion down into the range of tens of femtojoules.
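To see what that figure means in practice, the sketch below estimates the conversion power of a SAR ADC from an assumed energy per conversion step; the resolution, sample rate, and figure of merit are all illustrative assumptions rather than the specification of any particular converter.

```c
#include <stdio.h>

/* Estimate ADC conversion power from a figure of merit:
 *   P = FoM * 2^bits * sample_rate
 * The FoM, resolution, and sample rate below are assumed values. */
int main(void)
{
    const double   fom_j = 50e-15;   /* 50 fJ per conversion step (assumed) */
    const unsigned bits  = 12;
    const double   fs_hz = 100e3;    /* 100 kS/s (assumed)                  */

    double power_w = fom_j * (double)(1u << bits) * fs_hz;
    printf("Estimated conversion power: %.1f uW\n", power_w * 1e6);  /* ~20 uW */
    return 0;
}
```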

Front-end analogue circuits are just as important as the ADC. The amplifiers and buffers that isolate and condition signals before conversion can consume significant power, and they may be active for long periods. Analysis of the specific requirements for bandwidth and accuracy often allows optimizations that reduce the energy of both the front-end circuitry and the ADC.

Tying the subsystems together into a working custom SoC demands power-aware design methodologies to ensure that subsystems and circuits are activated when required and can be powered down without interrupting the operation of other parts of the custom IC that need to stay running. The Unified Power Format (UPF) standard was designed to support such power-aware methodologies, but applying it requires experience and attention to detail at several levels of the design.

Here is a good example:

There may be a logical connection between two subsystems that demands they be active at the same time.

But physical restrictions may call for them to form part of a larger power island, an area of the mixed-signal application-specific integrated circuit (ASIC) that shares a common set of power and ground rails and that may include other subsystems not required at that time. Design verification needs to ensure that the whole island powers up correctly; the final SoC will fail if it does not. Such physical design considerations may call for changes to the power-control architecture if the consumption of the whole island is higher than the allocated budget; for example, subsystems may need to be assigned to different power islands.

Verification also needs to pay attention to on-chip noise, which may point to further optimization of the power-island strategy. For example, a low-noise, low-dropout (LDO) regulator may be used to power sensitive mixed-signal sections while they operate separately. Once the measurements have been taken or RF communications have been completed, a higher-efficiency DC/DC converter may then be reactivated to supply the logic that analyzes incoming data and makes decisions.

In closing, although the requirements for energy efficiency in IoT sensor nodes are broadly understood, the implementation choices, as we have seen, are complex and often subtle. Many factors affect the ideal solution for any given IoT sensor node application, even though a custom SoC will frequently be the best target in terms of energy and overall cost. The ability to call on a technical team with extensive experience in custom mixed-signal IC implementation is therefore key to success. Inibii can help organize that team for you.

If you’re a facility manager, how can IoT technology help your organization improve its energy efficiency? If you’re still trying to figure that out, we can help.

Enjoy blogs like these? Here is one you will love: Defy Complacency in Climatic Matters

