Automation in the Industrial IoT is Reinventing the User Interface

By European Editors

Contributed By DigiKey's European Editors

One of the premises behind the Internet of Things is that it will bring people and technology closer together, evolving the relationship between man and machine. Increased automation will undoubtedly be a major benefit of the Industrial IoT, but it will not remove the need for operators. In the home, ‘things’ may become more intuitive, but there will be a greater need to give context to that intuition through an interface that doesn’t represent a barrier.

Advancements in the graphical aspect of user interfaces are apparent. As higher resolution displays are integrated into everyday devices, they enable a more graphically rich user experience. The physical aspect of the interface is also evolving, bringing the two closer together in a way that is even more intuitive to the user. The keyboard is perhaps the most widely deployed form of user interface in use today, but it imposes an inherent layer of abstraction. The latest technologies are removing that abstraction, bringing man and machine ever closer.

Feel the feedback

No discussion about user interface technology would be complete without looking at capacitive touch sensing. It is a technology synonymous with the smartphone revolution and has, to some extent, suffered from that association. Beyond the phone, tablet, and in-car navigation system, the adoption of capacitive touch sensing as a technology for general purpose user interfaces has yet to realize its full potential.

As a concept, it works across many media, not just displays. However, the lack of haptic feedback may be partly to blame for its slow uptake, at least in some applications. But that, too, could be changing. Haptic feedback, in this context, means a mechanical sensation artificially created to mimic the feeling of depressing a mechanical button.

One way of generating that effect is by using a linear resonant actuator (LRA), a device that vibrates when driven at its inherent resonant frequency. Part of the challenge with driving LRAs is that the resonant frequency can vary with temperature, age, or simple production variability. The DRV2605L-Q1 from Texas Instruments is an automotive-qualified haptic driver for LRAs and eccentric rotating masses (ERMs) used to create vibrations in user interfaces. The device is supplied with a library of over 100 effects, licensed from Immersion Corporation. It integrates circuitry to apply and control overdriving and braking, two techniques used to improve the haptic experience. Figure 1 shows a simplified block diagram of the device.

Simplified block diagram of DRV2605L-Q1 haptic driver from Texas Instruments

Figure 1: A simplified block diagram of the DRV2605L-Q1 haptic driver from Texas Instruments.

The DRV2605L-Q1 uses the back EMF of the actuator as part of a closed-loop control system that offers extremely flexible control, managed over an I2C interface or a PWM signal. TI says that its patent-pending Smart-Loop control algorithm simplifies the input waveform, while offering automatic transition to open-loop operation if the LRA used doesn’t generate a back EMF. A PWM drive waveform is generated internally when using open-loop control. The device is also able to convert an audio waveform into meaningful haptic effects.
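The closed-loop idea can be illustrated with a short sketch: estimate the actuator's actual resonant frequency from the period between back-EMF zero crossings, then nudge the drive frequency toward it. This is a hypothetical Python simulation of the general technique, not TI's Smart-Loop algorithm; the function names and the 0.5 tracking gain are illustrative assumptions.

```python
# Illustrative auto-resonance tracking for an LRA driver (assumed model,
# not TI's Smart-Loop implementation).

def zero_crossing_period(samples, sample_rate_hz):
    """Estimate the oscillation period (seconds) from rising zero crossings."""
    crossings = [
        i for i in range(1, len(samples))
        if samples[i - 1] < 0 <= samples[i]
    ]
    if len(crossings) < 2:
        return None  # no usable back EMF detected
    spans = [b - a for a, b in zip(crossings, crossings[1:])]
    return (sum(spans) / len(spans)) / sample_rate_hz

def track_resonance(drive_hz, back_emf_samples, sample_rate_hz, gain=0.5):
    """Nudge the drive frequency toward the measured resonant frequency.

    If no back EMF is seen, keep the last drive frequency -- the
    open-loop fallback behavior the article describes.
    """
    period = zero_crossing_period(back_emf_samples, sample_rate_hz)
    if period is None:
        return drive_hz
    measured_hz = 1.0 / period
    return drive_hz + gain * (measured_hz - drive_hz)
```

Driven repeatedly, the loop converges on the actuator's true resonance even as it drifts with temperature or age.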

Noise cancelling technology

Another aspect of developing a capacitive touch sensitive interface is interference caused by electrical noise. The changes in measured capacitance, depending on the method used, can be in the range of picofarads, making the process extremely susceptible to noise. The solution is often a combination of dedicated circuitry and advanced algorithms.

Although it is technically possible to generate the signals and detect the changes in capacitance using ‘general purpose’ peripherals, many microcontrollers targeting capacitive touch applications now offer variants that include dedicated hardware for touch sensing. In some cases the capacitive sensing methodology complements the technology employed by the MCU, as is the case with Cypress Semiconductor’s CapSense technology as featured in its programmable system-on-chip (PSoC) devices (Figure 2 shows the PSoC architecture). These programmable devices can be configured to create both digital and analog peripherals, offering a high level of design flexibility. Cypress has extended this to create CapSense, which couples a switched capacitor technique with a delta-sigma modulator and uses Cypress’ Capacitive Sigma Delta (CSD) sensing algorithm to convert a sensing current to a digital code. This patented approach delivers high sensitivity, even in noisy environments, at proximity distances of up to 30 cm.

Diagram of programmable system-on-chip concept from Cypress Semiconductor

Figure 2: The programmable system-on-chip concept from Cypress Semiconductor supports configurable digital and analog functions on the same device.

The PSoC 5LP, based on an ARM® Cortex®-M3 core, offers up to 62 CapSense sensors and includes the company’s SmartSense auto-tuning technology. Using a low-power mode that consumes just 300 nA, the PSoC 5LP can be used to add capacitive touch sensing to a wide range of devices, including those powered by batteries.
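The charge-balancing principle behind sigma-delta capacitive sensing can be sketched in a few lines. The model below is a deliberately simplified first-order simulation of the general technique, not Cypress's CSD implementation; all names and capacitance values are illustrative assumptions.

```python
# Simplified first-order sigma-delta model of capacitance-to-count
# conversion (illustrative only -- not the Cypress CSD algorithm).

def raw_count(c_sensor_pf, n_cycles=1024, c_ref_pf=20.0):
    """Return how many cycles the modulator 'fired' over n_cycles.

    Each cycle, the switched-capacitor front end injects a charge
    proportional to the sensor capacitance; whenever the integrator
    exceeds the reference, a fixed reference charge is bled off and
    the event is counted. The count therefore encodes the ratio
    C_sensor / C_ref as a digital code.
    """
    integrator = 0.0
    count = 0
    for _ in range(n_cycles):
        integrator += c_sensor_pf      # charge in from the sensor
        if integrator >= c_ref_pf:
            integrator -= c_ref_pf     # reference discharge event
            count += 1
    return count
```

A finger touch adds a few picofarads to the sensor, raising the count; firmware then compares the count against a running baseline plus a noise threshold to declare a touch.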

3D sensing

While capacitive touch sensing doesn’t technically require contact between the user and the sensing surface, proximity is still relevant. The next wave of user interfaces could move beyond close proximity sensing by increasing the distance between the user and the sensing surface. One of the most promising developments in this area comes from Microchip, in the form of its 3D gesture controllers based on its patented GestIC technology. Based on the principle of near-field sensing, it combines motion tracking and proximity detection to allow gestures to be recognized in three dimensions. The MGC3030/3130 (Figure 3) employs up to five receiving electrodes to detect movement in three dimensions by measuring e-field variance, which is analyzed by an integrated signal processing unit executing the Colibri Gesture Suite. This high level of integration delivers a single-chip solution for 3D gesture recognition.

Diagram of MGC3130 GestIC controller from Microchip

Figure 3: The MGC3130 GestIC controller from Microchip offers gesture detection in three dimensions.

With a receiver sensitivity of less than 1 fF, the technology delivers a spatial resolution of up to 150 dpi and a position update rate of 200 positions per second, making it suitable for a wide range of applications. It is also possible to develop a system using ‘basic’ materials for the electrodes, including PCB tracks, conductive foil or paint, or even the material used in a standard touch-sensitive display.

A significant benefit of Microchip’s GestIC technology is its high immunity to noise; it is also unaffected by clothing such as gloves. These factors can be the downfall of ‘traditional’ capacitive touch sensing technologies, making them unsuitable for many industrial applications. The Colibri software suite covers approach detection, position tracking and gesture recognition, which includes flick, circular and symbol gestures. The library is embedded within the device, enabling real-time and continuous operation. This ‘always on’ approach to sensing extends its suitability to a wide range of applications, including those that must offer a fast response time. Microchip claims that the low-power nature of the GestIC controller means it can be used in battery-powered devices.
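To make the idea of gesture recognition over a position stream concrete, here is a toy flick classifier operating on (x, y) samples of the kind a GestIC-style controller reports at 200 positions per second. The thresholds and function name are illustrative assumptions; the Colibri suite's actual recognition runs on-chip and is far more sophisticated.

```python
# Toy flick-gesture classifier over a burst of (x, y) position samples
# (illustrative only -- not the Colibri Gesture Suite).

def classify_flick(positions, min_travel=60.0):
    """Classify a burst of positions as a left/right/up/down flick.

    positions: list of (x, y) tuples in arbitrary position units.
    Returns 'left', 'right', 'up', 'down', or None if the net travel
    is too small to count as a flick.
    """
    if len(positions) < 2:
        return None
    dx = positions[-1][0] - positions[0][0]
    dy = positions[-1][1] - positions[0][1]
    if max(abs(dx), abs(dy)) < min_travel:
        return None                     # too little travel: not a flick
    if abs(dx) >= abs(dy):              # dominant axis decides the gesture
        return 'right' if dx > 0 else 'left'
    return 'up' if dy > 0 else 'down'
```

A real recognizer would also gate on velocity and on the approach/recede axis, but the dominant-axis decision above captures the basic shape of the problem.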

Microchip has developed a 3D touch pad using its GestIC technology combined with projected capacitive sensing. It comes complete with an SDK (software development kit) and API for application and driver development, as well as a GUI that can be used in the development process. The kit comes with an ‘out of the box’ feature set covering cursor tracking and click detection, pinch-to-zoom and up/down scrolling, as well as 3D gesture recognition. Additionally, the SDK enables new gesture development.

Gesture imaging

Other than voice recognition, physical gesturing offers perhaps the most natural way of interfacing with smart devices, providing contactless control that is largely independent of the underlying sensing technology. As well as capacitive sensing, OEMs looking to implement gesture recognition are now also using vision-based systems. Some systems are complex, using high definition cameras and sophisticated algorithms running on powerful processors. Examples include the advanced driver assistance systems now available in many cars, but simple gesture recognition doesn’t necessarily require a high specification camera and large amounts of processing power.

A particularly elegant solution is now offered by Broadcom Limited, in the form of the APDS-9500. This diminutive 18-pin surface-mount sensor (measuring just 6.87 by 3.76 by 2.86 mm) is not only able to detect proximity, it can also recognize nine different gestures, including up, down, left, right, approach, recede, and clockwise/counterclockwise rotation. As the device is image based, it doesn’t rely on the detected object altering a capacitive field, which means it could also be used to detect the movement of objects such as doors and windows.

It integrates a photodiode-based sensor that is configured and controlled over an I2C interface. The sensor output feeds into a state machine that decodes the received data and logs it as gestures (Figure 4). The recorded data is accessed over an SPI interface, and the gesture update rate can be selected as either 120 Hz (normal mode) or 240 Hz (gaming mode). The recorded gesture data can be accessed via an interrupt mechanism or by continuously polling the gesture-detect interrupt flag. The proximity detection mode operates at an update rate of 10 Hz and employs an LED pulsed for 8 µs with a peak current of 760 mA.

Diagram of APDS-9500 from Broadcom Limited

Figure 4: The APDS-9500 from Broadcom Limited integrates gesture recognition in a tiny outline.
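The polling approach described above can be sketched as follows. Note that the register address and bit-to-gesture mapping here are illustrative assumptions only, not the APDS-9500's actual register map; consult the Broadcom datasheet for the real addresses and flag layout.

```python
# Sketch of polling a gesture sensor's interrupt-flag register.
# Register address and bit assignments are HYPOTHETICAL, for
# illustration only -- see the device datasheet for the real map.

GESTURE_FLAG_REG = 0x43       # assumed address of the gesture-detect flags
GESTURE_BITS = {              # assumed bit -> gesture mapping
    0: "up", 1: "down", 2: "left", 3: "right",
    4: "approach", 5: "recede", 6: "clockwise", 7: "counterclockwise",
}

def poll_gestures(read_reg):
    """Poll the flag register once and decode any gestures set.

    read_reg: callable taking a register address and returning one byte
    (in a real design this would be an I2C/SPI read).
    Returns the list of gesture names whose flag bits were set.
    """
    flags = read_reg(GESTURE_FLAG_REG)
    return [name for bit, name in GESTURE_BITS.items() if flags & (1 << bit)]
```

In an interrupt-driven design the same decode routine would run in response to the sensor's interrupt line instead of a polling loop, trading CPU time for latency.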

Conclusion

The ‘User Experience’ (UX) is becoming more relevant in the Industrial IoT as automation continues to envelop more of the traditional operator’s functions. The need to communicate effectively with machines isn’t going away, it’s just being reinvented.

New technologies are now available that can successfully aid that transition, supported by comprehensive ecosystems and development environments. Microcontrollers remain at the heart of the system, and are now taking on responsibility for the next generation of human machine interfaces (HMIs).
