
Sensors And Artificial Intelligence – A Powerful Symbiosis

Artificial intelligence (AI) is currently revolutionizing many diverse aspects of our society. By combining advances in data mining and deep learning, it is now feasible to use AI to analyze large volumes of data from various sources, identify patterns, provide insights and make intelligent predictions. Kaustubh Gandhi, Product Manager Software at Bosch Sensortec, discusses.

One example of this innovative development is applying AI to sensor-generated data, and specifically to data gathered from smartphones and other consumer devices. Motion sensor data, together with other information such as GPS location, provides massive and diverse datasets. Therefore, the question is: "How can the power of AI be leveraged to take full advantage of these synergies?"

Motion data analysis

An illustrative real-world application is to analyze usage data to determine what a smartphone user is doing at any given moment: sitting, walking, running or sleeping.

In this case, the benefits for a smart product are self-evident:

  1. Improved customer lifetime value
    Increasing user engagement results in reduced user churn rate.
  2. More competitive product positioning
    Next-generation intelligent products to meet customers' increasing expectations.
  3. Creation of real end-user value
    Accurate detection and analysis of indoor movements enables responsive navigation features, health risk monitoring and improved device efficiency. Insights into actual usage scenarios across a wide variety of smartphone and wearable platforms also help product designers understand users' repetitive habits and actions, for example to determine the right battery size or the proper timing for push notifications.

The heightened interest of smartphone manufacturers in these AI-enabled functions has clearly highlighted the importance of recognizing simple activities such as steps, and this is certain to lead to more in-depth analysis of, for example, sports activities. In popular sports such as football, product designers will not focus solely on the athletes themselves: coaches, fans, and even large organizations such as broadcasters and sportswear designers stand to profit from a level of insight that can accurately quantify, improve and predict sports performance.

Data acquisition and preprocessing

Having identified the business opportunity, the next logical step is to consider how these massive datasets can be effectively acquired.

In the activity tracking example, the required raw data is collected by axial motion sensors, e.g. accelerometers and gyroscopes, installed in smartphones, wearables and other portable devices. The motion data is acquired along the three axes (x, y, z) in an entirely unobtrusive way, i.e. movements are tracked continuously without requiring any interaction from the user.
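To make the preprocessing step concrete, the minimal Python sketch below segments a continuous three-axis stream into fixed-length windows ready for feature extraction. The 50 Hz rate and the 2.56 s window with 50 % overlap are assumptions borrowed from the UCI dataset discussed later in this article, not a prescribed configuration:

```python
import numpy as np

def segment_windows(samples: np.ndarray, rate_hz: int = 50,
                    window_s: float = 2.56, overlap: float = 0.5) -> np.ndarray:
    """Slice a continuous (N, 3) stream of x/y/z motion samples into
    fixed-length, overlapping windows for later feature extraction."""
    length = int(rate_hz * window_s)          # 128 samples per window
    step = int(length * (1.0 - overlap))      # 64-sample hop (50 % overlap)
    windows = [samples[i:i + length]
               for i in range(0, len(samples) - length + 1, step)]
    return np.stack(windows)                  # shape: (num_windows, 128, 3)

# Example: 10 s of simulated 3-axis accelerometer data at 50 Hz
stream = np.random.randn(500, 3)
print(segment_windows(stream).shape)          # (6, 128, 3)
```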

Training the model

For supervised learning approaches to AI, labeled data is required to train a 'model', so that the classification engine can then use this model to classify actual user behavior. For example, we can gather motion data from test users who we know are running or walking, and feed this labeled information to the model to help it learn.

Since this is essentially a one-time process, the user 'labeling' task can be performed with very simple apps and camera systems. Our experience indicates that the human error rate in labeling tends to decline as the number of samples collected increases. Hence, it makes more sense to take a larger number of sample sets from a limited number of users than to take smaller sample sets from more users.

Getting the raw sensor data alone is not enough. We have observed that to achieve highly accurate classification, certain features need to be carefully defined, i.e. the system needs to be told which features are important for distinguishing individual sequences from one another. The learning process is iterative, and during the preprocessing stage it is not yet evident which features will be most relevant. Thus, developers must make educated guesses, based on domain knowledge, about the kind of information that may have an impact on classification accuracy.

For activity recognition purposes, an indicative feature could be a 'filtered signal', for example body acceleration (derived from the raw acceleration data from a sensor), or a 'derived signal' such as Fast Fourier Transform (FFT) values or standard deviation calculations.
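As an illustration, the sketch below computes a handful of such features from a single window of accelerometer data: per-axis mean, standard deviation, minimum and maximum as time-domain statistics, plus the dominant FFT frequency as a derived signal. This particular feature set is an assumption chosen for clarity, not a production feature list:

```python
import numpy as np

def extract_features(window: np.ndarray, rate_hz: int = 50) -> np.ndarray:
    """Compute an illustrative feature vector from one (N, 3) window:
    per-axis time-domain statistics plus the dominant FFT frequency."""
    feats = []
    for axis in range(window.shape[1]):
        sig = window[:, axis]
        # Time-domain statistics of the (filtered) signal
        feats += [sig.mean(), sig.std(), sig.min(), sig.max()]
        # Derived signal: dominant non-DC frequency from the FFT
        spectrum = np.abs(np.fft.rfft(sig - sig.mean()))
        freqs = np.fft.rfftfreq(len(sig), d=1.0 / rate_hz)
        feats.append(freqs[np.argmax(spectrum)])
    return np.array(feats)                    # 5 features x 3 axes = 15

window = np.random.randn(128, 3)              # one 2.56 s window at 50 Hz
print(extract_features(window).round(3))
```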

For example, a dataset from the UC Irvine Machine Learning Repository (UCI) defines 561 features, based on a group of 30 volunteers who performed six basic activities: standing, sitting, lying down, walking, walking downstairs and walking upstairs.

Pattern recognition and classification

Once the raw motion data is gathered, we need to apply a machine learning technique to classify and analyze it. The available machine learning options are numerous, ranging from logistic regression to neural networks.

One such learning model utilized for AI is the 'Support Vector Machine' (SVM). Physical activities such as walking comprise a sequence of movements, and since SVMs are excellent at classifying sequences, they are a logical choice for activity classification.

An SVM is simple to use, train, scale and predict with, so it is easy to set up multiple sample collection experiments side by side and to apply non-linear classification to complex real-life datasets. SVMs also enable a wide range of size and performance optimization opportunities.

Having chosen a technique, we must then select a software library for the SVMs. The open source library LibSVM is an excellent choice since it is stable and well documented, supports multi-class classification, and offers extensions for all the major development platforms - from MATLAB to Android.
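As a minimal sketch of this workflow, the Python example below uses scikit-learn's SVC class, which wraps LibSVM internally, and assumes the UCI 'Human Activity Recognition Using Smartphones' dataset has been downloaded and unpacked locally (the file paths are assumptions and may need adjusting). It illustrates the training step in principle rather than reproducing the author's exact setup:

```python
import numpy as np
from sklearn.svm import SVC  # scikit-learn's SVC wraps LibSVM internally

# Assumed local paths to the UCI 'Human Activity Recognition Using
# Smartphones' dataset; adjust to wherever the archive was unpacked.
X_train = np.loadtxt("UCI HAR Dataset/train/X_train.txt")
y_train = np.loadtxt("UCI HAR Dataset/train/y_train.txt")
X_test = np.loadtxt("UCI HAR Dataset/test/X_test.txt")
y_test = np.loadtxt("UCI HAR Dataset/test/y_test.txt")

# Default RBF kernel, comparable to LibSVM's out-of-the-box settings;
# multi-class classification is handled automatically (one-vs-one).
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.2%}")
```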

Challenges of always-on classification

In practice, live classification is required to perform activity recognition during product use, while the user is moving. To keep product costs to a minimum, we need to work out how to balance the costs of transmission, storage and processing, without compromising on the outcome, i.e. the quality of information.

Assuming affordable data transmission, all the data could be stored and processed in the cloud. In reality, this risks a huge data bill for the user and, with the device now requiring an internet connection, the unavoidable Wi-Fi, Bluetooth or 4G module would further drive up device costs.

To make matters worse, access to even 3G networks can be challenging in non-urban areas, e.g. when hiking, cycling or swimming. This reliance on substantial data transmission to the cloud would slow down updates and necessitate periodic syncing, effectively negating the benefits of live AI motion analysis. Conversely, handling all these operations solely on the device's main processor would result in substantial power draw and reduced execution cycles for other applications. Likewise, storing all the data on the device itself would increase storage costs.

Squaring the circle

To resolve these seemingly conflicting factors, we can follow four principles:

  1. Decouple feature processing from the execution of the classification engine.
  2. Reduce both storage and processing demands by intelligently selecting features specifically required for accurate activity recognition.
  3. Utilize sensors that can acquire data with a lower power draw and perform sensor fusion (combine data from multiple sensors) and feature preprocessing for always-on execution.
  4. Retrain the model with system-supported data that can ascertain the user's activities.

By decoupling feature processing from the execution of the classification engine, the processor linked to the acceleration and gyroscope sensors can be far smaller. This effectively eliminates the need for continuous transmission of live data chunks to a more powerful processor. Processing features such as an FFT, which transforms time-domain signals into frequency-domain signals, requires only a low-power fuser core capable of executing floating-point operations.

Furthermore, in the real world, individual sensors have physical limitations, and their output drifts over time, for example, due to offsets and non-linear scaling caused by soldering and temperature effects. To compensate for such irregularities, sensor fusion is required, and calibration needs to be fast, inline and automatic.



Figure 1: Functional process for activity classification (Source: Bosch Sensortec)

Additionally, the selected data capture rate can significantly affect the amount of computation and transmission required. Typically, a 50 Hz sample rate is sufficient for normal human activities. However, when analyzing the performance of fast-moving activities or sports, a sample rate of 200 Hz may be required. Similarly, for faster response times, a separate accelerometer running at 2 kHz may be installed to determine the user's intent.
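A quick back-of-the-envelope calculation shows why the sample rate matters so much for transmission and storage. The assumptions here (three axes at 16 bits per sample, continuous capture) are illustrative only:

```python
# Raw data volume per day for a 3-axis sensor with 16-bit samples.
BYTES_PER_SAMPLE = 3 * 2  # x, y, z at 2 bytes each

for rate_hz in (50, 200, 2000):
    per_day_mb = rate_hz * BYTES_PER_SAMPLE * 86_400 / 1e6
    print(f"{rate_hz:>5} Hz -> {per_day_mb:8.1f} MB/day of raw data")
```

At 50 Hz this amounts to roughly 26 MB per day per sensor; at 2 kHz it exceeds 1 GB per day, which makes continuous streaming to the cloud clearly impractical.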

To meet these challenges head on, a low-power or application-specific sensor hub can significantly reduce the CPU cycles needed by the classification engine. Examples of such sensor hubs include Bosch Sensortec's BHI160 and BNO055. The associated software can directly generate fused sensor outputs at varied sensor data rates, and support feature processing.





Figure 2: Smart sensor hub BHI160: low-power smart-hub for activity recognition, specifically designed to enable always-on motion sensing. (Source: Bosch Sensortec)


Figure 3: Application specific sensor node BNO055: intelligent 9-axis Absolute Orientation Sensor, combining sensors and sensor fusion in a single package. (Source: Bosch Sensortec)

The initial choice of the features to be processed greatly impacts the size of the trained model, the data volume, and the computational power required for both training and inline prediction. Hence, the choice of features sufficient for classifying and differentiating a particular activity is a key decision, and is likely to be a significant commercial differentiator.

Reflecting on the earlier UCI example, with the full set of 561 features, a model trained with the default LibSVM kernel achieved a test accuracy of 91.84 % for activity classification. However, after completing the training and ranking the features, selecting the 19 most important features was sufficient to achieve a test accuracy of 85.38 %. Upon closer examination of the rankings, we found that the most relevant features are frequency-domain transformations and the mean, maximum and minimum of sliding-window raw acceleration data. Interestingly, none of these features would have been usable with preprocessing alone, i.e. sensor fusion was necessary to make this data sufficiently reliable and thus useful for classification.
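Continuing the earlier training sketch, the feature selection step could look like the following. The ANOVA F-test ranking via scikit-learn's SelectKBest is an assumed stand-in for the ranking method used by the author, so the selected features and the resulting accuracy will differ from the figures quoted above:

```python
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif

# Rank the 561 features and keep the 19 highest-scoring ones
# (X_train, y_train, X_test, y_test as loaded in the earlier sketch).
selector = SelectKBest(f_classif, k=19).fit(X_train, y_train)
X_train_19 = selector.transform(X_train)
X_test_19 = selector.transform(X_test)

clf = SVC(kernel="rbf").fit(X_train_19, y_train)
print(f"19-feature test accuracy: {clf.score(X_test_19, y_test):.2%}")
print("Selected feature indices:", selector.get_support(indices=True))
```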

Conclusion

In summary, technology has advanced to the point where it is now practical to run advanced AI on portable devices to analyze data from motion sensors. These modern sensors operate at low power, with sensor fusion and software partitioning significantly increasing the efficiency and viability of the overall system, while also greatly simplifying application development.

To add to this sensor infrastructure, we can take advantage of open source libraries and best practices to optimize feature extraction and classification.

It is now quite realistic to offer a truly personalized user experience, leveraging AI to provide sophisticated insights based on data gathered by the sensors in smartphones, wearables and other portable devices. The next few years should bring to light a whole array of as yet unimaginable devices and solutions. AI and sensors are set to open up a new world of exciting opportunities to both designers and end users.


Figure 4: AI and sensors are set to open up a new world of exciting opportunities to both designers and end users. (Source: Bosch; Picture: Depositphotos/Krisdog)


