Low-power embedded MEMS motion processing for mobile applications
Advanced motion processing for mobile applications
The ability to apply semiconductor efficiencies to motion devices came about with the advent of MEMS (Micro Electro-Mechanical Systems) technology. Sensors such as accelerometers (A), gyroscopes (G), compasses/magnetometers (M) and pressure sensors (P), once large mechanical devices, can now be integrated as small semiconductor devices several millimeters in dimension. Motion processing is a burgeoning market that's seeing huge growth in the deployment of sensor-enabled devices, such as smart phones and tablets. In fact, according to a recent market research report from Yole Développement, there will be more than 4 billion MEMS sensors in mobile devices by 2015. However, with this growth and increase in capabilities and power comes the question: how do we fuse this sensor data to bring useful information to the end user? One challenge for motion processing applications is the lack of advanced sensor and data fusion building blocks that can accurately interpret the data from increasingly available, and increasingly large, sensor networks. Without a strong signal processing layer and good analysis, the data from 4 billion sensors will remain untapped.
In the 1950s film “The Ten Commandments,” actor Charlton Heston (“Moses”) goes to the precipice of a rock overlooking the Red Sea. With his staff in hand, he spreads his arms wide in a grand, embracing gesture, resulting in the dramatic parting of the Red Sea. The imagination of writers and movie studios often provides glimpses of new technology, and this particular example illustrates the natural human desire to project ourselves to effect action, or control, from a distance. In the context of today’s technologies, it could also be an example of MEMS-enabled motion processing and motion control. This article discusses motion processing for mobile applications, providing examples from Movea’s MotionCore™ technology throughout, and the benefits for OEMs, application developers and service providers today and in the future.

MotionCore™ is a family of dedicated motion processing and data fusion IP cores optimized for mobile applications that also reduce power consumption. It’s designed to be sensor agnostic, freeing the supply chain to choose the best sensors for a particular product or application. MotionCore solutions can be implemented on any processing unit in the hardware design, from sensor hub to application processor, or even distributed across several processing units. The technology provides motion processing and data fusion for smart phone and tablet applications, bringing motion capabilities to all sensor-equipped mobile devices.
The growing global middle class and the rapid rate of consumer adoption make mobile devices a highly attractive platform for advanced motion processing. Mobile operating systems like iOS, Android and Windows 8 are already integrating MEMS accelerometers, gyroscopes, compass/magnetometers and pressure sensors. Smart phones and tablets have increasingly muscular processors and are on ever faster development cycles. By enabling more compelling, immersive and contextually aware applications, advanced embedded signal processing solutions, like Movea’s MotionCore, can provide the tools the mobile device ecosystem (i.e. IC manufacturers, platform vendors, application developers) needs to build the advanced motion features consumers want.
Implementing MotionCore technology
Offered as fixed-point embedded firmware containing Movea's patented SmartMotion® technologies for a wide range of embedded CPUs, MotionCore™ gives system integrators and OEMs the ability to reduce motion processing power consumption by a factor of 10 compared to standard motion processing implementations. This is achieved by embedding the signal processing IP below the OS, closer to the sensor hardware or on a dedicated sensor hub. In these configurations, a range of motion features remains available even when the apps processor is asleep and the mobile device is in low power mode, resulting in a better user experience.
As depicted in Figure 1, MotionCore™ is ideally suited for integration into smart phone and tablet applications and can be implemented on the different processing units available in mobile platforms.
The types of semiconductor devices MotionCore™ is ideally suited for include:
Motion MEMS semiconductor chips: This integration allows MEMS manufacturers to increase the value of their sensors by integrating complex signal processing into the digital part of the chip. The goal is to provide an augmented sensor with generic motion sensing features such as calibration, event detection, simple gesture recognition and orientation computation.
MCU, DSP and peripheral processors: This integration into peripheral processors, either dedicated to sensors (i.e. the sensor hub concept) or, in another example, audio or video processors, off-loads complex motion processing and data fusion algorithms from the main application processor and provides low power modes with motion sensing still available. This also increases the usage of existing processors and allows peripheral processor vendors to differentiate themselves by adding advanced motion features.
Mobile application processor SoCs: This integration implements the data fusion software on the mobile device's application processor, minimizing the incremental costs associated with smarter sensors and eliminating the need for an extra sensor hub processor. It is often the fastest time-to-market option. However, this implementation comes with greater power consumption, and the additional processing load may affect other applications running on the apps processor.
Figure 2 provides an example of data fusion firmware in a typical mobile device architecture.
The foundation of motion processing
Any motion processing solution for mobile devices must provide extremely accurate and robust algorithms for the most fundamental building blocks, on top of which most other, more complex features are built. Such a solution should also provide output data in different formats (e.g. quaternions, rotation matrices and Cardan angles), enabling application developers to easily integrate the motion data into different end applications. And finally, a good motion processing and data fusion solution should also provide tools to help IC manufacturers, platform vendors and application developers integrate sensor and data fusion into their respective solutions.
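To make these interchangeable output formats concrete, here is a minimal sketch (in plain Python for readability, not the fixed-point embedded code such a solution actually ships as) of converting a unit quaternion into its equivalent rotation matrix and Cardan angles:

```python
import math

def quat_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def quat_to_cardan(q):
    """Convert a unit quaternion into Cardan angles (roll, pitch, yaw), radians."""
    w, x, y, z = q
    roll  = math.atan2(2*(w*x + y*z), 1 - 2*(x*x + y*y))
    pitch = math.asin(max(-1.0, min(1.0, 2*(w*y - z*x))))  # clamp for safety
    yaw   = math.atan2(2*(w*z + x*y), 1 - 2*(y*y + z*z))
    return roll, pitch, yaw
```

A 90° rotation about the vertical axis, for instance, is the quaternion (cos 45°, 0, 0, sin 45°) and comes back from `quat_to_cardan` as a yaw of π/2.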
Over many years of focusing on motion processing, Movea has identified many fundamental elements of human motion. These SmartMotion™ “atoms” reduce the complexity of human movement into fundamental building blocks that can easily be combined into richer, more complex features. If the basic building blocks are like atoms, then the more complicated features are like molecules, assembled by combining the right atoms together. As new capabilities and features are continuously created, advanced motion and data fusion features start to fall into a natural organization characterized by sensor configuration, the sensor’s location on the body or equipment, and the computational complexity of the feature.
Figure 3 shows one example of this natural organization, categorizing motion atoms into columns according to the types of motion being analyzed: angles, gestures, trajectories, intensity, and more.
Figure 3: Table of SmartMotion Elements™.
With powerful and flexible building blocks, Movea’s Chemistry of Motion allows rapid development of custom algorithms and new features. MotionCore™ is to motion processing what carbon is to organic chemistry.
Data processing modules for motion applications
As mobile platform vendors and device OEMs embrace motion and data fusion, three capabilities are especially sought after: gesture recognition, pedestrian navigation and activity monitoring. Figure 4 shows an overview of how MotionCore integrates these capabilities on top of MotionCore’s Foundation package of Calibration, Attitude and Control libraries.
Gesture recognition provides identification of specific movements and classification of those movements against a database of pre-defined gestures a user may make. Users can, for example, create custom signatures in the air that allow them to authenticate themselves and log in to their devices or connected services in a way that is much more secure than a short sequence of numbers, the Android login screen “swipes”, or even a password. In addition to standard gesture recognition capabilities, Movea has added extra intelligence into its algorithms, called User Intent Anticipation, which allows developers to repurpose the same physical movements into different actions. User Intent Anticipation works by looking more closely at the “quality” of motion to understand what the user is trying to accomplish.
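The classification step can be illustrated with a toy template matcher (a sketch of the general technique only, not Movea's patented algorithms): an incoming motion trace, here a 1-D signal such as acceleration magnitude, is scored against pre-recorded gesture templates with dynamic time warping, and the closest template wins. The template names and sample values below are invented for the example:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signal traces."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best alignment ending here: insertion, deletion or match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def classify_gesture(trace, templates):
    """Return the name of the pre-recorded template closest to the trace."""
    return min(templates, key=lambda name: dtw_distance(trace, templates[name]))
```

For example, with `templates = {"flick": [0, 1, 3, 1, 0], "shake": [0, 2, 0, 2, 0, 2, 0]}`, the trace `[0, 1, 2, 3, 2, 1, 0]` classifies as a flick even though it is longer than the template, which is exactly the time-warping property that makes this family of techniques tolerant of users gesturing at different speeds.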
Figure 5 shows a library of 10 Movea gestures and UIA commands.
Pedestrian navigation provides services for indoor and outdoor location based on MEMS inertial sensors. This application becomes important when GPS or cell-tower navigation is too inaccurate or simply unavailable, such as inside a shopping mall. Pedestrian navigation solutions attempt to determine quantities such as cadence, speed, heading and elevation to deliver dead reckoning services while fusing in other external data sources such as maps. We discuss this in much more detail in the case study below.
Activity monitoring leverages sensors in the mobile device to analyze motion and provide a classification of the user's current activity against a set of predefined activities. This allows, for example, instant posture classification (e.g. sitting, lying down, standing, walking, running) as well as contextual awareness functions. Contextual awareness is a form of contextual computing and a subject of great interest in the industry right now.
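A heavily simplified sketch of how such a classification can work (the thresholds and class set here are invented for illustration; a production classifier is far more sophisticated): the variance of the acceleration magnitude over a short window separates static from dynamic states, then the gravity direction or the motion intensity picks the class:

```python
import statistics

# Illustrative thresholds in g; a real classifier is tuned per device
STATIC_VAR, RUN_VAR = 0.02, 0.5

def classify_activity(magnitudes, vertical_mean):
    """Classify a window of accelerometer-magnitude samples (in g).

    magnitudes:    recent |a| samples
    vertical_mean: mean gravity component along the body's vertical axis"""
    var = statistics.pvariance(magnitudes)
    if var < STATIC_VAR:                 # device nearly static: posture
        return "lying" if abs(vertical_mean) < 0.5 else "standing"
    return "walking" if var < RUN_VAR else "running"
```

A steady 1 g window with gravity on the vertical axis classifies as standing; the same window with gravity off-axis as lying; and increasingly energetic oscillations as walking, then running.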
Case study: motion processing for indoor pedestrian navigation
Smart phones are location-aware devices. Geolocation comes from triangulation with Wi-Fi hotspots, cell towers and/or built-in GPS functionality. Outdoors, GPS accuracy is 3-10 meters at best and can degrade to 500 meters or worse when GPS signal quality falls off, as it does in urban canyons and indoor spaces. Cell tower triangulation is accurate to within 100 to 1000 meters, depending on cell tower density. Wi-Fi triangulation achieves 3-15 meter accuracy, depending on the location of the Wi-Fi routers and the accuracy of the location database against which they are mapped. For pedestrian navigation, whether finding a kiosk on a crowded tradeshow floor or navigating a maze of shops in a mall, these techniques turn out to be neither accurate nor appropriate. Indoor navigation is further complicated by the inconsistent radio wave transparency and signal interference of concrete and steel structures. Stairwells, lower floors and walled-off areas are particularly susceptible to radio signal loss. However, if high quality inertial navigation is applied in combination with these other techniques, accuracies down to a couple of meters are possible, while reducing power consumption by a factor of 10 over the best case without inertial navigation.
Built-in motion sensors can improve indoor pedestrian navigation using deduced reckoning (a.k.a. dead reckoning) techniques. Dead reckoning is the process of calculating one's current position by using a previously determined position and advancing that position based upon known or estimated speed over elapsed time and course.
Figure 6 illustrates the dead reckoning process used to calculate a position.
- Recurrence relation: an incremental approach where Pi (the position at time step i·Te) is deduced from Pi-1, vi and qi
- Initial condition: dead reckoning needs an initial position P0
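In code, that recurrence can be sketched as follows (Python for readability; vi is the speed in m/s, qi the heading in radians, and Te the sampling period):

```python
import math

def dead_reckon(p0, steps, Te=1.0):
    """Apply P_i = P_{i-1} + v_i * Te * (cos q_i, sin q_i), starting from P_0.

    p0:    initial (x, y) position in meters
    steps: sequence of (v_i, q_i) speed/heading pairs, one per time step
    Returns the full estimated track, including the starting point."""
    x, y = p0
    track = [(x, y)]
    for v, q in steps:
        x += v * Te * math.cos(q)
        y += v * Te * math.sin(q)
        track.append((x, y))
    return track
```

Three 1 m/s steps due east followed by two 2 m/s steps due north take the walker from (0, 0) to (3, 4); note that any error in vi or qi at one step is carried into every later position, which is exactly the drift problem discussed below.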
Dead reckoning’s flaw is that location errors can accumulate over time, since the current position is always calculated from the starting position. These errors arise from the inaccuracy of the measurement instruments. Known dead reckoning systems, such as those used in aircraft navigation, rely on a combination of expensive, military-grade accelerometers and gyroscopes. The challenge in using dead reckoning for pedestrian navigation in a smartphone is implementing the same technological concept at a much lower cost with consumer-grade MEMS inertial sensors.
Movea has created what some IC and platform vendors believe to be the most accurate pedestrian navigation solution for smartphone applications. MotionCore for pedestrian navigation can overcome the cumulative “drift” of MEMS gyroscopes and accelerometers through patented sensor and data fusion algorithms that incorporate map information to create an acceptably accurate trajectory.
Figure 7 illustrates gyroscope drift for two different low-cost gyroscopes where positional errors accumulate over time assuming that the user is walking in a straight line.
Movea augments inertial sensor data with data from pressure sensors to take into account changes in altitude, a tilt-compensated compass to improve angular accuracy, and map data fusion techniques.
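Tilt compensation itself is a standard computation: derive roll and pitch from the accelerometer's gravity reading, de-rotate the magnetometer vector into the horizontal plane, and take the heading from the horizontal components. The sketch below follows the well-known aerospace formulation (as in, e.g., Freescale application note AN4248); real implementations add hard/soft-iron calibration and filtering on top:

```python
import math

def tilt_compensated_heading(accel, mag):
    """Heading in radians from magnetic north, in [0, 2*pi).

    accel: (ax, ay, az) gravity reading, any consistent units
    mag:   (mx, my, mz) magnetometer reading, any consistent units"""
    ax, ay, az = accel
    mx, my, mz = mag
    # Roll and pitch from the direction of gravity
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, ay * math.sin(roll) + az * math.cos(roll))
    # De-rotate the magnetic vector into the horizontal plane
    bfx = (mx * math.cos(pitch) + my * math.sin(pitch) * math.sin(roll)
           + mz * math.sin(pitch) * math.cos(roll))
    bfy = my * math.cos(roll) - mz * math.sin(roll)
    return math.atan2(-bfy, bfx) % (2 * math.pi)
```

With the device flat and the field along its x axis the heading is 0 (north); rolled 90° onto its side while still facing north, the de-rotation recovers the same heading, which is the whole point of tilt compensation.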
In an indoor pedestrian environment, where total distances travelled rarely exceed a few hundred meters without stopping at a destination point, low-cost location beacons at common destinations and/or absolute but inaccurate radio localization such as WiFi can be combined with dead reckoning.
Movea performed the following experiment to demonstrate the power of MotionCore™ intelligent analysis. A low cost 9-axis IMU, typical of what would be incorporated in most smartphones, contains a set of three orthogonal 3-axis sensors: a gyroscope, an accelerometer and a compass.
Cadence, velocity and heading are interpreted from the sensor data, and the MotionCore Inertial Navigation module deduces distance and position from this data, also fusing in map data if available and needed. As part of an initialization routine, the subject's stride length is estimated either from simple morphologic inputs or by training the system to fit the individual. Counting strides against the phone's clock turns cadence into distance over time, and heading information from the combined accelerometer, gyroscope and magnetometer (AGM) data provides the trajectory.
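The bookkeeping in this step is simple arithmetic; the sophistication lies in detecting steps robustly and estimating stride length well. A sketch (the height-fraction heuristic k = 0.415 is a commonly quoted rule of thumb for illustration, not Movea's model):

```python
def stride_from_height(height_m, k=0.415):
    """Rough morphologic estimate: stride length as a fixed fraction of height."""
    return k * height_m

def distance_and_cadence(step_times, stride_length_m):
    """Total distance (m) and cadence (steps/min) from detected step timestamps (s)."""
    n = len(step_times)
    distance = n * stride_length_m
    duration = step_times[-1] - step_times[0] if n > 1 else 0.0
    cadence = 60.0 * (n - 1) / duration if duration > 0 else 0.0
    return distance, cadence
```

Five steps of 0.7 m detected over two seconds, for example, give 3.5 m travelled at a cadence of 120 steps per minute.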
The test in Figure 8 shows the accumulation of gyroscopic drift, which results in a slow clockwise rotation of the estimated path in the direction of drift until, ultimately, the system is overwhelmed by cumulative errors. With Movea's motion processing and data fusion algorithms in the loop, MotionCore's Pedestrian Navigation module provides much more accurate results. For this specific experiment, a person was asked to walk along a marked path of 80 meters, 10 times in a row, for a walk of about 10-12 minutes.
Test results show an average error on distance of less than 3% and an average error on step count of less than 1%, demonstrating the capabilities of MotionCore’s Pedestrian Navigation module to leverage consumer grade sensors and arrive at a highly accurate estimation of distance and location. This is a key enabling capability that will power countless high value and immersive mobile apps in the not too distant future.