Publications

Benchmarking Machine Learning on the Myriad X Processor Onboard the ISS

Dunkel, E., Espinosa-Aranda, J.L., Romero-Cañas, J., Buckley, L., Towfic, Z., Mirza, F., Swope, J., Russell, D., Sauvageau, J., Sheldon, D., Chien, S., Fernandez, M., Knox, C., Wagstaff, K., Lu, S., Denbina, M., Atha, D., Swan, R.M. and Ono, H.

2021 International Space Station Research and Development Conference

We benchmark Intel’s Movidius Myriad X Vision Processing Unit (VPU), made available onboard the ISS through HPE’s Spaceborne Computer-2. The deployed classifiers include neural networks trained on Mars imagery from the Mars Reconnaissance Orbiter and the Curiosity rover. This work is a step towards running similar classifiers for a range of applications, including onboard autonomy and data distillation.
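The abstract gives no implementation detail, but a minimal benchmarking sketch may help illustrate the kind of measurement involved. It assumes an OpenVINO IR model and the "MYRIAD" device plugin commonly used for Myriad-class VPUs; the model file name, input shape and iteration count are placeholder assumptions, not details from the paper.

```python
# Minimal benchmarking sketch, assuming an OpenVINO IR model and the "MYRIAD"
# device plugin used for Myriad-class VPUs. Model file, input shape and
# iteration count are placeholders, not details from the paper.
import time
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model(core.read_model("classifier.xml"), "MYRIAD")
output = compiled.output(0)

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in input tile

latencies = []
for _ in range(100):
    start = time.perf_counter()
    compiled([frame])[output]
    latencies.append(time.perf_counter() - start)

print(f"median latency: {1e3 * np.median(latencies):.1f} ms "
      f"({1.0 / np.median(latencies):.1f} inferences/s)")
```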

High-Performance Compute Board – a Fault-Tolerant Module for On-Board Vision Processing

España-Navarro, J., Samuelsson, A., Gingsjö, H., Barendt, J., Dunne, A., Buckley, L., Reisis, D., Kyriakos, A., Papatheofanous, E.A., Bezaitis, C., Matthijs, P., Ramos, J.P. and Steenari, D.

2021 European Workshop on On-Board Data Processing (OBDP)

This technical paper describes the High-Performance Compute Board (HPCB), currently being implemented and tested by a consortium led by Cobham Gaisler within the framework of an ESA project. The first section gives a brief introduction to the platform, while subsequent sections add further detail on the architecture, hardware, and software design. Finally, preliminary test results are presented before the most relevant aspects of the paper are summarized in the conclusions.

Towards the Use of Artificial Intelligence on the Edge in Space Systems: Challenges and Opportunities

Furano, G., Meoni, G., Dunne, A., Moloney, D., Ferlet-Cavrois, V., Tavoularis, A., Byrne, J., Buckley, L., Psarakis, M., Voss, K.-O. and Fanucci, L.

IEEE Aerospace and Electronic Systems Magazine Vol. 35(12), pp. 44-56

2020

The market for remote sensing space-based applications is fundamentally limited by up- and downlink bandwidth and by the onboard compute capability of space data handling systems. This article details how the compute capability of these platforms can be vastly increased by leveraging emerging commercial off-the-shelf (COTS) system-on-chip (SoC) technologies. The orders-of-magnitude increase in processing power can then be applied to consuming data at source rather than on the ground, allowing the deployment of value-added applications in space that consume a tiny fraction of the downlink bandwidth otherwise required. The proposed solution has the potential to revolutionize Earth observation (EO) and other remote sensing applications, greatly reducing the time and cost of deploying new value-added services to space compared with the state of the art. This article also reports the first results on the radiation tolerance and power/performance of these COTS SoCs for space-based applications, and maps the trajectory toward low Earth orbit trials and the complete life-cycle for space-based artificial intelligence classifiers on orbital platforms and spacecraft.
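To make the bandwidth argument concrete, a back-of-the-envelope comparison between downlinking a raw scene and downlinking only onboard classification products is sketched below; the scene size, band count, bit depth and per-tile product size are purely illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope comparison of downlinking a raw scene versus downlinking
# only onboard classification products. All figures are illustrative assumptions,
# not numbers from the article.
pixels_per_side = 5_000      # assumed scene size (pixels)
bands = 4                    # assumed spectral bands
bits_per_sample = 12         # assumed radiometric depth
raw_bytes = pixels_per_side ** 2 * bands * bits_per_sample / 8

tile = 256                                 # classify 256x256 tiles onboard
tiles = (pixels_per_side // tile + 1) ** 2
product_bytes = tiles * 8                  # e.g. tile id + class label + confidence

print(f"raw scene:          {raw_bytes / 1e6:.0f} MB")
print(f"onboard products:   {product_bytes / 1e3:.1f} kB")
print(f"downlink reduction: ~{raw_bytes / product_bytes:,.0f}x")
```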

An Evaluation of Low-Cost Vision Processors for Efficient Star Identification

Agarwal, S., Hervas-Martin, E., Byrne, J., Dunne, A., Espinosa-Aranda, J.L. and Rijlaarsdam, D.

Sensors Vol. 20(21), pp. 6250

2020

Star trackers are navigation sensors used to determine the attitude of a satellite relative to certain stars. To be usable on small satellites, a star tracker must be accurate while consuming as little power as possible. While traditional approaches use lookup tables for identifying stars, the latest advances in star tracking use neural networks for automatic star identification. This manuscript evaluates two low-cost processors capable of running a star identification neural network: the Intel Movidius Myriad 2 Vision Processing Unit (VPU) and the STM32 microcontroller. The aim is to compare their accuracy and power usage and thereby evaluate the suitability of each device for use in a star tracker. The Myriad 2 VPU and the STM32 microcontroller were chosen specifically for their performance on computer vision algorithms and because both are cost-effective, low-power devices. The experimental results show that the Myriad 2 is efficient, consuming around 1 W while maintaining 99.08% accuracy on inputs that include false stars; the STM32 delivered comparable accuracy (99.07%) and power measurements. The proposed experimental setup is beneficial for small spacecraft missions that require low-cost, low-power star trackers.
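The evaluated network is not reproduced here, but a minimal sketch of the general approach (a compact classifier mapping a per-star feature vector, such as angular distances to neighbouring stars, to a catalogue identity) is shown below; the feature length, layer widths and catalogue size are placeholder assumptions, not the paper's architecture.

```python
# Minimal sketch of the general star-identification approach (not the paper's
# exact network): a small fully connected classifier over a per-star feature
# vector. Feature length, layer widths and catalogue size are placeholders.
import tensorflow as tf

NUM_FEATURES = 32            # assumed length of the per-star feature vector
NUM_CATALOGUE_STARS = 1024   # assumed number of catalogue classes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_CATALOGUE_STARS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```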

UB0100 AI & CV Compute Engine

Dunne, A.

2020 ESA Workshop on Avionics, Data, Control and Software Systems (ADCSS)

The historic success of the recent ESA Φ-sat-1 mission demonstrated for the first time that COTS hardware acceleration of AI inference on a satellite payload in orbit is possible. The Deep Learning cloud detection solution deployed on Φ-sat-1 utilises an Intel Movidius Myriad 2 vision processor for inference compute. The Myriad has performance-per-watt and radiation characteristics that make it ideally suited as a payload data processor for satellite deployments, providing state-of-the-art Neural Network (NN) compute within an industry-low power envelope. Building on the hardware and software deployed on Φ-sat-1, the UB0100 CubeSat board is the next-generation AI inference and Computer Vision (CV) engine that addresses the form factor and interface needs of CubeSats while exposing the compute of the Myriad to the payload developer. This presentation discusses the requirements of an AI CubeSat payload data processing board (hardware, firmware, software) and demonstrates how the UB0100 solution addresses these requirements through its custom CubeSat build. An overview of the CVAI software that runs on the UB0100 shows how, in addition to AI inference and integration with popular AI frameworks, the user now has direct access to the hardware-accelerated vision functionality of the Myriad VPU. This unlocks combined image pre-processing and AI compute on a single device, enabling direct processing of data products at different levels on-satellite. The flexibility the UB0100 solution provides to the user is demonstrated through a selection of use cases.
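What "combined image pre-processing and AI compute on a single device" looks like in practice can be sketched roughly as follows. This is an illustrative host-side Python sketch using OpenCV and OpenVINO, not the CVAI API; the file names, Bayer pattern, network input size and "MYRIAD" plugin are assumptions, and on the UB0100 both stages run on the Myriad VPU itself rather than in host Python.

```python
# Illustrative host-side sketch of combining image pre-processing with inference;
# uses OpenCV and OpenVINO, NOT the CVAI API. File names, Bayer pattern, input
# size and the "MYRIAD" plugin are assumptions.
import cv2
import numpy as np
from openvino.runtime import Core

raw = cv2.imread("raw_frame.png", cv2.IMREAD_GRAYSCALE)        # Bayer-patterned sensor frame
rgb = cv2.cvtColor(raw, cv2.COLOR_BayerRG2RGB)                  # debayer (pre-processing step)
blob = cv2.resize(rgb, (224, 224)).astype(np.float32) / 255.0   # assumed network input size
blob = blob.transpose(2, 0, 1)[np.newaxis]                      # HWC -> NCHW

compiled = Core().compile_model("payload_net.xml", "MYRIAD")    # assumed model file
scores = compiled([blob])[compiled.output(0)]
print("top class:", int(np.argmax(scores)))
```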

Smart Doll: Emotion Recognition Using Embedded Deep Learning

Espinosa-Aranda, J., Vallez, N., Rico-Saavedra, J., Parra-Patino, J., Bueno, G., Sorci, M., Moloney, D., Pena, D. and Deniz, O.

Symmetry Vol. 10(9), pp. 387

2018

Computer vision and deep learning are clearly demonstrating a capability to create engaging cognitive applications and services. However, these applications have mostly been confined to powerful Graphics Processing Units (GPUs) or the cloud due to their demanding computational requirements. Cloud processing has obvious bandwidth, energy consumption and privacy issues. Eyes of Things (EoT) is a powerful and versatile embedded computer vision platform which allows the user to develop artificial vision and deep learning applications that analyse images locally. In this article, we use the deep learning capabilities of an EoT device for a real-life facial informatics application: a doll capable of recognizing emotions, using deep learning techniques, and acting accordingly. The main impact and significance of the presented application lies in showing that a toy can now do advanced processing locally, without the need for further computation in the cloud, thus reducing latency and removing most of the ethical issues involved. Finally, the performance of the convolutional neural network developed for this purpose is studied, and a pilot was conducted with a panel of 12 children aged between four and ten to test the doll.
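As a rough illustration of the local processing loop described above (detect a face, classify its emotion, act on the result), a minimal Python sketch is given below; the cascade, model file, 48x48 input size and label set are placeholder assumptions, and the EoT firmware itself is not written in Python.

```python
# Rough sketch of the local loop: detect a face, classify its emotion, act on
# the result. Cascade, model file, input size and label set are placeholders;
# the EoT firmware itself is not written in Python.
import cv2
import numpy as np
import tensorflow as tf

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]   # assumed label set
model = tf.keras.models.load_model("emotion_cnn.h5")           # placeholder model file
faces = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("frame.jpg")                                 # one camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in faces.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
    probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
    print("detected emotion:", EMOTIONS[int(np.argmax(probs))])  # doll would react here
```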

Eyes of Things

Deniz, O., Vallez, N., Espinosa-Aranda, J., Rico-Saavedra, J., Parra-Patino, J., Bueno, G., Moloney, D., Dehghani, A., Dunne, A., Pagani, A., Krauss, S., Reiser, R., Waeny, M., Sorci, M., Llewellynn, T., Fedorczak, C., Larmoire, T., Herbst, M., Seirafi, A. and Seirafi, K.

Sensors Vol. 17(5), pp. 1173

2017

Embedded systems control and monitor a great deal of our reality. While some “classic” features are intrinsically necessary, such as low power consumption, rugged operating ranges, fast response and low cost, these systems have evolved in the last few years to emphasize connectivity functions, thus contributing to the Internet of Things paradigm. A myriad of sensing/computing devices are being attached to everyday objects, each able to send and receive data and to act as a unique node in the Internet. Apart from the obvious necessity to process at least some data at the edge (to increase security and reduce power consumption and latency), a major breakthrough will arguably come when such devices are endowed with some level of autonomous “intelligence”. Intelligent computing aims to solve problems for which no efficient exact algorithm can exist or for which we cannot conceive an exact algorithm. Central to such intelligence is Computer Vision (CV), i.e., extracting meaning from images and video. While not everything needs CV, visual information is the richest source of information about the real world: people, places and things. The possibilities of embedded CV are endless if we consider new applications and technologies, such as deep learning, drones, home robotics, intelligent surveillance, intelligent toys, wearable cameras, etc. This paper describes the Eyes of Things (EoT) platform, a versatile computer vision platform tackling those challenges and opportunities.