Publications

The Φ-Sat-1 Mission: The First On-Board Deep Neural Network Demonstrator for Satellite Earth Observation

Giuffrida, G., Fanucci, L., Meoni, G., Batič, M., Buckley, L., Dunne, A., van Dijk, C., Esposito, M., Hefele, J., Vercruyssen, N., Furano, G., Pastena, M. and Aschbacher, J.

IEEE Transactions on Geoscience and Remote Sensing Vol. 60, pp. 1-14, 2022

Artificial intelligence (AI) is paving the way for a new era of algorithms that focus directly on the information contained in the data, autonomously extracting the features relevant to a given application. While the initial paradigm was to run these applications on a server-hosted processor, recent advances in microelectronics provide hardware accelerators with an efficient ratio between computation and energy consumption, enabling the implementation of AI algorithms “at the edge.” In this way only the meaningful and useful data are transmitted to the end user, minimizing the required data bandwidth and reducing latency with respect to the cloud computing model. In recent years, the European Space Agency (ESA) has been promoting the development of disruptive innovative technologies on-board Earth observation (EO) missions. In this field, the most advanced experiment to date is Φ-Sat-1, which has demonstrated the potential of AI as a reliable and accurate tool for cloud detection on-board a hyperspectral imaging mission. The activities involved demonstrating the robustness of the Intel Movidius Myriad 2 hardware accelerator against ionizing radiation, developing the CloudScout segmentation neural network (NN), run on the Myriad 2, to identify, classify, and eventually discard cloudy images on-board, and assessing the innovative HyperScout-2 hyperspectral sensor. This mission represents the first successful attempt to run an AI deep convolutional NN (CNN) inferencing directly on a dedicated accelerator on-board a satellite, opening the way to a new era of discovery and commercial applications driven by the deployment of on-board AI.
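
The CloudScout network flew on dedicated Myriad 2 firmware; as a rough illustration of the deployment pattern only, the sketch below runs a segmentation model on a Myriad-class VPU through the OpenVINO Inference Engine API and applies an on-board discard policy. The model file names, the cloud class index, and the 70% threshold are assumptions for illustration, not mission values.

```python
# Minimal sketch (not mission code): segmentation inference on a Myriad-class
# VPU via the OpenVINO Inference Engine API (2021.x), plus a discard policy.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="cloudscout.xml", weights="cloudscout.bin")  # hypothetical IR files
exec_net = ie.load_network(network=net, device_name="MYRIAD")            # VPU plugin

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

def cloud_fraction(frame: np.ndarray) -> float:
    """Run inference and return the fraction of pixels classified as cloud."""
    result = exec_net.infer({input_name: frame})
    mask = result[output_name].argmax(axis=1)  # per-pixel class labels (NCHW)
    return float((mask == 1).mean())           # assumes class 1 == cloud

# On-board policy: drop acquisitions that are mostly cloud instead of
# downlinking them (threshold chosen here only for illustration).
# if cloud_fraction(frame) > 0.7:
#     discard(frame)
```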

Transfer Learning for On-Orbit Ship Segmentation

Fanizza, V., Rijlaarsdam, D., González, P.T.T. and Espinosa-Aranda, J.L.

AI4Space Workshop @ECCV 2022

With the adoption of edge AI processors for space, on-orbit inference on EO data has become a possibility, enabling a range of new applications for space-based EO systems. Since the development of on-orbit AI applications requires raw data that is rarely available, training these AI networks remains a challenge. To address this issue, we investigate the effect of varying two key image parameters between training and testing data on a ship segmentation network: Ground Sampling Distance and band misalignment magnitude. Our results show that for both parameters the network's performance degrades when these parameters differ between testing and training data. We show that this performance drop can be mitigated with appropriate data augmentation. By preparing models at the training stage for the expected feature space, the need for additional on-board computational resources, e.g. image scaling or band alignment of camera data, can be reduced.
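
A minimal sketch of the two augmentations investigated, assuming NumPy/SciPy and illustrative parameter ranges (not the paper's values): Ground Sampling Distance variation is approximated by random spatial rescaling, and band misalignment by shifting each band independently by a few pixels.

```python
# Sketch of GSD and band-misalignment augmentation for multi-band imagery.
# Ranges are illustrative assumptions, not the values used in the paper.
import numpy as np
from scipy.ndimage import shift, zoom

rng = np.random.default_rng(0)

def augment_gsd(img: np.ndarray, scale_range=(0.5, 2.0)) -> np.ndarray:
    """img: (bands, H, W). Rescale spatially to mimic a different GSD."""
    s = rng.uniform(*scale_range)
    return zoom(img, (1.0, s, s), order=1)  # bilinear resampling

def augment_band_misalignment(img: np.ndarray, max_shift: int = 4) -> np.ndarray:
    """Shift each band independently by up to max_shift pixels."""
    out = np.empty_like(img)
    for b in range(img.shape[0]):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        out[b] = shift(img[b], (dy, dx), order=0, mode="nearest")
    return out
```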

Benchmarking Deep Learning Inference of Remote Sensing Imagery on the Qualcomm Snapdragon and Intel Movidius Myriad X Processors Onboard the International Space Station

Dunkel, E., Swope, J., Towfic, Z., Chien, S., Russell, D., Sauvageau, J., Sheldon, D., Romero-Cañas, J., Espinosa-Aranda, J.L., Buckley, L., Hervas-Martin, E., Fernandez, M. and Knox, C.

International Geoscience and Remote Sensing Symposium (IGARSS 2022)

Deep space missions can benefit from onboard image analysis. We demonstrate deep learning inference to facilitate future mission adoption of such algorithms. Traditional space flight hardware provides modest compute when compared to today’s laptop and desktop computers. New generations of commercial off-the-shelf (COTS) processors designed for embedded applications, such as the Qualcomm Snapdragon and Movidius Myriad X, deliver significant compute in small Size, Weight and Power (SWaP) packaging and offer direct hardware acceleration for deep neural networks. We deploy neural network models on these processors, hosted by Hewlett Packard Enterprise’s Spaceborne Computer-2 onboard the International Space Station (ISS). We benchmark a variety of algorithms trained on imagery from Earth or Mars, as well as some standard deep learning models for image classification.
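
The benchmarking pattern itself is framework-agnostic; a minimal sketch of the kind of latency measurement involved is below, where `infer` stands in for whichever accelerator runtime is under test (an assumption, since the paper's harness is not shown).

```python
# Generic inference-latency benchmark: warm up, time repeated calls, and
# report summary statistics. `infer(sample)` is a stand-in for the runtime
# call on the device under test (Snapdragon or Myriad X).
import statistics
import time

def benchmark(infer, sample, warmup: int = 10, runs: int = 100) -> dict:
    for _ in range(warmup):                 # let caches/clock states settle
        infer(sample)
    times_ms = []
    for _ in range(runs):
        t0 = time.perf_counter()
        infer(sample)
        times_ms.append((time.perf_counter() - t0) * 1e3)
    times_ms.sort()
    return {
        "mean_ms": statistics.mean(times_ms),
        "p50_ms": times_ms[len(times_ms) // 2],
        "p95_ms": times_ms[int(len(times_ms) * 0.95)],
    }
```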

Testing Mars Rover, Spectral Unmixing, And Ship Detection Neural Networks, And Memory Checkers On Embedded Systems Onboard The ISS

Dunkel, E., Swope, J., Candela, A., West, L., Chien, S., Buckley, L., Romero-Cañas, J., Espinosa-Aranda, J.L., Hervas-Martin, E., Towfic, Z., Russell, D., Sauvageau, J., Sheldon, D., Fernandez, M. and Knox, C.

16th Symposium on Advanced Space Technologies in Robotics and Automation

Future space missions can benefit from processing imagery onboard to detect science events, create insights, and respond autonomously. This capability can enable the discovery of new science. One of the challenges to this mission concept is that traditional space flight hardware has limited capability, being derived from much older computing in order to ensure reliable performance in the extreme environments of space, particularly radiation. Modern Commercial Off The Shelf (COTS) processors, such as the Movidius Myriad X and the Qualcomm Snapdragon, provide significant improvements in small Size, Weight and Power (SWaP) packaging. They offer direct hardware acceleration for deep neural networks, which are state-of-the-art in computer vision. We deploy neural network models on these processors, hosted by Hewlett Packard Enterprise’s Spaceborne Computer-2 onboard the International Space Station (ISS), and benchmark a variety of algorithms on them. The models are run multiple times on the ISS to see whether any errors develop. In addition, we run a memory checker to detect radiation effects on the embedded processors.
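
As a toy illustration of what a memory checker does (the flight checker exercises device memory directly; this is only the idea), the sketch below fills a buffer with a known bit pattern, re-reads it later, and counts radiation-induced bit flips.

```python
# Toy pattern-based memory check: flipped bits show up as set bits in the
# XOR between the buffer and the pattern it was written with.
import numpy as np

PATTERN = 0x55  # alternating 01010101; solid 0x00/0xFF patterns also common

def write_pattern(n_bytes: int, pattern: int = PATTERN) -> np.ndarray:
    return np.full(n_bytes, pattern, dtype=np.uint8)

def count_bit_flips(buf: np.ndarray, pattern: int = PATTERN) -> int:
    diff = np.bitwise_xor(buf, np.uint8(pattern))
    return int(np.unpackbits(diff).sum())

buf = write_pattern(64 * 1024 * 1024)   # 64 MiB test region (size assumed)
# ... buffer sits in memory on-orbit, exposed to radiation ...
print(f"{count_bit_flips(buf)} bit flips detected")
```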

Radiation Test and in Orbit Performance of MPSoC AI Accelerator

Buckley, L., Dunne, A., Furano, G. and Tali, M.

IEEE Aerospace Conference 2022, Montana

φ-Sat-1 is part of a European Space Agency initiative to promote the development of disruptive innovative technology and capabilities on-board EO missions. The φ-Sat-1 satellite represents the first-ever on-board Artificial Intelligence (AI) deep Convolutional Neural Network (CNN) inference on a dedicated chip, exploiting Deep Neural Network (DNN) capability for Earth Observation. It utilises the Myriad Vision Processing Unit (VPU), a System on Chip (SoC) designed ex novo for high-performance edge compute in vision applications. In order to support Myriad’s deployment on φ-Sat-1, the first mission using AI processing for operational purposes, and future applications in general, the SoC has undergone radiation characterisation via several test campaigns in European test facilities. The first AI application developed for in-flight inference was CloudScout, a segmentation neural network designed specifically for φ-Sat-1 to achieve high detail and good granularity in the classification result and to eventually discard on-board the cloudy images acquired by the hyperspectral sensor, thus greatly enhancing the data throughput capability of the mission. In addition to the CloudScout cloud detection AI software results acquired during φ-Sat-1’s mission, in-flight performance data was also acquired for the hardware inference engine. Four separate VPU-based inference engine test phases were executed over 70 days during the mission. The in-flight diagnostics tests for the VPU inference engine indicate that the device performed as expected on-board φ-Sat-1 without experiencing any functional upsets or any functional degradation due to radiation. All future installations of the Myriad VPU in space will be equipped with this Built-In Self Test (BIST), allowing the performance of the inference engine hardware to be monitored.
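
The essence of such a BIST is a golden-reference comparison; a minimal sketch under that assumption (names and tolerance are illustrative, not the flight implementation) runs a fixed input through the inference engine and compares the output against a reference captured on the ground.

```python
# Golden-reference BIST sketch: a mismatch on a known input flags a
# radiation-induced functional upset in the inference hardware.
import numpy as np

def bist_pass(infer, golden_input: np.ndarray, golden_output: np.ndarray,
              tol: float = 1e-3) -> bool:
    """Return True if on-board inference matches the ground reference."""
    out = infer(golden_input)
    return bool(np.allclose(out, golden_output, atol=tol))

# Flight usage: run periodically between imaging tasks and log the result,
# so hardware health can be trended over the mission.
```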

Benchmarking Deep Learning On a Myriad X Processor Onboard the International Space Station (ISS)

Dunkel, E., Buckley, L., Espinosa-Aranda, J.L., Romero-Cañas, J., Hervas-Martin, E., Towfic, Z., Swope, J., Russell, D., Sauvageau, J., Sheldon, D., Chien, S., Wagstaff, K., Lu, S., Denbina, M., Knox, C. and Fernandez, M.

Flight Software Workshop 2022

We benchmark deep learning models on Intel’s Movidius Myriad X Vision Processing Unit, enabled by HPE’s Spaceborne Computer-2. Deployed models include neural networks trained on imagery from the Mars Reconnaissance Orbiter and the Curiosity rover, as well as a Myriad X memory test. This work is a step towards running similar classifiers for a range of applications, including onboard autonomy and data distillation.

Benchmarking Machine Learning on the Myriad X Processor Onboard the ISS

Dunkel, E., Espinosa-Aranda, J.L., Romero-Cañas, J., Buckley, L., Towfic, Z., Mirza, F., Swope, J., Russell, D., Sauvageau, J., Sheldon, D., Chien, S., Fernandez, M., Knox, C., Wagstaff, K., Lu, S., Denbina, M., Atha, D., Swan, R.M. and Ono, H.

2021 International Space Station Research and Development Conference

We benchmark Intel’s Movidius Myriad X Vision Processing Unit, enabled by HPE’s Spaceborne Computer-2. Deployed classifiers include neural networks trained on imagery from the Mars Reconnaissance Orbiter and the Curiosity rover. This work is a step towards running similar classifiers for a range of applications, including onboard autonomy and data distillation.

FPGA & VPU Co-Processing in Space Applications: Development and Testing with DSP/AI Benchmarks

Leon, V., Bezaitis, C., Lentaris, G., Soudris, D., Reisis, D., Papatheofanous, E.-A., Kyriakos, A., Dunne, A., Samuelsson, A. and Steenari, D.

2021 28th IEEE International Conference on Electronics, Circuits, and Systems (ICECS), pp. 1-5

The advent of computationally demanding algorithms and high-data-rate instruments in new space applications pushes the space industry to explore disruptive solutions for onboard data processing. We examine heterogeneous computing architectures involving high-performance and low-power commercial SoCs. The current paper implements an FPGA-with-VPU co-processing architecture utilizing the CIF and LCD interfaces for I/O data transfers. A Kintex FPGA serves as our framing processor and heritage accelerator, while we offload novel DSP/AI functions to a Myriad 2 VPU. We prototype our architecture in the lab to evaluate the interfaces, the FPGA resource utilization, the VPU computational throughput, and the entire data handling system’s performance via custom benchmarking.

High-Performance Compute Board – a Fault-Tolerant Module for On-Board Vision Processing

España-Navarro, J., Samuelsson, A., Gingsjö, H., Barendt, J., Dunne, A., Buckley, L., Reisis, D., Kyriakos, A., Papatheofanous, E.A., Bezaitis, C., Matthijs, P., Ramos, J.P. and Steenari, D.

2021 European Workshop on On-Board Data Processing (OBDP)

This technical paper describes the High-Performance Compute Board (HPCB), currently being implemented and tested by a consortium led by Cobham Gaisler within the framework of an ESA project. The first section serves as a brief introduction to the platform, while subsequent sections add further detail on the architecture, hardware, and software design. Finally, preliminary test results are presented before the most relevant aspects of the paper are summarized in the conclusions.

Towards the Use of Artificial Intelligence on the Edge in Space Systems: Challenges and Opportunities

Furano, G., Meoni, G., Dunne, A., Moloney, D., Ferlet-Cavrois, V., Tavoularis, A., Byrne, J., Buckley, L., Psarakis, M., Voss, K.-O. and Fanucci, L.

IEEE Aerospace and Electronic Systems Magazine Vol. 35(12), pp. 44-56, 2020

The market for remote sensing space-based applications is fundamentally limited by up- and downlink bandwidth and by the onboard compute capability of space data handling systems. This article details how the compute capability on these platforms can be vastly increased by leveraging emerging commercial off-the-shelf (COTS) system-on-chip (SoC) technologies. The orders-of-magnitude increase in processing power can then be applied to consuming data at the source rather than on the ground, allowing the deployment of value-added applications in space that consume a tiny fraction of the downlink bandwidth that would otherwise be required. The proposed solution has the potential to revolutionize Earth observation (EO) and other remote sensing applications, greatly reducing the time and cost of deploying new value-added services to space compared with the state of the art. This article also reports the first results on the radiation tolerance and power/performance of these COTS SoCs for space-based applications and maps the trajectory toward low Earth orbit trials and the complete life cycle of space-based artificial intelligence classifiers on orbital platforms and spacecraft.
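
As a back-of-the-envelope illustration of the bandwidth argument (all numbers assumed for illustration, not taken from the article), compare downlinking raw hyperspectral frames with downlinking only compact per-frame classification results:

```python
# Illustrative downlink arithmetic: raw frames vs. compact on-board results.
raw_frame_bytes = 2000 * 2000 * 50 * 2   # 2000x2000 px, 50 bands, 16-bit
result_bytes = 64                        # compact per-frame classification
frames_per_day = 500

raw_gb_per_day = raw_frame_bytes * frames_per_day / 1e9
results_mb_per_day = result_bytes * frames_per_day / 1e6
print(f"raw imagery:  {raw_gb_per_day:,.0f} GB/day")    # ~200 GB/day
print(f"results only: {results_mb_per_day:.3f} MB/day") # ~0.03 MB/day
```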

An Evaluation of Low-Cost Vision Processors for Efficient Star Identification

Agarwal, S., Hervas-Martin, E., Byrne, J., Dunne, A., Espinosa-Aranda, J.L. and Rijlaarsdam, D.

Sensors Vol. 20(21), pp. 6250, 2020

Star trackers are navigation sensors used for attitude determination of a satellite relative to certain stars. A star tracker is required to be accurate and to consume as little power as possible in order to be used in small satellites. While traditional approaches use lookup tables for identifying stars, the latest advances in star tracking use neural networks for automatic star identification. This manuscript evaluates two low-cost processors capable of running a star identification neural network: the Intel Movidius Myriad 2 Vision Processing Unit (VPU) and the STM32 microcontroller. The intention is to compare accuracy and power usage in order to evaluate the suitability of each device for use in a star tracker. The Myriad 2 VPU and the STM32 have been chosen specifically for their performance on computer vision algorithms alongside being cost-effective, low-power devices. The experimental results showed that the Myriad 2 proved efficient, consuming around 1 W of power while maintaining 99.08% accuracy on an input including false stars. The STM32 delivered comparable accuracy (99.07%) and power measurement results. The proposed experimental setup is beneficial for small spacecraft missions that require low-cost, low-power star trackers.
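
The accuracy comparison reduces to a simple evaluation loop over labelled test scenes; a minimal sketch under that assumption (the `identify` wrappers and the scene format are hypothetical) is below.

```python
# Accuracy evaluation sketch: fraction of test scenes (including scenes with
# injected false stars) for which the device identifies the correct star.
def accuracy(identify, scenes) -> float:
    """scenes: list of (measurement, true_star_id) pairs."""
    correct = sum(1 for x, true_id in scenes if identify(x) == true_id)
    return correct / len(scenes)

# accuracy(myriad2_identify, test_scenes)  # ~0.99 reported for the Myriad 2
# accuracy(stm32_identify, test_scenes)    # comparable accuracy on the STM32
```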

UB0100 AI & CV Compute Engine

Dunne, A.

2020 ESA Workshop on Avionics, Data, Control and Software Systems (ADCSS)

The historic success of the recent ESA Φ-Sat-1 mission has demonstrated for the first time that COTS hardware acceleration of AI inference on a satellite payload in orbit is now possible. The deep learning cloud detection solution deployed on Φ-Sat-1 utilises an Intel Movidius Myriad 2 vision processor for inference compute. The Myriad has performance-per-watt and radiation characteristics that make it ideally suited as a payload data processor for satellite deployments, providing state-of-the-art Neural Network (NN) compute within an industry-low power envelope. Building on the hardware and software deployed on Φ-Sat-1, the UB0100 CubeSat board is the next-generation AI inference and Computer Vision (CV) engine, addressing the form factor and interface needs of CubeSats while exposing the compute of the Myriad to the payload developer. This presentation discusses the requirements of an AI CubeSat payload data processing board (hardware, firmware, software) and demonstrates how the UB0100 solution addresses these requirements through its custom CubeSat build. An overview of the CVAI software that runs on the UB0100 shows how, in addition to AI inference and integration with popular AI frameworks, the user now has direct access to the hardware-accelerated vision functionality of the Myriad VPU. This unlocks combined image pre-processing and AI compute on a single device, enabling direct processing of data products at different levels on-satellite. The flexibility provided to the user by the UB0100 solution is demonstrated through a selection of use cases.
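
Conceptually, the single-device pipeline described here chains hardware-accelerated pre-processing into NN inference; a minimal sketch under that reading is below, with hypothetical function names standing in for the CVAI API, which is not shown in this abstract.

```python
# Conceptual single-device pipeline: on-VPU pre-processing feeds on-VPU
# inference, so raw camera data never leaves the accelerator. The
# `vpu_preprocess`/`vpu_infer` names are hypothetical stand-ins.
def process_frame(raw, preprocess, infer):
    tensor = preprocess(raw)   # e.g. debayer, resize, normalise on the VPU
    return infer(tensor)       # NN inference on the same device

# for raw in camera_stream():                       # hypothetical source
#     result = process_frame(raw, vpu_preprocess, vpu_infer)
```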

Smart Doll: Emotion Recognition Using Embedded Deep Learning

Espinosa-Aranda, J., Vallez, N., Rico-Saavedra, J., Parra-Patino, J., Bueno, G., Sorci, M., Moloney, D., Peña, D. and Deniz, O.

Symmetry Vol. 10(9), pp. 387, 2018

Computer vision and deep learning are clearly demonstrating a capability to create engaging cognitive applications and services. However, these applications have mostly been confined to powerful Graphics Processing Units (GPUs) or the cloud due to their demanding computational requirements. Cloud processing has obvious bandwidth, energy consumption and privacy issues. Eyes of Things (EoT) is a powerful and versatile embedded computer vision platform which allows the user to develop artificial vision and deep learning applications that analyse images locally. In this article, we use the deep learning capabilities of an EoT device for a real-life facial informatics application: a doll capable of recognizing emotions, using deep learning techniques, and acting accordingly. The main impact and significance of the presented application is in showing that a toy can now do advanced processing locally, without the need for further computation in the cloud, thus reducing latency and removing most of the ethical issues involved. Finally, the performance of the convolutional neural network developed for this purpose is studied, and a pilot was conducted with a panel of 12 children aged between four and ten years old to test the doll.

Eyes of Things

Deniz, O., Vallez, N., Espinosa-Aranda, J., Rico-Saavedra, J., Parra-Patino, J., Bueno, G., Moloney, D., Dehghani, A., Dunne, A., Pagani, A., Krauss, S., Reiser, R., Waeny, M., Sorci, M., Llewellynn, T., Fedorczak, C., Larmoire, T., Herbst, M., Seirafi, A. and Seirafi, K.

Sensors Vol. 17(5), pp. 1173, 2017

Embedded systems control and monitor a great deal of our reality. While some “classic” features are intrinsically necessary, such as low power consumption, rugged operating ranges, fast response and low cost, these systems have evolved in the last few years to emphasize connectivity functions, thus contributing to the Internet of Things paradigm. A myriad of sensing/computing devices are being attached to everyday objects, each able to send and receive data and to act as a unique node on the Internet. Apart from the obvious need to process at least some data at the edge (to increase security and reduce power consumption and latency), a major breakthrough will arguably come when such devices are endowed with some level of autonomous “intelligence”. Intelligent computing aims to solve problems for which no efficient exact algorithm can exist or for which we cannot conceive an exact algorithm. Central to such intelligence is Computer Vision (CV), i.e., extracting meaning from images and video. While not everything needs CV, visual information is the richest source of information about the real world: people, places and things. The possibilities of embedded CV are endless if we consider new applications and technologies, such as deep learning, drones, home robotics, intelligent surveillance, intelligent toys, wearable cameras, etc. This paper describes the Eyes of Things (EoT) platform, a versatile computer vision platform tackling these challenges and opportunities.