Publications
Leveraging Commercial Assets, Edge Computing, and Near Real-Time Communications for an Enhanced New Observing Strategies (NOS) Flight Demonstration
Chien S., Candela A., Zilberstein I., Rijlaarsdam D., Hendrix T., Dunne A.
Recent developments in New Space companies have led to a dramatic increase in Earth Observation capabilities. These advances in edge computing, low-latency communications, and many new on-orbit assets represent a unique opportunity for Earth Observation. NASA's New Observing Strategies (NOS) program aims to leverage these new capabilities to achieve global reach for science events such as volcanic eruptions, wildfires, and flooding, not through wide-swath instruments but rather through intelligent, directed sensing, onboard analysis, and dissemination of knowledge rather than data using low-latency communications links. We describe ongoing efforts to deploy NOS capabilities to the CogniSAT-6/HAMMER satellite launched in March 2024, with a currently projected flight demonstration of late summer or early fall 2024.
Intelligent Space Camera for On-Orbit AI-Driven Visual Monitoring Applications
Dunne, A., Romero-Cañas, J., Caulfield, S., Romih, S. and Espinosa-Aranda, J.L.
European Data Handling and Data Processing for Space Conference, Juan-les-Pins, France
The Intelligent Space Camera (ISC) is a compact space camera with embedded Computer Vision and Artificial Intelligence capabilities that can address applications requiring high-throughput smart processing directly at source. The camera, incorporating both hardware and software elements, is being developed in the frame of an ESA co-funded project to support space situational awareness, visual FDIR, and docking applications, among others. Built around the Myriad X Vision Processing Unit (VPU), the camera supports RTSP streaming, H.265 encoding, dynamic remote reconfiguration, and in-line AI stream processing at frame rate, all directly on-camera. Processing results can be sent to the host as metadata or overlaid on the RTSP stream (e.g., as bounding boxes). This paper describes the system in its current form from a software and hardware point of view, as well as its key features and main use cases.
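As an illustrative aside, a host application might consume the camera's RTSP output as sketched below with OpenCV; the stream URL is a hypothetical placeholder, and any on-camera AI overlays would already be rendered into the received frames.

```python
# Hedged sketch: a host application consuming the camera's RTSP stream with
# OpenCV. The URL is a hypothetical placeholder.
import cv2

cap = cv2.VideoCapture("rtsp://<camera-address>/stream")  # placeholder URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # On-camera AI overlays (e.g. bounding boxes) arrive already in the frame.
    cv2.imshow("ISC stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```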
Enhanced Computational Storage Device Employing AI-based Triage
Guesmi, B., Hervas-Martin, E., Moloney, D. and Espinosa-Aranda, J.L.
European Data Handling and Data Processing for Space Conference, Juan-les-Pins, France
The popularity of Artificial Intelligence (AI) applications is counterbalanced by their cost in terms of time and energy. Traditional computing systems have evolved with separate computing and storage units, which require data movement in order to perform data processing. Computational Storage Device (CSD) technologies have been proposed to push processing to the data, reducing time, memory, energy, and bandwidth usage. In this paper, an Enhanced CSD (3CSD) is introduced: an efficient and flexible storage-device-based triage that provides a seamless workflow to process and infer data in place, representing an evolution over traditional CSDs, which employ a store-first, infer-later paradigm. The experimental results show that the proposed Multistage LILLIAN (LeveragIng Last-miLe data usIng context-based AutolabelliNg) increases the performance of the triage detectors by 20%. Furthermore, several state-of-the-art fusion approaches are evaluated, including late fusion (ensembles), which boosted the performance of the triage subsystem significantly. Experimental results are evaluated using the Intel Movidius Myriad X.
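A minimal sketch of the late-fusion idea mentioned above, assuming hypothetical per-detector confidence scores and an illustrative keep threshold (not the paper's actual fusion scheme):

```python
# Hedged sketch: late fusion (ensemble) over triage detector scores.
# The detector scores and threshold below are illustrative assumptions.
import numpy as np

def fuse_scores(scores, weights=None):
    """Late fusion: (weighted) average of per-detector confidences in [0, 1]."""
    return float(np.average(scores, weights=weights))

# Confidences three independent triage detectors might emit for one sample.
detector_scores = np.array([0.62, 0.81, 0.74])
fused = fuse_scores(detector_scores)
KEEP_THRESHOLD = 0.7  # illustrative operating point
print(f"fused={fused:.2f} -> {'process in place' if fused >= KEEP_THRESHOLD else 'skip'}")
```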
Efficient In-Orbit CNN Updates
Vallez, N., Rodriguez-Bobada, R., Dunne, A. and Espinosa-Aranda, J.L.
European Data Handling and Data Processing for Space Conference, Juan-Les-pins, France
Building Artificial Intelligence (AI) models for Earth Observation (EO) satellites can be a difficult task, since there is often no, or very limited, real training data available for a particular sensor. Even if data exists from a previous mission, a change in the sensor may make the model unsuitable. Being able to update the trained model using the actual data provided by the sensor after the satellite is in orbit is therefore highly important. Thus, the main purpose of this work is to investigate and develop how to remotely update a model placed in a satellite in orbit, and how to control or specify the update size using the training parameters. The goal is to improve the accuracy of the model using new data acquired from the in-orbit satellite, and to achieve this improvement while reducing the size of the model update that is uplinked to the satellite. The proposed method selects which Convolutional Neural Network (CNN) layer weights must be modified and which must be fixed during training in order to maximize the accuracy increase and minimize the update file size. For a sample network, and without the proposed method, results show an update size of 44.5 MB for the retrained network (with an original network size of 48.9 MB), with retraining using the new data (such as that acquired from a satellite post-launch) resulting in an accuracy improvement from 78.4% to 79.9%. With the proposed Efficient Network Update (ENU) method, the generated post-training network update is only 18 MB in size, but still achieves an accuracy of 78.9%. This demonstrates the reduced data bandwidth requirement to update the model while still gaining accuracy over the original trained network.
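To make the freeze-and-diff mechanics concrete, here is a minimal PyTorch sketch assuming a stand-in network and a hand-picked layer subset; the paper's ENU method selects the layers to retrain so as to trade accuracy gain against update size, which this sketch does not reproduce.

```python
# Hedged sketch of the freeze-and-diff mechanics behind a small model update.
# The network, layer selection, and sizes are illustrative assumptions.
import io
import torch
import torchvision.models as models

model = models.resnet18(num_classes=4)   # stand-in for the onboard CNN

# Freeze everything, then unfreeze only the layers selected for retraining.
for param in model.parameters():
    param.requires_grad = False
for name, param in model.named_parameters():
    if name.startswith(("layer4", "fc")):  # illustrative hand-picked subset
        param.requires_grad = True

# ... fine-tune here on new data acquired from the in-orbit sensor ...

# The uplinked update payload holds only the retrained tensors.
update = {n: p.detach() for n, p in model.named_parameters() if p.requires_grad}
buffer = io.BytesIO()
torch.save(update, buffer)
print(f"update payload: {buffer.getbuffer().nbytes / 1e6:.1f} MB")
```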
Autonomous Operational Scheduling on CogniSAT-6 Based on Onboard Artificial Intelligence
Rijlaarsdam, D., Hendrix, T., González, P.T.T., Velasco-Mata, A., Buckley, L., Miquel, J.P., Casaled, O.A. and Dunne, A.
17th Symposium on Advanced Space Technologies in Robotics and Automation
To enable the Earth Observation space systems required to serve the needs of life on Earth in the near future, these systems need to operate more efficiently and autonomously. Artificial Intelligence can be deployed at the edge on spacecraft to provide this required increase in autonomy. CogniSAT-6, an upcoming CubeSat Earth Observation mission by Ubotica and Open Cosmos, will leverage this technology to interpret captured images and use the extracted information to autonomously schedule operations without any input from the ground. This capability greatly increases the efficiency of Earth Observation systems and enables tip-and-cue scenarios.
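A hedged sketch of the sense-interpret-schedule loop this describes, with hypothetical detection output and a simple priority queue standing in for the onboard scheduler:

```python
# Illustrative sketch only: all names and structures here are assumptions,
# not CogniSAT-6 flight software.
from dataclasses import dataclass

@dataclass
class Task:
    target: tuple   # (lat, lon) to re-image
    priority: float

schedule: list[Task] = []  # stand-in for the onboard operations schedule

def detect(image):
    """Stand-in for on-board AI inference; returns fake detections."""
    return [{"location": (53.3, -6.3), "confidence": 0.9}]

def autonomous_cycle(image):
    for d in detect(image):
        # Tip-and-cue: a detection in one pass cues a follow-up observation,
        # inserted into the schedule without any input from the ground.
        schedule.append(Task(target=d["location"], priority=d["confidence"]))
    schedule.sort(key=lambda t: -t.priority)

autonomous_cycle(image=None)
print(schedule)
```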
Benchmarking Deep Learning Models on Myriad and Snapdragon Processors for Space Applications
Dunkel, E.R., Swope, J., Candela, A., West, L., Chien, S.A., Towfic, Z., Buckley, L., Romero-Cañas, J., Espinosa-Aranda, J.L., Hervas-Martin, E. and Fernandez, M.R.
Journal of Aerospace Information Systems Vol. 20(10), pp. 660-674
Future space missions can benefit from processing imagery onboard to detect science events, create insights, and respond autonomously. One of the challenges to this mission concept is that traditional space flight computing has limited capabilities, because it is derived from much older computing to ensure reliable performance in the extreme environments of space, particularly radiation. Modern Commercial Off The Shelf (COTS) processors, such as the Movidius Myriad X and the Qualcomm Snapdragon, provide significant improvements in small Size, Weight and Power (SWaP) packaging and offer direct hardware acceleration for deep neural networks, although these processors are not radiation hardened. We deploy neural network models on these processors hosted by Hewlett Packard Enterprise's Spaceborne Computer-2 onboard the International Space Station (ISS). We find that the Myriad and Snapdragon DSP/AIP provide a speed improvement over the Snapdragon CPU in all cases except single-pixel networks (typically >10x for the DSP/AIP). In addition, the discrepancy introduced through quantization and porting of our JPL models was usually quite low (<5%). Models are run multiple times, and memory checkers are deployed to test for radiation effects. To date, we have found no difference in output between ground and ISS runs, and no memory checker errors.
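For intuition, the two quantities reported (speed and quantization discrepancy) can be measured as in the sketch below; the "models" are toy stand-ins, not the JPL networks or the actual Myriad/Snapdragon runtimes.

```python
# Hedged sketch: measure mean inference latency and the output discrepancy
# introduced by quantization. The models are toy stand-ins (assumptions).
import time
import numpy as np

def mean_latency_ms(run, x, n=100):
    """Average wall-clock latency of one call to run(x), in milliseconds."""
    t0 = time.perf_counter()
    for _ in range(n):
        run(x)
    return 1000 * (time.perf_counter() - t0) / n

def discrepancy(ref, quant):
    """Mean absolute output difference, relative to the float reference."""
    return float(np.mean(np.abs(ref - quant)) / (np.mean(np.abs(ref)) + 1e-9))

w = np.random.default_rng(2).random((64, 8)).astype(np.float32)
float_model = lambda x: x @ w                                             # float reference
quant_model = lambda x: x @ (np.round(w * 127) / 127).astype(np.float32)  # crude 8-bit

x = np.random.default_rng(3).random((1, 64)).astype(np.float32)
print(f"latency {mean_latency_ms(float_model, x):.3f} ms, "
      f"quantization discrepancy {discrepancy(float_model(x), quant_model(x)):.2%}")
```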
Benchmarking Deep Learning, Instrument Processing, and Mission Planning Applications on Edge Processors Onboard the ISS
Dunkel, E., Swope, J., West, L., Mirza, F., Chien, S., Towfic, Z., Holloway, A., Buckley, L., Romero-Canas, J., Espinosa-Aranda, J.L., Hervas-Martin, E., Fernandez, M. and Knox, C.
2023 Earth Science Technology Forum (ESTF)
The Φ-Sat-1 Mission: The First On-Board Deep Neural Network Demonstrator for Satellite Earth Observation
Giuffrida, G., Fanucci, L., Meoni, G., Batič, M., Buckley, L., Dunne, A., van Dijk, C., Esposito, M., Hefele, J., Vercruyssen, N., Furano, G., Pastena, M. and Aschbacher, J.
IEEE Transactions on Geoscience and Remote Sensing Vol. 60, pp. 1-14, 2022
Artificial intelligence (AI) is paving the way for a new era of algorithms focusing directly on the information contained in the data, autonomously extracting relevant features for a given application. While the initial paradigm was to have these applications run on a server-hosted processor, recent advances in microelectronics provide hardware accelerators with an efficient ratio between computation and energy consumption, enabling the implementation of AI algorithms "at the edge." In this way, only the meaningful and useful data are transmitted to the end user, minimizing the required data bandwidth and reducing the latency with respect to the cloud computing model. In recent years, the European Space Agency (ESA) has been promoting the development of disruptive innovative technologies on-board Earth observation (EO) missions. In this field, the most advanced experiment to date is Φ-sat-1, which has demonstrated the potential of AI as a reliable and accurate tool for cloud detection on-board a hyperspectral imaging mission. The activities involved included demonstrating the robustness of the Intel Movidius Myriad 2 hardware accelerator against ionizing radiation, developing the CloudScout segmentation neural network (NN), run on the Myriad 2, to identify, classify, and eventually discard on-board the cloudy images, and assessing the innovative HyperScout-2 hyperspectral sensor. This mission represents the first official attempt to successfully run an AI deep convolutional NN (CNN) directly inferencing on a dedicated accelerator on-board a satellite, opening the way for a new era of discovery and commercial applications driven by the deployment of on-board AI.
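The on-board filtering decision that such a segmentation network enables can be illustrated as follows; the mask, threshold, and function names are assumptions, not the mission's actual flight logic:

```python
# Hedged sketch of the on-board keep/discard decision: segment clouds,
# compute the cloudy-pixel fraction, and discard the image if it exceeds
# a threshold. All values here are illustrative assumptions.
import numpy as np

def cloudy_fraction(cloud_mask):
    """cloud_mask: binary per-pixel output of the segmentation network."""
    return float(cloud_mask.mean())

def keep_for_downlink(cloud_mask, max_cloud=0.7):
    return cloudy_fraction(cloud_mask) <= max_cloud

mask = (np.random.default_rng(1).random((512, 512)) > 0.4).astype(np.uint8)
print("downlink" if keep_for_downlink(mask) else "discard on board")
```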
Transfer Learning for On-Orbit Ship Segmentation
Fanizza, V., Rijlaarsdam, D., González, P.T.T. and Espinosa-Aranda, J.L.
AI4Space Workshop @ECCV 2022
With the adoption of edge AI processors for space, on-orbit inference on EO data has become a possibility. This enables a range of new applications for space-based EO systems. Since the development of on-orbit AI applications requires rarely available raw data, training of these AI networks remains a challenge. To address this issue, we investigate the effects of varying two key image parameters between training and testing data on a ship segmentation network: Ground Sampling Distance and band misalignment magnitude. Our results show that, for both parameters, the network exhibits degraded performance if these parameters differ in the testing data with respect to the training data. We show that this performance drop can be mitigated with appropriate data augmentation. By preparing models at the training stage for the appropriate feature space, the need for additional on-board computational resources for, e.g., image scaling or band alignment of camera data can be avoided.
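A minimal sketch of the two augmentations investigated, assuming illustrative parameter ranges (the paper's actual ranges may differ):

```python
# Hedged sketch of the two training-time augmentations: GSD change via spatial
# rescaling, and per-band misalignment via sub-pixel shifts. Ranges are assumptions.
import numpy as np
from scipy.ndimage import shift, zoom

def augment(img, rng):
    """img: (bands, H, W) array; returns a randomly rescaled and shifted copy."""
    factor = rng.uniform(0.5, 2.0)            # simulate a different GSD
    img = zoom(img, (1, factor, factor), order=1)
    out = np.empty_like(img)
    for b in range(img.shape[0]):
        dy, dx = rng.uniform(-3, 3, size=2)   # simulate band misalignment (px)
        out[b] = shift(img[b], (dy, dx), order=1, mode="nearest")
    return out

rng = np.random.default_rng(0)
tile = rng.random((4, 256, 256), dtype=np.float32)  # fake 4-band image tile
print(augment(tile, rng).shape)
```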
Benchmarking Deep Learning Inference of Remote Sensing Imagery on the Qualcomm Snapdragon and Intel Movidius Myriad X Processors Onboard the International Space Station
Dunkel, E., Swope, J., Towfic, Z., Chien, S., Russell, D., Sauvageau, J., Sheldon, D., Romero-Cañas, J., Espinosa-Aranda, J.L., Buckley, L., Hervas-Martin, E., Fernandez, M. and Knox, C.
International Geoscience and Remote Sensing Symposium (IGARSS 2022)
Deep space missions can benefit from onboard image analysis. We demonstrate deep learning inference to facilitate future mission adoption of such algorithms. Traditional space flight hardware provides modest compute when compared to today's laptop and desktop computers. New generations of commercial off the shelf (COTS) processors designed for embedded applications, such as the Qualcomm Snapdragon and Movidius Myriad X, deliver significant compute in small Size, Weight and Power (SWaP) packaging and offer direct hardware acceleration for deep neural networks. We deploy neural network models on these processors hosted by Hewlett Packard Enterprise's Spaceborne Computer-2 onboard the International Space Station (ISS). We benchmark a variety of algorithms trained on imagery from Earth or Mars, as well as some standard deep learning models for image classification.
Testing Mars Rover, Spectral Unmixing, and Ship Detection Neural Networks, and Memory Checkers on Embedded Systems Onboard the ISS
Dunkel, E., Swope, J., Candela, A., West, L., Chien, S., Buckley, L., Romero-Cañas, J., Espinosa-Aranda, J.L., Hervas-Martin, E., Towfic, Z., Russell, D., Sauvageau, J., Sheldon, D., Fernandez, M. and Knox, C.
16th Symposium on Advanced Space Technologies in Robotics and Automation
Future space missions can benefit from processing imagery onboard to detect science events, create insights, and respond autonomously. This capability can enable the discovery of new science. One of the challenges to this mission concept is that traditional space flight hardware has limited capabilities, being derived from much older computing in order to ensure reliable performance in the extreme environments of space, particularly radiation. Modern Commercial Off The Shelf (COTS) processors, such as the Movidius Myriad X and the Qualcomm Snapdragon, provide significant improvements in small Size, Weight and Power (SWaP) packaging. They offer direct hardware acceleration for deep neural networks, which are state-of-the-art in computer vision. We deploy neural network models on these processors hosted by Hewlett Packard Enterprise's Spaceborne Computer-2 onboard the International Space Station (ISS). We benchmark a variety of algorithms on these processors. The models are run multiple times on the ISS to see if any errors develop. In addition, we run a memory checker to detect radiation effects on the embedded processors.
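A software memory checker of the kind described can be as simple as the following sketch; the region size and test pattern are illustrative assumptions:

```python
# Hedged sketch: write a known pattern to a buffer, then periodically re-read
# it and count mismatches that could indicate radiation-induced bit flips.
import time
import numpy as np

PATTERN = 0xA5                     # common memory test pattern (illustrative)
buf = np.full(64 * 1024 * 1024, PATTERN, dtype=np.uint8)  # 64 MB test region

def check(buf):
    """Return the number of bytes that no longer match the pattern."""
    errors = int(np.count_nonzero(buf != PATTERN))
    buf[:] = PATTERN               # rewrite so later passes detect fresh upsets
    return errors

for _ in range(3):                 # in flight this loops for the experiment's duration
    print("mismatched bytes:", check(buf))
    time.sleep(1)
```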
Radiation Test and In-Orbit Performance of MPSoC AI Accelerator
Buckley, L., Dunne, A., Furano, G. and Tali, M.
IEEE Aerospace Conference 2022, Montana
Benchmarking Deep Learning On a Myriad X Processor Onboard the International Space Station (ISS)
Dunkel, E., Buckley, L., Espinosa-Aranda, J.L., Romero-Cañas, J., Hervas-Martin, E., Towfic, Z., Swope, J., Russell, D., Sauvageau, J., Sheldon, D., Chien, S., Wagstaff, K., Lu, S., Denbina, M., Knox, C. and Fernandez, M.
Flight Software Workshop 2022
Benchmarking Machine Learning on the Myriad X Processor Onboard the ISS
Dunkel, E., Espinosa-Aranda, J.L., Romero-Cañas, J., Buckley, L., Towfic, Z., Mirza, F., Swope, J., Russell, D., Sauvageau, J., Sheldon, D., Chien, S., Fernandez, M., Knox, C., Wagstaff, K., Lu, S., Denbina, M., Atha, D., Swan, R.M. and Ono, H.
2021 International Space Station Research and Development Conference
FPGA & VPU Co-Processing in Space Applications: Development and Testing with DSP/AI Benchmarks
Leon, V., Bezaitis, C., Lentaris, G., Soudris, D., Reisis, D., Papatheofanous, E.-A., Kyriakos, A., Dunne, A., Samuelsson, A. and Steenari, D.
2021 28th IEEE International Conference on Electronics, Circuits, and Systems (ICECS), pp. 1-5
High-Performance Compute Board – a Fault-Tolerant Module for On-Board Vision Processing
España-Navarro, J., Samuelsson, A., Gingsjö, H., Barendt, J., Dunne, A., Buckley, L., Reisis, D., Kyriakos, A., Papatheofanous, E.A., Bezaitis, C., Matthijs, P., Ramos, J.P. and Steenari, D.
2021 European Workshop on On-Board Data Processing (OBDP)
Towards the Use of Artificial Intelligence on the Edge in Space Systems: Challenges and Opportunities
Furano, G., Meoni, G., Dunne, A., Moloney, D., Ferlet-Cavrois, V., Tavoularis, A., Byrne, J., Buckley, L., Psarakis, M., Voss, K.-O. and Fanucci, L.
IEEE Aerospace and Electronic Systems Magazine Vol. 35(12), pp. 44-56
2020
An Evaluation of Low-Cost Vision Processors for Efficient Star Identification
Agarwal, S., Hervas-Martin, E., Byrne, J., Dunne, A., Espinosa-Aranda, J.L. and Rijlaarsdam, D.
Sensors Vol. 20(21), pp. 6250
2020
UB0100 AI & CV Compute Engine
Dunne, A.
2020 ESA Workshop on Avionics, Data, Control and Software Systems (ADCSS)
Smart Doll: Emotion Recognition Using Embedded Deep Learning
Espinosa-Aranda, J., Vallez, N., Rico-Saavedra, J., Parra-Patino, J., Bueno, G., Sorci, M., Moloney, D., Pena, D. and Deniz, O.
Symmetry Vol. 10(9), pp. 387
2018
Computer vision and deep learning are clearly demonstrating a capability to create engaging cognitive applications and services. However, these applications have been mostly confined to powerful Graphics Processing Units (GPUs) or the cloud due to their demanding computational requirements. Cloud processing has obvious bandwidth, energy consumption, and privacy issues. Eyes of Things (EoT) is a powerful and versatile embedded computer vision platform which allows the user to develop artificial vision and deep learning applications that analyse images locally. In this article, we use the deep learning capabilities of an EoT device for a real-life facial informatics application: a doll capable of recognizing emotions, using deep learning techniques, and acting accordingly. The main impact and significance of the presented application is in showing that a toy can now do advanced processing locally, without the need for further computation in the cloud, thus reducing latency and removing most of the ethical issues involved. Finally, the performance of the convolutional neural network developed for that purpose is studied, and a pilot was conducted on a panel of 12 children aged between four and ten years old to test the doll.
Eyes of Things
Deniz, O., Vallez, N., Espinosa-Aranda, J., Rico-Saavedra, J., Parra-Patino, J., Bueno, G., Moloney, D., Dehghani, A., Dunne, A., Pagani, A., Krauss, S., Reiser, R., Waeny, M., Sorci, M., Llewellynn, T., Fedorczak, C., Larmoire, T., Herbst, M., Seirafi, A. and Seirafi, K.
Sensors Vol. 17(5), pp. 1173
2017
Embedded systems control and monitor a great deal of our reality. While some “classic” features are intrinsically necessary, such as low power consumption, rugged operating ranges, fast response and low cost, these systems have evolved in the last few years to emphasize connectivity functions, thus contributing to the Internet of Things paradigm. A myriad of sensing/computing devices are being attached to everyday objects, each able to send and receive data and to act as a unique node in the Internet. Apart from the obvious necessity to process at least some data at the edge (to increase security and reduce power consumption and latency), a major breakthrough will arguably come when such devices are endowed with some level of autonomous “intelligence”. Intelligent computing aims to solve problems for which no efficient exact algorithm can exist or for which we cannot conceive an exact algorithm. Central to such intelligence is Computer Vision (CV), i.e., extracting meaning from images and video. While not everything needs CV, visual information is the richest source of information about the real world: people, places and things. The possibilities of embedded CV are endless if we consider new applications and technologies, such as deep learning, drones, home robotics, intelligent surveillance, intelligent toys, wearable cameras, etc. This paper describes the Eyes of Things (EoT) platform, a versatile computer vision platform tackling those challenges and opportunities.