Publications

Demonstrating Onboard Inference for Earth Science Applications with Spectral Analysis Algorithms and Deep Learning

FULL TEXT

Zilberstein, I., Candela, A., Chien, S., Rijlaarsdam, D., Hendrix, T., Buckley, L. and Dunne, A.

November 2024

In partnership with Ubotica Technologies, the Jet Propulsion Laboratory is demonstrating state-of-the-art data analysis onboard CogniSAT-6/HAMMER (CS-6). CS-6 is a satellite with a visible and near infrared range hyperspectral instrument and neural network acceleration hardware. Performing data analysis at the edge (e.g. onboard) can enable new Earth science measurements and responses. We will demonstrate data analysis and inference onboard CS-6 for numerous applications using deep learning and spectral analysis algorithms.

Flight of Dynamic Targeting on CogniSAT-6

FULL TEXT

Chien, S., Zilberstein, I., Candela, A., Rijlaarsdam, D., Hendrix, T., Dunne, A., Aragon, O. and Miquel, J.

November 2024

Dynamic targeting (DT) is a spacecraft autonomy concept in which sensor data is acquired, rapidly analyzed, and used to drive subsequent observation. We describe the low Earth orbit application of this approach, in which lookahead imagery is analyzed to detect clouds, thermal anomalies, or land use cases to drive higher-quality near-nadir imaging. Use cases for such a capability include cloud avoidance, storm hunting, search for planetary boundary layer events, plume study, and beyond. The DT concept requires a lookahead sensor (or the agility to use a primary sensor in such a mode), edge computing to analyze images rapidly onboard, and a primary follow-up sensor. Additionally, an inter-satellite or low-latency communications link can be leveraged for cross-platform tasking. We describe the implementation in progress to fly DT in early 2025 on the CogniSAT-6 (Ubotica/Open Cosmos) spacecraft, which launched in March 2024 on SpaceX Transporter-10.

Maximizing Celestial Awareness: 6-Synchronized-Star-Trackers for Attitude Determination

FULL TEXT

Amari, A., Guesmi, B. and Moloney, D.

1st joint European Space Agency SPAICE Conference / IAA Conference on AI in and for Space

September 2024

Attitude determination is a crucial task for space missions and relies on multiple onboard sensors such as sun sensors, magnetometers, and Earth horizon sensors. Star trackers, which identify stars in a scene and match them against an existing star catalog to determine the attitude, provide superior performance compared to these traditional sensors, but were previously reserved for high-end missions. With the increasing popularity of small satellites, a trade-off between cost, efficiency, and precision is often encountered. Star sensors have since undergone significant advancements, becoming more efficient and accessible due to notable enhancements in hardware and software, particularly through the integration of neural networks. Leveraging artificial intelligence (AI) has enabled the development of a compact and reliable star sensor, potentially eliminating the need for other sensor types. In this work, 6-synchronized star-trackers (6SST), a sensor with multiple imaging channels, is proposed to achieve wider celestial coverage and hence greater reliability. To justify this configuration, a more efficient and optimised software pipeline, along with an enhanced hardware implementation, is required.

Radiation Characterization of the COTS Myriad X Edge Vision Processing Unit and Use Case in Space Applications

Tambara, L.A., de Oliveira, Á.B., Andersson, J., Buckley, L. and Dunne, A.

RADECS

September 2024

This work presents the radiation characterization of the COTS Myriad X Vision Processing Unit, from Intel Movidius, for proton- and heavy ion-induced Single Event Effects and Total Ionizing Dose. The component has already flown on different missions, and it continues to be considered for future ones. The results obtained show that the radiation performance of the device is aligned with that of other components manufactured in 16 nm FinFET technology. Potential use cases of the Myriad X in space computing platforms are also discussed.

Measuring AI model performance and domain shift in unlabelled image data from real scenarios

FULL TEXT

Vallez, N., Rodriguez-Bobanda, R., Dunne, A., Espinosa-Aranda, J.L. and Suarez, O.D.

September 2024

When an Artificial Intelligence model runs in a real scenario, two situations are possible: 1) the data analysed follows the same distribution as the data used for model training, and therefore the model performance is similar; or 2) the distribution of the new data is different, resulting in lower model performance. This is called “data/domain shift” and measuring it is desirable in order to reduce it. For example, for a model trained using images captured with high brightness, a change in the sensor may produce darker samples and make the model fail. To mitigate this problem, the sensor can be configured to obtain brighter images and thus reduce the data shift. The simplest way to measure the shift is to compare metrics for the two data distributions. However, data captured in the real scenario is not labelled, and an alternative is needed. In this work, we propose using the Jensen-Shannon divergence score to measure the data shift. Results, obtained using 5-fold cross-validation, show strong negative correlation between the proposed metric and the accuracy (-0.81, -0.87 and -0.91) when test samples are modified for different brightness, sharpness and blur. The approach has applicability to autonomously measuring domain shift in Earth Observation data.
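
As a rough illustration of the proposed measurement, the sketch below computes the Jensen-Shannon divergence between brightness histograms of training data and unlabelled field data; the histogram feature and bin count are illustrative choices, not the paper's exact setup.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (in bits) between two discrete distributions."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def brightness_shift(train_images, field_images, bins=64):
    """Compare brightness histograms of training vs. unlabelled field data."""
    h_train, _ = np.histogram(np.concatenate([im.ravel() for im in train_images]),
                              bins=bins, range=(0, 255))
    h_field, _ = np.histogram(np.concatenate([im.ravel() for im in field_images]),
                              bins=bins, range=(0, 255))
    return js_divergence(h_train.astype(float), h_field.astype(float))
```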

The Intelligent Space Camera on the “Call to Adventure” SmallSat Mission

FULL TEXT

Dunne, A., Doozandeh, T., Aranda, R.R.-B., Cañas, J.R., Bermúdez, D.G., O’Connor, P. and Rijlaarsdam, D.

AIAA/USU Conference on Small Satellites (Small Sat)

August 2024

In March 2024, the first Intelligent Space Camera (ISC) from Ubotica Technologies was launched on the Apex “Call to Adventure” small satellite mission. The ISC is a designed-for-space camera payload with hardware acceleration of Artificial Intelligence (AI) and Computer Vision (CV) algorithms integrated directly into the camera. Deployable as a self-contained unit within a spacecraft, the ISC manages and executes various processes autonomously and internally without external dependencies, enabling new AI-driven automation-based paradigms and capabilities for space systems. This short paper describes the novel payload, its capabilities, and the mission it is flying on.

MANTIS, A 12U Smallsat Mission Taking Advantage of Super-Resolution and Artificial Intelligence for High-Resolution Imagery

FULL TEXT

Vallez, N., Rodriguez-Bobanda, R., Dunne, A., Espinosa-Aranda, J.L. and Suarez, O.D.

1st joint European Space Agency SPAICE Conference / IAA Conference on AI in and for Space

September 2024

The successful launch of the MANTIS mission, on November 11th 2023, was one of the key milestones of this three-year-long development. MANTIS is a commercial Earth Observation mission targeting the energy and mining industries, whose resources are usually found in remote and hostile regions and require dedicated, high-resolution monitoring. The mission is part of the UKSA-supported ESA InCubed programme, which stimulates innovation in industry by co-funding industry-initiated projects. The consortium is led by Open Cosmos and includes SATLANTIS as payload provider and Ubotica and Ingeniars as suppliers of the onboard AI capabilities. The space segment is based on a newly designed 12U CubeSat operating from a 525 km SSO orbit. The spacecraft in its final configuration weighs 14.5 kg and delivers 13 km swath images with a native GSD of 3.5 m in 4 bands (RGB and NIR). One of the peculiarities of the SATLANTIS iSIM90 is its binocular configuration, which allows for fast image acquisition and the application of their proprietary UHR algorithm that enhances the GSD to 2.2/3 m in the final super-resolved image. To complete the payload architecture, an onboard AI accelerator applies a cloud detection algorithm able to automatically discard cloudy images, enhancing the overall mission efficiency. The satellite is operated with an S-band link and downloads the images through an X-band downlink to ground stations located at Svalbard and TrollSat, Antarctica. The images are stored and processed within the Open Cosmos data platform, DataCosmos. The mission has now successfully completed the In Orbit Commissioning Review (IOCR) and is currently calibrating the payload chain. More than 500 GB of data has already been downloaded since the end of the Launch and Early Orbit Phase (LEOP). MANTIS will start the operational phase in H2 2024. This paper will briefly introduce the programmatic background, purpose, consortium and development of the MANTIS mission, including the technological advancements related not only to the payload chain but also to the Earth Observation tailoring of the platform design and its advancements in terms of EPS, OBDH and high-speed data link implementation. The focus of the paper will be the results of LEOP together with the IOCR, and initial results regarding the mission's compliance with its user requirements. Examples of the images captured, related data processing activities and associated data products will be shown to provide a summary of the current system capabilities, including initial performance. A concluding section will elaborate on exploitation for different application domains such as energy, mining, infrastructure or agriculture monitoring.

Demonstrating Onboard Inference for Earth Science Applications with Spectral Algorithms and Deep Learning

FULL TEXT

Zilberstein, I., Candela, A., Chien, S., Rijlaarsdam, D., Hendrix, T., Buckley, L. and Dunne, A.

Science Understanding through Data Science Conference (SUDS)

August 2024

In partnership with Ubotica Technologies, the Jet Propulsion Laboratory is demonstrating state-of-the-art data analysis onboard CogniSAT-6.

  • The capabilities of in-orbit assets to perform Earth science have skyrocketed in recent years
  • Performing data analysis at the edge (e.g. onboard) can enable new Earth-science measurements and responses
  • CogniSAT-6/HAMMER (CS-6) is a satellite with a visible and near infrared range hyperspectral instrument and neural network acceleration hardware [7]
  • We will demonstrate data analysis and inference onboard CS-6 for numerous applications using deep learning and spectral algorithms

The Next Era for Earth Observation Spacecraft: An Overview of CogniSAT-6

FULL TEXT

Rijlaarsdam, D., Hendrix, T., González, P.T.T., Velasco-Mata, A., Buckley, L., Miquel, J.P., Casaled, O.A. and Dunne, A.

Preprint

August 2024

Earth Observation spacecraft play a pivotal role in various critical applications impacting life on Earth. Historically, these systems have adhered to conventional operational paradigms, namely the “mow-the-lawn” and “bent pipe” approaches. In these paradigms, operational schedules are formulated on the ground and subsequently uploaded to the spacecraft for execution. Execution involves either systematically acquiring vast amounts of data (mow-the-lawn) or targeting specific areas of interest as defined by end users or operators. We aim to depart from these traditional methodologies by integrating onboard Artificial Intelligence, real-time communication, and new observing strategies in one system called CogniSAT-6. These transformative innovations will amplify the amount, speed, and quality of the information yielded by such a system by up to an order of magnitude. Consequently, these advancements are poised to revolutionize conventional Earth Observation systems from static entities into dynamic, intelligent, and interconnected instruments for highly efficient information gathering. This paper provides an overview of the current state of the art in autonomous Earth Observation spacecraft and the application of onboard processing in Earth Observation spacecraft. An overview is given of the CogniSAT-6 mission, its concept of operations, system architecture, and data processing design. Since we believe that the technology presented here will have a significant impact on society, an ethical framework for such systems is presented. Finally, the benefits of the technology and implications for EO systems going forward are discussed.

EoFNets: EyeonFlare Networks to Predict Solar Flare Using Temporal Convolutional Network

FULL TEXT

Guesmi, B., Daghrir, J., Moloney, D., Ortega, C.U., Furano, G., Mandorlo, G., Hervas-Martin, E. and Espinosa-Aranda, J.L.

10th International Conference on Control, Decision and Information Technologies (CoDIT)

July 2024

Solar Active Regions are characterised by their intense magnetic activity, which often leads to solar phenomena such as solar flares and coronal mass ejections (CMEs). With the recent advancement of computing technologies and the wide integration of Artificial Intelligence (AI), many approaches have been proposed for forecasting solar eruptions using machine learning. In this study, we propose the use of a Temporal Convolutional Network (TCN) for predicting whether an active region will flare within a specific window of time and for determining the flare class. The dataset is categorised into three different subsets based on the flare class, each trained separately with the same TCN architecture to apply late fusion. The proposed solar flare prediction ensemble (EoFNets) is based on both the physical characteristics of the active region (EoFPhyNet) and geometric features (EoFGeoNet). Experimental results show that the TCN outperforms long short-term memory (LSTM) in three cases. Our main aim is to deploy deep-learning-based approaches onboard for faster and more accurate real-time monitoring, as well as to leverage the higher sampling rates for improved time-series predictions. Many major benefits can be realised if the deep learning models can be implemented onboard, including a sizeable reduction in the volume of downlinked data and improved system latency. However, implementing deep learning models in space can be a critical task, as most approaches require high computational and memory resources, both of which are limited in typical spacecraft onboard data handling systems. Nevertheless, the EoFNets network outlined in this paper has been optimised to fit the resource constraints of a space platform deployed at the extreme edge, far from Earth. Two low-power hardware targets are considered, namely the Intel Movidius Myriad X and Rockchip RK3588S. To the best of our knowledge, this is the first time that such a TCN network has been proposed for solar flare forecasting.
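
EoFNets itself is not reproduced here, but the sketch below shows the generic building block of a Temporal Convolutional Network: a dilated causal 1-D convolution with a residual connection, stacked with exponentially growing dilation. A minimal PyTorch sketch; channel counts and depth are illustrative.

```python
import torch
import torch.nn as nn

class CausalBlock(nn.Module):
    """Generic TCN residual block: dilated causal Conv1d + residual connection."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        # Left-pad so the convolution never sees future time steps (causality).
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                          # x: (batch, channels, time)
        y = nn.functional.pad(x, (self.pad, 0))    # pad the past only
        y = self.relu(self.conv(y))
        return self.relu(x + y)                    # residual connection

# Stack blocks with exponentially growing dilation for a long receptive field.
tcn = nn.Sequential(*[CausalBlock(16, dilation=2 ** i) for i in range(4)])
out = tcn(torch.randn(8, 16, 128))  # e.g. 8 sequences, 16 features, 128 steps
```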

Hardware-Aware, Deep-Learning Approaches for Image Denoising and Star Detection for Star Tracker Sensor

FULL TEXT

Guesmi, B. and Moloney, D.

10th International Conference on Control, Decision and Information Technologies (CoDIT)

July 2024

In recent years, Deep Neural Networks (DNNs) approaches have outperformed traditional techniques for several computer vision problems. This has been made possible by the increase of computational resources represented by Graphical Processing Units (GPU) that allow training using large datasets and the availability of deep learning accelerators for inference. On the other hand, the attitude determination accuracy requirements for spacecraft are increasing. The most accurate attitude determination sensor for spacecraft is the so-called star sensor or star tracker. With the increase in low-cost satellite platforms such as CubeSats, research into the improvement of star sensor accuracy for low-power and low-cost sensor architectures remains a relevant subject. In this context, we examine several methods for noise reduction and star detection for improving centroiding performance. More specifically, an efficient and robust denoising method for star images using an Auto-Encoder (AE) is proposed. This method enhances the image quality for systems sensitive to noise. Furthermore, an accurate and lightweight algorithm based on an existing YOLO (You Only Look Once) architecture is proposed to detect the location of stars in the image. In this work, the YOLO bounding boxes are used to describe the space region around the stars. Subsequently, the star centroid within the bounding box is computed using the COG (Center Of Gravity) method. This method removes the need for centroiding algorithms sliding over the entire image area. An extensive comparison of the proposed denoising technique with other traditional filters confirms that the proposed method is robust to all noise models and reconstructs the corrupted images well. Experiments show that the proposed YOLO-based star detector achieves high accuracy with a lightweight architecture and without any extra latency.
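
The centroiding step lends itself to a short sketch: the detector's bounding boxes restrict the Center Of Gravity computation to small windows instead of sliding over the whole frame. The box format and names below are illustrative.

```python
import numpy as np

def cog_centroid(image, box):
    """Sub-pixel star centroid via Center Of Gravity inside one bounding box.

    image: 2-D array of pixel intensities.
    box:   (x0, y0, x1, y1) from the YOLO-style detector, pixel coordinates.
    """
    x0, y0, x1, y1 = box
    window = image[y0:y1, x0:x1].astype(float)
    total = window.sum()
    if total == 0:
        return None                        # empty window, no star energy
    ys, xs = np.mgrid[y0:y1, x0:x1]        # pixel coordinate grids
    return (np.sum(xs * window) / total,   # centroid x
            np.sum(ys * window) / total)   # centroid y
```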

Deploying Machine Learning Anomaly Detection Models to Flight Ready AI Boards

FULL TEXT

Murphy, J., Buckley, M., Buckley, L., Taylor, A., O’Brien, J. and Mac Namee, B.

Proceedings of the 3rd Workshop on AI for Space in conjunction with IEEE/CVF Conference on Computer Vision and Pattern Recognition

June 2024

This study explores the development and implementation of machine learning (ML) models on space-qualified AI boards, aiming to identify the most effective solution for implementing anomaly detection systems on space missions. We investigate various ML anomaly detection techniques, including Autoencoders, Long Short-Term Memory (LSTM) cells, Isolation Forests, and Transformers. These models were trained on a univariate dataset derived from real space missions and deployed on hardware engineered for space environments. Our analysis extends to a diverse array of hardware platforms to comprehensively assess performance. Specifically, we explore space flight ready boards (Ubotica CogniSAT-XE1 and XE2, which incorporate Intel’s Myriad 2 and X chips respectively); commercial non-space flight ready edge-AI boards (NVIDIA’s Jetson Nano and Google Coral); and Field Programmable Gate Array (FPGA) implementations (from Microchip, AMD, and NanoXplore) to provide a thorough comparison of available platforms for onboard anomaly detection. This paper therefore provides a detailed examination of both the optimal ML models and hardware platforms for deploying univariate anomaly detection systems in space flight contexts and draws conclusions about which ones are most suitable.
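
As a flavour of one evaluated model family, the sketch below applies scikit-learn's Isolation Forest to windowed univariate telemetry; the window length and contamination rate are illustrative choices, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def detect_anomalies(series, window=32, contamination=0.01):
    """Flag anomalous windows in a univariate telemetry channel."""
    # Slice the 1-D series into overlapping fixed-length windows.
    windows = np.stack([series[i:i + window]
                        for i in range(len(series) - window + 1)])
    model = IsolationForest(contamination=contamination, random_state=0)
    labels = model.fit_predict(windows)   # -1 = anomaly, 1 = normal
    return np.where(labels == -1)[0]      # start indices of anomalous windows
```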

A Novel Georeferencing Approach Based on On-Board Insight Extraction Using Siamese Neural Networks for Comparable Embeddings Generation

del-Pozo, D.G., Velasco-Mata, A., Espinosa-Aranda, J.L., Vállez, N. and Dunne, A.

4S Symposium

May 2024

This paper describes a novel approach for performing geolocation of insights extracted on-board satellites. This approach takes advantage of the capabilities of neural networks to overcome a significant constraint of many SmallSats and CubeSats developed nowadays: the latency in delivering raw data to ground. Neural networks, after a training process, are able to compress raw imagery into numerical vectors, referred to as embeddings, which are much smaller while keeping the most relevant information. These embeddings are small enough to be transmitted over always-available Inter-Satellite Links alongside insights extracted on-board, thereby decoupling their transmission from the overpass of ground stations and reducing delivery latency. We developed an algorithm to compare these embeddings to a database of pre-calculated reference embeddings, enabling insight georeferencing. The developed neural network is able to capture the most notable features contained within the provided image, so the embedding comparison can be performed even in unfavourable scenarios where there are shifts, rotations, occlusions or other distortions with respect to the reference imagery, providing results with an absolute error lower than 20 meters for a variety of regions. This implementation also focuses on developing the most accurate solution possible to provide fast responses for time-critical scenarios.
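
A minimal sketch of the on-ground matching step, assuming cosine similarity as the comparison metric: a downlinked embedding is matched against pre-computed reference embeddings and the best match supplies the geolocation. All names are illustrative.

```python
import numpy as np

def georeference(query_emb, ref_embs, ref_coords):
    """Match one downlinked embedding against a reference database.

    query_emb:  (d,) embedding produced on board by the Siamese network.
    ref_embs:   (n, d) pre-computed embeddings of reference tiles.
    ref_coords: (n, 2) lat/lon of each reference tile.
    """
    q = query_emb / np.linalg.norm(query_emb)
    r = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    best = np.argmax(r @ q)          # highest cosine similarity
    return ref_coords[best]
```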

The OPS-SAT case: A data-centric competition for onboard satellite image classification

FULL TEXT

Meoni, G., Märtens, M., Derksen, D., See, K., Lightheart, T., Sécher, A., Martin, A., Rijlaarsdam, D., Fanizza, V. and Izzo, D.

Astrodynamics

March 2024

While novel artificial intelligence and machine learning techniques are evolving and disrupting established terrestrial technologies at an unprecedented speed, their adaptation onboard satellites is seemingly lagging. A major hindrance in this regard is the need for high-quality annotated data for training such systems, which makes the development process of machine learning solutions costly, time-consuming, and inefficient. This paper presents “the OPS-SAT case”, a novel data-centric competition that seeks to address these challenges. The powerful computational capabilities of the European Space Agency’s OPS-SAT satellite are utilized to showcase the design of machine learning systems for space by using only the small amount of available labeled data and relying on widely adopted, freely available open-source software. The generation of a suitable dataset, the design and evaluation of a public data-centric competition, and the results of an onboard experimental campaign using the competition winners’ machine learning model directly on OPS-SAT are detailed. The results indicate that adoption of open standards and deployment of advanced data augmentation techniques can retrieve meaningful onboard results comparatively quickly, simplifying and expediting an otherwise prolonged development period.

The Use of AI in Operational Space Weather Missions

FULL TEXT

Furano, G., Ortega, C.U., Tali, M., Guesmi, B., Moloney, D., Dean, M., Longepe, N. and Mathieu, P.-P.

104th American Meteorological Society (AMS) Annual Meeting

January 2024

Artificial intelligence (AI) has increasingly found its way into various space applications, with edge AI proving particularly useful in certain scenarios. Edge AI refers to AI that is implemented at the edge of a network, meaning it can operate locally without the need for a constant connection. This is particularly useful in space applications, where connectivity can be very limited or even non-existent for long periods of time. One application of edge AI in space is solar observation satellites. These satellites are typically equipped with a variety of sensors that collect data about the Sun, such as images of solar flares and coronal mass ejections (CMEs). However, the limited communication capabilities of these satellites can be a hindrance to their performance and commercial applications. Edge AI can help to overcome this limitation by allowing the satellite to process and analyze data locally, rather than constantly transmitting it back to Earth for processing. The ESA VIGIL mission is a space weather mission that will monitor the Sun from the L5 Lagrange point, the first of its kind to do so operationally from that position. It will provide early warnings of solar storms and help to protect critical infrastructure on Earth. The L5 vantage point will allow the mission to provide uninterrupted observations of the Sun from a perspective ahead of the Earth in its orbit around the Sun. The VIGIL mission will carry a suite of remote sensing instruments to study the Sun, including a coronagraph, a magnetograph and a heliospheric imager; it also includes in situ instruments, a magnetometer and a plasma analyzer, to aid the forecasting capabilities. The VIGIL mission is tasked with providing early warnings of solar storms and ejections, which can disrupt power grids, communications, and navigation systems on Earth. For the VIGIL mission, ESA has developed dedicated “computational memory” space-grade hardware and an Artificial Intelligence ensemble system tool to automatically classify solar flares. The computational memory will store the data collected by the VIGIL mission’s instruments; the classification tool will use this data to identify and classify solar flares. The tool uses the following information to classify solar flares:

  • Active region (AR): An AR is a grouping of sunspots on the Sun’s surface.
  • Sunspot images: Sunspot images are used to determine the number and size of sunspots in an AR.
  • Magnitude level: The magnitude level of a flare is a measure of its intensity.
  • Number of spots: The number of spots in an AR is a good indicator of the flare’s potential intensity.
  • Spots class: The spots class is a measure of the magnetic complexity of the sunspots in an AR.
  • Observed time: The observed time is the time at which the flare was observed.

The tool first determines, from a single point of view, if an AR is present. If it is, the tool proceeds to determine the magnitude level of the potential flare. The magnitude level is then used to predict the flare class. The tool also determines whether an ongoing solar flare is associated with a coronal mass ejection (CME). A CME is a large expulsion of plasma and magnetic field from the Sun. Data from several points of view in the Solar System would increase the accuracy of the CME propagation model. The data ensemble techniques proposed in this work can be generalized to a multiple-instrument ensemble, and even a multi-point observation ensemble within the Solar System, with the purpose of increasing not only accuracy (the usual focus) but also the robustness of a system-of-systems, in this case the multi-satellite Space Weather Network Alert System. The hardware needed to run the applications is readily available today, both on-ground and at the edge. The hardware options range from high-TRL Radiation-Hard-by-Design classical processors with extensive flight heritage to up-screened commercial-grade devices. Performance measurements are available for both ends of the spectrum, and the devices can therefore be chosen for the desired reliability level. An important aspect of integrating AI into systems requiring high reliability is the ability to qualify both the hardware and the software stacks, including the operating systems. Both fully space-qualified and fully commercial operating systems have been used for characterizing the performance of the applications to ensure that the appropriate reliability can be achieved. This hardware has a very small SWaP impact for edge applications and allows for seamless in-flight updates of inference code, adapting to unforeseen changing mission conditions and to opportunistic science or operational goals. The developed hardware can perform inferences very quickly, which could allow for oversampling on board the instruments. It can also be used in ground data centers to provide 24/7 observations. The system has demonstrated promising accuracy and performance in its first results, and it has shown the ability to detect CMEs and predict the possibility of CME occurrence from active areas on the Sun.
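
The staged flow can be summarised schematically as below; the callables stand in for the tool's models and the thresholds are the standard GOES class boundaries, so this is an illustration rather than the tool's actual interface.

```python
def classify_event(obs, detect_ar, estimate_magnitude, associate_cme):
    """Schematic staging of the flare-classification flow described above.

    The three callables are placeholders for the tool's models.
    """
    ar = detect_ar(obs)                  # stage 1: is an active region present?
    if ar is None:
        return {"flare": False}
    flux = estimate_magnitude(ar)        # stage 2: potential peak X-ray flux, W/m^2
    flare_class = "X" if flux >= 1e-4 else "M" if flux >= 1e-5 else "C"
    return {"flare": True, "class": flare_class,
            "cme": associate_cme(ar, obs)}   # stage 3: CME association
```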

The AutoIce Challenge

FULL TEXT

Stokholm, A., Buus-Hinkler, J., Wulf, T., Korosov, A., Saldo, R., Pedersen, L.T., Arthurs, D., Dragan, I., Modica, I., Pedro, J., Debien, A., Chen, X., Patel, M., Pena Cantu, F.J., Noa Turnes, J., Park, J., Xu, L., Scott, K.A., Clausi, D.A., Fang, Y., Jiang, M., Taleghanidoozdoozan, S., Brubacher, N.C., Soleymani, A., Gousseau, Z., Smaczny, M., Kowalski, P., Komorowski, J., Rijlaarsdam, D., van Rijn, J.N., Jakobsen, J., Rogers, M.S.J., Hughes, N., Zagon, T., Solberg, R., Longépé, N. and Kreiner, M.B.

The Cryosphere, Vol. 18(8), pp. 3471–3494

August 2024

Mapping sea ice in the Arctic is essential for maritime navigation, and growing vessel traffic highlights the necessity of the timeliness and accuracy of sea ice charts. In addition, with the increased availability of satellite imagery, automation is becoming more important. The AutoICE Challenge investigates the possibility of creating deep learning models capable of mapping multiple sea ice parameters automatically from spaceborne synthetic aperture radar (SAR) imagery and assesses the current state of the automatic-sea-ice-mapping scientific field. This was achieved by providing the tools and encouraging participants to adopt the paradigm of retrieving multiple sea ice parameters rather than the current focus on single sea ice parameters, such as concentration. The paper documents the efforts and analyses, compares, and discusses the performance of the top-five participants’ submissions. Participants were tasked with the development of machine learning algorithms mapping the total sea ice concentration, stage of development, and floe size using a state-of-the-art sea ice dataset with dual-polarised Sentinel-1 SAR images and 22 other relevant variables while using professionally labelled sea ice charts from multiple national ice services as reference data. The challenge had 129 teams representing a total of 179 participants, with 34 teams delivering 494 submissions, resulting in a participation rate of 26.4 %, and it was won by a team from the University of Waterloo. Participants were successful in training models capable of retrieving multiple sea ice parameters with convolutional neural networks and vision transformer models. The top participants scored best on the total sea ice concentration and stage of development, while the floe size was more difficult. Furthermore, participants offered intriguing approaches and ideas that could help propel future research within automatic sea ice mapping, such as applying high downsampling of SAR data to improve model efficiency and produce better results.

Intelligent Space Camera for On-Orbit AI-Driven Visual Monitoring Applications

FULL TEXT

Dunne, A., Romero-Cañas, J., Caulfield, S., Romih, S. and Espinosa-Aranda, J.L.

European Data Handling and Data Processing for Space Conference, Juan-les-Pins, France

The Intelligent Space Camera (ISC) is a compact space camera with embedded Computer Vision and Artificial Intelligence capabilities, that can address applications requiring high-throughput smart processing directly at source. The camera, incorporating both hardware and software elements, is being developed in the frame of an ESA co-funded project to support space situational awareness, visual FDIR, and docking applications, among others. The camera, built around the Myriad X Vision Processing Unit (VPU), supports RTSP streaming, H.265 encoding, dynamic remote reconfiguration, and in-line AI stream processing at framerate, all directly on-camera. Processing results can be sent to the host as metadata or overlaid on the RTSP stream (e.g., as bounding boxes). This paper describes the system in its current form from a software and hardware point of view, as well as its key features and main use cases.
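
As a usage illustration, a host application might consume the camera's stream as sketched below with OpenCV; the stream URL is hypothetical.

```python
import cv2

# Hypothetical stream URL; the ISC publishes detections overlaid on this stream.
cap = cv2.VideoCapture("rtsp://isc.local:8554/stream")

while cap.isOpened():
    ok, frame = cap.read()        # one decoded frame, AI overlays already drawn
    if not ok:
        break
    cv2.imshow("ISC", frame)
    if cv2.waitKey(1) == 27:      # Esc to quit
        break
cap.release()
```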

Enhanced Computational Storage Device Employing AI-based Triage

FULL TEXT

Guesmi, B., Hervas-Martin, E., Moloney, D. and Espinosa-Aranda, J.L.

European Data Handling and Data Processing for Space Conference, Juan-les-Pins, France

October 2023

The popularity of Artificial Intelligence (AI) applications is counterbalanced by their cost in terms of time and energy. Traditional computing systems have evolved with separate computing and storage units, which require data movement in order to perform data processing. Computational Storage Device (CSD) technologies have been proposed to push processing to the data, reducing time, memory, energy, and bandwidth usage. In this paper, an Enhanced CSD (3CSD) is introduced: an efficient and flexible storage-device-based triage that provides a seamless workflow to process and infer data in place, representing an evolution over traditional CSD, which employs a store-first, inference-later paradigm. The experimental results show that Multistage LILLIAN (LeveragIng Last-miLe data usIng context-based AutolabelliNg) increases the performance of the triage detectors by 20%. Furthermore, some state-of-the-art fusion approaches are evaluated, including late fusion (Ensembles), which boosted the performance of the triage subsystem significantly. Experimental results are evaluated using the Intel Movidius Myriad X.
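
The late-fusion step named above reduces to combining per-detector scores after inference; a minimal sketch with weighted averaging, where the weights are illustrative.

```python
import numpy as np

def late_fusion(scores, weights=None):
    """Combine per-class scores from several triage detectors after inference.

    scores: (n_detectors, n_classes) array, one softmax row per detector.
    """
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.ones(len(scores)) / len(scores)   # plain averaging
    fused = weights @ scores                           # weighted ensemble vote
    return int(np.argmax(fused)), fused
```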

Implementation of the Φsat-2 on board image processing chain

FULL TEXT

Melega, N., Longepe, N., Marchese, V., Paskeviciute, A., Aragon, O., Babkina, I., Marin, A., Nalepa, J., Buckley, L., Guerrisi, G., Oliviera, S. and Stein, H.

Sensors, Systems, and Next-Generation Satellites XXVII, Vol. 12729, pp. 264-276

October 2023

The Φsat-2 mission from the European Space Agency (ESA) is part of the Φsat mission lineup, aimed at innovative mission concepts that make use of advanced onboard processing, including Artificial Intelligence. Φsat-2 is based on a 6U CubeSat with a medium-high resolution VIS/NIR multispectral payload (eight bands plus NIR) combined with a hardware-accelerated unit capable of running several AI applications throughout the mission lifetime. As images are acquired, and after the application of dTDI processing, the raw data is transferred through SpaceWire to a payload pre-processor where the L1B level product is generated. At this stage, radiometric and geometric processing are carried out in conjunction with georeferencing. Once the data is pre-processed, it is fed to the AI processor through the primary computer and made available to the onboard applications; orchestration is done via a dedicated version of the NanoSat MO Framework. The following applications are currently baselined, and an additional two will be selected via a dedicated AI Challenge by Q3 2023: SAT2MAP for autonomous detection of streets during emergency scenarios; the Cloud Detection application and service for data reduction; the Autonomous Vessel Awareness application to detect and classify vessel types; and the deep compression application (CAE), which has the goal of reducing the amount of acquired data to improve mission effectiveness.

Autonomous Operational Scheduling on CogniSAT-6 Based on Onboard Artificial Intelligence

FULL TEXT

Rijlaarsdam, D., Hendrix, T., González, P.T.T., Velasco-Mata, A., Buckley, L., Miquel, J.P., Casaled, O.A. and Dunne, A.

17th Symposium on Advanced Space Technologies in Robotics and Automation

October 2023

To enable the Earth Observation space systems required to serve the needs of life on Earth in the near future, these systems need to operate more efficiently and autonomously. Artificial Intelligence can be deployed on the edge on spacecraft to provide this required increased autonomy. CogniSAT-6, an upcoming CubeSat Earth Observation mission by Ubotica and Open Cosmos, will leverage this technology to interpret captured images and use this extracted information to autonomously schedule operations without any input from ground. This capability greatly increases the efficiency of Earth Observation systems and enables tip-and-cue scenarios.
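
A schematic sketch of the kind of decision logic this implies, where inferred image content gates what is scheduled next; thresholds and names are illustrative and do not represent CogniSAT-6's actual scheduler.

```python
def schedule_next(cloud_fraction, detected_targets, max_tasks=4):
    """Toy onboard scheduling rule driven by onboard inference results."""
    if cloud_fraction > 0.8:
        return []   # scene obscured: capture nothing, save downlink
    # Tip-and-cue style follow-up: queue observations of what was just detected.
    return [{"action": "capture", "target": t} for t in detected_targets[:max_tasks]]
```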

Efficient In-Orbit CNN Updates

FULL TEXT

Vallez, N., Rodriguez-Bobada, R., Dunne, A. and Espinosa-Aranda, J.L.

European Data Handling and Data Processing for Space Conference, Juan-les-Pins, France

Building Artificial Intelligence (AI) models for Earth Observation (EO) satellites can be a difficult task, since there is often no, or very limited, real training data available for a particular sensor. Even if data exists from a previous mission, a change in the sensor may make the model unsuitable. Therefore, being able to update the trained model, using the actual data provided by the sensor after the satellite is in orbit, is highly important. Thus, the main purpose of this work is to investigate and develop how to remotely update a model deployed on an in-orbit satellite, and how to control or specify the update size using the training parameters. The goal is to improve the accuracy of the model, using new data acquired from the in-orbit satellite, and to achieve this improvement while reducing the size of the model update that is uplinked to the satellite. The proposed method selects which Convolutional Neural Network (CNN) layer weights must be modified and which must be fixed during training in order to maximize the accuracy increase and minimize the update file size. For a sample network, and without the proposed method, results show an update size of 44.5 MB for the retrained network (with an original network size of 48.9 MB), with retraining on the new data (such as that acquired from a satellite post-launch) improving accuracy from 78.4% to 79.9%. With the proposed Efficient Network Update (ENU) method, the generated post-training network update is only 18 MB in size, yet still achieves an accuracy of 78.9%. This demonstrates the reduced data bandwidth requirement to update the model, while gaining accuracy over the original trained network.
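
A minimal PyTorch sketch of the idea, assuming the layer selection is already given (the selection strategy is the paper's ENU method and is not shown): freeze everything else, retrain, and serialise only the trainable layers as the uplinked update.

```python
import torch

def make_update(model, new_data_loader, train_fn, update_layers):
    """Retrain only selected layers and export a small update file.

    update_layers: name prefixes of the layers chosen for retraining.
    train_fn:      an ordinary training loop (not shown here).
    """
    for name, p in model.named_parameters():
        p.requires_grad = any(name.startswith(l) for l in update_layers)
    train_fn(model, new_data_loader)
    update = {n: w for n, w in model.state_dict().items()
              if any(n.startswith(l) for l in update_layers)}
    torch.save(update, "update.pt")   # uplink this instead of the full model

# On board: model.load_state_dict(torch.load("update.pt"), strict=False)
```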

Benchmarking Deep Learning Models on Myriad and Snapdragon Processors for Space Applications

FULL TEXT

Dunkel, E.R., Swope, J., Candela, A., West, L., Chien, S.A., Towfic, Z., Buckley, L., Romero-Cañas, J., Espinosa-Aranda, J.L., Hervas-Martin, E. and Fernandez, M.R.

Journal of Aerospace Information Systems, Vol. 20(10), pp. 660-674

Future space missions can benefit from processing imagery onboard to detect science events, create insights, and respond autonomously. One of the challenges to this mission concept is that traditional space flight computing has limited capabilities, because it is derived from much older computing to ensure reliable performance in the extreme environments of space, particularly radiation. Modern Commercial Off The Shelf (COTS) processors, such as the Movidius Myriad X and the Qualcomm Snapdragon, provide significant improvements in small Size, Weight and Power (SWaP) packaging and offer direct hardware acceleration for deep neural networks, although these processors are not radiation hardened. We deploy neural network models on these processors hosted by Hewlett Packard Enterprise’s Spaceborne Computer-2 onboard the International Space Station (ISS). We find that the Myriad and Snapdragon DSP/AIP provide speed improvements over the Snapdragon CPU in all cases except single-pixel networks (typically >10x for DSP/AIP). In addition, the discrepancy introduced through quantization and porting of our JPL models was usually quite low (<5%). Models are run multiple times, and memory checkers are deployed to test for radiation effects. To date, we have found no difference in output between ground and ISS runs, and no memory checker errors.

The ESA ΦSat-2 Mission: An AI Enhanced Multispectral CubeSat for Earth Observation

FULL TEXT

Melega, N., Marchese, V., Paškevičiūtė-Kidron, A., Dominguez, B.C., Longepe, N., Casaled, O.A., Babkina, I., Marin, A., Steyn, H., Buckley, L., Nalepa, J., Oliveira, S. and Guerrisi, G.

Proceedings of the AIAA/USU Conference on Small Satellites (Small Sat)

August 2023

As part of an initiative to promote the development and implementation of innovative technologies on-board Earth Observation (EO) missions, the European Space Agency (ESA) kicked off the first Φsat related activities in 2018 with the aim of enhancing the already ongoing FSSCAT project with Artificial Intelligence (AI). The selected Φsat-2 concept will provide a combination of on-board processing capabilities (including AI) and a medium-to-high resolution multispectral instrument from Visible to Near Infra-Red (VIS/NIR) able to acquire 8 bands (7 + Panchromatic), provided by SIMERA SENSE Europe (BE). These resources will be made available to a series of dedicated applications that will run on-board the spacecraft. The mission prime is Open Cosmos (UK), supported by CGI (IT) to coordinate the payload operations for at least 12 months after LEOP and the commissioning phase. During the nominal phase, the various AI applications will be fine-tuned after the on-ground training and then routinely run. A series of AI applications that could potentially be embarked are under development. The first one is called SAT2MAP and is expected to autonomously detect streets from acquired images. It is developed by CGI (IT). The second AI application is an enhancement of the Φsat-1 cloud detection experiment, able to prioritize data to be downloaded to ground based on standard cloud coverage and new concentration measurements. It is developed by KP Labs (PL) and is based on a U-Net. This application will mainly act as an on-board service for the other applications, relieving them of the task of assessing the presence of clouds. The Autonomous Vessel Awareness application aims to detect and classify various vessel types in the maritime domain. This would enable a reduced amount of data to be downloaded (only image patches including the vessel), improving the response time for final users (e.g. maritime authorities). In this case, the AI technique used is a combination of Single Image Super-Resolution (SRCNN) and a YOLO-based Convolutional Neural Network (CNN). The Deep Compression application generically reduces the amount of data to be downloaded to ground with limited information loss. The image is compressed on-board and then reconstructed on ground by means of a decoder. It can achieve a compression rate of about 7 per band. It is based on the use of a Convolutional Auto Encoder (CAE). Two more AI applications will be selected by ESA through a dedicated challenge open to institutions, agencies and industries that will be run in the first half of 2023. The Φsat-2 mission successfully passed the CDR phase at the end of 2022, aiming for a launch in 2024.

AIVIONIC – Artificial intelligence techniques in on-board avionics and software

FULL TEXT

Quintana, M., Parreira, B., Hinz, R., Belfo, J., Rosa, P., Balsalobre, J., Membibre, F., Latorre, A., Buckley, F., Espinosa-Aranda, J., Hervas-Martin, E., Gamero-Tello, J., Bloise, I., Feruglio, L., Varile, M., Silvestrini, S., Piccinin, M., Brandonisio, A., Lunghi, P. and Vasconcelos, J.

ESA 12th International Conference on Guidance Navigation and Control and 9th International Conference on Astrodynamics Tools and Techniques

July 2023

The use of Artificial Intelligence (AI), and in particular Deep Learning (DL), has led to major advances in several industries including automotive, agriculture and healthcare, disrupting traditional approaches and leading to a myriad of novel ground-breaking applications. The space domain has also been reached by the innovation potential of AI, chiefly in Earth Observation applications. Moreover, increasingly powerful processing units, together with enhanced and less computationally intensive AI algorithms, make it possible to explore new AI applications, especially for onboard implementations. The objective of the AIVIONIC technology development project is to implement a HW/SW demonstrator of an AI-based Visual Navigation System. This follows a novel development line towards demonstrating the use of AI in space critical systems in a dependable manner. Neural Network (NN)-based algorithms were identified considering specific mission characteristics such as on-board implementability and algorithm adaptability and flexibility. Lightweight, modular AI processing pipelines were selected and implemented, employing Object Detection and Keypoint Regression Networks which comply with the onboard processing resource and latency restrictions while offering the desired performance. Rigorous validation plays a major role for safety- and mission-critical elements. In AIVIONIC, the AI validation logic followed two complementary approaches. Firstly, validation was performed in all steps of the AI development process, starting from the design, prototyping, and training of the AI solutions, and ending in the implementation and validation on the target HW. Both synthetic and laboratory image data sets for the AI-IP were specifically created during the project. Extensive Monte Carlo campaigns were performed to measure the impact of input data variations, including variations in the image data, on the overall navigation performance and to assess the robustness of the combined AI-IP and navigation pipeline. Secondly, AI runtime monitoring, referring to the active monitoring of the AI algorithms while in operation, was implemented to support the AI algorithm validation. The HW platform for the AIVIONIC visual navigation system is composed of Ubotica’s CogniSat ecosystem, which is based on the Intel family of Myriad Vision Processing Units, together with the Xilinx Zynq UltraScale+, for both image pre-processing and AI inference for the flight elements of the architecture. The architecture can support multiple VPUs for both redundancy and performance. The obtained results show that the objective of developing a HW/SW demonstrator for a vision-based relative navigation system using AI in a dependable manner has been achieved by the AIVIONIC study, reaching TRL 4. The AI techniques reach the accuracies and latencies needed to meet the mission requirements, and provide advantages in terms of flexibility and reusability. Data availability and AI dependability methods play a key role in the development and use of AI in space critical systems, and AIVIONIC provides successful solutions for both. The paper describes the main challenges faced, results obtained, and progress made during the project.

Benchmarking Deep Learning, Instrument Processing, and Mission Planning Applications on edge Processors onboard the ISS

FULL TEXT

Dunkel, E., Swope, J., West, L., Mirza, F., Chien, S., Towfic, Z., Holloway, A., Buckley, L., Romero-Cañas, J., Espinosa-Aranda, J.L., Hervas-Martin, E., Fernandez, M. and Knox, C.

ESA 12th International Conference on Guidance Navigation and Control and 9th International Conference on Astrodynamics Tools and Techniques

June 2023

Transfer Learning for On-Orbit Ship Segmentation

FULL TEXT

Fanizza, V., Rijlaarsdam, D., González, P.T.T. and Espinosa-Aranda, J.L.

AI4Space Workshop @ECCV

October 2022

With the adoption of edge AI processors for space, on-orbit inference on EO data has become a possibility. This enables a range of new applications for space-based EO systems. Since the development of on-orbit AI applications requires raw data that is rarely available, training of these AI networks remains a challenge. To address this issue, we investigate the effects of varying two key image parameters between training and testing data on a ship segmentation network: Ground Sampling Distance and band misalignment magnitude. Our results show that for both parameters the network exhibits degraded performance if these parameters differ in testing data with respect to training data. We show that this performance drop can be mitigated with appropriate data augmentation. By preparing models at the training stage for the appropriate feature space, the need for additional computational resources on-board, e.g. for image scaling or band alignment of camera data, can be mitigated.
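
The two augmentations the results motivate can be sketched as below: rescaling to emulate a different Ground Sampling Distance and shifting spectral bands independently to emulate misalignment. Parameter ranges are illustrative.

```python
import numpy as np
from scipy.ndimage import zoom

def augment_gsd(img, rng, scale_range=(0.8, 1.2)):
    """Rescale an (H, W, bands) image to emulate a different GSD."""
    s = rng.uniform(*scale_range)
    return zoom(img, (s, s, 1), order=1)     # bilinear, bands untouched

def augment_band_misalignment(img, rng, max_shift=3):
    """Shift each band independently by a few pixels to emulate misalignment."""
    out = np.empty_like(img)
    for b in range(img.shape[2]):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        out[:, :, b] = np.roll(np.roll(img[:, :, b], dy, axis=0), dx, axis=1)
    return out

rng = np.random.default_rng(0)
patch = rng.random((256, 256, 4), dtype=np.float32)   # dummy 4-band patch
augmented = augment_band_misalignment(augment_gsd(patch, rng), rng)
```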

Benchmarking Deep Learning, Instrument Processing, and Mission Planning Applications on edge Processors onboard the ISS

FULL TEXT

Dunkel, E., Swope, J., West, L., Mirza, F., Chien, S., Towfic, Z., Holloway, A., Buckley, L., Romero-Canas, J., Espinosa-Aranda, J.L., Hervas-Martin, E., Fernandez, M. and Knox, C.

2023 Earth Science Technology Forum (ESTF)

We benchmark deep learning models on Intel’s Movidius Myriad X Vision Processing Unit, enabled by HPE’s Spaceborne Computer-2. Deployed classifiers include neural networks trained on Mars imagery from the Reconnaissance Orbiter and Curiosity rover as well as a Myriad X memory test. This work is a step towards running similar classifiers for a range of applications, including onboard autonomy and data distillation.

Validating a CNN-based Pose Estimation System for Relative Navigation with an Uncooperative Spacecraft on Myriad X Space Grade Processor

FULL TEXT

Hendrix, T., Rijlaarsdam, D., Buckley, L. and Cassinis, L.P.

2022 Clean Space Industry Days

October 2022

This study evaluates a convolutional neural network (CNN)-based pose estimation system for real-time relative navigation with uncooperative spacecraft, leveraging Intel’s Myriad X Vision Processing Unit (VPU) on Ubotica’s CogniSAT CubeSat board. The research explores adapting the Pose HRNet model, initially trained on GPUs, to operate effectively on the Myriad X—a low-power processing platform designed for small satellites. Key findings reveal the model achieves comparable accuracy and performance on the VPU, with a significantly higher inference efficiency per watt, critical for power-constrained space environments. The study validates the full processing pipeline in an orbital context, demonstrating the potential for deploying onboard pose estimation capabilities in future small satellite missions.
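
Keypoint-regression pipelines of this kind typically close with a Perspective-n-Point solve to recover the pose; a sketch with OpenCV, assuming the network's 2-D keypoints and known 3-D model points of the target (at least four correspondences).

```python
import cv2
import numpy as np

def pose_from_keypoints(kps_2d, model_pts_3d, K):
    """Recover relative pose from CNN keypoints via PnP.

    kps_2d:       (n, 2) image keypoints regressed by the network.
    model_pts_3d: (n, 3) matching points on the target's wireframe model.
    K:            (3, 3) camera intrinsic matrix.
    """
    ok, rvec, tvec = cv2.solvePnP(
        model_pts_3d.astype(np.float64),
        kps_2d.astype(np.float64),
        K.astype(np.float64),
        distCoeffs=None,
        flags=cv2.SOLVEPNP_EPNP)       # needs >= 4 correspondences
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)         # rotation vector -> rotation matrix
    return R, tvec                     # target pose in the camera frame
```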

Benchmarking Deep Learning Inference of Remote Sensing Imagery on the Qualcomm Snapdragon and Intel Movidius Myriad X Processors Onboard the International Space Station

FULL TEXT

Dunkel, E., Swope, J., Towfic, Z., Chien, S., Russell, D., Sauvageau, J., Sheldon, D., Romero-Cañas, J., Espinosa-Aranda, J.L., Buckley, L., Hervas-Martin, E., Fernandez, M. and Knox, C.

International Geoscience and Remote Sensing Symposium (IGARSS 2022)

Deep Space missions can benefit from onboard image analysis. We demonstrate deep learning inference to facilitate future mission adoption of said algorithms. Traditional space flight hardware provides modest compute when compared to today’s laptop and desktop computers. New generations of commercial off the shelf (COTS) processors designed for embedded applications, such as the Qualcomm Snapdragon and Movidius Myriad X, deliver significant compute in small Size Weight and Power (SWaP) packaging and offer direct hardware acceleration for deep neural networks. We deploy neural network models on these processors hosted by Hewlett Packard Enterprise’s Spaceborne Computer-2 onboard the International Space Station (ISS). We benchmark a variety of algorithms trained on imagery from Earth or Mars, as well as some standard deep learning models for image classification.

Testing Mars Rover, Spectral Unmixing, And Ship Detection Neural Networks, And Memory Checkers On Embedded Systems Onboard The ISS

FULL TEXT

Dunkel, E., Swope, J., Candela, A., West, L., Chien, S., Buckley, L., Romero-Cañas, J., Espinosa-Aranda, J.L., Hervas-Martin, E., Towfic, Z., Russell, D., Sauvageau, J., Sheldon, D., Fernandez, M. and Knox, C.

16th Symposium on Advanced Space Technologies in Robotics and Automation

Future space missions can benefit from processing imagery onboard to detect science events, create insights, and respond autonomously. This capability can enable the discovery of new science. One of the challenges to this mission concept is that traditional space flight hardware has limited capabilities and is derived from much older computing in order to ensure reliable performance in the extreme environments of space, particularly radiation. Modern Commercial Off The Shelf (COTS) processors, such as the Movidius Myriad X and the Qualcomm Snapdragon, provide significant improvements in small Size, Weight and Power (SWaP) packaging. They offer direct hardware acceleration for deep neural networks, which are state-of-the-art in computer vision. We deploy neural network models on these processors hosted by Hewlett Packard Enterprise’s Spaceborne Computer-2 onboard the International Space Station (ISS). We benchmark a variety of algorithms on these processors. The models are run multiple times on the ISS to see if any errors develop. In addition, we run a memory checker to detect radiation effects on the embedded processors.
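
A schematic of what such a memory checker does (write known patterns, read back, count bit flips); a real checker exercises the device's physical memory rather than a Python buffer.

```python
import numpy as np

def memory_check(n_bytes=1 << 20, pattern=0xA5):
    """Toy radiation memory checker: write a pattern, read back, count flips."""
    buf = np.full(n_bytes, pattern, dtype=np.uint8)    # write phase
    # ... time passes while the device is exposed on orbit ...
    readback = buf                                     # read phase
    flips = np.unpackbits(readback ^ pattern).sum()    # bits differing from pattern
    return int(flips)                                  # 0 means no upsets observed
```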

Radiation Test and in Orbit Performance of MpSoC AI Accelerator

FULL TEXT

Buckley, L., Dunne, A., Furano, G. and Tali, M.

IEEE Aerospace Conference 2022, Montana

φ-Sat-1 is part of the European Space Agency initiative to promote the development of disruptive innovative technology and capabilities on-board EO missions. The φ-Sat-1 satellite represents the first-ever on-board Artificial Intelligence (AI) deep Convolutional Neural Network (CNN) inference on a dedicated chip, attempting to exploit artificial Deep Neural Network (DNN) capability for Earth Observation. It utilises the Myriad Vision Processing Unit (VPU), a System On Chip (SOC) that has been designed ex novo for high-performance edge compute for vision applications. In order to support Myriad’s deployment on φ-Sat-1, the first mission using AI processing for operational purposes, and future applications in general, the SOC has undergone radiation characterisation via several test campaigns in European test facilities. The first AI application developed for in-flight inference was CloudScout, a segmentation neural network designed specifically for φ-Sat-1 in order to achieve high detail and good granularity in the classification result, and to eventually discard on-board the cloudy images acquired by the hyperspectral sensor, thus greatly enhancing the data throughput capability of the mission. In addition to the CloudScout cloud detection AI SW results acquired during φ-Sat-1’s mission, in-flight performance data was also acquired for the hardware inference engine. Four separate VPU-based inference engine test phases were executed over 70 days during the mission. The in-flight diagnostics tests for the VPU inference engine indicate that the device performed as expected on-board φ-Sat-1 without experiencing any functional upsets or any functional degradation effects due to radiation. All future installations of the Myriad VPU in space will be equipped with this Built-In Self Test (BIST), which will allow monitoring of the performance of the inference engine hardware.

Benchmarking Deep Learning On a Myriad X Processor Onboard the International Space Station (ISS)

FULL TEXT

Dunkel, E., Buckley, L., Espinosa-Aranda, J.L., Romero-Cañas, J., Hervas-Martin, E., Towfic, Z., Swope, J., Russell, D., Sauvageau, J., Sheldon, D., Chien, S., Wagstaff, K., Lu, S., Denbina, M., Knox, C. and Fernandez, M.

Flight Software Workshop 2022

We benchmark deep learning models on Intel’s Movidius Myriad X Vision Processing Unit, enabled by HPE’s Spaceborne Computer-2. Deployed classifiers include neural networks trained on Mars imagery from the Reconnaissance Orbiter and Curiosity rover as well as a Myriad X memory test. This work is a step towards running similar classifiers for a range of applications, including onboard autonomy and data distillation.

FPGA & VPU Co-Processing in Space Applications: Development and Testing with DSP/AI Benchmarks

FULL TEXT

Leon, V., Bezaitis, C., Lentaris, G., Soudris, D., Reisis, D., Papatheofanous, E.-A., Kyriakos, A., Dunne, A., Samuelsson, A. and Steenari, D.

2021 28th IEEE International Conference on Electronics, Circuits, and Systems (ICECS), pp. 1-5

The advent of computationally demanding algorithms and high data rate instruments in new space applications pushes the space industry to explore disruptive solutions for onboard data processing. We examine heterogeneous computing architectures involving high-performance and low-power commercial SoCs. The current paper implements an FPGA with VPU co-processing architecture utilizing the CIF & LCD interfaces for I/O data transfers. A Kintex FPGA serves as our framing processor and heritage accelerator, while we offload novel DSP/AI functions to a Myriad2 VPU. We prototype our architecture in the lab to evaluate the interfaces, the FPGA resource utilization, the VPU computational throughput, as well as the entire data handling system’s performance, via custom benchmarking.

The Φ-Sat-1 Mission: The First On-Board Deep Neural Network Demonstrator for Satellite Earth Observation

FULL TEXT

Giuffrida, G., Fanucci, L., Meoni, G., Batič, M., Buckley, L., Dunne, A., van Dijk, C., Esposito, M., Hefele, J., Vercruyssen, N., Furano, G., Pastena, M. and Aschbacher, J.

IEEE Transactions on Geoscience and Remote Sensing Vol. 60, pp. 1-14

2022

Artificial intelligence (AI) is paving the way for a new era of algorithms that focus directly on the information contained in the data, autonomously extracting the features relevant to a given application. While the initial paradigm was to run these applications on a server-hosted processor, recent advances in microelectronics provide hardware accelerators with an efficient ratio between computation and energy consumption, enabling AI algorithms to be implemented “at the edge.” In this way only the meaningful and useful data are transmitted to the end user, minimizing the required data bandwidth and reducing latency with respect to the cloud computing model. In recent years, the European Space Agency (ESA) has been promoting the development of disruptive, innovative technologies on-board Earth observation (EO) missions. In this field, the most advanced experiment to date is Φ-sat-1, which has demonstrated the potential of AI as a reliable and accurate tool for cloud detection on-board a hyperspectral imaging mission. The activities involved demonstrating the robustness of the Intel Movidius Myriad 2 hardware accelerator against ionizing radiation; developing CloudScout, a segmentation neural network (NN) run on the Myriad 2 to identify, classify, and eventually discard cloudy images on-board; and assessing the innovative HyperScout-2 hyperspectral sensor. This mission represents the first official attempt to successfully run a deep convolutional NN (CNN) performing inference directly on a dedicated accelerator on-board a satellite, opening the way for a new era of discovery and commercial applications driven by the deployment of on-board AI.
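
The downstream decision this cloud detection enables can be pictured as a simple threshold on the segmentation output. In the sketch below, the 70% cloudiness threshold and the mask shape are assumptions for illustration, not the mission's published parameters.

```python
import numpy as np

# Minimal sketch of the kind of on-board filtering decision described above:
# a segmentation network labels each pixel as cloud / not-cloud, and images
# whose cloudy fraction exceeds a threshold are discarded before downlink.
CLOUD_THRESHOLD = 0.7  # assumed value for illustration

def keep_for_downlink(cloud_mask: np.ndarray) -> bool:
    """cloud_mask: binary per-pixel output of the segmentation network."""
    cloudy_fraction = float(cloud_mask.mean())
    return cloudy_fraction <= CLOUD_THRESHOLD

mask = (np.random.rand(256, 256) > 0.5).astype(np.uint8)  # stand-in output
print("downlink" if keep_for_downlink(mask) else "discard on-board")
```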

Benchmarking Machine Learning on the Myriad X Processor Onboard the ISS

FULL TEXT

Dunkel, E., Espinosa-Aranda, J.L., Romero-Cañas, J., Buckley, L., Towfic, Z., Mirza, F., Swope, J., Russell, D., Sauvageau, J., Sheldon, D., Chien, S., Fernandez, M., Knox, C., Wagstaff, K., Lu, S., Denbina, M., Atha, D., Swan, R.M. and Ono, H.

2021 International Space Station Research and Development Conference

August 2021

We benchmark Intel’s Movidius Myriad X Vision Processing Unit, enabled by HPE’s Spaceborne Computer-2. Deployed classifiers include neural networks trained on Mars imagery from the Mars Reconnaissance Orbiter and the Curiosity rover. This work is a step towards running similar classifiers for a range of applications, including onboard autonomy and data distillation.

High-Performance Compute Board – a Fault-Tolerant Module for On-Board Vision Processing

FULL TEXT

España-Navarro, J., Samuelsson, A., Gingsjö, H., Barendt, J., Dunne, A., Buckley, L., Reisis, D., Kyriakos, A., Papatheofanous, E.A., Bezaitis, C., Matthijs, P., Ramos, J.P. and Steenari, D.

2021 European Workshop on On-Board Data Processing (OBDP)

June 2021

This technical paper describes the High-Performance Compute Board (HPCB), currently being implemented and tested by a consortium led by Cobham Gaisler within the framework of an ESA project. The first section gives a brief introduction to the platform, while subsequent sections add detail on the architecture and the hardware and software design. Finally, preliminary test results are presented before the conclusions summarize the paper’s most relevant aspects.

Towards the Use of Artificial Intelligence on the Edge in Space Systems: Challenges and Opportunities

FULL TEXT

Furano, G., Meoni, G., Dunne, A., Moloney, D., Ferlet-Cavrois, V., Tavoularis, A., Byrne, J., Buckley, L., Psarakis, M., Voss, K.-O. and Fanucci, L.

IEEE Aerospace and Electronic Systems Magazine Vol. 35(12), pp. 44-56

December 2020

The market for remote sensing space-based applications is fundamentally limited by uplink and downlink bandwidth and by the onboard compute capability of space data handling systems. This article details how the compute capability of these platforms can be vastly increased by leveraging emerging commercial off-the-shelf (COTS) system-on-chip (SoC) technologies. The resulting orders-of-magnitude increase in processing power can then be applied to consuming data at the source rather than on the ground, allowing the deployment of value-added applications in space that consume a tiny fraction of the downlink bandwidth otherwise required. The proposed solution has the potential to revolutionize Earth observation (EO) and other remote sensing applications, substantially reducing the time and cost of deploying new value-added services to space compared with the state of the art. This article also reports the first results on the radiation tolerance and power/performance of these COTS SoCs for space-based applications, and maps the trajectory toward low Earth orbit trials and the complete life cycle for space-based artificial intelligence classifiers on orbital platforms and spacecraft.
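
A back-of-envelope calculation, with assumed rather than published numbers, shows why downlinking inference results instead of raw data changes the bandwidth equation by orders of magnitude:

```python
# Back-of-envelope illustration (assumed numbers, not from the paper) of why
# on-board inference slashes downlink needs: sending classification results
# instead of raw hyperspectral frames.
raw_frame_mb = 500          # assumed raw hyperspectral scene, megabytes
label_bytes = 16            # assumed classification record per scene
scenes_per_day = 50

raw_downlink_mb = raw_frame_mb * scenes_per_day
edge_downlink_mb = label_bytes * scenes_per_day / 1e6
print(f"raw: {raw_downlink_mb:.0f} MB/day, edge: {edge_downlink_mb:.4f} MB/day")
print(f"reduction factor: {raw_downlink_mb / edge_downlink_mb:.0e}")
```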

An Evaluation of Low-Cost Vision Processors for Efficient Star Identification

FULL TEXT

Agarwal, S., Hervas-Martin, E., Byrne, J., Dunne, A., Espinosa-Aranda, J.L. and Rijlaarsdam, D.

Sensors Vol. 20(21), pp. 6250

September 2020

Star trackers are navigation sensors that determine a satellite’s attitude by observing stars. To be used on small satellites, a star tracker must be accurate while consuming as little power as possible. While traditional approaches use lookup tables to identify stars, the latest advances in star tracking use neural networks for automatic star identification. This manuscript evaluates two low-cost processors capable of running a star identification neural network: the Intel Movidius Myriad 2 Vision Processing Unit (VPU) and an STM32 microcontroller. The intention is to compare accuracy and power usage in order to evaluate each device’s suitability for use in a star tracker. The Myriad 2 VPU and the STM32 were specifically chosen for their performance on computer vision algorithms alongside being cost-effective, low-power devices. The experimental results showed that the Myriad 2 was efficient, consuming around 1 W while maintaining 99.08% accuracy on inputs that included false stars; the STM32 delivered comparable accuracy (99.07%) and similar power results. The proposed experimental setup is beneficial for small spacecraft missions that require low-cost, low-power star trackers.
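
To make the idea concrete, the toy sketch below shows the general shape of a star identification forward pass: a small network maps a star-pattern feature vector to scores over catalog IDs. The feature choice, layer sizes, and random weights are assumptions for illustration, not the paper's network.

```python
import numpy as np

# Toy sketch (shapes and features assumed, not the evaluated network) of a
# star identification forward pass: angular-distance features for a star
# pattern go through a small MLP that scores catalog star IDs.
rng = np.random.default_rng(42)
N_FEATURES, N_HIDDEN, N_CATALOG = 8, 32, 512

W1 = rng.standard_normal((N_FEATURES, N_HIDDEN)) * 0.1
W2 = rng.standard_normal((N_HIDDEN, N_CATALOG)) * 0.1

def identify(features: np.ndarray) -> int:
    """Return the best-scoring catalog star ID for one star pattern."""
    h = np.maximum(features @ W1, 0.0)           # ReLU hidden layer
    logits = h @ W2
    return int(np.argmax(logits))

pattern = rng.standard_normal(N_FEATURES)        # stand-in angular distances
print(f"predicted catalog ID: {identify(pattern)}")
```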

Smart Doll: Emotion Recognition Using Embedded Deep Learning

FULL TEXT

Espinosa-Aranda, J., Vallez, N., Rico-Saavedra, J., Parra-Patino, J., Bueno, G., Sorci, M., Moloney, D., Pena, D. and Deniz, O.

Symmetry Vol. 10(9), pp. 387

2018

Computer vision and deep learning are clearly demonstrating a capability to create engaging cognitive applications and services. However, these applications have mostly been confined to powerful Graphics Processing Units (GPUs) or the cloud due to their demanding computational requirements. Cloud processing has obvious bandwidth, energy consumption, and privacy issues. Eyes of Things (EoT) is a powerful and versatile embedded computer vision platform that allows the user to develop artificial vision and deep learning applications that analyse images locally. In this article, we use the deep learning capabilities of an EoT device for a real-life facial informatics application: a doll capable of recognizing emotions using deep learning techniques and acting accordingly. The main impact and significance of the presented application lies in showing that a toy can now perform advanced processing locally, without further computation in the cloud, thus reducing latency and removing most of the ethical issues involved. Finally, the performance of the convolutional neural network developed for this purpose is studied, and a pilot was conducted with a panel of 12 children aged between four and ten to test the doll.
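
A minimal sketch of such a sense-and-react loop is given below; the emotion classes, the placeholder scorer, and the reaction table are assumptions for illustration only, not the paper's implementation.

```python
import numpy as np

# Illustrative sketch (not the paper's code) of the doll's decision loop:
# a CNN scores a face crop against emotion classes and the toy reacts.
EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]

def cnn_scores(face: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(int(face.sum()) % 2**32)
    return rng.random(len(EMOTIONS))  # placeholder for on-device inference

def react(face: np.ndarray) -> str:
    emotion = EMOTIONS[int(np.argmax(cnn_scores(face)))]
    return {"happy": "giggle", "sad": "comfort phrase"}.get(emotion, "idle")

face = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in face crop
print(react(face))
```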

Eyes of Things

FULL TEXT

Deniz, O., Vallez, N., Espinosa-Aranda, J., Rico-Saavedra, J., Parra-Patino, J., Bueno, G., Moloney, D., Dehghani, A., Dunne, A., Pagani, A., Krauss, S., Reiser, R., Waeny, M., Sorci, M., Llewellynn, T., Fedorczak, C., Larmoire, T., Herbst, M., Seirafi, A. and Seirafi, K.

Sensors Vol. 17(5), pp. 1173

March 2017

Embedded systems control and monitor a great deal of our reality. While some “classic” features are intrinsically necessary, such as low power consumption, rugged operating ranges, fast response and low cost, these systems have evolved in the last few years to emphasize connectivity functions, thus contributing to the Internet of Things paradigm. A myriad of sensing/computing devices are being attached to everyday objects, each able to send and receive data and to act as a unique node in the Internet. Apart from the obvious necessity to process at least some data at the edge (to increase security and reduce power consumption and latency), a major breakthrough will arguably come when such devices are endowed with some level of autonomous “intelligence”. Intelligent computing aims to solve problems for which no efficient exact algorithm can exist or for which we cannot conceive an exact algorithm. Central to such intelligence is Computer Vision (CV), i.e., extracting meaning from images and video. While not everything needs CV, visual information is the richest source of information about the real world: people, places and things. The possibilities of embedded CV are endless if we consider new applications and technologies, such as deep learning, drones, home robotics, intelligent surveillance, intelligent toys, wearable cameras, etc. This paper describes the Eyes of Things (EoT) platform, a versatile computer vision platform tackling those challenges and opportunities.

UB0100 AI & CV Compute Engine

FULL TEXT

Dunne, A.

2020 ESA Workshop on Avionics, Data, Control and Software Systems (ADCSS)

The historic success of the recent ESA Φ-sat-1 mission demonstrated for the first time that COTS hardware acceleration of AI inference on a satellite payload in orbit is possible. The Deep Learning cloud detection solution deployed on Φ-sat-1 utilises an Intel Movidius Myriad 2 vision processor for inference compute. The Myriad has performance-per-watt and radiation characteristics that make it ideally suited as a payload data processor for satellite deployments, providing state-of-the-art Neural Network (NN) compute within an industry-low power envelope. Building on the hardware and software deployed on Φ-sat-1, the UB0100 CubeSat board is the next-generation AI inference and Computer Vision (CV) engine, addressing the form factor and interface needs of CubeSats while exposing the Myriad’s compute to the payload developer. This presentation discusses the requirements of an AI CubeSat payload data processing board (hardware, firmware, software) and demonstrates how the UB0100 solution addresses these requirements through its custom CubeSat build. An overview of the CVAI software that runs on the UB0100 shows how, in addition to AI inference and integration with popular AI frameworks, the user now has direct access to the hardware-accelerated vision functionality of the Myriad VPU. This unlocks combined image pre-processing and AI compute on a single device, enabling direct processing of data products at different levels on-satellite. The flexibility the UB0100 solution provides to the user is demonstrated through a selection of use cases.
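
The combined pre-process-then-infer pattern this enables can be sketched generically as follows; the CVAI API itself is not reproduced here, so the resize step uses OpenCV and the inference call is a stand-in.

```python
import numpy as np
import cv2  # pip install opencv-python

# Sketch of the pipeline shape described above: pre-processing feeding
# inference on the same device. `infer` is a stand-in, not the CVAI API.

def infer(tensor: np.ndarray) -> np.ndarray:
    return np.zeros(4)  # placeholder for Myriad NN inference

raw = (np.random.rand(2048, 2048, 3) * 255).astype(np.uint8)  # stand-in frame
resized = cv2.resize(raw, (224, 224))            # pre-processing step
tensor = resized.astype(np.float32) / 255.0      # normalize for the network
scores = infer(tensor)
print(f"class scores: {scores}")
```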