Credit: SpaceX Live Stream
Φsat-2 pushes the boundaries of Earth observation, demonstrating how AI technologies can advance our ability to monitor Earth from space. The miniature satellite is equipped with a state-of-the-art multispectral camera and Ubotica’s powerful CogniSAT SPACE:AI platform, which analyzes and processes imagery in orbit. This capability unlocks Live Earth Intelligence that is critical for a wide range of use cases, including disaster response, maritime monitoring, and environmental protection.
Φsat-2. Credit: Open Cosmos
Dr. Aubrey Dunne, CTO and co-founder at Ubotica, added, “We are thrilled that ESA had a successful launch of Φsat-2 today. The mission will demonstrate the transformative power of artificial intelligence in Earth observation, and we are very proud that ESA selected our CogniSAT SPACE:AI platform for on-board AI processing.”
A New Era of Artificial Intelligence and Earth Observation
SPACE:AI enables satellites to process data quickly and accurately on-board, transforming vast amounts of raw data into actionable insights for scientists, businesses, and policymakers.
While AI processing of satellite data typically occurs on the ground after the data is downloaded, ESA’s Φsat-2 will perform this task in real time on board the satellite. Instead of downlinking vast volumes of raw data, much of it images obscured by clouds, the onboard AI apps process the images directly and send only the most essential insights back to Earth, reducing data and processing costs and accelerating decision-making.
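As a rough illustration of this filtering idea, the sketch below shows a toy Python pipeline in which a per-image cloud-cover estimate decides whether a tile is worth downlinking. The threshold, function names, and the brightness-based cloud score are illustrative assumptions only; the actual mission runs trained AI models on the CogniSAT SPACE:AI hardware.

```python
# Illustrative sketch of onboard downlink filtering. All names and thresholds
# are hypothetical and not part of the actual Φsat-2 / CogniSAT software.
import numpy as np

CLOUD_FRACTION_LIMIT = 0.3  # hypothetical maximum cloud cover for downlink


def estimate_cloud_fraction(tile: np.ndarray) -> float:
    """Stand-in for an onboard cloud-segmentation model.

    Here a simple brightness threshold marks 'cloudy' pixels; the real
    mission uses trained neural networks running on the CogniSAT platform.
    """
    cloudy_pixels = tile > 0.8  # treat very bright pixels as cloud
    return float(cloudy_pixels.mean())


def select_for_downlink(tiles: list[np.ndarray]) -> list[int]:
    """Return indices of tiles clear enough to be worth downlinking."""
    return [
        i for i, tile in enumerate(tiles)
        if estimate_cloud_fraction(tile) <= CLOUD_FRACTION_LIMIT
    ]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated normalized single-band tiles: clear, overcast, and mixed scenes.
    tiles = [
        rng.uniform(0.0, 0.6, (64, 64)),  # clear scene
        rng.uniform(0.7, 1.0, (64, 64)),  # overcast scene
        rng.uniform(0.0, 1.0, (64, 64)),  # partly cloudy scene
    ]
    print("Tiles selected for downlink:", select_for_downlink(tiles))
```

The point of the sketch is the decision step: only tiles that pass the cloud-cover check are queued for transmission, so bandwidth is spent on usable imagery rather than on cloud-obscured scenes.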
Orbiting Earth at an altitude of 510 km, the satellite captures images with its multispectral camera in seven bands across the visible to near-infrared spectrum. The 6U CubeSat platform, designed and developed by Open Cosmos, runs AI powered by Ubotica’s CogniSAT, with AI apps that can be easily installed and operated remotely from Earth. At launch, Φsat-2 operates the following on-board apps:
Cloud Detection:
Processes images directly in orbit, downlinking only clear, usable images. Developed by KP Labs, it also classifies clouds, providing insights into cloud distribution for better image usability.
Street Map Generation:
Sat2Map, developed by CGI, converts satellite imagery into street maps, aiding emergency response teams in identifying accessible roads during disasters. The application will be demonstrated over Southeast Asia.
Maritime Vessel Detection:
CEiiA’s application uses machine learning to detect and classify vessels, supporting maritime security and environmental conservation by monitoring activities like illegal fishing.
On-board Image Compression and Reconstruction:
Developed by GEO-K, this application compresses images on board to speed up data downloads. Images are later reconstructed on the ground, with initial demonstrations focused on detecting buildings in Europe.
Marine Anomaly Detection:
IRT Saint Exupery’s application uses machine learning to identify marine anomalies such as oil spills and harmful algal blooms in real time, safeguarding marine ecosystems.
Wildfire Detection:
Thales Alenia Space’s wildfire detection system employs machine learning to provide real-time data, helping firefighters locate wildfires, track their spread, and identify hazards.
Transporter-11 payload. Photo provided by SpaceX
ESA’s Φsat-2 is a collaborative effort, with Open Cosmos as the prime contractor, supported by an industrial consortium including Ubotica, CGI, Simera, CEiiA, GEO-K, and KP Labs.