AI-powered satellite to improve Earth observation

ESA’s Φsat-2 mission, launching in the coming weeks, will push the boundaries of AI for Earth observation – demonstrating the transformative potential of AI in space technology. Measuring just 22 x 10 x 33 cm, the Φsat-2 satellite features a multispectral camera and a powerful AI computer that analyses and processes imagery in real time – promising smarter and more efficient ways of monitoring our planet.

Φsat-2 is a dedicated AI mission that will fully explore the capabilities of extended onboard processing and further demonstrate the benefits of using AI for innovative Earth observation. With six AI applications running onboard, the satellite is designed to turn images into street maps, detect and classify clouds and provide insight into cloud distribution, detect and classify vessels, compress images on board and reconstruct them on the ground to reduce download time, spot anomalies in marine ecosystems and detect wildfires.

ESA’s Φsat-2 Technical Officer Nicola Melega commented, “Φsat-2 will unlock a new era of real-time insights from space and will allow custom AI apps to be easily developed, installed and operated on the satellite, even while in orbit. This adaptability maximises the satellite’s value for scientists, businesses and governments.”

The Φsat-2 mission is a collaborative effort between ESA and Open Cosmos, which serves as the prime contractor, supported by an industrial consortium including Ubotica, CGI, CEiiA, GEO-K, KP Labs and Simera.

Onboard AI Applications

Φsat-2 carries a multispectral instrument that images Earth in seven spectral bands and, through its AI applications, turns that imagery into actionable information for users on the ground, including:

  • Cloud detection: Unlike traditional satellites that downlink all captured images, including those obscured by clouds, Φsat-2 processes its images directly in orbit, ensuring that only clear, usable images are sent back to Earth. Developed by KP Labs, this application can also classify clouds and provide insight into cloud distribution, giving users more flexibility when deciding whether an image is usable (a simplified sketch of this filtering logic follows the list).
  • Street map generation: The Sat2Map application, developed by CGI, converts satellite imagery into street maps. This capability is particularly beneficial for emergency response teams, enabling them to identify accessible roads during disasters such as floods or earthquakes. When the satellite orbits over an affected area and acquires images, they are passed to the onboard processor, which identifies streets and generates a corresponding map. Initially, this application will be demonstrated over Southeast Asia, showcasing its potential to aid in crisis management.
  • Maritime vessel detection: The maritime vessel detection application, developed by CEiiA, utilises machine learning techniques to automatically detect and classify vessels in specified regions, facilitating the monitoring of activities like illegal fishing. This application underscores the satellite’s role in supporting maritime security and environmental conservation efforts.
  • On-board image compression and reconstruction: Developed by GEO-K, this application compresses images on board. By significantly reducing file sizes, it allows more imagery to be downlinked in less time. After being downlinked, the images are reconstructed on the ground using a dedicated decoder (the split encoder/decoder workflow is sketched after this list). The first demonstrations of this technology will occur over Europe, focusing on the detection of buildings.
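
To make the onboard cloud filtering concrete, here is a minimal Python sketch of the decision step. It assumes a binary cloud mask has already been produced by an onboard classifier; the 70% threshold, the mask size and the function names are illustrative assumptions, not details of KP Labs’ actual application, which also classifies cloud types.

import numpy as np

CLOUD_FRACTION_LIMIT = 0.7  # assumed downlink threshold, not the mission's real value


def cloud_fraction(cloud_mask: np.ndarray) -> float:
    """Fraction of pixels flagged as cloud in a binary mask."""
    return float(cloud_mask.mean())


def should_downlink(cloud_mask: np.ndarray) -> bool:
    """Keep an acquisition only if it is mostly cloud-free."""
    return cloud_fraction(cloud_mask) < CLOUD_FRACTION_LIMIT


# Example with a pretend 512 x 512 mask from a hypothetical onboard classifier
mask = (np.random.rand(512, 512) > 0.6).astype(np.uint8)
print("downlink" if should_downlink(mask) else "discard")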
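
Similarly, the on-board compression application splits the work between an encoder that runs on the satellite and a decoder that runs on the ground. The sketch below uses simple block averaging as a stand-in for GEO-K’s learned compression model, purely to illustrate that split; the downsampling factor and the function names are assumptions.

import numpy as np

BLOCK = 4  # assumed downsampling factor, for illustration only


def encode_onboard(image: np.ndarray) -> np.ndarray:
    """Shrink the image before downlink (stand-in for the real learned encoder)."""
    h, w = image.shape
    trimmed = image[: h - h % BLOCK, : w - w % BLOCK]
    return (
        trimmed.reshape(trimmed.shape[0] // BLOCK, BLOCK, trimmed.shape[1] // BLOCK, BLOCK)
        .mean(axis=(1, 3))
        .astype(np.uint8)
    )


def decode_on_ground(compressed: np.ndarray) -> np.ndarray:
    """Reconstruct a full-resolution image after downlink (stand-in for the decoder)."""
    return np.kron(compressed, np.ones((BLOCK, BLOCK), dtype=np.uint8))


frame = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
small = encode_onboard(frame)        # product actually downlinked
restored = decode_on_ground(small)   # reconstructed on the ground
print(frame.shape, small.shape, restored.shape)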

Φsat-2’s capabilities have been further expanded with the incorporation of two additional AI applications that will be uploaded once the satellite is in orbit. These applications were the winning entries in the OrbitalAI challenge, organised by ESA’s Φ-lab, which was designed to give companies the chance to pioneer in-orbit Earth observation data processing. The winning applications are:

  • Marine anomaly detection: Developed by IRT Saint Exupery, this application uses a machine learning algorithm to spot anomalies in marine ecosystems – identifying threats such as oil spills, harmful algal blooms and heavy sediment discharges in real time (a toy illustration of this kind of anomaly flagging follows the list).
  • Wildfire detection: The wildfire detection system, developed by Thales Alenia Space, uses machine learning to provide critical real-time information to response teams. The tool provides a classification report that helps firefighters locate wildfires, track fire spread and identify potential hazards.
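
As a rough illustration of the anomaly flagging described above, the sketch below fits a generic outlier detector to ordinary water spectra and flags pixels that deviate from them. The use of scikit-learn’s IsolationForest and the synthetic band values are assumptions made purely for illustration; they do not reproduce IRT Saint Exupery’s actual model.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend multispectral water pixels: rows are pixels, columns are the seven bands
normal_water = rng.normal(loc=0.1, scale=0.02, size=(1000, 7))
suspect_patch = rng.normal(loc=0.4, scale=0.05, size=(20, 7))  # e.g. a slick or bloom
pixels = np.vstack([normal_water, suspect_patch])

# Learn what "normal" water looks like, then flag outliers (-1) for review
model = IsolationForest(contamination=0.02, random_state=0).fit(normal_water)
flags = model.predict(pixels)

print(f"{(flags == -1).sum()} pixels flagged for review")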