OPS-SAT and Autonomous Missions Operations
We interviewed our Systems Engineer, Ricardo Silva, who leads VisionSpace’s team in an experiment using artificial intelligence for autonomous mission operations, to be carried out on the OPS-SAT mission.
First, we need to understand what OPS-SAT is, and that is our first question. Ricardo, what is OPS-SAT?
The OPS-SAT mission aims to test and validate new techniques in mission control and onboard systems, providing an in-orbit test-bed environment to deploy software experiments in many areas. It resulted from the need for a platform to test various technologies for new satellite mission operations scenarios.
The mission also brings the innovative concept of allowing experiments to be uploaded to the spacecraft in app form by different organizations. That has been made possible by the NanoSat MO Framework (NMF), which also simplifies the use of the spacecraft’s payloads and subsystems, such as the GPS receiver, the HD camera module, and the Attitude Determination and Control Subsystem (ADCS).
The mission is a collaboration between the European Space Operations Centre (ESOC) and the Graz University of Technology (TU Graz) in Austria. The spacecraft consists of a three-unit CubeSat, belonging to the class of nanosatellites. Launched on the 18th of December 2019, it will soon end the commissioning phase and start running the experiments.
OPS-SAT is then paving a new way for mission operations control and onboard systems. But why onboard autonomous operations? How do they work, and what are the key advantages?
Space missions rely significantly on on-ground planning, monitoring, and control activities. Planning for payload operations generally happens weeks or months in advance, is usually inflexible to last-minute modifications, and involves several different aspects.
The scientific aspect is the mission’s purpose, related to the acquisition of relevant data. For example, in Earth Observation missions, taking pictures of the Earth or measuring polar ice levels are scientific activities. The navigation aspect covers Flight Dynamics’ responsibilities, which comprise all the monitoring and correction maneuvers that keep the satellite in the correct orbit. Finally, the Flight Control Team is responsible for all the activities related to the spacecraft’s health and safety, managing its proper functioning.
Autonomous operations aim to shorten the space-ground decision-making loop, giving the human operator a supervisory role. The spacecraft gains more autonomy to plan scientific observations, analyze the data on board, and decide what data is meaningful to download. The results of scientific observations aren’t always usable: traditionally, the data must be downlinked, processed, and analyzed on the ground before the next action can be decided. On board, the system also executes replanning phases if an observation plan fails, and onboard monitoring processes keep the spacecraft working as expected. That frees the human operator from routine operations.
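The plan-execute-analyze-replan loop described here can be sketched in a few lines of Python. Everything below is illustrative: the function names, the random quality score, and the retry-based replanning are hypothetical stand-ins for the real onboard planner and data analysis, used only to show the shape of the loop.

```python
import random

def acquire_observation():
    """Stand-in for a payload acquisition, e.g. taking a picture."""
    return {"quality": random.random()}

def is_meaningful(observation, threshold=0.5):
    """Stand-in for onboard analysis deciding if data is worth downlinking."""
    return observation["quality"] >= threshold

def autonomy_loop(planned_targets, max_attempts=3):
    """Plan -> execute -> analyze -> store or replan, all on board.

    Only meaningful observations end up in the downlink queue; failed
    observations are simply retried, which models a replanning phase.
    """
    downlink_queue = []
    for target in planned_targets:
        for _attempt in range(max_attempts):
            obs = acquire_observation()
            if is_meaningful(obs):
                downlink_queue.append((target, obs))  # store for download
                break
            # Observation not usable: replan and try again.
    return downlink_queue
```

The key point is that the decision of what to keep happens on board, so the ground only receives data that already passed the meaningfulness check.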
I’m sure this will be the reality in the (near) future, with new technologies enabling new mission scenarios and promoting the global space sector’s growth.
What is VisionSpace doing? What are the project’s main goals?
We are collaborating with ESOC to develop the Advanced Mission Concepts Team’s (AMCT) registered experiment for OPS-SAT.
Our experiment’s primary goal is to demonstrate an E4-level autonomy system on board. Considering the four mission execution autonomy levels defined by the ECSS standard ECSS-E-70-11A, E4 means that the system can autonomously plan, execute, and replan goal-oriented mission operations on board. It constitutes a first attempt within ESA to achieve an E4 level of autonomy on a flying mission, using a combination of onboard goal-oriented planning and scheduling with data analytics.
The project is an Earth Observation experiment, and we have two scenarios to run. The first demonstrates the capability to select and download only meaningful observation data. For example, when you take a picture of the Earth and the weather is cloudy, you can’t see the ground. Traditionally, these images are downloaded either way and checked on the ground to decide their usability. To solve this problem, our experiment uses an onboard image classifier to determine whether the picture is cloudy. If it is, the photo is discarded and another observation is planned; if the picture is clear and meaningful, it is stored for download.
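As a toy illustration of this first scenario, a naive cloud filter could count near-white pixels (clouds tend to be bright and unsaturated) and discard images whose cloud fraction exceeds a threshold. The real experiment uses a trained image classifier; the brightness heuristic, thresholds, and function names below are purely illustrative assumptions.

```python
def cloud_fraction(pixels, brightness_threshold=200):
    """Estimate cloud cover as the fraction of near-white pixels.

    `pixels` is a list of (r, g, b) tuples. A pixel counts as cloud
    when all three channels exceed the brightness threshold.
    """
    if not pixels:
        return 0.0
    cloudy = sum(1 for (r, g, b) in pixels
                 if min(r, g, b) >= brightness_threshold)
    return cloudy / len(pixels)

def keep_for_download(pixels, max_cloud_fraction=0.3):
    """Store the picture for downlink only if it is mostly cloud-free."""
    return cloud_fraction(pixels) <= max_cloud_fraction
```

An onboard loop would call `keep_for_download` right after acquisition and, on a negative result, trigger the planner to schedule a new observation instead of wasting downlink bandwidth.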
The second part of the experiment starts with a phase we call scouting, in which the satellite continuously takes pictures during a given time window. With another image processing algorithm, the photos are analyzed to determine the presence of interesting phenomena, such as an island, a volcano, or a city. If an interesting target is identified, the satellite plans a series of pictures to cover a larger area around it. This scenario’s objective is to demonstrate opportunistic science capabilities while shortening the space-ground decision-making loop.
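A minimal sketch of the scouting idea: scan the frames, and whenever a detector flags a target, schedule a grid of follow-up pictures centred on its position. The detector callback, the (row, col) target coordinates, and the square grid geometry are hypothetical simplifications of the actual experiment.

```python
def plan_followup(scout_frames, detect, grid=1):
    """Scan scouting frames; for each detected target, plan a
    (2*grid+1) x (2*grid+1) grid of follow-up pictures around it.

    `detect(frame)` returns a (row, col) target position, or None
    when the frame shows nothing interesting.
    """
    followup = []
    for frame in scout_frames:
        target = detect(frame)  # e.g. an island, volcano, or city
        if target is None:
            continue
        r0, c0 = target
        for dr in range(-grid, grid + 1):
            for dc in range(-grid, grid + 1):
                followup.append((r0 + dr, c0 + dc))
    return followup
```

With `grid=1`, one detection expands into nine planned pictures covering the area around the target, which is the opportunistic-science behaviour the scenario aims to demonstrate.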
That is an ambitious goal. What is your role in the project?
My involvement in this project started when I joined ESOC for a traineeship with the AMCT. At first, my role was to understand all the critical concepts related to the experiment’s autonomy architecture and to develop it further, more precisely the planning and execution parts. The planner is built on APSI, an ESA framework for generating flexible plans. The executor, developed within this project’s scope, handles the flexible plan’s dispatchability, controllability issues, and execution optimization.
When I joined VisionSpace, we started cooperating with ESOC to finish the experiment. I’m currently responsible for coordinating the ongoing collaboration with the AMCT and the activities at VisionSpace.
And what did you like the most about working on that project?
Everything I’ve learned. When you talk about innovation for mission operations, you first need to understand how things are done and why you need to innovate. I’ve learned a great deal about how mission operations are carried out, where there is room for improvement, how the space industry works, and the directions it is taking.
We are using emerging AI technologies, which makes working on the project exciting, as you get to use state-of-the-art algorithms and frameworks.
The overall experiment goal is exciting, and it is great to be part of this activity. I also really appreciate the team involved, including João Guerreiro, our Software Engineer, who is responsible for the image classifier. I believe that, in the end, we are going to achieve great results.
If you would like to know more about the experiment, please contact us.
To learn more about Ricardo Silva, visit his LinkedIn profile, and don’t forget to follow us on LinkedIn, Twitter, or Facebook to get our updates.