How AI is Disrupting the New Space Economy
An interview with Fintan Buckley, Co-Founder & CEO of Ubotica
Q: Why did you create Ubotica with your Co-Founders?
Our main motivation was to take technology that had been developed for terrestrial applications and deploy it in orbit, in space. That opportunity arose when a large company, Movidius, was acquired by Intel in 2016. Movidius had industry-leading technology for processing video data, deployed in security cameras, drones, etc.
There had been preliminary discussions with the European Space Agency around taking this technology and assessing its suitability for use onboard spacecraft. Intel had no real desire to proceed with this investigation because the silicon sales volumes involved in space applications would be minute, and Intel is more interested in higher-volume applications for silicon sales. So, that’s where the opportunity came for us to step in and become involved as a new technology company.
Q: What does your role as CEO of Ubotica involve?
My background is in systems engineering and software engineering. In the early days of the company, I was using that expertise on a fairly technical basis to help architect some of the products we now have.
As the company has grown, my involvement in the technical side has diminished. Now, it’s really all about making sure that the team of people that we have hired, who are world-class, have all the necessary tools that they need to do their jobs and ensuring that the organisation succeeds as a whole. Part of my role is evangelising, part is networking and part is making sure that everyone is happy in terms of customers and employees.
Q: Ubotica has an international team of experts?
Absolutely. The technology is now available for us to work remotely and we have been very fortunate that we have the infrastructure in place to interact together wherever our people are based. For example, since the pandemic started, we have opened an office in Spain and another in Delft without ever going to the Netherlands or even meeting the people we employed there. We have one guy in Canada and have even started building a team in Tunisia – all done remotely.
We are doing it this way because we have identified where we can find very talented people and we have the infrastructure to support them working remotely. It has required a bit more effort doing it this way and we have more meetings and calls than we might have if everyone was in one office, but the payoff is well worth any extra work we have to do.
Q: Soon, you are going to be launching your own Ubotica satellite?
Yes, this is a really exciting project for us. We will be launching a satellite in 2024. The purpose of the mission for us is putting together a lot of the concepts and technology that we have developed and actually operating them in orbit.
This includes our first- and second-generation technology that we have already flown, as well as other applications that we will run on top of that, such as our compression engine. It’s a very exciting time for us, with a lot of hard work going on right now.
Q: Have you designed and manufactured the satellite yourselves?
No, we have been working with our technology partner, a UK-based company called Open Cosmos, who are also our partner for a number of other missions. When we decided that this was something we wanted to do, with complete control over the payload, we chose Open Cosmos because we recognised that they are very like Ubotica in a number of ways: their disruptive nature, their vision, and their focus on doing things a little differently than elsewhere in the industry today. They have been a great partner to work with.
They provide the backbone: the housing and all the traditional pieces of a satellite – everything that you need to get it up there and to control it once it is in space. Our IP is the payload, which is what we are providing: the AI engine, the applications, etc.
Q: Right now, you are in the integration stage?
The integration stage is essentially the stage of the project where all the various hardware components of the satellite are integrated together. We are actually building the physical satellite at this point in time. One of the big challenges around pulling the satellite together has been disruption in the supply chain. Covid-19 and the whole pandemic have been one element of this, and what’s going on in Ukraine is another.
So, there has been a lot of work from ourselves and Open Cosmos to actually source the various components that are needed to make the satellite. Some things that you would think are ubiquitous and very common suddenly become very difficult to get. A lot of management, talking and networking goes on just to make sure that we can source all the various pieces of the puzzle we need and get them delivered on time.
Q: How long will the satellite be up in space for?
This is a low Earth orbit satellite, so its lifetime is typically three to five years. From a commercial point of view, this really underpins our business model. The assets our technology is designed into will have to be refreshed on a continuous basis, every three to five years. This is where a lot of the driver for our new space technology is coming from. We’re taking technology that was not designed for space and building products with it.
Launch costs are much cheaper than they would have been five or ten years ago, as are the operating costs. We have been able to take advantage of all of this, build an asset and put it in orbit within a reasonable cost model.
Q: In three to five years you will have enhanced your AI technologies. You might be looking at a slightly different satellite?
We might be. One of the ways that we are benefitting from this is that once you get a design into a satellite, in many cases there is a reluctance to change the architecture of that satellite. So, we may end up shipping the same product for much longer to the same customers. As new AI models are developed or existing models evolve through continuous training, they can be deployed on our satellite as the platform is very programmable.
Q: This is a Ubotica satellite, in partnership with Open Cosmos, and not intended for any particular customer. It’s yours and once you have done this launch, will you start making satellites like it for other customers?
No, we will not be a satellite builder or operator. This satellite will have commercial elements that we will be able to sell. We are much more focused on being able to prove the capabilities of this satellite delivering AI in space and selling our technology to the incumbents in this area.
Our key customers are the companies that build satellites, and our focus with them is getting our technology designed into the payloads of their satellites. So, when their end customers come to them, they can say that they already have a flight-proven AI engine in our platform. Our other set of target customers are the end users themselves. We work with them to develop their AI applications and then jointly go to the satellite builders to show them a solution that runs on our AI accelerator, ready to be designed into a satellite. We never envisaged being a satellite operator ourselves on a mass scale. Other companies have been doing it for ages and they are already very good at it.
Q: Can the Ubotica satellite be re-purposed?
One of the key benefits of our solution is that it is software-based and easily repurposable in orbit. So, you can prepare the platform to run one app when it is over land and another when it is over sea. This is all done through the software. That is a key difference to some of the other technologies that are used in doing AI in orbit. The cost of reconfiguring some of these platforms is significant in terms of power.
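The land-versus-sea app switching described above can be sketched in a few lines of Python. This is purely illustrative: the app names, the `is_over_sea` check, and the crude bounding box are assumptions for the sketch, not Ubotica's actual software or API.

```python
# Hypothetical sketch of software-defined payload tasking: choose which
# AI application to run based on whether the satellite is over land or sea.
# All names here (SHIP_DETECTOR, LAND_CLASSIFIER, is_over_sea) are
# illustrative assumptions, not a real flight-software interface.

from dataclasses import dataclass


@dataclass
class App:
    name: str


SHIP_DETECTOR = App("ship-detector")
LAND_CLASSIFIER = App("land-cover-classifier")


def is_over_sea(lat: float, lon: float) -> bool:
    # Placeholder check: a real system would consult an onboard
    # land/sea mask rather than this crude mid-Atlantic bounding box.
    return abs(lat) < 60 and -40 < lon < 20


def select_app(lat: float, lon: float) -> App:
    """Pick the AI app to run for the current ground-track position."""
    return SHIP_DETECTOR if is_over_sea(lat, lon) else LAND_CLASSIFIER
```

Because the switch is pure software, swapping in a new or retrained model means uploading a new app, not reconfiguring hardware.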
One of the key parameters for satellites is balancing functionality against the power budget. Satellites are powered by solar panels, charged by the sun, and customers want particular areas of the Earth imaged. You use the satellite to take images and use the AI to process them. However, this is all traded off against how much power there is and how many images can be taken and processed.
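The functionality-versus-power trade-off can be made concrete with a back-of-envelope budget. Every figure below is an assumed, illustrative number, not a real mission parameter.

```python
# Back-of-envelope sketch of the functionality-vs-power trade-off.
# All numbers are assumptions chosen for illustration only.
ORBIT_ENERGY_WH = 30.0    # energy the solar panels bank per orbit
HOUSEKEEPING_WH = 10.0    # bus, comms, thermal control, etc.
CAPTURE_COST_WH = 0.5     # energy to capture one image
INFERENCE_COST_WH = 0.2   # energy to run AI inference on one image


def images_per_orbit() -> int:
    """How many images can be both captured and processed per orbit,
    given the energy left over after housekeeping."""
    available = ORBIT_ENERGY_WH - HOUSEKEEPING_WH
    return int(available // (CAPTURE_COST_WH + INFERENCE_COST_WH))
```

With these assumed numbers, roughly 28 images fit in one orbit's budget; lowering the per-image inference cost is what lets a mission image and process more.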
Q: How does the Ubotica satellite get images quicker to the end customer?
It will get the output of the AI to the end user as quickly as possible. It is about bypassing the traditional data path from satellite to end user: satellite to ground station, then processing on the ground, and then providing, in most cases, just the image to the end user.
Using our satellite, we will demonstrate the ability to send an insight, generated using AI in space, directly to the end user. So, if the end user wants to know if there is shipping in a particular location of the world, we will provide that information directly to the end user, far more quickly.
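The difference between the two data paths can be sketched by comparing what actually has to be downlinked: a full raw capture versus a compact AI-generated insight. The sizes and the message format below are assumptions for illustration, not mission figures.

```python
import json

# Illustrative comparison of the traditional path (downlink the whole
# image) versus the insight path (downlink only the AI output).
# RAW_IMAGE_BYTES is an assumed figure, not an actual capture size.
RAW_IMAGE_BYTES = 500 * 10**6  # e.g. a ~500 MB multispectral capture


def make_insight(detections: list) -> bytes:
    """Package only the AI output (e.g. detected ships) for downlink."""
    return json.dumps(
        {"event": "ship_detection", "detections": detections}
    ).encode()


insight = make_insight([{"lat": 53.35, "lon": -6.26, "confidence": 0.97}])
savings = RAW_IMAGE_BYTES / len(insight)
```

A message of this shape is a few hundred bytes at most, so the downlink saving over shipping the raw image is several orders of magnitude, which is what makes the "insight directly to the end user" path fast.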
Q: Where do you see Ubotica going in the future?
We are focused right now on Earth observation. Low Earth orbit satellites are imaging the Earth, capturing data with sensors that are more and more powerful, with higher resolutions. A lot more data is being generated, and AI is processing the images to work out for the satellite a) whether there is any value in the data captured and b) whether the real insight can be extracted from the data and sent back to Earth.
That is all about trying to disrupt an existing market where satellites do the imaging and the processing is done back on Earth, and where the product that these companies provide to their end users is not all that satisfactory. They are just providing the image data, and the customer has to wade through it to see if there is any value attached.
Where we are going next is not only Earth observation, but also situational awareness. This is where you get into the realms of “is this really possible?” Situational awareness means putting cameras in position to be the eyes of space assets in orbit, providing the ability to do vision-driven applications such as object retrieval, debris tracking, debris removal, propulsion, refuelling, and manufacturing in orbit. All of these things are going to need vision to assist them.
That is our next big challenge. Also, it’s going to be AI-driven, that is for sure. That’s where we see things going above and beyond Earth observation.