Aerospace: human intervention and the importance of collaboration in AI and cyber security
Following a recent conversation between industry leaders and academics from Oxford University, we share some of the key points that arose around the topics of AI and cyber security…
Technological weakness was once attributed to natural or human error, so security responses were designed to limit the likelihood or the repercussions of such errors. Today, however, cyber security requires us to respond to more active threats – or attacks – in what we might term ‘intelligent misbehaviour’. Unfortunately, legacy systems were not designed with this new ‘threat model’ in mind, and given the enlarged ‘attack surface’ and higher traffic density, the assumptions underpinning these older systems are now seriously challenged.
In aerospace, there are numerous questions around how AI can be used to counter these threats. How do we ensure safe and reliable autonomy in new space aspirations and communications, for example? Or how can AI protect against the hacking of autonomous flight systems? Following an in-depth conversation between Oxford University, Thales and a number of other important aerospace institutions, we capture some of the key discussion points – looking at how machines and humans can work together, and at the importance of collaboration in cyber security.
Without AI there is no autonomy, and we are coming to a point where without AI there is no real cyber security.
Sreedhar Chittamuri, Aerospace & Defence at HCL America
The role of human intervention in aerospace security
While human/machine teaming has become a critical area of research in autonomy, we are clearly still at an early stage. Before we can deploy any type of autonomy, we need the big picture in terms of technical robustness, safety and governance – from who has access to data, to how we team humans with machines and build the interfaces between them.
Falling back to human intervention is a key facet of security in deploying autonomous solutions. Yet the idea is fraught with peril: what if there is no well-trained pilot in the cockpit, for example? And even if there is, the system may have become so unstable that it poses a real threat.

There is a further problem: even when a system proves extremely effective, how can we be sure that human intervention will work when it is needed? Intervention demands a high degree of concentration, which tends to fall away once a person starts relying on autonomous solutions. And the better and more reliable an autonomous system becomes, the easier it is to grow complacent – potentially leading to skill atrophy, as the human abilities needed to take over become harder to maintain.
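To make the handover problem concrete, here is a minimal sketch (in Python) of a supervisor that falls back to a human only when the autonomy’s self-reported confidence drops, and degrades to a safe mode if nobody acknowledges in time. Everything here is hypothetical: the class names, the single confidence score and the thresholds are illustrative stand-ins, not a real flight-system API.

```python
import time

# Toy sketch of a "fall back to the human" supervisor. All names and
# thresholds are hypothetical; a single confidence score stands in for a
# much richer system-health picture in any real deployment.

CONFIDENCE_FLOOR = 0.7   # below this, autonomy should not keep control
TAKEOVER_GRACE_S = 5.0   # seconds the crew has to acknowledge the handover

class Supervisor:
    def __init__(self, autopilot, crew_alert):
        self.autopilot = autopilot      # assumed to expose confidence() and enter_safe_mode()
        self.crew_alert = crew_alert    # assumed to expose request_takeover() and acknowledged()

    def step(self):
        """Run one monitoring cycle; return who holds control afterwards."""
        if self.autopilot.confidence() >= CONFIDENCE_FLOOR:
            return "autonomy"
        # Low confidence: ask the crew to take over, but do not assume
        # an attentive, well-trained pilot is available.
        self.crew_alert.request_takeover()
        deadline = time.monotonic() + TAKEOVER_GRACE_S
        while time.monotonic() < deadline:
            if self.crew_alert.acknowledged():
                return "human"
            time.sleep(0.1)
        # Nobody answered in time: degrade to a pre-defined safe mode
        # rather than leaving an unstable system in charge.
        self.autopilot.enter_safe_mode()
        return "safe_mode"
```

The design choice worth noting is the timeout: rather than treating human intervention as guaranteed, the sketch treats it as one option among several, with a safe mode as the default when attention cannot be confirmed.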
There is also the question of public confidence. If we take autonomous cars as an example, the public wants to see a record of zero harm before it will put its trust in such a leap in innovation. While autonomy may save millions of lives through faster response times, people will remember the mistakes – especially those that seem incredibly dumb for an autonomous system to make.
It therefore falls to technologists and academics to convey the real data that is not so easy to see. Looking back, there were many similar concerns when robots and humans first started working together on the factory floor, so we must set the goalposts and work together to educate.
Building cyber security into design

AI helped to build models – such as the ADS-B surveillance protocol for air traffic control – before cyber security became a focus. Today, we must build with cyber security included as part of the design. Data has to be a key part of this process: our dependency on it has shifted the way we think about design, development, test and evaluation, and certification. Data also offers a potential frontier for gaining advantage over attackers, allowing us to gather and respond to real-life examples that would not be adequately captured by theoretical models.
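As one concrete illustration of such data-driven defences: because ADS-B reports are broadcast without authentication, even a simple plausibility check grounded in real traffic behaviour can flag suspect messages. The sketch below flags position reports that imply a physically impossible jump; the field names and the speed threshold are assumptions for illustration, not part of any standard.

```python
import math

# Illustrative only: flag ADS-B-style position reports that imply an
# implausible ground speed between consecutive messages. Field names and
# the 700-knot ceiling are assumptions for this sketch.

MAX_PLAUSIBLE_KT = 700.0  # generous ceiling for civil traffic ground speed

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3440.065 * math.asin(math.sqrt(a))  # Earth radius in nm

def suspicious(prev, curr):
    """True if the jump between two reports exceeds plausible speed."""
    dt_hours = (curr["time"] - prev["time"]) / 3600.0
    if dt_hours <= 0:
        return True  # out-of-order or duplicate timestamps are suspect too
    dist = haversine_nm(prev["lat"], prev["lon"], curr["lat"], curr["lon"])
    return dist / dt_hours > MAX_PLAUSIBLE_KT

reports = [
    {"time": 0,  "lat": 51.47, "lon": -0.45},  # near Heathrow
    {"time": 10, "lat": 51.48, "lon": -0.43},  # plausible movement
    {"time": 20, "lat": 48.86, "lon": 2.35},   # "teleports" to Paris: suspect
]
for prev, curr in zip(reports, reports[1:]):
    print(suspicious(prev, curr))  # -> False, then True
```

A theoretical model of the protocol alone would not tell you what counts as a plausible trajectory; that threshold comes from observing real traffic, which is exactly the advantage the data offers.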
Safety architecture is also crucial – not just internally, but in terms of regulation. How do you prove that the component that most needs to be secure really is the most secure? We will never have enough data without running many years of field trials. And we cannot always prove that a system is secure – only that it might be – by demonstrating that it is as safe as is reasonably practicable and that human intervention remains feasible.
Trust is another crucial part of the success of autonomy. When we introduce AI, we must ask how the system supports trust. How do we ensure that AI identifies anomalies rather than contributing to them? In terms of solving future problems, the increasing complexity of systems means it is not enough to think about academic, industrial or regulatory concerns in isolation. We need a networked thinking approach that covers all aspects, from technology and policy to law and strategy.
Collaboration will be crucial to aerospace security
Collaboration between industry and academia will be key to thinking ahead about cyber security threats. Trustworthy Autonomous Systems, for example, was set up to emphasise this collaboration and is working well for both sides. In addition to this type of initiative, more international collaboration will be needed. While the US and Europe have typically been at the forefront of global aviation standards, we will need a more globalised approach – a good example being the US-led international spaceflight programme, Artemis, where the US and its allies are working on systems, protocols and standards to meet everyone’s needs.
Standards and technologies are also starting to emerge from new countries and regions. As we leave the “safety but no security” legacy behind, it will be interesting to see the problems that arise when systems are co-opted from domains with different threats – requiring the transfer of capabilities across domains (sometimes referred to as permeable boundaries).
We are in the middle of a paradigm shift. We have to start thinking about cyber security as we are designing the systems.
Dr Danette Allen, NASA