Using AI to write aerospace software: Navigating the DO-178C landscape
February 28, 2026
Andrew Longhurst is Managing Director of WITTENSTEIN high integrity systems. With a background spanning electrical, electronic and software engineering, he brings a system-level perspective to safety-critical design, combining technical depth with commercial insight to deliver certifiable aerospace solutions.
Artificial intelligence is rapidly evolving into a formidable force within the realm of software development, and aerospace engineering is no exception. As we look toward the future of flight, AI offers the tantalising promise of helping engineers move faster, explore complex designs earlier in the lifecycle, and significantly reduce the burden of repetitive manual labour.
However, safety-critical aerospace software operates under a different set of constraints than commercial software. In this domain, safety, determinism and certification carry far more weight than development speed or creative flair. When standards such as DO-178C govern system architecture and approval, AI must be integrated with extreme care and professional responsibility.
The fundamental question facing the industry is: can AI support DO-178C aerospace software development without compromising certification integrity?
How AI fits into DO-178C aerospace software development
To use AI effectively, we must first acknowledge its limitations. Like their human creators, AI models can make mistakes. They are prone to “hallucinations,” producing misleading or incorrect information. Because of this, AI is best used as a sophisticated assistant rather than an autonomous generator of certified code or documentation.
Aerospace certification is built upon the premise that engineers understand, justify and verify every decision embedded in flight software. Used correctly, AI can accelerate development, whether by summarising complex standards, generating candidate design solutions or highlighting inconsistencies. However, it must never undermine safety or compliance.
Where AI supports DO-178C requirements, coding and verification
The early phases of development offer the highest potential benefit with the lowest certification risk. AI can assist in decomposing high-level system requirements into granular software requirements, suggesting modular architectures that support partitioning and redundancy, and identifying potential single points of failure.
For example, if a requirement states, “The system shall detect sensor failure,” AI can help expand this into detection strategies, timing constraints and safe-state transitions. This supports DO-178C objectives around requirements correctness and completeness. Ultimately, the human designer remains fully responsible for the final documentation, which must undergo independent review.
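As a minimal sketch of what such a decomposition might produce, the C fragment below turns the high-level requirement into three hypothetical low-level requirements: a range check, a staleness (timing) check and a safe-state transition. The requirement identifiers, thresholds and function names are all illustrative assumptions, not taken from any real system.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical low-level requirements derived from
 * "The system shall detect sensor failure":
 *   LLR-101: a raw reading outside [SENSOR_MIN_RAW, SENSOR_MAX_RAW]
 *            shall be treated as a sensor failure.
 *   LLR-102: absence of a fresh reading for more than 50 ms shall be
 *            treated as a sensor failure.
 *   LLR-103: on sensor failure, the system shall command the safe state.
 */
#define SENSOR_MIN_RAW    100
#define SENSOR_MAX_RAW    4000
#define SENSOR_TIMEOUT_MS 50u

typedef enum { MODE_NORMAL, MODE_SAFE_STATE } system_mode_t;

system_mode_t evaluate_sensor(int32_t raw, uint32_t ms_since_update)
{
    bool out_of_range = (raw < SENSOR_MIN_RAW) ||
                        (raw > SENSOR_MAX_RAW);            /* LLR-101 */
    bool stale        = (ms_since_update > SENSOR_TIMEOUT_MS); /* LLR-102 */

    return (out_of_range || stale) ? MODE_SAFE_STATE
                                   : MODE_NORMAL;          /* LLR-103 */
}
```

The value of AI here is in proposing candidate decompositions like this quickly; the engineer still decides whether the thresholds, timing budget and safe-state behaviour are actually correct for the system.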

AI can also assist in generating boilerplate code such as interfaces, state machines and drivers, ensuring consistent implementation patterns across languages, including C, C++, Ada or Rust.
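The state-machine case is a good illustration of the kind of boilerplate meant here. The C sketch below shows a table-driven pattern an AI assistant could plausibly draft; the state and event names are invented for this example.

```c
/* Illustrative table-driven state-machine boilerplate; the states and
 * events are hypothetical, not from a real flight system. */
typedef enum { ST_INIT, ST_RUNNING, ST_FAULT, ST_COUNT } state_t;
typedef enum { EV_START, EV_ERROR, EV_RESET, EV_COUNT } event_t;

/* Transition table: next state for each (state, event) pair, listed
 * exhaustively so reviewers can check every combination. */
static const state_t transitions[ST_COUNT][EV_COUNT] = {
    [ST_INIT]    = { [EV_START] = ST_RUNNING, [EV_ERROR] = ST_FAULT, [EV_RESET] = ST_INIT },
    [ST_RUNNING] = { [EV_START] = ST_RUNNING, [EV_ERROR] = ST_FAULT, [EV_RESET] = ST_INIT },
    [ST_FAULT]   = { [EV_START] = ST_FAULT,   [EV_ERROR] = ST_FAULT, [EV_RESET] = ST_INIT },
};

state_t step(state_t current, event_t ev)
{
    return transitions[current][ev];
}
```

A fully enumerated table like this is tedious to write by hand but easy to review line by line, which is exactly the trade-off that makes it a sensible target for AI assistance.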
From a DO-178C perspective, AI-generated code is treated no differently from human-written code. Every line must:
- Trace back to a low-level requirement
- Comply with approved coding standards
- Be reviewed and understood by independent engineers
- Be verified through deterministic testing
AI shifts the engineer’s role from primary author to reviewer and editor, but accountability remains entirely human.
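One common way to make the traceability obligation concrete is to tag code with the identifiers of the low-level requirements it implements, so reviews and trace tools can link each line back to the requirements document. The tag convention and LLR numbers in this C sketch are assumptions for illustration.

```c
#include <stdint.h>

/* Implements: LLR-210 (clamp commanded value to actuator limits).
 * The tag format and identifiers are illustrative; real projects define
 * their own traceability convention. */
int16_t clamp_command(int16_t cmd, int16_t lo, int16_t hi)
{
    if (cmd < lo) {    /* LLR-210.1: saturate at lower limit */
        return lo;
    }
    if (cmd > hi) {    /* LLR-210.2: saturate at upper limit */
        return hi;
    }
    return cmd;        /* LLR-210.3: pass through in-range commands */
}
```

Whether the code was drafted by a human or an AI assistant, it only enters the baseline once a reviewer has confirmed that every tagged branch genuinely satisfies the requirement it claims.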
Verification, often the most resource-intensive phase of DO-178C projects, is another area where AI can add value. AI may assist in generating unit tests from requirements, proposing boundary-value scenarios and analysing logs to identify coverage gaps or anomalies. Even here, however, AI output must be independently validated and remain explainable.
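A sketch of the boundary-value style of test AI can propose is shown below, in C. The unit under test and its [0, 50000] ft limits are hypothetical; the point is the pattern of exercising values on, just inside and just outside each requirement limit.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical unit under test, from an assumed low-level requirement:
 * "altitude shall be considered valid in the range [0, 50000] ft". */
bool altitude_valid(int32_t alt_ft)
{
    return (alt_ft >= 0) && (alt_ft <= 50000);
}

/* Boundary-value cases of the kind AI can propose from the requirement
 * text: each limit is probed on, just inside and just outside. */
bool boundary_cases_pass(void)
{
    return  altitude_valid(0)        /* on lower bound     */
        &&  altitude_valid(50000)    /* on upper bound     */
        &&  altitude_valid(1)        /* just inside lower  */
        &&  altitude_valid(49999)    /* just inside upper  */
        && !altitude_valid(-1)       /* just below lower   */
        && !altitude_valid(50001);   /* just above upper   */
}
```

Generated cases like these are a starting point, not evidence: each one must still be reviewed against the requirement and folded into the project's independently validated test suite.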
What AI cannot do in safety-critical aerospace software
There are clear boundaries in certified aerospace systems. AI should never:
- Make final safety-critical decisions
- Modify flight-critical code without human review
- Replace formal verification activities
- Obscure deterministic behaviour
In aerospace, predictability and explainability are the foundations of trust. Regulatory authorities such as the FAA and EASA enforce rigorous oversight to ensure these boundaries are respected.
Aligning AI with DO-178C specifics
As Design Assurance Levels increase from DAL E to DAL A, verification rigour and independence requirements escalate significantly. Trust in automated tools correspondingly decreases, and AI usage becomes far more constrained at DAL A and DAL B.
In most cases, AI tools are classified as development tools. To avoid the complexities of tool qualification under DO-330, many organisations treat AI output as advisory only: results are verified manually rather than being allowed to flow unverified into the build chain.
Certification requires deterministic builds and reproducible outputs. Because AI models may change over time, best practice includes locking model versions, archiving prompts and treating AI systems as configuration-controlled tools.

AI tools and DO-330 qualification considerations
To safely integrate AI into DO-178C projects, development teams must adopt strict controls:
- Review all AI-generated code line by line
- Never accept AI output without human verification
- Maintain full traceability from requirements to test evidence
- Preserve determinism and reproducibility
- Keep human engineers accountable for all approvals
AI is most appropriately applied in early lifecycle phases and in systems with lower DAL classifications. Certification authorities will audit every aspect of the development process, including tool usage.
AI as a “junior engineer” in certified aerospace development
A helpful analogy is to think of AI as a highly efficient junior engineer, one who works extremely quickly but has no understanding of certification constraints.
There is a compelling case for AI adoption in aerospace precisely because the industry’s workflows are already designed to detect errors. Independent review and verification at every stage mean that mistakes, whether made by a human or an AI system, should be caught during formal review processes.
Ultimately, safety remains a human responsibility. AI can act as a powerful force multiplier, helping manage complexity and reduce routine workload, but it is a tool, not an authority. In aerospace, trust is earned through evidence, not intelligence.