Why
In 2019 I was working as an Aerospace Engineer for Northrop Grumman doing structural analysis work for the F/A-18 fighter jet. Having taught myself to code in grad school, I started working on a set of custom software tools to help automate many of the repetitive tasks in my analysis work.
After some promising initial results, I brought this to my supervisor, who was very impressed. With his help, we managed to convince the higher-ups to further develop the tooling and distribute it to the rest of the engineers in the department.
Thus, the “F/A-18 Structural Analysis Suite” was born.
The Constraints
The main constraining factor when building this tooling was compute power. We had access to a Finite Element Analysis (FEA) model for the jet which could give us loads for any part under a variety of loading conditions.
This model was quite large and could only be run on our shared compute cluster, meaning it could not be rerun for every individual analysis. This meant we had to get creative in how we accessed these loads inside our tooling.
The other main constraint I faced was a lack of IT support. This was a high security environment and it was very hard to get approval for new software infrastructure and dependencies.
This basically meant I had to build everything from scratch and all tooling had to run locally on each analyst’s computer.
The Process
The engineer began by entering the location they were analyzing in the global coordinate frame, along with the geometry of the part in question. This geometry came from field reports, where conditions often varied from the blueprints due to damage, previous repairs, modifications, etc.
The analyst would define thickness, fastener type and spacing, how multiple parts were stacked up, and anything else unique to this location. They would also define the coordinate system they wanted to work in relative to the global coordinate system, since a local frame aligned with the part's own axes makes the resolved loads directly meaningful as axial, shear, and bending components for that geometry.
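To make the inputs concrete, here is a minimal sketch of what a per-location record might capture. Every field name here is illustrative, invented for this example, and not the actual schema the tool used.

```python
from dataclasses import dataclass, field

# Hypothetical per-location inputs, as described above. Field names and
# example values are invented for illustration only.
@dataclass
class AnalysisLocation:
    station: tuple                # (x, y, z) in the global aircraft frame
    thickness_in: float           # part thickness, inches
    fastener_type: str            # fastener callout
    fastener_pitch_in: float      # spacing between fasteners, inches
    stackup: list = field(default_factory=list)  # parts in the joint, outer to inner
    analysis_axes_deg: tuple = (0.0, 0.0, 0.0)   # rotation of the local analysis
                                                 # frame relative to global axes

loc = AnalysisLocation(
    station=(412.0, -33.5, 96.0),
    thickness_in=0.071,
    fastener_pitch_in=0.75,
    fastener_type="example-callout",
    stackup=["skin", "doubler", "frame flange"],
)
```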
Once the geometry was locked in, the tool would then go fetch the loads for this location from our load database (described below) for all load cases. These loads would then be translated from the global coordinate system to the analysis coordinate system using trigonometry (so, so, so, much trigonometry).
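The core of all that trigonometry is a frame rotation. A minimal two-dimensional sketch of the idea (the real tool worked in three dimensions, and this simplified function is mine, not the suite's):

```python
import math

def global_to_local(fx, fy, theta_deg):
    """Rotate an in-plane load vector from the global frame into a local
    analysis frame rotated theta_deg counterclockwise from global axes."""
    t = math.radians(theta_deg)
    fx_local = fx * math.cos(t) + fy * math.sin(t)
    fy_local = -fx * math.sin(t) + fy * math.cos(t)
    return fx_local, fy_local

# A 100 lb load along global x, viewed from a frame rotated 90 degrees,
# points along the local -y axis.
fx_l, fy_l = global_to_local(100.0, 0.0, 90.0)
```

Doing this for every load component, at every element, across 200+ load cases is where the "so, so, so, much trigonometry" came from.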
Once we had the loading conditions down, they were applied to the geometry and analyzed across a variety of potential failure modes. These included bearing failure, tear-out, buckling, fastener shearing, and many more. The system would run each check, document the results, and present the analyst with the critical failure modes and loading conditions.
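The pattern of sweeping every failure mode across every load case and keeping the lowest margin can be sketched as follows. The two checks shown use textbook simplifications with made-up allowables, not the suite's actual criteria, and the helper names are mine:

```python
import math

def bearing_margin(load_lb, diameter_in, thickness_in, fbru_psi):
    """Margin of safety against bearing failure at a fastener hole."""
    bearing_stress = load_lb / (diameter_in * thickness_in)
    return fbru_psi / bearing_stress - 1.0

def shear_margin(load_lb, diameter_in, fsu_psi):
    """Margin of safety against fastener shear (single shear plane)."""
    shear_stress = load_lb / (math.pi * diameter_in**2 / 4.0)
    return fsu_psi / shear_stress - 1.0

def critical_mode(load_cases, d=0.25, t=0.071, fbru=125000.0, fsu=95000.0):
    """Check every mode against every load case; the lowest margin governs."""
    results = []
    for case_id, load in load_cases:
        results.append((bearing_margin(load, d, t, fbru), "bearing", case_id))
        results.append((shear_margin(load, d, fsu), "fastener shear", case_id))
    return min(results)  # tuple comparison sorts by margin first

# Two hypothetical load cases; bearing under the high-load case governs.
ms, mode, case = critical_mode([("case A", 1800.0), ("case B", 400.0)])
```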
From here, the engineer would determine whether the current condition could still meet the required factor of safety or whether a repair was needed. If so, the repair could easily be entered back into the system for repeat analysis and confirmation.
The results were nicely formatted and packaged for reviewers to approve or give feedback on. Once finalized, the results could quickly be dropped into the final report and presented back to the Navy operators to perform any required maintenance or repair tasks.
Challenges
I had to build out a bespoke loads database to start our analysis from. We were able to pull loads from our FEA model for 200+ loading conditions using a global static coordinate system. This gave us a known reference point we could begin working from.
Without access to any off-the-shelf databases, I had to develop my own rudimentary one. The nice thing was that this was essentially a "read-only" database, since the loads never changed unless there was a change to the model. These extracted loads were stored in a series of flat text files in a shared network location, with a layout optimized for our access patterns. The database was read through a local API that accepted a location on the frame and returned all load cases for the elements in that location.
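A toy version of that lookup might look like the following. The file layout, field names, and element IDs here are all invented for illustration; the only thing taken from the text above is the shape of the interface: a location in, every load case for its elements out.

```python
import csv
import io

# Stand-in for one flat text file of extracted FEA loads. In the real tool
# these lived on a shared network drive; this sample content is made up.
SAMPLE_FILE = """\
element,case,fx,fy,fz
1042,CASE_001,1520.0,-310.0,95.0
1042,CASE_002,-880.0,120.0,10.0
1043,CASE_001,640.0,55.0,-20.0
"""

def loads_for_element(element_id, fileobj):
    """Return every stored load case for one element as a list of dicts."""
    reader = csv.DictReader(fileobj)
    return [row for row in reader if row["element"] == element_id]

cases = loads_for_element("1042", io.StringIO(SAMPLE_FILE))
```

Because the data never changed between model updates, there was no need for locking, transactions, or a server process: plain files plus a read function on each analyst's machine was enough.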
This was also my first software project of any real complexity. I learned very quickly the importance of modularity, abstraction, and testing to keep things from spiraling out of control.
Results
After rolling out this tooling and training the engineers, the results were fairly stark. The median analyst hours per report was cut by >50%.
Even more importantly, review turnaround time, which was often the bottleneck, was reduced by >75%, allowing much tighter feedback loops between analyst and reviewer. This was mainly due to the consistency between write-ups and the high-quality documentation the tooling produced.