I am a member of the Society of American Military Engineers (SAME), the American Institute of Aeronautics and Astronautics (AIAA), and the International Council on Systems Engineering (INCOSE). In the AIAA, I am a distinguished lecturer traveling the country spreading the gospel of complex system sustainment. In INCOSE, I have access to a wonderful collection of webinars on systems engineering. The latest was "What is a System?" The speaker was Hillary Sillitto of Sillitto Enterprises (sillittoenterprises.com), a consulting firm for people struggling with complex systems.
I will not attempt to recreate his excellent presentation here. For his services, please see his website and books. But I will mention a few concepts from his talk and how I see them relating to the sustainment of complex systems.
Hillary Sillitto is a member of an INCOSE team that is taking another look at the INCOSE definition of "system". As is often the case, the seemingly simplest words are the ones we must get precisely right to continue making progress in systems engineering. For instance, his webinar included an overview of seven major ways various disciplines define the word "system". He then showed how these seven world views might be mapped together.
I especially enjoyed the ideas about what makes a system complex. For instance, a system might be inherently complex, it might appear complex to the observer, especially if they cannot see the whole, or it may attain complexity through multiple stakeholders with different impressions of the system. There is more than a little bit of Plato and "shadows on the cave wall" in all of this. That is, as humans we look at a real system and then idealize it as a model in our brains. This incomplete model of the ideal "chair-ness" of a chair can sometimes get us into trouble when the real system acts differently than our model, or when our colleagues see the situation differently. In the Complex Systems Sustainment Management Model (CSSMM) we are always striving to update our model via observation, not only to catch emerging failure modes, but also to update our internal understanding of our system and its mission.
Hillary Sillitto mentioned the definition of complexity as the opposite of determinism. He also had a cleverer way of putting it: a complex system can seem to have a "mind of its own". This often occurs when people are an integral part of the complex system, as in a missile defense system or a combat aircraft. This reminds me of the epiphany that comes to advanced sustainment organizations when they see their own organization as part of the system that supports the mission. But it can occur in the non-human parts of the system as well.
Complex systems can display "emergent properties" that are unexpected if you focus only on the design and fabrication. Again, this is a good description of the need to seek out the emerging failure modes never envisioned in the designers' failure modes and effects analysis.
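One way to picture this idea in code is a small triage routine that separates field-observed failures into those the designers anticipated and those that are emergent. This is purely an illustrative sketch; the failure-mode names and the `triage` function are hypothetical and not part of the CSSMM or any real FMEA tool.

```python
# Hypothetical sketch: flagging "emergent" failure modes that were not in the
# designers' failure modes and effects analysis (FMEA). All names are illustrative.

# Failure modes anticipated in the original design FMEA
design_fmea = {"seal_leak", "bearing_wear", "sensor_drift"}

def triage(observed_failures):
    """Split field observations into anticipated vs. emergent failure modes."""
    anticipated = [f for f in observed_failures if f in design_fmea]
    emergent = [f for f in observed_failures if f not in design_fmea]
    return anticipated, emergent

# Field observations from sustainment; "connector_fretting" was never modeled
observed = ["bearing_wear", "connector_fretting", "sensor_drift"]
anticipated, emergent = triage(observed)

# An emergent mode triggers a model update, not just a repair action
design_fmea.update(emergent)
```

The last line is the point: the sustainer's model of the system is revised when observation contradicts it, which is the update-via-observation loop described above.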
There are systems that might be called "complex adaptive". Perhaps these systems carry within them a model of the outside world. In a small way, many systems have this property now: they contain computers that allow them to react correctly to outside stimuli without the need for constant human direction. For many, AI is seen as the natural extension of this ability. But how far out of the loop should the human be if the mission requires safety and surety? How can such a system be assured to fail safe?
For some, real systems are the focus. Others have a professional focus on conceptual systems, such as the models used to describe real-world systems. For the sustainer, the challenge is to bring the designers' model-based systems engineering into the sustainment phase as a useful addition. This is helped by new tools that allow designers to record why they created the design the way they did, though this particular feature is not yet in widespread use. In any event, nowadays the sustainer will need to sustain not only the system but also the model(s) of the system.
Systems might be physical, functional, or behavioral, making the act of "drawing a line around the system" interesting. For instance, a cardiovascular system has a fractal boundary, according to Hillary Sillitto. Wherever you draw the line, to keep the system reliably low entropy, energy must be added to maintain it. This parallels perfectly the CSSMM requirement to understand the boundaries of your system in order to give yourself the best chance to manage it. This is another action within the CSSMM that must be constantly repeated to update understanding.
Hillary Sillitto also said that some systems are so complex that the most we can hope for is not control but influence. This echoes the truism we always used with our repair depots: "You can hope to influence them but not control them; if you try to control them, you won't even influence them." Thus it is with any system that has a strong human component.
We members of the Society of American Military Engineers know all too well the issues associated with having so many humans as components in our systems.