Anti-fragile: Beyond robust, the ability to self-heal and improve after stress. Self-improving.
Assessment: Observation of the system to find changes in performance, determine if those changes affect the system readiness parameters, and characterize them if they do.
Assessor: Team member trained in the skill of observing the system and drawing conclusions about trends. Assessors should work in teams with, at minimum, a system expert and a statistician. Test chiefs, test plan authors, test report writers, and database experts are examples of other skills critical to good assessment. "Assessor" is an example of sustainment knowledge that needs to be preserved via good organizational structure: the individuals are spread throughout the organization, but form an association or guild to share their knowledge of assessment.
Availability: A readiness factor that measures how many systems are available when needed. For instance, combat aircraft may have availability requirements stated like this: 85% of all aircraft must be ready for flight crews 2 hours after notification of a mission. A power grid may need x megawatts of alternate power available within microseconds.
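The aircraft requirement above can be checked with simple arithmetic. The sketch below uses hypothetical fleet data; the 85% threshold is taken from the example requirement in this entry.

```python
# Minimal sketch (hypothetical data): check a fleet against an availability
# requirement of "85% of aircraft ready for crews 2 hours after notification".

REQUIRED_FRACTION = 0.85  # from the example requirement in this entry

def fleet_availability(ready_flags):
    """Fraction of systems ready when needed."""
    return sum(ready_flags) / len(ready_flags)

# Hypothetical snapshot: True = ready within 2 hours of notification.
fleet = [True] * 18 + [False] * 2   # 18 of 20 aircraft ready
availability = fleet_availability(fleet)
print(f"Availability: {availability:.0%}")            # 90%
print("Requirement met:", availability >= REQUIRED_FRACTION)
```

The same fraction-of-demand pattern applies to the power-grid example, with megawatts of alternate power in place of ready aircraft.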
Capabilities Baseline: Once the system is deployed, the operator begins to perceive and depend upon capabilities of the system that might not be captured in any design documentation. This becomes an important baseline for the sustainer.
CLFA: A Closed Loop Failure Analysis (CLFA) program applies FRACAS to a repair depot during the sustainment phase of a system. An effective CLFA program is a formal contract between the sustaining organization and the repair depot to ensure sufficient diagnostic information is created and delivered to support the sustainment assessment program. The information includes verification that the failure that was fixed was reasonably the one that occurred to bring the component to the depot. The information can be used to improve the readiness factors of the system and the effectiveness of the depot equipment. CLFA programs typically require significant changes to the depot data and hardware routing systems.
Complex Funding System: Complex systems usually come with equally complex systems for providing sustainment funding. They have many funding sources that interact in sometimes unpredictable ways with decision-makers and office staff. This creates the need for experts in all the sources of funding, their rules, and their interactions. Typically the rules and funding sources change over time.
Complex System: Systems are considered complex when they can enter states unpredictably. Before deployment, design engineers attempt to determine all states of the system, including states that are entered at failure. Safety-critical components are designed to fail in safe modes. After deployment, new failure modes emerge and may place the system in unanticipated states.
Complicated System: A system with many, many components interacting in many, many ways.
Constant: Unaffected by changing laws, regulation, or fads.
Cowboy: A sustainment slang term for the hero who rides in and saves the day during a crisis. Not a bad thing, but we want to minimize crises.
Farmer: A sustainment slang term for the bulk of the organization which is, hopefully, following process to find emerging failure modes early and avoid flashy crises.
FRACAS: A Failure Reporting and Corrective Action System (FRACAS) is a process typically applied to a system production line to ensure early achievement of reliability and maintainability. An effective FRACAS program reports failures of system components (including software), analyzes them to determine causes of failure, takes corrective actions, and verifies results. See MIL-HDBK-2155. See also CLFA.
Fundamental Theorem of Sustainment: A “fundamental theorem” is a statement that is necessary to create the associated domain of knowledge. In sustainment, the fundamental theorem is: “An effective sustainment organization will always find ways to affordably detect threats to the system in time to correct them before the mission is impacted.”
Impact: In a sustainment risk analysis, the effects on the system and mission due to an emerging failure mode or inadequate sustainment process.
Information Management System (IMS): Software, hardware, and processes designed to store all kinds of data, metadata, and information; the tools needed to transform this "big data" into useful information; and the means to retrieve and report the products.
Integrated Product Team (IPT): A team of teams in two ways. The full team is composed of lower level teams who focus on system components or engineering specialties that need to preserve their knowledge of the system and of sustainment. Lower level teams are composed of all the needed experts from multiple companies. The entire organization can be referred to as an IPT structure. Each individual team within this structure can be referred to as an IPT. For example, a guidance system IPT could be part of a larger rocket IPT which is part of a larger system IPT.
Integrated: Internally consistent.
Lead Time Ahead: A phrase meant to capture the need to consider when the risk might be realized versus the time it will take to mitigate it. Design and development schedules are fixed by many other factors. In the sustainment phase, projects to mitigate future risks are primarily created based on the timing of risk realization.
Leadership: The ability to focus others on the system's mission and instill a desire for competence while performing one's own tasks with competence.
Likelihood: In a risk analysis, how likely it is that the system and mission will feel the impact of a realized risk. Or, in other words, how likely it is that the risk will actually occur. In design and deployment, schedules are fixed and risks are discussed within this time frame. In a sustainment risk system, the time factor must be added to fully prioritize risks. A medium-impact risk that could happen in a year or two will be prioritized above a high-impact risk that could happen in 8 to 10 years. The latter is still important to track as a risk, but in a resource-constrained environment it might not get worked on as quickly.
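The prioritization described above can be sketched as a scoring function. The scale, weights, and example risks below are hypothetical, not from this handbook; the point is only that dividing by the years until realization lets a medium-impact near-term risk outrank a high-impact far-term one.

```python
# Illustrative sketch (hypothetical weights and data): prioritizing
# sustainment risks by impact, likelihood, and time until realization.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    impact: int        # 1 (low) .. 3 (high)
    likelihood: int    # 1 (unlikely) .. 3 (likely)
    years_out: float   # estimated years until the risk is realized

def priority(risk: Risk) -> float:
    # Nearer-term risks are weighted up: a medium-impact risk due in
    # 1-2 years can outrank a high-impact risk 8-10 years away.
    return risk.impact * risk.likelihood / risk.years_out

risks = [
    Risk("Obsolete guidance part", impact=2, likelihood=3, years_out=1.5),
    Risk("Airframe fatigue limit", impact=3, likelihood=3, years_out=9.0),
]
for r in sorted(risks, key=priority, reverse=True):
    print(f"{r.name}: priority {priority(r):.2f}")
```

With these numbers the near-term medium-impact risk scores 4.00 and the far-term high-impact risk scores 1.00, so the former is worked first, while the latter remains on the books to track.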
Mission: The reason the system is employed. The military warfighter’s or civil system operator’s mission is the sustainer’s mission. Sustainer mission statements that use the word “sustain” remove themselves too far from their actual mission. Sustainers must see themselves as part of the weapon system warfighter’s or civilian system operator’s team.
Practical: Easy to apply, possessing a common lexicon.
Process Discipline: The actions of your people as they follow organizational processes. Improvements can only occur if the teams respect the processes and improve them instead of ignoring them. Audits that focus on improvement rather than blame, and mechanisms for quickly changing processes, support this organizational goal.
Readiness Factors: Two to six independent system characteristics that, if violated, will affect the system's ability to perform its mission. For instance, the vast majority of systems must be both reliable when used and available when needed. Some must provide accuracy while others need to deliver persistence over a defined site. Readiness factor requirements (e.g., 85% reliable) are often measured across many individual systems and aggregated. This improves the precision of the estimate, usually to the benefit of the mission.
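The precision gain from aggregation can be shown with a standard result: for a pass/fail measurement with true rate p, the standard error of the estimated rate is sqrt(p(1-p)/n), which shrinks as the number of measured systems n grows. A small sketch (the 0.85 rate is taken from the example requirement above):

```python
# Sketch: why aggregating a readiness-factor measurement (e.g. 85% reliable)
# across many individual systems improves the precision of the estimate.
import math

p = 0.85  # assumed true reliability, per the example requirement
for n in (10, 100, 1000):
    se = math.sqrt(p * (1 - p) / n)  # standard error of the estimated rate
    print(f"n={n:4d}: standard error = {se:.3f}")
```

Going from 10 systems to 1000 shrinks the standard error roughly tenfold, which is why fleet-wide aggregation gives a much sharper picture than any single system's record.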
Reliability: A readiness factor that measures how well the system meets the mission once employed. A system might have several modes depending on the mission. For instance, an aircraft might score 100% reliable for delivery of cargo but 0% reliable for delivery of fuel in-flight if the refueling boom breaks.
Risk Integrator: Every sustainment organization needs a few risk integrators well-schooled in working with assessment engineers and other IPT members to formalize risks and present them at monthly risk meetings. This is a skill that must be learned. Risk integrator is a great position to give a rising-star team member for a few years. "Risk Integrator" is an example of sustainment knowledge that needs to be preserved via good organizational structure: the individuals are spread throughout the organization, but form an association or guild to share their knowledge of the risk system.
Sustainability: In man-made systems, it is a design goal, such as reliability or availability. Once deployed, the sustainer is left with the continuous completion of the goal. In non-man-made systems such as ecological systems, the definition becomes muddled: there is no consensus on whether the system was originally designed, or, if it was (consider evolution as a designer, for instance), whether those forces can be relied upon to complete the goal satisfactorily. Thus, in the absence of consensus, post-deployment sustainment is often conflated with sustainability.
Sustainment Risk: A risk that can be shown to impact the mission via the system readiness factors.
Sustainment: Support of the system to ensure continued mission capability. Some view logistics as sustainment, or supply as sustainment, or depot activities as sustainment. Others raise expert engineers or astute program managers to be the most important element of sustainment. In this handbook, sustainment encompasses all the skills required to provide support of the deployed system. Experts in funding sources are just as important as expert repair techs.
System: A set of interacting components. In this handbook, the system includes everything required for the operator to employ the hardware and embedded software to achieve the mission. For instance, manned strategic bombers are designed and deployed to carry out the military doctrine of strategic bombardment against a nation's ability to wage war. Worldwide lighter-than-air Wi-Fi vehicles are designed and deployed to ensure internet coverage in even the most remote parts of Earth.
Time-dependence: In a sustainment risk analysis, a required factor to help determine risk prioritization. See LIKELIHOOD in this chapter.