Locations and routes; echelons and maintenance levels (O-Level/I-Level/D-Level); fields, depots and storage sites; distribution of systems; initial stocks of LRUs and SRUs; LRU and SRU cycling; shipment and discarding data; types, capacities and initial allocation of resources
Operational States
Operational states of systems and equipment; operational profiles; tasks and missions
Logic of Operation and Maintenance
Algorithm of behavior
The customer states the purpose of the analysis. Typically there are a number of parameters of interest (such as time-dependent availability) whose values should be evaluated. There may, however, be other reasons for creating a working simulation model, e.g. when it serves as a step toward the development of a special application.
PREPARING DATA
Certain elements referenced above appear in bold and underlined: functional product tree, failure modes, failure distributions, RBD and algorithm of behavior. These require special effort. They fall roughly into two groups, the Data Group and the Logic Group. The product tree, failure modes, failure distributions and RBD comprise the Data Group, while the Logic Group contains the algorithm of behavior. Activities related to these groups may be executed in parallel, which is reflected in the Project Management procedure outlined below. Let us start with the Data Group.
Functional Product Tree
A product tree of an equipment item is a tree hierarchy based on (in descending order) the equipment item itself as a ‘root node’, followed by its components, followed by their components, and so on. In a typical case an equipment item is a system consisting of LRUs, which in turn may consist of SRUs. A product tree is built taking into account the functional block diagrams that illustrate the operation of, and the interrelationships between, the functional entities of a system as defined in engineering data and schematics. A functional product tree should be described for each item participating in the scenario.
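For illustration only, such a tree can be captured by a small recursive data structure. The Python sketch below uses hypothetical item names and is not tied to any particular tool.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    """A node of a functional product tree: a system, an LRU or an SRU."""
    name: str
    children: List["Item"] = field(default_factory=list)

    def leaves(self):
        """Return the lowest-level items (replaceable units) under this node."""
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

# Hypothetical example: a radar system consisting of two LRUs, one of which holds SRUs.
radar = Item("Radar system", [
    Item("Transmitter LRU", [Item("Power supply SRU"), Item("Amplifier SRU")]),
    Item("Antenna LRU"),
])
print([leaf.name for leaf in radar.leaves()])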
Failure Modes
Sometimes an object has more than one failure mode (type of failure). Whenever relevant, these modes must be taken into account in the overall model. The discipline used to account for failure modes is called FMECA (Failure Mode, Effects and Criticality Analysis). FMEA or FMECA is an analysis technique which facilitates the identification of potential problems in a design or process by examining the effects of lower-level failures. Recommended actions or compensating provisions are made to reduce the likelihood of the problem occurring and to mitigate the risk if it does occur. The FMEA team determines, by failure mode analysis, the effect of each failure and identifies single failure points that are critical. It may also rank each failure according to the criticality of its effect and its probability of occurring.
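As a minimal, purely illustrative sketch (the severity scale and probabilities below are invented, not a prescribed FMECA standard), failure modes can be ranked by a criticality score that combines the severity of the effect with the probability of occurrence:

# Hypothetical failure modes: (description, severity on a 1-10 scale, probability per mission)
failure_modes = [
    ("Transmitter output degraded", 4, 0.020),
    ("Antenna drive seized",        8, 0.005),
    ("Power supply short circuit",  9, 0.002),
]

# Rank by a simple criticality score: severity multiplied by probability of occurrence.
ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2], reverse=True)
for description, severity, probability in ranked:
    print(f"{description:30s} criticality = {severity * probability:.3f}")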
Failure Distributions
Failure data is a central element of the overall scenario. Failure distributions are obtained primarily by the analysis of historical failure data. These distributions are of the following types: exponential, Weibull, normal, lognormal, and non-parametric (user-defined). (In a similar way, historical repair data are analyzed by assigning probability distributions which represent the repair characteristics of a given failure mode.)
An important method used to produce distributions (especially Weibull) from historical data is the Maximum Likelihood Estimation (MLE) technique. The MLE method is based on the assumption that the likely outcome of an experiment is the result that has the highest probability of appearance, i.e., results with higher probability have a greater tendency to appear.
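For example, assuming SciPy is available, a two-parameter Weibull can be fitted to historical times-to-failure by MLE as follows (the data here are synthetic and the location parameter is fixed at zero):

from scipy.stats import weibull_min

# Synthetic times-to-failure (hours); in practice these come from field or test records.
ttf = weibull_min.rvs(c=1.8, scale=500.0, size=200, random_state=42)

# Maximum Likelihood Estimation of a two-parameter Weibull (location fixed at zero).
shape, loc, scale = weibull_min.fit(ttf, floc=0)
print(f"estimated shape (beta) = {shape:.2f}, scale (eta) = {scale:.0f} hours")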
When historical data are unavailable, classical failure rate prediction methods are used. These are based on industry standards, such as MIL-HDBK-217, which predicts failure rates for electronic equipment based on work carried out for the US DoD.
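A parts-count style prediction reduces, under the constant-failure-rate (exponential) assumption, to summing the part failure rates. The sketch below uses purely hypothetical rates, not values taken from MIL-HDBK-217:

# Parts-count style prediction under the constant-failure-rate assumption.
# The failure rates below are illustrative placeholders, NOT values from MIL-HDBK-217.
part_failure_rates_per_million_hours = {
    "microcontroller": 0.50,
    "power transistor": 0.30,
    "connector": 0.05,
    "capacitor bank": 0.15,
}

lambda_total = sum(part_failure_rates_per_million_hours.values())   # failures per 10^6 hours
mtbf_hours = 1e6 / lambda_total
print(f"predicted failure rate: {lambda_total:.2f} per million hours, MTBF about {mtbf_hours:,.0f} hours")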
Reliability Block Diagram (RBD)
A Reliability Block Diagram is a graphical representation of how the components of a system are connected from the reliability point of view. It is a method of modeling how component and sub-system failures combine to cause system failure. The functional relations given by RBDs may be used during the simulation to predict the availability of a system and other measures.
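As a minimal sketch of how steady-state availabilities combine through series and parallel RBD blocks (the component availabilities below are arbitrary illustrative values):

from math import prod

def series_availability(availabilities):
    """All blocks must be up: availabilities multiply."""
    return prod(availabilities)

def parallel_availability(availabilities):
    """At least one block must be up: complement of the product of unavailabilities."""
    return 1.0 - prod(1.0 - a for a in availabilities)

# Illustrative system: two redundant power units in parallel, in series with a processor.
power_group = parallel_availability([0.95, 0.95])
system = series_availability([power_group, 0.99])
print(f"system availability is approximately {system:.4f}")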
Preparing Data (continued)
As mentioned, the activities needed to produce the product trees of equipment items (systems), the descriptions of failure modes, the failure (and also repair, insertion, removal and shipment) distributions, and the Reliability Block Diagrams of systems constitute an important part of the work of collecting and preparing data prior to the actual simulation of the scenario. Another important item needed for the actual calculation is an algorithm of behavior. This is the description of the model responses to various stochastic events, such as a system failure, the arrival of a spare at a local storage, or the ringing of a clock that announces the beginning of preventive maintenance. The activities needed to produce such an algorithm are outlined in the next paragraph.
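Conceptually, an algorithm of behavior maps each kind of stochastic event to a handler. The sketch below is a generic discrete-event skeleton in Python, intended only to illustrate the idea; it does not reflect Annabelle's internal mechanism, and all names and distributions are hypothetical.

import heapq
import random

# Generic discrete-event skeleton (illustrative only; not Annabelle's internal mechanism).
events = []   # heap of (time, event_name, payload) tuples ordered by time
heapq.heappush(events, (random.expovariate(1 / 400.0), "system_failure", "radar-01"))
heapq.heappush(events, (72.0, "preventive_maintenance_due", "radar-01"))

def handle(name, payload, now):
    """Algorithm of behavior: the model's response to each kind of event."""
    if name == "system_failure":
        print(f"t={now:7.1f}  {payload}: send to repair, request a spare from local storage")
    elif name == "spare_arrival":
        print(f"t={now:7.1f}  {payload}: install the spare, return the system to operation")
    elif name == "preventive_maintenance_due":
        print(f"t={now:7.1f}  {payload}: start scheduled preventive maintenance")

horizon = 1000.0
while events:
    now, name, payload = heapq.heappop(events)
    if now > horizon:
        break
    handle(name, payload, now)
    if name == "system_failure":
        # Spare shipment time drawn from a hypothetical lognormal distribution.
        heapq.heappush(events, (now + random.lognormvariate(3.0, 0.5), "spare_arrival", payload))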
DEVELOPING THE ALGORITHM
The starting point is the detailed narrative scenario obtained from the customer. The emphasis here is on the Logic of Operations. The principal advantage of the Annabelle modeling environment is its flexibility: the modeler is not forced to adjust the model to comply with the available modeling tools, but may express the relevant reality quite freely. This allows the logic of the model to be documented independently of its later implementation in Annabelle. The independence of such a record is important since it allows the modeler to concentrate on consultation with the customer rather than on communication with the computer program. Accumulated experience shows that the first version of the narrative account of the scenario obtained from the customer is usually not sufficiently precise or complete. In a typical realistic situation the scenario analysis process is iterative and includes the following three types of activity:
Obtaining the scenario: communicating with customers and users to determine what the scenario is (gathering the elements of the scenario).
Analyzing scenario: determining whether the stated elements of the scenario are unclear, incomplete, ambiguous, or contradictory, and then resolving these issues.
Recording scenario: scenario might be documented in various forms, such as natural-language documents, graphical means, or textual but formal process specifications.
The collected information needs to be represented in some way, and ideally the gathered statements should be expressed in a notation which elucidates their implications, prompts further questions, correlates different aspects, and facilitates detailed analysis. A.D. Achlama uses two alternative representation means: the Process Specification Language (PSL) or flowcharts. The choice of representation is affected by two considerations: the need to improve communication between the users and the analyst, and the evolutionary nature of scenario (model) building. The latter aspect makes PSL preferable to the graphical means.
The analysis activity and the recording activity are intertwined, and include the following steps:
Compile the list of all possible contingencies (events) in the scenario.
For each such event write down a description of the ensuing process using PSL. A particular variety of PSL employed is an open-ended specialization of natural language which uses control structures to reflect an algorithmic character of the process. The control structures are usually sequential or alternation ones (like if-then), and occasionally iterative.
Suppose the event was the end of a periodic checking procedure of an aircraft, which is aimed at determining the operational states of its major constituents, LRUs. Assume that detection efficiency is less than 100%.
The process description may include the following construct:
IF
A failure of at least one constituent was detected
THEN
Send this aircraft to repair
END IF
It may happen that no failure was detected, in which case we are inclined to send the aircraft on its mission, so that it takes off right away. But what if an important part of the aircraft has failed, remained undetected, and prevents the aircraft from flying?
Questions like this one typically arise during a methodical analysis and require additional consultation with the customer, which emphasizes the iterative nature of the scenario analysis. Thus the activities of obtaining the scenario from the customer and of analyzing and recording it follow each other iteratively until both the analyst and the customer are satisfied with the resulting description.
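To make the ambiguity concrete, the following Python sketch (with invented detection efficiency and failure flags; it is not the PSL record used in the project) shows how an imperfect check can leave a flight-critical failure undetected, which is exactly the kind of case the customer has to rule on:

import random

# Illustrative only: an imperfect periodic check of an aircraft's LRUs.
DETECTION_EFFICIENCY = 0.9   # probability that a failed LRU is actually detected

lrus = [
    {"name": "Radar LRU", "failed": False, "flight_critical": False},
    {"name": "Hydraulic LRU", "failed": True, "flight_critical": True},
]

detected = [u for u in lrus if u["failed"] and random.random() < DETECTION_EFFICIENCY]

if detected:
    print("send the aircraft to repair:", [u["name"] for u in detected])
else:
    undetected_critical = any(u["failed"] and u["flight_critical"] for u in lrus)
    # The scenario must state what happens here: dispatch anyway, abort at start-up, etc.
    print("no failure detected; undetected flight-critical failure present:", undetected_critical)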
CLOSING THE SCENARIO
Any change of scenario after it has been implemented in the Annabelle simulation model is much more costly than a change made before the implementation. Hence there is a point in the project evolution when the scenario, documented as described above, should be concluded and sealed. This closure is an important milestone. It warrants a special Design Review, which helps to provide better mutual understanding between the customer and the analysts as to the scope and content of the scenario and the goals of the analysis. This is also the point at which the customer formally accepts and approves the description of the scenario and authorizes the simulations to be initiated.
BUILDING ANNABELLE PROJECT
After the data are collected and prepared and the scenario logic has crystallized and been sealed, the next step is implementation, i.e. building the Annabelle model. This work is done by the analysts and typically consists of a number of steps, including describing the system structures and their RBDs, describing the geography of the model, inputting the process distributions (for failure, repair, removal, insertion, spatial transfer and other processes), and expressing the scenario logic in Model Logical Streams (MLS).
The parameters to be evaluated are also introduced into the calculation model in the form of specified tallies.
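A time-dependent availability tally, for instance, can be thought of as a time-weighted accumulator over the simulated state history. The sketch below is a generic illustration of that idea, not the tally mechanism of Annabelle:

class AvailabilityTally:
    """Accumulates the fraction of simulated time a system spends in the 'up' state."""
    def __init__(self):
        self.up_time = 0.0
        self.last_time = 0.0
        self.last_state_up = True

    def record(self, time, state_up):
        """Call at every state change; 'time' is the simulation clock."""
        if self.last_state_up:
            self.up_time += time - self.last_time
        self.last_time = time
        self.last_state_up = state_up

    def availability(self, end_time):
        self.record(end_time, self.last_state_up)
        return self.up_time / end_time if end_time > 0 else 0.0

tally = AvailabilityTally()
tally.record(120.0, False)   # failure at t = 120 h
tally.record(150.0, True)    # repaired at t = 150 h
print(f"availability over 1000 h: {tally.availability(1000.0):.3f}")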
The coding is followed by test runs, debugging, verification (checking the Annabelle model against its specification, as opposed to validation, which refers to the comparison of calculation results with real measurements), and activities aimed at making the MLS algorithms more efficient in order to decrease simulation times. All Annabelle-related work is done by the analyst.
SIMULATION & VALIDATION
When the simulation model has been checked and is ready, the production runs are made to produce the results.
At this point the customer may require that the model be validated by comparing the calculated parameter values with the measured ones. Validation of the model may be an important step, providing confidence in the model and in the bulk of the results obtained. It should be remembered, however, that the quality of the results depends upon the quality of the input data, so that good agreement between calculated and measured values may be expected only if the input data are of reasonable quality.
FINAL REPORT & PRESENTATIONS OF RESULTS
If the purpose of the analysis was the evaluation of the specified parameters, then this is the final point of the project. The report is delivered to the customer and the final presentation is made.
If, however, the goal is the development of a special application, then this is the starting point of the application building.
FORMULATING THE APPLICATION REQUIREMENTS
The Annabelle modeling and simulation platform allows a vast variety of scenarios to be modeled. It requires, however, a certain level of acquaintance with the tool on the part of the modeler. There are cases when the customer would like to perform simulation analyses himself, but lacks the means and/or the motivation to develop and sustain in-house expertise.
This conflict may be resolved when the intention is to repeatedly analyze one particular, already developed model with varying values of a set of parameters. Such a situation calls for developing an application: an Annabelle-based model in which the values of certain parameters may be altered while the rest of the model is fixed. One of the benefits of the Application is that its user needs much less training than the operator of the full-scale Annabelle platform.
If the customer decides in favor of an application, he has to formulate the requirements, which may be done with or without the analyst’s assistance. Such requirements contain two major sections: a list of modifiable parameters and an outline of the preferred user interface. Some additional tallies may be requested as well.
DEVELOPING THE APPLICATION
The application is based upon the Annabelle project that has already been built and validated as described above. The idea is to close this project, i.e. make it unalterable by the user in all its parts except for the specified set of parameters. A special interface (sometimes called a “casing”) is prepared which allows entering the values of the specified parameters. The user sees only the casing and uses it to enter the input data (i.e. parameter values) for each parametric run. These values are checked to ensure their legitimacy and, if found valid, are transferred to Annabelle. The user pushes the “Run” button and the simulation of the embedded model is initiated to produce the results for this specific choice of parameter values. An important advantage of using an Application, from the customer’s point of view, is that the data preparation, the runs and the collection of results may be done by personnel with much less training than a modeler operating the full-scale Annabelle modeling platform.
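As a rough illustration of the casing idea (a hypothetical wrapper, not the actual Annabelle interface), the application layer validates a fixed set of user-editable parameters and only then hands them to the embedded, otherwise closed model:

# Hypothetical casing around a closed simulation model: only these parameters are editable.
EDITABLE_PARAMETERS = {
    "fleet_size":      {"type": int,   "min": 1,   "max": 200},
    "spares_per_lru":  {"type": int,   "min": 0,   "max": 50},
    "shipment_time_h": {"type": float, "min": 1.0, "max": 720.0},
}

def validate(user_inputs):
    """Check the legitimacy of user-entered values before passing them to the embedded model."""
    clean = {}
    for name, spec in EDITABLE_PARAMETERS.items():
        value = spec["type"](user_inputs[name])          # raises if missing or of the wrong type
        if not (spec["min"] <= value <= spec["max"]):
            raise ValueError(f"{name}={value} outside allowed range")
        clean[name] = value
    return clean

params = validate({"fleet_size": "12", "spares_per_lru": 3, "shipment_time_h": 48})
print("parameters accepted, starting parametric run:", params)
# run_embedded_model(params)   # hypothetical call into the closed, embedded project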
Acceptance Test and Completion of the Development
Following the development and internal testing of the Application, the Acceptance Test is conducted by the customer with the assistance of the company. If the customer is satisfied with the results of the acceptance test, the Application is delivered to the customer and the development project is completed. Of course, the company provides maintenance support to the customer on the terms mutually agreed upon.
PROJECT MANAGEMENT ISSUES
In order to facilitate project management and to ensure that the deliverables are produced on time, certain management processes are included in the planned project activities. These are: the Project Initiation Review (establishing goals, deliverables and time schedule), the Design Review (closing the scenario and starting the Annabelle modeling), and the Concluding Design Review (presentation of the results, the Acceptance Test and delivery of the Application).
Project Milestones
The following are the milestones of a project evolution:
Project Initiation Review
Getting the Scenario and Systems Description
Preparing Data
Preparing Product Trees
Describing Failure Modes
Gathering Failure Distributions
Building Reliability Block Diagrams (RBDs)
Developing the Algorithm
Design Review: Closing the Scenario
Building Annabelle Project
Simulation and Validation
Concluding Design Review: Presentation of Results
Formulating the Application Requirements
Developing the Application
Acceptance Test and Delivering the Application
Remarks
The “Preparing Data” group of activities (Product Trees, Failure Modes, Failure Distributions, RBDs, etc.) and the “Developing the Algorithm” item are worked on in parallel.
The Application-related items (the last three milestones above) appear only if the final goal of the project is the development of an Application.