Rapid Relational Modeling/Interpretive Structural Modeling


Rapid Relational Modeling (RRM) and Interpretive Structural Modeling (ISM) are problem-solving methodologies designed to assist the user in identifying, defining, ordering and understanding the structural elements of complex issues. Even simple problems usually have several facets or components; complex problems simply have more of them, and the interrelationships among them are more intricate. In structural modeling terminology, these components are referred to as “elements” within a well-defined problem area known as a “set” or “domain”. The term “simple” is applied to problem domains consisting of between three and seven elements. These numbers represent the generally accepted limits of short-term memory within which human beings can consistently manage variables in data structuring, interpretation and decision-making processes. In such simple problem domains, it is fairly obvious what the elements and relationships are, and structural modeling is not needed. Unfortunately, few problem domains in modern society fall into the “simple” category.

Human thought processes are remarkably adept at decomposing large element sets into manageable sub-groupings, at rapidly shifting focus to manage overlapping sets, and at identifying correlating or associative relationships in order to manage and understand complex problems. In many cases, the subconscious processes involved in comprehension are so efficient that they lead us to believe we are quite capable of correctly understanding complex problems when, in fact, our perception diverges greatly from reality. While some relatively simple problems may be solved by reasoning or intuition alone, complex problems require assisted reasoning. Intuition may be valuable in formulating conceptual solutions, but these concepts must be supported by logic and reasoning if they are to be translated into appropriate responses to complex problems.

Complex problems involving many elements (more than five, plus or minus two) are approached within a maze of interactions and mental functions that comprise human decision-making processes. Such functions include data assimilation, reasoning, comprehension and situation interpretation, and they are notably prone to error and inconsistency. Given the contributions of prior experience and conditioning to human management of sensed data, it is highly unlikely that two people will instantly develop identical mental models from the stimuli presented by a complex problem. This often leads to wide variations between “truth” and “perception” and produces disagreement among human beings as to which, if any, internalized mental models most correctly correspond to the structure of the “real” problem domain.

Tools such as structural modeling, based on graph theory, matrix algebra and the mathematics of “fuzzy logic”, have been conceived to assist people in reaching consensus and in enhancing their understanding of the nature of complex problems. The application of these approaches is becoming widely accepted as a vital step in developing testable hypotheses and measurable objectives and, ultimately, in formulating effective approaches to problem solving. Rapid Relational Modeling is a three-step, computer-aided methodology developed around well-documented structural modeling theories (see attached Bibliography) that has been applied to a wide range of complex problems with great success.

What is Rapid Relational Modeling?

The RRM process involves a three-step methodology incorporating: 1) nominal group activities directed toward developing an operationally defined set of elements related to a specific problem domain; 2) use of an on-line, interactive computer-aided software package known as Interpretive Structural Modeling; and 3) construction of a decision logic tree (a graphical representation called a dendrogram, which depicts the numerical taxonomy system’s internal structure and hierarchical lineage) ordered top-to-bottom, left-to-right on the output of one or more ISM sessions.

[Figure 1. Typical Dendrogram]

Step 1: Brainwriting

The RRM approach to addressing and solving complex problems begins with the identification and definition of the problem context domain. Elements comprising the domain are identified and operationally defined (i.e., able to be sensed, quantified and verified by human observation) in the course of a “brainwriting” session involving domain experts and end users of products and processes. Brainwriting is a structured form of problem solving that follows the general rules of brainstorming: free-flowing, group idea generation, with a prohibition on practices, such as the use of “killer phrases”, that result in premature rejection of any input that may prove to be a good idea upon more careful consideration. A brainwriting session begins with the formulation of a question to which the group is to respond; the group is then gathered around a table or split into several smaller groups made up of odd numbers of participants. There is no conversation during the process. Each person writes as many ideas as possible on a page, submits the list to a pool made up of the contributions of the other participants, and draws another person’s list of ideas from the pool. After reading what is written on the page drawn from the pool, participants write on that page any additional ideas stimulated by the reading that have not been captured previously.

The process continues until each member has read everything in the pool and has no additional ideas to contribute. If there are several groups (this and the succeeding steps are optional), the products of each group are exchanged and the idea generation process is repeated, or the groups may be instructed to edit the contributions of one of the other groups, either to eliminate redundancy only (no reduction in ideas) or to clarify the element set definition. Each group in turn then submits its edited product as a report to a plenary session convened to consolidate the products of all the groups. Again, editing is limited to eliminating duplicative inputs or clarifying an element.

At the conclusion of this brainwriting activity, a comprehensive, empirically defined checklist of “things that should be considered in a decision context” is produced that completely describes the problem space. This composite list of all participant contributions is entered into the computer as a text file to be used in the computer-aided ISM session that follows.
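The pool circulation and redundancy-only consolidation can be illustrated with a short sketch. The participant sheets and idea strings below are hypothetical, and the case-insensitive duplicate test is a naive stand-in for the editing a live plenary session does by hand:

```python
def consolidate(sheets):
    """Merge all participants' sheets, eliminating redundancy only
    (no reduction in ideas: every distinct idea is kept)."""
    seen, merged = set(), []
    for sheet in sheets:
        for idea in sheet:
            key = idea.strip().lower()   # naive duplicate test
            if key not in seen:
                seen.add(key)
                merged.append(idea)
    return merged

# Each inner list is one participant's sheet after the pool has circulated.
sheets = [
    ["Unclear requirements", "Staff turnover"],
    ["staff turnover", "Budget overruns"],
    ["Budget overruns", "Vendor lock-in"],
]
element_set = consolidate(sheets)
print(element_set)   # four distinct elements survive
```

Only exact duplicates are dropped; every distinct idea survives into the composite list that becomes the ISM input file.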

Step 2: Interpretive Structural Modeling

The ISM technique offers a computer-based method capable of addressing both empirical and subjective elements in the same context, using transitive inferences to structure the relationships present among the element set defined in the brainwriting session. The ISM technique establishes consensus within the expert user group as to the ordering of the element set by recording and processing “true”, “false”, “equal”, or “not related” answers to computer-generated questions such as “more important than”, “precedes in time”, “causes a change in”, “depends on”, “is equivalent to”..., with elements of the problem domain presented to the group in a pairwise fashion. The ISM algorithm finds the smallest number of pairings necessary to structure the element set by computing transitive inferences (e.g., if A, then B; if B, then C; therefore if A, then C) with each descriptor against each member of the element list. The ISM algorithm seeks to resolve conflicts and inconsistencies in participant responses until a consistent and logical ordering can be represented as a directed graph or as a logic tree. The text and graphical outputs of the automated session offer users three value-added products to aid in understanding the complex problem domain: 1) clustering (placing certain elements into identity sets), 2) hierarchical ordering, and 3) presentation of supportive-causal linking of related elements.
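The question-saving effect of transitive inference can be sketched as follows. The element names, the `ask` oracle standing in for the group's answers, and the near-pairs-first asking order are all hypothetical; the real ISM package also handles the other response types:

```python
from itertools import combinations

def closure(reach):
    """Propagate transitivity: x->y and y->z imply x->z."""
    changed = True
    while changed:
        changed = False
        for x, y in list(reach):
            for y2, z in list(reach):
                if y == y2 and (x, z) not in reach:
                    reach.add((x, z))
                    changed = True

def ism_structure(elements, ask):
    """Build a reachability relation, skipping any pair whose answer
    is already implied by earlier answers plus transitivity."""
    reach = {(e, e) for e in elements}          # reflexive by convention
    asked = 0
    # Visit nearer pairs first so chains form before the long jumps.
    pairs = sorted(combinations(range(len(elements)), 2),
                   key=lambda p: p[1] - p[0])
    for i, j in pairs:
        a, b = elements[i], elements[j]
        if (a, b) in reach:
            continue                            # inferred, never asked
        asked += 1
        if ask(a, b):                           # e.g. "Does A precede B?"
            reach.add((a, b))
            closure(reach)
    return reach, asked

elements = ["Define goals", "Gather data", "Analyze", "Report"]
order = {e: i for i, e in enumerate(elements)}
reach, asked = ism_structure(elements, lambda a, b: order[a] < order[b])
print(asked)   # 3 of the 6 possible questions; the rest were inferred
```

For the four-element chain above, answering the three adjacent pairs lets the algorithm infer the remaining three pairings, halving the question count; the savings grow with the size of the element set.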

The two basic types of logical ordering used to discern the problem descriptor/element relationships are called “one-directional” and “bi-directional”. If the relationship is sequential and operates in only one direction (examples include time-ordered events in a process, or “is B more important than A?”), this suggests a one-directional (or single-directed) relationship between elements. The test for such a relationship is that if the answer to a proposition is “true”, reversing the element order must yield a “false” response (i.e., A will always be more important than B, regardless of how the question is presented). On the other hand, if the relationship is “necessary for”, the fact that “A” is “necessary” for “B” does not logically determine whether “B” is also “necessary” for “A”. For these bi-directional relationships, the ISM program offers options for presenting the inverse pairing of each question.
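A minimal sketch of that distinction, with a hypothetical element pair and answer table: for a one-directional relation a single "true" fixes the inverse pairing as "false" without a further question, whereas a bi-directional relation must be put to the group in both directions.

```python
def answers_needed(pair, one_directional, ask):
    """Return the answer table for a pair, asking the inverse question
    only when the relation type requires it."""
    a, b = pair
    forward = ask(a, b)
    if one_directional:
        # "A more important than B" being true forces the inverse false.
        return {(a, b): forward, (b, a): not forward}
    # "Necessary for" may hold in both, one, or neither direction.
    return {(a, b): forward, (b, a): ask(b, a)}

# Hypothetical session answer: "cost is more important than schedule".
importance = {("cost", "schedule"): True}
table = answers_needed(("cost", "schedule"), True,
                       lambda x, y: importance[(x, y)])
print(table)
```

Here the inverse pairing never reaches the group: only the forward question is looked up, and `('schedule', 'cost')` is recorded as false by inference.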

The ISM algorithm also provides for the efficient simultaneous structuring of elements belonging to joint or disjoint sets existing within the same problem domain (i.e., the user has a set of essentially unlimited or randomly generated data points and needs to know whether any are related in some fashion to one or more sets, comprised of one or more other data points, within a defined problem domain). The ISM program provides “adjacency” and “reachability” routines that allow pairs of elements to be “not related”, yet hierarchically ordered and/or inter-related to one or more other elements. These groups may exist as either parallel or sequential sets within the complex problem domain. This feature provides the flexibility to use ISM for generating Gantt charts (“Does event A precede event B in the context of a process?”) or PERT/critical-path charts (“Is event A necessary for event B in the context of a process?”), as well as for structuring importance hierarchies with branch paths, all using the same software package.
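Assuming a conventional ISM formulation, the adjacency-to-reachability step and the hierarchical ordering it supports can be sketched as follows. The reachability matrix is the transitive closure of the adjacency matrix (computed here with Warshall's algorithm), and a standard level partition places an element at the current level when its reachable set is contained in its antecedent set. The 4-element adjacency matrix is hypothetical, with two parallel branches (0→1→2 and 3→2) joining at element 2:

```python
def reachability(adj):
    """Transitive closure of an adjacency matrix (Warshall), with the
    conventional reflexive diagonal."""
    n = len(adj)
    r = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

def level_partition(r):
    """Peel off hierarchy levels: an element is at the current level
    when everything it reaches is also among its antecedents."""
    n = len(r)
    remaining, levels = set(range(n)), []
    while remaining:
        reach = {i: {j for j in remaining if r[i][j]} for i in remaining}
        ante  = {i: {j for j in remaining if r[j][i]} for i in remaining}
        level = {i for i in remaining if reach[i] <= ante[i]}
        levels.append(sorted(level))
        remaining -= level
    return levels

adj = [[0, 1, 0, 0],    # 0 -> 1
       [0, 0, 1, 0],    # 1 -> 2
       [0, 0, 0, 0],
       [0, 0, 1, 0]]    # 3 -> 2 (a parallel branch)
print(level_partition(reachability(adj)))
```

Elements 1 and 3 land on the same level even though they are “not related” to each other, illustrating how parallel branches coexist within one ordered structure.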

Step 3: Using ISM Outputs for Decision Support or Solution Formulation

Statistical analysis or “fuzzy logic” may easily be applied to the distribution of loadings within the resulting structures of the ISM outputs. This can fine-tune the classification scheme to maximize the similarities within any of the element subsets and to maximize the dissimilarities between element sets while avoiding isolates (i.e., anomalous data points). The statistical techniques available for partitioning the problem elements/descriptors are numerous, and the ISM program does not limit their use, inasmuch as the technique is context independent and is not limited by application domain. ISM incorporates powerful statistical methods, such as penalty-matrix, reachability-matrix and loss-matrix operations, that offer ways for threads of thought to be traced. An element/descriptor of the problem is grouped with other elements/descriptors because “X” causes “Y”, or because the perceived similarity of the concepts behind the semantics can be measured to lie between some empirical boundaries or within some percentile approximation.
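One simple instance of this kind of partitioning is to group elements whose pairwise similarity clears an empirical threshold, i.e., to take the connected components under a similarity cutoff (a single-linkage-style cut). The similarity matrix and the 0.75 threshold below are hypothetical:

```python
def cluster(sim, threshold):
    """Group elements into components whose pairwise similarity
    meets the threshold (union-find over above-threshold pairs)."""
    n = len(sim)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i][j] >= threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

sim = [[1.0, 0.9, 0.2, 0.1],
       [0.9, 1.0, 0.3, 0.2],
       [0.2, 0.3, 1.0, 0.8],
       [0.1, 0.2, 0.8, 1.0]]
print(cluster(sim, 0.75))   # two clusters: {0, 1} and {2, 3}
```

Lowering the threshold merges clusters; raising it splits them and eventually produces isolates, which is exactly the tuning trade-off described above.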

Ad hoc structuring (e.g., “take A and put it into B”) is not presently supported, but structuring can be as simple as assigning weights to the nodes representing one or more elements/descriptors in a logic tree, then looking for natural cleavages or clusters. The third step in the RRM methodology may involve assignment of weights corresponding in value to the depth level of the nodes in a logic tree. These weights are then mapped to the desired decision outcome in a manner that allows the cumulative weight total to fall within a range of scores that assists decision-making in an operational environment. When used as a “filter” to map scarce resources against a prioritized set of elements/descriptors characterizing demand, the logic-tree structure produced by RRM may be used as a rule base for an expert system that manages resource allocation decisions. Similar processes may be used to rapidly structure scoring criteria for any selection process involving the integration of a large number of variables, high-frequency or repetitive assignment, or the direction or control of decisions made by a large population of operators where standardization of decision criteria and outcomes is desired.
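The depth-level weighting described above can be sketched as follows. The tree, the shallower-is-heavier weighting rule, and the candidate are all hypothetical illustrative choices; any mapping from node depth to weight that fits the decision context would do:

```python
# Hypothetical logic-tree nodes with their depth in the hierarchy.
tree = {"mission fit": {"depth": 1},
        "cost":        {"depth": 2},
        "schedule":    {"depth": 2},
        "training":    {"depth": 3}}

MAX_DEPTH = 3

def weight(node):
    """One simple depth-to-weight rule: shallower nodes weigh more."""
    return MAX_DEPTH - tree[node]["depth"] + 1

def score(satisfied):
    """Cumulative weight of the criteria a candidate satisfies."""
    return sum(weight(n) for n in satisfied)

candidate = ["mission fit", "schedule"]   # criteria this option meets
print(score(candidate))                   # 3 + 2 = 5
```

A cutoff on the cumulative score (say, accept when the total meets some operational minimum) turns the weighted tree into the kind of standardized filter or expert-system rule base the text describes.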