JFORCES ISR Overview



Within this study, Blue responses were limited to reactions to observed Red activities. Early, unambiguous warning of Red actions affords Blue a broader range of operational responses. To this end, the rapid collection, evaluation and delivery of ISR information is central to Blue's survival and the subsequent defeat of Red.



The ISR evaluation incorporated in this study starts with sensor collection. Sensor collection is based upon the combination of viewable regions and the collection tasking within those regions. The areas viewable by sensors can be evaluated prior to full scenario simulation in modes appropriate to the sensor platform. The evaluation is significantly different for space-based sensors than for air-, sea- and ground-based sensors. The approach for evaluating space-based sensors is addressed first.











Satellite coverage of selected targets can be reviewed both graphically and in tabular output in this interface.



Using this interface the analyst can perform a first-order evaluation of an architecture’s ability to cover any selected region of interest. More importantly, the algorithms behind this interface provide the basis for the detailed prescheduled sensor tasking employed in the scenario.
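As an illustration of the kind of first-order coverage computation such an interface rests on, the sketch below checks sampled satellite ground-track positions against a simple circular footprint to produce access windows over a target. The data shapes, the footprint model and all names are assumptions for illustration, not the JFORCES algorithms.

```python
# Minimal sketch (not the JFORCES implementation): given sampled satellite
# sub-point positions and an assumed circular footprint radius, compute
# access windows over a single target point.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two lat/lon points in kilometers."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

@dataclass
class GroundTrackSample:
    t: float      # seconds from epoch
    lat: float    # sub-satellite latitude, degrees
    lon: float    # sub-satellite longitude, degrees

def access_windows(track, target_lat, target_lon, footprint_km):
    """Return (start, end) times when the target lies inside the footprint."""
    windows, start = [], None
    for s in track:
        visible = great_circle_km(s.lat, s.lon, target_lat, target_lon) <= footprint_km
        if visible and start is None:
            start = s.t
        elif not visible and start is not None:
            windows.append((start, s.t))
            start = None
    if start is not None:
        windows.append((start, track[-1].t))
    return windows
```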





Two modes of tasking sensors are supported. The first is prescheduled sensor tasking. The second is dynamic cross-cuing of sensors to confirm an initial ambiguous detection made during scenario execution. Dynamic cross-cuing results from an initial sensor detection and is driven by an analyst-defined stack of tasking rules that specify which sensor types should be dynamically retasked to confirm initial detections and what roles those sensors may fill within the scenario.
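A minimal sketch of what such an analyst-defined cross-cue rule stack might look like follows; the rule fields, the role tag and the matching logic are assumptions used only to illustrate the concept, not the JFORCES rule format.

```python
# Illustrative cross-cue rule stack: an ambiguous report is matched against
# prioritized rules, and eligible sensors of the cued types are returned.
from dataclasses import dataclass

@dataclass
class CrossCueRule:
    trigger_sensor_types: set   # sensor types whose reports can trigger this rule
    max_confidence: float       # only cue while the report is still ambiguous
    cue_sensor_types: list      # sensor types eligible for dynamic retasking
    priority: int               # lower value = evaluated first

def select_cue_sensors(report, rules, available_sensors):
    """Walk the rule stack in priority order and return sensors to retask."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if (report["sensor_type"] in rule.trigger_sensor_types
                and report["confidence"] <= rule.max_confidence):
            # Only cue sensors whose declared scenario role allows cross-cuing.
            return [s for s in available_sensors
                    if s["type"] in rule.cue_sensor_types
                    and s.get("role") == "cross-cue-eligible"]
    return []
```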



Prescheduled sensor tasking can either be defined explicitly for every sensor or generated from a target list, tasking priorities, desired coverage times and revisit rates, sensor type and characteristics, and desired information quality. A prioritized tasking stack is then generated that tracks the type, quality and timeliness requirements of each detection task.
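The sketch below illustrates one way such a prioritized tasking stack could be generated from a target list and its collection requirements; the field names, the heap ordering and the fixed-interval expansion are illustrative assumptions rather than the study's actual generator.

```python
# Sketch: expand each collection requirement into timed tasks and keep them
# in a priority-ordered stack for later scheduling.
import heapq
from dataclasses import dataclass

@dataclass
class CollectionRequirement:
    target_id: str
    priority: int        # lower = more important
    start: float         # desired coverage window start (s)
    end: float           # desired coverage window end (s)
    revisit_s: float     # desired revisit interval (s)
    min_quality: float   # required information quality (notional scale)

def build_tasking_stack(requirements):
    """Return a heap of (priority, time, target_id, min_quality) tasks."""
    stack = []
    for req in requirements:
        t = req.start
        while t < req.end:
            # Heap orders by (priority, time), so higher-priority, earlier
            # tasks are popped first during scheduling.
            heapq.heappush(stack, (req.priority, t, req.target_id, req.min_quality))
            t += req.revisit_s
    return stack
```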



This PP slide will form the basis of a dynamic presentation where parts will be highlighted IAW the narration















Again, this PP slide will form the basis of a dynamic presentation where parts will be highlighted IAW the narration

During scenario execution this information is then merged with the sensor access times and sensor characteristics to develop an intelligent task plan. Benefits of performing this sensor-to-task pairing at runtime include the ability to reschedule around weather changes or to dynamically insert tasks resulting from scenario execution. Factors determining the sensor-to-task pairing include those highlighted on the slide.
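A simplified sketch of this runtime sensor-to-task pairing follows; the greedy assignment rule, the weather callback and the data shapes are assumptions chosen to illustrate the idea of merging the tasking stack with access windows, not the JFORCES planner itself.

```python
# Sketch: pop tasks from the prioritized stack and assign each to the first
# access window that meets its time and quality needs and is not weathered out.
import heapq

def pair_tasks_to_sensors(tasking_stack, access_windows, weather_ok):
    """
    tasking_stack: heap of (priority, time, target_id, min_quality)
    access_windows: dict target_id -> list of (sensor_id, start, end, quality)
    weather_ok: callable(sensor_id, time) -> bool
    Returns a list of (sensor_id, target_id, time) assignments.
    """
    plan = []
    while tasking_stack:
        priority, t, target_id, min_quality = heapq.heappop(tasking_stack)
        for sensor_id, start, end, quality in access_windows.get(target_id, []):
            if start <= t <= end and quality >= min_quality and weather_ok(sensor_id, t):
                plan.append((sensor_id, target_id, t))
                break
        # Unsatisfied tasks simply fall out here; a fuller planner would
        # reinsert them at the next access opportunity.
    return plan
```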



This process produces a tasking stack that directs detection attempts during scenario execution. Actual detection success was then determined by sensor characteristics, the environment, jamming, and the target’s signature, states and activities.
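One way to picture the per-attempt detection roll described above is the sketch below, which folds sensor performance, environment, jamming and target signature into a single probability; the multiplicative factor model is an assumption for illustration, not the simulation's actual detection model.

```python
# Sketch: combine degradation factors (each in [0, 1]) with a base probability
# of detection, then roll against it for a single scan.
import random

def detection_probability(base_pd, env_factor, jamming_factor, signature_factor):
    """Multiplicative combination, clamped to [0, 1]."""
    p = base_pd * env_factor * jamming_factor * signature_factor
    return max(0.0, min(1.0, p))

def attempt_detection(base_pd, env_factor, jamming_factor, signature_factor, rng=random):
    """Return True if this scan detects the target."""
    return rng.random() < detection_probability(
        base_pd, env_factor, jamming_factor, signature_factor)
```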



This data is maintained internally and can also be monitored by the analyst. Many aids are available for understanding the results. Here, information is provided dynamically as reports come in. At the top level the detection status of key targets is presented in a “stoplight” manner, with green, yellow and red dual-color buttons indicating target detection by different methods. The dual-color buttons show report timeliness in the center and accuracy on the outside. Clicking these buttons provides information on which sensor detected the target, when, and with what accuracy. Resolution data can be presented in similar outputs. The results are also color coded in white and blue: a white background indicates data available for analysis, while a blue background indicates data available to a warfighter for decision-making.
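The mapping from report timeliness and accuracy onto the dual-color stoplight buttons could be sketched as follows; the thresholds shown are invented placeholders, not study values.

```python
# Sketch: inner color reflects report timeliness, outer color reflects accuracy.
def stoplight_color(value, green_max, yellow_max):
    """Generic three-band mapping: smaller values are better."""
    if value <= green_max:
        return "green"
    if value <= yellow_max:
        return "yellow"
    return "red"

def report_button(report_age_s, position_error_m):
    return {
        "inner": stoplight_color(report_age_s, green_max=300, yellow_max=1800),
        "outer": stoplight_color(position_error_m, green_max=100, yellow_max=1000),
    }
```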



In addition, the confidence column indicates the current force confidence in the contact. Clicking on this button indicates the specific sensor reports fused to provide this confidence.



Finally, the rightmost two columns indicate the ISR and operational responses taken in this scenario execution.



Non-satellite ISR evaluation is similar. Sensor coverages were again evaluated according to the sensor characteristics, including view limitations.





























As we change from the scenario sensor coverage overview to the coverage of a specific Predator, we move from a coarse coverage view to the detailed sensor field of view actually used by the sensor detection algorithms.
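A minimal version of such a detailed field-of-view test is sketched below as a flat-earth, 2D range-and-bearing check; the geometry is deliberately simplified and is not the fidelity used in the actual detection algorithms.

```python
# Sketch: a target is in the field of view if it is within sensor range and
# within the angular half-width of the sensor boresight.
from math import atan2, degrees, hypot

def in_field_of_view(sensor_xy, sensor_heading_deg, half_angle_deg, max_range,
                     target_xy):
    dx, dy = target_xy[0] - sensor_xy[0], target_xy[1] - sensor_xy[1]
    if hypot(dx, dy) > max_range:
        return False
    bearing = degrees(atan2(dx, dy)) % 360.0            # 0 deg = +y ("north")
    offset = (bearing - sensor_heading_deg + 180.0) % 360.0 - 180.0
    return abs(offset) <= half_angle_deg
```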













Moreover, the actual detection success is based upon line-of-sight limitations, as indicated in this detailed view of the coverage of ground surveillance radars. The detection capability in each direction is overlaid on an auto-generated map based on DTED information. Because most of this scenario occurred over the ocean, this capability was turned off to reduce runtime. Additional mobile sensor planning features that were available but not used in this scenario include the capability to generate optimal airborne routing to cover a number of collection points that must be regularly monitored. This capability sets up a roving sensor pattern analogous to the tasking methodology used for satellites.
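The terrain-masked line-of-sight test described above can be pictured with the sketch below, which samples a DTED-like elevation grid along the sensor-to-target path; the sampling scheme and grid interface are illustrative assumptions rather than the actual DTED lookup.

```python
# Sketch: fail line of sight if terrain rises above the straight sight line
# between the sensor and the target at any sampled point along the path.
def has_line_of_sight(elev, sensor_rc, sensor_h, target_rc, target_h, steps=100):
    """elev[r][c] is terrain height; *_h are heights above terrain (meters)."""
    r0, c0 = sensor_rc
    r1, c1 = target_rc
    z0 = elev[r0][c0] + sensor_h
    z1 = elev[r1][c1] + target_h
    for i in range(1, steps):
        f = i / steps
        r = round(r0 + f * (r1 - r0))
        c = round(c0 + f * (c1 - c0))
        sight_line_z = z0 + f * (z1 - z0)
        if elev[r][c] > sight_line_z:
            return False
    return True
```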



















I will dynamically show the transmission of a notional message using a “follow the bouncing ball” approach

The ISR component of this study went beyond basic data collection by incorporating the communications and processing delays involved in getting information to the simulated warfighter. The communications links that maintain the architecture were detailed by station, location, delay information, and linkages, as shown here. A typical message originates at the sensor and is relayed through the MGS, where an initial delay is incurred. The message is then relayed through communications satellites to JWICS, where another processing and dissemination delay occurs. The data is then transmitted to the JIC and DCGS, and finally forwarded to PACOM for use. At every step the probability of message loss is evaluated.
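The accumulation of per-hop delay and loss probability along that relay path can be sketched as follows; the hop delays and loss rates shown are notional placeholders, not measured architecture values.

```python
# Sketch: sum per-hop delays and multiply per-hop delivery probabilities
# along the sensor -> MGS -> comms satellite -> JWICS -> JIC/DCGS -> PACOM path.
def end_to_end(hops):
    """hops: list of (name, delay_s, p_loss). Returns (total_delay, p_delivered)."""
    total_delay, p_delivered = 0.0, 1.0
    for name, delay_s, p_loss in hops:
        total_delay += delay_s
        p_delivered *= (1.0 - p_loss)
    return total_delay, p_delivered

path = [("sensor->MGS", 2.0, 0.001),
        ("MGS->comms satellite", 1.0, 0.002),
        ("satellite->JWICS", 5.0, 0.001),
        ("JWICS->JIC/DCGS", 10.0, 0.001),
        ("JIC/DCGS->PACOM", 3.0, 0.001)]
delay, p_ok = end_to_end(path)   # 21.0 s total delay, roughly 99.4% delivery
```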



While conceptually straightforward, the architecture-level evaluation is complex in execution because of the number of possible links and the possibility of employing alternate communications paths as the principal connection routes experience delays. The performance of the evolving architectures was therefore evaluated. This analysis can be used to identify architecture chokepoints and to measure information availability to the warfighter across different epochs. In this scenario there were no direct or ECM attacks against our strategic communications architecture, nor were there any nuclear events to disrupt propagation, so communications connectivity remained high.
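Selecting an alternate path as the principal routes slow down amounts to a shortest-path search over the current link delays, sketched below with a standard algorithm; the graph representation is an assumption used only to illustrate the rerouting idea.

```python
# Sketch: Dijkstra's algorithm over current per-link delays picks the fastest
# available route between two stations in the communications architecture.
import heapq

def fastest_path(links, src, dst):
    """links: dict node -> list of (neighbor, current_delay_s)."""
    best = {src: 0.0}
    prev = {}
    queue = [(0.0, src)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > best.get(node, float("inf")):
            continue
        for neighbor, delay in links.get(node, []):
            nd = d + delay
            if nd < best.get(neighbor, float("inf")):
                best[neighbor], prev[neighbor] = nd, node
                heapq.heappush(queue, (nd, neighbor))
    return float("inf"), []
```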









Details on detection success and the causes of failure were analyzed after the run. Detections against key threats could be evaluated at single-scan resolution. The results were also evaluated for timeliness, relevance, availability, accuracy, and completeness, or TRAAC for short. These form one of the two bases of the cross-architecture comparison made in these runs. This information augments the typical warfighting statistics also gathered in this study.
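One simple way to roll the five TRAAC attributes into a comparable per-architecture figure is sketched below; the 0-to-1 normalization and equal weighting are assumptions, not the scoring actually used in the study.

```python
# Sketch: weighted mean of the five TRAAC attributes, each pre-normalized
# to [0, 1], giving a single comparable score per architecture.
def traac_score(timeliness, relevance, availability, accuracy, completeness,
                weights=(0.2, 0.2, 0.2, 0.2, 0.2)):
    attrs = (timeliness, relevance, availability, accuracy, completeness)
    return sum(w * a for w, a in zip(weights, attrs))

# Example comparison of two notional architectures:
baseline = traac_score(0.6, 0.8, 0.9, 0.7, 0.5)   # 0.70
upgraded = traac_score(0.8, 0.8, 0.9, 0.8, 0.7)   # 0.80
```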

















Other information gathered in this study includes the knowledge matrix statistics. These start with the ISR detections and then evaluate each detection in terms of the location, tracking, identification, threat activity, capability and intent information provided by each detecting and reporting sensor and its related processing.
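A knowledge-matrix record tallied from incoming detections might be structured as in the sketch below; the axis names mirror the categories just listed, while the 0-to-1 scoring and best-so-far update rule are illustrative assumptions.

```python
# Sketch: one row per target, one column per knowledge axis, keeping the best
# score achieved so far (0 = unknown .. 1 = fully known) as reports arrive.
KNOWLEDGE_AXES = ("location", "tracking", "identification",
                  "threat_activity", "capability", "intent")

def update_knowledge_matrix(matrix, target_id, report):
    """report maps axis names to scores for a single sensor detection."""
    row = matrix.setdefault(target_id, {axis: 0.0 for axis in KNOWLEDGE_AXES})
    for axis in KNOWLEDGE_AXES:
        row[axis] = max(row[axis], report.get(axis, 0.0))
    return matrix
```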



The information shown in this AVI is notional to allow unclassified dissemination and control. Actual data will be discussed later in this briefing.