JFORCES Detailed Space-Based IR Detection Processing

(a.k.a. Brilliant Eyes Detection Model)


The JFORCES Detailed Space-Based IR detection model considers the following distinct activities:


[Figure 1: Distinct activities of the detailed SBIR detection model]

The focus of this model is to detect, track and identify missile launches. The model resolution is at the focal plane pixel level, meaning that the heat signature recorded at each pixel on the sensor's focal plane can be determined. Additionally, the sensor operations are strongly segregated into the following operationally distinct functions:


[Figure: Operationally distinct sensor functions]

The result is a testbed structure that supports software- and hardware-in-the-loop evaluation. Because this was the design emphasis, this model does not use the standard sensor processing used by most other sensors in JFORCES. Notably, there is no standard scanning cycle maintained through the $SIM_DIR/sensor/prelimsensor.F routine. Likewise, the standard sensor environment routines are not called. While the standard JFORCES sensor modeling procedure works well for most C2, man-in-the-loop and analytic applications, it focuses on discrete detection events involving specific sensor/target pairs and evaluates the environment primarily as it relates to those detection events. The focus of this testbed design is instead to provide a comprehensive emulation of an SBIR focal plane; only after a realistic representation of this focal plane was developed were detections extracted and tracks formed. The coding divisions reflect the functional divisions of operational systems to a level appropriate for a hardware- and software-in-the-loop testbed, which is how this model and the JFORCES environment were used in the original application.
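
Because the emphasis is on emulating the focal plane itself rather than on discrete detection events, the core data structure is effectively a pixel-level intensity map. The following is a minimal Python sketch of such an accumulator, for orientation only; the names FocalPlane and add_point_source are invented here and do not correspond to the Fortran routines under $SIM_DIR/sensor.

    import numpy as np

    class FocalPlane:
        """Pixel-level focal plane: one recorded intensity per detector pixel."""
        def __init__(self, rows, cols, background=0.0):
            self.intensity = np.full((rows, cols), background, dtype=float)

        def add_point_source(self, row, col, watts):
            # deposit a point-source heat signature onto the nearest pixel
            r, c = int(round(row)), int(round(col))
            if 0 <= r < self.intensity.shape[0] and 0 <= c < self.intensity.shape[1]:
                self.intensity[r, c] += watts

    # example: a 256x256 focal plane with a low background and one hotspot
    fp = FocalPlane(256, 256, background=1.0e-9)
    fp.add_point_source(120.4, 87.9, watts=2.5e-7)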


One of the first differences between typical sensor modeling and the modeling supporting this application is that the instantaneous viewing direction of every sensor has to be specified at all times. Two approaches were used to drive the sensor orientation. The first, and more versatile, approach requires the scenario designer to load a series of tasks for each sensor (see the Sensor Tasking section below). The second approach cues one of the detailed SBIRs against a suspected threat launch. This employs (or at least should employ, see notes) two systems of sensors. The first sensor type would usually be a coarse IR sensor scanning large areas of the ground for hotspots. These warnings are used to cue a high-resolution IR sensor (on the same satellite?) to confirm the detection. The focal plane of this sensor is modeled at the individual pixel level. Hypothesized launch detections are generated by comparing the detected hotspots across scans for movement and for environmental effects. If the detection is confirmed, the high-resolution sensor then maintains the track on the target and provides the track data (angles, drift, acceleration and intensity) to a track correlator as indicated in Figure 1.
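
The scan-to-scan hotspot comparison described above can be sketched as follows. This is an illustrative Python fragment, not the JFORCES Fortran; the hotspot tuple format and the drift/intensity thresholds are assumptions made for the example.

    from math import hypot

    def hypothesize_launches(prev_scan, curr_scan, max_drift_px=5.0, min_intensity=3.0):
        """Each scan is a list of (row, col, intensity) hotspot reports."""
        hypotheses = []
        for (r1, c1, i1) in curr_scan:
            if i1 < min_intensity:
                continue
            for (r0, c0, i0) in prev_scan:
                drift = hypot(r1 - r0, c1 - c0)
                # a persistent, slowly drifting, non-fading hotspot becomes a
                # hypothesized launch; stationary returns are treated as background
                if 0.0 < drift <= max_drift_px and i1 >= i0:
                    hypotheses.append({"row": r1, "col": c1,
                                       "drift_px": drift, "intensity": i1})
        return hypotheses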


Described below is the coding overview for each of the activities in Figure 1.


Cross Sensor Cueing

In addition to prescripted sensor tasks, the detailed SBIR representation incorporates the capability to cue SBIRs from other sensors. These other sensors can be modeled via other processes, potentially on other machines or even using live feeds, or can be "dummied up" at a very perfunctory level inside the SBIR representation itself. In past applications, this is the point where external processes reading stored and/or simulated DSP feeds tasked the JFORCES sensors.


The dummied-up routine that replaces these external functions is found in $SIM_DIR/sensor/beirdsphandover.F. This procedure is called by $SIM_DIR/objutil/launchreport.F every time there is a ballistic missile launch, and it is called separately for each satellite sporting a detailed SBIR sensor representation. It is only called, however, if the parameter "dsprun" is set to true. When set to false (the normal mode) the detailed SBIRs will only be tasked as a result of an input from an external process. The beirdsphandover routine is called after a time delay reflecting operational delays expected in the real system. The external satellites do not actually have to be modeled for this procedure to work - their presence is assumed whenever the "dsprun" parameter is set.
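
The call pattern just described is sketched below in Python for clarity; launchreport and beirdsphandover are the Fortran routines named above, while the event scheduler, the delay value and the argument lists are assumptions for illustration.

    def on_ballistic_launch(launch, sbir_satellites, dsprun, schedule_event,
                            handover_delay_s=30.0):
        """Stand-in for the launchreport.F hook: called once per ballistic launch."""
        if not dsprun:
            # normal mode: detailed SBIRs are tasked only by an external process
            return
        for sat in sbir_satellites:
            # operational reporting delay before the handover is acted upon
            schedule_event(delay_s=handover_delay_s,
                           action=lambda s=sat: beir_dsp_handover(s, launch))

    def beir_dsp_handover(sat, launch):
        # stand-in for beirdsphandover.F; see the visibility/slew sketch below
        pass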


The beirdsphandover routine then determines which satellites can see the launch and computes the delay to get these sensors to actually view the expected threat location based upon each sensor's current orientation, tasking, slew and settle properties. If the observation is possible, this routine schedules an event to inject a search task in the expected threat region. The logic for prioritizing this task along with other ongoing tasks is as described in the Sensor Tasking section, below.
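
A minimal sketch of the slew-and-settle delay computation follows. The spherical geometry, the fixed slew rate and the settle time are illustrative assumptions, not the properties or method used in beirdsphandover.F.

    from math import acos, degrees

    def slew_delay_s(current_boresight, target_los, slew_rate_deg_s, settle_time_s):
        """Time to slew between two unit line-of-sight vectors and settle."""
        dot = sum(a * b for a, b in zip(current_boresight, target_los))
        angle_deg = degrees(acos(max(-1.0, min(1.0, dot))))
        return angle_deg / slew_rate_deg_s + settle_time_s

    # example: a 40-degree retask at 2 deg/s with a 5 s settle takes about 25 s
    print(slew_delay_s((1.0, 0.0, 0.0), (0.766, 0.643, 0.0), 2.0, 5.0))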


Sensor Tasking

The first mode is to task sensors based upon a user-input prescripted set of sensor tasks. During mission planning the user specifies tasks for each sensor. These tasks are in one of the following forms:

  1. Maintain a view of a fixed azimuth and elevation relative to the satellite

  2. Search a specified location on the earth

  3. Search a specified trapezoidal area on the earth

In each case the sensor will scan the specified volume until another task is given to the satellite. Only one search can be performed at a time. If the sensor is commanded to search a point or area on the earth and that region passes over the earth limb, the sensor will stop trying to detect anything in that region (essentially shut down) until the region is visible again.
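
The sketch below illustrates one way the three task forms and the earth-limb behavior could be represented. The class layout, field names and the spherical-earth visibility test are assumptions for the example; the actual tasks are stored in the database tables listed below.

    from dataclasses import dataclass

    @dataclass
    class SensorTask:
        kind: str      # "fixed_az_el", "point_search", or "area_search"
        payload: dict  # az/el pair, lat/lon point, or trapezoid corner list

    def point_visible(sat_pos_km, ground_pos_km):
        """True if an earth-fixed point is not hidden behind the earth limb.

        Both arguments are ECEF vectors in km, with the ground point on the
        (spherical) surface. The point is visible when the satellite sits above
        the local horizon, i.e. (sat - point) . point > 0."""
        diff = [s - p for s, p in zip(sat_pos_km, ground_pos_km)]
        return sum(d * p for d, p in zip(diff, ground_pos_km)) > 0.0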


Key code/data components related to this activity are:

1) Sensor tasks stored in the following database tables:


2) Data Retrieval for runtime


3) Tasking prioritization code


4) Supporting runtime data structures


Detection Processing

There are at least three different sensor processing approaches that have been used in the detailed SBIR sensor representation. These are:


  1. Interface to hardware-in-the-loop for testing. The interface code was developed for each prototype as it was delivered. The interface code disappeared with the prototypes, so this mode of operation would require a new interface.

  2. Emulation of the sensor's focal plane on a graphics processor. This was done on a Visual Information Technologies Incorporated (VITEC) board. This board emulated the focal plane by setting up board pixels as pixels on the focal plane and then building a scene image, either by importing a cloud image with environmental hotspots (e.g. fires) or by creating one fractally, and then overlaying hotspots caused by launches, as distorted by the atmosphere and sensor jitter, into this scene. Various filters (including 3x3 and 7x7 high- and low-pass filters) could be used on this scene to isolate candidate detections (a brief filtering sketch follows this list). The code is still intact and is in $SIM_DIR/sensor/VITEC. I won't discuss it in more detail at this time because we no longer have a VITEC board handy, but the code could probably be readily translated to another graphics processor.

  3. Finally, we generally used a parametric model to derive detection results. These detection results were not only useful for military utility analysis; the approach was detailed enough to provide realistic inputs for 1) testing tracking algorithms, 2) testing prototype trackers in the loop, and 3) testing candidate track identification algorithms. All of these capabilities were used.
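
The small-kernel filtering mentioned in item 2 can be illustrated with a numpy/scipy stand-in for the VITEC processing; the box-filter choice and the threshold are assumptions, and the real filter set is in $SIM_DIR/sensor/VITEC.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def highpass_candidates(scene, box=3, threshold=5.0):
        """Return (row, col) pixels that exceed the local mean by 'threshold'.

        'scene' is a 2D array of focal-plane intensities; 'box' is the kernel
        size (3 or 7 in the text above)."""
        local_mean = uniform_filter(scene.astype(float), size=box, mode="nearest")
        residual = scene - local_mean          # high-pass component of the scene
        rows, cols = np.nonzero(residual > threshold)
        return list(zip(rows.tolist(), cols.tolist()))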


Because the parametric model is currently intact it will be the focus of the rest of this section.
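
For orientation, a parametric detection calculation of the general kind referred to above can be sketched as a threshold test against Gaussian background noise. The threshold model and parameter names here are assumptions; the actual parameterization is in the routines and tables listed below.

    import math, random

    def prob_detect(signal, noise_sigma, threshold):
        """P(signal + noise > threshold) for zero-mean Gaussian noise."""
        z = (threshold - signal) / noise_sigma
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    def detected(signal, noise_sigma, threshold):
        """Draw a single detection outcome from the parametric model."""
        return random.random() < prob_detect(signal, noise_sigma, threshold)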


Key code/data components related to the detailed SBIR detection representation in the parametric model are:

1) Sensor parameters stored in the following database tables:


2) Data Retrieval for runtime


3) Detection processing code for the parametric model


4) Supporting runtime data structures


Single Sensor Tracking

This is the tracking process that could occur onboard an individual satellite. The target is tracked not according to absolute location but instead in terms of the current angle from the satellite and the rates of angular movement and angular acceleration. These are the items that can readily be tracked on a single satellite based upon data collected at the focal plane.
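
The angular state described here (angle, angular rate, angular acceleration) can be sketched as a simple fixed-gain alpha-beta-gamma filter. The gains and structure are illustrative only and are not the onboard JFORCES tracker.

    class AngularTrack:
        """Angle-only track state for one focal-plane axis."""
        def __init__(self, angle, alpha=0.5, beta=0.1, gamma=0.01):
            self.angle, self.rate, self.accel = angle, 0.0, 0.0
            self.alpha, self.beta, self.gamma = alpha, beta, gamma

        def update(self, measured_angle, dt):
            # predict one frame ahead under constant angular acceleration
            pred_angle = self.angle + self.rate * dt + 0.5 * self.accel * dt * dt
            pred_rate = self.rate + self.accel * dt
            residual = measured_angle - pred_angle
            # correct the state with fixed alpha-beta-gamma gains
            self.angle = pred_angle + self.alpha * residual
            self.rate = pred_rate + (self.beta / dt) * residual
            self.accel = self.accel + (2.0 * self.gamma / (dt * dt)) * residual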


Key code/data components related to this activity are:

1) Sensor tasks stored in the following database tables:


2) Data Retrieval for runtime


3) Angular Tracker Code

For the simplified 3D tracker (see the next section for options), a number of 3D tracking functions are also incorporated into this routine IF the 3D tracker is specified in database prototyping as being spaceborne.


4) Supporting runtime data structures


MultiSensor Track Correlation

The next step is to combine the angular tracks developed onboard each satellite into composite 3D tracks. This is modeled as being done either in space or at a ground site. The difference between these options is not one of tracker functionality but of different communications success rates. The 3D track correlator can only use detections from the satellites that have successful communications with the tracker site, and this can vary dramatically between a space-based and a ground-based solution depending on the scenario.
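
The core geometric step, turning two or more angular tracks into a single 3D point, can be sketched as a least-squares intersection of lines of sight. This is illustration only; the fielded correlators (the MHT and real3dtracker code listed below) are far more elaborate.

    import numpy as np

    def triangulate(sat_positions, unit_los):
        """Least-squares intersection of lines p_i + t * d_i.

        'sat_positions' are satellite positions (km) and 'unit_los' the matching
        unit line-of-sight vectors; at least two non-parallel lines are needed."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for p, d in zip(sat_positions, unit_los):
            p, d = np.asarray(p, float), np.asarray(d, float)
            M = np.eye(3) - np.outer(d, d)   # projector perpendicular to the LOS
            A += M
            b += M @ p
        return np.linalg.solve(A, b)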


Like many other components of this model, the MultiSensor Track Correlator was a candidate for hardware- and software-in-the-loop testing. The hardware prototypes are gone, but there are at least three software options still available:

  1. A simplified 3D tracker built into the single sensor tracker. This was discussed in the section above.

  2. A Multiple Hypothesis Tracker. Code is in the $SIM_DIR/sensor/MHT directory.

  3. An alternate 3D tracker of unknown lineage. Code is in the $SIM_DIR/sensor/real3dtracker directory.


Key code/data components related to this activity are:

1) Tracker parameters stored in the following database tables:


2) Data Retrieval for runtime


3) 3D tracker code. This should be broken down into two parts: 1) generating and maintaining the tracks, and 2) attack assessment. The first activity combines detections from one or more sensors into a composite track, changing from the 2D focal plane world into the 3D world used for targeting and attack assessment. This provides information on where the threat currently is, as well as the very near term estimates of the threat's history and future required for tracking. The attack assessment activity extends this estimate of the threat's history and future to assess the likely threat launch and target points.
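
A deliberately simplified sketch of the attack assessment step follows: extrapolate the tracked state backward to an estimated launch point and forward to an estimated impact point. Constant gravity over a flat earth is assumed purely for illustration; the JFORCES code uses its own trajectory model.

    from math import sqrt

    def launch_and_impact(pos, vel, g=9.80665):
        """pos = (x, y, z) in m with z = altitude; vel = (vx, vy, vz) in m/s.

        Solves z0 + vz*t - 0.5*g*t**2 = 0 for the backward (launch) and forward
        (impact) crossing times, then extrapolates the horizontal position."""
        x0, y0, z0 = pos
        vx, vy, vz = vel
        disc = vz * vz + 2.0 * g * z0
        if disc < 0.0:
            return None, None                # no real ground crossing
        root = sqrt(disc)
        t_launch = (vz - root) / g           # negative: back toward launch
        t_impact = (vz + root) / g           # positive: forward to impact
        point = lambda t: (x0 + vx * t, y0 + vy * t)
        return point(t_launch), point(t_impact)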


As mentioned above, there are three different 3D trackers in the current JFORCES inventory. Each produces a list of probable current threat locations and heading/velocity information. The word probable in the previous sentence indicates that, because these tracks are based upon uncertain detections with limited resolution and errors due to factors including platform jitter and environmental perturbations, the following errors will exist:

1) Track locations, headings and speeds will have errors

2) Not all threats will be tracked at all times

3) Tracks may be formed where no threat exists.

That said, the first tracker is the simplified 3D tracker described as part of the single sensor tracker discussion (see the Single Sensor Tracking section above). The other two trackers are not yet documented.


All three trackers use the same attack assessment code, which is:


4) Supporting runtime data structures


Statistics Collection

Based on the application requirements, the statistics gathering for this model is more complex than usual and warrants a quick description. Code will be found in the detailed SBIR model that accesses real data. This data is not used directly by the detection, tracking or attack assessment code; it is instead used to provide the error measurements used to monitor system performance. The key code elements are:
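
The error measurements themselves reduce to differencing tracker output against the real (truth) data at matching times, along the lines of the Python sketch below; the metric names and units are assumptions for illustration.

    from math import sqrt

    def track_error_stats(track_positions, truth_positions):
        """Both inputs are lists of (x, y, z) in km sampled at the same times."""
        errors = [sqrt(sum((a - b) ** 2 for a, b in zip(est, tru)))
                  for est, tru in zip(track_positions, truth_positions)]
        n = len(errors)
        return {"n_samples": n,
                "mean_error_km": sum(errors) / n,
                "rms_error_km": sqrt(sum(e * e for e in errors) / n),
                "max_error_km": max(errors)}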



A warning to programmers: the calls to the statistics collection routines are closely coupled with the attack assessment calls. This is convenient for generating statistics and isolating specific error causes at the source, but it might be better to decouple them in future applications.