Measuring Performance for Situation Assessment

Citation: Mahoney, S.M., K.B. Laskey, E.J. Wright, and K.C. Ng, "Measuring Performance for Situation Assessment", Proceedings of 2000 MSS National Symposium on Sensor and Data Fusion, San Antonio, Texas: June 2000.


We define a situation estimate as a probability distribution over hypothesized groups, units, sites, and activities of interest in a military situation. For situation assessments to be useful to decision makers, there must be a way to evaluate their quality. This paper presents and illustrates an approach to meeting this difficult challenge. A situation assessment integrates low-level sensor reports to produce hypotheses at a level of aggregation of direct interest to a military commander. The elements of a situation assessment include 1) hypotheses about entities of interest and their attributes, 2) associations of reports and/or lower-level elements with entities of interest, and 3) inferences about the activities of a set of entities of interest. In estimating the quality of a situation assessment, these elements may be scored at any level of granularity, from a single vehicle to the entire situation estimate. Scoring involves associating situation hypotheses with the ground-truth elements that gave rise to them. We describe why this process may present technical challenges, and present a method for inferring the correspondence between hypotheses and ground truth. Conditional on this correspondence, the quality of the estimate must be assessed according to criteria that reflect the user's requirements. Scores must then be integrated over the different plausible assignments. Our scoring methods can be tailored to user requirements and include a rigorous treatment of the uncertainties associated with a situation estimate.
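The final steps described above (inferring a correspondence between hypotheses and ground truth, scoring conditional on that correspondence, and integrating scores over the plausible assignments) can be sketched in miniature. The 1-D entity locations, the Gaussian localization-error model, and the closeness score below are illustrative assumptions, not the method from the paper:

```python
# Illustrative sketch only: score a situation estimate by weighting
# per-assignment scores by the plausibility of each hypothesis-to-truth
# correspondence. The error model and score are assumed, not the paper's.
from itertools import permutations
import math

def assignment_likelihood(hyps, truth, perm, sigma=1.0):
    """Unnormalized likelihood of one hypothesis-to-truth assignment,
    assuming Gaussian localization error (an assumed toy model)."""
    return math.prod(
        math.exp(-((hx - truth[t]) ** 2) / (2 * sigma ** 2))
        for hx, t in zip(hyps, perm)
    )

def expected_score(hyps, truth, sigma=1.0):
    """Integrate a per-assignment score over all plausible
    correspondences, weighted by their relative likelihood."""
    perms = list(permutations(range(len(truth))))
    weights = [assignment_likelihood(hyps, truth, p, sigma) for p in perms]
    total = sum(weights)

    def score(perm):
        # Score of one assignment: mean closeness of matched pairs,
        # in (0, 1], with 1 meaning a perfect match.
        return sum(1.0 / (1.0 + abs(hx - truth[t]))
                   for hx, t in zip(hyps, perm)) / len(hyps)

    return sum((w / total) * score(p) for w, p in zip(weights, perms))

truth = [0.0, 5.0]   # ground-truth entity locations (1-D toy example)
hyps = [0.3, 5.2]    # hypothesized locations from the situation estimate
print(round(expected_score(hyps, truth), 3))
```

When two hypotheses are nearly equidistant from two ground-truth entities, no single correspondence dominates, and the weighted sum over assignments captures exactly the ambiguity the paper argues a scoring method must handle. (Brute-force enumeration of permutations is only workable for a handful of entities; a real implementation would need an efficient assignment method.)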

For more information, contact IET