Common Misconceptions About the FedComp Index

Several recurring misunderstandings appear in questions submitted through the platform's feedback channels. This post addresses them directly, using the definitions established in the methodology documentation.

The first misconception holds that the FedComp Index measures contractor quality. It does not. The index measures competitive positioning based on federal award history. Quality, past performance, technical capability, and responsibility status are evaluated separately by contracting officers through different mechanisms. The index does not assess any of these factors.

A second misunderstanding confuses registration with competitive positioning. A firm registered in SAM.gov appears in the contractor population, but registration alone does not generate a meaningful score. Award history determines the score. A contractor with no award transactions has no index value, regardless of how complete its SAM.gov registration may be.

The third misconception treats certifications as score drivers. Certifications such as 8(a), HUBZone, SDVOSB, and WOSB appear in contractor dossiers because they are matters of public record in SAM.gov. They do not modify the scoring formula. A contractor's index value derives entirely from award volume and award recency, applied uniformly to all contractors.

The fourth misconception concerns geographic scope. The index does not measure presence within a single state or region. Award data is national. A contractor may be based in one location and compete for federal requirements across the country. The index reflects that competition regardless of where the contractor's physical address appears.

The fifth misconception assumes that the score is subjective or comparative. The scoring formula is deterministic. Identical inputs produce identical outputs. A contractor's index value is not assigned based on judgment or industry standing. It is computed from award data using fixed parameters that apply without exception to every contractor in the population.
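
A minimal sketch of what determinism means here. The actual FedComp formula is not published in this article; the function name, the 0-100 scale, the recency half-life, and the dollar normalization below are all illustrative assumptions, not the platform's method:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Award:
    amount: float      # obligated dollars (illustrative field)
    age_years: float   # years since the award date (illustrative field)

def fedcomp_score(awards: tuple[Award, ...]) -> float:
    """Deterministic: a pure function of award data, no judgment inputs."""
    if not awards:
        return 0.0  # no award history -> no meaningful score
    # Illustrative only: recency-discounted volume, capped at 100.
    weighted = sum(a.amount * 0.5 ** a.age_years for a in awards)
    return min(100.0, weighted / 1_000_000)

# Identical inputs produce identical outputs, without exception.
history = (Award(2_500_000, 0.5), Award(1_000_000, 2.0))
assert fedcomp_score(history) == fedcomp_score(history)
```

The point is the shape, not the numbers: the score is a pure function of recorded awards, so no external input can move it.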

The sixth misconception holds that the index can be influenced by external factors such as marketing, proposals, or relationship-building. The index does not observe any of these activities. It observes only what has been awarded and recorded in USASpending.gov. If no award transaction exists, no score change occurs.

The seventh misconception treats the index as a prediction. It is not. The index describes historical competitive positioning. It says nothing about what a contractor will win in the future. A high index reflects strong award history. It does not guarantee continued award activity.

The eighth misconception involves direct comparison between contractors in different NAICS corridors. A contractor with a score of 65 in one corridor is not equivalent to a contractor with a score of 65 in another corridor. Each corridor has its own competitive density. Scores are computed relative to a population of peers, and that population varies by NAICS classification.
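
One way to see why cross-corridor comparison fails, assuming for illustration that position is measured as a percentile within the corridor's own population (the corridor names and score lists below are hypothetical):

```python
from bisect import bisect_left

def percentile_in_corridor(score: float, corridor_scores: list[float]) -> float:
    """Rank a score against its own corridor's population only."""
    ranked = sorted(corridor_scores)
    return 100.0 * bisect_left(ranked, score) / len(ranked)

it_services = [12.0, 40.0, 65.0, 88.0, 95.0]   # denser corridor
construction = [5.0, 30.0, 65.0]               # sparser corridor

# The same raw 65 occupies a different position in each population.
percentile_in_corridor(65.0, it_services)   # -> 40.0
percentile_in_corridor(65.0, construction)  # -> roughly 66.7
```

Identical numbers, different competitive meaning, because the comparison set differs.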

The ninth misconception concerns subcontracting relationships. The index reflects prime contract awards only. Subcontracting activity does not appear in the scoring data. A contractor that performs substantial work as a subcontractor may have a low index while earning significant revenue through that subcontract work.

The tenth misconception involves the weight assigned to the index drivers. Award volume is the dominant factor. Award recency is a secondary factor. The relative weights are fixed and apply uniformly. The index does not adjust weights based on contractor type, size, or certification status.
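
A minimal sketch of what "fixed, uniform weights" implies. The article does not publish the actual parameters; the 0.8 / 0.2 split and the component names are illustrative assumptions:

```python
# Illustrative constants, not the platform's published values.
VOLUME_WEIGHT = 0.8    # award volume: dominant factor
RECENCY_WEIGHT = 0.2   # award recency: secondary factor

def combine_drivers(volume: float, recency: float) -> float:
    """Weights are module-level constants: no branch inspects contractor
    type, size, or certification status, so every contractor is scored
    by the same formula."""
    return VOLUME_WEIGHT * volume + RECENCY_WEIGHT * recency
```

Whatever the real values are, the structural claim is the same: the weights are constants of the formula, not attributes of the contractor.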

A contractor's score exists independent of whether anyone references it. The methodology computes it. The dossier displays it. The ranking table sorts by it. The FedComp Index is what it is regardless of whether anyone reads the methodology documentation. A score of 61 is a score of 61. The platform makes no case for what it means.