Why cells instead of category ratings
One-word vendor ratings ("strong," "core," "moderate") compress too much signal into a single label. Analysts looking at the same vendor reasonably disagree, and a single category score hides which specific capabilities a vendor ships and which it doesn't.
Stack Analysis decomposes each of the 18 security capability categories into smaller cells — 4 to 13 sub-capabilities per category. Every vendor gets rated 0 to 5 on each cell. The category score shown on the radar is the mean of those cell scores, which captures breadth and depth in one number you can actually interrogate.
How to read the chart. Click any category under the radar to see the cell-level breakdown. That's where a 3.5 turns into “strong on detection, weak on investigation” — the view that's actually actionable.
The 0–5 strength scale
| Score | Label | What it means |
|---|---|---|
| 0 | None | Vendor does not cover this cell at all. |
| 1 | Light | Exists but basic — often an add-on or afterthought. |
| 2 | Partial | Limited capability. Some users will hit its edges quickly. |
| 3 | Moderate | Decent capability but not the vendor's strength. |
| 4 | Strong | Solid. Competitive with category specialists. |
| 5 | Core | This is what the vendor is known for. Deep and mature. |
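For reference, the scale is small enough to write down in code; a minimal sketch (the `Strength` name and representation are ours, illustrative only, not the tool's internals):

```python
# Illustrative only: the enum and member names are ours, not the tool's.
from enum import IntEnum

class Strength(IntEnum):
    NONE = 0      # vendor does not cover this cell at all
    LIGHT = 1     # exists but basic; often an add-on or afterthought
    PARTIAL = 2   # limited; some users hit its edges quickly
    MODERATE = 3  # decent, but not the vendor's strength
    STRONG = 4    # solid; competitive with category specialists
    CORE = 5      # what the vendor is known for; deep and mature
```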
Where the cells come from
Cells are drawn from three sources, in order of authority:
- Framework sub-categories. NIST CSF 2.0 subcategories and CIS Critical Security Controls v8.1 safeguards anchor the taxonomy in accepted reference frames.
- Analyst category definitions. Gartner Magic Quadrants, Forrester Waves, and KuppingerCole Leadership Compasses calibrate what counts as a separable capability — not to duplicate analyst rankings, but to match the market's own vocabulary.
- Product reality. What vendors actually ship as distinct products or SKUs. If three major vendors all treat X as a separate product, it deserves its own cell.
How vendors get scored
Each vendor-cell pair is scored against four inputs:
- Product documentation. Official product pages, datasheets, architecture docs, solution briefs.
- Packaging. A capability that's core to the platform scores higher than the same capability sold as an add-on SKU.
- Analyst placement. Gartner / Forrester / KuppingerCole positioning in the relevant category informs calibration between vendors.
- Hands-on evaluation. Our own review where demos, trials, or detailed public documentation allowed direct assessment.
The cell structure is the transparent part; individual scores are editorial judgment calibrated against those sources. We don't publish per-cell citations on the live chart today.
The category score
category_score = mean(cell_strength) across all cells in the category
Range: 0.0 to 5.0. Cells scored 0 are included in the average, so by design broad shallow coverage can outscore narrow perfect coverage. Customers experience gaps, and the formula reflects that.
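A minimal sketch of that rule with hypothetical scores, showing why broad shallow coverage outscores narrow perfect coverage (function and variable names are illustrative):

```python
from statistics import mean

def category_score(cell_strengths: list[int]) -> float:
    """Mean of every cell score in the category, zeros included."""
    return mean(cell_strengths)

broad  = [3, 3, 3, 3, 3, 3]  # moderate everywhere        -> 3.0
narrow = [5, 5, 0, 0, 0, 0]  # two core cells, four gaps  -> ~1.67
assert category_score(broad) > category_score(narrow)
```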
Combining vendors: per-cell merge
When you select multiple vendors, the tool doesn't pick the best scorer per category. It takes the max score per cell across your selection, then averages. That matches how security stacks actually work: one vendor's weak cell is often filled by another in the stack, and both should count toward your coverage.
Example. If your IAM vendor scores 2 on Identity Threat Detection and you've also selected a dedicated ITDR vendor that scores 5 on that cell, the stack's combined score for that cell is 5. The category average rises accordingly.
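A sketch of the merge, using the ITDR example above (the dict-of-cells representation and vendor names are assumptions for illustration, not the tool's actual data model):

```python
from statistics import mean

def merge_stack(vendors: list[dict[str, int]]) -> dict[str, int]:
    """Per-cell max across the selected vendors; a cell a vendor lacks counts as 0."""
    cells = {cell for scores in vendors for cell in scores}
    return {cell: max(scores.get(cell, 0) for scores in vendors) for cell in cells}

iam  = {"identity_threat_detection": 2, "access_governance": 5}
itdr = {"identity_threat_detection": 5}

merged = merge_stack([iam, itdr])
assert merged["identity_threat_detection"] == 5  # the specialist fills the gap

combined = mean(merged.values())  # (5 + 5) / 2 = 5.0 for this two-cell toy category
```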
The market ceiling (dashed ring)
For each category, the dashed ring shows what's achievable if a customer bought every vendor in the dataset — per-cell max across everyone, averaged.
In most categories the ring sits at or very near 5.0, and that's by design. The taxonomy rule is “a cell exists when vendors ship it,” so for almost every cell at least one vendor scores 5. The ring's job is to anchor the chart's outer edge — the gap between your selected stack and the ring is the signal. A shape reaching 3 in a category with a ceiling at 5 means you're two points of coverage away from what the market offers.
Where the ring visibly dips below 5, that's a different kind of signal: the market itself hasn't solved this yet. AI Security currently sits around 3.8 because prompt-injection defense, data-poisoning prevention, and AI governance are all new categories where no vendor has reached category-defining depth. No stack can close that gap today. A few cells in other categories cap at 4 because the best pure-play vendor for them isn't in the current dataset — API security is the clearest example.
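The ceiling is the same per-cell merge, run over every vendor in the dataset instead of your selection; a sketch under the same assumed representation as above:

```python
from statistics import mean

def market_ceiling(all_vendors: list[dict[str, int]], category_cells: list[str]) -> float:
    """Per-cell max across the whole dataset, averaged over the category's cells."""
    return mean(max(v.get(cell, 0) for v in all_vendors) for cell in category_cells)
```

A dip below 5.0 needs no special casing: if no vendor in the dataset scores 5 on some cell, that cell's max caps the category average on its own.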
Self-scoring disclosure
SignumCyber is in the dataset. We score ourselves. You should know that.
Each SignumCyber cell maps to a specific, countable feature of the platform:
- Risk Quantification. 688 assessment questions across 73 security domains and 11 role-based paths. 3,110 vulnerabilities with severity ratings. 601 recommendations, each scored across 9 dimensions. Full FAIR-based ALE/SLE/ARO with industry benchmarks.
- Compliance. 88% of questions mapped to ISO 27001 controls. 94% mapped to NIST CSF 2.0. 88% mapped to SOC 2 criteria. Full CIS v8.1 integration (18 controls, 153 safeguards).
- Program management. IRP, BCP, and cybersecurity plan generators with versioning, approval workflows, and multi-state assignment tracking.
- Where we score low, we say so. Continuous Control Monitoring scores 1 — we don't auto-integrate with cloud APIs yet. Phishing simulation, on-call escalation, and live forensics score 0 — we don't ship those.
If a SignumCyber cell looks inflated relative to that evidence, tell us and we'll defend or correct it.
What this tool does NOT do
No priority judgments
Stack Analysis doesn't flag categories as "essential" or "recommended" for your org. What matters most depends on your threat model, regulatory exposure, data sensitivity, and business continuity requirements, and determining that takes a risk assessment. A free tool can show you what you have. It can't honestly tell you what you need.
No overall “best vendor” ranking
A vendor can score 4.8 in one category (specialist) and 0 in another (doesn't play there). Scores live per category, not overall. A single-number vendor ranking would be lossy to the point of being misleading.
No credit for features we don't score
If a vendor ships something that doesn't map to any cell in our taxonomy, they get no credit for it in this model. That's a limitation of any taxonomy. We add cells when a capability becomes market-established.
Caveats worth knowing
- Judgment is inherent. Even with cells, assigning 4 vs. 5 on a specific capability is a judgment call. Two analysts could reasonably differ by one step on any given cell.
- Newer categories are less settled. AI Security, ITDR, SASE, and other emerging markets are still crystallizing. Scores there will move more than in mature categories like Email Security.
- Scores drift. Vendor capabilities shift as products ship and acquisitions close. Expect some lag between a major vendor change and its reflection here.
- Known gaps in vendor coverage. Major vendors we don't yet score include Netskope and Cato (SSE/SASE), Orca (CNAPP), BeyondTrust and Delinea (PAM), SailPoint and Saviynt (IGA), Rubrik and Cohesity (backup), and Darktrace and Vectra (NDR). Additions in flight.
Corrections welcome
If you see a score that looks wrong for a vendor you know well, tell us. We don't auto-apply suggestions — we review them against the sources above — but we do take them seriously.
Want this done for your stack, properly?
Stack Analysis is a self-serve diagnostic. A proper coverage review — one that factors in your actual vendor contracts, feature flags, implementation maturity, and the threats you actually face — is what SignumEssentials and SignumVantage produce.