STIG Workbench Docs
Everything you need to import benchmarks, assess findings, automate evidence, and export ATO packages. Use the sidebar to jump to any section, or read top-to-bottom for a guided tour.
Install #
Download the build for your operating system from the downloads page and install it like any other desktop app.
- macOS: Open the `.dmg` and drag STIG Workbench to your Applications folder.
- Windows: Run the `.exe` installer.
- Linux: Make the `.AppImage` executable (`chmod +x`) and run it.
After installing, STIG Workbench becomes the default application for .cklb files — double-click any checklist and it opens in the workbench.
STIG Workbench is fully offline. All parsing, importing, and exporting happens on your machine. The only outbound request is a one-time license validation when you enter your key.
Your first checklist #
The fastest way to get a working checklist is to import a DISA XCCDF benchmark.
1. Download a benchmark: Visit public.cyber.mil/stigs and grab the ZIP for the STIG you need. Inside, find the `*-xccdf.xml` file.
2. Import it: In STIG Workbench, choose File → Import XCCDF Benchmark… and select the XML.
3. Open the generated .cklb: A new `.cklb` is written next to the XCCDF file. All rules start at Not Reviewed.
4. Start triaging: Click any rule to view its check content, fix text, and discussion. Set status from the dropdown and add evidence in the Finding Details field.
Activate your license #
STIG Workbench runs in 14-day trial mode out of the box. To activate after purchase:
1. Check your inbox for the license key email (subject: Your STIG Workbench Pro License Key). Keys look like `SW-A1B2-C3D4-E5F6-G7H8`.
2. In STIG Workbench, open Settings → License (or STIG Workbench → License on macOS).
3. Paste your key and click Activate. The app contacts `api.stigworkbench.com` once to validate, then runs offline thereafter.
Email [email protected] with the email address you used at checkout and we’ll resend it.
The .cklb editor #
Open any .cklb file to edit findings inline. The editor displays all rules across every embedded STIG with full check content, fix text, and a rich detail panel.
Inline editing
Click a rule to open the detail panel. Status is a dropdown with four values that match the DISA CKLB specification:
- Not Reviewed — the default; the rule has not yet been triaged.
- Open — a finding; the system is non-compliant.
- Not a Finding — the system is compliant with the rule.
- Not Applicable — the rule does not apply to this system.
The Finding Details and Comments fields support free-text. Changes auto-save to the .cklb file on disk — no Cmd+S required, though Cmd+S still works if you want to be explicit.
Filtering & search #
The rule list above the detail panel supports three independent filters that combine with AND logic:
- Severity: CAT I (high), CAT II (medium), CAT III (low). Multi-select.
- Status: Not Reviewed, Open, Not a Finding, Not Applicable. Multi-select.
- Search: free-text matching across rule title, Vuln ID (V-12345), and rule version (SV-12345r1_rule).
Click any column header to sort. Click again to reverse direction.
Bulk actions #
Select multiple rules with Shift+click (range) or Cmd/Ctrl+click (individual). With selection active, the toolbar exposes:
- Set status — apply one status to every selected rule.
- Append comment — add the same comment to every selected rule (existing comments are preserved).
- Clear selection — deselect all.
Bulk operations write to disk as soon as you confirm. There is currently no in-app undo — if you make a mistake, the safest recovery is to revert the file in your version control system. Putting your .cklb files in git is highly recommended.
Target data #
Every checklist captures metadata about the system being assessed: hostname, IP address, FQDN, MAC, asset type, role, etc. Click Edit Target Data in the toolbar to open the modal.
Target fields populate the target_data object in the .cklb JSON and are carried into CKL exports. They are also the fallback for several import flows — for example, HDF imports auto-fill empty target fields from passthrough.target.
Import XCCDF benchmark #
Parses a DISA XCCDF *-xccdf.xml benchmark file (or an SCAP 1.2/1.3 data stream) and creates a new .cklb file next to it. All rules start at Not Reviewed.
Use this when you need a fresh checklist for a STIG you don’t already have a CKLB for. The XCCDF source on public.cyber.mil is always the most current.
What gets captured
- Every rule with its `group_id`, `rule_id`, `rule_version`, severity, title, discussion, check content, fix text, weight, and references.
- CCI identifiers from `<ident>` elements (used later for NIST 800-53 crosswalk).
- SRG IDs (used by the Upgrade Wizard for stable matching across STIG versions).
- STIG metadata: name, version, release info, date.
Import legacy CKL #
Converts a legacy DISA .ckl file (XML) to .cklb (JSON), preserving status, Finding Details, and Comments from every <VULN> entry. Target data and STIG metadata are preserved.
Use this to bring older work into the modern format. Once converted, the new .cklb can be exported back to .ckl at any time if your downstream tooling still requires the legacy format.
Import SCAP results #
Requires an open checklist. Applies pass/fail results from an XCCDF result file produced by a SCAP scanner (e.g. SCC, OpenSCAP) to the open checklist.
| SCAP result | .cklb status |
|---|---|
| `pass` | Not a Finding |
| `fail` | Open |
| `notapplicable` | Not Applicable |
| `error` / `unknown` / `notchecked` | Not Reviewed |
Rule matching uses rule_version as the primary key, falling back to rule_id. Imported rules are marked with a [SCAP IMPORT] prefix in Finding Details so you can distinguish automated evidence from manual review.
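If you pre-process scanner output yourself, the table above amounts to a simple lookup. A minimal sketch (the function and table names are illustrative, not the app's internal API):

```python
# Sketch of the SCAP-result-to-.cklb-status mapping from the table above.
SCAP_STATUS_MAP = {
    "pass": "Not a Finding",
    "fail": "Open",
    "notapplicable": "Not Applicable",
    # error / unknown / notchecked all stay untriaged
    "error": "Not Reviewed",
    "unknown": "Not Reviewed",
    "notchecked": "Not Reviewed",
}

def scap_to_cklb_status(result: str) -> str:
    # Unrecognized results are treated conservatively as Not Reviewed.
    return SCAP_STATUS_MAP.get(result.lower(), "Not Reviewed")
```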
Import InSpec / HDF results #
Reads a Heimdall Data Format (HDF) JSON file produced by an InSpec profile run (or by saf convert from another result format) and applies the automated test results. A 4-step guided wizard handles file selection, mode selection, preview, and execution.
Two modes
| Mode | When to use |
|---|---|
| Apply to existing | You already have an open .cklb checklist and want to fold in InSpec evidence — preserves your manually-set statuses by default. |
| Generate new | You only have an HDF file and want a fresh .cklb built from the profile’s controls. |
Status mapping
InSpec result statuses are translated to .cklb status conservatively — an errored test is never treated as a pass:
| HDF result | .cklb status |
|---|---|
| `passed` | Not a Finding |
| `failed` | Open |
| `skipped` (all) | Not Applicable |
| `error` (without any pass) | Not Reviewed |
| no results | Not Reviewed |
For mixed result sets, any failed result wins (Open); a mix of passed + error falls back to Not Reviewed because the evidence is incomplete.
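The precedence described above can be sketched as a small function (illustrative only; the app's internal implementation may differ):

```python
def hdf_to_cklb_status(results: list[str]) -> str:
    """Conservative HDF -> .cklb status mapping, mirroring the table
    above: failures win, and incomplete evidence never becomes a pass."""
    if not results:
        return "Not Reviewed"            # no results at all
    statuses = {r.lower() for r in results}
    if "failed" in statuses:
        return "Open"                    # any failed result wins
    if statuses == {"skipped"}:
        return "Not Applicable"          # skipped (all)
    if "error" in statuses:
        return "Not Reviewed"            # passed + error: evidence incomplete
    if statuses == {"passed"}:
        return "Not a Finding"
    return "Not Reviewed"                # anything else: stay conservative
```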
Rule matching
Matching uses stable identifiers that survive DISA STIG renumbering:
- HDF `tags.stig_id` → checklist `rule_version` (primary key)
- HDF `control.id` → checklist `group_id`
- HDF `tags.gid` → checklist `group_id`
Profiles with limited STIG metadata (most controls missing tags.stig_id) trigger a soft warning so you know fallback identifiers were used.
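For scripting around an import, the fallback chain above can be sketched as follows (the field layout is simplified; real HDF controls nest their metadata differently):

```python
def match_rule(hdf_control: dict, rules: list[dict]):
    """Sketch of the identifier fallback chain: try each stable
    identifier in priority order until one matches a checklist rule."""
    tags = hdf_control.get("tags", {})
    candidates = [
        ("rule_version", tags.get("stig_id")),   # 1. tags.stig_id
        ("group_id", hdf_control.get("id")),     # 2. control id
        ("group_id", tags.get("gid")),           # 3. tags.gid
    ]
    for field, value in candidates:
        if value:
            for rule in rules:
                if rule.get(field) == value:
                    return rule
    return None  # unmatched; surfaced in the import summary
```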
Options
| Option | Default | Description |
|---|---|---|
| Overwrite existing status | off | When off, only Not Reviewed rules accept the HDF status — protects analyst-set values. |
| Preserve existing finding details | on | Appends HDF evidence under a --- HDF Import <date> --- header instead of replacing. |
| Update target data | on | Fills empty host_name, fqdn, ip_address, mac_address from passthrough.target. |
Multi-host runs and additional passthrough metadata are preserved verbatim in the target’s Comments field so nothing is silently dropped.
Output
Imported finding details are tagged with a [HDF IMPORT] prefix so you can always tell automated evidence from manually authored notes. After execution you can save a markdown HDF Import Report that lists every rule updated, every rule preserved, unmatched HDF controls, unmatched checklist rules, and the methodology used.
Import SARIF #
Requires an open checklist. Reads one or more SARIF 2.1.0 files (CodeQL, Semgrep, Bandit, and any other compliant tool) and maps findings to STIG rules via CWE lookup. Matched rules are set to Open with the tool name and rule details populated in Finding Details.
How CWE matching works
- Each SARIF result carries one or more CWE IDs in its `taxa` references.
- Each STIG rule has zero or more CCI identifiers, which map to NIST 800-53 controls, which in turn relate to CWEs.
- STIG Workbench uses a curated CWE→CCI→STIG mapping table to find candidate rules for each finding.
A CWE finding doesn’t prove a STIG violation — it indicates a class of weakness that may be relevant to one or more rules. The import marks rules as Open and includes the SARIF evidence; an analyst should confirm before signing off the checklist.
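The two-hop lookup can be illustrated with toy mapping tables (the CWE, CCI, and rule IDs below are made-up stand-ins, not entries from the app's curated data):

```python
# Stand-in fragments of a CWE -> CCI -> STIG-rule crosswalk.
CWE_TO_CCI = {"CWE-79": ["CCI-001310"],
              "CWE-89": ["CCI-001310", "CCI-002754"]}
CCI_TO_RULES = {"CCI-001310": ["SV-222602r1_rule"],
                "CCI-002754": ["SV-222604r1_rule"]}

def candidate_rules(cwe_ids: list[str]) -> list[str]:
    """Collect every STIG rule reachable from a finding's CWE IDs."""
    rules: set[str] = set()
    for cwe in cwe_ids:
        for cci in CWE_TO_CCI.get(cwe, []):
            rules.update(CCI_TO_RULES.get(cci, []))
    return sorted(rules)
```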
Import dependency audit #
Requires an open checklist. Reads a vulnerability JSON report and maps vulnerabilities to STIG rules with CAT I/II/III severity mapping. Three formats are auto-detected:
- npm audit — output of `npm audit --json`.
- pip-audit — output of `pip-audit -f json`.
- Generic CVE list — an array of objects with `cve`, `severity`, and `description` fields.
Vulnerabilities map to STIG rules through CWE references when the audit tool provides them, falling back to severity-only mapping for known-vulnerable-component rules.
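A severity-only fallback might look like the sketch below; note the exact CAT assignments here are an assumption for illustration, not the app's published table:

```python
# Assumed severity-to-CAT mapping; the app's actual table may differ.
SEVERITY_TO_CAT = {
    "critical": "CAT I",
    "high": "CAT I",
    "moderate": "CAT II",   # npm audit's spelling
    "medium": "CAT II",     # pip-audit / generic CVE spelling
    "low": "CAT III",
}

def audit_severity_to_cat(severity: str) -> str:
    # Unknown severities default to the lowest category.
    return SEVERITY_TO_CAT.get(severity.lower(), "CAT III")
```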
Repo scanner #
Pattern-matches source code evidence against STIG check content. Useful for catching obvious compliance issues before a formal scan and for generating evidence trails for code-related rules.
Point the scanner at a repo directory; it walks the tree (respecting .gitignore), runs the configured patterns, and presents matches grouped by STIG rule. Apply matches to your open checklist to set status and populate Finding Details with file paths and line numbers.
Pattern matches need analyst review. False positives are common; the scanner errs toward surfacing too much rather than too little.
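Conceptually, the scanner is a recursive walk plus per-rule regexes. A minimal sketch (unlike the app, this version does not honor .gitignore):

```python
import re
from pathlib import Path

def scan_repo(root: str, patterns: dict[str, str]) -> dict:
    """Walk a directory tree and group regex matches by STIG rule ID.
    Returns {rule_id: [(file_path, line_number), ...]}."""
    hits: dict = {rule: [] for rule in patterns}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip
        for rule, pattern in patterns.items():
            for lineno, line in enumerate(text.splitlines(), start=1):
                if re.search(pattern, line):
                    hits[rule].append((str(path), lineno))
    return hits
```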
Merge / carry forward #
Carries status, Finding Details, Comments, and overrides from an older checklist into a newer one, matching rules by rule_version. Useful when DISA releases a minor STIG update without changing rule content — for example, a quarterly release that adds a few rules but doesn’t modify existing ones.
For major version upgrades where check content has been rewritten, use the Upgrade Wizard instead — it does change detection so you don’t silently carry stale findings forward.
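The carry-forward itself is a straightforward join on rule_version. A sketch, under the assumption that rules are plain dicts using the field names from the .cklb format:

```python
def carry_forward(old_rules: list[dict], new_rules: list[dict]) -> int:
    """Copy analyst-authored fields from an old checklist onto matching
    rules (by rule_version) in a new one. Returns the number carried."""
    by_version = {r["rule_version"]: r for r in old_rules}
    carried = 0
    for rule in new_rules:
        old = by_version.get(rule["rule_version"])
        if old is None:
            continue  # rule added in the new release: stays Not Reviewed
        for field in ("status", "finding_details", "comments"):
            if field in old:
                rule[field] = old[field]
        carried += 1
    return carried
```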
Merge vs. Upgrade Wizard
| Use case | Tool |
|---|---|
| Same major STIG version, slight rule additions | Merge |
| New major STIG version, content may have changed | Upgrade Wizard |
| Combining work from two analysts on the same STIG | Merge (carefully — review conflicts) |
STIG Upgrade Wizard #
A 4-step guided workflow for upgrading a completed checklist to a new major STIG version while preserving your prior work. Also accessible from the Home screen and the Upgrade Wizard nav tab.
Step 1 — Select source and target
- Source: The completed `.cklb` checklist whose findings you want to keep.
- Target: The new STIG — either a DISA XCCDF `*-xccdf.xml` benchmark or a blank `.cklb` created by importing the new XCCDF.
Multi-STIG checklists show a dropdown to select which STIG to upgrade.
Step 2 — Analysis preview
Runs the full upgrade analysis and displays categorized results before any file is touched:
| Category | Description |
|---|---|
| Carried (clean) | Rule matched and content unchanged — findings copied automatically. |
| Needs Re-review | Rule matched but DISA updated the check/fix text — analyst must verify. |
| New rules | Rules present in the new STIG with no match in the old checklist. |
| Removed rules | Rules in the old checklist not present in the new STIG. |
| Severity changes | Rules where CAT level changed between versions. |
| Quality warnings | Source rules with missing evidence (e.g. Not a Finding with empty Finding Details). |
Matching priority
Matching never uses group_id or rule_id, which change between releases. The wizard tries three identifiers in order:
1. `rule_version` — the stable DISA version string (primary key).
2. `srg_id` — same SRG requirement, possibly rewritten.
3. CCI overlap — same NIST control, different implementation.
Change detection
Normalizes whitespace before comparing check_content, fix_text, rule_title, and discussion so formatting-only changes are not flagged. Only meaningful content changes trigger the Needs Re-review category.
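The whitespace-insensitive comparison might look like this sketch (field names are from the .cklb format; the app's exact normalization may differ):

```python
import re

def normalized(text: str) -> str:
    # Collapse every run of whitespace so formatting-only edits compare equal.
    return re.sub(r"\s+", " ", text).strip()

def content_changed(old_rule: dict, new_rule: dict) -> bool:
    """True only when a compared field differs after normalization."""
    fields = ("check_content", "fix_text", "rule_title", "discussion")
    return any(
        normalized(old_rule.get(f, "")) != normalized(new_rule.get(f, ""))
        for f in fields
    )
```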
Step 3 — Options
| Option | Default | Description |
|---|---|---|
| Preserve target data | on | Copy host name, IP, FQDN, and other asset fields to the new checklist. |
| Reset changed rules to Not Reviewed | on | Rules with updated check content are set back to Not Reviewed. |
| Add upgrade note to comments | on | Prepends [UPGRADE NOTE: Check content updated in v…] to the Comments field of changed rules. |
| Generate markdown upgrade report | on | Creates a detailed .md report listing all changes with content diffs. |
Step 4 — Execute
Review the summary and click Execute Upgrade. Two files are written next to the source checklist:
- `<name>_v<version>.cklb` — the upgraded checklist, ready to open.
- `<name>_upgrade_v<old>_to_v<new>_<date>.md` — the markdown report (if enabled).
Your original .cklb is left untouched. If you want to keep the upgraded checklist, work from the new file going forward; if you don’t, just delete it.
Dashboard #
Open the dashboard from View → Dashboard, or use File → Open Dashboard Folder… to point it at any directory. The dashboard scans recursively for all .cklb files and displays aggregate compliance metrics:
- Rule counts by status (Not Reviewed, Open, Not a Finding, Not Applicable) and by severity (CAT I/II/III).
- Completion rates per checklist.
- A sortable table letting you drill into individual checklists.
Use this for portfolio-level visibility — for example, the security posture of every system in a program or a snapshot of where each ATO package stands.
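Because .cklb is plain JSON, you can reproduce the dashboard's core tally in a few lines. A sketch that reads each rule's status field as stored on disk:

```python
import json
from collections import Counter
from pathlib import Path

def status_counts(folder: str) -> Counter:
    """Recursively tally the status of every rule in every .cklb
    file under a folder, mimicking the dashboard's aggregate view."""
    counts: Counter = Counter()
    for path in Path(folder).rglob("*.cklb"):
        data = json.loads(path.read_text())
        for stig in data.get("stigs", []):
            for rule in stig.get("rules", []):
                counts[rule.get("status", "Not Reviewed")] += 1
    return counts
```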
Checklist diff #
Compare any two .cklb files side by side. The diff view groups changes by type, sorted from most-significant to least:
- Regressions — rules that moved from compliant to non-compliant.
- Improvements — rules that moved from non-compliant to compliant.
- New rules — rules in the second checklist not present in the first.
- Removed rules — rules in the first checklist not present in the second.
Common uses: comparing two analysts’ work on the same STIG, tracking remediation progress over time, or auditing a vendor-supplied checklist against your baseline.
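The same grouping is easy to script against raw .cklb rule lists. A sketch, assuming "compliant" means Not a Finding or Not Applicable:

```python
def diff_checklists(first_rules: list[dict], second_rules: list[dict]) -> dict:
    """Group rule_version changes between two checklists into the four
    categories above. 'Compliant' here is an assumption, not app internals."""
    compliant = {"Not a Finding", "Not Applicable"}
    a = {r["rule_version"]: r["status"] for r in first_rules}
    b = {r["rule_version"]: r["status"] for r in second_rules}
    shared = a.keys() & b.keys()
    return {
        "regressions": sorted(v for v in shared
                              if a[v] in compliant and b[v] == "Open"),
        "improvements": sorted(v for v in shared
                               if a[v] == "Open" and b[v] in compliant),
        "new": sorted(b.keys() - a.keys()),
        "removed": sorted(a.keys() - b.keys()),
    }
```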
Evidence package #
Requires an open checklist. Bundles the checklist and optional supporting files into a ZIP archive. Includes a human-readable SUMMARY.md of Open findings that you can paste into ATO submission packages.
What goes into the ZIP
- The current `.cklb` file.
- An exported `.ckl` for tools that still require the legacy format.
- A `SUMMARY.md` listing every Open finding with severity, rule ID, title, and finding details.
- Optional: any supporting files you attach (screenshots, scan outputs, configuration exports).
CSV / CKL / POA&M #
The editor toolbar exposes three single-file exports for common downstream uses:
CSV export
A spreadsheet of all rules with status, severity, finding details, and comments. Best for briefings, internal tracking, and sharing with stakeholders who don’t use STIG-aware tooling.
CKL export
The legacy DISA XML format. Use this when your downstream tool (eMASS, older STIG Viewer installs, third-party GRC tools) requires .ckl instead of .cklb. Round-trips cleanly — you can import the exported CKL back into STIG Workbench without losing data.
POA&M export
A Plan of Action & Milestones spreadsheet listing every Open rule. Pre-populated with rule ID, title, severity, current status, and finding details — ready for you to fill in remediation owners, scheduled completion dates, and milestones.
Keyboard shortcuts #
macOS shortcuts use Cmd; Windows and Linux use Ctrl.
| Action | Shortcut | Menu |
|---|---|---|
| Open checklist | Cmd+O | File → Open Checklist… |
| Save | Cmd+S | File → Save |
| Save As | Cmd+Shift+S | File → Save As… |
| Import XCCDF | — | File → Import XCCDF Benchmark… |
| Import CKL | — | File → Import CKL Checklist… |
| Dashboard | Cmd+Shift+D | View → Dashboard |
| Diff Checklists | — | View → Diff Checklists… |
| Scan Repository | — | Tools → Scan Repository… |
| Import SCAP | — | Tools → Import SCAP Results… |
| Import SARIF | — | Tools → Import SARIF… |
| Import Dependency Audit | — | Tools → Import Dependency Audit… |
| Import InSpec / HDF | Cmd+Shift+H | Tools → Import InSpec / HDF Results… |
| Merge Findings | — | Tools → Merge Findings… |
| Upgrade STIG Version | — | Tools → Upgrade STIG Version… |
| Export Evidence Package | — | Tools → Export Evidence Package… |
.cklb file format #
STIG Workbench uses .cklb — a JSON format that is a superset of the DISA CKLB specification. Each file contains:
```json
{
  "title": "...",
  "id": "<uuid>",
  "stigs": [
    {
      "stig_name": "...",
      "version": "...",
      "rules": [ ... ]
    }
  ],
  "target_data": {
    "host_name": "...",
    "ip_address": "...",
    "fqdn": "...",
    "mac_address": "...",
    "role": "...",
    "...": "..."
  },
  "cklb_version": "1"
}
```
Each rule stores status, finding_details, comments, overrides, and the full STIG content fields (check_content, fix_text, discussion, severity, references, CCIs, SRG IDs, etc.).
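Because the format is plain JSON, scripting against it needs no special tooling. A sketch that lists Open findings, assuming status values are stored using the labels shown in the editor:

```python
import json

def open_findings(cklb_path: str) -> list:
    """Return (stig_name, rule_version) for every Open rule in a .cklb,
    following the JSON structure shown above."""
    with open(cklb_path) as f:
        data = json.load(f)
    findings = []
    for stig in data.get("stigs", []):
        for rule in stig.get("rules", []):
            if rule.get("status") == "Open":
                findings.append((stig.get("stig_name"), rule.get("rule_version")))
    return findings
```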
The legacy .ckl format is XML and was designed for the Java-era STIG Viewer. JSON is easier to diff in version control, easier to read in editors, easier to script against, and produces smaller files. STIG Workbench reads and writes both, so you’re never locked in.
Requirements #
- macOS 11 (Big Sur) or later, Intel or Apple Silicon.
- Windows 10 or later, x64.
- Linux: Ubuntu 20.04+ or any modern distro that runs `.AppImage`, x64.
- No network access required — all parsing and processing happens locally on your machine. The only outbound connection is a one-time license validation when activating a key.
- No Java, no Node.js, no VS Code required at runtime.
Need help? Email [email protected] or open an issue on GitHub.