Documentation

STIG Workbench Docs

Everything you need to import benchmarks, assess findings, automate evidence, and export ATO packages. Use the sidebar to jump to any section, or read top-to-bottom for a guided tour.

Install #

Download the build for your operating system from the downloads page and install it like any other desktop app.

  • macOS: Open the .dmg and drag STIG Workbench to your Applications folder.
  • Windows: Run the .exe installer.
  • Linux: Make the .AppImage executable (chmod +x) and run it.

After installing, STIG Workbench registers itself as the default handler for .cklb files, so double-clicking any checklist opens it in the workbench.

No network required

STIG Workbench is fully offline. All parsing, importing, and exporting happens on your machine. The only outbound request is a one-time license validation when you enter your key.

Your first checklist #

The fastest way to get a working checklist is to import a DISA XCCDF benchmark.

  1. Download a benchmark. Visit public.cyber.mil/stigs and grab the ZIP for the STIG you need. Inside, find the *-xccdf.xml file.
  2. Import it. In STIG Workbench, choose File → Import XCCDF Benchmark… and select the XML.
  3. Open the generated .cklb. A new .cklb is written next to the XCCDF file. All rules start at Not Reviewed.
  4. Start triaging. Click any rule to view its check content, fix text, and discussion. Set status from the dropdown and add evidence in the Finding Details field.
screenshot — main editor view
The .cklb editor showing rule list, severity bars, and detail panel.

Activate your license #

STIG Workbench runs in 14-day trial mode out of the box. To activate after purchase:

  1. Check your inbox for the license key email (subject: Your STIG Workbench Pro License Key). Keys look like SW-A1B2-C3D4-E5F6-G7H8.
  2. In STIG Workbench, open Settings → License (or STIG Workbench → License on macOS).
  3. Paste your key and click Activate. The app contacts api.stigworkbench.com once to validate, then runs offline thereafter.
Lost your key?

Email [email protected] with the email address you used at checkout and we’ll resend it.

The .cklb editor #

Open any .cklb file to edit findings inline. The editor displays all rules across every embedded STIG with full check content, fix text, and a rich detail panel.

Inline editing

Click a rule to open the detail panel. Status is a dropdown with four values that match the DISA CKLB specification:

  • Not Reviewed — the default; the rule has not yet been triaged.
  • Open — a finding; the system is non-compliant.
  • Not a Finding — the system is compliant with the rule.
  • Not Applicable — the rule does not apply to this system.

The Finding Details and Comments fields accept free text. Changes auto-save to the .cklb file on disk — no Cmd+S required, though Cmd+S still works if you want to be explicit.

screenshot — rule detail panel with status dropdown
Slide-over detail panel with inline status, finding details, comments, and full STIG content.

Filtering & search #

The rule list above the detail panel supports three independent filters that combine with AND logic:

  • Severity: CAT I (high), CAT II (medium), CAT III (low). Multi-select.
  • Status: Not Reviewed, Open, Not a Finding, Not Applicable. Multi-select.
  • Search: free-text matching across rule title, Vuln ID (V-12345), and rule version (SV-12345r1_rule).

Click any column header to sort. Click again to reverse direction.
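The AND combination of the three filters can be sketched as follows. Field names (severity, status, group_id, rule_version) are assumptions based on the .cklb structure described elsewhere in these docs; this is not the app's actual code.

```python
def matches(rule, severities=None, statuses=None, query=""):
    """Return True if a rule passes all active filters (AND logic)."""
    # Severity filter (multi-select); assumed values: high / medium / low
    if severities and rule["severity"] not in severities:
        return False
    # Status filter (multi-select)
    if statuses and rule["status"] not in statuses:
        return False
    # Free-text search across title, Vuln ID, and rule version
    if query:
        haystack = " ".join(
            [rule["title"], rule["group_id"], rule["rule_version"]]
        ).lower()
        if query.lower() not in haystack:
            return False
    return True
```

An inactive filter (empty set or empty string) simply passes everything through, which is why the three checks compose cleanly.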

screenshot — filtered rule list
Rule list with CAT I + Open filter applied and a free-text search.

Bulk actions #

Select multiple rules with Shift+click (range) or Cmd/Ctrl+click (individual). With selection active, the toolbar exposes:

  • Set status — apply one status to every selected rule.
  • Append comment — add the same comment to every selected rule (existing comments are preserved).
  • Clear selection — deselect all.
Bulk status changes are immediate

Bulk operations write to disk as soon as you confirm. There is currently no in-app undo — if you make a mistake, the safest recovery is to revert the file in your version control system. Putting your .cklb files in git is highly recommended.
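The "append comment" semantics above (new text added after any existing comment rather than replacing it) can be illustrated with a small sketch. The field name comments follows the .cklb rule structure; this is a hypothetical helper, not the app's implementation.

```python
def append_comment(rules, text):
    """Append the same comment to every selected rule, preserving existing comments."""
    for rule in rules:
        existing = rule.get("comments", "")
        # Keep prior comments intact; separate the appended text with a newline
        rule["comments"] = f"{existing}\n{text}" if existing else text
```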

Target data #

Every checklist captures metadata about the system being assessed: hostname, IP address, FQDN, MAC, asset type, role, etc. Click Edit Target Data in the toolbar to open the modal.

Target fields populate the target_data object in the .cklb JSON and are carried into CKL exports. They are also the fallback for several import flows — for example, HDF imports auto-fill empty target fields from passthrough.target.

screenshot — target data modal
Target data modal with hostname, IP, FQDN, MAC, role, and asset type fields.

Import XCCDF benchmark #

Parses a DISA XCCDF *-xccdf.xml benchmark file (or an SCAP 1.2/1.3 data stream) and creates a new .cklb file next to it. All rules start at Not Reviewed.

Use this when you need a fresh checklist for a STIG you don’t already have a CKLB for. The XCCDF source on public.cyber.mil is always the most current.

What gets captured

  • Every rule with its group_id, rule_id, rule_version, severity, title, discussion, check content, fix text, weight, and references.
  • CCI identifiers from <ident> elements (used later for NIST 800-53 crosswalk).
  • SRG IDs (used by the Upgrade Wizard for stable matching across STIG versions).
  • STIG metadata: name, version, release info, date.
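The per-rule extraction can be sketched with the standard library alone. The XCCDF 1.1 namespace and element layout here reflect DISA's published benchmarks and are an assumption; this is not the app's actual parser.

```python
import xml.etree.ElementTree as ET

# XCCDF 1.1 namespace used by DISA benchmarks (assumption)
NS = {"x": "http://checklists.nist.gov/xccdf/1.1"}

def extract_rules(xccdf_xml):
    """Pull group/rule IDs, severity, title, and CCI idents from an XCCDF string."""
    root = ET.fromstring(xccdf_xml)
    rules = []
    for group in root.findall(".//x:Group", NS):
        for rule in group.findall("x:Rule", NS):
            rules.append({
                "group_id": group.get("id"),
                "rule_id": rule.get("id"),
                "severity": rule.get("severity"),
                "title": rule.findtext("x:title", default="", namespaces=NS),
                # CCI identifiers live in <ident> elements
                "ccis": [i.text for i in rule.findall("x:ident", NS)
                         if (i.text or "").startswith("CCI-")],
            })
    return rules
```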

Import legacy CKL #

Converts a legacy DISA .ckl file (XML) to .cklb (JSON), preserving status, Finding Details, and Comments from every <VULN> entry. Target data and STIG metadata are preserved.

Use this to bring older work into the modern format. Once converted, the new .cklb can be exported back to .ckl at any time if your downstream tooling still requires the legacy format.

Import SCAP results #

Requires an open checklist. Applies pass/fail results from an XCCDF result file produced by a SCAP scanner (e.g. SCC, OpenSCAP) to the open checklist.

SCAP results map to .cklb statuses as follows:

  • pass → Not a Finding
  • fail → Open
  • notapplicable → Not Applicable
  • error / unknown / notchecked → Not Reviewed

Rule matching uses rule_version as the primary key, falling back to rule_id. Imported rules are marked with a [SCAP IMPORT] prefix in Finding Details so you can distinguish automated evidence from manual review.
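The status table and the rule_version-then-rule_id matching order can be sketched as below. This is a hypothetical helper illustrating the documented behavior, not the app's code.

```python
# The documented SCAP → .cklb status mapping
SCAP_TO_CKLB = {
    "pass": "Not a Finding",
    "fail": "Open",
    "notapplicable": "Not Applicable",
    # error / unknown / notchecked all fall back to Not Reviewed
    "error": "Not Reviewed",
    "unknown": "Not Reviewed",
    "notchecked": "Not Reviewed",
}

def find_rule(rules, scap_result):
    """Match by rule_version first, then fall back to rule_id."""
    for key in ("rule_version", "rule_id"):
        for rule in rules:
            if rule.get(key) and rule[key] == scap_result.get(key):
                return rule
    return None
```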

Import InSpec / HDF results #

Reads a Heimdall Data Format (HDF) JSON file produced by an InSpec profile run (or by saf convert from another result format) and applies the automated test results. A 4-step guided wizard handles file selection, mode selection, preview, and execution.

Two modes

  • Apply to existing — you already have an open .cklb checklist and want to fold in InSpec evidence; preserves your manually-set statuses by default.
  • Generate new — you only have an HDF file and want a fresh .cklb built from the profile’s controls.

Status mapping

InSpec result statuses are translated to .cklb status conservatively — an errored test is never treated as a pass:

  • passed → Not a Finding
  • failed → Open
  • skipped (all) → Not Applicable
  • error (without any pass) → Not Reviewed
  • no results → Not Reviewed

For mixed result sets, any failed result wins (Open); a mix of passed + error falls back to Not Reviewed because the evidence is incomplete.
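The conservative resolution rules above can be expressed as a short function: any failure wins, and an errored test never counts as a pass. A hypothetical sketch, not the app's implementation:

```python
def resolve_status(results):
    """Map a control's HDF result statuses to a single .cklb status."""
    statuses = {r["status"] for r in results}
    if not statuses:
        return "Not Reviewed"      # no results at all
    if "failed" in statuses:
        return "Open"              # any failed result wins
    if statuses == {"passed"}:
        return "Not a Finding"     # all passed
    if statuses == {"skipped"}:
        return "Not Applicable"    # all skipped
    return "Not Reviewed"          # error, or passed + error mix (incomplete evidence)
```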

Rule matching

Matching uses stable identifiers that survive DISA STIG renumbering:

  1. HDF tags.stig_id → checklist rule_version (primary key)
  2. HDF control.id → checklist group_id
  3. HDF tags.gid → checklist group_id

Profiles with limited STIG metadata (most controls missing tags.stig_id) trigger a soft warning so you know fallback identifiers were used.

Options

  • Overwrite existing status (default: off) — when off, only Not Reviewed rules accept the HDF status, protecting analyst-set values.
  • Preserve existing finding details (default: on) — appends HDF evidence under a --- HDF Import <date> --- header instead of replacing.
  • Update target data (default: on) — fills empty host_name, fqdn, ip_address, mac_address from passthrough.target.

Multi-host runs and additional passthrough metadata are preserved verbatim in the target’s Comments field so nothing is silently dropped.

Output

Imported finding details are tagged with a [HDF IMPORT] prefix so you can always tell automated evidence from manually authored notes. After execution you can save a markdown HDF Import Report that lists every rule updated, every rule preserved, unmatched HDF controls, unmatched checklist rules, and the methodology used.

screenshot — HDF import wizard preview step
HDF import wizard, step 3 — preview showing matched, unmatched, and preserved counts before execution.

Import SARIF #

Requires an open checklist. Reads one or more SARIF 2.1.0 files (CodeQL, Semgrep, Bandit, and any other compliant tool) and maps findings to STIG rules via CWE lookup. Matched rules are set to Open with the tool name and rule details populated in Finding Details.

How CWE matching works

  1. Each SARIF result carries one or more CWE IDs in its taxa references.
  2. Each STIG rule has zero or more CCI identifiers, which map to NIST 800-53 controls, which in turn relate to CWEs.
  3. STIG Workbench uses a curated CWE→CCI→STIG mapping table to find candidate rules for each finding.
Why CWE-based mapping is conservative

A CWE finding doesn’t prove a STIG violation — it indicates a class of weakness that may be relevant to one or more rules. The import marks rules as Open and includes the SARIF evidence; an analyst should confirm before signing off the checklist.
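The chained CWE → CCI → STIG-rule lookup can be sketched as two dictionary hops. The mapping entries below are illustrative stand-ins for the app's curated table, and the rule ID shown is hypothetical.

```python
# Illustrative fragments of the curated mapping tables (not real data)
CWE_TO_CCI = {"CWE-79": ["CCI-001310"], "CWE-89": ["CCI-001310"]}
CCI_TO_RULES = {"CCI-001310": ["SV-222602r1_rule"]}  # hypothetical rule ID

def candidate_rules(cwe_ids):
    """Resolve SARIF CWE references to candidate STIG rules via CCIs."""
    rules = set()
    for cwe in cwe_ids:
        for cci in CWE_TO_CCI.get(cwe, []):
            rules.update(CCI_TO_RULES.get(cci, []))
    return sorted(rules)
```

Because several CWEs can funnel into the same CCI, results are de-duplicated with a set before being returned.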

Import dependency audit #

Requires an open checklist. Reads a vulnerability JSON report and maps vulnerabilities to STIG rules with CAT I/II/III severity mapping. Three formats are auto-detected:

  • npm audit — output of npm audit --json.
  • pip-audit — output of pip-audit -f json.
  • Generic CVE list — an array of objects with cve, severity, and description fields.

Vulnerabilities map to STIG rules through CWE references when the audit tool provides them, falling back to severity-only mapping for known-vulnerable-component rules.
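Shape-based auto-detection of the three formats might look like the sketch below. The top-level keys assumed here (a "vulnerabilities" object for npm audit, a "dependencies" list for pip-audit) vary by tool version, so treat this as an assumption rather than a description of the app's detector.

```python
def detect_format(report):
    """Guess the audit report format from its top-level structure."""
    if isinstance(report, dict) and "vulnerabilities" in report:
        return "npm-audit"       # assumed npm audit --json shape
    if isinstance(report, dict) and "dependencies" in report:
        return "pip-audit"       # assumed pip-audit -f json shape
    if isinstance(report, list) and report and all("cve" in item for item in report):
        return "generic-cve"     # array of {cve, severity, description}
    return "unknown"
```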

Repo scanner #

Pattern-matches source code evidence against STIG check content. Useful for catching obvious compliance issues before a formal scan and for generating evidence trails for code-related rules.

Point the scanner at a repo directory; it walks the tree (respecting .gitignore), runs the configured patterns, and presents matches grouped by STIG rule. Apply matches to your open checklist to set status and populate Finding Details with file paths and line numbers.

Pattern matching is a starting point, not a verdict

Pattern matches need analyst review. False positives are common; the scanner errs toward surfacing too much rather than too little.

Merge / carry forward #

Carries status, Finding Details, Comments, and overrides from an older checklist into a newer one, matching rules by rule_version. Useful when DISA releases a minor STIG update without changing rule content — for example, a quarterly release that adds a few rules but doesn’t modify existing ones.

For major version upgrades where check content has been rewritten, use the Upgrade Wizard instead — it does change detection so you don’t silently carry stale findings forward.

Merge vs. Upgrade Wizard

  • Same major STIG version, slight rule additions → Merge
  • New major STIG version, content may have changed → Upgrade Wizard
  • Combining work from two analysts on the same STIG → Merge (carefully — review conflicts)

STIG Upgrade Wizard #

A 4-step guided workflow for upgrading a completed checklist to a new major STIG version while preserving your prior work. Also accessible from the Home screen and the Upgrade Wizard nav tab.

Step 1 — Select source and target

  • Source: The completed .cklb checklist whose findings you want to keep.
  • Target: The new STIG — either a DISA XCCDF *-xccdf.xml benchmark or a blank .cklb created by importing the new XCCDF.

Multi-STIG checklists show a dropdown to select which STIG to upgrade.

Step 2 — Analysis preview

Runs the full upgrade analysis and displays categorized results before any file is touched:

  • Carried (clean) — rule matched and content unchanged; findings copied automatically.
  • Needs Re-review — rule matched but DISA updated the check/fix text; analyst must verify.
  • New rules — rules present in the new STIG with no match in the old checklist.
  • Removed rules — rules in the old checklist not present in the new STIG.
  • Severity changes — rules where the CAT level changed between versions.
  • Quality warnings — source rules with missing evidence (e.g. Not a Finding with empty Finding Details).

Matching priority

Matching never uses group_id or rule_id, which change between releases. The wizard tries three identifiers in order:

  1. rule_version — the stable DISA version string (primary key).
  2. srg_id — same SRG requirement, possibly rewritten.
  3. CCI overlap — same NIST control, different implementation.
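The three-tier fallback can be sketched as follows. Field names are assumptions based on the .cklb rule structure; this is not the wizard's actual code.

```python
def match_rule(old_rule, new_rules):
    """Try the three identifiers in priority order; return (match, method)."""
    # 1. Stable DISA version string (primary key)
    for r in new_rules:
        if r["rule_version"] == old_rule["rule_version"]:
            return r, "rule_version"
    # 2. Same SRG requirement, possibly rewritten
    for r in new_rules:
        if r.get("srg_id") and r["srg_id"] == old_rule.get("srg_id"):
            return r, "srg_id"
    # 3. Overlapping CCIs (same NIST control, different implementation)
    old_ccis = set(old_rule.get("ccis", []))
    for r in new_rules:
        if old_ccis & set(r.get("ccis", [])):
            return r, "cci_overlap"
    return None, None
```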

Change detection

Normalizes whitespace before comparing check_content, fix_text, rule_title, and discussion so formatting-only changes are not flagged. Only meaningful content changes trigger the Needs Re-review category.
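Whitespace-insensitive comparison amounts to collapsing runs of whitespace before comparing each field. A hypothetical sketch of that check:

```python
FIELDS = ("check_content", "fix_text", "rule_title", "discussion")

def normalize(text):
    """Collapse all whitespace runs so formatting-only edits compare equal."""
    return " ".join((text or "").split())

def content_changed(old_rule, new_rule):
    """True only if a meaningful content change exists in any compared field."""
    return any(normalize(old_rule.get(f)) != normalize(new_rule.get(f))
               for f in FIELDS)
```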

Step 3 — Options

  • Preserve target data (default: on) — copy host name, IP, FQDN, and other asset fields to the new checklist.
  • Reset changed rules to Not Reviewed (default: on) — rules with updated check content are set back to Not Reviewed.
  • Add upgrade note to comments (default: on) — prepends [UPGRADE NOTE: Check content updated in v…] to the Comments field of changed rules.
  • Generate markdown upgrade report (default: on) — creates a detailed .md report listing all changes with content diffs.

Step 4 — Execute

Review the summary and click Execute Upgrade. Two files are written next to the source checklist:

  1. <name>_v<version>.cklb — the upgraded checklist, ready to open.
  2. <name>_upgrade_v<old>_to_v<new>_<date>.md — the markdown report (if enabled).
The source checklist is never modified

Your original .cklb is left untouched. If you want to keep the upgraded checklist, work from the new file going forward; if you don’t, just delete it.

screenshot — upgrade wizard analysis step
Upgrade Wizard step 2 — categorized analysis showing carried, needs-review, new, and removed rule counts.

Dashboard #

Open the dashboard from View → Dashboard (Cmd/Ctrl+Shift+D), or use File → Open Dashboard Folder… to point it at any directory. The dashboard scans recursively for all .cklb files and displays aggregate compliance metrics:

  • Rule counts by status (Not Reviewed, Open, Not a Finding, Not Applicable) and by severity (CAT I/II/III).
  • Completion rates per checklist.
  • A sortable table letting you drill into individual checklists.

Use this for portfolio-level visibility — for example, the security posture of every system in a program or a snapshot of where each ATO package stands.
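Because .cklb files are JSON, the recursive scan and per-status aggregation are easy to reproduce in a script. A minimal sketch assuming the .cklb layout described in the file-format section of these docs:

```python
import json
from collections import Counter
from pathlib import Path

def scan_dashboard(folder):
    """Aggregate rule counts by status across every .cklb file under a folder."""
    totals = Counter()
    for path in Path(folder).rglob("*.cklb"):   # recursive scan
        data = json.loads(path.read_text())
        for stig in data.get("stigs", []):
            totals.update(r["status"] for r in stig.get("rules", []))
    return totals
```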

screenshot — dashboard with multi-checklist metrics
Dashboard view aggregating compliance metrics across a folder of checklists.

Checklist diff #

Compare any two .cklb files side by side. The diff view groups changes by type, sorted from most-significant to least:

  • Regressions — rules that moved from compliant to non-compliant.
  • Improvements — rules that moved from non-compliant to compliant.
  • New rules — rules in the second checklist not present in the first.
  • Removed rules — rules in the first checklist not present in the second.

Common uses: comparing two analysts’ work on the same STIG, tracking remediation progress over time, or auditing a vendor-supplied checklist against your baseline.
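The regression/improvement classification can be sketched per rule. Here "compliant" means Not a Finding or Not Applicable and Open is non-compliant, an assumption based on the status definitions earlier in these docs:

```python
COMPLIANT = {"Not a Finding", "Not Applicable"}

def classify(old_status, new_status):
    """Classify a status change between two versions of the same rule."""
    if old_status in COMPLIANT and new_status == "Open":
        return "regression"      # compliant → non-compliant
    if old_status == "Open" and new_status in COMPLIANT:
        return "improvement"     # non-compliant → compliant
    return "unchanged" if old_status == new_status else "other"
```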

Evidence package #

Requires an open checklist. Bundles the checklist and optional supporting files into a ZIP archive. Includes a human-readable SUMMARY.md of Open findings that you can paste into ATO submission packages.

What goes into the ZIP

  • The current .cklb file.
  • An exported .ckl for tools that still require the legacy format.
  • A SUMMARY.md listing every Open finding with severity, rule ID, title, and finding details.
  • Optional: any supporting files you attach (screenshots, scan outputs, configuration exports).

CSV / CKL / POA&M #

The editor toolbar exposes three single-file exports for common downstream uses:

CSV export

A spreadsheet of all rules with status, severity, finding details, and comments. Best for briefings, internal tracking, and sharing with stakeholders who don’t use STIG-aware tooling.

CKL export

The legacy DISA XML format. Use this when your downstream tool (eMASS, older STIG Viewer installs, third-party GRC tools) requires .ckl instead of .cklb. Round-trips cleanly — you can import the exported CKL back into STIG Workbench without losing data.

POA&M export

A Plan of Action & Milestones spreadsheet listing every Open rule. Pre-populated with rule ID, title, severity, current status, and finding details — ready for you to fill in remediation owners, scheduled completion dates, and milestones.

Keyboard shortcuts #

macOS shortcuts use Cmd; Windows and Linux use Ctrl.

  • Open checklist — Cmd+O (File → Open Checklist…)
  • Save — Cmd+S (File → Save)
  • Save As — Cmd+Shift+S (File → Save As…)
  • Import XCCDF — File → Import XCCDF Benchmark… (menu only)
  • Import CKL — File → Import CKL Checklist… (menu only)
  • Dashboard — Cmd+Shift+D (View → Dashboard)
  • Diff Checklists — View → Diff Checklists… (menu only)
  • Scan Repository — Tools → Scan Repository… (menu only)
  • Import SCAP — Tools → Import SCAP Results… (menu only)
  • Import SARIF — Tools → Import SARIF… (menu only)
  • Import Dependency Audit — Tools → Import Dependency Audit… (menu only)
  • Import InSpec / HDF — Cmd+Shift+H (Tools → Import InSpec / HDF Results…)
  • Merge Findings — Tools → Merge Findings… (menu only)
  • Upgrade STIG Version — Tools → Upgrade STIG Version… (menu only)
  • Export Evidence Package — Tools → Export Evidence Package… (menu only)

.cklb file format #

STIG Workbench uses .cklb — a JSON format that is a superset of the DISA CKLB specification. Each file contains:

{
  "title": "...",
  "id": "<uuid>",
  "stigs": [
    {
      "stig_name": "...",
      "version": "...",
      "rules": [ ... ]
    }
  ],
  "target_data": {
    "host_name": "...",
    "ip_address": "...",
    "fqdn": "...",
    "mac_address": "...",
    "role": "...",
    "...": "..."
  },
  "cklb_version": "1"
}

Each rule stores status, finding_details, comments, overrides, and the full STIG content fields (check_content, fix_text, discussion, severity, references, CCIs, SRG IDs, etc.).
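Because the format is plain JSON, scripting against a checklist needs only the standard library. A minimal sketch that summarizes statuses per embedded STIG, using the field names shown above:

```python
import json
from collections import Counter

def summarize(path):
    """Return (stig_name, status_counts) pairs for a .cklb file."""
    with open(path) as f:
        data = json.load(f)
    return [(stig["stig_name"], Counter(r["status"] for r in stig["rules"]))
            for stig in data["stigs"]]
```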

Why JSON

The legacy .ckl format is XML and was designed for the Java-era STIG Viewer. JSON is easier to diff in version control, easier to read in editors, easier to script against, and produces smaller files. STIG Workbench reads and writes both, so you’re never locked in.

Requirements #

  • macOS 11 (Big Sur) or later, Intel or Apple Silicon.
  • Windows 10 or later, x64.
  • Linux Ubuntu 20.04+ or any modern distro that runs .AppImage. x64.
  • No network access required — all parsing and processing happens locally on your machine. The only outbound connection is a one-time license validation when activating a key.
  • No Java, no Node.js, no VS Code required at runtime.

Need help? Email [email protected] or open an issue on GitHub.