Mirror of https://github.com/speed47/spectre-meltdown-checker.git, synced 2026-04-23 00:53:23 +02:00.

Compare commits: 15 commits (f5c42098c3 ... ef57f070db)

| SHA1 |
|---|
| ef57f070db |
| 0caabfc220 |
| 6106dce8d8 |
| b71465ff74 |
| 1159d44c78 |
| 954dddaa9d |
| c9a6a4f2f0 |
| add102e04b |
| 637af10ca4 |
| e2eba83ce8 |
| 96c696e313 |
| 485e2d275b |
| 786bc86be8 |
| 9288a8295d |
| 7a7408d124 |
`.github/workflows/expected_cve_count` (vendored, 2 lines changed)
@@ -1 +1 @@
-26
+28
@@ -267,6 +267,8 @@ In `src/libs/200_cpu_affected.sh`, add an `affected_yourname` variable and popul

Never use microcode version strings.
When populating the CPU model list, use the **most recent version** of the Linux kernel source as the authoritative reference. The relevant lists are typically found in `arch/x86/kernel/cpu/common.c` (`cpu_vuln_blacklist`) or in the vulnerability-specific mitigation source file. Cross-reference the kernel list with the vendor's published advisory to catch any models the kernel hasn't added yet. Always document the kernel commit hash(es) you based the list on in a comment above the model checks, so future maintainers can diff against newer kernels.
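As a sketch of this convention (the commit hash, model constant, and function name below are placeholders for illustration, not the script's real code):

```shell
# Hypothetical example of a model check documenting its kernel source.
# kernel cpu_vuln_blacklist (0123abcd4567, initial model list) <- placeholder hash
INTEL_FAM6_EXAMPLE=85  # placeholder model number

# Return 0 (affected) when family/model match the documented kernel list.
is_affected_examplecve()
{
	family="$1"; model="$2"
	[ "$family" = 6 ] && [ "$model" = "$INTEL_FAM6_EXAMPLE" ]
}

if is_affected_examplecve 6 85; then echo affected; else echo "not affected"; fi
```

A future maintainer can then diff the commented commit hash against a newer kernel tree to see whether the model list has grown.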
**Important**: Do not confuse hardware immunity bits with *mitigation* capability bits. A hardware immunity bit (e.g. `GDS_NO`, `TSA_SQ_NO`) declares that the CPU design is architecturally free of the vulnerability - it belongs here in `is_cpu_affected()`. A mitigation capability bit (e.g. `VERW_CLEAR`, `MD_CLEAR`) indicates that updated microcode provides a mechanism to work around a vulnerability the CPU *does* have - it belongs in the `check_CVE_YYYY_NNNNN_linux()` function (Phase 2), where it is used to determine whether mitigations are in place.
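The split can be stated mechanically; a minimal sketch (the helper below is illustrative, not part of the script, but the bit names are the ones discussed above):

```shell
# Route a CPUID/MSR capability bit to the phase that should consume it:
# hardware immunity bits feed is_cpu_affected() (Phase 1), mitigation
# capability bits feed the check_CVE_*_linux() function (Phase 2).
classify_bit()
{
	case "$1" in
		GDS_NO|TSA_SQ_NO|RFDS_NO)       echo "is_cpu_affected (hardware immunity)" ;;
		VERW_CLEAR|MD_CLEAR|RFDS_CLEAR) echo "check_CVE_*_linux (mitigation capability)" ;;
		*)                              echo "unknown bit" ;;
	esac
}

classify_bit GDS_NO      # prints: is_cpu_affected (hardware immunity)
classify_bit VERW_CLEAR  # prints: check_CVE_*_linux (mitigation capability)
```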
### Step 3: Implement the Linux Check
@@ -321,7 +323,7 @@ This is where the real detection lives. Check for mitigations at each layer:
Each source may independently be unavailable (offline mode without the file, or stripped kernel), so check all that are present. A match in any one confirms kernel support.
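The "any one source confirms" logic can be sketched as follows; the config option, sysfs path, and symbol name are placeholders, and the three-source choice is an assumption for illustration:

```shell
# Minimal sketch: three evidence sources, each possibly absent; a match in
# any one that is readable confirms kernel support for the mitigation.
kernel_supports_mitigation()
{
	config="$1"; sysfs="$2"; sysmap="$3"
	[ -r "$config" ] && grep -q '^CONFIG_MITIGATION_EXAMPLE=y' "$config" && return 0
	[ -r "$sysfs" ]  && grep -q '^Mitigation:' "$sysfs" && return 0
	[ -r "$sysmap" ] && grep -q ' example_mitigation_sym$' "$sysmap" && return 0
	return 1
}

# demo: only the config file is present, which is enough to confirm
cfg=$(mktemp)
echo 'CONFIG_MITIGATION_EXAMPLE=y' > "$cfg"
kernel_supports_mitigation "$cfg" /nonexistent /nonexistent && echo "kernel support confirmed"
rm -f "$cfg"
```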
-- **Runtime state** (live mode only): Read MSRs, check cpuinfo flags, parse dmesg, inspect debugfs.
+- **Runtime state** (live mode only): Read MSRs, check cpuinfo flags, parse dmesg, inspect debugfs. All runtime-only checks — including `/proc/cpuinfo` flags — must be guarded by `if [ "$opt_live" = 1 ]`, both when collecting the evidence in Phase 2 and when using it in Phase 4. In Phase 4, use explicit live/offline branches so that live-only variables (e.g. cpuinfo flags, MSR values) are never referenced in the offline path.
```sh
if [ "$opt_live" = 1 ]; then
	read_msr 0xADDRESS
fi
```
@@ -697,13 +699,15 @@ CVEs that need VMM context should call `check_has_vmm` early in their `_linux()`
### Step 4: Wire Up and Test
-1. **Add the CVE name mapping** in the `cve2name()` function so the header prints a human-readable name.
-2. **Build** the monolithic script with `make`.
-3. **Test live**: Run the built script and confirm your CVE appears in the output and reports a sensible status.
-4. **Test batch JSON**: Run with `--batch json` and verify the CVE count incremented by one (currently 19 → 20).
-5. **Test offline**: Run with `--kernel`/`--config`/`--map` pointing to a kernel image and verify the offline code path reports correctly.
-6. **Lint**: Run `shellcheck` on the monolithic script and fix any warnings.
-7. **Update `dist/README.md`**: Add details about the new CVE check (name, description, what it detects) so that the user-facing documentation stays in sync with the implementation.
+1. **Add the CVE to `CVE_REGISTRY`** in `src/libs/002_core_globals.sh` with the correct fields: `CVE-YYYY-NNNNN|JSON_KEY|affected_var_suffix|Complete Name and Aliases`. This is the single source of truth for CVE metadata — it drives `cve2name()`, `is_cpu_affected()`, and the supported CVE list.
+2. **Add a `--variant` alias** in `src/libs/230_util_optparse.sh`: add a new `case` entry mapping a short name (e.g. `rfds`, `downfall`) to `opt_cve_list="$opt_cve_list CVE-YYYY-NNNNN"`, and add that short name to the `help)` echo line. The CVE is already selectable via `--cve CVE-YYYY-NNNNN` (this is handled generically by the existing `--cve` parsing code), but the `--variant` alias provides the user-friendly short name.
+3. **Update `dist/README.md`**: Add the CVE in **both** tables — the "Supported CVEs" reference table at the top (CVE link, description, alias) **and** the "Am I at risk?" matrix (with the correct leak/mitigation indicators per boundary). Also add a detailed description paragraph in the `<details>` section at the bottom.
+4. **Build** the monolithic script with `make`.
+5. **Test live**: Run the built script and confirm your CVE appears in the output and reports a sensible status.
+6. **Test batch JSON**: Run with `--batch json` and verify the CVE appears in the output.
+7. **Test offline**: Run with `--kernel`/`--config`/`--map` pointing to a kernel image and verify the offline code path reports correctly.
+8. **Test `--variant` and `--cve`**: Run with `--variant <shortname>` and `--cve CVE-YYYY-NNNNN` separately to confirm both selection methods work and produce the same output.
+9. **Lint**: Run `shellcheck` on the monolithic script and fix any warnings.
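The `--variant` alias step follows a fixed pattern; a hypothetical entry in the style of `230_util_optparse.sh` (the short name `yourcve` and the literal `CVE-YYYY-NNNNN` are placeholders):

```shell
# Hypothetical --variant alias entry: appends the CVE id to the selection
# list and disables the "check everything" default.
opt_cve_list=''
opt_cve_all=1

add_variant_alias()
{
	case "$1" in
		yourcve)   # placeholder short name
			opt_cve_list="$opt_cve_list CVE-YYYY-NNNNN"   # placeholder CVE id
			opt_cve_all=0
			;;
	esac
}

add_variant_alias yourcve
echo "selected:$opt_cve_list (all=$opt_cve_all)"
```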
### Key Rules to Remember
@@ -48,6 +48,53 @@ A Spectre V1 subvariant where the `SWAPGS` instruction can be speculatively exec
**Why out of scope:** This is a Spectre V1 subvariant whose mitigation (SWAPGS barriers) shares the same sysfs entry as CVE-2017-5753. This tool's existing CVE-2017-5753 checks already detect SWAPGS barriers: a mitigated kernel reports `"Mitigation: usercopy/swapgs barriers and __user pointer sanitization"`, while a kernel lacking the fix reports `"Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers"`. CVE-2019-1125 is therefore fully covered as part of Spectre V1.
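The two sysfs strings quoted above can be told apart mechanically; a sketch (the helper is illustrative), taking the contents of `/sys/devices/system/cpu/vulnerabilities/spectre_v1` as input:

```shell
# Return 0 when the spectre_v1 sysfs string indicates SWAPGS barriers
# are in place, 1 otherwise.
has_swapgs_barriers()
{
	case "$1" in
		*"no swapgs barriers"*) return 1 ;;  # fix explicitly missing
		*"swapgs barriers"*)    return 0 ;;  # fix present
		*)                      return 1 ;;  # older kernel: not reported
	esac
}

has_swapgs_barriers "Mitigation: usercopy/swapgs barriers and __user pointer sanitization" \
	&& echo "CVE-2019-1125 covered by the Spectre V1 mitigation"
```

Note the "no swapgs barriers" pattern must be matched first, since the vulnerable string also contains the substring "swapgs barriers".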
## CVE-2021-26341 — AMD Straight-Line Speculation (direct branches)
- **Bulletin:** [AMD-SB-1026](https://www.amd.com/en/resources/product-security/bulletin/amd-sb-1026.html)
- **Affected CPUs:** AMD Zen 1, Zen 2
- **CVSS:** 6.5 (Medium)
- **Covered by:** CVE-0000-0001 (SLS supplementary check)

AMD Zen 1/Zen 2 CPUs may transiently execute instructions beyond unconditional direct branches (JMP, CALL), potentially allowing information disclosure via side channels.
**Why out of scope:** This is the AMD-specific direct-branch subset of the broader Straight-Line Speculation (SLS) class. The kernel mitigates it via `CONFIG_MITIGATION_SLS` (formerly `CONFIG_SLS`), which enables the GCC flag `-mharden-sls=all` to insert INT3 after unconditional control flow instructions. Since this is a compile-time-only mitigation with no sysfs interface, no MSR, and no per-CVE CPU feature flag, it cannot be checked using the standard CVE framework. A supplementary SLS check is available via `--extra` mode, which covers this CVE's mitigation as well.
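Since the only offline evidence for this mitigation is the kernel config, the check reduces to a config grep; a sketch (the helper name is illustrative; it accepts both the old and new option names mentioned above):

```shell
# Return 0 if the kernel config enables SLS hardening (-mharden-sls=all),
# under either its old name (CONFIG_SLS) or its current one
# (CONFIG_MITIGATION_SLS).
sls_config_enabled()
{
	grep -Eq '^CONFIG_(MITIGATION_)?SLS=y' "$1"
}

# demo against a fabricated config fragment
cfg=$(mktemp)
echo 'CONFIG_MITIGATION_SLS=y' > "$cfg"
sls_config_enabled "$cfg" && echo "SLS hardening compiled in"
rm -f "$cfg"
```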
## CVE-2020-13844 — ARM Straight-Line Speculation
- **Advisory:** [ARM Developer Security Update (June 2020)](https://developer.arm.com/Arm%20Security%20Center/Speculative%20Processor%20Vulnerability)
- **Affected CPUs:** Cortex-A32, A34, A35, A53, A57, A72, A73, and broadly all speculative Armv8-A cores
- **CVSS:** 5.5 (Medium)
- **Covered by:** CVE-0000-0001 (SLS supplementary check)

ARM processors may speculatively execute instructions past unconditional control flow changes (RET, BR, BLR). GCC and Clang support `-mharden-sls=all` for aarch64, but the Linux kernel never merged the patches to enable it: a `CONFIG_HARDEN_SLS_ALL` series was submitted in 2021 but rejected upstream.

**Why out of scope:** This is the ARM-specific subset of the broader Straight-Line Speculation (SLS) class. The supplementary SLS check available via `--extra` mode detects affected ARM CPU models and reports that no kernel mitigation is currently available.
## CVE-2024-2201 — Native BHI (Branch History Injection without eBPF)
- **Issue:** [#491](https://github.com/speed47/spectre-meltdown-checker/issues/491)
- **Research:** [InSpectre Gadget / Native BHI (VUSec)](https://www.vusec.net/projects/native-bhi/)
- **Intel advisory:** [Branch History Injection (Intel)](https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/branch-history-injection.html)
- **Affected CPUs:** Intel CPUs with eIBRS (Ice Lake+, 10th gen+, and virtualized Intel guests)
- **CVSS:** 4.7 (Medium)
- **Covered by:** CVE-2017-5715 (Spectre V2)

VUSec researchers demonstrated that the original BHI mitigation (disabling unprivileged eBPF) was insufficient: 1,511 native kernel gadgets exist that allow exploiting Branch History Injection without eBPF, leaking arbitrary kernel memory at ~3.5 kB/sec on Intel CPUs.
**Why out of scope:** CVE-2024-2201 is not a new hardware vulnerability — it is the same BHI hardware bug as CVE-2022-0002, but proves that eBPF restriction alone was never sufficient. The required mitigations are identical: `BHI_DIS_S` hardware control (MSR `IA32_SPEC_CTRL` bit 10), software BHB clearing loop at syscall entry and VM exit, or retpoline with RRSBA disabled. These are all already detected by this tool's CVE-2017-5715 (Spectre V2) checks, which parse the `BHI:` suffix from `/sys/devices/system/cpu/vulnerabilities/spectre_v2` and check for `CONFIG_MITIGATION_SPECTRE_BHI` in offline mode. No new sysfs entry, MSR, kernel config option, or boot parameter was introduced for this CVE.
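Extracting the `BHI:` suffix from the spectre_v2 sysfs line, as the CVE-2017-5715 checks described above do, can be sketched as (helper name illustrative; input is the file contents, and the sample string is a plausible example, not captured output):

```shell
# Extract the "BHI: ..." segment from a spectre_v2 sysfs line,
# e.g. "Mitigation: Enhanced / Automatic IBRS; BHI: BHI_DIS_S".
parse_bhi_status()
{
	case "$1" in
		*"; BHI: "*)
			s="${1##*; BHI: }"   # drop everything up to the BHI segment
			echo "${s%%;*}"      # trim any trailing "; ..." segments
			;;
		*)
			echo "not reported"
			;;
	esac
}

parse_bhi_status "Mitigation: Enhanced / Automatic IBRS; BHI: BHI_DIS_S"
# prints: BHI_DIS_S
```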
## CVE-2020-0549 — L1D Eviction Sampling (CacheOut)
- **Issue:** [#341](https://github.com/speed47/spectre-meltdown-checker/issues/341)
- **Advisory:** [INTEL-SA-00329](https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/l1d-eviction-sampling.html)
- **Affected CPUs:** Intel Skylake through 10th gen (Tiger Lake+ not affected)
- **CVSS:** 6.5 (Medium)
- **Covered by:** CVE-2018-12126 / CVE-2018-12127 / CVE-2018-12130 / CVE-2019-11091 (MDS) and CVE-2018-3646 (L1TF)

An Intel-specific data leakage vulnerability where L1 data cache evictions can be exploited in combination with MDS or TAA side channels to leak data across security boundaries.
**Why out of scope:** The June 2020 microcode update that addresses this CVE does not introduce any new MSR bits or CPUID flags — it reuses the existing MD_CLEAR (`CPUID.7.0:EDX[10]`) and L1D_FLUSH (`MSR_IA32_FLUSH_CMD`, 0x10B) infrastructure already deployed for MDS and L1TF. The Linux kernel has no dedicated sysfs entry in `/sys/devices/system/cpu/vulnerabilities/` for this CVE; instead, it provides an opt-in per-task L1D flush via `prctl(PR_SPEC_L1D_FLUSH)` and the `l1d_flush=on` boot parameter, which piggyback on the same L1D flush mechanism checked by the existing L1TF and MDS vulnerability modules. In practice, a system with up-to-date microcode and MDS/L1TF mitigations in place is already protected against L1D Eviction Sampling.
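Of the two opt-in traces mentioned above, only the boot parameter is visible from shell; a sketch (helper illustrative), taking the contents of `/proc/cmdline` as input:

```shell
# Return 0 if the opt-in l1d_flush=on boot parameter is present on the
# kernel command line passed in as "$1".
l1d_flush_opted_in()
{
	case " $1 " in
		*" l1d_flush=on "*) return 0 ;;
		*)                  return 1 ;;
	esac
}

l1d_flush_opted_in "$(cat /proc/cmdline 2>/dev/null)" \
	&& echo "opt-in L1D flush enabled" \
	|| echo "opt-in L1D flush not enabled"
```

Padding the input with spaces makes the pattern match whole words only, so e.g. `l1d_flush=one` would not match.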
## CVE-2025-20623 — Shared Microarchitectural Predictor State (10th Gen Intel)
- **Advisory:** [INTEL-SA-01247](https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-01247.html)
@@ -108,6 +155,17 @@ AMD CPUs may transiently execute non-canonical loads and stores using only the l
**Why out of scope:** AMD's mitigation guidance is for software vendors to "analyze their code for any potential vulnerabilities" and insert LFENCE or use existing speculation mitigation techniques in their own code. No microcode or kernel-level mitigations have been issued. The responsibility falls on individual software, not on the kernel or firmware, leaving nothing for this script to check.
## CVE-2021-26318 — AMD Prefetch Attacks through Power and Time
- **Issue:** [#412](https://github.com/speed47/spectre-meltdown-checker/issues/412)
- **Bulletin:** [AMD-SB-1017](https://www.amd.com/en/resources/product-security/bulletin/amd-sb-1017.html)
- **Research paper:** [AMD Prefetch Attacks through Power and Time (USENIX Security '22)](https://www.usenix.org/conference/usenixsecurity22/presentation/lipp)
- **CVSS:** 5.5 (Medium)

The x86 PREFETCH instruction on AMD CPUs leaks timing and power information, enabling a microarchitectural KASLR bypass from unprivileged userspace. The researchers demonstrated kernel address space layout recovery and kernel memory leakage at ~52 B/s using Spectre gadgets.
**Why out of scope:** AMD acknowledged the research but explicitly stated they are "not recommending any mitigations at this time," as the attack leaks kernel address layout information (KASLR bypass) but does not directly leak kernel data across address space boundaries. KPTI was never enabled on AMD by default in the Linux kernel as a result. No microcode, kernel, or sysfs mitigations have been issued, leaving nothing for this script to check.
## CVE-2024-7881 — ARM Prefetcher Privilege Escalation
- **Affected CPUs:** Specific ARM cores only
@@ -135,6 +193,17 @@ A transient execution vulnerability in some AMD processors may allow a user proc
**Why out of scope:** AMD has determined that "leakage of TSC_AUX does not result in leakage of sensitive information" and has marked this CVE as "No fix planned" across all affected product lines. No microcode or kernel mitigations have been issued, leaving nothing for this script to check.
## No CVE — TLBleed (TLB side-channel)
- **Issue:** [#231](https://github.com/speed47/spectre-meltdown-checker/issues/231)
- **Research paper:** [Defeating Cache Side-channel Protections with TLB Attacks (VUSec, USENIX Security '18)](https://www.vusec.net/projects/tlbleed/)
- **Red Hat blog:** [Temporal side-channels and you: Understanding TLBleed](https://www.redhat.com/en/blog/temporal-side-channels-and-you-understanding-tlbleed)
- **Affected CPUs:** Intel CPUs with Hyper-Threading (demonstrated on Skylake, Coffee Lake, Broadwell Xeon)

A timing side-channel attack exploiting the shared Translation Lookaside Buffer (TLB) on Intel hyperthreaded CPUs. By using machine learning to analyze TLB hit/miss timing patterns, an attacker co-located on the same physical core can extract cryptographic keys (demonstrated with 99.8% success rate on a 256-bit EdDSA key). OpenBSD disabled Hyper-Threading by default in response.
**Why out of scope:** No CVE was ever assigned — Intel explicitly declined to request one. Intel stated the attack is "not related to Spectre or Meltdown" and has no plans to issue a microcode fix, pointing to existing constant-time coding practices in cryptographic software as the appropriate defense. No Linux kernel mitigation was ever merged. Red Hat's guidance was limited to operational advice (disable SMT, use CPU pinning) rather than a software fix. The only OS-level response was OpenBSD disabling Hyper-Threading by default. With no CVE, no microcode update, and no kernel mitigation, there is nothing for this script to check.
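With no CVE and no kernel fix, the only actionable knob is SMT itself. A sketch of the operational check (helper illustrative; `/sys/devices/system/cpu/smt/control` is the standard kernel interface, parameterized here so the helper can be exercised on a copy):

```shell
# Return 0 when SMT is disabled or unavailable (control reports
# off/forceoff/notsupported), 1 when SMT is on or state is unknown.
smt_disabled()
{
	ctl="${1:-/sys/devices/system/cpu/smt/control}"
	[ -r "$ctl" ] || return 1
	case "$(cat "$ctl")" in
		off|forceoff|notsupported) return 0 ;;
		*)                         return 1 ;;
	esac
}

smt_disabled \
	&& echo "TLBleed exposure reduced (SMT off)" \
	|| echo "SMT on or unknown: co-resident hyperthread attacks possible"
```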
---
# Not a transient/speculative execution vulnerability
`dist/README.md` (vendored, 12 lines changed)
@@ -26,8 +26,10 @@ CVE | Name | Aliases
[CVE-2022-29901](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29901) | Arbitrary Speculative Code Execution with Return Instructions | Retbleed (Intel), RSBA
[CVE-2022-40982](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-40982) | Gather Data Sampling | Downfall, GDS
[CVE-2023-20569](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-20569) | Return Address Security | Inception, SRSO
[CVE-2023-20588](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-20588) | AMD Division by Zero Speculative Data Leak | DIV0
[CVE-2023-20593](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-20593) | Cross-Process Information Leak | Zenbleed
[CVE-2023-23583](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-23583) | Redundant Prefix Issue | Reptar
[CVE-2023-28746](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-28746) | Register File Data Sampling | RFDS
[CVE-2024-28956](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-28956) | Indirect Target Selection | ITS
[CVE-2024-36350](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-36350) | Transient Scheduler Attack, Store Queue | TSA-SQ
[CVE-2024-36357](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-36357) | Transient Scheduler Attack, L1 | TSA-L1
@@ -60,8 +62,10 @@ CVE-2022-29900 (Retbleed AMD) | 💥 | ✅ | 💥 | ✅ | Kernel update (+ micro
CVE-2022-29901 (Retbleed Intel, RSBA) | 💥 | ✅ | 💥 | ✅ | Microcode + kernel update (eIBRS or IBRS)
CVE-2022-40982 (Downfall, GDS) | 💥 | 💥 | 💥 | 💥 | Microcode update (or disable AVX)
CVE-2023-20569 (Inception, SRSO) | 💥 | ✅ | 💥 | ✅ | Microcode + kernel update
CVE-2023-20588 (DIV0) | 💥 | 💥 (1) | 💥 | 💥 (1) | Kernel update (+ disable SMT)
CVE-2023-20593 (Zenbleed) | 💥 | 💥 | 💥 | 💥 | Microcode update (or kernel workaround)
CVE-2023-23583 (Reptar) | ☠️ | ☠️ | ☠️ | ☠️ | Microcode update
CVE-2023-28746 (RFDS) | 💥 | ✅ | 💥 | ✅ | Microcode + kernel update
CVE-2024-28956 (ITS) | 💥 | ✅ | 💥 (4) | ✅ | Microcode + kernel update
CVE-2024-36350 (TSA-SQ) | 💥 | 💥 (1) | 💥 | 💥 (1) | Microcode + kernel update
CVE-2024-36357 (TSA-L1) | 💥 | 💥 (1) | 💥 | 💥 (1) | Microcode + kernel update
@@ -157,6 +161,10 @@ The AVX GATHER instructions can leak data from previously used vector registers
On AMD Zen 1 through Zen 4 processors, an attacker can manipulate the return address predictor to redirect speculative execution on return instructions, leaking kernel memory. Mitigation requires both a kernel update (providing SRSO safe-return sequences or IBPB-on-entry) and a microcode update (providing SBPB on Zen 3/4, or IBPB support on Zen 1/2 — which additionally requires SMT to be disabled). Performance impact ranges from low to significant depending on the chosen mitigation and CPU generation.

**CVE-2023-20588 — AMD Division by Zero Speculative Data Leak (DIV0)**

On AMD Zen 1 processors, a #DE (divide-by-zero) exception can leave stale quotient data from a previous division in the divider unit, observable by a subsequent division via speculative side channels. This can leak data across any privilege boundary, including between SMT sibling threads sharing the same physical core. Mitigation requires a kernel update (Linux 6.5+) that adds a dummy division (`amd_clear_divider()`) on every exit to userspace and before VMRUN, preventing stale data from persisting. No microcode update is needed. Disabling SMT provides additional protection because the kernel mitigation does not cover cross-SMT-thread leaks. Performance impact is negligible.

**CVE-2023-20593 — Cross-Process Information Leak (Zenbleed)**

A bug in AMD Zen 2 processors causes the VZEROUPPER instruction to incorrectly zero register files during speculative execution, leaving stale data from other processes observable in vector registers. This can leak data across any privilege boundary, including from the kernel and other processes, at rates up to 30 KB/s per core. Mitigation is available either through a microcode update that fixes the bug, or through a kernel workaround that sets the FP_BACKUP_FIX bit (bit 9) in the DE_CFG MSR, disabling the faulty optimization. Either approach alone is sufficient. Performance impact is negligible.

@@ -165,6 +173,10 @@ A bug in AMD Zen 2 processors causes the VZEROUPPER instruction to incorrectly z

A bug in Intel processors causes unexpected behavior when executing instructions with specific redundant REX prefixes. Depending on the circumstances, this can result in a system crash (MCE), unpredictable behavior, or potentially privilege escalation. Any software running on an affected CPU can trigger the bug. Mitigation requires a microcode update. Performance impact is low.

**CVE-2023-28746 — Register File Data Sampling (RFDS)**
On certain Intel Atom and hybrid processors (Goldmont, Goldmont Plus, Tremont, Gracemont, and the Atom cores of Alder Lake and Raptor Lake), the register file can retain stale data from previous operations that is accessible via speculative execution, allowing an attacker to infer data across privilege boundaries. Mitigation requires both a microcode update (providing the RFDS_CLEAR capability) and a kernel update (CONFIG_MITIGATION_RFDS, Linux 6.9+) that uses the VERW instruction to clear the register file on privilege transitions. CPUs with the RFDS_NO capability bit are not affected. Performance impact is low.
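A quick way to see the kernel's own verdict on a running system; a sketch, assuming the sysfs entry is named `reg_file_data_sampling` under the usual vulnerabilities directory (Linux 6.9+), with the path parameterized so the helper can be exercised on a copy:

```shell
# Print the kernel's RFDS status line, or a note when the entry is absent
# (older kernel, or non-Linux).
rfds_sysfs_status()
{
	f="${1:-/sys/devices/system/cpu/vulnerabilities/reg_file_data_sampling}"
	if [ -r "$f" ]; then
		cat "$f"
	else
		echo "sysfs entry not present"
	fi
}

rfds_sysfs_status
```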
**CVE-2024-28956 — Indirect Target Selection (ITS)**

On certain Intel processors (Skylake-X stepping 6+, Kaby Lake, Comet Lake, Ice Lake, Tiger Lake, Rocket Lake), an attacker can train the indirect branch predictor to speculatively execute a targeted gadget in the kernel, bypassing eIBRS protections. The Branch Target Buffer (BTB) uses only partial address bits to index indirect branch targets, allowing user-space code to influence kernel-space speculative execution. Some affected CPUs (Ice Lake, Tiger Lake, Rocket Lake) are only vulnerable to native user-to-kernel attacks, not guest-to-host (VMX) attacks. Mitigation requires both a microcode update (IPU 2025.1 / microcode-20250512+, which fixes IBPB to fully flush indirect branch predictions) and a kernel update (CONFIG_MITIGATION_ITS, Linux 6.15+) that aligns branch/return thunks or uses RSB stuffing. Performance impact is low.
@@ -32,6 +32,7 @@ exit_cleanup() {
[ -n "${g_dumped_config:-}" ] && [ -f "$g_dumped_config" ] && rm -f "$g_dumped_config"
[ -n "${g_kerneltmp:-}" ] && [ -f "$g_kerneltmp" ] && rm -f "$g_kerneltmp"
[ -n "${g_kerneltmp2:-}" ] && [ -f "$g_kerneltmp2" ] && rm -f "$g_kerneltmp2"
[ -n "${g_sls_text_tmp:-}" ] && [ -f "$g_sls_text_tmp" ] && rm -f "$g_sls_text_tmp"
[ -n "${g_mcedb_tmp:-}" ] && [ -f "$g_mcedb_tmp" ] && rm -f "$g_mcedb_tmp"
[ -n "${g_intel_tmp:-}" ] && [ -d "$g_intel_tmp" ] && rm -rf "$g_intel_tmp"
[ -n "${g_linuxfw_tmp:-}" ] && [ -f "$g_linuxfw_tmp" ] && rm -f "$g_linuxfw_tmp"
@@ -29,9 +29,11 @@ show_usage() {
--no-color don't use color codes
--verbose, -v increase verbosity level, possibly several times
--explain produce an additional human-readable explanation of actions to take to mitigate a vulnerability
---paranoid require IBPB to deem Variant 2 as mitigated
-          also require SMT disabled + unconditional L1D flush to deem Foreshadow-NG VMM as mitigated
-          also require SMT disabled to deem MDS vulnerabilities mitigated
+--paranoid require all mitigations to be enabled to the fullest extent, including those that
+          are not strictly necessary but provide defense in depth (e.g. SMT disabled, IBPB
+          always-on); without this flag, the script follows the security community consensus
+--extra   run additional checks for issues that don't have a CVE but are still security-relevant,
+          such as compile-time mitigations not enabled by default (e.g. Straight-Line Speculation)

--no-sysfs don't use the /sys interface even if present [Linux]
--sysfs-only only use the /sys interface, don't run our own checks [Linux]
@@ -128,6 +130,7 @@ opt_allow_msr_write=0
opt_cpu=0
opt_explain=0
opt_paranoid=0
opt_extra=0
opt_mock=0
opt_intel_db=1
@@ -153,6 +156,7 @@ CVE-2019-11091|MDSUM|mdsum|RIDL, microarchitectural data sampling uncacheable me
CVE-2019-11135|TAA|taa|ZombieLoad V2, TSX Asynchronous Abort (TAA)
CVE-2018-12207|ITLBMH|itlbmh|No eXcuses, iTLB Multihit, machine check exception on page size changes (MCEPSC)
CVE-2020-0543|SRBDS|srbds|Special Register Buffer Data Sampling (SRBDS)
CVE-2023-20588|DIV0|div0|Division by Zero, AMD Zen1 speculative data leak
CVE-2023-20593|ZENBLEED|zenbleed|Zenbleed, cross-process information leak
CVE-2022-40982|DOWNFALL|downfall|Downfall, gather data sampling (GDS)
CVE-2022-29900|RETBLEED AMD|retbleed|Retbleed, arbitrary speculative code execution with return instructions (AMD)
@@ -163,7 +167,9 @@ CVE-2024-36350|TSA_SQ|tsa|Transient Scheduler Attack - Store Queue (TSA-SQ)
CVE-2024-36357|TSA_L1|tsa|Transient Scheduler Attack - L1 (TSA-L1)
CVE-2024-28956|ITS|its|Indirect Target Selection (ITS)
CVE-2025-40300|VMSCAPE|vmscape|VMScape, VM-exit stale branch prediction
CVE-2023-28746|RFDS|rfds|Register File Data Sampling (RFDS)
CVE-2024-45332|BPI|bpi|Branch Privilege Injection (BPI)
CVE-0000-0001|SLS|sls|Straight-Line Speculation (SLS)
'

# Derive the supported CVE list from the registry
@@ -99,16 +99,19 @@ is_cpu_affected() {
affected_taa=''
affected_itlbmh=''
affected_srbds=''
-# Zenbleed and Inception are both AMD specific, look for "is_amd" below:
affected_sls=''
+# DIV0, Zenbleed and Inception are all AMD specific, look for "is_amd" below:
_set_immune div0
_set_immune zenbleed
_set_immune inception
# TSA is AMD specific (Zen 3/4), look for "is_amd" below:
_set_immune tsa
# Retbleed: AMD (CVE-2022-29900) and Intel (CVE-2022-29901) specific:
_set_immune retbleed
-# Downfall, Reptar, ITS & BPI are Intel specific, look for "is_intel" below:
+# Downfall, Reptar, RFDS, ITS & BPI are Intel specific, look for "is_intel" below:
_set_immune downfall
_set_immune reptar
_set_immune rfds
_set_immune its
_set_immune bpi
# VMScape affects Intel, AMD and Hygon — set immune, overridden below:
@@ -265,6 +268,32 @@ is_cpu_affected() {
	fi
	set +u
fi
# RFDS (Register File Data Sampling, CVE-2023-28746)
# kernel cpu_vuln_blacklist (8076fcde016c, initial model list)
# immunity: ARCH_CAP_RFDS_NO (bit 27 of IA32_ARCH_CAPABILITIES)
# vendor scope: Intel only (family 6), Atom/hybrid cores
if [ "$cap_rfds_no" = 1 ]; then
	pr_debug "is_cpu_affected: rfds: not affected (RFDS_NO)"
	_set_immune rfds
elif [ "$cpu_family" = 6 ]; then
	set -u
	if [ "$cpu_model" = "$INTEL_FAM6_ATOM_GOLDMONT" ] ||
		[ "$cpu_model" = "$INTEL_FAM6_ATOM_GOLDMONT_D" ] ||
		[ "$cpu_model" = "$INTEL_FAM6_ATOM_GOLDMONT_PLUS" ] ||
		[ "$cpu_model" = "$INTEL_FAM6_ATOM_TREMONT_D" ] ||
		[ "$cpu_model" = "$INTEL_FAM6_ATOM_TREMONT" ] ||
		[ "$cpu_model" = "$INTEL_FAM6_ATOM_TREMONT_L" ] ||
		[ "$cpu_model" = "$INTEL_FAM6_ATOM_GRACEMONT" ] ||
		[ "$cpu_model" = "$INTEL_FAM6_ALDERLAKE" ] ||
		[ "$cpu_model" = "$INTEL_FAM6_ALDERLAKE_L" ] ||
		[ "$cpu_model" = "$INTEL_FAM6_RAPTORLAKE" ] ||
		[ "$cpu_model" = "$INTEL_FAM6_RAPTORLAKE_P" ] ||
		[ "$cpu_model" = "$INTEL_FAM6_RAPTORLAKE_S" ]; then
		pr_debug "is_cpu_affected: rfds: affected"
		_set_vuln rfds
	fi
	set +u
fi
# ITS (Indirect Target Selection, CVE-2024-28956)
# kernel vulnerable_to_its() + cpu_vuln_blacklist (159013a7ca18)
# immunity: ARCH_CAP_ITS_NO (bit 62 of IA32_ARCH_CAPABILITIES)
@@ -559,6 +588,13 @@ is_cpu_affected() {
fi
_set_immune variantl1tf

# DIV0 (Zen1 only)
# 77245f1c3c64 (v6.5, initial model list): family 0x17 models 0x00-0x2f, 0x50-0x5f
# bfff3c6692ce (v6.8): moved to init_amd_zen1(), unconditional for all Zen1
# All Zen1 CPUs are family 0x17, models 0x00-0x2f and 0x50-0x5f
amd_legacy_erratum "$(amd_model_range 0x17 0x00 0x0 0x2f 0xf)" && _set_vuln div0
amd_legacy_erratum "$(amd_model_range 0x17 0x50 0x0 0x5f 0xf)" && _set_vuln div0

# Zenbleed
amd_legacy_erratum "$(amd_model_range 0x17 0x30 0x0 0x4f 0xf)" && _set_vuln zenbleed
amd_legacy_erratum "$(amd_model_range 0x17 0x60 0x0 0x7f 0xf)" && _set_vuln zenbleed
@@ -741,13 +777,35 @@ is_cpu_affected() {
	_infer_immune itlbmh
fi

# shellcheck disable=SC2154 # affected_zenbleed/inception/retbleed/tsa/downfall/reptar/its/vmscape/bpi set via eval (_set_immune)
# SLS (Straight-Line Speculation):
# - x86_64: all CPUs are affected (compile-time mitigation CONFIG_MITIGATION_SLS)
# - arm64 (CVE-2020-13844): Cortex-A32/A34/A35/A53/A57/A72/A73 confirmed affected,
#   and broadly all speculative Armv8-A cores. No kernel mitigation merged.
#   Part numbers: A32=0xd01 A34=0xd02 A53=0xd03 A35=0xd04 A57=0xd07 A72=0xd08 A73=0xd09
#   Plus later speculative cores: A75=0xd0a A76=0xd0b A77=0xd0d N1=0xd0c V1=0xd40 N2=0xd49 V2=0xd4f
if is_intel || is_amd; then
	_infer_vuln sls
elif [ "$cpu_vendor" = ARM ]; then
	for cpupart in $cpu_part_list; do
		if echo "$cpupart" | grep -q -w -e 0xd01 -e 0xd02 -e 0xd03 -e 0xd04 \
			-e 0xd07 -e 0xd08 -e 0xd09 -e 0xd0a -e 0xd0b -e 0xd0c -e 0xd0d \
			-e 0xd40 -e 0xd49 -e 0xd4f; then
			_set_vuln sls
		fi
	done
	# non-speculative ARM cores (arch <= 7, or early v8 models) are not affected
	_infer_immune sls
else
	_infer_immune sls
fi

# shellcheck disable=SC2154
{
	pr_debug "is_cpu_affected: final results: variant1=$affected_variant1 variant2=$affected_variant2 variant3=$affected_variant3 variant3a=$affected_variant3a"
	pr_debug "is_cpu_affected: final results: variant4=$affected_variant4 variantl1tf=$affected_variantl1tf msbds=$affected_msbds mfbds=$affected_mfbds"
	pr_debug "is_cpu_affected: final results: mlpds=$affected_mlpds mdsum=$affected_mdsum taa=$affected_taa itlbmh=$affected_itlbmh srbds=$affected_srbds"
-	pr_debug "is_cpu_affected: final results: zenbleed=$affected_zenbleed inception=$affected_inception retbleed=$affected_retbleed tsa=$affected_tsa downfall=$affected_downfall reptar=$affected_reptar its=$affected_its"
-	pr_debug "is_cpu_affected: final results: vmscape=$affected_vmscape bpi=$affected_bpi"
+	pr_debug "is_cpu_affected: final results: div0=$affected_div0 zenbleed=$affected_zenbleed inception=$affected_inception retbleed=$affected_retbleed tsa=$affected_tsa downfall=$affected_downfall reptar=$affected_reptar rfds=$affected_rfds its=$affected_its"
+	pr_debug "is_cpu_affected: final results: vmscape=$affected_vmscape bpi=$affected_bpi sls=$affected_sls"
}
affected_variantl1tf_sgx="$affected_variantl1tf"
# even if we are affected to L1TF, if there's no SGX, we're not affected to the original foreshadow
@@ -143,7 +143,7 @@ is_cpu_srbds_free() {
	return 1
elif [ "$cpu_model" = "$INTEL_FAM6_KABYLAKE_L" ] && [ "$cpu_stepping" -le 12 ] ||
	[ "$cpu_model" = "$INTEL_FAM6_KABYLAKE" ] && [ "$cpu_stepping" -le 13 ]; then
-	if [ "$cap_mds_no" -eq 1 ] && { [ "$cap_rtm" -eq 0 ] || [ "$cap_tsx_ctrl_rtm_disable" -eq 1 ]; }; then
+	if [ "$cap_mds_no" -eq 1 ] && { [ "$cap_rtm" -eq 0 ] || [ "$cap_tsx_ctrl_rtm_disable" -eq 1 ] || [ "$cap_tsx_force_abort_rtm_disable" -eq 1 ]; }; then
		return 0
	else
		return 1
@@ -68,6 +68,9 @@ while [ -n "${1:-}" ]; do
elif [ "$1" = "--paranoid" ]; then
opt_paranoid=1
shift
elif [ "$1" = "--extra" ]; then
opt_extra=1
shift
elif [ "$1" = "--hw-only" ]; then
opt_hw_only=1
shift
@@ -166,7 +169,7 @@ while [ -n "${1:-}" ]; do
case "$2" in
help)
echo "The following parameters are supported for --variant (can be used multiple times):"
echo "1, 2, 3, 3a, 4, msbds, mfbds, mlpds, mdsum, l1tf, taa, mcepsc, srbds, zenbleed, downfall, inception, reptar, tsa, tsa-sq, tsa-l1, its, vmscape, bpi"
echo "1, 2, 3, 3a, 4, msbds, mfbds, mlpds, mdsum, l1tf, taa, mcepsc, srbds, div0, zenbleed, downfall, retbleed, inception, reptar, rfds, tsa, tsa-sq, tsa-l1, its, vmscape, bpi, sls"
exit 0
;;
1)
@@ -221,6 +224,10 @@ while [ -n "${1:-}" ]; do
opt_cve_list="$opt_cve_list CVE-2020-0543"
opt_cve_all=0
;;
div0)
opt_cve_list="$opt_cve_list CVE-2023-20588"
opt_cve_all=0
;;
zenbleed)
opt_cve_list="$opt_cve_list CVE-2023-20593"
opt_cve_all=0
@@ -229,6 +236,10 @@ while [ -n "${1:-}" ]; do
opt_cve_list="$opt_cve_list CVE-2022-40982"
opt_cve_all=0
;;
retbleed)
opt_cve_list="$opt_cve_list CVE-2022-29900 CVE-2022-29901"
opt_cve_all=0
;;
inception)
opt_cve_list="$opt_cve_list CVE-2023-20569"
opt_cve_all=0
@@ -237,6 +248,10 @@ while [ -n "${1:-}" ]; do
opt_cve_list="$opt_cve_list CVE-2023-23583"
opt_cve_all=0
;;
rfds)
opt_cve_list="$opt_cve_list CVE-2023-28746"
opt_cve_all=0
;;
tsa)
opt_cve_list="$opt_cve_list CVE-2024-36350 CVE-2024-36357"
opt_cve_all=0
@@ -261,6 +276,10 @@ while [ -n "${1:-}" ]; do
opt_cve_list="$opt_cve_list CVE-2024-45332"
opt_cve_all=0
;;
sls)
opt_cve_list="$opt_cve_list CVE-0000-0001"
opt_cve_all=0
;;
*)
echo "$0: error: invalid parameter '$2' for --variant, see --variant help for a list" >&2
exit 255

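The `--variant` cases above all follow the same pattern: append the CVE id(s) to a space-separated `opt_cve_list` and clear `opt_cve_all`. A standalone sketch of that accumulation (the `add_variant` driver is made up for illustration; the CVE ids are taken from the hunk):

```shell
# Hypothetical driver mimicking the --variant case arms from the diff above.
opt_cve_list=''
opt_cve_all=1
add_variant() {
    case "$1" in
        div0) opt_cve_list="$opt_cve_list CVE-2023-20588"; opt_cve_all=0 ;;
        rfds) opt_cve_list="$opt_cve_list CVE-2023-28746"; opt_cve_all=0 ;;
        sls)  opt_cve_list="$opt_cve_list CVE-0000-0001";  opt_cve_all=0 ;;
        *)    echo "invalid variant '$1'" >&2; return 255 ;;
    esac
}
add_variant div0
add_variant sls
echo "$opt_cve_list"   # prints the accumulated list (note the leading space)
```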
@@ -153,6 +153,7 @@ write_msr_one_core() {
readonly MSR_IA32_PLATFORM_ID=0x17
readonly MSR_IA32_SPEC_CTRL=0x48
readonly MSR_IA32_ARCH_CAPABILITIES=0x10a
readonly MSR_IA32_TSX_FORCE_ABORT=0x10f
readonly MSR_IA32_TSX_CTRL=0x122
readonly MSR_IA32_MCU_OPT_CTRL=0x123
readonly READ_MSR_RET_OK=0

@@ -130,21 +130,25 @@ parse_cpu_details() {
if [ -z "$cpu_ucode" ] && [ "$g_os" != Linux ]; then
load_cpuid
if [ -e ${BSD_CPUCTL_DEV_BASE}0 ]; then
# init MSR with NULLs
cpucontrol -m 0x8b=0 ${BSD_CPUCTL_DEV_BASE}0
# call CPUID
cpucontrol -i 1 ${BSD_CPUCTL_DEV_BASE}0 >/dev/null
# read MSR
cpu_ucode=$(cpucontrol -m 0x8b ${BSD_CPUCTL_DEV_BASE}0 | awk '{print $3}')
# convert to decimal
cpu_ucode=$((cpu_ucode))
# convert back to hex
cpu_ucode=$(printf "0x%x" "$cpu_ucode")
if [ "$cpu_vendor" = AuthenticAMD ]; then
# AMD: read MSR_PATCHLEVEL (0xC0010058) directly
cpu_ucode=$(cpucontrol -m 0xC0010058 ${BSD_CPUCTL_DEV_BASE}0 2>/dev/null | awk '{print $3}')
elif [ "$cpu_vendor" = GenuineIntel ]; then
# Intel: write 0 to IA32_BIOS_SIGN_ID, execute CPUID, then read back
cpucontrol -m 0x8b=0 ${BSD_CPUCTL_DEV_BASE}0 2>/dev/null
cpucontrol -i 1 ${BSD_CPUCTL_DEV_BASE}0 >/dev/null 2>&1
cpu_ucode=$(cpucontrol -m 0x8b ${BSD_CPUCTL_DEV_BASE}0 2>/dev/null | awk '{print $3}')
fi
if [ -n "$cpu_ucode" ]; then
# convert to decimal then back to hex
cpu_ucode=$((cpu_ucode))
cpu_ucode=$(printf "0x%x" "$cpu_ucode")
fi
fi
fi

# if we got no cpu_ucode (e.g. we're in a vm), fall back to 0x0
: "${cpu_ucode:=0x0}"
# if we got no cpu_ucode (e.g. we're in a vm), leave it empty
# so that we can detect this case and avoid false positives

# on non-x86 systems (e.g. ARM), these fields may not exist in cpuinfo, fall back to 0
: "${cpu_family:=0}"
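The "convert to decimal then back to hex" step above canonicalizes whatever `cpucontrol`/`awk` returned. A self-contained sketch of that normalization, guarding the empty case as the new code does (the `normalize_ucode` wrapper is made up):

```shell
# Normalize a microcode version string the way the hunk does: only when
# non-empty, round-trip through shell arithmetic so the hex form is canonical.
normalize_ucode() {
    cpu_ucode="$1"
    if [ -n "$cpu_ucode" ]; then
        cpu_ucode=$((cpu_ucode))              # accepts 0x-prefixed or decimal input
        cpu_ucode=$(printf "0x%x" "$cpu_ucode")
    fi
    printf '%s' "${cpu_ucode:-unknown}"       # empty stays distinguishable
}
```

This is why the later `g_ucode_found` printf switched from `0x%x` to `0x%s` with `${cpu_ucode:-unknown}`: an empty value must not silently become `0x0`.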
@@ -159,9 +163,11 @@ parse_cpu_details() {
g_mockme=$(printf "%b\n%b" "$g_mockme" "SMC_MOCK_CPU_UCODE='$cpu_ucode'")
fi

echo "$cpu_ucode" | grep -q ^0x && cpu_ucode=$((cpu_ucode))
g_ucode_found=$(printf "family 0x%x model 0x%x stepping 0x%x ucode 0x%x cpuid 0x%x pfid 0x%x" \
"$cpu_family" "$cpu_model" "$cpu_stepping" "$cpu_ucode" "$cpu_cpuid" "$cpu_platformid")
if [ -n "$cpu_ucode" ]; then
echo "$cpu_ucode" | grep -q ^0x && cpu_ucode=$((cpu_ucode))
fi
g_ucode_found=$(printf "family 0x%x model 0x%x stepping 0x%x ucode 0x%s cpuid 0x%x pfid 0x%x" \
"$cpu_family" "$cpu_model" "$cpu_stepping" "${cpu_ucode:-unknown}" "$cpu_cpuid" "$cpu_platformid")

g_parse_cpu_details_done=1
}

@@ -210,7 +210,7 @@ has_zenbleed_fixed_firmware() {
model_high=$(echo "$tuple" | cut -d, -f2)
fwver=$(echo "$tuple" | cut -d, -f3)
if [ $((cpu_model)) -ge $((model_low)) ] && [ $((cpu_model)) -le $((model_high)) ]; then
if [ $((cpu_ucode)) -ge $((fwver)) ]; then
if [ -n "$cpu_ucode" ] && [ $((cpu_ucode)) -ge $((fwver)) ]; then
g_zenbleed_fw=0 # true
break
else

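The tuple parsing and range check in `has_zenbleed_fixed_firmware()` can be sketched standalone. The tuple values and CPU fields below are made up, and the `-f1` extraction of `model_low` is an assumption (that line sits outside the hunk's context):

```shell
# Sketch: parse a "model_low,model_high,fwver" tuple and apply the model-range
# plus firmware-version test, including the new empty-ucode guard from the hunk.
tuple="0x30,0x4f,0x0830107a"          # hypothetical tuple
model_low=$(echo "$tuple" | cut -d, -f1)
model_high=$(echo "$tuple" | cut -d, -f2)
fwver=$(echo "$tuple" | cut -d, -f3)
cpu_model=0x31                         # hypothetical CPU
cpu_ucode=0x0830107b
if [ $((cpu_model)) -ge $((model_low)) ] && [ $((cpu_model)) -le $((model_high)) ] \
    && [ -n "$cpu_ucode" ] && [ $((cpu_ucode)) -ge $((fwver)) ]; then
    result=fixed
else
    result=vulnerable
fi
```

Without the `[ -n "$cpu_ucode" ]` guard, an empty `cpu_ucode` would make `$((cpu_ucode))` evaluate to 0 and wrongly report old firmware as compared against 0.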
@@ -42,6 +42,10 @@ is_latest_known_ucode() {
ret_is_latest_known_ucode_latest="couldn't get your cpuid"
return 2
fi
if [ -z "$cpu_ucode" ]; then
ret_is_latest_known_ucode_latest="couldn't get your microcode version"
return 2
fi
ret_is_latest_known_ucode_latest="latest microcode version for your CPU model is unknown"
if is_intel; then
brand_prefix=I

@@ -467,6 +467,26 @@ check_cpu() {
fi
fi

# IBPB_RET: CPUID EAX=0x80000008, ECX=0x00 return EBX[30] indicates IBPB also flushes
# return predictions (Zen4+). Without this bit, IBPB alone does not clear the return
# predictor, requiring an additional RSB fill (kernel X86_BUG_IBPB_NO_RET fix).
cap_ibpb_ret=''
if is_amd || is_hygon; then
pr_info_nol " * CPU indicates IBPB flushes return predictions: "
read_cpuid 0x80000008 0x0 $EBX 30 1 1
ret=$?
if [ $ret = $READ_CPUID_RET_OK ]; then
cap_ibpb_ret=1
pstatus green YES "IBPB_RET feature bit"
elif [ $ret = $READ_CPUID_RET_KO ]; then
cap_ibpb_ret=0
pstatus yellow NO
else
cap_ibpb_ret=-1
pstatus yellow UNKNOWN "$ret_read_cpuid_msg"
fi
fi

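`read_cpuid 0x80000008 0x0 $EBX 30 1 1` asks the script's helper for EBX bit 30 of that leaf. The underlying bit arithmetic is easy to show in isolation; the sample EBX values below are made up:

```shell
# Sketch: extract bit 30 (IBPB_RET) from a sample EBX value, as the helper
# does internally. Sample register values are hypothetical.
ebx_no=$((0x08001000))            # bit 30 clear
ebx_yes=$((ebx_no | 1 << 30))     # bit 30 set
bit30() { echo $(( ($1 >> 30) & 1 )); }
```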
# STIBP
pr_info " * Single Thread Indirect Branch Predictors (STIBP)"
pr_info_nol " * SPEC_CTRL MSR is available: "
@@ -734,6 +754,8 @@ check_cpu() {
cap_tsx_ctrl_msr=-1
cap_gds_ctrl=-1
cap_gds_no=-1
cap_rfds_no=-1
cap_rfds_clear=-1
cap_its_no=-1
if [ "$cap_arch_capabilities" = -1 ]; then
pstatus yellow UNKNOWN
@@ -749,6 +771,8 @@ check_cpu() {
cap_tsx_ctrl_msr=0
cap_gds_ctrl=0
cap_gds_no=0
cap_rfds_no=0
cap_rfds_clear=0
cap_its_no=0
pstatus yellow NO
else
@@ -765,6 +789,8 @@ check_cpu() {
cap_tsx_ctrl_msr=0
cap_gds_ctrl=0
cap_gds_no=0
cap_rfds_no=0
cap_rfds_clear=0
cap_its_no=0
if [ $ret = $READ_MSR_RET_OK ]; then
capabilities=$ret_read_msr_value
@@ -781,8 +807,10 @@ check_cpu() {
[ $((ret_read_msr_value_lo >> 8 & 1)) -eq 1 ] && cap_taa_no=1
[ $((ret_read_msr_value_lo >> 25 & 1)) -eq 1 ] && cap_gds_ctrl=1
[ $((ret_read_msr_value_lo >> 26 & 1)) -eq 1 ] && cap_gds_no=1
[ $((ret_read_msr_value_lo >> 27 & 1)) -eq 1 ] && cap_rfds_no=1
[ $((ret_read_msr_value_lo >> 28 & 1)) -eq 1 ] && cap_rfds_clear=1
[ $((ret_read_msr_value_hi >> 30 & 1)) -eq 1 ] && cap_its_no=1
pr_debug "capabilities says rdcl_no=$cap_rdcl_no ibrs_all=$cap_ibrs_all rsba=$cap_rsba l1dflush_no=$cap_l1dflush_no ssb_no=$cap_ssb_no mds_no=$cap_mds_no taa_no=$cap_taa_no pschange_msc_no=$cap_pschange_msc_no its_no=$cap_its_no"
pr_debug "capabilities says rdcl_no=$cap_rdcl_no ibrs_all=$cap_ibrs_all rsba=$cap_rsba l1dflush_no=$cap_l1dflush_no ssb_no=$cap_ssb_no mds_no=$cap_mds_no taa_no=$cap_taa_no pschange_msc_no=$cap_pschange_msc_no rfds_no=$cap_rfds_no rfds_clear=$cap_rfds_clear its_no=$cap_its_no"
if [ "$cap_ibrs_all" = 1 ]; then
pstatus green YES
else
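The IA32_ARCH_CAPABILITIES decoding above is pure bit shifting on the MSR's low dword. A standalone sketch with a made-up MSR value (bits 8 and 27 set) showing the same extraction, including the new RFDS bits:

```shell
# Decode a few IA32_ARCH_CAPABILITIES low-dword bits as the hunk does.
# The sample value is hypothetical: TAA_NO (bit 8) and RFDS_NO (bit 27) set.
ret_read_msr_value_lo=$(( (1 << 27) | (1 << 8) ))
cap_taa_no=0; cap_gds_no=0; cap_rfds_no=0; cap_rfds_clear=0
[ $((ret_read_msr_value_lo >> 8  & 1)) -eq 1 ] && cap_taa_no=1
[ $((ret_read_msr_value_lo >> 26 & 1)) -eq 1 ] && cap_gds_no=1
[ $((ret_read_msr_value_lo >> 27 & 1)) -eq 1 ] && cap_rfds_no=1
[ $((ret_read_msr_value_lo >> 28 & 1)) -eq 1 ] && cap_rfds_clear=1
echo "taa_no=$cap_taa_no gds_no=$cap_gds_no rfds_no=$cap_rfds_no rfds_clear=$cap_rfds_clear"
```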
@@ -867,6 +895,8 @@ check_cpu() {
pstatus yellow NO
fi

# IA32_TSX_CTRL (MSR 0x122): architectural way to disable TSX, available on
# Cascade Lake and newer, and some Coffee Lake steppings via microcode update
if [ "$cap_tsx_ctrl_msr" = 1 ]; then
read_msr $MSR_IA32_TSX_CTRL
ret=$?
@@ -941,6 +971,24 @@ check_cpu() {
pstatus yellow NO
fi

pr_info_nol " * CPU explicitly indicates not being affected by RFDS (RFDS_NO): "
if [ "$cap_rfds_no" = -1 ]; then
pstatus yellow UNKNOWN "couldn't read MSR"
elif [ "$cap_rfds_no" = 1 ]; then
pstatus green YES
else
pstatus yellow NO
fi

pr_info_nol " * CPU microcode supports clearing register files (RFDS_CLEAR): "
if [ "$cap_rfds_clear" = -1 ]; then
pstatus yellow UNKNOWN "couldn't read MSR"
elif [ "$cap_rfds_clear" = 1 ]; then
pstatus green YES
else
pstatus yellow NO
fi

fi

if is_amd || is_hygon; then
@@ -1043,6 +1091,52 @@ check_cpu() {
pstatus yellow UNKNOWN "$ret_read_cpuid_msg"
fi

pr_info_nol " * CPU supports TSX Force Abort (TSX_FORCE_ABORT): "
ret=$READ_CPUID_RET_KO
cap_tsx_force_abort=0
if is_intel; then
read_cpuid 0x7 0x0 $EDX 13 1 1
ret=$?
fi
if [ $ret = $READ_CPUID_RET_OK ]; then
cap_tsx_force_abort=1
pstatus blue YES
elif [ $ret = $READ_CPUID_RET_KO ]; then
pstatus yellow NO
else
cap_tsx_force_abort=-1
pstatus yellow UNKNOWN "$ret_read_cpuid_msg"
fi

# IA32_TSX_FORCE_ABORT (MSR 0x10F): stopgap for older Skylake/Kaby Lake CPUs that
# don't support IA32_TSX_CTRL, forces all RTM transactions to abort via microcode update
if [ "$cap_tsx_force_abort" = 1 ]; then
read_msr $MSR_IA32_TSX_FORCE_ABORT
ret=$?
if [ "$ret" = $READ_MSR_RET_OK ]; then
cap_tsx_force_abort_rtm_disable=$((ret_read_msr_value_lo >> 0 & 1))
cap_tsx_force_abort_cpuid_clear=$((ret_read_msr_value_lo >> 1 & 1))
fi

pr_info_nol " * TSX_FORCE_ABORT MSR indicates all TSX transactions are aborted: "
if [ "$cap_tsx_force_abort_rtm_disable" = 1 ]; then
pstatus blue YES
elif [ "$cap_tsx_force_abort_rtm_disable" = 0 ]; then
pstatus blue NO
else
pstatus yellow UNKNOWN "couldn't read MSR"
fi

pr_info_nol " * TSX_FORCE_ABORT MSR indicates TSX CPUID bit is cleared: "
if [ "$cap_tsx_force_abort_cpuid_clear" = 1 ]; then
pstatus blue YES
elif [ "$cap_tsx_force_abort_cpuid_clear" = 0 ]; then
pstatus blue NO
else
pstatus yellow UNKNOWN "couldn't read MSR"
fi
fi

pr_info_nol " * CPU supports Software Guard Extensions (SGX): "
ret=$READ_CPUID_RET_KO
cap_sgx=0
@@ -1076,11 +1170,11 @@ check_cpu() {
read_msr $MSR_IA32_MCU_OPT_CTRL
ret=$?
if [ $ret = $READ_MSR_RET_OK ]; then
if [ "$ret_read_msr_value" = "0000000000000000" ]; then
#SRBDS mitigation control exists and is enabled via microcode
if [ "$((ret_read_msr_value_lo >> 0 & 1))" = 0 ]; then
#SRBDS mitigation control exists and is enabled via microcode (RNGDS_MITG_DIS bit is 0)
cap_srbds_on=1
else
#SRBDS mitigation control exists but is disabled via microcode
#SRBDS mitigation control exists but is disabled via microcode (RNGDS_MITG_DIS bit is 1)
cap_srbds_on=0
fi
else

@@ -53,7 +53,17 @@ check_mds_bsd() {
else
kernel_mds_state=inactive
fi
# https://github.com/freebsd/freebsd/blob/master/sys/x86/x86/cpu_machdep.c#L953
# possible values for hw.mds_disable_state (FreeBSD cpu_machdep.c):
# - inactive: no mitigation (non-Intel, disabled, or not needed)
# - VERW: microcode-based VERW instruction
# - software IvyBridge: SW sequence for Ivy Bridge
# - software Broadwell: SW sequence for Broadwell
# - software Skylake SSE: SW sequence for Skylake (SSE)
# - software Skylake AVX: SW sequence for Skylake (AVX)
# - software Skylake AVX512: SW sequence for Skylake (AVX-512)
# - software Silvermont: SW sequence for Silvermont
# - unknown: fallback if handler doesn't match any known
# ref: https://github.com/freebsd/freebsd-src/blob/main/sys/x86/x86/cpu_machdep.c
case "$kernel_mds_state" in
inactive) pstatus yellow NO ;;
VERW) pstatus green YES "with microcode support" ;;
@@ -85,7 +95,23 @@ check_mds_bsd() {
pvulnstatus "$cve" VULN "Your microcode supports mitigation, but your kernel doesn't, upgrade it to mitigate the vulnerability"
fi
else
if [ "$kernel_md_clear" = 1 ]; then
if [ "$kernel_md_clear" = 1 ] && [ "$opt_live" = 1 ]; then
# no MD_CLEAR in microcode, but FreeBSD may still have software-only mitigation active
case "$kernel_mds_state" in
software*)
if [ "$opt_paranoid" = 1 ]; then
pvulnstatus "$cve" VULN "Software-only mitigation is active, but in paranoid mode a microcode-based mitigation is required"
elif [ "$kernel_smt_allowed" = 1 ]; then
pvulnstatus "$cve" OK "Software-only mitigation is active, but SMT is enabled so cross-thread attacks are still possible"
else
pvulnstatus "$cve" OK "Software-only mitigation is active (no microcode update required for this CPU)"
fi
;;
*)
pvulnstatus "$cve" VULN "Your kernel supports mitigation, but your CPU microcode also needs to be updated to mitigate the vulnerability"
;;
esac
elif [ "$kernel_md_clear" = 1 ]; then
pvulnstatus "$cve" VULN "Your kernel supports mitigation, but your CPU microcode also needs to be updated to mitigate the vulnerability"
else
pvulnstatus "$cve" VULN "Neither your kernel or your microcode support mitigation, upgrade both to mitigate the vulnerability"

286
src/vulns-helpers/check_sls.sh
Normal file
@@ -0,0 +1,286 @@
# vim: set ts=4 sw=4 sts=4 et:
###############################
# Straight-Line Speculation (SLS) — supplementary check (--extra only)
#
# SLS: x86 CPUs may speculatively execute instructions past unconditional
# control flow changes (RET, indirect JMP/CALL). Mitigated at compile time
# by CONFIG_MITIGATION_SLS (formerly CONFIG_SLS before kernel 6.8), which
# enables -mharden-sls=all to insert INT3 after these instructions.
# No sysfs interface, no MSR, no CPU feature flag.
# Related: CVE-2021-26341 (AMD Zen1/Zen2 direct-branch SLS subset).

# Heuristic: scan the kernel .text section for indirect call/jmp thunks
# (retpoline-style stubs), then check whether tail-call JMPs to those thunks
# are followed by INT3 (0xcc). With SLS enabled: >80%. Without: <20%.
#
# Thunk signature: e8 01 00 00 00 cc 48 89 XX 24
# call +1; int3; mov <reg>,(%rsp); ...
# Tail-call pattern: e9 XX XX XX XX [cc?]
# jmp <thunk>; [int3 if SLS]

# Perl implementation of the SLS heuristic byte scanner.
# Args: $1 = path to raw .text binary (from objcopy -O binary -j .text)
# Output: thunks=N jmps=N sls=N
#
# The heuristic looks for two types of thunks and counts how many jmp rel32
# instructions targeting them are followed by INT3 (the SLS mitigation):
#
# 1. Indirect call/jmp thunks (retpoline stubs used for indirect tail calls):
# e8 01 00 00 00 cc 48 89 XX 24 (call +1; int3; mov <reg>,(%rsp))
#
# 2. Return thunk (used for all function returns via jmp __x86_return_thunk):
# c3 90 90 90 90 cc cc cc cc cc (ret; nop*4; int3*5+)
# This is the most common jmp target in retpoline-enabled kernels.
#
# Some kernels only use indirect thunks, some only the return thunk, and some
# use both. We check both and combine the results.
_sls_heuristic_perl() {
perl -e '
use strict;
use warnings;
local $/;
open my $fh, "<:raw", $ARGV[0] or die "open: $!";
my $text = <$fh>;
close $fh;
my $len = length($text);

# Collect two types of thunks separately, as different kernels
# apply SLS to different thunk types.

my (%indirect_thunks, %return_thunks);

# Pattern 1: indirect call/jmp thunks (retpoline stubs)
while ($text =~ /\xe8\x01\x00\x00\x00\xcc\x48\x89.\x24/gs) {
$indirect_thunks{ pos($text) - length($&) } = 1;
}

# Pattern 2: return thunk (ret; nop*4; int3*5)
while ($text =~ /\xc3\x90\x90\x90\x90\xcc\xcc\xcc\xcc\xcc/gs) {
$return_thunks{ pos($text) - length($&) } = 1;
}

my $n_indirect = scalar keys %indirect_thunks;
my $n_return = scalar keys %return_thunks;

if ($n_indirect + $n_return == 0) {
print "thunks=0 jmps=0 sls=0\n";
exit 0;
}

# Count jmps to each thunk type separately
my ($ind_total, $ind_sls) = (0, 0);
my ($ret_total, $ret_sls) = (0, 0);

for (my $i = 0; $i + 5 < $len; $i++) {
next unless substr($text, $i, 1) eq "\xe9";
my $rel = unpack("V", substr($text, $i + 1, 4));
$rel -= 4294967296 if $rel >= 2147483648;
my $target = $i + 5 + $rel;
my $has_int3 = ($i + 5 < $len && substr($text, $i + 5, 1) eq "\xcc") ? 1 : 0;
if (exists $indirect_thunks{$target}) {
$ind_total++;
$ind_sls += $has_int3;
}
if (exists $return_thunks{$target}) {
$ret_total++;
$ret_sls += $has_int3;
}
}

# Use whichever thunk type has jmps; prefer indirect thunks if both have data
my ($total, $sls, $n_thunks);
if ($ind_total > 0) {
($total, $sls, $n_thunks) = ($ind_total, $ind_sls, $n_indirect);
} elsif ($ret_total > 0) {
($total, $sls, $n_thunks) = ($ret_total, $ret_sls, $n_return);
} else {
($total, $sls, $n_thunks) = (0, 0, $n_indirect + $n_return);
}

printf "thunks=%d jmps=%d sls=%d\n", $n_thunks, $total, $sls;
' "$1" 2>/dev/null
}

# Awk fallback implementation of the SLS heuristic byte scanner.
# Slower than perl but uses only POSIX tools (od + awk).
# Args: $1 = path to raw .text binary (from objcopy -O binary -j .text)
# Output: thunks=N jmps=N sls=N
_sls_heuristic_awk() {
od -An -tu1 -v "$1" | awk '
{
for (i = 1; i <= NF; i++) b[n++] = $i + 0
}
END {
# Pattern 1: indirect call/jmp thunks
# 232 1 0 0 0 204 72 137 XX 36 (e8 01 00 00 00 cc 48 89 XX 24)
for (i = 0; i + 9 < n; i++) {
if (b[i]==232 && b[i+1]==1 && b[i+2]==0 && b[i+3]==0 && \
b[i+4]==0 && b[i+5]==204 && b[i+6]==72 && b[i+7]==137 && \
b[i+9]==36) {
ind[i] = 1
n_ind++
}
}
# Pattern 2: return thunk (ret; nop*4; int3*5)
# 195 144 144 144 144 204 204 204 204 204 (c3 90 90 90 90 cc cc cc cc cc)
for (i = 0; i + 9 < n; i++) {
if (b[i]==195 && b[i+1]==144 && b[i+2]==144 && b[i+3]==144 && \
b[i+4]==144 && b[i+5]==204 && b[i+6]==204 && b[i+7]==204 && \
b[i+8]==204 && b[i+9]==204) {
ret[i] = 1
n_ret++
}
}
if (n_ind + n_ret == 0) { print "thunks=0 jmps=0 sls=0"; exit }

# Count jmps to each thunk type separately
ind_total = 0; ind_sls = 0
ret_total = 0; ret_sls = 0
for (i = 0; i + 5 < n; i++) {
if (b[i] != 233) continue
rel = b[i+1] + b[i+2]*256 + b[i+3]*65536 + b[i+4]*16777216
if (rel >= 2147483648) rel -= 4294967296
target = i + 5 + rel
has_int3 = (b[i+5] == 204) ? 1 : 0
if (target in ind) { ind_total++; ind_sls += has_int3 }
if (target in ret) { ret_total++; ret_sls += has_int3 }
}

# Prefer indirect thunks if they have data, else fall back to return thunk
if (ind_total > 0)
printf "thunks=%d jmps=%d sls=%d\n", n_ind, ind_total, ind_sls
else if (ret_total > 0)
printf "thunks=%d jmps=%d sls=%d\n", n_ret, ret_total, ret_sls
else
printf "thunks=%d jmps=0 sls=0\n", n_ind + n_ret
}' 2>/dev/null
}

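Both scanners share the same core step: decode the 4-byte little-endian displacement of a `jmp rel32` (opcode `0xe9`) and sign-extend it to find the jump target. The same arithmetic works in plain shell; the byte values below are made up (`e9 f6 ff ff ff` at offset 16 encodes a jump of -10 relative to the end of the 5-byte instruction):

```shell
# Sign-extend a 32-bit little-endian displacement and compute the jmp target,
# mirroring the perl/awk scanners above. Sample bytes are hypothetical.
b1=246 b2=255 b3=255 b4=255    # displacement bytes f6 ff ff ff (little-endian)
i=16                           # offset of the e9 opcode in the .text blob
rel=$(( b1 + b2*256 + b3*65536 + b4*16777216 ))
[ "$rel" -ge 2147483648 ] && rel=$((rel - 4294967296))   # sign-extend
target=$(( i + 5 + rel ))      # target is relative to the next instruction
echo "rel=$rel target=$target"
```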
check_CVE_0000_0001_linux() {
local status sys_interface_available msg
status=UNK
sys_interface_available=0
msg=''

# No sysfs interface for SLS
# sys_interface_available stays 0

if [ "$opt_sysfs_only" != 1 ]; then

# --- CPU affection check ---
if ! is_cpu_affected "$cve"; then
pvulnstatus "$cve" OK "your CPU is not affected"
return
fi

# --- arm64: no kernel mitigation available ---
local _sls_arch
_sls_arch=$(uname -m 2>/dev/null || echo unknown)
if echo "$_sls_arch" | grep -qw 'aarch64'; then
pvulnstatus "$cve" VULN "no kernel mitigation available for arm64 SLS (CVE-2020-13844)"
explain "Your ARM processor is affected by Straight-Line Speculation (CVE-2020-13844).\n" \
"GCC and Clang support -mharden-sls=all for aarch64, which inserts SB (Speculation Barrier)\n" \
"or DSB+ISB after RET and BR instructions. However, the Linux kernel does not enable this flag:\n" \
"patches to add CONFIG_HARDEN_SLS_ALL were submitted in 2021 but were rejected upstream.\n" \
"There is currently no kernel-level mitigation for SLS on arm64."
return
fi

# --- method 1: kernel config check (x86_64) ---
local _sls_config=''
if [ -n "$opt_config" ] && [ -r "$opt_config" ]; then
pr_info_nol " * Kernel compiled with SLS mitigation: "
if grep -qE '^CONFIG_(MITIGATION_)?SLS=y' "$opt_config"; then
_sls_config=1
pstatus green YES
else
_sls_config=0
pstatus yellow NO
fi
fi

# --- method 2: kernel image heuristic (fallback when no config) ---
local _sls_heuristic=''
if [ -z "$_sls_config" ]; then
pr_info_nol " * Kernel compiled with SLS mitigation: "
if [ -n "$g_kernel_err" ]; then
pstatus yellow UNKNOWN "$g_kernel_err"
elif [ -z "$g_kernel" ]; then
pstatus yellow UNKNOWN "no kernel image available"
elif ! command -v "${opt_arch_prefix}objcopy" >/dev/null 2>&1; then
pstatus yellow UNKNOWN "missing '${opt_arch_prefix}objcopy' tool, usually in the binutils package"
else
local _sls_result
g_sls_text_tmp=$(mktemp -t smc-sls-text-XXXXXX)

if ! "${opt_arch_prefix}objcopy" -O binary -j .text "$g_kernel" "$g_sls_text_tmp" 2>/dev/null || [ ! -s "$g_sls_text_tmp" ]; then
pstatus yellow UNKNOWN "failed to extract .text section from kernel image"
rm -f "$g_sls_text_tmp"
g_sls_text_tmp=''
else
_sls_result=''
if command -v perl >/dev/null 2>&1; then
_sls_result=$(_sls_heuristic_perl "$g_sls_text_tmp")
elif command -v awk >/dev/null 2>&1; then
_sls_result=$(_sls_heuristic_awk "$g_sls_text_tmp")
fi
rm -f "$g_sls_text_tmp"
g_sls_text_tmp=''

if [ -z "$_sls_result" ]; then
pstatus yellow UNKNOWN "missing 'perl' or 'awk' tool for heuristic scan"
else
local _sls_thunks _sls_jmps _sls_int3
_sls_thunks=$(echo "$_sls_result" | sed -n 's/.*thunks=\([0-9]*\).*/\1/p')
_sls_jmps=$(echo "$_sls_result" | sed -n 's/.*jmps=\([0-9]*\).*/\1/p')
_sls_int3=$(echo "$_sls_result" | sed -n 's/.*sls=\([0-9]*\).*/\1/p')
pr_debug "sls heuristic: thunks=$_sls_thunks jmps=$_sls_jmps int3=$_sls_int3"

if [ "${_sls_thunks:-0}" = 0 ] || [ "${_sls_jmps:-0}" = 0 ]; then
pstatus yellow UNKNOWN "no retpoline indirect thunks found in kernel image"
else
local _sls_pct=$((_sls_int3 * 100 / _sls_jmps))
if [ "$_sls_pct" -ge 80 ]; then
_sls_heuristic=1
pstatus green YES "$_sls_int3/$_sls_jmps indirect tail-call JMPs hardened (${_sls_pct}%%)"
elif [ "$_sls_pct" -le 20 ]; then
_sls_heuristic=0
pstatus yellow NO "$_sls_int3/$_sls_jmps indirect tail-call JMPs hardened (${_sls_pct}%%)"
else
pstatus yellow UNKNOWN "$_sls_int3/$_sls_jmps indirect tail-call JMPs hardened (${_sls_pct}%%, inconclusive)"
fi
fi
fi
fi
fi
fi

# --- verdict (x86_64) ---
if [ "$_sls_config" = 1 ] || [ "$_sls_heuristic" = 1 ]; then
pvulnstatus "$cve" OK "kernel compiled with SLS mitigation"
explain "Your kernel was compiled with CONFIG_MITIGATION_SLS=y (or CONFIG_SLS=y on kernels before 6.8),\n" \
"which enables the GCC flag -mharden-sls=all to insert INT3 instructions after unconditional\n" \
"control flow changes, blocking straight-line speculation."
elif [ "$_sls_config" = 0 ] || [ "$_sls_heuristic" = 0 ]; then
pvulnstatus "$cve" VULN "kernel not compiled with SLS mitigation"
explain "Recompile your kernel with CONFIG_MITIGATION_SLS=y (or CONFIG_SLS=y on kernels before 6.8).\n" \
"This enables the GCC flag -mharden-sls=all, which inserts INT3 after unconditional control flow\n" \
"instructions to block straight-line speculation. Note: this option defaults to off in most kernels\n" \
"and incurs ~2.4%% text size overhead."
else
pvulnstatus "$cve" UNK "couldn't determine SLS mitigation status"
fi
elif [ "$sys_interface_available" = 0 ]; then
msg="/sys vulnerability interface use forced, but there is no sysfs entry for SLS"
status=UNK
pvulnstatus "$cve" "$status" "$msg"
fi
}

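The scanner output parsing and the percentage thresholds used by the function above can be exercised standalone; the sample result string below is made up:

```shell
# Parse "thunks=N jmps=N sls=N" with the same sed expressions as the check,
# then compute the hardened percentage. Sample scanner output is hypothetical.
_sls_result="thunks=12 jmps=3400 sls=3350"
_sls_thunks=$(echo "$_sls_result" | sed -n 's/.*thunks=\([0-9]*\).*/\1/p')
_sls_jmps=$(echo "$_sls_result" | sed -n 's/.*jmps=\([0-9]*\).*/\1/p')
_sls_int3=$(echo "$_sls_result" | sed -n 's/.*sls=\([0-9]*\).*/\1/p')
_sls_pct=$((_sls_int3 * 100 / _sls_jmps))   # integer division, 0..100
echo "pct=$_sls_pct"
```

With `pct` at or above 80 the heuristic calls the kernel mitigated; at or below 20, unmitigated; anything in between is reported inconclusive.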
check_CVE_0000_0001_bsd() {
if ! is_cpu_affected "$cve"; then
pvulnstatus "$cve" OK "your CPU vendor reported your CPU model as not affected"
else
pvulnstatus "$cve" UNK "your CPU is affected, but mitigation detection has not yet been implemented for BSD in this script"
fi
}
15
src/vulns/CVE-0000-0001.sh
Normal file
@@ -0,0 +1,15 @@
# vim: set ts=4 sw=4 sts=4 et:
###############################
# CVE-0000-0001, SLS, Straight-Line Speculation
# Supplementary check, only runs under --extra

# shellcheck disable=SC2034
check_CVE_0000_0001() {
# SLS is a supplementary check: skip it in the default "all CVEs" run
# unless --extra is passed, but always run when explicitly selected
# via --variant sls or --cve CVE-0000-0001
if [ "$opt_cve_all" = 1 ] && [ "$opt_extra" != 1 ]; then
return 0
fi
check_cve 'CVE-0000-0001'
}
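The gating condition above can be reduced to a pure function of the two option flags; a minimal sketch (the `should_run_sls` wrapper is made up):

```shell
# Sketch of the supplementary-check gate: skip in the default "all CVEs" run
# unless --extra is set; always run when the CVE was explicitly selected
# (which sets opt_cve_all=0). Function name is hypothetical.
should_run_sls() {
    # $1 = opt_cve_all, $2 = opt_extra
    if [ "$1" = 1 ] && [ "$2" != 1 ]; then
        echo skip
    else
        echo run
    fi
}
```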
@@ -70,7 +70,19 @@ check_CVE_2019_11135_linux() {
else
if [ "$opt_paranoid" = 1 ]; then
# in paranoid mode, TSX or SMT enabled are not OK, even if TAA is mitigated
if ! echo "$ret_sys_interface_check_fullmsg" | grep -qF 'TSX disabled'; then
# first check sysfs, then fall back to MSR-based detection for older kernels
# that may not report TSX as disabled even when microcode has done so
tsx_disabled=0
if echo "$ret_sys_interface_check_fullmsg" | grep -qF 'TSX disabled'; then
tsx_disabled=1
elif [ "$cap_tsx_ctrl_rtm_disable" = 1 ] && [ "$cap_tsx_ctrl_cpuid_clear" = 1 ]; then
# TSX disabled via IA32_TSX_CTRL MSR (0x122)
tsx_disabled=1
elif [ "$cap_tsx_force_abort_rtm_disable" = 1 ] && [ "$cap_tsx_force_abort_cpuid_clear" = 1 ]; then
# TSX disabled via IA32_TSX_FORCE_ABORT MSR (0x10F), for older Skylake-era CPUs
tsx_disabled=1
fi
if [ "$tsx_disabled" = 0 ]; then
pvulnstatus "$cve" VULN "TSX must be disabled for full mitigation"
elif echo "$ret_sys_interface_check_fullmsg" | grep -qF 'SMT vulnerable'; then
pvulnstatus "$cve" VULN "SMT (HyperThreading) must be disabled for full mitigation"

@@ -7,7 +7,7 @@ check_CVE_2023_20569() {
}

check_CVE_2023_20569_linux() {
local status sys_interface_available msg kernel_sro kernel_sro_err kernel_srso kernel_ibpb_entry smt_enabled
local status sys_interface_available msg kernel_sro kernel_sro_err kernel_srso kernel_ibpb_entry kernel_ibpb_no_ret smt_enabled
status=UNK
sys_interface_available=0
msg=''
@@ -25,6 +25,15 @@ check_CVE_2023_20569_linux() {
status=VULN
msg="Vulnerable: Safe RET, no microcode (your kernel incorrectly reports this as mitigated, it was fixed in more recent kernels)"
fi
# kernels before the IBPB_NO_RET fix (v6.12, backported to v6.11.5/v6.6.58/v6.1.114/v5.15.169/v5.10.228)
# don't fill the RSB after IBPB, so when sysfs reports an IBPB-based mitigation, the return predictor
# can still be poisoned cross-process (PB-Inception). Override sysfs in that case.
if [ "$status" = OK ] && echo "$ret_sys_interface_check_fullmsg" | grep -qi 'IBPB'; then
if [ "$cap_ibpb_ret" != 1 ] && ! grep -q 'ibpb_no_ret' "$g_kernel" 2>/dev/null; then
status=VULN
msg="Vulnerable: IBPB-based mitigation active but kernel lacks return prediction clearing after IBPB (PB-Inception, upgrade to kernel 6.12+)"
fi
fi
fi

if [ "$opt_sysfs_only" != 1 ]; then
@@ -117,6 +126,19 @@ check_CVE_2023_20569_linux() {
fi
fi

# check whether the kernel is aware of the IBPB return predictor bypass (PB-Inception).
# kernels with the fix (v6.12+, backported) contain the "ibpb_no_ret" bug flag string,
# and add an RSB fill after every IBPB on affected CPUs (Zen 1-3).
pr_info_nol "* Kernel is aware of IBPB return predictor bypass: "
if [ -n "$g_kernel_err" ]; then
pstatus yellow UNKNOWN "$g_kernel_err"
elif grep -q 'ibpb_no_ret' "$g_kernel"; then
kernel_ibpb_no_ret="ibpb_no_ret found in kernel image"
pstatus green YES "$kernel_ibpb_no_ret"
else
pstatus yellow NO
fi

# Zen & Zen2 : if the right IBPB microcode applied + SMT off --> not vuln
if [ "$cpu_family" = $((0x17)) ]; then
pr_info_nol "* CPU supports IBPB: "
@@ -166,7 +188,11 @@ check_CVE_2023_20569_linux() {
|
||||
elif [ -z "$kernel_sro" ]; then
|
||||
pvulnstatus "$cve" VULN "Your kernel is too old and doesn't have the SRSO mitigation logic"
|
||||
elif [ -n "$cap_ibpb" ]; then
|
||||
pvulnstatus "$cve" OK "SMT is disabled and both your kernel and microcode support mitigation"
|
||||
if [ "$cap_ibpb_ret" != 1 ] && [ -z "$kernel_ibpb_no_ret" ]; then
|
||||
pvulnstatus "$cve" VULN "IBPB alone doesn't flush return predictions on this CPU, kernel update needed (PB-Inception, fixed in 6.12+)"
|
||||
else
|
||||
pvulnstatus "$cve" OK "SMT is disabled and both your kernel and microcode support mitigation"
|
||||
fi
|
||||
else
|
||||
pvulnstatus "$cve" VULN "Your microcode is too old"
|
||||
fi
|
||||
@@ -181,7 +207,11 @@ check_CVE_2023_20569_linux() {
|
||||
elif [ "$cap_sbpb" = 2 ]; then
|
||||
pvulnstatus "$cve" VULN "Your microcode doesn't support SBPB"
|
||||
else
|
||||
pvulnstatus "$cve" OK "Your kernel and microcode both support mitigation"
|
||||
if [ "$cap_ibpb_ret" != 1 ] && [ -z "$kernel_ibpb_no_ret" ] && [ -n "$kernel_ibpb_entry" ]; then
|
||||
pvulnstatus "$cve" VULN "IBPB alone doesn't flush return predictions on this CPU, kernel update needed (PB-Inception, fixed in 6.12+)"
|
||||
else
|
||||
pvulnstatus "$cve" OK "Your kernel and microcode both support mitigation"
|
||||
fi
|
||||
fi
|
||||
else
|
||||
# not supposed to happen, as normally this CPU should not be affected and not run this code
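The override above hinges on grepping the kernel image for the `ibpb_no_ret` bug-flag string. A minimal self-contained sketch of that marker-probing technique follows; the temp file stands in for an uncompressed vmlinux, and its contents are illustrative, not from a real kernel:

```shell
# Create a stand-in "kernel image" containing some bug-flag strings.
tmpimg=$(mktemp)
printf 'spectre_v2 srso ibpb_no_ret div0\n' > "$tmpimg"

# Same technique as the checks above: the presence of the string tells us
# whether the kernel was built with the fix.
if grep -q 'ibpb_no_ret' "$tmpimg"; then
    verdict="kernel knows about the IBPB return predictor bypass"
else
    verdict="kernel predates the IBPB_NO_RET fix"
fi
rm -f "$tmpimg"
echo "$verdict"
```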
src/vulns/CVE-2023-20588.sh (new file, 169 lines)
@@ -0,0 +1,169 @@
# vim: set ts=4 sw=4 sts=4 et:
###############################
# CVE-2023-20588, DIV0, AMD Division by Zero Speculative Data Leak

check_CVE_2023_20588() {
    check_cve 'CVE-2023-20588'
}

# shellcheck disable=SC2034
_cve_2023_20588_pvulnstatus_smt() {
    # common logic shared by the two live-mode paths (cpuinfo flag or kernel image evidence):
    # if --paranoid and SMT is on, report VULN; otherwise OK.
    if [ "$opt_paranoid" != 1 ] || ! is_cpu_smt_enabled; then
        pvulnstatus "$cve" OK "Mitigation: amd_clear_divider on exit to user/guest"
    else
        pvulnstatus "$cve" VULN "DIV0 mitigation is active but SMT is enabled, data leak possible between sibling threads"
        explain "Disable SMT (Simultaneous Multi-Threading) for full protection against DIV0.\n " \
            "The kernel mitigation only covers kernel-to-user and host-to-guest leak paths, not cross-SMT-thread leaks.\n " \
            "You can disable SMT by booting with the \`nosmt\` kernel parameter, or at runtime:\n " \
            "\`echo off > /sys/devices/system/cpu/smt/control\`"
    fi
}

# shellcheck disable=SC2034
_cve_2023_20588_pvulnstatus_no_kernel() {
    pvulnstatus "$cve" VULN "your kernel doesn't support DIV0 mitigation"
    explain "Update your kernel to a version that includes the amd_clear_divider mitigation (Linux >= 6.5 or a backported stable/vendor kernel).\n " \
        "The kernel fix adds a dummy division on every exit to userspace and before VMRUN, preventing stale quotient data from leaking.\n " \
        "Also disable SMT for full protection, as the mitigation doesn't cover cross-SMT-thread leaks."
}

check_CVE_2023_20588_linux() {
    local status sys_interface_available msg kernel_mitigated cpuinfo_div0 dmesg_div0 ret
    status=UNK
    sys_interface_available=0
    msg=''
    # No sysfs interface exists for this CVE (no /sys/devices/system/cpu/vulnerabilities/div0).
    # sys_interface_available stays 0.
    #
    # Kernel source inventory for CVE-2023-20588 (DIV0), traced via git blame:
    #
    # --- sysfs messages ---
    # none: this vulnerability has no sysfs entry
    #
    # --- Kconfig symbols ---
    # none: the mitigation is unconditional, not configurable (no CONFIG_* knob)
    #
    # --- kernel functions (for $opt_map / System.map) ---
    # 77245f1c3c64 (v6.5, initial fix): amd_clear_divider()
    #   initially called from exc_divide_error() (#DE handler)
    # f58d6fbcb7c8 (v6.5, follow-up fix): moved amd_clear_divider() call to
    #   exit-to-userspace path and before VMRUN (SVM)
    # bfff3c6692ce (v6.8): moved DIV0 detection from model range check to
    #   unconditional in init_amd_zen1()
    # 501bd734f933 (v6.11): amd_clear_divider() made __always_inline
    #   (may no longer appear in System.map on newer kernels)
    #
    # --- dmesg ---
    # 77245f1c3c64 (v6.5): "AMD Zen1 DIV0 bug detected. Disable SMT for full protection."
    #   (present since the initial fix, printed via pr_notice_once)
    #
    # --- /proc/cpuinfo bugs field ---
    # 77245f1c3c64 (v6.5): X86_BUG_DIV0 mapped to "div0" in bugs field
    #
    # --- CPU affection logic (for is_cpu_affected) ---
    # 77245f1c3c64 (v6.5, initial model list):
    #   AMD: family 0x17 models 0x00-0x2f, 0x50-0x5f
    # bfff3c6692ce (v6.8): moved to init_amd_zen1(), unconditional for all Zen1
    #   (same model ranges, just different detection path)
    # vendor scope: AMD only (Zen1 microarchitecture)
    #
    # --- stable backports ---
    # 5.10.y, 5.15.y, 6.1.y, 6.4.y: backported via cpu_has_amd_erratum() path
    #   (same as mainline v6.5 initial implementation)
    # 6.5.y, 6.7.y: same erratum-table detection as mainline v6.5
    # 6.6.y: stable-specific commit 824549816609 backported the init_amd_zen1()
    #   move (equivalent to mainline bfff3c6692ce but adapted to 6.6 context)
    # 6.8.y, 6.9.y, 6.10.y: carry mainline bfff3c6692ce directly
    # 6.7.y missed the init_amd_zen1() move (EOL before backport landed)
    # 501bd734f933 (__always_inline) was NOT backported to any stable branch
    # 4.14.y, 4.19.y, 5.4.y: do NOT have the fix (EOL or not backported)
    # no stable-specific string or behavior differences; all branches use the
    #   same dmesg message and /proc/cpuinfo bugs field as mainline

    if [ "$opt_sysfs_only" != 1 ]; then
        pr_info_nol "* Kernel supports DIV0 mitigation: "
        kernel_mitigated=''
        if [ -n "$g_kernel_err" ]; then
            pstatus yellow UNKNOWN "$g_kernel_err"
        elif grep -q 'amd_clear_divider' "$g_kernel"; then
            kernel_mitigated="found amd_clear_divider in kernel image"
            pstatus green YES "$kernel_mitigated"
        elif [ -n "$opt_map" ] && grep -q 'amd_clear_divider' "$opt_map"; then
            kernel_mitigated="found amd_clear_divider in System.map"
            pstatus green YES "$kernel_mitigated"
        else
            pstatus yellow NO
        fi

        pr_info_nol "* DIV0 mitigation enabled and active: "
        cpuinfo_div0=''
        dmesg_div0=''
        if [ "$opt_live" = 1 ]; then
            if [ -e "$g_procfs/cpuinfo" ] && grep -qw 'div0' "$g_procfs/cpuinfo" 2>/dev/null; then
                cpuinfo_div0=1
                pstatus green YES "div0 found in $g_procfs/cpuinfo bug flags"
            else
                # cpuinfo flag not found, fall back to dmesg
                dmesg_grep 'AMD Zen1 DIV0 bug detected'
                ret=$?
                if [ "$ret" -eq 0 ]; then
                    dmesg_div0=1
                    pstatus green YES "DIV0 bug detected message found in dmesg"
                elif [ "$ret" -eq 2 ]; then
                    pstatus yellow UNKNOWN "dmesg truncated, cannot check for DIV0 message"
                else
                    pstatus yellow NO "div0 not found in $g_procfs/cpuinfo bug flags or dmesg"
                fi
            fi
        else
            pstatus blue N/A "not testable in offline mode"
        fi

        pr_info_nol "* SMT (Simultaneous Multi-Threading) status: "
        is_cpu_smt_enabled
    elif [ "$sys_interface_available" = 0 ]; then
        msg="/sys vulnerability interface use forced, but it's not available!"
        status=UNK
    fi

    if ! is_cpu_affected "$cve"; then
        pvulnstatus "$cve" OK "your CPU vendor reported your CPU model as not affected"
    elif [ -z "$msg" ]; then
        if [ "$opt_sysfs_only" != 1 ]; then
            if [ "$opt_live" = 1 ]; then
                # live mode: cpuinfo div0 flag is the strongest proof the mitigation is active
                if [ "$cpuinfo_div0" = 1 ] || [ "$dmesg_div0" = 1 ]; then
                    _cve_2023_20588_pvulnstatus_smt
                elif [ -n "$kernel_mitigated" ]; then
                    # kernel has the code but the bug flag is not set, it shouldn't happen on affected CPUs,
                    # but if it does, trust the kernel image evidence
                    _cve_2023_20588_pvulnstatus_smt
                else
                    _cve_2023_20588_pvulnstatus_no_kernel
                fi
            else
                # offline mode: only kernel image / System.map evidence is available
                if [ -n "$kernel_mitigated" ]; then
                    pvulnstatus "$cve" OK "Mitigation: amd_clear_divider found in kernel image"
                else
                    _cve_2023_20588_pvulnstatus_no_kernel
                fi
            fi
        else
            pvulnstatus "$cve" "$status" "no sysfs interface available for this CVE, use --no-sysfs to check"
        fi
    else
        pvulnstatus "$cve" "$status" "$msg"
    fi
}

check_CVE_2023_20588_bsd() {
    if ! is_cpu_affected "$cve"; then
        pvulnstatus "$cve" OK "your CPU vendor reported your CPU model as not affected"
    else
        pvulnstatus "$cve" UNK "your CPU is affected, but mitigation detection has not yet been implemented for BSD in this script"
    fi
}
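The live-mode detection above relies on `grep -qw` so that `div0` only matches as a whole word in the cpuinfo `bugs` line. A small standalone sketch of that probe; the `bugs` line below is a fabricated sample, not output from a real machine:

```shell
# Fabricated /proc/cpuinfo "bugs" line for demonstration purposes.
bugs="bugs            : sysret_ss_attrs spectre_v1 spectre_v2 spec_store_bypass div0"

# -w matches "div0" only as a whole word, so a flag like "div0_foo"
# (underscore is a word constituent) would not produce a false positive.
if printf '%s\n' "$bugs" | grep -qw 'div0'; then
    div0_flag=1
else
    div0_flag=0
fi
echo "div0_flag=$div0_flag"
```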
@@ -24,7 +24,10 @@ check_CVE_2023_23583_linux() {
 		pvulnstatus "$cve" VULN "your CPU is affected and no microcode update is available for your CPU stepping"
 	else
 		pr_info_nol "* Reptar is mitigated by microcode: "
-		if [ "$cpu_ucode" -lt "$g_reptar_fixed_ucode_version" ]; then
+		if [ -z "$cpu_ucode" ]; then
+			pstatus yellow UNKNOWN "couldn't get your microcode version"
+			pvulnstatus "$cve" UNK "couldn't detect microcode version to verify mitigation"
+		elif [ "$cpu_ucode" -lt "$g_reptar_fixed_ucode_version" ]; then
 			pstatus yellow NO "You have ucode $(printf "0x%x" "$cpu_ucode") and version $(printf "0x%x" "$g_reptar_fixed_ucode_version") minimum is required"
 			pvulnstatus "$cve" VULN "Your microcode is too old to mitigate the vulnerability"
 		else
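The new `-z` guard matters because `[ "" -lt N ]` is a test(1) error; with it in place, an unreadable microcode version degrades gracefully to UNKNOWN instead of spewing a shell error. The comparison itself is plain integer arithmetic on hex literals, as this standalone sketch shows (the revision numbers are made up, not real Reptar thresholds):

```shell
# Hex microcode revisions converted to integers via arithmetic expansion,
# then compared numerically with -lt. Sample values only.
cpu_ucode=$((0x2b0004d0))
required=$((0x2b000590))
if [ "$cpu_ucode" -lt "$required" ]; then
    verdict="too old"
else
    verdict="ok"
fi
printf 'ucode 0x%x vs required 0x%x: %s\n' "$cpu_ucode" "$required" "$verdict"
```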
src/vulns/CVE-2023-28746.sh (new file, 173 lines)
@@ -0,0 +1,173 @@
# vim: set ts=4 sw=4 sts=4 et:
###############################
# CVE-2023-28746, RFDS, Register File Data Sampling

check_CVE_2023_28746() {
    check_cve 'CVE-2023-28746'
}

check_CVE_2023_28746_linux() {
    local status sys_interface_available msg kernel_rfds kernel_rfds_err rfds_mitigated
    status=UNK
    sys_interface_available=0
    msg=''

    if sys_interface_check "$VULN_SYSFS_BASE/reg_file_data_sampling"; then
        # this kernel has the /sys interface, trust it over everything
        sys_interface_available=1
        #
        # Kernel source inventory for reg_file_data_sampling (RFDS)
        #
        # --- sysfs messages ---
        # all versions:
        #   "Not affected" (cpu_show_common, pre-existing)
        #
        # --- mainline ---
        # 8076fcde016c (v6.9-rc1, initial RFDS sysfs):
        #   "Vulnerable" (RFDS_MITIGATION_OFF)
        #   "Vulnerable: No microcode" (RFDS_MITIGATION_UCODE_NEEDED)
        #   "Mitigation: Clear Register File" (RFDS_MITIGATION_VERW)
        # b8ce25df2999 (v6.15, added AUTO state):
        #   no string changes; RFDS_MITIGATION_AUTO is internal, resolved before display
        # 203d81f8e167 (v6.17, restructured):
        #   no string changes; added rfds_update_mitigation() + rfds_apply_mitigation()
        #
        # --- stable backports ---
        # 5.10.215, 5.15.154, 6.1.82, 6.6.22, 6.7.10, 6.8.1:
        #   same 3 strings as mainline; no structural differences
        #   macro ALDERLAKE_N (0xBE) used instead of mainline ATOM_GRACEMONT (same model)
        #
        # --- Kconfig symbols ---
        # 8076fcde016c (v6.9-rc1): CONFIG_MITIGATION_RFDS (default y)
        #   no renames across any version
        #
        # --- kernel functions (for $opt_map / System.map) ---
        # 8076fcde016c (v6.9-rc1): rfds_select_mitigation(), rfds_parse_cmdline(),
        #   rfds_show_state(), cpu_show_reg_file_data_sampling(), vulnerable_to_rfds()
        # 203d81f8e167 (v6.17): + rfds_update_mitigation(), rfds_apply_mitigation()
        #
        # --- CPU affection logic (for is_cpu_affected) ---
        # 8076fcde016c (v6.9-rc1, initial model list):
        #   Intel: ATOM_GOLDMONT (0x5C), ATOM_GOLDMONT_D (0x5F),
        #     ATOM_GOLDMONT_PLUS (0x7A), ATOM_TREMONT_D (0x86),
        #     ATOM_TREMONT (0x96), ATOM_TREMONT_L (0x9C),
        #     ATOM_GRACEMONT (0xBE), ALDERLAKE (0x97),
        #     ALDERLAKE_L (0x9A), RAPTORLAKE (0xB7),
        #     RAPTORLAKE_P (0xBA), RAPTORLAKE_S (0xBF)
        # 722fa0dba74f (v6.15, P-only hybrid exclusion):
        #   ALDERLAKE (0x97) and RAPTORLAKE (0xB7) narrowed to Atom core type only
        #   via X86_HYBRID_CPU_TYPE_ATOM check in vulnerable_to_rfds(); P-cores on
        #   these hybrid models are not affected, only E-cores (Gracemont) are.
        #   (not modeled here, we conservatively flag all steppings per whitelist principle,
        #   because detecting the active core type at runtime is unreliable from userspace)
        # immunity: ARCH_CAP_RFDS_NO (bit 27 of IA32_ARCH_CAPABILITIES)
        # mitigation: ARCH_CAP_RFDS_CLEAR (bit 28 of IA32_ARCH_CAPABILITIES)
        # vendor scope: Intel only
        #
        # all messages start with either "Not affected", "Mitigation", or "Vulnerable"
        status=$ret_sys_interface_check_status
    fi

    if [ "$opt_sysfs_only" != 1 ]; then
        pr_info_nol "* CPU microcode mitigates the vulnerability: "
        if [ "$cap_rfds_clear" = 1 ]; then
            pstatus green YES "RFDS_CLEAR capability indicated by microcode"
        elif [ "$cap_rfds_clear" = 0 ]; then
            pstatus yellow NO
        else
            pstatus yellow UNKNOWN "couldn't read MSR"
        fi

        pr_info_nol "* Kernel supports RFDS mitigation (VERW on transitions): "
        kernel_rfds=''
        kernel_rfds_err=''
        if [ -n "$g_kernel_err" ]; then
            kernel_rfds_err="$g_kernel_err"
        elif grep -q 'Clear Register File' "$g_kernel"; then
            kernel_rfds="found 'Clear Register File' string in kernel image"
        elif grep -q 'reg_file_data_sampling' "$g_kernel"; then
            kernel_rfds="found reg_file_data_sampling in kernel image"
        fi
        if [ -z "$kernel_rfds" ] && [ -r "$opt_config" ]; then
            if grep -q '^CONFIG_MITIGATION_RFDS=y' "$opt_config"; then
                kernel_rfds="RFDS mitigation config option found enabled in kernel config"
            fi
        fi
        if [ -z "$kernel_rfds" ] && [ -n "$opt_map" ]; then
            if grep -q 'rfds_select_mitigation' "$opt_map"; then
                kernel_rfds="found rfds_select_mitigation in System.map"
            fi
        fi
        if [ -n "$kernel_rfds" ]; then
            pstatus green YES "$kernel_rfds"
        elif [ -n "$kernel_rfds_err" ]; then
            pstatus yellow UNKNOWN "$kernel_rfds_err"
        else
            pstatus yellow NO
        fi

        if [ "$opt_live" = 1 ] && [ "$sys_interface_available" = 1 ]; then
            pr_info_nol "* RFDS mitigation is enabled and active: "
            if echo "$ret_sys_interface_check_fullmsg" | grep -qi '^Mitigation'; then
                rfds_mitigated=1
                pstatus green YES
            else
                rfds_mitigated=0
                pstatus yellow NO
            fi
        fi
    elif [ "$sys_interface_available" = 0 ]; then
        # we have no sysfs but were asked to use it only!
        msg="/sys vulnerability interface use forced, but it's not available!"
        status=UNK
    fi

    if ! is_cpu_affected "$cve"; then
        # override status & msg in case CPU is not vulnerable after all
        pvulnstatus "$cve" OK "your CPU vendor reported your CPU model as not affected"
    elif [ -z "$msg" ]; then
        if [ "$opt_sysfs_only" != 1 ]; then
            if [ "$cap_rfds_clear" = 1 ]; then
                if [ -n "$kernel_rfds" ]; then
                    if [ "$opt_live" = 1 ]; then
                        if [ "$rfds_mitigated" = 1 ]; then
                            pvulnstatus "$cve" OK "Your microcode and kernel are both up to date for this mitigation, and mitigation is enabled"
                        else
                            pvulnstatus "$cve" VULN "Your microcode and kernel are both up to date for this mitigation, but the mitigation is not active"
                            explain "The RFDS mitigation has been disabled. Remove 'reg_file_data_sampling=off' or 'mitigations=off'\n " \
                                "from your kernel command line to re-enable it."
                        fi
                    else
                        pvulnstatus "$cve" OK "Your microcode and kernel are both up to date for this mitigation"
                    fi
                else
                    pvulnstatus "$cve" VULN "Your microcode supports mitigation, but your kernel doesn't, upgrade it to mitigate the vulnerability"
                    explain "Update your kernel to a version that supports RFDS mitigation (Linux 6.9+, or check if your distro\n " \
                        "has a backport). Your CPU microcode already provides the RFDS_CLEAR capability."
                fi
            else
                if [ -n "$kernel_rfds" ]; then
                    pvulnstatus "$cve" VULN "Your kernel supports mitigation, but your CPU microcode also needs to be updated to mitigate the vulnerability"
                    explain "Update your CPU microcode (via BIOS/firmware update or linux-firmware package) to a version that\n " \
                        "provides the RFDS_CLEAR capability."
                else
                    pvulnstatus "$cve" VULN "Neither your kernel nor your microcode supports mitigation, upgrade both to mitigate the vulnerability"
                    explain "Update both your CPU microcode (via BIOS/firmware update from your OEM) and your kernel to a version\n " \
                        "that supports RFDS mitigation (Linux 6.9+, or check if your distro has a backport)."
                fi
            fi
        else
            pvulnstatus "$cve" "$status" "$ret_sys_interface_check_fullmsg"
        fi
    else
        pvulnstatus "$cve" "$status" "$msg"
    fi
}

check_CVE_2023_28746_bsd() {
    if ! is_cpu_affected "$cve"; then
        pvulnstatus "$cve" OK "your CPU vendor reported your CPU model as not affected"
    else
        pvulnstatus "$cve" UNK "your CPU is affected, but mitigation detection has not yet been implemented for BSD in this script"
    fi
}
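The inventory above pins RFDS_NO to bit 27 and RFDS_CLEAR to bit 28 of IA32_ARCH_CAPABILITIES. A sketch of extracting both bits once the raw MSR value has been read; the value below is illustrative, not read from real hardware:

```shell
# Sample MSR value with only the RFDS_CLEAR bit (28) set.
msr=$(( 1 << 28 ))

# Shift-and-mask extraction of the immunity and mitigation bits.
rfds_no=$(( (msr >> 27) & 1 ))      # bit 27: CPU architecturally immune
rfds_clear=$(( (msr >> 28) & 1 ))   # bit 28: microcode provides VERW clearing
echo "RFDS_NO=$rfds_no RFDS_CLEAR=$rfds_clear"
```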
@@ -93,22 +93,24 @@ check_CVE_2024_36350_linux() {
 		pstatus yellow NO
 	fi
 
-	pr_info_nol "* CPU explicitly indicates not vulnerable to TSA-SQ (TSA_SQ_NO): "
-	if [ "$cap_tsa_sq_no" = 1 ]; then
-		pstatus green YES
-	elif [ "$cap_tsa_sq_no" = 0 ]; then
-		pstatus yellow NO
-	else
-		pstatus yellow UNKNOWN "couldn't read CPUID leaf 0x80000021"
-	fi
+	if is_amd || is_hygon; then
+		pr_info_nol "* CPU explicitly indicates not vulnerable to TSA-SQ (TSA_SQ_NO): "
+		if [ "$cap_tsa_sq_no" = 1 ]; then
+			pstatus green YES
+		elif [ "$cap_tsa_sq_no" = 0 ]; then
+			pstatus yellow NO
+		else
+			pstatus yellow UNKNOWN "couldn't read CPUID leaf 0x80000021"
+		fi
 
-	pr_info_nol "* Microcode supports VERW buffer clearing: "
-	if [ "$cap_verw_clear" = 1 ]; then
-		pstatus green YES
-	elif [ "$cap_verw_clear" = 0 ]; then
-		pstatus yellow NO
-	else
-		pstatus yellow UNKNOWN "couldn't read CPUID leaf 0x80000021"
-	fi
+		pr_info_nol "* Microcode supports VERW buffer clearing: "
+		if [ "$cap_verw_clear" = 1 ]; then
+			pstatus green YES
+		elif [ "$cap_verw_clear" = 0 ]; then
+			pstatus yellow NO
+		else
+			pstatus yellow UNKNOWN "couldn't read CPUID leaf 0x80000021"
+		fi
+	fi
 
 	pr_info_nol "* Hyper-Threading (SMT) is enabled: "
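Both TSA blocks test capability bits parsed out of CPUID leaf 0x80000021. A generic sketch of that shift-and-mask bit test; the register value and bit index below are placeholders, not the real TSA_SQ_NO or VERW_CLEAR positions:

```shell
# Placeholder value standing in for a register read from CPUID leaf 0x80000021.
cpuid_reg=$((0x06))

# Placeholder bit index; the real per-capability positions are defined by the
# AMD architecture documentation, not by this sketch.
bit=2
cap=$(( (cpuid_reg >> bit) & 1 ))
echo "cap=$cap"
```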
@@ -93,22 +93,24 @@ check_CVE_2024_36357_linux() {
 		pstatus yellow NO
 	fi
 
-	pr_info_nol "* CPU explicitly indicates not vulnerable to TSA-L1 (TSA_L1_NO): "
-	if [ "$cap_tsa_l1_no" = 1 ]; then
-		pstatus green YES
-	elif [ "$cap_tsa_l1_no" = 0 ]; then
-		pstatus yellow NO
-	else
-		pstatus yellow UNKNOWN "couldn't read CPUID leaf 0x80000021"
-	fi
+	if is_amd || is_hygon; then
+		pr_info_nol "* CPU explicitly indicates not vulnerable to TSA-L1 (TSA_L1_NO): "
+		if [ "$cap_tsa_l1_no" = 1 ]; then
+			pstatus green YES
+		elif [ "$cap_tsa_l1_no" = 0 ]; then
+			pstatus yellow NO
+		else
+			pstatus yellow UNKNOWN "couldn't read CPUID leaf 0x80000021"
+		fi
 
-	pr_info_nol "* Microcode supports VERW buffer clearing: "
-	if [ "$cap_verw_clear" = 1 ]; then
-		pstatus green YES
-	elif [ "$cap_verw_clear" = 0 ]; then
-		pstatus yellow NO
-	else
-		pstatus yellow UNKNOWN "couldn't read CPUID leaf 0x80000021"
-	fi
+		pr_info_nol "* Microcode supports VERW buffer clearing: "
+		if [ "$cap_verw_clear" = 1 ]; then
+			pstatus green YES
+		elif [ "$cap_verw_clear" = 0 ]; then
+			pstatus yellow NO
+		else
+			pstatus yellow UNKNOWN "couldn't read CPUID leaf 0x80000021"
+		fi
+	fi
 
 elif [ "$sys_interface_available" = 0 ]; then
@@ -31,7 +31,10 @@ check_CVE_2024_45332_linux() {
 			"update is available for your specific CPU stepping."
 	else
 		pr_info_nol "* BPI is mitigated by microcode: "
-		if [ "$cpu_ucode" -lt "$g_bpi_fixed_ucode_version" ]; then
+		if [ -z "$cpu_ucode" ]; then
+			pstatus yellow UNKNOWN "couldn't get your microcode version"
+			pvulnstatus "$cve" UNK "couldn't detect microcode version to verify mitigation"
+		elif [ "$cpu_ucode" -lt "$g_bpi_fixed_ucode_version" ]; then
 			pstatus yellow NO "You have ucode $(printf "0x%x" "$cpu_ucode") and version $(printf "0x%x" "$g_bpi_fixed_ucode_version") minimum is required"
 			pvulnstatus "$cve" VULN "Your microcode is too old to mitigate the vulnerability"
 			explain "CVE-2024-45332 (Branch Privilege Injection) is a race condition in the branch predictor\n" \