Hírolvasó
Security updates for Thursday
rpki-client 7.9 released
The announcement begins:
rpki-client 7.9 has just been released and will be available in the rpki-client directory of any OpenBSD mirror soon.
rpki-client is a FREE, easy-to-use implementation of the Resource Public Key Infrastructure (RPKI) for Relying Parties (RP) to facilitate validation of BGP announcements. The program queries the global RPKI repository system and validates untrusted network inputs. The program outputs validated ROA payloads and BGPsec Router keys in configuration formats suitable for OpenBGPD and BIRD, and supports emitting CSV and JSON for consumption by other routing stacks.
Read the whole thing here and grab the new release at your favorite OpenBSD mirror.
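The JSON output mentioned above is easy to consume from other tooling. As a minimal sketch only (the output path and the field names "roas", "asn", "prefix" and "maxLength" reflect the usual shape of rpki-client's JSON file but should be checked against your version), a small Rust program using the serde_json crate could list the validated origin/prefix pairs:

```rust
// Sketch: walk rpki-client's JSON output and print validated ROA payloads.
// Path and field names are assumptions for illustration, not a fixed API.
use serde_json::Value;
use std::fs;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // rpki-client -j writes JSON into its output directory; adjust the path
    // to wherever your installation places it.
    let raw = fs::read_to_string("/var/db/rpki-client/json")?;
    let doc: Value = serde_json::from_str(&raw)?;

    // Print each validated origin AS / prefix pair.
    if let Some(roas) = doc.get("roas").and_then(Value::as_array) {
        for roa in roas {
            println!(
                "{} may originate {} (maxLength {})",
                roa["asn"], roa["prefix"], roa["maxLength"]
            );
        }
    }
    Ok(())
}
```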
[$] LWN.net Weekly Edition for July 14, 2022
[$] "Critical" projects and volunteer maintainers
Security updates for Wednesday
[$] Native Python support for units?
The "Retbleed" speculative execution vulnerabilities
Kernel and hypervisor developers have developed mitigations in coordination with Intel and AMD. Mitigating Retbleed in the Linux kernel required a substantial effort, involving changes to 68 files, 1783 new lines and 387 removed lines. Our performance evaluation shows that mitigating Retbleed has unfortunately turned out to be expensive: we have measured between 14% and 39% overhead with the AMD and Intel patches respectively.
Those mitigations were pulled into the mainline kernel today. They are not in the July 12 stable kernel updates but will almost certainly show up in those channels soon.
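One quick way to see whether a given kernel carries the new mitigations is the standard sysfs vulnerabilities interface; kernels with the Retbleed patches expose a "retbleed" entry there. A minimal sketch in Rust (the exact status strings vary by CPU and configuration):

```rust
// Report the kernel's Retbleed status via sysfs. The "retbleed" entry only
// exists on kernels that already carry the mitigation patches.
use std::fs;

fn main() {
    match fs::read_to_string("/sys/devices/system/cpu/vulnerabilities/retbleed") {
        // Typical values include "Not affected", "Vulnerable", or
        // "Mitigation: ..." depending on the CPU and kernel configuration.
        Ok(status) => println!("retbleed: {}", status.trim()),
        Err(_) => println!("retbleed: no entry (kernel predates the mitigations?)"),
    }
}
```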
The latest stable kernel updates
Garrett: Responsible stewardship of the UEFI secure boot ecosystem
Security updates for Tuesday
Matthew Garrett: Responsible stewardship of the UEFI secure boot ecosystem
Starting in 2022 for Secured-core PCs it is a Microsoft requirement for the 3rd Party Certificate to be disabled by default.
"Secured-core" is a term used to describe machines that meet a certain set of Microsoft requirements around firmware security, and by and large it's a good thing - devices that meet these requirements are resilient against a whole bunch of potential attacks in the early boot process. But unfortunately the 2022 requirements don't seem to be publicly available, so it's difficult to know what's being asked for and why. But first, some background.
Most x86 UEFI systems that support Secure Boot trust at least two certificate authorities:
1) The Microsoft Windows Production PCA - this is used to sign the bootloader in production Windows builds. Trusting this is sufficient to boot Windows.
2) The Microsoft Corporation UEFI CA - this is used by Microsoft to sign non-Windows UEFI binaries, including built-in drivers for hardware that needs to work in the UEFI environment (such as GPUs and network cards) and bootloaders for non-Windows.
The apparent secured-core requirement for 2022 is that the second of these CAs should not be trusted by default. As a result, drivers or bootloaders signed with this certificate will not run on these systems. This means that, out of the box, these systems will not boot anything other than Windows[1].
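Whether a particular machine still trusts that second CA can be checked from Linux by inspecting the UEFI "db" variable through efivarfs. The sketch below is only a heuristic: it searches the raw variable for the CA's subject string rather than properly parsing the EFI_SIGNATURE_LIST structures, and it assumes the standard image-security-database GUID in the file name.

```rust
// Rough check: does this machine's UEFI "db" contain the Microsoft
// third-party ("Microsoft Corporation UEFI CA 2011") certificate?
use std::fs;

fn main() -> std::io::Result<()> {
    // efivarfs exposes each variable as <name>-<vendor GUID>; the first four
    // bytes of the file are the variable's attributes, the rest is the data.
    let raw = fs::read(
        "/sys/firmware/efi/efivars/db-d719b2cb-3d3a-4596-a3bc-dad00e67656f",
    )?;
    let data = &raw[4..];

    // Heuristic search for the CA's subject string in the DER-encoded certs.
    let needle: &[u8] = b"Microsoft Corporation UEFI CA 2011";
    let trusted = data.windows(needle.len()).any(|w| w == needle);

    println!(
        "third-party UEFI CA {} present in db",
        if trusted { "appears" } else { "does not appear" }
    );
    Ok(())
}
```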
Given the association with the secured-core requirements, this is presumably a security decision of some kind. Unfortunately, we have no real idea what this security decision is intended to protect against. The most likely scenario is concerns about the (in)security of binaries signed with the third-party signing key - there are some legitimate concerns here, but I'm going to cover why I don't think they're terribly realistic.
The first point is that, from a boot security perspective, a signed bootloader that will happily boot unsigned code kind of defeats the point. A signed Kaspersky bootloader did exactly that. The second is that even a signed bootloader that is intended to only boot signed code may run into issues in the event of security vulnerabilities - the Boothole vulnerabilities are an example of this, covering multiple issues in GRUB that could allow for arbitrary code execution and potential loading of untrusted code.
So we know that signed bootloaders that will (either through accident or design) execute unsigned code exist. The signatures for all the known vulnerable bootloaders have been revoked, but that doesn't mean there won't be other vulnerabilities discovered in future. Configuring systems so that they don't trust the third-party CA means that those signed bootloaders won't be trusted, which means any future vulnerabilities will be irrelevant. This seems like a simple choice?
There's actually a couple of reasons why I don't think it's anywhere near that simple. The first is that whenever a signed object is booted by the firmware, the trusted certificate used to verify that object is measured into PCR 7 in the TPM. If a system previously booted with something signed with the Windows Production CA, and is now suddenly booting with something signed with the third-party UEFI CA, the values in PCR 7 will be different. TPMs support "sealing" a secret - encrypting it with a policy that the TPM will only decrypt it if certain conditions are met. Microsoft make use of this for their default Bitlocker disk encryption mechanism. The disk encryption key is encrypted by the TPM, and associated with a specific PCR 7 value. If the value of PCR 7 doesn't match, the TPM will refuse to decrypt the key, and the machine won't boot. This means that attempting to attack a Windows system that has Bitlocker enabled using a non-Windows bootloader will fail - the system will be unable to obtain the disk unlock key, which is a strong indication to the owner that they're being attacked.
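For the curious, the PCR 7 value that such a sealing policy is bound to can be read directly on recent Linux kernels, which expose per-bank PCR files under sysfs (older kernels, or machines without a SHA-256 bank, may not have this path). A small sketch:

```rust
// Read the current SHA-256 value of PCR 7, the register that Secure Boot
// certificate measurements land in. The sysfs path below exists on recent
// kernels with a SHA-256 PCR bank; treat it as an assumption.
use std::fs;

fn main() -> std::io::Result<()> {
    let pcr7 = fs::read_to_string("/sys/class/tpm/tpm0/pcr-sha256/7")?;
    // A secret sealed against this value (as BitLocker's default policy is)
    // becomes unrecoverable once a differently-signed bootloader changes it.
    println!("PCR 7 (SHA-256): {}", pcr7.trim());
    Ok(())
}
```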
The second is that this is predicated on the idea that removing the third-party bootloaders and drivers removes all the vulnerabilities. In fact, there's been rather a lot of vulnerabilities in the Windows bootloader. A broad enough vulnerability in the Windows bootloader is arguably a lot worse than a vulnerability in a third-party loader, since it won't change the PCR 7 measurements and the system will boot happily. Removing trust in the third-party CA does nothing to protect against this.
The third reason doesn't apply to all systems, but it does to many. System vendors frequently want to ship diagnostic or management utilities that run in the boot environment, but would prefer not to have to go to the trouble of getting them all signed by Microsoft. The simple solution to this is to ship their own certificate and sign all their tooling directly - the secured-core Lenovo I'm looking at currently is an example of this, with a Lenovo signing certificate. While everything signed with the third-party signing certificate goes through some degree of security review, there's no requirement for any vendor tooling to be reviewed at all. Removing the third-party CA does nothing to protect the user against the code that's most likely to contain vulnerabilities.
Obviously I may be missing something here - Microsoft may well have a strong technical justification. But they haven't shared it, and so right now we're left making guesses. And right now, I just don't see a good security argument.
But let's move on from the technical side of things and discuss the broader issue. The reason UEFI Secure Boot is present on most x86 systems is that Microsoft mandated it back in 2012. Microsoft chose to be the only trusted signing authority. Microsoft made the decision to assert that third-party code could be signed and trusted.
We've certainly learned some things since then, and a bunch of things have changed. Third-party bootloaders based on the Shim infrastructure are now reviewed via a community-managed process. We've had a productive coordinated response to the Boothole incident, which also taught us that the existing revocation strategy wasn't going to scale. In response, the community worked with Microsoft to develop a specification for making it easier to handle similar events in future. And it's also worth noting that after the initial Boothole disclosure was made to the GRUB maintainers, they proactively sought out other vulnerabilities in their codebase rather than simply patching what had been reported. The free software community has gone to great lengths to ensure third-party bootloaders are compatible with the security goals of UEFI Secure Boot.
So, to have Microsoft, the self-appointed steward of the UEFI Secure Boot ecosystem, turn round and say that a bunch of binaries that have been reviewed through processes developed in negotiation with Microsoft, implementing technologies designed to make management of revocation easier for Microsoft, and incorporating fixes for vulnerabilities discovered by the developers of those binaries who notified Microsoft of these issues despite having no obligation to do so, and which have then been signed by Microsoft are now considered by Microsoft to be insecure is, uh, kind of impolite? Especially when unreviewed vendor-signed binaries are still considered trustworthy, despite no external review being carried out at all.
If Microsoft had a set of criteria used to determine whether something is considered sufficiently trustworthy, we could determine which of these we fell short on and do something about that. From a technical perspective, Microsoft could set criteria that would allow a subset of third-party binaries that met additional review to be trusted without having to trust all third-party binaries[2]. But, instead, this has been a decision made by the steward of this ecosystem without consulting major stakeholders.
If there are legitimate security concerns, let's talk about them and come up with solutions that fix them without doing a significant amount of collateral damage. Don't complain about a vendor blocking your apps and then do the same thing yourself.
[Edit to add: there seems to be some misunderstanding about where this restriction is being imposed. I bought this laptop because I'm interested in investigating the Microsoft Pluton security processor, but Pluton is not involved at all here. The restriction is being imposed by the firmware running on the main CPU, not any sort of functionality implemented on Pluton]
[1] They'll also refuse to run any drivers that are stored in flash on Thunderbolt devices, which means eGPU setups may be more complicated, as will netbooting off Thunderbolt-attached NICs
[2] Use a different leaf cert to sign the new trust tier, add the old leaf cert to dbx unless a config option is set, leave the existing intermediate in db
Rust frontend approved for GCC
[$] Kernel support for hardware-based control-flow integrity
Calibre 6.0 released
It has been a year and a half since calibre 5.0. The headline feature is full-text search: calibre can now optionally index all the books in your library so you can search your entire library for a word or phrase.
Other changes introduced since 5.0 include 64-bit Arm support, the removal of 32-bit support, and an update to Qt 6.
Ronacher: Congratulations: We Now Have Opinions on Your Open Source Contributions
There is a hypothetical future where the rules tighten. One could imagine that an index would like to enforce cryptographic signing of newly released packages. Or the index wants to enable reclaiming of critical packages if the author does not respond or does bad things with the package. For instance a critical package being unpublished is a problem for the ecosystem. One could imagine a situation where in that case the Index maintainers take over the record of that package on the index to undo the damage. Likewise it's more than imaginable that an index of the future will require packages to enforce a minimum standard for critical packages such as a certain SLO for responding to critical incoming requests (security, trademark laws etc.).
Security updates for Monday
Kernel prepatch 5.19-rc6
Perhaps somewhat unusually, I picked up a few fixes that were pending in trees that haven't actually hit upstream yet. It's already rc6, and I wanted to close out a few of the regression reports and not have to wait for another (possibly last, knock wood) rc to have them in the tree.
Linux Plumbers Conference: Microconferences at Linux Plumbers Conference: Rust
Linux Plumbers Conference 2022 is pleased to host the Rust MC
Rust is a systems programming language that is making great strides toward becoming the next major language in that domain.
Rust for Linux aims to bring it into the kernel since it has a key property that makes it very interesting to consider as the second language in the kernel: it guarantees no undefined behavior takes place (as long as unsafe code is sound). This includes no use-after-free mistakes, no double frees, no data races, etc.
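As a tiny illustration of that guarantee (plain userspace Rust, not kernel code): the compiler statically rejects the classic use-after-free pattern, so the mistake never reaches runtime.

```rust
// Safe Rust turns use-after-free into a compile-time error instead of
// undefined behaviour.
fn main() {
    let s = String::from("kernel data");
    let r = &s; // borrow `s`
    // drop(s); // uncommenting this is rejected (error[E0505]): `s` is still borrowed below
    println!("{}", r);
}
```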
This microconference intends to cover talks and discussions on both Rust for Linux as well as other non-kernel Rust topics.
Possible Rust for Linux topics:
- Bringing Rust into the kernel (e.g. status update, next steps…).
- Use cases for Rust around the kernel (e.g. drivers, subsystems…).
- Integration with kernel systems and infrastructure (e.g. wrapping existing subsystems safely, build system, documentation, testing, maintenance…).
Possible Rust topics:
- Language and standard library (e.g. upcoming features, memory model…).
- Compilers and codegen (e.g. rustc improvements, LLVM and Rust, rustc_codegen_gcc, gccrs…).
- Other tooling and new ideas (Cargo, Miri, Clippy, Compiler Explorer, Coccinelle for Rust…).
- Educational material.
- Any other Rust topic within the Linux ecosystem.
Please come and join us in the discussion about Rust in the Linux ecosystem.
We hope to see you there!