Hírolvasó
Security updates for Monday
Matthew Garrett: Bring Your Own Disaster
There's obvious mutual appeal to having developers use their own hardware rather than rely on employer-provided hardware. The user gets to use hardware they're familiar with, and which matches their ergonomic desires. The employer gets to save on the money required to buy new hardware for the employee. From this perspective, there's a clear win-win outcome.
But once you start thinking about security, it gets more complicated. If I, as an employer, want to ensure that any systems that can access my resources meet a certain security baseline (eg, I don't want my developers using unpatched Windows ME), I need some of my own software installed on there. And that software doesn't magically go away when the user is doing their own thing. If a user lends their machine to their partner, is the partner fully informed about what level of access I have? Are they going to feel that their privacy has been violated if they find out afterwards?
But it's not just about monitoring. If an employee's machine is compromised and the compromise is detected, what happens next? If the employer owns the system then it's easy - you pick up the device for forensic analysis and give the employee a new machine to use while that's going on. If the employee owns the system, they're probably not going to be super enthusiastic about handing over a machine that also contains a bunch of their personal data. In much of the world the law is probably on their side, and even if it isn't then telling the employee that they have a choice between handing over their laptop or getting fired probably isn't going to end well.
But obviously this is all predicated on the idea that an employer needs visibility into what's happening on systems that have access to their systems, or which are used to develop code that they'll be deploying. And I think it's fair to say that not everyone needs that! But if you hold any sort of personal data (including passwords) for any external users, I really do think you need to protect against compromised employee machines, and that does mean having some degree of insight into what's happening on those machines. If you don't want to deal with the complicated consequences of allowing employees to use their own hardware, it's rational to ensure that only employer-owned hardware can be used.
But what about the employers that don't currently need that? If there's no plausible future where you'll host user data, or where you'll sell products to others who'll host user data, then sure! But if that might happen in future (even if it doesn't right now), what's your transition plan? How are you going to deal with employees who are happily using their personal systems right now? At what point are you going to buy new laptops for everyone? BYOD might work for you now, but will it always?
And if your employer insists on employees using their own hardware, those employees should ask what happens in the event of a security breach. Whose responsibility is it to ensure that hardware is kept up to date? Is there an expectation that security can insist on the hardware being handed over for investigation? What information about the employee's use of their own hardware is going to be logged, who has access to those logs, and how long are those logs going to be kept for? If those questions can't be answered in a reasonable way, it's a huge red flag. You shouldn't have to give up your privacy and (potentially) your hardware for a job.
Using technical mechanisms to ensure that employees only use employer-provided hardware is understandably icky, but it's something that allows employers to impose appropriate security policies without violating employee privacy.
An X11 Apologist Tries Wayland (artemis.sh)
It feels fantastic. It even made my software cursor not feel so softwarey, which I’ve never experienced with a software cursor before. I have a pretty bad GPU, but on a higher-end card you’d get a huge benefit from this in games. If your card can render the game many times faster than your monitor refresh rate, you can unlock your FPS in the game, tune your max_render_time to the absolute minimum, and get EXTREMELY low latency while still having absolutely no screen tearing whatsoever.
And like, this is the first time I’ve ever seen the vsync setting in a game actually sync the game up with the vblank interval in a way that matters. It works for games in wine. It’s amazing. I have never experienced gaming on Linux that looked this smooth in my life.
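(The max_render_time knob mentioned above is a sway/wlroots option. As a minimal sketch, assuming sway 1.3 or newer, which exposes max_render_time on outputs and windows, the tuning looks roughly like this in a sway config; the output name and values here are hypothetical:)
# ~/.config/sway/config (hypothetical output name and values)
# Give the compositor only 2ms before vblank to composite the frame:
output DP-1 max_render_time 2
# And ask matching application windows to render as late as possible too:
for_window [app_id=".*"] max_render_time 2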
Kernel prepatch 6.0-rc6
So this is an artificially small -rc release, because this past week we had the Maintainers' Summit in Dublin (along with OSS EU and LPC 2022), so we've had a lot of maintainers traveling.
Or - putting my ridiculously optimistic hat on - maybe things are just so nice and stable that there just weren't all that many fixes?
EuroBSDCon 2022
EuroBSDCon 2022 is currently underway.
Slides for some of the OpenBSD sessions are already available from the usual place on the OpenBSD web site.
At the time of writing, it's not too late to catch live streams of the final day of the conference!
Dave Airlie (blogspot): LPC 2022 GPU BOF (user console and cgroups)
At LPC 2022 we had a BoF session for GPUs with two topics.
Moving to userspace consoles: Currently most mainline distros still use the kernel console, provided by the VT subsystem. We'd like to move to CONFIG_VT=n, as the console and VT subsystem have historically been a source of bugs and are also nasty places for locking. The console can also cause an oops to go missing when a locking bug takes out the panic path, stopping other paths (like pstore or serial) from completely processing the oops.
The session started by discussing what things would look like. Lennart gave a great summary of the work David did a few years ago and the train of thought involved.
Once you think through all the paths and things you want supported, you realise the best user console is going to be one that supports emojis and non-Latin scripts. This probably means you want a lightweight Wayland compositor running a fullscreen VTE-based terminal. Working back from the consequences of this means you probably aren't going to want this in systemd, and it should be a separate development.
The other area discussed was the requirements for a panic/emergency kernel console, likely to be called drmlog; this would just be something to output to the display whenever the kernel panics, or during boot before the user console is loaded.
cgroups for GPU: This has been an ongoing topic where vendors often suggest cgroup patches, are told on review that they are too vendor-specific and should simplify and try again, and then never try again. We went over the problem space of both memory allocation accounting/enforcement and processing restrictions. We all agreed that local memory accounting was probably the easiest place to start, and that accounting should be implemented and solid before we start digging into enforcement. We had a discussion about where CPU memory should be accounted, especially around integrated vs discrete graphics, since users might care about application memory usage apart from GPU object usage, which would be CPU memory on integrated and GPU memory on discrete. I believe a follow-up hallway discussion also covered a bunch of this with the upstream cgroups maintainer.
[$] The road to Zettalinux
Security updates for Friday
OpenBGPD 7.6 released
The release announcement leads in,
We have released OpenBGPD 7.6, which will be arriving in the OpenBGPD directory of your local OpenBSD mirror soon. This release includes the following changes to the previous release:
* Include OpenBSD 7.1 errata 008: bgpd(8) could fail to invalidate nexthops and incorrectly leave them in the FIB or Adj-RIB-Out.
* Speed up "bgpctl show rib 10/8 or-longer" and "show rib 10/8 or-shorter".
* Switch various static hash tables to RB trees, improving performance on large systems.
* Export per-neighbor pending update and withdraw statistics.
* Fix a race between a neighbor session reset and its update message backlog.
* Improve handling of nexthop reachability state changes.
* Further improve portability of the FIB handling code.
Matthew Garrett: git signatures with SSH certificates
git's SSH signing support actually just shells out to ssh-keygen with a specific set of options, so let's go through an example of this with ssh-keygen. First, here's my certificate:
$ ssh-keygen -L -f id_aurora-cert.pub
id_aurora-cert.pub:
Type: ecdsa-sha2-nistp256-cert-v01@openssh.com user certificate
Public key: ECDSA-CERT SHA256:(elided)
Signing CA: RSA SHA256:(elided)
Key ID: "mgarrett@aurora.tech"
Serial: 10505979558050566331
Valid: from 2022-09-13T17:23:53 to 2022-09-14T13:24:23
Principals:
mgarrett@aurora.tech
Critical Options: (none)
Extensions:
permit-agent-forwarding
permit-port-forwarding
permit-pty
Ok! Now let's sign something:
$ ssh-keygen -Y sign -f ~/.ssh/id_aurora-cert.pub -n git /tmp/testfile
Signing file /tmp/testfile
Write signature to /tmp/testfile.sig
To verify this we need an allowed signatures file, which should look something like:
*@aurora.tech cert-authority ssh-rsa AAA(elided)
Perfect. Let's verify it:
$ cat /tmp/testfile | ssh-keygen -Y verify -f /tmp/allowed_signers -I mgarrett@aurora.tech -n git -s /tmp/testfile.sig
Good "git" signature for mgarrett@aurora.tech with ECDSA-CERT key SHA256:(elided)
Woo! So, how do we make use of this in git? Generating the signatures is as simple as
$ git config --global commit.gpgsign true
$ git config --global gpg.format ssh
$ git config --global user.signingkey /home/mjg59/.ssh/id_aurora-cert.pub
and then getting on with life. Any commits will now be signed with the provided certificate. Unfortunately, git itself won't handle verification of these - it calls ssh-keygen -Y find-principals which doesn't deal with wildcards in the allowed signers file correctly, and then falls back to verifying the signature without making any assertions about identity. Which means you're going to have to implement this in your own CI by extracting the commit and the signature, extracting the identity from the commit metadata and calling ssh-keygen on your own. But it can be made to work!
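As a very rough, untested sketch of what such a CI step could look like (shell, reusing the /tmp/allowed_signers file from above and verifying HEAD against the committer email from the commit metadata):
# Hypothetical CI step: verify HEAD's SSH signature by hand
commit=$(git rev-parse HEAD)
# The identity to assert comes from the commit metadata (committer email)
ident=$(git log -1 --format='%ce' "$commit")
# The signed payload is the raw commit object minus its gpgsig header
git cat-file commit "$commit" | \
    sed '/^gpgsig /,/^ -----END SSH SIGNATURE-----$/d' > /tmp/payload
# The signature itself is that gpgsig header with its prefixes stripped
git cat-file commit "$commit" | \
    sed -n '/^gpgsig /,/^ -----END SSH SIGNATURE-----$/p' | \
    sed -e 's/^gpgsig //' -e 's/^ //' > /tmp/commit.sig
ssh-keygen -Y verify -f /tmp/allowed_signers -I "$ident" \
    -n git -s /tmp/commit.sig < /tmp/payload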
But why would you want to? The current approach of managing keys for git isn't ideal - you can kind of piggy-back off github/gitlab SSH key infrastructure, but if you're an enterprise using SSH certificates for access then your users don't necessarily have enrolled keys to start with. And using certificates gives you extra benefits, such as having your CA verify that keys are hardware-backed before issuing a cert. Want to ensure that whoever made a commit was actually on an authorised laptop? Now you can!
I'll probably spend a little while looking into whether it's plausible to make the git verification code work with certificates or whether the right thing is to fix up ssh-keygen -Y find-principals to work with wildcard identities, but either way it's probably not much effort to get this working out of the box.
Edit to add: thanks to this commenter for pointing out that current OpenSSH git actually makes this work already!
[$] The perils of pinning
New stable kernels
Security updates for Thursday
[$] LWN.net Weekly Edition for September 15, 2022
Scaling Git’s garbage collection (GitHub blog)
To solve this problem, we turned to a long-discussed idea on the Git mailing list: cruft packs. The idea is simple: store an auxiliary list of mtime data alongside a pack containing just unreachable objects. To garbage collect a repository, Git places the unreachable objects in a pack. That pack is designated as a “cruft pack” because Git also writes the mtime data corresponding to each object in a separate file alongside that pack. This makes it possible to update the mtime of a single unreachable object without changing the mtimes of any other unreachable object.
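This is the cruft pack support that shipped in Git 2.37. As a quick sketch, assuming Git 2.37 or newer, trying it locally looks like:
$ git config gc.cruftPacks true
$ git gc --cruft
after which the repository's pack directory gains a pack-*.mtimes file alongside the cruft pack itself, holding the per-object mtime data described above.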
Unicode 15 released
This version adds 4,489 characters, bringing the total to 149,186 characters. These additions include two new scripts, for a total of 161 scripts, along with 20 new emoji characters, and 4,193 CJK (Chinese, Japanese, and Korean) ideographs.