News reader

Matthew Garrett: Why is there no consistent single signon API flow?

3 weeks 6 days ago
Single signon is a pretty vital part of modern enterprise security. You have users who need access to a bewildering array of services, and you want to be able to avoid the fallout of one of those services being compromised and your users having to change their passwords everywhere (because they're clearly going to be using the same password everywhere), or you want to be able to enforce some reasonable MFA policy without needing to configure it in 300 different places, or you want to be able to disable all user access in one place when someone leaves the company, or, well, all of the above. There's any number of providers for this, ranging from integration with a more general app service platform (eg, Microsoft or Google) to a third-party vendor (Okta, Ping, any number of bizarre companies). And, in general, they'll offer a straightforward mechanism to either issue OIDC tokens or manage SAML login flows, requiring that users present whatever set of authentication mechanisms you've configured.

This is largely optimised for web authentication, which doesn't seem like a huge deal - if I'm logging into Workday then being bounced to another site for auth seems entirely reasonable. The problem is when you're trying to gate access to a non-web app, at which point consistency in login flow is usually achieved by spawning a browser and somehow managing to submit the result back to the remote server. And this makes some degree of sense - browsers are where webauthn token support tends to live, and it also ensures the user always has the same experience.

But it works poorly for CLI-based setups. There are basically two options. You can use the device code authorisation flow, where you perform authentication on what is nominally a separate machine to the one requesting it (but in this case is actually the same one), and as a result end up with a straightforward mechanism for your users to be socially engineered into giving Johnny Badman a valid auth token despite webauthn nominally being unphishable (as described years ago). Or you reduce that risk somewhat by spawning a local server and POSTing the token back to it, which works locally but doesn't work well if you're trying to auth on a remote device. The user experience for both scenarios sucks, and it reduces a bunch of the worthwhile security properties that modern MFA supposedly gives us.
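
For concreteness, here is a minimal sketch of the device code flow as specified in RFC 8628, in Python; the endpoints and client ID are placeholders rather than any real provider's, and real deployments differ in details:

    # Minimal sketch of the OAuth 2.0 device authorization grant (RFC 8628).
    # The endpoints and client_id are placeholders, not any real provider's.
    import time
    import requests

    DEVICE_ENDPOINT = "https://idp.example.com/oauth2/device_authorization"
    TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"
    CLIENT_ID = "my-cli-client"

    resp = requests.post(DEVICE_ENDPOINT, data={"client_id": CLIENT_ID}).json()
    # The user is told to go authenticate "elsewhere" - the step that makes
    # the flow phishable, since nothing binds the code to this machine.
    print(f"Visit {resp['verification_uri']} and enter code {resp['user_code']}")

    while True:
        time.sleep(resp.get("interval", 5))
        token = requests.post(TOKEN_ENDPOINT, data={
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "device_code": resp["device_code"],
            "client_id": CLIENT_ID,
        }).json()
        if "access_token" in token:
            break  # the CLI now holds a bearer token
        if token.get("error") not in ("authorization_pending", "slow_down"):
            raise RuntimeError(token)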

There's a third approach, which is in some ways the obviously good approach and in other ways is obviously a screaming nightmare. All the browser is doing is sending a bunch of requests to a remote service and handling the response locally. Why don't we just do the same? Okta, for instance, has an API for auth. We just need to submit the username and password to that and see what answer comes back. This is great until you enable any kind of MFA, at which point the additional authz step is something that's only supported via the browser. And basically everyone else is the same.
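
As an illustration, primary authentication against Okta's classic Authn API looks roughly like the following sketch (the org URL and credentials are placeholders, error handling is omitted, and other providers differ):

    # Rough sketch of primary authentication against Okta's classic Authn
    # API. Org URL and credentials are placeholders.
    import requests

    resp = requests.post(
        "https://example.okta.com/api/v1/authn",
        json={"username": "alice@example.com", "password": "hunter2"},
    ).json()

    if resp["status"] == "SUCCESS":
        session_token = resp["sessionToken"]  # no MFA configured at all
    elif resp["status"] == "MFA_REQUIRED":
        # Here's the wall: the response lists enrolled factors, but actually
        # completing some of them is only supported via the browser flow.
        factors = resp["_embedded"]["factors"]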

Of course, when we say "That's only supported via the browser", the browser is still just running some code of some form and we can figure out what it's doing and do the same. Which is how you end up scraping constants out of Javascript embedded in the API response in order to submit that data back in the appropriate way. This is all possible but it's incredibly annoying and fragile - the contract with the identity provider is that a browser is pointed at a URL, not that any of the internal implementation remains consistent.
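
The resulting scraping looks something like this sketch; the variable and field names being extracted here are invented for illustration, which is precisely the problem:

    # Sketch of scraping a constant out of JavaScript embedded in an auth
    # response. "authConfig" and "stateToken" are invented names; none of
    # this is a stable contract, and a JS object literal isn't guaranteed
    # to be valid JSON anyway.
    import json
    import re
    import requests

    html = requests.get("https://idp.example.com/login").text
    match = re.search(r"var\s+authConfig\s*=\s*(\{.*?\});", html, re.DOTALL)
    if match is None:
        raise RuntimeError("the identity provider changed its page again")
    state_token = json.loads(match.group(1))["stateToken"]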

I've done this. I've implemented code to scrape an identity provider's auth responses to extract the webauthn challenges and feed those to a local security token without using a browser. I've also written support for forwarding those challenges over the SSH agent protocol to make this work with remote systems that aren't running a GUI. This week I'm working on doing the same again, because every identity provider does all of this differently.
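
The SSH agent side of that can lean on the agent protocol's generic extension mechanism (SSH_AGENTC_EXTENSION, message number 27 in draft-miller-ssh-agent); the extension name and payload layout in this sketch are hypothetical, since no standard one exists for webauthn challenges:

    # Sketch of framing a webauthn challenge as an SSH agent extension
    # message. SSH_AGENTC_EXTENSION is real (message 27); the extension
    # name and payload layout are invented for illustration.
    import struct

    SSH_AGENTC_EXTENSION = 27

    def ssh_string(data: bytes) -> bytes:
        return struct.pack(">I", len(data)) + data

    def frame_challenge(challenge: bytes) -> bytes:
        body = (bytes([SSH_AGENTC_EXTENSION])
                + ssh_string(b"webauthn-challenge@example.com")  # hypothetical
                + ssh_string(challenge))
        return ssh_string(body)  # every agent message is length-prefixed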

There's no fundamental reason all of this needs to be custom. It could be a straightforward "POST username and password, receive list of UUIDs describing MFA mechanisms, define how those MFA mechanisms work". That even gives space for custom auth factors (I'm looking at you, Okta Fastpass). But instead I'm left scraping JSON blobs out of Javascript and hoping nobody renames a field, even though I only care about extremely standard MFA mechanisms that shouldn't differ across different identity providers.
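
To make that concrete, such a standardised flow might look like the following; every endpoint and field name here is hypothetical, which is rather the point:

    # Entirely hypothetical sketch of the wished-for standard: POST
    # credentials, get back typed MFA mechanisms with well-defined
    # challenge formats. None of these endpoints or fields exist.
    import requests

    resp = requests.post("https://idp.example.com/v1/login", json={
        "username": "alice@example.com", "password": "hunter2",
    }).json()

    # e.g. [{"id": "5b8e...", "type": "webauthn"}, {"id": "91fc...", "type": "totp"}]
    for factor in resp["mfa_mechanisms"]:
        if factor["type"] == "webauthn":
            # A standard challenge format could be handed straight to a
            # local security token, no browser or scraping required.
            challenge = requests.post(
                "https://idp.example.com/v1/mfa/" + factor["id"] + "/challenge"
            ).json()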

Someone, please, write a spec for this. Please don't make it be me.


Graham: about Plasma’s X11 session

3 weeks 6 days ago

KDE contributor Nate Graham recently wrote about the KDE Project's plans for Plasma's X11 session. He notes that the project will continue to ensure that Plasma "continues to compile and deploy on X11" and isn't horribly broken. Major regressions will probably be fixed, eventually, but the writing is on the wall:

X11's upstream development has dropped off significantly in recent years, and X11 isn't able to perform up to the standards of what people expect today with respect to HDR, 10 bits-per-color monitors, other fancy monitor features, multi-monitor setups (especially with mixed DPIs or refresh rates), multi-GPU setups, screen tearing, security, crash robustness, input handling, and more.

As for when Plasma will drop support for X11? There's currently no firm timeline for this, and I certainly don't expect it to happen in the next year, or even the next two years. But that's just a guess; it depends on how quickly we implement everything on https://community.kde.org/Plasma/Wayland_Known_Significant_Issues. Our plan is to handle everything on that page such that even the most hardcore X11 user doesn't notice anything missing when they move to Wayland.

jzb

PostmarketOS 25.06: "the one with systemd"

4 weeks ago

The postmarketOS project, which creates a Linux distribution for mobile devices, announced last March that it was working on a version with systemd. That day has arrived with the announcement of version 25.06:

We considered supporting an upgrade from OpenRC to systemd in our upgrade script, but then decided against it as such an upgrade path might introduce its own bugs and we would rather spend the time improving other parts of postmarketOS. So for this one-time scenario we ask you to please reinstall postmarketOS to get from OpenRC to systemd. Thank you for your understanding!

jzb

[$] GNOME deepens systemd dependencies

4 weeks ago

Adrian Vovk, a GNOME contributor and member of its release team, recently announced in a blog post that GNOME would be adding new dependencies on systemd, and soon. The idea is to shed GNOME's homegrown service manager in favor of using systemd, and to improve GNOME's ability to run concurrent user sessions. However, the move is also going to throw a spanner in the works for the BSDs and Linux distributions without systemd when the changes take effect in the GNOME 49 release that is set for September.

jzb

Linux Media Summit 2025 recap (Collabora blog)

4 weeks ago
The Collabora blog has a summary, written by Nicolas Dufresne, about the Linux Media Summit held on May 13 in Nice, France. It was co-located with the Embedded Recipes conference and had sessions on stateless video encoders, camera support, staging drivers, memory accounting, and a multi-committer model for the media subsystem. "Our largest Media Summit to date brought together around 20 engaged participants. Engagement was strong, marked by thoughtful questions and lively discussions."
jake

Security updates for Monday

4 weeks ago
Security updates have been issued by AlmaLinux (libblockdev and open-vm-tools), Debian (debian-security-support, gdk-pixbuf, konsole, and node-send), Fedora (apache-commons-beanutils, chromium, clamav, dotnet9.0, libblockdev, mediawiki, mingw-python-setuptools, pam, perl-File-Find-Rule, python-pycares, python-setuptools, spdlog, udisks2, and xorg-x11-server-Xwayland), Mageia (chromium-browser-stable), Oracle (apache-commons-beanutils, container-tools:ol8, gimp:2.8, idm:DL1, perl-FCGI:0.78, and postgresql), Red Hat (container-tools:rhel8, delve, git-lfs, go-toolset:rhel8, grafana, kernel, mod_auth_openidc, and spice-client-win), SUSE (apache-commons-beanutils, apache2-mod_security2, distribution, gstreamer-plugins-good, icu, ignition, perl, python310, python311, python312, and python39), and Ubuntu (apache-log4j1.2 and botan).
jake

[$] How to write Rust in the kernel: part 1

1 month ago

The Linux kernel is seeing a steady accumulation of Rust code. As it becomes more prevalent, maintainers may want to know how to read, review, and test the Rust code that relates to their areas of expertise. Just as kernel C code is different from user-space C code, so too is kernel Rust code somewhat different from user-space Rust code. That fact makes Rust's extensive documentation of less use than it otherwise would be, and means that potential contributors with user-space experience will need some additional instruction. This article is the first in a multi-part series aimed at helping existing kernel contributors become familiar with Rust, and helping existing Rust programmers become familiar with what the kernel does differently from the typical Rust project.

daroc

[$] A distributed filesystem for archival systems: ngnfs

1 month ago
A new filesystem was the topic of a session led by Zach Brown at the 2025 Linux Storage, Filesystem, Memory Management, and BPF Summit (LSFMM+BPF). The ngnfs filesystem is not a "next generation" NFS, as might be guessed from the name; Brown said that he did not think about that linkage ("I hate naming so much") until it was pointed out to him by Chuck Lever in an email. It is, instead, a filesystem for enormous data sets that are mostly stored offline.
jake

Tag2upload is now ready for experimentation

1 month ago

Debian's long-awaited tag2upload service is now ready for Debian maintainers to use in some circumstances. Tag2upload makes it easier for maintainers to upload packages by allowing them to push a signed Git commit that will automatically be picked up and built, instead of pushing a build from their local machine. LWN covered the discussion around the service in July of last year. Given the timing of its readiness, it's likely to become more useful once Debian 13 ("trixie") is released.

Be very aware of the freeze! Do not just upload to unstable as your first test! Uploads to unstable, targeting trixie, can be done with tag2upload - but in most cases you will probably want to upload the same package to experimental first.
daroc

Security updates for Friday

1 month ago
Security updates have been issued by SUSE (apache2-mod_security2, augeas, ghc-pandoc, gstreamer, ignition, kernel, libblockdev, libxml2, nodejs20, openssl-3, pam_pkcs11, perl, python3, systemd, ucode-intel, webkit2gtk3, and xen) and Ubuntu (linux, linux-aws, linux-aws-5.4, linux-azure, linux-gcp, linux-gcp-5.4, linux-ibm, linux-ibm-5.4, linux-kvm, linux-oracle, linux-oracle-5.4, linux-xilinx-zynqmp, linux-aws-fips, linux-gcp-fips, python3.13, python3.12, and roundcube).
daroc

Matthew Garrett: My a11y journey

1 month ago
23 years ago I was in a bad place. I'd quit my first attempt at a PhD for various reasons that were, with hindsight, bad, and I was suddenly entirely aimless. I lucked into picking up a sysadmin role back at TCM where I'd spent a summer a year before, but that's not really what I wanted in my life. And then Hanna mentioned that her PhD supervisor was looking for someone familiar with Linux to work on making Dasher, one of the group's research projects, more usable on Linux. I jumped.

The timing was fortuitous. Sun were pumping money and developer effort into accessibility support, and the Inference Group had just received a grant from the Gatsby Foundation that involved working with the ACE Centre to provide additional accessibility support. And I was suddenly hacking on code that was largely ignored by most developers, supporting use cases that were irrelevant to most developers. Being in a relatively green field space sounds refreshing, until you realise that you're catering to actual humans who are potentially going to rely on your software to be able to communicate. That's somewhat focusing.

This was, uh, something of an on-the-job learning experience. I had to catch up with a lot of new technologies very quickly, but that wasn't the hard bit - what was difficult was realising I had to cater to people who were dealing with use cases that I had no experience of whatsoever. Dasher was extended to allow text entry into applications without needing to cut and paste. We added support for introspection of the current application's UI so menus could be exposed via the Dasher interface, allowing people to fly through menu hierarchies and pop open file dialogs. Text-to-speech was incorporated so people could rapidly enter sentences and have them spoken out loud.

But what sticks with me isn't the tech, or even the opportunities it gave me to meet other people working on the Linux desktop and forge friendships that still exist. It was the cases where I had the opportunity to work with people who could use Dasher as a tool to increase their ability to communicate with the outside world, whose lives were transformed for the better because of what we'd produced. Watching someone use your code and realising that you could write a three line patch that had a significant impact on the speed they could talk to other people is an incomparable experience. It's been decades and in many ways that was the most impact I've ever had as a developer.

I left after a year to work on fruitflies and get my PhD, and my career since then hasn't involved a lot of accessibility work. But it's stuck with me - every improvement in that space is something that has a direct impact on the quality of life of more people than you expect, but is also something that goes almost unrecognised. The people working on accessibility are heroes. They're making all the technology everyone else produces available to people who would otherwise be blocked from it. They deserve recognition, and they deserve a lot more support than they have.

But when we deal with technology, we deal with transitions. A lot of the Linux accessibility support depended on X11 behaviour that is now widely regarded as a set of misfeatures. It's not actually good to be able to inject arbitrary input into an arbitrary window, and it's not good to be able to arbitrarily scrape out its contents. X11 never had a model to permit this for accessibility tooling while blocking it for other code. Wayland does, but suffers from the surrounding infrastructure not being well developed yet. We're seeing that happen now, though - Gnome has been performing a great deal of work in this respect, and KDE is picking that up as well. There isn't a full correspondence between X11-based Linux accessibility support and Wayland, but for many users the Wayland accessibility infrastructure is already better than with X11.

That's going to continue improving, and it'll improve faster with broader support. We've somehow ended up with the bizarre politicisation of Wayland as being some sort of woke thing while X11 represents the Roman Empire or some such bullshit, but the reality is that there is no story for improving accessibility support under X11 and sticking to X11 is going to end up reducing the accessibility of a platform.

When you read anything about Linux accessibility, ask yourself whether you're reading something written by either a user of the accessibility features, or a developer of them. If they're neither, ask yourself why they actually care and what they're doing to make the future better.
