News reader

OpenShot 3.0 released

2 years 8 months ago
Version 3.0 of the OpenShot video editor is out.

One of the largest and most noticeable changes in OpenShot 3.0 is our improved video preview, resulting in smoother playback with fewer freezes and pauses during previewing. But to understand why things are so much smoother, we need to look deeper into our decoding engine. We have rearchitected our decoder to be much more resilient to missing packets and missing timestamps, and to better recognize when video or audio data is missing, so we can move on without pausing.

corbet

Google will be forced to delete inaccurate search results when asked to

2 years 8 months ago

The European Union's top court announced last Thursday that Google is obliged to delete search results concerning European individuals if the persons affected can clearly prove that the information is false. Under the right to be forgotten, European citizens have the right to ask Google and other search engines to delete outdated or embarrassing information about them […]

The post "Google will be forced to delete inaccurate search results when asked to" first appeared on Nemzeti Kibervédelmi Intézet.

NKI

The 6.1 kernel is out

2 years 8 months ago
Linus has released the 6.1 kernel; he is preparing for a tricky holiday merge window: So here we are, a week late, but last week was nice and slow, and I'm much happier about the state of 6.1 than I was a couple of weeks ago when things didn't seem to be slowing down.

Of course, that means that now we have the merge window from hell, just before the holidays, with me having some pre-holiday travel coming up too. So while delaying things for a week was the right thing to do, it does make the timing for the 6.2 merge window awkward.

Headline features in 6.1 include reworked, LLVM-based control-flow integrity, initial support for kernel development in Rust, support for destructive BPF programs, some significant io_uring performance improvements, better user-space control over transparent huge-page creation, improved memory-tiering support, fundamental memory-management rewrites in the form of the multi-generational LRU and the maple tree data structure, the kernel memory sanitizer, and much more. See the LWN merge-window summaries (part 1, part 2) and the KernelNewbies 6.1 page for more information.

corbet

BIOS Memory Map for vmd(8) Rewrite in Progress

2 years 8 months ago
A rewritten version of vmd(8)'s BIOS memory map handling could soon be appearing in -current.

In a recent post to tech@, supplemented by an accompanying post to ports@ since the changes also touch SeaBIOS, Dave Voutila (dv@) describes the changes and the motivation behind them:

In short, this nukes some old hacks we've been carrying to communicate things like >4GB of memory to SeaBIOS via CMOS. It assumes vmd(8) properly builds and conveys a bios e820 memory map via the fw_cfg api.
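For context, an e820 memory map is just an array of (base, length, type) records. The sketch below packs a toy two-entry map, assuming the conventional 20-byte little-endian entry layout used for QEMU-style fw_cfg `etc/e820` data; the helper name and the example regions are ours, not vmd(8)'s actual code:

```python
import struct

# e820 entry types, per the long-standing BIOS/ACPI convention
E820_RAM = 1
E820_RESERVED = 2

def pack_e820_entry(base: int, length: int, entry_type: int) -> bytes:
    # u64 base, u64 length, u32 type, little-endian: 20 bytes per entry
    return struct.pack("<QQI", base, length, entry_type)

# A toy two-entry map: conventional low RAM plus a region above 4GB,
# the case the old CMOS hack existed to communicate to SeaBIOS.
table = (pack_e820_entry(0x0, 0x9fc00, E820_RAM) +
         pack_e820_entry(0x1_0000_0000, 0x4000_0000, E820_RAM))
```

A guest firmware reading this blob walks it entry by entry; the >4GB region no longer needs a side channel because it is an ordinary record in the table.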

Follow the links to the messages for the full story, and test the patches if you feel up to it. While the ETA of the upcoming commit is not yet certain, expect to see this change go in soon.

Matthew Garrett: On-device WebAuthn and what makes it hard to do well

2 years 8 months ago
WebAuthn improves login security a lot by making it significantly harder for a user's credentials to be misused - a WebAuthn token will only respond to a challenge if it's issued by the site a secret was issued to, and in general will only do so if the user provides proof of physical presence[1]. But giving people tokens is tedious and also I have a new laptop which only has USB-C but does have a working fingerprint reader and I hate the aesthetics of the Yubikey 5C Nano, so I've been thinking about what WebAuthn looks like done without extra hardware.

Let's talk about the broad set of problems first. For this to work you want to be able to generate a key in hardware (so it can't just be copied elsewhere if the machine is compromised), prove to a remote site that it's generated in hardware (so the remote site isn't confused about what security assertions you're making), and tie use of that key to the user being physically present (which may range from "I touched this object" to "I presented biometric evidence of identity"). What's important here is that a compromised OS shouldn't be able to just fake a response. For that to hold, the chain from proof of physical presence to the secret needs to be outside the control of the OS.

For a physical security token like a Yubikey, this is pretty easy. The communication protocol involves the OS passing a challenge and the source of the challenge to the token. The token then waits for a physical touch, verifies that the source of the challenge corresponds to the secret it's being asked to respond to the challenge with, and provides a response. At the point where keys are being enrolled, the token can generate a signed attestation that it generated the key, and a remote site can then conclude that this key is legitimately sequestered away from the OS. This all takes place outside the control of the OS, meeting all the goals described above.
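The origin-binding and presence checks described above can be sketched in a few lines. This is a toy model under stated assumptions: the `TokenCredential` class, the HMAC standing in for a real asymmetric signature, and the `user_present` flag are all illustrative, not CTAP's actual API:

```python
import hashlib
import hmac
import os

class TokenCredential:
    """Toy model of one credential held on a security token.

    Real tokens sign with an asymmetric key (e.g. ES256); an HMAC over
    the challenge stands in for that signature here.
    """
    def __init__(self, rp_id: str):
        self.rp_id = rp_id            # the site this secret was issued to
        self.secret = os.urandom(32)  # never leaves the token

    def get_assertion(self, rp_id: str, challenge: bytes,
                      user_present: bool) -> bytes:
        # Refuse unless the challenge comes from the enrolling site;
        # this is what stops a phishing site reusing the credential.
        if rp_id != self.rp_id:
            raise PermissionError("credential not scoped to this site")
        # Refuse without proof of physical presence (touch/biometric).
        if not user_present:
            raise PermissionError("user presence required")
        return hmac.new(self.secret, challenge, hashlib.sha256).digest()

cred = TokenCredential("example.com")
resp = cred.get_assertion("example.com", b"server-nonce", user_present=True)
```

The point of the model is where the checks live: both refusals happen inside the token, so a compromised OS can relay challenges but cannot skip either check.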

How about Macs? The easiest approach here is to make use of the secure enclave and TouchID. The secure enclave is a separate piece of hardware built into either a support chip (for x86-based Macs) or directly on the SoC (for ARM-based Macs). It's capable of generating keys and also capable of producing attestations that said key was generated on an Apple secure enclave ("Apple Anonymous Attestation", which has the interesting property of attesting that it was generated on Apple hardware, but not which Apple hardware, avoiding a lot of privacy concerns). These keys can have an associated policy that says they're only usable if the user provides a legitimate touch on the fingerprint sensor, which means it can not only assert physical presence of a user, it can assert physical presence of an authorised user. Communication between the fingerprint sensor and the secure enclave is a private channel that the OS can't meaningfully interfere with, which means even a compromised OS can't fake physical presence responses (eg, the OS can't record a legitimate fingerprint press and then send that to the secure enclave again in order to mimic the user being present - the secure enclave requires that each response from the fingerprint sensor be unique). This achieves our goals.

The PC space is more complicated. In the Mac case, communication between the biometric sensors (be that TouchID or FaceID) occurs in a controlled communication channel where all the hardware involved knows how to talk to the other hardware. In the PC case, the typical location where we'd store secrets is in the TPM, but TPMs conform to a standardised spec that has no understanding of this sort of communication, and biometric components on PCs have no way to communicate with the TPM other than via the OS. We can generate keys in the TPM, and the TPM can attest to those keys being TPM-generated, which means an attacker can't exfiltrate those secrets and mimic the user's token on another machine. But in the absence of any explicit binding between the TPM and the physical presence indicator, the association needs to be up to code running on the CPU. If that's in the OS, an attacker who compromises the OS can simply ask the TPM to respond to any challenge it wants, skipping the biometric validation entirely.

Windows solves this problem in an interesting way. The Windows Hello Enhanced Signin doesn't add new hardware, but relies on the use of virtualisation. The agent that handles WebAuthn responses isn't running in the OS, it's running in another VM that's entirely isolated from the OS. Hardware that supports this model has a mechanism for proving its identity to the local code (eg, fingerprint readers that support this can sign their responses with a key that has a certificate that chains back to Microsoft). Additionally, the secrets that are associated with the TPM can be held in this VM rather than in the OS, meaning that the OS can't use them directly. This means we have a flow where a browser asks for a WebAuthn response, that's passed to the VM, the VM asks the biometric device for proof of user presence (including some sort of random value to prevent the OS just replaying that), receives it, and then asks the TPM to generate a response to the challenge. Compromising the OS doesn't give you the ability to forge the responses between the biometric device and the VM, and doesn't give you access to the secrets in the TPM, so again we meet all our goals.

On Linux (and other free OSes), things are less good. Projects like tpm-fido generate keys on the TPM, but there's no secure channel between that code and whatever's providing proof of physical presence. An attacker who compromises the OS may not be able to copy the keys to their own system, but while they're on the compromised system they can respond to as many challenges as they like. That's not the same security assertion we have in the other cases.

Overall, Apple's approach is the simplest - having binding between the various hardware components involved means you can just ignore the OS entirely. Windows doesn't have the luxury of having as much control over what the hardware landscape looks like, so has to rely on virtualisation to provide a security barrier against a compromised OS. And in Linux land, we're fucked. Who do I have to pay to write a lightweight hypervisor that runs on commodity hardware and provides an environment where we can run this sort of code?


[1] As I discussed recently there are scenarios where these assertions are less strong, but even so


[$] mimmutable() for OpenBSD

2 years 8 months ago
Virtual-memory systems provide a great deal of flexibility in how memory can be mapped and protected. Unfortunately, memory-management flexibility can also be useful to attackers bent on compromising a system. In the OpenBSD world, a new system call is being added to reduce this flexibility; it is, though, a system call that almost no code is expected to use.
corbet

Security updates for Friday

2 years 8 months ago
Security updates have been issued by Debian (leptonlib), Fedora (woff), Red Hat (grub2), Slackware (emacs), SUSE (busybox, chromium, java-1_8_0-openjdk, netatalk, and rabbitmq-server), and Ubuntu (gcc-5, gccgo-6, glibc, protobuf, and python2.7, python3.10, python3.6, python3.8).
jake

Matthew Garrett: End-to-end encrypted messages need more than libsignal

2 years 8 months ago
(Disclaimer: I'm not a cryptographer, and I do not claim to be an expert in Signal. I've had this read over by a couple of people who are so with luck there's no egregious errors, but any mistakes here are mine)

There are indications that Twitter is working on end-to-end encrypted DMs, likely building on work that was done back in 2018. This made use of libsignal, the reference implementation of the protocol used by the Signal encrypted messaging app. There seems to be a fairly widespread perception that, since libsignal is widely deployed (it's also the basis for WhatsApp's e2e encryption) and open source and has been worked on by a whole bunch of cryptography experts, choosing to use libsignal means that 90% of the work has already been done. And in some ways this is true - the security of the protocol is probably just fine. But there's rather more to producing a secure and usable client than just sprinkling on some libsignal.

(Aside: To be clear, I have no reason to believe that the people who were working on this feature in 2018 were unaware of this. This thread kind of implies that the practical problems are why it didn't ship at the time. Given the reduction in Twitter's engineering headcount, and given the new leadership's espousal of political and social perspectives that don't line up terribly well with the bulk of the cryptography community, I have doubts that any implementation deployed in the near future will get all of these details right)

I was musing about this last night and someone pointed out some prior art. Bridgefy is a messaging app that uses Bluetooth as its transport layer, allowing messaging even in the absence of data services. The initial implementation involved a bunch of custom cryptography, enabling attacks ranging from denial of service to extracting plaintext from encrypted messages. In response to criticism, Bridgefy replaced their custom cryptographic protocol with libsignal, but that didn't fix everything. One issue is the potential for MITMing - keys are shared on first communication, but the client provided no mechanism to verify those keys, so a hostile actor could pretend to be a user, receive messages intended for that user, and then reencrypt them with the user's actual key. This isn't a weakness in libsignal, in the same way that the ability to add a custom certificate authority to a browser's trust store isn't a weakness in TLS. In Signal the app key distribution is all handled via Signal's servers, so if you're just using libsignal you need to implement the equivalent yourself.

The other issue was more subtle. libsignal has no awareness at all of the Bluetooth transport layer. Deciding where to send a message is up to the client, and these routing messages were spoofable. Any phone in the mesh could say "Send messages for Bob here", and other phones would do so. This should have been a denial of service at worst, since the messages for Bob would still be encrypted with Bob's key, so the attacker would be able to prevent Bob from receiving the messages but wouldn't be able to decrypt them. However, the code to decide where to send the message and the code to decide which key to encrypt the message with were separate, and the routing decision was made before the encryption key decision. An attacker could send a message saying "Route messages for Bob to me", and then another saying "Actually lol no I'm Mallory". If a message was sent between those two messages, the message intended for Bob would be delivered to Mallory's phone and encrypted with Mallory's key.

Again, this isn't a libsignal issue. libsignal encrypted the message using the key bundle it was told to encrypt it with, but the client code gave it a key bundle corresponding to the wrong user. A race condition in the client logic allowed messages intended for one person to be delivered to and readable by another.
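The race can be illustrated with a toy model. None of these names come from Bridgefy's actual code, and the single-lookup repair sketched at the end is just one possible fix:

```python
# Toy model of the race: the routing table and the key table are
# consulted at different moments, so a spoofed routing update landing
# between the two lookups yields a message addressed to one party but
# encrypted for another.

routes = {}  # recipient name -> mesh address (spoofable by any peer)
keys = {}    # mesh address -> public key bundle

def send_racy(recipient, plaintext, encrypt):
    addr = routes[recipient]   # routing decision made first...
    # ...window in which a "Route Bob to me" / "actually I'm Mallory"
    # pair of mesh messages can change who owns `addr`...
    bundle = keys[addr]        # ...key decision made separately
    return addr, encrypt(bundle, plaintext)

def send_atomic(recipient, plaintext, encrypt, directory):
    # Fix sketch: resolve the address *and* the key bundle from one
    # authoritative record in a single lookup, so they cannot disagree.
    record = directory[recipient]  # {"addr": ..., "bundle": ...}
    return record["addr"], encrypt(record["bundle"], plaintext)
```

With the racy version, once the attacker has claimed Bob's route, the subsequent key lookup happily selects the attacker's bundle; the atomic version makes the address/key pair a single fact.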

This isn't the only case where client code has used libsignal poorly. The Bond Touch is a Bluetooth-connected bracelet that you wear. Tapping it or drawing gestures sends a signal to your phone, which culminates in a message being sent to someone else's phone which sends a signal to their bracelet, which then vibrates and glows in order to indicate a specific sentiment. The idea is that you can send brief indications of your feelings to someone you care about by simply tapping on your wrist, and they can know what you're thinking without having to interrupt whatever they're doing at the time. It's kind of sweet in a way that I'm not, but it also advertised "Private Spaces", a supposedly secure way to send chat messages and pictures, and that seemed more interesting. I grabbed the app and disassembled it, and found it was using libsignal. So I bought one and played with it, including dumping the traffic from the app. One important thing to realise is that libsignal is just the protocol library - it doesn't implement a server, and so you still need some way to get information between clients. And one of the bits of information you have to get between clients is the public key material.

Back when I played with this earlier this year, key distribution was implemented by uploading the public key to a database. The other end would download the public key, and everything works out fine. And this doesn't sound like a problem, given that the entire point of a public key is to be, well, public. Except that there was no access control on this database, and the filenames were simply phone numbers, so you could overwrite anyone's public key with one of your choosing. This didn't let you cause messages intended for them to be delivered to you, so exploiting this for anything other than a DoS would require another vulnerability somewhere, but there are contrived situations where this would potentially allow the privacy expectations to be broken.

Another issue with this app was its handling of one-time prekeys. When you send someone new a message via Signal, it's encrypted with a key derived from not only the recipient's identity key, but also from what's referred to as a "one-time prekey". Users generate a bunch of keypairs and upload the public half to the server. When you want to send a message to someone, you ask the server for one of their one-time prekeys and use that. Decrypting this message requires using the private half of the one-time prekey, and the recipient deletes it afterwards. This means that an attacker who intercepts a bunch of encrypted messages over the network and then later somehow obtains the long-term keys still won't be able to decrypt the messages, since they depended on keys that no longer exist. Since these one-time prekeys are only supposed to be used once (it's in the name!) there's a risk that they can all be consumed before they're replenished. The spec regarding pre-keys says that servers should consider rate-limiting this, but the protocol also supports falling back to just not using one-time prekeys if they're exhausted (you lose the forward secrecy benefits, but it's still end-to-end encrypted). This implementation not only implemented no rate-limiting, making it easy to exhaust the one-time prekeys, it then also failed to fall back to running without them. Another easy way to force DoS.
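What the two missing behaviours might look like server-side can be sketched as follows; the class, method names, and the rate limit are illustrative assumptions, not Signal's actual server API:

```python
import time
from collections import deque

class PrekeyServer:
    """Sketch of the behaviour the post says was absent: rate-limit
    one-time prekey fetches, and report exhaustion (so the client can
    fall back to a session without a one-time prekey) instead of
    failing outright. Limits here are arbitrary."""

    def __init__(self, prekeys, max_per_minute=10):
        self.prekeys = deque(prekeys)       # uploaded public halves
        self.max_per_minute = max_per_minute
        self.fetch_times = deque()

    def fetch_prekey(self, now=None):
        now = time.monotonic() if now is None else now
        # Discard fetch timestamps that have aged out of the window.
        while self.fetch_times and now - self.fetch_times[0] > 60:
            self.fetch_times.popleft()
        if len(self.fetch_times) >= self.max_per_minute:
            raise RuntimeError("rate limited")   # slows deliberate draining
        self.fetch_times.append(now)
        # None = exhausted: the client should proceed without a
        # one-time prekey (losing some forward secrecy, but still
        # end-to-end encrypted) rather than refusing to send.
        return self.prekeys.popleft() if self.prekeys else None
```

The implementation criticised above did neither: prekeys could be drained at full speed, and once they were gone the client simply stopped working.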

(And, remember, a successful DoS on an encrypted communications channel potentially results in the users falling back to an unencrypted communications channel instead. DoS may not break the encrypted protocol, but it may be sufficient to obtain plaintext anyway)

And finally, there's ClearSignal. I looked at this earlier this year - it's avoided many of these pitfalls by literally just being a modified version of the official Signal client and using the existing Signal servers (it's even interoperable with Actual Signal), but it's then got a bunch of other weirdness. The Signal database (I /think/ including the keys, but I haven't completely verified that) gets backed up to an AWS S3 bucket, identified using something derived from a key using KERI, and I've seen no external review of that whatsoever. So, who knows. It also has crash reporting enabled, and it's unclear how much internal state it sends on crashes, and it's also based on an extremely old version of Signal with the "You need to upgrade Signal" functionality disabled.

Three clients all using libsignal in one form or another, and three clients that do things wrong in ways that potentially have a privacy impact. Again, none of these issues are down to issues with libsignal, they're all in the code that surrounds it. And remember that Twitter probably has to worry about other issues as well! If I lose my phone I'm probably not going to worry too much about the messages sent through my weird bracelet app being gone forever, but losing all my Twitter DMs would be a significant change in behaviour from the status quo. But that's not an easy thing to do when you're not supposed to have access to any keys! Group chats? That's another significant problem to deal with. And making the messages readable through the web UI as well as on mobile means dealing with another set of key distribution issues. Get any of this wrong in one way and the user experience doesn't line up with expectations, get it wrong in another way and the worst case involves some of your users in countries with poor human rights records being executed.

Simply building something on top of libsignal doesn't mean it's secure. If you want meaningful functionality you need to build a lot of infrastructure around libsignal, and doing that well involves not just competent development and UX design, but also a strong understanding of security and cryptography. Given that Twitter has lost most of its engineering team and is led by someone who's alienated all the cryptographers I know, I wouldn't be optimistic.


PHP 8.2.0 released

2 years 8 months ago
Version 8.2.0 of the PHP language is out.

PHP 8.2 is a major update of the PHP language. It contains many new features, including readonly classes, null, false, and true as stand-alone types, deprecated dynamic properties, performance improvements, and more.

corbet

[$] Bugs and fixes in the kernel history

2 years 8 months ago
Each new kernel release fixes a lot of bugs, but each release also introduces new bugs of its own. That leads to a fundamental question: is the kernel community fixing bugs more quickly than it is adding them? The answer is less than obvious but, if it could be found, it would give an important indication of the long-term future of the kernel code base. While digging into the kernel's revision history cannot give a definitive answer to that question, it can provide some hints as to what that answer might be.
corbet

Security updates for Thursday

2 years 8 months ago
Security updates have been issued by Debian (dlt-daemon, jqueryui, and virglrenderer), Fedora (firefox, vim, and woff), Oracle (kernel and nodejs:18), Red Hat (java-1.8.0-ibm and redhat-ds:11), Slackware (python3), SUSE (buildah, matio, and osc), and Ubuntu (heimdal and postgresql-9.5).
jake

Fuzzing ping(8) … and finding a 24 year old bug.

2 years 8 months ago
Following the recent discovery of a security issue in FreeBSD's ping(8), OpenBSD developer Florian Obser (florian@) wanted to know if something similar lurked in the OpenBSD code as well.

The result of his investigation can be found in the article "Fuzzing ping(8) … and finding a 24 year old bug.", which leads in:

FreeBSD had a security fluctuation in their implementation of ping(8) the other day. As someone who has done a lot of work on ping(8) in OpenBSD this tickled my interests.

What about OpenBSD?

ping(8) is ancient:

Read the rest of the article here. It is quite a story, with lessons to be considered by anyone working on code that's been around a few years or decades.

As Florian mentions in his post, the fix has been committed to the repo (with a subsequent tweak).

[$] Composefs for integrity protection and data sharing

2 years 8 months ago
A read-only filesystem that will transparently share file data between disparate directory trees, while also providing integrity verification for the data and the directory metadata, was recently posted as an RFC to the linux-kernel mailing list. Composefs was developed by Alexander Larsson (who posted it) and Giuseppe Scrivano for use by podman containers and OSTree (or "libostree" as it is now known) root directories, but there are likely others who want the abilities it provides. So far, there has been little response, either with feedback or complaints, but it is a small patch set (around 2K lines of code) and generally self-contained since it is a filesystem, so it would not be a surprise to see it appear in some upcoming kernel.
jake

Security updates for Wednesday

2 years 8 months ago
Security updates have been issued by Debian (cgal, ruby-rails-html-sanitizer, and xfce4-settings), Red Hat (dbus, grub2, kernel, pki-core, and usbguard), Scientific Linux (pki-core), SUSE (bcel, LibVNCServer, and xen), and Ubuntu (ca-certificates and u-boot).
corbet

Android's December update fixes 81 security vulnerabilities

2 years 8 months ago

Google has released the December 2022 security update for Android, which fixes 4 critical vulnerabilities, among them a remote code execution flaw exploitable via the Bluetooth component. All four critical-severity flaws affect Android versions 10 through 13: CVE-2022-20472 – a remote code execution flaw in the Android Framework. CVE-2022-20473 – also a remote code execution flaw in the Android Framework. CVE-2022-20411 […]

The post "Android's December update fixes 81 security vulnerabilities" first appeared on Nemzeti Kibervédelmi Intézet.

NKI

Rust support coming to GCC

2 years 8 months ago
Gccrs — the Rust front-end for GCC — has been approved for merging into the GCC trunk. That means that the next GCC release will be able to compile Rust, sort of; as gccrs developer Arthur Cohen warns: "This is very much an extremely experimental compiler and will still get a lot of changes in the coming weeks and months up until the release". See this article and this one for more details on the current status of gccrs.
corbet