Hírolvasó (news reader)

Security updates for Tuesday

1 year, 7 months ago
Security updates have been issued by Debian (curl, openssh, osslsigncode, and putty), Fedora (chromium, filezilla, libfilezilla, mingw-gstreamer1, mingw-gstreamer1-plugins-bad-free, mingw-gstreamer1-plugins-base, mingw-gstreamer1-plugins-good, opensc, thunderbird, unrealircd, and xorg-x11-server-Xwayland), Gentoo (Ceph, FFmpeg, Flatpak, Gitea, and SABnzbd), Mageia (chromium-browser-stable), Slackware (kernel and postfix), and SUSE (cppcheck, distribution, gstreamer-plugins-bad, jbigkit, and ppp).
jake

Ruby 3.3.0 Released

1 year, 8 months ago
As is the tradition for the Ruby programming language, December 25 is the date for new major releases; this year, Ruby 3.3.0 was released. It comes with a new parser called "Prism" that is "both a C library that will be used internally by CRuby and a Ruby gem that can be used by any tooling which needs to parse Ruby code". The release also has many performance improvements, especially in the YJIT (Yet another Ruby JIT) just-in-time compiler. Ruby 3.3 also adds RJIT, an experimental, Ruby-based JIT compiler that targets x86_64. There are lots of other improvements and new features described in the announcement.
jake

Kernel prepatch 6.7-rc7

1 year, 8 months ago
The 6.7-rc7 kernel prepatch is out for testing.

Anyway, rc7 itself looks fairly normal. It's actually a bit bigger than rc6 was, but not hugely so, and nothing in here looks at all strange. Please do give it a whirl if you have the time and the energy, but let's face it, I expect things to be very quiet and this to be one of those "nothing happens" weeks. Because even if you aren't celebrating this time of year, you might take advantage of the peace and quiet.

corbet

Stable kernel 5.15.145

1 year, 8 months ago
The 5.15.145 stable kernel has been released. It consists mostly of fixes to the ksmbd subsystem, which had been marked as broken in the 5.15.x kernels due to (until now) a lack of support.
corbet

Darktable 4.6.0 released

1 year, 8 months ago
Version 4.6.0 of the darktable photo editor has been released. Changes include a new "rgb primaries" module that "can be used for delicate color corrections as well as creative color grading", enhancements to the sigmoid module, some performance improvements, and more. (LWN looked at darktable in 2022).
corbet

Security updates for Friday

1 year, 8 months ago
Security updates have been issued by Debian (bluez, chromium, gst-plugins-bad1.0, openssh, and thunderbird), Fedora (chromium, firefox, kernel, libssh, nss, opensc, and thunderbird), Gentoo (Arduino, Exiv2, LibRaw, libssh, NASM, and QtWebEngine), Mageia (gstreamer), and SUSE (gnutls, gstreamer-plugins-bad, libcryptopp, libqt5-qtbase, ppp, tinyxml, xorg-x11-server, and zbar).
jake

The 6.7 kernel will be released on January 7

1 year, 8 months ago
Unsurprisingly, Linus Torvalds has let it be known that he will do a 6.7-rc8 release (rather than 6.7 final) on December 31, thus avoiding opening the 6.8 merge window on New Year's Day.

Just FYI - my current plan is that -rc7 will happen this Saturday (because I still follow the Finnish customs of Christmas _Eve_ being the important day, so Sunday I'll be off), and then if anything comes in that week - which it will do, even if networking might be offline - I'll do an rc8 the week after.

Then, unless anything odd happens, the final 6.7 release will be Jan 7th, and so the merge window for 6.8 will open Jan 8th.

corbet

[$] Data-type profiling for perf

1 year, 8 months ago
Tooling for profiling the effects of memory usage and layout has always lagged behind that for profiling processor activity, so Namhyung Kim's patch set for data-type profiling in perf is a welcome addition. It provides aggregated breakdowns of memory accesses by data type that can inform changes to structure layout and access patterns. Existing tools have either focused on profiling allocations, like heaptrack, or accounted for memory accesses only at the address level, like perf mem. This new work builds on the latter, using DWARF debugging information to correlate memory operations with their source-level types.
corbet

Announcing `async fn` and return-position `impl Trait` in traits (Rust Blog)

1 year, 8 months ago
The Rust Blog announces the stabilization of a couple of trait features aimed at improving support for async code:

Ever since the stabilization of RFC #1522 in Rust 1.26, Rust has allowed users to write impl Trait as the return type of functions (often called "RPIT"). This means that the function returns "some type that implements Trait". This is commonly used to return closures, iterators, and other types that are complex or impossible to write explicitly. [...]

Starting in Rust 1.75, you can use return-position impl Trait in trait (RPITIT) definitions and in trait impls. For example, you could use this to write a trait method that returns an iterator: [...]

So what does all of this have to do with async functions? Well, async functions are "just sugar" for functions that return -> impl Future. Since these are now permitted in traits, we also permit you to write traits that use async fn.
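
As a rough illustration of both features (a minimal sketch with made-up trait and type names, not code from the Rust blog post), a trait can now return an unnamed iterator type and declare an async method directly:

// Return-position impl Trait in a trait (RPITIT): the method promises
// "some iterator over u32" without naming the concrete type.
trait Container {
    fn items(&self) -> impl Iterator<Item = u32>;
}

struct Wrapped(Vec<u32>);

impl Container for Wrapped {
    fn items(&self) -> impl Iterator<Item = u32> {
        self.0.iter().copied()
    }
}

// `async fn` in a trait is sugar for a method returning `impl Future`,
// so as of Rust 1.75 this is allowed too:
trait Fetcher {
    async fn fetch(&self, key: u32) -> String;
}

struct InMemory;

impl Fetcher for InMemory {
    async fn fetch(&self, key: u32) -> String {
        format!("value for {key}")
    }
}

fn main() {
    let c = Wrapped(vec![1, 2, 3]);
    assert_eq!(c.items().sum::<u32>(), 6);
    // Calling Fetcher::fetch would need an async runtime, which is omitted here.
}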

corbet

Security updates for Thursday

1 year, 8 months ago
Security updates have been issued by Debian (firefox-esr), Fedora (kernel), Mageia (bluez), Oracle (fence-agents, gstreamer1-plugins-bad-free, opensc, openssl, postgresql:10, and postgresql:12), Red Hat (postgresql:15 and tigervnc), Slackware (proftpd), and SUSE (docker, rootlesskit, firefox, go1.20-openssl, go1.21-openssl, gstreamer-plugins-bad, libreoffice, libssh2_org, poppler, putty, rabbitmq-server, wireshark, xen, xorg-x11-server, and xwayland).
jake

Rusty Russell: OP_CAT beyond 520 bytes

1 year, 8 months ago

The original OP_CAT proposal limited the result to 520 bytes, but we want more for covenants which analyze scripts (especially since OP_CAT itself makes scripting more interesting).

My post showed that it’s fairly simple to allow larger sizes (instead of a limit of 1,000 stack elements of a maximum of 520 bytes each, we reduce the element limit for each element over 520 bytes so that the total is still capped the same).
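
One plausible way to read that accounting (a sketch with invented names, not necessarily the exact scheme from the earlier post): keep the existing 1,000 × 520 = 520,000-byte total cap, and charge an oversized element one element "slot" per 520 bytes it occupies.

const MAX_ELEMENTS: usize = 1_000;
const MAX_ELEMENT_SIZE: usize = 520;

// An element of up to 520 bytes costs one slot, a 1,040-byte element two, etc.
fn slots_used(element_len: usize) -> usize {
    element_len.div_ceil(MAX_ELEMENT_SIZE).max(1)
}

// The stack is legal as long as the slots in use stay within the old
// 1,000-element limit, so the total bytes stay capped at 520,000.
fn stack_within_limits(stack: &[Vec<u8>]) -> bool {
    stack.iter().map(|e| slots_used(e.len())).sum::<usize>() <= MAX_ELEMENTS
}

fn main() {
    let big = vec![0u8; 260_000];                                    // 500 slots
    assert!(stack_within_limits(&[big.clone(), vec![0u8; 32]]));     // 501 slots: fine
    assert!(!stack_within_limits(&[big.clone(), big, vec![0u8; 1_000]])); // 1,002 slots: rejected
}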

Hashing of Large Stack Objects

But consider hashing operations, such as OP_SHA256. Prior to OP_CAT, such an operation could only be made to hash at most 520 bytes (9 hash rounds) using three opcodes:

OP_DUP OP_SHA256 OP_DROP

That’s 3 hash rounds per opcode. With OP_CAT and no 520-byte stack element limit we can make a 260,000-byte stack element; that’s 4062 hash rounds, or 1354 per opcode, which is 450x as expensive, so we definitely need to think about it!

A quick benchmark shows OpenSSL’s sha256 on my laptop takes about 115 microseconds to hash 260k: a block full of such instructions would take about 150 seconds to validate!
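
As a rough illustration (not from the original post), the arithmetic behind that estimate works out as follows, assuming roughly 4MB of witness data in a full block:

fn main() {
    // Each OP_DUP OP_SHA256 OP_DROP loop is 3 witness bytes and hashes the
    // 260,000-byte element, which takes ~115 microseconds with OpenSSL.
    let witness_bytes = 4_000_000.0;   // ~4MB of witness space, assumed
    let loops = witness_bytes / 3.0;   // ~1.33 million hashing operations
    let seconds = loops * 115e-6;
    println!("~{seconds:.0} seconds to validate"); // ~153 seconds, i.e. the ~150s above
}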

So we need to have some limit, and there are three obvious ways to do it:

  1. Give up, and don’t allow OP_CAT to produce more than 520 bytes.
  2. Use some higher limit for OP_CAT, but still less than 260,000.
  3. Use a weight budget, like BIP-342 does for checksig.

A quick benchmark on my laptop shows that we can hash about 48k (using the OpenSSL routines) in the time we do a single ECDSA signature verification (and Taproot charges 50 witness weight for each signature routine).

A simple limit would be to say “1 instruction lets you hash about 1k” (48k divided by a signature’s 50 units of witness weight is roughly 1k per weight unit, i.e. per opcode byte) and “the tightest loop we can find is three instructions”, and so limit OP_CAT to producing 3k. But that seems a little arbitrary, and still quite restrictive for future complex scripts.

My Proposal: A Dynamic Limit for Hashing

A dynamic BIP-342-style approach would be to have a “hashing budget” of some number times the total witness weight. SHA256 uses blocks of 64 bytes, but it is easier to simply count bytes, and we don’t need this level of precision.

I propose we allow a budget of 520 bytes of hashing for each witness byte: this gives us some headroom from the ~1k measurement above, and cannot make any currently-legal script illegal, since the hashing opcode’s own byte already budgets enough to hash the maximal currently-possible (520-byte) stack element.

This budget is easy to calculate: 520 times total witness weight, and would be consumed by every byte hashed by OP_RIPEMD160, OP_SHA1, OP_SHA256, OP_HASH160, OP_HASH256. I’ve ignored that some of these hash twice, since the second hash amounts to a single block.
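
A minimal sketch of how such a budget might be enforced in a script interpreter (hypothetical names and structure, not actual bitcoind code):

/// Proposed budget: 520 bytes of hashing allowed per byte of witness weight.
const HASH_BUDGET_PER_WITNESS_BYTE: u64 = 520;

struct HashBudget {
    remaining: u64,
}

impl HashBudget {
    fn new(total_witness_weight: u64) -> Self {
        HashBudget { remaining: total_witness_weight * HASH_BUDGET_PER_WITNESS_BYTE }
    }

    // Called by OP_RIPEMD160, OP_SHA1, OP_SHA256, OP_HASH160 and OP_HASH256
    // with the length of the element being hashed; the script fails once the
    // budget runs out. (As in the post, the second hash of the double-hash
    // opcodes is ignored, since it is only one extra block.)
    fn consume(&mut self, bytes_hashed: u64) -> Result<(), &'static str> {
        if bytes_hashed > self.remaining {
            return Err("hashing budget exceeded");
        }
        self.remaining -= bytes_hashed;
        Ok(())
    }
}

fn main() {
    // A 300-byte witness grants 300 * 520 = 156,000 bytes of hashing.
    let mut budget = HashBudget::new(300);
    assert!(budget.consume(520).is_ok());      // hashing today's largest element is fine
    assert!(budget.consume(200_000).is_err()); // a huge OP_CAT result would not be
}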

Is 520 Bytes of Hashing per Witness Weight Too Limiting?

Could this budget ever be limiting to future scripts? Not for the case of “your script must look like {merkle tree of some template}”, since the presence of the template itself weighs more than enough to allow the hashing. The same goes for merkle calculations, where the neighbor hashes likewise contribute more than enough weight for the hash operations.

If you provide the data you’re hashing in your witness, you can’t reasonably hit the limit. One could imagine a future OP_TX which let you query some (unhashed) witness script of (another) input, but even in this case the limit is generous, allowing several kilobytes of hashing.

What Other Opcodes are Proportional to Element Length?

Hashing is the obvious case, but several other opcodes work on arbitrary length elements and should be considered. In particular, OP_EQUAL, OP_EQUALVERIFY, the various DUP opcodes, and OP_CAT itself.

I would really need to hack bitcoind to run exact tests, but modifying my benchmark to do a memcmp of two 260,000-byte blobs takes 3,100 ns, and allocating and freeing a copy takes 3,600 ns.

The worst case seems to be arranging for a 173k element on the stack and then repeatedly doing:

OP_DUP OP_DUP OP_EQUALVERIFY

4MB of this would take about 8.9 seconds to evaluate on my laptop. Mitigating this further would be possible (copy-on-write for stack objects, or adding a budget for linear ops), but 10 seconds is probably not enough to worry about.
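
As a rough check (not from the original post), the ~8.9-second figure can be reconstructed by scaling the 260,000-byte measurements above down to the 173k element; the exact number will of course vary by machine:

fn main() {
    // Scale the post's 260,000-byte measurements (3,600 ns alloc+copy,
    // 3,100 ns compare) down to a 173,000-byte element.
    let scale = 173_000.0 / 260_000.0;
    let copy_ns = 3_600.0 * scale;             // one OP_DUP
    let cmp_ns = 3_100.0 * scale;              // one OP_EQUALVERIFY
    let per_loop_ns = 2.0 * copy_ns + cmp_ns;  // OP_DUP OP_DUP OP_EQUALVERIFY
    let loops = 4_000_000.0 / 3.0;             // ~4MB of 3-byte loops
    println!("~{:.1} seconds", per_loop_ns * loops / 1e9); // roughly 9 seconds
}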