Both issues ("Network is unreachable" and WoL not working) were fixed for me after the latest kernel update, which was Linux 6.8.0-106.
Yeah, seems to be fixed: I tested in an Ubuntu 24.04 vm with WezTerm 20240203-110809-5046fc22 and couldn’t reproduce the issue.
(I first tried the nightly, which from the repository was 20260117-154428-05343b38, but that one was missing all window decorations entirely, and so couldn’t be resized at all.)
I’ll close the issue and unsubscribe, as I’m not using WezTerm currently myself. Others can reopen it if still affected.
So that’s what was causing all these weird network issues all of a sudden here. Looks like 6.8.0-101-generic is likewise affected.
Sending wake-on-LAN packets also broke for me coincidentally with this. That is, my NAS (now also running 6.8.0-101) could still receive WOL packets sent from a server running 6.8.0-79, but none sent from my desktop (running 6.8.0-101). After downgrading the desktop back to 6.8.0-94, WOL packets sent from it were again immediately picked up by the NAS.
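In case it helps anyone debug similar WoL breakage: a magic packet is just 6 bytes of 0xFF followed by the target MAC repeated 16 times, conventionally broadcast over UDP port 9. A minimal Python sketch (the MAC in the usage example is a placeholder, not a real device):

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a WoL magic packet: 6 bytes of 0xFF followed by the
    target's 6-byte MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet over UDP (port 9 is the convention)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_magic_packet(mac), (broadcast, port))
```

Usage would be e.g. `send_wol("aa:bb:cc:dd:ee:ff")`; comparing packet captures on the receiving end between the two kernels would show whether the packet even leaves the sending machine.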
Thanks. I had hoped for either some correlation with my history.log, or something connected to drawing/rendering the desktop, but nothing jumps out from those, alas. The nvidia/libnvidia-* ones from the 23rd would be the obvious suspect, except you had already posted about the problems (above) when those packages were getting installed.
The apparmor denials for Firefox seem to be a known issue. I don’t use Firefox (and back when I did, I used the Mozilla PPA version instead of the snap), but I suspect they’re not related to the freeze. At least not unless you’re running particularly low on memory.
So we’re running different kernels, even curiouser and curiouser. :)
You’re right: judging from 6.14’s publishing history, the kernel you currently have has probably been the only one the system has run since it was installed.
Could you check which updates, if any, you installed just prior to the lockups starting? The log files should be under /var/log/apt/: history.log is the most recent, history.1.gz the next newest, and so on. The .log files are plain text; the .gz ones are compressed, so they have to be decompressed first, or, if you’re comfortable with the command line, you can view them directly with zless.
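If you’d rather not juggle zless and gunzip by hand, a few lines of Python can print all the history files in one go, newest first (just a sketch, assuming the standard /var/log/apt layout described above):

```python
import glob
import gzip
import os

def dump_apt_history(logdir: str = "/var/log/apt") -> None:
    """Print every history.log* file, newest first: history.log,
    then history.1.gz, history.2.gz, ... (decompressing as needed)."""
    def age(path: str) -> int:
        # history.log -> 0 (newest); history.N.gz -> N (older as N grows)
        parts = os.path.basename(path).split(".")
        return int(parts[1]) if parts[1].isdigit() else 0

    for path in sorted(glob.glob(os.path.join(logdir, "history.*")), key=age):
        opener = gzip.open if path.endswith(".gz") else open
        with opener(path, "rt", errors="replace") as fh:
            print(f"==== {path} ====")
            print(fh.read())
```

Running `dump_apt_history()` (possibly piped through a pager) gives the same information as reading the files one by one.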
To me this does look like a kernel issue, but if any non-kernel updates are the trigger, you could perhaps work around the issue by downgrading the package(s), if you can pinpoint which one(s) started the issue.
Right, I’m on (the non-HWE) 6.8 series:
Linux saegusa 6.8.0-90-generic #91-Ubuntu SMP PREEMPT_DYNAMIC Tue Nov 18 14:14:30 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
According to /var/log/apt/history.log*, I installed it freshly back in December (2025-12-12), and journalctl shows I’d rebooted the next day (the 13th), so have been running it ever since.
Since this is so rare an occurrence on my system, hopefully @anonymousdormouse has the ability to try older/newer kernels with their easily repeatable trigger.
Cool, I’m not going crazy! I’ve had a very, very similar issue for a couple of weeks now. Identical, in fact, except for the triggers and frequency: three incidents so far since the first one on January 11th, with the last two coming within two days of each other earlier this week. In each instance I’ve had Twitch playing video in Librewolf when the desktop suddenly freezes, leaving only a ~500 ms snippet of the audio looping endlessly. I had forgotten about magic SysRq, so I have yet to try whether it works; so far I’ve just done a hard reset instead.
If the logs posted by anonymousdormouse are related to the issue, then that’s one difference wrt. what’s happening here: there’s been nothing related to the problem in any logs. The system seems to just die instantly and completely.
Ubuntu is on an NVMe drive instead of a HDD, and it’s the only OS on the drive.
I have done one pass of memtest to rule out memory errors.
My system doesn’t have Nvidia, it’s using the integrated GPU of a Core i7-8700 with two external displays (one DP-connected, one HDMI-). (This is a Dell XPS desktop that originally came with an external Nvidia GPU, but I ripped it right out before installing the system. I’ve had enough bad experiences with Nvidia to know to avoid them whenever possible.)
Describe the bug
I’m testing sudo-rs, and came across a bit of weirdness in sudoers parsing, related to quotes and parameter order.
To Reproduce
1. $ touch test
2. Edit /etc/sudoers.d/90-ssh-auth-sock to look like this:
   Defaults!/bin/chown timestamp_timeout=1,env_keep+=SSH_AUTH_SOCK
3. $ sudo-rs chown root:root test   # this works as expected
4. Edit /etc/sudoers.d/90-ssh-auth-sock to reorder the parameter=value pairs like this:
   Defaults!/bin/chown env_keep+=SSH_AUTH_SOCK,timestamp_timeout=1
5. $ sudo-rs chown jani:jani test   # this fails:
   /etc/sudoers.d/90-ssh-auth-sock:1:63: double quotes are required for VAR=value pairs
   Defaults!/bin/chown env_keep+=SSH_AUTH_SOCK,timestamp_timeout=1
                                                                 ^
Expected behavior
For sudo-rs to perform the command in step 5 without error, as it did in step 3.
Environment (please complete the following information):
sudo-rs commit hash: b434d4d (precompiled version 0.2.8 binary from the GitHub release page)
Additional context
For background, I’m using pam_ssh_agent_auth to authorize my user with SSH keys to run some commands, which requires env_keep+=SSH_AUTH_SOCK.
I also like to have it time out immediately, so I additionally set timestamp_timeout=0. I initially thought the issue was caused by the zero, but testing with timestamp_timeout=1 resulted in the same errors, so that’s what I’m using here, to avoid ambiguity.
The caret in the error message points to timestamp_timeout’s value, so I’d assume the logical solution is to quote that value, like this:
Defaults!/bin/chown env_keep+=SSH_AUTH_SOCK,timestamp_timeout="1"
But this doesn’t help:
$ sudo-rs chown jani:jani test
/etc/sudoers.d/90-ssh-auth-sock:1:63: double quotes are required for VAR=value pairs
Defaults!/bin/chown env_keep+=SSH_AUTH_SOCK,timestamp_timeout="1"
                                                              ^
So my next thought is to quote both values:
Defaults!/bin/chown env_keep+="SSH_AUTH_SOCK",timestamp_timeout="1"
This causes a different error:
$ sudo-rs chown jani:jani test
/etc/sudoers.d/90-ssh-auth-sock:1:65: expected nonnegative number
Defaults!/bin/chown env_keep+="SSH_AUTH_SOCK",timestamp_timeout="1"
                                                                ^
The only remaining option is to quote only the first parameter value. Surprisingly, this works:
Defaults!/bin/chown env_keep+="SSH_AUTH_SOCK",timestamp_timeout=1
$ sudo-rs chown jani:jani test
$
With OG sudo, any order or combination of these parameters, quoted or unquoted, works as expected.
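For what it’s worth, the error pattern is consistent with a guess that the unquoted env_keep list “swallows” whatever follows the comma. The toy parser below is purely my speculation about the mechanism (it is not sudo-rs’s actual code, and the option names are the only things taken from the report), but it reproduces all four outcomes above:

```python
def parse_defaults(settings: str) -> list[str]:
    """Toy model of a *hypothetical* failure mode: once an unquoted
    list option (env_keep) starts, the parser keeps treating the
    following comma-separated tokens as list members, so a later
    option=value token is misread as a VAR=value list entry (which
    would have to be double-quoted). `settings` is the part of the
    line after "Defaults!/bin/chown "."""
    errors = []
    in_unquoted_list = False
    for tok in settings.split(","):
        key, _, value = tok.partition("=")
        key = key.rstrip("+")
        if key == "env_keep":
            # a double-quoted value closes the list immediately;
            # an unquoted one leaves it "open" for the next tokens
            in_unquoted_list = not value.startswith('"')
        elif in_unquoted_list and "=" in tok:
            errors.append("double quotes are required for VAR=value pairs")
        elif key == "timestamp_timeout" and not value.isdigit():
            errors.append("expected nonnegative number")
    return errors
```

Under this model, timestamp_timeout=1 before env_keep parses cleanly, after an unquoted env_keep it triggers the quoting error, quoting only SSH_AUTH_SOCK closes the list so everything works, and quoting "1" then fails the numeric check — exactly the four cases observed.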
Describe the bug
I’m getting inconsistent results when converting AVIF images to other formats, using Firefox (in Ubuntu). I’m self-hosting Mazanoke 1.1.5, but the results are the same with mazanoke.com.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
to receive a PNG file, fox.profile0.10bpc.yuv420.png
Screenshots
I receive a JPEG file, named fox.profile0.10bpc.yuv420.jpeg
Desktop (please complete the following information):
Additional context
The same thing (i.e. receiving a JPEG file) happens if I choose WebP as output, but not if I choose ICO: downloading the latter does produce a fox.profile0.10bpc.yuv420.ico, as expected.
Choosing JPG output also works as expected (although it would be pretty funny if it didn’t). Notably, this produces the filename fox.profile0.10bpc.yuv420.jpg: the file extension differs from the one produced by Firefox for the unexpected cases (.jpeg).
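To double-check what the browser actually saved, regardless of the extension it chose, one can look at the file’s magic bytes. A quick sketch covering just the formats mentioned here (the filenames in the usage note are the ones from this report):

```python
def sniff_image(path: str) -> str:
    """Identify an image file by its leading magic bytes rather than
    its extension (only the formats relevant to this report)."""
    with open(path, "rb") as fh:
        head = fh.read(12)
    if head.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    if head.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    if head.startswith(b"RIFF") and head[8:12] == b"WEBP":
        return "webp"
    if head.startswith(b"\x00\x00\x01\x00"):
        return "ico"
    return "unknown"
```

Running e.g. `sniff_image("fox.profile0.10bpc.yuv420.jpeg")` on the downloaded files would confirm whether the contents match the extension or only the extension is wrong.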
I can also work around the issue by selecting "Download all", which produces a zip archive, and all the files therein are in my chosen output formats as listed.
In Vivaldi (which is Chromium-based) the Download link does produce the file in the chosen output format, so this looks like a Firefox-specific issue.
Converting from (and to) other formats seems to work as expected in Firefox (at least the ones I’ve tested so far).
No problem; like I said, using node 20 works fine. I only noticed this issue recently, when I rebuilt my build environment, and tried doing so by the book (i.e. according to the docs).
If there’s something I can do to try to further narrow this down, let me know. I’m doing the build in an Ubuntu 24.04 VM, which I can restore to a working snapshot if testing causes it to break.