Author: Jani

  • How to have email status from ~/Maildir under motd (Ubuntu 24.04)

    I recently set up my local mail to go into ~/Maildir. But when logging in on a console, the mail notification under motd still seemed to assume my mail was somewhere else (under /var/mail, maybe), as it only ever said

    You do not have any new mail.

    I figured out that this message is generated by pam_mail, and it’s possible to configure it to use ~/Maildir with the dir= parameter. There’s separate configuration for each of (local) login, ssh and su in /etc/pam.d/login, /etc/pam.d/sshd and /etc/pam.d/su respectively. For instance, /etc/pam.d/login has this line:

    session    optional   pam_mail.so standard

    which I changed to

    session    optional   pam_mail.so dir=~/Maildir standard

    and now the note under motd reflects what’s in my ~/Maildir.
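    In Maildir format, unread messages land as individual files under new/, which is presumably what pam_mail counts when pointed at ~/Maildir with dir=. A quick sketch of that count (my own illustration, not pam_mail’s actual code):

    ```python
    # Count "new" messages the way a Maildir-aware tool would:
    # each regular file under <maildir>/new is one unseen message.
    from pathlib import Path

    def new_mail_count(maildir: str) -> int:
        new_dir = Path(maildir).expanduser() / "new"
        if not new_dir.is_dir():
            return 0
        return sum(1 for p in new_dir.iterdir() if p.is_file())
    ```

    For example, new_mail_count("~/Maildir") should match the number of messages the motd note reports as new.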

  • How to have Mutt thread together messages with the same subject, even without Re:

    Mutt settings are mostly inscrutable to me, so I’ll just dump the ones I found (through trial and error) that do what I want below. Besides having these:

    set sort=threads
    set sort_browser=reverse-date
    set sort_aux=reverse-last-date-received

    I needed to add these:

    set sort_re=no
    unset strict_threads

    For convenience, I also added these:

    # Collapse threads at startup
    exec collapse-all
    
    # Set the keys for un-/collapsing all threads/one thread
    bind index _ collapse-all
    bind index - collapse-thread

    (I’m using Mutt version 2.2.12 in Ubuntu 24.04.)
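    As I understand it, the combined effect of set sort_re=no and unset strict_threads is that Mutt will group messages by subject alone, without requiring a Re: prefix or References headers. A rough Python illustration of that grouping idea (my sketch, not Mutt’s actual algorithm):

    ```python
    # Group messages by subject with any reply/forward prefixes stripped,
    # roughly what subject-based pseudo-threading does.
    import re
    from collections import defaultdict

    REPLY_PREFIX = re.compile(r"^\s*(re|fwd?)\s*:\s*", re.IGNORECASE)

    def normalize(subject: str) -> str:
        # Strip stacked prefixes like "RE: re: ..." one at a time.
        while REPLY_PREFIX.match(subject):
            subject = REPLY_PREFIX.sub("", subject, count=1)
        return subject.strip().lower()

    def pseudo_threads(subjects):
        threads = defaultdict(list)
        for s in subjects:
            threads[normalize(s)].append(s)
        return dict(threads)
    ```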

  • Crossgrading Ubuntu 18.04 to Debian 10

    After upgrading all Ubuntu 18.04 packages to their latest releases, I adapted eudoxos’ recipe:

    1. Rebooted and selected the stock 18.04 kernel instead of HWE, which I was using
    2. Downloaded debian-keyring and debian-archive-keyring .debs and installed them with dpkg -i debian*.deb
    3. Created /etc/apt/preferences.d/10-no-ubuntu to pin down Ubuntu packages:
      Package: *
      Pin: release o=Ubuntu
      Pin-Priority: -1000
    4. Added Debian sources for buster to the end of /etc/apt/sources.list:
      deb http://deb.debian.org/debian buster main contrib non-free
      deb http://deb.debian.org/debian-security/ buster/updates main contrib non-free
      deb http://deb.debian.org/debian buster-updates main contrib non-free
    5. apt update && apt-get dist-upgrade
    6. Answered “no” to all configuration change questions (I’ll update them later)
    7. Post-install, networking was non-functional. Apparently this was caused by AppArmor, so I disabled it (systemctl disable apparmor.service) and rebooted.
    8. Deleted /etc/apt/preferences.d/10-no-ubuntu
    9. Then it was time for manual package surgery. Lots and lots of it. Searching for remaining Ubuntu packages is easy enough, with
      dpkg -l | grep ubuntu

      and

      aptitude search '?narrow(?installed, ?not(?origin(Debian)))'

      Many of the matching packages can be removed or downgraded to their Debian versions without issue, but a bunch of gcc and python packages turned out to be tricky and had to be downgraded together. I did make notes, but following them blindly would most likely do more harm than good, so I won’t post them here; you’ll need to hack’n’slash your own way anyway.
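      The two searches above can also be approximated by looking at version strings, since Ubuntu builds carry “ubuntu” in their versions. A hedged sketch that filters dpkg -l-style lines (the parsing here is simplified; real dpkg -l output also has header lines):

      ```python
      # Pick out installed packages whose version string marks them as
      # Ubuntu builds (e.g. "2.27-3ubuntu1"), from dpkg -l style lines.
      def ubuntu_packages(dpkg_lines):
          found = []
          for line in dpkg_lines:
              parts = line.split()
              # Expect: status, package, version, ...
              if len(parts) >= 3 and parts[0] == "ii" and "ubuntu" in parts[2]:
                  found.append(parts[1])
          return found
      ```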

  • How I calibrated the battery charge indicator of an old Android tablet

    Following anandmore’s instructions on the XDA forum:

    1. I unplugged the device from the charger.
    2. I used the device until it shut down on its own when the battery ran out. On the first cycle this happened with the charge indicator showing 86 %.
    3. I kept restarting the device until it could no longer get past the bootloader screen (the manufacturer’s logo) without shutting down. A couple of times I made it all the way to the home screen, and once I even managed to start a video app.
    4. With the device powered off, I plugged it into the charger.
    5. I let the (still powered-off) device charge until the charge indicator showed the battery as (100 %) full. (A short press of the power button shows the charge indicator without booting the device.) This took several hours, even though the starting point (on the first cycle) was about 75-80 % according to the indicator (despite which the device could no longer boot).
    6. I unplugged the device from the charger.
    7. I powered on the device.
    8. After booting, the charge indicator showed 98 %. I plugged the device into the charger.
    9. I let it charge until the (now running) device’s charge indicator showed the battery as (100 %) full, though on this first cycle it never got above that 98 %.
    10. I shut down the device.
    11. I unplugged the device from the charger.
    12. I powered on the device.
    13. I repeated steps 2-12 two more times. Only after that could the device be used so that the battery drained (gradually, with use) from 100 % to below 10 % without an automatic shutdown.
  • “Copy” disabled in Firefox’s context menu

    Just a note for myself about this: Bug 1863246: Copy and Paste context menu entries are sometimes disabled when they should not be

    And a workaround lower down, from lexlexlex:

    “I noticed that an easy workaround when this happens is to click the address bar to change selection focus, then interact with the page again. After that, the “Copy” entry in the context menu works again without reopening the tab.”

  • Notes as I go: my attempt to develop a Home Assistant integration, part 3

    One thing that was confusing to me from the start was the seemingly overlapping functionality in __init__.py and config_flow.py, but I think I figured it out: async_step_*() in config_flow.py is for when a component is first set up, whereas async_setup_entry() in __init__.py is for when a config entry for the component has already previously been set up, and so an object can be instantiated for that entry.
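    So the pattern (as I understand it) is that async_setup_entry() gets an already-created config entry and instantiates an object for it. A plain-Python sketch of that pattern with stand-in objects; MyButtonApi, DOMAIN and the Fake* classes are all made up for illustration, and real code gets a real hass and ConfigEntry from HA:

    ```python
    # Sketch of async_setup_entry storing a per-entry API object in
    # hass.data, the common pattern in core integrations. All names
    # below are stand-ins, not real Home Assistant classes.
    import asyncio

    DOMAIN = "my_button"  # hypothetical integration domain

    class MyButtonApi:
        """Hypothetical device API, normally living in a subpackage."""
        def __init__(self, address):
            self.address = address

    async def async_setup_entry(hass, entry):
        """Called for an entry previously created by the config flow."""
        api = MyButtonApi(entry.data["address"])
        hass.data.setdefault(DOMAIN, {})[entry.entry_id] = api
        return True

    # Minimal stand-ins to exercise the function outside HA:
    class FakeHass:
        def __init__(self):
            self.data = {}

    class FakeEntry:
        entry_id = "abc123"
        data = {"address": "AA:BB:CC:DD:EE:FF"}

    hass, entry = FakeHass(), FakeEntry()
    asyncio.run(async_setup_entry(hass, entry))
    ```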

    As for “my API”, my impression (from both core and non-core integrations) is that if I had a published Python package for the device, I’d just import it (like, for instance, the RuuviTag integration imports ruuvitag-ble), but since I don’t, I should create the API in a subdirectory of my integration and import it using dot notation.

    Hence, naming the files inside the API directory is more a question of general Python convention, rather than something HA has opinions about.


    Anyway, while I understand that all the complications I keep stumbling over are what make HA component development scale, they also make my tiny button project way more complicated than it should be. There appears to be no way to build a minuscule MVP with just a few lines of code, to be then extended with all the bells and whistles that would make it look and play nice.

    Instead it seems easier to just hack some existing code to work with my device (which I’ve already done, semi-successfully). But that then means having to clean up tons of someone else’s code or, more likely, leaving it all as an ugly hack, never to be published anywhere.

  • How I made tty1 a live display for (Systemd’s) journal (in Ubuntu 20.04)

    1. Run systemctl edit getty@tty1.service
    2. In the override file, enter
      [Service]
      ExecStart=
      ExecStart=-/bin/journalctl -b -ef
      StandardInput=tty
      StandardOutput=tty
    3. To see it right away (without reboot), run systemctl daemon-reload followed by systemctl restart getty@tty1.service
    4. To stop the console from blanking, edit /etc/default/grub, add consoleblank=0 to GRUB_CMDLINE_LINUX_DEFAULT, save and exit. Then run update-grub and reboot.

    (I adapted the service override from Remy’s old post.)

  • Notes as I go: my attempt to develop a Home Assistant integration, part 2

    The documentation keeps referring to “my API”, but leaves everything about it up to imagination. How should I name the file? What code should I put there, and what should I not?


    The MyEvent example, which I adapted for my event.py, keeps failing to load, because it “has no attribute ‘async_setup_entry’”. I get it to shut up by adding a dummy by that name as a function — not as a method.


    I grab some logging code from someone else’s Bluetooth integration and adapt it to my _async_has_devices(), to see if I need to do any actual filtering. But none of it ever gets run, despite HA announcing a device being detected by my integration.

    In the end I decide it’s not going to be as easy as I’d hoped with the config_flow_discovery scaffolding. I’m going to have to invoke Bleak to be able to subscribe to BLE notifications, which are what my button uses to communicate press events, and which, apparently, are not part of HA core for now. So I revert the config_flow.py changes and go back to the longer version.


    The first TODO there is for user data schema. I won’t have users input any configuration, so out it goes, as does all of validate_input(), and async_step_user() in ConfigFlow. Likewise for PlaceholderHub (“Remove this placeholder class and replace with things from your PyPI package”).

    According to “Discovery steps“,

    “When an integration is discovered, their respective discovery step is invoked (ie async_step_dhcp or async_step_zeroconf) with the discovery information.”

    This means it should be async_step_bluetooth() for mine. I wish it were listed, along with all the other variants, instead of just the two examples, to help with googling. An accompanying example would be even better.

    The last item on the list of things the discovery step should do is:

    “Invoking a discovery step should never result in a finished flow and a config entry. Always confirm with the user.”

    Again, not even a hint about this in the documentation, but most core integrations appear to implement an async_step_bluetooth_confirm() and call it at the end of async_step_bluetooth(). There’s also _set_confirm_only(), which seems like what I need, but again, no documentation anywhere.
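    Piecing those observations together, the shape seems to be: the discovery step stashes the discovery info and always hands off to a confirm step instead of finishing the flow. A plain-Python sketch of that handoff (stand-ins only; real code subclasses HA’s ConfigFlow and returns its flow-result helpers):

    ```python
    # Sketch of the discovery -> confirm handoff described above.
    # SketchFlow and the dict "results" are stand-ins, not HA classes.
    import asyncio

    class SketchFlow:
        def __init__(self):
            self.confirm_only = False
            self._discovery_info = None

        def _set_confirm_only(self):
            self.confirm_only = True

        async def async_step_bluetooth(self, discovery_info):
            # Store discovery data, then always defer to a confirm step
            # rather than creating a config entry directly.
            self._discovery_info = discovery_info
            self._set_confirm_only()
            return await self.async_step_bluetooth_confirm()

        async def async_step_bluetooth_confirm(self, user_input=None):
            if user_input is None:
                # First pass: show a confirmation form to the user.
                return {"type": "form", "step_id": "bluetooth_confirm"}
            # User confirmed: only now finish the flow.
            return {"type": "create_entry", "data": self._discovery_info}

    flow = SketchFlow()
    first = asyncio.run(flow.async_step_bluetooth({"address": "AA:BB"}))
    ```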

  • Notes as I go: my attempt to develop a Home Assistant integration, part 1

    Set up Development Environment says the “easiest way to get started with development is to use Visual Studio Code with devcontainers”, but I dislike both those things, so I go with Manual Environment instead.

    I’m not doing core development or even integration PRs for now, so I don’t need a fork; instead I just git clone --depth 1 https://github.com/home-assistant/core.git.

    For the dependencies, sudo apt install python3-pip python3-dev python3-venv autoconf libssl-dev libxml2-dev libxslt1-dev libjpeg-dev libffi-dev libudev-dev zlib1g-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev ffmpeg


    I then try to run script/setup, except that’s not going to work, because it’s not in my path; to call it from the core root directory using a relative path, it has to be ./script/setup.

    Some python packages get installed, and in the end it craps out with “no such option: --config-settings”. Apparently Python 3.10, which my Ubuntu 22.04 desktop has, is no longer supported, so I need to redo all of the above on my laptop (which is running Ubuntu 23.10 with Python 3.11).


    I then try python3 -m script.scaffold integration, which is what I’m here for, but it fails with “ModuleNotFoundError: No module named ‘attr’”. Nothing useful comes up on Google, but then I realize I need to source venv/bin/activate first.

    After I answer the scaffolding questions, the script begins building the scaffold, but that also craps out, this time with “ModuleNotFoundError: No module named ‘numpy’”. pip3 list reveals this to be true, so I pip3 install numpy.

    I restart python3 -m script.scaffold integration, which this time only asks the first question, then (correctly) assumes my previous answers to the others are still valid, and finally generates the scaffolding. It doesn’t tell me where it is, but git status shows there’s new stuff in homeassistant/components/ and in tests/components/.

    “The next step is to look at the files and deal with all areas marked as TODO.”


    I rsync the scaffolding back to my desktop and open __init__.py in VSCodium. The first TODO says “List the platforms that you want to support.” By default it’s set to LIGHT, but my thing is a button, so I go looking for the platform alternatives. “There are lights, switches, covers, climate devices, and many more.”

    The “many more” link goes to an entities introduction page, so I assume the side menu listing of Entities is what I have to choose from. As I said, I’m developing a button, so you’d think, a Button, right?

    Of course not:

    “not suitable for implementing actual physical buttons; the sole purpose of a button entity is to provide a virtual button inside Home Assistant.”

    “If you want to represent something that can be turned on and off (and thus have an actual state), you should use a switch entity instead.”

    I don’t, because mine doesn’t, so no.

    “If you want to integrate a real, physical, stateless button device in Home Assistant, you can do so by firing custom events. The entity button entity isn’t suitable for these cases.”

    Well, I’d like to fire events, but this only seems to lead further away from which “platform” I need to use for my integration.

    The scaffolding imports Platform from homeassistant.const, so I find the Platform enumeration in const.py, but it just lists the same entities as the documentation side panel, so I’m out of luck.

    No, wait, Event is also listed. So I’ll go with Platform.EVENT for now. That means I’ll need to “create a file with the domain name of the integration that you are building a platform for” — so in this case, event.py.

    Actually, Integration File Structure says

    “If the integration only offers a platform, you can keep [__init__.py] limited to a docstring introducing the integration”.

    So that means I don’t need any of the __init__.py scaffolding?


    Whatever, I’ll take a look at the manifest instead.

    Integration type was not set up by the scaffolding script (despite the documentation saying it should be set, and becoming mandatory in the future). I’m going to pick device, which seems unambiguous, for once.

    Version is mandatory, I’ll go with 0.1.

    My button communicates via Bluetooth, and Best practices for this says I need bluetooth_adapters in my dependencies.

    I’ve used bluetoothctl to sniff out what name the device uses, so I’m also defining a local_name matcher in a Bluetooth section. There’s also connectable, but it defaults to true, which is what it should be for my button.

    In the scaffolding, I’d previously set iot_class to local_polling, and I’m going to leave it as such, although for now I’m treating the button as if it didn’t have any state.
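    Putting those manifest pieces together, mine ends up looking something like this (the domain, name and local_name values are made up for illustration):

    ```json
    {
      "domain": "my_button",
      "name": "My Button",
      "version": "0.1",
      "integration_type": "device",
      "iot_class": "local_polling",
      "config_flow": true,
      "dependencies": ["bluetooth_adapters"],
      "bluetooth": [
        {
          "local_name": "MyButton*",
          "connectable": true
        }
      ]
    }
    ```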


    Next, config_flow.py.

    “Discoverable integrations that require no authentication” looks useful, so I go back to my laptop, source venv/bin/activate again, then python3 -m script.scaffold config_flow_discovery, and finally rsync the new files (now named EXAMPLE_*) back again. The only changed file was config_flow.py, which is now substantially smaller, defining mostly just _async_has_devices().

    The only TODO there is “Check if there are any devices that can be discovered in the network.” Spying from one core integration, my guess is that _async_has_devices() receives a bunch of discovered devices, and should return true only if any of those devices are ones that my integration is actually able to configure.

    Why/whether this isn’t already handled by the matcher definition in the manifest, I don’t know; I found at least one integration that always returns true, with a comment “MQTT is set as dependency, so that should be sufficient.”
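    Under that guess, the filtering would be something like this (pure-Python stand-in; “MyButton” is a made-up local name, and real code receives BluetoothServiceInfo objects from HA’s bluetooth helpers rather than bare names):

    ```python
    # Return True only if at least one discovered device looks like
    # one this integration can actually set up.
    def has_my_devices(discovered_names, wanted_prefix="MyButton"):
        return any(name.startswith(wanted_prefix) for name in discovered_names)
    ```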


    I minimally tweak the MyEvent example and save it as event.py. I think it’s missing at least imports of EventEntity and EventDeviceClass. Also, there’s a tip about being “sure to deregister any callbacks when the entity is removed from Home Assistant”, so why isn’t that incorporated into the example?

  • Clock Override suddenly only somewhat works (but it’s really not about that)

    I use Clock Override to customize the date and time format displayed in the top bar in GNOME Shell. Recently, after a couple of bad logins (Gnome crashing), it suddenly reacted only to the %S modifier and to no other GLib GDateTime codes. The date and time were shown either in the default format, or in the default format with seconds (when %S was part of the format string).

    I tried reinstalling the extension, clearing its dconf settings and all that jazz, to no avail. In the end I tried to reproduce the issue using another user account and, having failed at that (i.e. the extension worked perfectly there), once again went through bisecting a dconf dump from the non-working account:

    dconf reset -f / && dconf load / <nonworking

    This finally revealed that Gnome Shell had set /org/gnome/shell/disable-user-extensions to true, which “makes sense”.

    I might have found out the cause much quicker had the extension been completely nonfunctional, rather than in this bizarre “1 % working, 99 % not” half-state. Quicker still, had Gnome somehow indicated that it was now running in this crippled safe mode.