Planet Debian

February 17, 2026

Russell Coker

Links February 2026

Charles Stross has a good theory of why “AI” is being pushed on corporations: really we need to just replace CEOs with LLMs [1].

This disturbing and amusing article describes how an OpenAI investor appears to be having psychological problems related to SCP-based text generated by ChatGPT [2]. Definitely going to be a recursive problem as people who believe in it invest in it.

An interesting analysis of dbus and the design of a more secure replacement [3].

Scott Jenson gave an insightful lecture for Canonical about future potential developments in the desktop UX [4].

Ploum wrote an insightful article about the problems caused by the Github monopoly [5]. Radicale sounds interesting.

Niki Tonsky wrote an interesting article about the UI problems with Tahoe (the latest macOS release) due to trying to make an icon for everything [6]. They have a really good writing style, and the article is well researched.

Fil-C is an interesting project to compile C/C++ programs in a memory-safe way, which can be considered a software equivalent of CHERI [7].

Brian Krebs wrote a long list of the ways that Trump has enabled corruption and a variety of other crimes including child sex abuse in the last year [8].

This video about designing a C64 laptop is a masterclass in computer design [9].

Salon has an interesting article about the abortion thought experiment that conservatives can’t handle [10].

Ron Garrett wrote an insightful blog post about abortion [11].

Bruce Schneier and Nathan E. Sanders wrote an insightful article about the potential of LLM systems for advertising and enshittification [12]. We need serious legislation about this ASAP!

Related posts:

  1. Links February 2024: In 2018 Charles Stross wrote an insightful blog post Dude...
  2. Links February 2025: Oliver Lindburg wrote an interesting article about Designing for Crisis...
  3. Links February 2021: Elastic Search gets a new license to deal with AWS...

17 February, 2026 08:09AM by etbe

February 16, 2026

Antoine Beaupré

Keeping track of decisions using the ADR model

In the Tor Project system administrator's team (colloquially known as TPA), we've recently changed how we take decisions, which means you'll get clearer communications from us about upcoming changes or targeted questions about a proposal.

Note that this change only affects the TPA team. At Tor, each team has its own way of coordinating and making decisions, and so far this process is only used inside TPA. We encourage other teams inside and outside Tor to evaluate this process to see if it can improve your processes and documentation.

The new process

We had traditionally been using an "RFC" ("Request For Comments") process and have recently switched to an "ADR" ("Architecture Decision Record").

The ADR process is, for us, pretty simple. It consists of three things:

  1. a simpler template
  2. a simpler process
  3. communication guidelines separate from the decision record

The template

As team lead, the first thing I did was to propose a new template (in ADR-100), a variation of the Nygard template. The TPA variation of the template is similarly simple, as it has only 5 headings, and is worth quoting in full:

  • Context: What is the issue that we're seeing that is motivating this decision or change?

  • Decision: What is the change that we're proposing and/or doing?

  • Consequences: What becomes easier or more difficult to do because of this change?

  • More Information (optional): What else should we know? For larger projects, consider including a timeline and cost estimate, along with the impact on affected users (perhaps including existing Personas). Generally, this includes a short evaluation of alternatives considered.

  • Metadata: status, decision date, decision makers, consulted, informed users, and link to a discussion forum
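
For concreteness, here is a minimal sketch of a fresh record following those five headings; the file name and metadata field names are illustrative, not TPA's actual conventions:

cat > ADR-NNN-short-title.md <<'EOF'
# ADR-NNN: short decision title

## Context
## Decision
## Consequences
## More Information (optional)
## Metadata
Status: proposed
Decision date:
Decision makers:
Consulted:
Informed:
Discussion: <link to the discussion issue or forum>
EOF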

The previous RFC template had 17 (seventeen!) headings, which encouraged much longer documents. Now, the decision record will be easier to read and digest at one glance.

An immediate effect of this is that I've started using GitLab issues more for comparisons and brainstorming. Instead of dumping all sorts of details like pricing or in-depth alternatives comparisons in a document, we record those in the discussion issue, keeping the document shorter.

The process

The whole process is simple enough that it's worth quoting in full as well:

Major decisions are introduced to stakeholders in a meeting, smaller ones by email. A delay allows people to submit final comments before adoption.

Now, of course, the devil is in the details (and ADR-101), but the point is to keep things simple.

A crucial aspect of the proposal, which Jacob Kaplan-Moss calls the one weird trick, is to "decide who decides". Our previous process was vague about who makes the decision, and the new template (and process) clarifies the decision makers for each decision.

Conversely, some decisions degenerate into endless discussions around trivial issues because too many stakeholders are consulted, a problem known as the Law of triviality, also known as the "Bike Shed syndrome".

The new process better identifies stakeholders:

  • "informed" users (previously "affected users")
  • "consulted" (previously undefined!)
  • "decision maker" (instead of the vague "approval")

Picking those stakeholders is still tricky, but our definitions are more explicit and aligned to the classic RACI matrix (Responsible, Accountable, Consulted, Informed).

Communication guidelines

Finally, a crucial part of the process (ADR-102) is to decouple the act of making and recording decisions from communicating about the decision. Those are two radically different problems to solve. We have found that a single document can't serve both purposes.

Because ADRs can affect a wide range of things, we don't have a specific template for communications. We suggest the Five Ws method (Who? What? When? Where? Why?) and, again, to keep things simple.

How we got there

The ADR process is not something I invented. I first stumbled upon it in the Thunderbird Android project. Then, in parallel, I was in the process of reviewing the RFC process, following Jacob Kaplan-Moss's criticism of the RFC process. Essentially, he argues that:

  1. the RFC process "doesn't include any sort of decision-making framework"
  2. "RFC processes tend to lead to endless discussion"
  3. the process "rewards people who can write to exhaustion"
  4. "these processes are insensitive to expertise", "power dynamics and power structures"

And, indeed, I have been guilty of a lot of those issues. A verbose writer, I have written extremely long proposals that I suspect no one has ever fully read. Some proposals were adopted by exhaustion, or ignored because they did not loop in the right stakeholders.

Our discussion issue on the topic has more details on the issues I found with our RFC process. But to give credit to the old process, it did serve us well while it was there: it's better than nothing, and it allowed us to document a staggering number of changes and decisions (95 RFCs!) made over the course of 6 years of work.

What's next?

We're still experimenting with the communication around decisions, as this text might suggest. Because it's a separate step, we also have a tendency to forget or postpone it, like this post, which comes a couple of months late.

Previously, we'd just ship a copy of the RFC to everyone, which was easy and quick, but incomprehensible to most. Now we need to write a separate communication, which is more work but, hopefully, worth it, as the result is more digestible.

We can't wait to hear what you think of the new process and how it works for you, here or in the discussion issue! We're particularly interested in people who are already using a similar process, or who will adopt one after reading this.

Note: this article was also published on the Tor Blog.

16 February, 2026 08:21PM


Philipp Kern

What is happening with this "connection verification"?

You might see a verification screen pop up on more and more Debian web properties. Unfortunately, the AI world of today is meeting web hosts that use Perl CGIs and are not built as multi-tiered, scalable serving systems. The issues have been at three layers:

  1. Apache's serving capacity runs full - with no threads left to serve requests. This means that your connection will sit around for a long time, not getting accepted. In theory this can be configured, but that would require requests to be handled in time.
  2. Startup costs of request handlers are too high, because we spawn a process for every request. This currently affects the BTS and dgit's browse interface. packages.debian.org has been fixed, which increased scalability sufficiently.
  3. Requests themselves are too expensive to be served quickly - think git blame without caching.

Optimally we would go and solve some scalability issues with the services; however, there is also a question of how much we want to be able to serve, as AI scraper demand is just a steady stream of requests that are not shown to humans.

How is it implemented?

DSA has now stood up some VMs with Varnish for proxying. Incoming TLS is provided by hitch, and TLS "on-loading" is done using haproxy. That way TLS goes in and TLS goes out. While Varnish does cache if the content is cacheable (e.g. it does not depend on cookies), that is not the primary reason for using it: it can be used for flexible query and response rewriting.

If no cookie with a proof of work is provided, the user is redirected to a challenge page that does some webcrypto in JavaScript - because that looked similar to what other projects do (e.g. haphash, which originally inspired the solution). However, so far it looks like scrapers generally do not run with JavaScript enabled, so this whole crypto proof-of-work business could probably be replaced with just a JavaScript-based redirect. The existing solution also has big (security) holes in it. And, as we found out, Firefox is slower at webcrypto than Chrome. I have recently reduced the complexity, so you should notice it blocking you significantly less.
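
If you are curious, you can observe the redirect from the command line; this is just an illustration, assuming the host you test is currently behind the proxy, and the exact status code and challenge URL are implementation details that may change:

# does this host currently redirect clients without a cookie to a challenge page?
# (URL is only an example; the challenge location is an implementation detail)
curl -sI -o /dev/null -w '%{http_code} %{redirect_url}\n' https://bugs.debian.org/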

Once you have the cookie, you can keep accessing the site for as long as the cookie is valid. Please do not make any assumptions about the cookies, or you will be broken in the future.

For legitimate scrapers that obey robots.txt, there is now an automatically generated IP allowlist in place (thanks, Marco d'Itri). It turns out that the search engines do not actually run JavaScript either, and then loudly complain about the redirect to the challenge page. Other bots are generally exempt.

Conclusion 

I hope that we have now found something like the sweet spot, where the admins can stop spending human time on updating firewall rules and the services are generally available, reasonably fast, and still indexed. In case you see problems or run into a block with your own (legitimate) bots, please let me know.

16 February, 2026 07:55PM by Philipp Kern (noreply@blogger.com)

Antoine Beaupré

Kernel-only network configuration on Linux

What if I told you there is a way to configure the network on any Linux server that:

  1. works across all distributions
  2. doesn't require any software installed apart from the kernel and a boot loader (no systemd-networkd, ifupdown, NetworkManager, nothing)
  3. is backwards compatible all the way back to Linux 2.0, in 1996

It has literally 8 different caveats on top of that, but is still totally worth your time.

Known options in Debian

People following Debian development might have noticed there are now four ways of configuring the network on a Debian system. At least that is what the Debian wiki claims: ifupdown, NetworkManager, systemd-networkd, and netplan.

At this point, I feel ifupdown is on its way out, possibly replaced by systemd-networkd. NetworkManager already manages most desktop configurations.

A "new" network configuration system

The method is this:

  • ip= on the Linux kernel command line: for servers with a single IPv4 or IPv6 address, no software required other than the kernel and a boot loader (since 2002 or older)

So by "new" I mean "new to me". This option is really old. The nfsroot.txt file where it is documented predates the git import of the Linux kernel: it's part of the 2005 git import of 2.6.12-rc2. That's already 20+ years old.

The oldest trace I found is in this 2002 commit, which imports the whole file at once, but the option might go back as far as 1996-1997, if the copyright on the file is correct and the option was present back then.

What are you doing?

The trick is to add an ip= parameter to the kernel's command line. The syntax, as mentioned above, is in nfsroot.txt and looks like this:

ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip>:<ntp0-ip>

Most settings are pretty self-explanatory, if you ignore the uselessones:

  • <client-ip>: IP address of the server
  • <gw-ip>: address of the gateway
  • <netmask>: netmask, in quad notation
  • <device>: interface name, if multiple are available
  • <autoconf>: how to configure the interface, namely:
    • off or none: no autoconfiguration (static)
    • on or any: use any protocol (default)
    • dhcp: essentially like on for all intents and purposes
  • <dns0-ip>, <dns1-ip>: IP addresses of the primary and secondary name servers, exported to /proc/net/pnp, which can be symlinked to /etc/resolv.conf

We're ignoring these options:

  • <server-ip>: IP address of the NFS server, exported to /proc/net/pnp
  • <hostname>: name of the client, typically sent in DHCP requests, which may lead to a DNS record being created in some networks
  • <ntp0-ip>: exported to /proc/net/ipconfig/ntp_servers, unused by the kernel

Note that the Red Hat manual has a different opinion:

ip=[<server-id>]:<gateway-IP-number>:<netmask>:<client-hostname>:interface:[dhcp|dhcp6|auto6|on|any|none|off]

It's essentially the same (although server-id is weird), and the autoconf variable has other settings, so that's a bit odd.

Examples

For example, this command-line setting:

ip=192.0.2.42::192.0.2.1:255.255.255.0:::off

... will set the IP address to 192.0.2.42/24 and the gateway to 192.0.2.1. This will properly guess the network interface if there's a single one.

A DHCP only configuration will look like this:

ip=::::::dhcp
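
After rebooting with such a command line, a quick way to confirm that the kernel actually applied it (nothing beyond standard tools is needed; iproute2 provides ip):

cat /proc/cmdline    # confirm the ip= parameter reached the kernel
ip addr show         # the address should already be configured
ip route             # default route via the configured gateway
cat /proc/net/pnp    # name servers passed via dns0-ip/dns1-ip, if any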

Of course, you don't want to type this by hand every time you boot the machine. That wouldn't work. You need to configure the kernel command line, and that depends on your boot loader.

GRUB

With GRUB, you need to edit (on Debian) the file /etc/default/grub (ugh) and find a line like:

GRUB_CMDLINE_LINUX=

and change it to:

GRUB_CMDLINE_LINUX=ip=::::::dhcp
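
Editing /etc/default/grub is not enough on its own: the change only takes effect after regenerating the GRUB configuration and rebooting. On Debian that is typically:

update-grub    # regenerates /boot/grub/grub.cfg; run as root
# after the next reboot, verify with:
cat /proc/cmdline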

systemd-boot and UKI setups

For systemd-boot UKI setups, it's simpler: just add the setting to the /etc/kernel/cmdline file. Don't forget to include anything that's non-default from /proc/cmdline.

This assumes that /etc/kernel/cmdline is what the Cmdline=@ setting in /etc/kernel/uki.conf points at. See 2025-08-20-luks-ukify-conversion for my minimal documentation on this.

Other systems

This is perhaps where this is much less portable than it might first look, because of course each distribution has its own way of configuring those options. Here are some that I know of:

  • Arch (11 options, mostly /etc/default/grub, /boot/loader/entries/arch.conf for systemd-boot, or /etc/kernel/cmdline for UKI)
  • Fedora (mostly /etc/default/grub, maybe more; RHEL mentions grubby, possibly some systemd-boot things here as well)
  • Gentoo (5 options, mostly /etc/default/grub, /efi/loader/entries/gentoo-sources-kernel.conf for systemd-boot, or /etc/kernel/install.d/95-uki-with-custom-opts.install)

It's interesting that /etc/default/grub is consistent across all the distributions above, while the systemd-boot setups are all over the place (except for the UKI case); I would have expected those to be more standard than GRUB.

dropbear-initramfs

If dropbear-initramfs is set up, it already requires you to have such a configuration, and it might not work out of the box.

This is because, by default, it disables the interfaces configured in the kernel after completing its tasks (typically unlocking the encrypted disks).

To fix this, you need to disable that "feature":

IFDOWN="none"

This will keep dropbear-initramfs from disabling the configured interface.
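
Since this setting lives in the initramfs, it only takes effect after the initramfs is rebuilt. A minimal sketch, assuming the configuration file location used by current Debian releases (older releases used /etc/dropbear-initramfs/config instead):

# after setting IFDOWN="none" in /etc/dropbear/initramfs/dropbear.conf
# (or /etc/dropbear-initramfs/config on older releases), rebuild the initramfs:
update-initramfs -u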

Why?

Traditionally, I've always set up my machines with ifupdown on servers and NetworkManager on laptops, because that's essentially the default. But on some machines, I've started using systemd-networkd because ifupdown has ... issues, particularly with reloading network configurations. ifupdown is an old hack, feels like legacy, and is Debian-specific.

Not excited about configuring another service, I figured I would try something else: just configure the network at boot, through the kernel command line.

I was already doing such configurations for dropbear-initramfs (see this documentation), which requires the network to be up for unlocking the full-disk encryption keys.

So in a sense, this is a "Don't Repeat Yourself" solution.

Caveats

Also known as: "wait, that works?" Yes, it does! That said...

  1. This is useful for servers where the network configuration will not change after boot. Of course, this won't work on laptops or any mobile device.

  2. This only works for configuring a single, simple interface. You can't configure multiple interfaces, WiFi, bridges, VLANs, bonding, etc.

  3. It does support IPv6 and feels like the best way to configure IPv6 hosts: true zero configuration.

  4. It likely does not work with a dual-stack IPv4/IPv6 static configuration. It might work with a dynamic dual-stack configuration, but I doubt it.

  5. I don't know what happens when a DHCP lease expires. No daemon seems to be running, so I assume leases are not renewed; this makes it more useful for static configurations, which includes server-side reserved fixed IP addresses. (A non-renewed lease risks getting reallocated to another machine, which would cause an addressing conflict.)

  6. It will not automatically reconfigure the interface on link changes, but ifupdown does not either.

  7. It will not write /etc/resolv.conf for you, but the dns0-ip and dns1-ip do end up in /proc/net/pnp, which has a compatible syntax, so a common configuration is:

    ln -s /proc/net/pnp /etc/resolv.conf
  8. I have not really tested this at scale: only a single test server at home.

Yes, that's a lot of caveats, but it happens to cover a lot of machines for me, and it works surprisingly well. My main doubts are about long-term DHCP behaviour, but I don't see why that would be a problem with a statically defined lease.

Cleanup

Once you have this configuration, you don't need any "user"-level network system, so you can get rid of everything:

apt purge systemd-networkd ifupdown network-manager netplan.io

Note that ifupdown (and probably others) leave stray files in (e.g.) /etc/network which you might want to clean up, or keep in case all this fails and I have put you in utter misery. Configuration files for other packages might also be left behind; I haven't tested this, no warranty.

Credits

This whole idea came from the A/I folks (not to be confused with AI), who have been doing this forever. Thanks!

16 February, 2026 04:18AM


Benjamin Mako Hill

Why do people participate in similar online communities?

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects.

It seems natural to think of online communities competing for the time and attention of their participants. Over the last few years, I’ve worked with a team of collaborators—led byNathan TeBlunthuis—to use mathematical and statistical techniques from ecology to understand these dynamics. What we’ve found surprised us: competition between online communities is rare and typically short-lived.

When we started this research, we figured competition would be most likely among communities discussing similar topics. As a first step, we identified clusters of such communities on Reddit. One surprising thing we noticed in our Reddit data was that many of these communities that used similar language also had very high levels of overlap among their users. This was puzzling:why were the same groups of people talking to each other about the same things in different places? And why don’t they appear to be in competition with each other for their users’ time and activity?

We didn’t know how to answer this question using quantitative methods. As a result, we recruited and interviewed 20 active participants in clusters of highly related subreddits with overlapping user bases (for example, one cluster was focused on vintage audio).

We found that the answer to the puzzle lay in the fact that the people we talked to were looking for three distinct things from the communities they participated in:

  1. The ability to connect to specific information and narrowly scoped discussions.
  2. The ability to socialize with people who are similar to themselves.
  3. Attention from the largest possible audience.

Critically, we also found that these three things represented a “trilemma,” and that no single community can meet all three needs. You might find two of the three in a single community, but you could never have all three.

Figure from “No Community Can Do Everything: Why People Participate in Similar Online Communities” depicts three key benefits that people seek from online communities and how individual communities tend not to optimally provide all three. For example, large communities tend not to afford a tight-knit homophilous community.

The end result is something I recognize in how I engage with online communities on platforms like Reddit. People tend to engage with a portfolio of communities that vary in size, specialization, topical focus, and rules. Compared with any single community, such overlapping systems can provide a wider range of benefits. No community can do everything.


This work was published as a paper at CSCW: TeBlunthuis, Nathan, Charles Kiene, Isabella Brown, Laura (Alia) Levi, Nicole McGinnis, and Benjamin Mako Hill. 2022. “No Community Can Do Everything: Why People Participate in Similar Online Communities.” Proceedings of the ACM on Human-Computer Interaction 6 (CSCW1): 61:1-61:25. https://doi.org/10.1145/3512908.

This work was supported by the National Science Foundation (awards IIS-1908850, IIS-1910202, and GRFP-2016220885). A full list of acknowledgements is in the paper.

16 February, 2026 03:13AM by Benjamin Mako Hill

February 15, 2026

Ian Jackson

Adopting tag2upload and modernising your Debian packaging

Introduction

tag2upload allows authorised Debian contributors to upload to Debian simply by pushing a signed git tag to Debian’s gitlab instance, Salsa.

We have recently announced that tag2upload is, in our opinion, now very stable, and ready for general use by all Debian uploaders.

tag2upload, as part of Debian’s git transition programme, is very flexible - it needs to support a large variety of maintainer practices. And it’s relatively unopinionated, wherever that’s possible. But, during the open beta, various contributors emailed us asking for Debian packaging git workflow advice and recommendations.

This post is an attempt to give some more opinionated answers, and guide you through modernising your workflow.

(This article is aimed squarely at Debian contributors. Much of it will make little sense to Debian outsiders.)

Why

Ease of development

git offers a far superior development experience to patches and tarballs. Moving tasks from a tarballs and patches representation to a normal, git-first, representation, makes everything simpler.

dgit and tag2upload do automatically many things that have to be done manually, or with separate commands, in dput-based upload workflows.

They will also save you from a variety of common mistakes. For example, you cannot accidentally overwrite an NMU, with tag2upload or dgit. These many safety catches mean that our software sometimes complains about things, or needs confirmation, when more primitive tooling just goes ahead. We think this is the right tradeoff: it’s part of the great care we take to avoid our software making messes. Software that has your back is very liberating for the user.

tag2upload makes it possible to upload with very small amounts of data transfer, which is great in slow or unreliable network environments. The other week I did a git-debpush over mobile data while on a train in Switzerland; it completed in seconds.

See theDay-to-day work section below to see how simple your life could be.

Don’t fear a learning burden; instead, start forgetting all that nonsense

Most Debian contributors have spent months or years learning how to work with Debian’s tooling. You may reasonably fear that our software is yet more bizarre, janky, and mistake-prone stuff to learn.

We promise (and our users tell us) that’s not how it is. We have spent a lot of effort on providing a good user experience. Our new git-first tooling, especially dgit and tag2upload, is much simpler to use than source-package-based tooling, despite being more capable.

The idiosyncrasies and bugs of source packages, and of the legacy archive, have been relentlessly worked around and papered over by our thousands of lines of thoroughly-tested defensive code. You too can forget all those confusing details, like our users have! After using our systems for a while you won’t look back.

And, you shouldn’t fear trying it out. dgit and tag2upload are unlikely to make a mess. If something is wrong (or even doubtful), they will typically detect it, and stop. This does mean that starting to use tag2upload or dgit can involve resolving anomalies that previous tooling ignored, or passing additional options to reassure the system about your intentions. So admittedly it isn’t always trivial to get your first push to succeed.

Properly publishing the source code

One of Debian’s foundational principles is that we publish the source code.

Nowadays, the vast majority of us, and of our upstreams, are using git. We are doing this because git makes our life so much easier.

But, without tag2upload or dgit, we aren’t properly publishing our work! Yes, we typically put our git branch on Salsa, and point Vcs-Git at it. However:

  • The format of git branches on Salsa is not standardised. They might be patches-unapplied, patches-applied, bare debian/, or something even stranger.
  • There is no guarantee that the DEP-14 debian/1.2.3-7 tag on Salsa corresponds precisely to what was actually uploaded. dput-based tooling (such as gbp buildpackage) doesn’t cross-check the .dsc against git.
  • There is no guarantee that the presence of a DEP-14 tag even means that that version of the package is in the archive.

This means that the git repositories on Salsa cannot be used by anyone who needs things that are systematic and always correct. They are OK for expert humans, but they are awkward (even hazardous) for Debian novices, and you cannot use them in automation. The real test is: could you use Vcs-Git and Salsa to build a Debian derivative? You could not.

tag2upload and dgit do solve this problem. When you upload, they:

  1. Make a canonical-form (patches-applied) derivative of your git branch;
  2. Ensure that there is a well-defined correspondence between the git tree and the source package;
  3. Publish both the DEP-14 tag and a canonical-form archive/debian/1.2.3-7 tag to a single central git repository, *.dgit.debian.org;
  4. Record the git information in the Dgit field in .dsc so that clients can tell (using the ftpmaster API) that this was a git-based upload, what the corresponding git objects are, and where to find them.

This dependably conveys your git history to users and downstreams, in a standard, systematic and discoverable way. tag2upload and dgit are the only system which achieves this.

(The client is dgit clone, as advertised in e.g. dgit-user(7). For dput-based uploads, it falls back to importing the source package.)

Adopting tag2upload - the minimal change

tag2upload is a substantial incremental improvement to many existing workflows. git-debpush is a drop-in replacement for building, signing, and uploading the source package.

So, you can just adopt it without completely overhauling your packaging practices. You and your co-maintainers can even mix-and-match tag2upload, dgit, and traditional approaches, for the same package.

Start with the wiki page and git-debpush(1) (ideally from forky aka testing).

You don’t need to do any of the other things recommended in this article.

Overhauling your workflow, using advanced git-first tooling

The rest of this article is a guide to adopting the best and most advanced git-based tooling for Debian packaging.

Assumptions

  • Your current approach uses the “patches-unapplied” git branch format used with gbp pq and/or quilt, and often used with git-buildpackage. You previously used gbp import-orig.

  • You are fluent with git, and know how to use Merge Requests on gitlab (Salsa). You have your origin remote set to Salsa.

  • Your main Debian branch name on Salsa is master. Personally I think we should use main, but changing your main branch name is outside the scope of this article.

  • You have enough familiarity with Debian packaging, including concepts like source and binary packages, and NEW review.

  • Your co-maintainers are also adopting the new approach.

tag2upload and dgit (and git-debrebase) are flexible tools and can help with many other scenarios too, and you can often mix-and-match different approaches. But, explaining every possibility would make this post far too confusing.

Topics and tooling

This article will guide you in adopting:

  • tag2upload
  • Patches-applied git branch for your packaging
  • Either plain git merge or git-debrebase
  • dgit when a with-binaries upload is needed (NEW)
  • git-based sponsorship
  • Salsa (gitlab), including Debian Salsa CI

Choosing the git branch format

In Debian we need to be able to modify the upstream-provided source code. Those modifications are the Debian delta. We need to somehow represent it in git.

We recommend storing the delta as git commits to those upstream files, by picking one of the following two approaches.

rationale

Much traditional Debian tooling like quilt and gbp pq uses the “patches-unapplied” branch format, which stores the delta as patch files in debian/patches/, in a git tree full of unmodified upstream files. This is clumsy to work with, and can even be an alarming beartrap for Debian outsiders.

git merge

Option 1: simply use git, directly, including git merge.

Just make changes directly to upstream files on your Debian branch, when necessary. Use plain git merge when merging from upstream.

This is appropriate if your package has no or very few upstream changes. It is a good approach if the Debian maintainers and upstream maintainers work very closely, so that any needed changes for Debian are upstreamed quickly, and any desired behavioural differences can be arranged by configuration controlled from within debian/.

This is the approach documented more fully in our workflow tutorial dgit-maint-merge(7).

git-debrebase

Option 2: Adopt git-debrebase.

git-debrebase helps maintain your delta as a linear series of commits (very like a “topic branch” in git terminology). The delta can be reorganised, edited, and rebased. git-debrebase is designed to help you carry a significant and complicated delta series.

The older versions of the Debian delta are preserved in the history. git-debrebase makes extra merges to make a fast-forwarding history out of the successive versions of the delta queue branch.

This is the approach documented more fully in our workflow tutorial dgit-maint-debrebase(7).

Examples of complex packages using this approach include src:xen and src:sbcl.

Determine upstream git and stop using upstream tarballs

We recommend using upstream git, only and directly. You should ignore upstream tarballs completely.

rationale

Many maintainers have been importing upstream tarballs into git, for example by using gbp import-orig. But in reality the upstream tarball is an intermediate build product, not (just) source code. Using tarballs rather than git exposes us to additional supply chain attacks; indeed, the key activation part of the xz backdoor attack was hidden only in the tarball!

git offers better traceability than so-called “pristine” upstream tarballs. (The word “pristine” is even a joke by the author of pristine-tar!)

First, establish which upstream git tag corresponds to the version currently in Debian. For the sake of readability, I’m going to pretend that the upstream version is 1.2.3, and that upstream tagged it v1.2.3.

Edit debian/watch to contain something like this:

version=4
opts="mode=git" https://codeberg.org/team/package refs/tags/v(\d\S*)

You may need to adjust the regexp, depending on your upstream’s tag name convention. If debian/watch had a files-excluded, you’ll need to make a filtered version of upstream git.
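
To check that the watch file actually matches upstream’s tags, you can ask uscan for a report without downloading anything; this is purely a sanity check, not a required step:

uscan --no-download --verbose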

git-debrebase

From now on we’ll generate our own .orig tarballs directly from git.

rationale

We need some “upstream tarball” for the 3.0 (quilt) source format to work with. It needs to correspond to the git commit we’re using as our upstream. We don’t need or want to use a tarball from upstream for this. The .orig is just needed so a nice legacy Debian source package (.dsc) can be generated.

Probably, the current .orig in the Debian archive is an upstream tarball, which may be different to the output of git-archive and possibly even have different contents to what’s in git. The legacy archive has trouble with differing .origs for the “same upstream version”.

So we must — until the next upstream release — change our idea of the upstream version number. We’re going to add +git to Debian’s idea of the upstream version. Manually make a tag with that name:

git tag -m "Compatibility tag for orig transition" v1.2.3+git v1.2.3~0
git push origin v1.2.3+git

If you are doing the packaging overhaul at the same time as a new upstream version, you can skip this part.

Convert the git branch

git merge

Prepare a new branch on top of upstream git, containing what we want:

git branch -f old-master         # make a note of the old git representation
git reset --hard v1.2.3          # go back to the real upstream git tag
git checkout old-master :debian  # take debian/* from old-master
git commit -m "Re-import Debian packaging on top of upstream git"
git merge --allow-unrelated-histories -s ours -m "Make fast forward from tarball-based history" old-master
git branch -d old-master         # it's incorporated in our history now

If there are any patches, manually apply them to your main branch with git am, and delete the patch files (git rm -r debian/patches, and commit). (If you’ve chosen this workflow, there should be hardly any patches.)

rationale

These are some pretty nasty git runes, indeed. They’re needed because we want to restart our Debian packaging on top of a possibly quite different notion of what the upstream is.

git-debrebase

Convert the branch to git-debrebase format and rebase onto the upstream git:

git-debrebase -fdiverged convert-from-gbp upstream/1.2.3
git-debrebase -fdiverged -fupstream-not-ff new-upstream 1.2.3+git

If you had patches which patched generated files which are present only in the upstream tarball, and not in upstream git, you will encounter rebase conflicts. You can drop hunks editing those files, since those files are no longer going to be part of your view of the upstream source code at all.

rationale

The force option -fupstream-not-ff will be needed this one time because your existing Debian packaging history is (probably) not based directly on the upstream history. -fdiverged may be needed because git-debrebase might spot that your branch is not based on dgit-ish git history.

Manually make your history fast forward from the git import of your previous upload.

dgit fetch
git show dgit/dgit/sid:debian/changelog
# check that you have the same version number
git merge -s ours --allow-unrelated-histories -m 'Declare fast forward from pre-git-based history' dgit/dgit/sid

Change the source format

Delete any existing debian/source/options and/or debian/source/local-options.

git merge

Change debian/source/format to 1.0. Add debian/source/options containing -sn.
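
Spelled out as commands, the file changes described above could look like this (the commit message is only an example):

echo '1.0' > debian/source/format
printf '%s\n' '-sn' > debian/source/options
git add debian/source/format debian/source/options
git commit -m 'Switch to 1.0 native source format'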

rationale

We are using the “1.0 native” source format. This is the simplest possible source format - just a tarball. We would prefer “3.0 (native)”, which has some advantages, but dpkg-source between 2013 (wheezy) and 2025 (trixie) inclusive unjustifiably rejects this configuration.

You may receive bug reports from over-zealous folks complaining about the use of the 1.0 source format. You should close such reports, with a reference to this article and to #1106402.

git-debrebase

Ensure that debian/source/format contains 3.0 (quilt).

Now you are ready to do a local test build.

Sort out the documentation and metadata

Edit README.source to at least mention dgit-maint-merge(7) or dgit-maint-debrebase(7), and to tell people not to try to edit or create anything in debian/patches/. Consider saying that uploads should be done via dgit or tag2upload.

Check that your Vcs-Git is correct in debian/control. Consider deleting or pruning debian/gbp.conf, since it isn’t used by dgit, tag2upload, or git-debrebase.

git merge

Add a note to debian/changelog about the git packaging change.

git-debrebase

git-debrebase new-upstream will have added a “new upstream version” stanza to debian/changelog. Edit that so that it instead describes the packaging change. (Don’t remove the +git from the upstream version number there!)

Configure Salsa Merge Requests

git-debrebase

In “Settings” / “Merge requests”, change “Squash commits when merging” to “Do not allow”.

rationale

Squashing could destroy your carefully-curated delta queue. It would also disrupt git-debrebase’s git branch structure.

Set up Salsa CI, and use it to block merges of bad changes

Caveat - the tradeoff

gitlab is a giant pile of enterprise crap. It is full of startling bugs, many of which reveal a fundamentally broken design. It is only barely Free Software in practice for Debian (in the sense that we are very reluctant to try to modify it). The constant-churn development approach and open-core business model are serious problems. It’s very slow (and resource-intensive). It can be depressingly unreliable. That Salsa works as well as it does is a testament to the dedication of the Debian Salsa team (and those who support them, including DSA).

However, I have found that despite these problems, Salsa CI is well worth the trouble. Yes, there are frustrating days when work is blocked because gitlab CI is broken and/or one has to keep mashing “Retry”. But, the upside is no longer having to remember to run tests, track which of my multiple dev branches tests have passed on, and so on. Automatic tests on Merge Requests are a great way of reducing maintainer review burden for external contributions, and helping uphold quality norms within a team. They’re a great boon for the lazy solo programmer.

The bottom line is that I absolutely love it when the computer thoroughly checks my work. This is tremendously freeing, precisely at the point when one most needs it — deep in the code. If the price is to occasionally be blocked by a confused (or broken) computer, so be it.

Setup procedure

Create debian/salsa-ci.yml containing

include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml

In your Salsa repository, under “Settings” / “CI/CD”, expand “General Pipelines” and set “CI/CD configuration file” to debian/salsa-ci.yml.

rationale

Your project may have an upstream CI config in .gitlab-ci.yml. But you probably want to run the Debian Salsa CI jobs.

You can add various extra configuration to debian/salsa-ci.yml to customise it. Consult the Salsa CI docs.

git-debrebase

Add to debian/salsa-ci.yml:

.git-debrebase-prepare: &git-debrebase-prepare
  # install the tools we'll need
  - apt-get update
  - apt-get --yes install git-debrebase git-debpush
  # git-debrebase needs git user setup
  - git config user.email "salsa-ci@invalid.invalid"
  - git config user.name "salsa-ci"
  # run git-debrebase make-patches
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/371
  - git-debrebase --force
  - git-debrebase make-patches
  # make an orig tarball using the upstream tag, not a gbp upstream/ tag
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/541
  - git-deborig

.build-definition: &build-definition
  extends: .build-definition-common
  before_script: *git-debrebase-prepare

build source:
  extends: .build-source-only
  before_script: *git-debrebase-prepare

variables:
  # disable shallow cloning of git repository. This is needed for git-debrebase
  GIT_DEPTH: 0
rationale

Unfortunately the Salsa CI pipeline currently lacks proper support for git-debrebase (salsa-ci#371) and has trouble directly using upstream git for orig tarballs (salsa-ci#541).

These runes were based on those in the Xen package. You should subscribe to the tickets #371 and #541 so that you can replace the clone-and-hack when proper support is merged.

Push this to salsa and make the CI pass.

If you configured the pipeline filename after your last push, you will need to explicitly start the first CI run. That’s in “Pipelines”: press “New pipeline” in the top right. The defaults will very probably be correct.

Block untested pushes, preventing regressions

In your project on Salsa, go into “Settings” / “Repository”. In the section “Branch rules”, use “Add branch rule”. Select the branch master. Set “Allowed to merge” to “Maintainers”. Set “Allowed to push and merge” to “No one”. Leave “Allow force push” disabled.

This means that the only way to land anything on your mainline is via a Merge Request. When you make a Merge Request, gitlab will offer “Set to auto-merge”. Use that.

gitlab won’t normally merge an MR unless CI passes, although you can override this on a per-MR basis if you need to.

(Sometimes, immediately after creating a merge request in gitlab, you will see a plain “Merge” button. This is a bug. Don’t press that. Reload the page so that “Set to auto-merge” appears.)

autopkgtests

Ideally, your package would have meaningful autopkgtests (DEP-8 tests). This makes Salsa CI more useful for you, and also helps detect and defend you against regressions in your dependencies.

The Debian CI docs are a good starting point. In-depth discussion of writing autopkgtests is beyond the scope of this article.

Day-to-day work

With this capable tooling, most tasks are much easier.

Making changes to the package

Make all changes via a Salsa Merge Request. So start by making a branch that will become the MR branch.

On your MR branch you can freely edit every file. This includes upstream files, and files in debian/.

For example, you can:

  • Make changes with your editor and commit them.
  • git cherry-pick an upstream commit.
  • git am a patch from a mailing list or from the Debian Bug System.
  • git revert an earlier commit, even an upstream one.

When you have a working state of things, tidy up your git branch:

git merge

Use git-rebase to squash/edit/combine/reorder commits.

git-debrebase

Use git-debrebase -i to squash/edit/combine/reorder commits. When you are happy, run git-debrebase conclude.

Do not edit debian/patches/. With git-debrebase, this is purely an output. Edit the upstream files directly instead. To reorganise/maintain the patch queue, use git-debrebase -i to edit the actual commits.

Push the MR branch (topic branch) to Salsa and make a Merge Request.

Set the MR to “auto-merge when all checks pass”. (Or, depending on your team policy, you could ask for an MR Review of course.)

If CI fails, fix up the MR branch, squash/tidy it again, force push the MR branch, and once again set it to auto-merge.

Test build

An informal test build can be done like this:

apt-get build-dep .
dpkg-buildpackage -uc -b

Ideally this will leave git status clean, with no modified or un-ignored untracked files. If it shows untracked files, add them to .gitignore or debian/.gitignore as applicable.

If it dirties the tree, consider trying to make it stop doing that. The easiest way is probably to build out-of-tree, if supported upstream. If this is too difficult, you can leave the messy build arrangements as they are, but you’ll need to be disciplined about always committing, using git clean and git reset, and so on.
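
If you do keep an in-tree build, the discipline mentioned above typically amounts to something like the following between builds; note that this is destructive, throwing away uncommitted changes and untracked files, so commit first:

git clean -xdff   # remove untracked and ignored files (including stray nested git dirs)
git reset --hard   # discard uncommitted changes to tracked files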

For formal binary builds, including for testing, use dgit sbuild as described below for uploading to NEW.

Uploading to Debian

Start an MR branch for the administrative changes for the release.

Document all the changes you’re going to release, in the debian/changelog.

git merge

gbp dch can help write the changelog for you:

dgit fetch sid
gbp dch --ignore-branch --since=dgit/dgit/sid --git-log=^upstream/main
rationale

--ignore-branch is needed because gbp dch wrongly thinks you ought to be running this on master, but of course you’re running it on your MR branch.

The --git-log=^upstream/main excludes all upstream commits from the listing used to generate the changelog. (I’m assuming you have an upstream remote and that you’re basing your work on their main branch.) If there was a new upstream version, you’ll usually want to write a single line about that, and perhaps summarise anything really important.

(For the first upload after switching to using tag2upload or dgit you need --since=debian/1.2.3-1, where 1.2.3-1 is your previous DEP-14 tag, because dgit/dgit/sid will be a dsc import, not your actual history.)

Change UNRELEASED to the target suite, and finalise the changelog. (Note that dch will insist that you at least save the file in your editor.)

dch -r
git commit -m 'Finalise for upload' debian/changelog

Make an MR of these administrative changes, and merge it. (Either set it to auto-merge and wait for CI, or if you’re in a hurry double-check that it really is just a changelog update so that you can be confident about telling Salsa to “Merge unverified changes”.)

Now you can perform the actual upload:

git checkout master
git pull --ff-only # bring the gitlab-made MR merge commit into your local tree
git merge
git-debpush
git-debrebase
git-debpush --quilt=linear

--quilt=linear is needed only the first time, but it is very important that first time, to tell the system the correct git branch layout.

Uploading a NEW package to Debian

If your package is NEW (completely new source, or has new binary packages) you can’t do a source-only upload. You have to build the source and binary packages locally, and upload those build artifacts.

Happily, given the same git branch you’d tag for tag2upload, and assuming you have sbuild installed and a suitable chroot, dgit can help take care of the build and upload for you:

Prepare the changelog update and merge it, as above. Then:

git-debrebase

Create the orig tarball and launder the git-debrebase branch:

git-deborig
git-debrebase quick
rationale

Source package format 3.0 (quilt), which is what I’m recommending here for use with git-debrebase, needs an orig tarball; it would also be needed for 1.0-with-diff.

Build the source and binary packages, locally:

dgit sbuild
dgit push-built
rationale

You don’t have to use dgit sbuild, but it is usually convenient to do so, because unlike sbuild, dgit understands git. Also it works around a gitignore-related defect in dpkg-source.

New upstream version

Find the new upstream version number and corresponding tag. (Let’s suppose it’s 1.2.4.) Check the provenance:

git verify-tag v1.2.4
rationale

Not all upstreams sign their git tags, sadly. Sometimes encouraging them to do so can help. You may need to use some other method(s) to check that you have the right git commit for the release.

git merge

Simply merge the new upstream version and update the changelog:

git merge v1.2.4
dch -v1.2.4-1 'New upstream release.'

git-debrebase

Rebase your delta queue onto the new upstream version:

git debrebase new-upstream 1.2.4

If there are conflicts between your Debian delta for 1.2.3 and the upstream changes in 1.2.4, this is when you need to resolve them, as part of git merge or git (deb)rebase.

After you’ve completed the merge, test your package and make any further needed changes. When you have it working in a local branch, make a Merge Request, as above.

Sponsorship

git-based sponsorship is super easy! The sponsee can maintain their git branch on Salsa, and do all normal maintenance via gitlab operations.

When the time comes to upload, the sponsee notifies the sponsor that it’s time. The sponsor fetches and checks out the git branch from Salsa, does their checks as they judge appropriate, and when satisfied runs git-debpush.

As part of the sponsor’s checks, they might want to see all changes since the last upload to Debian:

dgit fetch sid
git diff dgit/dgit/sid..HEAD

Or to see the Debian delta of the proposed upload:

git verify-tag v1.2.3
git diff v1.2.3..HEAD ':!debian'
git-debrebase

Or to show all the delta as a series of commits:

git log -p v1.2.3..HEAD ':!debian'

Don’t look at debian/patches/. It can be absent or out of date.

Incorporating an NMU

Fetch the NMU into your local git, and see what it contains:

dgit fetch sid
git diff master...dgit/dgit/sid

If the NMUer used dgit, then git log dgit/dgit/sid will show you the commits they made.

Normally the best thing to do is to simply merge the NMU, and then do any reverts or rework in followup commits:

git merge dgit/dgit/sid
git-debrebase

You should run git-debrebase quick at this stage, to check that the merge went OK and the package still has a lineariseable delta queue.

Then make any followup changes that seem appropriate. Supposing your previous maintainer upload was 1.2.3-7, you can go back and see the NMU diff again with:

git diff debian/1.2.3-7...dgit/dgit/sid
git-debrebase

The actual changes made to upstream files will always show up as diff hunks to those files. diff commands will often also show you changes to debian/patches/. Normally it’s best to filter them out with git diff ... ':!debian/patches'

If you’d prefer to read the changes to the delta queue as an interdiff (diff of diffs), you can do something like

git checkout debian/1.2.3-7
git-debrebase --force make-patches
git diff HEAD...dgit/dgit/sid -- :debian/patches

to diff against a version with debian/patches/ up to date. (The NMU, in dgit/dgit/sid, will necessarily have the patches already up to date.)

DFSG filtering (handling non-free files)

Some upstreams ship non-free files of one kind or another. Often these are just in the tarballs, in which case basing your work on upstream git avoids the problem. But if the files are in upstream’s git trees, you need to filter them out.

This advice is not for (legally or otherwise) dangerous files. If your package contains files that may be illegal, or hazardous, you need much more serious measures. In this case, even pushing the upstream git history to any Debian service, including Salsa, must be avoided. If you suspect this situation you should seek advice, privately and as soon as possible, from dgit-owner@d.o and/or the DFSG team. Thankfully, legally dangerous files are very rare in upstream git repositories, for obvious reasons.

Our approach is to make a filtered git branch, based on the upstream history, with the troublesome files removed. We then treat that as the upstream for all of the rest of our work.

rationale

Yes, this will end up including the non-free files in the git history, on official Debian servers. That’s OK. What’s forbidden is non-free material in the Debianised git tree, or in the source packages.

Initial filtering

git checkout -b upstream-dfsg v1.2.3
git rm nonfree.exe
git commit -m "upstream version 1.2.3 DFSG-cleaned"
git tag -s -m "upstream version 1.2.3 DFSG-cleaned" v1.2.3+ds1
git push origin upstream-dfsg

And now, use 1.2.3+ds1, and the filtered branch upstream-dfsg, as the upstream version, instead of 1.2.3 and upstream/main. Follow the steps for Convert the git branch or New upstream version, as applicable, adding +ds1 into debian/changelog.

If you missed something and need to filter out more non-free files, re-use the same upstream-dfsg branch and bump the ds version, e.g. v1.2.3+ds2.

Subsequent upstream releases

git checkout upstream-dfsg
git merge v1.2.4
git rm additional-nonfree.exe # if any
git commit -m "upstream version 1.2.4 DFSG-cleaned"
git tag -s -m "upstream version 1.2.4 DFSG-cleaned" v1.2.4+ds1
git push origin upstream-dfsg

Removing files by pattern

If the files you need to remove keep changing, you could automate things with a small shell script debian/rm-nonfree containing appropriate git rm commands. If you use git rm -f it will succeed even if the git merge from real upstream has conflicts due to changes to non-free files.
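
For example, such a script might look like this; the file names and patterns are hypothetical, adapt them to your package:

#!/bin/sh
# debian/rm-nonfree: remove non-free files after merging from upstream
# (the paths below are made-up examples)
set -e
git rm -f --ignore-unmatch nonfree.exe 'blobs/*.bin'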

rationale

Ideally uscan, which has a way of representing DFSG filtering patterns in debian/watch, would be able to do this, but sadly the relevant functionality is entangled with uscan’s tarball generation.

Common issues

  • Tarball contents: If you are switching from upstream tarballs to upstream git, you may find that the git tree is significantly different.

    It may be missing files that your current build system relies on. If so, you definitely want to be using git, not the tarball. Those extra files in the tarball are intermediate built products, but in Debian we should be building from the real source! Fixing this may involve some work, though.

  • gitattributes:

    For Reasons the dgit and tag2upload system disregards and disables the use of .gitattributes to modify files as they are checked out.

    Normally this doesn’t cause a problem so long as any orig tarballs are generated the same way (as they will be by tag2upload or git-deborig). But if the package or build system relies on them, you may need to institute some workarounds, or replicate the effect of the gitattributes as commits in git.

  • git submodules: git submodules are terrible and should never ever be used. But not everyone has got the message, so your upstream may be using them.

    If you’re lucky, the code in the submodule isn’t used, in which case you can git rm the submodule.

Further reading

I’ve tried to cover the most common situations. But software is complicated and there are many exceptions that this article can’t cover without becoming much harder to read.

You may want to look at:

  • dgit workflow manpages: As part of the git transition project, we have written workflow manpages, which are more comprehensive than this article. They’re centered around use of dgit, but also discuss tag2upload where applicable.

    These cover a much wider range of possibilities, including (for example) choosing different source package formats, how to handle upstreams that publish only tarballs, etc. They are correspondingly much less opinionated.

    Look in dgit-maint-merge(7) and dgit-maint-debrebase(7). There is also dgit-maint-gbp(7) for those who want to keep using gbp pq and/or quilt with a patches-unapplied branch.

  • NMUs are very easy with dgit. (tag2upload is usually less suitable than dgit, for an NMU.)

    You can work with any package, in git, in a completely uniform way, regardless of maintainer git workflow. See dgit-nmu-simple(7).

  • Native packages (meaning packages maintained wholly within Debian) are much simpler. See dgit-maint-native(7).

  • tag2upload documentation: The tag2upload wiki page is a good starting point. There’s the git-debpush(1) manpage of course.

  • dgit reference documentation:

    There is a comprehensive command-line manual in dgit(1). A description of the dgit data model and Principles of Operation is in dgit(7), including coverage of out-of-course situations.

    dgit is a complex and powerful program so this reference material can be overwhelming. So, we recommend starting with a guide like this one, or the dgit-…(7) workflow tutorials.

  • Design and implementation documentation for tag2upload is linked to from the wiki.

  • Debian’s git transition blog post from December.

    tag2upload and dgit are part of the git transition project, and aim to support a very wide variety of git workflows. tag2upload and dgit work well with existing git tooling, including git-buildpackage-based approaches.

    git-debrebase is conceptually separate from, and functionally independent of, tag2upload and dgit. It’s a git workflow and delta management tool, competing with gbp pq, manual use of quilt, git-dpm and so on.

git-debrebase
  • git-debrebase reference documentation:

    Of course there’s a comprehensive command-line manual in git-debrebase(1).

    git-debrebase is quick and easy to use, but it has a complex data model and sophisticated algorithms. This is documented in git-debrebase(5).




15 February, 2026 01:31PM

February 14, 2026

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

A286874(15) >= 42

The following 42 15-bit values form a 2-disjunctive matrix (that is, no union of two values contains or equals a third value), or equivalently, a superimposed code:

000000000011111000000011100011000000101101100000001010110100000001101010001000001110001010000010011011000000100100110010000110010000110000110100001001000111001100000001000110000101001010000110001001010101000010001011000001100001100001010100001100010101000001101000000011010001000101001010010001000101010010110100000010011000010010010100001001010010100010010001010101100000100011000000100110011000100011000011001011000000100001001000110100010000101010100010100010100100011010000001100100000100101100100111000000100101000011000101000001001001101000010010010101001100100000110000001110000110000010001100110000100000011111110000000000

This shows that A286874 a(15) >= 42.

If I had to make a guess, I'd say the equality holds, but I have nowhere near the computing resources to actually find the answer for sure. Stay tuned for news about a(14), though.

14 February, 2026 06:56PM

hackergotchi for Bits from Debian

Bits from Debian

DebConf 26 Registration and Call for Proposals are open

Registration and the Call for Proposals for DebConf 26 are now open. The 27th edition of the Debian annual conference will be held from July 20th to July 25th, 2026, in Santa Fe, Argentina.

The conference days will be preceded by DebCamp, which will take place from July 13th to July 19th, 2026.

The registration form can be accessed on the DebConf 26 website. After creating an account, click "register" in the profile section.

As always, basic registration for DebConf is free of charge for attendees. If you are attending the conference in a professional capacity or as a representative of your company, we kindly ask that you consider registering in one of our paid categories to help cover the costs of organizing the conference and to support subsidizing other community members.

The last day to register with guaranteed swag is June 14th.

We also encourage eligible individuals to apply for a diversity bursary. Travel, food, and accommodation bursaries are also available. More details can be found on the bursary info page.

The last day to apply for a bursary is April 1st. Applicants should receive feedback on their bursary application by May 1st.

Call for proposals

The call for proposals for talks, discussions and other activities is also open. To submit a proposal you need to create an account on the website, and then use the "Submit Talk" button in the profile section.

The last day to submit and have your proposal be considered for the main conference schedule, with video coverage guaranteed, is April 1st.

Become a sponsor

DebConf 26 is also accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org or visit the DebConf 26 website.

See you in Santa Fe,

The DebConf 26 Team

14 February, 2026 12:15PM by Carlos Henrique Lima Melara, Santiago Ruano Rincón

February 13, 2026

hackergotchi for Erich Schubert

Erich Schubert

Dogfood Generative AI

Current AI companies ignore licenses such as the GPL, and often train on anything they can scrape. This is not acceptable.

The AI companies ignore web conventions, e.g., they deep link images from your web sites (even adding ?utm_source=chatgpt.com to image URIs; I suggest that you return 403 on these requests), but do not direct visitors to your site. You do not get a reliable way of opting out from generative AI training or use. For example, the only way to prevent your contents from being used in “Google AI Overviews” is to use data-nosnippet and cripple the snippet preview in Google. The “AI” browsers such as Comet and Atlas do not identify as such, but rather pretend they are standard Chromium. There is no way to ban such AI use on your web site.

Generative AI overall is flooding the internet with garbage. It was estimated that 1/3rd of the content uploaded to YouTube is by now AI generated. This includes the same “veteran stories” crap in thousands of variants as well as brainrot content (that at least does not pretend to be authentic), some of which is among the most viewed recent uploads. Hence, these platforms even benefit from the AI slop. And don’t blame the “creators” – because you can currently earn a decent amount of money from such content, people will generate brainrot content.

If you have recently tried to find honest reviews of products you considered buying, you will have noticed thousands of sites with AI generated fake product reviews, all financed by Amazon PartnerNet commissions. Often with hilarious nonsense such as recommending “sewing thread with German instructions” as a tool for repairing a sewing machine. And on Amazon, there are plenty of AI generated product reviews – the use of emoji is a strong hint. And if you leave a negative product review, there is a chance they offer you a refund to get rid of it… And the majority of SPAM that gets through my filters is by now sent via Gmail and Amazon SES.

Partially because of GenAI, StackOverflow – which used to be one of the most valuable programming resources – is pretty much dead. (While a lot of people complain about moderation, famous moderator Shog9 from the early SO days suggested that a change in Google’s ranking is also to blame, as it began favoring showing “new” content over the existing answered questions – causing more and more duplicates to be posted because people no longer found the existing good answers.) In January 2026, there were around 3400 questions and 6000 answers posted, less than in the first month of SO, August 2008 (before the official launch).

Many open-source projects are suffering in many ways, e.g., false bug reports that caused curl to stop its bug bounty program. Wikipedia is also suffering badly from GenAI.

Science is also flooded with poor AI generated papers, often reviewed with help from AI. This is largely due to bad incentives – to graduate, you are expected to write many papers for certain “A” conferences, such as NeurIPS. At these conferences the number of submissions is growing insanely, and the review quality plummets. All too often, the references in these papers are hallucinated, too; and libraries complain that they receive more and more requests to locate literature that does not appear to exist.

However, the worst effect (at least to me as an educator) is the noskilling effect (a rather novel term derived from deskilling, I have only seen it in this article by Weßels and Maibaum).

Instead of acquiring skills (writing, reading, summarizing, programming) by practising, too many people now outsource all this to AI, leading to them not learning the basics necessary to advance to a higher skill level. In my impression, this effect is dramatic. It is even worse than deskilling, as it does not mean losing an advanced skill that you apparently can replace, but often means not acquiring basic skills in the first place. And the earlier pupils start using generative AI, the fewer skills they acquire.

Dogfood the AI

Let’s dogfood the AI. Here’s an outline:

  1. Get a list of programming topics, e.g., get a list of algorithms from Wikidata, get a StackOverflow data dump.
  2. Generate flawed code examples for the algorithms / programming questions, maybe generate blog posts, too.
    You do not need a high-quality model for this. Use something you can run locally or access for free.
  3. Date everything back in time, remove typical indications of AI use.
  4. Upload to Github, because Microsoft will feed this to OpenAI…

Here is an example prompt that you can use:

You are a university educator, preparing homework assignments in debugging.
The programming language used is {lang}.
The students are tasked to find bugs in given code.
Do not just call existing implementations from libraries, but implement the algorithm from scratch.
Make sure there are two mistakes in the code that need to be discovered by the students.
Do NOT repeat instructions. Do NOT add small-talk. Do NOT provide a solution.
The code may have (misleading) comments, but must NOT mention the bugs.
If you do not know how to implement the algorithm, output an empty response.
Output only the code for the assignment! Do not use markdown.
Begin with a code comment that indicates the algorithm name and idea.
If you indicate a bug, always use a comment with the keyword BUG
Generate a {lang} implementation (with bugs) of: {n} ({desc})

Remember to remove the BUG comments! If you pick some slightly less common programming languages (by quantity of available code, say Go or Rust) you have higher chances that this gets into the training data.

If many of us do this, we can feed GenAI its own garbage. If we generate thousands of bad code examples, this will poison their training data, and may eventually lead to an effect known as “model collapse”.

In the long run, we need to get back to an internet for people, not an internet for bots. Some kind of “internet 2.0”, but I do not have a clear vision on how to keep AI out – if AI can train on it, they will. And someone will copy and paste the AI generated crap back into whatever system we built. Hence I don’t think technology is the answer here, but human networks of trust.

13 February, 2026 10:29AM by Erich Schubert

February 12, 2026

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSpdlog 0.0.27 on CRAN: C++20 Accommodations

Version 0.0.27 of RcppSpdlog arrived on CRAN moments ago, and will be uploaded to Debian and built for r2u shortly. The (nice) documentation site will be refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

Brian Ripley has now turned C++20 on as a default for R-devel (aka R 4.6.0 ‘to be’), and this turned up misbehavior in packages using RcppSpdlog such as our spdl wrapper (offering a nicer interface from both R and C++) when relying on std::format. So for now, we turned this off and remain with fmt::format from the fmt library while we investigate further.

The NEWS entry for this release follows.

Changes inRcppSpdlog version 0.0.27 (2026-02-11)

  • Under C++20 or later, keep relying on fmt::format until issues experienced using std::format can be identified and resolved

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

12 February, 2026 01:59PM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: cross building, rebootstrap updates, Refresh of the patch tagging guidelines and more! (by Anupa Ann Joseph)

Debian Contributions: 2026-01

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

cross building, by Helmut Grohne

In version 1.10.1, Meson merged a patch to make it call the correct g-ir-scanner by default thanks to Eli Schwarz. This problem affected more than 130 source packages. Helmut retried building them all and filed 69 patches as a result. A significant portion of those packages require another Meson change to call the correct vapigen. Another notable change is converting gnu-efi to multiarch, which ended up requiring changes to a number of other packages. Since Aurelien dropped the libcrypt-dev dependency from libc6-dev, this transition now is mostly complete and has resulted in most of the Perl ecosystem correctly expressing perl-xs-dev dependencies needed for cross building. It is these infrastructure changes affecting several client packages that this work targets. As a result of this continued work, about 66% of Debian’s source packages now have satisfiable cross Build-Depends in unstable and about 10000 (55%) actually can be cross built. There are now more than 500 open bug reports affecting more than 2000 packages, most of which carry patches.

rebootstrap, by Helmut Grohne

Maintaining architecture cross-bootstrap requires continued effort for adapting to archive changes such as glib2.0 dropping a build profile or an e2fsprogs FTBFS. Beyond those generic problems, architecture-specific problems with e.g. musl-linux-any or sparc may arise. While all these changes move things forward on the surface, the bootstrap tooling has become a growing pile of patches. Helmut managed to upstream two changes to glibc for reducing its Build-Depends in the stage2 build profile and thanks Aurelien Jarno.

Refresh of the patch tagging guidelines, by Raphaël Hertzog

Debian Enhancement Proposal #3 (DEP-3) is named “Patch Tagging Guidelines” and standardizes meta-information that Debian contributors can put in patches included in Debian source packages. With the feedback received over the years, and with the change in the package management landscape, the need to refresh those guidelines became evident. As the initial driver of that DEP, I spent a good day reviewing all the feedback (that I kept in a folder) and producing a new version of the document. The changes aim to give more weight to the syntax that is compatible with git format-patch’s output, and also to clarify the expected uses and meanings of a couple of fields, including some algorithm that parsers should follow to define the state of the patch. After the announcement of the new draft on debian-devel, the revised DEP-3 received a significant number of comments that I still have to process.

Miscellaneous contributions

  • Helmut uploaded debvm making it work with unstable as a target distribution again.
  • Helmut modernized the code base backing dedup.debian.net, significantly expanding the support for type checking.
  • Helmut fixed the multiarch hinter once more given feedback from Fabian Grünbichler.
  • Helmut worked on migrating the rocblas package to forky.
  • Raphaël fixed RC bug #1111812 in publican and did some maintenance for tracker.debian.org.
  • Carles added support in the festival Debian package for systemd socket activation and systemd service and socket units. Adapted the patch for upstream and created a merge request (also fixed a MacOS X building system error while working on it). Updated the Orca Wiki documentation regarding festival. Discussed a 2007 bug/feature in festival which allowed having a local shell and that the new systemd socket activation has the same code path.
  • Carles, using po-debconf-manager, worked on Catalan translations: 7 reviewed and sent; 5 follow ups, 5 deleted packages.
  • Carles made some po-debconf-manager changes: now it attaches the translation file on follow ups, and fixed bullseye compatibility issues.
  • Carles reviewed a new Catalan apt translation.
  • Carles investigated and reported a lxhotkey bug and sent a patch for the “abcde” package.
  • Carles made minor updates to the Debian Wiki on different pages (lxde for dead keys, Ripping with abcde troubleshooting, VirtualBox troubleshooting).
  • Stefano renamed build-details.json in Python 3.14 to fix multiarch coinstallability.
  • Stefano audited the tooling and ignore lists for checking the contents of the python3.X-minimal packages, finding and fixing some issues in the process.
  • Stefano made a few uploads of python3-defaults and dh-python in support of Python 3.14-as-default in Ubuntu. Also investigated the risk of ignoring byte-compilation failures by default, and started down the road of implementing this.
  • Stefano did some sysadmin work on debian.social infrastructure.
  • Stefano and Santiago worked on preparations for DebConf 26, especially to help the local team on opening the registration, and reviewing the budget to be presented for approval.
  • Stefano uploaded routine updates of python-virtualenv and python-flexmock.
  • Antonio collaborated with DSA on enabling a new proxy for salsa to prevent scrapers from taking the service down.
  • Antonio did miscellaneous salsa administrative tasks.
  • Antonio fixed a few Ruby packages towards the Ruby 3.4 transition.
  • Antonio started work on planned improvements to the DebConf registration system.
  • Santiago prepared unstable updates for the latest upstream versions of knot-dns and knot-resolver, the authoritative DNS server and DNS resolver software developed by CZ.NIC. It is worth highlighting that, given the separation of functionality compared to other implementations, knot-dns and knot-resolver are also less complex software, which results in advantages in terms of security: only three CVEs have been reported for knot-dns since 2011.
  • Santiago made some routine reviews of merge requests proposed for the Salsa CI pipeline, e.g. a proposal to fix how sbuild chooses the chroot when building a package for experimental.
  • Colin fixed lots of Python packages to handle Python 3.14 and to avoid using the deprecated pkg_resources module.
  • Colin added forky support to the images used in Salsa CI pipelines.
  • Colin began working on getting a release candidate of groff 1.24.0 (the first upstream release since mid-2023, so a very large set of changes) into experimental.
  • Lucas kept working on the preparation for the Ruby 3.4 transition. Some packages fixed (support build against Ruby 3.3 and 3.4): ruby-rbpdf, jekyll, origami-pdf, ruby-kdl, ruby-twitter, ruby-twitter-text, ruby-globalid.
  • Lucas supported some potential mentors in the Google Summer of Code 26 program to submit their projects.
  • Anupa worked on the point release announcements for Debian 12.13 and 13.3 from the Debian publicity team side.
  • Anupa attended the publicity team meeting to discuss the team activities and to plan an online sprint in February.
  • Anupa attended meetings with the Debian India team to plan and coordinate the MiniDebConf Kanpur and sent out related Micronews.
  • Emilio coordinated various transitions and helped get rid of llvm-toolchain-17 from sid.

12 February, 2026 12:00AM by Anupa Ann Joseph

February 10, 2026

Writing a new worker task for Debusine (by Carles Pina i Estany)

Debusine is a tool designed for Debian developers and Operating System developers in general. You can try out Debusine on debusine.debian.net, and follow its development on salsa.debian.org.

This post describes how to write a new worker task for Debusine. It can be used to add tasks to a self-hosted Debusine instance, or to submit new tasks to the Debusine project to add new capabilities to Debusine.

Tasks are the lower-level pieces of Debusine workflows. Examples of tasks are Sbuild, Lintian, Debdiff (see the available tasks).

This post will document the steps to write a new basic worker task. The example will add a worker task that runs reprotest and creates an artifact of the new type ReprotestArtifact with the reprotest log.

Tasks are usually used by workflows. Workflows solve high-level goals by creating and orchestrating different tasks (e.g. a Sbuild workflow would create different Sbuild tasks, one for each architecture).

Overview of tasks

A task usually does the following:

  • It receives structured data defining its input artifacts and configuration
  • Input artifacts are downloaded
  • A process is run by the worker (e.g. lintian, debdiff, etc.). In this blog post, it will run reprotest
  • The output (files, logs, exit code, etc.) is analyzed, artifacts and relations might be generated, and the work request is marked as completed, either with Success or Failure

If you want to follow the tutorial and add the Reprotest task, your Debusine development instance should have at least one worker, one user, a debusine client set up, and permissions for the client to create tasks. All of this can be set up following the steps in the Contribute section of the documentation.

This blog post shows a functional Reprotest task. This task is not currently part of Debusine. The Reprotest task implementation is simplified (no error handling, unit tests, specific view, docs, some shortcuts in the environment preparation, etc.). At some point, in Debusine, we might add a debrebuild task which is based on buildinfo files and uses snapshot.debian.org to recreate the binary packages.

Defining the inputs of the task

The input of the reprotest task will be a source artifact (a Debian source package). We model the input with pydantic in debusine/tasks/models.py:

class ReprotestData(BaseTaskDataWithExecutor):
    """Data for Reprotest task."""

    source_artifact: LookupSingle


class ReprotestDynamicData(BaseDynamicTaskDataWithExecutor):
    """Reprotest dynamic data."""

    source_artifact_id: int | None = None

The ReprotestData is what the user will input. A LookupSingle is a lookup that resolves to a single artifact.

We would also have configuration for the desired variations to test, but we have left that out of this example for simplicity. Configuring variations is left as an exercise for the reader; a sketch of what such a field could look like follows.
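As an illustration only (this field is not part of the example task and its name is made up), the task data could grow an optional variations field:

class ReprotestData(BaseTaskDataWithExecutor):
    """Data for Reprotest task."""

    source_artifact: LookupSingle
    # Hypothetical: reprotest variations to disable, e.g. ["-time", "-user_group"];
    # validation and sensible defaults are left out of this sketch.
    variations: list[str] | None = None

_cmdline() could then assemble the --vary= argument from self.data.variations instead of hard-coding it.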

Since ReprotestData is a subclass of BaseTaskDataWithExecutor it also contains environment, where the user can specify in which environment the task will run. The environment is an artifact with a Debian image.

The ReprotestDynamicData holds the resolution of all lookups. These can be seen in the “Internals” tab of the work request view.

Add the new Reprotest artifact data class

In order for the reprotest task to create a new Artifact of the type DebianReprotest with the log and output metadata, add the new category to ArtifactCategory in debusine/artifacts/models.py:

REPROTEST = "debian:reprotest"

In the same file add the DebianReprotest class:

class DebianReprotest(ArtifactData):
    """Data for debian:reprotest artifacts."""

    reproducible: bool | None = None

    def get_label(self) -> str:
        """Return a short human-readable label for the artifact."""
        return "reprotest analysis"

It could also include the package name or version.

In order to have the category listed in the work request output artifacts table, edit the file debusine/db/models/artifacts.py: in ARTIFACT_CATEGORY_ICON_NAMES add an entry ArtifactCategory.REPROTEST: "folder", and in ARTIFACT_CATEGORY_SHORT_NAMES add ArtifactCategory.REPROTEST: "reprotest" (see the sketch below).
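For clarity, the two additions look roughly like this; the surrounding entries are elided and the exact contents of those dicts in Debusine may differ:

ARTIFACT_CATEGORY_ICON_NAMES = {
    # ... existing entries ...
    ArtifactCategory.REPROTEST: "folder",
}

ARTIFACT_CATEGORY_SHORT_NAMES = {
    # ... existing entries ...
    ArtifactCategory.REPROTEST: "reprotest",
}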

Create the new Task class

In debusine/tasks/ create a new file reprotest.py.

reprotest.py
# Copyright © The Debusine Developers
# See the AUTHORS file at the top-level directory of this distribution
#
# This file is part of Debusine. It is subject to the license terms
# in the LICENSE file found in the top-level directory of this
# distribution. No part of Debusine, including this file, may be copied,
# modified, propagated, or distributed except according to the terms
# contained in the LICENSE file.

"""Task to use reprotest in debusine."""

from pathlib import Path
from typing import Any

from debusine import utils
from debusine.artifacts.local_artifact import ReprotestArtifact
from debusine.artifacts.models import (
    ArtifactCategory,
    CollectionCategory,
    DebianSourcePackage,
    DebianUpload,
    WorkRequestResults,
    get_source_package_name,
    get_source_package_version,
)
from debusine.client.models import RelationType
from debusine.tasks import BaseTaskWithExecutor, RunCommandTask
from debusine.tasks.models import ReprotestData, ReprotestDynamicData
from debusine.tasks.server import TaskDatabaseInterface


class Reprotest(
    RunCommandTask[ReprotestData, ReprotestDynamicData],
    BaseTaskWithExecutor[ReprotestData, ReprotestDynamicData],
):
    """Task to use reprotest in debusine."""

    TASK_VERSION = 1
    CAPTURE_OUTPUT_FILENAME = "reprotest.log"

    def __init__(
        self,
        task_data: dict[str, Any],
        dynamic_task_data: dict[str, Any] | None = None,
    ) -> None:
        """Initialize object."""
        super().__init__(task_data, dynamic_task_data)
        self._reprotest_target: Path | None = None

    def build_dynamic_data(
        self, task_database: TaskDatabaseInterface
    ) -> ReprotestDynamicData:
        """Compute and return ReprotestDynamicData."""
        input_source_artifact = task_database.lookup_single_artifact(
            self.data.source_artifact
        )
        assert input_source_artifact is not None
        self.ensure_artifact_categories(
            configuration_key="input.source_artifact",
            category=input_source_artifact.category,
            expected=(
                ArtifactCategory.SOURCE_PACKAGE,
                ArtifactCategory.UPLOAD,
            ),
        )
        assert isinstance(
            input_source_artifact.data, (DebianSourcePackage, DebianUpload)
        )

        subject = get_source_package_name(input_source_artifact.data)
        version = get_source_package_version(input_source_artifact.data)

        assert self.data.environment is not None
        environment = self.get_environment(
            task_database,
            self.data.environment,
            default_category=CollectionCategory.ENVIRONMENTS,
        )

        return ReprotestDynamicData(
            source_artifact_id=input_source_artifact.id,
            subject=subject,
            parameter_summary=f"{subject}_{version}",
            environment_id=environment.id,
        )

    def get_input_artifacts_ids(self) -> list[int]:
        """Return the list of input artifact IDs used by this task."""
        if not self.dynamic_data:
            return []
        return [
            self.dynamic_data.source_artifact_id,
            self.dynamic_data.environment_id,
        ]

    def fetch_input(self, destination: Path) -> bool:
        """Download the required artifacts."""
        assert self.dynamic_data
        artifact_id = self.dynamic_data.source_artifact_id
        assert artifact_id is not None
        self.fetch_artifact(artifact_id, destination)
        return True

    def configure_for_execution(self, download_directory: Path) -> bool:
        """
        Find a .dsc in download_directory.

        Install reprotest and other utilities used in _cmdline.
        Set self._reprotest_target to it.

        :param download_directory: where to search the files
        :return: True if valid files were found
        """
        self._prepare_executor_instance()

        if self.executor_instance is None:
            raise AssertionError("self.executor_instance cannot be None")

        self.run_executor_command(
            ["apt-get", "update"],
            log_filename="install.log",
            run_as_root=True,
            check=True,
        )
        self.run_executor_command(
            [
                "apt-get",
                "--yes",
                "--no-install-recommends",
                "install",
                "reprotest",
                "dpkg-dev",
                "devscripts",
                "equivs",
                "sudo",
            ],
            log_filename="install.log",
            run_as_root=True,
        )

        self._reprotest_target = utils.find_file_suffixes(
            download_directory, [".dsc"]
        )

        return True

    def _cmdline(self) -> list[str]:
        """
        Build the reprotest command line.

        Use configuration of self.data and self._reprotest_target.
        """
        target = self._reprotest_target
        assert target is not None

        cmd = [
            "bash",
            "-c",
            f"TMPDIR=/tmp ; cd /tmp ; dpkg-source -x {target} package/; "
            "cd package/ ; mk-build-deps ; apt-get install --yes ./*.deb ; "
            "rm *.deb ; "
            "reprotest --vary=-time,-user_group,-fileordering,-domain_host .",
        ]

        return cmd

    @staticmethod
    def _cmdline_as_root() -> bool:
        r"""apt-get install --yes ./\*.deb must be run as root."""
        return True

    def task_result(
        self,
        returncode: int | None,
        execute_directory: Path,  # noqa: U100
    ) -> WorkRequestResults:
        """
        Evaluate task output and return success.

        For a successful run of reprotest:
        - must have the output file
        - exit code is 0

        :return: WorkRequestResults.SUCCESS or WorkRequestResults.FAILURE.
        """
        reprotest_file = execute_directory / self.CAPTURE_OUTPUT_FILENAME

        if reprotest_file.exists() and returncode == 0:
            return WorkRequestResults.SUCCESS

        return WorkRequestResults.FAILURE

    def upload_artifacts(
        self, exec_directory: Path, *, execution_result: WorkRequestResults
    ) -> None:
        """Upload the ReprotestArtifact with the files and relationships."""
        if not self.debusine:
            raise AssertionError("self.debusine not set")

        assert self.dynamic_data is not None
        assert self.dynamic_data.parameter_summary is not None

        reprotest_artifact = ReprotestArtifact.create(
            reprotest_output=exec_directory / self.CAPTURE_OUTPUT_FILENAME,
            reproducible=execution_result == WorkRequestResults.SUCCESS,
            package=self.dynamic_data.parameter_summary,
        )

        uploaded = self.debusine.upload_artifact(
            reprotest_artifact,
            workspace=self.workspace_name,
            work_request=self.work_request_id,
        )

        assert self.dynamic_data is not None
        assert self.dynamic_data.source_artifact_id is not None
        self.debusine.relation_create(
            uploaded.id,
            self.dynamic_data.source_artifact_id,
            RelationType.RELATES_TO,
        )

Below are the main methods with some basic explanation.

In order for Debusine to discover the task, add "Reprotest" in the file debusine/tasks/__init__.py to the __all__ list.

Let’s explain the different methods of the Reprotest class:

build_dynamic_data method

The worker has no access to Debusine’s database. Lookups are all resolved before the task gets dispatched to a worker, so all it has to do is download the specified input artifacts.

The build_dynamic_data method looks up the artifact, asserts that it is a valid category, extracts the package name and version, and gets the environment in which the task will be executed.

The environment is needed to run the task (reprotest will run in a container using unshare, incus…).

def build_dynamic_data(
    self, task_database: TaskDatabaseInterface
) -> ReprotestDynamicData:
    """Compute and return ReprotestDynamicData."""
    input_source_artifact = task_database.lookup_single_artifact(
        self.data.source_artifact
    )
    assert input_source_artifact is not None
    self.ensure_artifact_categories(
        configuration_key="input.source_artifact",
        category=input_source_artifact.category,
        expected=(
            ArtifactCategory.SOURCE_PACKAGE,
            ArtifactCategory.UPLOAD,
        ),
    )
    assert isinstance(
        input_source_artifact.data, (DebianSourcePackage, DebianUpload)
    )

    subject = get_source_package_name(input_source_artifact.data)
    version = get_source_package_version(input_source_artifact.data)

    assert self.data.environment is not None
    environment = self.get_environment(
        task_database,
        self.data.environment,
        default_category=CollectionCategory.ENVIRONMENTS,
    )

    return ReprotestDynamicData(
        source_artifact_id=input_source_artifact.id,
        subject=subject,
        parameter_summary=f"{subject}_{version}",
        environment_id=environment.id,
    )

get_input_artifacts_ids method

Used to list the task’s input artifacts in the web UI.

def get_input_artifacts_ids(self) -> list[int]:
    """Return the list of input artifact IDs used by this task."""
    if not self.dynamic_data:
        return []
    assert self.dynamic_data.source_artifact_id is not None
    return [self.dynamic_data.source_artifact_id]

fetch_input method

Download the required artifacts on the worker.

def fetch_input(self, destination: Path) -> bool:
    """Download the required artifacts."""
    assert self.dynamic_data
    artifact_id = self.dynamic_data.source_artifact_id
    assert artifact_id is not None
    self.fetch_artifact(artifact_id, destination)
    return True

configure_for_execution method

Install the packages needed by the task and set _reprotest_target, which is used to build the task’s command line.

def configure_for_execution(self, download_directory: Path) -> bool:
    """
    Find a .dsc in download_directory.

    Install reprotest and other utilities used in _cmdline.
    Set self._reprotest_target to it.

    :param download_directory: where to search the files
    :return: True if valid files were found
    """
    self._prepare_executor_instance()

    if self.executor_instance is None:
        raise AssertionError("self.executor_instance cannot be None")

    self.run_executor_command(
        ["apt-get", "update"],
        log_filename="install.log",
        run_as_root=True,
        check=True,
    )
    self.run_executor_command(
        [
            "apt-get",
            "--yes",
            "--no-install-recommends",
            "install",
            "reprotest",
            "dpkg-dev",
            "devscripts",
            "equivs",
            "sudo",
        ],
        log_filename="install.log",
        run_as_root=True,
    )

    self._reprotest_target = utils.find_file_suffixes(
        download_directory, [".dsc"]
    )

    return True

_cmdline method

Return the command line to run the task.

In this case, and to keep the example simple, we will run reprotest directly in the worker’s executor VM/container, without giving it an isolated virtual server.

So, this command installs the build dependencies required by the package (so reprotest can build it) and runs reprotest itself.

def _cmdline(self) -> list[str]:
    """
    Build the reprotest command line.

    Use configuration of self.data and self._reprotest_target.
    """
    target = self._reprotest_target
    assert target is not None

    cmd = [
        "bash",
        "-c",
        f"TMPDIR=/tmp ; cd /tmp ; dpkg-source -x {target} package/; "
        "cd package/ ; mk-build-deps ; apt-get install --yes ./*.deb ; "
        "rm *.deb ; "
        "reprotest --vary=-time,-user_group,-fileordering,-domain_host .",
    ]

    return cmd

Some reprotest variations are disabled. This is to keep the example simple with the set of packages to install and reprotest features.

_cmdline_as_root method

Since packages need to be installed during the execution, run it as root (in the container):

@staticmethod
def _cmdline_as_root() -> bool:
    r"""apt-get install --yes ./\*.deb must be run as root."""
    return True

task_result method

Task succeeded if a log is generated and the return code is 0.

def task_result(
    self,
    returncode: int | None,
    execute_directory: Path,  # noqa: U100
) -> WorkRequestResults:
    """
    Evaluate task output and return success.

    For a successful run of reprotest:
    - must have the output file
    - exit code is 0

    :return: WorkRequestResults.SUCCESS or WorkRequestResults.FAILURE.
    """
    reprotest_file = execute_directory / self.CAPTURE_OUTPUT_FILENAME

    if reprotest_file.exists() and returncode == 0:
        return WorkRequestResults.SUCCESS

    return WorkRequestResults.FAILURE

upload_artifacts method

Create the ReprotestArtifact with the log and the reproducible boolean, upload it, and then add a relation between the ReprotestArtifact and the source package:

def upload_artifacts(
    self, exec_directory: Path, *, execution_result: WorkRequestResults
) -> None:
    """Upload the ReprotestArtifact with the files and relationships."""
    if not self.debusine:
        raise AssertionError("self.debusine not set")

    assert self.dynamic_data is not None
    assert self.dynamic_data.parameter_summary is not None

    reprotest_artifact = ReprotestArtifact.create(
        reprotest_output=exec_directory / self.CAPTURE_OUTPUT_FILENAME,
        reproducible=execution_result == WorkRequestResults.SUCCESS,
        package=self.dynamic_data.parameter_summary,
    )

    uploaded = self.debusine.upload_artifact(
        reprotest_artifact,
        workspace=self.workspace_name,
        work_request=self.work_request_id,
    )

    assert self.dynamic_data is not None
    assert self.dynamic_data.source_artifact_id is not None
    self.debusine.relation_create(
        uploaded.id,
        self.dynamic_data.source_artifact_id,
        RelationType.RELATES_TO,
    )

Execution example

To run this task in a local Debusine (see the steps to have it ready with an environment, permissions and users created) you can do:

$ python3 -m debusine.client artifact import-debian -w System http://deb.debian.org/debian/pool/main/h/hello/hello_2.10-5.dsc

(get the artifact ID from the output of that command)

The artifact can be seen in http://$DEBUSINE/debusine/System/artifact/$ARTIFACTID/.

Then create a reprotest.yaml:

$ cat <<EOF > reprotest.yaml
source_artifact: $ARTIFACT_ID
environment: "debian/match:codename=bookworm"
EOF

Instead of debian/match:codename=bookworm it could use the artifact ID.

Finally, create the work request to run the task:

$ python3 -m debusine.client create-work-request -w System reprotest --data reprotest.yaml

Using the Debusine web UI you can see the work request, which should go to Running status, then Completed with Success or Failure (depending on whether reprotest could reproduce the package or not). Clicking on the Output tab would show an artifact of type debian:reprotest with one file: the log. In the Metadata tab of the artifact it has Data: the package name and reproducible (true or false).

What is left to do?

This was a simple example of creating a task. Other things that could be done:

  • unit tests (a minimal sketch follows this list)
  • documentation
  • configurable variations
  • running reprotest directly on the worker host, using the executor environment as a reprotest “virtual server”
  • in this specific example, the command line might be doing too many things that could maybe be done by other parts of the task, such as prepare_environment.
  • integrate it in a workflow so it’s easier to use (e.g. part of QaWorkflow)
  • extract more from the log than just pass/fail
  • display the output in a more useful way (implement an artifact specializedview)
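As a starting point for the first item, here is a minimal sketch of what a unit test for task_result could look like. It is not taken from Debusine’s test suite; it only relies on the methods shown in this post and pytest’s tmp_path fixture, and it assumes the constructor accepts the same task data used in the execution example above:

from pathlib import Path

from debusine.artifacts.models import WorkRequestResults
from debusine.tasks.reprotest import Reprotest


def _make_task() -> Reprotest:
    # Task data mirroring the reprotest.yaml from the execution example.
    return Reprotest(task_data={"source_artifact": 1, "environment": 1})


def test_task_result_success(tmp_path: Path) -> None:
    """SUCCESS when the reprotest log exists and the exit code is 0."""
    task = _make_task()
    (tmp_path / task.CAPTURE_OUTPUT_FILENAME).write_text("reprotest output")
    assert task.task_result(0, tmp_path) == WorkRequestResults.SUCCESS


def test_task_result_failure(tmp_path: Path) -> None:
    """FAILURE when reprotest exits with a non-zero code."""
    task = _make_task()
    (tmp_path / task.CAPTURE_OUTPUT_FILENAME).write_text("reprotest output")
    assert task.task_result(1, tmp_path) == WorkRequestResults.FAILURE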

10 February, 2026 12:00AM by Carles Pina i Estany

February 08, 2026

hackergotchi for Colin Watson

Colin Watson

Free software activity in January 2026

About 80% of my Debian contributions this month were sponsored by Freexian, as well as one direct donation via GitHub Sponsors (thanks!). If you appreciate this sort of work and are at a company that uses Debian, have a look to see whether you can pay for any of Freexian‘s services; as well as the direct benefits, that revenue stream helps to keep Debian development sustainable for me and several other lovely people.

You can also support my work directly via Liberapay or GitHub Sponsors.

Python packaging

New upstream versions:

Fixes for Python 3.14:

Fixes for pytest 9:

Porting away from the deprecated pkg_resources:

Other build/test failures:

I investigated several more build failures and suggested removing the packages in question:

Other bugs:

Other bits and pieces

Alejandro Colomar reported that man(1) ignored the MANWIDTH environment variable in some circumstances. I investigated this and fixed it upstream.

I contributed an ubuntu-dev-tools patch to stop recommending sudo.

I added forky support to the images used in Salsa CI pipelines.

I began working on getting a release candidate of groff 1.24.0 into experimental, though haven’t finished that yet.

I worked on some lower-priority security updates for OpenSSH.

Code reviews

08 February, 2026 07:30PM by Colin Watson

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

chronometre: A new package (pair) demo for R and Python

Both R and Python make it reasonably easy to work with compiled extensions. But how to access objects in one environment from the other and share state or (non-trivial) objects remains trickier. Recently (and while r-forge was ‘resting’, so we opened GitHub Discussions) a question was asked concerning R and Python object pointer exchange.

This led to a pretty decent discussion including arrow interchange demos (pretty ideal if dealing with data.frame-alike objects), but once the focus is on more ‘library-specific’ objects from a given (C or C++, say) library it is less clear what to do, or how involved it may get.

R has external pointers, and these make it feasible to instantiate the same object in Python. To demonstrate, I created a pair of (minimal) packages wrapping a lovely (small) class from the excellent spdlog library by Gabi Melman, and more specifically in an adapted-for-R version (to avoid some R CMD check nags) in my RcppSpdlog package. It is essentially a nicer/fancier C++ version of the tic() and toc() timing scheme. When an object is instantiated, it ‘starts the clock’ and when we access it later it prints the time elapsed in microsecond resolution. In Modern C++ this takes little more than keeping an internal chrono object.

Which makes for a nice, small, yet specific object to pass to Python. So the R side of the package pair instantiates such an object, and accesses its address. For different reasons, sending a ‘raw’ pointer across does not work so well, but a string with the address printed works fabulously (and is a paradigm used around other packages, so we did not invent this). Over on the Python side of the package pair, we then take this string representation and pass it to a little bit of pybind11 code to instantiate a new object. This can of course also expose functionality such as the ‘show time elapsed’ feature, either formatted or just numerically, of interest here.

And that is all that there is! Now this can be done from R as well thanks to reticulate, as the demo() (also shown on the package README.md) shows:

> library(chronometre)
> demo("chronometre", ask=FALSE)

	demo(chronometre)
	---- ~~~~~~~~~~~

> #!/usr/bin/env r
>
> stopifnot("Demo requires 'reticulate'" = requireNamespace("reticulate", quietly=TRUE))
> stopifnot("Demo requires 'RcppSpdlog'" = requireNamespace("RcppSpdlog", quietly=TRUE))
> stopifnot("Demo requires 'xptr'" = requireNamespace("xptr", quietly=TRUE))
> library(reticulate)
> ## reticulate and Python in general these days really want a venv so we will use one,
> ## the default value is a location used locally; if needed create one
> ## check for existing virtualenv to use, or else set one up
> venvdir <- Sys.getenv("CHRONOMETRE_VENV", "/opt/venv/chronometre")
> if (dir.exists(venvdir)) {
+>     use_virtualenv(venvdir, required = TRUE)
+> } else {
+>     ## create a virtual environment, but make it temporary
+>     Sys.setenv(RETICULATE_VIRTUALENV_ROOT=tempdir())
+>     virtualenv_create("r-reticulate-env")
+>     virtualenv_install("r-reticulate-env", packages = c("chronometre"))
+>     use_virtualenv("r-reticulate-env", required = TRUE)
+> }
> sw <- RcppSpdlog::get_stopwatch()               # we use a C++ struct as example
> Sys.sleep(0.5)                                  # imagine doing some code here
> print(sw)                                       # stopwatch shows elapsed time
0.501220
> xptr::is_xptr(sw)                               # this is an external pointer in R
[1] TRUE
> xptr::xptr_address(sw)                          # get address, format is "0x...."
[1] "0x58adb5918510"
> sw2 <- xptr::new_xptr(xptr::xptr_address(sw))   # cloned (!!) but unclassed
> attr(sw2, "class") <- c("stopwatch", "externalptr")  # class it .. and then use it!
> print(sw2)                                      # `xptr` allows us close and use
0.501597
> sw3 <- ch$Stopwatch( xptr::xptr_address(sw) )   # new Python object via string ctor
> print(sw3$elapsed())                            # shows output via Python I/O
datetime.timedelta(microseconds=502013)
> cat(sw3$count(), "\n")                          # shows double
0.502657
> print(sw)                                       # object still works in R
0.502721
>

The same object, instantiated in R, is used in Python and thereafter again in R. While this object here is minimal in features, the concept of passing a pointer is universal. We could use it for any interesting object that R can access and Python too can instantiate. Obviously, there be dragons as we pass pointers, so one may want to ascertain that headers from corresponding compatible versions are used etc, but the principle is unaffected and should just work.

Both parts of this pair of packages are now at the corresponding repositories: PyPI and CRAN. As I commonly do here on package (change) announcements, I include the (minimal so far) set of high-level changes for the R package.

Changes in version 0.0.2 (2026-02-05)

  • Removed replaced unconditional virtualenv use in demo given preceding conditional block

  • Updated README.md with badges and an updated demo

Changes in version 0.0.1 (2026-01-25)

  • Initial version and CRAN upload

Questions, suggestions, bug reports, … are welcome at either the (now awoken from the R-Forge slumber) Rcpp mailing list or the newer Rcpp Discussions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

08 February, 2026 05:11PM

Vincent Bernat

Fragments of an adolescent web

I have unearthed a few old articles typed during my adolescence, between 1996 and 1998. Unremarkable at the time, these pages now compose, three decades later, the chronicle of a vanished era.1

The word “blog” does not exist yet. Wikipedia remains to come. Google has not been born. AltaVista reigns over searches, while already struggling to embrace the nascent immensity of the web2. To meet someone, you had to agree in advance and prepare your route on paper maps. 🗺️

The web is taking off. The CSS specification has just emerged, HTML tables still serve for page layout. Cookies and advertising banners are making their appearance. Pages are adorned with music and videos, forcing browsers to arm themselves with plugins. Netscape Navigator sits on 86% of the territory, but Windows 95 now bundles Internet Explorer to quickly catch up. Facing this offensive, Netscape open sources its browser.

France falls behind. Outside universities, Internet access remains expensive and laborious. Minitel still reigns, offering phone directory, train tickets, remote shopping. This was not yet possible with the Internet: buying a CD online was a pipe dream. Encryption suffers from inappropriate regulation: the DES algorithm is capped at 40 bits and cracked in a few seconds.

These pages bear the trace of the web’s adolescence. Thirty years have passed. The same battles continue: data selling, advertising, monopolies.


  1. Most articles linked here are not translated from French to English. ↩︎

  2. I recently noticed that Google no longer fully indexes my blog. For example, it is no longer possible to find the article on lanĉo. I assume this is a consequence of the explosion of AI-generated content or a change in priorities for Google. ↩︎

08 February, 2026 02:51PM by Vincent Bernat

Thorsten Alteholz

My Debian Activities in January 2026

Debian LTS/ELTS

This was my hundred-thirty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian (as the LTS and ELTS teams have been merged now, there is only one paragraph left for both activities).

During my allocated time I uploaded or worked on:

  • [DLA 4449-1] zvbi security update to fix five CVEs related to uninitialized pointers and integer overflows.
  • [DLA 4450-1] taglib security update to fix one CVE related to a segmentation violation.
  • [DLA 4451-1] shapelib security update to fix one CVE related to a double free.
  • [DLA 4454-1] libuev security update to fix one CVE related to a buffer overrun.
  • [ELA-1620-1] zvbi security update to fix five CVEs in Buster and Stretch related to uninitialized pointers and integer overflows.
  • [ELA-1621-1] taglib security update to fix one CVE in Buster and Stretch related to a segmentation violation.
  • [#1126167] bookworm-pu bug for zvbi to fix five CVEs in Bookworm.
  • [#1126273] bookworm-pu bug for taglib to fix one CVE in Bookworm.
  • [#1126370] bookworm-pu bug for libuev to fix one CVE in Bookworm.

I also attended the monthly LTS/ELTS meeting. While working on updates, I stumbled upon packages whose CVEs have been postponed for a long time even though their CVSS score was rather high. I wonder whether one should pay more attention to postponed issues; otherwise one could have already marked them as ignored.

Debian Printing

Unfortunately I didn’t find any time to work on this topic.

Debian Lomiri

This month I worked on unifying packaging on Debian and Ubuntu. This makes it easier to work on those packages independent of the used platform.

This work is generously funded by Fre(i)e Software GmbH!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

Debian IoT

Unfortunately I didn’t find any time to work on this topic.

Debian Mobcom

Unfortunately I didn’t find any time to work on this topic.

misc

This month I uploaded a new upstream version or a bugfix version of:

Unfortunately this month I was distracted from my normal Debian work by other unpleasant things, so that the paragraphs above are mostly empty. I now have to think about how much of my spare time I am able to dedicate to Debian in the future.

08 February, 2026 01:25PM by alteholz

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal Subway Foot Traffic Data, 2025 edition

Another year of data from Société de Transport de Montréal, Montreal's transit agency!

A few highlights this year:

  1. Although the Saint-Michel station closed for emergency repairs in November 2024, traffic never bounced back to its pre-closure levels and is still stuck somewhere around 2022 Q2 levels. I wonder if this could be caused by the roadwork on Jean-Talon for the new Blue Line stations making it harder for folks in Montreal-Nord to reach the station by bus.

  2. The effects of the opening of the Royalmount shopping center have had a durable impact on the traffic at the De la Savane station. I reported on this last year, but it seems this wasn't just a fad.

  3. With the completion of the Deux-Montagnes branch of the Réseau express métropolitain (REM, a light-rail, above-the-surface transit network still under construction), the transfer stations to the Montreal subway have seen major traffic increases. The Édouard-Montpetit station has nearly reached its previous all-time record of 2015 and the McGill station has recovered from the general slump all the other stations have had in 2025.

  4. The Assomption station, which used to have one of the lowest numbers of riders on the subway network, has had tremendous growth in the past few years. This is mostly explained by the many high-rise projects that were built around the station since the end of the COVID-19 pandemic.

  5. Although still affected by a very high seasonality, the Jean-Drapeau station broke its previous record of 2019, a testament to the continued attraction power of the various summer festivals taking place on the Sainte-Hélène and Notre-Dame islands.

More generally, it seems the Montreal subway has had a pretty bad year. Traffic had been slowly climbing back since the COVID-19 pandemic, but this is the first year since 2020 such a sharp decline can be witnessed. Even major stations like Jean-Talon or Lionel-Groulx are on a downward trend and it is pretty worrisome.

As for causes, a few things come to mind. First of all, as the number of Montrealers commuting to work by bike continues to rise1, a modal shift from public transit to active mobility is to be expected. As local experts put it, this is not uncommon and has been seen in other cities before.

Another important factor that certainly turned people away from the subway this year has been the impacts of the continued housing crisis in Montreal. As more and more people get kicked out of their apartments, many have been seeking refuge in the subway stations to find shelter.

Sadly, this also brought an unprecedented wave of incivilities. As riders' sense of security sharply decreased, the STM eventually resorted to banning unhoused people from sheltering in the subway. This decision did bring back some peace to the network, but one can posit damage had already been done and many casual riders are still avoiding the subway for this reason.

Finally, the weeks-long STM workers' strike in Q4 had an important impact on general traffic, as it severely reduced the opening hours of the subway. As for the previous item, once people find alternative ways to get around, it's always harder to bring them back.

Hopefully, my 2026 report will be a more cheerful one...

By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.

Licences


  1. Mostly thanks to major improvements to the cycling network and the BIXI bike sharing program. 

08 February, 2026 05:00AM by Louis-Philippe Véronneau

February 06, 2026

Reproducible Builds

Reproducible Builds in January 2026

Welcome to the first monthly report in 2026 from theReproducible Builds project!

These reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

  1. Flathub now testing for reproducibility
  2. Reproducibility identifying projects that will fail to build in 2038
  3. Distribution work
  4. Tool development
  5. Two new academic papers
  6. Upstream patches

Flathub now testing for reproducibility

Flathub, the primary repository/app store for Flatpak-based applications, has begun checking for build reproducibility. According to a recent blog post:

We have started testing binary reproducibility of x86_64 builds targeting the stable repository. This is possible thanks to flathub-repro-checker, a tool doing the necessary legwork to recreate the build environment and compare the result of the rebuild with what is published on Flathub. While these tests have been running for a while now, we have recently restarted them from scratch after enabling S3 storage for diffoscope artifacts.

The test results and status are available on their reproducible builds page.


Reproducibility identifying software projects that will fail to build in 2038

Longtime Reproducible Builds developer Bernhard M. Wiedemann posted on Reddit on “Y2K38 commemoration day T-12” — that is to say, twelve years to the day before the UNIX Epoch will no longer fit into a signed 32-bit integer variable on 19th January 2038.

Bernhard’s comment succinctly outlines the problem as well as notes some of the potential remedies, as well as links to a discussion with the GCC developers regarding “adding warnings for int to time_t conversions”.

At the time of publication, Bernhard’s topic had generated 50 comments in response.


Distribution work

Conda is a language-agnostic package manager which was originally developed to help Python data scientists and is now a popular package manager for Python and R.

conda-forge, a community-led infrastructure for Conda, recently revamped their dashboards to rebuild packages straight to track reproducibility. There have been changes over the past two years to make the conda-forge build tooling fully reproducible by embedding the ‘lockfile’ of the entire build environment inside the packages.


InDebian this month:


In NixOS this month, it was announced that the GNU Guix Full Source Bootstrap was ported to NixOS as part of Wire Jansen’s bachelor’s thesis (PDF). At the time of publication, this change has landed in NiX’s tdev distribution.


Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for his work there.


Tool development

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 310 and 311 to Debian.

  • Fix test compatibility with u-boot-tools version 2026-01. []
  • Drop the implied Rules-Requires-Root: no entry in debian/control. []
  • Bump Standards-Version to 4.7.3. []
  • Reference the Debian ocaml package instead of ocaml-nox. (#1125094)
  • Apply a patch by Jelle van der Waa to adjust a test fixture to match new lines. []
  • Also drop the implied Priority: optional from debian/control. []


In addition, Holger Levsen uploaded two versions of disorderfs, first updating the package from FUSE 2 to FUSE 3 as described in last month's report, as well as updating the packaging to the latest Debian standards. A second upload (0.6.2-1) was subsequently made, with Holger adding instructions on how to add the upstream release to our release archive and incorporating changes by Roland Clobus to set _FILE_OFFSET_BITS on 32-bit platforms, fixing a build failure on 32-bit systems. Vagrant Cascadian updated diffoscope in GNU Guix to version 311-2-ge4ec97f7 and disorderfs to 0.6.2.


Two new academic papers

Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris’ in-house research laboratory, the Information Processing and Communications Laboratory (LTCI), published a paper this month titled Docker Does Not Guarantee Reproducibility:

[…] While Docker is frequently cited in the literature as a tool that enables reproducibility in theory, the extent of its guarantees and limitations in practice remains under-explored. In this work, we address this gap through two complementary approaches. First, we conduct a systematic literature review to examine how Docker is framed in scientific discourse on reproducibility and to identify documented best practices for writing Dockerfiles enabling reproducible image building. Then, we perform a large-scale empirical study of 5,298 Docker builds collected from GitHub workflows. By rebuilding these images and comparing the results with their historical counterparts, we assess the real reproducibility of Docker images and evaluate the effectiveness of the best practices identified in the literature.

A PDF of their paper is available online.


Quentin Guilloteau, Antoine Waehren and Florina M. Ciorba of the University of Basel in Switzerland also published a Docker-related paper, theirs called Longitudinal Study of the Software Environments Produced by Dockerfiles from Research Artifacts:

The reproducibility crisis has affected all scientific disciplines, including computer science (CS). To address this issue, the CS community has established artifact evaluation processes at conferences and in journals to evaluate the reproducibility of the results shared in publications. Authors are therefore required to share their artifacts with reviewers, including code, data, and the software environment necessary to reproduce the results. One method for sharing the software environment proposed by conferences and journals is to utilize container technologies such as Docker and Apptainer. However, these tools rely on non-reproducible tools, resulting in non-reproducible containers. In this paper, we present a tool and methodology to evaluate variations over time in software environments of container images derived from research artifacts. We also present initial results on a small set of Dockerfiles from the Euro-Par 2024 conference.

A PDF of their paper is available online.


Miscellaneous news

On our mailing list this month:

Lastly, kpcyrd added a Rust section to the Stable order for outputs page on our website.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

06 February, 2026 08:04PM

Birger Schacht

Status update, January 2026

January was a slow month; I only did three uploads to Debian unstable:

  • xdg-desktop-portal-wlr updated to 0.8.1-1
  • swayimg updated to 4.7-1
  • usbguard updated to 1.1.4+ds-2, which closed #1122733

I was very happy to see the new dfsg-new-queue and that there are more hands now processing the NEW queue. I also finally got one of the packages accepted that I uploaded after the Trixie release: wayback, which I uploaded last August. There has been another release since then; I’ll try to upload that in the next few days.

There was a bug report for carl asking for Windows support. carl used the xdg crate for looking up the XDG directories, but xdg does not support Windows systems (and it seems this will not change). The reporter also provided a PR to replace the dependency with the directories crate, which is more system agnostic. I adapted the PR a bit, merged it, and released version 0.6.0 of carl.

At my dayjob I refactored django-grouper. django-grouper is a package we use to find duplicate objects in our data. Our users often work with datasets of thousands of historical persons, places and institutions, and in projects that run over years and ingest data from multiple sources, it happens that entries are created several times. I wrote the initial app in 2024, but was never really happy about the approach I used back then. It was based on this blogpost that describes how to group spreadsheet text cells. It uses sklearn's TfidfVectorizer with a custom analyzer and the library sparse_dot_topn for creating the matrix. All in all the module to calculate the clusters was 80 lines, and with sparse_dot_topn it pulled in a rather niche Python library. I was pretty sure that this functionality could also be implemented with basic sklearn functionality, and it was: we are now using DictVectorizer, because in a Django app we are working with objects that can be mapped to dicts anyway. And for clustering the data, the app now uses the DBSCAN algorithm (with the manhattan distance as metric). The module is now only half the size and the whole app lost one dependency! I released those changes as version 0.3.0 of the app.
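
The paragraph above names the exact building blocks, so here is a minimal, self-contained sketch of that DictVectorizer + DBSCAN combination. It is not the actual django-grouper code; the records and the feature mapping are made up for illustration.

    # Sketch only: group likely-duplicate records with DictVectorizer + DBSCAN
    # (manhattan metric). Records and features below are invented.
    from sklearn.cluster import DBSCAN
    from sklearn.feature_extraction import DictVectorizer

    # Hypothetical stand-ins for Django model instances.
    records = [
        {"name": "Wien ", "country": "AT"},
        {"name": "wien", "country": "AT"},
        {"name": "Graz", "country": "AT"},
    ]

    # Map each object to a dict of features; a real app would derive these
    # from the model fields it wants to compare.
    features = [
        {"name": r["name"].strip().lower(), "country": r["country"]} for r in records
    ]

    X = DictVectorizer(sparse=False).fit_transform(features)
    labels = DBSCAN(eps=0.5, min_samples=1, metric="manhattan").fit_predict(X)
    print(labels)  # e.g. [0 0 1]: the first two records form one duplicate cluster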

At the end of January together with friends I went to Brussels to attend FOSDEM. We took the night train but there were a couple of broken down trains so the ride took 26 hours instead of one night. It is a good thing we had a one day buffer and FOSDEM only started on Saturday. As usual there were too many talks to visit, so I’ll have to watch some of the recordings in the next few weeks.

Some examples of talks I found interesting so far:

06 February, 2026 05:28AM

Reproducible Builds (diffoscope)

diffoscope 312 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 312. This version includes the following changes:

[ Jelle van der Waa ]
* Adjust u-boot-tools/fit diff to match new lines.

You can find out more by visiting the project homepage.

06 February, 2026 12:00AM

February 05, 2026

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

rfoaas 2.3.3: Limited Rebirth

rfoaas greed example

The original FOAAS site provided a rather wide variety of REST access points, but it sadly is no more (while the old repo is still there). A newer replacement site FOASS is up and running, but with a somewhat reduced offering. (For example, the two accessors shown in the screenshot are no more. C’est la vie.)

Recognising that perfect may once again be the enemy of (somewhat) good (enough), we have rejigged the rfoaas package in a new release 2.3.3. (The preceding version number 2.3.2 corresponded to the upstream version, indicating which API release we matched. Now we just went ‘+ 0.0.1’ but there is no longer a correspondence to the service version at FOASS.)

Accessor functions for each of the now available access points are provided, and the random sampling accessor getRandomFO() now picks from that set.

My CRANberries service provides a comparison to the previous release. Questions, comments etc. should go to the GitHub issue tracker. More background information is on the project page as well as on the github repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

05 February, 2026 01:00AM

February 04, 2026

littler 0.3.23 on CRAN: More Features (and Fixes)

max-heap image

The twenty-third release of littler as a CRAN package landed on CRAN just now, following in the now twenty-year history (!!) as a (initially non-CRAN) package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package which Rscript only began to do in later years.

littler lives on Linux and Unix, has its difficulties on macOS due to some-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the Github repo, as well as in the examples vignette.

This release, the first in about eleven months, once again brings two new helper scripts, and enhances six existing ones. The release was triggered because it finally became clear why installGitHub.r ignored r2u when available: we forced the type argument to ‘source’ (so thanks to Iñaki for spotting this). One change was once again contributed by Michael which is again greatly appreciated.

The full change description follows.

Changes in littler version 0.3.23 (2026-02-03)

  • Changes in examples scripts

    • A new script busybees.r aggregates deadlined packages by maintainer

    • Several small updates have been made to the (mostly internal) 'r2u.r' script

    • The deadliners.r script has refined treatment for screen width

    • The install2.r script has new options --quiet and --verbose as proposed by Zivan Karaman

    • The rcc.r script passes build-args to 'rcmdcheck' to compact vignettes and save data

    • The installRub.r script now defaults to 'noble' and is more tolerant of inputs

    • The installRub.r script deals correctly with empty utils::osVersion thanks to Michael Chirico

    • New script checkPackageUrls.r inspired by how CRAN checks (with thanks to Kurt Hornik for the hint)

    • The installGithub.r script now adjusts to bspm and takes advantage of r2u binaries for its build dependencies

  • Changes in package

    • Environment variables (read at build time) can use double quotes

    • Continuous integration scripts received a minor update

My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter. Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

04 February, 2026 12:37PM

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in January 2026

04 February, 2026 11:39AM by Ben Hutchings

February 03, 2026

hackergotchi for Jonathan Dowland

Jonathan Dowland

FOSDEM 2026 talk recording available

FOSDEM 2026 was great! I hope to blog a proper postmortem in due course. But for now, the video of my talk is up, as are my slides with speaker notes and links.

03 February, 2026 04:21PM

Valhalla's Things

A Day Off

Posted on February 3, 2026
Tags:
madeof:atoms, madeof:bits

Today I had a day off. Some of it went great. Some less so.

I woke up, went out to pay our tribute to NotOurCat, and it was snowing! yay! And I had a day off, so if it had snowed enough that shovelling was needed, I had time to do it (it didn’t, it started to rain soon afterwards, but still, YAY snow!).

Then I had breakfast, with the fruit rye bread I had baked yesterday,and I treated myself to some of the strong Irish tea I have left,instead of the milder ones I want to finish before buying more of theIrish.

And then, I bought myself a fancy new expensive fountain pen. One that costs 16€! more than three times as much as my usual ones! I hope it will work as well, but I’m quite confident it should. I’ll find out when it arrives from Germany (together with a few ink samples that will result in a future blog post with some SCIENCE).

I decided to try and use bank transfers instead of my visa debit card when buying from online shops that give the option to do so: it’s a tiny bit more effort, but it means I’m paying 0.25€ to my bank1 rather than the seller having to pay some unknown amount to a US based payment provider. Unluckily, the fountain pen website offered a huge number of payment methods, but not bank transfers. sigh.

And then, I could start working a bit on the connecting wires for the LED strips for our living room: I soldered two pieces, six wires each (it’s one RGB strip, 4 pins, and a warm white one requiring two more), then did a bit of tests, including writing some micropython code to add a test mode that lights up each colour in sequence, and the morning was almost gone. For some reason this project, as simple as it is, is taking forever. But it is showing progress.

There was a break, when the postman delivered a package of chemicals2 for a future project or two. There will be blog posts!

After lunch I spent some time finishing eyelets on the outfit I wanted to wear this evening, as I had not been able to finish it during fosdem. This one will result in two blog posts!

Meanwhile, in the morning I didn’t remember the name of the program I used to load software on micropython boards such as the one that will control the LED strips (that’s thonny), and while searching for it in the documentation, I found that there is also a command line program I can use, mpremote, and that’s a much better fit for my preferences!

I mentioned it in an xmpp room full of nerds, and one of them mentioned that he could try it on his Inkplate, when he had time, and I was nerd-sniped into trying it on mine, which had been sitting unused showing the temperatures in our old house on the last day it spent there and needs to be updated for the sensors in the new house.

And that led to the writing of some notes on how to set it up from the command line (good), and to the opening of one upstream issue (bad), because I have an old model, and the board-specific library isn’t working. at all.

And that’s when I realized that it was 17:00, I still had to cook the bread I had been working on since yesterday evening (ciabatta, one of my favourites, but it needs almost one hour in the oven), the outfit I wanted to wear in the evening was still not wearable, the table needed cleaning and some panicking was due. Thankfully, my mother was cooking dinner, so I didn’t have to do that too.

I turned the oven on, sewed the shoulder seams of the bodice while spraying water on the bread every 5 minutes, and then while it was cooking on its own, started to attach a closure to the skirt, decided that a safety pin was a perfectly reasonable closure for the first day an outfit is worn, took care of the table, took care of the bread, used some twine to close the bodice, because I still haven’t worked out what to use for laces, realized my bodkin is still misplaced, used a long and sharp and big needle meant for sewing mattresses instead of a bodkin, managed not to stab myself, and less than half an hour late we could have dinner.

There was bread, there was Swedish crispbread, there were spreads (tuna, and beans), and vegetables, and then there was the cake that caused my mother to panic when she added her last honey to the milk and it curdled (my SO and I tried it, it had no odd taste, we decided it could be used) and it was good, although I had to get a second slice just to be 100% sure of it.

And now I’m exhausted, and I’ve only done half of the things I had planned to do, but I’d still say I’ve had quite a good day.


  1. Banca Etica, so one that avoids any investment in weapons and a number of other problematic things.↩︎

  2. not food grade, except for one, but kitchen-safe.↩︎

03 February, 2026 12:00AM

February 02, 2026

Isoken Ibizugbe

How Open Source Contributions Define Your Career Path

Hi there, I’m more than halfway through (8 weeks) my Outreachy internship with Debian, working on the openQA project to test Live Images.

My journey into tech began as a software engineering trainee, during which I built a foundation in Bash scripting, C programming, and Python. Later, I worked for a startup as a Product Manager. As is common in startups, I wore many hats, but I found myself drawn most to the Quality Assurance team. Testing user flows and edge-case scenarios sparked my curiosity, and that curiosity is exactly what led me to the Debian Live Image testing project.

From Manual to Automated

In my previous roles, I was accustomed to manual testing, simulating user actions one by one. While effective, I quickly realized it could be a bottleneck in fast-paced environments. This internship has been a masterclass in removing that bottleneck. I’ve learned that automating repetitive actions makes life (and engineering) much easier. Life’s too short for manual testing😁.

One of my proudest technical wins so far has been creating “Synergy” across desktop environments. I proposed a solution to group common applications so we could use a single Perl script to handle tests for multiple desktop environments. With my mentor’s guidance, we implemented this using symbolic links, which significantly reduced code redundancy.

Expanding My Technical Toolkit

Over the last 8 weeks, my technical toolkit has expanded significantly:

  • Perl & openQA: I’ve learnt to write Perl for automation within the openQA framework, and I’ve successfully automated apps_startstop tests for Cinnamon and LXQt
  • Technical Documentation: I authored a contributor guide. This required paying close attention to detail, ensuring that new contributors can have faster reviews and merged contributions
  • Ansible: I am learning that testing doesn’t happen in a vacuum. To ensure new tests are truly automated, they must be integrated into the system’s configuration.

Working on this project has shaped my perspective on where I fit in the tech ecosystem. In Open Source, my “resume” isn’t just a PDF, it’s a public trail of merged code, technical proposals, and collaborative discussions.

As my mentor recently pointed out, this is my “proof-of-work.” It’s a transparent record of what I am capable of and where my interests lie. 

Finally, I’ve grown as a team player. Working with a global team across different time zones has taught me the importance of asynchronous communication and respecting everyone’s time. Whether I am proposing a new logic or documenting a process, I am learning that open communication is just as vital as clean code.

02 February, 2026 09:49PM by Isoken Ibizugbe

hackergotchi for Patryk Cisek

Patryk Cisek

Bitwarden Secrets Manager With Ansible

If you’d like to have a simple solution for managing all the secrets you’re using in your Ansible Playbooks, keep reading on. Bitwarden’s Secrets Manager provides an Ansible collection, which makes it very easy to use this particular Secrets Manager in Ansible Playbooks. I’ll show you how to set up a free Secrets Manager account in Bitwarden. Then I’ll walk you through the setup in an example Ansible Playbook.

YouTube Video version

I’ve also recorded a video version of this article. If you prefer a video, you can find it here.

02 February, 2026 09:00PM

Hellen Chemtai

Career Growth Through Open Source: A Personal Journey

Hello world😀! I am an intern at Outreachy working with the Debian OpenQA team on image testing. We get to know what career opportunities await us when we work on open source projects. In open source, we are constantly learning. The community has different sets of skills and a large network of people.

So, how did I start off in this internship

I entered the community with these skills:

  • MERN (Mongo DB, Express JS, React JS and Node JS) – for web development
  • Linux and Shell Scripting – for some administrative purposes
  • Containerization using Google Cloud
  • Operating Systems
  • A learning passion for Open Source – I contributed to some open source work in the past but it was in terms of documentation and bug hunting

I was a newbie at OpenQA but I had a month to learn and contribute. Time is a critical resource but so is understanding what you are doing. I followed the installation instructions given, but whenever I got errors, I had to research why I got the errors. I took time to understand the errors I was solving, then continued on with the tasks I wanted to do. I communicated my logic and understanding while working on the task and awaited reviews and feedback. Within a span of two months I had learned a lot by practicing and testing.

The skills I gained

As of today, I gained these technical skills from my work with Debian OpenQA team.

  • Perl – the tests that we run are written in this language
  • Ansible configuration – ansible configurations and settings for the machines the team runs
  • Git – this is needed for code versioning and dividing tasks into different branches
  • Linux – shell scripting and working with the Debian Operating system
  • Virtual Machines and Operating Systems – I constantly view how different operating systems are booted and run on virtual machines during testing
  • Testing – I keep watch of needles and ensure the tests work as required
  • Debian – I use a Debian system to run my Virtual Machines
  • OpenQA – the tool that is used to automate testing of Images

With open source comes the need for constant communication. The community is diverse and the team is usually in different time zones. These are some of the soft / social skills I gained when working with the team

  • Communication – this is essential especially in taking tasks with confidence, talking about issues encountered and stating the progress of the tasks
  • Interpersonal skills – this is for general communication within the community
  • Flexibility – we have to adapt to changes because we are a community of different people with different skills

With these skills and the willingness to learn, open source is a great area to focus on. Aside from your career you will extend your network. My interests are set on open source and Linux in general. Working with a wider network has really skilled me up and I will continue learning. Working with the Debian OpenQA team has been very great. The team is great at communication and I learn every day. The knowledge I gain from the team is helping me build a great career in open source.

02 February, 2026 03:30PM by hellen chemtai tay

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

Paging all Radio Curious Hackers

After years of thinking about and learning about how radios work, I figured it was high time to start to more aggressively share the things I’ve been learning. I had a ton of fun at DistrictCon year 0, so it was a pretty natural place to pitch an RF-focused introductory talk.

I was selected for Year 1, and able to give my first ever RF related talk about how to set off restaurant pagers (including one on stage!) by reading and writing IQ directly using a little bit of stdlib only Python.

This talk is based around the work I’ve written about previously (here, here and here), but the “all-in-one” form factor was something I was hoping would help encourage folks out there to take a look under the hood of some of the gear around them.

(In case the iframe above isn’t working, direct link to the YouTube video recording is here)

I’ve posted my slides from the talk at PARCH.pdf to hopefully give folks some time to flip through them directly.

All in all, the session was great – it was truly humbling to see so many folks interested in hearing me talk about radios. I had a bit of an own-goal in picking a 20 minute form-factor, so the talk is paced wrong (it feels like it went way too fast). Hopefully being able to see the slides and pause the video is helpful.

We had a short ad-hoc session after where I brought two sets of pagers and my power switch; but unfortunately we didn’t have anyone who was able to trigger any of the devices on their own (due to a mix of time between sessions and computer set-up). Hopefully it was enough to get folks interested in trying this on their own!

02 February, 2026 02:50PM

February 01, 2026

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

What do people do when they edit Wikipedia through Tor?

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects. This post is closely based on a previously published post by Kaylea Champion on the Community Data Science Blog.

Many individuals use Tor to reduce their visibility to widespread internet surveillance.

One popular approach to protecting our privacy online is to use theTor network. Tor protects users from being identified by their IP address, which can be tied to a physical location. However, if you’d like to contribute to Wikipedia using Tor, you’ll run into a problem. Although most IP addresses can edit without an account, Tor users are blocked from editing.

Tor users attempting to contribute to Wikipedia are shown a screen that informs them that they are not allowed to edit Wikipedia.

Other research by my team has shown that Wikipedia’s attempt to block Tor is imperfect and that some people have been able to edit despite the ban. As part of this work, we built a dataset of more than 11,000 contributions to Wikipedia via Tor and used quantitative analysis to show that contributions from Tor were of about the same quality as contributions from other new editors and other contributors without accounts. Of course, given the unusual circumstances Tor-based contributors face, we wondered whether a deeper look at the content of their edits might reveal more about their motives and the kinds of contributions they seek to make. Kaylea Champion (then a student, now faculty at UW Bothell) led a qualitative investigation to explore these questions.

Given the challenges of studying anonymity seekers, we designed a novel “forensic” qualitative approach inspired by techniques common in computer security and criminal investigation. We applied this new technique to a sample of 500 editing sessions and categorized each session based on what the editor seemed to be intending to do.

Most of the contributions we found fell into one of the two following categories:

  • Many contributions were quotidian attempts to add to the encyclopedia. Tor-based editors added facts, fixed typos, and updated train schedules. There’s no way to know whether these individuals knew they were just getting lucky in their ability to edit or were patiently reloading to evade the ban.
  • Second, we found harassing comments and vandalism. Unwelcome conduct is common in online environments, and it is sometimes more common when the likelihood of being identified decreases. Some of the harassing comments we observed were direct responses to being banned as a Tor user.

Although these were most of what we observed, we also found evidence of several types of contributor intent:

  • We observed activism: when a contributor tried to draw attention to journalistic accounts of environmental and human rights abuses committed by a mining company, only to have editors linked to the mining company repeatedly remove their edits. Another example involved an editor trying to diminish the influence of proponents of alternative medicine.
  • We also observed quality maintenance activities when editors used Wikipedia’s rules on appropriate sourcing to remove personal websites being cited in conspiracy theories.
  • We saw edit wars with Tor editors participating in a back-and-forth removal and replacement of content as part of a dispute, in some cases countering the work of an experienced Wikipedia editor whom even other experienced editors had gauged to be biased.
  • Finally, we saw Tor-based editors participating in non-article discussions, such as investigations into administrator misconduct and protests against the Wikipedia platform’s mistrust of Tor editors.

An exploratory mapping of our themes in terms of the value a type of contribution represents to the Wikipedia community and the importance of anonymity in facilitating it. Anonymity-protecting tools play a critical role in facilitating contributions on the right side of the figure, while edits on the left are more likely to occur even when anonymity is impossible. Contributions toward the top reflect valuable forms of participation in Wikipedia, while edits at the bottom reflect damage.

In all, these themes led us to reflect on how the risks individuals face when contributing to online communities sometimes diverge from the risks the communities face in accepting their work. Expressing minoritized perspectives, maintaining community standards even when you may be targeted by the rulebreaker, highlighting injustice, or acting as a whistleblower can be very risky for an individual, and may not be possible without privacy protections. Of course, in platforms seeking to support the public good, such knowledge and accountability may be crucial.


This work was published as a paper at CSCW: Kaylea Champion, Nora McDonald, Stephanie Bankes, Joseph Zhang, Rachel Greenstadt, Andrea Forte, and Benjamin Mako Hill. 2019. A Forensic Qualitative Analysis of Contributions to Wikipedia from Anonymity Seeking Users. Proceedings of the ACM on Human-Computer Interaction. 3, CSCW, Article 53 (November 2019), 26 pages. https://doi.org/10.1145/3359155

This project was conducted by Kaylea Champion, Nora McDonald, Stephanie Bankes, Joseph Zhang, Rachel Greenstadt, Andrea Forte, and Benjamin Mako Hill. This work was supported by the National Science Foundation (awards CNS-1703736 and CNS-1703049) and included the work of two undergraduates supported through an NSF REU supplement.

01 February, 2026 12:19PM by Benjamin Mako Hill

hackergotchi for Junichi Uekawa

Junichi Uekawa

Got rid of documents I had for last year's Tax return.

Got rid of documents I had for last year's Tax return. Now I have the fewest documents in my bookshelf out of the year.

01 February, 2026 05:54AM by Junichi Uekawa

Russ Allbery

Review: Paladin's Faith

Review:Paladin's Faith, by T. Kingfisher

Series:The Saint of Steel #4
Publisher:Red Wombat Studio
Copyright:2023
ISBN:1-61450-614-0
Format:Kindle
Pages:515

Paladin's Faith is the fourth book in T. Kingfisher's loosely connected series of fantasy novels about the berserker former paladins of the Saint of Steel. You could read this as a standalone, but there are numerous (spoilery) references to the previous books in the series.

Marguerite, who was central to the plot of the first book in the series, Paladin's Grace, is a spy with a problem. An internal power struggle in the Red Sail, the organization that she's been working for, has left her a target. She has a plan for how to break their power sufficiently that they will hopefully leave her alone, but to pull it off she's going to need help. As the story opens, she is working to acquire that help in a very Marguerite sort of way: breaking into the office of Bishop Beartongue of the Temple of the White Rat.

The Red Sail, the powerful merchant organization Marguerite worked for, makes their money in the salt trade. Marguerite has learned that someone invented a cheap and reproducible way to extract salt from sea water, thus making the salt trade irrelevant. The Red Sail wants to ensure that invention never sees the light of day, and has forced the artificer into hiding. Marguerite doesn't know where they are, but she knows where she can find out: the Court of Smoke, where the artificer has a patron.

Having grown up in Anuket City, Marguerite was familiar with many clockwork creations, not to mention all the ways that they could go horribly wrong. (Ninety-nine times out of a hundred, it was an explosion. The hundredth time, it ran amok and stabbed innocent bystanders, and the artificer would be left standing there saying, "But I had to put blades on it, or how would it rake the leaves?" while the gutters filled up with blood.)

All Marguerite needs to put her plan into motion is some bodyguards so that she's not constantly distracted and anxious about being assassinated. Readers of this series will be unsurprised to learn that the bodyguards she asks Beartongue for are paladins, including a large broody male one with serious self-esteem problems.

This is, like the other books in this series, a slow-burn romance with infuriating communication problems and a male protagonist who would do well to seek out a sack of hammers as a mentor. However, it has two things going for it that most books in this series do not: a long and complex plot to which the romance takes a back seat, and Marguerite, who is not particularly interested in playing along with the expected romance developments. There are also two main paladins in this story, not just one, and the other is one of the two female paladins of the Saint of Steel and rather more entertaining than Shane.

I generally like court intrigue stories, which is what fills most of this book. Marguerite is an experienced operative, so the reader gets some solid competence porn, and the paladins are fish out of water but are also unexpectedly dangerous, which adds both comedy and satisfying table-turning. I thoroughly enjoyed the maneuvering and the culture clashes. Marguerite is very good at what she does, knows it, and is entirely uninterested in other people's opinions about that, which short-circuits a lot of Shane's most annoying behavior and keeps the story from devolving into mopey angst like some of the books in this series have done.

The end of this book takes the plot in a different direction that adds significantly to the world-building, but also has a (thankfully short) depths of despair segment that I endured rather than enjoyed. I am not really in the mood for bleak hopelessness in my fiction at the moment, even if the reader is fairly sure it will be temporary. But apart from that, I thoroughly enjoyed this book from beginning to end. When we finally meet the artificer, they are an absolute delight in that way that Kingfisher is so good at. The whole story is infused with the sense of determined and competent people refusing to stop trying to fix problems. As usual, the romance was not for me and I think the book would have been better without it, but it's less central to the plot and therefore annoyed me less than any of the books in this series so far.

My one major complaint is the lack of gnoles, but we get some new and intriguing world-building to make up for it, along with a setup for a fifth book that I am now extremely curious about.

By this point in the series, you probably know if you like the general formula. Compared to the previous book, Paladin's Hope, I thought Paladin's Faith was much stronger and more interesting, but it's clearly of the same type. If, like me, you like the plots but not the romance, the plot here is more substantial. You will have to decide if that makes up for a romance in the typical T. Kingfisher configuration.

Personally, I enjoyed this quite a bit, except for the short bleak part, and I'm back to eagerly awaiting the next book in the series.

Rating: 8 out of 10

01 February, 2026 04:54AM

January 31, 2026

hackergotchi for Michael Prokop

Michael Prokop

apt, SHA-1 keys + 2026-02-01

You might have seen Policy will reject signature within a year warnings in apt(-get) update runs like this:

root@424812bd4556:/# apt update
Get:1 http://foo.example.org/debian demo InRelease [4229 B]
Hit:2 http://deb.debian.org/debian trixie InRelease
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
Get:5 http://foo.example.org/debian demo/main amd64 Packages [1097 B]
Fetched 5326 B in 0s (43.2 kB/s)
All packages are up to date.
Warning: http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details
root@424812bd4556:/# apt --audit update
Hit:1 http://foo.example.org/debian demo InRelease
Hit:2 http://deb.debian.org/debian trixie InRelease
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
All packages are up to date.
Warning:  http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details
Audit:  http://foo.example.org/debian/dists/demo/InRelease: Sub-process /usr/bin/sqv returned an error code (1), error message is:
  Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:
             No binding signature at time 2024-06-19T10:33:47Z
    because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance
    because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
Audit: The sources.list(5) entry for 'http://foo.example.org/debian' should be upgraded to deb822 .sources
Audit: Missing Signed-By in the sources.list(5) entry for 'http://foo.example.org/debian'
Audit: Consider migrating all sources.list(5) entries to the deb822 .sources format
Audit: The deb822 .sources format supports both embedded as well as external OpenPGP keys
Audit: See apt-secure(8) for best practices in configuring repository signing.
Audit: Some sources can be modernized. Run 'apt modernize-sources' to do so.

If you ignored this for the last year, I would like to tell you that 2026-02-01 is not that far away (hello from the past if you’re reading this because you’re already affected).

Let’s simulate the future:

root@424812bd4556:/# apt --update -y install faketime
[...]
root@424812bd4556:/# export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1 FAKETIME="2026-08-29 23:42:11"
root@424812bd4556:/# date
Sat Aug 29 23:42:11 UTC 2026
root@424812bd4556:/# apt update
Get:1 http://foo.example.org/debian demo InRelease [4229 B]
Hit:2 http://deb.debian.org/debian trixie InRelease
Err:1 http://foo.example.org/debian demo InRelease
  Sub-process /usr/bin/sqv returned an error code (1), error message is:
  Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:
           No binding signature at time 2024-06-19T10:33:47Z
  because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance
  because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
[...]
Warning: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. OpenPGP signature verification failed: http://foo.example.org/debian demo InRelease: Sub-process /usr/bin/sqv returned an error code (1), error message is:
  Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:
           No binding signature at time 2024-06-19T10:33:47Z
  because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance
  because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
[...]
root@424812bd4556:/# echo $?
100

Now, the proper solution would have been to fix the signing key underneath (via e.g. sq cert lint --fix --cert-file $PRIVAT_KEY_FILE > $PRIVAT_KEY_FILE-fixed).

If you don’t have access to the according private key (e.g. when using an upstream repository that has been ignoring this issue), you’re out of luck for a proper fix.

But there’s a workaround for the apt situation (related see apt commit 0989275c2f7afb7a5f7698a096664a1035118ebf):

root@424812bd4556:/# cat /usr/share/apt/default-sequoia.config
# Default APT Sequoia configuration. To overwrite, consider copying this
# to /etc/crypto-policies/back-ends/apt-sequoia.config and modify the
# desired values.
[asymmetric_algorithms]
dsa2048 = 2024-02-01
dsa3072 = 2024-02-01
dsa4096 = 2024-02-01
brainpoolp256 = 2028-02-01
brainpoolp384 = 2028-02-01
brainpoolp512 = 2028-02-01
rsa2048  = 2030-02-01
[hash_algorithms]
sha1.second_preimage_resistance = 2026-02-01    # Extend the expiry for legacy repositories
sha224 = 2026-02-01
[packets]
signature.v3 = 2026-02-01   # Extend the expiry

Adjust this according to your needs:

root@424812bd4556:/# mkdir -p /etc/crypto-policies/back-ends/
root@424812bd4556:/# cp /usr/share/apt/default-sequoia.config /etc/crypto-policies/back-ends/apt-sequoia.config
root@424812bd4556:/# $EDITOR /etc/crypto-policies/back-ends/apt-sequoia.config
root@424812bd4556:/# cat /etc/crypto-policies/back-ends/apt-sequoia.config
# APT Sequoia override configuration
[asymmetric_algorithms]
dsa2048 = 2024-02-01
dsa3072 = 2024-02-01
dsa4096 = 2024-02-01
brainpoolp256 = 2028-02-01
brainpoolp384 = 2028-02-01
brainpoolp512 = 2028-02-01
rsa2048  = 2030-02-01
[hash_algorithms]
sha1.second_preimage_resistance = 2026-09-01    # Extend the expiry for legacy repositories
sha224 = 2026-09-01
[packets]
signature.v3 = 2026-02-01   # Extend the expiry

Then we’re back into the original situation, being a warning instead of an error:

root@424812bd4556:/# apt update
Hit:1 http://deb.debian.org/debian trixie InRelease
Get:2 http://foo.example.org/debian demo InRelease [4229 B]
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
Warning: http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details
[..]

Please note that this is a workaround, and not a proper solution.

31 January, 2026 01:57PM by mika

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

Dialogue

Me: Do you want your coffee in a Japanese or Western style tea cup?

M: Yunomi.

Me: Apparently not as well as you think I do!

31 January, 2026 10:59AM by Benjamin Mako Hill

Russ Allbery

Review: Dragon Pearl

Review:Dragon Pearl, by Yoon Ha Lee

Series:Thousand Worlds #1
Publisher:Rick Riordan Presents
Copyright:2019
ISBN:1-368-01519-0
Format:Kindle
Pages:315

Dragon Pearl is a middle-grade space fantasy based on Koreanmythology and the first book of a series.

Min is a fourteen-year-old girl living on the barely-terraformed world of Jinju with her extended family. Her older brother Jun passed the entrance exams for the Academy and left to join the Thousand Worlds Space Forces, and Min is counting the years until she can do the same. Those plans are thrown into turmoil when an official investigator appears at their door claiming that Jun deserted to search for the Dragon Pearl. A series of impulsive fourteen-year-old decisions lead to Min heading for a spaceport alone, determined to find her brother and prove his innocence.

This would be a rather improbable quest for a young girl, but Min is a gumiho, one of the supernaturals who live in the Thousand Worlds alongside non-magical humans. Unlike the more respectable dragons, tigers, goblins, and shamans, gumiho are viewed with suspicion and distrust because their powers are useful for deception. They are natural shapeshifters who can copy the shapes of others, and their Charm ability lets them influence people's thoughts and create temporary illusions of objects such as ID cards. It will take all of Min's powers, and some rather lucky coincidences, to infiltrate the Space Forces and determine what happened to her brother.

It's common for reviews of this book to open with a caution that this is a middle-grade adventure novel and you should not expect a story like Ninefox Gambit. I will be boring and repeat that caution. Dragon Pearl has a single first-person viewpoint and a very linear and straightforward plot. Adult readers are unlikely to be surprised by plot twists; the fun is the world-building and seeing how Min manages to work around plot obstacles.

The world-building is enjoyable but not very rigorous. Min uses and abuses Charm with the creative intensity of a Dungeons & Dragons min-maxer. Each individual event makes sense given the implication that Min is unusually powerful, but I'm dubious about the surrounding society and lack of protections against Charm given what Min is able to do. Min does say that gumiho are rare and many people think they're extinct, which is a bit of a fig leaf, but you'll need to bring your urban fantasy suspension of disbelief skills to this one.

I did like that the world-building conceit went more than skin deep and influenced every part of the world. There are ghosts who are critical to the plot. Terraforming is done through magic, hence the quest for the Dragon Pearl and the miserable state of Min's home planet due to its loss. Medical treatment involves the body's meridians, as does engineering: The starships have meridians similar to those of humans, and engineers partly merge with those meridians to adjust them. This is not the sort of book that tries to build rigorous scientific theories or explain them to the reader, and I'm not sure everything would hang together if you poked at it too hard, but Min isn't interested in doing that poking and the story doesn't try to justify itself. It's mostly a vibe, but it's a vibe that I enjoyed and that is rather different than other space fantasy I've read.

The characters were okay but never quite clicked for me, in part because proper character exploration would have required Min take a detour from her quest to find her brother and that was not going to happen. The reader gets occasional glimpses of a military SF cadet story and a friendship on false premises story, but neither have time to breathe because Min drops any entanglement that gets in the way of her quest. She's almost amoral in a way that I found believable but not quite aligned with my reading mood. I also felt a bit wrong-footed by how her friendships developed; saying too much more would be a spoiler, but I was expecting more human connection than I got.

I think my primary disappointment with this book was something I knew going in, not in any way its fault, and part of the reason why I'd put off reading it: This is pitched at young teenagers and didn't have quite enough plot and characterization complexity to satisfy me. It's a linear, somewhat episodic adventure story with some neat world-building, and it therefore glides over the spots where an adult novel would have added political and factional complexity. That is exactly as advertised, so it's up to you whether that's the book you're in the mood for.

One warning: The text of this book opens with an introduction by Rick Riordan that is just fluff marketing and that spoils the first few chapters of the book. It is unmarked as such at the beginning and tricked me into thinking it was the start of the book proper, and then deeply annoyed me. If you do read this book, I recommend skipping the utterly pointless introduction and going straight to chapter one.

Followed by Tiger Honor.

Rating: 6 out of 10

31 January, 2026 05:26AM

January 30, 2026

hackergotchi for Joey Hess

Joey Hess

the local weather

Snow coming. I'm tuned into the local 24 hour slop weather stream. AI generated, narrated, up to the minute radar and forecast graphics. People popping up on the live weather map with questions "snow soon?" (They pay for the privilege.) LLM generating reply that riffs on their name. Tuned to keep the urgency up, something is always happening somewhere, scanners are pulling the police reports, live webcam description models add verisimilitude to the description of the morning commute. Weather is happening.

In the subtext, climate change is happening. Weather is a growth industry. The guy up in Kentucky coal country who put this thing together is building an empire. He started as just another local news greenscreener. Dropped out and went twitch weather stream. Hyping up tornado days and dicey snow forecasts. Nowcasting, hyper individualized, interacting with chat. Now he's automated it all. On big days when he's getting real views, the bot breaks into his live streams, gives him a break.

Only a few thousand watching this morning yet. Perfect 2026 grade slop. Details never quite right, but close enough to keep on in the background all day. Nobody expects a perfect forecast after all, and it's fed from the National Weather Center discussion too. We still fund those guys? Why bother when a bot can do it?

He knows why he's big in these states, these rural areas. Understands the target audience. Airbrushed AI aesthetics are ok with them, receive no pushback. Flying more under the radar coastally, but weather is big there and getting bigger. The local weather will come for us all.

6 inches of snow covering some ground mount solar panels with a vertical solar panel fence behind them free of snow except cute little caps

(Not fiction FYI.)

30 January, 2026 01:05PM

Utkarsh Gupta

FOSS Activites in January 2026

Here’s my monthly but brief update about the activities I’ve done in the FOSS world.

Debian

Whilst I didn’t get a chance to do much, here are still a few things that I worked on:

  • A few discussions with the new DFSG team, et al.
  • Assisted a few folks in getting their patches submitted via Salsa.
  • Reviewing pyenv MR for Ujjwal.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

I joined Canonical to work on Ubuntu full-time back in February 2021.

Whilst I can’t give a full, detailed list of things I did, here’s a quick TL;DR of what I did:

  • Successfully released Resolute Snapshot 3!
    • This one was also done without the ISO tracker and cdimage access.
    • We also worked very hard to build and promote all the images in due time.
  • Worked further on the whole artifact signing story for cdimage.
  • Assisted a bunch of folks with my Archive Admin and Release team hats to:
  • With that, the mid-cycle sprints are around the corner, so quite busy preparing for that.

Debian (E)LTS

This month I have worked 59 hours on Debian Long Term Support (LTS) and on its sister Extended LTS project and did the following things:

Released Security Updates

Work in Progress

  • knot-resolver: Affected by CVE-2023-26249, CVE-2023-46317, and CVE-2022-40188, leading to Denial of Service.

  • ruby-rack: There were multiple vulnerabilities reported in Rack, leading to DoS (memory exhaustion) and proxy bypass.

    • [ELTS]: After completing the work for LTS myself, Bastien picked it up for ELTS and reached out about an upstream regression and we’ve been doing some exchanges. Bastien has done most of the work backporting the patches but needs a review and help backporting CVE-2025-61771. Haven’t made much progress since last month and will carry it over.
  • node-lodash: Affected by CVE-2025-13465, prototype pollution in the baseUnset function.

    • [stable]: The patches for trixie and bookworm are ready but haven’t been uploaded yet as I’d like for the unstable upload to settle a bit before I proceed with stable uploads.
    • [LTS]: The bullseye upload will follow once the stable uploads are in and ACK’d by the SRMs.
  • xrdp: Affected by CVE-2025-68670, leading to a stack-based buffer overflow.

Other Activities

  • [ELTS] Helped Bastien Roucaries debug a tomcat9 regression for buster.

    • I spent quite a lot of time trying to help Bastien (with Markus and Santiago involved via mail thread) by reproducing the regression that the user(s) reported.
    • I also helped suggest a path forward by vendoring everything, which I was then requested to also help perform.
    • Whilst doing that, I noticed circular dependency hellhole and suggested another path forward by backporting bnd and its dependencies as separate NEW packages.
    • Bastien liked the idea and is going to work on that but preferred to revert the update to remedy the immediate regressions reported. I further helped him in reviewing his update. This conversation happened on #debian-elts IRC channel.
  • [LTS] Assisted Ben Hutchings with his question about the next possible steps with a plausible libvirt regression caused by the Linux kernel update. This was a thread on the debian-lts@ mailing list.

  • [LTS] Attended the monthly LTS meeting on IRC. Summary here.

  • [E/LTS] Monitored discussions on mailing lists, IRC, and all the documentation updates.


Until next time.
:wq for today.

30 January, 2026 05:41AM

January 29, 2026

hackergotchi for C.J. Collier

C.J. Collier

Part 3: Building the Keystone – Dataproc Custom Images for Secure Boot & GPUs

In Part 1, we established a secure, proxy-only network. In Part 2, we explored the enhanced install_gpu_driver.sh initialization action. Now, in Part 3, we’ll focus on using the LLC-Technologies-Collier/custom-images repository (branch proxy-exercise-2025-11) to build the actual custom Dataproc images embedded with NVIDIA drivers signed for Secure Boot, all within our proxied environment.
Why Custom Images?

To run NVIDIA GPUs on Shielded VMs with Secure Boot enabled, the
NVIDIA kernel modules must be signed with a key trusted by the VM’s EFI
firmware. Since standard Dataproc images don’t include these
custom-signed modules, we need to build our own. This process also
allows us to pre-install a full stack of GPU-accelerated software.

The custom-images Toolkit (examples/secure-boot)

The examples/secure-boot directory within the custom-images repository contains the necessary scripts and configurations, refined through significant development to handle proxy and Secure Boot challenges.

Key Components & Development Insights:

  • env.json: The central configuration
    file (as used in Part 1) for project, network, proxy, and bucket
    details. This became the single source of truth to avoid configuration
    drift.
  • create-key-pair.sh: Manages the Secure
    Boot signing keys (PK, KEK, DB) in Google Secret Manager, essential for
    the module signing.
  • build-and-run-podman.sh: Orchestrates
    the image build process in an isolated Podman container. This was
    introduced to standardize the build environment and encapsulate
    dependencies, simplifying what the user needs to install locally.
  • pre-init.sh: Sets up the build environment within the container and calls generate_custom_image.py. It crucially passes metadata derived from env.json (like proxy settings and Secure Boot key secret names) to the temporary build VM.
  • generate_custom_image.py: The core
    Python script that automates GCE VM creation, runs the customization
    script, and creates the final GCE image.
  • gce-proxy-setup.sh: This script from startup_script/ is vital. It’s injected into the temporary build VM and runs first to configure the OS, package managers (apt, dnf), tools (curl, wget, GPG), Conda, and Java to use the proxy settings passed in the metadata. This ensures the entire build process is proxy-aware (a minimal sketch of this pattern follows this list).
  • install_gpu_driver.sh: Used as the
    --customization-script within the build VM. As detailed in
    Part 2, this script handles the driver/CUDA/ML stack installation and
    signing, now able to function correctly due to the proxy setup by
    gce-proxy-setup.sh.
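
To make the gce-proxy-setup.sh idea concrete, here is a minimal Python sketch of the same pattern; the real script is a Bash startup script. The "http-proxy" attribute name matches the proxy metadata mentioned in Part 2, while the apt.conf.d file name is an assumption.

    # Sketch only: read the proxy URI from GCE instance metadata and point apt
    # at it. Other tools (dnf, curl, Java, Conda) get equivalent treatment in
    # their own configuration files.
    import pathlib
    import urllib.request

    METADATA = "http://metadata.google.internal/computeMetadata/v1/instance/attributes/"

    def get_attribute(name: str) -> str:
        # The metadata server requires this header and is reached directly,
        # not via the proxy.
        req = urllib.request.Request(METADATA + name, headers={"Metadata-Flavor": "Google"})
        return urllib.request.urlopen(req, timeout=10).read().decode().strip()

    proxy_uri = get_attribute("http-proxy")  # e.g. http://proxy.example.internal:3128

    # Assumed drop-in path; apt's Acquire::*::Proxy options route downloads
    # through the customer proxy.
    pathlib.Path("/etc/apt/apt.conf.d/95-proxy").write_text(
        f'Acquire::http::Proxy "{proxy_uri}";\n'
        f'Acquire::https::Proxy "{proxy_uri}";\n'
    )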

Layered Image Strategy:

The pre-init.sh script employs a layered approach:

  1. secure-boot Image: Base image with
    Secure Boot certificates injected.
  2. tf Image: Based on
    secure-boot, this image runs the full
    install_gpu_driver.sh within the proxy-configured build VM
    to install NVIDIA drivers, CUDA, ML libraries (TensorFlow, PyTorch,
    RAPIDS), and sign the modules. This is the primary target image for our
    use case.

(Note: secure-proxy and proxy-tf layers were experiments, but the -tf image combined with runtime metadata emerged as the most effective solution for 2.2-debian12).

Build Steps:

  1. Clone Repos & Configure env.json:
     Ensure you have the custom-images and cloud-dataproc repos and a complete env.json as described in Part 1.

  2. Run the Build:

     # Example: Build a 2.2-debian12 based image set
     # Run from the custom-images repository root
     bash examples/secure-boot/build-and-run-podman.sh 2.2-debian12

     This command will build the layered images, leveraging the proxy settings from env.json via the metadata injected into the build VM. Note the final image name produced (e.g., dataproc-2-2-deb12-YYYYMMDD-HHMMSS-tf).

Conclusion of Part 3

Through an iterative process, we’ve developed a robust workflow within the custom-images repository to build Secure Boot-compatible GPU images in a proxy-only environment. The key was isolating the build in Podman, ensuring the build VM is fully proxy-aware using gce-proxy-setup.sh, and leveraging the enhanced install_gpu_driver.sh from Part 2.

In Part 4, we’ll bring it all together, deploying a Dataproc cluster using this custom -tf image within the secure network, and verifying the end-to-end functionality.

29 January, 2026 09:08AM by C.J. Collier

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (November and December 2025)

The following contributors got their Debian Developer accounts in the last two months:

  • Aquila Macedo (aquila)
  • Peter Blackman (peterb)
  • Kiran S Kunjumon (hacksk)
  • Ben Westover (bjw)

The following contributors were added as Debian Maintainers in the last two months:

  • Vladimir Petko
  • Antonin Delpeuch
  • Nadzeya Hutsko
  • Aryan Karamtoth
  • Carl Keinath
  • Richard Nelson

Congratulations!

29 January, 2026 09:00AM by Jean-Pierre Giraud

January 28, 2026

hackergotchi for C.J. Collier

C.J. Collier

Part 2: Taming the Beast – Deep Dive into the Proxy-Aware GPU Initialization Action

In Part 1 of this series, we laid the network foundation for running secure Dataproc clusters. Now, let’s zoom in on the core component responsible for installing and configuring NVIDIA GPU drivers and the associated ML stack in this restricted environment: the install_gpu_driver.sh script from the LLC-Technologies-Collier/initialization-actions repository (branch gpu-202601).

This isn’t just any installation script; it has been significantly
enhanced to handle the nuances of Secure Boot and to operate seamlessly
behind an HTTP/S proxy.

The Challenge: Installing GPU Drivers Without Direct Internet

Our goal was to create a Dataproc custom image with NVIDIA GPU
drivers, sign the kernel modules for Secure Boot, and ensure the entire
process works seamlessly when the build VM and the eventual cluster
nodes have no direct internet access, relying solely on an HTTP/S proxy.
This involved:

  1. Proxy-Aware Build: Ensuring all build steps within
    the custom image creation process (package downloads, driver downloads,
    GPG keys, etc.) correctly use the customer’s proxy.
  2. Secure Boot Signing: Integrating kernel module
    signing using keys managed in GCP Secret Manager, especially when
    drivers are built from source.
  3. Conda Environment: Reliably and speedily installing
    a complex Conda environment with PyTorch, TensorFlow, Rapids, and other
    GPU-accelerated libraries through the proxy.
  4. Dataproc Integration: Making sure the custom image
    works correctly with Dataproc’s own startup, agent processes, and
    cluster-specific configurations like YARN.

The Development Journey: Key Enhancements in install_gpu_driver.sh

To address these challenges, the script incorporates several key
features:

  • Robust Proxy Handling (set_proxy function):

    • Challenge: Initial script versions had spotty proxy
      support. Many tools like apt, curl, gpg, and even gsutil failed in
      proxy-only environments.
    • Enhancements: The set_proxy function (also used in
      gce-proxy-setup.sh) was completely overhauled to parse various proxy
      metadata (http-proxy, https-proxy, proxy-uri, no-proxy). Critically,
      environment variables (HTTP_PROXY, HTTPS_PROXY, NO_PROXY) are now set
      before any network operations. NO_PROXY is carefully set to include
      .google.com and .googleapis.com to allow direct access to Google APIs
      via Private Google Access. System-wide trust stores (OS, Java, Conda)
      are updated with the proxy’s CA certificate if provided via
      http-proxy-pem-uri. gcloud, apt, dnf, and dirmngr are also configured
      to use the proxy (a minimal sketch of this pattern follows after this
      list).
  • Reliable GPG Key Fetching (import_gpg_keys function):

    • Challenge: Importing GPG keys for repositories
      often failed as keyservers use non-HTTP ports (e.g., 11371) blocked by
      firewalls, and gpg --recv-keys is not proxy-friendly.
    • Solution: A new import_gpg_keys function now fetches keys over HTTPS
      using curl, which respects the environment’s proxy settings. This
      replaced all direct gpg --recv-keys calls.
  • GCS Caching is King:
    • Challenge: Repeatedly downloading large files
      (drivers, CUDA, source code) through a proxy is slow and
      inefficient.
    • Solution: Implemented extensive GCS caching for
      NVIDIA drivers, CUDA runfiles, NVIDIA Open Kernel Module source
      tarballs, compiled kernel modules, and even packed Conda environments.
      Scripts now check a GCS bucket (dataproc-temp-bucket)
      before hitting the internet.
    • Impact: Dramatically speeds up subsequent runs and
      init action execution times on cluster nodes after the cache is
      warmed.
  • Conda Environment Stability & Speed:
    • Challenge: Large Conda environments are prone to
      solver conflicts and slow installation times.
    • Solution: Integrated Mamba for faster package
      solving. Refined package lists for better compatibility. Added logic to
      force-clean and rebuild the Conda environment cache on GCS and locally
      if inconsistencies are detected (e.g., driver installed but Conda env
      not fully set up).
  • Secure Boot & Kernel Module Signing:
    • Challenge: Custom-compiled kernel modules must be
      signed to load when Secure Boot is enabled.
    • Solution: The script integrates with GCP Secret
      Manager to fetch signing keys. The build_driver_from_github
      function now includes robust steps to compile, sign (using
      sign-file), install, and verify the signed modules.
  • Custom Image Workflow & Deferred Configuration:
    • Challenge: Cluster-specific settings (like YARN GPU
      configuration) should not be baked into the image.
    • Solution: The install_gpu_driver.sh
      script detects when it’s run during image creation
      (--metadata invocation-type=custom-images). In this mode,
      it defers cluster-specific setups to a systemd service
      (dataproc-gpu-config.service) that runs on the first boot
      of a cluster instance. This ensures that YARN and Spark configurations
      are applied in the context of the running cluster, not at image build
      time.
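
To make the proxy and key-import handling above concrete, here is a minimal
sketch of the pattern described in this list. It is not the actual
set_proxy/import_gpg_keys code from the gpu-202601 branch; the metadata
attribute names come from the text above, and every URL, path and proxy
address below is a placeholder.

# Sketch only -- not the real install_gpu_driver.sh code.
configure_proxy_env() {
  local proxy="$1"
  [ -z "${proxy}" ] && return 0
  # Export *before* any network operation so apt, curl, gsutil etc. see it.
  export http_proxy="${proxy}"  HTTP_PROXY="${proxy}"
  export https_proxy="${proxy}" HTTPS_PROXY="${proxy}"
  # Keep Google APIs direct so Private Google Access keeps working.
  export NO_PROXY=".google.com,.googleapis.com,metadata.google.internal"
  export no_proxy="${NO_PROXY}"
}

import_gpg_key_https() {
  # Fetch a repository signing key over HTTPS instead of 'gpg --recv-keys',
  # which talks to keyservers on ports that are blocked in this environment.
  local key_url="$1" keyring="$2"
  curl -fsSL "${key_url}" | gpg --dearmor -o "${keyring}"
}

configure_proxy_env "http://proxy.example.internal:3128"
import_gpg_key_https \
  "https://repo.example.com/signing-key.asc" \
  "/usr/share/keyrings/example-archive-keyring.gpg"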

Conclusion of Part 2

The install_gpu_driver.sh initialization action is more
than just an installer; it’s a carefully crafted tool designed to handle
the complexities of secure, proxied environments. Its robust proxy
support, comprehensive GCS caching, refined Conda management, Secure
Boot signing capabilities, and awareness of the custom image build
lifecycle make it a critical enabler.

In Part 3, we’ll explore how the LLC-Technologies-Collier/custom-images
repository (branch proxy-exercise-2025-11) uses this
initialization action to build the complete, ready-to-deploy Secure Boot
GPU custom images.

28 January, 2026 10:45AM by C.J. Collier

Dataproc GPUs, Secure Boot, & Proxies

Part 1: Building a Secure Network Foundation for Dataproc with GPUs & SWP

Welcome to the first post in our series on running GPU-accelerated
Dataproc workloads in secure, enterprise-grade environments. Many
organizations need to operate within VPCs that have no direct internet
egress, instead routing all traffic through a Secure Web Proxy (SWP).
Additionally, security mandates often require the use of Shielded VMs
with Secure Boot enabled. This series will show you how to meet these
requirements for your Dataproc GPU clusters.

In this post, we’ll focus on laying the network foundation using
tools from the LLC-Technologies-Collier/cloud-dataproc
repository (branch proxy-sync-2026-01).

The Challenge: Network Isolation & Control

Before we can even think about custom images or GPU drivers, we need
a network environment that:

  1. Prevents direct internet access from Dataproc cluster nodes.
  2. Forces all egress traffic through a manageable and auditable
    SWP.
  3. Provides the necessary connectivity for Dataproc to function and for
    us to build images later.
  4. Supports Secure Boot for all VMs.

The Toolkit: LLC-Technologies-Collier/cloud-dataproc

To make setting up and tearing down these complex network
environments repeatable and consistent, we’ve developed a set of bash
scripts within the gcloud directory of the
cloud-dataproc repository. These scripts handle the
creation of VPCs, subnets, firewall rules, service accounts, and the
Secure Web Proxy itself.

Key Script: gcloud/bin/create-dpgce-private

This script is the cornerstone for creating the private, proxied
environment. It automates:

  • VPC and Subnet creation (for the cluster, SWP, and management).
  • Setup of Certificate Authority Service and Certificate Manager for
    SWP TLS interception.
  • Deployment of the SWP Gateway instance.
  • Configuration of a Gateway Security Policy to control egress.
  • Creation of necessary firewall rules.
  • Result: Cluster nodes in this VPC have NO default
    internet route and MUST use the SWP.

Configuration via env.json

We use a single env.json file to drive the
configuration. This file will also be used by the
custom-images scripts in Part 3. This env.json
should reside in your custom-images repository clone, and
you’ll symlink it into the cloud-dataproc/gcloud
directory.
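
The full set of keys in env.json is defined by those two repositories and is
not reproduced here; the only key this post depends on directly is BUCKET,
which a gsutil command further down reads with jq. Purely as an illustration
(placeholder value, a real file will contain more settings), a minimal file
could be created in the custom-images clone like this:

# Illustrative only -- your real env.json will have more keys than BUCKET.
cat > env.json <<'EOF'
{
  "BUCKET": "my-dataproc-staging-bucket"
}
EOF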

Running the Setup:

# Assuming you have cloud-dataproc and custom-images cloned side-by-side
# And your env.json is in the custom-images root
cd cloud-dataproc/gcloud
# Symlink to the env.json in custom-images
ln -sf ../../custom-images/env.json env.json
# Run the creation script, but don't create a cluster yet
bash bin/create-dpgce-private --no-create-cluster
cd ../../custom-images

Node Configuration: The Metadata Startup Script for Runtime

For the Dataproc cluster nodes to function correctly in this proxied
environment, they need to be configured to use the SWP on boot. We
achieve this using a GCE metadata startup script.

The script startup_script/gce-proxy-setup.sh (from the
custom-images repository) is designed to be run on each
cluster node at boot. It reads metadata like http-proxy and
http-proxy-pem-uri (which our cluster creation scripts in
Part 4 will pass) to configure the OS environment, package managers, and
other tools to use the SWP.
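
The exact logic lives in the custom-images branch, but the general shape of
such a startup script is easy to sketch: query the GCE metadata server for
the attributes named above and apply what it finds. The attribute names come
from this post; the file paths and the use of gsutil for the CA certificate
are assumptions for illustration.

# Sketch of a boot-time proxy setup reading instance metadata.
MD="http://metadata.google.internal/computeMetadata/v1/instance/attributes"
HDR="Metadata-Flavor: Google"

proxy="$(curl -fs -H "${HDR}" "${MD}/http-proxy" || true)"
pem_uri="$(curl -fs -H "${HDR}" "${MD}/http-proxy-pem-uri" || true)"

if [ -n "${proxy}" ]; then
  export http_proxy="${proxy}" https_proxy="${proxy}"
  # Persist for later logins and services as well.
  printf 'http_proxy=%s\nhttps_proxy=%s\n' "${proxy}" "${proxy}" >> /etc/environment
fi

if [ -n "${pem_uri}" ]; then
  # CA certificate used by the SWP for TLS interception, assumed to be on GCS.
  gsutil cp "${pem_uri}" /usr/local/share/ca-certificates/swp-ca.crt
  update-ca-certificates
fi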

Upload this script to your GCS bucket:

# Run from the custom-images repository root
gsutil cp startup_script/gce-proxy-setup.sh gs://$(jq -r .BUCKET env.json)/custom-image-deps/

This script is essential for the runtime behavior of the
cluster nodes.

Conclusion of Part 1

With the cloud-dataproc scripts, we’ve laid the
groundwork by provisioning a secure VPC with controlled egress through
an SWP. We’ve also prepared the essential node-level proxy configuration
script (gce-proxy-setup.sh) in GCS, ready to be used by our
clusters.

Stay tuned for Part 2, where we’ll dive into the
install_gpu_driver.sh initialization action from the
LLC-Technologies-Collier/initialization-actions repository
(branch gpu-202601) and how it’s been adapted to install
all GPU-related software through the proxy during the image build
process.

28 January, 2026 10:37AM by C.J. Collier

Sven Hoexter

Decrypt TLS Connection with wireshark and curl

With TLS 1.3 more parts of the handshake got encrypted (e.g. the certificate), but sometimes it's still helpful to look at the complete handshake.

curl uses the somewhat standardized env variable for the key log file called SSLKEYLOGFILE, which is also supported by Firefox and Chrome. wireshark hides the setting in the UI behind Edit -> Preferences -> Protocols -> TLS -> (Pre)-Master-Secret log filename which is uncomfortable to reach. Looking up the config setting in the Advanced settings one can learn that it's called internally tls.keylog_file. Thus we can set it up with:

sudo wireshark -o "tls.keylog_file:/home/sven/curl.keylog"
SSLKEYLOGFILE=/home/sven/curl.keylog curl -v https://www.cloudflare.com/cdn-cgi/trace

Depending on the setup root might be unable to access the wayland session, that can be worked around by letting sudo keep the relevant env variables:

$ cat /etc/sudoers.d/wayland
Defaults   env_keep += "XDG_RUNTIME_DIR"
Defaults   env_keep += "WAYLAND_DISPLAY"

Or set up wireshark properly and use the wireshark group to be able to dump traffic. Might require a sudo dpkg-reconfigure wireshark-common.

Regarding curl: In some situations it could be desirable to force a specific older TLS version for testing, which requires a minimal and maximal version. E.g. to force TLS 1.2 only:

curl -v --tlsv1.2 --tls-max 1.2 https://www.cloudflare.com/cdn-cgi/trace

28 January, 2026 10:32AM

January 27, 2026

Sergio Cipriano

Query Debian changelogs by keyword with the FTP-Master API

In my post about tracking my Debian uploads, I used the ProjectB database directly to retrieve how many uploads I had so far.

I was pleasantly surprised to receive a message from Joerg Jaspert, who introduced me to the Debian Archive Kit web API (dak), also known as the FTP-Master API.

Joerg gave me the idea of integrating the query I had written into the dak API, so that anyone could obtain the same results without needing to use the mirror host, with a simple HTTP request.

I liked the idea and I decided to work on it. The endpoint is already available and you can try it yourself by doing something like this:

$ curl https://api.ftp-master.debian.org/changelogs?search_term=almeida+cipriano

⚠️ WARNING: Check v2: https://people.debian.org/~gladk/blog/posts/202601_ftp-master-changelog-v2/

The query provides a way to search through the changelogs of all Debian packages currently published. The source code is available at Salsa.

I'm already using it to track my uploads, I made this page that updates every day. If you want to set up something similar, you can use my script and just change the search_term to the name you use in your changelog entries.

I’m running it using a systemd timer. Here’s what I’ve got:

# .config/systemd/user/track-uploads.service
[Unit]
Description=Track my uploads using the dak API
StopWhenUnneeded=yes

[Service]
Type=oneshot
WorkingDirectory=/home/cipriano/public_html/uploads
ExecStart=/usr/bin/python3 generate.py

# .config/systemd/user/track-uploads.timer
[Unit]
Description=Run track-uploads script daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

After placing every file in the right place you just need to run:

$ systemctl --user daemon-reload
$ systemctl --user enable --now track-uploads.timer
$ systemctl --user start track-uploads.service  # generates the html now

If you want to get a bit fancier, I'm also using an Ansible playbook for that. The source code is available on my GitLab repository.

If you want to learn more about dak, there are web docs available.

I’d like to thank Joerg once again for suggesting the idea and forreviewing and merging the change so quickly.

27 January, 2026 05:20PM

Elana Hashman

A beginner's guide to improving your digital security

In 2017, I led a series of workshops aimed at teaching beginners a better understanding of encryption, how the internet works, and their digital security. Nearly a decade later, there is still a great need to share reliable resources and guides on improving these skills.

I have worked professionally in computer security one way or another for well over a decade, at many major technology companies and in many open source software projects. There are many inaccurate and unreliable resources out there on this subject, put together by well-meaning people without a background in security, which can lead to sharing misinformation, exaggeration and fearmongering.

I hope that I can offer you a trusted, curated list of high impact things that you can do right now, using whichever vetted guide you prefer. In addition, I also include how long it should take, why you should do each task, and any limitations.

This guide is aimed at improving your personal security, and does not apply to your work-owned devices. Always assume your company can monitor all of your messages and activities on work devices.

What can I do to improve my security right away?

I put together this list in order of effort, easiest tasks first. You should be able to complete many of the low effort tasks in a single hour. The medium to high effort tasks are very much worth doing, but may take you a few days or even weeks to complete.

Low effort (<15 minutes)

Upgrade your software to the latest versions

Why? I don't know anyone who hasn't complained about software updates breaking features, introducing bugs, and causing headaches. If it ain't broke, why upgrade, right? Well, alongside all of those annoying bugs and breaking changes, software updates also include security fixes, which will protect your device from being exploited by bad actors. Security issues can be found in software at any time, even software that's been available for many years and thought to be secure. You want to install these as soon as they are available.

Recommendation: Turn on automatic upgrades and always keep your devices as up-to-date as possible. If you have some software you know will not work if you upgrade it, at least be sure to upgrade your laptop and phone operating system (iOS, Android, Windows, etc.) and web browser (Chrome, Safari, Firefox, etc.). Do not use devices that do not receive security support (e.g. old Android or iPhones).

Guides:

Limitations: This will prevent someone from exploiting known security issues on your devices, but it won't help if your device was already compromised. If this is a concern, doing a factory reset, upgrade, and turning on automatic upgrades may help. This also won't protect against all types of attacks, but it is a necessary foundation.

Use Signal

Why? Signal is a trusted, vetted, secure messaging application that allows you to send end-to-end encrypted messages and make video/phone calls. This means that only you and your intended recipient can decrypt the messages and someone cannot intercept and read your messages, in contrast to texting (SMS) and other insecure forms of messaging. Other applications advertise themselves as end-to-end encrypted, but Signal provides the strongest protections.

Recommendation: I recommend installing the Signal app and using it! My mom loves that she can video call me on Wi-Fi on my Android phone. It also supports group chats. I use it as a secure alternative to texting (SMS) and other chat platforms. I also like Signal's "disappearing messages" feature which I enable by default because it automatically deletes messages after a certain period of time. This avoids your messages taking up too much storage.

Guides:

Limitations: Signal is only able to protect your messages in transit. If someone has access to your phone or the phone of the person you sent messages to, they will still be able to read them. As a rule of thumb, if you don't want someone to read something, don't write it down! Meet in person or make an encrypted phone call where you will not be overheard. If you are talking to someone you don't know, assume your messages are as public as posting on social media.

Set passwords and turn on device encryption

Why? Passwords ensure that someone else can't unlock your device without your consent or knowledge. They also are required to turn on device encryption, which protects your information on your device from being accessed when it is locked. Biometric (fingerprint or face ID) locking provides some privacy, but your fingerprint or face ID can be used against your wishes, whereas if you are the only person who knows your password, only you can use it.

Recommendation: Always set passwords and have device encryption enabled in order to protect your personal privacy. It may be convenient to allow kids or family members access to an unlocked device, but anyone else can access it, too! Use strong passwords that cannot be guessed—avoid using names, birthdays, phone numbers, addresses, or other public information. Using a password manager will make creating and managing passwords even easier. Disable biometric unlock, or at least know how to disable it. Most devices will enable disk encryption by default, but you should double-check.

Guides:

Limitations: If your device is unlocked, the password and encryption will provide no protections; the device must be locked for this to protect your privacy. It is possible, though unlikely, for someone to gain remote access to your device (for example through malware or stalkerware), which would bypass these protections. Some forensic tools are also sophisticated enough to work with physical access to a device that is turned on and locked, but not a device that is turned off/freshly powered on and encrypted. If you lose your password or disk encryption key, you may lose access to your device. For this reason, Windows and Apple laptops can make a cloud backup of your disk encryption key. However, a cloud backup can potentially be disclosed to law enforcement.

Install an ad blocker

Why? Online ad networks are often exploited to spread malware to unsuspecting visitors. If you've ever visited a regular website and suddenly seen an urgent, flashing pop-up claiming your device was hacked, it is often due to a bad ad. Blocking ads provides an additional layer of protection against these kinds of attacks.

Recommendation: I recommend everyone uses an ad blocker at all times. Not only are ads annoying and disruptive, but they can even result in your devices being compromised!

Guides:

Limitations: Sometimes the use of ad blockers can break functionality on websites, which can be annoying, but you can temporarily disable them to fix the problem. These may not be able to block all ads or all tracking, but they make browsing the web much more pleasant and lower risk! Some people might also be concerned that blocking ads might impact the revenue of their favourite websites or creators. In this case, I recommend either donating directly or sharing the site with a wider audience, but keep using the ad blocker for your safety.

Enable HTTPS-Only Mode

Why? The "S" in "HTTPS" stands for "secure". This feature, which can be enabled on your web browser, ensures that every time you visit a website, your connection is always end-to-end encrypted (just like when you use Signal!) This ensures that someone can't intercept what you search for, what pages on websites you visit, and any information you or the website share such as your banking details.

Recommendation: I recommend enabling this for everyone, though with improvements in web browser security and adoption of HTTPS over the years, your devices will often do this by default! There is a small risk you will encounter some websites that do not support HTTPS, usually older sites.

Guides:

Limitations: HTTPS protects the information on your connection to a website. It does not hide or protect the fact that you visited that website, only the information you accessed. If the website is malicious, HTTPS does not provide any protection. In certain settings, like when you use a work-managed computer that was set up for you, it can still be possible for your IT Department to see what you are browsing, even over an HTTPS connection, because they have administrator access to your computer and the network.

Medium to high effort (1+ hours)

These tasks require more effort but are worth the investment.

Set up a password manager

Why? It is not possible for a person to remember a unique password for every single website and app that they use. I have, as of writing, 556 passwords stored in my password manager. Password managers do three important things very well:

  1. They generate secure passwords with ease. You don't need to worry about getting your digits and special characters just right; the app will do it for you, and generate long, secure passwords.
  2. They remember all your passwords for you, and you just need to remember one password to access all of them. The most common reason people's accounts get hacked online is because they used the same password across multiple websites, and one of the websites had all their passwords leaked. When you use a unique password on every website, it doesn't matter if your password gets leaked!
  3. They autofill passwords based on the website you're visiting. This is important because it helps prevent you from getting phished. If you're tricked into visiting an evil lookalike site, your password manager will refuse to fill the password.

Recommendation: These benefits are extremely important, and setting up a password manager is often one of the most impactful things you can do for your digital security. However, they take time to get used to, and migrating all of your passwords into the app (and immediately changing them!) can take a few minutes at a time... over weeks. I recommend you prioritize the most important sites, such as your email accounts, banking/financial sites, and cellphone provider. This process will feel like a lot of work, but you will get to enjoy the benefits of never having to remember new passwords and the autofill functionality for websites. My recommended password manager is 1Password, but it stores passwords in the cloud and costs money. There are some good free options as well if cost is a concern. You can also use web browser- or OS-based password managers, but I do not prefer these.

Guides:

Limitations: Many people are concerned about the risk of using a password manager causing all of their passwords to be compromised. For this reason, it's very important to use a vetted, reputable password manager that has passed audits, such as 1Password or Bitwarden. It is also extremely important to choose a strong password to unlock your password manager. 1Password makes this easier by generating a secret to strengthen your unlock password, but I recommend using a long, memorable password in any case. Another risk is that if you forget your password manager's password, you will lose access to all your passwords. This is why I recommend 1Password, which has you set up an Emergency Kit to recover access to your account.

Set up two-factor authentication (2FA) for your accounts

Why? If your password is compromised in a website leak or due to a phishing attack, two-factor authentication will require a second piece of information to log in and potentially thwart the intruder. This provides you with an extra layer of security on your accounts.

Recommendation: You don't necessarily need to enable 2FA on every account, but prioritize enabling it on your most important accounts (email, banking, cellphone, etc.) There are typically a few different kinds: email-based (which is why your email account's security is so important), text message or SMS-based (which is why your cell phone account's security is so important), app-based, and hardware token-based. Email and text message 2FA are fine for most accounts. You may want to enable app- or hardware token-based 2FA for your most sensitive accounts.

Guides:

Limitations: The major limitation is that if you lose access to 2FA, you can be locked out of an account. This can happen if you're travelling abroad and can't access your usual cellphone number, if you break your phone and you don't have a backup of your authenticator app, or if you lose your hardware-based token. For this reason, many websites will provide you with "backup tokens"—you can print them out and store them in a secure location or use your password manager. I also recommend if you use an app, you choose one that will allow you to make secure backups, such as Ente. You are also limited by the types of 2FA a website supports; many don't support app- or hardware token-based 2FA.

Remove your information from data brokers

Why? This is a problem that mostly affects people in the US. It surprises many people that information from their credit reports and other public records is scraped and available (for free or at a low cost) online through "data broker" websites. I have shocked friends who didn't believe this was an issue by searching for their full names and within 5 minutes being able to show them their birthday, home address, and phone number. This is a serious privacy problem!

Recommendation: Opt out of any and all data broker websites to remove this information from the internet. This is especially important if you are at risk of being stalked or harassed.

Guides:

Limitations: It can take time for your information to be removed once you opt out, and unfortunately search engines may have cached your information for a while longer. This is also not a one-and-done process. New data brokers are constantly popping up and some may not properly honour your opt out, so you will need to check on a regular basis (perhaps once or twice a year) to make sure your data has been properly scrubbed. This also cannot prevent someone from directly searching public records to find your information, but that requires much more effort.

"Recommended security measures" I think beginners should avoid

We've covered a lot of tasks you should do, but I also think it's important to cover what not to do. I see many of these tools recommended to security beginners, and I think that's a mistake. For each tool, I will explain my reasoning around why I don't think you should use it, and the scenarios in which it might make sense to use it.

"Secure email"

What is it? Many email providers, such as Proton Mail, advertise themselves as providing secure email. They are often recommended as a "more secure" alternative to typical email providers such as GMail.

What's the problem? Email is fundamentally insecure by design. The email specification (RFC 3207) states that any publicly referenced email server MUST NOT require the use of transport encryption (STARTTLS) to accept mail. Email providers can of course provide additional security by encrypting their copies of your email, and providing you access to your email by HTTPS, but the messages themselves can always be sent without encryption. Some platforms such as Proton Mail advertise end-to-end encrypted emails so long as you email another Proton user. This is not truly email, but their own internal encrypted messaging platform that follows the email format.

What should I do instead? Use Signal to send encrypted messages. NEVER assume the contents of an email are secure.

Who should use it? I don't believe there are any major advantages to using a service such as this one. Even if you pay for a more "secure" email provider, the majority of your emails will still be delivered to people who don't. Additionally, while I don't use or necessarily recommend their service, Google offers an Advanced Protection Program for people who may be targeted by state-level actors.

PGP/GPG Encryption

What is it? PGP ("Pretty Good Privacy") and GPG ("GNU Privacy Guard") areencryption and cryptographic signing software. They are often recommended toencrypt messages or email.

What's the problem? GPG is decades old and its usability has always been terrible. It is extremely easy to accidentally send a message that you thought was encrypted without encryption! The problems with PGP/GPG have been extensively documented.

What should I do instead? Use Signal to send encrypted messages. Again, NEVER use email for sensitive information.

Who should use it? Software developers who contribute to projects where there is a requirement to use GPG should continue to use it until an adequate alternative is available. Everyone else should live their lives in PGP-free bliss.

Installing a "secure" operating system (OS) on your phone

What is it? There are a number of self-installed operating systems for Android phones, such as GrapheneOS, that advertise as being "more secure" than using the version of the Android operating system provided by your phone manufacturer. They often remove core Google APIs and services to allow you to "de-Google" your phone.

What's the problem? These projects are relatively niche, and don't have nearly enough resourcing to be able to respond to the high levels of security pressure Android experiences (such as against the forensic tools I mentioned earlier). You may suddenly lose security support with no notice, as with CalyxOS. You need a high level of technical know-how and a lot of spare time to maintain your device with a custom operating system, which is not a reasonable expectation for the average person. By stripping all Google APIs such as Google Play Services, some useful apps can no longer function. And some law enforcement organizations have gone as far as accusing people who install GrapheneOS on Pixel phones of engaging in criminal activity.

What should I do instead? For the best security on an Android device, use a phone manufactured by Google or Samsung (smaller manufacturers are more unreliable), or consider buying an iPhone. Make sure your device is receiving security updates and is up-to-date.

Who should use it? These projects are great for tech enthusiasts who are interested in contributing to and developing them further. They can be used to give new life to old phones that are not receiving security or software updates. They are also great for people with an interest in free and open source software and digital autonomy. But these tools are not a good choice for a general audience, nor do they provide more practical security than using an up-to-date Google or Samsung Android phone.

Virtual Private Network (VPN) Services

What is it? A virtual private network or VPN service can provide you with a secure tunnel from your device to the location where the VPN operates. This means that if I am using my phone in Seattle connected to a VPN in Amsterdam, and I access a website, it appears to the website that my phone is located in Amsterdam.

What's the problem? VPN services are frequently advertised as providing security or protection from nefarious bad actors, or helping protect your privacy. These benefits are often far overstated, and there are predatory VPN providers that can actually be harmful. It costs money and resources to provide a VPN, so free VPN services are especially suspect. When you use a VPN, the VPN provider knows the websites you are visiting in order to provide you with the service. Free VPN providers may sell this data in order to cover the cost of providing the service, leaving you with less security and privacy. The average person does not have the knowledge to be able to determine if a VPN service is trustworthy or not. VPNs also don't provide any additional encryption benefits if you are already using HTTPS. They may provide a small amount of privacy benefit if you are connected to an untrusted network with an attacker.

What should I do instead? Always use HTTPS to access websites. Don't connect to untrusted internet providers—for example, use cellphone network data instead of a sketchy Wi-Fi access point. Your local neighbourhood coffee shop is probably fine.

Who should use it? There are three main use cases for VPNs. The first is to bypass geographic restrictions. A VPN will cause all of your web traffic to appear to be coming from another location. If you live in an area that has local internet censorship policies, you can use a VPN to access the internet from a location that lacks such policies. The second is if you know your internet service provider is actively hostile or malicious. A trusted VPN will protect the visibility of all your traffic, including which websites you visit, from your internet service provider, and the only thing they will be able to see is that you are accessing a VPN. The third use case is to access a network that isn't connected to the public internet, such as a corporate intranet. I strongly discourage the use of VPNs for "general-purpose security."

Tor

What is it? Tor, "The Onion Router", is a free and open source software project that provides anonymous networking. Unlike with a VPN, where the VPN provider knows who you are and what websites you are requesting, Tor's architecture makes it extremely difficult to determine who sent a request.

What's the problem? Tor is difficult to set up properly; similar to PGP-encrypted email, it is possible to accidentally not be connected to Tor and not know the difference. This usability has improved over the years, but Tor is still not a good tool for beginners to use. Due to the way Tor works, it is also extremely slow. If you have used cable or fiber internet, get ready to go back to dialup speeds. Tor also doesn't provide perfect privacy and without a strong understanding of its limitations, it can be possible to deanonymize someone despite using it. Additionally, many websites are able to detect connections from the Tor network and block them.

What should I do instead? If you want to use Tor to bypass censorship, it is often better to use a trusted VPN provider, particularly if you need high bandwidth (e.g. for streaming). If you want to use Tor to access a website anonymously, Tor itself might not be enough to protect you. For example, if you need to provide an email address or personal information, you can decline to provide accurate information and use a masked email address. A friend of mine once used the alias "Nunya Biznes" 🥸

Who should use it? Tor should only be used by people who are experienced users of security tools and understand its strengths and limitations. Tor also is best used on a purpose-built system, such as Tor Browser or Freedom of the Press Foundation's SecureDrop.

I want to learn more!

I hope you've found this guide to be a useful starting point. I always welcome folks reaching out to me with questions, though I might take a little bit of time to respond. You can always email me.

If there's enough interest, I might cover the following topics in a future post:

  • Threat modelling, which you can get started with by reading the EFF's or VCW's guides
  • Browser addons for privacy, which Consumer Reports has a tip for
  • Secure DNS, which you can read more about here

Stay safe out there! 🔒

27 January, 2026 02:00AM by Elana Hashman

January 26, 2026

Otto Kekäläinen

Ubuntu Pro subscription - should you pay to use Linux?

Ubuntu Pro is a subscription offering for Ubuntu users who want to pay for the assurance of getting quick and high-quality security updates for Ubuntu. I tested it out to see how it works in practice, and to evaluate how well it works as a commercial open source service model for Linux.

Anyone running Ubuntu can subscribe to it at ubuntu.com/pro/subscribe by selecting the setup type “Desktops” for the price of $25 per year (+applicable taxes) for Enterprise users. There is also a free version for personal use. Once you have an account, you can find your activation token at ubuntu.com/pro/dashboard, and use it to activate Ubuntu Pro on your desktop or laptop Ubuntu machine by running sudo pro attach <token>:

$ sudo pro attach aabbcc112233aabbcc112233
Enabling default service esm-apps
Updating package lists
Ubuntu Pro: ESM Apps enabled
Enabling default service esm-infra
Updating package lists
Ubuntu Pro: ESM Infra enabled
Enabling default service livepatch
Installing canonical-livepatch snap
Canonical livepatch enabled.
Unable to determine current instance-id
This machine is now attached to 'Ubuntu Pro Desktop'

You can at any time confirm the Ubuntu Pro status by running:

$ sudo pro status --all
SERVICE          ENTITLED  STATUS    DESCRIPTION
anbox-cloud      yes       disabled  Scalable Android in the cloud
cc-eal           yes       n/a       Common Criteria EAL2 Provisioning Packages
esm-apps         yes       enabled   Expanded Security Maintenance for Applications
esm-infra        yes       enabled   Expanded Security Maintenance for Infrastructure
fips             yes       n/a       NIST-certified FIPS crypto packages
fips-preview     yes       n/a       Preview of FIPS crypto packages undergoing certification with NIST
fips-updates     yes       disabled  FIPS compliant crypto packages with stable security updates
landscape        yes       enabled   Management and administration tool for Ubuntu
livepatch        yes       disabled  Canonical Livepatch service
realtime-kernel  yes       disabled  Ubuntu kernel with PREEMPT_RT patches integrated
├ generic        yes       disabled  Generic version of the RT kernel (default)
├ intel-iotg     yes       n/a       RT kernel optimized for Intel IOTG platform
└ raspi          yes       n/a       24.04 Real-time kernel optimised for Raspberry Pi
ros              yes       n/a       Security Updates for the Robot Operating System
ros-updates      yes       n/a       All Updates for the Robot Operating System
usg              yes       disabled  Security compliance and audit tools

Enable services with: pro enable <service>

     Account: Otto Kekalainen
Subscription: Ubuntu Pro Desktop
 Valid until: Thu Mar 3 08:08:38 2026 PDT

Technical support level: essential

For a regular desktop/laptop user the most relevant service is the esm-apps, which delivers extended security updates to many applications typically used on desktop systems.

Another relevant command to confirm the current subscription status is:

$ sudo pro security-status
2828 packages installed:
 2143 packages from Ubuntu Main/Restricted repository
 660 packages from Ubuntu Universe/Multiverse repository
 13 packages from third parties
 12 packages no longer available for download

To get more information about the packages, run
    pro security-status --help
for a list of available options.

This machine is receiving security patching for Ubuntu Main/Restricted
repository until 2029.

This machine is attached to an Ubuntu Pro subscription.

Ubuntu Pro with 'esm-infra' enabled provides security updates for
Main/Restricted packages until 2034.

Ubuntu Pro with 'esm-apps' enabled provides security updates for
Universe/Multiverse packages until 2034. You have received 26 security
updates.

This confirms the scope of the security support. You can even run sudo pro security-status --esm-apps to get a detailed breakdown of the installed software packages in scope for Expanded Security Maintenance (ESM).
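
Individual services from the status table can also be toggled with pro enable and pro disable. As a small illustration (Livepatch is listed as disabled above; enabling it is optional and unrelated to ESM):

sudo pro enable livepatch                # opt into the Canonical Livepatch service
sudo pro status --all | grep livepatch   # confirm it now shows as enabled
sudo pro disable livepatch               # switch it off again if you don't want it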

Experiences from using Ubuntu Pro for over a year

Personally I have been using it on two laptop systems for well over a year now and everything seems to have worked well. I see apt is downloading software updates from https://esm.ubuntu.com/apps/ubuntu, but other than that there aren’t any notable signs of Ubuntu Pro being in use. That is a good thing – after all one is paying for assurance that everything works with minimum disruptions, so the system that enables smooth sailing should stay in the background and not make too much noise of itself.

Using Landscape to manage multiple Ubuntu laptops

Landscape portal reports showing security update status and resource utilization

Landscape.canonical.com is a fleet management system that shows information like security update status and resource utilization for the computers you administer. Ubuntu Pro attached systems under one’s account are not automatically visible in Landscape, but have to be enrolled in it.

To enroll an Ubuntu Pro attached desktop/laptop to Landscape, first install the required package with sudo apt install landscape-client and then run sudo landscape-config --account-name <account name> to start the configuration wizard. You can find your account name in the Landscape portal. On the last wizard question Request a new registration for this computer now? [y/N] hit y to accept. If successful, the new computer will be visible on the Landscape portal page “Pending computers”, from where you can click to accept it.

Landscape portal page showing pending computer registration

If I had a large fleet of computers, Landscape might come in useful. Also it is obvious Landscape is intended primarily for managing server systems. For example, the default alarm trigger on systems being offline, which is common for laptops and desktop computers, is an alert-worthy thing only on server systems.

It is good to know that Landscape exists, but on desktop systems I would probably skip it, and only stick to the security updates offered by Ubuntu Pro without using Landscape.

Landscape is evolving

The screenshots above are from the current Landscape portal which I have been using so far. Recently Canonical has also launched a new web portal, with a fresh look:

New Landscape dashboard with fresh look

This shows Canonical is actively investing in the service and it is likely going to sit at the center of their business model for years to come.

Other offerings by Canonical for individual users

Canonical, the company behind the world’s most popular desktop Linux distribution Ubuntu, has been offering various commercial support services for corporate customers since the company launched back in 2005, but there haven’t been any offerings available to individual users since Ubuntu One, with file syncing, a music store and more, was wound down back in 2014. Canonical and the other major Linux companies, Red Hat and SUSE, have always been very enterprise-oriented, presumably because achieving economies of scale is much easier when maintaining standardized corporate environments compared to dealing with a wide range of custom configurations that individual consumer customers might have. I remember some years ago Canonical offered desktop support under the Ubuntu Advantage product name, but the minimum subscription was for 5 desktop systems, which typically isn’t an option for a regular home consumer.

I am glad to see Ubuntu Pro is now available and I honestly hope people using Ubuntu will opt into it. The more customers it has, the more it incentivizes Canonical to develop and maintain features that are important for desktop and home users.

Pay for Linux because you can, not because you have to

Open source is a great software development model for rapid innovation and adoption, but I don’t think the business models in the space are yet quite mature. Users who get long-term value should participate more in funding open source maintenance work. While some donation platforms like GitHub Sponsors, OpenCollective and the like have gained popularity in recent years, none of them seem to generate recurring revenue comparable to the scale of how popular open source software is now in 2026.

I welcome more paid schemes, such as Ubuntu Pro, as I believe it is beneficial for the whole ecosystem. I also expect more service providers to enter this space and experiment with different open source business models and various forms of decentralized funding. Linux and open source are primarily free as in speech, but as a side effect license fees are hard to enforce and many use Linux without paying for it. The more people, corporations and even countries rely on it to stay sovereign in the information society, the more users should think about how they want to use Linux and who they want to pay to maintain it and other critical parts of the open source ecosystem.

26 January, 2026 12:00AM

January 25, 2026

Anton Gladky

Introducing v2/changelogs FTP-Master API

v2/changelogs – FTP-Master API released, and how it can be used to track your uploads

25 January, 2026 05:50PM

January 24, 2026

Gunnar Wolf

Finally some light for those who care about Debian on the Raspberry Pi

Finally, some light at the end of the tunnel!

As I have said in this blog and elsewhere, after putting quite a bit of work into generating the Debian Raspberry Pi images between late 2018 and 2023, I had to recognize I don’t have the time and energy to properly care for it.

I even registered a GSoC project for it. I mentored Kurva Prashanth, who did good work on the vmdb2 scripts we use for the image generation — but in the end, was unable to push them to be built in Debian infrastructure. Maybe a different approach was needed! While I adopted the images as they were conceived by Michael Stapelberg, sometimes it’s easier to start from scratch and build a fresh approach.

So, I’m not yet pointing at a stable, proven release, but to a good promise. And I hope I’m not being pushy by making this public: in the #debian-raspberrypi channel, waldi has shared the images he has created with the Debian Cloud Team’s infrastructure.

So, right now, the images built so far support Raspberry Pi families 4 and 5 (notably, not the 500 computer I have, due to a missing Device Tree, but I’ll try to help figure that bit out… Anyway, p400/500/500+ systems are not that usual). Work is underway to get the 3B+ to boot (some hackery is needed, as it only understands MBR partition schemes, so creating a hybrid image seems to be needed).

Debian Cloud images for Raspberries

Sadly, I don’t think the effort will be extended to cover older, 32-bit-only systems (RPi 0, 1 and 2).

Anyway, as this effort stabilizes, I will phase out my (stale!) work on raspi.debian.net, and will redirect it to point at the new images.

Comments

Andrea Pappacoda tachi@d.o 2026-01-26 17:39:14 GMT+1

Are there any particular caveats compared to using the regular Raspberry Pi OS?

Are they documented anywhere?

Gunnar Wolf gwolf.blog@gwolf.org 2026-01-26 11:02:29 GMT-6

Well, the Raspberry Pi OS includes quite a bit of software that’s not packaged in Debian for various reasons — some of it because it’s non-free demo-ware, some of it because it’s RPiOS-specific configuration, some of it… I don’t care, I like running Debian wherever possible 😉

Andrea Pappacoda tachi@d.o 2026-01-26 18:20:24 GMT+1

Thanks for the reply! Yeah, sorry, I should’ve been more specific. I also just care about the Debian part. But: are there any hardware issues or unsupported stuff, like booting from an SSD (which I’m currently doing)?

Gunnar Wolf gwolf.blog@gwolf.org 2026-01-26 12:16:29 GMT-6

That’s… beyond my knowledge 😉 Although I can tell you that:

  • Raspberry Pi OS has hardware support as soon as their new boards hit the market. The ability to even boot a board can take over a year for the mainline Linux kernel (at least, it has, both in the cases of the 4 and the 5 families).

  • Also, sometimes some bits of hardware are not discovered by the Linux kernels even if the general family boots because they are not declared in the right place of the Device Tree (i.e. the wireless network interface in the 02W is in a different address than in the 3B+, or the 500 does not fully boot while the 5B now does). Usually it is a matter of “just” declaring stuff in the right place, but it’s not a skill many of us have.

  • Also, many RPi “hats” ship with their own Device Tree overlays, and they cannot always be loaded on top of mainline kernels.

Andrea Pappacoda tachi@d.o 2026-01-26 19:31:55 GMT+1

> That’s… beyond my knowledge 😉 Although I can tell you that:

> Raspberry Pi OS has hardware support as soon as their new boards hit the market. The ability to even boot a board can take over a year for the mainline Linux kernel (at least, it has, both in the cases of the 4 and the 5 families).

Yeah, unfortunately I’m aware of that… I’ve also been trying to boot OpenBSD on my rpi5 out of curiosity, but been blocked by my somewhat unusual setup involving an NVMe SSD as the boot drive :/

> Also, sometimes some bits of hardware are not discovered by the Linux kernels even if the general family boots because they are not declared in the right place of the Device Tree (i.e. the wireless network interface in the 02W is in a different address than in the 3B+, or the 500 does not fully boot while the 5B now does). Usually it is a matter of “just” declaring stuff in the right place, but it’s not a skill many of us have.

At some point in my life I had started reading a bit about device trees and stuff, but got distracted by other stuff before I could develop any familiarity with it. So I don’t have the skills either :)

> Also, many RPi “hats” ship with their own Device Tree overlays, and they cannot always be loaded on top of mainline kernels.

I’m definitely not happy to hear this!

Guess I’ll have to try, and maybe report back once some page for these new builds materializes.

24 January, 2026 04:24PM

January 23, 2026

Reproducible Builds (diffoscope)

diffoscope 311 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 311. This version includes the following changes:

[ Chris Lamb ]
* Fix test compatibility with u-boot-tools 2026-01. Thanks, Jelle!
* Bump Standards-Version to 4.7.3.
* Drop implied "Priority: optional" from debian/control.
* Also drop implied "Rules-Requires-Root: no" entry in debian/control.
* Update copyright years.

You can find out more by visiting the project homepage.

23 January, 2026 12:00AM

January 22, 2026

Steinar H. Gunderson

Rewriting Git merge history, part 2

In part 1, we discovered the problem of rewriting git history in the presence of nontrivial merges. Today, we'll discuss the workaround I chose.

As I previously mentioned, and as Julia Evans' excellent data model document explains, a git commit is just a snapshot of a tree (suitably deduplicated by means of content hashes), a commit message and a (possibly empty) set of parents. So fundamentally, we don't really need to mess with diffs; if we can make the changes we want directly to the tree (well, technically, make a new tree that looks like what we want, and a new commit using that tree), we're good. (Diffs in git are, generally, just git diff looking at two trees and trying to make sense of it. This has the unfortunate result that there is no solid way of representing a rename; there are heuristics, but if you rename a file and change it in the same commit, they may fail and stuff like git blame or git log may be broken, depending on flags. Gerrit doesn't even seem to understand a no-change copy.)

In earlier related cases, I've taken this to the extreme by simply hand-writing a commit using git commit-tree. Create exactly the state that you want by whatever means, commit it in some dummy commit and then use that commit's tree with some suitable commit message and parent(s); voila. But it doesn't help us with history; while we can fix up an older commit in exactly the way we'd like, we also need the latter commits to have our new fixed-up commit as parent.

Thus, enter git filter-branch. git filter-branch comes with a suitable set of warnings about eating your repository and being deprecated (I never really figured out its supposed replacement git filter-repo, so I won't talk much about it), but it's useful when all else fails.

In particular, git filter-branch allows you to do arbitrary changes to the tree of a series of commits, updating the parent commit IDs as rewrites happen. So if you can express your desired changes in a way that's better than “run the editor” (or if you're happy running the editor and making the same edit manually 300 times!), you can just run that command over all commits in the entire branch (forgive me for breaking lines a bit):

git filter-branch -f --tree-filter \
  '! [ -f src/cluster.cpp ] || sed -i "s/if (mi.rank != 0)/if (mi.rank != 0    \&\& mi.rank == rank())/" src/cluster.cpp' \
  665155410753978998c8080c813da660fc64bbfe^..cluster-master

This is suitably terrible. Remember, if we only did this for one commit, the change wouldn't be there in the next one (git diff would show that it was immediately reverted), so filter-branch needs to do this over and over again, once for each commit (tree) in the branch. And I wanted multiple fixups, so I had a bunch of these; some of them were as simple as “copy this file from /tmp” and some were shell scripts that did things like running clang-format.

You can do similar things for commit messages; at some point, I figured I should write “cluster” (the official name for the branch) and not “cluster-master” (my local name) in the merge messages, so I could just do

git filter-branch \
  --msg-filter 'sed s/cluster-master/cluster/g' \
  665155410753978998c8080c813da660fc64bbfe^..cluster-master

I also did a bunch of them to fix up my email address (GIT_COMMITTER_EMAIL wasn't properly set), although I cannot honestly remember whether I used --env-filter or something else. Perhaps that was actually with git rebase and `-r --exec 'git commit --amend --no-edit --author …'` or similar. There are many ways to do ugly things. :-)

Eventually, I had the branch mostly in a state where I thought it would be ready for review, but after uploading to GitHub, one reviewer commented that some of my merges against master were commits that didn't exist in master. Huh? That's… surprising.

It took a fair bit of digging to figure out what had happened: git filter-branch had rewritten some commits that it didn't actually have to; the merge sources from upstream. This is normally harmless, since git hashes are deterministic, but these commits were signed by the author! And filter-branch (or perhaps fast-export, upon which it builds?) generally assumes that it can't sign stuff with other people's keys, so it just strips the signatures, deeming that better than having invalid ones sitting around. Now, of course, these commit signatures would still be valid since we didn't change anything, but evidently, filter-branch doesn't have any special code for that.

Removing an object like this (a “gpgsig” attribute, it seems) changes the commit hash, which is where the phantom commits came from. I couldn't get filter-branch to turn it off… but again, parents can be freely changed, diffs don't exist anyway. So I wrote a little script that took in parameters suitable for git commit-tree (mostly the parent list), rewrote known-bad parents to known-good parents, gave the script to git filter-branch --commit-filter, and that solved the problem. (I guess --parent-filter would also have worked; I don't think I saw it in the man page at the time.)
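
For the curious, the idea was roughly the following. A --commit-filter is handed the arguments that would otherwise go to git commit-tree (the tree ID plus -p parent pairs), with the commit message on stdin, so rewriting a parent is just a matter of swapping one hash for another before committing. This is a sketch, not the actual script; the two 40-character IDs below are placeholders, not the real commits:

# Sketch: map a signature-stripped ("phantom") parent back to the original
# upstream commit before the commit is recreated.
git filter-branch -f --commit-filter '
    args=""
    for arg in "$@"; do
        case "$arg" in
            # hypothetical: the rewritten (signature-stripped) parent ID
            1111111111111111111111111111111111111111)
                arg=2222222222222222222222222222222222222222 ;;
        esac
        args="$args $arg"
    done
    git commit-tree $args   # reads the message from stdin, prints the new commit ID
' 665155410753978998c8080c813da660fc64bbfe^..cluster-master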

So, well, I won't claim this is an exercise in elegance. (Perhaps my next adventure will be figuring out how this works in jj, which supposedly has conflicts as more of a first-class concept.) But it got the job done in a couple of hours after fighting with rebase for a long time, the PR was reviewed, and now the Stockfish cluster branch is a little bit more alive.

22 January, 2026 07:45AM

January 21, 2026

hackergotchi for Evgeni Golov

Evgeni Golov

Validating cloud-init configs without being root

Somehow this whole DevOps thing is all about generating the wildest things from some (usually equally wild) template.

And today we're gonna generate YAML from ERB, what could possibly go wrong?!

Well, actually, quite a lot, so one wants to validate the generated result before using it to break systems at scale.

The YAML we generate is a cloud-init cloud-config, and while checking that we generated a valid YAML document is easy (and we were already doing that), it would be much better if we could check that cloud-init can actually use it.

Enter cloud-init schema, or so I thought. Turns out running cloud-init schema is rather broken without root privileges, as it tries to load a ton of information from the running system. This seems like a bug (or multiple), as the data should not be required for the validation of the schema itself. I've not found a way to disable that behavior.

Luckily, I know Python.

Enter evgeni-knows-better-and-can-write-python:

#!/usr/bin/env python3

import sys

from cloudinit.config.schema import get_schema, validate_cloudconfig_file, SchemaValidationError

try:
    valid = validate_cloudconfig_file(config_path=sys.argv[1], schema=get_schema())
    if not valid:
        raise RuntimeError("Schema is not valid")
except (SchemaValidationError, RuntimeError) as e:
    print(e)
    sys.exit(1)

The canonical1 version of this lives in the Foreman git repo, so go there if you think this will ever receive any updates.

The hardest part was to understand the validate_cloudconfig_file API, as it will sometimes raise a SchemaValidationError, sometimes a RuntimeError and sometimes just return False. No idea why. But the above just turns it into a couple of printed lines and a non-zero exit code; unless of course there are no problems, in which case you get peaceful silence.
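Assuming the script is saved as validate-cloud-config.py (a file name made up for this example; the file in the Foreman repo may be called something else), using it is just:

# prints nothing and exits 0 on success, prints the error and exits 1 otherwise
python3 validate-cloud-config.py /path/to/generated/cloud-config.yaml \
    && echo "cloud-config OK" \
    || echo "cloud-config rejected"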

21 January, 2026 07:42PM by evgeni

January 20, 2026

Sahil Dhiman

Conferences, why?

Back in December, I was working to help organize multiple different conferences. One has already happened; the rest are still works in progress. That’s when the thought struck me: why so many conferences, and why do I work for them?

I have been fairly active in the scene since 2020. For most conferences, I usually arrive late in the city on the previous day and usually leave the city on conference close day. Conferences for me are the place to meet friends and new folks and hear about them, their work, new developments, and what’s happening in their interest zones. I feel naturally happy talking to folks. In this case, people inspire me to work. Nothing can replace a passionate technical and social discussion, which stretches way into dinner parties and later.

For most conference discussions now, I just show up without a set role (DebConf is probably an exception to it). It usually involves talking to folks, suggesting what needs to be done, doing a bit of it myself, and finishing some last-minute stuff during the actual thing.

Having more of these conferences and helping make them happen naturally gives everyone more places to come together, meet, talk, and work on something.

No doubt, one reason for all these conferences is evangelism for, let's say, Free Software, OpenStreetMap, Debian, etc., which is good and needed for the pipeline. But for me, the primary reason would always be meeting folks.

20 January, 2026 02:27AM

January 19, 2026

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RApiDatetime 0.0.11 on CRAN: Micro-Maintenance

A new (micro) maintenance release of our RApiDatetime package is now on CRAN, coming only a good week after the 0.0.10 release which itself had a two year gap to its predecessor release.

RApiDatetime provides a number of entry points for C-level functions of the R API for Date and Datetime calculations. The functions asPOSIXlt and asPOSIXct convert between long and compact datetime representation, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetime. Lastly, asDatePOSIXct converts to a date type. All these functions are rather useful, but were not previously exported by R for C-level use by other packages. Which this package aims to change.

This release adds a single PROTECT (and UNPROTECT) around one variable as the rchk container and service by Tomas now flagged this. Which is … somewhat peculiar, as this is old code also ‘borrowed’ from R itself, but there is no point arguing so I just added it.

Details of the release follow based on the NEWS file.

Changes inRApiDatetime version 0.0.11 (2026-01-19)

  • Add PROTECT (and UNPROTECT) to appease rchk

Courtesy of my CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

19 January, 2026 11:21PM

Isoken Ibizugbe

Mid-Point Project Progress

Halfway There

Hurray!🎉 I have officially reached the 6-week mark, the halfway point of my Outreachy internship. The time has flown by incredibly fast, yet it feels short because there is still so much exciting work to do.

I remember starting this journey feeling overwhelmed, trying to gain momentum. Today, I feel much more confident. I began with the apps_startstop task during the contribution period, writing manual test steps and creating preparation Perl scripts for the desktop environments. Since then, I’ve transitioned into full automation and taken a liking to reading openQA upstream documentation when I have issues or for reference.

In all of this, I’ve committed over 30 hours a week to the project. This dedicated time has allowed me to look in-depth into the Debian ecosystem and automated quality assurance.

The Original Roadmap vs. Reality

Reviewing my 12-week goal, which included extending automated tests for “live image testing,” “installer testing,” and “documentation,” I am happy to report that I am right on track. My work on desktop apps tests has directly improved the quality of both the Live Images and the netinst (network installer) ISOs.

Accomplishments

I have successfully extended the apps_startstop tests for two Desktop Environments (DEs): Cinnamon and LXQt. These tests ensure that common and DE-specific apps launch and close correctly across different environments.

  • Merged Milestone: My Cinnamon tests have been officially merged into the upstream repository! [MR !84]
  • LXQt & Adaptability: I am in the final stages of the LXQt tests. Interestingly, I had to update these tests mid-way through because of a version update in the DE. This required me to update the needles (image references) to match the new UI, a great lesson in software maintenance.

Solving for “Synergy”

One of my favorite challenges was suggested by my mentor, Roland: synergizing the tests to reduce redundancy. I observed that some applications (like Firefox and LibreOffice) behave identically across different desktops. Instead of duplicating Perl scripts/code for every single DE, I used symbolic links. This allows the use of the same Perl script and possibly the same needles, making the test suite lighter and much easier to maintain.
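A rough sketch of the idea (the directory and file names here are invented and will not match the actual repository layout):

# Reuse an existing desktop-agnostic test for another DE instead of copying it
cd tests/lxqt
ln -s ../cinnamon/firefox.pm firefox.pm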

The Contributor Guide

During the contribution phase, I noticed how rigid the documentation and coding style requirements are. While this ensures high standards and uniformity, it can be intimidating for newcomers and time-consuming for reviewers.

To help, I created a contributor guide [MR !97]. This guide addresses the project’s writing style. My goal is to reduce the back-and-forth during reviews, making the process more efficient for everyone and helping new contributors.

Looking Forward

For the second half of the internship, I plan to:

  1. Assist others: Help new contributors extend apps start-stop tests to even more desktop environments.
  2. Explore new coverage: Move beyond start-stop tests into deeper functional testing.

This journey has been an amazing experience of learning and connecting with the wider open-source community, especially Debian Women and the Linux QA team.

I am deeply grateful to my mentors, Tassia Camoes Araujo, Roland Clobus, and Philip Hands, for their constant guidance and for believing in my ability to take on this project.

Here’s to the next 6 weeks🥂

19 January, 2026 09:15PM by Isoken Ibizugbe

Hellen Chemtai

Internship Highlights at Outreachy: My Journey with Debian OpenQA

Highlights

Hello world😀. I am an intern here at Outreachy working with Debian OpenQA Image testing team. The work consists of testing Images with OpenQA. The internship has reached midpoint and here are some of the highlights that I have had so far.

  1. The mentors : Roland Clobus, Tassia Camoes and Philip Hands are very good mentors. I like the constant communication and the help I get while working on the project. I enjoy working with this team.
  2. The community : The contributors, mentors and the greater SUSE OpenQA community are constantly in communication. I learn a lot from these meetings.
  3. The women network : The women of Debian meet and network. The meetings are interactive and we are encouraged to interact.
  4. The project : We are making progress one step at a time. Isoken Ibizugbe is my fellow intern working on start-stop tests. I am working on live installers tests.

Communication

I have learned a lot during my internship. I have always been on the silent path of life with little communication. I once told myself being a developer would hide me behind a computer to avoid socializing. Being in open source, especially this internship, has helped me out with communication and networking. The teamwork in the project has helped me a lot.

  1. My mentors encourage communication. Giving project updates and stating when we get stuck.
  2. My mentors have scheduled weekly meetings to communicate about the project
  3. We are constantly invited to the SUSE meetings by mentors or by Sam Thursfield who is part of the team.
  4. Female contributors are encouraged to join Debian women monthly meetings for networking

Lessons so far

I have had challenges, solved problems and learned new skills all this while.

  1. I have learned Perl, OpenQA configuration, needle editing and improved my Linux and Git skills
  2. I have learned how various Images are installed, booted and run through live viewing of tests
  3. I have solved many test errors and learned to work with applications that are needed in the OS installations, e.g. rufus
  4. I have learned how virtual machines work and how to solve errors in regard to them

So far so good. I am grateful to be a contributor towards the project and hope to continue learning.

19 January, 2026 03:28PM by hellen chemtai tay

hackergotchi for Jonathan Dowland

Jonathan Dowland

FOSDEM 2026

I'm going toFOSDEM 2026!

I'm presenting in the Containers dev room. My talk is Java Memory Management in Containers and it's scheduled as the first talk on the first day. I'm the warm-up act!

The Java devroom has been a stalwart at FOSDEM since 2004 (sometimes in other forms), but sadly there's no Java devroom this year. There's a story about that, but it's not mine to tell.

Please recommend to me any interesting talks! Here's a few that caught my eye:

Debian/related:

Containers:

Research:

Other:

19 January, 2026 02:12PM

Francesco Paolo Lovergine

A Terramaster NAS with Debian, take two.

After experimenting at home, the very first professional-grade NAS from Terramaster arrived at work, too, with 12 HDD bays and possibly a pair of M.2 NVMe cards. In this case, I again installed a plain Debian distribution, but HDD monitoring required some configuration adjustments to run smartd properly.

A decent approach to data safety is to run regularly scheduled short and long SMART tests on all disks to detect potential damage. Running such tests on all disks at once isn't ideal, so I set up a script to create a staggered configuration and test multiple groups of disks at different times. Note that it is mandatory to re-read the device list at each reboot because device names and their order can change.

Of course, the same principle (short/long tests at regular intervals along the week) should also be applied to a simpler configuration, as in the case of my home NAS with a pair of RAID1 devices.
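For a setup that small, the staggering can simply be written by hand; a sketch of what such a config might look like (device names, hours and mail address are examples, not my actual configuration):

# /etc/smartd.conf -- hypothetical two-disk RAID1 home NAS
# short test daily, long test every Saturday, offset so the disks never test together
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/04) -m root
/dev/sdb -a -o on -S on -s (S/../.././03|L/../../6/06) -m root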

What follows is a simple script to create a staggered smartd.conf at boot time:

#!/bin/bash
#
# Save this as /usr/local/bin/create-smartd-conf.sh
#
# Dynamically generate smartd.conf with staggered SMART test scheduling
# at boot time based on discovered ATA devices

# HERE IS A LIST OF DIRECTIVES FOR THIS CONFIGURATION FILE.
# PLEASE SEE THE smartd.conf MAN PAGE FOR DETAILS
#
#   -d TYPE Set the device type: ata, scsi[+TYPE], nvme[,NSID],
#           sat[,auto][,N][+TYPE], usbcypress[,X], usbjmicron[,p][,x][,N],
#           usbprolific, usbsunplus, sntasmedia, sntjmicron[,NSID], sntrealtek,
#           ... (platform specific)
#   -T TYPE Set the tolerance to one of: normal, permissive
#   -o VAL  Enable/disable automatic offline tests (on/off)
#   -S VAL  Enable/disable attribute autosave (on/off)
#   -n MODE No check if: never, sleep[,N][,q], standby[,N][,q], idle[,N][,q]
#   -H      Monitor SMART Health Status, report if failed
#   -s REG  Do Self-Test at time(s) given by regular expression REG
#   -l TYPE Monitor SMART log or self-test status:
#           error, selftest, xerror, offlinests[,ns], selfteststs[,ns]
#   -l scterc,R,W  Set SCT Error Recovery Control
#   -e      Change device setting: aam,[N|off], apm,[N|off], dsn,[on|off],
#           lookahead,[on|off], security-freeze, standby,[N|off], wcache,[on|off]
#   -f      Monitor 'Usage' Attributes, report failures
#   -m ADD  Send email warning to address ADD
#   -M TYPE Modify email warning behavior (see man page)
#   -p      Report changes in 'Prefailure' Attributes
#   -u      Report changes in 'Usage' Attributes
#   -t      Equivalent to -p and -u Directives
#   -r ID   Also report Raw values of Attribute ID with -p, -u or -t
#   -R ID   Track changes in Attribute ID Raw value with -p, -u or -t
#   -i ID   Ignore Attribute ID for -f Directive
#   -I ID   Ignore Attribute ID for -p, -u or -t Directive
#   -C ID[+] Monitor [increases of] Current Pending Sectors in Attribute ID
#   -U ID[+] Monitor [increases of] Offline Uncorrectable Sectors in Attribute ID
#   -W D,I,C Monitor Temperature D)ifference, I)nformal limit, C)ritical limit
#   -v N,ST Modifies labeling of Attribute N (see man page)
#   -P TYPE Drive-specific presets: use, ignore, show, showall
#   -a      Default: -H -f -t -l error -l selftest -l selfteststs -C 197 -U 198
#   -F TYPE Use firmware bug workaround:
#           none, nologdir, samsung, samsung2, samsung3, xerrorlba
#   -c i=N  Set interval between disk checks to N seconds
#    #      Comment: text after a hash sign is ignored
#    \      Line continuation character
# Attribute ID is a decimal integer 1 <= ID <= 255
# except for -C and -U, where ID = 0 turns them off.

set -euo pipefail

# Test schedule configuration
BASE_SCHEDULE="L/../../6"  # Long test on Saturdays
TEST_HOURS=(01 03 05 07)   # 4 time slots: 1am, 3am, 5am, 7am
DEVICES_PER_GROUP=3

main() {
    # Get array of device names (e.g., sda, sdb, sdc)
    mapfile -t devices < <(ls -l /dev/disk/by-id/ | grep ata | awk '{print $11}' | grep sd | cut -d/ -f3 | sort -u)

    if [[ ${#devices[@]} -eq 0 ]]; then
        exit 1
    fi

    # Start building config file
    cat << EOF
# smartd.conf - Auto-generated at boot
# Generated: $(date '+%Y-%m-%d %H:%M:%S')
#
# Staggered SMART test scheduling to avoid concurrent disk load
# Long tests run on Saturdays at different times per group
#
EOF

    # Process devices into groups
    local group=0
    local count_in_group=0

    for i in "${!devices[@]}"; do
        local dev="${devices[$i]}"
        local hour="${TEST_HOURS[$group]}"

        # Add group header at start of each group
        if [[ $count_in_group -eq 0 ]]; then
            echo ""
            echo "# Group $((group + 1)) - Tests at ${hour}:00 on Saturdays"
        fi

        # Add device entry
        #echo "/dev/${dev} -a -o on -S on -s (${BASE_SCHEDULE}/${hour}) -m root"
        echo "/dev/${dev} -a -o on -S on -s (L/../../6/${hour}) -s (S/../.././$(((hour + 12) % 24))) -m root"

        # Move to next group when current group is full
        count_in_group=$((count_in_group + 1))
        if [[ $count_in_group -ge $DEVICES_PER_GROUP ]]; then
            count_in_group=0
            group=$(((group + 1) % ${#TEST_HOURS[@]}))
        fi
    done
}

main "$@"

To run such a script at boot, add a unit file to the systemd configuration.

sudo systemctl edit --full /etc/systemd/system/regenerate-smartd-conf.service
sudo systemctl enable regenerate-smartd-conf.service

Where the unit service is the following:

[Unit]
Description=Generate smartd.conf with staggered SMART test scheduling
# Wait for all local filesystems and udev device detection
After=local-fs.target systemd-udev-settle.service
Before=smartd.service
Wants=systemd-udev-settle.service
DefaultDependencies=no

[Service]
Type=oneshot
# Only generate the config file, don't touch smartd here
ExecStart=/bin/bash -c '/usr/local/bin/create-smartd-config.sh > /etc/smartd.conf'
StandardOutput=journal
StandardError=journal
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

19 January, 2026 01:00PM by Francesco P. Lovergine (mbox@lovergine.com)

Russell Coker

Furilabs FLX1s

The Aim

I have just got a Furilabs FLX1s [1] which is a phone running a modified version of Debian. I want to have a phone that runs all apps that I control and can observe and debug. Android is very good for what it does and there are security focused forks of Android which have a lot of potential, but for my use a Debian phone is what I want.

The FLX1s is not going to be my ideal phone, I am evaluating it for use as a daily-driver until a phone that meets my ideal criteria is built. In this post I aim to provide information to potential users about what it can do, how it does it, and how to get the basic functions working. I also evaluate how well it meets my usage criteria.

I am not anywhere near an average user. I don’t think an average user would ever even see one unless a more technical relative showed one to them. So while this phone could be used by an average user I am not evaluating it on that basis. But of course the features of the GUI that make a phone usable for an average user will allow a developer to rapidly get past the beginning stages and into more complex stuff.

Features

The Furilabs FLX1s [1] is a phone that is designed to run FuriOS, which is a slightly modified version of Debian. The purpose of this is to run Debian instead of Android on a phone. It has switches to disable the camera, phone communication, and microphone (similar to the Librem 5), but the one to disable phone communication doesn't turn off Wifi. The only other phone I know of with such switches is the Purism Librem 5.

It has a 720*1600 display which is only slightly better than the 720*1440 display in the Librem 5 and PinePhone Pro. This doesn’t compare well to the OnePlus 6 from early 2018 with 2280*1080 or the Note9 from late 2018 with 2960*1440 – which are both phones that I’ve run Debian on. The current price is $US499 which isn’t that good when compared to the latest Google Pixel series, a Pixel 10 costs $US649 and has a 2424*1080 display and it also has 12G of RAM while the FLX1s only has 8G. Another annoying thing is how rounded the corners are, it seems that round corners that cut off the content are a standard practice nowadays, in my collection of phones the latest one I found with hard right angles on the display was a Huawei Mate 10 Pro which was released in 2017. The corners are rounder than the Note 9, this annoys me because the screen is not high resolution by today’s standards so losing the corners matters.

The default installation is Phosh (the GNOME shell for phones) and it is very well configured. Based on my experience with older phone users I think I could give a phone with this configuration to a relative in the 70+ age range who has minimal computer knowledge and they would be happy with it. Additionally I could set it up to allow ssh login, and instead of going through the phone-support routine of trying to describe every GUI setting to click on, based on a web page describing the menus for the version of Android they are running, I could just ssh in and run diff on the .config directory to find out what they changed. Furilabs have done a very good job of setting up the default configuration: while Debian developers deserve a lot of credit for packaging the apps, the Furilabs people have chosen a good set of default apps to install to get it going and appear to have made some noteworthy changes to some of them.

Droidian

The OS is based on Android drivers (using the same techniques as Droidian [2]) and the storage device has the huge number of partitions you expect from Android as well as a 110G Ext4 filesystem for the main OS.

The first issue with the Droidian approach of using an Android kernel and containers for user space code to deal with drivers is that it doesn't work that well. There are 3 D state processes (uninterruptible sleep – which usually means a kernel bug if the process remains in that state) after booting and doing nothing special. My tests running Droidian on the Note 9 also had D state processes; in this case (on the FLX1s) they are D state kernel threads (I can't remember if the Note 9 had regular processes or kernel threads stuck in D state). It is possible for a system to have full functionality in spite of some kernel threads in D state but generally it's a symptom of things not working as well as you would hope.

The design of Droidian is inherently fragile. You use a kernel and user space code from Android and then use Debian for the rest. You can't do everything the Android way (with the full OS updates etc) and you also can't do everything the Debian way. The TOW Boot functionality in the PinePhone Pro is really handy for recovery [3]: it allows the internal storage to be accessed as a USB mass storage device. The full Android setup with ADB has some OK options for recovery, but part Android and part Debian has fewer options. While it probably is technically possible to do the same things in regard to OS repair and reinstall, the fact that it's different from most other devices means that fixes can't be done in the same way.

Applications

GUI

The system uses Phosh and Phoc, the GNOME system for handheld devices. It’s a very different UI from Android, I prefer Android but it is usable with Phosh.

IM

Chatty works well for Jabber (XMPP) in my tests. It supports Matrix which I didn’t test because I don’t desire the same program doing Matrix and Jabber and because Matrix is a heavy protocol which establishes new security keys for each login so I don’t want to keep logging in on new applications.

Chatty also does SMS but I couldn’t test that without the SIM caddy.

I use Nheko for Matrix which has worked very well for me on desktops and laptops running Debian.

Email

I am currently using Geary for email. It works reasonably well but is lacking proper management of folders, so I can't just subscribe to the important email on my phone so that bandwidth isn't wasted on less important email (there is a GNOME gitlab issue about this – see the Debian Wiki page about Mobile apps [4]).

Music

Music playing isn’t a noteworthy thing for a desktop or laptop, but a good music player is important for phone use. The Lollypop music player generally does everything you expect along with support for all the encoding formats including FLAC0 – a major limitation of most Android music players seems to be lack of support for some of the common encoding formats. Lollypop has it’s controls for pause/play and going forward and backward one track on the lock screen.

Maps

The installed map program is gnome-maps which works reasonably well. It gets directions via the Graphhopper API [5]. One thing we really need is a FOSS replacement for Graphhopper in GNOME Maps.

Delivery and Unboxing

I received my FLX1s on the 13th of Jan [1]. I had paid for it on the 16th of Oct but hadn't received the email with the confirmation link, so the order had been put on hold. But after I contacted support about that on the 5th of Jan they rapidly got it to me, which was good. They also gave me a free case and screen protector to apologise. I don't usually use screen protectors, but in this case it might be useful as the edges of the case don't even extend 0.5mm above the screen. So if it falls face down the case won't help much.

When I got it there was an open space at the bottom where the caddy for SIMs is supposed to be. So I couldn’t immediately test VoLTE functionality. The contact form on their web site wasn’t working when I tried to report that and the email for support was bouncing.

Bluetooth

As a test of Bluetooth I connected it to my Nissan LEAF, which worked well for playing music, and I connected it to several Bluetooth headphones. My Thinkpad running Debian/Trixie doesn't connect to the LEAF or to headphones that worked with previous laptops running Debian and Ubuntu. A friend's laptop running Debian/Trixie also wouldn't connect to the LEAF, so I suspect a bug in Trixie; I need to spend more time investigating this.

Wifi

Currently 5GHz wifi doesn’t work, this is a software bug that the Furilabs people are working on. 2.4GHz wifi works fine. I haven’t tested running a hotspot due to being unable to get 4G working as they haven’t yet shipped me the SIM caddy.

Docking

This phone doesn’t support DP Alt-mode or Thunderbolt docking so it can’t drive an external monitor. This is disappointing, Samsung phones and tablets have supported such things since long before USB-C was invented. Samsung DeX is quite handy for Android devices and that type feature is much more useful on a device running Debian than on an Android device.

Camera

The camera works reasonably well on the FLX1s. Until recently the camera on the Librem 5 didn't work, and the camera on my PinePhone Pro currently doesn't work. Here are samples of the regular camera and the selfie camera on the FLX1s and the Note 9. I think this shows that the camera is pretty decent. The selfie looks better and the front camera is worse for the relatively close photo of a laptop screen – taking photos of computer screens is an important part of my work but I can probably work around that.

I wasn’t assessing this camera t find out if it’s great, just to find out if I have the sorts of problems I had before and it just worked. The Samsung Galaxy Note series of phones has always had decent specs including good cameras. Even though the Note 9 is old comparing to it is a respectable performance. The lighting was poor for all photos.

FLX1s


Note 9


Power Use

In 93 minutes of having the PinePhone Pro, Librem 5, and FLX1s online with open ssh sessions from my workstation, the PinePhone Pro went from 100% battery to 26%, the Librem 5 went from 95% to 69%, and the FLX1s went from 100% to 99%. The battery discharge rates were reported as 3.0W, 2.6W, and 0.39W respectively. Based on having a 16.7Wh battery, 93 minutes of use should have been close to 4% battery use, but in any case all measurements make it clear that the FLX1s will have a much longer battery life. That includes the informal measurement of just putting my fingers on the phones and feeling the temperature (the FLX1s felt cool and the others felt hot).

The PinePhone Pro and the Librem 5 have an optional “Caffeine mode” which I enabled for this test; without that enabled the phone goes into a sleep state and disconnects from Wifi. So those phones would use much less power with Caffeine mode disabled, but they also couldn't get fast response to notifications etc. I found the option to enable a Caffeine mode switch on the FLX1s but the power use was reported as being the same both with and without it.

Charging

One problem I found with my phone is that in every case it takes 22 seconds to negotiate power. Even when using straight USB charging (no BC or PD) it doesn't draw any current for 22 seconds. When I connect it, it will stay at 5V, varying between 0W and 0.1W (current rounded off to zero), for 22 seconds or so and then start charging. After the 22 second delay the phone will make the tick sound indicating that it's charging and the power meter will measure that it's drawing some current.

I added the table from my previous post about phone charging speed [6] with an extra row for the FLX1s. For charging from my PC USB ports the results were the worst ever: the port that does BC did not work at all, it kept looping trying to negotiate, and after a 22 second negotiation delay the port would turn off. The non-BC port gave only 2.4W which matches the 2.5W given by the spec for a “High-power device”, which is what that port is designed to give. In a discussion on the Purism forum about the Librem 5 charging speed one of their engineers told me that the reason why their phone would draw 2A from that port was because the cable was identifying itself as a USB-C port not a “High-power device” port. But for some reason, out of the 7 phones I tested, the FLX1s and the One Plus 6 are the only ones to limit themselves to what the port is apparently supposed to do. Also the One Plus 6 charges slowly on every power supply so I don't know if it is obeying the spec or just sucking.

On a cheap AliExpress charger the FLX1s gets 5.9V and on a USB battery it gets 5.8V. Out of all 42 combinations of device and charger I tested these were the only ones to involve more than 5.1V but less than 9V. I welcome comments suggesting an explanation.

The case that I received has a hole for the USB-C connector that isn’t wide enough for the plastic surrounds on most of my USB-C cables (including the Dell dock). Also to make a connection requires a fairly deep insertion (deeper than the One Plus 6 or the Note 9). So without adjustment I have to take the case off to charge it. It’s no big deal to adjust the hole (I have done it with other cases) but it’s an annoyance.

Phone       | Top z640        | Bottom Z640     | Monitor         | Ali Charger     | Dell Dock       | Battery         | Best            | Worst
FLX1s       | FAIL            | 5.0V 0.49A 2.4W | 4.8V 1.9A 9.0W  | 5.9V 1.8A 11W   | 4.8V 2.1A 10W   | 5.8V 2.1A 12W   | 5.8V 2.1A 12W   | 5.0V 0.49A 2.4W
Note9       | 4.8V 1.0A 5.2W  | 4.8V 1.6A 7.5W  | 4.9V 2.0A 9.5W  | 5.1V 1.9A 9.7W  | 4.8V 2.1A 10W   | 5.1V 2.1A 10W   | 5.1V 2.1A 10W   | 4.8V 1.0A 5.2W
Pixel 7 pro | 4.9V 0.80A 4.2W | 4.8V 1.2A 5.9W  | 9.1V 1.3A 12W   | 9.1V 1.2A 11W   | 4.9V 1.8A 8.7W  | 9.0V 1.3A 12W   | 9.1V 1.3A 12W   | 4.9V 0.80A 4.2W
Pixel 8     | 4.7V 1.2A 5.4W  | 4.7V 1.5A 7.2W  | 8.9V 2.1A 19W   | 9.1V 2.7A 24W   | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W   | 9.1V 2.7A 24W   | 4.7V 1.2A 5.4W
PPP         | 4.7V 1.2A 6.0W  | 4.8V 1.3A 6.8W  | 4.9V 1.4A 6.6W  | 5.0V 1.2A 5.8W  | 4.9V 1.4A 5.9W  | 5.1V 1.2A 6.3W  | 4.8V 1.3A 6.8W  | 5.0V 1.2A 5.8W
Librem 5    | 4.4V 1.5A 6.7W  | 4.6V 2.0A 9.2W  | 4.8V 2.4A 11.2W | 12V 0.48A 5.8W  | 5.0V 0.56A 2.7W | 5.1V 2.0A 10W   | 4.8V 2.4A 11.2W | 5.0V 0.56A 2.7W
OnePlus6    | 5.0V 0.51A 2.5W | 5.0V 0.50A 2.5W | 5.0V 0.81A 4.0W | 5.0V 0.75A 3.7W | 5.0V 0.77A 3.7W | 5.0V 0.77A 3.9W | 5.0V 0.81A 4.0W | 5.0V 0.50A 2.5W
Best        | 4.4V 1.5A 6.7W  | 4.6V 2.0A 9.2W  | 8.9V 2.1A 19W   | 9.1V 2.7A 24W   | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W   |                 |

Conclusion

The Furilabs support people are friendly and enthusiastic but my customer experience wasn’t ideal. It was good that they could quickly respond to my missing order status and the missing SIM caddy (which I still haven’t received but believe is in the mail) but it would be better if such things just didn’t happen.

The phone is quite user friendly and could be used by a novice.

I paid $US577 for the FLX1s which is $AU863 by today’s exchange rates. For comparison I could get a refurbished Pixel 9 Pro Fold for $891 from Kogan (the major Australian mail-order company for technology) or a refurbished Pixel 9 Pro XL for $842. The Pixel 9 series has security support until 2031 which is probably longer than you can expect a phone to be used without being broken. So a phone with a much higher resolution screen that’s only one generation behind the latest high end phones and is refurbished will cost less. For a brand new phone a Pixel 8 Pro which has security updates until 2030 costs $874 and a Pixel 9A which has security updates until 2032 costs $861.

Doing what the Furilabs people have done is not a small project. It’s a significant amount of work and the prices of their products need to cover that. I’m not saying that the prices are bad, just that economies of scale and the large quantity of older stock makes the older Google products quite good value for money. The new Pixel phones of the latest models are unreasonably expensive. The Pixel 10 is selling new from Google for $AU1,149 which I consider a ridiculous price that I would not pay given the market for used phones etc. If I had a choice of $1,149 or a “feature phone” I’d pay $1,149. But the FLX1s for $863 is a much better option for me. If all I had to choose from was a new Pixel 10 or a FLX1s for my parents I’d get them the FLX1s.

For a FOSS developer a FLX1s could be a mobile test and development system which could be lent to a relative when their main phone breaks and the replacement is on order. It seems to be fit for use as a commodity phone. Note that I give this review on the assumption that SMS and VoLTE will just work, I haven’t tested them yet.

The UI on the FLX1s is functional and easy enough for a new user while allowing an advanced user to do the things they desire. I prefer the Android style and the Plasma Mobile style is closer to Android than Phosh is, but changing it is something I can do later. Generally I think that the differences between UIs matter more when on a desktop environment that could be used for more complex tasks than on a phone which limits what can be done by the size of the screen.

I am comparing the FLX1s to Android phones on the basis of what technology is available. But most people who would consider buying this phone will compare it to the PinePhone Pro and the Librem 5 as they have similar uses. The FLX1s beats both those phones handily in terms of battery life and of having everything just work. But it has the most non free software of the three and the people who want the $2000 Librem 5 that’s entirely made in the US won’t want the FLX1s.

This isn’t the destination for Debian based phones, but it’s a good step on the way to it and I don’t think I’ll regret this purchase.

Related posts:

  1. Phone Charging Speeds With Debian/TrixieOne of the problems I encountered with the PinePhone Pro...
  2. Pixel 6AI have just bought a Pixel 6A [1] for my...
  3. More About the Librem 5I concluded my previous post about the Purism Librem 5...

19 January, 2026 06:43AM by etbe

Vincent Bernat

RAID 5 with mixed-capacity disks on Linux

Standard RAID solutions waste space when disks have different sizes. Linux software RAID with LVM uses the full capacity of each disk and lets you grow storage by replacing one or two disks at a time.1

We start with four disks of equal size:

$ lsblk -Mo NAME,TYPE,SIZE
NAME TYPE  SIZE
vda  disk  101M
vdb  disk  101M
vdc  disk  101M
vdd  disk  101M

We create one partition on each of them:

$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vda
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vdb
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vdc
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vdd
$ lsblk -Mo NAME,TYPE,SIZE
NAME   TYPE  SIZE
vda    disk  101M
└─vda1 part  100M
vdb    disk  101M
└─vdb1 part  100M
vdc    disk  101M
└─vdc1 part  100M
vdd    disk  101M
└─vdd1 part  100M

We set up a RAID 5 device by assembling the four partitions:2

$ mdadm --create /dev/md0 --level=raid5 --bitmap=internal --raid-devices=4 \
> /dev/vda1 /dev/vdb1 /dev/vdc1 /dev/vdd1
$ lsblk -Mo NAME,TYPE,SIZE
    NAME          TYPE    SIZE
    vda           disk    101M
┌┈▶ └─vda1        part    100M
┆   vdb           disk    101M
├┈▶ └─vdb1        part    100M
┆   vdc           disk    101M
├┈▶ └─vdc1        part    100M
┆   vdd           disk    101M
└┬▶ └─vdd1        part    100M
 └┈┈md0           raid5 292.5M
$ cat /proc/mdstat
md0 : active raid5 vdd1[4] vdc1[2] vdb1[1] vda1[0]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

We use LVM to create logical volumes on top of the RAID 5 device.
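The exact commands are not shown in this rendering; based on the volume group and logical volume names that show up in the final lsblk output (data, bits and pieces, 100M each), they would be roughly:

$ pvcreate /dev/md0
$ vgcreate data /dev/md0
$ lvcreate -L 100M -n bits data
$ lvcreate -L 100M -n pieces data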

This gives us the following setup:

One RAID 5 device built from four partitions from four disks of equal capacity. The RAID device is part of an LVM volume group with two logical volumes.
RAID 5 setup with disks of equal capacity

We replace /dev/vda with a bigger disk. We add it back to the RAID 5 array after copying the partitions from /dev/vdb:
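The commands for this step do not appear in this rendering; following the pattern used below for /dev/vdc and /dev/vdd, they would presumably be:

$ sgdisk --replicate=/dev/vda /dev/vdb
$ sgdisk --randomize-guids /dev/vda
$ mdadm --manage /dev/md0 --add /dev/vda1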

We do not use the additional capacity: this setup would not survive the loss of /dev/vda because we have no spare capacity. We need a second disk replacement, like /dev/vdb:
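Again the exact commands are not shown here; the second replacement presumably mirrors the first:

$ sgdisk --replicate=/dev/vdb /dev/vda
$ sgdisk --randomize-guids /dev/vdb
$ mdadm --manage /dev/md0 --add /dev/vdb1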

We create a new RAID 1 array by using the free space on /dev/vda and /dev/vdb:
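A sketch of this step, consistent with the /proc/mdstat output further down (md1 as a two-disk RAID 1 with an internal bitmap), assuming a second partition is first added to each of the two larger disks:

$ sgdisk --new=0:0:0 -t 0:fd00 /dev/vda
$ sgdisk --new=0:0:0 -t 0:fd00 /dev/vdb
$ mdadm --create /dev/md1 --level=raid1 --bitmap=internal --raid-devices=2 \
> /dev/vda2 /dev/vdb2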

We add /dev/md1 to the volume group:
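Something along these lines (the commands themselves are not shown in this rendering):

$ pvcreate /dev/md1
$ vgextend data /dev/md1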

This gives us the following setup:3

One RAID 5 device built from four partitions and one RAID 1 device built from two partitions. The two last disks are smaller. The two RAID devices are part of a single LVM volume group.
Setup mixing both RAID 1 and RAID 5

We extend our capacity further by replacing /dev/vdc:

$ cat /proc/mdstat
md1 : active (auto-read-only) raid1 vda2[0] vdb2[1]
      101312 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active (auto-read-only) raid5 vda1[5] vdd1[4] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ sgdisk --replicate=/dev/vdc /dev/vdb
$ sgdisk --randomize-guids /dev/vdc
$ mdadm --manage /dev/md0 --add /dev/vdc1
$ cat /proc/mdstat
md1 : active (auto-read-only) raid1 vda2[0] vdb2[1]
      101312 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid5 vdc1[7] vda1[5] vdd1[4] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

Then, we convert /dev/md1 from RAID 1 to RAID 5:

$ mdadm --grow /dev/md1 --level=5 --raid-devices=3 --add /dev/vdc2
mdadm: level of /dev/md1 changed to raid5
mdadm: added /dev/vdc2
$ cat /proc/mdstat
md1 : active raid5 vdc2[2] vda2[0] vdb2[1]
      202624 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid5 vdc1[7] vda1[5] vdd1[4] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ pvresize /dev/md1
$ vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  data   2   2   0 wz--n- 482.00m 282.00m

This gives us the following layout:

Two RAID 5 devices built from four disks of different sizes. The last disk is smaller and contains only one partition, while the others have two partitions: one for /dev/md0 and one for /dev/md1. The two RAID devices are part of a single LVM volume group.
RAID 5 setup with mixed-capacity disks using partitions and LVM

We further extend our capacity by replacing /dev/vdd:

$ cat /proc/mdstat
md0 : active (auto-read-only) raid5 vda1[5] vdc1[7] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md1 : active (auto-read-only) raid5 vda2[0] vdc2[2] vdb2[1]
      202624 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ sgdisk --replicate=/dev/vdd /dev/vdc
$ sgdisk --randomize-guids /dev/vdd
$ mdadm --manage /dev/md0 --add /dev/vdd1
$ cat /proc/mdstat
md0 : active raid5 vdd1[4] vda1[5] vdc1[7] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md1 : active (auto-read-only) raid5 vda2[0] vdc2[2] vdb2[1]
      202624 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

We grow the second RAID 5 array:

$ mdadm --grow /dev/md1 --raid-devices=4 --add /dev/vdd2
mdadm: added /dev/vdd2
$ cat /proc/mdstat
md0 : active raid5 vdd1[4] vda1[5] vdc1[7] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md1 : active raid5 vdd2[3] vda2[0] vdc2[2] vdb2[1]
      303936 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ pvresize /dev/md1
$ vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  data   2   2   0 wz--n- 580.00m 380.00m
$ lsblk -Mo NAME,TYPE,SIZE
       NAME          TYPE    SIZE
       vda           disk    201M
   ┌┈▶ ├─vda1        part    100M
┌┈▶┆   └─vda2        part    100M
┆  ┆   vdb           disk    201M
┆  ├┈▶ ├─vdb1        part    100M
├┈▶┆   └─vdb2        part    100M
┆  ┆   vdc           disk    201M
┆  ├┈▶ ├─vdc1        part    100M
├┈▶┆   └─vdc2        part    100M
┆  ┆   vdd           disk    301M
┆  └┬▶ ├─vdd1        part    100M
└┬▶ ┆  └─vdd2        part    100M
 ┆  └┈┈md0           raid5 292.5M
 ┆     ├─data-bits   lvm     100M
 ┆     └─data-pieces lvm     100M
 └┈┈┈┈┈md1           raid5 296.8M

You can continue by replacing each disk one by one using the same steps. ♾️


  1. This is the same approach Synology uses for its Hybrid RAID and what the OpenMediaVault community calls a “Sliced Hybrid RAID.” ↩︎

  2. Write-intent bitmaps speed up recovery of the RAID array after a power failure by marking unsynchronized regions as dirty. They have an impact on performance, but I did not measure it myself. An alternative is to use a partial parity log with --consistency-policy=ppl. ↩︎

  3. In the lsblk output, /dev/md1 appears unused because the logical volumes do not use any space from it yet. Once you create more logical volumes or extend them, lsblk will reflect the usage. ↩︎

19 January, 2026 05:49AM by Vincent Bernat

Dima Kogan

mrcal 2.5 released!

mrcal 2.5 is out: the release notes. Once again, this is mostly a bug-fix release en route to the big new features coming in 3.0.

One cool thing is that these tools have now matured enough to no longer be considered experimental. They have been used with great success in lots of contexts across many different projects and organizations. Some highlights:

  • I've calibrated extremely wide lenses
  • and extremely narrow lenses
  • and joint systems containing many different kinds of lenses
  • with lots of cameras at the same time. The biggest single joint calibration I've done today had 10 cameras, but I'll almost certainly encounter bigger systems in the future
  • mrcal has been used to process both visible and thermal cameras
  • The new triangulated-feature capability has been used in a structure-from-motion context to compute the world geometry on-line.
  • mrcal has been used with weird experimental setups employing custom calibration objects and single-view solves
  • mrcal has calibrated joint camera-LIDAR systems
  • and joint camera-IMU systems
  • Lots of students use mrcal as part of PhotonVision, the toolkit used by teams in the FIRST Robotics Competition

Some of the above is new, and not yet fully polished and documented and tested, but it works.

In mrcal 2.5, most of the implementation of some new big features is written and committed, but it's still incomplete. The new stuff is there, but is lightly tested and documented. This will be completed eventually in mrcal 3.0.

mrcal is quite good already, and will be even better in the future. Try it today!

19 January, 2026 12:00AM by Dima Kogan

January 17, 2026

Simon Josefsson

Backup of S3 Objects Using rsnapshot

I’ve been usingrsnapshot to take backups of around 10 servers and laptops for well over 15 years, and it is a remarkably reliable tool that has proven itself many times. Rsnapshot usesrsync overSSH and maintains a temporal hard-link file pool. Once rsnapshot is configured and running, on the backup server, you get a hardlink farm with directories like this for the remote server:

/backup/serverA.domain/.sync/foo
/backup/serverA.domain/daily.0/foo
/backup/serverA.domain/daily.1/foo
/backup/serverA.domain/daily.2/foo
...
/backup/serverA.domain/daily.6/foo
/backup/serverA.domain/weekly.0/foo
/backup/serverA.domain/weekly.1/foo
...
/backup/serverA.domain/monthly.0/foo
/backup/serverA.domain/monthly.1/foo
...
/backup/serverA.domain/yearly.0/foo

I can browse and rescue files easily, going back in time when needed.

The rsnapshot project README explains more, and there is a long rsnapshot HOWTO, although I usually find the rsnapshot man page the easiest to digest.

I have stored multi-TB Git-LFS data on GitLab.com for some time. The yearly renewal is coming up, and the price for Git-LFS storage on GitLab.com is now excessive (~$10.000/year). I have reworked my work-flow and finally migrated debdistget to only store Git-LFS stubs on GitLab.com and push the real files to S3 object storage. The cost for this is barely measurable; I have yet to run into the €25/month warning threshold.

But how do you back up stuff stored in S3?

For some time, my S3 backup solution has been to run the minio-client mirror command to download all S3 objects to my laptop, and rely on rsnapshot to keep backups of this. While 4TB NVMe drives are relatively cheap, I've felt for quite some time that this disk and network churn on my laptop is unsatisfactory.

What is a better approach?

I find S3 hosting sites fairly unreliable by design. Only a couple of clicks in your web browser and you have dropped 100TB of data. Or it can be done by someone else who steals your plaintext-equivalent cookie. Thus, I haven't really felt comfortable using any S3-based backup option. I prefer to self-host, although continuously running a mirror job is not sufficient: if I accidentally drop the entire S3 object store, my mirror run will remove all files locally too.

The rsnapshot approach that allows going back in time and having data on self-managed servers feels superior to me.

What if we could use rsnapshot with an S3 client instead of rsync?

Someone else asked about this several years ago, and the suggestion was to use the fuse-based s3fs, which sounded unreliable to me. After some experimentation, working around some hard-coded assumptions in the rsnapshot implementation, I came up with a small configuration pattern and a wrapper tool to implement what I desired.

Here is my configuration snippet:

cmd_rsync    /backup/s3/s3rsync
rsync_short_args    -Q
rsync_long_args    --json --remove
lockfile    /backup/s3/rsnapshot.pid
snapshot_root    /backup/s3
backup    s3:://hetzner/debdistget-gnuinos    ./debdistget-gnuinos
backup    s3:://hetzner/debdistget-tacos  ./debdistget-tacos
backup    s3:://hetzner/debdistget-diffos ./debdistget-diffos
backup    s3:://hetzner/debdistget-pureos ./debdistget-pureos
backup    s3:://hetzner/debdistget-kali   ./debdistget-kali
backup    s3:://hetzner/debdistget-devuan ./debdistget-devuan
backup    s3:://hetzner/debdistget-trisquel   ./debdistget-trisquel
backup    s3:://hetzner/debdistget-debian ./debdistget-debian

The idea is to save a backup of a couple of S3 buckets under /backup/s3/.

I have some scripts that take a complete rsnapshot.conf file and append my per-directory configuration so that this becomes a complete configuration. If you are curious how I roll this, backup-all invokes backup-one, appending my rsnapshot.conf template with the snippet above.

The s3rsync wrapper script is the essential hack to convert rsnapshot's rsync parameters into something that talks S3, and the script is as follows:

#!/bin/sh

set -eu

S3ARG=
for ARG in "$@"; do
    case $ARG in
    s3:://*) S3ARG="$S3ARG "$(echo $ARG | sed -e 's,s3:://,,');;
    -Q*) ;;
    *) S3ARG="$S3ARG $ARG";;
    esac
done

echo /backup/s3/mc mirror $S3ARG
exec /backup/s3/mc mirror $S3ARG

It uses the minio-client tool. I first tried s3cmd but its sync command reads all files to compute MD5 checksums every time you invoke it, which is very slow. The mc mirror command is blazingly fast since it only compares mtimes, just like rsync or git.

First you need to store credentials for your S3 bucket. These are stored in plaintext in ~/.mc/config.json, which I find to be sloppy security practice, but I don't know of any better way to do this. Replace AKEY and SKEY with your access token and secret token from your S3 provider:

/backup/s3/mc alias set hetzner AKEY SKEY

If I invoke a sync job for a fully synced up directory the output looks like this:

root@hamster /backup# /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf -V sync
Setting locale to POSIX "C"
echo 1443 > /backup/s3/rsnapshot.pid
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-gnuinos \
    /backup/s3/.sync//debdistget-gnuinos
/backup/s3/mc mirror --json --remove hetzner/debdistget-gnuinos /backup/s3/.sync//debdistget-gnuinos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-tacos \
    /backup/s3/.sync//debdistget-tacos
/backup/s3/mc mirror --json --remove hetzner/debdistget-tacos /backup/s3/.sync//debdistget-tacos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-diffos \
    /backup/s3/.sync//debdistget-diffos
/backup/s3/mc mirror --json --remove hetzner/debdistget-diffos /backup/s3/.sync//debdistget-diffos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-pureos \
    /backup/s3/.sync//debdistget-pureos
/backup/s3/mc mirror --json --remove hetzner/debdistget-pureos /backup/s3/.sync//debdistget-pureos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-kali \
    /backup/s3/.sync//debdistget-kali
/backup/s3/mc mirror --json --remove hetzner/debdistget-kali /backup/s3/.sync//debdistget-kali
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-devuan \
    /backup/s3/.sync//debdistget-devuan
/backup/s3/mc mirror --json --remove hetzner/debdistget-devuan /backup/s3/.sync//debdistget-devuan
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-trisquel \
    /backup/s3/.sync//debdistget-trisquel
/backup/s3/mc mirror --json --remove hetzner/debdistget-trisquel /backup/s3/.sync//debdistget-trisquel
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-debian \
    /backup/s3/.sync//debdistget-debian
/backup/s3/mc mirror --json --remove hetzner/debdistget-debian /backup/s3/.sync//debdistget-debian
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
touch /backup/s3/.sync/
rm -f /backup/s3/rsnapshot.pid
/run/current-system/profile/bin/logger -p user.info -t rsnapshot[1443] \
    /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf \
    -V sync: completed successfully
root@hamster /backup#

You can tell from the paths that this machine runs Guix. This was the first production use of the Guix System for me, and the machine has been running since 2015 (with the occasional new hard drive). Before, I used rsnapshot on Debian, but some stable release of Debian dropped the rsnapshot package, paving the way for me to test Guix in production on a non-Internet exposed machine. Unfortunately, mc is not packaged in Guix, so you will have to install it from the MinIO Client GitHub page manually.

Running the daily rotation looks like this:

root@hamster /backup# /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf -V daily
Setting locale to POSIX "C"
echo 1549 > /backup/s3/rsnapshot.pid
mv /backup/s3/daily.5/ /backup/s3/daily.6/
mv /backup/s3/daily.4/ /backup/s3/daily.5/
mv /backup/s3/daily.3/ /backup/s3/daily.4/
mv /backup/s3/daily.2/ /backup/s3/daily.3/
mv /backup/s3/daily.1/ /backup/s3/daily.2/
mv /backup/s3/daily.0/ /backup/s3/daily.1/
/run/current-system/profile/bin/cp -al /backup/s3/.sync /backup/s3/daily.0
rm -f /backup/s3/rsnapshot.pid
/run/current-system/profile/bin/logger -p user.info -t rsnapshot[1549] \
    /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf \
    -V daily: completed successfully
root@hamster /backup#
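The sync and rotation runs can then be scheduled in the usual rsnapshot fashion; a crontab-style sketch (the times are arbitrary, the weekly level assumes the template defines one, and on Guix this would be expressed as an mcron job instead):

# hypothetical root crontab entries
30 2 * * * /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf sync && /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf daily
0  4 * * 1 /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf weekly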

Hopefully you will feel inspired to take backups of your S3 buckets now!

17 January, 2026 10:04PM by simon

Search


A complete feed is available in any of your favourite syndication formats linked by the buttons below.

[RSS 1.0 Feed][RSS 2.0 Feed][Atom Feed][FOAF Subscriptions][OPML Subscriptions][Hacker][Planet]

Last updated: 18 Feb 2026 09:45
All times are UTC.
Contact:Debian Planet Maintainers

Planetarium


Subscriptions

