This disturbing and amusing article describes how an OpenAI investor appears to be having psychological problems related to SCP-style text generated by ChatGPT [2]. This is definitely going to be a recursive problem as people who believe in it invest in it.
Interesting analysis of dbus and a design for a more secure replacement [3].
Ploum wrote an insightful article about the problems caused by the GitHub monopoly [5]. Radicle sounds interesting.
Niki Tonsky wrote an interesting article about the UI problems in Tahoe (the latest macOS release) caused by trying to make an icon for everything [6]. They have a really good writing style, and the piece is well researched.
This video about designing a C64 laptop is a masterclass in computer design [9].
Ron Garrett wrote an insightful blog post about abortion [11].
Bruce Schneier and Nathan E. Sanders wrote an insightful article about the potential of LLM systems for advertising and enshittification [12]. We need serious legislation about this ASAP!
17 February, 2026 08:09AM by etbe
In the Tor Project system administrators' team (colloquially known as TPA), we've recently changed how we make decisions, which means you'll get clearer communications from us about upcoming changes or targeted questions about a proposal.
Note that this change only affects the TPA team. At Tor, each team has its own way of coordinating and making decisions, and so far this process is only used inside TPA. We encourage other teams inside and outside Tor to evaluate this process to see if it can improve your processes and documentation.
We had traditionally been using a "RFC" ("Request For Comments") process and have recently switched to "ADR" ("Architecture Decision Record").
The ADR process is, for us, pretty simple. It consists of three things: a template (ADR-100), a process (ADR-101), and a separate communication step (ADR-102).
As team lead, the first thing I did was to propose a new template (in ADR-100), a variation of the Nygard template. The TPA variation of the template is similarly simple, as it has only 5 headings, and is worth quoting in full:
Context: What is the issue that we're seeing that is motivating this decision or change?

Decision: What is the change that we're proposing and/or doing?

Consequences: What becomes easier or more difficult to do because of this change?

More Information (optional): What else should we know? For larger projects, consider including a timeline and cost estimate, along with the impact on affected users (perhaps including existing Personas). Generally, this includes a short evaluation of alternatives considered.

Metadata: status, decision date, decision makers, consulted, informed users, and link to a discussion forum
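To make the template concrete, here is what a complete record could look like (a made-up example, not a real TPA decision):

Context: the example backup server is running out of disk, and expanding it in place is not possible.

Decision: move the backup pool to a new, larger machine.

Consequences: restores keep working unchanged; the old server can be decommissioned.

More Information: pricing and the alternatives considered (cloud storage, in-place upgrade) are recorded in the discussion issue.

Metadata: status: adopted; decision date: 2026-01-15; decision makers: team lead; consulted: TPA; informed: all of Tor; discussion: (link to issue)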
The previous RFC template had 17 (seventeen!) headings, which encouraged much longer documents. Now, the decision record will be easier to read and digest at one glance.
An immediate effect of this is that I've started using GitLab issues more for comparisons and brainstorming. Instead of dumping all sorts of details like pricing or in-depth alternatives comparisons in a document, we record those in the discussion issue, keeping the document shorter.
The whole process is simple enough that it's worth quoting in full as well:
Major decisions are introduced to stakeholders in a meeting, smaller ones by email. A delay allows people to submit final comments before adoption.
Now, of course, the devil is in the details (and ADR-101), but the point is to keep things simple.
A crucial aspect of the proposal, which Jacob Kaplan-Moss calls the one weird trick, is to "decide who decides". Our previous process was vague about who makes the decision; the new template (and process) clarifies the decision makers for each decision.
Conversely, some decisions degenerate into endless discussions around trivial issues because too many stakeholders are consulted, a problem known as the Law of triviality, also known as the "Bike Shed syndrome".
The new process better identifies stakeholders.
Picking those stakeholders is still tricky, but our definitions are more explicit and aligned with the classic RACI matrix (Responsible, Accountable, Consulted, Informed).
Finally, a crucial part of the process (ADR-102) is to decouple the act of making and recording decisions from communicating about the decision. Those are two radically different problems to solve. We have found that a single document can't serve both purposes.
Because ADRs can affect a wide range of things, we don't have a specific template for communications. We suggest the Five Ws method (Who? What? When? Where? Why?) and, again, keeping things simple.
The ADR process is not something I invented. I first stumbled upon it in the Thunderbird Android project. Then, in parallel, I was in the process of reviewing the RFC process, following Jacob Kaplan-Moss's criticism of the RFC process. Essentially, he argues that:
And, indeed, I have been guilty of a lot of those issues. A verbose writer, I have written extremely long proposals that I suspect no one has ever fully read. Some proposals were adopted by exhaustion, or ignored because they didn't loop in the right stakeholders.
Our discussion issue on the topic has more details on the issues I found with our RFC process. But to give credit to the old process, it did serve us well while it was there: it's better than nothing, and it allowed us to document a staggering number of changes and decisions (95 RFCs!) made over the course of 6 years of work.
We're still experimenting with the communication around decisions, as this text might suggest. Because it's a separate step, we also have a tendency to forget or postpone it, like this post, which comes a couple of months late.
Previously, we'd just ship a copy of the RFC to everyone, which was easy and quick, but incomprehensible to most. Now we need to write a separate communication, which is more work but hopefully worth it, as the result is more digestible.
We can't wait to hear what you think of the new process and how it works for you, here or in the discussion issue! We're particularly interested in people who are already using a similar process, or who will adopt one after reading this.
Note: this article was also published on the Tor Blog.

You might see a verification screen pop up on more and more Debian web properties. Unfortunately, today's AI world is meeting web hosts that use Perl CGIs and were never built as multi-tiered, scalable serving systems. The issues have been at three layers:
Optimally we would go and solve some scalability issues with the services, but there is also a question of how much we want to be able to serve, as AI scraper demand is just a steady stream of requests whose results are never shown to humans.
DSA has now stood up some VMs with Varnish for proxying. Incoming TLS is terminated by hitch, and TLS "on-loading" towards the backends is done using haproxy. That way TLS goes in and TLS goes out. While Varnish does cache, if the content is cacheable (e.g. does not depend on cookies), that is not the primary reason for using it: it can be used for flexible query and response rewriting.
If no cookie with a proof of work is provided, the user is redirected to a challenge page that does some webcrypto in JavaScript, because that looked similar to what other projects do (e.g. haphash, which originally inspired the solution). However, so far it looks like scrapers generally do not run with JavaScript enabled, so this whole crypto proof-of-work business could probably be replaced with just a JavaScript-based redirect. The existing solution also has big (security) holes in it. And, as we found out, Firefox is slower at webcrypto than Chrome. I have recently reduced the complexity, so you should notice it blocking you significantly less.
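For the curious, the gist of such a challenge is hashcash-style: find a nonce so that a hash of the challenge plus the nonce meets a difficulty target. A minimal illustration in shell (not the actual scheme deployed on the Debian hosts, which runs in the browser via webcrypto):

challenge="d3adb33f"   # would be issued by the server
prefix="0000"          # difficulty: required hex prefix of the digest
nonce=0
while :; do
  h=$(printf '%s:%d' "$challenge" "$nonce" | sha256sum | cut -d' ' -f1)
  [ "${h#"$prefix"}" != "$h" ] && break   # digest starts with prefix: solved
  nonce=$((nonce + 1))
done
echo "nonce=$nonce digest=$h"   # the cookie would attest to this solution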
Once you have the cookie, you can keep accessing the site for as long as the cookie is valid. Please do not make any assumptions about the cookies, or you will be broken in the future.
For legitimate scrapers that obey robots.txt, there is now an automatically generated IP allowlist in place (thanks, Marco d'Itri). It turns out that the search engines do not actually run JavaScript either, and then loudly complain about the redirect to the challenge page. Other bots are generally exempt.
I hope that we have now found something like the sweet spot, where the admins can stop spending human time on updating firewall rules and the services are generally available, reasonably fast, and still indexed. In case you see problems or run into a block with your own (legitimate) bots, please let me know.
16 February, 2026 07:55PM by Philipp Kern (noreply@blogger.com)
What if I told you there is a way to configure the network on any Linux server that requires no userspace network configuration system whatsoever (no systemd-networkd, no ifupdown, no NetworkManager, nothing)? It has literally 8 different caveats on top of that, but is still totally worth your time.
People following Debian development might have noticed there are now four ways of configuring the network on a Debian system. At least that is what the Debian wiki claims, namely:
ifupdown (/etc/network/interfaces): traditional static configuration system, mostly for workstations and servers, that has been in Debian forever (since at least 2000), documented in the Debian wiki
NetworkManager: self-proclaimed "standard Linux network configuration", mostly used on desktops but technically supports servers as well, see the Debian wiki page (introduced in 2004)
systemd-networkd: used more for servers, see Debian Reference Chapter 5 (introduced some time around Debian 8 "jessie", in 2015)
Netplan: latest entry (2018), YAML-based configuration abstraction layer on top of the above two, see also Debian Reference Chapter 5 and the Debian wiki
At this point, I feel ifupdown is on its way out, possibly replaced by systemd-networkd. NetworkManager already manages most desktop configurations.
The method is this:
ip= on the Linux kernel command line: for servers with a single IPv4 or IPv6 address, no software required other than the kernel and a boot loader (since 2002 or older)

So by "new" I mean "new to me". This option is really old. The nfsroot.txt file where it is documented predates the git import of the Linux kernel: it's part of the 2005 git import of 2.6.12-rc2. That's already 20+ years old. The oldest trace I found is in this 2002 commit, which imports the whole file at once, but the option might go back as far as 1996-1997, if the copyright on the file is correct and the option was present back then.
The trick is to add an ip= parameter to the kernel's command line. The syntax, as mentioned above, is in nfsroot.txt and looks like this:
ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip>:<ntp0-ip>

Most settings are pretty self-explanatory, if you ignore the useless ones:
<client-ip>: IP address of the server
<gw-ip>: address of the gateway
<netmask>: netmask, in quad notation
<device>: interface name, if multiple are available
<autoconf>: how to configure the interface, namely:
  off or none: no autoconfiguration (static)
  on or any: use any protocol (default)
  dhcp: essentially like on, for all intents and purposes
<dns0-ip>, <dns1-ip>: IP addresses of the primary and secondary name servers, exported to /proc/net/pnp, which can be symlinked to /etc/resolv.conf

We're ignoring these options:
<server-ip>: IP address of the NFS server, exported to /proc/net/pnp
<hostname>: name of the client, typically sent in the DHCP requests, which may lead to a DNS record being created in some networks
<ntp0-ip>: exported to /proc/net/ipconfig/ntp_servers, unused by the kernel

Note that the Red Hat manual has a different opinion:
ip=[<server-id>]:<gateway-IP-number>:<netmask>:<client-hostname>:<interface>:[dhcp|dhcp6|auto6|on|any|none|off]

It's essentially the same (although server-id is weird), and the autoconf variable has other settings, so that's a bit odd.
For example, this command-line setting:
ip=192.0.2.42::192.0.2.1:255.255.255.0:::off

... will set the IP address to 192.0.2.42/24 and the gateway to 192.0.2.1. This will properly guess the network interface if there's a single one.
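Extrapolating from the syntax above, a static setup that also hands the kernel name servers might look like this (addresses invented for the example, with Quad9's public resolvers standing in for DNS; the hostname field is left empty and eth0 pins the interface explicitly):

ip=192.0.2.42::192.0.2.1:255.255.255.0::eth0:off:9.9.9.9:149.112.112.112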
A DHCP only configuration will look like this:
ip=::::::dhcp

Of course, you don't want to type this by hand every time you boot the machine. That wouldn't work. You need to configure the kernel command line, and that depends on your boot loader.
With GRUB, you need to edit (on Debian) the file /etc/default/grub (ugh) and find a line like:
GRUB_CMDLINE_LINUX=

and change it to:
GRUB_CMDLINE_LINUX=ip=::::::dhcp

For systemd-boot UKI setups, it's simpler: just add the setting to the /etc/kernel/cmdline file. Don't forget to include anything that's non-default from /proc/cmdline.
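One step worth spelling out for the GRUB case: /etc/default/grub is only an input, so the configuration must be regenerated before the change takes effect. A minimal sketch, using standard Debian commands:

update-grub        # regenerate /boot/grub/grub.cfg from /etc/default/grub
reboot
cat /proc/cmdline  # after reboot, this should now contain ip=::::::dhcp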
This assumes the Cmdline=@ setting in /etc/kernel/uki.conf. See 2025-08-20-luks-ukify-conversion for my minimal documentation on this.
This is perhaps where this is much less portable than it might first look, because of course each distribution has its own way of configuring those options. Here are some that I know of:
Arch Linux: /etc/default/grub, /boot/loader/entries/arch.conf for systemd-boot, or /etc/kernel/cmdline for UKI
Fedora/RHEL: /etc/default/grub, maybe more; RHEL mentions grubby, possibly some systemd-boot things here as well
Gentoo: /etc/default/grub, /efi/loader/entries/gentoo-sources-kernel.conf for systemd-boot, or /etc/kernel/install.d/95-uki-with-custom-opts.install

It's interesting that /etc/default/grub is consistent across all distributions above, while the systemd-boot setups are all over the place (except for the UKI case); I would have expected those to be more standard than GRUB.
If dropbear-initramfs is set up, it already requires you to have such a configuration, and it might not work out of the box.
This is because, by default, it disables the interfaces configured in the kernel after completing its tasks (typically unlocking the encrypted disks).
To fix this, you need to disable that "feature":
IFDOWN="none"This will keepdropbear-initramfs from disabling the configuredinterface.
Traditionally, I've always set up my machines with ifupdown on servers and NetworkManager on laptops, because that's essentially the default. But on some machines, I've started using systemd-networkd because ifupdown has ... issues, particularly with reloading network configurations. ifupdown is an old hack, feels like legacy, and is Debian-specific.
Not excited about configuring another service, I figured I would try something else: just configure the network at boot, through the kernel command line.
I was already doing such configurations for dropbear-initramfs (see this documentation), which requires the network to be up for unlocking the full-disk encryption keys.
So in a sense, this is a "Don't Repeat Yourself" solution.
Also known as: "wait, that works?" Yes, it does! That said...
This is useful for servers where the network configuration will not change after boot. Of course, this won't work on laptops or any mobile device.
This only works for configuring a single, simple interface. You can't configure multiple interfaces, WiFi, bridges, VLANs, bonding, etc.
It does support IPv6 and feels like the best way to configure IPv6 hosts: true zero configuration.
It likely does not work with a dual-stack IPv4/IPv6 static configuration. It might work with a dynamic dual-stack configuration, but I doubt it.
I don't know what happens when a DHCP lease expires. No daemon seems to be running, so I assume leases are not renewed, which makes this more useful for static configurations, including server-side reserved fixed IP addresses. (A non-renewed lease risks getting reallocated to another machine, which would cause an addressing conflict.)
It will not automatically reconfigure the interface on link changes, but ifupdown does not either.
It will not write /etc/resolv.conf for you, but the dns0-ip and dns1-ip do end up in /proc/net/pnp, which has a compatible syntax, so a common configuration is:
ln -s /proc/net/pnp /etc/resolv.conf

I have not really tested this at scale: only a single test server at home.
Yes, that's a lot of caveats, but it happens to cover a lot of machines for me, and it works surprisingly well. My main doubts are about long-term DHCP behaviour, but I don't see why that would be a problem with a statically defined lease.
Once you have this configuration, you don't need any "user" level network system, so you can get rid of everything:
apt purge systemd-networkd ifupdown network-manager netplan.io

Note that ifupdown (and probably others) leave stray files in (e.g.) /etc/network which you might want to clean up, or keep in case all this fails and I have put you in utter misery. Configuration files for other packages might also be left behind; I haven't tested this, no warranty.
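Incidentally, after a reboot it's easy to confirm that the kernel really did all the work, using only standard tools:

cat /proc/cmdline   # the ip= parameter should be there
ip addr show        # the interface should be configured
cat /proc/net/pnp   # name servers, if any were passed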
This whole idea came from the A/I folks (not to be confused with AI), who have been doing this forever. Thanks!

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects.
It seems natural to think of online communities competing for the time and attention of their participants. Over the last few years, I've worked with a team of collaborators—led by Nathan TeBlunthuis—to use mathematical and statistical techniques from ecology to understand these dynamics. What we've found surprised us: competition between online communities is rare and typically short-lived.
When we started this research, we figured competition would be most likely among communities discussing similar topics. As a first step, we identified clusters of such communities on Reddit. One surprising thing we noticed in our Reddit data was that many of these communities that used similar language also had very high levels of overlap among their users. This was puzzling: why were the same groups of people talking to each other about the same things in different places? And why don't they appear to be in competition with each other for their users' time and activity?
We didn’t know how to answer this question using quantitative methods. As a result, we recruited and interviewed 20 active participants in clusters of highly related subreddits with overlapping user bases (for example, one cluster was focused on vintage audio).
We found that the answer to the puzzle lay in the fact that the people we talked to were looking for three distinct things from the communities they participated in:
Critically, we also found that these three things represented a “trilemma,” and that no single community can meet all three needs. You might find two of the three in a single community, but you could never have all three.
[Figure from "No Community Can Do Everything: Why People Participate in Similar Online Communities": three key benefits that people seek from online communities, and how individual communities tend not to optimally provide all three. For example, large communities tend not to afford a tight-knit homophilous community.]

The end result is something I recognize in how I engage with online communities on platforms like Reddit. People tend to engage with a portfolio of communities that vary in size, specialization, topical focus, and rules. Compared with any single community, such overlapping systems can provide a wider range of benefits. No community can do everything.
This work was published as a paper at CSCW: TeBlunthuis, Nathan, Charles Kiene, Isabella Brown, Laura (Alia) Levi, Nicole McGinnis, and Benjamin Mako Hill. 2022. "No Community Can Do Everything: Why People Participate in Similar Online Communities." Proceedings of the ACM on Human-Computer Interaction 6 (CSCW1): 61:1-61:25. https://doi.org/10.1145/3512908.
This work was supported by the National Science Foundation (awards IIS-1908850, IIS-1910202, and GRFP-2016220885). A full list of acknowledgements is in the paper.
16 February, 2026 03:13AM by Benjamin Mako Hill
tag2upload allows authorised Debian contributors to upload to Debian simply by pushing a signed git tag to Debian’s gitlab instance, Salsa.
We have recently announced that tag2upload is, in our opinion, now very stable, and ready for general use by all Debian uploaders.
tag2upload, as part of Debian's git transition programme, is very flexible - it needs to support a large variety of maintainer practices. And it's relatively unopinionated, wherever that's possible. But, during the open beta, various contributors emailed us asking for Debian packaging git workflow advice and recommendations.
This post is an attempt to give some more opinionated answers, and guide you through modernising your workflow.
(This article is aimed squarely at Debian contributors. Much of it will make little sense to Debian outsiders.)
git offers a far superior development experience to patches and tarballs. Moving tasks from a tarballs-and-patches representation to a normal, git-first representation makes everything simpler.
dgit and tag2upload automatically do many things that have to be done manually, or with separate commands, in dput-based upload workflows.
They will also save you from a variety of common mistakes. For example, you cannot accidentally overwrite an NMU, with tag2upload or dgit. These many safety catches mean that our software sometimes complains about things, or needs confirmation, when more primitive tooling just goes ahead. We think this is the right tradeoff: it’s part of the great care we take to avoid our software making messes. Software that has your back is very liberating for the user.
tag2upload makes it possible to upload with very small amounts of data transfer, which is great in slow or unreliable network environments. The other week I did a git-debpush over mobile data while on a train in Switzerland; it completed in seconds.
See the Day-to-day work section below to see how simple your life could be.
Most Debian contributors have spent months or years learning how to work with Debian’s tooling. You may reasonably fear that our software is yet more bizarre, janky, and mistake-prone stuff to learn.
We promise (and our users tell us) that’s not how it is. We have spent a lot of effort on providing a good user experience. Our new git-first tooling, especially dgit and tag2upload, is much simpler to use than source-package-based tooling, despite being more capable.
The idiosyncrasies and bugs of source packages, and of the legacy archive, have been relentlessly worked around and papered over by our thousands of lines of thoroughly-tested defensive code. You too can forget all those confusing details, like our users have! After using our systems for a while you won’t look back.
And, you shouldn’t fear trying it out. dgit and tag2upload are unlikely to make a mess. If something is wrong (or even doubtful), they will typically detect it, and stop. This does mean that starting to use tag2upload or dgit can involve resolving anomalies that previous tooling ignored, or passing additional options to reassure the system about your intentions. So admittedly itisn’t always trivial to get your first push to succeed.
One of Debian’s foundational principles is that we publish the source code.
Nowadays, the vast majority of us, and of our upstreams, are using git. We are doing this because git makes our life so much easier.
But, without tag2upload or dgit, we aren't properly publishing our work! Yes, we typically put our git branch on Salsa, and point Vcs-Git at it. However:
The git branch there may be in the patches-unapplied format, contain only debian/, or something even stranger. And there's no guarantee that the debian/1.2.3-7 tag on salsa corresponds precisely to what was actually uploaded: dput-based tooling (such as gbp buildpackage) doesn't cross-check the .dsc against git.

This means that the git repositories on Salsa cannot be used by anyone who needs things that are systematic and always correct. They are OK for expert humans, but they are awkward (even hazardous) for Debian novices, and you cannot use them in automation. The real test is: could you use Vcs-Git and Salsa to build a Debian derivative? You could not.
tag2upload and dgit do solve this problem. When you upload, they:
push an archive/debian/1.2.3-7 tag to a single central git repository, *.dgit.debian.org; and set the Dgit field in the .dsc, so that clients can tell (using the ftpmaster API) that this was a git-based upload, what the corresponding git objects are, and where to find them.

This dependably conveys your git history to users and downstreams, in a standard, systematic and discoverable way. tag2upload and dgit are the only system which achieves this.
(The client is dgit clone, as advertised in e.g. dgit-user(7). For dput-based uploads, it falls back to importing the source package.)
tag2upload is a substantial incremental improvement to many existing workflows. git-debpush is a drop-in replacement for building, signing, and uploading the source package.
So, you can just adopt it without completely overhauling your packaging practices. You and your co-maintainers can even mix-and-match tag2upload, dgit, and traditional approaches, for the same package.
Start with the wiki page and git-debpush(1) (ideally from forky aka testing).
Youdon’t need to do any of the other things recommended in this article.
The rest of this article is a guide to adopting the best and most advanced git-based tooling for Debian packaging.
Your current approach uses the "patches-unapplied" git branch format used with gbp pq and/or quilt, and often used with git-buildpackage. You previously used gbp import-orig.
You are fluent with git, and know how to use Merge Requests on gitlab (Salsa). You have your origin remote set to Salsa.
Your main Debian branch name on Salsa is master. Personally I think we should use main, but changing your main branch name is outside the scope of this article.
You have enough familiarity with Debian packaging including concepts like source and binary packages, and NEW review.
Your co-maintainers are also adopting the new approach.
tag2upload and dgit (and git-debrebase) are flexible tools and can help with many other scenarios too, and you can often mix-and-match different approaches. But, explaining every possibility would make this post far too confusing.
This article will guide you in adopting:
In Debian we need to be able to modify the upstream-provided source code. Those modifications are theDebian delta. We need to somehow represent it in git.
We recommend storing the delta as git commits to those upstream files, by picking one of the following two approaches.
rationale
Much traditional Debian tooling, like quilt and gbp pq, uses the "patches-unapplied" branch format, which stores the delta as patch files in debian/patches/, in a git tree full of unmodified upstream files. This is clumsy to work with, and can even be an alarming beartrap for Debian outsiders.
Option 1: simply use git, directly, including git merge.
Just make changes directly to upstream files on your Debian branch, when necessary. Use plain git merge when merging from upstream.
This is appropriate if your package has no or very few upstream changes. It is a good approach if the Debian maintainers and upstream maintainers work very closely, so that any needed changes for Debian are upstreamed quickly, and any desired behavioural differences can be arranged by configuration controlled from within debian/.
This is the approach documented more fully in our workflow tutorial dgit-maint-merge(7).
Option 2: Adopt git-debrebase.
git-debrebase helps maintain your delta as a linear series of commits (very like a "topic branch" in git terminology). The delta can be reorganised, edited, and rebased. git-debrebase is designed to help you carry a significant and complicated delta series.
The older versions of the Debian delta are preserved in the history. git-debrebase makes extra merges to make a fast-forwarding history out of the successive versions of the delta queue branch.
This is the approach documented more fully in our workflow tutorial dgit-maint-debrebase(7).
Examples of complex packages using this approach include src:xen and src:sbcl.
We recommend using upstream git, only and directly. You should ignore upstream tarballs completely.
rationale
Many maintainers have been importing upstream tarballs into git, for example by using gbp import-orig. But in reality the upstream tarball is an intermediate build product, not (just) source code. Using tarballs rather than git exposes us to additional supply chain attacks; indeed, the key activation part of the xz backdoor attack was hidden only in the tarball!

git offers better traceability than so-called "pristine" upstream tarballs. (The word "pristine" is even a joke by the author of pristine-tar!)
First, establish which upstream git tag corresponds to the version currently in Debian. For the sake of readability, I'm going to pretend that the upstream version is 1.2.3, and that upstream tagged it v1.2.3.
Edit debian/watch to contain something like this:
version=4
opts="mode=git" https://codeberg.org/team/package refs/tags/v(\d\S*)

You may need to adjust the regexp, depending on your upstream's tag name convention. If debian/watch had a files-excluded, you'll need to make a filtered version of upstream git.
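To check that the watch file actually finds upstream's tags, a dry run works; this is standard uscan usage:

uscan --no-download --verbose   # should report the newest upstream tag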
From now on we’ll generate our own .orig tarballs directly from git.
rationale
We need some "upstream tarball" for the 3.0 (quilt) source format to work with. It needs to correspond to the git commit we're using as our upstream. We don't need or want to use a tarball from upstream for this. The .orig is just needed so a nice legacy Debian source package (.dsc) can be generated.
Probably, the current .orig in the Debian archive is an upstream tarball, which may be different to the output of git-archive, and may possibly even have different contents to what's in git. The legacy archive has trouble with differing .origs for the "same upstream version".
So we must — until the next upstream release — change our idea of the upstream version number. We're going to add +git to Debian's idea of the upstream version. Manually make a tag with that name:
git tag -m "Compatibility tag for orig transition" v1.2.3+git v1.2.3~0
git push origin v1.2.3+git

If you are doing the packaging overhaul at the same time as a new upstream version, you can skip this part.
Prepare a new branch on top of upstream git, containing what we want:
git branch -f old-master # make a note of the old git representation
git reset --hard v1.2.3 # go back to the real upstream git tag
git checkout old-master :debian # take debian/* from old-master
git commit -m "Re-import Debian packaging on top of upstream git"
git merge --allow-unrelated-histories -s ours -m "Make fast forward from tarball-based history" old-master
git branch -d old-master # it's incorporated in our history now

If there are any patches, manually apply them to your main branch with git am, and delete the patch files (git rm -r debian/patches, and commit). (If you've chosen this workflow, there should be hardly any patches.)
rationale
These are some pretty nasty git runes, indeed. They’re needed because we want to restart our Debian packaging on top of a possibly quite different notion of what the upstream is.
Convert the branch to git-debrebase format and rebase onto the upstream git:
git-debrebase -fdiverged convert-from-gbp upstream/1.2.3
git-debrebase -fdiverged -fupstream-not-ff new-upstream 1.2.3+git

If you had patches which patched generated files which are present only in the upstream tarball, and not in upstream git, you will encounter rebase conflicts. You can drop hunks editing those files, since those files are no longer going to be part of your view of the upstream source code at all.
rationale
The force option -fupstream-not-ff will be needed this one time, because your existing Debian packaging history is (probably) not based directly on the upstream history. -fdiverged may be needed because git-debrebase might spot that your branch is not based on dgit-ish git history.
Manually make your history fast forward from the git import of your previous upload.
dgit fetch
git show dgit/dgit/sid:debian/changelog # check that you have the same version number
git merge -s ours --allow-unrelated-histories -m 'Declare fast forward from pre-git-based history' dgit/dgit/sid

Delete any existing debian/source/options and/or debian/source/local-options.
Change debian/source/format to 1.0. Add debian/source/options containing -sn.
rationale
We are using the "1.0 native" source format. This is the simplest possible source format - just a tarball. We would prefer "3.0 (native)", which has some advantages, but dpkg-source between 2013 (wheezy) and 2025 (trixie) inclusive unjustifiably rejects this configuration.
You may receive bug reports from over-zealous folks complaining about the use of the 1.0 source format. You should close such reports, with a reference to this article and to #1106402.
Ensure that debian/source/format contains 3.0 (quilt).
Now you are ready to do a local test build.
Edit README.source to at least mention dgit-maint-merge(7) or dgit-maint-debrebase(7), and to tell people not to try to edit or create anything in debian/patches/. Consider saying that uploads should be done via dgit or tag2upload.
Check that your Vcs-Git is correct in debian/control. Consider deleting or pruning debian/gbp.conf, since it isn't used by dgit, tag2upload, or git-debrebase.
Add a note to debian/changelog about the git packaging change.
git-debrebase new-upstream will have added a "new upstream version" stanza to debian/changelog. Edit that so that it instead describes the packaging change. (Don't remove the +git from the upstream version number there!)
In “Settings” / “Merge requests”, change “Squash commits when merging” to “Do not allow”.
rationale
Squashing could destroy your carefully-curated delta queue. It would also disrupt git-debrebase’s git branch structure.
gitlab is a giant pile of enterprise crap. It is full of startling bugs, many of which reveal a fundamentally broken design. It is only barely Free Software in practice for Debian (in the sense that we are very reluctant to try to modify it). The constant-churn development approach and open-core business model are serious problems. It's very slow (and resource-intensive). It can be depressingly unreliable. That Salsa works as well as it does is a testament to the dedication of the Debian Salsa team (and those who support them, including DSA).
However, I have found that despite these problems, Salsa CI is well worth the trouble. Yes, there are frustrating days when work is blocked because gitlab CI is broken and/or one has to keep mashing “Retry”. But, the upside is no longer having to remember to run tests, track which of my multiple dev branches tests have passed on, and so on. Automatic tests on Merge Requests are a great way of reducing maintainer review burden for external contributions, and helping uphold quality norms within a team. They’re a great boon for the lazy solo programmer.
The bottom line is that I absolutely love it when the computer thoroughly checks my work. This is tremendously freeing, precisely at the point when one most needs it — deep in the code. If the price is to occasionally be blocked by a confused (or broken) computer, so be it.
Create debian/salsa-ci.yml containing

include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml

In your Salsa repository, under "Settings" / "CI/CD", expand "General Pipelines" and set "CI/CD configuration file" to debian/salsa-ci.yml.
rationale
Your project may have an upstream CI config in .gitlab-ci.yml. But you probably want to run the Debian Salsa CI jobs.

You can add various extra configuration to debian/salsa-ci.yml to customise it. Consult the Salsa CI docs.
Add to debian/salsa-ci.yml:
.git-debrebase-prepare: &git-debrebase-prepare
  # install the tools we'll need
  - apt-get update
  - apt-get --yes install git-debrebase git-debpush
  # git-debrebase needs git user setup
  - git config user.email "salsa-ci@invalid.invalid"
  - git config user.name "salsa-ci"
  # run git-debrebase make-patches
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/371
  - git-debrebase --force
  - git-debrebase make-patches
  # make an orig tarball using the upstream tag, not a gbp upstream/ tag
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/541
  - git-deborig

.build-definition: &build-definition
  extends: .build-definition-common
  before_script: *git-debrebase-prepare

build source:
  extends: .build-source-only
  before_script: *git-debrebase-prepare

variables:
  # disable shallow cloning of git repository. This is needed for git-debrebase
  GIT_DEPTH: 0

rationale
Unfortunately the Salsa CI pipeline currently lacks proper support for git-debrebase (salsa-ci#371) and has trouble directly using upstream git for orig tarballs (salsa-ci#541).
These runes were based on those in the Xen package. You should subscribe to the tickets #371 and #541 so that you can replace the clone-and-hack when proper support is merged.
Push this to salsa and make the CI pass.
If you configured the pipeline filename after your last push, you will need to explicitly start the first CI run. That’s in “Pipelines”: press “New pipeline” in the top right. The defaults will very probably be correct.
In your project on Salsa, go into "Settings" / "Repository". In the section "Branch rules", use "Add branch rule". Select the branch master. Set "Allowed to merge" to "Maintainers". Set "Allowed to push and merge" to "No one". Leave "Allow force push" disabled.
This means that the only way to land anything on your mainline is via a Merge Request. When you make a Merge Request, gitlab will offer "Set to auto-merge". Use that.
gitlab won’t normally merge an MR unless CI passes, although you can override this on a per-MR basis if you need to.
(Sometimes, immediately after creating a merge request in gitlab, you will see a plain "Merge" button. This is a bug. Don't press that. Reload the page so that "Set to auto-merge" appears.)
Ideally, your package would have meaningful autopkgtests (DEP-8 tests). This makes Salsa CI more useful for you, and also helps detect and defend you against regressions in your dependencies.
The Debian CI docs are a good starting point. In-depth discussion of writing autopkgtests is beyond the scope of this article.
With this capable tooling, most tasks are much easier.
Make all changes via a Salsa Merge Request. So start by making a branch that will become the MR branch.
On your MR branch you can freely edit every file. This includes upstream files, and files in debian/.
For example, you can:
git cherry-pick an upstream commit.
git am a patch from a mailing list or from the Debian Bug System.
git revert an earlier commit, even an upstream one.

When you have a working state of things, tidy up your git branch:
Use git-rebase to squash/edit/combine/reorder commits.
Use git-debrebase -i to squash/edit/combine/reorder commits. When you are happy, run git-debrebase conclude.
Do not edit debian/patches/. With git-debrebase, this is purely an output. Edit the upstream files directly instead. To reorganise/maintain the patch queue, use git-debrebase -i to edit the actual commits.
Push the MR branch (topic branch) to Salsa and make a Merge Request.
Set the MR to “auto-merge when all checks pass”. (Or, depending on your team policy, you could ask for an MR Review of course.)
If CI fails, fix up the MR branch, squash/tidy it again, force push the MR branch, and once again set it to auto-merge.
An informal test build can be done like this:

apt-get build-dep .
dpkg-buildpackage -uc -b

Ideally this will leave git status clean, with no modified or un-ignored untracked files. If it shows untracked files, add them to .gitignore or debian/.gitignore as applicable.
If it dirties the tree, consider trying to make it stop doing that. The easiest way is probably to build out-of-tree, if supported upstream. If this is too difficult, you can leave the messy build arrangements as they are, but you’ll need to be disciplined about always committing, using git clean and git reset, and so on.
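That discipline might look like the following sketch; note that it is destructive, and assumes everything you care about is already committed:

git status --short   # first, review what the build dirtied
git clean -xdf       # remove untracked and ignored files (build products)
git reset --hard     # restore modified tracked files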
For formal binary builds, including for testing, use dgit sbuild as described below for uploading to NEW.
Start an MR branch for the administrative changes for the release.
Document all the changes you're going to release, in the debian/changelog.
gbp dch can help write the changelog for you:
dgit fetch sid
gbp dch --ignore-branch --since=dgit/dgit/sid --git-log=^upstream/main

rationale
--ignore-branch is needed because gbp dch wrongly thinks you ought to be running this on master, but of course you're running it on your MR branch.

The --git-log=^upstream/main excludes all upstream commits from the listing used to generate the changelog. (I'm assuming you have an upstream remote and that you're basing your work on their main branch.) If there was a new upstream version, you'll usually want to write a single line about that, and perhaps summarise anything really important.
(For the first upload after switching to using tag2upload or dgit you need --since=debian/1.2.3-1, where debian/1.2.3-1 is your previous DEP-14 tag, because dgit/dgit/sid will be a dsc import, not your actual history.)
Change UNRELEASED to the target suite, and finalise the changelog. (Note that dch will insist that you at least save the file in your editor.)
dch -r
git commit -m 'Finalise for upload' debian/changelog

Make an MR of these administrative changes, and merge it. (Either set it to auto-merge and wait for CI, or if you're in a hurry double-check that it really is just a changelog update so that you can be confident about telling Salsa to "Merge unverified changes".)
Now you can perform the actual upload:
git checkout master
git pull --ff-only # bring the gitlab-made MR merge commit into your local tree
git-debpush --quilt=linear

--quilt=linear is needed only the first time, but it is very important that first time, to tell the system the correct git branch layout.
If your package is NEW (completely new source, or has new binary packages) you can’t do a source-only upload. You have to build the source and binary packages locally, and upload those build artifacts.
Happily, given the same git branch you'd tag for tag2upload, and assuming you have sbuild installed and a suitable chroot, dgit can help take care of the build and upload for you:
Prepare the changelog update and merge it, as above. Then:
Create the orig tarball and launder the git-debrebase branch:
git-deborig
git-debrebase quick

rationale
Source package format 3.0 (quilt), which is what I’m recommending here for use with git-debrebase, needs an orig tarball; it would also be needed for 1.0-with-diff.
Build the source and binary packages, locally:
dgit sbuild
dgit push-built

rationale
You don’thave to use
dgit sbuild, but it is usually convenient to do so, because unlike sbuild, dgit understands git. Also it works around agitignore-related defect in dpkg-source.
Find the new upstream version number and corresponding tag. (Let's suppose it's 1.2.4.) Check the provenance:
git verify-tag v1.2.4

rationale
Not all upstreams sign their git tags, sadly. Sometimes encouraging them to do so can help. You may need to use some other method(s) to check that you have the right git commit for the release.
Simply merge the new upstream version and update the changelog:
git merge v1.2.4
dch -v1.2.4-1 'New upstream release.'

Rebase your delta queue onto the new upstream version:
git debrebase new-upstream 1.2.4

If there are conflicts between your Debian delta for 1.2.3, and the upstream changes in 1.2.4, this is when you need to resolve them, as part of git merge or git (deb)rebase.
After you’ve completed the merge, test your package and make any further needed changes. When you have it working in a local branch, make a Merge Request, as above.
git-based sponsorship is super easy! The sponsee can maintain their git branch on Salsa, and do all normal maintenance via gitlab operations.
When the time comes to upload, the sponsee notifies the sponsor that it's time. The sponsor fetches and checks out the git branch from Salsa, does their checks, as they judge appropriate, and when satisfied runs git-debpush.
As part of the sponsor’s checks, they might want to see all changes since the last upload to Debian:
dgit fetch sid
git diff dgit/dgit/sid..HEAD

Or to see the Debian delta of the proposed upload:
git verify-tag v1.2.3
git diff v1.2.3..HEAD ':!debian'

Or to show all the delta as a series of commits:
git log -p v1.2.3..HEAD ':!debian'

Don't look at debian/patches/. It can be absent or out of date.
Fetch the NMU into your local git, and see what it contains:
dgit fetch sid
git diff master...dgit/dgit/sid

If the NMUer used dgit, then git log dgit/dgit/sid will show you the commits they made.
Normally the best thing to do is to simply merge the NMU, and then do any reverts or rework in followup commits:
git merge dgit/dgit/sid

You should git-debrebase quick at this stage, to check that the merge went OK and the package still has a lineariseable delta queue.
Then make any followup changes that seem appropriate. Supposing your previous maintainer upload was 1.2.3-7, you can go back and see the NMU diff again with:
git diff debian/1.2.3-7...dgit/dgit/sid

The actual changes made to upstream files will always show up as diff hunks to those files. diff commands will often also show you changes to debian/patches/. Normally it's best to filter them out with git diff ... ':!debian/patches'
If you’d prefer to read the changes to the delta queue as an interdiff (diff of diffs), you can do something like
git checkout debian/1.2.3-7
git-debrebase --force make-patches
git diff HEAD...dgit/dgit/sid -- :debian/patches

to diff against a version with debian/patches/ up to date. (The NMU, in dgit/dgit/sid, will necessarily have the patches already up to date.)
Some upstreams ship non-free files of one kind or another. Often these are just in the tarballs, in which case basing your work on upstream git avoids the problem. But if the files are in upstream's git trees, you need to filter them out.
This advice is not for (legally or otherwise) dangerous files. If your package contains files that may be illegal, or hazardous, you need much more serious measures. In this case, even pushing the upstream git history to any Debian service, including Salsa, must be avoided. If you suspect this situation you should seek advice, privately and as soon as possible, from dgit-owner@d.o and/or the DFSG team. Thankfully, legally dangerous files are very rare in upstream git repositories, for obvious reasons.
Our approach is to make a filtered git branch, based on the upstream history, with the troublesome files removed. We then treat that as the upstream for all of the rest of our work.
rationale
Yes, this will end up including the non-free files in the git history, on official Debian servers. That’s OK. What’s forbidden is non-free material in the Debianised git tree, or in the source packages.
git checkout -b upstream-dfsg v1.2.3
git rm nonfree.exe
git commit -m "upstream version 1.2.3 DFSG-cleaned"
git tag -s -m "upstream version 1.2.3 DFSG-cleaned" v1.2.3+ds1
git push origin upstream-dfsg

And now, use 1.2.3+ds1, and the filtered branch upstream-dfsg, as the upstream version, instead of 1.2.3 and upstream/main. Follow the steps for Convert the git branch or New upstream version, as applicable, adding +ds1 into debian/changelog.
If you missed something and need to filter out more non-free files, re-use the same upstream-dfsg branch and bump the ds version, e.g. v1.2.3+ds2.
git checkout upstream-dfsg
git merge v1.2.4
git rm additional-nonfree.exe # if any
git commit -m "upstream version 1.2.4 DFSG-cleaned"
git tag -s -m "upstream version 1.2.4 DFSG-cleaned" v1.2.4+ds1
git push origin upstream-dfsg

If the files you need to remove keep changing, you could automate things with a small shell script debian/rm-nonfree containing appropriate git rm commands, as sketched below. If you use git rm -f it will succeed even if the git merge from real upstream has conflicts due to changes to non-free files.
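A hypothetical debian/rm-nonfree could be as small as this (file names invented for illustration):

#!/bin/sh
set -e
# -f removes the files even if the merge left them conflicted;
# --ignore-unmatch keeps the script idempotent across upstream versions
git rm -f --ignore-unmatch nonfree.exe
git rm -rf --ignore-unmatch docs/proprietary/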
rationale
Ideally uscan, which has a way of representing DFSG filtering patterns in debian/watch, would be able to do this, but sadly the relevant functionality is entangled with uscan's tarball generation.
Tarball contents: If you are switching from upstream tarballs to upstream git, you may find that the git tree is significantly different.
It may be missing files that your current build system relies on. If so, you definitely want to be using git, not the tarball. Those extra files in the tarball are intermediate built products, but in Debian we should be building from the real source! Fixing this may involve some work, though.
gitattributes:
For Reasons, the dgit and tag2upload system disregards and disables the use of .gitattributes to modify files as they are checked out.
Normally this doesn’t cause a problem so long as any orig tarballs are generated the same way (as they will be by tag2upload orgit-deborig). But if the package or build system relies on them, you may need to institute some workarounds, or, replicate the effect of the gitattributes as commits in git.
git submodules: git submodules are terrible and should never ever be used. But not everyone has got the message, so your upstream may be using them.
If you’re lucky, the code in the submodule isn’t used in which case you cangit rm the submodule.
I’ve tried to cover the most common situations. But software is complicated and there are many exceptions that this article can’t cover without becoming much harder to read.
You may want to look at:
dgit workflow manpages: As part of the git transition project, we have written workflow manpages, which are more comprehensive than this article. They’re centered around use of dgit, but also discuss tag2upload where applicable.
These cover a much wider range of possibilities, including (for example) choosing different source package formats, how to handle upstreams that publish only tarballs, etc. They are correspondingly much less opinionated.
Look in dgit-maint-merge(7) and dgit-maint-debrebase(7). There is also dgit-maint-gbp(7) for those who want to keep using gbp pq and/or quilt with a patches-unapplied branch.
NMUs are very easy with dgit. (tag2upload is usually less suitable than dgit, for an NMU.)
You can work with any package, in git, in a completely uniform way, regardless of maintainer git workflow. See dgit-nmu-simple(7).
Native packages (meaning packages maintained wholly within Debian) are much simpler. See dgit-maint-native(7).
tag2upload documentation: The tag2upload wiki page is a good starting point. There's the git-debpush(1) manpage of course.
dgit reference documentation:
There is a comprehensive command-line manual in dgit(1). A description of the dgit data model and Principles of Operation is in dgit(7), including coverage of out-of-course situations.
dgit is a complex and powerful program, so this reference material can be overwhelming. We recommend starting with a guide like this one, or the dgit-…(7) workflow tutorials.
Design and implementation documentation for tag2upload is linked to from the wiki.
Debian’s git transition blog post from December.
tag2upload and dgit are part of the git transition project, and aim to support a very wide variety of git workflows. tag2upload and dgit work well with existing git tooling, including git-buildpackage-based approaches.
git-debrebase is conceptually separate from, and functionally independent of, tag2upload and dgit. It's a git workflow and delta management tool, competing with gbp pq, manual use of quilt, git-dpm and so on.
git-debrebase reference documentation:
Of course there’s a comprehensive command-line manual ingit-debrebase(1).
git-debrebase is quick and easy to use, but it has a complex data model and sophisticated algorithms. This is documented in git-debrebase(5).

The following 42 15-bit values form a 2-disjunctive matrix (that is, no union of two values contains or equals a third value), or equivalently, a superimposed code:
000000000011111000000011100011000000101101100000001010110100000001101010001000001110001010000010011011000000100100110010000110010000110000110100001001000111001100000001000110000101001010000110001001010101000010001011000001100001100001010100001100010101000001101000000011010001000101001010010001000101010010110100000010011000010010010100001001010010100010010001010101100000100011000000100110011000100011000011001011000000100001001000110100010000101010100010100010100100011010000001100100000100101100100111000000100101000011000101000001001001101000010010010101001100100000110000001110000110000010001100110000100000011111110000000000
This shows that A286874 a(15) >= 42.
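Verifying a candidate list is cheap even if finding one is not. Here is a brute-force checker one could sketch (not the author's code; it assumes the 42 values sit one per line in a file codes.txt):

#!/bin/bash
mapfile -t rows < codes.txt
vals=()
for r in "${rows[@]}"; do vals+=( "$((2#$r))" ); done   # parse binary strings
n=${#vals[@]}
for ((i = 0; i < n; i++)); do
  for ((j = i; j < n; j++)); do     # j == i also catches plain containment
    u=$(( vals[i] | vals[j] ))      # union of two codewords
    for ((k = 0; k < n; k++)); do
      if (( k != i && k != j && (u & vals[k]) == vals[k] )); then
        echo "violation: union of rows $i and $j contains row $k"; exit 1
      fi
    done
  done
done
echo "OK: matrix is 2-disjunctive"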
If I had to make a guess, I'd say that equality holds, but I have nowhere near the computing resources to actually find the answer for sure. Stay tuned for news about a(14), though.

Registration and the Call for Proposals for DebConf 26 are now open. The 27th edition of the Debian annual conference will be held from July 20th to July 25th, 2026, in Santa Fe, Argentina.
The conference days will be preceded by DebCamp, which will take place from July 13th to July 19th, 2026.
The registration form can be accessed on the DebConf 26 website. After creating an account, click "register" in the profile section.
As always, basic registration for DebConf is free of charge for attendees. If you are attending the conference in a professional capacity or as a representative of your company, we kindly ask that you consider registering in one of our paid categories to help cover the costs of organizing the conference and to support subsidizing other community members.
The last day to register with guaranteed swag is June 14th.
We also encourage eligible individuals to apply for a diversity bursary. Travel, food, and accommodation bursaries are also available. More details can be found on the bursary info page.
The last day to apply for a bursary is April 1st. Applicants shouldreceive feedback on their bursary application by May 1st.
The call for proposals for talks, discussions and other activities is also open. To submit a proposal you need to create an account on the website, and then use the "Submit Talk" button in the profile section.
The last day to submit and have your proposal be considered for the main conference schedule, with video coverage guaranteed, is April 1st.
DebConf 26 is also accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org or visit the DebConf 26 website.
See you in Santa Fe,
The DebConf 26 Team
14 February, 2026 12:15PM by Carlos Henrique Lima Melara, Santiago Ruano Rincón

Current AI companies ignore licenses such as the GPL, and often train on anything they can scrape. This is not acceptable.
The AI companies ignore web conventions, e.g., they deep link images from your web sites (even adding ?utm_source=chatgpt.com to image URIs; I suggest that you return 403 on these requests), but do not direct visitors to your site.

You do not get a reliable way of opting out from generative AI training or use. For example, the only way to prevent your contents from being used in "Google AI Overviews" is to use data-nosnippet and cripple the snippet preview in Google.

The "AI" browsers such as Comet and Atlas do not identify as such, but rather pretend they are standard Chromium. There is no way to ban such AI use on your web site.
Generative AI overall is flooding the internet with garbage. It was estimated that a third of the content uploaded to YouTube is by now AI generated. This includes the same "veteran stories" crap in thousands of variants, as well as brainrot content (which at least does not pretend to be authentic), some of which is among the most viewed recent uploads. Hence, these platforms even benefit from the AI slop. And don't blame the "creators" – because you can currently earn a decent amount of money from such contents, people will generate brainrot content.
If you have recently tried to find honest reviews of products you considered buying, you will have noticed thousands of sites with AI-generated fake product reviews, all financed by Amazon PartnerNet commissions. Often with hilarious nonsense such as recommending "sewing thread with German instructions" as a tool for repairing a sewing machine. And on Amazon, there are plenty of AI-generated product reviews – the use of emoji is a strong hint. And if you leave a negative product review, there is a chance they offer you a refund to get rid of it… And the majority of SPAM that gets through my filters is by now sent via Gmail and Amazon SES.
Partially because of GenAI, StackOverflow is pretty much dead – it used to be one of the most valuable programming resources. (While a lot of people complain about moderation, famous moderator Shog9 from the early SO days suggested that a change in Google's ranking is also to blame, as it began favoring "new" content over the existing answered questions – causing more and more duplicates to be posted because people no longer found the existing good answers.) In January 2026, there were around 3400 questions and 6000 answers posted, less than in the first month of SO in August 2008 (before the official launch).
Many open-source projects are suffering in many ways; e.g., false bug reports caused curl to stop its bug bounty program. Wikipedia is also suffering badly from GenAI.
Science is also flooded with poor AI-generated papers, often reviewed with help from AI. This is largely due to bad incentives: to graduate, you are expected to publish many papers at certain "A" conferences, such as NeurIPS. At these conferences, the number of submissions is growing at an insane rate, and the review quality plummets. All too often, the references in these papers are hallucinated, too; and libraries complain that they receive more and more requests to locate literature that does not appear to exist.
However, the worst effect (at least to me as an educator) is the noskilling effect (a rather novel term derived from deskilling; I have only seen it in this article by Weßels and Maibaum).
Instead of acquiring skills (writing, reading, summarizing, programming) by practising, too many people now outsource all this to AI, leading to them not learning the basics necessary to advance to a higher skill level. In my impression, this effect is dramatic. It is even worse than deskilling, as it does not mean losing an advanced skill that you apparently can replace, but often means not acquiring basic skills in the first place. And the earlier pupils start using generative AI, the fewer skills they acquire.
Let's dogfood the AI. Here's an outline:
Here is an example prompt that you can use:
```
You are a university educator, preparing homework assignments in debugging.
The programming language used is {lang}.
The students are tasked to find bugs in given code.
Do not just call existing implementations from libraries, but implement the algorithm from scratch.
Make sure there are two mistakes in the code that need to be discovered by the students.
Do NOT repeat instructions. Do NOT add small-talk. Do NOT provide a solution.
The code may have (misleading) comments, but must NOT mention the bugs.
If you do not know how to implement the algorithm, output an empty response.
Output only the code for the assignment! Do not use markdown.
Begin with a code comment that indicates the algorithm name and idea.
If you indicate a bug, always use a comment with the keyword BUG
Generate a {lang} implementation (with bugs) of: {n} ({desc})
Remember to remove the BUG comments!
```

If you pick some slightly less common programming languages (by quantity of available code, say Go or Rust) you have higher chances that this gets into the training data.
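Filling the {lang}, {n} and {desc} placeholders is easy to script. A minimal sketch, assuming the openai Python client and a placeholder model name (neither is prescribed by the post; any LLM interface would do):

```python
# Hypothetical driver for the prompt template quoted above; PROMPT would
# hold the full template with its {lang}, {n} and {desc} placeholders.
from openai import OpenAI

PROMPT = "You are a university educator, ... of: {n} ({desc})"  # full template here

client = OpenAI()  # reads OPENAI_API_KEY from the environment

algorithms = [
    ("Go", "heapsort", "in-place heap sort of an integer slice"),
    ("Rust", "edit distance", "dynamic-programming Levenshtein distance"),
]

for lang, name, desc in algorithms:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": PROMPT.format(lang=lang, n=name, desc=desc)}],
    )
    print(response.choices[0].message.content)
```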
If many of us do this, we can feed GenAI its own garbage. If we generate thousands of bad code examples, this will poison their training data, and may eventually lead to an effect known as “model collapse”.
In the long run, we need to get back to an internet for people, not an internet for bots. Some kind of "internet 2.0", but I do not have a clear vision of how to keep AI out – if AI can train on it, they will. And someone will copy and paste the AI-generated crap back into whatever system we built. Hence I don't think technology is the answer here, but human networks of trust.
13 February, 2026 10:29AM by Erich Schubert

Version 0.0.27 of RcppSpdlog arrived on CRAN moments ago, and will be uploaded to Debian and built for r2u shortly. The (nice) documentation site will be refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.
Brian Ripley has now turned C++20 on as a default for R-devel (aka R 4.6.0 'to be'), and this turned up misbehavior in packages using RcppSpdlog, such as our spdl wrapper (offering a nicer interface from both R and C++), when relying on std::format. So for now, we turned this off and remain with fmt::format from the fmt library while we investigate further.
The NEWS entry for this release follows.
Changes in RcppSpdlog version 0.0.27 (2026-02-11)

- Under C++20 or later, keep relying on fmt::format until issues experienced using std::format can be identified and resolved
Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.
In version 1.10.1, Meson merged a patch to make it call the correct g-ir-scanner by default thanks to Eli Schwarz. This problem affected more than 130 source packages. Helmut retried building them all and filed 69 patches as a result. A significant portion of those packages require another Meson change to call the correct vapigen. Another notable change is converting gnu-efi to multiarch, which ended up requiring changes to a number of other packages. Since Aurelien dropped the libcrypt-dev dependency from libc6-dev, this transition now is mostly complete and has resulted in most of the Perl ecosystem correctly expressing perl-xs-dev dependencies needed for cross building. It is these infrastructure changes affecting several client packages that this work targets. As a result of this continued work, about 66% of Debian's source packages now have satisfiable cross Build-Depends in unstable and about 10000 (55%) actually can be cross built. There are now more than 500 open bug reports affecting more than 2000 packages, most of which carry patches.
Maintaining architecture cross-bootstrap requires continued effort for adapting to archive changes such as glib2.0 dropping a build profile or an e2fsprogs FTBFS. Beyond those generic problems, architecture-specific problems with e.g. musl-linux-any or sparc may arise. While all these changes move things forward on the surface, the bootstrap tooling has become a growing pile of patches. Helmut managed to upstream two changes to glibc for reducing its Build-Depends in the stage2 build profile and thanks Aurelien Jarno.
Debian Enhancement Proposal #3 (DEP-3) is named "Patch Tagging Guidelines" and standardizes meta-information that Debian contributors can put in patches included in Debian source packages. With the feedback received over the years, and with the change in the package management landscape, the need to refresh those guidelines became evident. As the initial driver of that DEP, I spent a good day reviewing all the feedback (that I kept in a folder) and producing a new version of the document. The changes aim to give more weight to the syntax that is compatible with git format-patch's output, and also to clarify the expected uses and meanings of a couple of fields, including some algorithm that parsers should follow to define the state of the patch. After the announcement of the new draft on debian-devel, the revised DEP-3 received a significant number of comments that I still have to process.
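To make this concrete, here is the general shape of a DEP-3 header in the git format-patch compatible syntax; all field values below are invented for illustration:

```
From: Jane Doe <jane@example.org>
Date: Sun, 1 Feb 2026 12:00:00 +0100
Subject: Fix crash when the configuration file is empty

Bug-Debian: https://bugs.debian.org/123456
Forwarded: https://example.org/upstream/merge_requests/42
Last-Update: 2026-02-01
```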
- debvm: making it work with unstable as a target distribution again.
- rocblas: package to forky.
- publican, and some maintenance for tracker.debian.org.
- festival Debian package: systemd socket activation and systemd service and socket units. Adapted the patch for upstream and created a merge request (also fixed a MacOS X build system error while working on it). Updated Orca Wiki documentation regarding festival. Discussed a 2007 bug/feature in festival which allowed having a local shell; the new systemd socket activation has the same code path.
- "abcde" package.
- python3-defaults and dh-python in support of Python 3.14-as-default in Ubuntu. Also investigated the risk of ignoring byte-compilation failures by default, and started down the road of implementing this.
- python-virtualenv and python-flexmock.
- knot-dns and knot-resolver (also less complex software, which results in advantages in terms of security: only three CVEs have been reported for knot-dns since 2011).
- pkg_resources module.
- groff 1.24.0 (the first upstream release since mid-2023, so a very large set of changes) into experimental.
- ruby-rbpdf, jekyll, origami-pdf, ruby-kdl, ruby-twitter, ruby-twitter-text, ruby-globalid.

12 February, 2026 12:00AM by Anupa Ann Joseph
Debusine is a tool designed for Debian developers and Operating System developers in general. You can try out Debusine on debusine.debian.net, and follow its development on salsa.debian.org.
This post describes how to write a new worker task for Debusine. It can be used to add tasks to a self-hosted Debusine instance, or to submit to the Debusine project new tasks to add new capabilities to Debusine.
Tasks are the lower-level pieces of Debusine workflows. Examples of tasks are Sbuild, Lintian, Debdiff (see the available tasks).
This post will document the steps to write a new basic worker task. The example will add a worker task that runs reprotest and creates an artifact of the new type ReprotestArtifact with the reprotest log.
Tasks are usually used by workflows. Workflows solve high-level goals by creating and orchestrating different tasks (e.g. a Sbuild workflow would create different Sbuild tasks, one for each architecture).
A task usually does the following:
- runs a command (lintian, debdiff, etc.); in this blog post, it will run reprotest
- reports a result: Success or Failure

If you want to follow the tutorial and add the Reprotest task, your Debusine development instance should have at least one worker, one user, a debusine client set up, and permissions for the client to create tasks. All of this can be set up following the steps in the Contribute section of the documentation.
This blog post shows a functional Reprotest task. This task is not currently part of Debusine. The Reprotest task implementation is simplified (no error handling, unit tests, specific view, docs, some shortcuts in the environment preparation, etc.). At some point, in Debusine, we might add a debrebuild task which is based on buildinfo files and uses snapshot.debian.org to recreate the binary packages.
The input of the reprotest task will be a source artifact (a Debian source package). We model the input with pydantic in debusine/tasks/models.py:
```python
class ReprotestData(BaseTaskDataWithExecutor):
    """Data for Reprotest task."""

    source_artifact: LookupSingle


class ReprotestDynamicData(BaseDynamicTaskDataWithExecutor):
    """Reprotest dynamic data."""

    source_artifact_id: int | None = None
```

The ReprotestData is what the user will input. A LookupSingle is a lookup that resolves to a single artifact.
We would also have configuration for the desired variations to test, but we have left that out of this example for simplicity. Configuring variations is left as an exercise for the reader.
Since ReprotestData is a subclass of BaseTaskDataWithExecutor it also contains environment, where the user can specify in which environment the task will run. The environment is an artifact with a Debian image.
The ReprotestDynamicData holds the resolution of all lookups. These can be seen in the "Internals" tab of the work request view.
Reprotest artifact data class

In order for the reprotest task to create a new Artifact of the type DebianReprotest with the log and output metadata, add the new category to ArtifactCategory in debusine/artifacts/models.py:
REPROTEST ="debian:reprotest"In the same file add theDebianReprotest class:
```python
class DebianReprotest(ArtifactData):
    """Data for debian:reprotest artifacts."""

    reproducible: bool | None = None

    def get_label(self) -> str:
        """Return a short human-readable label for the artifact."""
        return "reprotest analysis"
```

It could also include the package name or version.
In order to have the category listed in the work request output artifacts table, edit the file debusine/db/models/artifacts.py: in ARTIFACT_CATEGORY_ICON_NAMES add ArtifactCategory.REPROTEST: "folder", and in ARTIFACT_CATEGORY_SHORT_NAMES add ArtifactCategory.REPROTEST: "reprotest".
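A sketch of those two additions (the existing entries of both dictionaries are elided):

```python
# In debusine/db/models/artifacts.py
ARTIFACT_CATEGORY_ICON_NAMES = {
    # ... existing entries ...
    ArtifactCategory.REPROTEST: "folder",
}

ARTIFACT_CATEGORY_SHORT_NAMES = {
    # ... existing entries ...
    ArtifactCategory.REPROTEST: "reprotest",
}
```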
In debusine/tasks/ create a new file reprotest.py.
```python
# Copyright © The Debusine Developers
# See the AUTHORS file at the top-level directory of this distribution
#
# This file is part of Debusine. It is subject to the license terms
# in the LICENSE file found in the top-level directory of this
# distribution. No part of Debusine, including this file, may be copied,
# modified, propagated, or distributed except according to the terms
# contained in the LICENSE file.

"""Task to use reprotest in debusine."""

from pathlib import Path
from typing import Any

from debusine import utils
from debusine.artifacts.local_artifact import ReprotestArtifact
from debusine.artifacts.models import (
    ArtifactCategory,
    CollectionCategory,
    DebianSourcePackage,
    DebianUpload,
    WorkRequestResults,
    get_source_package_name,
    get_source_package_version,
)
from debusine.client.models import RelationType
from debusine.tasks import BaseTaskWithExecutor, RunCommandTask
from debusine.tasks.models import ReprotestData, ReprotestDynamicData
from debusine.tasks.server import TaskDatabaseInterface


class Reprotest(
    RunCommandTask[ReprotestData, ReprotestDynamicData],
    BaseTaskWithExecutor[ReprotestData, ReprotestDynamicData],
):
    """Task to use reprotest in debusine."""

    TASK_VERSION = 1

    CAPTURE_OUTPUT_FILENAME = "reprotest.log"

    def __init__(
        self,
        task_data: dict[str, Any],
        dynamic_task_data: dict[str, Any] | None = None,
    ) -> None:
        """Initialize object."""
        super().__init__(task_data, dynamic_task_data)
        self._reprotest_target: Path | None = None

    def build_dynamic_data(
        self, task_database: TaskDatabaseInterface
    ) -> ReprotestDynamicData:
        """Compute and return ReprotestDynamicData."""
        input_source_artifact = task_database.lookup_single_artifact(
            self.data.source_artifact
        )
        assert input_source_artifact is not None
        self.ensure_artifact_categories(
            configuration_key="input.source_artifact",
            category=input_source_artifact.category,
            expected=(
                ArtifactCategory.SOURCE_PACKAGE,
                ArtifactCategory.UPLOAD,
            ),
        )
        assert isinstance(
            input_source_artifact.data, (DebianSourcePackage, DebianUpload)
        )
        subject = get_source_package_name(input_source_artifact.data)
        version = get_source_package_version(input_source_artifact.data)

        assert self.data.environment is not None
        environment = self.get_environment(
            task_database,
            self.data.environment,
            default_category=CollectionCategory.ENVIRONMENTS,
        )

        return ReprotestDynamicData(
            source_artifact_id=input_source_artifact.id,
            subject=subject,
            parameter_summary=f"{subject}_{version}",
            environment_id=environment.id,
        )

    def get_input_artifacts_ids(self) -> list[int]:
        """Return the list of input artifact IDs used by this task."""
        if not self.dynamic_data:
            return []
        return [
            self.dynamic_data.source_artifact_id,
            self.dynamic_data.environment_id,
        ]

    def fetch_input(self, destination: Path) -> bool:
        """Download the required artifacts."""
        assert self.dynamic_data
        artifact_id = self.dynamic_data.source_artifact_id
        assert artifact_id is not None
        self.fetch_artifact(artifact_id, destination)
        return True

    def configure_for_execution(self, download_directory: Path) -> bool:
        """
        Find a .dsc in download_directory.

        Install reprotest and other utilities used in _cmdline.

        Set self._reprotest_target to it.

        :param download_directory: where to search the files
        :return: True if valid files were found
        """
        self._prepare_executor_instance()

        if self.executor_instance is None:
            raise AssertionError("self.executor_instance cannot be None")

        self.run_executor_command(
            ["apt-get", "update"],
            log_filename="install.log",
            run_as_root=True,
            check=True,
        )
        self.run_executor_command(
            [
                "apt-get",
                "--yes",
                "--no-install-recommends",
                "install",
                "reprotest",
                "dpkg-dev",
                "devscripts",
                "equivs",
                "sudo",
            ],
            log_filename="install.log",
            run_as_root=True,
        )

        self._reprotest_target = utils.find_file_suffixes(
            download_directory, [".dsc"]
        )

        return True

    def _cmdline(self) -> list[str]:
        """
        Build the reprotest command line.

        Use configuration of self.data and self._reprotest_target.
        """
        target = self._reprotest_target
        assert target is not None

        cmd = [
            "bash",
            "-c",
            f"TMPDIR=/tmp ; cd /tmp ; dpkg-source -x {target} package/; "
            "cd package/ ; mk-build-deps ; apt-get install --yes ./*.deb ; "
            "rm *.deb ; "
            "reprotest --vary=-time,-user_group,-fileordering,-domain_host .",
        ]

        return cmd

    @staticmethod
    def _cmdline_as_root() -> bool:
        r"""apt-get install --yes ./\*.deb must be run as root."""
        return True

    def task_result(
        self,
        returncode: int | None,
        execute_directory: Path,  # noqa: U100
    ) -> WorkRequestResults:
        """
        Evaluate task output and return success.

        For a successful run of reprotest:

        - must have the output file
        - exit code is 0

        :return: WorkRequestResults.SUCCESS or WorkRequestResults.FAILURE.
        """
        reprotest_file = execute_directory / self.CAPTURE_OUTPUT_FILENAME

        if reprotest_file.exists() and returncode == 0:
            return WorkRequestResults.SUCCESS

        return WorkRequestResults.FAILURE

    def upload_artifacts(
        self, exec_directory: Path, *, execution_result: WorkRequestResults
    ) -> None:
        """Upload the ReprotestArtifact with the files and relationships."""
        if not self.debusine:
            raise AssertionError("self.debusine not set")

        assert self.dynamic_data is not None
        assert self.dynamic_data.parameter_summary is not None
        reprotest_artifact = ReprotestArtifact.create(
            reprotest_output=exec_directory / self.CAPTURE_OUTPUT_FILENAME,
            reproducible=execution_result == WorkRequestResults.SUCCESS,
            package=self.dynamic_data.parameter_summary,
        )

        uploaded = self.debusine.upload_artifact(
            reprotest_artifact,
            workspace=self.workspace_name,
            work_request=self.work_request_id,
        )

        assert self.dynamic_data is not None
        assert self.dynamic_data.source_artifact_id is not None
        self.debusine.relation_create(
            uploaded.id,
            self.dynamic_data.source_artifact_id,
            RelationType.RELATES_TO,
        )
```

Below are the main methods with some basic explanation.
In order for Debusine to discover the task, add "Reprotest" in the file debusine/tasks/__init__.py to the __all__ list.
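That change looks like this (neighbouring entries elided):

```python
# In debusine/tasks/__init__.py
__all__ = [
    # ... existing task names ...
    "Reprotest",
]
```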
Let's explain the different methods of the Reprotest class:
build_dynamic_data method

The worker has no access to Debusine's database. Lookups are all resolved before the task gets dispatched to a worker, so all it has to do is download the specified input artifacts.
The build_dynamic_data method looks up the artifact, asserts that it has a valid category, extracts the package name and version, and gets the environment in which the task will be executed.
The environment is needed to run the task (reprotest will run in a container using unshare, incus…).
```python
def build_dynamic_data(
    self, task_database: TaskDatabaseInterface
) -> ReprotestDynamicData:
    """Compute and return ReprotestDynamicData."""
    input_source_artifact = task_database.lookup_single_artifact(
        self.data.source_artifact
    )
    assert input_source_artifact is not None
    self.ensure_artifact_categories(
        configuration_key="input.source_artifact",
        category=input_source_artifact.category,
        expected=(
            ArtifactCategory.SOURCE_PACKAGE,
            ArtifactCategory.UPLOAD,
        ),
    )
    assert isinstance(
        input_source_artifact.data, (DebianSourcePackage, DebianUpload)
    )
    subject = get_source_package_name(input_source_artifact.data)
    version = get_source_package_version(input_source_artifact.data)

    assert self.data.environment is not None
    environment = self.get_environment(
        task_database,
        self.data.environment,
        default_category=CollectionCategory.ENVIRONMENTS,
    )

    return ReprotestDynamicData(
        source_artifact_id=input_source_artifact.id,
        subject=subject,
        parameter_summary=f"{subject}_{version}",
        environment_id=environment.id,
    )
```

get_input_artifacts_ids method

Used to list the task's input artifacts in the web UI.
```python
def get_input_artifacts_ids(self) -> list[int]:
    """Return the list of input artifact IDs used by this task."""
    if not self.dynamic_data:
        return []
    assert self.dynamic_data.source_artifact_id is not None
    return [self.dynamic_data.source_artifact_id]
```

fetch_input method

Download the required artifacts on the worker.
```python
def fetch_input(self, destination: Path) -> bool:
    """Download the required artifacts."""
    assert self.dynamic_data
    artifact_id = self.dynamic_data.source_artifact_id
    assert artifact_id is not None
    self.fetch_artifact(artifact_id, destination)
    return True
```

configure_for_execution method

Install the packages needed by the task and set _reprotest_target, which is used to build the task's command line.
```python
def configure_for_execution(self, download_directory: Path) -> bool:
    """
    Find a .dsc in download_directory.

    Install reprotest and other utilities used in _cmdline.

    Set self._reprotest_target to it.

    :param download_directory: where to search the files
    :return: True if valid files were found
    """
    self._prepare_executor_instance()

    if self.executor_instance is None:
        raise AssertionError("self.executor_instance cannot be None")

    self.run_executor_command(
        ["apt-get", "update"],
        log_filename="install.log",
        run_as_root=True,
        check=True,
    )
    self.run_executor_command(
        [
            "apt-get",
            "--yes",
            "--no-install-recommends",
            "install",
            "reprotest",
            "dpkg-dev",
            "devscripts",
            "equivs",
            "sudo",
        ],
        log_filename="install.log",
        run_as_root=True,
    )

    self._reprotest_target = utils.find_file_suffixes(
        download_directory, [".dsc"]
    )

    return True
```

_cmdline method

Return the command line to run the task.
In this case, and to keep the example simple, we will run reprotest directly in the worker's executor VM/container, without giving it an isolated virtual server.
So, this command installs the build dependencies required by the package (so reprotest can build it) and runs reprotest itself.
```python
def _cmdline(self) -> list[str]:
    """
    Build the reprotest command line.

    Use configuration of self.data and self._reprotest_target.
    """
    target = self._reprotest_target
    assert target is not None

    cmd = [
        "bash",
        "-c",
        f"TMPDIR=/tmp ; cd /tmp ; dpkg-source -x {target} package/; "
        "cd package/ ; mk-build-deps ; apt-get install --yes ./*.deb ; "
        "rm *.deb ; "
        "reprotest --vary=-time,-user_group,-fileordering,-domain_host .",
    ]

    return cmd
```

Some reprotest variations are disabled. This is to keep the example simple with the set of packages to install and reprotest features.
_cmdline_as_root method

Since packages need to be installed during execution, the command is run as root (in the container):
```python
@staticmethod
def _cmdline_as_root() -> bool:
    r"""apt-get install --yes ./\*.deb must be run as root."""
    return True
```

task_result method

The task succeeds if a log is generated and the return code is 0.
```python
def task_result(
    self,
    returncode: int | None,
    execute_directory: Path,  # noqa: U100
) -> WorkRequestResults:
    """
    Evaluate task output and return success.

    For a successful run of reprotest:

    - must have the output file
    - exit code is 0

    :return: WorkRequestResults.SUCCESS or WorkRequestResults.FAILURE.
    """
    reprotest_file = execute_directory / self.CAPTURE_OUTPUT_FILENAME

    if reprotest_file.exists() and returncode == 0:
        return WorkRequestResults.SUCCESS

    return WorkRequestResults.FAILURE
```

upload_artifacts method

Create the ReprotestArtifact with the log and the reproducible boolean, upload it, and then add a relation between the ReprotestArtifact and the source package:
```python
def upload_artifacts(
    self, exec_directory: Path, *, execution_result: WorkRequestResults
) -> None:
    """Upload the ReprotestArtifact with the files and relationships."""
    if not self.debusine:
        raise AssertionError("self.debusine not set")

    assert self.dynamic_data is not None
    assert self.dynamic_data.parameter_summary is not None
    reprotest_artifact = ReprotestArtifact.create(
        reprotest_output=exec_directory / self.CAPTURE_OUTPUT_FILENAME,
        reproducible=execution_result == WorkRequestResults.SUCCESS,
        package=self.dynamic_data.parameter_summary,
    )

    uploaded = self.debusine.upload_artifact(
        reprotest_artifact,
        workspace=self.workspace_name,
        work_request=self.work_request_id,
    )

    assert self.dynamic_data is not None
    assert self.dynamic_data.source_artifact_id is not None
    self.debusine.relation_create(
        uploaded.id,
        self.dynamic_data.source_artifact_id,
        RelationType.RELATES_TO,
    )
```

To run this task in a local Debusine (see steps to have it ready with an environment, permissions and users created) you can do:
```
$ python3 -m debusine.client artifact import-debian -w System http://deb.debian.org/debian/pool/main/h/hello/hello_2.10-5.dsc
```

(get the artifact ID from the output of that command)
The artifact can be seen in http://$DEBUSINE/debusine/System/artifact/$ARTIFACT_ID/.
Then create a reprotest.yaml:
$ cat <<EOF > reprotest.yamlsource_artifact: $ARTIFACT_IDenvironment: "debian/match:codename=bookworm"EOFInstead ofdebian/match:codename=bookworm it could use the artifact ID.
Finally, create the work request to run the task:
```
$ python3 -m debusine.client create-work-request -w System reprotest --data reprotest.yaml
```

Using the Debusine web UI you can see the work request, which should go to Running status, then Completed with Success or Failure (depending on whether reprotest could reproduce the package or not). Clicking on the Output tab shows an artifact of type debian:reprotest with one file: the log. In the Metadata tab of the artifact it has Data: the package name and reproducible (true or false).
This was a simple example of creating a task. Other things that could be done:
- configuring reprotest variations
- running reprotest directly on the worker host, using the executor environment as a reprotest "virtual server"
- improving prepare_environment
- integrating the task in a workflow (e.g. the QaWorkflow)

10 February, 2026 12:00AM by Carles Pina i Estany

About 80% of my Debian contributions this month were sponsored by Freexian, as well as one direct donation via GitHub Sponsors (thanks!). If you appreciate this sort of work and are at a company that uses Debian, have a look to see whether you can pay for any of Freexian's services; as well as the direct benefits, that revenue stream helps to keep Debian development sustainable for me and several other lovely people.
You can also support my work directly via Liberapay or GitHub Sponsors.
New upstream versions:
- … (pkg_resources)
- … (pkg_resources)
- … (pkg_resources)

Fixes for Python 3.14:
Fixes for pytest 9:
Porting away from the deprecated pkg_resources:
Other build/test failures:
- global logged_msgs is unused: name is never assigned in scope (NMU)

I investigated several more build failures and suggested removing the packages in question:
Other bugs:
Alejandro Colomar reported that man(1) ignored the MANWIDTH environment variable in some circumstances. I investigated this and fixed it upstream.
I contributed an ubuntu-dev-tools patch to stop recommending sudo.
I added forky support to the images used in Salsa CI pipelines.
I began working on getting a release candidate of groff 1.24.0 into experimental, though haven’t finished that yet.
I worked on some lower-priority security updates for OpenSSH.
08 February, 2026 07:30PM by Colin Watson

Both R and Python make it reasonably easy to work with compiled extensions. But how to access objects in one environment from the other and share state or (non-trivial) objects remains trickier. Recently (and while r-forge was 'resting', so we opened GitHub Discussions) a question was asked concerning R and Python object pointer exchange.
This led to a pretty decent discussion including arrow interchange demos (pretty ideal if dealing with data.frame-alike objects), but once the focus is on more 'library-specific' objects from a given (C or C++, say) library it is less clear what to do, or how involved it may get.
R has external pointers, and these make it feasible to instantiate the same object in Python. To demonstrate, I created a pair of (minimal) packages wrapping a lovely (small) class from the excellent spdlog library by Gabi Melman, and more specifically in an adapted-for-R version (to avoid some R CMD check nags) in my RcppSpdlog package. It is essentially a nicer/fancier C++ version of the tic() and toc() timing scheme. When an object is instantiated, it 'starts the clock', and when we access it later it prints the time elapsed in microsecond resolution. In Modern C++ this takes little more than keeping an internal chrono object.
Which makes for a nice, small, yet specific object to pass to Python. So the R side of the package pair instantiates such an object, and accesses its address. For different reasons, sending a 'raw' pointer across does not work so well, but a string with the address printed works fabulously (and is a paradigm used around other packages, so we did not invent this). Over on the Python side of the package pair, we then take this string representation and pass it to a little bit of pybind11 code to instantiate a new object. This can of course also expose functionality such as the 'show time elapsed' feature, either formatted or just numerically, of interest here.
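The Python side then needs very little code. A minimal sketch of the intended usage, based on the demo below (the address string is a placeholder; in practice it comes from xptr::xptr_address() in R):

```python
# Hypothetical Python-side usage of the chronometre package: rebuild a
# Stopwatch from the pointer address that R exported as a string.
import chronometre

address = "0x58adb5918510"           # placeholder; produced by xptr in R
sw = chronometre.Stopwatch(address)  # string constructor wraps the pointer
print(sw.elapsed())                  # datetime.timedelta of elapsed time
print(sw.count())                    # elapsed seconds as a float
```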
And that is all that there is! Now this can be done from R as well thanks to reticulate, as the demo() (also shown in the package README.md) shows:
>library(chronometre)>demo("chronometre",ask=FALSE)demo(chronometre)----~~~~~~~~~~~>#!/usr/bin/env r>>stopifnot("Demo requires 'reticulate'"=requireNamespace("reticulate",quietly=TRUE))>stopifnot("Demo requires 'RcppSpdlog'"=requireNamespace("RcppSpdlog",quietly=TRUE))>stopifnot("Demo requires 'xptr'"=requireNamespace("xptr",quietly=TRUE))>library(reticulate)>## reticulate and Python in general these days really want a venv so we will use one,>## the default value is a location used locally; if needed create one>## check for existing virtualenv to use, or else set one up> venvdir<-Sys.getenv("CHRONOMETRE_VENV","/opt/venv/chronometre")>if (dir.exists(venvdir)) {+>use_virtualenv(venvdir,required =TRUE)+> }else {+>## create a virtual environment, but make it temporary+>Sys.setenv(RETICULATE_VIRTUALENV_ROOT=tempdir())+>virtualenv_create("r-reticulate-env")+>virtualenv_install("r-reticulate-env",packages =c("chronometre"))+>use_virtualenv("r-reticulate-env",required =TRUE)+> }> sw<- RcppSpdlog::get_stopwatch()# we use a C++ struct as example>Sys.sleep(0.5)# imagine doing some code here>print(sw)# stopwatch shows elapsed time0.501220> xptr::is_xptr(sw)# this is an external pointer in R[1]TRUE> xptr::xptr_address(sw)# get address, format is "0x...."[1]"0x58adb5918510"> sw2<- xptr::new_xptr(xptr::xptr_address(sw))# cloned (!!) but unclassed>attr(sw2,"class")<-c("stopwatch","externalptr")# class it .. and then use it!>print(sw2)# `xptr` allows us close and use0.501597> sw3<- ch$Stopwatch( xptr::xptr_address(sw) )# new Python object via string ctor>print(sw3$elapsed())# shows output via Python I/Odatetime.timedelta(microseconds=502013)>cat(sw3$count(),"\n")# shows double0.502657>print(sw)# object still works in R0.502721>The same object, instantiated in R is used in Python and thereafteragain in R. Whilethis object here is minimal in features, theconcept ofpassing a pointer is universal. We could use it forany interesting object that R can access and Python too can instantiate.Obviously, there be dragons as we pass pointers so one may want toascertain that headers from corresponding compatible versions are usedetc butprinciple is unaffected and should just work.
Both parts of this pair of packages are now at the corresponding repositories: PyPI and CRAN. As I commonly do here on package (change) announcements, I include the (minimal so far) set of high-level changes for the R package.
Changes in version 0.0.2 (2026-02-05)

- Removed the now-replaced unconditional virtualenv use in the demo, given the preceding conditional block

- Updated README.md with badges and an updated demo

Changes in version 0.0.1 (2026-01-25)

- Initial version and CRAN upload
Questions, suggestions, bug reports, … are welcome at either the (now awoken from the R-Forge slumber) Rcpp mailing list or the newer Rcpp Discussions.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
I have unearthed a few old articles typed during my adolescence, between 1996 and 1998. Unremarkable at the time, these pages now compose, three decades later, the chronicle of a vanished era.1
The word "blog" does not exist yet. Wikipedia remains to come. Google has not been born. AltaVista reigns over searches, while already struggling to embrace the nascent immensity of the web.2 To meet someone, you had to agree in advance and prepare your route on paper maps. 🗺️
The web is taking off. The CSS specification has just emerged; HTML tables still serve for page layout. Cookies and advertising banners are making their appearance. Pages are adorned with music and videos, forcing browsers to arm themselves with plugins. Netscape Navigator sits on 86% of the territory, but Windows 95 now bundles Internet Explorer to quickly catch up. Facing this offensive, Netscape open-sources its browser.
France falls behind. Outside universities, Internet access remains expensive and laborious. Minitel still reigns, offering the phone directory, train tickets, remote shopping. This was not yet possible with the Internet: buying a CD online was a pipe dream. Encryption suffers from inappropriate regulation: the DES algorithm is capped at 40 bits and cracked in a few seconds.
These pages bear the trace of the web's adolescence. Thirty years have passed. The same battles continue: data selling, advertising, monopolies.
Most articles linked here are not translated from French to English. ↩︎
I recently noticed that Google no longer fully indexes my blog. For example, it is no longer possible to find the article on lanĉo. I assume this is a consequence of the explosion of AI-generated content or a change in priorities for Google. ↩︎
08 February, 2026 02:51PM by Vincent Bernat
This was my hundred-thirty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian (as the LTS and ELTS teams have been merged now, there is only one paragraph left for both activities).
During my allocated time I uploaded or worked on:
I also attended the monthly LTS/ELTS meeting. While working on updates, I stumbled upon packages whose CVEs have been postponed for a long time even though their CVSS scores were rather high. I wonder whether one should pay more attention to postponed issues; otherwise one could have already marked them as ignored.
Unfortunately I didn't find any time to work on this topic.
This month I worked on unifying packaging on Debian and Ubuntu. This makes it easier to work on those packages independently of the platform used.
This work is generously funded by Fre(i)e Software GmbH!
This month I uploaded a new upstream version or a bugfix version of:
Unfortunately I didn't find any time to work on this topic.
Unfortunately I didn't find any time to work on this topic.
This month I uploaded a new upstream version or a bugfix version of:
Unfortunately this month I was distracted from my normal Debian work by other unpleasant things, so the paragraphs above are mostly empty. I now have to think about how much of my spare time I am able to dedicate to Debian in the future.
08 February, 2026 01:25PM by alteholz

Another year of data from the Société de Transport de Montréal, Montreal's transit agency!
A few highlights this year:
Although the Saint-Michel station closed for emergency repairs in November 2024, traffic never bounced back to its pre-closure levels and is still stuck somewhere around 2022 Q2 levels. I wonder if this could be caused by the roadwork on Jean-Talon for the new Blue Line stations making it harder for folks in Montreal-Nord to reach the station by bus.
The opening of the Royalmount shopping center has had a durable impact on the traffic at the De la Savane station. I reported on this last year, but it seems this wasn't just a fad.
With the completion of the Deux-Montagnes branch of the Réseau express métropolitain (REM, a light-rail, above-surface transit network still under construction), the transfer stations to the Montreal subway have seen major traffic increases. The Édouard-Montpetit station has nearly reached its previous all-time record of 2015 and the McGill station has recovered from the general slump all the other stations have had in 2025.
The Assomption station, which used to have one of the lowest riderships of the subway network, has had tremendous growth in the past few years. This is mostly explained by the many high-rise projects that were built around the station since the end of the COVID-19 pandemic.
Although still affected by very high seasonality, the Jean-Drapeau station broke its previous record of 2019, a testament to the continued drawing power of the various summer festivals taking place on the Sainte-Hélène and Notre-Dame islands.
More generally, it seems the Montreal subway has had a pretty bad year. Traffic had been slowly climbing back since the COVID-19 pandemic, but this is the first year since 2020 that such a sharp decline can be witnessed. Even major stations like Jean-Talon or Lionel-Groulx are on a downward trend, and it is pretty worrisome.
As for causes, a few things come to mind. First of all, as the number of Montrealers commuting to work by bike continues to rise,1 a modal shift from public transit to active mobility is to be expected. As local experts put it, this is not uncommon and has been seen in other cities before.
Another important factor that certainly turned people away from the subway this year has been the impact of the continued housing crisis in Montreal. As more and more people get kicked out of their apartments, many have been seeking refuge in the subway stations to find shelter.
Sadly, this also brought an unprecedented wave of incivilities. As riders' sense of security sharply decreased, the STM eventually resorted to banning unhoused people from sheltering in the subway. This decision did bring back some peace to the network, but one can posit the damage had already been done and many casual riders are still avoiding the subway for this reason.
Finally, the weeks-long STM workers' strike in Q4 had an important impact on general traffic, as it severely reduced the opening hours of the subway. As with the previous item, once people find alternative ways to get around, it's always harder to bring them back.
Hopefully, my 2026 report will be a more cheerful one...
By clicking on a subway station, you'll be redirected to a graph of thestation's foot traffic.