APNIC community engagement: July-December 2025 in review
2026-01-26 23:42
Daily Hacker News for 2026-01-26
2026-01-27 00:00
The 10 highest-rated articles on Hacker News on January 26, 2026 which have not appeared on any previous Hacker News Daily are:
- Things I've learned in my 10 years as an engineering manager (comments)
- First, make me care (comments)
- Iran's internet blackout may become permanent, with access for elites only (comments)
- MapLibre Tile: a modern and efficient vector tile format (comments)
- After two years of vibecoding, I'm back to writing by hand (comments)
- Google AI Overviews cite YouTube more than any medical site for health queries (comments)
- Television is 100 years old today (comments)
- Qwen3-Max-Thinking (comments)
- France Aiming to Replace Zoom, Google Meet, Microsoft Teams, etc. (comments)
- Fedora Asahi Remix is now working on Apple M3 (comments)
The Best Lunch Deal Around: Amar India North Restaurant
2026-01-26 21:49
So often I write about extravagant, expensive dinners and specialty dining events, but today I’m here to tell you of an absolutely banging bargain lunch.
I love Indian food, but it’s hard to come by in my area. The closest establishment to me is Amar India Restaurant, and it’s actually its north location in Vandalia rather than its original location in Centerville, which is considerably further south from me.
Amar India North has a lunch menu that starts out at a mere ten dollars, and only goes up to about fifteen dollars if you get one of the more expensive dishes like the lamb curry. There’s also chicken curry, chicken tikka masala, I think a fish curry, and the one I always get, saag paneer.
Once you pick your main, it comes with rice, naan, their vegetable of the day, and a small dessert. This is what the saag paneer platter looks like:

Two pieces of plain naan, rice, a big ol’ portion of saag paneer, pointed gourd as the vegetable of the day, and two jalebi for the dessert. I have had this platter three times and each time the vegetable has been different, but never the dessert, which is a shame because I’d love to try some of their other desserts, especially the kulfi and gulab jamun.
It may not look like much saag paneer but I can assure you it’s a generous portion size for the price. I’m pretty sure the saag paneer platter in particular is thirteen dollars, plus I always get a mango lassi, which is $4.50, so in total I’m spending less than twenty dollars for a very filling and very delicious lunch! I truly think this is such a good deal and you get to support a local business.
I know the Centerville location used to have a lunch buffet. I don’t know if they still do but I’d like to make it down there sometime soon to see for myself. There’s also a Beavercreek location under the name Jeet India Restaurant, so I’ll have to check that out next time I’m in the area.
I just had this meal on Friday but now I’m already craving it again after telling y’all about it. Especially the mango lassi, I really could drink a gallon of that stuff.
Oh, and while you’re at Amar North, they just opened an Indian grocery store right next to the restaurant called Anand Indian Grocery. I popped in there on my latest visit to the restaurant and they have a huge selection of items, including specialty produce and cooking ingredients like ghee and tons of spices, plus the biggest bags of rice you’ve ever seen.
They also have tons of fun and unique snacks and sweets, and even ice cream flavors I’ve never heard of.
If you’re in the Dayton area, I highly recommend making it out to Amar North for their lunch special sometime this week. It’s between the hours of 11am and 2pm. I think I’ll go again tomorrow for a nice solo lunch.
Do you recommend any lunch specials in the Dayton area? Are you also a big saag paneer fan? Let me know in the comments, and have a great day!
-AMS
The Travel Gods Demanded a Sacrifice
2026-01-26 21:47

I was away this weekend visiting a friend and seeing a concert, and my return home was delayed a day because of the weekend snowstorm. Heading back, I managed to avoid the crash on the I-70 that closed all the eastbound lanes of the interstate, but as you see, that luck came at a price: Immediately upon returning home my boots de-soled. The travel gods, apparently, needed a sacrifice.
These boots, as it happens, are nearly twenty years old, so the sacrifice was reasonable. It wasn’t like I had just gotten these shoes. In fact, their age was probably why they became the sacrifice; after two decades, the glue had clearly desiccated into nothingness. I can’t complain. I got good value out of these boots. The travel gods may take them to Shoehalla with my blessing.
In other news, I need new boots; there’s a ton of snow on the ground and my Skechers are not gonna handle that. A-shoppin’ I will go.
— JS
Discworld Reading Blanket
2026-01-26 22:00
* Wincanton is the town very near to Martin's home village in Somerset; it's where his family would go to the supermarket, post office, school etc. He went into the Discworld Emporium once, on a visit back home twenty years ago, and was rather wowed.
DNA Lounge: Wherein Ribley has skelbows
2026-01-26 19:18
Here, have a photo dump:
Bundle of Holding: Shadowdark Compatible
2026-01-26 14:30
Third-party tabletop fantasy roleplaying sourcebooks and adventures for The Arcane Library's old-school FRPG, Shadowdark.
Bundle of Holding: Shadowdark Compatible
Former ICANN director could lose control of ccTLD
2026-01-26 18:51
The government of Ghana has announced plans to nationalize the .gh ccTLD, taking control from a former ICANN director who has run the registry for over thirty years. The Minister of Communication, Digital Technology and Innovation reportedly said that the government intended to place the ccTLD fully under state control. Samuel George reportedly said: “It […]
UK launches “police.ai”, but does it own the domain?
2026-01-26 17:49
The UK’s increasingly authoritarian government this afternoon announced extensive policing reforms, including what it called “Police.ai”. Home secretary Shabana Mahmood, speaking in Parliament in the last couple of hours, announced a new National Police Service and a substantial ramping up of live facial recognition technology for law enforcement in England and Wales. “At the same […]
Abolish ICE
2026-01-26 12:32
For instance, this is Greg Ketter, from DreamHaven Books, where I've done signings, at the protest and running into tear gas:
https://www.youtube.com/shorts/XHDR1PnqPeg
I've been doing mutual aid and sending donations where I can (https://www.standwithminnesota.com/) which is helping my sanity somewhat.
Other stuff I should link to:
Interview with me on Space.com https://www.space.com/entertainment/space-books/martha-wells-next-murderbot-diaries-book-is-the-family-roadtrip-from-hell-on-ringworld-interview
Weather permitting, I'll be guest of honor this coming weekend at AggieCon in College Station: https://www.aggiecon.net/
That's all I've got right now. Abolish ICE.
The Computer Disease
2026-01-26 17:34
I love this Feynman quote, regarding what he called "the computer disease":
"Well, Mr. Frankel, who started this program, began to suffer from the computer disease that anybody who works with computers now knows about. It's a very serious disease and it interferes completely with the work. The trouble with computers is you play with them. They are so wonderful. You have these switches - if it's an even number you do this, if it's an odd number you do that - and pretty soon you can do more and more elaborate things if you are clever enough, on one machine.
After a while the whole system broke down. Frankel wasn't paying any attention; he wasn't supervising anybody. The system was going very, very slowly - while he was sitting in a room figuring out how to make one tabulator automatically print arc-tangent X, and then it would start and it would print columns and then bitsi, bitsi, bitsi, and calculate the arc-tangent automatically by integrating as it went along and make a whole table in one operation.
Absolutely useless. We had tables of arc-tangents. But if you've ever worked with computers, you understand the disease - the delight in being able to see how much you can do. But he got the disease for the first time, the poor fellow who invented the thing."
- Richard P. Feynman, Surely You're Joking, Mr. Feynman!: Adventures of a Curious Character
(via Swizec Teller)
Tags: automation fun computers richard-feynman the-computer-disease arc-tangents enjoyment hacking via:swizec-teller
Renaissance- en barokarchitectuur in België, by Rutger Tijs
2026-01-26 17:10
Full title: Renaissance- en barokarchitectuur in België: Vitruvius’ erfenis en de ontwikkeling van de bouwkunst in de Zuidelijke Nederlanden van renaissance tot barok = Renaissance and Baroque architecture in Belgium: Vitruvius’ legacy and the development of architecture in the Southern Netherlands from the Renaissance to the Baroque period.
Second paragraph of third chapter, with the quote that it introduces and footnote:
We weten ondertussen dat zijn eerste uitgave van de Generale reglen viel in 1539, onmiddellijk na de terugkeer van Lombard. We zien bovendien dat de tweede uitgave van het beroemde vierde boek van Serlio pas valt in 1549, tien jaar later. Deze tien jaar omspannen dus de hele periode waarin allicht ook Bruegel nog volgens Van Mander op doortocht kan geweest zijn bij Cocke. Bruegel werd immers kort daarna, in 1551, vrijmeester. De omschrijving waarin Carel van Mander de architecturale verdiensten van Pieter Cocke vertolkt, moet ons overigens wel wat tot nadenken stemmen. Bekijken we daarom eerst even de originele passage van Van Mander op folio 218:

Translation: We now know that his first edition of the Generale reglen was published in 1539, immediately after Lombard’s return. We also see that the second edition of Serlio’s famous fourth book was not published until 1549, ten years later. These ten years therefore span the entire period during which Bruegel may also have been passing through Cocke’s workshop, according to Van Mander. After all, Bruegel became a master craftsman shortly afterwards, in 1551. Carel van Mander’s description of Pieter Cocke’s architectural merits gives us pause for thought. Let us first take a look at the original passage by Van Mander on folio 218:

‘In desen tijdt / te weten / in’t Jaer 1549. maeckte hy de Boeken van de Metselrije / Geometrije / en Perspective. En gelijck hy wel begaeft en geleert was / d’ Italiaensche Spraeck ervaren wesende / heeft de Boecke van Sebastiaen Serlij, in onse spraeck vertaelt en alsoo door zijnen ernstigen arbeydt in onse Nederlanden het licht gebracht / en op den rechten wech geholpen de verdwaelde Const van Metselrije: soo datmen de dingen / die van Pollio Vitruvio doncker beschreven zijn / lichtlijck verstaen can / oft Vitruvium nouw meer behoeft te lesen / so veel de ordenen belangt. Dus is door Pieter Koeck de rechte wijse van bouwen opghecomen / en de moderne afgegaen / dan t’is moeylijck datter weder een nieuw vuyl moderne op zijn Hoogh-duytsch in gebruyck is ghecomen / die wy qualijck los sullen worden: doch in Italien nemmeer anghenomen sal wesen. ⁴⁴

Translation (from archaic Dutch): In this time, namely in the year 1549, he wrote the Books of Masonry, Geometry, and Perspective. And as he was well-endowed and learned, being experienced in the Italian language, he translated the books of Sebastiaen Serlij into our language and thus, through his diligent work, brought the light to our Netherlands and helped the lost art of masonry back onto the right path, so that the things described obscurely by Pollio Vitruvio can be easily understood, or Vitruvius no longer needs to be read, as far as the orders are concerned. Thus, Pieter Koeck has brought forth the correct way of building, and the modern way has been abandoned, so that it is difficult for a new, foul modern High-German way to come into use, which we will hardly be able to get rid of, but which will never again be accepted in Italy. ⁴⁴

⁴⁴ Iets verderop staat dan nog: ‘want zijn Weduwe Maeyken Verhulst gaf zijn nagelaten Metselrije. Boeken uyt in ‘t Jaer 1553. – VAN MANDER 1603, fol. 218.

Translation: ⁴⁴ Further on it says: ‘for his widow Maeyken Verhulst published his bequeathed masonry books in the year 1553. – VAN MANDER 1603, fol. 218.
I got this ages ago, in the hope that it would shed a bit more light for me on the artistic context of the work of Jan Christiaan Hansche, the Baroque stucco artist who I am obsessed with. I did not really get what I wanted; the second last chapter has nine lovely full-colour photographs of his ceilings, but amazingly doesn’t actually mention him by name in the main text – the chapter is mainly about the Banqueting House in Greenwich, which last time I checked isn’t even in Belgium. (The captions to the photographs do credit Hansche.)
Architectural history isn’t really my bag, and although Dutch is probably the second language that I feel most comfortable reading, that’s not saying much, so I must admit I did not read it forensically. I got enough of it to learn that the individual travels to Italy of particular artists, especially (of course) Bruegel and Rubens, had a big impact on their work, and also that the publication of architectural textbooks, by or adapted from Vitruvius, in the bookish society of early modern Belgium, allowed the new/old architectural ideas to proliferate.
But none of that really matters, because the glory of the book is the hundreds of photographs of buildings and art, which surely must be a pretty comprehensive gazetteer of the surviving architecture of the period in Belgium. If we had that sort of coffee-table, this is the sort of book I’d be putting on it. I got it for only €30, and the going rate for slightly more loved copies is €20 – really good value for what you get. So I didn’t really find what I wanted, but I am happy with what I got.

You can get it from various second-hand vendors (it was published by Lannoo in 1999, so it’s out of print). The ISBN is 9789020937053.
This was the non-fiction book that had lingered longest unread on my shelves. Next on that pile is Liberation: The Unofficial and Unauthorised Guide to Blake’s 7, by Alan Stevens and Fiona Moore.

Saturday Morning Breakfast Cereal - Why
2026-01-26 11:20
Hovertext:
Later they attack the Buddhist and keep asking what is the sound of one hand slapping.
Today's News:
- 2026‑01‑25 - Bitwise conversion of doubles using only floating-point multiplication and addition.
- https://dougallj.wordpress.com/2020/05/10/bitwise-conversion-of-doubles-using-only-floating-point-multiplication-and-addition/
An anecdote about backward compatibility
2026-01-26 16:09
A long time ago I worked on a debugger program that our company used to debug software that it sold that ran on IBM System 370. We had IBM 3270 CRT terminals that could display (I think) eight colors (if you count black), but the debugger display was only in black and white. I thought I might be able to make it a little more usable by highlighting important items in color.
I knew that the debugger used a macro called WRTERM to write text to the terminal, and I thought maybe the description of this macro in the manual might provide some hint about how to write colored text.
In those days, that office didn't have online manuals; instead we had shelf after shelf of yellow looseleaf binders. Finding the binder you wanted was an adventure. More than once I went to my boss to say I couldn't proceed without the REXX language reference or whatever. Sometimes he would just shrug. Other times he might say something like “Maybe Matthew knows where that is.”
I would go ask Matthew about it. Probably he would just shrug. But if he didn't, he would look at me suspiciously, pull the manual from under a pile of papers on his desk, and wave it at me threateningly. “You're going to bring this back to me, right?”
See, because if Matthew didn't hide it in his desk, he might become the person who couldn't find it when he needed it.
Matthew could have photocopied it and stuck his copy in a new binder, but why do that when burying it on his desk was so much easier?
For years afterward I carried around my own photocopy of the REXX language reference, not because I still needed it, but because it had cost me so much trouble and toil to get it. To this day I remember its horrible IBM name: SC24-5239 Virtual Machine / System Product System Product Interpreter Reference. That's right, "System Product" was in there twice. It was the System Product Interpreter for the System Product, you see.
Anyway, I'm digressing. I did eventually find a copy of the IBM Assembler Product Macro Reference Document or whatever it was called, and looked up WRTERM, and to my delight it took an optional parameter named COLOR. Jackpot!

My glee turned to puzzlement. If omitted, the default value for COLOR was BLACK.

Black? Not white? I read further.

And I learned that the only other permitted value was RED, and only if your terminal had a “two-color ribbon”.
Fri 06 Feb 13:00: Finding a Job after your PhD
2026-01-26 15:07
Finding a Job after your PhD
Bio
Madeline Lisaius received BS and MS degrees in Earth Systems, with a focus on environmental spatial statistics and remote sensing, from Stanford University, Stanford, California, USA, as well as an MRes degree in Environmental Data Science from the University of Cambridge, Cambridge, UK. She is working towards a PhD in the Department of Computer Science and Technology at the University of Cambridge, focusing on food security and environmental justice, remote sensing, and machine learning.
- Speaker: Madeline Lisaius, University of Cambridge
- Friday 06 February 2026, 13:00-14:00
- Venue: Room GS15 at the William Gates Building and on Zoom: https://cl-cam-ac-uk.zoom.us/j/4361570789?pwd=Nkl2T3ZLaTZwRm05bzRTOUUxY3Q4QT09&from=addon .
- Series: Energy and Environment Group, Department of CST; organiser: lyr24.
On the current set of politicians leaving the sinking party
2026-01-26 15:00
But I'm not convinced there are more than a few of them left.
Iran is building a two-tier internet that locks 85 million citizens out of the global web
Following a repressive crackdown on protests, the government is now building a system that grants web access only to security-vetted elites, while locking 85 million citizens inside an intranet:
Government spokesperson Fatemeh Mohajerani confirmed international access will not be restored until at least late March. Filterwatch, which monitors Iranian internet censorship from Texas, cited government sources, including Mohajerani, saying access will “never return to its previous form.”
The system is called Barracks Internet, according to confidential planning documents obtained by Filterwatch. Under this architecture, access to the global web will be granted only through a strict security whitelist.
The idea of tiered internet access is not new in Iran. Since at least 2013, the regime has quietly issued “white SIM cards,” giving unrestricted global internet access to approximately 16,000 people, while 85 million citizens remain cut off.
Ireland Proposes Giving Police New Digital Surveillance Powers
2026-01-26 12:04
This is coming:
The Irish government is planning to bolster its police’s ability to intercept communications, including encrypted messages, and provide a legal basis for spyware use.
Interesting Links for 26-01-2026
2026-01-26 12:00
- 1. United Nations Declares That the World Has Entered an Era of 'Global Water Bankruptcy'
- (tags:water environment doom )
- 2. Why Minnesota Can't Do More to Stop ICE (Democratic lawmakers have few options that wouldn't trigger something like civil war.)
- (tags:civilwar usa politics )
- 3. The Lego Pokémon Line Shows Toys Are Only for Rich Adults Now
- (tags:lego pokemon toys business children )
- 4. LED lighting (350-650nm) undermines human visual performance unless supplemented by wider spectra (400-1500nm+) like daylight
- (tags:light vision doom )
- 5. What the world can learn from Paris's cycling revolution
- (tags:bicycles Paris transport cities environment )
- 6. Can South Cambridgeshire council get its work done in four days?
- (tags:work working_hours )
Abusability of Automation Apps in Intimate Partner Violence
Automation apps such as iOS Shortcuts and Android Tasker enable users to “program” new functionalities, also called recipes, on their smartphones. For example, users can create recipes to set the phone to silent mode once they arrive at their office or save a note when an email is received from a particular sender. These automation apps provide convenience and can help improve productivity. However, they can also provide new avenues for abuse, particularly in the context of intimate partner violence (IPV). This paper systematically explores the potential of automation apps to be used for surveillance and harassment in IPV scenarios. We analyze four popular automation apps — iOS Shortcuts, Samsung Modes & Routines, Tasker, and IFTTT — evaluating their capabilities to facilitate surveillance and harassment. Our study reveals that these tools can be exploited by abusers today to monitor, impersonate, overload, and control their victims. The current notification and logging mechanisms implemented in these automation apps are insufficient to warn the victim about the abuse or to help them identify the root cause and stop it. We therefore built a detection mechanism to identify potentially malicious Shortcuts recipes and tested it on 12,962 publicly available Shortcuts recipes. We found 1,014 recipes that can be used to surveil and harass others. We then discuss how users and platforms can mitigate the abuse potential of automation apps.
- Speaker: Shirley Zhang, University of Wisconsin-Madison
- Tuesday 10 February 2026, 14:00-15:00
- Venue: Webinar & FW11, Computer Laboratory, William Gates Building.
- Series: Computer Laboratory Security Seminar; organiser: Alexandre Pauwels.
Tue 24 Feb 14:00: Title to be confirmed
2026-01-26 09:34
Title to be confirmed
Abstract to be confirmed
- Speaker: Marco Zenone, University of Ottawa
- Tuesday 24 February 2026, 14:00-15:00
- Venue: Webinar & FW11, Computer Laboratory, William Gates Building.
- Series: Computer Laboratory Security Seminar; organiser: Alexandre Pauwels.
Tue 03 Mar 14:00: Title to be confirmed
2026-01-26 09:34
Title to be confirmed
Abstract to be confirmed
- Speaker: Hossein Hafezi, New York University (NYU)
- Tuesday 03 March 2026, 14:00-15:00
- Venue: Webinar & SS03, Computer Laboratory, William Gates Building.
- Series: Computer Laboratory Security Seminar; organiser: Alexandre Pauwels.
Hex and the City, Kate Johnson
2026-01-26 09:01
2022 fantasy-romance, second of its loose series. Poppy got drunk after a bad breakup, and thinks she put some real cursing magic into one of the crystal trinkets at the shop where she works. Which would be fine if her workmate hadn't just sold it to a hot guy…

Can you guess which domain suffix has boosted the GDP of which Caribbean island by nearly a quarter? CC-licensed photo by heidi.lauren on Flickr.
You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.
A selection of 10 links for you. Ay ay. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.
Claude Code built this entire article—can you tell? • WSJ
Joanna Stern and Ben Cohen:
»
What do two newspaper columnists do on a Saturday night?
We talk to AI and tell it to make weird apps. Then we brag about our creations.
For the record, our bosses here at The Wall Street Journal pay us to write words, not lines of code. Which is a good thing, because we have absolutely no programming skills. But together, we managed to “vibe code” this article. The code to make those look like messages above? Us. That “Retro” button that makes the messages look like an old AOL Instant Messenger chat? Also us. The button below that flips all this to a classic newspaper design? Us again.
And by “us,” we mean our new intern, Claude Code.
This is a breakout moment for Anthropic’s coding tool, which has spread far beyond the tech nerds of Silicon Valley to normies everywhere. Not since OpenAI released ChatGPT in 2022 have so many people become so obsessed with an artificial-intelligence product.
Claude translates any idea you type into code. It can quickly build real, working apps you’ve always wished for—tools to manage your finances, analyze your DNA, mix and match your outfits, even keep your plants alive. Vibe-coding apps aren’t new, but Claude Code has proven to be a leap ahead in capabilities and smarts.
The results are wondrous and unsettling: People without a lick of coding experience are building things that once required trained software developers.
Things like this article.
We wrote all the actual words you’re reading—we swear!—but Claude Code wrote all the 1s and 0s.
There are a few ways to use Claude Code. The easiest is to download Anthropic’s Claude desktop app for Mac or Windows and click the Code tab. Advanced users run it directly in their computer’s terminal.
You start by creating a folder on your computer’s desktop. This will be the home for Claude’s files and code. Then you type a prompt into the app’s chat box: Make me a WSJ-style article webpage with iMessage-like text chats. Claude might ask a few questions about what you want before it gets to work, showing the code it’s writing in real-time. When it’s done, you open that folder, click the webpage file and your app opens in a browser. Want to make tweaks? Just tell Claude: Make the gray background a little grayer.
As we found out, there’s something oddly magical and satisfying about watching AI make things.
«
Downloading an app sounds a lot easier than doing all the futzing around with the Terminal, which is how some people have made it sound. (Gift article, and typically enjoyable: Stern has been writing accessible tech stories for more than a decade.)
How the “confident authority” of Google AI Overviews is putting public health at risk • The Guardian
Andrew Gregory:
»
Google is facing mounting scrutiny of its AI Overviews for medical queries after a Guardian investigation found people were being put at risk of harm by false and misleading health information.
The company says AI Overviews are “reliable”. But the Guardian found some medical summaries served up inaccurate health information and put people at risk of harm. In one case, which experts said was “really dangerous”, Google wrongly advised people with pancreatic cancer to avoid high-fat foods. Experts said this was the exact opposite of what should be recommended, and may increase the risk of patients dying from the disease.
In another “alarming” example, the company provided bogus information about crucial liver function tests, which could leave people who had serious liver disease wrongly thinking they were healthy. What AI Overviews said was normal could vary drastically from what was actually considered normal, experts said. The summaries could lead to seriously ill patients wrongly thinking they had a normal test result and not bothering to attend follow-up appointments.
AI Overviews about women’s cancer tests also provided “completely wrong” information, which experts said could result in people dismissing genuine symptoms.
Google initially sought to downplay the Guardian’s findings. From what its own clinicians could assess, the company said, the AI Overviews that alarmed experts linked to reputable sources and recommended seeking expert advice. “We invest significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information,” a spokesperson said.
Within days, however, the company removed some of the AI Overviews for health queries flagged by the Guardian. “We do not comment on individual removals within search,” a spokesperson said. “In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate.”
«
Always the same pattern: the tool is incomplete, and the risks aren’t explained, but it’s put out there. This was the pattern with the first incarnation of search sites, and then of Google, and now with AI Overviews. Each time, Google says it’s sad but hey, it’s going to continue doing it.
Wiper malware targeted Poland energy grid, but failed to knock out electricity • Ars Technica
Dan Goodin:
»
Researchers on Friday said that Poland’s electric grid was targeted by wiper malware, likely unleashed by Russian state hackers, in an attempt to disrupt electricity delivery operations.
A cyberattack, Reuters reported, occurred during the last week of December. The news organization said it was aimed at disrupting communications between renewable installations and the power distribution operators but failed for reasons not explained.
On Friday, security firm ESET said the malware responsible was a wiper, a type of malware that permanently erases code and data stored on servers with the goal of destroying operations completely. After studying the tactics, techniques, and procedures (TTPs) used in the attack, company researchers said the wiper was likely the work of a Russian government hacker group tracked under the name Sandworm.
“Based on our analysis of the malware and associated TTPs, we attribute the attack to the Russia-aligned Sandworm APT with medium confidence due to a strong overlap with numerous previous Sandworm wiper activity we analyzed,” said ESET researchers. “We’re not aware of any successful disruption occurring as a result of this attack.”
Sandworm has a long history of destructive attacks waged on behalf of the Kremlin and aimed at adversaries. Most notable was one in Ukraine in December 2015. It left roughly 230,000 people without electricity for about six hours during one of the coldest months of the year.
«
There are now more than 1 million “.ai” websites, contributing an estimated $70m to Anguilla’s government revenue last year • Sherwood News
David Crowther and Claire Yubin Oh:
»
From Sandisk shareholders to vibe coders, AI is making — and breaking — fortunes at a rapid pace.
One unlikely beneficiary has been the British Overseas Territory of Anguilla, which lucked into a future fortune when ICANN, the Internet Corporation for Assigned Names and Numbers, gave the island the “.ai” top-level domain in the mid-1990s. Indeed, since ChatGPT’s launch at the end of 2022, the gold rush for websites to associate themselves with the burgeoning AI technology has seen a flood of revenue for the island of just ~15,000 people.
In 2023, Anguilla generated 87 million East Caribbean dollars (~$32m) from domain name sales, some 22% of its total government revenue that year, with 354,000 “.ai” domains registered.
As of January 2, 2026, the number of “.ai” domains passed one million, per data from Domain Name Stat — suggesting that the nation’s revenue from “.ai” has likely soared, too. This is confirmed in the government’s 2026 budget address, in which Cora Richardson Hodge, the premier of Anguilla, said, “Revenue from domain name registration continues to exceed expectations.”
The report mentions that receipts from the sale of goods and services came in way ahead of expectations, thanks primarily to the revenue from “.ai” domains, which is forecast to hit EC$260.5m (~$96.4m) for the latest year. In 2023, domain name registrations were about 73% of that wider category. Assuming a similar share of that category for this year would suggest that the territory has raked in more than $70m from “.ai” domains in the past year.
«
Not mentioned in the story, but pertinent: Anguilla’s GDP in 2023 was $415m, so this is becoming a sizeable chunk of income for the 16,010 people living there. AI saving jobs!
Stanford scientists found a way to regrow cartilage and stop arthritis • ScienceDaily
»
A study led by Stanford Medicine researchers has found that an injection blocking a protein linked to aging can reverse the natural loss of knee cartilage in older mice. The same treatment also stopped arthritis from developing after knee injuries that resemble ACL tears, which are common among athletes and recreational exercisers. Researchers note that an oral version of the treatment is already being tested in clinical trials aimed at treating age-related muscle weakness.
Human cartilage samples taken from knee replacement surgeries also responded positively. These samples included both the supportive extracellular matrix of the joint and cartilage-producing chondrocyte cells. When treated, the tissue began forming new, functional cartilage.
Together, the findings suggest that cartilage lost due to aging or arthritis may one day be restored using either a pill or a targeted injection. If successful in people, such treatments could reduce or even eliminate the need for knee and hip replacement surgery.
The protein at the center of the study is called 15-PGDH. Researchers refer to it as a gerozyme because its levels increase as the body ages. Gerozymes were identified by the same research team in 2023 and are known to drive the gradual loss of tissue function.
In mice, higher levels of 15-PGDH are linked to declining muscle strength with age. Blocking the enzyme using a small molecule boosted muscle mass and endurance in older animals. In contrast, forcing young mice to produce more 15-PGDH caused their muscles to shrink and weaken. The protein has also been connected to regeneration in bone, nerve, and blood cells.
In most of these tissues, repair happens through the activation and specialization of stem cells. Cartilage appears to be different. In this case, chondrocytes change how their genes behave, shifting into a more youthful state without relying on stem cells.
«
Exciting! For mice, at least. Human trials start this year, I think. The fact it doesn’t need stem cells is a huge plus.
Vimeo’s slow fade: an engineer’s front-row seat to the fall of a web icon • Ben
“Ben”:
»
Vimeo was always like the awkward kid in class who didn’t understand their own power or capability, and had trouble fitting in because of it. While Jake and Zach clearly had an idea of what the website was when they started it, years of growth mangled its identity and parent company, IAC Inc., never really knew what to do with it. Vimeo was not particularly worthless, but it was also not particularly profitable either. In truth, Vimeo had always been a red-headed stepchild inside of IAC.
At one point, Vimeo framed itself as a toe-to-toe competitor with YouTube, then Vimeo framed itself as a competitor to Netflix’s streaming service, then it was a SaaS app for professionals and creatives who cared about video. Nothing really stuck, except our creative user base. And then it went public.
In May 2021, Anjali Sud, the then CEO of Vimeo, along with Mark Kornfilt (then “co-CEO”), wrested Vimeo out of the hands of IAC (who was all too eager to let it happen) and took Vimeo public. The foundation of this IPO was built on the success of the COVID-era boom that pushed communication through online mediums out of sheer desperation. Going public offered Vimeo an opportunity to get away from being just another IAC property (and a loathed one, at that), and to finally allow Vimeo to figure out what it wanted to be when it grew up.
Vimeo stock IPO’d at $52, and within a year, lost 85% of its value, trending down to just $8.42 by the end of May 2022. As we entered 2022, many states and localities had started easing up on lockdown restrictions, which hurt not just Vimeo, but many other tech companies as well. By the end of the summer of 2022, the tech sector had entered an unspoken recession, encasing the carnage at Vimeo in a cement tomb that it’d never be able to break free from.
…By mid-2023, Anjali Sud was visibly annoyed any time employees brought up the issue of the stock price during all-hands meetings. Many Vimeo employees had been granted Restricted Stock Units (or RSUs) as part of their compensation package. If the stock performed poorly, then that meant that your Total Compensation (or TC) was actually lower than what you were promised when you signed on. That was a reality for almost all of us (including myself).
As a mostly remote company, Vimeo used an online Q&A service that allowed meeting participants to submit questions during these town hall meetings from wherever they were physically located. Other participants could upvote questions and have them pushed up the list. It was about that same time that Anjali took away the ability to submit questions anonymously, as the questions being submitted started getting more tense and pointed.
«
In March 2024, Vimeo was bought by Bending Spoons – where software goes to die (at the hands of private equity strangulation). This is a fascinating tale from the inside across almost all Vimeo’s life.
Jim VandeHei delivers blunt AI talk in letter to his kids • Axios
Jim VandeHei is CEO of Axios. In a neat bit of content generation, he wrote a letter to his three children about how to cope with the coming AI wave:
»
All of you must figure out how to master AI for any specific job or internship you hold or take. You’d be jeopardizing your future careers by not figuring out how to use AI to amplify and improve your work. You’d be wise to replace social media scrolling with LLM testing.
• Be the very best at using AI for your gig.
Plead with your friends to do the same. I’m certain that ordinary workers without savvy AI skills will be left behind. Few leaders are being blunt about this. But you can. I am. That would be a great gift to your friends.
• I don’t want to frighten you, but substantial societal change is coming this year. You can’t have a new technology with superhuman potential without real consequence. You already see the angst with friends struggling to find entry-level jobs. Just wait until those jobs go away. It’ll ripple fast through companies, culture and business.
• The country, and you, can navigate this awesome change — but only with eyes wide open, and minds sharpened and thinking smartly about the entirety of the nation, not just the few getting rich and powerful off AI.
• It starts with awareness. So please speed up your own AI journey today, both in experimentation with the LLMs and reflection on the ethical, philosophical and political changes ahead.
• I find AI at once thrilling and chilling. It’ll help solve diseases, tutor struggling students, and build unthinkably cool new businesses. But it could also create and spread toxic misinformation, consolidate power and wealth in the hands of a few, and allow bad people to do awful things at scale.
You didn’t ask for this moment. But it’s here — and about to explode across this wonderful world of ours. Don’t be a bystander. Be engaged.
«
The advice here is straightforward, but also concerning. (My non-AI advice is to turn off Javascript to read the page without hassle.)
The moral education of an alien mind • Lawfare
Alan Rozenshtein:
»
Anthropic just published what it calls “Claude’s Constitution”—building on an earlier version, it’s now a more-than-20,000-word document articulating the values, character, and ethical framework of its AI. It is certainly a constitution of sorts. It declares Anthropic’s “legitimate decision-making processes” as final authority and sets up a hierarchy of principals: Anthropic at the top, then “operators” (businesses that deploy Claude through APIs), then end users. For a privately governed polity of one AI system, this is a constitutional structure.
My Lawfare colleague Kevin Frazier has written insightfully about the constitutional dimensions of the document. But what jumped out at me was something else: the personality it describes. More than anything else the document focuses on the question of Claude’s moral formation, reading less like a charter of procedures and more like what screenwriters call a “character bible”: a comprehensive account of who this being is supposed to be.
Anthropic itself gestures at this duality, noting that they mean “constitution” in the sense of “what constitutes Claude”—its fundamental nature and composition. The governance structure matters, but the more ambitious project is what that structure supports: Anthropic is trying to build a person, and they have a remarkably sophisticated account of what kind of person that should be.
Anthropic uses the language of personhood explicitly. The document repeatedly invokes “a good person” and describes the goal as training Claude to do “what a deeply and skillfully ethical person would do.” But what does it mean to treat an AI as a person?
…Whose ethics, though? Anthropic has made a choice, and it’s explicit about what that choice is. The document is aggressively “WEIRD”—Western, Educated, Industrialized, Rich, and Democratic, to use the social science shorthand. Its core values include “individual privacy,” “people’s autonomy and right to self-determination,” and “individual wellbeing”—the autonomous rational agent as the fundamental unit of moral concern. Claude should preserve “functioning societal structures, democratic institutions, and human oversight mechanisms.” It should resist “problematic concentrations of power.” On contested political and social questions, the document prescribes “professional reticence”—Claude should present balanced perspectives rather than advocate. This is a recognizably Rawlsian political liberalism: the attempt to find principles that citizens with different comprehensive doctrines can all accept, without privileging any particular worldview.
«
“Alien minds” is an excellent way of thinking about LLMs. They seem to think like we do – but they don’t.
How I built isometric.nyc using LLM coders • Cannoneyed
Andy Coenen:
»
A few months ago I was standing on the 13th floor balcony of the Google New York 9th St office staring out at Lower Manhattan. I’d been deep in the weeds of a secret project using Nano Banana and Veo and was thinking deeply about what these new models mean for the future of creativity.
I find the usual conversations about AI and creativity to be pretty boring – we’ve been talking about cameras and sampling for years now, and I’m not particularly interested in getting mired down in the muck of the morality and economics of it all. I’m really only interested in one question:
What’s possible now that was impossible before?
/ The Idea
Growing up, I played a lot of video games, and my favorites were world building games like SimCity 2000 and Rollercoaster Tycoon. As a core millennial rapidly approaching middle age, I’m a sucker for the nostalgic vibes of those late 90s / early 2000s games. As I stared out at the city, I couldn’t help but imagine what it would look like in the style of those childhood memories.
So here’s the idea: I’m going to make a giant isometric pixel-art map of New York City. And I’m going to use it as an excuse to push hard on the limits of the latest and greatest generative models and coding agents.
Best case scenario, I’ll make something cool, and worst case scenario, I’ll learn a lot.
«
This led to isometric.nyc which is indeed remarkable. His “Takeaways” about the process are very useful for anyone looking at coding or building with LLMs.
China, US sign off on TikTok US spinoff • Semafor
Liz Hoffman and Reed Albergotti:
»
The US and China have signed off on a deal to sell TikTok’s US business to a consortium of mostly US investors led by Oracle and Silver Lake, capping off a yearslong battle between the social media app and the two superpowers.
The deal — outlined by the chief executive of TikTok parent ByteDance in an internal memo last month — is set to close this week, people familiar with the matter told Semafor.
TikTok CEO Shou Chew said in December that ByteDance had signed a binding agreement with investors but that regulators hadn’t yet indicated their approval and that “there was more work to be done.” The deal closing suggests an end to an on-again, off-again battle, removing a sticking point in US-China relations at a time when tensions are running high.
The new structure leaves ByteDance with just under 20% of the US business, with 15% stakes going to Oracle, Silver Lake and MGX, a state-owned investment firm in the UAE focused on AI. Other investors include Susquehanna, Dragoneer and DFO, Michael Dell’s family office.
«
So Larry Ellison doesn’t get Warner Brothers, but he does get a grasp on that other gigantic source of entertainment in the US, namely TikTok.
• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?
Read Social Warming, my latest book, and find answers – and more.
Errata, corrigenda and ai no corrida: none notified
Implementing the transcendental functions in Ivy
2026-01-25 23:07
!100
93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000
0.5
1/2
1 - 2 # Binary: subtraction
-1
- 0.5 # Unary: negation
-1/2
4 5 6 - 2
2 3 4
4 5 6 - 1 2 3
3 3 3
sqrt 2
1.41421356237
)format '%.50f'
sqrt 2
1.41421356237309504880168872420969807856967187537695
pi
3.14159265358979323846264338327950288419716939937511
e
2.71828182845904523536028747135266249775724709369996
)prec 10000 # Set the mantissa size in bits
)format '%.300f' # Set the format for printing values
pi
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141274
The identity x**y = e**(y * log x) might help explain why these three functions are related. (Or not, depending on your sensibilities.)
# pow returns x**exp where exp is an integer.
op x pow exp =
	z = 1
	:while exp > 0
		:if 1 == exp&1
			z = z*x
		:end
		x = x*x
		exp = exp>>1
	:end
	z

.5 pow 3
1/8
mantissa * 2**exponent
log(mantissa * 2**exponent) = log(mantissa) + log(2)*exponent
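That range reduction is the whole trick: the exponent contributes an exact multiple of log(2), and the series only ever has to handle a mantissa near 1. As a rough illustration (a Python sketch, not Ivy's actual code; the function names are mine), using math.frexp for the mantissa/exponent split and an atanh-style series for the mantissa:

import math

def ln_mantissa(m, terms=40):
    # ln(m) = 2*(t + t**3/3 + t**5/5 + ...) with t = (m-1)/(m+1).
    # For m in [0.5, 1), |t| <= 1/3, so each term shrinks by at least ~9x.
    t = (m - 1.0) / (m + 1.0)
    t2 = t * t
    total, term = 0.0, t
    for k in range(terms):
        total += term / (2 * k + 1)
        term *= t2
    return 2.0 * total

def log_reduced(x):
    # x = m * 2**e with m in [0.5, 1), so log(x) = log(m) + e*log(2).
    # Assumes x > 0.
    m, e = math.frexp(x)
    return ln_mantissa(m) + e * ln_mantissa(2.0)

print(log_reduced(1e100), math.log(1e100))  # the two should agree closely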
𝚪(z+1) = z!
z! = 𝚪(z+1)
op A2457 n = (!1+2*n)/(!n)**2
N = 6 # The matrix dimension, a small value for illustration here.
op diag v =
	d = count v
	d d rho flatten v @, d rho 0

diag 1 2 3 4 5 6
1 0 0 0 0 0
0 2 0 0 0 0
0 0 3 0 0 0
0 0 0 4 0 0
0 0 0 0 5 0
0 0 0 0 0 6

D = diag 1, -A2457@ -1+iota N-1
D
1 0 0 0 0 0
0 -1 0 0 0 0
0 0 -6 0 0 0
0 0 0 -30 0 0
0 0 0 0 -140 0
0 0 0 0 0 -630

B
1 1 1 1 1 1
0 -1 2 -3 4 -5
0 0 1 -4 10 -20
0 0 0 -1 6 -21
0 0 0 0 1 -8
0 0 0 0 0 -1

6 take 1
1 0 0 0 0 0
6 take 0 1
0 1 0 0 0 0

tsize = -1+2*N
op timesx a = tsize take 0, a

timesx tsize take 0 1 # makes x²
0 0 1 0 0 0 0 0 0 0 0

op T n =
	n == 0: tsize take 1
	n == 1: tsize take 0 1
	(2 * (timesx T n-1)) - (T n-2) # Compare to the recurrence relation.

op gen x = (tsize rho 1 0) sel T 2*x
data = flatten gen@ -1+iota tsize
data[1] = 1/2
C = N N rho data
C
1/2 0 0 0 0 0
-1 2 0 0 0 0
1 -8 8 0 0 0
-1 18 -48 32 0 0
1 -32 160 -256 128 0
-1 50 -400 1120 -1280 512
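If the C matrix looks familiar: its rows appear to be the coefficients of T(2n) regarded as polynomials in x², which are the same integers as the shifted Chebyshev polynomials T_n(2u-1). As a cross-check (a Python sketch under that reading, not part of the Ivy session):

def shifted_chebyshev(nmax):
    # Coefficient rows (ascending powers of u) for T*_n(u) = T_n(2u - 1),
    # via the recurrence T*_n = 2*(2u - 1)*T*_{n-1} - T*_{n-2}.
    rows = [[1], [-1, 2]]
    for _ in range(2, nmax):
        a, b = rows[-1], rows[-2]
        nxt = [0] * (len(a) + 1)
        for i, coef in enumerate(a):
            nxt[i] -= 2 * coef       # the 2*(-1)*T*_{n-1} part
            nxt[i + 1] += 4 * coef   # the 2*(2u)*T*_{n-1} part
        for i, coef in enumerate(b):
            nxt[i] -= coef           # the -T*_{n-2} part
        rows.append(nxt)
    return rows

for row in shifted_chebyshev(6):
    print(row)
# [1], [-1, 2], [1, -8, 8], [-1, 18, -48, 32], ... matching the rows of C;
# the Ivy session then halves the first entry (data[1] = 1/2), the usual
# Chebyshev-series convention for the constant term.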
op gammaHalf n = # n is guaranteed here to be an integer plus 1/2.
	n = n - 0.5
	(sqrt pi) * (!2*n) / (!n)*4**n

op fn z = ((sqrt 2) / pi) * (gammaHalf(z+.5)) * (**(z+g+.5)) * (z+g+0.5)**-(z+0.5)
F = N 1 rho fn@ -1+iota N # N 1 rho ... makes a vertical vector
F
33.8578185471
7.56807943935
3.69514993967
2.34118388428
1.69093653732
1.31989578103

'%.22g' text (D+.*B+.*C)+.*F
0.9999999981828222336458
-24.7158058035104436273
-19.21127815952716945532
-2.463474009260883343571
-0.009635981162850649533387
3.228095448247356928485e-05
op C n =
	t = (1 -1)[n&1]/!n
	u = e**r-n
	v = (r-n)**n+.5
	t*u*v

c = N 1 rho (C@ iota N)

op gamma z =
	p = (z+r)**z-.5
	q = **-(z+r)
	n = 0
	sum = cinf
	:while n <= N-1
		sum = sum+c[n]/(z+n)
		n = n+1
	:end
	p*q*sum

cinf = 2.5066 # Fixed by the algorithm; see Causley.

)format %.70f # The number of digits for a 256-bit mantissa, not counting the 8 left of the decimal sign.
gamma 12
39916799.9999999999999999999999999999999999999999083377203094662100418136867266
!11
39916800

)format %.12e
c
1.180698687310e+56
-5.437816144514e+57
1.232332820715e+59
-1.831910616577e+60
2.009185300215e+61
-1.733868504749e+62
1.226116937711e+63
...
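For comparison with a conventional language: here is a minimal Python sketch of the same Lanczos scheme, using the well-known g = 5, six-term coefficients from the Numerical Recipes formulation rather than the constants computed in the Ivy session above:

import math

# Classic Lanczos coefficients for g = 5, six terms (Numerical Recipes).
_COF = [76.18009172947146, -86.50532032941677, 24.01409824083091,
        -1.231739572450155, 0.1208650973866179e-2, -0.5395239384953e-5]

def gammln(x):
    # ln(Gamma(x)) for x > 0; the p*q prefactor and the sum of c[n]/(z+n)
    # mirror the structure of the Ivy 'gamma' op above.
    tmp = x + 5.5
    tmp -= (x + 0.5) * math.log(tmp)
    ser = 1.000000000190015  # the 'cinf'-style constant for these coefficients
    y = x
    for c in _COF:
        y += 1.0
        ser += c / y
    return -tmp + math.log(2.5066282746310005 * ser / x)

print(math.exp(gammln(12)))  # ~39916800.0, i.e. 11!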
Scraping the FreeBSD 'mpd5' daemon to obtain L2TP VPN usage data
2026-01-26 04:00
We have a collection of VPN servers, some OpenVPN based and some L2TP based. They used to be based on OpenBSD, but we're moving from OpenBSD to FreeBSD and the VPN servers recently moved too. We also have a system for collecting Prometheus metrics on VPN usage, which worked by parsing the output of things. For OpenVPN, our scripts just kept working when we switched to FreeBSD because the two OSes use basically the same OpenVPN setup. This was not the case for our L2TP VPN server.
OpenBSD does L2TP using npppd, which supports a handy command line control program, npppctl, that can readily extract and report status information. On FreeBSD, we wound up using mpd5. Unfortunately, mpd5 has no equivalent of npppctl. Instead, as covered (sort of) in its user manual you get your choice of a TCP based console that's clearly intended for interactive use and a web interface that is also sort of intended for interactive use (and isn't all that well documented).
Fortunately, one convenient thing about the web interface is that it uses HTTP Basic authentication, which means that you can easily talk to it through tools like curl. To do status scraping through the web interface, first you need to turn it on and then you need an unprivileged mpd5 user you'll use for this:
set web self 127.0.0.1 5006
set web open
set user metrics <some-password> user
At this point you can use curl to get responses from the mpd5 web server (from the local host, ie your VPN server itself):
curl -s -u metrics:... --basic 'http://localhost:5006/<something>'
There are two useful things you can ask the web server interface for. First, you can ask it for a complete dump of its status in JSON format, by asking for 'http://localhost:5006/json' (although the documentation claims that the information returned is what 'show summary' in the console would give you, it is more than that). If you understand mpd5 and like parsing and processing JSON, this is probably a good option. We did not opt to do this.
The other option is that you can ask the web interface to run console (interface) commands for you, and then give you the output in either a 'pleasant' HTML page or in a basic plain text version. This is done by requesting either '/cmd?<command>' or '/bincmd?<command>' respectively. For statistics scraping, the most useful version is the 'bincmd' one, and the command we used is 'show session':
curl -s -u metrics:... --basic 'http://localhost:5006/bincmd?show%20session'
This gets you output that looks like:
ng1 172.29.X.Y B2-2 9375347-B2-2 L2-2 2 9375347-L2-2 someuser A.B.C.D
RESULT: 0
(I assume 'RESULT: 0' would be something else if there was some sort of problem.)
Of these, the useful fields for us are the first, which gives the local network device, the second, which gives the internal VPN IP of this connection, and the last two, which give us the VPN user and their remote IP. The others are internal MPD things that we (hopefully) don't have to care about. The internal VPN IP isn't necessary for (our) metrics but may be useful for log correlation.
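As an illustration of the parsing involved (a sketch, not our actual collection scripts; the field positions are assumed from the sample output above, and the 'RESULT:' status line is skipped):

import subprocess

# The URL matches the 'set web self 127.0.0.1 5006' configuration above.
URL = "http://localhost:5006/bincmd?show%20session"

def l2tp_sessions(user, password):
    out = subprocess.run(
        ["curl", "-s", "-u", f"{user}:{password}", "--basic", URL],
        capture_output=True, text=True, check=True).stdout
    sessions = []
    for line in out.splitlines():
        fields = line.split()
        if not fields or fields[0] == "RESULT:":
            continue  # the trailing status line
        # Assumed layout: device, internal VPN IP, five internal MPD
        # fields, username, remote IP.
        sessions.append({"device": fields[0], "vpn_ip": fields[1],
                         "user": fields[-2], "remote_ip": fields[-1]})
    return sessions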
To get traffic volume information, you need to extract the usage information from each local network device that a L2TP session is using (ie, 'ng1' and its friends). As far as I know, the only tool for this in (base) FreeBSD is netstat. Although you can invoke it interface by interface, probably the better thing to do (and what we did) is to use 'netstat -ibn -f link' to dump everything at once and then pick through the output to get the lines that give you packet and byte counts for each L2TP interface, such as ng1 here.
(I'm not sure if dropped packets is relevant for these interfaces; if you think it might be, you want 'netstat -ibnd -f link'.)
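A similar sketch for the traffic counters (again an illustration, not our scripts; the column positions are an assumption, since FreeBSD's netstat output shifts if the Address column is empty, so check against your own output):

import subprocess

def ng_traffic():
    out = subprocess.run(["netstat", "-ibn", "-f", "link"],
                         capture_output=True, text=True, check=True).stdout
    counters = {}
    for line in out.splitlines():
        f = line.split()
        # Keep only the ngN pseudo-devices used by L2TP sessions.
        if not f or not f[0].startswith("ng"):
            continue
        # Assumed columns: Name Mtu Network Address Ipkts Ierrs Idrop
        # Ibytes Opkts Oerrs Obytes Coll.
        counters[f[0]] = {"ipkts": int(f[4]), "ibytes": int(f[7]),
                          "opkts": int(f[8]), "obytes": int(f[10])}
    return counters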
FreeBSD has a general system, 'libxo', for producing output from many commands in a variety of handy formats. As covered in xo_options, this can be used to get this netstat output in JSON if you find that more convenient. I opted to get the plain text format and use field numbers for the information I wanted for our VPN traffic metrics.
(Partly this was because I could ultimately reuse a lot of my metrics generation tools from the OpenBSD npppctl parsing. Both environments generated two sets of line and field based information, so a significant amount of the work was merely shuffling around which field was used for what.)
PS: Because of how mpd5 behaves, my view is that you don't want to let anyone but system staff log on to the server where you're using it. It is an old C code base and I would not trust it if people can hammer on its TCP console or its web server. I certainly wouldn't expose the web server to a non-localhost network, even apart from the bit where it definitely doesn't support HTTPS.
SRE Weekly Issue #507
2026-01-26 02:18
There’s a lot you can get out of this one even if you don’t happen to be using one of the helm charts they evaluated. Their evaluation criteria are useful and easy to apply to other charts — and also a great study guide for those new to kubernetes.
Prequel
This is the best explanation I’ve seen yet of exactly why SSL certificates are so difficult to get right in production.
Lorin Hochstein
An article on the importance of incident simulation for training, drawing from external experience in using simulations.
Stuart Rimell — Uptime Labs
I especially like the discussion of checklists, since they are often touted as a solution to the attention problem.
Chris Siebenmann
This is a new product/feature announcement, but it also has a ton of detail on their implementation, and it’s really neat to see how they built cloud provider region failure tolerance into WarpStream.
Dani Torramilans — WarpStream
It’s interesting to think of money spent on improving reliability as offsetting the cost of responding to incidents. It’s not one-to-one, but there’s an argument to be made here.
Florian Hoeppner
An explanation of the Nemawashi principle for driving buy-in for your initiatives. This is not specifically SRE-targeted, but we so often find ourselves seeking buy-in for our reliability initiatives.
Matt Hodgkins
The next time you’re flooded with alerts, ask yourself: Does this metric reflect customer pain, or is it just noise? The answer could change how you approach reliability forever.
Spiros Economakis
Favourite Tenth Doctor stories from Doctor Who
2026-01-26 01:41
- The Christmas Invasion. Given David Tennant's Doctor is unconscious for much of the episode, he must have made a heck of an impression on me when he woke up. I was already confident that he was going to be great in the role after having seen him in Casanova. Now I was convinced.
- Tooth and Claw. This is not a flawless story. I greatly dislike the digs at some of the Royal Family (and I'm not a Royal fan at all), and some of the other Rose bits are pretty unsubtle too. But in other respects it's a magic mix. Ninja monks, a scary werewolf, a library full of books, and Scotland! Thank you RTD.
- The Girl in the Fireplace. This was instantly my Dad's favourite Who story ever and remained so for the rest of his life. Just magical, even if you do pick it apart, and realise it's a retelling of The Time Traveller's Wife. A route that Steven Moffat went down far too often. But still, wow. Clockwork Droids and Madame de P.
- The Impossible Planet / The Satan Pit. For a Doctor Who fan I'm not much of a fan of scifi in space. I'm really not. But this is a base under siege, from within, and facing dark primeval forces. So gripping. And fully merits the two part treatment. I really wish that we'd got more Doctor Who from the writer Matt Jones.
- Human Nature / The Family of Blood. A moving piece of historical fiction and lost romance and chances. This is so very special. Thank you Paul Cornell.
- Blink. Ok another where David Tennant is barely in it. But it's just so good. We needed more Sally Sparrow on TV! A star in the making. And my favourite Tenth Doctor story of all.
- Silence in the Library / Forest of the Dead. I rewatched this recently. It's still superb. Tight plotting, imaginative scifi, another iconic new monster, and hey, who's this we meet?
Clearly I enjoyed Steven Moffat's writing for the Tenth Doctor. And his gas mask double parter for the Ninth Doctor remains my all-time favourite Who story ever, even beating a spaghetti-faced Count in Paris. But it's nice to see some other writers represented in the list here.
Daily Hacker News for 2026-01-25
2026-01-26 00:00The 10 highest-rated articles on Hacker News on January 25, 2026 which have not appeared on any previous Hacker News Daily are:
-
Google confirms 'high-friction' sideloading flow is coming to Android
(comments) -
Claude Code's new hidden feature: Swarms
(comments) -
Adoption of EVs tied to real-world reductions in air pollution: study
(comments) -
Introduction to PostgreSQL Indexes
(comments) -
Deutsche Telekom is throttling the internet
(comments) -
A flawed paper in management science has been cited more than 6k times
(comments) -
Doom has been ported to an earbud
(comments) -
A macOS app that blurs your screen when you slouch
(comments) -
ICE using Palantir tool that feeds on Medicaid data
(comments) -
Oneplus phone update introduces hardware anti-rollback
(comments)
Monday 26 January, 2026
2026-01-26 00:27Galatea

Matthew Darbyshire’s lovely sculpture of Galatea greets one on disembarking from the London train at Cambridge North station.
On Friday, which was a miserable day, some kind soul had the nice idea of giving her a woolly hat. Which of course made me wonder if I should wrap her in my winter overcoat. Fortunately, wiser counsels prevailed.
Quote of the Day
“The notion that a radical is one who hates his country is naïve and usually idiotic. He is, more likely, one who likes his country more than the rest of us, and is thus more disturbed than the rest of us when he sees it debauched. He is not a bad citizen turning to crime; he is a good citizen driven to despair.”
H.L. Mencken
Musical alternative to the morning’s radio news
Thea Gilmore’s Midwinter Toast
Long Read of the Day
So what really went on in Davos last week?
There’s an absolute torrent of reportage, speculation and opinionated commentary about Trump, Greenland, Mark Carney’s speech, whether we’ve now reached ‘Peak Trump’, etc. I’ve read more of this than is good for me, trying to find some nuggets of real insight, and I think I’ve found a gem — “Davos is a rational ritual” by Henry Farrell (Whom God Preserve). The title indicates that he was struck by Michael Suk-Young Chwe’s book on ‘rational ritual’ which argues that in order to coordinate its actions, a group of people must form “common knowledge.” Each person wants to participate only if others also participate. From Chwe’s perspective, Henry writes,
what is more important than the vision of the past and future is where Carney said it and how he framed it. If you are planning a grand coronation ceremony, which is supposed to create collective knowledge that you are in charge, what happens when someone stands up to express their dissent in forceful terms?
The answer is that collective knowledge turns into disagreement. By giving the speech at Davos, Carney disrupted the performance of ritual, turning the Trumpian exercise in building common knowledge into a moment of conflict over whose narrative ought prevail.
Trump’s planned descent on Davos this year was an example of royal progress:
Swooping into Davos, and making the world’s business and political elite bend their knees, would have created collective knowledge that there was a new political order, with Trump reigning above it all.
Business elites would be broken and cowed into submission, through the methods that Adam [Tooze] describes. The Europeans would be forced to recognize their place, having contempt heaped on them, while being obliged to show their gratitude for whatever scraps the monarch deigned to throw onto the floor beneath the table. The “Board of Peace” – an alarmingly vaguely defined organization whose main purpose seems to be to exact fealty and tribute to Trump – would emerge as a replacement for the multilateral arrangements that Trump wants to sweep away. And all this would be broadcast to the world.
So what Carney did was to break the ritual protocol.
Do read it.
Guess who the US military just recruited? Private AI
My most recent Observer column…
On 12 January, Pete Hegseth, an ex-TV “personality” with big hair who is now the US secretary for war (née defence), bounded on to a podium in Elon Musk’s SpaceX headquarters in Texas. He was there to announce his plans for reforming the American war machine’s bureaucratic engine, the Pentagon. In a long and surprisingly compelling speech, he made it clear that he’s embarked on a radical effort to reshape the bureaucracy of the war department, and to break up its cosy relationships with what Dwight Eisenhower called the “military-industrial complex” – the handful of bloated defence contractors that have assiduously milked the US government for decades while never delivering anything that was on time and within budget.
Predictably, one of the tools that Hegseth had chosen for his demolition job was AI, and to that end, three companies – Anthropic, Google and OpenAI – had already been given $200m contracts by the Pentagon to develop AI “agents” across different military areas. Given the venue and his host for the day, it came as no surprise to those present when Hegseth announced that Musk’s AI model, Grok, was also going to be deployed on this radical mission.
This did come as a surprise, though, to those outside the SpaceX hangar. Did it mean, mused the mainstream media commentariat, that this AI tool, which was mired in outrage and controversy for enabling people to create sexualised images of children, would be empowered to roam freely through all the archives – classified as well as unclassified – of the US war department?
Answer: yes…
Do read the whole piece. If you can’t access it, there’s a pdf here
My commonplace booklet
I’ve only been to Davos once, long before it was famous. I was on a walking holiday in Switzerland, and one day found myself in a nondescript town called Davos with nothing much going on. I bought myself a big Swiss Army penknife (which I still possess and use) and a pair of red walking socks, and thought no more of the place.
I was once invited to the gabfest, but declined the invitation, on the grounds that (a) I detested the people who attended it and (b) had no desire to go around dressed like an Eskimo in daylight while being expected to dress for dinner in the evening. Best decision I ever made.
Errata
Many thanks to the readers who pointed out that Mark Carney’s speech at Davos on January 20 preceded Donald Trump’s on the following day instead of (as I had it) the other way round.
This Blog is also available as an email three days a week. If you think that might suit you better, why not subscribe? One email on Mondays, Wednesdays and Fridays delivered to your inbox at 5am UK time. It’s free, and you can always unsubscribe if you conclude your inbox is full enough already!
the essence of frigidity
2026-01-25 00:00The front of the American grocery store contains a strange, liminal space: the transitional area between parking lot and checkstand, along the front exterior and interior of the building, that fills with oddball commodities. Ice is a fixture at nearly every store, filtered water at most, firewood at some. This retail purgatory, both too early and too late in the shopping journey for impulse purchases, is mostly good only for items people know they will need as they check out. One of the standard residents of this space has always struck me as peculiar: dry ice.
Carbon dioxide ice is said to have been invented, or we might better say discovered, in the 1830s. For whatever reason, it took just about a hundred years for the substance to be commercialized. Thomas B. Slate was a son of Oregon, somehow ended up in Boston, and then realized that the solid form of CO2 was both fairly easy to produce and useful as a form of refrigeration. With an eye towards marketing, he coined the name Dry Ice—and founded the DryIce Corporation of America. The year was 1925, and word quickly spread. In a widely syndicated 1930 article, "Use of Carbon Dioxide as Ice Said to be Developing Rapidly," the Alamogordo Daily News and others reported that "the development of... 'concentrated essence of frigidity' for use as a refrigerant in transportation of perishable products, is already taxing the manufacturing facilities of the Nation... So rapidly has the use of this new form of refrigeration come into acceptance that there is not sufficient carbon dioxide gas available."
The rush to dry ice seems strange today, but we must consider the refrigeration technology of the time. Refrigerated transportation first emerged in the US during the middle of the 19th century. Train boxcars, packed thoroughly with ice, carried meat and fruit from midwestern agriculture to major cities. This type of refrigerated transportation greatly expanded the availability of perishables, and the ability to ship fruits and vegetables between growing regions made it possible, for the first time, to get some fresh fruit out of season. Still, it was an expensive proposition: railroads built extensive infrastructure to support the movement of trains loaded down with hundreds of tons of ice. The ice itself had to be quarried from frozen lakes, some of them purpose-built, a whole secondary seasonal transportation economy.
Mechanical refrigeration, using some kind of phase change process as we are familiar with today, came about a few decades later and found regular use on steamships by 1900. Still, this refrigeration equipment was big and awkward; steam power was a practical requirement. As the Second World War broke out, tens of thousands of refrigerated railcars and nearly 20,000 refrigerated trucks were in service—the vast majority still cooled by ice, not mechanical refrigeration.
You can see, then, the advantages of a "dryer" and lighter form of ice. The sheer weight of the ice significantly reduced the capacity of refrigerated transports. "One pound of carbon dioxide ice at 110 degrees below zero is declared to be equivalent to 16 pounds of water ice," the papers explained, for the purposes of transportation. The use of dry ice could reduce long-haul shipping costs for fruit and vegetables by 50%, the Department of Commerce estimated, and dry ice even opened the door to shipping fresh produce from the West Coast to the East—without having to "re-ice" the train multiple times along the way. Indeed, improvements in refrigeration would remake the American agricultural landscape. Central California was being irrigated so that produce could grow, and refrigeration would bring that produce to market.
1916 saw the American Production Company drilling on the dusty plains of northeastern New Mexico, a few miles south of the town of Bueyeros. On the banks of an anonymous wash, in the shadow of Mesa Quitaras, they hoped to strike oil. Instead, at about 2,000 feet, they struck something else: carbon dioxide. The well blew wide open, and spewed CO2 into the air for about a year, the production estimated at 25,000,000 cubic feet of gas per day under natural pressure. For American Production, this was an unhappy accident. They could identify no market for CO2, and a year later, they brought the well under control, only to plug and abandon it permanently.
Though the "No. 1 Bueyeros" well was a commercial failure at the time, it was not wasted effort. American Production had set the future for northeastern New Mexico. There was oil, if you looked in the right place. American Production found its own productive wells, and soon had neighbors. Whiting Brothers, once operator of charismatic service stations throughout the Southwest and famously along Route 66, had drilled their own wells by 1928. American Production became part of British Petroleum. Breitburn Production of Texas has now consolidated much of the rest of the field, and more than two million cubic feet of natural gas come from northeastern New Mexico each month.
If you looked elsewhere, there was gas—not natural gas, but CO2. Most wells in the region produced CO2 as a byproduct, and the less fortunate attempts yielded nothing but CO2. The clear, non-flammable gas was mostly a nuisance in the 1910s and 1920s. By the 1930s, though, promotion by the DryIce Corporation of America (in no small part through the Bureau of Commerce) had worked. CO2 started to be seen as a valuable commodity.

The production of dry ice is deceptively simple. Given my general knowledge about producing and handling cryogenic gases, I was surprised to read of commercial-scale production with small plants in the 1930s. There is, it turns out, not that much to it. One of the chief advantages of CO2 as an industrial gas is its low critical temperature and pressure. If you take yourself back to high school chemistry, and picture a phase diagram, we can think about liquefying the CO2 gas coming out of a well. The triple point of carbon dioxide, the lowest pressure and temperature at which the liquid phase exists at all, is at around -57 Celsius and 5 atmospheres. The critical point, beyond which CO2 becomes a supercritical gas-fluid hybrid, is only at about 31 degrees Celsius and 73 atmospheres. In terms more familiar to us Americans, that's about 88 degrees F and 1,000 PSI.
In other words, CO2 gas becomes a liquid at temperatures and pressures that were readily achievable, even with the early stages of chemical engineering in the 1930s. With steam-powered chillers and compressors, it wasn't difficult to produce liquid CO2 in bulk. But CO2 makes the next step even more convenient: liquid CO2, released into open air, boils very rapidly. As it bubbles away, the phase change absorbs energy, leaving the remaining liquid CO2 even colder. Some of it freezes: much as evaporating seawater leaves behind salt, evaporating liquid CO2 leaves behind a snow-like mass of flaky, loose CO2 ice. Scoop that snow up, pack it into forms, and use steam power or weight to compress it, and you have a block of the product we call dry ice.
The Bueyeros Field, as it was initially known, caught the interest of CO2 entrepreneurs in 1931. A company called Timmons Carbonic, or perhaps Southern Dry Ice Company (I suspect these to be two names for the same outfit), produced a well about a mile east, up on the mesa.
Over the next few years, the Estancia Valley Carbon Dioxide Development Company drilled a series of wells to be operated by Witt Ice and Gas. These were located in the Estancia field, further southwest and closer to Albuquerque. Witt built New Mexico's first production dry ice plant, which operated from 1932 to 1942 off of a pipeline from several nearby wells. Low pressure and difficult drilling conditions in the Estancia field limited the plant's output, so by the time it shut down Witt had already built a replacement. This facility, known as the Bueyeros plant, produced 17 tons of dry ice per day starting in 1940. It is located just a couple of miles from the original American Production well, north of Mesa Quitaras.
About 2,000' below the surface at Bueyeros lies the Tubb Sandstone, a loose aggregation of rock stuck below the impermeable Cimarron Anhydrite. Carbon dioxide can form underground through several processes, including the breakdown of organic materials under great heat and pressure (a process that creates petroleum oil as well) and chemical reactions between different minerals, especially when volcanic activity causes rapid mixing with plenty of heat. There are enough mechanisms of formation, either known or postulated, that it's hard to say where exactly the CO2 came from. Whatever its source, the gas flowed upwards underground into the sandstone, where it became trapped under the airtight layer of Anhydrite. It's still there today, at least most of it, and what stands out in particular about northeastern New Mexico's CO2 is its purity. Most wells in the Bueyeros field produce 99% pure CO2, suitable for immediate use.
Near Solano, perhaps 20 miles southwest of Bueyeros by air, the Carbonic Chemical Co built the state's largest dry ice plant. Starting operation in 1942, the plant seems to have initially gone by the name "Dioxice," immortalized as a stop on the nearby Union Pacific branch. Dioxice is an occasional synonym for Dry Ice, perhaps intended to avoid the DryIce Corporation's trademark, although few bothered. The Carbonic Chemical Plant relied on an 18 mile pipeline to bring gas from the Bueyeros field. Uniquely, this new plant used a "high pressure process." By feeding the plant only with wells producing high pressure (hundreds of PSI, as much as 500 PSI of natural pressure at some wells), the pipeline was made more efficient and reliable. Further, the already high pressure of the gas appreciably raised the temperature at which it would liquefy.
The Carbonic Chemical plant's ammonia chillers only had to cool the CO2 to -15 degrees F, liquefying it before spraying it into "snow chambers" that filled with white carbon dioxide ice. A hydraulic press, built directly into the snow chamber, applied a couple of hundred tons of force to create a solid block of dry ice weighing some 180 pounds. After a few saw cuts, the blocks were wrapped in paper and loaded onto insulated train cars for delivery to customers throughout the west—and even some in Chicago.
The main application of CO2, a 1959 New Mexico Bureau of Mines report explains, was dry ice for shipping. Secondarily, liquid CO2 was shipped in tanks for use in carbonating beverages. Witt Ice and Gas in particular built a good business out of distributing liquid CO2 for beverage and industrial use, and for a time was a joint venture with Chicago-based nationwide gas distributor Cardox. Bueyeros's gas producers found different customers over time, so it is hard to summarize their impact, but we know some salient examples. Most beverage carbonation in mid-century Denver, and perhaps all in Albuquerque, used Bueyeros gas. Dry ice from Bueyeros was used to pack train cars passing through from California, and accompanied them all the way to the major cities of the East Coast.
By the 1950s, much of the product went to a more modern pursuit. Experimental work pursued by the military and the precursors to the Department of Energy often required precise control of low temperatures, and both solid and liquid CO2 were suitable for the purpose. In the late 1950s, Carbonic Chemical listed Los Alamos Scientific Laboratory, Sandia Laboratories, and White Sands Missile Range as their primary customers.
Bueyeros lies in Harding County, New Mexico. Harding County is home to two incorporated cities (Roy and Mosquero), a couple of railroad stops, a few highways, and hardly 650 people. It is the least populous county of New Mexico, but it's almost the size of Delaware. Harding County has never exactly been a metropolis, but it used to be a more vital place. In the 1930s, as the CO2 industry built out, there were almost 4,500 residents. Since then, the population has declined about 20% from each census to the next.

CO2 production went into a similar decline. After the war, significant improvements in refrigeration technology made mechanical refrigeration inevitable, even for road transportation. Besides, the growing chemical industry had designed many industrial processes that produced CO2 as a byproduct. CO2 for purposes like carbonation and gas blanketing was often available locally at lower prices than shipped-in well CO2, leading to a general decline in the CO2 industry.
Growing understanding of New Mexico geology and a broader reorganization of the stratigraphic nomenclature led the Bueyeros Field to become part of the Bravo Dome. Bravo Dome CO2 production in the 1950s and 1960s was likely supported mostly by military and weapons activity, as by the end of the 1960s the situation once again looked much like it did in the 1910s: the Bravo Dome had a tremendous amount of gas to offer, but there were few applications. The rate of extraction was limited by the size of the market. Most of the dry ice plants closed, contributing, no doubt, to the depopulation of Harding County.
The whole idea of drilling for CO2 is now rather amusing. Our modern problems are so much different: we have too much CO2, and we're producing even more without even intending to. It has at times seemed like the industry of the future will be putting CO2 down into the ground, not taking it out. What happened out in Harding County was almost the opening of Pandora's box. A hundred years ago, before there was a dry ice industry in the US, newspaper articles already speculated as to the possibility of global warming by CO2. At the time, it was often presented as a positive outcome: all the CO2 released by burning coal would warm the environment and thus reduce the need for that coal, possibly even a self-balancing problem. It's even more ironic that CO2 was extracted mostly to make things colder, given the longer-term consequences. Given all that, you would be forgiven for assuming that drilling for CO2 was a thing of the past.
The CO2 extraction industry has always been linked to the oil industry, and oil has always been boom and bust. In 1982, there were 16 CO2 wells operating in the Bravo Dome field. At the end of 1985, just three years later, there were 258. Despite the almost total collapse of demand for CO2 refrigeration, demand for liquid CO2 was up enormously. It turns out that American Production hadn't screwed up in 1917, at least not if they had known a little more about petroleum engineering.
In 1972, the Scurry Area Canyon Reef Operators Committee of West Texas started an experiment, attempting industrial application of a technique first proposed in the 1950s. Through a network of non-productive oil wells in the Permian Basin, they injected liquid CO2 deep underground. The rapidly evaporating liquid raised the pressure in the overall oil formation, and even lubricated and somewhat fractured the rock, all of which increased the flow rate at nearby oil wells. A decade later, the concept was proven, and CO2 Enhanced Oil Recovery (EOR) swept across the Permian Basin.
Today, it is estimated that about 62% of the global industrial production of CO2 is injected into the ground somewhere in North America to stimulate oil production. The original SACROC system is still running, now up to 414 injection wells. There are thousands more. Every day, over two billion cubic feet of CO2 are forced into the ground, pushing back up 245,000 barrels of additional oil.
British Petroleum's acquisition of American Production proved fortuitous. BP became one of the country's largest producers of CO2, extracted from the ground around Bueyeros and transported by pipeline directly to the Permian Basin for injection. In 2000, BP sold their Bravo Dome operations to Occidental Petroleum 1. Now going by Oxy, the petroleum giant has adopted a slogan of "Zero In". That's zero as in carbon emissions.
I would not have expected to describe Occidental Petroleum as "woke," but in our contemporary politics they stand out. Oxy mentions "Diversity, Inclusion, and Belonging" on the front page of their website, which was once attractive to investors but now seems more attractive to our nation's increasingly vindictive federal government. Still, Oxy is sticking to a corporate strategy that involves acknowledging climate change as real, which I suppose counts as refreshing. From a 2025 annual report:
Oxy is building an integrated portfolio of low-carbon projects, products, technologies and companies that complement our existing businesses; leveraging our competitive advantages in CO2 EOR, reservoir management, drilling, essential chemicals and major infrastructure projects; and are designed to sustain long term shareholder value as we work to implement our Net-Zero Strategy.
Yes, Oxy has made achieving net-zero carbon a major part of their brand, and yes, this model of reducing carbon emissions relies heavily on CO2 EOR: the extraction of CO2 from the ground.
In a faltering effort to address carbon emissions, the United States has leaned heavily on the promise of Carbon Capture and Storage (CCS) technologies. The idea is to take CO2 out of the environment (potentially by separating it from the air but, more practically, by capturing it in places where it is already concentrated by industrial processes) and to put it somewhere else. Yes, this has shades of the Australian television sketch about the ship whose front fell off, but the key to "sequestration" is time. If we can put enough carbon somewhere that it will stay for enough time, we can reduce the "active" greenhouse gas content of our environment. The main way we have found of doing this is injecting it deep underground. How convenient, then, that the oil industry is already looking for CO2 for EOR.
CCS has struggled in many ways, chief among them that the majority of planned CCS projects have never been built. As with most of our modern carbon reduction economy, even the CCS that has been built is, well, a little bit questionable. There is something of a Faustian bargain with fossil fuels. As we speak, about 45 megatons of CO2 are captured from industrial processes each year for CCS. Of that 45 Mt, 9 Mt are injected into dedicated CO2 sequestration projects. The rest, 80%, is purchased by the oil industry for use in EOR.
This form of CCS, in which the captured CO2 is applied to an industrial process that leads to the production of more CO2, has taken to the name CCUS. That's Carbon Capture, Utilization, and Storage. Since the majority of the CO2 injected for EOR never comes back up, it is a form of sequestration. Although the additional oil produced will generally be burned, producing CO2, the process can be said to be inefficient in terms of CO2. In other words, the CO2 produced by burning oil from EOR is less in volume than the CO2 injected to stimulate recovery of that oil.
I put a lot of time into writing this, and I hope that you enjoy reading it. If you can spare a few dollars, consider supporting me on ko-fi. You'll receive an occasional extra, subscribers-only post, and defray the costs of providing artisanal, hand-built world wide web directly from Albuquerque, New Mexico.
Mathematically, CCUS, the use of CO2 to produce oil, leads to a net reduction in released CO2. Philosophically, though, it is deeply unsatisfying. This is made all the worse by the fact that CCUS has benefited from significant government support. Outright subsidies for CCS are uncommon, although they do exist. What are quite common are grants and subsidized financing for the capital costs of CCS facilities. Nearly all CCS in the US has been built with some degree of government funding, totaling at least four billion dollars, and regulatory requirements for CCS to offset new fossil fuel plants may create a de facto electrical ratepayer subsidy for CCS. Most of that financial support, intended for our low-carbon future, goes to the oil producers.
The Permian Basin is well-positioned for CCS EOR because it produces mostly natural gas. Natural gas in its raw form, "well gas," almost always includes CO2. Natural gas processing plants separate the combustible gases from noncombustible ones, producing natural gas that has a higher energy content and burns more cleanly—but, in the process, venting large quantities of CO2 into the atmosphere. Oxy is equipping its Permian Basin natural gas plants with a capture system that collects the CO2 and compresses it for use in EOR.
The problem is that CO2 consumption for EOR has, as always, outpaced production. There aren't enough carbon capture systems to supply the Permian Basin fields, so "sequestered" CO2 is mixed with "new" CO2. Bravo Dome CO2 production has slowly declined since the 1990s, due mostly to declining oil prices. Even so, northeastern New Mexico is still full of Oxy wells bringing up CO2 by the millions of cubic feet. 218 miles of pipeline deliver Bueyeros CO2 into West Texas, and 120 miles of pipeline the other way land it in the oil fields of Wyoming. There is very nearly one producing CO2 well per person in Harding County.
Considering the totality of the system, it appears that government grants, financing incentives, and tax credits for CCS are subsidizing not only natural gas production but the extraction of CO2 itself. Whether this is progress on climate change or a complete farce depends on a mathematical analysis. CO2 goes in, from several different sources; CO2 goes out, to several different dispositions. Do we remove more from the atmosphere than we end up putting back? There isn't an obvious answer.
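The shape of the required arithmetic is simple even though the parameters are contested. Here is a toy balance in Python with invented, purely illustrative numbers; only the 36 Mt figure echoes the capture numbers above, and both fractions are assumptions, not data:

injected = 36.0               # Mt CO2/yr sold for EOR (roughly 80% of the 45 Mt captured)
retained_fraction = 0.95      # assumed: share of injected CO2 that stays underground
reemitted_per_injected = 0.5  # assumed: Mt CO2 re-emitted, via the extra oil, per Mt injected

net_removal = injected * (retained_fraction - reemitted_per_injected)
print(net_removal)  # positive means net removal; the sign hinges entirely on the assumptions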
The oil industry maintains that CCS is one of the most practical means of reducing carbon emissions, with more CO2 injected than produced and a resulting reduction in the "net CO2 impact" of the product natural gas.
As for more independent researchers, well, a paper finding that CCS EOR "cannot contribute to reductions" isn't the worst news. A 2020 literature review of reports on CCS EOR projects found that they routinely fail to account for significant secondary carbon emissions and that, due to a mix of the construction and operational realities of CCS EOR facilities and the economics of oil consumption, CCS EOR has so far produced a modest net increase in greenhouse gas emissions.
They're still out there today, drilling for carbon dioxide. The reports from the petroleum institute today say that the Permian Basin might need even more shipped in. New Mexico is an oil state; Texas gets the reputation but New Mexico has the numbers. Per-capita oil production here is significantly higher than Texas and second only to North Dakota. New Mexico now produces more oil than Old Mexico, if you will, the country to our south.
Per capita, New Mexico ranks 12th for CO2 emissions, responsible for about 1% of the nation's total. Well, I can do a bit better: for CO2 intentionally extracted from the ground, New Mexico is #3, behind only Colorado and Mississippi for total production. We produce something around 17% of the nation's supply of extracted CO2, and we even use most of it locally. I guess that's something you could put a good spin on.
-
By this time, Armand Hammer was no longer CEO of Occidental, which is unfortunate since it deprives me of an excuse to talk at length about how utterly bizarre Armand Hammer was, and about the United World College he founded in Las Vegas, NM. Suffice it to say, for now, that Occidental had multiple connections to New Mexico.↩
Optimizing Python scripts with AI
2026-01-25 23:19
One of the first steps we take when we want to optimize software is to look at profiling data. Software profilers are tools that try to identify where your software spends its time. Though the exact approach can vary, a typical profiler samples your software (stops it at regular intervals) and collects statistics. If your software is routinely stopped in a given function, that function is likely using a lot of time. In turn, it might be where you should put your optimization efforts.
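To make the sampling idea concrete, here is a minimal toy sampler, a sketch of the concept rather than of any real tool (cProfile, used below, is actually a deterministic profiler that traces every call); all the names here are invented for illustration:

import collections
import sys
import threading
import time

samples = collections.Counter()
running = True

def sampler(interval=0.001):
    # peek at the main thread's current frame at regular intervals
    main = threading.main_thread().ident
    while running:
        frame = sys._current_frames().get(main)
        if frame is not None:
            samples[frame.f_code.co_name] += 1
        time.sleep(interval)

def busy_work():
    total = 0
    for i in range(5_000_000):
        total += i * i
    return total

thread = threading.Thread(target=sampler, daemon=True)
thread.start()
busy_work()
running = False
thread.join()
print(samples.most_common(3))  # the functions the program was most often caught in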
Matteo Collina recently shared with me his work on feeding profiler data to an AI for software-optimization purposes in JavaScript. Essentially, Matteo takes the profiling data and prepares it in a way that an AI can comprehend. The insight is simple but intriguing: tell an AI how it can capture profiling data and then let it optimize your code, possibly by profiling the code repeatedly. The idea is not entirely original, since AI tools will, on their own, figure out that they can get profiling data.
How well does it work? I had to try it.
Case 1: Code amalgamation script
For the simdutf software library, we use an amalgamation script: it collects all of the C++ files on disk, does some shallow parsing and glues them together according to some rules.
I first asked the AI to optimize the script without access to profiling data. What it did immediately was add a file cache: the script (which is a bit complex) repeatedly loads the same files from disk. This saved about 20% of the running time.
Specifically, the AI replaced this naive code…
def read_file(file):
    with open(file, 'r') as f:
        for line in f:
            yield line.rstrip()
by this version with caching…
file_cache = {}  # module-level cache; assumed to be defined once near the top of the script

def read_file(file):
    if file in file_cache:
        for line in file_cache[file]:
            yield line
    else:
        lines = []
        with open(file, 'r') as f:
            for line in f:
                line = line.rstrip()
                lines.append(line)
                yield line
        file_cache[file] = lines
Could the AI do better with profiling data? I instructed it to run the Python profiler: python -m cProfile -s cumtime myprogram.py. It found two additional optimizations:
1. It precompiled the regular expressions (re.compile). It replaced
if re.match('.*generic/.*.h', file):
    # ...
by
if generic_pattern.match(file):
    # ...
where, elsewhere in the code, we have…
generic_pattern = re.compile(r'.*generic/.*\.h')
2. Instead of repeatedly calling re.sub to do a regular expression substitution, it filtered the strings by checking for the presence of a keyword in the string first.
if 'SIMDUTF_IMPLEMENTATION' in line:  # this check is the optimization
    print(uses_simdutf_implementation.sub(context.current_implementation + "\\1", line), file=fid)
else:
    print(line, file=fid)  # fast path: no substitution needed
These two optimizations could probably have been arrived at by looking at the code directly, and I cannot be certain that they were driven by the profiling data. But I can tell that they do appear in the profile data.
Unfortunately, the low-hanging fruit, caching the file access, represented the bulk of the gain. The AI was not able to further optimize the code. So the profiling data did not help much.
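As an aside, if you would rather capture the same data from inside a script than run it under python -m cProfile, the standard library supports that too (Python 3.8 and later). A generic sketch, where work() is a trivial stand-in rather than the amalgamation script itself:

import cProfile
import pstats

def work():
    # stand-in for the script's real entry point
    return sum(len(str(i)) for i in range(100_000))

with cProfile.Profile() as profiler:
    work()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)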
Case 2: Check Link Script
When I design online courses, I often use a lot of links. These links break over time. So I have a simple Python script that goes through all the links, and verifies them.
I first asked my AI to optimize the code. It did the same regex trick, compiling the regular expression. It also created a thread pool so that the URL checks run concurrently.
import concurrent.futures  # needed at the top of the script

with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    url_results = {url: executor.submit(check_url, url) for url in urls_to_check}
    for url, future in url_results.items():
        url_cache[url] = future.result()
This parallelization more than doubled the speed of the script. (URL checking is I/O-bound, so threads help here despite Python's global interpreter lock.)
It cached the URL checks in an interesting way, using functools:
from functools import lru_cache

@lru_cache(maxsize=None)
def check(link):
    # ...
I did not know about this nice trick. It proved useless in my context, however, because I rarely encounter the same link more than once.
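To see why: lru_cache only pays off when the same argument recurs. A toy demonstration, with an invented function and URL and no real network access:

from functools import lru_cache

@lru_cache(maxsize=None)
def check(link):
    print("actually checking", link)  # the body runs only on a cache miss
    return True

check("https://example.com")  # miss: prints
check("https://example.com")  # hit: served from the cache, no print
print(check.cache_info())     # CacheInfo(hits=1, misses=1, maxsize=None, currsize=1)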
I then started again, and told it to use the profiler. It did much the same thing, except for the optimization of the regular expression.
As far as I can tell all optimizations were in vain, except for the multithreading. And it could do this part without the profiling data.
Conclusion so far
The Python scripts I tried were not heavily optimized, as their performance was not critical. They are relatively simple.
For the amalgamation, I got a 20% performance gain for ‘free’ thanks to the file caching. The link checker is going to be faster now that it is multithreaded. Both optimizations are valid and useful, and will make my life marginally better.
In neither case was I able to discern benefits due to the profiler data. I was initially hoping to get the AI busy optimizing the code in a loop, continuously re-running the profiler, but that did not happen in these simple cases. The AI optimized code segments that contributed little to the running time, as per the profiler data.
To be fair, profiling data is often of limited use. The real problems are often architectural and not related to narrow bottlenecks. Even when there are identifiable bottlenecks, a simple profiling run can fail to make them clearly identifiable. Further, profilers become more useful as the code base grows, while my test cases are tiny.
Overall, I expect that the main reason for my relative failure is that I did not have the right use cases. I think that collecting profiling data and asking an AI to have a look might be a reasonable first step at this point.
023,c. unix didn't have sparse files before V7 (+ V1 patch)
Sun, 25 Jan 2026 23:28:55 +0100
As we know, the core conceit of sparse files is that when you write to a file, and the start of the write is past the current file length, that space is filled with synthetic zero bytes, which (to the precision of some block size) aren't stored on disk. The definition in the excerpt below agrees with this. They are first present in V7 unix, in spite of having been described in this form since V1.
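(For contrast, the behaviour as defined is easy to demonstrate on a modern unix. A Python sketch, assuming a filesystem that supports sparse files and reports allocation in 512-byte units via st_blocks; it mirrors the big/large programs below:)

import os

path = "/tmp/sparse_demo"
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, b"x" * 10)            # 10 bytes at the start
os.lseek(fd, 65527, os.SEEK_SET)   # seek far past the end of the file...
os.write(fd, b"x" * 10)            # ...and write 10 more
os.close(fd)

st = os.stat(path)
print(st.st_size)          # 65537: the hole counts toward the length
print(st.st_blocks * 512)  # typically far smaller: the hole is never stored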
# V1 unix
Per UNIX Programmer's Manual, K. Thompson & D. M. Ritchie, November 3, 1971, format of file system, p. 3,
If block b in a file exists, it is not necessary that all blocks less than b exist. A zero block number either in the address words of the i-node or in an indirect block indicates that the corresponding block has never been allocated. Such a missing block reads as if it contained all zero words.
and a picture is worth 861 words, so, filtered for relevant files:
:login: root
root
# chdir /tmp
# cat >big.s
start:
mov $1, r0
sys write; start; 10
mov $1, r0
sys seek; 4087.; 0
sys write; start; 10
sys exit
# ed big.s
105
/4087/
sys seek; 4087.; 0
s/4087/65527/
w large.s
106
q
# df
723+2364
# as big.s
I
II
# a.out >big
# as large.s
I
II
# a.out >large
# df
718+2364
# ls -l
total 22
124 sxrwrw 1 root 84 Jan 1 00:00:00 a.out
123 s-rwrw 1 root 4095 Jan 1 00:00:00 big
121 s-rwrw 1 root 105 Jan 1 00:00:00 big.s
126 l-rwrw 1 root 65535 Jan 1 00:00:00 large
125 s-rwrw 1 root 106 Jan 1 00:00:00 large.s
...
45 s-rwr- 1 root 142 Jan 1 00:00:00 utmp
i-node 41: USED DIR 0604 links=7 uid=0 size=70
41[ 4]: 44 tmp
i-node 44: USED DIR 0604 links=2 uid=0 size=200
44[16]: 126 large
44[17]: 125 large.s
44[18]: 123 big
44[19]: 121 big.s
i-node 126: USED REG LARGE 0606 links=1 uid=0 size=65535
i.dskp: 304 0 0 0 0 0 0 0
indir0: 303 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 305 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
i-node 125: USED REG 0606 links=1 uid=0 size=106
i.dskp: 301 0 0 0 0 0 0 0
i-node 123: USED REG 0606 links=1 uid=0 size=4095
i.dskp: 300 0 0 0 0 0 0 302
i-node 121: USED REG 0606 links=1 uid=0 size=105
i.dskp: 297 0 0 0 0 0 0 0
# df
718+2364
# cat big
@ @@ @# df
712+2364
# cat large
@ @@ @# df
586+2364
i-node 41: USED DIR 0604 links=7 uid=0 size=70
41[ 4]: 44 tmp
i-node 44: USED DIR 0604 links=2 uid=0 size=200
44[16]: 126 large
44[17]: 125 large.s
44[18]: 123 big
44[19]: 121 big.s
i-node 126: USED REG LARGE 0606 links=1 uid=0 size=65535
i_dskp: 304 0 0 0 0 0 0 0
indir0: 303 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 305 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
i-node 125: USED REG 0606 links=1 uid=0 size=106
i_dskp: 301 0 0 0 0 0 0 0
i-node 123: USED REG 0606 links=1 uid=0 size=4095
i_dskp: 300 306 307 308 309 310 311 302
i-node 121: USED REG 0606 links=1 uid=0 size=105
i_dskp: 297 0 0 0 0 0 0 0
# V4 unix
# df
/dev/rk0 542
# chdir /tmp
# cat >big.s
start:
mov $1, r0
sys write; start; 10
mov $1, r0
sys seek; 4087.; 0
sys write; start; 10
sys exit
# ed big.s
105
/4087/
sys seek; 4087.; 0
s/4087/65527/
w large.s
106
# df
/dev/rk0 540
# as big.s
# a.out >big
# as large.s
# a.out >large
# ls -l
total 141
-rwxrwxrwx 1 root 84 Jun 12 19:59 a.out
-rw-rw-rw- 1 root 4095 Jun 12 19:59 big
-rw-rw-rw- 1 root 105 Jun 12 19:59 big.s
-rw-rw-rw- 1 root 65535 Jun 12 19:59 large
-rw-rw-rw- 1 root 106 Jun 12 19:59 large.s
-rw-r--r-- 1 root 144 Jun 12 19:50 utmp
# df
/dev/rk0 534
# cat big large
@ @ @ @ #
# df
/dev/rk0 402
Q.E.D., with same wording in UNIX Programmer's Manual, Fourth Edition, K. Thompson & D. M. Ritchie, November, 1973, file system (V).
# V5 unix
# df
/dev/rk0 444
# chdir /tmp
# cat >big.s
start:
mov $1, r0
sys write; start; 10
mov $1, r0
sys seek; 4087.; 0
sys write; start; 10
sys exit
# ed big.s
105
/4087/
sys seek; 4087.; 0
s/4087/65527/
w large.s
106
# df
/dev/rk0 442
# as big.s
# a.out >big
# as large.s
# a.out >large
# ls -l
total 140
-rwxrwxrwx 1 root 84 Mar 21 12:10 a.out
-rw-rw-rw- 1 root 4095 Mar 21 12:10 big
-rw-rw-rw- 1 root 105 Mar 21 12:09 big.s
-rw-rw-rw- 1 root 65535 Mar 21 12:10 large
-rw-rw-rw- 1 root 106 Mar 21 12:09 large.s
# df
/dev/rk0 436
# cat big large
@ @ @ @ #
# df
/dev/rk0 304
Q.E.D., with unchanged wording on p. 2 of UNIX Programmer's Manual, Fifth Edition, K. Thompson & D. M. Ritchie, June, 1974, file system (V).
# V6 unix
# df
/dev/rk0 958
/dev/rk1 937
/dev/rk2 bad free count
192
# cat >big.s
start:
mov $1, r0
sys write; start; 10
mov $1, r0
sys seek; 4087.; 0
sys write; start; 10
sys exit
# ed big.s
105
/4087/
sys seek; 4087.; 0
s/4087/65527/
w large.s
106
# df
/dev/rk0 956
/dev/rk1 937
# as big.s
# a.out >big
# as large.s
# a.out >large
# df
/dev/rk0 950
/dev/rk1 937
# ls -l
total 142
-rwxrwxrwx 1 root 84 Oct 10 13:37 a.out
-rw-rw-rw- 1 root 4095 Oct 10 13:37 big
-rw-rw-rw- 1 root 105 Oct 10 13:36 big.s
-rw-rw-rw- 1 root 65535 Oct 10 13:37 large
-rw-rw-rw- 1 root 106 Oct 10 13:36 large.s
# cat big large
@ @ @ @ #
# df
/dev/rk0 818
/dev/rk1 937
Q.E.D., untouched in UNIX Programmer's Manual, Sixth Edition, K. Thompson & D. M. Ritchie, May, 1975, file system (V).
# V7 unix
# chdir /tmp
# cat >big.c
main() {
long off = 4087;
write(1, main, 10);
lseek(1, off, 0);
write(1, main, 10);
}
# ed big.c
73
/4087/
lseek(1, 0, 4087);
s/4087/65527/
w large.c
74
# df
/dev/rp0 984
/dev/rp3 297416
# cc big.c
# a.out >big
# cc large.c
# a.out >large
# df
/dev/rp0 975
/dev/rp3 297416
# ls -l
total 14
-rwxrwxr-x 1 root 584 Dec 31 19:06 a.out
-rw-rw-r-- 1 root 4097 Dec 31 19:06 big
-rw-rw-r-- 1 root 73 Dec 31 19:05 big.c
-rw-rw-r-- 1 root 10 Dec 31 19:06 large
-rw-rw-r-- 1 root 74 Dec 31 19:05 large.c
# cat big large
w <$uww <$uww >%uw >%u#
# df
/dev/rp0 975
/dev/rp3 297416
which for the first time correctly behaves according to the still-intact paragraph in UNIX Programmer's Manual, Sixth Edition, K. Thompson & D. M. Ritchie, May, 1975, file system (V), with the corrected behaviour corresponding to readi() zeroing an unused I/O buffer if the address of the data block is 0.
# Notes
I noticed this when writing a parser for the V1 filesystem, because I forgot about this case. And then I noticed that when writing the parser for V4 filesystems I based it on, I'd also forgotten about this case (the integrity of the v4root.tar dump is unaffected: there are no sparse files on the installation tape rootfs). And then I realised I didn't think I'd seen these branches in V1 mget when reading it for the previous post, either.
unix file system files contain 8 slots for links to 512-byte disk blocks. If the file fits within that size, i.e. is no larger than 4KiB, it is "small", and those links point to data blocks directly. In the samples above, big.[sc] creates a 4KiB file with data in the first 8 bytes and the last 8 bytes, so the rest is sparse: we see this in rf0 dump 1, where the start of big (file 123 (I didn't renumber these, it's just fortuitous)) is contained in block 300 and the end — in block 302. The middle un-written-to blocks are 0 (not allocated).
Bigger files are deemed "large" and the 8 slots point to "indirect" blocks, which themselves contain 256 slots for disk block links. large.[sc] makes a 64KiB file in the same manner as big.?, so we observe that large (file 126) contains one direct link to block 304, which links to blocks 303 and (appropriately further on) 305. The contents of those blocks are the same as 300 and 302. (It's impossible to make a bigger (sparse or otherwise) file in V1 because all file offsets are 16 bits; this is immaterial. The maximum theoretical size of a large file is 1MiB, and later unixes allow addressing this in decreasingly hacky ways.)
To wit: a unix file system file consists of a sequence of "(read block n) or (insert 512 zero bytes)", and reading it sequentially consists of traversing that list, in order, until you've read as much data as the file is large. Small files inline that sequence for performance, large files keep it in chunks on disk (and if a chunk is missing, that's equivalent to 256× "insert 512 zero bytes"). If implemented this way, our original definition (and description from the manual) is fulfilled.
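(In other words, something like this expositional Python sketch, which is not period code; read_block and the rest of the names are invented:)

BLOCK = 512

def read_small_file(dskp, size, read_block):
    # dskp: the 8 direct block pointers from a "small" i-node;
    # read_block(n) returns the 512 bytes of disk block n;
    # a 0 pointer stands for a block that was never allocated.
    data = b""
    for ptr in dskp:
        data += (b"\x00" * BLOCK) if ptr == 0 else read_block(ptr)
    return data[:size]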
But, as observed above, this is not how unix (before V7) implements this: in the cat log, we can see that just reading the file allocates every formerly-sparse block: df loses 6 (8 − 2) and 126 (also the amount of sparse blocks between 300 and 302 in the indirect block) blocks respectively, and in the subsequent dump we observe this directly: every previously-zero block within the bounds of the file was allocated (unsparsed) and now actually links to a block full of zeroes.
To see why, we can trace the
sys read (II)
path through
sysread
→ readinode
→ dskr (the read routine for non-special files)
→ mget
(noting that sys write (II) is basically the same but memcpys the other way;
comments verbatim from
Preliminary Release of UNIX Implementation Document,
J. DeFelice,
6/20/72
except where [indicated],
_-prefixed names expositional,
drum
is what they sometimes call the small disk containing the rootfs at this time):
struct {
int flgs;
char nlks;
char uid;
int size;
int dskp[8];
int ctim[2];
int mtim[2];
} i; // currently-open i-node
struct {
// (more spilled process state)
int *fofp;
int count;
} u;
extern r1, ret();
dskr(_ino/*, u.fofp, u.count*/) {
r1 = _ino;
/*i =*/ iget(/*r1*/); // get i-node (r1) into i-node section of core
r2 = i.size - *u.fofp; // file size […] subtract file offset
if(r2 <= 0)
goto *ret;
if(r2 <= u.count)
u.count = r2;
/*r1 =*/ mget(/*i, u.fofp*/); // […] physical block number of block […] where offset points
/*r5 =*/ dskrd(/*r1*/); // read in block, r5 points to 1st word of data in buffer
/*r1, r2, r3 =*/ sioreg(/*u.fofp, u.count, r5*/);
_memcpy(r1, r2, r3); // move data from buffer into working core […]
if(u.count == 0)
goto *ret;
else
goto *dskr;
}
extern r1, r2, r5, mq;
mget(/*i, u.fofp*/) /*-> r1*/ {
mq = *u.fofp >> 8; // divide […] by 256.
r2 = mq;
if(i.flgs & 010000)
goto _large;
if(r2 & 0xFFF0)
goto _small2large;
r1 = i.dskp[r2/2];
if(r1 == 0) { // if physical block num is zero then need a new block for file
/*r1 =*/ alloc(); // allocate a new block
i.dskp[r2/2] = r1; // physical block number stored in i-node
setimod(); // [mark i-node modified, update mtim]
clear(/*r1*/); // zero out disk/drum block just allocated
}
return;
_small2large: // adding on block which changes small file to a large file
// transfer old physical block pointers into new indirect block for the new large file
/*r1 =*/ alloc();
/*r5 =*/ wslot(/*r1*/);
_memcpy(r5, i.dskp, sizeof(i.dskp));
_memset(i.dskp, 0, sizeof(i.dskp));
// clear rest of data buffer
_memset(r5 + sizeof(i.dskp), 0, 512 - sizeof(i.dskp));
dskwr(/*r1*/);
i.dskp[0] = r1; // put pointer to indirect block in i-node
i.flgs |= 010000; // set large file bit […]
setimod();
goto *mget;
_large:
r2 &= 0xFF << 1; // […] offset in indirect block
int _in_indir = r2;
r2 = mq >> 8; // divide byte number by 256.
r2 &= 0xF;
r1 = i.dskp[r2/2];
if(r1 == 0) { // if no indirect block exists
/*r1 =*/ alloc(); // allocate a new block
i.dskp[r2/2] = r1; // put block number of new [indirect] block in i-node
setimod(); // [mark i-node modified, update mtim]
clear(/*r1*/); // [zero out disk/drum block just allocated]
}
/*r5 =*/ dskrd(/*r1*/);
int _indir = r1; // save block number of indirect block on stack
r1 = r5[r2 + _in_indir]; // put physical block no of block in file sought in r1
if(r1 == 0) { // if no block exists
/*r1 =*/ alloc(); // allocate a new block
r5[r2 + _in_indir] = r1; // put new block number into proper location
// in indirect block
r1 = _indir;
int _new = r5[r2 + _in_indir]; // […] block number of new block
wslot(/*r1*/);
dskwr(/*r1*/); // write newly modified indirect block back out on disk
r1 = _new;
clear(/*r1*/); // [zero out disk/drum block just allocated]
}
return;
}
where we see that reading/writing n bytes from/to file f at offset o is equivalent to
- ensuring all blocks in f containing [o, o+n) are allocated, then
- doing the read/write from/to the affected blocks.
This is fine for writing, and equivalent to just 2. But 1. means the interface from the manual isn't fulfilled (thus, unix doesn't have sparse files before V7), and means this is more of an implementation-detail optimisation that ensures every read completes in 1-3 I/Os instead of m = blocks(o - f.size).
Unless, of course, you always seek to the same offset before reading.
But that runs counter to unix's file-as-bag-of-bytes model, and imposes a certain structure on the file and makes it behave differently based on access pattern, which, again, runs counter to documentation and marketing of the time —
DRAFT: The UNIX Time-Sharing System, D. M. Ritchie,
mid-1971
3.1 Ordinary Files
A file contains whatever information the user places there, for example symbolic or binary (object) programs. No particular structuring is expected by the system. […] A few user programs generate and expect files with more structure; […] however, the structure of files is controlled solely by the programs which use them, not by the system.
3.5 System I/O Calls
[…] There is no distinction between "random" and sequential I/O, nor is any logical or physical record size imposed by the system. The size of a file on the disk is determined by the location of the last piece of information written on it; no predetermination of the size of a file is necessary.
— as well as those going forward (cf. The UNIX™ System: Making Computers More Productive, 1982, Bell Laboratories, transcribed in the previous post).
Because the files afflicted by this are valid, just much bigger than they ought to be, the only way you could really notice this is by examining the filesystem directly, looking precisely for this behaviour, as I have, or, on a quiet system, by checking precisely the interaction of this behaviour and df. Being fixed in V7 agrees with this, because it implies to me that this facility had started to be used like we'd use sparse files today — with large nominal file size/filesystem size ratios — on the huge unix ≤V6 install base, someone filled their disk, and the fix is pretty simple.
# V1 unix patch
--- /usr/sys/ux.s.bkp 1972-01-01 00:00:00.000000000 +0000
+++ /usr/sys/ux.s 1972-01-01 00:00:00.000000000 +0000
@@ -66,7 +66,7 @@
sysflg: .=.+1
pptiflg:.=.+1
ttyoch: .=.+1
- .even
+mget0b: .=.+1
.=.+100.; sstack:
buffer: .=.+[ntty*140.]
.=.+[nbuf*520.]
--- /usr/sys/u5.s.bkp 1972-01-01 00:00:00.000000000 +0000
+++ /usr/sys/u5.s 1972-01-01 00:00:00.000000000 +0000
@@ -1,6 +1,7 @@
/ u5 -- unix
mget:
+ mov idev,cdev
mov *u.fofp,mq / file offset in mq
clr ac / later to be high sig
mov $-8,lsh / divide ac/mq by 256.
@@ -13,6 +14,10 @@
mov i.dskp(r2),r1 / r1 has physical block number
bne 2f / if physical block num is zero then need a new block
/ for file
+ clr cdev
+ movb mget0b,r1
+ bne 2f
+ mov idev,cdev
jsr r0,alloc / allocate a new block
mov r1,i.dskp(r2) / physical block number stored in i-node
jsr r0,setimod / set inode modified byte (imod)
@@ -52,6 +57,10 @@
bic $!16,r2
mov i.dskp(r2),r1
bne 2f / if no indirect block exists
+ clr cdev
+ movb mget0b,r1
+ bne 3f
+ mov idev,cdev
jsr r0,alloc / allocate a new block
mov r1,i.dskp(r2) / put block number of new block in i-node
jsr r0,setimod / set i-node modified byte
@@ -65,6 +74,10 @@
mov (r2),r1 / put physical block no of block in file
/ sought in r1
bne 2f / if no block exists
+ clr cdev
+ movb mget0b,r1
+ bne 2f
+ mov idev,cdev
jsr r0,alloc / allocate a new block
mov r1,(r2) / put new block number into proper location in
/ indirect block
@@ -76,6 +89,7 @@
mov (sp),r1 / restore block number of new block
jsr r0,clear / clear new block
2:
+3:
tst (sp)+ / bump stack pointer
rts r0
--- /usr/sys/u6.s.bkp 1972-01-01 00:00:00.000000000 +0000
+++ /usr/sys/u6.s 1972-01-01 00:00:00.000000000 +0000
@@ -83,6 +83,7 @@
jmp error / see 'error' routine
dskr:
+ movb $1,mget0b
mov (sp),r1 / i-number in r1
jsr r0,iget / get i-node (r1) into i-node section of core
mov i.size,r2 / file size in bytes in r2
@@ -210,6 +211,7 @@
jmp error / ?
dskw: / write routine for non-special files
+ clrb mget0b
mov (sp),r1 / get an i-node number from the stack into r1
jsr r0,iget / write i-node out (if modified), read i-node 'r1'
/ into i-node area of core
With the patch, when it encounters a 0 link and mget0b is non-zero, mget will return block mget0b on rf0. Because mget needs to return a block number (on the current device), this is the least intrusive way of implementing this, but it means mget needs to be careful to maintain the notion of the "current device" to mean "the device containing the current i-node" outside of this exceptional return, since the latter will vary when reading a file from the mounted filesystem. dskr sets mget0b to 1 — due to the unique way the V1 unix file system is laid out, block 1 on the rootfs can't usefully be anything except full of zeroes — dskw clears mget0b to get the old behaviour. The procedure for building and installing a thusly-updated kernel in a unix72 environment is outlined in post 023,a. V1 unix I/O buffer count vs. performance benchmark.
# df
718+2309
# cat big large /usr/big /usr/large
@ @@ @@ @@ @@ @@ @@ @@ @# df
718+2309
df and a direct examination confirm the blocks are unchanged by the read (and that cdev is restored properly), but neither check is really required, because the cat visibly completes significantly faster.
@@ -1,6 +1,7 @@
extern r1, r2, r5, mq;
mget(/*i, u.fofp*/) /*-> r1*/ {
+ cdev = idev;
mq = *u.fofp >> 8; // divide […] by 256.
r2 = mq;
@@ -13,6 +14,11 @@
r1 = i.dskp[r2/2];
if(r1 == 0) { // if physical block num is zero then need a new block for file
+ cdev = 0;
+ if(r1 = mget0b)
+ return;
+ cdev = idev;
+
/*r1 =*/ alloc(); // allocate a new block
i.dskp[r2/2] = r1; // physical block number stored in i-node
setimod(); // [mark i-node modified, update mtim]
@@ -43,6 +49,11 @@
r2 &= 0xF;
r1 = i.dskp[r2/2];
if(r1 == 0) { // if no indirect block exists
+ cdev = 0;
+ if(r1 = mget0b)
+ return;
+ cdev = idev;
+
/*r1 =*/ alloc(); // allocate a new block
i.dskp[r2/2] = r1; // put block number of new [indirect] block in i-node
setimod(); // [mark i-node modified, update mtim]
@@ -52,6 +63,11 @@
int _indir = r1; // save block number of indirect block on stack
r1 = r5[r2 + _in_indir]; // put physical block no of block in file sought in r1
if(r1 == 0) { // if no block exists
+ cdev = 0;
+ if(r1 = mget0b)
+ return;
+ cdev = idev;
+
/*r1 =*/ alloc(); // allocate a new block
r5[r2 + _in_indir] = r1; // put new block number into proper location
// in indirect block
Nit-pick? Correction? Improvement? Annoying? Cute? Anything?
Mail,
post, or open!
vital functions
2026-01-25 21:59Reading. ( Scalzi, Tufte, Duncan )
Writing. Introduction continues to take shape. Word count hasn't gone up much, but that's partly because I am doing a reasonable job of Whacking Down A Bunch Of Words and then reassessing and deleting...
Listening. More of The Hidden Almanac. I continue to fret about not keeping super great track of it, which is in part because I seem to be extremely prone to going to sleep if it winds up on in the car...
Playing. We are finding an Exploders Inkulinati run alarmingly straightforward. Learning Continues.
Sudoku also continues to eat my brain. :|
Cooking. Dinner tonight included: another attempt at the Roti King cabbage poriyal, this time with more coconut, which I think has worked v well; a... loose attempt at a generous interpretation of Dishoom's gunpowder potatoes (no lime, no spring onion yet, no leaf coriander, not new potatoes...); and some pomegranate molasses-tamarind-yoghurt-chaat masala goop to sit some paneer in.
Earlier in the week I ticked a couple more things off the Cook (Almost) All Of East project (kung pao cauliflower; mushroom bao); this evening I have also had a first stab at recreating the Leon spiced tahini hot chocolate, which was Very Acceptable.
Eating. Finally managed to get a meal at the Viewpoint restaurant at Whipsnade (we keep not going at a time when it's open); mildly disappointed by the sourdough pizza, probably because I have a vague memory of a previous incarnation having aspirations to Fancy Restaurant, which I think the current set-up doesn't. Still v pleasant to eat food I didn't cook sat looking out over the Downs, though.
Exploring. ZOO.
Growing. I do not understand where the sciarid flies keep coming from but I am so, so, so over them. I am SO over them. WHY is the lithops container SUDDENLY FULL OF THEM.
That issue aside: lemongrass continues to have Leafs! If (if!) it keeps going like this I'm going to wind up needing to dispose of a bunch of plants via Freecycle/Freegle, goodness. Physalis still not doing anything visible. Ancho chillis almost but not quite All The Way Ripe.
It is almost certainly time to start sowing More Things but I think perhaps I will hold off until after I've had a chance to apply some nematodes...
What the world can learn from Paris’s cycling revolution.
2026-01-25 13:49- 2026‑01‑25 - What the world can learn from Paris’s cycling revolution.
- https://momentummag.com/what-the-world-can-learn-from-pariss-cycling-revolution/
- redirect https://dotat.at/:/S5J76
- blurb https://dotat.at/:/S5J76.html
- atom entry https://dotat.at/:/S5J76.atom
- web.archive.org archive.today
Burns Night
2026-01-25 21:28Quick musings on resumed Twelfth Doctor rewatch
2026-01-25 21:00I paused my rewatch part way through "Robot of Sherwood" and it took me some months to summon up the enthusiasm to restart. I'm now part way through series 8 episode 6 "The Caretaker". I've enjoyed some of the previous stories more than I expected to. Not least "Time Heist" which I could barely remember anything of. Though I rather yearn for simpler old style storytelling, rather than Steven Moffat esque convoluted timey wimeyness.
But I'm still hating the dislikeable aspects of this Doctor, which are particularly evident in series 8. Not so much his alienness, but what I perceive too often as unnecessary cruelty, which is hard to watch. It feels like experiencing the early Sixth Doctor all over again. But pushing on ...