Notes from the Library — spwhitton — https://spwhitton.name//blog/ (ikiwiki; feed updated 2024-03-16)
<h1><a href="https://spwhitton.name//blog/entry/eightyearsintucson/">Eight years in Tucson</a></h1>
<p>Posted 2024-03-14; updated 2024-03-16.</p>
<p>I spent eight years doing teaching and research in Philosophy at the
University of Arizona, in Tucson, Arizona, from 2015 to 2023. I now have a
love for America and its people, even though I am not sure I could ever live
there again. Americans would say that Tucson is an outlier, an odd
post-frontier town which is not reflective of the rest of the nation’s cities.
And I only really visited New York, the Bay Area, and two towns in
Mississippi, so I mostly take them at their word. But I could see something
in common between these places that’s distinct from anywhere else I’ve lived. I
will not seek to capture that here, but instead focus on how life in Tucson
was, and some things I learned.</p>
<p>When I first arrived I was very unsure about whether it would be a good idea
to stay. I was ambivalent about reentering academia, and uneasy with the
contractual terms under which I would be able to study there without paying
any money for it. Once I did decide to stay for at least one semester, I
tried to get myself set up with a daily routine that would be suitable for
making progress with my classes, while also allowing me time to pursue my
other interests. So I went to check out the library, that being where I’d
done all my work as an undergraduate. I was appalled to find that there
wasn’t a culture of silence. Supposedly the upper floors were designated as
quiet, but the only way I could feel confident in not being interrupted was to
find one of the small study desks sequestered in far corners, with those
moveable shelves of books they have in university libraries between me and
everyone else.</p>
<p>This initial problem with finding quiet and concentration somewhat epitomises
a lot of my academic experiences in Tucson. I felt that the academic culture
in the US was a noisy one: talking loudly to each other was valued a lot more
highly than it had been in the UK, and real deep reading and thinking was
something that people did on their own, at home, and didn’t talk about much.
You talked about all the writing you had been doing, and indeed about what
people you’d read had said, but with the latter it was as though the actual
reading had happened outside of time, and the things happening within time
were on-campus activities, and the hours of writing. You might say, well, it
was grad school, of course the focus is more on producing one’s own work. But
we did read a lot, in fact, and it’s not as though undergraduate Philosophy at
Oxford didn’t involve regularly spending a lot of time writing, even if tute
essays are something strange and staccato when compared to what we tried to
write in grad school. And this is not to say that I didn’t learn and develop
a great deal from many of those loud conversations, both in and out of
seminar, but I think a productive campus needs more quiet, too.</p>
<p>We had two kinds of classes, lecture-style with both undergrad and graduate
students, though in smaller groups than undergrad, and seminars with almost
exclusively graduate students. Many people would take as many seminars as
they were allowed to, and we all continued to join seminars once we’d
completed coursework. But a few of us, including me, joined as many lectures
as we could, even after completing coursework. I just love listening to
masters of their domains of study. This was distinctly uncool – you’ve got
to practice producing in order to become a philosopher yourself, would go the
thought. But it’s not as if I didn’t produce too. And you can’t be
disdainful of continuing to pump good philosophy into your head. Perhaps my
attraction to the lecture classes was because it was somewhat closer to the
deep reading with which I was familiar, that proved elusive on my American
campus. You have to do the hard work to make philosophical progress, but you
can’t engage with philosophy only by doing what feels like hard graft if you
want to succeed, I think. You have to engage with it in other ways too, like
just by listening.</p>
<p>A quality about the Americans I knew well which struck me early was their
generosity with time, friendliness and just materially. I mean to include here
peers who were my friends, as well as people who were part of my life for
extended periods, but with whom I didn’t have enough in common for friendship.
When I first arrived in Tucson I lived in a house in the Sam Hughes
neighbourhood, owned by the parents of one of my two roommates, Nick. He was
from Phoenix, and was taking a second undergraduate degree after deciding that
he didn’t really want to follow in his father’s footsteps and become a doctor,
but wanted to be a programmer. Nick and I would drive to the supermarket
together every Saturday in his big Ford truck, and we developed a habit of
listening to The Eagles’ <em>Take It Easy</em> on the ride back. I never signed a
lease for living in that place. At one point I was short of American money
after spending a lot on a summer trip, and I asked whether I could pay my rent
erratically for a while, as my stipend came in over the following academic
year, rather than transferring savings from the UK. It was no problem to do
this. One Spring Break and one Thanksgiving, I joined Nick in driving up to
Paradise Valley, Phoenix, to stay with his family. His mother had sat in the
state House of Representatives as a Republican, and had two very yappy
chihuahuas, traumatised as they had been by a previous owner. At one point
they had to stay with us down in Tucson for a few days. One of them refused
to walk on the tile floor, and we had to create a bridge of doormats between
the carpeted room in which it was sleeping and the front door.</p>
<p>Nick introduced me to the American love for pulp cinema, which we don’t really
have in the UK. Once Nick graduated and I developed closer friendships in my
department, I watched a lot more such films with philosophers.</p>
<p>After living with Nick I lived alone, for nine months, in a small terraced
bungalow, for barely any rent. The people around me were mostly economically
deprived retirees, and some young people working jobs like that of one man who
drove some kind of tractor around the extended grounds of the airport, on his
own, far away
from the planes. At one point a different corporation took over management of
the properties, and they tried to make us pay an additional fee for the
laundry room that had until then been included. They did this by installing a
lock, and telling us we had to come down to the office to pay the new fee and
receive a key. My neighbour Wilma and I took the bus down to the office and
objected, and eventually got keys for free. Now that I think about it, I
don’t know whether other existing tenants ended up paying for it. From this I
improved my understanding of how the economically deprived, even in the West,
can be casually abused by businesses.</p>
<p>Wilma would sit behind her screen door in the evening, without the lights on,
and a disembodied greeting would float out to me, among the crying cicadas, as
I biked up to my own place. I had a nine month lease and I left that place
right after because I was fed up with the insects infesting the place. But at
the same time, living there was when I figured out how to be happy with my
life in Tucson, and I maintained that happiness from then until the pandemic,
when everything got hard for most everyone. Wilma was generous like Nick.</p>
<p>Before I said goodbye to Nick and moved in next door to Wilma, I tried to live
a life involving the kind of variety that my life in Korea had had, before I
went to Tucson. I was continually frustrated in this, because it was too
distant from the lives that the people around me led for me to be able to
figure out how to do it there, and more mundanely, because of how car-centric
Tucson is. When I moved into my place on my own I somehow decided that I
would try focusing entirely on my university work, and I also expanded that
work a bit by registering for a seminar in Japanese literature up at the East
Asian Studies department. My future PhD thesis supervisor Julia joined me for
that seminar and one more the next semester, and I was able to draw upon some
novels we read for my thesis.</p>
<p>I didn’t have Internet access at my little place, and we had finally got some
designated-silent shared offices for grad students, in addition to the noisy
ones where people held office hours, and talked loudly about philosophy.
Suddenly my life got a lot more focused and quieter. I would get up and
scramble an egg with some cheese and black pepper, and have it in a pitta
bread-like thing which I sliced, froze, and defrosted in the toaster. I’d
head to campus, early, and write. I’d do my classes and reading. Then I’d go
swim in the big outside pool the university had, in the dark. I’d do one or
two lengths at a time and then hold onto the edge and just think hard. I
especially did this after my literature classes. They ran until 6pm, I think,
and then I’d go to the pool, and do my lengths interspersed with thinking hard
about the literature we’d discussed. Then after a long time out I’d go home
late, and listen to pre-downloaded tabletop roleplaying podcasts. I slept the
best I ever have, in the quiet among the noises of insects – it really was
quieter despite all that noise – on this wonderful Japanese floor bed I’d
found on Amazon. What I discovered during that time was the power of a simple
life, I think. Or perhaps it was more about not trying to live a more complex
life than the place you live allows. Or perhaps it wasn’t anything more than
about the benefits of giving up fighting against a prevalent culture of
workaholism – but at least, it was giving in to that situation in a way which
strongly benefitted me. Going with the flow, or something.</p>
<p>I tried to build upon my new focus with the next phase of time in Tucson. I
moved into the university’s grad student dorms, living right next to campus,
in the middle of a commercial district for students that felt like one had
left Tucson and gone somewhere more contemporary. This was a change I
appreciated a lot, having, as I said, grown tired of all the bugs. At this
time I got to know my now-fiancée Ke. I had finished with class credits but
sat in on so many classes and reading groups, while still continuing to write
a lot, that my work life didn’t change too much. While most people would
start teaching their own classes at this point, I asked if I could continue to
be assigned teaching assistant roles instead; I started teaching on my own
only during the pandemic. My social life, aside from time with Ke and her
roommate, mostly involved cycling East for forty minutes or so, to a house in
which three fellow philosophers lived. I loved those evening rides there and
nighttime rides back. Tucson is a dark city, for the sake of the astronomy, and it’s also
flat and bike-friendly, so for most of that journey I was on a route where
various things had been set up to discourage cars from staying on the same
roads as cyclists. The friends I had who lived in that house, Brandon, Tyler
and Nathan, and later Nathan’s partner Meg and Tyler’s partner Amanda, were
now the humblingly generous Americans in my life. We got two tabletop
roleplaying groups going, with me and Nathan running a game each, and playing
in each other’s. Later we were a pandemic pod, watching through <em>Terrace
House: Opening New Doors</em> together.</p>
<p>I also significantly ramped up my involvement in Debian at around this time.
Each Saturday morning I would visit a local coffee roasters, Caffe Lucè, have
an excellent bagel and a couple of cups of coffee with half-and-half, and work
on my packages.</p>
<p>I’ve described how I built for myself something of a sense of belonging
studying Philosophy in Tucson. But ultimately, it did not compare in this
regard to the place I was most content, which was in Balliol, my Oxford
college. The Arizona grad students would go out for beer at a nice place
called Time Market on some Friday nights, and while it was often a very good
time, I would walk home with this heavy feeling of disappointment. I can now
identify this as the lack of a sense of camaraderie and belonging which I
thought was essential to a productive academic environment. I can now also
see that I had an intellectual kinship with Julia, Nathan, Tyler, Ke and
others which was just as valuable, but it was still something I had only with
individuals, lacking a sense of being part of something not only bigger but
also concrete, actually in the world. The pressures of professional academia
in the US didn’t seem to leave us enough space to have what I remember us
having had at Balliol. Not that the Balliol I inhabited still exists – it
was dependent as much on the place as the people I was there with.</p>
<p>The advent of the pandemic, and the remainder of my time in Tucson after the
pandemic, eroded this life I’d figured out. The erosion of our department was
part of that – a lot of people moved away to be with their partners or
families when lockdowns began, and faculty retired (and in one case tragically
died), and so we lost a critical mass of intellectually energetic individuals.
This hit me hard, and I did not have the emotional resources remaining,
post-pandemic, to try to kick start things again, as previous versions of
myself might have tried to do. I find, though, that most of my memories of
life and Philosophy in Tucson are of the good times, and I find it easy, now
at least, to write a post like this one.</p>
<p>When I think back to all the classes I took, discussions I had and essays I
wrote and revised, I can see significant intellectual development. At the
same time, it was as though my development in other senses was put on hold for
those eight years, in a way that it had not been at Oxford and in Korea. (I
even find myself wanting to say that my whole life was put on hold, but that
would be hyperbolic even if it felt that way sometimes, for as I have said, I
developed many important friendships.) Postgraduate Philosophy was just too
consuming. I don’t know if it could have been any other way, but I knew all along
that it had to stop at some point; I knew that I couldn’t put all the other
respects in which I wanted to grow on hold forever. Somehow, Oxford got this
balance right: it managed to be just as satisfyingly intense and thrilling,
without being quite all-consuming. Of course, I probably have rose-tinted
glasses. It does seem, though, that European hard work manages to be more
balanced, at least for what I seek to achieve, than American hard work.</p>
<p>During my final year, a current postdoc at Oxford happened to visit Tucson to
speak at a political philosophy conference. Our quiet (to her),
old-fashioned, relatively informal academic life out in the desert as grad
students seemed to have a lot of advantages over hers in Oxford, despite the
fact that she had completed her doctorate and obtained an academic job, while
we were still students. Until I met her, I had taken for granted, I think, all the
ways that academic life in Tucson <em>was</em> quite like Balliol undergrad had been
– she told me how her colleagues are all on Twitter, but none of us were,
really. When I first arrived in Tucson I found it distressing how much more
of an ivory tower it seemed, with Oxford being such a politically engaged
place. In the end I am very glad I did a humanities PhD where I did, and am
deeply grateful to America.</p>
<h1><a href="https://spwhitton.name//blog/entry/dissertmake/">dissertmake</a></h1>
<p>Posted 2023-12-05.</p>
<p><a href="https://diziet.dreamwidth.org/">Ian</a> suggested I share the highly involved
build process for <a href="https://spwhitton.name//philos/research/Whitton_dissert_web.pdf">my doctoral
dissertation</a>, which I submitted for
examination earlier this year. Beyond compiling a PDF from Markdown and
LaTeX sources, there were just two, simple-seeming goals: produce a PDF that
passes PDF/A validation, for long term archival, and replace the second page
with a scanned copy of the page after it was signed by the examiners.
Achieving these two things reproducibly turned out to require a lot of
complexity.</p>
<p>First we build dissertation1.tex out of a number of LaTeX and Markdown files,
and a Pandoc metadata.yml, using Pandoc in a Debian sid chroot. I had to do
the latter because I needed a more recent Pandoc than was available in Debian
stable at the time, and didn’t dare upgrade anything else. Indeed, after
switching to the newer Pandoc, I carefully diff’d dissertation1.tex to ensure
nothing other than what I needed had changed.</p>
<pre><code class="{.GNUmakefile}">dissertation1.tex: preamble.tex \
citeproc-preamble.tex \
committee.tex \
acknowledgements.tex \
dedication.tex \
contents.tex \
abbreviations.tex \
abstract.tex \
metadata.yaml \
template.latex \
philos.csl \
philos.bib \
ch1.md ch1_appA.md ch2.md ch3.md ch3_appB.md ch4.md ch5.md
schroot -c melete-sid -- pandoc -s -N -C -H preamble.tex \
--template=template.latex -B committee.tex \
-B acknowledgements.tex -B dedication.tex \
-B contents.tex -B abbreviations.tex -B abstract.tex \
ch1.md ch1_appA.md ch2.md ch3.md ch3_appB.md ch4.md ch5.md \
citeproc-preamble.tex metadata.yaml -o $@
</code></pre>
<p>With hindsight, I think that I should have eschewed Pandoc in favour of plain
LaTeX for a project as large as this was. Pandoc is good for journal
submissions, where one is responsible for the content but not really the
presentation. However, one typesets one’s own dissertation, without anyone
else’s help. I decided to commit dissertation1.tex to git, because Pandoc’s
LaTeX generation is not too stable.</p>
<p>We then compile a first PDF. My Makefile comments say that pdfx.sty requires
this particular xelatex invocation. pdfx.sty is supposed to make the PDF
satisfy the PDF/A-2B long term archival standard … but dissertation1.pdf
doesn’t actually pass PDF/A validation. We instead rely on GhostScript to
produce a valid PDF/A-2B, at the final step. But we have to include pdfx.sty
at this stage to ensure that the hyperlinks in the PDF are PDF/A-compatible –
without pdfx.sty, GhostScript rejects hyperref’s links.</p>
<pre><code class="{.GNUmakefile}">dissertation1.pdf: \
dissertation1.tex dissertation1.xmpdata committee_watermark.png
xelatex -shell-escape -output-driver="xdvipdfmx -z 0" $<
xelatex -shell-escape -output-driver="xdvipdfmx -z 0" $<
xelatex -shell-escape -output-driver="xdvipdfmx -z 0" $<
</code></pre>
<p>As I said, the second page of the PDF needs to be replaced with a scanned
version of the page after it was signed by the examiners. The usual tool to
stitch PDFs together is pdftk. But pdftk loses the PDF’s metadata. For the
true, static metadata like the title, author and keywords, it would be no
problem to add them back. But the metadata that’s lost includes the PDF’s
table of contents, which PDF readers display in a sidebar, with clickable
links to chapters, and the sections within those. This information is not
static because each time any of the source Markdown and LaTeX files change,
there is the potential for the table of contents to change. So we have to
extract all the metadata from dissertation1.pdf and save it to one side,
before we stitch in the scanned page. We also have to hack the metadata to
ensure that the second page will have the correct orientation.</p>
<pre><code class="{.GNUmakefile}">SED = /^PageMediaNumber: 2$$/ { n; s/0/90/; n; s/612 792/792 612/ }
KEYWORDS = virtue ethics, virtue, happiness, eudaimonism, good lives, final ends
dissertation1_meta.txt: dissertation1.pdf
printf "InfoBegin\nInfoKey: Keywords\nInfoValue: %s\n%s\n" \
"${KEYWORDS}" "$$(pdftk $^ dump_data)" \
| sed "${SED}" >$@
</code></pre>
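<p>The <code>SED</code> expression is doing surgery on pdftk’s <code>dump_data</code>
output: when it reaches the record for page two, it bumps the rotation from 0
to 90 and swaps the page dimensions. (The doubled <code>$$</code> in the
Makefile is just make’s escaping for a literal <code>$</code>.) Here is a
standalone demonstration against some fabricated sample lines in the same
format as pdftk’s output – my illustration, not part of the original
Makefile:</p>

```shell
# Fabricated sample of pdftk dump_data output for page 2, piped
# through the same sed program used in the Makefile.
printf 'PageMediaNumber: 2\nPageMediaRotation: 0\nPageMediaDimensions: 612 792\n' |
    sed '/^PageMediaNumber: 2$/ { n; s/0/90/; n; s/612 792/792 612/ }'
# Prints:
#   PageMediaNumber: 2
#   PageMediaRotation: 90
#   PageMediaDimensions: 792 612
```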
<p>Now we can stitch in the signed page, and then put the metadata back. You
can’t do this in one invocation of pdftk, so far as I could see.</p>
<pre><code class="{.GNUmakefile}">dissertation1_stitched_updated.pdf: \
dissertation1_stitched.pdf dissertation1_meta.txt
pdftk dissertation1_stitched.pdf \
update_info dissertation1_meta.txt output $@
dissertation1_stitched.pdf: dissertation1.pdf
pdftk A=$^ \
B=$$HOME/annex/philos/Dissertation/committee_signed.pdf \
cat A1 B1 A3-end output $@
</code></pre>
<p>Finally, we use GhostScript to reprocess the PDF into two valid PDF/A-2Bs, one
optimised for the web. This requires supplying a colour profile, a
PDFA_def.ps postscript file, a whole sequence of GhostScript options, and some
raw postscript on the command line, which gives the PDF reader some display
hints.</p>
<pre><code class="{.GNUmakefile}">GS_OPTS1 = -sDEVICE=pdfwrite -dBATCH -dNOPAUSE -dNOSAFER \
-sColorConversionStrategy=UseDeviceIndependentColor \
-dEmbedAllFonts=true -dPrinted=false -dPDFA=2 \
-dPDFACompatibilityPolicy=1 -dDetectDuplicateImages \
-dPDFSETTINGS=/printer -sOutputFile=$@
GS_OPTS2 = PDFA_def.ps dissertation1_stitched_updated.pdf \
-c "[ /PageMode /UseOutlines \
/Page 1 /View [/XYZ null null 1] \
/PageLayout /SinglePage /DOCVIEW pdfmark"
all: Whitton_dissert_web.pdf Whitton_dissert_gradcol.pdf
Whitton_dissert_gradcol.pdf: \
PDFA_def.ps dissertation1_stitched_updated.pdf srgb.icc
gs ${GS_OPTS1} ${GS_OPTS2}
Whitton_dissert_web.pdf: \
PDFA_def.ps dissertation1_stitched_updated.pdf srgb.icc
gs ${GS_OPTS1} -dFastWebView=true ${GS_OPTS2}
</code></pre>
<p>And here’s PDFA_def.ps, based on a sample in the GhostScript docs:</p>
<pre><code class="{.ps}">% Define an ICC profile :
/ICCProfile (srgb.icc)
def
[/_objdef {icc_PDFA} /type /stream /OBJ pdfmark
[{icc_PDFA}
<<
/N 3
>> /PUT pdfmark
[{icc_PDFA} ICCProfile (r) file /PUT pdfmark
% Define the output intent dictionary :
[/_objdef {OutputIntent_PDFA} /type /dict /OBJ pdfmark
[{OutputIntent_PDFA} <<
/Type /OutputIntent % Must be so (the standard requires).
/S /GTS_PDFA1 % Must be so (the standard requires).
/DestOutputProfile {icc_PDFA} % Must be so (see above).
/OutputConditionIdentifier (sRGB)
>> /PUT pdfmark
[{Catalog} <</OutputIntents [ {OutputIntent_PDFA} ]>> /PUT pdfmark
</code></pre>
<p>Phew!</p>
<h1><a href="https://spwhitton.name//blog/entry/consfigurator_1.3.0/">consfigurator 1.3.0</a></h1>
<p>Posted 2023-03-17.</p>
<p>I’ve just released Consfigurator 1.3.0, with some readtable enhancements. So
now instead of writing</p>
<pre><code> (firewalld:has-policy "athenet-allow-fwd"
#>EOF><?xml version="1.0" encoding="utf-8"?>
<policy priority="-40" target="ACCEPT">
<ingress-zone name="trusted"/>
<egress-zone name="internal"/>
</policy>
EOF)
</code></pre>
<p>you can write</p>
<pre><code> (firewalld:has-policy "athenet-allow-fwd" #>>~EOF>>
<?xml version="1.0" encoding="utf-8"?>
<policy priority="-40" target="ACCEPT">
<ingress-zone name="trusted"/>
<egress-zone name="internal"/>
</policy>
EOF)
</code></pre>
<p>which is a lot more readable when it appears in a list of other properties.
In addition, instead of writing</p>
<pre><code>(multiple-value-bind (match groups)
(re:scan-to-strings "^uid=(\\d+)" (connection-connattr connection 'id))
(and match (parse-integer (elt groups 0))))
</code></pre>
<p>you can write just <code>(#1~/^uid=(\d+)/p (connection-connattr connection 'id))</code>.
On top of the Perl-inspired syntax, I’ve invented the new trailing option <code>p</code>
to attempt to parse matches as numbers.</p>
<p>Another respect in which Consfigurator’s readtable has become much more useful
in this release is that I’ve finally taught Emacs about these reader macros,
such that unmatched literal parentheses within regexps or heredocs don’t cause
Emacs (and especially Paredit) to think that the code couldn’t be valid Lisp.
Although I was able mostly to reuse propertising algorithms from the built-in
<code>perl-mode</code>, I did have to learn a lot more about how <code>parse-partial-sexp</code>
really works, which was pretty cool.</p>
<h1><a href="https://spwhitton.name//blog/entry/fg-daemon-gdb/">Always running Emacs under gdb</a></h1>
<p>Posted 2022-11-03; updated 2023-01-16.</p>
<p>The emacsclient(1) program is used to connect to Emacs running as a daemon.
emacsclient(1) can go in your EDITOR/VISUAL environment variables so that you
can edit things like Git commit messages and sudoers files in your existing
Emacs session, rather than starting up a new instance of Emacs. It’s not
only that this is usually faster, but also that it means you have all your
session state available – for example, you can yank text from other files you
were editing into the file you’re now editing.</p>
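<p>A typical shell configuration for this – one common arrangement, not quoted
from the post – looks like:</p>

```shell
# Route editor invocations to the running Emacs daemon.
# ALTERNATE_EDITOR='' tells emacsclient to start a daemon itself
# if none is running yet.
export ALTERNATE_EDITOR=''
export EDITOR='emacsclient -t'   # terminal frame, for console tools
export VISUAL='emacsclient -c'   # graphical frame where available
echo "$EDITOR"
# Prints: emacsclient -t
```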
<p>Another, somewhat different use of emacsclient(1) is to open new Emacs frames
for arbitrary work, not just editing a single, given file. This can be in a
terminal or under a graphical display manager. I use emacsclient(1) for this
purpose about as often as I invoke it via EDITOR/VISUAL. I use <code>emacsclient
-nc</code> to open new graphical frames and <code>emacsclient -t</code> to open new text-mode
frames, the latter when SSHing into my work machine from home, or similar. In
each case, all my buffers, command history etc. are available. It’s a real
productivity boost.</p>
<p>Some people use systemd socket activation to start up the Emacs daemon. That
way, they only need ever invoke <code>emacsclient</code>, without any special options,
and the daemon will be started if not already running. In my case, instead,
<code>emacsclient</code> on PATH is a <a href="https://git.spwhitton.name/dotfiles/tree/bin/emacsclient">wrapper
script</a> that checks
whether a daemon is running and starts one if necessary. The main reason I
have this script is that I regularly use both the installed version of Emacs
and in-tree builds of Emacs out of emacs.git, and the script knows how to
choose what to launch and what to try to connect to. In particular, it
ensures that the in-tree emacsclient(1) is not used to try to connect to the
installed Emacs, which might fail due to protocol changes. And it won’t use
the in-tree Emacs executable if I’m currently recompiling Emacs.</p>
<p>I’ve recently enhanced my wrapper script to make it possible to have the
primary Emacs daemon always running under gdb. That way, if there’s a
seemingly-random crash, I might be able to learn something about what
happened. The tricky thing is that I want gdb to be running inside an
instance of Emacs too, because Emacs has a nice interface to gdb. Further,
gdb’s Emacs instance – hereafter “gdbmacs” – needs to be the installed,
optimised build of Emacs, not the in-tree build, such that it’s less likely to
suffer the same crash. And the whole thing must be transparent: I shouldn’t
have to do anything special to launch the primary session under gdb. That is,
if right after booting up my machine I execute</p>
<pre><code>% emacsclient foo.txt
</code></pre>
<p>then gdbmacs should start, it should then start the primary session under gdb,
and finally the real emacsclient(1) should connect to the primary session and
request editing foo.txt. I’ve got that all working now, and there are some
nice additional features. If the primary session hits a breakpoint, for
example, then emacsclient requests will be redirected to gdbmacs, so that I
can still edit files etc. without losing the information in the gdb session.
I’ve given gdbmacs a different background colour, so that if I request a new
graphical frame and it pops up with that colour, I know that the main session
is wedged and I might like to investigate.</p>
<h1>First attempt: remote attaching</h1>
<p>My first attempt, which was running for several weeks, had a different
architecture. Instead of having gdbmacs start up the primary session, the
primary session would start up gdbmacs, send over its own PID, and ask gdbmacs
to use gdb’s functionality for attaching to existing processes. In
<code>after-init-hook</code> I had code to check whether we are an Emacs that just
started up out of my clone of emacs.git, and if so, we invoke
<pre><code>% emacsclient --socket-name=gdbmacs --spw/installed \
--eval '(spw/gdbmacs-attach <the pid>)'
</code></pre>
<p>The <code>--spw/installed</code> option asks the wrapper script to start up gdbmacs using
the Emacs binary on PATH, not the one in emacs.git/. (We can’t use the
<code>server-eval-at</code> function because we need the wrapper script to start up
gdbmacs if it’s not already running.)</p>
<p>Over in gdbmacs, the <code>spw/gdbmacs-attach</code> function then did something like
this:</p>
<pre><code>(let ((default-directory (expand-file-name "~/src/emacs/")))
(gdb (format "gdb -i=mi --pid=%d src/emacs" pid))
(gdb-wait-for-pending (lambda () (gud-basic-call "continue"))))
</code></pre>
<p>Having gdbmacs attach to the existing process is more robust than having
gdbmacs start up Emacs under gdb. If anything goes wrong with attaching, or
with gdbmacs more generally, you’ve still got the primary session running
normally; it just won’t be under a debugger. More significantly, the wrapper
script doesn’t need to know anything about the relationship between the two
daemons. It just needs to be able to start up both in-tree and installed
daemons, using the <code>--spw/installed</code> option to determine which. The
complexity is all in Lisp, not shell script (the wrapper is a shell script
because it needs to start up fast).</p>
<p>The disadvantage of this scheme is that the primary session’s stdout and
stderr are not directly accessible to gdbmacs. There is a function
<code>redirect-debugging-output</code> to deal with this situation, and I experimented
with having the primary session call this and send the new output filename to
gdbmacs, but it’s much less smooth than having gdbmacs start up the primary
session itself.</p>
<p>I think most people would probably prefer this scheme. It’s definitely
cleaner to have the two daemons start up independently, and then have one
attach to the other. But I decided that I was willing to complexify my
wrapper script in order to have the primary session’s stdout and stderr
attached to gdbmacs in the normal way.</p>
<h1>Second attempt: daemons starting daemons</h1>
<p>In this version, the relevant logic is shifted out of Lisp into the wrapper
script. When we execute <code>emacsclient foo.txt</code>, the script first determines
whether the primary session is already running, using something like this:</p>
<pre><code>[ -e /run/user/1000/emacs/server \
-a -n "$(ss -Hplx src /run/user/1000/emacs/server)" ]
</code></pre>
<p>The ss(8) tool is used to determine if anything is listening on the socket.
The script also uses flock(1) to have other instances of the wrapper script
wait, in case they are going to cause the daemon to exit, or something. If
the daemon is running, then we can just exec <code>emacs.git/lib-src/emacsclient</code>
to handle the request. If not, we first have to start up gdbmacs:</p>
<pre><code>installed_emacsclient=$(PATH=$(echo "$PATH" \
| sed -e "s#/directory/containing/wrapper/script##") \
command -v emacsclient)
"$installed_emacsclient" -a '' -sgdbmacs --eval '(spw/gdbmacs-attach)'
</code></pre>
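<p>As for the flock(1) serialisation mentioned above, the general pattern is to
take an exclusive lock on a file descriptor for the lifetime of the script, so
that concurrent wrapper invocations queue up rather than race. A sketch of
mine – the lock file path is hypothetical, not taken from the author’s actual
wrapper:</p>

```shell
# Open fd 9 on a lock file, then block until we hold an exclusive
# lock; other instances of the script wait here.
lockfile=${XDG_RUNTIME_DIR:-/tmp}/emacsclient-wrapper.lock
exec 9>"$lockfile"
flock 9
echo 'lock acquired'
# The lock is released automatically when the script (and fd 9) exits.
```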
<p><code>spw/gdbmacs-attach</code> now does something like this:</p>
<pre><code>(let ((default-directory (expand-file-name "~/src/emacs/")))
(gdb "gdb -i=mi --args src/emacs --fg-daemon")
(gdb-wait-for-pending
(lambda ()
(gud-basic-call "set cwd ~")
(gdb-wait-for-pending
(lambda ()
(gud-basic-call "run"))))))
</code></pre>
<p><code>"$installed_emacsclient"</code> exits as soon as <code>spw/gdbmacs-attach</code> returns,
which is before the primary session has started listening on the socket, so
the wrapper script uses inotifywait(1) to wait until <code>/run/user/1000/emacs/server</code>
appears. Then it is finally able to exec <code>~/src/emacs/lib-src/emacsclient</code> to
handle the request.</p>
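<p>That waiting step might be sketched like this – my own illustration with a
hypothetical helper name, not the author’s script – using inotifywait on the
socket’s parent directory, with a short timeout doubling as a cheap poll
interval in case the creation event is missed:</p>

```shell
# Block until a path exists.  inotifywait -t 1 returns after at most
# a second, so we re-check the path on every iteration; if
# inotifywait is unavailable, fall back to plain sleep.
wait_for_socket() {
    while [ ! -e "$1" ]; do
        inotifywait -qq -t 1 -e create "$(dirname "$1")" 2>/dev/null || sleep 1
    done
}

# e.g. wait_for_socket /run/user/1000/emacs/server
```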
<h1>A particular kind of complexity</h1>
<p>The wrapper script must be highly reliable. I use my primary Emacs session
for everything, on the same laptop that I do my academic work. The main way I
get at it is via a window manager shortcut that executes <code>emacsclient -nc</code> to
request a new frame, such that if there is a problem, I won’t see any error
output until I open an xterm and tail <code>~/.swayerr</code>/<code>~/.xsession-errors</code>. And
as starting gdbmacs, and only then a less optimised, in-tree debug build of
Emacs under gdb, is not fast, I would have to wait at least ten seconds without
any Emacs frame popping up before I could suppose that something was wrong.</p>
<p>This is where the first scheme, where the complexity is all in Lisp, really
seems attractive. My emacsclient(1) wrapper script has several other
facilities and convenience features, some of which are general and some of
which are only for my personal usage patterns, and the code for all those is
now interleaved with the special cases for gdbmacs and the primary session
that I’ve described in this post. There’s a lot that could go wrong, and it’s
all in shell, and its output isn’t readily visible to the user. I’ve done a
lot of testing, and I’m pretty confident in the script in its current form,
but if I need to change or add features, I’ll have to do a lot of testing
again before I can deploy to my usual laptop.</p>
<p>Single-threaded, readily interactively-debuggable Emacs Lisp really shines for
this sort of “do exactly what I mean, as often as possible” code, and you find
a lot of it in Emacs itself, third party packages, and peoples’ <code>init.el</code>
files. You can add all sorts of special cases to your interactive commands to
make Emacs do just what is most useful, and have confidence that you can
manage the resulting complexity. In this case, though, I’ve got piles of just
this sort of complexity out in an opaque shell script. The ultimate goal,
though, is debugging Emacs, such that one can run yet more DJWIM Emacs Lisp,
which perhaps justifies it.</p>
<h1><a href="https://spwhitton.name//blog/entry/reprepro-rebuilder/">reprepro-rebuilder</a> (2022-09-08)</h1>
<p>I’ve come up with a new <a href="https://manpages.debian.org/reprepro">reprepro</a>
wrapper for adding rebuilds of existing Debian packages to a local repository:
<a href="https://git.spwhitton.name/dotfiles/tree/bin/reprepro-rebuilder">reprepro-rebuilder</a>.
It should make it quicker to update local rebuilds of existing packages,
patched or unpatched, working wholly out of git. Here’s how it works:</p>
<ol>
<li><p>Start with a git branch corresponding to the existing Debian package you
want to rebuild. Probably you want <code>dgit clone foo</code>.</p></li>
<li><p>Say <code>reprepro-rebuilder unstable</code>, and the script will switch you to a
branch <code>PREFIX/unstable</code>, where PREFIX is a short name for your reprepro
repository, and update <code>debian/changelog</code> for a local rebuild. If the
branch already exists, it will be updated with a merge.</p></li>
<li><p>You can now do any local patching you might require. Then, say
<code>reprepro-rebuilder --release</code>. (The command from step (2) will offer to
release immediately for the case that no additional patching is required.)</p></li>
<li><p>At this point, your reprepro will contain a source package corresponding to
your local rebuild. You can say <code>reprepro-rebuilder --wanna-build</code> to
build any missing binaries for all suites, for localhost’s Debian
architecture. (Again, the command from step (3) will offer to do this
immediately after adding the source package.)</p></li>
</ol>
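<p>In terms of shell commands, then, a complete unstable rebuild session looks
something like this (the package name is illustrative, and steps (3) and (4)
can instead happen via the prompts mentioned above):</p>

```
$ dgit clone foo && cd foo         # step (1)
$ reprepro-rebuilder unstable      # step (2): switch to PREFIX/unstable
  ... any local patching ...
$ reprepro-rebuilder --release     # step (3): add the source package
$ reprepro-rebuilder --wanna-build # step (4): build missing binaries
```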
<p>Additionally, if you’re rebuilding for unstable, reprepro-rebuilder will offer
to rebuild for backports, too, and there are a few more convenience features,
such as offering to build binaries for testing between steps (2) and (3). You
can leave the script waiting to release while you do the testing.</p>
<p>I think that the main value of this script is keeping track of the distinct
steps of a relatively fiddly, potentially slow-running workflow for you,
including offering to perform your likely next step immediately. This means
that you can be doing something else while the rebuilds are trundling along:
you just start <code>reprepro-rebuilder unstable</code> in a shell, and unless additional
patching is required between steps (2) and (3), you just have to answer script
prompts as they show up and everything gets done.</p>
<p>If you need to merge from upstream fairly regularly, and then produce binary
packages for both unstable and backports, that’s quite a lot of manual steps
that reprepro-rebuilder takes care of for you. But the script’s command line
interface is flexible enough for the cases where more intervention is
required, too. For example, for my Emacs snapshot builds, I have another
script to replace steps (1) and (2), which merges from a specific branch that
I know has been manually tested, and generates a special version number. Then
I say <code>reprepro-rebuilder --release</code> and the script takes care of preparing
packages for unstable and bullseye-backports, and I can have my snapshots on
all of my machines without a lot of work.</p>
<h1><a href="https://spwhitton.name//blog/entry/rpi4-consfigurator/">Setting up a single-board desktop replacement with Consfigurator</a> (2022-08-03)</h1>
<p>The ThinkPad x220 that I had been using as an ssh terminal at home finally
developed one too many hardware problems a few weeks ago, and so I ordered a
Raspberry Pi 4b to replace it. Debian builds <a href="http://raspi.debian.net/">minimal SD
card images</a> for these machines already, but I
wanted to use the usual ext4-on-LVM-on-LUKS setup for GNU/Linux workstations.
So I used <a href="https://spwhitton.name/tech/code/consfigurator">Consfigurator</a> to build a custom image.</p>
<p>There are two key advantages to using Consfigurator to do something like this:</p>
<ol>
<li><p>As shown below, it doesn’t take a lot of code to define the host, it’s
easily customisable without writing shell scripts, and it’s all
declarative. (It’s quite a bit less code than <a href="https://salsa.debian.org/raspi-team/image-specs">Debian’s image-building
scripts</a>, though I haven’t
carefully compared, and they are doing some additional setup beyond what’s
shown below.)</p></li>
<li><p>You can do nested block devices, as required for ext4-on-LVM-on-LUKS,
without writing an intensely complex shell script to expand the root
filesystem to fill the whole SD card on first boot. This is because
Consfigurator can just as easily partition and install an actual SD card as
it can write out a disk image, using the same host definition.</p></li>
</ol>
<p>Consfigurator already had all the capabilities to do this, but as part of this
project I did have to come up with the high-level wrapping API, which didn’t
exist yet. My first SD card write wouldn’t boot because I had to learn more
about kernel command lines; the second wouldn’t boot because of a <a href="https://git.spwhitton.name/consfigurator/commit/?id=94a10f08b087afa25cd7757e1ad2ad4c9a82f63b">minor bug
in Consfigurator regarding
/etc/crypttab</a>;
and the third build is the one I’m using, except that the first boot runs into
a <a href="https://bugs.debian.org/1016455">bug in cryptsetup-initramfs</a>. So as far
as Consfigurator is concerned I would like to claim that it worked on my
second attempt, and had I not been using LUKS it would have worked on the
first :)</p>
<h2>The code</h2>
<pre><code class="{.common-lisp}">(defhost erebus.silentflame.com ()
"Low powered home workstation in Tucson."
(os:debian-stable "bullseye" :arm64)
(timezone:configured "America/Phoenix")
(user:has-account "spwhitton")
(user:has-enabled-password "spwhitton")
(disk:has-volumes
(physical-disk
(partitioned-volume
((partition
:partition-typecode #x0700 :partition-bootable t :volume-size 512
(fat32-filesystem :mount-point #P"/boot/firmware/"))
(partition
:volume-size :remaining
(luks-container
:volume-label "erebus_crypt"
:cryptsetup-options '("--cipher" "xchacha20,aes-adiantum-plain64")
(lvm-physical-volume :volume-group "vg_erebus"))))))
(lvm-logical-volume
:volume-group "vg_erebus"
:volume-label "lv_erebus_root" :volume-size :remaining
(ext4-filesystem :volume-label "erebus_root" :mount-point #P"/"
:mount-options '("noatime" "commit=120"))))
(apt:installed "linux-image-arm64" "initramfs-tools"
"raspi-firmware" "firmware-brcm80211"
"cryptsetup" "cryptsetup-initramfs" "lvm2")
(etc-default:contains "raspi-firmware"
"ROOTPART" "/dev/mapper/vg_erebus-lv_erebus_root"
"CONSOLES" "ttyS1,115200 tty0"))
</code></pre>
<p>and then you just insert the SD card and, at the REPL on your laptop,</p>
<pre><code class="{.common-lisp}">CONSFIG> (hostdeploy-these laptop.example.com
(disk:first-disk-installed-for nil erebus.silentflame.com #P"/dev/mmcblk0"))
</code></pre>
<p>There is more general information in the <a href="https://spwhitton.name/doc/consfigurator/tutorial/os_installation.html">OS installation
tutorial</a>
in the Consfigurator user’s manual.</p>
<h2>Other niceties</h2>
<ul>
<li><p>Configuration management that’s just as easily applicable to OS installation
as it is to the more usual configuration of hosts over SSH drastically
improves the ratio of cost-to-benefit for including small customisations one
is used to.</p>
<p>For example, my standard Debian system configuration properties (omitted
from the code above) meant that when I was dropped into an initramfs shell
during my attempts to make an image that could boot itself, I found myself
availed of my custom <a href="https://spwhitton.name/blog/entry/spacecadetrebindings">Space Cadet-inspired keyboard
layout</a>, without really having thought at
any point “let’s do something to ensure I can have my usual layout while I’m
figuring this out.” It was just included along with everything else.</p></li>
<li><p>As compared with the ThinkPad x220, it’s nice how the Raspberry Pi 4b is
silent and doesn’t have any LEDs lit by default once it’s booted. A quirk
of my room is that one plug socket is controlled by a switch right next to
the switch for the ceiling light, so I’ve plugged my monitor into that
outlet. Then when I’ve finished using the new machine I can flick that
switch and the desk becomes completely silent and dark, without actually
having to suspend the machine to RAM, thereby stopping cron jobs, preventing
remote access from the office to fetch uncommitted files, etc.</p></li>
</ul>
<h1><a href="https://spwhitton.name//blog/entry/gnus+notmuch/">gnus+notmuch</a> (2022-07-11)</h1>
<p>I’d like to share some pointers for using Gnus together with notmuch rather
than notmuch together with notmuch’s own Emacs interface, notmuch.el. I set
about this because I recently realised that I had been poorly reimplementing
lots of Gnus features in my init.el, primarily around killing threads and
catching up groups, supported by a number of complex shell scripts. I’ve now
switched over, and I’ve been able to somewhat simplify what’s in my init.el,
and drastically simplify my notmuch configuration outside of Emacs. I’m
always more comfortable with less Unix and more Lisp when it’s feasible.</p>
<ul>
<li><p>The basic settings are <code>gnus-search-default-engines</code> and
<code>gnus-search-notmuch-remove-prefix</code>, explained in <code>(info "(gnus) Searching")</code>,
and an entry for your maildir in <code>gnus-secondary-select-methods</code>, explained
in <code>(info "(gnus) Maildir")</code>. Then you will have <code>G G</code> and <code>G g</code> in the
group buffer to make and save notmuch searches.</p></li>
<li><p>I think it’s important to have something equivalent to
<code>notmuch-saved-searches</code> configured programmatically in your init.el, rather
than interactively adding each saved search to the group buffer. This is
because, as notmuch users know, these saved searches are more like
permanent, virtual inboxes than searches. You can learn how to do this by
looking at how <code>gnus-group-make-search-group</code> calls <code>gnus-group-make-group</code>.
I have some code running in <code>gnus-started-hook</code> which does something like
this for each saved search:</p>
<pre><code class="{.el}"> (if (gnus-group-entry group)
     (gnus-group-set-parameter group 'nnselect-specs ...)
   (gnus-group-make-group ...))
</code></pre>
<p> The idea is that if you update your saved search in your init.el,
rerunning this code will update the entries in the group buffer. An
alternative would be to just kill every nnselect search in the group
buffer each time, and then recreate them. In addition to reading
<code>gnus-group-make-search-group</code>, you can look in <code>~/.newsrc.eld</code> to see the
sort of <code>nnselect-specs</code> group parameters you’ll need your code to
produce.</p>
<p> My generation of saved searches from some variables is quite complicated,
but that’s something I had when I was using notmuch.el, too, so perhaps
I’ll describe some of the ideas in there in another post.</p></li>
<li><p>You’ll likely want to globally bind a function which starts up Gnus if it’s
not already running and then executes an arbitrary notmuch search. For that
you’ll want <code>(unless (gnus-alive-p) (gnus))</code>, and <strong>not</strong>
<code>(unless (gnus-alive-p) (gnus-no-server))</code>. This is because you need Gnus
to initialise nnmaildir before doing any notmuch searches. Gnus passes
<code>--output=files</code> to notmuch and constructs a summary buffer of results by
selecting mail that it already knows about with those filenames.</p></li>
<li><p>When you’re programmatically generating the list of groups, you might also
want to programmatically generate a topics topology. This is how you do
that:</p>
<pre><code class="{.el}"> (with-current-buffer gnus-group-buffer
   (gnus-topic-mode 0)
   (setq gnus-topic-alist nil gnus-topic-topology nil)
   ;; Now push to those two variables. You can also use
   ;; `gnus-topic-move-matching' to move nnmaildir groups into, e.g.,
   ;; "misc".
   (gnus-topic-mode 1)
   (gnus-group-list-groups))
</code></pre>
<p> If you do this in <code>gnus-started-hook</code>, the values for those variables Gnus
saves into <code>~/.newsrc.eld</code> are completely irrelevant and do not need
backing up/syncing.</p></li>
<li><p>When you want to use <code>M-g</code> to scan for new mail in a saved search, you’ll
need to have Gnus also rescan your nnmaildir inbox, else it won’t know about
the filenames returned by notmuch and the messages won’t appear. This is
similar to the <code>gnus</code> vs. <code>gnus-no-server</code> issue above. I’m using <code>:before</code>
advice to <code>gnus-request-group-scan</code> to scan my nnmaildir inbox each time any
nnselect group is to be scanned.</p></li>
<li><p>If you are used to linking to mail from Org-mode buffers, the existing
support for creating links works fine, and the standard <code>gnus:</code> links
already contain the Message-ID. But you’ll probably want opening the link
to perform a notmuch search for id:foo rather than trying to use Gnus’s own
jump-to-Message-ID code. You can do this using <code>:around</code> or <code>:override</code>
advice for <code>org-gnus-follow-link</code>: look at
<code>gnus-group-read-ephemeral-search-group</code> to do the search, and then call
<code>gnus-summary-goto-article</code>.</p></li>
</ul>
<p>I don’t think that the above is especially hacky, and don’t expect changes to
Gnus to break any of it. Implementing the above for your own notmuch setup
should get you something close enough to notmuch.el that you can take
advantage of Gnus’ unique features without giving up too much of notmuch’s
special features. However, it’s quite a bit of work, and you need to be good
at Emacs Lisp. I’d suggest reading lots of the Gnus manual and determining
for sure that you’ll benefit from what it can do before considering switching
away from notmuch.el.</p>
<p>Reading through the Gnus manual, it’s been amazing to observe the extent to
which I’d been trying to recreate Gnus in my init.el, quite oblivious that
everything was already implemented for me so close to hand. Moreover, I used
Gnus ten years ago when I was new to Emacs, so I should have known! I think
that back then I didn’t really understand the idea that Gnus for mail is about
reading mail like news, and so I didn’t use any of the features that more
recently I’ve been unknowingly reimplementing.</p>
<h1><a href="https://spwhitton.name//blog/entry/lispreading/">lispreading</a> (2022-05-08)</h1>
<p>I recently <a href="https://spwhitton.name//blog/entry/consfigurator_1.0.0/">released Consfigurator 1.0.0</a> and
I’m now returning to my Common Lisp reading. Building Consfigurator involved
the ad hoc development of a cross between a Haskell-style functional DSL and a
Lisp-style macro DSL. I am hoping that it will be easier to retain lessons
about building these DSLs more systematically, and making better use of
macros, by finishing my studying of macrology books and papers only after
having completed the ad hoc DSL. Here’s my current list:</p>
<ul>
<li><p>Finishing off <em>On Lisp</em> and <em>Let Over Lambda</em>.</p></li>
<li><p>Richard C. Waters. 1993. “Macroexpand-All: an example of a simple lisp code
walker.” In <em>Newsletter ACM SIGPLAN Lisp Pointers 6 (1)</em>.</p></li>
<li><p><a href="http://christophe.rhodes.io/notes/blog/posts/2014/naive_vs_proper_code-walking/">Naive vs. proper code-walking</a>.</p></li>
<li><p>Michael Raskin. 2017. “Writing a best-effort portable code walker in
Common Lisp.” In <em>Proceedings of 10th European Lisp Symposium (ELS2017)</em>.</p></li>
<li><p>Culpepper et al. 2019. “From Macros to DSLs: The Evolution of Racket”.
In <em>Summit on Advances in Programming Languages (SNAPL 2019)</em>.</p></li>
</ul>
<p>One thing that I would like to understand better is the place of code walking
in macro programming. The Raskin paper explains that it is not possible to
write a fully correct code walker in ANSI CL. <a href="https://spwhitton.name/doc/consfigurator/pitfalls.html#code-walking-limitations">Consfigurator currently uses
Raskin’s best-effort portable code
walker</a>.
<em>Common Lisp: The Language 2</em> includes a few additional functions which didn’t
make it into the ANSI standard that would make it possible to write a fully
correct code walker, and most implementations of CL provide them under one
name or another. So one possibility is to write a code walker in terms of
ANSI CL + those few additional functions, and then use a portability layer to
get access to those functions on different implementations
(e.g. <a href="https://github.com/Zulu-Inuoe/trivial-cltl2">trivial-cltl2</a>).</p>
<p>However, <em>On Lisp</em> and <em>Let Over Lambda</em>, the two most substantive texts
on CL macrology, both explicitly put code walking out-of-scope. I am led to
wonder: does the Zen of Common Lisp-style macrology involve doing without code
walking? One key idea with macros is to productively blur the distinction
between designing languages and writing code in those languages. If your
macros require code walking, have you perhaps ended up too far to the side of
designing whole languages? Should you perhaps rework things so as not to
require the code walking? Then it would matter less that those parts of CLtL2
didn’t make it into ANSI. Graham notes in ch. 17 of <em>On Lisp</em> that read
macros are technically more powerful than defmacro because they can do
everything that defmacro can and more. But it would be a similar sort of
mistake to conclude that Lisp is about read macros rather than defmacro.</p>
<p>There might be some connection between arguments for and against avoiding code
walking in macro programming and the maintenance of homoiconicity. One
extant CL code walker, hu.dwim.walker, works by converting back and forth
between conses and CLOS objects (Raskin’s best-effort code walker has a more
minimal interface), and hygienic macro systems in Scheme similarly trade away
homoiconicity for additional metadata (one Lisp programmer I know says this is
an important sense in which Scheme could be considered not a Lisp). Perhaps
arguments against involving much code walking in macro programming are
equivalent to arguments against Racket’s idea of language-oriented
programming. When Racket’s designers say that Racket’s macro system is “more
powerful” than CL’s, they would be right in the sense that the system can do
all that defmacro can do and more, but wrong if indeed the activity of macro
programming is more powerful when kept further away from language design.
Anyway, these are some hypotheses I am hoping to develop some more concrete
ideas about in my reading.</p>
<h1><a href="https://spwhitton.name//blog/entry/for-bullseye/">for-bullseye</a> (2022-05-03)</h1>
<p>Consfigurator has long had the combinators OS:TYPECASE and OS:ETYPECASE to
conditionalise on a host’s operating system. For example:</p>
<pre><code>(os:etypecase
  (debian-stable (apt:installed-backport "notmuch"))
  (debian-unstable (apt:installed "notmuch")))
</code></pre>
<p>You can’t distinguish between stable releases of Debian like this, however,
because while that information is known, it’s not represented at the level of
types. You can manually conditionalise on Debian suite using something like
this:</p>
<pre><code>(defpropspec notmuch-installed :posix ()
  (switch ((os:debian-suite (get-hostattrs-car :os)) :test #'string=)
    ("bullseye" '(apt:installed-backport "notmuch"))
    (t '(apt:installed "notmuch"))))
</code></pre>
<p>but that means stepping outside of Consfigurator’s DSL, which has various
disadvantages, such as a reduction in readability. So today I’ve added some
new combinators, so that you can say</p>
<pre><code>(os:debian-suite-case
  ("bullseye" (apt:installed-backport "notmuch"))
  (t (apt:installed "notmuch")))
</code></pre>
<p>For my own use I came up with this additional simple wrapper:</p>
<pre><code>(defmacro for-bullseye (atomic)
  `(os:debian-suite-case
     ("buster")
     ("bullseye" ,atomic)
     ;; Check the property is actually unapplicable.
     ,@(and (get (car atomic) 'punapply) `((t (unapplied ,atomic))))))
</code></pre>
<p>So now I can say</p>
<pre><code>(for-bullseye (apt:pinned '("elpa-org-roam") '(os:debian-unstable) 900))
</code></pre>
<p>which is a succinct expression of the following: “on bullseye, pin
elpa-org-roam to sid with priority 900, drop the pin when we upgrade the
machine to bookworm, and don’t do anything at all if the machine is still on
buster”.</p>
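<p>To see how the macro achieves that, here is roughly what the form above
expands to, assuming APT:PINNED is a property which records an unapply
routine under PUNAPPLY:</p>

```lisp
(os:debian-suite-case
  ;; empty buster clause: do nothing at all on buster
  ("buster")
  ("bullseye" (apt:pinned '("elpa-org-roam") '(os:debian-unstable) 900))
  ;; on any later suite, actively drop the pin
  (t (unapplied (apt:pinned '("elpa-org-roam") '(os:debian-unstable) 900))))
```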
<p>As a consequence of my doing Debian development but running Debian stable
everywhere, I accumulate a number of tweaks like this one over the course of
each Debian stable release. In the past I’ve gone through and deleted them
all when it’s time to upgrade to the next release, but then I’ve had to add
properties to undo changes made for the last stable release, and write
comments saying why those are there and when they can be safely removed, which
is tedious and verbose. This new combinator is cleaner.</p>
<h1><a href="https://spwhitton.name//blog/entry/consfigurator_1.0.0/">consfigurator 1.0.0</a> (2022-04-30)</h1>
<p>I am pleased to announce Consfigurator 1.0.0.</p>
<p>Reaching version 1.0.0 signifies that we will try to avoid API breaks.
You should be able to use Consfigurator to manage production systems.</p>
<p>You can find the source at https://git.spwhitton.name/consfigurator for
browsing online or git cloning.</p>
<p>Releases are made by publishing signed git tags to that repository. The
tag for this release is named ‘v1.0.0’, and is signed by me.</p>
<p>On Debian/etc. systems, apt-get install cl-consfigurator</p>
<p>-8<-</p>
<p>Consfigurator is a system for declarative configuration management using
Common Lisp. You can use it to configure hosts as root, deploy services
as unprivileged users, build and deploy containers, install operating
systems, produce disc images, and more. Some key advantages:</p>
<ul>
<li><p>Apply configuration by transparently starting up another Lisp image
on the machine to be configured, so that you can use the full power
of Common Lisp to inspect and control the host.</p></li>
<li><p>Also define properties of hosts in a more restricted language, that
of :POSIX properties, to configure machines, containers and user
accounts where you can’t install Lisp. These properties can be
applied using just an SSH or serial connection, but they can also be
applied by remote Lisp images, enabling code reuse.</p></li>
<li><p>Flexibly chain and nest methods of connecting to hosts. For example,
you could have Consfigurator SSH to a host, sudo to root, start up
Lisp, use the setns(2) system call to enter a Linux container, and
then deploy a service. Secrets, and other prerequisite data, are
properly passed along.</p></li>
<li><p>Combine declarative semantics for defining hosts and services with a
multiparadigmatic general-purpose programming language that won’t get
in your way.</p></li>
</ul>
<p>Declarative configuration management systems like Consfigurator and
Propellor share a number of goals with projects like the GNU Guix System
and NixOS. However, tools like Consfigurator and Propellor try to layer
the power of declarative and reproducible configuration semantics on top
of traditional, battle-tested UNIX system administration infrastructure
like distro package managers, package archives and daemon configuration
mechanisms, rather than seeking to replace any of those. Let’s get as
much as we can out of all that existing distro policy-compliant work!</p>