Diátaxis and LLM-assisted documentation

why should i even, i could be watching pink flamingos

I absolutely would not want to get in the way of you doing that: I missed an all-night session of John Waters over 30 years ago and am still full of regret. Go, with my blessing.

ok i’m back now

Did you enjoy it?

yes

That’s good! Although it’s not a surprise, because everyone enjoys Pink Flamingos.

And, similarly, everyone is quite spectacularly confused about what documentation should look like in a world where obedient LLM servants are churning out text on demand. In particular: which bits should people still do, and what should be left to the machines?

i too am confused about this. can you enlighten me?

I’ll do my best! But this will involve Concepts.

what concepts will you be inflicting upon me?

It’ll be all about Diataxis, a way of classifying documentation into four types – see https://diataxis.fr/ for a definition if you haven’t come across it already. It’s not a very new idea – it has been evolving since maybe 2014 – but it’s already gained many adherents, and I think it’s about to get something of a boost. This is because we all realise that having LLMs to help with serving up documentation means we can ditch a lot of the existing static content. But it’s hard to figure out what to ditch – what still needs human involvement? And it turns out that the Diataxis split can help us with that.

how does knowing about Diataxis help?

The Diataxis split helps with a crucial question about where to use LLMs: the degree to which a given piece of documentation depends upon being written from a particular human viewpoint. The less you need to know about the human reading the documentation, the greater the role LLMs can play. LLMs are magical at churning through text, but they don’t know who you are.

so what, exactly, is diataxis?

Diataxis splits documentation across two axes: (1) acquiring knowledge vs applying knowledge (2) thinking about things (cognition) vs doing things (action).

This results in the following 4 categories:

  • tutorials: acquiring, action: you follow lesson steps to gain introductory knowledge
  • explanations: acquiring, cognition: you read to gain in-depth context
  • how-to guides: applying, action: you follow process-specific steps to achieve some goal
  • reference: applying, cognition: you read to access context-free details
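To make the split concrete, here’s how the four categories might map onto the docs tree of a hypothetical command-line tool (the tool and the file names are invented purely for illustration):

```
docs/
  tutorials/getting-started.md        <- acquiring + action
  how-to/rotate-api-keys.md           <- applying + action
  reference/cli-options.md            <- applying + cognition
  explanation/why-we-chose-sqlite.md  <- acquiring + cognition
```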

whatever. so what’s the big question?

It is: how does the Diataxis split interact with LLMs, and the related human viewpoint criterion?

get on with it, i’m getting bored

Let’s start with an easy win: for one of the four categories of Diataxis, there is no human viewpoint, and so no need for any persistent documentation. This is the reference category – a reference simply presents all the information, without any assumption about what you will want to do with it. In a world where LLMs assist in serving up documentation, references effectively cease to exist in any permanent form. You will simply ask an LLM pointed at your codebase and config to serve up the information dynamically instead. Human input will be minimal – perhaps a standard prompt that enables the LLM to discover the information efficiently.

Tutorials and how-to guides are next in terms of how much LLMs can do the heavy lifting – these provide sets of actions to follow for people who either want to learn something from scratch (tutorials) or to achieve some specific end (how-to guides). Right at the start, the goals and their consequent series of actions need to be selected by humans. But whether each individual action is correctly described can be validated and kept up to date by LLMs – decent ones can probably write many of the actions for you too, particularly if you have a well-structured code base and some decent explanatory material where required.

Ultimately it’s only explanations that remain predominantly human-curated, although one can (and should) use LLMs to do fact-checking on them. This is because the whole purpose of an explanation is to provide deep context: knowledge that can’t be gleaned from inspecting the codebase, because it depends on what human beings think is important.

so is diataxis the only way i should think about docs?

No. Documentation is ultimately about using writing to convey information. The conceptual world of writing prose is … large … no one model will capture all of that complexity. We shouldn’t expect Diataxis to have all the answers.

oh blah blah blah i am an engineer, give me a concrete example rather than waffling like a gcse english teacher who went to the pub at lunchtime

Here’s a specific example of how Diataxis falls short: you have to work to avoid blurring and mixing between “how-to guides” and “explanations”.

Why does this point to a problem with Diataxis? First of all, let’s establish that it’s a genuine issue.

Understanding some area of knowledge isn’t a binary yes-or-no thing. Some of us get the whole lot, some of us have a more patchy understanding. But any “how-to guide” in the Diataxis framework should assume that the reader already understands all the relevant concepts involved in carrying it out.

Bridging this gap – between the binary knowledge assumption of Diataxis-style “how-to guides” and the more nuanced levels of understanding present in the real world – gives rise to cross-talk between “how-to guides” and “explanations”. Practically speaking, you often end up with bits of explanations mixed into the “how-to guides” in an attempt to make them accessible to as broad an audience as possible.

Makes sense, right?

so how does diataxis deal with that?

It doesn’t! The Diataxis site instead spends a fair bit of time dealing with two other distinctions that I personally find far less tricky to deal with: tutorials vs how-to guides and reference vs explanation (see https://diataxis.fr/theory/).

That the Diataxis site is preoccupied with those two other distinctions makes sense in terms of the Diataxis set-up. Each of those two pairs differs in only one of the two Diataxis split points, so you would expect some confusion at the boundary. By contrast, “how-to guides” and “explanations” are diametric opposites in the Diataxis taxonomy – they aren’t supposed to get confused, at least according to the Diataxis split!

so is diataxis wrong then?

No, all this just means that Diataxis isn’t perfect. And very few things in this world are perfect. Your child’s first smile, sure. The night sky in the Sahara, perhaps. The glint in Divine’s eye in that scene from Pink Flamingos, absolutely. But most of the world falls short of such exacting standards. For documenting information technology tools, Diataxis still empirically works very well! It’s interesting that there is some signal that its theoretical underpinning has some deficiencies, but that’s no reason not to go with Diataxis – it seems to be a productive way of channelling documentation effort.

The cross-talk between “how-to guides” and “explanations” points to some kind of problem in the way Diataxis attempts to cleave reality: a misalignment between the binary “you know it or you don’t” assumption of “how-to guides” and the actual nature of understanding for complex systems that “explanations” attempt to deal with. This problem will become worse as the system being described becomes more complex, covering a wider variety of more richly interrelated conceptual setups. There’s maybe something missing in the Diataxis system about how that can be managed.

“AI” and all that

Like many software developers, I’ve recently been puzzling out my attitude towards this “AI” stuff, by which I mean Large Language Models and all the associated tooling. 6 months ago (mid-2025) I was still a sceptic about every aspect of “AI”. And where creative endeavour is concerned, I’m still just as sceptical as I was before; see the end of this essay for more on that. I’m not even going to use the term “AI” outside quotes in this essay, because what we have at the moment is not “Artificial Intelligence”: it’s better described as “extremely sophisticated pattern matching”.

But as far as coding goes? I was flat-out wrong. In that particular domain, the new agentic tools (Claude etc), are – when properly guided – tremendously effective. I no longer have any doubt that they will have a fundamental, permanent impact on the software development industry.

Some terminology for what follows:

  • Large Language Models (LLMs): the underlying approach. Imagine something that has been trained on a vast corpus of text. The model is unchanging, but it has a “context window” that you can ask it questions in, and it will answer. This may be very large – millions of words – but it will eventually end. There is no consciousness, and no permanent learning.
  • Agentic Code Generation (ACG): the use of LLMs, with additional scaffolding, to create computer code.

What does ACG change about software development?

The gist: if you already know what you want the program to do, ACG can often write it for you, or at least produce a very creditable and time-saving first draft. This is very significant: while it doesn’t eliminate software development as a role, ACG can shift where output is constrained: the constraints move from code generation to other parts of the process – idea generation, code review. Where ACG has this effect, developers who can engage effectively with this fundamentally different structure of constraints will become more economically valuable than ever, since their overall output will be a multiple of what it was before (2x? 3x? 10x? It depends on the field) – and developers who can’t so engage will have an increasingly difficult time of it.

As a guide to what work will get swallowed up: as of today, if it can be done by a smart intern with access to Google/StackOverflow/etc., it’s a good potential fit for ACG. But the sea is rising: in the future this will extend to beginner professional programmers and beyond.

How high will the sea rise? The mainstream of current agentic coding works at an individual level. It replaces interns and is nudging at higher levels. What happens if you structure the agents in a more sophisticated fashion: could you replace an entire programming team, getting up to 100x or more? Steve Yegge’s Welcome to Gas Town is one vision of the future.

What will ACG-enhanced software development look like?

It’s tempting to treat the introduction of ACG into software development as just another iteration of a very familiar story: ascension through successive levels of abstraction. For instance, everyone’s used to languages doing more for you than they used to, e.g. shifting from assembler to C to C++. And no-one writes their own messaging system from scratch anymore (well, hardly anyone) – we have open-source libraries and applications for that.

But it’s a different story this time – an older one – the replacement of unruly, messy humans by biddable, predictable automata. I’ve seen several failed attempts at making this happen in software development over the years; but it’s finally happening now: if not completely, at least to a significant, unprecedented extent. ACG will take over a substantial part of the overall activity of code generation in many areas, while human attention will become increasingly concentrated on other concerns: direction, coordination and verification.

This reallocation of human attention has a wide range of beneficial effects: it doesn’t just make it quicker to write exactly the same computer programs that we would have written anyway. Detailed root cause analysis of weird glitches, hangs and crashes is suddenly way easier to kick off. There’s a far lower cost of experimentation (and a lower level of counter-productive emotional investment in the resulting prototypes). There’s also a lower cost to scratching minor itches – all those small fixes and improvements no-one normally gets around to, because you’d have to spend a couple of hours understanding the surrounding code in order to figure out exactly what to tweak, and who has the time?

Less positively, there is also the spectre of vibe-coding hell: people getting ACG to spit out code that makes it into production without due scrutiny of security holes, or without sufficient effort made to prune and refine changes to avoid the codebase filling up with slop. Agentic code generation has the potential for enabling those with a shaky grasp on the principles behind sustainable software development to create a huge mess far quicker than they ever could before.

It’s too early to say exactly how this will all pan out. There will be great success and wince-inducing disasters. But one thing is clear, to me at least: any software developer who doesn’t develop an understanding of how to get value out of ACG is likely to have a tough time over the next few years.

What factors will influence the effectiveness of ACG?

While I’m seeing a multiplier when using ACG for my own work, this clearly is not true for everyone. Here are some factors that will affect how effective ACG can be for you:

  • tooling quality: how good are the prompts and the agentic structure? Ideally your ACG tooling can generate tests and verify functionality is correctly implemented.
  • language: strongly typed languages (Rust, or functional languages such as the MLs and Haskell) are likely to do particularly well out of ACG, which can leverage the structure offered by their type systems.
  • process quality: just because you have a robot code generator, all the other aspects of software development don’t magically vanish. There is still a need for diligent reviews, version management, design sensibility, effective requirement gathering, and all the rest.
  • domain width and stability: if you spend all your time on a relatively well-defined code base that you have a deep understanding of, then you may find ACG has less to offer you. If your work ranges across a very wide range of rapidly-mutating technologies and code bases, the reverse will be true.

If you have all these factors working in favour of ACG, then large multipliers are achievable. But if they’re all working against you – if you’re using poor-quality tooling to blast in a load of code that has inconsistent idioms (say, a random selection of C++ styles from the last 30 years), without reviewing it properly, in a modelling library that’s your main product and which you really should have a deeper understanding of than you do – then ACG will almost certainly seriously impair your overall output.

Prediction 1: in the immediate term, ACG won’t be for everyone – even some good development outfits will be a poor match (e.g. good tools and process (+ACG), but also lots of acquired knowledge over a well-bounded, stable domain, and a bad match on language (-ACG)).

Prediction 2: in the longer term, ACG will drive a shift towards languages that work well with it. Examples of languages that may come under pressure: (1) C++, where the idioms have shifted a lot over the last 30 years – how well can ACG cope with that? And ACG will make Rust easier to write as a replacement… (2) Dynamic languages such as Python: here ACG assistance can potentially tip the balance in favour of more strongly-typed contenders.

Prediction 3: across all domains and time horizons, the boundary of ACG effectiveness will expand as the quality of tooling improves. And it’s not just about making agents better; people are working on the levels above individual agents: running them in loops, running teams of agents with different roles, writing software to generate the controls around running teams of agents with different roles – you get the idea. The sea is rising.

Prediction 4: most business software – maybe as much as 90% of it – does not have an extended period of development after its creation. It’s made to fill a commercial niche, run as a cash cow, and retired when it no longer pays its way. This is a world that a lot of relatively “elite” programmers (FAANG, investment banks, hedge funds, what have you) have little experience of. Who cares whether such software is full of slop? I think there’s a decent chance that pretty much all of that software is going to be written by agents pretty soon. That’s a whole lot of software. One way of viewing this is as a culmination of the process of offshoring.

It’s not just software development …

The rise of LLMs is going to disrupt – even replace – plenty of other jobs. For example, until recently there was a useful living to be made by linguists, doing legal patent searches in foreign languages. Nowadays, Google Translate (or similar) will work well enough, and the humans are no longer in the running.

A more nuanced example: as of December 2025, there’s some heated debate going on about the degree to which LLMs might be able to facilitate – or replace – the work of human indexers of books. The American Society of Indexers say LLMs are useless. Here, my guess is that agentic assistance could be a net win, but that human intervention will still be necessary to reliably achieve a result that is useful to other humans. My intuition here comes primarily from the related field of topic modelling, which has had a lot of research (predating LLMs) and where the fashion in the research community on how to assess the quality of output has oscillated between programs (consistent, but veer off in weird directions) and humans (who do at least make sense to other humans, but often don’t agree with each other on what the topic categorisation should be).

.… but the creatives are not getting replaced any time soon

The scope of current technology to substitute for humans is still limited: Large Language Models are not about to somehow replace creativity, whether literary, musical, visual, or of any other kind. “AI” boosters overclaim in this area, which is why I was so sceptical about all their other claims (including those about ACG) until recently. Writing useful code is not the same as creating art! You can verify whether code is doing the right thing by running tests!

Why is code different from Art? There are “right answers” for many coding problems – if not a single answer, then a relatively small number of well-defined solutions, with an understood range of tradeoffs between them. So using ACG to write programs is targeting a far simpler space of potential solutions than the wider, more ambiguous realms of Art. While there is some creativity associated with the act of code generation, it is subservient to the only really important creativity-oriented question about the code, which is: why are you writing this code in the first place? What is its meaning to humans? And LLMs can’t answer that.

Will algorithmic systems ever achieve some element of artistic creativity? Maybe – but if so, it will need different technology from the current language models, which are not robust to further major re-training after their initial creation. We currently get around this limitation by providing a large “context window” of interaction with the model. But when you exhaust the capacity of the context window, you’re done, and the model itself doesn’t learn a thing. Show me a model that can robustly incorporate feedback about how its actions are affecting the world, and then I’ll be interested: but a context window is no replacement for that ability, no matter how wide it is.

The illusion that ChatGPT and its relatives have a mind can be very compelling: people are easily misled by the context window into thinking they are talking to something with a mind of its own. But you can stick a pair of googly eyes on a rock, and throw your voice, and get the same effect. We humans are hard-wired to impute intentionality; this has probably been very useful when dodging predators in the distant past, but serves us poorly in this case.

TL;DR:

  • ACG has seriously high potential value. If you’re a developer, I strongly advise spending some time engaging with it in 2026, if you haven’t started doing so already. ACG isn’t a magic bullet and may not fit your domain. But you should figure that out for yourself, rather than either accepting ACG uncritically, or dismissing it without thought.
  • There are plenty of other fields where Large Language Model technology will supplant humans – typically where there is a large rote element to the work, with little scope for imagination.
  • If someone tells you that current “AI technology” can replace human creativity and insight, you should smile politely, and count your silverware once they have left the room. For a more frank, if impolite, take on some of the current hucksters in this sphere, read I am an AI Hater.
  • It’s possible that we might – off the back of different technology – develop true AI: algorithmic creations that can learn based on feedback from the real world, and use their learning to do useful things. I am less sceptical about that possibility than I was five years ago. This is not necessarily a good thing: read Lena for a well-written take on why.

Another Stevie Smith Poem

Autumn

He told his life story to Mrs. Courtly
Who was a widow. ‘Let us get married shortly’,
He said. ‘I am no longer passionate,
But we can have some conversation before it is too late.’

Reading (and Understanding) Derek Parfit

Why You Should Read Derek Parfit

Derek Parfit has been justly described as “the most famous philosopher most people have never heard of.” If you’re into moral philosophy, he’s a must-read. And despite a decades-long academic career with no formal teaching commitments, Parfit has a relatively manageable publication record: you can cover the essentials in two books – Reasons and Persons (1984) and On What Matters (2011).

Why is Parfit a must-read? Well, that’s worth an entire extra post – but it’s essentially because he has come up with a lot of very interesting ideas:

  • In Reasons and Persons he comes up with completely fresh arguments about personal identity, and links them to arguments about how to balance self-interest and more impartial moral theories.
  • There is then a cracking section on how to balance the interests of future people, including the Repugnant Conclusion, which is well worth your time.
  • In On What Matters, Parfit sets out his “Triple Theory” in Volume 1, which combines Kantian deontology, consequentialism and contractualism – three separate moral traditions that are supposed to be incompatible.
  • In Volume 2 he starts with comments on this theory by four other philosophers, and then sets out his response to each of them – a mixture of deeply educational content and knockabout fun.
  • There then follows a variety of sections on Parfit’s approach to ethics, including his take on Nietzsche, which I have not yet read and am very much looking forward to.

Whether you agree with Parfit or not, this is truly mind-expanding material, written by someone who sets the weather. If you want to get a better handle on moral philosophy, I cannot recommend having a take on Parfit highly enough.

And not only is Parfit a must-read, but his style is accessible to a layperson. Just as well: although I’ve had an interest in ethics since I was a teen, it’s very much a hobby: I’m a mathematician and engineer, not a philosopher. Parfit alternates between “thought experiments” – short, pared-down descriptions of moral dilemmas that provoke intuitive responses – and dense argument and reasoning that explores the implications of these responses. I personally find this an effective way to guide people through the terrain that Parfit is attempting to map out.

Why You Might Not Want To Read Derek Parfit

So a must-read, and accessible. There’s got to be a catch, right? Otherwise we’d be ending the show right here. And, yes, you’re right. To start with, the books aren’t exactly quick reads. Reasons and Persons has around 400 non-appendix pages, and On What Matters has around 1,000. Even if there’s some filler in there, that’s a lot to get into your head.

More importantly, I need to fess up that not everyone is a fan of Parfit’s style. The kinds of thought experiment so beloved by Parfit are intended by him to clarify the essence of various moral choices. But to other eyes, these experiments can appear reductive – even deceptive – narrowing down the complexity of real-world context to instead present an artificially stark choice between two unrealistic alternatives. These reservations are exceptionally well laid out by one of the philosophers responding to Parfit in On What Matters (Volume 2), where Allen Wood goes on an extended rant about just how little respect he has for the thought-experiment approach (which he denigrates as “trolley problems”). I have genuine sympathy with Wood, because even before I started reading Parfit, I had developed the same reservations about a famous thought experiment in another area of philosophy (Searle’s “Chinese Room“) – although Wood puts the reservations far better than I’m capable of. Parfit deals with Wood’s rant in his response by ignoring it completely: make of that what you will.

Wood is, at least, polite – if occasionally withering. It can get far worse. Stephen Mulhall’s LRB review of a biography of Parfit (“Non-Identity Crisis“) goes considerably further – frankly, Mulhall verges on the bitchy, concluding that the biography “presents its subject as an epigram on our present philosophical age – a compact, compellingly lucid expression of its own confusions and derangements.” It was reading that review that made me decide I should make a determined effort to understand Parfit: anyone responsible for this much vitriol was surely worth a go?

You have been warned. You may, like me, find Parfit well worth persisting with. But you may find him incredibly irritating and wrong-headed. See how you get on with Reasons and Persons before buying On What Matters!

How To Read Derek Parfit

Let’s assume that you will get on with Parfit’s style, or at least be able to tolerate it sufficiently: even if so, there’s still an awful lot to cover. I’m used to being able to speed-read through a lot of very complex material, in a variety of fields, and retain the gist as I power on through: I am very good at doing this. There are lots of things I’m crap at, but my goodness I’m good at ploughing through a shedload of complex material. But when I tried reading through Reasons and Persons, I hit the buffers a quarter of the way in; there was simply too much to hold in my head. I needed more traction. After some experimentation, I found an approach that has successfully carried me through.

Parfit splits his material out by chapters, which then tend to have multiple numbered sections. The recipe that worked for me is simple: cover material in the following three stages:

  • First, read through, making sure you understand what you’re reading. Feel free to mark up significant passages in pencil, but don’t take detailed notes at this stage.
  • Then, at least a day after reading, make detailed section-level written notes. Elide detail that may have been useful as scaffolding but that you won’t need once you’ve got the overall structure of the reasoning in your head. I ended up with something like 1 page of A4 per 10-15 pages of original text – the optimal compression ratio depends on what you’re trying to summarise.
  • Then, at least a day after making the written notes, make word-processed chapter-level summaries. You should be able to do this solely by referring to your written notes. Aim to sum up each chapter in a single side of A4, although some chapters will require more – you’ll know those when you see them.

This can all take place simultaneously – i.e. you’ll be covering the material in three waves, with reading furthest on, section-level written notes behind that, and word-processed chapter-level summaries further back again.

How long to leave between stages is up to you, but I would recommend a day at the absolute minimum – I would say a few days is probably optimal for reading-to-notes, and notes-to-summaries could be left for longer if convenient.

And how long will all this take? Well, I embarked on my Parfit binge when I was on gardening leave. So I had a lot of free time – but I did find there was a limit to how quickly I could absorb the material, even so. I had to take breaks to do things that were not moral philosophy, to allow the concepts to settle into my brain. Reasons and Persons took me a month. Volume 1 of On What Matters took another month. The first 2 sections of Volume 2 (comments and responses) took another fortnight. If you can do better than me, I take my hat off to you!

Another person’s experience: the “Only a Game” blog took 4 months to cover Volumes 1 and 2 of On What Matters, which feels in the right ballpark for someone who has the cognitive load of a job to handle at the same time – read their take here. They reckon the best bit of On What Matters is the second half of Volume 2, which is up next – hopefully, I have a treat in store!

Why I’m Bothering To Read Derek Parfit

I think Derek Parfit is a good person with extremely interesting things to say. I am not, by instinct, congruent with Parfit’s general inclinations: I am naturally deeply suspicious of people who attempt to systematize and bring disparate streams of thought under a single guiding set of principles.

And Parfit is a systematizer of exactly the kind that normally raises my hackles – but he has good reasons for being so that I can respect. He is trying to bring moral philosophy together because he is horrified by the prospect of conflicting ethical philosophies resulting in nihilism. His potentially-off-putting tendency to systematize is, for me, offset by his warm respect for the importance of humanity, and an understanding of the reality and importance of the personal and partial motivations that we all have, and which make life worth living. Time and again, I’ll read a passage that makes me think “I love you, Derek Parfit”. For me, Parfit’s ultimate saving grace is his passionate belief in dignity – that no matter how badly someone has behaved, no matter how badly their personal moral compass is skewed, they are still worthy of consideration and so do not deserve to suffer.

Parfit is no fan of the Christian Hell: good for him.

Cognitive Empathy and Software Development

(TL;DR: software is a social endeavour, and if you can’t figure out some way to engage with that fact, then that fact will engage with you nonetheless – and things are unlikely to go well for you from that point)

TO HIGH-FUNCTIONING NEUROTYPICALS:

PLEASE STOP READING NOW

You know who you are: the life and soul of the party, charming everyone you meet. I like you, honestly I do! Although this will not be a surprise, because everybody likes you. But reading any further will be a waste of your time, since you already understand everything I’m about to lay out so painstakingly, without even having to think about it.

(Also, I’m going to be bitchy about you at one point in what’s coming, and you may find that unpleasant)

The rest of us will wait for you to leave.

Could you close the door on your way out? There’s a bit of a draft. Thanks.

LET’S TALK ABOUT EMPATHY

Are the rest of us sitting comfortably? Then I’ll begin.

There are some incredibly useful ideas about empathy that no-one ever talks about. Why?

Because ideas only get talked about if they’re in the mainstream. And the mainstream is full of people who can find their way around The Land Of Empathy just fine: no map required. So the kind of talk about empathy that we’re about to have is in the “blind spot” of our culture.

LET’S TALK ABOUT COGNITIVE EMPATHY

So: to the ideas themselves. The distinction that I’m going to beat you about the head with in what follows is set out on Wikipedia’s Empathy page: it is between Cognitive Empathy (“Empathy of the Head”) and Affective Empathy (“Empathy of the Heart”).

Affective Empathy is common-or-garden human sympathy, or as Wikipedia puts it “the ability to respond with an appropriate emotion to another’s mental states”. If you are incapable of exercising any of this variety of empathy, I’m afraid that we are unlikely to be friends – not that you would care, of course. You may as well stop reading and go back to tearing the wings off flies, or whatever else you were doing before you encountered this essay. Have fun! Please don’t kill me!

Cognitive Empathy is a very different beast: it’s all about how much you get inside other people’s heads (Wikipedia: “the ability to understand another’s perspective or mental state”). Unlike Affective Empathy, this doesn’t carry any intrinsic moral virtue: you can be a lovely, wonderful person, and absolutely suck at Cognitive Empathy. Note that if you lack innate ability in Cognitive Empathy, there are workarounds – ways to “fake it ’til you make it” – much more so than for a lack of innate Affective Empathy.

Here’s something that I’m often fascinated by: the use of Cognitive Empathy as a weapon. Here’s an example: Dr. Hannibal Lecter. Dr. Lecter is very good at Cognitive Empathy, and unfailingly polite. To someone who is meeting him for the first time, he generally comes across very positively. However, Dr L. leaves something to be desired as a social companion in the longer term, and I would definitely advise against inviting him to your next dinner party.

COGNITIVE EMPATHY IS IMPORTANT

(YES, EVEN IN INFORMATION TECHNOLOGY)

In IT, Affective Empathy is present in normal amounts, but Cognitive Empathy is generally way thinner on the ground. I won’t go into why that is so in this essay. But we all know that IT is packed with people who would far rather focus on technical, analytical framings of issues than get all “touchy-feely” and worry overmuch about the thoughts and feelings of others.

So IT people are terrible on average at Cognitive Empathy, but does that really matter? Does a deficit in Cognitive Empathy harm you in an engineering context?

Here’s a simple motivating example:

You’ve slogged your guts out on project X but have not been given sufficient recognition for your effort. During this time, person Y has risen, seemingly without any justification.

That may sound … familiar? And if you don’t understand why this disconnection between value and reward happened, then you didn’t understand enough about how the work was being evaluated: who was pronouncing on its worth, and what they attached value to. And this kind of situation is where even a smattering of Cognitive Empathy can help tremendously.

By this stage, I imagine I might be annoying some readers. They’ll be thinking: You’re telling me to pander, and be a people-pleaser. But I’m an engineer, not a diplomat! Surely, if I just carry on doing Good Things then I should be recognised? And, if I’m not recognised, surely that means it’s the fault of the company, and that I should go and work somewhere else with a better value system? Right?

To them, I’d like to say this:

PAY ATTENTION, YOU IDIOT

If you don’t engage with what the people around you are thinking, you are at a massive disadvantage compared to anyone else who is paying even the slightest attention to the mental states of others.

There is a romantic myth in IT of the sole coder who has an amazing idea and weaves a magical blanket of abstractions by working 24/7 in their bedroom, unveils it to the world, and receives instant fame and recognition for their genius. Like most myths and archetypes, it’s fatal to take this as an unadulterated guide to living in the real world.

Almost everywhere, and almost all the time, creating software is a social endeavour. Most great ideas come from cross-fertilisation with a wide range of other people – some of whom are likely to think very differently from you – and a lack of appropriate context makes even geniuses appear stupid.

INTRODUCING PERSON X

Person X has spent a long time (10 years or more) working on the same part of the same product, for the same company, without interacting with other teams much, whether internally or externally.

Other team members (whether more extraverted ICs or managers) interface between them and the world outside their team. But Person X prefers to concentrate on their own part of the world. While Person X most certainly adds value, hardly anyone outside their team knows who they are.

PERSON X WILL SUFFER

At some point Person X will find that their local technological landscape is invalidated. This can be through technological advances, company mergers, a new CTO with a vision – the cause is uncertain, but it will happen sooner or later. When this happens, there will be redundancies, and Person X will get the chop.

Why?

Because the decision of who to chop is made by multiple people (managers, client reps, etc) getting together and voting, and only one of these people will know who Person X is, and so they will be outvoted by all the others.

This is not fair. But it happens all the time.

I AM SORRY

I may have come across as a bit of an arsehole so far. Angry. Sorry. But this isn’t an accident; I’m attempting to shock you out of complacency. Here’s the reason I’m so cross – no-one else is going to bother to help you out: they’ve been spending their whole life cruising past your low-cognitive-empathy slum in their high-cognitive-empathy limousines.

And most of the people in the limos don’t really get why you’re in the slum, and the minority who do have an inkling: well, I wouldn’t say that they’re laughing at you through their tinted side windows, not exactly, but they don’t care about you either; at least, not enough to help you out. So you’re stuck with me, doing my grumpy best. Sorry about that. But also: fuck those guys, right?

YOU CAN ESCAPE

If you work in the IT realm and think that you may be suffering from this kind of empathic disadvantage, and would like to make the pain stop, I recommend studying the works of Gerald Weinberg. I didn’t get to meet him in person, which is one of my few major regrets in life: he died in 2018, leaving a legacy of marvellous writing behind. His original classic “The Psychology of Computer Programming” was written in 1971 but is still as relevant as ever.

I have read every non-fiction book that GW wrote and haven’t regretted a single one. He has the technical chops: he was a project manager on Project Mercury. But also he transcended personal tragedy in his family life, and as part of his journey he produced this amazing synthesis of Rogerian psychological thought with the practical process of writing systems. I cannot recommend his writing highly enough.

YOU MAY BE SAD

Please don’t be sad. There is lots of good news here. You can learn Cognitive Empathy. And this isn’t just about career benefits, about helping yourself out. While being able to better comprehend what everyone around you thinks, feels, and wants is very rewarding professionally, it’s also tremendously rewarding in itself: you will be able to help more people and generally do more good in the world.

And on top of all of this – circling back to career benefits – knowledge of Cognitive Empathy will be hugely more durable than all those technology hamster wheels we spend all our time running around. I have lost count of the number of networking technologies I have learnt over the last 30 years – but people? They are just the same now as they were 30 years ago, or the last few hundred, come to that. It’s an incredibly useful bit of technology that is going to be just as relevant when you retire as it is today. You know, like key bindings for vi?

YOU MAY THINK I AM FULL OF SH*T

Well, really. The nerve of you! Still, I’ve done my best. Feel free to ignore me and focus on those sweet, sweet technical/analytical problems that we all love so much. But don’t come running to me when an idiot manager or daft fellow developer who happens to be exercising slightly more Cognitive Empathy than a broken toaster runs rings around you.

But don’t despair if you feel that a self-perceived lack of neurotypicality on your part might make all this too difficult. As I have already implied, I’m not neurotypical either, having a decently-sized helping of Emotional Contagion, with a side order of Face Blindness. Depending on the mix at your workplace, non-neurotypicality might actually make it all easier: check out The Double Empathy Problem to see what I’m talking about.

Excel Tetris

Because you’re worth it.

Written back when it was fashionable to make Excel do strange things.

(Disclaimer: Uses Windows Timers. Probably doesn’t work nowadays. Will spoil your milk, make strong men weep, and turn the weans against you)

Millennials

At some unspecified time in the past, when I was in a managerial/quasi-managerial role in an investment bank (keep it vague, Matt, keep it vague), we had An Occasion when anyone who was managerial/managerial-adjacent was brought into a half-day session organised by HR about how to deal with Millennials. The reason? We were having a lot of problems around retention for graduate Millennial joiners in our most recent year group, and we wanted to stop the rot.

The session covered the familiar talking points that the older generation typically raise about feckless youth. Millennials expect too much on coming into an organisation, when they aren’t ready for the responsibility. They don’t respect hierarchy. They require coddling. They’re snowflakes. And so on, yadda yadda yadda.

But I knew why the Millennials were really leaving. In the problematic year group, a few people had been placed, post-internship, with a part of the organisation that had been deemed non-profitable. They were laid off rather than places being found for them elsewhere. This was a violation of an implicit contract that this and similar organisations had at the time – while we might have a lot of churn and instability, we would not crap on people fresh out of an internship by making them redundant, no matter what. The norm would be that you would reassign them. They’ve been around for a few years? Fine, it’s open season. But less than 12 months after starting full-time work? Nope.

As a consequence of this norm violation, meerkat-like, everyone in that year’s cohort raised their heads, nodded at each other, and a decent fraction (disproportionately selected from those with most moxie and smarts) had changed their employer to be someone other than us before the year was out. They were absolutely right to have done so, and I would have done the same thing myself in their position.

I knew what was really going on because – unlike most of my colleagues in the managerial sphere at that organisation, at that time – I took pains to check in with everyone I felt responsible for on a fortnightly basis.

(Yeah, I know everyone does this weekly nowadays but this was many years ago, and we’re talking 14 people here. I was busy, OK?)

Much to my shame, I didn’t speak up during the session. I nodded along with everyone else, tutted about how terrible the Millennials were, and was working somewhere else myself before the year was out.

270 Playing Card Lampshade

In 2011, Nick Sayers made a roughly spherical design out of 270 playing cards. Well, 265 if you leave a hole to put a light-bulb in – 5 whole decks, plus one joker from each. It’s easy to find pictures of it on the internet, e.g. here and here. It has gaps to allow light through. It has a pleasing mixture of structure and irregularity. It looks fantastic. I wanted to figure out how to make it myself.

The design looks complicated, but it’s not so bad. It’s a Platonic solid in disguise – either a dodecahedron or an icosahedron, depending on what you take to be the centre of a face. Given the angle of the playing-card cuts, it’s pretty much a wash either way, since everything bulges outwards a bit compared to a true Platonic solid.

You can figure out how to fit the cards together from the pictures (whether of Nick Sayers’s build or mine). But before getting started on this, I highly recommend you figure out how to make some Platonic solids first – it’ll give you an idea of how this kind of thing works. See my previous post for more on that.

Assuming you’ve gained some basic idea of how to build the simpler stuff, then here’s the key to building the 270-card behemoth: there is a 9-card motif that you need to repeat 30 times, which is shaped like a diamond. At each of the two sharp points of the diamond, 5 cards come together. At each of the other two points of the diamond, 3 cards come together. Everywhere else, 4 cards come together.
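If you like, you can sanity-check those counts with a few lines of arithmetic. This sketch assumes the underlying solid is the rhombic triacontahedron – the 30-rhombic-faced solid that sits between the dodecahedron and icosahedron, which is consistent with the “diamond” motifs and the 5-card and 3-card meeting points described above:

```python
# Sanity-check the card counts against the rhombic triacontahedron
# (assumed to be the solid underlying the 30-diamond layout).
faces = 30            # one 9-card diamond motif per rhombic face
cards_per_motif = 9
vertices_5 = 12       # sharp diamond points: 5 cards meet (icosahedron-like vertices)
vertices_3 = 20       # blunt diamond points: 3 cards meet (dodecahedron-like vertices)

vertices = vertices_5 + vertices_3
# Each edge connects exactly two vertices, so sum the vertex degrees and halve.
edges = (vertices_5 * 5 + vertices_3 * 3) // 2

assert faces * cards_per_motif == 270   # total cards in the full build
assert vertices - edges + faces == 2    # Euler's formula V - E + F = 2 holds
print(vertices, edges, faces)           # 32 60 30
```

The fact that Euler’s formula comes out right is a nice reassurance that the 12 five-card points and 20 three-card points really do close up into a single sphere-like shape.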

The really interesting bit is exactly where to cut the cards so they slot together in the right way. In my previous post on building Platonic solids, you can see how this kind of thing gets worked out. For the 270-card construction, by all means work out a theoretical starting position (this helped me) but eventually you will need to be prepared to experiment with a pack or so of cards to see what alternatives pan out best given the way the physical material behaves. I recommend using packs of Waddingtons No. 1 cards when experimenting – they are pretty tough and nice and cheap.

(Note that each 9-card motif uses 5 cards with one pattern of cuts, and 4 cards with the mirrored version.)

In terms of actually putting it together, build one 9-card motif off to one side, and refer to it as you make the main build. Start with a 5-vertex and build outwards from there. Once you have all the slots cut, and any other backing applied (see below), it will still take you at least 4 hours to put everything together. There are some points where you can take a rest – basically you’ll need to complete something radially symmetric so it doesn’t start bending too much.

While a finished build looks great, it does have one drawback as a lampshade: playing cards are not designed to let light through – opacity is kind of a major part of their job description, come to think of it. So, while the lampshade does look lovely with a light bulb in the middle, it’s somewhat underpowered as a light source: more of a glowing coal than a blazing fire, let alone a UFO that will eat your mind as you gaze dumbly into its alien projection of Nirvana.

Sorry, where was I?

Ah, yes: improving light output. I did wonder if it might be possible to get more light out by building it with cheaper playing cards, which lack the “core” that makes normal playing cards opaque. But such cards tend to be weak, whether from the lack of the “core” layer or just because the cardstock is thinner. This weakness rules them out: the twisting inherent in the design means that those cheaper cards rip and can’t be used successfully.

My best idea so far: put stick-on chrome mirror vinyl on the parts of the backs of the cards that lie entirely within the construction, the idea being that the light will bounce around until it eventually makes its way out. From the outside, it still looks like normal playing cards – but now, when you chuck light out from the inside, you get more than just a gentle glow: enough light escapes to provide some general illumination of the surrounding space. An encouraging sign is that the colour of the light that escapes is a closer match to the original bulb colour – before, it took on the hue of the playing cards, since so much of it was being absorbed by their surface.

More external light would still be good. The photos below are all I currently get from a 26W corn LED bulb, which puts out 3,000 lumens (around 200W equivalent for an incandescent) so is already quite punchy. I’ll try ramping it up and see how far I can go before setting everything on fire.

As to which playing cards to use? The main problem is potential tearing where the cards meet – once the whole construction is made you’re fine, but while putting things together you can end up with quite a lot of stress on the cards at some stages. I prototyped with Waddingtons No. 1 bridge cards, only £1.40 a pack, and they stood up to the twisting forces involved remarkably well. The final build uses Copag Jumbo Index Poker Cards, which are more like £10 a pack in the UK: about as resilient as you are going to get. But to be honest, they didn’t really deal with the forces much better than the Waddingtons did – although I did dial up the amount of card twisting to the maximum level I thought I could get away with.

You handsome devil
The interior, all lovely and shiny
If you want to do this yourself, I recommend you cut out the vinyl using a Silhouette/Cameo machine. One A4 sheet should give you 15 shapes – so 2 10-packs will suffice for one build.
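The sheet count is easy to double-check. This assumes one vinyl shape per card and the 15-shapes-per-A4-sheet yield mentioned above:

```python
import math

total_cards = 270        # cards in the full build
shapes_per_sheet = 15    # vinyl shapes cut from one A4 sheet
sheets_per_pack = 10

sheets_needed = math.ceil(total_cards / shapes_per_sheet)
print(sheets_needed)                           # 18
assert sheets_needed <= 2 * sheets_per_pack    # two 10-packs (20 sheets) suffice
```

That leaves a couple of spare sheets for miscuts, which you will almost certainly need.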

Making platonic solids out of playing cards

Yet another very niche post. Would you like to do this?

This image is taken from https://mathcraft.wonderhowto.com/how-to/make-platonic-solids-out-playing-cards-0130512/ – however the templates document that this page refers to is no longer available, and there’s no information on the underlying maths either.

I’m trying to figure out how to build more complex stuff of this type, so I figured it was worth sitting down and working it out from scratch. See below for both the maths, and the cut angles for all 5 platonic solids.
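As a starting point for working out the cut angles, the dihedral angle of each solid (the angle at which two faces meet, which the card slots have to reproduce) can be computed from the standard formula for the Schläfli symbol {p, q}: two faces with p sides meeting q per vertex. This is a sketch of that calculation, not the lost templates from the original page:

```python
import math

def dihedral_deg(p, q):
    """Dihedral angle in degrees of the Platonic solid {p, q}:
    2 * arcsin( cos(pi/q) / sin(pi/p) ) -- a standard result."""
    return math.degrees(2 * math.asin(math.cos(math.pi / q) / math.sin(math.pi / p)))

solids = {
    "tetrahedron":  (3, 3),
    "cube":         (4, 3),
    "octahedron":   (3, 4),
    "dodecahedron": (5, 3),
    "icosahedron":  (3, 5),
}

for name, (p, q) in solids.items():
    print(f"{name:>12}: {dihedral_deg(p, q):6.2f} degrees")
# tetrahedron ~70.53, cube 90.00, octahedron ~109.47,
# dodecahedron ~116.57, icosahedron ~138.19
```

The angle each card makes with its neighbour across a slot is determined by these values, so they pin down the cut geometry up to the details of card overlap.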

Plotter Directions

There are a number of ways to go

  1. Op-Art: Bridget Riley blocks, moire, etc
  2. “Natural” – textured trees, flower fields, etc
  3. “Glitch” – a more robotic/computerised feel with explicit acknowledgement of the underlying medium
  4. Organic growth abstracts
  5. Textual integration

I’d like to explore the textual side, but it’s hard to bridge the gap between text and drawings and integrate them effectively. I feel like it’s maybe the best direction to explore, though, as the other territory seems to be thoroughly colonised. The counter-argument is that I think that Cy Twombly sucks ass.

One possibility would be to extend the non-line nature of text into collage?