“AI” and all that

Like many software developers, I’ve recently been puzzling out my attitude towards this “AI” stuff, by which I mean Large Language Models and all the associated tooling. 6 months ago (mid-2025) I was still a sceptic about every aspect of “AI”. And where creative endeavour is concerned, I’m still just as sceptical as I was before; see the end of this essay for more on that. I’m not even going to use the term “AI” outside quotes in this essay, because what we have at the moment is not “Artificial Intelligence”: it’s better described as “extremely sophisticated pattern matching”.

But as far as coding goes? I was flat-out wrong. In that particular domain, the new agentic tools (Claude etc.) are – when properly guided – tremendously effective. I no longer have any doubt that they will have a fundamental, permanent impact on the software development industry.

Some terminology for what follows:

  • Large Language Models (LLMs): the underlying approach. Imagine something that has been trained on a vast corpus of text. The model itself is unchanging, but it has a “context window” that you can ask it questions in, and it will answer. The window may be very large – millions of words – but it is finite. There is no consciousness, and no permanent learning (see the sketch after this list).
  • Agentic Code Generation (ACG): the use of LLMs, with additional scaffolding, to create computer code.
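To make the “no permanent learning” point concrete, here is a minimal sketch of how a chat interface sits on top of a frozen model. All the names (complete, ask) are hypothetical stand-ins, not any real API:

```python
# A minimal sketch (hypothetical names throughout - this is not a real API)
# of why a context window is not the same thing as learning.

def complete(context: str) -> str:
    """Stand-in for a call to a frozen LLM: same weights on every call."""
    return f"<answer based on {len(context)} characters of context>"

history: list[str] = []  # ALL conversational state lives out here, not in the model

def ask(question: str) -> str:
    history.append(f"User: {question}")
    # The entire transcript is re-sent on every call. The model retains
    # nothing between calls; when the window fills up, old turns must be
    # dropped or summarised, and the model itself has learnt nothing.
    answer = complete("\n".join(history))
    history.append(f"Model: {answer}")
    return answer

print(ask("What is a context window?"))
print(ask("And what happens when it fills up?"))
```

The point of the sketch: delete history and the “conversation” is gone, while nothing about the model itself has changed.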

What does ACG change about software development?

The gist: if you already know what you want the program to do, ACG can often write it for you, or at least produce a very creditable and time-saving first draft. This is very significant: while it doesn’t eliminate software development as a role, ACG can shift where output is constrained – the constraints move from code generation to other parts of the process: idea generation, code review. Where ACG has this effect, developers who can engage effectively with this fundamentally different structure of constraints will become more economically valuable than ever, since their overall output will be a multiple of what it was before (2x? 3x? 10x? depends on the field) – and developers who can’t so engage will have an increasingly difficult time of it.

As a guide to what work will get swallowed up: if it could be done by a smart intern with access to Google/StackOverflow/etc., it’s a good potential fit for ACG. In the future this heuristic will scale upwards to newbie programmers: the sea is rising.

What will ACG-enhanced software development look like?

It’s tempting to treat the introduction of ACG into software development as just another iteration of a very familiar story: ascension through successive levels of abstraction. For instance, everyone’s used to languages doing more for you than they used to, e.g. shifting from assembler to C to C++. And no-one writes their own messaging system from scratch anymore (well, hardly anyone) – we have open-source libraries and applications for that.

But it’s a different story this time – an older one – the replacement of unruly, messy humans by biddable, predictable automata. I’ve seen several failed attempts at making this happen in software development over the years; but it’s finally happening now: if not completely, at least to a significant, unprecedented extent. ACG will take over a substantial part of the overall activity of code generation in many areas, while human attention will become increasingly concentrated on other concerns: direction, coordination and verification.

This reallocation of human attention has a wide range of beneficial effects: it doesn’t just make it quicker to write exactly the same computer programs that we would have written anyway. Detailed root cause analysis of weird glitches, hangs and crashes is suddenly way easier to kick off. There’s a far lower cost of experimentation (and a lower level of counter-productive emotional investment in the resulting prototypes). There’s also a lower cost to scratching minor itches – all those small fixes and improvements no-one normally gets around to, because you’d have to spend a couple of hours understanding the surrounding code in order to figure out exactly what to tweak, and who has the time?

And, less positively, there is also the spectre of vibe-coding hell: people getting ACG to spit out code that makes it into production without due scrutiny for security holes, or without sufficient effort to prune and refine changes so the codebase doesn’t fill up with slop. Agentic code generation has the potential to enable those with a shaky grasp of the principles behind sustainable software development to create a huge mess far quicker than they ever could before.

It’s too early to say exactly how this will all pan out. There will be great successes and wince-inducing disasters. But one thing is clear, to me at least: any software developer who doesn’t develop an understanding of how to get value out of ACG is likely to have a tough time over the next few years.

What factors will influence the effectiveness of ACG?

While I’m seeing a multiplier when using ACG for my own work, this clearly is not true for everyone. Here are some factors that will affect how effective ACG can be for you:

  • tooling quality: how good are the prompts and the agentic structure? Ideally your ACG tooling can generate tests and verify that functionality is correctly implemented (see the sketch after this list).
  • language: strongly typed languages (Rust; functional languages such as the MLs and Haskell) are likely to do particularly well out of ACG, which can leverage the structure offered by type systems.
  • process quality: a robot code generator doesn’t make all the other aspects of software development magically vanish. There is still a need for diligent reviews, version management, design sensibility, effective requirements gathering, and all the rest.
  • domain width and stability: if you spend all your time on a relatively well-defined code base that you have a deep understanding of, then you may find ACG has less to offer you. If your work ranges across a very wide range of rapidly-mutating technologies and code bases, the reverse will be true.
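On that first factor: here is a minimal sketch, in Python, of the generate-and-verify loop that good agentic tooling is built around. All helper names here are hypothetical stand-ins, and real tools add planning, sandboxing, diff management and much more:

```python
# A minimal sketch of the generate-and-verify loop behind ACG tooling.
# complete() and apply_patch() are hypothetical stubs; real tools are
# far more elaborate.
import subprocess

def complete(prompt: str) -> str:
    """Stand-in for a frozen-model LLM call."""
    return "# ...model-generated patch would go here..."

def apply_patch(patch: str) -> None:
    """Stub: a real tool would apply the patch to a sandboxed checkout."""
    pass

def run_tests() -> tuple[bool, str]:
    """Run the project's test suite and capture its output."""
    result = subprocess.run(["pytest", "-x"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def agentic_change(task: str, max_rounds: int = 5) -> bool:
    """Request a change, verify it against the tests, feed failures back."""
    feedback = "(none yet)"
    for _ in range(max_rounds):
        patch = complete(f"Task: {task}\nLast test output:\n{feedback}")
        apply_patch(patch)
        passed, feedback = run_tests()  # verification closes the loop
        if passed:
            return True
    return False  # out of rounds: hand the problem back to a human
```

The crucial design point is that run_tests closes the loop: the model’s output is checked against something external to the model, and failures are fed back in rather than waved through.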

If you have all these factors working in favour of ACG, then large multipliers are achievable. But if they’re all working against you – if you’re using poor-quality tooling to blast in a load of code that has inconsistent idioms (say, a random selection of C++ styles from the last 30 years), without reviewing it properly, in a modelling library that’s your main product and which you really should have a deeper understanding of than you do – then ACG will almost certainly seriously impair your overall output.

Prediction 1: In the immediate term, ACG won’t be for everyone, even for many good development outfits (e.g. good tools and process (+ACG), but also lots of acquired knowledge of a well-bounded, stable domain, and a bad match on language (-ACG)).

Prediction 2: In the longer term, ACG will drive a shift towards languages that work well with it. Examples of languages that may come under pressure: (1) C++, where the idioms have shifted a lot over the last 30 years – how well can ACG cope with that? And ACG will make Rust easier to write as a replacement… (2) Dynamic programming languages such as Python: here ACG assistance can potentially tip the balance in favour of more strongly-typed contenders.

Prediction 3: Across all domains and time horizons, the boundary of ACG effectiveness will expand as the quality of tooling improves.

It’s not just software development …

The rise of LLMs is going to disrupt – even replace – plenty of other jobs. For example, until recently there was a useful living to be made by linguists, doing legal patent searches in foreign languages. Nowadays, Google Translate (or similar) will work well enough, and the humans are no longer in the running.

A more nuanced example: as of December 2025, there’s some heated debate going on about the degree to which LLMs might be able to facilitate – or replace – the work of human indexers of books. The American Society of Indexers say LLMs are useless. Here I can see agentic assistance being a net win, but human intervention will still be necessary to reliably achieve a result that is useful to other humans. My intuition here comes primarily from the related field of topic modelling, which has a long research history (predating LLMs), and where the research community’s fashion for how to assess the quality of output has oscillated between programs (consistent, but prone to veering off in weird directions) and humans (who do at least make sense to other humans, but often don’t agree with each other on what the topic categorisation should be).

… but the creatives are not getting replaced any time soon

The scope of current technology to substitute for humans is still limited: Large Language Models are not about to somehow replace creativity, whether literary, musical, visual, or of any other kind. “AI” boosters overclaim in this area, which is why I was so sceptical about all their other claims (including those about ACG) until recently. Writing useful code is not the same as creating art! You can verify whether code is doing the right thing by running tests!

Why is code different from Art? There are “right answers” for many coding problems – if not a single answer, then a relatively small number of well-defined solutions, with an understood range of tradeoffs between them. So using ACG to write programs targets a far simpler space of potential solutions than the wider, more ambiguous realms of Art. While there is some creativity associated with the act of code generation, it is subservient to the only really important creativity-oriented question about the code, which is: why are you writing this code in the first place? What is its purpose? And LLMs can’t answer that.

Will algorithmic systems ever achieve some element of artistic creativity? Maybe – but if so, it will need different technology from the current language models, which are not robust to further major re-training after their initial creation. We currently get around this limitation by providing a large “context window” of interaction with the model. But when you exhaust the capacity of the context window, you’re done, and the model itself hasn’t learnt a thing. Show me a model that can robustly incorporate feedback about how its actions are affecting the world, and then I’ll be interested: a context window is no replacement for that ability, no matter how wide it is.

The illusion that ChatGPT and its relatives have a mind can be very compelling: people are easily misled by the context window into thinking they are talking to something with a mind of its own. But you can stick a pair of googly eyes on a rock, and throw your voice, and get the same effect. We humans are hard-wired to impute intentionality; this has probably been very useful when dodging predators in the distant past, but serves us poorly in this case.

TL;DR:

  • ACG has seriously high potential value. If you’re a developer, I strongly advise spending some time engaging with it in 2026, if you haven’t started doing so already. ACG isn’t a magic bullet and may not fit your domain. But you should figure that out for yourself, rather than either accepting ACG uncritically, or dismissing it without thought.
  • There are plenty of other fields where Large Language Model technology will supplant humans – typically where there is a large rote element to the work, with little scope for imagination.
  • If someone tells you that current “AI technology” can replace human creativity and insight, you should smile politely, and count your silverware once they have left the room. For a more frank, if impolite, take on some of the current hucksters in this sphere, I recommend reading I am an AI Hater.
  • It’s possible that we might – off the back of different technology – develop true AI: algorithmic creations that can learn based on feedback from the real world, and use their learning to do useful things. I am less sceptical about that possibility than I was five years ago. This is not necessarily a good thing: read Lena for a well-written take on why.

Another Stevie Smith Poem

Autumn

He told his life story to Mrs. Courtly
Who was a widow. ‘Let us get married shortly’,
He said. ‘I am no longer passionate,
But we can have some conversation before it is too late.’

Reading (and Understanding) Derek Parfit

Why You Should Read Derek Parfit

Derek Parfit has been justly described as “the most famous philosopher most people have never heard of.” If you’re into moral philosophy, he’s a must-read. And despite spending a decades-long academic career with no formal teaching commitments, Parfit has a relatively manageable publication record: you can cover the essentials in two books – Reasons and Persons (1984) and On What Matters (2011).

Why is Parfit a must-read? Well, that’s worth an entire extra post – but it’s essentially because he has come up with a lot of very interesting ideas:

  • In Reasons and Persons he comes up with completely fresh arguments about personal identity, and links them to arguments about how to balance self-interest and more impartial moral theories.
  • There is then a cracking section on how to balance the interests of future people, including the Repugnant Conclusion, which is well worth your time.
  • In On What Matters, Parfit sets out his “Triple Theory” in Volume 1, which combines Kantian deontology, consequentialism and contractualism – three separate moral traditions that are supposed to be incompatible.
  • In Volume 2 he starts with comments on this theory from four other philosophers, and then sets out his response to each of those philosophers – a mixture of deeply educational content and knockabout fun.
  • There then follows a variety of sections on Parfit’s approach to ethics, including his take on Nietzsche, which I have not yet read and am very much looking forward to.

Whether you agree with Parfit or not, this is truly mind-expanding material, written by someone who sets the weather. If you want to get a better handle on moral philosophy, I cannot recommend forming your own take on Parfit highly enough.

And not only is Parfit a must-read, but his style is accessible to a layperson. Just as well: although I’ve had an interest in ethics since I was a teen, it’s very much a hobby: I’m a mathematician and engineer, not a philosopher. Parfit alternates between “thought experiments” – short, pared-down descriptions of moral dilemmas that provoke intuitive responses – and dense argument and reasoning that explores the implications of these responses. I personally find this an effective way to guide people through the terrain that Parfit is attempting to map out.

Why You Might Not Want To Read Derek Parfit

So: a must-read, and accessible. There’s got to be a catch, right? Otherwise we’d be ending the show right here. And, yes, you’re right. To start with, the books aren’t exactly quick reads. Reasons and Persons has around 400 non-appendix pages, and On What Matters has around 1,000. Even if there’s some filler in there, that’s a lot to get into your head.

More importantly, I need to fess up that not everyone is a fan of Parfit’s style. The kinds of thought experiment so beloved by Parfit are intended by him to clarify the essence of various moral choices. But to other eyes, these experiments can appear reductive – even deceptive – narrowing away the complexity of real-world context to present an artificially stark choice between two unrealistic alternatives. These reservations are exceptionally well laid out by one of the philosophers responding to Parfit in On What Matters (Volume 2), where Allen Wood goes on an extended rant about just how little respect he has for the thought-experiment approach (which he denigrates as “trolley problems”). I have genuine sympathy with Wood, because even before I started reading Parfit, I had developed the same reservations about a famous thought experiment in another area of philosophy (Searle’s “Chinese Room“) – although Wood puts those reservations far better than I’m capable of. Parfit deals with Wood’s rant in his response by ignoring it completely: make of that what you will.

Wood is, at least, polite – if occasionally withering. It can get far worse. Stephen Mulhall’s LRB review of a biography of Parfit (“Non-Identity Crisis“) goes considerably further – frankly, Mulhall verges on the bitchy, concluding that the biography “presents its subject as an epigram on our present philosophical age – a compact, compellingly lucid expression of its own confusions and derangements.” It was reading that review that made me decide I should make a determined effort to understand Parfit: anyone responsible for this much vitriol was surely worth a go?

You have been warned. You may, like me, find Parfit well worth persisting with. But you may find him incredibly irritating and wrong-headed. See how you get on with Reasons and Persons before buying On What Matters!

How To Read Derek Parfit

Let’s assume that you will get on with Parfit’s style, or at least be able to tolerate it sufficiently: even if so, there’s still an awful lot to cover. I’m used to being able to speed-read through a lot of very complex material, in a variety of fields, and retain the gist as I power on through: I am very good at doing this. There are lots of things I’m crap at, but my goodness I’m good at ploughing through a shedload of complex material. But when I tried reading through Reasons and Persons, I hit the buffers a quarter of the way in; there was simply too much to hold in my head. I needed more traction. After some experimentation, I found an approach that has successfully carried me through.

Parfit splits his material out by chapters, which then tend to have multiple numbered sections. The recipe that worked for me is simple: cover material in the following three stages:

  • First, read through, making sure you understand what you’re reading. Feel free to mark up significant passages in pencil, but don’t take detailed notes at this stage.
  • Then, at least a day after reading, make detailed section-level written notes. Elide detail that was useful as scaffolding but isn’t needed to express the overall structure of the reasoning once you’ve got it in your head. I ended up with something like one page of A4 per 10-15 pages of original text – the optimal compression ratio depends on what you’re trying to summarise.
  • Then, at least a day after making the written notes, make word-processed chapter-level summaries. You should be able to do this solely by referring to your written notes. Aim to sum up each chapter in a single side of A4, although some chapters will require more – you’ll know those when you see them.

All three stages can be in flight simultaneously – you’ll be covering the material in three waves, with reading furthest ahead, section-level written notes behind that, and word-processed chapter-level summaries further back again.

How long to leave between stages is up to you, but I would recommend a day at the absolute minimum – I would say a few days is probably optimal for reading-to-notes, and notes-to-summaries could be left for longer if convenient.

And how long will all this take? Well, I embarked on my Parfit binge when I was on gardening leave. So I had a lot of free time – but I did find there was a limit to how quickly I could absorb the material, even so. I had to take breaks to do things that were not moral philosophy, to allow the concepts to settle into my brain. Reasons and Persons took me a month. Volume 1 of On What Matters took another month. The first 2 sections of Volume 2 (comments and responses) took another fortnight. If you can do better than me, I take my hat off to you!

Another person’s experience: the “Only a Game” blog took 4 months to cover Volumes 1 and 2 of On What Matters, which feels in the right ballpark for someone who has the cognitive load of a job to handle at the same time – read their take here. They reckon the best bit of On What Matters is the second half of Volume 2, which is up next – hopefully, I have a treat in store!

Why I’m Bothering To Read Derek Parfit

I think Derek Parfit is a good person with extremely interesting things to say. I am not, by instinct, congruent with Parfit’s general inclinations: I am naturally deeply suspicious of people who attempt to systematize and bring disparate streams of thought under a single guiding set of principles.

And Parfit is a systematizer of exactly the kind that normally raises my hackles – but he has good reasons for being so that I can respect. He is trying to bring moral philosophy together because he is horrified by the prospect of conflicting ethical philosophies resulting in nihilism. His potentially-off-putting tendency to systematize is, for me, offset by his warm respect for the importance of humanity, and an understanding of the reality and importance of the personal and partial motivations that we all have, and which make life worth living. Time and again, I’ll read a passage that makes me think “I love you, Derek Parfit”. For me, Parfit’s ultimate saving grace is his passionate belief in dignity – that no matter how badly someone has behaved, no matter how badly their personal moral compass is skewed, they are still worthy of consideration and so do not deserve to suffer.

Parfit is no fan of the Christian Hell: good for him.

Cognitive Empathy and Software Development

(TL;DR: software is a social endeavour, and if you can’t figure out some way to engage with that fact, then that fact will engage with you nonetheless – and things are unlikely to go well for you from that point)

TO HIGH-FUNCTIONING NEUROTYPICALS:

PLEASE STOP READING NOW

You know who you are: the life and soul of the party, charming everyone you meet. I like you, honestly I do! Although this will not be a surprise, because everybody likes you. But reading any further will be a waste of your time, since you already understand everything I’m about to lay out so painstakingly, without even having to think about it.

(Also, I’m going to be bitchy about you at one point in what’s coming, and you may find that unpleasant)

The rest of us will wait for you to leave.

Could you close the door on your way out? There’s a bit of a draught. Thanks.

LET’S TALK ABOUT EMPATHY

Are the rest of us sitting comfortably? Then I’ll begin.

There are some incredibly useful ideas about empathy that no-one ever talks about. Why?

Because ideas only get talked about if they’re in the mainstream. And the mainstream is full of people who can find their way around The Land Of Empathy just fine: no map required. So the kind of talk about empathy that we’re about to have is in the “blind spot” of our culture.

LET’S TALK ABOUT COGNITIVE EMPATHY

So: to the ideas themselves. The distinction that I’m going to beat you about the head with in what follows is set out on Wikipedia’s Empathy page: it is between Cognitive Empathy (“Empathy of the Head”) and Affective Empathy (“Empathy of the Heart”).

Affective Empathy is common-or-garden human sympathy, or as Wikipedia puts it “the ability to respond with an appropriate emotion to another’s mental states”. If you are incapable of exercising any of this variety of empathy, I’m afraid that we are unlikely to be friends – not that you would care, of course. You may as well stop reading and go back to tearing the wings off flies, or whatever else it was you were doing before you encountered this essay. Have fun! Please don’t kill me!

Cognitive Empathy is a very different beast: it’s all about how much you get inside other people’s heads (Wikipedia: “the ability to understand another’s perspective or mental state”). Unlike Affective Empathy, this doesn’t carry any intrinsic moral virtue: you can be a lovely, wonderful person, and absolutely suck at Cognitive Empathy. Note that if you lack innate ability in Cognitive Empathy, there are workarounds – ways to “fake it till you make it” – much more so than for a lack of innate Affective Empathy.

Here’s something that I’m often fascinated by: the use of Cognitive Empathy as a weapon. An example: Dr. Hannibal Lecter. Dr. Lecter is very good at Cognitive Empathy, and unfailingly polite. To someone meeting him for the first time, he generally comes across very positively. However, Dr. L. leaves something to be desired as a social companion in the longer term, and I would definitely advise against inviting him to your next dinner party.

COGNITIVE EMPATHY IS IMPORTANT

(YES, EVEN IN INFORMATION TECHNOLOGY)

In IT, Affective Empathy is present in normal amounts, but Cognitive Empathy is generally way thinner on the ground. I won’t go into why that is in this essay. But we all know that IT is packed with people who would far rather focus on technical, analytical framings of issues than get all “touchy-feely” and worry overmuch about the thoughts and feelings of others.

So IT people are terrible on average at Cognitive Empathy, but does that really matter? Does a deficit in Cognitive Empathy harm you in an engineering context?

Here’s a simple motivating example:

You’ve slogged your guts out on project X but have not been given sufficient recognition for your effort. During this time, person Y has risen, seemingly without any justification.

That may sound … familiar? And if you don’t understand why this disconnection between value and reward happened, then you didn’t understand enough about how the work was being evaluated: who was pronouncing on its worth, and what they attached value to. And this kind of situation is where even a smattering of Cognitive Empathy can help tremendously.

By this stage, I imagine I might be annoying some readers. They’ll be thinking: You’re telling me to pander, and be a people-pleaser. But I’m an engineer, not a diplomat! Surely, if I just carry on doing Good Things then I should be recognised? And, if I’m not recognised, surely that means it’s the fault of the company, and that I should go and work for somewhere else with a better value system? Right?

To them, I’d like to say this:

PAY ATTENTION, YOU IDIOT

If you don’t engage with what the people around you are thinking, you are at a massive disadvantage compared to anyone else who is paying even the slightest attention to the mental states of others.

There is a romantic myth in IT of the sole coder who has an amazing idea and weaves a magical blanket of abstractions by working 24/7 in their bedroom, unveils it to the world, and receives instant fame and recognition for their genius. Like most myths and archetypes, it’s fatal to take this as an unadulterated guide to living in the real world.

Almost everywhere, and almost all the time, creating software is a social endeavour. Most great ideas come from cross-fertilisation with a wide range of other people – some of whom are likely to think very differently from you – and a lack of appropriate context makes even geniuses appear stupid.

INTRODUCING PERSON X

Person X has spent a long time (10 years or more) working on the same part of the same product, for the same company, without interacting with other teams much, whether internally or externally.

Other team members (whether more extraverted ICs or managers) interface between them and the world outside their team. But Person X prefers to concentrate on their own part of the world. While Person X most certainly adds value, hardly anyone outside their team knows who they are.

PERSON X WILL SUFFER

At some point Person X will find that their local technological landscape is invalidated. This can be through technological advances, company mergers, a new CTO with a vision – the cause is uncertain, but it will happen sooner or later. When this happens, there will be redundancies, and Person X will get the chop.

Why?

Because the decision of who to chop is made by multiple people (managers, client reps, etc) getting together and voting – and only one of those people will know who Person X is, so Person X’s sole advocate will be outvoted by all the others.

This is not fair. But it happens all the time.

I AM SORRY

I may have come across as a bit of an arsehole so far. Angry. Sorry. But this isn’t an accident; I’m attempting to shock you out of complacency. Here’s the reason I’m so cross: no-one else is going to bother to help you out – they’ve been spending their whole lives cruising past your low-cognitive-empathy slum in their high-cognitive-empathy limousines.

And most of the people in the limos don’t really get why you’re in the slum, and the minority who do have an inkling: well, I wouldn’t say that they’re laughing at you through their tinted side windows, not exactly, but they don’t care about you either; at least, not enough to help you out. So you’re stuck with me, doing my grumpy best. Sorry about that. But also: fuck those guys, right?

YOU CAN ESCAPE

If you work in the IT realm and think that you may be suffering from this kind of empathic disadvantage, and would like to make the pain stop, I recommend studying the works of Gerald Weinberg. I didn’t get to meet him in person, which is one of my few major regrets in life: he died in 2018, leaving a legacy of marvellous writing behind. His original classic “The Psychology of Computer Programming” was written in 1971 but is still as relevant as ever.

I have read every non-fiction book that GW wrote and haven’t regretted a single one. He had the technical chops: he was a project manager on Project Mercury. But he also transcended personal tragedy in his family life, and as part of his journey he produced an amazing synthesis of Rogerian psychological thought with the practical process of writing systems. I cannot recommend his writing highly enough.

YOU MAY BE SAD

Please don’t be sad. There is lots of good news here. You can learn Cognitive Empathy. And this isn’t just about career benefits, about helping yourself out. While being able to better comprehend what everyone around you thinks, feels, and wants is very rewarding professionally, it’s also tremendously rewarding in itself: you will be able to help more people and generally do more good in the world.

And on top of all of this – circling back to career benefits – knowledge of Cognitive Empathy will be hugely more durable than all those technology hamster wheels we spend all our time running around. I have lost count of the number of networking technologies I have learnt over the last 30 years – but people? They are just the same now as they were 30 years ago, or the last few hundred, come to that. Cognitive Empathy is an incredibly useful bit of technology that is going to be just as relevant when you retire as it is today. You know, like key bindings for vi?

YOU MAY THINK I AM FULL OF SHIT

Well, really. The nerve of you! Still, I’ve done my best. Feel free to ignore me and focus on those sweet, sweet technical/analytical problems that we all love so much. But don’t come running to me when an idiot manager or daft fellow developer who happens to be exercising slightly more Cognitive Empathy than a broken toaster runs rings around you.

Incidentally, don’t despair if you feel that a self-perceived lack of neurotypicality on your part might make all this too difficult. As I have already implied, I’m not neurotypical either, having a decently-sized helping of Emotional Contagion, with a side order of Face Blindness. Depending on the mix at your workplace, non-neurotypicality might actually make it all easier: check out The Double Empathy Problem to see what I’m talking about.

Excel Tetris

Because you’re worth it.

Written back when it was fashionable to make Excel do strange things.

(Disclaimer: Uses Windows Timers. Probably doesn’t work nowadays. Will spoil your milk, make strong men weep, and turn the weans against you)

Millennials

At some unspecified time in the past, when I was in a managerial/quasi-managerial role in an investment bank (keep it vague, Matt, keep it vague), we had An Occasion when anyone who was managerial/managerial-adjacent was brought into a half-day session organised by HR about how to deal with Millennials. The reason? We were having a lot of problems around retention for graduate Millennial joiners in our most recent year group, and we wanted to stop the rot.

The session covered the familiar talking points that the older generation typically raise about feckless youth. Millennials expect too much on coming into an organisation, when they aren’t ready for the responsibility. They don’t respect hierarchy. They require coddling. They’re snowflakes. And so on, yadda yadda yadda.

But I knew why the Millennials were really leaving. In the problematic year group, a few people had been placed, post-internship, with a part of the organisation that had been deemed non-profitable. They were laid off rather than places being found for them elsewhere. This was a violation of an implicit contract that this and similar organisations had at the time – while we might have a lot of churn and instability, we would not crap on people fresh out of an internship by making them redundant, no matter what. The norm would be that you would reassign them. They’ve been around for a few years? Fine, it’s open season. But less than 12 months after starting full-time work? Nope.

As a consequence of this norm violation, meerkat-like, everyone in that year’s cohort raised their heads, nodded at each other, and a decent fraction (disproportionately selected from those with most moxie and smarts) had changed their employer to be someone other than us before the year was out. They were absolutely right to have done so, and I would have done the same thing myself in their position.

I knew what was really going on because – unlike most of my colleagues in the managerial sphere at that organisation, at that time – I took pains to check in with everyone I felt responsible for on a fortnightly basis.

(Yeah, I know everyone does this weekly nowadays but this was many years ago, and we’re talking 14 people here. I was busy, OK?)

Much to my shame, I didn’t speak up during the session. I nodded along with everyone else, tutted about how terrible the Millennials were, and was working somewhere else myself before the year was out.

270 Playing Card Lampshade

In 2011, Nick Sayers made a roughly spherical design out of 270 playing cards. Well, 265 if you leave a hole to put a light-bulb in – 5 whole decks, plus one joker from each. It’s easy to find pictures of it on the internet, e.g. here and here. It has gaps to allow light through. It has a pleasing mixture of structure and irregularity. It looks fantastic. I wanted to figure out how to make it myself.

The design looks complicated, but it’s not so bad. It’s a Platonic solid in disguise – either a dodecahedron or an icosahedron, depending on what you say the centre of the face is: from the angle of the playing card cuts, it’s pretty much a wash since everything bulges outwards a bit, compared to a true Platonic solid.

You can figure out how to fit the cards together from the pictures (whether of Nick Sayers’ build or mine). But before getting started on this, I highly recommend you figure out how to make some Platonic solids first – it’ll give you an idea of how this kind of thing works. See my previous post for more on that.

Assuming you’ve gained some basic idea of how to build the simpler stuff, then here’s the key to building the 270-card behemoth: there is a 9-card motif that you need to repeat 30 times, which is shaped like a diamond. At each of the two sharp points of the diamond, 5 cards come together. At each of the other two points of the diamond, 3 cards come together. Everywhere else, 4 cards come together.
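If you want to sanity-check those counts before cutting anything: my reading (an inference from the vertex pattern, not something stated in the original design) is that the 30 diamonds sit on the faces of a rhombic triacontahedron, and the numbers agree nicely:

```latex
% Card/vertex bookkeeping, assuming each 9-card diamond motif covers one
% face of a rhombic triacontahedron (my inference, not the designer's claim):
\begin{align*}
\text{cards}    &= 30~\text{faces} \times 9~\text{cards per motif} = 270 \\
\text{vertices} &= \underbrace{12}_{\text{sharp points, 5 cards}}
                 + \underbrace{20}_{\text{blunt points, 3 cards}} = 32 \\
V - E + F       &= 32 - 60 + 30 = 2 \qquad \text{(Euler's formula, as required)}
\end{align*}
```

On that assumption, the 12 sharp points carry the icosahedron-ish symmetry and the 20 blunt ones the dodecahedron-ish symmetry – which would be why it’s pretty much a wash which Platonic solid you say is hiding inside.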

The really interesting bit is exactly where to cut the cards so they slot together in the right way. On my previous post for building Platonic solids, you can see how this kind of thing gets worked out. For the 270-card construction, by all means work out a theoretical starting position (this helped me) but eventually you will need to be prepared to experiment with a pack or so of cards to see what alternatives pan out best given the way the physical material behaves. I recommend using packs of Waddingtons No. 1 cards when experimenting – they are pretty tough and nice and cheap.

(Note that each 9-card motif uses 5 cards with one pattern of cuts, and 4 cards with the mirrored version.)

In terms of actually putting it together, build one 9-card motif off to one side, and refer to it as you make the main build. Start with a 5-vertex and build outwards from there. Once you have all the slots cut, and any other backing applied (see below), it will still take you at least 4 hours to put everything together. There are some points where you can take a rest – basically you’ll need to complete something radially symmetric so it doesn’t start bending too much.

While a finished build looks great, it does have one drawback as a lampshade: playing cards are not designed to let light through – blocking it is kind of a major part of their job description, come to think of it. So, while the lampshade does look lovely with a light bulb in the middle, it’s somewhat underpowered as a light source: more of a glowing coal than a blazing fire, let alone a UFO that will eat your mind as you gaze dumbly into its alien projection of Nirvana.

Sorry, where was I?

Ah, yes: improving light output. I did wonder if it might be possible to get more light out by building with cheaper playing cards, which lack the “core” layer that makes normal playing cards opaque. But such cards tend to be weak, whether from the missing “core” or simply thinner cardstock. This weakness rules them out: the twisting inherent in the design means that cheaper cards rip and can’t be used successfully.

My best idea so far: put stick-on chrome mirror vinyl on the parts of the backs of the cards that lie entirely within the construction, the idea being that the light will bounce around until it eventually makes its way out. From the outside, it still looks like normal playing cards – but when you chuck light out from the inside, you get more than just a gentle glow: enough light escapes to give some general illumination of the surrounding space. An encouraging sign is that the colour of the escaping light is a closer match to the original bulb colour – before, it took on the hue of the playing cards, since so much of it was being absorbed by their surface.

More external light would still be good. The photos below are all I currently get from a 26W corn LED bulb, which puts out 3,000 lumens (around 200W incandescent-equivalent), so is already quite punchy. I’ll try ramping it up and see how far I can go before setting everything on fire.

As to which playing cards to use? The main problem is potential tearing where the cards meet – once the whole construction is made you’re fine, but while putting things together quite a lot of stress can be present at some stages. I prototyped with Waddingtons No. 1 bridge cards, only £1.40 a pack, and they stood up to the twisting forces involved remarkably well. The final build uses Copag Jumbo Index poker cards, which are more like £10 a pack in the UK: about as resilient as you are going to get. But to be honest, they didn’t deal with the forces much better than the Waddingtons did – although I did dial up the amount of card twisting to the maximum level I thought I could get away with.

You handsome devil
The interior, all lovely and shiny
If you want to do this yourself, I recommend you cut out the vinyl using a Silhouette/Cameo machine. One A4 sheet should give you 15 shapes – so 2 10-packs will suffice for one build.

Making platonic solids out of playing cards

Yet another very niche post. Would you like to do this?

This image is taken from https://mathcraft.wonderhowto.com/how-to/make-platonic-solids-out-playing-cards-0130512/ – however the templates document that this page refers to is no longer available, and there’s no information on the underlying maths either.

I’m trying to figure out how to build more complex stuff of this type, so I figured it was worth sitting down and working it out from scratch. See below for both the maths, and the cut angles for all 5 platonic solids.

Plotter Directions

There are a number of ways to go:

  1. Op-Art: Bridget Riley blocks, moire, etc
  2. “Natural” – textured trees, flower fields, etc
  3. “Glitch” – a more robotic/computerised feel with explicit acknowledgement of the underlying medium
  4. Organic growth abstracts
  5. Textual integration

I’d like to explore the textual side, but it’s hard to bridge the gap between text and drawings and integrate them effectively. Still, it feels like maybe the best direction to explore, as the other territory seems to be thoroughly colonised. The counter-argument is that I think Cy Twombly sucks ass.

One possibility would be to extend the non-line nature of text into collage?

Vaccines – An Excerpt

This is the concluding part of a longer essay I wrote a while ago, when sorting out my thoughts on anti-vax. It’s based on “The First Rotavirus Vaccine and the Politics of Acceptable Risk”, Milbank Q. 2012 Jun; 90(2): 278–310.

(Context: the “RotaShield” vaccine was withdrawn in 1999 following confirmation of a serious adverse event associated with its use in infants. The story around this is an important part of the mythology of anti-vax with respect to one of their prime targets, Paul Offit – one of the most public faces of the scientific consensus that vaccines have no association with autism, among many other things.)

The history of the Wyeth/RotaShield vaccine’s approval while Paul Offit was on the ACIP panel is crucial. It is at the heart of disagreements around Offit himself, but it also operates as a key part of a feedback loop that poisons discussion across the pro-vaccine/vaccine-sceptic divide. I will explain why.

The pro-Offit position is that everyone made decisions around Wyeth/RotaShield in good faith, and that the decisions made remain perfectly understandable based on the evidence available at the time the decisions were made, and that there is no evidence to support the notion that there was wrongdoing involved.

I believe that this position is true.

I simultaneously believe that the ACIP process at the time needed improvements to its conflict-of-interest policy. The very fact that people can accuse Offit over RotaShield in the way they do, in a manner that carries a reasonable degree of credibility to a casual observer, surely proves that there is an issue here. I do not think there is any inconsistency in holding these two beliefs at the same time: you can believe that a COI policy needs improvement, without believing that people in any given situation in fact acted badly.

But Offit’s hard-line critics see a very different picture. They see someone who joined the ACIP panel primarily to enrich himself – using the influence of his position to create a vast and open market for his own (Merck/RotaTeq) vaccine by rushing through the approval of a competitor (Wyeth/RotaShield) vaccine with known safety issues. At its strongest, the narrative is that Offit was fully expecting RotaShield to be withdrawn, leaving the pre-established market wide open just as his own vaccine became available to fill the gap. The consequence of this was the suffering of around 100 children, of whom one died and 50 had to have surgery. In return for this, he has profited to the tune of tens of millions of dollars.

In other words, the essential narrative for many of Offit’s opponents is that he has killed a child for money – that Paul Offit is literally a baby-murderer.

Everything else in people’s views of Offit flows from their interpretation of the RotaShield episode. To call this an “interpretation gap” is a huge understatement: it is a gulf, a chasm. It is impossible to reconcile the views. Offit is a medical researcher who has saved hundreds of lives. He has killed a child for money. He campaigns energetically to save lives in the face of death threats. He schemes endlessly to further his own interests, while children suffer and die as a direct consequence. He is a good man who is doing his best to do good things. He is “the devil’s servant” (whale.com).

And standing on the other side of the divide is Offit’s dual, the closing half of the feedback loop – Andrew Wakefield. The anti-Offit characterisation is echoed in many of the accusations that are flung at Andrew Wakefield by the more intemperate pro-vaccine parties. Truly, many from each side honestly believe that the other side’s prophet is a baby-murderer. This is such a deeply unpleasant thing to consider that the more decent among those on each side rarely articulate it openly – they don’t even like to call the thought fully to mind – but the thought is there on both sides all the same.

Both Offit and Wakefield give a lot of speeches, but they don’t use this kind of rhetoric directly about their opposite number, and there is a good reason for this. It is a deeply primal, a tremendously powerful thing to accuse someone of – there is so much energy to be tapped from the sense of revulsion that results. But it never ends well, because the energy is diseased, tainted at its source. Once you believe someone is a baby-murderer it is hard to even think of them as fully human. Discussion turns ugly even if the underlying accusation is never fully brought to the surface – the unspoken thought poisons everything that it touches, killing respect and goodwill.

More generally, it is healthy for all of us to reject accusations like this wherever they crop up, whoever they are aimed at. No monsters here – only us.