Regrets of a Technical Communicator

[Photo: Sunset over the desert in New Mexico.]

I like to joke that I got into developer relations because I was the rare programmer who could carry on a conversation for more than five minutes. Like all good jokes, it's mostly true: I think one of the foundational skills of the role is the ability to translate highly specific, nuanced technical concepts into something broadly consumable by other technologists or by a general audience. However, I've noticed a worrying trend in technical communication over the past couple of years. In short, the gap between what people need to understand and what's being communicated to them has never been larger.

I don't have a ton of specific examples here; a lot of this is driven by conversations I've been part of over on Bluesky for the past year or so. For the uninitiated, Bluesky is a federated microblogging platform built on a decentralized protocol known as ATProto. This article explains it more in depth. There have been a lot of conversations about the hot technical topics of the day that have a social impact, such as generative AI and, uh, federated microblogging platforms. Both are highly technical in their implementation, both matter enormously to how the internet and software systems will function in the future, and both are understood very poorly by most people. I'll be the first to admit that I am not an expert on either, although I've done some reading.

Feeling the AGI

For an example of the problem, I'm going to write out some thoughts on "AI". I'm gonna say the same thing twice: first, as I'd communicate it to a co-worker or someone who works on modern software systems.

It's very obvious that the fever-dreams of the 'AI Bros' and e/acc numbskulls are hilariously unachievable given the current state of the art in AI. However, you'd be remiss to write off diffusion and transformer models, to say nothing of the open source work happening around them. Transformers and Large Language Models, specifically, will certainly become a huge part of interaction modalities with data over the next half-decade or so. It's even more impressive to consider that much of what's being turned into products and solutions today is based on research from decades ago. Over time, I expect we'll see further advancements as computational power continues to grow.

Ok, now I'm going to rewrite that last paragraph for a more general audience.

A lot of people who want to make money off 'AI' are promising some really big things, but there's not a lot of evidence that they'll be able to actually pull it off given what's achievable today. However, you shouldn't take this to mean that stuff like ChatGPT or Stable Diffusion is a dead-end or is going to go away. Those specific products and how people use them may, but the stuff that's happening behind the scenes is going to be used by developers and companies to make it easier for you to work with computers. It's also worth remembering that a lot of what's "new" in AI is based on research from the '70s, and computers have finally caught up to make that theoretical work possible. It's likely that this trend will continue as computers become more powerful and efficient.

This is a fairly basic version of the problem I'm talking about. I didn't actually say anything different between the two paragraphs, but their audiences are drastically different. The problem I have is that I don't think the latter does much to allay someone's concerns about AI, because it never gets into the whys. Could I, or anyone else, do a better job? Sure. AI isn't my specialty.

However, where do you even begin? Pick any point in the History section of Wikipedia's article on neural networks, and I guarantee you'll miss some sort of context. Even a completely lay explanation of 'what an LLM is doing' should touch on some of the work from the past twenty years, give or take, and probably point out exactly how prevalent neural networks already are in software systems today. If you waved a magic wand to get rid of 'AI', you'd be killing off everything from Google Translate to predictive text to most forms of fraud detection in banking. A useful explanation of attention and transformers is beyond me, certainly, but I know enough to know what I don't know.
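One thing I do know is that the bare arithmetic at the heart of 'attention' is smaller than the hype makes it sound. Here's a toy sketch of my own in Python with NumPy; this isn't any real model's code, just the textbook scaled dot-product step reduced to its essentials, for the technologists in the room:

```python
# A toy illustration of "attention": each item in a sequence gets a
# query, key, and value vector, and its output is a weighted average of
# all the values, with weights based on how well its query matches each key.
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention over a sequence of vectors."""
    d = K.shape[-1]
    # Similarity of every query to every key, scaled for numerical stability.
    scores = Q @ K.T / np.sqrt(d)
    # Softmax turns each row of scores into weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output vector is a weighted blend of the value vectors.
    return weights @ V

# Three "words", each represented by a 4-dimensional vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(attention(x, x, x))  # shape (3, 4): one blended vector per word
```

Everything else in a transformer wraps around steps like this one, stacked and repeated with learned weights at enormous scale. The hard part of a lay explanation isn't really the math; it's conveying the scale and where those learned weights come from.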

Everything's a system, and systems are complex!

You can't throw a rock without hitting some example of this problem. Journalists interviewing objectionable people and getting pilloried from all sides for the act of interviewing them, even though doing so is part of the basic ethics of reporting. Developers building tools that are held to impossible standards by user communities. When it seems you can't do anything that makes everyone happy, your options are pretty much to shut yourself off from the wider world, or to sort people into a binary of 'haters' and 'not haters' and make up stories about the haters so you can ignore them more easily.

I'm not immune to this tendency either; it's far easier to just ignore people and arguments that you can tell aren't going to be productive. What's disappointing, personally, is knowing that the point of any of these arguments isn't to persuade the parties involved; it's to entertain or inspire the silent observers.

This is where I start to feel the regrets swell up, because I think that technologists have done a very poor job of consistently communicating systemic concepts in approachable ways. I don't think we're ever going to drown out the tech fuckbois trying to turn a quick buck; they have vested financial interests in getting their message across. What I do think we can do, though, is commit to some level of open and honest communication about hard technical concepts, without shutting ourselves off or getting too far into the weeds.

I'm not saying this in the sense that we should put up with abuse, or suffer foolishness. If people don't want to be respectful, they can go on their merry way. That said, we do have a responsibility to meet people where they are and help them understand the sprawling complexity of these systems as best we can, rather than writing them off. As one of the principal groups spawning that complexity, we owe it to society to be good stewards of it.
