Our planet is full of communication. Elephants communicate over great distances, whales speak to each other across the depths, and we do the same through the broad network we now call the Internet, built on the systems that came before it. Some might say it’s a nervous system of sorts, but if it’s a nervous system it does seem disconnected from a brain. Or maybe the brain hasn’t evolved yet. Maybe it never will.
I write this because when I was writing about the AI tools I use, which are spartan, I imagined a world where people relied so heavily on what’s marketed as artificial intelligence that they could no longer communicate with other human beings in person. It’s something researchers are writing papers on, and this one from 2021 seems pretty balanced. In some ways our technology helps; in some ways it hinders.
The paper, though, came out before ‘AI’ became a popular thing, with even The Beatles helping make it famous. Maybe too famous for what it is, which at this point is really a bunch of clever algorithms trained on data that we collectively created. We’re amazed at well-trained morons that cleverly give us what they think we want, much as Netflix suggests things for you to watch. It’s different, but not very different.
When Grammarly came out, promising to make everyone better writers, I rolled my eyes, because what it really allowed was more consistent output. It let really crappy writers look like good ones, and unless they genuinely wanted to learn how to write better, they wouldn’t.
The fact that they’re probably still subscribing to Grammarly makes the point. If something is supposed to make you better at something, like training wheels on a bicycle, you can only tell once the wheels come off. I’m willing to bet that people who have consistently used Grammarly are still using it because it did not make them better writers; it simply made it easier for them to appear to write well.
I could be wrong. I don’t think I am, but if someone has some data on this either way, I’d love to see it.
Speaking for myself, though most of my professional life was in technology, the core of what I actually did was communication. I could communicate with non-technical and technical people alike, which is something I still do online on RealityFragments.com and KnowProSE.com. I was known for it in some circles, making overcomplicated things simple and making the unnecessarily insular dialects of technology more accessible to people.
In all that time, what I learned is that to become a better writer, one has to read and one has to write. Reading rubbish is only good if you know it’s rubbish, because it gives you examples of what not to do when you’re writing. If you don’t know it’s rubbish, you might think it’s the right way to do things and go around spreading more rubbish.
Which brings us back full circle to these large language models, which can’t really tell what is rubbish and what isn’t. They use probability to determine what is most acceptable – think average, banal – based on their deep learning models. The same is true of images and video, I imagine. Without a human around with some skill in knowing what’s rubbish and what isn’t, people will just regurgitate rubbish to each other.
But who decides who has that skill? You can all breathe; it isn’t me. I’ve played with the large language models and found them wanting. They’re like college graduates who crammed for tests, have an infallible memory, but don’t understand the underlying concepts, which, by the way, is also the sort of person we allow to run around in positions of authority making really poor decisions. It’s popular, though.
Communication is a skill. It’s an important skill. It’s such an important skill that if you find yourself using AI tools all the time to do it, I offer that you’re not just robbing yourself…
You’re polluting.