Writing With a LLM Co-Pilot

I wouldn’t characterize myself as an advocate for AI. I’m largely skeptical and remain so. Still, with generative AI everywhere and clogging up self-publishing with its slop, it’s impossible to ignore.

I’ve embarked on a quest to see whether the generative AI that’s available can help me in various ways. One of those ways is writing – not generating text, but helping me write.

Since I don’t really want the companies that own AI to have visibility into what I consider my original work, I installed my own LLM (easy enough) and set about experimenting. Running locally on my machine, it was under my control, and I felt safer sharing my thoughts and ideas with it.

I wondered how I would use it, so I tried it out. This idea I’ve been working into a novel needed a start, and yesterday I got that done with some assistance. Its advice on writing wasn’t bad, and it helped me use a more active voice by nagging me a bit when I had it look at my work – like a good editor, though not a replacement for a human one.

My general approach when writing is to get the draft done and re-read it later. Yesterday, I sweated out about 1,000 words of an introduction to the novel, with foreshadowing and introductions of some of the characters, who had placeholder names. Names seemed pretty important to me in the context of the novel, so the placeholders were holding me back from writing more fluidly – a peculiarity of mine.

The LLM did provide me with names to pick from based on what I gave it, and I researched them on my own – and lo! – that was finally done. I had to rewrite some parts so that it flowed better, which, I must admit, it did once I took the LLM’s advice, though it does nag a bit on some style issues.

All in all, it was a productive day. I treated the LLM as something I could spitball with, and it worked out pretty well. Since an LLM is trained on a lot of text, this seems like a reasonable use case – so long as I don’t let it actually write anything.

I’d tap out a few paragraphs, paste them into the LLM to see what it thought, and it would be helpful. Since I was doing this as I wrote, it commented on the story as I went along and noticed things I had not, offering input like, “There is potential for tension between the characters here that might be worth exploring.”

Of course, it does seem to be equipped with a Genuine People Personality. Its sometimes bubbly demeanor can grate on my nerves.

Much of what I did yesterday I could have done without it, but I think it saved me some time, and I’m more confident of that introduction as well. It is nice that I can be alone writing and have a tool that I can spitball ideas with as I go along. Is it for everyone? I don’t know. I can only tell you how I believe it helps me. At least I know it’s not going to blabber my ideas to someone else.

As I use it in other ways, I’ll re-evaluate the subscriptions I have to AI services like ChatGPT. I don’t need to be bleeding edge; I just want something that works for me. In the end, that’s how we should measure any technology.

Knowing What Something Is.

Thraupis episcopus, the blue-grey tanager, also called the blue jean in Trinidad and Tobago.

While recovering yesterday from the silicon insult, I kept coming back to a quote as I awoke now and then.

You can know the name of a bird in all the languages of the world, but when you’re finished, you’ll know absolutely nothing whatever about the bird… So let’s look at the bird and see what it’s doing — that’s what counts. I learned very early the difference between knowing the name of something and knowing something.

Richard P. Feynman, “What Do You Care What Other People Think?”: Further Adventures of a Curious Character

We use labels to communicate things to other people, and it’s all based on some common perception. The bird pictured is blue-grey, so some very smart person called it a blue-grey tanager, where a tanager is a type of bird that shares characteristics with other birds we call tanagers. Then someone who was taught too much Latin in school decided it looked a lot like the ‘Bishop of Thraupi’ (the literal translation). I have no idea why it’s called a blue jean in Trinidad and Tobago, but it is what it is.

Like most creatures, they’re interesting in their own way. I spent a lot of time watching birds in Trinidad and Tobago, taking pictures of them as a challenge; most ended up on Flickr, and most weren’t that great. In doing that, I learned how the birds interacted with each other and what they ate, and when I talk about a blue-grey tanager, all of that is behind the label. I know what the bird is based on what it does, how it behaves, etc.

It’s not just a label.

In the movie ‘Good Will Hunting’, a similar point was made in one of the more epic tirades delivered by the late, great Robin Williams:

…You’re an orphan right? You think I know the first thing about how hard your life has been, how you feel, who you are, because I read Oliver Twist? Does that encapsulate you?

“Good Will Hunting” (1997), Sean speaking to Will.

The obvious direction to take this would be identity politics and some of the silliness that ensues with it, because clearly labels don’t mean as much as who the people we’re talking about actually are. That’s not where I’m going with it – though in a way, I am.

When we look at generative AI and how it can be trained on the ways we have communicated in the past – be it art, writing, etc. – all it’s really doing is using the labels as puppets. It doesn’t understand what it has spit out in response to a prompt.
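For a feel of how that puppetry works at its crudest, here’s a toy sketch in Python – a Markov chain, vastly simpler than any real generative model, and the sample sentence is my own invention: it strings labels together purely by which words it has seen follow which, with no understanding of any of them.

```python
import random

def build_chain(text):
    """Record which word follows which - pure adjacency, no meaning."""
    words = text.split()
    chain = {}
    for current, following in zip(words, words[1:]):
        chain.setdefault(current, []).append(following)
    return chain

def generate(chain, start, length):
    """Puppet the labels: walk the chain, picking observed followers."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        out.append(random.choice(followers))
    return " ".join(out)

sample = "the bird watches the branch and the bird watches the sky"
chain = build_chain(sample)
print(generate(chain, "the", 8))
```

Every sentence it produces is built from observed word-to-word transitions, so it sounds vaguely like the training text – and it knows absolutely nothing whatever about the bird.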

I’ve met people like that. In fact, in my younger days, I was more like that than I now care to admit – reading about things I didn’t understand, and having my worldview defined by the views of others. Actual experience varies, and that’s the point of all of it. That diversity of experience is what enriches our society, or should. It’s additive.

It’s impossible for us to share all of our experiences with others, but we can share more if we go beyond the labels. That one picture above of the blue-grey tanager did not just happen. It required me to understand the bird to get close enough, with only 3x magnification on one of the original digital cameras, to capture the detail I did. It took trimming the plum tree just right so the branches would be close enough from the top of the stairs, and it required a lot of patience in developing trust with the birds – convincing them that I wasn’t going to eat them.

The very experiences that make us human are the things we need to fall back on to be human these days, not the rote memorization and regurgitation of labels that generative artificial intelligences are much better at than we are.

We need to understand these things.

WordPress.com, Tumblr to Sell Information For AI Training: What You Can Do.

While I was figuring out how to be human in 2024, I missed that Tumblr and WordPress posts will reportedly be used for OpenAI and Midjourney training.

This could be a big deal for people who take the trouble to write their own content rather than filling the web with generative AI text just to spam out posts.

If you’re using WordPress.org, this doesn’t apply to you.

WordPress.com has an option to cross-post to Tumblr, so when you post to WordPress.com it automagically posts to Tumblr as well. You might therefore have to visit both of the posts below and adjust your settings if you don’t want your content to be used in training models.

This doesn’t mean that they haven’t already sent information to Midjourney and OpenAI. We don’t really know, but from the moment you change your settings…

  • WordPress.com: Instructions on how to opt out of the AI training are available here.

    It boils down to one setting in your blog settings on WordPress.com.


  • With Tumblr.com, you should check out this post. Tumblr is trickier, and the link text is pretty small around the images. What you need to remember is that after you select your blog in the left sidebar, you need to use the ‘Blog Settings’ link in the right sidebar.

Hot Take.

When I was looking into all of this, it turned out that Automattic, the owner of WordPress.com and Tumblr.com, is doing the selling.

If you look at your settings – assuming you haven’t changed them yet – you’ll see that the default was set to allow the use of content for training models. The average person who uses these sites to post content is likely unaware, and in my opinion, if they wanted to do this the right way, the default setting would have been opted out.

It’s unclear whether they already sent posts. I’m sure that there’s an army of lawyers who will point out that they did post it in places and that the onus was on users to stay informed. It’s rare for me to use the word ‘shitty’ on KnowProSE.com, but I think it’s probably the best way to describe how this happened.

It was shitty of them to set it up like this. See? It works.

Now, some people may not care. They may not be paying users, or they simply don’t mind, and that’s fine. Personal data? Well, let’s hope that got scrubbed.

Some of us do. I don’t know how many, so I can’t say whether it’s a lot or a few. Yet if Automattic, the parent company of both Tumblr and WordPress.com, posts that it cares about user choices, it hardly seems appropriate that the default choice was opted in.

As a paying user of WordPress.com, I think it’s shitty to assume I would allow what I write, using my own brain, to be used for a training model that the company gets paid for. I don’t see any of that money. To add injury to that insult of my intelligence, Midjourney and OpenAI also offer subscriptions to the trained AI, one of which I also pay for (ChatGPT).

To make matters worse, we sort of have to take the training models on the word of those that use them. They don’t tell us what’s in them or where the content came from.

This is my opinion. It may not suit your needs, and if it doesn’t, have a pleasant day. But if you agree with it, go ahead and make sure your blog is not allowing third-party data sharing.

Personally, I’m unsurprised at how poorly this has been handled. Just follow some of the links early on in the post and revel in dismay.

AI is Not A Muse(d).

Image generated by Inspirobot.me

There’s been a lot said, written, and shoved into learning models without the permission of authors and artists to train artificial intelligence models.

Of course, artificial intelligence investors don’t want to pay for using copyrighted works. I’d wager in a legal sense one could make a case about fair use, and I must admit a bit of amusement at the conundrum that has been created in that regard.

Yet when I’m stuck on writing something, I generally don’t turn to those AI models, which, despite being credited with creativity, just have more access to data from which to draw a nexus. What most people consider creative is almost always exactly that: drawing connections. It’s why I like connecting seemingly unrelated topics.

AI is not a muse.

No, when I’m writing and I hit some kind of block, I generally go check out Inspirobot.me. It’s reminiscent of a silly program I wrote on an Apple IIe that simply spliced words and phrases together to form insults. As a teenager without many friends, I found this very amusing. When it began to get stale, I’d add more words and phrases.

Inspirobot does something similar in a technical sense, with some great imagery in the background – as above. Literature is a creation.

AI investors are trying to change that, as if all that humanity has to offer has already happened.

Silly investors.

Beyond Artificial Intelligence

In my daily readings yesterday, I came across Unleashing the Power of AI: Preparing for Social Change alongside Technological Advancements at a blog I regularly read. It’s an odd one out for the blog, so it definitely caught my eye. The blog, Be Inspired!, is worth a read; this was just a single tech-related post on something I have been writing about.

She hit some of the high notes, such as:

…Furthermore, the unequal access to and distribution of AI technology may exacerbate societal divisions. There is a significant risk of deepening the digital divide between those who have access to AI advancements and those who do not. To bridge this gap, it is crucial to implement inclusive policies that promote equal access to AI education and training across all demographics. Efforts should be made to democratize access to AI tools, ensuring that everyone has equal opportunities to benefit from this technological revolution…

Unleashing the Power of AI: Preparing for Social Change alongside Technological Advancements, Be Inspired, July 23rd, 2023.

It is interesting that, having read her blog, I noted a ‘voice change’ with the author – this isn’t her standard fare – but the high points were hit well. It also happens that the quotation above is the sort of thing that keeps getting thrown around as if someone else is going to do it.

Someone else is not going to do it. I’ve noted with ChatGPT and others, when I have used them and asked questions about how artificial intelligence will impact human society, that the large language models are very good at using those neural networks, deep learning algorithms, and datasets we have no insight into to say, in a very verbose way, “Well, you silly humans need to fix yourselves.”

Of course, it’s not sentient. It’s predictive text on steroids, crack and LSD. It only has information given to it, and that information likely included stuff written by people who said, “Hey, we need to fix these issues”.

Well, we do need to fix a lot of issues with human society, even as the hype cycle spits out a lot of gobbledygook about the technological singularity while not highlighting the issues of bias in the data that the authors of our future point out every now and then.

Yet there will always be bias, so what we are really talking about is human bias – and when we speak as individuals, we mean our own. If you’re worried about losing your income, that’s a fair bias and should be in there. It shouldn’t be glossed over as ‘well, we will need to retrain people, and we have no clue about it other than that’. If you’re worried that the color of your skin doesn’t show up when you generate AI images, that too is a fair bias – but to be fair, we can’t all be shades of brown, so in our expectations of how it should be, we need to address that personal bias as well.

It does bug me that every time I generate an image of a human, it depicts people with less pigment, and even then I’m not sure which shade of brown I want. It’s boggling to consider, and yes, it does reinforce stereotypes. It’s a very complicated issue that arises because we all want to see familiar versions of ourselves. I have some ideas, but why would I share them publicly, where some large language model may snatch them up? Another problem.

Most of the problems we have with the future of artificial intelligence stem from our past, and the past that we put into these neural networks and how we create our deep learning algorithms is only a small part of it. We have lines on maps drawn by now-dead people, reinforced by decades, if not centuries or even millennia, of varying degrees of cultural isolation.

We just started throwing food at each other on the Internet when social media companies inadvertently reinforced much of that cultural isolation by giving people what they want – and what they want is familiar and biased toward their own views. It’s a human thing. We all do it, and then we say our way is best. We certainly lack originality on that frontier.

We have to face the fact that technology is an aspect of humanity. Quite a few humans these days drive cars without knowing much about how they work. I recently asked a young man whether, when his car was tuned, they had lightened the flywheel, since I noticed his engine worked significantly harder on an incline – and he didn’t know what a flywheel was.

However, we do have rules on how we use cars. When drunk driving was an issue, some mothers stepped up and forced the issue, and now drunk driving carries an even higher risk than it already did: you might go to jail over it and ponder why that night became so expensive. People got active; they pressed the issue.

The way beyond AI is not through the technology, which is only one aspect of our humanity. It’s through ourselves – through all the other aspects of our humanity, which should be more vocal about artificial intelligence. You might be surprised to learn that they were, even before this hype cycle began.

Being worried about the future is nothing new. Doing something about it, by discussing it openly beyond the technology perspectives, is a start in the right direction because all the things we’re worried about are… sadly… self-inflicted by human society.

After all the evolution that we think separates us from our primate cousins, we are still fighting over the best trees in the jungle – territory – in ways that are much the same, but also in new ways, where dominant voices projecting just the right confidence make people afraid to look stupid, even when their questions and answers may be better than those of the voices themselves.

It’s our territory – our human territory – and we need to take on the topics so ripe for discussion.

Communicating: Don’t Depend on AI.

Our planet is full of communication. Elephants communicate over great distances, whales speak to each other across the depths and distances, and we do the same through a broad network we call the Internet now, built on the previous systems. Some might say it’s a nervous system of sorts, but if it’s a nervous system it does seem disconnected from a brain. Or maybe the brain hasn’t evolved yet. Maybe it never will.

I write this because, when I was writing about the AI tools I use – which are spartan – I imagined a world where people relied so heavily on what’s marketed as artificial intelligence that they could no longer communicate with other human beings in person. It’s something researchers are writing papers on, and this one from 2021 seems pretty balanced. In some ways our technology helps; in some ways it hinders.

The paper, though, came before ‘AI’ became a popular thing, with even The Beatles helping make it famous. Maybe too famous for what it is, which at this point is really a bunch of clever algorithms trained on data that we collectively created. We’re amazed at well-trained morons that cleverly give us what they think we want, much as Netflix suggests things to show you. It’s different, but not very different.

When Grammarly came out, promising to make everyone better writers, I rolled my eyes, because what it allowed for was more consistent output. It allowed really crappy writers to look like good ones, and unless they really wanted to learn how to write better, they wouldn’t.

The fact that they’re probably still subscribing to Grammarly would make the point. If something is going to make you better at something, like training wheels on a bicycle, you can only tell if you take off the wheels. I’m willing to bet that people who have consistently used Grammarly are probably still using it because it did not make them better writers; it simply made it easier for them to appear that they wrote well.

I could be wrong. I don’t think I am, but if someone has some data on this either way, I’d love to see it.

Speaking for myself, though most of my professional life was in technology, the core of what I actually did was communication. I could communicate with non-technical and technical people alike, which is something I still do online on RealityFragments.com and KnowProSE.com. I was known for it in some circles – making overcomplicated things simple and making the unnecessarily insular dialects of technology more accessible to people.

In all that time, what I learned is that to become a better writer, one has to read and one has to write. Reading rubbish is only good if you know it’s rubbish, because it gives you examples of what not to do when you’re writing. If you don’t know it’s rubbish, you might think it’s the right way to do things and go around spreading more rubbish.

Which brings us back full circle to these large language models, which can’t really tell what is rubbish and what is not. They use probability to determine what is most acceptable – think average, banal – based on their deep learning models. The same is true of images and video, I imagine. Without a human around with some skill in knowing what’s rubbish and what isn’t, people will just be regurgitating rubbish to each other.
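That pull toward the ‘most acceptable’ is easy to sketch. Here’s a toy illustration in Python – nothing like a real model’s scale, and the words and probabilities are entirely made up – showing that always picking the most probable next word yields the safest, most average result:

```python
# A made-up "next word" distribution. Greedy decoding always takes the
# most probable continuation - i.e. the most average, banal one.
next_word_probs = {
    "nice": 0.40,       # the safe, average choice
    "pleasant": 0.30,
    "luminous": 0.05,   # rarer, more interesting choices lose out
    "iridescent": 0.02,
}

def greedy_pick(probs):
    """Return the single most probable continuation."""
    return max(probs, key=probs.get)

print("The weather today is", greedy_pick(next_word_probs))
```

Real models sample with some randomness rather than always taking the maximum, but the probabilities they sample from were learned from the average of everything they read, so the banality runs deeper than the picking rule.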

But who picks who has that skill? You can all breathe; it isn’t me. I’ve played with the large language models and found them wanting. They’re like college graduates who crammed for tests and have an infallible memory but don’t understand the underlying concepts – which, by the way, is also something we allow to run around in places of authority making really poor decisions. It’s popular, though.

Communication is a skill. It’s an important skill. It’s such an important skill that if you find yourself using AI tools all the time to do it, I offer that you’re not just robbing yourself…

You’re polluting.

Artificial Creativity

Image generated by DeepAI: artificial creativity in a cup overflowing with silicon, impressionism

On the left I gave DeepAI a prompt of “Artificial creativity in a cup overflowing with silicon” in an impressionist style. It didn’t generate what I was thinking of, and it didn’t give me much.

I’d begun fiddling with ChatGPT, as I wrote about in “A Chat With GPT on AI”. There, the answers were ambiguous – though necessarily so. In querying it on many things, though, I found the same style of answer more than once, and a certain fondness for words. This should be no surprise with present language models.

I’ve split myself into two worlds, since within me there is much overlap between technology, other interests, and being creative. Previous experiments with doing it all on one site just seemed to confuse people. After all, they don’t expect someone to be both. In that regard, KnowProSE.com is more technical, while here on RealityFragments.com I do everything else.

Oddly, artificial intelligence is thought to challenge both the technical and the creative – and those with the overlap.

So I decided to ask ChatGPT to write something on creativity and see what it had to say. I was impressed with its rationality yesterday. Today, let’s see what creativity I can get out of it.

ChatGPT Creative prompt
ChatGPT generated the following:

Artificial intelligence (AI) has made remarkable progress in recent years. From self-driving cars to natural language processing, AI is revolutionizing industries and transforming the way we live and work. But can AI be creative? The answer is a resounding yes.

AI algorithms can generate new and innovative ideas, produce original artwork, compose music, and even write novels. In this blog post, we’ll explore some examples of how creative an artificial intelligence can be.

  1. Art

AI algorithms can create unique and stunning pieces of artwork. These algorithms use machine learning techniques to analyze existing artwork, understand their composition and style, and then generate new pieces based on that information. One example of such AI-generated artwork is the “Portrait of Edmond de Belamy” by the Paris-based art collective Obvious. The portrait was sold for over $432,000 at Christie’s auction house in 2018.

  2. Music

AI algorithms can compose original music in various genres, including classical, jazz, and pop. These algorithms analyze existing music and use machine learning to identify patterns and structures. They can then use this information to create new pieces that are both unique and appealing to the human ear. One example of such an AI-generated piece of music is “Daddy’s Car,” which was composed by the Swedish AI startup Amper Music.

  3. Writing

AI algorithms can also write original works of fiction, poetry, and even news articles. These algorithms use natural language processing to understand the structure of language and then generate new pieces based on that understanding. One example of such an AI-generated work is “The Day a Computer Writes a Novel,” which was written by an AI program developed by a team at the Future University Hakodate in Japan.

  4. Inventing

AI algorithms can even invent new things. For example, a team of researchers at the University of Cambridge used an AI algorithm to design a new type of superconductor material. The algorithm analyzed data from thousands of existing superconductors and then used machine learning to identify new combinations of elements that could be used to create a more efficient material.

Conclusion

Artificial intelligence has come a long way in recent years, and its creative potential is becoming increasingly apparent. From generating new artwork and music to writing novels and inventing new materials, AI algorithms can produce original and innovative works that are both impressive and inspiring. As AI continues to advance, it’s exciting to think about what other creative feats it may achieve in the future.

Well, that is impressive – in the way someone showing a resume is impressive. But by itself, it doesn’t show much creativity, does it? It basically writes around some references. This is not to say that AI cannot be creative – the images I use these days are from DeepAI, as an example – but that creativity is limited to what the AI experiences through what it is trained on, and that limitation looks a lot like what we train it with.

Certainly, I think writing will be affected – and I have been mildly disillusioned about that. But I also believe that since an artificial intelligence only uses the information we share with it, it can’t truly be creative in a human sense, because it lacks the human experience. It might throw words together or create images and music, but it can’t tell you how it feels the first time you eat ice cream, or the way you feel when it falls.

It regurgitates in our language because it has no experience beyond what we feed it. It has no expectations, no way to love or hate other than the biases we feed it. When it comes to being human, it is not a competitor.

Unfortunately, though, it will be damned good at marketing.