The LLM Copilot is More of a Companion.

I almost forgot to write something here today. I’ve been knocking out scenes and finding the limitations of the LLM as I go along, which is great.

The particular LLM I’m working with is llama3, which I’ve tweaked and saved as I’ve worked with it.

It’s fun because it sucks.

It can comfortably analyze about 500–1,000 words at a time – figure a scene at a time. Meanwhile, it forgets all the other scenes it has seen. It does ask pretty decent questions within a scene, which is a nice way to make sure that the parts it can’t answer are the ones you don’t want the reader to be able to answer yet. It echoes the questions a reader might ask – if that reader had memory issues.
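Since the practical window is a scene at a time, it helps to hand the model one scene-sized chunk rather than the whole draft. A minimal sketch of what I mean, in Python – the function name and the 800-word cap are my own choices, not anything the LLM requires:

```python
def chunk_scenes(text: str, max_words: int = 800) -> list[str]:
    """Split a draft into chunks of roughly max_words words,
    breaking on blank-line (paragraph/scene) boundaries."""
    chunks: list[str] = []
    current: list[str] = []
    count = 0
    for para in text.split("\n\n"):
        n = len(para.split())
        # Flush the chunk if adding this paragraph would blow past the cap.
        if current and count + n > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += n
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

A single scene longer than the cap simply becomes its own oversized chunk; everything else gets packed up to the limit, ready to paste into the LLM one chunk at a time.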

It’s terrible, however, at following along with what was previously written. Despite saving sessions and so on, it just lumbers along treating each chunk of text as if it stood alone, with only a hazy sense of what was written before. It mixes up character names, for example.

I’ve come to think of it as a funny mirror for writing. It kinda gets it, but not really. I’m happy with that. I’m the writer, it’s just a funny mirror I bounce ideas off of.

It never comes up with original ideas – how could it? It’s trained on things that have been written before, and sure, it can string words together in ways that can impress some people – but it strings together words just so based on what it has seen before.

It lacks imagination and vision, and because of that, it’s terrible at any form of long-form prose. Maybe some LLMs are better at it, but I’m perfectly happy with this one not being good at imagination and vision.

That’s my job.

What it does do, even when it screws up – especially when it screws up – is keep me on task. I don’t know how many other people have that particular issue, but if you do, LLMs are pretty good for that, for developing characters, and… for shaking your head at.

Is it worth the trouble of installing an LLM? I don’t know. For me, I think so. Having a goofy tool asking dumb questions is handy.

Writing With an LLM Co-Pilot

I wouldn’t characterize myself as an advocate for AI. I’m largely skeptical and remain so. Still, with generative AI all over, clogging up self-publishing with its slop, it’s impossible to ignore.

I’ve embarked on a quest to see whether generative AI that is available can help me in various ways. One of these ways is with writing, not in generating text but helping me write.

Since I don’t really want companies that own AI to have visibility into what I consider my original work, I installed my own LLM (easy enough) and set about experimenting. With it local, on my machine, I had control of it and felt safer sharing my thoughts and ideas with it.

I wondered how I would use it, so I tried it out. This idea I’ve been working into a novel needed a start, and yesterday I got that done with some assistance. Its advice on writing wasn’t bad, and it helped me keep to a more active voice by nagging me a bit when I had it look at my work – like a good editor, though not a replacement for a human one.

The general approach I take when writing is to get the draft done and re-read it later. Yesterday, I sweated it out over about 1,000 words of an introduction to the novel, with foreshadowing and introductions of some characters who had placeholder names. Names seemed pretty important in the context of the novel, so they were a sort of ‘hold back’ keeping me from writing more fluidly – a peculiarity of mine.

The LLM did provide me with names to pick from based on what I gave it, and I researched them on my own – and lo! – that was finally done. I had to rewrite some parts so the text flowed better – which, I must admit, it did once I took the LLM’s advice – though it nags a bit about some style issues.

All in all, it was a productive day. I treated the LLM as something I could spitball with, and it worked out pretty well. That seems like a reasonable use case, so long as I don’t let it actually write anything, since an LLM is trained on a lot of other people’s text.

I’d tap out a few paragraphs, paste them into the LLM to see what it thought, and it would be helpful. Since I was doing this as I wrote, it commented on the story as I went along and noticed things I had not, offering input like, “There is potential for tension between the characters here that might be worth exploring.”

Of course, it does seem to be equipped with a Genuine People Personality. It sometimes comes across as bubbly in a way that grates on my nerves.

Much of what I did yesterday I could have done without it, but I think it saved me some time, and I’m more confident of that introduction as well. It is nice that I can be alone writing and have a tool that I can spitball ideas with as I go along. Is it for everyone? I don’t know. I can only tell you how I believe it helps me. At least I know it’s not going to blabber my ideas to someone else.

As I use it in other ways, I’ll re-evaluate the subscriptions I have to AI services like ChatGPT. I don’t need to be bleeding edge; I just want something that works for me. In the end, that’s how we should be measuring any technology.

Experimenting With LLMs.

I’ve installed AIs locally so I can do my own experimentation without signaling to tech bros what I’m doing. I’m trying to get away from the subscription models that they’re selling.

I’m auditioning various models to find their strengths and weaknesses, mainly to help me with infoglut. So much of what is written on the internet is just a new rendition of the same crap, particularly with AI these days, and I want help finding the things that are new, or that reveal something new from the same information.

If you want to know how to do this yourself, it’s not hard, and it costs nothing. I wrote up a quick ‘How To install your own LLM’ here.

This requires training a model. Presently I’ve been working with Llama3. It has been a little too bubbly for my taste, but after a day of tweaking and having it read a few books from Gutenberg.org, I fired it up this morning and this happened.

Now it remembers who I am, which is always nice, but I decided to ask it what I should call it. Its answer is interesting. By saving the model after our interactions, it is learning to a degree – but it’s not human, and no, I know it’s not actually intelligent. It has been an interesting endeavor, though.
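For the curious, the ‘tweaking and saving’ doesn’t involve retraining weights. With Ollama – which happens to be how I run llama3 locally; your setup may differ – you can bake a system prompt and parameters into a named model with a Modelfile. The wording below is just an illustration, not my exact settings:

```
# Modelfile – an illustrative example, not my actual configuration
FROM llama3
PARAMETER temperature 0.7
SYSTEM """You are a blunt, concise writing assistant. Skip the
pleasantries; flag passive voice, repetition, and weak phrasing."""
```

Then `ollama create teslai -f Modelfile` registers it, and inside an interactive session `/save teslai` writes the current session back into the model – which is the ‘saving’ I keep referring to.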

I’ve fed it some of my writing, and it called me out on not using enough active voice. That’s a good tip.

The overall plan is to have it do some of the heavy lifting in dealing with infoglut. I spend way too much time daily reading stuff that isn’t worth reading, because I don’t know it’s not worth reading until I’ve read it.

The plan is to outsource that to ‘Teslai’, or whatever model I choose in the future. By allowing it to get to know me – not something I would do with an LLM controlled by someone else – it might be able to tailor things better for me, not based on what I used to like, but based on the patterns it finds in my own behavior. And even then, like anything else, I’ll take it with a healthy dose of salt.

Killing Off the Geese that Lay Golden Eggs

We all know the story of the goose that laid the golden eggs, and how the idiot who killed the goose got no more golden eggs. It has long been considered good practice not to kill something that produces important things for you1.

This is what some companies are doing, though, when it comes to AI. I pointed out here that companies were doing it before AI, too, though in the example of HuffPost the volunteers who once contributed to its success simply got left out in the cold.

It is a cold world we live in, and colder each day. Yet more people are being impacted by generative AI companies, from writing to voice acting to deepfakes of mentionable people doing unmentionable things.

Who would contribute content willingly to any endeavor when it could simply be used to replace them? OK, aside from idiots, who else?

I did hear a good example, though. Someone who is doing research, and getting paid to do it, has no issue with his work being used to train an AI, and I understood his position immediately: he’s making enough, and the point of doing research is to have it used. But, as I pointed out, he gets paid, and while I don’t expect he’s got billions in the bank, I’d say that as long as he’s still getting paid to do research, all will be well for him.

Yet not all of us are. Everyone seems intent on the golden eggs except the geese that can lay them. If you can lay golden eggs, you don’t need to go killing geese looking for them. And – because it seems tech bros need reminding – dead geese do not lay eggs.

  1. I’ve often wondered if this is how Hindus came to not eat beef, as Indian cuisine relies heavily on the products of the cow – so a poor family killing a cow for meat would not make sense. Maybe not, but it’s plausible. ↩︎

The Challenge.

In researching how to opt out of WordPress.com and Tumblr.com selling my content to Midjourney and OpenAI, I ran across some thoughtful writing on opting out of AI by Justin Dametz.

This is someone I likely wouldn’t cross paths with, since I’m not someone who is very interested in theology, which he writes quite a bit about. I imagine he could say the same about my writing, but we have a nexus.

His piece was written last year, and it echoes some of my own sentiments about the balance between AI and writing; he makes solid points about young people learning how to communicate for themselves.

I tend to agree.

Yet, I am also reminded of learning calculus without a calculator. Scientific calculators were fairly new in the late 1980s when I learned calculus, and they even came solar powered so we wouldn’t have to fiddle with the batteries. These were powerful tools, but my class wasn’t allowed to use them until we had the fundamentals down. This, of course, did not stop us.

Speaking for myself, I wrote code in BASIC on an old Vic-20 that allowed me to check my answers. This didn’t really help me with homework or tests, since we were required to show our working, and if we got the wrong answer but did it the right way, we still got the majority of the points for the question. We had to demonstrate the fundamentals.

How does one demonstrate the fundamentals of writing? How does one demonstrate the ability to communicate without crutches? The answer is by ensuring none of the crutches are available to help. I suppose we could have students write in Faraday cages so we could evaluate what they produce – or we could simply reward original writing, because the one thing artificial intelligence cannot do is imagine. While it can relate the human experience through the distillation of statistics and words, it doesn’t itself understand the human experience.

Generative AIs can spit out facts, narratives they’ve seen before, and images based on what they’ve been trained on – but they add nothing really new to the human experience except the ability to connect things across whatever human knowledge we have trained them on.

But how do we teach children how to write without it? How do we then teach students how to learn and be critical of the results we get?

First, we have to teach them to learn instead of chasing grades – a problem that has confounded us for decades – and to value ability over titles and fancy pieces of paper to hang on the walls.

That’s the next challenge.

Imagination

Having now created the God of Technology, I started perusing things related to technology and found a post on the Arts & Crafts, How-To’s, Upcycling & Repurposing blog.

“Good and Bad” is based on a daily prompt, “What technology would you be better off without, why?”. Hidden within, Melodie writes:

Games…. Of course we had a Nintendo and Atari. But we had what is called our “imagination “. We went outside to play, rode our bikes. Were told to come home before dark. Now a days, kids are so zoned into the tv , cell phones, Xbox or Play station...

Imagination. We all sort of understand what it is, but what is it? The first dictionary definition (Merriam Webster) of imagination is:

“the act or power of forming a mental image of something not present to the senses or never before wholly perceived in reality”.

In other words, it’s the ability to create within the reality of our own minds – minds which are fed by the senses but not limited to them. In its own way, imagination could be considered a sense.

I don’t know that there is more or less imagination since I was a child. What I do know is that technology is constantly begging us for attention because it massages the fun bits of our brains, and maybe kids these days aren’t allowed to imagine as much because imagination always works best for me with… time.

Imagination draws from things we have experienced, both perceived and imagined – in fact, memory and imagination tickle the same parts of the brain. It’s why memory isn’t as trustworthy as we would like to think, and why the hallucinations of artificial intelligences seem so much like the same thing.

Books, video games, movies, music – all of these feed into what we imagine with, like a learning model for an artificial intelligence. The more you cram in there, the more tools you have – except that when you spend time considering the same inputs, you find there’s more to them with a little imagination.

When I play a video game, watch a movie, and so on, I’m exploring someone else’s imagination rather than using my own – and maybe using our own is an important part of who we are as humans.

Replacing imagination with technology doesn’t seem like a great answer.

Subjective AI Results.

Banality. I don’t often use the word, and I don’t often encounter it, largely because ‘unoriginal’ works better for me. That said, one of the things I encountered while playing with a new toy for me, Tumblr, used it effectively and topically:

“Project Parakeet: On the Banality of A.I. Writing” nailed it, covering the same basic idea I have expressed repeatedly in things I’ve written, such as “It’s All Statistics” and “AI: Standing on the Shoulders of Technology, Seeking Humanity”.

It’s heartening to know others are independently observing the same things, though I do admit I found the prose a bit more flowery than my own style:

“…What Chatbots do is scrape the Web, the library of texts already written, and learn from it how to add to the collection, which causes them to start scraping their own work in ever enlarging quantities, along with the texts produced by future humans. Both sets of documents will then degenerate. For as the adoption of AI relieves people of their verbal and mental powers and pushes them toward an echoing conformity, much as the mass adoption of map apps have abolished their senses of direction, the human writings from which the AI draws will decline in originality and quality along, ad infinitum, with their derivatives. Enmeshed, dependent, mutually enslaved, machine and man will unite their special weaknesses – lack of feeling and lack of sense – and spawn a thing of perfect lunacy, like the child of a psychopath and an idiot…”

Walter Kirn, ‘Project Parakeet: On the Banality of A.I. Writing’, Unbound, March 18th, 2023.

Yes. Walter Kirn’s writing had me re-assessing my own opinion, not because I believe he’s wrong, but because I believe we are right. This morning I found it led to at least one other important question.

Who Does Banality Appeal To?

You see, the problem here is that banality is subjective: what is original for one person is not original for another. I have seen people look shocked when I discovered something they already knew and expressed glee – it wasn’t original for them, but it was original for me. By the same token, I have written and said things I believed were mundane, only to have others think they were profound.

Banality – lack of originality – is subjective.

So why would people be so enthralled with the output of these large language models (LLMs), failing a societal mirror test? Maybe because the writing that comes out of them is better than their own. It’s like Grammarly on steroids – and Grammarly doesn’t make you a better writer, it just makes you look like a better writer. It’s like being dishonest on your dating profile.

When I prompted different LLMs about whether the quality of education was declining, the responses were non-committal and evasive, some more flowery than others about it. I’d love to see an LLM say, “Well shit, I don’t know anything about that,” but instead we get what they expect we want to see. It’s like asking someone a technical question in an interview that they don’t have the answer to, and they just shoot a shotgun of verbiage at you – a violent emetic eruption of words that doesn’t answer the question.

“I don’t know”, in my mind, is a perfectly legitimate response and tells me a lot more than having to weed through someone’s verbal or written vomit to see if they even have a clue. I’m the person who says, “I don’t know”, and if it’s interesting enough to me for whatever reason, the unspoken is, “I’ll find out”.

The LLMs can’t find out. They’re waiting to be fed by their keepers, and their keepers have some pretty big blind spots, because we, as human beings, have a lot more questions than answers. We can hide behind what we do know, but it’s what we don’t know that gives us the questions.

I’ve probably read about 10,000 books in my lifetime, give or take, at the age of 51. That’s largely because I am of Generation X, and we didn’t have the flat screens children of later generations have had. My measure of banality, if there could be such a measure, would therefore be higher than that of people who have read less – and that’s just books. There are websites, all manner of writing on social media, the blogs I check out, and so on, and those have become more refined because I have a low tolerance for banality and mediocrity.

Meanwhile, many seem content with the banal and mediocre. This is not elitism. You see it when a child learns something new and expresses joy: an adult looks at them in wonder, wishing they could enjoy that originality again. We never get to go back, but we get to visit with children.

Going to bookstores used to be a true pleasure for me, but now when I look at the shelves I see less and less that is new – the rest a bunch of banality with nice covers. Yet books continue to sell, because people don’t see that banality. My threshold for originality is higher, and in a way it’s a curse.

The Unexpected Twist

In the end, if people actually read what these things spit out, the threshold for originality should increase: once the honeymoon period with their LLM of choice is over, they’ll recognize the banality.

In a way, maybe it’s like watching children figure things out on their own. Some things cannot be taught, they have to be learned. Maybe the world needs this so that it can appreciate more of the true originality out there.

I’m uncertain. It’s a ray of hope in a world where marketers would have us believe in a utopian future they have never delivered, while dystopia creeps in quietly through the back door.

We can hope, or we can wring our hands, but one thing is certain:

We’re not putting it back in the box.