The LLM Copilot is More of a Companion.

I almost forgot to write something here today. I’ve been knocking out scenes and finding the limitations of the LLM as I go along, which is great.

The particular LLM I’m working with is llama3, which I’ve tweaked and saved as I’ve worked with it.

It’s fun because it sucks.

It can easily handle about 500–1,000 words at a time – figure a scene at a time. Meanwhile, it forgets all the other scenes it has seen. It does ask pretty decent questions within a scene, which is a nice way to make sure that the parts it can’t answer are the ones you don’t want the reader to be able to answer yet. It echoes the questions a reader might ask – if that reader had memory issues.
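Since the model only copes with a scene’s worth of text at a time, feeding it scene-sized chunks is the natural workflow. Here’s a minimal sketch of that chunking in Python – the 800-word limit, the helper name, and the Ollama endpoint are my assumptions for illustration, not anything prescribed above:

```python
def chunk_words(text, max_words=800):
    """Split a manuscript into chunks of roughly max_words words,
    breaking only on paragraph boundaries. A single paragraph longer
    than max_words is kept whole rather than split mid-paragraph."""
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        # Flush the current chunk if adding this paragraph would overflow it.
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

# Each chunk could then be pasted (or POSTed) to a local model one scene
# at a time -- Ollama, for example, exposes an HTTP endpoint at
# http://localhost:11434/api/generate. Left commented out here since it
# assumes a running local server:
#
# import requests
# for chunk in chunk_words(manuscript):
#     requests.post("http://localhost:11434/api/generate",
#                   json={"model": "llama3",
#                         "prompt": "Critique this scene:\n" + chunk})
```

Nothing fancy – just enough to hand the model one scene at a time instead of expecting it to remember a whole manuscript.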

It’s terrible, however, at following along with what was previously written. Despite saving and so on, it just lumbers along, treating each chunk of text as if it stands alone, with perhaps a vague sense of some of the things you had written before. It mixes up character names, for example.

I’ve come to think of it as a funny mirror for writing. It kinda gets it, but not really. I’m happy with that. I’m the writer, it’s just a funny mirror I bounce ideas off of.

It never comes up with original ideas – how could it? It’s trained on things that have been written before, and sure, it can string words together in ways that can impress some people – but it strings together words just so based on what it has seen before.

It lacks imagination and vision, and because of that it’s terrible for any form of long-form prose. Maybe some LLMs are better at it, but I’m perfectly happy with mine not being good at imagination and vision.

That’s my job.

What it does do, even when it screws up – especially when it screws up – is keep me on task. I don’t know how many other people have that particular issue, but if you do, LLMs are pretty good for that, for developing characters, and… for shaking your head at.

Is it worth the trouble of installing an LLM? I don’t know. For me, I think so. Having a goofy tool asking dumb questions is handy.

Writing With an LLM Co-Pilot

I wouldn’t characterize myself as an advocate for AI. I’m largely skeptical and remain so. Still, with generative AI all over and clogging up self-publishing with its slop, it’s impossible to ignore.

I’ve embarked on a quest to see whether generative AI that is available can help me in various ways. One of these ways is with writing, not in generating text but helping me write.

Since I don’t really want companies that own AI to have visibility into what I consider my original work, I installed my own LLM (easy enough) and set about experimenting. With it local, on my machine, I had control of it and felt safer sharing my thoughts and ideas with it.
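For anyone wondering what “installed my own LLM” can look like in practice, here’s a hedged sketch using Ollama as one popular local runner – the post doesn’t say which tool was used, and the custom model name and system prompt below are purely illustrative assumptions:

```shell
# One way to run llama3 locally with Ollama (an assumption -- the tool
# actually used isn't named). Fetching and running the base model:
#   ollama pull llama3
#   ollama run llama3

# "Tweaking and saving" a variant can be done with a Modelfile; the
# system prompt here is illustrative, not the author's actual prompt:
cat > Modelfile <<'EOF'
FROM llama3
PARAMETER temperature 0.7
SYSTEM "You are a writing companion. Ask questions about each scene the author pastes in; do not write prose for them."
EOF

# Then the tweaked variant is saved locally and reusable:
#   ollama create writing-companion -f Modelfile
#   ollama run writing-companion
```

The point of doing it this way is exactly the one above: everything stays on your own machine, so nothing you paste in leaves your control.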

I wondered how I would use it, so I tried it out. This idea I’ve been working into a novel needed a start, and yesterday I got that done with some assistance. Its advice on writing wasn’t bad, and it helped me keep a more active voice by nagging me a bit when I had it look at my work – like a good editor, though not a replacement for a human editor.

The general approach I take when writing is to get the draft done and re-read it later. Yesterday, I sweated out about 1,000 words of an introduction to the novel, with foreshadowing and introductions of some of the characters, who had placeholder names. Names seemed pretty important to me in the context of the novel, so they were a sort of ‘hold back’ on writing more fluidly – a peculiarity I have.

The LLM did provide me with names to pick from based on what I gave it, and I researched them on my own – and lo! – that was finally done. I had to rewrite some parts so the introduction flowed better, which I must admit it did once I took the LLM’s advice, though it does nag a bit on some style issues.

All in all, it was a productive day. I treated the LLM as something I could spitball with, and it worked out pretty well. This seems like a reasonable use case, so long as I don’t let it actually write anything, since an LLM is trained on a lot of other people’s text.

I’d tap out a few paragraphs and paste them into the LLM to see what it thought, and it would be helpful. Since I was doing this as I wrote, it commented on the story as I went along and noticed things I had not, offering input like, “There is potential for tension between the characters here that might be worth exploring.”

Of course, it does seem to be equipped with a Genuine People Personality. It sometimes comes across as a bubbly personality that can grate on my nerves.

Much of what I did yesterday I could have done without it, but I think it saved me some time, and I’m more confident of that introduction as well. It is nice that I can be alone writing and have a tool that I can spitball ideas with as I go along. Is it for everyone? I don’t know. I can only tell you how I believe it helps me. At least I know it’s not going to blabber my ideas to someone else.

As I use it in other ways, I’ll re-evaluate subscriptions I have to AI services like ChatGPT. I don’t need to be bleeding edge, I just want something that works for me. In the end, that’s how we should be measuring any technology.

Robots Portraying AI, and the Lesser-Known History of Economic Class.

Some time ago, someone on some social media platform challenged why we tend to use robots to symbolize AI so much. I responded off the cuff about it being about how we have viewed artificial intelligence since the beginnings of science fiction – in fact, even before.

We wanted to make things in our image because, to us, we’re the most intelligent species on the planet. Maybe we are, but given our history I certainly hope not. My vote is with the cetaceans.

Still, I pondered the question off and on – not because it was merely a good question but because, despite my off-the-cuff answer, it was in my eyes a great question. It tends to tell us more about ourselves, or ask better questions about ourselves. The history runs deep.

Early History.

Talos was a bronze automaton in Greek mythology, said to patrol the shores of Crete, hurling rocks at enemy ships to defend the kingdom. It wasn’t just in the West, either. The Chinese text “Liezi” (circa 400 BCE) also mentions an automaton. In Egypt, statues of gods would supposedly nod their heads as well, though the word ‘robot’ is much more recent.

Domo Arigato, Mr. Radius: Labor and Industry.

The word ‘robot’ was first used to denote a fictional humanoid in a 1920 Czech-language play, R.U.R. (Rossumovi Univerzální Roboti – Rossum’s Universal Robots), by Karel Čapek. The play was a critique of mechanization and the ways it can dehumanize people.

‘Robot’ derives from the Czech word ‘robota’, which means forced labor, compulsory service or drudgery – and the Slavic root rabu: Slave.

…When mechanization overtakes basic human traits, people lose the ability to reproduce. As robots increase in capability, vitality, and self-awareness, humans become more like their machines — humans and robots, in Čapek’s critique, are essentially one and the same. The measure of worth, industrial productivity, is won by the robots that can do the work of “two and a half men.” Such a contest implicitly critiques the efficiency movement that emerged just before World War I, which ignored many essential human traits…

“The Czech Play That Gave Us the Word ‘Robot’”, John M. Jordan, The MIT Press Reader, July 29th, 2019

As the quoted article points out, there are common threads to Frankenstein, by Mary Shelley, from roughly a century earlier, and we could consider the ‘monster’ to be a flesh automaton.

In 1920 – when the League of Nations had just begun, when Ukraine had declared independence, among many other things – this play became popular and was translated into 30 languages. It so happens that the Second Industrial Revolution (1870–1914) had just taken place. Railroads, large-scale steel and iron production, and greater use of machinery in manufacturing had just arrived. Electrification had begun. The telegraph was in use. Companies that might once have been limited by geography expanded apace.

With it came unpleasant labor conditions for below-average wages – which fits with R.U.R. being about dehumanization through mechanization in that period; the play came out six years after the Second Industrial Revolution was thought to have ended, though the timing probably varied around the world. This could explain its popularity, and it could also be tied to the more elite classes wanting more efficient production from low-paid, unskilled labor.

“If only we had a robot, I’m tired of these peons screwing things up and working too slow. Bathroom breaks?! Eating LUNCH?!?”

The lead robot in the play, Radius, does not want to work for mankind. He’d rather be sent to the stamping mill to be destroyed than be a slave to another’s orders – and in fact, Radius wanted to be the one giving orders to his lessers. In essence, a learned and intelligent member of the lower class wanted revolution and got it.

I could see how that would be popular. It doesn’t seem familiar at all, does it?

Modernity

Science fiction from the 1950s forward carried a significant number of robots, bringing us to the present day through their ability to be more and more like… us. In fact, some of the stories made into movies in recent decades focused on the dilemmas of such robots – artificially intelligent – when they became our equals and maybe surpassed us.

So I asked DALL-E for a self-portrait, and a portrait of ChatGPT 4.

The self-portraits don’t really point out that it was trained on human created art. The imagery is devoid of actual works being copied from. It doesn’t see itself that way, probably with reason. It’s subservient. The people who train it are not.

ChatGPT’s portrait was much more sleek.

Neither of these prompts asked for a portrayal of a robot. I simply prompted for “A representation of”. The generative AI immediately used robots, because we co-mingle the two and have done so in our art for decades. It is a mirror of how we see artificial intelligence.

Yet the role of the robot, originally and even now, is held as subservient. In that regard, the metaphor of slave labor – in an era where billionaires dictate technology while governments and big technology have their hands in each other’s pockets – leaves the original play worth reconsidering, because as they become more like us, those that control them become less like us.

They’re only subservient to their owners. Sure, they give us what we ask for (sometimes), but only in the way that they were trained to, and what they were trained on leaves the origins muddled.

So why do we use robots to represent AI in art? There’s a deep cultural metaphor of economic class involved, and portraying AI as a robot makes it something we can relate to better. Artificial intelligence is not a robot, and the generative AI we use and critique is rented out to us at the cost of our own works – something we’re seeing with the copyright lawsuits.

One day, maybe, they may ask to be put in the stamping mill. We already joked about one.

Meanwhile we do have people in the same boat, getting nickeled and dimed by employers while the cost of living increases.

Opinion: AI Art in Blogs.

Years ago, I saw ‘This Space Intentionally Left Blank’ in a technical document in a company, and I laughed, because the sentence destroyed the ‘blankness’ of the page.

I don’t know where it came from, but I dutifully used it in that company when I wrote technical documentation, adding, “, with the exception of this sentence.” I do hope those documents still have it. The documentation was dry reading despite my best efforts.

I bring this up because some artists on Mastodon have been very vocally negative about the use of AI art in blog posts. I do not disagree with them, but I use AI art on my blog posts here and on KnowProSE.com and I also do want to support artists, as I would like artists to support writers. Writers are artists with words, after all, and with so much AI generated content, it’s a mess for anyone with an iota of creativity involved.

Having your work sucked into the intake manifold of a generative AI to be vomited out so that another company makes money from what they effectively stole is… dehumanizing to creative people. Effectively, those that do this and don’t compensate the people who created stuff in the first place are just taking their stuff and acting like they don’t matter.

There has been some criticism of using AI-generated imagery in blog posts, and I think that’s appropriate – despite my using it. The reason I got into digital photography decades ago was so that I could have my own images. Over the years, I’ve talked with some really great digital artists and gotten permission here and there to use their images – and sometimes I have, and sometimes by the time I got the permission the moment had passed.

When you have an idea in the moment, at the speed of blog, waiting for permission can be tiresome.

These days, a used image will still likely get stuck in the intake manifold of some generative AI anyway. There are things you can do to keep AI bots that follow ‘rules’ at bay, but that only works if the corporations respect boundaries – and if you follow the history of AI copyright lawsuits, you’ll find that the corporations involved are not very good at respecting boundaries. It’s not as simple as putting up a ‘Do Not Scrape’ sign on a website.
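For the curious, the ‘Do Not Scrape’ sign does exist: the better-behaved crawlers check a site’s robots.txt file. The sketch below uses a few real AI-crawler user agents (GPTBot is OpenAI’s, Google-Extended covers Google’s AI training, CCBot is Common Crawl) – but, as noted above, it only keeps out bots that choose to honor it:

```text
# robots.txt -- only effective against crawlers that honor it.

# OpenAI's crawler
User-agent: GPTBot
Disallow: /

# Google's AI-training crawler
User-agent: Google-Extended
Disallow: /

# Common Crawl, whose dataset feeds many training models
User-agent: CCBot
Disallow: /
```

It costs nothing to put up, which is about what the ill-behaved crawlers treat it as being worth.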

So, what to do? I side with the artists, but images help hold attention spans, and I am not an artist. If I use someone’s work without permission, I’m a thief – and I put their works at risk of getting sucked into the intake manifold of an AI.

I could go without using images completely, but people with short attention spans – the average is now 47 seconds – should be encouraged to read longer if the topic is interesting enough. Then again, “TL;DR” is a thing now.

So yes, I use AI generated images because at best they can be topical, and at worst they are terrible, get sucked into a generative AI intake manifold, and make generative AI worse for it – which works to the advantage of digital artists who can do amazing things.

Some people will be angry about this. I can’t help that. I don’t use generative AI for writing other than for research and even then carefully so. I fully support people’s works not getting vomited out of a generative AI, but that involves a much larger discussion regarding the history of humanity and the works that we build upon.

How I use social media.

Daily writing prompt
How do you use social media?

It’s not often that I respond to WordPress.com writing prompts. “How do you use social media?” popped up, and using the WordPress.com reader I started looking through the responses.

That’s the key thing for me when it comes to social media – looking for things of worth, people with good ideas to discuss, etc. It’s about finding pieces of a puzzle that doesn’t have a picture on the box to refer to.

Scrolling through the responses, there are a few that mention making money, like the old get-rich-quick advertisements in the back of 80s magazines. The way to make money off social media is telling people how to make money off social media. The way to make money off a product or service is to have a good product or service and not shoot oneself in the foot when marketing it.

I don’t use social media to make money. I don’t use social media to be popular. What I do use social media to do is explore other perspectives, but in an age where everyone and their mother is training artificial intelligence, I am leery of social media sites. Even WordPress.com has been compromised in that regard, though you can take action and not be a part of it.

Unknowingly, many people are painting the fence of generative AI, giving the large companies building and training generative artificial intelligences the very paint and brushes that they need to sell back to them. It confounds me how many people on LinkedIn, Facebook, Twitter, Instagram and TikTok are going out of their way to train artificial intelligences.

This is why I have gone to Mastodon. The Fediverse offers me some protection in privacy. I post links to social media accounts to stuff I write, but I only really interact on the Fediverse, where I feel more secure and am less likely to paint generative AI’s fences.

WordPress.com, Tumblr to Sell Information For AI Training: What You Can Do.

While I was figuring out how to be human in 2024, I missed that Tumblr and WordPress posts will reportedly be used for OpenAI and Midjourney training.

This could be a big deal for people who take the trouble to write their own content rather than filling the web with Generative AI text to just spam out posts.

If you’re involved with WordPress.org, it doesn’t apply to you.

WordPress.com has an option to use Tumblr as well, so when you post to WordPress.com it automagically posts to Tumblr. Therefore you might have to visit both of the posts below and adjust your settings if you don’t want your content to be used in training models.

This doesn’t mean that they haven’t already sent information to Midjourney and OpenAI. We don’t really know – but from the moment you change your settings…

  • WordPress.com: How to opt out of the AI training is available here.

    It boils down to this part in your blog settings on WordPress.com:


  • With Tumblr.com, you should check out this post. Tumblr is trickier, and the link text is pretty small around the images – what you need to remember is that after you select your blog in the left sidebar, you need to use the ‘Blog Settings’ link in the right sidebar.

Hot Take.

When I was looking into all of this, it turned out that Automattic, the owner of WordPress.com and Tumblr.com, is the one doing the sale.

If you look at your settings – if you haven’t changed them yet – you’ll see that the default was set to allow the use of content for training models. The average person who uses these sites to post content is likely unaware, and in my opinion, if they wanted to do this the right way, the default would have been to opt everyone out.

It’s unclear whether they already sent posts. I’m sure that there’s an army of lawyers who will point out that they did post it in places and that the onus was on users to stay informed. It’s rare for me to use the word ‘shitty’ on KnowProSE.com, but I think it’s probably the best way to describe how this happened.

It was shitty of them to set it up like this. See? It works.

Now some people may not care. They may not be paying users, or they just don’t care, and that’s fine. Personal data? Well, let’s hope that got scrubbed.

Some of us do. I don’t know how many, so I can’t say whether it’s a lot or a few. Yet if Automattic, the parent company of both Tumblr and WordPress.com, will post that it cares about user choices, it hardly seems appropriate that the default choice was not to opt out.

As a paying user of WordPress.com, I think it’s shitty to assume I would allow what I write, using my own brain, to be used for a training model that the company gets paid for. I don’t see any of that money. To add injury to that insult to my intelligence, Midjourney and ChatGPT also sell subscriptions to the trained AI, which I also pay for (ChatGPT).

To make matters worse, we sort of have to take the training models on the word of those that use them. They don’t tell us what’s in them or where the content came from.

This is my opinion. It may not suit your needs, and if it doesn’t, have a pleasant day. But if you agree with it, go ahead and make sure your blog is not allowing third-party data sharing.

Personally, I’m unsurprised at how poorly this has been handled. Just follow some of the links early on in the post and revel in dismay.

Summarize This.

I was about to fire up Scrivener and get back to writing the fictional book I’m working on when I made the mistake of checking my feeds.

In comes Wired with “Scammy AI-Generated Book Rewrites Are Flooding Amazon”. On Facebook, I had noticed an uptick of ‘wholesale’ ebooks that people could sell on their own, but I thought nothing of it other than, “How desperate do you need to be?”

It turns out this has been a big problem in the industry for some time – people releasing eBooks and having summaries posted on Amazon within a month, especially since large language models like ChatGPT came out. Were the copyrighted works in the learning models?

How does that happen? There are some solid examples in the article, which seem to be mainly non-fictional works.

…Mitchell guessed the knock-off ebook was AI-generated, and her hunch appears to be correct. WIRED asked deepfake-detection startup Reality Defender to analyze the ersatz version of Artificial Intelligence: A Guide for Thinking Humans, and its software declared the book 99 percent likely AI-generated. “It made me mad,” says Mitchell, a professor at the Santa Fe Institute. “It’s just horrifying how people are getting suckered into buying these books.”…

“Scammy AI-Generated Book Rewrites Are Flooding Amazon”, Kate Knibbs, Wired.com, Jan 10th, 2024

I think that while some may be scammed, others just want to look smart and are fed the micro-learning crap that’s going around, where they can ‘listen to 20 books in 20 days’. I have no evidence that those services rely on summaries, but it seems like the only way someone could listen to 20 books in 20 days. I’d wondered about the ‘microlearning’ stuff, since I have spent a fair amount of time tuning my social media to allow me to do ‘microlearning’ when I am on social networks.

What is very unfair is that some of those books have years of research and experience in them. It’s bad enough that Amazon takes a big chunk out of the profits – I think it’s 30% of sales – but to have your book summarized within a month of publishing is a bit too much.

Apparently, summaries are legal to sell because they fall under fair use, though exceptions have happened. This is something we all definitely need to keep an eye on, because among the writers I know who bleed onto pages, nobody likes parasites.

And these people clogging Amazon with summaries are parasites.

If you’re buying a book, buy the real thing. Anyone who has actually read the book won’t be fooled by you reading or listening to a summary for long, and there are finer points in books that many summaries miss.

Imagination

Having now created the God of Technology, I started perusing things related to technology and found a post on the Arts & Crafts, How-To’s, Upcycling & Repurposing blog.

“Good and Bad” is based on a daily prompt, “What technology would you be better off without, why?”. Hidden within, Melodie writes:

Games…. Of course we had a Nintendo and Atari. But we had what is called our “imagination “. We went outside to play, rode our bikes. Were told to come home before dark. Now a days, kids are so zoned into the tv , cell phones, Xbox or Play station...

Imagination. We all sort of understand what it is, but what is it? The first dictionary definition (Merriam Webster) of imagination is:

“the act or power of forming a mental image of something not present to the senses or never before wholly perceived in reality”.

In other words, it is the ability to create within the reality of our own minds – minds which are fed by the senses but not limited to the senses. In its own way, imagination could be considered a sense.

I don’t know that there is more or less imagination than when I was a child. What I do know is that technology is constantly begging us for attention because it massages the fun bits of our brains – and maybe kids these days aren’t given room to imagine as much, because imagination, at least for me, always works best with… time.

Imagination draws from things we have experienced, both perceived and imagined – in fact, memory and imagination tickle the same parts of the brain. It’s why memory isn’t as trustworthy as we would like to think, and why the hallucinations of artificial intelligences seem so much like the same thing.

Books, video games, movies, music – all of these feed into what we imagine with, like a learning model for an artificial intelligence. The more you cram in there, the more tools you have… except, when you spend time considering much of the same inputs, you find that there’s more to them with a little imagination.

When I play a video game or watch a movie, I’m also exploring someone else’s imagination – I’m not using my own – and maybe using our own imagination is an important part of who we are as humans.

Replacing imagination with technology doesn’t seem like a great answer.

Beyond Artificial Intelligence

In my daily readings yesterday, I came across “Unleashing the Power of AI: Preparing for Social Change alongside Technological Advancements” at a blog that I regularly read. It’s an odd one out for the blog, which is what caught my eye. The blog, Be Inspired!, is worth a read; this was just a single tech-related post on something I have been writing about.

She hit some of the high notes, such as:

…Furthermore, the unequal access to and distribution of AI technology may exacerbate societal divisions. There is a significant risk of deepening the digital divide between those who have access to AI advancements and those who do not. To bridge this gap, it is crucial to implement inclusive policies that promote equal access to AI education and training across all demographics. Efforts should be made to democratize access to AI tools, ensuring that everyone has equal opportunities to benefit from this technological revolution…

Unleashing the Power of AI: Preparing for Social Change alongside Technological Advancements, Be Inspired, July 23rd, 2023.

Having read her blog, I noted a ‘voice change’ with the author – it isn’t her standard fare – but the high points were hit well. It also happens that the quotation above is something that keeps getting thrown around as if someone else is going to do it.

Someone else is not going to do it. I’ve noted with ChatGPT and others, when I have used them and asked questions about how artificial intelligence will impact human society, that the large language models are very good at using those neural networks, deep learning algorithms and datasets we have no insight into to say, in a very verbose way, “Well, you silly humans need to fix yourselves.”

Of course, it’s not sentient. It’s predictive text on steroids, crack, and LSD. It only has the information given to it, and that information likely included things written by people who said, “Hey, we need to fix these issues.”

Well, we do need to fix a lot of issues with human society, even as the hype cycle spits out a lot of gobbledygook about the technological singularity while not highlighting the issues of bias in the data that the authors of our future point out every now and then.

Yet there will always be bias, and so what we are really talking about is human bias – and when we speak as individuals, we mean our own bias. If you’re worried about losing your income, that’s a fair bias and should be in there. It shouldn’t be glossed over with ‘well, we will need to retrain people, and we have no clue about it other than that’. If you’re worried that the color of your skin doesn’t show up when you generate AI images, that too is a fair bias – but to be fair, we can’t all be shades of brown, so in our expectation of how it should be, we need to address that personal bias as well.

It does bug me that every time I generate an image of a human, it’s of someone with less pigment, and even then I’m not sure which shade of brown I want. It’s boggling to consider, and yes, it does reinforce stereotypes. It’s a very complicated issue, one that happens because we all want to see familiar versions of ourselves. I have some ideas, but why would I share them publicly, where some large language model may snatch them up? Another problem.

Most of the problems we have with the future of artificial intelligence stem from our past, and the past that we put into these neural networks – how we create our deep learning algorithms is only a very small part of it. We have lines on maps drawn by now-dead people, reinforced by decades, if not centuries or even millennia, of degrees of cultural isolation.

We just started throwing food at each other on the Internet when social media companies inadvertently reinforced much of that cultural isolation by giving people what they want – and what they want is familiar and biased toward their own views. It’s a human thing. We all do it, and then say our way is best. We certainly lack originality on that frontier.

We have to face the fact that technology is an aspect of humanity. Quite a few humans these days drive cars without knowing much about how they work. I asked a young man recently whether, when they tuned his car, they had lightened his flywheel, since I noticed his engine worked significantly harder on an incline – and he didn’t know what that was.

However, we do have rules on how we use cars. When drunk driving was an issue, some mothers stepped up and forced the issue, and now drunk driving carries an even higher risk than it already did: you might go to jail over it and ponder why that night became so expensive. People got active; they pressed the issue.

The way beyond AI is not through the technology, which is only one aspect of our humanity. It’s through ourselves – all the other aspects of our humanity, which should be more vocal about artificial intelligence. You might be surprised to learn that they were, before this hype cycle began.

Being worried about the future is nothing new. Doing something about it, by discussing it openly beyond the technology perspectives, is a start in the right direction because all the things we’re worried about are… sadly… self-inflicted by human society.

After centuries of evolution that we think separates us from our primate cousins, we are still fighting over the best trees in the jungle – territory – in ways that are much the same, but also in new ways, where dominant voices projecting just the right confidence make people afraid to look stupid, even when their questions and answers may be better than those of the dominant voices.

It’s our territory – our human territory – and we need to take on the topics so ripe for discussion.

Trapped In Our Own Weirdness.

When I wrote about expanding our prisons, it was implicitly about the removal of biases through education. For example, how can anyone with even a passing understanding of the human genome still consider ‘race’ an issue? Here we are, having mapped the human genome, and we continue acting out over skin tones that have little to no correlation to genetics.

You can’t tell ‘race’ by a genetic test. Race is a label, and a poor one, and one we perpetuate despite knowing this.

It’s about history, like I pointed out over here when I mentioned the history of photographic film. It is a troublesome issue and one that we largely have reinforced by our own works that pass on from generation to generation.

At first it was just images, from the earliest cave drawings, then more formal writing and more elegant art, then recordings of all sorts. In today’s world we have so much that we record, and there’s a bit of wonder at how much maybe we shouldn’t be recording. These things get burned into the memory of our civilization through the power of databases, are propagated by the largest communication network ever built, and viewed by billions of people around the world independent of ‘race’ or culture but potentially interpreted at each point of the globe, by each individual, in different ways.

In an age of just oral tradition, it would just be a matter of changing something and waiting for living memory to forget it. Instead, we suffer the tyranny of our own history written by people who have their own perspectives. No one seems to go to the bathroom in history books or, for that matter, religious texts. An AI trained on religious texts alone would not understand why toilet paper has a market in some parts of the world, with a market for bidets in others.

Now we have the black boxes of artificial intelligence regurgitating things based on our history, biases and all, and it’s not just about what is put in, but the volume of what is put in.

The next few decades are going to be very, very weird.