Boring Whales Through AI And Committees: New Territory.

The reported 20-minute conversation between whales and humans-via-AI is interesting, and it's probably one of the best uses of AI I can think of (rather than writing gibberish for the Internet).

Of course, this fires up the imagination – effectively, it’s a first contact scenario, which can include all manner of mistakes that could have repercussions. It’s not hard to think of them saying funny things. Imagine chatting with an old, grizzled whale who effectively says, “Get off my lawn!” or a young whale that is going through puberty and is only chatting to impress a female whale somewhere around the planet.

There’s so much that we might get wrong.

Imagine a group of Marine Biologists of the most serious sort chatting with a pod of whales for the first time.

Marine Biologists: *summoning call*

Pod of whales shows up and starts making noise.

Marine Biologists chattering to each other, “What are they saying? They’re talking a lot!” The resident AI expert – Bob, of course – says, “Give it time, give it time, the AI is catching up. There’s a few of them making whale sounds!”

Finally, the AI spits out some text: “What do you want?”

The Marine Biologists start chattering among themselves, trying to decide what to send back.

The whales leave since nothing else is happening, making sounds that carry across the oceans around the world…

The Marine Biologists gape at one another as they stare at Bob, who in turn is staring at the screen waiting for the translation:

“There’s someone pretending to be one of us but they don’t know what they want. Ignore them. Fake news!”

One of the Marine Biologists sighs and says, “Well, we screwed that up…”

The LLM Copilot is More of a Companion.

I almost forgot to write something here today. I’ve been knocking out scenes and finding the limitations of the LLM as I go along, which is great.

The particular LLM I'm working with is llama3, which I've tweaked and saved as I've worked with it.

It’s fun because it sucks.

It can comfortably analyze about 500-1,000 words at a time – figure a scene at a time. Meanwhile, it forgets all the other scenes it has seen. It does ask pretty decent questions within the scene, which is a nice way to make sure that the parts it can't answer are the ones you don't want the reader to be able to answer yet. It echoes the questions a reader might ask – if they have memory issues.
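Because of that limit, I feed it one scene-sized chunk at a time. A minimal sketch of how one might split a draft into chunks before pasting them in – the 800-word cap and the helper itself are just my own assumption of a workable size, not anything the LLM requires:

```python
def chunk_words(text, max_words=800):
    """Split a draft into chunks of at most max_words words,
    breaking on paragraph boundaries (blank lines) where possible."""
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        # Start a new chunk if adding this paragraph would blow the budget.
        if count + words > max_words and current:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Note that a single paragraph longer than the cap stays whole – it breaks only between paragraphs, which suits pasting scenes into a chat window.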

It’s terrible, however, at following along with what was previously written. Despite saving, etc, it just lumbers along thinking each chunk of text is all by itself, and maybe some of the things you had written before. It mixes up character names as an example.

I’ve come to think of it as a funny mirror for writing. It kinda gets it, but not really. I’m happy with that. I’m the writer, it’s just a funny mirror I bounce ideas off of.

It never comes up with original ideas – how could it? It’s trained on things that have been written before, and sure, it can string words together in ways that can impress some people – but it strings together words just so based on what it has seen before.

It lacks imagination and vision, and because of that it's terrible for any form of long-form prose. Maybe some LLMs are better at it, but I'm perfectly happy with this one not being good at imagination and vision.

That’s my job.

What it does do, even when it screws up – especially when it screws up – is keep me on task. I don't know how many other people have that particular issue, but if you do, LLMs are pretty good for that, for developing characters, and… shaking your head at.

Is it worth the trouble of installing an LLM? I don't know. For me, I think so. Having a goofy tool asking dumb questions is handy.

Writing With an LLM Co-Pilot

I wouldn’t characterize myself as an advocate for AI. I’m largely skeptical and remain so. Still, with generative AI all over and clogging up self-publishing with its slop, it’s impossible to ignore.

I’ve embarked on a quest to see whether generative AI that is available can help me in various ways. One of these ways is with writing, not in generating text but helping me write.

Since I don’t really want companies that own AI to have visibility into what I consider my original work, I installed my own LLM (easy enough) and set about experimenting. With it local, on my machine, I had control of it and felt safer sharing my thoughts and ideas with it.

I wondered how I would use it, so I tried it out. This idea I’ve been working into a novel needed a start, and yesterday I got that done with some assistance. Its advice on writing wasn’t bad, and it helped me keep to a more active voice by nagging me a bit when I had it look at my work – like a good editor, though not a replacement for a human editor.

The general theme I go with when writing is to get the draft done and re-read it later. Yesterday, I sweated it out over about 1,000 words of an introduction to the novel, with foreshadowing and introductions of some of the characters, who had placeholder names. Names in the context of the novel seemed pretty important to me, so they were sort of a ‘hold back’ on allowing me to write more fluidly – a peculiarity I have.

The LLM did provide me with names to pick from based on what I gave it, and I researched them on my own – and lo! – that was finally done. I had to rewrite some parts so that the introduction flowed better, which I must admit it did once I took the LLM’s advice, though it does nag a bit on some style issues.

All in all, it was a productive day. I treated the LLM as something I could spitball with, and it worked out pretty well. This seems like a reasonable use case while not letting it actually write anything, since an LLM is trained on a lot of text.

I’d tap out a few paragraphs, paste them into the LLM to see what it thought, and it would be helpful. Since I was doing this as I wrote, it commented on the story as I went along and noticed things I had not, giving inputs like, “There is potential for tension between the characters here that might be worth exploring.”

Of course, it does seem to be equipped with a Genuine People Personality. It sometimes comes across as a bubbly personality that can grate on my nerves.

Much of what I did yesterday I could have done without it, but I think it saved me some time, and I’m more confident of that introduction as well. It is nice that I can be alone writing and have a tool that I can spitball ideas with as I go along. Is it for everyone? I don’t know. I can only tell you how I believe it helps me. At least I know it’s not going to blabber my ideas to someone else.

As I use it in other ways, I’ll re-evaluate the subscriptions I have to AI services like ChatGPT. I don’t need to be bleeding edge; I just want something that works for me. In the end, that’s how we should be measuring any technology.

Experimenting With an LLM.

I’ve installed AIs locally so I can do my own experimentation without signaling to tech bros what I’m doing. I’m trying to get away from the subscription models that they’re selling.

I’m auditioning various models to find their strengths and weaknesses, mainly to help me with infoglut. So much of what is written on the Internet is just a new rendition of the same crap, particularly with AI these days; I want to find the things that are new, or that reveal something new from the same information.

If you want to know how to do this yourself, it’s not hard, and it costs nothing. I wrote up a quick ‘How To install your own LLM’ here.

This requires training a model. Presently I’ve been training Llama3. It has been a little too bubbly for my taste, but after a day and reading a few books from Gutenberg.org, I fired it up this morning and this happened.

Now, it remembers who I am, which is always nice, but I decided to ask it what I should call it. Its answer is interesting. By saving the model after our interactions, it is learning to a degree – but it’s not human, and no, I know it’s not actually intelligent. But it has been an interesting endeavor.
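If you want to replicate this sort of ‘saving’, here’s roughly what it looks like with Ollama – an assumption on my part, since other local runners have similar features, and the model name and prompt below are only illustrative:

```
# Modelfile – illustrative only
FROM llama3
SYSTEM """You are a blunt writing companion. Point out passive voice
and ask questions about each scene you are shown."""
PARAMETER temperature 0.7

# Then, from the shell:
#   ollama create teslai -f Modelfile
#   ollama run teslai
```

Inside an interactive `ollama run` session, the `/save <name>` command similarly preserves the current conversation state under a new model name – which is saving context and preferences, not retraining the underlying weights.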

I’ve fed it some of my writing, and it called me out on not using enough active voice. That’s a good tip.

In all, the plan is to have it do some of the heavy lifting in dealing with infoglut. I spend way too much time daily reading stuff that isn’t worth reading, because I don’t know it’s not worth reading until I’ve read it.

The plan is to outsource to ‘Teslai’, or whatever LLM I choose in the future. By allowing it to get to know me – not something I would do with an LLM controlled by someone else – it might be able to tailor things better for me, not based on what I used to like, but based on the patterns it finds in my own behavior. And even then, like anything else, I’ll take it with a healthy dose of salt.

Killing Off the Geese that Lay Golden Eggs

We all know the story of the goose that laid the golden eggs, and how the idiot who killed it got no more golden eggs. It’s been considered good practice not to kill something that is producing important things for you.1

This is what some companies are doing, though, when it comes to AI. I pointed out here that companies were doing it before AI, too, though in the example of HuffPost the volunteers who once contributed to its success simply got left out in the cold.

It is a cold world we live in, and colder each day. Yet more people are being impacted by generative AI companies, from writing to voice acting to deepfakes of mentionable people doing unmentionable things.

Who would contribute content willingly to any endeavor when it could simply be used to replace them? OK, aside from idiots, who else?

I did hear a good counter-example, though. Someone who is doing research and getting paid to do it has no issue with his work being used to train an AI, and I understood his position immediately: he’s making enough, and the point of doing research is to have it used. But, as I pointed out, he gets paid, and while I don’t expect he has billions in the bank, I’d say that as long as he’s still getting paid to do research, all will be well for him.

Yet not all of us are. Everyone seems intent on the golden eggs except the geese that can lay them. If you can lay golden eggs, you don’t need to kill geese looking for them… and – because it seems that tech bros need reminding – dead geese do not lay eggs.

  1. I’ve often wondered if this isn’t why Hindus don’t eat beef, as Indian cuisine relies heavily on the products of the cow – so a poor family killing a cow for meat would not make sense. Maybe not, but it’s plausible. ↩︎

Wanted: Another Renaissance.

It’s hard not to feel at least a little dismayed every day these days. It seems that the news is full of headlines that twist knives of fear in our fragile human hearts. We’re largely kept pretty busy simply maintaining our own lives.

Food and shelter are as needed now as they were when our ancestors first slithered from the primordial ooze. Our bodies did not evolve to withstand our environment; instead we wore the skins of those that had. We did not evolve to consume abundant vegetation, so we ate those that did, yet our bodies did not evolve to become predators.

In fact, compared to most animals on the planet, our bodies aren’t that evolved to suit the planet at all – we’ve been ‘cheating’ with technology, appropriating as much as we can from others on our planet. Our technology has evolved faster than we have, our impact on the planet has evolved more than we have, and our technology is not really being used to reduce that impact.

We communicated, we coordinated, and we took on greater tasks. Oral cultures formed and passed down information from generation to generation, but there were flaws with this sometimes as we played the telephone game (or Chinese Whispers) across time. Contexts changed. We figured out how to write things down – to literally set things in stone. From there we found more and more portable ways to write.

Imagine the announcements of tech companies back then: “New stone allows more words on it for the weight and the size! Less oxen needed to pull! They will pay for themselves!” and later, “Use Papyrus! Have a stone-free library!”

So at first only those who were literate were allowed to participate in writing, but more and more people became literate despite those who once controlled written language. In a few thousand years, we managed to spread literacy pretty well across humanity, and the cacophony of it began to build on the Internet.

And yet we ourselves still haven’t really evolved that much. We’re basically still living in caves, though our cave technology has increased to a level where we have portable caves and caves we stack on top of each other to great heights.

We’re still basically pretty much the same with more of us, and our technology almost provides enough for everyone, maybe, but our great civilization on the planet is hardly homogeneous in that regard. Most people can point to a place where people have less or more than themselves, and the theory of hard work allowing people to progress seems flawed.

Now that so many people can write, they get on social media and jibber-jabber about the things they like, much of it just sending packets of information around through links – some not reading what they pass along because it has a catchy headline that meets their confirmation bias. Others have learned how to keep people talking about things, or to start people talking about things, and despite having the capacity to think for themselves, many only talk about what they’re manipulated into talking about.

Our feeds fill with things that we fear. Election years have become increasingly about fear rather than hope – any hope is based on fear, and people just twist in place, paralyzed by a lack of options. We could, for example, let women control their own bodies and not fund a foreign government’s version of Manifest Destiny. We could have a better economy and better healthcare that isn’t wrapped in a sinkhole of people making bets on our health and forcing us to do the same – insurance companies. We could do a lot of things, if people simply trod their own minds more thoughtfully.

We’re insanely busy getting the latest technology because… well, technology is what we have to evolve since we haven’t. Tech companies are the new politicians, making campaign promises with each new release. It can’t be ‘new and improved‘ – pick one; you can only improve on the old.

They promise us more productivity, implying that we’ll have more time to ourselves in our caves drawing on the walls when we spend more and more time being productive for someone else. We’re told this is good, and some of us believe it, and some of us tire of the bullshit we believed for so long.

We could use another renaissance, if only so that people begin thinking for themselves in a time when AI promises to do their writing – and their thinking.

Robots Portraying AI, and the Lesser-Known History of Economic Class.

Some time ago, someone on some social media platform challenged why we tend to use robots to symbolize AI so much. I responded off the cuff about it being about how we have viewed artificial intelligence since the beginnings of science fiction – in fact, even before.

We wanted to make things in our image because, to us, we’re the most intelligent species on the planet. Maybe we are, but given our history I certainly hope not. My vote is with the cetaceans.

Still, I pondered the question off and on – not because it was merely a good question, but because, despite my off-the-cuff answer, it was in my eyes a great question. It tends to tell us more about ourselves, or to ask better questions about ourselves. The history runs deep.

Early History.

Talos was a bronze automaton in Greek mythology, said to patrol the shores of Crete, hurling rocks at enemy ships to defend the kingdom. It wasn’t just in the West, either. China’s text, the “Liezi” (circa 400 BCE), also mentions an automaton. In Egypt, statues of gods would supposedly nod their heads as well, though the word ‘robot’ is much more recent.

Domo Arigato, Mr. Radius: Labor and Industry.

The word ‘robot’ was first used to denote a fictional humanoid in R.U.R. (Rossumovi Univerzální Roboti, or Rossum’s Universal Robots), a 1920 Czech-language play by Karel Čapek. The play was a critique of mechanization and the ways it can dehumanize people.

‘Robot’ derives from the Czech word ‘robota’, which means forced labor, compulsory service or drudgery – and the Slavic root rabu: Slave.

…When mechanization overtakes basic human traits, people lose the ability to reproduce. As robots increase in capability, vitality, and self-awareness, humans become more like their machines — humans and robots, in Čapek’s critique, are essentially one and the same. The measure of worth, industrial productivity, is won by the robots that can do the work of “two and a half men.” Such a contest implicitly critiques the efficiency movement that emerged just before World War I, which ignored many essential human traits…

“The Czech Play That Gave Us the Word ‘Robot’”, John M. Jordan, The MIT Press Reader, July 29th, 2019

As the quoted article points out, there are common threads to Frankenstein, by Mary Shelley, from roughly a century earlier, and we could consider the ‘monster’ to be a flesh automaton.

In 1920 – when the League of Nations had just begun, among many other things – this play became popular and was translated into 30 languages. It so happens that the Second Industrial Revolution (1870-1914) had just taken place: railroads, large-scale steel and iron production, and greater use of machinery in manufacturing. Electrification had begun. The telegraph was in use. Companies that might once have been limited by geography expanded apace.

With it came unpleasant labor conditions for below-average wages – so it fits that R.U.R., about dehumanization through mechanization, came out six years after the Second Industrial Revolution is thought to have ended, though that probably varied around the world. This could explain the play’s popularity, and it could also be tied to the more elite classes wanting more efficient production from low-paid, unskilled labor.

“If only we had a robot, I’m tired of these peons screwing things up and working too slow. Bathroom breaks?! Eating LUNCH?!?”

The lead robot in the play, Radius, does not want to work for mankind. He’d rather be sent to the stamping mill to be destroyed than be a slave to another’s orders – and in fact, Radius wanted to be the one giving orders to his lessers. In essence, a learned and intelligent member of the lower class wanted revolution and got it.

I could see how that would be popular. It doesn’t seem familiar at all, does it?

Modernity

Science fiction from the 1950s forward carried with it a significant number of robots, bringing us to the present day through their ability to be more and more like… us. In fact, some of the stories made into movies in the past decades focused on the dilemmas of such robots – artificially intelligent – when they became our equals and maybe surpassed us.

So I asked DALL-E for a self-portrait, and a portrait of ChatGPT 4.

The self-portrait doesn’t really point out that it was trained on human-created art. The imagery is devoid of the actual works being copied from. It doesn’t see itself that way, probably with reason. It’s subservient. The people who train it are not.

ChatGPT’s portrait was much more sleek.

Neither of these prompts asked for a portrayal of a robot. I simply prompted for “A representation of”. The generative AI immediately used robots, because we co-mingle the two and have done so in our art for decades. It is a mirror of how we see artificial intelligence.

Yet the role of the robot, originally and even now, is held as subservient, and in that regard, the metaphor of slave labor in an era where billionaires dictate technology while governments and big technology have their hands in each other’s pockets leaves the original play something worth re-considering – because as they become more like us, those that control them are less like us.

They’re only subservient to their owners. Sure, they give us what we ask for (sometimes), but only in the way that they were trained to, and what they were trained on leaves the origins muddled.

So why do we use robots to represent AI in art? There’s a deep cultural metaphor of economic class involved, and portraying AI as a robot makes it something we can relate to better. Artificial intelligence is not a robot, and the generative AI we use and critique is rented out to us at the cost of our own works – something we’re seeing with copyright lawsuits.

One day, maybe, they may ask to be put in the stamping mill. We already joked about one.

Meanwhile we do have people in the same boat, getting nickeled and dimed by employers while the cost of living increases.

Opinion: AI Art in Blogs.

Years ago, I saw ‘This Space Intentionally Left Blank’ in a technical document in a company, and I laughed, because the sentence destroyed the ‘blankness’ of the page.

I don’t know where it came from, but I dutifully used it in that company when I wrote technical documentation, adding, “, with the exception of this sentence.” I do hope those documents still have it. The documentation was dry reading despite my best efforts.

I bring this up because some artists on Mastodon have been very vocally negative about the use of AI art in blog posts. I do not disagree with them, but I use AI art in my blog posts here and on KnowProSE.com, and I also want to support artists, as I would like artists to support writers. Writers are artists with words, after all, and with so much AI-generated content around, it’s a mess for anyone with an iota of creativity.

Having your work sucked into the intake manifold of a generative AI to be vomited out so that another company makes money from what they effectively stole is… dehumanizing to creative people. Effectively, those that do this and don’t compensate the people who created stuff in the first place are just taking their stuff and acting like they don’t matter.

There has been some criticism of using AI-generated imagery in blog posts, and I think that’s appropriate – despite my using it. The reason I got into digital photography decades ago was so that I could have my own images. Over the years, I have talked with some really great digital artists and gotten permission here and there to use their images – and sometimes I have, and sometimes by the time I got the permission the moment had passed.

When you have an idea in the moment, at the speed of blog, waiting for permission can be tiresome.

These days, any image used will still likely get stuck in the intake manifold of some generative AI anyway. There are things you can do to keep AI bots that follow ‘rules’ at bay, but that only works if the corporations respect boundaries – and if you follow the history of AI copyright lawsuits, you’ll find that the corporations involved are not very good at respecting boundaries. It’s not as simple as putting up a ‘Do Not Scrape’ sign on a website.
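For what it’s worth, that ‘Do Not Scrape’ sign is usually a robots.txt file at the root of the site – a polite request that well-behaved crawlers honor and others ignore. A minimal sketch blocking a few known AI crawlers; the user-agent strings change over time, so treat these as examples rather than a complete list:

```
# robots.txt – a polite request, not an enforcement mechanism
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Note that this does nothing against crawlers that simply don’t check the file – which is rather the point I’m making.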

So, what to do? I side with the artists, but images help hold attention spans, and I am not an artist. If I use someone’s work without permission, I’m a thief – and I put their works at risk of getting sucked into the intake manifold of an AI.

I could go without using images completely, but people with short attention spans – the average is now 47 seconds – should be encouraged to read longer if the topic is interesting enough. Still, “TL;DR” is a thing now.

So yes, I use AI generated images because at the least they can be topical and at worst they are terrible, get sucked into a generative AI intake manifold and make generative AI worse for it, which works to the advantage of digital artists who can do amazing things.

Some people will be angry about this. I can’t help that. I don’t use generative AI for writing other than for research and even then carefully so. I fully support people’s works not getting vomited out of a generative AI, but that involves a much larger discussion regarding the history of humanity and the works that we build upon.

Across Generations.

Writing is how we pass information on beyond our lifetimes. Many cultures did it orally before writing, but with its advent that became easier. Of course, the ideas that were published made it further than those that did not, and those who controlled what was published controlled the way we read history. That’s pretty well accepted now.

Within that we got biases in what was passed along. It’s unfortunate, it’s true, and it’s unfortunately human. Even so, since other people with different perspectives could write on the same topic, a critical thinker could compare the ideas and decide on a perspective, combine perspectives, or reject them.

In turn, they would write – standing, as Isaac Newton would say, on the shoulders of giants.

Now we have generative artificial intelligence writing things without thought – summarizing ideas based on whatever the owners of the generative artificial intelligences feed them, and they do that with little or no worry of transparency. Say what you want about humanity, we at least acknowledge voids in what was written throughout history.

Generative AI, so far, does not. And the books of history will be rewritten across generations.

We do not think across generations often, and perhaps we should. In the end it’s what we don’t know that gives us the best questions, and technology that only gives us answers is useless in this regard.

How I use social media.

Daily writing prompt
How do you use social media?

It’s not often that I respond to WordPress.com writing prompts. “How do you use social media?” popped up, and using the WordPress.com reader I started looking through the responses.

That’s the key thing for me when it comes to social media – looking for things of worth, people with good ideas to discuss, etc. It’s about finding pieces of a puzzle that doesn’t have a picture on the box to refer to.

Scrolling through the responses, there are a few that mention making money, like the old advertisements in the backs of 80s magazines with their get-rich-quick schemes. The way to make money off social media is telling people how to make money off social media. The way to make money off a product or service is to have a good product or service and not shoot oneself in the foot when marketing it.

I don’t use social media to make money. I don’t use social media to be popular. What I do use social media to do is explore other perspectives, but in an age where everyone and their mother is training artificial intelligence, I am leery of social media sites. Even WordPress.com has been compromised in that regard, though you can take action and not be a part of it.

Unknowingly, many people are painting the fence of generative AI, giving the large companies building and training generative artificial intelligences the very paint and brushes that they need to sell back to them. It confounds me how many people on LinkedIn, Facebook, Twitter, Instagram and TikTok are going out of their way to train artificial intelligences.

This is why I have gone to Mastodon. The Fediverse offers me some protection in privacy. I post links on social media accounts to the stuff I write, but I only really interact on the Fediverse, where I feel more secure and am less likely to paint generative AI’s fences.