Manipulation In The Age of AI – And How We Got Here.

We understand things better when we can interact with them and see an effect. A light switch is a perfectly good example.

If the light is off, we can assume the switch is in the off position. A power outage makes this assumption flawed, so we look around to see whether other things that require electricity are on.

If the light is on, we can assume the light switch is in the on position.

Simple. Even if we can’t see, we have a 50% chance of getting this right.

It gets more complicated when we don’t have an immediate effect on something, or can’t have an effect at all. As I wrote before, a lot of what we use every day works in ways its users don’t understand. This is sometimes a problem. Are nuclear reactors safe? Will planting more trees in your yard impact air quality in a significant way?

This is where we end up trusting things. And sometimes, these things require skepticism. The world being flat deserves as much skepticism as it being round, but there’s evidence all around that the world is indeed round. There is little evidence that the world is flat. Why do people still believe the earth is flat?

Shared Reality Evolves.

As children, we learn by experimenting with the things around us. As we grow older, we lean more on information and trusted sources – like teachers and books – to tell us what is true. My generation was the last before the Internet, so whatever information we got was peer reviewed, had passed muster with publishers, and so on. There were many hoops to jump through before something went out into the wild.

Yet because we read the same books and magazines and saw the same television shows, we had a shared reality: one we had, to an extent, agreed upon, and one that, in some ways, was forced on us.

The news was about reporting facts. Everyone who had access to the news had access to the same facts and could come to their own conclusions, though to say there wasn’t bias then would be dishonest. It just happened more slowly, and because it happened more slowly, more skepticism came into play, so faking things was harder to do.

Enter The Internet

It followed that the early adopters (I was one) were akin to the first car owners, because we understood the basics of how things worked. If we wanted a faster connection, we figured out what was slowing our connections, largely without search engines – and then search engines made it easier. Websites with good information were valued; websites with bad information were ignored.

Traditional media increasingly found that the Internet business model was based on advertising, and their traditional methods of advertising didn’t translate well to it. To stay competitive, some news became opinion and began to spin toward getting readers to click through to advertisers. The Internet was full of free information, and they had to compete.

Over a few decades, the Internet became more pervasive, and the move toward mobile phones – which aren’t mainly used as phones anymore – brought information to us immediately. Advertisers and marketers found that next to certain content, people were more likely to be interested in certain advertising, so they started tracking that. They started tracking us, and they stored all of that information.

Enter Social Media

Soon enough, social media came into being and suddenly you could target and even microtarget based on what people wanted. When people give up their information freely online, and you can take that information and connect it to other things, you can target people based on clusters of things that they pay attention to.

Sure, you could just choose a political spectrum – but you could add religious beliefs, gender/identity, geography, etc., and tweak what people see based on a group created from their actual interactions on the Internet. Sound like science fiction? It’s not.
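As a toy sketch of what that clustering looks like in practice (entirely made-up data and attribute names; real microtargeting pipelines are far more elaborate):

```python
# Toy illustration of microtargeting: bucket people by combinations
# of attributes, so each bucket can be shown different content.
from collections import defaultdict

users = [
    {"name": "A", "politics": "left", "region": "urban", "interest": "climate"},
    {"name": "B", "politics": "left", "region": "urban", "interest": "climate"},
    {"name": "C", "politics": "right", "region": "rural", "interest": "autos"},
]

def micro_targets(people, axes):
    """Group people by the tuple of values they have along the given axes."""
    groups = defaultdict(list)
    for person in people:
        key = tuple(person[a] for a in axes)
        groups[key].append(person["name"])
    return dict(groups)

# One axis gives coarse groups; more axes splinter the audience further.
print(micro_targets(users, ["politics"]))
print(micro_targets(users, ["politics", "region", "interest"]))
```

The more axes you add, the smaller and more precisely addressable each splinter becomes.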

Instead of a shared reality on one axis, you could target people on multiple axes.

Cambridge Analytica

Enter the Facebook-Cambridge Analytica Data Scandal:

Cambridge Analytica came up with ideas for how to best sway users’ opinions, testing them out by targeting different groups of people on Facebook. It also analyzed Facebook profiles for patterns to build an algorithm to predict how to best target users.

“Cambridge Analytica needed to infect only a narrow sliver of the population, and then it could watch the narrative spread,” Wylie wrote.

Based on this data, Cambridge Analytica chose to target users that were “more prone to impulsive anger or conspiratorial thinking than average citizens.” It used various methods, such as Facebook group posts, ads, sharing articles to provoke or even creating fake Facebook pages like “I Love My Country” to provoke these users.

“The Cambridge Analytica whistleblower explains how the firm used Facebook data to sway elections”, Rosalie Chan, Business Insider (archived), October 6th, 2019

This drew my attention because it impacted the two countries I am linked to: the United States and Trinidad and Tobago. Cambridge Analytica is known to have been involved in the Ted Cruz campaign (2016), the Donald Trump presidential campaign (2016), and interference in the Trinidad and Tobago elections (2010).

And in the timeline of all of that, things were only figured out years after the damage had already been done.

The Shared Realities By Algorithm

When you can splinter groups and feed them slightly different, or even completely different, information, you can impact outcomes such as elections. In the U.S., you can see it in the biases of television news channels – Fox News was the first to be noted. And when the average attention span is now 47 seconds, things like Twitter and Facebook (the sociotechnically dominant) can make this granularity finer and finer.

Don’t you know at least one person who believes some pretty wacky stuff? Follow them on social media; I guarantee you’ll see where it’s coming from. And it gets worse now, because AI has become more persuasive than the majority of people while critical thinking has not kept pace.

When you like or share something on social media, ask yourself whether someone with a laser pointer is just adding a red dot to your life.

The Age of Generative AI And Splintered Shared Realities

An AI attached to the works of humans

Recently, people worrying about AI in elections have been focusing primarily on deepfakes. Yet deepfakes are very niche and haven’t been that successful – probably because they have been the focus of attention, and therefore people are skeptical of them.

The generative AI we see, large language models (LLMs), was trained largely on Internet content. And what is Internet content, largely? The thing you can’t seem to view a web page without: advertising. Selling people stuff they don’t want or need. Persuasively.

And what do sociotechnical dominant social media entities do? Why, they train their AIs on the data available, of course. Wouldn’t you? Of course you would. To imagine that they would never use your information to train an AI requires more imagination than the Muppets on Sesame Street could muster.

Remember when I wrote that AI is more persuasive? Imagine prompting an AI on what sort of messaging would be good for a specific microtarget. Imagine asking it how to persuade people to believe it.

And imagine, in a society of averages, that the majority of people will be persuaded by it. What is democracy? People forget that it’s about informed conversations and go straight to the voting, because they think that is the measure of a democracy. It’s a measure, and the health of that measure reflects the health of the discussion preceding the vote.

AI can be used – and I’d argue has been used – successfully in this way, much akin to the story of David and Goliath, where David used technology as a magnifier. A slingshot effect. Accurate when done right, multiplying the force and decreasing the striking surface area.

How To Move Beyond It?

Well, first, you have to understand it. You also have to be skeptical about why you’re seeing the content that you do, especially when you agree with it. You also have to understand that, much like drunk driving, you don’t have to be drinking to be a victim.

Next, you have to understand the context other people live in – their shared reality and their reality.

Probably most important is not calling people names because they disagree with you. Calling someone racist or stupid is a recipe for them to stop listening to you.

Where people – including you – can be manipulated by divisive content shown in your feeds, find the common ground. The things that connect. Don’t let entities divide us. We do that well enough by ourselves without suiting their purposes.

The future should be about what we agree on, our common shared identities, where we can appreciate the nuances of difference. And we can build.

Read From The Future, Words Look Silly

[Image: ‘SEO’ in Scrabble tiles standing on edge, lit from the top left with shadows falling to the lower right; random blurred tiles lie flat behind.]

I read the news every morning with my coffee – the stereotypical man who reads the newspaper, modernized to scanning things through his phone and computer. It’s a terrible way to start the day, but in the information age we have to know what’s going on or we’re stuck in the mud.

Certain words and phrases leap out. With wars all over the world – Sudan isn’t covered that well – I noted ‘massive attacks’ all over the place, and it led me to wonder how a massive attack now compares to a massive attack in the past or future. Then you generally see how many people were killed, hidden somewhere in the article, and this morning I saw 4 as the number. I’ve seen ‘massive’ used for much larger numbers, but apparently the threshold for a massive attack is now 4 people dead.

This is not to say that every person that dies is not significant. Every life has value. Yet when we’re reading about loss of life, as in this example, SEO and Sensationalism create false weights in things just so that something is read.

In the digital age, where attention spans are short and competition for clicks is fierce, sensationalism and SEO-driven writing have taken center stage. While these techniques are undeniably effective at grabbing attention, they often come at the cost of depth, authenticity, and meaningful communication. The quest for virality and search engine dominance has diluted the essence of quality content.

And it will impact us through LLMs that are trained on scraped content.

The Rise of Sensationalism

Sensationalism thrives on exaggeration and emotional manipulation. Headlines like “You Won’t Believe What Happened Next” or “The Shocking Truth About [Topic]” are designed to spark curiosity, but they rarely deliver on their promises. Instead of providing valuable insights or fostering genuine understanding, sensational content often:

  • Overpromises and underdelivers.
  • Oversimplifies and even misrepresents complex issues.
  • Promotes clickbait over substance.
  • Diminishes reading comprehension.

While sensationalism might temporarily increase page views, it erodes trust over time. And time erodes some words too. Readers become wary of exaggerated claims and disengage, ultimately harming the credibility of the writer or publication. Do you want to trade short term revenue for reputation? That seems to be what a lot of online marketing focuses on.

Consider the word ‘new’ in an article from 1985. It has no real meaning in 2024 except to tell people something was new when the article was written. Hopefully there’s a date on it – some sites don’t show the date something was published. If there is, the reader might realize that the ‘new’ Windows 1.0 is not that new.

Talking about the future, too, didn’t work out well for most writers except those that imagined a future that was compelling enough for people to work towards it.

The Impact of SEO on Content Depth

Search Engine Optimization (SEO) is a powerful tool for visibility, but its misuse has led to a formulaic approach to writing. Content creators are often pressured to prioritize keywords, meta descriptions, and search algorithms over the quality of their message. This focus results in:

  • Keyword Stuffing: Repeating the same phrases disrupts the natural flow of content, making it feel robotic and unnatural.
  • Shallow Information: Articles are designed to rank high in search results but rarely offer comprehensive insights. The goal becomes “ranking” rather than “resonating.”
  • Homogenized Content: SEO encourages following trends, which can lead to an echo chamber where originality and diverse perspectives are lost.

By 2024 we had been talking about echo chambers for years. Social networks get the blame, and yet the people who share content to those networks have often already been inside algorithmic echo chambers built around some of that content.

The Dilution of Meaning

When sensationalism and SEO-driven tactics dominate content creation, the essence of meaningful writing is lost. Here’s how:

  • Complex Issues Are Simplified: Topics that deserve nuance and careful exploration are reduced to soundbites or listicles, or worse yet, sticky infographics surrounded by content made to come out on top of an algorithm.
  • Authenticity Is Compromised: Writers often prioritize what sells over what matters, leading to a loss of personal voice and integrity.
  • Readers Are Left Unsatisfied: Audiences craving depth and understanding find themselves wading through superficial content, unable to uncover real value. I know this is the main issue I have and have had for decades, and increasingly so.

Finding Balance: Writing with Integrity

Does this mean we should abandon SEO and engaging headlines altogether? Not at all. Instead, writers and marketers can aim for a balance between visibility and value:

  1. Prioritize Authenticity: Write with your audience in mind, not just algorithms. Focus on what they truly need and want to learn. We all have to play the game, but we can choose how we play the game.
  2. Use SEO Strategically: Incorporate keywords naturally, ensuring they enhance rather than detract from the content.
  3. Deliver on Promises: If your headline promises something extraordinary, make sure your content lives up to it.
  4. Focus on Depth: Invest time in research, analysis, and thoughtful writing. Readers appreciate content that goes beyond the surface.

Conclusion

Sensationalism and SEO writing are not inherently bad, but when they overshadow the purpose of content, meaning is inevitably diluted. As creators, we have a responsibility to prioritize authenticity and depth over cheap tricks and fleeting trends. In a world flooded with shallow content, meaningful writing stands out—and that’s what readers remember.

And if you need pain to reinforce this, consider what happens when algorithms change and your content is suddenly not as popular. It has happened before; it will happen again. What’s worse, that content might be scraped by someone training an LLM, so that it will spit out that gobbledygook thinking it qualifies as ‘good writing’.

Good content, at least in my opinion, should last. That’s why Gutenberg.org is filled with classics that people want to read.

By committing to substance over sensationalism, we can create content that not only captures attention but also earns respect and fosters trust. And in the long run, isn’t that what truly matters?

Further reading:

You can revel in how the SEO works in those articles, or doesn’t.

Scraping A Living Out Of the Age of Scrapers

One of the reasons I have not been writing as much for the past few months was analysis paralysis. For years in corporate technology settings, I promised myself that I would make good on getting to writing at some point. I made a decision in the early 1990s to pay bills and help support my parents rather than be the broke writer who had to compromise himself to earn a living. The world changed.

The plans I was in the process of making concrete were hit with the phosphoric acid of LLM training and competition even as I was laying the cornerstone.

My mother was a writer. She self-published in the 1970s through the 1990s by getting her poetry printed and, as far as I know, she never broke even. She kept writing anyway, and I think she was pretty good despite some of the opinions she expressed – she expressed them well.

There was one poem she wrote about how poets were esteemed in Somalia and given prominence – I can’t seem to find it, as she sadly didn’t publish it online – but the gist was that there are, or were, parts of the world where poetry was important. By extension, writing was important, and writing was respected.

Writers were seen as noble artisans of the written word, earning their keep through the sweat of ink that scrawled out of their hands and, later, keyboards. In today’s digital Wild West, writing for money online feels a bit like leaving cookies on the counter of a house full of raccoons. You’re crafting something delightful, but someone, somewhere, is plotting to grab it and run—no credit given, no crumbs left behind. Even online publishing with Amazon.com is fraught with such things, and the only way to keep from the dilution of your work is to dilute it yourself.

We find ourselves aliens in a world we created. Inadvertently, I helped build this world, as did you, either by putting pieces of code together to feign intelligence (really, just recording our own for replays) or our demand for fresh content that sparked imagination.

Welcome to the age of content scraping, where your genius headlines and painstakingly researched prose are more at risk than a picnic basket in Jellystone Park.

The Scraper Apocalypse: Who Stole My Blog?

As mentioned previously, I moved off of WordPress.com mainly because of the business practices of Automattic – particularly the aspect where they made a deal to use the content on WordPress.com as training data for LLMs with the default set for users to agree to it. Why on Earth would anyone agree to it?

Scraping is nothing new. Companies—and bots with names like “Googlebot” (friendly) or “AggressiveRandoBot42” (not so much)—are prowling the internet, vacuuming up your hard-earned words faster than you can type them to train their Large Language Models, or AIs. Many of them aren’t even considered shady; it’s not just sketchy websites in the far corners of the web doing this. You don’t know who is doing it.

What’s left for you? Crumbs, if you’re lucky. So, how do you stay ahead of the scrapers while still getting paid?

Step 1: Write With a Purpose

Personally, I’m not too focused on monetization, and yet I drink coffee that costs money. It’s a reality for all of us, a system we were born into where I can’t pay my bills by just sending people words.

Let’s start with the golden rule of online writing: never write for free unless it’s a passion project—or revenge poetry about scrapers. Every blog post, article, or eBook should have a direct or indirect income stream attached.

  • Direct Revenue: Ads, sponsored posts, or affiliate marketing links.
  • Indirect Revenue: Use your content to build an email list or funnel readers toward a product or service you offer.
  • Be yourself: technologies are increasingly mimicking, but they can’t do what you do.

Scrapers might steal your content, but they can’t siphon your strategy directly. They can, however, adapt quickly based on the information they get, so you have to stay on your toes.

Step 2: Build the Fortress: Protecting Your Content

If you’ve ever tried to protect your lunch from a determined coworker, you’ll understand this analogy: scrapers don’t care about rules. But you can make their lives harder.

  • Add Internal Links: Keep scrapers busy by linking to other parts of your site. If they scrape one post, they get a tangled web that leads readers right back to you.
  • Use Watermarks in Imagery: For visual-heavy posts, watermark your images with your logo or website URL. It’s digital branding in action.
  • Insert Easter Eggs: Include subtle shout-outs to your own name or brand. Scrapers might miss these, but real readers won’t. You do know you’re on RealityFragments.com, right?
  • Consider Subscriptions: Some websites allow you to close your content to subscribers, but you need consistent readers to pull that off.
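If the scrapers you’re worried about at least pretend to be polite, you can also ask them to stay out via robots.txt. A minimal sketch (the user-agent names below are real AI-related crawlers as of this writing, but new ones appear constantly, and the rude ones ignore this file entirely):

```
# robots.txt – ask known AI training crawlers to stay out
User-agent: GPTBot           # OpenAI's training crawler
Disallow: /

User-agent: CCBot            # Common Crawl, widely used for LLM training
Disallow: /

User-agent: Google-Extended  # Google's AI-training opt-out token
Disallow: /
```

It’s honor-system only, which is rather the point of this section: you make scraping harder, not impossible.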

Step 3: Turn the Tables—Use Scrapers to Your Advantage

Here’s the plot twist: sometimes, scrapers inadvertently help you. When your content gets stolen but still links back to you, it can drive traffic to your site.

But how do you ensure that happens?

  • Embed Links Thoughtfully: Include links to high-value content (like an eBook sales page or an email sign-up form). If they scrape your post, their audience might still end up on your site.
  • Use Syndication Smartly: Syndicate your content to reputable platforms, as few as they are. These platforms might outrank the scrapers and help your original post shine. Also note that when you post to them, you should still expect your content to be scraped.
  • Use LLMs to check your own work: LLMs are trained by scraping. I like to make sure my own writing is fresh and original, so I have LLMs I installed that are disconnected from the Internet (instructions on how to do that with Ollama here) to check just that. I’ve found it very helpful for making sure I’m original, since they scrape everyone else’s content… (and probably mine).

Step 4: Embrace the Impermanence

At the end of the day, the internet is a giant soup pot, and everyone’s stirring it. You can’t stop all the scrapers, but you can focus on making your content work harder for you.

  • Repurpose: Turn blogs into videos, podcasts, or infographics. It’s much harder for scrapers to steal a voiceover than it is to Ctrl+C your text. I stink at this and need to get better.
  • Engage Directly: Build relationships with your audience through comments, newsletters, and social media. Scrapers can’t steal community.
  • Focus on Creating: A creator creates, and the body of work is greater than the sum of its parts. I think of this as the bird that can land on a branch not because it trusts the strength of the branch, but because it trusts its wings. Trust your wings.

Conclusion: Keep Writing Anyway

Writing for a living online in an era of content scraping is a lot like running a lemonade stand during a rainstorm. It’s messy. There are periods of disheartenment. Life is not easy. But when the sun comes out—or when the right reader finds your work—it’s worth it.

So, write boldly, monetize smartly, and remember: scrapers might steal your words, but they can’t steal your voice – though Sir David Attenborough could argue that. They cannot take away your ability to be human and create.

Stay witty, stay scrappy, and may your words always pay their rent and bring someone value.

The LLM Copilot is More of a Companion.

I almost forgot to write something here today. I’ve been knocking out scenes and finding the limitations of the LLM as I go along, which is great.

The particular LLM I’m working with is llama3 which I’ve tweaked and saved as I’ve worked with it.
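For context, “tweaked and saved” here is the Ollama workflow: you derive a new model from llama3 with a Modelfile. A minimal sketch (the model name, temperature, and system prompt are illustrative, not my actual settings):

```
# Modelfile – derive a customized writing companion from llama3
FROM llama3
PARAMETER temperature 0.6
SYSTEM """You are a blunt writing companion. Ask questions about each
scene the author pastes in; never write prose for them."""
```

Then something like `ollama create writing-companion -f Modelfile` builds it, and `ollama run writing-companion` starts a session with it.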

It’s fun because it sucks.

It can easily handle analyzing about 500–1,000 words at a time – figure a scene at a time. Meanwhile, it forgets all the other scenes it has seen. It does ask pretty decent questions within a scene, which is a nice way to make sure that the parts it can’t answer are the ones you don’t want the reader to be able to answer yet. It echoes the questions a reader might ask – if they have memory issues.

It’s terrible, however, at following along with what was previously written. Despite saving and so on, it just lumbers along thinking each chunk of text stands all by itself, perhaps with some of the things you had written before. It mixes up character names, as an example.
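My workaround for that limit is mechanical: split the manuscript into scene-sized chunks before pasting them in. A trivial helper of my own (the word count is an approximate stand-in for the model’s actual context window):

```python
# Split a manuscript into roughly scene-sized chunks for a local LLM
# that can only consider about 500-1000 words at a time.
def chunk_words(text, max_words=800):
    """Return a list of chunks, each at most max_words words long."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

manuscript = "word " * 2000  # placeholder text
scenes = chunk_words(manuscript)
print(len(scenes))  # 2000 words at 800 per chunk -> 3 chunks
```

It doesn’t fix the forgetting, but it keeps each paste inside what the model can actually attend to.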

I’ve come to think of it as a funny mirror for writing. It kinda gets it, but not really. I’m happy with that. I’m the writer, it’s just a funny mirror I bounce ideas off of.

It never comes up with original ideas – how could it? It’s trained on things that have been written before, and sure, it can string words together in ways that impress some people – but only just so, based on what it has seen before.

It lacks imagination, vision, and because of that, it’s terrible for any form of long form prose. Maybe some LLMs are better at it, but I’m perfectly happy with it not being good at imagination and vision.

That’s my job.

What it does do, even when it screws up – especially when it screws up – is keep me on task. I don’t know how many other people have that particular issue, but if you do, LLMs are pretty good for that, for developing characters, and… for shaking your head at.

Is it worth the trouble of installing an LLM? I don’t know. For me, I think so. Having a goofy tool asking dumb questions is handy.

Writing With an LLM Co-Pilot

I wouldn’t characterize myself as an advocate for AI. I’m largely skeptical and remain so. Still, with generative AI all over and clogging up self-publishing with its slop, it’s impossible to ignore.

I’ve embarked on a quest to see whether generative AI that is available can help me in various ways. One of these ways is with writing, not in generating text but helping me write.

Since I don’t really want the companies that own AI services to have visibility into what I consider my original work, I installed my own LLM (easy enough) and set about experimenting. With it local, on my machine, I had control of it and felt safer sharing my thoughts and ideas with it.

I wondered how I would use it, so I tried it out. This idea I’ve been working into a novel needed a start, and yesterday I got that done with some assistance. Its advice on writing wasn’t bad, and it helped me write in more of an active voice by nagging me a bit when I had it look at my work – like a good editor, though not a replacement for a human editor.

The general theme I go with when writing is to get the draft done and re-read it later. Yesterday, I sweated out about 1,000 words of an introduction to the novel, with foreshadowing and introductions of some of the characters, who had placeholder names. Names seemed pretty important to me in the context of the novel, so they were sort of a ‘hold back’ on allowing me to write more fluidly – a peculiarity I have.

The LLM did provide me with names to pick from based on what I gave it, and I researched them on my own – and lo! – that was finally done. I had to rewrite some parts so that the text flowed better, which I must admit it seemed to once I took the LLM’s advice, though it does nag a bit on some style issues.

All in all, it was a productive day. I treated the LLM as something I could spitball with, and it worked out pretty well. This seems like a reasonable use case, while not letting it actually write anything, since an LLM is trained on a lot of text.

I’d tap out a few paragraphs, and paste it into the LLM to see what it thought, and it would be helpful. Since I was doing this as I wrote, it commented on the story as I went along and noticed things I had not, giving inputs like, “There is potential for tension between the characters here that might be worth exploring.”

Of course, it does seem to be equipped with a Genuine People Personality. It sometimes comes across as a bubbly personality that can grate on my nerves.

Much of what I did yesterday I could have done without it, but I think it saved me some time, and I’m more confident of that introduction as well. It is nice that I can be alone writing and have a tool that I can spitball ideas with as I go along. Is it for everyone? I don’t know. I can only tell you how I believe it helps me. At least I know it’s not going to blabber my ideas to someone else.

As I use it in other ways, I’ll re-evaluate the subscriptions I have to AI services like ChatGPT. I don’t need to be bleeding edge, I just want something that works for me. In the end, that’s how we should be measuring any technology.

Experimenting With LLMs.

I’ve installed AIs locally so I can do my own experimentation without signaling to tech bros what I’m doing. I’m trying to get away from the subscription models that they’re selling.

I’m auditioning various models to find their strengths and weaknesses, mainly to help me with infoglut. So much of what is written on the Internet is just a new rendition of the same crap, particularly with AI these days, and I want to find the things that are new, or that reveal something new from the same information.

If you want to know how to do this yourself, it’s not hard, and it costs nothing. I wrote up a quick ‘How To install your own LLM’ here.

This requires training a model. Presently I’ve been training Llama3. It had been a little too bubbly for my taste, but after a day of working with it and having it read a few books from Gutenberg.org, I fired it up this morning and this happened.

Now, it remembers who I am, which is always nice, but I decided to ask it what I should call it. Its answer is interesting. By saving the model after our interactions, it is learning to a degree – but it’s not human, and no, I know it’s not actually intelligent. It has been an interesting endeavor, though.

I’ve fed it some of my writing, and it called me out on not using enough active voice. That’s a good tip.

In all, the overall plan is to have it do some of the heavy lifting in dealing with infoglut. I spend way too much time daily reading stuff that isn’t worth reading because I don’t know it’s not worth reading until I’ve read it.

The plan is to outsource to ‘Teslai’, or whatever LLM model I choose in the future. By allowing it to get to know me – not something I would do with a LLM controlled by someone else – it might be able to tailor things better for me, not based on what I used to like, but based on the patterns it finds in my own behavior. And even then, like anything else, a healthy dose of salt with it.

Killing Off the Geese that Lay Golden Eggs

We all know the story of the goose that laid the golden eggs, and how the idiot who killed the goose got no more golden eggs. It has been considered good practice not to kill something that is producing important things for you.1

This is what some companies are doing, though, when it comes to AI. I pointed out here that companies were doing it before AI, too, though in the example of HuffPost the volunteers who once contributed to its success simply got left out in the cold.

It is a cold world we live in, and colder each day. Yet more people are being impacted by generative AI companies, from writing to voice acting to deepfakes of mentionable people doing unmentionable things.

Who would contribute content willingly to any endeavor when it could simply be used to replace them? OK, aside from idiots, who else?

I did hear a good counterexample, though. Someone who is getting paid to do research has no issue with his work being used to train an AI, and I understood his position immediately: he’s making enough, and the point of doing research is to have it used. But, as I pointed out, he gets paid, and while I don’t expect he has billions in the bank, I’d say that as long as he’s still getting paid to do research, all will be well for him.

Yet not all of us are. Everyone seems intent on the golden eggs except the geese that can lay them. If you can lay golden eggs, you don’t need to go killing geese looking for them. And because it seems that tech bros need reminding: dead geese do not lay eggs.

  1. I’ve often wondered if this is how Hindus came to not eat beef, as Indian cuisine relies heavily on the products of the cow – so a poor family killing a cow for meat would not make sense. Maybe not, but it’s plausible.

The Challenge.

While researching how to opt out of WordPress.com and Tumblr.com using my content to sell to Midjourney and OpenAI, I ran across some thoughtful writing on opting out of AI by Justin Dametz.

This is someone I likely wouldn’t cross paths with, since I’m not someone who is very interested in theology, which he writes quite a bit about. I imagine he could say the same about my writing, but we have a nexus.

His piece was written last year, and it echoes some of my own sentiments about the balance between AI and writing; he makes solid points about young people learning how to express themselves.

I tend to agree.

Yet, I am also reminded of learning calculus without a calculator. Scientific calculators were fairly new in the late 1980s when I learned calculus, and they even came solar powered so we wouldn’t have to fiddle with the batteries. These were powerful tools, but my class wasn’t allowed to use them until we had the fundamentals down. This, of course, did not stop us.

Speaking for myself, I wrote code in BASIC on an old Vic-20 that allowed me to check my answers. This didn’t really help me with homework or tests, since we were required to show our working – and if we did it the right way but got the wrong answer, we still got the majority of the points for the question. We had to demonstrate the fundamentals.
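For the curious: a modern equivalent of that old Vic-20 answer-checker might look something like the sketch below, in Python rather than BASIC. The function and the “worked answer” here are stand-in examples of mine, not the actual problems from that class – the idea is simply to compare a hand-worked derivative against a numerical approximation.

```python
# Sketch of a calculus answer-checker, in the spirit of that old
# Vic-20 BASIC program. The problem and worked answer below are
# stand-in examples, not the originals.

def f(x):
    return x**3 - 2*x          # the problem: differentiate this

def worked_answer(x):
    return 3*x**2 - 2          # the hand-worked derivative to check

def numeric_derivative(g, x, h=1e-6):
    # central-difference approximation of g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

# Compare the worked answer against the numerical check at a few points.
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    approx = numeric_derivative(f, x)
    assert abs(worked_answer(x) - approx) < 1e-4, f"mismatch at x={x}"
print("worked answer matches the numeric check")
```

None of this shows the working, of course – which is exactly why it was useless for the tests themselves.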

How does one demonstrate the fundamentals of writing? How does one demonstrate the ability to communicate without crutches? The answer is by ensuring none of the crutches are available to help. I suppose we could have writing done in Faraday cages in class to evaluate what students produce – or we could simply reward original writing, because the one thing artificial intelligence cannot do is imagine. While it can relate the human experience through a distillation of statistics and words, it doesn’t itself understand the human experience.

Generative AIs can spit out facts, narratives they’ve seen before, and images based on what they’ve been trained on – but they really add nothing new to the human experience except the ability to connect things across whatever human knowledge we have trained them on.

But how do we teach children how to write without it? How do we then teach students how to learn and be critical of the results we get?

First, we have to teach them to learn instead of chasing grades – a problem which has confounded us for decades – to have ability rather than titles and fancy pieces of paper to hang on the walls.

That’s the next challenge.

Imagination

Having now created the God of Technology, I started perusing things related to technology and found a post on the Arts & Crafts, How-To’s, Upcycling & Repurposing blog.

“Good and Bad” is based on a daily prompt, “What technology would you be better off without, why?”. Hidden within, Melodie writes:

Games…. Of course we had a Nintendo and Atari. But we had what is called our “imagination “. We went outside to play, rode our bikes. Were told to come home before dark. Now a days, kids are so zoned into the tv , cell phones, Xbox or Play station...

Imagination. We all sort of understand what it is, but what is it? The first dictionary definition (Merriam Webster) of imagination is:

“the act or power of forming a mental image of something not present to the senses or never before wholly perceived in reality”.

In other words, the ability to create within the reality of our own minds – minds which are fed by the senses but not limited to the senses. In its own way, imagination could be considered a sense.

I don’t know that there is more or less imagination since I was a child. What I do know is that technology is constantly begging us for attention because it massages the fun bits of our brains, and maybe kids these days aren’t allowed to imagine as much because imagination always works best for me with… time.

Imagination draws from things we have experienced, both perceived and imagined – in fact, memory and imagination tickle the same parts of the brain. It’s why memory isn’t as trustworthy as we would like to think, and why the hallucinations of artificial intelligences seem so much like the same thing.

Books, video games, movies, music – all of these things feed into what we imagine with, like a learning model for an artificial intelligence. The more you cram in there, the more tools you have – except that when you spend time reconsidering the same inputs, you find there’s more to them with a little imagination.

When I play a video game, watch a movie, etc., I’m also exploring someone else’s imagination – I’m not using my own – and maybe using our own is an important part of who we are as humans.

Replacing imagination with technology doesn’t seem like a great answer.

Subjective AI Results.

Banality. I don’t often use the word, I don’t often encounter the word, and it’s largely because ‘unoriginal’ seems to work better for me. That said, something I encountered while playing with a new toy for me, Tumblr, used it effectively and topically:

Project Parakeet: On the Banality of A.I. Writing nailed it, covering the same basic idea I have expressed repeatedly in things I’ve written, such as “It’s All Statistics” and “AI: Standing on the Shoulders of Technology, Seeking Humanity”.

It’s heartening to know others are independently observing the same things, though I do admit I found the prose a bit more flowery than my own style:

“…What Chatbots do is scrape the Web, the library of texts already written, and learn from it how to add to the collection, which causes them to start scraping their own work in ever enlarging quantities, along with the texts produced by future humans. Both sets of documents will then degenerate. For as the adoption of AI relieves people of their verbal and mental powers and pushes them toward an echoing conformity, much as the mass adoption of map apps have abolished their senses of direction, the human writings from which the AI draws will decline in originality and quality along, ad infinitum, with their derivatives. Enmeshed, dependent, mutually enslaved, machine and man will unite their special weaknesses – lack of feeling and lack of sense – and spawn a thing of perfect lunacy, like the child of a psychopath and an idiot…”

Walter Kirn, ‘Project Parakeet: On the Banality of A.I. Writing’, Unbound, March 18th, 2023.

Yes. Walter Kirn’s writing had me re-assessing my own opinion not because I believe he’s wrong, but because I believe we are right. This morning I found it led to at least one other important question.

Who Does Banality Appeal To?

You see, the problem here is that banality is subjective, because what is unoriginal for one person is original for another. I have seen people look shocked when I discovered something they already knew and expressed glee. It wasn’t original for them; it was original for me. By the same token, I have written and said things that I believed were mundane, only to have others think them profound.

Banality – lack of originality – is subjective.

So why would people be so enthralled with the output of these large language models(LLMs), failing a societal mirror test? Maybe because the writing that comes out of them is better than their own. It’s like Grammarly on steroids, and Grammarly doesn’t make you a better writer, it just makes you look like you are a better writer. It’s like being dishonest on your dating profile.

When I prompted different LLMs about whether the quality of education was declining, the responses were non-committal and evasive, some more flowery than others. I’d love to see an LLM say, “Well shit. I don’t know anything about that”, but instead we get what they expect we want to see. It’s like asking someone a technical question during an interview that they don’t have the answer to, and they just shoot a shotgun of verbiage at you – a violent emetic eruption of knowledge that doesn’t answer the question.

“I don’t know”, in my mind, is a perfectly legitimate response and tells me a lot more than having to weed through someone’s verbal or written vomit to see if they even have a clue. I’m the person who says, “I don’t know”, and if it’s interesting enough to me for whatever reason, the unspoken is, “I’ll find out”.

The LLM’s can’t find out. They’re waiting to be fed by their keepers, and their keepers have some pretty big blind spots because we, as human beings, have a lot more questions than answers. We can hide behind what we do know, but it’s what we don’t know that gives us the questions.

I’ve probably read about 10,000 books in my lifetime, give or take, at the age of 51. This is largely because I am of Generation X, and we didn’t have the flat screens that children of more recent generations have had. Therefore my measure of banality, if there could be such a measure, would be higher than that of people who have read less – and that’s just books. There are also websites, all manner of writing on social media, the blogs I check out, etc., and those have become more refined because I have a low tolerance for banality and mediocrity.

Meanwhile, many seem content with the banal and mediocre. This is not elitism. You see it when a child learns something new and expresses joy: an adult looks at them in wonder, wishing they could enjoy that originality again. We never get to go back, but we get to visit through children.

Going to bookstores used to be a true pleasure for me, but now when I look at the shelves I see less and less that is new, the rest a bunch of banality with nice covers. Yet books continue to sell because people don’t see that banality. My threshold for originality is higher, and in a way it’s a curse.

The Unexpected Twist

In the end, if people actually read what these things spit out, the threshold for originality should increase: after the honeymoon period with their LLM of choice is over, they’ll recognize the banality.

In a way, maybe it’s like watching children figure things out on their own. Some things cannot be taught, they have to be learned. Maybe the world needs this so that it can appreciate more of the true originality out there.

I’m uncertain. It’s a ray of hope in a world where marketers would have us believe in a utopian future that they have never fulfilled while dystopia creeps in quietly through the back door.

We can hope, or we can wring our hands, but one thing is certain:

We’re not putting it back in the box.