Beyond Artificial Intelligence

In my daily readings yesterday, I came across Unleashing the Power of AI: Preparing for Social Change alongside Technological Advancements at a blog that I regularly read. It’s an odd one out for the blog, so it definitely caught my eye. The blog Be Inspired! is worth a read; this was just a single tech-related post on something I have been writing about.

She hit some of the high notes, such as:

…Furthermore, the unequal access to and distribution of AI technology may exacerbate societal divisions. There is a significant risk of deepening the digital divide between those who have access to AI advancements and those who do not. To bridge this gap, it is crucial to implement inclusive policies that promote equal access to AI education and training across all demographics. Efforts should be made to democratize access to AI tools, ensuring that everyone has equal opportunities to benefit from this technological revolution…

Unleashing the Power of AI: Preparing for Social Change alongside Technological Advancements, Be Inspired, July 23rd, 2023.

It is interesting that I noted a ‘voice change’ with the author – having read her blog, this isn’t her standard fare – but the high points were hit well. It also happens that the quotation above is something that keeps getting thrown around as if someone else is going to do it.

Someone else is not going to do it. I noted with ChatGPT and others, when I have used them and asked questions about how artificial intelligence will impact human society, that the large language models are very good at using those neural networks, deep learning algorithms and datasets we have no insight into to say, in a very verbose way, “Well, you silly humans need to fix yourselves.”

Of course, it’s not sentient. It’s predictive text on steroids, crack and LSD. It only has information given to it, and that information likely included stuff written by people who said, “Hey, we need to fix these issues”.

Well, we do need to fix a lot of issues with human society, even as the hype cycle spits out a lot of gobbledygook about the technological singularity while glossing over the issues of bias in the data that the authors of our future point out every now and then.

Yet there will always be bias, and so what we are really talking about is human bias – and when we speak as individuals, we mean our own bias. If you’re worried about losing your income, that’s a fair bias and should be in there. It shouldn’t be glossed over with ‘well, we will need to retrain people and we have no clue beyond that’. If you’re worried that the color of your skin doesn’t show up when you generate AI images, that too is a fair bias – but to be fair, we can’t all be shades of brown, so we need to address that personal bias in our expectations as well.

It does bug me that every time I generate an image of a human it’s of those with less pigment, and even then I’m not sure which shade of brown I want. It’s boggling to consider, and yes, it does reinforce stereotypes. Very complicated issue that happens because we all want to see familiar versions of ourselves. I have some ideas, but why would I share them publicly where some large language model may snatch them up? Another problem.

Most of the problems we have with the future of artificial intelligence stem from our past, and the past that we put into these neural networks – how we create our deep learning algorithms – is only a small part of it. We have lines on maps that were drawn by now dead people, reinforced by decades, if not centuries or even millennia, of degrees of cultural isolation.

We just started throwing food at each other on the Internet when social media companies inadvertently reinforced much of the cultural isolation by giving people what they want and what they want is familiar and biased toward their views. It’s a human thing. We all do it and then say our way is best. We certainly lack originality in that frontier.

We have to face the fact that technology is an aspect of humanity. Quite a few humans these days drive cars without knowing much about how they work. I asked a young man recently whether, when they tuned his car, they had lightened his flywheel, since I noticed his engine worked significantly harder on an incline, and he didn’t know what that was.

However, we do have rules on how we use cars. When drunk driving was an issue, some mothers stepped up and forced the issue, and now drunk driving carries an even higher risk than it already did: you might go to jail over it and ponder why that night had become so expensive. People got active; they pressed the issue.

The way beyond AI is not through the technology, which is only one aspect of our humanity. It’s through ourselves – all our other aspects of humanity, which should be more vocal about artificial intelligence. You might be surprised to learn that they were, well before this hype cycle began.

Being worried about the future is nothing new. Doing something about it, by discussing it openly beyond the technology perspectives, is a start in the right direction because all the things we’re worried about are… sadly… self-inflicted by human society.

After centuries of evolution that we think separate us from our primate cousins, we are still fighting over the best trees in the jungle – territory – in ways that are much the same, but also in new ways, where dominant voices projecting just the right confidence make people afraid to look stupid, even when their questions and answers may be better than those of the voices themselves.

It’s our territory – our human territory – and we need to take on the topics so ripe for discussion.

Are You Human?

Jim Henson’s ‘Kermit the Frog’ has always been a favorite character of mine, and he was only one facet of the amazing Jim Henson. He likely spent more time with me than my parents did, if one were to work out the puppet-hours.

In a somewhat flippant mood, I decided to ask one of the large language models how Kermit the Frog would explain a large language model. I used Google’s Bard, and the response was most certainly not what I wanted.

It largely didn’t seem to understand the concept of puppets, which is pretty amusing.

But it gave me a funny idea. It mentioned that Kermit might use an anecdote like, “I once got confused for a large language model”. If you think in the Kermit voice, it’s pretty funny.

In fact, read the rest of this post in that voice if you can. Really.

Maybe we should be asking everyone if they’re human. Every now and then, just look at someone suspiciously and say, “Are you one of us?”

I guarantee you that this is a very bad idea and you probably shouldn’t do it, because who knows what sort of cult you will unearth, or what sort of psychiatrist or psychologist law enforcement might put you in front of. If you point at this post and they find it, they will tell you this paragraph was the disclaimer.

Communicating: Don’t Depend on AI.

Our planet is full of communication. Elephants communicate over great distances, whales speak to each other across the depths and distances, and we do the same through a broad network we call the Internet now, built on the previous systems. Some might say it’s a nervous system of sorts, but if it’s a nervous system it does seem disconnected from a brain. Or maybe the brain hasn’t evolved yet. Maybe it never will.

I write this because when I was writing about the AI tools I use, which are spartan, I imagined a world where people relied so heavily on what’s marketed as artificial intelligence that they could no longer communicate with other human beings in person. It’s something that they’re writing papers on, and this one from 2021 seems pretty balanced. In some ways our technology helps; in some ways it hinders.

The paper, though, was before ‘AI’ became a popular thing, with even The Beatles helping make it famous. Maybe too famous for what it is, which at this point is really a bunch of clever algorithms trained on data that we collectively created. We’re amazed at well trained morons, who cleverly give us what they think we want, much as Netflix suggests things to show you. It’s different, but not very different.

When Grammarly came out, promising to make everyone better writers, I rolled my eyes, because what it really allowed was more consistent output. It allowed really crappy writers to look like good ones, and unless they really wanted to learn how to write better – they wouldn’t.

The fact that they’re probably still subscribing to Grammarly would make the point. If something is going to make you better at something, like training wheels on a bicycle, you can only tell if you take off the wheels. I’m willing to bet that people who have consistently used Grammarly are probably still using it because it did not make them better writers, it simply made it easier for them to appear that they wrote well.

I could be wrong. I don’t think I am, but if someone has some data on this either way, I’d love to see it.

Speaking for myself, though most of my professional life was in technology, the crux of what I actually did was communication. I could communicate with non-technical and technical people alike, which is something I still do online on RealityFragments.com and KnowProSE.com. I was known for it in some circles, making overcomplicated things simple and making the unnecessarily insular dialects of technology more accessible to people.

In all that time, what I learned is that to become a better writer, one has to read and one has to write. Reading rubbish is only good if you know it’s rubbish, because it gives you examples of what not to do when you’re writing. If you don’t know it’s rubbish, you might think it’s the right way to do things and go around spreading more rubbish.

Which brings us back full circle to these large language models that can’t really tell what is rubbish or not. They use probability to determine what is most acceptable – think average, banal – based on their deep learning models. The same is true of images and video, I imagine. Without a human around with some skill in knowing what’s rubbish and what isn’t, people will just be regurgitating rubbish to each other.
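The probability-picking described above can be sketched in a toy example. To be clear, the words and weights here are invented purely for illustration – this is not any real model’s vocabulary or API, just the shape of the idea: score the candidates, take the most statistically “acceptable” one.

```python
# Toy sketch of probability-based next-word selection (all values invented).
# A real LLM scores tens of thousands of tokens with a neural network;
# the principle is the same: the common, "average" continuation wins.
candidate_next_words = {
    "unprecedented": 0.02,  # original, rare in the training data
    "interesting": 0.35,    # safe, common
    "important": 0.40,      # safest, most common -- the banal choice
    "rubbish": 0.01,        # the model has no way to know if it is
}

# Greedy selection: take whatever the training data made most probable.
most_acceptable = max(candidate_next_words, key=candidate_next_words.get)
print(most_acceptable)  # -> important
```

Sampling strategies vary (temperature, top-k, and so on), but the pull is always toward what the training data made probable – which is exactly why the output tends toward the average.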

But who picks who has that skill? You can all breathe; it isn’t me. I’ve played with the large language models and found them wanting. They’re like college graduates who crammed for tests, have an infallible memory, but don’t understand the underlying concepts – which, by the way, is something we also allow to run around in places of authority making really poor decisions. It’s popular, though.

Communication is a skill. It’s an important skill. It’s such an important skill that if you find yourself using AI tools all the time to do it, I offer that you’re not just robbing yourself…

You’re polluting.

AI On AI, in the style of Douglas Adams.

I decided to challenge one of the large language models to see what it thought Douglas Adams might write about artificial intelligence.

Here’s what it wrote:

Dear hoopy froods and fellow lovers of technology,

Let me tell you that when it comes to artificial intelligence, my mind starts racing faster than a hyperactive cheetah riding a rocket to Neptune. The possibilities are endless, the implications mind-boggling, and the potential for chaos and hilarity unsurpassed.

Now, some people might be alarmed by the idea of machines getting smarter than us. But let me ask you this: have you seen the current state of humanity? There are some things even a toaster could do better. Besides, we can always teach the machines the importance of sanity, logic, and the occasional towel.

Imagine a world where AI could help us solve the most complex problems, from climate change to world hunger, while also bringing us closer to each other and to the wonders of the cosmos. Think of how much more efficient and creative our lives could be if we had a personal Jeeves that could anticipate our needs and wants before we even knew them.

But of course, the road to AI enlightenment isn’t paved with harmless chatbots and benevolent cyborgs. As any fan of the Terminator franchise will tell you, there’s always the risk of a rogue AI going all Skynet on us. We could end up with killer drones, mind-reading algorithms, and dating apps that are just too good at finding our perfect match.

And then there are the ethical dilemmas. What if an AI system decides that the best way to protect humanity is to enslave us? What if it decides that some human lives are worth more than others? What if it decides that the ultimate goal of existence is to maximize paperclip production?

As a wise man once said, the only way to deal with these uncertainties is to carry a towel. Because a towel is not just a useful accessory, it’s also a symbol of our humble origins and our infinite potential. A towel reminds us that for all our flaws and limitations, we are still capable of creating something that can surpass us.

With that in mind, let us embrace the future of AI with curiosity, skepticism, and a pinch of humor. Let’s explore the frontiers of consciousness and intelligence, while also remembering to be kind to our fellow humans, to other animals, and to the Earth that sustains us.

In other words, let’s be like Marvin the Paranoid Android, who may be the most intelligent being in the universe, but who still knows how to appreciate a good cup of tea and a miserable existence.

That’s it for now, folks. Stay tuned for more musings on AI, technology, and life, the universe, and everything.

Until next time, so long, and thanks for all the fish!

Yours in frivolity and profundity,

Douglas Adams’s ghost.

Now, what’s most interesting to me about this is not that it mimics Douglas Adams’ style passably, but that it lacks the depth that Douglas Adams had in just about everything he wrote. No artificial intelligence will bring that back.

Subjective AI Results.

Banality. I don’t often use the word, I don’t often encounter the word, and it’s largely because ‘unoriginal’ seems to work better for me. That said, one of the things I encountered while playing with Tumblr – a new toy for me – used it effectively and topically:

Project Parakeet: On the Banality of A.I. Writing nailed it, covering the same basic idea I have expressed repeatedly in things I’ve written, such as “It’s All Statistics” and “AI: Standing on the Shoulders of Technology, Seeking Humanity”.

It’s heartening to know others are independently observing the same things, though I do admit I found the prose a bit more flowery than my own style:

“…What Chatbots do is scrape the Web, the library of texts already written, and learn from it how to add to the collection, which causes them to start scraping their own work in ever enlarging quantities, along with the texts produced by future humans. Both sets of documents will then degenerate. For as the adoption of AI relieves people of their verbal and mental powers and pushes them toward an echoing conformity, much as the mass adoption of map apps have abolished their senses of direction, the human writings from which the AI draws will decline in originality and quality along, ad infinitum, with their derivatives. Enmeshed, dependent, mutually enslaved, machine and man will unite their special weaknesses – lack of feeling and lack of sense – and spawn a thing of perfect lunacy, like the child of a psychopath and an idiot…”

Walter Kirn, ‘Project Parakeet: On the Banality of A.I. Writing’, Unbound, March 18th, 2023.

Yes. Walter Kirn’s writing had me re-assessing my own opinion, not because I believe he’s wrong, but because I believe we are right. This morning I found it led to at least one other important question.

Who Does Banality Appeal To?

You see, the problem here is that banality is subjective, because what is original for one person is not original for another. I have seen people look shocked when I discovered something they already knew and expressed glee. It wasn’t original for them; it was original for me. By the same token, I have written and said things that I believe are mundane, only to have others think they are profound.

Banality – lack of originality – is subjective.

So why would people be so enthralled with the output of these large language models (LLMs), failing a societal mirror test? Maybe because the writing that comes out of them is better than their own. It’s like Grammarly on steroids, and Grammarly doesn’t make you a better writer; it just makes you look like you are a better writer. It’s like being dishonest on your dating profile.

When I prompted different LLMs about whether the quality of education was declining, the responses were non-committal, evasive, and some more flowery than others in being so. I’d love to see an LLM say, “Well shit. I don’t know anything about that”, but instead we get what they predict we want to see. It’s like asking someone a technical question during an interview that they don’t have the answer to, and they just shoot a shotgun of verbiage at you, a violent emetic eruption of knowledge that doesn’t answer the question.

“I don’t know”, in my mind, is a perfectly legitimate response and tells me a lot more than having to weed through someone’s verbal or written vomit to see if they even have a clue. I’m the person who says, “I don’t know”, and if it’s interesting enough to me for whatever reason, the unspoken is, “I’ll find out”.

The LLMs can’t find out. They’re waiting to be fed by their keepers, and their keepers have some pretty big blind spots, because we, as human beings, have a lot more questions than answers. We can hide behind what we do know, but it’s what we don’t know that gives us the questions.

I’ve probably read about 10,000 books in my lifetime, give or take, at the age of 51. This is largely because I am of Generation X, and we didn’t have the flat screens children of later generations have had. Therefore, my measure of banality, if there could be such a measure, would be higher than that of people who have read less – and that’s just books. There are websites, all manner of writing on social media, the blogs I check out, etc., and those have become more refined because I have a low tolerance for banality and mediocrity.

Meanwhile, many aspire to see things as banal and mediocre. This is not elitism. This is seen when a child learns something new and expresses joy: an adult looks at them in wonder, wishing that they could enjoy that originality again. We never get to go back, but we get to visit with children.

Going to bookstores used to be a true pleasure for me, but now when I look at the shelves I see less and less that is new, the rest a bunch of banality with nice covers. Yet books continue to sell, because people don’t see that banality. My threshold for originality is higher, and in a way it’s a curse.

The Unexpected Twist

In the end, if people actually read what these things spit out, the threshold for originality should increase, since after the honeymoon period is over with their LLM of choice, they’ll recognize the banality.

In a way, maybe it’s like watching children figure things out on their own. Some things cannot be taught; they have to be learned. Maybe the world needs this so that it can appreciate more of the true originality out there.

I’m uncertain. It’s a ray of hope in a world where marketers would have us believe in a utopian future that they have never fulfilled while dystopia creeps in quietly through the back door.

We can hope, or we can wring our hands, but one thing is certain:

We’re not putting it back in the box.

Happy, Strong, Tough.

Sipping my coffee this morning I began thinking about how we start happy and we become strong and tough because the world demands it of us.

It happens faster for some of us than others. Some, it seems, don’t manage to become strong or tough. Through all of this, we aspire for various reasons to be happy – some sort of Holy Grail that everyone seeks, charlatans claim to find, and maybe some of us enjoy for periods.

We have tools now that generate images, even videos, and I wondered what the difference would be between a happy boy, a strong boy, and a tough boy. I used DeepAI to generate the images, since I use DeepAI to generate most images these days.

The idea is that all the images used for training these ‘artificial intelligences’ carry our own biases, showing us what whatever we wish for would look like – implicitly trying to create something we would like, based on that training data.

The first image is the happy boy. DeepAI conjured up a boy with a mild smile, not terminally ecstatic, with a dirty t-shirt and messed up hair. I think that’s relatable.

The strong boy, the next image, seems less happy. Determined, maybe, with musculature that some body builders are training for right now. The brow is slightly furrowed, it seems, and the eyes seem determined.

I remember in my youth always wanting to be stronger. Being stronger meant being more helpful with carrying things, moving things around, and the competition between boys as to who was stronger was a part of my own youth.

Society values the load we can carry, I suppose. I’m certain that there are happy and strong boys out there. I think I was one. There is a certain accomplishment to becoming increasingly strong. Being smart wasn’t something I generated images for here, but it was much the same thing.

Yet the images that DeepAI generated about strong boys did not seem to be happy. I could have cheated and asked for both, but I wanted to see what the large language model and image training would come up with.

This was one of the better images.

Being tough, though, comes from adversity. To an extent, so does strength, but being tough requires more. Grit, as it were, is being able to push past obstacles physically, emotionally and mentally.

This last image in the post is of a tough boy.

Being tough means going through periods where there is no happiness. Being tough requires strength, be it of body, mind or emotion.

‘Tough’ means pushing forward, maybe because of hope, or maybe because there is no other way. When things seem to be without hope, one has to be the toughest, pushing forward in the hope that one day it won’t be so tough.

Some people become tougher than others. Adversity creates ‘tough’, and maybe because of that we don’t see ‘tough’ as frequently in children in the developed world as we do in the developing world.

Yet when we think of a child – any child – we adults like to think of them as happy. Not strong. Not tough. We are stronger, we are tougher, and I think while it’s necessary for children to become these things we also miss our own childhood happiness and do not wish to see it leave the faces of children.

And what do adults want? “To be happy”, we hear all too often.

It might be interesting to see what these might look like with girls. Since I’m not a girl I chose not to do those because I wouldn’t relate. It might be interesting to see those results in a post by a woman.

Who Are We? Where Are We Headed?

We used to simply dangle from the DNA of our ancestors, then we ended up in groups, civilizations, and now that we have thoroughly infested the planet we keep running into each other and the results are so unpleasant that at least some people are renting a virtual, artificial girlfriend for $1/minute.

It’s hard not to get a little existential about the human race with all that’s going on these days with technology, the global economy, wars, and where people are focusing their attention. They’re not really separate things. They’re all connected in some weird way, just like most of humanity.

They are connected in logical ways, we like to think, but when you get large groups of people logic has an odd tendency to make way for rationalization. There are pulls and tugs on the rug under the group dynamics, eventually shaking some people free of it for better or worse.

This whole ‘artificial intelligence’ thing has certainly escalated technology. The present red dots in this regard are about just how much the world will be improved by it. We’ve heard that before, and you would think that with technology now reflecting more clearly our own societies through large language models that we might be more aware that we’ve all heard these promises before.


I can promise you that for the foreseeable future, despite technological advances, babies will continue being born naked. They will come into the world distinctly unhappy at having to leave a warm and fluid space for a colder, less fluid one. From there, they seem to have less and less time before some form of glowing flat screen is made available to them, replete with things marketed toward them.

It would be foolish to think that the people marketing stuff on those flat screens are all altruistic and mean the best for the children, as individuals and as humanity. They’re trying to make money. Everyone’s trying to make money.

I don’t know whether this is empirically true, but it seems to me that when I was a child, people were more interested in creating value than making money. If they created value, they got paid, so that they could continue creating value. It seems, at least to me, that we’ve been pretty good about removing value from the equation of life.

This is not to say I’m right. Maybe values have changed. Maybe I’m an increasingly dusty antique that every now and then shouts, “Get off my lawn!”. I don’t think I’m wrong, though, because I do encounter people of younger generations who are more interested in value than money, but when society makes money more important than value, then everything becomes about money and we lose… value.

To compensate, marketing tells people what they should be valuing to be the person that they are marketed to become.

I don’t know where this is going, but I think we need to switch drivers.

Maybe we should figure out who we are and where we want to go. Without advertising.

Incoming: The Tide of Marketing.

Browsing Facebook, I come across this in my feed and it’s as if they read what I wrote in Silent Bias:

…With social media companies, we have seen the effect of the social media echo chambers as groups become more and more isolated despite being more and more connected, aggregating to make it easier to sell advertising to. This is not to demonize them, many bloggers were doing it before them, and before bloggers there was the media, and before then as well. It might be amusing if we found out that cave paintings were actually advertising for someone’s spears or some hunting consulting service, or it might be depressing…

Almost on command, this shows up in the main feed on Facebook – sponsored content by Google. I haven’t used Bard, but I fear I have suffered Bard’s work because… I imagine that they used Bard to generate that advertising campaign for Bard.

The first thing that every sustainable technology has to do is pay for itself. The magnitude of this, though, is well beyond cave drawings. As it is, marketing has used a lot of psychology to get people to chase red dots. Now that this has become that much ‘easier’ for humans, and now that it’s being marketed as a marketing tool…

How much crap do you not need? We need to be prepared for the coming tide of marketing bullshit.