The Psychology of Machines.

Most people are familiar with Robert A. Heinlein’s work “Starship Troopers” because it was made into a movie. There were other movies based on his works, but never on my favorites.

One of those favorites is “The Moon Is a Harsh Mistress”. Within its pages, a young teenage version of myself found a lot to imagine about. In fact, the acronym TANSTAAFL (“There Ain’t No Such Thing As A Free Lunch”) became popular because of Heinlein. Yet that’s not my focus today.

My focus is on the narrator Manuel Garcia (“Mannie”) O’Kelly-Davis and his relationship with the Lunar Authority’s master computer, HOLMES IV (“High-Optional, Logical, Multi-Evaluating Supervisor, Mark IV”).

HOLMES IV became self-aware and developed a sense of humor. Mannie, who became friends with HOLMES IV, named it ‘Mike’, after Mycroft Holmes, Sherlock’s brother.

Mannie, a computer technician, ended up having a fairly complex relationship with Mike, and I thought about him being Mike’s psychologist. A computer technician as a psychologist for an artificial intelligence.

If you have read the book, you might see what I mean, and if you haven’t, I encourage it.

Throughout the years as a software engineer, I would jokingly call myself variations of a computer psychologist.

Now in 2023, artificial intelligence ‘hallucinations’ have become a thing, and Andy Clark’s book, “The Experience Machine: How Our Minds Predict and Shape Reality”, gives us a useful way to think about them:

“Since brains are never simply “turned on” from scratch—not even first thing in the morning when I awake—predictions and expectations are always in play, proactively structuring human experience every moment of every day. On this alternative account, the perceiving brain is never passively responding to the world. Instead, it is actively trying to hallucinate the world but checking that hallucination against the evidence coming in via the senses. In other words, the brain is constantly painting a picture, and the role of the sensory information is mostly to nudge the brushstrokes when they fail to match up with the incoming evidence.”

“The Experience Machine: How Our Minds Predict and Shape Reality”, Andy Clark, 2023.

The Marginalian has a post that pointed me to Andy Clark’s book, which I encourage you to take a look at.

When artificial intelligence folks talk about hallucinations, this is the only reference that makes sense, and yet I think ‘bullshitting’ might be more appropriate than ‘hallucinating’. Of course, ‘hallucinating’ is the more professional thing to say, and it could be correct in instances where the large language models are attempting to predict what the user wants. I’d have to study the code to know; the people using the term have, and I haven’t, so let’s go with hallucinations for now.

There may be a space in our future for ‘artificial intelligence psychologists’. Psychiatrists, not so much maybe.

This could be a fun topic worth exploring. Hacking what our minds create could help us understand ourselves better.

Are You Human?

Jim Henson’s ‘Kermit the Frog’ has always been a favorite character of mine, and he was only one facet of the amazing Jim Henson. He likely spent more time with me than my parents did, if one were to work out the puppet-hours.

In a somewhat flippant mood, I decided to ask one of the large language models how Kermit the Frog would explain a large language model. I used Google’s Bard, and the response was most certainly not what I wanted.

It largely didn’t seem to understand the concept of puppets, which is pretty amusing.

But it gave me a funny idea. It mentioned that Kermit might use an anecdote like, “I once got confused for a large language model”. If you think in the Kermit voice, it’s pretty funny.

In fact, read the rest of this post in that voice if you can. Really.

Maybe we should be asking everyone if they’re human. Every now and then, just look at someone suspiciously and say, “Are you one of us?”

I guarantee you that this is a very bad idea and you probably shouldn’t do it, because who knows what sort of cult you will unearth, or what sort of psychiatrist or psychologist law enforcement might put you in front of. If you point at this post and they find it, they will tell you this paragraph was the disclaimer.

Communicating: Don’t Depend on AI.

Our planet is full of communication. Elephants communicate over great distances, whales speak to each other across the depths and distances, and we do the same through a broad network we call the Internet now, built on the previous systems. Some might say it’s a nervous system of sorts, but if it’s a nervous system it does seem disconnected from a brain. Or maybe the brain hasn’t evolved yet. Maybe it never will.

I write this because when I was writing about the AI tools I use, which are spartan, I imagined a world where people relied so heavily on what’s marketed as artificial intelligence that they could no longer communicate with other human beings in person. It’s something researchers are writing papers on, and this one from 2021 seems pretty balanced. In some ways our technology helps, in some ways it hinders.

The paper, though, came out before ‘AI’ became a popular thing, with even The Beatles helping make it famous. Maybe too famous for what it is, which at this point is really a bunch of clever algorithms trained on data that we collectively created. We’re amazed at well-trained morons that cleverly give us what they think we want, much the way Netflix suggests things for you to watch. It’s different, but not very different.

When Grammarly came out, promising to make everyone better writers, I rolled my eyes, because what it really allowed for was more consistent output. It let really crappy writers look like good ones, and unless they genuinely wanted to learn how to write better, they wouldn’t.

The fact that they’re probably still subscribing to Grammarly makes the point. If something is going to make you better at something, like training wheels on a bicycle, you can only tell once you take the wheels off. I’m willing to bet that people who have consistently used Grammarly are still using it because it did not make them better writers; it simply made it easier for them to appear that they wrote well.

I could be wrong. I don’t think I am, but if someone has some data on this either way, I’d love to see it.

Speaking for myself, though most of my professional life was in technology, the crux of what I actually did was communication. I could communicate with non-technical and technical people alike, which is something I still do online on RealityFragments.com and KnowProSE.com. I was known for it in some circles, making overcomplicated things simple and making the unnecessarily insular dialects of technology more accessible to people.

In all that time, what I learned is that to become a better writer, one has to read and one has to write. Reading rubbish is only good if you know it’s rubbish, because it gives you examples of what not to do when you’re writing. If you don’t know it’s rubbish, you might think it’s the right way to do things and go around spreading more rubbish.

Which brings us back full circle to these large language models that can’t really tell what is rubbish or not. They use probability to determine what is most acceptable – think average, banal – based on their deep learning models. The same is true of images and video, I imagine. Without a human around with some skill in knowing what’s rubbish and what isn’t, people will just be regurgitating rubbish to each other.
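To make that concrete, here’s a toy sketch of the idea in Python. The words and counts are invented for illustration; real large language models learn probabilities with deep networks over enormous vocabularies, but the pull toward the statistically safest continuation is the same:

    # Toy "model": counts of which word followed which in some training text.
    # These counts are made up for illustration.
    counts = {
        "the": {"cat": 3, "dog": 5, "weather": 12},
        "weather": {"is": 9, "was": 4},
    }

    def most_likely_next(word):
        """Pick the highest-probability follower -- the safest, most banal choice."""
        followers = counts[word]
        total = sum(followers.values())
        best = max(followers, key=followers.get)
        return best, followers[best] / total

    print(most_likely_next("the"))  # ('weather', 0.6)

Given “the”, this toy always answers “weather”, because that’s what followed most often in its training data. Nothing gets evaluated for truth or rubbish; frequency wins.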

But who picks who has that skill? You can all breathe; it isn’t me. I’ve played with the large language models and found them wanting. They’re like college graduates who crammed for tests, have an infallible memory, but don’t understand the underlying concepts, which, by the way, is something we also allow to run around in places of authority making really poor decisions. It’s popular, though.

Communication is a skill. It’s an important skill. It’s such an important skill that if you find yourself using AI tools all the time to do it, I offer that you’re not just robbing yourself…

You’re polluting.

Beyond The Beatles: AI & The Music Industry.

I imagine George Harrison and John Lennon would have plenty to talk about if they were still alive today regarding the ethics of using artificial intelligence to get that one more Beatles song.

I wrote that in this instance, it might be ok, but it does open up a can of worms. And, as I noted on this good read about why the Beatles using AI should be ok, it seems more like The Beatles are being used to market artificial intelligence than vice versa. Gaining some credibility through The Beatles would definitely serve those invested in artificial intelligence with maybe some fleeting value for the song.

After all, whatever the song is, it will also be a song of controversy now, and for better or worse, how well the song is received will help define how AI is used in commercial music.

The music business is by itself a Game of Rights. Just a look at who has owned the rights to The Beatles’ music over the years should be an eye-opener. It’s presently Sir Paul McCartney and Sony/ATV, but it wasn’t always the case and it likely won’t always be the case.

Given the Copyright Extension Act, it may well change again. Copyrights and publishing rights rule the industry, and the actual artists involved generally have little to no say themselves on how their works will be used. Likeness of performance, though, seems to be falling under different rules.

“…The recording industry has quickly mobilised against artificial intelligence, launching a group called the “Human Artistry Campaign“, and warning that AI companies are violating copyright by training their software on commercially-released music.

Whether AI-written music can be copyrighted is still under debate. Under English copyright law, for example, works generated by AI, can theoretically be protected.

However, the US Copyright Office recently ruled that AI art, including music, can’t be copyrighted as it is “not the product of human authorship”…”

“Sting warns against AI songs as he wins prestigious music prize”, BBC, Mark Savage, 18 May 2023.

So ‘protecting’ these works is something being… negotiated. The music industry lobbyists will meet with the regulators and deal with public upheaval, and whatever compromise is to happen will happen.

Sting, in that article, makes the point that humans shouldn’t be taken out of the process and should drive it, and he’s in a position to say something. Sir Paul McCartney is in a similar position, if not a better one, and has done something.

Yet, really, the issue is even deeper than artificial intelligence and music. Popular music is demonstrably formulaic.

“…The findings are somewhat intuitive. Of course a genre will sell more once it forms an established sound that listeners can identify with. But the science is only proving the now-dominant truth of pop music: Record companies are only comfortable promoting things they already know will sell. And they know that now better than ever.

Record labels are pouring resources into data analysis tools, using them to predict which songs will be the next breakout hit. According to Derek Thompson at the Atlantic, executives can use services like Shazam and HitPredictor to see which songs will break out next with surprising accuracy…”

“Scientists Just Discovered Why All Pop Music Sounds Exactly the Same”, Mic.com, Tom Barnes, Jan 7, 2015.

They’ve already been publishing generic music; it’s what they do. Popular music is gauged by demand but defined by the music industry itself.

And from the artist angle, there’s already sampling. Speaking for myself, I like a good DJ remix, and that already means an original artist’s performance gets used in ways they may not expect or like.

In essence, the problems with artificial intelligence simply highlight the problems with the modern music industry – but the music industry itself doesn’t see a problem. Its bias becomes our bias, and they just say, “Buy us.”

They’re making money and spitting out generic music, and people are buying it. AI is hardly going to change that.

I miss the days when music was less formulaic.

AI On AI, in the style of Douglas Adams.

I decided to challenge one of the large language models to see what it thought Douglas Adams might write about artificial intelligence.

Here’s what it wrote:

Dear hoopy froods and fellow lovers of technology,

Let me tell you that when it comes to artificial intelligence, my mind starts racing faster than a hyperactive cheetah riding a rocket to Neptune. The possibilities are endless, the implications mind-boggling, and the potential for chaos and hilarity unsurpassed.

Now, some people might be alarmed by the idea of machines getting smarter than us. But let me ask you this: have you seen the current state of humanity? There are some things even a toaster could do better. Besides, we can always teach the machines the importance of sanity, logic, and the occasional towel.

Imagine a world where AI could help us solve the most complex problems, from climate change to world hunger, while also bringing us closer to each other and to the wonders of the cosmos. Think of how much more efficient and creative our lives could be if we had a personal Jeeves that could anticipate our needs and wants before we even knew them.

But of course, the road to AI enlightenment isn’t paved with harmless chatbots and benevolent cyborgs. As any fan of the Terminator franchise will tell you, there’s always the risk of a rogue AI going all Skynet on us. We could end up with killer drones, mind-reading algorithms, and dating apps that are just too good at finding our perfect match.

And then there are the ethical dilemmas. What if an AI system decides that the best way to protect humanity is to enslave us? What if it decides that some human lives are worth more than others? What if it decides that the ultimate goal of existence is to maximize paperclip production?

As a wise man once said, the only way to deal with these uncertainties is to carry a towel. Because a towel is not just a useful accessory, it’s also a symbol of our humble origins and our infinite potential. A towel reminds us that for all our flaws and limitations, we are still capable of creating something that can surpass us.

With that in mind, let us embrace the future of AI with curiosity, skepticism, and a pinch of humor. Let’s explore the frontiers of consciousness and intelligence, while also remembering to be kind to our fellow humans, to other animals, and to the Earth that sustains us.

In other words, let’s be like Marvin the Paranoid Android, who may be the most intelligent being in the universe, but who still knows how to appreciate a good cup of tea and a miserable existence.

That’s it for now, folks. Stay tuned for more musings on AI, technology, and life, the universe, and everything.

Until next time, so long, and thanks for all the fish!

Yours in frivolity and profundity,

Douglas Addams’s ghost.

Now, what’s most interesting to me about this is that it mimics Douglas Adams’ style passably… while lacking the depth that Douglas Adams had in just about everything he wrote. No artificial intelligence will bring that back.

Subjective AI Results.

Banality. I don’t often use the word, and I don’t often encounter it, largely because ‘unoriginal’ seems to work better for me. That said, something I encountered while playing with Tumblr – a new toy for me – used it effectively and topically:

“Project Parakeet: On the Banality of A.I. Writing” nailed it, covering the same basic idea I have expressed repeatedly in things I’ve written, such as “It’s All Statistics” and “AI: Standing on the Shoulders of Technology, Seeking Humanity”.

It’s heartening to know others are independently observing the same things, though I do admit I found the prose a bit more flowery than my own style:

“…What Chatbots do is scrape the Web, the library of texts already written, and learn from it how to add to the collection, which causes them to start scraping their own work in ever enlarging quantities, along with the texts produced by future humans. Both sets of documents will then degenerate. For as the adoption of AI relieves people of their verbal and mental powers and pushes them toward an echoing conformity, much as the mass adoption of map apps have abolished their senses of direction, the human writings from which the AI draws will decline in originality and quality along, ad infinitum, with their derivatives. Enmeshed, dependent, mutually enslaved, machine and man will unite their special weaknesses – lack of feeling and lack of sense – and spawn a thing of perfect lunacy, like the child of a psychopath and an idiot…”

Walter Kirn, ‘Project Parakeet: On the Banality of A.I. Writing’, Unbound, March 18th, 2023.

Yes. Walter Kirn’s writing had me re-assessing my own opinion, not because I believe he’s wrong, but because I believe we are right. This morning I found it led to at least one other important question.

Who Does Banality Appeal To?

You see, the problem here is that banality is subjective: what is original for one person is not original for another. I have seen people look shocked when I discovered something they already knew and expressed glee. It wasn’t original for them; it was original for me. By the same token, I have written and said things that I believe are mundane, only to have others think them profound.

Banality – lack of originality – is subjective.

So why would people be so enthralled with the output of these large language models (LLMs), failing a societal mirror test? Maybe because the writing that comes out of them is better than their own. It’s like Grammarly on steroids, and Grammarly doesn’t make you a better writer; it just makes you look like you are a better writer. It’s like being dishonest on your dating profile.

When I prompted different LLMs about whether the quality of education was declining, the responses were non-committal, evasive, and some more flowery than others in being so. I’d love to see an LLM say, “Well, shit. I don’t know anything about that”, but instead we get what they expect we want to see. It’s like asking someone a technical question during an interview that they don’t have the answer to, and they just shoot a shotgun of verbiage at you, a violent emetic eruption of knowledge that doesn’t answer the question.

“I don’t know”, in my mind, is a perfectly legitimate response and tells me a lot more than having to weed through someone’s verbal or written vomit to see if they even have a clue. I’m the person who says, “I don’t know”, and if it’s interesting enough to me for whatever reason, the unspoken is, “I’ll find out”.

The LLMs can’t find out. They’re waiting to be fed by their keepers, and their keepers have some pretty big blind spots, because we, as human beings, have a lot more questions than answers. We can hide behind what we do know, but it’s what we don’t know that gives us the questions.

I’ve probably read about 10,000 books in my lifetime, give or take, at the age of 51. This is largely because I am of Generation X, and we didn’t have the flat screens children have had in the generations since. Therefore, my measure of banality, if there could be such a measure, would be higher than that of people who have read less – and that’s just books. There are websites, all manner of writing on social media, the blogs I check out, etc., and those have become more refined because I have a low tolerance for banality and mediocrity.

Meanwhile, many aspire to see things as banal and mediocre. This is not elitism. You can see it when a child learns something new and expresses joy: an adult looks at them in wonder, wishing they could enjoy that originality again. We never get to go back, but we get to visit it with children.

Going to bookstores used to be a true pleasure for me, but now when I look at the shelves I see less and less that is new, the rest a bunch of banality with nice covers. Yet books continue to sell because people don’t see that banality. My threshold for originality is higher, and in a way it’s a curse.

The Unexpected Twist

In the end, if people actually read what these things spit out, the threshold for originality should increase, since after the honeymoon period with their LLM of choice is over, they’ll recognize the banality.

In a way, maybe it’s like watching children figure things out on their own. Some things cannot be taught; they have to be learned. Maybe the world needs this so that it can appreciate more of the true originality out there.

I’m uncertain. It’s a ray of hope in a world where marketers would have us believe in a utopian future that they have never fulfilled while dystopia creeps in quietly through the back door.

We can hope, or we can wring our hands, but one thing is certain:

We’re not putting it back in the box.

Who Are We? Where Are We Headed?

We used to simply dangle from the DNA of our ancestors, then we ended up in groups, civilizations, and now that we have thoroughly infested the planet we keep running into each other and the results are so unpleasant that at least some people are renting a virtual, artificial girlfriend for $1/minute.

It’s hard not to get a little existential about the human race with all that’s going on these days with technology, the global economy, wars, and where people are focusing their attention. They’re not really separate things. They’re all connected in some weird way, just like most of humanity.

They are connected in logical ways, we like to think, but when you get large groups of people logic has an odd tendency to make way for rationalization. There are pulls and tugs on the rug under the group dynamics, eventually shaking some people free of it for better or worse.

This whole ‘artificial intelligence’ thing has certainly escalated technology. The present red dots in this regard are about just how much the world will be improved by it. We’ve heard these promises before, and you would think that with technology now reflecting our own societies more clearly through large language models, we might be more aware of that.

I can promise you that for the foreseeable future, despite technological advances, babies will continue being born naked. They will come into the world distinctly unhappy about having to leave a warm and fluid space for a colder, less fluid one. From there, they seem to have less and less time before some form of glowing flat screen is made available to them, replete with things marketed toward them.

It would be foolish to think that the people marketing stuff on those flat screens are all altruistic and mean the best for the children as individuals, or for humanity. They’re trying to make money. Everyone’s trying to make money.

I don’t know whether this is empirically true, but it seems to me that when I was a child, people were more interested in creating value than in making money. If they created value, they got paid so that they could continue creating value. It seems, at least to me, that we’ve been pretty good about removing value from the equation of life.

This is not to say I’m right. Maybe values have changed. Maybe I’m an increasingly dusty antique that every now and then shouts, “Get off my lawn!” I don’t think I’m wrong, though, because I do encounter people of younger generations who are more interested in value than money, but when society makes money more important than value, then everything becomes about money and we lose… value.

To compensate, marketing tells people what they should be valuing to be the person that they are marketed to become.

I don’t know where this is going, but I think we need to switch drivers.

Maybe we should figure out who we are and where we want to go. Without advertising.

Being Human.

One of the key things about trying to figure out our own humanity while standing on the shoulders of the technology we create is that we don’t all agree on what being human is.

In this bit of writing I’m going to express some opinions, hopefully in a sensible way, and I’ll be getting into some controversial subject matter, because being human is controversial subject matter. I’ll add a disclaimer I shouldn’t have to: these are opinions at this point in time that will be revised as I continue being human. I’m imperfect, after all, and I suspect you are too.

Homo sapiens, as a species, has been around 300,000 years or so as far as we know. There are people who don’t believe this. There are people who do. As far as evidence goes, the science is for the 300,000 years.

The reality is that while we have this scientific knowledge, there are people who disagree. I’m not going to debate it here. The point is that when it comes to how long humanity has been around, we can’t seem to get people to agree on when we started being human.

The ‘how’ of being human is another annoying discussion that goes nowhere, and is linked to when we started passing stone tablets around. Well, stone tablets were a technological advancement in that they were more mobile than cave walls. There was nothing like taking your stone tablet out on a nice long walk with friends so you could read it to them in a lovely setting.

Our ability to write, and by extension read, is only about 5,500 years old. This fact doesn’t seem to be disputed as much, largely because of religious texts. That means we have had written communication for roughly 1.83% of our species’ time on the planet (5,500 ÷ 300,000). For those of you bad with math, that’s not very long.

That’s just the basic historical stuff. During our recorded history, we’ve demonstrated great qualities and terrible qualities. Both are human. Some we deny or ignore out of convenience. We tend to think of ourselves in a very positive light because to do otherwise would be depressing, and nobody wants to be depressed.

“Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.”

Douglas Adams, Last Chance To See (1990)

There’s more history to go through that has filled books that very few people read and fewer people understand. To read all of our short written history and understand it would likely take a lifetime.

We just can’t agree on what being human is. That’s a problem when we are talking about anything, much less technology that reflects it all back at us with biases we still don’t completely understand. We haven’t agreed on what it is to be human beyond some human rights agreements that, even though we sort of agree on them, people violate anyway.

Civilization gave us rules. Laws. While the Code of Hammurabi is often cited as one of the earliest legal codes written, the Code of Ur-Nammu predates it by some three centuries and is fairly intact. The Code of Urukagina predates that. It’s likely not a mistake that some people believe the Earth is around as old as written history like this – we who claim to be part of literate civilizations would find it difficult to understand a world without writing.

The laws are interesting because nobody just makes up a law arbitrarily. Before it became illegal to kill someone, someone got killed and someone said, “This is bad. We need a Law.” It’s pretty clear that Laws don’t stop people from getting killed, or ensure that murder is prosecuted. When we look back at these laws, we gain some insight into some of the issues that were prominent at the time.

One thing that stands out in the Code of Hammurabi is the implicit hierarchy, which reinforced the hierarchy:

  • If one finds a slave who has run away, and he brings the slave back to his owner, the owner will pay two shekels…
  • If one is in debt, and cannot pay, he can sell himself, his wife, his son and his daughter to work; after three years they shall be set free…
  • If anyone strikes a man whose rank is higher than his, the man shall be whipped sixty times with an ox-whip in public.
  • If someone strikes another man equally ranked, he shall pay one gold mina.
  • If a slave strikes its owner, his ear will be cut off.
  • If a man strikes a pregnant woman, and she therefore loses her child, he shall pay ten shekels for her.
Examples from the Laws of Hammurabi.

For the hierarchy to be codified, the hierarchy would have had to exist before the Laws existed. Where there are such hierarchies, some people have more value than others. There was no equality except within tiers of the hierarchy, and even then we can assume there was a pecking order. Equality or equity wasn’t something that was considered. There were no human rights except what the laws allowed for.

When we look at society today, we’re not too different. We aspire to be, or so we say, but we’re not very different at all.

And we’re talking about AI and its impacts on society.

Stewed Biases.


Our lives are impacted by our decisions and the decisions of others, for better and worse, and humanity has this strange propensity to make things either/or when the dimmer switch has been around since at least 1959. We have known for generations that there are more than two options for a light bulb, but there is this cultural imperative to boil things down to two choices.

If you see only two choices, you’re likely missing something. Even matter has at least the three states we’re taught in basic physics and chemistry: solid, liquid, gas.

One of the things I appreciated in the 1990s was thrift stores – books in particular, knowledge on paper handed down from what I expect were dead people who did not throw away their books. I could walk into a thrift shop and use it as a used bookstore, where the books had been condensed into things people found valuable enough not to throw away. This was muddled, of course, by the books that were just donated to make space, but the good-to-bad ratio was pretty good, and it was a great way to get books cheap for someone like me. An intellectual omnivore.

Quite a few books changed my perspective on things, some because they had ideas worth adopting, and some because their antiquated ideas formed the basis of modern ones. Both had value. One of these books was Yardley Beers’ “Introduction to the Theory of Error”. A surprisingly slim book, it cost me all of 25 cents at the time, yet it gave me pause in dealing with calibrations, statistics, and everything else under the sun.

One of the major aspects of it was demonstrating something very simple that most people don’t consider: where there is a degree of accuracy, there is a degree of error. Understanding the nature of error, and how to decrease it, is a powerful thing. It had me exploring fuzzy logic and Bayesian probability, for predictive work but also for interpreting aspects of life. It was not Boolean (though, oddly enough, accuracy and error themselves form a binary pair), so there was room for more than two perspectives on anything.
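As a quick illustration of the idea, here’s a minimal sketch in Python – not anything from Beers’ book. The measurements are invented, and it assumes independent errors combined in quadrature, a standard move in the theory of error:

    import math

    # Every measurement carries an uncertainty; combining measurements
    # combines their uncertainties. The values below are made up.
    def add_measurements(a, da, b, db):
        """Add two independent measurements, propagating error in quadrature."""
        return a + b, math.sqrt(da**2 + db**2)

    length, error = add_measurements(12.3, 0.1, 7.8, 0.2)
    print(f"{length:.1f} +/- {error:.2f}")  # 20.1 +/- 0.22

The point survives the simplicity: the answer isn’t 20.1, it’s 20.1 plus-or-minus something, and that ‘something’ is where more than two perspectives live.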

As a then-young software engineer whose formative years were spent obsessed with the idea of artificial intelligence, I found myself using these ideas where I could, in code that wasn’t always understood by others but which worked. Some of it may still be working after some decades, in what must now be an antiquated system that decided which medical transcriptionist to send a doctor’s audio file to based on affinity and experience – weighted choices instead of the former ‘must match exactly’ choices.
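The weighted-choice idea looks something like this minimal sketch; the names, weights, and fields are all invented for illustration, not the original system’s code:

    # Score each available transcriptionist on weighted factors instead of
    # demanding an exact match. Names, weights, and fields are made up.
    transcriptionists = [
        {"name": "A", "affinity": 0.9, "experience": 0.4, "available": True},
        {"name": "B", "affinity": 0.6, "experience": 0.8, "available": True},
        {"name": "C", "affinity": 1.0, "experience": 0.9, "available": False},
    ]

    def best_match(candidates, w_affinity=0.6, w_experience=0.4):
        """Return the available candidate with the highest weighted score."""
        available = [c for c in candidates if c["available"]]
        return max(available,
                   key=lambda c: w_affinity * c["affinity"] + w_experience * c["experience"])

    print(best_match(transcriptionists)["name"])  # "A": 0.70 beats "B": 0.68

The difference from ‘must match exactly’ is that nobody scores zero just for being an imperfect fit; the work goes to the best available fit, however imperfect.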

What is hot for someone can be cold to someone else; what is warm for someone can be hot or cold for others. We live in an inexact world because what we as human beings process is subjective. I often wonder if this is why people have different favorite colors. Do certain colors appeal more to someone because of how they perceive them through their senses? Maybe, maybe not, but we do know that we associate colors with things. We don’t really know, but we have some interesting guesses based on studies, statistics and… probability.

I bring up the favorite color because it’s a bias. And that bias demonstrates other biases, like the pseudoscience of racism, the idiocy of politics, and which brand one associates with a simple thing such as a hamburger. We are biased creatures, all of us, and we are often blind to our own biases.

The odds are good that, by an accident of geography, you were born in one spot on the planet, with certain weather, certain politics, a certain predominant religion, and the culture, tradition, and bureaucracy that come with them. When we encounter others who are different, they are the ‘thems’, and we are the ‘us’es. Are we open to others? That can have a lot to do with the red dots of life, too, where we are influenced by someone else’s laser pointers, and underlying it all is a stack of stuff we think we need to accomplish so that we have some purpose or worth.

It’s worth reflecting on this stuff every now and then, because simply recognizing that we are biased, and allowing that our biases can be wrong, has impacts not just on ourselves and those around us, but also on how we allow for the way others’ biases affect them when they’re dealing with us.

It’s a weird soup of reality we all share. Now that we’re so much more connected, we have sorted ourselves into little tribes that throw rocks at each other and never find the commonalities endearing, perhaps because we like our biases too much, guarding them against everything so that we can live a simpler life.

Sometimes we simplify too much, sometimes too little, and it’s in this grey soup of bias we see the worst and best of humanity. And now we’re seeing output from different things accused of being artificial intelligence reflecting those biases in interesting ways.

This could be an opportunity. The choice is somewhere between “Now Here” and “Nowhere”.

Incoming: The Tide of Marketing.


Browsing Facebook, I come across this in my feed and it’s as if they read what I wrote in Silent Bias:

…With social media companies, we have seen the effect of the social media echo chambers as groups become more and more isolated despite being more and more connected, aggregating to make it easier to sell advertising to. This is not to demonize them, many bloggers were doing it before them, and before bloggers there was the media, and before then as well. It might be amusing if we found out that cave paintings were actually advertising for someone’s spears or some hunting consulting service, or it might be depressing…

Almost on command, this shows up in the main feed on Facebook – sponsored content by Google. I haven’t used Bard, but I fear I have suffered Bard’s work because… I imagine that they used Bard to generate that advertising campaign for Bard.

The first thing that every sustainable technology has to do is pay for itself. The magnitude of this, though, is well beyond cave drawings. As it is, marketing has used a lot of psychology to get people to chase red dots. Now that this has become that much ‘easier’ for humans, and now that it’s being marketed as a marketing tool…

How much crap do you not need? We need to be prepared for the coming tide of marketing bullshit.