How To Be Unpopular.

Information, opinions, misinformation, and misinformed, irrational opinions flood us every day, and the flood seems to be accelerating. When I wrote about the lack of a comprehensive human rebuttal to Noam Chomsky’s take on AI, it accidentally seeded this post, because it’s about something we just don’t seem to have the opportunity to do much of anymore.

Reflect. Consider. Think things through.

I was looking through Facebook Reels, as I unfortunately do, and looking at the comments on them. There’s a lot of value signaling going on about videos that don’t give the greater context of the situation. It’s amazing, really, how that value signaling leads people to construct strong opinions without rationale. In turn, people want views, because if you don’t get views, what’s the point of doing it?

For me, the point of doing it would be to share good knowledge. We didn’t get where we are today as a society by not sharing good knowledge, but the signal-to-noise ratio has become… well, a noise-to-signal ratio. Add to that the growing role of artificial intelligence in the mix, the US Presidential election coming up, the continued aggression of Russia against Ukraine, and the tragic affair of Hamas and the state of Israel with a lot of humans in between… things aren’t going to get better soon.

It’s time to be unpopular. To think things through. To find the meat hidden in all this tasty, cholesterol-ridden fat presented to us because people want views, likes, shares, etc. It’s a good analogy, actually, because some people have more trouble with cholesterol than others. If I walked past a steak, my cholesterol would increase, so I won’t tell you what I had for dinner last night.

Some people are more gullible than others. Some people are more irrational than others. We know this. We see this every day. I often make jokes about it, as I told a cashier in a store a few days ago, because it’s my way of coping with the gross stupidity that we see.

So how does one become unpopular? It’s pretty simple.

Worry about being wrong. Decide not to do something because you’re not sure of the impact it will have. Humanity tends to gravitate to strong opinions, however wrong they are. Marketing tends to maximize that, and marketing has become a part of our lives. I see lots of videos and ‘hot takes’ on Dave Chappelle and Ricky Gervais, but they tend to take the comedians out of context and beat them with an imposed context – and somehow, despite all of that, they remain popular, and no one stops to consider why. Why are these comedians still held in regard? Because they dared to be unpopular. They dare, and I dare say this, to be authentic, thoughtful, and funny no matter how many people think they’re unpopular.

All too often, people are too busy value signaling to think about whatever is in front of them. They need to have an opinion before they watch that next video, read that next tweet (It’s Twitter, Elon, it always will be)…

Slow down. That’s how to be unpopular. Think things through before communicating about them. The world will not end if you don’t have an opinion right now.

Once upon a time, there was value in that.

I’ll let you in on a secret: There still is.

Here’s what to do: Watch something/read something. Find out more about it. Think about it in different contexts. Maybe then do something or say something if you think there is value, or maybe just don’t say anything until you do.

On Disagreement With Noam Chomsky’s Opinion On AI.

Recently, Noam Chomsky’s opinion on AI, as published in The New York Times, has been making the rounds again, and the arguments against what he wrote are… aspiring to juvenile when it comes to the comment sections.

While he’s probably best known for his social activism, Chomsky is one of the founders of cognitive science and is by no means someone whose opinions should be dismissed lightly.

I disagreed with him on Ukraine, and unless his opinion has changed, I still do. He seemed to be focused on the quantity of life, whereas in listening to Ukrainian voices I heard them more concerned about their quality of life. “Live free or die” could summarize what I was hearing; being annexed in any way or form did not seem like something they were willing to accept. And why should they?

The point is that when you’re disagreeing with someone who has certainly demonstrated a high level of thinking, the disagreement should be at least at a similar level of thinking.

In that regard, searching for a rebuttal to his points was sad. The top result on a Google search was Jon Cronin asking ChatGPT what it thought about the piece.

The answer itself was evasive and went to its talking points like a politician. It glossed over much of what he said.

This rebuttal, written by a human and found much lower in search rankings, is guilty of pretty much the same thing by ignoring the points raised and starting off with, “but it’s good for the global economy…”

Which, amusingly, is sort of Chomsky’s point – that in pushing this advance in technology, an advance he did not deny, the chasing of the bottom line is somehow supposed to make up for a lot of rubbish.

That’s kind of what got us here, and where we are isn’t that palatable for everyone. He also pointed out the technology’s deficiencies.

His points are valid, whether liked or not, and true. This is why I wrote ‘A Tale of Two AIs‘. In his opinion piece, Chomsky is discussing the reality, while every rebuttal I have read so far – including ChatGPT’s response posted on a LinkedIn page (people still use it to post stuff!) – is about how AI is being marketed.

It’s not a disagreement. It’s just trying to talk over the reality of what these generative models actually are… so far.

That’s not debate.

Dealing with AI Generated Book Summaries.

That’s not Jeff Bezos

When I wrote ‘Summarize This‘, I thought expressing frustration at AI and copyright, as well as at AI being used to undermine people’s work, would be more tangible on the Internet.

It wasn’t.

It shouldn’t be surprising, really. There aren’t that many authors out there who are concerned about it – it would seem to be only a non-fiction book issue. I don’t know if anyone really wants a book summary of Lord of the Rings, as an example.

Yet I thought about it and some days ago thought, “The best way to beat that is to come out with your own book summary. An ‘AUTHORized’ version.”

Amazon’s not going to stop anything once they’re making money. Bezos has to… well, I’m not sure what he has to do, but we can be certain he’ll need money to do it. What you can do is summarize the book yourself, toss in some tidbits that hint at the actual book and why it needs to be read, and then, if you’re feeling particularly grouchy, undercut the AI book summary prices.

In fact, it might be a healthy way to introduce a book to people.

In fact, why aren’t people doing it already?

Summarize This.

I was about to fire up Scrivener and get back to writing the fiction book I’m working on when I made the mistake of checking my feeds.

In comes Wired with “Scammy AI-Generated Book Rewrites Are Flooding Amazon“. On Facebook, I had noticed an uptick of ‘wholesale’ ebooks that people could resell on their own, but I thought nothing of it other than, “How desperate do you need to be?”

It turns out it has been a big problem in the industry for some time: people release eBooks and have summaries posted on Amazon within a month, especially since large language models like ChatGPT came out. Were the copyrighted works in the learning models?

How does that happen? There are some solid examples in the article, which seem to be mainly non-fiction works.

…Mitchell guessed the knock-off ebook was AI-generated, and her hunch appears to be correct. WIRED asked deepfake-detection startup Reality Defender to analyze the ersatz version of Artificial Intelligence: A Guide for Thinking Humans, and its software declared the book 99 percent likely AI-generated. “It made me mad,” says Mitchell, a professor at the Santa Fe Institute. “It’s just horrifying how people are getting suckered into buying these books.”…

“Scammy AI-Generated Book Rewrites Are Flooding Amazon”, Kate Knibbs, Wired.com, Jan 10th, 2024

I think that while some may be scammed, others just want to look smart and are fed the microlearning crap that’s going around, where they can ‘listen to 20 books in 20 days’. I have no evidence that these services use summaries, but it seems like the only way someone could listen to 20 books in 20 days. I’d wondered about the ‘microlearning’ stuff, since I have spent a fair amount of time tuning my social media to allow me to do microlearning when I am on social networks.

What is very unfair is that some of those books have years of research and experience in them. It’s bad enough that Amazon takes a big chunk out of the profits – I think it’s 30% of sales – but to have your book summarized within a month of publishing is a bit too much.

Apparently, summaries are legal to sell because they fall under fair use, though exceptions have happened. This is something we all definitely need to keep an eye on because, among the writers I know who bleed onto pages, nobody likes parasites.

And these people clogging Amazon with summaries are parasites.

If you’re buying a book, buy the real thing. Anyone who has actually read the book won’t be fooled by you reading or listening to a summary for long, and there are finer points in books that many summaries miss.

The Learning Model.

When I saw Professor Parthiban’s list of academic accomplishments on Facebook as seen at the top of this post, I was skeptical and amused.

I wrote, “Day 1: Introduce The Speaker…”

That got a few laughs, and after I got back home I sat down and thought, “If that’s legitimate, he’s got to be able to have an interesting conversation even if he crammed for everything.” So I looked around, and verily, Professor VN Parthiban holds 145 academic degrees and admits failure is a part of it.

He’s apparently having trouble with his memory now, which may or may not be unrelated.

All that education in one person, an approved and accredited learning model… I would say this is as close as we could get to a person educated at anywhere near the scale of a learning model used for an artificial intelligence.

What they want to create, though, is an autodidact. Like me. Quite the paradox.

We’re so silly sometimes.

AI is Not A Muse(d).

Image generated by Inspirobot.me

There’s been a lot said and written, and a lot shoved into learning models without the permission of authors and artists, to train artificial intelligence models.

Of course, artificial intelligence investors don’t want to pay for using copyrighted works. I’d wager in a legal sense one could make a case about fair use, and I must admit a bit of amusement at the conundrum that has been created in that regard.

Yet when I’m stuck on writing something, I generally don’t turn to those AI models, which, despite being accused of creativity, just have more access to data to draw a nexus from. What most people consider creative is almost always exactly that: connecting seemingly unrelated topics, which is something I like doing.

AI is not a muse.

No, when I’m writing and I hit some kind of block, I generally go check out Inspirobot.me. It’s reminiscent of a silly program I wrote on an Apple IIe that simply spliced words and phrases together to form insults. As a teenager without many friends, I found this very amusing. When it began to get stale, I’d add more words and phrases.
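For the curious, here’s a minimal Python sketch of that kind of splicer. The word lists are invented stand-ins, since the original Apple IIe program is long gone:

```python
import random

# Invented stand-in phrase lists -- the original program's vocabulary is lost to time.
openers = ["You", "Your face", "That haircut", "Your cooking"]
verbs = ["resembles", "smells like", "argues like", "has the charm of"]
objects = ["a damp sock", "an unpaid bill", "a confused walrus", "week-old porridge"]

def insult() -> str:
    """Splice one random phrase from each list, the way the old program did."""
    return f"{random.choice(openers)} {random.choice(verbs)} {random.choice(objects)}."

for _ in range(3):
    print(insult())
```

When it got stale, adding more words and phrases was the whole upgrade path.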

Inspirobot does something similar in a technical sense, with some great imagery in the background – as above. Literature, though, is a creation.

AI investors are trying to change that, as if all humanity has to offer has already happened.

Silly investors.

Beyond Artificial Intelligence.

In my daily readings yesterday, I came across Unleashing the Power of AI: Preparing for Social Change alongside Technological Advancements at a blog that I regularly read. It’s an odd one out for the blog, so it definitely caught my eye. The blog, Be Inspired!, is worth a read; this was just a single tech-related post on something I have been writing about.

She hit some of the high notes, such as:

…Furthermore, the unequal access to and distribution of AI technology may exacerbate societal divisions. There is a significant risk of deepening the digital divide between those who have access to AI advancements and those who do not. To bridge this gap, it is crucial to implement inclusive policies that promote equal access to AI education and training across all demographics. Efforts should be made to democratize access to AI tools, ensuring that everyone has equal opportunities to benefit from this technological revolution…

Unleashing the Power of AI: Preparing for Social Change alongside Technological Advancements, Be Inspired, July 23rd, 2023.

It is interesting that I noted a ‘voice change’ with the author; having read her blog, I know this isn’t her standard fare – but the high points were hit well. It also happens that the quotation above says something that keeps getting thrown around as if someone else is going to do it.

Someone else is not going to do it. I’ve noted with ChatGPT and others, when I have used them and asked questions about how artificial intelligence will impact human society, that the large language models are very good at using those neural networks, deep learning algorithms, and datasets we have no insight into to say, in a very verbose way, “Well, you silly humans need to fix yourselves.”

Of course, it’s not sentient. It’s predictive text on steroids, crack, and LSD. It only has the information given to it, and that information likely included things written by people who said, “Hey, we need to fix these issues”.
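To make “predictive text on steroids” concrete, here’s a toy next-word predictor in Python – a simple bigram chain, nowhere near a real large language model, but it illustrates the same constraint: it can only ever recombine what was fed into it.

```python
import random
from collections import defaultdict

# A tiny 'training corpus'. A real model ingests vastly more text,
# but the principle holds: output is recombined input.
corpus = "we need to fix these issues and we need to fix ourselves".split()

# Record which words followed which in the corpus.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Predict one word at a time, using only what the corpus contained."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # nothing ever followed this word in the corpus
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("we"))  # e.g. "we need to fix these issues and we need"
```

It can never say anything the corpus didn’t make possible – which is rather the point.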

Well, we do need to fix a lot of issues with human society, even as the hype cycle spits out a lot of gobbledygook about the technological singularity while not highlighting the issues of bias in the data that the authors of our future point out every now and then.

Yet there will always be bias, so what we are really talking about is human bias, and when we speak as individuals we mean our own bias. If you’re worried about losing your income, that’s a fair bias and should be in there. It shouldn’t be glossed over with ‘well, we will need to retrain people, and we have no clue about it other than that’. If you’re worried that the color of your skin doesn’t show up when you generate AI images, that too is a fair bias – but to be fair, we can’t all be shades of brown, so in our expectation of how it should be, we need to address that personal bias as well.

It does bug me that every time I generate an image of a human, it’s of someone with less pigment, and even then I’m not sure which shade of brown I want. It’s boggling to consider, and yes, it does reinforce stereotypes. It’s a very complicated issue that happens because we all want to see familiar versions of ourselves. I have some ideas, but why would I share them publicly, where some large language model may snatch them up? Another problem.

Most of the problems we have in dealing with the future of artificial intelligence stem from our past, and the past that we put into these neural networks – how we create our deep learning algorithms – is a very small part of it. We have lines on maps that were drawn by now-dead people, reinforced by decades, if not centuries or even millennia, of degrees of cultural isolation.

We just started throwing food at each other on the Internet when social media companies inadvertently reinforced much of that cultural isolation by giving people what they want – and what they want is familiar and biased toward their own views. It’s a human thing. We all do it, and then we say our way is best. We certainly lack originality on that frontier.

We have to face the fact that technology is an aspect of humanity. Quite a few humans these days drive cars without knowing much about how they work. I recently asked a young man whether, when they tuned his car, they had lightened his flywheel, since I noticed his engine worked significantly harder on an incline, and he didn’t know what that was.

However, we do have rules on how we use cars. When drunk driving was an issue, some mothers stepped up and forced the issue, and now drunk driving is something done at an even higher risk than it already carried: you might go to jail over it, to ponder why that night became so expensive. People got active; they pressed the issue.

The way beyond AI is not through the technology, which is only one aspect of our humanity. It’s through ourselves – all the other aspects of our humanity, which should be more vocal about artificial intelligence. You might be surprised to learn that they were, before this hype cycle began.

Being worried about the future is nothing new. Doing something about it, by discussing it openly beyond the technology perspectives, is a step in the right direction, because all the things we’re worried about are… sadly… self-inflicted by human society.

After centuries of evolution that we think separate us from our primate cousins, we are still fighting over the best trees in the jungle – territory – in ways that are much the same, but also in new ways, where dominant voices projecting just the right confidence make people afraid to look stupid, even when their questions and answers may be better than those of the dominant voices.

It’s our territory – our human territory – and we need to take on the topics so ripe for discussion.

The Frame.

It rained today in various forms. There was the torrential, tropical rain, there was the gusty rain that knocked over one of my plants, and then there was a drizzle. Each was punctuated by a clearing of tropical sun and humidity.

It was wonderful.

The beauty sometimes can’t easily be conveyed, but after a good rain, at the right hour, when the light is just so, it’s worth casting your eyes about, focusing on different things to find a perspective – something that frames well, something that tells its own story.

It’s the same with most other things too. One of the things Instagram and other aspects of social media have enabled is creating an artificial image. Artificial intelligence tools like image generation are much the same. They’re about filling frames.

The social media ‘influencer’ generally masters illusion within a frame that doesn’t truly represent what’s going on out of the frame. The artificial intelligence doesn’t have anything outside of the frame. What I think of as real photography, in general, is less so. It’s less contrived, but it, too, tends toward artificiality in some regards.

The world we live in, though, is not artificial. It’s definitely artificially influenced at this point, but water drops captured on flowers after a rainy day tell a story of rain, of life, of color. It’s becoming increasingly rare, too.

Something to think about the next time you look at a picture online. Is there a greater context? Does it truly represent what is outside of the frame, or was it cobbled together?

Are you inside the frame, or are you in the world surrounding the frame like the photographer?

Trapped In Our Own Weirdness.

When I wrote about expanding our prisons, it was implicitly about the removal of biases through education. For example, how can anyone with even a passing understanding of the human genome still consider ‘race’ an issue? Here we are, having mapped the human genome, and we continue acting out over skin tones that tell us little to nothing about the rest of a person’s genetics.

You can’t tell ‘race’ by a genetic test. Race is a label, and a poor one, and one we perpetuate despite knowing this.

It’s about history, as I pointed out over here when I mentioned the history of photographic film. It is a troublesome issue, and one that we have largely reinforced through our own works, which pass on from generation to generation.

At first it was just images, from the earliest cave drawings, then more formal writing and more elegant art, then recordings of all sorts. In today’s world we record so much that there’s a bit of wondering about how much of it we maybe shouldn’t be recording. These things get burned into the memory of our civilization through the power of databases, are propagated by the largest communication network ever built, and are viewed by billions of people around the world, independent of ‘race’ or culture, but potentially interpreted at each point of the globe, by each individual, in different ways.

In an age of purely oral tradition, it would just be a matter of changing something and waiting for living memory to forget it. Instead, we suffer the tyranny of our own history, written by people who had their own perspectives. No one seems to go to the bathroom in history books or, for that matter, religious texts. An AI trained on religious texts alone would not understand why toilet paper has a market in some parts of the world, with a market for bidets in others.

Now we have the black boxes of artificial intelligence regurgitating things based on our history, biases and all, and it’s not just about what is put in, but the volume of what is put in.

The next few decades are going to be very, very weird.

The Psychology of Machines.

Most people are familiar with Robert A. Heinlein‘s work “Starship Troopers” because it was made into a movie. There were other movies based on his works, but never on my favorites.

One of those favorites is “The Moon Is a Harsh Mistress“. Within its pages, a young teenage version of myself found a lot to imagine about. In fact, the acronym TANSTAAFL became popular because of Heinlein. Yet that’s not my focus today.

My focus is on the narrator Manuel Garcia (“Mannie”) O’Kelly-Davis and his relationship with the Lunar Authority’s master computer, HOLMES IV (“High-Optional, Logical, Multi-Evaluating Supervisor, Mark IV”).

HOLMES IV became self-aware and developed a sense of humor. Mannie, who became friends with HOLMES IV, named it ‘Mike’ (a Sherlock Holmes reference).

Mannie, a computer technician, ended up having a fairly complex relationship with Mike, and I thought about him being Mike’s psychologist – a computer technician as a psychologist for an artificial intelligence.

If you have read the book, you might see what I mean, and if you haven’t, I encourage it.

Throughout the years as a software engineer, I would jokingly call myself variations of a computer psychologist.

Now, in 2023, artificial intelligence ‘hallucinations’ have become a thing, and it’s worth referencing Andy Clark’s work in his book, “The Experience Machine: How Our Minds Predict and Shape Reality“:

“Since brains are never simply “turned on” from scratch—not even first thing in the morning when I awake—predictions and expectations are always in play, proactively structuring human experience every moment of every day. On this alternative account, the perceiving brain is never passively responding to the world. Instead, it is actively trying to hallucinate the world but checking that hallucination against the evidence coming in via the senses. In other words, the brain is constantly painting a picture, and the role of the sensory information is mostly to nudge the brushstrokes when they fail to match up with the incoming evidence.”

“The Experience Machine: How Our Minds Predict and Shape Reality”, Andy Clark, 2023.

The Marginalian has a post that pointed me to Andy Clark’s book, which I encourage you to take a look at.

When artificial intelligence folks talk about hallucinations, this is the only reference that makes sense, and yet I think ‘bullshitting’ might be more appropriate than hallucinating. Of course, ‘hallucinating’ is more professional to say, and it could be correct in instances where the large language models are attempting to predict what the user wants. I’d have to study the code to know. They have, I haven’t, so let’s go with hallucinations for now.

There may be a space in our future for ‘artificial intelligence psychologists’. Psychiatrists, maybe not so much.

This could be a fun topic worth exploring. Hacking what our minds create could help us understand ourselves better.