Manipulation In The Age of AI – And How We Got Here.

We understand things better when we can interact with them and see an effect. A light switch is a perfectly good example.

If the light is off, we can assume the switch is in the off position. A power outage would make this inference flawed, so we look around to see whether other things that require electricity are on.

If the light is on, we can assume the light switch is in the on position.

Simple. Even if we can’t see the switch, we have a 50% chance of getting this right.

It gets more complicated when we don’t get an immediate effect from something, or can’t have an effect at all. As I’ve written before, we use a lot of things every day without understanding how they work. This is sometimes a problem. Are nuclear reactors safe? Will planting more trees in your yard improve air quality in a significant way?

This is where we end up trusting things. And sometimes, these things require skepticism. The world being flat deserves as much skepticism as it being round, but there’s evidence all around that the world is indeed round. There is little evidence that the world is flat. Why do people still believe the earth is flat?

Shared Reality Evolves.

As children, we learn by experimenting with the things around us. As we grow older, we lean more on information and trusted sources – like teachers and books – to tell us what is true. My generation was the last before the Internet, so whatever information we got was peer reviewed, passed muster with publishers, and so on. There were many hoops to jump through before something went out into the wild.

Yet because we read the same books and magazines and watched the same television shows, we had a shared reality – one we had, to an extent, agreed upon, and one that, to another extent, had been forced on us.

The news was about reporting facts. Everyone who had access to the news had access to the same facts and could come to their own conclusions – though to say there was no bias then would be dishonest. Things just happened slower, and because they happened slower, more skepticism came into play, which made faking things harder to do.

Enter The Internet

It followed that the early adopters (I was one) were akin to the first car owners, because we understood the basics of how things worked. If we wanted a faster connection, we figured out what was slowing it down – largely without search engines, until search engines made it easier. Websites with good information were valued; websites with bad information were ignored.

Traditional media increasingly found that the Internet business model was based on advertising, and their traditional methods of advertising didn’t translate well to it. To stay competitive, some news became opinion and began to spin toward getting readers to click. The Internet was full of free information, and they had to compete.

Over a few decades, the Internet became more pervasive, and the move toward mobile phones – which are not mainly used as phones anymore – brought information to us immediately. Advertisers and marketers found that next to certain content, people were more likely to be interested in certain advertising, so they started tracking us and storing all that information.

Enter Social Media

Soon enough, social media came into being and suddenly you could target and even microtarget based on what people wanted. When people give up their information freely online, and you can take that information and connect it to other things, you can target people based on clusters of things that they pay attention to.

Sure, you could just choose a political spectrum – but you could also add religious beliefs, gender and identity, geography, and so on, and tweak what people see based on groups built from their actual interactions on the Internet. Sound like science fiction? It’s not.

Instead of a shared reality on one axis, you could target people on multiple axes.
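To make the clustering idea concrete, here is a minimal sketch in plain Python. The users, interest signals, similarity measure, and threshold are all hypothetical – real ad platforms work with vastly richer data and far more sophisticated models. The point is only how easily overlap in “things people pay attention to” separates an audience into groups that can each be fed different content.

```python
# Hypothetical interest signals, as might be gathered from clicks,
# likes, and shares. None of this is real data.
users = {
    "alice": {"politics", "guns", "rural"},
    "bob":   {"politics", "guns", "trucks"},
    "carol": {"yoga", "organic", "urban"},
    "dan":   {"yoga", "organic", "cycling"},
}

def jaccard(a, b):
    """Overlap of two interest sets: 0.0 (nothing shared) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

def cluster(users, threshold=0.4):
    """Greedy single-link grouping: a user joins the first cluster that
    contains someone whose interests overlap enough, otherwise starts a
    new cluster. Each cluster can then be shown different messaging."""
    clusters = []
    for name, interests in users.items():
        for group in clusters:
            if any(jaccard(interests, users[member]) >= threshold
                   for member in group):
                group.append(name)
                break
        else:
            clusters.append([name])
    return clusters

print(cluster(users))  # -> [['alice', 'bob'], ['carol', 'dan']]
```

Even this toy version splits four people into two audiences along completely different axes – and each audience never sees what the other is shown.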

Cambridge Analytica

Enter the Facebook-Cambridge Analytica Data Scandal:

Cambridge Analytica came up with ideas for how to best sway users’ opinions, testing them out by targeting different groups of people on Facebook. It also analyzed Facebook profiles for patterns to build an algorithm to predict how to best target users.

“Cambridge Analytica needed to infect only a narrow sliver of the population, and then it could watch the narrative spread,” Wylie wrote.

Based on this data, Cambridge Analytica chose to target users who were “more prone to impulsive anger or conspiratorial thinking than average citizens.” It used various methods – Facebook group posts, ads, sharing articles to provoke – and even created fake Facebook pages like “I Love My Country” to provoke these users.

“The Cambridge Analytica whistleblower explains how the firm used Facebook data to sway elections”, Rosalie Chan, Business Insider (Archived), October 6th, 2019

This drew my attention because it impacted the two countries I am linked to: the United States and Trinidad and Tobago. It is known to have impacted the Ted Cruz campaign (2016) and the Donald Trump presidential campaign (2016), and to have interfered in the Trinidad and Tobago elections (2010).

In the timeline of all of that, things were figured out years after the damage had already been done.

The Shared Realities By Algorithm

When you can splinter groups and feed them slightly different – or even completely different – information, you can impact outcomes, such as elections. In the U.S., you can see it in television news biases – Fox News was the first to be noted. When the average attention span is now 47 seconds, sociotechnically dominant platforms like Twitter and Facebook can make this granularity finer and finer.

Don’t you know at least one person who believes some pretty wacky stuff? Follow them on social media; I guarantee you’ll see where it’s coming from. And it gets worse now, because AI has become more persuasive than the majority of people while critical thinking has not kept pace.

When you like or share something on social media, ask yourself whether someone with a laser pointer is just adding a red dot to your life.

The Age of Generative AI And Splintered Shared Realities

An AI attached to the works of humans

Recently, people worrying about AI in elections have been focusing primarily on deepfakes. Yet deepfakes are very niche and haven’t been that successful. This is probably also because they have been the focus of attention, and therefore people are skeptical of them.

The generative AI we see – large language models (LLMs) – was trained largely on Internet content. And what is Internet content, largely? The thing you can’t seem to view a web page without: advertising. Selling people stuff that they don’t want or need. Persuasively.

And what do sociotechnical dominant social media entities do? Why, they train their AIs on the data available, of course. Wouldn’t you? Of course you would. To imagine that they would never use your information to train an AI requires more imagination than the Muppets on Sesame Street could muster.

Remember when I wrote that AI is more persuasive? Imagine prompting an AI on what sort of messaging would be good for a specific microtarget. Imagine asking it how to persuade people to believe it.

And imagine, in a society of averages, that the majority of people will be persuaded by it. What is democracy? People forget that it’s about informed conversations; they go straight to voting because they think that is the measure of a democracy. It’s a measure, and the health of that measure reflects the health of the discussion preceding the vote.

AI can be used – and I’d argue has been used – successfully in this way, much akin to the story of David and Goliath, where David used technology as a magnifier. A slingshot effect. Accurate when done right, multiplying the force and decreasing the striking surface area.

How To Move Beyond It?

Well, first, you have to understand it. You also have to be skeptical about why you’re seeing the content that you do, especially when you agree with it. You also have to understand that, much like drunk driving, you don’t have to be drinking to be a victim.

Next, you have to understand the context other people live in – their shared reality and their reality.

Probably most important is not calling people names because they disagree with you. Calling someone racist or stupid is a recipe for them to stop listening to you.

Where people – including you – can be manipulated into division by what is shown in their feeds, find the common ground. The things that connect. Don’t let entities divide us. We do that well enough ourselves without suiting their purposes.

The future should be about what we agree on, our common shared identities, where we can appreciate the nuances of difference. And we can build.

Who Are We? Where Are We Headed?

We used to simply dangle from the DNA of our ancestors, then we ended up in groups, civilizations, and now that we have thoroughly infested the planet we keep running into each other and the results are so unpleasant that at least some people are renting a virtual, artificial girlfriend for $1/minute.

It’s hard not to get a little existential about the human race with all that’s going on these days with technology, the global economy, wars, and where people are focusing their attention. They’re not really separate things. They’re all connected in some weird way, just like most of humanity.

They are connected in logical ways, we like to think, but when you get large groups of people logic has an odd tendency to make way for rationalization. There are pulls and tugs on the rug under the group dynamics, eventually shaking some people free of it for better or worse.

This whole ‘artificial intelligence’ thing has certainly escalated technology. The present red dots in this regard are about just how much the world will be improved by it. We’ve heard those promises before, and you would think that with technology now reflecting our own societies more clearly through large language models, we might be more aware of that.


I can promise you that for the foreseeable future, despite technological advances, babies will continue to be born naked. They will come into the world distinctly unhappy about having to leave a warm and fluid space for a colder, less fluid one. From there, they seem to have less and less time before some form of glowing flat screen is made available to them, replete with things marketed toward them.

It would be foolish to think that the people marketing stuff on those flat screens are all altruistic and mean the best for children as individuals and for humanity. They’re trying to make money. Everyone’s trying to make money.

I don’t know whether this is empirically true, but it seems to me that when I was a child, people were more interested in creating value than in making money. If they created value, they got paid so that they could continue creating value. It seems, at least to me, that we’ve been pretty good at removing value from the equation of life.

This is not to say I’m right. Maybe values have changed. Maybe I’m an increasingly dusty antique that every now and then shouts, “Get off my lawn!” I don’t think I’m wrong, though, because I do encounter people of younger generations who are more interested in value than money. But when society makes money more important than value, everything becomes about money and we lose… value.

To compensate, marketing tells people what they should be valuing to be the person that they are marketed to become.

I don’t know where this is going, but I think we need to switch drivers.

Maybe we should figure out who we are and where we want to go. Without advertising.

The Red Dots of Life.


There’s a life skill that I think is more important these days than most. Probably the easiest way to explain it is with the ubiquitous cat and laser pointer which, by now, people in the Amazon jungle likely know about by carrier parrot.

Those of us who have had a cat of any generation have played with cats in one form or another, but when Theodore Maiman created the first laser in 1960 at Hughes Research Laboratories, I’m fairly certain he didn’t think it would become something carried in pet stores. For those of you who don’t know, in the early days of the laser pointer, it was marketed for humans to use on humans for much the same reason.

In the days of boards and projectors, it was marketed as a tool to focus people on things. It worked really well until Microsoft put out PowerPoint, making every meeting involving it a snooze fest. There was that window where the laser pointer had its day, only to be promoted to cat tormentor.

We think the cat is playing, but what is ‘playing’? The dictionary definition is doing something for enjoyment, and yet we don’t know that a cat necessarily enjoys attacking something it can’t actually stick in its mouth – which is where every other cat toy, and any other household item that catches its interest, ends up. It’s instinctual, and one can argue that it’s a way of practicing hunting.


You Can Haz Cheezeburger?
How would you feel if someone kept sticking a cheeseburger image in front of you? You’d practice grabbing it and never get it. I don’t imagine it would be fun. Granted, moving laser dots on the carpet don’t have a taste other than carpet, but work with me.

Now take a breath, look around you every day, and find the red dots in life. These are basically some group of people trying to direct you to do something. Maybe it’s a good thing, like washing your hands.

Maybe it’s that, when you’re hungry or thirsty, the last sticky advertisement will guide your money to a place where you think you’ll get what you want.

I don’t even need to name food chains; they likely already popped into your head. Maybe just the word ‘cheeseburger’ had you thinking of a particular chain because you associate that word with their product.


The movie ‘Detachment’ has a clip going around now about ubiquitous assimilation. It’s about those red dots, and about developing our minds beyond the quick and dirty memes that get passed around like a joint at a barbecue. They get passed around by people who never read Richard Dawkins’ books, much less ‘The Selfish Gene’. They likely have no idea why we call them memes. They’re just memes, which occupy attention like little red dots. We have marketing trying to sell products, we have people trying to market their own ideas with memes, and sometimes some of those memes work to the benefit of everyone.

And sometimes you just get a mouthful of something that’s blech. Sometimes you might get a good cheeseburger, sometimes you might get a bad cheeseburger, you never know. Social media has people, little ones too, just chasing red dots.

That particular scene from ‘Detachment’ has Adrien Brody’s masterful delivery of such a simple concept – one we should not only be teaching children but also reminding adults of. If your clicky clicky ain’t getting you cheeseburgers you like, stop chasing them.

Criticism is often met with gaslighting – blaming individuals for not getting the cheeseburger they were shown. Somewhere in some very fine print that you’d need compound eyes to read, there’s a catch. As we grow older we learn to expect them – but we rarely read the fine print because… you effectively need compound eyes. Imagine having your lawyer look over every software license, copyright license, and terms of service document… you’d get nothing done, and you need to get things done.

What do you need to get things done? Are you chasing red dots again? What are you actually accomplishing? Do you have a sense of accomplishment? Do you get the cheeseburger in your mouth feeling, or do you get the red dot on carpet taste?

We need to spend time on ourselves so that we are less susceptible to bullshit red dots. Shine your own for yourself.

And maybe think about what the cat wants when you play with it.