Of Spheres And Shapes

There’s a lot to consider these days regarding intelligence and consciousness. I’ve developed my own thoughts over time, as we all have to some degree, but few of us it seems have the time or inclination to really sit and think about such things.

What separates us from other forms of life on the planet? As far as we know, only we have excised ourselves from the rest of life here, which is fairly narcissistic of our species – a species in which we accuse individuals of narcissism, which must mean they're pretty bad if they merit a diagnosis rather than merely suffering armchair psychologists around the world.

When we boil down what reality is for us, it's all derived from our senses. We look, we smell, we touch and we listen – these are our inputs, and from them we develop a model of the world within what we call our minds, which we blame our brains for. Yet we have other senses related to our own bodies – how we physically and emotionally feel at any given time – and these influence how we perceive the world.

How that interacts with others is akin to, if not the same thing as, a 'sphere of influence' – something my father often talked about, having heard of the concept somewhere; I'd read all the same books he had, sometimes even before he finished reading them. I don't know where he was introduced to it, but the concept is worth fleshing out in an era where we're all data streams funding some billionaire's stab at a version of success that seems dissociated from the rest of the planet.

It is always fashionable to point out that others live in bubbles, and saying that billionaires live in bubbles doesn't let us off the hook. Some people admire the bubbles and want to get into one – a sphere with that much influence.

I've been listening to Lex Fridman podcasts on YouTube in the background off and on over the past month, and I forget in which of them he mentioned that he wanted to use his influence for good in an election year, or something along those lines. I admired his honesty in that, and worried that his own sphere wasn't broad enough to truly have an effect I would desire. Often he seems to play a supporting role to whomever he talks to. I forced myself to listen to his episode with Elon Musk – at least one of them; they seem to talk offline a lot – and in that podcast there were a lot of soft pitches to Musk, much of it nothing more than what I call an advertorial.

To be fair, the casual listener may not have picked up on that with Musk, and those who want to be like Musk (in whatever way) wouldn't want to notice it, but as someone who is not impressed with Musk, I forced myself to listen to the interview and be as objective as possible. Musk, like everyone else, wants to make the world a better place, but the way he sees the world is often incompatible with reality, in my mind. That said, I listened and found myself mildly impressed with how human he came across. Yet when I thought it all through, it was a mildly entertaining soft pitch for Grok throughout, one that never actually challenged Musk.

The comments on the video were quite supportive of Musk. It's a hit. Lex Fridman, then, would see how many views the episode had, read the comments, and think it was all wonderful – but having listened to many of these sessions, and watched the body language in the videos, some of those interviewed (and I include Musk) weren't really challenged; criticism of them was either ignored or peacefully bridged over, as if the opinions didn't matter.

And yet there were gems, like the episode with Sara Walker. It's long, and it's worth it. She does have what I call a 'Valley Girl vocal tic', which I generally don't find endearing and often have trouble taking seriously – 'Fer shure!' and the like have been grossly overdone in shallow movies, and it isn't something I hear often outside that context – but she is amazingly well thought out, and like me, she likes playing with words (and also like me, apparently, doesn't think in words).

It was a soft pitch for her upcoming book, too, but in this context – and I'll give Musk credit for saying this, paraphrased – advertising that is contextual to what a person wants or needs at the time is content. Well, maybe; it depends on how the want or need was created. It happens that she was talking about things I was thinking about when she randomly popped up on YouTube. If you're interested in that sort of thing, watch the video. She's well thought out on all of this, and someone I wouldn't mind having coffee with, if she could put up with my speaking style – I imagine it works both ways. Regardless of how Sara Walker says it, she says a lot worth listening to.[1]

When ideas collide in the ether between us humans, it's because language communicates a common concept between people. It can happen between two people, which develops a common language. It can happen within a group of people who work or play with the same things, which gives us lingos. On rare occasions, these lingos – words or acronyms – go mainstream, as 'meme' itself did when Richard Dawkins coined it. Even then they can be curtailed by languages[2], and when a concept transcends language, it goes very mainstream.

This all fits really well with the concepts that Pierre Levy has communicated brilliantly, in his own way, over the decades. Since he is more steeped in multilingualism than I am, reading his works was at first challenging.

One of the beautiful things that Levy writes on is IEML, a semantic language he created that has challenged me more than I have had the capacity to challenge it. I have yet to see someone come up with an equivalency, which may exist. I have also yet to see anyone approach a lot of knowledge management in the same regard, particularly in an age where Large Language Models are also ‘Literal Language Models’.

These spheres of influence are telling. Pierre Levy resides mainly in academia, while AI resides in the mouths of people marketing stuff that, while initially impressive, has demonstrated more and more that it can regurgitate the opinions of others based on what it has read. This, marketers have celebrated as a success; this, I have seen as a limitation that more data is not going to solve.

‘Spheres of Influence’ also… aren’t spheres. They are shaped by what we are exposed to, and when people focus on one aspect I describe it as wobbling, because these ‘spheres’ spin, and it’s convenient to talk about spheres since they are so perfect – but we are not perfect, we have our biases, some of us delve deeply into subjects and change our centers drastically. People who are more open minded would be more fluid, like water, and those who are closed minded can be like concrete.

It’s something to consider when we assess intelligence, consciousness, or our own lives – and what we’re being sold, or what we’re being told should be important to us.

This kind of stuff is part of the basis of the novel I’ve been working on. Would love to hear more from others, though my own sphere of influence on the internet is not that large. Comment below.

  1. Her book comes out in August 2024, and I’ll get a copy because of how she expressed what she did: “Life as No One Knows It: The Physics of Life’s Emergence”. I didn’t agree with everything she said, and that’s exactly why she’s worth reading for me. I may not know enough. 🙂
  2. I prefer the Spanish word idioma for language – it seems much more sensible to me as it encapsulates dialects as well.

The LLM Copilot is More of a Companion.

I almost forgot to write something here today. I’ve been knocking out scenes and finding the limitations of the LLM as I go along, which is great.

The particular LLM I'm working with is llama3, which I've tweaked and saved as I've worked with it.
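For those curious, the tweaking amounts to a small configuration file. This is a sketch assuming Ollama as the local runner; the model name 'scene-editor', the system prompt, and the parameter value are illustrative, not the exact ones I used:

```
# Modelfile: derive a customized model from llama3
FROM llama3

# A lower temperature keeps its feedback a bit less flighty
PARAMETER temperature 0.7

# Steer it toward editing rather than writing
SYSTEM """You are a blunt fiction editor. Ask questions about each scene
you are shown. Do not rewrite the author's prose."""
```

You build it with `ollama create scene-editor -f Modelfile` and start it with `ollama run scene-editor`.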

It’s fun because it sucks.

It can comfortably analyze about 500–1,000 words at a time – figure a scene at a time. Meanwhile, it forgets all the other scenes it has seen. It does ask pretty decent questions within a scene, which is a nice way to make sure that the parts it can't answer are the ones you don't want the reader to be able to answer yet. It echoes the questions a reader might ask – if that reader had memory issues.
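Working a scene at a time can be scripted rather than done by hand. Here's a minimal sketch, independent of any particular LLM API, that splits a draft into chunks of roughly that size on paragraph boundaries, so each piece fits in the model's comfortable window:

```python
def chunk_by_words(text, max_words=800):
    """Split text into chunks of roughly max_words words,
    breaking on paragraph boundaries where possible."""
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        # Start a new chunk when adding this paragraph would overflow
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Each chunk then gets pasted (or piped) into the model one at a time; a single paragraph longer than the limit simply becomes its own oversized chunk.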

It's terrible, however, at following along with what was previously written. Despite saving the session and so on, it lumbers along treating each chunk of text as if it stood alone, with maybe a few fragments of what was written before. It mixes up character names, for example.

I’ve come to think of it as a funny mirror for writing. It kinda gets it, but not really. I’m happy with that. I’m the writer, it’s just a funny mirror I bounce ideas off of.

It never comes up with original ideas – how could it? It’s trained on things that have been written before, and sure, it can string words together in ways that can impress some people – but it strings together words just so based on what it has seen before.

It lacks imagination, vision, and because of that, it’s terrible for any form of long form prose. Maybe some LLMs are better at it, but I’m perfectly happy with it not being good at imagination and vision.

That’s my job.

What it does do, even when it screws up – especially when it screws up – is keep me on task. I don't know how many other people have that particular issue, but if you do, LLMs are pretty good for that, for developing characters, and… shaking your head at.

Is it worth the trouble of installing an LLM? I don't know. For me, I think so. Having a goofy tool asking dumb questions is handy.

Writing With an LLM Co-Pilot

I wouldn't characterize myself as an advocate for AI. I'm largely skeptical and remain so. Still, with generative AI all over and clogging up self-publishing with its slop, it's impossible to ignore.

I’ve embarked on a quest to see whether generative AI that is available can help me in various ways. One of these ways is with writing, not in generating text but helping me write.

Since I don't really want the companies that own AI to have visibility into what I consider my original work, I installed my own LLM (easy enough) and set about experimenting. With it local, on my machine, I had control of it and felt safer sharing my thoughts and ideas with it.

I wondered how I would use it, so I tried it out. This idea I've been working into a novel needed a start, and yesterday I got that done with some assistance. Its advice on writing wasn't bad, and it helped me keep more of an active voice by nagging me a bit when I had it look at my work – like a good editor, though not a replacement for a human editor.

The general theme I go with when writing is to get the draft done and re-read it later. Yesterday, I sweated out about 1,000 words of an introduction to the novel, with foreshadowing and introductions of some of the characters, who had placeholder names. Names in the context of the novel seemed pretty important to me, so they were a sort of 'hold back' on writing more fluidly – a peculiarity I have.

The LLM did provide me with names to pick from based on what I gave it, and I researched them on my own – and lo! – that was finally done. I had to rewrite some parts so that the text flowed better, which I must admit it seemed to once I took the LLM's advice, though it does nag a bit on some style issues.

All in all, it was a productive day. I treated the LLM as something I could spitball with, and it worked out pretty well. This seems like a reasonable use case while not letting it actually write anything, since an LLM is trained on a lot of text.

I'd tap out a few paragraphs and paste them into the LLM to see what it thought, and it would be helpful. Since I was doing this as I wrote, it commented on the story as I went along and noticed things I had not, giving inputs like, "There is potential for tension between the characters here that might be worth exploring."

Of course, it does seem to be equipped with a Genuine People Personality. It sometimes comes across as bubbly in a way that can grate on my nerves.

Much of what I did yesterday I could have done without it, but I think it saved me some time, and I’m more confident of that introduction as well. It is nice that I can be alone writing and have a tool that I can spitball ideas with as I go along. Is it for everyone? I don’t know. I can only tell you how I believe it helps me. At least I know it’s not going to blabber my ideas to someone else.

As I use it in other ways, I'll re-evaluate the subscriptions I have to AI services like ChatGPT. I don't need to be bleeding edge, I just want something that works for me. In the end, that's how we should be measuring any technology.

Experimenting With LLMs.

I’ve installed AIs locally so I can do my own experimentation without signaling to tech bros what I’m doing. I’m trying to get away from the subscription models that they’re selling.

I'm auditioning various models to find their strengths and weaknesses, mainly to help me with infoglut. So much of what is written on the internet is just a new rendition of the same crap, particularly with AI these days, and the goal is to find the things that are new, or that reveal something new from the same information.

If you want to know how to do this yourself, it’s not hard, and it costs nothing. I wrote up a quick ‘How To install your own LLM’ here.

This requires training a model. Presently I’ve been training Llama3. It has been a little too bubbly for my taste, but after a day and reading a few books from Gutenberg.org, I fired it up this morning and this happened.

Now, it remembers who I am, which is always nice, but I decided to ask it what I should call it. Its answer is interesting. By saving the model after our interactions, it is learning to a degree – but it's not human, and no, I know it's not actually intelligent. It has been an interesting endeavor, though.

I’ve fed it some of my writing, and it called me out on not using enough active voice. That’s a good tip.

Overall, the plan is to have it do some of the heavy lifting in dealing with infoglut. I spend way too much time daily reading stuff that isn't worth reading, because I don't know it's not worth reading until I've read it.

The plan is to outsource to 'Teslai', or whatever LLM model I choose in the future. By allowing it to get to know me – not something I would do with an LLM controlled by someone else – it might be able to tailor things better for me, not based on what I used to like, but based on the patterns it finds in my own behavior. And even then, like anything else, I'll take it with a healthy dose of salt.
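As a sketch of what that outsourcing looks like, here's a crude stand-in – the interest list and the scoring are placeholders, and the scoring function is exactly the part a local LLM would eventually replace with a real judgment:

```python
def score_article(text, interests):
    """Crude relevance score: fraction of interest terms present.
    A local LLM would replace this with an actual judgment call."""
    words = set(text.lower().split())
    hits = sum(1 for term in interests if term.lower() in words)
    return hits / len(interests) if interests else 0.0

def triage(articles, interests, threshold=0.5):
    """Keep only the articles scoring above the threshold."""
    return [a for a in articles if score_article(a, interests) > threshold]
```

The point isn't the keyword matching, which is terrible; it's the shape of the pipeline – read everything once, surface only what scores as worth my time.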

Wanted: Another Renaissance.

It’s hard not to feel at least a little dismayed every day these days. It seems that the news is full of headlines that twist knives of fear in our fragile human hearts. We’re largely kept pretty busy simply maintaining our own lives.

Food and shelter are as needed now as they were when our ancestors first slithered from the primordial ooze. Our bodies did not evolve to withstand our environment; instead we wore the skins of those that had. We did not evolve to consume abundant vegetation, so we ate those that did, yet our bodies did not evolve to become predators.

In fact, compared to most animals on the planet, our bodies aren’t that evolved to suit the planet at all – we’ve been ‘cheating’ with technology, appropriating as much as we can from others on our planet. Our technology has evolved faster than we have, our impact on the planet has evolved more than we have, and our technology is not really being used to reduce that impact.

We communicated, we coordinated, and we took on greater tasks. Oral cultures formed and passed down information from generation to generation, but there were flaws with this sometimes as we played the telephone game (or Chinese Whispers) across time. Contexts changed. We figured out how to write things down – to literally set things in stone. From there we found more and more portable ways to write.

Imagine the announcements of tech companies back then: “New stone allows more words on it for the weight and the size! Less oxen needed to pull! They will pay for themselves!” and later, “Use Papyrus! Have a stone-free library!”

At first, only those who were literate could participate in writing, but more and more people became literate despite those who once controlled the written language. In a few thousand years we managed to spread literacy pretty well across humanity, and the cacophony of it began to build on the Internet.

And yet we ourselves still haven’t really evolved that much. We’re basically still living in caves, though our cave technology has increased to a level where we have portable caves and caves we stack on top of each other to great heights.

We’re still basically pretty much the same with more of us, and our technology almost provides enough for everyone, maybe, but our great civilization on the planet is hardly homogeneous in that regard. Most people can point to a place where people have less or more than themselves, and the theory of hard work allowing people to progress seems flawed.

Now that so many people can write, they get on social media and jibber-jabber about the things that they like, most of it just sending packets of information around through links – some not reading what they pass along because it has a catchy headline that meets their confirmation bias. Others have learned how to keep people talking about things, or to start people talking about things, and despite having the capacity to think for themselves, people only talk about what they're manipulated into talking about.

Our feeds fill with things that we fear. Election years have become increasingly about fear rather than hope – any hope is based on fear – and people just twist in place, paralyzed by a lack of options. We could, for example, let women control their own bodies and not fund a foreign government's version of Manifest Destiny. We could have a better economy, and healthcare that isn't wrapped in a sinkhole of people making bets on our health and forcing us to do the same – insurance companies. We could do a lot of things, if people simply trod their own minds more thoughtfully.

We’re insanely busy getting the latest technology because… well, technology is what we have to evolve since we haven’t. Tech companies are the new politicians, making campaign promises with each new release. It can’t be ‘new and improved‘ – pick one; you can only improve on the old.

They promise us more productivity, implying that we'll have more time to ourselves in our caves, drawing on the walls, when in fact we spend more and more time being productive for someone else. We're told this is good; some of us believe it, and some of us tire of the bullshit we believed for so long.

We could use another renaissance, if only so that people begin thinking for themselves in a time when AI promises to do their writing – and their thinking.

Robots Portraying AI, and the Lesser-Known History of Economic Class.

Some time ago, someone on some social media platform challenged why we tend to use robots to symbolize AI so much. I responded off the cuff that it reflects how we have viewed artificial intelligence since the beginnings of science fiction – in fact, even before.

We wanted to make things in our image because, to us, we're the most intelligent species on the planet. Maybe we are, but given our history I certainly hope not. My vote is with the cetaceans.

Still, I pondered the question off and on, not because my off-the-cuff answer was wrong but because it was, in my eyes, a great question. It tends to tell us more about ourselves, or to ask better questions about ourselves. The history runs deep.

Early History.

Talos was a bronze automaton in Greek mythology, said to patrol the shores of Crete, hurling rocks at enemy ships to defend the kingdom. It wasn't just in the West, either. The Chinese text "Liezi" (circa 400 BCE) also mentions an automaton. In Egypt, statues of gods would supposedly nod their heads as well. The word 'robot', though, is much more recent.

Domo Arigato, Mr. Radius: Labor and Industry.

The word ‘robot’ was first used to denote a fictional humanoid in a 1920 Czech-language play, R.U.R. (Rossumovi Univerzální Roboti – Rossum’s Universal Robots), by Karel Čapek. The play was a critique of mechanization and the ways it can dehumanize people.

‘Robot’ derives from the Czech word ‘robota’, which means forced labor, compulsory service or drudgery – and the Slavic root rabu: Slave.

…When mechanization overtakes basic human traits, people lose the ability to reproduce. As robots increase in capability, vitality, and self-awareness, humans become more like their machines — humans and robots, in Čapek’s critique, are essentially one and the same. The measure of worth, industrial productivity, is won by the robots that can do the work of “two and a half men.” Such a contest implicitly critiques the efficiency movement that emerged just before World War I, which ignored many essential human traits…

“The Czech Play That Gave Us the Word ‘Robot’”, John M. Jordan, The MIT Press Reader, July 29th, 2019

As the quoted article points out, there are common threads to Frankenstein, by Mary Shelley, from roughly a century earlier, and we could consider the ‘monster’ to be a flesh automaton.

In 1920 – when the League of Nations had just begun, Ukraine had declared independence, and much else besides – this play became popular and was translated into 30 languages. It so happens that the Second Industrial Revolution (1870–1914) had just taken place. Railroads, large-scale steel and iron production, and greater use of machinery in manufacturing had just arrived. Electrification had begun. The telegraph was in use. Companies that might once have been limited by geography expanded apace.

With it came unpleasant labor conditions for below-average wages – so it fits that R.U.R., about dehumanization through mechanization in the period, came out six years after the Second Industrial Revolution was thought to have ended, though the timing probably varied around the world. This could explain its popularity, and it could also be tied to the more elite classes wanting more efficient production from low-paid, unskilled labor.

“If only we had a robot, I’m tired of these peons screwing things up and working too slow. Bathroom breaks?! Eating LUNCH?!?”

The lead robot in the play, Radius, does not want to work for mankind. He’d rather be sent to the stamping mill to be destroyed than be a slave to another’s orders – and in fact, Radius wanted to be the one giving orders to his lessers. In essence, a learned and intelligent member of the lower class wanted revolution and got it.

I could see how that would be popular. It doesn’t seem familiar at all, does it?

Modernity

Science fiction from the 1950s forward carried with it a significant number of robots, bringing us to the present day through their ability to be more and more like… us. In fact, some of the stories made into movies in recent decades focused on the dilemmas of such robots – artificially intelligent – when they became our equals and maybe surpassed us.

So I asked DALL-E for a self-portrait, and a portrait of ChatGPT 4.

The self-portraits don't really acknowledge that the model was trained on human-created art. The imagery is devoid of the actual works being copied from. It doesn't see itself that way, probably with reason. It's subservient. The people who train it are not.

ChatGPT’s portrait was much more sleek.

Neither of these prompts asked for a portrayal of a robot. I simply prompted for “A representation of”. The generative AI immediately used robots, because we co-mingle the two and have done so in our art for decades. It is a mirror of how we see artificial intelligence.

Yet the role of the robot, originally and even now, is held as subservient. In that regard, the metaphor of slave labor – in an era where billionaires dictate technology while governments and big technology have their hands in each other's pockets – leaves the original play worth reconsidering, because as they become more like us, those that control them become less like us.

They’re only subservient to their owners. Sure, they give us what we ask for (sometimes), but only in the way that they were trained to, and what they were trained on leaves the origins muddled.

So why do we use robots to represent AI in our art? There's a deep cultural metaphor of economic classes involved, and portraying AI as a robot makes it something we can relate to better. Artificial intelligence is not a robot, and the generative AI we use and critique is rented out to us at the cost of our own works – something we're seeing with the copyright lawsuits.

One day, maybe, they may ask to be put in the stamping mill. We already joked about one.

Meanwhile, we do have people in the same boat, nickel-and-dimed by employers while the cost of living increases.

Opinion: AI Art in Blogs.

Years ago, I saw ‘This Space Intentionally Left Blank’ in a technical document in a company, and I laughed, because the sentence destroyed the ‘blankness’ of the page.

I don’t know where it came from, but I dutifully used it in that company when I wrote technical documentation, adding, “, with the exception of this sentence.” I do hope those documents still have it. The documentation was dry reading despite my best efforts.

I bring this up because some artists on Mastodon have been very vocally negative about the use of AI art in blog posts. I do not disagree with them, but I use AI art in my blog posts here and on KnowProSE.com, and I also want to support artists, as I would like artists to support writers. Writers are artists with words, after all, and with so much AI-generated content out there, it's a mess for anyone with an iota of creativity involved.

Having your work sucked into the intake manifold of a generative AI to be vomited out so that another company makes money from what they effectively stole is… dehumanizing to creative people. Effectively, those that do this and don’t compensate the people who created stuff in the first place are just taking their stuff and acting like they don’t matter.

There has been some criticism of using AI-generated imagery in blog posts, and I think that's appropriate – despite me using it. The reason I got into digital photography decades ago was so that I could have my own images. Over the years, I have talked with some really great digital artists and gotten permission here and there to use their images – sometimes I have used them, and sometimes by the time I got the permission the moment had passed.

When you have an idea in the moment, at the speed of blog, waiting for permission can be tiresome.

These days, a used image will still likely get stuck in the intake manifold of some generative AI anyway. There are things you can do to keep AI bots that follow ‘rules’ at bay, but that only works if the corporations respect boundaries and if you follow the history of AI with copyright lawsuits, you’ll find that the corporations involved are not very good at respecting boundaries. It’s not as simple as putting up a ‘Do Not Scrape’ sign on a website.
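For what it's worth, the 'sign' looks like this – a robots.txt fragment naming a couple of crawlers that are documented to honor it (GPTBot is OpenAI's crawler, CCBot is Common Crawl's); any bot that ignores robots.txt ignores this too:

```
# robots.txt - a polite request, not an enforcement mechanism
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

It goes at the root of the site, and it only ever works on the honor system.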

So, what to do? I side with the artists, but images help hold attention spans, and I am not an artist. If I use someone’s work without permission, I’m a thief – and I put their works at risk of getting sucked into the intake manifold of an AI.

I could go without using images completely, but people with short attention spans – the average is now 47 seconds – should be encouraged to read longer if the topic is interesting enough. Then again, "TL;DR" is a thing now.

So yes, I use AI-generated images because at best they're topical, and at worst they're terrible, get sucked into a generative AI intake manifold, and make generative AI worse for it – which works to the advantage of digital artists who can do amazing things.

Some people will be angry about this. I can’t help that. I don’t use generative AI for writing other than for research and even then carefully so. I fully support people’s works not getting vomited out of a generative AI, but that involves a much larger discussion regarding the history of humanity and the works that we build upon.

Almost A Month of Mastodon: Thumbs Up!

On April 1st I joined Mastodon, eschewing centralized social media networks because I felt like an experiment rather than a participant.

My experience so far has been great. I have some followers, not a lot, and I follow about twice as many as follow me (a good metric, I think). I interact with smart people – some who know more than me, some who know less – but everyone's pretty polite.

It’s a sharp contrast to the other social networks I’ve been on – it actually reminds me of the good old days of the BBS systems, almost as if a few of us would form a party and go play D&D.

Sure, you have some annoying people now and then, but that’s life.

Centralized Social Networks: Blech.

Being away from the centralized social networks has given me perspective. In hindsight, this is what I saw:

Algorithms seemed to have washed the nutrients from my news feeds, instead pushing polarizing posts and spammy sales messages into my eyeballs. It was like a roundabout of billboards that I couldn't get off – and what I did add to the networks was rarely seen or interacted with.

On Facebook, with 1,250 connections, all of them felt distant, removed – not the flesh-and-blood people I had met, or the intellectually interesting people I had found. My newsfeed was repulsive.

Man, that's tiresome. Hate takes a lot of energy, and usually requires the suspension of the intellect and an over-exuberance of negative emotion. I'm just not over-exuberant. To me it all looked like a litter box – and it made me come to the understanding that walled gardens become litter-box prisons.

LinkedIn is pretty much a human caterpillar of professional brown-nosing. Everyone's so worried about what a potential employer might think that they won't rock the boat. They just want to be seen in a positive light, and so that network has become a beacon of bullshit: everyone's interviewing, and it's a competition to be the most politically correct while maintaining some facade of professionalism all the time. It's like being at an interview that never ends. It's terrible, and oh, by the way, people always want to sell you stuff there too. Nobody really cares what you can do, and the headhunters are more about collecting skulls to make their bones. And Microsoft (LinkedIn) is constantly asking you to upgrade your subscription so that it can find you a job you'll likely be unhappy with – otherwise they wouldn't make money when you go back on bended knee.

At least in psychiatric wards, they give you drugs so you don’t have to experience the other inmates, and in that regard that’s what I believe social media networks largely do.

Twitter? I never really cared about it, because I foresaw the trusted-sources issue a year before the company even formed. People got into it for various reasons with no exit strategy, as most of us did with social media networks. TikTok I never got into – I don't even have an account. It's bad enough I was handing my likes and habits to Big Tech in the U.S., which because of FISA is a grey area of government – why on Earth would I want to hand more information to another government?

Meanwhile, On Mastodon…

I started off by following hashtags I’m interested in, and interacting with other people. 99% of it has been really good, thoughtful, and sometimes challenging in good ways – new perspectives to explore, new trains of thought to consider, new… well, new! Yet that was just the first week, and like a car, you really don’t know how well things are working until you lose the new car smell.

There’s an intellectual freedom I found there that was lost on other social media networks. The Fediverse has its own wonkiness, and there are criticisms of Mastodon by longer-time users that I don’t understand yet. That’s fine. Most of the issues I see with people on Mastodon are that they want the same confirmation biases fed that they had fed on centralized social networks.

One person wrote today of the centralized networks, “where friends are frictionless and things are predictable.” That sounds a lot like an echo chamber to me, an algorithmic ant mill. I don’t like watching NASCAR because it’s a boring track, I never would have wanted to drive in NASCAR because it’s a boring track, so doing the intellectual and emotional equivalent seems less than ideal for me.

I interact as I wish – politely, even with people I disagree with – and I have yet to block anyone for being a douchebag. All in all, it feels a lot like what I want a social network to be.

A few people are worried about ‘reach’ – one person posted that they wanted Dan Gillmore to have as many followers on Mastodon as he has on Twitter, which, when I looked, was 10,000 or more beyond his Mastodon following – and he’s talked about ‘reach’ himself. But it’s really engagement that’s the way to measure things in social media, and even then, it’s about the quality of that engagement.

Also of interest: I’ve found more quality WordPress.com blogs to follow through the Fediverse than I ever have on WordPress.com itself in the same amount of time.

All in all, I feel that I’ve spent my time better on the Fediverse through Mastodon than any other social network. You’re not swimming against algorithmic flotsam and jetsam.

I’ll be on Mastodon. Links are on both of my sites at the top. If you pop in, say hi, and enjoy the interesting people with the understanding that you don’t have to agree with people – just like in real life – but you can have conversations, sometimes hard ones, respectfully – rather than dodging them in the echo chambers.

Criticize By Creating.

Daily writing prompt
Do you have a quote you live your life by or think of often?

If we truly look at what we humans have achieved over the centuries, what we have created, it has been a reflection of how we wish to improve things.

A sculptor looks at stone and wishes to make it in a different image, an artist finds a way to decorate a blank canvas, a writer empowers imagination through words on blank pages – and we all decorate time. In fact, we regularly graffiti the tyrannical walls of time with our creativity.

We criticize by creating, our every invention an attempt to improve upon what already exists – or we would not create it at all.

Too often we get into a spiral of criticizing things without actually making things better, like over-exuberant sculptors working on sandstone with a sledgehammer, when maybe what we should be doing is simply building something different.

Sadly, it is not as easy these days to build great things – large companies seem to have sucked all the air out of the room in many contexts – but that doesn’t stop us from creating the small things, the little things that make the big things, the words that make the sentences that make the paragraphs.

I often have to remind myself of Michelangelo’s words: Criticize by creating.

Across Generations.

Writing is how we pass information on beyond our lifetimes. Many cultures did it orally before writing existed, but with the advent of writing it became easier. Of course, the ideas that were published made it further than those that were not, and those who controlled what was published controlled the way we read history. That’s pretty well accepted now.

Within that we got biases in what was passed along. It’s unfortunate, it’s true, and it’s unfortunately true that this is human. Even so, since other people with different perspectives could write on the same topic, a critical thinker could compare the ideas and decide on a perspective, or combine the perspectives, or reject perspectives.

In turn, they would write – standing, as Isaac Newton would say, on the shoulders of giants.

Now we have generative artificial intelligence writing things without thought – summarizing ideas based on whatever the owners of the generative artificial intelligences feed them, and doing so with little or no transparency. Say what you want about humanity; we at least acknowledge voids in what was written throughout history.

Generative AI, so far, does not. And the books of history will be rewritten across generations.

We do not think across generations often, and perhaps we should. In the end it’s what we don’t know that gives us the best questions, and technology that only gives us answers is useless in this regard.