Killing Off the Geese that Lay Golden Eggs

We all know the story of the goose that laid the golden eggs, and how the idiot who killed the goose got no more golden eggs. It has long been considered good practice not to kill something that is producing valuable things for you.1

This is what some companies are doing, though, when it comes to AI. I pointed out here that companies were doing it before AI, too, though in the example of HuffPost the volunteers who once contributed to its success were simply left out in the cold.

It is a cold world we live in, and it gets colder each day. More and more people are being affected by generative AI companies, from writing to voice acting to deepfakes of mentionable people doing unmentionable things.

Who would contribute content willingly to any endeavor when it could simply be used to replace them? OK, aside from idiots, who else?

I did hear a good example, though. Someone who does research, and gets paid to do it, has no issue with his work being used to train an AI, and I understood his position immediately: he's making enough, and the point of doing research is to have it used. But, as I pointed out, he gets paid, and while I don't expect he has billions in the bank, I'd say that as long as he's still getting paid to do research, all will be well for him.

Yet not all of us are still getting paid. Everyone seems intent on the golden eggs except the geese that can lay them. If you can lay golden eggs, you don't need to go killing geese looking for them… and – because it seems that tech bros need reminding – dead geese do not lay eggs.

  1. I’ve often wondered whether this is how Hindus came not to eat beef, since Indian cuisine relies heavily on the products of the cow – a poor family killing a cow for its meat would not make sense. Maybe not, but it’s plausible. ↩︎

Knowing What Something Is.

Thraupis episcopus, the blue-grey tanager, also called the blue jean in Trinidad and Tobago.

Recovering yesterday from the silicon insult, I kept coming back to a quote as I awoke now and then.

You can know the name of a bird in all the languages of the world, but when you’re finished, you’ll know absolutely nothing whatever about the bird… So let’s look at the bird and see what it’s doing — that’s what counts. I learned very early the difference between knowing the name of something and knowing something.

Richard P. Feynman, “What Do You Care What Other People Think?”: Further Adventures of a Curious Character

We use labels to communicate things to other people, and it’s all based on some common perception. The bird pictured is blue-grey, so some very smart person called it a blue-grey tanager, where a tanager is a type of bird that shares common characteristics with other birds we call tanagers. Then someone who was taught too much Latin in school decided it looked a lot like the ‘Bishop of Thraupi’ (the literal translation). I have no idea why it’s called a blue jean in Trinidad and Tobago, but it is what it is.

Like most creatures, they’re interesting in their own way. I spent a lot of time watching birds in Trinidad and Tobago, taking pictures of them as a challenge; most ended up on Flickr, and most weren’t that great. In doing that, I learned how the birds interacted with others and what they ate, and when I talk about a blue-grey tanager, all of that is behind the label. I know what the bird is based on what it does, how it behaves, etc.

It’s not just a label.

In the movie ‘Good Will Hunting’, a similar point was made in one of the more epic tirades delivered by the late, great Robin Williams:

…You’re an orphan right? You think I know the first thing about how hard your life has been, how you feel, who you are, because I read Oliver Twist? Does that encapsulate you?

“Good Will Hunting” (1997), Sean speaking to Will.

The obvious way to go with this would be identity politics and some of the silliness that ensues from it, because labels clearly don’t mean as much as who the people we’re talking about actually are – but that’s not where I’m going with it, though in a way, I am.

When we look at generative AI, and how it is trained on the ways we have communicated in the past – be it art, writing, etc. – all it’s really doing is using the labels as puppets. It doesn’t understand what it spits out in response to a prompt.

I’ve met people like that. In fact, in my younger days, I was more like that than I now care to admit – reading about things I didn’t understand, and having my worldview defined by the views of others. Actual experience varies, and that’s the point of all of it. That diversity of experience is what enriches our society, or should. It’s additive.

It’s impossible for us to share all of our experiences with others, but we can share more if we go beyond the labels. That one picture above of the blue-grey tanager did not just happen. It required me to understand the bird well enough to get close, with only 3x magnification on one of the original digital cameras, to capture the detail I did; it took trimming the plum tree just right so the branches were close enough to the top of the stairs; and it required a lot of patience in developing trust with the birds – that I wasn’t going to eat them.

The very experiences that make us human are the things we need to fall back on to be human these days, not the rote memorization and regurgitation of labels that generative artificial intelligences are much better at than we are.

We need to understand these things.

The Challenge.

While researching how to opt out of WordPress.com and Tumblr.com selling my content to Midjourney and OpenAI, I ran across some thoughtful writing on opting out of AI by Justin Dametz.

He’s someone I likely wouldn’t cross paths with otherwise, since I’m not very interested in theology, which he writes about quite a bit. I imagine he could say the same about my writing, but we have a nexus.

His piece was written last year, and it echoes some of my own sentiments about the balance between AI and writing; he makes solid points about young people learning how to communicate for themselves.

I tend to agree.

Yet I am also reminded of learning calculus without a calculator. Scientific calculators were fairly new in the late 1980s when I learned calculus, and they even came solar-powered so we wouldn’t have to fiddle with batteries. These were powerful tools, but my class wasn’t allowed to use them until we had the fundamentals down. This, of course, did not stop us.

Speaking for myself, I wrote BASIC code on an old VIC-20 that let me check my answers. This didn’t really help me with my homework or on tests, since we were required to show our working – and if we did it the right way but got the wrong answer, we still got the majority of the points for the question. We had to demonstrate the fundamentals.
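Something along these lines – not the original program, of course, just a minimal sketch of the same idea in Commodore-style BASIC – would compare a hand-worked derivative of f(x) = x² at x = 3 (which should be 6) against a quick numerical estimate:

  10 REM CHECK A HAND-WORKED DERIVATIVE NUMERICALLY
  20 REM F(X) = X*X, SO THE HAND ANSWER AT X = 3 IS 2*3 = 6
  30 DEF FN F(X) = X*X
  40 X = 3 : H = .001
  50 D = (FN F(X+H) - FN F(X-H)) / (2*H)
  60 PRINT "NUMERIC ESTIMATE:"; D
  70 PRINT "HAND ANSWER:"; 2*X

A program like that could confirm a number; it couldn’t show the working.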

How does one demonstrate the fundamentals of writing? How does one demonstrate the ability to communicate without crutches? The answer is by ensuring none of the crutches are available to help. I suppose we could have students write in Faraday cages in class to evaluate what they produce – or we could simply reward original writing, because the one thing artificial intelligence cannot do is imagine, and while it can relate human experience through the distillation of statistics and words, it doesn’t itself understand the human experience.

A generative AI can spit out facts, narratives it has seen before, and images based on what it has been trained on – but it really adds nothing new to the human experience except the ability to connect things across what human knowledge we have trained it on.

But how do we teach children how to write without it? How do we then teach students how to learn and be critical of the results we get?

First, we have to teach them to learn instead of chasing grades – a problem that has confounded us for decades – and to value ability rather than titles and fancy pieces of paper to hang on the walls.

That’s the next challenge.