In my daily readings yesterday, I came across Unleashing the Power of AI: Preparing for Social Change alongside Technological Advancements at a blog that I regularly read. It’s an odd one out for the blog, so it definitely caught my eye. The blog Be Inspired! is worth a read; this was just a single post that happened to be tech-related, on something I have been writing about.
She hit some of the high notes, such as:
…Furthermore, the unequal access to and distribution of AI technology may exacerbate societal divisions. There is a significant risk of deepening the digital divide between those who have access to AI advancements and those who do not. To bridge this gap, it is crucial to implement inclusive policies that promote equal access to AI education and training across all demographics. Efforts should be made to democratize access to AI tools, ensuring that everyone has equal opportunities to benefit from this technological revolution…
Unleashing the Power of AI: Preparing for Social Change alongside Technological Advancements, Be Inspired, July 23rd, 2023.
It is interesting that I noted a ‘voice change’ with the author – having read her blog, this isn’t her standard fare – and the high points were hit well. It also happens that the quotation above is something that keeps getting thrown around as if someone else is going to do it.
Someone else is not going to do it. When I have used ChatGPT and others and asked questions about how artificial intelligence will impact human society, I noted that the large language models are very good at using those neural networks, deep learning algorithms and datasets we have no insight into to say, in a very verbose way, “Well, you silly humans need to fix yourselves.”
Of course, it’s not sentient. It’s predictive text on steroids, crack and LSD. It only has information given to it, and that information likely included stuff written by people who said, “Hey, we need to fix these issues”.
Well, we do need to fix a lot of issues with human society, even as the hype cycle spits out a lot of gobbledygook about the technological singularity while glossing over the issues of bias in the data that the authors of our future point out every now and then.
Yet there will always be bias, so what we are really talking about is human bias, and when we speak as individuals we mean our own bias. If you’re worried about losing your income, that’s a fair bias and should be in there. It shouldn’t be glossed over as ‘well, we will need to retrain people and we have no clue about it other than that’. If you’re worried that the color of your skin doesn’t show up when you generate AI images, that too is a fair bias – but to be fair, we can’t all be shades of brown, so we need to address that personal bias in our expectations as well.
It does bug me that every time I generate an image of a human it’s of someone with less pigment, and even then I’m not sure which shade of brown I want. It’s boggling to consider, and yes, it does reinforce stereotypes. It’s a very complicated issue, and it happens because we all want to see familiar versions of ourselves. I have some ideas, but why would I share them publicly where some large language model may snatch them up? Another problem.
Most of the problems we have with the future of artificial intelligence stem from our past, and the past that we put into these neural networks – how we create our deep learning algorithms – is a very small part of it. We have lines on maps that were drawn by now-dead people, reinforced by decades, if not centuries or even millennia, of degrees of cultural isolation.
We just started throwing food at each other on the Internet when social media companies inadvertently reinforced much of that cultural isolation by giving people what they want – and what they want is familiar and biased toward their own views. It’s a human thing. We all do it and then say our way is best. We certainly lack originality on that frontier.
We have to face the fact that technology is an aspect of humanity. Quite a few humans these days drive cars without knowing much about how they work. I recently asked a young man whether, when they tuned his car, they had lightened his flywheel, since I noticed his engine working significantly harder on an incline – and he didn’t know what that was.
However, we do have rules on how we use cars. When drunk driving was an issue, some mothers stepped up and forced the issue, and now drunk driving carries an even higher risk than it already did: you might go to jail over it, pondering why that night became so expensive. People got active; they pressed the issue.
The way beyond AI is not through the technology, which is only one aspect of our humanity. It’s through ourselves – through all the other aspects of humanity, which should be more vocal about artificial intelligence – and you might be surprised to learn that they were, well before this hype cycle began.
Being worried about the future is nothing new. Doing something about it, by discussing it openly beyond the technology perspectives, is a step in the right direction, because all the things we’re worried about are… sadly… self-inflicted by human society.
After centuries of evolution that we think separates us from our primate cousins, we are still fighting over the best trees in the jungle – territory – in ways that are much the same, but also in new ways, where dominant voices projected with just the right confidence make people afraid to look stupid, even when their questions and their answers may be better than those of the dominant voices.
It’s our territory – our human territory – and we need to take on the topics so ripe for discussion.