The Vultures.

They ask how to escape.
But never why they are in the trap.

They ask how to win.
But never whether the game is worth playing.

The wrong question attracts the wrong answers.
And the longer you stay with the wrong question,
the more the wrong answers start to look like wisdom.

The vultures circle.
They do not need to attack.
They only need to wait.

Manipulation In The Age of AI – And How We Got Here.

We understand things better when we can interact with them and see an effect. A light switch is a perfectly good example.

If the light is off, we can assume the switch is in the off position. A lack of electricity would make that assumption flawed, so we look around and see if other things that require electricity are still on.

If the light is on, we can assume the light switch is in the on position.

Simple. Even if we can’t see, we have a 50% chance of getting this right.

It gets more complicated when we don’t have an immediate effect on something, or can’t have an effect at all. As I wrote about before, we use a lot of things every day without understanding how they work. This is sometimes a problem. Are nuclear reactors safe? Will planting more trees in your yard impact air quality in a significant way?

This is where we end up trusting things. And sometimes, these things require skepticism. The claim that the world is flat deserves as much skepticism as the claim that it is round, but there is evidence all around us that the world is indeed round and little evidence that it is flat. So why do people still believe the earth is flat?

Shared Reality Evolves.

As children, we learn by experimenting with the things around us. As we grow older, we lean more on information and trusted sources – like teachers and books – to tell us what is true. My generation was the last before the Internet, so whatever information we got had been peer reviewed, had passed muster with publishers, and so on. There were many hoops to jump through before something went out into the wild.

Yet because we read the same books and magazines and watched the same television shows, we had a shared reality – one that we had, to an extent, agreed upon, and that, to another extent, was forced on us.

The news was about reporting facts. Everyone who had access to the news had access to the same facts and could come to their own conclusions, though to say that there wasn’t bias then would be dishonest. News just traveled slower, and because it traveled slower, more skepticism could come into play, which made faking things harder to do.

Enter The Internet

The early adopters (I was one) were akin to the first car owners because we understood the basics of how things worked. If we wanted a faster connection, we figured out what was slowing it down – largely without search engines, and then search engines made it easier. Websites with good information were valued; websites with bad information were ignored.

Traditional media increasingly found that the Internet business model was based on advertising, and their traditional methods of advertising didn’t translate well to it. To stay competitive, some news became opinion and began to spin toward whatever got readers clicking, because clicks were what advertisers paid for. The Internet was full of free information, and traditional media had to compete.

Over a few decades, the Internet became more pervasive, and the move toward mobile phones – which are not used mainly as phones anymore – brought information to us immediately. Advertisers and marketers found that next to certain content, people were more likely to be interested in certain advertising, so they started tracking that. They started tracking us, and they stored all of that information.

Enter Social Media

Soon enough, social media came into being and suddenly you could target and even microtarget based on what people wanted. When people give up their information freely online, and you can take that information and connect it to other things, you can target people based on clusters of things that they pay attention to.

Sure, you could just choose a political spectrum – but you could add religious beliefs, gender/identity, geography, etc., and tweak what people see based on a group built from their actual interactions on the Internet. Sound like science fiction? It’s not.

Instead of a shared reality on one axis, you could target people on multiple axes.

Cambridge Analytica

Enter the Facebook-Cambridge Analytica Data Scandal:

Cambridge Analytica came up with ideas for how to best sway users’ opinions, testing them out by targeting different groups of people on Facebook. It also analyzed Facebook profiles for patterns to build an algorithm to predict how to best target users.

“Cambridge Analytica needed to infect only a narrow sliver of the population, and then it could watch the narrative spread,” Wylie wrote.

Based on this data, Cambridge Analytica chose to target users that were “more prone to impulsive anger or conspiratorial thinking than average citizens.” It used various methods, such as Facebook group posts, ads, sharing articles to provoke or even creating fake Facebook pages like “I Love My Country” to provoke these users.

“The Cambridge Analytica whistleblower explains how the firm used Facebook data to sway elections“, Rosalie Chan, Business Insider (Archived), October 6th, 2019

This drew my attention because it impacted the two countries I am linked to: the United States and Trinidad and Tobago. Cambridge Analytica is known to have impacted the Ted Cruz Campaign (2016) and the Donald Trump Presidential Campaign (2016), and to have interfered in the Trinidad and Tobago Elections (2010).

As for the timeline of all of that: things were figured out years after the damage had already been done.

The Shared Realities By Algorithm

When you can splinter groups and feed them slightly different or even completely different information, you can impact outcomes, such as elections. In the U.S., you can see it in the biases of television news channels – Fox News was the first to be noted. With the average attention span of people now at 47 seconds, technosocially dominant platforms like Twitter and Facebook can make this granularity finer and finer.

Don’t you know at least one person who believes some pretty wacky stuff? Follow them on social media and I guarantee you’ll see where it’s coming from. And it gets worse now, because AI has become more persuasive than the majority of people while critical thinking has not kept pace.

When you like or share something on social media, ask yourself whether someone is holding a laser pointer and just adding a red dot to your life.

The Age of Generative AI And Splintered Shared Realities

An AI attached to the works of humans

Recently, people have been worrying about AI in elections, primarily focusing on deepfakes. Yet deepfakes are very niche and haven’t been that successful. This is probably in part because they have been the focus, and therefore people are skeptical of them.

The generative AI we see – large language models (LLMs) – was trained largely on Internet content, and what is Internet content largely? What can you not seem to view a web page without? Advertising. Selling people stuff that they don’t want or need. Persuasively.

And what do sociotechnically dominant social media entities do? Why, they train their AIs on the data available, of course. Wouldn’t you? Of course you would. To imagine that they would never use your information to train an AI requires more imagination than the Muppets on Sesame Street could muster.

Remember when I wrote that AI is more persuasive? Imagine prompting an AI on what sort of messaging would be good for a specific microtarget. Imagine asking it how to persuade people to believe it.

And imagine, in a society of averages, that the majority of people will be persuaded by it. What is democracy? People forget that it’s about informed conversations; they go straight to the voting because they think that is the measure of a democracy. It’s a measure, and the health of that measure reflects the health of the discussion preceding the vote.

AI can be used – and I’d argue has been used – successfully in this way, much akin to the story of David and Goliath, where David used technology as a magnifier. A slingshot effect. Accurate when done right, multiplying the force and decreasing the striking surface area.

How To Move Beyond It?

Well, first, you have to understand it. You also have to be skeptical about why you’re seeing the content that you do, especially when you agree with it. You also have to understand that, much like drunk driving, you don’t have to be drinking to be a victim.

Next, you have to understand the context other people live in – their shared reality and their reality.

Probably most important is not calling people names because they disagree with you. Calling someone racist or stupid is a recipe for them to stop listening to you.

Where people – including you – can be manipulated into division by what is shown in their feeds, find the common ground. The things that connect. Don’t let entities divide us. We do that well enough by ourselves without suiting their purposes.

The future should be about what we agree on, our common shared identities, where we can appreciate the nuances of difference. And we can build.

Don’t Let The World Squeeze The Creativity Out of You.

A large robot in a colorful forest setting.

I made the mistake of reading some posts on LinkedIn, and in one of the comments someone described the creative industry as ‘esoteric’ – a word that, in context, basically meant it didn’t matter. In the context of AI, content scraping, et al., that seems pretty insulting to me, and I’m not one who wears creativity on my sleeve.

It’s dismissive, though I grant you the use of the word ‘esoteric’ was somewhat creative. It also does the whole ecosystem of creativity an injustice. After all, what is a lack of creativity?

Conformist, stagnant, conventional, repetitive, rigid, derivative, apathetic, disengaged, mechanical, formulaic… the list goes on. Now, show me an industry that markets itself as any of those things.

Yet the original comment demonstrates a rigidity of thought that doesn’t likely have the capacity to fathom creativity at scale.


2024 Ruminations: Navigating Toward 2025

This post has been in the making over the course of a few days, much longer than usual, but I have been ruminating and getting interrupted by life and its distractions, which ended up helping me finish it. Writing is like that sometimes.

Everyone’s going to be writing lists and going over the highlights of 2024, making predictions about 2025, and otherwise fighting for readership in the “Everyone Else Is Doing It” spiral toward zero. Sure, when you’re younger, it seems bold and new – but trust me, it’s not that bold or that new.

It’s outright boring when you start abstracting it away. What matters is what matters to you, and if you’re going to spend your time talking about other people, or waxing nostalgic about a single year (!), I have bad news: AI can probably do it better than you. It probably should, too, since those are low-hanging fruit.

Lemme see what happened this year and write about it! And I can write about next year and it will likely be all wrong but if I get one thing right the whole planet will bow to my wisdom!

“What should I write?“, Boring People, 0-2025

How Did We Get Here?

An AI attached to the works of humans

I was fiddling around with DALL-E today and it generated the image on the left, and it hit me squarely. I wanted a visual representation of what scraping does so that people could understand…

After all, people don’t care about having their work scraped unless they perceive that they have value that is being scraped.

It’s not a great image. I’d wager a human artist could do much better. While I do not appreciate the work of others being scraped to generate stuff like this, I think it’s a good use of a generative AI.

What I really wanted to generate was a little billionaire sitting on the shoulder of the AI holding a little leash, while the AI is connected to everything. There’s an idea for a real visual artist – go nuts!

AIs aren’t bad. It’s really the corporations behind them, which practice Tweedism in a democracy. They get to spend more on election campaigns, and they have a lot more say over nominations than anyone who can spend less than they do. If you can control the nominations, you can control the vote.

If you can control the media, you can control the vote because you can manage the perceptions of the people who think their vote matters, and constantly polarizing things is good for business and managing perceptions.

I don’t know how we got here. I was just a latchkey kid in the ’70s in Ohio, watching black-and-white reruns of Superman with all that “Truth! Justice! And the American Way!” Now being a latchkey kid is decidedly more dangerous. Just going to school goes beyond dealing with the bullies (easy enough, just hit them in the nose); now you have to worry about people unable to punch noses (for whatever reason) coming back to school and putting holes in people with their grandma’s AR-15. OK, I don’t think that’s happened yet, but it seems sadly plausible.

My big escapades included being shot with a BB gun – metal BBs – and getting cracked over the head with a baseball bat. Getting shot with an actual gun is some next level stuff. I don’t get how we got there, either.

Yet – when we’re young, if our parents are doing their jobs even 25% right1, we feel safe. I felt safe, with my greatest fear being the words, “Wait til your father gets home”. Things weren’t perfect, but overall, I felt pretty safe. I’d be in the front yard in suburbia, or riding my bicycle, or… something other than staring at a flat screen: Those Superman episodes were stolen, but what my parents didn’t know when they weren’t around…

Because I felt safe, I bought into the “Truth! Justice! And the American Way!” naively. Little House On the Prairie preached values, and when my mother wasn’t around, I got to watch “Gunsmoke” and “Wild Wild West” with my father – where other values were instilled.

Yet when I look around, I don’t see those values in a place of authority. In my lifetime, I’m pretty sure I haven’t seen them. It’s like watching a band lip synch at a concert: Something’s off. You can tell.

And how did we get to billionaires making money off of work they haven’t bought, haven’t even looked at themselves, created by people they don’t care about, and used to regenerate things without attribution or even thought? Just lawsuits in a world where words have gained the flexibility of contortionists.

I wonder how it happened so I can know where we should be – and then we have to figure out a way to get there, those of us that are interested.

  1. Yes, I made that up, but I don’t think any parent gets things 100% right, and it’s probably for the best or there wouldn’t be a constant interest in improving parenting. Plus, we humans don’t come with instruction manuals. We are just tax breaks that grow up to pay taxes. ↩︎

Marketing As Data Dilution

We are stuck with technology when what we really want is just stuff that works.

It’s pretty hard to find solid information about artificial intelligence these days, which got me thinking about why. There are issues with energy usage and water consumption that you would think would be on people’s radar a bit more, even as the UN mentions them. On LinkedIn, I was deluged with updates from COP29 because I happen to know people on islands, and AOSIS was a big part of that – but they just kept talking about plastics and curbing the manufacture of plastics.

They should probably be looking a bit more at tires, but I didn’t hear much about that either.

This got me thinking about how, in the context of AI, you hear more product announcements than anything else, and those propagate more quickly than STDs. If AI were a drug, they’d probably have to list the side effects on the box – but it isn’t, and it doesn’t come in a box. Important information gets diluted by marketing, press releases, and people constantly jabbering about what’s being marketed and its press releases.


It Doesn’t Love You: Artificial Empathy

Last month I read a claim that we’re in an ongoing loneliness crisis. I dug around. There are a few articles about it over the past year, and the way I found out about it was someone pitching their AI bot ‘Replika’ as a cure. Will it help? I don’t know, but as with most things, I’m skeptical.

I’m generally alone myself, but that doesn’t make me lonely. On occasion I might have a bout of loneliness that lasts a bit. I think that’s natural. I like being alone. I like quiet. I like playing whatever songs I feel like playing without having to worry about anyone else not liking it. For whatever reasons, this is the way I am and it works for me – but it clearly does not work for a lot of people in the middle of a loneliness epidemic. So if you are lonely, realize I’m not an expert at being lonely. Only at being alone.

Thus, the hook is set. And yet, an AI ‘companion’ seems a bridge too far. For one, I understand that LLMs and other generative AI aren’t really empathetic. The Conversation has a great article on the language tricks associated with ChatGPT, tricks that apply to most generative AI used in bots.

Romance-bots? Can I write that? It feels icky.

I’ll let you in on another secret. When platitudes rain down and people start using the words they learned somewhere else rather than being their authentic selves, it’s pretty much the same thing as chatting with an AI. With the AI, though, there are no human clues. No dead eyes. No forced smiles. No manufactured laughs or giggles.

It’s just text – and that could be a hint, because if you only text people on a phone, you need to go outside and greet the aliens outside your door. Even strike up conversations and don’t worry about rejection – but certainly do worry about some pretty weird conversations.

AI. Loneliness. It seems like these days they are trying to treat a problem with another problem.

Go be human. I’ll tell you, it’s not as fun as some might say, but it definitely has its high points. Typing on a keyboard – something I know a great deal about – and reading texts is just not the same as hanging out with humans.

If you are truly lonely, depressed, or anything like that, I’d suggest counseling. It’s good to have someone not involved in your life be the stranger you talk to without worry. It doesn’t hurt, either.

Modern Oracles: How AI and Technology Shape Our Uncertain Future

A glowing human asking a digital oracle for guidance

We humans have always loved our oracles. For thousands of years, we’ve sought their guidance—whether it was a priestess in Delphi inhaling sacred fumes or an oak tree whispering wisdom to a priest. Oracles promised to make sense of an unpredictable world, offering glimpses of the future and answers to our deepest questions. Who made those promises? We did.

Fast forward to today, and not much has changed. The world is as unpredictable as ever, and our need for guidance remains. But now, instead of visiting temples or deciphering omens, we consult technological oracles. For years, it was Yahoo, until Google dethroned it to become the omnipotent arbiter of answers. Now, the oracles have evolved into AI-powered models, capable of crafting essays, diagnosing illnesses, and even deciding who gets hired – or fired.

Yet these new oracles aren’t divine. They’re products of corporate greed, built by companies controlled by billionaires who are nothing like us. These modern seers don’t whisper truths; they process data – our data – fed to them by algorithms designed to maximize profits, not wisdom.

And we listen. We sacrifice jobs to these oracles in the name of progress, worrying about income and bills while chasing the next product a marketing team convinced us we couldn’t live without. It’s a vicious cycle, and we humans are its willing participants.

Maybe the problem isn’t the oracles themselves.

Maybe the problem is us—our relentless search for certainty in an uncertain world. After all, the oracle is just a mirror, reflecting back what we already know but don’t want to admit:

We’ve always been the creators of our own chaos.

Read From The Future, Words Look Silly

An image of SEO in scrabble tiles, standing on edge, with a light point at the top left that causes shadows to be to the lower right. Random blurred tiles lay flat behind 'SEO'

I read the news every morning with my coffee – the stereotypical man who reads the newspaper, modernized to scanning things through his phone and computer. It’s a terrible way to start the day, but in the information age we have to know what’s going on or we’re stuck in the mud.

Certain words and phrases leap out. With wars all over the world – Sudan isn’t covered that well – I noted ‘massive attacks’ all over the place, and it led me to wonder how a massive attack now compares to a massive attack in the future or past. Then you generally see how many people were killed somewhere in the article, hidden maybe, and this morning I saw 4 as the number. I’ve seen ‘massive’ used for much larger numbers, but apparently now the threshold for a massive attack is 4 people dead.

This is not to say that every person who dies is not significant. Every life has value. Yet when we’re reading about loss of life, as in this example, SEO and sensationalism create false weights just so that something gets read.

In the digital age, where attention spans are short and competition for clicks is fierce, sensationalism and SEO-driven writing have taken center stage. While these techniques are undeniably effective at grabbing attention, they often come at the cost of depth, authenticity, and meaningful communication. The quest for virality and search engine dominance has diluted the essence of quality content.

And it will impact us through LLMs that are trained on scraped content.

The Rise of Sensationalism

Sensationalism thrives on exaggeration and emotional manipulation. Headlines like “You Won’t Believe What Happened Next” or “The Shocking Truth About [Topic]” are designed to spark curiosity, but they rarely deliver on their promises. Instead of providing valuable insights or fostering genuine understanding, sensational content often:

  • Overpromises and underdelivers.
  • Oversimplifies and even misrepresents complex issues.
  • Promotes clickbait over substance.
  • Diminishes reading comprehension.

While sensationalism might temporarily increase page views, it erodes trust over time. And time erodes some words, too. Readers become wary of exaggerated claims and disengage, ultimately harming the credibility of the writer or publication. Do you want to trade reputation for short-term revenue? That seems to be what a lot of online marketing focuses on.

Consider the word ‘new’ in an article from 1985. It has no real meaning in 2024 except telling people something was new when the article was written. Hopefully there’s a date on it – some sites don’t show the date something was published. If there is, the reader might realize that the ‘new’ Windows 1.0 is not that new.

Talking about the future, too, didn’t work out well for most writers, except those who imagined a future compelling enough for people to work toward.

The Impact of SEO on Content Depth

Search Engine Optimization (SEO) is a powerful tool for visibility, but its misuse has led to a formulaic approach to writing. Content creators are often pressured to prioritize keywords, meta descriptions, and search algorithms over the quality of their message. This focus results in:

  • Keyword Stuffing: Repeating the same phrases disrupts the natural flow of content, making it feel robotic and unnatural.
  • Shallow Information: Articles are designed to rank high in search results but rarely offer comprehensive insights. The goal becomes “ranking” rather than “resonating.”
  • Homogenized Content: SEO encourages following trends, which can lead to an echo chamber where originality and diverse perspectives are lost.

In 2024, we have been talking about echo chambers for years. Social networks get blamed, and yet the people who share content to those networks have already been sitting in algorithmic echo chambers shaped by the content they consume.

The Dilution of Meaning

When sensationalism and SEO-driven tactics dominate content creation, the essence of meaningful writing is lost. Here’s how:

  • Complex Issues Are Simplified: Topics that deserve nuance and careful exploration are reduced to soundbites or listicles, or worse yet, sticky infographics surrounded by content made to come out on top of an algorithm.
  • Authenticity Is Compromised: Writers often prioritize what sells over what matters, leading to a loss of personal voice and integrity.
  • Readers Are Left Unsatisfied: Audiences craving depth and understanding find themselves wading through superficial content, unable to uncover real value. I know this is the main issue I have and have had for decades, and increasingly so.

Finding Balance: Writing with Integrity

Does this mean we should abandon SEO and engaging headlines altogether? Not at all. Instead, writers and marketers can aim for a balance between visibility and value:

  1. Prioritize Authenticity: Write with your audience in mind, not just algorithms. Focus on what they truly need and want to learn. We all have to play the game, but we can choose how we play the game.
  2. Use SEO Strategically: Incorporate keywords naturally, ensuring they enhance rather than detract from the content.
  3. Deliver on Promises: If your headline promises something extraordinary, make sure your content lives up to it.
  4. Focus on Depth: Invest time in research, analysis, and thoughtful writing. Readers appreciate content that goes beyond the surface.

Conclusion

Sensationalism and SEO writing are not inherently bad, but when they overshadow the purpose of content, meaning is inevitably diluted. As creators, we have a responsibility to prioritize authenticity and depth over cheap tricks and fleeting trends. In a world flooded with shallow content, meaningful writing stands out—and that’s what readers remember.

And if you need pain to reinforce this, consider what happens when algorithms change and your content is suddenly not as popular. It has happened before, and it will happen again. What’s worse, that content might be scraped by someone training an LLM, so that it will spit out that gobbledygook thinking it qualifies as ‘good writing’.

Good content, at least in my opinion, should last. That’s why Gutenberg.org is filled with classics that people want to read.

By committing to substance over sensationalism, we can create content that not only captures attention but also earns respect and fosters trust. And in the long run, isn’t that what truly matters?

Further reading:

You can revel in how the SEO works in those articles, or doesn’t.

Scraping A Living Out Of the Age of Scrapers

One of the reasons I have not been writing as much for the past few months was analysis paralysis. For years in corporate technology settings, I promised myself that I would make good on getting to writing at some point. I made a decision in the early 1990s to pay the bills and help support my parents rather than be the broke writer who had to compromise himself to earn a living. The world changed.

The plans I was in the process of making concrete were hit with the phosphoric acid of LLM training and competition even as I was laying the cornerstone.

My mother was a writer. She self-published from the 1970s through the 1990s by getting her poetry printed and, as far as I know, she never broke even. She kept writing anyway, and I think she was pretty good despite some of the opinions she expressed – she expressed them well.

There was one poem she wrote about how poets were esteemed and given prominence in Somalia – I can’t seem to find it, as she sadly didn’t publish it online – but the gist of it was that there are, or were, parts of the world where poetry was important. By extension, writing was important, and writing was respected.

Writers were seen as noble artisans of the written word, earning their keep through the sweat of ink that scrawled out of their hands and, later, keyboards. In today’s digital Wild West, writing for money online feels a bit like leaving cookies on the counter of a house full of raccoons. You’re crafting something delightful, but someone, somewhere, is plotting to grab it and run—no credit given, no crumbs left behind. Even online publishing with Amazon.com is fraught with such things, and the only way to keep from the dilution of your work is to dilute it yourself.

We find ourselves aliens in a world we created. Inadvertently, I helped build this world, as did you, either by putting pieces of code together to feign intelligence (really, just recording our own for replays) or by demanding the fresh content that sparked imagination.

Welcome to the age of content scraping, where your genius headlines and painstakingly researched prose are more at risk than a picnic basket in Jellystone Park.

The Scraper Apocalypse: Who Stole My Blog?

As mentioned previously, I moved off of WordPress.com mainly because of the business practices of Automattic – particularly the deal they made to use the content on WordPress.com as training data for LLMs, with the default set so that users agreed to it. Why on Earth would anyone agree to that?

Scraping is nothing new. Companies – and bots with names like “Googlebot” (friendly) or “AggressiveRandoBot42” (not so much) – are prowling the internet, vacuuming up your hard-earned words faster than you can type them to train their large language models, or AIs. Many of them aren’t even considered shady, and it’s not just shady websites in the far corners of the web doing it. You don’t know who is doing it.

What’s left for you? Crumbs, if you’re lucky. So, how do you stay ahead of the scrapers while still getting paid?

Step 1: Write With a Purpose

Personally, I’m not too much about monetization, and yet I drink coffee that costs money. It’s a reality for all of us, a system we’re born into where I can’t pay my bills just by sending people words.

Let’s start with the golden rule of online writing: never write for free unless it’s a passion project—or revenge poetry about scrapers. Every blog post, article, or eBook should have a direct or indirect income stream attached.

  • Direct Revenue: Ads, sponsored posts, or affiliate marketing links.
  • Indirect Revenue: Use your content to build an email list or funnel readers toward a product or service you offer.
  • Be yourself: technologies are increasingly good at mimicking, but they can’t do what you do.

Scrapers might steal your content, but they can’t siphon your strategy directly. They can, however, adapt quickly based on the information they get, so you have to stay on your toes.

Step 2: Build the Fortress: Protecting Your Content

If you’ve ever tried to protect your lunch from a determined coworker, you’ll understand this analogy: scrapers don’t care about rules. But you can make their lives harder.

  • Add Internal Links: Keep scrapers busy by linking to other parts of your site. If they scrape one post, they get a tangled web that leads readers right back to you.
  • Use Watermarks in Imagery: For visual-heavy posts, watermark your images with your logo or website URL. It’s digital branding in action (a minimal sketch of doing this in Python follows this list).
  • Insert Easter Eggs: Include subtle shout-outs to your own name or brand. Scrapers might miss these, but real readers won’t. You do know you’re on RealityFragments.com, right?
  • Consider Subscriptions: Some websites allow you to close your content to subscribers, but you need consistent readers to pull that off.
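
Since the watermarking bullet above is the step people most often skip, here is a minimal sketch of doing it in Python with the Pillow library. The file names are placeholders, the watermark text just reuses this site’s name, and you would adjust the font, size, and placement to taste; it assumes Pillow is installed (pip install Pillow).

```python
# A minimal sketch of stamping a watermark on an image before posting it.
# Assumes Pillow is installed; the file names below are placeholders, and the
# watermark text simply reuses this site's name.
from PIL import Image, ImageDraw, ImageFont


def watermark(input_path: str, output_path: str, text: str = "RealityFragments.com") -> None:
    """Draw semi-transparent text in the lower-right corner of an image."""
    base = Image.open(input_path).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()  # swap in ImageFont.truetype() for a larger mark

    # Measure the text so it sits a few pixels in from the corner.
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
    position = (base.width - (right - left) - 10, base.height - (bottom - top) - 10)

    draw.text(position, text, font=font, fill=(255, 255, 255, 160))  # ~60% opaque white
    Image.alpha_composite(base, overlay).convert("RGB").save(output_path)


watermark("robot-in-forest.png", "robot-in-forest-marked.png")
```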

Step 3: Turn the Tables—Use Scrapers to Your Advantage

Here’s the plot twist: sometimes, scrapers inadvertently help you. When your content gets stolen but still links back to you, it can drive traffic to your site.

But how do you ensure that happens?

  • Embed Links Thoughtfully: Include links to high-value content (like an eBook sales page or an email sign-up form). If they scrape your post, their audience might still end up on your site.
  • Use Syndication Smartly: Syndicate your content to reputable platforms, as few as they are. These platforms might outrank the scrapers and help your original post shine. Also note that when you post to them, you should still expect your content to be scraped.
  • Use LLMs to check your own work: LLMs are trained by scraping. I like to make sure my own writing is fresh and original, so I use LLMs I installed locally, disconnected from the Internet (instructions on how to do that with Ollama here), to check just that. I’ve found it very helpful for making sure I’m original, since they’ve scraped everyone else’s content… (and probably mine). A minimal sketch of querying a local model this way follows this list.
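
For the locally installed LLM check above, here is a minimal sketch of what that can look like in Python. It assumes Ollama is running on its default local port with a model already pulled; the model name (“llama3”) and the prompt wording are my own placeholders, not anything from this post.

```python
# A minimal sketch, assuming Ollama is running locally on its default port
# (11434) and a model has already been pulled. The model name ("llama3") and
# the prompt wording are placeholders.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's local generate endpoint


def check_draft(draft: str, model: str = "llama3") -> str:
    """Ask the local model whether a draft reads like generic, scraped-sounding prose."""
    prompt = (
        "Does the following draft read like generic, commonly published writing, "
        "or does it have an original voice? Point out any cliches.\n\n" + draft
    )
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8")).get("response", "")


if __name__ == "__main__":
    print(check_draft("Scrapers might steal your words, but they can't steal your voice."))
```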

Step 4: Embrace the Impermanence

At the end of the day, the internet is a giant soup pot, and everyone’s stirring it. You can’t stop all the scrapers, but you can focus on making your content work harder for you.

  • Repurpose: Turn blogs into videos, podcasts, or infographics. It’s much harder for scrapers to steal a voiceover than it is to Ctrl+C your text. I stink at this and need to get better.
  • Engage Directly: Build relationships with your audience through comments, newsletters, and social media. Scrapers can’t steal community.
  • Focus on Creating: A creator creates, and the body of work is greater than the sum of its parts. I think of this as the bird that can land on a branch not because it trusts the strength of the branch but because it trusts its wings. Trust your wings.

Conclusion: Keep Writing Anyway

Writing for a living online in an era of content scraping is a lot like running a lemonade stand during a rainstorm. It’s messy. There are periods of disheartenment. Life is not easy. But when the sun comes out—or when the right reader finds your work—it’s worth it.

So, write boldly, monetize smartly, and remember: scrapers might steal your words, but they can’t steal your voice – though Sir David Attenborough might argue that point. They cannot take away your ability to be human and create.

Stay witty, stay scrappy, and may your words always pay their rent and bring someone value.