I’m sorry for the rant, but I think this is relevant to the whole discussion of getting your subs up so the algos recognize you. I’m AGAINST recommender algorithms that promote engagement as their primary success metric. I think it’s damaging to us as artists here, and in the broader sense it’s damaging to the culture as a whole, with mega platforms like Facebook and Twitter, and tech at large, shaping how people view the world through these recommender algorithms. I was just at a conference for developers of these algorithms, and I was absolutely stunned to see that not only is engagement the only success metric being analyzed, but that ‘accurately predicting engagement’ is becoming the way companies rate the success of their algorithms.
Let’s do a bit of a thought experiment: Pavlov’s dogs. If you’re not familiar with the experiment, it showed that a dog could be conditioned to salivate at the ringing of a bell by repeatedly pairing the bell with food. Eventually, the dog would reliably salivate at the bell even when no food was involved. (Pavlov won a Nobel Prize, for his underlying work on digestion, though the conditioning experiments are what he’s remembered for.)

Now imagine that Pavlov is a robot, an AI, a machine, with the goal of getting you to engage with content online. How would a machine accomplish this? It’ll look at your patterns of engagement: what ‘food’ do you like, what makes you perform the desired behavior? Then it will feed you more of this to train your responses, until it can reliably predict the behavior it has trained you to perform. What does that mean in practice? Sharing content that you like and agree with, obviously, for one. But that’s not the only way to generate engagement. It will also share content that outrages you, to provoke an emotional response and thus engagement. I think this is causing a lot of political polarization, and it’s entirely machine driven. Remember, as long as the machine can train you to the point that it can reliably predict your behavior, it has satisfied its goal condition of ‘engagement’.

What about non-polarizing content, things that make you think? You’re not as likely to state an opinion and ‘engage’ with something like that; it requires thought, it takes time. Educational content is like this; thinking deeply and critically is like this. It’s not going to rack up the same engagement metrics as clickbait. Obviously. Or clickbait wouldn’t be clickbait.
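Here’s a minimal sketch of that logic in Python (toy numbers, hypothetical model outputs; real rankers are vastly more complicated): if you score every item purely by the predicted probability of a reaction, the machine can’t tell agreement from outrage, and quiet, thoughtful content sinks.

```python
from dataclasses import dataclass

# Toy sketch of an engagement-maximizing ranker. Every value here is made up;
# the point is only the shape of the objective.

@dataclass
class Item:
    title: str
    p_agree: float    # predicted chance of a like/share out of agreement
    p_outrage: float  # predicted chance of an angry comment or quote-share

def engagement_score(item: Item) -> float:
    # The goal condition is 'a reaction happened' -- the ranker can't tell
    # (and doesn't care) whether the reaction was agreement or outrage.
    return item.p_agree + item.p_outrage

feed = [
    Item("Thing you already agree with", 0.30, 0.01),
    Item("Thing engineered to enrage you", 0.02, 0.40),
    Item("Long-form piece that makes you think", 0.05, 0.02),
]

for item in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(item):.2f}  {item.title}")
```

Run it and the outrage item tops the feed at 0.42, the agreeable one comes second, and the thoughtful piece lands dead last.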
The recommender algorithms actively de-rank content that is thought-provoking rather than response-provoking! What are the implications for society when you have a machine whose goal condition is fulfilled by training the humans who use it into a conditioned response? The implications are absolutely staggering.
As a different metric, I propose time spent on a page as a better goal than engagement. I don’t know, though; would this have the same problems? Would the AI just keep trying to make you stay online all day? (They already do that.)
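A sketch of the swap (same caveats as before: toy numbers, all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Page:
    title: str
    expected_minutes: float  # hypothetical model output: predicted time-on-page

def dwell_score(page: Page) -> float:
    # New goal condition: keep you on the page as long as possible.
    return page.expected_minutes

feed = [
    Page("Clickbait you close after 20 seconds", 0.3),
    Page("10-minute essay that makes you think", 10.0),
    Page("Autoplaying 40-minute outrage compilation", 40.0),
]

for page in sorted(feed, key=dwell_score, reverse=True):
    print(f"{dwell_score(page):5.1f} min  {page.title}")
```

The thoughtful essay finally beats the clickbait, which is the improvement I’m after, but the machine’s new favorite is whatever holds you the longest. Different metric, same conditioning.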
Fight the system, go screw with a recommender algorithm today by liking a bunch of content that you don’t actually like. Add some noise. You might actually find something you didn’t expect. Stop the spread of the singularity.
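P.S. If you’re curious why the noise trick works, here’s a toy illustration (a made-up profile, nothing like a real system): every random like drags your inferred profile toward uniform, and a flat profile is hard to predict.

```python
import random

# Toy sketch: a preference profile as like-counts per topic (all hypothetical).
# Genuine likes concentrate the profile around what you really enjoy;
# random likes flatten it, and a flat profile makes poor predictions.

topics = ["politics", "art", "sports", "cooking", "math"]

def profile(likes):
    counts = {t: likes.count(t) for t in topics}
    total = sum(counts.values())
    return {t: round(counts[t] / total, 2) for t in topics}

genuine = ["art"] * 40 + ["politics"] * 10
noisy = genuine + [random.choice(topics) for _ in range(50)]

print("before noise:", profile(genuine))
print("after noise: ", profile(noisy))
```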