On Algorithmic Media
As my algorithmic media tools (FB, Twitter, etc.) learn more about me, they build a model of my consciousness that is more detailed than my own self-awareness. As my aesthetic preferences and understanding of the world evolve, so too do these models, tracking the changes so closely that it’s hard to disentangle correlation from causation. Is this process shaped by my consciousness, or vice versa? My mind and my media now form a composite organism: years ahead of the singularity, I have already become the union of human and machine.
Each of us is responsible for the interplay between our conscious thoughts and our subconscious, and we are similarly responsible for the impact of our algorithmic media systems on our overall cognition. These prescient extensions of self bring us favorable ideas more efficiently, but they also reduce the conceptual territory we must traverse, and the amount of conscious processing we need to do along the way. If we’re not aware of this distillation and actively subverting its narrowing influence, our confirmation bias will quietly guide us into ideological rabbit-holes.
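As a toy illustration of that narrowing, here is a minimal, purely hypothetical sketch in Python: an engagement-driven feed that serves more of whatever gets clicked. The topics, click probabilities, and update rule are all invented for the sake of the example; no real platform works this simply, but the feedback loop is the point.

```python
"""Toy sketch (not any real platform's algorithm): a naive engagement-driven
feed that reweights topics by past clicks, narrowing what it serves over time."""
import math
import random

random.seed(42)

TOPICS = ["politics", "science", "sports", "art", "tech"]
# Hypothetical user: slightly more likely to click "politics" items.
CLICK_PROB = {"politics": 0.6, "science": 0.3, "sports": 0.2, "art": 0.2, "tech": 0.3}

weights = {t: 1.0 for t in TOPICS}  # the feed starts out topic-neutral


def entropy(w):
    """Shannon entropy of the serving distribution, in bits (max ~2.32 for 5 topics)."""
    total = sum(w.values())
    probs = [v / total for v in w.values()]
    return -sum(p * math.log2(p) for p in probs if p > 0)


for day in range(1, 31):
    for _ in range(50):  # 50 items served per day
        topic = random.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]
        if random.random() < CLICK_PROB[topic]:
            weights[topic] += 1.0  # engagement feeds back into the serving weights
    if day % 10 == 0:
        share = weights["politics"] / sum(weights.values())
        print(f"day {day:2d}: politics share {share:.0%}, feed entropy {entropy(weights):.2f} bits")
```

The mechanism to notice is that the serving weights are updated by the very clicks they helped produce, so the output distribution drifts toward whatever the user already engages with and the feed’s entropy falls: a mild preference compounds into a lopsided feed.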
It’s also worth considering that these algorithmic systems are generally designed by for-profit companies. They are at best morally neutral, optimizing instead for growth within a modern capitalist economy. Either by intentional design or by Darwinian “survival of the most profitable”, these algorithms will be tuned to cultivate a mass consciousness favorable to the economic system that powers them.
Use your algorithmic media tools to more coherently process this ridiculously complicated world, but don’t mistake them for an unbiased cross-section of reality.