The so-called fake news and strategic disinformation gave rise to the “disinformation order,” in which radical right and radical left movements became the common response to the use of corporate social media for information distribution. Since corporate social media and search engines (largely dominated by Google) significantly shape the contemporary dissemination of information regardless of its production source, the neutral nature of such technologies may also be questioned. First, corporate social media platforms (such as Twitter and Facebook) are commercial companies whose model of capital accumulation depends on the accumulation and use of personal data. Such personal data are stored, profiled and used to classify, categorize and sort users in order to offer them advertising, or search links (in the case of Google), targeted to their interests. The decision about what to offer a particular user is made automatically by the platform’s algorithm, which also automatically calculates the price of advertising. Thus, at the center of this transaction are not the interests of the user but the capital accumulation interests of the platform. Platforms involve users in content making and sharing to encourage them to produce more data, including data on social relations (who is liking whom), because such data enlarges the number of potential targets for advertising. From this point of view, platforms encourage virality.

From this perspective, we may speak about algorithmic bias, that is, the bias created by the sorting and visibility-managing algorithms of platforms. As Turner Lee et al. write: “Bias in algorithms can emanate from unrepresentative or incomplete training data or the reliance on flawed information that reflects historical inequalities. If left unchecked, biased algorithms can lead to decisions which can have a collective, disparate impact on certain groups of people even without the programmer’s intention to discriminate” (Turner Lee et al., 2019). Big data solutionism (the idea that big data can help in any situation), according to Vincent Mosco, simply oversimplifies human nature: “It is uncertain which is worse: that big data