Kalev Leetaru <https://www.forbes.com/sites/kalevleetaru/>, Contributor
Forbes, July 7, 2019


  The Rise Of 'Fake News' Coincides With Society Outsourcing Its
  Thinking To Algorithms





The rise of “fake news,” misinformation, disinformation, digital falsehoods and foreign influence coincides with society’s increasing outsourcing of its thinking to algorithms. We no longer actively explore the informational landscape; we passively await algorithms to tell us what to read, what to watch, what to share and what to buy. Whereas search algorithms once decided for us which Websites were most important, today we play no role at all in what we consume, merely sitting passively in our chairs as infinitely scrolling streams of content are force-fed to us by algorithms. Even scholars and scientists no longer spend their days immersed in the literature of their fields; they let Google Scholar tell them which papers to cite. As smart speakers eliminate the last shreds of informational context from our lives, will information literacy <https://www.forbes.com/sites/kalevleetaru/2019/07/07/a-reminder-that-fake-news-is-an-information-literacy-problem-not-a-technology-problem/> fade away completely?

The outsourcing of thought to machines dates back to the origins of the 
computing revolution, as the science fiction canon of the era touted a 
brave new world in which machines would do society’s thinking for it. 
The outcomes of this intellectual outsourcing, as imagined across decades of science fiction, have ranged from enlightenment, with humans freed to focus on unfettered creativity, to slow or sudden annihilation, as the intelligent machines come to see their creators as competition or as useless nuisances.

The digital world was supposed to put the informational riches of the 
world at our fingertips. Every piece of information published in the 
history of humanity was to be available with a mouse click at precisely 
the moment we needed it. The early Web was presented as a globalized 
extension of the then-user-centric computing world. Users would turn to 
their computers with an informational need and the machine would act as 
a tool to assist them in locating the documents they needed, much as 
machines had done since the dawn of the keyword search <https://www.forbes.com/sites/kalevleetaru/2019/01/09/why-are-we-still-using-keyword-searches-half-a-century-later/>.

The information seeking process of the early Web mirrored that of the 
traditional offline world, with humans driving the process and the 
machine merely offering a keyword search alternative to subject tags and 
a larger available collection of content.

Over time, however, machines inevitably took over the role of gatekeeper, their scoring algorithms deciding what was most “relevant” and “reputable.” From sorting results by mere keyword density to blending myriad and increasingly personalized signals, machines began to decide what we should and should not see.
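
To make that shift concrete, here is a minimal, purely illustrative Python sketch contrasting early keyword-density scoring with a weighted multi-signal score. Every signal name and weight below is hypothetical; this is not any search engine's actual algorithm.

    # Illustrative only: naive keyword-density ranking versus a weighted
    # multi-signal score. Signals and weights are invented for this sketch.

    def keyword_density(document: str, query: str) -> float:
        """Fraction of the document's words that match the query term."""
        words = document.lower().split()
        return words.count(query.lower()) / len(words) if words else 0.0

    def multi_signal_score(density: float, link_authority: float,
                           personalization: float) -> float:
        """Blend several signals; real systems weigh hundreds of them."""
        return 0.2 * density + 0.5 * link_authority + 0.3 * personalization

    docs = {
        "page_a": "cats cats cats and more cats",
        "page_b": "a well researched article about cats and their behavior",
    }

    # Early-Web ranking: pure keyword density puts page_a first.
    print(sorted(docs, key=lambda d: keyword_density(docs[d], "cats"),
                 reverse=True))  # ['page_a', 'page_b']

    # Multi-signal ranking can invert that order once other signals dominate.
    authority = {"page_a": 0.1, "page_b": 0.9}  # hypothetical link scores
    personal = {"page_a": 0.2, "page_b": 0.6}   # hypothetical user-fit scores
    print(sorted(docs,
                 key=lambda d: multi_signal_score(
                     keyword_density(docs[d], "cats"),
                     authority[d], personal[d]),
                 reverse=True))  # ['page_b', 'page_a']

Under these invented weights, the density-heavy page wins the first ranking and loses the second, which is the transition the paragraph above describes: the user sees a different "best" result without any visibility into why.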

Web users willingly placed their informational needs in the hands of 
these all-powerful algorithms, only too happy to let a machine make the 
hard judgment calls about relevance and reputation. Rather than spend 
hours scrolling through search results, comparing and contrasting each 
entry and researching its provenance and context, users simply clicked 
on the first search result and moved on, trusting that an opaque black 
box algorithm somewhere had magically selected the “best” result out of 
everything on the entire Web and done so in the blink of an eye.

Even academic literature reviews, once a revered component of the research process, have rapidly devolved into quick keyword searches of Google Scholar, leaving it to Google’s algorithms to arbitrate, however inadvertently, what constitutes the most “significant” findings of an entire field’s literature.

The rise of smart speakers has deepened our dependency on algorithms, because we can no longer see beyond the first search result. When we ask a digital assistant a question, we hear only a single answer plucked from the open Web, never knowing that, had we run the same search ourselves in a Web browser, we might have found every other result contradicting the one site from which our answer was drawn.

The end result is that we have become ever more disconnected from the sources of the information we use and from the process by which those sources are scored, sorted and filtered before being presented to us.

As we increasingly outsource our thinking to algorithms, we are losing 
our information literacy and the skills necessary to think critically 
about the information we see.

Rather than take a skeptical view of what we find online and seek out 
alternative viewpoints, we blindly trust that the algorithms powering 
the modern Web are giving us the best results.

Yet the public trusts algorithms optimized to hold our attention and serve the most addictive content to deliver unbiased results representing the “best” available information.

Increasingly we no longer even actively search for information. We 
merely sign into social platforms and allow ourselves to be passively 
force-fed an endless stream of what profit-optimized algorithms believe 
will addict us the most.

These social media algorithms are optimized for virality and addictiveness rather than truthfulness and evidentiary reporting. The more emotional, fact-free and false a post is, the more likely these algorithms are to push it viral.
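
That optimization target can be made concrete with a minimal, hypothetical Python sketch of a feed ranker whose objective rewards engagement but includes no truthfulness signal. Every field name and weight here is invented for illustration; no real platform's ranking code is represented.

    # Illustrative sketch of an engagement-optimized feed ranker.
    # All fields and weights are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        emotional_intensity: float  # 0..1, e.g. from a sentiment model
        predicted_shares: float     # expected reshares per impression
        fact_checked: bool          # verified or not; unused by the ranker

    def engagement_score(post: Post) -> float:
        """Rank purely on predicted engagement; accuracy is not a signal."""
        return 0.6 * post.predicted_shares + 0.4 * post.emotional_intensity

    feed = [
        Post("Measured report with sources", 0.2, 0.1, True),
        Post("Outrageous unverified claim!", 0.9, 0.8, False),
    ]

    # The emotionally charged, unverified post ranks first because the
    # objective rewards attention, not accuracy.
    for post in sorted(feed, key=engagement_score, reverse=True):
        print(round(engagement_score(post), 2), post.text)

Note that the fact_checked field never enters the score: that omission, not any deliberate preference for falsehood, is what the paragraph above means by optimizing for virality rather than truthfulness.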

Putting this all together, as we have outsourced our thinking to 
algorithms, we have set aside our search for enlightenment and placed 
ourselves in the hands of algorithms optimizing for entertainment.

The information we see is no longer chosen for its relevance or usefulness to us, but for how long it can hold our attention in order to show us the most ads.

In other words, we are not outsourcing our thinking to algorithms 
designed to surface the “best” information. We are outsourcing our 
thinking to algorithms designed to moderate, monetize, manipulate and 
mine us.

These algorithms make decisions for us based on what makes us the most economically valuable to their creators, rather than on what is best for us. Digital falsehoods may be bad for society, but they are economic gold for social media companies, and their algorithms prioritize them accordingly.

In the end, in many ways our modern epidemic of digital falsehoods 
exists because we have become so reliant on technology that we’ve 
stopped teaching our citizenry how to think <https://www.forbes.com/sites/kalevleetaru/2019/07/07/a-reminder-that-fake-news-is-an-information-literacy-problem-not-a-technology-problem/>.


