Zucked


Having spent his entire career working and investing in the tech industry, the author, Roger McNamee, is truly a tech insider: an early investor in Facebook and a mentor to its CEO. Even though this book focuses on Facebook, its arguments also apply to Google, YouTube, Twitter, and Amazon. The book is the culmination of years of work raising awareness of big tech’s harmful effects on society, and of a serious, ongoing collaboration between technologists, non-profit think tanks, and academics.

What really is Facebook?

Attention merchants

As consumers, we crave convenience. We crave connection. We crave free. Facebook offers all three in a persuasive package that offers enough surprise and delight to cause us to visit daily, if not more often. It has taken on a central role in our lives.

Every action we take on Facebook, be it a post, a like, a comment, or even watching a clip, gives Facebook information that it uses to manipulate our attention and that of our friends. Facebook combines concepts from psychology and persuasion (the need for approval, the desire for reciprocity, the fear of missing out) with techniques from slot machines (unpredictable, variable rewards that stimulate addictive behaviour). Like buttons and notifications trigger social validation/approval loops. Autoplay and endless feeds eliminate cues to stop. Third-party applications (games, surveys, quizzes, and the like) increase time spent on Facebook. The more time a user spends on Facebook, the more ads he or she will see, and the more money Facebook makes. From Facebook’s perspective, anything that increases usage is good.
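To make the slot-machine comparison concrete, here is a minimal sketch in Python of a variable-ratio reward schedule, the intermittent-reinforcement pattern the book alludes to. The probability, messages, and structure are invented for illustration; this is not Facebook’s actual code.

```python
import random

def check_feed(reward_probability: float = 0.3) -> str:
    """Each app open is a 'pull': it pays off only sometimes, at random."""
    if random.random() < reward_probability:
        # Variable reward: the unpredictable payoff is what hooks the user.
        return "3 new likes on your post!"
    # No payoff this time, which only encourages checking again soon.
    return "No new notifications."

# Simulate a user compulsively checking the app ten times.
for pull in range(10):
    print(f"check #{pull + 1}: {check_feed()}")
```

Because the reward arrives on an unpredictable schedule rather than every time, the checking habit is far more resistant to extinction, which is exactly why slot machines use this schedule.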

Emotion

Facebook’s design trains users to unlock their emotions and react without critical thought. On a small scale this would not normally be a problem, but at Facebook’s scale (2.4 billion monthly active users) it enables emotional contagion, where emotions overwhelm reason; the resulting problems are highlighted below.

Founder attitude toward users

The following are messages between Facebook’s founder and a friend from the early days of Facebook at Harvard.

Zuck: Yeah so if you ever need info about anyone at Harvard

Zuck: Just ask.

Zuck: I have over 4,000 emails, pictures, addresses, SNS

[Redacted Friend’s Name]: What? How’d you manage that one?

Zuck: People just submitted it.

Zuck: I don’t know why.

Zuck: They “trust me”

Zuck: Dumb fucks.

What are the threats?

Public health

Several paediatric research articles have documented permanent cognitive decline, shortened attention spans, and even early-onset dementia in young children with long-term exposure to social media.

Cyberbullying has become easy and rampant over texting and social media because, when technology mediates human relationships, the social cues and feedback loops that would normally cause a bully to be shunned or met with disgust by their peers are absent.

Facebook news feeds are usually packed with beautiful lives, and we sometimes fail to realise that this is highly selected content: well-manicured images projected by all of us in search of social approval and validation. It’s highly unlikely that anyone will ever post their shitty days. All this selective exposure may lead to jealousy and depression, and may contribute to the increased suicide rate among teens.

Experts

Google’s ability to deliver results in milliseconds creates an illusion of authority that users misinterpret. They confuse speed and comprehensiveness with accuracy, not realizing that Google results may be influenced by advertiser needs, as well as by search engine optimization and manipulation. Google is NOT the neutral arbiter of information it portrays itself to be. Users mistakenly believe that their ability to get an answer to any question makes them experts, no longer dependent on people who actually know what they are talking about.

Democracy

Filter bubbles are defined as a state of intellectual isolation that can result from personalized searches when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click-behavior and search history. The result is that everyone sees a different version of the internet individually tailored to create the illusion that everyone else agrees with them. Continuous reinforcement of existing beliefs through filter bubbles tends to entrench those beliefs more deeply, while also making them more extreme and resistant to contrary facts.
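As a rough illustration of that selective-guessing loop, here is a hypothetical Python sketch of a personalizer that scores candidate posts by how well they match a user’s past clicks. The data and scoring are invented, and real feed-ranking systems are vastly more complex, but the feedback loop is the same: agreement with past behaviour is rewarded, so the bubble tightens.

```python
from collections import Counter

def rank_feed(candidate_posts, click_history):
    """Rank posts by how many of their topics the user has clicked before."""
    topic_counts = Counter(t for post in click_history for t in post["topics"])
    return sorted(
        candidate_posts,
        key=lambda post: sum(topic_counts[t] for t in post["topics"]),
        reverse=True,  # most belief-confirming content first
    )

history = [{"topics": ["politics_left"]},
           {"topics": ["politics_left", "climate"]}]
candidates = [
    {"id": 1, "topics": ["politics_left"]},   # confirms past clicks: ranked up
    {"id": 2, "topics": ["politics_right"]},  # contrary view: ranked down
]
print([p["id"] for p in rank_feed(candidates, history)])  # -> [1, 2]
```

Every click on a top-ranked item feeds back into `click_history`, so each iteration narrows what the user sees a little further.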

Reinforcement of beliefs every day for a year or two will have an effect. Not on every user in every case, but on enough users in enough situations to be both effective advertising and harmful to democracy. It undermines basic democratic processes like deliberation and compromise.

Democracy depends on shared facts and values. It depends on communication and deliberation. It depends on having a free press and other countervailing forces to hold the powerful accountable. Facebook, Google, and Twitter have undercut the free press from two directions: they have starved journalism of advertising revenue and then overwhelmed it with disinformation. On internet platforms, information and disinformation look the same; the only difference is that disinformation generates more revenue because it attracts more user attention, so it gets much better treatment: it is amplified and widely disseminated.

Extremists

The competition for attention, which is the platform business model, rewards the worst social behaviour. Extreme views attract more attention, so platforms recommend them.

Social media has given an outlet to personal views that were previously kept in check by social pressure; white nationalism is an example. Before the platforms arrived, extreme views were often moderated because it was hard for adherents to find one another, and expressing extreme views in the real world carried social stigma, which also kept them in check. By enabling anonymity and/or private Groups, the platforms removed the stigma, allowing like-minded people, including extremists, to find one another, communicate, and eventually lose the fear of social stigma.

Whether by design or by accident, platforms empower extreme views in a variety of ways. The ease with which like-minded extremists can find one another creates an illusion of legitimacy. Protected from real-world stigma, communication among extreme voices on internet platforms generally evolves toward more dangerous language and eventually spills out into the real world.

Violence

A particularly well-known example occurred in Houston, Texas, where Russian agents organized separate events for pro- and anti-Muslim Facebook Groups at the same mosque at the same time. The goal was to start a confrontation. All of this is possible because users trust what they find on social media. They trust it because it appears to originate from friends and, thanks to filter bubbles, conforms to each user’s preexisting beliefs.

A United Nations report accused Facebook of enabling religious persecution and ethnic cleansing of the Rohingya minority in Myanmar. Reuters released a special report that uncovered more than “1,000 examples of posts, comments, and pornographic images attacking the Rohingya and other Muslims”. As in Sri Lanka, hate speech on Facebook triggered physical violence against innocent victims. According to Médecins Sans Frontières, the death toll from August through December 2017 was at least 9,000.

Innocent people have died in Pittsburgh, El Paso, Christchurch, Myanmar, and the Philippines because of people negatively influenced by content on internet platforms. The platforms repeatedly say they can’t do anything about what people post and therefore are not responsible for the consequences. But they do nothing about posts that incite violence not because they lack the capacity: through sophisticated AI they monitor, track, and collect data on everything we do on their platforms. They have the capacity to act; they choose not to because inaction is good for business. Emotionally charged content attracts attention, which means more interactions, more engagement, and, most importantly, more money.

To what degree have the filter bubbles and advertising tools of internet platforms contributed to the success of climate change denial, white supremacy, gun violence, and the campaign to avoid vaccinations?

What can we do about it?

Recommendations

Our only choice was to persuade Facebook to harm its own business for the good of the country and the world. Yup, and pigs might fly.

For government

Extend to data-intensive companies the fiduciary duty that applies to professionals like doctors and lawyers. Doctors and lawyers cannot sell clients’ private data to a third party or use it in any way to benefit their bottom line; neither should internet platforms.

An FDA for tech, to ensure that large-scale projects serve the public interest. We set a high bar for manufacturers of new drugs: they are required to demonstrate safety and efficacy, because widespread use of an unsafe or ineffective drug could have devastating effects. People’s lives are on the line. The technology deliberately deployed by internet platforms has demonstrably harmed individuals and society, with far-reaching consequences. An FDA model might be able to curb further harm.

The experience with the chemical industry taught us that it is not sufficient to force cleanups; it is necessary to change incentives to prevent toxic spills in the first place. That means the business model of internet platforms needs to be modified through regulation.

For individuals

The author recommends that all of us become aware of what internet platforms really are and of their harmful effects, and that we engage local politicians to voice our concerns. The platforms will not change their business practices without pressure from the authorities. Why would they? They’ve made gazillions of dollars from those practices. Beyond that, we should do the following.

Treat content on internet platforms as a novel, a simulation, or a game. It’s NOT real.

By prioritizing convenience, we increase our vulnerability. Friction in our lives allows for adaptation and self-determination. We can defend ourselves by changing the way we use technology.

Don’t allow Facebook to press our emotional buttons. Do not post anything political or react to emotionally charged posts. Ask ourselves, “What could go wrong?” If there’s anything at all, stop right there.

Do not use Facebook Connect or Google Sign-In to log in to other sites, and do not press Like buttons found around the web. That’s how they stalk and spy on us.

Use the DuckDuckGo browser and search engine, which do not collect search data.

Use Signal for texting because messages are encrypted and not tracked.

Do not use Gmail, Google Docs, Waze, Google Maps, or any other Google products and services. They may be convenient, but they are definitely NOT free. We pay dearly with our private data, which allows Google to manipulate our emotions and behaviour, benefiting it many times over.

Employ several tracking blockers, including Ghostery and Disconnect, to make it much harder for internet platforms and others to collect data about us.

Turn off notifications for practically everything so we can pay attention to what is genuinely worth our attention.

From time to time, put the iPhone in monochrome mode to reduce the visual intensity of the device and, therefore, the dopamine hit.

We need to view internet platforms for what they are: profit-seeking businesses that are happy to sacrifice our well-being to improve their bottom line. We need to view products like Facebook, Instagram, Google, YouTube, Twitter, Reddit, 4chan, and 8chan as forms of entertainment loosely based on real life.

Chankhrit Sathorn